Columns: title (string, 8–300 characters); abstract (string, 0–10k characters)
60GHz high-gain low-noise amplifiers with a common-gate inductive feedback in 65nm CMOS
In this paper, a novel design technique of common-gate inductive feedback is presented for millimeter-wave low-noise amplifiers (LNAs). For this technique, by adopting a gate inductor at the common-gate transistor of the cascode stage, the gain of the LNA can be enhanced even under a wideband operation. Using a 65nm CMOS process, transmission-line-based and spiral-inductor-based LNAs are fabricated for demonstration. With a dc power consumption of 33.6 mW from a 1.2-V supply voltage, the transmission-line-based LNA exhibits a gain of 20.6 dB and a noise figure of 5.4 dB at 60 GHz while the 3dB bandwidth is 14.1 GHz. As for the spiral-inductor-based LNA, consuming a dc power of 28.8 mW from a 1.2-V supply voltage, the circuit shows a gain of 18.0 dB and a noise figure of 4.5 dB at 60 GHz while the 3dB bandwidth is 12.2 GHz.
Effects of 4-week self cross body stretching with scapular stabilization on shoulder motions and horizontal adductor strength in subjects with limited shoulder horizontal adduction.
BACKGROUND Posterior shoulder tightness (PST) is related to shoulder conditions such as shoulder impingement and limited shoulder horizontal adduction (SHA). The purpose of this study was to compare the effects of self cross body stretching (CBS) with and without scapular stabilization (SS) on SHA and shoulder internal rotation (SIR) range of motion (ROM) and shoulder horizontal adductor strength (SHAS) in subjects with limited SHA. METHODS Twenty-six subjects (14 males, 12 females) with limited SHA participated in this study. Subjects were randomly assigned to the SS group or the without-stabilization (WS) group. The SS group performed self CBS with SS by applying a belt just under the subject's axilla. The subjects were asked to perform self CBS 4 times a week for 4 weeks. SHA and SIR ROM were measured with the Clinometer smartphone application, and SHAS with a hand-held dynamometer (HHD), before and after the 4-week self CBS. RESULTS A 2 × 2 mixed analysis of variance (ANOVA) was used to test for significance; when an interaction effect was found, t-tests were used to examine the simple effects. There was a significant interaction in SHA ROM and SHAS. The post-test value of SHA ROM was significantly greater in the SS group than in the WS group (p < 0.0125). In SHAS, there was no significant difference between groups (p > 0.0125). CONCLUSION SS during self CBS could enhance improvements in SHA ROM, SIR ROM, and SHAS in individuals with limited SHA.
Infrared Frequency and Dissociation Constant of the Carboxylate Group
THE hexuronic acids of polyuronides such as the alginates and chondroitin sulphates vary in their acid dissociation constants (pKa) and in their abilities to react with periodate1. Both phenomena could be explained by the formation of a hydrogen bond between the carboxylate group and a hydroxyl group at C2 or C3 of the uronic acid1. Hanrahan2 inferred that the increase in frequency (ν) of the antisymmetric stretching vibration of the ionized carboxylate group in the mono-ions of dicarboxylic acids is due to the formation of CO2− … HO hydrogen bonds. A similar increase in ν was found by Nakamoto3 when the carboxylate group underwent monodentate coordination by metal ions.
Call center simulation in Bell Canada
Call centers have relied historically on Erlang-C based estimation formulas to help determine the number of agent positions and queue parameters. These estimators have worked fairly well in traditional call centers; however, recent trends such as skill-based routing, electronic channels and interactive call handling demand more sophisticated techniques (see Cleveland and Mayben 1997). Discrete event simulation provides the necessary techniques to gain insight into these new trends, and helps to shape their current and future designs. This paper relates the experiences of designing call center simulations in Bell Canada. We describe the experience of constructing, executing and analysing a large call center model. Problems that we faced are identified and potential solutions are given. The examples are taken from large and small call centers alike in an attempt to bring forth some common problems that a simulation effort will face.
A unified formula for light-adapted pupil size.
The size of the pupil has a large effect on visual function, and pupil size depends mainly on the adapting luminance, modulated by other factors. Over the last century, a number of formulas have been proposed to describe this dependence. Here we review seven published formulas and develop a new unified formula that incorporates the effects of luminance, size of the adapting field, age of the observer, and whether one or both eyes are adapted. We provide interactive demonstrations and software implementations of the unified formula.
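For context, one classic example of the kind of luminance-to-diameter formula being reviewed is the Moon and Spencer (1944) approximation, reproduced below from common restatements; the exact constants and units should be checked against the original source, and the unified formula of this paper additionally accounts for field size, age, and monocular versus binocular viewing.

```latex
% Moon & Spencer (1944) pupil-diameter formula (illustrative example of the class of
% formulas reviewed; constants/units quoted from common restatements, not from this paper):
d = 4.9 - 3\tanh\!\left(0.4 \log_{10} L\right)
% d: pupil diameter in mm, L: adapting luminance in cd/m^2.
```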
Decreased incidence of extra-alveolar air leakage or death prior to air leakage in high versus low rate positive pressure ventilation: Results of a randomised seven-centre trial in preterm infants
Two different ventilation techniques were compared in a seven-centre, randomised trial with 181 preterm infants up to and including 32 completed weeks gestational age, who needed mechanical ventilation because of lung disease of any type. Technique A used a constant rate (60 cycles/min), inspiratory time (IT) (0.33 s) and inspiratory:expiratory ratio (I∶E) (1∶2). Tidal and minute volume were changed only by varying peak inspiratory pressure until weaning via continuous positive airway pressure. Technique B used a lower rate (30 cycles/min) with a longer IT (1.0 s). The I∶E ratio could be changed from 1∶1 to 2∶1 in case of hypoxaemia. Chest X-rays taken at fixed intervals were evaluated by a paediatric radiologist and a neonatologist unaware of the type of ventilation used in the patients. A reduction of at least 20% in extra-alveolar air leakage (EAL) or death prior to EAL was hypothesized in infants ventilated by method A. A sequential design was used to test this hypothesis. The null hypothesis was rejected (P=0.05) when the 22nd untied pair was completed. The largest reduction in EAL (−55%) was observed in the subgroup 31–32 weeks of gestation and none in the most immature group (<28 weeks). We conclude that in preterm infants requiring mechanical ventilation for any reason of lung insufficiency, ventilation at 60 cycles/min and short IT (0.33 s) significantly reduces EAL or prior death compared with 30 cycles/min and a longer IT of 1 s. We speculate that a further increase in rate and reduction of IT would also lower the risk of barotrauma in the most immature and susceptible infants.
Penis and Prepuce
The penis is the male organ of copulation and is composed of erectile tissue that encases the extrapelvic portion of the urethra (Fig. 66-1). The penis of the horse is musculocavernous and can be divided into three parts: the root, the body or shaft, and the glans penis. The penis originates caudally at the root, which is fixed to the lateral aspects of the ischial arch by two crura (leg-like parts) that converge to form the shaft of the penis. The shaft constitutes the major portion of the penis and begins at the junction of the crura. It is attached caudally to the symphysis ischii of the pelvis by two short suspensory ligaments that merge with the origin of the gracilis muscles (Fig. 66-2). The glans penis is the conical enlargement that caps the shaft. The urethra passes over the ischial arch between the crura and curves cranioventrally to become incorporated within erectile tissue of the penis. The mobile shaft and glans penis extend cranioventrally to the umbilical region of the abdominal wall. The body is cylindrical but compressed laterally. When quiescent, the penis is soft, compressible, and about 50 cm long. Fifteen to 20 cm lie free in the prepuce. When maximally erect, the penis is up to three times longer than when it is in a quiescent state.
Security and Efficiency Analysis of One Time Password Techniques
As a result of the rapid growth of services provided via the Internet, as well as the multiple accounts a person owns, reliable user authentication schemes are mandatory for security purposes. OTP systems have prevailed as the best viable solution for securing sensitive information and pose an interesting field for research. Although OTP schemes enhance authentication security through various algorithmic customizations and extensions, certain compromises must be made, especially since systems that are highly tolerant to vulnerabilities tend to have high computational and storage needs. In order to minimize the risk of a non-authenticated user gaining access to sensitive data, an OTP system's architecture differs depending on the use case, as does its tolerance towards already known attack methods. In this paper, the most widely accepted and promising OTP schemes are described and evaluated in terms of resistance against security attacks and in terms of computational intensity (performance efficiency). The results showed that there is a correlation between the security level, the computational efficiency and the storage needs of an OTP system.
A modulo based LSB steganography method
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover images. Least Significant Bit (LSB) steganography, which replaces the least significant bits of the host medium, is a widely used technique with low computational complexity and high insertion capacity. Although it has good perceptual transparency, it is vulnerable to steganalysis based on histogram analysis. In all existing schemes, the presence of a secret message in a cover image can be easily detected through histogram and statistical analysis. Therefore, developing new LSB steganography algorithms that resist statistical and histogram analysis is a prime requirement.
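As context for the abstract above, a minimal sketch of classic LSB substitution (the baseline that a modulo-based method builds on) is shown below; the array layout and helper names are illustrative assumptions, and the paper's modulo-based variant is not reproduced here.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, message_bits: list) -> np.ndarray:
    """Embed message bits into the least significant bit of each cover pixel.

    A minimal sketch of classic LSB substitution (not the paper's modulo variant):
    each pixel's LSB is cleared and replaced with one message bit.
    """
    stego = cover.copy().ravel()
    if len(message_bits) > stego.size:
        raise ValueError("message longer than cover capacity")
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit   # clear LSB, then set it to the message bit
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> list:
    """Recover the first n_bits message bits by reading pixel LSBs."""
    return [int(p) & 1 for p in stego.ravel()[:n_bits]]

# Usage: hide the bits of the byte 0b10110010 in a tiny grayscale image.
cover = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, 8) == bits
```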
Reassessment of the Diagnostic Value of Histology in Patients with GERD, Using Multiple Biopsy Sites and an Appropriate Control Group
BACKGROUND: Histology is generally considered as a tool of limited value in the diagnosis of gastro-esophageal reflux disease (GERD). AIM: To reevaluate the diagnostic role of histological alterations in GERD, using multiple biopsy sites and an appropriate control group. METHODS: We studied 135 patients with typical and atypical symptoms of GERD. They underwent upper GI endoscopy and the Los Angeles classification was used for grading cases with mucosal breaks. Biopsies were taken at the Z-line, 2 and 4 cm above it. Microscopic esophagitis was identified by necrosis/erosion, neutrophil/eosinophil intraepithelial infiltration, basal cell hyperplasia, elongation of papillae, dilation of intercellular spaces and a score (range: 0–2) was given for each lesion. Twenty-four-hour esophageal pH monitoring was performed in each patient. Twenty subjects without reflux symptoms, and with normal endoscopy and pH testing were considered as controls. RESULTS: Histological alterations were found in 100 of 119 GERD patients (84%) and in 3 of 20 controls (15%) with a significant difference (p < 0.00001). Histology was abnormal in 96% of patients with erosive esophagitis and in 76% of patients with nonerosive reflux disease (NERD). The sum of scores of microscopic lesions found in all biopsy sites ranged from 0 to 22 and we identified a cut-off value (score 2) that distinguished efficiently controls from GERD patients. CONCLUSIONS: In contrast with previous reports on the marginal role of histology in patients with GERD, our study shows that this technique can be a useful diagnostic tool, particularly in patients with NERD, when biopsies are taken at two sites including the Z-line and 2 cm above it.
Clinical evaluation vs. economic evaluation: the case of a new drug.
To economically evaluate a new drug or other medical innovation one must assess both the changes in costs and in benefits. Safety and efficacy matter, but so do resource costs and social benefits. This paper evaluates the effects on expenditures of the recent introduction of cimetidine, a drug used in the prevention and treatment of duodenal ulcers. This evaluation is of interest in its own right and also as a "guide" for studying similar effects of other innovations. State Medicaid records are used to test the effects on hospitalization and aggregate medical care expenditures of this new medical innovation. After controlling to the extent possible for potential selection bias, we find that: 1) usage of cimetidine is associated with a lower level of medical care expenditures and fewer days of hospitalization per patient for those duodenal ulcer patients who had zero health care expenditures and zero days of hospitalization during the presample period; an annual cost saving of some $320.00 (20 per cent) per patient is indicated. Further analysis disclosed, however, that this saving was lower for patients with somewhat higher levels of health care expenditures and hospitalization in the presample period, and to some extent was reversed for the patients whose prior year's medical care expenditures and hospitalization were highest.
Orbital-specific mapping of the ligand exchange dynamics of Fe(CO)5 in solution
Transition-metal complexes have long attracted interest for fundamental chemical reactivity studies and possible use in solar energy conversion. Electronic excitation, ligand loss from the metal centre, or a combination of both, creates changes in charge and spin density at the metal site that need to be controlled to optimize complexes for photocatalytic hydrogen production and selective carbon–hydrogen bond activation. An understanding at the molecular level of how transition-metal complexes catalyse reactions, and in particular of the role of the short-lived and reactive intermediate states involved, will be critical for such optimization. However, suitable methods for detailed characterization of electronic excited states have been lacking. Here we show, with the use of X-ray laser-based femtosecond-resolution spectroscopy and advanced quantum chemical theory to probe the reaction dynamics of the benchmark transition-metal complex Fe(CO)5 in solution, that the photo-induced removal of CO generates the 16-electron Fe(CO)4 species, a homogeneous catalyst with an electron deficiency at the Fe centre, in a hitherto unreported excited singlet state that either converts to the triplet ground state or combines with a CO or solvent molecule to regenerate a penta-coordinated Fe species on a sub-picosecond timescale. This finding, which resolves the debate about the relative importance of different spin channels in the photochemistry of Fe(CO)5 (refs 4, 16–20), was made possible by the ability of femtosecond X-ray spectroscopy to probe frontier-orbital interactions with atom specificity. We expect the method to be broadly applicable in the chemical sciences, and to complement approaches that probe structural dynamics in ultrafast processes.
A Novel Deep Learning-Based Method of Improving Coding Efficiency from the Decoder-End for HEVC
Improving coding efficiency is the eternal theme in the video coding field. The traditional way to achieve this is to reduce the redundancies inside videos by adding numerous coding options at the encoder side. However, no matter what is done at the encoder, it is still hard to guarantee optimal coding efficiency. On the other hand, the decoded video can be treated as a certain compressive sampling of the original video. According to compressive sensing theory, it might be possible to further enhance the quality of the decoded video with restoration methods. Different from the traditional methods, and without changing the encoding algorithm, this paper focuses on an approach to improve the video's quality at the decoder end, which is equivalent to further boosting the coding efficiency. Furthermore, we propose a very deep convolutional neural network to automatically remove the artifacts and enhance the details of HEVC-compressed videos, by utilizing the underused information left in the bit-streams and external images. Benefiting from the power and efficiency of the fully end-to-end feed-forward architecture, our approach can be treated as a better decoder that efficiently obtains decoded frames with higher quality. Extensive experiments indicate that our approach can further improve the coding efficiency after the deblocking and SAO filters in the current HEVC decoder, with average BD-rate reductions of 5.0%, 6.4%, 5.3%, and 5.5% for all intra, lowdelay P, lowdelay B and random access configurations, respectively. This method can also be extended to other video coding standards.
CCSP: A compressed certificate status protocol
Trust in SSL-based communications is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, despite the existence of these revocation checks, sophisticated cyber-attackers may trick web browsers into trusting a revoked certificate, believing that it is still valid. Consequently, the web browser will communicate (over TLS) with web servers controlled by cyber-attackers. Although frequently updated, nonced, and timestamped certificates may reduce the frequency and impact of such cyber-attacks, they impose a very large overhead on the CAs and OCSP servers, which now need to timestamp and sign all the responses, for every certificate they have issued, on a regular basis. To mitigate this overhead and provide a solution to the described cyber-attacks, we present CCSP: a new approach to provide timely information regarding the status of certificates, which capitalizes on a newly introduced notion called signed collections. In this paper, we present the design, preliminary implementation, and evaluation of CCSP in general, and signed collections in particular. Our preliminary results suggest that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of signatures required by 6 orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead in the overall user latency.
Hybrid driver IC for real-time TFT non-uniformity compensation of ultra high-definition AMOLED display
A UHD AMOLED display driver IC, enabling real-time TFT non-uniformity compensation, is presented with a hybrid driving scheme. The proposed hybrid driving scheme drives a mobile UHD (3840×1920) AMOLED panel, whose scan time is 7.7 μs at a scan frequency of 60 Hz, through a load of 30 kΩ resistance and 30 pF capacitance. A proposed accurate current sensor embedded in the column driver and a back-end compensation scheme reduce the maximum current error between emulated TFTs to within 0.94 LSB (37 nA) of 8-bit gray scales. Since the TFT variation is externally compensated, a simple 3T1C pixel circuit is employed in each pixel.
The Spinning Top Model, a New Path to Conceptualize Culture and Values: Applications to IS Research
In this conceptual study we first categorize, from existing literature, different conceptions of culture rooted in Anthropology and Sociology. We argue that these conceptions build up the logical structure of specific theoretical and empirical tools which address human/IS interactions in a cultural-based perspective. We then propose a new model of the individual’s global culture, the Spinning Top Model. We posit theoretical proposals based on this model and define a new analytical framework which can open new paths for IS research, the study of IT-related values, IT-attitudes and IT-behaviors.
A multi-centred audit of secondary spinal assessments in a trauma setting: are we ATLS compliant?
The global incidence of spinal cord injuries varies, with the developed world having improved survival and 1-year mortality in a poly-trauma setting. This improvement in survival has been estimated at 20% in a recent Cochrane review of Advanced Trauma Life Support (ATLS). The aim of this audit is to evaluate the management of patients with suspected spinal cord injury by the trauma and orthopaedic team in three centres in South Wales. A retrospective case note review of the secondary survey was performed. Inclusion criteria were patients 18 years and above, with poly-trauma and presenting to the Accident and Emergency department at the treating hospital. We used the ATLS guidelines as an audit tool and reviewed the documentation of key components of the secondary assessment. Forty-nine patients were included (29 males, 20 females) with an average age of 53.7 years (19–92 years). We found that completion of all components of the secondary survey for spinal injury was poor, with only 29% receiving a digital rectal examination despite suspected spinal injury. Paralysis level was not documented in 20.4% of patients. Medical Research Council grade was documented in only 24.5% of patients, although it was assessed in 73.5%. The secondary survey took place after 2 h in 54.6% of patients. We found that the documentation of the performance of a secondary survey was poor and that most patients included in this study are not currently meeting the minimal standard suggested by the ATLS guidelines.
Safeguarding 5G wireless communication networks using physical layer security
The fifth generation (5G) network will serve as a key enabler in meeting the continuously increasing demands for future wireless applications, including an ultra-high data rate, an ultra-wide radio coverage, an ultra-large number of devices, and an ultra-low latency. This article examines security, a pivotal issue in the 5G network where wireless transmissions are inherently vulnerable to security breaches. Specifically, we focus on physical layer security, which safeguards data confidentiality by exploiting the intrinsic randomness of the communications medium and reaping the benefits offered by the disruptive technologies to 5G. Among various technologies, the three most promising ones are discussed: heterogeneous networks, massive multiple-input multiple-output, and millimeter wave. On the basis of the key principles of each technology, we identify the rich opportunities and the outstanding challenges that security designers must tackle. Such an identification is expected to decisively advance the understanding of future physical layer security.
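As background for the physical layer security discussion above, the usual quantitative notion is the secrecy capacity of the wiretap channel; the standard Gaussian-case expression (a textbook result, not a formula from this article) is:

```latex
% Secrecy capacity of the Gaussian wiretap channel (standard result, not from this article):
C_s = \left[\,\log_2\!\left(1+\gamma_B\right) - \log_2\!\left(1+\gamma_E\right)\right]^{+}
% gamma_B: SNR of the legitimate link, gamma_E: SNR of the eavesdropper's link,
% [x]^+ = max(x, 0). A positive secrecy rate requires the legitimate link to be stronger.
```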
Everyone Likes Shopping! Multi-class Product Categorization for e-Commerce
Online shopping caters to the needs of millions of users on a daily basis. To build an accurate system that can retrieve relevant products for a query like “MB252 with travel bags”, one requires product and query categorization mechanisms, which classify the text as Home&Garden>Kitchen&Dining>Kitchen Appliances>Blenders. One of the biggest challenges in e-Commerce is that providers like Amazon, e-Bay, Google, Yahoo! and Walmart organize products into different product taxonomies, making it hard and time-consuming for sellers to categorize goods for each shopping platform. To address this challenge, we propose an automatic product categorization mechanism, which for a given product title assigns the correct product category from a taxonomy. We conducted an empirical evaluation on 445,408 product titles and used a rich product taxonomy of 319 categories organized into 6 levels. We compared performance against multiple algorithms and found that the best performing system reaches an f-score of 0.88.
Assessing Threat of Adversarial Examples on Deep Neural Networks
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur when acquiring an image in a real-world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that, for text-driven classification, adversarial examples are an academic curiosity, not a security threat.
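As an illustration of the crop-averaging defense mentioned above, a minimal sketch follows; `model` is a hypothetical classifier returning per-class probabilities, and the crop size and count are illustrative assumptions.

```python
import numpy as np

def predict_with_crop_averaging(model, image: np.ndarray, crop: int = 224, n_crops: int = 10) -> int:
    """Average a classifier's softmax outputs over several random crops.

    Sketch of the crop-averaging idea discussed in the abstract: small spatial
    perturbations from cropping tend to wash out carefully crafted adversarial noise.
    `model(batch)` is assumed to return an array of shape (n_crops, n_classes).
    """
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    crops = []
    for _ in range(n_crops):
        top = rng.integers(0, h - crop + 1)
        left = rng.integers(0, w - crop + 1)
        crops.append(image[top:top + crop, left:left + crop])
    probs = model(np.stack(crops))             # (n_crops, n_classes) class probabilities
    return int(np.argmax(probs.mean(axis=0)))  # class with the highest averaged probability
```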
A subjective model for trustworthiness evaluation in the social Internet of Things
The integration of social networking concepts into the Internet of Things (IoT) has led to the so called Social Internet of Things (SIoT) paradigm, according to which the objects are capable of establishing social relationships in an autonomous way with respect to their owners. The benefits are those of improving scalability in information/service discovery when the SIoT is made of huge numbers of heterogeneous nodes, similarly to what happens with social networks among humans. In this paper we focus on the problem of understanding how the information provided by the other members of the SIoT has to be processed so as to build a reliable system on the basis of the behavior of the objects. We define a subjective model for the management of trustworthiness which builds upon the solutions proposed for P2P networks. Each node computes the trustworthiness of its friends on the basis of its own experience and on the opinion of the common friends with the potential service providers. We employ a feedback system and we combine the credibility and centrality of the nodes to evaluate the trust level. Preliminary simulations show the benefits of the proposed model towards the isolation of almost any malicious node in the network.
Community detection and stochastic block models: recent developments
The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and provides generally a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a., detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.
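For readers unfamiliar with the thresholds named above, they take a simple closed form in the symmetric two-community case; the statements below are standard results from the literature this note surveys, written in the usual parameterization.

```latex
% Symmetric two-community SBM with edge probabilities a/n (within) and b/n (across),
% constant-degree regime: weak recovery (detection) is possible iff the
% Kesten-Stigum condition holds
\mathrm{SNR} \;=\; \frac{(a-b)^2}{2(a+b)} \;>\; 1 .
% Logarithmic-degree regime with probabilities a\log n/n and b\log n/n:
% exact recovery is possible iff
\left|\sqrt{a}-\sqrt{b}\right| \;\ge\; \sqrt{2}.
```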
Zero-Shot Activity Recognition with Verb Attribute Induction
In this paper, we investigate large-scale zero-shot activity recognition by modeling the visual and linguistic attributes of action verbs. For example, the verb “salute” has several properties, such as being a light movement, a social act, and short in duration. We use these attributes as the internal mapping between visual and textual representations to reason about a previously unseen action. In contrast to much prior work that assumes access to gold standard attributes for zero-shot classes and focuses primarily on object attributes, our model uniquely learns to infer action attributes from dictionary definitions and distributed word representations. Experimental results confirm that action attributes inferred from language can provide a predictive signal for zero-shot prediction of previously unseen activities.
Interactive Attention Networks for Aspect-Level Sentiment Classification
Aspect-level sentiment classification aims at identifying the sentiment polarity of a specific target in its context. Previous approaches have realized the importance of targets in sentiment classification and developed various methods with the goal of precisely modeling their contexts via generating target-specific representations. However, these studies always ignore the separate modeling of targets. In this paper, we argue that both targets and contexts deserve special treatment, and that their own representations need to be learned via interactive learning. Then, we propose the interactive attention networks (IAN) to interactively learn attentions in the contexts and targets, and generate the representations for targets and contexts separately. With this design, the IAN model can well represent a target and its collocative context, which is helpful for sentiment classification. Experimental results on the SemEval 2014 datasets demonstrate the effectiveness of our model.
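A generic sketch of the target-to-context attention underlying this kind of model is given below; the specific score function is an assumption for illustration and is not necessarily the exact form used by IAN.

```latex
% Generic target-to-context attention (illustrative form, not necessarily IAN's exact score):
\alpha_i = \frac{\exp\big(\gamma(c_i,\bar{t})\big)}{\sum_{j} \exp\big(\gamma(c_j,\bar{t})\big)},
\qquad
c_{\mathrm{repr}} = \sum_i \alpha_i\, c_i
% c_i: hidden state of the i-th context word, \bar{t}: average of the target's hidden states,
% gamma: a learned score function. A symmetric computation attends over the target words
% using the averaged context, and the two representations are combined for classification.
```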
Hemodynamic determinants of oxygen consumption of the heart with special reference to the tension-time index.
SARNOFF, S. J., E. BRAUNWALD, G. H. WELCH, JR., R. B. CASE, W. N. STAINSBY AND R. MACRUZ. Hemodynamic determinants of oxygen consumption of the heart with special reference to the tension-time index. Am. J. Physiol. 192(1): 148-156. 1958. The hemodynamic determinants of myocardial oxygen utilization were ascertained in the isolated, metabolically supported, nonfailing canine heart. The primary determinant was found to be the total tension developed by the myocardium as indicated by the area beneath the systolic pressure curve (Tension-Time Index). The significance of these findings for the understanding of the low efficiency of the failing heart and the consequently increased importance of Laplace's law is discussed. THE PURPOSE of this investigation was to acquire a detailed and precise appreciation of the influence of each of the various hemodynamic phenomena, including the tension-time index, on the nonfailing heart's utilization of oxygen. This required a preparation permitting a) independent control of these phenomena, b) essentially stable and nonfailing performance characteristics of the heart, and c) the ability to make determinations of total myocardial oxygen consumption with a high degree of precision. This experimental study was made possible by the isolated, metabolically supported, dog heart preparation (1).
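Since the abstract defines the tension-time index as the area beneath the systolic pressure curve, a hedged formulation of that definition is:

```latex
% Tension-time index (TTI): area under the pressure curve during systole (per beat),
% commonly expressed per minute by multiplying by heart rate (HR).
\mathrm{TTI}_{\mathrm{beat}} = \int_{t_{\mathrm{onset}}}^{t_{\mathrm{end}}} P(t)\,\mathrm{d}t,
\qquad
\mathrm{TTI}_{\mathrm{min}} = \mathrm{HR}\cdot \mathrm{TTI}_{\mathrm{beat}}
% P(t): systolic pressure; t_onset, t_end: beginning and end of systole.
```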
Finding software license violations through binary code clone detection
Software released in binary form frequently uses third-party packages without respecting their licensing terms. For instance, many consumer devices have firmware containing the Linux kernel, without the suppliers following the requirements of the GNU General Public License. Such license violations are often accidental, e.g., when vendors receive binary code from their suppliers with no indication of its provenance. To help find such violations, we have developed the Binary Analysis Tool (BAT), a system for code clone detection in binaries. Given a binary, such as a firmware image, it attempts to detect cloning of code from repositories of packages in source and binary form. We evaluate and compare the effectiveness of three of BAT's clone detection techniques: scanning for string literals, detecting similarity through data compression, and detecting similarity by computing binary deltas.
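The "similarity through data compression" technique mentioned above is commonly realized as the normalized compression distance (NCD); the sketch below is a generic NCD implementation with hypothetical file names, not BAT's actual code.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the
    compressed length. Values near 0 suggest the inputs share much content
    (e.g., a cloned third-party package inside a firmware image); values near 1
    suggest little shared content. Generic sketch, not BAT's implementation.
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Usage: compare a candidate firmware section against a known package build
# (file names are hypothetical).
with open("firmware_section.bin", "rb") as f, open("known_package.bin", "rb") as g:
    print(ncd(f.read(), g.read()))
```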
Effect of motor imagery training on symmetrical use of knee extensors during sit-to-stand and stand-to-sit tasks in post-stroke hemiparesis.
OBJECTIVE To investigate the effect of motor imagery training (MIT) on the symmetrical use of knee extensors during sit-to-stand and stand-to-sit tasks. METHODS We measured the electromyographic (EMG) data in the knee extensor on the affected side of 3 volunteers with post-stroke hemiparesis. We used a single-subject multiple-baseline research design across individuals. The EMG data were collected from knee extensors while performing the sit-to-stand and stand-to-sit tasks. The EMG activation and onset time ratios for the knee extensors were calculated by dividing the EMG activation and onset time of knee extensor action on the affected side by those on the unaffected side. MIT consisted of a 10-min detailed description of 5 stages: preparation, sit-to-stand tasks, weight shifting during standing, stand-to-sit tasks, and completion. RESULTS During MIT, the EMG activation ratios of participants 1, 2, and 3 increased by 11.24%, 18.07%, and 26.91%, respectively, in the sit-to-stand task and by 12.11%, 14.31%, and 25.92%, respectively, in the stand-to-sit task. During MIT, the onset time of participants 1, 2, and 3 decreased by 36.09%, 24.27%, and 25.61%, respectively, in the sit-to-stand task and by 26.81%, 27.20%, and 22.83%, respectively, for the stand-to-sit task. CONCLUSION These findings suggest that MIT has a positive effect on the symmetrical use of knee extensors during sit-to-stand and stand-to-sit tasks.
Land Surface Temperature Retrieval Methods From Landsat-8 Thermal Infrared Sensor Data
The importance of land surface temperature (LST) retrieved from high to medium spatial resolution remote sensing data for many environmental studies, particularly the applications related to water resources management over agricultural sites, was a key factor for the final decision of including a thermal infrared (TIR) instrument on board the Landsat Data Continuity Mission or Landsat-8. This new TIR sensor (TIRS) includes two TIR bands in the atmospheric window between 10 and 12 μm, thus allowing the application of split-window (SW) algorithms in addition to single-channel (SC) algorithms or direct inversions of the radiative transfer equation used in previous sensors on board the Landsat platforms, with only one TIR band. In this letter, we propose SC and SW algorithms to be applied to Landsat-8 TIRS data for LST retrieval. Algorithms were tested with simulated data obtained from forward simulations using atmospheric profile databases and emissivity spectra extracted from spectral libraries. Results show mean errors typically below 1.5 K for both SC and SW algorithms, with slightly better results for the SW algorithm than for the SC algorithm with increasing atmospheric water vapor contents.
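The split-window algorithms referred to above typically follow a generic quadratic form in the brightness-temperature difference of the two TIR bands; the structure below is quoted as a general template (the calibrated, sensor-specific coefficients are what the letter derives), so the exact form should be treated as an assumption.

```latex
% Generic split-window (SW) structure for land surface temperature (template only;
% the calibrated coefficients c_0..c_6 for Landsat-8 TIRS are given in the letter):
T_s = T_i + c_1\,(T_i - T_j) + c_2\,(T_i - T_j)^2 + c_0
      + \left(c_3 + c_4\,w\right)\left(1-\varepsilon\right)
      + \left(c_5 + c_6\,w\right)\Delta\varepsilon
% T_i, T_j: at-sensor brightness temperatures of the two TIR bands (roughly 10.9 and 12 um),
% epsilon: mean band emissivity, Delta-epsilon: band emissivity difference,
% w: total atmospheric column water vapor.
```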
Implementing a Functional Spreadsheet in Clean
It has been claimed that recent developments in the research on the efficiency of code generation and on graphical input/output interfacing have made it possible to use a functional language to write efficient programs that can compete with industrial applications written in a traditional imperative language. As one of the early steps in verifying this claim, this paper describes a first attempt to implement a spreadsheet in a lazy, purely functional language. An interesting aspect of the design is that the language with which the user specifies the relations between the cells of the spreadsheet is itself a lazy, purely functional and higher order language as well, and not some special dedicated spreadsheet language. Another interesting aspect of the design is that the spreadsheet incorporates symbolic reduction and normalisation of symbolic expressions (including equations). This introduces the possibility of asking the system to prove equality of symbolic cell expressions: a property which can greatly enhance the reliability of a particular user-defined spreadsheet. The resulting application is by no means a fully mature product. It is not intended as a competitor to commercially available spreadsheets. However, with its higher order lazy functional language and its symbolic capabilities it may serve as an interesting candidate to fill the gap between calculators with purely functional expressions and full-featured spreadsheets with dedicated non-functional spreadsheet languages. This paper describes the global design and important implementation issues in the development of the application. The experience gained and lessons learnt during this project are treated. Performance and use of the resulting application are compared with related work.
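As a rough analogy of the "cells as lazy functional expressions" idea (the spreadsheet described here is written in Clean, a lazy purely functional language), a minimal Python sketch of demand-driven cell evaluation might look like this:

```python
# A rough Python analogy of the "cells as lazy functional expressions" idea
# (the actual spreadsheet described in the paper is written in Clean; this sketch
# only illustrates demand-driven evaluation of cell expressions).
class Sheet:
    def __init__(self):
        self.cells = {}              # cell name -> zero-argument function (a thunk)

    def define(self, name, thunk):
        self.cells[name] = thunk     # nothing is evaluated at definition time

    def value(self, name):
        return self.cells[name]()    # evaluated only when the value is demanded

sheet = Sheet()
sheet.define("A1", lambda: 2)
sheet.define("A2", lambda: 3)
# B1 refers to other cells by name; it is not computed until its value is demanded.
sheet.define("B1", lambda: sheet.value("A1") + sheet.value("A2"))
print(sheet.value("B1"))   # -> 5
```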
Single-stage single-switch PFC converter with extreme step-down voltage conversion ratio
This paper presents a new single-stage single-switch (S4) high power factor correction (PFC) AC/DC converter suitable for low power applications (< 150 W) with a universal input voltage range (90–265 Vrms). The proposed topology integrates a buck-boost input current shaper followed by a buck and a buck-boost converter, respectively. As a result, the proposed converter can operate with larger duty cycles compared to existing S4 topologies, making it suitable for extreme step-down voltage conversion applications. Several desirable features are gained when the three integrated converter cells operate in discontinuous conduction mode (DCM). These features include low semiconductor voltage stress, zero-current switching at turn-on, and simple control with a fast, well-regulated output voltage. A detailed circuit analysis is performed to derive the design equations. The theoretical analysis and effectiveness of the proposed approach are confirmed by experimental results obtained from a 35-W/12-Vdc laboratory prototype.
Starch-composition, fine structure and architecture
Much has been written over many decades about the structure and properties of starch. As technology develops, the capacity to understand in more depth the structure of starch granules and how this complex organisation controls functionality develops in parallel. This review puts the current state of knowledge about starch structure in perspective and integrates aspects of starch composition, interactions, architecture and functionality.
The simple economics of cybercrimes
The characteristics of cybercriminals, cybercrime victims, and law enforcement agencies have a reinforcing effect on each other, leading to a vicious circle of cybercrime. In this article, the author assessed the cost-benefit structure of cybercriminals. From the potential victims' perspectives, an economic analysis can help explain the optimum investment necessary as well as the measures required to prevent hackers from cracking into their computer networks. The analysis from the cybercriminal's viewpoint also provides insight into factors that might encourage and energize his or her behavior.
An efficient multi-task PaaS cloud infrastructure based on docker and AWS ECS for application deployment
Setting up the environment for and deploying distributed applications is a human-intensive and highly complex process that poses significant challenges. Nowadays many applications are developed in the cloud, and existing applications are migrated to the cloud because of the promising advantages of cloud computing. After presenting two common, serious challenging scenarios in the application development environment, we propose a multi-task PaaS cloud infrastructure using Docker and AWS services for application isolation, optimization and rapid deployment of distributed applications. We make full use of Docker, a lightweight containerization technology that uses Linux kernel features such as namespaces and cgroups to sandbox processes into configurable virtual environments. The Amazon EC2 Container Service supports our container management framework. The cluster management framework uses optimistic, shared-state scheduling to execute processes on EC2 instances using Docker containers. Several experiments were carried out; one focused on a simulation of application deployment scheduling and shows that our proposed infrastructure is flexible, efficient and well optimized.
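A minimal sketch of the kind of ECS interaction described above (registering a Docker-based task definition and launching it on a cluster) using boto3 is shown below; the image, cluster name and resource sizes are placeholders, and this is not the paper's actual deployment code.

```python
import boto3

# Hypothetical names and sizes for illustration only; not the paper's actual setup.
ecs = boto3.client("ecs", region_name="us-east-1")

# Register a task definition that wraps a Docker image.
task = ecs.register_task_definition(
    family="demo-app",
    containerDefinitions=[{
        "name": "demo-app",
        "image": "myrepo/demo-app:latest",   # Docker image to run
        "memory": 512,                        # hard memory limit (MiB)
        "cpu": 256,
        "essential": True,
    }],
)

# Launch one copy of the task on an existing cluster of EC2 container instances.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition=task["taskDefinition"]["taskDefinitionArn"],
    count=1,
)
```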
Care of the Athlete With Type 1 Diabetes Mellitus: A Clinical Review
CONTEXT Type 1 diabetes mellitus (T1DM) results from a highly specific immune-mediated destruction of pancreatic β cells, resulting in chronic hyperglycemia. For many years, one of the mainstays of therapy for patients with T1DM has been exercise balanced with appropriate medications and medical nutrition. Compared to healthy peers, athletes with T1DM experience nearly all the same health-related benefits from exercise. Despite these benefits, effective management of the T1DM athlete is a constant challenge due to various concerns such as the increased risk of hypoglycemia. This review seeks to summarize the available literature and aid clinicians in clinical decision-making for this patient population. EVIDENCE ACQUISITION PubMed searches were conducted for "type 1 diabetes mellitus AND athlete" along with "type 1 diabetes mellitus AND exercise" from database inception through November 2015. All articles identified by this search were reviewed if the article text was available in English and related to management of athletes with type 1 diabetes mellitus. Subsequent reference searches of retrieved articles yielded additional literature included in this review. RESULTS The majority of current literature available exists as recommendations, review articles, or proposed societal guidelines, with less prospective or higher-order treatment studies available. The available literature is presented objectively with an attempt to describe clinically relevant trends and findings in the management of athletes living with T1DM. CONCLUSIONS Managing T1DM in the context of exercise or athletic competition is a challenging but important skill for athletes living with this disease. A proper understanding of the hormonal milieu during exercise, special nutritional needs, glycemic control, necessary insulin dosing adjustments, and prevention/management strategies for exercise-related complications can lead to successful care plans for these patients. Individualized management strategies should be created with close cooperation between the T1DM athlete and their healthcare team (including a physician and dietitian).
An Internet accessible telepresence
The US Vice President, Al Gore, in a speech on the information superhighway, suggested that it could be used to remotely control a nuclear reactor. We do not have enough confidence in computer software, hardware, or networks to attempt this experiment, but have instead built an Internet-accessible, remote-controlled model car that provides a race driver's view via a video camera mounted on the model car. The remote user can see live video from the car and, using a mouse, control the speed and direction of the car. The challenge was to build a car that could be controlled by novice users in narrow corridors, and that would work not only with the full motion video that the car natively provides, but also with the limited size and frame rate video available over the Internet multicast backbone. We have built a car that has been driven from a site 50 miles away over a 56-kbps IP link using nv-format video at as little as one frame per second and at as low as 100×100 pixels resolution. We also built hardware to control the car, using a slightly modified voice grade channel videophone. Our experience leads us to believe that it is now possible to put together readily available hardware and software components to build a cheap and effective telepresence.
The Global Burden of Snakebite: A Literature Analysis and Modelling Based on Regional Estimates of Envenoming and Deaths
BACKGROUND Envenoming resulting from snakebites is an important public health problem in many tropical and subtropical countries. Few attempts have been made to quantify the burden, and recent estimates all suffer from the lack of an objective and reproducible methodology. In an attempt to provide an accurate, up-to-date estimate of the scale of the global problem, we developed a new method to estimate the disease burden due to snakebites. METHODS AND FINDINGS The global estimates were based on regional estimates that were, in turn, derived from data available for countries within a defined region. Three main strategies were used to obtain primary data: electronic searching for publications on snakebite, extraction of relevant country-specific mortality data from databases maintained by United Nations organizations, and identification of grey literature by discussion with key informants. Countries were grouped into 21 distinct geographic regions that are as epidemiologically homogenous as possible, in line with the Global Burden of Disease 2005 study (Global Burden Project of the World Bank). Incidence rates for envenoming were extracted from publications and used to estimate the number of envenomings for individual countries; if no data were available for a particular country, the lowest incidence rate within a neighbouring country was used. Where death registration data were reliable, reported deaths from snakebite were used; in other countries, deaths were estimated on the basis of observed mortality rates and the at-risk population. We estimate that, globally, at least 421,000 envenomings and 20,000 deaths occur each year due to snakebite. These figures may be as high as 1,841,000 envenomings and 94,000 deaths. Based on the fact that envenoming occurs in about one in every four snakebites, between 1.2 million and 5.5 million snakebites could occur annually. CONCLUSIONS Snakebites cause considerable morbidity and mortality worldwide. The highest burden exists in South Asia, Southeast Asia, and sub-Saharan Africa.
Characterization and Theoretical Comparison of Branch-and-Bound Algorithms for Permutation Problems
Branch-and-bound implicit enumeration algorithms for permutation problems (discrete optimization problems where the set of feasible solutions is the permutation group S_n) are characterized in terms of a sextuple (B_p, S, E, D, L, U), where (1) B_p is the branching rule for permutation problems, (2) S is the next node selection rule, (3) E is the set of node elimination rules, (4) D is the node dominance function, (5) L is the node lower-bound cost function, and (6) U is an upper-bound solution cost. A general algorithm based on this characterization is presented and the dependence of the computational requirements on the choice of algorithm parameters S, E, D, L, and U is investigated theoretically. The results verify some intuitive notions but disprove others.
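A compact sketch of the general algorithm this characterization describes is given below, with a best-first selection rule standing in for S and the dominance and extra elimination rules omitted; it is an illustrative reading of the framework, not the paper's exact formulation.

```python
import heapq

def branch_and_bound(root, branch, lower_bound, cost, is_complete):
    """Generic best-first branch-and-bound sketch (selection rule S = lowest bound first).

    root        : initial partial permutation/solution
    branch      : node -> iterable of child nodes (branching rule B_p)
    lower_bound : node -> lower bound on the best completion (L)
    cost        : complete node -> its objective value
    is_complete : node -> True if the node is a full solution
    Dominance (D) and extra elimination rules (E) are omitted for brevity.
    """
    best_cost, best_node = float("inf"), None          # upper bound U and incumbent
    heap = [(lower_bound(root), 0, root)]
    tie = 1
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_cost:                          # eliminate by bound
            continue
        if is_complete(node):
            c = cost(node)
            if c < best_cost:
                best_cost, best_node = c, node          # improve the incumbent U
            continue
        for child in branch(node):
            b = lower_bound(child)
            if b < best_cost:                           # keep only promising children
                heapq.heappush(heap, (b, tie, child))
                tie += 1
    return best_node, best_cost
```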
Opportunities and Challenges of Switched Reluctance Motor Drives for Electric Propulsion: A Comparative Study
Selection of the proper electric traction drive is an important step in the design and performance optimization of electrified powertrains. Due to the use of high energy magnets, permanent magnet synchronous machines (PMSM) have been the primary choice in the electric traction motor market. However, manufacturers are very interested in finding a permanent-magnet-free alternative as a fallback option, due to the unstable cost of rare-earth metals and fault tolerance issues related to the constant permanent magnet excitation. In this paper, a new comprehensive review of electric machines (EMs) is presented that includes various new switched reluctance machine topologies in addition to conventional EMs such as PMSM, induction machine, synchronous reluctance machine (SynRel), and PM-assisted SynRel. The comparison is based on performance metrics such as power density, efficiency, torque ripple, vibration and noise, and fault tolerance. These systematic examinations prove that recently proposed magnetic configurations such as the double-stator switched reluctance machine can be a reasonable substitute for permanent magnet machines in electric traction applications.
Can Instagram posts help characterize urban micro-events?
Social media content, from platforms such as Twitter and Foursquare, has enabled an exciting new field of “social sensing”, where participatory content generated by users has been used to identify unexpected emerging or trending events. In contrast to such text-based channels, we focus on image-sharing social applications (specifically Instagram), and investigate how such urban social sensing can leverage upon the additional multi-modal, multimedia content. Given the significantly higher fraction of geotagged content on Instagram, we aim to use such channels to go beyond identification of long-lived events (e.g., a marathon) to achieve finer-grained characterization of multiple micro-events (e.g., a person winning the marathon) that occur over the lifetime of the macro-event. Via empirical analysis from a corpus of Instagram data from 3 international marathons, we establish the need for novel data pre-processing as: (a) semantic annotation of image content indeed provides additional features distinct from text captions, and (b) an appreciable fraction of the posted images do not pertain to the event under consideration. We propose a framework, called EiM, that combines such preprocessing with clustering-based event detection. We show that our initial prototype of EiM shows promising results: it is able to identify many micro-events in the three marathons, with spatial and temporal resolution that is less than 1% and 10%, respectively, of the corresponding ranges for the macro-event.
Pursuing the New While Sustaining the Current: Incumbent Strategies and Firm Value During the Nascent Period of Industry Change
This study considers the nascent period of industry change when the prevalent business model is being threatened by a new model, but there is significant uncertainty with respect to whether and whe...
HOXA-10 expression in the mid-secretory endometrium of infertile patients with either endometriosis, uterine fibromas or unexplained infertility.
BACKGROUND The aim of this study was to investigate HOXA-10 expression in endometrium from infertile patients with different forms of endometriosis; with uterine fibromas, or with unexplained infertility and from normal fertile women. METHODS Expression levels of HOXA-10 mRNA and protein in endometrium were measured during the mid-secretory phase. This study utilized laser capture microdissection, real-time RT-PCR and immunohistochemistry. RESULTS HOXA-10 mRNA and protein expression levels in endometrial stromal cells were significantly lower in infertile patients with different types of endometriosis (deep infiltrating endometriosis, ovarian endometriosis and superficial peritoneal endometriosis), with uterine myoma, and unexplained infertility patients as compared with healthy fertile controls. HOXA-10 mRNA expression levels of microdissected glandular epithelial cells were significantly lower than those of microdissected stromal cells, without significant differences among the different groups. No protein expression was detected in glandular epithelial cells. The percentage of patients with altered protein expression of HOXA-10 in stromal cells was significantly higher in patients with only superficial peritoneal endometriosis (100%, 20/20, P < 0.05) compared with the other infertile groups (deep infiltrating endometriosis: 72.7%, 16/22; ovarian endometriosis: 70.0%, 14/20; uterine myoma: 68.8%, 11/16; unexplained infertility: 55.6%, 5/9). CONCLUSION The present findings suggested that altered expression of HOXA-10 in endometrial stromal cells during the window of implantation may be one of the potential molecular mechanisms of infertility in infertile patients, particularly in patients with only superficial peritoneal endometriosis. One of the underlying causes of infertility in patients with only superficial endometriosis may be altered expression of HOXA-10 in endometrial stromal cells.
Churn Prediction System for Telecom using Filter-Wrapper and Ensemble Classification
Churn prediction in telecom is a challenging data mining task for retaining customers, especially when we have imbalanced class distribution, high dimensionality and a large number of samples in the training set. To cope with this challenging task, we propose a new intelligent churn prediction system for telecom, named FW-ECP. The novelty of FW-ECP lies in its ability to combine both filter- and wrapper-based feature selection as well as exploit the learning capability of an ensemble classifier built using diverse base classifiers. In the filter phase, Particle Swarm Optimization-based undersampling and mRMR feature selection are employed to reduce the effect of imbalanced class distribution and large dimensionality. In the wrapper phase, we employ a Genetic Algorithm that further discards irrelevant and redundant features. Random Forest, Rotation Forest, RotBoost and SVMs are then employed to exploit the new feature space. Finally, the ensemble classifier is constructed using both majority voting and stacking. We have tested and compared the performance of the proposed FW-ECP system on two publicly available standard telecom datasets: Orange and Cell2Cell. FW-ECP takes into account both the imbalanced nature and large dimensionality of the training sets and yields better prediction performance compared with existing state-of-the-art approaches. The feature spaces for the Orange and Cell2Cell datasets are reduced from 260D and 76D to 24D and 18D, respectively. The AUCs obtained by FW-ECP are 0.85 and 0.82 for the Orange and Cell2Cell datasets, respectively.
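A rough scikit-learn analogy of the filter-then-ensemble portion of this pipeline is sketched below; mutual-information filtering stands in for mRMR, the PSO undersampling and GA wrapper stages are omitted, and Rotation Forest/RotBoost are not available in scikit-learn, so this is an analogy of the FW-ECP flow rather than the paper's system.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Mutual-information filter stands in for mRMR; a stacked ensemble of diverse base
# learners stands in for the RF/Rotation Forest/RotBoost/SVM ensemble.
ensemble = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # stacking meta-learner
)

churn_model = Pipeline([
    ("filter", SelectKBest(mutual_info_classif, k=24)),  # filter phase (k chosen as in the reduced Orange space)
    ("ensemble", ensemble),
])

# Usage (X: feature matrix, y: churn labels):
# churn_model.fit(X_train, y_train)
# probs = churn_model.predict_proba(X_test)[:, 1]   # scores for AUC evaluation
```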
Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis
For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use Deep Boltzmann Machine (DBM)(2), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.
Agile Load Transportation : Safe and Efficient Load Manipulation with Aerial Robots
In the past few decades, unmanned aerial vehicles (UAVs) have become promising mobile platforms capable of navigating semiautonomously or autonomously in uncertain environments. The level of autonomy and the flexible technology of these flying robots have rapidly evolved, making it possible to coordinate teams of UAVs in a wide spectrum of tasks. These applications include search and rescue missions; disaster relief operations, such as forest fires [1]; and environmental monitoring and surveillance. In some of these tasks, UAVs work in coordination with other robots, as in robot-assisted inspection at sea [2]. Recently, radio-controlled UAVs carrying radiation sensors and video cameras were used to monitor, diagnose, and evaluate the situation at Japan's Fukushima Daiichi nuclear plant facility [3].
AmbiMax: Autonomous Energy Harvesting Platform for Multi-Supply Wireless Sensor Nodes
AmbiMax is an energy harvesting circuit and a supercapacitor-based energy storage system for wireless sensor nodes (WSNs). Previous WSNs attempt to harvest energy from various sources, and some also use supercapacitors instead of batteries to address the battery aging problem. However, they either waste much available energy due to impedance mismatch, require active digital control that incurs overhead, or work with only one specific type of source. AmbiMax addresses these problems by first performing maximum power point tracking (MPPT) autonomously, and then charging supercapacitors at maximum efficiency. Furthermore, AmbiMax is modular and enables composition of multiple energy harvesting sources including solar, wind, thermal, and vibration, each with a different optimal size. Experimental results on a real WSN platform, Eco, show that AmbiMax successfully manages multiple power sources simultaneously and autonomously at several times the efficiency of the current state of the art for WSNs.
Comparison of the physiological responses to different small-sided games in elite young soccer players.
The purpose of this study was to compare the blood lactate (La-), heart rate (HR) and percentage of maximum HR (%HRmax) responses among small-sided games (SSGs) in elite young soccer players. Sixteen players (average age 15.7 ± 0.4 years; height 176.8 ± 4.6 cm; body mass 65.5 ± 5.6 kg; VO2max 53.1 ± 5.9 ml · kg(-1) · min(-1); HRmax 195.9 ± 7.4 b · min(-1)) volunteered to perform the YoYo intermittent recovery test and 6 bouts of soccer drills including 1-a-side, 2-a-side, 3-a-side, and 4-a-side games without a goalkeeper in random order at 2-day intervals. The differences in La-, HR, and %HRmax either among the SSGs or among the bouts were identified using a 4 × 6 (games × exercise bouts) two-way analysis of variance with repeated measures. Significant differences were found in La-, HR, and %HRmax among the bouts (p ≤ 0.05). The 3-a-side and 4-a-side games were significantly higher than the 1-a-side and 2-a-side games on HR and %HRmax (p ≤ 0.05), whereas the 1-a-side game resulted in significantly higher La- responses compared with the other SSGs. This study demonstrated that physiological responses during the 1-a-side and 2-a-side games were different from those during the 3-a-side and 4-a-side games. Therefore, it can be concluded that a decreased number of players results in increased intensity during SSGs comprising 6 bouts. These results suggest that coaches should pay attention to choosing the SSG type and the number of bouts to achieve the desired physical conditioning of elite young soccer players in soccer training.
Autotagger: A Model for Predicting Social Tags from Acoustic Features on Large Music Databases
Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of “Web 2.0” recommender systems, allowing users to generate playlists based on use-dependent terms such as chill or jogging that have been applied to particular songs. In this paper, we propose a method for predicting these social tags directly from MP3 files. Using a set of 360 classifiers trained using the online ensemble learning algorithm FilterBoost, we map audio features onto social tags collected from the Web. The resulting automatic tags (or autotags) furnish information about music that is otherwise untagged or poorly tagged, allowing for insertion of previously unheard music into a social recommender. This avoids the “cold-start problem” common in such systems. Autotags can also be used to smooth the tag space from which similarities and recommendations are made by providing a set of comparable baseline tags for all tracks in a recommender system. Because the words we learn are the same as those used by people who label their music collections, it is easy to integrate our predictions into existing similarity and prediction methods based on web data.
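A rough sketch of the per-tag classification idea follows; scikit-learn's GradientBoostingClassifier stands in for the FilterBoost ensembles, and synthetic multi-label data stands in for the audio features and web-collected tags.

```python
# Illustrative per-tag binary classification of audio features (GradientBoosting
# as a stand-in for the FilterBoost ensembles described above).
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Synthetic stand-in: rows are per-track audio feature vectors, columns are social tags.
X, Y = make_multilabel_classification(n_samples=2000, n_features=40, n_classes=10,
                                      n_labels=3, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

autotagger = MultiOutputClassifier(GradientBoostingClassifier(random_state=0))
autotagger.fit(X_tr, Y_tr)
Y_hat = autotagger.predict(X_te)                     # binary "autotags" per track
print("per-tag accuracy:", (Y_hat == Y_te).mean(axis=0).round(2))
```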
Effects of heavy-resistance triceps surae muscle training on strength and muscularity of men and women.
The purpose of this study was to determine selected functional and structural effects of heavy-resistance training on the triceps surae muscles of men and women. We pretested 28 men and 28 women for triceps surae muscle isotonic strength and muscularity after five practice sessions that familiarized them with the study equipment. Triceps surae muscle isotonic strength was determined using a 1-repetition maximum seated heel raise. Muscularity involved the measurement of relaxed lower leg circumference and net circumference and ultrasonically determined triceps surae muscle thickness. Twenty-eight subjects (14 men, 14 women) were selected randomly after pretesting to participate in 24 sessions of standardized weight training primarily involving the triceps surae muscles, and the remaining subjects (14 men, 14 women) served as nontraining controls. After eight weeks of training, triceps surae muscle isotonic strength had increased significantly (p less than .001) for both men and women in the Treatment Group when compared with the Control Group. No other dependent variables changed significantly. We concluded that eight weeks of heavy-resistance training involving the triceps surae muscles elicits similar significant increases in isotonic muscle strength in both men and women without concurrent increases in muscularity.
Patterns and utility of routine surveillance in high grade endometrial cancer.
OBJECTIVE To evaluate surveillance methods and their utility in detecting recurrence of disease in a high grade endometrial cancer population. METHODS We performed a multi-institutional retrospective chart review of women diagnosed with high grade endometrial cancer between the years 2000 and 2011. Surveillance data was abstracted and analyzed. The surveillance method leading to detection of recurrence was identified and compared by stage of disease and site of recurrence. RESULTS Two hundred and fifty-four patients met the criteria for inclusion. Vaginal cytology was performed in the majority of early stage patients, but was utilized less in advanced stage patients. CA-125 and CT imaging were used more frequently in advanced stage patients compared to early stage. Thirty-six percent of patients experienced a recurrence and the majority of initial recurrences (76%) had a distant component. The modalities that detected cancer recurrences were: symptoms (56%), physical exam (18%), surveillance CT (15%), CA-125 (10%), and vaginal cytology (1%). All local recurrences were detected by symptoms or physical exam findings. While the majority of loco-regional and distant recurrences (68%) were detected by symptoms or physical exam, 28% were detected by surveillance CT scan or CA-125. One loco-regional recurrence was identified by vaginal cytology, but no recurrences with a distant component were detected by this modality. CONCLUSIONS Symptoms and physical examination identify the majority of high grade endometrial cancer recurrences, while vaginal cytology is the least likely surveillance modality to identify a recurrence. The role of CT and CA-125 surveillance outside of a clinical trial needs to be further reviewed.
Self-Paced Cross-Modal Subspace Matching
Cross-modal matching methods match data from different modalities according to their similarities. Most existing methods utilize label information to reduce the semantic gap between different modalities. However, it is usually time-consuming to manually label large-scale data. This paper proposes a Self-Paced Cross-Modal Subspace Matching (SCSM) method for unsupervised multimodal data. We assume that multimodal data are pair-wise and from several semantic groups, which form hard pair-wise constraints and soft semantic group constraints, respectively. Then, we formulate the unsupervised cross-modal matching problem as a non-convex joint feature learning and data grouping problem. Self-paced learning, which learns samples from 'easy' to 'complex', is further introduced to refine the grouping result. Moreover, a multimodal graph is constructed to preserve the relationship of both inter- and intra-modality similarity. An alternating minimization method is employed to minimize the non-convex optimization problem, followed by a discussion of its convergence analysis and computational complexity. Experimental results on four multimodal databases show that SCSM outperforms state-of-the-art cross-modal subspace learning methods.
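The self-paced ingredient can be illustrated in isolation. The sketch below is a hypothetical least-squares example, not the SCSM objective: it alternates between fitting the model on samples whose current loss is below a threshold (the 'easy' samples) and relaxing that threshold.

```python
# Illustrative self-paced learning loop (easy-to-complex sample selection),
# shown for plain least-squares rather than the full SCSM formulation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.1, size=n)
y[:20] += rng.normal(scale=5.0, size=20)        # a few "hard"/noisy samples

lam = 0.5                                        # easiness threshold on the squared loss
w = np.zeros(d)
for _ in range(6):
    losses = (X @ w - y) ** 2
    v = losses < lam                             # hard 0/1 self-paced weights
    if v.sum() >= d:                             # refit on the currently selected easy samples
        w, *_ = np.linalg.lstsq(X[v], y[v], rcond=None)
    lam *= 1.5                                   # gradually admit harder samples

print("weight error:", np.linalg.norm(w - w_true))
```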
CREDIT SUPPLY, FLIGHT TO QUALITY AND EVERGREENING: AN ANALYSIS OF BANK-FIRM RELATIONSHIPS AFTER LEHMAN
This paper analyzes the effects of the financial crisis on credit supply by using highly detailed data on bank-firm relationships in Italy after Lehman’s collapse. We control for firms’ unobservable characteristics, such as credit demand and borrowers’ risk, by exploiting multiple lending. We find evidence of a contraction of credit supply, associated with low bank capitalization and scarce liquidity. The ability of borrowers to compensate through substitution across banks appears to have been limited. We also document that larger less-capitalized banks reallocated loans away from riskier firms, contributing to credit procyclicality. Such ‘flight to quality’ has not occurred for smaller less-capitalized banks. We argue that this may have reflected, among other things, evergreening practices. We provide corroborating evidence based on data on borrowers' productivity and interest rates at the bank-firm level. JEL Classification: E44, E51, G21, G34, L16.
Beyond war and PTSD: The crucial role of transition stress in the lives of military veterans.
Although only a relatively small minority of military veterans develop Posttraumatic Stress Disorder (PTSD), mental health theory and research with military veterans have focused primarily on PTSD and its treatment. By contrast, many, and by some accounts most, veterans experience high levels of stress during the transition to civilian life; however, transition stress has received scant attention. In this paper we attempt to address this deficit by reviewing the wider range of challenges, rewards, successes, and failures that transitioning veterans might experience, as well as the factors that might moderate these experiences. To illuminate this argument, we briefly consider what it means to become a soldier (i.e., what is required to transition into military service) and, more crucially, what kind of stressors veterans might experience when they attempt to shed that identity (i.e., what is required to transition out of military service). We end by suggesting how an expanded research program on veteran transition stress might move forward.
MOSFET modeling for RF IC design
High-frequency (HF) modeling of MOSFETs for radio-frequency (RF) integrated circuit (IC) design is discussed. Modeling of the intrinsic device and the extrinsic components is discussed by accounting for important physical effects at both dc and HF. The concepts of equivalent circuits representing both intrinsic and extrinsic components in a MOSFET are analyzed to obtain a physics-based RF model. The procedures of HF model parameter extraction are also developed. A subcircuit RF model based on the discussed approaches can be developed with good model accuracy. Further, noise modeling is discussed by analyzing the theoretical and experimental results in HF noise modeling. Analytical calculation of the noise sources is discussed to understand the noise characteristics, including induced gate noise. The distortion behavior of MOSFETs and its modeling are also discussed. The fact that a MOSFET has a much higher "low-frequency limit" is useful for designers and modelers to validate the distortion of a MOSFET model for RF applications. An RF model can well predict the distortion behavior of MOSFETs if it accurately describes both dc and ac small-signal characteristics with proper parameter extraction.
Working with evaluation stakeholders: A rationale, step-wise approach and toolkit.
In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of evaluations. This article is meant to help readers understand and apply stakeholder identification and analysis techniques in the design of credible evaluations that enhance primary intended use by primary intended users. While presented using a utilization-focused-evaluation (UFE) lens, the techniques are not UFE-dependent. The article presents a range of the most relevant techniques to identify and analyze evaluation stakeholders. The techniques are arranged according to their ability to inform the process of developing and implementing an evaluation design and of making use of the evaluation's findings.
On the energy (in)efficiency of Hadoop clusters
Distributed processing frameworks, such as Yahoo!'s Hadoop and Google's MapReduce, have been successful at harnessing expansive datacenter resources for large-scale data analysis. However, their effect on datacenter energy efficiency has not been scrutinized. Moreover, the filesystem component of these frameworks effectively precludes scale-down of clusters deploying these frameworks (i.e. operating at reduced capacity). This paper presents our early work on modifying Hadoop to allow scale-down of operational clusters. We find that running Hadoop clusters in fractional configurations can save between 9% and 50% of energy consumption, and that there is a tradeoff between performance and energy consumption. We also outline further research into the energy efficiency of these frameworks.
Image Super-Resolution Via Sparse Representation
This paper presents a new approach to single-image super-resolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large number of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.
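The per-patch reconstruction step can be sketched as follows; the coupled dictionaries are random placeholders here (in the paper they are jointly trained), and scikit-learn's Lasso is used as a generic l1 solver.

```python
# Illustrative sparse-coding super-resolution for a single patch.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_atoms = 128
D_l = rng.normal(size=(25, n_atoms))    # low-res dictionary (5x5 patches), random stand-in
D_h = rng.normal(size=(100, n_atoms))   # coupled high-res dictionary (10x10 patches)
D_l /= np.linalg.norm(D_l, axis=0)      # unit-norm atoms
D_h /= np.linalg.norm(D_h, axis=0)

y = rng.normal(size=25)                 # a (feature-extracted) low-res patch

# alpha = argmin ||y - D_l a||^2 + lam * ||a||_1
coder = Lasso(alpha=0.05, max_iter=5000)
coder.fit(D_l, y)
alpha = coder.coef_

x_high = D_h @ alpha                    # high-res patch from the shared sparse code
print("non-zeros in code:", np.count_nonzero(alpha), "| HR patch shape:", x_high.shape)
```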
Time-series prediction using a local linear wavelet neural network
A local linear wavelet neural network (LLWNN) is presented in this paper. The difference of the network from a conventional wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of the conventional WNN are replaced by a local linear model. A hybrid training algorithm of particle swarm optimization (PSO) with diversity learning and gradient descent method is introduced for training the LLWNN. Simulation results for the prediction of time-series show the feasibility and effectiveness of the proposed method.
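A rough numpy sketch of the LLWNN forward pass follows, assuming a Mexican-hat mother wavelet and a product form for the multidimensional wavelet; the hybrid PSO/gradient-descent training is omitted.

```python
# Illustrative local linear wavelet neural network (LLWNN) forward pass.
import numpy as np

def mexican_hat(z):
    # Mexican-hat (Ricker) mother wavelet, applied elementwise.
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def llwnn_forward(x, a, b, W):
    """x: (d,) input; a, b: (h, d) dilations/translations; W: (h, d+1) local linear weights."""
    z = (x - b) / a                               # (h, d) scaled inputs per hidden unit
    psi = np.prod(mexican_hat(z), axis=1)         # (h,) multidimensional wavelet activations
    local_linear = W[:, 0] + W[:, 1:] @ x         # (h,) local linear model replaces a scalar weight
    return np.sum(local_linear * psi)

rng = np.random.default_rng(0)
d, h = 3, 8                                       # input dimension, number of wavelet units
a = rng.uniform(0.5, 2.0, size=(h, d))
b = rng.normal(size=(h, d))
W = rng.normal(scale=0.1, size=(h, d + 1))
print("LLWNN output:", llwnn_forward(rng.normal(size=d), a, b, W))
```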
Face-TLD: Tracking-Learning-Detection applied to faces
A novel system for long-term tracking of a human face in unconstrained videos is built on the Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator, and is designed for real-time face tracking resistant to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects and tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.
Computational Photography: Epsilon to Coded Photography
Computational photography combines plentiful computing, digital sensors, modern optics, actuators, and smart lights to escape the limitations of traditional cameras, enables novel imaging applications, and simplifies many computer vision tasks. However, a majority of current computational photography methods involve taking multiple sequential photos by changing scene parameters and fusing the photos to create a richer representation. Epsilon photography is concerned with synthesizing omnipictures and proceeds by a multiple-capture, single-image paradigm (MCSI). The goal of coded computational photography is to modify the optics, illumination or sensors at the time of capture so that the scene properties are encoded in a single (or a few) photographs. We describe several applications of coding exposure, aperture, illumination and sensing, and describe emerging techniques to recover scene parameters from coded photographs.
Automatic chord recognition for music classification and retrieval
As one of the most important mid-level features of music, chords contain rich information about harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to that of existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.
An effective and fast iris recognition system based on a combined multiscale feature extraction technique
The randomness of the iris pattern makes it one of the most reliable biometric traits. On the other hand, the complex iris image structure and the various sources of intra-class variations result in the difficulty of iris representation. Although a number of iris recognition methods have been proposed, it has been found that several accurate iris recognition algorithms use multiscale techniques, which provide a well-suited representation for iris recognition. In this paper, and after a thorough analysis and summarization, a multiscale edge detection approach has been employed as a pre-processing step to efficiently localize the iris, followed by a new feature extraction technique which is based on a combination of some multiscale feature extraction techniques. This combination uses special Gabor filters and wavelet maxima components. Finally, a promising feature vector representation using moment invariants is proposed. This has resulted in a compact and efficient feature vector. In addition, a fast matching scheme based on the exclusive OR operation to compute bit similarity is proposed, where the experiments were carried out using the CASIA database. The experimental results have shown that the proposed system yields attractive performance and could be used for personal identification in an efficient and effective manner, comparable to the best iris recognition algorithms found in the current literature.
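The XOR-based matching step can be sketched directly; the codes and masks below are random placeholders rather than outputs of the Gabor/wavelet-maxima feature extraction described above.

```python
# Illustrative XOR matching of binary iris codes with occlusion masks.
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b                  # bits usable in both templates
    if valid.sum() == 0:
        return 1.0                           # nothing to compare: treat as maximal distance
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

rng = np.random.default_rng(0)
n_bits = 2048
code1 = rng.integers(0, 2, n_bits, dtype=np.uint8)
code2 = code1.copy()
flip = rng.random(n_bits) < 0.05             # same eye, ~5% noisy bits
code2[flip] ^= 1
mask = (rng.random(n_bits) > 0.1).astype(np.uint8)   # ~10% of bits occluded

print("genuine distance:", hamming_distance(code1, code2, mask, mask))
print("impostor distance:",
      hamming_distance(code1, rng.integers(0, 2, n_bits, dtype=np.uint8), mask, mask))
```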
Fuzzing the ActionScript virtual machine
Fuzz testing is an automated testing technique where random data is used as input to software systems in order to reveal security bugs/vulnerabilities. When testing ActionScript virtual machines (AVMs), fuzzed inputs must be binaries embedded with compiled bytecodes. The current fuzzing methods for JavaScript-like virtual machines are of limited use against compiler-involved AVMs: the complete source code must be both grammatically and semantically valid to pass through the compiler and reach execution. In this paper, we present ScriptGene, an algorithmic approach to overcome the additional complexity of generating valid ActionScript programs. First, nearly-valid code snippets are randomly generated, with some controls on instruction flow. Second, we present a novel mutation method where the former code snippets are lexically analyzed and mutated with runtime information of the AVM, which helps us build valid context for otherwise undefined behaviours, pass compiler checks, and achieve high code coverage. Accordingly, we have implemented and evaluated ScriptGene on three different versions of Adobe AVMs. Results demonstrate that ScriptGene not only covers almost all the blocks of the official test suite (Tamarin), but also achieves nearly twice the code coverage. The discovery of six bugs missed by the official test suite demonstrates the effectiveness, validity and novelty of ScriptGene.
How to Choose the Best Pivot Language for Automatic Translation of Low-Resource Languages
Recent research on multilingual statistical machine translation focuses on the usage of pivot languages in order to overcome language resource limitations for certain language pairs. Due to the richness of available language resources, English is, in general, the pivot language of choice. However, factors like language relatedness can also affect the choice of the pivot language for a given language pair, especially for Asian languages, where language resources are currently quite limited. In this article, we provide new insights into what factors make a pivot language effective and investigate the impact of these factors on the overall pivot translation performance for translation between 22 Indo-European and Asian languages. Experimental results using state-of-the-art statistical machine translation techniques revealed that the translation quality of 54.8% of the language pairs improved when a non-English pivot language was chosen. Moreover, 81.0% of system performance variations can be explained by a combination of factors such as language family, vocabulary, sentence length, language perplexity, translation model entropy, reordering, monotonicity, and engine performance.
Sentiment Embeddings with Applications to Sentiment Analysis
We propose learning sentiment-specific word embeddings, dubbed sentiment embeddings, in this paper. Existing word embedding learning algorithms typically only use the contexts of words but ignore the sentiment of texts. This is problematic for sentiment analysis because words with similar contexts but opposite sentiment polarity, such as good and bad, are mapped to neighboring word vectors. We address this issue by encoding sentiment information of texts (e.g., sentences and words) together with contexts of words in sentiment embeddings. By combining context- and sentiment-level evidence, the nearest neighbors in the sentiment embedding space are semantically similar and favor words with the same sentiment polarity. In order to learn sentiment embeddings effectively, we develop a number of neural networks with tailored loss functions, and collect massive texts automatically with sentiment signals like emoticons as the training data. Sentiment embeddings can be naturally used as word features for a variety of sentiment analysis tasks without feature engineering. We apply sentiment embeddings to word-level sentiment analysis, sentence-level sentiment classification, and building sentiment lexicons. Experimental results show that sentiment embeddings consistently outperform context-based embeddings on several benchmark datasets of these tasks. This work provides insights on the design of neural networks for learning task-specific word embeddings in other natural language processing tasks.
A novel kernelized fuzzy C-means algorithm with application in medical image segmentation
Image segmentation plays a crucial role in many medical imaging applications. In this paper, we present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data. The algorithm is realized by modifying the objective function in the conventional fuzzy C-means (FCM) algorithm using a kernel-induced distance metric and a spatial penalty on the membership functions. First, the original Euclidean distance in the FCM is replaced by a kernel-induced distance, and the corresponding algorithm is derived and called the kernelized fuzzy C-means (KFCM) algorithm, which is shown to be more robust than FCM. Then a spatial penalty is added to the objective function in KFCM to compensate for the intensity inhomogeneities of MR images and to allow the labeling of a pixel to be influenced by its neighbors in the image. The penalty term acts as a regularizer and has a coefficient ranging from zero to one. Experimental results on both synthetic and real MR images show that the proposed algorithms have better performance when noise and other artifacts are present than the standard algorithms.
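A rough sketch of the kernelized FCM iteration with a Gaussian kernel follows (the spatial penalty term is omitted for brevity); since K(x, x) = 1 for this kernel, the kernel-induced distance reduces to a multiple of 1 - K(x, v).

```python
# Illustrative kernelized fuzzy C-means (Gaussian kernel, no spatial penalty term).
import numpy as np

def kfcm(X, c=2, m=2.0, sigma=1.0, n_iter=50):
    n = X.shape[0]
    V = X[np.linspace(0, n - 1, c, dtype=int)].copy()   # simple deterministic initialization
    for _ in range(n_iter):
        # Gaussian kernel between samples and prototypes: K[i, j] = K(x_i, v_j)
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma ** 2)
        dist = np.maximum(1.0 - K, 1e-12)                # kernel-induced distance (up to a factor of 2)
        # Membership update: u_ij proportional to dist_ij^(-1/(m-1)), rows sum to 1
        U = dist ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # Prototype update: v_j = sum_i u_ij^m K_ij x_i / sum_i u_ij^m K_ij
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
U, V = kfcm(X)
print("cluster prototypes:\n", V.round(2))
```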
The sword went out to sea : (synthesis of a dream), by Delia Alton
The modernist poet and novelist H.D. (Hilda Doolittle) was profoundly interested in the occult and during WWII conducted spiritualist seances in her home. Those experiences form much of the basis for her novel "The Sword Went Out to Sea", which features an experimental structure and a blending of autobiography, dream, and vision. Even though it was far too complex and eccentric for the popular presses of her day, H.D. believed "The Sword Went Out to Sea" was "the crown" of her years of serious study of esoteric doctrine as well as of her daily writing practice. It has never before been published.
Arc magmatism and hydrocarbon generation in the northern Taranaki Basin, New Zealand
A Middle–Late Miocene andesitic arc named the Mohakatino Volcanic Centre is buried beneath younger sediments in the northern part of the Taranaki Basin, New Zealand’s primary oil production province. Volcanoes of the centre cover an area of about 3200 km². An estimated 7000 ± 3000 km³ of andesite were erupted from the centre and between 1000 and 2500 km³ of magma were intruded into the basement beneath the volcanic cones. The key element of the petroleum system altered by magmatism is the maturity of source rocks and the timing of expulsion, although volcanism also contributes to the formation of potential reservoirs and traps. In the northern Taranaki Basin, two periods of hydrocarbon expulsion occurred: following magmatism (14 Ma to about 8 Ma), and following renewed burial (since about 4 Ma). Thermal models indicate that Late Cretaceous terrestrial source rocks close to large magmatic intrusions became fully mature during magmatism. Overlying marine source rocks are modelled to still be generating and expelling hydrocarbons to the present day. Hence hydrocarbon expulsion and the charge history of this basin are partly governed by Miocene magmatism. Results are also relevant to many petroleum basins that contain similar andesitic arc volcanic rocks.
How developers search for code: a case study
With the advent of large code repositories and sophisticated search capabilities, code search is increasingly becoming a key software development activity. In this work we shed some light on how developers search for code through a case study performed at Google, using a combination of survey and log-analysis methodologies. Our study provides insights into what developers are doing and trying to learn when performing a search, search scope, query properties, and what a search session under different contexts usually entails. Our results indicate that programmers search for code very frequently, conducting an average of five search sessions with 12 total queries each workday. The search queries are often targeted at a particular code location and programmers are typically looking for code with which they are somewhat familiar. Further, programmers are generally seeking answers to questions about how to use an API, what code does, why something is failing, or where code is located.
Safe Policy Learning from Observations
In this paper, we consider the problem of learning a policy by observing numerous non-expert agents. Our goal is to extract a policy that, with high-confidence, acts better than the agents’ average performance. Such a setting is important for real-world problems where expert data is scarce but non-expert data can easily be obtained, e.g. by crowdsourcing. Our approach is to pose this problem as safe policy improvement in reinforcement learning. First, we evaluate an average behavior policy and approximate its value function. Then, we develop a stochastic policy improvement algorithm that safely improves the average behavior. The primary advantages of our approach, termed Rerouted Behavior Improvement (RBI), over other safe learning methods are its stability in the presence of value estimation errors and the elimination of a policy search process. We demonstrate these advantages in the Taxi grid-world domain and in four games from the Atari learning environment.
Syntax Aware LSTM model for Semantic Role Labeling
In the Semantic Role Labeling (SRL) task, the tree-structured dependency relation is rich in syntactic information, but it is not well handled by existing models. In this paper, we propose the Syntax Aware Long Short-Term Memory (SA-LSTM). The structure of SA-LSTM changes according to the dependency structure of each sentence, so that SA-LSTM can model the whole tree structure of the dependency relation in an architecture-engineering way. Experiments demonstrate that on Chinese Proposition Bank (CPB) 1.0, SA-LSTM improves F1 by 2.06% over an ordinary bi-LSTM with feature-engineered dependency relation information, and achieves a state-of-the-art F1 of 79.92%. On the English CoNLL 2005 dataset, SA-LSTM brings an improvement (2.1%) over the bi-LSTM model and also brings a slight improvement (0.3%) when added to the state-of-the-art model.
A model for team-based access control (TMAC 2004)
Role-based access control (RBAC) has proved effective for defining access control. However, in an environment where collaborative work is needed, additional features should be added on top of RBAC to accommodate the new requirements. In this paper we describe a team access control extension model called TMAC04 (Team-Based Access Control 2004), which is built on the well-known RBAC. The TMAC04 model efficiently represents teamwork in the real world. It allows certain users to join a team based on their existing roles in an organization within limited contexts, and grants new permissions to perform the required work.
Averaged Least-Mean-Squares: Bias-Variance Trade-offs and Optimal Sampling Distributions
We consider the least-squares regression problem and provide a detailed asymptotic analysis of the performance of averaged constant-step-size stochastic gradient descent. In the strongly-convex case, we provide an asymptotic expansion up to explicit exponentially decaying terms. Our analysis leads to new insights into stochastic approximation algorithms: (a) it gives a tighter bound on the allowed step-size; (b) the generalization error may be divided into a variance term which is decaying as O(1/n), independently of the step-size γ, and a bias term that decays as O(1/γn); (c) when allowing non-uniform sampling of examples over a dataset, the choice of a good sampling density depends on the trade-off between bias and variance: when the variance term dominates, optimal sampling densities do not lead to much gain, while when the bias term dominates, we can choose larger step-sizes that lead to significant improvements.
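A minimal sketch of the algorithm analyzed above, on hypothetical synthetic data: constant-step-size LMS iterations with a Polyak-Ruppert running average returned as the predictor.

```python
# Illustrative averaged constant-step-size SGD for least-squares regression.
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 20000, 10, 0.01
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta = np.zeros(d)
theta_bar = np.zeros(d)
for t in range(n):                                   # one pass, one sample per step
    x_t, y_t = X[t], y[t]
    theta -= gamma * (x_t @ theta - y_t) * x_t       # constant step-size LMS update
    theta_bar += (theta - theta_bar) / (t + 1)       # running (Polyak-Ruppert) average

print("last-iterate error:", np.linalg.norm(theta - theta_star))
print("averaged-iterate error:", np.linalg.norm(theta_bar - theta_star))
```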
Local Free-Style Perforator Flaps in Head and Neck Reconstruction: An Update and a Useful Classification.
BACKGROUND Any standard skin flap of the body including a detectable or identified perforator at its axis can be safely designed and harvested in a free-style fashion. METHODS Fifty-six local free-style perforator flaps in the head and neck region, 33 primary and 23 recycle flaps, were performed in 53 patients. The authors introduced the term "recycle" to describe a perforator flap harvested within the borders of a previously transferred flap. A Doppler device was routinely used preoperatively for locating perforators in the area adjacent to a given defect. The final flap design and degree of mobilization were decided intraoperatively, depending on the location of the most suitable perforator and the ability to achieve primary closure of the donor site. Based on clinical experience, the authors suggest a useful classification of local free-style perforator flaps. RESULTS All primary and 20 of 23 recycle free-style perforator flaps survived completely, providing tension-free coverage and a pleasing final contour for patients. In the remaining three recycle cases, the skeletonization of the pedicle resulted in pedicle damage, because of surrounding postradiotherapy scarring and flap failure. All donor sites except one were closed primarily, and all of them healed without any complications. CONCLUSIONS The free-style concept has significantly increased the potential and versatility of the standard local and recycled head and neck flap alternatives for moderate to large defects, providing a more robust, custom-made, tissue-sparing, and cosmetically superior outcome in a one-stage procedure, with minimal donor-site morbidity. CLINICAL QUESTION/LEVEL OF EVIDENCE Therapeutic, V.
The Ontology of Christopher Langan's Psychical Physics: The Neuropsychology of the Atemporal Recursive Processes – an empirical framework
Christopher Langan's CTMU offers a new look into an old place: the universe. This theory is both attractive and elusive, a mixture of cosmology and ontology which promises a reunification of what science has so ignobly divided. While the cosmological and spiritual pieces of the puzzle are beyond the scope of this author's competence to define, I propose to demonstrate through specific psychological and neuroscientific example, some of the more tantalizing and etherial ideas mentioned in the CTMU. These ideas are tangible, and are subject to direct daily observation, and hence, empirical demonstration. It should be noted that I am an atheist, and need carry no baggage for Mr. Langan, who has in the CTMU offered up a new idea, one which can be proven or disproven as science, an idea worthy of consideration quite apart and independent of one's personal beliefs. Indeed, it is difficult to look at things in a new way, our very mental structure all but forbids it, the lateral prefrontal cortex providing a definitional template for experience, pairing down the information available to us, measuring the situation against our preconceptions, structures so very necessary for forming our snap judgments, and also, so inhibitive in the understanding of a new idea which does not conform to them (Gazzaniga, 2009 pp. 578-579). I propose to demonstrate the atemporal cycles of Christopher Langan's telic recursion, as it defines our internal universe and experience. This proof will be multidisciplinary in drawing together threads from a convergence of depth psychology, experimental psychology, and cognitive neuroscience. This ontological proof will be concluded with a series of experimental constructs, which although general, will outline a methodological approach toward demonstration of the psycho-ontological aspects of the CTMU, leaving but a clear experimentally defined articulation of the proposed theoretic connective tissue underlying the unity of the psychical and the physical, to close the empirical gap. Telic recursion: The coexistence and connectivity of Past, Present and Future In order to elucidate the application of the theories explained in the CTMU by way of specific example, I will ease my task by focusing my observations clearly in the theatre of the psycho-ontological. To accomplish both my expository, and my experimental, and therefore, linear aim, I must first decide upon a specific piece of theory in the CTMU which will allow me access along the self-referential and circular route of tautological self-containment. This pathway will become clear as we find foothold and firm purchase upon what appear at first, to be some of the more unlikely propositions put forward in the CTMU. I will encourage the reader to take in the entire of the CTMU for themselves, but, once cautioned as to the limits of extracting any fact from its context, I will now do exactly that, and present a few prime pieces of theory to which I will refer again and again, many of which appear as paradox, but are in fact, an everyday dynamic which underlies reality, a dynamic which if working properly, goes unnoticed: 1. Inasmuch as science is observational or perceptual in nature, the goal of providing a scientific model and mechanism for the evolution of complex systems ultimately requires a supporting theory of reality of which perception itself is the model (or theory-to-universe mapping). . . . . . 
the universe refines itself from unbound telesis or UBT, a primordial realm of infocognitive potential free of informational constraint. Under the guidance of a limiting (intrinsic) form of anthropic principle called the Telic Principle, SCSPL evolves by telic recursion, jointly configuring syntax and state while maximizing a generalized self-selection parameter and adjusting on the fly to freely-changing internal conditions. (Langan, 2002, p.1) 2. The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively selfconfigures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal. (p.6-7) 3. In short, two-valued logic is something without which reality could not exist. If it were eliminated, then true and false, real and unreal, and existence and nonexistence could not be distinguished, and the merest act of perception or cognition would be utterly impossible. (p. 13) 4. According to the nature of sentential logic, truth is tautologically based on the integrity of cognitive and perceptual reality. Cognition and perception comprise the primitive (selfdefinitive) basis of logic, and logic comprises the rules of structure and inference under which perception and cognition are stable and coherent. (p. 13) Therefore, the proposed tautology-preserving principles of reality theory should put mind back into the mix in an explicit, theoretically tractable way, effectively endowing logic with “selfprocessing capability”. This, after all, is exactly what it possesses in its natural manifestation . . . (p.14) 5. To put it another way: if the “noumenal” (perceptually independent) part of reality were truly unrelated to the phenomenal (cognition-isomorphic) part, then these two “halves” of reality would neither be coincident nor share a joint medium relating them. In that case, they would simply fall apart, and any integrated “reality” supposedly containing both of them would fail for lack of an integrated model. (p. 23) 6. In CTMU cosmogony, “nothingness” is informationally defined as zero constraint or pure freedom (unbound telesis or UBT), and the apparent construction of the universe is explained as a self-restriction of this potential. (p. 27) 7. Conspansion describes the “alternation” of these units between the dual (generalizedcognitive and informational) aspects of reality, and thus between syntax and state. This alternation, which permits localized mutual refinements of cognitive syntax and informational state, is essential to an evolutionary process called telic recursion. . . 
the conspansive nesting of atemporal events puts all of time in “simultaneous self-contact” without compromising ordinality (p. 30) 8. By putting temporally remote events in extended descriptive contact with each other, the Extended Superposition Principle enables coherent cross-temporal telic feedback and thus plays a necessary role in cosmic self-configuration. Among the higher-order determinant relationships in which events and objects can thus be implicated are utile state-syntax relationships called telons, telic attractors capable of guiding cosmic and biological evolution. (p. 31) 9. The process of reducing distinctions to the homogeneous syntactic media that support them is called syndiffeonic regression. This process involves unisection, whereby the rules of structure and dynamics that respectively govern a set of distinct objects are reduced to a “syntactic join” in an infocognitive lattice of syntactic media. 10. It follows that the active medium of cross-definition possesses logical primacy over laws and arguments alike, and is thus pre-informational and pre-nomological in nature...i.e., telic. Telesis, which can be characterized as “infocognitive potential”, is the primordial active medium from which laws and their arguments and parameters emerge by mutual refinement or telic recursion. (p. 35) 11. The Telic principle simply asserts that this is the case; the most fundamental imperative of reality is such as to force on it a supertautological, conspansive structure. Thus, the universe “selects itself” from unbound telesis or UBT, a realm of zero information and unlimited ontological potential, by means of telic recursion, whereby infocognitive syntax and its informational content are cross-refined through telic (syntax-state) feedback over the entire range of potential syntax-state relationships, up to and including all of spacetime and reality in general. . . the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely-separated events, lets the universe “retrodict” itself through meaningful cross-temporal feedback (p. 38) 12. Where the term telesis denotes this common component of information and syntax, SCSPL grammar refines infocognition by binding or constraining telesis as infocognition. (p. 43) 13. While an ordinary grammar recursively processes information or binds informational potential to an invariant syntax that distributes over its products, Γ grammar binds telesis, infocognitive potential ranging over possible relationships of syntax and state, by crossrefining syntax and its informational content through telic recursion. Telic recursion is the process responsible for configuring the syntax-content relationships on which standard informational recursion is based; its existence is an ontological requirement of reality. (p. 44) The metapsychology and neurophysiology of telic recursive function: So dear reader, now that you have seen the ideas, can you grasp them? It seems as if they are an impossibility, but to look more closely, one can find more than a shadow of reality in the
Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes
Solving statistical learning problems often involves nonconvex optimization. Despite the empirical success of nonconvex statistical optimization methods, their global dynamics, especially convergence to the desirable local minima, remain less well understood in theory. In this paper, we propose a new analytic paradigm based on diffusion processes to characterize the global dynamics of nonconvex statistical optimization. As a concrete example, we study stochastic gradient descent (SGD) for the tensor decomposition formulation of independent component analysis. In particular, we cast different phases of SGD into diffusion processes, i.e., solutions to stochastic differential equations. Initialized from an unstable equilibrium, the global dynamics of SGD transit over three consecutive phases: (i) an unstable Ornstein-Uhlenbeck process slowly departing from the initialization, (ii) the solution to an ordinary differential equation, which quickly evolves towards the desirable local minimum, and (iii) a stable Ornstein-Uhlenbeck process oscillating around the desirable local minimum. Our proof techniques are based upon Stroock and Varadhan’s weak convergence of Markov chains to diffusion processes, which are of independent interest.
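The kind of iteration studied above can be illustrated with a rough sketch: projected SGD on the fourth-moment objective for recovering a single component from whitened observations. This is an illustration of the setting only, not the paper's exact tensor-decomposition algorithm or its diffusion analysis.

```python
# Illustrative projected SGD for one independent component (fourth-moment objective),
# assuming whitened observations x = A s with an orthogonal mixing matrix A.
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma = 5, 100000, 1e-3
A = np.linalg.qr(rng.normal(size=(d, d)))[0]              # orthogonal mixing (whitened model)
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, d))     # sub-Gaussian, unit-variance sources
X = S @ A.T

w = rng.normal(size=d)
w /= np.linalg.norm(w)                                    # random unit-norm initialization
for x in X:
    w -= gamma * (x @ w) ** 3 * x                         # SGD step on E[(w.x)^4] (minimized for sub-Gaussian sources)
    w /= np.linalg.norm(w)                                # project back to the unit sphere

# w should align with one column of A, i.e. recover one independent component.
print("max |correlation| with mixing columns:", np.max(np.abs(A.T @ w)))
```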
Multisystemic treatment of serious juvenile offenders: long-term prevention of criminality and violence.
This article examined the long-term effects of multisystemic therapy (MST) vs. individual therapy (IT) on the prevention of criminal behavior and violent offending among 176 juvenile offenders at high risk for committing additional serious crimes. Results from multiagent, multimethod assessment batteries conducted before and after treatment showed that MST was more effective than IT in improving key family correlates of antisocial behavior and in ameliorating adjustment problems in individual family members. Moreover, results from a 4-year follow-up of rearrest data showed that MST was more effective than IT in preventing future criminal behavior, including violent offending. The implications of such findings for the design of violence prevention programs are discussed.
Accumulation of heavy metals in dietary vegetables and cultivated soil horizon in organic farming system in relation to atmospheric deposition in a seasonally dry tropical region of India.
Increasing consciousness about future sustainable agriculture and hazard-free food production has led organic farming to become a globally emerging alternative farm practice. We investigated the accumulation of air-borne heavy metals in edible parts of vegetables and in the cultivated soil horizon in an organic farming system in a low-rainfall tropical region of India. The factorial design of the whole experiment consisted of six vegetable crops (tomato, egg plant, spinach, amaranthus, carrot and radish) × two treatments (organic farming in open field and organic farming in glasshouse (OFG)) × seven independent harvests of each crop. The results indicated that, except for Pb, atmospheric deposition of heavy metals increased consistently on the time scale. Concentrations of heavy metals in the cultivated soil horizon and in edible parts of open-field-grown vegetables increased over time and were significantly higher than those recorded in OFG plots. Increased contents of heavy metals in the open field altered soil porosity, bulk density, water holding capacity, microbial biomass carbon, substrate-induced respiration, alkaline phosphatase and fluorescein diacetate hydrolytic activities. Vegetable concentrations of heavy metals appeared in the order Zn > Pb > Cu > Ni > Cd and were maximum in leaves (spinach and amaranthus) followed by fruits (tomato and egg plant) and minimum in roots (carrot and radish). Multiple regression analysis indicated that the major contribution of most heavy metals to vegetable leaves was from the atmosphere. For roots, however, soil appeared to be equally important. The study suggests that if the present trend of atmospheric deposition continues, it will have a destabilizing effect on this sustainable agricultural practice and will increase the dietary intake of toxic metals.
Modeling Business Processes - A Petri Net-Oriented Approach
Distributed Recurrent Neural Forward Models with Synaptic Adaptation for Complex Behaviors of Walking Robots
Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptations that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological studies have revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of 1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, 2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and 3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps as well as climbing over high obstacles. Furthermore, we demonstrate that the newly developed recurrent network based approach to online forward models outperforms the adaptive neuron forward models, which have hitherto been the state of the art, at modeling a subset of similar walking behaviors in walking robots.
100 top-cited scientific papers in limb prosthetics
Research has tremendously contributed to developments in both practical and fundamental aspects of limb prosthetics. These advancements are reflected in scientific articles, particularly in the most cited papers. This article aimed to identify the 100 top-cited articles in the field of limb prosthetics and to investigate their main characteristics. Articles related to the field of limb prosthetics and published in the Web of Knowledge database of the Institute for Scientific Information (ISI) between 1980 and 2012 were considered. The 100 most cited articles in limb prosthetics were selected based on the citation index report. All types of articles except proceedings and letters were included in the study. The study design and level of evidence were determined using Sackett's initial rules of evidence. The level of evidence was categorized as a systematic review or meta-analysis, randomized controlled trial, cohort study, case-control study, case series, expert opinion, or design and development. The top-cited articles in prosthetics were published from 1980 to 2012 with a citation range of 11 to 90 times since publication. The mean citation rate was 24.43 (SD 16.7). Eighty-four percent of the articles were original publications and were most commonly prospective (76%) and case series studies (67%) that used human subjects (96%), providing level 4 evidence. Among the various fields, rehabilitation (47%), orthopedics (29%), and sport sciences (28%) were the most common fields of study. The study established that studies conducted in North America and written in English had the highest citations. Top-cited articles primarily dealt with lower limb prosthetics, specifically transtibial and transradial prosthetic limbs. The majority of the articles were experimental studies.
Managing Secrets with Consensus Networks: Fairness, Ransomware and Access Control
In this work we investigate the problem of using public consensus networks – exemplified by systems like Ethereum and Bitcoin – to perform cryptographic functionalities that involve the manipulation of secret data, such as cryptographic access control. We consider a hybrid paradigm in which a secure client-side functionality manages cryptographic secrets, while an online consensus network performs public computation. Using this approach, we explore both the constructive and potentially destructive implications of such systems. We first show that this combination allows for the construction of stateful interactive functionalities (including general computation) from a stateless client-side functionality, which can be implemented using inexpensive trusted hardware or even purely cryptographic functionalities such as Witness Encryption. We then describe a number of practical applications that can be achieved today. These include rate limited mandatory logging; strong encrypted backups from weak passwords; enforcing fairness in multi-party computation; and destructive applications such as autonomous ransomware, which allows for payments without an online party.
Hybrid Security RSA Algorithm in Application of Web Service
A new hybrid security algorithm is presented for the RSA cryptosystem, named Hybrid RSA. The system works on the concept of using two different keys, a private key and a public key, for the decryption and encryption processes. The values of the public key (P) and private key (Q) depend on the value of M, where M is the product of four prime numbers, which increases the difficulty of factorizing M. Moreover, the computation of P and Q involves the computation of some additional factors, which makes it more complex. The variable x or M is transferred during the encryption and decryption process, where x represents the multiplication of two prime numbers A and B. Thus, it provides a more secure path for the encryption and decryption process. The proposed system is compared with the RSA and enhanced RSA (ERSA) algorithms to measure the key generation, encryption and decryption times, and is shown to be more efficient than RSA and ERSA.
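The four-prime construction can be sketched with textbook RSA arithmetic; the primes below are tiny fixed values purely for illustration, and this is not the paper's full Hybrid RSA key-derivation procedure.

```python
# Illustrative four-prime RSA round trip (toy parameters; not the full Hybrid RSA scheme).
from math import gcd

p, q, r, s = 1009, 1013, 1019, 1021           # four (toy) primes
n = p * q * r * s                             # public modulus built from four primes
phi = (p - 1) * (q - 1) * (r - 1) * (s - 1)   # Euler's totient of the square-free modulus

e = 65537                                     # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                           # private exponent: modular inverse (Python 3.8+)

message = 123456789
assert message < n
ciphertext = pow(message, e, n)               # encryption: c = m^e mod n
recovered = pow(ciphertext, d, n)             # decryption: m = c^d mod n
print("round-trip ok:", recovered == message)
```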
User-Centered Robot Head Design: a Sensing Computing Interaction Platform for Robotics Research (SCIPRR)
We developed and evaluated a novel humanoid head, SCIPRR (Sensing, Computing, Interacting Platform for Robotics Research). SCIPRR is a head shell that was iteratively created with additive manufacturing. SCIPRR contains internal scaffolding that allows sensors, small form-factor computers, and a back-projection system to display an animated face on a front-facing screen. SCIPRR was developed using User Centered Design principles and evaluated using three different methods. First, we created multiple small-scale prototypes through additive manufacturing and performed polling and refinement of the overall head shape. Second, we performed usability evaluations of expert HRI mechanics as they swapped sensors and computers within the SCIPRR head. Finally, we ran and analyzed an experiment to evaluate how much novices would like a robot with our head design to perform different social and traditional robot tasks. We made both major and minor changes after each evaluation and iteration. Overall, expert users liked the SCIPRR head, and novices wanted a robot with the SCIPRR head to perform more tasks (including social tasks) than a more traditional robot.
Emergent Language in a Multi-Modal, Multi-Step Referential Game
Inspired by previous work on emergent language in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal language significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization.
Algebraic Cryptanalysis of GOST Encryption Algorithm
This paper examines approaches to the algebraic analysis of the GOST 28147-89 encryption algorithm (also known simply as GOST), which is the basis of most secure information systems in Russia. The general idea of algebraic analysis is to represent the initial encryption algorithm as a system of multivariate quadratic equations that define relations between a secret key and a ciphertext. The extended linearization method is evaluated as a method for solving the resulting nonlinear system of equations.
Mind the Traps! Design Guidelines for Rigorous BCI Experiments
Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich
Efficient Hair Rendering under Dynamic, Low-Frequency Environmental Light Using Spherical Harmonics
We present an algorithm for efficient rendering of animated hair under a dynamic, low-frequency lighting environment. We use spherical harmonics (SH) to represent the environmental light. The transmittances between a point on a hair strand and the light sources are also represented by SH functions. Then, a convolution of SH functions and the scattering function of a hair strand is precomputed. This allows us to efficiently compute the intensity at a point on the hair. However, the computation of the transmittance is very time-consuming. We address this problem by using a voxel-based approach: the transmittance is computed by using a voxelized hair model. We further accelerate the computation by sampling the voxels. By using our method, we can render a hair model consisting of tens of thousands of hair strands at interactive frame rates. Keywords: hair rendering, spherical harmonics, environmental light, voxelization
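The abstract does not give formulas, but the standard SH-lighting identity it builds on, that the integral of two functions expressed in an orthonormal spherical-harmonics basis reduces to a dot product of their coefficient vectors, can be sketched in a few lines of Python. The coefficient values below are arbitrary placeholders, not data from the paper.

```python
import numpy as np

# SH coefficients up to band 2 (9 coefficients) for two functions on the sphere:
# the environment light and a precomputed transfer term (transmittance/scattering).
# Values are made up for illustration.
light_coeffs = np.array([1.2, 0.3, -0.1, 0.05, 0.0, 0.02, -0.04, 0.01, 0.0])
transfer_coeffs = np.array([0.8, 0.1, 0.0, -0.02, 0.01, 0.0, 0.03, 0.0, -0.01])

# Because the SH basis is orthonormal, the integral over the sphere of
# light(w) * transfer(w) collapses to a dot product of the coefficient vectors.
radiance_at_point = float(np.dot(light_coeffs, transfer_coeffs))
print(radiance_at_point)
```

This coefficient dot product is what makes per-point shading cheap once the transfer coefficients have been precomputed; the expensive part, as the abstract notes, is computing the transmittances that feed into those coefficients.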
Helium-hyperoxia: a novel intervention to improve the benefits of pulmonary rehabilitation for patients with COPD.
BACKGROUND Helium-hyperoxia (HH) reduces dyspnea and increases exercise tolerance in patients with COPD. We investigated whether breathing HH would allow patients to perform a greater intensity of exercise and improve the benefits of a pulmonary rehabilitation program. METHODS Thirty-eight nonhypoxemic patients with COPD (FEV1 = 47 ± 17% predicted) were randomized to rehabilitation breathing HH (60:40 He:O2; n = 19) or air (n = 19). Patients cycled for 30 min, 3 days/week for 6 weeks breathing the assigned gas. Exercise intensity was prescribed from baseline, gas-specific, incremental exercise tests and was advanced as tolerated. The primary outcome was exercise tolerance assessed as a change in constant-load exercise time (CLT) following rehabilitation. Secondary outcomes were changes in exertional symptoms, health-related quality of life (as assessed by the Short Form 36 and St George respiratory questionnaires), and peak oxygen consumption during an incremental exercise test. RESULTS The HH group had a greater change in CLT following rehabilitation compared to the air group (9.5 ± 9.1 vs 4.3 ± 6.3 min, p < 0.05). At an exercise isotime, dyspnea was significantly reduced in both groups, while leg discomfort only decreased in the HH group. The changes in exertional symptoms and peak oxygen consumption were not different between groups. Health-related quality of life significantly improved in both groups; however, the change in St George respiratory questionnaire total score was greater with HH (-7.6 ± 6.4 vs -3.6 ± 5.6, p < 0.05). During rehabilitation, the HH group achieved a higher exercise intensity and training duration throughout the program (p < 0.05). CONCLUSIONS Breathing HH during pulmonary rehabilitation increases the intensity and duration of exercise training that can be performed and results in greater improvements in CLT for patients with COPD.
A Robust Deep Model for Improved Classification of AD/MCI Patients
Accurate classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of a particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight coadaptation, which is a typical cause of overfitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multitask learning strategy into the deep learning framework. We applied the proposed method to the ADNI dataset, and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods.
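As a generic illustration of the dropout idea described above (not the authors' actual architecture, feature pipeline, or hyperparameters), a small fully connected classifier with dropout layers might look like the following PyTorch sketch. The layer sizes and class count are hypothetical.

```python
import torch.nn as nn

# Hypothetical sizes: 'n_features' would be the dimensionality of the MRI/PET
# feature vector and 'n_classes' the number of diagnostic groups (e.g., AD/MCI/NC).
n_features, n_classes = 93, 3

model = nn.Sequential(
    nn.Linear(n_features, 100),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zero half the activations during training
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # discourages co-adaptation of hidden units
    nn.Linear(50, n_classes),
)
```

Dropout is active only in training mode; at evaluation time the full network is used, which is the standard way the regularization effect mentioned in the abstract is obtained.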
Rainbow Color Map (Still) Considered Harmful
In this article, we reiterate the characteristics that make the rainbow color map a poor choice, provide examples that clearly illustrate these deficiencies even on simple data sets, and recommend better color maps for several categories of display. The goal is to make the rainbow color map as rare in visualization as the goto statement is in programming, a construct that similarly complicates the task of analyzing and verifying correctness.
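As a minimal illustration of the recommendation (using matplotlib's built-in color maps, which the article itself does not prescribe), the same scalar field can be displayed with a perceptually uniform map instead of a rainbow-style map by changing a single argument:

```python
import numpy as np
import matplotlib.pyplot as plt

# A simple smooth scalar field; rainbow maps introduce artificial banding here.
x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
data = np.exp(-(x**2 + y**2))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.imshow(data, cmap='jet')       # rainbow-style map: non-uniform, misleading
ax1.set_title('jet (rainbow)')
ax2.imshow(data, cmap='viridis')   # perceptually uniform alternative
ax2.set_title('viridis')
plt.show()
```

On the smooth Gaussian above, the rainbow map suggests spurious contours where the data has none, which is the kind of deficiency the article documents.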
Bayesian Optimization with Exponential Convergence
This paper presents a Bayesian optimization method with exponential convergence without the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [1] requires access to the δ-cover sampling, which was considered to be impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.
Snake Charmer: Physically Enabling Virtual Objects
Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.
Sizing of Energy Storage and Diesel Generators in an Isolated Microgrid Using Discrete Fourier Transform (DFT)
This paper proposes a method for coordinated sizing of energy storage (ES) and diesel generators in an isolated microgrid based on the discrete Fourier transform (DFT). ES and diesel generators have different response characteristics and can complementarily compensate the generation-demand imbalance at different time scales. The DFT-based coordinated dispatch strategy allocates balance power between the two components through a frequency-time domain transform. The proposed method ensures that ES and diesel generators each work in their most efficient and suitable way: diesel generators consistently work at a high generation level while ES compensates small and frequent power fluctuations. Then, the capacities of ES and diesel generators are determined based on the coordinated dispatch strategy. The proposed method can also be used in the planning of ES with other dispatchable sources such as micro-turbines or fuel cells. Finally, the effectiveness of the proposed method and its advantages are demonstrated via a practical case study.
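A toy Python sketch of the frequency-domain split described above (the imbalance signal and cutoff index are invented for illustration; the paper's actual allocation rule may differ) assigns the low-frequency part of the imbalance to diesel generators and the remaining high-frequency part to energy storage:

```python
import numpy as np

# Invented hourly generation-demand imbalance over one day (kW).
rng = np.random.default_rng(0)
imbalance = 50 * np.sin(2 * np.pi * np.arange(24) / 24) + 10 * rng.standard_normal(24)

spectrum = np.fft.rfft(imbalance)
cutoff = 3                                   # hypothetical boundary between slow and fast components

low = spectrum.copy();  low[cutoff:] = 0     # slow variations -> diesel generators
high = spectrum.copy(); high[:cutoff] = 0    # fast fluctuations -> energy storage

diesel_power = np.fft.irfft(low, n=24)
storage_power = np.fft.irfft(high, n=24)

# By linearity of the DFT, the two components sum back to the original imbalance.
assert np.allclose(diesel_power + storage_power, imbalance)
```

The sizing step in the paper would then take the peak and energy content of each reconstructed component as inputs when choosing the diesel and ES capacities.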
Beamspace channel estimation for millimeter-wave massive MIMO systems with lens antenna array
By employing the lens antenna array, beamspace MIMO can utilize beam selection to reduce the number of required RF chains in mmWave massive MIMO systems without obvious performance loss. However, to achieve the capacity-approaching performance, beam selection requires accurate information about the beamspace channel of large size, which is challenging, especially when the number of RF chains is limited. To solve this problem, in this paper we propose a reliable support detection (SD)-based channel estimation scheme. Specifically, we propose to decompose the total beamspace channel estimation problem into a series of sub-problems, each of which only considers one sparse channel component. For each channel component, we first reliably detect its support by utilizing the structural characteristics of the mmWave beamspace channel. Then, the influence of this channel component is removed from the total beamspace channel estimation problem. After the supports of all channel components have been detected, the nonzero elements of the sparse beamspace channel can be estimated with low pilot overhead. Simulation results show that the proposed SD-based channel estimation outperforms conventional schemes and achieves satisfactory accuracy, even in the low-SNR region.
Ease of use and usefulness as measures of student experience in a multi-platform e-textbook pilot
As the use of electronic textbooks continues to expand and we approach the point where dominance of digital over print is becoming increasingly inevitable (Reynolds, 2011), research is needed to understand how students accept and use the technology. This is especially critical as we begin to explore the electronic format for required textbooks in higher education. The current study evaluates university students' experiences with electronic textbooks (e-textbooks) during a pilot project with two textbook publishers, Flat World Knowledge (FWK) and Nelson Education (Nelson). Using the Technology Acceptance Model (TAM) as a framework, we examine the perceived ease of use and perceived usefulness of the technology. While previous research suggests that students have a general preference for textbooks in print rather than electronic format (Allen, 2009; Parsons, 2014; Woody et al., 2010), our study suggests that preference may not dictate the likelihood that students will seek out and use print options. Our study also indicated that student experience with the open/affordable textbook (FWK) was very comparable to that of the high-cost commercial text (Nelson). Despite overall positive reviews for the e-textbooks across both platforms, students experienced a drop in enthusiasm for e-textbooks from the beginning to the end of the pilot.
Fuzzy model and optimization for airport gate assignment problem
Gate assignment is an important decision-making problem in airports that involves multiple conflicting objectives. In this paper, a fuzzy model is proposed to handle two main objectives: minimizing the total walking distance for passengers and maximizing the robustness of the assignment. The idle times of flight-to-gate assignments are regarded as fuzzy variables, and their membership degrees are used to express their influence on the robustness of the assignment. An adjustment function on the membership degree is introduced to combine the two objectives into one. A modified genetic algorithm is adopted to optimize this NP-hard problem. Finally, an illustrative example is given to evaluate the performance of the fuzzy model. Three distribution functions are tested, and a comparison with the fixed buffer time method is given. Simulation results demonstrate the feasibility and effectiveness of the proposed fuzzy method.
A Double‐Edged Sword: Transformational Leadership and Individual Creativity
Leadership research has focused on the positive effects of transformational and charismatic leadership but has neglected the negative side effects. Addressing this gap, we analysed followers’ dependency on the leader as a relevant negative side effect in the relationship between transformational leadership and followers’ creativity and developed an integrative framework on parallel positive and negative effects of transformational leadership. As expected, results from a study with 416 R&D employees showed that transformational leadership promotes followers’ creativity but at the same time increases followers’ dependency which in turn reduces their creativity. This negative indirect effect attenuates the positive influence of transformational leadership on followers’ creativity.
Platelet-rich plasma efficacy versus corticosteroid injection treatment for chronic severe plantar fasciitis.
BACKGROUND Chronic plantar fasciitis is a common orthopedic condition that can prove difficult to successfully treat. In this study, autologous platelet-rich plasma (PRP), a concentrated bioactive blood component rich in cytokines and growth factors, was compared to traditional cortisone injection in the treatment of chronic cases of plantar fasciitis resistant to traditional nonoperative management. METHODS Forty patients (23 females and 17 males) with unilateral chronic plantar fasciitis that did not respond to a minimum of 4 months of standardized traditional nonoperative treatment modalities were prospectively randomized and treated with either a single ultrasound guided injection of 3 cc PRP or 40 mg DepoMedrol cortisone. American Orthopedic Foot and Ankle Society (AOFAS) hindfoot scoring was completed for all patients immediately prior to PRP or cortisone injection (pretreatment = time 0) and at 3, 6, 12, and 24 months following injection treatment. Baseline pretreatment radiographs and MRI studies were obtained in all cases to confirm the diagnosis of plantar fasciitis. RESULTS The cortisone group had a pretreatment average AOFAS score of 52, which initially improved to 81 at 3 months posttreatment but decreased to 74 at 6 months, then dropped to near baseline levels of 58 at 12 months, and continued to decline to a final score of 56 at 24 months. In contrast, the PRP group started with an average pretreatment AOFAS score of 37, which increased to 95 at 3 months, remained elevated at 94 at 6 and 12 months, and had a final score of 92 at 24 months. CONCLUSIONS PRP was more effective and durable than cortisone injection for the treatment of chronic recalcitrant cases of plantar fasciitis. LEVEL OF EVIDENCE Level I, prospective randomized comparative series.
Revisiting Defenses against Large-Scale Online Password Guessing Attacks
Brute force and dictionary attacks on password-only remote login services are now widespread and ever increasing. Enabling convenient login for legitimate users while preventing such attacks is a difficult problem. Automated Turing Tests (ATTs) continue to be an effective, easy-to-deploy approach to identify automated malicious login attempts with reasonable cost of inconvenience to users. In this paper, we discuss the inadequacy of existing and proposed login protocols designed to address large-scale online dictionary attacks (e.g., from a botnet of hundreds of thousands of nodes). We propose a new Password Guessing Resistant Protocol (PGRP), derived upon revisiting prior proposals designed to restrict such attacks. While PGRP limits the total number of login attempts from unknown remote hosts to as low as a single attempt per username, legitimate users in most cases (e.g., when attempts are made from known, frequently-used machines) can make several failed login attempts before being challenged with an ATT. We analyze the performance of PGRP with two real-world data sets and find it more promising than existing proposals.
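The protocol details are in the paper; as a rough, simplified sketch of the general idea (whitelisting known machines and challenging unknown hosts with an ATT after very few failures), one might write something like the following Python. The thresholds, data structures, and the `verify_password` / `ask_att` helpers are hypothetical, not the paper's exact PGRP specification.

```python
# Simplified illustration of a PGRP-like login decision, not the paper's exact protocol.
known_pairs = set()          # (source_ip, username) pairs that previously logged in successfully
failed_counts = {}           # failed attempts per (source_ip, username)
MAX_FREE_FAILURES_KNOWN = 3  # hypothetical allowance for known machines
MAX_FREE_FAILURES_UNKNOWN = 1

def login(source_ip, username, password, verify_password, ask_att):
    key = (source_ip, username)
    limit = MAX_FREE_FAILURES_KNOWN if key in known_pairs else MAX_FREE_FAILURES_UNKNOWN

    # Once the free-failure budget is spent, require an ATT (e.g., a CAPTCHA) first.
    if failed_counts.get(key, 0) >= limit and not ask_att():
        return False

    if verify_password(username, password):
        known_pairs.add(key)          # remember the machine for future convenience
        failed_counts.pop(key, None)
        return True

    failed_counts[key] = failed_counts.get(key, 0) + 1
    return False
```

The point of the whitelist is the usability property highlighted in the abstract: legitimate users on familiar machines rarely see a challenge, while automated guessing from unknown hosts is throttled to roughly one free attempt per username.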
Ferroptosis: A Regulated Cell Death Nexus Linking Metabolism, Redox Biology, and Disease
Brent R. Stockwell, José Pedro Friedmann Angeli, Hülya Bayir, Ashley I. Bush, Marcus Conrad, Scott J. Dixon, Simone Fulda, Sergio Gascón, Stavroula K. Hatzios, Valerian E. Kagan, Kay Noel, Xuejun Jiang, Andreas Linkermann, Maureen E. Murphy, Michael Overholtzer, Atsushi Oyagi, Gabriela C. Pagnussat, Jason Park, Qitao Ran, Craig S. Rosenfeld, Konstantin Salnikow, Daolin Tang, Frank M. Torti, Suzy V. Torti, Shinya Toyokuni, K.A. Woerpel, and Donna D. Zhang