title | abstract |
---|---|
Deep ranking: Triplet MatchNet for music metric learning | Metric learning for music is an important problem for many music information retrieval (MIR) applications such as music generation, analysis, retrieval, classification and recommendation. Traditional music metrics are mostly defined on linear transformations of handcrafted audio features, and may be improper in many situations given the large variety of music styles and instrumentations. In this paper, we propose a deep neural network named Triplet MatchNet to learn metrics directly from raw audio signals of triplets of music excerpts with human-annotated relative similarity in a supervised fashion. It has the advantage of learning highly nonlinear feature representations and metrics in this end-to-end architecture. Experiments on a widely used music similarity measure dataset show that our method significantly outperforms three state-of-the-art music metric learning methods. Experiments also show that the learned features better preserve the partial orders of the relative similarity than handcrafted features. |
Inhibition of IK,ACh current may contribute to clinical efficacy of class I and class III antiarrhythmic drugs in patients with atrial fibrillation | Inward rectifier potassium currents IK1 and acetylcholine activated IK,ACh are implicated in atrial fibrillation (AF) pathophysiology. In chronic AF (cAF), IK,ACh develops a receptor-independent, constitutively active component that together with increased IK1 is considered to support maintenance of AF. Here, we tested whether class I (propafenone, flecainide) and class III (dofetilide, AVE0118) antiarrhythmic drugs inhibit atrial IK1 and IK,ACh in patients with and without cAF. IK1 and IK,ACh were measured with voltage clamp technique in atrial myocytes from 58 sinus rhythm (SR) and 35 cAF patients. The M-receptor agonist carbachol (CCh; 2 µM) was employed to activate IK,ACh. In SR, basal current was not affected by either drug indicating no effect of these compounds on IK1. In contrast, all tested drugs inhibited CCh-activated IK,ACh in a concentration-dependent manner. In cAF, basal current was confirmed to be larger than in SR (at −80 mV, −15.2 ± 1.2 pA/pF, n = 88/35 vs. −6.5 ± 0.4 pA/pF, n = 194/58 [myocytes/patients]; P < 0.05), whereas CCh-activated IK,ACh was smaller (−4.1 ± 0.5 pA/pF vs. −9.5 ± 0.6 pA/pF; P < 0.05). In cAF, receptor-independent constitutive IK,ACh contributes to increased basal current, which was reduced by flecainide and AVE0118 only. This may be due to inhibition of constitutively active IK,ACh channels. In cAF, all tested drugs reduced CCh-activated IK,ACh. We conclude that in cAF, flecainide and AVE0118 reduce receptor-independent, constitutively active IK,ACh, suggesting that they may block IK,ACh channels, whereas propafenone and dofetilide likely inhibit M-receptors. The efficacy of flecainide to terminate AF may in part result from blockade of IK,ACh. |
Golden section search over hyper-rectangle: a direct search method | Abstract: This paper generalises the golden section optimal search method to higher-dimensional optimisation problems. The method is applicable to a strictly quasi-convex function of N variables over an N-dimensional hyper-rectangle. An algorithm is proposed in N dimensions. The algorithm is illustrated graphically in two dimensions and verified through several test functions in higher dimensions using MATLAB. |
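For reference, the one-dimensional method being generalized can be sketched as follows. This is the textbook golden-section search (function and parameter names are ours, not the paper's); the paper extends this idea to strictly quasi-convex functions over N-dimensional hyper-rectangles.

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a strictly quasi-convex f on [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; reuse c as the new upper interior point.
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            # Minimum lies in [c, b]; reuse d as the new lower interior point.
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the constant factor 1/phi while evaluating f only once, which is the property the golden ratio buys.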
Dietary intake of pollutant aerosols via vegetables influenced by atmospheric deposition and wastewater irrigation. | Pot culture experiments were conducted to study dietary intake of heavy metals via vegetables, spinach (Spinacia oleracea L.), radish (Raphanus sativus L.) and tomato (Lycopersicon esculentum Mill) grown under the influence of atmospheric deposition and wastewater irrigation. The results indicated substantial accumulation of heavy metals in vegetables, which contribute significantly to dietary intake of total heavy metals ranging from 1.34 to 110.40 μg g⁻¹ through leaves (spinach), 1.04 to 105.86 μg g⁻¹ through root (radish) and 0.608 to 82.19 μg g⁻¹ through fruits (tomato). Concentration of Cd, Ni and Pb in vegetables exceeded the safe limits of Prevention of Food Adulteration Act 1954. Health risk index for Cd and Pb exceeded the safe limits set by the United States Environmental Protection Agency. The study indicated that the atmospheric depositions as well as wastewater irrigation have significantly elevated the levels of heavy metals in dietary vegetables presenting a significant threat for the health of users. |
When Cryptocurrencies Mine Their Own Business | Bitcoin and hundreds of other cryptocurrencies employ a consensus protocol called Nakamoto consensus, which rewards miners for maintaining a public blockchain. In this paper, we study the security of this protocol with respect to rational miners and show how a minority of the computation power can incentivize the rest of the network to accept a blockchain of the minority’s choice. By deviating from the mining protocol, a mining pool which controls at least 38.2% of the network’s total computational power can, with modest financial capacity, gain a mining advantage over honest mining. Such an attack creates a longer valid blockchain by forking the honest blockchain, and the attacker’s blockchain need not disrupt any “legitimate” non-mining transactions present on the honest blockchain. By subverting the consensus protocol, the attacking pool can double-spend money or simply create a blockchain that pays mining rewards to the attacker’s pool. We show that our attacks are easy to encode in any Nakamoto-consensus-based cryptocurrency which supports a scripting language that is sufficiently expressive to encode its own mining puzzles. |
Motivational intervention to enhance post-detoxification 12-Step group affiliation: a randomized controlled trial | AIMS
To compare a motivational intervention (MI) focused on increasing involvement in 12-Step groups (TSGs; e.g. Alcoholics Anonymous) versus brief advice (BA) to attend TSGs.
DESIGN
Patients were assigned randomly to either the MI or BA condition, and followed-up at 6 months after discharge.
SETTING AND PARTICIPANTS
One hundred and forty substance use disorder (SUD) patients undergoing in-patient detoxification (detox) in Norway.
MEASUREMENTS
The primary outcome was TSG affiliation measured with the Alcoholics Anonymous Affiliation Scale (AAAS), which combines meeting attendance and TSG involvement. Substance use and problem severity were also measured.
FINDINGS
At 6 months after treatment, compared with the BA group, the MI group had higher TSG affiliation [0.91 point higher AAAS score; 95% confidence interval (CI) = 0.04 to 1.78; P = 0.041]. The MI group reported 3.5 fewer days of alcohol use (2.1 versus 5.6 days; 95% CI = -6.5 to -0.6; P = 0.020) and 4.0 fewer days of drug use (3.8 versus 7.8 days; 95% CI = -7.5 to -0.4; P = 0.028); however, abstinence rates and severity scores did not differ between conditions. Analyses controlling for duration of in-patient treatment did not alter the results.
CONCLUSIONS
A motivational intervention in an in-patient detox ward was more successful than brief advice in terms of patient engagement in 12-Step groups and reduced substance use at 6 months after discharge. There is a potential benefit of adding a maintenance-focused element to standard detox. |
Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling | Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but its memory requirement grows rapidly with sequence length. In this paper, we propose a model, called “bi-directional block self-attention network (Bi-BloSAN)”, for RNN/CNN-free sequence encoding. It requires as little memory as RNN but retains all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, applies an intra-block SAN to each block to model local context, then applies an inter-block SAN to the outputs of all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows a better efficiency-memory trade-off than existing RNN/CNN/SAN models. |
Re-ranking for joint named-entity recognition and linking | Recognizing names and linking them to structured data is a fundamental task in text analysis. Existing approaches typically perform these two steps using a pipeline architecture: they use a Named-Entity Recognition (NER) system to find the boundaries of mentions in text, and an Entity Linking (EL) system to connect the mentions to entries in structured or semi-structured repositories like Wikipedia. However, the two tasks are tightly coupled, and each type of system can benefit significantly from the kind of information provided by the other. We present a joint model for NER and EL, called NEREL, that takes a large set of candidate mentions from typical NER systems and a large set of candidate entity links from EL systems, and ranks the candidate mention-entity pairs together to make joint predictions. In NER and EL experiments across three datasets, NEREL significantly outperforms or comes close to the performance of two state-of-the-art NER systems, and it outperforms 6 competing EL systems. On the benchmark MSNBC dataset, NEREL provides a 60% reduction in error over the next-best NER system and a 68% reduction in error over the next-best EL system. |
Obtaining accurate trajectories from repeated coarse and inaccurate location logs | Context awareness is a key property for enabling context-aware services. For a mobile device, the user's location or trajectory is one of the crucial contexts. One common challenge in detecting location or trajectory with mobile devices is managing the tradeoff between accuracy and power consumption. Typical approaches are (1) controlling the frequency of sensor usage and (2) sensor fusion techniques. The algorithm proposed in this paper takes a different approach, improving accuracy by merging repeatedly measured coarse and inaccurate location data from cell towers. The experimental results show that the mean error distance between the detected trajectory and the ground truth is improved from 44 m to 10.9 m by merging data from 41 days of measurements. |
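The merging idea can be illustrated with a deliberately simplified sketch (the paper's actual algorithm is more involved; `merge_fixes` and its inputs are hypothetical): averaging n independent, equally noisy fixes of the same route point shrinks the expected error roughly as 1/sqrt(n), which is the intuition behind combining many days of coarse cell-tower logs into one accurate trajectory.

```python
def merge_fixes(fixes):
    """Merge repeated noisy (lat, lon) estimates of the same trajectory point
    by averaging. With n independent fixes of equal variance, the standard
    error of the mean falls as 1/sqrt(n)."""
    n = len(fixes)
    lat = sum(p[0] for p in fixes) / n
    lon = sum(p[1] for p in fixes) / n
    return (lat, lon)
```

In practice the hard part, which this sketch omits, is aligning fixes from different days to the same point on the route before averaging.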
Decoronation - a conservative method to treat ankylosed teeth for preservation of alveolar ridge prior to permanent prosthetic reconstruction: literature review and case presentation. | Avulsed teeth that are stored extraorally in a dry environment for >60 min generally develop replacement root resorption or ankylosis following their replantation due to the absence of a vital periodontal ligament on their root surface. One negative sequela of such ankylosis is tooth infra-positioning and the local arrest of alveolar bone growth. Removal of an ankylosed tooth may be difficult and traumatic, leading to esthetic bony ridge deformities and interference with optimal prosthetic treatment. Recently, a treatment option for ankylosed teeth named 'decoronation' has gained interest, particularly in pediatric dentistry with its focus on dental traumatology. This article reviews the up-to-date literature that has been published on decoronation with respect to its importance for future prosthetic rehabilitation, followed by a case presentation that demonstrates its clinical benefits. |
Normalizing Flows on Riemannian Manifolds | We consider the problem of density estimation on Riemannian manifolds. Density estimation on manifolds has many applications in fluid mechanics, optics and plasma physics, and it appears often when dealing with angular variables (such as those used in protein folding, robot limbs, gene expression) and in directional statistics in general. In spite of the multitude of algorithms available for density estimation in Euclidean spaces R^n that scale to large n (e.g. normalizing flows, kernel methods and variational approximations), most of these methods are not immediately suitable for density estimation in more general Riemannian manifolds. We revisit techniques related to homeomorphisms from differential geometry for projecting densities to sub-manifolds and use them to generalize the idea of normalizing flows to more general Riemannian manifolds. The resulting algorithm is scalable, simple to implement and suitable for use with automatic differentiation. We demonstrate concrete examples of this method on the n-sphere S^n. In recent years, there has been much interest in applying variational inference techniques to learning large-scale probabilistic models in various domains, such as images and text [1, 2, 3, 4, 5, 6]. One of the main issues in variational inference is finding the best approximation to an intractable posterior distribution of interest by searching through a class of known probability distributions. The class of approximations used is often limited, e.g., mean-field approximations, implying that no solution is ever able to resemble the true posterior distribution. This is a widely raised objection to variational methods, in that unlike MCMC, the true posterior distribution may not be recovered even in the asymptotic regime. 
To address this problem, recent work on Normalizing Flows [7], Inverse Autoregressive Flows [8], and others [9, 10] (referred to collectively as normalizing flows) has focused on developing scalable methods of constructing arbitrarily complex and flexible approximate posteriors from simple distributions, using transformations parameterized by neural networks, which gives these models universal approximation capability in the asymptotic regime. In all of these works, the distributions of interest are restricted to be defined over high-dimensional Euclidean spaces. There are many other distributions defined over special homeomorphisms of Euclidean spaces that are of interest in statistics, such as Beta and Dirichlet (n-simplex); Norm-Truncated Gaussian (n-ball); Wrapped Cauchy and von Mises-Fisher (n-sphere), which find little applicability in variational inference with large-scale probabilistic models due to limitations related to density complexity and gradient computation [11, 12, 13, 14]. Many such distributions are unimodal, and generating complicated distributions from them would require creating mixture densities or using auxiliary random variables. Mixture methods require further knowledge or tuning, e.g. the number of mixture components necessary, and impose a heavy computational burden on the gradient computation in general, e.g. with quantile functions [15]. Further, mode complexity increases only linearly with mixtures, as opposed to the exponential increase with normalizing flows. Conditioning on auxiliary variables [16], on the other hand, constrains the use of the created distribution, due to the need to integrate out the auxiliary factors in certain scenarios. In all of these methods, computation of low-variance gradients is difficult because the simulation of random variables cannot in general be reparameterized (e.g. rejection sampling [17]). 
In this work, we present methods that generalize previous work on improving variational inference in R^n using normalizing flows to Riemannian manifolds of interest such as spheres S^n, tori T^n and their product topologies with R^n, like infinite cylinders. Figure 1: Left: Construction of a complex density on S^2 by first projecting the manifold to R^2, transforming the density and projecting it back to S^2. Right: Illustration of transformed (S^2 → R^2) densities corresponding to a uniform density on the sphere. Blue: empirical density (obtained by Monte Carlo); Red: analytical density from equation (4); Green: density computed ignoring the intrinsic dimensionality of S^2. These special manifolds M ⊂ R^m are homeomorphic to the Euclidean space R^n, where n corresponds to the dimensionality of the tangent space of M at each point. A homeomorphism is a continuous function between topological spaces with a continuous inverse (bijective and bicontinuous). It maps points in one space to the other in a unique and continuous manner. An example manifold is the unit 2-sphere, the surface of a unit ball, which is embedded in R^3 and homeomorphic to R^2 (see Figure 1). In normalizing flows, the main result of differential geometry used for computing the density updates is given by dx = |det J_φ| du, which represents the relationship between differentials (infinitesimal volumes) of two equidimensional Euclidean spaces using the Jacobian of the function φ : R^n → R^n that transforms one space to the other. This result only applies to transforms that preserve dimensionality. However, transforms that map an embedded manifold to its intrinsic Euclidean space do not preserve the dimensionality of the points, and the result above no longer applies. Jacobians of such transforms φ : R^n → R^m with m > n are rectangular, and an infinitesimal cube in R^n maps to an infinitesimal parallelepiped on the manifold. 
The relation between these volumes is given by dx = √(det G) du, where G = J_φ^T J_φ is the metric induced by the embedding φ on the tangent space T_x M [18, 19, 20]. The correct formula for computing the density over M now becomes: ∫ |
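The two volume relations used in this abstract can be written out in standard notation (a reconstruction of the usual differential-geometry identities, under the definitions given above):

```latex
% Equidimensional change of variables, \varphi : \mathbb{R}^n \to \mathbb{R}^n
d\vec{x} \;=\; \lvert \det J_{\varphi} \rvert \, d\vec{u}

% Embedding \varphi : \mathbb{R}^n \to \mathbb{R}^m,\ m > n,
% with induced metric G = J_{\varphi}^{\top} J_{\varphi} on the tangent space
d\vec{x} \;=\; \sqrt{\det G} \, d\vec{u}
```

The first identity requires a square Jacobian; the second replaces |det J| with the square root of the Gram determinant, which measures the volume of the parallelepiped that the rectangular Jacobian maps an infinitesimal cube onto.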
A 30-MHz Voltage-Mode Buck Converter Using Delay-Line-Based PWM Control | A 30-MHz voltage-mode buck converter using a delay-line-based pulse-width-modulation controller is proposed in this brief. Two voltage-to-delay cells are used to convert the voltage difference to a delay-time difference. A charge pump is used to charge or discharge the loop filter, depending on whether the feedback voltage is larger or smaller than the reference voltage. A delay-line-based voltage-to-duty-cycle (V2D) controller is used to replace the classical ramp-comparator-based V2D controller to achieve a wide duty cycle. A type-II compensator is implemented in this design with a capacitor and resistor in the loop filter. The prototype buck converter was fabricated using a 0.18-μm CMOS process. It occupies an active area of 0.834 mm², including the testing pads. The tunable duty cycle ranges from 11.9% to 86.3%, corresponding to 0.4 V–2.8 V output voltage with a 3.3 V input. With a step of 400 mA in the load current, the settling time is around 3 μs. The peak efficiency is as high as 90.2% with 2.4 V output, and the maximum load current is 800 mA. |
Wireless Magnetic Sensor Node for Vehicle Detection With Optical Wake-Up | Vehicle detectors provide essential information about parking occupancy and traffic flow. To cover large areas that lack a suitable electrical infrastructure, wired sensor networks are impractical because of their high deployment and maintenance costs. Wireless sensor networks (WSNs) with autonomous sensor nodes can be more economical. Vehicle detectors intended for a WSN should be small, sturdy, low power, cost-effective, and easy to install and maintain. Currently available vehicle detectors based on inductive loops, ultrasound, infrared, or magnetic sensors do not fulfill the requirements above, which has led to the search for alternative solutions. This paper presents a vehicle detector which includes a magnetic and an optical sensor and is intended as a sensor node for use with a WSN. Magnetic sensors based on magnetoresistors are very sensitive and can detect the magnetic anomaly in the Earth's magnetic field that results from the presence of a car, but their continuous operation would drain more than 1.5 mA at 3 V, hence limiting the autonomy of a battery-supplied sensor node. Passive, low-power optical sensors can detect the shadow cast by a car that covers them, but are prone to false detections. The use of optical triggering to wake up a magnetic sensor, combined with power-efficient event-based software, yields a simple, compact, reliable, low-power sensor node for vehicle detection whose quiescent current drain is 5.5 μA. This approach of using a low-power sensor to trigger a second, more specific sensor can be applied to other autonomous sensor nodes. |
The State of Public Infrastructure-as-a-Service Cloud Security | The public Infrastructure-as-a-Service (IaaS) cloud industry has reached a critical mass in the past few years, with many cloud service providers fielding competing services. Despite the competition, we find some of the security mechanisms offered by the services to be similar, indicating that the cloud industry has established a number of “best practices,” while other security mechanisms vary widely, indicating that there is also still room for innovation and experimentation. We investigate these differences and the possible underlying reasons for them. We also contrast the security mechanisms offered by public IaaS cloud offerings with security mechanisms proposed by academia over the same period. Finally, we speculate on how industry and academia might work together to solve the pressing security problems in public IaaS clouds going forward. |
Driver fatigue detection based on eye tracking and dynamic template matching | A vision-based real-time driver fatigue detection system is proposed for driving safety. The driver's face is located, from color images captured in a car, by using the characteristics of skin colors. Then, edge detection is used to locate the regions of the eyes. In addition to being used as the dynamic templates for eye tracking in the next frame, the obtained eye images are also used for fatigue detection in order to generate warning alarms for driving safety. The system is tested on a Pentium III 550 CPU with 128 MB RAM. The experimental results seem quite encouraging and promising. The system can reach 20 frames per second for eye tracking, and the average correct rate for eye location and tracking can achieve 99.1% on four test videos. The correct rate for fatigue detection is 100%, but the average precision rate is 88.9% on the test videos. |
The role of MRI in rheumatoid arthritis: research and clinical issues. | PURPOSE OF REVIEW
This review describes the important role of MRI in rheumatoid arthritis (RA), exploring recent reliability and validity work, as well as the current use of MRI in clinical trials and practice.
RECENT FINDINGS
Both bone oedema and erosions on MRI have been confirmed as representing osteitis and cortical bone defects, respectively, adding to what was already known about the validity of contrast enhanced synovium representing synovitis. An increasing number of studies have used MRI as an outcome measure with interest moving from disease-modifying antirheumatic drugs (DMARDs) to biological therapies and a more technical focus on dynamic imaging. In addition, low-field extremity MRI has been developed as a well tolerated, comfortable and convenient method for imaging assessment in clinical practice.
SUMMARY
This review has highlighted both recent research advances as well as the future potential for MRI in RA, with the aim that MRI will become part of standard measures for RA clinical trials. With respect to extremity imaging, further work is required to provide useful clinical algorithms. |
Induction and molecular signature of pathogenic TH17 cells | Interleukin 17 (IL-17)-producing helper T cells (TH17 cells) are often present at the sites of tissue inflammation in autoimmune diseases, which has led to the conclusion that TH17 cells are main drivers of autoimmune tissue injury. However, not all TH17 cells are pathogenic; in fact, TH17 cells generated with transforming growth factor-β1 (TGF-β1) and IL-6 produce IL-17 but do not readily induce autoimmune disease without further exposure to IL-23. Here we found that the production of TGF-β3 by developing TH17 cells was dependent on IL-23, which together with IL-6 induced very pathogenic TH17 cells. Moreover, TGF-β3-induced TH17 cells were functionally and molecularly distinct from TGF-β1-induced TH17 cells and had a molecular signature that defined pathogenic effector TH17 cells in autoimmune disease. |
TCP Westwood: End-to-End Congestion Control for Wired/Wireless Networks | TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks. The improvement is most significant in wireless networks with lossy links. In fact, TCPW performance is not very sensitive to random errors, while TCP Reno is equally sensitive to random loss and congestion loss and cannot discriminate between them; hence the tendency of TCP Reno to overreact to errors. An important distinguishing feature of TCP Westwood with respect to previous wireless TCP “extensions” is that it does not require inspection and/or interception of TCP packets at intermediate (proxy) nodes. Rather, TCPW fully complies with the end-to-end TCP design principle. The key innovative idea is to continuously measure, at the TCP sender side, the bandwidth used by the connection by monitoring the rate of returning ACKs. The estimate is then used to compute the congestion window and slow-start threshold after a congestion episode, that is, after three duplicate acknowledgments or after a timeout. The rationale of this strategy is simple: in contrast with TCP Reno, which “blindly” halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow-start threshold and a congestion window which are consistent with the effective bandwidth used at the time congestion is experienced. We call this mechanism faster recovery. The proposed mechanism is particularly effective over wireless links, where sporadic losses due to radio channel problems are often misinterpreted as a symptom of congestion by current TCP schemes and thus lead to an unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. 
In addition, friendliness with TCP Reno was observed in a set of experiments showing that TCP Reno connections are not starved by TCPW connections. Most importantly, TCPW is extremely effective in mixed wired and wireless networks where throughput improvements of up to 550% are observed. Finally, TCPW performs almost as well as localized link layer approaches such as the popular Snoop scheme, without incurring the overhead of a specialized link layer protocol. |
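The faster-recovery idea lends itself to a short sketch. The function names, the smoothing constant, and the EWMA form below are illustrative assumptions, not taken from the paper: the sender smooths a bandwidth sample from returning ACKs, and on a congestion episode sets the slow-start threshold from the estimated pipe size instead of blindly halving the window as Reno does.

```python
def update_bwe(bwe, acked_bytes, ack_interval, alpha=0.9):
    """Exponentially smoothed bandwidth estimate (bytes/s) from returning ACKs.
    alpha is an assumed smoothing constant, not the paper's filter."""
    sample = acked_bytes / ack_interval
    return alpha * bwe + (1 - alpha) * sample

def faster_recovery_ssthresh(bwe, rtt_min, mss):
    """On a congestion episode (3 dup ACKs or timeout), set ssthresh (in
    segments) to the estimated pipe size: bandwidth * minimum RTT / MSS."""
    return max(2, int(bwe * rtt_min / mss))
```

The contrast with Reno is the second function: Reno would compute `ssthresh = cwnd // 2` regardless of how much bandwidth the connection was actually achieving.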
A Compact Dual-Band Printed Yagi-Uda Antenna for GNSS and CMMB Applications | A printed Yagi-Uda antenna with a meandered driven dipole and a concave parabolic reflector is proposed for dual-band operations of L1-band Global Navigation Satellite System (GNSS) and S-band China Mobile Multimedia Broadcasting (CMMB). The antenna is designed and fabricated on a thin dielectric substrate, and measured at 1580 MHz in the low band (L1-band) and 2645 MHz in the high band (S-band), respectively, with directivities of 6.7 and 4.9 dBi, front-to-back ratios of 13.1 and 10.3 dB, cross-polarization levels of -23.8 and -21.9 dB, bandwidths of 4.0% and 6.5%, and antenna efficiencies of -0.40 dB (91.2%) and -0.96 dB (80.2%), which are better than -1 dBi in terms of the three-dimensional (3-D) average gain. The occupied area of this dual-band antenna is the same as that of the previously proposed single-band one. With these properties, the proposed antenna is promising for combo applications of L1-band GNSS and S-band CMMB. |
Study on Unconstrained Facial Recognition Using the Boston Marathon Bombings Suspects | The investigation surrounding the Boston Marathon bombings was a missed opportunity for automated facial recognition to assist law enforcement in identifying suspects. We simulate the identification scenario presented by the investigation using three state-of-the-art commercial face recognition systems, and evaluate the maturity of face recognition technology in matching low quality face images of uncooperative subjects. Our experimental results show one instance where a commercial face matcher returns a rank-one hit for suspect Dzhokhar Tsarnaev against a one million mugshot background database. Though issues surrounding pose, occlusion, and resolution continue to confound matchers, there have been significant advances made in face recognition technology to assist law enforcement agencies in their investigations. |
Addressing Security and Privacy Challenges in Internet of Things | Internet of Things (IoT), also referred to as the Internet of Objects, is envisioned as a holistic and transformative approach for providing numerous services. The rapid development of various communication protocols and the miniaturization of transceivers, along with recent advances in sensing technologies, offer the opportunity to transform isolated devices into communicating smart things. Smart things, which can sense, store, and even process electrical, thermal, optical, chemical, and other signals to extract user-/environment-related information, have enabled services limited only by human imagination. Despite the picturesque promises of IoT-enabled systems, the integration of smart things into the standard Internet introduces several security challenges because the majority of Internet technologies, communication protocols, and sensors were not designed to support IoT. Several recent research studies have demonstrated that launching security/privacy attacks against IoT-enabled systems, in particular wearable medical sensor (WMS)-based systems, may lead to catastrophic situations and life-threatening conditions. Therefore, security threats and privacy concerns in the IoT domain need to be proactively studied and aggressively addressed. In this thesis, we tackle several domain-specific security/privacy challenges associated with IoT-enabled systems. We first target health monitoring systems, which are among the most widely used types of IoT-enabled systems. We discuss and evaluate several energy-efficient schemes and algorithms, which significantly reduce the total energy consumption of different implantable and wearable medical devices (IWMDs). The proposed schemes make continuous long-term health monitoring feasible while providing the spare energy needed for data encryption. 
Furthermore, we present two energy-efficient protocols for implantable medical devices (IMDs), which are essential for data encryption: (i) a secure wakeup protocol that is resilient against battery draining attacks, along with (ii) a low-power key exchange protocol that shares the encryption key between the IMD and the external device while ensuring confidentiality of the key. Moreover, we introduce a new class of attacks against the privacy of a patient who is carrying IWMDs. We describe how an attacker can infer private information about the patient by exploiting physiological information leakage, i.e., signals that continuously emanate from the human body due to the normal functioning of organs or IWMDs attached to (or implanted in) the body. Further, we propose a new generic class of security attacks, called dedicated intelligent security attacks against sensor-triggered emergency responses (DISASTER), that is applicable to a variety of sensor-based systems. DISASTER exploits design flaws and security weaknesses of safety mechanisms deployed in cyber-physical systems (CPSs) to trigger emergency responses even in the absence of a real emergency. In addition to introducing DISASTER, we comprehensively describe its serious consequences and demonstrate the possibility of launching such attacks against the two most widely used CPSs: residential and industrial automation/monitoring systems. Finally, we present a continuous authentication system based on BioAura, i.e., information that is already gathered by WMSs for diagnostic and therapeutic purposes. We extensively examine the proposed authentication system and demonstrate that it offers promising advantages over one-time knowledge-based authentication systems, e.g., password-/pattern-based systems, and may potentially be used to protect personal computing devices and servers, software applications, and restricted physical spaces. |
Design Of Ternary Logic Gates Using CNTFET | This paper presents a novel design of ternary logic gates such as STI, PTI, NTI, NAND and NOR using carbon nanotube field effect transistors. Ternary logic is a promising alternative to the conventional binary logic design technique, since it makes it possible to achieve simplicity and energy efficiency in modern digital design due to reduced circuit overhead such as interconnects and chip area. In this paper, a novel design of basic logic gates for ternary logic based on CNTFETs is proposed. Keywords— Carbon nanotube field effect transistor (CNTFET), MVL (multi-valued logic), ternary logic, STI, NTI, PTI |
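As a behavioral companion to the gates named in the abstract, the following is a minimal sketch (not the paper's transistor-level design) of the standard truth tables for the three ternary inverters and the ternary NAND/NOR, using logic levels 0, 1, 2:

```python
# Behavioral truth tables for the ternary gates named in the abstract.
# Logic levels are 0, 1, 2; the CNTFET circuit details are not modeled here.

def sti(a):
    # Standard ternary inverter: full complement, 2 - a
    return 2 - a

def pti(a):
    # Positive ternary inverter: output is 2 unless the input is 2
    return 2 if a < 2 else 0

def nti(a):
    # Negative ternary inverter: output is 2 only when the input is 0
    return 2 if a == 0 else 0

def tnand(a, b):
    # Ternary NAND: standard inversion of the minimum
    return sti(min(a, b))

def tnor(a, b):
    # Ternary NOR: standard inversion of the maximum
    return sti(max(a, b))
```

These definitions follow the common MVL conventions for STI/PTI/NTI; the paper's contribution is realizing them efficiently with CNTFETs rather than defining the logic itself.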
Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR) | This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory by comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and with odometry, and a difference in the angular orientation of the mobile robot of less than 5° in absolute displacements up to 1000 mm. |
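The inverse kinematics of a three-wheel holonomic platform can be sketched as follows. This is an illustrative model only, assuming three omni wheels at 120° spacing at an assumed radius R from the robot center (the APR's exact geometry and wheel angles are not given here):

```python
import math

def wheel_speeds(vx, vy, w, R=0.2, angles=(0.0, 2 * math.pi / 3, 4 * math.pi / 3)):
    """Tangential speed of each omni wheel for a body velocity (vx, vy)
    and angular rate w. R and angles are illustrative assumptions:
    wheel i at angle a_i contributes -sin(a_i)*vx + cos(a_i)*vy + R*w."""
    return [-math.sin(a) * vx + math.cos(a) * vy + R * w for a in angles]
```

A pure rotation drives all three wheels at the same speed R*w, while for a pure translation the three wheel speeds sum to zero because sines and cosines at 120° spacing cancel; these invariants are a quick sanity check on any such kinematic model.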
An FEM investigation into the behavior of metal matrix composites: tool-particle interaction during orthogonal cutting | An analytical or experimental method is often unable to explore the behaviour of a metal matrix composite (MMC) during machining due to the complex deformation and interactions among particles, tool and matrix. This paper investigates the matrix deformation and tool-particle interactions during machining using the finite element method. Based on the geometrical orientations, the interaction between tool and particle reinforcements was categorized into three scenarios: particles along, above and below the cutting path. The development of stress and strain fields in the MMC was analyzed and physical phenomena such as tool wear, particle debonding, displacements and inhomogeneous deformation of matrix material were explored. It was found that tool-particle interaction and stress/strain distributions in the particles/matrix are responsible for particle debonding, surface damage and tool wear during machining of MMC. |
EMA at SemEval-2018 Task 1: Emotion Mining for Arabic | While significant progress has been achieved for Opinion Mining in Arabic (OMA), very limited effort has been put into the task of emotion mining in Arabic. In fact, businesses are interested in learning a fine-grained representation of how users feel about their products or services. In this work, we describe the methods used by the team Emotion Mining in Arabic (EMA) as part of SemEval-2018 Task 1 on affect mining for Arabic tweets. EMA participated in all 5 subtasks. Several preprocessing steps were evaluated, and the best system included diacritics removal, elongation adjustment, replacement of emojis by the corresponding Arabic words, character normalization and light stemming. Moreover, several features were evaluated along with different classification and regression techniques. Across the 5 subtasks, word-embedding features combined with an ensemble technique performed best. EMA achieved 1st place in subtask 5, and 3rd place in subtasks 1 and 3. |
A Taxonomy and Survey on Distributed File Systems | Applications that process large volumes of data (such as search engines, grid computing applications, and data mining applications) require a backend infrastructure for storing data. The distributed file system is the central component of such a storage infrastructure. Many network computing projects have designed and implemented distributed file systems with a variety of architectures and functionalities. In this paper, we develop a comprehensive taxonomy for describing distributed file system architectures and use this taxonomy to survey existing distributed file system implementations in very large-scale network computing systems such as Grids and search engines. We use the taxonomy and the survey results to identify architectural approaches that have not been fully explored in distributed file system research. |
Angelic Semantics for High-Level Actions | High-level actions (HLAs) lie at the heart of hierarchical planning. Typically, an HLA admits multiple refinements into primitive action sequences. Correct descriptions of the effects of HLAs may be essential to their effective use, yet the literature is mostly silent. We propose an angelic semantics for HLAs, the key concept of which is the set of states reachable by some refinement of a high-level plan, representing uncertainty that will ultimately be resolved in the planning agent’s own best interest. We describe upper and lower approximations to these reachable sets, and show that the resulting definition of a high-level solution automatically satisfies the upward and downward refinement properties. We define a STRIPS-like notation for such descriptions. A sound and complete hierarchical planning algorithm is given and its computational benefits are demonstrated. |
Demo abstract: Accurate power profiling of sensornets with the COOJA/MSPsim simulator | Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Besides gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network. We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results. |
Sunscreen Products as Emerging Pollutants to Coastal Waters | A growing awareness of the risks associated with skin exposure to ultraviolet (UV) radiation over the past decades has led to increased use of sunscreen cosmetic products, leading to the introduction of new chemical compounds into the marine environment. Although coastal tourism and recreation are the largest and most rapidly growing activities in the world, the evaluation of sunscreens as a source of chemicals to the coastal marine system has not been addressed. Concentrations of chemical UV filters included in the formulation of sunscreens, such as benzophenone-3 (BZ-3), 4-methylbenzylidene camphor (4-MBC), TiO₂ and ZnO, are detected in nearshore waters with variable concentrations along the day and mainly concentrated in the surface microlayer (i.e. 53.6-577.5 ng L⁻¹ BZ-3; 51.4-113.4 ng L⁻¹ 4-MBC; 6.9-37.6 µg L⁻¹ Ti; 1.0-3.3 µg L⁻¹ Zn). The presence of these compounds in seawater suggests relevant effects on phytoplankton. Indeed, we provide evidence of the negative effect of sunblocks on the growth of the commonly found marine diatom Chaetoceros gracilis (mean EC₅₀ = 125±71 mg L⁻¹). Dissolution of sunscreens in seawater also releases inorganic nutrients (N, P and Si forms) that can fuel algal growth. In particular, PO₄³⁻ is released by these products in notable amounts (up to 17 µmol PO₄³⁻ g⁻¹). We conservatively estimate an increase of up to 100% over background PO₄³⁻ concentrations (0.12 µmol L⁻¹ over a background level of 0.06 µmol L⁻¹) in nearshore waters during low water renewal conditions at a populated beach on Majorca island. Our results show that sunscreen products are a significant source of organic and inorganic chemicals that reach the sea, with potential ecological consequences for the coastal marine ecosystem. |
An Overview of the ATSC 3.0 Physical Layer Specification | This paper provides an overview of the physical layer specification of Advanced Television Systems Committee (ATSC) 3.0, the next-generation digital terrestrial broadcasting standard. ATSC 3.0 does not have any backwards-compatibility constraint with existing ATSC standards, and it uses orthogonal frequency division multiplexing-based waveforms along with powerful low-density parity check (LDPC) forward error correction codes similar to existing state-of-the-art systems. However, it introduces many new technological features such as 2-D non-uniform constellations, improved and ultra-robust LDPC codes, power-based layered division multiplexing to efficiently provide mobile and fixed services in the same radio frequency (RF) channel, as well as a novel frequency pre-distortion multiple-input single-output antenna scheme. ATSC 3.0 also allows bonding of two RF channels to increase the service peak data rate and to exploit inter-RF channel frequency diversity, and to employ a dual-polarized multiple-input multiple-output antenna system. Furthermore, ATSC 3.0 provides great flexibility in terms of configuration parameters (e.g., 12 coding rates, 6 modulation orders, 16 pilot patterns, 12 guard intervals, and 2 time interleavers), and also a very flexible data multiplexing scheme using time, frequency, and power dimensions. As a consequence, ATSC 3.0 not only improves the spectral efficiency and robustness well beyond the first-generation ATSC broadcast television standard, but is also positioned to become the reference terrestrial broadcasting technology worldwide due to its unprecedented performance and flexibility. Another key aspect of ATSC 3.0 is its extensible signaling, which will allow new technologies to be included in the future without disrupting ATSC 3.0 services. 
This paper provides an overview of the physical layer technologies of ATSC 3.0, covering the ATSC A/321 standard that describes the so-called bootstrap, which is the universal entry point to an ATSC 3.0 signal, and the ATSC A/322 standard that describes the physical layer downlink signals after the bootstrap. A summary comparison between ATSC 3.0 and DVB-T2 is also provided. |
Ubiquitous Smart Home System Using Android Application | This paper presents a flexible, standalone, low-cost smart home system based on an Android app communicating with a micro web server that provides more than just switching functionalities. The Arduino Ethernet is used to eliminate the use of a personal computer (PC), keeping the cost of the overall system to a minimum, while voice activation is incorporated for the switching functionalities. Devices such as light switches, power plugs, temperature sensors, humidity sensors, current sensors, intrusion detection sensors, smoke/gas sensors and sirens have been integrated into the system to demonstrate the feasibility and effectiveness of the proposed smart home system. The smart home app was tested and was able to successfully perform smart home operations such as switching functionalities, automatic environmental control and intrusion detection; in the latter case an email is generated and the siren is activated. |
A simulation-based software design framework for network-centric and parallel systems | In this paper, we discuss a software design framework that accommodates network-centricity and the rise of multicore technology. Instead of producing static design documents in the form of UML diagrams, we propose the automatic generation of a visual simulation model that represents the target system design. We discuss a design environment that is responsible for the generation and execution of the simulation model. |
Robust throughput and routing for mobile ad hoc wireless networks | Flows transported across mobile ad hoc wireless networks suffer from route breakups caused by nodal mobility. In a network that aims to support critical interactive real-time data transactions, to provide for the uninterrupted execution of a transaction, or for the rapid transport of a high value file, it is essential to identify robust routes across which such transactions are transported. Noting that route failures can induce long re-routing delays that may be highly interruptive for many applications and message/stream transactions, it is beneficial to configure the routing scheme to send a flow across a route whose lifetime is longer, with sufficiently high probability, than the estimated duration of the activity that it is selected to carry. We evaluate the ability of a mobile ad hoc wireless network to distribute flows across robust routes by introducing the robust throughput measure as a performance metric. The utility gained by the delivery of flow messages is based on the level of interruption experienced by the underlying transaction. As a special case, for certain applications only transactions that are completed without being prematurely interrupted may convey data to their intended users that is of acceptable utility. We describe the mathematical calculation of a network’s robust throughput measure, as well as its robust throughput capacity. We introduce the robust flow admission and routing algorithm (RFAR) to provide for the timely and robust transport of flow transactions across mobile ad hoc wireless networks. |
Beyond Skip Connections: Top-Down Modulation for Object Detection | In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from the low-level layers requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulation as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower-layer filters, and the top-down network handles the selection and integration of contextual information and low-level features. The proposed TDM architecture provides a significant boost on the COCO benchmark, achieving 28.6 AP for VGG16 and 35.2 AP for ResNet101 networks. Using InceptionResNetv2, our TDM model achieves 37.3 AP, which is the best single-model performance to date on the COCO test-dev benchmark, without any bells and whistles. |
What is Market News ? | We investigate the market for news under two assumptions: that readers hold beliefs which they like to see confirmed, and that newspapers can slant stories toward these beliefs. We show that, on the topics where readers share common beliefs, one should not expect accuracy even from competitive media: competition results in lower prices, but common slanting toward reader biases. On topics where reader beliefs diverge (such as politically divisive issues), however, newspapers segment the market and slant toward extreme positions. Yet in the aggregate, a reader with access to all news sources could get an unbiased perspective. Generally speaking, reader heterogeneity is more important for accuracy in media than competition per se. (JEL D23, L82) |
Effect of nodal irradiation and fraction size on cardiac and cerebrovascular mortality in women with breast cancer treated with local and locoregional radiotherapy. | PURPOSE
To determine whether the adjuvant breast cancer radiation volume or fraction size (>2 Gy vs. ≤2 Gy) affected the risk of fatal cardiac or cerebrovascular (CCV) events and to determine whether the addition of regional radiotherapy (RT) increased the risk of fatal cerebrovascular events compared with breast/chest wall RT alone.
METHODS AND MATERIALS
Overall survival was compared for patients receiving breast/chest wall RT alone or breast/chest wall plus regional node RT (BRCW+NRT) in a population-based cohort of women with early-stage breast cancer who had undergone RT between 1990 and 1996. The effect of laterality, age, systemic therapy, radiation volume, and fraction size on the risk of fatal CCV events was analyzed using a competing risk method.
RESULTS
A total of 4,929 women underwent adjuvant RT. The median follow-up was 11.7 years. BRCW+NRT was associated with an increased risk of CCV death at 12 years (5% for BRCW+NRT vs. 3.5% for breast/chest wall RT alone; p = .004), but the fraction size was not (3.92% for a fraction size >2 Gy vs. 3.54% for a fraction size ≤2 Gy; p = .83). The 12-year absolute risk of death from stroke alone did not differ for either radiation volume (1.17% for BRCW+NRT vs. 0.8% for breast/chest wall RT alone; p = .22) or fraction size (p = .59).
CONCLUSION
Regional RT was associated with a small (1.5% at 12 years), but statistically significant, increased risk of death from a CCV event. The addition of regional RT did not significantly increase the risk of death from stroke, although the number of events was small. An increased fraction size was not significantly associated with a greater risk of fatal CCV events. These data support the continued use of hypofractionated adjuvant regional RT. |
Intrinsic Motivation and Reinforcement Learning | Psychologists distinguish between extrinsically motivated behavior, which is behavior undertaken to achieve some externally supplied reward, such as a prize, a high grade, or a high-paying job, and intrinsically motivated behavior, which is behavior done for its own sake. Is an analogous distinction meaningful for machine learning systems? Can we say of a machine learning system that it is motivated to learn, and if so, is it possible to provide it with an analog of intrinsic motivation? Despite the fact that a formal distinction between extrinsic and intrinsic motivation is elusive, this chapter argues that the answer to both questions is assuredly “yes” and that the machine learning framework of reinforcement learning is particularly appropriate for bringing learning together with what in animals one would call motivation. Despite the common perception that a reinforcement learning agent’s reward has to be extrinsic because the agent has a distinct input channel for reward signals, reinforcement learning provides a natural framework for incorporating principles of intrinsic motivation. |
Efficient Parallel Graph Exploration on Multi-Core CPU and GPU | Graphs are a fundamental data representation that has been used extensively in various domains. In graph-based applications, a systematic exploration of the graph such as a breadth-first search (BFS) often serves as a key component in the processing of their massive data sets. In this paper, we present a new method for implementing the parallel BFS algorithm on multi-core CPUs which exploits a fundamental property of randomly shaped real-world graph instances. By utilizing memory bandwidth more efficiently, our method shows improved performance over the current state-of-the-art implementation and increases its advantage as the size of the graph increases. We then propose a hybrid method which, for each level of the BFS algorithm, dynamically chooses the best implementation from: a sequential execution, two different methods of multicore execution, and a GPU execution. Such a hybrid approach provides the best performance for each graph size while avoiding poor worst-case performance on high-diameter graphs. Finally, we study the effects of the underlying architecture on BFS performance by comparing multiple CPU and GPU systems; a high-end GPU system performed as well as a quad-socket high-end CPU system. |
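The per-level structure the hybrid method operates on can be sketched as a level-synchronous BFS. This is an illustrative sequential sketch, not the paper's implementation; the paper's contribution is choosing, for each such level, the best executor (sequential, one of two multicore variants, or GPU) depending on frontier size:

```python
def bfs_levels(graph, source):
    """Level-synchronous BFS. graph: dict mapping node -> iterable of
    neighbors. Returns {node: BFS distance from source}."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:              # the hybrid method parallelizes this loop
            for v in graph.get(u, ()):
                if v not in dist:       # first visit fixes the BFS distance
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist
```

On low-diameter real-world graphs the frontier grows quickly, which favors the parallel executors; on high-diameter graphs many levels have tiny frontiers, which is why falling back to sequential execution per level avoids the worst case.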
Clinical significance of cytomegalovirus (CMV) antigenemia in the prediction and diagnosis of CMV gastrointestinal disease after allogeneic hematopoietic stem cell transplantation | Summary: To evaluate the clinical significance of a cytomegalovirus (CMV) antigenemia assay in the prediction and diagnosis of CMV gastrointestinal (CMV-GI) disease after hematopoietic stem cell transplantation (HSCT), 19 allogeneic HSCT recipients developing CMV-GI disease were retrospectively reviewed. All patients were monitored by a CMV antigenemia assay, at least once weekly after engraftment. The median onset of CMV-GI disease occurred 31 days post transplant (range: 19–62). Only four of 19 patients (21%) developed a positive CMV antigenemia test before developing CMV-GI disease. Although all 19 patients subsequently developed positive CMV antigenemia tests during their clinical courses, the values remained at a low level in nine (47%) patients. Among the 14 patients in whom results of real-time polymerase chain reaction (PCR) were available, seven (50%) yielded positive results of real-time PCR before developing CMV-GI disease. In contrast to the values of CMV antigenemia, all 14 patients exclusively yielded high viral loads (median: 2.8 × 10⁴ copies/ml plasma). We conclude that CMV antigenemia testing has limited value in prediction or early diagnosis of CMV-GI disease, and that real-time PCR could have a more diagnostic significance. |
Orthotic insoles do not prevent physical stress-induced low back pain | Orthotic insoles have been suggested to prevent low back pain. This randomized controlled study assessed whether customised orthotic insoles prevent low back pain. Healthy military conscripts (n = 228; mean age 19 years, range 18–29) were randomly assigned to use either customised orthotic insoles (treatment group, n = 73) or nothing (control group, n = 147). The main outcome measure was low back pain requiring a physician visit and resulting in a suspension of at least 1 day from military duty. Twenty-four (33%) treated subjects and 42 (27%) control subjects were suspended from duty due to low back pain (p = 0.37; risk difference 4.3%; 95% CI: −8.7 to 17.3%). Mean suspension duration was 2 days (range 1–7) in both groups. Four (5%) treated subjects and eight (5%) control subjects were released from duty due to persistent low back pain (p = 0.92; risk difference 0%; 95% CI: −6 to 6%). Use of orthotic insoles is therefore not recommended to prevent physical stress-related low back pain. |
Challenges in Security for Cyber-Physical Systems | The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. 
I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications. |
A Crowd Monitoring Framework using Emotion Analysis of Social Media for Emergency Management in Mass Gatherings | In emergency management for mass gatherings, the knowledge about crowd types can highly assist with providing timely response and effective resource allocation. Crowd monitoring can be achieved using computer vision based approaches and sensory data analysis. The emergence of social media platforms presents an opportunity to capture valuable information about how people feel and think. However, the literature shows that there are a limited number of studies that use social media in crowd monitoring and/or incorporate a unified crowd model for consistency and interoperability. This paper presents a novel framework for crowd monitoring using social media. It includes a standard crowd model to represent different types of crowds. The proposed framework considers the effect of emotion on crowd behaviour and uses the emotion analysis of social media to identify the crowd types in an event. An experiment using historical data to validate our framework is described. |
3D printing of soft robotic systems | |
Aqueous Extract of Kalmegh (Andrographis paniculata) Leaves as Green Inhibitor for Mild Steel in Hydrochloric Acid Solution | The inhibition of the corrosion of mild steel in hydrochloric acid solution by the extract of Kalmegh (Andrographis paniculata) leaves has been studied using weight loss, electrochemical impedance spectroscopy, linear polarization, and potentiodynamic polarization techniques. Inhibition was found to increase with increasing concentration of the extract. The effect of temperature, immersion time, and acid concentration on the corrosion behavior of mild steel in 1 M HCl with addition of the extract was also studied. The inhibition was assumed to occur via adsorption of the inhibitor molecules on the metal surface. The adsorption of the molecules of the extract on the mild steel surface obeyed the Langmuir adsorption isotherm. The protective film formed on the metal surface was analyzed by FTIR spectroscopy. The results obtained showed that the extract of Kalmegh (Andrographis paniculata) leaves could serve as an effective inhibitor of the corrosion of mild steel in hydrochloric acid media. |
Business Process Design by Reusing Business Process Fragments from the Cloud | The constant development of technologies forces companies to be more innovative in order to stay competitive. In fact, designing a process from scratch is time consuming, error prone and costly. In this context, companies are turning to the reuse of process fragments when designing a new process to ensure a high degree of efficiency with respect to delivery deadlines. However, reusing these fragments may disclose sensitive business activities, especially if the latter are deployed in an untrusted environment. In addition, companies are concerned about their users' privacy. To address these issues, we investigate how to build a new business process by reusing the safest existing fragments coming from various cloud servers, i.e., the ones that comply best with the company's preferences and policies and offer an appropriate level of safety. |
Social Affordance Tracking over Time - A Sensorimotor Account of False-Belief Tasks | False-belief tasks have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of high-level reasoning is computationally and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, failure in a false-belief test can be the result of an impaired ability to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of low-level signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which can simultaneously be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct false-belief inferences, while a lack thereof leads to failure. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems. |
Clustering items for collaborative filtering | This short paper reports on work in progress related to applying data partitioning/clustering algorithms to ratings data in collaborative filtering. We use existing data partitioning and clustering algorithms to partition the set of items based on user rating data. Predictions are then computed independently within each partition. Ideally, partitioning will improve the quality of collaborative filtering predictions and increase the scalability of collaborative filtering systems. We report preliminary results that suggest that partitioning algorithms can greatly increase scalability, but we have mixed results on improving accuracy. However, partitioning based on ratings data does result in more accurate predictions than random partitioning, and the results are similar to those when the data is partitioned based on a known content classification. |
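The partition-then-predict idea can be sketched as follows. This is an illustrative toy, not the paper's algorithm: given any item partition (e.g., from a clustering algorithm run on rating vectors), a user's rating for an item is predicted independently within that item's partition, here simply as the user's mean rating over other items in the same cluster:

```python
def predict(ratings, clusters, user, item, default=3.0):
    """Predict user's rating for item using only items in the same cluster.
    ratings: {(user, item): value}; clusters: {item: cluster_id}.
    Falls back to `default` when the user has no ratings in the cluster."""
    cid = clusters[item]
    in_cluster = [r for (u, i), r in ratings.items()
                  if u == user and i != item and clusters.get(i) == cid]
    return sum(in_cluster) / len(in_cluster) if in_cluster else default
```

Because each prediction touches only one partition's ratings, partitions can be stored and served independently, which is the scalability benefit the paper reports; accuracy then hinges on how well the partition groups genuinely similar items.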
An effective quality evaluation protocol for speech enhancement algorithms | Much progress has been made in speech enhancement algorithm formulation in recent years. However, while researchers in the speech coding and recognition communities have standard criteria for algorithm performance comparison, similar standards do not exist for researchers in speech enhancement. This paper discusses the necessary ingredients for an effective speech enhancement evaluation. We propose that researchers use the evaluation core test set of TIMIT (192 sentences), with a set of noise files, and a combination of objective measures and subjective testing for broad and fine phone-level quality assessment. Evaluation results include overall objective speech quality measure scores, measure histograms, and phoneme class and individual phone scores. The reported results are meant to illustrate specific ways of detailing quality assessment for an enhancement algorithm. |
19.7 A 79GHz binary phase-modulated continuous-wave radar transceiver with TX-to-RX spillover cancellation in 28nm CMOS | The demand for inexpensive and ubiquitous accurate motion-detection sensors for road safety, smart homes and robotics justifies the interest in single-chip mm-Wave radars: a high carrier frequency allows for a high angular resolution in a compact multi-antenna system, and a wide bandwidth allows for a high depth resolution. With the objective of single-chip radar systems, CMOS is the natural candidate to replace SiGe as a leading technology [1-6]. |
Uncovering ancient transcription systems with a novel evolutionary indicator | TBP and TFIIB are evolutionarily conserved transcription initiation factors in archaea and eukaryotes. Information about their ancestral genes would be expected to provide insight into the origin of the RNA polymerase II-type transcription apparatus. In obtaining such information, the nucleotide sequences of current genes of both archaea and eukaryotes should be included in the analysis. However, the present methods of evolutionary analysis require that a subset of the genes should be excluded as an outer group. To overcome this limitation, we propose an innovative concept for evolutionary analysis that does not require an outer group. This approach utilizes the similarity in intramolecular direct repeats present in TBP and TFIIB as an evolutionary measure revealing the degree of similarity between the present offspring genes and their ancestors. Information on the properties of the ancestors and the order of emergence of TBP and TFIIB was also revealed. These findings imply that, for evolutionarily early transcription systems billions of years ago, interaction of RNA polymerase II with transcription initiation factors and the regulation of its enzymatic activity was required prior to the accurate positioning of the enzyme. Our approach provides a new way to discuss mechanistic and system evolution in a quantitative manner. |
The relative contributions of MRI, SPECT, and PET imaging in epilepsy. | Functional and structural neuroimaging techniques are increasingly indispensable in the evaluation of epileptic patients for localization of the epileptic area as well as for understanding pathophysiology, propagation, and neurochemical correlates of chronic epilepsy. Although interictal single photon emission computed tomography (SPECT) imaging of cerebral blood flow is only moderately sensitive, ictal SPECT markedly improves yield. Positron emission tomography (PET) imaging of interictal cerebral metabolism is more sensitive than measurement of blood flow in temporal lobe epilepsy. Furthermore, PET has greater spatial resolution and versatility in that multiple tracers can image various aspects of cerebral function. Interpretation of all types of functional imaging studies is difficult and requires knowledge of time of most recent seizure activity and structural correlates. Only magnetic resonance imaging (MRI) can image the structural changes associated with the underlying epileptic process, and quantitative evidence of hippocampal volume loss has been highly correlated with seizure onset in medial temporal structures. Improved resolution and interpretation have made quantitative MRI more sensitive in temporal lobe epilepsy, as judged by pathology. When judged by electroencephalography (EEG), ictal SPECT and interictal PET have the highest sensitivity and specificity for temporal lobe epilepsy; these neuroimaging techniques have lower sensitivity and higher specificity for extratemporal EEG abnormalities. Regardless of the presence of structural abnormalities, functional imaging by PET or SPECT provides complementary information. Ideally these techniques should be used and interpreted together to improve the localization and understanding of epileptic brain. |
Cultural stereotypes as gatekeepers: increasing girls’ interest in computer science and engineering by diversifying stereotypes | Despite having made significant inroads into many traditionally male-dominated fields (e.g., biology, chemistry), women continue to be underrepresented in computer science and engineering. We propose that students' stereotypes about the culture of these fields-including the kind of people, the work involved, and the values of the field-steer girls away from choosing to enter them. Computer science and engineering are stereotyped in modern American culture as male-oriented fields that involve social isolation, an intense focus on machinery, and inborn brilliance. These stereotypes are compatible with qualities that are typically more valued in men than women in American culture. As a result, when computer science and engineering stereotypes are salient, girls report less interest in these fields than their male peers. However, altering these stereotypes-by broadening the representation of the people who do this work, the work itself, and the environments in which it occurs-significantly increases girls' sense of belonging and interest in the field. Academic stereotypes thus serve as gatekeepers, driving girls away from certain fields and constraining their learning opportunities and career aspirations. |
D'Alembertian Solutions of Inhomogeneous Linear Equations (differential, difference, and some other) | Let an Ore polynomial ring k[X; σ, δ] and a nonzero pseudo-linear map θ: K → K be given, where K is a σ, δ-compatible extension of the field k. Then we have the ring k[θ] of operators K → K. It is assumed that if a first-order equation Fy = 0, F ∈ k[θ], has a nonzero solution in a σ, δ-compatible extension of the field k, then the equation has a nonzero solution in K. These solutions form the set H_k ⊂ K of hyperexponential elements. An equation Py = 0, P ∈ k[θ], is called completely factorable if P can be decomposed into a product of first-order operators over k. Solutions of all completely factorable equations form the linear space A_k ⊂ K of d'Alembertian elements. The order of the minimal operator over k which annihilates a ∈ A_k is called the height of a. It is easy to see that H_k ⊂ A_k and the height of any a ∈ H_k is equal to 1. It is known ([12, 4]) that if L ∈ k[θ] and a ∈ H_k then all the hyperexponential solutions of the equation |
Probabilistic risk analysis and terrorism risk. | Since the terrorist attacks of September 11, 2001, and the subsequent establishment of the U.S. Department of Homeland Security (DHS), considerable efforts have been made to estimate the risks of terrorism and the cost effectiveness of security policies to reduce these risks. DHS, industry, and the academic risk analysis communities have all invested heavily in the development of tools and approaches that can assist decisionmakers in effectively allocating limited resources across the vast array of potential investments that could mitigate risks from terrorism and other threats to the homeland. Decisionmakers demand models, analyses, and decision support that are useful for this task and based on the state of the art. Since terrorism risk analysis is new, no single method is likely to meet this challenge. In this article we explore a number of existing and potential approaches for terrorism risk analysis, focusing particularly on recent discussions regarding the applicability of probabilistic and decision analytic approaches to bioterrorism risks and the Bioterrorism Risk Assessment methodology used by the DHS and criticized by the National Academies and others. |
BITE: Bitcoin Lightweight Client Privacy using Trusted Execution | Decentralized blockchains offer attractive advantages over traditional payments such as the ability to operate without a trusted authority and increased user privacy. However, the verification of blockchain payments requires the user to download and process the entire chain which can be infeasible for resource-constrained devices, such as mobile phones. To address such concerns, most major blockchain systems support lightweight clients that outsource most of the computational and storage burden to full blockchain nodes. However, such payment verification methods leak considerable information about the underlying clients, thus defeating user privacy that is considered one of the main goals of decentralized cryptocurrencies. In this paper, we propose a new approach to protect the privacy of lightweight clients in blockchain systems like Bitcoin. Our main idea is to leverage commonly available trusted execution capabilities, such as SGX enclaves. We design and implement a system called Bite where enclaves on full nodes serve privacy-preserving requests from lightweight clients. As we will show, naive serving of client requests from within SGX enclaves still leaks user information. Bite therefore integrates several privacy preservation measures that address external leakage as well as SGX side-channels. We show that the resulting solution provides strong privacy protection and at the same time improves the performance of current lightweight clients. |
The consequences of race for police officers' responses to criminal suspects. | The current work examined police officers' decisions to shoot Black and White criminal suspects in a computer simulation. Responses to the simulation revealed that upon initial exposure to the program, the officers were more likely to mistakenly shoot unarmed Black compared with unarmed White suspects. However, after extensive training with the program, in which the race of the suspect was unrelated to the presence of a weapon, the officers were able to eliminate this bias. These findings are discussed in terms of their implications for the elimination of racial biases and the training of police officers. |
Dynamic spectrotemporal feature selectivity in the auditory midbrain. | The transformation of auditory information from the cochlea to the cortex is a highly nonlinear process. Studies using tone stimuli have revealed that changes in even the most basic parameters of the auditory stimulus can alter neural response properties; for example, a change in stimulus intensity can cause a shift in a neuron's preferred frequency. However, it is not yet clear how such nonlinearities contribute to the processing of spectrotemporal features in complex sounds. Here, we use spectrotemporal receptive fields (STRFs) to characterize the effects of stimulus intensity on feature selectivity in the mammalian inferior colliculus (IC). At low intensities, we find that STRFs are relatively simple, typically consisting of a single excitatory region, indicating that the neural response is simply a reflection of the stimulus amplitude at the preferred frequency. In contrast, we find that STRFs at high intensities typically consist of a combination of an excitatory region and one or more inhibitory regions, often in a spectrotemporally inseparable arrangement, indicating selectivity for complex auditory features. We show that a linear-nonlinear model with the appropriate STRF can predict neural responses to stimuli with a fixed intensity, and we demonstrate that a simple extension of the model with an intensity-dependent STRF can predict responses to stimuli with varying intensity. These results illustrate the complexity of auditory feature selectivity in the IC, but also provide encouraging evidence that the prediction of nonlinear responses to complex stimuli is a tractable problem. |
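The linear-nonlinear model the abstract refers to can be sketched minimally: correlate an STRF with the stimulus spectrogram, then pass the result through a rectifying nonlinearity. All parameter names here are illustrative assumptions, not the paper's fitted model:

```python
def ln_response(strf, spectrogram, threshold=0.0, gain=1.0):
    """Linear-nonlinear sketch: correlate a spectrotemporal receptive
    field with a stimulus spectrogram, then rectify.

    strf: list of frequency rows, each a list of time lags (most recent
    lag last). spectrogram: the same frequency rows over a longer time
    axis. Returns one predicted firing rate per valid time bin.
    """
    n_f = len(strf)
    n_lag = len(strf[0])
    n_t = len(spectrogram[0])
    rates = []
    for t in range(n_lag - 1, n_t):
        # Linear stage: inner product of the STRF with the recent stimulus.
        drive = sum(strf[f][lag] * spectrogram[f][t - (n_lag - 1) + lag]
                    for f in range(n_f) for lag in range(n_lag))
        # Nonlinear stage: half-wave rectification with threshold and gain.
        rates.append(gain * max(0.0, drive - threshold))
    return rates
```

A single excitatory region (all-positive STRF) makes the response track stimulus amplitude at the preferred frequency, as the abstract describes for low intensities; adding negative (inhibitory) regions makes the model selective for more complex features.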
Learning to Learn from Noisy Web Videos | Understanding the simultaneously very diverse and intricately fine-grained set of possible human actions is a critical open problem in computer vision. Manually labeling training videos is feasible for some action classes but doesn't scale to the full long-tailed distribution of actions. A promising way to address this is to leverage noisy data from web queries to learn new actions, using semi-supervised or webly-supervised approaches. However, these methods typically do not learn domain-specific knowledge, or rely on iterative hand-tuned data labeling policies. In this work, we instead propose a reinforcement learning-based formulation for selecting the right examples for training a classifier from noisy web search results. Our method uses Q-learning to learn a data labeling policy on a small labeled training dataset, and then uses this to automatically label noisy web data for new visual concepts. Experiments on the challenging Sports-1M action recognition benchmark as well as on additional fine-grained and newly emerging action classes demonstrate that our method is able to learn good labeling policies for noisy data and use this to learn accurate visual concept classifiers. |
Cryptographic Sealing for Information Secrecy and Authentication | A new protection mechanism is described that provides general primitives for protection and authentication. The mechanism is based on the idea of sealing an object with a key. Sealed objects are self-authenticating, and in the absence of an appropriate set of keys, only provide information about the size of their contents. New keys can be freely created at any time, and keys can also be derived from existing keys with operators that include Key-And and Key-Or. This flexibility allows the protection mechanism to implement common protection mechanisms such as capabilities, access control lists, and information flow control. The mechanism is enforced with a synthesis of conventional cryptography, public-key cryptography, and a threshold scheme. |
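The paper's actual construction synthesizes conventional cryptography, public-key cryptography, and a threshold scheme; the sketch below is only a hash-based stand-in that illustrates the sealing interface and the Key-And/Key-Or derivation idea. All function names and the keystream construction are our own assumptions:

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    """Expand a key into an n-byte keystream (hash in counter mode)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(obj: bytes, key: bytes) -> bytes:
    """Seal = XOR with a key-derived stream plus an authenticity tag:
    without the key, only the object's length is revealed, and unsealing
    is self-authenticating. (Illustrative, not the paper's scheme.)"""
    body = bytes(a ^ b for a, b in zip(obj, _stream(key, len(obj))))
    tag = hashlib.sha256(key + obj).digest()
    return tag + body

def unseal(sealed: bytes, key: bytes) -> bytes:
    tag, body = sealed[:32], sealed[32:]
    obj = bytes(a ^ b for a, b in zip(body, _stream(key, len(body))))
    if hashlib.sha256(key + obj).digest() != tag:
        raise ValueError("wrong key: sealed object does not authenticate")
    return obj

def key_and(k1: bytes, k2: bytes) -> bytes:
    """Derive a key that requires BOTH constituent keys to reconstruct."""
    return hashlib.sha256(b"AND" + k1 + k2).digest()

def key_or(obj: bytes, keys: list) -> list:
    """EITHER key suffices: seal one copy of the object under each key."""
    return [seal(obj, k) for k in keys]
```

Note that this Key-Or realization trades storage for flexibility (one sealed copy per key), which hints at why the full design resorts to a threshold scheme for k-of-n access.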
A Brief Analysis of SMA Modified Asphalt Concrete Pavement Construction Technology | |
Predicting individual disease risk based on medical history | The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future disease risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks. |
An Evaluation of Prefiltered B-Spline Reconstruction for Quasi-Interpolation on the Body-Centered Cubic Lattice | In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively. |
Nurse-Led Medicines' Monitoring for Patients with Dementia in Care Homes: A Pragmatic Cohort Stepped Wedge Cluster Randomised Trial | BACKGROUND
People with dementia are susceptible to adverse drug reactions (ADRs). However, they are not always closely monitored for potential problems relating to their medicines: structured nurse-led ADR Profiles have the potential to address this care gap. We aimed to assess the number and nature of clinical problems identified and addressed and changes in prescribing following introduction of nurse-led medicines' monitoring.
DESIGN
Pragmatic cohort stepped-wedge cluster Randomised Controlled Trial (RCT) of structured nurse-led medicines' monitoring versus usual care.
SETTING
Five UK private sector care homes.
PARTICIPANTS
41 service users, taking at least one antipsychotic, antidepressant or anti-epileptic medicine.
INTERVENTION
Nurses completed the West Wales ADR (WWADR) Profile for Mental Health Medicines with each participant according to trial step.
OUTCOMES
Problems addressed and changes in medicines prescribed.
DATA COLLECTION AND ANALYSIS
Information was collected from participants' notes before randomisation and after each of five monthly trial steps. The impact of the Profile on problems found, actions taken and reduction in mental health medicines was explored in multivariate analyses, accounting for data collection step and site.
RESULTS
Five of 10 sites and 43 of 49 service users approached participated. Profile administration increased the number of problems addressed from a mean of 6.02 [SD 2.92] to 9.86 [4.48], effect size 3.84, 95% CI 2.57-4.11, P <0.001. For example, pain was more likely to be treated (adjusted Odds Ratio [aOR] 3.84, 1.78-8.30), and more patients attended dentists and opticians (aOR 52.76 [11.80-235.90] and 5.12 [1.45-18.03] respectively). Profile use was associated with reduction in mental health medicines (aOR 4.45, 1.15-17.22).
CONCLUSION
The WWADR Profile for Mental Health Medicines can improve the quality and safety of care, and warrants further investigation as a strategy to mitigate the known adverse effects of prescribed medicines.
TRIAL REGISTRATION
ISRCTN 48133332. |
ANFIS Approach for Navigation of Mobile Robots | This paper discusses navigation control of a mobile robot using an adaptive neuro-fuzzy inference system (ANFIS) in a real-world dynamic environment. In the ANFIS controller, a fuzzy layer follows the input layer, and the remaining layers are neural network layers. The adaptive neuro-fuzzy hybrid system combines the advantages of a fuzzy logic system, which handles explicit knowledge that can be explained and understood, and a neural network, which handles implicit knowledge acquired by learning. The inputs to the fuzzy logic layer are the front obstacle distance, left obstacle distance, right obstacle distance and target steering. A learning algorithm based on neural network techniques has been developed to tune the parameters of the fuzzy membership functions, which smooths the trajectory generated by the fuzzy logic system. Using the developed ANFIS controller, the mobile robots are able to avoid static and dynamic obstacles and reach the target successfully in cluttered environments. The experimental results agree well with the simulation results, supporting the validity of the developed approach. |
Calcium hydroxylapatite associated soft tissue necrosis: a case report and treatment guideline. | We present an uncommon case of nasal alar and facial necrosis following calcium hydroxylapatite filler injection performed elsewhere without direct physician supervision. The patient developed severe full-thickness necrosis of cheek and nasal alar skin 24 h after injections into the melolabial folds. Management prior to referral included oral antibiotics, prednisone taper, and referral to a dermatologist (day 3) who prescribed valacyclovir for a presumptive herpes zoster reactivation induced by the injection. Referral to our institution was made on day 11, and after herpetic outbreak was ruled out by a negative Tzanck smear, debridement with aggressive local wound care was initiated. After re-epithelialization and the fashioning of a custom intranasal stent to prevent vestibular stenosis, pulsed dye laser therapy was performed for wound modification. The patient healed with an acceptable cosmetic outcome. This report underscores the importance of facial vasculature anatomy, injection techniques, and identification of adverse events when using fillers. A current treatment paradigm for such events is also presented. |
Speed and power scaling of SRAM's | Simple models for the delay, power, and area of a static random access memory (SRAM) are used to determine the optimal organizations for an SRAM and study the scaling of their speed and power with size and technology. The delay is found to increase by about one gate delay for every doubling of the RAM size up to 1 Mb, beyond which the interconnect delay becomes an increasingly significant fraction of the total delay. With technology scaling, the nonscaling of threshold mismatches in the sense amplifiers is found to significantly impact the total delay in generations of 0.1 μm and below. |
Comparison of the efficacy of a demand oxygen delivery system with continuous low flow oxygen in subjects with stable COPD and severe oxygen desaturation on walking. | BACKGROUND
Provision of ambulatory oxygen using an intermittent pulsed flow regulated by a demand oxygen delivery system (DODS) greatly increases the limited supply time of standard portable gaseous cylinders. The efficacy of such a system has not previously been studied during submaximal exercise in subjects with severe chronic obstructive pulmonary disease (COPD) in whom desaturation is likely to be great and where usage is often most appropriate.
METHODS
Fifteen subjects with severe COPD and oxygen desaturation underwent six minute walk tests performed in random order to compare the efficacy of a demand oxygen delivery system (DODS) with continuous flow oxygen. Walk distance, breathlessness, oxygen saturation, resting time, and recovery time (objective and subjective) were recorded and compared for each walk.
RESULTS
Breathing continuous oxygen compared with baseline air breathing improved mean walk distance (295 m versus 271 m) and recovery time (47 seconds versus 112 seconds), whilst the lowest recorded saturation (81% versus 74%) and time desaturated below 90% (201 seconds versus 299 seconds) were reduced. When the DODS was compared with air breathing only the walk distance changed (283 m versus 271 m). A comparison of the DODS with continuous oxygen breathing showed the DODS to be less effective at oxygenating subjects with inferior lowest saturation (78% versus 81%), time spent below 90% (284 seconds versus 201 seconds), time to objective recovery (83 seconds versus 47 seconds), and walk distance (283 m versus 295 m).
CONCLUSIONS
Neither of the delivery systems was able to prevent desaturation in these subjects. The use of continuous flow oxygen, however, was accompanied by improvements in oxygenation, walk distance, and recovery time compared with air breathing. The DODS produced only a small increase in walk distance without elevation of oxygen saturation, but was inferior to continuous flow oxygen in most of the measured variables when compared directly. |
Clinical anatomy of the elbow and shoulder. | The elbow patients discussed herein feature common soft tissue conditions such as tennis elbow, golfer's elbow and olecranon bursitis. Relevant anatomical structures for these conditions can easily be identified and demonstrated by cross examination by instructors and participants. Shoulder patients usually present with rotator cuff tendinopathy, frozen shoulder, axillary neuropathy or suprascapular neuropathy. The structures involved in tendinopathy and frozen shoulder can be easily identified and demonstrated under normal conditions. The axillary and suprascapular nerves have surface landmarks but cannot be palpated. In both neuropathies, however, the physical findings are pathognomonic and will be discussed. |
An Overview of Data Mining Techniques Applied for Heart Disease Diagnosis and Prediction | Data mining techniques have been applied magnificently in many fields including business, science, the Web, cheminformatics, bioinformatics, and on different types of data such as textual, visual, spatial, real-time and sensor data. Medical data is still information rich but knowledge poor. There is a lack of effective analysis tools to discover the hidden relationships and trends in medical data obtained from clinical records. This paper reviews the state-of-the-art research on heart disease diagnosis and prediction. Specifically, we present an overview of the current research being carried out using data mining techniques to enhance heart disease diagnosis and prediction, including decision trees, Naive Bayes classifiers, K-nearest neighbour classification (KNN), support vector machines (SVM), and artificial neural network techniques. Results show that SVM and neural networks perform particularly well in predicting the presence of coronary heart disease (CHD). Decision trees after feature reduction are the recommended classifier for diagnosing cardiovascular disease (CVD). The performance of data mining techniques in detecting coronary artery disease (CAD), however, remains unsatisfactory (between 60%-75%), and further improvements should be pursued. |
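One of the surveyed techniques, KNN classification, can be sketched in a few lines. The toy feature vectors and labels below are our own illustration, not the review's clinical data; a real system would need validated features, scaling, and proper evaluation:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote over patient feature vectors.

    `train` is a list of (feature_vector, label) pairs whose features
    are assumed to be pre-scaled to comparable ranges; `query` is a
    feature vector for a new patient. Toy sketch only.
    """
    # Sort training examples by Euclidean distance to the query.
    nearest = sorted(train, key=lambda fv_y: math.dist(fv_y[0], query))
    # Majority vote among the k closest neighbours.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```

The same train/query interface generalizes to the other surveyed classifiers (decision trees, Naive Bayes, SVM, neural networks), which is what makes side-by-side comparison on one dataset straightforward.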
Application of implicit-explicit high order Runge-Kutta methods to discontinuous-Galerkin schemes | Despite the popularity of high-order explicit Runge–Kutta (ERK) methods for integrating semi-discrete systems of equations, ERK methods suffer from severe stability-based time step restrictions for very stiff problems. We implement a discontinuous Galerkin finite element method (DGFEM) along with recently introduced high-order implicit–explicit Runge–Kutta (IMEX-RK) schemes to overcome geometry-induced stiffness in fluid-flow problems. The IMEX algorithms solve the non-stiff portions of the domain using explicit methods, and isolate and solve the more expensive stiff portions using an L-stable, stiffly-accurate, explicit singly diagonally implicit Runge–Kutta (ESDIRK) method. Furthermore, we apply adaptive time-step controllers based on the embedded temporal error predictors. We demonstrate in a number of numerical test problems that IMEX methods in conjunction with efficient preconditioning become more efficient than explicit methods for systems exhibiting high levels of grid-induced stiffness. |
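The IMEX splitting idea can be illustrated with a first-order scheme (the paper itself uses high-order ESDIRK-based IMEX-RK pairs; this sketch and its names are only ours). For y' = -lam*y + f(t, y), the stiff linear term is treated implicitly (backward Euler) so the step size is not limited by lam, while the non-stiff term stays explicit:

```python
def imex_euler(y0, lam, f_explicit, h, steps):
    """First-order IMEX sketch for y' = -lam*y + f_explicit(t, y).

    Implicit (backward Euler) for the stiff linear term, explicit
    (forward Euler) for the rest, which for this linear stiff part
    gives the closed-form update:

        y_{n+1} = (y_n + h * f_explicit(t_n, y_n)) / (1 + h * lam)
    """
    y, t = y0, 0.0
    for _ in range(steps):
        y = (y + h * f_explicit(t, y)) / (1.0 + h * lam)
        t += h
    return y
```

With lam = 1000 and h = 0.1, a fully explicit Euler step would be wildly unstable (h*lam = 100), but the IMEX update decays monotonically, which is exactly the stability gain the abstract claims for geometry-induced stiffness.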
Vectorized total variation defined by weighted L infinity norm for utilizing inter channel dependency | Vectorized total variation (VTV) is a very successful convex regularizer for solving various color image recovery problems. Although the color channels of natural color images are closely related, existing variants of VTV cannot exploit this prior efficiently. We propose L∞-VTV, a convex regularizer that penalizes violations of such inter-channel dependency by employing a weighted L∞ (L-infinity) norm. We also introduce an effective algorithm for the image denoising problem using L∞-VTV. Experimental results show that our proposed method outperforms conventional methods. |
[The prevalence of atrophic gastritis and intestinal metaplasia according to gender, age and Helicobacter pylori infection in a rural population]. | OBJECTIVES
The objective of this study was to evaluate the prevalence of atrophic gastritis and intestinal metaplasia according to gender, age and Helicobacter pylori infection in a rural population in Korea.
METHODS
Between April 2003 and January 2007, 713 subjects (298 men and 415 women, age range: 18-85) among the 2,161 adults who participated in a population-based survey received gastrointestinal endoscopy. All the subjects provided informed consent. Multiple biopsy specimens were evaluated for the presence of atrophic gastritis and intestinal metaplasia. The presence of Helicobacter pylori was determined using CLO and histology testing.
RESULTS
The age-adjusted prevalence of atrophic gastritis was 42.7% for men and 38.1% for women and the prevalence of intestinal metaplasia was 42.5% for men and 32.7% for women. The prevalence of atrophic gastritis and intestinal metaplasia increased significantly with age for both men and women (p for trend<0.001). The age-adjusted prevalence of Helicobacter pylori was similar for men (59.0%) and women (56.7%). The subjects with Helicobacter pylori infection showed a significantly higher prevalence of intestinal metaplasia (44.3%) compared with that (26.8%) of the noninfected subjects (p<0.001). However, the prevalence of atrophic gastritis was not statistically different between the Helicobacter pylori-infected subjects and the noninfected individuals.
CONCLUSIONS
Our findings suggest that the prevalence of atrophic gastritis and intestinal metaplasia is higher for a Korean rural population than that for a Western population; this may be related to the high incidence of gastric cancer in Koreans. Especially, the prevalence of intestinal metaplasia was high for the subjects with Helicobacter pylori infection. The multistep process of gastric carcinogenesis and the various factors contributing to each step of this process need to be determined by conducting future follow-up studies. |
Control performance in the horizontal plane of a fish robot with mechanical pectoral fins | The mechanism of locomotion of aquatic animals can provide us with new insight into the maneuverability and stabilization of underwater robots. This paper focuses on biomimesis in the maneuvering performance of aquatic animals to develop a new device for maneuvering underwater robots. In this paper, guidance and control in the horizontal plane of a fish robot equipped with a pair of two-motor-driven mechanical pectoral fins on both sides of the robot in water currents is presented. The fish robot demonstrates high performance in terms of maneuverability in such activities as lateral swimming. The use of fuzzy control enables the fish robot to perform rendezvous and docking with an underwater post in water currents. |
Towards an Intelligent Network for Matching Offer and Demand: from the sharing economy to the Global Brain | We analyze the role of the Global Brain in the sharing economy, by synthesizing the notion of distributed intelligence with Goertzel’s concept of an offer network. An offer network is an architecture for a future economic system based on the matching of offers and demands without the intermediate of money. Intelligence requires a network of condition-action rules, where conditions represent challenges that elicit action in order to solve a problem or exploit an opportunity. In society, opportunities correspond to offers of goods or services, problems to demands. Tackling challenges means finding the best sequences of condition-action rules to connect all demands to the offers that can satisfy them. This can be achieved with the help of AI algorithms working on a public database of rules, demands and offers. Such a system would provide a universal medium for voluntary collaboration and economic exchange, efficiently coordinating the activities of all people on Earth. It would replace and subsume the patchwork of commercial and community-based sharing platforms presently running on the Internet. It can in principle resolve the traditional problems of the capitalist economy: poverty, inequality, externalities, poor sustainability and resilience, booms and busts, and the neglect of non-monetizable values. |
One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations | Discourse relations bind smaller linguistic units into coherent texts. Automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. We also perform a downward compositional pass to capture the meaning of coreferent entity mentions. Implicit discourse relations are then predicted from these two representations, obtaining substantial improvements on the Penn Discourse Treebank. |
MultiLabel Classification on Tree- and DAG-Structured Hierarchies | Many real-world applications involve multilabel classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multilabel classification algorithm which can be used on both tree- and DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as finding the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. The proposed method consistently outperforms the state-of-the-art method on both tree- and DAG-structured hierarchies. |
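The consistency constraint the abstract describes (a label may be predicted only if its ancestors are too) can be illustrated with a simple greedy pass; this is a simplified stand-in for the paper's optimal-subgraph search, and the threshold and names are our own assumptions:

```python
def greedy_consistent_labels(scores, parents, threshold=0.5):
    """Greedy sketch of consistent hierarchical multilabel prediction.

    `scores` maps label -> classifier confidence; `parents` maps label
    -> list of parent labels in the tree/DAG. A label is selectable
    only once ALL its parents are selected, so the result is always a
    consistent subgraph rooted at the hierarchy's top.
    """
    selected = set()
    while True:
        eligible = [l for l in scores
                    if l not in selected
                    and all(p in selected for p in parents.get(l, []))
                    and scores[l] > threshold]
        if not eligible:
            return selected
        selected.add(max(eligible, key=lambda l: scores[l]))
```

Note how a low-scoring internal node ("b" in the test below) blocks its confident descendant ("c"): enforcing consistency, rather than thresholding each label independently, is the whole point.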
Enriching Literature Reviews with Computer-Assisted Research Mining. Case: Profiling Group Support Systems Research | In this paper we discuss and demonstrate how traditional literature reviews may be enriched by computer-assisted research profiling. Research profiling makes use of sophisticated text mining tools designed for structured science and technology information resources, such as the ISI Web of Science, INSPEC or ABI/INFORM ProQuest. Besides aiding in summarizing and visualizing knowledge domains, these research mining tools act as interactive decision support systems for researchers. We illustrate research profiling with 2,000 publications on group support systems between the years 1982 and 2005.
Class vs. Student in a Bayesian Network Student Model | For decades, intelligent tutoring systems researchers have been developing various methods of student modeling. Most of the models, including two of the most popular approaches, the Knowledge Tracing model and Performance Factor Analysis, share a similar assumption: the information needed to model the student is the student's performance. However, there are other sources of information that are not utilized, such as the performance of other students in the same class. This paper extends the Student-Skill extension of Knowledge Tracing to take the class information into account, and learns four parameters (prior knowledge, learn, guess and slip) for each class of students enrolled in the system. The paper then compares the accuracy of using the four parameters for each class versus the four parameters for each student, to find out which parameter set works better in predicting student performance. The result shows that modeling at coarser grain sizes can actually result in higher predictive accuracy, and that data about classmates' performance results in higher predictive accuracy on unseen test data.
High-density lipoprotein restores endothelial function in hypercholesterolemic men. | BACKGROUND
Hypercholesterolemia is a risk factor for atherosclerosis, causing endothelial dysfunction, an early event in the disease process. In contrast, high-density lipoprotein (HDL) cholesterol inversely correlates with morbidity and mortality, representing a protective effect. Therefore, we investigated the effects of reconstituted HDL on endothelial function in hypercholesterolemic men.
METHODS AND RESULTS
Endothelium-dependent and -independent vasodilation to intraarterial acetylcholine and sodium nitroprusside (SNP), respectively, was measured by forearm venous occlusion plethysmography in healthy normo- and hypercholesterolemic men. In hypercholesterolemics, the effects of reconstituted HDL (rHDL; 80 mg/kg IV over 4 hours) on acetylcholine- and SNP-induced changes in forearm blood flow were assessed in the presence or absence of the nitric oxide (NO) synthase inhibitor L-NMMA. Hypercholesterolemics showed reduced vasodilation to acetylcholine but not to SNP compared with normocholesterolemics (P<0.0001). rHDL infusion increased plasma HDL cholesterol from 1.3+/-0.1 to 2.2+/-0.1 mmol/L (P<0.0001, n=18) and significantly enhanced the acetylcholine-induced increase in forearm blood flow without affecting that induced by SNP. rHDL infusion also improved flow-mediated dilation of the brachial artery (to 4.5+/-0.9% from 2.7+/-0.6%, P=0.02). NO synthase inhibition prevented the improvement in acetylcholine-induced vasodilation while leaving the response to SNP unchanged. Albumin infusion in an equivalent protein dose had no effect on vasomotion or lipid levels.
CONCLUSIONS
In hypercholesterolemic patients, intravenous rHDL infusion rapidly normalizes endothelium-dependent vasodilation by increasing NO bioavailability. This may in part explain the protective effect of HDL from coronary heart disease and illustrates the potential therapeutic benefit of increasing HDL in patients at risk from atherosclerosis. |
Routing Protocols for Vehicular Ad hoc Networks (VANETs): A Review | Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network, such as high mobility of nodes, dynamically changing topology and a highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes, and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is placed on adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.
Respiratory rate, heart rate and continuous measurement of BP using PPG | This study uses photoplethysmographic (PPG) sensors to measure blood volumetric changes in the human body. The objective is to measure three parameters: heart rate, respiratory rate and blood pressure (BP). The PPG signal is acquired via a PPG sensor, a microcontroller and RS-232, and displayed in MATLAB. Frequency-domain analysis of the PPG signal shows two peaks: the first at around 0.25 to 0.35 Hz and the second at around 1 to 1.5 Hz. A peak at 1 Hz corresponds to 60 BPM, and a peak at 0.25 Hz corresponds to 15 respiratory cycles per minute. For BP measurement, the pulse height of the PPG signal is proportional to the difference between the systolic and the diastolic pressure in the arteries. A standard blood pressure monitoring instrument is used to calculate correlation coefficients, and the arterial blood pressure is calculated from these coefficients. The PPG signal is used to detect blood pressure pulsations in a finger, achieving an accuracy of (0.8 ± 7) mmHg and (0.9 ± 6) mmHg for systolic and diastolic pressure, respectively. The developed PPG-based method can be used as a noninvasive alternative to conventional occluding-cuff approaches for long-term and continuous monitoring of blood pressure, heart rate and respiratory rate.
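The FFT-based rate estimation described in the abstract above can be sketched as follows. This is a minimal illustration on a synthetic signal, not the authors' implementation; the band limits, sampling rate and function names are assumptions.

```python
import numpy as np

def rates_from_ppg(signal, fs):
    """Estimate heart rate and respiratory rate (per minute) from a PPG
    trace by locating the dominant FFT peak in each frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mag = np.abs(np.fft.rfft(signal))

    def peak_freq(lo, hi):
        band = (freqs >= lo) & (freqs <= hi)
        return freqs[band][np.argmax(mag[band])]

    resp_hz = peak_freq(0.15, 0.5)   # respiratory band (~0.25-0.35 Hz)
    heart_hz = peak_freq(0.8, 2.0)   # cardiac band (~1-1.5 Hz)
    return heart_hz * 60.0, resp_hz * 60.0

# Synthetic PPG: 1.2 Hz cardiac component over a 0.3 Hz respiratory baseline.
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 0.3 * t)

hr, rr = rates_from_ppg(ppg, fs)  # ~72 BPM, ~18 breaths/min
```

A 60 s window gives a frequency resolution of 1/60 Hz, which is why the two synthetic components land exactly on FFT bins here; real PPG traces would need longer windows or peak interpolation.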
Supernumerary nostril: a case report | BACKGROUND
Supernumerary nostril is a congenital anomaly consisting of an additional nostril with or without accessory cartilage. These rare congenital nasal deformities result from embryological defects. Lindsay (Trans Pathol Soc Lond. 57:329-330, 1906) published the first report of bilateral supernumerary nostrils in 1906, and only 34 cases have been reported since in the English literature.
CASE PRESENTATION
A 1-year-old female baby was brought to our department for the treatment of an accessory opening above the left nostril, which had been present since birth. Her medical history was non-specific and her birth was normal. The supernumerary nostril was about 0.2 cm in diameter and connected to the left nostril. The right nostril was normal. A minimal procedure was performed for the anomaly. After 1 year, rhinoplasty was performed for the nostril asymmetry.
CONCLUSIONS
At 1-year follow-up, the functional and cosmetic result was satisfactory. This case highlights the importance of early preoperative diagnosis, and corrective surgery should be performed as soon as possible for the patient's psychosocial growth.
FALL DETECTION AND PREVENTION FOR THE ELDERLY: A REVIEW OF TRENDS AND CHALLENGES | It is of little surprise that falling is often accepted as a natural part of the aging process. In fact, it is the impact rather than the occurrence of falls in the elderly that is of most concern. Aging people are typically frailer, more unsteady, and have slower reactions, and thus are more likely to fall and be injured than younger individuals. Research and industry have presented various practical solutions for assisting the elderly and their caregivers against falls, by detecting falls and triggering notification alarms calling for help as soon as falls occur in order to diminish fall consequences. Furthermore, fall likelihood prediction systems have emerged lately, based on the medical and behavioral history of elderly patients, in order to predict the possibility of fall occurrence. Accordingly, a response from caregivers may be triggered prior to most fall occurrences and thereby prevent falls from taking place. This paper presents an extensive review of the state-of-the-art trends and technologies of fall detection and prevention systems assisting elderly people and their caregivers (International Journal on Smart Sensing and Intelligent Systems, Vol. 6, No. 3, June 2013).
Hepatitis C virus therapy update 2013. | PURPOSE OF REVIEW
We review here the recent literature regarding hepatitis C virus (HCV) therapy through January 2013. We discuss current therapies, targets for new therapies, and what might be expected in this rapidly changing field.
RECENT FINDINGS
Boceprevir-based and telaprevir-based triple therapy with pegylated interferon and ribavirin marked the beginning of a new era in HCV therapy for genotype 1 patients. New direct-acting antivirals (DAAs) are being developed and new antiviral drug targets are being explored. New combination treatment regimens are expected to emerge soon and there is hope for interferon-free regimens.
SUMMARY
The standard of care for treatment of HCV genotype 1 changed dramatically with the approval of two new DAA drugs--telaprevir and boceprevir--for use in pegylated interferon-based and ribavirin-based triple therapy in mid-2011. Experience has shown improved response rates and treatment durations for many patients with genotype 1 HCV infection. However, persistent limitations to HCV treatment still exist for patients with prior treatment failure and comorbid conditions and patients on newer therapies suffer additional therapy-limiting side effects and drug-drug interactions. Genetic testing may provide some guidance but additional options for therapy are still needed for HCV. Many new drugs are currently under investigation and there is hope that effective and well tolerated interferon-free regimens may become a part of future therapy. |
Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition | This paper explores multi-task learning (MTL) for face recognition. First, we propose a multi-task convolutional neural network (CNN) for face recognition, where identity classification is the main task and pose, illumination, and expression (PIE) estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weights to each side task, which solves the crucial problem of balancing between different tasks in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses in a joint framework. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the PIE variations from the learnt identity features. Extensive experiments on the entire multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in multi-PIE for face recognition. Our approach is also applicable to in-the-wild data sets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets. |
Mobile divides: gender, socioeconomic status, and mobile phone use in Rwanda | We combine data from a field survey with transaction log data from a mobile phone operator to provide new insight into daily patterns of mobile phone use in Rwanda. The analysis is divided into three parts. First, we present a statistical comparison of the general Rwandan population to the population of mobile phone owners in Rwanda. We find that phone owners are considerably wealthier, better educated, and more predominantly male than the general population. Second, we analyze patterns of phone use and access, based on self-reported survey data. We note statistically significant differences by gender; for instance, women are more likely to use shared phones than men. Third, we perform a quantitative analysis of calling patterns and social network structure using mobile operator billing logs. By these measures, the differences between men and women are more modest, but we observe vast differences in utilization between the relatively rich and the relatively poor. Taken together, the evidence in this paper suggests that phones are disproportionately owned and used by the privileged strata of Rwandan society. |
The stem cell secretome and its role in brain repair | Compelling evidence exists that non-haematopoietic stem cells, including mesenchymal (MSCs) and neural/progenitor stem cells (NPCs), exert a substantial beneficial and therapeutic effect after transplantation in experimental central nervous system (CNS) disease models through the secretion of immune modulatory or neurotrophic paracrine factors. This paracrine hypothesis has inspired an alternative outlook on the use of stem cells in regenerative neurology. In this paradigm, significant repair of the injured brain may be achieved by injecting the biologics secreted by stem cells (secretome), rather than implanting stem cells themselves for direct cell replacement. The stem cell secretome (SCS) includes cytokines, chemokines and growth factors, and has gained increasing attention in recent years because of its multiple implications for the repair, restoration or regeneration of injured tissues. Thanks to recent improvements in SCS profiling and manipulation, investigators are now inspired to harness the SCS as a novel alternative therapeutic option that might ensure more efficient outcomes than current stem cell-based therapies for CNS repair. This review discusses the most recent identification of MSC- and NPC-secreted factors, including those that are trafficked within extracellular membrane vesicles (EVs), and reflects on their potential effects on brain repair. It also examines some of the most convincing advances in molecular profiling that have enabled mapping of the SCS. |
PERTURBATIONS IN BOUNCING COSMOLOGICAL MODELS | I describe the features and general properties of bouncing models and the evolution of cosmological perturbations on such backgrounds. I will outline possible observational consequences of the existence of a bounce in the primordial Universe and I will make a comparison of these models with standard long inflationary scenarios. |
Does availability of AIR insulin increase insulin use and improve glycemic control in patients with type 2 diabetes? | BACKGROUND
In the concordance model, physician and patient discuss treatment options, explore the impact of treatment decisions from the patient's perspective, and make treatment choices together. We tested, in a concordance setting, whether the availability of AIR inhaled insulin (developed by Alkermes, Inc. [Cambridge, MA] and Eli Lilly and Company [Indianapolis, IN]; AIR is a registered trademark of Alkermes, Inc.), as compared with existing treatment options alone, leads to greater initiation and maintenance of insulin therapy and improves glycemic control in patients with type 2 diabetes.
METHODS
This was a 9-month, multicenter, parallel, open-label study in adult, nonsmoking patients with diabetes not optimally controlled by two or more oral antihyperglycemic medications. Patients were randomized to the Standard Options group (n = 516), in which patients chose a regimen from drugs in each major treatment class excluding inhaled insulin, or the Standard Options + AIR insulin group (n = 505), in which patients had the same choices plus AIR insulin. The primary end points were the proportion of patients in each group using insulin at end point and change in hemoglobin A1C (A1C) from baseline to end point.
RESULTS
At end point, 53% of patients in the Standard Options group and 59% in the Standard Options + AIR insulin group were using insulin (P = 0.07). Both groups reduced A1C by about 1.2% and reported increased well-being and treatment satisfaction. The most common adverse event with AIR insulin was transient cough.
CONCLUSIONS
The opportunity to choose AIR insulin did not affect overall use of insulin at end point or A1C outcomes. Regardless of group assignment, utilizing a shared decision-making approach to treatment choices (the concordance model) resulted in improved treatment satisfaction and A1C values at end point. Therefore, increasing patient involvement in treatment decisions may improve outcomes.
Context-aware collection, decision, and distribution (C2D2) engine for multi-dimensional adaptation in vehicular networks | Wireless vehicular networks have highly complex and dynamic channel state leading to a challenging environment to maintain connectivity and/or achieve high levels of throughput while also satisfying latency requirements of diverse vehicular applications. Adaptation over a large parameter space such as multiple frequency bands, novel modulation and coding schemes, and routing protocols is important in achieving good performance in this challenging setting. Vehicles now include a plethora of sensors which can be used to establish a clearer notion of the environmental context. However, while it is well understood that wireless performance greatly depends on this contextual information, protocols that leverage this information to improve wireless performance have yet to be fully developed. In this work, we will lay a foundation for developing context-aware intelligence to interface with existing adaptation protocols at multiple layers of the network stack. The core of this system consists of a context-aware collection, decision, and distribution (C2D2) engine. We give a brief overview of the architecture, design, and operation of the C2D2 engine. |
The Assessment of Goal Commitment: A Measurement Model Meta-Analysis. | Goals are central to current treatments of work motivation, and goal commitment is a critical construct in understanding the relationship between goals and performance. Inconsistency in the measurement of goal commitment hindered early research in this area but the nine-item, self-report scale developed by Hollenbeck, Williams, and Klein (1989b), and derivatives of that scale, have become the most commonly used measures of goal commitment. Despite this convergence, a few authors, based on small sample studies, have raised questions about the dimensionality of this measure. To address the conflicting recommendations in the literature regarding what items to use in assessing goal commitment, the current study combines the results of 17 independent samples and 2918 subjects to provide a more conclusive assessment by combining meta-analytic and multisample confirmatory factor analytic techniques. This effort reflects the first combined use of these techniques to test a measurement model and allowed for the creation of a database substantially larger than that of previously factor analyzed samples containing these scale items. By mitigating sampling error, the results clarified a number of debated issues that have arisen out of previous small sample factor analyses and revealed a five-item scale that is unidimensional and equivalent across measurement timing, goal origin, and task complexity. It is recommended that this five-item scale be used in future research assessing goal commitment. Copyright 2001 Academic Press. |
Frequently hospitalised psychiatric patients: a study of predictive factors | The purpose of this study was to investigate the factors predicting readmission and the interval between readmissions to psychiatric hospital during the early 1990s in Finland. Data were retrieved using the national register of all discharges from psychiatric hospitals during the early 1990s. Frequently admitted patients were an identifiable group. The factors associated with an increased risk of multiple readmissions were: previous admissions, long length of stay (LOS) and diagnosis of psychosis or personality disorder. Patients with psychosis or personality disorder were also readmitted more rapidly than patients with an organic disorder. There seemed to be a small proportion of psychiatric patients in need of frequent or lengthy hospital treatment. The expansion of community care did not as such seem to have diminished the need and use of psychiatric hospital care. However, the differences between the years 1990 and 1993 were less important than the other factors that predicted readmission, namely LOS and diagnosis. |
A framework for Arabic sentiment analysis using supervised classification | Sentiment analysis aims to determine the polarity that is embedded in people's comments and reviews. Sentiment analysis is important for companies and organisations interested in evaluating their products or services. The current paper deals with sentiment analysis of Arabic reviews. Three classifiers were applied to an in-house developed dataset of tweets/comments: Naïve Bayes, SVM and K-nearest neighbour. This paper also addresses the effects of term weighting schemes on the accuracy of the results. The binary model, term frequency, and term frequency-inverse document frequency were used to assign weights to the tokens of tweets/comments. The results show that alternating between the three weighting schemes only slightly affects the accuracies. The results also clarify that the classifiers were able to reject false examples (high precision) but were not as successful in identifying all correct examples (low recall).
The long-term impact of the physical, emotional, and sexual abuse of children: a community study. | The associations between giving a history of physical, emotional, and sexual abuse in childhood and a range of mental health, interpersonal, and sexual problems in adult life were examined in a community sample of women. Abuse was defined so as to establish groups giving histories of unequivocal victimization. A history of any form of abuse was associated with increased rates of psychopathology, sexual difficulties, decreased self-esteem, and interpersonal problems. The similarities between the three forms of abuse in terms of their association with negative adult outcomes were more apparent than any differences, though there was a trend for sexual abuse to be particularly associated with sexual problems, emotional abuse with low self-esteem, and physical abuse with marital breakdown. Abuse of all types was more frequent in those from disturbed and disrupted family backgrounds. The background factors associated with reports of abuse were themselves often associated with the same range of negative adult outcomes as abuse. Logistic regressions indicated that some, though not all, of the apparent associations between abuse and adult problems were accounted for by this matrix of childhood disadvantage from which abuse so often emerged.
Study protocol of a randomised controlled trial comparing perioperative intravenous insulin, GIK or GLP-1 treatment in diabetes–PILGRIM trial | BACKGROUND
Diabetes mellitus (DM) is associated with poor outcome after surgery. The prevalence of DM in hospitalised patients is up to 40%, meaning that the anaesthesiologist will encounter a patient with DM in the operating room on a daily basis. Despite an abundance of published glucose lowering protocols and the known negative outcomes associated with perioperative hyperglycaemia in DM, there is no evidence regarding the optimal intraoperative glucose lowering treatment. In addition, protocol adherence is usually low and protocol targets are often not met. Recently, incretins have been introduced to lower blood glucose. The main hormone of the incretin system is glucagon-like peptide-1 (GLP-1). GLP-1 increases insulin and decreases glucagon secretion in a glucose-dependent manner, resulting in glucose lowering action with a low incidence of hypoglycaemia. We set out to determine the optimal intraoperative treatment algorithm to lower glucose in patients with DM type 2 undergoing non-cardiac surgery, comparing intraoperative glucose-insulin-potassium infusion (GIK), an insulin bolus regimen (BR) and GLP-1 (liraglutide, LG) treatment.
METHODS/DESIGN
This is a multicentre randomised open-label trial in patients with DM type 2 undergoing non-cardiac surgery. Patients are randomly assigned to one of three study arms: intraoperative glucose-insulin-potassium infusion (GIK), intraoperative sliding-scale insulin boluses (BR), or GLP-1 pre-treatment with liraglutide (LG). Capillary glucose will be measured every hour. If necessary, in all study arms glucose will be adjusted with an intravenous bolus of insulin. Researchers, caregivers and patients will not be blinded to the assigned treatment. The main outcome measure is the difference in median glucose between the three study arms at 1 hour postoperatively. We will include 315 patients, which gives us 90% power to detect a 1 mmol l(-1) difference in glucose between the study arms.
DISCUSSION
The PILGRIM trial started in January 2014 and will provide relevant information on the perioperative use of GLP-1 agonists and the optimal intraoperative treatment algorithm in patients with diabetes mellitus type 2.
TRIAL REGISTRATION
ClinicalTrials.gov, NCT02036372. |
Annotating Sentiment and Irony in the Online Italian Political Debate on #labuonascuola | In this paper we present the TWitterBuonaScuola corpus (TW-BS), a novel Italian linguistic resource for Sentiment Analysis, developed with the main aim of analyzing the online debate on the controversial Italian political reform "Buona Scuola" (Good School), aimed at reorganizing the national educational and training systems. We describe the methodologies applied in the collection and annotation of the data. The collection was driven by the detection of the hashtags mainly used by the participants in the debate, while the annotation focused on sentiment polarity and irony, and was also extended to mark the aspects of the reform that were mainly discussed in the debate. An in-depth study of the disagreement among annotators, carried out with CrowdFlower, a crowdsourcing annotation platform, is included.
Core, animal reminder, and contamination disgust: Three kinds of disgust with distinct personality, behavioral, physiological, and clinical correlates | We examined the relationships between sensitivity to three kinds of disgust (core, animal-reminder, and contamination) and personality traits, behavioral avoidance, physiological responding, and anxiety disorder symptoms. Study 1 revealed that these disgusts are particularly associated with neuroticism and behavioral inhibition. Moreover, the three disgusts showed a theoretically consistent pattern of relations on four disgust-relevant behavioral avoidance tasks in Study 2. Similar results were found in Study 3, such that core disgust was significantly related to increased physiological responding during exposure to vomit, while animal-reminder disgust was specifically related to physiological responding during exposure to blood. Lastly, Study 4 revealed that each of the three disgusts showed a different pattern of relations with fear of contamination, fear of animals, and fear of blood-injury relevant stimuli. These findings provide support for the convergent and divergent validity of core, animal-reminder, and contamination disgust. These findings also highlight the possibility that the three kinds of disgust may manifest as a function of different psychological mechanisms (i.e., oral incorporation, mortality defense, disease avoidance) that may give rise to different clinical conditions. However, empirical examination of the mechanisms that underlie the three disgusts will require further refinement of the psychometric properties of the disgust scale. © 2008 Elsevier Inc. All rights reserved.
SpikeNET: A simulator for modeling large networks of integrate and fire neurons | SpikeNET is a simulator for modeling large networks of asynchronously spiking neurons. It uses simple integrate-and-fire neurons which undergo step-like changes in membrane potential when synaptic inputs arrive. If a threshold is exceeded, the potential is reset and the neuron added to a list to be propagated on the next time step. Using such spike lists greatly reduces the computations associated with large networks, and simplifies implementations using parallel hardware since inter-processor communication can be limited to sending lists of the neurons which just fired. We have used it to model complex multi-layer architectures based on the primate visual system that involve millions of neurons and billions of synaptic connections. Such models are not only biological but also efficient, robust and very fast, qualities which they share with the human visual system. |
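The spike-list scheme described in the abstract above can be sketched in a few lines. This is a hedged illustration of the general idea, not SpikeNET itself; the network, drive function and names are assumptions. The point is that only neurons that fired on the previous step propagate input, so work per step scales with activity rather than network size.

```python
import numpy as np

def simulate(weights, external, threshold=1.0, steps=10):
    """Event-driven integrate-and-fire: weights[i][j] is the synaptic
    weight from neuron i to neuron j; external(t) is the input drive."""
    n = weights.shape[0]
    v = np.zeros(n)              # membrane potentials
    spike_list = []              # neurons that fired on the previous step
    history = []
    for t in range(steps):
        v += external(t)         # external/feed-forward drive
        for pre in spike_list:   # propagate only last step's spikes
            v += weights[pre]
        spike_list = list(np.nonzero(v >= threshold)[0])
        v[spike_list] = 0.0      # reset neurons that crossed threshold
        history.append(spike_list)
    return history

# Toy chain: neuron 0 excites 1, which excites 2; only 0 is driven.
w = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
drive = lambda t: np.array([1.0, 0.0, 0.0])
spikes = simulate(w, drive, steps=4)
# Activity propagates one synapse per step: [0], then [0, 1], then [0, 1, 2].
```

In a parallel implementation, as the abstract notes, inter-processor communication reduces to exchanging these per-step spike lists.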
Salt and Pepper Noise Detection and removal by Modified Decision based Unsymmetrical Trimmed Median Filter for Image Restoration | In this paper, six different image filtering algorithms are compared on their ability to reconstruct noise-affected images. The purpose of these algorithms is to remove noise that may corrupt an image during transmission. A new algorithm, the Spatial Median Filter, is introduced and compared with current image smoothing techniques. Experimental results demonstrate that the proposed algorithm is comparable to these techniques and shows better results than the Standard Median Filter (MF), Decision Based Algorithm (DBA), Modified Decision Based Algorithm (MDBA), and Progressive Switched Median Filter (PSMF). The proposed algorithm is tested against different grayscale and color images and gives a better Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF).
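The decision-based unsymmetric trimmed median filtering named in the title above can be sketched as follows. This is a hedged illustration of the general technique, not the authors' exact algorithm; the 3x3 window and the mean fallback are assumptions.

```python
import numpy as np

def trimmed_median_filter(img):
    """Treat only pixels equal to 0 or 255 as salt-and-pepper noise and
    replace each with the median of the non-extreme values in its 3x3
    window; noise-free pixels pass through unchanged."""
    out = img.astype(float).copy()
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] not in (0, 255):
                continue                      # noise-free: keep as is
            win = padded[y:y + 3, x:x + 3].ravel()
            trimmed = win[(win != 0) & (win != 255)]
            # If every neighbour is extreme, fall back to the window mean.
            out[y, x] = np.median(trimmed) if trimmed.size else win.mean()
    return out.astype(img.dtype)

# A uniform patch with one salt and one pepper pixel is fully restored.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255
img[0, 0] = 0
res = trimmed_median_filter(img)
```

Trimming the 0/255 values before taking the median is what distinguishes this family of filters from a plain median filter, which would let surviving noise pixels bias the estimate at high noise densities.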
A relativistic extension of Hopfield neural networks via the mechanical analogy | We propose a modification of the cost function of the Hopfield model whose salient features shine in its Taylor expansion and result in more-than-pairwise interactions with alternating signs, suggesting a unified framework for handling both deep learning and network pruning. In our analysis, we heavily rely on the Hamilton-Jacobi correspondence relating the statistical model with a mechanical system. In this picture, our model is nothing but the relativistic extension of the original Hopfield model (whose cost function is a quadratic form in the Mattis magnetization, which mimics the non-relativistic Hamiltonian for a free particle). We focus on the low-storage regime and solve the model analytically by taking advantage of the mechanical analogy, thus obtaining a complete characterization of the free energy and the associated self-consistency equations in the thermodynamic limit. On the numerical side, we test the performance of our proposal with Monte Carlo simulations, showing that the stability of spurious states (limiting the capabilities of the standard Hebbian construction) is sensibly reduced due to the presence of unlearning contributions in this extended framework.
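As a hedged illustration of the alternating-sign expansion mentioned in the abstract above (assuming a relativistic kinetic-energy-like cost in the Mattis magnetization $m$, as the mechanical analogy suggests):

```latex
% Beyond the quadratic (pairwise) term of the standard Hopfield cost,
% higher-order interaction terms appear with alternating signs:
-\sqrt{1 + m^{2}} \;=\; -1 \;-\; \frac{m^{2}}{2} \;+\; \frac{m^{4}}{8} \;-\; \frac{m^{6}}{16} \;+\; \cdots
```

The $-m^{2}/2$ term reproduces the original quadratic Hopfield cost, while the alternating higher-order terms are the "more than pairwise interactions" the abstract refers to.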