Droxidopa in patients with neurogenic orthostatic hypotension associated with Parkinson's disease (NOH306A).
BACKGROUND Neurogenic orthostatic hypotension (nOH) is common in Parkinson's disease (PD), and represents a failure to generate norepinephrine responses appropriate for postural change. Droxidopa (L-threo-3,4-dihydroxyphenylserine) is an oral norepinephrine prodrug. OBJECTIVE Interim analyses of the initial patients enrolled in a multicenter, randomized, double-blind, placebo-controlled phase 3 trial of droxidopa for nOH in PD (ClinicalTrials.gov Identifier: NCT01176240). METHODS PD patients with documented nOH underwent ≤ 2 weeks of double-blind droxidopa or placebo dosage optimization followed by 8 weeks of maintenance treatment (100-600 mg t.i.d.). The primary efficacy measure was change in Orthostatic Hypotension Questionnaire (OHQ) composite score from baseline to Week 8. Key secondary variables included dizziness/lightheadedness score (OHQ item 1) and patient-reported falls. RESULTS Among 24 droxidopa and 27 placebo recipients, mean OHQ composite-score change at Week 8 was -2.2 versus -2.1 (p = 0.98); in response to this pre-planned futility analysis, the study was temporarily stopped and all data from these patients were considered exploratory. At Week 1, mean dizziness/lightheadedness score change favored droxidopa by 1.5 units (p = 0.24), with subsequent numerical differences favoring droxidopa throughout the observation period, and at Week 1, mean standing systolic blood-pressure change favored droxidopa by 12.5 mmHg (p = 0.04). Compared with placebo, the droxidopa group exhibited an approximately 50% lower rate of reported falls (p = 0.16) and fall-related injuries (post-hoc analysis). CONCLUSIONS This exploratory analysis of a small dataset failed to show benefit of droxidopa, as compared with placebo by the primary endpoint. Nonetheless, there were signals of potential benefit for nOH, including improvement in dizziness/lightheadedness and reduction in falls, meriting evaluation in further trials.
Motivational enhancement therapy in addition to physical therapy improves motivational factors and treatment outcomes in people with low back pain: a randomized controlled trial.
OBJECTIVES To examine whether the addition of motivational enhancement treatment (MET) to conventional physical therapy (PT) produces better outcomes than PT alone in people with chronic low back pain (LBP). DESIGN A double-blinded, prospective, randomized, controlled trial. SETTING PT outpatient department. PARTICIPANTS Participants (N=76) with chronic LBP were randomly assigned to receive 10 sessions of either MET plus PT or PT alone. INTERVENTION MET included motivational interviewing strategies and motivation-enhancing factors. The PT program consisted of interferential therapy and back exercises. MAIN OUTCOME MEASURES Motivation-enhancing factors, pain intensity, physical function, and exercise compliance. RESULTS The MET-plus-PT group showed significantly greater improvements than the PT group in 3 motivation-enhancing factors: proxy efficacy (P<.001), working alliance (P<.001), and treatment expectancy (P=.011). Furthermore, they performed significantly better in lifting capacity (P=.015), the 36-Item Short Form Health Survey General Health subscale (P=.015), and exercise compliance (P=.002) than the PT group. A trend toward a greater decrease in visual analog scale and Roland-Morris Disability Questionnaire scores was also found in the MET-plus-PT group compared with the PT group. CONCLUSION The addition of MET to PT treatment can effectively enhance motivation and exercise compliance and shows better improvement in physical function in patients with chronic LBP than PT alone.
Structural Interpretation of Vector Autoregressions with Incomplete Identification: Revisiting the Role of Oil Supply and Demand Shocks
Traditional approaches to structural vector autoregressions can be viewed as special cases of Bayesian inference arising from very strong prior beliefs. These methods can be generalized with a less restrictive formulation that incorporates uncertainty about the identifying assumptions themselves. We use this approach to revisit the importance of shocks to oil supply and demand. Supply disruptions turn out to be a bigger factor in historical oil price movements and inventory accumulation a smaller factor than implied by earlier estimates. Supply shocks lead to a reduction in global economic activity after a significant lag, whereas shocks to oil demand do not.
Security and usability of authenticating process of online banking: User experience study
Bank websites are secure websites considered high risk, so security is a prime concern. Authentication is extremely important because it serves as the entry point for customers to access their personal and sensitive information. To be considered effective and desirable, a banking website should provide its users with secure and usable authentication mechanisms; otherwise, the authentication process is likely to become unsafe and frustrating for users, which in turn compromises the bank's objective of making online transactions convenient. Focusing on this dilemma, this paper examines the security and usability of the single-factor and multifactor authentication methods that most banks use to strengthen the security process; the security and usability of multifactor authentication have not been closely investigated from the user's perspective. In a survey of 302 e-banking customers who owned at least two international bank accounts, multifactor authentication methods were perceived as secure and trustworthy, with a high rate of usability. Token-based authentication was perceived as more usable than SMS-based authentication, in which an SMS is sent to the user's mobile phone.
Rehabilitation of the short pelvic floor. I: Background and patient evaluation
Pelvic floor physical therapists have traditionally focused on rehabilitation of the weak pelvic floor of normal length. With the recognition that many urogynecologic symptoms arise from the presence of a short, painful pelvic floor, the role of the physical therapist is expanding. Clinically, the pelvic floor musculature is found to be short, tender, and therefore weak. There are associated trigger points and characteristic extrapelvic connective tissue abnormalities. We report the characteristic patterns of myofascial and connective tissue abnormalities in 49 patients presenting with this syndrome.
Ultrasound-guided radiofrequency ablation in the management of interdigital (Morton’s) neuroma
To identify the benefits of ultrasound-guided radiofrequency ablation of Morton’s neuroma as an alternative to surgical excision. We studied a consecutive cohort of surgical candidates for Morton’s neurectomy who we referred, instead, for radiofrequency ablation (RFA). Under local anaesthetic, RFA was performed under ultrasound guidance, by a single radiologist. This out-patient procedure was repeated after 4 weeks if necessary. We followed patients for a minimum of 6 months to assess their change in visual analogue pain scores (VAS), symptom improvement, complications and progression to surgical excision. Thirty feet in 25 patients were studied. There were 4 men and 21 women with an average age of 55 years (range 33–73 years). All had tried previous methods of conservative management. Forty percent presented with 2nd space neuromas and 60% with 3rd space ones. The average number of treatment sessions was 1.6 (range 1–3, mode 1). Prior to treatment, all patients had pain on activity (VAS average: 6.0, range 3–9). Post-treatment there was a statistically significant reduction in pain scores (post-RFA VAS average: 1.7, range 0–8, p < 0.001). The average overall symptom improvement was 76%. There was one minor complication of temporary nerve irritation. Three neuromas (10%) have progressed to surgical excision; 1 patient has ongoing, unchanged pain with no obvious cause. At 6 months, 26 out of 30 feet had a satisfactory outcome. Ultrasound-guided RFA has successfully alleviated patients’ symptoms of Morton’s neuroma in >85% of cases. Only 10% have proceeded to surgical excision in the short term.
Balancing acts: walking the Agile tightrope
Self-organizing teams are one of the critical success factors on Agile projects - and yet, little is known about the self-organizing nature of Agile teams and the challenges they face in industrial practice. Based on a Grounded Theory study of 40 Agile practitioners across 16 software development organizations in New Zealand and India, we describe how self-organizing Agile teams perform balancing acts between (a) freedom and responsibility (b) cross-functionality and specialization, and (c) continuous learning and iteration pressure, in an effort to maintain their self-organizing nature. We discuss the relationship between these three balancing acts and the fundamental conditions of self-organizing teams - autonomy, cross-fertilization, and self-transcendence.
Progressive Geometric Algorithms
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories.
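As a minimal illustration of the progressive idea (a toy sketch, not the algorithm from the paper), the code below computes convex hulls of geometrically growing prefixes of the input; each intermediate hull is a cheap approximate answer, and the final one is exact.

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]

def progressive_hull(points):
    """Yield hulls of geometrically growing prefixes; the last one is exact."""
    k = 16
    while k < len(points):
        yield convex_hull(points[:k])
        k *= 2
    yield convex_hull(points)

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(10000)]
for hull in progressive_hull(pts):
    print(len(hull), "hull vertices")
```

Each intermediate hull costs far less than the full computation, so rough answers arrive early and improve over time, which is exactly the behavior a progressive algorithm promises.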
PMSM laboratory stand for investigations on advanced structures of electrical drive control
This paper presents the design, structure and experimental tests of a laboratory stand for the investigation of advanced control algorithms for electrical drives. The main propulsion unit is a permanent magnet synchronous motor, which is coupled to a brushless DC motor operating as a dynamically adjustable load. The system is controlled by a DSP with dedicated speed- and position-measurement extension boards. The availability of an external DSP bus allowed an FPGA system to be incorporated into the control layer. The paper also describes an innovative, object-oriented approach to the implementation of the control algorithms in the DSP system.
A Survey on M2M Systems for mHealth: A Wireless Communications Perspective
In the new era of connectivity, marked by the explosive number of wireless electronic devices and the need for smart and pervasive applications, Machine-to-Machine (M2M) communications are an emerging technology that enables seamless device interconnection without the need for human interaction. The use of M2M technology can bring to life a wide range of mHealth applications, with considerable benefits for both patients and healthcare providers. Many technological challenges have to be met, however, to ensure the widespread adoption of mHealth solutions in the future. In this context, we aim to provide a comprehensive survey on M2M systems for mHealth applications from a wireless communication perspective. An end-to-end holistic approach is adopted, focusing on different communication aspects of the M2M architecture. Hence, we first provide a systematic review of Wireless Body Area Networks (WBANs), which constitute the enabling technology at the patient's side, and then discuss end-to-end solutions that involve the design and implementation of practical mHealth applications. We close the survey by identifying challenges and open research issues, thus paving the way for future research opportunities.
Ultracompact stars with multiple necks
We discuss ultracompact stellar objects which have multiple necks in their optical geometry. There are in fact physically reasonable equations of state for which the number of necks can be arbitrarily large. The proofs of these statements rely on a recent regularized formulation of the field equations for static spherically symmetric models due to Nilsson and Uggla. We discuss in particular the equation of state p = ρ − ρ_s, which plays a central role in this context.
Terahertz CMOS Frequency Generator Using Linear Superposition Technique
A low-terahertz (324 GHz) frequency generator is realized in 90 nm CMOS by linearly superimposing four (N=4) phase-shifted fundamental signals at one fourth of the output frequency (81 GHz). The developed technique minimizes the fundamental, second- and third-order harmonics without extra filtering and results in a high fundamental-to-4th-harmonic signal conversion ratio of 0.17, or -15.4 dB. The demonstrated prototype produces a calibrated -46 dBm output power when biased at 1 V and 12 mA, with a 4 GHz tuning range and an extrapolated phase noise of -91 dBc/Hz at 10 MHz frequency offset. The linear superposition (LS) technique can be generalized to all even-N cases (N=2k, where k=1,2,3,4,...,n) with different tradeoffs in output power and frequency. As CMOS continues to scale, we anticipate the LS N=4 VCO to generate signals beyond 2 THz using 22 nm CMOS and to produce output power up to -1.5 dBm with 1.7% power-added efficiency in an LS VCO + class-B power amplifier cascaded circuit architecture.
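The harmonic cancellation at the heart of the linear superposition technique can be sketched with a textbook Fourier argument (this derivation is illustrative, not reproduced from the paper). Summing N copies of a periodic waveform x(t) with period T_0 = 2\pi/\omega_0, each successively delayed by T_0/N, gives

\[
y(t) = \sum_{k=0}^{N-1} x\!\left(t + \tfrac{kT_0}{N}\right)
     = \sum_{m} a_m e^{jm\omega_0 t} \sum_{k=0}^{N-1} e^{j2\pi mk/N}
     = N \sum_{m \,:\, N \mid m} a_m e^{jm\omega_0 t},
\]

since the inner geometric sum equals N when N divides m and vanishes otherwise. With N = 4 and an 81 GHz fundamental, the fundamental, second and third harmonics cancel by construction, and the first surviving component is the fourth harmonic at 324 GHz.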
Health Cloud: An Enabler for Healthcare Transformation
An unprecedented volume of data is being generated in healthcare and life sciences, ranging across medical records, claims, lab data, genomics data, medical images, emerging exogenous data, and knowledge. Much of this data is moving to the cloud. In this paper, we describe examples of how the data from systems of record, exogenous data sources and knowledge sources can be combined at cloud scale and speed to create industry-transforming insights to improve health outcomes. We then describe a cloud architecture and building blocks that enable these solutions, and the compliance aspects that are critical to healthcare solutions. Finally, we outline a realization of this architecture and outline further research topics in this domain.
LIDE: Language Identification from Text Documents
The increase in the use of microblogging has come along with rapid growth in short linguistic data. On the other hand, deep learning is considered the new frontier for extracting meaningful information out of large amounts of raw data in an automated manner. In this study, we engage these two emerging fields to come up with a robust on-demand language identifier, namely the Language Identification Engine (LIDE). As a result, we achieved 95.12% accuracy on the Discriminating between Similar Languages (DSL) Shared Task 2015 dataset, which is comparable to the maximum reported accuracy of 95.54% achieved so far.
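As a minimal sketch of the underlying task, the example below identifies languages from character n-grams; note that LIDE itself uses deep learning, so the linear classifier and the toy training set here are stand-ins, not the paper's system.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real identifier would train on the DSL corpus.
texts = ["das ist ein haus", "this is a house", "c'est une maison",
         "dies ist mein hund", "this is my dog", "c'est mon chien"]
labels = ["de", "en", "fr", "de", "en", "fr"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["ein haus ist gross", "une maison est grande"]))
```

Character n-grams are the classic signal for this task because even closely related languages differ in their character-sequence statistics, which is precisely what makes the similar-languages (DSL) setting hard.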
Cluster Frameworks for Efficient Scheduling and Resource Allocation in Data Center Networks: A Survey
Data centers are widely used for big data analytics, which often involve data-parallel jobs such as queries and web services. Meanwhile, cluster frameworks are rapidly being developed for data-intensive applications in data center networks (DCNs). To improve the performance of these frameworks, much effort has been devoted to better scheduling strategies and resource allocation algorithms. With the deployment of geo-distributed data centers and data-intensive applications, optimization in DCNs has regained widespread attention in both industry and academia. Many solutions, such as coflow-aware scheduling and speculative execution, have been proposed to meet various requirements. We therefore present a solid starting ground and comprehensive overview of this area to help readers quickly understand state-of-the-art technologies and research progress. We observe that algorithms in cluster frameworks are implemented with different guidelines and can be classified according to scheduling granularity, controller management, and prior-knowledge requirement. In addition, mechanisms for conquering crucial challenges in DCNs are discussed, including providing low latency and minimizing job completion time. Moreover, we analyze desirable properties of fault tolerance and scalability to illuminate the design principles of distributed systems. We hope that this paper will shed light on this promising area and serve as a guide for further research.
Hyaluronic acid gel (Juvéderm™) preparations in the treatment of facial wrinkles and folds
Soft tissue augmentation with temporary dermal fillers is a continuously growing field, supported by the ongoing development and advances in technology and biocompatibility of the products marketed. The longer lasting, less immunogenic and thus more convenient hyaluronic acid (HA) fillers encompass by far the biggest share of the temporary dermal filler market. Since the approval of the first HA filler, Restylane, at least 10 HA fillers have been approved by the FDA. Not all of the approved HA fillers are available on the market, and many more are coming. The Juvéderm product line (Allergan, Irvine, CA), consisting of Juvéderm Ultra and Juvéderm Ultra Plus, was approved by the FDA in 2006. Juvéderm is a bacterium-derived nonanimal stabilized HA. Juvéderm Ultra and Ultra Plus are smooth, malleable gels with a homogeneous consistency that use a new technology called "Hylacross technology". They have a high concentration of cross-linked HAs, which accounts for their longevity. Juvéderm Ultra Plus is used for volumizing and correcting deeper folds, whereas Juvéderm Ultra is best for contouring and volumizing medium-depth facial wrinkles and lip augmentation. Various studies have shown the superiority of the HA filler products compared with collagen fillers for duration, volume needed, and patient satisfaction. Restylane, Perlane, and Juvéderm are currently the most popular dermal fillers used in the United States.
Sentiment Analysis of Conditional Sentences
This paper studies sentiment analysis of conditional sentences. The aim is to determine whether opinions expressed on different topics in a conditional sentence are positive, negative or neutral. Conditional sentences are among the commonly used language constructs in text: in a typical document, around 8% of sentences are conditional. Due to the condition clause, sentiments expressed in a conditional sentence can be hard to determine. For example, in the sentence "if your Nokia phone is not good, buy this great Samsung phone", the author is positive about the "Samsung phone" but does not express an opinion on the "Nokia phone" (although the owner of the "Nokia phone" may be negative about it). However, if the sentence did not have "if", the first clause would clearly be negative. Although "if" commonly signifies a conditional sentence, many other words and constructs can express conditions. This paper first presents a linguistic analysis of such sentences, and then builds supervised learning models to determine whether sentiments expressed on different topics in a conditional sentence are positive, negative or neutral. Experimental results on conditional sentences from 5 diverse domains demonstrate the effectiveness of the proposed approach.
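To make the clause asymmetry concrete, here is a toy heuristic, purely illustrative (the paper instead performs a linguistic analysis and trains supervised models); the sentiment lexicon and the comma-based clause split are assumptions of the sketch:

```python
import re

# Minimal sentiment lexicon, invented for the example.
POS = {"great", "good", "excellent", "love"}
NEG = {"bad", "poor", "terrible", "hate"}

def clause_sentiment(clause):
    words = set(re.findall(r"[a-z']+", clause.lower()))
    score = len(words & POS) - len(words & NEG)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def conditional_sentiment(sentence):
    """Split an if-conditional into condition and consequent clauses."""
    m = re.match(r"\s*if\s+(.*?),\s*(.*)", sentence, flags=re.IGNORECASE)
    if not m:
        return {"sentence": clause_sentiment(sentence)}
    condition, consequent = m.groups()
    # Gross simplification: opinion words inside the condition clause are
    # treated as not asserted; the consequent carries the author's opinion.
    return {"condition": "neutral", "consequent": clause_sentiment(consequent)}

print(conditional_sentiment(
    "if your Nokia phone is not good, buy this great Samsung phone"))
# -> {'condition': 'neutral', 'consequent': 'positive'}
```

The example reproduces the behavior described above: the condition clause about the "Nokia phone" is not treated as an asserted opinion, while the consequent yields a positive sentiment toward the "Samsung phone".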
Attention and inhibition in bilingual children: evidence from the dimensional change card sort task.
In a previous study, a bilingual advantage for preschool children in solving the dimensional change card sort task was attributed to superiority in inhibition of attention (Bialystok, 1999). However, the task includes difficult representational demands to encode and interpret the task stimuli, and bilinguals may also have profited from superior representational abilities. This possibility is examined in three studies. In Study 1, bilinguals outperformed monolinguals on versions of the problem containing moderate representational demands but not on a more demanding condition. Studies 2 and 3 demonstrated that bilingual children were more skilled than monolinguals when the target dimensions were perceptual features of the stimulus and that the two groups were equivalent when the target dimensions were semantic features. The conclusions are that bilinguals have better inhibitory control for ignoring perceptual information than monolinguals do but are not more skilled in representation, confirming the results of the original study. The results also identify the ability to ignore an obsolete display feature as the critical difficulty in solving this task.
Cheap, Fast and Good Enough: Automatic Speech Recognition with Non-Expert Transcription
Deploying an automatic speech recognition system with reasonable performance requires expensive and time-consuming in-domain transcription. Previous work demonstrated that non-professional annotation through Amazon’s Mechanical Turk can match professional quality. We use Mechanical Turk to transcribe conversational speech for as little as one thirtieth the cost of professional transcription. The higher disagreement of nonprofessional transcribers does not have a significant effect on system performance. While previous work demonstrated that redundant transcription can improve data quality, we found that resources are better spent collecting more data. Finally, we suggest a concrete method for quality control without needing professional transcription.
On Minimum Variance Unbiased Estimation of Clock Offset in a Two-Way Message Exchange Mechanism
For many applications, distributed networks require the local clocks of the constituent nodes to run close to an agreed-upon notion of time. Most of the widely used clock synchronization algorithms in such systems employ the sender-receiver protocol based on a two-way timing message exchange paradigm. The maximum likelihood estimator (MLE) of the clock offset based on the timing message exchanges between two clocks was derived in D. R. Jeske, "On maximum likelihood estimation of clock offset," IEEE Trans. Commun., vol. 53, pp. 53-54, Jan. 2005, for the case where the fixed delays are symmetric and the variable delays in each direction assume an exponential distribution with an unknown mean. Herein, the best linear unbiased estimate using order statistics (BLUE-OS) of the clock offset between two nodes is derived assuming symmetric and asymmetric exponential network delays, respectively. The Rao-Blackwell-Lehmann-Scheffé theorem is then exploited to obtain the minimum variance unbiased estimate (MVUE) of the clock offset, which is shown to coincide with the BLUE-OS. In addition, it is found that the MVUE of the clock offset in the presence of symmetric network delays also coincides with the MLE. Finally, in the presence of asymmetric network delays, although the MLE is biased, it is shown to achieve a smaller mean-square error (MSE) than the MVUE in the region around the point where the bidirectional network link delays are symmetric, and hence its merit as the most versatile estimator is fairly justified.
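For context, the two-way exchange underlying these estimators can be written out explicitly (standard timestamp notation, illustrative rather than taken verbatim from the paper): in round i, node A sends at time T_{1,i}, node B receives at T_{2,i} and replies at T_{3,i}, and A receives the reply at T_{4,i}. With fixed delay d, clock offset \theta and random delays X_i, Y_i,

\[
U_i = T_{2,i} - T_{1,i} = d + \theta + X_i, \qquad
V_i = T_{4,i} - T_{3,i} = d - \theta + Y_i .
\]

When X_i and Y_i are i.i.d. exponential, the likelihood is maximized at the minimum order statistics, which yields the order-statistic form of the MLE,

\[
\hat{\theta}_{\mathrm{MLE}} = \tfrac{1}{2}\left(\min_i U_i - \min_i V_i\right),
\]

i.e., the smallest observed delays in the two directions serve as the best available proxies for d + \theta and d - \theta.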
A randomised comparison of monotherapy with Casodex 50 mg daily and castration in the treatment of metastatic prostate carcinoma. Casodex Study Group.
Casodex (Bicalutamide, ICI 176,334) is a potent, non-steroidal, selective anti-androgen with a long half-life allowing once-daily oral administration. In this randomised, open, multicentre study, Casodex 50 mg monotherapy was compared with castration (medical, using goserelin acetate, [Zoladex], or surgical) in 245 patients with advanced prostate cancer. Primary end-points were time to treatment failure, time to objective progression and survival. Subjective responses, quality of life and tolerability were also evaluated. There was no significant difference between the groups in terms of objective progression or subjective responses. Treatment failed in 59 of 119 patients (50%) randomised to Casodex and in 61 of 126 patients (48%) randomised to castration (no statistically significant difference). An updated analysis showed that survival was similar in the two groups. Casodex was well tolerated with a low incidence of diarrhoea and sexual dysfunction. On the basis of this study, Casodex monotherapy is an effective alternative to castration in the treatment of metastatic prostate cancer.
Tutorial 10: Kalman and Particle filters
Association between mode of delivery and pelvic floor dysfunction.
BACKGROUND Normal vaginal delivery can cause significant strain on the pelvic floor. We present a review of the current knowledge on vaginal delivery as a risk factor for urinary incontinence and pelvic organ prolapse compared to caesarean section. MATERIAL AND METHOD We conducted a literature search in PubMed with an emphasis on systematic review articles and meta-analyses. The search was completed in January 2014. We also included articles from our own literature archives. RESULTS Compared to vaginal delivery, caesarean section appears to protect against urinary incontinence, but the effect decreases after patients reach their fifties. The risk of pelvic organ prolapse increases (dose-response effect) with the number of vaginal deliveries compared to caesarean sections. There are few reliable studies on the association between mode of delivery and anal incontinence, but meta-analyses may indicate that caesarean section does not offer protection after the postpartum period. Women with previous anal sphincter rupture during vaginal delivery are a sub-group with an elevated risk of anal incontinence. The degree of severity of pelvic floor dysfunction is frequently unreported in the literature. INTERPRETATION The prevalence of urinary incontinence and pelvic organ prolapse is lower in women who have only delivered by caesarean section than in those who have delivered vaginally. For urinary incontinence this difference appears to level out with increasing age. There is no basis for identifying sub-groups with a high risk of pelvic floor injury, with the exception of women who have previously had an anal sphincter rupture. Caesarean section will have a limited primary preventive effect on pelvic floor dysfunction at a population level.
Saddles in Deep Networks: Background and Motivation
Recent years have seen a growing interest in understanding deep neural networks from an optimization perspective. It is now understood that converging to low-cost local minima is sufficient for such models to become effective in practice. However, in this work, we propose a new hypothesis, based on recent theoretical findings and empirical studies, that deep neural network models actually converge to saddle points with high degeneracy. Our findings are new and can have a significant impact on the development of gradient-descent-based methods for training deep networks. We validated our hypothesis through an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also showed that recent efforts that attempt to escape saddles finally converge to saddles with high degeneracy, which we define as 'good saddles'. We also verified the well-known Wigner semicircle law in our experimental results.
The F-MACHOP regimen in the treatment of aggressive non-Hodgkin's lymphomas: a single center experience in 72 patients.
BACKGROUND Since July 1991 we have employed the F-MACHOP regimen for the treatment of aggressive non-Hodgkin's lymphomas (NHL). The aim of the present study was to evaluate the response rate and the toxicity of this chemotherapy program. PATIENTS AND METHODS Seventy-two consecutive patients entered the study and were treated with the F-MACHOP regimen for 6 planned courses, given every 21 days. G- or GM-CSF were administered whenever required. RESULTS Sixty-six patients (92%) obtained a response: 38 (53%) a complete remission (CR) and 28 (39%) a partial remission (PR); 4 (6%) proved to be resistant and 2 (3%) died of chemotherapy-related toxicity. Fifty-seven patients with a good performance status were subsequently selected to undergo autologous stem cell transplantation (ASCT). During chemotherapy, grade III-IV neutropenia was observed in 59% of the patients; a significant drop in hemoglobin levels was detected, with blood transfusions being required in 21% of the cases; platelet counts were unaffected. The main extrahematological toxic events were: alopecia (100% of the patients), osteoarthromyalgias (58%), grade I-II neuropathy (53%) and grade I-II hepatic toxicity (43%). CONCLUSIONS Our study confirms the efficacy of the F-MACHOP regimen in obtaining a high rate of response (CR + PR) in most aggressive NHL cases, with an acceptable toxicity and a low rate of toxic deaths. This regimen enables the majority of patients to be selected for ASCT as consolidation therapy without significant toxicity.
High-Voltage Gain Boost Converter Based on Three-State Commutation Cell for Battery Charging Using PV Panels in a Single Conversion Stage
This paper presents a novel high-voltage gain boost converter topology based on the three-state commutation cell for battery charging using PV panels and a reduced number of conversion stages. The presented converter operates in zero-voltage switching (ZVS) mode for all switches. By using the new concept of single-stage approaches, the converter can generate a dc bus with a battery bank or a photovoltaic panel array, allowing the simultaneous charge of the batteries according to the radiation level. The operation principle, design specifications, and experimental results from a 500-W prototype are presented in order to validate the proposed structure.
SSL Backend Forwarding Scheme in Cluster-Based Web Servers
State-of-the-art cluster-based data centers consisting of three tiers (Web server, application server and database server) are being used to host complex Web services such as e-commerce applications. The application server handles dynamic and sensitive Web contents that need protection from eavesdropping, tampering and forgery. Although the Secure Sockets Layer (SSL) is the most popular protocol to provide a secure channel between a client and a cluster-based network server, its high overhead degrades the server performance considerably and, thus, affects the server scalability. Improving the performance of SSL-enabled network servers is therefore critical for designing scalable and high-performance data centers. We examine the impact of SSL offering and SSL-session-aware distribution in cluster-based network servers. We propose a back-end forwarding scheme, called ssl_with_bf, that employs a low-overhead user-level communication mechanism like Virtual Interface Architecture (VIA) to achieve a good load balance among server nodes. We compare three distribution models for network servers, Round Robin, ssl_with_session and ssl_with_bf, through simulation. The experimental results with 16-node and 32-node cluster configurations show that, although the session reuse of ssl_with_session is critical to improve the performance of application servers, the proposed back-end forwarding scheme can further enhance the performance due to better load balancing. The ssl_with_bf scheme can minimize the average latency by about 40 percent and improve throughput across a variety of workloads.

Keywords: Web services such as e-commerce; protection from eavesdropping; tampering and forgery.

INTRODUCTION. Overview of the System: Due to the growing popularity of the Internet, data centers/network servers are anticipated to be the bottleneck in hosting network-based services, even though the network bandwidth continues to increase faster than the server capacity. It has been observed that network servers contribute approximately 40 percent of the overall delay, and this delay is likely to grow with the increasing use of dynamic Web contents. For Web-based applications, a poor response time has significant financial implications [1-7]; for example, E-Biz reported about $1.9 billion loss in revenue in 1998 due to long response times. Much of this overhead comes from the Secure Sockets Layer (SSL), which is commonly used for secure communication between clients and Web servers. Even though SSL is the de facto standard for transport layer security, its high overhead and poor scalability are two major problems in designing secure large-scale network servers: deployment of SSL can decrease a server's capacity by up to two orders of magnitude. In addition, the overhead of SSL becomes even more severe in application servers, which provide dynamic contents that require secure mechanisms for protection. Generating dynamic content takes about 100 to 1,000 times longer than simply reading static content. Moreover, since static content is seldom updated, it can be easily cached, and several efficient caching algorithms have been proposed to reduce latency and increase throughput of front-end Web services [5-8]. However, because dynamic content is generated during the execution of a program, caching dynamic content is not as efficient an option as caching static content. Recently, a multitude of network services have been designed and evaluated using cluster platforms; specifically, the design of distributed Web servers has been a major research thrust to improve throughput and response time [9], which presents the first Web server model that exploits user-level communication in a cluster-based Web server. Our previous work reduces the response time in a cluster-based Web server using coscheduling schemes.

In this paper, first, we investigate the impact of SSL offering in cluster-based network servers, focusing on application servers [10], which mainly provide dynamic content. Second, we show the possible performance improvement when the SSL-session reuse scheme is utilized in cluster-based servers. The SSL-session reuse scheme has previously been tested on a single Web server node and extended to a cluster system consisting of three Web servers; here we explore it using 16-node and 32-node cluster systems with various levels of workload. Third, we propose a back-end forwarding mechanism that exploits low-overhead user-level communication to enhance SSL-enabled network server performance. To this end, we compare three distribution models in clusters: Round Robin (RR), ssl_with_session and ssl_with_bf (back-end forwarding). The RR model, widely used in Web clusters, distributes requests from clients to servers using the RR scheme. ssl_with_session uses a more sophisticated distribution algorithm in which subsequent requests of the same client are forwarded to the same server, avoiding expensive SSL setup costs. The proposed ssl_with_bf uses the same distribution policy as ssl_with_session, but includes an intelligent load balancing scheme that forwards client requests from a heavily loaded back-end node to a lightly loaded node to improve utilization across all nodes; this policy uses the underlying user-level communication for fast communication. Extensive performance analyses with various workload and system configurations are summarized as follows. First, schemes with reusable sessions [11], deployed in the ssl_with_session and ssl_with_bf models, are essential to minimize the SSL overhead. Second, the average latency can be reduced by 40 percent with the proposed ssl_with_bf model compared to the ssl_with_session model, resulting in improved throughput. Third, the proposed scheme provides high utilization and better load balance across all nodes. The rest of this paper is organized as follows: a brief overview of cluster-based network servers, user-level communication and SSL is provided first; Section 3 then outlines the three distribution models, including our proposed SSL back-end forwarding scheme.

Description of the Problem. Existing System: The existing system was developed using the Round Robin (RR) model and the ssl_with_session model. Those models are not effective [12]: they cannot deliver output in time, and their throughput is lower than expected, so they suffer from high latency and minimal throughput. To overcome these problems, the ssl_with_bf (back-end forwarding) model was introduced. Proposed System: In our proposed system, we implement the ssl_with_bf (back-end forwarding) algorithm to overcome the problems of the existing system. This model reduces latency and increases throughput compared with the existing system (Round Robin and ssl_with_session). The ssl_with_bf model is very helpful for load balancing: it reduces the load on a server while that server is busy. The ssl_with_bf scheme can minimize the average latency by about 40 percent and improve throughput across a variety of workloads.

Module Descriptions. Authentication Module: This module registers new users, and previously registered users can log into the system; only the admin can upload files to the servers. IP Address Representation Module: This module records the IP addresses of the machines that are to act as servers; IP addresses can be entered and viewed from this module. Load Servers Module: Only the administrator can enter this module. The administrator encrypts the text file and stores it on the servers assigned in the IP representation module; this module generates both the public and the private key for the cryptography. Load Balancing Module: Users can enter this module and view the file names that the administrator has stored on the servers. The user can select a file from the list and download it from the server that is in an idle state; the response time and the identity of the serving server are reported [13], and the decrypted file is finally recovered using the key pair.

The SSL Protocol and Simulation: The three distribution models, RR, ssl_with_session and ssl_with_bf, are compared through simulation. The simulation model captures the VIA communication characteristics and the application server design in sufficient detail and uses realistic numbers for SSL encryption overheads obtained from measurements, with 16-node and 32-node configurations.

Implementation: Implementation is the stage of the project when the theoretical design is turned into a working system; it is thus the most crucial stage in achieving a successful system and in giving users confidence that the new system is workable and effective. The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve changeover, and evaluation of changeover methods, together with user training, site preparation and file conversion. Initially, as a first step, the executable form of the application is created and loaded on the common server machine accessible to all users, and the entire system is documented, covering its components and operating procedures, so that users can understand the different functions clearly and quickly. Replacing an existing application with a modified one is relatively easy to handle, provided there are no major changes in the system.
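For concreteness, the three distribution policies compared above can be sketched as dispatcher logic; this is a toy model, and the node count, the load bookkeeping and the forwarding threshold are invented for illustration:

```python
import random

random.seed(0)
NODES = list(range(8))
load = {n: 0 for n in NODES}     # outstanding requests per node
session_map = {}                 # client -> node holding its SSL session
rr_counter = 0

def round_robin(client):
    global rr_counter
    node = NODES[rr_counter % len(NODES)]
    rr_counter += 1
    return node

def ssl_with_session(client):
    # Pin each client to the node that already holds its SSL session,
    # avoiding a fresh (expensive) SSL handshake on every request.
    return session_map.setdefault(client, round_robin(client))

def ssl_with_bf(client, threshold=4):
    # Same session affinity, but when the session node is overloaded the
    # request is forwarded to a lightly loaded back-end node (in the paper,
    # over low-overhead user-level communication such as VIA).
    node = ssl_with_session(client)
    if load[node] > threshold:
        node = min(NODES, key=load.get)
    return node

for _ in range(200):
    client = random.randrange(20)
    load[ssl_with_bf(client)] += 1
    done = random.choice(NODES)              # one request completes somewhere
    load[done] = max(0, load[done] - 1)

print(load)                                  # per-node outstanding load
```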
Incremental Segmentation on Private Data without Catastrophic Forgetting
Despite the success of image segmentation, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally segment new classes without access to the original training data. They suffer from "catastrophic forgetting", an abrupt degradation of performance on the old classes when the training objective is adapted to the new classes. We present a method to address this issue, and learn image segmentation incrementally on private data whose annotations for the original classes in the new training set are unavailable. The key to our proposed solution is to balance the interplay between predictions on the new classes and a distillation loss that minimizes the discrepancy between responses for old classes of the updated network, via knowledge rehearsal. This incremental learning can be performed multiple times, for a new set of classes in each step, with a moderate drop in performance compared to the baseline network trained on the ensemble of data. We present image segmentation results on the PASCAL VOC 2012 and COCO datasets, on the ResNet and DenseNet architectures, along with a detailed empirical analysis of the approach.
Loneliness, social network size, and immune response to influenza vaccination in college freshmen.
Antibody response to the influenza immunization was investigated in 83 1st-semester healthy university freshmen. Elevated levels of loneliness throughout the semester and small social networks were independently associated with poorer antibody response to 1 component of the vaccine. Those with both high levels of loneliness and a small social network had the lowest antibody response. Loneliness was also associated with greater psychological stress and negative affect, less positive affect, poorer sleep efficiency and quality, and elevations in circulating levels of cortisol. However, only the stress data were consistent with mediation of the loneliness-antibody response relation. None of these variables were associated with social network size, and hence none were potential mediators of the relation between network size and immunization response.
The design features of forecasting support systems and their effectiveness
Forecasts play a key role in the management of the supply chain. In most organisations such forecasts form part of an information system on which other functions such as scheduling, resource planning and marketing depend. Forecast accuracy is, therefore, an important component in the delivery of an effective supply chain. Typically, the forecasts are produced by integrating managerial judgment with quantitative forecasts within a forecasting support system (FSS). However, there is much evidence that this integration is often carried out poorly with deleterious effects on accuracy. This study considers the role that a well-designed FSS might have in improving this situation. It integrates the literatures on forecasting and decision support to explain the causes of the problem and to identify design features of FSSs that might help to ameliorate it. An assessment is made of the extent to which currently available business forecasting packages, which are widely employed in supply chain management, possess these features.
Modeling and querying data in NoSQL databases
Relational databases have been providing storage for several decades now. However, for today's interactive web and mobile applications, the importance of flexibility and scalability in the data model cannot be overstated. The term NoSQL broadly covers all non-relational databases that provide a schema-less and scalable model. NoSQL databases, also termed Internet-age databases, are currently used by Google, Amazon, Facebook and many other major organizations operating in the era of Web 2.0. The different classes of NoSQL databases, namely key-value pair, document, column-oriented and graph databases, enable programmers to model data closer to the format used in their application. In this paper, the data modeling and query syntax of relational databases and of some classes of NoSQL databases are explained with the help of a case study of a news website like Slashdot.
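As a small illustration of the modeling difference (a hypothetical Slashdot-style story; the field names and the SQL in the comment are invented for the example):

```python
import json

# Relational modelling splits a story and its comments across normalized
# tables and reassembles them with a join at query time, schematically:
#   SELECT s.title, c.body FROM stories s JOIN comments c ON c.story_id = s.id;
#
# A document database instead stores the whole aggregate the way the
# application reads it:
story = {
    "_id": "story:42",
    "title": "NoSQL databases explained",
    "author": "alice",
    "tags": ["databases", "nosql"],
    "comments": [
        {"user": "bob", "body": "Nice overview.", "score": 5},
        {"user": "carol", "body": "What about graph stores?", "score": 3},
    ],
}
print(json.dumps(story, indent=2))

# A key-value store would keep the same JSON blob opaquely under the key
# "story:42"; a column-oriented store would group title/author/tags into
# column families; a graph database would model users and stories as nodes
# connected by "commented-on" edges.
```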
Statistical Parsing for Harmonic Analysis of Jazz Chord Sequences
Analysing music resembles natural language parsing in requiring the derivation of structure from an unstructured and highly ambiguous sequence of elements, whether they are notes or words. Such analysis is fundamental to many music processing tasks, such as key identification and score transcription. The focus of the present paper is on harmonic analysis. We use the three-dimensional tonal harmonic space developed by [4, 13, 14] to define a theory of tonal harmonic progression, which plays a role analogous to semantics in language. Our parser applies techniques from natural language processing (NLP) to the problem of analysing harmonic progression. It uses a formal grammar of jazz chord sequences of a kind that is widely used for NLP, together with the statistically based modelling techniques standardly used in wide-coverage parsing, to map music onto underlying harmonic progressions in the tonal space. Using supervised learning over a small corpus of jazz chord sequences annotated with harmonic analyses, we show that grammar-based musical parsing using simple statistical parsing models is more accurate than a baseline Markovian model trained on the same corpus.
Deep Neural Networks
With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations for computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency, to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6× with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.
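A simple way to experiment with such custom representations in software is to round values to a reduced mantissa width and clamp the exponent range. The sketch below is an illustrative simulation of narrow-precision storage (the format parameters are invented), not the paper's evaluation harness:

```python
import numpy as np

def quantize_float(x, mantissa_bits, exp_min=-14, exp_max=15):
    """Round x to `mantissa_bits` fractional mantissa bits with a clamped
    exponent range - a crude software stand-in for a custom FP format."""
    m, e = np.frexp(x)                        # x = m * 2**e with 0.5 <= |m| < 1
    m = np.round(m * 2.0 ** mantissa_bits) / 2.0 ** mantissa_bits
    e = np.clip(e, exp_min, exp_max)          # saturate out-of-range exponents
    return np.ldexp(m, e)

w = np.random.randn(5).astype(np.float32)     # stand-in for trained weights
print(w)
print(quantize_float(w, mantissa_bits=8))     # weights as seen at low precision
```

Running inference with weights and activations passed through such a quantizer gives a first-order estimate of how much accuracy a narrower format costs before committing that format to hardware.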
Ant colony optimization based enhanced dynamic source routing algorithm for mobile Ad-hoc network
Legal and Institutional Barriers to Optimal Financial Architecture for New Economy Firms in Developing Countries
This paper reviews the obstacles for an appropriate financial architecture of new economy firms in developing countries by reviewing the theoretical and some preliminary empirical underpinnings of the importance of legal and institutional barriers. Apart from the more conventional institutional and legal barriers, which are advanced by the recent law and finance literature, the analysis in this paper focuses on the importance of the ICT environment, as a potentially important barrier to the development of the business sector in general, and new economy firms in particular. This preliminary analysis confirms the importance of this ICT environment for asset allocation (and the creation of intangibles) for the financial structure and, ultimately, for firm growth.
Design of Permanent Multipole Magnets with Oriented Rare Earth Cobalt Materials
By taking advantage of both the magnetic strength and the astounding simplicity of the magnetic properties of oriented rare earth cobalt material, new designs have been developed for a number of devices. In this article on multipole magnets, special emphasis is put on quadrupoles because of their frequent use and because the achievable aperture fields (1.2-1.4 T) are rather large. This paper also lays the foundation for future papers on:
31P-magnetic resonance spectroscopy (31P-MRS) detects early changes in kidney high-energy phosphate metabolism during a 6-month Valsartan treatment in diabetic and non-diabetic kidney-transplanted patients
31P-magnetic resonance spectroscopy (31P-MRS) is a non-invasive tool to study high-energy phosphate (HEP) metabolism. We evaluate whether 31P-MRS can detect early changes in kidney HEP metabolism during a 6-month trial with Valsartan. Twenty consecutive stable and normotensive kidney-transplanted patients were enrolled. Nine of them received short-term low-dose Valsartan treatment (80 mg/day) for 6 months, while 11 controls received no medication. Kidney HEP metabolism was evaluated both at baseline and after treatment by 31P-MRS with a 1.5 T system (Gyroscan Intera Master 1.5 MR System; Philips Medical Systems, Best, The Netherlands). Valsartan-treated patients (n = 9) showed a significant increase in β-ATP/Pi ratio, a marker of kidney HEP metabolism (baseline = 1.03 ± 0.08 vs. 6 months = 1.26 ± 0.07, p = 0.03). In contrast, the β-ATP/Pi ratio in the control group (n = 11) did not change (baseline = 0.85 ± 0.10 vs. 6 months = 0.89 ± 0.08, ns). The improvement in the β-ATP/Pi ratio was not associated with a reduction in arterial blood pressure or in urinary albumin excretion. Kidney-localized 31P-MRS can detect early changes in kidney HEP metabolism during a short-term low-dose Valsartan treatment in stable normotensive kidney-transplanted patients.
Quantum tunneling of superconducting string currents
We investigate the decay of current on a superconducting cosmic string through quantum tunneling. We construct the instanton describing tunneling in a simple bosonic string model, and estimate the decay rate. The tunneling rate vanishes in the limit of a chiral current. This conclusion, which is supported by a symmetry argument, is expected to apply in general. It has important implications for the stability of chiral vortons.
The Most Probable Database Problem
This paper proposes a novel inference task for probabilistic databases: the most probable database (MPD) problem. The MPD is the most probable deterministic database where a given query or constraint is true. We highlight two distinctive applications, in database repair of key and dependency constraints, and in finding most probable explanations in statistical relational learning. The MPD problem raises new theoretical questions, such as the possibility of a dichotomy theorem for MPD, classifying queries as being either PTIME or NP-hard. We show that such a dichotomy would diverge from dichotomies for other inference tasks. We then prove a dichotomy for queries that represent unary functional dependency constraints. Finally, we discuss symmetric probabilities and the opportunities for lifted inference.
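To make the MPD problem concrete, the brute-force sketch below enumerates all possible worlds of a tiny tuple-independent probabilistic database and returns the most probable world satisfying a functional dependency; the relation, probabilities and constraint are invented for the example.

```python
from itertools import product

# Tuple-independent probabilistic database over a relation R(key, value):
# each tuple is present independently with the given marginal probability.
tuples = [(("k1", "a"), 0.8), (("k1", "b"), 0.6), (("k2", "a"), 0.5)]

def satisfies_fd(world):
    """Functional dependency key -> value: no key maps to two values."""
    seen = {}
    for key, val in world:
        if seen.setdefault(key, val) != val:
            return False
    return True

best_world, best_prob = None, -1.0
for bits in product([0, 1], repeat=len(tuples)):
    world = [t for (t, _), b in zip(tuples, bits) if b]
    prob = 1.0
    for (_, p), b in zip(tuples, bits):
        prob *= p if b else 1 - p
    if satisfies_fd(world) and prob > best_prob:
        best_world, best_prob = world, prob

# The unconstrained most probable world keeps all three tuples (prob 0.24)
# but violates the FD; the MPD drops a conflicting tuple instead.
print(best_world, round(best_prob, 4))
```

This is exactly the database-repair reading of MPD mentioned above: the output is the deterministic database that repairs the key constraint with maximum probability.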
Friction observer and compensation for control of robots with joint torque measurement
In this paper we introduce a friction observer for robots with joint torque sensing (in particular for the DLR medical robot) in order to increase the positioning accuracy and the performance of torque control. The observer output corresponds to the low-pass filtered friction torque. It is used for friction compensation in conjunction with a MIMO controller designed for flexible joint arms. A passivity analysis is done for this friction compensation, allowing a Lyapunov based convergence analysis in the context of the nonlinear robot dynamics. For the complete controlled system, global asymptotic stability can be shown. Experimental results validate the practical efficiency of the approach.
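For concreteness, one standard construction consistent with this description is sketched below; the reduced motor-side dynamics, observer gain L > 0, motor inertia B, motor torque \tau_m and measured joint torque \tau_J are assumptions of the sketch, not necessarily the DLR implementation:

\[
B\ddot{\theta} = \tau_m - \tau_J - \tau_f, \qquad
\hat{\tau}_f = L\left(\int_0^t \left(\tau_m - \tau_J - \hat{\tau}_f\right)\mathrm{d}s \;-\; B\dot{\theta}\right)
\quad\Rightarrow\quad
\dot{\hat{\tau}}_f = L\left(\tau_f - \hat{\tau}_f\right).
\]

The estimate therefore equals the true friction torque passed through the first-order low-pass L/(s + L), matching the statement that the observer output corresponds to the low-pass filtered friction torque.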
Water cycle algorithm – A novel metaheuristic optimization method for solving constrained engineering optimization problems
This paper presents a new optimization technique called the water cycle algorithm (WCA), which is applied to a number of constrained optimization and engineering design problems. The fundamental concepts and ideas which underlie the proposed method are inspired by nature and based on the observation of the water cycle process and of how rivers and streams flow to the sea in the real world. A comparative study has been carried out to show the effectiveness of the WCA over other well-known optimizers in terms of computational effort (measured as the number of function evaluations) and function value (accuracy).
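For readers who want the mechanics, the sketch below implements a compact version of the WCA loop as commonly described (a raindrop population sorted into a sea, rivers and streams, with evaporation followed by raining); the objective, population sizes and update constants are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective to minimize."""
    return float(np.sum(x ** 2))

def wca(f, dim=2, n_pop=30, n_rivers=3, max_iter=200, d_max=1e-3, lo=-10, hi=10):
    pop = rng.uniform(lo, hi, (n_pop, dim))           # raindrops
    for _ in range(max_iter):
        pop = pop[np.argsort([f(p) for p in pop])]    # best solution first
        sea, rivers = pop[0], pop[1:1 + n_rivers]
        for i in range(1 + n_rivers, n_pop):          # streams flow to rivers
            guide = rivers[(i - 1 - n_rivers) % n_rivers]
            pop[i] += rng.uniform(0, 2) * (guide - pop[i])
        for i in range(1, 1 + n_rivers):              # rivers flow to the sea
            pop[i] += rng.uniform(0, 2) * (sea - pop[i])
        for i in range(1, n_pop):                     # evaporation and raining
            if np.linalg.norm(pop[i] - sea) < d_max:
                pop[i] = rng.uniform(lo, hi, dim)     # fresh random raindrop
    return min(pop, key=f)

best = wca(sphere)
print(best, sphere(best))
```

The evaporation-and-raining step is what distinguishes the method from a plain attractor search: solutions that get too close to the sea are re-seeded at random, which preserves exploration near convergence.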
How do habits guide behavior? Perceived and actual triggers of habits in daily life
What are the psychological mechanisms that trigger habits in daily life? Two studies reveal that strong habits are influenced by context cues associated with past performance (e.g., locations) but are relatively unaffected by current goals. Specifically, performance contexts, but not goals, automatically triggered strongly habitual behaviors in memory (Experiment 1) and triggered overt habit performance (Experiment 2). Nonetheless, habits sometimes appear to be linked to goals because people self-perceive their habits to be guided by goals. Furthermore, habits of moderate strength are automatically influenced by goals, yielding a curvilinear, U-shaped relation between habit strength and actual goal influence. Thus, research that taps self-perceptions or moderately strong habits may find habits to be linked to goals. Introduction: Having cast off the strictures of behaviorism, psychologists are showing renewed interest in the psychological processes that guide behavior. This interest is fueled partly by the recognition that automaticity is not a unitary construct. Hence, different kinds of automatic responses may be triggered and controlled in different ways (Bargh, 1994; Moors & De Houwer, 2006). However, the field has not yet converged on a common understanding of the psychological mechanisms that underlie habits. Habits can be defined as psychological dispositions to repeat past behavior. They are acquired gradually as people repeatedly respond in a recurring context (e.g., performance settings, action sequences; Wood & Neal, 2007, 2009). Most researchers agree that habits often originate in goal pursuit, given that people are likely to repeat actions that are rewarding or yield desired outcomes. In addition, habit strength is a continuum, with habits of weak and moderate strength performed with lower frequency and/or in more variable contexts than strong habits. This consensus aside, it remains unclear how goals and context cues influence habit automaticity. Goals are motivational states that (a) define a valued outcome that (b) energizes and directs action (e.g., the goal of getting an A in class energizes late night studying; Förster, Liberman, & Friedman, 2007). In contrast, context cues for habits reflect features of the performance environment in which the response typically occurs (e.g., the college library as a setting for late night studying). Some prior research indicates that habits are activated automatically by goals (e.g., Aarts & Dijksterhuis, 2000), whereas other findings indicate that habits are activated directly by context cues, with minimal influence of goals. In the present experiments, we first test the cognitive associations …
Telephone-administered cognitive behavioral therapy for veterans served by community-based outpatient clinics.
OBJECTIVE Multiple trials have found telephone-administered cognitive behavioral therapy (T-CBT) to be effective for the treatment of depression. The aim of this study was to evaluate T-CBT for the treatment of depression among veterans served by community-based outpatient clinics (CBOCs) outside of major urban areas. METHOD Eighty-five veterans meeting Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) criteria for major depressive disorder were randomized to receive 16 sessions of T-CBT over 20 weeks or treatment as usual through the CBOC. Veterans were assessed at baseline, 12 weeks, 20 weeks (posttreatment), and 6-month follow-up using the Hamilton Depression Rating Scale (Hamilton, 1960), the Patient Health Questionnaire-9 (Kroenke, Spitzer, & Williams, 2001), and a standardized psychiatric interview. RESULTS There were no significant Time × Treatment effects (ps > .20). Patients were compliant, with 38 (92.7%) completing at least 12 sessions, and 32 (78.0%) having no missed sessions whatsoever. Ratings of audiotaped sessions showed the therapists to be highly competent. CONCLUSIONS This trial yielded negative results for an intervention that has been shown to be effective under other circumstances. We speculate that veterans served within the Veterans Affairs system are more refractory to treatment than other populations, and they may require a more rigorous intervention. TRIAL REGISTRATION clinicaltrials.gov NCT00223652.
Accessible smart cities?: Inspecting the accessibility of Brazilian municipalities' mobile applications
The use of interactive technologies to aid the implementation of smart cities has significant potential to support disabled users in performing their activities as citizens. In this study, we present an investigation of the accessibility of a sample of 10 mobile Android™ applications of Brazilian municipalities, two from each of the five large geographical regions of the country, focusing especially on users with visual disabilities. The results showed that many of the applications were not in accordance with accessibility guidelines, with an average of 57 instances of violations and an average of 11.6 different criteria violated per application. The main problems included failure to label non-textual content, missing headings, failure to identify user location, poor colour contrast, lack of support for interaction using screen reader gestures, poor focus visibility, and lack of adaptation of text contained in images. Although the growth in mobile applications has boosted the possibilities aligned with the principles of smart cities, there is a strong need to include accessibility in the design of such applications in order for disabled people to benefit from the potential they can have for their lives.
Wireless home automation networks: A survey of architectures and technologies
Wireless home automation networks (WHANs) comprise wireless embedded sensors and actuators that enable monitoring and control applications for home user comfort and efficient home management. This article surveys the main current and emerging solutions that are suitable for WHANs, including ZigBee, Z-Wave, INSTEON, Wavenis, and IP-based technology.
PITCH DETECTION ALGORITHM: AUTOCORRELATION METHOD AND AMDF
This paper describes pitch tracking techniques using the autocorrelation method and the AMDF (Average Magnitude Difference Function) method, covering the preprocessing and the extraction of the pitch pattern. It also presents the implementation, basic experiments, and discussion.
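As a concrete illustration of the two techniques named in this abstract, the following is a minimal sketch (not the authors' implementation) of frame-level pitch estimation by autocorrelation and by AMDF; the frame length, sampling rate, and lag bounds are illustrative assumptions.

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=50, fmax=500):
    """Estimate pitch as the lag that maximizes the autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)          # search plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def pitch_amdf(frame, fs, fmin=50, fmax=500):
    """Estimate pitch as the lag that minimizes the average magnitude difference."""
    frame = frame - frame.mean()
    lo, hi = int(fs / fmax), int(fs / fmin)
    amdf = [np.mean(np.abs(frame[k:] - frame[:-k])) for k in range(lo, hi)]
    return fs / (lo + int(np.argmin(amdf)))

# Usage: a 440 Hz test tone sampled at 16 kHz
fs = 16000
t = np.arange(0, 0.04, 1 / fs)
frame = np.sin(2 * np.pi * 440 * t)
print(pitch_autocorr(frame, fs), pitch_amdf(frame, fs))
```

Both estimators scan the same lag range; autocorrelation picks the peak of a similarity measure, while AMDF picks the valley of a difference measure, which is why the two are usually presented side by side.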
Study of induction heating power supply based on fuzzy controller
In order to satisfy the higher control performance requirements of induction heating power supplies, a fuzzy logic control technology for the power control system of an induction heating power supply is investigated. This study presents the composition and design of the induction heating control system based on a fuzzy logic controller. A complete simulation model of the induction heating system is built using the Matlab/Simulink software, and simulation results show the effectiveness and superiority of the control system.
A non-reward attractor theory of depression
A non-reward attractor theory of depression is proposed based on the operation of the lateral orbitofrontal cortex and supracallosal cingulate cortex. The orbitofrontal cortex contains error neurons that respond to non-reward for many seconds in an attractor state that maintains a memory of the non-reward. The human lateral orbitofrontal cortex is activated by non-reward during reward reversal, and by a signal to stop a response that is now incorrect. Damage to the human orbitofrontal cortex impairs reward reversal learning. Not receiving reward can produce depression. The theory proposed is that in depression, this lateral orbitofrontal cortex non-reward system is more easily triggered, and maintains its attractor-related firing for longer. This triggers negative cognitive states, which in turn have positive feedback top-down effects on the orbitofrontal cortex non-reward system. Treatments for depression, including ketamine, may act in part by quashing this attractor. The mania of bipolar disorder is hypothesized to be associated with oversensitivity and overactivity in the reciprocally related reward system in the medial orbitofrontal cortex and pregenual cingulate cortex.
Water Quality Criteria
EPA develops water quality criteria based on the latest scientific knowledge to protect human health and aquatic life. This information serves as guidance to states and tribes in adopting water quality standards.
Big Data, Social Media, and Protest: Foundations for a Research Agenda
Following the Arab Spring, a debate broke out among both academics and pundits as to how important social media had been in bringing about what may have been the least anticipated political development of the 21st century. Critics of the importance of social media pointed in particular to two factors: (a) the proportion of social media messages that were transmitted in English; and (b) the proportion of Arab Spring related social media posts that originated from outside the Arab world. In our chapter, we test whether two important subsequent unanticipated protests, Turkey's 2013 Gezi Park protests and Ukraine's 2013-14 Euromaidan protests, are also susceptible to such criticisms. To do so, we draw on millions of tweets from both protests, including millions of geolocated tweets from Turkey, to test hypotheses related to the use of Twitter by protest participants and, more generally, by in-country supporters of protest movements, effectively refuting the idea that emerged after the Arab Spring that Twitter use during protests only reflects international attention to an event.
Marked variation in prevalence of malaria-protective human genetic polymorphisms across Uganda.
A number of human genetic polymorphisms are prevalent in tropical populations and appear to offer protection against symptomatic and/or severe malaria. We compared the prevalence of four polymorphisms, the sickle hemoglobin mutation (β globin E6V), the α-thalassemia 3.7kb deletion, glucose-6-phosphate dehydrogenase deficiency caused by the common African variant (G6PD A-), and the CD36 T188G mutation in 1344 individuals residing in districts in eastern (Tororo), south-central (Jinja), and southwestern (Kanungu) Uganda. Genes of interest were amplified, amplicons subjected to mutation-specific restriction endonuclease digestion (for sickle hemoglobin, G6PD A-, and CD36 T188G), reaction products resolved by electrophoresis, and genotypes determined based on the sizes of reaction products. Mutant genotypes were common, with many more heterozygous than homozygous alleles identified. The prevalences (heterozygotes plus homozygotes) of sickle hemoglobin (28% Tororo, 25% Jinja, 7% Kanungu), α-thalassemia (53% Tororo, 45% Jinja, 18% Kanungu) and G6PD A- (29% Tororo, 18% Jinja, 8% Kanungu) were significantly greater in Tororo and Jinja compared to Kanungu (p<0.0001 for all three alleles); prevalences were also significantly greater in Tororo compared to Jinja for α-thalassemia (p=0.03) and G6PD A- (p<0.0001). For the CD36 T188G mutation, the prevalence was significantly greater in Tororo compared to Jinja or Kanungu (27% Tororo, 17% Jinja, 18% Kanungu; p=0.0004 and 0.0017, respectively). Considering ethnicity of study subjects, based on primary language spoken, the prevalence of mutant genotypes was lower in Bantu compared to non-Bantu language speakers, but in the Jinja cohort, the only study population with a marked diversity of language groups, prevalence did not differ between Bantu and non-Bantu speakers. These results indicate marked differences in human genetic features between populations in different regions of Uganda. These differences might be explained by both ethnic variation and by varied malaria risk in different regions of Uganda.
Distributed representations in memory: insights from functional brain imaging.
Forging new memories for facts and events, holding critical details in mind on a moment-to-moment basis, and retrieving knowledge in the service of current goals all depend on a complex interplay between neural ensembles throughout the brain. Over the past decade, researchers have increasingly utilized powerful analytical tools (e.g., multivoxel pattern analysis) to decode the information represented within distributed functional magnetic resonance imaging activity patterns. In this review, we discuss how these methods can sensitively index neural representations of perceptual and semantic content and how leveraging the engagement of distributed representations provides unique insights into distinct aspects of memory-guided behavior. We emphasize that, in addition to characterizing the contents of memories, analyses of distributed patterns shed light on the processes that influence how information is encoded, maintained, or retrieved, and thus inform memory theory. We conclude by highlighting open questions about memory that can be addressed through distributed pattern analyses.
Infrapatellar saphenous neuralgia - diagnosis and treatment.
Persistent anterior knee pain, especially after surgery, can be very frustrating for the patient and the clinician. Injury to the infrapatellar branch of the saphenous nerve (IPS) is not uncommon after knee surgeries and trauma, yet the diagnosis and treatment of IPS neuralgia is not usually taught in pain training programs. In this case report, we describe the anatomy of the saphenous nerve and specifically the infrapatellar saphenous nerve branch; we also discuss the types of surgical trauma, the clinical presentation, the diagnostic modalities, the diagnostic injection technique, and the treatment options. As early as 1945, surgeons were cautioned regarding the potential surgical trauma to the IPS. Although many authors dismissed the nerve damage as unavoidable, the IPS is now recognized as a potential cause of persistent anterior and anteromedial knee pain. Even more concerning, damage to peripheral nerves such as the IPS has been identified as a cause and potential perpetuating factor of conditions such as complex regional pain syndrome (CRPS). Because the clinical presentation may be vague, it has often been misdiagnosed and underdiagnosed. There is a documented vasomotor instability, but, unfortunately, sympathetic blocks will not address the underlying pathology, and therefore patients often will not respond to this modality, although the correct diagnosis can lead to rapid and gratifying resolution of the pathology. An entity unknown to the clinician is never diagnosed, and so it is important to familiarize pain physicians with IPS neuropathy so that they may be able to offer assistance when this painful condition arises.
Comparison of germinal center markers CD10, BCL6 and human germinal center-associated lymphoma (HGAL) in follicular lymphomas
BACKGROUND Recently, the human germinal center-associated lymphoma (HGAL) gene protein has been proposed as an adjunctive follicular marker to CD10 and BCL6. METHODS Our aim was to evaluate immunoreactivity for HGAL in 82 cases of follicular lymphomas (FLs) (67 nodal, 5 cutaneous and 10 transformed), which were all analysed histologically, by immunohistochemistry and by PCR. RESULTS Immunostaining for HGAL was more frequently positive (97.6%) than that for BCL6 (92.7%) and CD10 (90.2%) in FLs; the cases negative for BCL6 and/or for CD10 were all positive for HGAL, whereas the two cases negative for HGAL were positive for BCL6; no difference in HGAL immunostaining was found among different malignant subtypes or grades. CONCLUSIONS Therefore, HGAL can be used in the immunostaining of FLs as the most sensitive germinal center (GC) marker; when applied alone, it would halve the immunostaining costs, reserving the use of the other two markers for HGAL-negative cases only.
Transcriptomic identification of candidate genes involved in sunflower responses to chilling and salt stresses based on cDNA microarray analysis
Considering that sunflower production is expanding to arid regions, tolerance to abiotic stresses such as drought, low temperatures and salinity arises as one of the main constraints nowadays. Differential organ-specific sunflower ESTs (expressed sequence tags) were previously generated by a subtractive hybridization method that included a considerable number of putative abiotic stress associated sequences. The objective of this work was to analyze concerted gene expression profiles of organ-specific ESTs by fluorescence microarray assay, in response to high sodium chloride concentration and chilling treatments, with the aim of identifying and following up candidate genes for early responses to abiotic stress in sunflower. Abiotic-stress-related expressed genes were the target of this characterization through a gene expression analysis using an organ-specific cDNA fluorescence microarray approach in response to high salinity and low temperatures. The experiment included three independent replicates from leaf samples. We analyzed 317 unigenes previously isolated from differential organ-specific cDNA libraries from leaf, stem and flower at the R1 and R4 developmental stages. A statistical analysis based on mean comparison by ANOVA and ordination by Principal Component Analysis allowed the detection of 80 candidate genes for salinity and/or chilling stresses. Of these, 50 genes were up- or down-regulated under both stresses, supporting common regulatory mechanisms and general responses to chilling and salinity. Interestingly, 15 and 12 sequences were up-regulated or down-regulated, respectively, specifically in one stress but not in the other. These genes are potentially involved in different regulatory mechanisms including transcription/translation/protein degradation/protein folding/ROS production or ROS-scavenging. Differential gene expression patterns were confirmed by qRT-PCR for 12.5% of the microarray candidate sequences. Eighty genes isolated from organ-specific cDNA libraries were identified as candidate genes for the sunflower early response to low temperatures and salinity. Microarray profiling of chilling- and NaCl-treated sunflower leaves revealed dynamic changes in transcript abundance, including transcription factors, defense/stress related proteins, and effectors of homeostasis, all of which highlight the complexity of both stress responses. This study not only allowed the identification of common transcriptional changes under both stress conditions but also led to the detection of stress-specific genes not previously reported in sunflower. This is the first organ-specific cDNA fluorescence microarray study addressing a simultaneous evaluation of concerted transcriptional changes in response to chilling and salinity stress in cultivated sunflower.
Inner Attention based Recurrent Neural Networks for Answer Selection
Attention-based recurrent neural networks have shown advantages in representing natural language sentences (Hermann et al., 2015; Rocktäschel et al., 2015; Tan et al., 2015). Based on recurrent neural networks (RNNs), external attention information was added to hidden representations to get an attentive sentence representation. Despite the improvement over non-attentive models, the attention mechanism under RNN is not well studied. In this work, we analyze the deficiency of traditional attention-based RNN models quantitatively and qualitatively. Then we present three new RNN models that add attention information before the RNN hidden representation, which show advantages in representing sentences and achieve new state-of-the-art results on the answer selection task.
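The abstract does not give the exact model equations, but the core idea, injecting attention before the recurrent hidden states are computed rather than pooling RNN outputs afterwards, can be sketched as follows; the layer sizes, the additive scoring function, and the mean-pooled output are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class InnerAttentionEncoder(nn.Module):
    """Sketch: weight token embeddings by relevance to a query vector
    *before* the RNN reads them (inner attention)."""
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.score = nn.Linear(emb_dim + hid_dim, 1)   # assumed relevance scorer
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens, query):
        # tokens: (batch, seq_len, emb_dim); query: (batch, hid_dim), e.g. a question vector
        q = query.unsqueeze(1).expand(-1, tokens.size(1), -1)
        alpha = torch.softmax(self.score(torch.cat([tokens, q], dim=-1)).squeeze(-1), dim=1)
        weighted = tokens * alpha.unsqueeze(-1)        # attention applied before the recurrence
        out, _ = self.rnn(weighted)
        return out.mean(dim=1)                         # pooled sentence representation

# Usage: encode 4 candidate answers of length 12 against a question representation
enc = InnerAttentionEncoder(emb_dim=50, hid_dim=64)
rep = enc(torch.randn(4, 12, 50), torch.randn(4, 64))
print(rep.shape)   # torch.Size([4, 64])
```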
Enterprise Information Systems as Objects and Carriers of Institutional Forces: The New Iron Cage?
This paper draws upon the institutional theory lens to examine enterprise information systems. We propose that these information systems engender a duality. On one hand, these systems are subject to institutional forces and institutional processes that set the rules of rationality. On the other hand, they are an important embodiment of institutional commitments and serve to preserve these rules by constraining the actions of human agents. The complexity inherent to enterprise technologies renders them an equivoque. This, when combined with the propensity toward lack of mindfulness in organizations, is likely to lead to acquiescence to institutional pressures. Enterprise information systems bind organizations to fundamental choices about how their activities should be organized; unquestioned choices that tend to appear natural. We suggest implications of this view and develop propositions examining: (1) enterprise information systems as objects of institutional forces in the "chartering" and "project" phases, (2) the resolution of institutional misalignments caused by the introduction of new systems, and (3) enterprise information systems as carriers of institutional logics in the "shakeout" and "onward and upward" phases.
Heterogeneous Vehicular Networking: A Survey on Architecture, Challenges, and Solutions
With the rapid development of the Intelligent Transportation System (ITS), vehicular communication networks have been widely studied in recent years. Dedicated Short Range Communication (DSRC) can provide efficient real-time information exchange among vehicles without the need for pervasive roadside communication infrastructure. Although mobile cellular networks are capable of providing wide coverage for vehicular users, the requirements of services with stringent real-time safety demands cannot always be guaranteed by cellular networks. Therefore, the Heterogeneous Vehicular NETwork (HetVNET), which integrates cellular networks with DSRC, is a potential solution for meeting the communication requirements of the ITS. Although there is a plethora of reported studies on either DSRC or cellular networks, joint research of these two areas is still in its infancy. This paper provides a comprehensive survey of recent wireless networking techniques applied to HetVNETs. First, the requirements and use cases of safety and non-safety services are summarized and compared. Then, a HetVNET framework that utilizes a variety of wireless networking techniques is presented, followed by descriptions of various applications for some typical scenarios. Building such HetVNETs requires a deep understanding of heterogeneity and its associated challenges. Thus, major challenges and solutions related to both the Medium Access Control (MAC) and network layers in HetVNETs are studied and discussed in detail. Finally, we outline open issues that help to identify new research directions in HetVNETs.
The congenital bilateral perisylvian syndrome: imaging findings in a multicenter study. CBPS Study Group.
PURPOSE To describe the neuroimaging findings and the clinical features in patients with the congenital bilateral perisylvian syndrome. PATIENTS AND METHODS Evaluation data, including history, general and neurologic examinations, electroencephalogram, chromosomal studies, and imaging, were reviewed in 31 patients. Pathologic material was available in two patients. RESULTS All patients had similar neurologic dysfunction, primarily pseudobulbar paresis. Dysarthria and severe restriction of tongue movements were present in all. Motor milestones were delayed in 75% of the patients and language milestones in all. Mild to moderate intellectual deficits were documented in 75% of patients (full-scale IQ = 70). Pyramidal signs were observed in 70%. Seizures were present in 87% and were intractable to medical therapy in half of this group. MR revealed bilateral perisylvian and perirolandic malformations with exposure of the insula. The malformations were symmetrical in 80% of cases. Pathologic correlation revealed four-layered polymicrogyria in the affected areas. CONCLUSIONS The congenital bilateral perisylvian syndrome is a homogeneous clinical-radiologic entity. The underlying abnormality is probably polymicrogyria.
Market impacts and the life cycle of investors orders
In this paper, we use a database of around 400,000 metaorders issued by investors and electronically traded on European markets in 2010 in order to study market impact at different scales. At the intraday scale we confirm a square-root temporary impact in the daily participation, and we shed light on a duration factor of the form 1/T^γ with γ ≈ 0.25. Including this factor in the fits reinforces the square-root shape of impact. We observe a power law for the transient impact with an exponent between 0.5 (for long metaorders) and 0.8 (for shorter ones). Moreover, we show that the market does not anticipate the size of the metaorders. The intraday decay seems to exhibit two regimes (though hard to identify precisely): a "slow" regime right after the execution of the metaorder, followed by a faster one. At the daily time scale, we show that price moves after a metaorder can be split between realizations of expected returns that triggered the investing decision and an idiosyncratic impact that slowly decays to zero. Moreover, we propose a class of toy models based on Hawkes processes (the Hawkes Impact Models, HIM) to illustrate our reasoning. We show how the Impulsive-HIM model, despite its simplicity, embeds appealing features like transience and decay of impact. The latter is parametrized by a parameter C with a macroscopic interpretation: the ratio of the contrarian reaction (i.e., impact decay) to the "herding" reaction (i.e., impact amplification).
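Read together, the square-root participation dependence and the 1/T^γ duration factor suggest a temporary-impact law of the following general form; the proportionality to daily volatility σ and daily volume V is the standard convention in this literature and is assumed here rather than quoted from the paper.

```latex
% Temporary impact of a metaorder of size Q executed over duration T:
% square-root dependence on participation Q/V, damped by the duration factor.
\mathcal{I}(Q,T) \;\propto\; \sigma \,\sqrt{\frac{Q}{V}} \cdot \frac{1}{T^{\gamma}},
\qquad \gamma \simeq 0.25
```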
Anemia and Iron Deficiency in Adolescent School Girls in Kavar Urban Area, Southern Iran
BACKGROUND Anemia is one of the most common public health problems, especially in developing countries. We investigated the prevalence of anemia, iron deficiency anemia and related risk factors in adolescent school girls in the Kavar urban area in southern Iran. METHODS A total of 363 adolescent school girls were evaluated in a cross-sectional study. Socioeconomic, demographic and related risk factors were obtained by a questionnaire. Hematological parameters and serum iron indices were measured. RESULTS There were 21 cases of anemia (5.8%), 31 (8.5%) of iron deficiency and 6 (1.7%) of iron deficiency anemia. Most of the anemic girls (85.7%) had mild anemia. MCV, TIBC, age, and BMI had a statistically significant relationship with hemoglobin. The only significant risk factor was parasite infestation in the last three months, which carried a 6.83 times higher risk of anemia than in those without this history (95% CI, 1.66-28.11). CONCLUSION The prevalence of anemia and iron deficiency anemia in this study was substantially lower than that reported in many other regions of Iran as well as in other developing countries. It seems that the strategies implemented in recent years have been successful. More attention to the prevention of parasite infestation should be given in this area.
Non-contact video-based pulse rate measurement on a mobile service robot
Non-contact image photoplethysmography has gained a lot of attention during the last 5 years. Starting with the work of Verkruysse et al. [1], various methods for estimating the human pulse rate from video sequences of the face under ambient illumination have been presented. Applied on a mobile service robot aimed at motivating elderly users to do physical exercises, the pulse rate can be valuable information for adapting to the user's condition. For this paper, a typical processing pipeline was implemented on a mobile robot, and a detailed comparison of methods for face segmentation was conducted, which is the key factor for robust pulse rate extraction even if the subject is moving. A benchmark data set is introduced focusing on the amount of motion of the head during the measurement.
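A typical pipeline of the kind described (mean green-channel value over a face region, detrending, band-pass filtering, spectral peak search) can be sketched as follows; the face region is assumed to be given by an upstream face segmentation step, and the 0.7-4 Hz pass band (42-240 bpm) is a common but assumed choice.

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

def pulse_rate_bpm(green_means, fps):
    """Estimate pulse rate from the per-frame mean green value of a face ROI."""
    x = detrend(green_means)                      # remove slow illumination drift
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    x = filtfilt(b, a, x)                         # keep the plausible heart-rate band
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]      # dominant frequency, in bpm

# Usage with a synthetic 70 bpm signal sampled at 30 fps:
fps = 30
t = np.arange(0, 20, 1 / fps)
signal = 0.05 * np.sin(2 * np.pi * (70 / 60) * t) + np.random.normal(0, 0.01, t.size)
print(pulse_rate_bpm(signal, fps))
```

The quality of `green_means` is exactly where face segmentation matters: pixels from hair or background dilute the plethysmographic signal, which is why the paper compares segmentation methods.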
Automated Attack Planning
MegaPipe: A New Programming Interface for Scalable Network I/O
We present MegaPipe, a new API for efficient, scalable network I/O for message-oriented workloads. The design of MegaPipe centers around the abstraction of a channel – a per-core, bidirectional pipe between the kernel and user space, used to exchange both I/O requests and event notifications. On top of the channel abstraction, we introduce three key concepts of MegaPipe: partitioning, lightweight socket (lwsocket), and batching. We implement MegaPipe in Linux and adapt memcached and nginx. Our results show that, by embracing a clean-slate design approach, MegaPipe is able to exploit new opportunities for improved performance and ease of programmability. In microbenchmarks on an 8-core server with 64 B messages, MegaPipe outperforms baseline Linux between 29% (for long connections) and 582% (for short connections). MegaPipe improves the performance of a modified version of memcached between 15% and 320%. For a workload based on real-world HTTP traces, MegaPipe boosts the throughput of nginx by 75%.
Machine Learning for Intelligent Systems
Recent research in machine learning has focused on supervised induction for simple classification and reinforcement learning for simple reactive behaviors. In the process, the field has become disconnected from AI’s original goal of creating complete intelligent agents. In this paper, I review recent work on machine learning for planning, language, vision, and other topics that runs counter to this trend and thus holds interest for the broader AI research community. I also suggest some steps to encourage further research along these lines.
THE QUEST FOR QUANTUM GRAVITY: TESTING TIMES FOR THEORIES?
I discuss some theoretical ideas concerning the representation of quantum gravity as a Lorentz-symmetry-violating 'medium' with non-trivial optical properties, which include a refractive index in vacuo and stochastic effects associated with a spread in the arrival times of photons, growing linearly with the photon energy. Some of these properties may be experimentally detectable in future satellite facilities (e.g. GLAST or AMS), using as probes light from distant astrophysical sources such as gamma-ray bursters. I also argue that such linear violations of Lorentz symmetry may not always be constrained by ultra-high-energy cosmic-ray data, as seems to be the case with a specific (stringy) model of space-time foam.
Increasing Brand Attractiveness and Sales through Social Media Comments on Public Displays - Evidence from a Field Experiment in the Retail Industry
Retailers and brands are just starting to utilize online social media to support their businesses. Simultaneously, public displays are becoming ubiquitous in public places, raising the question of how these two technologies could be used together to attract new and existing customers as well as strengthen the relationship toward a focal brand. Accordingly, in a field experiment we displayed brand- and product-related comments from the social network Facebook as pervasive advertising in small-space retail stores, known as kiosks. From interviews conducted with real customers during the experiment and the corresponding analysis of sales data we draw three findings. Showing social media comments resulted in (1) customers perceiving brands as more innovative and attractive, (2) a measurable, positive effect on sales of both the brand and the product in question, and (3) customers wanting to see the comments of others, but not their own, creating a give-and-take paradox for using public displays to show social media comments.
Grasping Without Squeezing: Design and Modeling of Shear-Activated Grippers
Grasping objects that are too large to envelop is traditionally achieved using friction that is activated by squeezing. We present a family of shear-activated grippers that can grasp such objects without the need to squeeze. When a shear force is applied to the gecko-inspired material in our grippers, adhesion is turned on; this adhesion in turn results in adhesion-controlled friction, a friction force that depends on adhesion rather than a squeezing normal force. Removal of the shear force eliminates adhesion, allowing easy release of an object. A compliant shear-activated gripper without active sensing and control can use the same light touch to lift objects that are soft, brittle, fragile, light, or very heavy. We present three grippers, the first two designed for curved objects, and the third for nearly any shape. Simple models describe the grasping process, and empirical results verify the models. The grippers are demonstrated on objects with a variety of shapes, materials, sizes, and weights.
Safety of the Malaria Vaccine Candidate, RTS,S/AS01E in 5 to 17 Month Old Kenyan and Tanzanian Children
The malaria vaccine candidate, RTS,S/AS01(E), showed promising protective efficacy in a trial of Kenyan and Tanzanian children aged 5 to 17 months. Here we report on the vaccine's safety and tolerability. The experimental design was a Phase 2b, two-centre, double-blind (observer- and participant-blind), randomised (1:1 ratio) controlled trial. Three doses of study or control (rabies) vaccines were administered intramuscularly at 1 month intervals. Solicited adverse events (AEs) were collected for 7 days after each vaccination. There was surveillance and reporting for unsolicited adverse events for 30 days after each vaccination. Serious adverse events (SAEs) were recorded throughout the study period which lasted for 14 months after dose 1 in Korogwe, Tanzania and an average of 18 months post-dose 1 in Kilifi, Kenya. Blood samples for safety monitoring of haematological, renal and hepatic functions were taken at baseline, 3, 10 and 14 months after dose 1. A total of 894 children received RTS,S/AS01(E) or rabies vaccine between March and August 2007. Overall, children vaccinated with RTS,S/AS01(E) had fewer SAEs (51/447) than children in the control group (88/447). One SAE episode in a RTS,S/AS01(E) recipient and nine episodes among eight rabies vaccine recipients met the criteria for severe malaria. Unsolicited AEs were reported in 78% of subjects in the RTS,S/AS01(E) group and 74% of subjects in the rabies vaccine group. In both vaccine groups, gastroenteritis and pneumonia were the most frequently reported unsolicited AE. Fever was the most frequently observed solicited AE and was recorded after 11% of RTS,S/AS01(E) doses compared to 31% of doses of rabies vaccine. The candidate vaccine RTS,S/AS01(E) showed an acceptable safety profile in children living in a malaria-endemic area in East Africa. More data on the safety of RTS,S/AS01(E) will become available from the Phase 3 programme.
Exploring antecedents and consequence of online group-buying intention: An extended perspective on theory of planned behavior
With the development of electronic commerce, many dotcom firms are selling products to consumers across different countries and regions. The managers of online group-buying firms seek to increase customer purchasing intentions in the face of competition. Online group-buying refers to a certain number of consumers joining together as a group via the Internet for the purpose of buying a certain product at a discount. This study explores antecedents of the intention to participate in online group-buying and the relationship between intention and behavior. The research model is based on the theory of planned behavior, electronic word-of-mouth, network embeddedness, and website quality attitude. An online survey was administered to 373 registered members of the ihergo website. Data were analyzed using the partial least squares method, and analytical results demonstrate that for potential consumers, experiential electronic word-of-mouth, relational embeddedness of the initiator, and service quality attitude influence the intention to engage in online group-buying; for current consumers, the intention to attend online group-buying is determined by the structural and relational embeddedness of the initiator, system quality attitude positively affects intention, and intention positively affects online group-buying behavior. This study proposes a new classification of electronic word-of-mouth and applies the perspective of network embeddedness to explore antecedents of intention in online group-buying, broadening the applicability of electronic word-of-mouth and embeddedness theory. Finally, this study presents practical suggestions for managers of online group-buying firms to improve marketing efficiency.
ReliefF for estimation and discretization of attributes in classification, regression, and ILP problems
Instead of myopic impurity functions, we propose the use of ReliefF for heuristic guidance of inductive learning algorithms. The basic algorithm RELIEF, developed by Kira and Rendell (Kira and Rendell, 1992a;b), is able to efficiently solve classification problems involving highly dependent attributes, such as parity problems. However, it is sensitive to noise and is unable to deal with incomplete data, multi-class, and regression problems (continuous class). We have extended RELIEF in several directions. The extended algorithm ReliefF is able to deal with noisy and incomplete data, can be used for multi-class problems, and its regressional variant RReliefF can deal with regression problems. Another area of application is inductive logic programming (ILP) where, instead of myopic measures, ReliefF can be used to estimate the utility of literals during theory construction.
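The core Relief update that ReliefF generalizes can be sketched in a few lines; this version assumes the simple case of a two-class, complete, numeric dataset, whereas ReliefF proper additionally averages over k nearest hits/misses and handles noise, missing values, and multiple classes as the abstract describes.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Basic Relief attribute weighting (two classes, numeric features)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12     # for normalized differences
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                             # exclude the instance itself
        hit = np.where(y == y[i], dist, np.inf).argmin()   # nearest same-class example
        miss = np.where(y != y[i], dist, np.inf).argmin()  # nearest other-class example
        # attributes that separate the classes gain weight;
        # attributes that differ between same-class neighbours lose weight
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter
```

Because the update compares nearest neighbours rather than scoring each attribute in isolation, strongly interacting attributes (as in parity problems) receive high weights where myopic impurity measures would see no signal.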
Online banking: a field study of drivers, development challenges, and expectations
Online banking is the newest and least understood delivery channel for retail banking services. Yet, few, if any, studies were reported quantifying the issues relevant to this cutting-edge technology. This paper reports the results of a quantitative study of the perceptions of banks' executive and IT managers and potential customers with regard to the drivers, development challenges, and expectations of online banking. The findings will be useful for both researchers and practitioners who seek to understand the issues relevant to online banking.
Prevalence and associated factors of retinal vein occlusion in the Korean National Health and Nutritional Examination Survey, 2008–2012
Retinal vein occlusion (RVO) is the second most common retinal vascular disease, and there are only a few Asian population-based studies, all with small samples. Hypertension is a modifiable risk factor for RVO, but no recent studies have examined the relationship between RVO and hypertension control status. We aimed to investigate the prevalence of RVO and its associated factors in an adult Korean population in a nationwide population-based, cross-sectional study. We enrolled 37,982 participants from the Korea National Health and Nutrition Examination Survey who were 19 years or older and who had undergone ophthalmologic exams from 2008 through 2012. All participants underwent a comprehensive ophthalmic examination, standardized ophthalmic and health interviews, and laboratory investigations. Digital fundus photographs were interpreted by retinal specialists who assessed the presence of RVO, and the prevalence of RVO was then estimated. RVO-associated factors were determined using step-wise logistic regression analyses. We also performed a subgroup analysis to evaluate the association between hypertension and RVO according to hypertension control status and antihypertensive medication use. Of the enrolled participants, 25,765 met our study criteria and were included in the analyses. The overall RVO prevalence (n = 205) was 0.6 ± 0.1% (0.6 ± 0.1% for branch RVO and <0.1% for central RVO), and no sex differences were observed. In multivariate logistic regression analyses adjusting for all potential risk factors, the following factors were significantly associated with RVO: old age (odds ratio (OR) = 1.72, 95% CI: 1.27-2.34), hypertension (OR = 2.56, 95% CI: 1.31-5.08), history of stroke (OR = 2.08, 95% CI: 1.01-4.45), and hypercholesterolemia (OR = 1.84, 95% CI: 1.01-3.35). In the subset of participants with hypertension, uncontrolled hypertension (OR = 3.46, 95% CI: 1.72-6.94) and unmedicated hypertension (OR = 4.12, 95% CI: 2.01-8.46) were more strongly associated with RVO than no hypertension. RVO prevalence in Korea was moderate relative to that in the rest of the world, and RVO-associated factors were similar to those identified in other population-based studies. Well-controlled hypertension and antihypertensive medication were inversely associated with RVO.
Umbilical arterial and venous catheters: placement, use, and complications.
Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.
Evaluation of a challenge testing protocol to assess the stability of ready-to-eat cooked meat products against growth of Listeria monocytogenes.
Challenge testing of ready-to-eat (RTE) foods with Listeria monocytogenes is recommended to assess the potential for growth. The present study was undertaken to evaluate a protocol for challenge testing applied to RTE cooked meat products. In order to choose L. monocytogenes strains with a representative behaviour, initially, the variability of the response of multiple L. monocytogenes strains of human and food origin to different stress and growth conditions was established. The strains were not inhibited in their growth at moderate acid pH (5.25), and the four strains tested in particular showed a similar acid-adaptive response. Growth of the various strains under four different combined stress conditions indicated that no L. monocytogenes strain had a consistently significantly longer or shorter lag phase, or higher or lower maximum specific growth rate. The effect of choice of strain and history (pre-incubation temperature of 7 or 30 degrees C) on growth of L. monocytogenes under optimum conditions (Brain Heart Infusion, BHI) and in modified BHI simulating conditions of cooked ham and pâté was studied. In general, all four L. monocytogenes strains behaved similarly. In BHI, no difference in lag phase was observed between the cold-adapted and standard inoculum, whereas in BHI adjusted to ham and pâté conditions, a ca. 40-h reduction of the lag phase was noted for the cold-adapted inoculum. Subsequently, microbial challenge testing of L. monocytogenes in modified-atmosphere-packaged sliced cooked ham and pâté was performed. A mixed inoculum of four L. monocytogenes strains and an inoculum level of ca. 1-10 cfu/g were used. On vacuum-packed sliced cooked ham, the concentration of 100 cfu/g, the safety limit considered low risk for causing listeriosis, was exceeded after 5 days, whereas ca. 10^5 cfu/g were obtained after 14 days, when LAB spoilers also reached unacceptable numbers (ca. 10^7 cfu/g), whether standard or cold-adapted inoculum was used. The concentration of sodium lactate determined the opportunities for growth of L. monocytogenes in pâté. If growth of L. monocytogenes in pâté was noticed, the threshold of 100 cfu/g was crossed earlier for the cold-adapted inoculum compared to the standard inoculum.
HYPOSPADIAS. ANATOMY, EMBRYOLOGY, AND RECONSTRUCTIVE TECHNIQUES
Hypospadias is one of the most common congenital anomalies that can be treated with surgical reconstruction. The etiology in the majority of cases of hypospadias remains elusive. Androgens are clearly critical for penile development; however, defects in androgen metabolism and/or the androgen receptor explain only a small subset of patients with hypospadias. This paper reviews the present strategies to understanding the etiology of hypospadias. This is followed by a review of the anatomy of the male and female genitalia with an emphasis on reconstructive implications. Finally, current techniques for hypospadias repair are reviewed.
Lesions of the medial pallium, but not of the lateral pallium, disrupt spaced-trial avoidance learning in goldfish (Carassius auratus)
The effects of telencephalic lesions of the medial pallium (MP) and lateral pallium (LP) of goldfish on avoidance learning were studied in a two-way, shuttle response, spaced-trial avoidance conditioning situation. Animals received one trial per day, a training regime that permits the assessment of avoidance learning in the absence of stimulus carry-over effects from prior trials. Control and LP-lesioned goldfish exhibited significantly faster avoidance learning than MP-lesioned animals. These results suggest that the MP, but not the LP, is responsible for the widely described deficits in avoidance learning after lesions of the entire telencephalon. The proposal of a functional similarity between the fish MP and the mammalian amygdala, known to be involved in fear conditioning, suggests a conservative phylogenetic role of this area in avoidance learning.
Visions and Voices on Emerging Challenges in Digital Business Strategy
This section is a collection of shorter “Issue and Opinions” pieces that address some of the critical challenges around the evolution of digital business strategy. These voices and visions are from thought leaders who, in addition to their scholarship, have a keen sense of practice. They outline through their opinion pieces a series of issues that will need attention from both research and practice. These issues have been identified through their observation of practice with the eye of a scholar. They provide fertile opportunities for scholars in information systems, strategic management, and organizational theory.
Oil cactus pear (Opuntia ficus-indica L.)
Seeds and pulp of cactus pear (Opuntia ficus-indica L.) were compared in terms of fatty acids, lipid classes, sterols, fat-soluble vitamins and β-carotene. Total lipids (TL) in lyophilised seeds and pulp were 98.8 (dry weight) and 8.70 g/kg, respectively. High amounts of neutral lipids were found (87.0% of TL) in seed oil, while glycolipids and phospholipids occurred at high levels in pulp oil (52.9% of TL). In both oils, linoleic acid was the dominating fatty acid, followed by palmitic and oleic acids, respectively. The trienes, γ- and α-linolenic acids, were estimated in higher amounts in pulp oil, while α-linolenic acid was only detected at low levels in seed oil. Neutral lipids were characterised by higher unsaturation ratios, while saturates were at higher levels in polar lipids. The sterol marker, β-sitosterol, accounted for 72% and 49% of the total sterol content in seed and pulp oils, respectively. The vitamin E level was higher in the pulp oil than in the seed oil, whereas γ-tocopherol was the predominant component in seed oil and δ-tocopherol was the main constituent in pulp oil. β-Carotene was also higher in pulp oil than in seed oil. The oils under investigation resembled each other in the level of vitamin K1 (0.05% of TL). The information provided by the present work is of importance for further chemical investigation of cactus pear oil and industrial utilisation of the fruit as a raw material for oils and functional foods.
Ensemble of exemplar-SVMs for object detection and beyond
This paper proposes a conceptually simple but surprisingly powerful method which combines the effectiveness of a discriminative object detector with the explicit correspondence offered by a nearest-neighbor approach. The method is based on training a separate linear SVM classifier for every exemplar in the training set. Each of these Exemplar-SVMs is thus defined by a single positive instance and millions of negatives. While each detector is quite specific to its exemplar, we empirically observe that an ensemble of such Exemplar-SVMs offers surprisingly good generalization. Our performance on the PASCAL VOC detection task is on par with the much more complex latent part-based model of Felzenszwalb et al., at only a modest computational cost increase. But the central benefit of our approach is that it creates an explicit association between each detection and a single training exemplar. Because most detections show good alignment to their associated exemplar, it is possible to transfer any available exemplar meta-data (segmentation, geometric structure, 3D model, etc.) directly onto the detections, which can then be used as part of overall scene understanding.
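The training scheme, one linear SVM per positive exemplar against all negatives, can be sketched with scikit-learn as follows; feature extraction (HOG in the paper), hard-negative mining, and calibration are omitted, and the regularization value and class-weighting trick are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svms(positives, negatives, C=0.5):
    """Fit one linear SVM per positive exemplar (1 positive vs. all negatives)."""
    detectors = []
    y = np.r_[1, np.zeros(len(negatives))]
    for exemplar in positives:
        X = np.vstack([exemplar[None, :], negatives])
        # heavily weight the single positive so it is not swamped by negatives
        clf = LinearSVC(C=C, class_weight={1: len(negatives), 0: 1})
        clf.fit(X, y)
        detectors.append(clf)
    return detectors

def ensemble_scores(detectors, X):
    """Score each window with every exemplar detector; the max wins,
    and the argmax identifies the associated training exemplar."""
    scores = np.stack([d.decision_function(X) for d in detectors])
    return scores.max(axis=0), scores.argmax(axis=0)

# Usage with random stand-in features (10 exemplars, 500 negatives, 50-D)
pos, neg = np.random.rand(10, 50) + 1.0, np.random.rand(500, 50)
dets = train_exemplar_svms(pos, neg)
best_score, best_exemplar = ensemble_scores(dets, np.random.rand(3, 50))
```

The `argmax` in `ensemble_scores` is the point of the method: each detection is tied to one training exemplar, so that exemplar's meta-data can be transferred onto the detection.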
Manual and Automatic Evaluation of Machine Translation between European Languages
We evaluated machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. For the 2006 NAACL/HLT Workshop on Machine Translation, we organized a shared task to evaluate machine translation performance. 14 teams from 11 institutions participated, ranging from commercial companies and industrial research labs to individual graduate students. The motivation for such a competition is to establish baseline performance numbers for defined training scenarios and test sets. We assembled various forms of data and resources: a baseline MT system, language models, and prepared training and test sets, resulting in actual machine translation output from several state-of-the-art systems and manual evaluations. All of this is available at the workshop website (http://www.statmt.org/wmt06/). The shared task is a follow-up to the one we organized in the previous year, at a similar venue (Koehn and Monz, 2005). As then, we concentrated on the translation of European languages and the use of the Europarl corpus for training. Again, most systems that participated could be categorized as statistical phrase-based systems. While there are now a number of competitions (DARPA/NIST (Li, 2005), IWSLT (Eck and Hori, 2005), TC-Star), this one focuses on text translation between various European languages. This year's shared task changed in some aspects from last year's: we carried out a manual evaluation in addition to the automatic scoring, performed by the participants, which revealed interesting clues about the properties of automatic and manual scoring; we evaluated translation from English, in addition to into English, with English again paired with German, French, and Spanish (we dropped Finnish, partly to keep the number of tracks manageable and partly because we assumed it would be hard to find enough Finnish speakers for the manual evaluation); and we included an out-of-domain test set, allowing us to compare machine translation performance in-domain and out-of-domain.
Robust Feature Selection by Mutual Information Distributions
Mutual information is widely used in artificial intelligence, in a descriptive way, to measure the stochastic dependence of discrete random variables. In order to address questions such as the reliability of the empirical value, one must consider sample-to-population inferential approaches. This paper deals with the distribution of mutual information, as obtained in a Bayesian framework by a second-order Dirichlet prior distribution. The exact analytical expression for the mean and an analytical approximation of the variance are reported. Asymptotic approximations of the distribution are proposed. The results are applied to the problem of selecting features for incremental learning and classification of the naive Bayes classifier. A fast, newly defined method is shown to outperform the traditional approach based on empirical mutual information on a number of real data sets. Finally, a theoretical development is reported that allows one to efficiently extend the above methods to incomplete samples in an easy and effective way.
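While the paper derives analytical expressions for the mean and variance, the distribution of mutual information under a Dirichlet posterior is easy to approximate by sampling, which makes the idea concrete; the uniform Dirichlet(1) prior below is one common choice, assumed here for illustration.

```python
import numpy as np

def mi(p):
    """Mutual information (in nats) of a joint probability table p(x, y)."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (px @ py)[mask])))

def mi_posterior(counts, n_samples=5000, seed=0):
    """Monte Carlo approximation of the posterior distribution of MI
    given an observed contingency table and a Dirichlet(1) prior."""
    rng = np.random.default_rng(seed)
    alpha = (counts + 1.0).ravel()                 # prior pseudo-counts + data
    samples = rng.dirichlet(alpha, size=n_samples).reshape(n_samples, *counts.shape)
    return np.array([mi(p) for p in samples])

# Usage: posterior mean and spread of MI for a small 2x2 table
counts = np.array([[30, 5], [4, 25]])
post = mi_posterior(counts)
print(post.mean(), post.std())   # compare against the plug-in empirical MI
```

The spread of this posterior is exactly the "reliability of the empirical value" the abstract refers to: two features with equal empirical MI can have very different posterior variances when their counts differ.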
Deadlock-Free Message Routing in Multiprocessor Interconnection Networks
A deadlock-free routing algorithm can be generated for arbitrary interconnection networks using the concept of virtual channels. A necessary and sufficient condition for deadlock-free routing is the absence of cycles in a channel dependency graph. Given an arbitrary network and a routing function, the cycles of the channel dependency graph can be removed by splitting physical channels into groups of virtual channels. This method is used to develop deadlock-free routing algorithms for k-ary n-cubes, for cube-connected cycles, and for shuffle-exchange networks.
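Since the paper's necessary and sufficient condition reduces deadlock freedom to the absence of cycles in the channel dependency graph, checking a candidate routing function amounts to a cycle test; a minimal sketch follows, with the graph represented as an adjacency dict (an assumed representation, not the paper's notation).

```python
def has_cycle(dep):
    """Detect a cycle in a channel dependency graph given as {channel: [channels]}.
    By the paper's condition, a routing function is deadlock-free iff this is False."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(c):
        color[c] = GRAY
        for nxt in dep.get(c, []):
            if color.get(nxt, WHITE) == GRAY:      # back edge: dependency cycle
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[c] = BLACK
        return False

    return any(color.get(c, WHITE) == WHITE and visit(c) for c in list(dep))

# Usage: c0 -> c1 -> c2 -> c0 is a cyclic dependency, hence deadlock-prone;
# splitting one physical channel into virtual channels breaks the cycle.
print(has_cycle({"c0": ["c1"], "c1": ["c2"], "c2": ["c0"]}))   # True
print(has_cycle({"c0": ["c1"], "c1": ["c2"]}))                 # False
```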
How does EMDR work?
Eye movement desensitisation and reprocessing (EMDR) is an effective treatment for alleviating trauma symptoms, and the positive effects of this treatment have been scientifically confirmed under well-controlled conditions. This has provided an opportunity to explore how EMDR works. The present paper reports on the findings of a long series of experiments that disproved the hypothesis that eye movements or other 'dual tasks' are unnecessary. These experiments also disproved the idea that 'bilateral stimulation' is needed; moving the eyes up and down produces the same effect as horizontal eye movement, and so do tasks that require no eye movement at all. However, it is important that the dual task taxes working memory. Several predictions can be derived from the working memory explanation for eye movements in EMDR. These seem to hold up extremely well in critical experimental tests, and create a solid explanation of how eye movements work. This paper discusses the implications that this theory and the empirical findings may have for the EMDR technique.
Effects of aging on lower urinary tract and pelvic floor function in nulliparous women.
OBJECTIVE To evaluate the effects of aging, independent of parity, on pelvic organ and urethral support, urethral function, and levator function in a sample of nulliparous women. METHODS A cohort of 82 nulliparous women, aged 21-70 years, were recruited from the community through advertisements. Subjects underwent pelvic examination using pelvic organ prolapse quantification, urethral angles by cotton-tipped swab, and multichannel urodynamics and uroflow. Vaginal closure force was quantified using an instrumented vaginal speculum. Subjects were grouped into five age categories and analyses performed using t tests, Fisher exact tests, Kruskal-Wallis tests, and Pearson correlation coefficients. Multiple linear regression modeling was performed to adjust for factors that might confound the results of our primary outcomes. RESULTS Increasing age was associated with decreasing maximal urethral closure pressure (r = -0.758, P < .001), with a 15 cm H2O decrease in pressure per decade. Pelvic organ support as measured by pelvic organ prolapse quantification did not differ by age group. Levator function as measured by resting vaginal closure force and augmentation of vaginal closure force also did not change with increasing age. CONCLUSION In a sample of nulliparous women between 21 and 70 years of age, maximal urethral closure pressure in the senescent urethra was 40% of that in the young urethra; increasing age did not affect clinical measures of pelvic organ support, urethral support, and levator function. LEVEL OF EVIDENCE III.
Parametric nonlinear dimensionality reduction using kernel t-SNE
Novel non-parametric dimensionality reduction techniques such as t-distributed stochastic neighbor embedding (t-SNE) lead to a powerful and flexible visualization of high-dimensional data. One drawback of non-parametric techniques is their lack of an explicit out-of-sample extension. In this contribution, we propose an efficient extension of t-SNE to a parametric framework, kernel t-SNE, which preserves the flexibility of basic t-SNE, but enables explicit out-of-sample extensions. We test the ability of kernel t-SNE in comparison to standard t-SNE for benchmark data sets, in particular addressing the generalization ability of the mapping for novel data. In the context of large data sets, this procedure enables us to train a mapping for a fixed size subset only, mapping all data afterwards in linear time. We demonstrate that this technique yields satisfactory results also for large data sets provided missing information due to the small size of the subset is accounted for by auxiliary information such as class labels, which can be integrated into kernel t-SNE based on the Fisher information.
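The out-of-sample extension described here can be sketched as a normalized Gaussian-kernel mapping from input space to the learned t-SNE coordinates; fitting the coefficients by least squares against the training embedding, and the single shared bandwidth, are simplifying assumptions in this sketch.

```python
import numpy as np
from scipy.spatial.distance import cdist

def fit_kernel_tsne(X_train, Y_train, sigma):
    """Learn coefficients A so that y(x) = sum_j A_j * k_j(x) reproduces Y_train,
    where k_j are normalized Gaussian kernels centred on the training points."""
    K = np.exp(-cdist(X_train, X_train, "sqeuclidean") / (2 * sigma ** 2))
    K /= K.sum(axis=1, keepdims=True)              # normalized kernel weights
    A, *_ = np.linalg.lstsq(K, Y_train, rcond=None)
    return A

def map_new(X_new, X_train, A, sigma):
    """Explicit out-of-sample mapping: new points are embedded in linear time."""
    K = np.exp(-cdist(X_new, X_train, "sqeuclidean") / (2 * sigma ** 2))
    K /= K.sum(axis=1, keepdims=True)
    return K @ A

# Usage: Y_train would come from running ordinary t-SNE on the training subset
X_train = np.random.rand(200, 10)
Y_train = np.random.rand(200, 2)                   # stand-in for a t-SNE embedding
A = fit_kernel_tsne(X_train, Y_train, sigma=0.5)
print(map_new(np.random.rand(5, 10), X_train, A, sigma=0.5).shape)   # (5, 2)
```

This is what enables the "fixed size subset" strategy in the abstract: t-SNE is run once on the subset, and every remaining point is mapped by a single kernel evaluation against that subset.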
Omalizumab in the management of oral corticosteroid-dependent IGE-mediated asthma patients.
BACKGROUND Several studies have demonstrated the beneficial effects of omalizumab in asthma patients. Here we describe the drug's tolerance and oral corticosteroid sparing capacity in a long-term observational study. METHODS Thirty-two patients aged ≥18 years with obstructive airway disease and FEV1 reversibility ≥12% and 200 mL, an oral steroid requirement ≥7.5 mg per day of prednisolone during a period of ≥1 year, a positive prick test or in vitro reactivity (RAST) to at least one perennial aeroallergen, and a baseline immunoglobulin E level ranging between 30 and 700 IU/mL were prospectively followed for 17.2 ± 8.5 months. Patients were visited once or twice a month, depending on their schedule for omalizumab administration. INTERVENTION Blood analysis every six months; spirometry and nitric oxide measurement at every visit. RESULTS One patient who dropped out early was excluded. Over the follow-up period, the treatment benefited 83.9% (26/31) of the cohort; oral corticosteroids were reduced from 7.19 ± 11.1 to 3.29 ± 11.03 mg (p < 0.002) and withdrawn in 74.2% of patients. FEV1 (percent predicted) was 64.4 ± 22.7 at the beginning and 62.9 ± 24.3 at the end. IgE at entry was 322.2 ± 334.2 IU/mL and increased 2.34-fold. Respiratory function and NO did not present statistically significant changes. We identified three groups of patients: the first (n = 17) receiving oral steroids at entry, in whom the accumulated dose of oral steroids progressively decreased; another (n = 10) including patients who had quit oral steroids before starting omalizumab although they had not been instructed to do so, and whose oral steroid dose at the end of follow-up was zero; and a third group (n = 4) that did not benefit from omalizumab treatment. The only relevant side effect was a flu-like syndrome, which required discontinuation of treatment in one patient. CONCLUSION In our series, a substantial, safe decrease in oral corticosteroid requirements was observed due, at least to some extent, to omalizumab therapy. Oral corticosteroids were withdrawn in three-quarters of the patients. We were unable to identify a factor able to predict which patients would benefit most from omalizumab treatment.
A Novel Performance Evaluation Methodology for Single-Target Trackers
This paper addresses the problem of single-target tracker performance evaluation. We consider the performance measures, the dataset and the evaluation system to be the most important components of tracker evaluation and propose requirements for each of them. The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison. The ranking-based methodology addresses tracker equivalence in terms of statistical significance and practical differences. A fully-annotated dataset with per-frame annotations with several visual attributes is introduced. The diversity of its visual properties is maximized in a novel way by clustering a large number of videos according to their visual attributes. This makes it the most carefully constructed and annotated dataset to date. A multi-platform evaluation system allowing easy integration of third-party trackers is presented as well. The proposed evaluation methodology was tested on the VOT2014 challenge on the new dataset and 38 trackers, making it the largest benchmark to date. Most of the tested trackers are indeed state-of-the-art since they outperform the standard baselines, resulting in a highly-challenging benchmark. An exhaustive analysis of the dataset from the perspective of tracking difficulty is carried out. To facilitate tracker comparison a new performance visualization technique is proposed.
Lagged correlation-based deep learning for directional trend change prediction in financial time series
Trend change prediction in complex systems with a large number of noisy time series is a problem with many applications to real-world phenomena, with stock markets being a notoriously difficult-to-predict example of such systems. We approach the prediction of directional trend changes via complex lagged correlations between series, excluding any information about the target series from the respective inputs so that predictions are based purely on such correlations with other series. We propose the use of deep neural networks that employ step-wise linear regressions with exponential smoothing in the preparatory feature engineering for this task, with regression slopes serving as trend-strength indicators for a given time interval. We apply this method to historical stock market data from 2011 to 2016 as a use case of lagged correlations between large numbers of time series that are heavily influenced by externally arising new information as a random factor. The results demonstrate the viability of the proposed approach, with state-of-the-art accuracies, accounting for the statistical significance of the results as additional validation, and with important implications for modern financial economics.
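The feature-engineering step described above can be sketched as follows (a minimal illustration; the smoothing constant, window length, and function names are assumptions rather than the paper's settings): each series is exponentially smoothed, and the slope of a linear regression over each window serves as a trend-strength feature.

```python
import numpy as np

def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing of a 1-D series."""
    smoothed = np.empty(len(series))
    smoothed[0] = series[0]
    for t in range(1, len(series)):
        smoothed[t] = alpha * series[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

def trend_slopes(series, window=10):
    """Slope of an OLS line fitted to each non-overlapping window;
    the sign gives trend direction, the magnitude trend strength."""
    x = np.arange(window)
    slopes = []
    for start in range(0, len(series) - window + 1, window):
        y = series[start:start + window]
        slopes.append(np.polyfit(x, y, 1)[0])  # [0] is the slope coefficient
    return np.array(slopes)

prices = np.cumsum(np.random.randn(250)) + 100   # toy price series
features = trend_slopes(exponential_smoothing(prices))
print(features[:5])
```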
Ultrasound treatment of cutaneous side-effects of infused apomorphine: a randomized controlled pilot study.
Apomorphine hydrochloride is a dopamine agonist used in the treatment of advanced Parkinson's disease. Its administration by subcutaneous infusions is associated with the development of nodules that may interfere with absorption of the drug. This pilot study assessed the effectiveness of ultrasound (US) in the treatment of these nodules. Twelve participants were randomly assigned to receive a course of real or sham US on an area judged unsuitable for infusion. Following treatment, no significant change was observed in measures of tissue hardness and tenderness. However, 5 of 6 participants receiving real US rated the treated area suitable for infusion compared with the 1 of 6 receiving sham US. Sonographic appearance improved in both groups, but more substantially in the real US group. Power calculations suggest a total sample size of 30 would be required to establish statistical significance. A full-scale study of the effectiveness of therapeutic US in the treatment of apomorphine nodules is warranted.
Mecanum wheels with Astar algorithm and fuzzy PID algorithm based on genetic algorithm
Mecanum robots are widely used in industrial settings. The Mecanum wheel enables omnidirectional movement through motor drive alone, making these robots more flexible than ordinary robots and giving them great potential in confined spaces. The robot's control system provides localization and the computation of an optimal route, for which the A* algorithm is the most common method. However, because A* paths contain orthogonal turning points, the robot spends considerable time adjusting its heading. The improved algorithm presented in this paper reduces the occurrence of orthogonal turning points by automatically generating a smoother path, which greatly reduces the traversal time. At the same time, complicated road conditions and the difficulty of modeling the robot make it hard to achieve satisfactory performance with traditional control algorithms, so we use a fuzzy algorithm to control the robot. Because static membership functions degrade the control quality in complex environments, a PSO algorithm is used to determine the membership functions dynamically. This effectively improves the robot's motion performance, dynamic characteristics, and adjustment time.
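One simple way to remove orthogonal turning points from a grid A* path, sketched below for illustration (the corner-cutting scheme and its `cut` parameter are assumptions, not the paper's algorithm): each right-angle waypoint is replaced by two intermediate points so the robot can round the corner instead of stopping to reorient.

```python
def smooth_orthogonal_turns(path, cut=0.3):
    """path: list of (x, y) grid waypoints from A*; returns a smoothed path."""
    if len(path) < 3:
        return list(path)
    smoothed = [path[0]]
    for prev, curr, nxt in zip(path, path[1:], path[2:]):
        v_in = (curr[0] - prev[0], curr[1] - prev[1])
        v_out = (nxt[0] - curr[0], nxt[1] - curr[1])
        if v_in != v_out:  # a turning point: cut the corner with two waypoints
            smoothed.append((curr[0] - cut * v_in[0], curr[1] - cut * v_in[1]))
            smoothed.append((curr[0] + cut * v_out[0], curr[1] + cut * v_out[1]))
        else:
            smoothed.append(curr)
    smoothed.append(path[-1])
    return smoothed

# One right-angle turn becomes two gentle waypoints around the corner.
print(smooth_orthogonal_turns([(0, 0), (1, 0), (1, 1), (2, 1)]))
```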
Happy families: a twin study of humour.
The objective of this study was to estimate how much of an individual's appreciation of humour is influenced by genetic factors, the shared environment, or the individual's unique environment. A population-based classical twin study of 127 pairs of female twins (71 monozygous (MZ) and 56 dizygous (DZ) pairs) aged 20-75 from the St Thomas' UK Adult Twin Registry elicited responses to five 'Far Side' Larson cartoons on a scale of 0-10. Within both MZ and DZ twin pairs, the tetrachoric correlations of responses to all five cartoons were significantly greater than zero. Furthermore, the correlations for MZ and DZ twins were of similar magnitude, and in some cases the DZ correlation was greater than that of the MZ twins. This pattern of correlations suggests that shared environment rather than genetic effects contributes to cartoon appreciation. Multivariate model-fitting confirmed that these data were best explained by a model that allowed for the contribution of the shared environment and random environmental factors, but not genetic effects. However, there did not appear to be a general humour factor underlying responses to all five cartoons, and no effect of age was seen. The shared environment, rather than genetic factors, explains the familial aggregation of humour appreciation as assessed by the specific 'off the wall' cognitive type of cartoons used in this study.
Deep Unsupervised Convolutional Domain Adaptation
In multimedia analysis, the task of domain adaptation is to adapt the feature representation learned in a source domain with rich label information to a target domain with little or no label information. Significant research endeavors have been devoted to aligning the feature distributions between the source and target domains in the top fully connected layers of unsupervised DNN-based models. However, domain adaptation has thus been arbitrarily constrained near the output ends of the DNN models, which brings about inadequate knowledge transfer in the DNN-based domain adaptation process, especially near the input end. We develop an attention transfer process for convolutional domain adaptation. The domain discrepancy, measured by a correlation alignment loss, is minimized on the second-order correlation statistics of the attention maps for both the source and target domains. We then propose the Deep Unsupervised Convolutional Domain Adaptation (DUCDA) method, which jointly minimizes the supervised classification loss on labeled source data and the unsupervised correlation alignment loss measured on both convolutional and fully connected layers. The multi-layer domain adaptation process collaboratively reinforces each individual domain adaptation component and significantly enhances the generalization ability of CNN models. Extensive cross-domain object classification experiments show that DUCDA outperforms other state-of-the-art approaches and validate the promise of DUCDA for large-scale real-world applications.
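A minimal PyTorch sketch of the correlation alignment (CORAL) loss the abstract builds on (the framework choice and tensor shapes are assumptions; the paper applies it to attention maps as well as fully connected features): it matches the second-order statistics of source and target activations.

```python
import torch

def coral_loss(source, target):
    """source, target: (batch, features) activations,
    e.g. flattened attention maps from one layer."""
    d = source.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)

    diff = covariance(source) - covariance(target)
    return (diff * diff).sum() / (4.0 * d * d)

src = torch.randn(32, 256)   # toy source-domain features
tgt = torch.randn(32, 256)   # toy target-domain features
loss = coral_loss(src, tgt)  # added to the classification loss with a weight
print(loss.item())
```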
Accelerated Q-learning approach for minutiae extraction in fingerprint image
Fingerprint recognition is a physiological biometric technique and among the most dependable biometric methods. It involves preprocessing, minutiae extraction, and post-processing stages. Conventional approaches use image-processing steps in the preprocessing stage to reduce noise, but such steps are themselves highly sensitive to noise. A Q-learning approach to minutiae extraction is robust to noise, but it also rewards wrong ridge paths, which wastes processing time. In this paper we propose an accelerated Q-learning approach for minutiae extraction that computes a Q-value for both success and fail states. The proposed method follows each ridge; if it reaches a fail state it abandons that ridge path, otherwise it continues following the ridge and computes the Q-value for the success state. The proposed method reduces processing time and improves robustness to noise. Keywords: fingerprint images, minutiae extraction, ridge endings, ridge bifurcation, fingerprint recognition.
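The tabular Q-value update behind such an approach can be sketched as follows (a hedged illustration; the state encoding, actions, rewards, and learning constants are assumptions, not the paper's design): ridge-following actions earn a small reward, while fail states are penalized so their paths are abandoned.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9
q_table = defaultdict(float)  # (state, action) -> Q-value

def update_q(state, action, reward, next_state, actions):
    """Standard Q-learning update; fail states carry a negative reward,
    so the learned policy stops following wrong ridge paths."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    key = (state, action)
    q_table[key] += ALPHA * (reward + GAMMA * best_next - q_table[key])

actions = ["up", "down", "left", "right"]
state, action = (10, 12), "right"          # toy pixel position on a ridge
reward = -1.0 if random.random() < 0.1 else 0.1  # fail vs. ridge-following
update_q(state, action, reward, (10, 13), actions)
print(q_table[(state, action)])
```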
Level Playing Field for Million Scale Face Recognition
Face recognition is often perceived as a solved problem; however, when tested at the million scale, algorithms exhibit dramatic variation in accuracy [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address these questions, we created a benchmark, MF2, that requires all algorithms to be trained on the same data and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos, created with the goal of leveling the playing field for large-scale face recognition. We contrast our results with findings from the other two large-scale benchmarks, MegaFace Challenge and MS-Celeb-1M, where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms trained on MF2 were able to achieve state-of-the-art results comparable to algorithms trained on massive private sets; 2) some algorithms improved on their own earlier results once trained on MF2; 3) invariance to aging suffers from low accuracies, as in MegaFace, identifying the need for larger age variations, possibly within identities, or adjustment of algorithms in future testing.
Personality and Persuasive Technology: An Exploratory Study on Health-Promoting Mobile Applications
Though a variety of persuasive health applications have been designed with disease prevention in mind, many target a general audience. Designers of these technologies may achieve more success if their applications account for an individual's personality type. Our goal in this research was to explore the relationship between personality and persuasive technologies in the context of health-promoting mobile applications. We conducted an online survey with 240 participants using storyboards depicting eight different persuasive strategies, the Big Five Inventory for personality domains, and questions on perceptions of the persuasive technologies. Our results and analysis revealed a number of significant relationships between personality and the persuasive technologies we evaluated. The findings from this study can guide the development of persuasive technologies that cater to individual personalities, improving the likelihood of their success.
Face Model Compression by Distilling Knowledge from Neurons
Recent advanced face recognition systems are built on large Deep Neural Networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs makes their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is used as supervision to train a compact student network. Unlike previous works that represent the knowledge by the softened label probabilities, which are difficult to fit, we represent the knowledge by the neurons at the top hidden layer, which preserve as much information as the label probabilities but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose the neurons that are most relevant to face recognition. Using the selected neurons as supervision to mimic single networks of DeepID2+ and DeepID3, which are state-of-the-art face recognition systems, a compact student with a simple network structure achieves better verification accuracy on LFW than its respective teachers. When using an ensemble of DeepID2+ as the teacher, the mimicked student outperforms it while achieving a 51.6× compression ratio and a 90× speed-up in inference, making this cumbersome model applicable on portable devices. Introduction With the emergence of big training data, Deep Neural Networks (DNNs) have recently attained great breakthroughs in face recognition [23, 20, 21, 22, 19, 15, 29, 30, 28] and become applicable in many commercial platforms such as social networks, e-commerce, and search engines. To absorb massive supervision from big training data, existing works typically train a large DNN or a DNN ensemble, where each DNN consists of millions of parameters. Nevertheless, as face recognition shifts toward mobile and embedded devices, large DNNs are computationally expensive, which prevents them from being deployed to these devices and motivates research on using a small network to fit very large training data. This work addresses model compression of DNNs for face recognition by incorporating domain knowledge of learning face representation. There have been several attempts [1, 7, 18] in the literature to compress DNNs so as to ease their deployment, where a single network (i.e., a student) is trained by using the knowledge learned by a large DNN or a DNN ensemble (i.e., a teacher) as supervision. This knowledge can be simply represented as the probabilities of label predictions obtained with the softmax function [10]. Compared with the original 1-of-K hard labels, the label probabilities encode richer relative similarities among training samples and can train a DNN more effectively. However, this representation loses much information because most of the probabilities are close to zero after being squashed by softmax. To overcome this problem, Ba and Caruana [1] represented the learned knowledge using the logits, the values before softmax activation but zero-meaned, which reveal the relationships between labels as well as the similarities among samples in the logit space. However, as these unconstrained values (e.g.
the large negatives) may contain noisy information that overfits the training data, using them as supervision limits the generalization ability of the student. Recently, Hinton et al. [7] showed that the label probabilities and zero-meaned logits are two extreme outputs of the softmax function, with the temperature set to one and to positive infinity, respectively. To remove target noise, they empirically searched for a suitable temperature in the softmax function until it produced softened probabilities that disclosed the similarity structure of the data. As these softened target labels comprise much valuable information, a single student trained on them is able to mimic the performance of a cumbersome network ensemble. Despite the successes of [7], our empirical results show that training on soft targets is difficult to converge when compressing DNNs for face recognition. Previous studies [23, 24, 20, 19] have shown that the face representation learned by classifying a larger number of identities in the training data (e.g., 250 thousand in [24]) may have better generalization capacity. In face recognition, it appears difficult to fit soft targets of high dimensionality, which makes convergence slow. In this work, we show that instead of using soft targets in the output layer, the knowledge of the teacher can also be obtained from the neurons in the top hidden layer, which preserve as much information as the soft targets (since the soft targets are predicted from these neurons) but are more compact, e.g., 512 versus 12,994 dimensions according to the network structure in [21]. As these neurons may contain noise or information not relevant to face recognition, they are further selected according to the usefulness of the knowledge they capture. In particular, the selection is motivated by three original observations (domain knowledge) about face representation disclosed in this work, which generalize naturally to all DNNs trained to distinguish massive numbers of identities, such as [19, 23, 24, 22]. (1) The face representation learned by the face recognition task is a distributed representation [6] over face attributes, including identity-related attributes (IA), such as gender, race, and the shapes of facial components, as well as identity non-related attributes (NA), such as expression, lighting, and photo quality. This observation implies that each attribute concept is explained by having some neurons activated, while each neuron is involved in representing more than one attribute, although attribute labels are not provided during training. (2) However, a certain number of neurons are selective to NA or to both NA and IA, implying that the distributed representation is neither invariant nor completely factorized, because attributes in NA are variations that should be removed in face recognition, whereas these two factors (NA and IA) are present and coupled in some neurons. (3) Furthermore, a small number of neurons are inhibitive to all attributes and serve as noise. With these observations, we cast neuron selection as inference on a fully-connected graph, where each node represents the attribute-selectiveness of a neuron and each edge represents the correlation between neurons. An efficient mean field algorithm [9] enables us to select neurons that are more selective or discriminative to IA but less correlated with each other. As a result, the features of the selected neurons maintain inter-personal discriminativeness (i.e.
distributed and factorized to explain IA) while reducing intra-personal variations (i.e., invariant to NA). We employ the features after neuron selection as regression targets to train the student. To evaluate neuron selection, we employ DeepID2+ [21] as a teacher (T1), which achieved state-of-the-art performance on the LFW benchmark [8]. This work is chosen as an example because it successfully incorporates multiple complex components for face recognition, such as local convolution [12], a ranking loss function [19], deeply supervised learning [13], and model ensembling [17]. The effectiveness of all these components in face recognition has been validated by many existing works [19, 23, 24, 27]. Evaluating neuron selection on it demonstrates its capacity and generalization ability in mimicking functions induced by different learning strategies in face recognition. With neuron selection, a student with a simple network structure is able to outperform a single network of T1 or its ensemble. Interestingly, this simple student generalizes well to mimic a deeper teacher (T2), DeepID3 [22], a recent extension of DeepID2+. Although there are other advanced methods [24, 19] in face recognition, [21, 22] are more suitable baselines: they outperformed [24] and achieved results comparable to [19] on LFW with a much smaller training set and fewer identities, i.e., 290K images for [21] compared with 7.5M images for [24] and 200M images for [19]. We cannot compare with [24, 19] because their data are unavailable. The three main contributions of this work are summarized below. (1) We demonstrate that more compact supervision converges more efficiently when compressing DNNs for face recognition. Soft targets are difficult to fit because of their high dimensionality; neurons in the top hidden layer are more proper supervision, as they capture as much information as soft targets but are more compact. (2) Three valuable observations are disclosed from the learned face representation, identifying the usefulness of the knowledge captured in these neurons. These observations generalize naturally to all DNNs trained on face images. (3) Based on these observations, an efficient neuron selection method is proposed for model compression, and its effectiveness is validated on T1 and T2. Face Model Compression Training Student via Neuron Selection The merit behind our method is to select informative neurons in the top hidden layer of a teacher and adopt the features (responses) of the chosen neurons as supervision to train a student, mimicking the teacher's feature space. We formulate the objective function of model compression as a regression problem over a training set D = {I_i, f_i}_{i=1}^N.
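A minimal PyTorch sketch of the distillation step this formulation implies (the framework, network sizes, and neuron indices are assumptions for illustration): the student regresses onto the teacher's selected hidden-layer features rather than onto soft labels.

```python
import torch
import torch.nn as nn

selected = torch.tensor([3, 17, 42, 88])  # indices chosen by neuron selection

def distillation_step(student, teacher, images, optimizer):
    with torch.no_grad():
        targets = teacher(images)[:, selected]  # features of selected neurons
    preds = student(images)
    loss = nn.functional.mse_loss(preds, targets)  # regression objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with stand-in networks of matching dimensions.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, len(selected)))
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)
print(distillation_step(student, teacher, torch.randn(8, 3, 32, 32), optimizer))
```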
Using mobile phones in English education in Japan
We present three studies in mobile learning. First, we polled 333 Japanese university students regarding their use of mobile devices. One hundred percent reported owning a mobile phone. Ninety-nine percent send e-mail on their mobile phones, exchanging some 200 e-mail messages each week. Sixty-six percent e-mail peers about classes; 44% e-mail for studying. In contrast, only 43% e-mail on PCs, exchanging an average of only two messages per week. Only 20% had used a personal digital assistant. Second, we e-mailed 100-word English vocabulary lessons at timed intervals to the mobile phones of 44 Japanese university students, hoping to promote regular study. Compared with students urged to regularly study identical materials on paper or the Web, students receiving mobile e-mail learned more (P<0.05). Seventy-one percent of the subjects preferred receiving these lessons on mobile phones rather than PCs. Ninety-three percent felt this was a valuable teaching method. Third, we created a Web site explaining English idioms. Student-produced animation shows each idiom's literal meaning; a video shows the idiomatic meaning. Textual materials include an explanation, script, and quiz. Thirty-one Japanese college sophomores evaluated the site using video-capable mobile phones, finding few technical difficulties and rating its educational effectiveness highly.
Implicit Aspect Detection in Restaurant Reviews using Co-occurrence of Words
For aspect-level sentiment analysis, the important first step is to identify the aspects and their associated entities present in customer reviews. Aspects can be either explicit or implicit, and identification of the latter is more difficult. For restaurant reviews, this difficulty is escalated by the vast number of entities and aspects present in reviews. The problem of implicit aspect identification has been studied for customer reviews in different domains, including restaurant reviews. However, existing work on implicit aspect identification in customer reviews is limited to choosing at most one implicit aspect per sentence. Furthermore, it deals only with a limited set of aspects related to a particular domain, and thus has not faced the ambiguity that arises when an opinion word is used to describe different aspects. This paper presents a novel approach for implicit aspect detection that overcomes these two limitations. Our approach yields an F1-measure of 0.842 when applied to a set of restaurant reviews collected from Yelp.
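The idea of detecting implicit aspects from word co-occurrence can be sketched as below (illustrative only; the toy counts, aspect set, and threshold are assumptions, not the paper's method): candidate aspects are scored by how often a sentence's opinion words co-occur with explicit aspect mentions in a training corpus, and every aspect above a threshold is returned, allowing multiple implicit aspects per sentence.

```python
from collections import defaultdict
from itertools import product

# cooccur[(opinion_word, aspect)]: counts over training sentences where the
# aspect appears explicitly (toy numbers here, built from a labeled corpus).
cooccur = defaultdict(int, {("delicious", "food"): 40, ("cheap", "price"): 25,
                            ("cozy", "ambience"): 12, ("cheap", "food"): 3})
aspects = ["food", "price", "ambience", "service"]

def implicit_aspects(opinion_words, threshold=5):
    """Return every aspect whose co-occurrence score clears the threshold."""
    scores = defaultdict(int)
    for word, aspect in product(opinion_words, aspects):
        scores[aspect] += cooccur[(word, aspect)]
    return [a for a, s in scores.items() if s >= threshold]

print(implicit_aspects(["delicious", "cheap"]))  # -> ['food', 'price']
```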
Hit Song Science Is Not Yet a Science
We describe a large-scale experiment aimed at validating the hypothesis that the popularity of music titles can be predicted from global acoustic or human features. We use a 32,000-title database with 632 manually entered labels per title, including 3 related to the popularity of the title. Our experiment uses two audio feature sets, as well as the set of all the manually entered labels except the popularity ones. The experiment shows that some subjective labels may indeed be reasonably well learned by these techniques, but not popularity. This contradicts recent and sustained claims made in the MIR community and in the media about the existence of a “Hit Song Science”.