Aeneas - Colony I Meets Three-Axis Pointing
A dedicated satellite mission is currently under development at the USC Space Engineering Research Center. Named “Aeneas” (after the Trojan warrior who personifies duty and courage), the cubesat will be used to track cargo containers worldwide. To accomplish this feat, the satellite must maintain a 2-degree-accuracy surface track – the first of its kind in cubesat technology. This paper describes the requirements, design, implementation and tests to date in the areas of: flight dynamics, flight software, deployable spacecraft antenna, store-and-forward software, custom flight processor including MEMS gyroscopes, Doppler-based orbit determination enhancement and mobile ground station dish with helical feedhorn. Details are provided about the attitude control system and communications.

INTRODUCTION AND BACKGROUND Continual advances in micro-electronics enable more to be done with less. Today’s gadgets are smaller in size, lower in weight and consume less power than their counterparts of only a few years ago. This truism enables progress across many industries, and perhaps none so much as in cubesat technology. As the relaxation of constraints allows more performance to be packed into each cubic centimeter, nanosatellites are rapidly gaining the capability to address fundamentally important missions. Recognizing this trend, the USC Astronautics Department and the Information Sciences Institute (ISI) jointly created the Space Engineering Research Center (SERC), a fast-paced learning environment pairing students with industry experts to push the envelope of nanosatellite technology. SERC’s current satellite program is Aeneas, which modifies a 3U (30x10x10 cm) National Reconnaissance Office (NRO) specified Colony I Cubesat bus to address a research thrust of the Department of Homeland Security (DHS). The delivery of Aeneas is scheduled for December of 2011 and the flight is manifested for June of 2012. It contains two payloads. The primary payload speaks to a mission with global reach: tracking cargo containers over the open ocean with a 1-watt WiFi-like transceiver. A current tracking system for cargo containers, designed by our primary payload provider iControl Inc., is capable of identifying the container within a mile from shore, but loses all contact for the majority of the open-water journey. For both government and non-government entities, the ability to track containers in transit is highly valued. This mission uses a custom-built deployable mesh antenna, and stretches the attitude control and power generation capabilities of the Colony I bus to their limits. The secondary payload is an experimental, next-generation, radiation-hardened flight processor. The result of many government-funded research initiatives, this ITAR-controlled processor is at risk of staying in the “unholy valley” between research and development. On Aeneas, the processor will be space-qualified by performing self-diagnostic checks and reporting the results back to the ground. We hope that by raising the technology readiness level (TRL) we can provide a path to service for this high-performance chip. In this paper we will discuss the design work and fabrication supporting the primary payload: namely, three-axis pointing and the deployable antenna. We begin by describing the entire cubesat, focusing on those components that will serve a critical role in the success of the mission.
Novelty Detection with GAN
The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection.
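To make the training signal concrete, here is a minimal PyTorch sketch of the two losses the abstract refers to: a Feature Matching loss for the mixture generator, and a (K+1)-way discriminator objective that treats generated samples as the extra "novel" class. The `features()` hook on the discriminator and the (K+1)-way formulation are assumptions for illustration, not necessarily the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(discriminator, real_batch, fake_batch):
    """Feature Matching loss: match the mean intermediate features of real
    and generated samples. `discriminator.features` is an assumed hook that
    returns an intermediate activation, not a standard PyTorch method."""
    real_feat = discriminator.features(real_batch).mean(dim=0)
    fake_feat = discriminator.features(fake_batch).mean(dim=0)
    return F.mse_loss(fake_feat, real_feat)

def multiclass_disc_loss(logits_real, labels, logits_fake):
    """(K+1)-way discriminator: real samples keep their class label, while
    generated samples are pushed toward the extra 'novel' class index K."""
    k = logits_real.size(1) - 1                      # index of the novel class
    novel = torch.full((logits_fake.size(0),), k, dtype=torch.long,
                       device=logits_fake.device)
    return F.cross_entropy(logits_real, labels) + F.cross_entropy(logits_fake, novel)
```

At test time, the softmax mass assigned to the extra class (or, equivalently, low confidence over the K known classes) can serve as the novelty score.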
Non-dystrophic myotonia: prospective study of objective and patient reported outcomes.
Non-dystrophic myotonias are rare diseases caused by mutations in skeletal muscle chloride and sodium ion channels with considerable phenotypic overlap between diseases. Few prospective studies have evaluated the sensitivity of symptoms and signs of myotonia in a large cohort of patients. We performed a prospective observational study of 95 participants with definite or clinically suspected non-dystrophic myotonia recruited from six sites in the USA, UK and Canada between March 2006 and March 2009. We used the common infrastructure and data elements provided by the NIH-funded Rare Disease Clinical Research Network. Outcomes included a standardized symptom interview and physical exam; the Short Form-36 and the Individualized Neuromuscular Quality of Life instruments; electrophysiological short and prolonged exercise tests; manual muscle testing; and a modified get-up-and-go test. Thirty-two participants had chloride channel mutations, 34 had sodium channel mutations, nine had myotonic dystrophy type 2, one had myotonic dystrophy type 1, and 17 had no identified mutation. Phenotype comparisons were restricted to those with sodium channel mutations, chloride channel mutations, and myotonic dystrophy type 2. Muscle stiffness was the most prominent symptom overall, seen in 66.7% to 100% of participants. In comparison with chloride channel mutations, participants with sodium channel mutations had an earlier age of onset of stiffness (5 years versus 10 years), frequent eye closure myotonia (73.5% versus 25%), more impairment on the Individualized Neuromuscular Quality of Life summary score (20.0 versus 9.44), and paradoxical eye closure myotonia (50% versus 0%). Handgrip myotonia was seen in three-quarters of participants, with warm-up of myotonia in 75% of those with chloride channel mutations, but also in 35.3% of those with sodium channel mutations. The short exercise test showed ≥10% decrement in the compound muscle action potential amplitude in 59.3% of chloride channel participants compared with 27.6% of sodium channel participants, which increased post-cooling to 57.6% in sodium channel mutations. In evaluation of patients with clinical and electrical myotonia, despite considerable phenotypic overlap, the presence of eye closure myotonia, paradoxical myotonia, and an increase in short exercise test sensitivity post-cooling suggest sodium channel mutations. Outcomes designed to measure stiffness or the electrophysiological correlates of stiffness may prove useful for future clinical trials, regardless of underlying mutation, and include patient-reported stiffness, bedside manoeuvres to evaluate myotonia, muscle specific quality of life instruments and short exercise testing.
An axial flux permanent magnet synchronous generator for a gearless wind energy system
In low speed applications such as wind energy conversion systems, the use of direct driven generators, instead of geared machines, reduces the number of drive components, which offers the opportunity to reduce costs and increases system reliability and efficiency. The Axial Flux Permanent Magnet (AFPM) generator is particularly suited for such application, since it can be designed with a large pole number and a high torque density. This paper presents the design, construction and experimental validation of a double-sided AFPM synchronous generator prototype, with internal rotor and slotted stators. Design objectives embrace achieving a good compromise between performance characteristics and feasibility of construction, which results in a cost competitive machine.
Implementation of LSB Steganography and its Evaluation for Various File Formats
Steganography is derived from the Greek word steganos, which literally means “covered”, and graphy, meaning “writing”, i.e. covered writing. Steganography refers to the science of “invisible” communication. For hiding secret information in various file formats, there exists a large variety of steganographic techniques; some are more complex than others, and all of them have respective strong and weak points. The Least Significant Bit (LSB) embedding technique suggests that data can be hidden in the least significant bits of the cover image and that the human eye would be unable to notice the hidden image in the cover file. This technique can be used for hiding images in 24-bit, 8-bit and grayscale formats. This paper explains the LSB embedding technique and presents its evaluation for various file formats.
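As an illustration of the embedding idea described above, the following self-contained Python/NumPy sketch hides a byte string in the least significant bits of a cover image and recovers it; payload-length handling and the file-format specifics (24-bit, 8-bit, grayscale) are simplified assumptions.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the least significant bit of each byte of `cover`
    (e.g. a flattened 24-bit RGB image). Raises if the cover is too small."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    stego = flat.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # overwrite LSBs only
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover `n_bytes` hidden bytes from the LSBs of `stego`."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Example: hide a short message in a random stand-in 'image'
cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```

Because only the lowest bit of each channel value changes, the per-pixel distortion is at most 1 intensity level, which is why the alteration is imperceptible to the eye.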
Path and travel time inference from GPS probe vehicle data
We consider the problem of estimating real-time traffic conditions from sparse, noisy GPS probe vehicle data. We specifically address arterial roads, which are also known as the secondary road network (highways are considered the primary road network). We consider several estimation problems: historical traffic patterns, real-time traffic conditions, and forecasting future traffic conditions. We assume that the data available for these estimation problems is a small set of sparsely traced vehicle trajectories, which represents a small fraction of the total vehicle flow through the network. We present an expectation maximization algorithm that simultaneously learns the likely paths taken by probe vehicles as well as the travel time distributions through the network. A case study using data from San Francisco taxis is used to illustrate the performance of the algorithm.
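The alternation the abstract describes can be illustrated with a toy EM loop: the E-step weights each trip's candidate paths by how well their summed link times explain the observed travel time, and the M-step re-estimates per-link mean times from the weighted trips. The Gaussian travel-time model, shared variance, and proportional time splitting below are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def em_paths(trips, n_links, n_iter=20, sigma=5.0):
    """Toy EM for joint path/travel-time inference (illustrative only).

    trips: list of (candidate_paths, observed_time), where each candidate
           path is a list of link indices and observed_time is in seconds.
    Assumes per-link times are independent Gaussians with a shared sigma.
    """
    mu = np.full(n_links, 30.0)                      # initial mean link times
    for _ in range(n_iter):
        # E-step: posterior weight of each candidate path for each trip
        resp = []
        for paths, t_obs in trips:
            means = np.array([mu[p].sum() for p in paths])
            w = np.exp(-0.5 * ((t_obs - means) / sigma) ** 2)
            w = w / w.sum() if w.sum() > 0 else np.full(len(paths), 1.0 / len(paths))
            resp.append(w)
        # M-step: redistribute each trip's observed time over its links
        num, den = np.zeros(n_links), np.zeros(n_links)
        for (paths, t_obs), w in zip(trips, resp):
            for p, wk in zip(paths, w):
                share = mu[p] / mu[p].sum()          # split time proportionally
                num[p] += wk * t_obs * share
                den[p] += wk * share
        mu = np.where(den > 0, num / np.maximum(den, 1e-9), mu)
    return mu
```

In the real setting the candidate paths come from map matching of the sparse GPS points, and the travel-time model is richer, but the E/M structure is the same.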
Rest versus exercise hemodynamics for middle cerebral artery aneurysms: a computational study.
BACKGROUND AND PURPOSE Exercise is an accepted method of improving cardiovascular health; however, the impact of increases in blood flow and heart rate on cerebral aneurysms is unknown. This study was performed to simulate the changes in hemodynamic conditions within an intracranial aneurysm when a patient exercises. MATERIALS AND METHODS Rotational 3D digital subtraction angiograms were used to reconstruct patient-specific geometries of 3 aneurysms located at the bifurcation of the middle cerebral artery. CFD was used to solve for transient flow fields during simulated rest and exercise conditions. Inlet conditions were set by using published transcranial Doppler sonography data for the middle cerebral artery. Velocity fields were analyzed and postprocessed to provide physiologically relevant metrics. RESULTS Overall flow patterns were not significantly altered during exercise. Across subjects, during the exercise simulation, time-averaged WSS increased by a mean of 20% (range, 4%-34%), the RRT of a particle in the near-wall flow decreased by a mean of 28% (range, 13%-40%), and time-averaged pressure on the aneurysm wall did not change significantly. In 2 of the aneurysms, there was a 3-fold order-of-magnitude spatial difference in RRT between the aneurysm and surrounding vasculature. CONCLUSIONS WSS did not increase significantly during simulated moderate aerobic exercise. While the reduction in RRT during exercise was small in comparison with spatial differences, there may be potential benefits associated with decreased RRT (ie, improved replenishment of nutrients to cells within the aneurysmal tissue).
Image Data Sharing for Biomedical Research—Meeting HIPAA Requirements for De-identification
Data sharing is increasingly recognized as critical to cross-disciplinary research and to assuring scientific validity. Despite National Institutes of Health and National Science Foundation policies encouraging data sharing by grantees, little data sharing of clinical data has in fact occurred. A principal reason often given is the potential of inadvertent violation of the Health Insurance Portability and Accountability Act privacy regulations. While regulations specify the components of private health information that should be protected, there are no commonly accepted methods to de-identify clinical data objects such as images. This leads institutions to take conservative risk-averse positions on data sharing. In imaging trials, where images are coded according to the Digital Imaging and Communications in Medicine (DICOM) standard, the complexity of the data objects and the flexibility of the DICOM standard have made it especially difficult to meet privacy protection objectives. The recent release of DICOM Supplement 142 on image de-identification has removed much of this impediment. This article describes the development of an open-source software suite that implements DICOM Supplement 142 as part of the National Biomedical Imaging Archive (NBIA). It also describes the lessons learned by the authors as NBIA has acquired more than 20 image collections encompassing over 30 million images.
Focal Loss for Dense Object Detection
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.
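The reshaped cross entropy can be written compactly. Below is a short PyTorch sketch of the binary (per-anchor, per-class) focal loss with the commonly used defaults gamma = 2 and alpha = 0.25; the exact reduction and alpha weighting used in a given detector may differ.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits:  raw predictions, shape (N,)
    targets: 0/1 labels as floats, shape (N,)
    """
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```

The `(1 - p_t) ** gamma` factor is what down-weights well-classified (mostly easy background) examples, so the gradient is dominated by the sparse set of hard examples.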
Swimming with stiff legs at low Reynolds number.
Locomotion at low Reynolds number is not possible with cycles of reciprocal motion, an example being the oscillation of a single pair of rigid paddles or legs. Here, I demonstrate the possibility of swimming with two or more pairs of legs. They are assumed to oscillate collectively in a metachronal wave pattern in a minimal model based on slender-body theory for Stokes flow. The model predicts locomotion in the direction of the traveling wave, as commonly observed along the body of free-swimming crustaceans. The displacement of the body and the swimming efficiency depend on the number of legs, the amplitude, and the phase of oscillations. This study shows that paddling legs with distinct orientations and phases offers a simple mechanism for driving flow.
Rethinking ferrule – a new approach to an old dilemma
The 'ferrule effect' is a long standing, accepted concept in dentistry that is a foundation principle for the restoration of teeth that have suffered advanced structure loss. A review of the literature based on a search in PubMed was performed looking at the various components of the ferrule effect, with particular attention to some of the less explored dimensions that influence the effectiveness of the ferrule when restoring severely broken down teeth. These include the width of the ferrule, the effect of a partial ferrule, the influence of both the type of the restored tooth and the lateral loads present, as well as the well-established 2 mm ferrule height rule. The literature was collated and a classification based on risk assessment was derived from the available evidence. The system categorises teeth according to the effectiveness of ferrule effect that can be achieved based on the remaining amount of sound tooth structure. Furthermore, risk assessment for failure can be performed so that the practitioner and patient can better understand the prognosis of restoring a particular tooth. Clinical recommendations were extrapolated and presented as guidelines so as to improve the predictability and outcome of treatment when restoring structurally compromised teeth. The evidence relating to restoring the endodontically treated tooth with extensive destruction is deficient. This article aims to rethink ferrule by looking at other aspects of this accepted concept, and proposes a paradigm shift in the way it is thought of and utilised.
Axial coding and the grounded theory controversy.
The purpose of this article is to describe the similarities and differences between two approaches to grounded theory research: grounded theory as espoused by Glaser and grounded theory as espoused by Strauss and Corbin. The focus of the article is the controversy surrounding the use of axial coding. The author proposes a resolution to the controversy by suggesting that one does not need to view either approach as right or wrong; rather, the qualitative and grounded theory researcher can choose an approach, and that choice is based on the goal of the researcher's study. Examples of both approaches, from the author's research study on the experiences of living in a family with a child with attention deficit hyperactivity disorder (ADHD), are provided.
Theory of Deep Learning III: Generalization Properties of SGD
In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. H.M. is supported in part by ARO Grant W911NF-15-10385.
Predictive role of prenasal thickness and nasal bone for Down syndrome in the second trimester.
OBJECTIVE To assess the efficacy of prenasal thickness (PNT) and nasal bone (NB) for prediction of Down syndrome (DS) fetuses in the second trimester ultrasound examination. STUDY DESIGN PNT was measured from stored two-dimensional fetal profile images taken at 15-23 weeks in 242 fetuses with normal karyotype (Group 1) and 24 fetuses with DS (Group 2). It was measured as the shortest distance from the anterior edge of the lowest part of the frontal bone to the skin. The efficacy of PNT, NB, PNT/NB and biparietal diameter (BPD)/NB was evaluated for prediction of DS. RESULTS PNT values increased with gestational age in normal fetuses. PNT measurement was ≥95th percentile in 54.2% (13/24) of the DS cases and 2.9% of the normal cases. Receiver operator curve analysis showed that PNT/NB ratio had the best area under the curve with a detection rate of 80% for a false positive rate of 5% at a cut-off value of 0.76. CONCLUSIONS PNT is increased in fetuses with DS as compared to normal fetuses. PNT/NB≥0.76 in the second trimester is a better predictor of DS than the use of PNT or NB alone.
Vertical movements of North Sea cod
Various air-breathing marine vertebrates such as seals, turtles and seabirds show distinct patterns of diving behaviour. For fish, the distinction between different vertical behaviours is often less clear-cut, as there are no surface intervals to differentiate between dives. Using data from acoustic tags (n = 23) and archival depth recorders attached to cod Gadus morhua (n = 92) in the southern North Sea, we developed a quantitative method of classifying vertical movements in order to facilitate an objective comparison of the behaviour of different individuals. This method expands the utilisation of data from data storage tags, with the potential for a better understanding of fish behaviour and enhanced individual based behaviour for improved ecosystem modelling. We found that cod were closely associated with the seabed for 90% of the time, although they showed distinct seasonal and spatial patterns in behaviour. For example, cod tagged in the southern North Sea exhibited high rates of vertical movement in spring and autumn that were probably associated with migration, while the vertical movements of resident cod in other areas were much less extensive and were probably related to foraging or spawning behaviours. The full reasons underlying spatial and temporal behavioural plasticity by cod in the North Sea warrant further investigation.
Chemical Engineering Principles in a Freshman Engineering Course using a Cogeneration Facility
The primary goal of Rowan University's freshman engineering course is to immerse students in multidisciplinary projects that teach engineering principles using the theme of engineering measurements in both laboratory and real-world settings. Currently, many freshman programs focus either on a design project or on discipline-specific experiments that may not be cohesively integrated. At Rowan, freshman engineers are introduced to industrial problems through a series of four modules and interrelated, interactive lectures on problem solving, safety and ethics. In this paper, the process engineering module, which uses the vehicle of a cogeneration plant, is presented.
Making privacy personal: Profiling social network users to inform privacy education and nudging
Social Network Sites (SNSs) offer a plethora of privacy controls, but users rarely exploit all of these mechanisms, nor do they do so in the same manner. We demonstrate that SNS users instead adhere to one of a small set of distinct privacy management strategies that are partially related to their level of privacy feature awareness. Using advanced Factor Analysis methods on the self-reported privacy behaviors and feature awareness of 308 Facebook users, we extrapolate six distinct privacy management strategies (Privacy Maximizers, Selective Sharers, Privacy Balancers, Self-Censors, Time Savers/Consumers, and Privacy Minimalists) and six classes of privacy proficiency based on feature awareness, ranging from Novices to Experts. We then cluster users on these dimensions to form six distinct behavioral profiles of privacy management strategies and six awareness profiles for privacy proficiency. We further analyze these privacy profiles to suggest opportunities for training and education, interface redesign, and new approaches for personalized privacy recommendations.
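As a rough sketch of the workflow described above (reducing self-reported behaviors to latent dimensions, then clustering respondents into six profiles), the following scikit-learn snippet uses a hypothetical 308 x 20 survey matrix; the specific factor-analysis variant and survey items in the study are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# X: one row per respondent, one column per self-reported privacy behavior.
# A made-up 308 x 20 Likert-style matrix stands in for the real survey data.
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(308, 20)).astype(float)

# Reduce the behaviors to latent factors, then cluster respondents into
# six behavioral profiles on those factor scores.
factors = FactorAnalysis(n_components=6, random_state=0).fit_transform(X)
profiles = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(factors)
print(np.bincount(profiles))      # number of respondents per profile
```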
Self-Theories: The Mindset of a Champion
Introduction There are things that distinguish great athletes—champions—from others. Most of the sports world thinks it’s their talent. I will argue today that it’s their mindset. This idea is brought to life by the story of Billy Beane, told so well by Michael Lewis in the book Moneyball. When Beane was in high school, he was in fact a huge talent--what they call a “natural.” He was the highest scorer on the basketball team, the quarterback of the football team, and the top hitter on the very competitive baseball team. And he was all these things without a great deal of effort. People thought he was the new Babe Ruth. However, as soon as anything went wrong, Beane lost it. He didn’t know how to learn from his mistakes, and he didn’t know how to practice to improve. Why? Because naturals shouldn’t make mistakes or need practice. When Beane moved up to the baseball major leagues, things got progressively worse. Every at bat was a do-or-die situation and with every out he fell apart. Again, if you’re a natural, you shouldn’t have any flaws, so you can’t face your deficiencies and coach or practice them away. Beane’s lack of emphasis on learning and his inability to function in the face of setbacks—where does this come from? With avid practice and the right coaching he could have been one of the greats. Why didn’t he seek that? I will show how his behavior comes right out of his self-theory—out of a “fixed” mindset. Self-Theories In my work, I have identified two theories of ability that people may hold: an entity theory, in which people believe their abilities are fixed. You have a certain amount of intelligence or talent and that’s that. You can learn new things but you can’t change your ability. (I also call this the fixed mindset.) In contrast, others hold an incremental theory of ability. They believe that their abilities are things they can cultivate and develop throughout their lives. They believe that through effort and learning, they can become smarter or more talented. It’s not that they deny differences among people—that some people may know more, learn faster or even have more natural facility in an area. But what they focus on is the idea that everyone can get better over time. (I also call this the growth mindset.)
Foreign Direct Investment, Financial Development and Economic Growth: Empirical Evidence from North African Countries
The present paper examines the causal linkage between foreign direct investment (FDI), financial development, and economic growth in a panel of four North African countries (Tunisia, Morocco, Algeria and Egypt) over the period 1980-2011. The study moves away from the traditional cross-sectional analysis, and focuses on more direct evidence of the channels through which FDI inflows can promote economic growth of the host country. Using Generalized Method of Moments (GMM) panel data analysis, we find strong evidence of a positive relationship between FDI and economic growth. We also find evidence that the development of the domestic financial system is an important prerequisite for FDI to have a positive effect on economic growth. The policy implications of this study are clear: improvement efforts need to be driven by local-level reforms to ensure the development of the domestic financial system in order to maximize the benefits of the presence of FDI.
W2Go: a travel guidance system by automatic landmark ranking
In this paper, we present a travel guidance system W2Go (Where to Go), which can automatically recognize and rank the landmarks for travellers. In this system, a novel Automatic Landmark Ranking (ALR) method is proposed by utilizing the tag and geo-tag information of photos in Flickr and user knowledge from Yahoo Travel Guide. ALR selects the popular tourist attractions (landmarks) based not only on the subjective opinion of travel editors, as is currently done on sites like WikiTravel and Yahoo Travel Guide, but also on a ranking derived from popularity among tourists. Our approach utilizes geo-tag information to locate the positions of the tag-indicated places, and computes the probability of a tag being a landmark/site name. For potential landmarks, impact factors are calculated from the frequency of tags, user numbers in Flickr, and user knowledge in Yahoo Travel Guide. These tags are then ranked based on the impact factors. Several representative views for popular landmarks are generated from the crawled images with geo-tags to describe and present them in context of information derived from several relevant reference sources. The experimental comparisons to the other systems are conducted on eight famous cities around the world. User-based evaluation demonstrates the effectiveness of the proposed ALR method and the W2Go system.
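The abstract does not give the exact ALR formula, so the following is only an illustrative scoring sketch: a candidate landmark's impact factor is taken here as a weighted combination of tag frequency, distinct Flickr users, and a travel-guide mention, with made-up weights and data.

```python
from dataclasses import dataclass

@dataclass
class TagStats:
    name: str
    tag_freq: int        # how often the tag appears on geo-tagged photos
    n_users: int         # distinct Flickr users who used the tag
    in_guide: bool       # mentioned in a travel-guide source (e.g. Yahoo Travel)

def impact_factor(t: TagStats, w_freq=0.4, w_users=0.4, w_guide=0.2) -> float:
    """Illustrative impact score; the real ALR weighting is not specified in
    the abstract, so the weights and guide bonus here are assumptions."""
    return (w_freq * t.tag_freq + w_users * t.n_users
            + w_guide * (1000 if t.in_guide else 0))

candidates = [
    TagStats("eiffel tower", 5200, 3100, True),
    TagStats("my backyard", 40, 2, False),
]
ranked = sorted(candidates, key=impact_factor, reverse=True)
print([t.name for t in ranked])
```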
Efficient resource management for Cloud computing environments
The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.
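One concrete ingredient of such green enhancements is consolidating virtual machines so that idle hosts can be powered down. The first-fit-decreasing heuristic below is a generic illustration of power-aware placement, not the specific scheduler proposed in the paper.

```python
def power_aware_placement(vm_demands, host_capacity):
    """First-fit-decreasing bin packing: consolidate VM load onto as few
    hosts as possible so the remaining hosts can be switched off.
    Returns a list of hosts, each a list of the VM demands placed on it."""
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:                      # no existing host fits: power one on
            hosts.append([demand])
    return hosts

# Example: five VMs (normalized CPU demand) consolidated onto two hosts
placement = power_aware_placement([0.5, 0.2, 0.7, 0.1, 0.4], host_capacity=1.0)
print(len(placement), "hosts powered on:", placement)
```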
Science, Society, and Software Engineering Part 1: Scientists at work and play
"Nothing exists from whose nature some effect does not follow."-Baruch Spinoza Scientists have a different perspective than engineers. I didn't realize there was a difference until now, not having much interaction with them as a contractor to the Military-Industrial Complex. There, the ratio was maybe one scientist per 50 engineers. Now employed at an academic institution, scientists outnumber engineers two to one. The characteristic difference between scientists and engineers may be identified in the difference of the nature of their work. Scientists identify patterns and relationships by studying the world, to validate and create theories. Engineers work with scientists in constructing the tools used to acquire, search, manipulate and maintain data, which in turn allows the scientists to formulate new theories, for which engineers create new tools to acquire new data from which scientists can identify new patterns and relationships – a never-ending cycle. There is one constant in engineering, whether working for the Military-Industrial Complex or for a research institution-engineers are second-class citizens. " They " (the scientists, systems engineers, or managers) specify the work to be done, and " we " (the engineers of various specialties) design, build, test, integrate and maintain. There may be differences in how well second-class citizens are treated, however. From my albeit limited experience, engineers are better treated in academia than their counterparts in for-profits. At for-profits, employees seem to be constantly reminded by their bosses of Machiavelli's advice to princes, " it is better to be feared than loved. " In academia, the word " colleague " seems to have real meaning, and engineers good-naturedly refer to their scientists as " our scientists. " Outside of academia, do engineers use a tone other than sarcasm when talking of " our managers " ? So, I'm in a semi-weekly staff meeting to get the scientists to talk about what they are doing, else we, the staff, would never know. I volunteered to organize the meetings after a two-year hiatus where the meetings were not held at all because the scientists seemed to be always too busy. Coordination is simple: I schedule a room every other week; ask a secretary to prepare coffee; and I pick up donuts on the day of the meeting. After the meeting starts, I keep track of the time so that it doesn't go too far past an hour – everyone gets antsy when forced to sit too long. …
One carbon metabolism and bone homeostasis and remodeling: A review of experimental research and population studies.
Homocysteine (HCY) is a degradation product of the methionine pathway. The B vitamins, in particular vitamin B12 and folate, are the primary nutritional determinants of HCY levels, and therefore their deficiencies result in hyperhomocysteinaemia (HHCY). The prevalence of HHCY and of the related dietary deficiencies in B vitamins and folate increase with age and have been related to osteoporosis and abnormal development of epiphyseal cartilage and bone in rodents. Here we provide a review of experimental and population studies. The negative effects of HHCY and/or B vitamin and folate deficiencies on bone formation and remodeling are documented by cell models, including primary osteoblasts, osteoclasts and bone progenitor cells, as well as by animal and human studies. However, the underlying pathophysiological mechanisms are complex and remain poorly understood. Whether these associations are the direct consequences of impaired one carbon metabolism is not clarified, and more studies are still needed to translate these findings to human populations. To date, the evidence is limited and somewhat conflicting; however, further trials in groups most vulnerable to impaired one carbon metabolism are required.
Comparison of polymerase chain reaction and culture for detection of genital mycoplasma in clinical samples from patients with genital infections.
OBJECTIVE Comparison of polymerase chain reaction (PCR) and culture for detection of genital mycoplasma (Mycoplasma hominis, Mycoplasma genitalium, and Ureaplasma urealyticum) in clinical samples from patients with genital infections. METHODS Duplicate genital swabs were taken from 210 patients, referred to the gynecology clinic of Rasool Hospital, Tehran, Iran between December 2007 and June 2008. They were transported to the laboratory in a selective mycoplasma transport medium and in phosphate buffer solution. The specimens were inoculated into specific broth and solid medium for culture. Characteristic mycoplasma colonies were determined with Dienes' stain and examined microscopically. For PCR, samples were analyzed with genus specific primers. The primer sets, which were originally designed in our laboratory, amplified a 465 bp fragment (Mycoplasma genitalium), 559 bp fragment (Ureaplasma urealyticum), and 630 bp fragment (Mycoplasma hominis). Samples containing a band of the expected sizes for mycoplasma strains were subjected to digestion with a restriction endonuclease enzyme. RESULTS Of the 210 samples, mycoplasma strains were isolated from 83 patients (39.5%) (23 mycoplasma isolates, 11%; and 69 ureaplasma isolates, 32.9%) by using a selective mycoplasma isolation media. Using PCR, a total of 120 (57.1%) samples were found to be positive for mycoplasmas (28 mycoplasma spp., 13.3%; and 67 ureaplasma spp., 31.9%) and co-infections with both species were detected in 25 samples (11.9%). CONCLUSION The PCR was found to be highly sensitive when genus specific primers were used for diagnosis of genital mycoplasmas in comparison with culture.
Stratified analysis of clinical outcomes in thoracoscopic sympathicotomy for hyperhidrosis.
BACKGROUND The primary goal of this study is to identify clinical variables associated with successful surgical treatment for hyperhidrosis and facial blushing. METHODS Six hundred eight thoracoscopic sympathicotomies were performed in 304 patients. Retrospective stratified analysis of patients after thoracoscopic sympathicotomy for hyperhidrosis or facial blushing and having completed follow-up of at least 6 months (n = 232) was performed. Preoperative and postoperative quality-of-life indices (range, 0 to 3) were used to measure impact of surgery, and comparisons were indexed to preoperative symptoms. Postoperative compensatory sweating was analyzed with respect to the level(s) of sympathetic chain division. RESULTS Thoracoscopic sympathicotomy was performed at level T2 alone in 5% of patients; levels T2 to T3 in 63% of patients; levels T3 to T4 in 3% of patients; levels T2 to T4 in 14% of patients; and more than three levels in 14% of patients. In hyperhidrosis patients, mean preoperative quality-of-life index was 2.0 and postoperative quality-of-life index was 0.4 (p < 0.001). Facial blushers had preoperative and postoperative quality-of-life index of 2.6 and 1.0, respectively. Significant compensatory sweating was seen in 33% patients overall and occurred in 29% of patients with palmar symptoms, 26% of axillary patients, and 42% of facial blushers. Significant compensatory sweating in relation to the level(s) of sympathetic chain division occurred in T2 alone, 45%; T2 to T3, 30%; T3 to T4, 14%; T2 to T4, 38%; and more than three levels, 49%. CONCLUSIONS Significant improvement in quality of life can result from surgery for hyperhidrosis. However, the incidence of postoperative compensatory sweating may be dependent on the level of sympathicotomy performed. The choice of sympathicotomy level(s) should be directed toward reducing the incidence of significant compensatory sweating while simultaneously ensuring relief of primary preoperative symptoms.
Effects of valsartan compared with enalapril on blood pressure and cognitive function in elderly patients with essential hypertension
This prospective, randomised, open-label, blinded-endpoint study was designed to compare the effects of the angiotensin II (Ang II) AT1 receptor antagonist valsartan with those of the ACE inhibitor enalapril on blood pressure (BP) and cognitive functions in elderly hypertensive patients. One hundred and forty-four patients aged 61–80 years with mild to moderate essential hypertension (DBP ≥95 mmHg and ≤110 mmHg at the end of a 2-week placebo run-in period) were randomly assigned to once daily (o.d.) treatment with valsartan 160 mg (n=73) or enalapril 20 mg (n=71) for 16 weeks. The patients were examined every 4 weeks during the study, with pre-dose BP (standard mercury sphygmomanometer, Korotkoff I and V) and heart rate (pulse palpation) being recorded at each visit. Cognitive function was evaluated at the end of the wash-out period and after 16 weeks of active treatment by means of five tests (verbal fluency, the Boston naming test, word list memory, word list recall and word list recognition). Both valsartan and enalapril had a clear antihypertensive effect, but the former led to a slightly greater reduction in SBP/DBP at 16 weeks (18.6±4.6/13.7±4.0 mmHg vs 15.6±5.1/10.9±3.9 mmHg; P<0.01). Enalapril did not induce any significant changes in any of the cognitive function test scores; valsartan significantly increased the word list memory score (+11.8%; P<0.05 vs baseline and P<0.01 vs enalapril) and the word list recall score (+18.7%; P<0.05 vs baseline and P<0.01 vs enalapril), but not those of the other tests. These findings indicate that, in elderly hypertensive patients, 16 weeks of treatment with valsartan 160 mg o.d. is more effective than enalapril 20 mg o.d. in reducing BP, and (unlike enalapril) improves some of the components of cognitive function, particularly episodic memory.
Effect of an education programme for patients with osteoarthritis in primary care - a randomized controlled trial
BACKGROUND Osteoarthritis (OA) is a degenerative disease, considered to be one of the major public health problems. Research suggests that patient education is feasible and valuable for achieving improvements in quality of life, in function, well-being and improved coping. Since 1994, Primary Health Care in Malmö has used a patient education programme directed towards OA. The aim of this study was to evaluate the effects of this education programme for patients with OA in primary health care in terms of self-efficacy, function and self-perceived health. METHOD The study was a single-blind, randomized controlled trial (RCT) in which the EuroQol-5D and Arthritis self-efficacy scale were used to measure self-perceived health and self-efficacy and function was measured with Grip Ability Test for the upper extremity and five different functional tests for the lower extremity. RESULTS We found differences between the intervention group and the control group, comparing the results at baseline and after 6 months in EuroQol-5D (p < 0.001) and in standing one leg eyes closed (p = 0.02) in favour of the intervention group. No other differences between the groups were found. CONCLUSION This study has shown that patient education for patients with osteoarthritis is feasible in a primary health care setting and can improve self-perceived health as well as function in some degree, but not self-efficacy. Further research to investigate the effect of exercise performance on function, as well as self-efficacy is warranted. TRIAL REGISTRATION The trial is registered with ClinicalTrials.gov. REGISTRATION NUMBER NCT00979914.
Eye images increase generosity, but not for long: the limited effect of a false cue
Targeting insulin resistance in type 2 diabetes via immune modulation of cord blood-derived multipotent stem cells (CB-SCs) in stem cell educator therapy: phase I/II clinical trial
BACKGROUND The prevalence of type 2 diabetes (T2D) is increasing worldwide and creating a significant burden on health systems, highlighting the need for the development of innovative therapeutic approaches to overcome immune dysfunction, which is likely a key factor in the development of insulin resistance in T2D. It suggests that immune modulation may be a useful tool in treating the disease. METHODS In an open-label, phase 1/phase 2 study, patients (N=36) with long-standing T2D were divided into three groups (Group A, oral medications, n=18; Group B, oral medications+insulin injections, n=11; Group C having impaired β-cell function with oral medications+insulin injections, n=7). All patients received one treatment with the Stem Cell Educator therapy in which a patient's blood is circulated through a closed-loop system that separates mononuclear cells from the whole blood, briefly co-cultures them with adherent cord blood-derived multipotent stem cells (CB-SCs), and returns the educated autologous cells to the patient's circulation. RESULTS Clinical findings indicate that T2D patients achieve improved metabolic control and reduced inflammation markers after receiving Stem Cell Educator therapy. Median glycated hemoglobin (HbA1C) in Group A and B was significantly reduced from 8.61%±1.12 at baseline to 7.25%±0.58 at 12 weeks (P=2.62E-06), and 7.33%±1.02 at one year post-treatment (P=0.0002). Homeostasis model assessment (HOMA) of insulin resistance (HOMA-IR) demonstrated that insulin sensitivity was improved post-treatment. Notably, the islet beta-cell function in Group C subjects was markedly recovered, as demonstrated by the restoration of C-peptide levels. Mechanistic studies revealed that Stem Cell Educator therapy reverses immune dysfunctions through immune modulation on monocytes and balancing Th1/Th2/Th3 cytokine production. CONCLUSIONS Clinical data from the current phase 1/phase 2 study demonstrate that Stem Cell Educator therapy is a safe approach that produces lasting improvement in metabolic control for individuals with moderate or severe T2D who receive a single treatment. In addition, this approach does not appear to have the safety and ethical concerns associated with conventional stem cell-based approaches. TRIAL REGISTRATION ClinicalTrials.gov number, NCT01415726.
Efficient Similarity Search over Encrypted Data
At present, due to the attractive features of cloud computing, massive amounts of data are stored in the cloud. Although cloud-based services offer many benefits, the privacy and security of sensitive data remain a major concern. These concerns are addressed by storing sensitive data in encrypted form. Encrypted storage protects the data against unauthorized access, but it weakens some basic and important functionality such as search: searching the required data over an encrypted store would otherwise require the data to be decrypted first and then searched, which slows down the whole process. Many encryption schemes have been proposed to support search over encrypted data; however, most of them handle only exact query matching, not similarity matching. In our approach, features are extracted from each document when the user uploads a file. When the user issues a query, a trapdoor for that query is generated and the search is performed by finding the correlation between the documents stored in the cloud and the query keyword, using Locality Sensitive Hashing.
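To illustrate the Locality Sensitive Hashing step mentioned above, here is a minimal random-hyperplane (SimHash-style) index over document feature vectors: vectors with high cosine similarity tend to collide in the same bucket, which is what enables similarity rather than exact matching. The trapdoor generation and encryption layers are omitted, and the feature vectors are synthetic.

```python
import numpy as np

class RandomHyperplaneLSH:
    """SimHash-style LSH: documents whose feature vectors have high cosine
    similarity are likely to share the same bit signature (bucket key)."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))   # random hyperplanes
        self.buckets = {}

    def signature(self, vec):
        return tuple((self.planes @ vec > 0).astype(int))  # one bit per plane

    def index(self, doc_id, vec):
        self.buckets.setdefault(self.signature(vec), []).append(doc_id)

    def query(self, vec):
        return self.buckets.get(self.signature(vec), [])

lsh = RandomHyperplaneLSH(dim=128)
rng = np.random.default_rng(1)
docs = {i: rng.standard_normal(128) for i in range(100)}    # synthetic features
for i, v in docs.items():
    lsh.index(i, v)
# A slightly perturbed copy of document 3 usually lands in the same bucket.
print(lsh.query(docs[3] + 0.01 * rng.standard_normal(128)))
```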
Effectiveness of Triamcinolone Hexacetonide Intraarticular Injection in Interphalangeal Joints: A 12-week Randomized Controlled Trial in Patients with Hand Osteoarthritis.
OBJECTIVE To evaluate the effectiveness and tolerance of intraarticular injection (IAI) of triamcinolone hexacetonide (TH) for the treatment of osteoarthritis (OA) of hand interphalangeal (IP) joints. METHODS Sixty patients who underwent IAI at the most symptomatic IP joint were randomly assigned to receive TH/lidocaine (LD; n = 30) with TH 20 mg/ml and LD 2%, or just LD (n = 30). The injected joint was immobilized with a splint for 48 h in both groups. Patients were assessed at baseline and at 1, 4, 8, and 12 weeks by a blinded observer. The following variables were assessed: pain at rest [visual analog scale (VAS)r], pain at movement (VASm), swelling (physician VASs), goniometry, grip and pinch strength, hand function, treatment improvement, daily requirement of paracetamol, and local adverse effects. The proposed treatment (IAI with TH/LD) was successful if statistical improvement (p < 0.05) was achieved in at least 2 of 3 VAS. Repeated-measures ANOVA test was used to analyze intervention response. RESULTS Fifty-eight patients (96.67%) were women, and the mean age was 60.7 years (± 8.2). The TH/LD group showed greater improvement than the LD group for VASm (p = 0.014) and physician VASs (p = 0.022) from the first week until the end of the study. In other variables, there was no statistical difference between groups. No significant adverse effects were observed. CONCLUSION The IAI with TH/LD has been shown to be more effective than the IAI with LD for pain on movement and joint swelling in patients with OA of the IP joints. Regarding pain at rest, there was no difference between groups. TRIAL REGISTRATION NUMBER ClinicalTrials.gov (NCT02102620).
A Federal Higher Education iPad Mobile Learning Initiative: Triangulation of Data to Determine Early Effectiveness
This article presents faculty perceptions of the first month of iPad deployment in a national college system and a case study describing the integration of mobile learning devices in one college, interpreted within the framework of a SWOT analysis. We include a brief history of the implementation; a description of the three-tier structure of infrastructure, pedagogy, and content; faculty perceptions; and pedagogy interview findings. We collected data using 1) case study interviews, 2) a faculty dispositional survey, and 3) iPad lead faculty. Overall, the large-scale deployment of iPad mobile learning devices was associated with high faculty engagement in formal and informal professional development activities and adoption of an active student-centered pedagogy. In addition, the program stimulated innovative approaches to technical challenges, and it spurred development and evaluation of new digital content.
Unveiling patterns of international communities in a global city using mobile phone data
We analyse a large mobile phone activity dataset provided by Telecom Italia for the Telecom Big Data Challenge contest. The dataset reports the international country codes of every call/SMS made and received by mobile phone users in Milan, Italy, between November and December 2013, with a spatial resolution of about 200 meters. We first show that the observed spatial distribution of international codes well matches the distribution of international communities reported by official statistics, confirming the value of mobile phone data for demographic research. Next, we define an entropy function to measure the heterogeneity of the international phone activity in space and time. By comparing the entropy function to empirical data, we show that it can be used to identify the city’s hotspots, defined by the presence of points of interests. Eventually, we use the entropy function to characterize the spatial distribution of international communities in the city. Adopting a topological data analysis approach, we find that international mobile phone users exhibit some robust clustering patterns that correlate with basic socio-economic variables. Our results suggest that mobile phone records can be used in conjunction with topological data analysis tools to study the geography of migrant communities in a global city.
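The entropy measure can be stated explicitly: for a given spatial cell and time slot, take the Shannon entropy of the distribution of call/SMS volume over international country codes. The snippet below uses a hypothetical counts dictionary in place of the Telecom Italia records.

```python
import math

def activity_entropy(calls_by_country):
    """Shannon entropy (in bits) of phone activity over country codes for one
    spatial cell and time slot. High entropy means many nationalities are
    active; low entropy means a single country code dominates."""
    total = sum(calls_by_country.values())
    if total == 0:
        return 0.0
    probs = [c / total for c in calls_by_country.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical counts for one 200 m cell in one time slot
cell_activity = {"+39": 420, "+20": 35, "+63": 28, "+40": 12}
print(round(activity_entropy(cell_activity), 3))
```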
Effect of barcode-assisted medication administration on emergency department medication errors.
OBJECTIVES Barcode-assisted medication administration (BCMA) is technology with demonstrated benefit in reducing medication administration errors in hospitalized patients; however, it is not routinely used in emergency departments (EDs). EDs may benefit from BCMA, because ED medication administration is complex and error-prone. METHODS A naïve observational study was conducted at an academic medical center implementing BCMA in the ED. The rate of medication administration errors was measured before and after implementing an integrated electronic medical record (EMR) with BCMA capacity. Errors were classified as wrong drug, wrong dose, wrong route of administration, or a medication administration with no physician order. The error type, severity of error, and medications associated with errors were also quantified. RESULTS A total of 1,978 medication administrations were observed (996 pre-BCMA and 982 post-BCMA). The baseline medication administration error rate was 6.3%, with wrong dose errors representing 66.7% of observed errors. BCMA was associated with a reduction in the medication administration error rate to 1.2%, a relative rate reduction of 80.7% (p < 0.0001). Wrong dose errors decreased by 90.4% (p < 0.0001), and medication administrations with no physician order decreased by 72.4% (p = 0.057). Most errors discovered were of minor severity. Antihistamine medications were associated with the highest error rate. CONCLUSIONS Implementing BCMA in the ED was associated with significant reductions in the medication administration error rate and specifically wrong dose errors. The results of this study suggest a benefit of BCMA on reducing medication administration errors in the ED.
Efficacy of intra-articular cocktail analgesic injection in total knee arthroplasty - a randomized controlled trial.
In a randomized, double-blind, placebo, parallel and controlled study, 80 patients with osteoarthritis who underwent unilateral TKA were randomly assigned to two groups: Trial Group, where patients received intra-articular intraoperative injection containing morphine, bupivacaine and betamethasone, and Control Group, where patients received normal saline as control. All patients received patient-controlled analgesia (PCA) for 48 h postoperatively. We found that intra-articular cocktail analgesic injection significantly reduced the morphine consumption during the 0-36 h postoperative period and the total morphine consumption. VAS at rest in Trial Group at postoperative 6, 10, 24 and 36 h was significantly lower than that in Control Group, and VAS during activity in Trial Group at postoperative 24 h and 36 h was significantly lower than that in Control Group. The time of ability to perform an active straight leg raise and to actively reach 90 degrees knee flexion, as well as ROM of the knee at the 15th postoperative day, was better in Trial Group than those in Control Group. There were no significant differences in postoperative wound healing, infection, blood pressure, heart rate, rash, respiratory depression, urine retention and DVT between the two groups. The occurrence of nausea and vomiting in Trial Group was lower than that of Control Group. This study revealed that intra-articular cocktail analgesic injection reduced the need for morphine and offered a better pain control, without apparent risks following TKA.
Named entity recognition for tweets
Two main challenges of Named Entity Recognition (NER) for tweets are the insufficient information in a tweet and the lack of training data. We propose a novel method consisting of three core elements: (1) normalization of tweets; (2) combination of a K-Nearest Neighbors (KNN) classifier with a linear Conditional Random Fields (CRF) model; and (3) semisupervised learning framework. The tweet normalization preprocessing corrects common ill-formed words using a global linear model. The KNN-based classifier conducts prelabeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet. The semisupervised learning plus the gazetteers alleviate the lack of training data. Extensive experiments show the advantages of our method over the baselines as well as the effectiveness of normalization, KNN, and semisupervised learning.
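A small sketch of how the KNN pre-labeling can feed the CRF stage: a KNN classifier over word-level representations produces a coarse label, which is added as one more feature for each token alongside local lexical features. The embeddings and tags below are made up, and the CRF training step itself is omitted.

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical word-level training data: (embedding, entity tag).
# In the described method the KNN stage collects coarse evidence across
# tweets; the 2-D vectors here are placeholders for real representations.
train_vecs = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
train_tags = ["B-PER", "B-PER", "O", "O"]

knn = KNeighborsClassifier(n_neighbors=3).fit(train_vecs, train_tags)

def token_features(token, vec):
    """Features for one token of a (normalized) tweet; the KNN pre-label is
    included so a downstream CRF can combine this coarse, cross-tweet
    evidence with fine-grained sequential context."""
    return {
        "word.lower": token.lower(),
        "word.istitle": token.istitle(),
        "knn_prelabel": knn.predict([vec])[0],
    }

print(token_features("Obama", [0.85, 0.15]))
```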
Cache'n DASH: Efficient Caching for DASH
HTTP-based video streaming services have been dominating the global IP traffic over the last few years. Caching of video content reduces the load on the content servers. In the case of Dynamic Adaptive Streaming over HTTP (DASH), for every video the server needs to host multiple representations of the same video file. These individual representations are further broken down into smaller segments. Hence, for each video the server needs to host thousands of segments out of which, the client downloads a subset of the segments. Also, depending on the network conditions, the adaptation scheme used at the client-end might request a different set of video segments (varying in bitrate) for the same video. The caching of DASH videos presents unique challenges. In order to optimize the cache hits and minimize the misses for DASH video streaming services we propose an Adaptation Aware Cache (AAC) framework to determine the segments that are to be prefetched and retained in the cache. In the current scheme, we use bandwidth estimates at the cache server and the knowledge of the rate adaptation scheme used by the client to estimate the next segment requests, thus improving the prefetching at the cache.
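A toy version of the prefetch decision described above: the cache mirrors a throughput-based rate adaptation rule to predict which representation the client will request next, and fetches that segment ahead of time. The safety margin and single-segment lookahead are assumptions, not the exact AAC policy.

```python
def predict_next_bitrate(available_bitrates, est_throughput_kbps, safety=0.8):
    """Mimic a throughput-based client: pick the highest representation whose
    bitrate fits under a fraction of the estimated throughput."""
    feasible = [b for b in sorted(available_bitrates)
                if b <= safety * est_throughput_kbps]
    return feasible[-1] if feasible else min(available_bitrates)

def segments_to_prefetch(video_id, next_index, available_bitrates,
                         est_throughput_kbps):
    """Return the (video, segment index, bitrate) keys the cache should fetch
    before the client's next request arrives."""
    bitrate = predict_next_bitrate(available_bitrates, est_throughput_kbps)
    return [(video_id, next_index, bitrate)]

# Example: bandwidth estimate of 3000 kbps selects the 1200 kbps representation
print(segments_to_prefetch("bbb", 42, [500, 1200, 2500, 4800],
                           est_throughput_kbps=3000))
```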
Vancomycin dosing in morbidly obese patients
Objectives and methods: Vancomycin hydrochloride dosing requirements in morbidly obese patients with normal renal function were computed to determine the dose of vancomycin necessary to achieve target steady-state peak and trough concentrations and compared with a normal weight population. Results: Morbidly obese patients [total body weight (TBW) 165 kg, ideal body weight (IBW) 63 kg] required 31.2 mg · kg−1 · d−1 TBW or 81.9 mg · kg−1 · d−1 IBW to achieve the target concentrations. Normal weight patients (TBW 68.6 kg) required 27.8 mg · kg−1 · d−1 to achieve the same concentrations. Because of altered kinetic parameters in the morbidly obese patients (obese: t1/2=3.3 h, V=52 l, CL=197 ml · min−1; normal: t1/2=7.2 h, V=46 l, CL=77 ml · min−1), 20 of 24 patients required q8h dosing (1938 mg q8h) compared with q12h dosing (954 mg q12h) in all normal weight patients in order to avoid trough concentrations that were too low for prolonged periods. There was a good correlation between TBW and CL, but only fair correlation between TBW and V. Conclusion: Doses required to achieve desired vancomycin concentrations are similar in morbidly obese and normal weight patients when TBW is used as a dosing weight for the obese (approximately 30 mg · kg−1 · d−1). Shorter dosage intervals may be needed when dosing morbidly obese patients so that steady-state trough concentrations remain above 5 μg · ml−1 in this population. Because of the large amount of variation in required doses, vancomycin serum concentrations should be obtained in morbidly obese patients to ensure that adequate doses are being administered. Dosage requirements for morbidly obese patients with renal dysfunction require further study.
"Teaching as a Competency": competencies for medical educators.
Most medical faculty receive little or no training about how to be effective teachers, even when they assume major educational leadership roles. To identify the competencies required of an effective teacher in medical education, the authors developed a comprehensive conceptual model. After conducting a literature search, the authors met at a two-day conference (2006) with 16 medical and nonmedical educators from 10 different U.S. and Canadian organizations and developed an initial draft of the "Teaching as a Competency" conceptual model. Conference participants used the physician competencies (from the Accreditation Council for Graduate Medical Education [ACGME]) and the roles (from the Royal College's Canadian Medical Education Directives for Specialists [CanMEDS]) to define critical skills for medical educators. The authors then refined this initial framework through national/regional conference presentations (2007, 2008), an additional literature review, and expert input. Four core values grounded this framework: learner engagement, learner-centeredness, adaptability, and self-reflection. The authors identified six core competencies, based on the ACGME competencies framework: medical (or content) knowledge; learner-centeredness; interpersonal and communication skills; professionalism and role modeling; practice-based reflection; and systems-based practice. They also included four specialized competencies for educators with additional programmatic roles: program design/implementation, evaluation/scholarship, leadership, and mentorship. The authors then cross-referenced the competencies with educator roles, drawing from CanMEDS, to recognize role-specific skills. The authors have explored their framework's strengths, limitations, and applications, which include targeted faculty development, evaluation, and resource allocation. The Teaching as a Competency framework promotes a culture of effective teaching and learning.
Internet of things for smart agriculture: Technologies, practices and future direction
The advent of the Internet of Things (IoT) has shown a new direction of innovative research in the agricultural domain. Being at a nascent stage, IoT needs to be widely experimented with before it can be broadly applied in agricultural applications. In this paper, I review various potential IoT applications and the specific issues and challenges associated with IoT deployment for improved farming. To focus on the specific requirements, the devices and wireless communication technologies associated with IoT in agriculture and farming applications are analyzed comprehensively. Investigations are made of sensor-enabled IoT systems that provide intelligent and smart services for smart agriculture. Various case studies are presented to explore existing IoT-based solutions developed by various organizations and individuals, categorized according to their deployment parameters. Difficulties in these solutions are also highlighted, along with the factors for improvement and a future roadmap of work using the IoT.
Physician Perspectives on Medical Home Recognition for Practice Transformation for Children.
OBJECTIVE To examine child-serving physicians' perspectives on motivations for and support for practices in seeking patient-centered medical home (PCMH) recognition, changes in practice infrastructure, and care processes before and after recognition, and perceived benefits and challenges of functioning as a PCMH for the children they serve, especially children with special health care needs. METHODS Semistructured interviews with 20 pediatricians and family physicians at practices that achieved National Committee for Quality Assurance level 3 PCMH recognition before 2011. We coded notes and identified themes using an iterative process and pattern recognition analysis. RESULTS Physicians reported being motivated to seek PCMH recognition by a combination of altruistic and practical goals. Most said recognition acknowledged existing practice characteristics, but encouraged ongoing, and in some cases substantial, transformation. Although many physicians said recognition helped practices improve financial arrangements with payers and participate in quality initiatives, most physicians could not assess the specific benefits of recognition on patients' use of services or health outcomes. Challenges for practices in providing care for children included managing additional physician responsibilities, communicating with other providers and health systems, and building sustainable care coordination procedures. CONCLUSIONS PCMH recognition can be valuable to practices as a public acknowledgement to payers and patients that certain processes are in place, and can also catalyze new and continued transformation. Programs and policies seeking to transform primary care for children should leverage physicians' motivations and find mechanisms to build practices' capacity for care management systems and linkages with the medical neighborhood.
Smart parking pricing: A machine learning approach
Crowded streets are a major problem in large cities. A large part of the problem stems from drivers seeking on-street parking. Cities such as San Francisco, Los Angeles and Seattle have tackled this problem with smart parking systems that aim to maintain on-street parking occupancy rates around a target level, thus ensuring that empty spots are spread across the city rather than clustered in a single area. In this study, we use San Francisco's SFpark system as a case study. Specifically, in each parking area, SFpark uses occupancy rate data from the previous month to adjust the price in the current month. Instead, we propose a machine learning approach that predicts the occupancy rate of a parking area based on past occupancy rates and prices from an entire neighborhood (which covers many parking areas). We further formulate an optimization problem for the prices in each parking area that minimizes the root mean squared error (RMSE) between the predicted occupancy rates of all areas in the neighborhood and the target occupancy rates. This approach is novel in that 1) it responds to a predicted level of occupancy rate rather than past data and 2) it finds prices that optimize the total occupancy rate of all neighborhoods, taking into account that prices in one area can impact the demand in adjacent areas. We conduct a numerical study, using data collected from the SFpark study, that shows that the prices obtained from our optimization lead to occupancy rates that are very close to the desired target level.
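A compact sketch of the predict-then-optimize idea on synthetic data: a per-area regressor maps the whole neighborhood's prices (and lagged occupancy) to each area's occupancy, and the price vector is then chosen to minimize RMSE against a target occupancy. The linear model, synthetic demand curve and optimizer are assumptions, not the paper's exact formulation.

```python
# Sketch of predict-then-optimize smart parking pricing on synthetic data. Model form,
# demand curve and target value are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_areas, n_months = 4, 60
target = 0.8

# Synthetic history: occupancy falls with an area's own price and rises slightly
# with neighboring areas' prices (demand spillover between adjacent areas).
prices = rng.uniform(1.0, 6.0, size=(n_months, n_areas))
own = -0.08 * prices
spill = 0.02 * (prices.sum(axis=1, keepdims=True) - prices)
occ = np.clip(0.9 + own + spill + 0.02 * rng.standard_normal((n_months, n_areas)), 0, 1)

# One regressor per area, each seeing the whole neighborhood's prices and lagged occupancy.
X = np.hstack([prices[:-1], occ[:-1]])
models = [Ridge(alpha=1.0).fit(X, occ[1:, j]) for j in range(n_areas)]

def predicted_occupancy(p, last_occ):
    x = np.hstack([p, last_occ]).reshape(1, -1)
    return np.array([m.predict(x)[0] for m in models])

def rmse_to_target(p, last_occ):
    pred = predicted_occupancy(p, last_occ)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

last_occ = occ[-1]
res = minimize(rmse_to_target, x0=np.full(n_areas, 3.0), args=(last_occ,),
               bounds=[(0.5, 8.0)] * n_areas)
print("prices:", np.round(res.x, 2),
      "predicted occupancy:", np.round(predicted_occupancy(res.x, last_occ), 3))
```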
Towards Improving Data Quality
This paper is based on the notion of data quality, which includes correctness, completeness and minimality, and for which a notational framework is presented. In long-living databases the maintenance of data quality is a first-order issue. This paper shows that even well designed and implemented information systems cannot guarantee correct data under all circumstances. It is shown that in any such system data quality tends to decrease, and therefore some data correction procedure should be applied from time to time. One aspect of increasing data quality is the correction of data values. Characteristics of a software tool which supports this data value correction process are presented and discussed.
Transjugular Intrahepatic Portosystemic Shunt in the Treatment of Portal Hypertension Using Memotherm Stents: A Prospective Multicenter Study
Purpose: In a prospective multicenter study, efficacy and safety of transjugular intrahepatic portosystemic shunts (TIPS) were evaluated in the treatment of the complications of portal hypertension using a new self-expanding mesh-wire stent (Memotherm). Methods: One hundred and eighty-one patients suffering from variceal bleeding (either acute or recurrent) or refractory ascites were enrolled. Postinterventional follow-up lasted for 8.4 months on average. Differences were analyzed by the log-rank test (chi-square) or Wilcoxon test. Results: Shunt insertion was completed successfully in all patients (n = 181 patients, 100%). During follow-up, shunt occlusion was evident in 23 patients, and shunt stenosis was found in 33 patients (12.7% and 18.2%, respectively). Variceal rebleeding occurred in 20 of 139 patients (14.4%), with at least one episode of bleeding before TIPS treatment. The overall mortality rate of the patients treated by TIPS was 39.8%. In 51.4% of these cases (37 of 72 patients), however, the patients died within 30 days after TIPS placement. Analysis of subgroups showed that patients who underwent emergency TIPS for acute variceal bleeding had a significantly higher early mortality compared with other patient groups (p = 0.0014). Conclusion: In the present prospective multicenter study, we were able to show that insertion of Memotherm stents is an effective tool for TIPS. The occlusion rates seem to be comparable to those reported for the Palmaz stent. It could be shown that in particular, those patients who were treated for acute bleeding were at high risk of early mortality. Consequently, in such a critical condition, the indication for TIPS has to be set carefully.
[QT interval dispersion analysis in acute myocardial infarction patients: coronary reperfusion effect].
OBJECTIVE To study the effect of early reperfusion of the infarct-related artery on QT interval dispersion (ΔQT), as well as its value as a marker for coronary reperfusion and ventricular arrhythmias. METHODS One hundred and six patients with reperfusion (WR) and 48 without reperfusion (WtR) who had received thrombolytic therapy in the acute phase of infarction were studied. ECGs recorded on admission and on day 4 of the patients' course were analyzed. ΔQT, defined as the difference between maximum and minimum QT interval, was measured on the 12-lead ECG. RESULTS The reperfusion group showed a significant ΔQT reduction, from 89.66+/-20.47 ms down to 70.95+/-21.65 ms (p<0.001). In contrast, the group without reperfusion showed a significant ΔQT increase, from 81.27+/-20.52 ms up to 91.85+/-24.66 ms (p<0.001). Logistic regression analysis showed that the magnitude of reduction between pre- and post-thrombolysis ΔQT was the independent factor that most effectively identified coronary reperfusion (OR 1.045, p<0.0001; 95% CI). No significant difference was found in dispersion measures when patients with ventricular arrhythmias were compared with those without arrhythmias in the course of the first 48 hours. CONCLUSION The study shows that ΔQT is significantly reduced in patients with acute myocardial infarction submitted to successful thrombolysis, and is increased in infarcted patients with a closed artery. ΔQT reduction between the pre- and post-thrombolysis conditions was a predictor of coronary reperfusion in these patients, and showed no correlation with ventricular arrhythmias.
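A tiny helper showing the dispersion measure as defined above (maximum minus minimum QT across the 12 leads); the interval values below are invented for illustration.

```python
# The dispersion measure used in the abstract: difference between the longest and
# shortest QT interval across the 12 ECG leads (values below are illustrative only).

def qt_dispersion(qt_intervals_ms):
    """Delta-QT = max(QT) - min(QT) over the measurable leads, in milliseconds."""
    return max(qt_intervals_ms) - min(qt_intervals_ms)

pre_lysis  = [400, 395, 410, 385, 420, 405, 398, 412, 388, 402, 415, 392]
post_lysis = [398, 400, 404, 396, 408, 402, 399, 406, 394, 401, 405, 397]

print(qt_dispersion(pre_lysis))   # 35 ms
print(qt_dispersion(post_lysis))  # 14 ms, a reduction of the kind seen with reperfusion
```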
How can student experience enhance the development of a model of interprofessional clinical skills education in the practice placement setting?
The practice placement setting offers opportunities and challenges for engaging students in high-quality interprofessional learning. The Fife Interprofessional Clinical Skills Model for Education was established to develop structured interprofessional learning opportunities for students during their clinical attachments in NHS Fife. This short report describes the delivery and evaluation of the model, which was piloted with students from the nursing, medicine and allied health professions. Scheduled workshops were delivered within primary and secondary care locations. The learning activities involved exploring and comparing their professional identities, discussing roles and responsibilities within the healthcare team and practicing nontechnical clinical skills. Students who participated in the workshops reported that they developed a better understanding of each other's roles and responsibilities and also identified that this would be transferable knowledge to their future practice. Exploring the student experience has assisted in developing relevant and accessible interprofessional learning opportunities within the practice placement setting.
FluidRAN: Optimized vRAN/MEC Orchestration
Virtualized Radio Access Network (vRAN) architectures constitute a promising solution for the densification needs of 5G networks, as they decouple Base Station (BS) functions from Radio Units (RUs), allowing the processing power to be pooled at cost-efficient Central Units (CUs). vRAN facilitates flexible function relocation (split selection), and therefore enables splits with less stringent network requirements compared to state-of-the-art fully Centralized (C-RAN) systems. In this paper, we study the important and challenging vRAN design problem. We propose a novel modeling approach and a rigorous analytical framework, FluidRAN, that minimizes RAN costs by jointly selecting the splits and the RU-CU routing paths. We also consider the increasingly relevant scenario where the RAN needs to support multi-access edge computing (MEC) services, which naturally favor distributed RAN (D-RAN) architectures. Our framework provides a joint vRAN/MEC solution that minimizes operational costs while satisfying the MEC needs. We follow a data-driven evaluation method, using topologies of 3 operational networks. Our results reveal that (i) pure C-RAN is rarely a feasible upgrade solution for existing infrastructure, (ii) FluidRAN achieves significant cost savings compared to D-RAN systems, and (iii) MEC can substantially increase the operator's cost as it pushes vRAN function placement back to the RUs.
Data Security and Privacy in Cloud Computing
Data security has consistently been a major issue in information technology. In the cloud computing environment, it becomes particularly serious because the data may be located in many different places all over the globe. Data security and privacy protection are the two main factors of users' concerns about cloud technology. Though many techniques on these topics have been investigated in both academia and industry, data security and privacy protection are becoming more important for the future development of cloud computing technology in government, industry, and business. Data security and privacy protection issues are relevant to both hardware and software in the cloud architecture. This study reviews different security techniques and challenges, from both software and hardware aspects, for protecting data in the cloud, and aims at enhancing data security and privacy protection for a trustworthy cloud environment. In this paper, we make a comparative research analysis of the existing research work regarding the data security and privacy protection techniques used in cloud computing.
Threshold value for the perception of color changes of human gingiva.
The aim of this study was to assess the threshold value for the perception of color changes of human gingiva. Standardized presentations of five cases in the esthetic zone were made with the gingiva and teeth separated. The color parameters L, a, and b (CIELab) of the gingival layers were adjusted to induce darker and lighter colors. In the presentations, the right side (maxillary right anterior) was unchanged, while the left side (maxillary left anterior) of the pictures was modified. Ten dentists, 10 dental technicians, and 10 lay people evaluated the color difference of the pictures. The mean ΔE threshold values ranged between 1.6 ± 1.1 (dental technicians) and 3.4 ± 1.9 (lay people). The overall ΔE amounted to 3.1 ± 1.5.
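A small helper showing how such a threshold comparison is typically computed in CIELab; the abstract does not state which ΔE formula was used, so the classic Euclidean CIE76 form is assumed, and the example colors are invented.

```python
# CIELab color difference compared against the reported mean perception threshold.
# The Euclidean CIE76 form of Delta-E is assumed; the example colors are illustrative.
import math

def delta_e_cie76(lab1, lab2):
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    return math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)

baseline = (45.0, 28.0, 15.0)   # illustrative gingiva color in CIELab
modified = (43.2, 29.5, 16.1)

dE = delta_e_cie76(baseline, modified)
print(round(dE, 2),
      "above mean threshold (3.1)" if dE > 3.1 else "below mean threshold (3.1)")
```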
Effect of cinnamon on gastric emptying, arterial stiffness, postprandial lipemia, glycemia, and appetite responses to high-fat breakfast
BACKGROUND Cinnamon has been shown to delay gastric emptying (GE) of a high-carbohydrate meal and reduce postprandial glycemia in healthy adults. However, it is dietary fat that is implicated in the etiology of, and associated with, obesity, type 2 diabetes and cardiovascular disease. We aimed to determine the effect of 3 g cinnamon (Cinnamomum zeylanicum) on GE, postprandial lipemic and glycemic responses, oxidative stress, arterial stiffness, as well as appetite sensations and subsequent food intake following a high-fat meal. METHODS A single-blind randomized crossover study assessed nine healthy, young subjects. The GE rate of a high-fat meal supplemented with 3 g cinnamon or placebo was determined using the 13C octanoic acid breath test. Breath and blood samples and subjective appetite ratings were collected in the fasted state and during the 360 min postprandial period, followed by an ad libitum buffet meal. Relationships between gastric emptying and 1-day fatty acid intake were also examined. RESULTS Cinnamon did not change gastric emptying parameters, postprandial triacylglycerol or glucose concentrations, oxidative stress, arterial function or appetite (p > 0.05). Strong relationships were evident (p < 0.05) between GE half-time (Thalf) and 1-day palmitoleic acid (r = -0.78), eicosenoic acid (r = -0.84) and total omega-3 intake (r = -0.72). The ingestion of 3 g cinnamon had no effect on GE, arterial stiffness or oxidative stress following a high-fat meal. CONCLUSIONS 3 g cinnamon did not alter the postprandial response to a high-fat test meal. We find no evidence to support the use of 3 g cinnamon supplementation for the prevention or treatment of metabolic disease. Dietary fatty acid intake requires consideration in future gastrointestinal studies. TRIAL REGISTRATION NCT01350284 at http://www.clinicaltrial.gov.
UK Renal Registry 11th Annual Report (December 2008): Chapter 15 The UK Renal Registry, UKRR database, validation and methodology.
The UK Renal Registry receives encrypted data extracts quarterly from each centre providing Renal Replacement Therapy (RRT) in England, Wales and Northern Ireland. Summary data is received from the Scottish Renal Registry to allow national statistics to be compiled. Data from patients receiving haemodialysis in satellite units or at home are reported through the main renal centre. Data from patients with functioning kidney transplants are reported through the centre providing routine clinical follow-up. The data are extracted from a variety of IT systems with varying functionality and no common messaging system, necessitating extensive data validation and cleaning prior to analysis. Growing confidence in the analyses since the inception of the Registry in 1995 has allowed de-anonymised centre-specific analyses of all outcomes, including survival, to be published, although incomplete data returns for primary renal diagnosis and comorbidity at start of RRT limit ability to adjust for case-mix.
High-level small-step operational semantics for transactions
Software transactions have received significant attention as a way to simplify shared-memory concurrent programming, but insufficient focus has been given to the precise meaning of software transactions or their interaction with other language features. This work begins to rectify that situation by presenting a family of formal languages that model a wide variety of behaviors for software transactions. These languages abstract away implementation details of transactional memory, providing high-level definitions suitable for programming languages. We use small-step semantics in order to represent explicitly the interleaved execution of threads that is necessary to investigate pertinent issues. We demonstrate the value of our core approach to modeling transactions by investigating two issues in depth. First, we consider parallel nesting, in which parallelism and transactions can nest arbitrarily. Second, we present multiple models for weak isolation, in which nontransactional code can violate the isolation of a transaction. For both, type-and-effect systems let us soundly and statically restrict what computation can occur inside or outside a transaction. We prove some key language-equivalence theorems to confirm that under sufficient static restrictions, in particular that each mutable memory location is used outside transactions or inside transactions (but not both), no program can determine whether the language implementation uses weak isolation or strong isolation.
Laparoscopic Transabdominal Pre-Peritoneal (TAPP) procedure - step-by-step tips and tricks.
A minimally invasive approach to groin hernia treatment is still controversial, but over the last decade it has tended to become the standard procedure for day surgery. We present herein the technique of the laparoscopic transabdominal preperitoneal (TAPP) approach. The surgical technique is presented step-by-step; the different procedural key points (e.g. anatomic landmark recognition, diagnosis of "occult" hernias, preperitoneal and hernia sac dissection, mesh placement and peritoneal closure) are described and discussed in detail, with several tips and tricks noted and highlighted. CONCLUSIONS TAPP is a feasible method for treating groin hernia, associated with a low rate of postoperative morbidity and recurrence. The anatomic landmarks are easily recognizable. The laparoscopic exploration allows for the treatment of incarcerated strangulated hernias and the intraoperative diagnosis of occult hernias.
The Asymmetric Traveling Salesman Problem: Algorithms, Instance Generators, and Tests
The purpose of this paper is to provide a preliminary report on the first broad-based experimental comparison of modern heuristics for the asymmetric traveling salesman problem (ATSP). There are currently three general classes of such heuristics: classical tour construction heuristics such as Nearest Neighbor and the Greedy algorithm, local search algorithms based on re-arranging segments of the tour, as exemplified by the Kanellakis-Papadimitriou algorithm [KP80], and algorithms based on patching together the cycles in a minimum cycle cover, the best of which are variants on an algorithm proposed by Zhang [Zha93]. We test implementations of the main contenders from each class on a variety of instance types, introducing a variety of new random instance generators modeled on real-world applications of the ATSP. Among the many tentative conclusions we reach is that no single algorithm is dominant over all instance classes, although for each class the best tours are found either by Zhang's algorithm or an iterated variant on Kanellakis-Papadimitriou.
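A minimal sketch of the simplest heuristic named in the first class above, Nearest Neighbor tour construction for an asymmetric instance; the random cost matrix is an illustrative stand-in for the paper's instance generators.

```python
# Nearest Neighbor tour construction for the asymmetric TSP, the simplest of the
# classical construction heuristics compared in the paper. Costs are illustrative.
import random

def nearest_neighbor_atsp(cost, start=0):
    """Greedy tour: from the current city always move to the cheapest unvisited city.
    cost[i][j] need not equal cost[j][i] (asymmetric instance)."""
    n = len(cost)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: cost[current][j])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

def tour_length(cost, tour):
    return sum(cost[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

random.seed(1)
n = 8
cost = [[0 if i == j else random.randint(1, 100) for j in range(n)] for i in range(n)]
tour = nearest_neighbor_atsp(cost)
print(tour, tour_length(cost, tour))
```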
A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them
The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. One key implementation detail is the median filtering of intermediate flow fields during optimization. While this improves the robustness of classical methods it actually leads to higher energy solutions, meaning that these methods are not optimizing the original objective function. To understand the principles behind this phenomenon, we derive a new objective function that formalizes the median filtering heuristic. This objective function includes a non-local smoothness term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that can better preserve motion details. To take advantage of the trend towards video in wide-screen format, we further introduce an asymmetric pyramid downsampling scheme that enables the estimation of longer range horizontal motions. The methods are evaluated on the Middlebury, MPI Sintel, and KITTI datasets using the same parameter settings.
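A small sketch of the "key implementation detail" singled out above: median filtering the intermediate flow field between optimization iterations. The 5x5 window, synthetic flow and outlier pattern are assumptions for illustration; the paper's exact schedule and filter size may differ.

```python
# Median filtering applied to an intermediate flow estimate: each of the two flow
# components is spatially median filtered, suppressing outlier vectors. Window size,
# synthetic flow and noise model below are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
h, w = 64, 64

# Synthetic flow: smooth translation plus a few spurious outlier vectors.
u = np.full((h, w), 2.0) + 0.05 * rng.standard_normal((h, w))
v = np.full((h, w), -1.0) + 0.05 * rng.standard_normal((h, w))
outliers = rng.random((h, w)) < 0.02
u[outliers] += 15.0
v[outliers] -= 15.0

# Denoise the intermediate estimate between warping/optimization iterations.
u_f = median_filter(u, size=5)
v_f = median_filter(v, size=5)

print("max |u| before:", float(np.abs(u).max()), "after:", float(np.abs(u_f).max()))
```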
NODE FEATURES ADJUSTED STOCHASTIC BLOCK MODEL
Stochastic block model (SBM) and its variants are popular models used in community detection for network data. In this paper, we propose a feature-adjusted stochastic block model (FASBM) to capture the impact of node features on the network links as well as to detect the residual community structure beyond that explained by the node features. The proposed model can accommodate multiple node features and estimate the form of feature impacts from the data. Moreover, unlike many existing algorithms that are limited to binary-valued interactions, the proposed FASBM model and inference approaches are easily applied to relational data generated from any exponential family distribution. We illustrate the methods on simulated networks and on two real-world networks: a brain network and a US air-transportation network.
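One plausible way to write down such a feature-adjusted block model, shown for binary edges for concreteness; the notation, logit link and additive form are assumptions consistent with the description above, and the paper's exact parameterization (which covers general exponential-family responses) may differ.

```latex
% A plausible feature-adjusted block model for a binary adjacency matrix A,
% with node i in community c_i and pair features x_{ij}; notation assumed.
\[
  \operatorname{logit}\, \Pr(A_{ij} = 1)
    \;=\; \theta_{c_i c_j} \;+\; \beta^{\top} x_{ij},
\]
% where \theta_{c_i c_j} is the block (community-pair) effect capturing the residual
% community structure, and \beta measures the impact of the node/pair features,
% estimated from the data.
```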
The Character Strengths Rating Form (CSRF): Development and initial assessment of a 24-item rating scale to assess character strengths
Character strengths are morally positively valued traits that are related to several positive life outcomes. In this study, the Character Strengths Rating Form (CSRF), a 24-item rating form of character strengths based on the classification proposed by Peterson and Seligman (2004), was developed using the data of 211 German-speaking adults. The CSRF yielded good convergence with Peterson and Seligman's Values in Action Inventory of Strengths (VIA-IS) in terms of descriptive statistics, relationships with socio-demographic variables, and associations with life satisfaction; the means correlated .91, and standard deviations correlated .80. Correlations between corresponding strengths in the CSRF and the VIA-IS were between .41 and .77. Rank-order correlations of the correlations of both measures with age, education, and life satisfaction were .74, .76, and .84, respectively. Factor structure congruence coefficients ranged between .92 and .99. The rank-order correlation of the associations of the 5 factors with life satisfaction was .90. The CSRF proved to be a valid instrument for the assessment of character strengths. Its use is recommended for a brief measurement of character strengths when economy of instruments is at a premium (e.g., in large-scale longitudinal studies).
A Secure Biometrics-Based Multi-Server Authentication Protocol Using Smart Cards
Recently, in 2014, He and Wang proposed a robust and efficient multi-server authentication scheme using biometrics-based smart card and elliptic curve cryptography (ECC). In this paper, we first analyze He-Wang's scheme and show that their scheme is vulnerable to a known session-specific temporary information attack and impersonation attack. In addition, we show that their scheme does not provide strong user's anonymity. Furthermore, He-Wang's scheme cannot provide the user revocation facility when the smart card is lost/stolen or user's authentication parameter is revealed. Apart from these, He-Wang's scheme has some design flaws, such as wrong password login and its consequences, and wrong password update during password change phase. We then propose a new secure multi-server authentication protocol using biometric-based smart card and ECC with more security functionalities. Using the Burrows-Abadi-Needham logic, we show that our scheme provides secure authentication. In addition, we simulate our scheme for the formal security verification using the widely accepted and used automated validation of Internet security protocols and applications tool, and show that our scheme is secure against passive and active attacks. Our scheme provides high security along with low communication cost, computational cost, and variety of security features. As a result, our scheme is very suitable for battery-limited mobile devices as compared with He-Wang's scheme.
Reconstruction and Simulation of Neocortical Microcircuitry
We present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.
My Computer Is an Honor Student - but How Intelligent Is It? Standardized Tests as a Measure of AI
Turing famously sidestepped the question "Can machines think?" by replacing it with another, namely: can a machine pass the imitation game (the Turing test)? In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI.
K-means clustering for efficient and robust registration of multi-view point sets
Efficiency and robustness are important performance concerns for the registration of multi-view point sets. To address these two issues, this paper casts multi-view registration as a clustering problem, which can be solved by an extended K-means clustering algorithm. Before the clustering, all the centroids are uniformly sampled from the initially aligned point sets involved in the multi-view registration. Then, the two standard K-means steps are utilized to assign all points to one special cluster and to update each cluster centroid. Subsequently, the shape comprised by all cluster centroids can be used to sequentially estimate the rigid transformation for each point set. These two standard K-means steps and the transformation estimation step constitute the extended K-means clustering algorithm, which achieves the clustering as well as the multi-view registration through iterations. To show its superiority, the proposed approach has been tested on public data sets and compared with state-of-the-art algorithms. Experimental results illustrate its good efficiency and robustness for the registration of multi-view point sets.
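A small 2D sketch of the loop described above, assuming synthetic data and a coarse initial alignment: the two standard K-means steps (assign, update centroids) followed by a per-view rigid-transform update via the Kabsch/SVD solution. Data sizes, noise levels and iteration counts are assumptions; the paper's algorithmic details may differ.

```python
# Sketch of the extended K-means idea: assign points to centroids, update centroids,
# then re-fit each view's rigid transform to its assigned centroids (Kabsch/SVD step).
# Synthetic 2D data with a small misalignment; convergence assumes a coarse pre-alignment.
import numpy as np

def kabsch(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i|| over all points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1, d]) @ U.T
    return R, mu_d - R @ mu_s

rng = np.random.default_rng(0)
model = rng.uniform(-1, 1, size=(200, 2))                # latent shape
views = []
for ang in (0.0, 0.15):                                  # two views, second slightly rotated
    R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    views.append(model @ R.T + 0.01 * rng.standard_normal(model.shape))

transforms = [(np.eye(2), np.zeros(2)) for _ in views]   # coarse initial alignment
centroids = np.vstack(views)[rng.choice(400, 50, replace=False)]

for _ in range(10):
    aligned = [v @ R.T + t for v, (R, t) in zip(views, transforms)]
    allpts = np.vstack(aligned)
    # standard K-means step 1: assign every point to its nearest centroid
    assign = np.argmin(((allpts[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    # standard K-means step 2: update the centroids
    for k in range(len(centroids)):
        if np.any(assign == k):
            centroids[k] = allpts[assign == k].mean(axis=0)
    # registration step: re-fit each view's rigid transform to its assigned centroids
    offset = 0
    for i, v in enumerate(views):
        tgt = centroids[assign[offset:offset + len(v)]]
        transforms[i] = kabsch(v, tgt)
        offset += len(v)

# Relative rotation between the two estimated transforms; it should roughly undo
# the 0.15 rad offset of view 2 relative to view 1.
print(np.round(transforms[0][0].T @ transforms[1][0], 3))
```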
Transformation-mediated ductility in CuZr-based bulk metallic glasses.
Bulk metallic glasses (BMGs) generally fail in a brittle manner under uniaxial, quasistatic loading at room temperature. The lack of plastic strain is a consequence of shear softening, a phenomenon that originates from shear-induced dilation that causes plastic strain to be highly localized in shear bands. So far, significant tensile ductility has been reported only for microscopic samples of around 100 nm (ref. 4) as well as for high strain rates, and so far no mechanisms are known, which could lead to work hardening and ductility in quasistatic tension in macroscopic BMG samples. In the present work we developed CuZr-based BMGs, which polymorphically precipitate nanocrystals during tensile deformation and subsequently these nanocrystals undergo twinning. The formation of such structural heterogeneities hampers shear band generation and results in macroscopically detectable plastic strain and work hardening. The precipitation of nanocrystals and their subsequent twinning can be understood in terms of a deformation-induced softening of the instantaneous shear modulus. This unique deformation mechanism is believed to be not just limited to CuZr-based BMGs but also to promote ductility in other BMGs.
Edge Computing Resource Management and Pricing for Mobile Blockchain
The mining process in blockchain requires solving a proof-of-work puzzle, which is resource expensive to implement in mobile devices due to the high computing power and energy needed. In this paper, we, for the first time, consider edge computing as an enabler for mobile blockchain. In particular, we study edge computing resource management and pricing to support mobile blockchain applications in which the mining process of miners can be offloaded to an edge computing service provider. We formulate a two-stage Stackelberg game to jointly maximize the profit of the edge computing service provider and the individual utilities of the miners. In the first stage, the service provider sets the price of edge computing nodes. In the second stage, the miners decide on the service demand to purchase based on the observed prices. We apply the backward induction to analyze the sub-game perfect equilibrium in each stage for both uniform and discriminatory pricing schemes. For the uniform pricing where the same price is applied to all miners, the existence and uniqueness of Stackelberg equilibrium are validated by identifying the best response strategies of the miners. For the discriminatory pricing where the different prices are applied to different miners, the Stackelberg equilibrium is proved to exist and be unique by capitalizing on the Variational Inequality theory. Further, the real experimental results are employed to justify our proposed model.
Characterization of near-road pollutant gradients using path-integrated optical remote sensing.
Understanding motor vehicle emissions, near-roadway pollutant dispersion, and their potential impact to near-roadway populations is an area of growing environmental interest. As part of ongoing U.S. Environmental Protection Agency research in this area, a field study was conducted near Interstate 440 (I-440) in Raleigh, NC, in July and August of 2006. This paper presents a subset of measurements from the study focusing on nitric oxide (NO) concentrations near the roadway. Measurements of NO in this study were facilitated by the use of a novel path-integrated optical remote sensing technique called deep ultraviolet differential optical absorption spectroscopy (DUV-DOAS). This paper reviews the development and application of this measurement system. Time-resolved near-road NO concentrations are analyzed in conjunction with wind and traffic data to provide a picture of emissions and near-road dispersion for the study. Results show peak NO concentrations in the 150 ppb range during weekday morning rush hours with winds from the road accompanied by significantly lower afternoon and weekend concentrations. Traffic volume and wind direction are shown to be primary determinants of NO concentrations with turbulent diffusion and meandering accounting for significant near-road concentrations in off-wind conditions. The enhanced source capture performance of the open-path configuration allowed for robust comparisons of measured concentrations with a composite variable of traffic intensity coupled with wind transport (R2 = 0.84) as well as investigations on the influence of wind direction on NO dilution near the roadway. The benefits of path-integrated measurements for assessing line source impacts and evaluating models is presented. The advantages of NO as a tracer compound, compared with nitrogen dioxide, for investigations of mobile source emissions and initial dispersion under crosswind conditions are also discussed.
Methods of sperm vitality assessment.
Sperm vitality is a reflection of the proportion of live, membrane-intact spermatozoa determined by either dye exclusion or osmoregulatory capacity under hypo-osmotic conditions. In this chapter we address the two most common methods of sperm vitality assessment: eosin-nigrosin staining and the hypo-osmotic swelling test, both utilized in clinical Andrology laboratories.
Acute changes in endothelin after hemodialysis in children
The purpose of this study was to investigate the acute changes in endothelin (ET) levels immediately after hemodialysis and to determine whether these changes vary with the use of different membranes and hemodialysis solutions. Ten children were included in the study. Three different hemodialysis sessions were performed on all patients: session 1, acetate-based dialysate and polycarbonate membrane; session 2, bicarbonate-based dialysate and polycarbonate membrane; session 3, acetate-based dialysate and polysulfone membrane. In all cases blood samples were obtained before and after dialysis. Pre- and post-hemodialysis ET levels of the patients with acetate-based dialysate and polycarbonate membrane were 33.68±11.51 pg/ml and 28.27±12.85 pg/ml, respectively. The fall in ET levels after this session was statistically significant (P = 0.015). We did not observe a statistically significant change in ET levels in the other sessions. Post-dialysis mean arterial pressure values were significantly lower than the pre-dialysis values in all three dialysis sessions (P <0.01). A positive correlation was observed between plasma ET levels and blood urea nitrogen and serum potassium; a negative correlation was observed between plasma ET levels and hematocrit.
Critical evidence: a test of the critical-period hypothesis for second-language acquisition.
The critical-period hypothesis for second-language acquisition was tested on data from the 1990 U.S. Census using responses from 2.3 million immigrants with Spanish or Chinese language backgrounds. The analyses tested a key prediction of the hypothesis, namely, that the line regressing second-language attainment on age of immigration would be markedly different on either side of the critical-age point. Predictions tested were that there would be a difference in slope, a difference in the mean while controlling for slope, or both. The results showed large linear effects for level of education and for age of immigration, but a negligible amount of additional variance was accounted for when the parameters for difference in slope and difference in means were estimated. Thus, the pattern of decline in second-language acquisition failed to produce the discontinuity that is an essential hallmark of a critical period.
DEEP RESIDUAL LEARNING FOR TOMATO PLANT LEAF DISEASE IDENTIFICATION
Deep learning for plant leaf analysis has been studied recently in various works. In most cases, transfer learning has been utilized, where the weights of networks stored in pre-trained models are fine-tuned for use in the considered task. In this paper, Convolutional Neural Networks (CNNs) are employed to classify tomato plant leaf images based on the visible effects of diseases. In addition to transfer learning as an effective approach, training a CNN from scratch using the deep residual learning method is also experimented with. To do that, a CNN architecture is proposed and applied to a subset of the PlantVillage dataset containing tomato plant leaf images. The results indicate that the suggested architecture outperforms VGG models pre-trained on the ImageNet dataset, in both accuracy and the time required for re-training, and it can be used on a regular PC without any extra hardware. A common feature visualization and verification technique is also applied to the results, and further discussion is provided to show the importance of background pixels surrounding the leaves.
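A minimal residual block of the kind used when training such a CNN from scratch, assuming PyTorch; channel counts, stride and input size are illustrative and do not reproduce the paper's exact architecture.

```python
# A minimal residual block for training a CNN from scratch with deep residual learning.
# Channel counts and layer arrangement are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection on the shortcut when the shape changes
        self.shortcut = (
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                          nn.BatchNorm2d(out_ch))
            if stride != 1 or in_ch != out_ch else nn.Identity()
        )

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))  # skip connection

# e.g. a 224x224 RGB leaf image passing through one downsampling block
block = ResidualBlock(3, 32, stride=2)
print(block(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 32, 112, 112])
```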
Perceptual objects capture attention
A recent study has demonstrated that the mere organization of some elements in the visual field into an object attracts attention automatically [Kimchi, R., Yeshurun, Y., & Cohen-Savransky, A. (2007). Automatic, stimulus-driven attentional capture by objecthood. Psychonomic Bulletin & Review, 14(1), 166-172]. We tested whether similar results will emerge when the target is not a part of the object and with simplified task demands. A matrix of 16 black L elements in various orientations preceded the presentation of a Vernier target. The target was either added to the matrix (Experiment 1), or appeared after its offset (Experiment 2). On some trials four elements formed a square-like object, and on some of these trials the target appeared in the center of the object. No featural uniqueness or abrupt onset was associated with the object and it did not predict the target location or the direction of the target's horizontal offset. Performance was better when the target appeared in the center of the object than in a different location than the object, even when the target appeared after the matrix offset. These findings support the hypothesis that a perceptual object captures attention (Kimchi et al., 2007), and demonstrate that this automatic deployment of attention to the object is robust and involves a spatial component.
A simulation study of artificial neural networks for nonlinear time-series forecasting
This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The effects of three main factors (input nodes, hidden nodes and sample size) are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series, while traditional linear methods are not as competent at this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, a large sample is helpful in easing the overfitting problem.
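A compact sketch of the kind of experiment described above: a lagged-window neural network forecaster fitted to a synthetic nonlinear series, where the number of lags ("input nodes") and the hidden layer size ("hidden nodes") are the knobs being varied. The series, lag count and hidden size are illustrative assumptions.

```python
# Lagged-window neural network forecaster on a synthetic nonlinear series.
# The lag count ("input nodes") and hidden size ("hidden nodes") mirror the study's factors;
# all concrete values below are illustrative choices.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 600
y = np.zeros(T)
for t in range(2, T):                       # a simple nonlinear autoregression
    y[t] = 0.7 * y[t-1] - 0.4 * y[t-2] + 0.3 * np.sin(y[t-1]) + 0.1 * rng.standard_normal()

n_lags = 4                                   # "input nodes"
X = np.column_stack([y[i:T - n_lags + i] for i in range(n_lags)])
target = y[n_lags:]

split = 500
model = MLPRegressor(hidden_layer_sizes=(8,),   # "hidden nodes"
                     max_iter=3000, random_state=0)
model.fit(X[:split], target[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - target[split:]) ** 2))
print("one-step-ahead test RMSE:", round(float(rmse), 3))
```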
The influence of reading speed and line length on the effectiveness of reading from screen
With such a large volume of material accessible from the World Wide Web, there is an urgent need to increase our knowledge of factors influencing reading from screen. We investigate the effects of two reading speeds (normal and fast) and different line lengths on comprehension, reading rate and scrolling patterns. Scrolling patterns are defined as the way in which readers proceed through the text, pausing and scrolling. Comprehension and reading rate are also examined in relation to scrolling patterns to attempt to identify some characteristics of effective readers. We found a reduction in overall comprehension when reading fast, but the type of information recalled was not dependent on speed. A medium line length (55 characters per line) appears to support effective reading at normal and fast speeds. This produced the highest level of comprehension and was also read faster than short lines. Scrolling patterns associated with better comprehension (more time in pauses and more individual scrolling movements) contrast with scrolling patterns used by faster readers (less time in pauses between scrolling). Consequently, effective readers can only be defined in relation to the aims of the reading task, which may favour either speed or accuracy.
Research, Participation, and the Teaching of Politics
The director of the Political Studies Program at the University of North Carolina discusses the program's outstanding features. Both graduate and undergraduate courses in American government and politics have accompanying laboratory sessions that permit student observation and participation in politics in a research-oriented situation. Begun in 1957, the program has already had excellent results. The following article discusses another aspect of social science research at North Carolina.
Privacy-Preserving Public Auditing For Secure Cloud Storage
By using cloud storage, users can access applications, services and software whenever they require over the Internet. Users can put their data remotely into cloud storage and benefit from on-demand services and applications drawn from shared resources. The cloud must ensure the integrity and security of users' data, yet concerns about the integrity and privacy of that data can arise. To address this issue, we present a public auditing process for cloud storage in which users can make use of a third-party auditor (TPA) to check the integrity of their data. Beyond verification of data integrity, the proposed system also supports data dynamics; prior work in this line lacks data dynamics and true public auditability. The auditing task monitors data modifications, insertions and deletions. The proposed system is capable of supporting public auditability and data dynamics, and multiple TPAs are used for the auditing process. We also extend our concept to ring signatures, in which the HARS scheme is used. A Merkle hash tree is used to improve block-level authentication. Further, we extend our result to enable the TPA to perform audits for multiple users simultaneously through batch auditing.
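A generic sketch of the block-level authentication role a Merkle hash tree plays in such schemes: the verifier keeps only the root, and a short sibling-hash path authenticates any single file block. This is the textbook construction, assumed here for illustration; it is not the paper's full public-auditing protocol.

```python
# Generic Merkle hash tree over file blocks: an auditor holding only the root can verify
# any block from a short proof. Illustrates block-level authentication, not the full scheme.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    """Return all tree levels, leaves first."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Sibling hashes from leaf to root for the block at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        path.append((level[sib], index % 2 == 0))  # (sibling, sibling-is-on-the-right?)
        index //= 2
    return path

def verify(block, path, root):
    node = h(block)
    for sibling, sibling_is_right in path:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

blocks = [f"file-block-{i}".encode() for i in range(5)]
levels = build_levels(blocks)
root = levels[-1][0]
print(verify(blocks[3], proof(levels, 3), root))     # True
print(verify(b"tampered", proof(levels, 3), root))   # False
```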
A process of knowledge discovery from web log data: Systematization and critical review
This paper presents a comprehensive survey of web log/usage mining based on over 100 research papers. This is the first survey dedicated exclusively to web log/usage mining. The paper identifies several web log mining sub-topics including specific ones such as data cleaning, user and session identification. Each sub-topic is explained, weaknesses and strong points are discussed and possible solutions are presented. The paper describes examples of web log mining and lists some major web log mining software packages.
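A small sketch of one of the sub-topics named above, user and session identification, using the common inactivity-timeout heuristic; the 30-minute threshold and the toy log entries are illustrative conventions, not values prescribed by the survey.

```python
# Timeout-based session identification, one of the web-log-mining sub-topics surveyed.
# The 30-minute inactivity threshold is a common convention used here for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)

log = [  # (user/IP, timestamp, requested URL): toy, already-cleaned log entries
    ("10.0.0.1", "2024-01-05 09:00:10", "/index.html"),
    ("10.0.0.1", "2024-01-05 09:05:02", "/products"),
    ("10.0.0.2", "2024-01-05 09:06:40", "/index.html"),
    ("10.0.0.1", "2024-01-05 10:10:00", "/products"),   # >30 min gap -> new session
]

def sessionize(entries, timeout=SESSION_TIMEOUT):
    sessions = defaultdict(list)          # (user, session_no) -> list of URLs
    last_seen, session_no = {}, defaultdict(int)
    for user, ts, url in sorted(entries, key=lambda e: (e[0], e[1])):
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        if user in last_seen and t - last_seen[user] > timeout:
            session_no[user] += 1        # an inactivity gap closes the previous session
        last_seen[user] = t
        sessions[(user, session_no[user])].append(url)
    return dict(sessions)

for key, urls in sessionize(log).items():
    print(key, urls)
```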
Value Chain Creation in Business Analytics
Firms are awash in big data and analytical technology as they seek to derive value in a turbulent environment. The literature has somewhat reached a consensus that investments in technology alone may not reap benefits from business analytics (BA). The main purpose of BA is not about how to install technical capabilities, but about how to build a process whereby a firm constructs a value chain converting data into insights, leading to quality decisions. Drawing upon the theory of the information value chain, this study develops a BA value chain model and tests it with 268 data scientists. Results show that organizational resilience, absorptive capacity, and analytical IT capabilities are critical antecedents to analytical decision-making quality, which in turn influences BA net benefits. In particular, results illustrate that organizational resilience is a more significant variable impacting analytical decision-making quality than the influence of people and technology. Theoretical and practical implications are also discussed.
Strategies Used in the Translation of Interlingual Subtitling
This study was an attempt to identify the interlingual strategies employed to translate English subtitles into Persian and to determine their frequency, as well. Contrary to many countries, subtitling is a new field in Iran. The study, a corpus-based, comparative, descriptive, non-judgmental analysis of an English-Persian parallel corpus, comprised English audio scripts of five movies of different genres, with Persian subtitles. The study’s theoretical framework was based on Gottlieb’s (1992) classification of subtitling translation strategies. The results indicated that all Gottlieb’s proposed strategies were applicable to the corpus with some degree of variation of distribution among different film genres. The most frequently used strategy was “transfer” at 54.06%; the least frequently used strategies were “transcription” and “decimation” both at 0.81%. It was concluded that the film genre plays a crucial role in using different strategies.
A Survey on Time-of-Flight Stereo Fusion
Due to the demand for depth maps of higher quality than possible with a single depth imaging technique today, there has been an increasing interest in the combination of different depth sensors to produce a “super-camera” that is more than the sum of the individual parts. In this survey paper, we give an overview over methods for the fusion of Time-of-Flight (ToF) and passive stereo data as well as applications of the resulting high quality depth maps. Additionally, we provide a tutorial-based introduction to the principles behind ToF stereo fusion and the evaluation criteria used to benchmark these methods.
Successful IT Outsourcing Engagement: Lessons from Malaysia
The literature on IT outsourcing is well-developed with clear explanations of what may determine success. The context of IT outsourcing studies has primarily focused upon North American and European companies, seeking low-cost economies from links with developing nations to gain competitive advantage. These studies may not be generalisable to companies based in developing economies, who may be trying to replicate successful outsourcing approaches. If the focus of outsourcing is primarily one of cost cutting, IT policy makers and managers in Malaysia cannot assume that successful outsourcing determinants are of any significance to them. This article therefore addresses the question of how generalisable the determinants of successful IT outsourcing are to a Malaysian context. The study is based upon a sample survey of companies in the Penang region of Malaysia. It presents and tests hypotheses on the nature of outsourcing relationships. In conclusion, Malaysian managers can take some comfort in that the lessons of outsourcing can be generalised to their context. Of particular note is the value of selective outsourcing in comparison to full outsourcing and the impacts of communication and management commitment.
Evaluation of the Wireless M-Bus standard for future smart water grids
The most recent Wireless Sensor Networks technologies can provide viable solutions to perform automatic monitoring of the water grid, and smart metering of water consumptions. However, sensor nodes located along water pipes cannot access power grid facilities, to get the necessary energy imposed by their working conditions. In this sense, it is of basic importance to design the network architecture in such a way as to require the minimum possible power. This paper investigates the suitability of the Wireless Metering Bus protocol for possible adoption in future smart water grids, by evaluating its transmission performance, through simulations and experimental tests executed by means of prototype sensor nodes.
Vitamin D and Depression: A Systematic Review and Meta-Analysis Comparing Studies with and without Biological Flaws
Efficacy of Vitamin D supplements in depression is controversial, awaiting further literature analysis. Biological flaws in primary studies are a possible reason meta-analyses of Vitamin D have failed to demonstrate efficacy. This systematic review and meta-analysis of Vitamin D and depression compared studies with and without biological flaws. The systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The literature search was undertaken through four databases for randomized controlled trials (RCTs). Studies were critically appraised for methodological quality and biological flaws, in relation to the hypothesis and study design. Meta-analyses were performed for studies according to the presence of biological flaws. The 15 RCTs identified provide a more comprehensive evidence-base than previous systematic reviews; methodological quality of studies was generally good and methodology was diverse. A meta-analysis of all studies without flaws demonstrated a statistically significant improvement in depression with Vitamin D supplements (+0.78 CI +0.24, +1.27). Studies with biological flaws were mainly inconclusive, with the meta-analysis demonstrating a statistically significant worsening in depression by taking Vitamin D supplements (-1.1 CI -0.7, -1.5). Vitamin D supplementation (≥800 I.U. daily) was somewhat favorable in the management of depression in studies that demonstrate a change in vitamin levels, and the effect size was comparable to that of anti-depressant medication.
Combined Thrombolysis with Abciximab Favourably Influences Platelet-Leukocyte Interactions and Platelet Activation in Acute Myocardial Infarction
Background: In patients with acute myocardial infarction (AMI), activated platelets and altered haemostatic/fibrinolytic systems, with and without thrombolytic therapy, are known. Platelets thereby interact with neutrophils, stimulated endothelial cells and monocytes, leading to adverse effects that further myocardial damage. Thrombolysis in these patients is still hampered by procoagulant effects favoring early reocclusion. The additional treatment with a GPIIb/IIIa antagonist aimed to minimize early reocclusion, thus improving the present therapeutic regimen. Methods: In 38 patients with AMI, we investigated the effects of a thrombolytic regimen with half reteplase (r-PA) dose plus abciximab vs. full dose r-PA on membrane-bound adhesion molecules (CD41, CD42b, CD40, CD40L) expressed on platelets, neutrophils and monocytes, as well as on soluble platelet-selectin, as interaction and activation markers of these cells. Results: The combination group had significantly (p < 0.05) lower sP-selectin levels over 48 h vs. the group treated with full dose r-PA. After 3 h, the percentage of CD41 and CD42b positive monocytes and granulocytes as well as the percentage of CD40 positive granulocytes and the percentage of CD40L positive monocytes markedly (p < 0.01, p < 0.05) decreased in the combination group vs. data at admission compared with the r-PA group, indicating less leukocyte-platelet adhesion. Conclusions: The thrombolytic regimen with half dose r-PA and abciximab had a beneficial influence on platelet activation and induced a more marked decrease of platelet-monocyte, and in part, platelet-granulocyte aggregates compared with the r-PA regimen. This could contribute to a probably lesser monocyte activation state with favourable effects on monocyte-endothelial adhesion and a consecutively possible influence on myocardial damage, a reduction of the additional acute local inflammatory processes and a reduction of adherence of platelet-granulocyte aggregates to subendothelium.
Constrained Hough Transforms for Curve Detection
This paper describes techniques to perform fast and accurate curve detection using constrained Hough transforms, in which localization error can be propagated efficiently into the parameter space. We first review a formal definition of the Hough transform and modify it to allow the formal treatment of localization error. We then analyze current Hough transform techniques with respect to this definition. It is shown that the Hough transform can be subdivided into many small subproblems without a decrease in performance, where each subproblem is constrained to consider only those curves that pass through some subset of the edge pixels up to the localization error. This property allows us to accurately and efficiently propagate localization error into the parameter space such that curves are detected robustly without finding false positives. The use of randomization techniques yields an algorithm with a worst-case complexity of O(n), where n is the number of edge pixels in the image, if we are only required to find curves that are significant with respect to the complexity of the image. Experiments are discussed that indicate that this method is superior to previous techniques for performing curve detection, and results are given showing the detection of lines and circles in real images.
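A minimal sketch, in Python, of the randomization idea only: each trial samples two edge pixels and votes for the line they determine in (theta, rho) space. The constrained subproblem decomposition and the propagation of localization error described in the paper are not reproduced; the bin counts, vote threshold, and parameter ranges below are illustrative assumptions.

```python
import numpy as np

def randomized_hough_lines(edge_points, n_trials=5000, n_bins=(180, 400),
                           rho_max=400.0, min_votes=30, rng=None):
    """Randomized Hough transform for lines: rho = x*cos(theta) + y*sin(theta).

    Each trial samples two edge pixels, computes the line through them and
    votes for its (theta, rho) cell; cells with at least `min_votes` votes
    are reported as detected lines.
    """
    rng = rng or np.random.default_rng(0)
    acc = np.zeros(n_bins, dtype=np.int32)
    pts = np.asarray(edge_points, dtype=float)
    for _ in range(n_trials):
        i, j = rng.choice(len(pts), size=2, replace=False)
        dx, dy = pts[j] - pts[i]
        if dx == 0 and dy == 0:
            continue
        theta = (np.arctan2(dy, dx) + np.pi / 2) % np.pi   # normal direction
        rho = pts[i, 0] * np.cos(theta) + pts[i, 1] * np.sin(theta)
        t_bin = int(theta / np.pi * n_bins[0]) % n_bins[0]
        r_bin = int((rho + rho_max) / (2 * rho_max) * n_bins[1])
        if 0 <= r_bin < n_bins[1]:
            acc[t_bin, r_bin] += 1
    hits = np.argwhere(acc >= min_votes)
    return [(t * np.pi / n_bins[0], b * 2 * rho_max / n_bins[1] - rho_max)
            for t, b in hits]
```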
Fuzzy & Datamining based Disease Prediction Using K-NN Algorithm
Disease diagnosis is one of the most important applications of such systems, as disease is one of the leading causes of death all over the world. Almost all systems that predict disease use inputs from complex tests conducted in labs, and none predicts disease based on risk factors such as tobacco smoking, alcohol intake, age, family history, diabetes, hypertension, high cholesterol, physical inactivity and obesity. Researchers have been using several data mining techniques to help health care professionals in the diagnosis of heart disease. K-Nearest-Neighbour (KNN) is one of the successful data mining techniques used in classification problems; however, it is less used in the diagnosis of heart disease patients. Recently, researchers have shown that combining different classifiers through voting can outperform single classifiers. This paper investigates applying KNN to help healthcare professionals in the diagnosis of disease, especially heart disease. It also investigates whether integrating voting with KNN can enhance its accuracy in the diagnosis of heart disease patients. The results show that applying KNN could achieve higher accuracy than a neural network ensemble in the diagnosis of heart disease patients. The results also show that applying voting could not enhance the KNN accuracy in the diagnosis of heart disease.
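A hedged sketch of the two configurations compared here, KNN alone and KNN combined with other classifiers through voting, using scikit-learn. The breast-cancer dataset stands in for the heart-disease data, and the chosen co-classifiers and hyperparameters are illustrative assumptions rather than the paper's settings.

```python
from sklearn.datasets import load_breast_cancer          # stand-in clinical dataset
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # placeholder for risk-factor data

# KNN alone, with feature scaling (distance-based methods need it)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))

# KNN integrated with other classifiers through majority voting
vote = VotingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
        ("logreg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("tree", DecisionTreeClassifier(max_depth=5)),
    ],
    voting="hard",
)

print("KNN alone :", cross_val_score(knn, X, y, cv=5).mean())
print("KNN+voting:", cross_val_score(vote, X, y, cv=5).mean())
```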
The induction of behavioural sensitization is associated with cocaine-induced structural plasticity in the core (but not shell) of the nucleus accumbens.
Repeated exposure to cocaine increases the density of dendritic spines on medium spiny neurons in the nucleus accumbens (Acb) and pyramidal cells in the medial prefrontal cortex (mPFC). To determine if this is associated with the development of psychomotor sensitization, rats were given daily i.p. injections of 15 mg/kg of cocaine (or saline) for 8 days, either in their home cage (which failed to induce significant psychomotor sensitization) or in a distinct and relatively novel test cage (which induced robust psychomotor sensitization). Their brains were obtained 2 weeks after the last injection and processed for Golgi-Cox staining. In the Acb core (AcbC) cocaine treatment increased spine density only in the group that developed psychomotor sensitization (i.e. in the Novel but not Home group), and there was a significant positive correlation between the degree of psychomotor sensitization and spine density. In the Acb shell (AcbS) cocaine increased spine density to the same extent in both groups; i.e. independent of psychomotor sensitization. In the mPFC cocaine increased spine density in both groups, but to a significantly greater extent in the Novel group. Furthermore, when rats were treated at Home with a higher dose of cocaine (30 mg/kg), cocaine now induced psychomotor sensitization in this context, and also increased spine density in the AcbC. Thus, the context in which cocaine is experienced influences its ability to reorganize patterns of synaptic connectivity in the Acb and mPFC, and the induction of psychomotor sensitization is associated with structural plasticity in the AcbC and mPFC, but not the AcbS.
High torque Internal Permanent Magnet wheel motor for electric traction applications
Permanent magnet (PM) motors are favored for electric traction applications due to their high efficiency. This paper proposes an enhanced in-wheel internal permanent magnet (IPM) motor offering the advantages of PM machines along with excellent torque performance and high power density. The advantage of the proposed scheme is the high magnetic flux developed in the air gap, which allows much higher values of flux density than a surface PM machine of the same size. This IPM motor aims to efficiently utilize the energy stored in the PMs in electric traction applications, where high loads and intense transient phenomena occur, while keeping a simple and robust structure.
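For background, the torque advantage of an interior PM machine over a surface-mounted one can be summarized by the standard dq-frame torque expression (textbook material, not taken from the paper):

T_e = \frac{3}{2}\, p \left[ \psi_{m}\, i_q + \left( L_d - L_q \right) i_d\, i_q \right]

where p is the number of pole pairs, \psi_{m} the magnet flux linkage, L_d and L_q the d- and q-axis inductances (L_q > L_d in an IPM), and i_d, i_q the stator current components. The second, reluctance term vanishes in a surface PM machine (L_d = L_q) and is one reason an IPM design can reach higher torque density for the same size.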
Radio Frequency Tomography for Tunnel Detection
Radio frequency (RF) tomography is proposed to detect underground voids, such as tunnels or caches, over relatively wide areas of regard. The RF tomography approach requires a set of low-cost transmitters and receivers arbitrarily deployed on the surface of the ground or slightly buried. Using the principles of inverse scattering and diffraction tomography, a simplified theory for below-ground imaging is developed. In this paper, the principles and motivations in support of RF tomography are introduced. Furthermore, several inversion schemes based on arbitrarily deployed sensors are devised. Then, limitations to performance and system considerations are discussed. Finally, the effectiveness of RF tomography is demonstrated by presenting images reconstructed via the processing of synthetic data.
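As generic background for the inverse-scattering formulation mentioned above (not the specific inversion schemes developed in the paper), the field scattered by a buried target can be linearized under the Born approximation:

E_s(\mathbf{r}) \approx k_b^{2} \int_{V} G_b(\mathbf{r}, \mathbf{r}')\, O(\mathbf{r}')\, E_i(\mathbf{r}')\, d\mathbf{r}', \qquad O(\mathbf{r}') = \frac{\epsilon(\mathbf{r}')}{\epsilon_b} - 1,

where E_i is the incident field, G_b the background (soil) Green's function, k_b the background wavenumber, and O the contrast function; an air-filled tunnel appears as a region of negative contrast. Linear inversion of this relation over many transmitter-receiver pairs yields the below-ground image.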
A Self-Organizing Strategy for Power Flow Control of Photovoltaic Generators in a Distribution Network
The focus of this paper is to develop a distributed control algorithm that will regulate the power output of multiple photovoltaic generators (PVs) in a distribution network. To this end, the cooperative control methodology from network control theory is used to make a group of PV generators converge and operate at certain (or the same) ratio of available power, which is determined by the status of the distribution network and the PV generators. The proposed control only requires asynchronous information intermittently from neighboring PV generators, making a communication network among the PV units both simple and necessary. The minimum requirement on communication topologies is also prescribed for the proposed control. It is shown that the proposed analysis and design methodology has the advantages that the corresponding communication networks are local, their topology can be time varying, and their bandwidth may be limited. These features enable PV generators to have both self-organizing and adaptive coordination properties even under adverse conditions. The proposed method is simulated using the IEEE standard 34-bus distribution network.
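A minimal sketch of the cooperative-control idea in Python, assuming a synchronous update for readability: each PV averages the utilization ratios it last heard from its neighbours, and one unit that receives the grid operator's command steers the common ratio toward the requested value. The gains, topology handling, and the asynchronous, intermittent information exchange of the paper are simplified away.

```python
import numpy as np

def fair_utilization_step(ratios, available, adjacency, leader_idx=0, target=0.6):
    """One consensus update of PV utilization ratios (illustrative only)."""
    ratios = np.asarray(ratios, dtype=float)
    new = ratios.copy()
    for i, row in enumerate(adjacency):
        nbrs = np.flatnonzero(row)
        if len(nbrs):
            new[i] = np.mean(ratios[nbrs])                 # cooperative averaging
    new[leader_idx] += 0.2 * (target - new[leader_idx])    # grid-level command
    return new, np.asarray(available) * new                # ratios, power set-points

ratios = [0.9, 0.3, 0.5, 0.7]                  # initial utilization ratios
available = [4.0, 6.0, 5.0, 3.0]               # kW available from each PV
adjacency = np.array([[0, 1, 0, 1],            # ring communication topology
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]])
for _ in range(20):
    ratios, setpoints = fair_utilization_step(ratios, available, adjacency)
print(ratios, setpoints)                       # ratios converge toward the target
```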
Online engagement factors on Facebook brand pages
Social networks have become an additional marketing channel that can be integrated with traditional ones as part of the marketing mix. The change in the dynamics of the marketing interchange between companies and consumers introduced by social networks has placed a focus on non-transactional customer behavior. In this new marketing era, the terms engagement and participation became the central non-transactional constructs used to describe the nature of participants' specific interactions and/or interactive experiences. These changes imposed challenges on traditional one-way marketing, resulting in companies experimenting with many different approaches and shaping a successful social media approach through trial and error. To provide insights to practitioners willing to utilize social networks for marketing purposes, our study analyzes how characteristics of the content communicated by the company, such as media type, content type, and posting day and time, influence the level of online customer engagement, measured by the number of likes, comments and shares and by interaction duration, for a Facebook brand page. Our results show that the analyzed factors affect individual engagement measures differently. We discuss the implications of our findings for social media marketing.
Short- and long-term modulation of upper limb motor-evoked potentials induced by acupuncture.
The aim of this study was to investigate in humans the effects of acupuncture upon upper-limb motor-evoked potentials (MEPs), elicited by transcranial magnetic stimulation of the primary motor cortex. It is known that peripheral sensory stimulation can be used to induce short- and long-term changes in motor cortex excitability. Data show that the simple insertion of the needle is an adequate somatosensory stimulus to induce a significant modulation of MEP amplitude, the sign of which (facilitation or inhibition) is specific to the investigated muscle and to the point of needle insertion. Moreover, MEP changes in upper-limb muscles are also observed following needling of lower-limb sites, revealing the presence of long-distance effects of acupuncture. Finally, the modulation in muscle excitability considerably outlasts the time period of needle application, demonstrating the induction of long-term plastic changes in the central nervous system. In addition, results have shown that the effects on muscle excitability are not restricted to the stimulation of well-coded acupoints, as described in traditional Chinese medicine, but they can also be induced by needling of nonacupoints, normally not used for therapeutic purposes. The possible neuronal mechanisms underlying the observed effects of acupuncture are discussed in relation to the available neurophysiological data regarding the interlimb reflexes and the changes in the representational cortical maps induced in humans by a prolonged somatosensory stimulation.
On a Methodology for Robust Segmentation of Nonideal Iris Images
Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from the remaining background. In this paper, a segmentation methodology is proposed that aims to compensate for various nonidealities contained in iris images during segmentation. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected by such factors as specular reflection, blur, lighting variation, occlusion, and off-angle imaging. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementations of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the aforementioned algorithms.
Foot pressure distribution and contact duration pattern during walking at self-selected speed in young adults
Foot load, observed as pressure distribution, is examined in relation to foot and ankle function within the gait cycle. This load defines the modes of healthy and pathological gait, and determining the patterns of healthy, i.e. "normal", walking is the basis for classifying pathological modes. Eleven healthy participants were examined in this initial study. Participants walked barefoot over a pressure plate at their self-selected speed. Maximal pressure values were recorded in the heel, in the first, second and third metatarsal joints, and in the hallux region. The longest contact duration was recorded in the metatarsal region.
Learning Joint Multilingual Sentence Representations with Neural Machine Translation
In this paper, we use the framework of neural machine translation to learn joint sentence representations across six very different languages. Our aim is a representation which is independent of the language and therefore likely to capture the underlying semantics. We define a new cross-lingual similarity measure, compare up to 1.4M sentence representations and study the characteristics of close sentences. We provide experimental evidence that sentences that are close in embedding space are indeed semantically highly related, but often have quite different structure and syntax. These relations also hold when comparing sentences in different languages.
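The paper defines its own cross-lingual similarity measure, which is not reproduced here; as a hedged stand-in, the sketch below uses plain cosine similarity over fixed-size sentence embeddings to illustrate how the closest sentences in the joint space would be retrieved. The vectors are random placeholders for encoder states.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two sentence embeddings."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_sentences(query_vec, candidate_vecs, k=5):
    """Indices and scores of the k candidates closest to the query embedding."""
    sims = np.array([cosine_sim(query_vec, c) for c in candidate_vecs])
    order = np.argsort(-sims)[:k]
    return order, sims[order]

# toy vectors standing in for encoder states of an English and a German sentence
en = np.random.default_rng(0).normal(size=512)
de = en + 0.1 * np.random.default_rng(1).normal(size=512)   # a near paraphrase
print(cosine_sim(en, de))
```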
Mining Software Engineering Data from GitHub
GitHub is the largest collaborative source code hosting site built on top of the Git version control system. The availability of a comprehensive API has made GitHub a target for many software engineering and online collaboration research efforts. In our work, we have discovered that a) obtaining data from GitHub is not trivial, b) the data may not be suitable for all types of research, and c) improper use can lead to biased results. In this tutorial, we analyze how data from GitHub can be used for large-scale, quantitative research, while avoiding common pitfalls. We use the GHTorrent dataset, a queryable offline mirror of the GitHub API data, to draw examples from and present pitfall avoidance strategies.
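A minimal sketch of the kind of query the GHTorrent mirror supports, filtering out two of the pitfalls noted above (forks and repositories deleted upstream). The table and column names follow my recollection of the GHTorrent relational schema, and the SQLite connection is a stand-in for the MySQL dumps GHTorrent actually distributes; both should be checked against the GHTorrent documentation before use.

```python
import sqlite3  # any DB-API driver works; GHTorrent itself ships MySQL dumps

# Assumed schema: a `projects` table with `language`, `forked_from`, `deleted`.
QUERY = """
SELECT p.language, COUNT(*) AS repos
FROM projects p
WHERE p.forked_from IS NULL   -- exclude forks: counting them inflates results
  AND p.deleted = 0           -- exclude repositories deleted on GitHub
GROUP BY p.language
ORDER BY repos DESC
LIMIT 10;
"""

def top_languages(db_path="ghtorrent.db"):
    """Return the ten most common primary languages among non-fork projects."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(QUERY).fetchall()
```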
ON POINTWISE KAN EXTENSIONS IN DOUBLE CATEGORIES
In this paper we consider a notion of pointwise Kan extension in double categories that naturally generalises Dubuc’s notion of pointwise Kan extension along enriched functors. We show that, when considered in equipments that admit opcartesian tabulations, it generalises Street’s notion of pointwise Kan extension in 2-categories. Introduction A useful construction in classical category theory is that of right Kan extension along functors and, dually, that of left Kan extension along functors. Many important notions, including that of limit and right adjoint functor, can be regarded as right Kan extensions. On the other hand right Kan extensions can often be constructed out of limits; such Kan extensions are called pointwise. It is this notion of Kan extension that was extended to more general settings, firstly by Dubuc in [Dub70], to a notion of pointwise Kan extension along V-functors, between categories enriched in some suitable category V, and later by Street in [Str74], to a notion of pointwise Kan extension along morphisms in any 2-category. It is unfortunate that Street’s notion, when considered in the 2-category V-Cat of V-enriched categories, does not agree with Dubuc’s notion of pointwise Kan extension, but is stronger in general. In this paper we show that by moving from 2-categories to double categories it is possible to unify Dubuc’s and Street’s notion of pointwise Kan extension. In §1 we recall the notion of double category, which generalises that of 2-category by considering, instead of a single type, two types of morphism. For example one can consider both ring homomorphisms and bimodules between rings. One type of morphism is drawn vertically and the other horizontally so that cells in a double category, which have both a horizontal and vertical morphism as source and as target, are shaped like squares. Every double category K contains a 2-category V(K) consisting of the objects and vertical morphisms of K, as well as cells whose horizontal source and target are identities. Many of the results in this paper first appeared as part of my PhD thesis “Algebraic weighted colimits” that was written under the guidance of Simon Willerton. I would like to thank Simon for his advice and encouragement. Also I thank the anonymous referee for helpful suggestions, and the University of Sheffield for its financial support of my PhD studies.
Securing Internet of Things with Lightweight IPsec
Real-world deployments of wireless sensor networks (WSNs) require secure communication. It is important that a receiver is able to verify that sensor data was generated by trusted nodes. In some cases it may also be necessary to encrypt sensor data in transit. Recently, WSNs and traditional IP networks have become more tightly integrated using IPv6 and 6LoWPAN. Available IPv6 protocol stacks can use IPsec to secure data exchange. Thus, it is desirable to extend 6LoWPAN such that IPsec communication with IPv6 nodes is possible. It is beneficial to use IPsec because the existing end-points on the Internet do not need to be modified to communicate securely with the WSN. Moreover, using IPsec, true end-to-end security is implemented and the need for a trustworthy gateway is removed. In this paper we provide End-to-End (E2E) secure communication between IP-enabled sensor nodes and devices on the traditional Internet. This is the first compressed, lightweight design, implementation, and evaluation of a 6LoWPAN extension for IPsec on Contiki. Our extension supports both IPsec's Authentication Header (AH) and Encapsulating Security Payload (ESP). Thus, communication endpoints are able to authenticate, encrypt and check the integrity of messages using standardized and established IPv6 mechanisms.
Knowledge Representation Learning with Entities, Attributes and Relations
Distributed knowledge representation (KR) encodes both entities and relations in a low-dimensional semantic space, which has significantly promoted the performance of relation extraction and knowledge reasoning. In many knowledge graphs (KGs), some relations indicate attributes of entities (attributes) while others indicate relations between entities (relations). Existing KR models regard all relations equally, and usually suffer from poor accuracy when modeling one-to-many and many-to-one relations, which are mostly composed of attributes. In this paper, we distinguish existing KG relations into attributes and relations, and propose a new KR model with entities, attributes and relations (KR-EAR). The experimental results show that, by modeling attributes separately, KR-EAR can significantly outperform state-of-the-art KR models in the prediction of entities, attributes and relations. The source code of this paper can be obtained from https://github.com/thunlp/KR-EAR.
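For context, many of the KR models referred to above score a relational triple by a translation in the embedding space; the sketch below shows the classical TransE-style score as a baseline. It is not the KR-EAR objective itself, which scores attribute triples and relation triples with different functions.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility score: larger (less negative) is more plausible.

    Scores a triple (head, relation, tail) by -||h + r - t||_1.  This is the
    translation-based baseline that models such as KR-EAR build on, not the
    KR-EAR objective itself.
    """
    return -np.linalg.norm(h + r - t, ord=1)

rng = np.random.default_rng(0)
dim = 50
h, r, t = rng.normal(size=(3, dim))
print(transe_score(h, r, t))        # random triple: strongly negative score
print(transe_score(h, r, h + r))    # consistent triple: score 0 (the maximum)
```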
Data transmission via GSM voice channel for end to end security
Global System for Mobile Communications (GSM) technology still plays a key role because of its availability, reliability and robustness. Recently, additional consumer applications have been proposed in which GSM is used as a backup or data transmission service. Unfortunately, sending data via the GSM voice channel is a challenging task, since the channel is speech sensitive and suppresses other forms of signals. In this paper, a systematic method is proposed to develop a modem that transmits data over the GSM voice channel (DoGSMV) using speech-like (SL) symbols. Unlike previous approaches, an artificial search space is produced to find the best SL symbols, and an analysis-by-synthesis (AbyS) method is introduced for parameter decoding. As a result, a simulated data rate of 1.6 kbps is achieved when wireless communication errors are ignored.
Universality and language-specific experience in the perception of lexical tone and pitch
Two experiments focus on Thai tone perception by native speakers of tone languages (Thai, Cantonese, and Mandarin), a pitch–accent (Swedish), and a nontonal (English) language. In Experiment 1, there was better auditory-only and auditory–visual discrimination by tone and pitch–accent language speakers than by nontone language speakers. Conversely and counterintuitively, there was better visual-only discrimination by nontone language speakers than tone and pitch–accent language speakers. Nevertheless, visual augmentation of auditory tone perception in noise was evident for all five language groups. In Experiment 2, involving discrimination in three fundamental frequency equivalent auditory contexts, tone and pitch–accent language participants showed equivalent discrimination for normal Thai speech, filtered speech, and violin sounds. In contrast, nontone language listeners had significantly better discrimination for violin sounds than filtered speech and in turn speech. Together the results show that tone perception is determined by both auditory and visual information, by acoustic and linguistic contexts, and by universal and experiential factors. In nontone languages such as English, fundamental frequency (F0; perceived as pitch) conveys information about prosody, stress, focus, and grammatical and emotional content, but in tone languages F0 parameters also distinguish clearly different meanings at the lexical level. In this paper, we investigate Thai tone perception in tone (Thai, Cantonese, and Mandarin), pitch–accent (Swedish), and nontone (English) language participants. While cues other than F0 (e.g., amplitude envelope, voice quality, and syllable duration) may also contribute to some lesser extent to tone production and perception, F0 height and contour are the main distinguishing features of lexical tone. Accordingly, tones may be classified with respect to the relative degree of F0 movement over time as static (level) or dynamic (contour). In Central Thai, for example, there are five tones: two dynamic tones, [khǎ:]-rising tone, meaning “leg”; and [khâ:]-falling tone, “to kill”; and three static tones, [khá:]-high tone, “to trade”; [kha:]-mid tone, “to be stuck”; and [khà:]-low tone, “galangal, a root spice.” Tone languages vary in the number and nature of their lexical tones; Cantonese has three static and three dynamic tones, and Mandarin has one static and three dynamic tones. Another important variation is between tone and pitch–accent languages; in tone languages, pitch variations occur on individual syllables, whereas in pitch–accent languages, it is the relative pitch between successive syllables that is important. In Swedish, for example, there are two pitch accents that are applied to disyllabic words.
Pitch Accent 1 is the default “single falling” or acute tone; for example, anden (single tone) [′andɛ̀n] meaning “duck.” Pitch Accent 2 is the “double” or grave tone, which is used in most native Swedish nouns that have polysyllabic singular forms with the principal stress on the first syllable; for example, anden (double tone) [′andɛ̂n] meaning “spirit.” However, while pitch accent is used throughout Swedish spoken language, there are only about 500 pairs of words that are distinguished by pitch accent (Clark & Yallop, 1990). Figure 1 shows the F0 patterns over time of the languages of concern here (Thai, Mandarin, and Cantonese tones) and the two Swedish pitch accents. To describe the tones in these languages, both in Figure 1 and throughout the text, we apply the Chao (1930, 1947) system in which F0 height at the start and end (and sometimes in the middle) of words is referred to by the numbers 1 to 5 (1 = low frequency, 5 = high frequency), in order to capture approximate F0 height and contour. Tone languages are prevalent; they are found in West Africa (e.g., Yoruba and Sesotho), North America and Central America (e.g., Tewa and Mixtec), and Asia (e.g., Cantonese, Mandarin, Thai, Vietnamese, Taiwanese, and Burmese). Pitch–accent languages are found in Asia (Japanese and some Korean dialects) and Europe (Swedish, Norwegian, and Latvian). Tone and pitch–accent languages comprise approximately 70% of the world’s languages (Yip, 2002) and are spoken by more than 50% of the world’s population (Fromkin, 1978). Psycholinguistic investigations of tone perception fail to match this prevalence. Here, we contribute …
[Figure 1 caption: (a) Fundamental frequency (F0) distribution of Thai tones, based on five Thai female productions of “ma” (Chao values: Mid-33, Low-21, Falling-241, High-45, Rising-315). (b) F0 distribution of Mandarin tones, based on four Mandarin female productions of “ma” (Chao values: High-55, Rising-35, Dipping-214, Falling-51). (c) F0 distribution of Cantonese tones, based on two Cantonese female productions of “si” (Chao values: High-55, Rising-25, Mid-33, Falling-21, Low-Rising-23, Low-22). (d) F0 distribution of Swedish pitch accents (across two syllables), based on three Swedish female productions of two-syllable words; Pitch Accent 1 shows the single falling F0 pattern and Pitch Accent 2 shows the double peak in F0.]
Facial recognition with PCA and machine learning methods
Facial recognition is a challenging problem in image processing and machine learning. Since the widespread applications of facial recognition make it a valuable research topic, this work develops new facial recognition systems that have both high recognition accuracy and fast running speed. Efforts are made to design facial recognition systems by combining different algorithms. Comparisons and evaluations of recognition accuracy and running speed show that PCA + SVM achieves the best recognition result, over 95% for certain training data and eigenface sizes, while PCA + KNN achieves a balance between recognition accuracy and running speed.
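A minimal sketch of the PCA + SVM combination reported as the most accurate, using scikit-learn; the LFW dataset and the eigenface dimensionality below are illustrative assumptions, not the data or settings used in the work.

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Labeled Faces in the Wild is used here only as a stand-in face dataset;
# the number of eigenfaces (n_components) is a tunable assumption.
faces = fetch_lfw_people(min_faces_per_person=60)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, random_state=0)

model = make_pipeline(
    PCA(n_components=100, whiten=True, random_state=0),   # eigenface projection
    SVC(kernel="rbf", C=10, gamma="scale"),               # classifier on PCA features
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```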
The challenge of ubiquitous computing in health care: technology, concepts and solutions. Findings from the IMIA Yearbook of Medical Informatics 2005.
OBJECTIVES To review recent research efforts in the field of ubiquitous computing in health care and to identify current research trends and further challenges for medical informatics. METHODS Analysis of the contents of the Yearbook of Medical Informatics 2005 of the International Medical Informatics Association (IMIA). RESULTS The Yearbook of Medical Informatics 2005 includes 34 original papers selected from 22 peer-reviewed scientific journals, related to several distinct research areas: health and clinical management, patient records, health information systems, medical signal processing and biomedical imaging, decision support, knowledge representation and management, education and consumer informatics, as well as bioinformatics. A special section on ubiquitous health care systems is devoted to recent developments in the application of ubiquitous computing in health care. Besides synoptic reviews of each of the sections, the Yearbook includes invited reviews concerning E-Health strategies, primary care informatics and wearable healthcare. CONCLUSIONS Several publications demonstrate the potential of ubiquitous computing to enhance the effectiveness of health services delivery and organization. But ubiquitous computing is also a societal challenge, caused by the surrounding yet unobtrusive character of this technology. Contributions from nearly all of the established sub-disciplines of medical informatics are needed to turn the visions of this promising new research field into reality.