title (string, length 8–300) | abstract (string, length 0–10k) |
---|---|
A Review on Distributed Application Processing Frameworks in Smart Mobile Devices for Mobile Cloud Computing | The latest developments in mobile device technology have made smartphones the future computing and service access devices. Users expect to run computationally intensive applications on Smart Mobile Devices (SMDs) in the same way as on powerful stationary computers. However, in spite of all the advancements in recent years, SMDs remain low-potential computing devices, constrained by CPU power, memory capacity, and battery lifetime. Mobile Cloud Computing (MCC) is the latest practical solution for alleviating this incapacitation by extending the services and resources of computational clouds to SMDs on an on-demand basis. In MCC, application offloading is ascertained as a software-level solution for augmenting the application processing capabilities of SMDs. Current offloading algorithms offload computationally intensive applications to remote servers by employing different cloud models. A challenging aspect of such algorithms is the establishment of a distributed application processing platform at runtime, which requires additional computing resources on SMDs. This paper reviews existing Distributed Application Processing Frameworks (DAPFs) for SMDs in the MCC domain. The objective is to highlight issues and challenges for existing DAPFs in developing, implementing, and executing computationally intensive mobile applications within the MCC domain. It proposes a thematic taxonomy of current DAPFs, reviews current offloading frameworks using this taxonomy, and analyzes the implications and critical aspects of current offloading frameworks. Further, it investigates commonalities and deviations in such frameworks on the basis of significant parameters such as offloading scope, migration granularity, partitioning approach, and migration pattern. Finally, we put forward open research issues in distributed application processing for MCC that remain to be addressed. |
Do confessions taint perceptions of handwriting evidence? An empirical test of the forensic confirmation bias. | Citing classic psychological research and a smattering of recent studies, Kassin, Dror, and Kukucka (2013) proposed the operation of a forensic confirmation bias, whereby preexisting expectations guide the evaluation of forensic evidence in a self-verifying manner. In a series of studies, we tested the hypothesis that knowing that a defendant had confessed would taint people's evaluations of handwriting evidence relative to those not so informed. In Study 1, participants who read a case summary in which the defendant had previously confessed were more likely to erroneously conclude that handwriting samples from the defendant and perpetrator were authored by the same person, and were more likely to judge the defendant guilty, compared with those in a no-confession control group. Study 2 replicated and extended these findings using a within-subjects design in which participants rated the same samples both before and after reading a case summary. These findings underscore recent critiques of the forensic sciences as subject to bias, and suggest the value of insulating forensic examiners from contextual information. |
INTERNAL COMPACT PRINTED LOOP ANTENNA WITH MATCHING ELEMENT FOR LAPTOP APPLICATIONS | An internal compact printed loop antenna with a matching element is presented for laptop applications. Bending of the printed loop structure is employed to reduce the size of the antenna. A matching element is placed at the backside of the substrate to enhance the bandwidth of the antenna. The matching element is of rectangular shape with an inverted U-shaped slot in it. The proposed antenna, with wideband characteristics, covers the GSM900, GSM1900, UMTS, LTE2300, WLAN, and WiMAX 3.3/3.5/3.7 bands and is suitable for laptop applications. The antenna structure is simulated using IE3D software. Keywords: bended antenna; multiband antenna; printed loop antenna; tablet; wideband antenna. |
Learning Probabilistic Models of Link Structure | Most real-world data is heterogeneous and richly interconnected. Examples include the Web, hypertext, bibliometric data and social networks. In contrast, most statistical learning methods work with “flat” data representations, forcing us to convert our data into a form that loses much of the link structure. The recently introduced framework of probabilistic relational models (PRMs) embraces the object-relational nature of structured data by capturing probabilistic interactions between attributes of related entities. In this paper, we extend this framework by modeling interactions between the attributes and the link structure itself. An advantage of our approach is a unified generative model for both content and relational structure. We propose two mechanisms for representing a probabilistic distribution over link structures: reference uncertainty and existence uncertainty. We describe the appropriate conditions for using each model and present learning algorithms for each. We present experimental results showing that the learned models can be used to predict link structure and, moreover, the observed link structure can be used to provide better predictions for the attributes in the model. |
The Complexity of Satisfiability Problems: Refining Schaefer's Theorem | Schaefer proved in 1978 that the Boolean constraint satisfaction problem for a given constraint language is either in P or is NP-complete, and identified all tractable cases. Schaefer’s dichotomy theorem actually shows that there are at most two constraint satisfaction problems, up to polynomial-time isomorphism (and these isomorphism types are distinct if and only if P ≠ NP). We show that if one considers AC⁰ isomorphisms, then there are exactly six isomorphism types (assuming that the complexity classes NP, P, ⊕L, NL, and L are all distinct). |
Goal Reasoning and Trusted Autonomy | This chapter discusses the topic of Goal Reasoning and its relation to Trusted Autonomy. Goal Reasoning studies how autonomous agents can extend their reasoning capabilities beyond their plans and actions, to consider their goals. Such capability allows a Goal Reasoning system to more intelligently react to unexpected events or changes in the environment. We present two different models of Goal Reasoning: Goal-Driven Autonomy (GDA) and goal refinement. We then discuss several research topics related to each, and how they relate to the topic of Trusted Autonomy. Finally, we discuss several directions of ongoing work that are particularly interesting in the context of the chapter: using a model of inverse trust as a basis for adaptive autonomy, and studying how Goal Reasoning agents may choose to rebel (i.e., act contrary to a given command). |
Toward Scalable Neural Dialogue State Tracking Model | The latency of current neural-based dialogue state tracking models prohibits them from being used efficiently in production systems, despite their highly accurate performance. This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model by Zhong et al. (2018), which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. By using only one recurrent network with global conditioning, compared to (1 + # slots) recurrent networks with global and local conditioning used in the GLAD model, our proposed model reduces the latency of training and inference by 35% on average, while preserving belief state tracking performance: 97.38% on turn request and 88.51% on joint goal accuracy. Evaluation on a multi-domain dataset (Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform and joint goal accuracy. |
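A minimal sketch of the core idea in the abstract above: globally conditioning a single shared recurrent encoder on a slot embedding instead of keeping one local recurrent module per slot. This is not the authors' implementation; the module name, dimensions, and the concatenation-based conditioning are illustrative assumptions.

```python
# Hypothetical sketch: one shared recurrent encoder conditioned on a slot
# embedding, replacing per-slot local recurrent modules. Sizes are illustrative.
import torch
import torch.nn as nn

class GloballyConditionedEncoder(nn.Module):
    def __init__(self, vocab_size, num_slots, emb_dim=100, hidden_dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.slot_emb = nn.Embedding(num_slots, emb_dim)
        # A single recurrent network shared across all slots (global conditioning).
        self.rnn = nn.GRU(emb_dim * 2, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, token_ids, slot_id):
        # token_ids: (batch, seq_len); slot_id: (batch,)
        words = self.word_emb(token_ids)                 # (batch, seq, emb)
        slot = self.slot_emb(slot_id).unsqueeze(1)       # (batch, 1, emb)
        slot = slot.expand(-1, words.size(1), -1)        # broadcast over time steps
        inputs = torch.cat([words, slot], dim=-1)        # condition every step on the slot
        outputs, _ = self.rnn(inputs)                    # (batch, seq, 2 * hidden)
        return outputs

# Usage sketch:
# enc = GloballyConditionedEncoder(vocab_size=5000, num_slots=7)
# out = enc(torch.randint(0, 5000, (2, 12)), torch.tensor([3, 5]))
```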
Two-dose versus monthly intermittent preventive treatment of malaria with sulfadoxine-pyrimethamine in HIV-seropositive pregnant Zambian women. | BACKGROUND
Intermittent preventive treatment of malaria during pregnancy (IPTp) reduces placental infection, maternal anemia, and low birth weight (LBW). However, the optimal dosing regimen in settings in which human immunodeficiency virus (HIV) is highly prevalent among pregnant women remains controversial.
METHODS
We conducted a randomized, double-blind, placebo-controlled study of IPTp comparing the standard 2-dose sulfadoxine-pyrimethamine (SP) regimen with monthly IPTp among a cohort of HIV-positive pregnant Zambian women. Primary outcomes included placental malaria (by smear and histology) and maternal peripheral parasitemia at delivery.
RESULTS
There were no differences between monthly IPTp (n=224) and standard IPTp (n=232) in placental malaria by histopathology (26% vs. 29%; relative risk [RR], 0.90 [95% confidence interval {CI}, 0.64-1.26]) or placental parasitemia (2% vs. 4%; RR, 0.55 [95% CI, 0.17-1.79]). There also were no differences in maternal anemia, stillbirths, preterm delivery, LBW, or all-cause mortality of infants at 6 weeks.
CONCLUSIONS
In an area of mesoendemicity in Zambia, monthly SP IPTp was not more efficacious than the standard 2-dose regimen for the prevention of placental malaria or adverse birth outcomes. IPTp policy recommendations need to take into account local malaria transmission patterns and the prevalence of HIV.
TRIAL REGISTRATION
ClinicalTrials.gov identifier: NCT00270530. |
Wastewater treatment high rate algal ponds for biofuel production. | While research and development of algal biofuels are currently receiving much interest and funding, they are still not commercially viable at today's fossil fuel prices. However, a niche opportunity may exist where algae are grown as a by-product of high rate algal ponds (HRAPs) operated for wastewater treatment. In addition to significantly better economics, algal biofuel production from wastewater treatment HRAPs has a much smaller environmental footprint compared to commercial algal production HRAPs which consume freshwater and fertilisers. In this paper the critical parameters that limit algal cultivation, production and harvest are reviewed and practical options that may enhance the net harvestable algal production from wastewater treatment HRAPs, including CO₂ addition, species control, control of grazers and parasites, and bioflocculation, are discussed. |
Knowledge-based end-to-end memory networks | End-to-end dialog systems have become very popular because they hold the promise of learning directly from human to human dialog interaction. Retrieval and generative methods have been explored in this area with mixed results. A key element that is missing so far is the incorporation of a-priori knowledge about the task at hand. This knowledge may exist in the form of structured or unstructured information. As a first step towards this direction, we present a novel approach, Knowledge-based end-to-end memory networks (KB-memN2N), which allows special handling of named entities for goal-oriented dialog tasks. We present results on two datasets, the DSTC6 challenge dataset and the dialog bAbI tasks. |
Distributing Forth | A Familiar Problem: There is a controversy over how best to distribute Forth. Should it be in assembly language or in Forth (i.e. "metacompiled")? Lately, e-forth has been offered as the solution for repopularizing Forth. It is the modern approach, using assembly language, offered to replace the out-of-date Fig Forth; so much easier to understand than a metacompiled Forth; uses MASM [8086 macroassembler] on a PC, which everybody has; how universal; etc. I feel that metacompilation has been getting a bum rap. Let's look more closely at how e-forth is distributed: (1) executable form already customized for a particular processor/operating system, or (2) IBM PC MASM source code that needs to be ported to your processor. |
Supporting Non-Musicians' Creative Engagement with Musical Interfaces | This paper reports on a study which sets out to examine the process of creative engagement of individual non-musicians when interacting with interactive musical systems (IMSs), and to identify features of IMSs that may support non-musicians' creative engagement in music making. In order to creatively engage novices, we aimed to design IMSs which would combine the intuitive interaction of 'sound toys' with the rich and complex music possibilities offered by IMSs to support individual learning and creative processes. Two IMSs were designed and built and an empirical user study was conducted of them. The study used a multi-layered methodology, which combined interviews and recordings of user interactions. Analysis of participants' behaviour with the IMSs led to the identification of interaction patterns, which were used to identify a three-step framework ('learn', 'exploration', 'creation') of creative engagement with the musical interfaces, and to inform the development of implications for design for supporting creative engagement with IMSs. Key implications for design identified in this study are to support novices to: learn the sound; play live; catalyze insights; and scaffold composition. The basic study methodology of this paper also offers a contribution to methods for evaluating designs by combining recording of user interactions with interviews. |
Audio-based Music Classification with a Pretrained Convolutional Network | Recently the ‘Million Song Dataset’, containing audio features and metadata for one million songs, was made available. In this paper, we build a convolutional network that is then trained to perform artist recognition, genre recognition and key detection. The network is tailored to summarize the audio features over musically significant timescales. It is infeasible to train the network on all available data in a supervised fashion, so we use unsupervised pretraining to be able to harness the entire dataset: we train a convolutional deep belief network on all data, and then use the learnt parameters to initialize a convolutional multilayer perceptron with the same architecture. The MLP is then trained on a labeled subset of the data for each task. We also train the same MLP with randomly initialized weights. We find that our convolutional approach improves accuracy for the genre recognition and artist recognition tasks. Unsupervised pretraining improves convergence speed in all cases. For artist recognition it improves accuracy as well. |
Model-Driven Development of Mobile Personal Health Care Applications | Personal health care applications on mobile devices allow patients to enhance their health via reminders, monitoring and feedback to health care providers. Engineering such applications is challenging with a need for health care plan meta-models, per-patient instantiation of care plans, and development and deployment of supporting Web and mobile device applications. We describe a novel prototype environment for visually modelling health care plans and automated plan and mobile device application code generation. |
28 GHz Angle of Arrival and Angle of Departure Analysis for Outdoor Cellular Communications Using Steerable Beam Antennas in New York City | Propagation measurements at 28 GHz were conducted in outdoor urban environments in New York City using four different transmitter locations and 83 receiver locations with distances of up to 500 m. A 400 megachip-per-second channel sounder with steerable 24.5 dBi horn antennas at the transmitter and receiver was used to measure the angular distributions of received multipath power over a wide range of propagation distances and urban settings. Measurements were also made to study the small-scale fading of closely-spaced power delay profiles recorded at half-wavelength (5.35 mm) increments along a small-scale linear track (10 wavelengths, or 107 mm) at two different receiver locations. Our measurements indicate that power levels for small-scale fading do not significantly fluctuate from the mean power level at a fixed angle of arrival. We propose here a new lobe modeling technique that can be used to create a statistical channel model for lobe path loss and shadow fading, and we provide many model statistics as a function of transmitter-receiver separation distance. Our work shows that New York City is a multipath-rich environment when using highly directional steerable horn antennas, and that an average of 2.5 signal lobes exists at any receiver location, where each lobe has an average total angle spread of 40.3° and an RMS angle spread of 7.8°. This work aims to create a 28 GHz statistical spatial channel model for future 5G cellular networks. |
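The half-wavelength track spacing quoted in the abstract above follows directly from the 28 GHz carrier; as a quick worked check:

```latex
\lambda = \frac{c}{f} \approx \frac{3\times10^{8}\,\mathrm{m/s}}{28\times10^{9}\,\mathrm{Hz}}
        \approx 10.7\,\mathrm{mm}, \qquad
\tfrac{\lambda}{2} \approx 5.35\,\mathrm{mm}, \qquad
10\lambda \approx 107\,\mathrm{mm}.
```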
SVM-BDT PNN and Fourier Moment Technique for Classification of Leaf Shape | This paper presents three techniques for plant classification based on leaf shape, the SVM-BDT, PNN and Fourier moment techniques, for solving multiclass problems. All three techniques have been applied to a database of 1600 leaf shapes from 32 different classes, where most of the classes have 50 leaf samples of similar kind. In the proposed work, three techniques are used to compare the performance of leaf classification: a Probabilistic Neural Network with principal component analysis, a Support Vector Machine utilizing a Binary Decision Tree, and Fourier moments. The proposed SVM-based Binary Decision Tree architecture takes advantage of both the efficient computation of the decision tree architecture and the high classification accuracy of SVMs. This can lead to a dramatic improvement in recognition speed when addressing problems with a large number of classes. Classification results from all three techniques were compared, and it was observed that SVM-BDT performs better than the Fourier and PNN techniques. Keywords: Probabilistic Neural Network, Support Vector Machine, Binary Decision Tree |
Effect of postoperative low-dose dopamine on renal function after elective major vascular surgery | OBJECTIVE
To determine the effect on renal function of postoperative low-dose dopamine in volume-replete patients after elective, major vascular abdominal surgery.
DESIGN
Randomized, double-blind, placebo-controlled trial.
SETTING
Intensive care unit of a referral hospital in Brisbane, Australia.
PATIENTS
37 patients having elective repair of an abdominal aortic aneurysm or having aortobifemoral grafting; 18 received dopamine, and 19 received placebo. Two patients were excluded from the 5-day analysis because of perioperative death.
INTERVENTIONS
Patients were randomly assigned to receive either placebo or a low-dose infusion of dopamine (3 micrograms/kg per minute) in saline. Patients in both groups were given sufficient crystalloid to maintain a urine flow of more than 1 mL/kg per hour during the first 24 postoperative hours. Care in the intensive care unit was otherwise usual and was the same for each group.
MEASUREMENTS
Plasma creatinine levels, urea levels, and creatinine clearance were measured preoperatively and postoperatively (at 24 hours and 5 days). Urine flow and the volume of crystalloid during the first 24 hours were recorded.
RESULTS
Two postoperative deaths occurred in the dopamine group (from renal failure and myocardial infarction). Four patients had myocardial infarction, three of whom received dopamine. Plasma creatinine levels remained unchanged in both groups. At 24 hours, the mean plasma urea level decreased by 1.07 mmol/L in the dopamine group compared with 1.84 mmol/L in the placebo group, a difference of 0.77 (95% CI, -0.12 to 1.67). The mean 24-hour creatinine clearance increased by 0.165 mL/s (9.89 mL/min) in the dopamine group and by 0.199 mL/s (11.98 mL/min) in the placebo group (P > 0.2). Urine volumes were slightly higher in those receiving dopamine (1.83 mL/kg compared with 1.6 mL/kg, a difference of 0.23 [CI, -0.18 to 0.64]). None of these differences were statistically or clinically significant.
CONCLUSIONS
Within the limits of the small size of the study, low-dose dopamine appeared to offer no advantage to euvolemic patients after elective abdominal aortic surgery. However, patients with acute oliguric renal failure were not included in the study. |
A Context-Based Infrastructure for Smart Environments | In order for a smart environment to provide services to its occupants, it must be able to detect its current state or context and determine what actions to take based on the context. We discuss the requirements for dealing with context in a smart environment and present a software infrastructure solution we have designed and implemented to help application designers build intelligent services and applications more easily. We describe the benefits of our infrastructure through applications that we have built. |
Synthesizing Programs for Images using Reinforced Adversarial Learning | Advances in deep generative networks have led to impressive results in recent years. Nevertheless, such models can often waste their capacity on the minutiae of datasets, presumably due to weak inductive biases in their decoders. This is where graphics engines may come in handy since they abstract away low-level details and represent images as high-level programs. Current methods that combine deep learning and renderers are limited by hand-crafted likelihood or distance functions, a need for large amounts of supervision, or difficulties in scaling their inference algorithms to richer datasets. To mitigate these issues, we present SPIRAL, an adversarially trained agent that generates a program which is executed by a graphics engine to interpret and sample images. The goal of this agent is to fool a discriminator network that distinguishes between real and rendered data, trained with a distributed reinforcement learning setup without any supervision. A surprising finding is that using the discriminator’s output as a reward signal is the key to allow the agent to make meaningful progress at matching the desired output rendering. To the best of our knowledge, this is the first demonstration of an end-to-end, unsupervised and adversarial inverse graphics agent on challenging real world (MNIST, OMNIGLOT, CELEBA) and synthetic 3D datasets. A video of the agent can be found at https://youtu.be/iSyvwAwa7vk. |
Design of a tendon-driven robotic hand with an embedded camera | This paper presents the design of a new five-fingered robotic hand with an embedded camera. Several morphological features of the human hand are integrated to improve the appearance of the hand. The drive system of this hand is under-actuated to reduce the weight of the hand and to embed all the actuators inside the palm. Despite this under-actuation, the hand can grasp objects in several different ways. In addition, two different transmissions are adopted to drive the fingers according to their roles. These transmissions help not only to improve drive efficiency but also to secure the space for the embedded camera. |
Bistatic backscatter radio for power-limited sensor networks | For applications that require large numbers of wireless sensors spread in a field, backscatter radio can be utilized to minimize the monetary and energy cost of each sensor. Commercial backscatter systems such as those in radio frequency identification (RFID), utilize modulation designed for the bandwidth limited regime, and require medium access control (MAC) protocols for multiple access. High tag/sensor bitrate and monostatic reader architectures result in communication range reduction. In sharp contrast, sensing applications typically require the opposite: extended communication ranges that could be achieved with bitrate reduction and bistatic reader architectures. This work presents non-coherent frequency shift keying (FSK) for bistatic backscatter radio; FSK is appropriate for the power limited regime and also allows many RF tags/sensors to convey information to a central reader simultaneously with simple frequency division multiplexing (FDM). However, classic non-coherent FSK receivers are not directly applicable in bistatic backscatter radio. This work a) carefully derives the complete signal model for bistatic backscatter radio, b) describes the details of backscatter modulation with emphasis on FSK and its corresponding receiver, c) proposes techniques to overcome the difficulties introduced by the utilization of bistatic architectures, such as the carrier frequency offset (CFO), and d) presents bit error rate (BER) performance for the proposed receiver and carrier recovery techniques. |
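As a rough illustration of the non-coherent FSK detection discussed in the abstract above (not the paper's receiver, which additionally handles carrier frequency offset and the bistatic link budget), one can compare the per-bit signal energy at the two tone frequencies; all parameters below are illustrative assumptions.

```python
# Hypothetical sketch of non-coherent binary FSK detection by tone-energy
# comparison; samples, sampling rate, and tone frequencies are illustrative.
import numpy as np

def fsk_detect(samples, fs, f0, f1, bit_duration):
    n = int(fs * bit_duration)                     # samples per bit
    bits = []
    for start in range(0, len(samples) - n + 1, n):
        chunk = samples[start:start + n]
        t = np.arange(n) / fs
        # Correlate against complex tones; taking the magnitude discards the
        # unknown carrier phase, which is what makes the detector non-coherent.
        e0 = np.abs(np.sum(chunk * np.exp(-2j * np.pi * f0 * t)))
        e1 = np.abs(np.sum(chunk * np.exp(-2j * np.pi * f1 * t)))
        bits.append(1 if e1 > e0 else 0)
    return bits

# Example: a 1 kbps stream with tones at 10 kHz and 15 kHz sampled at 100 kHz
# would be decoded with fsk_detect(samples, 100e3, 10e3, 15e3, 1e-3).
```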
Alfred Marshall’s Lectures to Women | This new critical edition makes the Lectures, which have sometimes been referred to by Marshallian scholars, available to a wider body of historians of economic thought. Based on Mary Paley Marshall’s original notes, corrected by Marshall himself, the Lectures are supplemented by Marshall’s lecture outlines. Some contemporary and related texts are also published here including a paper on the future of the working classes from the same year and Marshall’s exchange of articles with the trade unionist John Holmes in 1874 known as the Bee-Hive debate. |
The measurement and meaning of unintended pregnancy. | The concept of unintended pregnancy has been essential to demographers in seeking to understand fertility, to public health practitioners in preventing unwanted childbearing and to both groups in promoting a woman's ability to determine whether and when to have children. Accurate measurement of pregnancy intentions is important in understanding fertility-related behaviors, forecasting fertility, estimating unmet need for contraception, understanding the impact of pregnancy intentions on maternal and child health, designing family planning programs and evaluating their effectiveness, and creating and evaluating community-based programs that prevent unintended pregnancy.1 Pregnancy unintendedness is a complex concept, and has been the subject of recent conceptual and methodological critiques.2 Pregnancy intentions are increasingly viewed as encompassing affective, cognitive, cultural and contextual dimensions. Developing a more complete understanding of pregnancy intentions should advance efforts to increase contraceptive use, to prevent unintended pregnancies and to improve the health of women and their children. To provide a scientific foundation for public health efforts to prevent unintended pregnancy, we conducted a review of unintended pregnancy between the fall of 1999 and the spring of 2001 as part of strategic planning activities within the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC). We reviewed the published and unpublished literature, consulted with experts in reproductive health and held several joint meetings with the Demographic and Behavioral Research Branch of the National Institute of Child Health and Human Development, and the Office of Population Affairs of the Department of Health and Human Services. We used standard scientific search engines, such as Medline, to find relevant articles published since 1975, and identified older references from bibliographies contained in recent articles; academic experts and federal officials helped to identify unpublished reports. This comment summarizes our findings and incorporates insights gained from the joint meetings and the strategic planning process. CURRENT DEFINITIONS AND MEASURES: Conventional measures of unintended pregnancy are designed to reflect a woman's intentions before she became pregnant.3 Unintended pregnancies are pregnancies that are reported to have been either unwanted (i.e., they occurred when no children, or no more children, were desired) or mistimed (i.e., they occurred earlier than desired). In contrast, pregnancies are described as intended if they are reported to have happened at the "right time"4 or later than desired (because of infertility or difficulties in conceiving). A concept related to unintended pregnancy is unplanned pregnancy—one that occurred when … |
Remark on "Fast Floating-Point Processing in Common Lisp" | We explain why we feel that the comparison between Common Lisp and Fortran in a recent article by Fateman et al. in this journal is not entirely fair. |
Ictal adipokines are associated with pain severity and treatment response in episodic migraine. | OBJECTIVE
To evaluate ictal adipokine levels in episodic migraineurs and their association with pain severity and treatment response.
METHODS
This was a double-blind, placebo-controlled trial evaluating peripheral blood specimens from episodic migraineurs at acute pain onset and 30 to 120 minutes after treatment with sumatriptan/naproxen sodium vs placebo. Total adiponectin (T-ADP), ADP multimers (high molecular weight [HMW], middle molecular weight, and low molecular weight [LMW]), leptin, and resistin levels were evaluated by immunoassays.
RESULTS
Thirty-four participants (17 responders, 17 nonresponders) were included. In all participants, pretreatment pain severity increased with every quartile increase in both the HMW:T-ADP ratio (coefficient of variation [CV] 0.51; 95% confidence interval [CI]: 0.08, 0.93; p = 0.019) and resistin levels (CV 0.58; 95% CI: 0.21, 0.96; p = 0.002), but was not associated with quartile changes in leptin levels. In responders, T-ADP (CV -0.98; 95% CI: -1.88, -0.08; p = 0.031) and resistin (CV -0.95; 95% CI: -1.83, -0.07; p = 0.034) levels decreased 120 minutes after treatment as compared with pretreatment. In addition, in responders, the HMW:T-ADP ratio (CV -0.04; 95% CI: -0.07, -0.01; p = 0.041) decreased and the LMW:T-ADP ratio (CV 0.04; 95% CI: 0.01, 0.07; p = 0.043) increased at 120 minutes after treatment. In nonresponders, the LMW:T-ADP ratio (CV -0.04; 95% CI: -0.07, -0.01; p = 0.018) decreased 120 minutes after treatment. Leptin was not associated with treatment response.
CONCLUSIONS
Both pretreatment migraine pain severity and treatment response are associated with changes in adipokine levels. Adipokines represent potential novel migraine biomarkers and drug targets. |
Test-cost-sensitive attribute reduction | In many data mining and machine learning applications, there are two objectives in the task of classification; one is decreasing the test cost, the other is improving the classification accuracy. Most existing research work focuses on the latter, with attribute reduction serving as an optional pre-processing stage to remove redundant attributes. In this paper, we point out that when tests must be undertaken in parallel, attribute reduction is mandatory in dealing with the former objective. With this in mind, we posit the minimal test cost reduct problem which constitutes a new, but more general, difficulty than the classical reduct problem. We also define three metrics to evaluate the performance of reduction algorithms from a statistical viewpoint. A framework for a heuristic algorithm is proposed to deal with the new problem; specifically, an information gain-based λ-weighted reduction algorithm is designed, where weights are decided by test costs and a non-positive exponent λ, which is the only parameter set by the user. The algorithm is tested with three representative test cost distributions on four UCI (University of California Irvine) datasets. Experimental results show that there is a trade-off while setting λ, and a competition approach can improve the quality of the result significantly. This study suggests potential application areas and new research trends concerning attribute reduction. |
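A rough sketch of a cost-weighted greedy attribute selection in the spirit of the λ-weighted, information gain-based heuristic described above. The exact weighting gain(a) · cost(a)^λ, the fixed number of selected attributes, and the data layout are assumptions for illustration; a full reduct algorithm would also verify that the chosen attributes still discern the decision classes.

```python
# Hypothetical sketch: greedy, test-cost-weighted attribute selection.
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    base = entropy(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    cond = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - cond

def cost_weighted_selection(rows, labels, costs, lam=-0.5, k=3):
    # lam is the non-positive exponent: the more negative it is, the more
    # heavily expensive tests are penalized relative to their information gain.
    chosen = []
    candidates = set(range(len(rows[0])))
    for _ in range(k):
        best = max(candidates - set(chosen),
                   key=lambda a: info_gain(rows, labels, a) * (costs[a] ** lam))
        chosen.append(best)
    return chosen
```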
A DNA-Based Archival Storage System | Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up. Using DNA to archive data is an attractive possibility because it is extremely dense, with a raw limit of 1 exabyte/mm³ (10⁹ GB/mm³), and long-lasting, with observed half-life of over 500 years. This paper presents an architecture for a DNA-based archival storage system. It is structured as a key-value store, and leverages common biochemical techniques to provide random access. We also propose a new encoding scheme that offers controllable redundancy, trading off reliability for density. We demonstrate feasibility, random access, and robustness of the proposed encoding with wet lab experiments involving 151 kB of synthesized DNA and a 42 kB random-access subset, and simulation experiments of larger sets calibrated to the wet lab experiments. Finally, we highlight trends in biotechnology that indicate the impending practicality of DNA storage for much larger datasets. |
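As a toy illustration of one building block of such encodings, the sketch below maps ternary digits to DNA bases with a rotating code so that no base is immediately repeated (avoiding homopolymer runs that are hard to synthesize and sequence). This is only in the spirit of encodings used for DNA storage; the paper's actual scheme also adds addressing and controllable redundancy, which are not shown.

```python
# Hypothetical sketch: rotating trit-to-base code that never repeats a base.
BASES = "ACGT"

def trits_to_dna(trits, start_base="A"):
    seq, prev = [], BASES.index(start_base)
    for t in trits:                       # each trit is 0, 1 or 2
        prev = (prev + t + 1) % 4         # offset of 1..3 -> never repeats prev
        seq.append(BASES[prev])
    return "".join(seq)

def dna_to_trits(seq, start_base="A"):
    trits, prev = [], BASES.index(start_base)
    for b in seq:
        cur = BASES.index(b)
        trits.append((cur - prev - 1) % 4)
        prev = cur
    return trits

trits = [0, 2, 1, 2, 0, 0]
dna = trits_to_dna(trits)                 # 'CAGCGT': no repeated adjacent bases
assert dna_to_trits(dna) == trits         # the mapping round-trips losslessly
```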
Government Influence on the Social Science Paradigm | A scientific paradigm includes a set of widely shared understandings that specify a discipline's research methodologies and substantive priorities. The impact of government sponsorship of academic social research on the paradigms of four social science disciplines is evaluated using a probability sample of 1,079 faculty members in the fields of anthropology, economics, political science, and psychology. The results indicate that federal government funding is allocated according to topical and methodological priorities that are distinct from the disciplines' self-defined priorities. It is also found that: (1) federal support of academic research has a significant impact on the substantive and methodological plans of social scientists; (2) social scientists who are financially dependent on government assistance are particularly responsive to government influence; (3) the condition of financial dependency on government funding is in part a product of prior federal investment in social research. An “externalist” thesis holds that the scientific paradigm is not autonomous and is significantly shaped by such outside factors as the political system, and these findings provide support for this thesis. |
Asymmetric Dimethylarginine Limits the Efficacy of Simvastatin Activating Endothelial Nitric Oxide Synthase. | BACKGROUND
Asymmetric dimethylarginine (ADMA), an endogenous inhibitor of endothelial nitric oxide synthase (eNOS), is considered a risk factor for the pathogenesis of cardiovascular diseases. Simvastatin, a lipid-lowering drug with other pleiotropic effects, has been widely used for treatment of cardiovascular diseases. However, little is known about the effect and underlying molecular mechanisms of ADMA on the effectiveness of simvastatin in the vascular system.
METHODS AND RESULTS
We conducted a prospective cohort study to enroll 648 consecutive patients with coronary artery disease for a follow-up period of 8 years. In patients with plasma ADMA level ≥0.49 μmol/L (a cut-off value from the receiver operating characteristic curve), statin treatment had no significant effect on cardiovascular events. We also conducted randomized, controlled studies using in vitro and in vivo models. In endothelial cells, treatment with ADMA (≥0.5 μmol/L) impaired simvastatin-induced nitric oxide (NO) production, endothelial NO synthase (eNOS) phosphorylation, and angiogenesis. In parallel, ADMA markedly increased the activity of NADPH oxidase (NOX) and production of reactive oxygen species (ROS). The detrimental effects of ADMA on simvastatin-induced NO production and angiogenesis were abolished by the antioxidant N-acetylcysteine, the NOX inhibitor apocynin, or overexpression of dimethylarginine dimethylaminohydrolase 2 (DDAH-2). Moreover, in vivo, ADMA administration reduced Matrigel plug angiogenesis in wild-type mice and decreased simvastatin-induced eNOS phosphorylation in aortas of apolipoprotein E-deficient mice, but not endothelial DDAH-2-overexpressed aortas.
CONCLUSIONS
We conclude that ADMA may trigger NOX-ROS signaling, which leads to restricting the simvastatin-conferred protection of eNOS activation, NO production, and angiogenesis as well as the clinical outcome of cardiovascular events. |
Psychological Well-Being in Adult Life | |
Bison meat has a lower atherogenic risk than beef in healthy men. | The rearing method of bison and the nutrient content of the meat may make bison a healthier alternative to beef. We hypothesized that the acute and chronic effects of bison consumption, in comparison to beef, will result in a less perturbed blood lipid panel and a reduced inflammatory and oxidative stress response which will minimize the detrimental effect on vascular function. A double-blind, cross-over randomized trial was employed to examine the consequence of a single 12 oz serving (n = 14) and 7 weeks of chronic consumption (n = 10) (12 oz/d, 6 d/wk) of each meat. Measurements included blood lipids, interleukin-6, plasminogen activator inhibitor 1, C-reactive protein, oxidized low-density lipoprotein, protein carbonyl, hydroperoxides, flow-mediated dilation (FMD) and FMD/shear rate. Following a single beef meal, triglycerides and oxidized low-density lipoprotein were elevated (67% ± 45% and 18% ± 17% respectively); there was a tendency for hydroperoxides to be elevated (24% ± 37%); and FMD/shear rate was reduced significantly (30% ± 38%). Following a single meal of bison: there was a smaller increase in triglycerides (30% ± 27%), and markers of inflammation and oxidative stress and FMD/shear rate were unchanged. Chronic consumption of either meat did not influence body weight, % body fat, or blood lipids. Protein carbonyl (24% ± 45%), plasminogen activator inhibitor 1 (78% ± 126%), interleukin-6 (59% ± 76%) and C-reactive protein (72% ± 57%) were significantly elevated and FMD/shear rate was significantly reduced (19% ± 28%) following 7 weeks of beef consumption, but not bison consumption. Based on our findings, the data suggest that bison consumption results in a reduced atherogenic risk compared to beef. |
Novel inkjet-printed substrate integrated waveguide (SIW) structures on low-cost materials for wearable applications | The implementation of Substrate Integrated Waveguide (SIW) structures in paper-based inkjet-printed technology is presented in this paper for the first time. SIW interconnects and components have been fabricated and tested on a multilayer paper substrate, which permits the implementation of low-cost and eco-friendly structures. A broadband and compact ridge substrate integrated slab waveguide covering the entire UWB frequency range is proposed and preliminarily verified. SIW structures appear particularly suitable for implementation on paper, due to the possibility to easily realize multilayered topologies and conformal geometries. |
Effective Inhibition of Mannitol Crystallization in Frozen Solutions by Sodium Chloride | Purpose. The purpose of this work was to study the possibility of preventing mannitol crystallization in frozen solutions by using pharmaceutically acceptable additives. Methods. Differential scanning calorimetry (DSC) and low-temperature X-ray diffractometry (LTXRD) were used to characterize the effect of additives on mannitol crystallization. Results. DSC screening revealed that salts (sodium chloride, sodium citrate, and sodium acetate) inhibited mannitol crystallization in frozen solutions more effectively than selected surfactants, α-cyclodextrin, polymers, and alditols. This finding prompted further studies of the crystallization in the mannitol-NaCl-water system. Isothermal DSC results indicated that mannitol crystallization in frozen solutions was significantly retarded in the presence of NaCl and that NaCl did not crystallize until mannitol crystallization completed. Low-temperature X-ray diffractometry data showed that when a 10% w/v mannitol solution without additive was cooled at 1°C/min, the crystalline phases emerging after ice crystallization were those of a mannitol hydrate as well as the anhydrous polymorphs. In the presence of NaCl (5% w/v), mannitol crystallization was suppressed during both cooling and warming and occurred only after annealing and rewarming. In the latter case however, mannitol did not crystallize as the hydrate, but as the anhydrous δ polymorph. At a lower NaCl concentration of 1% w/v, the inhibitory effect of NaCl on mannitol crystallization was evident even during annealing at temperatures close to the Tg′ (−40°C). A preliminary lyophilization cycle with polyvinyl pyrrolidone and NaCl as additives rendered mannitol amorphous. Conclusion. The effectiveness of additives in inhibiting mannitol crystallization in frozen solutions follows the general order: salts > alditols > polyvinyl pyrrolidone > α-cyclodextrin > polysorbate 80 ∼ polyethylene glycol ∼ poloxamer. The judicious use of additives can retain mannitol amorphous during all the stages of the freeze-drying cycle. |
Answering Complicated Question Intents Expressed in Decomposed Question Sequences | Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. Existing QA systems face two major problems when evaluated on our dataset: (1) handling questions that contain coreferences to previous questions or answers, and (2) matching words or phrases in a question to corresponding entries in the associated table. We conclude by proposing strategies to handle both of these issues. |
Part level transfer regularization for enhancing exemplar SVMs | Exemplar SVMs (E-SVMs, Malisiewicz et al., ICCV 2011), where an SVM is trained with only a single positive sample, have found applications in the areas of object detection and content-based image retrieval (CBIR), amongst others. In this paper we introduce a method of part based transfer regularization that boosts the performance of E-SVMs, with a negligible additional cost. This enhanced E-SVM (EE-SVM) improves the generalization ability of E-SVMs by softly forcing it to be constructed from existing classifier parts cropped from previously learned classifiers. In CBIR applications, where the aim is to retrieve instances of the same object class in a similar pose, the EE-SVM is able to tolerate increased levels of intra-class variation, including occlusions and truncations, over E-SVM, and thereby increases precision and recall. In addition to transferring parts, we introduce a method for transferring the statistics between the parts and also show that there is an equivalence between transfer regularization and feature augmentation for this problem and others, with the consequence that the new objective function can be optimized using standard libraries. EE-SVM is evaluated both quantitatively and qualitatively on the PASCAL VOC 2007 and ImageNet datasets for pose specific object retrieval. It achieves a significant performance improvement over E-SVMs, with greater suppression of negative detections and increased recall, whilst maintaining the same ease of training and testing. |
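A hedged sketch of the equivalence between transfer regularization and feature augmentation mentioned above, written in generic notation rather than the paper's exact objective. Collect transferred part classifiers $u_1,\dots,u_m$ as the columns of a matrix $U$ and let $\beta$ be their mixing weights:

```latex
\min_{w,\beta}\ \|w - U\beta\|^{2} + \lambda\|\beta\|^{2}
  + C\sum_{i} \max\bigl(0,\, 1 - y_i\, w^{\top} x_i\bigr)
```

Substituting $v = w - U\beta$ turns the transfer regularizer into a standard one:

```latex
\min_{v,\beta}\ \|v\|^{2} + \lambda\|\beta\|^{2}
  + C\sum_{i} \max\bigl(0,\, 1 - y_i\,(v^{\top} x_i + \beta^{\top} U^{\top} x_i)\bigr)
```

which is an ordinary (weighted-regularization) SVM over the augmented features $[x_i;\ U^{\top}x_i]$ with weight vector $[v;\ \beta]$, so the objective can be optimized with standard libraries, as the abstract notes.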
OpenSec: Policy-Based Security Using Software-Defined Networking | As the popularity of software-defined networks (SDN) and OpenFlow increases, policy-driven network management has received more attention. Manual configuration of multiple devices is being replaced by an automated approach where a software-based, network-aware controller handles the configuration of all network devices. Software applications running on top of the network controller provide an abstraction of the topology and facilitate the task of operating the network. We propose OpenSec, an OpenFlow-based security framework that allows a network security operator to create and implement security policies written in human-readable language. Using OpenSec, the user can describe a flow in terms of OpenFlow matching fields, define which security services must be applied to that flow (deep packet inspection, intrusion detection, spam detection, etc.) and specify security levels that define how OpenSec reacts if malicious traffic is detected. In this paper, we first provide a more detailed explanation of how OpenSec converts security policies into a series of OpenFlow messages needed to implement such a policy. Second, we describe how the framework automatically reacts to security alerts as specified by the policies. Third, we perform additional experiments on the GENI testbed to evaluate the scalability of the proposed framework using existing datasets of campus networks. Our results show that up to 95% of attacks in an existing data set can be detected and 99% of malicious source nodes can be blocked automatically. Furthermore, we show that our policy specification language is simpler while offering fast translation times compared to existing solutions. |
Refining the relationship between personality and subjective well-being. | Understanding subjective well-being (SWB) has historically been a core human endeavor and presently spans fields from management to mental health. Previous meta-analyses have indicated that personality traits are one of the best predictors. Still, these past results indicate only a moderate relationship, weaker than suggested by several lines of reasoning. This may be because of commensurability, where researchers have grouped together substantively disparate measures in their analyses. In this article, the authors review and address this problem directly, focusing on individual measures of personality (e.g., the Neuroticism-Extroversion-Openness Personality Inventory; P. T. Costa & R. R. McCrae, 1992) and categories of SWB (e.g., life satisfaction). In addition, the authors take a multivariate approach, assessing how much variance personality traits account for individually as well as together. Results indicate that different personality and SWB scales can be substantively different and that the relationship between the two is typically much larger (e.g., 4 times) than previous meta-analyses have indicated. Total SWB variance accounted for by personality can reach as high as 39% or 63% disattenuated. These results also speak to meta-analyses in general and the need to account for scale differences once a sufficient research base has been generated. |
The Painfulness of Active, but not Sham, Transcranial Magnetic Stimulation Decreases Rapidly Over Time: Results From the Double-Blind Phase of the OPT-TMS Trial | BACKGROUND
Daily left prefrontal repetitive transcranial magnetic stimulation (rTMS) over several weeks is an FDA approved treatment for major depression. Although rTMS is generally safe when administered using the FDA guidelines, there are a number of side effects that can make it difficult for patients to complete a course of rTMS. Many patients report that rTMS is painful, although patients appear to accommodate to the initial painfulness. The reduction in pain is hypothesized to be due to prefrontal stimulation and is not solely explained by accommodation to the stimulation.
METHODS
In a recent 4 site randomized controlled trial (using an active electrical sham stimulation system) investigating the antidepressant effects of daily left dorsolateral prefrontal rTMS (Optimization of TMS, or OPT-TMS), the procedural painfulness of TMS was assessed before and after each treatment session. Computerized visual analog scale ratings were gathered before and after each TMS session in the OPT-TMS trial. Stimulation was delivered with an iron core figure-8 coil (Neuronetics) with the following parameters: 10 Hz, 120% MT (EMG-defined), 4 s pulse train, 26 s inter-train interval, 3000 pulses per session, one 37.5 min session per day. After each session, procedural pain (pain at the beginning of the TMS session, pain toward the middle, and pain toward then end of the session) ratings were collected at all 4 sites. From the 199 patients randomized, we had usable data from 142 subjects for the initial 15 TMS sessions (double-blind phase) delivered over 3 weeks (142 × 2 × 15 = 4260 rating sessions).
RESULTS
The painfulness of real TMS was initially higher than that of the active sham condition. Over the 15 treatment sessions, subjective reports of the painfulness of rTMS (during the beginning, middle and end of the session) decreased significantly 37% from baseline in those receiving active TMS, with no change in painfulness in those receiving sham. This reduction, although greatest in the first few days, continued steadily over the 3 weeks. Overall, there was a decay rate of 1.56 VAS points per session in subjective painfulness of the procedure in those receiving active TMS.
DISCUSSION
The procedural pain of left prefrontal rTMS decreases over time, independently of other emotional changes, and only in those receiving active TMS. These data suggest that actual TMS stimulation of the prefrontal cortex may be related to the reduction in pain, and that it is not a non-specific accommodation to pain. This reduction in painfulness corresponds only loosely with later clinical outcome. Further work is needed to better understand this phenomenon and whether acute within-session or over-time changes in painfulness might be used as short-term biomarkers of antidepressant response. |
Scaling Learning Algorithms towards AI | One long-term goal of machine learning research is to produce methods that are applicable to highly complex tasks, such as perception (vision, audition), reasoning, intelligent control, and other artificially intelligent behaviors. We argue that in order to progress toward this goal, the Machine Learning community must endeavor to discover algorithms that can learn highly complex functions, with minimal need for prior knowledge, and with minimal human intervention. We present mathematical and empirical evidence suggesting that many popular approaches to non-parametric learning, particularly kernel methods, are fundamentally limited in their ability to learn complex high-dimensional functions. Our analysis focuses on two problems. First, kernel machines are shallow architectures, in which one large layer of simple template matchers is followed by a single layer of trainable coefficients. We argue that shallow architectures can be very inefficient in terms of the required number of computational elements and examples. Second, we analyze a limitation of kernel machines with a local kernel, linked to the curse of dimensionality, that applies to supervised, unsupervised (manifold learning) and semi-supervised kernel machines. Using empirical results on invariant image recognition tasks, kernel methods are compared with deep architectures, in which lower-level features or concepts are progressively combined into more abstract and higher-level representations. We argue that deep architectures have the potential to generalize in non-local ways, i.e., beyond immediate neighbors, and that this is crucial in order to make progress on the kind of complex tasks required for artificial intelligence. |
Level and predictors of participation in patients with stroke undergoing inpatient rehabilitation. | INTRODUCTION
The level of participation is an important factor influencing rehabilitation outcome. However, few studies have evaluated rehabilitation participation and its clinical predictors in patients with stroke. This study aimed to establish the level of participation in patients with stroke undergoing inpatient rehabilitation, and define the clinical predictors for participation.
METHODS
This was a prospective observational study of first-time patients with stroke admitted to a rehabilitation centre over a 12-month period. The primary outcome measure was the level of rehabilitation participation as measured on the Pittsburgh Rehabilitation Participation Scale (PRPS). PRPS measurements were made one week after admission and one week before planned discharge from inpatient rehabilitation. Other outcome measures evaluated were the National Institute of Health Stroke Scale, Functional Independence Measure (FIM), Elderly Cognitive Assessment Questionnaire (ECAQ), Centre for Epidemiologic Studies-Depression Scale, Fatigue Severity Scale (FSS), Lubben Social Network Scale-Revised, and Multidimensional Health Questionnaire.
RESULTS
A total of 122 patients with stroke were studied. The mean PRPS score on admission was relatively high at 4.30 ± 0.90, and this improved to 4.65 ± 0.79 before planned discharge (p < 0.001). On multivariate analysis, the mean PRPS score on admission was predicted by FIM, ECAQ and FSS scores on admission, but not by variables such as age, gender, depression, social support, or health attitudes and beliefs.
CONCLUSION
Patients with lower levels of participation were more likely to be functionally dependent, cognitively impaired and have more fatigue. We suggest that in addition to cognition, fatigue should be routinely screened in patients with stroke undergoing rehabilitation. |
Runtime-behavior based malware classification using online machine learning | Identification of a malware's family is an intricate process whose success and accuracy depend on different factors. These factors are mainly related to the process of extracting meaningful and distinctive features from a set of malware samples, modeling malware via its static or dynamic features, and particularly the techniques used to classify malware samples. In this paper, we propose a new malware classification method based on behavioral features. File system, network, and registry activities observed during the execution traces of the malware samples are used to represent behavior-based features. Existing classification schemes apply machine-learning algorithms to the stored data, i.e., they are off-line. In this study, we use on-line machine learning algorithms that can provide an instantaneous update for a new malware sample immediately following its introduction to the classification scheme. To validate the effectiveness and scalability of our method, we have evaluated it using 18,000 recent malicious files. Experimental results show that our method classifies malware with an accuracy of 92. |
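A minimal sketch of the on-line learning idea described above, using scikit-learn's partial_fit so the classifier is updated as each batch of behavioral feature vectors arrives. The feature dimension, family labels, and random data are illustrative assumptions; real features would be extracted from file-system, network, and registry traces upstream.

```python
# Hypothetical sketch: incremental (on-line) malware family classification.
import numpy as np
from sklearn.linear_model import SGDClassifier

families = ["family_a", "family_b", "family_c"]   # illustrative family labels
clf = SGDClassifier(random_state=0)               # linear model trained by SGD

rng = np.random.default_rng(0)
for batch in range(10):                           # samples arrive over time
    X = rng.random((32, 100))                     # 32 samples x 100 behavioral features
    y = rng.choice(families, size=32)
    clf.partial_fit(X, y, classes=families)       # model updates immediately

print(clf.predict(rng.random((1, 100))))          # classify a newly observed sample
```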
Noise Characteristics of MEMS Gyro's Null Drift and Temperature Compensation | The gyroscope is one of the primary sensors for air vehicle navigation and control. This paper investigates the noise characteristics of microelectromechanical systems (MEMS) gyroscope null drift and temperature compensation. This study mainly focuses on temperature as a long-term error source. An in-house-designed inertial measurement unit (IMU) is used to perform temperature effect testing in the study. The IMU is placed into a temperature control chamber. The chamber temperature is controlled to increase from 25 °C to 80 °C at approximately 0.8 degrees per minute. After that, the temperature is decreased to −40 °C and then returned to 25 °C. The null voltage measurements clearly demonstrate the rapidly changing short-term random drift and the slowly changing long-term drift due to temperature variations. The characteristics of the short-term random drifts are analyzed and represented as probability density functions. A temperature calibration mechanism is established by using an artificial neural network to compensate for the long-term drift. With the temperature calibration, the attitude computation problem due to gyro drift can be improved significantly. |
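A small sketch of the compensation idea above: fit a neural network that maps temperature to the long-term null drift, then subtract the predicted drift from the raw gyro output. The drift law, network size, and synthetic data below are illustrative assumptions, not the paper's IMU data.

```python
# Hypothetical sketch: neural-network temperature compensation of gyro null drift.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
temp = rng.uniform(-40.0, 80.0, size=2000)                           # chamber temperature, deg C
long_term_drift = 0.02 * temp + 0.0005 * temp**2                     # assumed drift law, deg/s
raw_rate = long_term_drift + rng.normal(0.0, 0.05, size=temp.size)   # plus short-term noise

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(temp.reshape(-1, 1), raw_rate)                             # learn temperature -> drift

compensated = raw_rate - model.predict(temp.reshape(-1, 1))          # mostly short-term noise remains
print(np.std(raw_rate), np.std(compensated))                         # spread shrinks after compensation
```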
A Survey of the Application of Machine Learning in Decision Support Systems | Machine learning is a useful technology for decision support systems and assumes greater importance in research and practice. Whilst much of the work focuses on technical implementations and the adaptation of machine learning algorithms to application domains, the factors of machine learning design affecting the usefulness of decision support are still understudied. To enhance the understanding of machine learning and its use in decision support systems, we report the results of our content analysis of design-oriented research published between 1994 and 2013 in major Information Systems outlets. The findings suggest that the usefulness of machine learning for supporting decision-makers depends on the task, the phase of decision-making, and the applied technologies. We also report on the advantages and limitations of prior research, the applied evaluation methods, and implications for future decision support research. Our findings suggest that future decision support research should shed more light on organizational and people-related evaluation criteria. |
Towards secure e-voting using ethereum blockchain | There is no doubt that the revolutionary concept of the blockchain, which is the underlying technology behind the famous cryptocurrency Bitcoin and its successors, is triggering the start of a new era in the Internet and online services. While most people focus only on cryptocurrencies, in fact many administrative operations, fintech procedures, and everyday services that could previously only be done offline and/or in person can now safely be moved to the Internet as online services. What makes the blockchain a powerful tool for digitalizing everyday services is the introduction of smart contracts, as in the Ethereum platform. Smart contracts are meaningful pieces of code, integrated into the blockchain and executed as scheduled at every step of blockchain updates. E-voting, on the other hand, is another trending, yet critical, topic related to online services. The blockchain, with its smart contracts, emerges as a good candidate for developing safer, cheaper, more secure, more transparent, and easier-to-use e-voting systems. Ethereum and its network are among the most suitable, due to their consistency, widespread use, and provision of smart contract logic. An e-voting system must be secure, as it should not allow duplicated votes, and it must be fully transparent while protecting the privacy of the attendees. In this work, we have implemented and tested a sample e-voting application as a smart contract for the Ethereum network using the Ethereum wallets and the Solidity language. The Android platform is also considered, to allow voting for people who do not have an Ethereum wallet. After an election is held, the Ethereum blockchain will eventually hold the records of ballots and votes. Users can submit their votes via an Android device or directly from their Ethereum wallets, and these transaction requests are handled with the consensus of every single Ethereum node. This consensus creates a transparent environment for e-voting. In addition to a broad discussion about the reliability and efficiency of blockchain-based e-voting systems, our application and its test results are presented in this paper. |
Climate Change Adaptation and Development: Exploring the Linkages | Summary: Successful human societies are characterised by their adaptability, evidenced throughout human existence. However, climate change introduces a new challenge, not only because of the expected rise in temperature and sea-levels, but also due to the current context of failure to address the causes of poverty adequately. As a result, policy supporting adaptation has been cast as a necessary strategy for responding to both climate change and supporting development, making adaptation the focus of much recent scholarly and policy research. This paper addresses this new adaptation discourse, arguing that work on adaptation so far has focused on responding to the impacts of climate change, rather than sufficiently addressing the underlying factors that cause vulnerability. While there is a significant push all around for adaptation to be better placed in development planning, the paper finds this to be putting the cart before the horse. A successful adaptation process will require adequately addressing the underlying causes of vulnerability: this is the role that development has to play. This work results from research aimed at exploring the international discourse on adaptation to climate change and the meaning of adaptation to climate change in the context of development. Introduction: As a result of evidence that human-induced global climate change is already occurring and will continue to affect society over the coming decades, a surge in interest in impact-oriented action is discernible since the beginning of the century, in contrast to efforts centred on prevention (Burton et al., 2002). Frustration over the lack of progress and effectiveness of policy to reduce greenhouse gas emissions has contributed to this shift. Adapting to the changes has consequently emerged as a solution to address the impacts of climate change that are already evident in some regions. However, this course of action has not always been considered relevant within science and policy (Schipper, 2006a; Klein, 2003). Adaptation responds directly to the impacts of the increased concentrations of greenhouse gases in both precautionary and reactive ways, rather than through the preventative approach of limiting the source of the gases (this … |
Rethinking ImageNet Pre-training | We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pretrained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10% of the training data, (ii) for deeper and wider models, and (iii) for multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target task accuracy. To push the envelope we demonstrate 50.9 AP on COCO object detection without using any external data—a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks and we expect these discoveries will encourage people to rethink the current de facto paradigm of ‘pretraining and fine-tuning’ in computer vision. |
Head-bobbing of walking birds | Many birds show a rhythmic forward and backward movement of their heads when they walk on the ground. This so-called “head-bobbing” is characterized by a rapid forward movement (thrust phase) which is followed by a phase where the head keeps its position with regard to the environment but moves backward with regard to the body (hold phase). These head movements are synchronized with the leg movements. The functional interpretations of head-bobbing are reviewed. Furthermore, it is discussed why some birds do bob their head and others do not. |
Power-constrained CMOS scaling limits | The scaling of CMOS technology has progressed rapidly for three decades, but may soon come to an end because of power-dissipation constraints. The primary problem is static power dissipation, which is caused by leakage currents arising from quantum tunneling and thermal excitations. The details of these effects, along with other scaling issues, are discussed in the context of their dependence on application. On the basis of these considerations, the limits of CMOS scaling are estimated for various application scenarios. |
Business Intelligence and Analytics Education, and Program Development: A Unique Opportunity for the Information Systems Discipline | “Big Data,” huge volumes of data in both structured and unstructured forms generated by the Internet, social media, and computerized transactions, is straining our technical capacity to manage it. More importantly, the new challenge is to develop the capability to understand and interpret the burgeoning volume of data to take advantage of the opportunities it provides in many human endeavors, ranging from science to business. Data Science, and in business schools, Business Intelligence and Analytics (BI&A) are emerging disciplines that seek to address the demands of this new era. Big Data and BI&A present unique challenges and opportunities not only for the research community, but also for Information Systems (IS) programs at business schools. In this essay, we provide a brief overview of BI&A, speculate on the role of BI&A education in business schools, present the challenges facing IS departments, and discuss the role of IS curricula and program development in delivering BI&A education. We contend that a new vision for the IS discipline should address these challenges. |
Comparative Study of Chronic Kidney Disease Prediction using KNN and SVM | Chronic kidney disease (CKD), also known as chronic renal disease, involves conditions that damage the kidneys and decrease their ability to keep the body healthy. Patients may develop complications such as high blood pressure, anemia (low blood count), weak bones, poor nutritional health, and nerve damage. Early detection and treatment can often keep chronic kidney disease from getting worse. Data mining is the term used for knowledge discovery from large databases. The task of data mining is to make use of historical data to discover regular patterns and improve future decisions; it follows from the convergence of several recent trends: the lessening cost of large data storage devices and the ever-increasing ease of collecting data over networks; the expansion of robust and efficient machine learning algorithms to process this data; and the lessening cost of computational power, enabling the use of computationally intensive methods for data analysis. Machine learning has already created practical applications in such areas as analyzing medical science outcomes, detecting fraud, and detecting fake users. Various data mining classification approaches and machine learning algorithms are applied for the prediction of chronic diseases. The objective of this research work is to introduce a new decision support system to predict chronic kidney disease. The aim of this work is to compare the performance of the Support Vector Machine (SVM) and K-Nearest Neighbour (KNN) classifiers on the basis of accuracy, precision, and execution time for CKD prediction. From the experimental results it is observed that the performance of the KNN classifier is better than that of SVM. Keywords—Data Mining, Machine learning, Chronic kidney disease, Classification, K-Nearest Neighbour, Support vector machine. |
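A minimal sketch of the comparison described above, assuming a prepared numeric feature matrix and binary CKD labels (synthetic placeholders here); it measures accuracy, precision, and execution time for KNN and SVM, but does not reproduce the paper's dataset or preprocessing.

```python
# Compare KNN and SVM on accuracy, precision and runtime (placeholder data).
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 24))                 # placeholder for 24 clinical attributes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder CKD / not-CKD labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)), ("SVM", SVC(kernel="rbf"))]:
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    elapsed = time.perf_counter() - start
    print(name, accuracy_score(y_te, pred), precision_score(y_te, pred), f"{elapsed:.3f}s")
```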
A Foundation for Multi-dimensional Databases | We present a multi-dimensional database model, which we believe can serve as a conceptual model for On-Line Analytical Processing (OLAP)-based applications. Apart from providing the functionalities necessary for OLAP-based applications, the main feature of the model we propose is a clear separation between structural aspects and the contents. This separation of concerns allows us to define data manipulation languages in a reasonably simple, transparent way. In particular, we show that the data cube operator can be expressed easily. Concretely, we define an algebra and a calculus and show them to be equivalent. We conclude by comparing our approach to related work. The conceptual multi-dimensional database model developed here is orthogonal to its implementation, which is not a subject of the present paper. |
Propolis for the Treatment of Onychomycosis | Onychomycosis is a fungal nail infection, considered as a public health problem because it is contagious and it interferes with the quality of life. It has long and difficult treatment, with many side effects and high cost. Propolis extract (PE) is a potential alternative to conventional antifungal agents because it has low cost, accessibility, and low toxicity. Herein, we report the favorable response of PE in onychomycosis in three elderly patients. |
Phase II study of docetaxel in patients with epithelial ovarian carcinoma refractory to platinum. | We analyzed the efficacy and toxicity of docetaxel in patients with ovarian cancer who failed previous chemotherapy with platinum. Fifty-five patients with measurable ovarian cancer were entered in this Phase II study at The University of Texas M. D. Anderson Cancer Center. Treatment consisted of 100 mg/m2 docetaxel given i.v. every 3 weeks. Because of hypersensitivity reactions, premedication with steroids and antihistamine was initiated during the study. Twenty-two (40%) patients responded (there were 3 complete responders and 19 partial responders). Twenty-one (38%) patients had stable disease. The median survival was 10 months. The main toxicity was neutropenia (98% of patients), with 13 episodes of neutropenic fever. Cumulative fluid retention was the main reason for dose modification and required a combination of diuretics and steroids for palliation. Other side effects were alopecia (100%); anemia (87%); dermatitis (67%); gastrointestinal disorders (53%); stomatitis (49%); neurotoxicity (45%); excessive lacrimation (33%); and hypersensitivity reactions (11%), which in one case were life threatening (loss of consciousness, fluid resuscitation). Docetaxel as a single agent proved to be active in heavily pretreated ovarian cancer patients but is associated with significant side effects. Objective toxicity consisted mainly of neutropenia and fluid retention. Neutropenia was dose limiting and required therapy with granulocyte colony-stimulating factor. Fluid retention was improved but not eliminated by diuretics and corticosteroids. Additional studies of docetaxel in ovarian carcinoma are indicated to define the activity in relation to paclitaxel and in platinum combination therapy. |
Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform. | It is widely believed, in the areas of optics, image analysis, and visual perception, that the Hilbert transform does not extend naturally and isotropically beyond one dimension. In some areas of image analysis, this belief has restricted the application of the analytic signal concept to multiple dimensions. We show that, contrary to this view, there is a natural, isotropic, and elegant extension. We develop a novel two-dimensional transform in terms of two multiplicative operators: a spiral phase spectral (Fourier) operator and an orientational phase spatial operator. Combining the two operators results in a meaningful two-dimensional quadrature (or Hilbert) transform. The new transform is applied to the problem of closed fringe pattern demodulation in two dimensions, resulting in a direct solution. The new transform has connections with the Riesz transform of classical harmonic analysis. We consider these connections, as well as others such as the propagation of optical phase singularities and the reconstruction of geomagnetic fields. |
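For orientation, the two operators named above, a spiral phase spectral (Fourier) factor and an orientational phase spatial factor, are often written as below. This is a sketch in our own notation of the standard spiral-phase (vortex) formulation, not text copied from the paper.

```latex
% Sketch of the standard spiral-phase (vortex) quadrature formulation (our notation).
% \mathcal{F} is the 2D Fourier transform, (u,v) spatial frequencies, (x,y) image coordinates.
\[
  \mathcal{V}\{f\}(x,y)
    = \mathcal{F}^{-1}\!\bigl\{\, e^{\,i\phi(u,v)} \,\mathcal{F}\{f\}(u,v) \bigr\},
  \qquad \phi(u,v) = \operatorname{atan2}(v,u).
\]
% For a fringe pattern f = a + b cos(psi) with local fringe orientation beta(x,y),
% the quadrature component is recovered, to first order, by undoing the orientation phase:
\[
  b(x,y)\,\sin\psi(x,y) \;\approx\; -\,i\, e^{-i\beta(x,y)}\; \mathcal{V}\{\,f-a\,\}(x,y).
\]
```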
Static analysis tools as early indicators of pre-release defect density | During software development it is helpful to obtain early estimates of the defect density of software components. Such estimates identify fault-prone areas of code requiring further testing. We present an empirical approach for the early prediction of pre-release defect density based on the defects found using static analysis tools. The defects identified by two different static analysis tools are used to fit and predict the actual pre-release defect density for Windows Server 2003. We show that there exists a strong positive correlation between the static analysis defect density and the pre-release defect density determined by testing. Further, the predicted pre-release defect density and the actual pre-release defect density are strongly correlated at a high degree of statistical significance. Discriminant analysis shows that the results of static analysis tools can be used to separate high and low quality components with an overall classification rate of 82.91%. |
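The two analyses described above, correlation between static-analysis defect density and pre-release defect density, plus a discriminant split into high- and low-quality components, can be sketched as follows; the densities are synthetic placeholders, not the Windows Server 2003 data.

```python
# Correlate static-analysis defect density with pre-release defect density,
# then check whether it separates high/low-quality components (placeholder data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
static_density = rng.gamma(shape=2.0, scale=1.0, size=100)            # defects/KLOC from tools
prerelease_density = 0.8 * static_density + rng.normal(0, 0.5, 100)   # densities found by testing

r, p_value = pearsonr(static_density, prerelease_density)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# Label components as high/low quality by the median pre-release density,
# then ask whether static-analysis density alone separates the two groups.
labels = (prerelease_density > np.median(prerelease_density)).astype(int)
lda = LinearDiscriminantAnalysis().fit(static_density.reshape(-1, 1), labels)
print("classification rate:", lda.score(static_density.reshape(-1, 1), labels))
```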
Computer-Assisted Language Learning And Natural Language Processing | This chapter examines the application of natural language processing to computer-assisted language learning, including the history of work in this field over the last thirty-five years, but with a focus on current developments and opportunities. 16.1 Introduction This chapter focuses on applications of computational linguistics (CL) techniques to computer-assisted language learning (CALL). This always refers to programs designed to help people learn foreign languages, e.g., programs to help German high-school students learn French. CALL is a large field (much larger than computational linguistics) which no one can describe completely in a brief chapter. The focus of this chapter is therefore much narrower, viz., just on those areas of CALL to which CL has been applied or might be. CL programs process natural languages such as English and Spanish, and the techniques are therefore often referred to as natural language processing (NLP). To preview this chapter's contents, we note that NLP has been enlisted in several ways in CALL, e.g., to align bilingual texts so that a learner who encounters an unknown word in French can see how it was rendered in translation; to provide lemmatized access to corpora for advanced learners seeking subtleties unavailable in grammars and dictionaries; to provide morphological analysis and subsequent dictionary access for words unknown to readers; and to parse user input and diagnose morphological and syntactic errors. Speech recognition has been used extensively in pronunciation training, and speech synthesis for single words. The work on speech falls outside the scope of this book, but it is noted since we expect its use in CALL to enable further deployment of NLP, perhaps in parsing spoken input. The chapter deliberately ignores two further areas which arguably involve language learning: first, software designed to diagnose and assist the verbally handicapped (Miller), and second, programs for assistance in developing and polishing advanced writing skills. The first area seems genuinely extraneous, and the second has attracted little interest from NLP practitioners even though it enjoys substantial attention in the larger CALL literature (Pennington 1999, Schultz 2000). The traditional methods of learning and teaching foreign languages require a teacher to work 60 to 100 hours to bring an adult to a level at which she can function minimally with the foreign language (FSI 1973). By 'minimally functioning' we just mean being able to use short, prepackaged phrases to communicate simple thoughts with long pauses and very foreign pronunciation. In general, progress beyond this … |
Early completion of occluded objects | We show that early vision can use monocular cues to rapidly complete partially-occluded objects. Visual search for easily-detected fragments becomes difficult when the completed shape is similar to others in the display; conversely, search for fragments that are difficult to detect becomes easy when the completed shape is distinctive. Results indicate that completion occurs via the occlusion-triggered removal of occlusion edges and linking of associated regions. We fail to find evidence for a visible filling-in of contours or surfaces, but do find evidence for a 'functional' filling-in that prevents the constituent fragments from being rapidly accessed. As such, it is only the completed structures--and not the fragments themselves--that serve as the basis for rapid recognition. |
PccP: A model for Preserving cloud computing Privacy | The widespread focus on cloud computing has necessitated corresponding mechanisms to ensure privacy and security. Various attempts have been made in the past to safeguard the privacy of the individual or agency trying to utilize the services being provided by the cloud. The most challenging task is to provide services to the users while also preserving the privacy of the users' information. In this paper, a model that incorporates a three-level architecture, the Preserving cloud computing Privacy (PccP) model, is proposed, which aims to preserve the privacy of information pertaining to cloud users. The Consumer Layer deals with all aspects of enabling the user of the cloud to access the cloud services being provided by the cloud service provider. The Network Interface Layer creates an appropriate mapping between the original IP addresses of the users and modified IP addresses, thereby ensuring the privacy of the users' IP addresses. The Privacy Preserved Layer utilizes the functionality of the Unique User Cloud Identity Generator, for which an algorithm is proposed in this paper, to generate a unique User Service Dependent Identity (USID) with a privacy check, by establishing a mapping between the existing user identity (ID), if any, and IDs available in a pool of user IDs, to enhance the privacy of sensitive user information. A privacy check method based on information privacy is also proposed, which contributes significantly to maintaining user control over the generated user identities (USIDs). |
High-density 80 K SNP array is a powerful tool for genotyping G. hirsutum accessions and genome analysis | High-throughput genotyping platforms play important roles in plant genomic studies. Cotton (Gossypium spp.) is the world’s important natural textile fiber and oil crop. Upland cotton accounts for more than 90% of the world’s cotton production; however, modern upland cotton cultivars have narrow genetic diversity. The amounts of genomic sequencing and re-sequencing data released make it possible to develop a high-quality single nucleotide polymorphism (SNP) array for intraspecific genotyping detection in cotton. Here we report a high-throughput CottonSNP80K array and its utilization in genotyping detection in different cotton accessions. 82,259 SNP markers were selected from the re-sequencing data of 100 cotton cultivars and used to produce the array on the Illumina Infinium platform. 77,774 SNP loci (94.55%) were successfully synthesized on the array. Of them, 77,252 (99.33%) had call rates of >95% in 352 cotton accessions and 59,502 (76.51%) were polymorphic loci. Application tests using 22 cotton accessions with parent/F1 combinations or with similar genetic backgrounds showed that the CottonSNP80K array had high genotyping accuracy, good repeatability, and wide applicability. Phylogenetic analysis of 312 cotton cultivars and landraces with wide geographical distribution showed that they could be classified into ten groups, irrespective of their origins. We found that the different landraces were clustered in different subgroups, indicating that these landraces were major contributors to the development of different breeding populations of modern G. hirsutum cultivars in China. We integrated a total of 54,588 SNPs (MAFs >0.05) associated with 10 salt stress traits into 288 G. hirsutum accessions for genome-wide association studies (GWAS), and eight significant SNPs associated with three salt stress traits were detected. We developed the CottonSNP80K array with high polymorphism to distinguish upland cotton accessions. Diverse application tests indicated that the CottonSNP80K array plays important roles in germplasm genotyping, variety verification, functional genomics studies, and molecular breeding in cotton. |
Towards Zero-Shot Frame Semantic Parsing for Domain Scaling | State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based approach that can utilize only the slot description in context without the need for any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea of this paper is to leverage the encoding of the slot names and descriptions within a multi-task deep learned slot filling model, to implicitly align slots across domains. The proposed approach is promising for solving the domain scaling problem and eliminating the need for any manually annotated data or explicit schema alignment. Furthermore, our experiments on multiple domains show that this approach results in significantly better slot-filling performance when compared to using only in-domain data, especially in the low data regime. |
Efficient navigation using slow feature gradients | A model of hierarchical Slow Feature Analysis (SFA) enables a mobile robot to learn a spatial representation of its environment directly from images captured during a random walk. After the unsupervised learning phase a subset of the resulting representations are orientation invariant and code for the position of the robot. Hence, they change monotonically over space even though the variation of the sensory signals received from the environment might change drastically e.g. during rotation on the spot. Furthermore, the property of spatial smoothness allows us to infer a navigation direction by taking the difference between the measurement at the current location and a measurement at a target location. In our work we investigated the use of slow feature representations, learned for a specific environment, for the purpose of navigation. We present a straightforward method for navigation using gradient descent on the difference between two points specified in slow feature space. Due to its slowness objective, the resulting slow feature representations implicitly encode information about static obstacles, allowing a mobile robot to efficiently circumnavigate them by simply following the steepest gradient in slow feature space. |
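A toy sketch of the gradient-descent navigation rule described above follows; the `sfa` function is a hypothetical smooth placeholder for the learned slow-feature mapping, so the numbers are illustrative only.

```python
# Navigate by descending the distance, in slow-feature space, between the current
# and the target location. `sfa` stands in for a learned hierarchical SFA mapping.
import numpy as np

def sfa(pos):
    x, y = pos
    return np.array([np.tanh(0.1 * x), np.tanh(0.1 * y)])   # placeholder slow features

def navigate(start, target, step=0.5, eps=1e-3, max_iter=300):
    pos = np.array(start, dtype=float)
    s_target = sfa(np.array(target, dtype=float))

    def cost(p):
        return float(np.sum((sfa(p) - s_target) ** 2))

    for _ in range(max_iter):
        # central-difference estimate of the slow-feature distance gradient
        grad = np.array([
            (cost(pos + [eps, 0.0]) - cost(pos - [eps, 0.0])) / (2 * eps),
            (cost(pos + [0.0, eps]) - cost(pos - [0.0, eps])) / (2 * eps),
        ])
        if np.linalg.norm(grad) < 1e-9:
            break
        pos -= step * grad / np.linalg.norm(grad)            # fixed-size step downhill
    return pos

print(navigate(start=(0.0, 0.0), target=(20.0, 15.0)))       # ends near the target
```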
A phase I trial of the novel proteasome inhibitor PS341 in advanced solid tumor malignancies. | PURPOSE
The purpose of this study was to evaluate the toxicity and pharmacodynamic behavior of the novel proteasome inhibitor PS341 administered as a twice weekly i.v. bolus for 2 weeks, followed by a 1-week recovery period in patients with advanced solid tumor malignancies.
EXPERIMENTAL DESIGN
In this Phase I trial, 43 patients were treated with PS341 in doses ranging from 0.13 to 1.56 mg/m2/dose. A standard Phase I design was used. Pharmacodynamic studies were performed to assess 20S proteasome activity.
RESULTS
Forty-three patients were treated with 89 cycles of PS341. Patients were heavily pretreated. Dose-limiting toxicities on this schedule were diarrhea and sensory neurotoxicity. Other side effects seen were fatigue, fever, anorexia, nausea, vomiting, rash, pruritus, and headache. There was no dose-limiting hematological toxicity. A dose-related inhibition of 20S proteasome activity with increasing dose of PS341 was seen. There was one major response in a patient with refractory non-small cell lung carcinoma.
CONCLUSIONS
Given the results of this trial, it is safe and reasonable to recommend treatment with PS341 on the schedule used in this trial at 1.56 mg/m2/dose in Phase II trials. Particular care should be taken with patients with preexisting neuropathy. Further testing in Phase II trials is warranted. |
Mapping Quantitative Trait Loci Controlling High Iron and Zinc Content in Self and Open Pollinated Grains of Pearl Millet [Pennisetum glaucum (L.) R. Br.] | Pearl millet is a multipurpose grain/fodder crop of the semi-arid tropics, feeding many of the world's poorest and most undernourished people. Genetic variation among adapted pearl millet inbreds and hybrids suggests it will be possible to improve grain micronutrient concentrations by selective breeding. Using 305 loci, a linkage map was constructed to map QTLs for grain iron [Fe] and zinc [Zn] using replicated samples of 106 pearl millet RILs (F6) derived from ICMB 841-P3 × 863B-P2. The grains of the RIL population were evaluated for Fe and Zn content using atomic absorption spectrophotometer. Grain mineral concentrations ranged from 28.4 to 124.0 ppm for Fe and 28.7 to 119.8 ppm for Zn. Similarly, grain Fe and Zn in open pollinated seeds ranged between 22.4-77.4 and 21.9-73.7 ppm, respectively. Mapping with 305 (96 SSRs; 208 DArT) markers detected seven linkage groups covering 1749 cM (Haldane) with an average intermarker distance of 5.73 cM. On the basis of two environment phenotypic data, two co-localized QTLs for Fe and Zn content on linkage group (LG) 3 were identified by composite interval mapping (CIM). Fe QTL explained 19% phenotypic variation, whereas the Zn QTL explained 36% phenotypic variation. Likewise for open pollinated seeds, the QTL analysis led to the identification of two QTLs for grain Fe content on LG3 and 5, and two QTLs for grain Zn content on LG3 and 7. The total phenotypic variance for Fe and Zn QTLs in open pollinated seeds was 16 and 42%, respectively. Analysis of QTL × QTL and QTL × QTL × environment interactions indicated no major epistasis. |
An approach for selecting seed URLs of focused crawler based on user-interest ontology | Seed URL selection for a focused Web crawler aims to guide the crawler toward related and valuable information that meets a user’s personal information requirements and to provide more effective information retrieval. In this paper, we propose a seed URL selection approach based on a user-interest ontology. In order to enrich semantic queries, we first apply Formal Concept Analysis to construct a user-interest concept lattice from the user's log profile. By merging concept lattices, we construct the user-interest ontology, which can describe the implicit concepts and the relationships between them more appropriately for semantic representation and query matching. On the other hand, we make full use of the user-interest ontology to extract the user's interest topic area and to expand user queries so as to retrieve the most related pages as seed URLs, which serve as the entry points of the focused crawler. In particular, we focus on how to refine the user topic area using a bipartite directed graph. The experiment shows that the user-interest ontology can be built effectively by merging concept lattices and that our proposed approach selects a high-quality seed URL collection and improves the average precision of the focused Web crawler. Keywords—Seed URLs, Formal concept analysis, User-interest ontology, Bipartite graph, Web crawler. |
A negative correlation between human carotid atherosclerotic plaque progression and plaque wall stress: in vivo MRI-based 2D/3D FSI models. | It is well accepted that atherosclerosis initiation and progression correlate positively with low and oscillating flow wall shear stresses (FSS). However, this mechanism cannot explain why advanced plaques continue to grow under elevated FSS conditions. In vivo magnetic resonance imaging (MRI)-based 2D/3D multi-component models with fluid-structure interactions (FSI, 3D only) for human carotid atherosclerotic plaques were introduced to quantify correlations between plaque progression measured by wall thickness increase (WTI) and plaque wall (structure) stress (PWS) conditions. A histologically validated multi-contrast MRI protocol was used to acquire multi-year in vivo MRI images. Our results using 2D models (200-700 data points/patient) indicated that 18 out of 21 patients studied showed significant negative correlation between WTI and PWS at time 2 (T2). The 95% confidence interval for the Pearson correlation coefficient is (-0.443,-0.246), p<0.0001. Our 3D FSI model supported the 2D correlation results and further indicated that combining both plaque structure stress and flow shear stress gave better approximation results (PWS, T2: R(2)=0.279; FSS, T1: R(2)=0.276; combining both: R(2)=0.637). These pilot studies suggest that both lower PWS and lower FSS may contribute to continued plaque progression and should be taken into consideration in future investigations of diseases related to atherosclerosis. |
Knowledge, Attitudes and Motivations Among Blood Donors in São Paulo, Brazil | Recruiting safe, volunteer blood donors requires understanding motivations for donating and knowledge and attitudes about HIV. We surveyed 1,600 persons presenting for blood donation at a large blood bank in São Paulo, Brazil using a self-administered, structured questionnaire, and classified motivations into three domains as well as categorizing persons by HIV test-seeking behavior. Motivations, in descending order, and their significant associations were: “altruism”: female gender, volunteer donor and repeat donor status; “direct appeal”: female gender, repeat donor status and age 21–50 years; “self-interest”: male gender, age under 20 years, first-time donor status and lower education. HIV test-seekers were more likely to give incorrect answers regarding HIV risk behavior and blood donation and the ability of antibody testing to detect recent HIV infections. Altruism is the main motivator for blood donation in Brazil; other motivators were associated with specific demographic subgroups. HIV test-seeking might be reduced by educational interventions. |
Risks and Mitigation Measures in Build-Operate-Transfer Projects | Infrastructure investments are important in developing countries; they not only help to foster the economic growth of a nation, but also act as a platform on which new forms of partnership and collaboration can be developed, mainly in East Asian countries. Over the last two decades, many infrastructure projects have been completed through the build-operate-transfer (BOT) type of procurement. The development of BOT has attracted the participation of local and foreign private-sector investors to secure funding and to deliver projects on time, within budget, and to the required specifications. The private sector is preferred by governments in East Asia to participate in BOT projects due to a lack of public funding. The review finds that the private sector, or promoter, of a BOT project is exposed to multiple risks, which are discussed in this paper. Effective risk management methods and good managerial skills are required to ensure the success of the project. The review indicates that mitigation measures should be employed by the promoter throughout the concession period, and support from the host government is also required to ensure the success of the BOT project. Keywords—BOT project, risks management, concessionaire, consortium. |
Do stack traces help developers fix bugs? | A widely shared belief in the software engineering community is that stack traces are much sought after by developers to support them in debugging. But limited empirical evidence is available to confirm the value of stack traces to developers. In this paper, we seek to provide such evidence by conducting an empirical study on the usage of stack traces by developers from the ECLIPSE project. Our results provide strong evidence to this effect and also throw light on some of the patterns in bug fixing using stack traces. We expect the findings of our study to further emphasize the importance of adding stack traces to bug reports and that, in the future, software vendors will provide more support in their products to help general users make such information available when filing bug reports. |
Crowdsourcing, collaboration and creativity | While many organizations turn to human computation labor markets for jobs with black-or-white solutions, there is vast potential in asking these workers for original thought and innovation. |
Object Detection with Active Sample Harvesting | The work presented in this dissertation lies in the domains of image classification, object detection, and machine learning. Whether it is training image classifiers or object detectors, the learning phase consists in finding an optimal boundary between populations of samples. In practice, all the samples are not equally important: some examples are trivially classified and do not bring much to the training, while others close to the boundary or misclassified are the ones that truly matter. Similarly, images where the samples originate from are not all rich in informative samples. However, most training procedures select samples and images uniformly or weigh them equally. The common thread of this dissertation is how to efficiently find the informative samples/images for training. Although we never consider all the possible samples “in the world”, our purpose is to select the samples in a smarter manner, without looking at all the available ones. The framework adopted in this work consists in organising the data (samples or images) in a tree to reflect the statistical regularities of the training samples, by putting “similar” samples in the same branch. Each leaf carries a sample and a weight related to the “importance” of the corresponding sample, and each internal node carries statistics about the weights below. The tree is used to select the next sample/image for training, by applying a sampling policy, and the “importance” weights are updated accordingly, to bias the sampling towards informative samples/images in future iterations. Our experiments show that, in the various applications, properly focusing on informative images or informative samples improves the learning phase by either reaching better performances faster or by reducing the training loss faster. |
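The tree-with-weights bookkeeping described above resembles a standard sum-tree; the sketch below is a generic illustration of that idea (sample a leaf with probability proportional to its weight, then adjust the weight), not the dissertation's exact data structure or sampling policy.

```python
# Generic sum-tree: internal nodes store the sum of the weights below them.
import random

class SumTree:
    def __init__(self, weights):
        self.n = len(weights)
        self.tree = [0.0] * self.n + list(weights)   # leaves stored at indices n..2n-1
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def sample(self):
        """Draw a leaf index with probability proportional to its weight."""
        r = random.uniform(0.0, self.tree[1])
        i = 1
        while i < self.n:                             # descend toward the chosen leaf
            if r <= self.tree[2 * i]:
                i = 2 * i
            else:
                r -= self.tree[2 * i]
                i = 2 * i + 1
        return i - self.n

    def update(self, leaf, weight):
        """Set a leaf's importance weight and refresh the sums above it."""
        i = leaf + self.n
        self.tree[i] = weight
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

tree = SumTree([1.0, 4.0, 2.0, 1.0])   # higher weight = more informative sample
idx = tree.sample()
tree.update(idx, 0.5)                  # e.g. down-weight once the sample proved easy
```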
A deep representation for depth images from synthetic data | Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kinds of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While clearly a proxy for real data, synthetic images allow quality to be traded for quantity, making it possible to generate a virtually infinite amount of data. We show that training the very same architecture typically used on visual data on such a collection yields very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach. |
Mutational landscape and significance across 12 major cancer types | The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment. |
A 24-pulse rectifier cascaded multilevel inverter with minimum number of transformer windings | This paper addresses the design of a medium voltage adjustable speed drive (ASD) utilizing the benefits of cascaded inverter technology while employing a 24-pulse rectifier inspired front end. Rectifier front ends of ASDs introduce a high level of harmonics into the utility grid, creating equipment overheating problems and distorting the voltage waveform near the ASD's connection point. To greatly reduce these harmonics while limiting cost, a cascaded multilevel inverter utilizing a 24-pulse transformer with a minimum number of windings is used. This design reduces the total harmonic current to less than 5% of the full load current over the entire frequency range. Simulation and experimental results are presented to show proof of concept. |
Agile and hackathons: a case study of emergent practices at the FNB codefest | Hackathons and similar innovation contests can accelerate the development of software prototypes to help large corporates such as banks experiment with new technology. These companies may also be adopting Agile in their existing software development practices, and it is worth exploring the usage of Agile principles at such events and whether hackathons can assist overall Agile adoption. FNB is one of the largest and most innovative banks in South Africa and runs an internal hackathon called Codefest to enhance IT innovation in product design and internal operations. The event attracts over 200 internal developers who compete in teams during a 48-hour coding marathon. South African banks, including FNB, are also adopting Agile practices to improve speed and quality in their software development lifecycle. Codefest was not intended to help drive FNB's Agile journey; however, some of its principles and practices were observed to have occurred naturally during the event.
This article explores the emergence of Agile practices at FNB Codefest as observed during publicly broadcast interviews with various participants and stakeholders. The spoken words of the interviewees were analysed for dominant concepts using the values and principles of the Agile Manifesto as a coding framework. The interviews provided practical observations of the environment at Codefest, which was found to encourage certain Agile principles and practices. Adoption of Agile by teams also correlated with their level of success in the Codefest competition; however, more research would be needed to determine whether Codefest accelerated the bank's overall Agile journey.
Three main Agile concepts were found to be naturally cultivated by the environment of Codefest: collaboration, motivation, and elements of technical excellence. Collaboration was observed between IT teams, between business and IT teams, and between business teams, while also creating a model of conditions for how teams could operate during business as usual. Intrinsic motivators such as autonomy, mastery, and purpose were also observed at Codefest, supporting the notion that knowledge-worker motivation is crucial in setting up successful software development teams. Elements of technical excellence correlated to Agile through methodologies such as Extreme Programming or Scrum, while quality practices were enabled by team practices such as communication and planning. Codefest was also mapped to a proposed model of Agile environments, while considerations for such contests and suggestions for next steps are also presented. These include (1) using Codefest to raise awareness of Agile, (2) understanding how extrinsic motivators affect Codefest and Agile, (3) using Codefest participants to share and drive technical excellence, and (4) Agile training before Codefest. |
An ant colony optimization algorithm for the Multiple Traveling Salesmen Problem | The multiple traveling salesmen problem (MTSP) is a generalization of the famous traveling salesman problem (TSP), where more than one salesman is used in the solution. Though the MTSP is a typical computationally complex combinatorial optimization problem, it can be extended to a wide variety of routing and scheduling problems. The paper proposed an ant colony optimization (ACO) algorithm for the MTSP with two objectives: the objective of minimizing the maximum tour length of all the salesmen and the objective of minimizing the maximum tour length of each salesman. In the algorithm, the pheromone trail updating and limits followed the MAX-MIN Ant System (MMAS) scheme, and a local search procedure was used to improve the performance of the algorithm. We compared the results of our algorithm with a genetic algorithm (GA) on benchmark instances from the literature. Computational results show that our algorithm is competitive on both objectives. |
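As a rough illustration of the kind of method described, the sketch below builds tours probabilistically from pheromone and inverse distance and applies a MAX-MIN style update using the best solution found. It optimizes only total tour length for a single shared depot; the parameter values, the round-robin assignment of cities to salesmen, and the omission of local search are our simplifications rather than the paper's settings.

```python
# Simplified MMAS-style ant colony heuristic for a single-depot MTSP (total-length objective).
import math
import random

def aco_mtsp(coords, depot=0, m=3, ants=20, iters=100,
             alpha=1.0, beta=2.0, rho=0.1, tau_min=0.01, tau_max=1.0):
    n = len(coords)
    d = [[math.dist(coords[i], coords[j]) + 1e-9 for j in range(n)] for i in range(n)]
    tau = [[tau_max] * n for _ in range(n)]              # MAX-MIN: start at the upper bound
    best_tours, best_len = None, float("inf")

    def build_solution():
        unvisited = set(range(n)) - {depot}
        tours = [[depot] for _ in range(m)]
        while unvisited:
            for tour in tours:                           # one city per salesman, in turn
                if not unvisited:
                    break
                cur = tour[-1]
                cand = list(unvisited)
                w = [(tau[cur][j] ** alpha) * ((1.0 / d[cur][j]) ** beta) for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
        return [tour + [depot] for tour in tours]        # every salesman returns to the depot

    def total_length(tours):
        return sum(d[t[i]][t[i + 1]] for t in tours for i in range(len(t) - 1))

    for _ in range(iters):
        solutions = [build_solution() for _ in range(ants)]
        it_best = min(solutions, key=total_length)
        if total_length(it_best) < best_len:
            best_tours, best_len = it_best, total_length(it_best)
        # evaporation plus deposit on the best-so-far solution, clamped to [tau_min, tau_max]
        for i in range(n):
            for j in range(n):
                tau[i][j] = max(tau_min, (1 - rho) * tau[i][j])
        for tour in best_tours:
            for i in range(len(tour) - 1):
                a, b = tour[i], tour[i + 1]
                tau[a][b] = tau[b][a] = min(tau_max, tau[a][b] + 1.0 / best_len)
    return best_tours, best_len

cities = [(random.random() * 100, random.random() * 100) for _ in range(25)]
print(aco_mtsp(cities, m=3))
```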
A randomized trial of daclatasvir with peginterferon alfa-2b and ribavirin for HCV genotype 1 infection. | BACKGROUND
Daclatasvir-containing regimens have the potential to address limitations of current regimens combining peginterferon alfa and ribavirin with first-generation protease inhibitors for treatment of chronic HCV genotype 1 infection.
METHODS
In this randomized, double-blind study, 27 Japanese treatment-naive patients received once-daily daclatasvir 10 mg or 60 mg or placebo, each combined with peginterferon alfa-2b/ribavirin; 18 prior null (n=9) or partial (n=9) responders received the same daclatasvir-containing regimens without a placebo arm. Daclatasvir recipients with protocol-defined response (HCV RNA<15 IU/ml at week 4, undetectable at week 12) were treated for 24 weeks; those without protocol-defined response and placebo recipients continued treatment to week 48.
RESULTS
Sustained virological response 24 weeks post-treatment (SVR24) was achieved by 66.7%, 90.0% and 62.5% of treatment-naive patients in the daclatasvir 10 mg, 60 mg and placebo groups, respectively. Prior non-responders had more frequent virological failure; 22.2% and 33.3% of daclatasvir 10 mg and 60 mg recipients, respectively, achieved SVR24. Adverse events were similar across groups and were typical of peginterferon alfa-2b/ribavirin. Pyrexia, headache, alopecia, decreased appetite and malaise were the most common adverse events; two daclatasvir recipients discontinued due to adverse events.
CONCLUSIONS
Daclatasvir 60 mg combined with peginterferon alfa-2b and ribavirin achieved a high rate of SVR24 in treatment-naive patients with HCV genotype 1 infection, with tolerability similar to that of peginterferon alfa-2b/ribavirin alone. However, regimens with greater antiviral potency are needed for prior non-responders. |
Microtubule Destabilization Is Shared by Genetic and Idiopathic Parkinson’s Disease Patient Fibroblasts | Data from both toxin-based and gene-based models suggest that dysfunction of the microtubule system contributes to the pathogenesis of Parkinson's disease, even if, at present, no evidence of alterations of microtubules in vivo or in patients is available. Here we analyze cytoskeleton organization in primary fibroblasts deriving from patients with idiopathic or genetic Parkinson's disease, focusing on mutations in parkin and leucine-rich repeat kinase 2. Our analyses reveal that genetic and likely idiopathic pathology affects cytoskeletal organization and stability, without any activation of autophagy or apoptosis. All parkinsonian fibroblasts have a reduced microtubule mass, represented by a higher fraction of unpolymerized tubulin in respect to control cells, and display significant changes in microtubule stability-related signaling pathways. Furthermore, we show that the reduction of microtubule mass is so closely related to the alteration of cell morphology and behavior that both pharmacological treatment with microtubule-targeted drugs, and genetic approaches, by transfecting the wild type parkin or leucine-rich repeat kinase 2, restore the proper microtubule stability and are able to rescue cell architecture. Taken together, our results suggest that microtubule destabilization is a point of convergence of genetic and idiopathic forms of parkinsonism and highlight, for the first time, that microtubule dysfunction occurs in patients and not only in experimental models of Parkinson's disease. Therefore, these data contribute to the knowledge on molecular and cellular events underlying Parkinson's disease and, revealing that correction of microtubule defects restores control phenotype, may offer a new therapeutic target for the management of the disease. |
Non-local sparse models for image restoration | We propose in this paper to unify two different approaches to image restoration: On the one hand, learning a basis set (dictionary) adapted to sparse signal descriptions has proven to be very effective in image reconstruction and classification tasks. On the other hand, explicitly exploiting the self-similarities of natural images has led to the successful non-local means approach to image restoration. We propose simultaneous sparse coding as a framework for combining these two approaches in a natural manner. This is achieved by jointly decomposing groups of similar signals on subsets of the learned dictionary. Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost. |
Using Phase Instead of Optical Flow for Action Recognition | Currently, the most common motion representation for action recognition is optical flow. Optical flow is based on particle tracking, which adheres to a Lagrangian perspective on dynamics. In contrast to the Lagrangian perspective, the Eulerian model of dynamics does not track, but describes local changes. For video, an Eulerian phase-based motion representation, using complex steerable filters, has been successfully employed recently for motion magnification and video frame interpolation. Inspired by these previous works, we propose learning Eulerian motion representations in a deep architecture for action recognition. We learn filters in the complex domain in an end-to-end manner. We design these complex filters to resemble complex Gabor filters, typically employed for phase-information extraction. We propose a phase-information extraction module, based on these complex filters, that can be used in any network architecture for extracting Eulerian representations. We experimentally analyze the added value of Eulerian motion representations, as extracted by our proposed phase extraction module, and compare with existing motion representations based on optical flow, on the UCF101 dataset. |
Precision Measurements of the Nucleon Strange Form Factors at Q² ∼ 0.1 GeV² | We report new measurements of the parity-violating asymmetry A_PV in elastic scattering of 3 GeV electrons off hydrogen and ⁴He targets with θ_lab ≈ 6.0°. The ⁴He result is A_PV = (+6.40 ± 0.23(stat) ± 0.12(syst)) × 10⁻⁶. The hydrogen result is A_PV = (-1.58 ± 0.12(stat) ± 0.04(syst)) × 10⁻⁶. These results significantly improve constraints on the electric and magnetic strange form factors G_E and G_M. We extract G_E = 0.002 ± 0.014 ± 0.007 at Q² = 0.077 GeV², and G_E + 0.09 G_M = 0.007 ± 0.011 ± 0.006 at Q² = 0.109 GeV², providing new limits on the role of strange quarks in the nucleon charge and magnetization distributions. |
Cooperation, Fast and Slow: Meta-Analytic Evidence for a Theory of Social Heuristics and Self-Interested Deliberation. | Does cooperating require the inhibition of selfish urges? Or does "rational" self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games (total N = 17,647; no indication of publication bias using Egger's test, Begg's test, or p-curve). My meta-analysis was guided by the social heuristics hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one's payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one's actions, such that cooperating is not in one's self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one's payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted over deliberation, but no significant difference in strategic cooperation between more intuitive and more deliberative conditions. |
The Emotional State of Technology Acceptance | Computer-phobic university students are easy to find today, especially when it comes to taking online courses. Affect has been shown to influence users’ perceptions of computers. Although self-reported computer anxiety has declined in the past decade, it continues to be a significant issue in higher education and online courses. More importantly, anxiety seems to be a critical variable in relation to student perceptions of online courses. A substantial amount of work has been done on computer anxiety and affect. In fact, the technology acceptance model (TAM) has been extensively used for such studies where affect and anxiety were considered as antecedents to perceived ease of use. However, few, if any, have investigated the interplay between the two constructs as they influence perceived ease of use and perceived usefulness towards using online systems for learning. In this study, the effects of affect and anxiety (together and alone) on perceptions of an online learning system are investigated. Results demonstrate the interplay that exists between affect and anxiety and their moderating roles on perceived ease of use and perceived usefulness. Interestingly, the results seem to suggest that affect and anxiety may exist simultaneously as two weights on each side of the TAM scale. |
Piggybacking Codes for Network Coding: The High/Low SNR Regime | We propose a Piggybacking scheme for Network Coding in which strong source inputs piggyback the weaker ones. This scheme is necessary and sufficient to achieve the cut-set upper bound in the high/low-SNR regime, a new asymptotically optimal operational regime for multihop Amplify-and-Forward (AF) networks. |
Vision-based Deep Reinforcement Learning | Recently, Google DeepMind showcased how deep learning can be used in conjunction with existing Reinforcement Learning (RL) techniques to play Atari games [11], beat a world-class player in the game of Go [14], and solve complicated riddles [3]. Deep learning has been shown to be successful in extracting useful, nonlinear features from high-dimensional media such as images, text, video, and audio [12]. For control tasks like playing games, researchers have traditionally used handcrafted features, which require a lot of human effort. Convolutional neural networks (CNNs) can be used to extract useful visual features from high-dimensional input, such as a video game screen, to learn a near-optimal value function in such control tasks. We chose the Arcade Learning Environment as our testbed to examine how deep learning can be applied to a reinforcement learning setting and to analyze the performance of various modern algorithms. The learning is performed end-to-end with no game knowledge included in the architecture or during training. We also show how the architecture of the deep network used can be modified to achieve superior performance, and compare performances on the task of playing the complicated game Seaquest. |
Video forgery detection using correlation of noise residue | We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection. |
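The classification step described above can be sketched as follows: fit a two-component Gaussian mixture to block-level correlation values and take the crossing point of the weighted component densities as the Bayesian decision threshold. The correlation values here are synthetic stand-ins, and the simple EM fit below replaces the paper's specific two-step parameter estimation.

```python
# Two-component GMM over noise-residue correlations + Bayesian threshold (placeholder data).
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
corr = np.concatenate([rng.normal(0.9, 0.05, 800),    # untouched blocks: high temporal correlation
                       rng.normal(0.4, 0.10, 200)])   # inpainted blocks: lower correlation

gmm = GaussianMixture(n_components=2, random_state=0).fit(corr.reshape(-1, 1))
means = gmm.means_.ravel()
stds = np.sqrt(gmm.covariances_.ravel())
weights = gmm.weights_

# Bayesian threshold: the point between the two means where the weighted densities meet.
grid = np.linspace(means.min(), means.max(), 10_000)
density = [w * norm.pdf(grid, mu, sd) for w, mu, sd in zip(weights, means, stds)]
threshold = grid[np.argmin(np.abs(density[0] - density[1]))]

is_forged = corr < threshold                           # the lower-correlation component
print(f"threshold = {threshold:.3f}, blocks flagged as forged: {int(is_forged.sum())}")
```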
FPGA-Based Accelerators of Deep Learning Networks for Learning and Classification: A Review | Due to recent advances in digital technologies and the availability of credible data, an area of artificial intelligence, deep learning, has emerged and has demonstrated its ability and effectiveness in solving complex learning problems not possible before. In particular, convolutional neural networks (CNNs) have demonstrated their effectiveness in image detection and recognition applications. However, they require intensive CPU operations and memory bandwidth that make general CPUs fail to achieve the desired performance levels. Consequently, hardware accelerators that use application-specific integrated circuits, field-programmable gate arrays (FPGAs), and graphics processing units have been employed to improve the throughput of CNNs. More precisely, FPGAs have recently been adopted for accelerating the implementation of deep learning networks due to their ability to maximize parallelism and their energy efficiency. In this paper, we review recent techniques for accelerating deep learning networks on FPGAs. We highlight the key features employed by the various techniques for improving acceleration performance. In addition, we provide recommendations for enhancing the utilization of FPGAs for CNN acceleration. The techniques investigated in this paper represent the recent trends in FPGA-based accelerators of deep learning networks. Thus, this paper is expected to direct future advances in efficient hardware accelerators and to be useful for deep learning researchers. |
Points2Pix: 3D Point-Cloud to Image Translation using conditional Generative Adversarial Networks | We present the first approach for 3D point-cloud to image translation based on conditional Generative Adversarial Networks (cGAN). The model handles multi-modal information sources from different domains, i.e. raw point-sets and images. The generator is capable of processing three conditions, where the point-cloud is encoded as a raw point-set and a camera projection. An image background patch is used as a constraint to bias environmental texturing. A global approximation function within the generator (PointNet) is applied directly on the point-cloud. Hence, the representation learning model incorporates global 3D characteristics directly in the latent feature space. Conditions are used to bias the background and the viewpoint of the generated image. This opens up new ways of augmenting or texturing 3D data, aiming at the generation of fully individual images. We successfully evaluated our method on the KITTI and SunRGBD datasets with an outstanding object detection inception score. |
Government intervention: Source or scourge of monetary order? | Charles Kindleberger argues that most, if not all, financial manias, panics, and crashes were market failures deriving from the irrational behavior of human actors. Both his notion of rationality and his interpretation of the sources of financial crises are open to question. A broader notion of rationality enables us to distinguish actual crises from cases of fraud or entrepreneurial error, and a closer look at financial history illustrates the ways in which government regulation, not human irrationality, has been the source of financial disorder. |
Diagnosability of discrete-event systems | Fault detection and isolation is a crucial and challenging task in the automatic control of large complex systems. We propose a discrete-event system (DES) approach to the problem of failure diagnosis. We introduce two related notions of diagnosability of DES’s in the framework of formal languages and compare diagnosability with the related notions of observability and invertibility. We present a systematic procedure for detection and isolation of failure events using diagnosers and provide necessary and sufficient conditions for a language to be diagnosable. The diagnoser performs diagnostics using online observations of the system behavior; it is also used to state and verify off-line the necessary and sufficient conditions for diagnosability. These conditions are stated on the diagnoser or variations thereof. The approach to failure diagnosis presented in this paper is applicable to systems that fall naturally in the class of DES’s; moreover, for the purpose of diagnosis, most continuous variable dynamic systems can be viewed as DES’s at a higher level of abstraction. In a companion paper [20], we provide a methodology for building DES models for the purpose of failure diagnosis and present applications of the theory developed in this paper. |
Chest diseases diagnosis using artificial neural networks | Chronic obstructive pulmonary disease, pneumonia, asthma, tuberculosis, and lung cancer are among the most important chest diseases, and they are significant health problems worldwide. In this study, a comparative chest disease diagnosis was carried out using multilayer, probabilistic, learning vector quantization, and generalized regression neural networks. The chest diseases dataset was prepared using patients’ epicrisis reports from a chest diseases hospital’s database. |
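As an illustration of the comparison described in the abstract above, the sketch below trains one of the mentioned network types (a multilayer perceptron) with scikit-learn on a hypothetical feature matrix standing in for features extracted from epicrisis reports; the actual dataset and the probabilistic, LVQ and GRNN variants are not shown.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 38 symptom/laboratory features, 6 diagnosis classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 38))
y = rng.integers(0, 6, size=600)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Multilayer network with one hidden layer as a stand-in for the study's MLP.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))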
Management of hematocolpos in adolescents with transverse vaginal septum | The aim of this study was to underline the significance of premenarcheal gynecological examination in patients with transverse vaginal septum, which can be complicated by endometriosis. Retrospective study covering the period between January 2008 and December 2010, Second Department of Obstetrics and Gynecology. We searched our databases for cases of hematocolpos caused by transverse vaginal septum. Among the patients who presented with hematocolpos, we identified 4 cases caused by transverse vaginal septum and present their management regarding diagnosis, differential diagnosis, and treatment. The mean age of the patients was 13.1 years. All patients presented in our department with hypogastric abdominal pain and hematocolpos. No problems in adrenarche or thelarche were mentioned. Ultrasound and MRI revealed a cystic mass in the upper part of the vagina (hematocolpos) varying from 42 × 26 × 30 to 73 × 55 × 32 mm. Three of the patients had an upper transverse vaginal septum, while one had a middle transverse vaginal septum. Only one patient had a concomitant anomaly of the urinary system (ectopic kidney). After laparoscopic examination, 3 of the 4 patients had findings of endometriosis (two with stage I, minimal, endometriosis and one with stage II, mild, endometriosis). Physicians should be aware of transverse vaginal septum in the differential diagnosis of hematocolpos with abdominal pain and primary amenorrhea in the early adolescent years. Early diagnosis could be based on premenarcheal gynecological examination and could lead to correct management in order to avoid the complications of endometriosis (dysmenorrhea or infertility). |
Online Streaming Feature Selection Based on Conditional Information Entropy | Online streaming feature selection (OSFS) algorithms, which produce an approximately optimal subset from the features seen so far in real time, are capable of addressing feature selection in extremely large or even infinite-dimensional spaces. Several algorithms have been proposed that operate in an OSFS manner. However, some of them need prior knowledge about the entire feature space, which is inaccessible in a real OSFS scenario. Besides, their results are sensitive to the permutation of features. In this paper, we first propose an OSFS framework based on uncertainty measures from rough set theory. The framework needs no additional information beyond the given data. Moreover, a sorting mechanism is adopted in the framework, which makes it stable with respect to the order of features. Then, specifying the uncertainty measure as conditional information entropy (CIE), we design an algorithm named CIE-OSFS based on the framework. Comprehensive experiments are conducted to verify the effectiveness of our method on several high-dimensional benchmark data sets. The experimental results indicate that CIE-OSFS achieves more compactness while guaranteeing predictive accuracy, and performs more stably under changes in feature order than other algorithms in most cases. |
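As a small sketch of the uncertainty measure named in the abstract above, the function below computes the conditional information entropy H(D | B) of a decision attribute D given a candidate feature subset B over the equivalence classes induced by B, assuming pandas; the streaming selection and sorting machinery of CIE-OSFS itself is not reproduced.

import numpy as np
import pandas as pd

def conditional_entropy(df: pd.DataFrame, features, decision: str) -> float:
    """H(D | B) = - sum_X p(X) sum_Y p(Y | X) log2 p(Y | X), where X ranges over
    the equivalence classes induced by the feature subset B and Y over the
    decision classes."""
    n = len(df)
    h = 0.0
    for _, block in df.groupby(list(features)):
        p_x = len(block) / n
        p_y_given_x = block[decision].value_counts(normalize=True).to_numpy()
        h -= p_x * np.sum(p_y_given_x * np.log2(p_y_given_x))
    return h

# Toy usage: {a, b} fully determines d, so H(d | {a, b}) = 0 while H(d | {a}) = 1.
data = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1], "d": [0, 1, 1, 0]})
print(conditional_entropy(data, ["a", "b"], "d"))  # 0.0
print(conditional_entropy(data, ["a"], "d"))       # 1.0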
Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks | Semantic representations have long been argued to be potentially useful for enforcing meaning preservation and improving the generalization performance of machine translation methods. In this work, we are the first to incorporate information about the predicate-argument structure of source sentences (namely, semantic-role representations) into neural machine translation. We use Graph Convolutional Networks (GCNs) to inject a semantic bias into sentence encoders and achieve improvements in BLEU scores over the linguistics-agnostic and syntax-aware versions on the English–German language pair. |
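The sketch below shows a single graph-convolutional layer of the kind that can be injected into a sentence encoder, assuming PyTorch: each token state is updated from its neighbours in a semantic-role graph via a row-normalized adjacency matrix with self-loops. The edge-label-specific weights and gating typically used in syntax- and semantics-aware NMT encoders are omitted for brevity.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN step: h' = ReLU(A_norm @ h @ W), with self-loops added."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (batch, n_tokens, dim) encoder states
        # adj: (batch, n_tokens, n_tokens) 0/1 semantic-role graph
        a = adj + torch.eye(adj.size(-1), device=adj.device)  # add self-loops
        a = a / a.sum(dim=-1, keepdim=True)                   # row-normalize
        return torch.relu(torch.bmm(a, self.linear(h)))

# Toy usage: 5 tokens with 64-dimensional states and one predicate-argument edge.
h = torch.randn(1, 5, 64)
adj = torch.zeros(1, 5, 5)
adj[0, 1, 3] = adj[0, 3, 1] = 1.0
out = GCNLayer(64)(h, adj)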
Deep Feature Compression for Collaborative Object Detection | Recent studies have shown that the efficiency of deep neural networks in mobile applications can be significantly improved by distributing the computational workload between the mobile device and the cloud. This paradigm, termed collaborative intelligence, involves communicating feature data between the mobile and the cloud. The efficiency of such an approach can be further improved by lossy compression of feature data, which has not been examined to date. In this work, we focus on collaborative object detection and study the impact of both near-lossless and lossy compression of feature data on its accuracy. We also propose a strategy for improving accuracy under lossy feature compression. Experiments indicate that, using this strategy, the communication overhead can be reduced by up to 70% without sacrificing accuracy. |
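The snippet below is an illustrative sketch, using NumPy, of the kind of lossy feature compression studied in the abstract above: an intermediate feature tensor produced on the mobile side is uniformly quantized to n bits before transmission and dequantized on the cloud side. The paper's specific codec and accuracy-preserving strategy are not reproduced.

import numpy as np

def quantize(features: np.ndarray, n_bits: int = 8):
    """Uniform min-max quantization of a feature tensor to n_bits integer levels."""
    lo, hi = float(features.min()), float(features.max())
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((features - lo) / scale).astype(np.uint16)
    return q, lo, scale

def dequantize(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale + lo

# Toy usage: an intermediate activation tensor (channels x height x width).
feat = np.random.default_rng(0).normal(size=(256, 14, 14)).astype(np.float32)
q, lo, scale = quantize(feat, n_bits=8)
recon = dequantize(q, lo, scale)
print("max abs error:", np.abs(feat - recon).max())  # bounded by scale / 2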
News in an online world: The need for an "automatic crap detector" | Widespread adoption of internet technologies has changed the way that news is created and consumed. The current online news environment is one that incentivizes speed and spectacle in reporting, at the cost of fact-checking and verification. The line between user generated content and traditional news has also become increasingly blurred. This poster reviews some of the professional and cultural issues surrounding online news and argues for a two-pronged approach inspired by Hemingway’s “automatic crap detector” (Manning, 1965) in order to address these problems: a) proactive public engagement by educators, librarians, and information specialists to promote digital literacy practices; b) the development of automated tools and technologies to assist journalists in vetting, verifying, and fact-checking, and to assist news readers by filtering and flagging dubious information. |
A Survey on Feature Selection Algorithms | One major component of machine learning is feature analysis, which comprises two main processes: feature selection and feature extraction. Due to its applications in several areas, including data mining, soft computing and big data analysis, feature selection has gained considerable importance. This paper presents an introductory treatment of feature selection and its various inherent approaches. The paper surveys historic developments in feature selection with supervised and unsupervised methods. Recent state-of-the-art developments in ongoing feature selection algorithms, including their hybridizations, are also summarized. |
Privacy, accuracy, and consistency too: a holistic solution to contingency table release | The contingency table is a workhorse of official statistics, the format of reported data for the US Census, Bureau of Labor Statistics, and the Internal Revenue Service. In many settings such as these, privacy is not only ethically mandated, but frequently legally mandated as well. Consequently, there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release. However, all current techniques for reporting contingency tables fall short on at least one of privacy, accuracy, and consistency (among multiple released tables). We propose a solution that provides strong guarantees for all three desiderata simultaneously.
Our approach can be viewed as a special case of a more general approach for producing synthetic data: any privacy-preserving mechanism for contingency table release begins with raw data and produces a (possibly inconsistent) privacy-preserving set of marginals. From these tables alone, and hence without weakening privacy, we find and output the "nearest" consistent set of marginals. Interestingly, this set is no farther away than the tables of the raw data, and consequently the additional error introduced by the imposition of consistency is no more than the error introduced by the privacy mechanism itself.
The privacy mechanism of [20] gives the strongest known privacy guarantees, with very little error. Combined with the techniques of the current paper, we therefore obtain excellent privacy, accuracy, and consistency among the tables. Moreover, our techniques are surprisingly efficient. Our techniques apply equally well to the logical cousin of the contingency table, the OLAP cube. |
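As a toy illustration of the consistency step described above, assuming NumPy and a single two-way table: each marginal is perturbed independently (here with Laplace noise standing in for the privacy mechanism of [20]), so the released marginals no longer share a grand total, and the nearest consistent pair in the least-squares sense is then recovered in closed form. The paper's general mechanism over many overlapping marginals is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
table = rng.integers(0, 50, size=(4, 6))   # raw two-way contingency table

# Privacy step (stand-in): release independently perturbed marginals.
noisy_rows = table.sum(axis=1) + rng.laplace(scale=2.0, size=4)
noisy_cols = table.sum(axis=0) + rng.laplace(scale=2.0, size=6)

# Consistency step: least-squares projection onto {sum(rows) == sum(cols)}.
# Minimizing ||r - noisy_rows||^2 + ||c - noisy_cols||^2 subject to 1'r = 1'c
# gives r = noisy_rows - nu and c = noisy_cols + nu with the multiplier below.
nu = (noisy_rows.sum() - noisy_cols.sum()) / (noisy_rows.size + noisy_cols.size)
rows, cols = noisy_rows - nu, noisy_cols + nu
print("consistent grand totals:", rows.sum(), cols.sum())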
Religious meaning and subjective well-being in late life. | OBJECTIVES
The purpose of this study is to examine the relationship between religious meaning and subjective well-being. A major emphasis is placed on assessing race differences in the relationship between these constructs.
METHODS
Interviews were conducted with a nationwide sample of older White and older Black adults. Survey items were administered to assess a sense of meaning in life that is derived specifically from religion. Subjective well-being was measured with indices of life satisfaction, self-esteem, and optimism.
RESULTS
The findings suggest that older adults who derive a sense of meaning in life from religion tend to have higher levels of life satisfaction, self-esteem, and optimism. The data further reveal that older Black adults are more likely to find meaning in religion than older White adults. In addition, the relationships among religious meaning, life satisfaction, self-esteem, and optimism tend to be stronger for older African American persons than for older White persons.
DISCUSSION
Researchers have argued for some time that religion may be an important source of resilience for older Black adults, but it is not clear how these beneficial effects arise. The data from this study suggest that religious meaning may be an important factor. |
Health benefits of fruit and vegetables are from additive and synergistic combinations of phytochemicals. | Cardiovascular disease and cancer are ranked as the first and second leading causes of death in the United States and in most industrialized countries. Regular consumption of fruit and vegetables is associated with reduced risks of cancer, cardiovascular disease, stroke, Alzheimer disease, cataracts, and some of the functional declines associated with aging. Prevention is a more effective strategy than is treatment of chronic diseases. Functional foods that contain significant amounts of bioactive components may provide desirable health benefits beyond basic nutrition and play important roles in the prevention of chronic diseases. The key question is whether a purified phytochemical has the same health benefit as does the whole food or mixture of foods in which the phytochemical is present. Our group found, for example, that the vitamin C in apples with skin accounts for only 0.4% of the total antioxidant activity, suggesting that most of the antioxidant activity of fruit and vegetables may come from phenolics and flavonoids in apples. We propose that the additive and synergistic effects of phytochemicals in fruit and vegetables are responsible for their potent antioxidant and anticancer activities, and that the benefit of a diet rich in fruit and vegetables is attributed to the complex mixture of phytochemicals present in whole foods. |
Polymorphisms in the Hsp 70 gene locus are genetically associated with systemic lupus erythematosus | Ann Rheum Dis 2010;69:1983–1989. doi:10.1136/ard.2009.12263 |