Who Is This "We"? Levels of Collective Identity and Self Representations
Cross-cultural perspectives have brought renewed interest in the social aspects of the self and the extent to which individuals define themselves in terms of their relationships to others and to social groups. This article provides a conceptual review of research and theory of the social self, arguing that the personal, relational, and collective levels of self-definition represent distinct forms of self-representation with different origins, sources of self-worth, and social motivations. A set of 3 experiments illustrates how priming of the interpersonal or collective "we" can alter spontaneous judgments of similarity and self-descriptions.
Some New Parallel Mechanisms Containing the Planar Four-Bar Parallelogram
A parallelogram allows the output link to remain at a fixed orientation with respect to an input link, which gives it a unique role in the design of parallel mechanisms. In this paper, this unique role of the parallelogram is fully exploited to design some new parallel mechanisms with two to six degrees of freedom (DoFs). Among these mechanisms, some with three DoFs possess very high rotational capability and some with two DoFs provide the translational output of a rigid body. Moreover, the design concept is also applied, for the first time, to improve the rotational capability of some existing parallel mechanisms. The parallel mechanisms proposed in this paper have wide applications in industrial robots, simulators, micromanipulators, parallel kinematics machines, and other manipulation devices in which high rotational capability and stiffness are needed. In particular, the paper provides new concepts for the design of novel parallel mechanisms and for the improvement of rotational capability in such systems. KEY WORDS—parallel mechanism, degrees of freedom, rotational capability, mechanical design, parallelogram
The study of evasion of packed PE from static detection
Static detection of packed portable executables (PEs) relies primarily on structural properties of the packed PE, and also on anomalies in the packed PE caused by packing tools. This paper outlines weaknesses in this method of detection. We show that these structural properties and anomalies are contingent features of the executable, and can be more or less easily modified to evade static detection.
Regional and seasonal influence on patients’ toxicity to adjuvant chemotherapy for early breast cancer
Results from multinational clinical trials are globally adopted into routine clinical practice in most countries. Changes in the natural history and incidence of certain diseases, as well as in drug toxicities, related to yearly seasons have been reported; however, variations related to climate have never been described. In our study, we assessed whether yearly seasons and climate could influence the chemotherapy toxicity profile. We analyzed the toxicities recorded in the phase III GEICAM 9906 study, which was run in geographically and climatically/seasonally different regions of Spain. In this trial, 1246 patients were randomized and eligible to receive FEC90 ×6 cycles or FEC90 ×4 cycles followed by eight doses of weekly paclitaxel (T). The results showed differences in hematological and non-hematological toxicities in relation to the season of the year and the climate of the area in which the treatment was administered. We found higher hematological toxicity in warm seasons (spring and summer) and in Oceanic climate regions (Neutropenia G4: 7.8 vs. 1.0 vs. 1.0%, P < 0.0001). Asthenia was more frequent in the summer period (FEC90: 21.1%, T: 15.3%) as well as in the Mediterranean areas (FEC: 28%, T: 27.2%). We also observed that liver transaminase elevations were more frequent in the summer and in the Oceanic areas. Myalgias and sensory neuropathy secondary to paclitaxel were recorded more frequently during autumn. Climate should be considered a significant variable in toxicity to chemotherapy.
The ALSFRS-R: a revised ALS functional rating scale that incorporates assessments of respiratory function
The ALS Functional Rating Scale (ALSFRS) is a validated rating instrument for monitoring the progression of disability in patients with amyotrophic lateral sclerosis (ALS). One weakness of the ALSFRS as originally designed was that it granted disproportionate weighting to limb and bulbar, as compared to respiratory, dysfunction. We have now validated a revised version of the ALSFRS, which incorporates additional assessments of dyspnea, orthopnea, and the need for ventilatory support. The Revised ALSFRS (ALSFRS-R) retains the properties of the original scale and shows strong internal consistency and construct validity. ALSFRS-R scores correlate significantly with quality of life as measured by the Sickness Impact Profile, indicating that the quality of function is a strong determinant of quality of life in ALS.
Perceived Organizational Support, Job Satisfaction, Task Performance and Organizational Citizenship Behavior in China
We examined the relationships of perceived organizational support and job satisfaction with organizational citizenship behavior and task performance in China. Employees from two large-scale state-owned enterprises (SOE) completed measures of perceived organizational support and job satisfaction and their immediate supervisors completed measures of task performance and four facets of organizational citizenship behavior. Data analyzed using zero-order correlation and hierarchical regression analysis showed positive correlations of perceived organizational support and job satisfaction with task performance, and also showed positive associations of perceived organizational support and job satisfaction with organizational citizenship behavior and each of its four dimensions.
Through the looking glass - why no wonderland? Computer applications in architecture in the USA
One objective of the paper is to survey computer-aided design activities in the USA as they apply to architectural design. Normally, such reviews limit their focus to software, and possibly hardware, developments. However, the economic and institutional environments of a country largely determine the rate and direction of research and its application, and this is especially true of computer-aided design. Thus, a second objective of the paper is to place the current efforts in the USA within this wider context. A survey is attempted, not only of the innovative research and applications of computers in architectural design, but also of the context in which these developments have taken place. Also surveyed are the changes in the environment for computer-aided architectural design that have been proposed or are taking place.
An experimental comparison of classification algorithms for imbalanced credit scoring data sets
In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least-squares support vector machines and random forests for loan default prediction. Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman’s statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques. The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers.
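As a hedged illustration of the experimental design described above (not the paper's code or data), the following Python sketch progressively undersamples the "defaulter" class of a synthetic data set and compares two classifiers by AUC; all data, parameters and names are assumptions.

```python
# Illustrative sketch only: compare classifiers by AUC as class imbalance grows.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)   # y == 1 plays the role of "default"

for keep_frac in (1.0, 0.5, 0.25, 0.1):      # progressively undersample defaulters
    pos_idx = np.where(y == 1)[0]
    keep_pos = rng.choice(pos_idx, size=int(len(pos_idx) * keep_frac), replace=False)
    idx = np.concatenate([np.where(y == 0)[0], keep_pos])
    Xs, ys = X[idx], y[idx]
    X_tr, X_te, y_tr, y_te = train_test_split(Xs, ys, stratify=ys, random_state=0)
    for name, clf in [("logit", LogisticRegression(max_iter=1000)),
                      ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
        auc = roc_auc_score(y_te, clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        print(f"default rate {ys.mean():.2%}  {name:14s} AUC = {auc:.3f}")
```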
Asynchronous MPC with a strict honest majority using non-equivocation
Multiparty computation (MPC) among n parties can tolerate up to t < n/2 active corruptions in a synchronous communication setting; however, in an asynchronous communication setting, the resiliency bound decreases to only t < n/3 active corruptions. We improve the resiliency bound for asynchronous MPC (AMPC) to match synchronous MPC using non-equivocation. Non-equivocation is a message authentication mechanism to restrict a corrupted sender from making conflicting statements to different (honest) parties. It can be implemented using an increment-only counter and a digital signature oracle, realizable with trusted hardware modules readily available in commodity computers and smartphone devices. A non-equivocation mechanism can also be transferable and allows a receiver to verifiably transfer the authenticated statement to other parties. In this work, using transferable non-equivocation, we present an AMPC protocol tolerating t < n/2 faults. From a practical point of view, our AMPC protocol requires fewer setup assumptions than the previous AMPC protocol with t < n/2 by Beerliova-Trubiniova, Hirt and Nielsen [PODC 2010]: unlike their AMPC protocol, it does not require any synchronous broadcast round at the beginning of the protocol and avoids the threshold homomorphic encryption setup assumption. Moreover, our AMPC protocol is also efficient and provides a gain of Θ(n) in the communication complexity per multiplication gate, over the AMPC protocol of Beerliova-Trubiniova et al. In the process, using non-equivocation, we also define the first asynchronous verifiable secret sharing (AVSS) scheme with t < n/2, which is of independent interest to threshold cryptography.
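To make the non-equivocation primitive concrete, here is a hedged, toy Python simulation of an increment-only counter plus signing oracle; it is not the paper's construction, and for simplicity it uses HMAC (a real deployment would use a public-key signature so a third party can verify a transferred attestation without any shared secret).

```python
# Toy simulation of a transferable non-equivocation primitive (illustrative only).
# A trusted, increment-only counter "inside hardware" attests (sender, counter, message);
# because the counter never repeats, a corrupted sender cannot attest two different
# messages for the same sequence number, and a receiver can forward the attestation.
import hashlib, hmac

class TrustedCounterModule:
    """Stands in for the hardware counter + signing oracle assumed in the abstract."""
    def __init__(self, sender_id: str, secret: bytes):
        self.sender_id, self._secret, self._counter = sender_id, secret, 0

    def attest(self, message: bytes):
        self._counter += 1                       # increment-only: a value is never reused
        payload = f"{self.sender_id}|{self._counter}|".encode() + message
        tag = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return (self.sender_id, self._counter, message, tag)

def verify(attestation, secret: bytes) -> bool:
    sender_id, counter, message, tag = attestation
    payload = f"{sender_id}|{counter}|".encode() + message
    return hmac.compare_digest(tag, hmac.new(secret, payload, hashlib.sha256).hexdigest())

# Usage: the receiver checks the attestation and may transfer it onward; two different
# messages can never carry the same counter value from the same module.
module = TrustedCounterModule("P1", secret=b"demo-key")
att = module.attest(b"share for gate 17")
assert verify(att, b"demo-key")
```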
A bioinspired soft actuated material.
A class of soft actuated materials that can achieve lifelike motion is presented. By embedding pneumatic actuators in a soft material inspired by a biological muscle fibril architecture, and developing a simple finite element simulation of the same, tunable biomimetic motion can be achieved with fully soft structures, exemplified here by an active left ventricle simulator.
Correlation of p53 status with outcome of neoadjuvant chemotherapy using paclitaxel and doxorubicin in stage IIIB breast cancer.
BACKGROUND The role of p53 in modulating apoptosis has suggested that it may affect the efficacy of anticancer agents. We prospectively evaluated p53 alterations in 73 patients with locally advanced breast cancer (IIIB) who underwent neoadjuvant chemotherapy. PATIENTS AND METHODS Patients received three cycles of paclitaxel (175 mg/m2) and doxorubicin (60 mg/m2) every 21 days. Tumor sections were analyzed before treatment for altered patterns of p53 expression using immunohistochemistry and DNA sequencing. RESULTS An overall response rate of 83.5% was obtained, including 15.1% complete pathological responses. The regimen was well tolerated, with 17.7% grade 2/3 nausea and 12.8% grade 3/4 leukopenia. There was a statistically significant correlation between response and expression of p53. Of the 25 patients who obtained a complete clinical response, two were classified as p53-positive (P = 0.004, chi-square). Of the 11 patients who obtained a complete pathological remission, one was p53-positive (P = 0.099, chi-square). DISCUSSION The combination is highly effective in locally advanced breast cancer. Negative expression of p53 indicates a higher chance of responding to this regimen. The p53 status may be used as a biological marker to identify those patients who would benefit from more aggressive treatments.
The use of Likert scales with children.
OBJECTIVE We investigated elementary school children's ability to use a variety of Likert response formats to respond to concrete and abstract items. METHODS 111 children, aged 6-13 years, responded to 2 physical tasks that required them to make objectively verifiable judgments, using a 5-point response format. Then, using 25 items, we ascertained the consistency between responses using a "gold standard" yes/no format and responses using 5-point Likert formats including numeric values, as well as word-based frequencies, similarities to self, and agreeability. RESULTS All groups responded similarly to the physical tasks. For the 25 items, the use of numbers to signify agreement yielded low concordance with the yes/no answer format across age-groups. Formats based on words provided higher, but not perfect, concordance for all groups. CONCLUSIONS Researchers and clinicians need to be aware of the limited understanding that children have of Likert response formats.
A double-blind randomized study comparing the effects of continuing or not continuing rosiglitazone + metformin therapy when starting insulin therapy in people with Type 2 diabetes
AIMS To compare the efficacy and safety of either continuing or discontinuing rosiglitazone + metformin fixed-dose combination when starting insulin therapy in people with Type 2 diabetes inadequately controlled on oral therapy. METHODS In this 24-week double-blind study, 324 individuals with Type 2 diabetes inadequately controlled on maximum dose rosiglitazone + metformin therapy were randomly assigned to twice-daily premix insulin therapy (target pre-breakfast and pre-evening meal glucose < or = 6.5 mmol/l) in addition to either rosiglitazone + metformin (8/2000 mg) or placebo. RESULTS Insulin dose at week 24 was significantly lower with rosiglitazone + metformin (33.5 +/- 1.5 U/day, mean +/- se) compared with placebo [59.0 +/- 3.0 U/day; model-adjusted difference -26.6 (95% CI -37.7, -15.5) U/day, P < 0.001]. Despite this, there was greater improvement in glycaemic control [HbA(1c) rosiglitazone + metformin vs. placebo 6.8 +/- 0.1 vs. 7.5 +/- 0.1%; difference -0.7 (-0.8, -0.5)%, P < 0.001] and more individuals achieved glycaemic targets (HbA(1c) < 7.0% 70 vs. 34%, P < 0.001). The proportion of individuals reporting at least one hypoglycaemic event during the last 12 weeks of treatment was similar in the two groups (rosiglitazone + metformin vs. placebo 25 vs. 27%). People receiving rosiglitazone + metformin in addition to insulin reported greater treatment satisfaction than those receiving insulin alone. Both treatment regimens were well tolerated but more participants had oedema [12 (7%) vs. 4 (3%)] and there was more weight gain [3.7 vs. 2.6 kg; difference 1.1 (0.2, 2.1) kg, P = 0.02] with rosiglitazone + metformin. CONCLUSIONS Addition of insulin to rosiglitazone + metformin enabled more people to reach glycaemic targets with less insulin, and was generally well tolerated.
Community Detection in Temporal Networks
Many complex systems in nature, society and technology (from online social networks to the Internet, from the nervous system to power grids) can be represented as a graph of vertices interconnected by edges. Small-world structure, scale-free degree distributions, and community structure are fundamental properties of complex networks. A community is a subgraph with densely connected nodes that typically reveals topological relations, network structure, and organizational and functional characteristics of the underlying network, e.g., friendships on Facebook, followers of a VIP account on Twitter, and interactions with business professionals on LinkedIn. Online social networks are dynamic in nature; they evolve rapidly in size and space. In this paper we detect incremental disjoint communities using a dynamic multi-label propagation algorithm. It handles temporal event changes, i.e., the addition or deletion of an edge or vertex in a subgraph, for every timestamp. Experimental results on the Enron real-world network dataset show that the proposed method performs well in identifying communities.
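For readers unfamiliar with label propagation, the following minimal Python sketch shows the core idea on a static snapshot; it is illustrative only and omits the dynamic, multi-label handling of per-timestamp edge and vertex changes that the paper proposes.

```python
# Minimal label-propagation sketch (illustrative; the paper's method is a dynamic,
# multi-label variant that also handles edge/vertex additions and deletions per timestamp).
import random
from collections import Counter

def label_propagation(adjacency, max_iters=50, seed=0):
    """adjacency: dict mapping node -> iterable of neighbour nodes."""
    rng = random.Random(seed)
    labels = {node: node for node in adjacency}   # start: every node is its own community
    nodes = list(adjacency)
    for _ in range(max_iters):
        rng.shuffle(nodes)
        changed = False
        for node in nodes:
            neigh = adjacency[node]
            if not neigh:
                continue
            counts = Counter(labels[v] for v in neigh)
            best = max(counts.values())
            choice = rng.choice([lab for lab, c in counts.items() if c == best])
            if choice != labels[node]:
                labels[node], changed = choice, True
        if not changed:
            break
    return labels

# Two dense triangles joined by a single bridge edge separate into two communities.
graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
print(label_propagation(graph))
```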
Presentation on Facebook and risk of cyberbullying victimisation
Facebook is an environment in which adolescents can experiment with self-presentation. Unfortunately, Facebook can also be an environment in which cyberbullying occurs. The aim of the current study was to investigate whether specific self-presentation behaviours in Facebook were associated with cyberbullying victimisation for adolescents. The contents of 147 adolescent (15–24 years) Facebook profile pages were recorded and used to predict cyberbullying victimisation. Coded contents included the presence or absence of Facebook profile features (e.g., relationship status) and the specific content of certain features (e.g., type and valence of wall posts). Participants completed measures of cyberbullying victimisation and traditional bullying victimisation and perpetration. More than three out of four participants reported experiencing at least one victimisation experience on Facebook in the preceding 6 months. A series of Facebook features and experiences of traditional bullying victimisation/perpetration were found to be associated with an increased risk of cyberbullying victimisation. Number of Facebook friends and traditional bullying victimisation were also significant predictors of cyberbullying victimisation. These results support the hypothesis that self-presentation on Facebook can increase the likelihood of eliciting negative attention from potential perpetrators. This has important implications for the development of cyberbullying prevention and education programs that teach adolescents about measures they may take to decrease their risk for cyberbullying victimisation within social networking sites like Facebook.
Discovering Excitatory Networks from Discrete Event Streams with Applications to Neuronal Spike Train Analysis
Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer interaction modeling. We focus in this paper on temporal models representable as excitatory networks where all connections are stimulative, rather than inhibitory. Through this emphasis on excitatory networks, we show how they can be learned by creating bridges to frequent episode mining. Specifically, we show that frequent episodes help identify nodes with high mutual information relationships, which can then be summarized into a dynamic Bayesian network (DBN). To demonstrate the practical feasibility of our approach, we show how excitatory networks can be inferred from both mathematical models of spiking neurons as well as real neuroscience datasets.
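As a hedged sketch of the kind of statistic this line of work builds on (not the paper's algorithm), the Python snippet below counts simple serial episodes "A followed by B within a window" over a hypothetical spike-train event stream; such counts are the raw material from which excitatory connections can then be summarized into a DBN.

```python
# Illustrative only: count serial episodes "A -> B within `window`" in an event stream,
# a simplified stand-in for the frequent-episode statistics described in the abstract.
from collections import defaultdict

def count_pair_episodes(events, window):
    """events: list of (timestamp, node) pairs sorted by timestamp."""
    counts = defaultdict(int)
    for i, (t_a, a) in enumerate(events):
        for t_b, b in events[i + 1:]:
            if t_b - t_a > window:
                break
            if b != a:
                counts[(a, b)] += 1       # a's event is followed by b's within the window
    return counts

# Hypothetical spike train: neuron n1 tends to precede n2 by about 2 time units.
spikes = [(0, "n1"), (2, "n2"), (5, "n3"), (10, "n1"), (12, "n2"), (20, "n1"), (21, "n2")]
for (a, b), c in sorted(count_pair_episodes(spikes, window=3).items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: {c}")
```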
Deep Learning for IoT Big Data and Streaming Analytics: A Survey
In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview of using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.
What is this evasive beast we call user satisfaction?
The notion of ‘user satisfaction’ plays a prominent role in HCI, yet it remains evasive. This exploratory study reports three experiments from an ongoing research program. In this program we aim to uncover (1) what user satisfaction is, (2) whether it is primarily determined by user expectations or by the interactive experience, (3) how user satisfaction may be related to perceived usability, and (4) the extent to which satisfaction rating scales capture the same interface qualities as uncovered in self-reports of interactive experiences. In all three experiments reported here user satisfaction was found to be a complex construct comprising several concepts, the distribution of which varied with the nature of the experience. Expectations were found to play an important role in the way users approached a browsing task. Satisfaction and perceived usability were assessed using two methods: scores derived from unstructured interviews and from the Web site Analysis MeasureMent Inventory (WAMMI) rating scales. Scores on these two instruments were somewhat similar, but conclusions drawn across all three experiments differed in terms of satisfaction ratings, suggesting that rating scales and interview statements may tap different interface qualities. Recent research suggests that ‘beauty’, or ‘appeal’, is linked to perceived usability so that what is ‘beautiful’ is also perceived to be usable [Interacting with Computers 13 (2000) 127]. This was true in one experiment here using a web site high in perceived usability and appeal. However, using a site with high appeal but very low in perceived usability yielded very high satisfaction, but low perceived usability scores, suggesting that what is ‘beautiful’ need not also be perceived to be usable. The results suggest that web designers may need to pay attention to both visual appeal and usability.
Market Share, Scale, and Market Value: An Empirical Study of Small Closely Held Manufacturing Firms
This study examines the impact of the interaction of the firm variables market share and scale, and the industry conditions of concentration and growth, on the market value of 225 closely held, micro-market share manufacturing firms. The results suggest that the market value of small firms with micro-market shares benefits from the presence of large firms in concentrated industries. The study also found support for the hypothesis that the impact of concentration and scale economies on small-firm performance varies according to industry growth.
The Paradoxes of Education in Rorty's Liberal Utopia
Richard Rorty is a bundle of seeming contradictions. An anti-philosophical philosopher, a profound thinker against systematic theorizing, he signs a death certificate and writes an obituary for professionalized, academic philosophy. Writing breezy but often technical philosophical texts, he endeavors to demonstrate the irrelevance and marginality of his own breed, claiming that philosophers have little importance and a limited function in our present society. Yet despite his insistence upon the irrelevance of philosophy to democratic politics, he nevertheless points us toward and describes in great detail a political utopia, his philosophically informed vision of how Western democracies might look if they adopted his vocabulary and anti-metaphysical, anti-epistemological Weltanschauung. Similarly, while insisting that, as a philosopher, he has very little to say about education, and doubting, moreover, whether philosophy in general has anything important to say about education, he pointedly describes and endorses a system of education that would fit his liberal utopia and mesh with his philosophical musings. Paradoxically, while proclaiming the irrelevance of philosophy to education, Rorty’s philosophical work, coupled with several of his more popular essays, outlines a distinct philosophy of education.
Wireless Sensor Networks and Monitoring of Environmental Parameters in Precision Agriculture
The handiness and ease of use of tele-technology such as mobile phones has spurred the growth of ICT in developing countries like India more than ever. Mobile phones have received an overwhelming response and have helped farmers to work on a timely basis and stay connected with the wider farming world. But mobile phones are of little use when it comes to real-time farm monitoring or accessing accurate information, because of the limited research on and application of mobile phones in the agricultural field for such uses. The current demand for WSN in agricultural fields has revolutionized the farming experience. In precision agriculture, the contributions of WSN are numerous, starting from monitoring soil health and plant health to the storage of crop yield. Owing to population pressure and economic inflation, farmers are under great pressure to produce more from their fields with fewer resources. This paper gives a brief insight into plant disease prediction with the help of wireless sensor networks. Keywords— Plant Disease Monitoring, Precision Agriculture, Environmental Parameters, Wireless Sensor Network (WSN)
Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal
Model-free reinforcement learning has recently been shown to be effective at learning navigation policies from complex image input. However, these algorithms tend to require large amounts of interaction with the environment, which can be prohibitively costly to obtain on robots in the real world. We present an approach for efficiently learning goal-directed navigation policies on a mobile robot, from only a single coverage traversal of recorded data. The navigation agent learns an effective policy over a diverse action space in a large heterogeneous environment consisting of more than 2km of travel, through buildings and outdoor regions that collectively exhibit large variations in visual appearance, self-similarity, and connectivity. We compare pretrained visual encoders that enable precomputation of visual embeddings to achieve a throughput of tens of thousands of transitions per second at training time on a commodity desktop computer, allowing agents to learn from millions of trajectories of experience in a matter of hours. We propose multiple forms of computationally efficient stochastic augmentation to enable the learned policy to generalise beyond these precomputed embeddings, and demonstrate successful deployment of the learned policy on the real robot without fine tuning, despite environmental appearance differences at test time. The dataset and code required to reproduce these results and apply the technique to other datasets and robots is made publicly available at rl-navigation.github.io/deployable.
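The data-pipeline idea described above (precompute embeddings once with a frozen encoder, then train at high throughput from the cache with stochastic augmentation) can be sketched as follows; this is an illustrative assumption-laden toy, not the paper's encoder, dataset, or training algorithm.

```python
# Illustrative sketch: embed every frame of a single recorded traversal once with a
# frozen encoder, then train on noisy samples of the cached embeddings so the policy
# does not overfit to the exact precomputed vectors.
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(image):
    """Stand-in for a pretrained visual encoder (hypothetical, not the paper's network)."""
    return np.tanh(image.reshape(-1)[:128])

# 1) One-off precomputation over the recorded traversal.
traversal_frames = [rng.normal(size=(32, 32, 3)) for _ in range(1000)]
cache = np.stack([frozen_encoder(f) for f in traversal_frames])      # (1000, 128)

# 2) At training time, sample transitions from the cache and augment stochastically;
#    no image ever passes through the encoder again, which is what buys the throughput.
def sample_batch(batch_size=256, noise_scale=0.05):
    idx = rng.integers(0, len(cache) - 1, size=batch_size)
    obs = cache[idx] + rng.normal(scale=noise_scale, size=(batch_size, cache.shape[1]))
    next_obs = cache[idx + 1] + rng.normal(scale=noise_scale, size=(batch_size, cache.shape[1]))
    return obs, next_obs

obs, next_obs = sample_batch()
print(obs.shape, next_obs.shape)   # (256, 128) (256, 128)
```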
The Accrual Anomaly: Risk or Mispricing?
We document considerable return comovement associated with accruals after controlling for other common factors. An accrual-based factor-mimicking portfolio has a Sharpe ratio of 0.15, higher than that of the market factor or the HML factor of Fama and French (1993). In time series regressions, a model that includes the Fama-French factors and the additional accrual factor captures the accrual anomaly in average returns. However, further time series and cross-sectional tests indicate that it is the accrual characteristic rather than the accrual factor loading that predicts returns. These findings favor a behavioral explanation for the accrual anomaly.
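The time-series test mentioned in the abstract can be illustrated with a hedged sketch on synthetic data (not the paper's data or factor construction): regress a test portfolio's excess returns on the three Fama-French factors plus an accrual factor-mimicking portfolio and inspect the intercept; a near-zero alpha would mean the factor model captures the accrual spread in average returns.

```python
# Illustrative only (synthetic data): time-series regression of portfolio excess returns
# on MKT, SMB, HML and an accrual factor ACC, with OLS estimates and t-statistics.
import numpy as np

rng = np.random.default_rng(1)
T = 480                                                   # months of synthetic data
mkt, smb, hml, acc = (rng.normal(0.005, 0.04, T) for _ in range(4))
ret = 0.001 + 1.0 * mkt + 0.2 * smb - 0.1 * hml + 0.4 * acc + rng.normal(0, 0.02, T)

X = np.column_stack([np.ones(T), mkt, smb, hml, acc])
beta, *_ = np.linalg.lstsq(X, ret, rcond=None)
resid = ret - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=X.shape[1]))
for name, b, s in zip(["alpha", "MKT", "SMB", "HML", "ACC"], beta, se):
    print(f"{name:5s} {b: .4f}  (t = {b / s: .2f})")
```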
An Inside Look at Botnets
The continued growth and diversification of the Internet has been accompanied by an increasing prevalence of attacks and intrusions [40]. It can be argued, however, that a significant change in motivation for malicious activity has taken place over the past several years: from vandalism and recognition in the hacker community, to attacks and intrusions for financial gain. This shift has been marked by a growing sophistication in the tools and methods used to conduct attacks, thereby escalating the network security arms race. Our thesis is that the reactive methods for network security that are predominant today are ultimately insufficient and that more proactive methods are required. One such approach is to develop a foundational understanding of the mechanisms employed by malicious software (malware) which is often readily available in source form on the Internet. While it is well known that large IT security companies maintain detailed databases of this information, these are not openly available and we are not aware of any such open repository. In this paper we begin the process of codifying the capabilities of malware by dissecting four widely-used Internet Relay Chat (IRC) botnet codebases. Each codebase is classified along seven key dimensions including botnet control mechanisms, host control mechanisms, propagation mechanisms, exploits, delivery mechanisms, obfuscation and deception mechanisms. Our study reveals the complexity of botnet software, and we discuss implications for defense strategies based on our analysis.
SEPP-ZVS high frequency inverter for induction heating using newly developed SiC-SIT
SiC-SIT power semiconductor switching devices have the advantage that their switching speed is high compared to that of other power semiconductor switching devices. We adopt newly developed SiC-SITs, which have maximum ratings of 800 V/4 A, and prepare a breadboard of a conventional single-ended push-pull (SEPP) high frequency inverter. This paper describes the characteristics of the SiC-SIT on the basis of experimental results from the breadboard. Its operating frequency is varied from 100 kHz to 250 kHz with a PWM control technique for output power regulation. Its load is an induction fluid heating system for super-heated-steam production.
Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller.
OBJECTIVE Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping. APPROACH A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influences of the involved components (a)-(d) are investigated. MAIN RESULTS Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance--competitive to a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. SIGNIFICANCE A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP-applications of BCI.
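The dynamic-stopping and language-model components can be illustrated with a hedged toy sketch (not the paper's decoder): per-flash classifier scores are accumulated into a posterior over candidate symbols, starting from a language-model prior, and stimulation stops once one symbol's posterior clears a confidence threshold. All numbers below are hypothetical.

```python
# Illustrative sketch of dynamic stopping with a language-model prior in an ERP speller.
import numpy as np

rng = np.random.default_rng(2)
symbols = list("ABCDE")
lm_prior = np.array([0.4, 0.25, 0.15, 0.1, 0.1])     # hypothetical language-model prior
true_symbol = 0                                       # the user attends to "A"

log_post = np.log(lm_prior)
for flash in range(1, 101):
    target = rng.integers(len(symbols))               # which symbol was highlighted
    # Hypothetical classifier score: higher when the attended symbol is flashed.
    score = rng.normal(1.0 if target == true_symbol else 0.0, 1.0)
    # Gaussian log-likelihood ratio for "target was attended" vs "not attended".
    log_post[target] += -0.5 * (score - 1.0) ** 2 + 0.5 * (score - 0.0) ** 2
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    if post.max() > 0.95:                             # dynamic stopping criterion
        print(f"stopped after {flash} flashes, selected {symbols[post.argmax()]}")
        break
```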
Does Team Training Improve Team Performance? A Meta-Analysis
OBJECTIVE This research effort leveraged the science of training to guide a taxonomic integration and a series of meta-analyses to gauge the effectiveness and boundary conditions of team training interventions for enhancing team outcomes. BACKGROUND Disparate effect sizes across primary studies have made it difficult to determine the true strength of the relationships between team training techniques and team outcomes. METHOD Several meta-analytic integrations were conducted to examine the relationships between team training interventions and team functioning. Specifically, we assessed the relative effectiveness of these interventions on team cognitive, affective, process, and performance outcomes. Training content, team membership stability, and team size were investigated as potential moderators of the relationship between team training and outcomes. In total, the database consisted of 93 effect sizes representing 2650 teams. RESULTS The results suggested that moderate, positive relationships exist between team training interventions and each of the outcome types. The findings of moderator analyses indicated that training content, team membership stability, and team size moderate the effectiveness of these interventions. CONCLUSION Our findings suggest that team training interventions are a viable approach organizations can take in order to enhance team outcomes. They are useful for improving cognitive outcomes, affective outcomes, teamwork processes, and performance outcomes. Moreover, results suggest that training content, team membership stability, and team size moderate the effectiveness of team training interventions. APPLICATION Applications of the results from this research are numerous. Those who design and administer training can benefit from these findings in order to improve the effectiveness of their team training interventions.
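For readers unfamiliar with the pooling step behind such meta-analytic integrations, here is a hedged worked sketch with entirely hypothetical numbers (not the study's 93 effect sizes), using standard DerSimonian-Laird random-effects formulas.

```python
# Worked sketch (hypothetical numbers): DerSimonian-Laird random-effects pooling of
# per-study effect sizes, the style of meta-analytic integration the abstract describes.
import numpy as np

effects = np.array([0.45, 0.30, 0.62, 0.15, 0.50])     # hypothetical per-study effects
variances = np.array([0.02, 0.03, 0.05, 0.04, 0.02])   # hypothetical sampling variances

w_fixed = 1.0 / variances
mean_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
Q = np.sum(w_fixed * (effects - mean_fixed) ** 2)                  # heterogeneity statistic
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)                                      # between-study variance

w_random = 1.0 / (variances + tau2)
mean_random = np.sum(w_random * effects) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))
print(f"pooled effect = {mean_random:.3f}  95% CI = "
      f"({mean_random - 1.96 * se_random:.3f}, {mean_random + 1.96 * se_random:.3f})")
```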
Donor Retention in Online Crowdfunding Communities: A Case Study of DonorsChoose.org
Online crowdfunding platforms like DonorsChoose.org and Kickstarter allow specific projects to get funded by targeted contributions from a large number of people. Critical for the success of crowdfunding communities is recruitment and continued engagement of donors. With donor attrition rates above 70%, a significant challenge for online crowdfunding platforms as well as traditional offline non-profit organizations is the problem of donor retention. We present a large-scale study of millions of donors and donations on DonorsChoose.org, a crowdfunding platform for education projects. Studying an online crowdfunding platform allows for an unprecedented detailed view of how people direct their donations. We explore various factors impacting donor retention which allows us to identify different groups of donors and quantify their propensity to return for subsequent donations. We find that donors are more likely to return if they had a positive interaction with the receiver of the donation. We also show that this includes appropriate and timely recognition of their support as well as detailed communication of their impact. Finally, we discuss how our findings could inform steps to improve donor retention in crowdfunding communities and non-profit organizations.
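To make the retention analysis concrete, here is a hedged Python sketch on a toy donation log (not the DonorsChoose.org data): each donor is flagged as "returned" if they give again after their first donation, and retention is compared across a hypothetical grouping variable such as whether the donor received timely recognition.

```python
# Illustrative only (toy data): compute donor retention from a donation log and compare
# retention rates between groups of donors.
import pandas as pd

donations = pd.DataFrame({
    "donor_id": [1, 1, 2, 3, 3, 3, 4, 5],
    "date": pd.to_datetime(["2013-01-05", "2013-06-10", "2013-02-01", "2013-02-14",
                            "2013-03-01", "2013-09-20", "2013-04-02", "2013-05-11"]),
    "thanked": [True, True, False, True, True, True, False, True],   # hypothetical flag
})

per_donor = (donations.sort_values("date")
             .groupby("donor_id")
             .agg(n_donations=("date", "size"), thanked=("thanked", "first")))
per_donor["returned"] = per_donor["n_donations"] > 1

print(per_donor.groupby("thanked")["returned"].mean())   # retention rate per group
```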
Multi-Task Learning For Parsing The Alexa Meaning Representation Language
The Alexa Meaning Representation Language (AMRL) is a compositional graph-based semantic representation that includes fine-grained types, properties, actions, and roles and can represent a wide variety of spoken language. AMRL increases the ability of virtual assistants to represent more complex requests, including logical and conditional statements as well as ones with nested clauses. Due to this representational capacity, the acquisition of large scale data resources is challenging, which limits the accuracy of resulting models. This paper has two primary contributions. The first contribution is a linearization of the AMRL parses that aligns them to a related task of spoken language understanding (SLU) and a deep neural network architecture that uses multi-task learning to predict AMRL fine-grained types, properties and intents. The second contribution is a deep neural network architecture that leverages embeddings from the large-scale data resources that are available for SLU. When combined, these contributions enable the training of accurate models of AMRL parsing, even in the presence of data sparsity. The proposed models, which use the linearized AMRL parse, multi-task learning, residual connections and embeddings from SLU, decrease the error rates in the prediction of the full AMRL parse by 3.56%.
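The multi-task idea can be sketched in PyTorch as follows; this is a hedged toy (layer sizes, vocabularies, and head names are assumptions, not the paper's architecture): a shared utterance encoder feeds one utterance-level intent head and one token-level fine-grained-type head, and the two losses are simply summed.

```python
# Illustrative PyTorch sketch of multi-task prediction over a shared encoder.
import torch
import torch.nn as nn

class MultiTaskParser(nn.Module):
    def __init__(self, vocab=5000, n_intents=50, n_types=120, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * dim, n_intents)   # utterance-level task
        self.type_head = nn.Linear(2 * dim, n_types)       # token-level task

    def forward(self, tokens):
        hidden, _ = self.encoder(self.embed(tokens))        # (batch, seq, 2*dim)
        return self.intent_head(hidden.mean(dim=1)), self.type_head(hidden)

model = MultiTaskParser()
tokens = torch.randint(0, 5000, (8, 12))                    # toy batch of token ids
intent_logits, type_logits = model(tokens)
loss = (nn.functional.cross_entropy(intent_logits, torch.randint(0, 50, (8,)))
        + nn.functional.cross_entropy(type_logits.reshape(-1, 120),
                                      torch.randint(0, 120, (8 * 12,))))
loss.backward()   # shared encoder receives gradients from both tasks
```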
CVD diamond tool performance in metal matrix composite machining
Metal matrix composite (MMC) has found increasing usage in industry for lightweight high-strength applications. However, because of the abrasive nature of the reinforced phase in MMC, machinability is poor, tool wear is rapid, and only diamond tools are technically suitable for MMC machining. Furthermore, diamond coatings seem to be more economically viable than polycrystalline diamond for MMC machining. In this study, CVD diamond-coated tools, 30 μm thick on tungsten carbide substrates, were investigated by outside diameter turning of an MMC of aluminum alloy reinforced with silicon-carbide particles. Cutting conditions ranged from 1 m/s to 6 m/s cutting speed, 0.05 mm/rev to 0.3 mm/rev feed, and 1 mm to 2 mm depth of cut. Tool wear was measured and compared at different machining conditions. Worn diamond-coated tools were extensively characterized by scanning electron microscopy. Cutting forces, chip thickness, and the chip–tool contact area were also measured for cutting temperature simulation by finite element analysis. The results show that tool wear is sensitive to cutting speed and feed rate, and the dominant wear mechanism is coating failure due to high stresses. The catastrophic coating failure suggests the bonding between the coating and substrate is critical to tool performance. High cutting temperatures will induce greater interfacial stresses at the bonding surface due to different thermal expansions between the coating and substrate, and plausibly result in the coating failure. A thermal management device, a heat pipe, has been demonstrated for cutting temperature reductions.
A POMDP Formulation of Preference Elicitation Problems
Preference elicitation is a key problem facing the deployment of intelligent systems that make or recommend decisions on the behalf of users. Since not all aspects of a utility function have the same impact on object-level decision quality, determining which information to extract from a user is itself a sequential decision problem, balancing the amount of elicitation effort and time with decision quality. We formulate this problem as a partially-observable Markov decision process (POMDP). Because of the continuous nature of the state and action spaces of this POMDP, standard techniques cannot be used to solve it. We describe methods that exploit the special structure of preference elicitation to deal with parameterized belief states over the continuous state space, and gradient techniques for optimizing parameterized actions. These methods can be used with a number of different belief state representations, including mixture models.
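A greatly simplified sketch of the core idea (not the paper's POMDP solution method, and with all numbers hypothetical) is shown below: a belief over a single utility parameter is maintained on a grid, updated by Bayes' rule after each noisy yes/no query, and the next query is chosen myopically by the largest expected reduction in posterior entropy, trading elicitation effort against decision quality.

```python
# Simplified sketch: Bayesian belief updates plus myopic query selection for elicitation.
import numpy as np

grid = np.linspace(0.0, 1.0, 101)          # candidate values of the hidden utility weight
belief = np.ones_like(grid) / len(grid)    # uniform prior
NOISE = 0.1                                 # probability the user answers incorrectly

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def posterior(belief, threshold, answer_yes):
    # Query: "is your weight above `threshold`?"; the user errs with probability NOISE.
    truth = grid > threshold
    like = np.where(truth == answer_yes, 1 - NOISE, NOISE)
    post = belief * like
    return post / post.sum()

def best_query(belief, candidates=np.linspace(0.05, 0.95, 19)):
    def expected_entropy(th):
        p_yes = np.sum(belief * ((grid > th) * (1 - NOISE) + (grid <= th) * NOISE))
        return (p_yes * entropy(posterior(belief, th, True))
                + (1 - p_yes) * entropy(posterior(belief, th, False)))
    return min(candidates, key=expected_entropy)

true_weight = 0.63
rng = np.random.default_rng(3)
for step in range(8):
    th = best_query(belief)
    answer = (true_weight > th) if rng.random() > NOISE else not (true_weight > th)
    belief = posterior(belief, th, answer)
    print(f"asked about {th:.2f}, belief mean = {np.sum(grid * belief):.3f}")
```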
Perception and Emotion: How We Recognize Facial Expressions
Perception and emotion interact, as is borne out by studies of how people recognize emotion from facial expressions. Psychological and neurological research has elucidated the processes, and the brain structures, that participate in facial emotion recognition. Studies have shown that emotional reactions to viewing faces can be very rapid and that these reactions may, in turn, be used to judge the emotion shown in the face. Recent experiments have argued that people actively explore facial expressions in order to recognize the emotion, a mechanism that emphasizes the instrumental nature of social cognition. KEYWORDS—face perception; faces; emotion; amygdala An influential psychological model of face processing argued that early perception (construction of a geometric representation of the face based on its features) led to subsequently separate processing of the identity of the face and of the emotional expression of the face (Bruce & Young, 1986). That model has received considerable support from neuroscience studies suggesting that the separate processes are based on separable neuroanatomical systems: In neuroimaging studies, different parts of the brain are activated in response to emotional expressions or identity changes; and brain damage can result in impairments in recognizing identity but not in recognizing emotional expressions, or the reverse. SOME PROCESSING IS RAPID AND AUTOMATIC AND CAN OCCUR IN THE ABSENCE OF AWARENESS OF THE STIMULI Some responses in the brain to emotional facial expressions are so rapid (less than 100 ms) that they could not plausibly be based on conscious awareness of the stimulus, although the responses, in turn, may contribute to conscious awareness. Evidence comes from studies using event-related potentials, measures of the brain’s electrical activity recorded at the scalp (or, much more rarely, directly from the brain in surgical patients). In those experiments, the responses to many presentations of emotional stimuli are averaged across stimuli, and the changes in electrical potential can be accurately timed in relation to stimulus onset. Evidence has also come from studies in which viewers were shown facial expressions subliminally. Especially salient aspects of faces are most potent in driving nonconscious responses: For instance, subliminal presentation of only the whites of the eyes of fearful faces results in measurable brain activation (Whalen et al., 2004). One specific structure involved in such rapid and automatic neural responses is the amygdala, a structure in the medial temporal lobe that is known to be involved in many aspects of emotion processing. Findings such as these have been of interest to psychologists, as they provide a potential mechanism consistent with two-factor theories of human behavior. For instance, the theory that affect and cognitive judgment are separate processes, and that affect can precede cognitive judgment, receives some support from the neuroscience findings (Zajonc, 1980). The data also add detail to theories of visual consciousness. In one set of studies, emotional facial expressions were shown to neurological patients who, because of their brain damage, were unable to report seeing the stimuli. An individual with damage to the visual cortex had ‘‘blindsight’’ for emotional faces: He could discriminate the emotion shown in faces by guessing, even though he reported no visual experience of seeing the faces. 
A neuroimaging experiment in this same individual revealed that the amygdala was activated differentially by different emotional faces, despite the absence of conscious visual experience (Morris, deGelder, Weiskrantz, & Dolan, 2001). These and other studies have suggested a distinction between subcortical processing of emotional visual stimuli (e.g., involving the amygdala and brainstem nuclei such as the superior colliculus), which may be independent of conscious vision (Johnson, 2005), and cortical processing, which is usually accompanied by conscious experience. It is interesting to note that amphibians and reptiles have only subcortical visual processing, since they lack a neocortex. One broad interpretation of these observations is thus that the subcortical route for processing emotional stimuli is the more ancient one, and that in mammals an additional, cortical route has evolved that probably allows more flexible behaviors based on learning and conscious deliberation. A final wrinkle has come from psychological studies of the relationship between attention and emotion. It has been known for some time that emotionally salient stimuli can capture attention, an interaction that makes much adaptive sense (Ohman, Flykt, & Esteves, 2001). But more recent studies have also shown the reverse: that volitionally allocating attention to stimuli can influence their emotional evaluation. Inhibiting attention to visual stimuli that are distractors in a search task, for example, results in devaluation of those stimuli when subjects are asked to rate them on affective dimensions (Raymond, Fenske, & Tavassoli, 2003). Emotions thus represent the value of stimuli—what people approach or avoid, cognitively or behaviorally, volitionally or automatically. FEAR AND DISGUST MAY BE PROCESSED SPECIALLY Several studies have found evidence that the amygdala, the subcortical structure discussed earlier, is disproportionately important for processing facial expressions of fear, although by no means exclusively so. The only other emotion for which a similar neuroanatomical specificity has been reported is disgust. A region of the cortex called the insula represents motor and sensory aspects of that emotional response (Calder, Lawrence, & Young, 2001). In lesion studies, damage to the amygdala is shown to result in impairment in the ability to recognize fear from facial expressions. Similarly, damage to the insula can result in impairment in the ability to recognize disgust from facial expressions. However, questions remain regarding exactly which emotion category or dimension is being encoded in these brain regions, since other emotions are often also variably impaired. Especially informative have been studies in a rare patient, SM, who has total damage to the amygdala on both sides of the brain. The subject can accurately judge age and gender from faces, and she has no difficulty recognizing familiar individuals from their faces. She also has little difficulty recognizing most facial expressions, with the notable exception of fear. When asked to judge the fearfulness of faces, SM is selectively and severely impaired (Adolphs, Tranel, Damasio, & Damasio, 1994).
However, other patients with similar damage are impaired on a wider array of emotions; in functional neuroimaging studies, activation of the amygdala is seen in response to several emotions in addition to fear; and even SM’s highly specific impairment depends on the questions she is asked. For example, when asked to classify faces into categories of basic emotions, SM is rather selectively impaired on fear; but when asked to rate how arousing the emotion shown in the face is, she is impaired on all emotions of negative valence (Adolphs, Russell, & Tranel, 1999). These data suggest that, while the amygdala is critical for recognizing fear in faces, its role encompasses a broader, more abstract, and perhaps dimensional rather than categorical aspect of emotions, for which fear is but one instance—an issue I take up again below. MANY BRAIN STRUCTURES ARE INVOLVED Just as there is good evidence that the amygdala does more than solely detect fear, there are of course brain structures other than the amygdala that participate in perceiving emotion from faces. Brain-imaging studies, in particular, have found evidence for a large number of other brain structures that may come into play, depending on the emotion shown in the face and on the demands of the experimental task. One framework that summarizes these data runs as follows. Visual areas in the temporal cortex encode the face’s features and bind them into a global perceptual representation of the face. Subcortical visual areas (such as the superior colliculus, prominent in animals such as frogs) carry out coarser but faster processing of the face. Both routes, the cortical and the subcortical, feed visual information about the face into the amygdala. The amygdala then associates the visual representation of the face with an emotional response in the body, as well as with changes in the operation of other brain structures. For instance, it likely triggers autonomic responses (such as skin-conductance response, the sympathetic autonomic response of sweating on the palms of the hands) to the face, and it also modulates visual attention to the face. Tracing the path from initial perception of the face to recognizing the emotion it expresses is complicated by feedback loops and by multiple pathways that can be engaged. One class of proposals, as influential as they are controversial, argues that the emotional response elicited by the amygdala can in fact be used by the viewer to reconstruct knowledge about the emotion shown in the face. Roughly: if I experience a pang of fear within myself upon seeing a fearful face, I can use the knowledge of my own emotion to infer what the emotional state of the person whose face is shown in the stimulus might be. Broader theories in a similar vein do not restrict themselves to the amygdala or to fear, but more generally propose that we make inferences about other people’s emotional states by simulating within ourselves aspects of those states (Goldman & Sripada, 2005). Emotional contagion and imitation may be the earliest aspects of such a mechanism that can already be seen in infants
Injectable fillers for volume replacement in the aging face.
In recent years, there has been a better understanding of the aging process. In addition to changes occurring in the skin envelope, significant changes occur in the subcutaneous fat and craniofacial skeleton. This has led to a paradigm shift in the therapeutic approach to facial rejuvenation. Along with soft tissue repositioning, volumizing the aging face has been found to optimize the result and achieve a more natural appearance. Early in the aging process, when there has not been a significant change to the face requiring surgical intervention, fillers alone can provide minimally invasive facial rejuvenation through volumizing. Multiple injectable soft tissue fillers and biostimulators are currently available to provide facial volume such as hyaluronic acid, calcium hydroxylapatite, poly-L-lactic acid, polymethyl methacrylate, and silicone. A discussion of the morphological changes seen in the aging face, the properties of these products, and key technical concepts will be highlighted to permit optimum results when performing facial volumizing of the upper, middle, and lower thirds of the face. These fillers can act as a dress rehearsal for these patients considering structural fat grafting.
The “R.A.R.E.” Technique (Reverse and Repositioning Effect): The Renaissance of the Aging Face and Neck
Considering the fixed points of the face (Fig. 1), and in light of the fact that gravity is one of the main factors involved in aging, a new alternative concept in cosmetic surgery is discussed in this paper. In our approach, rejuvenation of the face and neck involves two completely separate procedures. The whole face must be treated “homothetically”, with an upward (vertical) and deep (subperiosteal) approach, to preserve facial proportions and distances, thus preserving the original facial identity. The facial portion of our rejuvenation surgery becomes a single “en bloc” and “closed” procedure, correcting the sagging tissue in the lateral sector, between the fixed zones which must be preserved. The Malaris portion of the Orbicularis Oculi Muscle, (through its strong connections with the skin and the malar fat) has become the “key tool” of the rejuvenation of the whole face. Then, neck surgery becomes a completely distinct procedure, and is to be performed in an oblique/horizontal direction. We now seek to preserve the very firmly attached neck zones, which are the attachment of the posterior border of the fibrous platysma onto the S.C.M. (sternocleidomastoid) muscle. This will permit a more conservative and less aggressive neck surgery, without any sub-platysmal dissection. Over 200 RARE procedures have been performed during almost four years. Improvement in terms of facial rejuvenation is dramatic and the technique is quite safe and predictable. The only possible difficulty involves the patient’s temporary initial concern about early postoperative appearance. Figure 1 Main fixed zones of the face and the neck, (per Furnas, 1989). 1: fixed concha of the ear. 2: auriculoplatysmal ligament. 3: fixed adherences between the posterior border of the platysma and the S.C.M. muscle (per T.Besins). 4: mandibular ligament. 5: cutaneo-platysmal anterior ligament. 6: zygomatic ligaments.
Natural Logic Inference for Emotion Detection
Current research on emotion detection focuses on recognizing explicit emotion expressions in text. In this paper, we propose an approach based on textual inference to detect implicit emotion expressions, that is, to cast emotion detection as a logical inference problem. The approach builds a natural logic system in which emotion detection is decomposed into a series of logical inference steps. The system also employs inference knowledge from textual inference resources for reasoning about complex expressions in emotional texts. Experimental results show that the approach is effective in detecting implicit emotional expressions.
Citisense: Mobile air quality sensing for individuals and communities Design and deployment of the Citisense mobile air-quality system
Air quality has great impact on individual and community health. In this demonstration, we present Citisense: a mobile air quality system that enables users to track their personal air quality exposure for discovery, self-reflection, and sharing within their local communities and online social networks.
A World-Championship-Level Othello Program
Othello is a recent addition to the collection of games that have been examined within artificial intelligence. Advances have been rapid, yielding programs that have reached the level of world-championship play. This article describes the current champion Othello program, Iago. The work described here includes: (1) a task analysis of Othello; (2) the implementation of a program based on this analysis and state-of-the-art AI game-playing techniques; and (3) an evaluation of the program's performance through games played against other programs and comparisons with expert human play.
Limitations of computed tomography coronary angiography.
It seems so simple and elegant. Fifty years after the introduction of coronary angiography, advances in technology allow imaging of the coronary arteries noninvasively using multidetector computed tomography (CT) scanners (1). Within a few short years, these imaging systems have begun appearing everywhere, first in hospitals and clinics, then in individual doctors’ offices, offering the promise of safe and painless detection of coronary obstructions (2). Weekend courses allow the members of our profession to “learn” this new technology and apply it routinely to patient care (3). The manufacturers of the equipment, all large multinational providers of radiological imaging devices, are quite pleased to show practitioners how they can rapidly recoup their million-dollar investments (4,5). What could possibly be wrong with this picture? Medical progress to the betterment of patients (and practitioners).
Sunscreens as a source of hydrogen peroxide production in coastal waters.
Sunscreens have been shown to give the most effective protection for human skin from ultraviolet (UV) radiation. Chemicals from sunscreens (i.e., UV filters) accumulate in the sea and have toxic effects on marine organisms. In this report, we demonstrate that photoexcitation of inorganic UV filters (i.e., TiO2 and ZnO nanoparticles) under solar radiation produces significant amounts of hydrogen peroxide (H2O2), a strong oxidizing agent that generates high levels of stress on marine phytoplankton. Our results indicate that the inorganic oxide nanoparticle content in 1 g of commercial sunscreen produces rates of H2O2 in seawater of up to 463 nM/h, directly affecting the growth of phytoplankton. Conservative estimates for a Mediterranean beach reveal that tourism activities during a summer day may release on the order of 4 kg of TiO2 nanoparticles to the water and produce an increment in the concentration of H2O2 of 270 nM/day. Our results, together with the data provided by tourism records in the Mediterranean, point to TiO2 nanoparticles as the major oxidizing agent entering coastal waters, with direct ecological consequences on the ecosystem.
Applying software engineering processes for big data analytics applications development
Developing large-scale software projects involves huge efforts at every stage of the software development life cycle (SDLC). This has led researchers and practitioners to develop software processes and methodologies that assist software developers and improve their operations. Software processes have evolved and taken multiple approaches to address the different issues of the SDLC. Recently, big data analytics applications (BDAA) are in demand as more and more data is collected and stakeholders need effective and efficient software to process it. The goal is not just to be able to process big data, but also to arrive at useful conclusions that are accurate and timely. Considering the distinctive characteristics of big data and the available infrastructures, tools and development models, we need to create a systematic approach to the SDLC activities for BDAA development. In this paper, we rely on our earlier work identifying the characteristics and requirements of BDAA and use that to propose appropriate models for their development process. It is necessary to carefully examine this domain and adopt the software processes that best serve developers and are flexible enough to address the different characteristics of such applications.
Identification and Characterization of Full-Length cDNAs in Channel Catfish (Ictalurus punctatus) and Blue Catfish (Ictalurus furcatus)
BACKGROUND Genome annotation projects, gene functional studies, and phylogenetic analyses for a given organism all greatly benefit from access to a validated full-length cDNA resource. While increasingly common in model species, full-length cDNA resources in aquaculture species are scarce. METHODOLOGY AND PRINCIPAL FINDINGS Through in silico analysis of catfish (Ictalurus spp.) ESTs, a total of 10,037 channel catfish and 7,382 blue catfish cDNA clones were identified as potentially encoding full-length cDNAs. Of this set, a total of 1,169 channel catfish and 933 blue catfish full-length cDNA clones were selected for re-sequencing to provide additional coverage and ensure sequence accuracy. A total of 1,745 unique gene transcripts were identified from the full-length cDNA set, including 1,064 gene transcripts from channel catfish and 681 gene transcripts from blue catfish, with 416 transcripts shared between the two closely related species. Full-length sequence characteristics (ortholog conservation, UTR length, Kozak sequence, and conserved motifs) of the channel and blue catfish were examined in detail. Comparison of gene ontology composition between full-length cDNAs and all catfish ESTs revealed that the full-length cDNA set is representative of the gene diversity encoded in the catfish transcriptome. CONCLUSIONS This study describes the first catfish full-length cDNA set constructed from several cDNA libraries. The catfish full-length cDNA sequences, and data gleaned from sequence characteristics analysis, will be a valuable resource for ongoing catfish whole-genome sequencing and future gene-based studies of function and evolution in teleost fishes.
Fingerprint Image Reconstruction from Standard Templates
A minutiae-based template is a very compact representation of a fingerprint image, and for a long time, it has been assumed that it did not contain enough information to allow the reconstruction of the original fingerprint. This work proposes a novel approach to reconstruct fingerprint images from standard templates and investigates to what extent the reconstructed images are similar to the original ones (that is, those the templates were extracted from). The efficacy of the reconstruction technique has been assessed by estimating the success chances of a masquerade attack against nine different fingerprint recognition algorithms. The experimental results show that the reconstructed images are very realistic and that, although it is unlikely that they can fool a human expert, there is a high chance to deceive state-of-the-art commercial fingerprint recognition systems.
A novel hybrid feature selection via Symmetrical Uncertainty ranking based local memetic search algorithm
A novel correlation-based memetic framework (MA-C), which combines a genetic algorithm (GA) with local search (LS) using correlation-based filter ranking, is proposed in this paper. The local filter method used here fine-tunes the population of GA solutions by adding or deleting features based on the Symmetrical Uncertainty (SU) measure. The focus here is on filter methods that are able to assess the goodness or ranking of individual features. An empirical study of MA-C on several commonly used large-scale gene expression datasets indicates that it outperforms recent methods in the literature in terms of classification accuracy, selected feature size and efficiency. Further, we also investigate the balance between local and genetic search to maximize search quality and efficiency.
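To make the SU-based ranking concrete, the sketch below computes Symmetrical Uncertainty, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), for discretised features against a class label. The toy features and labels are invented for illustration; this is only the filter measure the framework relies on, not the MA-C algorithm itself.

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a discrete sequence."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    h_x, h_y = entropy(x), entropy(y)
    h_xy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    mutual_info = h_x + h_y - h_xy    # I(X; Y)
    denom = h_x + h_y
    return 2.0 * mutual_info / denom if denom > 0 else 0.0

# Toy example: rank two discretised features against a class label.
label  = [0, 0, 0, 1, 1, 1, 1, 0]
feat_a = [0, 0, 0, 1, 1, 1, 1, 0]   # perfectly informative feature
feat_b = [1, 0, 1, 0, 1, 0, 1, 0]   # uninformative feature
print(symmetrical_uncertainty(feat_a, label))  # ~1.0
print(symmetrical_uncertainty(feat_b, label))  # ~0.0
```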
Feature-Oriented Classification for Software Reuse
It is widely accepted that reuse is a key technology leading to substantial productivity gains in software development. In this paper we introduce FOCS (feature-oriented classification system), a new method and tool to support (software) reuse. In feature-oriented classification, components are described by sets of features. Each feature represents a property or attribute of the component. Storage and retrieval of components is done by means of these feature sets. Features can be refined to give them a more precise meaning. We distinguish different kinds of refinement to support feature understanding and structuring. A similarity metric is defined between features to support the retrieval of similar components. The current FOCS prototype supports the organization of features in a classification scheme, the construction of descriptors, as well as the actual storage and retrieval of components.
ACTION-DEPENDENT FACTORIZED BASELINES
Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.
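As a minimal illustration of why baselines help (a constant baseline here, not the action-dependent one derived in the paper), the sketch below estimates the score-function gradient of a one-step Gaussian policy with and without a baseline: the mean stays approximately the same, confirming unbiasedness, while the variance drops. The toy reward and policy are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-step Gaussian policy: a ~ N(mu, 1); "return" R(a) = -(a - 3)**2.
mu, n = 0.0, 200_000
a = rng.normal(mu, 1.0, size=n)
returns = -(a - 3.0) ** 2

# Score-function (REINFORCE) gradient of E[R] w.r.t. mu:
# grad = E[(a - mu) * (R - b)] for any action-independent baseline b.
score = a - mu
grad_no_baseline = score * returns
baseline = returns.mean()                     # simple constant baseline
grad_with_baseline = score * (returns - baseline)

print("mean (no baseline):  ", grad_no_baseline.mean())
print("mean (with baseline):", grad_with_baseline.mean())   # ~same => unbiased
print("var  (no baseline):  ", grad_no_baseline.var())
print("var  (with baseline):", grad_with_baseline.var())    # noticeably smaller
```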
Two-Step Moving Target Detection Algorithm for Automotive 77 GHz FMCW Radar
Today, 77 GHz FMCW (Frequency Modulated Continuous Wave) radar sensors are used for automotive applications. In typical automotive radar, the target of interest is a moving target. Thus, to improve the detection probability and reduce the false alarm rate, an MTD (Moving Target Detection) algorithm is required. This paper describes the proposed two-step MTD algorithm. The 1st MTD processing consists of a clutter cancellation step and a noise cancellation step. The two steps can cancel almost all clutter, including stationary targets. However, clutter still remains among the beat frequencies of interest detected during the 1st MTD and CFAR (Constant False Alarm Rate) processing. Thus, in the 2nd MTD step, we remove the rest of the clutter with zero phase variation.
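As a rough illustration of stationary-clutter cancellation, the sketch below uses one common moving-target-indication trick: zeroing the zero-Doppler bin of a range-Doppler map, where returns with constant phase across chirps accumulate. The synthetic signal parameters are invented, and this is not the paper's two-step algorithm, only the underlying idea that stationary and moving returns separate along the Doppler axis.

```python
import numpy as np

# Synthetic beat-signal data cube: n_chirps slow-time samples x n_samples fast-time samples.
n_chirps, n_samples = 64, 256
t = np.arange(n_samples) / n_samples
chirp_idx = np.arange(n_chirps)[:, None]

stationary = np.cos(2 * np.pi * 40 * t)                        # clutter: constant phase over chirps
moving = np.cos(2 * np.pi * 70 * t + 0.2 * np.pi * chirp_idx)  # target: phase advances chirp to chirp
data = stationary + moving + 0.1 * np.random.randn(n_chirps, n_samples)

# Range-Doppler map: FFT over fast time (range), then over slow time (Doppler).
rd_map = np.fft.fft(np.fft.fft(data, axis=1), axis=0)

# Stationary returns pile up in the zero-Doppler row; zeroing it removes them.
rd_map_mti = rd_map.copy()
rd_map_mti[0, :] = 0.0

# Energy at the clutter range bin (40) before / after cancellation.
print(np.abs(rd_map[:, 40]).max(), np.abs(rd_map_mti[:, 40]).max())
```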
Evaluation of TRMM Multisatellite Precipitation Analysis ( TMPA ) and Its Utility in Hydrologic Prediction in the La Plata Basin
Satellite-based precipitation estimates with high spatial and temporal resolution and large areal coverage provide a potential alternative source of forcing data for hydrological models in regions where conventional in situ precipitation measurements are not readily available. The La Plata basin in South America provides a good example of a case where the use of satellite-derived precipitation could be beneficial. This study evaluates basinwide precipitation estimates from 9 yr (1998–2006) of Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA; 3B42 V.6) through comparison with available gauged data and the Variable Infiltration Capacity (VIC) semidistributed hydrology model applied to the La Plata basin. In general, the TMPA estimates agreed well with the gridded gauge data at monthly time scales, most likely because of the monthly adjustment to gauges performed in TMPA. The agreement between TMPA and gauge precipitation estimates was reduced at daily time scales, particularly for high rain rates. The TMPA-driven hydrologic model simulations were able to capture the daily flooding events and to represent low flows, although peak flows tended to be biased upward. The TMPA-driven simulated flows also reproduced seasonal and interannual streamflow variability well. This analysis shows that TMPA has potential for hydrologic forecasting in data-sparse regions.
Laparoscopic Pancreatoduodenectomy in 50 Consecutive Patients with No Mortality: A Single-Center Experience.
BACKGROUND Laparoscopic pancreatic surgery has gradually expanded to include pancreatoduodenectomy (PD). This study presents data regarding the efficacy of laparoscopic PD in a single center. METHODS This was a single-cohort, prospective observational study. From March 2012 to September 2015, 50 consecutive patients underwent laparoscopic PD using a five-trocar technique. Reconstruction of the digestive tract was performed with the double jejunal loop technique whenever feasible. Patients with radiological signs of portal vein invasion were operated on by an open approach. RESULTS Twenty-seven women and 23 men with a median age of 63 years (range 23-76) underwent laparoscopic PD. Five patients underwent total pancreatectomy. All but 1 patient (previous bariatric operation) underwent pylorus-preserving resection. Reconstruction was performed with a double jejunal loop in all cases except in 5 cases of total pancreatectomy. Conversion was required in 3 patients (6%) as a result of difficult dissection (two cases) and unsuspected portal vein invasion (1 patient). Median operative time was 420 minutes (range 360-660), and the 90-day mortality was nil. Pancreatic fistula occurred in 13 patients (26%). There was one grade C (reoperated), one grade B (percutaneous drainage), and all remaining were grade A (conservative treatment). Other complications included port site bleeding (n = 1), biliary fistula (n = 2), and delayed gastric emptying (n = 2). Mean hospital stay was 8.4 days (range 5-31). CONCLUSIONS Laparoscopic PD is feasible and safe, but is technically demanding and may best be reserved for highly skilled laparoscopic surgeons with proper training in high-volume centers. Isolated pancreatic anastomosis may be useful to decrease the severity of postoperative pancreatic fistulas. Therefore, it could be a good option in patients with a high risk of developing postoperative pancreatic fistula, as well as for less-experienced surgeons.
Job engagement, job satisfaction, and contrasting associations with person-job fit.
Forms of well-being vary in their activation as well as valence, differing in respect of energy-related arousal in addition to whether they are negative or positive. Those differences suggest the need to refine traditional assumptions that poor person-job fit causes lower well-being. More activated forms of well-being were proposed to be associated with poorer, rather than better, want-actual fit, since greater motivation raises wanted levels of job features and may thus reduce fit with actual levels. As predicted, activated well-being (illustrated by job engagement) and more quiescent well-being (here, job satisfaction) were found to be associated with poor fit in opposite directions--positively and negatively, respectively. Theories and organizational practices need to accommodate the partly contrasting implications of different forms of well-being.
Regression Shrinkage and Selection via the Lasso
We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.
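A minimal example of the penalized (Lagrangian) form of the lasso, using scikit-learn's Lasso estimator on synthetic data. The data and the penalty strength (alpha) are illustrative only; the exact sparsity pattern depends on alpha, but the characteristic behavior, that irrelevant coefficients are driven to exactly zero, is visible even in this small sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
true_coef = np.array([3.0, -2.0, 0, 0, 0, 0, 0, 0, 0, 0])   # only 2 relevant predictors
y = X @ true_coef + rng.normal(scale=0.5, size=n)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.3).fit(X, y)   # larger alpha => stronger L1 shrinkage

print("OLS coefficients:  ", np.round(ols.coef_, 2))   # typically all 10 nonzero
print("Lasso coefficients:", np.round(lasso.coef_, 2)) # irrelevant ones exactly 0
```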
Performance of cable suspended robots for upper limb rehabilitation
This work presents a general simulation tool to evaluate the performance of a set of cable suspended rehabilitation robots. Such a simulator is based on the mechanical model of the upper limb of a patient. The tool was employed to assess the performances of two cable-driven robots, the NeReBot and the MariBot, developed at the Robotics & Mechatronics Laboratories of the Department of Innovation in Mechanics and Management (DIMEG) of University of Padua, Italy. This comparison demonstrates that the second machine, which was conceived as an evolution of the first one, yields much better results in terms of patient's arm trajectories.
City-scale landmark identification on mobile devices
With recent advances in mobile computing, the demand for visual localization or landmark identification on mobile devices is gaining interest. We advance the state of the art in this area by fusing two popular representations of street-level image data — facade-aligned and viewpoint-aligned — and show that they contain complementary information that can be exploited to significantly improve the recall rates on the city scale. We also improve feature detection in low contrast parts of the street-level data, and discuss how to incorporate priors on a user's position (e.g. given by noisy GPS readings or network cells), which previous approaches often ignore. Finally, and maybe most importantly, we present our results according to a carefully designed, repeatable evaluation scheme and make publicly available a set of 1.7 million images with ground truth labels, geotags, and calibration data, as well as a difficult set of cell phone query images. We provide these resources as a benchmark to facilitate further research in the area.
SPINE: SParse Interpretable Neural Embeddings
Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through large scale human evaluation, we report that our resulting word embeddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.
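A minimal sketch of the k-sparse activation idea underlying such autoencoders: keep only the k largest hidden activations per input and zero the rest. The random projection below stands in for a trained encoder, and the dimensions are made up, so this illustrates only the sparsity mechanism, not SPINE's training objective or denoising procedure.

```python
import numpy as np

def k_sparse(hidden, k):
    """Keep the k largest activations per row, zero the rest."""
    out = np.zeros_like(hidden)
    top = np.argpartition(hidden, -k, axis=1)[:, -k:]   # indices of top-k per row
    rows = np.arange(hidden.shape[0])[:, None]
    out[rows, top] = hidden[rows, top]
    return out

rng = np.random.default_rng(0)
dense_embeddings = rng.normal(size=(4, 300))        # e.g. 4 pre-trained word vectors
W = rng.normal(scale=0.05, size=(300, 1000))        # encoder to a 1000-d overcomplete space
hidden = np.maximum(dense_embeddings @ W, 0.0)      # ReLU activations
sparse_codes = k_sparse(hidden, k=50)               # ~95% of dimensions exactly zero

print((sparse_codes != 0).sum(axis=1))              # 50 (or fewer) active units per word
```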
Ethical frontiers of ICT and older users: cultural, pragmatic and ethical issues
The reality of an ageing Europe has called attention to the importance of e-inclusion for a growing population of senior citizens. For some, this may mean closing the digital divide by providing access and support to technologies that increase citizen participation; for others, e-inclusion means access to assistive technologies to facilitate and extend their living independently. These initiatives address a social need and provide economic opportunities for European industry. While undoubtedly desirable, and supported by European Union initiatives, several cultural assumptions or issues related to the initiatives could benefit from fuller examination, as could their practical and ethical implications. This paper begins to consider these theoretical and practical concerns. The first part of the paper examines cultural issues and assumptions relevant to adopting e-technologies, and the ethical principles applied to them. These include (1) the persistence of ageism, even in e-inclusion; (2) different approaches to, and implications of independent living; and (3) the values associated with different ethical principles, given their implications for accountability to older users. The paper then discusses practical issues and ethical concerns that have been raised by the use of smart home and monitoring technologies with older persons. Understanding these assumptions and their implications will allow for more informed choices in promoting ethical application of e-solutions for older persons.
Development of the Great Lakes Ice-Circulation Model (GLIM): Application to Lake Erie in 2003–2004
Index words: Coupled Ice-Ocean Model; ice modeling; lake ice cover; ice thickness; ice speed; lake surface temperature; Great Lakes; Lake Erie. To simulate ice and water circulation in Lake Erie over a yearly cycle, a Great Lakes Ice-circulation Model (GLIM) was developed by applying a Coupled Ice-Ocean Model (CIOM) with a 2-km resolution grid. The hourly surface wind stress and thermodynamic forcings for input into the GLIM are derived from meteorological measurements interpolated onto the 2-km model grids. The seasonal cycles for ice concentration, thickness, velocity, and other variables are well reproduced in the 2003/04 ice season. Satellite measurements of ice cover were used to validate GLIM with a mean bias deviation (MBD) of 7.4%. The seasonal cycle for lake surface temperature is well reproduced in comparison to the satellite measurements with a MBD of 1.5%. Additional sensitivity experiments further confirm the important impacts of ice cover on lake water temperature and water level variations. Furthermore, a period including an extreme cooling (due to a cold air outbreak) and an extreme warming event in February 2004 was examined to test GLIM's response to rapidly-changing synoptic forcing.
Low cost QFN package design for millimeter-wave applications
This paper focuses on the design and development of a low-cost QFN package that is based on wirebond interconnects. One of the design goals is to extend the frequency at which the package can be used to 40-50 GHz (above the K band), in the millimeter-wave range. Owing to the use of mass production assembly protocols and materials, such as commercially available QFN in a mold compound, the design that is outlined in this paper significantly reduces the cost of assembly of millimeter wave modules. To operate the package at 50 GHz or a higher frequency, several key design features are proposed. They include the use of through vias (backside vias) and ground bondwires to provide ground return currents. This paper also provides rigorous validation steps that we took to obtain the key high frequency characteristics. Since a molding compound is used in conventional QFN packages, the material and its effectiveness in determining the signal propagation have to be incorporated in the overall design. However, the mold compound creates some extra challenges in the de-embedding task. For example, the mold compound must be removed to expose the probing pads so the effect of the microstrip on the GaAs chip can be obtained and de-embedded. Careful simulation and experimental validation reveal that the proposed QFN design achieves a return loss of -10 dB and an insertion loss of -1.5 dB up to 50 GHz.
"Cut and stick" rubbery ion gels as high capacitance gate dielectrics.
A free-standing polymer electrolyte called an ion gel is employed in both organic and inorganic thin-film transistors as a high capacitance gate dielectric. To prepare a transistor, the free-standing ion gel is simply laid over a semiconductor channel and a side-gate electrode, which is possible because of the gel's high mechanical strength.
Creating Trustworthy Robots : Lessons and Inspirations from Automated Systems
One of the most significant challenges of human-robot interaction research is designing systems which foster an appropriate level of trust in their users: in order to use a robot effectively and safely, a user must place neither too little nor too much trust in the system. In order to better understand the factors which influence trust in a robot, we present a survey of prior work on trust in automated systems. We also discuss issues specific to robotics which pose challenges not addressed in the automation literature, particularly related to reliability, capability, and adjustable autonomy. We conclude with the results of a preliminary web-based questionnaire which illustrate some of the biases which autonomous robots may need to overcome in order to promote trust in users.
Comparability of cause of death between ICD-9 and ICD-10: preliminary estimates.
OBJECTIVES This report presents preliminary results describing the effects of implementing the Tenth Revision of the International Classification of Diseases (ICD-10) on mortality statistics for selected causes of death effective with deaths occurring in the United States in 1999. The report also describes major features of the Tenth Revision (ICD-10), including changes from the Ninth Revision (ICD-9) in classification and rules for selecting underlying causes of death. Application of comparability ratios is also discussed. METHODS The report is based on cause-of-death information from a large sample of 1996 death certificates filed in the 50 States and the District of Columbia. Cause-of-death information in the sample includes underlying cause of death classified by both ICD-9 and ICD-10. Because the data file on which comparability information is derived is incomplete, results are preliminary. RESULTS Preliminary comparability ratios by cause of death presented in this report indicate the extent of discontinuities in cause-of-death trends from 1998 through 1999 resulting from implementing ICD-10. For some leading causes (e.g., Septicemia, Influenza and pneumonia, Alzheimer's disease, and Nephritis, nephrotic syndrome and nephrosis), the discontinuity in trend is substantial. The ranking of leading causes of death is also substantially affected for some causes of death. CONCLUSIONS Results of this study, although preliminary, are essential to analyzing trends in mortality between ICD-9 and ICD-10. In particular, the results provide a means for interpreting changes between 1998, which is the last year in which ICD-9 was used, and 1999, the year in which ICD-10 was implemented for mortality in the United States.
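Assuming the usual definition used for bridging classification revisions (the abstract does not restate it), the comparability ratio for a given cause is the ratio of deaths assigned to that cause when the same certificates are coded under the new and the old revisions:

\[
R_{\text{comp}} \;=\; \frac{\text{deaths classified to the cause under ICD-10}}{\text{deaths classified to the same cause under ICD-9}}.
\]

With hypothetical numbers, a ratio of 1.20 would mean about 20% more deaths are attributed to the cause purely because of the coding change, so a 1999 count of 6,000 ICD-10-coded deaths would be comparable to 6,000 / 1.20 = 5,000 deaths coded under ICD-9, and trend analyses across the 1998–1999 boundary should be adjusted accordingly.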
Lanreotide 60 mg, a Longer-Acting Somatostatin Analog: Tumor Shrinkage and Hormonal Normalization in Acromegaly
Background: Somatostatin analogues (SA) are nowadays the cornerstone of the medical treatment of acromegaly. We evaluated the effects of a new 60 mg longer-acting formulation of lanreotide (LAN60) on GH/IGF-I levels and tumor size. Patients: Twenty-one acromegalics entered a prospective monocentric open study. Eight were consecutive “de novo” patients (group I). Thirteen patients sensitive to SA (GH levels < 2.5 μg/l and/or IGF-I normalization on chronic LAN 30 mg (LAN30) treatment) were switched to LAN60 (group II). Protocol: LAN60 was administered IM for 6 cycles at 28 day intervals. In group I, when GH/IGF-I remained pathological, the intervals were shortened to 21 days for the last three cycles. Controls: GH/IGF-I at the end of the 1st, 3rd and 6th cycle; MRI at the end of the study in all patients in group I bearing an adenoma. Results: Group I. GH (p = 0.00638, below 2.5 μg/l in two patients) and IGF-I (p = 0.0289, normalized in 5) significantly decreased. In one of two patients, shortening the LAN60 schedule was more effective in suppressing GH/IGF-I. Group II. No change in GH and IGF-I levels was observed with the administration of LAN60 instead of LAN30. On LAN60, GH remained below 2.5 μg/l in 8/10 patients and IGF-I normal in 11/11 patients that had attained those values on LAN30. Tumor volume markedly shrank (23% to 64% vs basal), from 1400 (664–1680) mm3 to 520 (500–960) mm3 (median, interquartile, p = 0.0218) in all the 5 evaluable patients. Conclusion: LAN60 is a very effective and longer-lasting formulation for the treatment of acromegaly. A closer administration schedule might achieve greater efficacy. Its effectiveness in shrinking tumors opens new perspectives in the therapy of acromegaly.
Distributed Illumination Control With Local Sensing and Actuation in Networked Lighting Systems
We consider the problem of illumination control in a networked lighting system wherein luminaires have local sensing and actuation capabilities. Each luminaire: 1) consists of a light-emitting diode (LED) based light source dimmable by a local controller; 2) is actuated based on sensing information from a presence sensor, which determines occupant presence, and a light sensor, which measures illuminance, within their respective fields of view; and 3) includes a communication module to exchange control information within a local neighborhood. We consider distributed illumination control in such an intelligent lighting system to achieve presence-adaptive and daylight-integrated spatial illumination rendering. The rendering is specified as target values at the light sensors, and under these constraints, a local controller has to determine the optimum dimming levels of its associated LED luminaire so that the power consumed in rendering is minimized. The formulated optimization problem is a distributed linear programming problem with constraints on exchanging control information within a neighborhood. A distributed optimization algorithm is presented to solve this problem and its stability and convergence are studied. Sufficient conditions, in terms of parameter selection, under which the algorithm can achieve a feasible solution are provided. The performance of the algorithm is evaluated in an indoor office setting in terms of achieved illuminance rendering and power savings.
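For intuition, a centralized (non-distributed) version of the rendering problem can be posed as a small linear program: minimize total dimming, as a proxy for power, subject to meeting illuminance targets at the light sensors. The gain matrix, daylight levels, and targets below are invented for illustration; the paper's contribution is the distributed algorithm with neighborhood-only communication, which this sketch does not implement.

```python
import numpy as np
from scipy.optimize import linprog

# Illuminance gain matrix: G[i, j] = lux contributed at sensor i by luminaire j at full output.
G = np.array([[250.0,  60.0,  20.0],
              [ 50.0, 240.0,  60.0],
              [ 20.0,  70.0, 260.0]])
target = np.array([300.0, 250.0, 300.0])   # desired lux at each light sensor
daylight = np.array([80.0, 20.0, 50.0])    # measured daylight contribution

# Minimize total dimming subject to G @ d + daylight >= target and 0 <= d <= 1.
res = linprog(c=np.ones(3),
              A_ub=-G, b_ub=-(target - daylight),
              bounds=[(0.0, 1.0)] * 3, method="highs")
print("dimming levels:", np.round(res.x, 3))
```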
Hardware, Design and Implementation Issues on a Fpga-Based Smart Camera
Processing images to extract useful information in real time is a complex task, dealing with large amounts of iconic data and requiring intensive computation. Smart cameras use embedded processing to save the host system from the low-level processing load and to reduce communication flows and overheads. Field programmable devices are of special interest for smart camera design: flexibility, reconfigurability and parallel processing capabilities are some especially important features. In this paper we present an FPGA-based smart camera research platform. The hardware architecture is described, and some design issues are discussed. Our goal is to use the possibility of reconfiguring the FPGA device in order to adapt the system architecture to a given application. To that end, a design methodology based on pre-programmed processing elements is proposed and sketched. Some implementation issues are discussed and a template tracking application is given as an example, with its experimental results.
Sequential Data Fusion of GNSS Pseudoranges and Dopplers With Map-Based Vision Systems
Tightly coupling GNSS pseudorange and Doppler measurements with other sensors is known to increase the accuracy and consistency of positioning information. Nowadays, high-accuracy geo-referenced lane marking maps are seen as key information sources in autonomous vehicle navigation. When an exteroceptive sensor such as a video camera or a lidar is used to detect them, lane markings provide positioning information which can be merged with GNSS data. In this paper, measurements from a forward-looking video camera are merged with raw GNSS pseudoranges and Dopplers on visible satellites. To create a localization system that provides pose estimates with high availability, dead reckoning sensors are also integrated. The data fusion problem is then formulated as sequential filtering. A reduced-order state space modeling of the observation problem is proposed to give a real-time system that is easy to implement. A Kalman filter with measured input and correlated noises is developed using a suitable error model of the GNSS pseudoranges. Our experimental results show that this tightly coupled approach performs better, in terms of accuracy and consistency, than a loosely coupled method using GNSS fixes as inputs.
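The sequential-filtering backbone of such a system is the familiar Kalman predict/update cycle, sketched generically below on a toy 1-D constant-velocity model. The paper's reduced-order state space, correlated-noise handling, and specific GNSS/lane-marking measurement models are not reproduced here; matrices and values are illustrative.

```python
import numpy as np

def kf_predict(x, P, F, Q, u=None, B=None):
    """Propagate the state estimate and covariance through the motion model."""
    x = F @ x + (B @ u if B is not None else 0.0)
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z (e.g. a position-like observation)."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D constant-velocity example: state = [position, velocity], measure position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.25]])

x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, z=np.array([0.3]), H=H, R=R)
print(np.round(x, 3))
```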
A 28-GHz CMOS Broadband Single-Path Power Amplifier with 17.4-dBm P1dB for 5G Phased-Array
This paper reports a fully integrated broadband and linear power amplifier (PA) for 5G phased-array. Weakly- and strongly-coupled transformers are compared and analyzed in detail. The output strongly-coupled transformer is designed to transfer maximum power. The inter-stage weakly-coupled transformer is optimized to broaden the bandwidth. Besides, linearity is highly improved by operating the PA in the deep class AB region. Designed and implemented in a 65-nm CMOS process with a 1 V supply, the two-stage PA delivers a maximum small-signal gain of 19 dB. A maximum 1-dB compressed power (P1dB) of 17.4 dBm and saturated output power (PSAT) of 18 dBm are measured at 28 GHz. The power-added efficiency (PAE) at P1dB is 26.5%. The measured P1dB is above 16 dBm from 23 to 32 GHz, covering potential 5G bands worldwide around 28 GHz.
Sildenafil in women with sexual arousal disorder following spinal cord injury
Study design:Double-blind, placebo-controlled, flexible-dose study.Objective:To evaluate the efficacy, safety and tolerability of oral sildenafil in women with female sexual arousal disorder as a result of SCI (paraplegia/tetraplegia).Setting:The study was conducted at clinical practice sites in North America (n =23), 11 European countries (n =23), Australia (n =4) and South Africa (n =2).Methods:129 women were randomized and treated with sildenafil or matching placebo. A 4-week baseline period was followed by 12 weeks of treatment, which could be increased from 50 to 100 mg or decreased to 25 mg once during the treatment period, depending on efficacy and tolerability. By use of an event log, sexual activity was monitored between screening and the end of treatment. The Sexual Function Questionnaire, the Sexual Quality of Life Questionnaire–Female, a global efficacy question and Sexual Distress Question were also assessed.Results:Sildenafil-treated women and placebo-treated women had an increase in their percentage of sexual activities throughout the course of the study, with no statistically significant difference between groups in the percentage of successful sexual activities at end of treatment versus baseline. There were also no statistically significant differences between sildenafil- and placebo-treated women on the aforementioned measures. The most common adverse events included headache and vasodilatation.Conclusion:The results of this study are similar to other reports regarding a lack of clinically meaningful benefit of sildenafil in other populations of women.Sponsorship:This study was sponsored by Pfizer Inc.
An evolutionary approach to library materials acquisition problems
This paper studies a library materials acquisition problem in which materials are acquired to satisfy the information needs of patrons. The objective is to maximize the average preference of acquired subjects. We first formulate the problem by means of mathematical programming. Without loss of generality, some empirical constraints are considered, including subject costs and the required amount of specified disciplines. We then design a particle swarm optimization algorithm to solve the problem. CPLEX, a linear programming software package, was used to solve the small problem instances and provide optimal solutions for comparison. Computational results show that the proposed algorithm can optimally solve problems within a reasonable amount of time.
Spatial Predictive Control for Agile Semi-Autonomous Ground Vehicles
This paper presents a formulation to the obstacle avoidance problem for semi-autonomous ground vehicles. The planning and tracking problems have been divided into a two-level hierarchical controller. The high level solves a nonlinear model predictive control problem to generate a feasible and obstacle free path. It uses a nonlinear vehicle model and utilizes a coordinate transformation which uses vehicle position along a path as the independent variable. The low level uses a higher fidelity model and solves the MPC problem with a sequential quadratic programming approach to track the planned path. Simulations show the method’s ability to safely avoid multiple obstacles while tracking the lane centerline. Experimental tests on a semi-autonomous passenger vehicle driving at high speed on ice show the effectiveness of the approach.
Efficient Variants of the ICP Algorithm
The ICP (Iterative Closest Point) algorithm is widely used for geometric alignment of three-dimensional models when an initial estimate of the relative pose is known. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. We enumerate and classify many of these variants, and evaluate their effect on the speed with which the correct alignment is reached. In order to improve convergence for nearly-flat meshes with small features, such as inscribed surfaces, we introduce a new variant based on uniform sampling of the space of normals. We conclude by proposing a combination of ICP variants optimized for high speed. We demonstrate an implementation that is able to align two range images in a few tens of milliseconds, assuming a good initial guess. This capability has potential application to real-time 3D model acquisition and model-based tracking.
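For reference, a bare-bones point-to-point ICP iteration looks like the sketch below: closest-point matching with a k-d tree followed by a closed-form (SVD/Kabsch) rigid transform. It deliberately omits the sampling, weighting, and rejection variants the paper evaluates, and the toy alignment problem is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, iters=20):
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iters):
        _, idx = tree.query(current)        # closest-point matching
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t         # apply the incremental transform
    return current

# Toy check: recover a small rotation + translation of a random point cloud.
rng = np.random.default_rng(1)
target = rng.normal(size=(500, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = (target - np.array([0.05, 0.02, 0.0])) @ Rz.T
aligned = icp(source, target)
print(np.abs(aligned - target).max())       # near zero once correspondences lock in
```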
Mechanical Compliance Control System for a pneumatic robot arm
The design and control of robots from the perspective of human safety is desirable. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to ensure in previous systems. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values: the end compliance of the arm and the end position and posture of the arm.
Combining ontologies for requirements elicitation
A variety of ontologies are used to define and represent knowledge in many domains. Many ontological approaches have been successfully applied in the field of Requirements Engineering. In order to successfully harness the disparate ontologies, researchers have focused on various ontology merging techniques. However, no serious attempts have been made in the area of Requirements Elicitation where ontology merging has the potential to be quite effective in generating requirements specifications quickly through the means of reasoning based on combined ontologies. This paper attempts to define an approach needed to effectively combine ontologies to enhance the Requirements Elicitation process. A methodology is proposed whereby domain knowledge encapsulated in existing ontologies is combined with an ontology being developed to capture the requirements. Using this, requirements engineers would be able to create more refined Requirements Deliverables.
A Short Survey of Biomedical Relation Extraction Techniques
Biomedical information is growing rapidly in recent years and retrieving useful data through information extraction systems is getting more attention. In the current research, we focus on different aspects of relation extraction techniques in the biomedical domain and briefly describe the state of the art for relation extraction between a variety of biological elements.
Energy-efficient GPU design with reconfigurable in-package graphics memory
We propose an energy-efficient reconfigurable in-package graphics memory design that integrates wide-interface graphics DRAMs with GPU on a silicon interposer. We reduce the memory power consumption by scaling down the supply voltage and frequency while maintaining the same or higher peak bandwidth. Furthermore, we design a reconfigurable memory interface and propose two reconfiguration mechanisms to optimize system energy efficiency and throughput. The proposed memory architecture can reduce memory power consumption up to 54%, without reconfiguration. The reconfigurable interface can improve system energy efficiency by 23% and throughput by 30% under a power budget of 240 W.
INTERSEX AND GENDER IDENTITY DISORDERS Male Gender Identity in an XX Individual with Congenital Adrenal Hyperplasia
Introduction. In spite of significant changes in the management policies of intersexuality, clinical evidence shows that not all pubertal or adult individuals live according to the sex assigned during infancy. Aim. The purpose of this study was to analyze the clinical management of an individual diagnosed as a female pseudohermaphrodite with congenital adrenal hyperplasia (CAH), simple virilizing form, four decades ago but who currently lives as a monogamous heterosexual male. Methods. We studied the clinical files spanning from 1965 to 1991 of an intersex individual. In addition, we conducted a magnetic resonance imaging (MRI) study of the abdominopelvic cavity and a series of interviews using the oral history method. Main Outcome Measures. Our analysis is based on the clinical evidence that led to the CAH diagnosis in the 1960s in light of recent clinical testing to confirm such diagnosis. Results. Analysis of reported values for 17-ketosteroids and 17-hydroxycorticosteroids from 24-hour urine samples during an 8-year period showed poor adrenal suppression in spite of adherence to treatment. A recent MRI study confirmed the presence of hyperplastic adrenal glands as well as the presence of a prepubertal uterus. Semistructured interviews with the individual confirmed a life history consistent with a male gender identity. Conclusions. Although the American Academy of Pediatrics recommends that XX intersex individuals with CAH should be assigned to the female sex, this practice harms some individuals as they may self-identify as males. In the absence of comorbid psychiatric factors, the discrepancy between infant sex assignment and gender identity later in life underlines the need for a reexamination of current standards of care for individuals diagnosed with CAH. Jorge JC, Echeverri C, Medina Y, and Acevedo P. Male gender identity in an XX individual with congenital adrenal hyperplasia. J Sex Med 2008;5:122–131.
Audio Signal Classification: History and Current Techniques
Audio signal classification (ASC) consists of extracting relevant features from a sound, and of using these features to identify into which of a set of classes the sound is most likely to fit. The feature extraction and grouping algorithms used can be quite diverse depending on the classification domain of the application. This paper presents background necessary to understand the general research domain of ASC, including signal processing, spectral analysis, psychoacoustics and auditory scene analysis. Also presented are the basic elements of classification systems. Perceptual and physical features are discussed, as well as clustering algorithms and analysis duration. Neural nets and hidden Markov models are discussed as they relate to ASC. These techniques are presented with an overview of the current state of the ASC research literature.
An efficient design and implementation of LSM-tree based key-value store on open-channel SSD
Various key-value (KV) stores are widely employed for data management to support Internet services as they offer higher efficiency, scalability, and availability than relational database systems. The log-structured merge tree (LSM-tree) based KV stores have attracted growing attention because they can eliminate random writes and maintain acceptable read performance. Recently, as the price per unit capacity of NAND flash decreases, solid state disks (SSDs) have been extensively adopted in enterprise-scale data centers to provide high I/O bandwidth and low access latency. However, it is inefficient to naively combine LSM-tree-based KV stores with SSDs, as the high parallelism enabled within the SSD cannot be fully exploited. Current LSM-tree-based KV stores are designed without assuming SSD's multi-channel architecture. To address this inadequacy, we propose LOCS, a system equipped with a customized SSD design, which exposes its internal flash channels to applications, to work with the LSM-tree-based KV store, specifically LevelDB in this work. We extend LevelDB to explicitly leverage the multiple channels of an SSD to exploit its abundant parallelism. In addition, we optimize scheduling and dispatching policies for concurrent I/O requests to further improve the efficiency of data access. Compared with the scenario where a stock LevelDB runs on a conventional SSD, the throughput of the storage system can be improved by more than 4X after applying all proposed optimization techniques.
Mining Roles with Multiple Objectives
With the growing adoption of Role-Based Access Control (RBAC) in commercial security and identity management products, how to facilitate the process of migrating a non-RBAC system to an RBAC system has become a problem with significant business impact. Researchers have proposed to use data mining techniques to discover roles to complement the costly top-down approaches for RBAC system construction. An important problem is how to construct RBAC systems with low complexity. In this article, we define the notion of a weighted structural complexity measure and propose a role mining algorithm that mines RBAC systems with low structural complexity. Another key problem that has not been adequately addressed by existing role mining approaches is how to discover roles with semantic meanings. In this article, we study the problem in two primary settings with different information availability. When the only information is the user-permission relation, we propose to discover roles whose semantic meaning is based on formal concept lattices. We argue that the theory of formal concept analysis provides a solid theoretical foundation for mining roles from a user-permission relation. When user-attribute information is also available, we propose to create roles that can be explained by expressions of user-attributes. Since an expression of attributes describes a real-world concept, the corresponding role represents a real-world concept as well. Furthermore, the algorithms we propose balance the semantic guarantee of roles with system complexity. Finally, we indicate how to create a hybrid approach that combines top-down candidate roles with bottom-up role mining. Our experimental results demonstrate the effectiveness of our approaches.
Focal Loss for Dense Object Detection
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron.
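The reshaped loss itself is compact; a direct NumPy transcription of the binary focal loss FL(p_t) = -α_t (1 - p_t)^γ log(p_t) is sketched below. RetinaNet applies it per anchor over sigmoid outputs; the example probabilities here are illustrative only.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss, averaged over examples.

    p : predicted foreground probabilities in (0, 1)
    y : binary labels (1 = foreground, 0 = background)
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

# A well-classified easy negative contributes almost nothing...
print(focal_loss(np.array([0.01]), np.array([0])))   # ~7.5e-7
# ...while a hard, misclassified positive keeps a large loss.
print(focal_loss(np.array([0.10]), np.array([1])))   # ~0.47
```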
Giandomenico Tiepolo's Il Mondo Nuovo: Peep Shows and the “Politics of Nostalgia”
What was the historiography of Il mondo nuovo, a fresco painted in 1791 by Giandomenico Tiepolo? How did its title emerge? Giandomenico likely found the inspiration for his subject in popular entertainment on Venice's Piazzetta. The houselike structure in the fresco's middle ground—a peep show—had been labeled il mondo nuovo by the eighteenth-century playwright Carlo Goldoni. Yet the fresco was not named until after 1906. Art historian Pompeo Molmenti introduced the Goldoni-inspired title, his efforts seconded by Corrado Ricci, a powerful art administrator. Both were steeped in the “politics of nostalgia,” associated with the Italian Aesthetic movement.
Graphical interface for logic programming
Much of the power of the PROLOG language and logic programming is due to the unification process between variables and the matching between patterns. Moreover, reasoning and theorem proving are easily achievable through the logic and the backtracking mechanism of the language.
Wideband H-Plane Horn Antenna Based on Ridge Substrate Integrated Waveguide (RSIW)
A substrate integrated waveguide (SIW) H-plane sectoral horn antenna, with significantly improved bandwidth, is presented. A tapered ridge, consisting of a simple arrangement of vias on the flared side wall within the multilayer substrate, is introduced to enlarge the operational bandwidth. A simple feed configuration is suggested to provide the propagating wave for the antenna structure. The proposed antenna is simulated by two well-known full-wave packages, Ansoft HFSS and CST Microwave Studio, which are based on different numerical methods. Close agreement between the simulation results is reached. The designed antenna shows good radiation characteristics and low VSWR, lower than 2.5, over the whole 18-40 GHz frequency range.
Big data: A review
Big data is a term for massive data sets having a larger, more varied and complex structure, with attendant difficulties in storing, analyzing and visualizing them for further processes or results. The process of researching massive amounts of data to reveal hidden patterns and secret correlations is named big data analytics. This useful information helps companies and organizations gain richer and deeper insights and an advantage over the competition. For this reason, big data implementations need to be analyzed and executed as accurately as possible. This paper presents an overview of big data's content, scope, samples, methods, advantages and challenges, and discusses privacy concerns.
Criminal network analysis and visualization
A new generation of data mining tools and applications work to unearth hidden patterns in large volumes of crime data.
Birth size, early childhood growth, and adolescent obesity in a Brazilian birth cohort
DESIGN: Cross-sectional visit to a subsample of a population-based birth cohort.SAMPLE: A total of 1076 adolescents aged 14–16 y; 51% males.MEASUREMENTS: Weight, height, subscapular and triceps skinfolds were used for assessing overweight and obesity in adolescence, using WHO-recommended criteria. Anthropometric status in early life was measured through birthweight and through weight and length/height at average ages of 20 and 43 months.RESULTS: All analyses were adjusted for socioeconomic and maternal confounding factors. Birthweight and attained size (Z-scores of weight-for-age, height-for-age and weight-for-height) at 20 and 43 months were associated linearly and positively with overweight and obesity in adolescence. Four in each five obese adolescents were not overweight in childhood. Rapid weight gain, both between birth and 20 months, and between 20 and 43 months, was also associated with adolescent overweight and with obesity. Rapid height gain between 20 and 43 months was associated with overweight only. Most associations were stronger for boys.CONCLUSIONS: Birth size, attained size in childhood and particularly growth velocity in early life were associated with increased prevalence of obesity and overweight in Brazilian adolescents. On the other hand, the vast majority of overweight or obese adolescents were not overweight children. Early interventions are undoubtedly important, but population-based strategies aimed at improving diets and physical activity appear to have greater long-term potential than measures targeted at overweight children.
Meta-Gaussian Information Bottleneck
We present a reformulation of the information bottleneck (IB) problem in terms of copula, using the equivalence between mutual information and negative copula entropy. Focusing on the Gaussian copula, we extend the analytical IB solution available for the multivariate Gaussian case to distributions with a Gaussian dependence structure but arbitrary marginal densities, also called meta-Gaussian distributions. This opens new possible applications of IB to continuous data and provides a solution that is more robust to outliers.
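Stated for the bivariate case (as an illustration of the identity the abstract relies on), the mutual information of a meta-Gaussian pair equals the negative Gaussian-copula entropy and depends only on the correlation of the normal scores, not on the marginal densities:

\[
I(X;Y) \;=\; -\,h\!\left(c_{XY}\right) \;=\; -\tfrac{1}{2}\,\ln\!\left(1-\rho^{2}\right),
\qquad
\rho \;=\; \operatorname{corr}\!\bigl(\Phi^{-1}(F_X(X)),\; \Phi^{-1}(F_Y(Y))\bigr),
\]

where \(F_X, F_Y\) are the marginal CDFs and \(\Phi^{-1}\) is the standard normal quantile function. In the multivariate setting the analogous quantity involves the determinant of the normal-score correlation matrix; the bivariate form above is given only to make the marginal-invariance of the objective concrete.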
Deep Comparison: Relation Columns for Few-Shot Learning
Few-shot deep learning is a topical challenge area for scaling visual recognition to open-ended growth in the space of categories to recognise. A promising line of work towards realising this vision is deep networks that learn to match queries with stored training images. However, methods in this paradigm usually train a deep embedding followed by a single linear classifier. Our insight is that effective general-purpose matching requires discrimination with regard to features at multiple abstraction levels. We therefore propose a new framework termed Deep Comparison Network (DCN) that decomposes embedding learning into a sequence of modules, and pairs each with a relation module. The relation modules compute a non-linear metric to score the match using the corresponding embedding module's representation. To ensure that all embedding modules' features are used, the relation modules are deeply supervised. Finally, generalisation is further improved by a learned noise regulariser. The resulting network achieves state-of-the-art performance on both miniImageNet and tieredImageNet, while retaining the appealing simplicity and efficiency of deep metric learning approaches.
Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags
In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on data collected from a real-world microblogging service demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-the-art methods is around 12.2% in F1 score.
Anonymous Shopping in the Internet by Separation of Data
Whenever clients shop on the Internet, they provide identifying data about themselves to parties such as the web shop, the shipper and the payment system. If merged with their shopping history, these identifying data might be misused for targeted advertising or even for manipulation of the clients. The data also contain credit card or bank account numbers, which may be used for unauthorized money transactions by the involved parties or by criminals hacking the parties' computing infrastructure. In order to minimize these risks, we propose an approach for anonymous shopping by separation of data. We argue for the feasibility of our approach by discussing important operations such as simple reclamation cases and criminal investigations.
K-SVD: DESIGN OF DICTIONARIES FOR SPARSE REPRESENTATION
In recent years there has been growing interest in the study of sparse representation for signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. In this paper we propose a novel algorithm, the K-SVD algorithm, which generalizes the K-means clustering process, for adapting dictionaries in order to achieve sparse signal representations. We analyze this algorithm and demonstrate its results both on synthetic tests and in applications to real data.
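A compact NumPy/scikit-learn sketch of one K-SVD iteration is given below: sparse coding of all signals with orthogonal matching pursuit, followed by an SVD-based update of each atom and its associated coefficients. Dimensions, data and the choice of OMP solver are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd_step(Y, D, n_nonzero=3):
    """One K-SVD iteration: sparse-code Y over D with OMP, then update
    each atom (and its coefficients) from the SVD of the restricted residual."""
    X = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)    # (K, N) sparse codes
    for k in range(D.shape[1]):
        users = np.flatnonzero(X[k])                      # signals using atom k
        if users.size == 0:
            continue
        # residual with atom k's contribution added back in
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                                 # updated (unit-norm) atom
        X[k, users] = s[0] * Vt[0]                        # updated coefficients
    return D, X

# Toy usage: 20-dimensional signals, 50 examples, 30 atoms.
rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 50))
D = rng.normal(size=(20, 30))
D /= np.linalg.norm(D, axis=0)
for _ in range(5):
    D, X = ksvd_step(Y, D)
print("residual norm:", np.linalg.norm(Y - D @ X))
```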
A randomized controlled trial of the effect of standardized patient scenarios on dental hygiene students' confidence in providing tobacco dependence counseling.
PURPOSE Dental hygienists report a lack of confidence in initiating Tobacco Dependence Counseling (TDC) with their patients who smoke. The purpose of this study was to determine whether the confidence of dental hygiene students in providing TDC can be increased by Standardized Patient (SP) training, and whether that confidence can be sustained over time. METHODS A 2-parallel-group randomized design was used to compare the confidence of students receiving SP training with that of students with no SP training. After a classroom lecture, all subjects (n=27) received a baseline test of knowledge and confidence. Subjects were randomly assigned to test and control groups with equivalent mean knowledge scores. The test group subjects participated in an SP TDC session. Both groups gained parallel experience treating patients who were smokers and giving TDC in clinical scenarios during the 6-month time period. One-week end-of-training and 6-month post-training assessments were administered to both groups. ANCOVA compared mean confidence scores. RESULTS End-of-training scores at 1 week showed a statistically significant increase (p=0.002) in overall mean confidence following SP training for individuals in the test group. The 6-month follow-up test results showed a slight decline in confidence scores among subjects in the test group and an overall gain in confidence for control group participants. However, overall confidence scores were comparable between the groups. CONCLUSION SP training improved dental hygiene students' initial confidence in providing TDC, and this confidence was sustained, though not to a significant degree. Clinical experience alone increased confidence. Further studies may help determine how the initial confidence gained by SP training can be sustained and what role clinical experience plays in overall confidence in providing TDC.
Using Collective Intelligence to Route Internet Traffic
A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function. We summarize the theory of COINs, then present experiments using that theory to design COINs that control Internet traffic routing. These experiments indicate that COINs outperform all previously investigated RL-based shortest-path routing algorithms.
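The paper's COIN construction is not reproduced here, but the flavour of the RL-based shortest-path routing baselines it is compared against can be conveyed with a minimal Q-routing-style update, sketched below on a tiny made-up topology; the graph, learning rate and hop cost are all invented for illustration.

```python
import random
from collections import defaultdict

# Toy network: node -> neighbours, each hop costs 1 time unit.
neighbours = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

# Q[x, (d, y)]: estimated time to deliver a packet at node x to destination d
# when forwarding it to neighbour y.
Q = defaultdict(float)
alpha = 0.5

def route_packet(src, dst, max_hops=20):
    """Route one packet greedily on the current Q estimates, updating Q online."""
    x, hops = src, 0
    while x != dst and hops < max_hops:
        y = min(neighbours[x], key=lambda n: Q[x, (dst, n)])
        # neighbour's best remaining estimate (zero once y is the destination)
        remaining = 0.0 if y == dst else min(Q[y, (dst, n)] for n in neighbours[y])
        # Q-routing update: one hop of delay plus the neighbour's estimate
        Q[x, (dst, y)] += alpha * (1.0 + remaining - Q[x, (dst, y)])
        x, hops = y, hops + 1
    return hops

random.seed(0)
for _ in range(200):                       # train on random source nodes
    route_packet(random.choice([0, 1, 2]), 3)
print("hops from 0 to 3:", route_packet(0, 3))
```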
Differences in eating behaviours, dietary intake and body weight status between male and female Malaysian University students.
INTRODUCTION University students are potentially important targets for the promotion of healthy lifestyles, as this may reduce the risks of lifestyle-related disorders later in life. This cross-sectional study examined differences in eating behaviours, dietary intake, weight status, and body composition between male and female university students. METHODOLOGY A total of 584 students (59.4% females and 40.6% males) aged 20.6 ± 1.4 years from four Malaysian universities in the Klang Valley participated in this study. Participants completed the Eating Behaviours Questionnaire and a two-day 24-hour dietary recall. Body weight, height, waist circumference and percentage of body fat were measured. RESULTS About 14.3% of males and 22.4% of females were underweight, while 14.0% of males and 12.3% of females were overweight or obese. A majority of the participants (73.8% of males and 74.6% of females) skipped at least one meal daily in the past seven days. Breakfast was the most frequently skipped meal. Both males and females frequently snacked during morning tea time. Fruits and biscuits were the most frequently consumed snack items. More than half of the participants did not meet the Malaysian Recommended Nutrient Intake (RNI) for energy, vitamin C, thiamine, riboflavin, niacin, iron (females only), and calcium. Significantly more males than females achieved the RNI levels for energy, protein and iron intakes. CONCLUSION This study highlights the presence of unhealthy eating behaviours, inadequate nutrient intake, and a high prevalence of underweight among university students. Energy and nutrient intakes differed between the sexes. Therefore, promoting healthy eating among young adults is crucial to achieving a healthy nutritional status.
Cryoablation vs radiofrequency ablation for the treatment of renal cell carcinoma: a meta-analysis of case series studies.
UNLABELLED Study Type - Therapy (systematic review). Level of Evidence 2b. What's known on the subject? and What does the study add? The oncological success of partial nephrectomy in the treatment of small renal masses is well established, and partial nephrectomy has largely supplanted the radical approach. In the last decade, laparoscopy has been adopted as the new surgical approach for the treatment of renal cell carcinoma. Laparoscopy offers the advantages of lower analgesic use, shorter hospital stay, and quicker recovery time. More recently, ablative technologies have been investigated as an alternative to laparoscopic partial nephrectomy. These techniques can often be performed percutaneously in the radiology suite, or laparoscopically without the need for hilar clamping. However, only the cryoablation and radiofrequency ablation modalities have had widespread use, with several series reporting short- to intermediate-term results. This review shows that both cryoablation and radiofrequency ablation are promising therapies in patients with small renal tumours (<4 cm) who are considered poor candidates for more involved surgery. OBJECTIVE • To determine the current status of the literature regarding the clinical efficacy and complication rates of cryoablation vs radiofrequency ablation in the treatment of small renal tumours. METHODS • A review of the literature was conducted. There was no language restriction. Studies were obtained from the following sources: MEDLINE, EMBASE and LILACS. • Inclusion criteria were (i) case series design with more than one case reported, (ii) use of cryoablation or radiofrequency ablation, (iii) patients with renal cell carcinoma and (iv) outcome reported as clinical efficacy. • When available, we also quantified the complication rates from each included study. • Proportional meta-analysis was performed on both outcomes with a random-effects model. The 95% confidence intervals were also calculated. RESULTS • Thirty-one case series (20 cryoablation, 11 radiofrequency ablation) met all inclusion criteria. • The pooled proportion of clinical efficacy was 89% for cryoablation therapy, from a total of 457 cases. There was statistically significant heterogeneity between these studies, reflecting inconsistency in clinical and methodological aspects. • The pooled proportion of clinical efficacy was 90% for radiofrequency ablation therapy, from a total of 426 cases. There was no statistically significant heterogeneity between these studies. • There was no statistically significant difference in complication rates between cryoablation and radiofrequency ablation. CONCLUSIONS • This review shows that both ablation therapies have similar efficacy and complication rates. • There is an urgent need for clinical trials with long-term data to establish which intervention is most suitable for the treatment of small renal masses.
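For readers unfamiliar with proportional meta-analysis, the sketch below pools proportions on the logit scale under a random-effects model with a DerSimonian-Laird estimate of between-study variance; this is a generic illustration of the technique, not the authors' analysis, and the event counts are invented.

```python
import numpy as np

def pooled_proportion(events, totals):
    """Random-effects pooled proportion on the logit scale
    (DerSimonian-Laird between-study variance), with a 95% CI."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                      # logit-transformed proportions
    v = 1 / events + 1 / (totals - events)       # within-study variances
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                # DL between-study variance
    w_re = 1 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda t: 1 / (1 + np.exp(-t))
    return expit(mu), expit(mu - 1.96 * se), expit(mu + 1.96 * se)

# Invented example: successes and sample sizes from five hypothetical series.
print(pooled_proportion([45, 30, 80, 22, 60], [50, 35, 90, 25, 70]))
```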
The Impact of Text-Messaging on Vocabulary Learning of Iranian EFL Learners
Vocabulary learning is one of the major challenges foreign language learners face during the process of learning a language. One way to alleviate the burden is to assist students in becoming independent learners during the process of L2 vocabulary learning. This could be achieved by instructing learners to use their mobile phones as an efficient tool to learn vocabulary. Mobile phones are a recent addition to the information and communication technologies (ICT) for learning. The present research investigates the effectiveness of text-messaging on the vocabulary learning of EFL learners. To fulfill the purpose of the study, 60 learners from among 90 Iranian high school students participated in the study. The participants were divided into two equal groups, experimental and control, based on the results of a proficiency test. The target words in the book English for Pre-University Students by Birjandi, Samimi and Anabisarab (2007) were taught to both groups using synonyms and antonyms. Six to seven words were introduced and taught to these students each session. The participants in the experimental group were required to send the researcher SMSs containing a sentence for each word covered in class, while those in the control group wrote sentences containing the target words, exchanged them with their partners, and brought their assignments to class the next session. Results of t-test analysis indicated that participants in the experimental group outperformed those in the control group. The results of this study can help teachers provide a flexible setting for teaching new vocabulary, and also provide pedagogical implications for utilizing text-messaging as an effective and flexible learning tool.
Options for Control of Reactive Power by Distributed Photovoltaic Generators
High penetration levels of distributed photovoltaic (PV) generation on an electrical distribution circuit present several challenges and opportunities for distribution utilities. Rapidly varying irradiance conditions may cause voltage sags and swells that cannot be compensated by slowly responding utility equipment, resulting in a degradation of power quality. Although not permitted under current standards for interconnection of distributed generation, fast-reacting, VAR-capable PV inverters may provide the necessary reactive power injection or consumption to maintain voltage regulation under difficult transient conditions. As a side benefit, the control of reactive power injection at each PV inverter provides an opportunity and a new tool for distribution utilities to optimize the performance of distribution circuits, e.g., by minimizing thermal losses. We discuss and compare via simulation various design options for control systems to manage the reactive power generated by these inverters. An important design decision, which weighs on the speed and quality of communication required, is whether the control should be centralized or distributed (i.e., local). In general, we find that local control schemes are able to maintain voltage within acceptable bounds. We consider the benefits of choosing different local variables on which to control, and how the control system can be continuously tuned between robust voltage control, suitable for daytime operation when circuit conditions can change rapidly, and loss minimization, better suited for nighttime operation.
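One common example of such a local scheme is a piecewise-linear volt-VAR droop, sketched below, which absorbs reactive power when the locally measured voltage is high and injects it when the voltage is low; the deadband, slope and VAR limit are illustrative values, not parameters from the paper or any interconnection standard.

```python
import numpy as np

def volt_var_droop(v_pu, q_max, deadband=0.01, slope_band=0.03):
    """Local volt-VAR droop: zero VAR output inside a deadband around 1.0 p.u.,
    then a linear ramp to +/- q_max (positive = injection) outside it."""
    dv = v_pu - 1.0
    if abs(dv) <= deadband:
        return 0.0
    ramp = (abs(dv) - deadband) / slope_band
    # High voltage -> absorb reactive power; low voltage -> inject it.
    return -np.sign(dv) * min(1.0, ramp) * q_max

for v in (0.96, 0.99, 1.00, 1.02, 1.05):
    print(f"V = {v:.2f} p.u. -> Q = {volt_var_droop(v, q_max=0.44):+.3f} p.u.")
```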
Novel local features with hybrid sampling technique for image retrieval
In image retrieval, most existing approaches that incorporate local features produce high-dimensional vectors, which lead to high computational and data storage costs. Moreover, when it comes to the retrieval of generic real-life images, randomly generated patches are often more discriminant than the ones produced by corner/blob detectors. In order to tackle these problems, we propose a novel method incorporating local features with a hybrid sampling (a combination of detector-based and random sampling). We use three large data collections for the evaluation: MIRFlickr, ImageCLEF, and a collection from the British National Geological Survey. The overall performance of the proposed approach is better than the performance of global features and comparable with the current state-of-the-art methods in content-based image retrieval. One of the advantages of our method compared with others is its easy implementation and low computational cost. Another is that hybrid sampling can improve the performance of other methods based on the "bag of visual words" approach.
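A NumPy-only sketch of the hybrid sampling idea follows: patch centres are taken both at high-gradient locations (a crude stand-in for a corner/blob detector) and uniformly at random, and the resulting patches would then feed a descriptor plus bag-of-visual-words pipeline. Patch size, counts and the gradient heuristic are our assumptions, not the method's actual detector.

```python
import numpy as np

def hybrid_patches(img, patch=16, n_detector=50, n_random=50, seed=0):
    """Sample patch centres both at high-gradient locations (a crude
    stand-in for a corner/blob detector) and uniformly at random."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    half = patch // 2
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # zero the borders so every selected centre admits a full patch
    mag[:half, :] = mag[-half:, :] = mag[:, :half] = mag[:, -half:] = 0
    flat = np.argsort(mag, axis=None)[-n_detector:]          # strongest responses
    det_centres = np.column_stack(np.unravel_index(flat, mag.shape))
    rnd_centres = np.column_stack([rng.integers(half, h - half, n_random),
                                   rng.integers(half, w - half, n_random)])
    centres = np.vstack([det_centres, rnd_centres])
    return np.stack([img[r - half:r + half, c - half:c + half]
                     for r, c in centres])

img = np.random.default_rng(1).random((128, 128))
patches = hybrid_patches(img)
print(patches.shape)   # (100, 16, 16) -> describe, then e.g. k-means for BoVW
```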
Intelligence: Knowns and Unknowns
Ulric Neisser (Chair), Emory University; Gwyneth Boodoo, Educational Testing Service, Princeton, New Jersey; Thomas J. Bouchard, Jr., University of Minnesota, Minneapolis; A. Wade Boykin, Howard University; Nathan Brody, Wesleyan University; Stephen J. Ceci, Cornell University; Diane F. Halpern, California State University, San Bernardino; John C. Loehlin, University of Texas, Austin; Robert Perloff, University of Pittsburgh; Robert J. Sternberg, Yale University; Susana Urbina, University of North Florida
Electromagnetic diffraction of an obliquely incident plane wave by a right-angled anisotropic impedance wedge with a perfectly conducting face
The diffraction of an arbitrarily polarized electromagnetic plane wave obliquely incident on the edge of a right-angled anisotropic impedance wedge with a perfectly conducting face is analyzed. The impedance tensor on the loaded face has its principal anisotropy axes along directions parallel and perpendicular to the edge, exhibiting arbitrary surface impedance values in these directions. The proposed solution procedure applies both to the exterior and the interior right-angled wedges. The rigorous spectral solution for the field components parallel to the edge is determined through the application of the Sommerfeld–Maliuzhinets technique. A uniform asymptotic solution is provided in the framework of the uniform geometrical theory of diffraction (UTD). The diffracted field is expressed in a simple closed form involving ratios of trigonometric functions and the UTD transition function. Samples of numerical results are presented to demonstrate the effectiveness of the asymptotic expressions proposed and to show that they contain as limit cases all previous three-dimensional (3-D) solutions for the right-angled impedance wedge with a perfectly conducting face.
Intermittent levosimendan treatment in patients with severe congestive heart failure
Levosimendan (LS) is a novel inodilator for the treatment of severe congestive heart failure (CHF). In this study, we investigated the potential long-term effects of intermittent LS treatment on the pathophysiology of heart failure. Thirteen patients with moderate to severe CHF received three 24-h intravenous infusions of LS at 3-week intervals. Exercise capacity was determined by bicycle ergospirometry, well-being was assessed by the Minnesota Living with Heart Failure Questionnaire (MLHFQ), and laboratory parameters of interest were measured before and after each treatment. One patient experienced non-sustained periods of ventricular tachycardia (VT) during the first infusion and had to discontinue the study. Otherwise the LS infusions were well tolerated. Exercise capacity (VO2max) did not improve significantly during the study, although symptoms decreased (P < 0.0001). Levels of plasma NT-proANP, NT-proBNP and NT-proXNP decreased 30–50% during each infusion (P < 0.001 for all), but the changes disappeared within 3 weeks. Although norepinephrine (NE) appeared to increase during the first treatment (P = 0.019), no long-term changes were observed. Intermittent LS treatments effectively and repeatedly decreased plasma vasoactive peptide levels, but no carryover effects were observed. Patients' symptoms decreased throughout the study period, although there was no objective improvement in their exercise capacity. The prognostic significance of these effects needs to be studied further.
Analysis of Unstructured Data: Applications of Text Analytics and Sentiment Mining
The proliferation of textual data in business is overwhelming. Unstructured textual data are constantly being generated via call center logs, emails, documents on the web, blogs, tweets, customer comments, customer reviews, and so on. While the amount of textual data is increasing rapidly, businesses' ability to summarize, understand, and make sense of such data for making better business decisions remains challenging. This paper takes a quick look at how to organize and analyze textual data in order to extract insightful customer intelligence from a large collection of documents and to use such information to improve business operations and performance. Multiple business case studies using real data are presented that demonstrate applications of text analytics and sentiment mining using SAS® Text Miner and SAS® Sentiment Analysis Studio. While SAS® products are used as tools for demonstration only, the topics and theories covered are generic (not tool-specific).
CAWA: Coordinated warp scheduling and Cache Prioritization for critical warp acceleration of GPGPU workloads
The ubiquity of graphics processing unit (GPU) architectures has made them efficient alternatives to chip multiprocessors for parallel workloads. GPUs achieve superior performance by making use of massive multi-threading and fast context switching to hide pipeline stalls and memory access latency. However, recent characterization results have shown that general-purpose GPU (GPGPU) applications commonly encounter long stall latencies that cannot be easily hidden with the large number of concurrent threads/warps. This results in execution time disparity between different parallel warps, hurting the overall performance of GPUs; this is known as the warp criticality problem. To tackle the warp criticality problem, we propose a coordinated solution, criticality-aware warp acceleration (CAWA), that efficiently manages compute and memory resources to accelerate critical warp execution. Specifically, we design (1) an instruction-based and stall-based criticality predictor to identify the critical warp in a thread block, (2) a criticality-aware warp scheduler that preferentially allocates more time resources to the critical warp, and (3) a criticality-aware cache reuse predictor that assists critical warp acceleration by retaining latency-critical and useful cache blocks in the L1 data cache. CAWA aims to remove the significant execution time disparity in order to improve resource utilization for GPGPU workloads. Our evaluation results show that, under the proposed coordinated scheduler and cache prioritization management scheme, the performance of GPGPU workloads can be improved by 23%, while other state-of-the-art schedulers, the GTO and 2-level schedulers, improve performance by 16% and -2%, respectively.
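As a toy illustration of criticality-first scheduling (a simplification of CAWA's predictor and scheduler, not the actual hardware design), the sketch below scores each warp by its remaining instruction count plus accumulated stall cycles and always issues the most critical ready warp; all quantities are invented.

```python
import random
from dataclasses import dataclass

@dataclass
class Warp:
    wid: int
    remaining_insts: int
    stall_cycles: int = 0

    def criticality(self):
        # Simplified CAWA-style score: more work left and more time spent
        # stalled means the warp is more critical, so schedule it sooner.
        return self.remaining_insts + self.stall_cycles

def run(warps, stall_prob=0.3, seed=0):
    """Greedy criticality-first issue until every warp finishes."""
    rng = random.Random(seed)
    cycles = 0
    while any(w.remaining_insts > 0 for w in warps):
        ready = [w for w in warps if w.remaining_insts > 0]
        w = max(ready, key=Warp.criticality)     # pick the most critical warp
        if rng.random() < stall_prob:
            w.stall_cycles += 1                  # memory stall this cycle
        else:
            w.remaining_insts -= 1               # issue one instruction
        cycles += 1
    return cycles

warps = [Warp(i, remaining_insts=n) for i, n in enumerate([120, 80, 80, 40])]
print("total cycles:", run(warps))
```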