title | abstract |
---|---|
Group-based communication in WhatsApp | WhatsApp is a very popular mobile messaging application which dominates today's mobile communication. The group chat feature in particular contributes to its success and changes the way people communicate. This work investigates the group-based communication paradigm, focusing on the usage of WhatsApp, communication in group chats, and the implications for mobile network traffic. |
Automatic Optic Disc Abnormality Detection in Fundus Images: A Deep Learning Approach | The optic disc (OD) is a key structure in retinal images. It serves as an indicator for detecting various diseases such as glaucoma, as well as changes related to new vessel formation on the OD in diabetic retinopathy (DR) or retinal vein occlusion. The OD is also essential for locating structures such as the macula and the main vascular arcade. Most existing methods for OD localization are rule-based, either exploiting the OD appearance properties or the spatial relationship between the OD and the main vascular arcade. The detection of OD abnormalities has typically been performed through the detection of lesions such as hemorrhages or through measuring the cup-to-disc ratio. These methods thus result in complex and inflexible image analysis algorithms, limiting their applicability to large image sets obtained either in epidemiological studies or in screening for retinal or optic nerve diseases. In this paper, we propose an end-to-end supervised model for OD abnormality detection. The most informative features of the OD are learned directly from retinal images and are adapted to the dataset at hand. Our experimental results validate the effectiveness of the proposed approach and show its potential for application. |
DISCO: A System Leveraging Semantic Search in Document Review | This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines semantic relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged online in the course of search sessions and accessed through natural hand gestures in a simple and intuitive way. |
Automated testing of mixed-signal integrated circuits by topology modification | A general method is proposed to automatically generate a DfT solution aimed at the detection of catastrophic faults in analog and mixed-signal integrated circuits. The approach consists of modifying the topology of the circuit by pulling up (or down) nodes and then probing differentiating node voltages. The method generates a set of optimal hardware implementations addressing the multi-objective problem such that fault coverage is maximized and silicon overhead is minimized. The new method was applied to a real-case industrial circuit, demonstrating nearly 100 percent coverage at the expense of an area increase of about 5 percent. |
Fractal Antenna Elements and Arrays | With the advance of wireless communication systems and the increasing importance of other wireless applications, wideband and low-profile antennas are in great demand for both commercial and military applications. Multi-band and wideband antennas are desirable in personal communication systems, small satellite communication terminals, and other wireless applications. Wideband antennas also find applications in Unmanned Aerial Vehicles (UAVs), Counter Camouflage, Concealment and Deception (CC&D), Synthetic Aperture Radar (SAR), and Ground Moving Target Indicators (GMTI). Some of these applications also require that an antenna be embedded into the airframe structure. Traditionally, a wideband antenna in the low-frequency wireless bands can only be achieved with heavily loaded wire antennas, which usually means different antennas are needed for different frequency bands. Recent progress in the study of fractal antennas suggests some attractive solutions for using a single small antenna operating in several frequency bands. The purpose of this article is to introduce the concept of the fractal, review the progress in fractal antenna study and implementation, compare different types of fractal antenna elements and arrays, and discuss the challenges and future of this new type of antenna. |
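To make the fractal concept concrete, here is a minimal sketch (an illustration, not taken from the article) that generates the point geometry of a Koch curve, one of the canonical fractal shapes used for wire antenna elements. Each iteration replaces the middle third of every segment with two sides of an equilateral triangle, packing more electrical length into the same footprint:

```python
import numpy as np

def koch_curve(p0, p1, depth):
    """Recursively subdivide segment p0->p1 with the Koch generator.
    Returns the ordered points, excluding the final endpoint."""
    if depth == 0:
        return [np.asarray(p0, float)]
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = (p1 - p0) / 3.0
    a, b = p0 + d, p0 + 2 * d                 # 1/3 and 2/3 points
    rot60 = np.array([[0.5, -np.sqrt(3) / 2],
                      [np.sqrt(3) / 2, 0.5]])  # +60 degree rotation
    apex = a + rot60 @ d                       # tip of the triangular bump
    pts = []
    for q0, q1 in [(p0, a), (a, apex), (apex, b), (b, p1)]:
        pts.extend(koch_curve(q0, q1, depth - 1))
    return pts

# Three iterations over a unit segment: 4^3 + 1 = 65 vertices
points = koch_curve((0, 0), (1, 0), depth=3) + [np.array([1.0, 0.0])]
print(len(points))
```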
RELATIONAL NEURAL EXPECTATION MAXIMIZATION | Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be learned without access to supervised data. To solve this problem, we present a novel method that incorporates prior knowledge about the compositional nature of human perception to factor interactions between object pairs and to learn them efficiently. It learns to discover objects and to model physical interactions between them from raw visual images in a purely unsupervised fashion. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge. We show its ability to handle occlusion and that it can extrapolate learned knowledge to environments with different numbers of objects. |
Static noise margin variation for sub-threshold SRAM in 65-nm CMOS | The increased importance of lowering power in memory design has produced a trend of operating memories at lower supply voltages. Recent explorations into sub-threshold operation for logic show that minimum energy operation is possible in this region. These two trends suggest a meeting point for energy-constrained applications in which SRAM operates at sub-threshold voltages compatible with the logic. Since sub-threshold voltages leave less room for large static noise margin (SNM), a thorough understanding of the impact of various design decisions and other parameters becomes critical. This paper analyzes SNM for sub-threshold bitcells in a 65-nm process for its dependency on sizing, VDD, temperature, and local and global threshold variation. The VT variation has the greatest impact on SNM, so we provide a model that allows estimation of the SNM along the worst-case tail of the distribution. |
An Internet-based technique for the identification of persons with symptoms of inflammatory polyarthritis of less than 12 weeks | Identifying persons with early rheumatoid arthritis (RA) is a major challenge. The role of the Internet in making decisions about seeking care has not been studied. We developed a method for early diagnosis and referral using the Arthritis Foundation's website, screening for persons with less than 3 months of joint pain symptoms who had not yet sought medical attention. Prescreened persons are linked to a self-scoring questionnaire and receive a "likelihood" of RA statement. If "likely," the person is offered a free evaluation and biomarker testing performed by Quest Diagnostics. The system, available only to Massachusetts residents, yielded a small steady flow of screen-positive individuals. Over 21 months, 43,244 persons took the Arthritis Foundation website prescreening questionnaire; 196 were from Massachusetts and 60 took the self-scoring algorithm. Of the 48 who screened positive, 29 set up an appointment for a free evaluation, but six never came in. Twenty-four subjects were evaluated and diagnosed independently by three rheumatologists. One met the 1987 American College of Rheumatology (ACR) criteria for RA and two met the 2010 ACR/EULAR RA criteria. The 24 examined individuals were contacted after a minimum of 1 year, asked to redo the case-finding questionnaire, and asked about their health resource utilization during the interval. Seventeen of the 24 subjects responded, and 10 had seen a health professional. Three of the 17 had a diagnosis of RA; all were on at least methotrexate. Internet case finding was useful in identifying new potential RA cases. The system's performance characteristics are theoretically limited only by the number of study sites available. However, the major barrier may be that seeing a health professional is not a priority for many individuals with early symptoms. |
Extracting and Using Electromagnetic Energy from the Active Vacuum | "The first principles of things will never be adequately known. Science is an open-ended endeavor; it can never be closed. We do science without knowing the first principles. It does in fact not start from first principles, nor from the end principles, but from the middle. We not only change theories, but also the concepts and entities themselves, and what questions to ask. The foundations of science must be continuously examined and modified; it will always be full of mysteries and surprises." {1} |
Analysis and Design of a Single-Stage Parallel AC-to-DC Converter | In this paper, a single-stage (S2) parallel ac-to-dc converter based on a single-switch two-output boost-flyback converter is presented. The converter contains two semistages. One is the boost-flyback semistage, which transfers part of the input power directly to the load through one power flow path and has an excellent self-power-factor-correction property when operating in discontinuous conduction mode, even though the boost output is close to the peak value of the line voltage. The other is the flyback dc-to-dc (dc/dc) semistage, which provides output regulation on the other, parallel power flow path. With this design, the power conversion efficiency is improved and the current stress of the control switch is reduced. Furthermore, the calculation process for the power distribution and bulk capacitor voltage, design equations, and a design procedure for key parameters are also presented. By following the procedure, an 80 W prototype converter has been built and tested. The experimental results show that the measured line harmonic current at the worst condition complies with the IEC61000-3-2 class D limits, the maximum bulk capacitor voltage is about 415.4 V, and the maximum efficiency is about 85.8%. Hence, the proposed S2 converter is suitable for universal input usage. |
Child abuse and neglect. | The publication of "The Battered Child" by C. Henry Kempe and coworkers introduced child maltreatment (CM) to the pediatric literature, and in the 46 years following its publication, knowledge in the field has expanded. CM pervades every area of pediatrics. Pediatricians are mandated reporters and play an important role in the identification of child abuse and neglect. This article seeks to inform the primary care clinician about the current state of knowledge of child abuse and neglect. The epidemiology and diagnosis of the different types of abuse, the effects of abuse on children, documentation, reporting, and prevention are covered. The intent of the article is to help primary care clinicians more accurately identify and report child abuse and neglect. |
An Ant Colony Optimization Approach to a Grid Workflow Scheduling Problem With Various QoS Requirements | Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are commonly modeled as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and significantly influences the performance of grids. A number of algorithms exist for grid workflow scheduling, but most of them can only tackle problems with a single QoS parameter or with small-scale workflows. This paper therefore proposes an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define minimum QoS thresholds for a certain application. The objective of the algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are conducted on ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm. |
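The adaptive heuristic-selection idea described in this abstract can be sketched as follows (a hedged illustration; the heuristic count, selection rule, and pheromone update here are assumptions, not the paper's exact formulation):

```python
import random

pheromone = [1.0] * 7   # one trail value per scheduling heuristic
rho = 0.1               # evaporation rate (assumed)

def select_heuristic():
    """Roulette-wheel selection: an ant picks a heuristic with
    probability proportional to its pheromone value."""
    r = random.uniform(0, sum(pheromone))
    acc = 0.0
    for i, tau in enumerate(pheromone):
        acc += tau
        if r <= acc:
            return i
    return len(pheromone) - 1

def reinforce(best_heuristic, quality):
    """Evaporate all trails, then reward the heuristic used by the
    iteration-best ant, proportional to its solution quality."""
    for i in range(len(pheromone)):
        pheromone[i] *= (1 - rho)
    pheromone[best_heuristic] += quality

h = select_heuristic()
reinforce(h, quality=0.8)  # pretend this ant found a good schedule
```

Over iterations, heuristics that repeatedly produce QoS-feasible, high-quality schedules accumulate pheromone and are chosen more often.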
Spatio-spectral filters for improving the classification of single trial EEG | Data recorded in electroencephalogram (EEG)-based brain-computer interface experiments are generally very noisy, non-stationary, and contaminated with artifacts that can deteriorate discrimination/classification methods. In this paper, we extend the common spatial pattern (CSP) algorithm with the aim of alleviating these adverse effects. In particular, we suggest an extension of CSP to the state space, which utilizes the method of time delay embedding. As we will show, this allows for individually tuned frequency filters at each electrode position and, thus, yields an improved and more robust machine learning procedure. The advantages of the proposed method over the original CSP method are verified in terms of an improved information transfer rate (bits per trial) on a set of EEG recordings from experiments of imagined limb movements. |
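For context, the baseline CSP algorithm that the paper extends can be sketched in a few lines (this shows plain CSP via a generalized eigendecomposition, not the proposed state-space extension; shapes and trial data are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_filters=6):
    """X1, X2: class-wise trials of shape (n_trials, n_channels, n_samples).
    Returns spatial filters of shape (n_filters, n_channels)."""
    def mean_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; filters at the
    # two ends of the spectrum maximize variance for one class while
    # minimizing it for the other.
    eigvals, eigvecs = eigh(C1, C1 + C2)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_filters // 2], order[-(n_filters // 2):]])
    return eigvecs[:, picks].T

rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 8, 256))   # e.g., left-hand imagery trials
X2 = rng.standard_normal((20, 8, 256))   # e.g., right-hand imagery trials
W = csp_filters(X1, X2)
features = np.log(np.var(W @ X1[0], axis=1))  # per-trial log-variance features
```

The paper's extension applies this machinery in a time-delay-embedded state space, which effectively yields a tuned frequency filter per electrode.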
Smart Distribution: Coupled Microgrids | The distribution system provides major opportunities for smart grid concepts. One way to approach distribution system problems is to rethink the distribution system to include the integration of high levels of distributed energy resources, using microgrid concepts. Basic objectives are improved reliability, high penetration of renewable sources, dynamic islanding, and improved generation efficiency through the use of waste heat. Managing significant levels of distributed energy resources (DERs) with a wide and dynamic set of resources and control points can become overwhelming. The best way to manage such a system is to break the distribution system down into small clusters or microgrids, with distributed optimizing controls coordinating the multiple microgrids. The Consortium for Electric Reliability Technology Solutions (CERTS) concept views clustered generation and associated loads as a grid resource or a "microgrid." The clustered sources and loads can operate in parallel to the grid or as an island. This grid resource can disconnect from the utility during events (i.e., faults, voltage collapses), but may also intentionally disconnect when the quality of power from the grid falls below certain standards. This paper focuses on DER-based distribution, the basics of microgrids, the possibility of smart distribution systems using coupled microgrids, and the current state of autonomous microgrid technology. |
Biological properties and therapeutic activities of honey in wound healing: A narrative review and meta-analysis. | For thousands of years, honey has been used for medicinal applications. The beneficial effects of honey, particularly its antimicrobial activity, make it a useful option for the management of various wounds. Honey contains major amounts of carbohydrates, lipids, amino acids, proteins, vitamins and minerals that have important roles in wound healing with minimum trauma during redressing. Because bees have different nutritional behaviors and collect nourishment from a variety of plants, the honeys they produce differ in composition. Thus different types of honey have different medicinal value, leading to different effects on wound healing. This review clarifies the mechanisms and therapeutic properties of honey in wound healing. The mechanisms of action of honey in wound healing are largely due to its hydrogen peroxide, high osmolality, acidity, non-peroxide factors, nitric oxide and phenols. Laboratory studies and clinical trials have shown that honey promotes autolytic debridement, stimulates growth of wound tissues and stimulates anti-inflammatory activities, thus accelerating the wound healing processes. Compared with topical agents such as hydrofiber silver or silver sulfadiazine, honey is more effective in eliminating microbial contamination, reducing wound area, and promoting re-epithelialization. In addition, honey improves the outcome of wound healing by reducing the incidence of excessive scar formation. Therefore, application of honey can be an effective and economical approach to managing large and complicated wounds. |
Impact of non-linear smoking effects on the identification of gene-by-smoking interactions in COPD genetics studies. | BACKGROUND
The identification of gene-by-environment interactions is important for understanding the genetic basis of chronic obstructive pulmonary disease (COPD). Many COPD genetic association analyses assume a linear relationship between pack-years of smoking exposure and forced expiratory volume in 1 s (FEV(1)); however, this assumption has not been evaluated empirically in cohorts with a wide spectrum of COPD severity.
METHODS
The relationship between FEV(1) and pack-years of smoking exposure was examined in four large cohorts assembled for the purpose of identifying genetic associations with COPD. Using data from the Alpha-1 Antitrypsin Genetic Modifiers Study, the accuracy and power of two different approaches to model smoking were compared by performing a simulation study of a genetic variant with a range of gene-by-smoking interaction effects.
RESULTS
Non-linear relationships between smoking and FEV(1) were identified in the four cohorts. It was found that, in most situations where the relationship between pack-years and FEV(1) is non-linear, a piecewise linear approach to model smoking and gene-by-smoking interactions is preferable to the commonly used total pack-years approach. The piecewise linear approach was applied to a genetic association analysis of the PI*Z allele in the Norway Case-Control cohort and a potential PI*Z-by-smoking interaction was identified (p=0.03 for FEV(1) analysis, p=0.01 for COPD susceptibility analysis).
CONCLUSION
In study samples of subjects with a wide range of COPD severity, a non-linear relationship between pack-years of smoking and FEV(1) is likely. In this setting, approaches that account for this non-linearity can be more powerful and less biased than the more common approach of using total pack-years to model the smoking effect. |
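A piecewise linear smoking term of the kind advocated above can be encoded with a hinge basis in an ordinary least-squares fit. The sketch below is illustrative only: the knot location, effect sizes, and simulated data are assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
pack_years = rng.uniform(0, 80, 500)
knot = 30.0  # assumed knot in pack-years

# Simulated FEV1: steeper decline below the knot, shallower above it
fev1 = (3.5
        - 0.02 * np.minimum(pack_years, knot)
        - 0.005 * np.maximum(pack_years - knot, 0)
        + rng.normal(0, 0.2, 500))

# Design matrix: intercept, pack-years, and hinge term (pack-years - knot)+
X = np.column_stack([np.ones_like(pack_years),
                     pack_years,
                     np.maximum(pack_years - knot, 0.0)])
beta, *_ = np.linalg.lstsq(X, fev1, rcond=None)
print("slope below knot:", beta[1], "| slope change above knot:", beta[2])
```

A gene-by-smoking interaction is then modeled by multiplying the genotype indicator with both smoking terms, rather than with a single total pack-years column.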
Parent, Peer and Media Effect on the Perception of Body Image in Preadolescent Girls and Boys | The goal of this study was to begin to determine how parents, peers and the media influence the perception of body image among preadolescent girls and boys in Rize. The research was carried out with a total of 70 students of Mehmet Akif Ersoy middle school. Surveys were collected and analyzed in the statistics department using SPSS 14.0 for Windows. Results indicated peers had the largest negative influence (mean 1.60/3.0) and media had the largest positive influence (mean 2.12/3.0) for scale-response. Results also revealed that the higher the positive influence, the higher the positive self-image: as one increases positively, the other will increase positively. The correlation between parents, peers, and media and body image was found to be significant (P < .001, α = .05). A correlation coefficient of .606/1.0 revealed a moderate to strong correlation. Results also indicated the preadolescents surveyed had a lower reported body image score than the influence from parents, peers, and media together. Results should be used by health educators to develop educational classes and programs that focus on the impact of parent, peer, and media influences on the perception of body image. The present study provides more insight, and new trends in data, toward understanding the onset of body image development in preadolescents. |
Exploring the motives and mental health correlates of intentional food restriction prior to alcohol use in university students. | This study explored the prevalence of and motivations behind 'drunkorexia' – restricting food intake prior to drinking alcohol. For both male and female university students (N = 3409), intentionally changing eating behaviour prior to drinking alcohol was common practice (46%). Analyses performed on a targeted sample of women (n = 226) revealed that food restriction prior to alcohol use was associated with greater symptomology than eating more food. Those who restrict eating prior to drinking to avoid weight gain scored higher on measures of disordered eating, whereas those who restrict to get intoxicated faster scored higher on measures of alcohol abuse. |
Rethinking Data-Intensive Science Using Scalable Analytics Systems | "Next generation" data acquisition technologies are allowing scientists to collect exponentially more data at a lower cost. These trends are broadly impacting many scientific fields, including genomics, astronomy, and neuroscience. We can attack the problem caused by exponential data growth by applying horizontally scalable techniques from current analytics systems to accelerate scientific processing pipelines.
In this paper, we describe ADAM, an example genomics pipeline that leverages the open-source Apache Spark and Parquet systems to achieve a 28x speedup over current genomics pipelines, while reducing cost by 63%. From building this system, we were able to distill a set of techniques for implementing scientific analyses efficiently using commodity "big data" systems. To demonstrate the generality of our architecture, we then implement a scalable astronomy image processing system which achieves a 2.8-8.9x improvement over the state-of-the-art MPI-based system. |
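The horizontally scalable style advocated above boils down to expressing analyses as data-parallel operators over a columnar store. A minimal PySpark sketch (the file path and column names are hypothetical, not ADAM's actual schema):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scan-reads").getOrCreate()

# Read a columnar Parquet dataset of aligned reads and aggregate it in
# parallel across the cluster; only the referenced columns are scanned.
reads = spark.read.parquet("hdfs:///data/aligned_reads.parquet")
per_contig = (reads
              .groupBy("contig")
              .agg(F.count("*").alias("n_reads"),
                   F.avg("mapq").alias("mean_mapq")))
per_contig.show()
```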
Inventory Management: A Tool of Optimizing Resources in a Manufacturing Industry: A Case Study of Coca-Cola Bottling Company, Ilorin Plant | Inventory constitutes the most significant part of the current assets of the large majority of Nigerian manufacturing industries. Because of the relative largeness of the inventories maintained by most firms, a considerable sum of an organization's funds is committed to them. It thus becomes absolutely imperative to manage inventories efficiently so as to avoid the costs of changing production rates, overtime, sub-contracting, unnecessary cost of sales and back-order penalties during periods of peak demand. The main objective of this study is to determine whether or not inventories in the Nigeria Bottling Company, Ilorin Plant can be evaluated and understood using the various existing tools of optimization in inventory management. The study methods employed include variance analysis, the Economic Order Quantity (EOQ) model and the chi-square method. The answer to the fundamental question of how best an organization which handles inventory can be efficiently run is provided in the analysis and findings of the study. Consequently, recommendations on the right quantity, quality and timing of material, at the most favourable price, conclude the research study. Efficient inventory management can improve a company's profit as well as increase its return on total assets; it is the management of these economics of stockholding that is appropriately referred to as inventory management. The reason for greater attention to inventory management is that this figure, for many firms, is the largest item appearing on the asset side of the balance sheet. Essentially, inventory management, within the context of the foregoing features, involves planning and control. The planning aspect involves looking ahead in terms of determining in advance: (i) what quantity of items to order; and (ii) how often (periodicity) to order them so as to maintain the overall source-store-sink coordination in an economically efficient way. The control aspect, often described as stock control, involves following the procedure set up at the planning stage to achieve the above objective. This may include monitoring stock levels periodically or continuously and deciding what to do on the basis of information that is gathered and adequately processed. Effort must be made by the management of any organization to strike an optimum investment in inventory, since it costs much money to tie down capital in excess inventory. In recent times, attention has been focused on the development of suitable mathematical tools and approaches designed to aid the decision-maker in setting optimum inventory levels. The Economic Order Quantity (EOQ) model has thus been developed to take care of the weaknesses emanating from the traditional methods of inventory control and valuation, and has to some extent proved useful in optimizing resources and thus minimizing associated costs. Financial analysts have sounded enough warnings on the danger posed to the long-run profitability, as well as the continuity, of a business concern when its inventories are left unmanaged. First, a company which neglects the management of its inventory runs the risk of production bottlenecks and of subsequently being unable to maintain the minimum investment it requires to maximize profit.
Second, inventories that are inefficiently managed may, apart from affecting sales, create an irreparable loss of market for companies operating in a highly competitive industry. Invariably, a company must neither keep excess inventories, to avoid an unnecessary tying down of funds as well as losses due to pilferage, spoilage and obsolescence, nor maintain inventories too low to meet production and sales demands as and when needed. Therefore, the mere fact that ineffective inventory management affects virtually all organizational objectives necessitates this type of research work. The researchers have taken the Nigeria Bottling Company as a case study so as to see clearly whether its resounding success can be attributed to the kind of inventory system the company operates, since its production process requires a lot of raw materials and, despite the economic condition of the country at any given time, production has never ceased and its products have never been scarce. The objectives of this study are as follows: (i) to describe the inventory management procedures currently in use in the Nigeria Bottling Company, Ilorin Plant; (ii) to determine whether or not inventory management in the Nigeria Bottling Company can be evaluated and understood using the various existing tools of optimization in inventory management; and (iii) to determine the optimality of the company's inventory policies. 2. INVENTORY MANAGEMENT: DEFINITIONS AND CONCEPTS There is a need for the installation of a proper inventory control technique in any business organization in a developing country like Nigeria. According to Kotler (2000), inventory management refers to all the activities involved in developing and managing the inventory levels of raw materials, semi-finished materials (work-in-progress) and finished goods so that adequate supplies are available and the costs of over- or under-stocking are low. Rosenblatt (1977) says: "The cost of maintaining inventory is included in the final price paid by the consumer. Goods in inventory represent a cost to their owner. The manufacturer has the expense of materials and labour. The wholesaler also has funds tied up." Therefore, the basic goal of the researchers is to maintain a level of inventory that will provide optimum stock at the lowest cost. Morris (1995) stressed that inventory management in its broadest perspective is to keep the most economical amount of one kind of asset in order to facilitate an increase in the total value of all assets of the organization, both human and material resources. Keth et al. (1994) in their text also stated that the major objective of inventory management and control is to inform managers how much of a good to re-order, when to re-order the good, how frequently orders should be placed and what the appropriate safety stock is, for minimizing stockouts. Thus, the overall goal of inventory is to have what is needed, and to minimize the number of times one is out of stock. Drury (1996) defined inventory as a stock of goods that is maintained by a business in anticipation of some future demand. This definition was also supported by Schroeder (2000), who stressed that inventory management has an impact on all business functions, particularly operations, marketing, accounting, and finance. He established that there are three motives for holding inventories: transaction, precautionary and speculative motives. The transaction motive occurs when there is a need to hold stock to meet production and sales requirements.
A firm might also decide to hold additional amounts of stock to cover the possibility that it may have underestimated its future production and sales requirements. This represents a precautionary motive, which applies only when future demand is uncertain. The speculative motive for holding inventory might entice a firm to purchase a larger quantity of materials than normal in anticipation of making abnormal profits. Advance purchase of raw materials in inflationary times is one form of speculative behaviour. 2.1 Inventory Model: The Economic Order Quantity (EOQ) Model Undoubtedly, the best-known and most fundamental inventory decision model is the Economic Order Quantity model. Its origin dates back to the early 1900s. The purpose of using the EOQ model in this research is to find the particular order quantity which minimizes total inventory cost, that is, the total of ordering and carrying costs. 2.1.1 EOQ Assumptions The EOQ has previously been defined by Dervitsiotis (1981), Monks (1996), Lucey (1992), and Schroeder (2000) as the ordering quantity which minimizes the balance of cost between inventory holding costs and re-order costs. Lucey (1992) stressed further that to be able to calculate a basic EOQ, certain assumptions are necessary: (i) that there is a known, constant stockholding cost; (ii) that there is a known, constant ordering cost; (iii) that the rates of demand are known; (iv) that there is a known, constant price per unit; (v) that replenishment is made instantaneously, that is, the whole batch is delivered at once; and (vi) that no stock-outs are allowed. It would be apparent that the above assumptions are somewhat sweeping and that they are a good reason for treating an EOQ calculation with caution. Also, the rationale of the EOQ ignores buffer stocks, which are maintained to cater for variations in lead time and demand. The above assumptions are wide-ranging and it is unlikely that all could be observed in practice. Nevertheless, the EOQ calculation is a useful starting point in establishing an appropriate re-order quantity. The EOQ formula, with its derivation, is given below: $EOQ = \sqrt{2 C_o D / C_c}$ (1), where $C_o$, $C_c$ and $D$ denote the ordering cost, carrying cost and annual demand respectively, and $Q$ denotes the order quantity. Note also that average annual stock = $Q/2$, total annual carrying cost = $C_c Q/2$, number of orders per annum = $D/Q$, annual ordering cost = $C_o D/Q$, and total cost $TC = C_c Q/2 + C_o D/Q$ (2). The order quantity which makes the total cost a minimum is obtained by differentiating equation (2) with respect to $Q$ and equating the derivative to zero: $dTC/dQ = C_c/2 - C_o D/Q^2$, and when $dTC/dQ = 0$, cost is at a minimum. Then $C_o D/Q^2 = C_c/2$, so $Q^2 = 2 C_o D/C_c$ and $Q = \sqrt{2 C_o D/C_c}$, which is the EOQ formula of equation (1). Graphically, the EOQ can be represented as in Figure 1 (the minimum of the total cost curve, where the carrying-cost and ordering-cost curves intersect). |
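As a quick numerical check of equation (1) (a minimal sketch; the demand and cost figures are hypothetical, not taken from the case study):

```python
import math

def eoq(annual_demand, ordering_cost, carrying_cost):
    """Order size minimizing TC = Cc*Q/2 + Co*D/Q (equation 2)."""
    return math.sqrt(2 * ordering_cost * annual_demand / carrying_cost)

D, Co, Cc = 12000, 50.0, 2.4  # units/year, cost per order, carrying cost/unit/year
q = eoq(D, Co, Cc)            # about 707 units
print(f"EOQ = {q:.0f} units, {D / q:.1f} orders/year, "
      f"TC = {Cc * q / 2 + Co * D / q:.2f}")
```

At the optimum the annual carrying cost (Cc·Q/2) equals the annual ordering cost (Co·D/Q), which is exactly the intersection shown in Figure 1.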
CRMA: collision-resistant multiple access | Efficiently sharing spectrum among multiple users is critical to wireless network performance. In this paper, we propose a novel spectrum sharing protocol called Collision-Resistant Multiple Access (CRMA) to achieve high efficiency. In CRMA, each transmitter views the OFDM physical layer as multiple orthogonal but sharable channels, and independently selects a few channels for transmission. The transmissions that share the same channel naturally add up in the air. The receiver extracts the received signals from all the channels and efficiently decodes the transmissions by solving a simple linear system. We implement our approach in the Qualnet simulator and show that it yields significant improvement over existing spectrum sharing schemes. We also demonstrate the feasibility of our approach using implementation and experiments on GNU Radios. |
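The decoding step described above, extracting the per-transmitter signals from the summed channels by solving a linear system, can be sketched as follows (an illustration under assumed parameters, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

# A[c, t] = 1 if transmitter t selected channel c (3 transmitters, 6 channels)
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)

x = rng.standard_normal(3)                  # symbols of the 3 transmitters
y = A @ x + 0.01 * rng.standard_normal(6)   # per-channel sums plus noise

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # receiver solves y = A x
print(np.allclose(x, x_hat, atol=0.05))        # symbols recovered
```

As long as the channel-selection matrix has full column rank, the receiver can separate the overlapping transmissions.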
SoftLight: Adaptive visible light communication over screen-camera links | Screen-camera links for Visible Light Communication (VLC) are diverse, as the link quality varies according to many factors, such as ambient light and camera performance. This paper presents SoftLight, a channel coding approach that considers the unique channel characteristics of VLC links and automatically adapts the transmission data rate to the link qualities of various scenarios. SoftLight incorporates two new ideas: (1) an expanded color modulation interface that provides a soft hint about its confidence in each demodulated bit and establishes a bit-level VLC erasure channel, and (2) a rateless coding scheme that achieves bit-level rateless transmissions with low computational complexity and tolerates the false positives of bits provided by the soft-hint-enabled erasure channel. SoftLight is orthogonal to the visual coding schemes and can be applied atop any barcode layout. We implement SoftLight on Android smartphones and evaluate its performance under a variety of environments. The experimental results show that SoftLight can correctly transmit a 22-KByte photo between two smartphones within 0.6 seconds and improves the average goodput of the state-of-the-art screen-camera VLC solution by a factor of 2.2. |
The Imprecise Wanderings of a Precise Idea: The Travels of Spatial Analysis | The chapter uses a schematic map, “Quantgeog Airline,” first drawn by Peter Taylor in 1977, as an entree to discuss the complicated geography of the spatial analysis tradition within the history of geography. That tradition seeks to explain spatial order using a formal vocabulary reduced to its simplest axioms. The chapter draws on science studies to fashion some concepts to understand the diffusion of the idea of spatial analysis. It makes use of Foucault’s concept of heterotopia, Gieryn’s idea of a “truth spot”, and Latour’s notion of a “center of calculation,” applying these three concepts to provide a substantive (geographical) genealogy of the spatial analysis tradition that begins in ancient Greece, moves to Italy, and then in the seventeenth century shifts to Amsterdam and Cambridge (U.K.). The authors analyse the tradition’s complicated development in the twentieth century by looking at the United States in the 1950s during the so-called “quantitative revolution” and Northern Europe from the 1930s to early 1950s (Germany, Estonia, and Sweden). |
Blockchain — Literature survey | In the modern world, everything is gradually shifting towards a more digitized outlook. From teaching methods to online transactions, every small aspect of our life is given a more technological touch. In such a case, "money" is not left behind either. An approach towards digitizing money led to the creation of "bitcoin". Bitcoin is the most efficient cryptocurrency so far in the history of digital currency. Ruling out the interference of any third party which monitors the cash flow, Bitcoin is a decentralized form of online currency and is widely accepted for internet transactions all over the world. The need for digital money is likely to be extensive in the near future. |
Decision aids for people facing health treatment or screening decisions. | BACKGROUND
Decision aids prepare people to participate in preference-sensitive decisions.
OBJECTIVES
1. Create a comprehensive inventory of patient decision aids focused on healthcare options. 2. Review randomized controlled trials (RCTs) of decision aids for people facing healthcare decisions.
SEARCH STRATEGY
Studies were identified through databases and contact with researchers active in the field.
SELECTION CRITERIA
Two independent reviewers screened abstracts for interventions designed to aid patients' decision making by providing information about treatment or screening options and their associated outcomes. Information about the decision aids was compiled in an inventory; those that had been evaluated in a RCT were reviewed in detail.
DATA COLLECTION AND ANALYSIS
Two reviewers independently extracted data using standardized forms. Results of RCTs were pooled using weighted mean differences (WMD) and relative risks (RR) using a random effects model.
MAIN RESULTS
Over 200 decision aids were identified. Of the 131 available decision aids, most are intended for use before counselling. Using the CREDIBLE criteria to evaluate the quality of the decision aids: a) most included potential harms and benefits, credentials of the developers, description of their development process, update policy, and were free of perceived conflict of interest; b) many included reference to relevant literature; c) few included a description of the level of uncertainty regarding the evidence; and d) few were evaluated. Thirty of these decision aids were evaluated in 34 RCTs and another trial evaluated a suite of eight decision aids. An additional 30 trials are yet to be published. Among the trials comparing decision aids to usual care, decision aids performed better in terms of: a) greater knowledge (WMD 19 out of 100, 95% CI: 13 to 24); b) more realistic expectations (RR 1.4, 95% CI: 1.1 to 1.9); c) lower decisional conflict related to feeling informed (WMD -9.1 out of 100, 95% CI: -12 to -6); d) increased proportion of people active in decision making (RR 1.4, 95% CI: 1.0 to 2.3); and e) reduced proportion of people who remained undecided post intervention (RR 0.43, 95% CI: 0.3 to 0.7). When simpler decision aids were compared to more detailed ones, the relative improvement was significant in: a) knowledge (WMD 4 out of 100, 95% CI: 3 to 6); b) more realistic expectations (RR 1.5, 95% CI: 1.3 to 1.7); and c) greater agreement between values and choice. Decision aids appeared to do no better than comparisons in affecting satisfaction with decision making, anxiety, and health outcomes. Decision aids had a variable effect on which healthcare options were selected.
REVIEWER'S CONCLUSIONS
The availability of decision aids is expanding, with many on the Internet; however, few have been evaluated. Trials indicate that decision aids improve knowledge and realistic expectations; enhance active participation in decision making; lower decisional conflict; decrease the proportion of people remaining undecided; and improve agreement between values and choice. The effects on persistence with chosen therapies and cost-effectiveness require further evaluation. Finally, optimal strategies for dissemination need to be explored. |
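The pooled WMD estimates reported above come from a random-effects model; the sketch below shows one standard way (DerSimonian-Laird) such pooling is computed. The per-trial numbers are illustrative assumptions, not the review's data:

```python
import numpy as np

effects = np.array([15.0, 22.0, 18.0, 25.0])    # per-trial mean differences
variances = np.array([9.0, 16.0, 12.0, 20.0])   # per-trial sampling variances

w = 1.0 / variances                              # fixed-effect weights
mu_fixed = np.sum(w * effects) / np.sum(w)

# Between-study heterogeneity tau^2 (DerSimonian-Laird estimator)
Q = np.sum(w * (effects - mu_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / c)

w_re = 1.0 / (variances + tau2)                  # random-effects weights
mu_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled WMD = {mu_re:.1f} "
      f"(95% CI {mu_re - 1.96 * se_re:.1f} to {mu_re + 1.96 * se_re:.1f})")
```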
A Survey of Public Auditing for Secure Data Storage in Cloud Computing | Cloud computing has become a popular IT architecture, and cloud service providers offer many services based on it. Cloud storage is the cloud service that provides a huge storage space to solve the bottleneck of the storage space of local end users. However, cloud storage raises data security concerns because users' data is not stored in their own storage. In this paper, we focus on data integrity in the cloud storage service. Public auditability is a model of outsourcing data integrity verification, which can achieve both efficiency and security. We therefore survey previous research on data integrity based on public auditability, collecting the basic requirements and evaluation metrics and presenting representative approaches along with analyses of their security and efficiency. Finally, we propose some future developments. |
From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series | We connect measures of public opinion measured from polls with sentiment measured from text. We analyze several surveys on consumer confidence and political opinion over the 2008 to 2009 period, and find they correlate to sentiment word frequencies in contemporaneous Twitter messages. While our results vary across datasets, in several cases the correlations are as high as 80%, and capture important large-scale trends. The results highlight the potential of text streams as a substitute and supplement for traditional polling. Introduction If we want to know, say, the extent to which the U.S. population likes or dislikes Barack Obama, an obvious thing to do is to ask a random sample of people (i.e., poll). Survey and polling methodology, extensively developed through the 20th century (Krosnick, Judd, and Wittenbrink 2005), gives numerous tools and techniques to accomplish representative public opinion measurement. With the dramatic rise of text-based social media, millions of people broadcast their thoughts and opinions on a great variety of topics. Can we analyze publicly available data to infer population attitudes in the same manner that public opinion pollsters query a population? If so, then mining public opinion from freely available text content could be a faster and less expensive alternative to traditional polls. (A standard telephone poll of one thousand respondents easily costs tens of thousands of dollars to run.) Such analysis would also permit us to consider a greater variety of polling questions, limited only by the scope of topics and opinions people broadcast. Extracting public opinion from social media text provides a challenging and rich context to explore computational models of natural language, motivating new research in computational linguistics. In this paper, we connect measures of public opinion derived from polls with sentiment measured from analysis of text from the popular microblogging site Twitter. We explicitly link measurement of textual sentiment in microblog messages through time, comparing to contemporaneous polling data. In this preliminary work, summary statistics derived from extremely simple text analysis techniques are demonstrated to correlate with polling data on consumer confidence and political opinion, and can also predict future movements in the polls. We find that temporal smoothing is a critically important issue to support a successful model. Data We begin by discussing the data used in this study: Twitter for the text data, and public opinion surveys from multiple polling organizations. Twitter Corpus Twitter is a popular microblogging service in which users post messages that are very short: less than 140 characters, averaging 11 words per message. It is convenient for research because there are a very large number of messages, many of which are publicly available, and obtaining them is technically simple compared to scraping blogs from the web. We use 1 billion Twitter messages posted over the years 2008 and 2009, collected by querying the Twitter API as well as archiving the "Gardenhose" real-time stream. This comprises a roughly uniform sample of public messages, in the range of 100,000 to 7 million messages per day.
(The primary source of variation is growth of Twitter itself; its message volume increased by a factor of 50 over this two-year time period.) Most Twitter users appear to live in the U.S., but we made no systematic attempt to identify user locations or even message language, though our analysis technique should largely ignore non-English messages. There probably exist many further issues with this text sample; for example, the demographics and communication habits of the Twitter user population probably changed over this time period, which should be adjusted for given our desire to measure attitudes in the general population. There are clear opportunities for better preprocessing and stratified sampling to exploit these data. This scraping effort was conducted by Brendan Meeder. Public Opinion Polls We consider several measures of consumer confidence and political opinion, all obtained from telephone surveys of participants selected through random-digit dialing, a standard technique in traditional polling (Chang and Krosnick 2003). Consumer confidence refers to how optimistic the public feels, collectively, about the health of the economy and their personal finances. It is thought that high consumer confidence leads to more consumer spending; this line of argument is often cited in the popular media and by policymakers (Greenspan 2002), and further relationships with economic activity have been studied (Ludvigson 2004; Wilcox 2007). Knowing the public's consumer confidence is of great utility for economic policy making as well as business planning. Two well-known surveys that measure U.S. consumer confidence are the Consumer Confidence Index from the Conference Board, and the Index of Consumer Sentiment (ICS) from the Reuters/University of Michigan Surveys of Consumers (downloaded from http://www.sca.isr.umich.edu/). We use the latter, as it is more extensively studied in economics, having been conducted since the 1950s. The ICS is derived from answers to five questions administered monthly in telephone interviews with a nationally representative sample of several hundred people; responses are combined into the index score. Two of the questions, for example, are: "We are interested in how people are getting along financially these days. Would you say that you (and your family living there) are better off or worse off financially than you were a year ago?" "Now turning to business conditions in the country as a whole—do you think that during the next twelve months we'll have good times financially, or bad times, or what?" We also use another poll, the Gallup Organization's "Economic Confidence" index (downloaded from http://www.gallup.com/poll/122840/gallup-daily-economic-indexes.aspx), which is derived from answers to two questions that ask interviewees to rate the overall economic health of the country. This only addresses a subset of the issues that are incorporated into the ICS. We are interested in it because, unlike the ICS, it is administered daily (reported as three-day rolling averages). Frequent polling data are more convenient for our comparison purpose, since we have fine-grained, daily Twitter data, but only over a two-year period. Both datasets are shown in Figure 1. For political opinion, we use two sets of polls. The first is Gallup's daily tracking poll for the presidential job approval rating for Barack Obama over the course of 2009, reported as 3-day rolling averages (downloaded from http://www.gallup.com/poll/113980/Gallup-Daily-Obama-Job-Approval.aspx). These data are shown in Figure 2. The second is a set of tracking polls during the 2008 U.S. presidential election cycle, asking potential voters |
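The kind of analysis the paper describes, a smoothed daily sentiment series correlated against a poll series, can be sketched as follows (word lists, smoothing window, and the toy data are illustrative assumptions, not the paper's lexicon or corpus):

```python
import numpy as np
import pandas as pd

POS, NEG = {"good", "great", "hope"}, {"bad", "worse", "fear"}

def day_sentiment(messages):
    """Ratio of messages containing a positive word to those with a negative word."""
    pos = sum(any(w in m.lower().split() for w in POS) for m in messages)
    neg = sum(any(w in m.lower().split() for w in NEG) for m in messages)
    return pos / max(neg, 1)

daily_msgs = {  # date -> list of tweets (toy stand-in for the corpus)
    "2009-01-01": ["good times ahead", "bad economy"],
    "2009-01-02": ["great news", "hope it holds"],
    "2009-01-03": ["worse than expected"],
    "2009-01-04": ["good recovery", "great jobs report"],
}
sent = pd.Series({d: day_sentiment(m) for d, m in daily_msgs.items()})
sent_smooth = sent.rolling(window=2, min_periods=1).mean()  # temporal smoothing

poll = pd.Series({"2009-01-01": 52.0, "2009-01-02": 53.5,
                  "2009-01-03": 51.0, "2009-01-04": 54.0})  # hypothetical poll
print("correlation:", np.corrcoef(sent_smooth.values, poll.values)[0, 1])
```

Per the paper's own finding, the choice of smoothing window matters a great deal: longer windows trade responsiveness for noise reduction.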
Coalition Formation Games for Dynamic Multirobot Tasks | This paper studies the problem of forming coalitions for dynamic tasks in multirobot systems. As robots, either individually or in groups, encounter new tasks for which individual or group resources do not suffice, robot coalitions that are collectively capable of meeting these requirements need to be formed. We propose an approach where such tasks are reported to a task coordinator that is responsible for coalition formation. The novelty of this approach is that the process of determining these coalitions is modeled as a coalition formation game where groups of robots are evaluated with respect to resources and cost. As such, the resulting coalitions ensure that no group of robots has a viable alternative to staying within its assigned coalition. The newly determined coalitions are then conveyed to the robots, which form the coalitions as instructed. As new tasks are encountered, coalitions merge and split so that the resulting coalitions are capable of handling the newly encountered tasks. Extensive simulations demonstrate the effectiveness of the proposed approach on a wide range of tasks. |
Development of whole-body humanoid "pneumat-BS" with pneumatic musculoskeletal system | The human musculoskeletal system is thought to play an important role in performing various static and dynamic tasks. From this standpoint, several musculoskeletal humanoid robots have been developed in recent years. However, existing musculoskeletal robots either did not have an upper body with several DOFs to balance their bodies statically or did not have enough power to perform dynamic tasks. We think the musculoskeletal structure has two significant properties: whole-body flexibility and whole-body coordination. Using these two properties can make robots' performance better than before. In this study, we developed a humanoid robot with a musculoskeletal system that is driven by pneumatic artificial muscles. To demonstrate the robot's capability in static and dynamic tasks, we conducted two experiments. As a static task, we conducted a standing experiment using a simple feedback control and evaluated the stability by applying an impulse to the robot. As a dynamic task, we conducted a walking experiment using a feedforward controller with human muscle activation patterns and confirmed that the robot was able to perform the dynamic task. |
From Theories to Queries | This article surveys recent work in active learning aimed at making it more practical for real-world use. In general, active learning systems aim to make machine learning more economical, since they can participate in the acquisition of their own training data. An active learner might iteratively select informative query instances to be labeled by an oracle, for example. Work over the last two decades has shown that such approaches are effective at maintaining accuracy while reducing training set size in many machine learning applications. However, as we begin to deploy active learning in real ongoing learning systems and data annotation projects, we are encountering unexpected problems—due in part to practical realities that violate the basic assumptions of earlier foundational work. I review some of these issues, and discuss recent work being done to address the challenges. |
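As a concrete instance of the query loop described above, here is a minimal pool-based uncertainty-sampling sketch (synthetic data; in a real deployment the oracle would be a human annotator):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.standard_normal((500, 2))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # stands in for the oracle

# Seed with a few labeled examples from each class
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
unlabeled = [i for i in range(500) if i not in set(labeled)]

model = LogisticRegression()
for _ in range(20):                       # 20 queries to the oracle
    model.fit(X_pool[labeled], y_pool[labeled])
    probs = model.predict_proba(X_pool)[unlabeled, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]  # most uncertain
    labeled.append(query)                 # oracle provides the label
    unlabeled.remove(query)

print("accuracy on pool:", model.score(X_pool, y_pool))
```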
Scoring reputation in online social networks | Reputation management plays a key role in improving brand awareness and mastering the impact of harmful events. Aware of its importance, companies invest heavily in acquiring powerful social media monitoring (SMM) tools. Several SMM tools are available in the market today; they help marketers and firms' managers follow relevant insights about what people think of their brand, who speaks about it, and when and how it occurs. For this, SMM tools use several metrics concerning messages sent in social media about a brand, but few provide a score for that brand's reputation. Our contribution through this work is to introduce the reputation score measured by our Intelligent Reputation Measuring System (IRMS) and compare it with the metrics of existing SMM tools. |
The Evolution of Wang Institute's Master of Software Engineering Program | Master of Software Engineering (MSE) programs are relatively new. Starting such a program is expensive in terms of human and capital resources. Some of the costs are: preparation of new course materials, acquisition of sophisticated equipment and software, and maintenance of a low student/faculty ratio. In addition, MSE students and faculty have special needs, such as technical background and familiarity with current industrial practices. |
PDFX: fully-automated PDF-to-XML conversion of scientific literature | PDFX is a rule-based system designed to reconstruct the logical structure of scholarly articles in PDF form, regardless of their formatting style. The system's output is an XML document that describes the input article's logical structure in terms of title, sections, tables, references, etc. and also links it to geometrical typesetting markers in the original PDF, such as paragraph and column breaks. The key aspect of the presented approach is that the rule set used relies on relative parameters derived from font and layout specifics of each article, rather than on a template-matching paradigm. The system thus obviates the need for domain- or layout-specific tuning or prior training, exploiting only typographical conventions inherent in scientific literature. Evaluated against a significantly varied corpus of articles from nearly 2000 different journals, PDFX gives a 77.45 F1 measure for top-level heading identification and 74.03 for extracting individual bibliographic items. The service is freely available for use at http://pdfx.cs.man.ac.uk/. |
Cross-Lingual Mixture Model for Sentiment Classification | The amount of labeled sentiment data in English is much larger than that in other languages. Such a disproportion has aroused interest in cross-lingual sentiment classification, which aims to conduct sentiment classification in the target language (e.g. Chinese) using labeled data in the source language (e.g. English). Most existing work relies on machine translation engines to directly adapt labeled data from the source language to the target language. This approach suffers from the limited coverage of vocabulary in the machine translation results. In this paper, we propose a generative cross-lingual mixture model (CLMM) to leverage unlabeled bilingual parallel data. By fitting parameters to maximize the likelihood of the bilingual parallel data, the proposed model learns previously unseen sentiment words from the large bilingual parallel data and improves vocabulary coverage significantly. Experiments on multiple data sets show that CLMM is consistently effective in two settings: (1) labeled data in the target language are unavailable; and (2) labeled data in the target language are also available. |
Step by Step Towards Creating a Safe Smart Contract: Lessons and Insights from a Cryptocurrency Lab | We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contractual parties, the security of a contract program is of paramount importance. Our lab exposed numerous common pitfalls in designing safe and secure smart contracts. We document several typical classes of mistakes students made, suggest ways to fix/avoid them, and advocate best practices for programming smart contracts. Finally, our pedagogical efforts have also resulted in online open course materials for programming smart contracts, which may be of independent interest to the community. |
Gerrit software code review data from Android | Over the past decade, a number of tools and systems have been developed to manage various aspects of the software development lifecycle. Until now, tool supported code review, an important aspect of software development, has been largely ignored. With the advent of open source code review tools such as Gerrit along with projects that use them, code review data is now available for collection, analysis, and triangulation with other software development data. In this paper, we extract Android peer review data from Gerrit. We describe the Android peer review process, the reverse engineering of the Gerrit JSON API, our data mining and cleaning methodology, database schema, and provide an example of how the data can be used to answer an empirical software engineering question. The database is available for use by the research community. |
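For readers who want to pull similar data today, modern Gerrit servers expose a documented REST endpoint (the paper reverse-engineered an earlier JSON interface; the query below follows current Gerrit REST conventions):

```python
import json
import urllib.request

# Fetch the five most recent merged changes from the Android Gerrit.
url = "https://android-review.googlesource.com/changes/?q=status:merged&n=5"
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode("utf-8")

# Gerrit prefixes JSON responses with the line ")]}'" to prevent XSSI;
# strip that first line before parsing.
changes = json.loads(body.split("\n", 1)[1])
for change in changes:
    print(change["_number"], change["subject"])
```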
Pre-service Teachers' Views on their Information Technology Education | The concern that teachers are being inadequately prepared by their pre-service education to be confident and competent users of information technology remains, despite over a decade of computer availability in education systems. This paper examines the views of 234 pre-service teachers who experienced an information technology component in their teacher education course. It finds that many students have low computer self-efficacy and express negative feelings about information technology. These perceptions are gender and age related. It concludes that the need for information technology competency training remains important, but such programmes need to be specifically tailored to account for the wide range of experiences and attitudes of pre-service teachers. |
Using user generated online photos to estimate and monitor air pollution in major cities | With the rapid development of the economy in China over the past decade, air pollution has become an increasingly serious problem in major cities and has caused grave public health concerns in China. Recently, a number of studies have dealt with air quality and air pollution. Among them, some attempt to predict and monitor the air quality from different sources of information, ranging from deployed physical sensors to social media. These methods are either too expensive or unreliable, prompting us to search for a novel and effective way to sense the air quality. In this study, we propose to employ state-of-the-art computer vision techniques to analyze photos that can be easily acquired from online social media. Next, we establish the correlation between the haze level computed directly from photos and the official PM2.5 record for the city and time at which each photo was taken. Our experiments based on both synthetic and real photos have shown the promise of this image-based approach to estimating and monitoring air pollution. |
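The entry above does not name the specific computer-vision technique used to compute the haze level, so the following is only a plausible sketch using the well-known dark channel prior as a haze proxy; the image file name is hypothetical and the patch size is illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter
from imageio.v2 import imread

def dark_channel(img, patch=15):
    # Dark channel prior: per-pixel minimum over the RGB channels,
    # followed by a local minimum filter. Haze-free regions tend to
    # have near-zero dark channels; hazy regions do not.
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

def haze_score(img_path):
    img = imread(img_path).astype(np.float64) / 255.0  # expects an RGB image
    return float(dark_channel(img).mean())             # crude scene-level haze level

# Hypothetical usage: a higher score suggests a hazier photo, which could
# then be correlated with the official PM2.5 record for that city and time.
# print(haze_score("city_skyline.jpg"))
```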
Fertility and the economy. | Fertility and the economy is examined in the context of the Malthusian question about the links between family choices and long-term economic growth. Micro-level differences are not included, nor is a comprehensive range of economic or determinant variables. Specific attention is paid to income and price effects, the quality of children, overlapping generations, mortality effects, uncertainty, and economic growth. Fertility and the demand for children are linked to parental incomes and the cost of rearing children, which is affected by public policies that change those costs. Demand is also related to child and adult mortality and to uncertainty about the sex of the child. Fertility in one generation affects fertility in the next. Malthusian and neoclassical models do not capture the current situation of modern economies, with rising income per capita, growing human and physical capital, extensive involvement of married women in the labor force, and fertility declining to very low levels. In spite of the present advances in firm knowledge about the relationships between fertility and economic and social variables, there is still much greater ignorance of the interactions. The Malthusian utility function, which says fertility rises and falls with income, did hold up to two centuries of scrutiny, and the Malthusian inclusion of shifting tastes in his analysis can be translated in the modern context to include the price of children. The inclusion of net cost has significant consequences: rural fertility can be higher because the cost of rearing children who contribute work to maintaining the farm is lower than in the city, and an income tax deduction for children in the US reduces the cost. Economic growth raises the cost of children because the time spent on child care becomes more valuable. The modern context has changed from Malthusian times, and the cost of education, training, and medical care is relevant. The implication is that a rise in income can reduce the demand for children when the education and training of children increase: quality is substituted for quantity. The neoclassical model, in which population growth affects "the capital-labor ratio and the degree of capital deepening", is examined, as is the modern approach, and the implications, such as intergenerational transfers and parental altruism, are drawn out. |
MFCC and its applications in speaker recognition | Speech processing has emerged as one of the important application areas of digital signal processing. Various fields of research in speech processing are speech recognition, speaker recognition, speech synthesis, speech coding, etc. The objective of automatic speaker recognition is to extract, characterize and recognize the information about speaker identity. Feature extraction is the first step in speaker recognition. Many algorithms have been suggested and developed by researchers for feature extraction. In this work, the Mel Frequency Cepstrum Coefficient (MFCC) feature has been used for designing a text-dependent speaker identification system. Some modifications to the existing MFCC feature extraction technique are also suggested to improve the speaker recognition efficiency. |
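As a concrete illustration of the standard MFCC front end mentioned above (not the paper's modified variant), here is a minimal sketch assuming the librosa library; the file name and the mean-vector template are hypothetical simplifications.

```python
import numpy as np
import librosa

# Load an enrollment utterance and extract 13 MFCCs per frame.
y, sr = librosa.load("speaker01_utt01.wav", sr=16000)  # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # shape: (13, n_frames)

# A toy text-dependent template: the per-coefficient mean over frames.
template = mfcc.mean(axis=1)

def score(template, test_mfcc):
    # Smaller distance = closer match to the enrolled speaker.
    return float(np.linalg.norm(template - test_mfcc.mean(axis=1)))
```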
Pooled analysis of the CONFIRM registries: impact of gender on procedure and angiographic outcomes in patients undergoing orbital atherectomy for peripheral artery disease. | PURPOSE
To compare the acute procedure and angiographic outcomes of peripheral artery disease (PAD) patients treated with orbital atherectomy stratified by gender.
METHODS
The CONFIRM I, II, and III registries are US multicenter, nonrandomized, all-comers registries of PAD patients who were treated with orbital atherectomy. All patients with gender specified in the registry database were included in the current analysis, which compared the final residual stenosis achieved after atherectomy and the rate of acute complications in female and male patients. The 3 registries included 3131 patients with 4761 lesions: 1261 women (mean age 73.2 ± 10.7 years) with 1874 lesions and 1870 men (mean age 70.4 ± 10.2 years) with 2887 lesions.
RESULTS
The women were older (p < 0.001) and had a higher but nonsignificant prevalence of critical limb ischemia (p = 0.075). After treatment, the final residual stenosis in women vs. men was 9% ± 11% vs. 11% ± 11%, respectively (p < 0.001). Women had a higher rate of all types of dissection (13.3% vs. 9.9%, p < 0.001). However, both genders had similar rates of flow-limiting dissections (1.6% vs. 1.4%, p = 0.61), perforation, slow flow, vessel closure, spasm, embolism, and thrombus formation.
CONCLUSION
The gender analysis of the CONFIRM registries revealed that there was successful lesion modification with orbital atherectomy in both men and women; however, women had a higher rate of dissection (all types). This difference is likely because of the older age and higher percentage of critical limb ischemia in women in this cohort. These results, however, suggest that additional studies should be completed to further understand the increased risks for women vs. men during endovascular procedures. |
Big Data Mobile Services for New York City Taxi Riders and Drivers | In recent years, taxis in New York City have been equipped with GPS sensors to record the start and end locations of every trip, in conjunction with fare collection equipment recording trip time, fare, and distance. This paper analyzes this vast dataset with big data technologies. We use MapReduce and Hive in order to understand traffic and travel patterns, discover relationships among the data, and make predictions on the taxi network. We also propose a service model that combines the results of our big data analysis with smartphone applications to offer users various taxi-related services. |
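The paper runs its aggregations as MapReduce and Hive jobs; as a stand-in, the sketch below shows the kind of traffic-pattern query described, written with pandas. The file and column names are hypothetical rather than the actual NYC trip schema.

```python
import pandas as pd

trips = pd.read_csv("taxi_trips.csv", parse_dates=["pickup_datetime"])

# Travel-pattern aggregation: trip volume, fare and distance by hour of day.
by_hour = (trips
           .assign(hour=trips["pickup_datetime"].dt.hour)
           .groupby("hour")
           .agg(trip_count=("fare_amount", "size"),
                avg_fare=("fare_amount", "mean"),
                avg_distance=("trip_distance", "mean")))
print(by_hour.sort_values("trip_count", ascending=False).head())
```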
Substrate Modeling and Lumped Substrate Resistance Extraction for CMOS ESD/Latchup Circuit Simulation | Due to interactions through the common silicon substrate, the layout and placement of devices and substrate contacts can have significant impacts on a circuit's ESD (Electrostatic Discharge) and latchup behavior in CMOS technologies. Proper substrate modeling is thus required for circuit-level simulation to predict the circuit's ESD performance and latchup immunity. In this work we propose a new substrate resistance network model, and develop a novel substrate resistance extraction method that accurately calculates the distribution of injection current into the substrate during ESD or latchup events. With the proposed substrate model and resistance extraction, we can capture the three-dimensional layout parasitics in the circuit as well as the vertical substrate doping profile, and simulate these effects on circuit behavior at the circuit-level accurately. The usefulness of this work for layout optimization is demonstrated with an industrial circuit example. |
Tactics for management of thrips (Thysanoptera: Thripidae) and tomato spotted wilt virus in tomato. | Four studies were conducted in Georgia during spring 1999, 2000, 2001, and 2002 to evaluate various management tactics for reducing thrips and thrips-vectored tomato spotted wilt virus (TSWV) in tomato and their interactions relative to fruit yield. Populations of thrips vectors of TSWV, Frankliniella occidentalis (Pergande) and Frankliniella fusca (Hinds), were determined using flower and sticky trap samples. The management practices evaluated were host plant resistance, insecticide treatments, and silver or metallic reflective mulch. Averaged over all tests, the TSWV-resistant tomato 'BHN444' on silver mulch treatment had the largest effect in terms of reducing thrips and spotted wilt and increasing marketable yield. Of the insecticide treatments tested, the imidacloprid soil treatment followed by early applications of a thrips-effective foliar insecticide treatment provided a significant increase in yield over other treatments. Tomato yield was negatively correlated with the number of F. fusca and the percentage of TSWV incidence. F. occidentalis per blossom was positively correlated with the percentage of TSWV incidence, but not with yield. No significant interactions were observed between the cultivar and reflective mulch main-plot treatments and the insecticide subplot treatments; thus, the treatments appeared to be additive in reducing the economic impact of thrips-vectored TSWV. Control tactics that manage thrips early in the growing season significantly increased tomato yield in years when the incidence of TSWV was high (>17%). |
Ecosystem carbon stocks of mangrove forest in Yingluo Bay, Guangdong Province of South China | Mangrove ecosystems could play important ecological, social and economic roles in addressing the mitigation of climate change through reduced deforestation. The present study aimed to estimate ecosystem carbon stocks (carbon accumulated in both biomass and soil) across the tidal gradient of mangroves in Yingluo Bay, Guangdong Province of South China. The ecosystem carbon storage, as well as vegetation biomass and soil organic carbon (SOC) storage, increased along the tidal gradient of mangrove forests from the low intertidal zone to the high intertidal zone. The ecosystem carbon storage of the Avicennia marina, Sonneratia apetala, Aegiceras corniculatum + Kandelia obovata, Rhizophora stylosa and Bruguiera gymnorrhiza stands was 212.88 t/ha, 262.03 t/ha, 323.57 t/ha, 443.13 t/ha and 376.80 t/ha, respectively, and their vegetation carbon pools accounted for 11.65%, 29.79%, 19.19%, 37.76% and 25.94% of ecosystem carbon storage, respectively. Compared to the mangrove ecosystem, there was a much lower ecosystem carbon stock (97.32 t/ha) at the bare mud flat. The relatively high vegetation biomass, coupled with carbon-rich soils, resulted in larger ecosystem carbon stocks compared to other subtropical forests in the same latitude zone. A significant positive correlation was found between vegetation biomass and soil organic carbon concentration of the upper 0–50 cm layers. This may indicate that an increase in vegetation biomass will raise the mangrove-derived SOC in the soil (especially in the 0–50 cm soil layers). Reforestation with mangrove forests is an important way to increase ecosystem carbon sequestration in coastal areas. |
Racial Identity and Gender as Moderators of the Relationship Between Body Image and Self-esteem for African Americans | This study explored whether multiple dimensions of racial identity and gender moderated the relationship between body dissatisfaction and self-esteem for African American men and women (N=425) using an intersectional approach. Centrality (strength of identification with racial group), private regard (positive feelings about racial group), public regard (positive feelings others have about racial group), and gender moderated the relationship between body dissatisfaction and self-esteem for a sample of men (n=109) and women (n=316) college students from three regions of the United States. Body dissatisfaction was related to lower self-esteem only for those African Americans for whom race was less central to their identities. High private regard and low body dissatisfaction were synergistically associated with higher self-esteem. Similarly, low public regard and high body dissatisfaction were synergistically related to lower self-esteem. There was a positive main effect for assimilation ideology (emphasis on similarities between African Americans and Western society) on self-esteem; however, it was not a significant moderator. The relationship between body dissatisfaction and self-esteem was stronger for women than for men. This study extends our knowledge of the ways in which racial attitudes and gender shape how African Americans experience their bodies and are related to self-esteem. |
Pig castration: will the EU manage to ban pig castration by 2018? | BACKGROUND
In 2010, the 'European Declaration on alternatives to surgical castration of pigs' was agreed. The Declaration stipulates that from January 1, 2012, surgical castration of pigs shall only be performed with prolonged analgesia and/or anaesthesia, and from 2018 surgical castration of pigs should be phased out altogether. The Federation of Veterinarians of Europe, together with the European Commission, carried out an online survey via SurveyMonkey© to investigate the progress made in different European countries. This study provides descriptive information on the practice of piglet castration across 24 European countries. It also gives an overview of the published literature regarding the practicability and effectiveness of the alternatives to surgical castration without anaesthesia/analgesia.
RESULTS
Forty usable survey responses from 24 countries were received. Apart from Ireland, Portugal, Spain and the United Kingdom, which have a history of producing entire males, 18 countries surgically castrate 80% or more of their male pig population. Overall, across the 24 European countries surveyed, 5% of surgically castrated male pigs are castrated with anaesthesia and analgesia and 41% with analgesia alone. Meloxicam, ketoprofen and flunixin were the most frequently used drugs for analgesia. Procaine was the most frequent local anaesthetic. The sedative azaperone was frequently mentioned even though it does not have analgesic properties. Half of the countries surveyed believed that the method of anaesthesia/analgesia applied is not practicable and effective. However, countries that have experience in using both anaesthesia and post-operative analgesics, such as Norway, Sweden, Switzerland and The Netherlands, found this method practical and effective. The estimated average percentage of immunocastrated pigs in the countries surveyed was 2.7% (median = 0.2%), with Belgium presenting the highest estimated percentage of immunocastrated pigs (18%).
CONCLUSION
The deadlines of January 1, 2012, and of 2018 are far from being met. Opinions on the animal-welfare conformity and practicability of surgical castration with analgesia/anaesthesia and of the alternatives to surgical castration are widely dispersed. Although countries using analgesia/anaesthesia routinely found this method practical and effective, only few countries seem to aim at meeting the deadline to phase out surgical castration completely. |
Innovative LDS antenna for 4G applications | An innovative LDS 4G antenna solution operating in the 698-960 MHz band is presented. It is composed of two radiating elements recombined into a broadband single-feed antenna system using a multiband matching circuit design. Matching interfaces are synthesized using lumped components placed on the FR4 PCB supporting the LDS antenna. Measurements show a reflection coefficient better than -6 dB over the 698-960 MHz band, with a 30% peak total efficiency. Measurements using a realistic phone casing showed the same performance. The proposed approach can be extended to additional bands, offering an innovative antenna solution able to address the multi-band challenge of 4G applications. |
Correlates of adherence to supervised exercise in patients awaiting surgical removal of malignant lung lesions: results of a pilot study. | PURPOSE/OBJECTIVES
To examine the demographic, medical, and social-cognitive correlates of adherence to a presurgical exercise training intervention in patients awaiting surgery for suspected malignant lung lesions.
DESIGN
Pilot study, single-group, prospective design with convenience sampling.
SETTING
Exercise training was performed at a university research fitness center in western Canada.
SAMPLE
19 patients awaiting surgical resection of suspected malignant lung lesions.
METHODS
At baseline, participants completed a questionnaire including the Theory of Planned Behavior variables of perceived behavioral control, attitude, and subjective norm, as well as medical and demographic information. Participants were asked to attend five supervised exercise sessions per week during the surgical wait time (mean = 8 ± 2.4 weeks).
MAIN RESEARCH VARIABLES
Theory of Planned Behavior variables and exercise adherence.
FINDINGS
Adherence to the exercise intervention was 73% (range = 0%-100%). Correlates of adherence were perceived behavioral control (r = 0.63; p = 0.004) and subjective norm (r = 0.51; p = 0.014). Participants with greater than 80% adherence reported significantly higher behavioral control than participants with less than 80% adherence (mean difference = 1.1; 95% confidence interval = 0.1-2.2; p = 0.035). Men had better adherence than women (mean difference = 24.9%; 95% confidence interval = 0.4-49.4; p = 0.047).
CONCLUSIONS
Perceived behavioral control and subjective norm were the strongest correlates of exercise adherence. Women could be at risk for poor exercise adherence prior to lung surgery.
IMPLICATIONS FOR NURSING
This information could be useful for clinicians in their attempts to improve adherence to exercise interventions in patients awaiting surgery for malignant lung lesions. |
Evaluating Reinforcement Learning Algorithms in Observational Health Settings | Omer Gottesman, Fredrik Johansson, Joshua Meier, Jack Dent, Donghun Lee, Srivatsan Srinivasan, Linying Zhang, Yi Ding, David Wihl, Xuefeng Peng, Jiayu Yao, Isaac Lage, Christopher Mosch, Li-wei H. Lehman, Matthieu Komorowski, Aldo Faisal, Leo Anthony Celi, David Sontag, and Finale Doshi-Velez. Affiliations: Paulson School of Engineering and Applied Sciences, Harvard University; Institute for Medical Engineering and Science, MIT; T.H. Chan School of Public Health, Harvard University; Department of Statistics, Harvard University; Laboratory for Computational Physiology, Harvard-MIT Health Sciences & Technology, MIT; Department of Surgery and Cancer, Faculty of Medicine, Imperial College London; Department of Bioengineering, Imperial College London; Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center; MIT Critical Data |
Estimating a Dirichlet distribution | The Dirichlet distribution and its compound variant, the Dirichlet-multinomial, are two of the most basic models for proportional data, such as the mix of vocabulary words in a text document. Yet the maximum-likelihood estimate of these distributions is not available in closed-form. This paper describes simple and efficient iterative schemes for obtaining parameter estimates in these models. In each case, a fixed-point iteration and a Newton-Raphson (or generalized Newton-Raphson) iteration is provided. The Dirichlet distribution is a model of how proportions vary. Let p denote a random vector whose elements sum to 1, so that p_k represents the proportion of item k. Under the Dirichlet model with parameter vector α, the probability density at p is p(p) ∼ D(α_1, ..., α_K) = (Γ(∑_k α_k) / ∏_k Γ(α_k)) ∏_k p_k^(α_k − 1), where p_k > 0. |
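Since the maximum-likelihood estimate has no closed form, the fixed-point scheme referred to above can be sketched as follows; this is a minimal implementation in the spirit of Minka's iteration (the initialization and iteration counts are illustrative).

```python
import numpy as np
from scipy.special import digamma, polygamma

def inv_digamma(y, newton_iters=5):
    # Piecewise initialization from Minka (2000), refined by Newton steps.
    x = np.where(y >= -2.22, np.exp(y) + 0.5, -1.0 / (y - digamma(1.0)))
    for _ in range(newton_iters):
        x = x - (digamma(x) - y) / polygamma(1, x)
    return x

def dirichlet_mle(P, max_iters=1000, tol=1e-10):
    # P: (N, K) array of observed proportion vectors, each row summing to 1.
    logp_bar = np.log(P).mean(axis=0)     # sufficient statistics
    alpha = np.ones(P.shape[1])           # simple starting point
    for _ in range(max_iters):
        # Fixed point: digamma(alpha_k) = digamma(sum(alpha)) + mean log p_k
        alpha_new = inv_digamma(digamma(alpha.sum()) + logp_bar)
        if np.max(np.abs(alpha_new - alpha)) < tol:
            break
        alpha = alpha_new
    return alpha_new
```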
The Evolution of Social Ethics: Using Economic History to Understand Economic Ethics | In the development of Roman Catholic social thought from the teachings of the scholastics to the modern social encyclicals, changes in normative economics reflect the transformation of an economic terrain from its feudal roots to the modern industrial economy. The preeminence accorded by the modern market to the allocative over the distributive function of price broke the convenient convergence of commutative and distributive justice in scholastic just price theory. Furthermore, the loss of custom, law, and usage in defining the boundaries of economic behavior led to a depersonalization of economic relationships that had previously provided effective informal means of protecting individual well-being. Hence, recent economic ethics has had to look for nonprice, nonmarket mechanisms for distributive justice. This is reflected, for example, in the shift in attitude from the medieval antipathy toward unions to the contemporary defense of organized labor on moral grounds. |
Statistical estimation of gene expression using multiple laser scans of microarrays | UNLABELLED
We propose a statistical model for estimating gene expression using data from multiple laser scans at different settings of hybridized microarrays. A functional regression model is used, based on a non-linear relationship with both additive and multiplicative error terms. The function is derived as the expected value of a pixel, given that values are censored at 65 535, the maximum detectable intensity for double precision scanning software. Maximum likelihood estimation based on a Cauchy distribution is used to fit the model, which is able to estimate gene expressions taking account of outliers and the systematic bias caused by signal censoring of highly expressed genes. We have applied the method to experimental data. Simulation studies suggest that the model can estimate the true gene expression with negligible bias.
AVAILABILITY
FORTRAN 90 code for implementing the method can be obtained from the authors. |
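The abstract does not give the exact functional form, so the sketch below is only a simplified stand-in, not the authors' model: a linear relation between two scans fitted by censored maximum likelihood with Cauchy errors, where pixels at the 65 535 ceiling contribute through the survival function.

```python
import numpy as np
from scipy import stats, optimize

CEIL = 65535.0  # saturation level of the scanning software

def negloglik(theta, x, y):
    # Tobit-style likelihood: density for uncensored values,
    # survival function for values clipped at the ceiling.
    a, b, log_s = theta
    mu, s = a + b * x, np.exp(log_s)
    cens = y >= CEIL
    ll = stats.cauchy.logpdf(y[~cens], loc=mu[~cens], scale=s).sum()
    ll += stats.cauchy.logsf(CEIL, loc=mu[cens], scale=s).sum()
    return -ll

def fit(x, y):
    # x: intensities from a low scan; y: from a high scan (some saturated).
    res = optimize.minimize(negloglik, x0=np.array([0.0, 1.0, 0.0]),
                            args=(x, y), method="Nelder-Mead")
    return res.x
```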
Efficacy and safety of ABT-335 (fenofibric acid) in combination with atorvastatin in patients with mixed dyslipidemia. | In patients with mixed dyslipidemia characterized by increased triglycerides (TG), decreased high-density lipoprotein (HDL) cholesterol, and increased low-density lipoprotein (LDL) cholesterol, monotherapy with lipid-altering drugs often fails to achieve all lipid targets. This multicenter, double-blind, active-controlled study evaluated ABT-335 (fenofibric acid) in combination with 2 doses of atorvastatin in patients with mixed dyslipidemia. A total of 613 patients with LDL cholesterol ≥130 mg/dl, TG ≥150 mg/dl, and HDL cholesterol <40 mg/dl for men and <50 mg/dl for women were randomly assigned to ABT-335 (135 mg), atorvastatin (20, 40, or 80 mg), or combination therapy (ABT-335 + atorvastatin 20 or 40 mg) and treated for 12 weeks. Combination therapy with ABT-335 + atorvastatin 20 mg resulted in significantly greater improvements in TG (-45.6% vs -16.5%) and HDL cholesterol (14.0% vs 6.3%) compared with atorvastatin 20 mg and LDL cholesterol (-33.7% vs -3.4%) compared with ABT-335. Similarly, significantly greater improvements were observed with ABT-335 + atorvastatin 40 mg in TG (-42.1% vs -23.2%) and HDL cholesterol (12.6% vs 5.3%) compared with atorvastatin 40 mg and LDL cholesterol (-35.4% vs -3.4%) compared with ABT-335 monotherapy. Combination therapy also improved multiple secondary variables. Combination therapy was generally well tolerated with a safety profile consistent with those of ABT-335 and atorvastatin monotherapies. No rhabdomyolysis was reported. In conclusion, ABT-335 + atorvastatin combination therapy resulted in more effective control of multiple lipid parameters than either monotherapy and may be an appropriate therapy for patients with mixed dyslipidemia. |
Deep Structured Scene Parsing by Learning with Image Descriptions | This paper addresses a fundamental problem of scene understanding: How to parse the scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations) that finely accords with human perception. We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixelwise object labeling and ii) a recursive neural network (RNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborate user annotations (e.g., manually labeled semantic maps and relations), we train our deep model in a weakly-supervised manner by leveraging the descriptive sentences of the training images. Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and use these trees to guide the discovery of the configurations of the training images. Once these scene configurations are determined, the parameters of both the CNN and RNN are updated accordingly by back propagation. The entire model training is accomplished through an Expectation-Maximization method. Extensive experiments suggest that our model is capable of producing meaningful and structured scene configurations and achieving more favorable scene labeling performance on PASCAL VOC 2012 over other state-of-the-art weakly-supervised methods. |
Continuum regression for cross-modal multimedia retrieval | Understanding the relationship among different modalities is a challenging task. The frequently used canonical correlation analysis (CCA) and its variants have proved effective for building a common space in which the correlation between different modalities is maximized. In this paper, we show that CCA and its variants may cause information dissipation when switching between modalities, and we thus propose to use the continuum regression (CR) model to handle this problem. In particular, the CR model with a fixed variance coefficient of 1/2 is adopted here. We also apply the multinomial logistic regression model for the further classification task. To evaluate the CR model, we perform a series of cross-modal retrieval experiments on two modalities, namely image and text. Compared with previous methods, experimental results show that the CR model achieves the best retrieval precision, which demonstrates the potential of our method for real internet search applications. |
Angiographic and clinical outcome following coronary stenting of small vessels: a comparison with coronary stenting of large vessels. | OBJECTIVES
Stent implantation reduces restenosis in vessels ≥3 mm compared with balloon angioplasty, but few data are available for stents implanted in vessels <3 mm. The aim of this study was to evaluate immediate and follow-up patient outcomes after stent implantation in vessels <3 mm compared to stent implantation in vessels ≥3 mm.
METHODS
Between March 1993 and May 1996, a total of 1,298 consecutive patients (1,673 lesions) underwent coronary stenting. The study population was divided into two groups based on angiographic vessel diameter. In case of multivessel stenting, patients were randomly assigned only one lesion. Group I included 696 patients (696 lesions) in whom stents were implanted in vessels ≥3 mm, and group II included 602 patients (602 lesions) in whom stents were implanted in vessels <3 mm.
RESULTS
There was no difference in procedural success (95.4% in group I and 95.9% in group II) or subsequent subacute stent thrombosis (1.5% in group I and 1.4% in group II, p = NS). The postprocedure residual diameter stenosis was 3.31 ± 12.4% in group I and -2.45 ± 16.2% in group II. Angiographic follow-up was performed in 75% of patients; restenosis occurred in 19.9% of patients in group I and 32.6% in group II (p < 0.0001). Absolute lumen gain was significantly higher in group I compared to group II, but absolute late lumen loss was similar in the two groups (1.05 ± 0.91 mm in group I vs. 1.11 ± 0.85 mm in group II, p = NS). Subsequently, the loss index was more favorable in group I (0.45 vs. 0.56; p = 0.0006). Independent predictors of freedom from restenosis by multivariate logistic regression in the total population were: larger baseline reference diameter (odds ratio 2.032, p = 0.006), larger postprocedure minimal stent cross-sectional area (odds ratio 1.190, p = 0.0001) and shorter lesions (odds ratio 1.037, p = 0.01). At long-term clinical follow-up, patients with small vessels had a lower rate of event-free survival (63% vs. 71.3%, p = 0.007).
CONCLUSIONS
Coronary stenting can be performed in small vessels with a high success rate and low incidence of stent thrombosis. However, the long-term angiographic and clinical outcome of patients undergoing stent implantation in small vessels is less favorable than that of patients with large vessels. |
A Survey of Question Answering for Math and Science Problems | The Turing test was long considered the measure of artificial intelligence. But with the advances in AI, it has proved to be an insufficient measure. We can now aim to measure machine intelligence as we measure human intelligence. One widely accepted measure of intelligence is the standardized math and science test. In this paper, we explore the progress made towards the goal of making a machine smart enough to pass such standardized tests. We examine the challenges and opportunities posed by the domain, and note that we are still quite some way from making a system as smart as even a middle-school student. |
Lay Theories About White Racists: What Constitutes Racism (and What Doesn't) | Psychological theories of racial bias assume a pervasive motivation to avoid appearing racist, yet researchers know little regarding laypeople's theories about what constitutes racism. By investigating lay theories of White racism across both college and community samples, we seek to develop a more complete understanding of the nature of race-related norms, motivations, and processes of social perception in the contemporary United States. Factor analyses in Studies 1 and 1a indicated three factors underlying the traits laypeople associate with White racism: evaluative, psychological, and demographic. Studies 2 and 2a revealed a three-factor solution for behaviors associated with White racism: discomfort/unfamiliarity, overt racism, and denial of problem. For both traits and behaviors, lay theories varied by participants' race and their race-related attitudes and motivations. Specifically, support emerged for the prediction that lay theories of racism reflect a desire to distance the self from any aspect of the category 'racist'. |
Predicting Amazon review helpfulness [CSE 255 Assignment 1] | Reviews on Amazon are ranked by how helpful users rate them, in an effort to quickly summarize the opinions on a product for potential buyers. This project aims to explore what factors affect a review's helpfulness by building a classification model on the Amazon movie reviews data set. The model performs well, with accuracy over 85%, and it is found that a review's writing style, product rating and unigram features affect helpfulness the most. |
The effects of sleep extension on the athletic performance of collegiate basketball players. | STUDY OBJECTIVES
To investigate the effects of sleep extension over multiple weeks on specific measures of athletic performance as well as reaction time, mood, and daytime sleepiness.
SETTING
Stanford Sleep Disorders Clinic and Research Laboratory and Maples Pavilion, Stanford University, Stanford, CA.
PARTICIPANTS
Eleven healthy students on the Stanford University men's varsity basketball team (mean age 19.4 ± 1.4 years).
INTERVENTIONS
Subjects maintained their habitual sleep-wake schedule for a 2-4 week baseline followed by a 5-7 week sleep extension period. Subjects obtained as much nocturnal sleep as possible during sleep extension with a minimum goal of 10 h in bed each night. Measures of athletic performance specific to basketball were recorded after every practice including a timed sprint and shooting accuracy. Reaction time, levels of daytime sleepiness, and mood were monitored via the Psychomotor Vigilance Task (PVT), Epworth Sleepiness Scale (ESS), and Profile of Mood States (POMS), respectively.
RESULTS
Total objective nightly sleep time increased during sleep extension compared to baseline by 110.9 ± 79.7 min (P < 0.001). Subjects demonstrated a faster timed sprint following sleep extension (16.2 ± 0.61 sec at baseline vs. 15.5 ± 0.54 sec at end of sleep extension, P < 0.001). Shooting accuracy improved, with free throw percentage increasing by 9% and 3-point field goal percentage increasing by 9.2% (P < 0.001). Mean PVT reaction time and Epworth Sleepiness Scale scores decreased following sleep extension (P < 0.01). POMS scores improved with increased vigor and decreased fatigue subscales (P < 0.001). Subjects also reported improved overall ratings of physical and mental well-being during practices and games.
CONCLUSIONS
Improvements in specific measures of basketball performance after sleep extension indicate that optimal sleep is likely beneficial in reaching peak athletic performance. |
Design of dual-polarized slotted waveguide array | A dual-polarized waveguide slotted array antenna is presented. A 45° slant-left polarization slot array is interlaced with a 45° slant-right polarization slot array within a common antenna structure to provide the capability of transmitting and receiving simultaneous dual orthogonal linear polarization states. The signals exhibiting dual orthogonal polarization states can have the same frequency range or different frequency bands. The design has a compact structure through the use of ridged waveguide slot radiators. Advantageously, it can be fed by a waveguide-implemented distribution network mounted to the rear of the antenna to broaden the bandwidth. A dual circularly polarized waveguide slotted array can also be realized by the use of a 3-dB coupler. |
A Survey on TCP Congestion Control Schemes in Guided Media and Unguided Media Communication | Transmission Control Protocol (TCP) is a widely used end-to-end transport protocol in the Internet. End-to-end delivery over wired (guided) as well as wireless (unguided) networks improves transport-layer (TCP) performance when random packet losses are negligible. This paper presents a comparative study of TCP congestion control principles and mechanisms. Modern implementations of TCP contain four intertwined algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery, in addition to the standard algorithms used in common implementations of TCP. This paper describes the performance characteristics of four representative TCP schemes, namely TCP Tahoe, Reno, New Reno and Vegas, under congested link conditions in both wired and wireless networks. |
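To make the four intertwined algorithms concrete, here is a toy congestion-window trace for Reno-style behaviour; the loss positions and parameters are illustrative only, and a real TCP state machine is considerably more involved.

```python
def reno_trace(rtts=40, init_ssthresh=32, loss_at=(20, 33)):
    # cwnd in MSS units. Slow start doubles cwnd each RTT up to ssthresh;
    # congestion avoidance then adds one MSS per RTT; on a (simulated)
    # triple-duplicate-ACK loss, Reno halves ssthresh and sets cwnd to it.
    cwnd, ssthresh, trace = 1.0, float(init_ssthresh), []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt in loss_at:
            ssthresh = max(cwnd / 2.0, 2.0)
            cwnd = ssthresh            # fast retransmit / fast recovery
        elif cwnd < ssthresh:
            cwnd *= 2.0                # slow start
        else:
            cwnd += 1.0                # congestion avoidance (AIMD)
    return trace

print(reno_trace())  # the classic sawtooth
```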
Effects of VDT and paper presentation on consumption and production of information: Psychological and physiological factors | Two experiments were performed to investigate the influence of VDT (video display terminal) and paper presentation of text on consumption of information (Study 1), measured in the form of convergent production, and production of information (Study 2), measured in the form of divergent production. The READ test of reading comprehension was used as the convergent task, whereas the "Headlines" test was used as the divergent task. Several other factors pertaining to performance were also studied, including the PANAS test of positive and negative affect, the STH test of stress, tiredness and hunger, the TRI (Technology Readiness Inventory) and the SE test of stress and energy. The results show that performance in the VDT presentation condition was inferior to that in the paper presentation condition for both consumption and production of information. Concomitantly, participants in the VDT presentation condition of the consumption of information study reported higher levels of experienced stress and tiredness, whereas the participants in the VDT presentation condition of the production of information study reported only slightly higher levels of stress. Although the results are discussed in both physiological and psychological terms, arguments are made that the incremental effects of VDT text presentation stem mainly from dual-task effects of fulfilling the assignment and working with the computer, resulting in a higher cognitive workload. |
The Canny Edge Detector Revisited | Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a gaussian. However, Canny’s work suffers from two problems. First, his derivation of localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can however be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112–133, 1992). In addition, if we also consider detecting blurred (or non-sharp) gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions. |
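The paper's multi-scale conclusion can be illustrated with an off-the-shelf Canny implementation; the sketch below simply unions detections across several Gaussian scales, which is one straightforward scheme and not the scale-selection procedure derived in the paper (the sigma values are illustrative).

```python
import numpy as np
from skimage import data, feature

image = data.camera().astype(float)   # built-in sample image

# Small sigma catches sharp edges; larger sigmas catch blurred ones.
edges = np.zeros(image.shape, dtype=bool)
for sigma in (1.0, 2.0, 4.0):
    edges |= feature.canny(image, sigma=sigma)
```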
The influence of personality traits on information seeking behaviour of students | The present study was undertaken with the objective of exploring the influence of the five personality dimensions on the information seeking behaviour of students in higher educational institutions. Information seeking behaviour is defined as the sum total of all those activities that are usually undertaken by students of higher education to collect, utilize and process any kind of information needed for their studies. Data were collected from 600 university students of the three broad disciplines of study from universities in the Eastern part of India (West Bengal). The tools used for the study were the General Information Schedule (GIS), the Information Seeking Behaviour Inventory (ISBI) and the NEO-FFI Personality Inventory. Product moment correlations were worked out between the scores on the ISBI and those on the NEO-FFI Personality Inventory. The findings indicated that the five personality traits are significantly correlated with all the dimensions of information seeking behaviour of the university students. |
Review of the traditional uses, phytochemistry, pharmacology and toxicology of giant fennel (Ferula communis L. subsp. communis) | Ferula communis L., subsp. communis, namely giant fennel, has extensively been used in traditional medicine for a wide range of ailments. Fresh plant materials, crude extracts and isolated components of F. communis have shown a wide spectrum of in vitro and in vivo pharmacological properties including antidiabetic, antimicrobial, antiproliferative, and cytotoxic activities. The present paper reviews the traditional uses, botany, phytochemistry, pharmacology and toxicology of F. communis in order to reveal its therapeutic potential and future research opportunities. A bibliographic literature search was conducted in different scientific databases and search engines, including Scopus, Cochrane Library, Embase, Google Scholar, PubMed, SciFinder, and Web of Science. Phytochemical studies have led to the isolation of different compounds such as sesquiterpenes from F. communis. This plant has two different chemotypes, the poisonous and the non-poisonous. Each chemotype is endowed with various constituents and different activities. The poisonous chemotype exhibits anticoagulant and cytotoxic activities with sesquiterpene coumarins as major constituents, while the non-poisonous one exhibits estrogenic and cytotoxic effects with daucane sesquiterpene esters as the main compounds. In addition, although various pharmacological properties have been reported for F. communis, the antimicrobial activities of the plant have been investigated in most studies. Studies revealed that F. communis exhibits different biological activities and contains various bioactive compounds. Although antibacterial and cytotoxic activities are the two main pharmacological effects of this plant, further studies should focus on the mechanisms underlying these actions, as well as on those biological activities that have been reported traditionally. |
A randomized, placebo-controlled trial of selenium supplementation in patients with type 2 diabetes: effects on glucose homeostasis, oxidative stress, and lipid profile. | Selenium is an antioxidant trace element. Patients with diabetes are shown to have increased oxidative stress together with decreased selenium concentrations. Whether raising serum selenium will improve blood glucose management in diabetes is largely unknown. In this randomized, double-blind placebo-controlled trial, we assessed the effects of selenium on blood glucose, lipid profile, and oxidative stress in 60 patients with type 2 diabetes. Selenium 200 µg/d or placebo was administered orally for 3 months. Serum concentrations of fasting plasma glucose, glycosylated hemoglobin A1c (HbA1c), insulin, and lipid profile, as well as ferric-reducing ability of plasma and thiobarbituric acid-reactive substances were determined in the fasting state at baseline and after 3 months. Mean (SD) serum selenium at baseline was 42.69 (29.47) µg/L and 47.11 (42.86) µg/L in selenium and placebo groups, respectively. At endpoint, selenium concentration reached to 71.98 (45.08) µg/L in selenium recipients compared with 45.38 (46.45) µg/L in placebo recipients (P<0.01). Between-group comparison showed that fasting plasma glucose, glycosylated hemoglobin A1c, and high-density lipoprotein cholesterol were statistically significantly higher in the selenium recipient arm. Other endpoints changes during the course of trial were not statistically different across the 2 treatment arms. This study suggests that selenium supplementation in patients with type 2 diabetes may be associated with adverse effects on blood glucose homeostasis, even when plasma selenium concentration is raised from deficient status to the optimal concentration of antioxidant activity. Until results of further studies become available, indiscriminate use of selenium supplements in patients with type 2 diabetes warrants caution. |
Near-Linear Unconditionally-Secure Multiparty Computation with a Dishonest Minority | In the setting of unconditionally-secure MPC, where dishonest players are unbounded and no cryptographic assumptions are used, it has been known since the 1980s that an honest majority of players is both necessary and sufficient to achieve privacy and correctness, assuming secure point-to-point and broadcast channels. The main open question that was left is to establish the exact communication complexity. We settle the above question by showing an unconditionally-secure MPC protocol, secure against a dishonest minority of malicious players, that matches the communication complexity of the best known MPC protocol in the honest-but-curious setting. More specifically, we present a new n-player MPC protocol that is secure against a computationally-unbounded malicious adversary that can adaptively corrupt up to t < n/2 of the players. For polynomially-large binary circuits that are not too unshaped, our protocol has an amortized communication complexity of O(n log n + κ/n^const) bits per multiplication (i.e. AND) gate, where κ denotes the security parameter and const ∈ Z is an arbitrary non-negative constant. This improves on the previously most efficient protocol with the same security guarantee, which offers an amortized communication complexity of O(nκ) bits per multiplication gate. For any κ polynomial in n, the amortized communication complexity of our protocol matches the O(n log n) bit communication complexity of the best known MPC protocol with passive security. We introduce several novel techniques that are of independent interest and that we believe will have wider applicability. One is a novel idea of computing authentication tags by means of a mini MPC, which allows us to avoid expensive double-sharings; the other is a batch-wise multiplication verification that allows us to speed up Beaver's "multiplication triples". |
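For readers unfamiliar with the Beaver "multiplication triples" mentioned above, here is a toy illustration over additive secret sharing; it assumes a trusted dealer and an honest-but-curious setting, and omits the paper's authentication tags and batch-wise verification entirely.

```python
import random

P = 2**61 - 1  # prime modulus of the toy field

def share(x, n=3):
    # Additive secret sharing of x among n parties.
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Given a preprocessed triple c = a*b, parties open d = x - a and
    # e = y - b (which leak nothing, since a and b are uniform), then
    # locally compute shares of z = c + d*b + e*a + d*e = x*y.
    d = reconstruct([(xs - as_) % P for xs, as_ in zip(x_sh, a_sh)])
    e = reconstruct([(ys - bs) % P for ys, bs in zip(y_sh, b_sh)])
    z_sh = [(cs + d * bs + e * as_) % P
            for cs, as_, bs in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P   # public constant added by one party
    return z_sh

a, b = random.randrange(P), random.randrange(P)   # dealer's triple
x, y = 7, 11
z_sh = beaver_multiply(share(x), share(y),
                       share(a), share(b), share(a * b % P))
assert reconstruct(z_sh) == (x * y) % P
```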
XRCE: Hybrid Classification for Aspect-based Sentiment Analysis | In this paper, we present the system we have developed for the SemEval-2014 Task 4, dedicated to Aspect-Based Sentiment Analysis. The system is based on a robust parser that provides information to feed different classifiers with linguistic features dedicated to aspect category and aspect category polarity classification. We mainly present the work which has been done on the restaurant domain for the four subtasks: aspect term and category detection, and aspect term and category polarity. |
A controllable hydrothermal fabrication of hierarchical ZnO microstructures and its gas sensing properties | In this paper, rod-flower-like ZnO hierarchical microstructures with high uniformity are synthesized from the thermal decomposition of [Zn(NH3)4]2+ precursors, which are prepared via a surfactant-assisted hydrothermal process. The as-synthesized hierarchical ZnO microstructure is assembled from columnar nanorods, and the measured length-to-diameter ratio of the nanorods is about 20. The morphology of the hierarchical ZnO microstructure can be tailored by varying hydrothermal conditions, e.g., hydrothermal temperature, reaction time, and concentration of Zn2+ and zinc salts. Moreover, based on the experimental results, the possible reaction mechanism for the growth of the as-synthesized hierarchical ZnO microstructures is also discussed in detail, and the Zn2+ concentration was found to play a crucial role in the nucleation and formation of the rod-flower-like microstructures. In addition, gas sensing tests demonstrate that sensors based on the hierarchical ZnO microstructures exhibit excellent gas sensing properties due to their unique architecture. |
Best Practices for Business and Systems Analysis in Projects Conforming to Enterprise Architecture | This paper aims to identify best practices for performing business and systems analysis in projects that are required to comply with Enterprise Architecture. We apply two qualitative research methods to study real-life projects conforming to architecture at Statistics Netherlands. First, a Canonical Action Research approach is applied to participate in two business process redesign projects. Second, we use Focus Group interviews to elicit knowledge about carrying out projects conforming to architecture. Based on this empirical research we present seven observations and ten best practices. The best practices point to the fact that project conformance is not only the responsibility of project members, but also of enterprise architects. Considering four levels of best practices (good idea, good practice, local best practice, industry best practice), we argue that our guidelines are located at the second (good practice) level. More research is required to prove or falsify them in other settings. |
ICrafter: A Service Framework for Ubiquitous Computing Environments | In this paper, we propose ICrafter, a framework for services and their user interfaces in a class of ubiquitous computing environments. The chief objective of ICrafter is to let users flexibly interact with the services in their environment using a variety of modalities and input devices. We extend existing service frameworks in three ways. First, to offload services and user input devices, ICrafter provides infrastructure support for UI selection, generation, and adaptation. Second, ICrafter allows UIs to be associated with service patterns for on-the-fly aggregation of services. Finally, ICrafter facilitates the design of service UIs that are portable but still reflect the context of the local environment. In addition, we also focus on the system properties such as incremental deployability and robustness that are critical for ubiquitous computing environments. We describe the goals and architecture of ICrafter, a prototype implementation that validates its design, and the key lessons learnt from our experience. |
Compact two-layer slot array antenna with SIW for 60GHz wireless applications | In a variety of microwave and millimeter-wave applications where high performance antennas are required, waveguide slot arrays have received considerable attention due to their high aperture efficiency, low side lobe levels, and low cross polarization. Resonant slot arrays usually suffer from narrow bandwidth and high cost due to high precision required in manufacturing. Furthermore, because of using standard rectangular waveguides, the antenna array is thick and heavy and is not suitable for monolithic integration with high frequency printed circuits. |
Racial differences in microbleed prevalence in primary intracerebral hemorrhage. | BACKGROUND
Primary intracerebral hemorrhage is two to three times more common in many racial populations, including black patients. Previous studies have shown that microbleeds, identified on gradient echo MRI (GRE), are present in 50-80% of patients with primary ICH. The objective of this study was to compare, by race, the rates, risk factors, and topography of microbleeds in patients hospitalized for primary ICH.
METHODS
Patients diagnosed with primary ICH at two metropolitan stroke centers were included. Clinical and neuroimaging data were recorded for each patient. Analyses were performed to compare baseline characteristics as well as imaging findings by race.
RESULTS
A total of 87 patients met inclusion criteria (42 black subjects, 45 white subjects). The black cohort was younger (p < 0.001), and had a greater rate of hypertension (p = 0.001), but not other vascular risk factors. Microbleeds were more prevalent in the black population, with 74% of blacks having one or more microbleeds compared to 42% of whites (p = 0.005). The black population also tended to have a greater frequency of microbleeds in multiple territories than the white population (38% vs 22%, p = 0.106). When adjusting for age, hypertension, and alcohol use, race was an independent predictor of microbleeds (OR 3.308, 95% CI 1.144-9.571, p = 0.027).
CONCLUSIONS
These pilot data suggest that significant racial differences exist in the frequency and topography of microbleeds in patients with primary ICH. Microbleeds may be an important emerging imaging biomarker with the potential to provide insights into ICH pathophysiology, prognosis, and disease progression, as well as possible therapeutic strategies, particularly in medically underserved populations. |
Consumption of Dairy Products and Cognitive Functioning: Findings from the SU.VI.MAX 2 Study. | OBJECTIVES
Research concerning the link between dairy product intake and cognition is scant, while experimental studies suggest links through various biological mechanisms. This study's objective was to examine the cross-time associations of total and specific dairy product consumption with cognitive performance in aging adults. We also explored compliance with dairy intake recommendations in France.
DESIGN
The study was based on the «Supplémentation en Vitamines et Minéraux Antioxydants» randomized trial (SU.VI.MAX, 1994-2002) and the SU.VI.MAX 2 observational follow-up study (2007-2009).
SETTING
A general-population cohort in France.
PARTICIPANTS
N=3,076 participants included in both the SU.VI.MAX and SU.VI.MAX 2 studies.
MEASUREMENTS
Dairy product consumption was estimated using repeated 24h records (1994-1996; mean=10 records, SD=3). Cognitive performance was assessed by neuropsychologists after an average of 13 years post-baseline via a battery of six validated tests. Mean age at the time of the cognitive function evaluation was 65.5 (SD=4.6) years. Principal component analysis revealed factors for verbal memory and working memory. Associations of energy-adjusted dairy product consumption and compliance with the respective dietary guidelines with subsequent cognitive impairment were examined using ANCOVA, providing mean differences (95% confidence intervals, CI) according to tertiles (T), adjusted for confounders including overall dietary patterns.
RESULTS
Total dairy product consumption was not associated with cognitive function. However, milk intake was negatively associated with verbal memory performance: mean difference T3 versus T1= -0.99 (-1.83, -0.15). Among women, consuming more than the recommended amount of dairy was negatively associated with working memory performance: excess versus adequate = -1.52 (-2.93, -0.11).
CONCLUSION
Our results indicate that dairy products consumption and especially compliance with dietary guidelines regarding dairy product intake are differentially associated with performance in specific cognitive domains after a comprehensive adjustment for lifestyle factors, health status markers and dietary patterns. Further longitudinal research is needed given the limited data available. |
A Wideband Dual-Mode SIW Cavity-Backed Triangular-Complimentary-Split-Ring-Slot (TCSRS) Antenna | A cavity-backed triangular-complimentary-split-ring-slot (TCSRS) antenna based on substrate integrated waveguide (SIW) is proposed in this communication. The proposed antenna element is designed and implemented at 28 and 45 GHz for the fifth generation (5G) of wireless communications. The principle of the proposed antenna element is investigated first; then arrays with two and four elements are designed for high-gain operation. Antenna prototypes, along with their arrays, were fabricated using a standard printed circuit board (PCB) process at both frequencies, 28 and 45 GHz. Measured results show that impedance bandwidths of 16.67% at 28 GHz and 22.2% at 45 GHz are achieved, while maintaining the same substrate height at both frequencies. The measured peak gains of the 2 × 2 antenna array at 30 and 50 GHz are 13.5 and 14.4 dBi, respectively. |
UriSed 3 and UX-2000 automated urine sediment analyzers vs manual microscopic method: A comparative performance analysis. | BACKGROUND
Fully automated urine analyzers now play an important role in routine urinalysis in most laboratories. The recently introduced UriSed 3 is a new automated digital imaging urine sediment analyzer with a phase contrast feature. The aim of this study was to compare the performance of the UriSed 3 and UX-2000 automated urine sediment analyzers with each other and with the results of the manual microscopic method.
METHODS
Two hundred seventy-seven (277) samples of leftover fresh urine from our hospital's central laboratory were evaluated by two automated urine sediment analyzers, the UriSed 3 and the UX-2000. The results of urine sediment analysis were compared between the two automated analyzers and against the results of the manual microscopic method.
RESULTS
Both devices demonstrated excellent agreement for quantitative measurement of red blood cells and white blood cells. UX-2000 had a lower coefficient correlation and demonstrated slightly lower agreement for squamous epithelial cells. Regarding semiquantitative analysis, both machines demonstrated very good concordance, with all applicable rates within one grade difference of the other machine. UriSed 3 had higher sensitivity for small round cells, while UX-2000 showed greater sensitivity for detecting bacteria and hyaline casts. UriSed 3 demonstrated slightly better specificity, especially in the detection of hyaline and pathological casts.
CONCLUSIONS
Both instruments had similar performance for red and white blood cell measurement. UriSed 3 was more reliable for measuring squamous epithelial cells and small round cells, while the UX-2000 was more accurate for detecting bacteria and hyaline casts. |
Functional anatomy of thalamus and basal ganglia | Thalamus. The human thalamus is a nuclear complex located in the diencephalon and comprising four parts (the hypothalamus, the epithalamus, the ventral thalamus, and the dorsal thalamus). The thalamus is a relay centre subserving both sensory and motor mechanisms. Thalamic nuclei (50-60 nuclei) project to one or a few well-defined cortical areas. Multiple cortical areas receive afferents from a single thalamic nucleus and send back information to different thalamic nuclei. The corticofugal projection provides positive feedback to the "correct" input, while at the same time suppressing irrelevant information. Topographical organisation of the thalamic afferents and efferents is contralateral, and the lateralisation of thalamic functions affects both sensory and motor aspects. Symptoms of lesions located in the thalamus are closely related to the function of the areas involved. A thalamic infarction or haemorrhage can produce somatosensory disturbances and/or central pain in the opposite hemibody, or an analgesic or purely algesic thalamic syndrome characterised by contralateral anaesthesia (or hypaesthesia), contralateral weakness, ataxia and, often, persistent spontaneous pain. Basal ganglia. The basal ganglia form a major centre in the complex extrapyramidal motor system, as opposed to the pyramidal motor system (corticobulbar and corticospinal pathways). The basal ganglia are involved in many neuronal pathways having emotional, motivational, associative and cognitive functions as well. The striatum (caudate nucleus, putamen and nucleus accumbens) receives inputs from all cortical areas and, through the thalamus, projects principally to frontal lobe areas (prefrontal, premotor and supplementary motor areas) which are concerned with motor planning. These circuits: (i) have an important regulatory influence on the cortex, providing information for both automatic and voluntary motor responses to the pyramidal system; (ii) play a role in predicting future events, reinforcing wanted behaviour and suppressing unwanted behaviour; and (iii) are involved in shifting attentional sets and in both high-order processes of movement initiation and spatial working memory. Basal ganglia-thalamo-cortical circuits maintain the somatotopic organisation of movement-related neurons throughout the circuit. These circuits reveal functional subdivisions of the oculomotor, prefrontal and cingulate circuits, which play an important role in attention, learning and potentiating behaviour-guiding rules. Involvement of the basal ganglia is related to involuntary and stereotyped movements or paucity of movements without involvement of voluntary motor functions, as in Parkinson's disease, Wilson's disease, progressive supranuclear palsy or Huntington's disease. The symptoms differ with the location of the lesion. The commonest disturbances in basal ganglia lesions are abulia (apathy with loss of initiative and of spontaneous thought and emotional responses) and dystonia, which become manifest as behavioural and motor disturbances, respectively. |
Visual Data-Mining Techniques | Never before in history have data been generated at such high volumes as they are today. Exploring and analyzing these vast volumes of data has become increasingly difficult. Information visualization and visual data mining can help to deal with the flood of information. The advantage of visual data exploration is that the user is directly involved in the data-mining process. A large number of information visualization techniques have been developed over the last few years to support the exploration of large datasets. In this chapter, we provide an overview of information visualization and visual data-mining techniques and illustrate them with a few examples. The progress made in hardware technology allows today's computer systems to store very large amounts of data. Researchers at the University of California, Berkeley, estimate that about 1 exabyte (1 million terabytes) of data is generated every year, of which a large portion is available in digital form. This means that in the next three years more data will be generated than in all of human history to date. The data is often automatically recorded via sensors and monitoring systems; even simple transactions of everyday life, such as paying by credit card or using the telephone, are typically recorded by computers. Usually many parameters are recorded, resulting in data of high dimensionality. The data is collected because people believe it is a potential source of valuable information, providing a competitive advantage (at some point). Finding the valuable information hidden in the data, however, is a difficult task. With today's data-management systems it is possible to view only small portions of the data. If the data is presented textually, the amount that can be displayed is in the range of about one hundred data items, a drop in the ocean when dealing with datasets containing millions of items. With no way to adequately explore the large amounts of data that have been collected for their potential usefulness, the data become useless and the databases become data 'dumps.' Information visualization focuses on datasets lacking inherent 2D or 3D semantics and therefore also lacking a standard mapping of the abstract data onto physical screen space. There are a number of well-known techniques for visualizing such datasets, such as x-y plots, line plots, and histograms. These techniques are useful for data exploration but are limited to … |
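As a concrete illustration of the basic techniques the chapter names (x-y plots, line plots, and histograms), here is a small matplotlib sketch over synthetic data; the dataset, column count, and figure layout are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 4))  # 1000 records, 4 recorded parameters

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].scatter(data[:, 0], data[:, 1], s=4)  # x-y plot of two dimensions
axes[0].set_title("x-y plot")
axes[1].plot(np.sort(data[:, 2]))             # line plot of one dimension
axes[1].set_title("line plot")
axes[2].hist(data[:, 3], bins=30)             # histogram of one dimension
axes[2].set_title("histogram")
plt.tight_layout()
plt.show()
```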
A Practical q-Gram Index for Text Retrieval Allowing Errors | We propose an indexing technique for approximate text searching that is practical, powerful, and especially optimized for natural language text. Unlike other indices of this kind, it is able to retrieve any string that approximately matches the search pattern, not only words. Every text substring of a fixed length q is stored in the index, together with pointers to all the text positions where it appears. The search pattern is partitioned into pieces which are searched in the index, and all their occurrences in the text are verified for a complete match. To reduce space requirements, pointers to blocks instead of exact positions can be used, at the cost of increased querying time. We design an algorithm to optimize the partition of the pattern into pieces so that the total number of verifications is minimized. This is especially well suited to natural language texts, and makes it possible to know in advance the expected cost of the search and the expected relevance of the query to the user. We show experimentally the building time, space requirements, and querying time of our index, finding that it is a practical alternative for text retrieval. Retrieval times are reduced to between 10% and 60% of those of the best on-line algorithm. |
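The core of the scheme is easy to sketch. The following Python code is a simplified illustration rather than the paper's algorithm (the optimized pattern partition and block pointers are omitted): it builds a q-gram index, splits the pattern into k+1 pieces so that, by the pigeonhole principle, at least one piece must match exactly if at most k errors occur, and verifies every candidate with a semi-global edit-distance check.

```python
from collections import defaultdict

def build_qgram_index(text, q):
    """Store every length-q substring with all its text positions."""
    index = defaultdict(list)
    for i in range(len(text) - q + 1):
        index[text[i:i + q]].append(i)
    return index

def occurs_within(pattern, window, k):
    """Semi-global DP: does pattern occur somewhere in window with <= k errors?"""
    prev = [0] * (len(window) + 1)          # free start position in window
    for i, pc in enumerate(pattern, 1):
        cur = [i]
        for j, wc in enumerate(window, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (pc != wc)))
        prev = cur
    return min(prev) <= k                   # free end position in window

def approx_search(text, index, q, pattern, k):
    """Pigeonhole: with <= k errors, one of the k+1 length-q pieces is exact."""
    hits = set()
    for piece_no in range(k + 1):
        off = piece_no * q
        for pos in index.get(pattern[off:off + q], []):
            start = max(0, pos - off - k)   # candidate window around the hit
            window = text[start:start + len(pattern) + 2 * k]
            if occurs_within(pattern, window, k):  # verify for a complete match
                hits.add(start)
    return sorted(hits)

text = "approximate string matching over natural language text"
index = build_qgram_index(text, q=3)
print(approx_search(text, index, q=3, pattern="natural", k=1))  # exact occurrence
print(approx_search(text, index, q=3, pattern="natjral", k=1))  # one error, still found
```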
Realistic eavesdropping attacks on computer displays with low-cost and mobile receiver system | It is known that computer display images can be reconstructed from a display's radio-frequency emanations. This poses a serious information-leakage threat for information security. Recently published eavesdropping systems are very expensive and not portable, and all published eavesdropping attacks have been demonstrated at distances of up to 10 meters from the display. In this work, a low-cost, all-in-one mobile receiver is used to capture emanations from a long distance, approximately 50 meters from the display, and to reconstruct the display images. In our eavesdropping scenario, the receiver system is in one building and the target display is in another. The experiments show that fonts of 26 points and larger can be read easily from images reconstructed from emanations captured approximately 50 meters from the target display. For comparison, the new system can also capture emanations 3 meters from the target display in an office environment; in this latter experiment, fonts of 9 points and larger can be read easily. |
Electromagnetic Inspection of Carbon-Fiber-Reinforced Polymer Composites With Coupled Spiral Inductors | This paper presents a new type of electromagnetic sensor for nondestructive testing of carbon-fiber-reinforced polymer composites. The sensor utilizes coupled planar spiral inductors operating typically in the range of 10-500 MHz. The method proposed here shows some similarity to the eddy-current technique but, as will be shown, the principles of operation are different, as the sensitivity to defects is mostly due to the magnetic field components tangential to the surface of the material. It is shown that the method is applicable to 3-D inspection of carbon-fiber-reinforced composites widely employed in the aerospace industry. |
Continual Learning with Deep Generative Replay | Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual-model architecture consisting of a deep generative model (“generator”) and a task-solving model (“solver”). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks. |
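The replay mechanism itself can be shown in a few lines. The sketch below substitutes toy components for the deep models (a GaussianMixture stands in for the "generator" and an SGDClassifier for the "solver"), so it only illustrates the interleaving of generated pseudo-data with new-task data, not the paper's GAN-based implementation; the tasks and all sizes are invented.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def make_task(center):
    """Toy two-class task: a pair of Gaussian blobs around `center`."""
    X = np.vstack([rng.normal(center, 1.0, (200, 2)),
                   rng.normal(center + 3.0, 1.0, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)
    return X, y

solver = SGDClassifier(loss="log_loss")  # stands in for the task solver
generator = None                         # stands in for the deep generative model

for center in [0.0, 10.0]:               # two tasks arriving sequentially
    X, y = make_task(center)
    if generator is not None:
        # Generative replay: sample pseudo-inputs from the old generator,
        # label them with the old solver, and interleave with the new data.
        X_replay, _ = generator.sample(400)
        y_replay = solver.predict(X_replay)
        X = np.vstack([X, X_replay])
        y = np.concatenate([y, y_replay])
    order = rng.permutation(len(X))
    solver.partial_fit(X[order], y[order], classes=[0, 1])
    # Refresh the generator on the current (real + replayed) inputs.
    generator = GaussianMixture(n_components=4, random_state=0).fit(X)
```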
Modulation Technique for Single-Phase Transformerless Photovoltaic Inverters With Reactive Power Capability | This paper sets out the principles for generating reactive power in single-phase transformerless photovoltaic (PV) inverters. Two mainstream, widely adopted PV inverter topologies, the H5 and the HERIC, are explored. With conventional modulation techniques, reactive power cannot be realized in the H5 and HERIC because of the absence of a freewheeling path in the negative power region. Based on this study, modulation techniques are proposed to provide a bidirectional current path during the freewheeling period. With the proposed modulation techniques, reactive power control is achieved in H5 and HERIC inverters without any modification of the converter structures. The performance of the proposed modulation techniques is studied via MATLAB simulation and further validated with experimental results. |
Adversarially Regularized Graph Autoencoder for Graph Embedding | Graph embedding is an effective method to represent graph data in a low-dimensional space for graph analytics. Most existing embedding algorithms typically focus on preserving the topological structure or minimizing the reconstruction error of the graph data, but they have mostly ignored the data distribution of the latent codes, which often results in inferior embeddings on real-world graph data. In this paper, we propose a novel adversarial graph embedding framework. The framework encodes the topological structure and node content of a graph into a compact representation, on which a decoder is trained to reconstruct the graph structure. Furthermore, the latent representation is enforced to match a prior distribution via an adversarial training scheme. To learn a robust embedding, two variants of adversarial approaches, the adversarially regularized graph autoencoder (ARGA) and the adversarially regularized variational graph autoencoder (ARVGA), are developed. Experimental studies on real-world graphs validate our design and demonstrate that our algorithms outperform baselines by a wide margin in link prediction, graph clustering, and graph visualization tasks. |
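A compact PyTorch sketch of the ARGA idea follows: a one-layer GCN-style encoder, an inner-product decoder reconstructing the adjacency matrix, and a discriminator pushing the embeddings toward a Gaussian prior. The toy graph, all sizes, and the hyperparameters are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

n, f, d = 50, 16, 8                        # nodes, features, embedding dim
A = (torch.rand(n, n) < 0.1).float()       # random toy graph
A = ((A + A.t()) > 0).float().fill_diagonal_(1.0)  # symmetrize, add self-loops
X = torch.randn(n, f)                      # toy node features
deg_inv_sqrt = A.sum(1).pow(-0.5)
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]  # normalized adjacency

encoder = nn.Linear(f, d)                  # one-layer GCN: Z = A_hat @ X @ W
disc = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
opt_ae = torch.optim.Adam(encoder.parameters(), lr=1e-2)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for epoch in range(200):
    Z = A_hat @ encoder(X)
    # 1) discriminator step: prior samples are "real", embeddings "fake"
    prior = torch.randn(n, d)
    d_loss = bce(disc(prior), torch.ones(n, 1)) + \
             bce(disc(Z.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) encoder step: reconstruct A and fool the discriminator
    A_rec = Z @ Z.t()                      # inner-product decoder (logits)
    rec_loss = bce(A_rec, A)
    adv_loss = bce(disc(Z), torch.ones(n, 1))
    opt_ae.zero_grad(); (rec_loss + adv_loss).backward(); opt_ae.step()
```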
Augmenting reality for augmented reality | elements that can be integrated with camera-based AR systems but that are independently meaningful objects in their own right. We argue that this new wave of physically grounded AR technologies constitutes the first steps toward a hybridized digital/physical future that can transform our world. The technology space of augmented reality (AR), sometimes characterized using the more general term mixed reality, is one of the more exciting frontiers to emerge from HCI research in recent years [3]. However, like many frontiers, it is chaotic, overhyped, and misunderstood. In the absence of existing best practices to guide development of this new design space, a number of competing visions have taken hold, each of which seeks to colonize the future of our digitally augmented world. Microsoft's version of this future is embodied by the HoloLens, an untethered standalone visor running Windows 10 that superimposes translucent "holographic" overlays into the center of a user's field of view. HoloLens feels like the big brother of Google's Glass project, which traded computing power and graphical fidelity for mobility. Glass was designed to be worn in daily life, a decision that didn't factor in the social consequences of wearing a highly visible digital surveillance device. There are two competing narratives for the future of computationally augmented spaces. On the one hand, we have the Internet of Things [1], where the narrative is one of making our environments more aware of us and of themselves, and generally making everything "smarter" through embedded computation, sensing, and actuation. On the other hand, we have current approaches to augmented or mixed reality, in which the space remains unchanged and instead we hack our perception of the space by superimposing a layer of media between us and the world [2,3]. In this article we present examples of three projects that seek to merge these two approaches by creating and fabricating playful material |
Eisenstein, Shub and the Gender of the Author as Producer | biography to be written in or translated into English.1 Numerous shorter biographical sketches have been published and Eisenstein's sometimes enigmatic personal life, his very public career, and his unfinished as well as completed creative projects continue to fascinate readers and viewers of his films. This fascination has been renewed as, over the last two decades in particular, hitherto inaccessible archival material has gradually become available. The existence of four Eisenstein biographies in English suggests that what is known about the great Soviet director lends itself readily to dramatic narrativisation, and that there is enough interest in him as an individual creator to periodically justify this enterprise. The Eisenstein biographies are one tangible manifestation of what Ian Christie refers to at the beginning of his 'Introduction' to the collection of essays Eisenstein Rediscovered: the '"aura" that surrounds him'.2 Christie goes on to discuss new film historical work in the 1980s and 1990s which has substantially redefined the terms of that 'aura'. This work fully acknowledges the diversity of Eisenstein's interests and activities, and situates him within a 'new, more sophisticated totality' which takes into account pre- as well as post-Revolutionary cultural and intellectual influences.3 The net result of this new historical work has been to enhance rather than displace Eisenstein's 'aura', but the brief allusion to Walter Benjamin should give pause for thought. Eisenstein emerged from a Soviet avant-garde cultural context which questioned traditional notions of authorship and influenced Benjamin's critique of the individualised creation of 'auratic' artworks. The Western European and North American canonisation of Eisenstein as a seminal filmmaker and theorist (a reception history which has yet to be thoroughly explored) has obscured the fact that, as far as film and media authorship is concerned, Soviet avant-garde film culture bequeathed a divided legacy to Western scholarship.4 Discussions of Eisenstein tend to take place quite separately from considerations of the radical critique and reformulation of traditional notions of authorship (cinematic and otherwise) proposed by certain Soviet avant-garde theorists in the 1920s. This debate around authorship passed into the intellectual traditions of Western Marxism, principally through the work of writers such as Walter Benjamin, in 'The Author as Producer' (1934), and informs more recent reformulations such as Hans Magnus Enzensberger's 'Constituents of a Theory of the Media' (1970).5 Although he has now been severed from this particular historical context, Eisenstein's emergence as one of the key reference points within world film culture was intimately bound up with this Soviet avant-garde debate about authorship which fed into Western Marxism. |
Online multi-parameter estimation of interior permanent magnet motor drives with finite control set model predictive control | This study presents an online multiparameter estimation scheme for interior permanent magnet motor drives that exploits the switching ripple of finite-control-set (FCS) model predictive control (MPC). Combinations of two, three, and four parameters are analysed for observability at different operating states. Most of the combinations are rank-deficient without persistent excitation (PE) of the system, e.g. by signal injection. This study shows that the high-frequency current ripple produced by MPC with an FCS is sufficient to create PE in the system. The study also analyses parameter coupling in the estimation, which results in convergence to wrong values, and proposes a decoupling technique. The observability conditions for all the combinations are experimentally validated. Finally, full parameter estimation together with the decoupling technique is tested at different operating conditions. |
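To make the estimation side concrete, here is a generic recursive-least-squares sketch of the kind of online estimator such schemes build on, driven by a standard interior-PM dq voltage model. The regressor layout, the random ripple standing in for the FCS-MPC switching, and all numbers are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.05, 1.2e-3, 2.0e-3, 0.1])  # [R, Ld, Lq, psi] ("true" values)
theta = np.zeros(4)                                  # online estimate
P = np.eye(4) * 1e3                                  # RLS covariance
lam, dt, w = 0.999, 1e-4, 2 * np.pi * 50             # forgetting, sample time, elec. speed

def rls_update(theta, P, phi, y, lam):
    """One recursive-least-squares step for the model y = phi @ theta."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / lam
    return theta, P

i_d = i_q = 0.0
for _ in range(5000):
    # Currents with superimposed ripple: the persistent excitation that
    # the FCS-MPC switching supplies in the real drive.
    i_d_new = 5.0 + 0.3 * rng.standard_normal()
    i_q_new = 10.0 + 0.3 * rng.standard_normal()
    did, diq = (i_d_new - i_d) / dt, (i_q_new - i_q) / dt
    # dq voltage equations: v_d = R*i_d + Ld*did - w*Lq*i_q
    #                       v_q = R*i_q + w*Ld*i_d + Lq*diq + w*psi
    phi_d = np.array([i_d_new, did, -w * i_q_new, 0.0])
    phi_q = np.array([i_q_new, w * i_d_new, diq, w])
    for phi in (phi_d, phi_q):
        theta, P = rls_update(theta, P, phi, phi @ theta_true, lam)
    i_d, i_q = i_d_new, i_q_new

print("estimate:", theta, "true:", theta_true)
```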
Does culture influence what and how we think? Effects of priming individualism and collectivism. | Do differences in individualism and collectivism influence values, self-concept content, relational assumptions, and cognitive style? On the one hand, the cross-national literature provides an impressively consistent picture of the predicted systematic differences; on the other hand, the nature of the evidence is inconclusive. Cross-national evidence is insufficient to argue for a causal process, and comparative data cannot specify if effects are due to both individualism and collectivism, only individualism, only collectivism, or other factors (including other aspects of culture). To address these issues, the authors conducted a meta-analysis of the individualism and collectivism priming literature, with follow-up moderator analyses. Effect sizes were moderate for relationality and cognition, small for self-concept and values, robust across priming methods and dependent variables, and consistent in direction and size with cross-national effects. Results lend support to a situated model of culture in which cross-national differences are not static but dynamically consistent due to the chronic and moment-to-moment salience of individualism and collectivism. Examination of the unique effects of individualism and collectivism versus other cultural factors (e.g., honor, power) awaits the availability of research that primes these factors. |
Space Diet: Daily Mealworm (Tenebrio molitor) Harvest on a Multigenerational Spaceship | It has been proposed in recent years that insects are a viable food source that should be seriously considered for the future. Their high nutritional value, small size and rapid reproduction are also promising for space agriculture. In this paper, a possible future multigenerational spaceship with sufficient interior room for an insect breeding room is considered. The insect of interest here is the mealworm (Tenebrio molitor), which has a very high protein content. The daily mealworm harvest aboard such a ship that satisfies the protein requirement of a stable crew population of 160 is approximated here to be ~162,000. There is also qualitative discussion of other considerations for a spaceship mealworm colony. Introduction: Insects are included in the regular diet of approximately 2 billion people, with a menu of close to 2000 edible species [1]. Yet insect-eating, or entomophagy, has little popularity in Western cultures, perhaps because the practice is perceived as rating higher in gross factor than in comfort factor. With human population growth and corresponding nutrition demands becoming an increasing concern, entomophagy has been proposed as a future sustainable food source [1]. Insect-eating is perhaps also a potential solution to crew nutrition aboard spaceships, especially for long-duration travel [2]. Aboard spacecraft intended for long-duration travel, sourcing food from stored stocks becomes less viable and there is an increased need for growing and harvesting food on board, with the application of bioregenerative models. However, a compromise must be struck, because space on the craft is limited and the energy consumed by agricultural practices is constrained by the energy supply of the isolated ship. The insect considered here as a potential staple of the space crew diet is the mealworm, the edible larval stage of Tenebrio molitor, a species of darkling beetle [3, 4]. The context for the paper's calculation is drawn from a previously proposed multigenerational spaceship travelling for 200 years. Estimates based on anthropology and genetic studies have found that a minimum of 160 crew will maintain a stable, viable population [5]. The spaceship interior could be designed to include a mealworm breeding room, whose main purpose would be to satisfy the protein demands of the crew. Here, the daily protein demand of the crew is estimated and the necessary daily output of the spaceship mealworm 'farm' is calculated. Daily Protein Requirements: According to the Institute of Medicine, the recommended daily protein intakes for men and women in the age range 17-90 are 56 and 46 grams/day, respectively [6-7]. Assuming equal numbers of men and women at any time, the 160-person population comprises 80 men and 80 women. The total daily protein intake of the crew is shown in Table 1. Daily Mealworm Harvest: Roasted mealworms have a protein content of 55.43 grams per 100 grams [8]. This is higher than meats such as chicken, pork and beef, which all have protein contents in the range of 30-40 g per 100 g of meat [9]. |
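The headline figure can be reproduced from the numbers stated above. In the worked sketch below, the per-larva roasted mass (~0.09 g) is an assumption chosen to be consistent with the paper's ~162,000 estimate; it is not given in the text.

```python
men = women = 80                                  # stable crew of 160, half each
protein_g_day = men * 56 + women * 46             # Institute of Medicine RDAs -> 8160 g
protein_per_100g = 55.43                          # roasted mealworm [8]
harvest_g_day = protein_g_day / (protein_per_100g / 100)  # ~14.7 kg of mealworms/day
g_per_larva = 0.09                                # ASSUMED roasted mass per larva
larvae_per_day = harvest_g_day / g_per_larva
print(f"{protein_g_day} g protein/day -> {harvest_g_day/1000:.1f} kg/day "
      f"= about {larvae_per_day:,.0f} larvae/day")  # ~163,000, near the paper's ~162,000
```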
Application of statistical process control in healthcare improvement: systematic review. | OBJECTIVE
To systematically review the literature regarding how statistical process control--with control charts as a core tool--has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers and facilitating factors related to such application.
DATA SOURCES
Original articles found in relevant databases, including Web of Science and Medline, covering the period 1966 to June 2004.
STUDY SELECTION
From 311 articles, 57 empirical studies, published between 1990 and 2004, met the inclusion criteria.
METHODS
A standardised data abstraction form was used for extracting data relevant to the review questions, and the data were analysed thematically.
RESULTS
Statistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 facilitating factors, all fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example, asthma or diabetes mellitus to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task. This review catalogues 11 approaches to such smart application, including risk adjustment and data stratification.
CONCLUSION
Statistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health. |
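For readers unfamiliar with the core tool, the following sketch shows a bare-bones Shewhart individuals chart with the usual 3-sigma control limits on invented data. Real healthcare applications need the risk adjustment and stratification the review catalogues, and a common chart variant estimates sigma from the average moving range rather than the sample standard deviation used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
waits = rng.normal(30, 5, 40)   # e.g. daily mean door-to-doctor time (minutes)
waits[30:] += 12                # a sustained process change to be detected

baseline = waits[:20]           # establish limits from a stable baseline period
center = baseline.mean()
sigma = baseline.std(ddof=1)    # simplification; moving-range sigma is common
ucl, lcl = center + 3 * sigma, center - 3 * sigma

for day, x in enumerate(waits, 1):
    flag = "  <-- out of control" if not (lcl <= x <= ucl) else ""
    print(f"day {day:2d}: {x:5.1f}{flag}")
```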
History and State of the Art in Commercial Electric Ship Propulsion, Integrated Power Systems, and Future Trends | Electric propulsion has emerged as one of the most efficient propulsion arrangements for several vessel types over the last decades. Although examples can be found in history at the end of the 19th century and further into the 20th century, the modern use of electric propulsion started in the 1980s along with the development of semiconductor switching devices for high-power drives (dc drives and later ac-to-ac drives). This development opened the way for full rpm control of propellers and thrusters, thereby enabling a simplification of the mechanical structure. However, the main reason for using electric propulsion in commercial ship applications is the potential for fuel savings compared with equivalent mechanical alternatives, except for icebreakers, where the performance of an electrically powered propeller is superior to that of a combustion-engine-powered propeller. The fuel-saving potential lies in the fact that the applicable vessels have a highly varying operating profile and are seldom run at full power. This favors the power-plant principle, in which electric power can be produced at any time with optimum running of prime movers, e.g., diesel engines, by turning units on and off depending on the power demand for propulsion and other vessel loads. Icebreakers were among the first vessels to take advantage of this technology, later followed by cruise vessels and offshore drilling vessels operating with dynamic positioning (DP). The converter technology developed rapidly and soon the dc drives were replaced with ac drives. In the same period electric propulsion emerged as the basic standard for large cruise liners and DP-operated drilling vessels, but also found its way into other segments such as shuttle tankers, ferries, and other special vessels. At the same time podded propulsion was introduced, in which the electric motor is mounted directly on the propeller shaft in a submerged, 360° steerable pod, adding better efficiency, improved maneuvering, and reduced installation space/cost to the benefits of electric propulsion. Future trends now focus on further optimization of efficiency by allowing multiple energy sources, independent operation of individual power producers, and energy storage for various applications, such as power backup, peak shaving, or emission-free operation (short voyages). |
Digital Image Forgeries and Passive Image Authentication Techniques: A Survey | Digital images are present everywhere: on magazine covers, in newspapers, in courtrooms as evidence, and all over the Internet, making them one of the major means of communication today. The trustworthiness of digital images has been questioned because of the ease with which both their origin and content can be manipulated, a consequence of the tremendous growth of digital image manipulation tools. Digital image forensics is a recent research field that aims to verify the authenticity of images. This survey provides an overview of various digital image forgeries and the state-of-the-art passive methods for authenticating digital images. |
Experimental exploration of the performance of binary networks | Binary neural networks for object recognition are desirable, especially for small and embedded systems, because of the arithmetic and memory efficiency that comes from restricting the bit-depth of network weights and activations. Neural networks in general exhibit a tradeoff between accuracy and efficiency when choosing a model architecture, and this tradeoff matters more for binary networks because of their limited bit-depth. This paper therefore examines the performance of binary networks as architecture parameters (depth and width) are varied, and reports the best-performing settings for specific datasets. These findings will be useful for designing binary networks for practical use. |
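The binarization that defines such networks is simple to write down. The PyTorch sketch below shows sign-binarized weights with the commonly used straight-through estimator so gradients still flow; the layer sizes stand in for the width/depth knobs the paper varies and are otherwise arbitrary.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)               # +1 / -1 (0 maps to 0, a known edge case)
    @staticmethod
    def backward(ctx, g):
        (x,) = ctx.saved_tensors
        return g * (x.abs() <= 1).float()  # pass gradient only inside [-1, 1]

class BinaryLinear(nn.Linear):
    def forward(self, x):
        # Binarize weights on the fly; full-precision copies are kept for updates.
        return nn.functional.linear(x, BinarizeSTE.apply(self.weight), self.bias)

# Width (256) and depth (2 layers) are the tradeoff knobs varied experimentally.
net = nn.Sequential(BinaryLinear(784, 256), nn.ReLU(), BinaryLinear(256, 10))
out = net(torch.randn(8, 784))
print(out.shape)  # torch.Size([8, 10])
```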
A Novel design of Electronic Voting System Using Fingerprint | The heart of democracy is voting. The heart of voting is trust that each vote is recorded and tallied with accuracy and impartiality. Biometric systems can record and tally votes with high accuracy and impartiality. Among biometric traits, the fingerprint has been researched for the longest period of time and shows the most promising future in real-world applications. Because of their uniqueness and consistency over time, fingerprints have long been used for identification. However, because of the complex distortions among different impressions of the same finger in real life, fingerprint recognition is still a challenging problem. Hence, in this study the authors design and analyse an electronic voting system based on fingerprint minutiae, which are the core of the modern approach to fingerprint analysis. The new design is analysed by conducting a pilot election among a class of students selecting their representative. The analyses show that the proposed electronic voting system resolves many issues of the current system with the help of biometric technology. |
Biotechnological Potential of Agro-Industrial Wastes as a Carbon Source to Thermostable Polygalacturonase Production in Aspergillus niveus | Agro-industrial wastes are mainly composed of complex polysaccharides that might serve as nutrients for microbial growth and production of enzymes. The aim of this work was to study polygalacturonase (PG) production by Aspergillus niveus cultured on liquid or solid media supplemented with agro-industrial wastes. Submerged fermentation (SbmF) was tested using Czapeck media supplemented with 28 different carbon sources. Among these, orange peel was the best PG inducer. On the other hand, for solid state fermentation (SSF), lemon peel was the best inducer. By comparing SbmF with SSF, both supplemented with lemon peel, it was observed that PG levels were 4.4-fold higher under SSF. Maximum PG activity was observed at 55°C and pH 4.0. The enzyme was stable at 60°C for 90 min and at pH 3.0-5.0. The properties of this enzyme, produced on inexpensive fermentation substrates, were interesting and suggested several biotechnological applications. |
Imatinib use immediately before stem cell transplantation in children with Philadelphia chromosome-positive acute lymphoblastic leukemia: Results from Japanese Pediatric Leukemia/Lymphoma Study Group (JPLSG) Study Ph+ALL04 | Incorporation of imatinib into chemotherapeutic regimens has improved the prognosis of children with Philadelphia chromosome-positive acute lymphoblastic leukemia (Ph(+) ALL). We investigated the role of imatinib immediately before hematopoietic stem cell transplantation (HSCT). Children with Ph(+) ALL were enrolled on the JPLSG Ph(+) ALL 04 Study within 1 week of initiation of treatment for ALL. The treatment regimen consisted of an induction phase, consolidation phase, reinduction phase, 2 weeks of imatinib monotherapy, and an HSCT phase (etoposide + CY + TBI conditioning). Minimal residual disease (MRD), the amount of BCR-ABL transcripts, was measured by real-time PCR. The study was registered in UMIN-CTR: UMIN ID C000000290. Forty-two patients were registered and 36 patients (86%) achieved complete remission (CR). Eight of 17 patients (47%) who had detectable MRD at the beginning of the imatinib monotherapy phase showed disappearance of or a decrease in MRD after imatinib treatment. Consequently, 26 patients received HSCT in first CR; all had engraftment and no patient died of complications of HSCT. The 4-year event-free and overall survival rates among all 42 patients were 54.1 ± 7.8% and 78.1 ± 6.5%, respectively. Four of the six patients who did not achieve CR and three of the six who relapsed before HSCT were salvaged with imatinib-containing chemotherapy and subsequently treated with HSCT. The survival rate in this study was excellent, although all patients received HSCT. Longer use of imatinib concurrently with chemotherapy may eliminate the need for HSCT in the subset of patients with rapid clearance of the disease. |
A Language Modeling Approach to Information Retrieval | Models of document indexing and document retrieval have been extensively studied. The integration of these two classes of models has been the goal of several researchers, but it is a very difficult problem. We argue that much of the reason for this is the lack of an adequate indexing model. This suggests that perhaps a better indexing model would help solve the problem. However, we feel that making unwarranted parametric assumptions will not lead to better retrieval performance. Furthermore, making prior assumptions about the similarity of documents is not warranted either. Instead, we propose an approach to retrieval based on probabilistic language modeling. We estimate models for each document individually. Our approach to modeling is non-parametric and integrates document indexing and document retrieval into a single model. One advantage of our approach is that collection statistics, which are used heuristically in many other retrieval models, are an integral part of our model. We have implemented our model and tested it empirically. Our approach significantly outperforms standard tf.idf weighting on two different collections and query sets. |
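The query-likelihood idea is compact enough to sketch directly: estimate a unigram model per document, smooth it with collection statistics, and rank documents by the log-probability of generating the query. The linear smoothing below is a common simplification, not the paper's exact non-parametric estimator; the toy corpus is invented.

```python
import math
from collections import Counter

docs = {
    "d1": "language models for information retrieval".split(),
    "d2": "probabilistic indexing of documents".split(),
}
collection = Counter(w for terms in docs.values() for w in terms)
coll_len = sum(collection.values())

def score(query, doc_terms, lam=0.5):
    """Log query likelihood under a linearly smoothed document model."""
    tf = Counter(doc_terms)
    s = 0.0
    for w in query.split():
        p_doc = tf[w] / len(doc_terms)          # document language model
        p_coll = collection[w] / coll_len       # collection statistics
        s += math.log(lam * p_doc + (1 - lam) * p_coll + 1e-12)
    return s

for d, terms in docs.items():
    print(d, score("language retrieval", terms))  # d1 should rank higher
```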
Drain after elective laparoscopic cholecystectomy. A randomized multicentre controlled trial | Routine drainage after laparoscopic cholecystectomy is still debated. The present study was designed to assess the role of drains in laparoscopic cholecystectomy performed for nonacutely inflamed gallbladder. After laparoscopic gallbladder removal, 53 patients were randomized to have a suction drain positioned in the subhepatic space and 53 patients to have a sham drain. The primary outcome measure was the presence of subhepatic fluid collection at abdominal ultrasonography performed 24 h after surgery. Secondary outcome measures were postoperative abdominal and shoulder-tip pain, use of analgesics, nausea, vomiting, and morbidity. Subhepatic fluid collection was not found in 45 patients (84.9%) in group A and in 46 patients (86.8%) in group B (difference 1.9, 95% confidence interval −11.37 to 15.17; P = 0.998). No significant differences in visual analogue scale scores for abdominal and shoulder pain, use of parenteral ketorolac, nausea, or vomiting were found between the groups. Two (1.9%) significant hemorrhagic events occurred postoperatively. Wound infection was observed in three patients (5.7%) in group A and two patients (3.8%) in group B (difference 1.9, 95% CI −6.19 to 9.99; P = 0.997). The present study was unable to prove that the drain was useful in elective, uncomplicated laparoscopic cholecystectomy. |
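The reported interval arithmetic can be checked with a standard Wald confidence interval for a difference in proportions (the trial's own software may use a slightly different method, so the last digits differ by rounding):

```python
import math

n1 = n2 = 53
p1, p2 = 45 / n1, 46 / n2              # 84.9% vs 86.8% with no fluid collection
diff = p2 - p1                         # 1.9 percentage points
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference {diff*100:.1f} (95% CI {lo*100:.2f} to {hi*100:.2f})")
# prints roughly -11.38 to 15.15, matching the reported interval up to rounding
```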