title | abstract |
---|---|
Analysis of Recurrent Neural Networks for Probabilistic Modeling of Driver Behavior | The validity of any traffic simulation model depends on its ability to generate representative driver acceleration profiles. This paper studies the effectiveness of recurrent neural networks in predicting the acceleration distributions for car following on highways. Long short-term memory recurrent networks are trained and used to propagate the simulated vehicle trajectories over 10-s horizons. On the basis of several performance metrics, the recurrent networks are shown to generally match or outperform baseline methods in replicating driver behavior, including the smoothness and oscillatory characteristics present in real trajectories. This paper reveals that the strong performance is due to the ability of the recurrent network to identify recent trends in the ego-vehicle's state, and recurrent networks are shown to perform as well as feedforward networks with longer histories as inputs. |
Blocking screws for the treatment of distal femur fractures. | Intramedullary nailing is one of the most convenient biological options for treating distal femoral fractures. Because the distal medulla of the femur is wider than the middle diaphysis and intramedullary nails cannot completely fill the intramedullary canal, obtaining adequate reduction during intramedullary nailing of distal femoral fractures can be difficult. Several methods exist for achieving reduction. The purpose of this study was to determine whether the use of blocking screws resolves varus, valgus, translation, and recurvatum deformities, which can be encountered in antegrade and retrograde intramedullary nailing. Thirty-four patients with distal femoral fractures underwent intramedullary nailing between January 2005 and June 2011. Fifteen patients treated by intramedullary nailing and blocking screws were included in the study. Six patients had distal diaphyseal fractures and 9 had distal diaphyseo-metaphyseal fractures. Antegrade nailing was performed in 7 patients and retrograde nailing in 8. Reduction during surgery and union during follow-up were achieved in all patients with no significant complications. Mean follow-up was 26.6 months. Mean time to union was 12.6 weeks. The main purpose of using blocking screws is to achieve reduction, but they are also useful for maintaining permanent reduction. When inserting blocking screws, the screws must be placed 1 to 3 cm away from the fracture line to avoid propagation of the fracture. When applied properly, blocking screws provide an efficient solution for deformities encountered during intramedullary nailing of distal femur fractures. |
Strengthening of Reinforced Concrete Beams using FRP Technique : A Review | Excessive fatigue deterioration is usually experienced when Reinforced Concrete (RC) structural elements are subjected to loading. This motivates the need to strengthen RC structural components, particularly beams, to improve their fatigue performance and extend their fatigue life. During the last few decades, strengthening of concrete structural elements with fibre-reinforced polymer (FRP) has become a widely used technique where high strength is needed for carrying heavy loads or where repair is required due to fatigue cracking, failure, or corrosion. This paper reviews various aspects of RC beams strengthened with FRP. This topic has not been covered comprehensively in previous studies, whereas the technology has advanced rapidly in the recent past. The review highlights aspects such as surface preparation, adhesive curing, finite element (FE) simulation, and fatigue performance, as well as the failure modes of RC beams retrofitted with FRP. The technique eliminates or reduces the crack growth rate, delays initial cracking, slows stiffness decay and residual deflection, and extends the fatigue life of RC beams. The best strengthening option in this case is pre-stressed carbon fibre-reinforced polymer (CFRP). |
Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing | Many “big data” applications need to act on data arriving in real time. However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of state across the system and fault recovery. Furthermore, the models that provide fault recovery do so in an expensive manner, requiring either hot replication or long recovery times. We propose a new programming model, discretized streams (D-Streams), that offers a high-level functional API, strong consistency, and efficient fault recovery. D-Streams support a new recovery mechanism, parallel recovery of lost state, that improves efficiency over the traditional replication and upstream backup schemes in streaming databases and, unlike previous systems, also mitigates stragglers. We implement D-Streams as an extension to the Spark cluster computing engine that lets users seamlessly intermix streaming, batch and interactive queries. Our system can process over 60 million records/second at sub-second latency on 100 nodes. |
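The discretization idea in the D-Streams abstract above can be sketched in a few lines: timestamped records are grouped into short, fixed-length micro-batches, each of which is an immutable input to a deterministic batch computation, so lost state can be recomputed in parallel rather than hot-replicated. The interval choice and the per-batch count below are illustrative assumptions, not the Spark API.

```python
from collections import defaultdict

def discretize(records, interval):
    """Partition (timestamp, record) pairs into micro-batches of fixed length.
    Each batch is an immutable input to a deterministic batch job, which is
    what makes parallel recovery of lost state possible."""
    batches = defaultdict(list)
    for t, rec in records:
        batches[int(t // interval)].append(rec)
    return dict(batches)

def batch_counts(batches):
    """A per-batch record count, standing in for a real streaming computation."""
    return {b: len(recs) for b, recs in sorted(batches.items())}

# Usage: three records falling into two one-second micro-batches.
b = discretize([(0.1, "a"), (0.4, "b"), (1.2, "c")], 1.0)
# b == {0: ["a", "b"], 1: ["c"]}
```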
Obfuscation of executable code to improve resistance to static disassembly | A great deal of software is distributed in the form of executable code. The ability to reverse engineer such executables can create opportunities for theft of intellectual property via software piracy, as well as security breaches by allowing attackers to discover vulnerabilities in an application. The process of reverse engineering an executable program typically begins with disassembly, which translates machine code to assembly code. This is then followed by various decompilation steps that aim to recover higher-level abstractions from the assembly code. Most of the work to date on code obfuscation has focused on disrupting or confusing the decompilation phase. This paper, by contrast, focuses on the initial disassembly phase. Our goal is to disrupt the static disassembly process so as to make programs harder to disassemble correctly. We describe two widely used static disassembly algorithms, and discuss techniques to thwart each of them. Experimental results indicate that significant portions of executables that have been obfuscated using our techniques are disassembled incorrectly, thereby showing the efficacy of our methods. |
What are good parts for hair shape modeling? | Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions. |
Quality of life in multiple sclerosis: translation in French Canadian of the MSQoL-54 | BACKGROUND
Multiple Sclerosis (MS) is a neurodegenerative disease which runs its course for the remainder of the patient's life, frequently causing disability of varying degrees. Negative effects on health-related quality of life (HRQOL) are well documented and a subject of clinical study. The Multiple Sclerosis QOL 54 (MSQOL-54) questionnaire was developed to measure HRQOL in patients with MS. It is composed of 54 items, combining the SF-36 with 18 disease-specific items.
OBJECTIVE
The objective of this project was to translate the MSQOL-54 into French Canadian, and to make it available to the Canadian scientific community for clinical research and clinical practice.
METHODS
French varies across French-speaking regions; the differences include pronunciation and sentence structure and are most marked in the lexicon. For this reason, it was decided to translate the US original MSQOL-54 into French Canadian instead of adapting the existing French version. The SF-36 has been previously validated and published in French Canadian; therefore, the translation work was performed solely on the 18 MS-specific items. The translation followed an internationally accepted methodology in 3 steps: forward translation, backward translation, and patients' cognitive debriefing.
RESULTS
Instructions and Items 38, 43, 45 and 49 were the most debated. Problematic issues mainly resided in the field of semantics. Patients' testing (n = 5) did not reveal conceptual problems. The questionnaire was well accepted, with an average time for completion of 19 minutes.
CONCLUSION
The French Canadian MSQOL-54 is now available to the Canadian scientific community and will be a useful tool for health-care providers to assess HRQOL of patients with MS as a routine part of clinical practice. The next step in the cultural adaptation of the MSQOL-54 in French Canadian will be the evaluation of its psychometric properties. |
Modeling the Dynamics of Non-Player Characters' Social Relations in Video Games | Building credible Non-Player Characters (NPCs) in games requires enhancing not only the graphic animation but also the behavioral model. This paper tackles the problem of the dynamics of NPCs' social relations depending on their emotional interactions. First, we discuss the need for a dynamic model of social relations. Then, we present our model of social relations for NPCs and give a qualitative model of the influence of emotions on social relations. We describe the implementation of this model and briefly illustrate its features on a simple scene. |
Management of auricular hematoma and the cauliflower ear. | Acute auricular hematoma is common after blunt trauma to the side of the head. A network of vessels provides a rich blood supply to the ear, and the ear cartilage receives its nutrients from the overlying perichondrium. Prompt management of hematoma includes drainage and prevention of reaccumulation. If left untreated, an auricular hematoma can result in complications such as perichondritis, infection, and necrosis. Cauliflower ear may result from long-standing loss of blood supply to the ear cartilage and formation of neocartilage from disrupted perichondrium. Management of cauliflower ear involves excision of deformed cartilage and reshaping of the auricle. |
V-shift control for snake robot moving the inside of a pipe with helical rolling motion | A snake robot can serve as a machine that enters narrow spaces to investigate the inside of a structure. Recently, multiple locomotion modes of snake robots have been realized. In a previous study, we achieved several locomotion modes, such as the undulatory, sidewinding, lateral rolling, and helical rolling (for moving along a pipe) modes. The shape of the robot in each locomotion mode is calculated using a mathematical continuum curve model. However, it is necessary to change these shapes smoothly when the environment in which the snake robot is moving changes. This is relatively easy to achieve when the snake robot moves along its body, i.e., in the tangential direction, using the traditional shift control. In this paper, we propose an s-shift control and a v-shift control for a snake robot moving in the direction perpendicular to its body. The v-shift control is implemented on an experimental snake robot, and the locomotion performance of the robot is evaluated on a pipe composed of a straight pipe and an elbow pipe. As a result, the snake robot successfully changed its locomotion mode to move inside the pipe with two types of helical rolling motion. |
The prognostic and predictive value of sstr2-immunohistochemistry and sstr2-targeted imaging in neuroendocrine tumors | Our aim was to assess the prognostic and predictive value of somatostatin receptor 2 (sstr2) in neuroendocrine tumors (NETs). We established a tissue microarray and imaging database from NET patients that received sstr2-targeted radiopeptide therapy with yttrium-90-DOTATOC, lutetium-177-DOTATOC or alternative treatment. We used univariate and multivariate analyses to identify prognostic and predictive markers for overall survival, including sstr2-imaging and sstr2-immunohistochemistry. We included a total of 279 patients. In these patients, sstr2-immunohistochemistry was an independent prognostic marker for overall survival (HR: 0.82, 95 % CI: 0.67 – 0.99, n = 279, p = 0.037). In DOTATOC patients, sstr2-expression on immunohistochemistry correlated with tumor uptake on sstr2-imaging (n = 170, p < 0.001); however, sstr2-imaging showed a higher prognostic accuracy (positive predictive value: +27 %, 95 % CI: 3 – 56 %, p = 0.025). Sstr2-expression did not predict a benefit of DOTATOC over alternative treatment (p = 0.93). Our results suggest sstr2 as an independent prognostic marker in NETs. Sstr2-immunohistochemistry correlates with sstr2-imaging; however, sstr2-imaging is more accurate for determining the individual prognosis. |
Diagnostic Performance of Fused Diffusion-Weighted Imaging Using Unenhanced or Postcontrast T1-Weighted MR Imaging in Patients With Breast Cancer | To evaluate the diagnostic performance of fused diffusion-weighted imaging (DWI) using either unenhanced (UFMR) or early postcontrast T1-weighted imaging (PCFMR) to detect and characterize breast lesions in patients with breast cancer. This retrospective observational study was approved by the institutional review board of our hospital, and informed consent was waived. We retrospectively selected 87 consecutive patients who underwent preoperative breast magnetic resonance imaging, including DWI, and definitive surgery. Both UFMR and PCFMR were reviewed by 5 radiologists for detection, lesion size, Breast Imaging Reporting and Data System final assessment, the probability of malignancy, lesion conspicuity, and apparent diffusion coefficients. A total of 129 lesions were identified by at least 2 readers on UFMR or PCFMR. Of 645 potentially detected lesions, 528 (82%) were detected with UFMR and 554 (86%) with PCFMR. Malignant lesions or index cancers showed significantly higher detection rates than benign or additional lesions on both UFMR and PCFMR (P < 0.05). Areas under the receiver operating characteristic curves (AUCs) for predicting malignancy ranged from 0.927 to 0.986 for UFMR and from 0.936 to 0.993 for PCFMR, a difference that was not significant. Lesion conspicuity was significantly higher on PCFMR than on UFMR (9.19 ± 1.36 vs 8.59 ± 1.67; P < 0.05) across the 5 readers. Mean intraclass correlation coefficients for lesion size on UFMR and PCFMR were 0.89 and 0.92, respectively. Detection rates of index malignant lesions were similar for UFMR and PCFMR. Interobserver agreement for final assessments was reliable across the 5 readers. Diagnostic accuracy for predicting malignancy with UFMR versus PCFMR was similar, although lesion conspicuity was significantly greater with the latter. |
A simple method for generating single-stranded DNA probes labeled to high activities. | The random priming DNA-labeling method of Feinberg and Vogelstein (1, 2) produces probes of high activities. However, incompletely denatured templates in the reaction mixture may cause problems. In addition, probes generated by the standard random priming method are not ideal for in situ hybridization or other methods requiring only one labeled strand. We have developed a method utilizing a biotinylated single-stranded template bound to magnetic microspheres in the standard random priming reaction. The template is generated by a PCR reaction with one of the two primers biotinylated (Fig. 1). The biotinylated PCR product is then bound to streptavidin-coated magnetic beads (Dynabeads M-280 Streptavidin, Dynal). The non-biotinylated strand is removed using alkaline treatment and magnetic separation (3). Labeling of the 'purified' biotinylated strand can then be carried out as a one-tube reaction, since unincorporated nucleotides are removed by fixing the beads with the magnet and discarding the supernatant. The choice of biotinylated primer decides which strand is to be labeled. In contrast to probes generated by the M-13 system, this method can use any vector that can serve as a PCR template. Compared to RNA probes, the single-stranded DNA probes have the advantage that they are easier to handle: there is no need for enzymatic degradation of the template, and RNase contamination is not a problem. |
FPGASort: a high performance sorting architecture exploiting run-time reconfiguration on fpgas for large problem sorting | This paper analyses different hardware sorting architectures in order to implement a highly scalable sorter that solves large problems, up to the GB range, at high performance in linear time complexity. It is shown that a combination of a FIFO-based merge sorter and a tree-based merge sorter yields the best performance at low cost. Moreover, we demonstrate how partial run-time reconfiguration can be used to save almost half the FPGA resources or, alternatively, to improve speed. Experiments show a sustained sorting throughput of 2 GB/s for problems fitting into the on-chip FPGA memory and 1 GB/s when using external memory. These values surpass the best published results on large problem sorting implementations on FPGAs, GPUs, and the Cell processor. |
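The FIFO-based and tree-based merge sorters described above are hardware designs, but their core operation is a k-way merge of pre-sorted runs. A minimal software analogue (using Python's heapq rather than any FPGA construct) looks like this:

```python
import heapq

def merge_runs(runs):
    """k-way merge of pre-sorted runs: repeatedly emit the smallest element
    among the heads of all runs, as the FIFO/tree merge stages do in hardware."""
    return list(heapq.merge(*runs))

# Usage: three sorted runs merged into one sorted output.
merged = merge_runs([[1, 3, 5], [2, 4], [0, 6]])
# merged == [0, 1, 2, 3, 4, 5, 6]
```

In the hardware version, each merge stage streams elements at line rate, which is what gives the overall linear time complexity quoted in the abstract.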
Dual-band printed L-slot antenna for 2.4/5 GHz WLAN operation in the laptop computer | This paper presents a dual-band L-shaped slot antenna for laptop computers operating in the 2.4/5.2/5.8 GHz WLAN bands. The proposed antenna is formed by an L-shaped slot and fed by a strip line structure. The antenna is relatively compact, with dimensions of 15 mm × 60 mm × 0.8 mm, and is suitable for embedding at the top of the display panel of a laptop computer. The antenna operates from 2.3 GHz to 2.6 GHz and from 5.0 GHz to 6.0 GHz, covering the 2.4 GHz, 5.2 GHz and 5.8 GHz WLAN bands. |
Cumulative pharmacokinetic study of oxaliplatin, administered every three weeks, combined with 5-fluorouracil in colorectal cancer patients. | The cumulative pharmacokinetic pattern of oxaliplatin, a new diamminecyclohexane platinum derivative, was studied in patients with metastatic colorectal cancer. Oxaliplatin was administered by i.v. infusion (130 mg/m2) over 2 h every 3 weeks, and 5-fluorouracil and leucovorin were administered weekly. A very sensitive method, inductively coupled plasma-mass spectrometry, allowed for the determination of total plasma and ultracentrifugable (UC) and RBC platinum levels on day 1, at 0, 2, and 5 h, and on days 8, 15, and 22. Sixteen patients underwent three or more courses, and six of them underwent six or more courses. The platinum concentration curves were quite similar from one course to another, with a high peak value 2 h after administration (day 1, Cmax = 3201 +/- 609 microgram/liter) and a rapid decrease (day 8, 443 +/- 99 microgram/liter). Cmax of both total and UC platinum levels in plasma remained unchanged throughout the treatment. The mean total platinum half-life in plasma was 9 days. We found residual levels of total platinum on day 22 (161 +/- 45 microgram/liter), but we observed no significant accumulation over the first four cycles (P = 0.57). In contrast, platinum accumulated significantly in RBCs after three courses (+91% at day 22 of the third cycle versus day 22 of the first cycle, P = 0.000018), and its half-life there was equivalent to that of RBCs. The patterns of UC and total platinum concentration curves were very similar and correlated significantly (P < 10(-6)) at all sampling times. The mean UC:total platinum ratio was 15% at day 1 and 5% at days 8, 15, and 22 in the 3-week treatment course.
Unlike cisplatin, which rapidly accumulates in plasma as both free and bound platinum, oxaliplatin does not accumulate in plasma, but it does accumulate in RBCs, after repeated cycles at the currently recommended dose (130 mg/m2) and schedule of administration (every 3 weeks). |
A Review on Multipurpose Plant: Psidium Guajava | Psidium guajava Linn. (guava) is used not only as food but also as folk medicine in subtropical areas around the world because of its pharmacologic activities. In particular, the leaf extract of guava has traditionally been used for the treatment of diabetes in East Asia and other countries. Many pharmacological studies have demonstrated the ability of this plant to exhibit antioxidant, hepatoprotective, anti-allergy, antimicrobial, antigenotoxic, antiplasmodial, cytotoxic, antispasmodic, cardioactive, anticough, antidiabetic, anti-inflammatory and antinociceptive activities, supporting its traditional uses. These findings suggest a wide range of clinical applications for the treatment of infantile rotaviral enteritis, diarrhoea and diabetes. |
INSIDE THE BLACK BOX, WITH APOLOGIES TO PANDORA. A REVIEW OF ULRIC NEISSER'S COGNITIVE PSYCHOLOGY1 | It behooves us, as good citizens of the science of psychology, to shirk no area of psychology as long as we can apply scientific method to it. The research in cognitive psychology is certainly interesting, on the whole well executed, and very challenging. It is well within the scope of a behavioristic approach. It merely awaits more attention from behaviorists. |
Internal fixation of type-C distal femoral fractures in osteoporotic bone. | BACKGROUND
Fixation of distal femoral fractures remains a challenge, especially in osteoporotic bone. This study was performed to investigate the biomechanical stability of four different fixation devices for the treatment of comminuted distal femoral fractures in osteoporotic bone.
METHODS
Four fixation devices were investigated biomechanically under torsional and axial loading. Three intramedullary nails, differing in the mechanism of distal locking (with two lateral-to-medial screws in one construct, one screw and one spiral blade in another construct, and four screws [two oblique and two lateral-to-medial with medial nuts] in the third), and one angular stable plate were used. All constructs were tested in an osteoporotic synthetic bone model of an AO/ASIF type 33-C2 fracture. Two nail constructs (the one-screw and spiral blade construct and the four-screw construct) were also compared under axial loading in eight pairs of fresh-frozen human cadaveric femora.
RESULTS
The angular stable plate constructs had significantly higher torsional stiffness than the other constructs; the intramedullary nail with four-screw distal locking achieved nearly comparable results. Furthermore, the four-screw distal locking construct had the greatest torsional strength. Axial stiffness was also the highest for the four-screw distal locking device; the lowest values were achieved with the angular stable plate. The ranking of the constructs for axial cycles to failure was the four-screw locking construct, with the highest number of cycles, followed by the angular stable plate, the spiral blade construct, and two-screw fixation. The findings in the human cadaveric bone were comparable with those in the synthetic bone model. Failure modes under cyclic axial load were comparable for the synthetic and human bone models.
CONCLUSIONS
The findings of this study support the concept that, for intramedullary nails, the kind of distal interlocking pattern affects the stabilization of distal femoral fractures. Four-screw distal locking provides the highest axial stability and nearly comparable torsional stability to that of the angular stable plate; the four-screw distal interlocking construct was found to have the best combined (torsional and axial) biomechanical stability. |
A Survey on Sensor-Cloud: Architecture, Applications, and Approaches | Many organizations desire to operate their businesses, work, and services in a mobile (i.e., just-in-time and anywhere), dynamic, and knowledge-oriented fashion. Activities such as e-learning, environmental learning, remote inspection, health care, and home security and safety mechanisms require a special infrastructure that provides continuous, secure, reliable, and mobile data, with a proper information/knowledge management system suited to their confined environment and its users. Numerous sensor networks for healthcare applications have been designed and implemented, but they lack extensibility, fault tolerance, mobility, reliability, and openness. Thus, an open, flexible, and reconfigurable infrastructure is proposed for healthcare monitoring applications, in which physical sensors are virtualized as virtual sensors on the cloud, and virtual sensors are provisioned automatically to end users whenever required. In this paper, we review approaches that speed up service creation in healthcare and other applications using a Sensor-Cloud architecture. This architecture provides services to end users without requiring them to worry about implementation details, and it allows service requesters to use the virtual sensors directly or to create new services by extending them. |
Radial flux and axial flux PM machines analysis for More Electric Engine aircraft applications | In this paper, the advantages, constraints, and drawbacks of the aircraft More Electric Engine (MEE) are presented through a comparison between radial flux and axial flux Permanent Magnet (PM) machines for four different positions inside the main gas turbine engine. The preliminary electromagnetic designs are performed on the basis of self-consistent dimensioning equations, starting from the constraints of the available volumes. The baseline output power is assumed to be 300 kVA. Taking into account the technical and reliability goals imposed by the specific application, some technological choices characterizing the designs are also discussed. To help designers evaluate the most promising solutions for this application, the proposed design solutions are compared on the basis of several specific design indexes. |
A survey on predicting the popularity of web content | Social media platforms have democratized the process of web content creation, allowing mere consumers to become creators and distributors of content. But this has also contributed to an explosive growth of information and has intensified the online competition for users' attention, since only a small number of items become popular while the rest remain unknown. Understanding what makes one item more popular than another, observing its popularity dynamics, and being able to predict its popularity have thus attracted a lot of interest in the past few years. Predicting the popularity of web content is useful in many areas such as network dimensioning (e.g., caching and replication), online marketing (e.g., recommendation systems and media advertising), and real-world outcome prediction (e.g., economic trends). In this survey, we review the current findings on web content popularity prediction. We describe the different popularity prediction models, present the features that have shown good predictive capabilities, and reveal factors known to influence web content popularity. |
Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO Collaborative Project on Early Detection of Persons with Harmful Alcohol Consumption--II. | The Alcohol Use Disorders Identification Test (AUDIT) has been developed from a six-country WHO collaborative project as a screening instrument for hazardous and harmful alcohol consumption. It is a 10-item questionnaire which covers the domains of alcohol consumption, drinking behaviour, and alcohol-related problems. Questions were selected from a 150-item assessment schedule (which was administered to 1888 persons attending representative primary health care facilities) on the basis of their representativeness for these conceptual domains and their perceived usefulness for intervention. Responses to each question are scored from 0 to 4, giving a maximum possible score of 40. Among those diagnosed as having hazardous or harmful alcohol use, 92% had an AUDIT score of 8 or more, and 94% of those with non-hazardous consumption had a score of less than 8. AUDIT provides a simple method of early detection of hazardous and harmful alcohol use in primary health care settings and is the first instrument of its type to be derived on the basis of a cross-national study. |
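The scoring rule in the AUDIT abstract above (10 items, each scored 0 to 4, maximum 40, with a screening cutoff of 8) is concrete enough to sketch directly. The function names below are illustrative, not part of the WHO instrument:

```python
def audit_score(responses):
    """Total AUDIT score: 10 items, each scored 0-4, maximum 40."""
    if len(responses) != 10 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("AUDIT has 10 items, each scored 0-4")
    return sum(responses)

def is_hazardous(score, cutoff=8):
    """Screening decision: a score of 8 or more flags hazardous/harmful use."""
    return score >= cutoff

# Usage: eight items scored 1 and two scored 0 give a total of 8,
# which is at the cutoff reported in the abstract.
total = audit_score([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
# total == 8, is_hazardous(total) is True
```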
GazeNav: Gaze-Based Pedestrian Navigation | Pedestrian navigation systems help us make a series of decisions that lead us to a destination. Most current pedestrian navigation systems communicate using map-based turn-by-turn instructions. This interaction mode suffers from ambiguity, depends on the user's ability to match the instruction with the environment, and requires redirecting visual attention from the environment to the screen. In this paper we present GazeNav, a novel gaze-based approach for pedestrian navigation. GazeNav communicates the route to take based on the user's gaze at a decision point. We evaluate GazeNav against map-based turn-by-turn instructions. Based on an experiment conducted in a virtual environment with 32 participants, we found a significantly improved user experience with GazeNav compared to map-based instructions, and showed the effectiveness of GazeNav as well as evidence for better local spatial learning. We provide a complete comparison of navigation efficiency and effectiveness between the two approaches. |
Artificial intelligence techniques for driving safety and vehicle crash prediction | Accident prediction is one of the most critical aspects of road safety, whereby an accident can be predicted before it actually occurs and precautionary measures taken to avoid it. For this purpose, accident prediction models are popular in road safety analysis. Artificial intelligence (AI) is used in many real-world applications, especially where outcomes and data are not always the same and are influenced by random changes. This paper presents a study of the existing approaches for the detection of unsafe driving patterns of a vehicle used to predict accidents. The literature covered in this paper is from the past 10 years, from 2004 to 2014. AI techniques are surveyed for the detection of unsafe driving styles and crash prediction. A number of statistical methods that predict accidents using different vehicle and driving features are also covered. The approaches studied in this paper are compared in terms of datasets and prediction performance. We also provide a list of datasets and simulators available to the scientific community for conducting research in the subject domain. The paper also identifies some of the critical open questions that need to be addressed for road safety using AI techniques. |
Digital image analysis of membrane connectivity is a robust measure of HER2 immunostains | The purpose of this study was to develop and validate a new software, HER2-CONNECT™, for digital image analysis of the human epidermal growth factor receptor 2 (HER2) in breast cancer specimens. The software assesses immunohistochemical (IHC) staining reactions of HER2 based on an algorithm evaluating the cell membrane connectivity. The HER2-CONNECT™ algorithm was aligned to match digital image scorings of HER2 performed by 5 experienced assessors in a training set and confirmed in a separate validation set. The training set consisted of 167 breast carcinoma tissue core images in which the assessors individually and blinded outlined regions of interest and gave their HER2 score 0/1+/2+/3+ to the specific tumor region. The validation set consisted of 86 core images where the result of the automated image analysis software was correlated to the scores provided by the 5 assessors. HER2 fluorescence in situ hybridization (FISH) was performed on all cores and used as a reference standard. The overall agreement between the image analysis software and the digital scorings of the 5 assessors was 92.1% (Cohen’s Kappa: 0.859) in the training set and 92.3% (Cohen’s Kappa: 0.864) in the validation set. The image analysis sensitivity was 99.2% and specificity 100% when correlated to FISH. In conclusion, the Visiopharm HER2 IHC algorithm HER2-CONNECT™ can discriminate between amplified and non-amplified cases with high accuracy and diminish the equivocal category, thereby providing a promising supplementary diagnostic tool to increase consistency in HER2 assessment. |
Removal of fixed impulse noise from digital images using Bezier curve based interpolation | In this paper, we propose a Bezier curve based interpolation technique to eliminate fixed impulse noise from digital images while preserving the edges of the image. To eliminate the noise, the noisy image is passed through two steps: in the first step, the pixels affected by the impulse noise are identified, and in the second step, an edge-preserving correction is performed using Bezier interpolation. Promising results were obtained even for images in which more than 80 percent of the pixels are affected. Our proposed algorithm produces better results in comparison to existing algorithms. |
A Comparison of Modeling Units in Sequence-to-Sequence Speech Recognition with the Transformer on Mandarin Chinese | The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or context-dependent phonemes (CD-phonemes) as their modeling units. However, this choice has been challenged by sequence-to-sequence attention-based models, which integrate an acoustic, pronunciation and language model into a single neural network. On English ASR tasks, previous attempts have already shown that the modeling unit of graphemes can outperform that of phonemes with sequence-to-sequence attention-based models. In this paper, we are concerned with modeling units on Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models with the Transformer. Five modeling units are explored, including context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on the HKUST datasets demonstrate that lexicon-free modeling units can outperform lexicon-related modeling units in terms of character error rate (CER). Among the five modeling units, the character-based model performs best and establishes a new state-of-the-art CER of 26.64% on the HKUST datasets without a hand-designed lexicon or extra language model integration, which corresponds to a 4.8% relative improvement over the existing best CER of 28.0% achieved by the joint CTC-attention based encoder-decoder network. |
A flexible tensor block coordinate ascent scheme for hypergraph matching | The estimation of correspondences between two images or point sets is a core problem in computer vision. One way to formulate the problem is graph matching, leading to the quadratic assignment problem, which is NP-hard. Several so-called second-order methods have been proposed to solve this problem. In recent years, hypergraph matching, which leads to a third-order problem, has become popular, as it allows for better integration of geometric information. For most of these third-order algorithms, no theoretical guarantees are known. In this paper we propose a general framework for tensor block coordinate ascent methods for hypergraph matching. We propose two algorithms which both come with the guarantee of monotonic ascent in the matching score on the set of discrete assignment matrices. In the experiments we show that our new algorithms outperform previous work both in terms of achieving better matching scores and matching accuracy. This holds in particular for very challenging settings where one has a high number of outliers and other forms of noise. |
Force density limits in low-speed PM machines due to temperature and reactance | This paper discusses two of the mechanisms that limit the attainable force density in slotted low-speed permanent-magnet (PM) electric machines. Most of the interest is focused on the force density limits imposed by heating of the windings and by stator reactance. The study is based on analytical models for the force and reactance calculations and a lumped-parameter thermal model. It is found that in a machine with an indirectly cooled stator, it is difficult to achieve a force density greater than 100 kN/m^2 due to temperature limits. A high force density is achieved by using deep slots, which lead to high reactance. The high reactance severely increases the converter kilovolt-ampere requirement and total system cost. It is also shown that the cost caused by the high reactance will also limit the force density reached. In machines with one slot per pole per phase, the reactance limited the useful slot depth to approximately 200 mm. However, in machines having a greater number of slots per pole per phase, the reactance is no longer an important limiting factor for the slot depth and force density. |
"Deep" Learning for Missing Value Imputation in Tables with Non-Numerical Data | The success of applications that process data critically depends on the quality of the ingested data. Completeness of a data source is essential in many cases. Yet, most missing value imputation approaches suffer from severe limitations. They are almost exclusively restricted to numerical data, and they either offer only simple imputation methods or are difficult to scale and maintain in production. Here we present a robust and scalable approach to imputation that extends to tables with non-numerical values, including unstructured text data in diverse languages. Experiments on public data sets as well as data sets sampled from a large product catalog in different languages (English and Japanese) demonstrate that the proposed approach is both scalable and yields more accurate imputations than previous approaches. Training on data sets with several million rows is a matter of minutes on a single machine. With a median imputation F1 score of 0.93 across a broad selection of data sets our approach achieves on average a 23-fold improvement compared to mode imputation. While our system allows users to apply state-of-the-art deep learning models if needed, we find that often simple linear n-gram models perform on par with deep learning methods at a much lower operational cost. The proposed method learns all parameters of the entire imputation pipeline automatically in an end-to-end fashion, rendering it attractive as a generic plugin both for engineers in charge of data pipelines where data completeness is relevant, as well as for practitioners without expertise in machine learning who need to impute missing values in tables with non-numerical data. |
Interdisciplinary Management Of Maxillary Lateral Incisors Agenesis With Mini Implant Prostheses: A Case Report | Orthodontic management of patients with unilateral or bilateral congenitally missing permanent lateral incisors is a challenge to effective treatment planning. Over the last several decades, dentistry has focused on several treatment modalities for replacement of missing teeth. The two major alternative treatment options are orthodontic space closure and space opening for prosthetic replacement. For patients with high aesthetic expectations, implants are one of the treatments of choice, especially for replacement of missing maxillary lateral incisors and mandibular incisors. In edentulous areas where the available bone is too compromised for conventional implants of 2.5 mm or more in diameter, narrow-diameter implants of less than 2.5 mm can be used successfully. This case report deals with managing a compromised situation in the region of a maxillary lateral incisor using a narrow-diameter implant. |
A High Efficiency Accelerator for Deep Neural Networks | Deep Neural Networks (DNNs) are the current state of the art for various tasks such as object detection, natural language processing and semantic segmentation. These networks are massively parallel, hierarchical models with each level of hierarchy performing millions of operations on a single input. The enormous amount of parallel computation makes these DNNs suitable for custom acceleration. Custom accelerators can provide real time inference of DNNs at low power thus enabling widespread embedded deployment. In this paper, we present Snowflake, a high efficiency, low power accelerator for DNNs. Snowflake was designed to achieve optimum occupancy at low bandwidths and it is agnostic to the network architecture. Snowflake was implemented on the Xilinx Zynq XC7Z045 APSoC and achieves a peak performance of 128 G-ops/s. Snowflake is able to maintain a throughput of 98 FPS on AlexNet while averaging 1.2 GB/s of memory bandwidth. |
Associations between aspirin use and aging macula disorder: the European Eye Study. | OBJECTIVE
To study associations between aspirin use and early and late aging macula disorder (AMD).
DESIGN
Population-based cross-sectional European Eye Study in 7 centers from northern to southern Europe.
PARTICIPANTS
In total, 4691 participants 65 years of age and older, recruited by random sampling.
METHODS
Aspirin intake and possible confounders for AMD were ascertained by a structured questionnaire. Ophthalmic and basic systemic measurements were performed in a standardized way. The study classified AMD according to the modified International Classification System on digitized fundus images at 1 grading center. Nonfasting blood samples were analyzed in a single laboratory. Associations were analyzed by logistic regression.
MAIN OUTCOME MEASURES
Odds ratios (ORs) for AMD in aspirin users.
RESULTS
Early AMD was present in 36.4% of the participants and late AMD in 3.3%. Aspirin use at least once monthly was reported by 1931 participants (41.2%), at least once weekly by 7%, and daily by 17.3%. For daily aspirin users, the ORs, adjusted for potential confounders, showed a steady increase with increasing severity of AMD grades: grade 1, 1.26 (95% confidence interval [CI], 1.08-1.46; P<0.001); grade 2, 1.42 (95% CI, 1.18-1.70); and wet late AMD, 2.22 (95% CI, 1.61-3.05).
CONCLUSIONS
Frequent aspirin use was associated with early AMD and wet late AMD, and the ORs rose with increasing frequency of consumption. This interesting observation warrants further evaluation of the associations between aspirin use and AMD.
FINANCIAL DISCLOSURE(S)
The author(s) have no proprietary or commercial interest in any materials discussed in this article. |
Efficiency Optimization of a DSP-Based Standalone PV System Using Fuzzy Logic and Dual-MPPT Control | This paper presents a new digital control scheme for a standalone photovoltaic (PV) system using fuzzy logic and a dual maximum power point tracking (MPPT) controller. The first MPPT controller is an astronomical two-axis sun tracker, which is designed to track the sun over both the azimuth and elevation angles and obtain maximum solar radiation at all times. The second MPPT algorithm controls the power converter between the PV panel and the load and implements a new fuzzy-logic-controller (FLC)-based perturb and observe (P&O) scheme to keep the system power operating point at its maximum. The FLC-MPPT is based on a voltage control approach of the power converter with a discrete PI controller to adapt the duty cycle. The input reference voltage is adaptively perturbed with variable steps until the maximum power is reached. The proposed control scheme achieves stable operation in the entire region of the PV panel and thereby eliminates the oscillations around the maximum power operating point. A 150-Watt prototype system is used with two TMS320F28335 eZdsp boards to validate the performance of the proposed control scheme. |
Dermoscopic Findings in Cutaneous Metastases | Cutaneous metastases rarely develop in patients with solid tumors. The reported incidence of cutaneous metastases from a known primary malignancy ranges from 0.6% to 9%, usually appearing 2 to 3 years after the initial diagnosis.1-11 Skin metastases may represent the first sign of extranodal disease in 7.6% of patients with a primary oncologic diagnosis.1 Cutaneous metastases may also be the first sign of recurrent disease after treatment, with 75% of patients also having visceral metastases.2 Infrequently, cutaneous metastases may be seen as the primary manifestation of an undiagnosed malignancy.12 Prompt recognition of such tumors can be of great significance, affecting prognosis and management. The initial presentation of cutaneous metastases is frequently subtle and may be overlooked without a proper index of suspicion, appearing as multiple or single nodules, plaques, and ulcers, in decreasing order of frequency. Commonly, a painless, mobile, erythematous papule is initially noted, which may enlarge to an inflammatory nodule over time.8 Such lesions may be misdiagnosed as cysts, lipomas, fibromas, or appendageal tumors. Clinical features of cutaneous metastases rarely provide information regarding the primary tumor, although the location of the tumor may be helpful because cutaneous metastases typically manifest in the same geographic region as the initial cancer. The most common primary tumors seen with cutaneous metastases are melanoma, breast, and squamous cell carcinoma of the head and neck.1 Cutaneous metastases are often firm, because of dermal or lymphatic involvement, or erythematous. These features may help rule out some nonvascular entities in the differential diagnosis (eg, cysts and fibromas). The presence of pigment most commonly correlates with cutaneous metastases from melanoma. 
Given the limited body of knowledge regarding distinct clinical findings, we sought to better elucidate the dermoscopic patterns of cutaneous metastases, with the goal of using this diagnostic tool to help identify these lesions. We describe 20 outpatients with biopsy-proven cutaneous metastases secondary to various underlying primary malignancies. Their clinical presentation is reviewed, emphasizing the dermoscopic findings, as well as the histopathologic correlation. |
Native plants are the bee’s knees: local and landscape predictors of bee richness and abundance in backyard gardens | Urban gardens may support bees by providing resources in otherwise resource-poor environments. However, it is unclear whether urban, backyard gardens with native plants will support more bees than gardens without native plants. We examined backyard gardens in northwestern Ohio to ask: 1) Does bee diversity, abundance, and community composition differ in backyard gardens with and without native plants? 2) What characteristics of backyard gardens and land cover in the surrounding landscape correlate with changes in the bee community? 3) Do bees in backyard gardens respond more strongly to local or landscape factors? We sampled bees with pan trapping, netting, and direct observation. We examined vegetation characteristics and land cover in 500 m, 1 km, and 2 km buffers surrounding each garden. Abundance of all bees, native bees, and cavity-nesting bees (but not ground-nesting bees) was greater in native plant gardens but only richness of cavity-nesting bees differed in gardens with and without native plants. Bee community composition differed in gardens with and without native plants. Overall, bee richness and abundance were positively correlated with local characteristics of backyard gardens, such as increased floral abundance, taller vegetation, more cover by woody plants, less cover by grass, and larger vegetable gardens. Differences in the amount of forest, open space, and wetlands surrounding gardens influenced abundance of cavity- and ground-nesting bees, but at different spatial scales. Thus, presence of native plants, and local and landscape characteristics might play important roles in maintaining bee diversity within urban areas. |
Automatic Question Paper Generator System | Information and intelligence are two vital pillars on which the development of humankind rests, and knowledge has a significant impact on the functioning of society. Student assessment is a crucial part of teaching and is done through examinations, so the preparation of exam question papers has consistently been a matter of interest. Present-day technologies allow teachers to store questions in computer databases, but the problem that emerges is how these technologies can also help teachers automatically create varied sets of questions from time to time, avoiding replication and duplication from previous exams while the question bank keeps growing. A manual approach to designing an exam paper cannot serve this need. In this paper, we therefore introduce an automated approach that makes the process of designing exam papers more organized and productive, and that also aids in developing a database of questions which can be further classified for assembling exam question papers. Currently there is no systematic procedure to ensure the quality of exam question papers; hence there is a requirement for a system that will automatically create the question paper from a teacher-entered description. |
Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals | An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals, while essential for effective operation of BCI systems, is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature representations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and the competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing. |
Subjective assessment of chimpanzee (Pan troglodytes) personality: reliability and stability of trait ratings | A 46-item rating scale was used to obtain personality ratings from 75 captive chimpanzees (Pan troglodytes) from 7 zoological parks. Factor analysis revealed five personality dimensions similar to those found in previous research on primate personality: Agreeableness, Dominance, Neuroticism, Extraversion and Intellect. There were significant sex and age differences in ratings on these dimensions, with males rated more highly on Dominance and older chimpanzees rated as more agreeable but less extraverted than younger chimpanzees. Interobserver agreement for most individual trait items was high, but tended to be less reliable for trait terms expressing more subtle social or cognitive abilities. Personality ratings for one zoo were found to be largely stable across a 3-year period, but highlighted the effects of environmental factors on the expression of personality in captive chimpanzees. |
The Context-Dependent Additive Recurrent Neural Net | Contextual sequence mapping is one of the fundamental problems in Natural Language Processing. Instead of relying solely on the information presented in a text, the learning agents have access to a strong external signal given to assist the learning process. In this paper, we propose a novel family of Recurrent Neural Network unit: the Context-dependent Additive Recurrent Neural Network (CARNN) that is designed specifically to leverage this external signal. The experimental results on public datasets in the dialog problem (Babi dialog Task 6 and Frame), contextual language model (Switchboard and Penn Discourse Tree Bank) and question answering (TrecQA) show that our novel CARNN-based architectures outperform previous methods. |
Structured Prediction of Unobserved Voxels from a Single Depth Image | Building a complete 3D model of a scene, given only a single depth image, is underconstrained. To gain a full volumetric model, one needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Exploring this hypothesis, we propose an algorithm that can complete the unobserved geometry of tabletop-sized objects, based on a supervised model trained on already available volumetric elements. Our model maps from a local observation in a single depth image to an estimate of the surface shape in the surrounding neighborhood. We validate our approach both qualitatively and quantitatively on a range of indoor object collections and challenging real scenes. |
A Life in High-Energy Physics: Success Beyond Expectations | The author describes in some technical detail his career in experimental particle physics. It began in 1955, when he joined Brookhaven National Laboratory, and ended in 1985, when he moved to the field of cosmic-ray physics. The author discusses not only his successes but also his failures and his bad judgments. This period was the golden age of particle physics, when the experimental possibilities were abundant and one could carry out experiments with a small team of colleagues and students. |
Phonological skills in predominantly English-speaking, predominantly Spanish-speaking, and Spanish-English bilingual children. | PURPOSE
There is a paucity of information detailing the phonological skills of Spanish-English bilingual children and comparing them to the phonological skills of predominantly English-speaking (PE) and predominantly Spanish-speaking (PS) children. The purpose of this study was to examine the relationship between amount of output (i.e., percentage of time each language was spoken) in each language and phonological skills in Spanish-English bilingual children, PE children, and PS children.
METHOD
Fifteen typically developing children, ranging in age from 5;0 (years;months) to 5;5 (mean = 5;2), participated in the study. The participants consisted of 5 PE speakers, 5 PS speakers, and 5 bilingual (Spanish-English) speakers. A single-word assessment was used to gather information on phonological skills (consonant accuracy, type and frequency of substitutions, frequency of occurrence of phonological patterns [e.g., cluster reduction], accuracy of syllable types [e.g., CV, CVC, CCV, etc.]), and type and rate of cross-linguistic effects.
RESULTS
The results indicated that there was no significant correlation between amount of output in each language and phonological skills either in the Spanish skills of PS children and Spanish-English bilingual speakers or in the English skills of PE children and Spanish-English bilingual speakers. In addition, there was no significant difference in segmental accuracy, syllabic accuracy, or percentage of occurrence of phonological patterns between either the Spanish skills of PS children and Spanish-English bilingual speakers or the English skills of PE children and Spanish-English bilingual speakers. Finally, the children showed a limited number of cross-linguistic effects.
CLINICAL IMPLICATIONS
Results from this study indicate no link between parent estimates of language output and phonological skill and demonstrate that Spanish-English bilingual children will have commensurate, although not identical, phonological skills as compared to age-matched PS and PE children. |
Using Scalable Video Coding for Dynamic Adaptive Streaming over HTTP in mobile environments | Dynamic Adaptive Streaming over HTTP (DASH) is a convenient approach to transfer videos in an adaptive and dynamic way to the user. As a consequence, this system provides high bandwidth flexibility and is especially suitable for mobile use cases where the bandwidth variations are tremendous. In this paper we have integrated the Scalable Video Coding (SVC) extensions of the Advanced Video Coding (AVC) standard into the recently ratified MPEG-DASH standard. Furthermore, we have evaluated our solution under restricted conditions using bandwidth traces from mobile environments and compared it with an improved version of our MPEG-DASH implementation using AVC as well as major industry solutions. |
Detection of unknown computer worms based on behavioral classification of the host | Machine learning techniques are widely used in many fields. One of the applications of machine learning in the field of information security is the classification of a computer's behavior into malicious and benign. Antivirus tools relying on signature-based methods are helpless against new (unknown) computer worms. This paper focuses on the feasibility of accurately detecting unknown worm activity in individual computers while minimizing the required set of features collected from the monitored computer. A comprehensive experiment for testing the feasibility of detecting unknown computer worms, employing several computer configurations, background applications, and user activity, was performed. During the experiments, 323 computer features were monitored by an agent that was developed. Four feature selection methods were used to reduce the number of features, and four learning algorithms were applied on the resulting feature subsets. The evaluation results suggest that using classification algorithms applied on only 20 features, the mean detection accuracy exceeded 90%, and for specific unknown worms accuracy reached above 99%, while maintaining a low level of false positive rate. |
Management of rheumatoid arthritis in Spain (emAR II). Clinical characteristics of the patients. | BACKGROUND
There is a wide variability in the diagnostic and therapeutic methods used in rheumatoid arthritis (RA) in Spain, according to prior studies. The quality of care could benefit from the application of appropriate clinical practice standards; we present a study on the variability of clinical practice.
METHODS
Descriptive review of clinical records (CR) of patients aged 16 or older diagnosed with RA, selected by stratified sampling of the Autonomous Communities in two stages per Hospital Center and patient. Collected analysis of sociodemographic data, evolution, follow-up, joint count, reactants, function, job history, Visual Analogue Scales (VAS) and other.
RESULTS
We obtained valid information from 1,272 RA patients. The ESR, CRP and rheumatoid factor (RF) were regularly used parameters. The percentages of missing data in tender (TJN) and swollen (SJN) joint counts were 8.2% and 9.6%, respectively; for the VAS, missing data were 53.6% (patient global), 59.1% (pain), and 72% (physician).
CONCLUSIONS
Despite having clinical practice guidelines on RA, there still exists a significant variability in RA management in our country. |
Active-clamp snubbers for isolated half-bridge DC-DC converters | In conventional isolated half-bridge dc-dc converters, the leakage-inductance-related losses degrade converter efficiency and limit the ability to increase the converters' switching frequencies. In this paper, a novel active-clamp snubber circuit for half-bridge dc-dc converters is proposed to recycle the energy stored in the leakage inductance by transferring this energy to a capacitor through zero-voltage zero-current-switched auxiliary switches, such that body-diode conduction of the primary-side main switches is prevented and primary-side ringing is attenuated, resulting in improved converter efficiency. Principles of operation and simulation analysis are presented and supported by experimental results that show significant improvement in efficiency. |
Randomized Trial of Endoscopist-Controlled vs. Assistant-Controlled Wire-Guided Cannulation of the Bile Duct | OBJECTIVES: Biliary cannulation is frequently the most difficult component of endoscopic retrograde cholangiopancreatography (ERCP). Techniques employed to improve safety and efficacy include wire-guided access and the use of sphincterotomes. However, a variety of options for these techniques are available and optimum strategies are not defined. We assessed whether the use of endoscopist- vs. assistant-controlled wire guidance and small vs. standard-diameter sphincterotomes improves the safety and/or efficacy of bile duct cannulation. METHODS: Patients were randomized using a 2 × 2 factorial design to an initial cannulation attempt with endoscopist- vs. assistant-controlled wire systems (1:1 ratio) and small (3.9Fr tip) vs. standard (4.4Fr tip) sphincterotomes (1:1 ratio). The primary efficacy outcome was successful deep bile duct cannulation within 8 attempts. A sample size of 498 was planned to demonstrate a significant increase in cannulation of 10%. Interim analysis was planned after 200 patients, with a stopping rule pre-defined for a significant difference in the composite safety end point (pancreatitis, cholangitis, bleeding, and perforation). RESULTS: The study was stopped after the interim analysis, with 216 patients randomized, due to a significant difference in the safety end point with endoscopist- vs. assistant-controlled wire guidance (3/109 (2.8%) vs. 12/107 (11.2%), P=0.016), primarily due to a lower rate of post-ERCP pancreatitis (3/109 (2.8%) vs. 10/107 (9.3%), P=0.049). The difference in successful biliary cannulation for endoscopist- vs. assistant-controlled wire guidance was −0.5% (95% CI, −12.0 to 11.1%) and for small vs. standard sphincterotomes −0.9% (95% CI, −12.5 to 10.6%). CONCLUSIONS: Use of endoscopist- rather than assistant-controlled wire guidance for bile duct cannulation reduces complications of ERCP such as pancreatitis. |
Economics and Evolution: Bringing Life Back into Economics. | Economic theory is currently at a crossroads, where many leading mainstream economists are calling for a more realistic and practical orientation for economic science. Indeed, many are suggesting that economics should be reconstructed on evolutionary lines. This book is about the application to economics of evolutionary ideas from biology. It is not about selfish genes or determination of our behavior by genetic code. The idea that evolution supports a laissez-faire policy is rebutted. The conception of evolution as progress toward greater perfection, along with the competitive individualism sometimes inferred from the notion of the "survival of the fittest," is found to be problematic. Hodgson explores the ambiguities inherent in biology and the problems involved in applying ideas of past economic thinkers--including Malthus, Smith, Marx, Marshall, Veblen, Schumpeter, and Hayek--and argues that the new evolutionary economics can learn much from the many differing conceptions of economic evolution. "This is a work of enormous perceptivity and subtlety as well as judiciousness of interpretation and critique . . . [that] establish[es] Hodgson as the leading institutional theorist, and as one of the leading evolutionary theorists, of his generation." --Warren J. Samuels "A daring and successful attempt to expunge the monopoly of reductionist and mechanistic thinking over evolutionary theory . . . a must for anyone who is interested not only in the foundations of economics, but also in the foundations of social theory." --Elias L. Khalil, Ohio State University. Geoffrey M. Hodgson is University Lecturer in Economics, Judge Institute for Management Studies, University of Cambridge. |
Automatic Off-line Signature Verification Systems: A Review | The use of biometric technologies for human identity verification is growing rapidly, reflecting the increasing reliance on biometrics for security. Off-line signature verification is a behavioral biometric trait used for security and the prevention of fraud, so offline signatures are extensively used as a means of personal verification and identification. Manual signature-based authentication of a large number of documents is a difficult and time-consuming task. Consequently, for many years in the fields of secure communication and financial applications, we have observed explosive growth in biometric personal authentication systems based on measurable, unique physical characteristics (hand geometry, iris scan, fingerprints, or DNA). Human signatures also provide a secure means of confirmation and authorization in legal documents, so automatic signature verification has become an essential component. In order to convey the state of the art in the field to researchers, in this paper we present a survey of off-line signature verification systems. General Terms: Systems, Forgeries, Methods, Skilled, |
Security/privacy of wearable fitness tracking IoT devices | As wearable fitness trackers gain widespread acceptance among the general population, there is a concomitant need to ensure that associated privacy and security vulnerabilities are kept to a minimum. We discuss potential vulnerabilities of these trackers, in general, and specific vulnerabilities in one such tracker - Fitbit - identified by Rahman et al. (2013) who then proposed means to address identified vulnerabilities. However, the `fix' has its own vulnerabilities. We discuss possible means to alleviate related issues. |
iMapReduce: A Distributed Computing Framework for Iterative Computation | Iterative computation is pervasive in many applications such as data mining, web ranking, graph analysis, online social network analysis, and so on. These iterative applications typically involve massive data sets containing millions or billions of data records. This creates demand for distributed computing frameworks that can process massive data sets on a cluster of machines. MapReduce is an example of such a framework. However, MapReduce lacks built-in support for iterative processing, which requires parsing data sets repeatedly. Besides specifying MapReduce jobs, users have to write a driver program that submits a series of jobs and performs convergence testing at the client. This paper presents iMapReduce, a distributed framework that supports iterative processing. iMapReduce allows users to specify the iterative computation with the separated map and reduce functions, and provides support for automatic iterative processing within a single job. More importantly, iMapReduce significantly improves the performance of iterative implementations by (1) reducing the overhead of creating new MapReduce jobs repeatedly, (2) eliminating the shuffling of static data, and (3) allowing asynchronous execution of map tasks. We implement an iMapReduce prototype based on Apache Hadoop, and show that iMapReduce can achieve up to 5 times speedup over Hadoop for implementing iterative algorithms. |
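The client-side driver loop that plain MapReduce forces on iterative jobs, and that iMapReduce eliminates, can be sketched in miniature. The toy rank-propagation example below is illustrative only; it does not use the Hadoop or iMapReduce APIs, and the graph and damping values are made up.

```python
# Toy illustration (not the Hadoop/iMapReduce API): with plain MapReduce, the
# client driver must submit one job per iteration and test convergence itself.
def mapreduce_job(ranks, links):
    # "map": each node sends equal rank shares to its neighbors
    contribs = {}
    for node, rank in ranks.items():
        for dst in links[node]:
            contribs.setdefault(dst, []).append(rank / len(links[node]))
    # "reduce": sum contributions per node, with a damping factor
    return {n: 0.15 + 0.85 * sum(c) for n, c in contribs.items()}

def driver(links, tol=1e-6, max_iters=100):
    ranks = {n: 1.0 for n in links}
    for i in range(max_iters):          # one "job submission" per loop turn
        new_ranks = mapreduce_job(ranks, links)
        delta = max(abs(new_ranks[n] - ranks[n]) for n in links)
        ranks = new_ranks
        if delta < tol:                 # client-side convergence test
            return ranks, i + 1
    return ranks, max_iters

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks, iters = driver(links)
```

iMapReduce's point is that the job-submission overhead inside this loop, and the re-shuffling of the static `links` structure, are paid on every turn; moving the loop inside a single job removes both.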
Game Theory in Wireless Networks: A Tutorial | The behavior of a given wireless device may affect the communication capabilities of a neighboring device, notably because the radio communication channel is usually shared in wireless networks. In this tutorial, we carefully explain how situations of this kind can be modelled by making use of game theory. By leveraging on four simple running examples, we introduce the most fundamental concepts of non-cooperative game theory. This approach should help students and scholars to quickly master this fascinating analytical tool without having to read the existing lengthy, economics-oriented books. It should also assist them in modelling problems of their own. |
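The kind of non-cooperative analysis the tutorial introduces can be sketched on a hypothetical 2 × 2 medium-access game (the payoff values below are illustrative, not from the tutorial): enumerate strategy profiles and keep those where neither player gains by deviating unilaterally.

```python
from itertools import product

# Hypothetical 2x2 medium-access game: each station either Transmits (T) or
# Waits (W). Simultaneous transmissions collide; waiting costs nothing.
payoffs = {                      # (row_payoff, col_payoff)
    ("T", "T"): (-1, -1),        # collision wastes energy
    ("T", "W"): (1, 0),
    ("W", "T"): (0, 1),
    ("W", "W"): (0, 0),
}
strategies = ["T", "W"]

def pure_nash(payoffs):
    """Enumerate profiles where no player gains by deviating alone."""
    eqs = []
    for r, c in product(strategies, strategies):
        u_r, u_c = payoffs[(r, c)]
        row_best = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
        col_best = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
        if row_best and col_best:
            eqs.append((r, c))
    return eqs

print(pure_nash(payoffs))   # [('T', 'W'), ('W', 'T')]
```

The two pure equilibria (exactly one station transmits) capture the shared-channel interdependence the tutorial describes: each player's best choice depends on what its neighbor does.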
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information | This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t−τ), obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f. |
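The ℓ1 recovery result can be illustrated numerically. The sketch below is not the paper's partial-Fourier setting: it substitutes a real Gaussian sensing matrix (so everything stays real-valued) and solves the basis pursuit problem min ‖x‖₁ s.t. Ax = b as a linear program with SciPy's `linprog` (assumed available). The dimensions and seed are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m, k = 64, 32, 3                 # signal length, measurements, sparsity
x_true = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x_true[support] = rng.normal(size=k)

A = rng.normal(size=(m, N)) / np.sqrt(m)   # Gaussian sensing matrix
b = A @ x_true

# Basis pursuit as an LP over z = [x; u] with |x_i| <= u_i:
#   minimize sum(u)  s.t.  x - u <= 0,  -x - u <= 0,  A x = b
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((m, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
print(np.max(np.abs(x_hat - x_true)))
```

With m = 32 measurements of a 3-sparse length-64 signal, the sparsity condition is comfortably satisfied, so the ℓ1 solution coincides with the true signal (up to solver tolerance) with overwhelming probability.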
Obstacle Detection on Railway Track by Fusing Radar and Image Sensor | Novel technology to recognize situations at a distance is necessary to develop a railway safety monitoring system that can detect a person who has fallen onto the tracks from a platform, as well as obstacles in level crossings. In this research, we propose a method for detecting a stationary or moving obstacle that employs super-resolution radar techniques, image recognition techniques, and technology to fuse the two. Our method is designed for detecting obstacles such as cars, bicycles, and humans on the track in the range up to hundreds of meters ahead by using sensors mounted on a train. In the super-resolution radar techniques, a novel stepped multiple frequency radar is confirmed to provide the expected high-resolution performance in an experimental study using software-defined radar. We designed the parameters of the stepped multiple frequency radar. The signal processing processor, which employs high-sampling-rate A/Ds and D/As and high-performance FPGAs, has been designed and developed. In the image recognition techniques, the main algorithm, which detects a pair of rails, is based on information about rail edges and intensity gradients. As the obstacle exists in the vicinity of the rails, all the detection processes can be limited to the region around the rails. As an example, we detect an obstacle by estimating image boundaries which enclose a group of feature points exactly on the obstacle. In order to determine the group of feature points, feature points of the whole image are detected in each frame and tracked over the image sequence. For robust boundary estimation, we use radar to detect the obstacle's rough position, which indicates an image region where an obstacle is likely to exist; the motion segmentation technique is then applied to the feature tracks located in that region. Note that an obstacle's position detected by radar is remarkably rough because radar has low transverse resolution. Nevertheless, the position detected by radar contributes to rapid and robust estimation of satisfactory image boundaries. To allow for monitoring at large distances, a prototype for an onboard pan/tilt camera platform with directional control was designed and manufactured as required in a railway situation. To detect obstacles in real time, we have built a high-performance test machine with a GPU and an image processing environment. |
Sensor network data fault types | This tutorial presents a detailed study of sensor faults that occur in deployed sensor networks and a systematic approach to model these faults. We begin by reviewing the fault detection literature for sensor networks. We draw from current literature, our own experience, and data collected from scientific deployments to develop a set of commonly used features useful in detecting and diagnosing sensor faults. We use this feature set to systematically define commonly observed faults, and provide examples of each of these faults from sensor data collected at recent deployments. |
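As a toy illustration of the kind of rule-based detection such a fault taxonomy enables, the sketch below flags three commonly defined fault types (out-of-range, "stuck-at", and spike). The thresholds, fault names, and readings are hypothetical, not taken from the tutorial.

```python
# Hypothetical rule-based checks for three commonly reported fault types:
# out-of-range values, "stuck-at" (constant) runs, and sudden spikes.
def detect_faults(samples, lo, hi, stuck_len=5, spike_jump=10.0):
    faults = []
    run = 1
    for i, v in enumerate(samples):
        if not (lo <= v <= hi):
            faults.append((i, "out_of_range"))
        if i > 0:
            run = run + 1 if v == samples[i - 1] else 1
            if run == stuck_len:                  # constant for stuck_len samples
                faults.append((i, "stuck_at"))
            if abs(v - samples[i - 1]) > spike_jump:
                faults.append((i, "spike"))
    return faults

# e.g. a temperature trace with a stuck run, a spike, and a bad reading
readings = [20.1, 20.2, 20.2, 20.2, 20.2, 20.2, 55.0, 20.3, -40.0]
print(detect_faults(readings, lo=-10.0, hi=50.0))
```

Real deployments, as the tutorial notes, require feature sets and thresholds derived from actual data, since a single sample can trip several rules at once and legitimate phenomena can mimic faults.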
Sugar-sweetened soda consumption and risk of developing rheumatoid arthritis in women. | BACKGROUND
Sugar-sweetened soda consumption is consistently associated with an increased risk of several chronic inflammatory diseases such as type 2 diabetes and cardiovascular diseases. Whether it plays a role in the development of rheumatoid arthritis (RA), a common autoimmune inflammatory disease, remains unclear.
OBJECTIVE
The aim was to evaluate the association between sugar-sweetened soda consumption and risk of RA in US women.
DESIGN
We prospectively followed 79,570 women from the Nurses' Health Study (NHS; 1980-2008) and 107,330 women from the NHS II (1991-2009). Information on sugar-sweetened soda consumption (including regular cola, caffeine-free cola, and other sugar-sweetened carbonated soda) was obtained from a validated food-frequency questionnaire at baseline and approximately every 4 y during follow-up. Incident RA cases were validated by medical record review. Time-varying Cox proportional hazards regression models were used to calculate HRs after adjustment for confounders. Results from both cohorts were pooled by an inverse-variance-weighted, fixed-effects model.
RESULTS
During 3,381,268 person-years of follow-up, 857 incident cases of RA were documented in the 2 cohorts. In the multivariable pooled analyses, we found that women who consumed ≥1 serving of sugar-sweetened soda/d had a 63% (HR: 1.63; 95% CI: 1.15, 2.30; P-trend = 0.004) increased risk of developing seropositive RA compared with those who consumed no sugar-sweetened soda or who consumed <1 serving/mo. When we restricted analyses to those with later RA onset (after age 55 y) in the NHS, the association appeared to be stronger (HR: 2.64; 95% CI: 1.56, 4.46; P-trend < 0.0001). No significant association was found for sugar-sweetened soda and seronegative RA. Diet soda consumption was not significantly associated with risk of RA in the 2 cohorts.
CONCLUSION
Regular consumption of sugar-sweetened soda, but not diet soda, is associated with increased risk of seropositive RA in women, independent of other dietary and lifestyle factors. |
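The inverse-variance-weighted, fixed-effects pooling used to combine the two cohorts can be sketched as follows; the hazard ratios and standard errors in the example are illustrative, not the study's data.

```python
import math

# Fixed-effects pooling of cohort hazard ratios, done on the log scale:
# each cohort's log(HR) is weighted by the inverse of its variance.
def pool_fixed_effects(hrs, ses):
    """hrs: hazard ratios per cohort; ses: standard errors of log(HR)."""
    weights = [1 / se**2 for se in ses]
    log_pooled = sum(w * math.log(hr) for w, hr in zip(weights, hrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    lo = math.exp(log_pooled - 1.96 * se_pooled)   # 95% CI bounds
    hi = math.exp(log_pooled + 1.96 * se_pooled)
    return math.exp(log_pooled), (lo, hi)

# Illustrative inputs only (not the NHS/NHS II estimates)
hr, ci = pool_fixed_effects(hrs=[1.70, 1.50], ses=[0.25, 0.30])
```

The pooled estimate lands between the cohort estimates, pulled toward the more precise (smaller-SE) one, which is the defining behavior of inverse-variance weighting.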
An Update of Asymptomatic Falciparum Malaria in School Children in Muea, Southwest Cameroon |
Customer Knowledge Management | INTRODUCTION: As companies begin to develop competence in managing internal knowledge and applying it towards achieving organizational goals, they are setting their sights on new sources of knowledge that are not necessarily found within the boundaries of the firm. Customer knowledge management comprises the processes that are concerned with the identification, acquisition, and utilization of knowledge from beyond a firm's external boundary in order to create value for an organization. Companies can utilize this knowledge in many different forms of organizational improvement and change, but it is especially valuable for innovation and the new product development function. The notion of working with partners to share information was first discussed in 1966, when the possibility of transferring information between a company and its suppliers and customers was identified. Kaufman (1966) describes the advantages to a business, which include reduced order costs, reduced delivery time, and increased customer "confidence and goodwill" (p. 148). Organizations have since been viewed as interpretation systems that must find ways of knowing their environment (Daft & Weick, 1984). Through this environmental learning, a firm's ability to innovate can improve by going beyond its boundaries to expand the knowledge available for creating new and successful products. Some organizations conduct ongoing, active searches of the environment to seek new and vital information. Such organizations become the key innovators within an industry. Other, more passive organizations accept whatever information the environment presents and avoid the processes of testing new ideas and innovation. Marketing literature refers to this concept as market orientation (Kohli & Jaworski, 1990; Slater & Narver, 1995). More recently, many organizations have realized the value of information about their customers through customer relationship management and data mining strategies, and have used this information to tailor their marketing efforts; the use of such information to accurately manage inventory levels within the supply chain (Lin, Huang, & Lin, 2002) also reflects this notion. However, what is missing from these theories and strategies is the realization of the value of knowledge residing within customers, not merely information about customers. Iansiti and Levien (2004) describe an organization's environment as an ecosystem, where networked organizations rely on the strength of others for survival. Within this ecology, they identify certain "keystone organizations" that "simplify the complex task of connecting network participants to one another or by making the creation of new products by third parties more efficient" (p. 73). This increase in overall … |
Finding Influential Training Samples for Gradient Boosted Decision Trees | We address the problem of finding influential training samples for a particular case of tree ensemble-based models, e.g., Random Forest (RF) or Gradient Boosted Decision Trees (GBDT). A natural way of formalizing this problem is studying how the model’s predictions change upon leave-one-out retraining, leaving out each individual training sample. Recent work has shown that, for parametric models, this analysis can be conducted in a computationally efficient way. We propose several ways of extending this framework to non-parametric GBDT ensembles under the assumption that tree structures remain fixed. Furthermore, we introduce a general scheme of obtaining further approximations to our method that balance the trade-off between performance and computational complexity. We evaluate our approaches on various experimental setups and use-case scenarios and demonstrate both the quality of our approach to finding influential training samples in comparison to the baselines and its computational efficiency. |
Impact of surface type, wheelchair weight, and axle position on wheelchair propulsion by novice older adults. | OBJECTIVE
To examine the impact of surface type, wheelchair weight, and rear axle position on older adult propulsion biomechanics.
DESIGN
Crossover trial.
SETTING
Biomechanics laboratory.
PARTICIPANTS
Convenience sample of 53 ambulatory older adults with minimal wheelchair experience (65-87y); men, n=20; women, n=33.
INTERVENTION
Participants propelled 4 different wheelchair configurations over 4 surfaces: tile, low carpet, high carpet, and an 8% grade ramp (surface, chair order randomized). Chair configurations included (1) unweighted chair with an anterior axle position, (2) 9.05 kg weighted chair with an anterior axle position, (3) unweighted chair with a posterior axle position (Δ0.08 m), and (4) 9.05 kg weighted chair with a posterior axle position (Δ0.08 m). Weight was added to a titanium folding chair, simulating the weight difference between very light and depot wheelchairs. Instrumented wheels measured propulsion kinetics.
MAIN OUTCOME MEASURES
Average self-selected velocity, push frequency, stroke length, peak resultant and tangential force.
RESULTS
Velocity decreased as surface rolling resistance or chair weight increased. Peak resultant and tangential forces increased as chair weight increased, as surface resistance increased, and with a posterior axle position. The effect of a posterior axle position was greater on high carpet and the ramp. The effect of weight was constant, but was more easily observed on high carpet and ramp. The effects of axle position and weight were independent of one another.
CONCLUSION
Increased surface resistance decreases self-selected velocity and increases peak forces. Increased weight decreases self-selected velocity and increases forces. Anterior axle positions decrease forces, more so on high carpet. The effects of weight and axle position are independent. The greatest reductions in peak forces occur in lighter chairs with anterior axle positions. |
DroidTrace: A ptrace based Android dynamic analysis system with forward execution capability | Android, being an open source smartphone operating system, enjoys a large community of developers who create new mobile services and applications. However, it also attracts malware writers to exploit Android devices in order to distribute malicious apps in the wild. In fact, Android malware are becoming more sophisticated and they use advanced “dynamic loading” techniques like Java reflection or native code execution to bypass security detection. To detect dynamic loading, one has to use dynamic analysis. Currently, there are only a handful of Android dynamic analysis tools available, and they all have shortcomings in detecting dynamic loading. The aim of this paper is to design and implement a dynamic analysis system which allows analysts to perform systematic analysis of dynamic payloads with malicious behaviors. We propose “DroidTrace”, a ptrace based dynamic analysis system with forward execution capability. Our system uses ptrace to monitor selected system calls of the target process which is running the dynamic payloads, and classifies the payloads behaviors through the system call sequence, e.g., behaviors such as file access, network connection, inter-process communication and even privilege escalation. Also, DroidTrace performs “physical modification” to trigger different dynamic loading behaviors within an app. Using DroidTrace, we carry out a large scale analysis on 36,170 dynamic payloads in 50,000 apps and 294 malware in 10 families (four of them are zero-day) with various dynamic loading behaviors. |
Shoe-integrated sensor system for wireless gait analysis and real-time feedback | We are developing a sensor system for use in clinical gait analysis. This research involves the development of an on-shoe device that can be used for continuous and real-time monitoring of gait. This paper presents the design of an instrumented insole and a removable instrumented shoe attachment. Transmission of the data is real-time and wireless, providing information about the three-dimensional motion, position, and pressure distribution of the foot. Using pattern recognition and numerical analysis of the calibrated sensor outputs, algorithms will be developed to analyze the data in real time. Results will be validated by comparison to results from a commercial optical gait analysis system at the Massachusetts General Hospital (MGH) Biomotion Lab. |
Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment | Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets’ political sentiment demonstrates close correspondence to the parties' and politicians’ political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research. |
Ontology-Based Integrated Information Platform for Digital City | Information integration develops toward semantic integration on the Semantic Web. Ontology facilitates the integration of heterogeneous data sources by resolving semantic heterogeneity among them. In this paper, we propose the system architecture of an integrated information platform for digital city based on ontology, including three main layers: distributed heterogeneous data source layer, information integration layer, and application system layer. Its core is the information integration layer, including the ontology server and the integrated database. Then, the domain ontology model for digital city is presented. Furthermore, we realize semantic information retrieval according to the relations of "synonymy of", "kind of", and "part of" between concepts. In the end we conclude that ontology is a good tool to integrate information among various heterogeneous urban management systems on the semantic level. |
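The relation-based retrieval step can be sketched with a hypothetical mini-ontology. Only the three relation names ("synonymy of", "kind of", "part of") come from the abstract; the terms and structure below are illustrative, not the paper's ontology.

```python
# Hypothetical mini-ontology: each concept maps its related concepts per relation.
ontology = {
    "road": {"synonymy_of": ["street"],
             "kind_of": ["transport_infrastructure"],
             "part_of": ["road_network"]},
    "street": {"synonymy_of": ["road"], "kind_of": [], "part_of": []},
}

def expand(term, relations=("synonymy_of", "kind_of", "part_of")):
    """Expand a query term transitively through the three relations."""
    seen, frontier = {term}, [term]
    while frontier:
        t = frontier.pop()
        for rel in relations:
            for related in ontology.get(t, {}).get(rel, []):
                if related not in seen:
                    seen.add(related)
                    frontier.append(related)
    return seen

print(sorted(expand("road")))
```

A query for "road" then also retrieves documents indexed under "street", "road_network", or "transport_infrastructure", which is the semantic-level matching the platform aims for.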
Pattern-oriented associative rule-based patent classification | This paper proposes an innovative pattern-oriented associative rule-based approach to construct an automatic TRIZ-based patent classification system. Derived from associative rule-based text categorization, the new approach not only discovers the semantic relationship among features in a document by their co-occurrence, but also captures syntactic information by manually generalized patterns. We choose 7 classes which address 20 of the 40 TRIZ Principles and perform experiments upon the binary set for each class. Compared with three currently popular classification algorithms (SVM, C4.5 and NB), the new approach shows some improvement. More importantly, this new approach has its own advantages, which are discussed in this paper as well. |
Evaluation of Inter-Process Communication Mechanisms | The abstraction of a process enables certain primitive forms of communication during process creation and destruction, such as wait(). However, the operating system provides more general mechanisms for flexible inter-process communication. In this paper, we have studied and evaluated three commonly-used inter-process communication mechanisms: pipes, sockets, and shared memory. We have identified the various factors that could affect their performance, such as message size, hardware caches, and process scheduling, and constructed experiments to reliably measure the latency and transfer rate of each mechanism. We identified the most reliable timer APIs available for our measurements. Our experiments reveal that shared memory provides the lowest latency and highest throughput, followed by kernel pipes and lastly, TCP/IP sockets. However, the latency trends provide interesting insights into the construction of each mechanism. We also make certain observations on the pros and cons of each mechanism, highlighting its usefulness for different kinds of applications. |
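A minimal version of one such measurement, pipe round-trip latency between a parent and a forked child, might look like the following POSIX-only sketch (the message count, size, and timer choice are assumptions, not the paper's exact protocol):

```python
import os
import time

# Time small-message round trips over a pair of POSIX pipes between a parent
# and a forked child. Numbers vary with scheduling, caches, and load.
def pipe_round_trip_ns(n_msgs=1000, size=64):
    p2c_r, p2c_w = os.pipe()    # parent -> child
    c2p_r, c2p_w = os.pipe()    # child -> parent
    msg = b"x" * size
    if os.fork() == 0:                      # child: echo each message back
        for _ in range(n_msgs):
            os.write(c2p_w, os.read(p2c_r, size))
        os._exit(0)
    start = time.perf_counter_ns()
    for _ in range(n_msgs):
        os.write(p2c_w, msg)
        os.read(c2p_r, size)
    elapsed = time.perf_counter_ns() - start
    os.wait()
    return elapsed / n_msgs                 # mean round-trip latency in ns

print(f"mean pipe round trip: {pipe_round_trip_ns():.0f} ns")
```

Each round trip includes two context switches plus two kernel buffer copies, which is why the paper finds shared memory (no copies through the kernel on the data path) ahead of pipes, and pipes ahead of TCP/IP sockets with their protocol overhead.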
A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis | Opinion mining from customer reviews has become pervasive in recent years. Sentences in reviews, however, are usually classified independently, even though they form part of a review’s argumentative structure. Intuitively, sentences in a review build and elaborate upon each other; knowledge of the review structure and sentential context should thus inform the classification of each sentence. We demonstrate this hypothesis for the task of aspect-based sentiment analysis by modeling the interdependencies of sentences in a review with a hierarchical bidirectional LSTM. We show that the hierarchical model outperforms two non-hierarchical baselines, obtains results competitive with the state-of-the-art, and outperforms the state-of-the-art on five multilingual, multi-domain datasets without any hand-engineered features or external resources. |
HIV care and treatment factors associated with improved survival during TB treatment in Thailand: an observational study | BACKGROUND
In Southeast Asia, HIV-infected patients frequently die during TB treatment. Many physicians are reluctant to treat HIV-infected TB patients with anti-retroviral therapy (ART) and have questions about the added value of opportunistic infection prophylaxis to ART, the optimum ART regimen, and the benefit of initiating ART early during TB treatment.
METHODS
We conducted a multi-center observational study of HIV-infected patients newly diagnosed with TB in Thailand. Clinical data was collected from the beginning to the end of TB treatment. We conducted multivariable proportional hazards analysis to identify factors associated with death.
RESULTS
Of 667 HIV-infected TB patients enrolled, 450 (68%) were smear and/or culture positive. Death during TB treatment occurred in 112 (17%). In proportional hazards analysis, factors strongly associated with reduced risk of death were ART use (Hazard Ratio [HR] 0.16; 95% confidence interval [CI] 0.07-0.36), fluconazole use (HR 0.34; CI 0.18-0.64), and co-trimoxazole use (HR 0.41; CI 0.20-0.83). Among 126 patients that initiated ART after TB diagnosis, the risk of death increased the longer that ART was delayed during TB treatment. Efavirenz- and nevirapine-containing ART regimens were associated with similar rates of adverse events and death.
CONCLUSION
Among HIV-infected patients living in Thailand, the single most important determinant of survival during TB treatment was use of ART. Controlled clinical trials are needed to confirm our findings that early ART initiation improves survival and that the choice of non-nucleoside reverse transcriptase inhibitor does not. |
140-GHz TE20-Mode Dielectric-Loaded SIW Slot Antenna Array in LTCC | A TE20-mode substrate integrated waveguide (SIW) fed slot antenna array is proposed at 140 GHz for fabrication ease and cost reduction. The antenna array consists of a feeding network, radiators, and dielectric loadings. The proposed antenna array and an SIW-waveguide transition are integrated into the multilayered low-temperature co-fired ceramic (LTCC). The feeding network comprises a power divider and E-plane couplers that support the TE20 mode. Each pair of symmetrical longitudinal radiating slots is fed by the TE20 mode in a doubled-width SIW. The pair of radiating slots is loaded by a doubled-width open-ended via-fence structure in which a higher order mode is also excited. Experimental results show that an 8 × 8 slot array with the transition achieves a maximum boresight gain of 21.3 dBi at 140.6 GHz and an impedance matching bandwidth of 129.2–146 GHz with a boresight gain above 18.3 dBi. |
Aneesah: A Conversational Natural Language Interface to Databases | This paper presents the design and development of a novel Natural Language Interface to Database (NLIDB). The developed prototype, called Aneesah the NLIDB, allows users to access desired information stored in a relational database conversationally. This paper introduces the novel conversational agent enabled architecture of Aneesah NLIDB and describes the scripting techniques that have been adopted for its development. The proposed framework for Aneesah NLIDB is based on pattern matching techniques implemented to converse with users and handle complexities and ambiguities in order to build dynamic SQL queries from multiple dialogues and extract database information. The preliminary evaluation results gathered following a pilot study reveal promising results. Index Terms – Natural Language Interface to Databases (NLIDB), Conversational Agents (CA), Knowledge base, Artificial Intelligence (AI), Pattern Matching (PM). |
Structure, function and management of semi-natural habitats for conservation biological control: a review of European studies. | Different semi-natural habitats occur on farmland, and it is the vegetation's traits and structure that subsequently determine their ability to support natural enemies and their associated contribution to conservation biocontrol. New habitats can be created and existing ones improved with agri-environment scheme funding in all EU member states. Understanding the contribution of each habitat type can aid the development of conservation control strategies. Here we review the extent to which the predominant habitat types in Europe support natural enemies, whether this results in enhanced natural enemy densities in the adjacent crop and whether this leads to reduced pest densities. Considerable variation exists in the available information for the different habitat types and trophic levels. Natural enemies within each habitat were the most studied, with less information on whether they were enhanced in adjacent fields, while their impact on pests was rarely investigated. Most information was available for woody and herbaceous linear habitats, yet not for woodland which can be the most common semi-natural habitat in many regions. While the management and design of habitats offer potential to stimulate conservation biocontrol, we also identified knowledge gaps. A better understanding of the relationship between resource availability and arthropod communities across habitat types, the spatiotemporal distribution of resources in the landscape and interactions with other factors that play a role in pest regulation could contribute to an informed management of semi-natural habitats for biocontrol. |
Development and validation of LC-MS/MS assay for the simultaneous determination of methotrexate, 6-mercaptopurine and its active metabolite 6-thioguanine in plasma of children with acute lymphoblastic leukemia: Correlation with genetic polymorphism. | Individualized therapy is a recent approach aiming to specify the dosage regimen for each patient according to their genetic state. Cancer chemotherapy requires continuous monitoring of the plasma concentration levels of active forms of cytotoxic drugs and subsequent dose adjustment. In order to attain optimum therapeutic efficacy, correlation to pharmacogenetics data is crucial. In this study, a specific, accurate and sensitive liquid chromatography tandem mass spectrometry (LC-MS/MS) method has been developed for determination of methotrexate (MTX), 6-mercaptopurine (MP) and its metabolite 6-thioguanine nucleotide (TG) in human plasma. Based on the basic character of the studied compounds, solid phase extraction using a strong cation exchanger was found to be the optimum approach to achieve good extraction recovery. Chromatographic separation was carried out using RP-HPLC and isocratic elution by acetonitrile: 0.1% aqueous formic acid (85:15, v/v) with a flow rate of 0.8 mL/min at 40°C. The detection was performed by tandem mass spectrometry in MRM mode via an electrospray ionization source in positive ionization mode. Analysis was carried out within 1.0 min over a concentration range of 6.25-200.00 ng/mL for the studied analytes. Validation was carried out according to FDA guidelines for bioanalytical method validation and satisfactory results were obtained. The applicability of the assay for the monitoring of MTX, MP and TG and subsequent application to personalized therapy was demonstrated in a clinical study on children with acute lymphoblastic leukemia (ALL). Results confirmed the need for implementation of reliable analysis tools for therapeutic dose adjustment. |
Collaborative filtering with privacy via factor analysis | Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news, etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is the most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionnaire data. |
A tactile proximity sensor | This paper introduces a novel tactile sensor with the ability to detect objects in the sensor's near proximity. For both tasks, the same capacitive sensing principle is used. The tactile part of the sensor provides a tactile sensor array enabling the sensor to gather pressure profiles of the mechanical contact area. Several tactile sensors have been developed in the past. These sensors lack the capability of detecting objects in their near proximity before a mechanical contact occurs. Therefore, we developed a tactile proximity sensor, which is able to measure the current flowing out of or even into the sensor. Measuring these currents and the exciting voltage makes possible a calculation of the capacitance coupled to the sensor's surface and, using more sensors of this type, of the change of capacitance between the sensors. The sensor's mechanical design, the analog/digital signal processing and the hardware-efficient demodulator structure, implemented on an FPGA, will be discussed in detail. |
The Persistence and Transience of Memory | The predominant focus in the neurobiological study of memory has been on remembering (persistence). However, recent studies have considered the neurobiology of forgetting (transience). Here we draw parallels between neurobiological and computational mechanisms underlying transience. We propose that it is the interaction between persistence and transience that allows for intelligent decision-making in dynamic, noisy environments. Specifically, we argue that transience (1) enhances flexibility, by reducing the influence of outdated information on memory-guided decision-making, and (2) prevents overfitting to specific past events, thereby promoting generalization. According to this view, the goal of memory is not the transmission of information through time, per se. Rather, the goal of memory is to optimize decision-making. As such, transience is as important as persistence in mnemonic systems. |
A Case Study in Understanding OSPF and BGP Interactions Using Efficient Experiment Design | In this paper, we analyze the two dominant inter- and intradomain routing protocols in the Internet: Open Shortest Path First (OSPFv2) and Border Gateway Protocol (BGP4). Specifically, we investigate interactions between these two routing protocols as well as overall (i.e., both OSPF and BGP) stability and dynamics. Our analysis is based on large-scale simulations of OSPF and BGP, and careful design of experiments (DoE) to perform an efficient search for the best parameter settings of these two routing protocols. |
Assessing the nutritional status of patients with sarcoma by using the scored patient-generated subjective global assessment. | An intervention with the Scored Patient-Generated Subjective Global Assessment was implemented at a community cancer center to identify patients with sarcoma at risk for malnutrition. This population usually is not considered to be at nutritional risk because of young age and the site of diagnosis; however, 60% of patients assessed were at risk for malnutrition or were severely malnourished. Nurses and dietitians should be aware of potential nutritional risk in this population and learn about possible interventions. |
Learning Spatial and Temporal Cues for Multi-Label Facial Action Unit Detection | Facial action units (AU) are the fundamental units to decode human facial expressions. At least three aspects affect performance of automated AU detection: spatial representation, temporal modeling, and AU correlation. Unlike most studies that tackle these aspects separately, we propose a hybrid network architecture to jointly model them. Specifically, spatial representations are extracted by a Convolutional Neural Network (CNN), which, as analyzed in this paper, is able to reduce person-specific biases caused by hand-crafted descriptors (e.g., HOG and Gabor). To model temporal dependencies, Long Short-Term Memory (LSTMs) are stacked on top of these representations, regardless of the lengths of input videos. The outputs of CNNs and LSTMs are further aggregated into a fusion network to produce per-frame prediction of 12 AUs. Our network naturally addresses the three issues together, and yields superior performance compared to existing methods that consider these issues independently. Extensive experiments were conducted on two large spontaneous datasets, GFT and BP4D, with more than 400,000 frames coded with 12 AUs. On both datasets, we report improvements over a standard multi-label CNN and feature-based state-of-the-art. Finally, we provide visualization of the learned AU models, which, to our best knowledge, reveal how machines see AUs for the first time. |
A deficiency of manganese ions in the presence of high sugar concentrations is the critical parameter for achieving high yields of itaconic acid by Aspergillus terreus | Itaconic acid (IA), an unsaturated dicarboxylic acid with a high potential as a platform for chemicals derived from sugars, is industrially produced by large-scale submerged fermentation by Aspergillus terreus. Although the biochemical pathway and the physiology leading to IA is almost the same as that leading to citric acid production in Aspergillus niger, published data for the volumetric (g L−1) and the specific yield (mol/mol carbon source) of IA are significantly lower than for citric acid. Citric acid is known to accumulate to high levels only when a number of nutritional parameters are carefully adjusted, of which the concentration of the carbon source and that of manganese ions in the medium are particularly important. We have therefore investigated whether a variation in these two parameters may enhance IA production and yield by A. terreus. We show that manganese ion concentrations < 3 ppb are necessary to obtain highest yields. Highest yields were also dependent on the concentration of the carbon source (d-glucose), and highest yields (0.9) were only obtained at concentrations of 12–20 % (w/v), thus allowing the accumulation of up to 130 g L−1 IA. These findings perfectly mirror those obtained when these parameters are varied in citric acid production by A. niger, thus showing that the physiology of both processes is widely identical. Consequently, applying the fermentation technology established for citric acid production by A. niger to A. terreus should lead to high yields of IA, too. |
Phase I and pharmacokinetic trial of PTC299 in pediatric patients with refractory or recurrent central nervous system tumors: a PBTC study | PTC299 is a novel, orally-bioavailable small molecule that selectively inhibits vascular endothelial growth factor receptor protein synthesis at the post-transcriptional level. Based on promising preclinical results, we conducted a pediatric phase I study to estimate the maximum tolerated dose, describe dose-limiting toxicities (DLT) and characterize the pharmacokinetic profile of PTC299 in children with recurrent CNS tumors. PTC299 was administered orally twice or three times daily, depending on the regimen. Four regimens were evaluated using the rolling 6 design, starting with 1.2 mg/kg/dose twice daily and escalating to 2 mg/kg/dose three times daily. Pharmacokinetic studies were performed during the first two courses. Twenty-seven children (14 male; median age 11.2 years, range 5.5–21) with recurrent brain tumors were treated; 21 were fully evaluable for toxicity assessment. Therapy was well-tolerated, and the only DLT was grade 3 hyponatremia. Grade 3 and 4 toxicities were uncommon in subsequent cycles. Median AUC0–Tlast values at the 2 mg/kg dose level were similar to those observed in adults. The study was terminated while patients were being treated at the highest planned dose, due to hepatotoxicity encountered in the ongoing adult phase I studies. No complete or partial responses were observed. Two patients with low-grade gliomas were noted to have minor responses, and at the time of the study's closure, 5 children with low-grade gliomas had been on therapy for 8 or more courses (range 8–16). PTC299 was well-tolerated at the highest dose level tested (2 mg/kg/dose TID) in children with recurrent brain tumors, and prolonged disease stabilization was seen in children with low-grade gliomas. |
On the Kinematic Design of Spherical Three-Degree-of-Freedom Parallel Manipulators | This article studies the kinematic design of different types of spherical three-degree-of-freedom parallel manipulators. The mechanical architectures presented have been introduced elsewhere. However, designs having at least one isotropic configuration are suggested here for the first time. Isotropic configurations are defined, in turn, as those configurations in which the Jacobian matrix, mapping the angular velocity vector of the effector into the joint velocities, is proportional to an orthogonal matrix. First, a review of the direct and inverse kinematics of spherical three-degree-of-freedom parallel manipulators is outlined, and a general form for the Jacobian matrix is given. Parallel manipulators with revolute or prismatic actuators are discussed. Then, the concept of kinematic conditioning is recalled and used as a performance index for the optimization of the manipulators. It is shown that this leads to designs having at least one isotropic configuration. Finally, a few examples of such designs are presented. |
Variational free energy and the Laplace approximation | This note derives the variational free energy under the Laplace approximation, with a focus on accounting for additional model complexity induced by increasing the number of model parameters. This is relevant when using the free energy as an approximation to the log-evidence in Bayesian model averaging and selection. By setting restricted maximum likelihood (ReML) in the larger context of variational learning and expectation maximisation (EM), we show how the ReML objective function can be adjusted to provide an approximation to the log-evidence for a particular model. This means ReML can be used for model selection, specifically to select or compare models with different covariance components. This is useful in the context of hierarchical models because it enables a principled selection of priors that, under simple hyperpriors, can be used for automatic model selection and relevance determination (ARD). Deriving the ReML objective function, from basic variational principles, discloses the simple relationships among Variational Bayes, EM and ReML. Furthermore, we show that EM is formally identical to a full variational treatment when the precisions are linear in the hyperparameters. Finally, we also consider, briefly, dynamic models and how these inform the regularisation of free energy ascent schemes, like EM and ReML. |
Guideline adaptation and implementation planning: a prospective observational study | BACKGROUND: Adaptation of high-quality practice guidelines for local use has been advanced as an efficient means to improve acceptability and applicability of evidence-informed care. In a pan-Canadian study, we examined how cancer care groups adapted pre-existing guidelines to their unique context and began implementation planning. METHODS: Using a mixed-methods, case-study design, five cases were purposefully sampled from self-identified groups and followed as they used a structured method and resources for guideline adaptation. Cases received the ADAPTE Collaboration toolkit, facilitation, methodological and logistical support, resources and assistance as required. Documentary and primary data collection methods captured individual case experience, including monthly summaries of meeting and field notes, email/telephone correspondence, and project records. Site visits, process audits, interviews, and a final evaluation forum with all cases contributed to a comprehensive account of participant experience. RESULTS: Study cases took 12 to >24 months to complete guideline adaptation. Although participants appreciated the structure, most found the ADAPTE method complex and lacking practical aspects. They needed assistance establishing individual guideline mandate and infrastructure, articulating health questions, executing search strategies, appraising evidence, and achieving consensus. Facilitation was described as a multi-faceted process, a team effort, and an essential ingredient for guideline adaptation. While front-line care providers implicitly identified implementation issues during adaptation, they identified a need to add an explicit implementation planning component. CONCLUSIONS: Guideline adaptation is a positive initial step toward evidence-informed care, but adaptation (vs. 'de novo' development) did not meet expectations for reducing time or resource commitments. Undertaking adaptation is as much about the process (engagement and capacity building) as it is about the product (adapted guideline). To adequately address local concerns, cases found it necessary to also search and appraise primary studies, resulting in hybrid (adaptation plus de novo) guideline development strategies that required advanced methodological skills. Adaptation was found to be an action element in the knowledge translation continuum that required integration of an implementation perspective. Accordingly, the adaptation methodology and resources were reformulated and substantially augmented to provide practical assistance to groups not supported by a dedicated guideline panel and to provide more implementation planning support. The resulting framework is called CAN-IMPLEMENT. |
Panda: Public Auditing for Shared Data with Efficient User Revocation in the Cloud | With data storage and sharing services in the cloud, users can easily modify and share data as a group. To ensure shared data integrity can be verified publicly, users in the group need to compute signatures on all the blocks in shared data. Different blocks in shared data are generally signed by different users due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks which were previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks by themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of shared data has been re-signed by the cloud. Moreover, our mechanism is able to support batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism can significantly improve the efficiency of user revocation. |
A Connected E-Shape and U-Shape Dual-Band Patch Antenna for Different Wireless Applications | In this paper, a dual-band E-shape and U-shape patch antenna fed by a transmission line is presented, and the effects of the antenna dimensions, length (L) and width (W), and of the substrate parameters, relative dielectric constant (Er) and substrate thickness, on the bandwidth are studied. The proposed antenna is designed on two layers, one an RT/Duroid 6006 laminate substrate and the other a ground plane, with an area of 33 mm by 42 mm. This paper presents the design of a connected E-shape and U-shape dual-band patch antenna for different wireless applications, notwithstanding its narrow bandwidth. The dual operating frequencies are 2.46 GHz and 4.9 GHz. The −8 dB return-loss bandwidths for the two bands are 13.02% and 3.28%, respectively. This connected U-shape and E-shape patch antenna is mainly applicable to wireless local area networks (WLAN). This paper suggests an alternative approach to enhancing the bandwidth of a microstrip antenna for wireless applications operating at the dual frequencies of 2.46 GHz and 4.9 GHz. The measured results have been compared with the results simulated using the software GEMS version 7.0. |
The Role of Network Analysis in Industrial and Applied Mathematics | Many problems in industry — and in the social, natural, information, and medical sciences — involve discrete data and benefit from approaches from subjects such as network science, information theory, optimization, probability, and statistics. Because the study of networks is concerned explicitly with connectivity between different entities, it has become very prominent in industrial settings, and this importance has been accentuated further amidst the modern data deluge. In this commentary, we discuss the role of network analysis in industrial and applied mathematics, and we give several examples of network science in industry. We focus, in particular, on discussing a physical-applied-mathematics approach to the study of networks. |
Reproductive coercion: connecting the dots between partner violence and unintended pregnancy. | Reproductive health professionals are in a critical position to reach women victimized by abusive relationships. In the general population, physical and sexual violence victimization by an intimate partner affects an estimated one in four women across the life span, with one in five adolescent girls reporting such abuse [1–3]. The prevalence of intimate partner violence reported among women utilizing sexual health services and seeking care in gynecologic and adolescent clinics is generally double these population-based estimates [4–7]. This is not surprising, as such victimization is consistently associated with increased pregnancy and sexually transmitted infection (STI), with abused women demonstrating disproportionately higher rates of seeking care at family planning and other health services related to sexual health, such as HIV and STI testing [8–16]. Moreover, mounting evidence that unintended pregnancy occurs more commonly in abusive relationships highlights that victimized women face compromised decision making regarding contraceptive use and family planning, including condom use [17–22]. Forced sex, fear of violence if she refuses sex and difficulties negotiating contraception and condom use in the context of an abusive relationship all contribute to increased risk for unintended pregnancy and STIs. Thus, in settings where women seek care for sexual and reproductive health services, providers are well situated to build a bridge to further services for a significant number of women affected by partner violence. We suggest that providers can actually do more than simply offering a woman victim advocacy hotline numbers, based on new research findings. In the April issue of Contraception, we highlighted a phenomenon we labeled "reproductive coercion": explicit male behaviors to promote pregnancy (unwanted by the woman). Reproductive coercion can include "birth control sabotage" (interference with contraception) and/or "preg- |
Blunted opiate modulation of hypothalamic-pituitary-adrenocortical activity in men and women who smoke. | OBJECTIVE: To examine the extent to which nicotine dependence alters endogenous opioid regulation of the hypothalamic-pituitary-adrenocortical (HPA) axis functions. Endogenous opiates play an important role in regulating mood, pain, and drug reward. They also regulate the HPA functions. Previous work has demonstrated an abnormal HPA response to psychological stress among dependent smokers. METHODS: Smokers and nonsmokers (total n = 48 participants) completed two sessions during which a placebo or 50 mg of naltrexone was administered, using a double-blind design. Blood and saliva samples, cardiovascular and mood measures were obtained during a resting absorption period, after exposure to two noxious stimuli, and during an extended recovery period. Thermal pain threshold and tolerance were assessed in both sessions. Participants also rated pain during a 90-second cold pressor test. RESULTS: Opioid blockade increased adrenocorticotropin, plasma cortisol, and salivary cortisol levels; these increases were enhanced by exposure to the noxious stimuli. These responses were blunted in smokers relative to nonsmokers. Smokers tended to report less pain than nonsmokers, and women reported more pain during both pain procedures, although sex differences in pain were significant only among nonsmokers. CONCLUSIONS: We conclude that nicotine dependence is associated with attenuated opioid modulation of the HPA. This dysregulation may play a role in the previously observed blunted responses to stress among dependent smokers. |
The Effects of Logistics Leverage in Marketing Systems | An effective logistics system when incorporated into marketing can strengthen its operations and further give the firm a competitive edge. To design a marketing system which must maintain its market share, a firm must consider the effects of logistics and how its integration into marketing can produce several points of leverage. It is the purpose of this paper to highlight the leverage points available in any logistics units and to further analyze how marketing managers can work in sync with logistics managers to effectively utilize the points of leverage to gain competitive advantage in their industry. |
Generalized distributed rate limiting | The Distributed Rate Limiting (DRL) paradigm is a recently proposed mechanism for decentralized control of cloud-based services. DRL is a simple and efficient approach to resolve the issues of pricing and resource control/engineering of cloud based services. The existing DRL schemes focus on very specific performance metrics (such as loss rate and fair-share) and their design heavily depends on the assumption that the traffic is generated by elastic TCP sources. In this paper we tackle the DRL problem for general workloads and performance metrics and propose an analytic framework for the design of stable DRL algorithms. The closed-form nature of our results allows simple design rules which, together with extremely low communication overhead, makes the presented algorithms practical and easy to deploy with guaranteed convergence properties under a wide range of possible scenarios. |
Assessing EFL Student Progress in Critical Thinking with the Ennis-Weir Critical Thinking Essay Test. | Recent trends in the teaching of English as a Foreign Language (EFL) or English as a Second Language (ESL) have emphasized the importance of promoting thinking as an integral part of English language pedagogy; however, empirical research has not established that training in thinking skills can be combined effectively with EFL/ESL instruction. In this study, the Ennis-Weir Critical Thinking Essay Test was used to assess progress in critical thinking after a year of intensive academic English instruction for 36 Japanese students enrolled in a private two-year women's junior college in Osaka, Japan. A control group received only content-based intensive English instruction, while the treatment group received additional training in critical thinking. The treatment group scored significantly higher on the test (p = 0.000). The results imply that critical thinking skills can indeed be taught as part of academic EFL/ESL instruction. (Contains 3 tables and 26 references.) (Author/SLD) |
Fault Detection of Railway Vehicle Suspension Systems Using Multiple-Model Approach | This paper demonstrates the possibility to detect suspension failures of railway vehicles using a multiple-model approach from on-board measurement data. The railway vehicle model used includes the lateral and yaw motions of the wheelsets and bogie, and the lateral motion of the vehicle body, with sensors measuring the lateral acceleration and yaw rate of the bogie, and lateral acceleration of the body. The detection algorithm is formulated based on the Interacting Multiple-Model (IMM) algorithm. The IMM method has been applied for detecting faults in vehicle suspension systems in a simulation study. The mode probabilities and states of vehicle suspension systems are estimated based on a Kalman Filter (KF). This algorithm is evaluated in simulation examples. Simulation results indicate that the algorithm effectively detects on-board faults of railway vehicle suspension systems. |
Toolpath Planning for Continuous Extrusion Additive Manufacturing | Recent work in additive manufacturing has introduced a new class of 3D printers that operate by extruding slurries and viscous mixtures such as silicone, glass, epoxy, and concrete, but because of the fluid flow properties of these materials it is difficult to stop extrusion once the print has begun. Conventional toolpath generation for 3D printing is based on the assumption that the flow of material can be controlled precisely and the resulting path includes instructions to disable extrusion and move the print head to another portion of the model. A continuous extrusion printer cannot disable material flow, and so these toolpaths produce low quality prints with wasted material. This paper outlines a greedy algorithm for post-processing toolpath instructions that employs a Traveling Salesperson Problem (TSP) solver to reduce the distance traveled between subsequent space-filling curves and layers, which reduces unnecessary extrusion by at least 20% for simple object models on an open-source 3D printer. |
What video games have to teach us about learning and literacy | Good computer and video games like System Shock 2, Deus Ex, Pikmin, Rise of Nations, Neverwinter Nights, and Xenosaga: Episode 1 are learning machines. They get themselves learned and learned well, so that they get played long and hard by a great many people. This is how they and their designers survive and perpetuate themselves. If a game cannot be learned and even mastered at a certain level, it won't get played by enough people, and the company that makes it will go broke. Good learning in games is a capitalist-driven Darwinian process of selection of the fittest. Of course, game designers could have solved their learning problems by making games shorter and easier, by dumbing them down, so to speak. But most gamers don't want short and easy games. Thus, designers face and largely solve an intriguing educational dilemma, one also faced by schools and workplaces: how to get people, often young people, to learn and master something that is long and challenging--and enjoy it, to boot. |
Criticality in the brain: A synthesis of neurobiology, models and cognition | Cognitive function requires the coordination of neural activity across many scales, from neurons and circuits to large-scale networks. As such, it is unlikely that an explanatory framework focused upon any single scale will yield a comprehensive theory of brain activity and cognitive function. Modelling and analysis methods for neuroscience should aim to accommodate multiscale phenomena. Emerging research now suggests that multi-scale processes in the brain arise from so-called critical phenomena that occur very broadly in the natural world. Criticality arises in complex systems perched between order and disorder, and is marked by fluctuations that do not have any privileged spatial or temporal scale. We review the core nature of criticality, the evidence supporting its role in neural systems and its explanatory potential in brain health and disease. |
A Novel Design of $4 \times 4$ Butler Matrix With Relatively Flexible Phase Differences | This letter presents a novel topology of a 4×4 Butler matrix, which can realize relatively flexible phase differences at the output ports. The proposed Butler matrix employs couplers with arbitrary phase differences to replace quadrature couplers in the conventional Butler matrix. By controlling the phase differences of the applied couplers, the progressive phase differences among output ports of the proposed Butler matrix can be relatively flexible. To facilitate the design, closed-form design equations are derived and presented. For verifying the design concept, a planar 4×4 Butler matrix with four unique progressive phase differences (−30°, +150°, −120°, and +60°) is designed and fabricated. At the operating frequency, the amplitude imbalance is less than 0.75 dB, and the phase mismatch is within ±6°. The measured return loss is better than 16 dB, and the isolation is better than 18 dB. The bandwidth with 10 dB return loss is about 15%. |
Fusions in solid tumours: diagnostic strategies, targeted therapy, and acquired resistance | Structural gene rearrangements resulting in gene fusions are frequent events in solid tumours. The identification of certain activating fusions can aid in the diagnosis and effective treatment of patients with tumours harbouring these alterations. Advances in the techniques used to identify fusions have enabled physicians to detect these alterations in the clinic. Targeted therapies directed at constitutively activated oncogenic tyrosine kinases have proven remarkably effective against cancers with fusions involving ALK, ROS1, or PDGFB, and the efficacy of this approach continues to be explored in malignancies with RET, NTRK1/2/3, FGFR1/2/3, and BRAF/CRAF fusions. Nevertheless, prolonged treatment with such tyrosine-kinase inhibitors (TKIs) leads to the development of acquired resistance to therapy. This resistance can be mediated by mutations that alter drug binding, or by the activation of bypass pathways. Second-generation and third-generation TKIs have been developed to overcome resistance, and have variable levels of activity against tumours harbouring individual mutations that confer resistance to first-generation TKIs. The rational sequential administration of different inhibitors is emerging as a new treatment paradigm for patients with tumours that retain continued dependency on the downstream kinase of interest. |
Components for immersion | A person is immersed when they feel part of an environment they experience and influence. While virtual immersion systems are usually designed on a case-by-case basis, and are not easily reusable or scalable, our goal is to specify and develop a framework for the design and integration of immersive systems. We address the issues raised in the design and implementation of the middleware components necessary for immersion, in a generic, extensible, modular architecture for Integrated Media Systems we have developed for efficient generic concurrent processing of data streams. Design principles are illustrated on specific visual immersion components. Utilization of these components is demonstrated with a real-time immersive interactive application. |