FileName | Abstract | Title |
---|---|---|
S0167839615001399 | This expository paper exhibits the power and versatility of the Bernstein–Bézier form of a polynomial, and the role that it has played in the analysis of multivariate spline spaces. Several particular applications are discussed in some detail. The purpose of the paper is to provide the reader with a working facility with the Bernstein–Bézier form. | Multivariate splines and the Bernstein–Bézier form of a polynomial |
S0167839615001405 | We study the dimension of trivariate C 1 splines on bipyramid cells, that is, cells with n + 2 boundary vertices, n of which are coplanar with the interior vertex. We improve the earlier lower bound on the dimension given by J. Shan. Moreover, we derive a new upper bound that is equal to the known lower bound in most cases. In the remaining cases, our upper bound is close to the known lower bound, and we conjecture that the dimension coincides with the upper bound. We use tools from both algebraic geometry and Bernstein–Bézier analysis. | Dimension of trivariate C 1 splines on bipyramid cells |
S0167839615001417 | This paper provides a general formula for the dimension of spline space over general planar T-meshes (having concave corners or holes) by using the smoothing cofactor-conformality method. We introduce a new notion, the diagonalizable T-mesh, where the dimension formula is only associated with the topological information of the T-mesh. A necessary and sufficient condition for characterization of the diagonalizable T-mesh is also provided. By this new notion, we obtain some new dimension results for the spline spaces over T-meshes. | On the dimension of spline spaces over T-meshes with smoothing cofactor-conformality method |
S0167839615001429 | We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L 2 -norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L 2 -norm is presented. | Constrained multi-degree reduction with respect to Jacobi norms |
S0167839615001442 | In this paper the G 1 interpolation of two data points and two tangent directions with spatial cubic rational PH curves is considered. It is shown that interpolants exist for any true spatial data configuration. The equations that determine the interpolants are derived by combining a closed form representation of a ten parametric family of rational PH cubics given in Kozak et al. (2014), and the Gram matrix approach. The existence of a solution is proven by using a homotopy analysis, and numerical method to compute solutions is proposed. In contrast to polynomial PH cubics for which the range of G 1 data admitting the existence of interpolants is limited, a switch to rationals provides an interpolation scheme with no restrictions. | G 1 interpolation by rational cubic PH curves in R 3 |
S0167839615001454 | In this note we study the regularity of generalized Hermite interpolation and compare it to that of classical Hermite interpolation. While every Hermite interpolation scheme is regular in one variable, the “classical Hermite interpolation schemes” in several variables are regular if and only if they are supported at one point. In this note we exhibit some regular generalized Hermite interpolation schemes supported at two points and study some limitation of existence of such schemes. The existence of such schemes provides a class of counterexamples to a conjecture of Jia and Sharma. | On regularity of generalized Hermite interpolation |
S0167839616000029 | Let Δ n be a cell with a single interior vertex and n boundary vertices v 1 , … , v n . Say that Δ n has the interpolation property if for every z 1 , … , z n ∈ R there is a spline s ∈ S 2 1 ( Δ n ) such that s ( v i ) = z i for all i. We investigate under what conditions does a cell fail the interpolation property. The question is related to an open problem posed by Alfeld, Piper, and Schumaker in 1987 about characterization of unconfinable vertices. For hexagonal cells, we obtain a geometric criterion characterizing the failure of the interpolation property. As a corollary, we conclude that a hexagonal cell such that its six interior edges lie on three lines fails the interpolation property if and only if the cell is projectively equivalent to a regular hexagonal cell. Along the way, we obtain an explicit basis for the vector space S 2 1 ( Δ n ) for n ≥ 5 . | Interpolation properties of C 1 quadratic splines on hexagonal cells |
S0167839616000030 | In this paper we define Tchebycheffian spline spaces over planar T-meshes and we address the problem of determining their dimension. We extend to the Tchebycheffian spline context the homological approach previously used to characterize polynomial spline spaces over T-meshes, and we exploit this characterization in the study of the dimension. In particular, we give combinatorial lower and upper bounds for the dimension, and we show that these bounds coincide if the dimensions of the underlying extended Tchebycheff section spaces are large enough with respect to the smoothness, under some mild conditions on the T-mesh. Finally, we provide simple examples of Tchebycheffian spline spaces over T-meshes with unstable dimension, which means that their dimension depends on the exact geometry of the T-mesh. These results are extensions of those known in the literature for polynomial spline spaces over T-meshes. | On the dimension of Tchebycheffian spline spaces over planar T-meshes |
S0167839616300085 | We present a new method for immersogeometric fluid flow analysis that directly uses the CAD boundary representation (B-rep) of a complex object and immerses it into a locally refined, non-boundary-fitted discretization of the fluid domain. The motivating applications include analyzing the flow over complex geometries, such as moving vehicles, where the detailed geometric features usually require time-consuming, labor-intensive geometry cleanup or mesh manipulation for generating the surrounding boundary-fitted fluid mesh. The proposed method avoids the challenges associated with such procedures. A new method to perform point membership classification of the background mesh quadrature points is also proposed. To faithfully capture the geometry in intersected elements, we implement an adaptive quadrature rule based on the recursive splitting of elements. Dirichlet boundary conditions in intersected elements are enforced weakly in the sense of Nitsche's method. To assess the accuracy of the proposed method, we perform computations of the benchmark problem of flow over a sphere represented using B-rep. Quantities of interest such as drag coefficient are in good agreement with reference values reported in the literature. The results show that the density and distribution of the surface quadrature points are crucial for the weak enforcement of Dirichlet boundary conditions and for obtaining accurate flow solutions. Also, with sufficient levels of surface quadrature element refinement, the quadrature error near the trim curves becomes insignificant. Finally, we demonstrate the effectiveness of our immersogeometric method for high-fidelity industrial scale simulations by performing an aerodynamic analysis of an agricultural tractor directly represented using B-rep. | Direct immersogeometric fluid flow analysis using B-rep CAD models |
S0167839616300115 | Recently, a construction of spline spaces suitable for representing smooth parametric surfaces of arbitrary topological genus and arbitrary order of continuity has been proposed. These splines, called RAGS (rational geometric splines), are a direct generalization of bivariate polynomial splines on planar triangulations. In this paper we discuss how to construct parametric splines associated with the three homogeneous geometries (spherical, affine, and hyperbolic) and we also consider a number of related computational issues. We then show how homogeneous splines can be used to obtain RAGS. As examples of RAGS surfaces we consider direct analogs of the Powell–Sabin macro-elements and also spline surfaces of higher degrees and higher orders of continuity obtained by minimizing an energy functional. | On constructing RAGS via homogeneous splines |
S0167839616300127 | Hilbert–Huang Transform (HHT) has proven to be extremely powerful for signal processing and analysis in 1D time series, and its generalization to regular tensor-product domains (e.g., 2D and 3D Euclidean space) has also demonstrated its widespread utility in image processing and analysis. Compared with popular Fourier transform and wavelet transform, the most prominent advantage of Hilbert–Huang Transform (HHT) is that it is a fully data-driven, adaptive method, especially valuable for handling non-stationary and nonlinear signals. Two key technical elements of Hilbert–Huang transform are: (1) Empirical Mode Decomposition (EMD) and (2) Hilbert spectra computation. HHT's uniqueness results from its capability to reveal both global information (i.e., Intrinsic Mode Functions (IMFs) enabled by EMD) and local information (i.e., the computation of local frequency, amplitude (energy) and phase information enabled by Hilbert spectra computation) from input signals. Despite HHT's rapid advancement in the past decade, its theory and applications on surfaces remain severely under-explored due to the current technical challenge in conducting Hilbert spectra computation on surfaces. To ameliorate this, this paper takes a new initiative to compute the Riesz transform on 3D surfaces, a natural generalization of the Hilbert transform in higher-dimensional cases, with the goal of making a theoretical breakthrough. The core of our theoretic and computational framework is to fully exploit the relationship between the Riesz transform and the fractional Laplacian operator, which enables the computation of the Riesz transform on surfaces via eigenvalue decomposition of the Laplacian matrix. Moreover, we integrate the techniques of EMD and our newly-proposed Riesz transform on 3D surfaces by monogenic signals to compute Hilbert spectra, which include the space-frequency-energy distribution of signals defined over 3D surfaces and characterize key local feature information (e.g., instantaneous frequency, local amplitude, and local phase). Experiments and applications in spectral geometry processing and prominent feature detection illustrate the effectiveness of the current computational framework of HHT on 3D surfaces, which could serve as a solid foundation for upcoming, more serious applications in graphics and geometry computing fields. | Novel and efficient computation of Hilbert–Huang transform on surfaces |
S0167839616300139 | We propose an automatic method for fast reconstruction of indoor scenes from raw point scans, which is a fairly challenging problem due to the restricted accessibility and the cluttered space for indoor environment. We first detect and remove points representing the ground, walls and ceiling from the input data and cluster the remaining points into different groups, referred to as sub-scenes. Our approach abstracts the sub-scenes with geometric primitives, and accordingly constructs the topology graphs with structural attributes based on the functional parts of objects (namely, anchors). To decompose sub-scenes into individual indoor objects, we devise an anchor-guided subgraph matching algorithm which leverages template graphs to partition the graphs into subgraphs (i.e., individual objects), which is capable of handling arbitrarily oriented objects within scenes. Subsequently, we present a data-driven approach to model individual objects, which is particularly formulated as a model instance recognition problem. A Randomized Decision Forest (RDF) is introduced to achieve robust recognition on decomposed indoor objects with raw point data. We further exploit template fitting to generate the geometrically faithful model to the input indoor scene. We visually and quantitatively evaluate the performance of our framework on a variety of synthetic and raw scans, which comprehensively demonstrates the efficiency and robustness of our reconstruction method on raw scanned point clouds, even in the presence of noise and heavy occlusions. | Cluttered indoor scene modeling via functional part-guided graph matching |
S0167839616300164 | In this paper, we propose a novel unsupervised algorithm for automatically segmenting a single 3D shape or co-segmenting a family of 3D shapes using deep learning. The algorithm consists of three stages. In the first stage, we pre-decompose each 3D shape of interest into primitive patches to generate over-segmentation and compute various signatures as low-level shape features. In the second stage, high-level features are learned, in an unsupervised style, from the low-level ones based on deep learning. Finally, either segmentation or co-segmentation results can be quickly reported by patch clustering in the high-level feature space. The experimental results on the Princeton Segmentation Benchmark and the Shape COSEG Dataset exhibit superior segmentation performance of the proposed method over the previous state-of-the-art approaches. | Unsupervised 3D shape segmentation and co-segmentation via deep learning |
S0167839616300188 | This paper proposes a fast algorithm for computing the real roots of univariate polynomials given in the Bernstein basis. Traditionally, the polynomial is subdivided until a root can be isolated. In contrast, herein we aim to find a root only to subdivide the polynomial at the root. This subdivision based algorithm exploits the property that the Bézier curves interpolate the end-points of their control polygons. Upon subdivision at the root, both resulting curves contain the root at one of their end-points, and hence contain a vanishing coefficient that is factored out. The algorithm then recurses on the new sub-curves, now of lower degree, yielding a computational efficiency. In addition, the proposed algorithm has the ability to efficiently count the multiplicities of the roots. Comparison of running times against the state-of-the-art on thousands of polynomials shows an improvement of about an order-of-magnitude. | Revisiting the problem of zeros of univariate scalar Béziers |
S0167839616300206 | Generalized quantum splines are piecewise polynomials whose generalized quantum derivatives agree up to some order at the joins. Just like classical and quantum splines, generalized quantum splines admit a canonical basis with compact support: the generalized quantum B-splines. Here we study generalized quantum B-spline bases and generalized quantum B-spline curves, using a very general variant of the blossom: the generalized quantum blossom. Applying the generalized quantum blossom, we develop algorithms and identities for generalized quantum B-spline bases and generalized quantum B-spline curves, including generalized quantum variants of the de Boor algorithms for recursive evaluation and generalized quantum differentiation, knot insertion procedures for converting from generalized quantum B-spline to piecewise generalized quantum Bézier form, and a generalized quantum variant of Marsden's identity. | Generalized quantum splines |
S0167839616300218 | This paper shows that generic 2D-Free-Form Deformations of degree 1 × n can be made birational by a suitable assignment of weights to the Bézier or B-spline control points. An FFD that is birational facilitates operations such as backward mapping for image warping, and Extended Free-Form Deformation. While birational maps have been studied extensively in the classical algebraic geometry literature, this paper is the first to present a family of non-linear birational maps that are flexible enough to be of practical use in geometric modeling. | Birational 2D Free-Form Deformation of degree 1× n |
S0167839616300280 | The theory of the isoptic curves is widely studied in the Euclidean plane E 2 (see Cieślak et al., 1991 and Wieleitner, 1908 and the references given there). The analogous question was investigated by the authors in the hyperbolic H 2 and elliptic E 2 planes (see Csima and Szirmai, 2010, 2012, submitted for publication), but in higher dimensional spaces there are only a few results on this topic. In Csima and Szirmai (2013) we gave a natural extension of the notion of the isoptic curves to the n-dimensional Euclidean space E n ( n ≥ 3 ), which is called the isoptic hypersurface. Now we develop an algorithm to determine the isoptic surface H P of a 3-dimensional polyhedron P . We determine the isoptic surfaces for Platonic solids and for some semi-regular Archimedean polytopes and visualize them with Wolfram Mathematica (Wolfram Research, Inc., 2015). | Isoptic surfaces of polyhedra |
S0167839616300292 | A Quasi Extended Chebyshev (QEC) space is a space of sufficiently differentiable functions in which any Hermite interpolation problem which is not a Taylor problem is unisolvent. On a given interval the class of all spaces which contains constants and for which the space obtained by differentiation is a QEC-space has been identified as the largest class of spaces (under ordinary differentiability assumptions) which can be used for design. As a first step towards determining the largest class of splines for design, we consider a sequence of QEC-spaces on adjacent intervals, all of the same dimension, we join them via connection matrices, so as to maintain both the dimension and the unisolvence. The resulting space is called a Quasi Extended Chebyshev Piecewise (QECP) space. We show that all QECP-spaces are inverse images of two-dimensional Chebyshev spaces under piecewise generalised derivatives associated with systems of piecewise weight functions. We show illustrations proving that QECP-spaces can produce interesting shape effects. | Design with Quasi Extended Chebyshev piecewise spaces |
S0167839616300413 | Hierarchical generating systems that are derived from Zwart–Powell (ZP) elements can be used to generate quadratic splines on adaptively refined criss-cross triangulations. We propose two extensions of these hierarchical generating systems, firstly decoupling the hierarchical ZP elements, and secondly enriching the system by including auxiliary functions. These extensions allow us to generate the entire hierarchical spline space – which consists of all piecewise quadratic C 1 -smooth functions on an adaptively refined criss-cross triangulation – if the triangulation fulfills certain technical assumptions. Special attention is dedicated to the characterization of the linear dependencies that are present in the resulting enriched decoupled hierarchical generating system. | Completeness of generating systems for quadratic splines on adaptively refined criss-cross triangulations |
S0167865513003322 | In this paper we propose a new method for skin detection in color images which consists in spatial analysis using the introduced texture-based discriminative skin-presence features. Color-based skin detection has been widely explored and many skin color modeling techniques were developed so far. However, efficacy of the pixel-wise classification is limited due to an overlap between the skin and non-skin pixels reported in many color spaces. To increase the discriminating power of the skin classification schemes, textural and spatial features are often exploited for skin modeling. Our contribution lies in using the proposed discriminative feature space as a domain for spatial analysis of skin pixels. Contrary to existing approaches, we extract the textural features from the skin probability maps rather than from the luminance channel. Presented experimental study confirms that the proposed method outperforms alternative skin detection techniques, which also involve analysis of textural and spatial features. | Spatial-based skin detection using discriminative skin-presence features |
S0167865513004625 | This work addresses the problem of detecting human behavioural anomalies in crowded surveillance environments. We focus in particular on the problem of detecting subtle anomalies in a behaviourally heterogeneous surveillance scene. To reach this goal we implement a novel unsupervised context-aware process. We propose and evaluate a method of utilising social context and scene context to improve behaviour analysis. We find that in a crowded scene the application of Mutual Information based social context permits the ability to prevent self-justifying groups and propagate anomalies in a social network, granting a greater anomaly detection capability. Scene context uniformly improves the detection of anomalies in both datasets. The strength of our contextual features is demonstrated by the detection of subtly abnormal behaviours, which otherwise remain indistinguishable from normal behaviour. | Contextual anomaly detection in crowded surveillance scenes |
S0167865514001263 | In mammographic imaging, the presence of microcalcifications, small deposits of calcium in the breast, is a primary indicator of breast cancer. However, not all microcalcifications are malignant and their distribution within the breast can be used to indicate whether clusters of microcalcifications are benign or malignant. Computer-aided diagnosis (CAD) systems can be employed to help classify such microcalcification clusters. In this paper a novel method for classifying microcalcification clusters is presented by representing discrete mereotopological relations between the individual microcalcifications over a range of scales in the form of a mereotopological barcode. This barcode based representation is able to model complex relations between multiple regions and the results on mammographic microcalcification data shows the effectiveness of this approach. Classification accuracies of 95% and 80% are achieved on the MIAS and DDSM datasets, respectively. These results are comparable to existing state-of-the art methods. This work also demonstrates that mereotopological barcodes could be used to help trained clinicians in their diagnosis by providing a clinical interpretation of barcodes that represent both benign and malignant cases. | Modelling mammographic microcalcification clusters using persistent mereotopology |
S0167865515000744 | Vision is one of the most important of the senses, and humans use it extensively during navigation. We evaluated different types of image and video frame descriptors that could be used to determine distinctive visual landmarks for localizing a person based on what is seen by a camera that they carry. To do this, we created a database containing over 3 km of video-sequences with ground-truth in the form of distance travelled along different corridors. Using this database, the accuracy of localization—both in terms of knowing which route a user is on—and in terms of position along a certain route, can be evaluated. For each type of descriptor, we also tested different techniques to encode visual structure and to search between journeys to estimate a user’s position. The techniques include single-frame descriptors, those using sequences of frames, and both color and achromatic descriptors. We found that single-frame indexing worked better within this particular dataset. This might be because the motion of the person holding the camera makes the video too dependent on individual steps and motions of one particular journey. Our results suggest that appearance-based information could be an additional source of navigational data indoors, augmenting that provided by, say, radio signal strength indicators (RSSIs). Such visual information could be collected by crowdsourcing low-resolution video feeds, allowing journeys made by different users to be associated with each other, and location to be inferred without requiring explicit mapping. This offers a complementary approach to methods based on simultaneous localization and mapping (SLAM) algorithms. | Appearance-based indoor localization: A comparison of patch descriptor performance |
S0167865515001269 | Constructivist philosophy and Hasok Chang’s active scientific realism are used to argue that the idea of “truth” in cluster analysis depends on the context and the clustering aims. Different characteristics of clusterings are required in different situations. Researchers should be explicit about on what requirements and what idea of “true clusters” their research is based, because clustering becomes scientific not through uniqueness but through transparent and open communication. The idea of “natural kinds” is a human construct, but it highlights the human experience that the reality outside the observer’s control seems to make certain distinctions between categories inevitable. Various desirable characteristics of clusterings and various approaches to define a context-dependent truth are listed, and I discuss what impact these ideas can have on the comparison of clustering methods, and the choice of a clustering method and related decisions in practice. | What are the true clusters? |
S0167865515001531 | Several applications aim to identify rare events from very large data sets. Classification algorithms may present great limitations on large data sets and show a performance degradation due to class imbalance. Many solutions have been presented in the literature to deal with the problems of huge amounts of data and class imbalance separately. In this paper we assessed the performance of a novel method, Parallel Selective Sampling (PSS), able to select data from the majority class to reduce imbalance in large data sets. PSS was combined with Support Vector Machine (SVM) classification. PSS-SVM showed excellent performance on synthetic data sets, much better than SVM. Moreover, we showed that on real data sets PSS-SVM classifiers performed slightly better than SVM and RUSBoost classifiers, with reduced processing times. In fact, the proposed strategy was conceived and designed for parallel and distributed computing. In conclusion, PSS-SVM is a valuable alternative to SVM and RUSBoost for the classification of huge and imbalanced data sets, due to its accurate statistical predictions and low computational complexity. | Parallel selective sampling method for imbalanced and large data classification |
S0167865515001622 | Pattern classification methods assign an object to one of several predefined classes/categories based on features extracted from observed attributes of the object (pattern). When L discriminatory features for the pattern can be accurately determined, the pattern classification problem presents no difficulty. However, precise identification of the relevant features for a classification algorithm (classifier) to be able to categorize real world patterns without errors is generally infeasible. In this case, the pattern classification problem is often cast as devising a classifier that minimizes the misclassification rate. One way of doing this is to consider both the pattern attributes and its class label as random variables, estimate the posterior class probabilities for a given pattern and then assign the pattern to the class/category for which the posterior class probability value estimated is maximum. More often than not, the form of the posterior class probabilities is unknown. The so-called Parzen Window approach is widely employed to estimate class-conditional probability (class-specific probability) densities for a given pattern. These probability densities can then be utilized to estimate the appropriate posterior class probabilities for that pattern. However, the Parzen Window scheme can become computationally impractical when the size of the training dataset is in the tens of thousands and L is also large (a few hundred or more). Over the years, various schemes have been suggested to ameliorate the computational drawback of the Parzen Window approach, but the problem still remains outstanding and unresolved. In this paper, we revisit the Parzen Window technique and introduce a novel approach that may circumvent the aforementioned computational bottleneck. The current paper presents the mathematical aspect of our idea. Practical realizations of the proposed scheme will be given elsewhere. | The Parzen Window method: In terms of two vectors and one matrix |
S0167865515002263 | Better understanding of the anatomical variability of the human cochlear is important for the design and function of Cochlear Implants. Proper non-rigid alignment of high-resolution cochlear μCT data is a challenge for the typical cubic B-spline registration model. In this paper we study one way of incorporating skeleton-based similarity as an anatomical registration prior. We extract a centerline skeleton of the cochlear spiral, and generate corresponding parametric pseudo-landmarks between samples. These correspondences are included in the cost function of a typical cubic B-spline registration model to provide a more global guidance of the alignment. The resulting registrations are evaluated using different metrics for accuracy and model behavior, and compared to the results of a registration without the prior. | Free-form image registration of human cochlear μCT data using skeleton similarity as anatomical prior |
S0167865515002275 | Although studied for decades, effective face recognition remains difficult to accomplish on account of occlusions and pose and illumination variations. Pose variance is a particular challenge in face recognition. Effective local descriptors have been proposed for frontal face recognition. When these descriptors are directly applied to cross-pose face recognition, the performance significantly decreases. To improve the descriptor performance for cross-pose face recognition, we propose a face recognition algorithm based on multiple virtual views and alignment error. First, warps between poses are learned using the Lucas–Kanade algorithm. Based on these warps, multiple virtual profile views are generated from a single frontal face, which enables non-frontal faces to be matched using the scale-invariant feature transform (SIFT) algorithm. Furthermore, warps indicate the correspondence between patches of two faces. A two-phase alignment error is proposed to obtain accurate warps, which contain pose alignment and individual alignment. Correlations between patches are considered to calculate the alignment error of two faces. Finally, a hybrid similarity between two faces is calculated; it combines the number of matched keypoints from SIFT and the alignment error. Experimental results show that our proposed method achieves better recognition accuracy than existing algorithms, even when the pose difference angle was greater than 30°. | Cross-pose face recognition based on multiple virtual views and alignment error |
S0167923613001814 | Networks permitting anonymous contributions continue to expand and flourish. In some networks, the reliability of a contribution is not of particular importance. In other settings, however, the development of a network is driven by specific purposes which make the reliability of information exchanged of significant importance. One such situation involves the use of information markets for aggregating individuals' preferences on new or emerging technologies. At this point, there remains skepticism concerning the reliability of the preference revelations in such markets and thus the resulting preference aggregations and rankings of emerging technologies. In this paper, we study the reliability of on-line preference revelation using a series of controlled laboratory experiments. Our analysis includes individuals' pre- and post-experiment rankings of technologies, individual trading and accumulation activities during an electronics market experiment, the final experimental market outcomes, and a ranking of the same technologies by a panel of experts from a Fortune 5 company. In addition, as a final step, we allowed each participant to actually select and keep a unit of one of the technologies at zero price (free). That is, we were able to observe each participant's actual final true preference from the set of technologies. | Reliability (or “lack thereof”) of on-line preference revelation: A controlled experimental analysis |
S0167923613001875 | A keyword auction is conducted by Internet search engines to sell advertising slots listed on the search results page. Although much of the literature assumes the dynamic bidding strategy that utilizes the current bids of other advertisers, such information is, in practice, not available for participants in the auction. This paper explores the bidding behavior of advertisers in a sealed-bid environment, where each bidder does not know the current bids of others. This study considers secure bidding with a trial bid (SBT) as the bid adjustment process used by the advertisers, which is functional in a sealed-bid environment. It is shown that the SBT bid adjustment process converges to some equilibrium point in a one-shot game irrespective of the initial bid profile. Simulation results verify that a sealed-bid environment would be beneficial to search engines. | Bidding behaviors for a keyword auction in a sealed-bid environment |
S0167923614001250 | Identification of intrinsic characteristics and structure of high-dimensional data is an important task for financial analysis. This paper presents a kernel entropy manifold learning algorithm, which employs the information metric to measure the relationships between two financial data points and yields a reasonable low-dimensional representation of high-dimensional financial data. The proposed algorithm can also be used to describe the characteristics of a financial system by deriving the dynamical properties of the original data space. The experiment shows that the proposed algorithm cannot only improve the accuracy of financial early warning, but also provide objective criteria for explaining and predicting the stock market volatility. | A kernel entropy manifold learning approach for financial data analysis |
S0167923614002012 | In today's dynamic media landscape, products are reviewed by consumers in social media and reported by journalists in traditional media. This paper will focus on the relationship among the two types of “earned” media and product sales. Previous studies have focused on either traditional or social earned media, but rarely both. We will aim to bridge that gap using the following points of analysis: the New York Times Best Seller List as traditional media; Amazon user reviews as social media; and book purchases through Amazon as product sales. We find that: (1) both traditional and social earned media influence sales; (2) sales have a reciprocal effect on social earned media; and (3) traditional and social earned media influence each other. Communication through multiple media is known to produce the “synergy effect” in which one media activity enhances the effect of another. Our results suggest a new benefit unique to the use of multiple earned media. We call this the “multiplier effect,” which occurs when one earned media activity increases the level of another by becoming a “sounding board” that amplifies positive messages, as well as a bridge that allows messages to propagate freely in an interactive media system. Therefore, multiple earned media produce combined sales effects greater than those resulting from the sum of their parts. This analysis supports Amazon's decision to use multiple earned media to benefit from an ecosystem where product sales and earned media both influence and are influenced by one another. The paper will address the implications for marketing communication and media industry. | Why Amazon uses both the New York Times Best Seller List and customer reviews: An empirical study of multiplier effects on product sales from multiple earned media |
S0167923614002036 | It has become increasingly important for companies to utilize electronic word of mouth (eWOM) in their marketing campaigns for desired product sales. Identifying key eWOM disseminators among consumers is a challenge for companies. WOM is an interpersonal communication in which a sender spreads a message to receivers. Previously, researchers and practitioners have searched for opinion leaders by examining senders and receivers due to limited records on WOM message. Our study identifies three types of opinion leaders through eWOM using a message-based approach that elicits more accurate and comprehensive information on opinion leadership than sender-based and receiver-based approaches. We demonstrate that eWOM of opinion leaders drives product sales due to their product experience and knowledge background. Our findings suggest that companies can increase product sales via effective use of eWOM of such opinion leaders. Managerial and marketing implications are addressed. | Finding disseminators via electronic word of mouth message for effective marketing communications |
S0167923614002498 | The huge amount of textual data on the Web has grown in the last few years rapidly creating unique contents of massive dimension. In a decision making context, one of the most relevant tasks is polarity classification of a text source, which is usually performed through supervised learning methods. Most of the existing approaches select the best classification model leading to over-confident decisions that do not take into account the inherent uncertainty of the natural language. In this paper, we pursue the paradigm of ensemble learning to reduce the noise sensitivity related to language ambiguity and therefore to provide a more accurate prediction of polarity. The proposed ensemble method is based on Bayesian Model Averaging, where both uncertainty and reliability of each single model are taken into account. We address the classifier selection problem by proposing a greedy approach that evaluates the contribution of each model with respect to the ensemble. Experimental results on gold standard datasets show that the proposed approach outperforms both traditional classification and ensemble methods. | Sentiment analysis: Bayesian Ensemble Learning |
S0167923615001426 | By leveraging crowdsourcing, Web credibility evaluation systems (WCESs) have become a promising tool to assess the credibility of Web content, e.g., Web pages. However, existing systems adopt a passive way to collect users' credibility ratings, which incurs two crucial challenges: (1) a considerable fraction of Web content have few or even no ratings, so the coverage (or effectiveness) of the system is low; (2) malicious users may submit fake ratings to damage the reliability of the system. In order to realize a highly effective and robust WCES, we propose to integrate recommendation functionality into the system. On the one hand, by fusing Matrix Factorization and Latent Dirichlet Allocation, a personalized Web content recommendation model is proposed to attract users to rate more Web pages, i.e., the coverage is increased. On the other hand, by analyzing a user's reaction to the recommended Web content, we detect imitating attackers, which have recently been recognized as a particular threat to WCES to make the system more robust. Moreover, an adaptive reputation system is designed to motivate users to more actively interact with the integrated recommendation functionality. We conduct experiments using both real datasets and synthetic data to demonstrate how our proposed recommendation components significantly improve the effectiveness and robustness of existing WCES. | Towards a highly effective and robust Web credibility evaluation system |
S0167923615001967 | The prevalence of social media has greatly catalyzed the dissemination and proliferation of online memes (e.g., ideas, topics, melodies, and tags). However, this information abundance is exceeding the capability of online users to consume it. Ranking memes based on their popularity could promote online advertisement and content distribution. Despite such importance, few existing works solve this problem well. They are hampered either by impractical assumptions or by an inability to characterize dynamic information. As such, in this paper, we propose a model-free scheme to rank online memes in the context of social media. This scheme is capable of characterizing the nonlinear interactions of online users, which mark the process of meme diffusion. Empirical studies on two large-scale, real-world datasets (one in English and one in Chinese) demonstrate the effectiveness and robustness of the proposed scheme. In addition, due to its fine-grained modeling of user dynamics, this ranking scheme can also be utilized to explain meme popularity through the lens of social influence. | A model-free scheme for meme ranking in social media |
S0167923616300197 | Deception is an inevitable component of human interaction. Researchers and practitioners are developing information systems to aid in the detection of deceptive communication. Information systems are typically adopted by end users to aid in completing a goal or objective (e.g., increasing the efficiency of a business process). However, end-user interactions with deception detection systems (adversarial systems) are unique because the goals of the system and the user are orthogonal. Prior work investigating systems-based deception detection has focused on the identification of reliable deception indicators. This research extends extant work by looking at how users of deception detection systems alter their behavior in response to the presence of guilty knowledge, relevant stimuli, and system knowledge. An analysis of data collected during two laboratory experiments reveals that guilty knowledge, relevant stimuli, and system knowledge all lead to increased use of countermeasures. The implications and limitations of this research are discussed and avenues for future research are outlined. | Man vs. machine: Investigating the effects of adversarial system use on end-user behavior in automated deception detection interviews |
S0167926015000231 | The increasing complexity of VLSI digital systems has strongly encouraged the use of system-level representations in modeling and design activities. This evolution often makes it necessary to rearrange accordingly the way validation and analysis tasks, such as power performance estimation, are carried out. Nowadays, transaction-level paradigms are receiving wider and wider consideration in research on electronic system-level design techniques. With regard to the available modeling resources, the most relevant framework is probably the transaction-level extension of the SystemC language (SystemC/TLM), which therefore represents the best platform for defining transaction-level design techniques. In this paper we present a macro-modeling power estimation methodology that is valid for SystemC/TLM prototypes and of general applicability. The discussion illustrates how the proposed approach is implemented and verifies its effectiveness through a comparison with RTL estimation techniques. | Transaction-level power analysis of VLSI digital systems |
S0167931713002025 | Anomalous modulation characteristics of Josephson current I c through niobium tunnel junctions at liquid helium temperature were first measured after applying the external magnetic field in perpendicular direction. Josephson current I c was modulated by applying the external magnetic fields (Hx , Hy ) parallel and Hz vertical to the junction plane. Modulation characteristics of the I c value upon Hz had hysteresis. Before applying the vertical field, modulation characteristics I c–(Hx , Hy ) were the product of the two Fraunhofer diffraction patterns in Hx and Hy directions which were parallel to the junction edges of the square shape junction, respectively. Under the perpendicular magnetic field as much as 4000A/m, the maximum I c value did not appear at (Hx , Hy )=(0, 0) point. After removing this vertical field, the I c–(Hx , Hy ) modulation pattern changed from the product of the two Fraunhofer diffraction patterns to the deformed I c–(Hx , Hy ) characteristics, whose anomalous shape was explained by assuming the extremely low current density at four edges of the square shape except the four corner regions. Some magnetic flux would be trapped perpendicularly inside the junction electrodes within the junction area. After the junction was heated to the room temperature and was again cooled to the liquid He temperature, these modulation characteristics again became the normal modulation pattern, in which the trapped flux would be released by this thermal cycle. | Anomalous modulation characteristics of DC Josephson current through niobium tunnel junction by applying external magnetic field 4000A/m in perpendicular direction |
S0167931713002438 | Using ab initio calculations we demonstrate that extra electrons in pure amorphous SiO2 can be trapped in deep band gap states. Classical potentials were used to generate amorphous silica models and density functional theory to characterise the geometrical and electronic structures of trapped electrons. Extra electrons can trap spontaneously on pre-existing structural precursors in amorphous SiO2 and produce ≈ 3.2 eV deep states in the band gap. These precursors comprise wide (⩾ 130°) O–Si–O angles and elongated Si–O bonds at the tails of corresponding distributions. The electron trapping in amorphous silica structure results in an opening of the O–Si–O angle (up to almost 180°). We estimate the concentration of these electron trapping sites to be ≈ 5 × 10¹⁹ cm⁻³. | Identification of intrinsic electron trapping sites in bulk amorphous silica from ab initio calculations |
S0167931713002487 | High-κ dielectric gate stacks comprising HfO2 were fabricated on Ge with alumina as the barrier layer. This was achieved by thermal annealing in an ultra high vacuum to remove the native oxide, followed by deposition of aluminium by molecular beam epitaxy. After in situ oxidation at ambient temperature, HfO2 was deposited by atomic layer deposition. The devices underwent physical and electrical characterisation and show a low EOT down to 1.3 nm, a low leakage current of less than 10⁻⁷ A cm⁻² at ±1 V, and CV hysteresis of ∼10 mV. | Low EOT GeO2/Al2O3/HfO2 on Ge substrate using ultrathin Al deposition |
S0167931713004061 | Current progress in tissue engineering is focused on the creation of environments in which cultures of relevant cells can adhere, grow and form functional tissue. We propose a method for controlled chemical and topographical cues through surface patterning of self-folding hydrogel films. This provides a conversion of 2D patterning techniques into a viable method of manufacturing a 3D scaffold. While similar bilayers have previously been demonstrated, here we present a faster and high throughput process for fabricating self-folding hydrogel devices incorporating controllable surface nanotopographies by serial hot embossing of sacrificial layers and photolithography. | Self-folding nano- and micropatterned hydrogel tissue engineering scaffolds by single step photolithographic process |
S0167931713004991 | Structuring or removal of the epoxy-based, photosensitive polymer SU-8 by inductively coupled plasma reactive ion etching (ICP-RIE) was investigated as a function of plasma chemistry, bias power, temperature, and pressure. In a pure oxygen plasma, surface accumulation of antimony from the photo-initiator introduced severe roughness and reduced the etch rate significantly. Addition of SF6 to the plasma chemistry reduced the antimony surface concentration, with lower roughness and a higher etch rate as an outcome. Furthermore, the etch anisotropy could be tuned by controlling the bias power. Etch rates up to 800 nm min⁻¹ could be achieved with low roughness and high anisotropy. | SU-8 etching in inductively coupled oxygen plasma |
S0167931713005042 | In this work the direct transfer of nanopatterns into titanium is demonstrated. The nanofeatures are imprinted at room temperature using diamond stamps in a single step. We also show that the imprint properties of the titanium surface can be altered by anodisation yielding a significant reduction in the required imprint force for pattern transfer. The anodisation process is also utilised for curved titanium surfaces where a reduced imprint force is preferable to avoid sample deformation and damage. We finally demonstrate that our process can be applied directly to titanium rods. | Increased efficiency of direct nanoimprinting on planar and curved bulk titanium through surface modification |
S0167931713005078 | The influence of cathode agitation on the residual stress of electroplated gold has been investigated. Using a custom-built plating cell, a periodic, reciprocating motion was applied to silicon substrates that were electroplated with soft gold. A commercially available gold sulfite solution was used to deposit the 0.6 μm thick gold films using a current density of 3.0 mA/cm² and a bath temperature of 50 °C. By increasing the speed of cathode agitation from 0 to 5 cm/s, the magnitude of the compressive stress decreased from −64 to −9 MPa. The results suggest that cathode agitation significantly alters the mass transport within the electrolytic cell and can be used as a method of stress control in gold electroplating. This finding is potentially significant for plating applications in microelectronics and microsystems that require precise stress control. | Stress in electroplated gold on silicon substrates and its dependence on cathode agitation |
S0167931713006904 | Copper electro-chemical deposition (ECD) of through silicon via (TSV) is a key challenge of 3D integration. This paper presents a numerical modeling of TSV filling concerning the influence of the accelerator and the suppressor. The diffusion–adsorption model was used in the simulation and effects of the additives were incorporated in the model. The boundary conditions were derived from a set of experimental Tafel curves with different concentrations of additives, which provided a quick and accurate way for copper ECD process prediction without complicated surface kinetic parameters fitting. The level set method (LSM) was employed to track the copper and electrolyte interface. The simulation results were in good agreement with the experiments. For a given feature size, the current density for superfilling could be predicted, which provided a guideline for ECD process optimization. | Numerical modeling and experimental verification of through silicon via (TSV) filling in presence of additives |
S0167931714000203 | Restricting pollen tube growth to a single focal plane is important for enabling accurate growth analysis under microscopic observation. In the conventional method to assay pollen tube growth, the pollen tubes grow in a disorderly manner on solid medium, rendering it impossible to observe their growth in detail. Here, we present a new method to assay pollen tube growth using a poly-dimethylsiloxane microchannel device to isolate individual pollen tubes. The growth of the pollen tube is confined to the microchannel and to the same focal plane, allowing accurate microscopic observations. This methodology has the potential for analyses of pollen tube growth in microfluidic environments in response to chemical products and signaling molecules, which paves the way for various experiments on plant reproduction. | Growth assay of individual pollen tubes arrayed by microchannel device |
S0167931714003347 | In order to advance flexible electronic technologies it is important to study the electrical properties of thin metal films on polymer substrates under mechanical load. At the same time, the observation of film deformation and fracture as well as the stresses that are present in the films during straining are also crucial to investigate. To address both the electromechanical and deformation behavior of metal films supported by polymer substrates, in-situ 4 point probe resistance measurements were performed with in-situ atomic force microscopy imaging of the film surface during straining. The 4 point probe resistance measurements allow for the examination of the changes in resistance with strain, while the surface imaging permits the visualization of localized thinning and crack formation. Furthermore, in-situ synchrotron tensile tests provide information about the stresses in the film and show the yield stress where the deformation initiates and the relaxation of the film during imaging. A thin 200nm Cu film on 23μm thick PET substrate will be used to illustrate the combined techniques. The combination of electrical measurements, surface imaging, and stress measurements allow for a better understanding of electromechanical behavior needed for the improvement and future success of flexible electronic devices. | Measuring electro-mechanical properties of thin films on polymer substrates |
S0167931714004456 | Lab-on-a-chip (LOC) devices are broadly used for research in the life sciences and diagnostics and represent a very fast moving field. LOC devices are designed, prototyped and assembled using numerous strategies and materials but some fundamental trends are that these devices typically need to be (1) sealed, (2) supplied with liquids, reagents and samples, and (3) often interconnected with electrical or microelectronic components. In general, closing and connecting to the outside world these miniature labs remain a challenge irrespectively of the type of application pursued. Here, we review methods for sealing and connecting LOC devices using standard approaches as well as recent state-of-the-art methods. This review provides easy-to-understand examples and targets the microtechnology/engineering community as well as researchers in the life sciences. | Lab-on-a-chip devices: How to close and plug the lab? |
S0167931715001203 | We used two methods, namely stamping and printing, to transfer arrays of epitaxial gallium phosphide (GaP) nanowires from their growth substrate to a soft, biodegradable layer of polycaprolactone (PCL). Using the stamping method resulted in a very inhomogeneous surface topography with a wide distribution of transferred nanowire lengths, whereas using the printing method resulted in a homogeneous substrate topography over several mm². PC12 cells were cultured on the hybrid nanowire-PCL substrates realized using the printing method and exhibited increased attachment on these substrates, compared to the original nanowire-semiconductor substrate. Transferring nanowires onto PCL substrates is promising for implanting nanowires in vivo, with possibly reduced inflammation compared to when hard semiconductor substrates are implanted together with the nanowires. The nanowire-PCL hybrid substrates could also be used as biocompatible cell culture substrates. Finally, using nanowires on PCL substrates would make it possible to recycle the expensive GaP substrate and repeatedly grow nanowires on the same substrate. | Transfer of vertical nanowire arrays on polycaprolactone substrates for biological applications |
S0167931715002129 | Solid-on-liquid deposition (SOLID) techniques are of great interest to the MEMS and NEMS (Micro- and Nano Electro Mechanical Systems) community because of potential applications in biomedical engineering, on-chip liquid trapping, tunable micro-lenses, and replacements of gate oxides. However, depositing solids on liquid with subsequent hermetic sealing is difficult because liquids tend to have a lower density than solids. Furthermore, current systems seen in nature lack thermal, mechanical or chemical stability. Therefore, it is not surprising that liquids are not ubiquitous as functional layers in MEMS and NEMS. However, SOLID techniques have the potential to be harnessed and controlled for such systems because the gravitational force is negligible compared to surface tension, and therefore, the solid molecular precursors that typically condense on a liquid surface will not sediment into the fluid. In this review we summarize recent research into SOLID, where nucleation and subsequent cross-linking of solid precursors results in thin film growth on a liquid substrate. We describe a large variety of thin film deposition techniques such as thermal evaporation, sputtering, plasma enhanced chemical vapor deposition used to coat liquid substrates. Surprisingly, all attempts at deposition to date have been successful and a stable solid layer on a liquid can always be detected. However, all layers grown by non-equilibrium deposition processes showed a strong presence of wrinkles, presumably due to residual stress. In fact, the only example where no stress was observed is the deposition of parylene layers (poly-para-xylylene, PPX). Using all the experimental data analyzed to date we have been able to propose a simple model that predicts that the surface property of liquids at molecular level is influenced by cohesion forces between the liquid molecules. Finally, we conclude that the condensation of precursors from the gas phase is rather the rule and not the exception for SOLID techniques. | Solid on liquid deposition, a review of technological solutions |
S0167931715002154 | Defect assisted electron transfer processes in metal-oxide materials play a key role in a diverse range of effects of relevance to microelectronic applications. However, extracting the key parameters governing such processes experimentally is a challenging problem. Here, we present a first principles based investigation into electron transfer between oxygen vacancy defects in the high-k dielectric material HfO2. By calculating electron transfer parameters for defects separated by up to 15Å we show that there is a crossover from coherent to incoherent electron transfer at about 5Å. These results can provide invaluable input into numerical simulations of electron transfer, which can be used to model and understand important effects such as trap-assisted tunneling in advanced logic and memory devices. | First principles modeling of electron tunneling between defects in m-HfO2 |
S0167931715002488 | Transfer printing has been reported recently as a viable route for electronics on flexible substrates. The method involves transferring micro-/macrostructures such as wires or ultra-thin chips from Si (silicon) wafers to the flexible substrates by using elastomeric transfer substrates such as poly(dimethylsiloxane) (PDMS). A major challenge in this process is posed by the residues of PDMS, which are left over on the Si surface after the nanostructures have been transferred. As an insulator, PDMS residues make it difficult to realize metal connections and hence pose a challenge to using nanostructures as the building blocks for active electronics. This paper presents a method for PDMS residues-free transfer of Si micro-/macrostructures to flexible substrates such as polyimide (PI). The PDMS residues are removed from the Si surface by immersing the transferred structures in a solution of a quaternary ammonium fluoride such as TBAF (Tetrabutylammonium Fluoride) and a non-hydroxylic aprotic solvent such as PMA (propylene glycol methyl ether acetate). The residues are removed at a rate (∼1.5μm/min) which is about five times faster than the traditional dry etch methods. Unlike traditional alternatives, the presented method removes PDMS without attacking the flexible PI substrates. | PDMS residues-free micro/macrostructures on flexible substrates
S0167931715002920 | The impact of subjecting an n-GaN surface to an in-situ argon plasma in an atomic layer deposition (ALD) tool immediately before deposition of an Al2O3 dielectric film is assessed by frequency dependent evaluation of Al2O3/GaN MOSCAPs. In comparison with a control with no pre-treatment, the use of a 50 W argon plasma for 5 min reduced hysteresis from 0.25 V to 0.07 V, frequency dispersion from 0.31 V to 0.03 V and minimum interface state density (Dit) as determined by the conductance method from 6.8×10^12 cm−2 eV−1 to 5.05×10^10 cm−2 eV−1. | A study of the impact of in-situ argon plasma treatment before atomic layer deposition of Al2O3 on GaN based metal oxide semiconductor capacitor
S0167931715300022 | The properties of electroless films produced from a bath designed for horizontal plating, the preferred technology for high production volumes in printed circuit board metallization, are reported. Film thickness, substrate type and electrolyte temperature were varied. Formation of a continuous layer of copper film is correlated with a change in the visual and spectroscopic appearance. Grain orientation is random in thin films and a <110> texture develops with increasing thickness. The plating solution contains Cu and Ni ions. Nickel co-deposits in copper films in the form of Ni hydroxide, and its concentration decreases from about 6% in the vicinity of the substrate to about 1% at the film surface. Film stress and strain were measured by substrate curvature and X-ray diffraction, respectively. Both stress and strain decrease as the film thickness increases. Stress remains tensile throughout, both during deposition and during relaxation, promoting film adhesion by preventing blisters. After deposition, stress relaxes first towards compressive and then towards tensile. The stress, the stress relaxation and the Ni concentration are high at the base of the film. We attribute this to the higher volume fraction of grain boundaries (smaller grain size) in this region. | Properties of electroless Cu films optimized for horizontal plating as a function of deposit thickness
S0167931715301155 | We present an improved nanofabrication method of high aspect ratio tungsten structures for use in high efficiency nanofocusing hard X-ray zone plates. A ZEP 7000 electron beam resist layer used for patterning is cured by a second, much larger electron dose after development. The curing step improves pattern transfer fidelity into a chromium hard mask by reactive ion etching using Cl2/O2 chemistry. The pattern can then be transferred into an underlying tungsten layer by another reactive ion etching step using SF6/O2. A 630 nm-thick tungsten zone plate with smallest line width of 30 nm was fabricated using this method and characterized. At 8.2 keV photon energy the device showed an efficiency of 2.2% with a focal spot size at the diffraction limit, measured at Diamond Light Source I-13-1 beamline. | Improved tungsten nanofabrication for hard X-ray zone plates |
S0167931715301167 | We present a new chlorine-free dry etching process which was used to successfully etch indium antimonide grown on gallium arsenide substrates while keeping the substrate temperature below 150 °C. By use of a reflowed photoresist mask a sidewall with 60 degree positive slope was achieved, whereas a nearly vertical one was obtained when hard masks were used. Long etch tests demonstrated the non-selectivity of the process by etching through the entire multi-layer epitaxial structure. Electrical and optical measurements on devices fabricated both by wet and dry etch techniques provided similar results, proving that the dry etch process does not cause damage to the material. This technique has a great potential to replace the standard wet etching techniques used for fabrication of indium antimonide devices with a non-damaging low temperature plasma process. | Development of InSb dry etch for mid-IR applications |
S0167931716300387 | An optical lithography technique has been applied to fabricate devices from atomically thin sheets, exfoliated mechanically from kish graphite, bulk MoS2 and WSe2. During the fabrication processes, the exfoliated graphene, few-layer MoS2 and WSe2 sheets have been patterned into specific shapes as required and metal contacts have been deposited on these two-dimensional sheets to make field effect devices with different structures. The key to the successful implementation of the technique is the appropriate alignment mark design, which solves the problem of aligning photomasks to exfoliated two-dimensional sheets of random location, orientation and irregular shape on the substrates. Raman characterization performed on the patterned two-dimensional sheets after the fabrication processes shows that few defects have been introduced during fabrication. Field effect has been observed from I–V characteristics with the highly doped silicon substrate as the back gate. The extracted field effect hole and electron mobilities of graphene are ~ 1010 cm2 V−1 s−1 and ~ 3550 cm2 V−1 s−1, respectively, and the field effect carrier mobilities of MoS2 and WSe2 are ~ 0.06 cm2 V−1 s−1 and ~ 0.03 cm2 V−1 s−1, respectively, which are comparable with experimental results of other reports. | Optical lithography technique for the fabrication of devices from mechanically exfoliated two-dimensional materials
S0167931716301241 | A novel negative tone molecular resist molecule featuring a tert-butyloxycarbonyl protected phenol malonate group bonded to 1,8-diazabicycloundec-7-ene is presented. The resist shows high-resolution capability in electron beam lithography at a range of beam energies. The resist demonstrated a sensitivity of 18.7 µC/cm2 at 20 kV. Dense features with a line width of 15 nm have been demonstrated at 30 kV, whilst a feature size of 12.5 nm was achieved for dense lines at 100 kV. | Performance of a high resolution chemically amplified electron beam resist at various beam energies
S0167931716301770 | High-aspect-ratio GaN-based nanostructures are of interest for advanced photonic crystal and core-shell devices. Nanostructures grown by a bottom-up approach are limited in terms of doping, geometry and shape which narrow their potential application areas. In contrast, high uniformity and a greater diversity of shape and design can be produced via a top-down etching approach. However, a detailed understanding of the role of etch process parameters is lacking for creating high-aspect ratio nanorods and nanopores. Here we report a systematic analysis on the role of temperature and pressure on the fabrication of nanorod and nanopore arrays in GaN. Our results show a threshold in the etch behaviour at a temperature of ~ 125 °C, which greatly enhances the verticality of the GaN nanorods, whilst the modification of the pressure enables a fine tuning of the nanorod profile. For nanopores we show that the use of higher temperatures at higher pressures enables the fabrication of nanopores with an undercut profile. Such a profile is important for controlling the optical field in photonic crystal-based devices. Therefore we expect the ability to create such nanostructures to form the foundation for new advanced LED designs. | Fabrication of high-aspect ratio GaN nanostructures for advanced photonic devices |
S0167947313000571 | An important problem in high-dimensional data analysis is determining whether sample points are uniformly distributed (i.e., exhibit complete spatial randomness) over some compact support, or rather possess some underlying structure (e.g., clusters or other nonhomogeneities). We propose two new graph-theoretic tests of uniformity which utilize the minimum spanning tree and a snake (a short non-branching acyclic path connecting each data point). We compare the powers of statistics based on these graphs with other statistics from the literature on an array of non-uniform alternatives in a variety of supports. For data in a hypercube, we find that test statistics based on the minimum spanning tree have superior power when the data displays regularity (e.g., results from an inhibition process). For arbitrarily shaped or unknown supports, we use run length statistics of the sequence of segment lengths along the snake’s path to test uniformity. The snake is particularly useful because no knowledge or estimation of the support is required to compute the test statistic, it can be computed quickly for any dimension, and it shows what kinds of non-uniformities are present. These properties make the snake unique among multivariate tests of uniformity since others only function on specific and known supports, have computational difficulties in high dimension, or have inconsistent type I error rates. | An empirical study of tests for uniformity in multidimensional data |
S0167947313001047 | It is often necessary to test for the presence of seasonal unit roots when working with time series data observed at intervals of less than a year. One of the most widely used methods for doing this is based on regressing the seasonal difference of the series over the transformations of the series by applying specific filters for each seasonal frequency. This provides test statistics with non-standard distributions. A generalisation of this method for any periodicity is presented and a response surface regressions approach is used to calculate the P -values of the statistics whatever the periodicity and sample size of the data. The algorithms are prepared with the Gretl open source econometrics package and two empirical examples are presented. | Numerical distribution functions for seasonal unit root tests |
S0167947313001266 | A new algorithm is proposed for OLS estimation of linear models with multiple high-dimensional category variables. It is a generalization of the within transformation to an arbitrary number of category variables. The approach, unlike other fast methods for solving such problems, provides a covariance matrix for the remaining coefficients. The article also sets out a method for solving the resulting sparse system, and the new scheme is shown, by some examples, to be comparable in computational efficiency to other fast methods. The method is also useful for transforming away groups of pure control dummies. A parallelized implementation of the proposed method has been made available as the R package lfe on CRAN. | OLS with multiple high dimensional category variables
S0167947313001576 | The authors consider a dynamic probit model where the coefficients follow a first-order Markov process. An exact Gibbs sampler for Bayesian analysis is presented for the model using the data augmentation approach and the forward filtering backward sampling algorithm for dynamic linear models. The authors discuss how their approach can be used for dynamic probit models as well as their generalizations, including Markov regressions and models with Student link functions. An approach is presented for comparing static and dynamic probit models as well as for Markov order selection in these classes of dynamic models. The developed approach is applied to real data. | Bayesian dynamic probit models for the analysis of longitudinal data
S0167947313002326 | The detection of change-points in heterogeneous sequences is a statistical challenge with applications across a wide variety of fields. In bioinformatics, a vast amount of methodology exists to identify an ideal set of change-points for detecting Copy Number Variation (CNV). While many efficient algorithms are currently available for finding the best segmentation of the data in CNV, relatively few approaches consider the important problem of assessing the uncertainty of the change-point location. Asymptotic and stochastic approaches exist but often require additional model assumptions to speed up the computations, while exact methods generally have quadratic complexity which may be intractable for large data sets of tens of thousands of points or more. A hidden Markov model, with constraints specifically chosen to correspond to a segment-based change-point model, provides an exact method for obtaining the posterior distribution of change-points with linear complexity. The methods are implemented in the R package postCP, which uses the results of a given change-point detection algorithm to estimate the probability that each observation is a change-point. The results include an implementation of postCP on a publicly available CNV data set (n = 120). Due to its frequentist framework, postCP obtains less conservative confidence intervals than previously published Bayesian methods, but with linear complexity instead of quadratic. Simulations showed that postCP provided comparable loss to a Bayesian MCMC method when estimating posterior means, specifically when assessing larger scale changes, while being more computationally efficient. On another high-resolution CNV data set (n = 14,241), the implementation processed information in less than one second on a mid-range laptop computer. | Fast estimation of posterior probabilities in change-point analysis through a constrained hidden Markov model
S0167947313002600 | Categorization is often needed for clinical decision making when dealing with diagnostic (prognostic) biomarkers and a binary outcome (true disease status). Four common methods used to dichotomize a continuous biomarker X are compared: the minimum P-value, the Youden index, the concordance probability and the point closest-to-(0, 1) corner in the ROC plane. These methods are compared from a theoretical point of view under Normal or Gamma biomarker distributions, showing whether or not they lead to the identification of the same true cut-point. The performance of the corresponding non-parametric estimators is then compared by simulation. Two motivating examples are presented. In all simulation scenarios, the point closest-to-(0, 1) corner in the ROC plane and concordance probability approaches outperformed the other methods. Both these methods showed good performance in the estimation of the optimal cut-point of a biomarker. However, when methods do not lead to the same optimal cut-point, scientists should focus on which quantity they truly want to estimate, and use it in practice. In addition, to improve communicability, the Youden index or the concordance probability associated with the estimated cut-point could be reported to summarize the associated classification accuracy. The use of the minimum P-value approach for cut-point finding is strongly discouraged because its objective function is computed under the null hypothesis of absence of association between the true disease status and X. This is in contrast with the presence of some discrimination potential of X, which is what leads to the dichotomization issue. | Finding the optimal cut-point for Gaussian and Gamma distributed biomarkers
S0167947313002818 | Challenges in the analyses of growth mixture models include missing data, outliers, estimation, and model selection. Four non-ignorable missingness models to recover the information due to missing data, and three robust models to reduce the effect of non-normality are proposed. A full Bayesian method is implemented by means of data augmentation algorithm and Gibbs sampling procedure. Model selection criteria are also proposed in the Bayesian context. Simulation studies are then conducted to evaluate the performances of the models, the Bayesian estimation method, and selection criteria under different situations. The application of the models is demonstrated through the analysis of education data on children’s mathematical ability development. The models can be widely applied to longitudinal analyses in medical, psychological, educational, and social research. | Robust growth mixture models with non-ignorable missingness: Models, estimation, selection, and application |
S0167947313002855 | For constructing simultaneous confidence intervals for the ratios of means of several lognormal distributions, we propose a new parametric bootstrap method, which is different from an inaccurate parametric bootstrap method previously considered in the literature. Our proposed method is conceptually simpler than other proposed methods, which are based on the concepts of generalized pivotal quantities and fiducial generalized pivotal quantities. Also, our extensive simulation results indicate that our proposed method consistently performs better than other methods: its coverage probability is close to the nominal confidence level and the resulting intervals are typically shorter than the intervals produced by other methods. | Simultaneous confidence intervals for ratios of means of several lognormal distributions: A parametric bootstrap approach |
S0167947313003381 | In several empirical applications analyzing customer-by-product choice data, it may be relevant to partition individuals with similar purchase behavior into homogeneous segments. Moreover, should individual- and/or product-specific covariates be available, their potential effects on the probability of choosing certain products may also be investigated. A model for joint clustering of statistical units (customers) and variables (products) is proposed in a mixture modeling framework, and an appropriate EM-type algorithm for ML parameter estimation is presented. The model can be easily linked with similar proposals that have appeared in various contexts, such as co-clustering of gene expression data and clustering of words and documents in web-mining data analysis. | Model based clustering of customer choice data
S0167947313003678 | Simulation-based forecasting methods for a non-Gaussian noncausal vector autoregressive (VAR) model are proposed. In noncausal autoregressions the assumption of non-Gaussianity is needed for reasons of identifiability. Unlike in conventional causal autoregressions the prediction problem in noncausal autoregressions is generally nonlinear, implying that its analytical solution is unfeasible and, therefore, simulation or numerical methods are required in computing forecasts. It turns out that different special cases of the model call for different simulation procedures. Monte Carlo simulations demonstrate that gains in forecasting accuracy are achieved by using the correct noncausal VAR model instead of its conventional causal counterpart. In an empirical application, a noncausal VAR model comprised of U.S. inflation and marginal cost turns out superior to the best-fitting conventional causal VAR model in forecasting inflation. | Forecasting with a noncausal VAR model |
S0167947313004465 | In the application of the popular maximum likelihood method to factor analysis, the number of factors is commonly determined through a two-stage procedure, in which stage 1 performs parameter estimation for a set of candidate models and then stage 2 chooses the best according to certain model selection criterion. Usually, to obtain satisfactory performance, a large set of candidates is used and this procedure suffers a heavy computational burden. To overcome this problem, a novel one-stage algorithm is proposed in which parameter estimation and model selection are integrated in a single algorithm. This is obtained by maximizing the criterion with respect to model parameters and the number of factors jointly, rather than separately. The proposed algorithm is then extended to accommodate incomplete data. Experiments on a number of complete/incomplete synthetic and real data reveal that the proposed algorithm is as effective as the existing two-stage procedure while being much more computationally efficient, particularly for incomplete data. | Automated learning of factor analysis with complete and incomplete data |
S0167947314000140 | The g-and-h distributional family is generated from a relatively simple transformation of the standard normal and can approximate a broad spectrum of distributions. Consequently, it is easy to use in simulation studies and has been applied in multiple areas, including risk management, stock return analysis and missing data imputation studies. A rapidly convergent quantile based least squares (QLS) estimation method to fit the g-and-h distributional family parameters is proposed and then extended to a robust version. The robust version is then used as a more general outlier detection approach. Several properties of the QLS method are derived and comparisons made with competing methods through simulation. Real data examples of microarray and stock index data are used as illustrations. | Robust estimation of the parameters of g-and-h distributions, with applications to outlier detection
S0167947314000279 | In non-parametric regression analysis the advantage of frames with respect to classical orthonormal bases is that they can furnish an efficient representation of a broader class of functions. For example, fast oscillating functions such as audio, speech, sonar, radar, EEG and stock market signals are much better represented by a frame with similar oscillating characteristics than by a classical orthonormal basis. In this respect, a new frame based shrinkage estimator is derived as the Empirical Regularized version of the optimal Shrinkage estimator generalized to the frame operator. An analytic expression is furnished, leading to an efficient implementation. Results on standard and real test functions are shown. | A frame based shrinkage procedure for fast oscillating functions
S0167947314000462 | We propose a frequency domain generalized likelihood ratio test for testing nonstationarity in time series. The test is constructed in the frequency domain by comparing the goodness of fit in the log-periodogram regression under the varying coefficient fractionally exponential models. Under such a locally stationary specification, the proposed test is capable of detecting dynamic changes of short-range and long-range dependences in a regression framework. The asymptotic distribution of the proposed test statistic is known under the null stationarity hypothesis, and its finite sample distribution can be approximated by bootstrap. Numerical results show that the proposed test has good power against a wide range of locally stationary alternatives. | A frequency domain test for detecting nonstationary time series |
S0167947314000577 | The latent class model provides an important platform for jointly modeling mixed-mode data—i.e., discrete and continuous data with various parametric distributions. Multiple mixed-mode variables are used to cluster subjects into latent classes. While the mixed-mode latent class analysis is a powerful tool for statisticians, few studies are focused on assessing the contribution of mixed-mode variables in discriminating latent classes. Novel measures are derived for assessing both absolute and relative impacts of mixed-mode variables in latent class analysis. Specifically, the expected posterior gradient and the Kolmogorov variation of the posterior distribution, as well as related properties are studied. Numerical results are presented to illustrate the measures. | Variable assessment in latent class models |
S0167947314000619 | A judgment post-stratified (JPS) sample is used in order to develop statistical inference for population quantiles and variance. For the population quantile of order p, a test is constructed, an estimator is developed, and a distribution-free confidence interval is provided. An unbiased estimator for the population variance is also derived. For finite sample sizes, it is shown that the proposed inferential procedures for quantiles are more efficient than corresponding simple random sampling (SRS) procedures, but less efficient than corresponding ranked set sampling (RSS) procedures. The variance estimator is less efficient, as efficient as, or more efficient than a simple random sample variance estimator for small, moderately small, and large sample sizes, respectively. Furthermore, it is shown that JPS sample quantile estimators and tests are asymptotically equivalent to RSS estimators and tests in their efficiency comparison. | Statistical inference for population quantiles and variance in judgment post-stratified samples
S0167947314000899 | A mixture of skew-t factor analyzers is introduced as well as a family of mixture models based thereon. The particular formulation of the skew-t distribution used arises as a special case of the generalized hyperbolic distribution. Like their Gaussian and t-distribution analogues, mixtures of skew-t factor analyzers are very well-suited for model-based clustering of high-dimensional data. The alternating expectation–conditional maximization algorithm is used for model parameter estimation and the Bayesian information criterion is used for model selection. The models are applied to both real and simulated data, giving superior clustering results when compared to a well-established family of Gaussian mixture models. | Mixtures of skew-t factor analyzers
S0167947314000929 | A methodology based on adaptive likelihood ratios (ALRs) for the detection of emerging disease clusters is presented. The martingale structure of the regular likelihood ratio is preserved by the ALR. The upper limit for the false alarm rate of the proposed method depends only on the quantity of evaluated cluster candidates. Thus Monte Carlo simulations are not required to validate the procedures’ statistical significance, allowing the construction of a fast computational algorithm to detect clusters. The number of evaluated clusters is also significantly reduced, through the use of an adaptive approach to prune many unpromising clusters. This further increases the computational speed. Performance is evaluated through simulations to measure the average detection delay and the probability of correct cluster detection. We present applications for thyroid cancer in New Mexico and hanseniasis in children in the Brazilian Amazon. | Adaptive likelihood ratio approaches for the detection of space–time disease clusters |
S0167947314001467 | Multiple-testing problems have received much attention. Different strategies have been considered in order to deal with this problem. The false discovery rate (FDR) is, probably, the most studied criterion. On the other hand, the sequential goodness of fit (SGoF) is a recently proposed approach. Most of the developed procedures are based on the independence among the involved tests; however, in spite of being a reasonable proviso in some frameworks, independence is not realistic for a number of practical cases. Therefore, one of the main problems in developing appropriate methods is, precisely, the effect of the dependence among the different tests on decision making. The consequences of correlation on the distribution of z-values in the general multiple-testing problem are explored. Some different algorithms are provided in order to approximate the distribution of the expected rejection proportions. The performance of the proposed methods is evaluated in a simulation study in which, for comparison purposes, the Benjamini and Hochberg method to control the FDR, the Lehmann and Romano procedure to control the tail probability of the proportion of false positives (TPPFP), and the Beta–Binomial SGoF procedure are considered. Three different dependence structures are considered. As usual, for a better understanding of the problem, several practical cases are also studied. | On correlated z-values distribution in hypothesis testing
S0167947314001753 | Limited information statistics have been recommended as the goodness-of-fit measures in sparse 2^k contingency tables, but the p-values of these test statistics are computationally difficult to obtain. A Bayesian model diagnostic tool, Relative Entropy–Posterior Predictive Model Checking (RE–PPMC), is proposed to assess the global fit for latent trait models in this paper. This approach utilizes the relative entropy (RE) to resolve possible problems in the original PPMC procedure based on the posterior predictive p-value (PPP-value). Compared with the typical conservatism of PPP-value, the RE value measures the discrepancy effectively. Simulated and real data sets with different item numbers, degree of sparseness, sample sizes, and factor dimensions are studied to investigate the performance of the proposed method. The estimates of univariate information and difficulty parameters are found to be robust with dual characteristics, which produce practical implications for educational testing. Compared with parametric bootstrapping, RE–PPMC is much more capable of evaluating the model adequacy. | A novel relative entropy–posterior predictive model checking approach with limited information statistics for latent trait models in sparse 2^k contingency tables
S0167947314002102 | An algorithm for the evaluation of the exact Gaussian likelihood of an r-dimensional vector autoregressive-moving average (VARMA) process of order (p, q), with time-dependent coefficients, including a time dependent innovation covariance matrix, is proposed. The elements of the matrices of coefficients and those of the innovation covariance matrix are deterministic functions of time and assumed to depend on a finite number of parameters. These parameters are estimated by maximizing the Gaussian likelihood function. The advantage of this approach is that the Gaussian likelihood function can be computed exactly and efficiently. The algorithm is based on the Cholesky decomposition method for block-band matrices. It is shown that the number of operations as a function of p, q and n, the size of the series, is barely doubled with respect to a VARMA model with constant coefficients. A detailed description of the algorithm followed by a data example is provided. | The exact Gaussian likelihood estimation of time-dependent VARMA models
S0167947314002126 | A vital extension to partial least squares (PLS) path modeling is introduced: consistency. While maintaining all the strengths of PLS, the consistent version provides two key improvements. Path coefficients, parameters of simultaneous equations, construct correlations, and indicator loadings are estimated consistently. The global goodness-of-fit of the structural model can also now be assessed, which makes PLS suitable for confirmatory research. A Monte Carlo simulation illustrates the new approach and compares it with covariance-based structural equation modeling. | Consistent and asymptotically normal PLS estimators for linear structural equations |
S0167947314002291 | Multivariate adaptive regression splines (MARS) provide a flexible statistical modeling method that employs forward and backward search algorithms to identify the combination of basis functions that best fits the data and simultaneously conduct variable selection. In optimization, MARS has been used successfully to estimate the unknown functions in stochastic dynamic programming (SDP), stochastic programming, and a Markov decision process, and MARS could be potentially useful in many real world optimization problems where objective (or other) functions need to be estimated from data, such as in surrogate optimization. Many optimization methods depend on convexity, but a non-convex MARS approximation is inherently possible because interaction terms are products of univariate terms. In this paper a convex MARS modeling algorithm is described. In order to ensure MARS convexity, two major modifications are made: (1) coefficients are constrained, such that pairs of basis functions are guaranteed to jointly form convex functions and (2) the form of interaction terms is altered to eliminate the inherent non-convexity. Finally, MARS convexity can be achieved by the fact that the sum of convex functions is convex. Convex-MARS is applied to inventory forecasting SDP problems with four and nine dimensions and to an air quality ground-level ozone problem. | A convex version of multivariate adaptive regression splines |
S0167947314002333 | Estimation of longitudinal models of relationship status between all pairs of individuals (dyads) in social networks is challenging due to the complex inter-dependencies among observations and lengthy computation times. To reduce the computational burden of model estimation, a method is developed that subsamples the “always-null” dyads in which no relationships develop throughout the period of observation. The informative sampling process is accounted for by weighting the likelihood contributions of the observations by the inverses of the sampling probabilities. This weighted-likelihood estimation method is implemented using Bayesian computation and evaluated in terms of its bias, efficiency, and speed of computation under various settings. Comparisons are also made to a full information likelihood-based procedure that is only feasible to compute when limited follow-up observations are available. Calculations are performed on two real social networks of very different sizes. The easily computed weighted-likelihood procedure closely approximates the corresponding estimates for the full network, even when using low sub-sampling fractions. The fast computation times make the weighted-likelihood approach practical and able to be applied to networks of any size. | Using retrospective sampling to estimate models of relationship status in large longitudinal social networks |
S0167947314002345 | The lifetime of subjects in reliability and survival analysis in the presence of several causes of failure (i.e., competing risks) has attracted attention in the literature. Most studies have simplified the computations by assuming that the causes are independent, even though this assumption often does not hold. Dependent competing risks under a progressive hybrid censoring scheme are investigated using a Marshall–Olkin bivariate Weibull distribution. Maximum likelihood and approximated maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the maximum likelihood estimators are used to construct approximate confidence intervals using the observed Fisher information matrix. Based on a simulation and real applications, it is illustrated that when a parametric distributional assumption is nearly true, a close approximation could be achieved by deliberately censoring the number of subjects and the study duration using Type-II progressive hybrid censoring, which might help to save time and money in research studies. | Analysis of dependent competing risks in the presence of progressive hybrid censoring using Marshall–Olkin bivariate Weibull distribution
S0167947314002382 | This paper presents a new efficient and robust smooth-threshold generalized estimating equations procedure for generalized linear models (GLMs) with longitudinal data. The proposed method is based on a bounded exponential score function and leverage-based weights to achieve robustness against outliers in both the response and the covariate domain. Our motivation for the new variable selection procedure is that it enables us to achieve better robustness and efficiency by introducing an additional tuning parameter γ which can be automatically selected using the observed data. Moreover, its performance is near optimal and superior to some recently developed variable selection methods. Under some regularity conditions, the resulting estimator possesses consistency in variable selection and the oracle property in estimation. Finally, simulation studies and a detailed real data analysis are carried out to assess and illustrate the finite sample performance, which show that the proposed method works better than other existing methods, in particular, when many outliers are included. | An efficient and robust variable selection method for longitudinal generalized linear models
S0167947314002448 | A Gini-based statistical test for a unit root is suggested. This test is based on the well-known Dickey–Fuller test, where the ordinary least squares (OLS) regression is replaced by the semi-parametric Gini regression in modeling the AR process. A residual-based bootstrap is used to find critical values. The Gini methodology is a rank-based methodology that takes into account both the variate values and the ranks. Therefore, it provides robust estimators that are rank-based, while avoiding loss of information. Furthermore, the Gini methodology relies on first-order moment assumptions, which validates its use for a wide range of distributions. Simulation results validate the Gini-based test and indicate its superiority in some design settings in comparison to other available procedures. The Gini-based test opens the door for further developments such as a Gini-based cointegration test. | A Gini-based unit root test |
S0167947314002473 | A resolution of the Fisher effect puzzle in terms of statistical inference is attempted. Motivation stems from empirical evidence of time-varying coefficients in the data generating process of both the interest rates and inflation rates for 19 OECD countries. These time-varying dynamics crucially affect the behaviour of all the co-integration estimators considered, especially in small samples. When employing simulated critical values instead of asymptotic ones, the results provide ample evidence supporting the existence of a long-run Fisher effect in which interest rates move one-to-one with inflation rates in all countries under scrutiny except for Ireland and Switzerland. | The Fisher effect in the presence of time-varying coefficients |
S0167947314002953 | The tri-linear PLS2 iterative procedure, an algorithm pertaining to the NIPALS framework, is considered. It was previously proposed as a first stage to estimate parameters of the multi-way PLS regression method. It is shown that the tri-linear PLS2 procedure is convergent. The procedure generates a sequence of parameters (scores and loadings), which can be described as increasing or decreasing two specific criteria. Furthermore, a hidden tensor is described allowing tri-linear PLS2 to search its best rank-one approximation. This tensor highlights the link between multi-way PLS regression and the well-known PARAFAC model. The parameters of the multi-way PLS regression method can be computed using three alternative procedures. | Multi-way PLS regression: Monotony convergence of tri-linear PLS2 and optimality of parameters |
S0167947314003338 | The intraclass correlation coefficient (ICC) in a two-way analysis of variance is a ratio involving three variance components. Two recently developed methods for constructing confidence intervals (CI’s) for the ICC are the Generalized Confidence Interval (GCI) and Modified Large Sample (MLS) methods. The resulting intervals have been shown to maintain nominal coverage. But methods for determining sample size for GCI and MLS intervals are lacking. Sample size methods that guarantee control of the mean width for GCI and MLS intervals are developed. In the process, two variance reduction methods are employed, called dependent conditioning and inverse Rao-Blackwellization. Asymptotic results provide lower bounds for mean CI widths, and show that MLS and GCI widths are asymptotically equivalent. Simulation studies are used to investigate the new methods. A real data example is used and application issues discussed. The new methods are shown to result in adequate sample size estimates, the asymptotic estimates are accurate, and the variance reduction techniques are effective. A sample size program is developed; the R program can be downloaded at http://dobbinuga.com. Future extensions of these results are discussed. | Sample size methods for constructing confidence intervals for the intra-class correlation coefficient
S0167947314003399 | In images with low contrast-to-noise ratio (CNR), the information gain from the observed pixel values can be insufficient to distinguish foreground objects. A Bayesian approach to this problem is to incorporate prior information about the objects into a statistical model. A method for representing spatial prior information as an external field in a hidden Potts model is introduced. This prior distribution over the latent pixel labels is a mixture of Gaussian fields, centred on the positions of the objects at a previous point in time. It is particularly applicable in longitudinal imaging studies, where the manual segmentation of one image can be used as a prior for automatic segmentation of subsequent images. The method is demonstrated by application to cone-beam computed tomography (CT), an imaging modality that exhibits distortions in pixel values due to X-ray scatter. The external field prior results in a substantial improvement in segmentation accuracy, reducing the mean pixel misclassification rate for an electron density phantom from 87% to 6%. The method is also applied to radiotherapy patient data, demonstrating how to derive the external field prior in a clinical context. | An external field prior for the hidden Potts model with application to cone-beam computed tomography |
S0167947314003569 | Retrospective clinical datasets are often characterized by a relatively small sample size and many missing data. In this case, a common way of handling the missingness consists in discarding from the analysis patients with missing covariates, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing complete-data methods to be applied later on. Moreover, methodologies for data imputation might depend on the particular purpose and might achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding similar results to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied on the completed dataset. The Bayesian network is directly learned from the incomplete data using a structural expectation–maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is due to the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits). | Bayesian network data imputation with application to survival tree analysis
S0167947314003570 | A mixture of latent trait models with common slope parameters for model-based clustering of high-dimensional binary data, a data type for which few established methods exist, is proposed. Recent work on clustering of binary data, based on a d -dimensional Gaussian latent variable, is extended by incorporating common factor analyzers. Accordingly, this approach facilitates a low-dimensional visual representation of the clusters. The model is further extended by the incorporation of random block effects. The dependencies in each block are taken into account through block-specific parameters that are considered to be random variables. A variational approximation to the likelihood is exploited to derive a fast algorithm for determining the model parameters. Real and simulated data are used to demonstrate this approach. | Model based clustering of high-dimensional binary data |
S0167947315000171 | Model-based clustering associates each component of a finite mixture distribution to a group or cluster. Therefore, an underlying implicit assumption is that a one-to-one correspondence exists between mixture components and clusters. In applications with multivariate continuous data, finite mixtures of Gaussian distributions are typically used. Information criteria, such as BIC, are often employed to select the number of mixture components. However, a single Gaussian density may not be sufficient, and two or more mixture components could be needed to reasonably approximate the distribution within a homogeneous group of observations. A clustering method, based on the identification of high density regions of the underlying density function, is introduced. Starting with an estimated Gaussian finite mixture model, the corresponding density estimate is used to identify the cluster cores, i.e. those data points which form the core of the clusters. Then, the remaining observations are allocated to those cluster cores for which the probability of cluster membership is the highest. The method is illustrated using both simulated and real data examples, which show how the proposed approach improves the identification of non-Gaussian clusters compared to a fully parametric approach. Furthermore, it enables the identification of clusters which cannot be obtained by merging mixture components, and it can be straightforwardly extended to cases of higher dimensionality. | Identifying connected components in Gaussian finite mixture models for clustering |
S0167947315000559 | The asymmetry in the tail dependence between U.S. equity portfolios and the aggregate U.S. market is a well-established property. Given the limited number of observations in the tails of a joint distribution, standard non-parametric measures of tail dependence have poor finite-sample properties and generally reject the asymmetry in the tail dependence. A parametric model, based on a multivariate noncentral t distribution, is developed to measure and test asymmetry in tail dependence. This model allows different levels of tail dependence to be estimated depending on the distribution’s parameters and accommodates situations in which the volatilities or the correlations across returns are time varying. For most of the size, book-to-market, and momentum portfolios, the tail dependence with the market portfolio is significantly higher on the downside than on the upside. | Asymmetry in tail dependence in equity portfolios |
S0167947315000699 | Methods are introduced for the analysis of large sets of sleep study data (hypnograms) using a 5-state 20-transition-type structure defined by the American Academy of Sleep Medicine. Application of these methods to the hypnograms of 5598 subjects from the Sleep Heart Health Study provides the first analysis of sleep hypnogram data of such size and complexity in a community cohort with a range of sleep-disordered breathing severity; introduces a novel approach to compare 5-state (20-transition-type) to 3-state (6-transition-type) sleep structures to assess information loss from combining sleep state categories; extends current approaches of multivariate survival data analysis to clustered, recurrent event discrete-state discrete-time processes; and provides scalable solutions for the data analyses required by the case study. The analysis provides detailed new insights into the association between sleep-disordered breathing and sleep architecture. The example data and both R and SAS code are included in online supplementary materials. | Modeling sleep fragmentation in sleep hypnograms: An instance of fast, scalable discrete-state, discrete-time analyses
S0167947315000730 | Cross-validation methodologies have been widely used as a means of selecting tuning parameters in nonparametric statistical problems. In this paper we focus on a new method for improving the reliability of cross-validation. We implement this method in the context of the kernel density estimator, where one needs to select the bandwidth parameter so as to minimize the L2 risk. This method is a two-stage subsampling-extrapolation bandwidth selection procedure, which is realized by first evaluating the risk at a fictional sample size m (m ≤ sample size n) and then extrapolating the optimal bandwidth from m to n. This two-stage method can dramatically reduce the variability of the conventional unbiased cross-validation bandwidth selector. This simple first-order extrapolation estimator is equivalent to the rescaled “bagging-CV” bandwidth selector in Hall and Robinson (2009) if one sets the bootstrap size equal to the fictional sample size. However, our simplified expression for the risk estimator enables us to compute the aggregated risk without any bootstrapping. Furthermore, we developed a second-order extrapolation technique as an extension designed to improve the approximation of the true optimal bandwidth. To select the optimal fictional size m given a sample of size n, we propose a nested cross-validation methodology. Based on a simulation study, the proposed new methods show promising performance across a wide selection of distributions. In addition, we also investigated the asymptotic properties of the proposed bandwidth selectors. | Improving cross-validated bandwidth selection using subsampling-extrapolation techniques
S0167947315001437 | Multivariate methods often rely on a sample covariance matrix. The conventional estimators of a covariance matrix require complete data vectors on all subjects—an assumption that frequently cannot be met. For example, in many fields of the life sciences that utilize modern measuring technology, such as mass spectrometry, left-censored values caused by denoising the data are a commonplace phenomenon. Left-censored values are low-level concentrations that are considered too imprecise to be reported as a single number but known to exist somewhere between zero and the laboratory’s lower limit of detection. Maximum likelihood-based covariance matrix estimators that allow the presence of the left-censored values without substituting them with a constant or ignoring them completely are considered. The presented estimators efficiently use all the information available and thus, based on simulation studies, produce the least biased estimates compared to commonly used competing estimators. As the genuine maximum likelihood estimate can be computed quickly only in low dimensions, it is suggested to estimate the covariance matrix element-wise and then adjust the resulting covariance matrix to achieve positive semi-definiteness. It is shown that the new approach succeeds in decreasing the computation times substantially and still produces accurate estimates. Finally, as an example, a left-censored data set of toxic chemicals is explored. | Covariance matrix estimation for left-censored data
S0167947315001607 | A Moment Ratio estimator is proposed for an AR( p ) model of the errors in an OLS regression, that provides standard errors with far less median bias and confidence intervals with far better coverage than conventional alternatives. A unit root, and therefore the absence of cointegration, does not necessarily mean that a correlation between the variables is spurious. The estimator is applied to a quadratic trend model of real GDP. The rate of change of GDP growth is negative with finite standard error but is insignificant. The “output gap,” often used as a guide to monetary policy, has an infinite standard error and is therefore a statistical illusion. | Moment Ratio estimation of autoregressive/unit root parameters and autocorrelation-consistent standard errors |
S0167947315001747 | In the dependency rule mining, the goal is to discover the most significant statistical dependencies among all possible collapsed 2 × 2 contingency tables. Fisher’s exact test is a robust method to estimate the significance and it enables efficient pruning of the search space. The problem is that evaluating the required p -value can be very laborious and the worst case time complexity is O ( n ) , where n is the data size. The traditional solution is to approximate the significance with the χ 2 -measure, which can be estimated in a constant time. However, the χ 2 -measure can produce unreliable results (discover spurious dependencies but miss the most significant dependencies). Furthermore, it does not support efficient pruning of the search space. As a solution, a family of tight upper bounds for Fisher’s p is introduced. The new upper bounds are fast to calculate and approximate Fisher’s p -value accurately. In addition, the new approximations are not sensitive to the data size, distribution, or smallest expected counts like the χ 2 -based approximation. In practice, the execution time depends on the desired accuracy level. According to experimental evaluation, the simplest upper bounds are already sufficiently accurate for dependency rule mining purposes and they can be estimated in 0.004–0.1% of the time needed for exact calculation. For other purposes (testing very weak dependencies), one may need more accurate approximations, but even they can be calculated in less than 1% of the exact calculation time. | New upper bounds for tight and fast approximation of Fisher’s exact test in dependency rule mining |