Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, they take days or weeks to do things that could be done in minutes or hours, are often unable to reproduce their own work (much less the work of others), and have no idea how reliable their computational results are. This paper presents a set of good computing practices that every researcher can adopt regardless of their current level of technical skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources, from our daily lives, and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010.

Two years ago a group of researchers involved in Software Carpentry and Data Carpentry wrote a paper called "Best Practices for Scientific Computing". It was well received, but many novices found its litany of tools and techniques intimidating. Also, by definition, the "best" are a small minority; what practices are comfortably within reach for the "rest"? This paper therefore presents a set of "good enough" practices for scientific computing, i.e., a minimum set of tools and techniques that we believe every researcher can and should adopt. It draws inspiration from many sources, from our personal experience, and from the experiences of the thousands of people who have taken part in Software Carpentry and Data Carpentry workshops over the past six years.

Our intended audience is researchers who are working alone or with a handful of collaborators on projects lasting a few days to a few months, and who are ready to move beyond emailing themselves a spreadsheet named `results-updated-3-revised.xlsx` at the end of the workday. A practice is included in our list if large numbers of researchers use it, and large numbers of people are _still_ using it months after first trying it out. We include the second criterion because there is no point recommending something that people won't actually adopt. Many of our recommendations are for the benefit of the collaborator every researcher cares about most: their future self. Change is hard, and if researchers don't see benefits quickly enough to justify the pain, they will almost certainly switch back to their old way of doing things. This rules out many practices, such as code review, that we feel are essential for larger-scale development (Section [sec:omitted]).

We organize our recommendations into the following topics:

* Data management: saving both raw and intermediate forms; documenting all steps; creating tidy data amenable to analysis.
* Software: writing, organizing, and sharing scripts and programs used in an analysis.
* Collaboration: making it easy for existing and new collaborators to understand and contribute to a project.
* Project organization: organizing the digital artifacts of a project to ease discovery and understanding.
* Tracking changes: recording how various components of your project change over time.
* Manuscripts: writing manuscripts in a way that leaves an audit trail and minimizes manual merging of conflicts.

We are grateful to Arjun Raj (University of Pennsylvania), Steven Haddock (Monterey Bay Aquarium Research Institute), Stephen Turner (University of Virginia), Elizabeth Wickes (University of Illinois), and Garrett Grolemund (RStudio) for their feedback on early versions of this paper, to those who contributed during the outlining of the manuscript, and to everyone involved in Data Carpentry and Software Carpentry for everything they have taught us.

Data within a project may need to exist in various forms, ranging from what first arrives to what is actually used for the primary analyses. Our recommendations have two main themes. One is to work towards ready-to-analyze data incrementally, documenting both the intermediate data and the process. We also describe the key features of "tidy data", which can be a powerful accelerator for analysis.

1. *Save the raw data (1a).* Where possible, save data as originally generated (i.e., by an instrument or from a survey). It is tempting to overwrite raw data files with cleaned-up versions, but faithful retention is essential for re-running analyses from start to finish, for recovery from analytical mishaps, and for experimenting without fear. Consider changing file permissions to read-only or using spreadsheet protection features, so it is harder to damage raw data by accident or to hand-edit it in a moment of weakness.
Some data will be impractical to manage in this way. For example, you should avoid making local copies of large, stable repositories. In that case, record the exact procedure used to obtain the raw data, as well as any other pertinent information, such as an official version number or the date of download.

2. *Create the data you wish to see in the world (1b).* Create the raw dataset you _wish_ you had received. The goal here is to improve machine and human readability, but _not_ to do vigorous data filtering or add external information. Machine readability allows automatic processing using computer programs, which is important when others want to reuse your data. Specific examples of non-destructive transformations that we recommend at the beginning of analysis (a minimal cleanup script along these lines is sketched below):
_File formats_: Convert data from closed, proprietary formats to open, non-proprietary formats that ensure machine readability across time and computing setups. CSV is good for tabular data; JSON, YAML, or XML for non-tabular data such as graphs; and HDF5 for certain kinds of structured data.
_Variable names_: Replace inscrutable variable names and artificial data codes with self-explaining alternatives, e.g., rename variables called `name1` and `name2` to `personal_name` and `family_name`, recode the treatment variable from `1` vs. `2` to `untreated` vs. `treated`, and replace artificial codes for missing data, such as "-99", with `NA`, a code used in most programming languages to indicate that data is "not available".
_Filenames_: Store especially useful metadata as part of the filename itself, while keeping the filename regular enough for easy pattern matching. For example, a filename like `2016-05-alaska-b.csv` makes it easy for both people and programs to select by year or by location.
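As an illustration of recommendation 1b, the following minimal sketch performs this kind of non-destructive cleanup using the pandas library (one reasonable tool choice, not prescribed by this paper). The column names, codes, and output path are hypothetical; only the raw filename reuses the example above.

....
# clean_raw_survey.py: non-destructive cleanup of a raw data export (rule 1b).
# Column names and codes below are made up for illustration; assumes the
# data/ and results/ directories already exist.
import pandas as pd

RAW_FILE = "data/2016-05-alaska-b.csv"              # raw export, kept read-only
CLEAN_FILE = "results/2016-05-alaska-b-clean.csv"

# Read the raw table; treat the artificial code -99 as missing data (NA).
raw = pd.read_csv(RAW_FILE, na_values=[-99, "-99"])

clean = (
    raw
    # Replace inscrutable column names with self-explaining ones.
    .rename(columns={"name1": "personal_name",
                     "name2": "family_name",
                     "trt": "treatment"})
    # Recode the treatment variable from 1/2 to descriptive labels.
    .assign(treatment=lambda df: df["treatment"].map({1: "untreated",
                                                      2: "treated"}))
)

# Write an open, machine-readable copy; the raw file is never overwritten.
clean.to_csv(CLEAN_FILE, index=False)
....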
3. *Create analysis-friendly data (1c).* Analysis can be much easier if you are working with so-called "tidy" data. Two key principles are:
_Make each column a variable_: Don't cram two variables into one, e.g., "male_treated" should be split into separate variables for sex and treatment status. Store units in their own variable or in metadata, e.g., "3.4" instead of "3.4 kg".
_Make each row an observation_: Data often comes in a wide format because that facilitated data entry or human inspection. Imagine one row per field site and then columns for measurements made at each of several time points. Be prepared to gather such columns into a variable of measurements, plus a new variable for time point. Fig [fig:tidy] presents an example of such a transformation.

4. *Record all the steps used to process data (1d).* Data manipulation is as integral to your analysis as statistical modelling and inference. If you do not document this step thoroughly, it is impossible for you, or anyone else, to repeat the analysis.
The best way to do this is to write scripts for _every_ stage of data processing. This might feel frustratingly slow, but you will get faster with practice. The immediate payoff will be the ease with which you can re-do data preparation when new data arrives. You can also re-use data preparation logic in the future for related projects.
When scripting is not feasible, it is important to clearly document every manual action (what menu was used, what column was copied and pasted, what link was clicked, etc.). Often you can at least capture _what_ action was taken, if not the complete _why_. For example, choosing a region of interest in an image is inherently interactive, but you can save the region chosen as a set of boundary coordinates.

5. *Anticipate the need to use multiple tables (1e).* Raw data, even if tidy, is not necessarily complete. For example, the primary data table might hold the heart rate for individual subjects at rest and after a physical challenge, identified via a subject ID. Demographic variables, such as subject age and sex, are stored in a second table and will need to be brought in via merging or lookup. This will go more smoothly if subject ID is represented in a common format in both tables, e.g., always as "14025" versus "14,025" in one table and "014025" in another. It is generally wise to give each record or unit a unique, persistent key and to use the same names and codes when variables in two datasets refer to the same thing. (A sketch of gathering wide data and merging a second table follows this list.)

6. *Submit data to a reputable DOI-issuing repository so that others can access and cite it (1f).* Your data is as much a product of your research as the papers you write, and just as likely to be useful to others (if not more so). Sites such as Figshare, Dryad, and Zenodo allow others to find your work, use it, and cite it; we discuss licensing in Section [sec:collaboration] below. Follow your research community's standards for how to provide metadata. Note that there are two types of metadata: metadata about the dataset as a whole and metadata about the content within the dataset. If the audience is humans, write the metadata (the README file) for humans. If the audience includes automatic metadata harvesters, fill out the formal metadata and write a good README file for the humans.
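To make recommendations 1c and 1e concrete, here is a minimal sketch using pandas; the site names, measurement columns, and second table are invented for illustration and are not taken from the original text.

....
# tidy_and_merge.py: gather wide data into tidy form (rule 1c) and
# join a second table on a shared key (rule 1e). All names are hypothetical.
import pandas as pd

# Wide format: one row per field site, one column per time point.
wide = pd.DataFrame({
    "site": ["A", "B"],
    "t1":   [3.4, 2.9],
    "t2":   [3.8, 3.1],
})

# Gather the measurement columns into one variable plus a time-point variable.
tidy = wide.melt(id_vars="site", var_name="time_point", value_name="mass_kg")

# A second table with one row per site, merged in via the shared 'site' key.
site_info = pd.DataFrame({
    "site":      ["A", "B"],
    "elevation": [120, 340],
})
combined = tidy.merge(site_info, on="site", how="left")
print(combined)
....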
Taken in order, the recommendations above will produce intermediate data files with increasing levels of cleanliness and task-specificity. An alternative approach to data management would be to fold all data management tasks into a monolithic procedure for data analysis, so that intermediate data products are created "on the fly" and stored only in memory, not saved as distinct files. While the latter approach may be appropriate for projects in which very little data cleaning or processing is needed, we recommend the explicit creation and retention of intermediate products. Saving intermediate files makes it easy to re-run _parts_ of a data analysis pipeline, which in turn makes it less onerous to revisit and improve specific data processing tasks. Breaking a lengthy workflow into pieces makes it easier to understand, share, describe, and modify.

If you or your group are creating tens of thousands of lines of software for use by hundreds of people you have never met, you are doing software engineering. If you're writing a few dozen lines now and again, and are probably going to be its only user, you may not be doing engineering, but you can still make things easier on yourself by adopting a few key engineering practices. What's more, adopting these practices will make it easier for people to understand and (re)use your code. The core realization in these practices is that _readable_, _reusable_, and _testable_ are all side effects of writing _modular_ code, i.e., of building programs out of short, single-purpose functions with clearly-defined inputs and outputs.

1. *Place a brief explanatory comment at the start of every program (2a)*, no matter how short it is. That comment should include at least one example of how the program is used: remember, a good example is worth a thousand words. Where possible, the comment should also indicate reasonable values for parameters. An example of such a comment is shown below.

....
Synthesize image files for testing circularity estimation algorithm.

Usage: make_images.py -f fuzzing -n flaws -o output -s seed -v -w size

where:
-f fuzzing = fuzzing range of blobs (typically 0.0-0.2)
-n flaws   = p(success) for geometric distribution of # flaws/sample (e.g. 0.5-0.8)
-o output  = name of output file
-s seed    = random number generator seed (large integer)
-v         = verbose
-w size    = image width/height in pixels (typically 480-800)
....

2. *Decompose programs into functions (2b)* that are no more than one page (about 60 lines) long, do not use global variables (constants are OK), and take no more than half a dozen parameters. The key motivation here is to fit the program into the most limited memory of all: ours. Human short-term memory is famously incapable of holding more than about seven items at once. If we are to understand what our software is doing, we must break it into chunks that obey this limit, then create programs by combining these chunks.

3. *Be ruthless about eliminating duplication (2c).* Write and re-use functions instead of copying and pasting source code, and use data structures like lists rather than creating lots of variables called `score1`, `score2`, `score3`, etc. (a short sketch of this is shown below).
The easiest code to debug and maintain is code you didn't actually write, so *always search for well-maintained software libraries that do what you need (2d)* before writing new code yourself, and *test libraries before relying on them (2e)*.
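A minimal sketch of recommendation 2c: the duplicated variables and arithmetic are replaced by a list and a single reusable function. The variable names and the normalization step are invented for illustration.

....
# Instead of score1, score2, score3 ... and three copies of the same
# arithmetic, keep the values in a list and put the logic in one function.

def normalize(scores):
    """Scale a list of scores so they sum to 1. Assumes at least one score."""
    total = sum(scores)
    return [s / total for s in scores]

scores = [4.0, 7.5, 3.5]         # was: score1 = 4.0; score2 = 7.5; score3 = 3.5
print(normalize(scores))         # [0.266..., 0.5, 0.233...]

# The same function can now be reused for any number of scores.
print(normalize([1.0, 1.0]))     # [0.5, 0.5]
....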
4. *Give functions and variables meaningful names (2f)*, both to document their purpose and to make the program easier to read. As a rule of thumb, the greater the scope of a variable, the more informative its name should be: while it is acceptable to call the counter variable in a loop `i` or `j`, the major data structures in a program should _not_ have one-letter names.

*Tab completion*: Almost all modern text editors provide _tab completion_, so that typing the first part of a variable name and then pressing the tab key inserts the completed name of the variable. Employing this means that meaningful longer variable names are no harder to type than terse abbreviations.

5. *Make dependencies and requirements explicit (2g).* This is usually done on a per-project rather than per-program basis, i.e., by adding a file called something like `requirements.txt` to the root directory of the project, or by adding a "Getting started" section to the `README` file.

6. *Do not comment and uncomment sections of code to control a program's behavior (2h)*, since this is error-prone and makes it difficult or impossible to automate analyses. Instead, put if/else statements in the program to control what it does.

7. *Provide a simple example or test data set (2i)* that users (including yourself) can run to determine whether the program is working and whether it gives a known correct output for a simple known input. Such a "build and smoke test" is particularly helpful when supposedly-innocent changes are being made to the program, or when it has to run on several different machines, e.g., the developer's laptop and the department's cluster.

8. *Submit code to a reputable DOI-issuing repository (2j)* upon submission of the paper, just as you do with data. Your software is as much a product of your research as your papers, and should be as easy for people to credit. DOIs for software are provided by Figshare and Zenodo. (A sketch combining recommendations 2h and 2i follows this list.)
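The following sketch illustrates recommendations 2h and 2i together: behavior is controlled by command-line flags rather than by commenting code in and out, and a tiny built-in test checks a known input against a known output. The flag names and the toy computation are hypothetical.

....
# analyze.py: control behavior with arguments, not commented-out code (2h),
# and ship a simple smoke test (2i). Flag names and data are made up.
import argparse

def mean(values):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

def smoke_test():
    """Known input, known output: fails loudly if something basic broke."""
    assert mean([1.0, 2.0, 3.0]) == 2.0
    print("smoke test passed")

def main():
    parser = argparse.ArgumentParser(description="Toy analysis driver.")
    parser.add_argument("--test", action="store_true",
                        help="run the built-in smoke test and exit")
    parser.add_argument("--verbose", action="store_true",
                        help="print extra progress information")
    args = parser.parse_args()

    if args.test:                       # instead of uncommenting a test block
        smoke_test()
        return
    data = [4.0, 7.5, 3.5]              # placeholder for real input
    if args.verbose:
        print("analyzing", len(data), "values")
    print("mean:", mean(data))

if __name__ == "__main__":
    main()
....

Running `python analyze.py --test` exercises the smoke test without touching real data; `python analyze.py --verbose` runs the analysis with extra output.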
You may start working on projects by yourself or with a small group of collaborators you already know, but you should design your project to make it easy for new collaborators to join. These collaborators might be new grad students or postdocs in the lab, or they might be _you_, returning to a project that has been idle for some time. As summarized in studies of the barriers faced by newcomers to open source projects, you want to make it easy for people to set up a local workspace so that they _can_ contribute, help them find tasks so that they know _what_ to contribute, and make the contribution process clear so that they know _how_ to contribute. You also want to make it easy for people to give you credit for your work.

1. *Create an overview of your project (3a).* Have a short file in the project's home directory that explains the purpose of the project. This file (generally called `README`, `README.txt`, or something similar) should contain the project's title, a brief description, up-to-date contact information, and an example or two of how to run various cleaning or analysis tasks. It is often the first thing users of your project will look at, so make it explicit that you welcome contributors and point them to ways they can help.
You should also create a `CONTRIBUTING` file that describes what people need to do in order to get the project going and contribute to it, i.e., dependencies that need to be installed, tests that can be run to ensure that software has been installed correctly, and guidelines or checklists that your project adheres to.

2. *Create a shared public "to-do" list (3b).* This can be a plain text file called something like `notes.txt` or `todo.txt`, or you can use sites such as GitHub or Bitbucket to create a new _issue_ for each to-do item. (You can even add labels such as "low hanging fruit" to point newcomers at issues that are good starting points.) Whatever you choose, describe the items clearly so that they make sense to newcomers.

3. *Make the license explicit (3c).* Have a `LICENSE` file in the project's home directory that clearly states what license(s) apply to the project's software, data, and manuscripts. Lack of an explicit license does not mean there isn't one; rather, it implies the author is keeping all rights and others are not allowed to re-use or modify the material.
We recommend Creative Commons licenses for data and text, either CC-0 (the "No Rights Reserved" license) or CC-BY (the "Attribution" license, which permits sharing and reuse but requires people to give appropriate credit to the creators).
For software, we recommend a permissive license such as the MIT, BSD, or Apache license.

*What not to do*: We recommend _against_ the "no commercial use" variations of the Creative Commons licenses because they may impede some forms of re-use. For example, if a researcher in a developing country is being paid by her government to compile a public health report, she will be unable to include your data if the license says "non-commercial". We recommend permissive software licenses rather than the GNU General Public License (GPL) because it is easier to integrate permissively-licensed software into other projects; see chapter three of _Understanding Open Source and Free Software Licensing_.

4. *Make the project citable (3d)* by including a `CITATION` file in the project's home directory that describes how to cite this project as a whole, and where to find (and how to cite) any data sets, code, figures, and other artifacts that have their own DOIs. The example below shows the `CITATION` file for the EcoData Retriever; for an example of a more detailed `CITATION` file, see the one for the khmer project.

....
Please cite this work as:

Morris, B.D. and E.P. White. 2013. "The EcoData Retriever:
improving access to existing ecological data." PLOS ONE 8:e65848.
....

Organizing the files that make up a project in a logical and consistent directory structure will help you and others keep track of them. Our recommendations for doing this are drawn primarily from Noble's rules for organizing computational biology projects.

1. *Put each project in its own directory, which is named after the project (4a).* Like deciding when a chunk of code should be made a function, the ultimate goal of dividing research into distinct projects is to help you and others best understand your work. Some researchers create a separate project for each manuscript they are working on, while others group all research on a common theme, data set, or algorithm into a single project.
As a rule of thumb, divide work into projects based on the overlap in data and code files. If two research efforts share no data or code, they will probably be easiest to manage independently. If they share more than half of their data and code, they are probably best managed together, while if you are building tools that are used in several projects, the common code should probably be in a project of its own.

2. *Put text documents associated with the project in the `doc` directory (4b).* This includes files for manuscripts, documentation for source code, and/or an electronic lab notebook recording your experiments. Subdirectories may be created for these different classes of files in large projects.

3. *Put raw data and metadata in a `data` directory, and files generated during cleanup and analysis in a `results` directory (4c)*, where "generated files" includes intermediate results, such as cleaned data sets or simulated data, as well as final results such as figures and tables.
The `results` directory will _usually_ require additional subdirectories for all but the simplest projects. Intermediate files such as cleaned data, statistical tables, and final publication-ready figures or tables should be separated clearly by file naming conventions or placed into different subdirectories; those belonging to different papers or other publications should be grouped together.

4. *Put project source code in the `src` directory (4d).* `src` contains all of the code written for the project. This includes programs written in interpreted languages such as R or Python; those written in compiled languages like Fortran, C++, or Java; as well as shell scripts, snippets of SQL used to pull information from databases, and other code needed to regenerate the results.
This directory may contain two conceptually distinct types of files that should be distinguished either by clear file names or by additional subdirectories. The first type are files or groups of files that perform the core analysis of the research, such as data cleaning or statistical analyses. These files can be thought of as the "scientific guts" of the project.
The second type of file in `src` is controller or driver scripts that combine the core analytical functions with particular parameters and data input/output commands in order to execute the entire project analysis from start to finish.
A controller script for a simple project, for example, may read a raw data table, import and apply several cleanup and analysis functions from the other files in this directory, and create and save a numeric result. For a small project with one main output, a single controller script should be placed in the main `src` directory and distinguished clearly by a name such as "runall".

5. *Put external scripts or compiled programs in the `bin` directory (4e).* `bin` contains scripts that are brought in from elsewhere, and executable programs compiled from code in the `src` directory. Projects that have neither will not require `bin`.

*Scripts vs. programs*: We use the term "script" to mean "something that is executed directly as-is", and "program" to mean "something that is explicitly compiled before being used". The distinction is more one of degree than kind (libraries written in Python are actually compiled to bytecode as they are loaded, for example), so one other way to think of it is "things that are edited directly" and "things that are not".

6. *Name all files to reflect their content or function (4f).* For example, use names such as `bird_count_table.csv`, `manuscript.md`, or `sightings_analysis.py`. Do _not_ use sequential numbers (e.g., `result1.csv`, `result2.csv`) or a location in a final manuscript (e.g., `fig_3_a.png`), since those numbers will almost certainly change as the project evolves.

The diagram below provides a concrete example of how a simple project might be organized following these recommendations:

....
.
|-- CITATION
|-- README
|-- LICENSE
|-- data
|   |-- bird_count_table.csv
|-- doc
|   |-- notebook.md
|   |-- manuscript.md
|-- results
|   |-- summarized_results.csv
|-- src
|   |-- sightings_analysis.py
|   |-- runall.py
....
The root directory contains a `README` file that provides an overview of the project as a whole, a `CITATION` file that explains how to reference it, and a `LICENSE` file that states the licensing. The `data` directory contains a single CSV file with tabular data on bird counts (machine-readable metadata could also be included here). The `src` directory contains `sightings_analysis.py`, a Python file containing functions to summarize the tabular data, and a controller script `runall.py` that loads the data table, applies functions imported from `sightings_analysis.py`, and saves a table of summarized results in the `results` directory (minimal sketches of both files are shown below). This project does not have a `bin` directory, since it does not rely on any compiled software. The `doc` directory contains two text files written in Markdown, one containing a running lab notebook describing various ideas for the project and how these were implemented, and the other containing a running draft of a manuscript describing the project findings.
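Minimal sketches of the two kinds of `src` file described above, using the example project's file names; the column names, the statistic computed, and the output path are assumptions made for illustration, and both files are shown in one listing for brevity.

....
# --- file: src/sightings_analysis.py ---
# Core analysis functions for the bird-count example project.
# Column names ('species', 'count') are assumptions.
import pandas as pd

def load_counts(path):
    """Read the raw bird-count table from a CSV file."""
    return pd.read_csv(path)

def summarize_by_species(counts):
    """Return total and mean count per species, one row per species."""
    return (counts
            .groupby("species")["count"]
            .agg(["sum", "mean"])
            .reset_index())


# --- file: src/runall.py ---
# Controller script that executes the whole analysis from start to finish.
from sightings_analysis import load_counts, summarize_by_species

RAW = "data/bird_count_table.csv"
OUT = "results/summarized_results.csv"       # assumes results/ exists

def main():
    counts = load_counts(RAW)                # read the raw data table
    summary = summarize_by_species(counts)   # apply the core analysis
    summary.to_csv(OUT, index=False)         # save the summarized results
    print("wrote", OUT)

if __name__ == "__main__":
    main()
....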
Keeping track of changes that you or your collaborators make to data and software is a critical part of research. Being able to reference or retrieve a specific version of the entire project aids reproducibility leading up to publication, when responding to reviewer comments, and when providing supporting information for reviewers, editors, and readers. We believe that the best tools for tracking changes are the version control systems that are used in software development, such as Git, Mercurial, and Subversion. They keep track of what was changed in a file, when, and by whom, and synchronize changes to a central server so that many users can manage changes to the same set of files. Although all of the authors use version control daily for all of their projects, we recognize that many newcomers to computational science find version control to be one of the more difficult practices to adopt. We therefore recommend that projects adopt _either_ a systematic manual approach for managing changes _or_ version control in its full glory. Whichever system you choose, we recommend that you use it in the following way:

1. *Back up (almost) everything created by a human being as soon as it is created (5a).* This includes scripts and programs of all kinds, software packages that your project depends on, and documentation. A few exceptions to this rule are discussed below.

2. *Keep changes small (5b).* Each change should not be so large as to make the change tracking irrelevant. For example, a single change such as "revise script file" that adds or changes several hundred lines is likely too large, as it will not allow changes to different components of an analysis to be investigated separately. Similarly, changes should not be broken up into pieces that are too small. As a rule of thumb, a good size for a single change is a group of edits that you could imagine wanting to undo in one step at some point in the future.

3. *Share changes frequently (5c).* Everyone working on the project should share and incorporate changes from others on a regular basis. Do not allow individual investigators' versions of the project repository to drift apart, as the effort required to merge differences goes up faster than the size of the difference. This is particularly important for the manual versioning procedure described below, which does not provide any assistance for merging simultaneous, possibly conflicting, changes.

4. *Create, maintain, and use a checklist for saving and sharing changes to the project (5d).* The list should include writing log messages that clearly explain any changes, the size and content of individual changes, style guidelines for code, updating to-do lists, and bans on committing half-done work or broken code. See _The Checklist Manifesto_ for more on the proven value of checklists.

Our first suggested approach, in which everything is done by hand, has three parts:

1. *Store each project in a folder that is mirrored off the researcher's working machine (5e)* by a system such as Dropbox, and synchronize that folder at least daily. It may take a few minutes, but that time is repaid the moment a laptop is stolen or its hard drive fails.

2. *Add a file called `CHANGELOG.txt` to the project's `docs` subfolder (5f)*, and make dated notes about changes to the project in this file in reverse chronological order (i.e., most recent first). This file is the equivalent of a lab notebook, and should contain entries like those shown below.

....
## 2016-04-08

* Switched to cubic interpolation as default.
* Moved question about family's TB history to end of questionnaire.

## 2016-04-06

* Added option for cubic interpolation.
* Removed question about staph exposure (can be inferred from blood test results).
....

3. *Copy the entire project whenever a significant change has been made (5g)* (i.e., one that materially affects the results), and store that copy in a sub-folder whose name reflects the date, in the area that is being synchronized. This approach results in projects being organized as shown below (a small script that automates such dated snapshots is sketched after this box):

....
.
|-- project_name
|   |-- current
|   |   |-- ...project content...
|   |-- 2016-04-08
|   |   |-- ...copy of 'current' on 2016-04-08...
|   |-- 2016-04-06
|   |   |-- ...copy of 'current' on 2016-04-06...
....

Here, the `project_name` folder is mapped to external storage (such as Dropbox), `current` is where development is done, and other folders within `project_name` are old versions.

*Data is cheap, time is expensive*: Copying everything like this may seem wasteful, since many files won't have changed, but consider: a terabyte hard drive costs about $50 retail, which means that 50 GByte costs less than a latte. Provided large data files are kept out of the backed-up area (discussed below), this approach costs less than the time it would take to select files by hand for copying.
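For those following the manual approach, a small script can make the dated-copy step in recommendation 5g routine. This is a sketch only: the `project_name`/`current` layout it assumes matches the example above, the Dropbox path is hypothetical, and large data directories are excluded via ignore patterns you would adjust yourself.

....
# snapshot.py: copy 'current' into a dated sub-folder (rule 5g).
# Assumes the layout shown above; paths and ignore patterns are assumptions.
import datetime
import shutil
from pathlib import Path

PROJECT = Path.home() / "Dropbox" / "project_name"   # synchronized area
CURRENT = PROJECT / "current"

def snapshot():
    """Copy the whole working folder into e.g. project_name/2016-04-08/."""
    today = datetime.date.today().isoformat()
    target = PROJECT / today
    if target.exists():
        raise SystemExit(f"snapshot {target} already exists")
    shutil.copytree(CURRENT, target,
                    ignore=shutil.ignore_patterns("*.tmp", "big_data*"))
    print("created snapshot", target)

if __name__ == "__main__":
    snapshot()
....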
This manual procedure satisfies the requirements outlined above without needing any new tools. If multiple researchers are working on the same project, though, they will need to coordinate so that only a single person is working on specific files at any time. In particular, they may wish to create one changelog file per contributor, and to merge those files whenever a backup copy is made.

What the manual process described above requires most is self-discipline. The version control tools that underpin our second approach, the one all authors use for their projects, don't just accelerate the manual process: they also automate some steps while enforcing others, and thereby require less self-discipline for more reliable results. Box 2 briefly explains how version control systems work.

It is hard to know which version control tool is most widely used in research today, but the one that is most talked about is undoubtedly Git. This is largely because of GitHub, a popular hosting site that combines the technical infrastructure for collaboration via Git with a modern web interface. GitHub is free for public and open source projects and for users in academia and nonprofits. GitLab is a well-regarded alternative that some prefer, because the GitLab platform itself is free and open source. For those who find Git's command-line syntax inconsistent and confusing, Mercurial is a good choice; Bitbucket provides free hosting for both Git and Mercurial repositories, but does not have nearly as many scientific users.

The benefits of version control systems do not apply equally to all file types. In particular, version control can be more or less rewarding depending on file size and format. First, today's version control systems are not designed to handle megabyte-sized files, never mind gigabytes, so large data or results files should not be included. What counts as "large"?
As a benchmark, note that the limit for an individual file on GitHub is 100 MB. Raw data should not change, and therefore does not absolutely require version tracking. Keeping intermediate data files and other results under version control is also not strictly necessary if you can re-generate them from raw data and software. However, if data and results are small, we still recommend placing them under version control for ease of access by collaborators and for comparison across versions.

Second, file comparison in version control systems is optimized for plain text files, such as source code. The ability to see so-called "diffs" is one of the great joys of version control. Unfortunately, Microsoft Office files (like the `.docx` files used by Word) or other binary files, e.g., PDFs, can be stored in a version control system, but it is not possible to pinpoint specific changes from one version to the next. Tabular data (such as CSV files) can be put in version control, but changing the order of the rows or columns will create a big change for the version control system, even if the data itself has not changed.

*Inadvertent sharing*: Researchers dealing with data subject to legal restrictions that prohibit sharing (such as medical data) should be careful not to put data in public version control systems. Some institutions may provide access to private version control systems, so it is worth checking with your IT department.

An old joke says that doing the research is the first 90% of any project; writing up is the other 90%. While writing is rarely addressed in discussions of scientific computing, computing has changed scientific writing just as much as it has changed research. A common practice in academic writing is for the lead author to send successive versions of a manuscript to coauthors to collect feedback, which is returned as changes to the document, comments on the document, plain text in email, or a mix of all three. This results in a lot of files to keep track of, and a lot of tedious manual labor to merge comments to create the next master version. Instead of an email-based workflow, we recommend mirroring good practices for managing software and data to make writing scalable, collaborative, and reproducible.
As with our recommendations for version control in general, we suggest that groups choose one of two different approaches for managing manuscripts. The goals of both are to:

* Ensure that text is accessible to yourself and others now and in the future by making a single master document that is available to all coauthors at all times.
* Reduce the chances of work being lost or of people overwriting each other's work.
* Make it easy to track and combine contributions from multiple collaborators.
* Avoid duplication and manual entry of information, particularly in constructing bibliographies, tables of contents, and lists.
* Make it easy to regenerate the final published form (e.g., a PDF) and to tell whether it is up to date.
* Make it easy to share that final version with collaborators and to submit it to a journal.

*The first rule is*: The workflow you choose is less important than having all authors agree on the workflow _before_ writing starts. Make sure to also agree on a single method to provide feedback, be it an email thread or mailing list, an issue tracker (like the ones provided by GitHub and Bitbucket), or some sort of shared online to-do list.

Our first alternative has two parts:

1. *Write manuscripts using online tools with rich formatting, change tracking, and reference management (6a)*, such as Google Docs. With the document online, everyone's changes are in one place, and hence do not need to be merged manually.

2. *Include a `publications` file in the project's `doc` directory (6b)* with metadata about each online manuscript (e.g., their URLs). This is analogous to the `data` directory, which might contain links to the location of the data file(s) rather than the actual files.

We realize that in many cases even this solution is asking too much from collaborators who see no reason to move forward from desktop GUI tools.
To satisfy them, the manuscript can be converted to a desktop editor file format (e.g., Microsoft Word's `.docx` or LibreOffice's `.odt`) after major changes, then downloaded and saved in the `doc` folder. Unfortunately, this means merging some changes and suggestions manually, as existing tools cannot always do this automatically when switching from a desktop file format to text and back (although Pandoc can go a long way).

The second approach treats papers exactly like software, and has been used by researchers in mathematics, astronomy, physics, and related disciplines for decades:

1. *Write the manuscript in a plain text format that permits version control (6c)*, such as LaTeX or Markdown, and then convert it to other formats such as PDF as needed using scriptable tools like Pandoc.

2. *Include tools needed to compile manuscripts in the project folder (6d)* and keep them under version control, just like tools used to do simulation or analysis.

Using a version control system provides good support for finding and merging differences resulting from concurrent changes. It also provides a convenient platform for making comments and performing review. This approach re-uses the version control tools and skills used to manage data and software, and is a good starting point for fully-reproducible research. However, it requires all contributors to understand a much larger set of tools, including Markdown or LaTeX, make, BibTeX, and Git/GitHub.

The first draft of this paper recommended always using plain text in version control to manage manuscripts, but several reviewers pushed back forcefully. For example, Stephen Turner wrote:

"Try to explain the notion of compiling a document to an overworked physician you collaborate with. Oh, but before that, you have to explain the difference between plain text and word processing. And text editors. And Markdown/LaTeX compilers. And BibTeX. And Git. And GitHub. Meanwhile he/she is getting paged from the OR... As much as we want to convince ourselves otherwise, when you have to collaborate with those outside the scientific computing bubble, the barrier to collaborating on papers in this framework is simply too high to overcome. Good intentions aside, it always comes down to 'just give me a Word document with tracked changes,' or similar."
Similarly, Arjun Raj said in a blog post:

"Google Docs excels at easy sharing, collaboration, simultaneous editing, commenting and reply-to-commenting. Sure, one can approximate these using text-based systems and version control. The question is why anyone would like to. ... The goal of reproducible research is to make sure one can reproduce computational analyses. The goal of version control is to track changes to source code. These are fundamentally distinct goals, and while there is some overlap, version control is merely a tool to help achieve that, and comes with so much overhead and baggage that it is often not worth the effort."

In keeping with our goal of recommending "good enough" practices, we have therefore included online, collaborative editing in something like Google Docs. We still recommend _against_ traditional desktop tools like LibreOffice and Microsoft Word because they make collaboration more difficult than necessary.

Supplementary materials often contain much of the work that went into the project, such as tables and figures or more elaborate descriptions of the algorithms, software, methods, and analyses. In order to make these materials as accessible to others as possible, do not rely solely on the PDF format, since extracting data from PDFs is notoriously hard. Instead, we recommend separating the results that you may expect others to reuse (e.g., data in tables, data behind figures) into separate, text-format files (a sketch of saving the data behind a figure in this way is shown below).
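As a small illustration of that recommendation, the sketch below writes the numbers behind a figure to a CSV file next to the figure itself, so readers can reuse the data without scraping a PDF. The file names, values, and the use of pandas and matplotlib are assumptions made for the example.

....
# save_figure_and_data.py: keep the data behind a figure in a reusable
# text format. File names and values are made up; assumes results/ exists.
import pandas as pd
import matplotlib
matplotlib.use("Agg")              # render without a display
import matplotlib.pyplot as plt

results = pd.DataFrame({
    "year":  [2013, 2014, 2015, 2016],
    "count": [42, 57, 61, 80],
})

# Text-format data first: this is what others can most easily reuse.
results.to_csv("results/fig1_counts.csv", index=False)

# Then the figure itself, generated from exactly the same table.
fig, ax = plt.subplots()
ax.plot(results["year"], results["count"], marker="o")
ax.set_xlabel("year")
ax.set_ylabel("count")
fig.savefig("results/fig1_counts.png")
....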
The same holds for any commands or code you want to include as supplementary material: use the format that most easily enables reuse (source code files, Unix shell scripts, etc.).

We have deliberately left many good tools and practices off our list, including some that we use daily, because they only make sense on top of the core practices described above, or because it takes a larger investment before they start to pay off.

*Branches*: A _branch_ is a "parallel universe" within a version control repository. Developers create branches so that they can make multiple changes to a project independently. Branches are central to the way that experienced developers use systems like Git, but they add an extra layer of complexity to version control for newcomers. Programmers got along fine in the days of CVS and Subversion without relying heavily on branching, and branching can be adopted without significant disruption after people have mastered a basic edit-commit workflow.

*Build tools*: Tools like Make were originally developed to recompile pieces of software that had fallen out of date. They are now used to regenerate data and entire papers: when one or more raw input files change, Make can automatically re-run those parts of the analysis that are affected, regenerate tables and plots, and then regenerate the human-readable PDF that depends on them. However, newcomers can achieve the same behavior by writing shell scripts that re-run everything; these may do unnecessary work, but given the speed of today's machines, that is unimportant for small projects.

*Unit tests*: A _unit test_ is a small test of one particular feature of a piece of software. Projects rely on unit tests to prevent _regression_, i.e., to ensure that a change to one part of the software doesn't break other parts. While unit tests are essential to the health of large libraries and programs, we have found that they usually aren't compelling for solo exploratory work. (Note, for example, the lack of a `test` directory in Noble's rules.) Rather than advocating something which people are unlikely to adopt, we have left unit testing off this list.

*Continuous integration*: Tools like Travis-CI automatically run a set of user-defined commands whenever changes are made to a version control repository. These commands typically execute tests to make sure that software hasn't regressed, i.e., that things which used to work still do. These tests can be run either before changes are saved (in which case the changes can be rejected if something fails) or after (in which case the project's contributors can be notified of the breakage). CI systems are invaluable in large projects with many contributors, but pay fewer dividends in smaller projects where code is being written to do specific analyses.

*Profiling and performance tuning*: _Profiling_ is the act of measuring where a program spends its time, and is an essential first step in _tuning_ the program (i.e., making it run faster). Both are worth doing, but only when the program's performance is actually a bottleneck: in our experience, most users spend more time getting the program right in the first place.

*Coverage*: Every modern programming language comes with tools to report the _coverage_ of a set of test cases, i.e., the set of lines that are and aren't actually executed when those tests are run.
But as with unit testing, this only starts to pay off once the project grows larger, and is therefore not recommended here.

*The semantic web*: Ontologies and other formal definitions of data are useful, but in our experience, even simplified things like Dublin Core are rarely encountered in the wild.

*Documentation*: Good documentation is a key factor in software adoption, but in practice, people won't write comprehensive documentation until they have collaborators who will use it. They will, however, quickly see the point of a brief explanatory comment at the start of each script, so we have recommended that as a first step.

*A bibliography manager*: Researchers should use a reference manager of some sort, such as Zotero, and should also obtain and use an ORCID to identify themselves in their publications, but discussion of those is outside the scope of this paper.

*Code reviews and pair programming*: These practices are valuable in projects with multiple people making large software contributions, which is not typical for the intended audience of this paper.

One important observation about this list is that many experienced programmers actually do some or all of these things even for small projects. It makes sense for them to do so because (a) they have already paid the learning cost of the tool, so the time required to apply it to the "next" project is small, and (b) they understand that their project will need some or all of these things as it scales, so they might as well put them in place now. The problem comes when those experienced developers give advice to people who _haven't_ already mastered the tools, and _don't_ realize (yet) that they will save time if and when their project grows. In that situation, advocating unit testing with coverage checking and continuous integration is more likely to scare newcomers off than to aid them.

We have outlined a series of practices for scientific computing based on our collective experience, and the experience of the thousands of researchers we have met through Software Carpentry, Data Carpentry, and similar organizations. These practices are pragmatic, accessible to people who consider themselves computing novices, and can be applied by both individuals and groups. Most importantly, these practices make researchers more productive individually by enabling them to get more done in less time and with less pain. They also accelerate research as a whole by making computational work (which increasingly means _all_ work) more reproducible. But progress will not happen by itself. Universities and funding agencies need to support training for researchers in the use of these tools. Such investment will improve confidence in the results of computational work and allow us to make more rapid progress on important research questions.

References

Wilson G, Aruliah DA, Brown CT, Hong NPC, Davis M, Guy RT, et al. Best practices for scientific computing. PLOS Biology. 2014;12(1):e1001745. doi:10.1371/journal.pbio.1001745.
Gentzkow M, Shapiro JM. Code and data for the social sciences: a practitioner's guide; 2014. Available from: http://faculty.chicagobooth.edu/matthew.gentzkow/research/codeanddata.pdf.
Noble WS. A quick guide to organizing computational biology projects. PLOS Computational Biology. 2009;5(7). doi:10.1371/journal.pcbi.1000424.
Brown CT. How to grow a sustainable software development process; 2015. Available from: http://ivory.idyll.org/blog/2015-growing-sustainable-software-development-process.html.
tidy data .journal of statistical software . 2014;59(1):123 .doi:10.18637/jss.v059.i10 . kitzes j. reproducible workflows ; 2016 . available from : http://datasci.kitzes.com/lessons/python/reproducible_workflow.html .sandve gk , nekrutenko a , taylor j , hovig e. ten simple rules for reproducible computational research .plos computational biology .2013;9(10 ) .doi : doi:10.1371/journal.pcbi.1003285 .hart e , barmby p , lebauer d , michonneau f , mount s , poisot t , et al .. ten simple rules for digital data storage ; 2015 . of illinois library u. file formats and organization;. available from : http://www.library.illinois.edu/sc/services/data_management/file_formats.html .white ep , baldridge e , brym zt , locey kj , mcglinn dj , supp sr .nine simple ways to make it easier to ( re)use your data .ideas in ecology and evolution .2013;6(2 ) .doi : doi:10.4033/iee.2013.6b.6.f .wickes e. comment on `` metadata '' ; 2015 .available from : https://github.com/swcarpentry/good-enough-practices-in-scientific-computing/issues/3#issuecomment-157410442 .miller ga .the magical number seven , plus or minus two : some limits on our capacity for processing information . psychological review . 1956;63(2):8197 .doi : doi:10.1037/h0043158 m .steinmacher i , graciotto silva m , gerosa m , redmiles df . a systematic literature review on the barriers faced by newcomers to open source software projects . information and software technology .2015;59(c ) . doi:10.1016/j.infsof.2014.11.001 .understanding open source and free software licensing .oreilly media ; 2004 .available from : http://www.oreilly.com / openbook / osfreesoft / book/. gawande a. the checklist manifesto : how to get things right .picador ; 2011 .petre m , wilson g. code review for and by scientists . in : katz d ,editor . proc .wssspe 2014 ; 2014 .1 . data management a. save the raw data .b. create the data you wish to see in the world . c.create analysis - friendly data .d. record all the steps used to process data .e. anticipate the need to use multiple tables . f. submit data to a reputable doi - issuing repository so that others can access and cite it .software a. place a brief explanatory comment at the start of every program .b. decompose programs into functions .c. be ruthless about eliminating duplication .d. always search for well - maintained software libraries that do what you need .e. test libraries before relying on them .f. give functions and variables meaningful names .g. make dependencies and requirements explicit .h. do not comment and uncomment sections of code to control a program s behavior .i. provide a simple example or test data set .j. submit code to a reputable doi - issuing repository .collaboration a. create an overview of your project .b. create a shared public `` to - do '' list .c. make the license explicit .d. make the project citable .project organization a. put each project in its own directory , which is named after the project .b. put text documents associated with the project in the ` doc ` directory .c. put raw data and metadata in a ` data ` directory , and files generated during cleanup and analysis in a ` results ` directory .d. put project source code in the ` src ` directory .e. put external scripts , or compiled programs in the ` bin ` directory .f. name all files to reflect their content or function .5 . keeping track of changesa. back up ( almost ) everything created by a human being as soon as it is created .b. keep changes small .c. share changes frequently .d. 
create , maintain , and use a checklist for saving and sharing changes to the project .e. store each project in a folder that is mirrored off the researcher s working machine .f. add a file called ` changelog.txt ` to the project s ` docs ` subfolder .g. copy the entire project whenever a significant change has been made .manuscripts a. write manuscripts using online tools with rich formatting , change tracking , and reference management .b. include a ` publications ` file in the project s ` doc ` directory .c. write the manuscript in a plain text format that permits version control .d. include tools needed to compile manuscripts in the project folder .a version control system stores snapshots of a project s files in a repository .users modify their working copy of the project , and then save changes to the repository when they wish to make a permanent record and/or share their work with colleagues .the version control system automatically records when the change was made and by whom along with the changes themselves .crucially , if several people have edited files simultaneously , the version control system will detect the collision and require them to resolve any conflicts before recording the changes .modern version control systems also allow repositories to be synchronized with each other , so that no one repository becomes a single point of failure .tool - based version control has several benefits over manual version control : * instead of requiring users to make backup copies of the whole project , version control safely stores just enough information to allow old versions of files to be re - created on demand . * instead of relying on users to choose sensible names for backup copies ,the version control system timestamps all saved changes automatically . * instead of requiring users to be disciplined about completing the changelog ,version control systems prompt them every time a change is saved .they also keep a 100% accurate record of what was _ actually _ changed , as opposed to what the user _ thought _ they changed , which can be invaluable when problems crop up later . * instead of simply copying files to remote storage , version control checks to see whether doing that would overwrite anyone else s work .if so , they facilitate identifying conflict and merging changes .
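To make the contrast with manual backups concrete, the following sketch records a snapshot of a project with a version control tool driven from a short script. It is illustrative only: it assumes git is installed and configured on the system, and the project path and commit message are placeholders rather than part of any recommended workflow.

```python
# Minimal sketch: snapshot a project directory with git instead of making
# manual backup copies.  Assumes git is installed and on PATH; the project
# path and commit message below are illustrative placeholders.
import subprocess
from pathlib import Path

def snapshot(project_dir, message):
    """Initialise a repository if needed, then record all current changes."""
    project = Path(project_dir)
    if not (project / ".git").exists():
        subprocess.run(["git", "init"], cwd=project, check=True)
    subprocess.run(["git", "add", "--all"], cwd=project, check=True)
    # git records who made the change and when, so no manual changelog entry
    # or timestamped filename is required.  (This call fails harmlessly if
    # there is nothing new to commit.)
    subprocess.run(["git", "commit", "-m", message], cwd=project, check=True)

if __name__ == "__main__":
    snapshot("my_project", "Clean raw survey data and regenerate figure 2")
```

Each call plays the role of the "save changes to the repository" step described above; the same tool detects conflicts and synchronizes repositories when collaborators share their work.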
We present a set of computing tools and techniques that every researcher can and should adopt. These recommendations draw on our own work, on the experiences of the thousands of people who have taken part in Software Carpentry and Data Carpentry workshops over the past six years, and on a variety of other guides. Unlike some of those guides, our recommendations are aimed specifically at people who are new to research computing.
regression analysis is one of the most important tools used to investigate the relationship between a response and a predictor .many major studies of regression have been concerned with the estimation of the conditional mean function of given a predictor .on the other hand , the estimation of the conditional quantile function of given has gained momentum in recent years .this analysis is called quantile regression . in quantile regression ,the purpose is to estimate an unknown function that satisfies for a given .when , is the conditional median of .one established advantage of quantile regression as compared to mean regression is that the estimators are more robust against outliers in the response measurements .quantile regression models have been suggested by koenker and bassett ( 1978 ) .many authors have studied quantile regression based on the parametric method , its asymptotic theories , the computational aspects and other properties , and these developments have been summarized by koenker ( 2005 ) and hao and naiman ( 2007 ) .the nonparametric methods for quantile regression have also been studied extensively .many authors have explored the topic in relation to kernel methods , including fan et al .( 1994 ) , yu and jones ( 1998 ) , takeuchi et al .( 2006 ) , kai et al .( 2011 ) . on the other hand , hendricks and koenker ( 1992 ) and koenker et al .( 1994 ) used the low - rank regression splines method and the smoothing splines method , respectively .pratesi et al .( 2009 ) and reiss and huang ( 2012 ) utilized the penalized spline smoothing method .this paper focuses on penalized splines .compared with unpenalized splines and smoothing splines , an advantage of the penalized spline methods is follows .although the smoothing spline estimator gives the predictor with fitness and smoothness , the computational cost to construct the estimator is high . in unpenalized regression spline methods , on the other hand, it is known that the estimator tends to have a wiggle curve , but the computational cost is lower than that of smoothing spline methods .the penalized spline estimator , however , gives the curve with fitness and smoothness and its computational cost is lower than that of smoothing spline methods .thus , penalized splines can be considered an efficient technique .previous results of asymptotic studies of nonparametric quantile regressions include the following .fan et al .( 1994 ) showed the asymptotic normality of the kernel estimator .yu and jones ( 1998 ) proposed a new kernel estimator and studied its asymptotic results .he and shi ( 1994 ) showed the convergence rate of the unpenalized regression spline estimator .portnoy ( 1997 ) discussed asymptotics for smoothing spline estimators .however , the asymptotics for the penalized spline estimator of quantile regression have not yet been studied . in this paper, we show the asymptotic distribution of the penalized spline estimator for quantile regression with a low - rank -spline model and the difference penalty .the penalized spline estimator of for a given is defined as the minimizer of the convex loss function , which is the check function with an additional difference penalty . to establish the asymptotic distribution of the penalized spline estimator, we need to derive two biases ( i ) the model bias between the true function and the -spline model , and ( ii ) the bias arising from using the penalty term . 
by showing the asymptotic form of these two biases , the resulting asymptotic bias of the penalized spline estimator can be obtained .finally , together with the asymptotic variance of the estimator , we show the asymptotic normality of the penalized spline quantile estimator . this paper is organized as follows . in section 2 ,we define the penalized spline quantile estimator for a given . in terms of our estimation method , we mainly focus on the penalized iteratively reweighted least squares method .section 3 provides the asymptotic bias and variance as well as the asymptotic distribution of the penalized spline quantile estimator .furthermore , the related properties are described . in section 4 ,numerical studies are conducted .related discussion and issues for future research are provided in section 5 .finally , proofs for the theoretical results are all given in the appendix .for a given dataset , consider the conditional quantile of response given as where and is an unknown true conditional quantile function of given .it is easy to show that the true function satisfies .\end{aligned}\ ] ] here , is the check function provided by koenker and bassett ( 1978 ) , given as where is the indicator function of .we want to estimate using penalized spline methods . to approximate , we consider the -spline model }(x)b_k(\tau),\end{aligned}\ ] ] where }(x)(k =- p+1,\cdots , k) ] as unless the degrees of -splines are specified .details and many properties of the b - spline function are clarified by de boor ( 2001 ) .the estimator of is defined as where , is the smoothing parameter and matrix is the difference matrix , which is defined as , where for , and 0 for otherwise .it is well known that the difference penalty in ( [ plsc ] ) is very useful in mean regression and can be regarded as the controller of the smoothness of because we can interpret ( see , eilers and marx ( 1996 ) ) .although reiss and huang ( 2012 ) used the penalty , this penalty contains an integral and hence the computational difficulty for the resulting estimator grows .therefore , this paper proposes using as the penalty .in fact , is obtained via linear - programming methods , such as simplex methods or interior points methods ( see koenker and park ( 1996 ) , koenker ( 2005 ) ) . on the other hand, it is known that the iteratively reweighted least squares ( irls ) method is a useful in nonparametric quantile regression .the penalized spline estimator obtained via irls was also studied and detailed by reiss and huang ( 2012 ) .since irls is important for obtaining the estimator , we now provide the complete algorithm . for a given ,the -steps iterated estimator is defined as follows : where , , ] .the knots for the -spline basis are equidistantly located as and the number of knots satisfies .there exists such that <\infty](see de boor ( 2001 ) ) .therefore , in this case , the model bias becomes 0 , indicating that the regression spline quantile estimator is unbiased .we can definitely show that =0 ] , , and is an unknown parameter vector .pratesi et al .( 2009 ) obtained the estimator , where is the minimizer of where is the smoothing parameter and ] , while we have for the gaussian kernel and for the epanechnikov kernel . 
therefore the bias of the regression spline estimator is smaller than that of the local linear estimator in this situation .in this section , we show numerical simulation to confirm the performance as well as the asymptotic normality of the penalized spline quantile estimator claimed in theorem [ clt ] .the explanatory is generated from a uniform distribution on the interval ] and is the conditional kernel density estimate given . then we construct the density estimate of at and compare with the density of . to obtain and , the normal kernel and the bandwidth discussed by sheather and jones ( 1991 )are utilized .[ 0.9 ] .results of mise for and .all entries for mise are times their actual values . [cols="^,^,^,^,^,^,^,^,^,^",options="header " , ] table 1 shows the mise for and 0.5 . for p - cubic with normal error ,the performance of the quantile estimator is good even if .it is well known that the cauchy distribution is a pathological distribution . however , the mise of p - cubic with the cauchy distribution is sufficiently small , indicating that the quantile estimator is robust .for the boundary , on the other hand , the mise of the estimators is worse than that with interior . for the normal and cauchy models , the median estimator has better behavior than those with and 0.25 . on the other hand , for the exponential model, the median estimator has a larger mise than with other values of .the reason for this is that the density of exponential distribution is monotonically decreasing and its peak is at , which leads to many responses s being dropped near with small .we note the performance of the penalized spline estimator for .when a normal or cauchy error is used , it appears that the mise of and that of become similar since has a symmetrical density function at . for an exponential error ,the closer is to 0 , the smaller the mise of will become .overall , p - cubic has better behavior than r - linear and l - linear .however , for the exponential distribution and , the mise of l - linear is slightly smaller than that of p - cubic . additionally , the performance of l - linear is slightly superior to that of r - linear .this indicates that the variance of l - linear is less than that of r - linear ( see remark 7 ) . in figure [ simu ] ,the density estimate of for and 0.5 and the density of for each error are illustrated . in all errors, we can see that the density estimate of becomes close to as increases . for a normal distribution with ,the density estimate and are similar . in both errors, we see that the speed of convergence of is faster than that of .* remark 8 * we have confirmed the behavior of the penalized splines with ( p - linear ) and the regression splines with ( r - cubic ) though this is not shown in this paper for reasons of space .the mise of p - linear and r - cubic are similar to the p - cubic and r - linear , respectively .for spline smoothing , it is generally known that the pair of the ` cubic spline and the second difference penalty are particularly useful in data analysis .therefore we mainly focused on in this simulation . for (dot - dashed ) and (dashed ) , and the density of (solid ) .the left panels are for and the right panels are for .the upper , middle and bottom panels are for normal , exponential and cauchy errors , respectively . 
[figure [simu]: density estimates of the standardized penalized spline estimator for two sample sizes (dot-dashed and dashed) together with the limiting normal density (solid); the left and right panels correspond to the two quantile levels considered, and the upper, middle and bottom rows to normal, exponential and cauchy errors, respectively.]

in this section, we apply the penalized spline quantile estimator to real data. in all examples, we use cubic splines with a second-order difference penalty, and the smoothing parameter is chosen via gacv. figure [bmd] shows the penalized spline quantile estimators ( ) for bone mineral density (bmd) data. this dataset was presented by hastie et al. takeuchi et al. (2006) applied the kernel estimator to the same data; compared with figure 2(b) of their paper, the penalized splines give a somewhat smoother curve. next, the confidence interval of is illustrated. the 100(1-α)% confidence interval of based on the asymptotic result is obtained as in equation ([conf]), where , and are the estimators of , and , while is a normal percentile. as the estimator of , is used. we utilize as given in the previous section. as the pilot estimator of in , we construct the derivative of the penalized spline quantile estimator with the b-spline model. thus, we obtain ([conf]). in figure [motor], the approximate confidence interval of for the motorcycle impact data is drawn. this dataset, with , was given by härdle (1990), where is the acceleration (g) and is the time (ms). for comparison, the approximate confidence interval with the bias of left uncorrected is also shown. the penalized spline estimator of the median gives a curve with both fitness and smoothness.
in the area near , we see that there is a strong correction of the bias of .finally , we compare the median estimator and the mean estimator for boston housing data , with , where is the median value of owner - occupied homes in usd 1000s ( given by medv ) and is the average number of rooms per dwelling ( denoted rm ) .this dataset is available from harrison and rubinfeld ( 1979 ) .figure [ boston ] shows the penalized spline quantile estimator of (solid ) and the penalized spline estimator of the conditional mean of : ] and ] and lemma [ lyapnov ] holds .let since \\ & & = \int_0^{w_{in } } e[\{i(u_i\leq s)-i(u_i\leq 0)\}|\vec{x}_n]ds\\ & & = \int_0^{w_{in } } \left\{p\left(y_i<\vec{b}(x_i)^t\vec{b}^*(\tau)+s|x_i = x_i\right)-p(y_i<\vec{b}(x_i)^t\vec{b}^*(\tau)|x_i = x_i)\right\}ds\\ & & = { \small \sqrt{\frac{k_n}{n}}\int_0^{\vec{b}(x_i)^t\vec{\delta } } \left\{p\left(\left.y_i<\vec{b}(x_i)^t\vec{b}^*(\tau)+t\frac{k_n}{n}\right|x_i = x_i\right)-p(y_i<\vec{b}(x_i)^t\vec{b}^*(\tau)|x_i = x_i)\right\}dt } \\ & & = \frac{k_n}{n}\int_0^{\vec{b}(x_i)^t\vec{\delta } } f\left(\vec{b}(x_i)^t\vec{b}^*(\tau)|x_i\right)tdt\\ & & = \frac{k_n}{2n}f\left(\vec{b}(x_i)^t\vec{b}^*(\tau)|x_i\right)\{\vec{b}(x_i)^t\vec{\delta}\}^2.\end{aligned}\ ] ] therefore we obtain & = & \frac{k_n}{2n}\sum_{i=1}^n f\left(\vec{b}(x_i)^t\vec{b}^*(\tau)|x_i\right)\vec{\delta}^t\vec{b}(x_i)\vec{b}(x_i)^t\vec{\delta}\\ & = & \frac{k_n}{2}\vec{\delta}^t \left(\frac{1}{n}\sum_{i=1}^n f\left(\eta_\tau(x_i)+o(1)|x_i\right)\vec{b}(x_i)\vec{b}(x_i)^t\right)\vec{\delta}\\ & = & \frac{k_n}{2}\vec{\delta}^t g(\tau)\vec{\delta}(1+o_p(1)).\end{aligned}\ ] ] finally , we show =o_p(1) ] , we obtain }/e[r_n|\vec{x}_n]=o_p(1) ] . by the properties of the derivative of the -spline model , we have }(x)^t d_m\vec{b}(\tau) ] for . since the asymptotic order of }(x)^t \{k_n^md_m\vec{b}^*(\tau)\}$ ] and that of are the same as , is satisfied for .in addition , similar to the proof of theorem 1 of kauermann et al .( 2009 ) , is fulfilled .together with lemma [ g1 ] , we obtain thus proposition 2 has been proven .10 agawal , g . and studden , w.(1980 ) , `` asymptotic integrated mean square error using least squares and bias minimizing splines,''_ann. statist . _* 8*,1307 - 1325 .claeskens , g . , krivobokova , t . andopsomer , j.d.(2009 ) .asymptotic properties of penalized spline estimators . , 529 - 544 . de boor , c.(2001 ) .springer - verlag .eilers , p.h.c . andmarx , b.d.(1996 ) .flexible smoothing with -splines and penalties(with discussion ) . . *11 * , 89 - 121 ., hu , t.c . , and truong , y.k.(1994 ) . robust nonparametric function estimation . _ scandinavian journal of statistics_. * 21 * , 433 - 446 .hao , l . and naiman , d.q.(2007 ) ._ quantile regression_. sage publications , inc .hastie , t . ,tibshirani , r . and friedman , j.(2009 ) ._ the elements of statistical learning _ , springer - verlag .he , x . and shi , p.(1994 ). convergence rate of b - spline estimators of nonparametric conditional quantile functions ._ j. nonparam . statist_. * 3 * , 299 - 308 .hendricks , w . and koenker , r.(1992 ) .hierarchical spline models for conditional quantiles and the demand for electricity ._ j. amer .* 87 * , 58 - 68 ., li , r . , and zou , h.(2011 ) .new efficient estimation and variable selection methods for semiparametric varying - coefficient partially linear models_ * 39 * , 305 - 332 .kato , k.(2009 ) .asymptotics for argmin processes : convexity arguments ._ j. multi . anal_.* 100 * , 1816 - 1829 .kauermann , g . 
,krivobokova , t . , and fahrmeir , l.(2009 ) .some asymptotic results on generalized penalized spline smoothing._j .r. statist ._ b * 71 * , 487 - 503 .knight.k.(1998 ) . limiting distributions for regression estimators under general conditions .statist . _* 26 * , 755 - 770 .koenker , r ._ quantile regression_. cambridge univ . press ,koenker , r . and bassett , g.(1978 ) .regression quantiles ._ econometrica_. * 46 * , 33 - 50 .koenker , r . , ng , p . and portnoy , s.(1994 ) .quantile smoothing splines ._ biometrika_. * 81 * , 673 - 680 .koenker , r . and park , b.j.(1996 ) . an interior point algorithm for nonlinear quantile regression ._ j. econom_. * 71 * , 265 - 283 .nychka , d . ,, haaland , p . ,martin , d . , and oconnell , m.(1995 ). a nonparametric regressio n approach to syringe grading for quality improvement ._ j. amer .assoc . _ * 90 * , 1171 - 1178 .pollard , d.(1991 ) .asymptotics for least absolute deviation regression estimators ._ econometric theory_. * 7 * , 186 - 199 .portnoy , s.(1997 ) .local asymptotics for quantile smoothing splines ._ * 25 * , 414 - 434 .pratesi , m ., ranalli.m.g . , and salvati , n.(2009 ) .nonparametric m - quantile regression using penalised splines ._ j. nonparam . statist_.* 21 * , 287 - 304 .reiss , p.t . andhuang , l.(2012 ) .smoothness selection for penalized quantile regression splines ._ the international journal of biostatistics_. * 8.1*. sheather , s. j. and jones , m. c.(1991 ) . a reliable data - based bandwidth selection method for kernel density estimation ._ j. r. statist ._ 53 , 683 - 690 .and li , g.(1995 ) . global convergence rates of -spline -estimators in nonparametric regression . _ statistica sinica_. * 5 * , 303 - 318 .takeuchi , i ., li , q.v , sears , t.d . , and smola , a.j.(2006 ) .nonparametric quantile estimation. _ journal of machine learning research_. * 7 * , 1231 - 1264 .yu , k . and jones , m.c.(1998 ) . local linear quantile regression ._ j. amer .statist . assoc . _* 93 * , 228 - 237 .yuan , m.(2006 ) .gacv for quantile smoothing splines._computational statistics data analysis_. * 50 * , 813 - 829 .zhou , s . , shen , x . andwolfe , d.a.(1998 ) .local asymptotics for regression splines and confidence regions ._ * 26*(5):1760 - 1782 .
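As a numerical illustration of the estimator described in section 2 — the check-function loss with an m-th order difference penalty on the B-spline coefficients, minimized by penalized iteratively reweighted least squares — the following sketch fits a single quantile curve to simulated data. It is not the authors' code: the simulated data, the number of knots, the smoothing parameter and the stopping rule are ad-hoc choices rather than the GACV-selected values used in the paper.

```python
# Minimal sketch of the penalized spline quantile estimator fitted by
# iteratively reweighted least squares (IRLS).  Illustrative only: knot
# number, smoothing parameter and tolerance are ad-hoc choices.
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_knots=20, degree=3):
    """Design matrix of B-splines with equidistant interior knots on [0, 1]."""
    interior = np.linspace(0.0, 1.0, n_knots + 1)
    t = np.r_[[0.0] * degree, interior, [1.0] * degree]    # open knot vector
    n_basis = len(t) - degree - 1                           # = n_knots + degree
    return np.column_stack(
        [BSpline(t, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)]
    )

def pspline_quantile(x, y, tau, lam=1.0, n_knots=20, degree=3, m=2,
                     n_iter=200, eps=1e-6):
    """IRLS for the check-function loss with an m-th order difference penalty."""
    B = bspline_basis(x, n_knots, degree)
    D = np.diff(np.eye(B.shape[1]), n=m, axis=0)            # difference matrix
    P = lam * D.T @ D
    b = np.linalg.solve(B.T @ B + P, B.T @ y)               # start from mean fit
    for _ in range(n_iter):
        r = y - B @ b
        # rho_tau(r) = w(r) * r**2  with  w(r) = |tau - 1(r<0)| / |r|
        w = np.abs(tau - (r < 0)) / np.maximum(np.abs(r), eps)
        b_new = np.linalg.solve(B.T @ (w[:, None] * B) + P, B.T @ (w * y))
        if np.max(np.abs(b_new - b)) < 1e-8:
            b = b_new
            break
        b = b_new
    return lambda x_new: bspline_basis(x_new, n_knots, degree) @ b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 400)
    y = np.sin(2 * np.pi * x) + rng.standard_normal(400)    # normal errors
    q50 = pspline_quantile(x, y, tau=0.5, lam=1.0)
    print(q50(np.array([0.1, 0.5, 0.9])))
```

The weights in the inner loop come from writing the check function as rho_tau(r) = w(r) r^2 with w(r) = |tau - 1(r<0)|/|r|, floored at a small epsilon to avoid division by zero.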
Quantile regression predicts the $\tau$-quantile of the conditional distribution of a response variable given the explanatory variable, for $0<\tau<1$. The aim of this paper is to establish the asymptotic distribution of the quantile estimator obtained by the penalized spline method. A simulation and an exploration of real data are performed to validate our results. *Keywords*: asymptotic normality, B-spline, penalized spline, quantile regression.
the high positional and moderate energy resolutions of the charged couple device ( ccd ) established this device to be the main detector for imaging spectroscopy in x - ray astronomy since asca / sis .however , one drawback to an x - ray ccd in - orbit is the degradation of the gain and energy resolution due to an increase of the charge transfer inefficiency ( cti ) .the proton irradiation on the ccd chip increases the number of charge traps in the ccd , which is composed of silicon crystals .this defect is more severe for low energy protons because they deposit more energy than high energy protons in the ccd transfer channel .the main origin of the cti and consequent gain degradation is the increase of charge traps .in fact , chandra / acis has suffered from a degraded energy resolution due to the low - energy ( 10 - 100 kev ) protons in the van allen belts .although a thick shielding around the ccd camera can significantly reduce the proton flux on the ccd , the radiation damage can not be ignored over a mission lifetime of several years . in order to maintain the good performance of ccds in orbit , the cti must be frequently measured and applied to the data .most of the major x - ray missions are provided with one or more calibration sources to measure the cti .the number of charge traps is not uniformly distributed over the ccd imaging area , and hence the cti is also not uniform over the imaging area . therefore the cti correction should be independently executed for each column ( vertical transfer channel ) .however , the limited flux of calibration x - rays impedes an accurate and frequent measurement of the cti and its spatial variation over the imaging area . recentlya charge injection technique has been developed .a charge packet with the amount of is artificially injected through a charge injection gate into each column and is subsequently readout as after the charge transfer in the same manner as the x - ray event .this method allows us to measure a charge loss ( = ) for each column , which in turn , can potentially be a powerful tool for the cti calibration .the x - ray imaging spectrometer ( xis ; and references therein ) onboard the japanese 5 x - ray satellite suzaku is equipped with a charge injection structure .the low earth orbit makes the detector background of the xis lower and more stable than those of chandra and xmm - newton .however , the xis s gain and energy resolution have gradually degraded due to the increase of the cti during transit through the south atlantic anomaly . after six - month from the first - light of the xis , the cti has increased to non - negligible level . this result has stimulated us to investigate the in - orbit charge injection performances .this paper reports on the results .section [ sect : about_ccd ] and [ sect : about_ci ] of this paper describe the xis and charge injection capability .section [ sect : experiments ] is devoted to the cti experiments , while section [ sect : cal_results ] describes the results of the ground and onboard experiments .the discussion and summary are in section [ sect : discussion ] and [ sect : summary ] , respectively .the mean ionization energy of an electron by an x - ray in silicon is assumed to be throughout this paper . 
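Before turning to the instrument, a schematic example of the per-column bookkeeping implied by the charge injection measurement described above may be useful: a known charge is injected at the top of each column, read out after the vertical transfers, and the loss is recorded column by column. The numbers below are invented, and the simple definition of CTI as the fractional loss per transfer is the usual convention rather than the exact quantity used in the XIS processing.

```python
# Schematic per-column charge-injection bookkeeping.  Numbers are invented
# for illustration; the naive CTI definition (fractional loss per transfer)
# is the common convention, not the exact XIS pipeline quantity.
import numpy as np

N_COLUMNS = 1024        # columns in the imaging area
N_TRANSFERS = 1024      # vertical transfers from injection register to readout

rng = np.random.default_rng(1)
q_injected = np.full(N_COLUMNS, 1400.0)   # electrons (~5.1 keV at 3.65 eV/e-)
q_readout = q_injected - rng.gamma(shape=2.0, scale=15.0, size=N_COLUMNS)

charge_loss = q_injected - q_readout                 # Delta Q per column
cti = charge_loss / (q_injected * N_TRANSFERS)       # naive per-transfer loss

worst = np.argsort(cti)[-5:]
print("mean CTI:", cti.mean())
print("five worst columns:", worst, cti[worst])
```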
have provided details on the xis and ccds ( mit lincoln laboratory model ccid41 ) .hence , we briefly duplicate for the charge injection study of this paper .the ccds are the three - phase frame transfer type and have basically the same structure as those of chandra / acis .each pixel size is 24 m 24 m and the number of the pixels is 1024 1024 in the imaging area .therefore , the size of the imaging area is 25 mm 25 mm .the exposure time is 8 sec for the normal clocking mode . with the radiative cooling and a peltier cooler , the ccd temperature is controlled to 90 .hence , the dark current is suppressed to electrons/8sec / pixel .four ccds are onboard suzaku .three of them are the front - illuminated ( fi ) chips , while the other is the back - illuminated ( bi ) chip .the bi chip has the same basic specifications as the fi chips , except that the bi chip has a larger quantum efficiency in the soft energy band .the ground calibrations verified that the thickness of the depletion layer is 65 m for the fi chips , and 42 m for the bi chip . in order to see the function of the ccd , we give the schematic view of the xis fi chip in figure [ fig : ccd_pic_sche ] .each ccd chip has four segments ( from a to d ) , and each segment has one readout node . calibration sources , which irradiate the upper edge of the segment a and d , are used for the monitoring of the gain , cti , and energy resolution in orbit . have reported details of the charge injection structure . by referring to figures [ fig : ccd_pic_sche ] and [ fig : ci_schematic ] , herewe describe the essential function of the charge injection . for the brevity to describe the charge injection technique and its results , notations of parameters , which will be frequently used in this paperare listed in table [ tab : abbr ] . [ cols="<,<",options="header " , ] a serial register of 1024 pixels long is attached to the next of the upper edge of the imaging area ( hereafter we call this charge injection register ) .an input gate is equipped at left of the charge injection register ( see figure [ fig : ccd_pic_sche ] ) . pulling down the potential for electrons at the input gate and the next electrode ( s3 in figure [ fig : ci_schematic ] ) ,the potential well is filled with charges with the amount of . then pulling up the potential ,the charge packet is spilled .the amount of charge is controlled by the offset voltage between the input gate and the next electrode ( s3 ) . in the normal xis operations ,this fill - and - spill cycle is repeated every 1/40960 sec 24 .the deposited charge packets ( ) in the charge injection register are vertically transferred into the imaging area by the same clocking pattern as that of x - ray events .a part of the packet ( ) will be trapped by the charge traps during the transfer .after the launch , because is not negligible due to the increase of charge traps , only can be measured with a injection of single charge packet and the normal readout . on the other hand ,we need the measurement of in order to estimate the cti .we hence adopt the following injection pattern , with which we can obtain both values of and simultaneously as shown in the left panel of figure [ fig : ciimage ] . 
after injecting a _ test _ charge packet of in one row ( horizontal transfer channel ) , we inject packets with the same amount of in five subsequent rows : the preceding four packets are called the _ sacrificial _ charge packets andthe last one is the _ reference _ charge packet .the _ test _charge packet is separated from trains of _ sacrificial _charge packets to allow the event detection algorithm .the _ test _ charge packet may suffer from traps in the transfer channel ( column ) , and therefore , the readout charge ( ) should be . on the other hand , since the preceding _ sacrificial _ charge packets may fill the charge traps , the subsequent _ reference _ charge packet may not be trapped if the clocking time is shorter than the de - trapping time scale .thus , the readout charge ( ) from the _ reference _ charge packet should be approximately equal to .the right panel of figure [ fig : ciimage ] shows a frame image taken during our experiments .the positions of the charge packet trains are periodically shifted by one column to allow the proper event detection algorithm .the _ sacrificial _charge packets are not read because of the same reason .the value after the transfer can be measured by subtracting the mean pulse height amplitude ( pha ) of the _ test _ events from that of the _ reference _ events . by selecting different , we can also investigate the relation between and .before the launch of suzaku , we conducted the ground experiments with the same type of ccd chip as the xis in order to verify the performance of charge injection function .the ccd chip was damaged by protons utilizing the cyclotron at the northeast proton therapy center at boston ( usa ) .proton beam of 40 mev was irradiated on the circular region shown in figure [ fig : frameimg ] .the total fluence was 2.0 10 , which is approximately the same as that the xis may receive during several years in orbit .experiments with damaged and non - damaged chips were conducted in mit and kyoto university respectively , using the fluorescent x - ray generation system ( for the latter ) and a radioisotope . during these experiments , the sensors were maintained at a pressure of torr and a ccd temperature of 90 . in this paper, we report on the results of the fi chip , because the quantitative differences between the fi and bi chips are small .the in - orbit charge injection experiment was conducted in the observations of the supernova remnant ( snr ) 1e010272.3 in the large magellanic cloud .all the data were acquired with the normal clocking mode and with the 3 or 5 editing modes .we applied several values of in order to investigate the dependance of on .table [ tab : explog ] summarizes the experimental logs .lcc ' '' '' & xis0 & xis2 & xis1 & xis3 + ' '' '' date & 2006/7/17 & 2006/6/26 - 27 + ' '' '' time & 06:06:50 - 21:39:46 & 02:47:07 - 02:37:55 + ' '' '' the equivalent x - ray energy & 0.6/4.2/8.0 for xis0 & 0.3/7.3 for xis1 + ' '' '' of the injected charge packets ( kev ) & 0.6/3.9/7.8 for xis2 & 0.5/4.6 for xis3 + ' '' '' total effective exposure ( ksec ) & 6.0 & 5.2 +in order to reliably estimate , in _ reference _ and _ test _ events must be equal , because is the difference between and .if can be controlled more accurately than that of charge dispersion of x - ray events in each column , the charge injection offers obvious advantages over conventional cti measurements using x - ray calibration sources .we hence check the stability of when a designed offset voltage is applied at the input gate . 
for this propose ,we use the non - damaged chip , because should be nearly equal to due to the negligible number of charge traps .figure [ fig : stability ] shows the spectra of fluorescent x - rays ( ti k ) collected on ground and of approximately the same equivalent energy as x - rays collected both on ground before proton damage and in orbit after the damage .events are extracted from one arbitrary column for all the data sets .the fwhm of the is 91 ev for ground data and 95 ev for in - orbit data , that are significantly better than that of the x - ray data of 113 ev .the former fwhms mean the stability of of the charge injection under the controlled offset voltage at the input gate , and the latter is primarily due to the fano noise .thus , we verify that is sufficiently stable to estimate .next , we investigate whether the ratio of is proportional to the cti , which is measured with the calibration x - rays. the pha histograms of and are fitted with a single gaussian for each column . for the events , we extract the events from upper and lower 100 rows of the imaging area ( and ) .the 100 rows are selected for the statistical point of view in the spectral fitting .figure [ fig : correlation ] shows the correlations between and . for the non - damaged chip ( the left panel ) , because during the parallel transfer is 0 or 1 electron , the correlation can be hardly seen . for the damaged - chip ,on the other hand , the increase of cti is significant in the circular region as shown in figure [ fig : frameimg ] .we can see a clear positive correlation between the and charge injection events , especially in segment a , b , and c. the best - fit slope is .05 and the correlation coefficient is 0.94 ( d.o.f.=976 ) .hence , properly reflects the cti . based on the verification for the charge injection technique in section [ subsec : stability ], we apply this technique to the onboard data .figure [ fig : reftest ] shows the pha distribution for ( open circle ) and ( cross ) as a function of x - coordinate ( column ). the is clearly observed in orbit for the first time . in order to estimate the dependance on , we selected two or three different values for the in - orbit charge injection experiment ( table [ tab : explog ] ) . assuming the single power law function of as in , we derive for each column .the results are given in figure [ fig : lostvspha ] .the weighted mean values of are 0.62 , 0.71 , 0.62 , and 1.00 for the xis0 , 1 , 2 , and 3 , respectively. 
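The per-column estimate of this exponent can be obtained by a simple least-squares fit in log-log space to the two or three injected-charge levels, as in the following sketch; the charge levels and losses below are invented, and the conversion from keV to electrons assumes 3.65 eV per electron.

```python
# Sketch of a per-column power-law fit Delta Q = c * Q**alpha, estimated from
# the few injected-charge levels by ordinary least squares in log-log space.
# The charge levels and losses below are invented.
import numpy as np

def fit_alpha(q_levels, q_losses):
    """Return (alpha, c) from log(Delta Q) = log(c) + alpha * log(Q)."""
    alpha, log_c = np.polyfit(np.log(q_levels), np.log(q_losses), deg=1)
    return alpha, np.exp(log_c)

# e.g. three injection levels roughly equivalent to 0.6, 4.2 and 8.0 keV
q_levels = np.array([165.0, 1150.0, 2190.0])   # electrons (illustrative)
q_losses = np.array([4.0, 14.0, 22.0])         # measured Delta Q in one column

alpha, c = fit_alpha(q_levels, q_losses)
print(f"alpha = {alpha:.2f}, c = {c:.3f}")
```

With the invented numbers above the fit returns an exponent of about 0.66, of the same order as the flight values just quoted.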
these values are roughly consistent with another ground experiment .the charge injection data provide only the information on the at the edge of the imaging area and hence we need to know the y coordinate dependance of from the data of celestial objects that extend over the field of view of the xis .figure [ fig : sgrcactydep ] shows the center energy of the 6.4 kev line as a function of the y coordinate for the diffuse x - rays from the sgr c region ( obs .sequence = 500018010 , obs .date = 2006 - 02 - 20 ) .the line center at the lower edge of the image ( y=0 ) deviates from 6.40 kev due to the cti during the fast transfer of all the data in the imaging area to the frame - store region ( hereafter we call this frame - store - transfer ) .however , the origin of the deviations at the other image regions is complicated because the charges suffer from three kinds of the cti : the cti in the imaging area due to the frame - store - transfer , that in the frame - store region due to the the frame - store - transfer and that in the frame - store region due to the subsequent slow vertical transfer .the cti during the horizontal transfer is ignored in this work .the cti depends on the number density of charge traps and the transfer speed .the shielding depth and pixel size are different in the imaging area and frame - store region .the ctis therefore may be different between these areas .however , we can not estimate each cti component separately from the total cti seen in figure [ fig : sgrcactydep ] . due to this limitation , we assume following phenomenological compensation of the charge as shown in figure [ fig : deltaq ] .the charge loss during the transfer is assumed to consists of a component depending on y ( ) and a y - independent one ( ) .both components are proportional to , but their proportionality constants may be different from each other . considering that the is generated by an x - ray absorbed at , the column - averaged charge loss of given by .\end{aligned}\ ] ] and can be estimated from figure [ fig : sgrcactydep ] .we next determined the column - dependent charge loss of so that the following relation holds at any y coordinate . where can be estimated from figure [ fig : reftest ] .hence we can compensate charge correctly in the entire region of the imaging area . without a cti correction ,the energy resolution gradually decreases at a rate of 50 ev in fwhm @ 5.9 kev per year . using the determined with the charge injection experiment, we make the new spectra for the calibration sources in the observation given in table [ tab : explog ] .figure [ fig : cispectra ] shows the calibration source spectra after the cti ( upper ) and cti ( lower ) correction .the tail component after the cti correction is significantly reduced compared to that after the cti correction .this strongly indicates that the origin of the tail component is the dispersion of the cti among columns .hence the temporal variation in the response function can be suppressed by the charge injection technique .figure [ fig : eneres ] shows the fwhm of the calibration source spectra after the cti and cti correction . on average , the fwhm is significantly improved from 193 ev to 173 ev .these are the first in - orbit results for the charge injection function .our final purpose is to demonstrate that parameters are effective for celestial objects .we apply the parameters to the tycho s snr data ( obs . 
sequence = 500024010 , obs .date = 2006 - 06 - 27 ) .figure [ fig : tychosika ] shows the spectra around the he - like si k emission line in the west part of tycho s snr for the xis3 after the cti correction and cti correction .we see the same benefit as shown in figure [ fig : cispectra ] .note that the latter is multiplied by 0.8 to avoid confusion .radiation damage continuously increases while in orbit , and hence , the benefit of the charge injection technique will become more apparent over time as shown in this figure .because the cti correction parameters are time dependent , must be periodically measured .we make two sets of cti using derived from the charge injection experiment in may and july 2006 , and apply to the calibration source data taken in may 2006 for the xis0 and xis2 .the results are shown in figure [ fig : longterm ] .the average cti in july is normalized to that in may , while the relative variation in cti is preserved .while the observation time of the charge injection and data differs by two months , the fwhm is significantly degraded only for the brightest calibration source .this confirms that , in practical sense , the interval of two months between each cti measurement is sufficient because there are few celestial objects which has emission line brighter than this calibration source .although the charge injection technique significantly improves the energy resolution of the calibration source spectra , the fwhms , as shown in figure [ fig : eneres ] , are still larger than those before the launch ( 130 ev ) . because the charge trapping is a probability process, the number of trapped electrons ( ) may also have a probability deviation , which would increase the line width after the transfer .hence the cti correction with charge injection technique can not completely restore the line broadening .we confirm this effect in figure [ fig : sighikaku ] , which shows the fwhm of the charge injection _ reference _ events and those of the _ test _ events before and after the charge compensation for the xis3 in - orbit data .in addition , the _ test _ events measured on ground ( before the radiation damage ) are shown .the smaller fwhms in all segments of the _ reference _ events are due to the fact that the _ reference _ events may not lose charge because the charge traps are already filled by the _sacrificial _ events .in fact , the fwhms of the _ reference _ events are consistent with those of the _ test _ events collected on ground although segment d show anomalous trend .another characteristic of figure [ fig : sighikaku ] is that the fwhms increase along with x coordinate for the _ reference _ events and _ test _ events collected on ground .this is due to the cti in the charge injection register .this influences the accuracy of the mean pha of the _ reference _ events and hence the accuracy of .however , the cti in the charge injection register is rather lower than that in the imaging area and frame - store region because the charge packets are injected with an interval of 3 pixels and hence they can work as _ sacrificial _charges .in fact , as shown in figure [ fig : eneres ] , there is no significant difference between segment a and d for the improvement of the fwhm of the calibration source spectra .these results lead us to use the charge injection capability to fill the traps in the transfer channel by periodically injection of .a ground experiment shows that charges injected into every 54 row improve the energy resolution .we are trying to utilize this charge 
injection technique for the onboard observations .the results will be presented in a separate paper .the results of ground and in - orbit experiments of the charge injection capability of the xis are as follows : 1 .the amount of injected charge ( ) is sufficiently stable ( dispersion of 91 ev in fwhm ) , which should be compared to the x - ray energy resolution ( fwhm of 113 ev ) with the same amount of charge .the cti depends on pha of the charge .the charge loss can be explained as 3 . with the cti correction , the energy resolution ( fwhm ) of improved compared to that with the cti correction ( from 193 ev to 173 ev on an average ) at the time of one year after the launch and the tail component in the line profile is also significantly reduced .the improved charge compensation method is applied to the tycho s snr data , which results in the same benefit as the calibration source data .we confirm that the energy resolution can be largely improved by filling the charge trap .hence , we are currently trying another in - orbit charge injection capability experiment to actively fill the charge traps in the transfer channel by the charge injection technique .the authors express their gratitude to all the members of the xis team .hn and hy are financially supported by the japan society for the promotion of science .this work is supported by a grant - in - aid for the 21st century coe `` center for diversity and universality in physics '' from ministry of education , culture , sports , science and technology , and by a grant - in - aid for scientific research on priority areas in japan ( fiscal year 2002 - 2006 ) `` new development in black hole astronomy '' .bautz , m. w. , kissel , s. e. , prigozhin , g. y. , lamarr , b. , burke , b. e. , gregory , j. a. & the xis team , 2004 , , 5501 , 111 burke , b. e. , mountain , r. w. , daniels , p. j. , cooper , m. j. , & dolat , v. s. , 1993 , , 2006 , 272 gendreau , k. , bautz , m. w. , & ricker , g. , 1993 , nucl .instr . and meth .a , 335 , 318 gendreau , k. , 1995 , phd thesis , massachusetts institute of technology grant , c. e. , bautz , m. w. , kissel , s. e. , & lamarr , b. , 2004 , , 5501 , 177 hamaguchi , k. , maeda , y. , matsumoto , h. , nishiuchi , m. , tomida , h. , koyama , k. , awaki , h. , tsuru , t. g. , 2000 , nucl .instr . and meth .a , 450 , 360 hardy , t. , murowinski , r. , & deen , m. j. , 1998 , ieee trans ., 45 , 154 koyama , k. et al .2007 , , 59 , 23 lamarr , b. , bautz , m. w. , kissel , s. e. , prigozhin , g. y. , hayashida , k. , tsuru , t. g. , & matsumoto , h. , 2004 , , 5501 , 385 meidinger , n. , schmalhofer , b. , & strder , l. , 2000 , nucl . instr . and meth .a , 439 , 319 mitsuda , k. et al . , 2007 , , 59 , 1 plucinsky , p. p. , & virani , s. n. , 2000 , , 4012 , 681 prigozhin , g. y. , burke , b. e. , bautz , m. w. , kissel , s. e. , lamarr , b. , & freytsis , m. , 2004 , , 5501 , 357 smith , d. r. , holland , a. d. , hutchinson , i. b. , abbey , a. f. , pool , p. j. , burt , d. , & morris , d. , 2004 , , 5501 , 189 tompsett , m. f. , 1975 , ieee trans .ed , 22 , 305
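To illustrate the two-component charge compensation described above (cf. figure [fig : deltaq]) — a column-dependent loss proportional to the transfer distance y plus a y-independent loss, both scaling with the measured charge as a power law — the following sketch restores the charge of a single event. The functional form is a schematic reading of that description, and the coefficients, exponent and column values are illustrative placeholders, not calibrated XIS parameters.

```python
# Schematic charge compensation following the two-component picture above:
# a column-dependent loss that grows with the transfer distance y, plus a
# y-independent loss, both scaling as q**alpha.  All coefficients below are
# illustrative placeholders, not calibrated XIS values.
import numpy as np

N_ROWS = 1024

def restore_charge(q_meas, y, column, c_col, c_const, alpha, q_ref):
    """Add back the estimated charge loss for an event of measured charge
    q_meas detected at row y in a given column."""
    scale = (q_meas / q_ref) ** alpha
    dq_ydep = (y / N_ROWS) * c_col[column] * scale   # loss growing with y
    dq_const = c_const * scale                       # y-independent loss
    return q_meas + dq_ydep + dq_const

# toy per-column loss coefficients measured at the top row (y = N_ROWS)
rng = np.random.default_rng(2)
c_col = rng.gamma(2.0, 10.0, size=1024)              # electrons at q = q_ref

q_corrected = restore_charge(q_meas=1500.0, y=700, column=312,
                             c_col=c_col, c_const=8.0, alpha=0.7, q_ref=1600.0)
print(q_corrected)
```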
A charge injection technique is applied to the X-ray CCD camera, XIS (X-ray Imaging Spectrometer), onboard Suzaku. The charge transfer inefficiency (CTI) in each CCD column (vertical transfer channel) is measured by injecting charge packets into the transfer channel and reading them out after transfer. This paper reports the performance of the charge injection capability based on ground experiments using a radiation-damaged device and on in-orbit measurements with the XIS. The ground experiments show that charges are injected stably, with a dispersion of 91 eV in FWHM in a specific column for charges equivalent to an X-ray energy of 5.1 keV. This dispersion is significantly smaller than that of X-ray events, 113 eV (FWHM), at approximately the same energy. The amount of charge lost during transfer in a specific column, measured with the charge injection capability, is consistent with that measured with the calibration source. These results indicate that the charge injection technique can measure column-dependent charge losses more accurately than the calibration sources can. The column-to-column CTI correction of the calibration source spectra significantly reduces the line widths compared with a column-averaged CTI correction (from 193 eV to 173 eV in FWHM on average, one year after launch). In addition, this method significantly reduces the low-energy tail in the line profile of the calibration source spectrum.
network theory is systematically used to address problems of scientific and societal relevance , from the prediction of the spreading of infectious diseases worldwide to the identification of early - warning signals of upcoming financial crises .more in general , several dynamical and stochastic processes are strongly affected by the topology of the underlying network .this results in the need to identify the topological properties that are statistically significant in a real network , i.e. to discriminate which higher - order properties can be directly traced back to the local features of nodes , and which are instead due to additional factors . to achieve this goal ,one requires ( a family of ) randomized benchmarks , i.e. ensembles of graphs where the local heterogeneity is the same as in the real network , and the topology is random in any other respect : this defines a _ null model _ of the original network .nontrivial patterns can then be detected in the form of empirical deviations from the theoretical expectations of the null model .important examples of such patterns is the presence of _ motifs _ ( recurring subgraphs of small size , like building blocks of a network ) and _ communities _ ( groups of nodes that are more densely connected internally than with each other ) . to detect these and many other patterns, one needs to correctly specify the null model and then calculate e.g. the average and standard deviation ( or alternatively a confidence interval ) of any topological property of interest over the corresponding randomized ensemble of graphs .unfortunately , given the strong heterogeneity of nodes ( e.g. the power - law distribution of vertex degrees ) , the solution to the above problem is not simple .this is most easily explained in the case of binary graphs , even if similar arguments apply to weighted networks as well . for simple graphs ,the most important null model is the ( undirected binary ) configuration model ( ubcm ) , defined as an ensemble of networks where the degree of each node is specified , and the rest of the topology is maximally random .since the degrees of all nodes ( the so - called _ degree sequence _ ) act as constraints , `` maximally random '' does not mean `` completely random '' : in order to realize the degree sequence , interdependencies among vertices necessarily arise .these interdependencies affect other topological properties as well .so , even if the degree sequence is the only quantity that is enforced ` on purpose ' , other structural properties are unavoidably constrained as well .these higher - order effects are called `` structural correlations '' . in order to disentangle spurious structural correlations from genuine correlations of interest, it is very important to properly implement the ubcm in such a way that it takes the observed degree sequence as input and generates expectations based on a uniform and efficient sampling of the ensemble .similar and more challenging considerations apply to other null models , defined e.g. for directed or weighted graphs and specified by more general constraints .several approaches to the problem have been proposed and can be roughly divided in two large classes : _ microcanonical _ and _ canonical _ methods .microcanonical approaches aim at artificially generating many randomized variants of the observed network in such a way that the constrained properties are identical to the empirical ones , thus creating a collection of graphs sampling the desired ensemble . 
in these algorithmsthe enforced constraints are ` hard ' , i.e they are met exactly by each graph in the resulting ensemble . as we discuss in this paper , this strong requirement implies that most microcanonical approaches proposed so far suffer from various problems , including bias , lack of ergodicity , mathematical intractability , high computational demands , and poor generalizability . on the other hand , in_ approaches the constraints are ` soft ' , i.e. they can be violated by individual graphs in the ensemble , even if the ensemble average of each constraint still matches the enforced value exactly .canonical approaches are generally introduced to directly obtain , as a function of the observed constraints ( e.g. the degree sequence ) , exact mathematical expressions for the expected topological properties , thus avoiding the explicit generation of randomized networks . however , this is only possible if the mathematical expressions for the topological properties of interest are simple enough to make the analytical calculation of the expected values feasible .unfortunately , the most popular approaches rely on highly approximated expressions leading to ill - defined or unknown probabilities that can not be used to sample the ensemble .these approximations are in any case available only for the simplest ensembles ( e.g. the ubcm ) , leaving the problem unsolved for more general constraints .this implies that the computational use of canonical null models has not been implemented systematically so far . in this paper , by combining an exact maximum - likelihood approach with an efficient computational sampling scheme ,we define a rigorously unbiased method to sample ensembles of various types of networks ( i.e. directed , undirected , weighted , binary ) with many possible constraints ( degree sequence , strength sequence , reciprocity structure , mixed binary and weighted properties , etc . ) .we make use of a series of recent analytical results that generate the exact probabilities in all these cases of interest and consider various examples illustrating the usefulness of our method when applied to real - world networks .we also analyse the canonical fluctuations of the constraints in each model .previous theoretical analyses of fluctuations in some network ensembles have been carried out , for instance , in ref. for graphs with given degree sequence and in ref . for graphs with given community structure .also , a comparison between some microcanonical and canonical network ensembles has been carried out in ref . . in this paper, we provide a complete analytical characterization of the fluctuations of each constraint for all the ensembles under study . for the majority of these ensembles ,the exact analytical expressions characterizing the fluctuations are derived here for the first time . moreover , in our maximum - likelihood approach the knowledge of the hidden variables allows us to calculate , for the first time , the exact value of the fluctuations explicitly for each node in the empirical networks considered .our results suggest that , unlike in most physical systems , the microcanonical and canonical versions of the graph ensembles considered here are surprisingly _ not _ equivalent ( see ref . for a recent mathematical proof of ensemble nonequivalence in the ubcm ) . in any case , our canonical method can in principle be converted into an unbiased microcanonical one , if we discard all the sampled networks that violate the sharp constraints . 
at the end of the paper, we discuss the advantages and disadvantages of this procedure explicitly , and clarify that canonical ensembles are more appropriate in presence of missing entries or errors in the data . finally , we include an appendix with a description of a algorithm that we have explicitly coded in various ways .the algorithm allows the users to sample all the graph ensembles described in this paper , given an empirically observed network ( or even only the values of the constraints ) .in this section , we briefly discuss the main available approaches to the problem of sampling network ensembles with given constraints , and highlight the limitations that call for an improved solution .we consider both microcanonical and canonical methods . in both cases , since the ubcm is the most popular and most studied ensemble , we will discuss the problem by focusing mainly on the implementations of this model .the same kind of considerations extend to other constraints and other types of networks as well .there have been several attempts to develop microcanonical algorithms that efficiently implement the ubcm .one of the earliest algorithm starts with an empty network having the same number of vertices of the original one , where each vertex is assigned a number of ` half edges ' ( or ` edge stubs ' ) equal to its degree in the real network .then , pairs of stubs are randomly matched , thus creating the final edges of a random network with the desired degree sequence .unfortunately , for most empirical networks , the heterogeneity of the degrees is such that this algorithm produces several multiple edges between vertices with large degree , and several self - loops .if the formation of these undesired edges is forbidden explicitly , the algorithm gets stuck in configurations where edge stubs have no more eligible partners , thus failing to complete any randomized network . to overcome this limitation , a different algorithm ( which is still widely used )was introduced .this `` local rewiring algorithm '' ( lra ) starts from the original network , rather than from scratch , and randomizes the topology through the iteration of an elementary move that preserves the degrees of all nodes .while this algorithm always produces random networks , it is very time consuming since many iterations of the fundamental move are needed in order to produce just one randomized variant , and this entire operation has to be repeated several times ( the mixing time being still unknown ) in order to produce many variants . besides these practical problems ,the main conceptual limitation of the lra is the fact that it is _ biased _ , i.e. it does not sample the desired ensemble uniformly .this has been rigorously shown relatively recently .for undirected networks , uniformity has been shown to hold , at least approximately , only when the degree sequence is such that where is the largest degree in the network , is the average degree , is the second moment , and is the number of vertices . clearly , the above condition sets an upper bound for the heterogeneity of the degrees of vertices , and is violated if the heterogeneity is strong .this is a first indication that the available methods break down for ` strongly heterogeneous ' networks . as we discuss later , most real - world networks are known to fall precisely within this class . 
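A quick way to see whether a given degree sequence lies in this "strongly heterogeneous" regime is to compare the quantities entering such conditions — the largest degree, the mean degree, the second moment and the network size — with the structural cut-off $\sqrt{\langle k\rangle N}$ discussed later in the text. The sketch below does this for a synthetic power-law degree sequence; the exponent and sample size are arbitrary choices made only for illustration.

```python
# Quick diagnostic of degree heterogeneity: the quantities entering the
# uniformity condition quoted above (largest degree, mean degree, second
# moment, network size), compared with the structural cut-off sqrt(<k> N).
# The degree sequence here is a synthetic power-law example.
import numpy as np

rng = np.random.default_rng(3)
degrees = np.round(rng.pareto(a=1.5, size=10_000) + 1).astype(int)  # gamma ~ 2.5

N = degrees.size
k_mean = degrees.mean()
k2_mean = (degrees ** 2).mean()
k_max = degrees.max()
structural_cutoff = np.sqrt(k_mean * N)

print(f"N = {N}, <k> = {k_mean:.2f}, <k^2> = {k2_mean:.1f}")
print(f"k_max = {k_max}, structural cut-off ~ {structural_cutoff:.1f}")
print("heterogeneity exceeds the cut-off:", k_max > structural_cutoff)
```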
For directed networks, where links are oriented and the constraints to be met are the numbers of incoming and outgoing links (in-degree and out-degree) separately, a condition similar to eq. ([eq_1]) is required to avoid the generation of bias. Again, this condition is strongly violated by most real-world networks. Moreover, the directed version of the LRA is also non-ergodic, i.e. it is in general not able to explore the entire ensemble of networks. It has been shown that ergodicity can be restored by introducing an additional triangular move inverting the direction of closed loops of three vertices. However, in order to restore uniformity (for both directed and undirected graphs) one needs to introduce an appropriate acceptance probability for the rewiring move. Unfortunately, the acceptance probability depends on some nontrivial property of the current network configuration. Since this property must be recalculated at each step, the resulting algorithm is significantly time consuming. Quantifying the bias generated by the LRA when eq. ([eq_1]) (or its directed counterpart) is violated is difficult, mainly because an exact mathematical characterization of microcanonical graph ensembles valid in such a regime is still lacking. Yet, the proof of the existence of bias provided in refs. is an obvious warning against the use of the LRA on strongly heterogeneous networks. The reader is referred to those papers for a discussion. Other recent alternatives rely on theorems, such as the Erdős-Gallai one, that set necessary and sufficient conditions for a degree sequence to be _graphic_, i.e. realized by at least one graph. These 'graphic' methods exploit such (or related) conditions to define biased sampling algorithms in conjunction with the estimation of the corresponding sampling probabilities, thus allowing one to statistically reweight the outcome and sample the ensemble effectively uniformly. Del Genio et al. show that, for networks with a power-law degree distribution, the computational complexity of sampling _just one_ graph using their algorithm remains moderate only as long as the largest degree does not exceed a bound which is a particular case of the so-called ``structural cut-off'' (discussed in more detail later), and increases significantly otherwise. For the moment, it is enough for us to note that this is another indication that, for strongly heterogeneous networks, the problem of sampling becomes more complicated. Unfortunately, most real networks violate this bound strongly. So, while 'graphic' algorithms do provide a solution for every network, their complexity increases for networks of increasing (and more realistic) heterogeneity. A more fundamental limitation is that these methods can only handle the problem of binary graphs with given degree sequence. The generalization to other types of networks and other constraints is not straightforward, as it would require the proof of more general 'graphicality' theorems, and _ad hoc_ modifications of the algorithm. Canonical approaches aim at obtaining, as a function of the observed constraints (e.g. the degree sequence), mathematical expressions for the expected topological properties, avoiding the explicit generation of randomized networks. For canonical methods the requirement of uniformity is replaced by the requirement that the probability distribution over the enlarged ensemble has maximum entropy.
for binary graphs ,since any topological property is a function of the adjacency matrix of the network ( with entries if the vertices and are connected , and otherwise ) , the ultimate goal is that of finding a mathematical expression for the probability of occurrence of each graph .this allows to compute the expected value of as .importantly , for canonical ensembles with local constraints factorizes to a product over pairs of nodes , where each term in the product involves the probability that the vertices and are connected in the ensemble . determining the mathematical form of is the main goal of canonical approaches .note that , by contrast , in the microcanonical ensemble all links are dependent on each other ( the degree sequence must be reproduced exactly in each realization ) , which implies that the probability of the entire graph does not factorize to node - pair probabilities . for binary undirected networks , the most popular specification for is the factorized one : ( where is the degree of node and is the total degree over all nodes ) . for weighted undirected networks , where each link can have a non - negative weight and each vertex characterized by a given strength ( the total weight of the links of node ) , the corresponding assumption is that the expected weight of the link connecting the vertices and is ( where is the total strength of all vertices ) .equations and are routinely used , and have become standard textbook expressions .the most frequent use of these expressions is perhaps encountered in the empirical analysis of _ communities _ , i.e. relatively denser modules of vertices in large networks .most community detection algorithms compare different partitions of vertices into communities ( each partition being parametrized by a matrix such that if the vertices and belong to the same community , and otherwise ) and search for the optimal partition .the latter is the one that maximizes the modularity function which , for binary networks , is defined as {ij } \label{eq_4}\ ] ] where eq. appears explicitly as a null model for . for weighted networks , a similar expression involving eq . applies .other important examples where eq .is used are the characterization of the connected components of networks , the average distance among vertices , and more in general the theoretical study of percolation ( characterizing the system s robustness under the failure of nodes and/or links ) and other dynamical processes on networks . due to the important role that these equations play in many applications , it is remarkable that the literature puts very little emphasis on the fact that eqs . and are valid only under strict conditions that , for most real networks , are strongly violated .it is evident that eq .represents a probability only if the largest degree in the network does not exceed the so - called `` structural cut - off '' , i.e. if obviously , the above condition sets an upper bound for the allowed heterogeneity of the degrees , since both and are determined by the same degree distribution . 
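As a concrete illustration of this bound, the sketch below evaluates the factorized connection probability, here taken in its standard textbook form $p_{ij}=k_ik_j/(2L)$ with $2L$ the total degree, and counts how many node pairs exceed 1, i.e. violate the structural cut-off $\sqrt{2L}$. The helper name and the illustrative (not necessarily graphical) degree sequence are our own.

```python
# Sanity check for the factorized (Chung-Lu-type) connection probability discussed
# above, assumed here in its standard textbook form p_ij = k_i * k_j / (2L).
# It reports how many node pairs exceed 1, i.e. violate the structural cut-off.
import numpy as np

def check_structural_cutoff(degrees):
    k = np.asarray(degrees, dtype=float)
    two_L = k.sum()                          # total degree = twice the number of links
    p = np.outer(k, k) / two_L               # "probability" k_i k_j / (2L) for every pair
    np.fill_diagonal(p, 0.0)                 # ignore self-pairs
    violations = int((p > 1.0).sum() // 2)   # each undirected pair counted once
    k_max, k_cut = k.max(), np.sqrt(two_L)   # structural cut-off ~ sqrt(2L)
    print(f"k_max = {k_max:.0f}, structural cut-off ~ {k_cut:.1f}, "
          f"pairs with p_ij > 1: {violations}")
    return violations

# Example: a strongly heterogeneous (power-law-like) illustrative degree sequence
rng = np.random.default_rng(1)
degrees = np.round(rng.pareto(1.5, size=1000) + 1)
check_structural_cutoff(degrees)
```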
Unfortunately, as we discuss below, it has been shown that the largest degree strongly exceeds the structural cut-off in most real-world networks, making eq. ill-defined. It should be noted that in principle the knowledge of the connection probabilities allows one to sample networks from the canonical ensemble very easily, by running over all pairs of nodes and connecting them with the appropriate probability. However, the fact that the factorized expression exceeds unity when the cut-off is violated makes such a 'probability' useless for sampling purposes. This is why, despite their conceptual simplicity, general algorithms to sample canonical ensembles of networks have not been implemented so far, and the emphasis has remained on microcanonical approaches. The above equations, along with our discussion, show that most methods run into problems when the heterogeneity of the network is too pronounced: strongly heterogeneous networks elude most microcanonical and canonical approaches proposed so far. Unfortunately, networks in this extreme regime are known to be ubiquitous, and represent the rule rather than the exception. A simple way to prove this is by directly checking whether the largest degree exceeds the structural cut-off. As Maslov et al. first noticed, in real networks the cut-off is strongly and systematically exceeded: for instance, for the internet the largest degree exceeds the structural cut-off roughly ten-fold. Consequently, if eq. were applied to the two vertices with largest degree, the resulting connection 'probability' would be well above unity, i.e. more than 40 times larger than any reasonable estimate for a probability. We also note that, when inserted into eq. , this value would produce, in the summation, a single term 40 times larger than any other 'regular' (i.e. of order unity) term, thus significantly biasing the community detection problem. To the best of our knowledge, a study of the magnitude of this bias has never been performed. The internet is not a special case, and similar results are found in the majority of real networks, making the problem entirely general. To see this, it is enough to exploit the fact that most real networks have a power-law degree distribution with exponent $\gamma$ between 2 and 3. For these networks, the average degree is finite but the second moment diverges. Therefore the structural cut-off scales as $\sqrt{N}$, which bounds the range of validity of the factorized expressions above. By contrast, extreme value theory shows that the largest degree scales as $N^{1/(\gamma-1)}$, with $1/(\gamma-1)>1/2$ in this range of exponents. This implies that the ratio between the largest degree and the cut-off diverges for large networks, i.e. the largest degree is infinitely larger than the allowed cut-off value. Unfortunately, many results and approaches that have been obtained by assuming that the cut-off is not exceeded are naively extended to real networks where, in most of the cases, it is strongly exceeded. Therefore, although this might appear as an exaggerated claim, most analyses of real-world networks (including community detection) that have been carried out so far have relied on incorrect expressions, and have been systematically affected by an uncontrolled bias. In theoretical and computational models of networks, the problem is normally circumvented by enforcing the cut-off condition explicitly, e.g.
by considering a truncated power-law distribution. This procedure is usually justified with the expectation that the inequality should hold for sparse networks where the average degree does not grow with the number of nodes $N$, as in most real networks. This interpretation of the role of sparsity is however misleading, since in real scale-free networks with such exponents the average degree is finite irrespective of the presence of the cut-off. This makes those networks sparse even without assuming a truncation in the degree distribution. As a matter of fact, as is clear from the example above, real networks systematically violate the cut-off value, and are therefore 'strongly heterogeneous', even if sparse. Incidentally, the fact that a high density is not the origin of the breakdown of the available approaches should be clear by considering that dense but homogeneous networks (including the densest of all, i.e. the complete graph) are such that the structural cut-off is not exceeded, and are therefore correctly described by eq. , just like sparse homogeneous networks. This confirms that the problem is in fact due to _strong heterogeneity_ and not to high density. The above arguments can be extended to other ensembles of networks with different constraints. The general conclusion is that, since real-world networks are generally strongly heterogeneous, the available approaches either break down or become computationally demanding. Moreover, it is difficult to generalize the available knowledge to modified constraints and different types of graphs. In what follows, building on a series of recent results characterizing several canonical ensembles of networks, we introduce a unified approach to sample these ensembles in a fast, unbiased and efficient way. In our approach, the functional form of the probability of each graph in the ensemble is derived by maximizing Shannon's entropy (thus ensuring that the sampling is unbiased), and the numerical coefficients of this probability are derived by maximizing the probability (i.e. the likelihood) itself. Since this double maximization is the core of our approach, we call our method the ``maximize and sample'' (``max & sam'' for short) method. We also provide a code implementing all our sampling algorithms (see appendix). We will consider canonical ensembles of binary graphs with given degree sequence (both undirected and directed), of weighted networks with given strength sequence (both undirected and directed), of directed networks with given reciprocity structure (both binary and weighted), and of weighted networks with given combined strength sequence and degree sequence. In all these cases, which have been treated only separately so far, we implement an explicit sampling protocol based on the exact result that the probability of the entire network always factorizes as a product of dyadic probabilities over pairs of nodes. This ensures that the computational complexity of our sampling method is always $O(N^2)$ in all cases considered here, irrespective of the level of heterogeneity of the real-world network being randomized. Therefore our method does not suffer from the limitations of the other methods discussed in sec. [sec:previous]: it is efficient and unbiased even for strongly heterogeneous networks. It should be noted that, while most microcanonical algorithms require as input the entire adjacency matrix of the observed graph (see sec. [sec:micro]), our canonical approach requires only the empirical values of the constraints (e.g. the degree sequence).
at a theoretical level, this desirable property restores the expectation that such constraints should be the sufficient statistics of the problem . at a practical level, it enormously simplifies the data requirements of the sampling process .for instance , if the sampling is needed in order to reconstruct an unknown network from partial node - specific information ( e.g. to generate a collection of likely graphs consistent with an observed degree and/or strength sequence ) , then most microcanonical algorithms can not be applied , while canonical ones can reconstruct the network to a high degree of accuracy .let us start by considering binary , undirected networks ( buns ) .a generic bun is uniquely specified by its binary adjacency matrix . the particular matrix corresponding to the observed graph that we want to randomize will be denoted by . as we mentioned , the simplest non - trivial constraint is the degree sequence , ( where is the degree of node ) , defining the ubcm . in our approach ,the canonical ensemble of buns is the set of networks with the same number of nodes , , of the observed graph and a number of ( undirected ) links varying from zero to the maximum value .appropriate probability distributions on this ensemble can be fully determined by maximizing , in sequence , shannon s entropy ( under the chosen constraints ) and the likelihood function , as already pointed out in .the result of the entropy maximization is that the graph probability factorizes as where .the vector of unknown parameters ( or ` hidden variables ' ) is to be determined either by maximizing the log - likelihood function or , equivalently , by solving the following system of equations ( corresponding to the requirement that the gradient of the log - likelihood vanishes ) : where is the observed degree of vertex and indicates its ensemble average . in both cases , the parameters vary in the region defined by for all . from eq .it is evident that only the observed values of the chosen constraints ( the _ sufficient statistics _ of the problem ) are needed in order to obtain the numerical values of the unknowns ( the empirical degree sequence fixes the value of , which in turn fix the value of all the probabilities ) . in any case , for the sake of clarity , in the code we allow the user to choose the preferred input - form ( a matrix , a list of edges , a vector of constraints ) .this applies to all the models described in this paper and implemented in the code .note that the above form of represents the exact expression that should be used in place of eq .. this reveals the highly non - linear and non - local character of the interdependencies among vertices in the ubcm : in random networks with given degree sequence , the correct connection probability is a function of the degrees of _ all vertices _ of the network , and not just of the end - point degrees as in eq .. only when the degrees are ` weakly heterogeneous ' ( mathematically , this happens when for all pairs of vertices , which implies ) , these structural interdependencies become approximately local . note that , in the literature , this is improperly called the `` sparse graph '' limit , while , as we discussed in sec.[sec : problems ] , what defines this limit is a low level of heterogeneity , and not sparsity . unlike eq ., the considered here always represents a proper probability ranging between 0 and 1 , irrespective of the heterogeneity of the network .this implies that eq . 
provides us with a recipe to sample the canonical ensemble of buns under the ubcm .after the unknown parameters have been found , they can be put back into eq . to obtain the probability to correctly sample any graph from the ensemble .the key simplification allowing this in practice is the fact that the graph probability is factorized , so that a single graph can be sampled stochastically by sequentially running over each pair of nodes and implementing a bernoulli trial ( whose elementary events are , with probability , and , with probability ) . this process can be repeated to generate as many configurations as desired .note that sampling each network has complexity , and that the time required to preliminarily solve the system of coupled equations to find the unknown parameters is independent on how many random networks are sampled and on the heterogeneity of the network .thus this algorithm is always more efficient than the corresponding microcanonical ones described in sec.[sec : micro ] . in fig .[ bun_a2 ] we show an application of this procedure to the network of liquidity reserves exchanges between italian banks in 1999 . for an increasing number of sampled graphs ,we show the convergence of the sample average of each entry of the adjacency matrix to its exact canonical expectation , analytically determined after solving the likelihood equations .this preliminary check is useful to establish that , in this case , generating 1000 networks ( bottom right ) is enough to reach a high level of accuracy . if needed , the accuracy can be quantified rigorously ( e.g. in terms of the maximum width around the identity line ) and arbitrarily improved by increasing the number of sampled matrices .note that this important check is impossible in microcanonical approaches , where the exact value of the target probability is unknown .we then select the sample of 1000 networks and confirm ( see the top panel of fig .[ bun_top2 ] ) that the imposed constraints ( the observed degrees of all nodes ) are very well reproduced by the sample average , and that the confidence intervals are narrowly spread around the identity line .this is an important test of the accuracy of our sampling procedure .again , the accuracy can be improved by increasing the number of sampled matrices if needed .after this preliminary check , the sample can be used to compare the expected and observed values of higher - order properties of the network .note that in this case we do not require ( or expect ) that these ( unconstrained ) higher - order properties are correctly reproduced by the null model .the entity of the deviations of the real network from the null model depends on the particular example considered , and the characterization of these deviations is precisely the reason why a method to sample random networks from the appropriate ensemble is needed in the first place . in the bottom panels of fig .[ bun_top2 ] we compare the observed value of two quantities of interest with their arithmetic mean over the sample .the two quantities are the average nearest neighbors degree ( annd ) , , and the clustering coefficient , of each vertex .note that , since our sampling method is unbiased , the arithmetic mean over the sample automatically weighs the configurations according to their correct probability . 
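The full UBCM pipeline described above (a preliminary likelihood step followed by one Bernoulli trial per node pair) can be sketched as follows. This is an illustrative Python reconstruction, independent of the MATLAB code described in the appendix: it assumes the standard UBCM form $p_{ij}=x_ix_j/(1+x_ix_j)$ for the connection probability and solves the likelihood equations $\langle k_i\rangle=k_i$ by a simple fixed-point iteration; the function names and the toy degree sequence are our own.

```python
# Illustrative sketch of the UBCM "maximize and sample" steps, assuming the standard
# form p_ij = x_i x_j / (1 + x_i x_j). Not the authors' released code; names are ours.
import numpy as np

def solve_ubcm(degrees, n_iter=10000, tol=1e-10):
    """Fixed-point iteration for the hidden variables x_i such that <k_i> = k_i."""
    k = np.asarray(degrees, dtype=float)
    x = k / np.sqrt(k.sum() + 1.0)                    # rough initial guess
    for _ in range(n_iter):
        xx = np.outer(x, x)
        # sum over j != i of x_j / (1 + x_i x_j)
        denom = (x / (1.0 + xx)).sum(axis=1) - x / (1.0 + x * x)
        x_new = k / np.maximum(denom, 1e-15)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = 0.5 * (x + x_new)                         # damped update for stability
    return x

def ubcm_probabilities(x):
    xx = np.outer(x, x)
    p = xx / (1.0 + xx)
    np.fill_diagonal(p, 0.0)
    return p

def sample_ubcm(p, n_samples, seed=0):
    """Sample adjacency matrices with one independent Bernoulli trial per node pair."""
    rng = np.random.default_rng(seed)
    n = p.shape[0]
    iu = np.triu_indices(n, k=1)
    samples = []
    for _ in range(n_samples):
        a = np.zeros((n, n), dtype=int)
        a[iu] = rng.random(len(iu[0])) < p[iu]        # a_ij = 1 with probability p_ij
        a += a.T                                      # symmetrize
        samples.append(a)
    return samples

# Usage: degrees -> hidden variables -> exact p_ij -> ensemble of random graphs
k_obs = [5, 4, 4, 3, 2, 2, 1, 1]
x = solve_ubcm(k_obs)
p = ubcm_probabilities(x)
graphs = sample_ubcm(p, n_samples=1000)
print("max |<k_i> - k_i| =", np.abs(p.sum(axis=1) - np.array(k_obs, float)).max())
```

Each sampled matrix costs one pass over the $N(N-1)/2$ node pairs, and the preliminary fit is performed once regardless of how many networks are drawn, in line with the complexity discussion above.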
in this particular case , we find that the null model reproduces the observed network very well , which means that the degree sequence effectively explains ( or rather generates ) the two empirical higher - order patterns that we have considered . this is consistent with other studies , but not true in general for other networks or other constraints , as we show later on . from the bottom panels of fig .[ bun_top2 ] we also note that the confidence intervals highlight a non - obvious feature : the fact that the few points further away from the identity line turn out to be actually within ( or at the border of ) the chosen confidence intervals , while several points closer to the identity are instead found to be much more distant from the confidence intervals , and thus in an unexpectedly stronger disagreement with the null model .these counter - intuitive insights can not be derived from the analysis of the expected values alone , e.g. using expressions like eq . or similar .we now calculate the fluctuations of the constraints explicitly .we start by calculating the ensemble variance of each degree , defined as \equiv\langle k_i^2\rangle-\langle k_i\rangle^2 ] . in the canonical ensemble, the independence of pairs of nodes implies that the variance of the sum coincides with the sum of the variances of its terms , i.e. &=&\sum_{j\ne i}\sigma^2[a_{ij}]=\sum_{j\ne i}(\langle a^2_{ij}\rangle-\langle a_{ij}\rangle^2)\nonumber\\ & = & \sum_{j\ne i}p_{ij}(1-p_{ij})=k_i-\sum_{j\ne i}p_{ij}^2.\label{eq : varubcm}\end{aligned}\ ] ] then , the canonical relative fluctuations can be measured in terms of the so - called _ coefficient of variation _ , which we conveniently express in the form \equiv\frac{\sigma[k_i]}{k_i}=\sqrt{\frac{1}{k_i}-\frac{\sum_{j\nei}p_{ij}^2}{(\sum_{j\ne i}p_{ij})^2 } } , \label{eq : fluubcm}\ ] ] where we have restricted ourselves to the case also implies =0 ] .however this case is uninteresting since each isolated node remains isolated across the entire ensemble ( ) and can be safely removed without loss of generality . ] . a plot of ] when . in general , we note that the term in eq.([eq : fluubcm ] ) is a _ participation ratio _ , measuring the inverse of the effective number of equally important terms in the sum : in particular , it equals if and only if there is only one nonzero term ( complete concentration ) , while it equals if and only if there are identical terms ( complete homogeneity ) , i.e. for all .since these are the two extreme bounds for a participation ratio , and since in the case of complete concentration we also have , we conclude that the bounds for ] is the one comprised between the abscissa and the dashed line in fig.[fig : fluubcm ] .we find that the realized trend is close to the upper bound .this suggests that the maximum - entropy nature of our algorithm produces almost maximally homogeneous terms in the sum , i.e. 
no particular subset of vertices is preferred as canditate partners for , the only preference being obviously given ( as a consequence of the explicit form of in terms of and ) to vertices with larger degree .since the degree distribution of most real - world networks is such that the average degree remains finite even when the size of the network becomes very large , the above results suggest that , unlike most physical systems , the microcanonical and canonical ensembles defined by the ubcm are _ not _ equivalent in the ` thermodynamic ' limit .while eq.([eq : boundubcm ] ) shows that values closer to the lower bound =0 ] as a function of the degree for each node of the binary network of liquidity reserves exchanges between italian banks in 1999 ( = 215 ) .the blue points are the exact values in eq.([eq : fluubcm ] ) , while the dashed curve is the upper bound in eq.([eq : boundubcm ] ) .the lower bound is the abscissa =0 ] and ] , and the independence of pairs of nodes implies &=&\sum_{j\ne i}\sigma^2[w_{ij}]=\sum_{j\ne i}(\langle w^2_{ij}\rangle-\langle w_{ij}\rangle^2)\nonumber\\ & = & \sum_{j \ne i } \frac{p_{ij}}{(1-p_{ij})^2}=\sum_{j\ne i}\langle w_{ij}\rangle(1+\langle w_{ij}\rangle)\nonumber\\ & = & s_i+\sum_{j\ne i}\langle w_{ij}\rangle^2 .\label{eq : varuwcm}\end{aligned}\ ] ] therefore the relative fluctuations take the form \equiv\frac{\sigma[s_i]}{s_i}=\sqrt{\frac{1}{s_i}+\frac{\sum_{j\ne i}\langle w_{ij}\rangle^2}{(\sum_{j\ne i}\langle w_{ij}\rangle)^2 } } \label{eq : fluuwcm}\ ] ] for . a plot of ] are quite different from those for ] is the one above the dashed line in fig.[fig : fluuwcm ] , and extends beyond 1 .we now find that the realized trend is very close to the _ lower _ bound for small and intermediate values of the strength ( again suggesting that in this regime our maximum - entropy method produces almost maximally homogeneous terms in the sum ) , while it exceeds the lower bound significantly for large values of the strength . in any case , since eq.([eq : bounduwcm ] ) implies that ] as a function of the strength for each node of the binary network of liquidity reserves exchanges between italian banks in 1999 ( = 215 ) .the blue points are the exact values in eq.([eq : fluuwcm ] ) , while the dashed curve is the lower bound in eq.([eq : bounduwcm ] ) .the upper bound exceeds 1 and extends beyond the region shown.,scaledwidth=48.0% ] we now consider weighted directed networks ( wdns ) , defined by a weight matrix which is in general not symmetric .each node is now characterized by two strengths , the out - strength and the in - strength .the _ directed weighted configuration model _ ( dwcm ) , the directed version of the uwcm , enforces the out- and in - strength sequences , and , of a real - world network .the model is widely used to detect modules and communities in real wdns . in its canonical version ,the dwcm is still characterized by eq .where `` '' is replaced by `` '' and now . the unknown parameters and can be fixed by either maximizing the log - likelihood function \nonumber \\ & + & \sum_i\sum_{j \neq i } \ln ( 1-x_iy_j)\nonumber\end{aligned}\ ] ] or solving the the equations where in both cases the parameters and vary in the region defined by for all . once the unknown variables are found , we can implement an efficient and unbiased sampling scheme in the same way as for the uwcm , but now running over each pair of vertices _ twice _( i.e. in both directions ). 
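Concretely, the dyadic step for the weighted directed case can be sketched as below. We assume the standard DWCM result that each weight $w_{ij}$ follows a geometric distribution with parameter $p_{ij}=x_iy_j$, so that $\langle w_{ij}\rangle=x_iy_j/(1-x_iy_j)$; the hidden variables are assumed to come from the preliminary likelihood step, and the toy values and function names are ours.

```python
# Minimal sketch of sampling weighted directed matrices under the DWCM, assuming the
# standard result that each weight w_ij (i != j) is geometrically distributed with
# parameter p_ij = x_i * y_j, so that <w_ij> = p_ij / (1 - p_ij).
# The hidden variables x, y are assumed to have been fitted beforehand.
import numpy as np

def sample_dwcm(x, y, n_samples=1, seed=0):
    """Return a list of sampled integer weight matrices (zero diagonal)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    p = np.outer(x, y)                       # p_ij = x_i y_j, must satisfy 0 <= p_ij < 1
    np.fill_diagonal(p, 0.0)
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        # numpy's geometric counts trials until the first success (support 1, 2, ...);
        # subtracting 1 gives P(w) = p_ij^w (1 - p_ij) on w = 0, 1, 2, ...
        w = rng.geometric(1.0 - p) - 1
        np.fill_diagonal(w, 0)
        samples.append(w)
    return samples

# Usage with toy hidden variables (assumed output of the maximum-likelihood step)
x = np.array([0.5, 0.3, 0.2, 0.1])
y = np.array([0.4, 0.4, 0.2, 0.3])
w_list = sample_dwcm(x, y, n_samples=1000)
print("sample <w_01> =", np.mean([w[0, 1] for w in w_list]),
      " exact =", x[0] * y[1] / (1 - x[0] * y[1]))
```

The undirected UWCM corresponds to the same recipe with a single parameter per node and one pass over unordered pairs.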
one can establish the weight of a link from vertex to vertex using the geometric distribution , and the weight of the reverse link from to using the geometric distribution , these two events being independent .alternatively , as for the undirected case , one can construct these random events as a combination of fundamental bernoulli trials with success probability and . since this directed generalization of the undirected caseis straightforward , we do not consider any explicit application .however , we have explicitly included the dwcm model in the code ( see appendix ) .we now come to the canonical fluctuations . in analogy with eq.([eq :varuwcm ] ) , it is easy to show that the variances of and are given by &= & \sum_{j\ne i}\langle w_{ij}\rangle(1+\langle w_{ij}\rangle)=s^{out}_i+\sum_{j \ne i } \langle w_{ij}\rangle^2,\label{eq : vardwcm1}\\ \sigma^2[s^{in}_{i}]&= & \sum_{j\ne i}\langle w_{ji}\rangle(1+\langle w_{ji}\rangle)=s^{in}_i+\sum_{j \ne i } \langle w_{ji}\rangle^2.\label{eq : vardwcm2}\end{aligned}\ ] ] for and , the relative fluctuations are &\equiv&\frac{\sigma[s^{out}_i]}{s^{out}_i}=\sqrt{\frac{1}{s^{out}_i}+\frac{\sum_{j\ne i}\langle w_{ij}\rangle^2}{(\sum_{j\nei}\langle w_{ij}\rangle)^2}},\label{eq : fludwcm1}\\ \delta[s^{in}_i]&\equiv&\frac{\sigma[s^{in}_i]}{s^{in}_i}=\sqrt{\frac{1}{s^{in}_i}+\frac{\sum_{j\ne i}\langle w_{ji}\rangle^2}{(\sum_{j\ne i}\langle w_{ji}\rangle)^2}}.\label{eq : fludwcm2}\end{aligned}\ ] ] for the bounds of the above quantities , expressions similar to eq.([eq : bounduwcm ] ) apply , suggesting that the microcanonical and canonical versions of this ensemble are also not equivalent . in analogy with the binary case, we now consider the _ reciprocal weighted configuration model _ ( rwcm ) , which is a recently proposed null model that for the first time allows one to constrain the reciprocity structure in weighted directed networks .the rwcm enforces three strengths for each node : the non - reciprocated incoming strength , , the non - reciprocated outgoing strength , , and the reciprocated strength , .such quantities are defined by means of three pair - specific variables : ] ( where , as usual , all the parameters are intended to be the ones maximizing the likelihood ) . also , note that and can not be both nonzero , but they are independent of ( the joint distribution of these three quantities shown above is not simply a multivariate geometric distribution ) .the above observations allow us to define an unbiased sampling scheme , even if more complicated than the ones described so far . for each pair of nodes , we define a procedure in three steps . first , we draw the reciprocal weight from the geometric distribution ( or equivalently , from the composition of bernoulli distributions as discussed for the uwcm ) .second , we focus on the _ mere existence _ of non - reciprocated weights ( irrespective of their magnitude ) .we randomly select one of these three ( mutually excluding ) events : we establish the absence of any non - reciprocated weight between and ( , ) with probability , we establish the existence of a non - reciprocated weight from to ( , ) with probability , we establish the existence of a non - reciprocated weight from to ( , ) with probability .third , if a non - reciprocated connection has been established ( i.e. if its weight is positive ) we then focus on the value to be assigned to it ( i.e. 
on the extra weight ) .if , we draw the weight from a geometric distribution ( shifted to strictly positive integer values of via the rescaled exponent ) , while if we draw the weight from the distribution .the recipe described above is still of complexity and allows us to sample the canonical ensemble of the rwcm in an unbiased and efficient way .it should be noted that the microcanonical analogue of this algorithm has not been proposed so far . as for the dwcm, we show no explicit application , even if the entire algorithm is available in our code ( see appendix ) . in this model ,the canonical fluctuations are somewhat more compicated than in the previous models .the variances of the constraints are & = & \sum_{j \ne i } \frac{x_iy_j(1-x_jy_i)(1-x_i^2x_jy_iy_j^2)}{(1-x_iy_j)^2(1-x_ix_jy_iy_j)^2},\\ \sigma^2[s_i^\leftarrow ] & = & \sum_{j \ne i } \frac{x_jy_i(1-x_iy_j)(1-x_ix_j^2y_i^2y_j)}{(1-x_jy_i)^2(1-x_ix_jy_iy_j)^2}\\ \sigma^2[s_i^\leftrightarrow ] & = & \sum_{j \ne i } \frac{z_{i}z_{j}}{(1-z_{i}z_{j})^2}.\end{aligned}\ ] ] while for the variance of the reciprocated weight we can still write =\langle w^\leftrightarrow_{ij}\rangle(1+\langle w^\leftrightarrow_{ij}\rangle) ] and >\langle w^\leftarrow_{ij}\rangle(1+\langle w^\leftarrow_{ij}\rangle) ] still applies , as in eq.([eq : bounduwcm ] ) .this suggests that , for this model as well , the microcanonical and canonical ensembles are not equivalent .we finally consider a ` mixed ' null model of weighted networks with both binary ( degree sequence ) and weighted ( strength sequence ) constraints .we only consider undirected networks for simplicity , but the extension to the directed case is straightforward .the ensemble of weighted undirected networks with given strengths and degrees has been recently introduced as the _ ( undirected ) enhanced configuration model _ ( uecm ) .this model , which is based on analytical results derived in , is of great importance for the problem of _ network reconstruction _ from partial node - specific information . as we have also illustrated in fig.[wun_top2 ] , the knowledge of the strength sequence alone is in general not enough in order to reproduce the higher - order properties of a real - world weighted network . usually , this is due to the fact that the expected topology is much denser than the observed one ( often the expected network is almost fully connected ) .by contrast , it turns out that the simultaneous specification of strengths and degrees , by constraining the local connectivity to be consistent with the observed one , allows a dramatically improved reconstruction of the higher - order structure of the original weighted network .this very promising result calls for an efficient implementation of the uecm .we now describe an appropriate sampling procedure .the probability distribution characterizing the uecm is halfway between a bernoulli ( fermi - like ) and a geometric ( bose - like ) distribution , and reads .\label{eq_mix}\ ] ] as usual , the unknown parameters must be determined either by maximizing the log - likelihood function \nonumber\\ & + & \sum_{i}\sum_{j< i } \ln \frac{1-y_iy_j}{(1-y_iy_j + x_ix_jy_iy_j)}\end{aligned}\ ] ] or by solving the equations : where . here, the parameters and vary in the region for all and for all respectively . 
in order to define an unbiased sampling scheme, we note that eq .highlights the two key ingredients of the uecm , respectively controlling for the probability that a link of any weight exists and , if so , that a specific positive weight is there .the probability to generate a link of weight between the nodes and is the above expression identifies two key steps : the model is equivalent to one where the ` first link ' ( of unit weight ) is extracted from a bernoulli distribution with probability and where the ` extra weight ' ( ) is extracted from a geometric distribution ( shifted to the strictly positive integers ) with parameter .as all the other examples discussed so far , this algorithm can be easily implemented . in fig .[ mix_aw2 ] we provide an application of this method to the world trade web .we show the convergence of the sample averages ( and ) of the entries of both binary and weighted adjacency matrices to their exact canonical expectations ( and respectively ) . as in the previous cases ,generating 1000 matrices is enough to guarantee a tight convergence of the sample averages to their exact values ( in any case , this accuracy can be quantified and improved by sampling more matrices ) . for this sample of 1000 matrices , in the top plots ( two in this case ) of fig .[ mix_top2 ] we confirm that both the binary and weighted constraints are well reproduced by the sample averages .when we use this null model to check for higher - order patterns in this network , we find that two important topological quantities of interest ( annd and anns , bottom panels of fig .[ mix_top2 ] ) are well replicated by the model .these results are consistent with what is obtained analytically by using the same canonical null model on the same network . moreover , in this case we can calculate confidence intervals besides expected values ( for instance , in fig .[ mix_top2 ] we can clearly identify outliers that are otherwise undetected ) , and do this for any desired topological property , not only those whose expected value is analytically computable .our method therefore represents an improved algorithm for the unbiased reconstruction of weighted networks from strengths and degrees .the canonical fluctuations in this ensemble can be also calculated analytically . for the variance of the degrees, we can still exploit the expression =p_{ij}(1-p_{ij}) ] , which however leads to a more complicated expression in this case . 
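For completeness, the two-step UECM sampling described above can be sketched as follows. We assume the standard expressions $p_{ij}=x_ix_jy_iy_j/(1-y_iy_j+x_ix_jy_iy_j)$ for the probability that a link exists and a geometric conditional weight with parameter $y_iy_j$ given existence; as before, the hidden variables are taken from the preliminary likelihood step and the names are our own.

```python
# Sketch of the two-step UECM sampling: a Bernoulli trial decides whether each link
# exists, and a shifted geometric draw assigns its weight. Assumed (standard) forms:
#   p_ij = x_i x_j y_i y_j / (1 - y_i y_j + x_i x_j y_i y_j)       (link existence)
#   P(w_ij = w | link) = (y_i y_j)^(w-1) * (1 - y_i y_j),  w = 1, 2, ...
import numpy as np

def sample_uecm(x, y, n_samples=1, seed=0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    yy = np.outer(y, y)                          # requires 0 <= y_i y_j < 1
    xxyy = np.outer(x, x) * yy
    p = xxyy / (1.0 - yy + xxyy)                 # probability that w_ij > 0
    np.fill_diagonal(p, 0.0)
    rng = np.random.default_rng(seed)
    n, iu = len(x), np.triu_indices(len(x), k=1)
    samples = []
    for _ in range(n_samples):
        w = np.zeros((n, n), dtype=int)
        exists = rng.random(len(iu[0])) < p[iu]  # step 1: does the link exist?
        w_link = rng.geometric(1.0 - yy[iu])     # step 2: conditional weight (>= 1)
        w[iu] = np.where(exists, w_link, 0)
        w += w.T
        samples.append(w)
    return samples
```

Averaging the binary projection of the sampled matrices should reproduce the degrees, and averaging the weights the strengths, mirroring the consistency checks discussed above.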
using the relation , the end result can be expressed as follows : & = & \sum_{j \ne i } p_{ij}(1-p_{ij}),\\ \sigma^2[s_i ] & = & \sum_{j\ne i}\frac{p_{ij}(1+y_{i}y_{j}-p_{ij})}{(1-y_{i}y_{j})^2}\nonumber\\ & = & \sum_{j \ne i } \langle w_{ij}\rangle\left(\frac{1+y_iy_j}{1-y_iy_j}-\langle{w_{ij}}\rangle\right).\end{aligned}\ ] ] since , we can obtain the following relations for the relative fluctuations : &\equiv&\frac{\sigma[k_i]}{k_i}=\sqrt{\frac{1}{k_i}-\frac{\sum_{j\ne i}p_{ij}^2}{(\sum_{j\ne i}p_{ij})^2}},\label{eq : fluuecm1}\\ \delta[s_i]&\equiv&\frac{\sigma[s_i]}{s_i}\ge\sqrt{\frac{1}{s_i}-\frac{\sum_{j\ne i}\langle{w_{ij}}\rangle^2}{(\sum_{j\ne i}\langle{w_{ij}}\rangle)^2}}.\label{eq : fluuecm2}\end{aligned}\ ] ] so ] has a more complicated form , which differs from that valid for the uwcm and does not lead to simple expressions for the upper and lower bounds .also note the presence of a _ minus _ sign in eq.([eq : fluuecm2 ] ) .what can be concluded relatively easily is that , in the ideal limit ( corresponding to very small values of ) , we have which implies and \to \delta[k_i] ] behaves as $ ] , so it has the same upper bound .however , since is typically larger than zero , this bound is systematically exceeded , especially for large values of .this is also confirmed in fig.[fig : fluuecm ] . as in the other models ,the non - vanishing of the fluctuations suggests that the microcanonical and canonical ensembles are not equivalent .in this section we come back to the difference between canonical and microcanonical approaches to the sampling of network ensembles and discuss how , at least in principle , our method can be turned into an unbiased microcanonical one .we provided evidence that , for all the models considered in this paper , the canonical and microcanonical ensembles are _ not _ equivalent ( see also for a recent mathematical proof of nonequivalence for the ubcm ) .this result implies that choosing between microcanonical and canonical approaches to the sampling of network ensembles is not only a matter of ( computational ) convenience , but also a theoretical issue that should be addressed more formally . to this end, we recall that microcanonical ensembles describe isolated systems that do not interact with an external ` heat bath ' or ` reservoir ' . in ordinary statistical physics, this means that there is no exchange of energy with the external world . in our setting, this means that microcanonical approaches do not contemplate the possibility that the network interacts with some external ` source of error ' , i.e. that the value of the enforced constraints might be affected by errors or missing entries in the data .when present , such errors ( e.g. a missing link , implying a wrong value of the degree of two nodes ) are propagated to the entire collection of randomized networks , with the result that the ` correct ' network is not included in the microcanonical collection of graphs on which inference is being made . 
by contrast , besides being unbiased and mathematically tractable , our canonical approach is also the most appropriate choice if one wants to account for possible errors in the data , since canonical ensembles appropriately describe systems in contact with an external reservoir ( source of errors ) affecting the value of the constraints .while in presence of even small errors microcanonical methods assign zero probability to the ` uncorrupted ' configuration and to all the configurations with the same value of the constraints , our method assigns these configurations a probability which is only slightly smaller than the ( maximum ) probability assigned to the set of configurations consistent with the observed ( ` corrupted ' ) one .these considerations suggest that , given its simplicity , elegance , and ability to deal with potential errors in the data , the use of the canonical ensemble should be preferred to that of the microcanonical one . nonetheless , it is important to note that , at least in principle , our canonical method can also be used to provide unbiased microcanonical expectations , if theoretical considerations suggest that the microcanonical ensemble is more appropriate in some specific cases .in fact , if the sampled configurations that do not satisfy the chosen constraints exactly are discarded , what remains is precisely an unbiased ( uniform ) sample of the microcanonical ensemble of networks defined by the same constraints ( now enforced sharply ) .the sample is uniform because all the microcanonical configurations have the same probability of occurrence in the canonical ensemble ( since all probabilities , as we have shown , depend only on the value of the realized constraints ) . the same kind of analysis presented in this paper can then be repeated to obtain the microcanonical expectations . in the rest of this section , we discuss some advantages and limitations of this approach . as a guiding principle, one should bear in mind that , to be feasible , a microcanonical sampling based on our method requires that the number of canonical realizations to be sampled ( among which only a number of microcanonical ones will be selected ) is not too large , especially because for each canonical realization one must ( in the worst - case scenario ) do checks to ensure that each constraint matches the observed value exactly ( the actual number is smaller , since all the checks after the first unsuccessful one can be aborted ) .we first discuss the relation between and .let denote a generic graph ( either binary or weighted ) in the canonical ensemble , and the observed network that needs to be randomized .let formally denote a generic vector of chosen constraints , and let indicate the observed values of such constraints .similarly , let denote the generic vector of lagrange multipliers ( hidden variables ) associated with , and let indicate the vector of their likelihood - maximizing values enforcing the constraints . on average , out of canonical realizations, we will be left with a number of microcanonical realizations , where is the probability to pick a graph in the canonical ensemble that matches the constraints exactly .this probability reads where is the probability of graph in the canonical ensemble , and is the number of microcanonical networks matching the constraints exactly ( i.e. the number of graphs with given ) . inserting eq . into eq . 
and inverting , we find that the value of required to distill microcanonical graphs is note that is nothing but the maximized likelihood of the observed network , which is automatically measured in our method .this is typically an extremely small number : for the networks in our analysis , it ranges between ( world trade web ) and ( binary interbank network ) . on the other hand ,the number is very large ( compensating the small value of the likelihood ) but unknown in the general case : enumerating all graphs with given ( sharp ) properties is an open problem in combinatorics , and asymptotic estimates are available only under certain assumptions .this means that it is difficult to get a general estimate of the minimum number of canonical realizations required to distill a desired number of microcanonical graphs .another criterion can be obtained by estimating the number of canonical realizations such that the microcanonical subset samples a desired _ fraction _ ( rather than a desired _ number _ ) of all the microcanonical graphs . in this case, the knowledge of becomes unnecessary : from the definition of we get the above formula shows that , if we want to sample a number of microcanonical realizations that span a fraction of the microcanonical ensemble , we need to sample a number of canonical realizations and discard all the non - microcanonical ones .this number can be extremely large , since is very small , as we have already noticed . on the other hand , can be chosen to be very small as well . to see this ,let us for instance compare with the corresponding fraction of _ canonical _ configurations sampled by realizations , where is the number of graphs in the canonical ensemble .for all networks we considered in this paper , we showed that realizations were enough to generate a good sample .this however corresponds to an extremely small value of .for instance , for the binary interbank network we have .we might therefore be tempted to choose the same small value also for , and find the required number from eq . . however , the result is a value ( in the mentioned example , ) , which clearly indicates that setting ( where is an acceptable canonical fraction ) is inappropriate .in general , should be much larger than .importantly , we can show that , given a value that generates a good canonical sample , the subset of the microcanonical relations contained in the canonical ones spans a fraction of the microcanonical ensemble that is indeed much larger than . 
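A minimal sketch of this canonical-to-microcanonical distillation, for the UBCM case with the degree sequence as the sharp constraint, is given below. It reuses the hypothetical `sample_ubcm` helper and the `p`, `k_obs` variables from the earlier UBCM sketch; the retained graphs form a uniform microcanonical sample because, as noted above, all configurations with the correct constraints have the same canonical probability.

```python
# Sketch of the microcanonical "distillation" discussed above: draw R canonical
# realizations and keep only those whose constraints match the observed values exactly.
# Reuses the hypothetical sample_ubcm() helper from the earlier UBCM sketch.
import numpy as np

def microcanonical_subset(p, k_obs, n_canonical, seed=0):
    """Return the canonical UBCM samples whose degree sequence equals k_obs exactly."""
    k_obs = np.asarray(k_obs, dtype=int)
    kept = []
    for a in sample_ubcm(p, n_samples=n_canonical, seed=seed):
        if np.array_equal(a.sum(axis=1), k_obs):   # sharp constraint check
            kept.append(a)
    return kept

# Usage: R canonical draws distil M << R exact-degree graphs (uniform by construction)
micro = microcanonical_subset(p, k_obs, n_canonical=100000)
print(f"kept {len(micro)} exact-degree graphs out of 100000 canonical samples")
```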
to see this , note that , being obtained with the introduction of the constraints , is necessarily much larger than the completely uniform probability over the canonical ensemble ( corresponding to the absence of constraints ) .this inequality implies that , if we compare with ( both obtained with the same value of ) , we find that the above expression shows that , even if only out of the ( many more ) canonical realizations belong to the microcanonical ensemble , the resulting microcanonical sampled fraction is still much larger than the corresponding canonical fraction .this non - obvious result implies that , in order to sample a microcanonical fraction that is much larger than the canonical fraction obtained with a given value of , one does not need to increase the number of canonical realizations beyond .the above considerations suggest that , under appropriate conditions , using our `` max & sam '' method to sample the microcanonical ensemble might be competitive with the available microcanonical algorithms .it should be noted that the value of affects neither the preliminary search for the hidden variables , nor the calculation of the microcanonical averages over the final networks .however , it does affect the number of checks one has to make on the constraints to select the microcanonical networks .the worst - case total number of checks is , and performing such operation in a non - optimized way might slow down the algorithm considerably . a good strategy would be that of exploiting our analysis of the canonical fluctuations to identify the vertices for which it is more unlikely that the local constraint is matched exactly , and check these vertices first .this would allow one to identify , for each of the canonical realizations , the constraint - violating nodes at the earliest possible stage , and thus to abort the following checks for that particular network .implementing such an optimized microcanonical algorithm is however beyond the scope of this paper .the definition and correct implementation of null models is a crucial issue in network analysis . when applied to real - world networks ( that are generally strongly heterogeneous ), the existing algorithms to enforce simple constraints on binary graphs become biased or time - consuming , and in any case difficult to extend to networks of different type ( e.g. weighted or directed ) and to more general constraints .we have proposed a fast and unbiased `` max & sam '' method to sample several canonical ensembles of networks with various constraints .while canonical ensembles are believed to represent a mathematically tractable counterpart of microcanonical ones , they have not been used so far as a tool to sample networks with soft constraints , mainly because of the use of approximated expressions that result in ill - defined sampling probabilities . 
Here, we have shown that it is indeed possible to use exact expressions to correctly sample a number of canonical ensembles, from the standard case of binary graphs with given degree sequence to the more challenging models of directed and weighted graphs with given reciprocity structure or joint strength-degree sequence. Moreover, we have provided evidence that microcanonical and canonical ensembles of graphs with local constraints are not equivalent, and suggested that canonical ones can account for possible errors or missing entries in the data, while microcanonical ones do not. Our algorithms are unbiased and efficient, as their computational complexity is $O(N^2)$ even for strongly heterogeneous networks. Canonical sampling algorithms may therefore represent an unbiased, fast, and more flexible alternative to their microcanonical counterparts. We have also illustrated the possibility to obtain an unbiased microcanonical method by discarding the realizations that do not match the constraints exactly. In our opinion, these findings might suggest new possibilities of exploitation of canonical ensembles as a solution to the problem of biased sampling in many other fields besides network science. An algorithm has been coded in various ways in order to implement our sampling procedure for all the seven null models described in sec. [sec:maxsam]. In what follows, we describe the MATLAB implementation. A more detailed explanation accompanies the code in the form of a ``read_me'' file. Here we briefly mention the main features. The code can be run by typing a command having the typical form of a MATLAB function, taking a number of different parameters as input. The output of the algorithm is the numerical value of the _hidden variables_, i.e. the vector(s) of parameters maximizing the likelihood of the desired null model (see sec. [sec:maxsam]), plus a specifiable number of sampled matrices. The hidden variables alone allow the user to numerically compute the expected values of the adjacency matrix entries, as well as the expected values of the constraints (as a check of their consistency with the observed values), according to the specific definition of each model. Moreover, the user can obtain as output any number of matrices (networks) sampled from the desired ensemble. These matrices are sampled in an unbiased way from the canonical ensemble corresponding to the chosen null model, using the relevant random variables as described in sec. [sec:maxsam]. The available null models are the following:
* **UBCM** for the undirected binary configuration model, preserving the degree sequence of an undirected binary network (see sec. [sec:ubcm]);
* **DBCM** for the directed binary configuration model, preserving the in- and out-degree sequences of a directed binary network (see sec. [sec:dbcm]);
* **RBCM** for the reciprocal binary configuration model, preserving the reciprocated, incoming non-reciprocated and outgoing non-reciprocated degree sequences of a directed binary network (see sec. [sec:rbcm]);
* **UWCM** for the undirected weighted configuration model, preserving the strength sequence of an undirected weighted network (see sec. [sec:uwcm]);
* **DWCM** for the directed weighted configuration model, preserving the in- and out-strength sequences of a directed weighted network (see sec. [sec:dwcm]);
* **RWCM** for the reciprocal weighted configuration model, preserving the reciprocated, incoming non-reciprocated and outgoing non-reciprocated strength sequences of a directed weighted network (see sec. [sec:rwcm]);
* **UECM** for the undirected enhanced configuration model, preserving both the degree and strength sequences of an undirected weighted network (see sec. [sec:uecm]).
The observed network can be supplied in one of the following formats:
* `matrix` for a (binary or weighted) matrix representation of the data, i.e. if the entire adjacency matrix is available;
* `list` for an edge-list representation of the data, i.e. a matrix with one row per link, whose first column lists the starting node, second column the ending node and third column the weight (if available) of the corresponding link;
* `par` when only the constraint sequences (degrees, strengths, etc.) are available.
In any case, the two options that are not selected are left empty, i.e. their value should be ``**[]**''. We stress that the likelihood maximization procedure (or the solution of the corresponding system of equations making the gradient of the likelihood vanish), which is the core of the algorithm, only needs the observed values of the chosen constraints to be implemented. However, since different representations of the system are available, we have chosen to exploit them all and to let the user choose the most appropriate one for the specific case. For instance, in network reconstruction problems one generally has empirical access only to the local properties (degree and/or strength) of each node, and the full adjacency matrix is unknown. The fifth parameter (`eps`) specifies the maximum allowed relative error between the observed and the expected value of the constraints. According to this parameter, the code solves the entropy-maximization problem by either just maximizing the likelihood function or also improving this first solution by further solving the associated system. Even if this choice might strongly depend on the observed data, the value works satisfactorily in most cases. The sixth parameter (`sam`) is a boolean variable allowing the user to extract the desired number of matrices from the chosen ensemble (using the corresponding probabilities). The value ``0'' corresponds to no sampling: with this choice, the code gives only the hidden variables as output. If the user enters ``1'' as input value, the algorithm will ask the user to enter the number of desired matrices (after the hidden variables have been found). In this case, the code outputs both the hidden variables and the sampled matrices, the latter in a `.mat` file called `sampling.mat`. The seventh parameter (`x0new`) is optional and has been introduced to further refine the solution of the UECM in the very specific case of networks having, at the same time, big outliers in the strength distribution and a narrow degree distribution. In this case, the optional argument `x0new` can be supplied with the previously obtained output: in so doing, the code will solve the system again, using the previous solution as the initial point. This procedure can be iterated until the desired precision is reached. Note that, since `x0new` is an _optional_ parameter, it is not required to enter ``**[]**'' when the user does not need it (differently e.g.
from the data format case ) .dg acknowledges support from the dutch econophysics foundation ( stichting econophysics , leiden , the netherlands ) with funds from beneficiaries of duyfken trading knowledge bv , amsterdam , the netherlands .this work was also supported by the eu project multiplex ( contract 317532 ) and the netherlands organization for scientific research ( nwo / ocw ) .
sampling random graphs with given properties is a key step in the analysis of networks , as random ensembles represent basic null models required to identify patterns such as communities and motifs . an important requirement is that the sampling process is unbiased and efficient . the main approaches are microcanonical , i.e. they sample graphs that match the enforced constraints exactly . unfortunately , when applied to strongly heterogeneous networks ( like most real - world examples ) , the majority of these approaches become biased and/or time - consuming . moreover , the algorithms defined in the simplest cases , such as binary graphs with given degrees , are not easily generalizable to more complicated ensembles . here we propose a solution to the problem via the introduction of a `` maximize and sample '' ( `` max & sam '' for short ) method to correctly sample ensembles of networks where the constraints are ` soft ' , i.e. realized as ensemble averages . our method is based on exact maximum - entropy distributions and is therefore unbiased by construction , even for strongly heterogeneous networks . it is also more computationally efficient than most microcanonical alternatives . finally , it works for both binary and weighted networks with a variety of constraints , including combined degree - strength sequences and full reciprocity structure , for which no alternative method exists . our canonical approach can in principle be turned into an unbiased microcanonical one , via a restriction to the relevant subset . importantly , the analysis of the fluctuations of the constraints suggests that the microcanonical and canonical versions of all the ensembles considered here are not equivalent . we show various real - world applications and provide a code implementing all our algorithms .
Receptor trafficking has been identified as a key feature of synaptic transmission and plasticity. Yet, the mode of trafficking remains unclear: after receptors are inserted in the plasma membrane of a neuron, classical single-particle tracking revealed that the motion can be either free or confined Brownian motion. Recently, super-resolution light optical microscopy techniques for _in vivo_ data have allowed monitoring a large number of molecular trajectories at the single-molecule level and at nanometer resolution. Using a novel stochastic analysis approach, we have identified that regions of high, confined density are generated by large potential wells (hundreds of nanometers) that sequester receptors. In addition, fluctuations in the apparent diffusion coefficient reflect changes in the local density of obstacles. Classically, cell membranes are organized in local microdomains characterized by morphological and functional specificities. In neurons, prominent microdomains include dendritic spines and synapses, which play a major role in neuronal communication. In this report, we show that AMPARs, which are key mediators of excitatory glutamatergic transmission, not only relocate between synaptic and extrasynaptic sites due to lateral diffusion, but can also be trapped in transient ring structures that contain several potential wells. We identified three regions (Fig. 1) where trajectories form rings (Fig. 2a-b): specifically, a first feature that defines a ring is that receptor trajectories are constrained into an annulus-type geometry (Fig. 2c) located on the surface of the membrane. However, rings are not necessarily localized within dendritic spines. To further characterize a ring, we use the stochastic analysis developed in to compute the vector field of the drift, which is surprisingly restricted to an annulus (Fig. 2b, rings 2 and 3): the size of the inner radius is of the order of 500 nm, while the external radius is around . In addition, inside a ring, we found several attracting potential wells of various sizes that co-localized with regions of high density (Fig. 3a). From the density distribution function along the ring (Fig. 3), we identified three wells. While the AMPAR diffusion coefficient in these wells is around /s (Fig. 3b), the wells have an interaction range of about 500 nm (Fig. 3c), and are associated with energies of 3.6, 1.8 and 3 kT, respectively. Using time-lapse imaging (Fig. 2), we could see that a ring can be stable over a period of 30 minutes (ring 1), while ring 2 was only stable for 15 minutes. Interestingly, we could observe a transient ring interconnecting two ring-like structures at time 45 minutes, before it disappears at 60 minutes. Ring 3 could be detected transiently after 15 minutes, containing multiple wells. Finally, focusing on ring 1 and plotting the density function of trajectories, we could follow over time how a potential well emerges, how it is destroyed, and how stable it is (Fig. 4a).
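The drift vector field and local diffusion coefficient referred to above are typically estimated by binning single-particle displacements on a spatial grid. The sketch below is a generic illustration of such estimators (local drift as the mean displacement per unit time, effective diffusion from the mean squared displacement in two dimensions); the grid size, time step and array layout are our own assumptions and do not reproduce the exact pipeline of the cited analysis.

```python
# Generic sketch of local drift / diffusion estimation from 2D single-particle
# trajectories: displacements are binned on a square grid, local drift ~ <dx>/dt,
# effective 2D diffusion ~ <|dx|^2>/(4*dt). Parameters and layout are illustrative.
import numpy as np

def local_drift_diffusion(tracks, dt, bin_size):
    """tracks: list of (T_i, 2) arrays of successive 2D positions sampled every dt seconds."""
    pts, disps = [], []
    for xy in tracks:
        xy = np.asarray(xy, float)
        pts.append(xy[:-1])                    # position at the start of each step
        disps.append(np.diff(xy, axis=0))      # displacement over each step
    pts, disps = np.concatenate(pts), np.concatenate(disps)
    keys = np.floor(pts / bin_size).astype(int)          # square spatial bins
    drift, diffusion = {}, {}
    for key in set(map(tuple, keys)):
        mask = np.all(keys == key, axis=1)
        d = disps[mask]
        drift[key] = d.mean(axis=0) / dt                     # local drift vector
        diffusion[key] = (d ** 2).sum(axis=1).mean() / (4 * dt)  # effective D (2D)
    return drift, diffusion
```

Potential wells then appear as regions where the estimated drift field points consistently inward.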
While the diffusion coefficient remains constant over time (fig. 4b), the energy of the main well (fig. 4c) changes across the time-lapse experiments as follows: at time 30 min it is E = 6.6 kT (score = 0.25 and a depth of a = 2.0), at the intermediate time 45 min the energy is E = 7.8 kT (score = 0.20, depth a = 2.3), and finally at time 60 min E = 5.2 kT (score = 0.13, depth a = 1.6). The weak score confirms the likelihood of the well. Interestingly, the energy level of the well is neither weak nor strong, and remains stable over time. To conclude, potential wells, which reflect the interaction of AMPA receptors with molecular partners, do not only appear isolated or at synapses, but can also appear in ring structures. Although these rings could be due to latex beads, the organization of potential wells in their vicinity suggests that membrane curvature could be a key component shaping the strength of the wells. These rings can trap receptors in structures of the order of 100 nanometers. Both potential wells and rings are transient, yet stable over periods of minutes. Classical physical modeling and statistical analysis of single-receptor motion assume that the main driving force is free or confined Brownian motion. This is based on Langevin's description of a pointwise stochastic object. However, because gradient forces such as electrostatic forces, which are the main sources of chemical interactions, cannot generate closed trajectories in the deterministic limit of Langevin's approximation, maintaining a receptor in a ring cannot be due to electrostatic forces alone. Thus we propose several hypotheses for this ring structure. One possibility is that receptor dynamics cannot be reduced to that of a single point: we now have to account for the complex structure of the protein, which can generate interactions at specific protein groups that are decoupled from the center of mass. For example, the C-terminus tail can interact with a supra-structure generated by scaffolding molecules. Another possibility is that rings are due to transient geometrical structures on the membrane, which trap receptors. We could imagine that this is the case near an endosomal compartment during vesicular exocytosis and endocytosis. The data analyzed in this report have been previously published in and were generated by D. Nair, J.B. Sibarita, E. Hosy and D. Choquet. We thank them for providing us access to these data.
Borgdorff AJ, Choquet D (2002) Regulation of AMPA receptor lateral movements. _Nature_ 417:649-653.
Triller A, Choquet D (2003) Synaptic structure and diffusion dynamics of synaptic receptors. _Biol Cell_ 95:465-476.
N. Hoze, D. Nair, J.B. Sibarita, E. Hosy, C. Sieben, S. Manley, A. Herrmann, D. Choquet, and D. Holcman, Heterogeneity of receptor trafficking and molecular interactions revealed by super-resolution analysis of live cell imaging, PNAS 2012.
Kusumi A et al. (2005) Paradigm shift of the plasma membrane concept from the two-dimensional continuum fluid to the partitioned fluid: high-speed single-molecule tracking of membrane molecules. 34:351-378.
By combining high-density super-resolution imaging with a novel stochastic analysis, we report here a peculiar nanostructure organization revealed by the density function of individual AMPA receptors moving on the surface of cultured hippocampal dendrites. High-density regions of hundreds of nanometers in the trajectories are associated with local molecular assemblies generated by direct molecular interactions due to physical potential wells. We find that, for some of these regions, the potential wells are organized in ring structures. We could find up to 3 wells in a single ring. Inside a ring, receptors move in a small band whose width is of hundreds of nanometers. In addition, rings are transient structures and can be observed for tens of minutes. Potential wells located in a ring are also transient, and the position of their peaks can shift with time. We conclude that these rings can trap receptors in a unique geometrical structure, contributing to shaping receptor trafficking, a process that sustains synaptic transmission and plasticity. *Keywords:* AMPAR trafficking | super-resolution data | stochastic analysis of trajectories | single particle tracking | lateral diffusion
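As a rough companion to the analysis summarized above, the sketch below shows the generic coarse-graining step used to extract a local drift field and an apparent diffusion coefficient from single-particle trajectories: displacements are binned on a spatial grid and averaged bin by bin, and attracting potential wells would then show up as regions where the estimated drift points inward. This is a schematic reconstruction under standard assumptions (two-dimensional positions sampled every dt), not the exact estimators of the cited stochastic analysis; all names are illustrative.

```python
import numpy as np

def drift_diffusion_map(tracks, dt, bin_size, min_points=10):
    """Estimate, on a square grid, the local drift a(x) ~ <dX>/dt and the
    apparent diffusion coefficient D(x) ~ <|dX - <dX>|^2> / (4 dt) in 2-d.
    `tracks` is a list of (T_i, 2) arrays of positions sampled every dt."""
    starts, steps = [], []
    for tr in tracks:
        tr = np.asarray(tr, dtype=float)
        starts.append(tr[:-1])                 # position at the start of each step
        steps.append(np.diff(tr, axis=0))      # displacement of each step
    starts = np.vstack(starts)
    steps = np.vstack(steps)
    bins = np.floor(starts / bin_size).astype(int)
    drift, diffusion = {}, {}
    for b in set(map(tuple, bins)):
        sel = np.all(bins == np.array(b), axis=1)
        dx = steps[sel]
        if len(dx) < min_points:               # skip poorly sampled bins
            continue
        mean_dx = dx.mean(axis=0)
        drift[b] = mean_dx / dt
        diffusion[b] = ((dx - mean_dx) ** 2).sum(axis=1).mean() / (4.0 * dt)
    return drift, diffusion
```

Feeding in trajectories simulated from overdamped Langevin dynamics in a known potential is a convenient way to sanity-check such estimators before applying them to data.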
many large complex nets , such as the internet and the world wide web , social networks of contact , and networks of interactions between proteins are _ scale - free _ : the degree ( number of links attached to a node , ) has a distribution with a heavy power - law tail , . because of their ubiquitousness in everyday life the structure and physical properties of scale - free nets have attracted much recent attention .the percolation problem is of particular practical interest : is the integrity of the internet compromised following random breakdown of a fraction of its routers ?what fraction of a population ought to be vaccinated to arrest the spread of an epidemics that spreads by social contact ? initial studies of percolation addressed the case of _ stochastic _ scale - free nets , where the links between the nodes are drawn at random , so as to satisfy the scale - free degree distribution ( for example , by the algorithm due to molloy and reed ) .these studies showed that scale - free nets are resilient to random dilution , provided that the degree exponent is smaller than 3 .explicit expressions for the critical exponents characterizing the transition as a function of were also derived .stochastic molloy - reed scale - free nets are limited , though .having fixed the degree distribution , all other structural properties ( such as the extent of clustering , assortativity , etc . )are fixed as well , in contrast with man - made and natural scale - free nets that show a great deal of variation in these other properties . in this paper , we study percolation in _ hierarchical _ scale - free nets .hierarchical scale - free nets may be constructed that are small - world or not , and with various degrees of assortativity , clustering , and other properties .hierarchical nets have been studied before , as exotic examples where renormalization group techniques yield exact results , including the percolation phase transition and the limit of the potts model .we study percolation directly , by focusing on the size of the giant component ( the largest component left after dilution ) and the probability of contact between hubs ( nodes of highest degree ) .our aim is to elucidate the effect of the various structural properties of the nets on the percolation phase transition . as we shall see below , whether the transition takes place or not , and its character , depends not only on the degree exponent ( as in stochastic nets ) but also on other factors scale - free nets are constructed in a recursive fashion .we focus on the special class of -flowers , where each link in generation is replaced by two parallel paths consisting of and links , to yield generation .a natural choice for the genus at generation is a cycle graph ( a ring ) consisting of links and nodes ( other choices are possible ) .the case of , ( fig .[ fig1 ] ) has been studied previously by dorogotsev , goltsev and mendes ( dgm ) . 
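The recursive construction just described is easy to reproduce programmatically, which is convenient for numerical checks of the structural properties discussed below. The sketch assumes the conventions stated above (generation 1 is a ring of u+v nodes, and every link is replaced by two parallel paths of u and v links); the function name is ours.

```python
import itertools
import networkx as nx

def uv_flower(u, v, generations):
    """Build a (u,v)-flower: start from a ring of u+v nodes and, at each
    iteration, replace every link by two parallel paths of u and v links."""
    G = nx.cycle_graph(u + v)                     # generation 1
    fresh = itertools.count(G.number_of_nodes())  # labels for new nodes
    for _ in range(generations - 1):
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        for a, b in G.edges():
            for length in (u, v):
                # a path of `length` links between the old endpoints a and b
                chain = [a] + [next(fresh) for _ in range(length - 1)] + [b]
                nx.add_path(H, chain)
        G = H
    return G

G = uv_flower(2, 2, 4)
print(G.number_of_nodes(), G.number_of_edges())   # order and size of generation 4
```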
in the followingwe assume that , without loss of generality .-flower , or dgm network .( a ) method of construction : each link in generation is replaced by two paths of 1 and 2 links long .( b ) generations .( c ) alternative method of construction : generation is obtained by joining three replicas of generation at the hubs ( marked by a , b , c ) ., scaledwidth=40.0% ] all -flowers are self - similar , as evident from an equivalent method of construction : to produce generation , make copies of the net in generation and join them at the hubs ( the nodes of highest degree ) , as illustrated in fig .[ fig1]c .it is easy to see , from the second method of construction , that the number of links ( the size ) of a -flower of generation is at the same time , the number of nodes ( the order ) obeys the recursion relation which , together with the boundary condition , yields similar considerations let us reproduce the full degree distribution . by construction , -flowers have only nodes of degree , be the number of nodes of degree in the -flower of generation , then leading to as in the dgm case , this corresponds to a scale - free degree distribution , , of degree exponent the self - similarity of -nets , coupled with the fact that different replicas meet at a _ single _ node , makes them amenable to exact analysis by renormalization group techniques .there is a vast difference between -flowers with and . if the diameter of the -th generation flower ( the longest shortest path between any two nodes ) scales linearly with .for example , for the -flower and for the -flower .it is easy to see that the diameter of the -flower , for odd , is , and , while deriving a similar result for even is far from trivial , one can show that . for , however , the diameter grows as a power of .for example , for the -flower we find , and , more generally , if is even ( and ) , for odd one may establish bounds showing that . to summarize , since , we can recast these relations as thus , for the flowers are _ small world _ , similar to stochastic scale - free nets with .for the nets are in fact _ fractal _ , with fractal dimension since the mass increases by ( from one generation to the next ) while the diameter increases by .-flowers are _ infinite_-dimensional . in showed how these nets may be characterized by a different measure of dimension that takes into account their small - world scaling .-flowers with ( ) .( a ) small world : and .( b ) fractal : and .the graphs may also be iterated by joining four replicas of generation at the appropriate hubs . ,scaledwidth=40.0% ] the difference between flowers with and is perhaps best exemplified by the ( 1,3)- vs. the ( 2,2)-flower ( fig .[ fig2 ] ) .the nets have identical degree distributions , node for node , with degree exponent similar to the famed barabsi - albert model but the -flower is small world ( or infinite - dimensional ) , while the -flower is a fractal of dimension . upon varying and the hierarchical flowers acquire different structural properties .consider , for example , their _ assortativity _ the extent to which nodes of similar degree connect with one another . in the -flower ,nodes of degree and are only _ one _ link apart , and the assortativity index is 0 ; while in the -flower the same nodes are links apart , and its assortativity index tends to ( as ) , indicating a high degree of disassortativity , and more in line with naturally occurring scale - free nets . 
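Using the uv_flower sketch above, one can check this contrast numerically: the two flowers below have the same degree distribution, node for node, yet differ in how like-degree nodes are wired together. The snippet simply reports the standard assortativity and clustering coefficients as computed by networkx; it is an illustration of ours, not part of the paper's analysis.

```python
import networkx as nx

for (u, v) in [(1, 3), (2, 2)]:
    G = uv_flower(u, v, generations=5)
    r = nx.degree_assortativity_coefficient(G)
    c = nx.average_clustering(G)
    print(f"({u},{v})-flower: assortativity = {r:+.3f}, clustering = {c:.3f}")
```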
more generally , the degree of assortativity , , is , for , and , for , ( as ) .another property of interest is _ clustering _ , a measure of the likelihood for neighbors of a node to be neighbors of one another .-flowers with have zero clustering : the neighbors of a node are _ never _ neighbors of one another . the dgm net ( , ) has clustering coefficient , and gets smaller with increasing ( or degree exponent ) , quite in line with the clustering coefficient of everyday life scale - free nets .so far we have seen hierarchical nets that are either fractal and disassortative ( ) , or small world and assortative ( and ) .it is also possible to obtain hierarchical nets that are small world and disassortative at the same time .one way to do this is by constructing a fractal -flower ( ) and adding a link between opposite hubs at the end of each iteration step : the additional link does not get iterated .[ fig3 ] illustrates this procedure for the case of the -flower .-flower . construction method ( top ) : each link is replaced by two parallel paths of and links long , and an additional link ( broken line ) that does not get iterated . (bottom ) : the decorated -flower for generations ., scaledwidth=40.0% ] the added link has a negligible effect on the degree distribution : the degree exponent still approaches , as . on the other hand, it has a dramatic effect on the diameter of the net , which now grows linearly in ( logarithmically in ) , making it a small world network .the nets remain strongly disassortative : for example , for the -flower the assortativity changes from ( without ) to with the addition of the non - iterated links . finally , the added links have a dramatic effect on the clustering of the nets , which grows from zero to about 0.82008 , for the -flower .the main point is that by manipulating the method of construction one can generate scale - free nets with differing structural properties . 
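For reference, the basic bookkeeping relations for (u,v)-flowers that the preceding paragraphs state in words can be collected as follows, writing w = u + v. These expressions are our reconstruction of formulas lost in extraction, chosen to be consistent with the worked examples above (for instance, they give degree exponent 3 for both the (1,3)- and the (2,2)-flower), and should be read with that caveat.

```latex
% Size M_n (links), order N_n (nodes), degree distribution and fractal
% dimension of the (u,v)-flower of generation n, with w = u + v.
\begin{align*}
  M_n &= w^{\,n},
  &
  N_n &= w\,N_{n-1} - w
      \;=\; \left(\frac{w-2}{w-1}\right) w^{\,n} + \frac{w}{w-1},\\[4pt]
  k &= 2^{m},\quad m = 1,\dots,n,
  &
  N_n(k=2^{m}) &=
  \begin{cases}
    (w-2)\,w^{\,n-m}, & m < n,\\
    w, & m = n,
  \end{cases}\\[4pt]
  P(k) &\sim k^{-\gamma},\quad \gamma = 1 + \frac{\ln w}{\ln 2},
  &
  d_f &= \frac{\ln w}{\ln u} \quad (u > 1).
\end{align*}
```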
by changing one property at a time onecan then hope to understand their effect on various physical phenomena , such as the percolation phase transition .one can also construct nets that mimic everyday life networks as closely as possible .the decorated -flower , with its degree exponent , disassortativity , and high degree of clustering , is a reasonable candidate for the latter .we now turn to the study of percolation in hierarchical scale - free nets .the recursive nature of the -flowers , coupled with their finite ramification , make it possible to obtain an exact solution by a real - space renormalization group analysis , including the finite - size behavior around the transition point .our plan is as follows .we first study percolation in _ fractal _ hierarchical nets .having finite dimensionality they resemble regular and fractal lattices , and the percolation phase transition is similar to what is found there as well .we then study percolation in the -flowers , which are small world , as most everyday life complex networks .unlike everyday life nets , the -flowers have no percolation phase transition , even for , or .clearly , the -flowers fail to mimic everyday life networks in some crucial aspect perhaps their high assortativity .we therefore conclude with an analysis of the decorated -flower .the transition there most closely resembles that of everyday life nets , but some differences remain .we speculate on the missing ingredient that gives rise to that difference in section [ discussion ] .consider the -flower , as a prototypical example of _ fractal _ hierarchical scale - free nets . in this netthe distance between opposite hubs ( or the diameter ) scales as , and the mass scales as , corresponding to a fractal dimension of .suppose that each link is present with probability .denote the probability for two opposite hubs in generation to be connected by , then , the analogous quantity in generation , is indeed , on iterating the flower to generation the probability of contact between opposite hubs depends on the existence of either of two parallel paths , each consisting of two stringed copies of generation . the probability that one of the paths is connected is and ( [ p22flower ] ) follows : a naive addition over - counts the event that all 4 generation- units are connected ( probability ) . starting with , for generation , onecan then compute the probability of contact for any other generation ( fig .[ fig4 ] ) .( [ p22flower ] ) has an unstable fixed point at , where , and two stable fixed points at and .if , the contact probability flows to ( as ) and the system is in the percolating phase . for , it flows to and there is no percolation .-flower . 
shownare the curves for generations and ., scaledwidth=40.0% ] near the percolation phase transition the contact probability obeys the _ finite - size _ scaling relation where , and is the critical exponent governing the scaling of the correlation length , we can obtain by evaluating the derivative of eq .( [ p22flower ] ) at : where using ( [ pscaling_frac ] ) it then follows that in our case and , yielding next , we address the probability that a site belongs to the infinite incipient cluster ( or the giant component ) , , in generation .it obeys the scaling relation the finite - size scaling exponent characterizes the size of the giant component at the transition point , : .the scaling function has a non - analytic part , for small , so that near the transition point ) .see text.,scaledwidth=40.0% ] let , and denote the probabilities that a site is connected to exactly one , two , three , or four of the hubs , respectively ( fig .[ fig5]a ) , then . the analogous quantities in generation , are here is the probability that opposite hubs ( in generation ) are disconnected . ( ) denote the event that only one ( two ) of the hubs that the site reaches in generation are also hubs of generation ( fig .[ fig5]b ) .these are straightforwardly related to the : as a useful check , one may verify that . from ( [ abcd22flower ] ) and ( [ st22flower ] ) we obtain a recursion relation for and : the scaling of the giant component is dominated by , the largest eigenvalue of the above matrix , evaluated at , in our case and , yielding to obtain , we derive eq .( [ pinf_scaling ] ) with respect to , where we used the fact that .doing the same for and dividing the two relations , while using ( [ pp ] ) , we get substituting for the values of , , , and , we find for the -flower for the -flower .shown are curves for generations and , obtained from and iterating eqs .( [ eq18 ] ) and ( [ p22flower ] ) ., scaledwidth=40.0% ] in summary , percolation in fractal scale - free nets is very similar to percolation in fractal and regular lattices . as far as we can tell, the broad scale - free degree distribution does not give rise to any kind of anomalous behavior , different from percolation in regular spaces . note , for example , that the contact probability , ( fig .[ fig4 ] ) , and ( fig .[ fig6 ] ) follow the same pattern as for percolation in regular spaces .interestingly , the exponents and that we find for the -flower ( whose fractal dimension is ) are not very different from the corresponding exponents in regular 2-dimensional space : and . the scaling relations between critical exponents , familiar from percolation in regular and fractal spaces ,are obeyed as well .indeed , we may rewrite eq .( [ 1/nu ] ) as since is the fractal dimension of the hierarchical flower and . using this , in conjunction with eqs .( [ lambda ] ) and ( [ mess ] ) , we derive the scaling relation the giant component , at criticality , scales as where is its fractal dimension . 
comparing this to , on the one hand , and to eq .( [ thetanud ] ) , on the other hand , we get which is a well - known scaling relation for percolation in regular space .the analysis carried out above for the -flower may be extended for other values of and .the recursion relation between opposite hubs in the general case is as for the -flower , this has two stable fixed points at and 1 , and an unstable fixed point whose location at may be computed numerically .one can then evaluate the correlation length exponent , using eqs .( [ lambda ] ) , ( [ 1/nu ] ) and keeping in mind that in the thermodynamic limit . results from such calculations are shown in fig .there is general agreement with percolation in regular -dimensional lattices , especially for the particular case of ( note the analytical continuation to non - integer values of and ). for fractal -flowers ( ) , plotted against their dimensionality ( ) .shown are results for increasing ( bottom to top ) and increasing ( left to right ) .the solid curve corresponds to a case that is close , numerically , to regular -dimensional space ( ) ., scaledwidth=45.0% ] we next turn to small world nets , and consider the -flower as examples of this type of networks .the mass of the -flower grows like , while the diameter increases only logarithmically , , making it a small world net of infinite dimensionality .as we shall shortly see , there is no percolation transition , contradicting the finding for percolation in random scale - free nets .this may be perhaps attributed to the fact that -flowers are quite strongly assortative ( highly connected nodes tend to be connected to one another ) , making them particularly resilient to random dilution .the recursion relation for the probability of contact between hubs in successive generations is now which has an unstable fixed point at and a stable fixed point at . in other words ,regardless of the dilution level , , contact between hubs is guaranteed , in the thermodynamic limit of .let ( ) denote the probability that a node is connected to exactly adjacent hubs , in generation .the recursion relations for the analogous quantities in generation are where have the same meaning as for the -flower , in the previous section .analysis of these equations reveals that there is a _probability for a site to belong to the giant component , at _ any _ dilution level . for small , andthere is an essential singularity at .practically , though , it is impossible to tell whether or not , for sufficiently small , and one could not rule out a percolation phase transition at some based on a numerical study or on simulations alone ( fig . [ fig8 ] ) . for the -flower . shown are curves for generations , and , obtained from and iterating eqs .( [ eq24 ] ) ( [ eq26 ] ) .note that is indistinguishable from zero , in the scale of the plot , for below about 0.2.,scaledwidth=40.0% ] having failed to find a percolation transition in the assortative -flowers , we now turn to the -flower with a non - iterated link ( fig .[ fig3 ] ) .the recursion relation for the probability of contact between hubs in successive generations is indeed , note that contact can be made through either of the two paths consisting of two stringed copies of generation ( with probability , in either case ) or through the non - iterated link ( with probability ) .the probability that none of these three parallel paths make contact is therefore , and follows . 
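The renormalization maps described in this section are easy to iterate numerically. The sketch below does this for the bare (2,2)-flower, whose hub-to-hub contact probability renormalizes as q -> 1 - (1 - q^2)^2 = 2q^2 - q^4, and for the decorated (2,2)-flower, where the non-iterated link contributes the bare probability p at every step, q -> 1 - (1 - p)(1 - q^2)^2; the general (u,v) map described above is q -> 1 - (1 - q^u)(1 - q^v). These expressions are our reading of the verbal derivation, so the snippet should be taken as an illustration of the flow rather than a substitute for the exact analysis.

```python
import numpy as np

def flower_map(q):
    """Bare (2,2)-flower: two parallel paths of two links each."""
    return 1.0 - (1.0 - q**2) ** 2              # = 2 q^2 - q^4

def decorated_map(q, p):
    """Decorated (2,2)-flower: the same two paths plus a non-iterated
    link, open with the bare probability p at every generation."""
    return 1.0 - (1.0 - p) * (1.0 - q**2) ** 2

# Unstable fixed point of the bare map = percolation threshold of the
# fractal (2,2)-flower, located by bisection of f(q) = q on (0, 1).
lo, hi = 1e-3, 1.0 - 1e-3
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if flower_map(mid) > mid:      # above the fixed point: flows towards 1
        hi = mid
    else:                          # below the fixed point: flows towards 0
        lo = mid
p_c = 0.5 * (lo + hi)
print("bare (2,2)-flower: p_c ~", round(p_c, 4))

# Derivative of the map at p_c, which the text relates to the exponent nu
eps = 1e-6
lam = (flower_map(p_c + eps) - flower_map(p_c - eps)) / (2 * eps)
print("lambda = f'(p_c) ~", round(lam, 4))

# Decorated flower: iterate to the thermodynamic limit for a range of p
# and watch the contact probability jump discontinuously.
for p in np.linspace(0.10, 0.25, 16):
    q = p
    for _ in range(5000):
        q = decorated_map(q, p)
    print(f"decorated: p = {p:.2f}  ->  p_infinity ~ {q:.4f}")
```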
in the thermodynamic limit , .it is easier to obtain implicitly , inverting ( [ pp ] ) : one can thus see that is double - valued , for .a stability analysis reveals that only the lower branch is stable . for ,the only available solution to ( [ pp ] ) is .this solution is stable as well .thus , has a discontinuity at , where it jumps from to , see fig .[ fig9 ] .-flower . shownare curves for generations and , obtained from iteration of eq .( [ pp ] ) ., scaledwidth=40.0% ] the recursion relations for the giant component are slightly more involved than in previous cases .we define , as usual , as the probabilities that a node reaches various hubs combinations in generation ( fig .[ fig10]a ) .we also denote by the probability that , after embedding the -th generation in generation , the node reaches only one of the hubs , connected to the non - iterated link . similarly , is the probability that it reaches a single hub that is _ not _ connected to the non - iterated link , and the probability that it reaches both hubs ( fig .[ fig10]b ) .we then have \}\\ & \>\>\>\>\>\>\>\>\>+yppq^2+zq^2(qp+p)\,,\\ & g'=(x+y)p[p(2pq+p^2)+qp^2]\\ & \>\>\>\>\>\>\>\>\>+z(1-qq^2)\ , , \end{split}\ ] ] where again , the fact that confirms that the equations are consistent . ) ., scaledwidth=40.0% ] the recursion equations simplify with the substitution and , leading to the scaling of near the percolation threshold is dominated by the largest eigenvalue of the recursion matrix , ( evaluated at ) . from thiswe find the finite size exponent in order to compute the order parameter critical exponent , , we must first replace the scaling relations suitable for fractals and regular spaces with relations for percolation in _ small world _substrata , that are _ infinite_-dimensional . instead of ( [ pscaling_frac ] )we write where the original scaling argument has been changed to , obviating the question of diameter . using this and a similar argument to the one leading to ( [ thetanud ] ), we now derive naively equating this relation to ( [ thetanud ] ) one obtains .this is , of course , meaningless , but makes some kind of sense : because the net is small world both its dimension and the inverse of the correlation length exponent are infinite , but in such a way that their ratio yields a finite .the exponent is obtained in practice from the recursion relation for the contact probability , and using ( [ newphi ] ) : . for the decorated -flowerwe find , so that .iterated curves of ( for ) indeed show a transition at with an infinite order parameter exponent ( fig .[ fig11 ] ) . for the decorated -flower .shown are curves for generations and .inset : detail about , showing that with . ,we have studied the percolation phase transition in a class of hierarchical scale - free nets that can be built to display a large variety of structural properties and that can be analyzed exactly . when the scale - free nets are also fractal , that is , when the mass of the net increases as a power of its diameter , percolation is very similar to what is found in regular lattices .we do not see any specific signature that might be ascribed to the scale - free degree distribution .percolation in _ small world _hierarchical lattices is more exotic . 
in the -flowers, we find that there is no percolation phase transition : the system is always in the percolating phase , even as the bonds get diluted to concentration .this is in line with what is known for stochastic scale - free nets of degree exponent .however , for -flowers the percolation phase transition fails to appear even as ( and ) increase without bound . to be sure, flattens more pronouncedly about the origin , reaching near zero probability at wider and wider regions of , as increases , but there is no transition nevertheless see eq .( [ essential ] ) .a possible cause for the exceptional resilience of -flowers is their being strongly assortative , compared to stochastic and everyday life scale - free nets .we then studied percolation in the decorated -flower , a hierarchical scale - free net of degree exponent that is small world and _ disassortative_. in this case there is a percolation phase transition at a finite , and the order parameter critical exponent characterizing the transition is .this agrees with the result for percolation in stochastic scale - free nets , that , since the degre exponent of the decorated -flower is .however , the finding in our case is generic : one can show that for decorated flowers with other values , independently of .the same is true for the -flower decorated with _two _ non - iterated links ( connecting the two pairs of opposite hubs ) .the decorated -flower and similar constructs closely mimic everyday life stochastic scale - free nets ( small world , disassortative , and high degree of clustering ) .why is it then that they can not reproduce a phase transition with finite ? perhaps we are still missing out on some crucial structural property , common to everyday life stochastic networks .another possibility is that it is a consequence of the hierarchical flowers being _finitely ramified _( they can be disjointed by removing a finite number of nodes , regardless of the graphs sizes ) .we do not know whether finite ramification is typical of everyday life networks .finding out the answers to these questions will shed further light on the structure of the complex nets around us .we thank james bagrow for discussions and help with coding the -flowers , and nihat berker , vladimir privman , and bob ziff for help with background material and fruitful discussions .partial support from nsf award phy0555312 ( dba ) is gratefully acknowledged .r. albert and a .-barabsi , rev . of mod. phys . * 74 * , 47 ( 2002 ) ; a .-barabsi , _ linked : how everything is connected to everything else and what it means _ , ( plume , 2003 ) ; m. e. j. newman , siam review , * 45 * , 167 ( 2003 ) ; s. n. dorogovtsev , j. f. f. mendes , advances in physics * 51 * 1079 ( 2002 ) ; s. n. dorogovtsev and j. f. f. mendes , _ evolution of networks : from biological nets to the internet and www _ , ( oxford university press , oxford , 2003 ) ; s. bornholdt and h. g. schuster , _ handbook of graphs and networks _ ,( wiley - vch , berlin , 2003 ) ; r. pastor - satorras and a. vespignani , _ evolution and structure of the internet _ , ( cambridge university press , cambridge , uk , 2004 ) ; m. newman , a - l .barabsi , d. j. watts , eds . , _ the structure and dynamics of networks _ , ( princeton university press , 2006 ) ; s. boccaletti , v. latora , y. moreno , m. chavez , and d .- u .hwang , complex networks : structure and dynamics , " physics reports * 424 * , 175308 ( 2006 ) .a. kapitulnik , a. aharony , g. deutscher , and d. stauffer , j. 
Phys. A *16*, L269 (1983); D. Stauffer and A. Aharony, _Introduction to Percolation Theory_, 2nd edition (Taylor and Francis, 1991); D. ben-Avraham and S. Havlin, _Diffusion and Reactions in Fractals and Disordered Systems_ (Cambridge University Press, 2000).
We study the percolation phase transition in hierarchical scale-free nets. Depending on the method of construction, the nets can be fractal or small-world (the diameter grows either algebraically or logarithmically with the net size), assortative or disassortative (a measure of the tendency of like-degree nodes to be connected to one another), or possess various degrees of clustering. The percolation phase transition can be analyzed exactly in all these cases, due to the self-similar structure of the hierarchical nets. We find different types of criticality, illustrating the crucial effect of other structural properties besides the scale-free degree distribution of the nets.
the multitype contact process introduced in is a continuous - time markov process whose state space maps the -dimensional integer lattice into the set where state 0 refers to empty sites and where state , , refers to sites occupied by a type particle .denoting by the state of the system at time and by the binary relation indicating that two vertices are nearest neighbors , the evolution of the process at vertex is described by the transition rates where is the rate at which the state of flips from to .that is , type particles give birth through the edges of the lattice to particles of their own type at rate and die spontaneously at death rate .if an offspring is sent to a site already occupied , the birth is suppressed .the multitype contact process has been introduced and completely studied when the death rates are equal by neuhauser . to fix the time scale , assume that , and to leave out trivialities , assume in addition that the birth rates are greater than the critical value of the basic contact process .then , the type with the highest birth rate outcompetes the other type . in the neutral casewhen the birth rates are equal , the process clusters in dimension while coexistence occurs in dimension . here and after , coexistence means strong coexistence : there exists a stationary distribution with a positive density of type 1 and type 2 .the long - term behavior of the process when the death rates are different remains an open problem but neuhauser conjectured that her results extend to the general case provided one replaces the birth rate by the ratio of the birth rate to the death rate .in particular , it is believed that the coexistence region as a subset of the space of the parameters has lebesgue measure zero .the existence of such a coexistence region is mathematically interesting but for obvious reasons it is irrelevant to explain why species coexist in nature . in order to identify mechanisms ( more meaningful for biologists ) that promote coexistence ,recent studies have focused on modifications of the multitype contact process in which the coexistence region contains an open set of the parameters .it has been proved in different contexts that coexistence is promoted by spatial and temporal heterogeneities .this article introduces the first example of a multitype contact process in which coexistence is produced by the geometry of the graph on which particles evolve . in some sense ,our main result is analogous to the one of pemantle which states that , in contrast with the contact process on the regular lattice , the contact process on homogeneous trees exhibits a phase of weak survival . in both cases ,the geometry of the graph is responsible for creating new qualitative behaviors . to construct our process , we consider the -dimensional lattice as a homogeneous graph with degree where vertices are connected to each of their nearest neighbors .let be an odd positive integer .then , we consider the following collection of hyperplanes : where denotes the coordinate of , and remove from the original graph all the edges that intersect one of these hyperplanes .this induces a partition of the lattice into -dimensional cubes with length edge that we call _patches_. see the left - hand side of figure [ fig : graph ] for an illustration where edges drawn in dotted lines are the edges to be removed . since the parameter is odd, each patch has a central vertex . 
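Before the two-scale structure is completed below, it may help to see the basic multitype contact process itself in executable form. The sketch simulates the neutral two-type process on a one-dimensional ring with the per-edge birth convention used above (a type-i particle sends offspring through each of its edges at rate the birth rate and dies at its death rate, with births onto occupied sites suppressed); the choice of a ring, the parameter values, and the fully occupied initial condition are ours. For either type to survive on its own, the birth rate should exceed the critical value of the basic contact process, known numerically to be about 1.65 in one dimension.

```python
import random

def multitype_contact_ring(L=100, birth=(2.0, 2.0), death=(1.0, 1.0),
                           t_max=30.0, seed=1):
    """Gillespie simulation of the two-type contact process on a ring of L
    sites: state 0 = empty, 1 or 2 = occupied by that type.  A type-i
    particle dies at rate death[i-1] and sends an offspring through each of
    its two edges at rate birth[i-1]; births onto occupied sites fail."""
    rng = random.Random(seed)
    eta = [rng.choice((1, 2)) for _ in range(L)]       # start fully occupied
    t = 0.0
    while t < t_max:
        occupied = [x for x in range(L) if eta[x]]
        if not occupied:
            break                                      # both types extinct
        rates = [2 * birth[eta[x] - 1] + death[eta[x] - 1] for x in occupied]
        total = sum(rates)
        t += rng.expovariate(total)                    # time of next event
        x = rng.choices(occupied, weights=rates)[0]    # site of next event
        i = eta[x] - 1
        if rng.random() < death[i] / (2 * birth[i] + death[i]):
            eta[x] = 0                                 # spontaneous death
        else:
            y = (x + rng.choice((-1, 1))) % L          # birth through one edge
            if eta[y] == 0:
                eta[y] = eta[x]
    return eta

final = multitype_contact_ring()
print("type 1 sites:", final.count(1), " type 2 sites:", final.count(2))
```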
to complete the construction ,we draw a long edge between the centers of adjacent patches as indicated in the right - hand side of figure [ fig : graph ] .the resulting graph can be seen as the superposition of two lattices that we call _ microscopic _ and _ mesoscopic _ lattices .even though , for more convenience , we will prove all our results for this particular graph , our main coexistence result can be easily extended to more general graphs that we shall call _ two - scale graphs_. these graphs are described in details at the end of this section . to formulate the evolution rules, we write to indicate that vertices and are connected by a short edge , and to indicate that both vertices are connected by a long edge . the evolution at vertex then given by the following transition rates : note that in the expression of the first sum is empty whenever vertex is not located at the center of a patch .we call this process the _ two - scale multitype contact process_. also , we call the parameters and the _ microscopic _ and _ mesoscopic _ birth rates . the one - color version of this process has been introduced by belhadji and lanchier as a spatially explicit model of metapopulation . the objective there was to determine parameter values for which survival occurs .in contrast , the emphasis here is on whether both types coexist or one type outcompetes the other type .while our analysis of the single - species model in did not reveal any major difference between the contact processes on regular lattices and the graph of figure [ fig : graph ] , our analysis of the multispecies model shows that two - scale graphs , as opposed to regular lattices , promote the coexistence of the species . from now on, we assume that the parameters are chosen in such a way that each type survives in the absence of the other one and refer to for explicit conditions of survival .finally , note that when , i.e. , patches reduce to a single vertex , the values of the microscopic birth rates are irrelevant and the two - scale multitype contact process reduces to the multitype contact process .therefore , to avoid trivialities , we also assume that .we call the best _ invader _ the type with the highest to ratio , and the best _ competitor _ the type with the highest to ratio .our first theorem extends neuhauser s result to the two - scale multitype contact process : assuming that the death rates are equal , when one type is both the best invader and the best competitor , it outcompetes the other type except in the neutral case when the process clusters in and coexistence occurs in .see figure [ fig : m2cp ] for pictures of numerical simulations in the two dimensional neutral case .[ duality ] assume that and . 1 .in the neutral case and we have the following alternative .1 . in clustering occurs , i.e. , for any initial configuration , 2 . in coexistence occurs , i.e. , there exists a stationary distribution under which the density of type 1 and the density of type 2 are both positive .2 . if and with at least one strict inequality then type 2 wins . to search for strategies promoting coexistence, we now assume that one type , say type 1 , is the best invader , and the other type is the best competitor , in which case the limiting behavior of the process is more difficult to predict .interestingly , while the results of theorem [ duality ] are not sensitive to the patch size , the long - term behavior of the process under these new assumptions strongly depends upon the parameter . 
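To make the geometry of figure [fig:graph] concrete, here is one way to assemble the two-dimensional version of the two-scale graph with networkx: cut the square lattice into K-by-K patches (K odd), delete every edge that crosses a patch boundary, and join the centres of adjacent patches by long mesoscopic edges. The finite window, the restriction to two dimensions, and the `long` edge attribute are simplifications of ours for illustration.

```python
import networkx as nx

def two_scale_graph(K=5, patches_per_side=4):
    """Two-dimensional two-scale graph: K x K patches (K odd) with all
    boundary-crossing edges removed, plus 'long' edges joining the centres
    of adjacent patches."""
    assert K % 2 == 1
    side = K * patches_per_side
    G = nx.grid_2d_graph(side, side)                   # microscopic lattice
    for a, b in list(G.edges()):                       # cut the patch boundaries
        if (a[0] // K, a[1] // K) != (b[0] // K, b[1] // K):
            G.remove_edge(a, b)
    c = K // 2                                         # offset of a patch centre
    for i in range(patches_per_side):
        for j in range(patches_per_side):
            centre = (i * K + c, j * K + c)
            if i + 1 < patches_per_side:               # long edge to the next patch
                G.add_edge(centre, ((i + 1) * K + c, j * K + c), long=True)
            if j + 1 < patches_per_side:
                G.add_edge(centre, (i * K + c, (j + 1) * K + c), long=True)
    return G

G = two_scale_graph()
print(G.number_of_nodes(),
      sum(1 for *_, d in G.edges(data=True) if d.get("long")))
```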
as mentioned above , when patches reduce to a single vertex , the values of the microscopic birth rates are irrelevant so that type 1 particles outcompete type 2 particles as predicted by theorem 1 in .in contrast , taking large leaves enough room for type 2 to outcompete locally type 1 within each patch , except maybe near the central vertices located on the mesoscopic lattice . the time a colony of type 2 particles persists within a single patch is long enough so that survival of type 2 is insured by casual migrations from one patch to another , which is referred to as the _ rescue effect _ in metapopulation theory . in conclusion ,large patches promote survival of the best competitor , as indicated in the following theorem .[ survival ] assume that and .then , in any dimension , type 2 survives provided the spatial scale is sufficiently large .theorem [ survival ] is the key result to identify a set of parameters for which coexistence occurs .first of all , we fix the parameter values to make type 1 a good invader but a bad competitor living at a slow time scale .type 1 then survives by jumping from patch to patch , or equivalently by invading the mesoscopic lattice .this holds regardless of the patch size .coexistence of both types then follows from the proof of theorem [ survival ] by fixing the remaining parameters to make type 2 a good competitor living at a much faster time scale than type 1 and taking large ( which does not affect survival of type 2 ) . to prove rigorously survival of both types ,the two - scale multitype contact process will be simultaneously coupled with two different oriented percolation processes , one following the evolution of type 1 at a certain time scale , the other one following the evolution of type 2 at a slower time scale .the use of a block construction implies the existence of an open set of the parameters in which coexistence occurs , so we can conclude that [ coexistence ] in any dimension , the lebesgue measure of the coexistence region is strictly positive provided the spatial scale is sufficiently large .as previously mentioned , theorem [ coexistence ] together with neuhauser s conjecture indicates that , in contrast to the regular lattice , the graph of figure [ fig : graph ] promotes coexistence for the multitype contact process .thinking of the competitive exclusion principle in ecology ( the number of coexisting species at equilibrium can not exceed the number of resources ) , this suggests that the two - scale graph provides two spatial resources , namely the microscopic and the mesoscopic lattices , which allows two types to coexist .note also that , while the coexistence region of the neuhauser s competing model corresponds to the neutral case , for the two - scale multitype contact process , the coexistence region contains cases in which type 1 and type 2 have opposite strategies , namely one type is a good invader exploiting the mesoscopic lattice , while the other type is a good competitor using the microscopic lattice as its primary resource , and both types live at different time scales .finally , even through for simplicity we will prove theorems [ survival ] and [ coexistence ] only for the two - scale graph depicted in figure [ fig : graph ] , we would like to point out that our proofs easily extend to more general graphs , which also gives rise to realistic stochastic spatial models of disease dynamics .we first describe the general mathematical framework in which our results can be extended and then discuss about the relevance 
of this framework from a biological point of view . to begin with ,let be two infinite graphs .we call the microscopic graph and the mesoscopic graph , and consider the two - scale contact process evolving on the graph where species gives birth through the edges of the microscopic graph at rate and through the edges of the mesoscopic graph at rate . assume that we have the following property that we call _ separation of the space scales_. the mesoscopic graph contains a self - avoiding path such that 1 . for all , contains a self - avoiding path of length at least containing .2 . for all , the shortest path in connecting and has length at least . if there is no path in connecting and ( note that this is the case for the two - scale graph depicted in figure [ fig : graph ] ) we assume by convention that both vertices are connected by a path of infinite length so condition 2 above holds .then , our proof of theorem [ coexistence ] implies that , for the two - scale multitype contact process evolving on , the lebesgue measure of the coexistence region is strictly positive provided is sufficiently large .note that , when is a connected graph , condition 1 above is always satisfied .in particular , a natural way to construct a suitable graph is to start from an infinite connected graph , then select an infinite subset of vertices that are at least distance from each other , and finally add enough edges between the vertices in to obtain a mesoscopic graph with at least one infinite self - avoiding path .returning to a microscopic graph made of infinitely many finite connected components and thinking of each component as a patch , one can legitimately argue that if two patches are connected by an edge of the mesoscopic structure then all the vertices of one patch should be connected to all the vertices of the other patch by a mesoscopic edge . in a number of contexts , however , patches are arbitrarily large yet adjacent patches are connected through only few vertices , in which case our general framework can capture the main features of the dynamics . this is the case , for instance , in metastatic diseases , such as malignant tumor cells that first spread within a given organ for a long time and then infect quickly other organs while reaching the bloodstream . in this context ,the connected components of the microscopic structure represent organs or parts of the organs of the human body and the mesoscopic structure the vascular system . 
in a different context, one can think of the microscopic structure as a set of major cities and the mesoscopic structure as an airline network , where two types of diseases spread : one highly infectious disease such as h1n1 influenza that spreads quickly through the microscopic structure but slowly through the mesoscopic one due to airport screenings ( best competitor ) , and one moderately infectious disease that spreads at an equal speed through the whole structure of the network ( best invader ) .theorem [ duality ] has been proved by neuhauser for the multitype contact process on the regular lattice , which corresponds to the case for our process .her proof relies on duality .thinking of the process as being generated by a graphical representation , the dual process of the multitype contact process starting at a space - time point exhibits a tree structure that induces an ancestor hierarchy in which the ancestors are arranged according to the order they determine the type of the particle at .her result follows from the existence of a sequence of renewal points dividing the path of the first ancestor into independent and identically distributed pieces , as stated in details in proposition [ neuhauser ] below . similarly , the two - scale contact process is self - dual and the dual process exhibits a tree structure that allows us to identify a first ancestor .renewal points can be defined from the topology of the dual process by using the same algorithm as for the multitype contact process . however , since the graph on which the particles compete is not homogeneous , the space - time displacements between consecutive renewal points are no longer identically distributed .the key to our proof is to rely on the fact that the graph of figure [ fig : graph ] is invariant by translation of vector to show the existence of a subsequence of renewal points performing a random walk . 
to define the dual process, we first use an idea of harris to construct the two - scale contact process graphically from collections of independent poisson processes .these processes are defined for each directed edge or vertex as indicated in table [ tab-1 ] .the last two columns show the rate of the poisson processes and the symbols used to construct the graphical representation , respectively .unlabeled arrows from to indicate birth events : provided site is occupied and site empty , becomes occupied by a particle of the same type as the one at .the same holds for type 2 arrows if the particle at site is of type 2 , but these arrows are forbidden for type 1 particles , which takes into account the selective advantage of type 2 .finally , a at site indicates that a particle of either type at this site is killed .this graphical representation allows to construct the two - scale multitype contact process starting from any initial configuration .the proof of lemma [ invasion ] relies on duality techniques as well .in order to define the dual process , the first step is to construct the multitype contact process on graphically from collections of independent poisson processes .as indicated in table [ tab-2 ] , these processes are defined for each directed edge or vertex .unlabeled arrows , type 2 arrows , and s have the same interpretation as in table [ tab-1 ] above .the additional symbol indicates a spontaneous birth of type 1 particle at site 0 .this graphical representation allows us to construct the multitype contact process on starting from any initial configuration .we say that there is a path from to , or equivalently that there is a dual path from to , if there are sequences of times and vertices such that the following two conditions hold : note that , in our definition of path and dual path , s have no effect , though they are important in the construction of the process .the dual process starting at space - time point is then defined as the set - valued process as previously , it is convenient to assume that the poisson processes in the graphical representation are defined for negative times so that the dual process is defined for all . to deduce the type of the particle at from the configuration at earlier times ,we define a labeling of the tree structure of the dual process , thus inducing an ancestor hierarchy , by using the algorithm introduced in section [ sec : duality ] .the path of the first ancestor is then constructed by following backwards in time the branch with the largest label .the type of is determined as follows : 1 .if the first ancestor crosses at least one on its way up to 1 .regardless of the initial configuration , is of type 1 . 2 .if the first ancestor does not cross any s on its way up to 1 . and lands at time 0 on an empty site, the first ancestor does not determine .2 . and lands at time 0 on a 1 and that , on its way up to , the first ancestor does not cross any type 2 arrow then is of type 1 . 3 . and lands at time 0 on a 1 and that , on its way up to , the first ancestor crosses a type 2 arrow then the first ancestor does not determine the type of .we then follow the path of the first ancestor on its way up to until the first 2-arrow we encounter and discard all the ancestors of the point where this arrow is directed to .lands at time 0 on a 2 then is of type 2 . 
if the first ancestor does not determine the type of ( 2a and 2c above ) , we look at the next ancestor in the hierarchy , and so on .point 1a follows from the fact that a spontaneous birth of type 1 particle occurs along the path of the first ancestor .points 2a-2d are the same as for the multitype contact process and we refer the reader to , page 472 , for more details on how to determine the type of for the process with no spontaneous birth . finally , note that , since the state space of the process is finite , which contradicts the definition of `` living forever '' introduced in above . in this section, we say that a space - time point lives forever if that is , condition is only satisfied for all .then , whenever the path of the first ancestor jumps to a space - time point that lives forever in the sense of , this point is called a renewal point . as in section[ sec : duality ] , the sequence of renewal points divides the path of the first ancestor into independent and identically distributed pieces .we are now ready to prove lemma [ invasion ] .+ proof of lemma [ invasion ] .let and assume that lives forever .the first step is to prove that there exist and such that , for all sufficiently large , to prove , we will construct a dual path forbidden for the 1 s starting at and ending at time 0 on a site occupied by a type 2 particle .the idea is to apply a modification of the so - called repositioning algorithm ( for the original version , see page 28 in ) .we say that a renewal point is associated with a 2-arrow if the first arrow a particle crosses starting at this renewal point and moving up the graphical representation is a 2-arrow .we call selected path with origin and target the following dual path : the process starts at and follows the path of the first ancestor starting at until the first time it jumps to a renewal point associated with a 2-arrow .then , we either leave where it is at that time or reposition it . to determine whether and where to reposition the selected path , we denote the location of the second ancestor in the hierarchy at time , provided this ancestor exists , by .let be a large constant that does not depend on , and let be the straight line going through and . also , we denote the euclidean distance by . 1 .assume that exists and lives forever .then 1 . if and set .2 . if and set .otherwise , we set .2 . assume that does not exist or does not live forever .we set . in either case, we start a new dual process at and follow the path of its first ancestor until the first time it jumps to a renewal point associated with a 2-arrow when we apply again the repositioning algorithm , and so on . intuitively , this causes the selected path to drift towards the target while staying close to the straight line .more precisely , let and belong to the segment in the order indicated in the following picture : assume that and at some time and set then , it can be proved that , for suitable constants and , the proof of can be found in , lemma 3.5 .that is , starting from the small ball centered at in the picture above , the selected path hits the small ball centered at before leaving the large ball centered at , this takes less than units of time with probability close to 1 when the parameter is large . 
in particular , if we let apply consecutively and use that , we obtain to construct , we let be the corner of closest to and be the center of patch ( see figure [ fig : perco ] ) .the dual path starts at , follows the selected path with target until when it hits the euclidean ball with center and radius , then follows the selected path with target until time . by , for suitable and . in other respects, since is good , a straightforward application of lemma 3.9 in implies that for suitable and .finally , using the duality properties described above and the fact that 2-arrows are forbidden for the 1 s , and combining and , we obtain for suitable and and all sufficiently large , which establishes . the second step is to prove that for appropriate and .note that , on the event that does not live forever , duality implies that the probability in is equal to 0 for the multitype contact process with no spontaneous birth of type 1 particles . in our case , due to the presence of spontaneous births at the central vertex 0 , we need to bound the probability that the dual process starting at hits 0 .more precisely , since , follows from which is a well - known property of the contact process .the proof follows from the analogous result for oriented percolation ( see , section 12 ) and the fact that the supercritical contact process viewed on suitable length and time scales dominates oriented percolation ( see ) . finally , combining and , we obtain for suitable and , and all sufficiently large .that is , condition 1 in definition [ good ] holds with probability arbitrarily close to 1 .now , let .since the process dominates a one - color contact process with parameter , we have for appropriate and . combining andimplies that in particular , there exist and such that for sufficiently large .the lemma then follows from and . + from lemma [ invasion ] , it is easy to deduce when .the left picture of figure [ fig : perco ] gives a schematic illustration of the dual path in continuous line .the proof of when and when both and are is similar and we refer to the last two pictures in figure [ fig : perco ] for an illustration of a suitable dual path in these two cases .note that the case is slightly more complicated since the repositioning algorithm has to be applied in two different directions in order to avoid 0 with high probability .with proposition [ coupling_1 ] in hands , we are now ready to deduce useful properties of the multitype contact process restricted to from analogous properties of the oriented percolation process on .we let denote the extinction time of type 2 particles .we denote by the conditional probability given the event that vertex 0 ( and only vertex 0 ) is occupied by a type 2 particle at time 0 . in the next lemma, we prove that , starting with a single 2 at the center of the patch , with high probability , the process may exhibit only two extreme behaviors : either the 2 s spread out successfully and the time to extinction is arbitrarily large when is large , or they die out quickly .the proof is divided into two steps , both relying on proposition [ coupling_1 ] .the idea is to decompose the event of interest according to whether the event occurs or not .the occurrence of can be thought of as a successful invasion of type 2 .we will prove that ( i ) type 2 particles live an exponentially long time on the event , while ( ii ) they die out quickly on the complement of .+ ( i ) the event occurs . 
in this case, we will prove that for suitable constants and .let since the event occurs and the set dominates the set of wet sites of a supercritical percolation process , there is an in - all - direction expanding region centered at 0 which contains a positive density of good sites . since the correlation between two sites decays exponentially with the distance , the configurations in two squares with no corner in common are almost independent when the parameter is large , which , together with large deviation estimates for the binomial distribution , implies the existence of a constant such that , for large , for suitable and .now , assume that and let denote the extinction level of the percolation process restricted to .theorem 2 in implies that there exists , fixed from now on , such that , for sufficiently large , for suitable and . combining estimates and with the coupling provided in proposition [ coupling_1 ] implies that this completes the proof of .+ ( ii ) the event does not occur . in this case , we will prove that for suitable constants and . by proposition [ coupling_1 ]it suffices to prove the analogous result for the oriented percolation process , namely for some and where is the extinction level . to establish, we couple the restricted and unrestricted percolation processes by letting assume that there exists .this together with implies that any open path ending at site for the unrestricted percolation process leaves the set . moving up along such a path , we denote by the first site outside we encounter .using again , it is easy to see that there is an open path ending at for the restricted percolation process which implies that for some . in conclusion, using the reverse of , we can bound the left - hand side of by where which is a well - known property of supercritical oriented percolation processes ( see page 1031 in ) .this establishes and . + we conclude by noticing that this completes the proof .due to spontaneous births , the region near vertex 0 is mostly occupied by 1 s .the next two lemmas show however that , provided the 2 s invade the patch successfully , the amount of time vertex 0 is occupied by a type 2 particle grows exponentially with the size of the patch .let be the ( finite ) set of the configurations restricted to such that is a good site and , for each configuration , let be the set of the realizations of the graphical representation restricted to the space - time box such that since the events and are independent , we have using that is finite , we obtain finally , since the events are measurable with respect to the graphical representation restricted to the space - time box , the bound only depends on .[ occupation ] let and as in lemma [ time ] . also , denote by the lebesgue measure on the real line .then , for any and large , for suitable constants and . the condition together with proposition [ coupling_1 ] implies the existence of an in - all - direction expanding region centered at 0 which contains a positive density of good sites . in particular, there exists such that from the previous estimate and lemma [ center ] it follows that , for sufficiently large , for suitable and .this completes the proof . in the previous section, we proved that the two - scale multitype contact process restricted to a single patch viewed at the -cube level dominates oriented percolation on . 
in this section ,we rely on consequences of this result , namely lemmas [ time ] and [ occupation ] , to prove that the process on the entire lattice viewed at the patch ( or -cube ) level dominates , in a sense to be specified , oriented percolation on .this will prove in particular theorem [ survival ] .first of all , we consider the interacting particle system whose state at time is a function , and whose evolution at vertex is described by the transition rates the dynamics are the same as for the process except at the center of the patches in which first type 1 particles appear spontaneously at rate and second births of type 2 particles originated from adjacent patches are only allowed if the patch is void of 2 s .note that is the rate of spontaneous births of type 1 in the process introduced in section [ sec : multitype ] but also an upper bound of the rate at which the center of a patch becomes occupied by a 1 originated from an adjacent patch in the two - scale multitype contact process .in particular , starting from the same initial configuration , the processes and can be coupled in such a way that so it suffices to prove theorem [ survival ] for the process .the process viewed at the patch level will be coupled with the oriented percolation process on via the following definition .the following proposition can be seen as the analog of proposition [ coupling_1 ] . while proposition [ coupling_1 ] is concerned with the two - scale multitype contact process restricted to a single patch viewed at the intermediate mesoscopic scale , proposition [ coupling_2 ]is concerned with the unrestricted process viewed at the upper scale .theorem [ survival ] is a straightforward consequence of proposition [ coupling_2 ] .since the evolution rules of the process are invariant by translation of vector , it suffices to prove that for all sufficiently large .we assume that site is type 2 stable and let in words , is the first time a successful invasion occurs , where successful invasion means that a 2 originated from an adjacent patch is sent to and its family survives at least units of time in the patch .the aim is to prove that is small for large .this , together with lemma [ occupation ] , will imply that site is type 2 stable with probability arbitrarily close to 1 for large enough .to estimate the random time we let and define by induction in words , is the time a type 2 originated from an adjacent patch is born at the center of patch and the time patch becomes void of 2 s . by letting we obtain .let , and let be the amount of time vertex 0 is occupied by a type 2 between time and time .since vertex 0 is occupied by a 2 at least units of time until ( recall that is type 2 stable ) , on the event , we have which implies that putting things together , we obtain we estimate the right - hand side of in three steps ( see - below ) .first of all , observing that from time to time the process restricted to evolves according to the transition rates of the process and applying the markov property , we have for all , which implies that in particular , there is a large , fixed from now on , such that in other respects , lemma [ time ] implies that \ \leq \\frac{{\epsilon}}{5 } \end{array}\ ] ] for sufficiently large .finally , letting and using that is in state 0 a fraction of time of less than with probability less than for large , we obtain for large . applying lemma [ occupation ] with , we also have for sufficiently large . from -, we conclude that for sufficiently large .this completes the proof . 
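before turning to coexistence, it may help to spell out the comparison object used throughout this section. the following is a minimal sketch of standard oriented site percolation in generic notation; the symbols are not the paper's own, and the 1-dependent variant only relaxes full independence to a dependence range of one.

\[
  \mathcal{L} \;=\; \{\,(m,n) \in \mathbb{Z} \times \mathbb{Z}_{+} : m+n \text{ is even}\,\},
\]

each site of $\mathcal{L}$ is declared open with probability $p$, independently of the others in the 0-dependent case. writing $(x,0) \to (m,n)$ when there exist open sites $(x,0)=(m_0,0),(m_1,1),\dots,(m_n,n)=(m,n)$ with $|m_{k+1}-m_k|=1$ for every $k$, the set of wet sites at level $n$ started from a set $A$ is

\[
  W_n^A \;=\; \{\, m : (x,0) \to (m,n) \ \text{for some } x \in A \,\},
\]

and the process is supercritical when $p$ is close enough to 1, in the sense that $W_n^{\{0\}} \neq \varnothing$ for all $n$ with positive probability. block constructions of the type used above produce exactly such a percolation process with $p$ close to 1, whose wet sites are dominated by the set of good sites of the particle system.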
in this section , we prove that the parameter region in which type 1 and type 2 coexist for the two - scale multitype contact process has a positive lebesgue measure , which contrasts with neuhauser s conjecture about the multitype contact process on the regular lattice .the strategy of our proof is as follows .first of all , we establish coexistence for a particular point of the space of the parameters by comparing the process with 1-dependent oriented percolation .interestingly , survival of type 1 and survival of type 2 are proved by considering different time scales . in other words ,the process will be simultaneously coupled with two different oriented percolation processes , one following the evolution of type 1 particles at a certain time scale , the other one following the evolution of type 2 particles at a slower time scale . the suitable time scale for type 2is fixed afterward and depends on the time scale chosen for type 1 . in both cases , however , the process is viewed at the same spatial scale , namely the upper mesoscopic scale ( patch level ) .since our proof relies on a block construction , standard perturbation arguments imply that the coexistence region can be extended to an open set containing the coexistence point , which proves theorem [ coexistence ] . in order to compare the process with oriented percolation, we introduce the following definition . note that the definition of type 2 stable is slightly different from the one in definition [ 2-stable ] in that it now applies to events related to the two - scale multitype contact process instead of the modified process introduced in section [ sec : survival ]. however , proposition [ coupling_2 ] still holds for since the set of 2 s in the two - scale multitype contact process dominates the set of 2 s in the modified process . to exhibit a point of the space of the parameters atwhich coexistence occurs , we fix the condition is to fix the time scale .the condition indicates that type 1 particles can only survive by jumping from patch to patch . in particular , to prove that they survive , the idea is to choose so small that a 1 at the center of a patch can produce and send its offspring to adjacent patches before being killed . more importantly , since , survival of type 1 particles does not depend on the patch size .coexistence is then obtained by choosing so large that type 2 particles can establish themselves an arbitrarily long time in a single patch .since type 1 particles have a positive death rate , centers of patches are empty a positive fraction of time which allows type 2 particles to survive by invading adjacent patches from time to time .[ coupling_3 ] assume .then , for suitable , and , the processes can be constructed on the same probability space in such a way that where and are two copies of . as in proposition[ coupling_2 ] , it suffices to prove that , for , for a suitable choice of the parameters .the first step is to show that there is enough room for type 1 particles to invade the center of the patch . by observing that obtain \ \leq \\theta_2 \ : = \ \frac{2d \,(b_2 + \beta_2)}{1 + 2d \,(b_2 + \beta_2 ) } \ < \ 1.\ ] ] let .large deviation estimates for the poisson distribution give for suitable and . in particular , finally , taking large and then small , we get which establishes for . 
moreover, survival of type 1 particles holds regardless of the patch size, so the proof that the conditions for and hold _ simultaneously _ for the same parameters follows by taking and sufficiently large and applying the results of the previous two sections. this proves that both types coexist. to conclude, we briefly justify the fact that the results of the previous sections also hold under the new assumptions. first, the condition implies that, starting from any initial configuration, which, now that is fixed, can be made arbitrarily small by taking large. in words, except at the center of the patch, all the 1 s in are rapidly killed, so the proof of lemma [ invasion ] extends easily under. proposition [ coupling_1 ] and lemma [ time ] follow as well. next, we observe that the condition implies that vertex 0 is empty a positive fraction of the time, which allows for invasions of type 2 particles at the center of the patch. with this in mind, one can easily check that the proofs of lemmas [ center ] and [ occupation ] also apply under the new assumptions. note, however, that the new lower bound in lemma [ center ] might be smaller. finally, the proof of proposition [ coupling_2 ] still holds as a consequence of lemmas [ time ] and [ occupation ].
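as a point of reference for the coexistence result just established, recall the dynamics of the classical (single-scale) multitype contact process on the regular lattice to which neuhauser's conjecture refers. the rates below are the standard ones from the literature, written in generic notation that is not taken from this paper, and they do not include the spontaneous type 1 births at patch centers that define the two-scale model studied here.

\[
\begin{aligned}
  i &\;\to\; 0 &&\text{at rate } 1 && \text{for } i \in \{1,2\} \quad \text{(death)},\\
  0 &\;\to\; i &&\text{at rate } \lambda_i \, n_i(x) && \text{for } i \in \{1,2\} \quad \text{(birth onto a vacant site)},
\end{aligned}
\]

here $n_i(x)$ denotes the number of nearest neighbors of $x$ occupied by a type $i$ particle (some authors normalize by the degree $2d$) and $\lambda_i$ is the birth rate of type $i$.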
it is known that the limiting behavior of the contact process strongly depends upon the geometry of the graph on which particles evolve: while the contact process on the regular lattice exhibits only two phases, the process on homogeneous trees exhibits an intermediate phase of weak survival. similarly, we prove that the geometry of the graph can drastically affect the limiting behavior of multitype versions of the contact process. namely, while it is strongly believed (and partly proved) that the coexistence region of the multitype contact process on the regular lattice reduces to a subset of the phase diagram with lebesgue measure zero, we prove that the coexistence region of the process on a graph including two levels of interaction has a positive lebesgue measure. the relevance of this multiscale spatial stochastic process as a model of disease dynamics is also discussed.
lattice quantum chromodynamics ( lqcd ) is the lattice discretized theory of the strong nuclear force , the force that binds quarks together into particles such as the proton and neutron .high precision predictions from lqcd are required for testing the standard model of particle physics , a task with increased importance in the era of the large hadron collider ( lhc ) , where deviations between numerical lqcd predictions and experiment could be signs of new physics .lqcd also has a vital role to play in nuclear physics , where such calculations are used to compute and classify the excited states of protons , neutrons and other hadrons ; to study hadronic structure ; and to compute the forces and binding energies in light nuclei .lqcd is a grand challenge subject , with large - scale computations consuming a considerable fraction of publicly available supercomputing resources .the computations typically proceed in two phases : in the first phase , one generates thousands of _ configurations _ of the strong force fields ( gluons ) , colloquially referred to as _gauge fields_. this computation is a long - chain monte carlo process , requiring the focused power of leadership class computing facilities for extended periods . in the second phase ,these configurations are _ analyzed _ , a process that involves probing the interaction of quarks and gluons with each other on each configuration .the interactions are calculated by solving systems of linear equations with coefficients determined by elements of the gauge field . on each configurationthe equations are solved for many right hand sides , and the solution vectors are used to compute the final observables of interest . this second phase can proceed independently on each configuration , and as a result , cluster partitions of modest size have proven to be highly cost - effective for this purpose . until a few years ago, the analysis phase would often account for a relatively small part of the cost of the overall calculation , with analysis corresponding to perhaps 10% of the cost of gauge field generation . in recent years , however , focus has turned to more challenging physical observables and new analysis techniques that demand solutions to the aforementioned linear equations for much larger numbers of right hand sides ( see , e.g. , ) . as a result ,the relative costs have shifted to the point where analysis often requires an equal or greater amount of computation than gauge field generation .the rapid growth of floating point power in graphics processing units ( gpus ) together with drastically improved tools and programmability has made gpus a very attractive platform for lqcd computations .the quda library provides a package of optimized kernels for lqcd that take advantage of nvidia s compute unified device architecture ( cuda ) .once coupled to lqcd application software , e.g. , chroma , this provides a powerful framework for lattice field theorists to exploit .the `` 9 g '' gpu cluster at jefferson laboratory features 192 nvidia gtx 285 gpus providing over 30 tflops of sustained performance in lqcd , when aggregated over single gpu jobs . for problems that can be accommodated by the limited gpu memory , the price / performance compared to typical clusters or massively parallel supercomputers ( e.g. , bluegene / p ) is improved by around a factor of five . however , for problem sizes that are too large , individual gpus have no benefit . 
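schematically, and in generic notation rather than the paper's own, the analysis-phase workload described above can be summarized as follows; this is only meant to make explicit why its cost now rivals gauge field generation.

\[
  M[U^{(c)}]\, x_i^{(c)} \;=\; b_i^{(c)},
  \qquad i = 1,\dots,N_{\mathrm{rhs}},
  \qquad c = 1,\dots,N_{\mathrm{cfg}},
\]

where $U^{(c)}$ is the $c$-th gauge configuration, $M[U^{(c)}]$ is the large sparse quark matrix built from it, and each configuration requires $N_{\mathrm{rhs}}$ independent right-hand sides. the solves are independent across configurations, which is why modest cluster partitions are cost-effective here, but the total workload scales like $N_{\mathrm{cfg}} \times N_{\mathrm{rhs}}$ solver calls, and it is the recent growth of $N_{\mathrm{rhs}}$ that has shifted the overall balance toward the analysis phase.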
even for problems that do fit on a single gpu ,the economics of constructing a gpu cluster tend to motivate provisioning each cluster node with multiple gpus , since the incremental cost of an additional gpu is fairly small . in this scenario , it is possible to run multiple independent jobs on each node , but then the size of the host memory may prove to be the limiting constraint .the obvious recourse in both cases is therefore to parallelize a single problem over multiple gpus , which is the subject of our present work .the paper is organized as follows . in sections [ sec :lqcd ] and [ sec : gpus ] we review basic details of the lqcd application and of nvidia gpu hardware .we then briefly consider some related work in section [ sec : related - work ] before turning to a general description of the quda library in section [ sec : quda ] .our parallelization of the quark interaction matrix is described in [ sec : multi - gpu ] , and we present and discuss our performance data for the parallelized solver in section [ sec : solver - perf ] .we finish with conclusions and a discussion of future work in section [ sec : conclusions ] .the necessity for a lattice discretized formulation of qcd arises due to the failure of perturbative approaches commonly used for calculations in other quantum field theories , such as electrodynamics .quarks , the fundamental particles that are at the heart of qcd , are described by the dirac operator acting in the presence of a local su(3 ) symmetry . on the lattice ,the dirac operator becomes a large sparse matrix , , and the calculation of quark physics is essentially reduced to many solutions to systems of linear equations given by the form of on which we focus in this work is the sheikholeslami - wohlert ( colloquially known as _ wilson - clover _ ) form , which is a central difference discretization of the dirac operator .when acting in a vector space that is the tensor product of a 4-dimensional discretized euclidean spacetime , _ spin _ space , and _ color _ space it is given by here is the kronecker delta ; are matrix projectors in _ spin _space ; is the qcd gauge field which is a field of special unitary ( i.e. , su(3 ) ) matrices acting in _ color _ space that live between the spacetime sites ( and hence are referred to as link matrices ) ; is the clover matrix field acting in both spin and color space , corresponding to a first order discretization correction ; and is the quark mass parameter .the indices and are spacetime indices ( the spin and color indices have been suppressed for brevity ) .this matrix acts on a vector consisting of a complex - valued 12-component _ color - spinor _ ( or just _ spinor _ ) for each point in spacetime .we refer to the complete lattice vector as a spinor field .the nearest neighbor stencil part of the lattice dirac operator , as defined in ( [ eq : m ] ) , in the plane .the _ color - spinor _ fields are located on the sites .the su(3 ) color matrices are associated with the links .the nearest neighbor nature of the stencil suggests a natural even - odd ( red - black ) coloring for the sites.,width=240 ] since is a large sparse matrix , an iterative krylov solver is typically used to obtain solutions to ( [ eq : linear ] ) , requiring many repeated evaluations of the sparse matrix - vector product .the matrix is non - hermitian , so either conjugate gradients on the normal equations ( cgne or cgnr ) is used , or more commonly , the system is solved directly using a non - symmetric method , e.g. 
, bicgstab .even - odd ( also known as red - black ) preconditioning is used to accelerate the solution finding process , where the nearest neighbor property of the matrix ( see fig . [fig : dslash ] ) is exploited to solve the schur complement system .this has no effect on the overall efficiency since the fields are reordered such that all components of a given parity are contiguous .the quark mass controls the condition number of the matrix , and hence the convergence of such iterative solvers . unfortunately, physical quark masses correspond to nearly indefinite matrices .given that current leading lattice volumes are , for degrees of freedom in total , this represents an extremely computationally demanding task .in the context of general - purpose computing , a gpu is effectively an independent parallel processor with its own locally - attached memory , herein referred to as _device memory_. the gpu relies on the host , however , to schedule blocks of code ( or _ kernels _ ) for execution , as well as for i / o .data is exchanged between the gpu and the host via explicit memory copies , which take place over the pci - express bus .the low - level details of the data transfers , as well as management of the execution environment , are handled by the gpu device driver and the runtime system .it follows that a gpu cluster embodies an inherently heterogeneous architecture .each node consists of one or more processors ( the cpu ) that is optimized for serial or moderately parallel code and attached to a relatively large amount of memory capable of tens of gb / s of sustained bandwidth . at the same time , each node incorporates one or more processors ( the gpu ) optimized for highly parallel code attached to a relatively small amount of very fast memory , capable of 150 gb / s or more of sustained bandwidth .the challenge we face is that these two powerful subsystems are connected by a narrow communications channel , the pci - e bus , which sustains at most 6 gb / s and often less . as a consequence , it is critical to avoid unnecessary transfers between the gpu and the host . for single - gpu code ,the natural solution is to carry out all needed operations on the gpu ; in the quda library , for example , the linear solvers are written such that the only transfers needed are the initial upload of the source vector to the gpu and the final download of the solution , aside from occasional small messages needed to complete global sums . a multi - gpu implementation, however , can not avoid frequent large data transfers , and so the challenge becomes to overlap the needed communication with useful work .this is exacerbated further if one wishes to take advantage of many gpus spread across multiple nodes , since the bandwidth provided the fastest available interconnect , qdr infiniband , is half again that provided by ( x16 ) pci - e ..[table : specs]specifications of representative nvidia graphics cards . 
[ cols="<,^,^,^,^,^",options="header " , ] we turn now to the architecture of the gpu itself .our purpose is only to highlight those features that have directly influenced our implementation .we focus here on cards produced by nvidia and specifically on the gt200 generation , as typified by the tesla c1060 and the geforce gtx 285 , since the latter will serve as our test bed .the gt200 series is the second of the three extant generations of cuda - enabled cards , representative examples of which are listed in table [ table : specs ] .the most recent generation , embodying nvidia s `` fermi '' architecture , is only now becoming available in mid-2010 .we note that while hardware features and performance differ between generations , these have relatively little impact on our multi - gpu strategy .likewise , most of the considerations we discuss would apply even to an opencl implementation targeting graphics cards produced by amd / ati .gpus support a single - program multiple - data ( spmd ) programming model with up to thousands of threads in flight at once .each thread executes the same kernel , using a unique thread index to determine the work that should be carried out .the gpu in the geforce gtx 285 card consists of 240 cores organized into 30 multiprocessors of 8 cores each .each core services multiple threads concurrently by alternating between them on successive clock cycles , so a group of 32 threads ( a _ warp _ in nvidia s parlance ) is executing on the multiprocessor at a given moment . at the same time , many additional threads ( ideally hundreds ) are typically resident on the multiprocessor and ready to execute .this allows the multiprocessor to swap in a new set of 32 threads when a given set stalls while waiting for a memory access to complete , for example . in order to hide latency , it is desirable to have many threads resident at once , but each such thread requires a certain number of registers and quantity of shared memory , which limits the total . just as on a cpu , a _ register _is where a variable is stored while it is being operated on or written out .registers are not shared between threads ._ shared memory _, on the other hand , may be shared between threads executing on the same multiprocessor . strictly speaking, the threads must belong to the same _ thread block _, a group of threads whose size is specified by the programmer ; each thread block must consist of a multiple of 64 threads , and one or more thread blocks may be active on a multiprocessor at a time .the geforce gtx 285 provides 16,384 single - precision registers ( 8,192 in double precision ) and 16 kib of shared memory per multiprocessor .the cuda programming model treats the threads within a block as independent threads of execution , as though they were executing on cores that were true scalar processors ; threads may take independent code paths , read arbitrary locations in memory , and so on . in order to obtain optimal performance , however , it is better to treat the multiprocessor as a single 32-lane or 16-lane simd unit .this follows from two considerations .first , when threads within a set of 32 ( a warp ) take different paths at a branch , the various paths are serialized and executed one after another , a condition known as `` warp divergence . 
''second , when accessing device memory , maximum bandwidth is achieved only when 16 threads access contiguous elements of memory , where each such element is a 4-byte , 8-byte , or 16-byte block .( the cuda c language defines various short vector types for this purpose , e.g. , _float2 , float4 , double2 , short4 , _ etc . )this allows the transfer to proceed as a single `` coalesced '' memory transaction . as described in section [ sec : quda ] below , this consideration directly influences the layout of our data .an additional consideration has to do with the physical organization of the device memory . like many classic vector architectures but unlike commodity cpus, gpus are equipped with a very wide memory bus ( 512-bit on the gtx 285 ) with memory partitioned into multiple banks ( eight on the gtx 285 ) .successive 256-byte regions in device memory map to these partitions in a round - robin fashion .this organization is generally transparent to the programmer , but if memory is accessed with a stride that results in traffic to only a subset of the partitions , performance will be lower than if all partitions were stressed equally .such `` partition camping '' can result in an unexpected loss of performance for certain problem sizes . as discussed in and section [ sec : quda ] below , this was found to be a problem for certain lattice volumes in our lqcd application , with the solution being to pad the relevant arrays to avoid the camping . to summarize , the gpu memory hierarchy consists of globally - accessible device memory and local per - multiprocessor shared memory , often used as a manually - managed cache , as well as local registers .in addition , gpus such as the gtx 285 provide two special - purpose caches .the first is a read - only texture cache , which speeds up reads from device memory for certain kinds of access patterns .it also provides various addressing modes and rescaling capability .as described further in section [ sec : quda ] , we take advantage of the latter in our half - precision implementation .finally , each multiprocessor provides a small _ constant cache _( 8 kib on the gtx 285 ) , which is useful for storing run - time parameters and other constants , accessible to all threads with very low latency .gpus were first used to perform lqcd calculations in .this pioneering study predated various programmability improvements , such as the c for cuda framework , and hence was implemented using the opengl graphics api .it targeted single gpu devices .the quda library was discussed extensively in , where the primary techniques and algorithms for maximizing the efficient use of memory bandwidth were presented for a single gpu device .lqcd on gpus has also been explored in , which focused on questions of fine grained vs. coarse grained parallelism on single gpu devices .in addition , there are several as yet unpublished efforts aimed at exploiting gpus for lqcd underway .lqcd has also been implemented on other heterogeneous devices , primarily on the cell broadband engine .efforts in this direction have been reported in as part of the `` qcd parallel computing on the cell broadband engine '' ( qpace ) project and elsewhere . 
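before moving on, the coalescing constraint discussed earlier in this section can be made concrete with a small cuda sketch. this is illustrative code written for this discussion, not quda itself, and the kernel and parameter names are invented.

#include <cuda_runtime.h>

// minimal illustration of coalesced vs. strided access (not QUDA code).
// N sites, NV float4 "blocks" per site (e.g. NV = 6 for a single-precision spinor).

__global__ void read_coalesced(const float4 *in, float4 *out, int N, int NV)
{
    int site = blockIdx.x * blockDim.x + threadIdx.x;
    if (site >= N) return;
    float4 sum = make_float4(0.f, 0.f, 0.f, 0.f);
    for (int v = 0; v < NV; v++) {
        // block-major layout: element v of site 'site' lives at v*N + site,
        // so threads of a warp touch consecutive addresses -> coalesced.
        float4 x = in[v * N + site];
        sum.x += x.x; sum.y += x.y; sum.z += x.z; sum.w += x.w;
    }
    out[site] = sum;
}

__global__ void read_strided(const float4 *in, float4 *out, int N, int NV)
{
    int site = blockIdx.x * blockDim.x + threadIdx.x;
    if (site >= N) return;
    float4 sum = make_float4(0.f, 0.f, 0.f, 0.f);
    for (int v = 0; v < NV; v++) {
        // site-major (CPU-style) layout: all NV elements of a site are contiguous,
        // so adjacent threads are NV*16 bytes apart -> uncoalesced on this hardware.
        float4 x = in[site * NV + v];
        sum.x += x.x; sum.y += x.y; sum.z += x.z; sum.w += x.w;
    }
    out[site] = sum;
}

the only difference between the two kernels is the index arithmetic, yet on hardware of this generation the block-major variant can sustain several times the bandwidth of the site-major one, which is precisely the motivation for the field reordering described in the next section.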
outside the context of lqcd ,general challenges of implementing message passing on heterogeneous architectures have been considered for gpus in and for the roadrunner supercomputer in .an effort to provide a general message passing framework utilizing cuda , mpi , and posix threads is also underway at jefferson laboratory .the quda library is a publicly available collection of optimized qcd kernels built on top of the cuda framework , with a simple c interface to allow for easy integration with lqcd application software .currently , quda provides highly optimized cg and bicgstab linear solvers for a variety of different discretizations of the dirac operator , as well as other time critical components .the power of gpus may only be brought to bear when a large degree of parallelism is available .lqcd is fortunate in this regard , since parallelism can easily be achieved by assigning one thread to each lattice site .the mapping from the linear thread index to the 4-dimensional spacetime index is easily obtained through integer division and modular arithmetic involving the lattice dimensions . these runtime parameters ( and others , such as boundary conditions ) are stored in the constant cache . in applying the lattice dirac operator ,each thread is thus responsible for gathering its eight neighboring spinors ( 24 numbers apiece ) , applying the appropriate spin projector for each , multiplying by the color matrix connecting the sites ( 18 numbers ) , and accumulating the results together with the local spinor ( 24 numbers ) weighted by the mass term .the wilson - clover discretization also requires an extra multiplication by the clover matrix ( 72 numbers ) before the result ( 24 numbers ) is saved to memory . in total , the application of the wilson - clover matrix requires 3696 floating point operations for every 2976 bytes of memory traffic in single precision ( assuming kernel fusion to minimize memory traffic ) .the ordering typical on a cpu is to place the spacetime dimensions running slowest , with internal dimensions ( color , spin , and real / imaginary ) running fastest . however , since memory coalescing is only achieved if adjacent threads load consecutive blocks of 4 , 8 , or 16 bytes , the fields must be reordered to ensure this condition .this can be achieved if we abandon the naive ordering , in favor of the new mapping here is the spacetime volume ; is the linear spacetime index running from 0 through ; corresponds to the internal index running from 0 through , with 24 , 12 , and 72 elements for the spinor , color ( see section [ sec:12gauge ] ) , and clover fields respectively ; and is the length of the vector type used ( e.g. , for _ float , float2 , _ and _ float4 _ ) .we have found that using and is optimal in single and double precision , respectively , each corresponding to a length of 16 bytes . the field ordering used in quda : numbers are broken up into blocks of short vectors ( numbers ) .successive threads thus read successive short vectors ensuring coalescing of the memory transfers . within a block the time index runs slowest , implying that the two faces on the temporal boundaries are each contiguous within the block ; each face is stored in vectors .the blocks are separated by a padding region to avoid partition camping . 
as an example , in single precision one would use the _ float4 _ vector type ( ) , and thus 6 blocks would be needed to store the numbers that make up a color - spinor .likewise , in 2-row storage , the gauge field would need 3 blocks to store the numbers needed for each direction . with 4 such directions ,altogether 12 blocks are needed to store all the link matrices . with the size of the padding chosen to be sites , the ghost zone of link matrices can be hidden entirely in the padding.,width=220 ] quda follows the usual lattice site assignment for the color matrices .the color matrix connecting sites and is denoted by and stored at lattice site .it follows that , which is required for the gather from the backwards direction for site , is stored at site .( the matrix conjugation is performed at no cost through register relabeling in the kernel . ) as anticipated in section [ sec : gpus ] , for certain problem sizes performance may be affected by partition camping .the simple solution quda takes to this problem is to pad the gauge , spinor , and clover fields by one spatial volume , , so that the linear indexing is given by here , , and the are the lengths of the respective spacetime dimensions , with .although not originally intended for this purpose , padding the fields by an extra spatial volume is also convenient for the parallelization process ( see section [ sec : multi - gpu ] ) .we illustrate the field ordering in fig .[ fig : layout ] . given the peak instruction and bandwidth throughputs of current gpus ( table [ table : specs ] ) ,evaluation of the wilson - clover matrix vector product is strongly bandwidth bound .the approach taken by quda is to minimize memory traffic , even at the expense of additional floating point operations , to accelerate performance using the following techniques : only the first two rows of the color matrices are stored in device memory , and using unitarity , the third row is reconstructed in registers from the complex conjugate of the cross product of the first two rows .physically motivated similarity transformations are employed to increase the sparsity of the matrix . in particular, the spin projectors in the temporal dimension are diagonalized by changing from the conventional chiral basis to a `` non - relativistic '' basis , this has the benefit that only 12 real numbers need be loaded when gathering neighboring spinors in the temporal direction and also aids our parallelization approach ( see section [ sec : multi - gpu ] ) .further acceleration is obtained through the use of 16-bit fixed point storage , from here on referred to as half precision .this is implemented by reading the gauge field and spinor field elements via the texture cache , using the read mode _cudareadmodenormalizedfloat_. 
when a texture reference is defined using this mode , a signed 16-bit ( or even 8-bit ) integer read in from device memory will be automatically converted to a 32-bit floating point number in the range $ ] .this format is immediately suitable for the color matrices since all of their elements lie exactly in this range , as a consequence of unitarity .the spinors require an extra normalization , which is shared between all elements of a single spinor .thus in half precision a spinor is stored as 6 _ short4 _ arrays and a single _ float _ normalization array .the use of mixed - precision iterative refinement for solving linear equations is fairly commonplace on gpus and other architectures where the use of double precision comes with a significant performance penalty .such an approach allows the bulk of the computation to be performed in fast low precision , with periodic updates in high precision to ensure accuracy of the final solution .even on architectures where there is parity between peak single and double precision performance , a factor of two difference in memory traffic is unavoidable , and so for bandwidth - bound problems such as our sparse matrix - vector product , the use of mixed precision remains advantageous .quda uses a variant of reliable updates to implement mixed - precision iterative refinement .this approach has the advantage that a single krylov space is preserved throughout the solve , as opposed to the traditional approach of defect correction which explicitly restarts the krylov space with every correction , increasing the total number of solver iterations .it was found that the best time to solution is typically obtained using either double - half or single - half approaches .quda provides the additional vector - vector linear algebra ( blas1-like ) kernels needed to implement the linear solvers .these additional routines take advantage of kernel fusion wherever possible to reduce memory traffic and hence improve performance of the complete solver .since each of these kernels and their various half , single , and double precision variants may have different optimal cuda parameters ( i.e. , sizes of the thread blocks and the number of blocks treated at once ) , an auto - tuning approach is taken to ensure maximum performance .all possible combinations of parameters are tested for each kernel , and the optimal values are written out to a header file for inclusion in production code after a recompilation of the library . due to the memory bandwidth intensity of these ( essentially streaming ) kernels , the complete solver typically runs 10 to 20% slower than would the matrix - vector product in isolation .in parallelizing across multiple gpus , we have taken the simplest approach by only dividing the time dimension , with the full extent of the spatial dimensions confined to a single gpu .this approach was motivated by the asymmetric nature of the lattice dimensions under study ( and ) , and in order to simplify this initial parallelization . in this form , since we are parallelizing over the slowest running spacetime index , the changes required to the single gpu kernel code were relatively minimal . if one were to attempt to scale to hundreds of gpus or more , multi - dimensional parallelization would clearly be needed to keep the local surface to volume ratio under control . given current lattice sizes , however , such extreme parallelization would imply small local volumes and require rethinking of the fundamental algorithms . work in this direction is underway . 
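to make the mixed-precision idea concrete, the following host-side sketch shows the simpler defect-correction variant of mixed-precision refinement; quda itself uses the reliable-update variant described above, which keeps a single krylov space rather than restarting it, and none of the names below are quda api calls.

#include <cmath>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

static double norm2(const Vec &v) {
    double s = 0.0;
    for (double x : v) s += x * x;
    return s;
}

// Sketch of mixed-precision defect correction (NOT the QUDA API).
//   apply_mat : y = M x, evaluated in double precision
//   solve_low : approximately solve M e = r with a single/half precision inner solver
void mixed_precision_solve(Vec &x, const Vec &b,
                           const std::function<void(Vec &, const Vec &)> &apply_mat,
                           const std::function<void(Vec &, const Vec &, double)> &solve_low,
                           double tol, int max_outer)
{
    Vec r(b.size()), Ax(b.size()), e(b.size());
    const double b2 = norm2(b);
    for (int k = 0; k < max_outer; ++k) {
        apply_mat(Ax, x);                                    // high-precision residual r = b - M x
        for (size_t i = 0; i < r.size(); ++i) r[i] = b[i] - Ax[i];
        if (norm2(r) <= tol * tol * b2) return;              // converged against the true residual
        solve_low(e, r, 0.1);                                // cheap inner solve to a loose tolerance
        for (size_t i = 0; i < x.size(); ++i) x[i] += e[i];  // accumulate the correction in double
    }
}

each outer iteration of this scheme restarts the inner krylov space, which is exactly the extra cost that reliable updates avoid; what the two variants share is that the bandwidth-heavy work happens in the cheap low-precision solve while the true residual is always measured in high precision.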
for parallelizing across multiple gpus ,each gpu can either be controlled using a distinct cpu thread or with a distinct process .the potential advantage of the threaded approach is that it avoids unnecessary copies within a node ; however , this advantage has decreased on recent cpus that feature integrated memory controllers and much higher memory bandwidth ( compared to pre - nehalem xeons , for example ) , reducing the overhead of an additional local memory copy . to communicate between gpus on different nodes ,a message passing approach is necessary since the memory space is by definition separate .while mixed - mode programming is possible ( threads within a node , message passing between the nodes ) , we exclusively used a message passing approach since initial investigations suggested no improvement would be gained from the use of threads .in particular , we used qmp ( qcd message passing ) which is an api built on top of mpi that provides convenient functionality for lqcd computations . in parallelizing the action of the wilson - clover matrix onto a spinor field partitioned between distinct gpus , we slice the temporal dimension into equal sized volumes of size . referring to ( [ eq : m ] ) ,the only part of the matrix that connects different lattice sites is the action of , since the clover matrix is local to a given lattice site . when updating the sites on either end of the local temporal boundary , the adjacent spinors which are on the neighboring gpus are required , as well as the gauge field matrix connecting these sites .the link matrix connecting sites and is stored at site ; hence the required link matrix for the receive from the forward temporal direction for sites in the last spatial volume ( or timeslice ) will already be present locally , and only the adjacent spinor is required . for the receive from the backward temporal direction into the first timeslice , the required link matrix will be on the adjoining gpu and so must be transferred .since the link matrices are constant throughout the execution of the linear solver , we transfer the adjoining link matrices in the program initialization . compared to the original single gpu code , this posed the obvious question : where should the extra face ( the ghost zone ) of gauge field matrices be stored ?given that the fields were already padded by an extra spatial volume , a very natural location is within the padded region since this is exactly the correct size to store the additional gauge field slice ( see fig .[ fig : layout ] ) . altering the kernel for this change simply required that if the thread i d corresponded to the first timeslice ( local to the gpu ) then the gauge field array indices are set to the padded region .extra constants were introduced to describe the boundary conditions at the start and end of the local volume , since one of these boundaries might correspond to a global boundary and not just a local boundary .our initial strategy for storing the transferred faces was to put them in the padded regions of the destination gpu s spinor field . like the gauge field ghost zonethis seems very natural , but it introduced complications into the reduction kernels used in the krylov solvers : these assume a contiguous memory buffer , and so without careful rewriting the ghost zones would be double counted .the approach we opted for instead was to oversize the spinor fields by the size of the two transferred faces . when doing reductions , this end zone can be simply excluded ensuring correctness . 
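the storage bookkeeping implied by this partitioning can be sketched in a few lines of host code; the lattice dimensions, variable names and the per-parity counting below are assumptions made for illustration only and are not taken from quda.

#include <cstdio>

int main()
{
    const int X = 24, Y = 24, Z = 24, T = 128;   // example global lattice (assumed values)
    const int num_gpus = 8;                      // partition the time direction only
    const int local_T  = T / num_gpus;           // local temporal extent per GPU

    // sites per GPU and per temporal face (single checkerboard parity after
    // even-odd preconditioning; the factor of two is an assumption of this sketch)
    const long local_sites = (long)X * Y * Z * local_T / 2;
    const long face_sites  = (long)X * Y * Z / 2;

    // the temporal spin projection halves the payload: 12 real numbers per face site
    const long face_reals = 12 * face_sites;

    // the spinor array is oversized by two faces (backward + forward end zone);
    // in half precision each face also carries one normalization float per site
    const long spinor_reals    = 24 * local_sites + 2 * face_reals;
    const long half_norm_extra = 2 * face_sites;

    std::printf("local sites %ld, face sites %ld, spinor reals %ld, end-zone reals %ld, half-prec norms %ld\n",
                local_sites, face_sites, spinor_reals, 2 * face_reals, half_norm_extra);
    return 0;
}

whether the face volume really carries the factor of one half shown here depends on the even-odd convention in use, which is why it is flagged as an assumption; the 12 reals per face site and the extra half-precision normalization follow from the description in the text.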
as described in section [ sec : similar ] the spin projectors in the temporal direction are diagonalized , halving the amount of data that needs to be transferred in the temporal gathering , and so the extra total storage required is actually only components .the upper 12 spinor components which arise from the receive from the backward direction occupy the first half of the end zone , and the lower 12 spinor components arising from the receive from the forward direction occupy the second half . for half precision the extra normalization constant for each ( 12 component ) spinor is also required , and hence an end zone of size elements is added to the normalization field .we illustrate the spinor ghost zones and the basic communication requirements in fig .[ fig : comms ] . with the ghost zone elements stored in the end zone , extra indexing logic was required to ensure that the correct spinors would be loaded by the threads updating the boundaries .fortunately , this extra logic introduced minimal overhead since warp divergence is avoided because the number of spatial sites is divisible by the warp size , a condition that is met by the lattice dimensions under consideration here ( and all production lqcd calculations that we aware of ) . spinor ghost zones and communication steps : we show the source spinor on the sending device ( top ) assuming , corresponding to 6 blocks from fig .[ fig : layout ] .the grey buffers at the end correspond to the ghost zones .the top 3 blocks correspond to the projected components , while the lower 3 blocks nearer the ghost zone correspond to .data from the back faces ( green ) needs to be gathered into a communications buffer on the sending host and likewise for the forward face ( orange ) .the faces are then transferred to the receiving host via qmp / mpi .once transferred the faces are transferred to the ghost zones on the receiving device ( bottom of diagram ) , which then uses the data directly from the ghost zones , hence the corresponding faces have been greyed out.,width=288 ] the first and simplest approach to parallelization is to perform all of the communications up front and then do the computation for the entire volume in a single kernel .the device - to - host transfers are achieved through the use of separate _ cudamemcpy _ calls ( one for each face block ) , with half precision requiring an extra _ cudamemcpy _ for the face of the normalization array . once on the host ,all of these blocks are contiguous in memory , allowing for a single message passing in each direction .the received faces are sent to the device using a single _ cudamemcpy _ for each face ( with an extra _ cudamemcpy _ required for each of the normalization faces in half precision ) and placed in the end zone of the spinor field . finally the wilson - clover kernel is executed .our second implementation aimed to overlap all of the communication with the computation of the internal volume .to do so , the cuda streaming api was used , which allows for a cuda kernel to execute asynchronously on the gpu at the same time that data is being transferred between the device and host using _additionally this makes use of non - blocking mpi communication possible : after the backward face has been transferred to the host , the mpi exchange of this face to its neighbor is overlapped with the transfer of the forward face from device to host . 
in turn , when the first face has been received , this can be sent to the device while the second face is being communicated .this approach requires three cuda streams : one to execute the kernel on the internal volume , one for the face send backward / receive forward , and one for the face send forward / receive backward .an additional required step is that the streams responsible for gathering the faces to the host must be synchronized , using _cudastreamsynchronize _ , before messagepassing can take place to ensure transfer completion . in principle , we could also overlap the host - to - device transfer of the second face and the computation involving the first face . this would yield a minimal speedup at best , since the time spent executing the face kernel is not the limiting factor , and it may actually reduce overall performance since the kernel would be updating half as many sites at a time , reducing parallelism and potentially decreasing kernel efficiency . aside from the parallelization of the sparse matrix vector product , theonly other required addition to the code was the insertion of mpi reductions for each of the linear algebra reduction kernels .our numerical experiments were carried out on the `` 9 g '' cluster at jefferson laboratory .this cluster is made up of 40 nodes containing 4 gpus each , as well as an additional 16 nodes containing 2 gpu devices each that are interconnected by qdr infiniband on a single switch . in this study, we focused our attention primarily on the partition made up of the 16 infiniband connected nodes , with one or two exceptions .the nodes themselves utilize the supermicro x8dtg - qf motherboard populated with two intel xeon e5530 ( nehalem ) quad - core processors running at 2.4 ghz , 48 gib of main memory , and two nvidia geforce gtx 285 cards with 2 gib of device memory each .the nodes run the centos 5.4 distribution of linux with version 190.29 of the nvidia driver .the quda library was compiled with cuda 2.3 and linked into the chroma software system using the red hat version 4.1.2 - 44 of the gcc / g++ toolchain .communications were performed using version 2.3.2 of the qcd message passing library ( qmp ) built over openmpi 1.3.2 . in all our tests we ran in a mode with one mpi process bound to each gpu .the numerical measurements were taken from running the chroma propagator code and performing 6 linear solves for each test ( one for each of the 3 color components of the upper 2 spin components ) , with the quoted performance results given by averages over these solves .statistical errors were also determined but are generally too small to be seen clearly in the figures .importantly , all performance results are quoted in terms of `` effective gflops '' that may be compared with implementations on traditional architectures . in particular, the operation count does not include the extra work done to reconstruct the third row of the link matrix .we carried out both strong and weak scaling measurements .the strong scaling measurements used lattice sizes of and sites respectively . 
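referring back to the overlapped implementation described above, a schematic cuda sketch of the three-stream structure is given below: one stream for the interior kernel and one per face. the kernel names, buffer layout and launch parameters are invented for illustration, the mpi calls are reduced to a bare minimum, and the final face update waits for both uploads rather than pipelining them, so this is deliberately a slight simplification of the scheme in the text.

#include <cuda_runtime.h>
#include <mpi.h>

__global__ void dslash_interior(float4 *out, const float4 *in, int vol) { /* interior sites */ }
__global__ void dslash_faces(float4 *out, const float4 *in, const float4 *ghost, int vol) { /* boundary sites */ }
__global__ void gather_face(float4 *face_buf, const float4 *in, int face_vol, int which) { /* pack one face */ }

void overlapped_dslash(float4 *out, const float4 *in, float4 *d_face[2], float4 *h_send[2],
                       float4 *h_recv[2], float4 *d_ghost, int vol, int face_vol,
                       int back_rank, int fwd_rank, cudaStream_t s[3])
{
    const size_t fbytes = face_vol * sizeof(float4);
    MPI_Request req[4];

    // pack and download the two faces on their own streams (host buffers must be pinned)
    for (int f = 0; f < 2; f++) {
        gather_face<<<(face_vol + 127) / 128, 128, 0, s[1 + f]>>>(d_face[f], in, face_vol, f);
        cudaMemcpyAsync(h_send[f], d_face[f], fbytes, cudaMemcpyDeviceToHost, s[1 + f]);
    }

    // interior sites need no remote data: launch them immediately on stream 0
    dslash_interior<<<(vol + 127) / 128, 128, 0, s[0]>>>(out, in, vol);

    // as each face reaches the host, exchange it with the corresponding neighbor
    int nbr[2] = { back_rank, fwd_rank };
    for (int f = 0; f < 2; f++) {
        cudaStreamSynchronize(s[1 + f]);                       // face f is now on the host
        MPI_Irecv(h_recv[f], 4 * face_vol, MPI_FLOAT, nbr[1 - f], f, MPI_COMM_WORLD, &req[2 * f]);
        MPI_Isend(h_send[f], 4 * face_vol, MPI_FLOAT, nbr[f],     f, MPI_COMM_WORLD, &req[2 * f + 1]);
    }
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);

    // upload the received faces into the ghost (end) zone and finish the boundary sites
    cudaMemcpyAsync(d_ghost,            h_recv[0], fbytes, cudaMemcpyHostToDevice, s[1]);
    cudaMemcpyAsync(d_ghost + face_vol, h_recv[1], fbytes, cudaMemcpyHostToDevice, s[2]);
    cudaStreamSynchronize(s[1]);
    cudaStreamSynchronize(s[2]);
    cudaStreamSynchronize(s[0]);                               // interior kernel done
    dslash_faces<<<(2 * face_vol + 127) / 128, 128>>>(out, in, d_ghost, vol);
}

the simpler blocking implementation described first replaces the asynchronous copies and per-stream synchronizations with plain cudamemcpy calls and performs the whole exchange before a single kernel launch over the full volume.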
both the lattice sizes andthe wilson - clover matrix had their parameters chosen so as to correspond to those in current use by the _ anisotropic clover _analysis program of the hadron spectrum collaboration .the lattices used were _ weak field _ configurations .such configurations are made by starting with all link matrices set to the identity , mixing in a small amount of random noise , and re - unitarizing the links to bring the links back to the manifold .we emphasize that while these lattices were not physical , we have tested the code on actual production lattices on both the volumes mentioned for correctness .the concrete physical parameters do not affect the rate at which the code executes but control only the number of iterations to convergence in the solver .the weak scaling tests utilized local lattice sizes of and sites per gpu , respectively . the solver we employed was the reliably updated bicgstab solver discussed in .we ran the solver in single precision and mixed single - half precision with and without overlapped communications in the linear operator .for the lattices with spatial sites , we also ran the solver in uniform double precision and in mixed double - half precision modes . when run in single or single - half mixed precision modesthe target residuum was , whereas in the double precision and mixed double - half precision modes the residuum was .in addition , the delta parameter was set to in single , in mixed single - half , in double and in the mixed double - half modes of the solver respectively. the meanings of these parameters are explained fully in .+ our results for weak scaling are shown in fig .[ fig : weak - scale ] .we see near linear scaling on up to 32 gpus in all solver modes . in the case with sites per gpu, we were unable to fit the double precision and mixed double - half precision problems into device memory , and hence we show only the single and single - half data . in the case with local volume of show also double precision and mixed double - half precision data .it is gratifying to note that the mixed double - half precision performance of fig . [ fig : weak - scale](b ) is nearly identical to that of the single - half precision case .both mixed precision solvers are substantially more performant than either the uniform single or the uniform double precision solver .we note that for lattices with sites per gpu we have reached a performance of 4.75 tflops .+ + strong scaling results for the lattice in single precision , double precision , single - half mixed precision , and double - half mixed precision .we used the solver that did not overlap computations and communications for these results , since as shown in fig .[ fig : strong - scale ] it was faster than the overlapped solver for the lattice in single and mixed single - half precisions.,width=336 ] fig .[ fig : strong - scale ] shows our strong scaling results . in fig .[ fig : strong - scale](a ) we show the data for the lattices with sites .we see a clear deviation from linear scaling as the number of gpus is increased and the local problem size per gpu is reduced .this is not unexpected , since as the number of gpus is increased the faces represent a larger fraction of the overall work .the improvement from overlapping communication with computation is increasingly apparent as the number of gpus increases .the benefits of mixed precision over uniform single precision can clearly be seen . 
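for concreteness, the ``effective gflops'' figures quoted in this section can be read roughly as follows; this is a sketch of the convention rather than the authors' exact accounting, and it ignores the blas-like vector operations inside the solver.

\[
  \mathrm{GFLOPS}_{\mathrm{eff}} \;\approx\; \frac{N_{\mathrm{flop/site}} \times V \times N_{\mathrm{iter}}}{t_{\mathrm{solve}} \times 10^{9}},
\]

with $V$ the global lattice volume, $N_{\mathrm{iter}}$ the number of applications of the wilson-clover matrix during the solve, $t_{\mathrm{solve}}$ the wall-clock time in seconds, and $N_{\mathrm{flop/site}}$ a fixed per-site operation count (the 3696 figure quoted earlier for wilson-clover) that deliberately excludes the extra arithmetic spent reconstructing the third row of the link matrices. by this measure the solvers that spend most of their iterations in half precision come out ahead in the figures above.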
however , we note that performing the mixed precision computation comes with a penalty in terms of memory usage : the mixed precision solver must store data for both the single and half precision solves , and this increase in memory footprint means that at least 8 gpus are needed to solve this system .the uniform single precision solver requires only the single precision data and can be solved ( at a performance cost ) already on 4 gpus .we highlight the fact that the 32 gpu system is made up of 16 cluster nodes , which themselves contain 128 nehalem cores .we have performed a solution of this system on the jefferson lab `` 9q '' cluster , which is identical in terms of cores and infiniband networking but does not contain gpus . on a 16-node partition of the `` 9q '' cluster we obtained 255 gflops in single precision using highly optimized sse routines , which corresponds to approximately 2 gflops per cpu core . in our parallel gpu computation , on 16 nodes and 32 gpus we sustained over 3 tflops which is over a factor of 10 faster than observed without the gpus .[ fig : strong - scale](b ) shows our strong scaling results for the lattice with sites .this lattice has half the time extent of the larger lattice , and thus we expect strong scaling effects to be noticeable at smaller gpu partitions than in the previous case .further , the spatial volume is a factor of smaller for the lattices than for the larger case .we were surprised that the trend in our results is different from that in fig .[ fig : strong - scale](a ) .notably , in this case we seem to gain little from overlapping communication and computation in the mixed precision solver .indeed , for more than 8 gpus the mixed precision performance reaches a plateau and is surpassed even by the purely single precision case .we believe this dropoff in the strong scaling is due to additional overheads incurred in overlapping communications with computations arising from system and driver issues .we will return to this point in section [ sec : system ] , where we discuss latency microbenchmarks , but suffice it to say that using _cudamemcpyasync _ appears to have a higher latency than _cudamemcpy_. this may be a feature of our motherboard or the version of the nvidia driver we are using . in the case of the lattice , probably the volume in the body is large enough to hide this extra latency . in the case of the lattice ,our data suggests that the local volume may be sufficiently small that the overhead of setting up the asynchronous transfers dominates and that in this instance the lower latency of synchronous _ cudamemcpy _ calls can result in better performance .[ fig : strong - scale24 ] shows the strong scaling data for various precision combinations for the lattice , where we now include uniform double and mixed double - half precision results and do not overlap communication with computation .again we see that the mixed precision solvers employing half precision outperform both single and double uniform precision solvers .note that uniform double precision exhibits the best strong scaling of all because this kernel is less bandwidth bound due to the much lower double precision peak performance of the gtx 285 ( see table [ table : specs ] ) .the pci - e architecture in our supermicro nodes was such that the two gpu devices were each on a bus with a direct connection to a separate socket on the motherboard . in our tests we launched two mpi processes per node . 
in order to obtain maximum bandwidth on the buses , it was necessary to explicitly bind each mpi process to the correct socket .we accomplished this using the processor affinity feature of openmpi . in fig .[ fig : strong - scale](a ) we show the performance a deliberately badly chosen numa configuration ( with maroon x - symbols ) .we bound each process to the opposite socket from the cuda device it was using .one can see that the performance is noticeably lower than the correctly bound case denoted by blue asterisks in fig .[ fig : strong - scale](a ) . secondly ,as alluded to previously , we note that on these nodes the latencies of _ cudamemcpy _ ( used in the non - overlapped communication code ) and of _ cudamemcpyasync _ ( followed immediately by a _ cudasynchronizethread _ ) call are quite different .latency microbenchmark showing tranfer times from host to device or vice versa for messages of varying sizes .we show data for : device to host using _ cudamemcpy _ ( black ) , host to device using _ cudamemcpy _ ( red ) , device to host using _ cudamemcpyasync _ + _ cudasynchronizethreads _ ( green ) and host to device using _cudamemcpyasync_+_cudasynchronizethreads _ ( blue ) .the timings are taken over 500,000 message transfers , width=336 ] as shown in fig .[ fig : latency ] , using _ cudamemcpyasync_ incurs a latency of just under 50 microseconds whereas a synchronous _ cudamemcpy _ has a much shorter latency of 11 microseconds .it can also be seen that once out of the latency limited region , the graphs show different gradients for the host - to - device and device - to - host transfers , indicating different host - to - device and device - to - host bandwidths .these features may depend somewhat on the version of the nvidia driver and motherboard bios used , but additional testing so far suggests that the main culprit is a hardware limitation in the early revision of the intel 5520 ( tylersburg ) chipset used in the nodes. therefore the decision on whether to overlap communication and computation or not may depend on the system under consideration , as well as the problem size .we have demonstrated what we believe is the first successful attempt to use multiple gpu units in parallel for lqcd computations .we have weak scaled our application to 4.75 tflops on 32 gpus and have strong scaled the application , on a problem size of scientific interest , to over 3 tflops . in this latter case , we have achieved over a factor of 10 increase in performance compared to not using gpus ( 255 gflops on a `` regular '' cluster partition containing the same number processors ) .we believe that the order of magnitude increase in computing power is an enabling technology for sophisticated modern analysis methods of great interest to particle and nuclear physics . 
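a minimal version of the latency microbenchmark behind fig. [ fig : latency ] can be written as follows; this is an illustrative sketch rather than the benchmark actually used, it times far fewer transfers than the 500,000 reported, and it uses cudadevicesynchronize in place of the older per-thread synchronization call.

#include <cstdio>
#include <cuda_runtime.h>

static float time_copies(void *dst, const void *src, size_t bytes, cudaMemcpyKind kind,
                         int reps, bool async)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    for (int i = 0; i < reps; i++) {
        if (async) {
            cudaMemcpyAsync(dst, src, bytes, kind, 0);
            cudaDeviceSynchronize();           // mimic the cudaMemcpyAsync + synchronize pattern
        } else {
            cudaMemcpy(dst, src, bytes, kind); // synchronous copy
        }
    }
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 1000.f * ms / reps;                 // microseconds per transfer
}

int main()
{
    const int reps = 10000;
    void *h = nullptr, *d = nullptr;
    cudaMallocHost(&h, 1 << 20);               // pinned host memory, required for true async copies
    cudaMalloc(&d, 1 << 20);
    for (size_t bytes = 8; bytes <= (1 << 20); bytes *= 4) {
        std::printf("%8zu B  sync H2D %7.2f us   async+sync H2D %7.2f us\n", bytes,
                    time_copies(d, h, bytes, cudaMemcpyHostToDevice, reps, false),
                    time_copies(d, h, bytes, cudaMemcpyHostToDevice, reps, true));
    }
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}

on the nodes described above, the synchronous path bottoms out at roughly 11 microseconds per message while the asynchronous-plus-synchronize path sits near 50 microseconds, which is the asymmetry that can make overlapping unprofitable once the local volume becomes small.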
indeed the solver we have described is now in use in production lqcd calculations of the spectrum of hadrons using the technique of _ distillation _ .current calculations use lattice configurations of the same size as described in section [ sec : solver - perf ] which were generated on leadership computing platforms under doe incite and nsf teragrid allocations ( granted to the usqcd and hadron spectrum collaborations , respectively ) .the calculations involve 32768 calls to the solver for each configuration and benefit enormously from the speedup delivered by the gpu solver .prior to parallelizing the quda library , our larger volume dataset was not amenable to solution on gpus due to memory constraints .the use of multiple gpus allows the solution to proceed , realizing the large increases in cost effectiveness promised by gpus .a slightly more nuanced point is that the nodes containing 4 gpus ( and no infiniband ) may now be more efficiently utilized .prior to parallelization , one could solve the problem on a single gpu and analyze two configurations simultaneously on a single node .one could not analyze more , due to the limitations on the host ( primarily memory capacity ) .now one can analyze 2 configurations simultaneously using 2 gpus each , optimally utilizing all 4 gpus in the node .the exact optimization of a node configuration in terms of infiniband cards , gpus , and operating model is an interesting issue but is beyond the scope of this paper .there are many avenues for future exploration .currently only the solvers have been accelerated in the quda library .parallelization onto multiple gpus may make gauge generation on gpu clusters an interesting and desirable possibility .we are also interested in porting more modern algorithms to the gpus such as the adaptive multigrid solver discussed in to speed up computations even further .we follow the development of the opencl standard with interest with a view to potentially harness gpu devices from amd as well as nvidia , and we await future hardware and software improvements to allow better coexistence of gpus and message - passing ( such as sharing pinned memory regions between cuda and mpi ) . finally , we hope that the lessons learned from gpus will be usefully applicable on heterogeneous systems in general as we head towards the exascale .the authors would like to thank chip watson for funding an extremely productive week of coding , and for dedicated access to the jefferson lab 9 g cluster .enlightening discussions with jie chen , paulius micikevicius , and guochun shi are also gratefully acknowledged .this work was supported in part by u.s .nsf grants phy-0835713 and oci-0946441 and u.s .doe grant de - fc02 - 06er41440 .computations were carried out on facilities of the usqcd collaboration at jefferson laboratory , which are funded by the office of science of the u.s .department of energy . authored by jefferson science associates , llc under u.s .doe contract no .de - ac05 - 06or23177 .the u.s .government retains a non - exclusive , paid - up , irrevocable , world - wide license to publish or reproduce this manuscript for u.s .government purposes .r. babich , r. brower , m. clark , g. fleming , j. osborn , and c. rebbi , `` strange quark content of the nucleon , '' _ proc .science _ ( lattice2008 ) , 2008 , 160 [ arxiv:0901.4569 [ hep - lat ] ] . m. peardon _ et al . 
_ [ hadron spectrum collaboration ] , `` a novel quark - field creation operator construction for hadronic physics in lattice qcd , '' _ phys .d _ , vol .80 , 2009 , p. 054506[ arxiv:0905.2160 [ hep - lat ] ] .m. a. clark , r. babich , k. barros , r. c. brower , and c. rebbi , `` solving lattice qcd systems of equations using mixed precision solvers on gpus , '' _ comput ._ , vol . 181 , 2010 , p. 1517 .[ arxiv:0911.3191 [ hep - lat ] ] .r. g. edwards and b. joo [ scidac collaboration and lhpc collaboration and ukqcd collaboration ] , `` the chroma software system for latticeqcd , '' _ nucl ._ , vol . 140 , 2005 , p. 832[ arxiv : hep - lat/0409003 ] . b. sheikholeslami and r. wohlert , `` improved continuum limit lattice action for qcd with wilson fermions , '' _ nucl . phys .b _ , vol . 259 , 1985 , p. 572 .m. r. hestenes and e. stiefel , `` methods of conjugate gradients for solving linear systems '' , _ j. of research of the national bureau of standards _ , vol .49 , no . 6 , 1952 , p. 409 .g. i. egri , z. fodor , c. hoelbling , s. d. katz , d. nogradi , and k. k. szabo , `` lattice qcd as a video game , '' _ comput . phys ._ , vol . 177 , 2007 , p. 631[ arxiv : hep - lat/0611022 ] .k. barros , r. babich , r. brower , m. a. clark , and c. rebbi , `` blasting through lattice calculations using cuda , '' _ proc .science _ ( lattice2008 ) , 2008 , 045 [ arxiv:0810.5365 [ hep - lat ] ] .k. z. ibrahim and f. bodin , and o. pene , `` fine - grained parallelization of lattice qcd kernel routine on gpu '' , _j. of parallel and distributed computing _68 , no . 10 , 2008 , pp . 13501359 .f. belletti _ et al ._ , `` qcd on the cell broadband engine , '' _ proc . science _ ( lat2007 ) , 2007 , 039 [ arxiv:0710.2442 [ hep - lat ] ] . h. baier _ et al ._ , `` qpace a qcd parallel computer based on cell processors , '' _ proc . science _ ( lat2009 ) , 2009 , 001 [ arxiv:0911.2174 [ hep - lat ] ] .g. shi , v. kindratenko , and s. gottlieb , `` cell processor implementation of a milc lattice qcd application , '' _ proc .science _ ( lattice2008 ) , 2008 , 026 [ arxiv:0910.0262 [ hep - lat ] ] . j. spray , j. hill , and a. trew , `` performance of a lattice quantum chromodynamics kernel on the cell processor , '' _ comput ._ , vol . 179 , 2008 , p. 642[ arxiv:0804.3654 [ hep - lat ] ] .j. stuart and j. d. owens , `` message passing on data - parallel architectures , '' _ proc . the 23rd ieee international parallel and distributed processing symposium , rome , italy _ , 2009 .j. j. dudek , r. g. edwards , m. j. peardon , d. g. richards , and c. e. thomas , `` toward the excited meson spectrum of dynamical qcd , '' 2010 [ arxiv:1004.4930 [ hep - ph ] ] .j. brannick , r. c. brower , m. a. clark , j. c. osborn , and c. rebbi , `` adaptive multigrid algorithm for latticeqcd , '' _ phys ._ , vol . 100 , 2008 , 041601 [ arxiv:0707.4018 [ hep - lat ] ]
graphics processing units (gpus) are having a transformational effect on numerical lattice quantum chromodynamics (lqcd) calculations of importance in nuclear and particle physics. the quda library provides a package of mixed-precision sparse matrix linear solvers for lqcd applications, supporting single gpus based on nvidia's compute unified device architecture (cuda). this library, interfaced to the qdp++/chroma framework for lqcd calculations, is currently in production use on the ``9g'' cluster at the jefferson laboratory, enabling unprecedented price/performance for a range of problems in lqcd. nevertheless, memory constraints on current gpu devices limit the problem sizes that can be tackled. in this contribution we describe the parallelization of the quda library onto multiple gpus using mpi, including strategies for the overlapping of communication and computation. we report on both weak and strong scaling for up to 32 gpus interconnected by infiniband, on which we sustain in excess of 4 tflops.
let be an open bounded domain of with a lipschitz boundary , and the unit normal to outward to .the purpose of this paper is to discuss existence and uniqueness of entropy solution for the following initial boundary value problem ,t[\times\omega,\\ u(0,x)&=u_0(x)\;\;\;\;\;\;\;\;\;\;\;\;\mbox { in } \;\;\ ; \omega,\\ b(u)-(f(u)-\nabla\phi(u)).\eta&=0\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox { on } \;\;\ ; \sigma=]0,t[\times\partial\omega . \end{array}\right.\ ] ] here , is taking values on ] with and is strictly increasing else. then problem degenerates to hyperbolic when takes values in the region ] is called entropy solution of problem if , and the following conditions hold : + ] ; for all and for all entropy solution of , we have : in general , uniqueness for evolution equation of kind appear very difficult mainly for the initial boundary values problems . in this context, the use of nonlinear semigroup techniques offers many advantages .let us present briefly another notion of solution coming from the theory of nonlinear semigroups ( see , e.g. , ) .let be an m - accretive operator ( see , e.g. , ) .suppose that , . a measurable function ; l^1(\omega;[0,u_{\max}])) ] for the set of all measurable functions from to ]-valued function .a measurable function taking values on ] , assume , and hold .problem admits a weak solution which is also an entropy solution .in particular , we have .in addition , there exists c independent on such that assume that the sequence is such that : and in .then in , where is the trace operator .the proof uses localization to a small neighborhood of .+ to prove existence of entropy solution , we assume that the couple is non - degenerate in the sense of the following definition : [ compacite ] ( ) .let be zero on ] and a vector .a couple is said to be non - degenerate if , for all , the functions are not affine on the non - degenerate sub intervals of ] , the stationary problem associated to problem : the notion of entropy solution of correspond to the time - independent entropy solution of with source term . in the case where is a bounded interval of , we have an important result , which states that , the total flux is regular at the points and .this kind of regularity seem hard to obtain in multiple space dimensions for , and even in dimension for .[ traceforte ] for all measurable function taking values in ] .moreover , is zero at and .( here and ) . from now ,let s define the operator on associated with regular solutions of by its graph : [ maccretive ] 1 . is accretive in .2 . for all sufficiently small , contains ) ] . for the proof of this proposition , we can refer to . + according to the general results of , it follows existence and uniqueness of integral solution in the sense of definition [ entrsol ] : [ unik1 ] let , and . let be integral solutions of ( with operator ) associated with the data and , respectively .then for a.e . . 
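the displayed system above is garbled by the extraction; a plausible reconstruction, assuming the standard degenerate convection-diffusion form for the interior equation (the initial and boundary conditions are the ones still visible in the fragment), reads

\[
\left\{
\begin{array}{ll}
\partial_t u + \operatorname{div} f(u) - \Delta\phi(u) = 0 & \text{in } Q = \,]0,T[\,\times\Omega,\\
u(0,x) = u_0(x) & \text{in } \Omega,\\
b(u) - \bigl(f(u) - \nabla\phi(u)\bigr)\cdot\eta = 0 & \text{on } \Sigma = \,]0,T[\,\times\partial\Omega .
\end{array}
\right.
\]

the first line is our assumption and should be checked against the original.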
adapted to our case , we have the following result [ unik ] let .let be an entropy solution of and be an entropy solution of .then in particular , is an integral solution of with .we consider an entropy solution of and an entropy solution of .consider nonnegative function having the property that for each , for each .apply the doubling of variables in the spirit of , we obtain this following inequality .(\xi_x+\xi_y)dxdydt\nonumber\\&+\displaystyle\int_0^t\int_{x\in\partial\omega}\!\int_\omega \left|b(u)-(f(u)-\phi(u)_y).\eta(x)\right|\xi dyd\sigma dt\nonumber\\&+\int_0^t\!\!\!\int_\omega\!\int_{y\in\partial\omega}\left|b(v)-(f(v)-\phi(v)_x).\eta(y)\right|\xi d\sigma dxdt\nonumber\\ & + \int_0^t\!\!\int_\omega\!\int_\omega sign(v - u)(u - g(y))\xi dydxdt\nonumber\\&\geq\displaystyle\int_0^t\!\!\!\int_{x\in\omega}\int_{y\in\partial\omega}\left|b(u)-b(v)\right|\xi d\sigma dxdt+\displaystyle\int_0^t\!\!\!\int_{y\in\omega}\int_{x\in\partial\omega}\left|b(u)-b(v)\right|\xi d\sigma dydt\nonumber\\&+\mathop{\overline{\lim}}\limits_{\sigma\rightarrow0}\frac{1}{\sigma}\displaystyle\int_0^t\int\!\!\!\int_{{{\omega_x^c}\times{\omega_y^c}}\cap\left\{-\sigma<\phi(v)-\phi(u)<\sigma\right\}}|\phi(v)_x-\phi(u)_y|^2\xi dydxdt\geq0.\end{aligned}\ ] ] next , following the idea of , we take the test function , where , , and . then , and . due to this choice , by proposition [ traceforte ] , ) ] is divided into cells .we initialize the scheme by : the numerical approximation solution at in the cell number is : with the boundary conditions taken into account via here , is a numerical flux which we assume monotone , consistent , lipschitz continuous ( see ) . in the sequel, we take if ] ; numerically , we observe a boundary layer ( see figure [ fig2 ] ) and this is confirmed by theoretical results of . now , taking into account assumptions , , with data }$ ] ; the numerical observation shows that the boundary condition at and is verified literally and the numerical solution respect the maximum principle ( see figure [ fig3 ] ) .i would like to thank boris andreianov for his thorough reading and helpful remarks which helped me improve this paper .b. andreianov , k. shibi , _ scalar conservation laws with nonlinear boundary conditions _ c. r. acad .* 345 * ( 8) ( 2007 ) 431 - 434 .b. andreianov , k. shibi , _ well - posedness of general boundary - value problems for scalar conservation laws . _ams , accepted .available as preprint hal http://hal.archives-ouvertes.fr/ : hal-00708973 , version 2 .bnilan , crandall , m. g. and pazy , a. , _ nonlinear evolution equations in banach spaces_. preprint book .r. brger , h. frid , k. h. karlsen , _ on the well - posedness of entropy solution to conservation laws with a zero - flux boundary condition . _ j. math .appl . * 326 * ( 2007 ) , 108 - 120 .
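to make the finite volume scheme above concrete, the following is a minimal explicit sketch in one space dimension for an equation of the form $u_t + f(u)_x = \phi(u)_{xx}$ on $]0,1[$, with the total flux replaced by $b(u)$ at the two boundary interfaces. the rusanov flux (one example of a monotone, consistent, lipschitz continuous numerical flux), the choices of $f$, $\phi$ and $b$, the initial data, the cfl factor and the simplified sign convention at the boundary are illustrative assumptions, not the ones used in the paper.

```cpp
// Hedged sketch of an explicit finite-volume scheme for a degenerate
// convection-diffusion equation with the boundary flux set to b(u).
// All model functions and parameters below are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

double f(double u)   { return u * (1.0 - u); }                          // convective flux (assumed)
double phi(double u) { return u <= 0.5 ? 0.0 : (u - 0.5) * (u - 0.5); } // degenerate diffusion (assumed)
double b(double u)   { return 0.1 * u; }                                // Robin-type boundary function (assumed)

// monotone, consistent, Lipschitz numerical flux (Rusanov)
double numFlux(double ul, double ur) {
  const double a = 1.0;                                                 // bound on |f'| on [0,1]
  return 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul);
}

int main() {
  const int N = 200;
  const double dx = 1.0 / N;
  const double dt = 0.1 * dx * dx;                                      // parabolic CFL restriction (assumed)
  std::vector<double> u(N), un(N);
  for (int i = 0; i < N; ++i) u[i] = ((i + 0.5) * dx < 0.5) ? 1.0 : 0.0; // initial data (assumed)

  for (double t = 0.0; t < 0.1; t += dt) {
    for (int i = 0; i < N; ++i) {
      // total flux (convective minus diffusive) at the two cell interfaces;
      // at the domain boundary it is replaced by b(u), sign convention simplified
      const double FL = (i > 0)     ? numFlux(u[i-1], u[i]) - (phi(u[i]) - phi(u[i-1])) / dx : b(u[i]);
      const double FR = (i < N - 1) ? numFlux(u[i], u[i+1]) - (phi(u[i+1]) - phi(u[i])) / dx : b(u[i]);
      un[i] = u[i] - dt / dx * (FR - FL);
    }
    u.swap(un);
  }
  for (int i = 0; i < N; i += 20)
    std::printf("x = %.3f  u = %.4f\n", (i + 0.5) * dx, u[i]);
  return 0;
}
```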
we study a robin boundary problem for a degenerate parabolic equation. we suggest a notion of entropy solution and establish existence and uniqueness. numerical simulations illustrate some aspects of the solution behavior.
minimum algorithms for the alignment of tracking detectors generally come in two flavours , namely those that ignore and those that do not ignore the correlations between hit residuals .the former are sometimes called _ local _ or _ iterative _ methods while the latter are called _ global _ or _ closed - form _ methods .the advantage of the closed - form methods is that for an alignment problem in which the measurement model is a linear function of both track and alignment parameters the solution that minimizes the total can be obtained with a single pass over the data .the covariance matrix for the track parameters is an essential ingredient to the closed - form alignment approach .if the track fit is performed using the standard expression for the least - squares estimator ( sometimes called the _ standard _ or _ global _ fit method ) , the computation of the covariance matrix is a natural part of the track fit .this is why previously reported implementations of the closed - form alignment procedure ( e.g. ) make use of the standard fit .in contrast most modern particle physics experiments rely on a kalman filter track fit for default track reconstruction .the kalman filter is less computationally expensive than the standard fit and facilitates an easy treatment of multiple scattering in the form of process noise .however , the computation of the covariance matrix in the common kalman track fit is not complete : the correlations between track parameters at different position along the track are not calculated . in the presence of process noisethese correlations are non - trivial .consequently , the result of the common kalman track fit can not be used directly in a closed - form alignment procedure . in this paperwe present the expressions for the computation of the global covariance matrix the covariance matrix for all parameters in the track model in a kalman filter track fit .we show how this result can be used in an alignment procedure .furthermore , using similar expressions we demonstrate how vertex constraints can be applied in the alignment without refitting the tracks in the vertex . 
to illustrate that our approach leads to a functional closed - form alignment algorithm , we present some results obtained for the alignment of the lhcb vertex detector with monte carlo simulated data .an important motivation for extending the kalman track fit for use in a closed - form alignment approach is that the estimation of alignment parameters is not independent of the track model .typically , in closed - form alignment procedures the track model used in the alignment is different from that used in the track reconstruction for physics analysis , which in practise is always a kalman filter .sometimes the track model in the alignment is simplified , ignoring multiple scattering corrections or the magnetic field .the imperfections in the track model used for alignment will partially be absorbed in calibration parameters .consequently , in order the guarantee consistency between track model and detector alignment , it is desirable to use the default track fit in the alignment procedure .the kalman filter has also been proposed for the estimation of the alignment parameters themselves .this method for alignment is an alternative formulation of the closed - form alignment approach that is particularly attractive if the number of alignment parameter is large .our results for the global covariance matrix of the kalman filter track model and for vertex constraints can eventually be applied in such a kalman filter alignment procedure .to show that the global covariance matrix of the track parameters is an essential ingredient to the closed - form alignment approach , we briefly revisit the minimum formalism for alignment .consider a track defined as ^t \ : v^{-1 } \ : \left [ \rule{0ex}{1.7ex } m - h(x ) \right ] , \label{equ : trackchisquare}\ ] ] where is a vector of measured coordinates , is a ( usually diagonal ) covariance matrix , is the measurement model and is the vector of track parameters .note that eq.[equ : trackchisquare ] is a matrix expression : and are vectors and is a symmetric matrix , all with dimension equal to the number of measurements . for a linear expansion of the measurement model around an initial estimate of the track parameters , where is sometimes called the derivative or projection matrix , the condition that the be minimal with respect to can be written as .\label{equ : minchisq}\ ] ] the solution to this system of equations is given by the well known expression for the least squares estimator , \label{equ : lsm}\ ] ] where the matrix is the covariance matrix for if the measurement model is not linear , _ i.e. _ if depends on , expression eq.[equ : lsm ] can be applied iteratively , until a certain convergence criterion is met , for example defined by a minimum change in the . 
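the displayed formulas in this passage lost their symbols in extraction; written out, the track $\chi^2$ of eq.[equ : trackchisquare ], the linearised measurement model and the least squares estimator of eq.[equ : lsm ] are presumably the standard expressions

\[
\chi^2 = \bigl[\,m - h(x)\,\bigr]^{T} V^{-1} \bigl[\,m - h(x)\,\bigr],
\qquad
h(x) \approx h(x_0) + H\,(x - x_0),
\]
\[
x = x_0 + C\,H^{T} V^{-1}\bigl[\,m - h(x_0)\,\bigr],
\qquad
C = \bigl(H^{T} V^{-1} H\bigr)^{-1},
\]

with $H = \partial h/\partial x$ the derivative (projection) matrix and $C$ the covariance matrix of the fitted track parameters.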
in that caseit makes sense to write eq.[equ : lsm ] in terms of the first and second derivative of the at the current estimate and regard the iterative minimization procedure as an application of the newton - raphson method .we now consider an extension of the measurement model with a set of calibration parameters , the parameters are considered common to all tracks in a particular calibration sample .we estimate by minimizing the sum of the values of the tracks simultaneously with respect to and the track parameters of each track , please , note that the index refers to the track and not to a component of the vector .we will omit the index from now on and consider only the contribution from a single track .the number of parameters in the minimization problem above scales with the number of tracks .if the number of tracks is large enough , a computation that uses an expression for the least squares estimator analogous to eq.[equ : lsm ] is computationally too expensive .a more practical method relies on a computation in two steps .first , track parameters are estimated for an initial set of calibration parameters .subsequently , the total is minimized with respect to taking into account the dependence of on , _e.g. _ through the total derivative the derivative matrix in eq.[equ : ddalphaa ] follows from the condition that the of the track remains minimal with respect to , which can be expressed as and results in note that if the problem is linear this derivative is independent of the actual value of or .consequently , in this limit this expression remains valid even if the track was not yet minimized with respect to .the condition that the total of a sample of tracks be minimal with respect to both track and alignment parameters can now be expressed as for alignment parameter this defines a system of coupled non - linear equations . in analogy with the procedure introduced for the track minimization above wesearch for a solution by linearizing the minimum condition around an initial value and solving the linear system of equations for . in the remainder of this sectionwe derive the expressions for these derivatives . to simplify the notation we define the residual vector of the track and its derivative to we linearize around the expansion point , and using eq.[equ : dxdalpha ] obtain for any total derivative to ( the minus sign appears because is the derivative of and not of . ) in this expression we have substituted the covariance matrix for for .the first and second derivatives of the contribution of a single track are now given by the matrix that appears in these expressions is the covariance matrix for the residuals .this matrix is in general singular and its rank is the number of degrees of freedom of the fit .if the track parameters for which the residuals and are calculated , are actually those that minimize the track s for the current set of alignment constants , the residuals satisfy the least squares condition and the first derivative to reduces to consequently , if is diagonal , the derivative to a particular parameter only receives contributions from residuals for which does not vanish .is an alignment parameter of module , only hits in module contribute to the first derivative of the to . 
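again restoring the stripped symbols, one common way to write the first and second derivatives of a single track's $\chi^2$ with respect to the alignment parameters $\alpha$, evaluated at the track minimum, is

\[
\frac{\partial \chi^2}{\partial \alpha}
 = -2\,\Bigl(\frac{\partial h}{\partial \alpha}\Bigr)^{\!T} V^{-1} r ,
\qquad
\frac{\partial^{2} \chi^2}{\partial \alpha^{2}}
 \approx 2\,\Bigl(\frac{\partial h}{\partial \alpha}\Bigr)^{\!T} V^{-1} R\, V^{-1}
          \Bigl(\frac{\partial h}{\partial \alpha}\Bigr),
\qquad
R = V - H\,C\,H^{T},
\]

where $r = m - h(\hat{x})$ is the residual vector and $R$ the (singular) covariance matrix of the residuals mentioned in the text; this is our reconstruction and the paper's exact sign conventions may differ.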
] an important consequence of this is that if there are additional contributions to the tracks , in particular hits in subdetectors that we do not align for , constraints from a vertex fit or multiple scattering terms , then these terms only enter the derivative calculation through the track covariance matrix .we will exploit this property in the next section when we discuss the use of a kalman filter track model for alignment .the expressions eq.[equ : firstdchisqdaminchisq ] and eq.[equ : seconddchisqda ] can now be used to evaluate the first and second derivative for an initial calibration over a given track sample and inserted in eq.[equ : globalequation ] to obtain an improved calibration .if the residuals are non - linear in either track parameters or alignment parameters , several iterations may be necessary to minimize the . if the alignment is sufficiently constrained , the second derivative matrix can be inverted and the covariance matrix for the alignment parameters is given by ignoring higher order derivatives in , the change in the total as the result of a change in the alignment parameters can be written as consequently , the change in the total is equivalent to the significance of the alignment correction .the quantity is a useful measure for following the convergence of an alignment .in the global method for track fitting a track is modelled by a single parameter vector ( usually ) at a fixed position along the track .multiple scattering can be incorporated in this model by introducing explicit parameters for the kinks at scattering planes .the parameters that minimize the and the corresponding covariance matrix follow from the application of the least squares estimator eq.[equ : lsm ] . in the kalman filter method for track fitting a trackis modelled by a separate dimensional track parameter vector ( or _ state vector _ ) at each measurement ( or _ node _ ) .the state vectors are related by a _ transport _function , which follows from the equation of motion of the charged particle . in the absence of multiple scatteringthe state vectors are one - to - one functions of one - another and hence fully correlated . in the presence of multiple scattering the correlation is reduced by introducing so - called process noise in the propagation of the state vector between neighbouring nodes . as we have seen in the previous section the closed - form method for alignment uses the vector of residuals and a corresponding covariance matrix .the covariance matrix for the residuals can be computed from the global covariance matrix of the track parameters .however , the correlations between the state vectors at different nodes are normally not calculated in the kalman filter : they are either not computed at all ( if the smoothing is done as a weighted average of a forward and backward filter ) or ( if the rauch - tung - striebel smoother formalism is applied ) only implicitly and only between neighbouring nodes . to derive an expression for the covariance matrix of all parameters in the kalman filter track model we use the notation of reference for the linear kalman filter , in particular * is the state vector at node after accumulating the information from measurements ; * is the covariance of ; * is the state vector at node after processing all measurements . in the followingwe first calculate the correlation between and , which we denote by . 
from thiswe proceed with the correlation matrix between and .the correlation between any two states and then follows from the observation that the correlation between these states occurs via intermediate states . in the notation of we have for the prediction of state from state , where is the jacobian or transport matrix .the covariance of the prediction is given by where is the process noise in the transition from state to .the full covariance matrix for the pair of states is then given by in the kalman filter track fit we now proceed by adding the information of measurement to obtain a new estimate for the state in procedure that is called _ filtering _ and leads to state vector .the remaining measurements are processed with prediction and filter steps in the same fashion . afterwards a procedure called _ smoothing _ can be applied to recursively propagate the information obtained through measurements back to node . the smoothed state vector at node is labelled by and its covariance by .to derive the expression for the covariance matrix of the smoothed states and we first present the following lemma .suppose we have two observables and with covariance matrix now suppose we have obtained a new estimate of with variance by adding information .we can propagate the new information to with a least squares estimator , which gives this expression also holds if and are vectors .it can be derived by minimizing the following ( where with variance is the additional information for ) with respect to and .substituting for , for and for in eq.[equ : propagation ] we obtain for the correlation between the smoothed states where we have used the definition of the smoother gain matrix for the smoothed state and its covariance we find these are the rauch - tung - striebel smoothing expressions as found in .the gain matrix in eq.[equ : smoothergain ] can be written in different forms , _ e.g. _ expression shows explicitly that if there is no process noise ( ) .therefore , as one expects , without process noise the smoothed states in the kalman filter are just related by the transport equation .once we have the calculated the off - diagonal element , we proceed to the next diagonal .the correlation between states and can be calculated by performing a simultaneous smoothing of states and : in the argument above we substitute a new vector for , rather than just the state .the result for the correlation is this expression can also be derived with a simpler argument : the origin of the correlation between state vectors is the transport .therefore , the correlation occurs only through the correlations and .following the same reasoning the next diagonal becomes which shows that the calculation can be performed recursively . 
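with the stripped symbols restored, the rauch-tung-striebel smoother gain and the resulting recursion for the cross-covariances of the smoothed states are commonly written as

\[
A_k = C_k^{\,k}\, F_{k+1}^{T}\,\bigl(C_{k+1}^{\,k}\bigr)^{-1},
\qquad
C_{k,k+1}^{\,n} = A_k\, C_{k+1}^{\,n},
\qquad
C_{k,l}^{\,n} = A_k A_{k+1}\cdots A_{l-1}\, C_{l}^{\,n}\quad(k<l),
\]

where $C_k^{\,k}$ is the filtered covariance at node $k$, $C_{k+1}^{\,k}$ the predicted covariance at node $k+1$, $F_{k+1}$ the transport jacobian and $C_l^{\,n}$ the smoothed covariance; this should be read as our reconstruction of eq.[equ : smoothergain ] and of the compact form discussed below, not as a verbatim quotation.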
by substituting the smoother gain matrix we can write this in the following compact form the gain matrices are temporarily stored , then if is the dimension of the state vector , the calculation of each off - diagonal element in the full covariance matrix of state vectors requires about multiplications and additions .the total number of operations by far exceeds the numerical complication of the standard kalman filter .however , we have found that for tracks traversing the entire lhcb tracking system , with a total of about 30 measurement coordinates , the computational cost for the global covariance matrix with the procedure above was smaller than that of the kalman filter track fit itself .this is because the lhcb track fit is largely dominated by integration of the inhomogeneous magnetic field and the location of intersections with detector material .now that we have calculated the full covariance matrix of all states , the elements of the covariance matrix of the residuals are simply given by this completes the recipe for using a kalman filter track model in the alignment of tracking detectors .we have argued below eq.[equ : firstdchisqdaminchisq ] that the cancellation that takes place between eq.[equ : firstdchisqda ] and eq.[equ : firstdchisqdaminchisq ] is important when considering the kalman track fit for alignment .this can be explained as follows .if we were to use the kalman filter track model in a global fit , the would contain explicit contributions for the difference in the state vectors at neighbouring nodes , these contributions are equivalent to the terms that constrain scattering angles in the conventional track model for a global track fit .as they represent additional constraints to the , they must also appear in the matrix and the residual vector in eq.[equ : firstdchisqda ] .it is only because of the minimum condition for the track parameters that their contribution in the derivatives to the alignment parameters vanishes .the expressions in eq.[equ : propagation ] can also be used to include vertex or mass constraints in an alignment procedure .first , we propagate the track parameters to the estimated position of the vertex .we label the track parameters at that position with and its covariance by .the correlations between these track parameters and those at the position of each measurement can be computed with the procedure outlined in the previous section . for claritywe now drop the superscript and replace it with a superscript that labels the track in the vertex : the state of track at the vertex is with covariance . 
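as an aside, the recipe for the residual covariance matrix completed above is cheap to implement once the smoother gain matrices are stored; the following c++ sketch (using the eigen linear algebra library) assembles the residual covariance, taken here to be $R_{kl} = \delta_{kl} V_k - H_k C_{k,l}^{\,n} H_l^{T}$ (our reconstruction of the expression referred to as eq.[equ : kalmanfullr ]), from the gains, the smoothed covariances, the projection matrices and the measurement variances. the data layout and the function name are illustrative assumptions and this is not the lhcb implementation.

```cpp
// Hedged sketch: assemble the covariance matrix of the smoothed residuals
// from stored smoother gains A_k, smoothed covariances C_k^n, projection
// matrices H_k (1 x dim) and measurement variances V_k, using the recursion
// C_{k,l}^n = A_k A_{k+1} ... A_{l-1} C_l^n for k <= l.
#include <Eigen/Dense>
#include <vector>

using Eigen::MatrixXd;

MatrixXd residualCovariance(const std::vector<MatrixXd>& A,  // gains, size n-1
                            const std::vector<MatrixXd>& C,  // smoothed covariances, size n
                            const std::vector<MatrixXd>& H,  // projection matrices, size n
                            const std::vector<double>&   V)  // measurement variances, size n
{
  const int n   = static_cast<int>(C.size());
  const int dim = static_cast<int>(C.front().rows());
  MatrixXd R = MatrixXd::Zero(n, n);
  for (int k = 0; k < n; ++k) {
    MatrixXd P = MatrixXd::Identity(dim, dim);        // running product A_k ... A_{l-1}
    for (int l = k; l < n; ++l) {
      const MatrixXd Ckl = P * C[l];                  // cross-covariance C_{k,l}^n
      const double hch   = (H[k] * Ckl * H[l].transpose())(0, 0);
      R(k, l) = (k == l ? V[k] : 0.0) - hch;
      R(l, k) = R(k, l);                              // R is symmetric
      if (l + 1 < n) P = P * A[l];                    // extend the gain product
    }
  }
  return R;
}
```

in this layout each off-diagonal element costs only a handful of small matrix products, in line with the remark in the text that the extra cost remained below that of the track fit itself.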
as a result of the vertex fit ( which we can implement as the billoir - frhwirth - regler algorithm ) we obtain the new ` constrained ' track parameters with covariance .the change in the track parameters can be propagated to the track states at each measurement using eq.[equ : propagation ] , which gives for the state vector at node and for the covariance the constrained residuals for track then become and the covariance matrix in eq.[equ : kalmanfullr ] can be computed using the new track state covariance .the vertex fit also gives us the covariance between any two tracks and in the vertex .this allows to compute the correlation between any two states in any two tracks as follows inserting this into the multi - track equivalent of eq.[equ : kalmanfullr ] gives the full correlation matrix for the residuals on _ all _ tracks .if the number of tracks in the vertex is large , the computation of the global covariance matrix for all states on all tracks is rather cpu time consuming .therefore , in practical applications it makes sense to compute the correlation only for a subset of hits close to the vertex .this completes the ingredients for including vertex constraints in the calculation of the alignment derivatives .eventual mass constraints or other kinematic constraints are included implicitly if they are applied during the vertex fit .the lhcb tracking system consists of a silicon vertex detector ( velo ) and a spectrometer . for the track based alignment of this systema closed - form alignment algorithm has been implemented in the lhcb software framework .this algorithm , which uses the standard lhcb kalman filter track fit , will be described in detail in a future publication . herewe briefly illustrate the effect of correlations between residuals and the applications of vertex constraints , using the alignment of the velo system as an example .the velo system consists of 21 layers of double sided silicon detectors with radial strips on one side and concentric circular strips on the other .each layer consist of two half circular disks called _modules_. the modules are mounted onto two separate support structures , the _ left _ and _ right _ velo halves .the two halves can be moved independently in the direction perpendicular to the beam ( ) axis in order to ensure the safety of the detectors during beam injection .the alignment of the velo system is of crucial importance to the physics performance of the lhcb experiment .for the analysis described here we have simulated deformations in the velo detector in such a way that it bows along the -axis : we introduced a bias in the and position of each module that was approximately proportional to , where was the position of the module relative to the middle of the velo .the reason to choose this particular misalignment is that it corresponds to a correlated movement of detector elements : such deformations sometimes called ` weak modes ' are inherently difficult to correct for with an alignment method that ignores the correlations between residuals in the track fit. tracks from a sample of simulated minimum bias interactions were reconstructed using a ` cheated ' pattern recognition , assigning velo hits to a track based on the monte carlo truth .we required at least 8 hits per track .the tracks were fitted with the standard lhcb track fit , taking scattering corrections into account as process noise .tracks were accepted for alignment if their per degree of freedom was less than 20 . 
in a perfectly aligned detectorthis cut only excludes a tiny fraction of reconstructed tracks , namely those with a kink due to a hadronic interaction .primary vertices were reconstructed using the standard lhcb primary vertex finder . to validate the implementation of the algorithm we have performed two tests .first , we have checked the calculation of the residual covariance matrix by comparing it to a numerical computation .a single track was refitted after changing one measurement coordinate by a numerically small value .the -th row of follows from ( for ) where is the change in residual .( the computation of the diagonal element is part of the standard kalman fit procedure . )this test has shown that the numerical uncertainty in the correlations coefficients of is typically of order , which is good enough for the purpose of detector alignment .second , we have analyzed the eigenvalue spectrum of the second derivative matrix eq.[equ : seconddchisqda ] . without an external reference system the global translations and rotations of a tracking systemare unconstrained in the alignment procedure .such unconstrained degrees of freedom lead to vanishing eigenvalues in the derivative matrix and , if left untreated , result in a poorly converging alignment .( see _ e.g. _ . )unconstrained degrees of freedom can be removed with lagrange constraints or by omitting the corresponding eigenvector from the solution to the linear system in eq.[equ : globalequation ] .however , to test the implementation of the calculations in the global alignment algorithm , the identification of the vanishing eigenvalues is a powerful tool : if the zero eigenvalues corresponding to the global movements are observed , we can be confident that the computation of both the matrix and the alignment derivatives is correct ( or at least consistently wrong ) .figure [ fig : evhalves ] shows the eigenvalues for the alignment of the position and rotation of the two velo halves .( the eigenvalues are plotted versus an arbitrary index that increases with the size of the eigenvalue . )the total number of alignment parameters is 12 . to definethe scale of the eigenvalues the derivative matrix was rescaled following the recipe in : the numerical value of the eigenvalue is roughly equal to the number of hits contributing to the corresponding linear combination of alignment parameters .as can be seen in the figure the eigenvalue distribution splits in two : the six smaller eigenvalues correspond to the global rotation and translation , whereas the six larger eigenvalues correspond to the relative alignment of the two detector halves . 
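for completeness, restoring the stripped symbols of the numerical check under the least squares relation $r = R\,V^{-1}\bigl[\,m - h(x_0)\,\bigr]$, the row-by-row validation of $R$ presumably takes the form

\[
R_{ij} \approx V_{jj}\,\frac{\Delta r_i}{\Delta m_j}\qquad(i\neq j),
\]

where $\Delta r_i$ is the change of residual $i$ after refitting the track with measurement coordinate $j$ shifted by a numerically small $\Delta m_j$; this is our reading of the garbled display and should be checked against the original.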
note that if correlations between residuals are ignored , the linear equations eq.[equ : globalequation ] split in independent parts for the two aligned objects and all eigenvalues are of about the same size .one may wonder why the eigenvalues corresponding to the global movements are not ` numerically ' zero , in contrast with the analysis reported in .the reason for this is a feature of the kalman filter : in the kalman filter the state vector is seeded with a finite variance even before a single measurement is processed .the variance must be large enough to have negligible weight in the variance of the state vector after all measurements are processed , but it must be small enough to make the computation of the filter gain matrix numerically stable .the finite value of the seed variance essentially fixes the track in space .we have observed that the value of the small eigenvalues is indeed sensitive to the variance of the seed . for practical purposesthe bias from the kalman filter seed is not important . to test the alignment procedure for the misalignment scenario presented above we aligned the position of each module in and , corresponding to a total of 84 alignment parameters .we omitted the translation and rotations to simplify the analysis .the eigenvalue distribution , shown in figure [ fig : evmodules ] , reveals 4 unconstrained degrees of freedom .these correspond to the global translation in and and originating from the planar geometry of the detector shearings in the and plane .we constrain these degrees of freedom with lagrange constraints .we report here two figures of merit that we use to judge the convergence of the alignment procedure , namely the number of selected tracks and the average of selected tracks , both as a function of the alignment iteration .the results are shown in figure [ fig : convergence ] for 3 different scenarios : first , we entirely ignore correlations between residuals , which means that off - diagonal elements in the matrix in equation eq.[equ : residualcovariance ] are assumed zero .second , we compute these correlations with the recipe outlined in section [ sec : kalmancorrelation ] . finally , we also include vertex constraints with the expressions given in section [ sec : vertexconstraints ] .as can be seen in the figure the scenario with correlations converges faster than the scenario without .furthermore , in the scenario without correlations less tracks survive the cut even after 5 iterations . 
per track ( left ) as a function of the number of alignment iterations for 3 alignment scenarios , namely ignoring correlations between residuals , not ignoring those correlations and including vertex constraints .the dashed line represents the result for a perfectly aligned detector.,title="fig:",scaledwidth=48.0% ] per track ( left ) as a function of the number of alignment iterations for 3 alignment scenarios , namely ignoring correlations between residuals , not ignoring those correlations and including vertex constraints .the dashed line represents the result for a perfectly aligned detector.,title="fig:",scaledwidth=48.0% ] the difference in convergence behaviour is mostly because there are two kinds of tracks .though most tracks pass only through a single velo half , there is a small fraction that passes through small regions in which detectors from both halves overlap .when correlations between hit residuals are taken into account , tracks that pass through a single half do not carry any weight in determining the relative positions of the two halves , because the contribution to the is invariant to the global position of the detector half .therefore , the relative position of the two halves is fully sensitive to the tracks that pass through both halves . on the contrary , if correlations are ignored , every track fixes the position of any detector element in space . as a resultthe overlap tracks get a much smaller weight in determining the relative position and convergence becomes poor .this problem can be partially overcome by explicitly enhancing the fraction of overlap tracks in the sample , _e.g. _ by down - sampling the tracks that do not pass through the overlap regions .such a strategy is applied in the alignment of the babar vertex detector .an important advantage of the closed - form algorithm is that it is not necessary to remove tracks with a small weight in the alignment as the algorithm inherently weights the information contained in the residuals correctly .in this paper we have presented how the most popular track fitting method , the kalman filter , can be used in a closed - form alignment procedure for tracking detectors .our contribution is summarized in expression eq.[equ : recursivecorrelation ] which shows how the correlations between state vectors can be computed recursively by using the smoother gain matrix . we have also shown how vertex constraints can be included without refitting the tracks .using an implementation of this formalism in the lhcb software framework we have illustrated for a simple misalignment scenario of the lhcb vertex detector the importance of correlations between residuals in the track fit .a more detailed analysis of the performance of the alignment algorithm to the lhcb tracking system will be reported in due course .the author would like to thank g. raven for posing the question concerning correlations between residuals in the kalman filter and for his patient proof reading of this manuscript .the alignment software that was used for the analysis in section [ sec : application ] was developed in close collaboration with j. amoraal , a. hicheur , m. needham and l. nicolas and g. raven , for an overview , see v. blobel , `` software alignment for tracking detectors '' , nucl .instrum . meth .a * 566 * ( 2006 ) 5 .v. blobel and c. kleinwort , phystat02 proceedings , `` a new method for the high - precision alignment of track detectors , '' , arxiv : hep - ex/0208021 .i. belotelov , a. lanyov and g. 
ososkov , `` data - driven alignment of the hera - b outer tracker '' , phys .nucl . lett .* 3 * ( 2006 ) 335 . c. kleinwort ,`` h1 alignment experience '' , proceedings of the first lhc detector alignment workshop , geneva ( 2006 ) p. brckman , a. hicheur and s. j. haywood , `` global approach to the alignment of the atlas silicon tracking detectors '' , atl - indet - pub-2005 - 002 ; p. brckman de renstrom , s. haywood , in : l. lyons , m.k .nel ( eds . ) , phystat05 proceedings , ic press , 2006 .a. bocci and w. hulsbergen , `` trt alignment for sr1 cosmics and beyond '' , atl - com - indet-2007 - 011 .g. flucke , p. schleper , g. steinbruck and m. stoye , `` a study of full scale cms tracker alignment using high momentum muons and cosmics , '' , cern - cms - note-2008 - 008 . s. viret , c. parkes and m. gersabeck , `` alignment procedure of the lhcb vertex detector , '' nucl .instrum .a * 596 * ( 2008 ) 157 .r. kalman , `` a new approach to linear filtering and prediction problems '' , journal of basic engineering , 35 ( 1960 ) .r. fruhwirth , `` application of kalman filtering to track and vertex fitting , '' nucl .instrum . meth .a * 262 * ( 1987 ) 444 .r. fruhwirth , t. todorov , m. winkler 2003 j. phys .g : nucl . part .29 561 ; e. widl and r. fruhwirth , `` a large - scale application of the kalman alignment algorithm to the cms tracker , '' j. phys .* 119 * ( 2008 ) 032038 .p. billoir , r. frhwirth and m. regler , `` track element merging strategy and vertex fitting in complex modular detectors '' , nucl .instrum .a * 241 * , 115 ( 1985 ) a. augusto alves _ et al . _[ lhcb collaboration ] , jinst * 3 * ( 2008 ) s08005 .j. amoraal _ et al ._ , `` alignment of the lhcb tracking system '' , _ in preparation_.
we present an expression for the covariance matrix of the set of state vectors describing a track fitted with a kalman filter. we demonstrate that this expression facilitates the use of a kalman filter track model in a minimum-$\chi^2$ algorithm for the alignment of tracking detectors. we also show that it allows vertex constraints to be incorporated in such a procedure without refitting the tracks.
due to the fast technological progress in the past decade , companies are facing a degree of technological complexity that requires more specialised engineers to push forward in their research and development ( r&d ) projects .for this particular reason , foremost universities around the globe tend to constantly revise the education of science , technology , engineering and mathematics ( stem ) fields , in order to be up - to - date with industry s requirements and adequately preparer their engineering students for their future jobs . however , in brazil , this might not be a simple task .in particular , the curricula of electrical and computer engineering courses from brazilian universities follow restricted requirements established by the brazilian ministry of education , which complicates curricula modifications and makes its progression an arduous task . in order to tackle such problem ,universities pursue for partnerships with industry in order to invest in co - operative education programmes to fulfil the gap in the educational process of their engineering students .in addition , many studies have shown the importance of work experience during college for the professional development of engineering students and its positive impacts on local economy . in this scenario , as an alternative to a close convergence between industry and academia in addition to promoting innovative r&d projects to benefit the engineering students , federal university of amazonas ( ufam ) founded the electronic and information research centre ( ceteli ) .it has years of experience in collaborating with several companies located at the industrial pole of manaus ( pim ) .indeed , its mission is to promote research , technological progress , and human resource training in amaznia with the purpose of achieving excellence in the fields of electronic and information technology , industrial automation and biomedical engineering .thus , pim s companies have funded several r&d projects in such fields .some companies that collaborated with ceteli include trpico systems and telecommunications , for developing telecommunication systems ; nokia institute of technology ( indt ) , for developing mobile applications and software verifiers ; and currently samsung , for training students and developing applications to mobile devices , digital television ( tv ) , and industrial automation .the projects developed at ceteli are typically coordinated by ( permanent ) professors from ufam , with the mission to achieve goals defined by each specific cooperation .moreover , they aim to train human resources in undergraduate and graduate levels in different engineering domains .in particular , ceteli has already prepared students who were able to start working in the industry after concluding their courses as well as researchers in technological innovation who were able to contribute in the development of innovative products for the market .ceteli has also a number of products available to end users and customers , _e.g. 
_ , hardware and software products developed by engineering students and professionals for trpico systems , indt , and samsung .the development of these projects contribute to the high quality of the academic education at ufam and it also builds a reliable partnership with companies .this paper addresses three major contributions : * first , we report the effort to establish an industrial - academic cooperation between ceteli , ufam and samsung , in order to meet the demand for trained human resources according to the ( current ) market interests and company needs ; * second , we present the proposed co - operative education programme , called as complementary training programme ( ctp ) and its educational structure from the theoretical and practical perspective ; * finally , we highlight the ctp s accomplishments and their impacts in the educational process of the undergraduate and graduate engineering students . as aforementioned ,such an industrial - academic collaboration is focused on research areas related to mobile devices , digital tv , and industrial automation , due to industry demand .for instance , mobile device technologies ( _ e.g. _ , smartphones and tablets ) grow every year worldwide by improving key - features such as processing power and storage capacity , which results in a better support to a wide range of applications ( _ e.g. _ , games , social interaction and services that were previously restricted to computers only ) . additionally , digital tv has shaped a new application market for user s interactivity , especially in brazil , where such technology has been recently implemented , which brings a new range of research and application development challenges . also , industrial automation is an emerging area , due to concepts such as the interconnected hybrid internet of things ( iot ) , allowing real - time communication , which thus increases the need for new and innovative solutions .these technological innovations have created the need for training professionals to develop such powerful applications , which demands development centres to train people in these research field .the partnership agreement between ceteli , ufam , and samsung was established to allow the investigation of new research areas and to provide an extensive training programme , for graduate and undergraduate engineering students , based on topics related to current industry demand .however , we have faced three major challenges to accomplish this collaboration project : 1 . the identification of the primary goals of each partner and the establishment of a common ground in the partnership , which is of paramount importance for the success of such collaboration ; 2 .overcome the bureaucracy in the federal institution of higher education ( ifes ) , in order to establish the partnership with a private company .in fact , the respective project must be approved by the original department , the innovation technology department , directors board , the administrative dean , and then the legal department . as one can see , it is a long journey , with no possibility of acceleration among those departments ; 3 . establish contracts on intellectual property ( ip ) ownership and confidentiality , which are of paramount importance for the company representatives ; note that this process must be as clear as possible for all involved parties . 
in order to overcome issue _( i ) _ , the project proposal has taken into account the university s interests to build distinct knowledge , as well as samsung s needs regarding the availability of human resources in its r&d department , in order to design , develop , and test ( innovative ) products . as a result ,the chosen r&d fields included software development activities related to mobile devices , digital tv , and industrial automation .regarding issue _ ( ii ) _ , the excessive bureaucracy of the project s approval process was attenuated by a continuous follow - up of its legal procedure at ufam . finally , with respect to issue _ ( iii ) _ , the official project proposal has established a three - years cooperation plan ; in particular , the proposal covers the courses contents , practical activities , scholarships , teaching instruction , project coordination , investments in infrastructure and equipment , and all legal contracts regarding ip ownership and confidentiality .in addition , ufam accepted the minor percentual portion of the joint ownership of ip , due to the fact it is the major beneficiary of this project , once it receives the majority of incomes and infrastructure .inspired by co - operative systems of education , the ultimate goal of this industrial - academic collaboration is to implement a continuing - education model named as complementary training programme ( ctp ) .similar to co - operative education programmes , ctp aims to combine a classroom - based learning approach with a more realistic work - based practical experience , in order to train undergraduate students in three major research areas : * mobile devices applications development . * digital tv applications development . * emerging technologies for industrial automation systems .it is worth noting that differently from the lecture / laboratory approach , which is usually applied to the engineering courses , this training programme delivers a work experience , where the undergraduate students work on a project from scratch until the accomplishment of a reliable product in each respective research area .in addition , another differential of the ctp is the inclusion of graduate engineering students from the postgraduate programme in electrical engineering ( ppgee ) in its process . through the mentorship of the undergraduate students , the graduate ones gain more teaching and leading experience , in addition to more financial incentive for their research .indeed , samsung provides scholarships and financial incentive for scientific publications ( _ i.e. _ , all costs for conference attendance and language editing ) for ppgee students , who work on research areas related to the project s fields of interest , and it also provides financial support for the ones who work as mentors during the projects .furthermore , samsung financially supports professors associated with ceteli , who work as project leaders , in order to supervise graduate and undergraduate students during the development of their work .ctp follows a specific workflow , which is modularised into four sub - modules identified as planning , learning , developing , and endorsing , as one can see in fig .[ fir : ctp ] .importantly , ctp s workflow is the same despite the target research area .* planning . *first of all , three professors associated with ceteli are elected to be project leaders in each respective research area. 
then , the project leaders select on average two graduate students from ppgge , based on their curriculum and theme of expertise , to be tutors of undergraduate students during the project .both professors and graduate students design extracurricular courses , which must cover all key - subjects to provide the necessary background that undergraduate students need to carry out a r&d project into the aforementioned research areas . ** at this stage , each project leader performs an admission process to select undergraduate students , based on their grade point average ( gpa ) and availability .then , during months the selected undergraduate students receive the aforementioned extracurricular courses , which are taught by the tutors ( graduate students ) and specialised professionals from industry .in addition , all undergraduate students are evaluated in these courses on a 10-scale grading system through courseworks and exams ; indeed , they must achieve at least grade and of attendance by the end of each course , in order to continue in the protect .importantly , each project offered from to extracurricular courses , lasting about hours each , which implies to hours of extracurricular education for undergraduate students . ** by the end of the learning activities , all professionals and students perform a brainstorming session , in order to suggest r&d projects ideas .most importantly , all ideias are proposed by undergraduate students and are evaluated by the project leaders , tutors , and industry professionals according to three main aspects : originality , innovation , and feasibility . on average , project ideas are selected , so , the undergraduate students are split into groups and each group works to implement one of them . during the practical phase , graduate students and industry professionals continuously supervise the undergraduate students practical activities .such process is also monitored by project leaders by means of periodic meetings , which help ensure the learning progress efficiency and the quality of student s work .this stage lasts approximately months . * endorsing .* once the proposed r&d projects are concluded ( _ i.e. _ , each group presents a stable and tested version of the proposed product ) , ceteli organises a workshop for the academic community , named as ceteli & samsung workshop of innovative technology , where the undergraduate students present their outcomes and experiences during the project . throughout the workshop ,a committee , which is composed by project leaders , professors associated with ceteli , tutors , specialised professionals , and representatives from samsung , evaluates each r&d project according to its outcomes and level of innovation , in order to award the best r&d project in each research area .indeed , the winner students are awarded with samsung devices , such as smartphones and smart tvs .it is worth noting that such healthy competition is important to push undergraduate students to attempt more creative and challenging ideias .moreover , as a way to inspire the academic community , renowned researchers are invited to delivery a keynote speech about emerging technologies from each research area .the following subsections describe the content related to each area , in addition to the goals targeted to ensure the training quality for the undergraduate students .projects related to this area aim to develop applications for mobile devices ( _ i.e. 
_ , smartphones and tablets ) , which may vary from public utilities to games , entertainments , and content searchers .indeed , the primary goal here is to provide the necessary background to develop high - quality mobile applications for the android platform . during this partnership ,it was offered training in this area ( each with students on average ) .the _ learning _ stage of this area covered the following subjects : java programming languages for mobile devices , software development methodologies for mobile devices , operating systems for mobile devices , and software verification and testing .this research area aims to develop digital tv applications to promote more interaction between users and contents , which may vary from interactive programs such as games , news report , and entertainments to programs with a specific collection of content , such as tv listings and utilities to support daily user activities . during this partnership ,it was offered trainings in this area ( each with students on average ) .the _ learning _ stage of this area covered the following subjects : fundamentals on digital tv , java for digital tv , software development using middleware for digital tv , c / c++ programming , programming for embedded systems , and embedded linux for digital tv .this research area aims to develop automatic solutions to improve industrial production environments , in order to make its processes faster , safer , and more accurate . during this partnership, it was offered training ( each with students on average ) .the _ learning _ stage of this area covered the following subjects : introduction to system automation , introduction to mobile robotics , introduction to industrial robotics , system development for plants automation , and software development for real - time systems .after years of partnership between ceteli , ufam and samsung , a total of mobile applications , digital tv applications , and industrial automation applications were developed .additionally , master degree dissertations were defended in areas related to the project s fields of interest , as well as , scientific publications in top conferences and journals . in the following sections , the outcomes of each area area described in details . in mobile devices area , undergraduate students were trained .in addition , the practical activities in this field resulted in the production of mobile applications to smartphones and tablets , which were designed to support users in different daily base activities .as example of such applications , mobile applications are described below , and it is worth noting that all are patented to ensure the students royalties .* pitstop * application , shown in fig .[ fig : app_pitstop ] , has two primary goals : to support drivers to organise all the information about their vehicles and to search for the cheapest gas station near by with the best service evaluation .the user can upload the information about his / her vehicle ( _ e.g. 
_ , model , fuel source , data of last inspection ) and , based on those information , the application can make suggestion for the driver such as if it is about time to make another inspection .in addition , the application shows the closest gas station that offers the cheapest price for the kind of fuel source that the vehicle needs .users can search on a map for the others gas stations as well , check which services they provide , compare their prices , make evaluations about their services , among other features .* kitchen survival guide * application , shown in fig .[ fig : app_kitchen ] , has the primary goal to support people who has no experience whatsoever in cooking .it differs from many others related available apps , because it provides , through a search system , recipes according to the available ingredients and household appliances that the user has at the moment .in addition , another outstanding feature of the application is its system recommendation , which automatically traces a profile for the user , detecting his / her preferences while the user uses the app , and then it recommends receipts that fit into the respective profile . * connect u * application , shown in fig .[ fig : app_connect ] , was developed to support students , who study at the same class and intend to share contents in a collaborative environment .it provides an area to share information , such as dates , files , exams , course - works , and all relevant material for the class progress .a place to deliberate specific topics is also available , which organises the discussions by themes .in addition , the application can be synchronised with facebook , so the students can also publish some contents in the social media . in digital tv area , undergraduate students were trained . as a result of their practical activities , applications for digital tv were developed .some examples of such applications are described below , and it is worth noting that all are patented to ensure the students royalties . 
motivated by the world cup held in brazil in ,undergraduate students have developed digital tv applications for the soccer fans .the * copa dtv * application provided all information about the match schedules and the participants during the respective world cup .in contrast , another developed application named as * gosoccer * , shown in fig .[ fig : tv_digital ] , provides all information about the matches from brazilian championships in general .such information are released during the match , so the user can track the data in real - time .inspired by the touristic potential of manaus city , undergraduate students also developed digital tv applications about what could be explored in the city .for instance , * espia s * application , shown in fig .[ fig : tv_digital1 ] , presents an organised set of information about gastronomic and entertainment places in manaus .it also releases information about upcoming events in the city , such as date , local , description , and so on , which are updated on a weekly or monthly basis .another digital tv application presents information about touristic places in manaus , providing their location and a brief explanation about each place .in the industrial automation area , undergraduate students were trained .the students were split into classes and each one was responsible for one of the following development areas : programmable logic controller ( plc ) programming , mobile robotics programming , and industrial robotics programming .the main goal was to apply the knowledge of each area to a global system , which comprises of a plant with two cars to transport items via five different stations .each item enters the production line manually and the final stations are an industrial robot and a palletising station to deposit the outcomes . as an example , fig .[ fig : automation ] shows the robot arm melfa rv-2sdb , which was programmed to manage the production line outcomes .currently , this project is in its final stage of development . 
during this partnership , an overall of articleswere published in international and national conferences , in addition to journal papers ; we also have other journal submissions under review .these scientific publications have contributed to specific fields related to the project s areas of interesting , such as digital tv , formal verification , and education of engineering students .as one can see in fig .[ fig : publications ] , through this partnership it was possible to improve in ceteli s scientific production , which also implied in an increase of in conference participation and more journal publications .furthermore , as a part of the program goals , four workshops were promoted to present the r&d projects conducted by the undergraduate students , their outcomes , and to allow an open discussion with the educational community about the research topics developed by each project .thereabout people among graduate and undergraduate students , professors , researchers , industry professionals and business representatives in general participated on the `` , , and ceteli & samsung workshop of innovative technology '' and the `` symposium on quantitative methods of biomedical digital images and biosensor '' .one of the primary goals from the partnership between ceteli , ufam and samsung was the installation of a physical infrastructure for r&d projects .in fact , a construction project to create an extension of ceteli was executed , which resulted in a new building at ufam , named as ceteli ii .importantly , the new facilities are also used in the training process of graduate students in innovative technologies areas , extracurricular courses for undergraduate students , and all remaining project activities . note that such facilities also provide the necessary equipment to conduct such activities .ceteli ii comprises of area and is fully equipped with classrooms for the ppgee , staff rooms , meeting rooms , stockrooms , and well - equipped laboratories with brand new technologies , where the r&d projects and the extracurricular courses are conducted .in fact , the following laboratories were assembled in ceteli ii : industrial automation , mobile device , digital tv , and general research laboratories .ceteli ii has three floors as can be seen in fig .[ fig : blueprint ] , which contains the blueprints of its construction project .as one can see , fig .[ fig : blueprint ] shows the blueprint of the ground floor , which contains the mobile device and industrial automation laboratories , respectively .in particular , the mobile device laboratory was built to provide engineering students a working environment to develop mobile applications , seeking functional and useable aspects of such systems .its facilities , as shown in fig .[ fig : lab_dispositivos ] , consist of mobile devices , standard - desktop computers , imacs , and macbooks ( mainly used in graphic design for mobile applications ) . additionally , the students have access to a wide range of different mobile - platforms ( _ e.g. 
_ , smartphones and tablets ) , which were used during the development and test of their mobile applications .industrial automation laboratory was built to provide a working environment to research and develop automated solutions for industrial processes .as shown in fig .[ fig : lab_automacao ] , its facilities consist of workstations , laptops and standard - desktop computers with real - time systems applications , in addition to robotic devices , automated platforms , and surface mount technology ( smt ) component placement systems . fig .[ fig : blueprint ] shows the blueprint of ceteli ii s first floor , which comprises the digital tv laboratory , classrooms , and research laboratories .the digital tv laboratory was built to provide students a working environment to the development of digital tv applications , educational training and practical project activities .in addition , its facilities , as shown in fig .[ fig : lab_tv ] , comprise workstations with standard - desktop computers , plasma tvs , set - top boxes , laptop computers , in addition to digital video recorders , video transmitters , and embedded software platforms .the classrooms were built with multimedia equipment to provide interactive lectures for students .as aforementioned , these classrooms are used for educational training ( _ e.g. _ , extracurricular courses ) , as well as , lectures for the master of science ( m.sc . ) programme in electrical engineering from ppgee . in addition , the remaining research laboratories were built for both undergraduate and graduate engineering students to have the opportunity to interact with each other and to tackle research challenges related to each project s field of interest . finally , fig .[ fig : blueprint ] shows the blueprint of ceteli ii s second floor , which comprises staff rooms , stockrooms , a meeting room , and a common staff room . such facilities were built to provide professors a place to meet all students during the educational training and practical phase of each project .furthermore , undergraduate students can use such offices for specific studies , as well as , graduate students to prepare their training material for the extracurricular courses ( cf .[ sec : training ] ) .each room contains standard - desktop computers and the necessary furniture to accommodate both professors and students .it is worth noting that ceteli ii has become ufam s patrimony and all its facilities are exceptionally available for all undergraduate and graduate students from the faculty of technology .ufam , ceteli and samsung accomplished an industrial - academic collaboration with an outstanding investment to the professional training of human resources in innovative technological areas . here , graduate and undergraduate engineering students are integrated into a complementary training programme , which is inspired by co - operative educational programmes . on one hand , undergraduate students participated in training courses and had the opportunity to apply their knowledge on real r&d projects in a professional level .it is worth noting that such approach provides to undergraduate students a learning and working experience closer to industry reality . 
on the other hand ,graduate students were able to get involved in the aforementioned r&d projects as well , which provided them a teaching / leading experience through the activities with the undergraduates .in addition , they received financial support to produce scientific papers , and participate in national and international conferences , which helped to increase ceteli s scientific production in .such investments also allowed a wide dissemination of the r&d projects held by ceteli and ufam , which endorses the work quality produced in the university and its professionals .furthermore , the infrastructure named as ceteli ii represents one of the main contributions of this partnership , once its facilities and equipments will continue to be used by the undergraduate students and ppgee s graduate students .most importantly , the laboratories will also continue receiving investments to keep on going the research and development projects of the respective fields of interest .summing up on numbers , the partnership qualified undergraduate students in the ctp training in mobile devices applications development , digital tv applications development , and emerging technologies for industrial automation systems .regarding ppgee , graduate students obtained a master s degree and others still have their graduate programme in progress .additionally , conference and journal papers were published and others journal papers are currently under revision . during years of partnership , workshops were promoted in order to present the outcomes from each project and to promote discussions about emerging technologies in the aforementioned research areas . as a result , mobile and digital tv applications , in addition to industrial automation ones were developed .these results are an example of the ceteli s potential to establish partnerships with companies , in order to improve the educational and professional experiences of its students . in particular , this partnership presented outstanding results , when compared to other investments in research and development of new technologies implemented at ufam . from now on , with ceteli ii facilities dedicated to the development of emerging technologies , research , and qualification of students ,ceteli is spotted as highlight technology centre in the industrial pole of manaus ( brazil ) .part of the results presented in this paper were obtained with the project for research and human resources qualification , for under- and post - graduate levels , in the areas of industrial automation , mobile devices software , and digital tv , sponsored by samsung eletrnica da amaznia ltda , under the terms of brazilian federal law number / .w. h. el , maraghy . : `` _ future trends in engineering education and research _ '' ; advances in sustainable manufacturing : proceedings of the 8th global conference on sustainable manufacturing , springer berlin heidelberg , 2011 , 1116 .van der hoek , andr and kay , david g. and richardson , debra j. : `` _ informatics : a novel , contextualized approach to software engineering education _ '' ; software engineering education in the modern age : software education and training sessions at the international conference on software engineering , springer berlin heidelberg , 2006 , 147165 .vicente ferreira de lucena , jose pinheiro de queiroz neto , joao edgar chaves filho , waldir sabino da silva , and lucas carvalho cordeiro . 
:`` _ gift young engineers : an extra - curricular initiative for updating computer and electrical engineering courses _ '' ; in proceedings of the 2011 frontiers in education conference ( fie 11 ) .ieee computer society , washington , dc , usa , 2011 , s1g-1 - 1-s1g-6 .c. f. f. costa filho , o. b. maia , m. g. f. costa , r. e. v. rosa , v. l. lucena jnior , a. m. gil , p. r. barros , o. s. e. silva.:``__upgrading the training of undergraduate students by addressing market demands _ _ '' ; dallas , usa : proceedings of iasted international conference technology for education , 2011 . m. powell.:``__effective work experience : an exploratory study of strategies and lessons from the united kingdom s engineering education sector __ '' ; journal of vocational education & training , volume 53 , 2001 , 421441 .m. k. schuurman , r. n. pangborn , r. d. mcclintic.:``__the influence of workplace experience during college on early post graduation careers of undergraduate engineering students _ _ '' ; wepan / namepa third joint national conference proceedings : leveraging our best practices : hitting the parity jackpot , 2005 . c. f. f. costa filho , m. g. f. costa , v. de lucena jr . , o. s. melo , o. s. e. silva , and o. b. maia .: `` _ _ relatos de casos e experincias na educao em engenharia _ _ '' ; in : vanderli fava de oliveira ; carlos almir monteiro de holanda ; ricardo fialho colares .. engenharia em movimento .braslia : abenge , 2011 , v. 1 , 64100 . c. f. f. costa filho , m. g. f. costa , v. f. lucena , o. s. silva , o. maia.:``__programa de formao complementar para alunos de graduao em engenharia eltrica e engenharia da computao _ _ '' ; fortaleza , brazil : xxxviii congresso brasileiro de educao em engenharia , 2010 .l. c. cordeiro , c. mar , e. valentin , f. cruz , d. patrick , r. s. barreto , v. lucena.:``__an agile development methodology applied to embedded control software under stringent hardware constraints _ _ '' ; new york , usa : acm sigsoft software engineering notes , 2008 .l. c. cordeiro , r. s. barreto , m. n. oliveira jr.:``__towards a semiformal development methodology for embedded systems _ _ '' ; funchal , portugal : 3rd international conference on evaluation of novel approaches to software engineering , 2008 .wilson prata and juan oliveira .: `` _ _ preferences and concerns regarding mobile digital tv in brazil _ _ '' ; 6th international conference on applied human factors and ergonomics ( ahfe 2015 ) , procedia manufacturing , volume 3 , 2015 , pages 5319 - 5325 . j. t. pronk , s. y. lee , j. lievense , j. pierce , b. palsson , m. uhlen , and j. nielsen .: `` _ _ how to set up collaborations between academia and industrial biotech companies _ _ '' ; nature biotechnology , nature publishing group , a division of macmillan publishers limited , volume 33 , ( 2015 ) , 237240 .kettil cedercreutz , cheryl cates , anton harfmann , marianne lewis , richard miller , michael zaretsky , alexander christoforidis , vasso apostolides , anita todd , zach osborne , louis von eye , t. michael baseheart , ann keeling , darnice langford , catherine maltbie , omas newbold , jennifer wiswell .: `` _ _ leveraging cooperative education to guide curricular innovation , the development of a corporate feedback loop for curricular improvement _ _ '' ; cheryl cates and kettil cedercreutz , university of cincinnati , ohio 45221 , 2008 .i. v. bessa , h. i. ismail , l. c. cordeiro , j. e. 
chaves filho .: `` _ _ verification of fixed - point digital controllers using direct and delta forms realizations _ _ '' ; in design automation for embedded systems , v. 20 , n. 2 , pp . 95 - 126 , 2016 .f. de s. farias , w. s. da silva , e. b. de lima filho , w. c. melo .: `` _ _ automated content detection on tvs and computer monitors _ _ '' . global conference on consumer electronics ( gcce2015 ) , ieee , 2015 .f. a. p. januario , l. c. cordeiro , e. b. de lima filho , v. f. lucena jr .: `` _ _ bmclua : verification of lua programs in digital tv interactive applications _ _ '' ; in global conference on consumer electronics ( gcce2014 ) , ieee , 2014 , 707708 .h. i. ismail , i. v. bessa , l. c. cordeiro , j. e. chaves filho , e. b. de lima filho .: `` _ _ dsverifier : a bounded model checking tool for digital systems _ _ '' ; international spin symposium on model checking of software ( spin2015 ) , 2015 .a. b. trindade , and l. c. cordeiro:``__applying smt - based verification to hardware / software partitioning in embedded systems _ _ '' . in design automation for embedded systems , v. 20 , n. 1 , pp. 1 - 19 , 2016 .
we describe the results of an industrial - academic collaboration among the graduate program in electrical engineering ( ppgee ) , the electronics and information research centre ( ceteli ) , and samsung eletrnica da amaznia ltda . ( samsung ) , which aims at training human resources for samsung s research and development ( r&d ) areas . inspired by co - operative education systems , this collaboration offers an academic experience by means of a complementary training programme ( ctp ) , in order to train undergraduates and graduate students in electrical and computer engineering , with especial emphasis on digital television ( tv ) , industrial automation , and mobile devices technologies . in particular , this cooperation has provided scholarships for students and financial support for professors and coordinators in addition to the construction of a new building with new laboratories , classrooms , and staff rooms , to assist all research and development activities . additionally , the cooperation outcomes led to applications developed for samsung s mobile devices , digital tv , and production processes , an increase of in ceteli s scientific production ( _ i.e. _ , conference and journal papers ) as well as professional training for undergraduates and graduate students . + co - operative education ; complementary training programme ; engineering students ; extra - curricular programmes ; university and industry collaboration .
the exponentiation of a real matrix allows solving initial value problems ( ivps ) for linear ordinary differential equations ( odes ) : given , the solution of the ivp defined by and is , where for any linear ode being met in many contexts , the numerical computation of the matrix exponential has been intensively studied ( see , , , and references therein ) .while an approximate computation of leads to an approximate solution for the underliying ivp , interval analysis ( see section [ s : ia ] ) offers a more rigorous framework : in most practical situations the parameters that define the linear ode are known with some uncertainty . in this situation, one usually ends with an interval of matrices =[{\underline{a}},{\overline{a}}]:=\{a\in{\mathbb{r}}^{n\times n}:{\underline{a}}\leq a\leq{\overline{a}}\} ] : ):=\{\exp(a):a\in[a]\}.\ ] ] the most obvious way of obtaining an interval enclosure of is to evaluate the truncated taylor series using interval arithmetic and to bound the remainder ( cf .subsection [ ss : taylor - series ] for details ) .however , the next example shows that the truncated taylor series is not well adapted to interval evaluation , even if no truncation of the series is performed .[ ex : correlation - loss ] consider the interval of matrices ] , it can be proved that )\leq{\overline{x}} ] is the optimal enclosure of ) ] with = 1 & -1.20912 + 0 & -6.25568 = 1 & 1.95819 + 0 & 6.4408 .this is actually an enclosure of , but a very pessimistic one. as shown by the previous example , even with high enough order for the expansion so the influence of the remainder is insignificant , the interval evaluation of the taylor series computes very crude bounds on the exponential of an interval matrix .the reason of this bad behavior of the taylor series interval evaluation is the dependency loss between the different occurrences of variable that occurs during the interval evaluation of an expression ( cf .section [ ss : dependency ] ) . 
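to make the dependency loss concrete, the following python sketch (not part of the original paper) evaluates the truncated taylor series of an interval matrix with a naive interval arithmetic and compares it with point matrix exponentials sampled inside the interval matrix. the 2x2 interval matrix used below is a hypothetical stand-in for the matrix of example [ex:correlation-loss]; outward rounding and the truncation remainder are omitted, so the code only illustrates the overestimation caused by the repeated occurrences of [a] in the series.

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

# naive interval arithmetic on (lo, hi) pairs; outward rounding is deliberately
# ignored, so this only illustrates the dependency problem, it is not rigorous.
def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    p = [a * b for a in x for b in y]
    return (min(p), max(p))

def m_add(X, Y):
    return [[iadd(x, y) for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def m_scale(c, X):  # c > 0 here (c = 1/k)
    return [[(c * lo, c * hi) for (lo, hi) in row] for row in X]

def m_mul(X, Y):
    n = len(X)
    Z = [[(0.0, 0.0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = (0.0, 0.0)
            for k in range(n):
                acc = iadd(acc, imul(X[i][k], Y[k][j]))
            Z[i][j] = acc
    return Z

def interval_taylor(IA, K):
    n = len(IA)
    I = [[(1.0, 1.0) if i == j else (0.0, 0.0) for j in range(n)] for i in range(n)]
    S, P = I, I
    for k in range(1, K + 1):
        P = m_scale(1.0 / k, m_mul(P, IA))   # P encloses [A]^k / k!
        S = m_add(S, P)
    return S  # truncation remainder omitted in this illustration

# hypothetical upper-triangular interval matrix [A] (not the one of the paper)
IA = [[(0.0, 0.0), (-3.0, 3.0)],
      [(0.0, 0.0), (-1.0, 1.0)]]

enc = interval_taylor(IA, 20)
print([[(round(lo, 3), round(hi, 3)) for lo, hi in row] for row in enc])

# compare with the exponentials of a few matrices picked inside [A]
for b, d in product([-3.0, 0.0, 3.0], [-1.0, 0.0, 1.0]):
    print(b, d, np.round(expm(np.array([[0.0, b], [0.0, d]])), 3))
```

note that in this example the (2,2) entry of exp(a) equals exp(a_22) > 0 for every matrix a inside [a], while the interval evaluation of the series returns a negative lower bound of about 2 - e for that entry: exactly the kind of crude enclosure discussed above.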
in general , one can not expect to compute the optimal enclosure of : the np - hardness of this problem is proved in section [ s : nph ] .two well known techniques can help decreasing the pessimism of the interval evaluation : first , centered forms can give rise to sharper enclosures than the natural interval evaluation for small enough interval inputs .such a centered form for the matrix exponential was proposed in .however , this centered evaluation dedicated to the interval matrix exponentiation is quite complex and very difficult to follow or implement .furthermore , there is an error in the proof of proposition 10 m of )=\cup_{x\in[x]}f(x) ] .when no confusion is possible , lower an upper bounds of an interval ] .furthermore , a real number will be identified with the degenerated interval ] , a matrix of intervals is obtained by considering :=\{a\in{\mathbb{r}}^{n\times m}:\forall i\in\{1,\ldots n\},\forall j\in\{1,\ldots m\},a_{ij}\in[a_{ij}]\}.\ ] ] these two definitions are obviously equivalent following the notational convention , and =[{\underline{a}}_{ij},{\overline{a}}_{ij}] ] is |:=\max\{|{\underline{x}}|,|{\overline{x}}|\} ] .the infinite norm will be considered in the rest of the paper .the norm of an interval matrix is the maximum of the norms of the real matrices included in this interval matrix .it is easily computed as ||=|| \,|[a]|\,|| ] implies ||\leq||[b]|| ] .the interval hull is defined similarly for sets of vectors and sets of matrices .operations are extended to intervals in the following way : \circ[{\underline{y}},{\overline{y}}]:=\{x\circ y : x\in[{\underline{x}},{\overline{x}}],y\in[{\underline{y}},{\overline{y}}]\}.\ ] ] the division is defined for intervals ] are interpreted using .the ia lacks some important properties verified by its real counterpart : it is not a field anymore as interval addition and interval multiplication has no inverse in general , while distributivity is not valid anymore ( instead a subdistributivity law holds in the form ([y]+[z])\subseteq [ x]\,[y]+[x]\,[z] ] and \subseteq[y'] ] .[ [ rounded - computations ] ] rounded computations + + + + + + + + + + + + + + + + + + + + as real numbers are approximately represented by floating point numbers , the ia can not match the definition exactly . in order to preserve the inclusion property, the ia has to be implemented using an outward rounding .for example , /[2,2]=[0.5,1.5] ] where ( respectively ) is a floating point number smaller than ( respectively bigger than ) .of course , a good implementation will return the greatest floating point number smaller than and the smallest floating point number greater than . among other implementations of ia, we can cite the c / c++ libraries profil / bias and gaol , the matlab toolbox intlab and mathematica .the natural usage of the ia is to evaluate an expression for interval arguments .the fundamental theorem of interval analysis ( cf . 
) allows explaining the interpretation of this interval evaluation .its proof is classical but is reproduced here .let and be either or or and \in\mathbb{ie} ] and an interval function :\mathbb{i}[x]\longrightarrow \mathbb{if} ] is the set of all intervals included in ] .suppose furthermore that both * for all ] * ] .then , ([x])\supseteq \{f(x):x\in[x]\} ] we have (x) ] , implies (x)\subseteq [ f]([x]) ] .furthermore , it is inclusion - increasing as it is compound of inclusion - increasing operations .therefore , the fundamental theorem of ia proves that +[x]\,[y]\supseteq \{x+xy : x\in[x],y\in[y]\} ] \ , [ b]\bigr)_{ij } \ = \\sum_k \ [ a_{ik}]\,[b_{kj}]\ ] ] gives rise to the inclusion \ , [ b]\supseteq \{ab : a\in[a],b\in[b]\} ] actually holds .note that \ , [ b]\neq \{ab : a\in[a],b\in[b]\} ] actually contains matrices that are not the product of matrices from ] , but \ , [ b] ] .however , the interval evaluation of expression that contains several occurrences of some variable is not optimal anymore in general . in this case, some overestimation generally occurs which can dramatically decrease the usefulness of interval evaluation .when an expression contains several occurrences of some variables its interval evaluation generally gives rise to a pessimistic enclosure of the range .for example , the evaluation of for the arguments =[0,1] ] gives rise to the enclosure ] while the evaluation of for the same interval arguments gives rise to the better enclosure ] , the interval evaluation of \,[a] ] , is not optimal in general since several occurrences of some entries of ] .an algorithm for the computation of \} ] was proposed in .however , it was proved that no such polynomial algorithm exists for the computation of \} ] is np - hard ( cf . ) .the situation is even worth than this : even computing an enclosure of \} ] , and consider an interval enclosure ] ) .the interval enclosure ] is np - hard .computing -accurate interval enclosures of the range of a multivariate polynomial is np - hard ( cf . and theorem 3.1 in ) .even if one restricts its attention to bilinear functions , the computation of -accurate enclosures of their range remains np - hard ( cf .theorem 5.5 in ) . note that if one fixes the dimension of the problems , then the computation of these -accurate enclosures is not np - hard anymore , hence showing that the np - hardness is linked to the growth of the problem dimension .it is not a surprise that computing an -accurate enclosure of the interval matrix exponential is np - hard , but this result remains to be proved . for every , computing an -accurate enclosure of ) ] .define by : =( c|c|c|c 0 & ^t & 0 & 0 + 0 & 0 & b & 0 + 0 & 0 & 0 & + 0 & 0 & 0 & 0 ) : =( c|c|c|c 0 & ^t & 0 & 0 + 0 & 0 & b & 0 + 0 & 0 & 0 & + 0 & 0 & 0 & 0 ) , which are obviously computed in polynomial time from , ] .we now prove that an -accurate enclosure of the exponentiation of ] and ] , that is equivalently ] .one can check easily that is nilpotent : a^2= ( c|c|c|c 0 & 0 & * x*^t b & 0 + 0 & 0 & 0 & b*y * + 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 ) a^2= ( c|c|c|c 0 & 0 & 0 & * x*^tb*y * + 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 ) a^3=0 . thus . 
as a consequence , \{((a))_1,2n+2:a}=\{*x*^t b * y*:*x*,*y * } and the entry of an -accurate enclosure of ) ] and ] }([a],k ) : = & i+[a]+\frac{1}{2}[a]^2+\ldots+\frac{1}{k!}[a]^k \\ { [ \mathcal{t}]}([a],k ) : = & [ \tilde{\mathcal{t}}]([a],k)+[\mathcal{r}]([a],k ) , \end{array}\ ] ] where the interval remainder ([a],k) ] is an enclosure of \} ] .[ lem : r ] for a fixed positive integer , the interval matrix operator (\,.\,,k) ] .let ,[b]\in{\mathbb{ir}}^{n\times n} ] and \subseteq[b] ] which implies ||\leq k+2 ] . finally , as ] .suppose that (a , k) ] has to hold . theorem [ thm : taylor ] below states that }([a],k) ] .it was stated in but proved with different arguments in , .note that the usage of the fundamental theorem of interval analysis allows us to provide a proof much simpler than the one proposed in , .[ thm : taylor ] let \in{\mathbb{ir}}^{n\times n} ]. then ) \subseteq [ \mathcal{t}]([a],k) ] is inclusion - increasing , and therefore so is (\;.\;,k) ] .therefore , one can apply the fundamental theorem of interval analysis to conclude the proof .[ ex : correlation - loss - taylor ] consider the interval of matrices ] : } & { [ -1.2092 , 1.9582 ] } \\ { [ -9\times 10^{-7 } , 9\times 10^{-7 } ] } & { [ -6.2557 , 6.4409 ] } \end{pmatrix}.\ ] ] higher order for the expansion do not improve the entries and anymore .the horner evaluation of a real polynomial improves both the computation cost and the stability ( see e.g. ) .when an interval evaluation is computed , the horner evaluation furthermore improves the effect of the loss of correlation ( see ) .it is therefore natural to evaluate using a horner scheme : ([a],k ) : = & i+ [ a]\bigl(i+\frac{[a]}{2}\bigl(i+\frac{[a]}{3}\bigl ( \ \ \cdots \ \ \bigl(i+\frac{[a]}{k}\bigr ) \cdots\bigr)\bigr)\bigr ) \\ { [ \mathcal{h}]}([a],k ) : = & [ \tilde{\mathcal{h}}]([a],k ) + [ \mathcal{r}]([a],k ) .\end{array}\ ] ] [ lem : horner ] let and such that . then (a , k) ] . as a consequence , (a , k)=[\mathcal{t}](a , k) ] and such that || ] . as a consequence of lemma [ lem : r ] , (\,.\,,k) ] .therefore , one can use the fundamental theorem of interval analysis to conclude the proof .[ ex : correlation - loss - horner ] consider the interval of matrices ] : 1+[-1.110 ^ -6 , 1.110 ^ -6 ] & [ -0.0706 , 0.7352 ] + [ -1.110 ^ -6 , 1.110 ^ -6 ] & [ -1.2056 , 1.2117 ] .this enclosure is sharper than the one computed using the taylor series : as it was forseen , the horner evaluation actually improves the loss of dependency introduced by the interval evaluation in the expression of the taylor expansion of the matrix exponential . the scaling and squaring process is one of the most efficient way to compute a real matrix exponential .it consists of first computing and then squaring times the resulting matrix : ( a)=((a/2^l))^2^l .therefore , one first has to compute .this computation is actually much easier than because can be made much smaller than .usually , pad approximations are used to compute .however , this technique has not been extended to interval matrices , hence we propose here to use the horner evaluation of the taylor series instead .therefore , we propose the following operator for the enclosure of an interval matrix exponential : let and be such that || ] and such that || ] . by theorem [ thm : horner ], we have \bigl([a]/2^l , k\bigr) ] . 
the interval evaluation ^{2^l} ] encloses \} ] .this concludes the proof as this holds for an arbitrary ] defined in example [ ex : correlation - loss ] .theorem [ thm : ss ] with and leads to the following enclosure of ) ] be the real matrix formed of the widths of the entries of ] as a quality measure of the enclosure ] using double precision does not provide any meaningful enclosure .figure [ fig : fig1 ] shows the quality of the enclosure obtained for different orders ranging from to and two different precisions for computations ( the mathematica arbitrary precision interval arithmetic was used ) .it shows that no meaningful enclosure is obtained for precision less than digits or order less than using the horner interval evaluation of the taylor expansion .plots of (a , k)|| ] computed using the standard double precision arithmetic .this can be improved using a decomposition where is easier to exponentiate .then , one can compute ( where has to be rigorously enclosed in order to maintain the rigorousness of the process ) . using the shur - decomposition ,we obtain (p^{-1}ap,12,12)\,p^{-1}||\approx 7.2\times 10^{-11}.\ ] ] in order to compare the different methods , we will use which is simpler to exponentiate .we have exponentiated :=0.1a+[-\epsilon,\epsilon] ] and the results are plotted on figure [ fig : loglog ] .] the three plain gray curves represent ([a_\epsilon],k)|| ] .the dashed line represents which is a lower bound of )|| ] growthlinearly because the contribution of quadratic terms are negligible . on the other hand , the interval evaluations ([a_\epsilon],k) ]are pessimistic , but it is well known that the pessimism of interval evaluation grows linearly w.r.t . the width of the interval arguments .thus , the computed enclosure show a linear growth w.r.t . which are approximately ([a_\epsilon],k)|| & \approx & 1.17\times10^{-4 } + 2.86\times 10^{10 } \ , \epsilon \\ ||{\mathrm{wid}\:}[s]([a_\epsilon],10,10)|| & \approx & 1.80\times10^{-9 } + 8.59\times 10 ^ 3 \ , \epsilon.\end{aligned}\ ] ] this cleary shows how smaller is the pessimism introduced by the interval scaling and squaring process . finally , both the interval horner evaluation and the interval scaling and squaring process show an exponential growth when is too large .the lower bound represented by the dashed line also shows a exponential growth , which proves that this is inherent to the exponentiation of an interval matrix . for such ,some matrices inside $ ] eventually see some of their eigenvalues becoming positive , leading to some exponential divergence of the underlying dynamical system , which is also observed on the matrices exponential .the author would like to thank the university of central arkansas , usa , who partially funded this work , and in particular doctor chenyi hu for his helpful comments .
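as a complement to the experiments reported above, here is a small python sketch of the interval scaling and squaring operator (not the code used for the paper's figures, which relied on mathematica's arbitrary-precision interval arithmetic): the horner form of the degree-k taylor polynomial is evaluated on [a]/2^l, a standard tail bound is added entrywise in place of the remainder operator, and the result is squared l times. directed rounding is omitted, so the output is only indicative and not a verified enclosure; the interval matrix at the end is hypothetical.

```python
from math import factorial

# naive interval helpers (no outward rounding, illustration only)
def iadd(x, y): return (x[0] + y[0], x[1] + y[1])
def imul(x, y):
    p = [a * b for a in x for b in y]
    return (min(p), max(p))
def m_add(X, Y): return [[iadd(x, y) for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
def m_scale(c, X): return [[(c * lo, c * hi) for (lo, hi) in row] for row in X]  # c > 0
def m_mul(X, Y):
    n = len(X)
    Z = [[(0.0, 0.0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = (0.0, 0.0)
            for k in range(n):
                acc = iadd(acc, imul(X[i][k], Y[k][j]))
            Z[i][j] = acc
    return Z
def identity(n): return [[(1.0, 1.0) if i == j else (0.0, 0.0) for j in range(n)] for i in range(n)]
def inf_norm(X):  # ||[A]|| = max row sum of entrywise magnitudes
    return max(sum(max(abs(lo), abs(hi)) for lo, hi in row) for row in X)

def horner_taylor(IA, k):
    # I + [A](I + [A]/2( ... (I + [A]/k))), i.e. the horner form of the degree-k polynomial
    T = identity(len(IA))
    for j in range(k, 0, -1):
        T = m_add(identity(len(IA)), m_mul(m_scale(1.0 / j, IA), T))
    return T

def interval_scaling_squaring(IA, k, l):
    As = m_scale(2.0 ** (-l), IA)
    a = inf_norm(As)
    assert a < k + 2, "tail bound needs ||[A]/2^l|| < k + 2"
    # standard tail bound of the taylor series, added entrywise; it plays the role
    # of the remainder enclosure used in the text but is not identical to it
    r = a ** (k + 1) / factorial(k + 1) / (1.0 - a / (k + 2))
    T = [[iadd(x, (-r, r)) for x in row] for row in horner_taylor(As, k)]
    for _ in range(l):
        T = m_mul(T, T)
    return T

# hypothetical interval matrix, not the test matrix of the experiments above
IA = [[(0.9, 1.1), (-0.1, 0.1)],
      [(0.0, 0.2), (-1.1, -0.9)]]
enc = interval_scaling_squaring(IA, k=10, l=6)
print([[(round(lo, 4), round(hi, 4)) for lo, hi in row] for row in enc])
```

making ||[a]/2^l|| small both tightens the tail bound and limits the dependency loss in the polynomial part, which is the mechanism by which the squared enclosure comes out much sharper than a direct interval evaluation of the taylor series on [a].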
the numerical computation of the exponentiation of a real matrix has been intensively studied . the main objective of a good numerical method is to deal with round - off errors and computational cost . the situation is more complicated when dealing with interval matrix exponentiation : the main problem is now the dependency loss between the different occurrences of the variables caused by interval evaluation , which may lead to enclosures so wide that they are useless . in this paper , the problem of computing a sharp enclosure of the interval matrix exponential is proved to be np - hard . the scaling and squaring method is then adapted to interval matrices and shown to drastically reduce the dependency loss with respect to the interval evaluation of the taylor series .
this work stems from the attempt to address the optimal infinite - horizon constrained control of discrete - time stochastic processes by a model predictive control strategy .we focus on linear dynamical systems driven by stochastic noise and a control input , and consider the problem of finding a control policy that minimizes an expected cost function while simultaneously fulfilling constraints on the control input and on the state evolution .in general , no control policy exists that guarantees satisfaction of deterministic ( hard ) constraints over the whole infinite horizon .one way to cope with this issue is to relax the constraints in terms of probabilistic ( soft ) constraints .this amounts to requiring that constraints will not be violated with sufficiently large probability or , alternatively , that an expected reward for the fulfillment of the constraints is kept sufficiently large .two considerations lead to the reformulation of an infinite horizon problem in terms of subproblems of finite horizon length .first , given any bounded set ( e.g. a safe set ) , the state of a linear stochastic dynamical system is guaranteed to exit the set at some time in the future with probability one whatever the control policy .therefore , soft constraints may turn the original ( infeasible ) hard - constrained optimization problem into a feasible problem only if the horizon length is finite .second , even if the constraints are reformulated so that an admissible infinite - horizon policy exists , the computation of such a policy is generally intractable .the aim of this note is to show that , for certain parameterizations of the policy space and the constraints , the resulting finite horizon optimization problem is tractable .an approach to infinite horizon constrained control problems that has proved successful in many applications is model predictive control . in model predictive control , at every time , a finite - horizon approximation of the infinite - horizon problem is solved but only the first control of the resulting policy is implemented . at the next time , a measurement of the state is taken , a new finite - horizon problem is formulated , the control policy is updated , and the process is repeated in a receding horizon fashion . under time - invariance assumptions ,the finite - horizon optimal control problem is the same at all times , giving rise to a stationary optimal control policy that can be computed offline . motivated by the previous considerations , here we study the convexity of certain stochastic finite - horizon control problems with soft constraints .convexity is central for the fast computation of the solution by way of numerical procedures , hence convex formulations or convex approximations of the stochastic control problems are commonly sought .however , for many of the classes of problems considered here , tight convex approximations are usually difficult to derive . 
one may argue that non - convex problems can be tackled by randomized algorithms .however , randomized solutions are typically time - consuming and can only provide probabilistic guarantees .in particular , this is critical in the case where the system dynamics or the problem constraints are time - varying , since in that case optimization must be performed in real - time .here we provide conditions for the convexity of chance constrained stochastic optimal control problems .we derive and compare several explicit convex approximations of chance constraints for gaussian noise processes and for polytopic and ellipsoidal constraint functions . finally , we establish conditions for the convexity of a class of expectation - type constrains that includes standard integrated chance constraints as a special case . for integrated chanceconstrains on gaussian processes with polytopic constraint functions , an explicit formulation of the optimization problem is also derived .the optimal constrained control problem we concentrate on is formulated in section [ sec : ps ] .a convenient parametrization of the control policies and the convexity of the objective function are discussed at this stage .next , two probabilistic formulations of the constraints and conditions for the convexity of the space of admissible control policies are discussed : section [ sec : cc ] is dedicated to chance constraints , while section [ sec : icc ] is dedicated to integrated chance constraints . in section [ sec : num ] , numerical simulations are reported to illustrate and discuss the results of the paper .let and . consider the following dynamical model : for , where is the state , is the control input , , , and is a stochastic noise input defined on an underlying probability space .no assumption on the probability distribution of the process is made at this stage .we assume that at any time , is observed exactly and that , for given , . fix a horizon length .the evolution of the system from through can be described in compact form as follows : where let and , with , be measurable functions .we are interested in constrained optimization problems of the following kind : \\\textrm{subject to}\quad&\textrm{(\ref{eq : compactdyn})}\quad\textrm{and}\quad \eta(\bar{x},\bar{u})\leq 0 \end{aligned}\ ] ] where the expectation ] , moreover , since is convex , since these inequalities hold for almost all , it follows that \leq & \,\ , \ee[\alpha\varphi\big ( \gamma(\omega , \theta)\big)+(1-\alpha)\varphi\big(\gamma(\omega,\theta')\big ) ] \\ = & \,\ , \alpha\ee[\varphi\big ( \gamma(\omega , \theta)\big)]+(1-\alpha)\ee[\varphi\big(\gamma(\omega,\theta')\big)],\end{aligned}\ ] ] which proves the assertion . assumption ( iii ) can be replaced by either of the following : * , .* , .let us now make the following standing assumption .[ a : convexobj ] is a convex function of and ] is a convex function of .first , note that the set of admissible parameters is a linear space .let us write and to express the dependence of , and on the random event .fix arbitrarily .since the mapping is affine and the mapping is assumed convex , their combination is a convex function of .then , the result follows from proposition [ thm : phietaconvex ] with and equal to the identity map . by virtue of the alternative assumptions ( iii ) and ( iii ) of proposition [ thm : phietaconvex ] , the requirement that ] , we relax the hard constraint by requiring that it be satisfied with probability . 
hence we address the optimization problem \label{e : objectivecc } \\\textrm{subject to}\quad&\eqref{e : ubardef},~\eqref{eq : cloopsys}\textrm { and } \label{e : dynconstr } \\& \pp(\eta(\bar x_\theta , \bar u_\theta ) \le 0)\geq 1-\alpha \label{e : constr}.\end{aligned}\ ] ] the smaller , the better the approximation of the hard constraint ( [ eq : constraint ] ) at the expense of a more constrained optimization problem .this problem is obtained as a special case of problem by setting and defining as -\infty , 0]}}(\eta_i)-\alpha,\ ] ] where -\infty , 0]}}(\cdot) ] , is convex . as a consequence , under assumption [ a : convexobj ] and for any \,0,1[ ] where , for , denotes the -th component of function .let and be the -th entry of and the -th row of , respectively , and let be a symmetric real matrix square root of .[ prop : constrsep ] let ,1[ ] . under assumption[ a : gaussian ] , the constraint \ge 1-\alpha ] , which proves the first claim .to prove the second claim , note that can alternatively be represented as where .since if and only if , where the denote the standard basis vectors in , we may rewrite as or equivalently for . for each , the supremum is attained with ; therefore , the above is equivalent to .clearly , .moreover , since , we have therefore , constraint reduces to .since the variables and are affine in the original parameters , this is an intersection of second order cone constraints . as a result , under the additional assumption [ a : convexobj ] ,the optimization of ] , 1 . ; 2 . . the above proposition states that as the dimension of the gaussian measure increases, its mass concentrates in an ellipsoidal shell of ` mean - size ' .it readily follows that since is a -dimensional gaussian random vector , its mass concentrates around a shell of size .note that the bounds corresponding to ( i ) and ( ii ) of proposition [ p : conc ] in the case of are independent of the optimization parameters ; of course the relative sizes of the confidence ellipsoids change with ( because the mean and the covariance of depend on ) , but proposition [ p : conc ] shows that the size of the confidence ellipsoids grow quite rapidly with the dimension of the noise and the length of the optimization horizon .intuitively one would expect the ellipsoidal constraint approximation method to be more effective than the cruder approximation by constraint separation .figure [ figure : betagrowth ] and proposition [ p : conc ] however suggest that this is not the case in general ; for large numbers of constraints ( e.g. longer mpc prediction horizon ) the constraint separation method is the less conservative . for any -dimensional random vector , we have -\infty , 0]}}(\eta_i)\right] ] .for every fixed value of , it holds that -\infty , 0]}}(\eta_i) ] .[ proposition : gaussexp ] under assumption [ a : gaussian ] , for defined as in , it holds that = \sum_{i=1}^r \exp \big(t_ih_{i,\theta } + \frac{t_i^2}{2 } ||\bar\sigma^{\frac{1}{2}}p_{i,\theta}||^2\big).\ ] ] as a consequence , under assumption [ a : convexobj ] and for any choice of , , the problem \\ \mathrm{subject~to}\quad&\eqref{e : ubardef},~\eqref{eq : cloopsys}~\mathrm{and}~\sum_{i=1}^r \exp \big(t_ih_{i,\theta } + \frac{t_i^2}{2 } ||\bar\sigma^{\frac{1}{2}}p_{i,\theta}||^2\big)\leq \alpha\end{aligned}\ ] ] is a convex conservative approximation of problem . 
it is easily seen that , for any -dimensional gaussian random vector with mean and covariance matrix , and any vector , = \exp\bigl(c^t\mu+\frac{1}{2}c^t\sigma ' c\bigr) ] is a convex function of and that the constraint set \leq \log \alpha\}=\{\theta\in\theta:~\ee[\varphi(\eta_\theta)]\leq \alpha\}\ ] ] is convex .finally , from lemma [ p : expcons ] , if \leq \alpha ] . together with assumption [ a : convexobj ] , this implies that the optimization problem with constraint latexmath:[ ] , the last statement of the proposition follows .in this section we focus on the problem \label{eq : objectiveicc } \\\textrm{subject to}\quad&\eqref{e : ubardef},~\eqref{eq : cloopsys}\textrm { and } \\ & j_i(\theta)\leq \beta_i,~i=1,\ldots , r \label{eq : genicc}\end{aligned}\ ] ] where , for , ] is convex . hence , for any choice of , the set is convex . since , the convexity of follows . together with assumption [ a : convexobj ] , this proves that is a convex optimization problem .it is worth noting that the function of section [ section : approxviaexp ] satisfies analogous monotonicity and convexity assumptions with respect to each of the , with . unlike those of section [ sec : cc ] , this convexity result is independent of the probability distribution of . by virtue of the alternative assumptions ( iii ) and( iii ) of proposition [ thm : phietaconvex ] , the requirement that be finite for all may be relaxed .a sufficient requirement is that there exist no two values and such that and . in particular , provided measurable and convex , definition ( [ eq : ramp ] ) satisfies all the requirements of proposition [ thm : etaphiconditions ] .the ( scalar ) polytopic constraint function : fulfills the hypotheses of proposition [ thm : etaphiconditions ] .hence , the corresponding integrated chance constraint is convex .[ ex : followon1 ] following example [ thm : examples ] , an interesting case is that of ellipsoidal constraints . for an -size positive - semidefinite real matrix and a vector , define is a convex function of the vector ( it is the composition of the convex mapping , , with the affine mapping ) and hence proposition [ thm : etaphiconditions ] applies .a problem setting similar to example [ ex : followon1 ] with quadratic expected - type cost function and ellipsoidal constraints has been adopted in , where hard constraints are relaxed to expected - type constraints of the form \leq \beta ] . in the absence of constraints , this is a finite horizon lqg problem whose optimal solution is the linear time - varying feedback from the state where the matrices are computed by solving , for , the backward dynamic programming recursion with .simulated runs of the controlled system are shown in figure [ f : lqg ] .we shall now introduce constraints on the state and the control input and study the feasibility of the problem with the methods of section [ sec : cc ] .the convex approximations to the chance - constrained optimization problems are solved numerically in matlab by the toolbox ` cvx ` . in all caseswe shall compute a -stage affine optimal control policy and apply it to repeated runs of the system .based on this we will discuss the feasibility of the hard constrained problem and the probability of constraint violation .let us impose bounds on the control inputs , , and , with , and bounds on the mass displacements , , for and with . 
in the notation of section [ s : polyt ] , these constraints are captured by the equation where and , with and this hard constraint is relaxed to the probabilistic constraint \geq 1-\alpha$ ] .the resulting optimal control problem is then addressed by constraint separation ( section [ s : sep ] ) and ellipsoidal approximation ( section [ s : confellipsoids ] ) . with constraint separation ,the problem is feasible for . for ,the application of the suboptimal control policy computed as in proposition [ prop : constrsep ] yields the results shown in figure [ f : constrsep ] . with this policy , the control input saturates within the required bounds whereas the mass displacements stay well below bounds .in fact , although the required probability of constraint satisfaction is , constraints were never violated in 1000 simulation runs .this suggests that the approximation incurred by constraint separation is quite conservative , mainly due to the relatively large number of constraints .it may also be noticed that the variability of the applied control input is rather small .this hints that the computed control policy is essentially open - loop , i.e. the linear feedback gain is small compared to the affine control term . with the ellipsoidal approximation method , for the same probability level, the problem turns out to be infeasible , in accordance with the conclusions of section [ s : comparison ] . for the sake of investigation, we loosened the bounds on the mass displacements to for all and .the problem of proposition [ prop : confellip ] is then feasible and the results from simulation of the controlled system are reported in figure [ f : confellip ] .although the controller has been computed under much looser bounds , the control performance is similar to the one obtained with constraint separation , a clear sign that the ellipsoidal approximation is overly conservative in this case .another evidence of inaccuracy is the fact that , while the control inputs get closer to the bounds , the magnitude of the displacements is not reduced .as in the case of constraint separation , the applied control input is insensitive to the specific simulation run , i.e. the control policy is essentially open loop . consider the constraint function with . unlike the previous section , we do not impose bounds on and at each but instead require that the total `` spending '' on and does not exceed a total `` budget '' .this constraint can be modelled in the form of section [ sec : ellipconstr ] , namely with and , where the constrained control policy for and is computed by solving the lmi problem of proposition [ prop : elliplmi ] .results from simulations of the closed - loop system are reported in figure [ f : ellipconstr ] .once again , constraints were not violated over 1000 simulated runs , showing the conservatism of the approximation .it is interesting to note that the displacements of the masses are generally smaller than those obtained by the controller computed under affine constraints , at the cost of a slightly more expensive control action .in contrast with the affine constraints case , the control action obtained here is much more sensitive to the noise in the dynamics , i.e. 
the feedback action is more pronounced .we have studied the convexity of optimization problems with probabilistic constraints arising in model predictive control of stochastic dynamical systems .we have given conditions for the convexity of expectation - type objective functions and constraints .convex approximations have been derived for nonconvex probabilistic constraints .results have been exemplified by a numerical simulation study .open issues that will be addressed in the future are the role of the tunable parameters ( e.g. the in section [ s : sep ] , the of section [ section : approxviaexp ] and the in section [ sec : icc ] ) in the various optimization problems , and the effect of different choices of the icc functions ( section [ sec : icc ] ) .directions of future research also include the extension of the results presented here to the case of noisy state measurements , the exact or approximate solution of the stochastic optimization problems in terms of explicit control laws and the control of stochastic systems with probabilistic constraints on the state via bounded control laws . , _ a full solution to the constrained stochastic closed - loop mpc problem via state and innovations feedback and its receding horizon implementation _ , in proceedings of the 42nd ieee conference on decision and control , vol . 1 , 9 - 12 dec . 2003 , pp . 929934 .
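to illustrate how the constraint - separation reformulation of section [ sec : cc ] leads to a tractable convex program, the following python / cvxpy sketch sets up a small finite - horizon problem with individual gaussian chance constraints rewritten as second - order cone constraints, as in proposition [ prop : constrsep ]. the double - integrator data, horizon, bounds and the affine disturbance - feedback parametrization of the policy are illustrative assumptions and not taken from the paper; the paper's numerical study used the matlab toolbox cvx instead.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

# hypothetical double-integrator data, horizon and bounds (not from the paper)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
G = 0.1 * np.eye(2)
n, m, N = 2, 1, 8
x0 = np.array([5.0, 0.0])
umax, xmax = 2.0, 7.0
alpha_row = 0.05      # per-row level, e.g. an overall level split by boole's inequality
q = norm.ppf(1 - alpha_row)

# stacked prediction: xbar = Abar x0 + Bbar ubar + Gbar wbar, with wbar ~ N(0, I)
Abar = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Bbar = np.zeros((n * N, m * N))
Gbar = np.zeros((n * N, n * N))
for k in range(N):
    for j in range(k + 1):
        Ap = np.linalg.matrix_power(A, k - j)
        Bbar[k * n:(k + 1) * n, j * m:(j + 1) * m] = Ap @ B
        Gbar[k * n:(k + 1) * n, j * n:(j + 1) * n] = Ap @ G

# affine disturbance-feedback policy ubar = v + M wbar, M block lower triangular (causality)
v = cp.Variable(m * N)
M = cp.Variable((m * N, n * N))
constraints = [M[k * m:(k + 1) * m, k * n:] == 0 for k in range(N)]

mu_x = Abar @ x0 + Bbar @ v    # mean of xbar
Sx = Bbar @ M + Gbar           # wbar-coefficient of xbar
mu_u, Su = v, M

# each individual chance constraint P(f^T z <= b) >= 1 - alpha_row is rewritten as
# the second-order cone constraint  mean + q * ||covariance-factor^T f|| <= b
for k in range(N):
    eu = np.eye(m * N)[k * m]  # selects u_k
    ex = np.eye(n * N)[k * n]  # selects the first state component at step k+1
    for s in (+1.0, -1.0):
        constraints.append(s * (eu @ mu_u) + q * cp.norm(Su.T @ eu, 2) <= umax)
        constraints.append(s * (ex @ mu_x) + q * cp.norm(Sx.T @ ex, 2) <= xmax)

# expected quadratic cost: E||xbar||^2 + E||ubar||^2 = ||mean||^2 + ||noise factor||_F^2
cost = cp.sum_squares(mu_x) + cp.sum_squares(mu_u) + cp.sum_squares(Sx) + cp.sum_squares(Su)
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(prob.status, prob.value)
```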
we investigate constrained optimal control problems for linear stochastic dynamical systems evolving in discrete time . we consider minimization of an expected - value cost over a finite horizon . hard constraints are introduced first , and then reformulated in terms of probabilistic constraints . it is shown that , for a suitable parametrization of the control policy , a wide class of the resulting optimization problems is convex , or admits reasonable convex approximations . keywords : stochastic control ; convex optimization ; probabilistic constraints
network coding was introduced in as a means to improve the rate of transmission in networks .linear network coding was introduced in .deterministic algorithms exist to construct _ scalar network codes _ ( in which the input symbols and the network coding coefficients are scalars from a finite field ) which achieve the maxflow - mincut capacity in the case of acyclic networks with a single source which wishes to multicast a set of finite field symbols to a set of sinks , as long as the field size . finding the minimum field size over which a network code exists for a given networkis known to be a hard problem .most recently , an algorithm was proposed in which attempts to find network codes using small field sizes , given a network coding solution for the network over some larger field size the algorithms of also apply to linear deterministic networks , and for _ vector network codes _ ( where the source seeks to multicast a set of vectors , rather than just finite field symbols ) . in this work ,we are explicitly concerned about the scalar network coding problem , although the same techniques can be easily extended to accommodate for vector network coding and linear deterministic networks , if permissible , as in the case of .network - error correction , which involved a trade - off between the rate of transmission and the number of correctable network - edge errors , was introduced in as an extension of classical error correction to a network setting .along with subsequent works and , this generalized the classical notions of the hamming weight , hamming distance , minimum distance and various classical error control coding bounds to their network counterparts .algorithms for constructing network - error correcting codes which meet a generalization of the classical singleton bound for networks can be found in . using the algorithm of , a network code which can correct any errors occurring in at most edgescan be constructed , as long as the field size is such that where is the set of edges in the network .the algorithms of have similar requirements to construct such network - error correcting codes .this can be prohibitive when is large , as the sink nodes and the coding nodes of the network have to perform operations over this large field , possibly increasing the overall delay in communication . in this work ,we extend the ef algorithm to block network - error correction using small fields . as in , we shall restrict our algorithms and analysis to fields with binary characteristic .the techniques presented can be extended to finite fields of other characteristics without much difficultly .the contributions of this work are as follows .* we extend the ef algorithm of to construct network - error correcting codes using small fields , by bridging the techniques of the ef algorithm and the network - error correction algorithm of . *the major step in the ef algorithm is to compute a polynomial of least degree coprime with a polynomial , of possibly large degree . while it is shown in that this can be done in polynomial time, the complexity can still be large .optimizing based on our requirement , we propose a alternate algorithm for computing the polynomial coprime with this is shown to have lesser complexity than that of the ef algorithm , which simply adopts a brute force method to do the same .the rest of this paper is organized as follows . in section [ sec2 ] ,we give the basic notations and definitions related to network coding , required for our purpose . 
in section [ sec3 ] ,we review the ef algorithm briefly and then propose our modification to it , and prove that the modified algorithm has lesser complexity than the original technique in the ef algorithm .section [ sec4 ] presents our algorithm for constructing network - error correcting codes using small field sizes , along with calculations of the complexity of the algorithm .examples illustrating the algorithm performance for network coding and error correction are presented in section [ sec5 ] .finally , we conclude the paper in section [ sec6 ] with comments and directions for further research .the model for acyclic networks considered in this paper is as in .an acyclic network can be represented as a acyclic directed multi - graph , where is the set of all nodes and is the set of all edges in the network .we assume that every edge in the directed multi - graph representing the network has unit _ capacity _ ( can carry utmost one symbol from ) . network links with capacities greater than unity are modeled as parallel edges .the network is assumed to be instantaneous , i.e. , all nodes process the same _ generation _ ( the set of symbols generated at the source at a particular time instant ) of input symbols to the network in a given coding order ( ancestral order ) .let be the source node and be the set of receivers .let be the unicast capacity for a sink node , i.e. , the maximum number of edge - disjoint paths from to .then is the max - flow min - cut capacity of the multicast connection .an -dimensional network code ( ) is one which can be used to transmit symbols simultaneously from to and can be described by the three matrices ( of size ) , ( of size ) , and ( of size for every sink ) , each having elements from some finite field .further details on the structure of these matrices can be found in and .we then have the following definition . [ nettransfermatrix ] _ the network transfer matrix _ , for a -dimensional network code , , corresponding to a sink node is a full rank matrix defined as the matrix governs the input - output relationship at sink the problem of designing a -dimensional network code then implies making a choice for the matrices and such that the matrices have rank each .we thus consider each element of , and to be a variable for some positive integer , which takes values from the finite field let be the set of all variables , whose values define the network code . the variables are known as the _ local encoding coefficients _ . for an edge in a network with a -dimensional network code in place , the _ global encoding vector _ is a dimensional vector which defines the particular linear combination of the input symbols which flow through it is known that deterministic methods of constructing a -dimensional network code exist , as long as let be the length of the longest path from the source to any sink . because of the structure of the matrices and ,it is seen that the matrix has degree at most in any particular variable and also a total degree ( sum of the degrees across all variables in any monomial ) of .let be the determinant of and then the degree in any variable ( and the total degree ) of the polynomials and are at most and respectively .after briefly recollecting the ef algorithm , we shall proceed to modify its key step so that the overall complexity of the algorithm is reduced . assign values to the scalar coding coefficients from an appropriate field such that the network transfer matrices to all the sinks are invertible . 
express every as a binary polynomial of degree at most using the usual polynomial representation of the finite field for a particular choice of the primitive polynomial of degree substituting these polynomials representing the in the matrices calculate the determinants of as the polynomials , ] also , for each sink the matrices remain invertible as as the following lemma ensures that such a coprime exists and can be found in polynomial time . [ coprimeexistence ] if is a non - zero binary polynomial of degree , there exists a coprime polynomial of degree at most , and we can identify it in polynomial time . [ rem1 ] the worst - case complexity of computing is where we now present a fast algorithm for computing the least degree irreducible polynomial that is coprime with note that any polynomial coprime with is useful only if the degree of is less than as only such a can result in a network code using a smaller field than the one we started with . using this fact , we give algorithm [ alg : coprime ] which computes a least degree irreducible polynomial that is coprime with let let be the first polynomial for which is non - zero . note that every is the product of all irreducible polynomials whose degree divides also , all irreducible polynomials of degree divide as all for all therefore , at least one of the irreducible polynomials of degree is coprime with find one such polynomial the following lemma ensures that all polynomials which are found to be coprime with by directly computing the gcd ( or the remainder for irreducible polynomials ) in the brute force method ( as done in algorithm [ alg : construction ] ) , can also be found by running algorithm [ alg : coprime ] , using the set of polynomials upto the appropriate degree .[ lemmafindingg ] for some field let ] such that then is also relatively prime with the polynomial as and are coprime with each other , we can obtain polynomials ] with also , as let then , which means that and are coprime with each other , hence proving the lemma .we now prove that our method for step of algorithm [ alg : construction ] has less complexity than that of . towards that end , we first prove the following lemma .[ lemmafindinggcomplexity ] let , ] because of the fact that the new matrices obtained after the modulo operation are also full rank , which implies that the error correcting capability of the code is preserved .in order to ensure that the error correction property of the original network code is preserved , it is sufficient if a polynomial is coprime with each polynomial rather than their product as shown in step of algorithm [ alg : necclowfieldsize ] .however , the following lemma shows that both are equivalent .let , i=1,2, ... ,n\right\} ] is relatively prime with all the polynomials in if and only if it is relatively prime with their product ._ if part : _ if is relatively prime with the product of all the polynomials in then there exist polynomials $ ] such that for each we can rewrite ( [ eqn102 ] ) as which implies that is coprime with each _ only if part : _ suppose is relatively prime with all the polynomials in then , for each we can find polynomials and such that , in particular , using ( [ eqn104 ] ) in ( [ eqn103 ] ) , thus , is relatively prime with continuing with the same argument , it is clear that is relatively prime with the complexity of algorithm [ alg : necclowfieldsize ] is given by table [ tab1 ] , along with the references and reasoning for the mentioned complexities for every step of the algorithm . 
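as a reference point for the coprime-polynomial step discussed above, a brute-force baseline (enumerate irreducible binary polynomials by increasing degree and return the first one coprime with the given polynomial) can be written in a few lines; the paper's algorithm [ alg : coprime ] avoids this enumeration by exploiting the factorization of x^(2^i) - x, so the sketch below is only a baseline for checking results on small instances, not the proposed method. polynomials over gf(2) are encoded as integers, with bit k holding the coefficient of x^k.

```python
# brute-force reference (not the paper's optimized routine): find a least-degree
# irreducible binary polynomial coprime with a given polynomial p(x) over gf(2).

def deg(a):
    return a.bit_length() - 1

def poly_mod(a, b):
    """remainder of a divided by b over gf(2)."""
    db = deg(b)
    while a and deg(a) >= db:
        a ^= b << (deg(a) - db)
    return a

def poly_gcd(a, b):
    while b:
        a, b = b, poly_mod(a, b)
    return a

def is_irreducible(q):
    """trial division by all polynomials of degree at most deg(q) // 2."""
    d = deg(q)
    return all(poly_mod(q, f) != 0 for f in range(2, 1 << (d // 2 + 1)))

def least_degree_coprime(p):
    """smallest-degree irreducible g with gcd(g, p) = 1."""
    d = 1
    while True:
        for q in range(1 << d, 1 << (d + 1)):   # all polynomials of degree exactly d
            if is_irreducible(q) and poly_gcd(q, p) == 1:
                return q
        d += 1

print(bin(least_degree_coprime(0b1011)))  # x itself is coprime with x^3 + x + 1
```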
the only complexity calculation of table [ tab1 ] which remains to be explained is the complexity involved in identifying and calculating the non - zero minor of the matrix there are such minors , and calculating each takes multiplications over as can take values upto clearly the function to be maximized is of the form for proposition [ thmcomplexity ] gives the value of for which such a function is maximized , based on which the value in table [ tab1 ] has been calculated .[ thmcomplexity ] for some positive integer let be an integer such that the function is maximized at the statement of the theorem is easy to verify for therefore , let let for some such that then , where proving the statement of the theorem is then equivalent to showing that both of the following two statements are true , which we shall do separately for even and odd values of . * for all integers * for all integers _ case - a _ ( _ is even _ ) : let for some integer such that then , for it is clear from ( [ eqn100 ] ) that if it is clear that thus , for even values of , the theorem is proved .+ _ case - b _( _ is odd _ ) : let for some integer such that then , now , for ( as and is odd ) .hence , for if then by ( [ eqn101 ] ) , it is clear that thus for all for again by ( [ eqn101 ] ) , it is clear that and thus the theorem holds for odd values of this completes the proof. [ cols="^,^,^,^",options="header " , ] [ tab3 ] according to the algorithm in , a network - error correcting code can be constructed deterministically if in fig .[ fig : necnetwork ] , let the variable denote the encoding coefficient between edges and similarly , the variable denote the local encoding coefficients between and let let and , where is a primitive element of let be the primitive polynomial of degree under consideration .consider two such network - error correcting codes obtained using algorithm [ alg : necc ] for the network of fig .[ fig : necnetwork ] as follows .let and be two choices for the set with all the other local encoding coefficients being unity .it can be verified that these two network codes can be used to transmit one error - free symbol from the source to both sinks , as long as not more than single edge errors occur in the network .table [ tab3 ] gives the results of running algorithm [ alg : necclowfieldsize ] for this network starting from these two codes , with and being the primitive elements of and respectively . except for the other coding coefficients remain over the respective fields . as in example[ exm1 ] , the initial choice of the sets and for results in the final network code being over different field sizes . with ,the resultant network - error correcting code is over exactly the one reported in by brute force construction .a new and faster method of computing a coprime polynomial to a given polynomial has been presented . thereby improving the performance of the ef algorithm , which is applicable to scalar and vector network coding .based on the ef algorithm , a method has been presented which can obtain network - error correcting codes using small fields that meet the network singleton bound . this technique can be adapted to obtain network - error correcting codes meeting the refined singleton bound , or for linear deterministic networks which permit solutions similar to those obtained in . 
as in the original paper, questions remain open about the achievability of a code using the minimal field size. as illustrated by the examples in section [ sec5 ], factors such as the initial choice of the network code and the primitive polynomial of the field over which the initial code is defined (using which the local encoding coefficients are represented as polynomials) control the resultant field size after the algorithm. the authors would like to thank raman sankaran of the csa dept., iisc, for useful discussions regarding the complexity calculations of the algorithms presented in this paper. this work was supported partly by the drdo-iisc program on advanced research in mathematical engineering through a research grant, and partly by the inae chair professorship grant to b. s. rajan. s. jaggi, p. sanders, p. a. chou, m. effros, s. egner, k. jain and l. m. g. m. tolhuizen, `` polynomial time algorithms for multicast network code construction '', ieee trans. inf. theory, vol. 51, no. 6, june 2005, pp. 1973-1982. s. avestimehr, s. n. diggavi and d. n. c. tse, `` wireless network information flow '', proceedings of the allerton conference on communication, control, and computing, illinois, september 26-28, 2007.
recently, ebrahimi and fragouli proposed an algorithm to construct scalar network codes using small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. the contribution of this paper is twofold. primarily, we extend the scalar network coding algorithm of ebrahimi and fragouli (henceforth referred to as the ef algorithm) to block network-error correction. existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can therefore be prohibitive for large networks. we give an algorithm which, starting from a given network-error correcting code, obtains another network code over a small field with the same error-correcting capability as the original code. our secondary contribution is a modification of the ef algorithm itself. the major step in the ef algorithm is to find a least-degree irreducible polynomial that is coprime with another polynomial of possibly large degree. we suggest an alternative method to compute this coprime polynomial which is faster than the brute-force method used by ebrahimi and fragouli.
communication networks are presenting ever - increasing challenges in a wide range of applications , and there is great interest in inferential methods for exploiting the information they contain .a common source of such data is a corpus of time - stamped messages such as e - mails or sms ( short message service ) .such messaging data is often useful for inferring a social structure of the community that generates the data . in particular , messaging data is an asset to anyone who would like to cluster actors according to their _ similarity_. a practitioner is often privy to messaging data in a _ streaming _ fashion , where the word _ streaming _ describes a practical limitation , as the practitioner might be privy only to the incoming data in a fixed summarized form without any possibility to retrieve past information .it is in the practitioner s interest to transform the summarized data so that the transformed data is appropriate for detecting _ emerging _ social trends in the source community . we mathematically model such streaming data as a collection of tuples of the form of time and actors , where and represent actors exchanging the -th message and represents the occurrence time of the -th message .there are many models suitable for dealing with such data .the most notable are the cox hazard model , the doubly stochastic process ( also known as the cox process ) , and the self - exciting process ( although self - exciting processes are sometimes considered as special cases of the cox hazard model ) . for references on these topics , see , and .all three models are related to each other ; however , the distinctions are crucial to statistical inference as they stem from different assumptions on information available for ( online ) inference . to transform data to a data representation more suitable for clustering actors, we model as a ( multivariate ) doubly stochastic process , and develop a method for embedding as a stochastic process taking values in for some suitably chosen .for statistical inference when there is information available beyond , the cox - proportional hazard model is a natural choice . in and , for instance , instantaneous intensity of messaging activities between each pair of actors is assumed to be a function of , in the language of generalized linear model theory , known covariates with unknown regression parameters . more specifically , in , the authors consider a model where with and representing independent counting processes , e.g. , are bernoulli random variables and are random variables from the exponential family . on the other hand , in ,a cox multiplicative model was considered where .the model in posits that actor interacts with actor at a baseline rate modulated by the pair s covariate whose value at time is known and is a common parameter for all pairs . in , it is shown under some mild conditions that one can estimate the global parameter consistently . in , the intensity is modeled for _ adversarial _ interaction between _ macro _ level groups , and a problem of nominating unknown participants in an event as a missing data problem is entertained using a self - exciting point process model . 
in particular , while no explicit intensity between a pair of actors ( gang members ) is modeled , the event intensity between a pair of groups ( gangs ) is modeled , and the spatio - temporal model s chosen intensity process is self - exciting in the sense that each event can affect the intensity process .when data is the only information at hand , a common approach is to construct a time series of ( multi-)graphs to model association among actors .for such an approach , a simple method to obtain a time series of graphs from is to `` pairwise threshold '' along a sequence of non - overlapping time intervals .that is , given an interval , for each pair of actors and , an edge between vertex and vertex is formed if the number of messaging events between them during the interval exceeds a certain threshold .this is the approach taken in , and , to mention just a few examples .the resulting graph representation is often thought to capture the structure of some underlying social dynamics .however , recent empirical research , e.g. , , has begun to challenge this approach by noting that changing the thresholding parameter can produce dramatically different graphs .another useful approach when is the only information available is to use a doubly stochastic process model in which count processes are driven by latent factor processes .this is the approach taken explicitly in and , and this is also done implicitly in . in and between actors are specified by proximity in their latent positions ; the closer two actors are to each other in their latent configuration , the more likely they exchange messages . using our model, we consider a problem of clustering actors `` online '' by studying their messaging activities .this allows us a more geometric approach afforded by embedding data to an representation for some fixed dimension . in this paper , we propose a useful mathematical formulation of the problem as a filtering problem based on both a multivariate point process observation and a population latent position distribution .as a convention , we assume that a vector is a column vector if its dimension needs to be disambiguated .we denote by the filtration up to time that models the information generated by undirected communication activities between actors in the community , where `` undirected '' here means we do not know which actor is the sender and which is the receiver .we denote by the space of probability measures on . for a probability density function defined on , denotes the probability density function that is proportional to where the normalizing constant does not depend on .the set of all matrices over the reals is denoted by . for each matrix , we write .given a vector , we write for its euclidean norm .let and . for each and , we write for the hadamard product of and , i.e. , denotes component - wise multiplication . given vectors in , the gram matrix of the ordered collection is the matrix such that its - entry is the inner product of and . given a matrix , is the column vector whose -th entry is the -th diagonal element of . with a slight abuse of notation , given a vector , we will also denote by the diagonal matrix such that its -th diagonal entry is .we always use for the number of actors under observation and for the dimension of the latent space .we denote by the -fold product of .an element of will be written in bold face letters , e.g. .similarly , bold faced letters will typically be used to denote objects associated with the actors collectively . 
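returning to the pairwise-thresholding construction described above, a minimal sketch of turning a stream of (time, actor, actor) tuples into a time series of graphs is given below; the window length and the threshold are the tuning parameters whose sensitivity the cited empirical work questions.

```python
# sketch of "pairwise thresholding": one 0/1 adjacency matrix per interval.
import numpy as np

def threshold_graphs(messages, n_actors, t_max, window, threshold):
    n_bins = int(np.ceil(t_max / window))
    counts = np.zeros((n_bins, n_actors, n_actors), dtype=int)
    for t, i, j in messages:                      # undirected: count both orders
        b = min(int(t // window), n_bins - 1)
        counts[b, i, j] += 1
        counts[b, j, i] += 1
    return (counts > threshold).astype(int)       # edge iff count exceeds threshold

msgs = [(0.1, 0, 1), (0.2, 0, 1), (0.7, 1, 2), (1.3, 0, 2), (1.4, 0, 2)]
graphs = threshold_graphs(msgs, n_actors=3, t_max=2.0, window=1.0, threshold=1)
print(graphs[0])   # the edge 0-1 appears in the first interval only
```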
an exception to this convention is the identity matrix which is denoted by , where the dimension is specified only if needed for clarification . also , we write as the column matrix of ones . with a bit of abuse of notation, we also write for an indicator function , and when confusion is possible , we will make our meaning clear .our actors under observation are assumed to be a subpopulation of a bigger population .that is , we observe actors that are sampled from a population for a longitudinal study .we are not privy to the actors latent features that determine the frequency of pairwise messaging activities , but we do observe messaging activities . a notional illustration of our approach thus far is summarized in figure [ fig : hierarchalmodeldiagram ] , figure [ fig : subpop_historgrams ] , and figure [ fig : kullback - leibler divergence - simulation ] .in both figure [ fig : subpop_historgrams ] and figure [ fig : kullback - leibler divergence - simulation ] , represents the ( same ) initial time when there was no cluster structure , and and represent the emerging and fully developed latent position clusters which represent the object of our inference task . for a more detailed diagram . ][ [ population - density - process - level ] ] population density process level + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the message - generating actors are assumed to be members of a community , which we call the population . the aspect of the population that we model in this paper is its members distribution over a latent space in which the proximity between a pair of actors determines the likelihood of the pair exchanging messages .the population distribution is to be time - varying and a mixture of component distributions .the latent space is assumed to be for some , and the population distribution at time is assumed to have a continuous density . to be more precise ,we assume that the ( sample ) path is such that for each , where * is a smooth _ sample _ path of a stationary ( potentially degenerate ) diffusion process taking values in , * is a probability density function on with convex support with its mean vector being the zero vector and its covariance matrix being a positive definite ( symmetric ) matrix , * is a smooth _ sample _ path of an -valued ( potentially non - stationary or degenerate ) diffusion process , * is a smooth _ sample _ path of a stationary ( potentially degenerate ) diffusion process taking values in .note that it is implicitly assumed that , and additionally , we also assume that for each and , the -th moment of the -th coordinate of is finite , i.e. , . in this paper ,we take and as exogenous modeling elements .however , for an example of a model with yet further hierarchical structure , one _ could _ take a cue from a continuous time version of the classic `` co - integration '' theory , e.g. , see .the idea is that the location of the -th center is non - stationary , but the inter - point distance between a combination of the centers is stationary .more specifically , one _ could _ further assume that there exist matrix , matrix and matrix such that * is the dimensional zero matrix , * is a dimensional brownian motion , * is a stationary diffusion process .thus , the position of centers are unpredictable , but the relative distance between each pair of centers are as predictable as that of a stationary process .) 
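a minimal sketch of the population-level object described above, namely a time-varying mixture density whose weights and centers follow exogenous paths, is given below; as an assumption for illustration, the base density is taken to be a standard gaussian on the real line and a single time slice is evaluated.

```python
# sketch of the population density: p_t(x) proportional to
# sum_k a_k(t) * phi((x - m_k(t)) / scale), with phi standard normal (d = 1).
import numpy as np

def population_density(x, weights, centers, scale=1.0):
    phi = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    dens = sum(a * phi((x - m) / scale) / scale for a, m in zip(weights, centers))
    return dens / sum(weights)                    # normalize the mixture weights

xs = np.linspace(-4.0, 6.0, 5)
print(population_density(xs, weights=[0.3, 0.7], centers=[0.0, 3.0]))
```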
] is a diffusion process whose generator is , and moreover we assume that are mutually independent .for each , let where each is assumed to be a column vector , i.e. , a matrix . in other words , the -th row of is the transpose of . , ) ] 0.1 in compute and compute the non - negative definite symmetric square root of standardnormalvector [ [ messaging - process - level ] ] messaging process level + + + + + + + + + + + + + + + + + + + + + + + denote by the number of messages sent _ from _ actor _ to _actor . also , denote by the number of messages exchanged _ between _ actor _ and _ actor .note that .for each actor , we assume that the path is deterministic , continuous and takes values in . for each , we assume that & = ( \lambda_{t , i } \lambda_{t , j}/2 ) p_{t , i\rightarrow j}(\bm x_{t})dt + o(dt).\end{aligned}\ ] ] for our algorithm development and experiment in section [ sec : numericalexperiments ] , we take ,\end{aligned}\ ] ] but for experiment in section [ sec : numericalexperiments ], we take . next , by way of assumption , for each pair , say , actor and actor , we eliminate the possibility that both actor and actor send messages concurrently to each other .more specifically , we assume that \\ & = ( \lambda_i(t ) \lambda_j(t)/2 ) ( p_{t , i\rightarrow j}(x_{t , i } ) + p_{t , j \rightarrow i}(x_{t , j } ) ) dt + o(dt).\end{aligned}\ ] ] for future reference , we let , and 0.1 in \leftarrow ( t , i , j) ] , and let denote its density .in theorem [ thm : exactposterior ] , the _ exact _ formula for updating the posterior is presented , and in theorem [ thm : simplifyingposteriorupdaterule ] , our _ working _ formula used in our numerical experiments is given .we develop our theory for the case where and are the same for all actors for simplicity , as generalization to the case of each actor having different values for and is straightforward but requires some additional notational complexity .[ thm : exactposterior ] for each and , where is an matrix such that for each , and for each , , and is an matrix such that for each pair , and for each pair , .hereafter , for developing algorithms further for efficient computations , we make the assumption that for each , where denotes the joint density for actors , and .[ thm : simplifyingposteriorupdaterule ] for each function , we have replacing with a dirac delta generalized function , theorem [ thm : simplifyingposteriorupdaterule ] states that for each , where denotes the formal adjoint operator of . 
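a discretized sketch of the messaging level is given below: over each short step dt, the pair (i, j) exchanges a message with probability approximately (lambda_i lambda_j / 2) p(x_i, x_j) dt. the squared-exponential proximity kernel used here is an assumed stand-in, not necessarily the paper's exact choice of p_{t, i -> j}, and the latent positions are held fixed for simplicity.

```python
# sketch: simulate pairwise message events from a distance-dependent intensity.
import numpy as np

rng = np.random.default_rng(0)

def simulate_messages(X, lam, t_max, dt=1e-3):
    """X: (n, d) latent positions (held fixed); lam: (n,) activity rates."""
    n = len(X)
    events = []
    for step in range(int(t_max / dt)):
        t = step * dt
        for i in range(n):
            for j in range(i + 1, n):
                rate = 0.5 * lam[i] * lam[j] * np.exp(-np.sum((X[i] - X[j]) ** 2))
                if rng.random() < rate * dt:      # bernoulli approximation over [t, t + dt)
                    events.append((t, i, j))
    return events

X = np.array([[0.0], [0.1], [2.0]])               # actors 0 and 1 are close, 2 is far
print(len(simulate_messages(X, lam=np.array([5.0, 5.0, 5.0]), t_max=5.0)))
```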
for use only within algorithm [ algo : estimateactorposterior ] , , , and , 1 \le u_\ell < v_\ell \le n\} ] for all possible values of .another possibility among many others is to choose if is chosen so that ] in algorithm [ algo : messagingactivities ] and algorithm [ algo : estimateactorposterior ] , _ near - infinitesimally - small _ means so small that the likelihood of having more than one event during a time interval of length is practically negligible .also , by standardnormalvector in algorithm [ algo : latentprocess ] and unitexponentialvariable in algorithm [ algo : messagingactivities ] , we mean generating , respectively , a single normal random vector with its mean vector being zero and its covariance matrix being the identity matrix , and a single exponential random variable whose mean is one .in our experiments , we hope to detect clusters with accuracy and speed similar to that possible if the latent positions were actually observed even though we use only information in estimated from information contained in .we denote the end - time for our simulation as .there are two simulation experiments presented in this section , and the computing environment used in each experiment is reported at the end of this section .[ [ experiment-1 ] ] experiment 1 + + + + + + + + + + + + we take and we assume that for each ] where .\end{cases } \end{aligned}\ ] ] then , we also consider , where .\end{cases } \end{aligned}\ ] ] there is only one population density ; in other words , .note that even with _one population center _ , we can have _ more than one empirical mode _ for the subpopulation .one of these modes is near zero , and another mode is near one .the reason for this is that because of the value of and , when an actor is too far away from the mode of the population process , the population process affects the actors on its _ tail _ only by negligibly small amount . in figure[ fig : experimentone - sixactorpath ] and figure [ fig : experimentone - perfect - filtering - case ] , a sample path of the true latent position of each of eight actors is illustrated in black lines .it is apparent that in the , all eight actors are equally _ informed _ of the population mode shift , but in the case , only the last three were able to adapt to the change , and the first five actors are surprised by the abrupt change at time .our simulation is discretized .our unit time is , and in figure [ fig : experimentone - sixactorpath ] , each tick in the horizontal axis corresponds to an integral multiple of .the jump term in our update formula is quite sensitive to the number of actors being considered . as such , for updating the jump term , we further discretized into subintervals for numerical stability of our update iterations . for , each unit intervalis associated with sub - iterations , and the total number of the ( main ) iteration is , and we use instead of in each -th subiteration of each main iteration staring at time . to implement our mixture projection algorithm ,we take .the initial position of the actors are sampled from the initial population distribution .we take .the discretized version of is illustrated in figure [ fig : levelplot(at)example ] .for inference during our experiment , we have dropped the second order term and used only the first order term to keep the cost of running our experiment low . 
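the exact and working update formulas above are stated through a projection onto a gaussian-mixture basis; as a generic stand-in (not the paper's projection filter), the sketch below updates a weighted grid of candidate latent distances for a single pair using the standard point-process likelihood: weights decay by exp(-rate dt) over event-free steps and are multiplied by the rate at an observed message. the intensity function is an assumed illustrative choice.

```python
# generic grid filter for a point-process observation (illustration only).
import numpy as np

def filter_weights(samples, rate_fn, events, t_max, dt=1e-2):
    w = np.ones(len(samples)) / len(samples)
    event_bins = {int(t // dt) for t, _, _ in events}
    for step in range(int(t_max / dt)):
        r = rate_fn(samples)
        w *= np.exp(-r * dt)                      # survival factor: no event in [t, t + dt)
        if step in event_bins:
            w *= r                                # jump factor at an observed message
        w /= w.sum()
    return w

samples = np.linspace(0.0, 3.0, 61)               # candidate latent distances for one pair
rate = lambda d: 12.5 * np.exp(-d**2)             # assumed pairwise intensity
events = [(0.2, 0, 1), (0.9, 0, 1), (1.4, 0, 1)]
w = filter_weights(samples, rate, events, t_max=2.0)
print(samples[np.argmax(w)])                      # grid map estimate of the latent distance
```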
on the other hand , for simulating the actors latent positions , we have used both the first and second order term of .the value of gives the first part of the change in .note that in both figure [ fig : levelplot(at)one ] and figure [ fig : levelplot(at)zero ] , the entries that are _ sufficiently far off _ from the diagonals are near zero . for , the time plot of the number of messages produced during interval ] , the subpopulation messaging rate is relatively constant at the rate of messages over each unit interval , and this is expected as all eight actors are tightly situated around . 0.45 for the discretized version of , used in the simulation experiment for two particular cases , where the horizontal axis is associated with the rows of and the vertical axis is associated with the columns of .,title="fig : " ] 0.45 for the discretized version of , used in the simulation experiment for two particular cases , where the horizontal axis is associated with the rows of and the vertical axis is associated with the columns of .,title="fig : " ] case .the number of messages per across the time interval ] , in the bounded confidence model , the _ opportunities _ for ( latent ) position changes that each actor experiences is modeled as a simple poisson process .when there is a change at time , the change is assumed to involve precisely two actors , say , actor and actor , such that their position and differs by at most .this yields an inhomogeneity in the rate at which actors change their locations .then , the exact amount of change is specified by the following formula : where is a fixed constant . roughly speaking , upon interaction , actor keeps percent of its original position , andis allowed to be influenced by percent of the original position of actor , and vice versa .fix constants and . then , define by letting for each and , studied in particularly is the interaction between and where is the empirical distribution of .as shown in , the bounded confidence model has an appealing feature that the parameter space for the underlying parameters and can be partitioned according to the type of consensus that the population eventually reaches , namely , a total consensus and a partial consensus . in a total consensus regime , for sufficiently large , everyone is expected to gather tightly around some fixed common point ] separated by at least , to exactly one of which each actor s position is attracted . in particular , the ( asymptotic ) position of actors yields a partition of the actor set when the exact locations of are known .generally , ) ] in the sense that for some ] . in our adaptation , for analytic tractability , we replace the indicator function with , take to be an exogenous modeling element , and take to be potentially time dependent , yielding the operator the second numerical experiment in section [ sec : numericalexperiments ] focuses on the case where the community starts with no apparent clustering but as time passes , each actor becomes a member of exactly one of clusters , where each cluster is uniquely identified by a closed convex subset of the latent space .in this work , we use a model that that captures the action in up to the second order . to begin , note that where and denote respectively the gradient and the hessian of at , and h.o.t .denotes the higher order terms .suppose that is given .now , we have where and are given by the following : dropping the term associated with h.o.t . 
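the bounded confidence dynamics referenced above admit a very short simulation sketch (deffuant-style pairwise updates); the interaction radius r and the mixing weight mu below are arbitrary illustrative values, not estimates from the paper.

```python
# sketch of one pairwise update in the bounded confidence model: two actors
# within distance r each move a fraction mu toward the other; otherwise nothing.
import numpy as np

def bounded_confidence_step(x, i, j, r=0.5, mu=0.3):
    if np.linalg.norm(x[i] - x[j]) <= r:
        xi, xj = x[i].copy(), x[j].copy()
        x[i] = (1 - mu) * xi + mu * xj
        x[j] = (1 - mu) * xj + mu * xi
    return x

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(8, 1))                # 8 actors on the unit interval
for _ in range(1000):                             # random interaction opportunities
    i, j = rng.choice(8, size=2, replace=False)
    x = bounded_confidence_step(x, i, j)
print(np.round(x.ravel(), 2))                     # final positions after many opportunities
```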
, we obtain the following : each , we see that we first consider the second term of the right side of . now , we have that and that hence , next , for the first term of the right side of , we have in summary , for each , we have and our claim follows from this .this section contains two formulas to be used in the next section .our result and proof in lemma [ lem : product - density - formula ] is stated in the same notation as in lemma [ lem : algebra - formula ] .recall that .[ lem : algebra - formula ] let and .then , where is the gram matrix for and is the matrix whose -entry is .let and for each , let .first , note that now , our claim follows from this .[ lem : product - density - formula ] let be the standard multivariate normal density defined on . also , fix a sequence , and a sequence . using lemma [ lem : algebra - formula ] , we see that here , we assume , as done in theorem [ thm : projfilterzakai ] , that and , where for simplicity , we have written . in this section ,we fix to be the standard multivariate normal density defined on and recall that . also , we fix , and a sequence . [lem : micro - b - a ] fix , and .for each , let where to simplify the expression of , we have used the fact that also , note that using lemma [ lem : product - density - formula ] with , and , we see that then , for our claim in , it is enough to see that next , we show our claim in . hereafter , to ease our notation , we write for .first , for , we have and hence , on the other hand , for , we have and so , we have our claim in follows . for lemma [ lem : b ] and lemma [ lem : a ] , by , we denote the gram matrix for , and define to be as in lemma [ lem : algebra - formula ] for , , .let to simplify our notation , we let define and note also , denote by the multivariate normal density defined on such that its mean vector is and its covariance matrix is .for and , we write and note that in particular , starting from , it is easy to see that and as a matter of definition , we have lemma [ lem : b ] and lemma [ lem : a ] are associated , respectively , with the first and the second terms appearing in the right side of .[ lem : b ] for each and , we have to ease our notation , we first let it follows that we compute instead of directly working with .first , we observe that and that using lemma [ lem : algebra - formula ] on the third equality , we see that continuing with the calculation , putting together , , and , and plugging in the full expression for , we see that our claim follows after summing over and replacing with its full expression .[ lem : a ] for each , and , note that where we first compute the diagonal terms , i.e. , the cases .note that and also that next , we compute the off - diagonal terms , i.e. , the cases . first , using our calculation just above , we see that we note that our claim follows from this after combining them together , and simplifying the combined term into a matrix notation .here , we will take the convention that is organized as a matrix . 
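a quick numeric check of the gaussian product identity underlying lemma [ lem : product - density - formula ], in the univariate case: the product of two normal densities is a normal density in x, up to a constant factor that is itself a normal density evaluated at the difference of the means.

```python
# product of two univariate gaussian densities: precisions add.
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

m1, v1 = 0.0, 1.0
m2, v2 = 2.0, 0.5
v_star = 1.0 / (1.0 / v1 + 1.0 / v2)              # combined precision
m_star = v_star * (m1 / v1 + m2 / v2)

x = np.linspace(-3.0, 5.0, 9)
lhs = normal_pdf(x, m1, v1) * normal_pdf(x, m2, v2)
rhs = normal_pdf(x, m_star, v_star) * normal_pdf(m1, m2, v1 + v2)
print(np.allclose(lhs, rhs))                      # True
```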
by the -th row of , we mean .let = \int { \rho}_t(d\bm x ) e^{\imath \langle \bm v , \bm x \rangle},\end{aligned}\ ] ] where for each and , in other words , is the ( random ) conditional characteristic function of .note also , let , for each and , .\end{aligned}\ ] ] for each , denotes the function obtained by fixing all other indices different from the -th actor indices but letting the -th actor indices to be free , and if is in the domain of the operator , with some abuse of notation , we write : similarly , for each , let .\end{aligned}\ ] ] in other words , denotes the conditional characteristic function of the -th row of , and also , let , for , and , .\end{aligned}\ ] ] note that the definition of is actually independent of a particular choice of vertex as they are all identically distributed .one can prove the next result by directly following , but one needs to adapt to the fact that the underlying process can now be a time - inhomogeneous non - linear markov process .the proof details are left to the reader . for a survey of similar techniques , see also and .[ prop : donaldsnyder ] for each and , our proof of theorem [ thm : exactposterior ] is by brute force calculation , starting from proposition [ prop : donaldsnyder ] .in particular , our claim in theorem [ thm : exactposterior ] follows from proposition [ prop : donaldsnyder ] by directly applying lemma [ lemmaaa ] , lemma [ lemmaab ] and lemma [ lemmaac ] which we list and prove now .[ lemmaaa ] for each , fix , , and .then , for each , we have : = 1 + \int_{0}^{\varepsilon}\mathbb{e}\left[\mathcal{a}\left(\mu_{t+s}\right)e^{\imath \langle v , \cdot - x\rangle}\left(x_{t+s , i}\right)\left|x_{t , i}=x\right.\right]ds.\end{aligned}\ ] ] we have and hence , \right| \le 1.\end{aligned}\ ] ] it follows that ds \\ = & \\mathbb{e}\left[\mathcal{a}\left(\mu_{t}\right ) e^{\imath \langle v , \cdot - x\rangle}(x_{t } ) \left|x_{t}=x\right . \right]\\ = & \\mathcal{a}\left(\mu_{t}\right ) e^{\imath \langle v , \cdot - x\rangle}(x).\end{aligned}\ ] ] [ lemmaab ] for each , we have : fix and note : and that d\bm v\\ & = \frac{1}{2\pi } \int \lim_{\varepsilon\rightarrow0 } \frac{1}{\varepsilon } \mathbb{e}\left[e^{\imath \langle \bm v , \bm x_{t+\varepsilon } - \bm y\rangle}- e^{\imath \langle \bm v , \bm z -\bm y \rangle } \left|\bm x_t = \bm z\right .\right ] d\bm v.\end{aligned}\ ] ] treating as a generalized function ( i.e. 
a tempered distribution ) , we have : d\bm v \right ) d\bm y \right)\\ & = \int \rho_t(d\bm z ) \lim_{\varepsilon\rightarrow0 } \frac{1}{\varepsilon } \left ( \mathbb{e}\left [ \int f(\bm y ) \left ( \frac{1}{2\pi } \int e^{\imath \langle \bm v , \bm x_{t+\varepsilon } - \bm y\rangle } d\bm v \right ) d\bm y \left|\bm x_t = \bm z\right .\right ] - \int f(\bm y ) \left ( \frac{1}{2\pi } \int e^{\imath \langle \bm v , \bm z -\bm y \rangle } d\bm v \right ) d\bm y \right)\\ & = \int \rho_t(d\bm z ) \lim_{\varepsilon\rightarrow0 } \frac{1}{\varepsilon } \left ( \mathbb{e}\left [ \int f(\bm y ) \delta_0(\bm x_{t+\varepsilon } - \bm y ) d\bm y \left|\bm x_t = \bm z\right .\right ] - \left ( \int f(\bm y ) \delta_0(\bm z -\bm y ) d\bm y \right ) \right)\\ & = \int \rho_t(d\bm z ) \lim_{\varepsilon\rightarrow0 } \frac{1}{\varepsilon } \left ( \mathbb{e}\left [ f(\bm x_{t+\varepsilon } ) \left|\bm x_t = \bm z\right .\right ] - f(\bm z ) \right)\\ & = \int \rho_t(d\bm z ) ( \mathcal{a}\left(\mu_t\right)f)(\bm z).\end{aligned}\ ] ] [ lemmaac ] for each , we have : note +recall that for each , denotes the function obtained by fixing all other indices different from the -th actor indices but letting the -th actor indices to be free .fix .let be such that for all .for each , then , the claimed formula follows from our assumption in .suppose that as and that for each , satisfies the rank condition , i.e. , is of rank at least .note that each is a non - empty compact subset of since for any real orthogonal matrix .in particular , for sufficiently small , we may assume that } \| \xi_d^*(m_\varepsilon ) \|_f^2< \infty$ ] .it is enough to show that for each arbitrary convergent subsequence of , consider an arbitrary convergent subsequence of .we begin by observing some linear algebraic facts .first , any sequence of real orthogonal matrices has a convergent subsequence whose limit is also real orthogonal .next , since both and are of rank , there exists a unique real orthogonal matrix such that and in fact , where is a singular value decomposition of , and is the corresponding _ unique right factor _ in the polar decomposition of .note that this implies the well - definition part of our claim on .also , since , we have that for relevant linear algebra computation details for these facts , see ( * ? ? ?69 , pg . 370 , pg .412 , and pg .now , by taking a subsequence if necessary , we also have that for some matrix such that , .then , next , note that if has distinct diagonal elements , then we also have so that . on the other hand ,more generally , i.e. , even when there are some repeated diagonal elements , we can find a matrix such that . to see this , note that the -th column of is also an eigenvector of for the eigenvalue , and , andhence it follows that for some real orthogonal matrix , we have .moreover , exploiting the block structure of owing to algebraic multiplicity of eigenvalues , we can in fact choose so that . then , now , we have where is a real orthogonal matrix and and implicitly the limit was taken along a further subsequence when necessary . moreover , in summary , we have : by definition of , along with the facts that ( i ) all of the convergent subsequences share the common limit , ( ii ) each subsequence has a convergent subsequence , and ( iii ) and have of full column rank , we have .
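the alignment argument in the proof above is the orthogonal procrustes problem; a small numeric sketch of the svd-based solution (equivalently, the right polar factor of the cross-product matrix) is given below with synthetic configurations.

```python
# orthogonal procrustes: the orthogonal matrix best mapping one configuration
# onto another, obtained from an svd of the cross-product matrix.
import numpy as np

def procrustes_rotation(A, B):
    """return orthogonal W minimizing ||A - B W||_F (A, B are n x d)."""
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt

rng = np.random.default_rng(1)
B = rng.normal(size=(10, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # a random orthogonal matrix
A = B @ Q
W = procrustes_rotation(A, B)
print(np.allclose(B @ W, A))                      # recovers the rotation: True
```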
we model messaging activities as a hierarchical doubly stochastic point process with three main levels, and develop an iterative algorithm for inferring the actors' relative latent positions from a stream of messaging activity data. each message-exchanging actor is modeled as a process in a latent space. the actors' latent positions are assumed to be influenced by the distribution of a much larger population over the latent space. each actor's movement in the latent space is modeled as being governed by two parameters that we call confidence and visibility, in addition to its dependence on the population distribution. the messaging frequency between a pair of actors is assumed to be inversely proportional to the distance between their latent positions. our inference algorithm is based on a projection approach to an online filtering problem. the algorithm associates each actor with a probability density-valued process, and each probability density is assumed to be a mixture of basis functions. for efficient numerical experiments, we further develop our algorithm for the case where the basis functions are obtained by translating and scaling a standard gaussian density. social network; multiple doubly stochastic processes; classification; clustering. 62m0, 60g35, 60g55
in , j .- b .lasserre described a hierarchy of convex semidefinite programming ( sdp ) problems allowing to compute bounds and find global solutions for finite - dimensional nonconvex polynomial optimization problems .each step in the hierarchy consists of solving a primal moment sdp problem and a dual polynomial sum - of - squares ( sos ) sdp problem corresponding to discretizations of infinite - dimensional linear conic problems , namely a primal linear programming ( lp ) problem on the cone of nonnegative measures , and a dual lp problem on the cone of nonnegative continuous functions .the number of variables ( number of moments in the primal sdp , degree of the sos certificates in the dual sdp ) increases when progressing in the hierarchy , global optimality can be ensured by checking rank conditions on the moment matrices , and global optimizers can be extracted by numerical linear algebra . for more information on the moment - sos hierarchy and its applications , see .this approach was then extended to polynomial optimal control in . whereas the key idea in was to reformulate a ( finite - dimensional ) nonconvex polynomial optimization on a compact semi - algebraic set into an lp in the ( infinite - dimensional ) space of probability measures supported on this set , the key idea in , also developed in , was to reformulate an ( infinite - dimensional ) nonconvex polynomial optimal control problem with compact constraint set into an lp in the ( infinite - dimensional ) space of occupation measures supported on this set .note that lp formulations of optimal control problems ( on ordinary differential equations and partial differential equations ) are classical , and can be traced back to the work by l. c. young , filippov , as well as warga and gamkrelidze , amongst many others . for more details and a historical survey , see e.g. ( * ? ? ?* part iii ) .we believe that what is innovative in is the observation that the infinite - dimensional linear formulations for optimal control problems can be solved numerically with a moment - sos hierarchy of the same kind as those used in polynomial optimization . the objective of this contribution is to revisit the approach of and to survey the use of occupation measures to linearize polynomial optimal control problems .this is an opportunity to describe duality in infinite - dimensional conic problems , as well as various approximation results on the value function of the optimal control problem .the primal lp consists of finding occupation measures supported on optimal relaxed controlled trajectories , whereas the dual lp consists of finding the largest lower bound on the value function of the optimal control problem .the value function is the solution ( in a suitably defined weak sense ) of a nonlinear partial differential equation called the hamilton - jacobi - bellman equation , see e.g. ( * ? ? ?* chapters 8 and 9 ) and ( * ? ? ?* chapters 19 and 24 ) .it is traditionally used for verification of optimality , and for explicit computation of optimal control laws , but we do not describe these applications here .we consider polynomial optimal control problems ( pocps ) of the form \\ & & & u(t ) \in u , \ : t \in [ t_0,t ] \\ & & & x(t ) \in x_t \end{array}\ ] ] where the dot denotes time derivative , ] is a given terminal cost , ^n ] and the initial condition .in pocp ( [ pocp ] ) , the minimum is with respect to all control laws ; u) ] which are bounded functions of time with values in . 
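the reformulation of a nonconvex polynomial minimization as a linear program over probability measures, mentioned above, can be illustrated without the moment-sos machinery by discretizing the measure on a grid: the lp then places all of its mass at a grid minimizer. this toy sketch (using scipy's linprog) is only meant to convey the lp-over-measures viewpoint, not the sdp hierarchy itself.

```python
# minimizing a polynomial equals minimizing its integral against probability
# measures; on a grid the optimal measure is a dirac at a grid minimizer.
import numpy as np
from scipy.optimize import linprog

p = lambda x: x**4 - 3 * x**2 + x                 # nonconvex test polynomial on [-2, 2]
grid = np.linspace(-2, 2, 401)

# variables: masses mu_k >= 0; minimize sum_k p(x_k) mu_k subject to sum_k mu_k = 1
res = linprog(c=p(grid), A_eq=np.ones((1, grid.size)), b_eq=[1.0],
              bounds=[(0, None)] * grid.size, method="highs")
print(res.fun, grid[np.argmax(res.x)])            # lp value = grid minimum, mass at the argmin
```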
let \times x ] starting at and admissible for pocp ( [ pocp ] ) .the function defined in ( [ pocp ] ) is called the value function , and its domain is .as explained in the introduction , to derive an lp formulation of pocp ( [ pocp ] ) we have to introduce measures on trajectories , the so - called occupation measures . the first step is to replace classical controls with probability measures , and for this we have to define additional notations . given a compact set , let denote the space of continuous functions supported on , and let denote its nonnegative elements , the cone of nonnegative continuous functions on .let denote its topological dual , the set of all nonnegative continuous linear functional on . by a riesz representation theorem ,these are nonnegative borel - regular measures , or borel measures , supported on .the topology in is the strong topology of uniform convergence , whereas the topology in is the weak - star topology .the duality bracket denotes the integration of a function against a measure . for background on weak - star topology see e.g. ( * ? ? ?* section 5.10 ) or ( * ? ? ? * chapter iv ) .finally , let us denote by the set of probability measures supported on , consisting of borel measures such that .in pocp ( [ pocp ] ) , given , let ; x\times u) ] , the control is not a vector , but a time - dependent probability measure which rules the distribution of the control in .we use the notation to emphasize the dependence on time .this is called a relaxed control , or stochastic control , or young measure in the functional analysis literature. pocp ( [ pocp ] ) is then relaxed to \\ & & & \omega_t \in { \mathscr p}(u ) , \ : t \in [ t_0,t ] \\ & & & x(t ) \in x_t \end{array}\ ] ] where the minimization is w.r.t . a relaxed control .note that we replaced the infimum in pocp ( [ pocp ] ) with a minimum in relaxed pocp ( [ relaxedpocp ] ) .indeed , it can be proved that this minimum is always attained using ( weak - star ) compactness of the space of probability measures with compact support . since classical controls ; u) ] , the minimum in relaxed pocp ( [ relaxedpocp ] ) is smaller than the infimum in classical pocp ( [ pocp ] ) , i.e. contrived optimal control problems ( e.g. with overly stringent state constraints ) can be cooked up such that the inequality is strict , i.e. , see e.g. the examples in ( * ? ? ?* appendix c ) .we do not consider that these examples are practically relevant , and hence the following assumption will be made .[ norelaxationgap ] for any relaxed controlled trajectory admissible for relaxed pocp ( [ relaxedpocp ] ) , there is a sequence of controlled trajectories admissible for pocp ( [ pocp ] ) such that for every function .then it holds for every .note that this assumption is satisfied under the classical controllability and/or convexity conditions used in the filippov - wa theorem with state constraints , see and the discussions around assumption i in and assumption 2 in . however , let us point out that assumption [ norelaxationgap ] does not imply that the infimum is attained in pocp ( [ pocp ] ) .conversely , if the infimum is attained , the values of pocp ( [ pocp ] ) and relaxed pocp ( [ relaxedpocp ] ) coincide , and assumption [ norelaxationgap ] is satisfied. 
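assumption [ norelaxationgap ] says that relaxed trajectories can be approximated by classical ones; the classical chattering example below illustrates this for dx/dt = u with u taking values in {-1, +1}: the relaxed control putting mass 1/2 on each value yields the constant trajectory, and fast switching reproduces it up to an error of the order of the switching period. the dynamics and numbers are illustrative only.

```python
# chattering controls approximating a relaxed control.
import numpy as np

def trajectory(switch_period, t_max=1.0, dt=1e-4, x0=1.0):
    x, xs = x0, []
    for k in range(int(t_max / dt)):
        t = k * dt
        u = 1.0 if (t // switch_period) % 2 == 0 else -1.0   # fast-switching control
        x += u * dt
        xs.append(x)
    return np.array(xs)

for period in [0.5, 0.05, 0.005]:
    print(period, np.max(np.abs(trajectory(period) - 1.0)))  # deviation from relaxed path x(t) = 1
```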
given initial data , and given a relaxed control , the unique solution of the ode in relaxed pocp ( [ relaxedpocp ] ) is given by for every ] .let us define the linear operator \times x ) \to { \mathscr c}([t_0,t]\times x\times u) ] , notice that & = & \int_{t_0}^t { \mathcall } v(t , x(t ) ) dt & = & \langle { \mathcal l } v , \mu\rangle \end{array}\ ] ] which can be written more concisely as upon defining respectively the initial and terminal occupation measures let us define the adjoint linear operator \times x ) ' \to { \mathscr c}^1([t_0,t]\times x\times u)' ] and \times x) ] , we obtain a linear partial differential equation ( pde ) on measures that we write this linear transport equation is classical in fluid mechanics , statistical physics and analysis of pdes .it is called the equation of conservation of mass , or the continuity equation , or the advection equation , or liouville s equation . under the assumption that the initial data and the control law are given , the following result can be found e.g. in ( * ? ? ?* theorem 5.34 ) or .there exists a unique solution to the liouville pde ( [ liouville ] ) which is concentrated on the solution of the cauchy ode ( [ rode ] ) , i.e. such that ( [ occupation ] ) and ( [ initialterminal ] ) hold . in our context of conic optimization, the relevance of the liouville pde ( [ liouville ] ) is its linearity in the occupation measures , and , whereas the cauchy ode ( [ rode ] ) is nonlinear in the state trajectory .the cost in relaxed pocp ( [ relaxedpocp ] ) can therefore be written and we can now define a relaxed optimal control problem as an lp in the cone of non - negative measures : \times x\times u)\\ & & & \mu_t \in { \mathscr m}_+(\{t\}\times x_t ) \end{array}\ ] ] where the minimization is w.r.t . the occupation measure ( which includes the relaxed control , see ( [ occupation ] ) ) and the terminal measure , for a given initial measure which is the right - hand side in the liouville equation constraint . note that in lp ( [ measpocp ] ) the infimum is always attained since the admissible set is ( weak - star ) compact and the functional is linear .however , since classical trajectories are a particular case of relaxed trajectories corresponding to the choice ( [ occupation ] ) , the minimum in lp ( [ measpocp ] ) is smaller than the minimum in relaxed pocp ( [ relaxedpocp ] ) ( this latter one being equal to the infimum in pocp ( [ pocp ] ) , recall assumption [ norelaxationgap ] ) , i.e. the following result , due to , essentially based on convex duality , shows that there is no gap occuring when considering more general occupation measures than those concentrated on solutions of the ode . [ lemmavinter ] it holds for all .primal measure lp ( [ measpocp ] ) has a dual lp in the cone of nonnegative continuous functions : \times x\times u ) \\ & & & l_t - v(t , . )\in { \mathscr c}_+(x_t ) \end{array}\ ] ] where maximization is with respect to a continuously differentiable function \times x) ] is admissible for dual lp ( [ contpocp ] ) , then on \times x ] , starting at , it holds since on \times x\times u ] for lp ( [ contpocp ] ) . 
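the defining identity of the occupation measure behind the liouville equation, namely that the integral of lv against the occupation measure equals v at the terminal state minus v at the initial state, can be checked numerically for a single controlled trajectory; the dynamics, control and test function below are arbitrary illustrative choices.

```python
# numeric check of <L v, mu> = v(T, x(T)) - v(t0, x0) along one trajectory.
import numpy as np

t0, T, x0 = 0.0, 1.0, 0.5
ts = np.linspace(t0, T, 20001)
u = np.cos(3.0 * ts)                              # an admissible control path
f = lambda x, u: u - 0.5 * x                      # dynamics dx/dt = f(x, u)

x = np.empty_like(ts)
x[0] = x0
for k in range(len(ts) - 1):                      # explicit euler integration
    x[k + 1] = x[k] + (ts[k + 1] - ts[k]) * f(x[k], u[k])

v = lambda t, x: t * x**2                         # smooth test function
Lv = x**2 + 2.0 * ts * x * f(x, u)                # L v = dv/dt + (dv/dx) f

lhs = np.sum(0.5 * (Lv[1:] + Lv[:-1]) * np.diff(ts))   # <L v, mu> by the trapezoid rule
rhs = v(T, x[-1]) - v(t0, x0)
print(lhs, rhs)                                   # agree up to discretization error
```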
in this section, we investigate the properties of maximizing sequences given by lemma [ pointwiseconvergence ] , and in particular their convergence to the value function of pocp ( [ pocp ] ) .we first demonstrate the lower semicontinuity of the value of lp ( [ measpocp ] ) .this leads to the lower semicontinuity of the value of pocp ( [ pocp ] ) , by considering assumption [ norelaxationgap ] and lemma [ lemmavinter ] .note that lower semicontinuity is readily ensured when the set is convex in for all , with compact , see e.g. ( * ? ? ?* section 6.2 ) .indeed , in this case , the infimum is attained in pocp ( [ pocp ] ) , and assumption [ norelaxationgap ] is readily satisfied .the function is lower semicontinuous .we need to show that given a sequence such that , it holds that .suppose that is such that measure lp ( [ measpocp ] ) is feasible. if the left - hand side is not finite , the result holds .if the left - hand side is finite , we can consider , up to taking a subsequence , that .since the infimum is attained in measure lp ( [ measpocp ] ) , we have a sequence of measures such that and .convergence of to implies weak - star convergence of to . using the same closedness argument as in the proof of lemma [ nodualitygap ], we can consider that , up to a subsequencce , and converge to some measures and in the weak - star topology and that .hence , we have and the pair is feasible for problem .therefore which proves the result when lp ( [ measpocp ] ) is feasible for . using similar arguments, one can show that if is such that lp ( [ measpocp ] ) is not feasible , there can not be infinitely many such that lp ( [ measpocp ] ) is feasible for .the following result extends the convergence properties of the maximizing sequence .[ convergencerelaxed ] for any sequence admissible for the dual lp ( [ contpocp ] ) , for any solution of relaxed pocp ( [ relaxedpocp ] ) , and for any ] , we have .both the first term and the integrand are positive in the left - hand side .therefore , the right - hand side is a decreasing function of .moreover , the trajectory is suboptimal , and is a lower bound on the value function .it holds that . letting tend to infinity , using the lower semicontinuity of , we conclude that .it is important to notice that theorem [ convergencerelaxed ] holds for any trajectory realizing the minimum of pocp ( [ relaxedpocp ] ) and therefore , for all of them simultaneously .in addition , these trajectories are identified with limiting trajectories of pocp ( [ pocp ] ) by assumption [ norelaxationgap ] .liouville equation ( [ liouville ] ) is used as a linear equality constraint in pocp ( [ measpocp ] ) with a dirac right - hand side as an initial condition . however, this right - hand side can be replaced by more general probability measures .the linearity of the constraint allows to extend most of the results of the previous section to this setting .it leads to similar convergence guarantees regarding a ( possibly uncountable ) set of optimal control problems .these guarantees hold for solutions of a single infinite - dimensional lp .suppose that we are given a set of initial conditions , such that for every .given a probability measure , let and consider the following average value where is the value of pocp ( [ pocp ] ) . 
under assumption [ norelaxationgap ] , by linearitythis value is equal to the value of pocp ( [ measpocp ] ) with as the right - hand side of the equality constraint , namely the primal averaged lp \times x\times u)\\ & & & \mu_t \in { \mathscr m}_+(\{t\}\times x_t ) \end{array}\ ] ] with dual averaged lp \times x\times u ) \\ & & & l_t - v(t , . ) \in { \mathscr c}_+(x_t ) .\end{array}\ ] ] the absence of duality gap is justified in the same way as in lemma [ nodualitygap ] .moreover , lemma [ lowerbound ] also holds , and , as in lemma [ pointwiseconvergence ] , we have the existence of maximizing lower bounds such that intuitively , primal lp ( [ avmeaspocp ] ) models a superposition of optimal control problems .the lp formulation allows to express it as a single program over measures satisfying a transport equation .a relevant question here is the relation between solutions of averaged measure lp ( [ avmeaspocp ] ) and optimal trajectories of the original problem pocp ( [ pocp ] ) .the intuition is that measure solutions of lp ( [ avmeaspocp ] ) represent a superposition of optimal trajectories of the relaxed pocp ( [ relaxedpocp ] ) .these trajectories are themselves limiting trajectories of the original pocp ( [ pocp ] ) .the superposition principle of ( * ? ? ?* theorem 3.2 ) allows to formalize this intuition and to extend the result of theorem [ convergencerelaxed ] to this setting .[ convergencesupport ] for any solution of primal averaged lp ( [ avmeaspocp ] ) , there are parametrized measures ( for the state ) and ( for the control ) such that , and .in addition , if is a maximizing sequence for dual averaged lp ( [ avcontpocp ] ) , for any ] supported on trajectories admissible for relaxed pocp ( [ relaxedpocp ] ) and such that for any measurable function , it holds , x ) } w(x(t ) ) \sigma(dx(.)) ] .a remarkable practical implication of this result is that maximizing sequences of averaged dual lp ( [ avcontpocp ] ) provide an approximation to the value function of pocp ( [ pocp ] ) that is uniform in time and almost uniform in space along limits of optimal trajectories starting from .in sections [ sec : occupation ] and [ sec : superposition ] we reformulated nonlinear optimal control problems as abstract linear conic optimization problems that involve manipulations of measures and continuous functions in their full generality .the results presented in section [ sec : approx ] are related to properties of minimizing or maximizing elements , or sequences of elements for these problems . from a practical point of view , it is possible to construct these sequences using the same numerical tools as in static polynomial optimization . on the primal side , this allows to approximate the minimizing elements of measure lp problems with a converging hierarchy of moment sdp problems . on the dual side , we can construct numerically maximizing sequences of polynomial sos certificates for the continuous function lp problems .the convergence properties that we investigated hold in particular for these solutions of the moment - sos hierarchy .this section illustrates convergence properties of the sequence of approximations of value functions computed using moment - sos hierarchies .we consider simple , but largely spread , optimal control problems for which the value function ( or optimal trajectories ) are known . 
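for the lqr example considered in this section, the value function is known in closed form, v(t, x) = p(t) x^2 with p solving a backward riccati ode, and it is this exact value function that the polynomial approximations from the dual lp are compared against. the sketch below integrates the scalar riccati equation with placeholder problem data, not the paper's exact example.

```python
# scalar lqr value function via backward integration of the riccati ode.
import numpy as np

a, b, q, r, qT, T = 0.0, 1.0, 1.0, 1.0, 0.0, 1.0   # dx/dt = a x + b u, running cost q x^2 + r u^2

def riccati_p(n_steps=10000):
    """solve -dp/dt = 2 a p - (b**2 / r) p**2 + q, p(T) = qT, by explicit euler backward in time."""
    dt = T / n_steps
    p = qT
    ps = [p]
    for _ in range(n_steps):
        p += dt * (2.0 * a * p - (b**2 / r) * p**2 + q)
        ps.append(p)
    return np.array(ps[::-1])                      # ps[k] is now p(t = k * dt)

p = riccati_p()
print(p[0])   # v(0, x) = p(0) x^2; for this data p(0) = tanh(T), about 0.7616
```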
between the actual value function and its polynomial approximations of increasing degrees along the optimal trajectory starting at for the turnpike pocp ( [ turnpike ] ) .we observe uniform convergence along this trajectory , as well as time decrease of the difference , as predicted by the theory . ] for this problem , the infimum is attained at a unique optimal control which is piecewise constant .the optimal trajectory starting at is presented in figure [ fig : trajturnpike ] . the uniform convergence of approximate value functions to the true value function along this optimal trajectory , stated by theorem [ convergenceaccumulation ] ,is illustrated in figure [ fig : valueturnpike ] .moreover , the difference is a decreasing function of time as we observed in the proof of theorem [ convergenceaccumulation ] . ) of the decimal logarithm of the difference between the actual value function and its polynomial approximation of degree 6 for lqr pocp ( [ lqr ] ) .the dark area represents the set of optimal trajectories starting from ] .we approximate this value with primal and dual solutions of lps ( [ avmeaspocp ] ) and ( [ avcontpocp ] ) .the countour lines ( in decimal logarithmic scale ) of the difference between the true value function and a polynomial approximation of degree 6 is represented in figure [ fig : lqr ] .we also show the support of optimal trajectories starting from .this illustrates the fact that the approximation of the value function is correct in this region , as stated by theorem [ convergencesupport ] .it is noticeable that this is computed by a single linear program and provides approximation guarantees uniformly over an uncountable set of optimal control problems .l. ambrosio .transport equation and cauchy problem for non - smooth vector fields . in l. ambrosio( eds . ) , calculus of variations and nonlinear partial differential equations .lecture notes in mathematics , vol .1927 , springer - verlag , berlin , 2008 .
Infinite-dimensional linear conic formulations are described for nonlinear optimal control problems. The primal linear problem consists of finding occupation measures supported on optimal relaxed controlled trajectories, whereas the dual linear problem consists of finding the largest lower bound on the value function of the optimal control problem. Various approximation results relating the original optimal control problem and its linear conic formulations are developed. As illustrated by a couple of simple examples, these results are relevant in the context of finite-dimensional semidefinite programming relaxations used to approximate numerically the solutions of the infinite-dimensional linear conic problems.
ever since the seminal work of merton ( see and ) , the problem of dynamic optimal investment and consumption occupied a central role in mathematical finance and financial economics .merton himself , together with many of the researchers that followed him , made the simplifying assumption of _ no market frictions _: there are no transaction costs , borrowing and lending occur at the same interest rate , the assets can be bought and sold immediately in any quantity and at the same price ( perfect liquidity ) , etc . among those , transaction costs are ( arguably ) among the most important and ( demonstrably ) the most studied .the problem of optimal investment where transactions cost are present has received ( and continues to receive ) considerable attention . following the early work of constantinides and magill , davis and norman considered a risky asset driven by the geometric brownian motion for which proportional transactions costs are levied on each transaction .these authors formulated the optimal investment / consumption problem as a singular stochastic control problem , and approached it using the method of dynamic programming . very early in the gameit has been intuited , and later proved to varying degrees of rigor , that the optimal strategy has the following general form : 1 .the investor should not trade at all as long as his / her holdings stay within the so - called `` no - trade region '' - a wedge around the merton - proportion line. 2 . outside the no - trade region, the investor should trade so as to reach the no - trading region as soon as possible , and , then , adjust the portfolio in a minimal way in order not to leave it ever after. such a strategy first appeared in and was later made more precise in .the analysis of was subsequently complemented by that of shreve and soner who removed various technical conditions , and clarified the key arguments using the technique of viscosity solutions .still , even in , technical conditions needed to be imposed .most notably , the analysis there assumes that _ the problem is well posed _ , i.e. , that the value function is finite ; no necessary and sufficient condition for this assumption , in terms of the parameters of the model , is given in .in fact , to the best of our knowledge , the present paper provides the first such characterization .more recently , kallsen and muhle karbe approached the problem using the concept of a _ shadow price _ , first introduced by and .roughly speaking , the shadow - price approach amounts to comparing the problem _ with _ transaction costs to a family of similar problems , but _ without _ transaction costs , whose risky - asset prices lie between the bid and ask prices of the original model .the most unfavorable of these prices is expected to yield the same utility as the original problem where transaction costs are paid . as shown in ,this approach works quite well for the case of the logarithmic utility , which admits an explicit solution of the problem without transaction costs in a very general class of not - necessarily markovian models .the fact that the logarithmic utility is the only member of the crra ( power ) family of utility functions with that property makes a direct extension of their techniques seem difficult to implement . 
very recently , and in parallel with our work , partial results in this direction have been obtained by herczegh and prokaj whose approach ( and the intuition behind it ) is based on the second - order nonlinear free - boundary hjb equation of , and applies only to a rather restrictive range of parameters .our results apply to the model introduced or , and deal with _ general power - utility functions _ and _ general values of the parameters_. it is based on the shadow - price approach , but quite different in philosophy and execution from that of either or .our contributions can be divided into two groups : _ novel treatment and proofs of , as well as insights into the existing results ._ we provide a new and complete path to the solution to the optimal investment / consumption problem with transaction costs and power - type utilities .our approach , based on the notion of the shadow price , is fully self - contained , does not rely on the dynamic programming principle and expresses all the features of the solution in terms of a solution to a single , constrained free - boundary problem for a one - dimensional first - order ode .this way , it is able to distinguish between various parameter regimes which remained hidden under the more abstract approach of and .interestingly , most of those regimes turn out to be `` singular '' , in the sense that our first - order ode develops a singularity in the right - hand side .while we are able to treat them fully , those cases require a much more delicate and insightful analysis .the results of both and apply only to the parameter regimes where no singularity is present ._ new results ._ one of the advantages of our approach is that it allows us to give an explicit characterization of the set of model parameters for which the optimal investment and consumption problem with transaction costs is well posed .as already mentioned above , to the best of our knowledge , such a characterization is new , and not present in the literature . not only as another application , but also as an integral part of our proof , we furthermore prove that a shadow price exists whenever the problem is well - posed .finally , our techniques can be used to provide precise regularity information about all of the analytic ingredients , the value function being one of them .somewhat surprisingly , we observe that in the singular case these are not always real - analytic , even when considered away from the free boundary .the set - up and the main results are presented in section 2 . in section 3we describe the intuition and some technical considerations leading to our non - standard free - boundary problem . in section 4 ,we prove a verification - type result , i.e. , show how to solve the singular control problem , assuming that a smooth - enough solution for the free - boundary equation is available .the proof of existence of such a smooth solution is the most involved part of the paper . 
in order to make our presentation easier to parse, we split this proof into two parts .section 5 presents the main ideas of the proof , accompanied by graphical illustrations .the rigorous proofs follow in section 6 .we consider a model of a financial market in which the price process of the risky asset ( form simplicity called the `` stock '' ) is given by here , is a standard brownian motion , and and are constants - parameters of the model .the information structure is given by the natural saturated filtration generated by .an economic agent starts with shares of the stock and units of an interestless bond and invests in the two securities dynamically .transaction costs are not assumed away , and we model them as proportional to the size of the transaction .more precisely , they are determined by two constants and : one gets only for one share of the stock , but pays for it .we assume that the agent s initial position is * strictly solvent * , which means that it can be liquidated to a positive cash position .more precisely , we assume that where the agent s * ( consumption / trading ) strategy * is described by a triple of * optional * processes such that and are right - continuous and of finite variation and is nonnegative and locally integrable , a.s . the processes and have the meaning of the amount of cash held in the money market and the number of shares in the risky asset , respectively , while is the consumption rate . in order to incorporate the potential initial jump we distinguish between the initial values and the values ( after which the processes are right - continuous ) .this is quite typical for optimal investment / consumption strategies , both in frictional and frictionless markets , when the agent initially holds stocks , in addition to bonds . in this spirit, we always assume that a strategy is said to be * self - financing * if where is the pathwise minimal ( hahn - jordan ) decomposition of into a difference of two non - decreasing adapted , right - continuous processes , _ with possible jumps at time zero _ , as we assume that the integrals used in ( [ equ : self - fin ] ) above , with respect to the ( pathwise stieltjes ) measures and characterized by )= { { \varphi}^{\uparrow}}(b)-{{\varphi}^{\uparrow}}(a) ] , for together with , and .a self - financing strategy is called * admissible * if its position is always * solvent * , i.e. , if the set of all admissible strategies with and is denoted by , and the set of all such that for some and - the so - called * financeable consumption processes * - is denoted by . for , we consider the * utility function * of the power ( crra ) type .it is defined for by our task is to analyze the optimal investment and consumption problem , with the value } , \end{split}\ ] ] and stands for the ( constant ) * impatience rate*. as part of the definition of , we posit that unless } < \infty ] , 2 . and , and 3 . , for all , a.s , . and the set of processes that appear as the third component of an element of will be denoted by , i.e. , the elements of can be interpreted as the consumption processes financeable from the initial holding in the frictionless market modeled by .the intuition that the presence of transaction costs can only reduce the collection of financeable consumption processes can be formalized as in the following easy proposition .[ pro : c - in - cs ] , for each . for ,let be such that . 
by the self - financing condition ( [ equ : self - fin ] ) , the fact that and integration by parts ( simplified by the fact that is continuous ) , we have therefore , by the admissibility criterion ( [ equ : liq ] ) , we have it remains to set and , and observe that ( [ equ : pos ] ) directly implies ( [ equ : v ] ) .thus , . it will be important in the sequel to be able to check whether an element of belongs to .it happens , essentially , when a strategy that finances it `` buys '' only when and `` sells '' only when .a precise statement is given in the following proposition .[ pro : equally - important ] given , let be such that there exist processes and such that 1 . [ ite:1-equally - important ] , 2 . [ ite:2-equally - important] is a right - continuous process of finite variation , and 3 .[ ite:3-equally - important]the stieltjes measure on induced by is carried by and that induced by by . then , .let the triplet satisfy the conditions of the proposition . in particular , we have thanks to condition , the integration - by - parts formula and the self - financing property ( [ equ : self - fin ] ) , it follows that hence , . for each consistent price process , we define an auxiliary optimal - consumption problem - called the * -problem * , with the value , by where is defined as in , and the inequality on the right is implied by proposition [ pro : c - in - cs ] . in words , each consistent price affords at least as good an investment opportunity as the original frictional market .it is in the heart of our approach to show that the duality gap , in fact , closes , i.e. , that the inequality in becomes an equality ; the worst - case shadow problem performs no better than frictional one .a consistent price is called a * shadow price * if .the central idea of the present paper is to look for a shadow price as the minimizer of the right - hand side of ( [ equ : weak - duality ] ) viewed as a stochastic control problem .more precisely , we turn our attention to a search for an optimizer in the * shadow problem * : we start by tackling the shadow problem in a formal manner and deriving an analytic object ( a free - boundary problem ) related to its solution .next , we show that this free - boundary problem indeed admits a solution and use it to construct the candidate shadow price . finally , instead of showing that our candidate is indeed an optimizer for and that , we use the following direct consequence of proposition [ pro : equally - important ] . [ pro : when - shadow ] suppose that for there exists a triplet such that 1 . satisfies conditions , and of proposition [ pro : equally - important ] , and 2 . , for all , then , is a shadow price .the route we take towards the existence of a shadow price may appear to be somewhat circuitous .it is chosen so as to maximize the intuitive appeal of the method and minimize ( already formidable ) technical difficulties .while the remainder of the paper is devoted to the implementation of the above idea , we anticipate its final results here , for the convenience of the reader .an important by - product of our analysis is the _ explicit characterization _ of those parameter values which result in a well - posed problem ( the value function is finite ) . 
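to fix intuition for the bid - ask mechanics and the solvency requirement described in the model setup above , here is a minimal sketch of the liquidation value of a position ( cash plus shares ) : long stock is sold at the bid ( 1 - lambda ) s , short stock is covered at the ask ( 1 + lambda - bar ) s , and strict solvency amounts to this value being strictly positive . the function and parameter names are illustrative , not the paper 's notation .

```python
def liquidation_value(eta_b, eta_s, S, lam_sell, lam_buy):
    """Cash obtained by closing a position of eta_b in bonds and eta_s shares at price S.

    Long stock is sold at the bid (1 - lam_sell) * S; short stock is covered at
    the ask (1 + lam_buy) * S.  (Strict) solvency means this value is (strictly)
    positive.  Names are illustrative, not taken from the paper.
    """
    if eta_s >= 0:
        return eta_b + eta_s * (1.0 - lam_sell) * S
    return eta_b + eta_s * (1.0 + lam_buy) * S

# Solvent despite a negative cash balance:
print(liquidation_value(eta_b=-50.0, eta_s=1.0, S=100.0, lam_sell=0.05, lam_buy=0.05))
# -> 45.0
```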
to the best of our knowledge ,such a characterization is not present in the literature , and the finiteness of the value function is either assumed ( as in ) or deduced from rather strong conditions ( as in ) .[ thm : well - posed ] given the environment parameters and the transaction costs , , the following statements are equivalent : \(1 ) the problem is well posed , i.e whenever .\(2 ) the parameters of the model satisfy one of the following three conditions : + - , + - and , + - , and where the function is given by in a closed form .+ for , the third condition in ( 2 ) above reduces to a well - known condition of shreve and soner .indeed , the entire section 12 in , culminating in theorem 12.2 , p. 677 , is devoted to the well - posedness problem with two bonds ( i.e , with ) . as demonstrated by our second main result , the shadow - price approach not only allows us to fully characterize the conditions under which a solution to the frictional optimal investment / consumption problem exists , but it also sheds light on its form and regularity .[ thm : main ] given the parameters and the transaction costs , , we assume that well - posedness conditions of theorem [ thm : well - posed ] hold. then 1 .[ ite : main-3 ] + there exist constants with and a function ] , defined by admits no zeros on ] , * the value of the constant is determined as in proposition [ prop : rx ] , and * is the unique solution of reflected sde with .[ ite : main-8 ] for any satisfying , the value and an optimal investment / consumption strategy for the main problem ( [ equ : oc - prob ] ) are given by where is defined in proposition [ prop : rx ] , and and in lemma [ lem : complete - optimal ] . in , if ( is a singular point described in section 5 ) , then the condition can be violated .for this exceptional case , proposition [ pro : y ] is still valid : more precisely , in the part ( 2 ) of proposition [ pro : y ] , we need to show that . if , in , the drift is positive and the volatility is zero , thus , we conclude that .the purpose of the present section is to provide a heuristic derivation of a free - boundary problem for a one - dimensional first - order ode which will later be used to construct a shadow process and the solution of our main problem . with the fully rigorous verification coming later, we often do not pay attention to integrability or measurability conditions and formally push through many steps in this section .we start by splitting the shadow problem according to the starting value of the process : } \quad \inf_{{{\tilde{s}}}\in{{\mathcal s } } , { { \tilde{s}}}_0=s_0 } \quad \sup_{c\in { { \mathcal c}}({{\tilde{s } } } ) } { { \mathcal u}}(c ) .\end{split}\ ] ] one can significantly simplify the analysis of the above problem by noting that , since each is a strictly positive it^ o - process , we can always choose processes and such that it pays to pass to the logarithmic scale , and introduce the process , whose dynamics is given by on the natural domain ] , a.s .we note that the market modeled by is complete , and that , thanks to the absence of friction , the agent with the initial holdings will achieve the same utility as the one who immediately liquidates the position , i.e. , the one with the initial wealth of . 
therefore , the standard duality theory suggests that where and are related as above and } .\end{split}\ ] ] the legendre - fenchel transform of admits an explicit and simple expression in the case of a power utility .indeed , we have where .the parameter is the negative of the conjugate exponent of , i.e. , ( , for ) and this relationship will be assumed to hold throughout the paper without explicit mention .consequently , if we combine and , we obtain the following equality : \times ( 0,\infty ) } \big((\eta_b+s_0 e^y \eta_s ) z+ \inf_{(\sigma,\theta)\in { { \mathcal p}}(y ) } { { \mathcal v}}(z { { \mathcal e}}(-\theta\cdot b ) ) \big ) .\end{split}\ ] ] the expression above is particularly convenient because it separates the shadow problem into a stochastic control problem over , and a ( finite - dimensional ) optimization problem over and , which can be solved separately .thanks to homogeneity ( -homogeneity for ) of the map , a dimensional reduction is possible in the inner control problem in ( [ equ : split - pr ] ) . indeed , with , we have } , & p=0,\\ \tfrac{z^{-q}}{q}\ , { { { \mathbb e}}\left[\int_0^{\infty } e^{-{\hat{\delta}}t } { { \mathcal e}}(-\theta\cdot b)_t^{-q}\ , dt \right ] } , & p\ne 0. \\ \end{cases}\ ] ] hence , } \begin{cases } \tfrac{1}{\delta}\big(-1+\log\big(\delta(\eta_b+s_0 e^y \eta_s)\big ) + w(y ) \big ) , & p=0 \\ \tfrac{(\eta_b+s_0 e^y \eta_s)^p}{p } { \left| w(y ) \right|}^{1-p } , & p\ne 0 , \end{cases } \end{split}\ ] ] where } , & p=0,\\ \operatorname{sgn}(p){{{\mathbb e}}\left[\int_0^{\infty } e^{-{\hat{\delta}}t } { { \mathcal e}}(-\theta\cdot b)_t^{-q}\ , dt \right ] } , & p\ne 0 .\end{cases}\ ] ] in the heuristic spirit of the present section , it will be assumed that the processes of the form and are ( true ) martingales so that the definition of the stochastic exponential and the simple identity can be used to simplify the expression for even further : } , & p=0,\\ \operatorname{sgn}(p){{{\mathbb e}}^{{\bar{{{\mathbb p}}}}}\left[\int_0^{\infty } e^{-{\hat{\delta}}t } e^{{\tfrac{1}{2}}q ( 1+q ) \int_0^t \theta^2_u\ , du}\ , dt \right ] } , & p\ne 0 .\end{cases } \end{split}\ ] ] here , the measure a cylindrical measure , but , given the heuristic nature of the present section , we do not pursue this distinction . ] is ( locally ) given by . by girsanovs theorem the process is ( locally ) a -brownian motion and the dynamics of the process can be conveniently written as the expression inside the infimum in ( [ equ : w ] ) involves a discounted running cost .hence , it fits in the classical framework of optimal stochastic control , and a formal hjb - equation can be written down .we note that even though the process appears in the original expression for , the simplification in ( [ equ : w ] ) allows us to drop it from the list of state variables and , thus , reduce the dimensionality of the problem .indeed , the formal hjb has the following form : where the functions and are defined in . in order to fully characterize the optimization problem, we need to impose the boundary conditions at and to enforce the requirement that stay within the interval ] , we are led to the boundary condition it will be shown in the following section that , in addition to the annihilation of the diffusion coefficient , ( [ equ : hjb - w - bd ] ) will ensure that the drift coefficient will indeed have the proper sign of at the boundary . 
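for completeness , the legendre - fenchel transform of the power utility invoked above is the following elementary computation ( the corresponding display in this extracted text is garbled ) ; with the usual normalization , and with q = p/(1-p) equal to minus the conjugate exponent of p :

```latex
% Pointwise Legendre--Fenchel transform of the power utility U(c) = c^p/p, p < 1, p \ne 0.
% The maximizer is c_* = z^{1/(p-1)}, and q = p/(1-p) is minus the conjugate exponent of p.
V(z) \;=\; \sup_{c>0}\Bigl(\tfrac{c^{p}}{p} - c\,z\Bigr)
      \;=\; \tfrac{1-p}{p}\, z^{-\frac{p}{1-p}}
      \;=\; \tfrac{z^{-q}}{q},
\qquad q=\tfrac{p}{1-p}.

% Logarithmic case (p = 0), with the usual normalization:
V(z) \;=\; \sup_{c>0}\bigl(\log c - c\,z\bigr) \;=\; -\log z - 1 .
```

note that the sign of v(z) is sgn(p) , which appears consistent with the sgn(p) factors in the expressions for w above .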
by interpreting the problem of shadow prices as a game , one can arrive to the two - point boundary problem , without the use of duality .let denote the initial wealth of the investor who is not subject to transaction costs , for a fixed and ] up to the last minimization over , the above defines a stochastic game with the value .\ ] ] the corresponding isaacs equation with a two dimensional state and the initial condition scales as consequently , it can be reduced to a one - dimensional equation for , with ] .we believe that a similar approach - namely of rewriting the problem of optimal investment and consumption with transaction costs as a game , through the use of consistent prices - works well in more general situations , e.g. , when multiple assets are present . finally , based on the fact that the equation ( [ equ : hjb - g ] ) is autonomous ,we introduce an order - reducing change of variable . with expected to be increasing and continuous on ] , with and by .this transforms the equation ( [ equ : hjb - g ] ) into with ( free ) boundary conditions and the free boundaries and are expected to be positive .we start the proof of our main theorem [ thm : main ] with a verification argument which establishes the implication .after that , in lemma [ lem : complete - optimal ] and proposition [ prop : rx ] , we show .let us assume , therefore , that a triplet , as in part of theorem [ thm : main ] , is given ( and fixed for the remainder of the section ) , and that the function is defined as in .let \to { { \mathbb r}} ] be the formal optimizers of ( [ equ : hjb - g ] ) ,i.e. , similarly , let \to { { \mathbb r}} ] .while the equation ( [ equ : hjb - g ] ) can be written in a more explicit way - which will be used extensively later - for now we choose to keep its current variational form .we do note , however , the following useful property of the function : for all with , we have the equation ( [ equ : envelope ] ) follows either by direct computation ( using the explicit formulas for and above ) or the appropriate version of the envelope theorem ( see , e.g. , theorem 3.3 , p. 475 in ) , which states , loosely speaking , that we `` pass the derivative inside the infimum '' in the equation ( [ equ : hjb - g ] ) .the family of processes , ] there exists a unique solution of the following reflected ( skorokhod - type ) sde .\end{split } \right.\ ] ] here , is the `` instantaneous inward reflection '' term for the boundary , i.e. , a continuous process of finite variation whose pathwise hahn - jordan decomposition satisfies the reader is referred to for a more detailed discussion of various possible boundary behaviors of diffusions in a bounded interval , as well as the original existence and uniqueness result ) for . for ] by and the process by . in relation to the heuristic discussion of section [ sec : heuristic ], we note that plays the ( formal ) role of the inverse of the derivative . moreover , the process has the following properties : [ pro : y ] for ] , for all , a.s . , and 2 .[ ite : y-2 ] and .property follows from the definition of the function and the assumption ( c ) of part of theorem [ thm : main ] .for , it^ o s formula reveals the following dynamics of : the identity allows us to simplify the above expression to finally , since vanishes on the boundary , the singular term disappears and we obtain the second statement . 
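since the construction above relies on a diffusion reflected at the endpoints of an interval , a minimal euler - type simulation may help make the skorokhod reflection concrete . the drift and volatility below are placeholders rather than the paper 's hat - theta - driven coefficients , and projecting onto the interval is only a first - order approximation of instantaneous reflection .

```python
import numpy as np

def simulate_reflected_sde(x0, drift, vol, x_lo, x_hi, T=1.0, n=10_000, seed=0):
    """Euler scheme for a one-dimensional diffusion kept in [x_lo, x_hi].

    Projection onto the interval is used as a first-order stand-in for
    instantaneous (Skorokhod-type) reflection at the endpoints.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        y = x[k] + drift(x[k]) * dt + vol(x[k]) * dw
        x[k + 1] = min(max(y, x_lo), x_hi)  # project back into the interval
    return x

# Placeholder coefficients (not the paper's dynamics):
path = simulate_reflected_sde(x0=0.5,
                              drift=lambda x: 0.1 * (0.5 - x),
                              vol=lambda x: 0.3,
                              x_lo=0.2, x_hi=0.8)
print(path.min(), path.max())  # stays within [0.2, 0.8] by construction
```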
for notational convenience ,we define [ pro : rep - g ] for ] , under the measure , defined by .therefore , + { { \mathbb e}}^{{\bar{{{\mathbb p}}}}_t}\left [ \int_0^t \rho^x_u { \hat{\gamma}}(x^x_u)\ , du\right ] = { { \mathbb e}}^{{\bar{{{\mathbb p}}}}_t } [ h_t]={{\mathbb e}}^{{\bar{{{\mathbb p}}}}_t}[h_0]=g(x),\ ] ] where the boundedness of the integrands was used to do away with the stochastic integrals with respect to . the exponential identity ( [ equ : identity ] ) now implies that + { { \mathbb e}}[\int_0^t e^{-{\hat{\delta}}u}w^x_u \ , du]+ \begin{cases } 0 , & p\ne 0\\ { \tfrac{1}{2}}e^{-\delta t } { { \mathbb e } } [ \int_0^t { \hat{\theta}}(x^x_s)^2\ , ds ] , & p=0\\ \end{cases } \end{split}\ ] ] for , we can use the fact that and are bounded to conclude that = { { \mathbb e } } [ e^{-\delta t } g(x^x_t ) ] \to 0\text { and } e^{-\delta t } { { \mathbb e } } [ \int_0^t { \hat{\theta}}(x^x_s)^2\ , ds ] \to 0.\ ] ] these two limits can now easily be combined with to yield . to deal with the case , we note that non - negativity of and in implies that \ , dt<\infty .\end{split}\ ] ] moreover , with } { \left| g(x ) \right| } ] , along sequence with which exists thanks to ( [ equ : fin - p1 ] ) . for , the fact that implies that \leq { \left| g \right|}_{\infty}e^{-{\hat{\delta}}t } \to 0 ], we define the process and observe that , by it^ o s formula , it admits the following dynamics : the goal of this subsection is to show that is a shadow price for the appropriate choice of the initial value ] and the initial positions with , we have moreover , with the processes , and defined by the optimal strategy for the -problem is given for by the standard complete - market duality theory ( see , e.g. , theorem 9.11 , p. 141 in ) implies that where dual functional is as in ( [ equ : sv - def ] ) . furthermore ,following the computations that lead to ( [ equ : hfu - form ] ) in subsection [ sub : dim - reduce ] , and using the representation of proposition [ pro : rep - g ] , we get once the form of the value function has been determined , it is a routine computation derive the expressions for the optimal investment / consumption strategy .indeed , let the processes be given by and and as in the statement . then, one readily checks that the triplet given by is an optimal investment / consumption strategy .finally , the equality between the form and the simpler one given in in the statement follows by direct computation where one can use the explicit formulas for the functions and from .[ prop : rx ] let be an admissible initial wealth , i.e. , such that . for the function \to{{\mathbb r}} ] be defined by \\ { \underline{x } } , & r(x)<0\text { for all } x\in [ { \underline{x}},{\overline{x}}]\\ \text{a solution to } r(x)=0 , & otherwise. 
\end{cases}\ ] ] then is a shadow price .the three possible cases in proposition [ prop : rx ] relate to whether the initial condition is outside the no - transaction region ( above or below ) or inside it .it is easy to check that minimizes the value for as mentioned in subsection [ game ] .the idea of the proof is to show that the triplet of lemma [ lem : complete - optimal ] satisfies the conditions of proposition [ pro : when - shadow ] .since is the optimal consumption process , it will be enough to show that conditions and of proposition [ pro : equally - important ] hold .the expression implies that the processes and are continuous , except for a possible jump at .let us , first , deal with the jump at .the conditions and of proposition [ pro : equally - important ] at translate into the following equality : which , after is used , becomes if admits a solution ] , then by continuity , either , for all ] .focusing on the first possibility ( with the second one being similar ) we note that in this case , and so , if we pick , we get . next , we deal with the trajectories of the processes and for .it is a matter of a tedious but entirely straightforward computation ( which can be somewhat simplified by passing to the logarithmic scale and using the identities ( [ equ : hjb - g ] ) and ( [ equ : envelope ] ) ) to obtain the following dynamics : thanks to the fact that is a finite - variation process which decreases only when ( i.e. , ) and increases only when ( i.e. , ) , the conditions and of proposition [ pro : equally - important ] hold .having presented a verification argument in the previous section , we turn to the analysis of the ( non - standard ) free - boundary problem , .we start by remarking that that the equation ( [ equ : hjb - g ] ) simplifies to the form and where the second - order polynomials and are given by the existence proof is based on a geometrically - flavored analysis of the equation , where the curves and , given by play a prominent role .many cases need to be considered , but we always proceed according to the following * program * : 1 . first , we note that the boundary conditions amount to 2 . then , for a fixed we solve the ode with initial condition and let it evolve to the right ( if possible ) until meeting again the curve at the -intercept .we therefore obtain a solution \rightarrow \mathbb{r} ] , for some ] + figure 3 .below shows some of the possible shapes the graph can take , under a representative choice of parameter regimes .+ {1.png } & \includegraphics[width=3.7cm]{2.png } & \includegraphics[width=3.7cm]{3.png } & \includegraphics[width=3.7cm]{4.png } \\\pi>1 , \\alpha > x_p & \pi>1 , \ \alpha\leq x_p & \pi<1 & \pi=1 \end{array} ] , and show that the following two statements hold : 1 .the map is continuous and strictly decreasing on , and 2 . 
while , where an expression for is given in below .the reader will note two major differences when the statements here are compared to the corresponding statements in the sub - case a ) .the first one is that now plays the role of .the second one is that the range of the integral is not the set of positive numbers anymore .it is an interval of the form , which makes the free - boundary problem solvable only for .in addition to the fact that we still need to deal with the possible singularity along the graph of , difficulties of a different nature appear in this sub - case .first of all , due to the unboundedness of the regions separated by a hyperbola , it is not clear whether the maximal solution started at will ever hit the curve again .indeed , this is certainly a possibility when , as depicted in figure 4 .however , we prove by contradiction that this is not the case for .the second new difficulty has to do with fact that is finite - a fact which prevents the existence of a solution to , .{middlemu1.png } & \qquad \includegraphics[width=4.7cm]{middlemu2.png } & \qquad \includegraphics[width=4.7cm]{middlemu3.png } \\ \pi<1 & \qquad \pi>1 , \\alpha > x_p & \qquad \pi>1 , \\alpha\leq x_p \end{array} ]after a heuristic description of the major steps in the existence proof and the associated difficulties , we now proceed to give more rigorous , formal proofs . more precisely , the goal of this section is to present a proof of the part of theorem [ thm : main ] .as already mentioned in the previous section , the proofs in the case , are very similar ( but less involved ) than those in the case so we skip them and refer the reader to the first author s phd dissertation for details. we also do not provide the proof of the part ( c ) of theorem [ thm : main ] , as it can be obtained easily by an explicit computation . without loss of generality, we consider the case , and construct a portfolio as follows : one easily checks that it is admissible and that its expected utility is given by & = \tfrac{(1-p)^p(1-{\underline{{\lambda}}})^p}{p^{1+p } } \mathbb{e } \big[\int_0^{\infty } e^{-\delta t } \tfrac{s_t^p}{t+1 } dt \big ] = \tfrac{(1-p)^p(1-{\underline{{\lambda}}})^p}{p^{1+p } } \int_0^{\infty } e^{pt(\mu - a)}\tfrac{1}{t+1}dt = \infty.\qedhere \end{split}\ ] ] as explained in the previous section , the main technique we employ in all of our existence proofs is the construction of a family of solutions to the equation , followed by the choice of the one that satisfies the appropriate integral condition .we , therefore , take some time here to define the appropriate notion of a solution to a singular ode : we therefore introduce the * upper graph * and the * lower graph * of the level curve by and for all , where moreover , for convenience , we include the case , where the minimal and maximal solutions of ( instead of ) are considered ; the domain is also defined .one easily checks that functions and allow us to define a subclass of solutions to : thanks to the local lipschitz property of the function away from , the general theory of ordinary differential equations , namely the peano existence theorem ( see , for example theorem i , p. 73 in ) , states that , starting from any point with ] . to avoid the analysis of unnecessary cases , we assume from the start that , so that is strictly increasing in the neighborhood of and the singularity is not used as the initial value ( a curious reader can peak ahead to proposition [ pro : second ] , to see how the case can be handled . 
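the existence program outlined above ( pick an initial point on the curve , integrate the ode to the right , and tune the free parameter until an integral condition is met ) is , in essence , a shooting argument . the sketch below shows this generic pattern for a hypothetical , everywhere - smooth right - hand side and a hypothetical integral constraint ; it deliberately ignores the singularity along the level curve that the rigorous analysis of this section is devoted to .

```python
import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.optimize import brentq

# Hypothetical, everywhere-smooth right-hand side; the paper's equation is
# singular along a level curve and needs the careful analysis given in the text.
def F(x, g):
    return -g + np.sin(x)

def shoot(alpha, x_max=10.0):
    """Integrate g' = F(x, g) to the right from the initial point g(alpha) = alpha."""
    return solve_ivp(lambda x, g: [F(x, g[0])], (alpha, x_max), [alpha],
                     dense_output=True, rtol=1e-9)

def integral_mismatch(alpha, target=1.0):
    """Defect in a (hypothetical) integral constraint the solution must satisfy."""
    sol = shoot(alpha)
    val, _ = quad(lambda x: sol.sol(x)[0] ** 2, alpha, alpha + 1.0)
    return val - target

# Tune the free parameter until the constraint holds.
alpha_star = brentq(integral_mismatch, 0.1, 3.0)
print(alpha_star)
```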
) to rule out the possible encounters of a maximal inner solution with away from the singular point , we delve a bit deeper into the geometry of the right - hand side of our ode .we start by a technical lemma which will help us construct the containment curve .some more explicit expressions for the upper and lower curves and are going to be needed : where are given by the end - points of the domains of and , i.e. , those for which are given by we can also check that are solutions of and that are the solutions to . finally , we note for future reference that holds , and that , for , the -coordinates of the north and east points ( ) are given by and \(1 ) a direct calculation shows that , for , we have , as well as by continuity , we can find such that we can check that , which , in turn , implies that \cup [ x_-(k_0),\infty) ] and is concave on ] .similarly , on ] .thus , . , : implies that . , : . , : implies .\(2 ) for the simple connectedness of , it is enough to show that is an interval .given that , for all , it is enough to show that is an interval . with }t_u(x , k_0) ] , therefore , \subset \{x>0 : \tau(x ) \leq t_u(x,0)\} ] such that and . since for close enough , we have .combine this with , we deduce that and .so , and . using this , we can calculate , which contradict to the fact that for close enough .\(2 ) noting that , we assume that there exists a point with . with the infimum of all such points , we observe immediately that and ] , which contradicts to the choice of . then , since , we reach a contradiction with part ( 6 ) of lemma [ omega ] : in the case , we have and , by the definition of the point and the domain , there exists , such that . consequently , we have for some , and we observe that , because is a decreasing function of near .now we reach a contradiction as in the case : where can be showed by part ( 6 ) of lemma [ omega ] .\(3 ) we first observe that the initial value lies on the graph of the function which is strictly increasing in the neighborhood of .therefore , any extension of to the left of would cross and exit the set .so , left end - point of the domain should be . to deal with the right end - point of , we note that noe of the following must occur : 1 ) explodes , 2 ) cossed , or 3 ) is hit . the first possibility is easily ruled out by the observation that no explosion can happen without crossing the curve , first . the second possibility is severely limited by ( 2 ) above ; indeed , with part ( 1 ) of proposition [ omega ] , is clear now that , in the right end - point limit , the function hits , provided . for and , clearly exists , and , so , .furthermore , since . in case and , we also conclude that , by observing that for and .we focus on the case in this subsection .the curve is now an ellipse and it admits a north pole with the -coordinate . by part ( 1 ) of proposition [ omega ] and ( 2 ) of proposition [ gamma ], we have the following dichotomy , valid for all .[ pro : first ] suppose that , and . then , ] . is a consequence of part ( 3 ) of proposition [ gamma ] . since , smoothness of follows from the general theory ( peano s theorem ) .moreover , the existence of the initial value , with the desired properties , is a direct consequence of the listed properties of , by way of the intermediate value theorem .we , therefore , focus on in the remainder of the proof , which is broken into several claims .the proof of each claim is placed directly after the corresponding statement .* claim 2 * : _ the map is continuous_. 
for this , we use the implicit - function theorem and the continuity of with respect to the initial data ( see , e.g. , theorem vi . ,p 145 in ) . to be able to use the implicit - function theorem, it will be enough to observe that , is not tangent to ( or ) at , which is a consequence of and claim 1 . above .* claim 4 : * _ ._ the joint continuity of at and the fact that , imply that there exists such that .\ ] ] we define and remind the reader that for each , so that .hence , we can pick such that , for . for any given , if it so happens that for ] .therefore , which is contradiction .we conclude that intersects on ], we conclude that on ] . therefore , 1 .there is a single solution of on ] of the form where is as in ( 1 ) above , and is _ any _ function as in ( 3 ) above , is a -solution to .elementary transformations can be used to show that for any solution of defined on \setminus{\{0\}} ], is of class and .2 . .then the limits ,\ ] ] exist and define a continuous solution to with the domain ], is of class , and .2 . if , then . for , and is of class .\(1 ) the parameter regime treated in proposition [ pro : second ] above leads to a truly singular behavior in the ode .indeed , the maximal continuous solution passes through the singular point , at which the right - hand side is not well - defined .it turns out that the continuity of the solution , coupled with the particular form of the equation , forces higher regularity ( we push the proof up to ) on the solution .the related equation of lemma [ lem : f ] provides a very good model for the situation .therein , uniqueness fails on one side of the equation ( and general existence on the other ) , but the equation itself forces a smooth passage of any solution through the origin .it follows immediately , that , even though high regularity can be achieved at the singularity , the solution will never be real analytic there , except , maybe , for one particular value of .this is a general feature of singular ode with a rational right - hand sides .consider , for example , the simplest case which admits as a solution the textbook example of a function which is not real analytic .\(2 ) for large - enough , the value of such that solves , , will fall below , and an interesting phenomenon will occur .namely , the right free boundary will stop depending on or .indeed , the passage through the singularity simply `` erases '' the memory of the initial condition in . in financial terms , the right boundary of the no - trade region will be stop depending on the transaction costs , while the left boundary will continue to open up as the transaction costs increase. we will only prove ( 1 ) here ; ( 2 ) can be proved by the same methods used in the proof of c ) below .for both ( 1 ) and ( 2 ) , follows easily .+ a ) by proposition [ omega ] , part ( 1 ) , .so , if , the statement can be proved by using the argument from the proof of proposition [ pro : first ] , mutatis mutandis .\b ) the existence of the limit from the statement is established in a matter similar to that used to prove the continuity of the map in claim 2 . in the proof of proposition [ pro : first ] .the existence of the limit follows from a standard argument involving a weak formulation and the dominated convergence theorem .finally , by part ( 2 ) of proposition [ gamma ] and , we conclude that is defined and continuous on ] , with , exists .therefore , by maximality , a maximal inner solution , with ] to ] .this leads to the following contradiction : * claim 2 : * _ ) ] . 
given an in a small - enough neighborhood of , the concavity of implies that the mean value theorem can now be used to conclude that there exist , arbitrarily close to , with such that finally , if we combine the obtained results with those of claim 1 ., we can conclude that decreases near .* claim 3 : * _ the second derivative of exists at and _ the proof is based on an explicit computation where the easy - to - check fact that our ode admits the form is used .we begin with the equality by lhospital s rule , as , the right - hand side above converges to which , in turn , evaluates to the half of the right - hand side of .having computed a second - order quotient of differences for at , we could use the concavity of at ( established in claim 2 . above ) to conclude that is twice differentiable there .we opt to use a short , self - contained argument , instead , where denotes the right - hand side of . for small enough , we have if we fix and choose , we obtain from which the claim follows immediately .* claim 4 : * _ ). ] , and ; we need to show that .this follows , however , directly from lemma [ lem : f ] , as we obtain the ode if we differentiate the equality , and pass to the new coordinates .the coefficient functions and admit a rather messy but explicit form which can be used to establish their continuity .indeed , it turns out that and can be represented as continuous transformations of functions of , and , which are , themselves , continuous .similarly , the condition imposed in lemma [ lem : f ] is satisfied because one can use the aforementioned explicit expression to conclude that for let be the ( ordered ) solutions of the quadratic equation , where , and are as in .the analysis in the sequel centers around the constants and , given by 1 . is the smallest solution to .moreover is nonnegative and if and only if . is bounded if and unbounded otherwise .3 . for , .4 . for , we have 5 .there exists a constant such that for and we have 6 . is well - defined and nonnegative .moreover , if and only if .\(2 ) it is easily checked that the leading coefficient of ( seen as a polynomial in ) is positive .therefore , for ] .thus , the expression inside the square root in is positive for and ] , is unbounded . similarly , since for small enough , we conclude that the domain of is bounded .part ( 4 ) of proposition [ omega ] , implies that is a bounded set for any sufficiently small .we conclude that is bounded for .\(3 ) from the definition of we get we already checked that for ] , since .thus , for ] , since the function is concave and its values at are positive .it follows immediately that for ] .so , by ( 3 ) , we can choose such that for some and all , ] and the fact that , which is , in turn , implied by ( 4 ) and ( 5 ) above .[ rem : k ] in our current parameter range ( , ) , the level curve is a hyperbola and the curve is an ellipse for large - enough values of .in fact , is the smallest value of such that is a hyperbola ( and , therefore , unbounded ) .the parts of statements ( 1 ) and ( 2 ) involving singularities are proved similarly to parallel statements in proposition [ pro : second ] .we show that for , with the case being quite similar .proceeding by contradiction , we suppose that , for some .then , just like in the proof of proposition [ pro : second ] , we can show that does not admit a local minimum on .thus , there exists such that . 
from proposition[ k and lim ] , part ( 2 ) , we learn that ] , ] .the first and the last integrals are then computed using the change of variable , while the limit of the middle integral is shown to be zero .we start this program by observing that the region is unbounded ( see proposition [ k and lim ] ( 2 ) ) , and , hence , so is the region . also , we observe that for .we conclude from there that intersects the region therefore , } { { g_{\alpha}}}'(x) ] be the inverse function of on ] , and , so , for , we have where the first equality follows by direct computation , the second one by the fact that and , and the final inequality from the choice of .hence , for , the function in proposition [ pro : third ] corresponds to the value function under the transaction costs and such that , where .more precisely , lemma [ lem : complete - optimal ] in section 4 above yields that where is the optimal utility for the initial position , under the transaction costs and .the strict increase of and the decrease of , imply that is strictly decreasing , wherever it is defined .it now easily follows that which , together with and the representation , yields that .since , clearly , is decreasing in , this amounts to saying that the map is strictly decreasing in general , not just under the parameters restricted by the hypothesis of proposition [ prop : wellposed ] . the same argument , as the one given in the proof of proposition [ prop : wellposed ] , applies .in particular , this fact can be used to show that the free - boundary problem , has a _ unique _solution for all values of the transaction costs , as long as .it is , perhaps , interesting to note that the authors are unable to come up with a purely analytic argument for the monotonicity of .the crucial step in the proof of proposition [ prop : wellposed ] above is to relate the value of to the original control problem , and then argue by using the natural monotonicity properties of the control problem itself , rather than the analytic description only .jinhyuk choi , _ a shadow - price approach to the problem of optimal investment / consumption with proportional transaction costs and utilities of power type _ , ph.d .thesis , the university of texas at austin , 2012 .wolfgang walter , _ ordinary differential equations _ , graduate texts in mathematics , vol .182 , springer - verlag , new york , 1998 , translated from the sixth german ( 1996 ) edition by russell thompson , readings in mathematics .
We revisit the optimal investment and consumption model of Davis and Norman (1990) and Shreve and Soner (1994), following a shadow-price approach similar to that of Kallsen and Muhle-Karbe (2010). Making use of the completeness of the model without transaction costs, we reformulate and reduce the Hamilton-Jacobi-Bellman equation for this singular stochastic control problem to a non-standard free-boundary problem for a first-order ODE with an integral constraint. Having shown that the free-boundary problem has a smooth solution, we use it to construct the solution of the original optimal investment/consumption problem in a self-contained manner and without any recourse to the dynamic programming principle. Furthermore, we provide an explicit characterization of the model parameters for which the value function is finite.
quantum contextuality , which was independently discovered by kochen and specker ( ks ) , and bell , is a fundamental concept in quantum information theory .it can be revealed by the ks sets in a logic manner , or by violation of statistical noncontextuality inequalities .so far , many theoretical and experimental works have been accomplished in order to find the optimal noncontextuality inequalities or ks sets , which in turn contribute to the use of speeding up quantum algorithms .new theoretical tools have been invented to study contextuality .graph theory is a such representative that finds its wide applications and effectiveness in discussing the contextual behavior .two different rank- projective measurements and are said to be exclusive if they commute with one another .the exclusivity relation of a set of rank- measurements s for a noncontextuality inequality can be effectively represented in an exclusivity graph consisting of vertices and edges , where a pair of vertices are connected if and only if the corresponding events of probability are mutually exclusive . for each exclusivity graph, the classical bound of the noncontextuality inequality equals to the independence , and the maximal quantum prediction is just the lovsz number .nevertheless , there is no perfectly pure " state in actual experiment .it is quite necessary to consider mixed states and analyze their influence upon contextuality .although there are state - independent noncontextuality ( sic ) inequalities , whose quantum violation is independent of which state is to be measured , yet in general the violation of a noncontextuality inequality may depend on the mixedness of the state .there have been proposed various measures of the mixedness of a state , linear entropy among them is an efficient one easy to compute : for a -dimensional mixed state , the linear entropy is defined as ) ] , is the general mixed state , and s are rank- projective measurements with exclusivity relations shown in figs .[ figkcbs ] and [ figkk ] .note that the former is the simplest inequality that requires the minimal number of measurements , while the latter is a first one that is quantum mechanically violated by all but the maximally mixed state . without loss of generality and for the sake of convenience ,we consider a quantum state in the diagonal form : with in decreasing order , and .this makes sense since for a fixed general state and its optimal measurement set , one can always be able to diagonalize the state and change accordingly its overall measurement set by a global rotation . in what followswe shall investigate the maximally contextuality of mixed states with respect to a fixed linear entropy , for each inequality mentioned above .we plot the upper and lower bounds of contextuality of mixed states in fig .[ nfig ] , together with measuring directions specified in table [ table1 ] [ table2 ] .in particular , the specific states and the analytic expressions for each curve ( except {c}\text{\resizebox{3ex}{.7ex}{}}\\[-0.9ex]\textstyleac\endarray}}\textstyle\frown\textstyle\frown\textstyle\frown\textstyle\frown\textstyle\frown\textstyle\frown ] .first , we introduce a lemma on hermitian matrices : [ lemma ] assume that where , , is a unitary matrix . 
then \le \vec{a}.\vec{b},\ ] ] where .directly computation shows that = \vec{a } w \vec{b},\ ] ] where with .then is a doubly stochastic matrix by the definition of doubly stochastic matrices .the birkhoff - von neumann theorem says that the set of doubly stochastic matrices forms a convex polytope whose vertices are the permutation matrices .if we consider the linear functional on that convex polytope , then its optimal can be achieved at the vertices , i.e. , the permutation matrices . since already in decreasing order , the maximal of can be achieved when , which implies .given a state where , . andthe set of measurements is optimal for .assume is such a unitary matrix that is diagonal and the diagonal is in decreasing order , lemma [ lemma ] tells us that the set of measurements is also optimal while is diagonal .thus , for our purpose , we can only consider the diagonal : with in decreasing order , , and being the total number of settings ( i.e. , 5 for kcbs and 9 for kk ) .denote .the condition restrains within an , e.g. , -plane . in general , .in particular , , which holds only to an edgeless exclusivity graph , that is , there is no exclusive relation between any pair of events of probability .in fact , the exclusivity relation will further limit the distribution of . as we shall see , the exclusivity relations for the kcbs and kk inequalities are so strong that will be dramatically restrained to a curve , rather than a region in the plane .the curves of for kcbs and kk inequalities are plotted in fig .[ ekcbs ] and fig .[ ekk ] , respectively .( for comparison , see fig .[ skcbs ] for the quantum violation of the kcbs inequality by a convex mixture of , and in table [ table1 ] . ) obviously , must be a function of . then by differentiating the expression with respect to , we will obtain the optimal for a given state . for the kcbs inequality , we have , and ( i ) : {c}\text{\resizebox{3ex}{.7ex}{}}\\[-0.9ex]\textstyleac\endarray}}:\;\;\frac{d c_q}{d m_1 } = \frac{1+\sqrt{1-\frac{4s}{3}}}{2 } + \frac{1-\sqrt{1-\frac{4s}{3}}}{2}\frac{dm_2}{d m_1}.\ ] ] then implies that hence , varies with , meaning that there is no universal set of measurements for this curve .( ii ) : {c}\text{\resizebox{3ex}{.7ex}{}}\\[-0.9ex]\textstylecd\endarray}}:\;\;\frac{d c_q}{d m_1 } = \sqrt{1-s}(1+\frac{dm_2}{d m_1 } ) < 0.\ ] ] this shows that the set of measurements in table i is optimal .( iii ) : {c}\text{\resizebox{3ex}{.7ex}{}}\\[-0.9ex]\textstylead\endarray}}:\;\;\frac{d c_q}{d m_1 } = \sqrt{1-s } > 0.\ ] ] again , this shows that the set of measurements in table ii is optimal . for the kk inequality , we find that ( i.e. , ) always holds .so all the states that violate the inequality possess the same set of measurements shown in table iii. 
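as a concrete numerical companion to the discussion above , the sketch below evaluates the kcbs operator on a pure state and on a diagonal mixed state with eigenvalues in decreasing order . it uses the standard symmetric kcbs vectors , which is an assumption here : the paper 's own optimal measurement directions are those listed in its tables , which are not reproduced in this text . the linear - entropy helper uses one common normalization , which may differ from the paper 's .

```python
import numpy as np

# Standard symmetric KCBS vectors in R^3 (an assumed choice).  Adjacent vectors
# are orthogonal, so the corresponding rank-1 projectors are exclusive, as in
# the pentagon exclusivity graph.
cos2 = np.cos(np.pi / 5) / (1 + np.cos(np.pi / 5))
theta = np.arccos(np.sqrt(cos2))
phis = 4 * np.pi * np.arange(5) / 5
vecs = np.stack([np.cos(theta) * np.ones(5),
                 np.sin(theta) * np.cos(phis),
                 np.sin(theta) * np.sin(phis)], axis=1)

for k in range(5):                      # check cyclic exclusivity
    assert abs(vecs[k] @ vecs[(k + 1) % 5]) < 1e-12

projectors = [np.outer(v, v) for v in vecs]

def kcbs_value(rho):
    """Sum of the five outcome probabilities; the noncontextual bound is 2."""
    return sum(np.trace(rho @ P).real for P in projectors)

def linear_entropy(rho, d=3):
    """Normalized linear entropy d/(d-1) * (1 - Tr rho^2) (one common convention)."""
    return d / (d - 1) * (1 - np.trace(rho @ rho).real)

# Pure state along the symmetry axis: maximal quantum value sqrt(5) ~ 2.236.
psi = np.array([1.0, 0.0, 0.0])
print(kcbs_value(np.outer(psi, psi)))

# A diagonal mixed state with eigenvalues in decreasing order, as in the text.
rho = np.diag([0.8, 0.1, 0.1])
print(kcbs_value(rho), linear_entropy(rho))
```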
moreover , the condition yields for kcbs , this yields , a monotonically increasing relation between and , implying that they take their maxima simultaneously , while for kk this yields , a monotonically decreasing relation , implying that the maximal is obtained when reaches its minimum , and vice versa .consequently , the spectral distribution of a noncontextualiy inequality can reflect the nature as to whether there exist a universal set of measurements , so that possible experimental setups could be greatly facilitated .in this paper , we have investigated the quantum contextuality of mixed states for the kcbs and the kk noncontextuality inequalities , and explored the question of why there exists a universal set of measurements for the latter , whereas none does for the former inequality .we have shown that a spectral analysis on the set of measurements may provide insightful clues toward the ultimate answer to this question .we believe that further works on combining graph theory and spectral theory in studying quantum contextuality may shed new light on these problems .is supported by the national basic research program ( 973 program ) of china under grant no .2012cb921900 and the nsf of china ( grant nos . 11175089 and 11475089 ) .this work is also partly supported by the national research foundation and the ministry of education , singapore .
We present a study of the quantum contextuality of three-dimensional mixed states for the Klyachko-Can-Binicioğlu-Shumovsky (KCBS) and the Kurzyński-Kaszlikowski (KK) noncontextuality inequalities. For any class of states whose eigenvalues are arranged in decreasing order, a universal set of measurements always exists for the KK inequality, whereas none does for the KCBS inequality. This difference is reflected in the spectral distribution of the overall measurement matrix. Our results should facilitate the error analysis for experimental setups, and the spectral method used here, combined with graph theory, could be useful in future studies of quantum contextuality.
why do friends spontaneously come up with mutually accepted rules , cooperation , and solidarity , while the creation of shared moral standards often fails in large communities ? in a `` global village '' , where everybody may interact with anybody else , it is not worthwhile to punish people who cheat . moralists ( cooperative individuals who undertake punishment efforts ) disappear because of their disadvantage compared to cooperators who do not punish ( so - called `` second - order free - riders '' ) .however , cooperators are exploited by free - riders .this creates a `` tragedy of the commons '' , where everybody is uncooperative in the end . yet ,when people interact with friends or local neighbors , as most people do , moralists can escape the direct competition with non - punishing cooperators by separating from them .moreover , in the competition with free - riders , moralists can defend their interests better than non - punishing cooperators .therefore , while seriously depleted in the beginning , moralists can finally spread all over the world ( `` who laughs last laughs best effect '' ) .strikingly , the presence of a few non - cooperative individuals ( `` deviant behavior '' ) can accelerate the victory of moralists . in order to spread, moralists may also form an `` unholy cooperation '' with people having double moral standards , i.e. free - riders who punish non - cooperative behavior , while being uncooperative themselves .public goods such as environmental resources or social benefits are particularly prone to exploitation by non - cooperative individuals ( `` defectors '' ) , who try to increase their benefit at the expense of fair contributors or users , the `` cooperators '' .this implies a tragedy of commons .it was proposed that costly punishment of non - cooperative individuals can establish cooperation in public goods dilemmas , and it is effective indeed .nonetheless , why would cooperators choose to punish defectors at a personal cost ?one would expect that evolutionary pressure should eventually eliminate such `` moralists '' due to their extra costs compared to `` second - order free - riders '' ( i.e. cooperators , who do not punish ) .these , however should finally be defeated by `` free - riders '' ( defectors ) . to overcome this problem , it was proposed that cooperators who punish defectors ( called `` moralists '' by us ) would survive through indirect reciprocity , reputation effects or the possibility to abstain from the joint enterprize by `` volunteering '' . without such mechanisms , cooperators who punish will usually vanish .surprisingly , however , the second - order free - rider problem is naturally resolved , without assuming additional mechanisms , if spatial or network interactions are considered .this will be shown in the following .in order to study the conditions for the disappearance of non - punishing cooperators and defectors , we simulate the public goods game with costly punishment , considering two cooperative strategies ( c , m ) and two defective ones ( d , i ) . for illustration, one may imagine that cooperators ( c ) correspond to countries trying to meet the co emission standards of the kyoto protocol , and `` moralists '' ( m ) to cooperative countries that additionally enforce the standards by international pressure ( e.g. 
embargoes ) .defectors ( d ) would correspond to those countries ignoring the kyoto protocol , and immoralists ( i ) to countries failing to meet the kyoto standards , but nevertheless imposing pressure on other countries to fulfil them . according to the classical game - theoretical prediction, all countries would finally fail to meet the emission standards , but we will show that , in a spatial setting , interactions between the four strategies c , d , m , and i can promote the spreading of moralists .other well - known public goods problems are over - fishing , the pollution of our environment , the creation of social benefit systems , or the establishment and maintenance of cultural institutions ( such as a shared language , norms , values , etc . ) .our simplified game - theoretical description of such problems assumes that cooperators ( c ) and moralists ( m ) make a contribution of to the respective public good under consideration , while nothing is contributed by defectors ( d ) and `` immoralists '' ( i ) , i.e. defectors who punish other defectors .the sum of all contributions is multiplied by a factor reflecting _ synergy effects _ of cooperation , and the resulting amount is equally shared among the interacting individuals .moreover , moralists and immoralists impose a fine on each defecting individual ( playing d or i ) , which produces an additional cost per punished defector to them ( see methods for details ) .the division by scales for the group size , but for simplicity , the parameter is called the _ punishment fine _ and the _ punishment cost_. given the same interaction partners , an immoralist never gets a higher payoff than a defector , but does equally well in a cooperative environment. moreover , a cooperator tends to outperform a moralist , given the interaction partners are the same .however , a cooperator can do better than a defector when the punishment fine is large enough .it is known that punishment in the public goods game and similar games can promote cooperation above a certain critical threshold of the synergy factor . besides cooperators who punish defectors ,heckathorn considered `` full cooperators '' ( moralists ) and `` hypocritical cooperators '' ( immoralists ) . for well - mixed interactions ( where individuals interact with a representative rather than local strategy distribution ) , eldakar and wilson find that altruistic punishment ( moralists ) can spread , if second - order free - riders ( non - punishing altruists ) are excluded , and that selfish punishers ( immoralists ) can survive together with altruistic non - punishers ( cooperators ) , provided that selfish nonpunishers ( defectors ) are sufficiently scarce .besides well - mixed interactions , some researchers have also investigated the effect of spatial interactions , since it is known that they can support the survival or spreading of cooperators ( but this is not always the case ) . 
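to fix the notation used below , the verbal description of the game can be turned into a short numerical sketch ( a fuller lattice simulation is sketched after the methods section ) . since the explicit payoff formulas were lost in extraction , the expressions below are a reconstruction under the stated rules : contributions of 1 by c and m , a synergy factor r applied to the pool , a fine beta / g imposed on each defecting player per punishing co - player , and a cost gamma / g borne by each punisher per defecting co - player , with group size g .

```python
import numpy as np

# strategies: 0 = cooperator (c), 1 = defector (d), 2 = moralist (m), 3 = immoralist (i)
C, D, M, I = 0, 1, 2, 3

def group_payoff(focal, partners, r, beta, gamma, G=5):
    """payoff of `focal` in one group (focal plus G-1 interaction partners).

    reconstruction of the rules described in the text: c and m contribute 1, the pool
    is multiplied by r and shared equally; each defecting member (d or i) is fined
    beta/G per punisher (m or i) in the group, and each punisher pays gamma/G per
    defecting member it punishes.
    """
    n = np.bincount(partners, minlength=4)             # strategy counts among the partners
    contributes = focal in (C, M)
    pool = n[C] + n[M] + (1 if contributes else 0)
    payoff = r * pool / G - (1 if contributes else 0)
    if focal in (D, I):                                 # fined by every punishing partner
        payoff -= beta * (n[M] + n[I]) / G
    if focal in (M, I):                                 # pays to punish every defecting partner
        payoff -= gamma * (n[D] + n[I]) / G
    return payoff

# example: a moralist facing two defectors, one cooperator and one immoralist
print(group_payoff(M, np.array([D, D, C, I]), r=3.5, beta=1.2, gamma=0.4))
```

with these expressions the statements above are easy to check : for identical partners an immoralist never earns more than a defector , and a cooperator outperforms a moralist exactly by the punishment cost term .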
in this way ,brandt _ et al ._ discovered a coexistence of cooperators and defectors for certain parameter combinations .compared to these studies , our model assumes somewhat different replication and strategy updating rules .the main point , however , is that we have chosen long simulation times and scanned the parameter space more extensively , which revealed several new insights , for example , the possible coexistence of immoralists and moralists , even when a substantial number of defectors is present initially .when interpreting our results within the context of moral dynamics , our main discoveries for a society facing public goods games may be summarized as follows : 1 ._ victory over second - order free - riders : _ over a long enough time period , moralists fully eliminate cooperators , thereby solving the `` second - order free - rider problem '' .this becomes possible by spatial segregation of the two cooperative strategies c and m , where the presence of defectors puts moralists in a advantageous position , which eventually allows moralists to get rid of non - punishing cooperators ._ `` who laughs last laughs best effect '' : _ moralists defeat cooperators even when the defective strategies i and d are eventually eliminated , but this process is very slow . that is , the system behavior changes its character significantly even after very long times .this is the essence of the `` who laughs last laughs best effect '' .the finally winning strategy can be in a miserable situation in the beginning , and its victory may take very long .3 . _ `` lucifer s positive side effect '' : _ by permanently generating a number of defectors , small mutation rates can considerably accelerate the spreading of moralists . 4 ._ `` unholy collaboration '' of moralists with immoralists : _ under certain conditions , moralists can survive by profiting from immoralists .this actually provides the first explanation for the existence of defectors , who hypocritically punish other defectors , although they defect themselves .the occurrence of this strange behavior is well - known in reality and even experimentally confirmed .these discoveries required a combination of theoretical considerations and extensive computer simulations on multiple processors over long time horizons .for well - mixed interactions , defectors are the winners of the evolutionary competition among the four behavioral strategies c , d , m , and i , which implies a tragedy of the commons despite punishment efforts .the reason is that cooperators ( second - order free - riders ) spread at the cost of moralists , while requiring them for their own survival .conclusions from computer simulations are strikingly different , if the assumption of well - mixed interactions is replaced by the more realistic assumption of spatial interactions . when cooperators and defectors interact in space , it is known that some cooperators can survive through spatial clustering .however , it is not clear how the spatiotemporal dynamics and the frequency of cooperation would change in the presence of moralists and immoralists . would spatial interactions be able to promote the spreading of punishment and thereby eliminate second - order free - riders ? 
in order to explore this ,we have scanned a large parameter space .figure 1 shows the resulting state of the system as a function of the punishment cost and punishment fine after a sufficiently long transient time .if the fine - to - cost ratio and the synergy factor are low , defectors eliminate all other strategies .however , for large enough fines , cooperators and defectors are always eliminated , and moralists prevail ( fig . 1 ) . at larger values ,when the punishment costs are moderate , we find a coexistence of moralists with defectors without any cooperators . to understand why moralists can outperform cooperators despite additional punishment costs ,it is important to analyze the dynamics of spatial interactions . starting with a homogeneous strategy distribution ( fig .2a ) , the imitation of better - performing neighbors generates small clusters of individuals with identical strategies ( fig .`` immoralists '' die out quickly , while cooperators and moralists form separate clusters in a sea of defectors ( fig .the further development is determined by the interactions at the interfaces between clusters of different strategies ( figs .2d f ) . in the presence of defectors ,the fate of moralists is not decided by a _direct _ competition with cooperators , but rather by the success of both cooperative strategies against invasion attempts by defectors .if the -ratio is appropriate , moralists respond better to defectors than cooperators .indeed , moralists can spread so successfully in the presence of defectors that areas lost by cooperators are quickly occupied by moralists ( supplementary video s1 ) .this indirect territorial battle ultimately leads to the extinction of cooperators ( fig .2f ) , thus resolving the second - order free - rider problem . in conclusion, the presence of some _ conventional _ free - riders ( defectors ) supports the elimination of _ second - order _ free - riders .however , if the fine - to - cost ratio is high , defectors are eliminated after some time .then , the final struggle between moralists and cooperators takes such a long time that cooperators and moralists seem to coexist in a stable way .nevertheless , a very slow coarsening of clusters is revealed , when simulating over extremely many iterations .this process is finally won by moralists , as they are in the majority by the time the defectors disappear , while they happen to be in the minority during the first stage of the simulation ( see fig .we call this the `` who laughs last laughs best effect '' .since the payoffs of cooperators and moralists are identical in the absence of other strategies , the underlying coarsening dynamics is expected to agree with the voter model .note that there is always a punishment fine , for which moralists can outcompete all other strategies .the higher the synergy factor , the lower the -ratio required to reach the prevalence of moralists . yet , for larger values of , the system behavior also becomes richer , and there are areas for small fines or high punishment costs , where clusters with different strategies can coexist ( see figs .for example , we observe the coexistence of clusters of moralists and defectors ( see fig . 2 and supplementary video s1 ) or of cooperators and defectors ( see supplementary video s2 ) . finally , for low punishment costs but moderate punishment fines and synergy factors ( see fig .1d ) , the survival of moralists may require the coexistence with `` immoralists '' ( see fig . 
3 and supplementary video s3 ) .such immoralists are often called `` sanctimonious '' or blamed for `` double moral standards '' , as they defect themselves , while enforcing the cooperation of others ( for the purpose of exploitation ) .this is actually the main obstacle for the spreading of immoralists , as they have to pay punishment costs , while suffering from punishment fines as well .therefore , immoralists need small punishment costs to survive . as cooperators die out quickly for moderate values of ,the survival of immoralists depends on the existence of moralists they can exploit , otherwise they can not outperform defectors .conversely , moralists benefit from immoralists by supporting the punishment of defectors .note , however , that this mutually profitable interaction between moralists and immoralists , which appears like an `` unholy collaboration '' , is fragile : if is increased , immoralists suffer from fines , and if is increased , punishing becomes too costly . in both cases ,immoralists die out , and the coexistence of moralists and immoralists breaks down . despite this fragility, `` hypocritical '' defectors , who punish other defectors , are known to occur in reality .their existence has even been found in experiments . here , we have revealed conditions for their occurrence .in summary , the second - order free - rider problem finds a natural and simple explanation , without requiring additional assumptions , if the local nature of most social interactions is taken into account and punishment efforts are large enough .in fact , the presence of spatial interactions can change the system behavior so dramatically that we do not find the dominance of free - riders ( defectors ) as in the case of well - mixed interactions , but a prevalence of moralists via a `` who laughs last laughs best '' effect ( fig . 2 ) .moralists can escape disadvantageous kinds of competition with cooperators by spatial segregation .however , their triumph over all the other strategies requires the temporary presence of defectors , who diminish the cooperators ( second - order free - riders ) .finally , moralists can take over , as they have reached a superiority over cooperators ( which is further growing ) and as they can outcompete defectors ( conventional free - riders ) .our findings stress how crucial spatial or network interactions in social systems are .their consideration gives rise to a rich variety of possible dynamics and a number of continuous or discontinuous transitions between qualitatively different system behaviors .spatial interactions can even _ invert _ the finally expected system behavior and , thereby ,explain a number of challenging puzzles of social , economic , and biological systems .this includes the higher - than - expected level of cooperation in social dilemma situations , the elimination of second - order free - riders , and the formation of what looks like a collaboration between otherwise inferior strategies .by carefully scanning the parameter space , we found several possible kinds of coexistence between two strategies each : * moralists ( m ) and defectors ( d ) can coexist , when the disadvantage of cooperative behavior is not too large ( i.e. the synergy factor is high enough ) , and if the punishment fine is sufficiently large that moralists can survive among defectors , but not large enough to get rid of them . 
* instead of m and d ,moralists ( m ) and immoralists ( i ) coexist , when the punishment cost is small enough .the small punishment cost is needed to ensure that the disadvantage of punishing defectors ( i ) compared to non - punishing defectors ( d ) is small enough that it can be compensated by the additional punishment efforts contributed by moralists . * to explain the well - known coexistence of d and c , it is useful to remember that defectors can be crowded out by cooperators , when the synergy factor exceeds a critical value ( even when punishment is not considered ) . slightly below this threshold ,neither cooperators nor defectors have a sufficient advantage to get rid of the other strategy , which results in a coexistence of both strategies .generally , a coexistence of strategies occurs , when the payoffs at the interface between clusters of different strategies are balanced . in order to understand why the coexistence is possible in a certain parameter area rather than just for an infinitely small parameter set, it is important to consider that typical cluster sizes vary with the parameter values .this also changes the typical radius of the interface between the coexisting strategies and , thereby , the typical number of neighbors applying the same strategy or a different one . in other words , a change in the shape of a cluster can partly counter - balance payoff differences between two strategies by varying the number of `` friends '' and `` enemies '' involved in the battle at the interface between spatial areas with different strategies ( see fig . [ add ] ) . finally , we would like to discuss the robustness of our observations .it is well - known that the level of cooperation in the public goods game is highest in _ small _ groups .however , we have found that moralists can crowd out non - punishing cooperators also for group sizes of , 13 , 21 , or 25 interacting individuals , for example . in the limiting case of _ large _ groups , where everybody interacts with everybody else , we expect the outcome of the well - mixed case , which corresponds to defection by everybody ( if other mechanisms like reputation effects or abstaining are not considered ) .that is , the same mechanisms that can create cooperation among friends may _fail _ to establish shared moral standards , when spatial interactions are negligible .it would therefore be interesting to study , whether the fact that interactions in the financial system are global , has contributed to the financial crisis .typically , when social communities exceed a certain size , they need sanctioning institutions to stabilize cooperation ( such as laws , an executive system , and police ) .note that our principal discoveries are not expected to change substantially for spatial interactions within _ irregular _ grids ( i.e. neighborhoods different from moore neighborhoods ) . in case of _ network_ interactions , we have checked that small - world or random networks lead to similar results , when the degree distribution is the same ( see fig .[ add1 ] ) .a _ heterogeneous _ degree distribution is even expected to _ reduce _ free - riding ( given the average degree is the same ) . 
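the degree - preserving robustness check mentioned above ( small - world and random regular graphs obtained by rewiring a square lattice while keeping every player 's degree at 4 ) can be set up in a few lines . the sketch below uses networkx ; the rewiring fraction q and the lattice size are illustrative choices , not values taken from the paper .

```python
import networkx as nx

def rewired_lattice(L=50, q=0.2, seed=0):
    """periodic LxL square lattice whose links are rewired by degree-preserving
    double-edge swaps; q = 0 keeps the lattice, larger q approaches a random
    regular graph, while every node keeps exactly 4 neighbours."""
    G = nx.grid_2d_graph(L, L, periodic=True)            # every node has degree 4
    n_swaps = int(q * G.number_of_edges())
    if n_swaps > 0:
        nx.double_edge_swap(G, nswap=n_swaps, max_tries=100 * n_swaps, seed=seed)
    return G

G = rewired_lattice()
print({d for _, d in G.degree()})                        # {4}: the degree distribution is preserved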
finally , adding other cooperation - promoting mechanisms to our model such as direct reciprocity ( a shadow of the future through repeated interactions ) , indirect reciprocity ( trust and reputation effects ) , abstaining from a joint enterprize , or success - driven migration ,will strengthen the victory of moralists over conventional and second - order free - riders . in order to test the robustness of our observations ,we have also checked the effect of randomness ( `` noise '' ) originating from the possibility of strategy mutations .it is known that mutations may promote cooperation . according to the numerical analysis of the spatial public goods game with punishment ,the introduction of rare mutations does not significantly change the final _ outcome _ of the competition between moralists and non - punishing cooperators .second - order free - riders will always be a negligible minority in the end , if the fine - to - cost ratio and mutation rate allows moralists to spread . while a large mutation rate naturally causes a uniform distribution of strategies , a low level of strategy mutations can be even beneficial for moralists . namely , by permanently generating a number of defectors , small mutation rates can considerably accelerate the spreading of moralists , i.e. the slow logarithmic coarsening is replaced by another kind of dynamics .defectors created by mutations play the same role as in the phase ( see figs . 1 + 2 ) .they put moralists into an advantage over non - punishing cooperators , resulting in a faster spreading of the moralists ( which facilitates the elimination of second - order free - riders over realistic time periods ) . in this way, the presence of a few `` bad guys '' ( defectors ) can accelerate the spreading of moral standards .metaphorically speaking , we call this `` lucifer s positive side effect '' . the current study paves the road for several interesting extensions .it is possible , for example , to study _ antisocial _ punishment , considering also strategies which punish cooperators .the conditions for the survival or spreading of antisocial punishers can be identified by the _same _ methodology , but the larger number of strategies creates new phases in the parameter space . 
while the added complexity transcends what can be discussed here , the current study demonstrates clearly how differentiated the moral dynamics in a society facing public goods problems can be and how it depends on a variety of factors ( such as the punishment cost , punishment fine , and synergy factor ) .going one step further , evolutionary game theory may even prove useful to understand how moral feelings have evolved .furthermore , it would be interesting to investigate the _ emergence _ of punishment within the framework of a coevolutionary model , where both , individual strategies and punishment levels are simultaneously spread .such a model could , for example , assume that individuals show some exploration behavior and stick to successful punishment levels for a long time , while they quickly abandon unsuccessful ones .in the beginning of this coevolutionary process , costly punishment would not pay off .however , after a sufficiently long time , mutually fitting punishment strategies are expected to appear in the same neighborhood by coincidence .once an over - critical number of successful punishment strategies have appeared in some area of the simulated space , they are eventually expected to spread .the consideration of success - driven migration should strongly support this process .over many generations , genetic - cultural coevolution could finally inherit costly punishment as a behavioral trait , as is suggested by the mechanisms of strong reciprocity .we study the public goods game with punishment .cooperative individuals ( c and m ) make a contribution of 1 to the public good , while defecting individuals ( d and i ) contribute nothing .the sum of all contributions is multiplied by and the resulting amount equally split among the interacting individuals .a defecting individual ( d or i ) suffers a fine by each punisher among the interaction partners , and each punishment requires a punisher ( m or i ) to spend a cost on each defecting individual among the interaction partners . in other words , only defectors and punishing defectors ( immoralists ) are punished , and the overall punishment is proportional to the sum of moralists and immoralists among the neighbors .the scaling by serves to make our results comparable with models studying different groups sizes . denoting the number of so defined cooperators , defectors , moralists , and immoralists among the interaction partners by , , and , respectively, an individual obtains the following payoff : if it is a cooperator , it gets , if a defector , the payoff is , a moralist receives , and an immoralist obtains . our model of the spatial variant of thisgame studies interactions in a simple social network allowing for clustering .it assumes that individuals are distributed on a square lattice with periodic boundary conditions and play a public goods game with neighbors .we work with a fully occupied lattice of size with in fig . 
1 and in figs .24 ( the lattice size must be large enough to avoid an accidental extinction of a strategy ) .the initial strategies of the individuals are equally and uniformly distributed .then , we perform a random sequential update .the individual at the randomly chosen location belongs to five groups .( it is the focal individual of a moore neighborhood and a member of the moore neighborhoods of four nearest neighbors ) .it plays the public goods game with the interaction partners of a group , and obtains a payoff in all 5 groups it belongs to .the overall payoff is .next , one of the four nearest neighbors is randomly chosen .its location shall be denoted by and its overall payoff by .this neighbor imitates the strategy of the individual at location with probability \}$ ] .that is , individuals tend to imitate better performing strategies in their neighborhood , but sometimes deviate ( due to trial - and - error behavior or mistakes ) .realistic noise levels lie between the two extremes ( corresponding to unconditional imitation by the neighbor , whenever the overall payoff is higher than ) and ( where the strategy is copied with probability 1/2 , independently of the payoffs ) . for the noise level chosen in our study ,the evolutionary selection pressure is high enough to eventually eliminate poorly performing strategies in favor of strategies with a higher overall payoff .this implies that the resulting frequency distribution of strategies in a large enough lattice is independent of the specific initial condition after a sufficiently long transient time .close to the separating line between m and d+m in fig . 1 , the equilibration may require up to iterations ( involving updates each ) .we acknowledge partial financial support from the eu project qlectives and the eth competence center `` coping with crises in complex socio - economic systems '' ( ccss ) through eth research grant ch1 - 01 08 - 2 ( d.h . ) , from the hungarian national research fund ( grant k-73449 to a.s . and g.s . ) , the bolyai research grant ( to a.s . ) , the slovenian research agency ( grant z1 - 2032 - 2547 to m.p . ) , and the slovene - hungarian bilateral incentive ( grant bi - hu/09 - 10 - 001 to a.s ., m.p . and g.s . ) .d.h . would like to thank for useful comments by carlos p. roca , moez draief , stefano balietti , thomas chadefaux , and sergi lozano .nakamaru m , iwasa y ( 2005 ) the evolution of altruism by costly punishment in the lattice structured population : score - dependent viability versus score - dependent fertility ._ evolutionary ecology research _ 7 : 853 - 870 .flache a , hegselmann r ( 2001 ) do irregular grids make a difference ? relaxing the spatial regularity assumption in cellular models of social dynamics ._ journal of artificial societies and social simulation _ 4 : 4 , see http://www.soc.surrey.ac.uk/jasss/4/4/6.html . 
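the monte carlo procedure described in this methods section can be condensed into a short , self - contained sketch . two caveats : the payoff formulas and the imitation probability were partly lost in extraction , so the code below reconstructs them as the standard fermi rule 1 / { 1 + exp [ ( p_y - p_x ) / k ] } together with the payoffs given earlier ; and although the text speaks of a moore neighborhood , the five - group accounting ( a player plus its four nearest neighbors ) is what the sketch implements . the parameter values , lattice size and run length are purely illustrative ; a small mutation rate mu is included because , as discussed earlier , rare mutations accelerate the spreading of moralists .

```python
import numpy as np

C, D, M, I = 0, 1, 2, 3                       # cooperator, defector, moralist, immoralist
rng = np.random.default_rng(1)

def neighbours(idx, L):
    """four nearest neighbours of site idx on an LxL periodic lattice."""
    x, y = divmod(idx, L)
    return [((x - 1) % L) * L + y, ((x + 1) % L) * L + y,
            x * L + (y - 1) % L, x * L + (y + 1) % L]

def group_payoff(strats, members, focal, r, beta, gamma):
    """payoff of `focal` in the group formed by `members` (a site plus its 4 neighbours)."""
    G = len(members)
    n = np.bincount(strats[[m for m in members if m != focal]], minlength=4)
    s = strats[focal]
    pay = r * (n[C] + n[M] + (1 if s in (C, M) else 0)) / G - (1 if s in (C, M) else 0)
    if s in (D, I):
        pay -= beta * (n[M] + n[I]) / G       # fined by punishing group members
    if s in (M, I):
        pay -= gamma * (n[D] + n[I]) / G      # cost of punishing defecting members
    return pay

def overall_payoff(strats, site, L, r, beta, gamma):
    """sum of the payoffs collected in the five groups the site belongs to."""
    return sum(group_payoff(strats, [c] + neighbours(c, L), site, r, beta, gamma)
               for c in [site] + neighbours(site, L))

def update_step(strats, L, r, beta, gamma, K=0.5, mu=0.0):
    """one elementary step of the random sequential update with the fermi imitation rule."""
    x = int(rng.integers(L * L))
    y = int(rng.choice(neighbours(x, L)))
    if mu > 0 and rng.random() < mu:          # rare strategy mutation (optional)
        strats[y] = rng.integers(4)
        return
    px = overall_payoff(strats, x, L, r, beta, gamma)
    py = overall_payoff(strats, y, L, r, beta, gamma)
    if rng.random() < 1.0 / (1.0 + np.exp((py - px) / K)):
        strats[y] = strats[x]                 # the neighbour imitates the focal strategy

L = 30
strats = rng.integers(4, size=L * L)          # uniform random initial strategies
for _ in range(100 * L * L):                  # a short transient, for illustration only
    update_step(strats, L, r=3.5, beta=1.2, gamma=0.4, mu=0.001)
print(np.bincount(strats, minlength=4) / (L * L))   # final strategy frequencies (c, d, m, i)
```

scanning the punishment fine and cost with this routine , over much longer runs and larger lattices than the toy values above , is how phase diagrams like those in fig . 1 are obtained .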
of cooperation , the punishment cost , and the punishment fine .the displayed phase diagrams are for ( a ) , ( b ) , and ( d ) .( d ) enlargement of the small - cost area for .solid separating lines indicate that the resulting fractions of all strategies change continuously with a modification of the model parameters and , while broken lines correspond to discontinuous changes .all diagrams show that cooperators can not stop the spreading of moralists , if only the fine - to - cost ratio is large enough .furthermore , there are parameter regions where moralist can crowd out cooperators in the presence of defectors .note that the spreading of moralists is extremely slow and follows a voter model kind of dynamics , if their competition with cooperators occurs in the absence of defectors .computer simulations had to be run over extremely long times ( up to iterations for a systems size of ) . for similar reasons , a small level of strategy mutations ( which permanently creates a small number of strategies of all kinds , in particular defectors ) can largely accelerate the spreading of moralists in the m phase , while it does not significantly change the resulting fractions of the four strategies .the existence of immoralists is usually not relevant for the outcome of the evolutionary dynamics .apart from a very small parameter area , where immoralists and moralists coexist , immoralists are quickly extinct .therefore , the 4-strategy model usually behaves like a model with the three strategies c , d , and m only . as a consequence ,the phase diagrams for the latter look almost the same like the ones presented here .,width=432 ] , , and . *( a ) initially , at time , cooperators ( blue ) , defectors ( red ) , moralists ( green ) and immoralists ( yellow ) are uniformly distributed over the spatial lattice .( b ) after a short time period ( here , at ) , defectors prevail .( c ) after 100 iterations , immoralists have almost disappeared , and cooperators prevail , since cooperators earn high payoffs when organized in clusters .( d ) at , there is a segregation of moralists and cooperators , with defectors in between .( e ) the evolutionary battle continues between cooperators and defectors on the one hand , and defectors and moralists on the other hand ( here at ) .( f ) at , cooperators have been eliminated by defectors , and a small fraction of defectors survives among a large majority of moralists .interestingly , each strategy ( apart from i ) has a time period during which it prevails , but only moralists can maintain their majority . while moralists perform poorly in the beginning , they are doing well in the end .we refer to this as the `` who laughs last laughs best '' effect.,width=528 ] , , and , supporting the occurrence of individuals with ` double moral standards ' ( who punish defectors , while defecting themselves ) . 
*( a ) initially , at time , cooperators ( blue ) , defectors ( red ) , moralists ( green ) and immoralists ( yellow ) are uniformly distributed over the spatial lattice .( b ) after 250 iterations , cooperators have been eliminated in the competition with defectors ( as the synergy effect of cooperation is not large enough ) , and defectors are prevailing .( c e ) the snapshots at , , and show the interdependence of moralists and immoralists , which appears like a tacit collaboration .it is visible that the two punishing strategies win the struggle with defectors by staying together .on the one hand , due to the additional punishment cost , immoralists can survive the competition with defectors only by exploiting moralists .on the other hand , immoralists support moralists in fighting defectors .( f ) after 12000 iterations , defectors have disappeared completely , leading to a coexistence of clusters of moralists with immoralists.,width=528 ] in the stationary state , supporting an adaptive balance between the payoffs of two different strategies at the interface between competing clusters . * snapshots in the top rowwere obtained for low punishment fines , while the bottom row depicts results obtained for higher values of .( a ) coexistence of moralists and defectors for a synergy factor , punishment cost , and punishment fine .( b ) same parameters , apart from .( c ) coexistence of moralists and immoralists for , , and .( d ) same parameters , apart from .a similar change in the cluster shapes is found for the coexistence of cooperators and defectors , if the synergy factor is varied.,width=345 ] . *the graphs were constructed by rewiring links of a square lattice of size with probability , thereby preserving the degree distribution ( i.e. every player has 4 nearest neighbors ) . for small values of , small - world properties result , while for , we have a random regular graph . by keeping the degree distribution fixed, we can study the impact of randomness in the network structure independently of other effects .an inhomogeneous degree distribution can further promote cooperation .the results displayed here are averages over 10 simulation runs for the model parameters , , and .similar results can be obtained also for other parameter combinations.,width=432 ]
situations where individuals have to contribute to joint efforts or share scarce resources are ubiquitous . yet , without proper mechanisms to ensure cooperation , the evolutionary pressure to maximize individual success tends to create a tragedy of the commons ( such as over - fishing or the destruction of our environment ) . this contribution addresses a number of related puzzles of human behavior with an evolutionary game - theoretical approach , as it has been successfully used many times to explain the behavior of other biological species , from bacteria to vertebrates . our agent - based model distinguishes individuals applying four different behavioral strategies : non - cooperative individuals ( `` defectors '' ) , cooperative individuals abstaining from punishment efforts ( called `` cooperators '' or `` second - order free - riders '' ) , cooperators who punish non - cooperative behavior ( `` moralists '' ) , and defectors who punish other defectors despite being non - cooperative themselves ( `` immoralists '' ) . by considering spatial interactions with neighboring individuals , our model reveals several interesting effects : first , moralists can fully eliminate cooperators . this spreading of punishing behavior requires a segregation of behavioral strategies and solves the `` second - order free - rider problem '' . second , the system behavior changes its character significantly even after very long times ( `` who laughs last laughs best effect '' ) . third , the presence of a number of defectors can considerably accelerate the victory of moralists over non - punishing cooperators . fourth , in order to succeed , moralists may profit from immoralists in a way that appears like an `` unholy collaboration '' . our findings suggest that the consideration of punishment strategies allows us to understand the establishment and spreading of `` moral behavior '' by means of game - theoretical concepts . this demonstrates that quantitative biological modeling approaches are powerful even in domains that have so far been addressed with non - mathematical concepts . the complex dynamics of certain social behaviors becomes understandable as the result of an evolutionary competition between different behavioral strategies .
nowadays our data is often high - dimensional , massive and full of gross errors ( e.g. , corruptions , outliers and missing measurements ) . in the presence of gross errors , the classical principal component analysis ( pca ) method , which is probably the mostwidely used tool for data analysis and dimensionality reduction , becomes brittle a single gross error could render the estimate produced by pca arbitrarily far from the desired estimate . as a consequence , it is crucial to develop new statistical tools for robustifying pca .a variety of methods have been proposed and explored in the literature over several decades , e.g. , .one of the most exciting methods is probably the so - called rpca ( robust principal component analysis ) method by , built upon the exploration of the following low - rank matrix recovery problem : [ pb : lmr ] suppose we have a data matrix and we know it can be decomposed as where is a low - rank matrix in which each column is a data point drawn from some low - dimensional subspace , and is a sparse matrix supported on . except these mild restrictions ,both components are arbitrary .the rank of is unknown , the support set ( i.e. , the locations of the nonzero entries of ) and its cardinality ( i.e. , the amount of the nonzero entries of ) are unknown either. in particular , the magnitudes of the nonzero entries in may be arbitrarily large .given , can we recover both and , in a scalable and exact fashion ?the theory of rpca tells us that , very generally , when the low - rank matrix satisfies some _ incoherent conditions _( i.e. , the coherence parameters of are small ) , both the low - rank and the sparse matrices can be exactly recovered by using the following convex , potentially scalable program : where is the nuclear norm of a matrix , denotes the norm of a matrix seen as a long vector , and is a parameter . besides of its elegance in theory , rpca also has good empirical performance in many practical areas , e.g. , image processing , computer vision , radar imaging , magnetic resonance imaging , etc . while complete in theory and powerful in reality, rpca can not be an ultimate solution to the low - rank matrix recovery problem [ pb : lmr ] .indeed , the method might not produce perfect recovery even when the latent matrix is strictly low - rank .this is because , seen from the aspect of mathematics , rpca requires to satisfy some incoherent conditions , which , however , might not hold in reality . in a physical sense, the reason is that rpca captures only the low - rankness property , which should not be the only property of our data , but essentially ignores the _ extra structures _ ( beyond low - rankness ) widely existed in data : given the situation that is low - rank , i.e. , the column vectors of locate on a low - dimensional subspace , it is quite normal that may have some extra structures , which specify in more detail _ how _ the data points ( i.e. , the column vectors of ) locate on the subspace .figure [ fig : cluster ] demonstrates a typical example of extra structures ; that is , the clustering structure which is ubiquitous in modern applications . whenever the data is exhibiting some clustering structure, the coherence parameters might be large and therefore rpca might be unsatisfactory . more precisely ,as will be shown in this paper , while the rank of is fixed and the underlying cluster number goes large , the coherence of keeps heightening and thus , arguably , the performance of rpca drops . 
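for readers who want to reproduce the behaviour of the convex program above , the following is a minimal sketch of the widely used inexact augmented lagrange multiplier ( alm ) scheme for rpca : singular value thresholding for the nuclear norm and entrywise soft thresholding for the l1 norm . the regularization lambda = 1 / sqrt ( max ( n1 , n2 ) ) , the dual initialisation and the penalty schedule follow common practice in the rpca literature rather than anything specific to this paper .

```python
import numpy as np

def svt(A, tau):
    """singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(A, tau):
    """entrywise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def rpca_ialm(X, lam=None, rho=1.5, tol=1e-7, max_iter=500):
    """inexact alm for  min ||L||_* + lam * ||S||_1  s.t.  X = L + S."""
    n1, n2 = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(n1, n2))
    norm_X, spec_X = np.linalg.norm(X), np.linalg.norm(X, 2)
    mu, mu_max = 1.25 / spec_X, 1.25 / spec_X * 1e7
    Y = X / max(spec_X, np.max(np.abs(X)) / lam)        # standard dual initialisation
    L, S = np.zeros_like(X), np.zeros_like(X)
    for _ in range(max_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = soft(X - L + Y / mu, lam / mu)
        R = X - L - S
        Y += mu * R
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(R) / norm_X < tol:
            break
    return L, S

# toy check: a rank-5 matrix plus 5% gross corruptions is recovered almost exactly
rng = np.random.default_rng(0)
L0 = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 200))
S0 = (rng.random((200, 200)) < 0.05) * rng.choice([-5.0, 5.0], size=(200, 200))
Lhat, _ = rpca_ialm(L0 + S0)
print(np.linalg.norm(Lhat - L0) / np.linalg.norm(L0))
```

whether this succeeds depends on the coherence of the low - rank component , which is exactly the issue examined next .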
to well handle _ _ coherent data _ _ , a straightforward approach is to _ avoid _ the coherence parameters of . nevertheless ,as explained in , the coherence parameters are indeed _ necessary _ for matrix recovery ( if there is no additional condition available ) . even more , this paper shall further indicate that the coherence parameters are related in nature to some extra structures intrinsically existed in and therefore _ can not _ be discarded simply .interestingly , we show that it is possible to _ avoid _ the coherence parameters by imposing some _ additional conditions _ , which are easy to obey in supervised environments and can also be approximately satisfied in unsupervised environments .our study is based on the following convex program termed low - rank representation ( lrr ) : where is a size- dictionary matrix constructed in advance .suppose is the optimal solution with respect to . then lrr uses to restore .it is easy to see that lrr falls back to rpca when ( identity matrix ) , and it can actually be further proved that the recovery produced by lrr is the same as rpca whenever the dictionary is orthogonal . ] , and is a parameter . in order for lrr to avoid the coherence parameters which have potential to be large in the presence of extra structures , we prove that it is sufficient to construct in advance a dictionary matrix which is low - rank by itself .this additional condition ( i.e. , the dictionary is low - rank ) gives a generic prescription to defend the possible infections raised by coherent data , providing an elementary criterion for learning the dictionary matrix .subsequently , we propose a simple and effective algorithm that utilizes the output of rpca to construct the dictionary in lrr .our extensive experiments demonstrated on randomly generated matrices and motion data show promising results . in summary ,the contributions of this paper include : * for the first time , this paper studies the problem of recovering low - rank , but coherent matrices from their corrupted versions .we investigate the physical regime where coherent data arises the widely existed clustering structure is a typical example that leads to coherent data .we prove some basic theories for resolving the problem of recovering coherent data , and also establish a practical algorithm that works better than rpca in our experiments . * the studies of this paper help to understand the _ physical _ meaning of coherence , which is now standard and widely used in various literatures , e.g. , .we show that the coherence parameters are not `` assumptions '' for accomplishing a proof , but rather some excellent quantities that relate in nature to the _ extra structures _ ( beyond low - rankness ) intrinsically existed in .* this paper provides insights regarding the lrr model proposed by .while the special case of has been extensively studied , the lrr model with general dictionaries was not fully understood .we show that lrr equipped with proper dictionaries could well handle coherent data .* the idea of replacing with is essentially related to the spirit of matrix factorization which has been explored for long , e.g. , . in that sense , the explorations of this paper help to understand why factorization techniques are useful .the remainder of this paper is organized as follows .section [ sec : notation ] summarizes mathematical notations used throughout this paper . 
in section [ sec : principle ] , we explore the problem of recovering coherent data from corrupted observations , providing some theories and an algorithm for resolving the problem .section [ sec : proof ] presents the complete proof procedure of our main result .section [ sec : exp ] demonstrates experimental results and section [ sec : con ] concludes this paper .capital letters such as are used to represent matrices , and accordingly , {ij} ] , {ij}|\} ] .the greek letter and its variants ( e.g. , subscripts and superscripts ) are reserved to denote the coherence parameters of a matrix .we shall also reserve two lower case letters , and , to respectively denote the data dimension and the number of data points , and we use the following two symbols throughout this paper : a complete list of notations can be found in appendix [ sec : app : notations ] for convenience of readers .in this section , we shall firstly investigate the physical regime that raises coherent data , and then discuss the problem of recovering coherent data from corrupted observations , providing some basic principles and an algorithm for resolving the problem .notice that the rank function can not fully capture all characteristics of , and thus it is indeed necessary to define some quantities for measuring the effects of various extra structures ( beyond low - rankness ) such as the clustering structure demonstrated in figure [ fig : cluster ] .the _ coherence _ parameters defined in are excellent exemplars of such quantities . for an matrix with rank and svd ,some of its important properties can be characterized by two coherence parameters , denoted as and .the first coherence parameter , , which characterizes the column space identified by , is defined as where denotes the standard basis .the second coherence parameter , , which characterizes the row space identified by , is defined as in , another coherence parameter , called as the third coherence parameter and denoted as , is also introduced : notice that is not indispensable , as it is actually a `` derivative '' of and : simple calculations give that .the analysis of work does not need to access .we include it just for the sake of consistence with .the analysis in merges the above three parameters into a single one : . as will be seen later , the behaviors of those three coherence parameters are different from each other , and thus it is indeed more adequate to consider them individually . have proven that the success condition ( regarding ) of rpca is where and is some numerical constant .so , rpca will be less successful when the coherence parameters are considerably larger : the success condition is narrowed when goes large . as an extreme example , consider the case where the latent matrix is one in only one column and zero everywhere else .such a matrix produces , and thus the success condition is invalid . in this subsection , we shall further show that the widely existed clustering structure can enlarge the coherence parameters and , accordingly , degrades the performance of rpca .are fixed to be and 100 , respectively .the underlying cluster number is varying from 1 to 50 . 
is fixed as a sparse matrix with 13% nonzero entries .( a ) the first coherence parameter vs cluster number .( b ) vs cluster number .( c ) vs cluster number .( d ) recover error ( produced by rpca ) vs cluster number .the numbers shown in above figures are averaged from 100 random trials .the recover error is computed as , where denotes an estimate of .,scaledwidth=95.0% ] given the situation that is low - rank , i.e. , , the data points ( i.e. , column vectors of ) should be sampled from a -dimensional subspace .yet the sampling is unnecessary to be _uniform_. indeed , a more realistic interpretation is to consider the data points as samples from the union of number of subspaces ( i.e. , clusters ) , and the sum of those multiple subspaces together has a dimension .that is to say , there are multiple `` small '' subspaces inside one -dimensional `` large '' subspace , as exemplified in figure [ fig : cluster ] .it is arguable that such a structure of multiple subspaces exists widely in various domains , e.g. , face , texture and motion .whenever the low - rank matrix is exhibiting such clustering behaviors , the second coherence parameter will increase with the cluster number underlying , as shown in figure [ fig : ic ] .when the coherence is heightening , suggests that the performance of rpca will drop , as verified in figure [ fig : ic](d ) . for the ease of citation , we call the phenomena shown in figure [ fig : ic](b)(d ) as the `` -phenomenon '' . to see why the second coherence parameter increases with the cluster number underlying , please refer to appendix [ app : why ] .as can be seen from figure [ fig : ic](a ) , the first coherence parameter is _ invariant _ to the variation of the clustering number .this is because the behaviors of the data points ( i.e. , column vectors ) can only affect the row space , while is defined on the column space .nevertheless , if the row vectors of also own some clustering structure , could be large as well .this kind of data exists widely in text documents and we leave it as future work . to accurately recover coherent matrices from their corrupted versions , one may establish some parametric models to _ capture _ the extra structures which produce high coherence . however , it is usually hard , if not impossible , to know in advance what kind of extra structures there are and which models are appropriate to use .even if the modalities of the extra structure are known , e.g. , the mixture of multiple subspaces shown in figure [ fig : cluster ] , such a strategy still needs to face some difficult problems , e.g. , the estimate of the cluster number . in sharp contrast , it is much simpler to devise an approach that can _ avoid _ the second coherence parameter . unfortunately , as explained in , the coherence parameters are _ necessary _ for identifying accurately the success conditions of matrix recovery .even more , the -phenomenon actually implies that is related in nature to some intrinsic structures of and thus can not be eschewed freely .interestingly , we shall show that lrr can avoid by using some _ additional conditions _ , which are possible to obey in both supervised and unsupervised environments .+ * main result : * we shall show that , when the dictionary matrix itself is low - rank , the recovery performance of lrr does not depend on . 
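before turning to the formal result , the mu - phenomenon itself is easy to reproduce numerically : keep the rank of the low - rank component fixed , spread it over k clusters whose dimensions sum to that rank , and the second coherence parameter grows with k while the first stays roughly flat . the construction below mirrors the one described for the figure above ; the ambient dimension , the number of data points and the total rank are illustrative choices , since the exact values were lost in extraction .

```python
import numpy as np

def coherence(L):
    """first and second coherence parameters of L, computed from its skinny svd."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    r = int(np.sum(s > 1e-9 * s[0]))
    U, V = U[:, :r], Vt[:r, :].T
    n1, n2 = L.shape
    mu1 = n1 / r * np.max(np.sum(U ** 2, axis=1))    # max squared row norm of U
    mu2 = n2 / r * np.max(np.sum(V ** 2, axis=1))    # max squared row norm of V
    return mu1, mu2

def clustered_lowrank(n1=100, n2=200, total_rank=20, k=1, rng=None):
    """rank-`total_rank` matrix whose n2 columns come from k subspaces of
    dimension total_rank / k each (k must divide both total_rank and n2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    d, m = total_rank // k, n2 // k
    blocks = [rng.standard_normal((n1, d)) @ rng.standard_normal((d, m)) for _ in range(k)]
    L = np.hstack(blocks)
    return L / np.max(np.abs(L))

rng = np.random.default_rng(0)
for k in (1, 2, 4, 5, 10, 20):
    mu1, mu2 = coherence(clustered_lowrank(k=k, rng=rng))
    print(f"clusters = {k:2d}   mu1 = {mu1:5.2f}   mu2 = {mu2:5.2f}")
```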
our main result is presented in the following theorem ( the detailed proof procedure is deferred until section [ sec : proof ] ) .[ thm : noiseless ] let with svd be a column - wisely unit - normed ( i.e. , ) dictionary matrix which satisfies ( i.e. , is a subspace of ) . for any and some numerical constant , if then with probability at least , the optimal solution to the lrr problem with is unique and exact , in a sense that where is the optimal solution to . by , the column space of approximately have the same properties as , and thus , roughly , .so , as aforementioned , this paper needs to assume that the first coherence parameter of is small and only addresses the cases where the second coherence parameter might be large .it is worth noting that the restriction is looser than that of prca is probably the `` finest '' bound one could accomplish in theory . ] , which requires .the requirement of column - wisely unit - normed dictionary ( i.e. , ) is purely for complying the parameter estimate of , which is consistent with rpca .the condition , i.e. , is a subspace of , is indispensable if we ask for exact recovery , because is implied by the equality .this necessary condition , together with the condition that is low - rank , indeed provides an elementary criterion for learning the dictionary matrix in lrr .figure [ fig : demo ] presents an example , which further confirms our main result : lrr is able to avoid as long as and is low - rank .note that it is unnecessary for the dictionary to strictly satisfy , and lrr is actually tolerant to the `` errors '' possibly existing in the dictionary . .in this experiment , is a rank-1 matrix with one column being ( i.e. , a vector of all ones ) and everything else being zero .thus , and .the sparse matrix is with bernoulli values , and its nonzero fraction is set as 5% .the dictionary is set as ] denotes the column of a matrix .solve for by optimizing the lrr problem with and . *output : * .whenever the recovery produced by rpca is already exact , the claim in theorem [ thm : noiseless ] gives that the recovery produced by our algorithm [ alg : mr ] is exact as well .when rpca fails to exactly recover , the produced dictionary is still possible to satisfy the success conditions required by theorem [ thm : noiseless ] , namely is low - rank and .this is because those conditions are weaker than .thus , in terms of exactly recovering from a given , the success probability of our algorithm [ alg : mr ] is greater than or equal to that of rpca .also , in a computational sense , algorithm [ alg : mr ] does not double rpca , although there are two convex programs in our algorithm .in fact , according to our simulations , usually the computational time of algorithm [ alg : mr ] is just 1.2 times as much as rpca .the reason is that , as has been explored by , the complexity of solving the lrr problem is ( assume ) , which is much lower than that of rpca ( which requires ) provided that the obtained dictionary matrix is fairly low - rank ( i.e. , is small ) .one may have noticed that the procedure of algorithm [ alg : mr ] could be made iterative , i.e. 
, one can consider as a new estimate of and use it to further update the dictionary matrix , and so on .nevertheless , we empirically find that such an iterative procedure often converges within two iterations .hence , for the sake of simplicity , we do not consider the iterative strategies in this paper .the same as in rpca , we assume that the locations of the corrupted entries are selected _ uniformly at random_. in more details , we work with the bernoulli model , where s are i.i.d .variables taking value one with probability and zero with probability , so that the expected cardinality of is . for the ease of presentation ,we assume that the signs of the nonzero entries of are symmetric bernoulli values : {ij}=\left\{\begin{array}{ll } 1 , & \textrm{with probability } \frac{\rho_0}{2},\\ 0 , & \textrm{with probability } 1-\rho_0,\\ -1,&\textrm{with probability } \frac{\rho_0}{2}. \end{array}\right.\end{aligned}\ ] ] for general sign matrices , the same as in rpca , our theorem [ thm : noiseless ] can still be proved by globally placing an elimination theorem and a derandomization scheme . yetthe success conditions in theorem [ thm : noisy ] have not been proven when has an arbitrary distribution , because the elimination theorem does not hold in the noisy case .the following two lemmas are well - known and will be used multiple times in the proof .[ app : lem : basic:1]for any matrix , the following holds : * let the svd of be . then we have .* let the support set of be .then we have .[ lem : basic:2 ] for any matrices and of consistent sizes , first of all , we would like to prove that the sparse matrix does not locate in the column space of the dictionary , i.e. , or as equal . provided that is fairly low - rank , the analysis in gives that holds with high probability for any , where denotes the linear space given by .since and , it is natural to anticipate that is smaller than 1 with high probability .the difference is that we only need the first coherence parameter to finish the proof .following the techniques in , we have the following lemma to bound the operator norm of .[ lem : papo ] suppose with . then for any , holds with probability at least , provided that for any matrix , we have and so which gives note that the frobenius norm of a matrix is equivalent to the vector norm , while considering the matrix as a long vector . in that sense, we have the definition of gives then by using the results in and following the proof procedure of , we have that holds with probability at least for some numerical constants and . for any , setting and gives that holds with probability at least , provided that . by and the triangle inequality , finally , the fact completes the proof .while the above lemma implies that , we often need to bound the sup - norm of .the next lemma will show that , when the signs of the matrix entries are independent symmetric bernoulli variables , the sup - norm could be arbitrarily small .[ lem : inf : pas0 ] suppose is a symmetric linear projection with , and is a random sign matrix with i.i.d .entries distributed as {ij}=\left\{\begin{array}{ll } 1 , & \textrm{with probability } \frac{1}{2},\\ -1,&\textrm{with probability } \frac{1}{2}. 
\end{array}\right.\end{aligned}\ ] ] for any , holds with high probability as long as let {ij} ] closely relates to the length of the column of .so it may not lose much accuracy to use the relaxation of .for the sake of consistency , we use the norm to define as follows the third coherence parameter of , associating with a dictionary matrix : for of rank , its third coherence parameter , associating with a non - orthonormal , column - wisely unit - normed dictionary matrix which also satisfies , is defined as where and are the left and right singular vectors of , respectively , and is the condition number of the matrix . .( a ) and are fixed , while is varying .( b ) and are fixed , while is varying .( c ) is fixed , while and are varying ( ) .( d ) are fixed , while is varying . in these experiments ,the dictionary is with normalized columns , where is an random gaussian matrix .the numbers shown in above figures are averaged from 10 random trials.,scaledwidth=98.0% ] figure [ fig : u3a ] demonstrates some properties about this particular coherence parameter , .it can be seen that is approximately a numerical constant equaling to 1 , as long as the rank is not too high such that the dictionary matrix is well - conditioned . by lemma [ lem : papo ] , . by , thus we have by and setting , since ( provided that is well - conditioned ) , we claim for the sake of simplicity may not be the `` best '' choice . ] . now the dual condition is proved by we claim instead of because lemma [ lem : inf : pas0 ] requires , which follows from .our main result , theorem [ thm : noiseless ] , is useful in both supervised and unsupervised environments . for the fair of comparison , in the experiments of this paperwe shall focus on demonstrating the superiorities of our unsupervised algorithm [ alg : mr ] over rpca .we first verify the effectiveness of our algorithm [ alg : mr ] on randomly generated matrices .we generate a collection of data matrices according to the model of : is a support set chosen at random ; is created by sampling 200 data points from each of 5 randomly generated subspaces , and its values are normalized such that ; is consisting of random values from bernoulli .the dimension of each subspace varies from 1 to 20 with step size 1 , and thus the rank of varies from 5 to 100 with step size 5 .the fraction varies from 2.5% to 50% with step size 2.5% . for each pair of rank and support size , we run 10 trials , resulting in a total of 4000 ( ) trials .vs rpca on recovering randomly generated matrices , both using .a curve shown in the third subfigure is the boundary for a method to be successful the recovery is successful for any pair that locates below the curve . here , the success is in a sense that , where denotes an estimate of .,scaledwidth=95.0% ] figure [ fig : recover ] compares our algorithm [ alg : mr ] to rpca , both using .it can be seen that the learnt dictionary matrix works distinctly better than the identity dictionary adopted by rpca .namely , the success area ( i.e. , the area of the white region ) of our algorithm is 46% wider than that of rpca !one may have noticed that rpca owns a region to be exactly successful .this is because in these experiments the coherence parameters are not too large , namely and .whenever reaches the upper bound , e.g. , the example shown in figure [ fig : demo ] , the success region of rpca will vanish .we now experiment with 11 additional sequences attached to the hopkins155 database . 
in those sequences , about 10% of the entries in the data matrix of trajectories are unobserved ( i.e. , missed ) due to visual occlusion .we replace each missing entry with a number from bernoulli , resulting in a collection of corrupted trajectory matrices for evaluating the effectiveness of matrix recovery algorithms .we perform subspace clustering on both the corrupted trajectory matrices and the recovered versions , and use the clustering error rates produced by existing subspace clustering methods as the evaluation metrics .we consider three state - of - the - art subspace clustering methods : shape interaction matrix ( sim ) , low - rank representation with ( which is referred to as `` lrrx '' ) and sparse subspace clustering ( ssc ) . .clustering error rates ( % ) on 11 corrupted motion sequences . [ cols="^,^,^,^,^,^,^",options="header " , ]when the data points are sampled from a low - rank subspace _ uniformly at random _ , it has been proven by that the first and second coherence parameters are bounded .namely , and for some numerical constant independent of the characteristics of . although correct , such a property is not enough to interpret the phenomenon that the coherence parameters increase with the cluster number underlying .hence , it is necessary to establish a more accurate rule to characterize the coherence parameters . through extensive experiments ,we find that the first and second coherence parameters actually follow the well - known zipf s law . more precisely ,if the data points ( which form the column vectors of ) are uniformly sampled from a -dimensional subspace , then , roughly , the logarithm of coherence is inversely proportional to the logarithm of .that is , where and are two constants .the results in figure [ fig : zipf ] verify the above zipf s law .note that the zipf s law can also induce the boundedness property proved by .namely , approximately gives that and .the above zipf s law suggests that the coherence must be inversely proportional to the rank of data .this is intuitively interpretable .let {ij} ] and {ij}=[v_0]_{i_1j_1 } , \forall{}i , j , i_1,j_1 ] , where with svd is a matrix of data points from the subspace , then is equivalent to a block - diagonal matrix that has nonzero entries only on number of blocks : .\end{aligned}\ ] ] in this case , it is demonstrable that the second coherence parameter depends on the cluster number . for the convenience of analysis , we assume that the dimensions of all subspaces are equal , i.e. , , and the sampling in each subspace is _uniform_. then the zipf s law gives where is the cluster number .hence , approximately , the second coherence parameter will increase with the cluster number underlying .let be an optimal solution to .denote , and .then we have provided that , the proof process of lemma [ app : lem : f ] shows that by the optimality of , which leads to hence , by , where .by , matrix , dictionary matrix , parameter . .alternating minimization : * 1.1 . 
* fix the others and update by * 1.2 .* fix the others and update by * 1.3 .* fix the others and update by * 2 .* update the lagrange multipliers and the parameter in this work , we use the exact alm method to solve the optimization problem .we first convert to the following equivalent problem : this problem can be solved by the alm method , which minimizes the following augmented lagrange function : with respect to , and , respectively , by fixing the other variables , and then updating the lagrange multipliers and .algorithm [ alg : alm : lrr ] summarizes the whole procedure of the optimization procedure .martin fischler and robert bolles .random sample consensus : a paradigm for model fitting with applications to image analysis and automated cartography ._ communications of the acm _ , 240 ( 6):0 381395 , 1981 . qifa ke and takeo kanade .robust l norm factorization in the presence of outliers and missing data by alternative convex programming . in _ ieee conference on computer vision and pattern recognition _ , pages 739746 , 2005 .guangcan liu , zhouchen lin , xiaoou tang , and yong yu .unsupervised object segmentation with a hybrid graph model ( hgm ) ._ ieee transactions on pattern analysis and machine intelligence _ , 320 ( 5):0 910924 , 2010 .issn 0162 - 8828 .guangcan liu , zhouchen lin , shuicheng yan , ju sun , yong yu , and yi ma .robust recovery of subspace structures by low - rank representation ._ ieee transactions on pattern analysis and machine intelligence _ , 350 ( 1):0 171184 , 2013 .yigang peng , arvind ganesh , john wright , wenli xu , and yi ma .rasl : robust alignment by sparse and low - rank decomposition for linearly correlated images . _ ieee transactions on pattern analysis and machine intelligence _ , 340 ( 11):0 22332246 , 2012 .
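As a concrete companion to the optimization just described, the following is a minimal sketch of an ALM-style solver for the dictionary-based program min ||Z||_* + lambda*||E||_1 s.t. X = A Z + E, using an auxiliary split of the nuclear-norm variable in the spirit of the equivalent problem mentioned above. It is a sketch under stated assumptions, not the paper's implementation: the l1 penalty on the corruption term, the parameter defaults, and the simpler inexact (single-pass) update order are all assumptions, whereas the text uses the exact ALM.

```python
# Sketch of an (inexact) ALM solver for  min ||Z||_* + lam*||E||_1  s.t.  X = A Z + E,
# with the split Z = J. Penalty choice and defaults are assumptions of this sketch.
import numpy as np

def soft_threshold(M, tau):
    """Entrywise shrinkage: proximal operator of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def dictionary_lrr_alm(X, A, lam=0.1, mu=1e-2, rho=1.5, mu_max=1e6,
                       n_iter=500, tol=1e-7):
    d, n = X.shape
    k = A.shape[1]
    Z = np.zeros((k, n)); J = np.zeros((k, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((k, n))
    AtA_I_inv = np.linalg.inv(A.T @ A + np.eye(k))
    for _ in range(n_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)                          # nuclear-norm block
        Z = AtA_I_inv @ (A.T @ (X - E) + J + (A.T @ Y1 - Y2) / mu)
        E = soft_threshold(X - A @ Z + Y1 / mu, lam / mu)       # sparse-corruption block
        R1 = X - A @ Z - E                                      # constraint residuals
        R2 = Z - J
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E
```

The recovered low-rank component is `A @ Z`; setting `A` to the identity matrix reduces the program to the usual RPCA formulation, which is how a like-for-like comparison in the experiments above can be run.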
The recently established RPCA method provides a convenient way to restore low-rank matrices from grossly corrupted observations. While elegant in theory and powerful in practice, RPCA is not necessarily an ultimate solution to the low-rank matrix recovery problem. Indeed, its performance can be imperfect even when the data are strictly low-rank, because RPCA favors incoherent data, a preference that can conflict with natural structures in the data. As a typical example, consider the clustering structure that is ubiquitous in modern applications: as the number of clusters grows, the coherence parameters of the data keep increasing, and the recovery performance of RPCA degrades accordingly. We show that low-rank representation (LRR) can overcome the challenges raised by coherent data, provided the dictionary in LRR is configured appropriately. Specifically, we prove that if the dictionary itself is low-rank, then LRR avoids the coherence parameters that can potentially be large. This provides an elementary principle for dealing with coherent data and leads naturally to a practical algorithm for obtaining proper dictionaries in unsupervised settings. Our extensive experiments on randomly generated matrices and real motion sequences show promising results.
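For reference, the first and second coherence parameters invoked throughout the discussion above can be computed directly from a singular value decomposition. The definitions below are the ones standard in the matrix-recovery literature (subspace leverage scores and the maximal entry of U V^T); they are assumed here because the paper's own expressions were lost in extraction, and the function name is ours.

```python
# Standard (assumed) coherence parameters of a low-rank matrix L with SVD L = U S V^T.
import numpy as np

def coherence_parameters(L, rank=None):
    n1, n2 = L.shape
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    r = rank if rank is not None else int((s > 1e-8 * s[0]).sum())
    U, V = U[:, :r], Vt[:r, :].T
    mu1 = max((n1 / r) * (U ** 2).sum(axis=1).max(),      # largest row leverage of U ...
              (n2 / r) * (V ** 2).sum(axis=1).max())      # ... and of V
    mu2 = (n1 * n2 / r) * np.abs(U @ V.T).max() ** 2      # largest entry of U V^T
    return mu1, mu2
```

Evaluating these on data drawn from an increasing number of clusters illustrates the growth of coherence with cluster number that motivates the dictionary-based approach.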
it has been 50 years since joseph weber first embarked on a serious experimental program to try to detect gravitational waves directly , motivated by the possibility of detecting signals from sources such as a core - collapse supernova or a binary neutron star .the intervening years have seen great advances in technologies and new techniques for detecting gravitational waves , from much - improved `` weber bars '' to highly sensitive broadband interferometers , doppler tracking of spacecraft such as cassini , long - term campaigns to monitor pulse arrival times from stable pulsars , and mature plans for long - baseline interferometer networks in space ( namely lisa and decigo ) . in parallel , after the discovery of the first binary pulsar in 1974 , radio pulse timing campaigns with a number of short - period binary pulsars have provided compelling `` indirect '' evidence for the existence of gravitational radiation as well as precise experimental tests of the general theory of relativity .theoretical work and numerical modeling have provided a much better understanding of the likely gravitational - wave ( gw ) signatures of the original leading source candidates core - collapse supernovae and binary neutron stars as well as many other expected or plausible gw sources , including binary systems with supermassive or stellar - mass black holes , short - period white dwarf binaries , non - axisymmetric spinning or perturbed neutron stars , cosmic strings , early - universe processes , and more .( general overviews of gw sources may be found in and , for instance . ) and here is what our direct searches have yielded so far : _nothing_. the lack of a directly detected signal is not surprising , based on our limited knowledge of source populations and on the current sensitivity levels of the detectors .further instrumental improvements are on the way , including substantial upgrades to the current large interferometers , proposals to build additional interferometers , pulse timing measurements of more pulsars for longer time spans with better precision , and eventually the launch of space missions to open up the low - frequency window that is certain to be rich with signals . according to current schedules , we are sure to be detecting signals and doing gw astronomy around the middle of the coming decade. however , in this article i will argue that we are _ already _ doing gw astronomy . in the next sectioni will summarize and interpret several completed searches for which the lack of a detectable signal provides some relevant information about the population and/or astrophysics of plausible sources .i will then project forward to the time when signals _ are _ detected and discuss what we may learn from them , and how they will fundamentally change the field of gw astronomy .in 1969 weber claimed that his detectors had produced definitive evidence for the discovery of gravitational waves , the first in a series of claims by him that could not be reproduced by others and were ultimately discredited .nevertheless , his attempts inspired an experimental community that continued to improve the detection technologies , cooling bars to cryogenic temperatures to minimize thermal noise and exploring other detection methods .large cryogenic bar detectors allegro , explorer , nautilus , niobe , and auriga began operating for extended periods with good sensitivity in the 1990s . 
analyzing the data from two or more detectors togethersubstantially reduced the false alarm rate from spurious signals in the individual detectors ( from mechanical vibrations , cosmic rays , etc . ) and enabled searches for weaker and less - frequent transient signals .this culminated in the 1997 formation of the international gravitational event collaboration ( igec ) . around the same time , gw searches with prototype interferometric detectors ( _ e.g. _ ) gave way to the commissioning and eventual operation of the full - scale interferometers tama300 , ligo , geo600 and virgo , which have by now surpassed the bars in searching for `` high - frequency '' gw signals ( above hz ) .also , the past several years have seen advances in using multiple millisecond pulsars to search for gw signals at frequencies around hz , for instance with the parkes pulsar timing array project . over the past decade ,dozens of papers have been published to report results from searches for various types of gw signals . aside from a few hints of excess event candidates that were not confirmed ,all searches so far have yielded null results . from theseare derived upper limits on the rate and/or strength of gw signals reaching the detectors , or alternatively on the possible population of sources . in this sectioni highlight several of these results which represent significant steps toward gravitational - wave astronomy .many of these signals are expected to be detectable by the current instruments only if they originate relatively nearby ; therefore we begin by exploring our cosmic neighborhood .our galaxy is thought to contain neutron stars , of which a few thousand have been detected as radio or x - ray pulsars .a rapidly spinning neutron star can emit periodic gravitational waves through a number of mechanisms , including a static deformation that breaks axisymmetry , persistent matter oscillations ( _ e.g .r_-modes ) , or free precession .the crab pulsar , at a distance of about 2 kpc , is a particularly interesting neighbor . with a current spin frequency of and spin - down rate of hz / s , its energy loss rate is estimated to be w. 
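The quoted energy loss rate follows from elementary rotational kinematics, and the same inputs set the spin-down strain limit and the ellipticity scale discussed next. A rough numerical check is sketched below; the moment of inertia, spin frequency, spin-down rate, and distance are nominal Crab values inserted by hand (the figures in the text did not survive extraction), and the formulas are the standard ones for a triaxial rotator emitting gravitational waves at twice the spin frequency.

```python
# Back-of-the-envelope check with nominal (assumed) Crab pulsar parameters.
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
I = 1e38             # kg m^2, canonical neutron-star moment of inertia (assumed)
f_rot = 29.7         # Hz, spin frequency (nominal)
f_dot = 3.7e-10      # Hz/s, magnitude of the spin-down rate (nominal)
d = 2.0 * 3.086e19   # m, distance of about 2 kpc

# Rotational energy E = 2 pi^2 I f^2, so the spin-down power is 4 pi^2 I f |f_dot|.
E_dot = 4 * np.pi**2 * I * f_rot * f_dot                      # ~4e31 W

# Spin-down strain limit: all of the spin-down power assumed to go into GWs.
h_sd = np.sqrt(2.5 * G * I * f_dot / (c**3 * d**2 * f_rot))   # ~1.4e-24

# Ellipticity implied by a strain amplitude h0 at f_gw = 2 f_rot:
#   h0 = 4 pi^2 G I eps f_gw^2 / (c^4 d)
def ellipticity(h0, f_gw=2 * f_rot, dist=d, I_zz=I):
    return h0 * c**4 * dist / (4 * np.pi**2 * G * I_zz * f_gw**2)
```

With these nominal inputs the spin-down power evaluates to roughly 4e31 W and the spin-down strain limit to roughly 1.4e-24; feeding an observational upper limit on h0 into `ellipticity` gives the corresponding bound on the equatorial ellipticity discussed below.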
this powers the expansion and electromagnetic luminosity of the crab nebula , but the energy flows in this complex system are difficult to pin down quantitatively , leaving open the possibility that a significant fraction of the energy could be emitted as gravitational radiation .palomba has estimated that the observed braking index of the pulsar spin - down constrains this fraction to be no more than 40% .the ligo scientific collaboration ( lsc ) used data from the first 9 months of ligo s s5 science run to search for a gw signal from the crab pulsar and , finding none , were able to set an upper limit of 8% of the total spin - down energy , using x - ray image information to infer the orientation of the pulsar spin axis and assuming that the gw emission is phase - locked at twice the radio pulse frequency .analysis of the full s5 data has improved this limit to just 2% .this observational result directly constrains the properties of the crab pulsar and the energy balance of the nebula .the lsc and the virgo collaboration ( now analyzing data jointly ) have also searched for periodic gws from all known pulsars with spin frequencies greater than 20 hz and sufficiently precise radio or x - ray pulse timing to allow a fully - coherent search , again assuming that the gw emission is phase - locked at twice the pulse frequency .the analysis considered 115 radio pulsars ( including 71 in binary systems ) that were timed during the ligo s5 run by the jodrell bank observatory , the green bank telescope , and/or the parkes radio telescope , along with the x - ray pulsar j0537 which was monitored by the rossi x - ray timing explorer ( rxte ) . for each pulsar , a 90% upper limit was placed on the gw amplitude in terms of the parameter , which represents the strain amplitude that would reach the earth in the `` plus '' and `` cross '' polarizations _ if _ the pulsar spin axis were oriented optimally .figure [ fig : cwresultsplot ] shows these upper limits , which are remarkably small numbers in themselves .for known pulsars using data from the ligo / geo600 s3 , s4 and s5 science runs , taken from ( reproduced by permission of the aas ) .each symbol represents one pulsar and is plotted at the expected gw signal frequency ( twice the spin frequency ) .the grey band is the range expected for s5 given the average instrumental noise level.,width=415 ] the lowest upper limit is , obtained for pulsar j1603 .j0537 is the only pulsar besides the crab for which the upper limit from this analysis reaches the spin - down limit , assuming that the moment of inertia is within the favored range of ( 1 to 3 ) kgm .assuming the neutron stars to be triaxial ellipsoids , these amplitude limits may be re - cast as limits on the equatorial ellipticity .pulsar j2124 , at a distance of pc and gw frequency of 404 hz , yields the strictest upper limit , .one may ask whether the perfect or near - perfect axisymmetry of these neutron stars is due to the properties of the neutron star material .it has long been thought that conventional neutron stars could support an ellipticity up to a few times , but recent work suggests that the pressure in the crystalline crust suppresses defects and ellipticities of up to are possible in a conventional neutron star .most of the pulsars in the s5 analysis have limits below that level , meaning that these neutron stars , at least , are closer to axisymmetric than required by the intrinsic material properties .soft gamma repeaters ( sgrs ) are believed to be magnetars , _i.e._neutron stars 
with very strong magnetic fields .sgrs are observed to emit intense flares of soft gamma rays at irregular intervals which may be associated with `` starquakes '' , abrupt cracking and rearrangement of the crust and magnetic field .these events could excite quasinormal modes of the neutron star which then radiate gravitational waves .the fundamental mode , at a frequency of around to 3 khz , is expected to be the most efficient gw emitter , although other nonradial modes may also participate .the lsc have used ligo data to search for gw bursts associated with flares of sgrs 1806 and 1900 , including the december 2004 giant flare of sgr 1806 .a first analysis treated the flares individually , setting upper limits on gw energy as low as a few times erg , depending strongly on the waveform assumed .the best of these limits ( for signals in the most sensitive range of the instruments , 100200 hz ) are within the range of possible gw energy emission during a giant flare , to erg , according to modeling by ioka .unfortunately , the giant flare occurred between ligo science runs , and the less - sensitive data available at that time only yields upper limits on gw energy emission of erg and above .a later paper re - analyzed the `` storm '' of sgr 1900 + 14 flares that spanned a period of seconds on 29 march 2006 .this analysis `` stacked '' the data around the times of the individual flares in order to gain sensitivity under the assumption that many or all of the flares had an associated gw burst at a common relative time offset .two scenarios were considered to choose the relative weighting of the flares : one in which the gw burst energy is assumed to be proportional to the electromagnetic fluence of each flare , and the other in which all large flares are assumed to have more - or - less equal gw burst energy .this analysis yielded per - burst energy upper limits as low as erg , an order of magnitude lower than the limits set for this storm by the earlier single - burst analysis .these searches are just beginning to address the few existing models of gw emission by sgrs , and are motivating new modeling of sgrs and their disturbances .stronger constraints ( if not a detection ) will be obtained when another giant flare occurs while the gw detector network is operating with good sensitivity , and/or from searches using ordinary flares from closer sgrs such as the recently discovered sgrs j0501 and j0418 , which may both be less than 2 kpc away . as noted previously ,only a small fraction of the hundreds of millions of neutron stars in our galaxy are visible to us in radio waves , x - rays or gamma rays .it is quite plausible that one or more nearby , unseen neutron stars have a large enough asymmetry and spin rate to be emitting periodic gravitational waves at a detectable level .a general argument , originated by blandford and extended in a 2007 paper by the ligo scientific collaboration , starts with the ( very optimistic ) assumptions that all neutron stars are born with a high spin rate and spin down due to gw emission alone , and concludes that the strongest signal that we can expect ( in an average sense ) to receive on earth would have , independent of frequency and .knispel and allen have greatly refined this analysis , replacing blandford s simple assumptions about the neutron star population with a detailed simulation of the birth , initial kick and subsequent motion of neutron stars in the galaxy . 
for a nominal ellipticity of , they find that the maximum expected gw signal amplitude ( with the same optimistic assumptions about neutron stars spinning down due to gw emission ) is around over the frequency range 1001000 hz .the lsc have published two all - sky searches for periodic gw signals using data from the early part of the s5 run , one using the semi - coherent `` powerflux '' method and the other using the substantial computing power provided by the einstein project to carry out longer coherent integrations on a smaller data set .these searches had comparable sensitivities , both slightly surpassing the knispel and allen model expectations ( with ) for pulsars with favorable orientations and gw signal frequencies in the vicinity of 200 hz .thus , periodic gw searches may be on the verge of detecting unseen neutron stars , or at least constraining models for the population of such objects in our galaxy . on 1 february 2007 , an extremely intense gamma - ray burst was detected by detectors on the konus - wind , integral , messenger , and swift satellites .the initial position error box from the relative arrival times of the bursts intersected the spiral arms of m31 ( the andromeda galaxy ) , raising the intriguing possibility that it originated in that galaxy , only kpc away .furthermore , the leading model for most short - hard grbs such as this one is a binary merger involving at least one neutron star .such an event would also emit strong gravitational waves . at the time of the grb, the two detectors at the ligo hanford observatory were collecting science - mode data , while the other large interferometers were not .the lsc searched this data for both an inspiral signal leading up to the merger and for an arbitrary gw burst associated with the merger itself .no plausible signal was found , and from the absence of a detectable inspiral signal at that range , a compact binary merger in m31 was ruled out with confidence .the ligo null result , along with a refined position estimate for the grb , helped to solidify the case that this was most likely an sgr giant flare event in m31 . 
in 2003a team of radio astronomers reported evidence for the discovery of a supermassive black hole binary in the bright radio galaxy 3c 66b , which is located about 90 mpc from earth .their claim was based on very long baseline interferometry ( vlbi ) observations of the galaxy , in which the radio core of the galaxy was seen to move slightly over the course of 15 months in a manner consistent with an elliptical orbit with a period of 1.05 year .this suggested the presence of a binary system with a total mass near .remarkably , such a system would also be expected to merge in 5 years due to energy loss by gw emission .jenet , lommen , larson & wen determined that pulsar timing could be used to check this claim , since the gravitational waves from the binary would cause pulse time - of - arrival variations of up to several microseconds .they analyzed seven years of archival arecibo timing data for psr 1855 + 09 and found no such variation , definitively ruling out the proposed binary system in 3c 66b .binary neutron star systems are benchmark sources for ground - based gw detectors because binary pulsars give a glimpse of the population ( see discussion in ) and efficient gw emission during the inspiral phase just before merging makes them detectable out to considerable distances , currently tens of megaparsecs .black - hole - and - neutron - star ( bhns ) and binary black hole ( bbh ) systems with stellar - mass black holes can be detected at even greater distances ; population synthesis studies suggest that the net detection rates for those sources are likely to be comparable . because gw detectors have wide antenna patterns , signals from these eventscan be detected from essentially anywhere in the sky and at any time .the most sensitive search published to date for these sources used ligo s5 data and templates for binary inspirals with total mass up to .no significant signal candidate was detected .the lsc interpreted this null result using a population model based on the assumption that the rate of mergers in each nearby galaxy is proportional to its blue light luminosity , as a tracer of massive star formation .the upper limits obtained from a total of 18 calendar months of ligo data , in units of merger rate per year per ( defined as times the blue light luminosity of the sun ) , were , and for binary neutron star , bhns , and bbh systems , respectively , calculated assuming black hole masses of .these limits are still far from the theoretically expected rates , but are motivating the numerical relativity community to improve waveform calculations and the data analysis community to find better ways to search for inspirals with higher masses , significant spin and/or high mass ratio .supernova core collapse and several other plausible signals are not modeled well enough to use matched filtering , either because the astrophysics is not completely known or because the physical parameter space is too large to effectively cover with a template bank . 
even for the important case of bbh mergers , which numerical relativity calculations are now having considerable success in modeling ,the physical parameter space allows for a wide variety of waveforms .thus it is important to search for arbitrary transient signals ( bursts ) in the gw data , and data analysis methods have been implemented which robustly detect a wide range of signals without advance knowledge of the waveform .so far the igec network of bar detectors is the observation time champion , having collected enough data since 1997 to establish an upper limit of per year on the rate of strong gw bursts , with looser rate limits on weaker bursts .more recent igec data extended their sensitivity to somewhat weaker bursts , but with looser rate limits of .5 per year . on the other hand , ligo is the sensitivity champion .the latest published burst search results , from the first calendar year of the ligo s5 run , set upper limits on the rate of bursts arriving at earth as a function of signal waveform and amplitude , expressed as the root - sum - squared gw strain calculated from both polarization component amplitudes at the earth : figure [ fig : burstplotsg ] shows the limits set by burst searches in the s5 run ( first calendar year only ) and earlier science runs for `` sine - gaussian '' waveforms with and central frequencies up to khz .the area above each curve is excluded at 90% confidence level , _i.e. _ the curve traces out the 90% upper limit on the rate for a given waveform assuming a hypothetical population with fixed .sufficiently loud bursts of any form have the same rate limit , per year , as the efficiency of the analysis pipeline approaches unity .the signal strength can also be related in a robust way to gw energy emission from a source at a known or assumed distance .for instance , the s5 first - year search mentioned above would have been sensitive to an event in the virgo galaxy cluster ( mpc ) that emitted of gw energy in a burst with a dominant frequency of hz .these searches constrain populations of sources such as binary mergers of intermediate - mass black holes , although so far only preliminary quantitative studies have been made with realistic simulated waveforms .the big bang may have left behind a stochastic background of gravitational waves , isotropic like the cosmic microwave background ( cmb ) but carrying information about much earlier fundamental processes in the early universe ; see for a review and references in for updates on the details of plausible processes .a stochastic background , isotropic or not , can also be produced by a large number of overlapping astrophysics sources such as binary mergers , cosmic ( super)strings , or core - collapse supernovae .the gw signal generally has the form of random `` noise '' with a characteristic power spectrum , though it can be distinguished from true instrumental noise by testing for a common signal in multiple detectors for which the instrumental noise is known to be uncorrelated .jenet _ et al _ have used pulsar timing to search for low - frequency stochastic gravitational waves in archival and newly - obtained data for seven pulsars spanning intervals from to years .they detected no signal but placed limits on the gw energy density assuming different power - law distributions as a function of frequency . 
from these they also derive limits on mergers of supermassive binary black hole systems at high redshift , relic gravitational waves amplified during the inflationary era , and a possible population of cosmic ( super)strings .the lsc and virgo have used ligo data to search for a stochastic gw signal in the vicinity of 100 hz by measuring correlations in the data from the hanford and livingston interferometers to test for a signal well below the noise level of either instrument .a recently published paper used the data from the full s5 science run to set a limit on the energy density in gws as a fraction of the critical energy density needed to close the universe .the result , assuming a frequency - independent spectrum , was at 95% confidence .this direct limit surpasses the indirect limits from big bang nucleosynthesis and the cmb and constrains early - universe models .it also imposes constraints on a possible population of cosmic strings in a different part of the parameter space than the pulsar timing result does . from this sampling of search results ,most published in the past few years , one can see the beginnings of rich astrophysics coming out of gw observations .the searches are now placing meaningful constraints on some individual objects and events , source populations ( either real or speculated ) , and the total energy density of gravitational waves in the universe .many more analyses are in progress , and null results will surely continue to provide interesting information .as i write this sentence , i can see that the two ligo 4-km detectors and virgo are all collecting science - mode data at this particular moment ( as part of the ongoing s6/vsr2 science run ) , while auriga , explorer and nautilus are also collecting good data .geo600 is being upgraded to `` geo - hf '' with a focus on improving the sensitivity for frequencies above hz and will collect more data over the next several years .it is possible that the first unambiguous gw signal is in the data already collected but not yet fully analyzed , or will soon be recorded . besides proving without a doubt that gws exist and can be detected, even a single detection would give us invaluable information about the source from the waveform properties .in the case of a binary inspiral , the `` chirp '' rate and possible modulation reflect the component object masses and spins ; for the ringdown of a perturbed black hole , the damped - sinusoid frequency and decay rate reveal the mass and spin ; for a spinning neutron star , the signal amplitude and polarization content indicate its ellipticity and spin axis inclination ; and so on . a signal that does not match any of the standard modelscould confirm a speculative source type or reveal an unanticipated one .the reconstructed sky position of the source may point to a galaxy and thus pin down the distance . if the signal is associated with an astronomical event or object observed by other means such as a grb , optical or radio afterglow , supernova , neutrino detection , or known pulsar then the complementary information will provide a clearer view of the nature of the source and emission mechanisms. 
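As a small illustration of how the "chirp" rate encodes the component masses, the leading-order relation between a binary's chirp mass and its gravitational-wave frequency evolution can be evaluated directly. The sketch below is ours, not taken from the text: the formula is the standard Newtonian quadrupole expression, and the 1.4 + 1.4 solar-mass neutron-star binary is purely illustrative.

```python
# Leading-order inspiral chirp: df/dt as a function of chirp mass and GW frequency.
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
M_sun = 1.989e30     # kg

def chirp_mass(m1, m2):
    """Chirp mass, in the same units as m1 and m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def f_dot(f_gw, m_chirp):
    """Quadrupole-order frequency drift df/dt (Hz/s) for chirp mass m_chirp (kg)."""
    return (96.0 / 5.0) * np.pi ** (8.0 / 3.0) \
           * (G * m_chirp / c**3) ** (5.0 / 3.0) * f_gw ** (11.0 / 3.0)

mc = chirp_mass(1.4, 1.4) * M_sun      # illustrative binary neutron star
print(f_dot(100.0, mc))                # roughly 17 Hz/s at a GW frequency of 100 Hz
```

Measuring this drift across the sensitive band is what pins down the chirp mass, and departures from the leading-order track carry the spin and mass-ratio information mentioned above.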
preparations are well underway for major upgrades to the ground - based gw detector network in the form of advanced ligo and advanced virgo , which will have an order of magnitude better sensitivity than the current instruments .when these detectors begin operating in 2014 or 2015 , we can expect regular detections of binary inspirals and excellent prospects for detecting various other signals .the detection of multiple signals of the same type will enable population surveys that reveal the origin and evolution of such sources .proposals for other large interferometric detectors , in particular lcgt in japan and aigo in australia , would add significantly to the capabilities of the current detector network .prospects are good for detecting low - frequency signals with pulsar timing arrays on about the same time scale , while the space - based detectors are due to be launched some years later to open up the intermediate frequency band .conceptual designs for `` third - generation '' ground - based detectors such as the einstein telescope ( et ) are now being proposed with the goal of improving over the sensitivities of the `` advanced '' detectors by another order of magnitude .a comprehensive discussion of all the possible astrophysics that can be addressed is beyond the scope of this article ( but see the living review by sathyaprakash and schutz , for example ) .instead , to illustrate some of the key issues , let us look into a crystal ball ( see figure [ fig : crystalball ] ) and make some predictions for what the future _ might _ hold .near the end of the s6/vsr2 science run , ligo and virgo will record a fairly significant inspiral event candidate , with a strength corresponding to a false alarm rate of 1 per 160 years .the best - match template will have masses of and , representing a black hole and a neutron star .the all - sky burst search will also identify this as a candidate , but not the strongest one in that search .the reconstructed sky position will be consistent with three galaxies within 50 mpc .prompt follow - up imaging with a robotic telescope will capture a weak , fading optical transient in an elliptical galaxy within the favored sky region .after careful internal review and debate , the ligo and virgo collaborations will publish the complete diagnosis of the candidate , calling it `` cautious evidence for a gravitational - wave signal '' . by 2015 ,pulsar timing array analyses will have produced greatly improved upper limits on supermassive black hole binary mergers and stochastic background processes , but no detections yet . in the spring of 2015, advanced ligo and advanced virgo will have been mostly commissioned and ( i speculate ) will begin an 8-month science run while still a factor of away from their design sensitivities . 
the geo - hf detector may continue to operate for part of that period .the run will yield two black hole neutron star inspiral candidates , with an expected background of , and also two binary neutron star inspiral candidates , with a background of .one of the binary neutron star candidates will have a clear radio afterglow in prompt follow - up observations with radio telescopes .these event candidates will be published together as the first clear detection of gravitational waves .analysis of these events will also place strong limits on `` extra '' gw polarization states beyond the two predicted by general relativity .all - sky searches for burst and periodic gw signals using the same data will yield candidates that look promising but are not significant enough to claim as detections . greatly improved upper limits will be published on gw emission from known pulsars and on a stochastic background .after further commissioning , advanced ligo and advanced virgo will resume running at full sensitivity , joined the following year by lcgt and aigo .i imagine that analysis of two years of data will yield the following : * 15 binary neutron star candidates with a background of .two of the candidates will correspond to short - hard grbs , one of which is localized in a galaxy with a measured redshift of .based on this information , the emitted gw energy will turn out to be consistent with the theoretical prediction . * 18 black hole neutron star candidates with a background of .two of these will correspond to grbs , one also with a high - energy neutrino .* comparison of gw inspiral times with grb times will confirm that gws travel at the speed of light .* 6 binary black hole candidates with a background of .the masses and spins of the candidates will be inferred , giving a preliminary look at their distributions . * 4 burst candidates with a background of 0.15 .one of them , with central frequency 310 hz , will correspond to a weak long grb with no measured redshift . *a periodic gw signal will be detected from the crab pulsar , corresponding to % of the total spin - down energy .this result will be used to constrain models of neutron star formation in supernovae . *periodic gw signals will also be detected from sco x-1 ( using data collected during a 3-month period with the detectors in a narrow - band configuration ) and from 5 unseen neutron stars . *stochastic gw searches will detect signals from two low - mass x - ray binaries ( lmxbs ) besides sco x-1 , and will place much stricter limits on cosmic string models . 
around the same time , pulsar timing analyses will detect gws from supermassive black hole binary systems in two galaxies , and will rule out another large area of the parameter space for cosmic string models .gravitational - wave astronomy will thus be in full swing by the time that lisa and decigo are launched and open up new frequencies for gw observations .it may have been indulgent to speculate so specifically about what the future may bring , and the details are obviously fictional .however , i think the scenario above is actually fairly conservative and illustrates several of the scientific findings that can be derived from the observations .it also reflects many of the issues the gw community will have to deal with , such as borderline - significant event candidates , samples of event candidates with non - negligible backgrounds , and the role of information from electromagnetic observations .the transition envisioned above , from always setting upper limits to being able to claim some detections , will call for some changes in strategy to make optimal use of the detectors and of the data for science results . in this sectionwe discuss a few such areas .interferometric detectors may be operated in different ways in order to optimize the noise characteristics according to scientific priorities .for instance , the advanced ligo and advanced virgo detectors are designed to be limited by quantum noise at low and high frequencies ; by reducing the laser intensity , one can reduce the radiation pressure noise at low frequency at the cost of increasing the shot noise at high frequency .interferometer configurations with signal recycling , such as advanced ligo and advanced virgo , allow additional tuning options through changing the reflectivity and ( microscopic ) detuning phase shift of the signal recycling mirror .optimal tunings have been considered for individual signal types as well as some combinations .of course , an interferometer can only operate in one mode at a time .detection of one or more gw signals may motivate re - tuning the interferometer to focus on a certain class of signals , either temporarily or for the rest of the run .it may also be useful to tune different interferometers differently , _e.g. _ one of the advanced ligo hanford interferometers could be optimized for low frequency while the other is optimized for higher frequency .the first direct detection of a gw signal will erase any lingering doubts about whether gw detectors really work , and will bring a new focus to the science which can be done with gw observations .that should make a stronger case for additional detectors on the ground ( _ e.g. 
_ lcgt , aigo , et ) as well as boosting support for detectors in space ( lisa , decigo ) .furthermore , the designs of new detectors may be influenced by the view of what measurements are most important based on what has been detected so far .currently there is a big emphasis in the gw detection community on achieving near - perfect certainty in the first gw signal detection .typically this results in raising the signal strength threshold so that the false alarm rate is extremely low , but that also reduces the sensitivity of the search .however , having certified one or more events as genuine increases our belief in other candidates of the same type .thus we can choose to relax the signal strength threshold to include more candidates in a search , even if doing so also includes more background the benefit from having more real signals in the sample to study may outweigh the negative effects of the additional background .many of the gravitational - wave searches that have been performed in recent years have provided useful astrophysical information despite yielding no confirmed gw signal candidates .thus , one can say that we are already doing gravitational - wave astronomy .actual detections , when they finally start coming , will enable us to address a much wider range of astrophysics questions . and here is what will be exciting : _everything_. i would like to thank the organizers of the eighth edoardo amaldi conference on gravitational waves for giving me this opportunity to review and interpret the current state of gravitational - wave astronomy . of course , my colleagues in the gravitational - wave community are responsible for the observational results themselves , and my views of the astrophysics and of the field have been shaped by discussions with many of them too many to thank individually .i was particularly inspired by a 2007 seminar by ben owen entitled `` why ligo results are already interesting '' .i gratefully acknowledge the support of the national science foundation through grant phy-0757957 .this article has been assigned ligo document number p0900289-v4 .99 weber j 1960 30613 weber j 1966 122830 abbate s f _ et al _ 2003 _ proc .spie _ * 4856 * 907 hobbs g _et al _ 2009 the international pulsar timing array project : using pulsars as a gravitational wave detector arxiv:0911.5206 [ astro-ph.sr ] verbiest j p w _ et al _ 2009 _ mon . not .royal astron ._ * 400 * 951 danzmann k and rdiger a 2003 s19 stebbins r 2006 _ aip conference proceedings _ * 873 * 312 kawamura s _ et al _ 2008 _ j. phys .ser . _ * 122 * 012006 hulse r a and taylor j h 1975 _ astrophys . j. _ * 195 * l518 kramer m _ et al _ 2006 _ science _ * 314 * 97102 ott c d 2009 063001blanchet l , faye g , iyer b r and joguet b 2002 d * 65 * 061501 blanchet l , faye g , iyer b r and joguet b 2002 d * 71 * 129902 [ erratum ] cutler c and thorne k s 2002 _ proc .16th int .conf . 
on general relativity and gravitation ( gr16 ) ,1521 july 2001 , durban , south africa _ ed n t bishop and s d maharaj ( singapore : world scientific ) ( _ preprint _ gr - qc/0204090 ) schutz b f 2003 _ aip conference proceedings _ * 686 * 329 weber j 1969 13204 collins h m 2004 _ gravity s shadow _ ( chicago : university of chicago press ) thorne k s 1987 _ 300 years of gravitation _ ed s hawking and w israel ( cambridge : cambridge university press ) pp 330458 mauceli e , geng z k , hamilton w o , johnson w w , merkowitz s , morse a , price b and solomonson n 1996 126475 astone p _et al _ 1993 d * 47 * 36275 astone p _et al _ 1997 _ astropart ._ 23143 blair d g , ivanov e n , tobar m e , turner p j , van kann f and heng i s 1995 190811 prodi g a _ et al _ 1999 _ proc .2nd edoardo amaldi conference on gravitational waves _( singapore : world scientific ) p 148 prodi g a _ et al _ 2000 _ int .j. modern phys ._ d * 9 * 23745 allen b _ et al _1999 14981501 ando m ( the tama collaboration ) 2002 140919 abbott b p _ et al _ 2009 076901 grote h ( for the ligo scientific collaboration ) 2008 114043 grote h ( for the ligo scientific collaboration ) 2010 this issue acernese f _ et al _ 2008 114045 hobbs g b _ et al _ 2009 _ publ . astron . soc .australia _ * 26 * 1039 narayan r and ostriker j p 1990 _ astrophys .j. _ * 352 * 22246 jaranowski p , krlak a and schutz b f 1998 d * 58 * 063001 andersson n 2003 r10544 van den broeck c 2005 182539 lyne a g , pritchard r s and graham - smith f 1993 _ mon . not .royal astron ._ * 265 * 1003 current data at http://www.jb.man.ac.uk/ pulsar / crab.html hester j j 2008 _ ann ._ * 46 * 12755 palomba c 2000 _ astron ._ 1638 abbott b _et al _ 2008 _ astrophys .j. lett ._ * 683 * l459 abbott b _et al _ 2009 _ astrophys .j. lett ._ * 706 * l2034 [ erratum ] abbott b p _ et al _ 2009 _ astrophys .j. _ submitted ; _ preprint_ arxiv:0909.3583 [ astro-ph.he ] ushomirsky g , cutler c and bildsten l 2000 _ mon . not .royal astron .soc . _ * 319 * 902 horowitz c j and kadau k 2009 191102 duncan r c and thompson c 1992 _ astrophys. j. lett ._ * 392 * l9 de freitas pacheco j a 1998 _ astron .astrophys . _ * 336 * 397401 horvath j e 2005 _ modern phys .a _ * 20 * 2799804 abbott b _et al _ 2008 211102 iokak 2001 _ mon . not .royal astron ._ * 327 * 639 abbott b p _ et al _ 2009 _ astrophys ._ * 701 * l6874 rea n_ et al _ 2009 _ mon .not . royal astron ._ * 396 * 241932 aptekar r l , cline t l , frederiks d d , golenetskii s v , mazets e p and palshin v d 2009 _ astrophys . j. lett . _ * 698 * l825 van der horst a j 2010 discovery of a new soft gamma repeater : sgr j0418 + 5729 _ astrophys .j. lett ._ * in press * _ preprint_ arxiv:0911.5544 abbott b _et al _ 2007 d * 76 * 082001 knispel b and allen b 2008 d * 78 * 044031 abbott b p _ et al _ 2009 111102 abbott b p _ et al _ 2009 d * 80 * 042003 hurley k _ et al _ 2007 _ gcn circular _ 6103 nakar e 2007 _ phys . reports _ * 442 * 166236 abbott b _et al _ _ astrophys .j. _ * 681 * 141930 mazets e p , aptekar r l , cline t l , frederiks d d , goldsten j o , golonetskii s v , hurley k , von kienlin a and palshin v d 2008 _ astrophys . j. _ * 680 * 5459 ofek e o _ et al _ 2008 _ astrophys . j. _ * 681 * 14649 sudou h , iguchi s , murata y and taniguchi y 2003 _ science _ * 300 * 12635 jenet f a , lommen a , larson s l and wen l 2004 _ astrophys . j. _ * 606 * 799803 kalogera v _et al _ 2004 _ astrophys .j. lett ._ * 601 * l17982 kalogera v _et al _ 2004 _ astrophys .j. 
lett ._ * 614 * l1378 [ erratum ] oshaughnessy r , kim c , kalogera v and belczynski k 2008 _ astrophys .j. _ * 672 * 47988 kalogera v , belczynski k , kim c , oshaughnessy r and willems b 2007 _ phys .reports _ * 442 * 75 abbott b p _ et al _ 2009 d * 80 * 047101 buonanno a , pan y , baker j g , centrella j , kelly b j , mcwilliams s t and van meter j 2007 d * 76 * 104049 damour t , nagar a , dorband e n , pollney d and rezzola l 2008 d * 77 * 084017 damour t and nagar a 2007 d * 76 * 064028 campanelli m , lousto c o , mundim b c , nakano h and zlochower y 2010 this issue astone p _et al _ 2003 d * 68 * 022001 astone p _et al _ 2007 d * 76 * 102001 abbott b p _ et al _ 2009 d * 80 * 102001 aylott b _ et al _ 2009 165008 cadonati l , chatterji s , fischetti s , guidi g , mohapatra s r p , sturani r and vicer a 2009 204005 maggiore m 2000 _ phys .reports _ * 331 * 283367 the ligo scientific collaboration and the virgo collaboration 2009 _ nature _ * 460 * 9904 lck h _et al _ 2010 this issue jenet f a _ et al _ 2006 _ astrophys .j. _ * 653 * 15716 grishchuk l p 2005 _ phys .uspekhi _ * 48 * 1235 damour t and vilenkin a 2005 d * 71 * 063510 harry g m for the ligo scientific collaboration 2010 this issue advanced ligo team 2007 advanced ligo reference design _ technical document _ ligo - m060056 _ url _ https://dcc.ligo.org/cgi-bin/docdb/showdocument?docid=m060056 hild s , freise a , manotvani m , chelkowski s , degallaix j and schilling r 2009 025005 kuroda k _ et al _ 2006 _ prog . theor .* 5499 ohashi m for the lcgt collaboration 2008 _ j. phys .ser . _ * 120 * 032008 barriga p _al _ 2010 this issue einstein telescope project web site , _ url _ http://www.et-gw.edu/ hild s , chelkowski s and freise a 2008 _ preprint _ arxiv:0810.0604 sathyaprakash b s and schutz b f 2009 _ liv. rev ._ 2009 - 2 kondrashov i s ,simakov d a , khalili f ya and danilishin s l 2008 d * 78 * 062004
The successful construction and operation of highly sensitive gravitational-wave detectors is an achievement to be proud of, but the detection of actual signals is still around the corner. Even so, null results from recent searches have told us some interesting things about the objects that live in our universe, so it can be argued that the era of gravitational-wave astronomy has already begun. In this article I review several of these results and discuss what we have learned from them. I then look into the not-so-distant future and predict some ways in which the detection of gravitational-wave signals will shape our knowledge of astrophysics and transform the field.
in this paper we ask how much entanglement is required to perform a measurement on a pair of spatially separated systems , if the participants are allowed only local operations and classical communication .that is , we want to find the `` entanglement cost '' of a given measurement .( we give a precise definition of this term in the following subsection . )our motivation can be traced back to a 1999 paper entitled `` quantum nonlocality without entanglement '' , which presents a complete orthogonal measurement that can not be performed using only local operations and classical communication ( locc ) , even though the eigenstates of the measurement are all unentangled .that result shows that there can be a kind of nonlocality in a quantum measurement that is not captured by the entanglement of the associated states . herewe wish to quantify this nonlocality for specific measurements . though the measurements we consider here have outcomes associated with _entangled _ states , we find that the entanglement cost of the measurement often exceeds the entanglement of the states themselves .the 1999 paper just cited obtained an upper bound on the cost of the specific nonlocal measurement presented there , a bound that has recently been improved and generalized by cohen .in addition , there are in the literature at least three other avenues of research that bear on the problem of finding the entanglement cost of nonlocal measurements .first , there are several papers that simplify or extend the results of ref . , for example by finding other examples of measurements with product - state outcomes that can not be carried out locally . a related line of research asks whether or not a given set of orthogonal bipartite or multipartite states ( not necessarily a complete basis , and not necessarily unentangled ) can be distinguished by locc , and if not , how well one _ can _ distinguish the states by such means .finally , a number of authors have investigated the cost in entanglement , or the entanglement production capacity , of various bipartite and multipartite operations . in this paperwe consider three specific cases : ( i ) a class of orthogonal measurements on two qubits , in which the four eigenstates are equally entangled , ( ii ) a somewhat broader class of orthogonal measurements with unequal entanglements , and ( iii ) a general , nonorthogonal , bipartite measurement in dimensions that is invariant under all local pauli operations .for the first of our three cases we present upper and lower bounds on the entanglement cost .for the second case we obtain a lower bound , and for the last case we compute the cost exactly : it is equal to the average entanglement of the states associated with the outcomes . 
throughout the paper ,we mark our main results as propositions .the upper bound in case ( i ) can be obtained directly from a protocol devised by berry a refinement of earlier protocols for performing a closely related nonlocal unitary transformation .our bound is therefore the same as berry s bound .however , because we are interested in performing a measurement rather than a unitary transformation , we give an alternative protocol consisting of a sequence of local measurements .to get our lower bounds , we use a method developed in papers on the local distinguishability of bipartite states .the average entanglement between two parties can not be increased by locc ; so in performing the measurement , the participants must consume at least as much entanglement as the measurement can produce .this fact is the basis of all but one of our lower bounds .the one exception is in section iii , where we use a more stringent condition , a bound on the success probability of local entanglement manipulation , to put a tighter bound on the cost for a limited class of procedures . to define the entanglement cost , we imagine two participants , alice and bob , each holding one of the two objects to be measured .we allow them to do any sequence of local operations and classical communication , but we do not allow them to transmit quantum particles from one location to the other . rather , we give them , as a resource , arbitrary shared entangled states , and we keep track of the amount of entanglement they consume in performing the measurement . at this point , though , we have a few options in defining the problem .do we try to find the cost of performing the measurement only once , ordo we imagine that the same measurement will be performed many times ( on many different pairs of qubits ) and look for the asymptotic cost per trial ? and how do we quantify the amount of entanglement that is used up ? in this paper we imagine that alice and bob will perform the given measurement only once .( in making this choice we are following cohen . )however , we suppose that this measurement is one of many measurements they will eventually perform ( not necessarily repeating any one of the measurements and not necessarily knowing in advance what the future measurements will be ) , and we assume that they have a large supply of entanglement from which they will continue to draw as they carry out these measurements . in thissetting it makes sense to use the standard measure of entanglement for pure states , namely , the entropy of either of the two parts .thus , for a pure state of a bipartite system ab , the entanglement is where is the reduced density matrix of particle a : . in this paper ,the logarithm will always be base two ; so the entanglement is measured in ebits . by means of local operations and classical communication , alice and bob can create from their large supply of entanglement any specific state that they need .for example , if they create and completely use up a copy of the state , this counts as a cost of . on the other hand ,if their procedure converts an entangled state into a less entangled state , the cost is the difference , that is , the amount of entanglement lost . a general measurement is specified by a povm ,that is , a collection of positive semi - definite operators that sum to the identity , each operator being associated with one of the outcomes of the measurement . 
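The entanglement measure defined above, the base-2 entropy of either reduced density matrix, is straightforward to evaluate from the Schmidt coefficients of a pure state. The following minimal sketch (the function name and interface are ours) makes the ebit bookkeeping used in the cost accounting concrete.

```python
# Entanglement (in ebits) of a bipartite pure state, via its Schmidt coefficients.
import numpy as np

def entanglement_entropy(psi, dA, dB):
    """Base-2 von Neumann entropy of the reduced state of subsystem A (or B)."""
    amplitudes = np.asarray(psi, dtype=complex).reshape(dA, dB)
    schmidt = np.linalg.svd(amplitudes, compute_uv=False)
    p = schmidt ** 2                     # Schmidt probabilities
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled two-qubit state
print(entanglement_entropy(bell, 2, 2))      # 1.0 ebit
```

For a maximally entangled state of two d-dimensional systems the same function returns log2(d) ebits, which is the resource count that appears in the teleportation-based upper bound below.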
in this paper we restrict our attention to _ complete _ measurements , that is , measurements in which each operator is of rank one ; so each is of the form for some in the range . in a complete _ orthogonal _ measurement ,each operator is a projection operator ( ) that projects onto a single vector ( an eigenvector of the measurement ) .now , actually performing a measurement will always entail performing some operation on the measured system .all that we require of this operation is that alice and bob both end up with an accurate classical record of the outcome of the measurement .in particular , we do not insist that the measured system be collapsed into some particular state or even that it survive the measurement .we allow the possibility of probabilistic measurement procedures , in which the probabilities might depend on the initial state of the system being measured .however , we do not want our quantification of the cost of a measurement to depend on this initial state ; we are trying to characterize the measurement itself , not the system on which it is being performed .so we assume that alice and bob are initially completely ignorant of the state of the particles they are measuring .that is , the state they initially assign to these particles is the completely mixed state .this is the state we will use in computing any probabilities associated with the procedure . bringing together the above considerations, we now give the definition of the quantity we are investigating in this paper . given a povm , let be the set of all locc procedures such that ( i ) uses pure entangled pairs , local operations , and classical communication , and ( ii ) realizes exactly , in the sense that for any initial state of the system to be measured , yields classical outcomes with probabilities that agree with the probabilities given by .then , the entanglement cost of a measurement , is defined to be where is the total entanglement of all the resource states used in the procedure , is the distillable entanglement of the state remaining at the end of the procedure , and indicates an average over all the possible results of , when the system on which the measurement is being performed is initially in the completely mixed state .( though we allow and take into account the possibility of some residual entanglement , in all the procedures we consider explicitly in this paper , the entanglement in the resource states will in fact be used up completely . )a different notion of the entanglement cost of a measurement is considered in ref . , namely , the amount of entanglement needed to effect a naimark extension of a given povm . in that casethe entanglement is between the system on which the povm is to be performed and an ancillary system needed to make the measurement orthogonal . for any orthogonalmeasurement , and indeed for all the measurements considered in this paper , the entanglement cost in the sense of ref . is zero .one way to perform a nonlocal orthogonal measurement on a bipartite system is to perform a nonlocal unitary transformation that takes the eigenstates of the desired measurement into the standard basis , so that the measurement can then be finished locally .( we will use this fact in section ii . 
)so one might wonder whether the problem we are investigating in this paper , at least for the case of orthogonal measurements , is equivalent to the problem of finding the cost of a nonlocal unitary transformation .a simple example shows that the two problems are distinct .suppose that alice holds two qubits , labeled a and a , and bob holds a single qubit labeled b. they want to perform an orthogonal measurement having the following eight eigenstates . here the order of the qubits in each ket is a , a , b. alice and bob can carry out this measurement by the following protocol : alice measures qubit a in the standard basis .if she gets the outcome , she and bob can finish the measurement locally .if , on the other hand , she gets the outcome , she uses up one ebit to teleport the state of qubit a to bob , who then finishes the measurement .the average cost of this protocol is 1/2 ebit , because the probability that alice will need to use an entangled pair is 1/2 . on the other hand, one can show that any unitary transformation that could change the above basis into the standard basis would be able to create 1 ebit of entanglement and must therefore consume at least 1 ebit .so the cost of the measurement in this case is strictly smaller than the cost of a corresponding unitary transformation .the crucial difference is that when one does a unitary transformation , one can gain no information about the system being transformed .so there can be no averaging between easy cases and hard cases .there are two general bounds on , an upper bound and a lower bound , that apply to all complete bipartite measurements .these bounds are expressed in the following two propositions. * proposition 1 .* let be a povm on two objects a and b , having state spaces of dimensions and respectively .then ._ let alice and bob share , as a resource , a maximally entangled state of two -dimensional objects .they can use this pair to teleport the state of a from alice to bob , who can then perform the measurement locally .the entanglement of the resource pair is .so ebits are sufficient to perform the measurement .similarly , ebits would be sufficient to teleport the state of b to alice .so the cost of is no greater than . as we have mentioned , most of our lower bounds are obtained by considering the entanglement production capacity of our measurements .specifically , we imagine that in addition to particles a and b , alice and bob hold , respectively , auxiliary particles c and d. we consider an initial state of the whole system such that the measurement on ab collapses cd into a possibly entangled state .the average amount by which the measurement increases the entanglement between alice and bob is then a lower bound on .that is , in the proof of the following proposition , the initial entanglement is zero. 
* proposition 2 .* let be a bipartite povm consisting of the operators , where each is a normalized state of particles a and b , each of which has a -dimensional state space .then is at least as great as the average entanglement of the states .that is , _ proof ._ let the initial state of abcd be a tensor product of two maximally entangled states .note that the reduced density matrix of particles a and b is the completely mixed state , in accordance with our definition of the problem .when the measurement yields the outcome , its effect on can be expressed in the form where is the identity on cd , and the operators act on the state space of particles a and b , telling us what happens to the system when the outcome occurs .the trace of the right - hand side of eq .( [ op ] ) is not unity but is the probability of the outcome .( note that may send states of ab to a different state space , including , for example , the state space of the system in which the classical record of the outcome is to be stored . the index is needed because the final state of the system when outcome occurs could be a mixed state . )the operators satisfy the condition applying the operation of eq .( [ op ] ) to the state of eq .( [ maxent ] ) , and then tracing out everything except particles c and d , one finds that these particles are left in the state where the asterisk indicates complex conjugation in the standard basis .this conjugation does not affect the entanglement ; so , when outcome occurs , particles c and d are left in a state with entanglement .the probability of this outcome is .so the average entanglement of cd after the measurement has been performed is the quantity of eq .( [ ave ] ) .but the average entanglement between alice s and bob s locations can not have increased as long as alice and bob were restricted to local operations and classical communication .so in the process of performing the measurement , alice and bob must have used up an amount of entanglement equal to or exceeding . in the following three sections we improve these two bounds for a specific measurement that we label , an orthogonal measurement on two qubits with eigenstates given by here and are nonnegative real numbers with and .section ii presents an improved upper bound for this measurement , section iii derives a lower bound for a restricted class of procedures , and section iv derives an absolute lower bound .we then consider a somewhat more general measurement in section v. in section vi we exhibit a class of bipartite measurements , in dimension , for which we can find a procedure that achieves the lower bound of eq .( [ ave ] ) .as noted earlier , these are the povms that are invariant under all local pauli operations .one way to perform the measurement is to perform the following unitary transformation on the two qubits . where and , the matrix is written in the standard basis and the s are the usual pauli matrices , under this transformation , the four orthogonal states that define the measurement are transformed into so once the transformation has been done , the measurement can be completed locally ; alice and bob both make the measurement versus and tell each other their results . the transformation is equivalent to one that has been analyzed in refs . 
, all of which give procedures that are consistent with the rules we have set up for our problem ; that is , the procedures can be used to perform the measurement once , rather than asymptotically , using arbitrary entangled states as resources .( some of those papers consider the asymptotic problem , but their procedures also work in the setting we have adopted here . )it appears that the procedure presented by berry in ref . is the most efficient one known so far .it is a multi - stage procedure , involving at each stage a measurement that determines whether another stage , and another entangled pair , are needed .we now present a measurement - based protocol for performing .the protocol can be derived from berry s and yields the same upper bound on the cost , but we arrive at it in a different way that may have conceptual value in the analysis of other nonlocal measurements .the construction of the protocol begins with the following observations .if alice were to try to teleport her qubit to bob using as a resource an incompletely entangled pair , she would cause a nonunitary distortion in its state . with his qubit andalice s distorted qubit , bob could , with some probability less than one , successfully complete the measurement .however , if he gets the wrong outcome , he will destroy the information necessary to complete the measurement .we require the measurement always to be completed , so this protocol fails . on the other hand ,suppose alice , again using a partially entangled pair , performs an _teleportation , conveying to bob only one rather than two classical bits , and suppose bob similarly makes an incomplete measurement , extracting only one classical bit from his two qubits . in that case , if the incomplete measurements are chosen judiciously , a failure does not render the desired measurement impossible but only requires that alice and bob do a different nonlocal measurement on the qubits they now hold . in the following description of the protocol ,we have incorporated the unitary transformations associated with teleportation into the measurements themselves , so that the whole procedure is a sequence of local projective measurements . like berry s protocol , our protocol consists a series of rounds , beginning with what we will call `` round one '' . 1 .alice and bob are given as a resource the entangled state , where the positive real numbers and ( with ) are to be determined by minimizing the eventual cost .thus each participant holds two qubits : the qubit to be measured and a qubit that is part of the shared resource .2 . alice makes a binary measurement on her two qubits , defined by two orthogonal projection operators : here the bell states and are defined by and .alice transmits ( classically ) the result of her measurement to bob .( here alice is doing the incomplete teleportation . in a complete teleportationshe would also distinguish from , and from . )if alice gets the outcome , bob performs the following binary measurement on his two qubits : here , , , and , and the real coefficients and are obtained from and via the equation , together with the normalization condition .( these values are chosen so as to undo the distortion caused by alice s imperfect teleportation . 
) on the other hand , if alice gets the outcome , bob performs a different binary measurement : here , , , and .if alice and bob have obtained either of the outcomes or , which we call the `` good '' outcomes , they can now finish the desired measurement by making local measurements , with no further expenditure of entangled resources .for example , if they get the outcome , alice now distinguishes between and ( which span the subspace picked out by ) , and bob distinguishes between and ( which span the subspace picked out by ) .the total probability of getting one of the two good outcomes is on the other hand , if they have obtained one of the other two outcomes , or `` bad '' outcomes they find that in order to finish the measurement on their _ original _ pair of qubits , they now have to perform a different measurement on the system that they now hold .( even though each participant started with two qubits , each of them has now distinguished a pair of two - dimensional subspaces , effectively removing one qubit s worth of quantum information .so the remaining quantum information on each side can be held in a single qubit . )the measurement has the same form as , but with new values and instead of and .the new values are determined by the equations in any case , alice and bob have now finished round one . if they have obtained one of the bad outcomes , they now have two choices : ( i ) begin again at step 1 but with the new values and , or ( ii ) use up a whole ebit to teleport alice s system to bob , who finishes the measurement locally .they choose the method that will ultimately be less costly in entanglement .if they choose option ( i ) , we say that they have begun round two .this procedure is iterated until the measurement is finished or until rounds have been completed , where is an integer chosen in advance . in round , the measurement parameter is determined from the parameters and used in the preceding round according to eq .( [ newa ] ) ( with the appropriate substitutions ) . here and are to be interpreted as the first - round values and .if rounds are completed and the measurement is still unfinished , alice teleports her system to bob , who finishes the measurement locally .the entanglement used in stage of this procedure is , where is the binary entropy function $ ] . from eqs .( [ probability ] ) and ( [ newa ] ) , we therefore have the following upper bound on the cost of the measurement .* proposition 3 .* for each positive integer , let satisfy .we define the functions ( failure probability ) and ( new value of the measurement parameter ) as follows : where and .let , and for each integer , let be defined by .\end{split}\ ] ] then for each positive integer , is an upper bound on .the protocol calls for minimizing the bound over the values of and .this optimization problem is exactly the problem analyzed by berry .we present in fig .1 the minimal cost as obtained by a numerical optimization , plotted as a function of the entanglement of the eigenstates of the measurement .( in constructing the curve , we have limited alice and bob to two rounds .additional rounds do not make a noticeable difference in the shape of the curve , given our choice of the axis variables . 
)we also show on the figure the lower bound to be derived in section iv .we note that so far , for cases in which the entanglement of the eigenstates of exceeds around 0.55 ebits , there is no known measurement strategy that does better than simple teleportation , with a cost of one ebit .[ cols= " < " , ] average entanglement of the states in almost every case , the resulting lower bound is _ higher _ than the average entanglement of the eigenstates of the measurement .the only exceptions we have found , besides the ones already mentioned in section iv ( in which all the states are maximally entangled or all are unentangled ) , are those for which two of the measurement eigenstates are maximally entangled and the other two are unentangled .that is , this method does not produce a better lower bound for the measurement with eigenstates or for the analogous measurement with replaced by and with the product states suitably replaced to make the states mutually orthogonal . in all other casesthe cost of the measurement is strictly greater than the average entanglement of the states . the measurement has been considered in ref . , whose results likewise give a lower bound on the cost : ( where and ) .this bound is weaker than the one we have obtained , in part because we have followed the later paper ref . in assuming an initial pure state rather than a mixed state of abcd .here we consider a class of measurements for which the cost _ equals _ the average entanglement of the states associated with the povm elements .we begin with another two - qubit measurement , which we then generalize to arbitrary dimension .a measurement closely related to is measurement , which has eight outcomes , represented by a povm whose elements all have , with the eight states given by that is , they are the same states as in , plus the four states obtained by interchanging and .thus , alice and bob could perform the measurement by flipping a fair coin to decide whether to perform or .this procedure yields the eight possible outcomes : there are two possible outcomes of the coin toss , and for each one , there are four possible outcomes of the chosen measurement .the coin toss requires no entanglement ; so the cost of this procedure is equal to the cost of ( which is equal to that of ) . we conclude that as we will see shortly , the cost of is in fact strictly smaller for .the measurement is a non - orthogonal measurement , but any non - orthogonal measurement can be performed by preparing an auxiliary system in a known state and then performing a global orthogonal measurement on the combined system .we now show explicitly how to perform this particular measurement , in a way that will allow us to determine the value of . to do the measurement , alice and bob draw , from their store of entanglement ,the entangled state of qubits c and d. ( as always , alice holds c and bob holds d. 
) then each of them locally performs the bell measurement of the global orthogonal measurement , we can find the corresponding povm element of the ab measurement as follows : \},\ ] ] where is the povm element of the global measurement .less formally , we can achieve the same result by taking the `` partial inner product '' between the initial state of the system cd and the eigenstate of the global measurement .for example , the eigenstate yields the following partial inner product : which works out to be .the corresponding povm element on the ab system is .continuing in this way , one finds the following correspondence between the 16 outcomes of the global measurement and the povm elements of the ab measurement . thus , even though there are formally 16 outcomes of the ab measurement , they are equal in pairs , so that there are only eight distinct outcomes , and they are indeed the outcomes of the measurement .the cost of this procedure is .this is the same as the average entanglement of the eight states representing the outcomes of , which we know is a lower bound on the cost .thus the lower bound is achievable in this case , and we can conclude that is exactly equal to .we note that the povm is invariant under all local pauli operations .this fact leads us to ask whether , more generally , invariance under such operations guarantees that the entanglement cost of the measurement is exactly equal to the average entanglement of the states associated with the povm elements .the next section shows that this is indeed the case for complete povms .we begin by considering a povm on a bipartite system of dimension , generated by applying generalized pauli operators to a single pure state .the povm elements are of the form , where and each index runs from to . here the generalized pauli operators and are defined by with and with the addition understood to be mod .one can verify that the above construction generates a povm for any choice of . in order to carry out this povm , alice and bob use , as a resource , particles c and d in the state , which has the same entanglement as .( as before , the asterisk indicates complex conjugation in the standard basis . )alice performs on ac , and bob on bd , the generalized bell measurement whose eigenstates are to see that this method does effect the desired povm , we compute the partial inner products as in the preceding subsection : thus the combination of bell measurements yields the povm defined by eq .( [ genpovm ] ) .we now extend this example to obtain the following result .* proposition 6 .* let be any complete povm with a finite number of outcomes , acting on a pair of systems each having a -dimensional state space , such that is invariant under all local pauli operations , that is , under the group generated by , , , and .then is equal to the average entanglement of the states associated with the outcomes of , as expressed in eq .( [ ave ] ) . _ proof . _the most general such povm is similar to the one we have just considered , except that instead of a single starting state , there may be an ensemble of states with weights , , such that . the povm elements ( of which there are a total of )are , where ( so plays the role of in eq .( [ ave ] ) . 
) in order to perform this measurement , alice and bob first make a random choice of the value of , using the weights .they then use , as a resource , particles c and d in the state , and perform bell measurements as above .the cost of this procedure is the average entanglement of the resource states , which is but we know that is a lower bound on .since the above procedure achieves this bound , we have that . we discussed in the introduction , a general lower bound on the entanglement cost of a complete measurement is the average entanglement of the pure states associated with the measurement s outcomes .perhaps the most interesting result of this paper is that , for almost all the orthogonal measurements we considered , the actual cost is strictly greater than this lower bound .the same is true in the examples of `` nonlocality without entanglement '' , in which the average entanglement is zero but the cost is strictly positive .however , whereas those earlier examples may have seemed special because of their intricate construction , the examples given here are quite simple .the fact that the cost in these simple cases exceeds the average entanglement of the states suggests that this feature may be a generic property of bipartite measurements .if this is true , then in this sense the nonseparability of a measurement is generically a distinct property from the the nonseparability of the eigenstates .( in this connection it is interesting that for certain questions of distinguishability of generic bipartite states , the presence or absence of entanglement seems to be completely irrelevant . ) we have also found a class of measurements for which the entanglement cost is _ equal to _ the average entanglement of the corresponding states .these measurements have a high degree of symmetry in that they are invariant under all local generalized pauli operations .what is it that causes some measurements to be `` more nonseparable '' than the states associated with their outcomes ?evidently the answer must have to do with the _ relationships _ among the states . in the original `` nonlocality without entanglement ''measurement , the crucial role of these relationships is clear : in order to separate any eigenstate from any other eigenstate by a local measurement , the observer must disturb some of the other states in such a way as to render them indistinguishable .one would like to have a similar understanding of the `` interactions '' among states when the eigenstates are entangled .some recent papers have quantified relational properties of ensembles of bipartite states .perhaps one of these approaches , or a different approach yet to be developed , will capture the aspect of these relationships that determines the cost of the measurement .we thank alexei kitaev , debbie leung , david poulin , john preskill , andrew scott and jon walgate for valuable discussions and comments on the subject .s.b . is supported by canada s natural sciences and engineering research council ( nserc ) .g.b . is supported by canada s natural sciences and engineering research council ( nserc ) , the canada research chair program , the canadian institute for advanced research ( cifar ) , the quantum__works _ _ network and the institut transdisciplinaire dinformatique quantique ( intriq ) ._ lower bound_ our lower bound on the one - round cost is given by eqs .( [ lowerbound1 ] ) and ( [ lowerbound2 ] ) , which we rewrite here in an equivalent form : where is determined by the equation =\frac{(ac+bd)^2 - c^2}{d^2}. 
\label{appequation}\ ] ] for a small value of the parameter , we would like to obtain an approximation to the value of that solves eq .( [ appequation ] ) .as discussed in section iii , we are looking for a solution in the range , and the forms of the functions in eq .( [ appequation ] ) guarantee that there will be a unique solution in this range .one can show that within this range , the right - hand side of eq .( [ appequation ] ) satisfies the inequalities applying these same inequalities to the argument of the function on the left hand side of eq .( [ appequation ] ) , we have for sufficiently small , the function evaluated at the values appearing in eq .( [ app2 ] ) is an increasing function , so we can write \le h\left[\frac{(ac+bd)^2-c^2}{(ac+bd)^2}\right ] \le h\left[\frac{2bd}{(ac+bd)^2}\right ] . \label{explicith}\ ] ] we can bound the entropies to obtain \le -16 bd\log b. \label{entest}\ ] ] combining eqs .( [ appequation ] ) , ( [ firstineq ] ) , and ( [ entest ] ) , we get thus goes to zero as goes to zero , but it does so much more slowly .we now use this observation to approximate each side of eq .( [ appequation ] ) .first , in the entropy function , for very small we can ignore the second term , so that eq .( [ appequation ] ) can be simplified to \approx \frac{(ac+bd)^2}{d^2}.\ ] ] now , with very small and of order , we can approximate as so the equation becomes , but since becomes negligible compared to , we can just as well write finally , the lower bound given by eq .( [ firstapp ] ) becomes all of our approximations have been such that the ratio between the approximating function and the exact function approaches unity as approaches zero .so the same is true of the approximate expression relative to the exact lower bound .our upper bound for the single - round cost ( eq . ( [ oneround ] ) ) is the minimum over in the range of the function where ( here is playing the role of in eq .( [ oneround ] ) . )the function decreases monotonically from the value at to its minimum value at .thus the minimum value of approaches zero for small and is attained arbitrarily close to .therefore for sufficiently small , the function , as it falls to its minimum value , falls farther than rises , and the minimum value of is less than 1 .this minimum value is attained at some value of it is less than .( beyond that point both and are increasing for . )more simply , .so we can limit our attention to values of less than . with this limitation , for small we can approximate the function as setting the derivative of this function equal to zero, we find that can be made arbitrarily close ( in the sense that the fractional error can be made arbitrarily small ) to a solution of for small there are two solutions to this equation with .the smaller one , with of order , corresponds to a local _ maximum _ of , reflecting the fact that the slope of approaches positive infinity as approaches zero , whereas the competing negative slope of is finite at .the other solution , with approximately equal to , is therefore the one we want . at this valuewe have .again , the approximation is such that the ratio of the exact upper bound to this approximate value approaches unity as approaches zero . c. h. bennett , g. brassard , c. crpeau , r. jozsa , a. peres , and w. k. wootters , `` teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels '' , phys .70 * , 1895 ( 1993 ) . c. h. bennett , g. brassard , s. popescu , b. schumacher , j. a. smolin , and w. k. 
wootters , `` purification of noisy entanglement and faithful teleportation via noisy channels '' , phys . rev . lett . * 76 * , 722 - 725 ( 1996 ) . m. hayashi , d. markham , m. murao , m. owari , and s. virmani , `` bounds on multipartite entangled orthogonal state discrimination using local operations and classical communication '' , phys . rev . lett . * 96 * , 040501 ( 2006 ) .
for certain joint measurements on a pair of spatially separated particles , we ask how much entanglement is needed to carry out the measurement exactly . for a class of orthogonal measurements on two qubits with partially entangled eigenstates , we present upper and lower bounds on the entanglement cost . the upper bound is based on a recent result by d. berry [ phys . rev . a * 75 * , 032349 ( 2007 ) ] . the lower bound , based on the entanglement production capacity of the measurement , implies that for almost all measurements in the class we consider , the entanglement required to perform the measurement is strictly greater than the average entanglement of its eigenstates . on the other hand , we show that for any complete measurement in dimensions that is invariant under all local pauli operations , the cost of the measurement is exactly equal to the average entanglement of the states associated with the outcomes .
we obtain two new achievable rate regions for the general discrete memoryless two - way relay channel ( twrc ) , in which two users exchange messages through a relay .we consider twrcs with no direct link between the users ( see fig .[ fig : twrc ] ) .the new rate regions are obtained using the idea of _ functional - decode - forward _ ( fdf ) , where the relay only decodes a function of the users messages or codewords without needing to decode the messages or codewords themselves ( hence saving the _ uplink _ bandwidth from the users to the relay ) .the relay then broadcasts the function to both users .the function must be defined such that knowing its own message , each user is able to decode the message sent by the other user .we first illustrate the concept of fdf using the _ noiseless _ binary adder twrc as an example , where nodes 1 and 2 ( the users ) exchange data through node 3 ( the relay ) .let be node s transmitted signal and be node s received signal .the noiseless binary adder twrc is defined as follows : ( i ) the uplink is , and ( ii ) the downlink is and .assume that the source messages are in bits , i.e. . the well - known optimal ( rate - maximizing ) coding strategy is for the users to transmit uncoded information bits , i.e. , , for , and for the relay to forward its received bits , i.e. , .having received which is , and knowing its own message , node 1 can recover perfectly .node 2 can recover similarly . here, the capacity of 1 bit / channel use is achievable using this strategy . while the bit - wise modulo - two addition of the users messages seems to be a good function for the relay to transmit , the main challenge of fdf on a _ noisy _ twrc lies in : * selecting a _ good function _ of the users messages / codewords which the relay should decode , and * constructing _ good codes _ for the users such that the relay can efficiently decode this function without needing to decode the individual users messages / codewords . in the case of _adder channels _, e.g. , , where is the channel noise , linear codes can be used ( see for the case of binary adder channels , for finite field adder channels , and for awgn channels ) .let be user s length- linear codeword channel uses , e.g. , x_i[2 ] , \dotsc , x_i[n]) ] is on the -th channel use .] , for .the structure of linear codes guarantees that is a codeword from the same code .the relay effectively receives , which is a noisy version of .capacity - achieving linear codes have been shown to exist for this type of additive noise channel .this means if the users transmit using these linear codes , then the relay is able to efficiently decode ( which is a function of the users codewords ) without having to decode the users codewords individually .the relay then broadcasts to the users , and each user can obtain the other user s message from and its own message / codeword . for the above adder channels , the channels actually perform the desired function by adding the users codewords . for fdf on_ general _ discrete memoryless twrcs in which the channels do not `` help '' , it is not immediately obvious what function the relay should decode , and how the relay can decode the function without first decoding the individual messages . in this paper , we use random linear codes for fdf on the general discrete memoryless twrc following the idea in for the multiple - access channel , i.e. , the users transmit randomly generated linear codewords on the uplink . 
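as a concrete illustration of the noiseless binary adder example above, the following python sketch (illustrative only; the block length and random seed are arbitrary assumptions) shows the uncoded exchange: the relay observes the integer sum, broadcasts its parity, and each user strips off its own bits to recover the other user's message.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # number of channel uses / bits exchanged
w1 = rng.integers(0, 2, n)              # user 1's message bits
w2 = rng.integers(0, 2, n)              # user 2's message bits

# uplink: noiseless binary adder, the relay sees the integer sum
y3 = w1 + w2                            # values in {0, 1, 2}

# relay forwards the mod-2 sum (one bit per use) on the noiseless downlink
x3 = y3 % 2                             # equals w1 XOR w2

# each user removes its own bits to recover the other user's message
w2_hat = x3 ^ w1
w1_hat = x3 ^ w2
assert np.array_equal(w2_hat, w2) and np.array_equal(w1_hat, w1)
print("both users recover the other's bits")
```

each user thus learns one bit per channel use in each direction, which is the capacity quoted above; a relay that instead tried to decode both messages individually would be limited by the multiple-access sum rate of the uplink.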
although the uplink output can not be written as a ( noisy ) function of , by invoking the markov lemma we will prove that the relay is still able to _ reliably _( i.e. , with arbitrarily small error probability ) decode without needing to decode the individual messages / codewords .the relay then broadcasts to the users for each of them to obtain the other user s message .we call this strategy functional - decode - forward with linear codes ( fdf - l ) .another method for the relay to decode a function of the users messages in the general discrete memoryless twrc is by using _ systematic computation codes _ on the uplink . on the uplink, the users first send uncoded data , followed by linear - coded signals .after the relay decodes a function of the users messages , the downlink transmission is the same as that in fdf - l .we call this strategy functional - decode - forward with systematic computation codes ( fdf - s ) .we will first derive two achievable rate regions for the general discrete memoryless twrc , using fdf - l and fdf - s .we will then show , using an example , that fdf - l can achieve higher sum rates than those achievable by fdf - s and by existing coding strategies for the twrc , including ( i ) the _ complete - decode - forward _ ( cdf ) coding strategy , where the relay fully decodes the messages from both users , re - encodes and broadcasts a function of the messages back to the users , and ( ii ) the _ compress - forward _ ( cf ) coding strategy , where the relay quantizes its received signals , re - encodes and broadcasts the quantized signals to users .[ fig : twrc ] depicts the general discrete memoryless twrc considered in the paper , where users 1 and 2 exchange data through the relay ( node 3 ) .we denote by the channel input from node , the channel output received by node , and user s message. the twrc can be completely defined by ( i ) the uplink channel , and ( ii ) the downlink channel .let be an -bit message , for .consider on each uplink and downlink , channel uses .user transmits , for . at any time, the relay transmits a function of its previously received signals , i.e. , = f_{3,t}(y_3[1],y_3[2 ] , \dots , y_3[t-1]) ] with respect to a distribution on is the set of sequences such that for , and , where is the number of occurrences of the pair of symbols in the pair of sequences , is an arbitrarily small positive real number , and the sequences in \delta} ] and such that there is no where |\mathcal{f}|\delta} ] are also independent . from (* ? ? ? * theorem 6.9 ) , for a sufficiently large , we have , meaning that is jointly strongly -typical with probability tending to one , by choosing a sufficiently small .so , for a sufficiently large , equals |\mathcal{f}|\delta } \big\ } \nonumber\\ & = \pr \big\ { e_3^c \big\}\pr \big\ { ( \boldsymbol{u}(b ) , \boldsymbol{y}_3 ) \in t^n_{[uy_3]|\mathcal{f}|\delta } \big| e_3^c \big\ } \nonumber\\ & \quad+ \pr \big\ { e_3 \big\ } \pr \big\ { ( \boldsymbol{u}(b ) , \boldsymbol{y}_3 ) \in t^n_{[uy_3]|\mathcal{f}|\delta } \big| e_3 \big\}\\ & > \alpha + ( 1-\delta)\pr \big\ { ( \boldsymbol{u}(b ) , \boldsymbol{y}_3 ) \in t^n_{[uy_3]|\mathcal{f}|\delta } \big| e_3 \big\}\\ & > \alpha + ( 1-\delta)(1-\epsilon),\label{eq : markov - lemma}\end{aligned}\ ] ] for some arbitrarily small , where |\mathcal{f}|\delta } \big| e_3^c \big\ } \leq \pr \ { e_3^c \ } < \delta$ ] .. follows from the markov lemma ( * ? ? 
?* ( lemma 4.1 ) ) because forms a markov chain .note that being jointly strongly -typical does not imply that is jointly strongly -typical .however , since forms a markov chain , invoking the markov lemma yields that is jointly strongly -typical with probability tending to one .it follows that for some arbitrarily small , by choosing a sufficiently small .now , from lemma [ lemma : linear - codes-2 ] , for any , and are independent , and hence and are also independent .so , we have equals |\mathcal{f}|\delta } \big\}\nonumber\\ & \leq \sum_{\substack{v_3 ' \in \{1,2,\dotsc,2^{nr}\ } \setminus \{b\ } } } \pr \big\ { ( \boldsymbol{u}(v_3 ' ) , \boldsymbol{y}_3 ) \in t^n_{[uy_3]|\mathcal{f}|\delta } \big\ } \label{eq : union - bound-2}\\ & = ( 2^{nr}-1)\pr \big\ { ( \boldsymbol{u}(v_3 ' ) , \boldsymbol{y}_3 ) \in t^n_{[uy_3]|\mathcal{f}|\delta } \big\}\\ & \leq ( 2^{nr}-1 ) 2^{-n[i(u;y_3)-\tau ] } \label{eq : jaep2}\\ & < 2^{-n [ i(u;y_3 ) - \tau - r ] } \leq \epsilon_1,\label{eq : end}\end{aligned}\ ] ] for some arbitrarily small if is sufficiently large and if , where as . hereis by the union bound , and follows from ( * ? ? ?* lemma 7.17 ) as and are independent .hence , if for defined in , then , where can be made arbitrarily small , i.e. , the relay can _ reliably _ decode .* downlink : * + assuming that the relay has correctly decoded , it re - encodes and broadcasts the index to the users in downlink channel uses . for large , the users can reliably decode if ( * ? ? ?* ( theorem 15.6.3 ) ) for some .note that linear codes are not required on the downlink .assuming node 1 correctly decodes the relay s message , knowing its own message , it can perform to get , where is the element - wise additive inverse of .node 2 decodes using a similar method . combining and , we have theorem [ theorem : achievability ] . an achievable rate region for the discrete memoryless twrc using fdf can also be obtained by using systematic computation codes ( instead of linear codes ) on the uplink .similar to fdf - l , the relay computes a function of the users codewords ( the function can again be chosen ) and broadcasts this function back to the users .however , on the uplink , using systematic computation codes , the users first send uncoded transmissions to the relay , followed by a refinement stage in which the users send linear - coded transmissions .we can show that the rate region in the following theorem is achievable for the twrc .[ theorem : compute - forward ] consider a twrc where , for some finite field . rename the elements in and so that .the rate pair is achievable if \nonumber \\ r_2 & \leq \left [ \frac{c_\text{mac}h(w_2)}{c_\text{mac } + 2h(x_1 \oplus x_2|y_3 ) } , i(x_3;y_1 ) , i(x_3;y_2 ) \right ] , \nonumber\end{aligned}\ ] ] for some joint distributions of the form and . here, is the maximum sum - rate of the multiple - access channel .the above result is also valid even for twrcs where and are not finite fields .see remark [ remark : non - finite - field ] .the above rate region is obtained using the results in ( * ? ? ?* theorem 2 ) ( by setting ) and ( * ? ? ?* ( theorem 15.6.3 ) ) .the additional factors and in the above equations compared to ( * ? ? ?( 23 ) ) convert computation rates to rates in bits / channel use considered in this paper .the proof is omitted because of space constraints .in this section , we show that the maximum sum rate obtained by fdf - l can be simultaneously higher than those achievable by fdf - s , and by two existing coding strategies : cdf and cf . 
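before turning to that comparison, the following hedged sketch recaps the linear-code mechanism that gives fdf-l its advantage. because both users encode with the same linear code, the modulo-sum of their codewords is itself a codeword encoding the modulo-sum of their messages; the sketch simply assumes the relay has decoded this sum (the step established by the markov-lemma argument above) and shows how the users then recover each other's messages. the generator matrix and message lengths are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 4, 7                                   # message length, codeword length
G = rng.integers(0, 2, (k, n))                # random binary generator matrix

def encode(w):                                # linear encoding over GF(2)
    return (w @ G) % 2

w1 = rng.integers(0, 2, k)
w2 = rng.integers(0, 2, k)
x1, x2 = encode(w1), encode(w2)

# closure: the sum of two codewords is the codeword of the summed messages
assert np.array_equal((x1 + x2) % 2, encode((w1 + w2) % 2))

# assume the relay has decoded the sum codeword (the FDF-L uplink step) and
# broadcasts the corresponding message sum w1 XOR w2 to both users
w_sum = (w1 + w2) % 2
w2_hat = (w_sum + w1) % 2                     # user 1 removes its own message
w1_hat = (w_sum + w2) % 2                     # user 2 removes its own message
assert np.array_equal(w1_hat, w1) and np.array_equal(w2_hat, w2)
print("messages exchanged via the relayed sum")
```

the same closure property holds over any finite field, which is why the construction extends beyond the binary case.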
using cdf, the relay completely decodes the messages and sent by users 1 and 2 respectively .it then encodes and broadcasts a function of the messages to the users such that each user can recover the message sent by the other user .the overall achievable rate region is thus limited by two sets of constraints , i.e. , the multiple - access constraints on the uplink and the broadcast constraints ( * ? ? ?* theorem 2.5 ) on the downlink , and is given in the following theorem .[ theorem : cdf][see ] consider a twrc .the rate pair is achievable using cdf if for some joint distributions of the form and . using this strategythe relay quantizes its received signal to , encodes and broadcasts to the users . assuming that both users can correctly decode , a virtual channel is created from user 1 to user 2 via the relay .similarly , a virtual channel is created from user 2 to user 1 via the relay .the achievable rate region using cf on the twrc is given in the following theorem .[ theorem : cf][see ] consider a twrc .the rate pair is achievable using cf if and , under the constraints and , for some joint distributions of the form and , where , , , and .now , we compare these four coding strategies on a twrc .we consider the following twrc : * , .* is given by the following transition matrix : + [ cols="^,^,^,^,^",options="header " , ] + .each entry in the lower right matrix denotes the conditional probability that is received when are sent .note that can not be written as a noisy function of . * , where the downlink from the relay to each user is a binary - symmetric channel with cross - over probability . the maximum sum rates ( i.e. , ) achievable by the different coding strategies are * fdf - l : . *fdf - s : .* cdf : . * cf ( an upper bound on the maximum sum rate ) : , for some .clearly , fdf - l outperforms the other coding strategies on this twrc .we have proposed a functional - decode - forward coding strategy with linear codes ( fdf - l ) for the general discrete memoryless two - way relay channel ( twrc ) and obtained a new achievable rate region . we showed that using random linear codes for the users , the relay can reliably decode a function of the users codewords even when the channel does not perform the desired function .the function , when broadcast back to the users , allows each user to decode the other user s message .noting that functional decoding on the uplink of the discrete memoryless twrc is also possible using systematic computation codes , we obtained another achievable region for the twrc using functional - decode - forward with systematic computation codes ( fdf - s ) . with an example, we numerically showed that fdf - l is capable of achieving strictly higher sum rates compared to fdf - s and two existing coding strategies , namely , complete - decode - forward and compress - forward .however , using fdf - l or fdf - s , if the cardinalities of the user s input alphabets are both not equal to that of any finite field , only subsets of are utilized for transmission .furthermore , since linear codes are used for fdf - l , the distributions of the users transmitted signals are constrained to be uniform , which is not always optimal for the channel .this paper nonetheless provides coding schemes for the relay to decode a function of the users messages without having to decode the messages individually on the general discrete memoryless twrc ( which may not be additive ). 
this strategy can be useful in multiterminal networks where different destination nodes have knowledge of some source messages and want to decode the messages of other sources . k. narayanan , m. p. wilson , and a. sprintson , `` joint physical layer coding and network coding for bi - directional relaying , '' in _ proc . 45th allerton conf . on commun . , control , and comput . _ , monticello , usa , sep . 26 - 28 2007 , pp . 254 - 259 . b. nazer and m. gastpar , `` the case for structured random codes : beyond linear models , '' in _ proc . 46th allerton conf . on commun . , control , and comput . _ , monticello , usa , sep . 23 - 26 2008 , pp . 1422 - 1425 . c. schnurr , t. j. oechtering , and s. stanczak , `` achievable rates for the restricted half - duplex two - way relay channel , '' in _ proc . 41st asilomar conf . on signals , syst . and comput . _ , pacific grove , usa , nov . 4 - 7 2007 , pp .
we consider the general discrete memoryless two - way relay channel , where two users exchange messages via a relay , and propose two functional - decode - forward coding strategies for this channel . functional - decode - forward involves the relay decoding a function of the users messages rather than the individual messages themselves . this function is then broadcast back to the users , which can be used in conjunction with the user s own message to decode the other user s message . via a numerical example , we show that functional - decode - forward with linear codes is capable of achieving strictly larger sum rates than those achievable by other strategies .
due to the fast development of astronomical observations such as the measurements of the cosmic microwave background temperature anisotropy ( e.g. _ wmap _ and _ planck _ satellites ) and observations of galaxy clustering ( e.g. 6df and sdss galaxy surveys ) , more and more large - scale data sets are available for studying a variety of astrophysical systems .it is , therefore , a common practice in astronomy to combine different data sets to obtain the joint likelihood for astrophysical parameters of interest .the standard approach for this joint analysis assumes that the data sets are independent , therefore the joint likelihood is simply the product of the likelihood of each data set .the joint likelihood function can then be used to determine optimal parameter values and their associated uncertainties . in the frequentist approach to parameter estimation , this is equivalent to the weighted sum of the parameter constraints from the individual data sets , where the weight of each data set is the inverse variance .data sets with small errors provide stronger constraints on the parameters .there is a long history discussing the appropriate way to combine observations from different experiments . in the context of cosmology, the discussion can be traced back to and , where weight parameters were assigned to different data sets to obtain joint constraints on the velocity field and hubble parameter . in these approaches , however , the assignment of weights to data sets with differing systematic errors was , in some ways , ad - hoc .for instance , if a data set has large systematic error and is not reliable , it is always assigned a weight of zero and is effectively excluded from the joint analysis . on the other hand , a more trustworthy data set can be assigned a higher relative weighting . due to the subjectivity and limitations of this traditional way of assigning weights to different data sets, and ( hereafter hbl02 ) developed the original hyperparameter method .this allows the statistical properties of the data themselves to determine the relative weights of each data set . in the framework developed by and hbl02 , a set of hyperparametersis introduced to weight each independent data set , and the posterior distribution of the model parameters is recovered by marginalization over the hyperparameters .the marginalization can be carried out with a brute - force grid evaluation of the hyperparameters , or it can be explored by using monte carlo methods which directly sample the posterior distribution .such possibilities include markov chain monte carlo ( mcmc ) algorithms such as metropolis - hastings and simulated annealing , or non - mcmc methods such as nested sampling .the application of hyperparameters was considered for a variety of cases by hbl02 .for instance , if the error of a data set is underestimated , the direct combination of data sets ( no hyperparameter ) results in an underestimated error - budget , providing unwarranted confidence in the observation and producing a fake detection of the signal . the hyperparameter method ,however , was shown to detect such a phenomenon and act to broaden the error - budget , thus recovering the true variance of the data sets . 
by using the hyperparameter method ,the results of joint constraints become more robust and reliable .this approach has also been applied to the joint analysis of the primordial tensor mode in the cosmic microwave background radiation ( cmb ) , the distance indicator calibration , the study of mass profile in galaxy clusters , and the cosmic peculiar velocity field study .notably , the hyperparameter method established by and hbl02 is limited to independent data sets , where `` no correlation between data sets '' is assumed in the joint analysis . in the analysis of cosmology and many other astrophysical systems ,the data sets sometimes are correlated .for instance , in the study of the angular power spectrum of the cmb temperature fluctuations , the data from the atacama cosmology telescope ( act ) , south pole telescope ( spt ) and _ planck _ satellite share a large range of multipole moments ( see fig. 1 of and fig . 11 of ) .when combining these observations , one needs to consider the correlated cosmic variance term since these data are drawn from a close region of the sky .in addition , in the study of the cosmic velocity field , the bulk flows from different peculiar velocity surveys are drawn from the same underlying matter distribution so , in principle , a non - zero correlation term exists between different peculiar velocity samples .therefore , a method both using hyperparameter method and taking into account the correlation between different data sets is needed in the study of astrophysics . providing such a method is the main aim of this paper . for a clear presentation, we build up our method step - by - step from the most basic level , explaining the concepts and derivation process in a pedagogical way .the structure of the paper is as follows . in section [ sec : statistics ] , we review bayes theorem ( section [ sec : bayes ] ) and the standard multivariate gaussian distribution ( section [ sec : multi - gauss ] ) in the absence of any hyperparameters .section [ sec : hyperparameter ] provides a review of the hyperparameter method as proposed in hbl02 . in section [ sec : hypermatrix ] we present the hyperparameter matrix method , which is the core of the new method proposed in this paper .we quote the appropriate likelihood function for the hyperparameter matrix method for correlated data in section [ sec : hypermatrix ] , leaving its derivation and proofs of its salient features in [ app : posdef ] .the proof of the functional form for the joint likelihood of correlated data sets makes use of several recondite matrix operations and lemmas .these are laid out in [ app : hadamard ] and [ block_matrix2 ] , while the main text simply quotes their results . in section[ sec : test ] , we apply our method to a straight - line model while fitting two independent data sets .we vary the error - budget and systematic errors in each data set to test the behaviour of the hyperparameter matrix method . in section [ sec : improve ] , we also discuss the improvement of our hyperparameter matrix method over the original method proposed by hbl02 .the conclusion and discussion are presented in the last section .let us suppose that our data set is represented by and the parameters of interest are represented by vector .then by bayes theorem , the posterior distribution pr( ) is given by where is called the likelihood function , but here we stick to the notation pr . ] , is the prior distribution of parameters and is the bayesian evidence , an important quantity for model selection . 
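as a minimal numerical illustration of these definitions (not taken from the paper), the sketch below evaluates the posterior and the evidence on a grid for a one-parameter model: the mean of a gaussian data set with known unit variance and a flat prior. the data size, prior range and random seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d = rng.normal(loc=1.0, scale=1.0, size=20)        # data with true mean 1, known sigma = 1

theta = np.linspace(-5.0, 5.0, 2001)               # parameter grid
dtheta = theta[1] - theta[0]
prior = np.full_like(theta, 1.0 / (theta[-1] - theta[0]))   # flat prior on [-5, 5]

# gaussian log-likelihood ln pr(D | theta) with unit noise variance
loglike = np.array([-0.5 * np.sum((d - t) ** 2) - 0.5 * d.size * np.log(2.0 * np.pi)
                    for t in theta])

like = np.exp(loglike)                             # values are tiny but representable here
evidence = np.sum(like * prior) * dtheta           # pr(D): marginalize over the prior
posterior = like * prior / evidence                # pr(theta | D)

print("posterior mean :", np.sum(theta * posterior) * dtheta)
print("ln evidence    :", np.log(evidence))
```

evaluating the evidence in this way for two competing models gives the bayes factor discussed next, which can then be read against the jeffreys scale.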
given a data set , let us suppose we have two alternative models ( or hypotheses ) for , namely and .one can calculate the bayesian evidence for each hypothesis as where the integral is performed over the entire parameter space of each model .note that the models may have different sets of parameters .the evidence is an important quantity in the bayesian approach to parameter fitting , and it plays a central role in model selection .specifically , if we have no prior preference between models and , the ratio between two bayesian evidences gives a model selection criterion , or bayes factor the value of indicates whether the model is favoured over model by data . gave an empirical scale for interpreting the value of , as listed in table [ tab : evidence ] .we will use this table as a criterion to assess the improvement of statistical significance when using the hyperparameter matrix method ..jeffreys empirical criterion for strength of evidence . [ cols="<,<",options="header " , ] ) ) as a function of the correlation strength . is sampled from to with each step . is equal to the value of the bayesian evidence with our hyperparameter matrix method to consider full covariance between data sets , minus the value of bayesian evidence from the original hyperparameter method ( ignore the correlation between data sets ) . for the specific experiment, please refer to sec .[ sec : improve].,width=307 ] the hyperparameter matrix method we propose here is the most general method which can be used to combine arbitrary number of multi - correlated experimental data .this greatly breaks up the limitation of the original hyperparameter method ( and hbl02 ) which can only deal with multiple independent data sets .it is always important , to include all of the correlation information between data sets to obtain correct parameter values and justify the goodness of fit . to see the importance of our method , we design an illustrative experiment to demonstrate this . we generate two data sets with . for each dataset , we generate the samples with mean and with gaussian error but correlated between the two data sets .we take the correlation strength as , , , ... , .then we use these correlated data sets to do a parameter estimation .we first use our hyperparameter matrix method , which considers the full covariance matrix between two data sets .then in order to check the behaviour of the original hyperparameter method , we _ ignore _ the correlation part of the two experiments and treat them as individual data sets .we calculate the bayesian evidence value ( eq .( [ eq : bayes2 ] ) ) for both cases , and obtain the difference between the two bayesian evidence ( be ) values . in fig .[ fig : correlate ] , we plot the difference between be value for our hyperparameter matrix method and for the original hyperparameter method .first , one can see that when , the two methods are the same one so . but as the correlation strength increases , the increases as well , indicating that the hyperparameter matrix method provides better and better fits than the original hyperparameter method .this can be understood as the danger of ignoring correlation between data sets , since the model becomes inadequate to fit the data if the correlation is not included . 
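the following python sketch reproduces the spirit of this experiment under stated assumptions (50 points per data set, unit variances, a constant-mean model with a flat prior; the original sample sizes and noise levels are not reproduced here, and hyperparameters are omitted so that only the effect of the cross-covariance is isolated). it compares the log-evidence computed with the full covariance against the log-evidence obtained when the correlation between the two data sets is ignored.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50                                   # points per data set (assumed)
mu_true = 1.0

def log_evidence(d, C, mu_grid):
    """log of pr(D) for a constant-mean model with a flat prior over mu_grid."""
    Cinv = np.linalg.inv(C)
    logdet = np.linalg.slogdet(C)[1]
    dmu = mu_grid[1] - mu_grid[0]
    logL = np.array([-0.5 * (d - m) @ Cinv @ (d - m)
                     - 0.5 * (logdet + d.size * np.log(2.0 * np.pi))
                     for m in mu_grid])
    logL_max = logL.max()
    # flat prior 1/(range); integrate with a simple Riemann sum in log space
    return logL_max + np.log(np.sum(np.exp(logL - logL_max)) * dmu
                             / (mu_grid[-1] - mu_grid[0]))

mu_grid = np.linspace(0.0, 2.0, 801)
for rho in [0.0, 0.2, 0.4, 0.6, 0.8]:
    # full covariance: unit variances, correlation rho between paired points
    C_full = np.block([[np.eye(n), rho * np.eye(n)],
                       [rho * np.eye(n), np.eye(n)]])
    C_block = np.eye(2 * n)              # correlation between data sets ignored
    d = rng.multivariate_normal(mu_true * np.ones(2 * n), C_full)
    dlnE = log_evidence(d, C_full, mu_grid) - log_evidence(d, C_block, mu_grid)
    print(f"rho = {rho:.1f}  ->  delta ln(evidence) ~ {dlnE:.1f}")
```

the gap is essentially zero at rho = 0 and grows with the correlation strength, in line with the behaviour described above; the paper's own figure uses the full hyperparameter-matrix evidence rather than this stripped-down model.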
in fig .[ fig : correlate ] , one can see that if , the bayes factor becomes `` substantial '' , and if , the bayes factor becomes `` decisive '' .this strongly indicates that when combining multiple correlated data sets , it is very necessary to use our hyperparameter matrix method rather than the original hyperparameter method .in this paper we have reviewed the standard approach to parameter estimation when there are multiple data sets .this is an important aspect to most scientific enquiries , where multiple experiments are attempting to observe the same quantity . in the context of a bayesian analysis, the data can also be used for model selection and tests of the null hypothesis .we reviewed the original hyperparameter method of hbl02 for combining independent data sets , showing how it can overcome inaccurate error bars and systematic differences between multiple data sets .here we developed the hyperparameter matrix method for the case of correlated data sets , and we have shown that it is a preferred model to the standard non - hyperparameter approach of parameter estimation .we rigorously prove that the hyperparameter matrix likelihood can be greatly simplified and be easily implemented . from this form of the likelihood, we can recover the simple case of no hyperparameters where all of the data sets have equal weights .as well , the original hyperparameter approach is recovered in the limit of no inter - data set covariance ( if ) , so our likelihood function provides a generalized form which covers hyperparameter and non - hyperparameter analysis , as well as correlated and uncorrelated data sets .we test this statistical model by fitting two data sets to a straight line , and looked at the consequences of mis - reported error bars , as well as systematic differences between correlated data sets . in all cases , with the assistance of bayesian evidence , we find that the hyperparameter matrix method is heavily favoured over the traditional joint analysis . by using an illustrative example to calculate the difference of bayesian evidence value between the hyperparameter matrix method , and the original hyperparameter method , we demonstrate that the bayes factor becomes very substantial ( decisive ) if is greater than ( ) .this suggests that for the case where two experiments are strongly correlated , our hyperparameter matrix method is heavily favoured over the original hyperparameter method .the method proposed here can be used in a variety of astrophysical systems . in the context of cosmology , when cosmic variance is a common component to all large - scale observations , the data sets drawn from the same underlying density or temperature field will be correlated to some degree .for instance , in the study of cmb where multiple data sets drawn from the same region of the sky are combined ( such as _ planck _ , _ wmap _ , spt and act ) , it is necessary to consider the correlation between data sets since they follow the same underlying temperature distribution .therefore our method can be an objective metric to quantify the posterior distribution of cosmological parameters estimated from the cmb .in addition , in the analysis of the galaxy redshift surveys for cosmic density and velocity fields , when combining two surveys data drawn from the similar cosmic volume , the cosmic variance between different data sets should also be considered as a part of the total covariance matrix since they all follow the same underlying matter distribution . 
In future 21 cm surveys, if two or more surveys sample the neutral hydrogen in the same (or nearly the same) cosmic volume, the correlation between surveys should also be considered when combining data sets. In this sense, our hyperparameter matrix method provides an objective metric for quantifying the probability distribution of the parameters of interest when multiple data sets are combined. In summary, when combining correlated data sets, the hyperparameter matrix method provides an unbiased and objective approach that detects and down-weights unaccounted-for experimental or systematic errors, and in this way it yields robust and reliable constraints on astrophysical parameters.

We would like to thank Chris Blake, Andrew Johnson, Douglas Scott and Jasper Wall for helpful discussions. Support from a CITA National Fellowship is acknowledged. This research is supported by the Natural Sciences and Engineering Research Council of Canada.

The generalized form of the likelihood function for the hyperparameter analysis in the presence of correlated data sets (eq. ([eq:like-hyper2])) must satisfy several properties in order to serve as a probability density function. In particular, the generalized hyperparameter covariance matrix $C$ (eq. ([cov_generalize2])) must have a positive determinant and must be invertible. However, since $C$ is a function of the hyperparameters, which in principle vary from zero to infinity, the positive definiteness and invertibility of $C$ are not immediately clear. The following theorem guarantees the invertibility of the total covariance matrix and the positivity of its determinant.

*Theorem:* the likelihood function for combining correlated data sets with the hyperparameter matrix, i.e. eq. ([eq:like-hyper2]), is equivalent to
\[
P(\vec{x}) = \frac{1}{(2\pi)^{N/2}}\left[\prod_{i}\alpha_{i}^{N_{i}/2}\right]\frac{1}{\sqrt{\det \tilde{C}}}\exp\left(-\frac{1}{2}\vec{x}^{T}\left(\hat{P}\odot\tilde{C}^{-1}\right)\vec{x}\right), \label{like4}
\]
where $N_{i}$ is the dimension of the $i$-th data set (and $N=\sum_{i}N_{i}$), $\tilde{C}$ is the covariance matrix between data sets without the inclusion of hyperparameters (eq. ([new_cov1])), $\odot$ is the element-wise product (same as eq. ([eq:element])), and $\hat{P}$ is the "Hadamard inverse" of the matrix $P$ (see [app:hadamard]).

We first prove the inverse relation (eq. ([inverse_new1])), $\left(P\odot\tilde{C}\right)^{-1}=\hat{P}\odot\tilde{C}^{-1}$.

*Proof.* (1) Let us multiply the matrices $P\odot\tilde{C}$ and $\hat{P}\odot\tilde{C}^{-1}$ and take the $(i,j)$ block element of the product, where the block indices $i$, $j$, $k$ can take any value between 1 and the number of data sets:
\[
\begin{aligned}
\left[\left(P\odot\tilde{C}\right)\left(\hat{P}\odot\tilde{C}^{-1}\right)\right]_{ij}
&= \sum_{k}\left(P\odot\tilde{C}\right)_{ik}\left(\hat{P}\odot\tilde{C}^{-1}\right)_{kj} \\
&= \sum_{k}\left(\tilde{C}_{ik}\ast\left(\alpha_{i}\alpha_{k}\right)^{-1/2}\right)\left(\tilde{C}_{kj}^{-1}\ast\left(\alpha_{k}\alpha_{j}\right)^{1/2}\right) \\
&= \sum_{k}\left(\tilde{C}_{ik}\tilde{C}_{kj}^{-1}\right)\left(\alpha_{j}/\alpha_{i}\right)^{1/2} \\
&= (\delta_{ij})\,I_{N_{i}\times N_{j}}\left(\alpha_{j}/\alpha_{i}\right)^{1/2} \\
&= (\delta_{ij})\,I_{N_{i}\times N_{i}}, \label{eq:proof1}
\end{aligned}
\]
where in the second step we use the property of the block matrix product. The final line of eq. ([eq:proof1]) indicates that the product is an identity matrix only for $i=j$, and is zero otherwise. Thus we prove the inverse relation (eq. ([inverse_new1])).

Next, let us prove the determinant relation (eq. ([detc_hyper1])),
\[
\det C=\left[\prod_{i}\alpha_{i}^{-N_{i}}\right]\det\tilde{C},
\]
where $C$ is given by eq. ([cov_generalize2]), $\tilde{C}$ is given by eq. ([new_cov1]), and $N_{i}$ is the dimension of the $i$-th block.

*Proof.* (2) In [block_matrix2] we prove that a matrix of the type of eq. ([new_cov1]) satisfies the determinant expansion of eqs. ([det1])-([det5]). We now use eqs. ([det1])-([det5]) to prove eq. ([detc_hyper1]). Applying eq. ([det1]) to the hyperparameter covariance matrix $C=P\odot\tilde{C}$ gives the expansion of eq. ([deter_prove1]), in which the auxiliary matrices of eqs. ([det2])-([det5]) are built from $C$; applying the same equation to the covariance matrix $\tilde{C}$ gives the expansion of eq. ([deter_prove2]), with the auxiliary matrices built from $\tilde{C}$. Now we compare the last terms in eqs. ([deter_prove1]) and ([deter_prove2]). Since the last term of eq. ([deter_prove1]) is indeed the block given by eq. ([det2]), we can evaluate it directly; calculating the corresponding term with eq. ([det2]) and using eq. ([inverse_new1]) to rewrite the inverse blocks, we obtain the relation ([eq:proof2-last]) between the corresponding terms of the two expansions. By mathematical induction, all of the terms in eqs. ([deter_prove1]) and ([deter_prove2]) obey eq. ([eq:proof2-last]). Therefore eqs. ([deter_prove1]) and ([deter_prove2]) differ only by the overall hyperparameter factor, i.e. we have proved eq. ([detc_hyper1]).

Combining proofs (1) and (2), we have shown that, in general, when combining multiple correlated data sets with hyperparameters, the inverse and the determinant of the covariance matrix follow eqs. ([inverse_new1]) and ([detc_hyper1]). Therefore the likelihood function for combined correlated data sets is eq. ([like4]). Equation ([like4]) greatly simplifies the computation of the hyperparameter likelihood, since one can always calculate the covariance matrix $\tilde{C}$ of the correlated data sets, use the element-wise product to obtain the covariance matrix with hyperparameters, and then numerically solve for the maximum-likelihood solution.

The Hadamard product is the element-wise product of any two matrices with the same dimension. If $A$ and $B$ are two matrices with the same dimension, the Hadamard product $A\odot B$ is a matrix of the same dimension with element $(i,j)$ equal to
\[
\left(A\odot B\right)_{ij}=A_{ij}B_{ij}.
\]
The Hadamard inverse is an inverse operation which requires that each element of the matrix be nonzero, so that each element of the Hadamard inverse matrix is
\[
\left(\hat{A}\right)_{ij}=\left(A_{ij}\right)^{-1};
\]
we use a hat to denote the Hadamard inverse. Therefore the Hadamard product of a matrix and its Hadamard inverse is a unit matrix in which all elements are equal to one, i.e. $A\odot\hat{A}=\mathbb{1}$.

We will use the following lemma to prove the determinant relation of the covariance matrix of the hyperparameter likelihood, eq. ([detc_hyper1]).

*Lemma:* let $A$ be an $N\times N$ (real or complex) matrix which is partitioned into blocks $A_{ij}$, the diagonal block $A_{ii}$ having size $N_{i}\times N_{i}$. Then the determinant of $A$ is given by eq. ([det1]), in which the matrix entering that expression is defined in eq. ([det2]), the vectors entering it are defined in eqs. ([det3]) and ([det4]), and the remaining matrix is defined in eq. ([det5]).

A particular case of this lemma, in which each block matrix has the same dimension, is shown as a theorem in the literature. Here we extend that theorem to the more general case in which each diagonal block matrix may have a different size, so that the off-diagonal blocks can be rectangular matrices.

*Proof.* We start from the simplest case of two blocks, i.e. $A$ is a symmetric block matrix
\[
A=\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix},
\]
where $A_{11}$ and $A_{22}$ are $N_{1}\times N_{1}$ and $N_{2}\times N_{2}$ semi-positive definite symmetric matrices respectively, and $A_{12}=A_{21}^{T}$ is an $N_{1}\times N_{2}$ matrix. The determinant of $A$ is
\[
\begin{aligned}
\det A &= \det\left(A_{11}\right)\det\left(A_{22}-A_{21}A_{11}^{-1}A_{12}\right)\\
       &= \det\left(A_{22}\right)\det\left(A_{11}-A_{12}A_{22}^{-1}A_{21}\right). \label{det2by2}
\end{aligned}
\]
One can immediately check that this is indeed the simplest case of eqs. ([det1])-([det5]): for two blocks, eq. ([det2]) reduces to the Schur complement appearing in eq. ([det2by2]). Now we can use eq. ([det2by2]) to derive the general equations ([det1])-([det5]) inductively. Let us treat the matrix ([a_mat]) as a 2-by-2 block matrix, in which all of the blocks beyond the first are grouped into one big matrix, which is exactly the matrix of eq. ([det5]); the corresponding off-diagonal groups are exactly the quantities defined in eq. ([det3]) and, in addition, eq. ([det4]). Applying the second line of eq. ([det2by2]) to this matrix, one obtains eq. ([deta22]). Proceeding in the same way, the remaining big matrix can again be separated into two blocks, giving eq. ([deta221]). Combining eqs. ([deta22]) and ([deta221]) and repeating this operation, breaking down the first term each time, one eventually reaches eq. ([eq:deta1]), which gives the determinant of $A$ as a product of factors. Comparing the brackets in eq. ([eq:deta1]) with eq. ([det2]), one finds that each term is exactly the same; therefore the determinant is given by eq. ([det1]).
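The two relations proved above are straightforward to verify numerically. The sketch below (ours, not from the paper) draws an arbitrary positive-definite $\tilde{C}$ for two data blocks with illustrative sizes and hyperparameter values, forms the hyperparameter covariance as the element-wise product $P\odot\tilde{C}$ with $P_{ij}=(\alpha_i\alpha_j)^{-1/2}$, and checks the inverse relation, the determinant relation, and the equivalence of eq. ([like4]) with the direct Gaussian evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two data blocks with illustrative sizes and hyperparameters.
sizes = np.array([3, 4])
alphas = np.array([0.7, 1.8])
N = sizes.sum()

# A random symmetric positive-definite "no-hyperparameter" covariance C_tilde.
M = rng.normal(size=(N, N))
C_tilde = M @ M.T + N * np.eye(N)

# Element-level hyperparameter matrices: P_ij = (a_i a_j)^(-1/2), P_hat its Hadamard inverse.
a = np.repeat(alphas, sizes)                 # hyperparameter attached to each data point
P = np.outer(a, a) ** -0.5
P_hat = 1.0 / P

C = P * C_tilde                              # Hadamard (element-wise) product

# Inverse relation: (P o C_tilde)^(-1) = P_hat o C_tilde^(-1).
assert np.allclose(np.linalg.inv(C), P_hat * np.linalg.inv(C_tilde))

# Determinant relation: det C = prod_i alpha_i^(-N_i) * det C_tilde.
assert np.isclose(np.linalg.det(C),
                  np.prod(alphas ** -sizes.astype(float)) * np.linalg.det(C_tilde))

# The simplified likelihood (eq. like4) agrees with the direct Gaussian evaluation.
x = rng.normal(size=N)
direct = np.exp(-0.5 * x @ np.linalg.inv(C) @ x) / np.sqrt((2 * np.pi) ** N * np.linalg.det(C))
simplified = (np.prod(alphas ** (sizes / 2.0))
              * np.exp(-0.5 * x @ (P_hat * np.linalg.inv(C_tilde)) @ x)
              / np.sqrt((2 * np.pi) ** N * np.linalg.det(C_tilde)))
assert np.isclose(direct, simplified)
print("inverse, determinant and likelihood identities verified")
```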
We construct a "hyperparameter matrix" statistical method for performing joint analyses of multiple correlated astronomical data sets, in which the weight of each data set is determined by its own statistical properties. The method is a generalization of the hyperparameter approach previously constructed for combining independent data sets. Its advantage is that it treats correlations between multiple data sets and assigns appropriate relative weights to mutually correlated data sets. We define a new "element-wise" product, which greatly simplifies the likelihood function with the hyperparameter matrix. We rigorously prove the simplified formula for the joint likelihood and show that it recovers the original hyperparameter method in the limit of no covariance between data sets. We then illustrate the method by applying it to a demonstrative toy model of fitting a straight line to two sets of data. We show that the hyperparameter matrix method can detect unaccounted-for systematic errors or underestimated errors in the data sets. In addition, the ratio of Bayes factors provides a distinct indicator of whether hyperparameters need to be included. Our example shows that the likelihood we construct for the joint analysis of correlated data sets can be widely applied to many astrophysical systems.
Bayesian analysis, data analysis, statistical method, observational cosmology
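A minimal sketch of the toy straight-line fit follows (ours, and restricted to the independent-data-set limit in which the cross-covariance vanishes). It relies on the standard result that, for Gaussian data, maximizing the hyperparameter likelihood over a hyperparameter $\alpha_k$ at fixed model parameters gives $\alpha_k=N_k/\chi_k^2$; iterating this update together with a weighted least-squares fit automatically down-weights a survey whose errors are under-reported. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two surveys of the same straight line y = 2x + 1; both report sigma = 0.1,
# but the second survey's true scatter is three times larger.
x1 = x2 = np.linspace(0.0, 1.0, 20)
sig = 0.1
y1 = 2 * x1 + 1 + rng.normal(scale=0.1, size=x1.size)
y2 = 2 * x2 + 1 + rng.normal(scale=0.3, size=x2.size)

def wls(xs, ys, weights):
    """Weighted least-squares fit of y = a*x + b over several data sets."""
    X = np.vstack([np.concatenate(xs), np.ones(sum(x.size for x in xs))]).T
    Y, W = np.concatenate(ys), np.concatenate(weights)
    return np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * Y))

# Iterate: fit the line, then set each hyperparameter to its conditional
# maximum alpha_k = N_k / chi2_k, which down-weights discrepant surveys.
alphas = np.ones(2)
for _ in range(20):
    w = [alphas[k] * np.ones(20) / sig**2 for k in range(2)]
    a, b = wls((x1, x2), (y1, y2), w)
    chi2 = [np.sum((y - (a * x + b)) ** 2) / sig**2 for x, y in ((x1, y1), (x2, y2))]
    alphas = 20.0 / np.array(chi2)

print(f"slope = {a:.2f}, intercept = {b:.2f}, hyperparameters = {alphas.round(2)}")
# alpha_1 stays near 1, while alpha_2 drops well below 1, flagging and
# down-weighting the survey whose errors were under-reported.
```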
despite the underlying complexities of earthquake dynamics and their complex spatiotemporal behavior , celebrated statistical scaling laws have emerged , describing the number of events of a given magnitude ( gutenberg - richter law ) , the decaying rate of aftershocks after a main event ( omori law ) , the magnitude difference between the main shock and its largest aftershock ( bath law ) , as well as the fractal spatial occurrence of events .recent work has shown that scaling recurrence times according to the above laws results in the distribution collapsing onto a single curve . however , while the fractal occurrence of earthquakes incorporates spatial dependence , it appears to embed isotropy in the form of radial symmetry , while the occurrence of real - world earthquakes is usually anisotropic . to better characterize this anisotropic spatial dependenceas it applies to such heterogeneous geography , network approaches have been recently applied to study earthquake catalogs . these recent network approachesdefine links as being between successive events , events close in distance , or being between events which have a relatively small probability of both occurring based on three of the above statistical scaling laws .these methods define links between singular events .in contrast , we define links between locations based on long - term similarity of earthquake activity .while earlier approaches capture the dynamic nature of an earthquake network , they do not incorporate the characteristic properties of each particular location along the fault .various studies have shown that the interval times between earthquake events for localized areas within a catalog have distributions not well described by a poisson distribution , even within aftershock sequences .this demonstrates that each area not only has its own statistical characteristics , but also retains a memory of its events . as a result, successive events may not be just the result of uncorrelated independent chance but instead might be dependent on the history particular to that location . if prediction is to be a goal of earthquake research , it makes sense to incorporate interactions due to long - term behavior inherent to a given location , rather than by treating each event independently .we include long - term behavior as such in this paper by considering a network of locations ( nodes ) and interactions between them ( links ) , where each location is characterized by its long - term activity over several years .for our analysis , we utilize data from the _ japan university network earthquake catalog _ ( junec ) , available online at http://wwweic.eri.u - tokyo.ac.jp / catalog / junec/. we choose the junec catalog because japan is among the most active and best observed seismic regions in the world . because our technique is novel , this catalog provided the best avenue for employing our analysis . in the future, it may be possible to fine - tune our approach to more sparse catalogs .the data in the junec catalog span 14 years from 1 july 1985 - 31 december 1998 and are depicted in fig .[ fig : actmap ] .each entry in the catalog includes the date , time , magnitude , latitude , and longitude of the event .we found the catalog to obey the gutenberg - richter law for events of magnitude 2.2 or larger . 
by convention ,this is taken to mean that the catalog can be assumed to be complete in that magnitude range .however , because catalog completeness can not be guaranteed for shorter time periods over a 14-year span , we also examine gutenberg - richter statistics for each non - overlapping two - year period ( fig .[ fig : gr_by_year ] ) .we find that , though absolute activity varies by year , the relative occurrences of quakes of varying magnitudes does not change significantly for events between magnitude 2.2 and 5 , where there is the greatest danger of events missing from the catalog .additionally , the data are spatially heterogeneous , as shown in fig .[ fig : actmap ] .most events take place either over land or off japan s east coast .we remark to the reader that this is not an artifact of more detection equipment being located on land .the primary means for locating and detecting earthquake events involves using the s - waves and p - waves that emanate from the events .seismic stations are capable of detecting these waves a great distance from their source .both s - waves and p - waves travel through the earth s mantle , and the characteristic absorption distance , defined as the distance for wave amplitude to drop to of its original value , for body waves is on the order of 10,000 km .any event of magnitude 5.5 or larger , for example , is detectable anywhere on earth .hence , the location of the detection equipment does not affect how accurately events are catalogued . additionally ,because the location of the japanese archipelago is a consequence of seismic activity involving the philippine and other tectonic plates , it is not surprising that most seismic events take place on or near the islands themselves .we partition the region associated with the junec catalog as follows : we take the northernmost , southernmost , easternmost , and westernmost extrema of all events in the catalog as the spatial bounds for our analysis .we partition this region into a 23 23 grid which is evenly spaced in geographic coordinates .each grid square of approximate size 100 km 100 km is regarded as a possible node in our network .results do not qualitatively differ when the fineness of the spatial grid is modified , in agreement with analogous work carried out by ref . , using a different technique from ours . however , 100 km boxes are a more physical choice , as 100 km is on the order of rupture length associated with earthquakes , which in turn is roughly equivalent to the aftershock zone distance for larger earthquakes . for a given measurement at time , an event of magnitude occurs inside a given grid square .similar to the method of corral , we define the signal of a given grid square to form a time series , where each series term is related to the earthquake activity that takes place inside that grid square within the time window , as described below .because events do not generally occur on a daily basis in a given grid square , it is necessary to bin the data to some level of coarseness .how coarse the data are treated involves a trade - off between precision and data richness .we define the best results as those corresponding to the most prominent cross - correlations . 
to this end, we choose 90 days as the coarseness for our time series .this choice means that will cover a time window of days and will cover the 90-day non - intersecting time period immediately following , giving approximately 4 increments per year .additional analysis shows that results do not qualitatively differ by changing the time coarseness .we refer to the time series belonging to each grid cell as that grid cell s signal .we define the signal that is related to the energy released in the the grid cell by where denotes the number of events that occur in time window in grid square .we choose this definition because the term is proportional to the energy released from an earthquake of magnitude m .the signal therefore is proportional to the total energy released at a given location in a 90-day time period . to define a link between two grid squares ,we calculate the pearson product - moment correlation coefficient between the two time series associated with those two grid squares where indicates the mean and the standard deviations of the time series .we consider the two grid squares linked if is larger than a specified threshold value , where is a tunable parameter .as is standard in network - related analysis , we define the degree of a node to be the number of links the node has .note that our signal definition eq .[ signal ] involves an exponentiation of numbers of order 1 .this means that the energy released , and therefore the cross - correlation between two signals , is dominated by large events .examples of signals with high correlation are shown in fig .[ fig : signals ] . to confirm the statistical significance of , we compare of any two given signals with calculated by shuffling one of the signals .we also compare with the cross - correlation we obtain by time - shifting one of the signals by varying time increments , where is in units of 90 days .further , we impose periodic boundaries where is the length of the series .our justification for these boundaries is that events in the distant past ( years ) should have nominal effects on the present , while they also provide typical background noise for comparison .we note that over 14-year time period 1985 - 1998 , the overall observed activity increases in the areas covered by the catalog . to ensure that the values we calculate are not simply the result of trends in the data , we compare our results to those obtained with linearly detrended data .we find that the trends do not have a significant effect .for example , using , we obtain 815 links , while detrending the data results in only 3 links dropping below the threshold correlation value . for , we obtain 1003 links , while detrending results in only 3 links dropped .additionally , after detrending , 94% of correlation values stay within 2% of their values .as described above , we compare of eq .[ rtilde ] between signals at different locations at the same point in time with and with with correlation coefficient obtained by shuffling one of the series .shuffling or time - shifting by a single time step ( representing 90 days ) reduces to within the margin of significance , as shown in fig . [ fig : correlcompare ] . 
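As a concrete illustration of the construction just described, the sketch below (our own; the catalog format is assumed, and the per-event energy proxy $10^{1.5m}$ is an assumption consistent with the statement that each term is proportional to the energy released by an event of magnitude $m$) bins events onto a 23 x 23 grid and into 90-day windows, builds the per-cell signals, links cells whose Pearson correlation exceeds a threshold, and includes the time-shuffled control used for significance testing.

```python
import numpy as np

def build_network(lat, lon, mag, t_days, n_grid=23, window=90.0, threshold=0.5):
    """Grid a catalog, build per-cell energy signals, and link correlated cells.

    lat, lon, mag, t_days: 1-d arrays describing the events (t_days = days
    since the start of the catalog).  Returns the signals of the active cells,
    their correlation matrix, and the index pairs of linked active cells.
    """
    # Spatial bins spanning the catalog extrema; 90-day time bins.
    lat_edges = np.linspace(lat.min(), lat.max(), n_grid + 1)
    lon_edges = np.linspace(lon.min(), lon.max(), n_grid + 1)
    t_edges = np.arange(0.0, t_days.max() + window, window)

    ix = np.clip(np.digitize(lat, lat_edges) - 1, 0, n_grid - 1)
    iy = np.clip(np.digitize(lon, lon_edges) - 1, 0, n_grid - 1)
    it = np.clip(np.digitize(t_days, t_edges) - 1, 0, len(t_edges) - 2)

    # Signal: summed energy proxy 10^(1.5 m) released in each cell and window.
    signals = np.zeros((n_grid * n_grid, len(t_edges) - 1))
    np.add.at(signals, (ix * n_grid + iy, it), 10.0 ** (1.5 * mag))

    # Keep cells with some activity, correlate all pairs, threshold into links.
    active = signals.std(axis=1) > 0
    corr = np.corrcoef(signals[active])
    links = np.argwhere(np.triu(corr > threshold, k=1))
    return signals[active], corr, links

def shuffled_corr(sig_a, sig_b, rng=np.random.default_rng(2)):
    """Control: permuting one signal in time should destroy the correlation."""
    return np.corrcoef(sig_a, rng.permutation(sig_b))[0, 1]
```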
Shuffling the signal similarly reduces the cross-correlation to the level of noise. We find a large number of links with cross-correlations far larger than their shuffled counterparts. The number of links exceeds that of time-shuffled data by roughly 3-8, depending on the choice of threshold, as shown in fig. [fig:synth](a). However, as shown, there are still many links that can be regarded as the result of noise. We therefore further examine the difference between the number of links found in time-shuffled data and the number found in the original data (fig. [fig:synth](b)). We find that the fraction of "real" links in general increases with the threshold. A significant fraction of these links connect nodes farther apart than 1000 km, as can be seen in fig. [fig:map2]. This is consistent with the finding that there is no characteristic cut-off length for interactions between events, corroborated by fig. [fig:possible], which shows the number of links the network has at a given distance as a fraction of the number of links possible between any two nodes of the potential network. Distances shorter than 100 km have sparse statistics due to the coarseness of the grid, while distances greater than 2300 km have sparse statistics due to the finite spatial extent of the catalog. Within this range, the fraction of links observed drops off approximately no faster than a power law. We find qualitatively similar results when we adjust the grid coarseness. Our results, shown in fig. [fig:map2], are anisotropic, with the majority of links oriented approximately 37.5 degrees east of north. This is roughly along the principal axis of Honshu, Japan's main island, and parallel to the highly active fault zone formed by the subduction of the Philippine and Pacific tectonic plates under the Amurian and Okhotsk plates, respectively. High-degree nodes (i.e. nodes with a large number of links) tend to be found in the northeast and north-central regions of the JUNEC catalog and are notably not strongly associated with the most active locations in the catalog, which we discuss in further detail below. In network physics, we often characterize networks by the preference for high-degree nodes to connect to other high-degree nodes. The strength of this preference is quantified by the network's assortativity, defined as the Pearson correlation coefficient of eq. ([prsn]) computed between two degree series. The series are found as follows: iterating through all entries in the adjacency matrix, the degree of each node is appended to one series and the degree of the node it links to is appended to the other. The assortativity coefficient thus gives a correlation of node degree within the network. If each node connects only to nodes of the same degree, the two series are identical and A = 1. Networks such as the network of paper coauthorship have positive assortativity, while the world-wide web and many ecological and biological systems have negative assortativity. Fig. [fig:assort] shows that the networks resulting from our procedure are highly assortative, with assortativity generally increasing with the threshold. The finding of a positive correlation between the degree of a node and the degree of its neighbors is consistent with an analogous finding with Iranian data, obtained using a different technique from ours.
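The assortativity just defined can be computed directly from the adjacency matrix. A minimal sketch (ours) follows the recipe in the text: for every entry of the adjacency matrix, append the degree of the node to one series and the degree of the node it links to to a second series, then take the Pearson correlation of the two series.

```python
import numpy as np

def assortativity(adj):
    """Degree assortativity of an undirected network from its adjacency matrix."""
    adj = np.asarray(adj, dtype=bool)
    deg = adj.sum(axis=1)
    src, dst = np.nonzero(adj)      # every entry of the adjacency matrix
    if src.size == 0:
        return np.nan
    # Series of degrees at either end of each link, correlated as in eq. (prsn).
    return np.corrcoef(deg[src], deg[dst])[0, 1]

# A triangle of degree-3 nodes, each carrying a pendant leaf: this small
# network is disassortative, and the function prints approximately -0.33.
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3), (1, 4), (2, 5)]:
    A[i, j] = A[j, i] = 1
print(assortativity(A))
```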
for comparisonwe show the assortativity obtained by using time shuffled networks .since assortativity of the original networks is far higher than those of shuffled systems , the high assortativity can not be due to a finite size effect or to the spatial clustering displayed in the data , since time shuffling preserves location .we investigate the nature of the high - degree nodes and find that high degree is not a matter of more events being nearby , as there is a slight tendency for higher degree nodes to actually have _ longer _ distance links on average than low degree nodes .additionally , we found that node degree is essentially independent of both maximum earthquake size and number of events . because fig .[ fig : synth ] shows , as mentioned above , that many links can be regarded as the result of noise , we investigate the stability of links over time ( fig . [fig : likeness ] ) .similarity of the network between the first seven years ( 1985 - 1992 ) and the second seven years ( 1992 - 1998 ) in the catalog is found as follows .we find the set of links that satisfy in both the 1985 - 1992 network and the 1992 - 1998 network , and create a series out of the respective link strengths ( correlations ) in the 1985 - 1992 network .we create another series using the same links , now using the corresponding strengths from the 1992 - 1998 network .we then correlate the two series using the pearson correlation coefficient given by eq .( [ prsn ] ) .we find that the network is far more stable over time than counterpart results given by shuffling the time series ( fig .[ fig : likeness ] ) . because one would expect large correlations that arise purely from noise to have no `` memory '' from one time period to another , the finding of network stability over several years is consistent with our result that these links are not simply the result of chance .to summarize our results , we have introduced a novel method for analyzing earthquake activity through the use of networks .the resulting networks ( i ) display links with no characteristic length scale , ( ii ) display far more links than expected from chance alone , ( iii ) are far more assortative , and ( iv ) display significantly more link stability over time .the lack of a characteristic length scale is consistent with previous work and underscores the difficulty in making accurate predictions .the statistically significant nature of all of these results is consistent with the possibility of the presence of hidden information in a catalog , not captured by existing models or previous earthquake network approaches .b. gutenberg and c.f .richter , bull .* 34 * , 185 ( 1944 ) .f. omori , j. coll .tokyo * 7 * , 111 ( 1894 ) ; see the recent work of m. bottiglieri , l. de arcangelis , c. godano , and e. lippiello , phys .rev . lett . *104 * , 158501 ( 2010 ) .the gutenberg - richter law states that the number of events in the catalog greater that a certain magnitude has an exponential dependence , i.e. , where and are empirically observed constants with typically .p - waves and s - waves are the body waves which originate at an earthquake and travel through the earth .they are the primary means for locating an event .see : k. e. bullen and b. a. 
bolt, _An Introduction to the Theory of Seismology_ (Cambridge University Press, Cambridge, 1993). We note that this term is similar in appearance to, though distinct from, the cumulative Benioff strain, the predictive power of which is hotly contested in geophysics. However, our technique does not use this term to make predictive statements about any individual events in a specific location, but rather allows us to observe patterns in the similarity of behavior across different locations.

Figure captions:

Fig. [fig:actmap]: Events of the JUNEC catalog; larger circles with brighter colors denote more events. The catalog clusters spatially, with most activity occurring on the eastern side of Honshu, Japan's main island.

Fig. [fig:signals]: (a) Two signals with a high Pearson correlation coefficient, associated with locations 878 km apart; (b) the corresponding cross-correlation as a function of time offset, as defined by eq. [rtilde]; (c) scatterplot of one signal against the other, each point corresponding to a single point in time for the two simultaneous signals. Because the signal is defined in terms of exponentiation, large events dominate the correlation, just as large events dominate the total energy released in an earthquake catalog.

Fig. [fig:correlcompare]: Cross-correlation between various pairs of signals versus time offset, shown for links above two different thresholds. Note the strong peak at zero offset, where the signals are compared at the same time; offsetting the signals in time lowers the cross-correlation to the level of noise in the actual data. As a control, the signals are shuffled and the cross-correlation is recomputed for different time shifts (shown below each panel).

Fig. [fig:synth]: (a) Number of links in the actual data compared with the distribution obtained from time-shuffled data, for one choice of threshold; the actual result lies well above the shuffled mean, with about 17% more links. (b) Results are similar for other thresholds; the fraction of links that can be regarded as "real" or meaningful generally increases with the threshold.

Fig. [fig:map2]: Links connected to high-degree nodes, for four combinations of correlation threshold and degree cut, each giving approximately 70-90 links. Darker colors (red online) indicate stronger links, i.e. stronger cross-correlations.

Fig. [fig:assort]: Assortativity (via eq. [prsn]) for a wide range of thresholds, generally increasing with the threshold; positive values indicate that high-degree nodes tend to link to high-degree nodes and low-degree nodes to low-degree nodes. Assortativity values from time-shuffled networks are shown for comparison, demonstrating that the finding is neither a finite-size effect nor a result of spatial clustering, since time-shuffling preserves location.

Fig. [fig:likeness]: Stability of the network over time, obtained by (i) selecting links present in both the 1985-1992 and 1992-1998 networks, (ii) forming one series from the link strengths (cross-correlations) in the 1985-1992 network and another from the corresponding strengths in the 1992-1998 network, and (iii) correlating the two series with the Pearson coefficient of eq. ([prsn]).
earthquakes are a complex spatiotemporal phenomenon , the underlying mechanism for which is still not fully understood despite decades of research and analysis . we propose and develop a network approach to earthquake events . in this network , a node represents a spatial location while a link between two nodes represents similar activity patterns in the two different locations . the strength of a link is proportional to the strength of the cross - correlation in activities of two nodes joined by the link . we apply our network approach to a japanese earthquake catalog spanning the 14-year period 1985 - 1998 . we find strong links representing large correlations between patterns in locations separated by more than 1000 km , corroborating prior observations that earthquake interactions have no characteristic length scale . we find network characteristics not attributable to chance alone , including a large number of network links , high node assortativity , and strong stability over time .
the complexity of turbulence is due to a wide range of nonlinearly interacting scales .the numerical simulation of a turbulent flow , in most practical applications , can not take into account the full range of scales , due to limitations in computational resources .the principle of large eddy simulation ( les ) is that only the large scales are computed directly .the influence of the scales smaller than a given scale , associated to the grid - mesh of the simulation , are modeled as a function of the resolved scales . to develop consistent subgrid scale ( sgs ) models ,criteria are needed , based on physical or mathemathical principles and sometimes on the numerical stability of the closed set of equations .it is important that these criteria are well defined and generally accepted .the last two decades have seen the emergence of a large number of new subgrid models , and it is the authors opinion that the turbulence community must devote more efforts in developing consensual criteria than in increasing the number of models . the purpose of the present work is to investigate one possible criterion , which is the time - reversibility of a subgrid model , when the orientation of the velocity is inversed , _i.e. _ , under the transformation .we report the results of direct numerical simulations ( dns ) in which the velocity is reversed in all , or part , of the scales of the flow .these results are then compared to results from les in which the velocity is reversed , to assess the quality of the predictions of the models and to check whether time - reversibility is a valid criterion to assess subgrid models .in the absence of viscosity , the dynamics of the navier - stokes equations ( which reduce to the euler equations ) , are invariant under the simultaneous transformation , .this means that if at an instant the velocity is reversed , the flow will evolve backwards in time until the initial condition is reached . on the level of energy transfer between scales, this property implies that in the inviscid case the direction of the energy transfer reverses when the velocity is reversed ( or when time is reversed ) .this can be understood since the nonlinear interactions , which govern the cascade of energy between scales , are associated with triple velocity correlations .the sign of these triple products is changed when the velocity is reversed , so that the nonlinear energy transfer proceeds in the opposite direction .this symmetry is broken as soon as viscous dissipation is introduced since the viscous term of the navier - stokes equations does not share this symmetry .indeed , the conversion of kinetic energy to heat through the action of viscous stresses is an irreversible process within the macroscopic ( continuum ) description of turbulence . pinning down to what extent this symmetry property of the euler equationsis retained in navier - stokes turbulence is an interesting academic question in its own right .there is , however , also a practical reason to investigate this property . even though the probability of a complete velocity reversal in a real - life flow is small, it will occur locally in time and space in various applications . indeed in the presence of external forces which generate large - scale structures , quasi - two dimensionalisationcan be observed in which the backscatter can exceed the forward flux of energy .typical examples are thermal convection , turbulence in the wake of a cylinder , the turbulent boundary layer and quasi - two - dimensional flows . 
in these cases , the large scales can be regarded as partly reversed non - equilibrium states , which constitutes a challenge for sgs models .some mixed models are considered good choices in numerical tests for these particular flows , but the reasons are not entirely clear . in particular in these complex flow geometriesit is hard to disentangle the influence of the backscatter from the influence of other flow - properties .for this reason it seems helpful to carefully assess the influence of the time - reversibility of navier - stokes turbulence in the academically most simple setting , isotropic turbulence .we hope that this study will thereby contribute to the understanding and evaluation of subgrid scale models for large eddy simulation .if we consider les and we reverse the resolved velocity , it is not known what the subgrid model is supposed to do . for convenience we will limit our discussion to the most widely used class of models , based on the concept of eddy - viscosity .the ( scalar ) eddy - viscosity model expresses the subgrid stress , as a function of the resolved scales ( denoting a filtered quantity ) , by assuming that is aligned with the resolved strain - rate tensor , the eddy - viscosity assumption is then given by with the eddy - viscosity . note that here we only consider the effect of a filter but not the discretization error . although it may not be the only choice ( see for example carati __ who introduced a `` subgrid scale stress '' which includes the error of discretization ) , it is the choice made in most investigations of subgrid scale models . for some models ,the reversal of the velocity leads to a reversal of the subgrid stress tensor , for others it does not .indeed , the dynamic procedure leads to subgrid models that are time - reversible in the sense that the dynamics evolve backwards in time after a transformation ( see also reference ) .another reversible model is the czzs model by cui _et al . _ as well as its recently proposed extension . in the present workthe simplified formulation of the czzs model is used as an example of a time - reversible model . in this model the eddy - viscosity is given by where is the second - order longitudinal structure function of the filtered velocity , is the third - order longitudinal structure function , is the filter size and indicates an ensemble average which is in practice often treated as an average in the homogeneous directions .this model is time - reversible since the third - order structure function changes sign when . for the smagorinsky modelthis is not the case . for this model ,the eddy - viscosity is given by the eddy - viscosity in eq .( [ eqsmag ] ) can not become negative , so that the net flux of energy to the subgrid scales is always positive .this flux is defined as ( see fig . [fig : ek_0.81 ] for a graphic representation of the fluxes and ) . in the smagorinsky modelthe flux of energy from the large to the small scales is thus always larger or equal to the backscatter .the time - reversibility property of models , such as for example the dynamic model , is sometimes seen as a weakness .one reason for that is that subgrid - scale models are generally supposed to dissipate the energy flux towards the small scales ( see _ e.g. 
_ reference for a theoretical discussion on this subject ) .another reason is that these models become more easily ( numerically ) unstable .however it is well known that the backscatter of energy , to the resolved scales is a physical property , which should be taken into account in a correct model of the subgrid dynamics ( see _ e.g. _ reference ) .indeed the backward energy flux is not necessarily constrained to be inferior to the forward flux , and a negative energy flux should therefore not _ a priori _ be excluded by a model .in the present section we consider by direct numerical simulation the dynamics of subgrid and resolved scale energy after transformation . we will define a resolved velocity field , and a subgrid velocity ( see fig .[ fig : ek_0.81 ] ) . in the dnsboth velocities are computed and distinction between the two velocities is made by introducing a cut - off wavenumber , corresponding to the use of a sharp spherical low - pass filter in fourier space .other filters could also be considered , such as gaussian filters .we expect that the remainder of the present analysis will still hold qualitatively , but a separate investigation of the influence of the filter - type is outside the scope of the present article .extensive investigations on the influence of the filter - type on energy transfer in isotropic turbulence can be found in references . in these studiesit is shown that the qualitative features ( and in particular the locality ) of the subgrid - scale flux are not changed when considering smooth or sharp filters , as long as the smoothing is not too gentle . in the followingwe will focus on sharp spectral filters only .the main focus of the present work is on the case in which all scales of a freely decaying turbulent flow are reversed at a given time .this case will be denoted by rr , and will be compared to a freely decaying unmodified flow , denoted by nn .simulations are carried out using a standard pseudo - spectral solver and a fourth order runge - kutta time - integration scheme , with a semi - implicit treatment of the viscous term .the computational domain has grid - points .all cases simulate a freely decaying isotropic turbulence , starting from the same random initial field , with a spectral energy distribution similar to the measured spectrum in the experimental work of comte - bellot and corrsin .the location of the filter is illustrated in fig .[ fig : ek_0.81 ] , in which the energy spectrum at the time of reversal is also shown .the evolution of the grid - scale and subgrid - scale energies , and respectively , is given by in these equations and are the grid - scale and subgrid - scale dissipation rates . at high reynolds numbers is small compared to the energy - flux , if is chosen in the energy containing or inertial range .the evolution of grid - scale energy is shown in fig .[ fig : e_gs ] .the time is normalized as , with the time of reversal and the turnover - time at the time of reversal , defined as , with the kinetic energy and the viscous dissipation rate .the energy is normalized by , the resolved energy at the time of reversal . in the following , , and are all normalized by .the evolution of the energy of the reversed case changes radically with respect to the unmodified flow. 
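The reversal experiment rests on the symmetry of the inviscid dynamics under $\vec{u}\to-\vec{u}$, $t\to-t$. A minimal illustration of that symmetry (ours, using Euler's rigid-body equations as a stand-in: a three-mode, quadratic, energy-conserving system with the same symmetry, not the Navier-Stokes DNS of the paper): integrating forward, flipping the sign of the state and integrating forward again retraces the earlier states, while adding a small linear damping destroys the reversal.

```python
import numpy as np
from scipy.integrate import solve_ivp

I1, I2, I3 = 1.0, 2.0, 3.0   # moments of inertia (any distinct positive values)

def rhs(t, w, nu=0.0):
    """Euler rigid-body equations, optionally with a linear 'viscous' damping nu."""
    w1, w2, w3 = w
    return [(I2 - I3) / I1 * w2 * w3 - nu * w1,
            (I3 - I1) / I2 * w3 * w1 - nu * w2,
            (I1 - I2) / I3 * w1 * w2 - nu * w3]

def reversal_error(nu):
    """Integrate forward, reverse the state, integrate forward again."""
    w0 = np.array([1.0, 0.1, 0.1])
    fwd = solve_ivp(rhs, (0.0, 5.0), w0, args=(nu,), rtol=1e-10, atol=1e-12)
    back = solve_ivp(rhs, (0.0, 5.0), -fwd.y[:, -1], args=(nu,), rtol=1e-10, atol=1e-12)
    # Zero if the (sign-flipped) initial state is recovered exactly.
    return np.linalg.norm(back.y[:, -1] + w0)

print("undamped:", reversal_error(0.0))    # near zero: the trajectory retraces itself
print("damped  :", reversal_error(0.05))   # much larger: dissipation breaks the reversal
```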
a closer look at small times , as displayed in the inset in fig .[ fig : e_gs ] , shows that the energy of the grid scales in the reversed case increases , following approximately the relation , as would be expected from reversible euler dynamics , until , at later times , it starts to decay again .the increase of energy corresponds to the energy which flows back from the small scales to the large scales since the energy - cascade is reversed .the main reason that the energy level does not reach its initial value is that some of the energy of the flow has been dissipated and this process is irreversible .however , for the large scales , at which the direct influence of the viscosity is weak , the flow behaves as if it were governed by the euler equations .a quantification of the energy flux and dissipation rates and in equation ( [ eq : dedt ] ) and ( [ eq : dedt-2 ] ) will be given in the following section .this time - reversibility property of the large scales of a turbulent flow is the most important observation of this investigation : the increase of energy in the grid scales is a genuine physical effect described by the navier - stokes equations .the fact that a model does allow an increase of energy , such as the dynamic model , is therefore not a criterion to reject it .the opposite question : _ should a model possess the property of reversibility to be a sound subgrid scale model ? _ is a different question and we will now focus on that . evidently in a large eddy simulationwe do not know the small scales .the relative insensitivity of the resolved scales on a change of the subgrid scales is the basic assumption of large eddy simulation . in the context of the time - reversibility property of turbulence , we test in this section several cases in which the resolved scales and the subgrid scales are modified independently . in addition to the normal and reversed cases discussed in the previous section , we investigate here 4 different cases . in two of themthe large scales are reversed but the small scales are either left unmodified or set to zero . in the other two cases the large scales are not reversed , but the small scales are again either left unmodified or set to zero .the 6 different cases which are considered are summarized in table [ tab : dns - param ] .note that the rn and rz ( and nr and nz ) cases are straightforwardly defined using a sharp cut - off filter in fourier - space .the extension to smooth filters would probably raise further questions , since smooth filters can be inverted .this extension is considered as outside the scope of the present investigation in which a sharp filter is used ..overview of the dns cases .the letter n denotes normal , r reversed and z zero .[ cols="^,<",options="header " , ] [ tab : dns - param ] from large to small scales for the different runs .see table 1 for definitions.,scaledwidth=50.0% ] in figure [ fig : e_gs-2 ] ( a ) the behavior of the large scale energy is shown for the different cases .in contrast to the previous results in which all scales were reversed ( the rr case ) , here none of the different cases displays a significant increase of the energy . however , in the cases rn , rz in which the large scales are reversed , the energy decay is slowed down compared to the unmodified case .the rz case decays more slowly than the rn case .indeed , small scales can act as a non - local eddy - viscosity on the larger scales , an effect which is absent when these scales are set to zero . 
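The decomposition used to define these cases is compactly expressed with a sharp spectral cut-off. The sketch below (ours, written in one dimension for brevity rather than for the 3-D fields of the paper) splits a field into resolved and subgrid parts at a cut-off wavenumber and assembles the initial conditions of the NN, RR, RN, RZ, NR and NZ experiments.

```python
import numpy as np

def split_scales(u, k_c):
    """Sharp spectral low-pass: resolved (|k| <= k_c) and subgrid parts of u."""
    u_hat = np.fft.rfft(u)
    k = np.fft.rfftfreq(u.size, d=1.0 / u.size)   # integer wavenumbers
    low = np.where(k <= k_c, u_hat, 0.0)
    return np.fft.irfft(low, n=u.size), np.fft.irfft(u_hat - low, n=u.size)

def make_case(u, k_c, case):
    """Initial condition for the NN, RR, RN, RZ, NR and NZ experiments."""
    u_gs, u_sgs = split_scales(u, k_c)
    gs = -u_gs if case[0] == "R" else u_gs
    sgs = {"N": u_sgs, "R": -u_sgs, "Z": np.zeros_like(u_sgs)}[case[1]]
    return gs + sgs

rng = np.random.default_rng(3)
u = rng.normal(size=256)                 # stand-in for one velocity component
for case in ("NN", "RR", "RN", "RZ", "NR", "NZ"):
    u_gs, u_sgs = split_scales(make_case(u, k_c=16, case=case), k_c=16)
    print(case, "grid-scale energy:", round(0.5 * np.mean(u_gs**2), 4),
          "subgrid energy:", round(0.5 * np.mean(u_sgs**2), 4))
```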
For the NR and NZ cases, in which the large scales are unmodified, the decay is not significantly altered by a reversal of the small scales. This last test can be considered a validation of one of the main assumptions of LES, i.e., the fact that the resolved scales (in a normal, non-reversed flow) are relatively insensitive to the subgrid-scale dynamics. We will not focus more on these two cases in the following. The evolution of the subgrid-scale energy for the cases NN, RR, RN and RZ is shown in fig. [fig:e_gs-2](b). The differences appear mainly in a limited range of times after the reversal. The energy of the RR and RN cases decreases very fast after reversal, since in addition to the viscous dissipation, which acts in all cases, the reversal of the grid-scale velocity leads to a reduction of the energy input to the subgrid-scale part. After some time, when the triple correlations around the cut-off are restored to transfer in the normal direction, this energy flows back into the subgrid scales, leading to a temporary energy increase for the RR and RZ cases. The energy flux from the resolved scales to the subgrid scales is shown in figure [fig:pi]. As expected, the energy flux at the time of reversal reverses for the RR case. For the RN and RZ cases the energy flux is strongly reduced by the reversal, but the flux is rapidly reestablished. The grid-scale and subgrid-scale dissipation rates are shown in figure [fig:eps]. In this figure it is observed that at the time of reversal the subgrid-scale dissipation is dominant, as is expected for moderate and high Reynolds numbers. For completeness, we show in figure [fig:tot](a) the total energy. Due to the normalization by the grid-scale energy (which allowed a better comparison of the grid-scale dynamics in figure [fig:e_gs-2]), only the RZ case has unity energy at the time of reversal, since the subgrid energy is zero in this case, whereas the other cases have a higher energy. The total dissipation [fig. [fig:tot](b)] behaves qualitatively similarly to both the subgrid dissipation and the subgrid energy. (Fig. [fig:e_gs-hv]: evolution of the resolved kinetic energy with the viscous term replaced by a fourth-order hyperviscous term.) In the present simulations the Reynolds number is moderate, and from figure [fig:eps] it can be concluded that the contribution of the grid-scale dissipation is non-negligible, amounting to an appreciable fraction of the total dissipation at the time of reversal. To evaluate the dependence on the Reynolds number, we would like to diminish the direct effect of the viscous dissipation on the resolved scales. We therefore calculated another set of flows, replacing the viscous term by a fourth-order hyperviscous draining term. Such a hyperviscous term concentrates the influence of the viscosity in a small range of wavenumbers, so that its direct influence on the large scales is reduced. In equation ([eq:dedt]) this means that the grid-scale dissipation becomes very small compared to the other terms in the equations; in the present case it is a small fraction of the total dissipation at the time of reversal. The results of the simulations are shown in figure [fig:e_gs-hv] for short times after the reversal. It is observed that the evolution of the resolved kinetic energy at small times is qualitatively similar. The RR case shows a complete reversal at small times. The RZ case shows a slow-down of the energy decay, as was observed in the normal viscous case.
the only qualitative difference is observed for the rn case which now shows a slight increase of the energy , an effect which is apparently sensitive to the small amount of dissipation which was present in the viscous run .let us digress a little and give the physical explanation for this increase .the energy transfer , analyzed in the fourier - domain , is governed by triple products of velocity modes of the form with wavevectors that can form triangles .the triple moments that are responsible for the transfer across the cut - off can be divided into two classes .one class consists of triangles with two legs of the triad shorter then and one longer , the other class consists of triangles with two legs of the triad longer then and one shorter ( see also figure 1 in reference ) . in the rn case the first class of triple products will remain unchanged , since two of the three velocity modes change sign so that the triple product does not change sign .the other class will change sign since only one of the three velocity modes changes sign .the balance between the two classes and their relative contributions to the forward and backward energy fluxes will now determine whether the resulting flux is positive or negative .in the present case , apparently , the resulting flux is slightly negative , but already a small amount of viscosity is enough to prevent this reversed flux from being visible . at a later time the rn case decays faster then the rz case , as in the viscous runs , due to the eddy - viscous effect of the small scales on the large scales which was already mentioned in the previous section . without focusing on the details , we only want to stress here that the precise form and location of the viscous dissipation do not qualitatively influence the behavior of the rr , and rz cases .the rn case changes and a small increase of energy is observed .the different behavior observed for the resolved scales when reversing the velocity in a part of the scales can be interpreted in two different ways .the first would be to point out the weakness of large eddy simulation , since apparently the large scales are not independent on the details of the small scales .however this interpretation would be disingenuous , since the concept of large eddy simulation in three - dimensional turbulence is intimately linked to the concept of a forward energy cascade .we would therefore prefer to point out the weakness of the criterion of time - reversibility to assess subgrid scale models .indeed , a model should not be rejected because it is time - reversible , since at short times the dynamics of the large scales of navier - stokes turbulence can be reversible .however , a model which does not display this property should not be rejected either , since even in cases in which the large scales are reversed , the energy of the large scales might not increase if the energy in the subgrid scales is not reversed as is observed here in the rz and ( the viscous ) rn case .with this in mind we will evaluate in section [ sec : les ] how different subgrid scale models behave when the velocity is reversed , but without judging on the validity of the models , which should be scrutinized using additional , less equivocal criteria .we perform in the present section the same test , reversing the large scales using different subgrid models , first the simplified czzs model ( [ eqczzs ] ) , second the smagorinsky model , eq .( [ eqsmag ] ) with fixed at .the first model is , as mentioned before , time - reversible , the 
second is not .the computational mesh has grid - points . as was shown by kraichnan , a constant ( non - scale dependent ) value for the eddy - viscosityis only a good approximation in the inertial range , far from the cut - off frequency .close to the cut - off , where the role of the model is most important , the value of the eddy - viscosity strongly increases .this effect can be corrected for by adding a scale dependent cusp to the model , as was applied by chollet and lesieur .this should be done in principle for _ all _ eddy - viscosity models . in the present workthis cusp is introduced by modifying the eddy - viscosity to two different reynolds number cases are considered .in the first one , the viscosity is set to zero , yielding , in the absence of , the time - reversible euler equation .the comparison of grid - scale energy is shown in fig . [fig : e_gs_les](a ) .we can observe that after reversal , the simplified czzs model yields an increase of energy , which is similar to what was observed in the rr case in the last part , before the irreversible influence of viscosity set in .the smagorinsky model remains decaying at the same rate as the normally decaying case for some time - steps after reversal .this phenomenon is not similar to any dns case in the last part , and stems from the fact that reversal leaves the value of the eddy - viscosity unchanged since is unchanged after reversal . for longer times , the decay rate decreases , since the direction of the resolved energy cascade changed sign .however the energy can not increase , since can not change sign . in order to determine the influence of the reynolds number on the dynamics, we considered a second decay - case , in which the molecular viscosity was not set to zero . in this case the ratio at the time of reversal , and the results are shown in fig .[ fig : e_gs_les](b ) . in this casethe behavior of the reversible model is very close to the dns result of the rr case , for both small and long times .the presence of a non - negligible amount of non - reversibility by the viscous stress , prevents the flow from developing an unlimited amount of energy .we want to stress however that the apparent success of the reversible model in the presence of non - zero viscosity in reproducing the dns results of the last section is fortituous and depends on the reynolds number . in other words, it might not always be easy to foresee which amount of viscosity is needed to avoid non - physical effects . at high reynolds numbers the behavior might ressemble more the inviscid behavior shown in fig . [fig : e_gs_les](a ) .the underlying issue here is that a subgrid model needs to reproduce two distinct features of the subgrid scales .the first is to drain the energy from the large scales .the second is to dissipate this energy .the dynamic model only forefills the first task , which corresponds to the reversible interaction between the small and the large scales , but does not dissipate the energy . for the sake of completeness , we also test two other models , which might be of practical use if one does not want to worry about the amount of viscosity needed to avoid a reversible model to reinject unphysical amounts of energy in the system . for the first one ( model r0 in the following ) we use the simplified czzs model in which we reverse the velocity , but fix all negative viscosity as zero , _i.e. _ in real practice , this clipping procedure was widely used in the time - reversible models to obtain numerical stability . 
for the second one ( that we denote mix in the following ) we follow the strategy of defining a mixed model as suggested by vreman __ and where the additional coefficient is used to guarantee the consistency with non - reversed turbulence .these mixed models often lead to good results in real applications , but their formulation is not supported by theoretical or physical arguments . shown in figure [ fig : e_gs_les-2 ] is the behavior of the models defined in expressions ( [ model : r0 ] ) and ( [ model : mix ] ) .we observe that their behavior closely resembles the rn and rz cases in fig .[ fig : e_gs ] , where grid - scale energy decay is reduced during a short time , and then decays normally .we can therefore conclude that the models ( [ model : r0 ] ) and ( [ model : mix ] ) represent a physical behavior , corresponding to a certain class of flows . as we argued in the previous section, we can not use the present results to assess the models .the backflow of energy is not unphysical but it is a phenomenon which is not observed in all possible flows in which the resolved scales are reversed .if in a particular application one aims at the prediction of a time - reversed flow without risking an unlimited amount of backscatter , one can use one of the models given by ( [ model : r0 ] ) and ( [ model : mix ] ) . a more sophisticated , but also more physical , procedure was proposed by ghosal __ by basing the flux of energy to the small scales on the subgrid scale energy , which was computed using a transport equation for the reynolds averaged subgrid scales .this procedure implicitly assumes that the net energy flux from the resolved scales to the subgrid scales is determined by the subgrid energy .another solution would be to introduce a cascade time , based on the energy around the cut - off , which limits the time during which the model remains time - reversible .however , all these fixes are only needed if one wants to be able to take into account the time - reversibility of the resolved scale turbulence .the _ gedanken_-experiment in which the velocity at each point in a turbulent flow is reversed can be carried out in numerical simulations , and this is what we performed in the study presented in this work .the goal of this work was not to judge particular subgrid scale models or even the whole concept of les using the criterion of time - reversibility , but rather to judge the criterion itself .our conclusion is then the following : the property of time - reversibility alone is not an unequivocal criterion to reject or qualify subgrid models .we base this judgment on two observations .the first one is that at short times the energy of the resolved scales increases in a reversed flow , as would be the case for a flow governed by the ( truncated ) euler equations .this observation alone could be used to argue that subgrid models should be , at least partly , reversible , and to reject subgrid models which do not possess this property .however , a second observation in the present work showed that if we reverse the large - scales but do not modify the subgrid scales or set them to zero , the large - scale energy does not necessarily increase .this second observation shows that , even if a model can not increase the energy of the resolved scales , it still corresponds to a certain class of flows , and the model can not be rejected .however , it can not be excluded that the invariance of a model with respect to the reversal of the velocity will not have other consequences when regarding 
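The sign behavior that separates these closures can be summarized in a few lines. In the sketch below (ours, one-dimensional and schematic: the coefficients are placeholders, and the structure-function closure is written simply as $\nu_t\propto-\Delta\,S_3/S_2$ to capture only its sign-carrying character, not the exact CZZS formula), reversing the velocity leaves the Smagorinsky viscosity unchanged, flips the sign of the structure-function-based viscosity, and sends the clipped variant, as in the R0 fix above, to zero.

```python
import numpy as np

def structure_functions(u, r=1):
    """Second- and third-order longitudinal structure functions at separation r."""
    du = np.roll(u, -r) - u
    return np.mean(du**2), np.mean(du**3)

def nu_smagorinsky(u, delta=1.0, cs=0.18):
    """Smagorinsky: nu_t = (Cs*Delta)^2 |S|; in 1-D we use |du/dx| for |S|."""
    return (cs * delta) ** 2 * np.mean(np.abs(np.gradient(u)))

def nu_structure_function(u, delta=1.0, c=0.1):
    """Placeholder structure-function closure, nu_t ~ -Delta*S3/S2 (sign-carrying)."""
    s2, s3 = structure_functions(u)
    return -c * delta * s3 / s2

def nu_clipped(u, **kw):
    """The 'R0' fix: clip negative eddy viscosities to zero."""
    return max(nu_structure_function(u, **kw), 0.0)

# A shock-like sawtooth signal, whose increments are skewed (nonzero S3).
x = np.linspace(0.0, 1.0, 512, endpoint=False)
u = (8 * x) % 1.0 - 0.5
for name, nu in [("smagorinsky", nu_smagorinsky),
                 ("structure-fn", nu_structure_function),
                 ("clipped (R0)", nu_clipped)]:
    print(f"{name:13s}  nu(u) = {nu(u):+.2e}   nu(-u) = {nu(-u):+.2e}")
# Reversal leaves the Smagorinsky value unchanged, flips the sign of the
# structure-function value, and sends the clipped value to zero.
```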
other diagnostics .for some practical purposes , in which the user is only interested in a model that drains a sufficient amount of energy without compromising the stability of the simulation , time - reversibility will probably continue to be regarded as a possible criterion to reject a subgrid model .we claim here , on the basis of the present results , that considering the detailed flow physics , this criterion is unsuitable .this unsuitability to use the criterion of time - reversibility to assess subgrid scale models for les is inherent to the basic assumption of large eddy simulation .if les is to be usable at all , some _ a priori _ assumption by the user should be made about the property of the cascade of energy .this cascade is in general towards the small scales in three - dimensional turbulence , and in this case the user of les should accept that some physics are not captured by his simulations or he should give some input about the unknown scales to the model .if this is not satisfactory in some applications , les , in its present form , might simply not be the adequate tool to study these particular applications .more sophisticated approaches might then be needed , in which the direction of the energy flux between scales can be determined as a function of resolved flow parameters or models in which the scales are not arbitrarily divided into large and small scales .the authors acknowledge interaction with grgoire winckelmans and robert rubinstein .the authors also acknowledge an anonymous reviewer of a previous manuscript , who , by asking a question about the time - reversibility of subgrid - models , triggered the current investigation .l. shao acknowledges support from buaa sjp 111 program ( grant no .
Among existing subgrid-scale models for large-eddy simulation (LES), some are time-reversible in the sense that the dynamics evolve backwards in time after a reversal of the velocity at every point in space. In practice, reversible subgrid models reduce the numerical stability of the simulations, since the effect of the subgrid scales is no longer strictly dissipative. This lack of stability often constitutes a criterion for rejecting this kind of model. The aim of this paper is to examine whether time-reversibility is a criterion that a subgrid model should, or should not, fulfill. To this end, we investigate by direct numerical simulation the time dependence of the kinetic energy of the resolved scales when the velocity is reversed in all, or part, of the length scales of the turbulent flow. These results are compared with results from existing LES subgrid models. It is argued that the criterion of time-reversibility for assessing subgrid models is incompatible with the main underlying assumption of LES.
the prediction of the future state of a system, given its actual state, is a fundamental problem with obvious applications in geophysical flows (leith, 1975; leith and kraichnan, 1972; leith, 1978). there are many limitations to the ability of predicting the state of a geophysical system, e.g. the atmosphere; one of the most important is the lack of knowledge, or the difficulty of full implementation, of the equations of motion. still, even if one assumes perfect knowledge of the system and the availability of sufficiently large computers, the predictability can be severely limited by the dynamics itself, i.e. the `` intrinsic unpredictability '' of a system, which is the subject of our study. a well known, and very popular, example of a low-predictability system is given by a chaotic system (lorenz, 1963). by definition, chaotic dynamical systems display sensitive dependence on initial conditions: two initially close trajectories will diverge exponentially in phase space at a rate given by the leading lyapunov exponent $\lambda$ (see eckmann and ruelle, 1985). because the initial condition can be measured only with a finite uncertainty $\delta$, we can know the future state of the system within a tolerance $\Delta$ only up to a maximum predictability time $T_p \simeq \lambda^{-1}\ln(\Delta/\delta)$ ([eq:1]). one important consequence of equation ([eq:1]) is that the predictability time has a very weak dependence on the precision of the initial condition and on the tolerance; therefore the predictability time is an intrinsic quantity of the system, as the lyapunov exponent is. the naive formula ([eq:1]) for the predictability problem holds only for infinitesimal perturbations and in non-intermittent systems; in the general case one faces a series of problems and subtle points which have been the object of several studies in recent years (crisanti et al., 1993; aurell et al., 1996, 1997). one delicate issue is particularly relevant for our present study: although the lyapunov exponent for the atmosphere (as a whole) is presumably rather large (due to the small scale turbulence), the large scale behavior of the system can be forecasted with good accuracy for several days (lorenz, 1969; lorenz, 1982; simmons et al., 1995). the apparent paradox comes from the identification of the predictability time with the inverse of the lyapunov exponent based on equation ([eq:1]), which is actually of little relevance even in dynamical systems with few degrees of freedom. indeed, in the presence of different characteristic time scales, as is the case in any realistic model of geophysical flows, the lyapunov exponent will be roughly proportional to the inverse of the smallest characteristic time. this time is associated with the smallest, low-energy-containing scales which, after their fast saturation, no longer play a role in the error growth law. large errors will grow, in general, with the characteristic time of the largest, energy-containing scales (leith, 1971; leith and kraichnan, 1972). thus when the initial error is not very small, as is often the case in a predictability experiment, the leading lyapunov exponent may play no role at all.
to be more quantitative , in this paperwe investigate the predictability problem in two time scale dynamical systems .we apply a recently introduced generalization of the lyapunov exponent to finite perturbations .we will show that the finite size lyapunov exponent ( fsle ) is more suitable for characterizing the predictability of complex systems where the growth rate of large errors in not ruled by the lyapunov exponent .the models considered here are crude approximations of a realistic geophysical flow also because both the subsystems have a single time scale. it would be interesting to extend the investigation to more realistic situations and comparing the latter case with present results .this remaining of the paper is organized as follows : in section [ sec:2 ] we introduce the finite size lyapunov exponent which is applied to the system models in section [ sec:3 ] .section [ sec:4 ] is devoted to conclusions .the notion of lyapunov exponent is based on the average rate of exponential separation of two infinitesimally close trajectory in the phase space : where is the distance between the trajectories with a suitable norm and the two limits can not be interchanged .the standard algorithm ( benettin et al . , 1980 ) for computing the lyapunov exponent is based on ( [ eq : b1 ] ) , with the trick of periodical rescaling of the two trajectory in order to keep their distance `` infinitesimal '' . as already discussed in the previous section ,the second limit in ( [ eq : b1 ] ) is of dubious interest in the predictability problem because the initial incertitude on the system variables is in general not infinitesimal .therefore one would like to relax the infinitesimal constrain still maintaining some well defined mathematical properties .recently , a generalization of ( [ eq : b1 ] ) which allows to compute the average exponential separation of two trajectories at finite errors have been introduced .the finite size lyapunov exponent , , is based on the concept of error growing time which is the time it takes for a perturbation of initial size to grow of a factor .the ratio should not be taken too large , in order to avoid the growth through different scales .the error growing time is a fluctuating quantity and one has to take the average along the trajectory as in ( [ eq : b1 ] ) .the finite size lyapunov exponent is then defined as where denotes the natural measure along the trajectory and is the average over many realizations . for an exhaustive discussion on the way to take averages , see aurell et al .( 1997 ) . in the limit of infinitesimal perturbations, , definition ( [ eq : b2 ] ) reduces to that of the leading lyapunov exponent ( [ eq : b1 ] ) . in practice, displays a plateau at the value for sufficiently small . to practically compute the fsle, one has first to define a series of thresholds , and to measure the time that a perturbation with size takes to grow up to .the time is obtained by following the evolution of the perturbation from its initial size up to the largest threshold . this is done by integrating two trajectories of the system that start at an initial distance . 
in general, one must take , in order to allow the direction of the initial perturbation to align with the most unstable phase - space direction .the fsle , , is then computed by averaging the error growing times over several realizations according to ( [ eq : b2 ] ) .note that the fsle has conceptual similarities with the -entropy ( kolmogorov , 1956 ; see also gaspard and wang , 1993 ) .this latter measures the bandwidth that is necessary for reproducing the trajectory of a system within a finite accuracy .the -entropy approach has already been applied to the analysis of simple systems and experimental data ( gaspard and wang , 1993 ) , giving interesting results .the calculation of the -entropy is , however , much more expensive from a computational point of view and of little relevance for the predictability problem .the computation of the fsle gives information on the typical predictability time for a trajectory with initial incertitude . to be more quantitative, one can introduce the average predictability time from an initial error to a given tolerance as the average error growing time , i.e. which reduces to ( [ eq:1 ] ) in the case of constant . from general considerations ,one expects that is a decreasing function of and thus ( [ eq : b3 ] ) gives longer predictability time than ( [ eq:1 ] ) .we now discuss the application of the fsle analysis to two relatively simple dynamical systems presenting different characteristic time scales .the proposed models are of little physical relevance ; they should rather be intended as prototypical models for the predictability problem in complex flows .the first example is obtained by coupling two lorenz models ( lorenz , 1963 ) , the first representing the slow dynamics and the second the fast dynamics the choice of the form of the coupling is constrained by the physical request that the solution remains in a bounded region of the phase space . since the trajectory is enough far from the origin , one has that it evolves in a bounded region of the phase space .the parameters have the values , and , the latter giving the relative time scale between the fast and slow dynamics .the two rayleigh numbers are taken different , and , in order to avoid sincronization effects . with the present choice, the two uncoupled systems ( ) display chaotic dynamics with lyapunov exponent and respectively and thus a relative intrinsic time scale of order . by switching on the couplings and we obtain a single dynamical system whose maximal lyapunov exponent is close ( for small couplings ) to the lyapunov exponent of the faster decoupled system ( ) .we will consider a single realization of the couplings , with and .the global lyapunov exponent is found to be in this case which is indeed close to in the uncoupled case . with the present choice of the couplings , the fast dynamicsis driven by means of the effective rayleigh number and one recognize in the time evolution the slow - varying component of the driver ( see figure [ fig1 ] ) . 
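to make the threshold procedure concrete, the sketch below implements the fsle recipe just described for a pair of coupled lorenz systems of the type introduced above. it is only an illustration: the rayleigh numbers, couplings, time-scale ratio, integration step, threshold ratio and number of realizations are placeholder values chosen here, not the ones used in the experiments reported below, and the averaging prescription is the simple one, ln(rho) divided by the mean error-growing time between successive thresholds.

```python
import numpy as np

# Illustrative coupled Lorenz-63 systems: a slow subsystem x and a fast
# subsystem y.  All parameter values below are placeholders (assumed), not the
# ones used in the paper's experiments.
SIGMA, B = 10.0, 8.0 / 3.0
R_SLOW, R_FAST = 28.0, 45.0      # two different Rayleigh numbers (assumed)
EPS_X, EPS_Y = 1e-2, 1e-2        # weak couplings (assumed)
TS = 10.0                        # fast subsystem runs TS times faster (assumed)

def rhs(s):
    x, y = s[:3], s[3:]
    dx = np.array([SIGMA * (x[1] - x[0]),
                   R_SLOW * x[0] - x[1] - x[0] * x[2] + EPS_X * y[1],
                   x[0] * x[1] - B * x[2]])
    dy = TS * np.array([SIGMA * (y[1] - y[0]),
                        R_FAST * y[0] - y[1] - y[0] * y[2] + EPS_Y * x[1],
                        y[0] * y[1] - B * y[2]])
    return np.concatenate([dx, dy])

def rk4(s, dt):
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def fsle(n_real=50, delta0=1e-6, rho=2.0, n_thresh=20, dt=1e-3):
    """Finite size Lyapunov exponent via error-growing times between thresholds."""
    thresholds = delta0 * rho ** np.arange(n_thresh + 1)
    times = np.zeros((n_real, n_thresh))
    for r in range(n_real):
        ref = np.random.randn(6)
        for _ in range(20000):                      # discard an initial transient
            ref = rk4(ref, dt)
        pert = ref + (delta0 / 100.0) / np.sqrt(6)  # start below delta0 so the
        while np.linalg.norm(pert - ref) < delta0:  # perturbation aligns with the
            ref, pert = rk4(ref, dt), rk4(pert, dt) # most unstable direction first
        t, level = 0.0, 0
        while level < n_thresh:
            ref, pert = rk4(ref, dt), rk4(pert, dt)
            t += dt
            if np.linalg.norm(pert - ref) >= thresholds[level + 1]:
                times[r, level] = t                 # error-growing time for this level
                t, level = 0.0, level + 1
    # lambda(delta_n) ~ ln(rho) / <tau(delta_n, rho * delta_n)>
    return thresholds[:-1], np.log(rho) / times.mean(axis=0)
```

restricting the norm in the threshold test to the slow components of the state vector is a one-line change and gives the slow-component analysis discussed in the following paragraphs.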
forwhat concern the predictability , one expect reasonably that for small coupling the slow component of the system remains predictable up to its own characteristic time .on the other hand , for any coupling we obtain a single dynamical system in which the errors grow with the leading lyapunov exponent .the apparent paradox stems from saturation effects which becomes apparent as soon as one is interested in non infinitesimal errors .we have integrated two trajectories of ( [ eq : c1 ] ) starting from very close initial conditions .one trajectory represents the `` true '' ( reference ) trajectory and the other is the forecast ( perturbed trajectory ) subjected to an initial error .the error is computed here by means of the euclidean distance in the phase space ^{1/2 } \label{eq : c3}\ ] ] figure [ fig2 ] reports the results for the error growth averaged over experiments with and .we observe that the relative magnitude of the initial errors is irrelevant for what concerns small errors because the error direction in the phase space will be rapidly aligned toward the most unstable direction . for small times ( ) , both the errors can be considered infinitesimal and the growth rate is thus given by the global lyapunov exponent .this is the linear regime of the error growth in which the lyapunov exponent is the relevant parameter for the predictability . for larger times, the fast component of the error , , reaches the saturation , the trajectories separation evolves according to the full non linear equations of motion and the growth rate for the slow component is strongly reduced . from figure [ fig2 ] oneobserves that the slow component error is still well below the saturation value , and grows with a rate close to its characteristic inverse time .we now apply the fsle algorithm to the slow component of the the error , ( figure [ fig3 ] ) .we define a series of thresholds starting with and ratio .the results presented ( figure [ fig3 ] ) are obtained after averaging over realizations .for very small , the fsle recovers the leading lyapunov exponent , indicating that in small scale predictability the fast component has indeed a dominant role .as soon as the error grows above the coupling , drops to a value close to and the characteristic time of small scale dynamics is no more relevant . 
in figure [ fig4 ]we plot the slow component predictability time ( [ eq : b3 ] ) for a fixed initial error as a function of the tolerance .we observe , as expected , an enhancement of as soon as one accepts a tolerance larger than the typical fast component fluctuation in the slow time series .observe that the naive application of ( [ eq:1 ] ) would heavily underestimate the predictability time for large tolerance ( dashed line ) .we now consider the second example .it is a more complex system introduced by lorenz ( lorenz , 1996 ) as a toy model for the atmosphere dynamics which includes explicitly both large scales ( synoptic scales , slow component ) and small scales ( convective scales , fast component ) .the apparent paradox described above can be reformulated here by saying that a more refined atmosphere model ( which is able to capture the small scale dynamics ) would be less predictable of a rougher one ( which resolve only large scale motion ) and thus the latter should be preferred for numerical weather forecasting .we will see that also in this case , the effect of the small , fast evolving , scales becomes irrelevant for the predictability of large scale motion if one consider large errors .the model introduces a set of large scale , slow evolving , variables and small scale , fast evolving , variables with and . as in ( lorenz , 1996 )we assume periodic boundary conditions on ( , ) while for we impose .the equation of motion are in which again represent the relative time scale between fast and slow dynamics and is a parameter which controls the relative amplitude .let us note that ( [ eq : c4 ] ) has the same qualitative structure of a finite mode truncation of navier - stokes equation , with quadratic inertial terms and viscous dissipation .the coupling ( with unit strength ) is chosen in order to have the `` energy '' conserved in the inviscid , unforced limit .the forcing term drives only the large scales and we will consider which is sufficient for developing chaos .we have performed the computation of the fsle for system ( [ eq : c4 ] ) with parameters as in ( lorenz , 1996 ) : , , implying that the typical variable is 10 times faster and smaller than the variable . in this casewe choose to adopt for measuring the errors the global euclidean norm on both the slow and fast variables ( energy norm ) : this is for mimic a realistic situation in which we are not able to recognize _ a priori _ the slow component in the system .the result of the fsle computation is displayed in figure [ fig5 ] after averaging over realizations with initial error .we set thresholds with and ratio .for very small errors we observe the saturation of to the leading lyapunov exponent of the system . 
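for readers who want to experiment with this second example, the sketch below writes down a right-hand side for the two-scale lorenz (1996) model in its commonly used form. the sizes and parameters (number of slow and fast variables, forcing, amplitude and time-scale factors) are the values usually quoted for this model and are consistent with the fast variables being roughly ten times faster and smaller, but the exact normalization of the unit-strength coupling used in the paper may differ, so this is an assumption-laden illustration rather than the paper's exact system.

```python
import numpy as np

# Two-scale Lorenz '96 toy model: K slow variables x_k, each coupled to J fast
# variables y_{j,k}, with the fast variables arranged on one cyclic chain so
# that y_{J+1,k} = y_{1,k+1}.  K, J, F, b, c, h below are assumed values.
K, J = 36, 10
F = 10.0                 # forcing on the large scales only (assumed)
B, C = 10.0, 10.0        # relative amplitude and relative time scale (assumed)
H = 1.0                  # coupling strength (assumed normalization)

def l96_rhs(x, y):
    """x: shape (K,); y: shape (K*J,), with y[k*J + j] = y_{j,k}."""
    ysum = y.reshape(K, J).sum(axis=1)               # sum of fast modes per x_k
    dx = (np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2))
          - x + F - (H * C / B) * ysum)
    dy = (-C * B * np.roll(y, -1) * (np.roll(y, -2) - np.roll(y, 1))
          - C * y + (H * C / B) * np.repeat(x, J))
    return dx, dy

def energy_norm(ex, ey):
    """Euclidean 'energy' norm of an error over both slow and fast variables."""
    return np.sqrt(np.sum(ex**2) + np.sum(ey**2))
```

any standard time stepper (for instance a runge-kutta step analogous to the one in the previous sketch) can be used to integrate the two trajectories, and the same threshold procedure then yields the fsle in the energy norm.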
for errors larger than the typical r.m.s .value of the fast variables ( ) we observe a second plateau at , corresponding to the inverse characteristic time of large scales .we observe that the relative time scale between fast and slow motions as computed by the fsle is slightly larger than the value of the parameter .we think that this effect is due to coupling which here can not be assumed small as in the previous example .in figure [ fig6 ] we plot the predictability time ( [ eq : b3 ] ) for fixed initial error and different thresholds .as in the previous example , we observe an enhancement of the predictability time for large tolerance with respect to the lyapunov exponent estimation .for large initial errors ( as it is usually the case in numerical weather forecasting ) the predictability time is thus independent of the lyapunov exponent .we have shown that in systems with possess different characteristic time scales , the predictability time can be an independent quantity of the leading lyapunov exponent .the latter is usually associated to the faster characteristic time and dominates the exponential growth of infinitesimal errors .large errors will evolve in general with large scale characteristic time which thus rules large scale predictability .we have introduced a generalization of the lyapunov exponent which allows to compute the average exponential error growth at a given error size .the finite size lyapunov exponent is expected to converge at the leading lyapunov exponent for very small errors . for larger errors, is decreasing with and thus the fsle analysis predicts an enhancement of the predictability time as observed in several numerical experiments .we illustrate these concepts on two model examples which possess different characteristic timescales .the numerical computation of the fsle confirms the predictability enhancement with respect to the lyapunov analysis .our results have a general significance which exceeds the proposed models .in particular , whenever one can identify in the system different features with different intrinsic time scales , one expects that slow varying quantities ( i.e. large scale features ) are predictable longer than fast evolving quantities .moreover , our results demonstrate that the estimation of the predictability time for a large scale circulation model do not require to resolve the small scale dynamics .this paper stems from the work of giovanni paladin , who has been tragically unable to see its conclusion .we dedicate this paper to his memory .g. boffetta thanks the `` istituto di cosmogeofisica del cnr '' , torino , for hospitality and support .this work has been partially supported by the cnr research project `` climate variability and predictability '' .benettin , g. , galgani , l. , giorgilli , a. and strelcyn , j.m .lyapunov characteristic exponent for smooth dynamical systems and hamiltonian systems ; a method for computing all of them ._ meccanica _ * 15 * , 9 .crisanti , a. , jensen , m.h . ,paladin , g. and vulpiani , a. 1993 .predictability of velocity and temperature fields in intermittent turbulence ._ j. phys .* 26 * , 6943 .intermittency and predictability in turbulence .lett . _ * 70 * , 166 .
the predictability problem for systems with different characteristic time scales is investigated. it is shown that even in simple chaotic dynamical systems, the leading lyapunov exponent is not sufficient to estimate the predictability time. this is due to the saturation of the error on the fast components of the system, which therefore no longer contribute to the exponential growth of the error once the error is large. it is proposed to adopt a generalization of the lyapunov exponent which is based on the natural concept of the error growing time at finite error size. the method is first illustrated on a simple numerical model obtained by coupling two lorenz systems with different time scales. as a more realistic example, this analysis is then applied to a toy model of atmospheric circulation recently introduced by lorenz.
the problem of leak detection in water distribution networks ( wdns ) is of significant importance for effective management and water quality control .leaky distribution systems are inefficient due to water loss , energy wastage , and unreliable water quality : especially in case of underground leaks .these effects are even more pronounced in urban centers of developing countries where the networks are poorly instrumented and maintained . in case of wdns , leaks or lossesare quantified using unaccounted - for water ( ufw ) .high levels of ufw are detrimental to financial viability of the system .losses in wdns are a combined effect of real losses like leaks in pipes or joints , as well as other means like water thefts and unauthorized consumption . given the growing concern towards uncertainty in quality water supplies , the problem of leak detection and control has grown in importance .various techniques based on acoustic methods and magnetic flux leakage are available to determine the location of defect ( either small defect like corrosion , or large leaks ) in a single pipe. however , these methods could be time consuming , expensive , or disruptive in nature .thus , it is beneficial to use these techniques after narrowing down the leak to a small part of the network .one approach to leak detection involves the use of hydraulic models and simulators .available measurements are used to estimate the location of a leak which match the sensor measurements closely .this method is generally called inverse analysis and requires solving a large optimization problem . in order to use this approach , measurements of flow rates and pressures at a large number of intermediate locationsare required , in addition to source pressure and demand flows . in well instrumented networks ,some flow and pressure sensors are installed for the purpose of district metered area ( dma ) sectorization , but these are few in number .a more severe limitation of pressure - reading based methods is that predictions depend on precise estimates of model parameters like pipe friction factors , which are difficult to obtain .practical applicability of this method to large scale networks have proven to be a hard task , as reported by some researchers . 2 + ( a ) network with a leaky node + ( b ) querying a set of pipes in order to overcome the above difficulties , and to explore a new line of research ,we propose a method for leak detection which uses only flow measurements that are repeatedly performed on - demand in field campaigns .we call this process of obtaining flow measurement in a pipe ( on - demand ) as * querying * the pipe for flow .further , since the only property of leak we exploit is loss of material ( water ) , the method is equally applicable to any form of loss including thefts - which is not the case for hydraulic model based methods .even though we show results of our method on wdns , the method itself is much more general and pertains to any distribution system obeying conservation laws .to briefly illustrate the idea , consider the network shown in fig .let us say that some node in this network is leaky , and our objective is to find it . by querying the edges in red in fig .1(b ) , we can trace the leak to either of the two parts of the network ( shown in blue and green ) .this is possible by exploiting water balance ( or conservation laws in general ) as will be shown in subsequent sections . 
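the water-balance test behind this idea can be written in a few lines. the sketch below is a minimal illustration with hypothetical function names and units: given the measured flows on the queried cut pipes (signed into a sub-network), the source inflows and the metered demands inside it, a positive imbalance beyond the measurement noise indicates that the loss lies inside that sub-network.

```python
def subnetwork_imbalance(cut_flows_in, source_inflows, metered_demands):
    """Net water unaccounted for inside a sub-network.

    cut_flows_in   : measured flows on the queried cut pipes, signed positive
                     when water enters the sub-network (hypothetical units, e.g. L/s)
    source_inflows : flows injected by sources located inside the sub-network
    metered_demands: consumer withdrawals inside the sub-network
    """
    inflow = sum(cut_flows_in) + sum(source_inflows)
    outflow = sum(metered_demands)
    return inflow - outflow          # > 0 (beyond noise) indicates a loss inside


def leaky_side(imbalance_a, imbalance_b, tol=0.5):
    """Decide which of two sub-networks contains the leak; tol absorbs meter noise."""
    if imbalance_a > tol >= imbalance_b:
        return "A"
    if imbalance_b > tol >= imbalance_a:
        return "B"
    return "inconclusive (re-check measurements or tolerance)"
```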
by performing this operation repeatedly, we can arrive at a small part of the network which contains the leak .querying a pipe requires access to it , which may be buried underground at a depth of about two meters .hence , there is a non - negligible cost associated with every query .therefore , it is important to minimize queries ( or query cost ) , which requires a strategic field campaign .an ideal field campaign should possess the following characteristics : ( i ) it should be systematic and arise out of a clear objective ; ( ii ) it should scale to large sectors or the whole network in absence of dmas ; ( iii ) must be capable of assimilating information from other sources ( like existing sensors ) ; ( iv ) should be optimal , requiring only few queries . an algorithmic solution to development of such a field campaignis the subject of this paper .before the formal problem setup , we briefly review the basics of algebraic graph theory relevant to this work . specifically , we review the representation of wdns as graphs and matrices , and survey relevant properties .see chapter 7 of for further discussion .* definition 2.1 * a graph is a tuple comprising the set of vertices and edges which are 2-element subsets of .the number of vertices and edges in the graph are and respectively .the graph could be directed or undirected .we use the following terms interchangably to suit the particular context : graph and network ; vertices and nodes ; edges , links , and pipes .the nodes of the network can be classified as source nodes where water is fed into the network , demand nodes or sink nodes where water is removed from the network for supplying to the consumers , and transmission nodes which aid in redistributing the flows .the edges of the network represent the pipes of the wdn .we choose an _ undirected graph _ representation for the network .however , we associate a sign convention with each edge to help identify the direction of flow .flow will be negative if it is in the opposite direction to the chosen sign .* definition 2.2 * the adjacency matrix is defined by the relationship : if nodes i and j are connected by a pipe and 0 otherwise .* definition 2.3 * the directed incidence matrix is defined by the relationship : the sign convention for can in fact be chosen arbitrarily and the above assignment is only one particular choice . *definition 2.4 * the degree of node , is the number of edges incident on the node and denoted by .the degree matrix is a diagonal matrix containing the degree of each node along the diagonal entries , i.e. , and .* definition 2.5 * the laplacian ( ) of a graph is defined by the relationship , where and are the degree and adjacency matrices , respectively .the adjacency and incidence matrices characterize the network completely .the other matrices can be computed with their knowledge .we also review some useful properties of these matrices . *property 2.1 * the laplacian matrix is positive semi - definite . *property 2.2 * the smallest eigenvalue of the laplacian matrix is 0 . the vector ^t ] we can write where is the vector of projections onto . from property 3.2 , we have , and . the constraint is equivalent to with this change of variables , the optimization problem in ( 6 ) after relaxation becomes : with .the above problem can be solved analytically as follows : 1 . 
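definitions 2.2-2.5 translate directly into code. the short numpy sketch below builds the adjacency, directed incidence, degree and laplacian matrices from an edge list and checks properties 2.1-2.2 (the laplacian is positive semi-definite and annihilates the all-ones vector) on a small ring network; the sign convention of the incidence matrix is one arbitrary choice, as noted in definition 2.3.

```python
import numpy as np

def graph_matrices(n_nodes, edges):
    """Build A, B (directed incidence), D and L = D - A from an undirected edge list.

    edges: list of (i, j) node pairs; the incidence-matrix signs follow the
    arbitrary orientation i -> j of each listed edge.
    """
    A = np.zeros((n_nodes, n_nodes))
    B = np.zeros((n_nodes, len(edges)))
    for e, (i, j) in enumerate(edges):
        A[i, j] = A[j, i] = 1
        B[i, e], B[j, e] = 1, -1        # one arbitrary sign convention per edge
    D = np.diag(A.sum(axis=1))
    L = D - A
    return A, B, D, L

# Sanity checks on a 4-node ring: L = B B^T, L is PSD, and L @ 1 = 0.
A, B, D, L = graph_matrices(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert np.allclose(B @ B.T, L)
assert np.allclose(L @ np.ones(4), 0)
assert np.all(np.linalg.eigvalsh(L) >= -1e-12)
```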
if , then and 2 .if , then and the first solution indicates that if cut - cost is significantly more than cost associated with size disparity in partitions , the obvious solution is to not partition at all . this solution is trivial and is discarded .the second solution indicates that if cost associated with disparity is more than a certain threshold , then the solution is to partition such that .this suggests the assignment choice as where is the eigenvector corresponding to the second smallest eigenvalue , also known as the fiedler vector . since is orthogonal to , is non - trivial . in order to obtain an integer solution , we employ a simple round off procedure to obtain the solution that is consistent with problem specifications , and also maximizes .the final solution is : note that the above solution maximizes which is only an approximation of the original problem . in order to minimize the true problem ( 6 ), we need to consider the relative magnitudes of the different eigenvalues which is possible only in a combinatorial setting .in fact , it is this approximation that enables us to arrive at a computationally tractable solution . partitioning based on entries of fiedler vectoris known by the name of spectral bisection and is known to produce skewed partitions .this problem can be tackled by explicitly imposing a goal programming constraint as shown in figure 4 .we sort the entries of in ascending order , and normally assign partitions based on sign of the entry corresponding to each node . if we get skewed partitions , we can cut - off the partitions at the threshold defined by the minimum partition sizes .this is shown schematically in fig.[fig : approx_alg ] .this assignment ensures that is maximized when adhering to the partition size constraints .this is because there is a fixed number of sign mismatches that would occur between and which reduces the value of from its maximum possible value . 
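a minimal sketch of this fiedler-vector bisection with the size cut-off is given below; it can be applied, for instance, to the laplacian built in the previous sketch. the minimum partition size is a user-supplied placeholder, and edge query costs or node weights discussed elsewhere in the paper are not included here.

```python
import numpy as np

def spectral_bisection(L, min_size):
    """Split nodes into two groups using the Fiedler vector of Laplacian L.

    Nodes are sorted by their Fiedler-vector entries; the nominal split is at
    the sign change, but it is pushed inward whenever either side would fall
    below min_size (the cut-off described in the text).
    """
    eigvals, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]                        # eigenvector of 2nd smallest eigenvalue
    order = np.argsort(fiedler)                    # nodes sorted by their entries
    n = len(fiedler)
    split = np.searchsorted(fiedler[order], 0.0)   # nominal split at the sign change
    split = min(max(split, min_size), n - min_size)  # enforce minimum partition sizes
    labels = np.ones(n, dtype=int)
    labels[order[:split]] = -1                     # -1 / +1 labels for the two parts
    return labels
```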
by sorting and assigning nodes to partitions such that sign mismatches always occur with of least magnitude, the maximum possible value of is achieved in presence of the partition size constraint .* remark : * while we have presented two methods here ( ilp and approximation scheme ) which work well for the target application as seen through case studies , researchers have attempted other approximation algorithms , and the field of graph partitioning is very rich in literature .some of these methods employ the use of semi - definite programming and randomized algorithms .we do not present the results of these algorithms since the size of benchmark networks considered in this paper were not large enough to render ilps computationally infeasible , and the spectral bisection method provides adequate performance .however , if necessary , it is trivial to incorporate other approximate partitioning methods into protocol 1 .to test the proposed methods , we have chosen representative water distribution networks used frequently in literature .these include the exnet , richmond , dtown , and colorado springs networks .researchers have studied the topology of these networks , with emphasis on analyzing properties like link density , clustering coefficient , betweenness centrality etc .we have chosen these networks due to the wide spectrum of size , formation , and organizational patterns ; and hence representative of most wdns .the exnet network is a large realistic benchmark problem used for multi - objective optimization of water systems .the colorado springs network is an example with multiple water supply sources , while the richmond network is a sub - network of the yorkshire water system in the uk with a single reservoir .the dtown network was used in the battle of the water network ii ( bwn - ii ) as a design problem .in addition , we have also tested the algorithm on one sector of the bangalore water distribution network , which is smaller in size compared to the other full " networks , to study how the methods perform at smaller scales .some important properties of these networks are summarized in table 1 .the layouts of these networks are illustrated in fig .[ fig : network_layouts ] .properties of the networks studied .( and are the number of nodes and edges respectively ; q is the link density ( ) ; and are the mean and maximum node degrees ) [ cols="<,^,^,^,^,^,^",options="header " , ] [ tab4:approx ] 2 + ( a ) network with leaky pipe + + ( b ) first set of queries + + ( c ) second set of queries + + ( d ) third set of queries +an effective graph partitioning based protocol to locate leaky units in water distribution networks is proposed .the protocol involves solving a multi - objective optimization problem that approximately models hierarchical graph partitioning .it was observed that a goal programming formulation handles the multiple objectives in an effective manner , producing high quality solutions .an approximate partitioning algorithm inspired by spectral clustering was also presented , and the results discussed .the performance of the protocol and various formulations was elucidated through case studies on standard water distribution networks .it was observed that only a very small fraction of pipes need to be queried for flow measurements , in order to find the leak location .in this section , we propose possible methods to avoid some assumptions made earlier .we also propose possible extensions and future work . 
as outlined earlier , extending the proposed protocol to include leak in pipesrequires introducing more notations and modifying the protocol . for sake of brevity, we have presented this extension in the appendix . in the original problem formulation and protocol in section 3, we assumed that pre - installed sensors are not available .whenever a measurement was required , a query or act of measurement must be performed to obtain the flow rate . however , for well designed wdns , some pipes would already be fitted with permanent sensors .this could be for dma sectorization , or other monitoring requirements .in addition to sensors , we can also make use of valves by completely closing the valve through which we indirectly know that the flow rate in that pipe is zero .if such a disruptive method is not desirable , then the use of valves can be avoided .one method to incorporate these factors is to simply assign a very low querying cost to those pipes which have valves or sensors installed on them so that partitions containing them are favored over others .an extreme case of this is to simply remove those edges which have sensors on them from the network before running the partitioning algorithm and then use the appropriate flow rates when performing the water balance. the proposed algorithm can be very naturally extended to cases where there are multiple leaks .in such a scenario , more than one sub - network would show an imbalance at some stage of the hierarchical partitioning exercise .after this point , we apply the same method to each of these sub - networks with imbalances .the only binding assumption in such a case is the absence of any material ingress - i.e. all the leaks are material losses out of the network , and water can not enter the network through pipe ruptures .graph containing leaky node , ( threshold ) + * initialize : * cost 0 ; leakyset + * procedure : * ( leakyset , cost ) findleak ( , cost , , leakyset ) + * result : * leaky node(s ) in leakyset in our work , we have tried to obtain partitions that are balanced in size of the sub - networks ( measured in number of nodes ) .there are possibly alternate criteria for balanced partitions that take into account domain specific knowledge .for instance , if a probability distribution for leak occurrences in various nodes are available , we might want to obtain partitions that are balanced in this probability .this information could be obtained for instance through historical data or models utilizing network properties like pipe lengths , roughness factors etc .for instance , total length of pipe in a partition could be related to the probability of leak occurrence within the partition .it is easy to observe that node properties ( like leak probability ) can be easily incorporated into the ilp and approximate algorithms .however , it is not trivial to partition based on edge attributes ( like pipe length ) which is a line of work we plan to pursue in the future .this work was partially supported by the department of science and technology , india under the water technology initiative ( dst / tm / wti/2k13/144 ) and the iit madras interdisciplinary laboratory for data sciences ( cse/14 - 15/831/rftp / brav ). 10 a. colombo and b. karney , `` energy and costs of leaky pipes : toward comprehensive picture , '' _ journal of water resources planning and management _ , vol .128 , no . 6 , pp .441450 , 2002 .r. puust , z. kapelan , d. a. savic , and t. 
koppel , `` a review of methods for leakage management in pipe networks , '' _ urban water journal _ ,vol . 7 , no . 1 , pp .2545 , 2010 .f. gonzalez - gomez , m. a. garca - rubio , and j. guardiola , `` why is non - revenue water so high in so many cities ?, '' _ international journal of water resources development _ , vol . 27 , no . 2 , pp .345360 , 2011 .w. mpesha , s. gassman , and m. chaudhry , `` leak detection in pipes by frequency response method , '' _ journal of hydraulic engineering _ , vol .127 , no . 2 , pp .134147 , 2001 .z. sun , p. wang , m. c. vuran , m. a. al - rodhaan , a. m. al - dhelaan , and i. f. akyildiz , `` mise - pipe : magnetic induction - based wireless sensor networks for underground pipeline monitoring , '' _ ad hoc networks _ , vol . 9 , no . 3 , pp . 218 227 , 2011 .a. f. colombo , p. lee , and b. w. karney , `` a selective literature review of transient - based leak detection methods , '' _ journal of hydro - environment research _ ,vol . 2 , no . 4 , pp .212 227 , 2009 . j. liggett and l. chen , `` inverse transient analysis in pipe networks , '' _ journal of hydraulic engineering _120 , no . 8 , pp . 934955 , 1994 .m. stephens , m. lambert , a. simpson , j. vitkovsky , and j. nixon , _ field tests for leakage , air pocket , and discrete blockage detection using inverse transient analysis in water distribution pipes _ , ch .471 , pp . 110 . 2004 .m. stephens , a. simpson , m. lambert , and j. vtkovsk , _ field measurements of unsteady friction effects in a trunk transmission pipeline _ , ch .18 , pp . 112. 2005 .n. deo , _ graph theory with applications to engineering and computer science_. prentice - hall , inc ., 1974 .s. narasimhan and n. bhatt , `` deconstructing principal component analysis using a data reconciliation perspective , '' _ computers & chemical engineering _ , vol .77 , pp .74 84 , 2015 .a. rajeswaran and s. narasimhan , `` network topology identification using pca and its graph theoretic interpretations , '' _ arxiv preprint arxiv:1506.00438v2 [ cs.lg]_ , 2015 .j. shi and j. malik , `` normalized cuts and image segmentation , '' _ pattern analysis and machine intelligence , ieee transactions on _ , vol .22 , no . 8 , pp . 888905 , 2000 .h. sherali and a. soyster , `` preemptive and nonpreemptive multi - objective programming : relationship and counterexamples , '' _ journal of optimization theory and applications _ , vol .39 , no . 2 , pp .173186 , 1983 .m. bhushan and r. rengaswamy , `` comprehensive design of a sensor network for chemical plants based on various diagnosability and reliability criteria . 1 .framework , '' _ industrial & engineering chemistry research _ , vol .41 , no . 7 , pp . 18261839 , 2002 .m. bhushan , s. narasimhan , and r. rengaswamy , `` robust sensor network design for fault diagnosis , '' _ computers & chemical engineering _ , vol .32 , no .45 , pp . 1067 1084 , 2008 .s. arora , s. rao , and u. vazirani , `` expander flows , geometric embeddings and graph partitioning , '' _ j. acm _ ,56 , pp . 5:15:37 , apr . 2009 .b. w. kernighan and s. lin , `` an efficient heuristic procedure for partitioning graphs , '' _ bell system technical journal _ , vol .49 , no . 2 , pp .291307 , 1970 .a. pothen , h. simon , and k. liou , `` partitioning sparse matrices with eigenvectors of graphs , '' _ siam journal on matrix analysis and applications _ ,11 , no . 3 , pp .430452 , 1990 .v. guruswami and a. 
sinop , `` lasserre hierarchy , higher eigenvalues , and approximation schemes for graph partitioning and quadratic integer programming with psd objectives , '' in _ foundations of computer science ( focs ) , 2011 ieee 52nd annual symposium on _ , pp . 482491 , oct 2011 .a. yazdani and p. jeffrey , `` complex network analysis of water distribution systems , '' _ chaos : an interdisciplinary journal of nonlinear science _ ,21 , no . 1 , 2011in the main text , we presented the algorithm for finding leaks when they occur in nodes .however , in some cases , leaks may occur at any point along pipes as well .we now present an extension of the method for this case .we continue under the following assumptions : we first present the idea for the simplistic case where there is a single leak and sensors are noiseless .cosinder the graph which contains the leak ( either the full network , or network under consideration in some step of the recursive procedure ) .we consider a possible partition into and by querying flows in cut . for the above scenario , a straightforward approach to queryinga pipe is to measure the flows at both it s end points very close to the node , as shown in fig .if the flow at m and m do not match , it is clear the leak is in the pipe .however , if the flow rates at m and m are equal , then the leak is definitely not in this pipe . following a similar procedure for all the pipes in cut , we can trace the leak to either or exactly . in other words ,the leaky node or pipe is within the partition .we would of course need to account for the flows by adding source or sink terms to the nodes on which the connecting pipes were incident .for example , in fig .[ fig : meas ] , we need to add the flow rate in e by adding a source or sink term at nodes 4 and 5 , depending on the direction of flow . in this strategy , the cost will be twice the cut - cost .however , it is possible to reduce the cost with some modifications .when querying a pipe for the first time , rather than making two measurements , we can measure the flow at a single point close to the center . in this case however , the leak need not be in the interior of either partition . since leaks can occur at any point on a pipe , the half - pipe segments of the crossing pipes ( part of cut - set ) could contain the leak .thus we most modify our definition of partition to include these pipe segments as well ( which are incident on only one node ) .we do this by introducing an _ artificial _ node at the point of measurement .thus an edge between an actual node and artificial node represents a pipe segment . with this modification ,the recursive procedure proposed in the main paper can be used .after many recursion steps , we may come to a stage where we need to query an edge between an actual node and artificial node .this amounts to measuring a pipe for the second time , where we already have one measurement for the pipe .in such a case , the second measurement is made close to the actual node .if this measurement does not match with the flow measurement obtained at the artificial node , then the pipe segment contains the leak and the process can be stopped .when the measurements match , we continue with the recursive procedure by eliminating the pipe segment . in this procedure, only a few pipes will be measured twice and , therefore , the cumulative number of measurements required will be lower . for illustration , consider the situation shown in fig . 
[fig : intermittent ] . an artificial node at the point of measurement is added and the corresponding incidence matrix is :
leak detection in urban water distribution networks (wdns) is challenging given their scale, complexity, and limited instrumentation. we present a technique for leak detection in wdns which involves making additional flow measurements on demand and the repeated use of water balance. graph partitioning is used to determine the locations of the flow measurements, with the objective of minimizing the measurement cost. we follow a multi-stage divide and conquer approach. in every stage, a section of the wdn identified as containing the leak is partitioned into two or more sub-networks, and water balance is used to trace the leak to one of these sub-networks. this process is continued recursively until the desired resolution is achieved. we investigate different methods for solving the resulting graph partitioning problem, such as integer linear programming (ilp) and spectral bisection. the proposed methods are tested on large scale benchmark networks, and our results indicate that, on average, less than 3% of the pipes need to be measured to find the leak in large networks.
we consider a galton watson process , that is , a population model with asexual reproduction such that at every generation , each individual gives birth to a random number of children according to a fixed distribution and independently of the other individuals in the population .we are interested in the situation where a child can be either a clone , that is , of the same type ( or allele ) as its parent , or a mutant , that is , of a new type .we stress that each mutant has a distinct type and in turn gives birth to clones of itself and to new mutants according to the same statistical law as its parent , even though it bears a different allele . in other words, we are working with an infinite alleles model where mutations are neutral for the population dynamics .we might as well think of a spatial population model in which children either occupy the same location as their parents or migrate to new places and start growing colonies on their own .this quite basic framework has been often considered in the literature ( see , e.g. , ) ; we also refer to for interesting variations ( these references are of course far from being exhaustive ) . note also that galton watson processes with mutations can be viewed as a special instance of multitype branching processes ( see chapter v in athreya and ney or chapter 7 in kimmel and axelrod ) .we are interested in the partition of the population into clusters of individuals having the same allele , which will be referred to as the _ allelic partition_. statistics of the allelic partition of a random population model with neutral mutations have been first determined in a fundamental work of ewens for the wright fisher model ( more precisely this concerns the partition of the population at a fixed generation ) .kingman provided a deep analysis of this framework , in connection with the celebrated coalescent process that depicts the genealogy of the wright fisher model .we refer to for some recent developments in this area which involve some related population models with fixed generational size and certain exchangeable coalescents .the main purpose of the present work is to describe explicitly the structure of the allelic partition of the entire population for galton watson processes with neutral mutations .we will always assume that the galton watson process is critical or subcritical , so the descent of any individual becomes eventually extinct , and in particular the allelic clusters are finite a.s .we suppose that every ancestor ( i.e. , individual in the initial population ) bears a different allele ; it is convenient to view each ancestor as a mutant of the zeroth kind .we then call mutant of the first kind a mutant - child of an individual of the allelic cluster of an ancestor , and the set of all its clones ( including that mutant ) a cluster of the first kind . by iteration, we define mutants and clusters of the kind for any integer . in order to describe the statistics of the allelic partition , we distinguish an ancestor whichwill then be referred to as _ eve _ , and focus on its descent .the set of all individuals bearing the same allele as eve is called the _ eve cluster_. the eve cluster has obviously the genealogical structure of a galton watson tree with reproduction law given by the distribution of the number of clone - children of a typical individual . 
informally , the branching property indicates that the same holds for the other clusters of the allelic partition .further , it should be intuitively clear that the process which counts the number of clusters of the kind for is again a galton watson process whose reproduction law is given by the distribution of the number of mutants of the first kind ; this phenomenon has already been pointed at in the work of tab .that is to say that , in some loose sense the allelic partition inherits branching structures from the initial galton watson process .of course , these formulations are only heuristic and precise statements will be given later on .we also stress that the forest structure which connects clusters of different kinds and the genealogical structure on each cluster are not independent since , typically , the number of mutants of the first kind who stem from the eve cluster is statistically related to the size of the eve cluster .our approach essentially relies on a variation of the well - known connection due to harris between ordinary galton watson processes and sequences of i.i.d .integer - valued random variables .specifically , we incorporate neutral mutations in harris representation and by combination with the celebrated ballot theorem ( which is another classical tool in this area as it is expounded , e.g. , by pitman ; see chapter 6 in ) , we obtain expressions for the joint distribution of various natural variables ( size of the total descent of an ancestor , number of alleles , size and number of mutant - children of an allelic cluster ) in terms of the transition probabilities of the two - dimensional random walk which is generated by the numbers of clone - children and of mutant - children of a typical individual .we also investigate some limit theorems in law ; typically we show that when the numbers of clone - children and mutant - children of an individual are independent ( and some further technical conditions ) , the sequence of the relative sizes of the allelic clusters in a typical tree has a limiting conditional distribution when the size of the tree and the number of types both tend to infinity according to some appropriate regime .the limiting distribution that arises has already appeared in the study of the standard additive coalescent by aldous and pitman . 
we also point at limit theorems for allelic partitions of galton watson forests , where , following duquesne and le gall , the limits are described in terms of certain lvy trees .in particular , this provides an explanation to a rather striking identity between two self - similar fragmentation processes that were defined on the one hand by logging the continuum random tree according to a poisson point process along its skeleton , and on the other hand by splitting the unit - interval at instants when the standard brownian excursion with a negative drift reaches new infima .we first develop some material and notation about galton watson forests with neutral mutations , referring to chapter 6 in pitman for background in the case without mutations .let be a pair of nonnegative integer - valued random variables which should be thought of respectively as the number of clone - children and the number of mutant - children of a typical individual .we also write for the total number of children , and assume throughout this work that that is , we work in the critical or subcritical regime .we implicitly exclude the degenerate case when or and , as a consequence , the means and are always less than .we write and for the sets of nonnegative integers and positive integers , respectively . a pair then used to identify an individual in an infinite population model , where the first coordinate refers to the generation and the second coordinate to the rank of the individual of that generation ( we stress that each generation consists of an infinite sequence of individuals ) .we assume that each individual at generation has a unique parent at generation .we consider a family of i.i.d .copies of which we use to define the galton watson process with neutral mutations . specifically , is the pair given by the number of clone - children and mutant - children of the individual at generation .we may assume that the offspring of each individual is ranked , which induces a natural order at the next generation by requiring further that if and are two individuals at the same generation with , then at generation the children of are all listed before those of .next , we enumerate as follows the individuals of the entire population ( i.e. , of all generations ) by a variation of the well - known depth - first search algorithm that takes mutations into account .we associate to each individual a label , where is the rank of the ancestor in the initial population , the number of mutations and a finite sequence of positive integers which keeps track of the genealogy of the individual .specifically , the label of the individual in the initial generation is . if an individual at the generation has the label , and if this individual has clone - children and mutant - children , then the labels assigned to its clone - children are whereas the labels assigned to its mutant - children are clearly , any two distinct individuals have different labels .we then introduce the ( random ) map which consists in ranking the individuals in the lexicographic order of their labels ; see figure [ fig1 ] .that is to say that if and only if the individual in the lexicographic order of labels corresponds to the individual at generation . this procedure for enumerating the individuals will be referred to as the _ depth - first search algorithm with mutations_. 
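the construction just described is easy to simulate. the sketch below grows a single (sub)critical galton-watson tree with neutral mutations from one ancestor and returns, for each allele, the size of its cluster and the number of new mutants that cluster generates; the offspring law is an arbitrary subcritical placeholder, and clusters are explored here in a breadth-first order over the alleles rather than by reproducing the exact labelling scheme used in the text.

```python
import random
from collections import deque

def offspring():
    """Placeholder (sub)critical offspring law: mean number of children < 1,
    each child independently a mutant with probability 0.2 (both assumed)."""
    total = random.choices([0, 1, 2], weights=[0.30, 0.45, 0.25])[0]
    mutants = sum(random.random() < 0.2 for _ in range(total))
    return total - mutants, mutants          # (clone-children, mutant-children)

def allelic_clusters(max_pop=10**6):
    """Grow one Galton-Watson tree with neutral mutations from a single ancestor
    and return, for each allele, (cluster size, number of new mutants it creates)."""
    clusters = []
    pending_mutants = deque([0])             # the ancestor is the initial mutant
    while pending_mutants:
        pending_mutants.popleft()
        size, new_mutants, unexplored = 0, 0, 1   # one unexplored clone: the mutant itself
        while unexplored:
            unexplored -= 1
            size += 1
            clones, mutants = offspring()
            unexplored += clones             # clones stay in the same allelic cluster
            new_mutants += mutants           # mutants found new clusters
            if size > max_pop:
                raise RuntimeError("tree too large; offspring law must be (sub)critical")
        clusters.append((size, new_mutants))
        pending_mutants.extend(range(new_mutants))   # enqueue the newly created alleles
    return clusters

random.seed(2024)
cl = allelic_clusters()
total_progeny = sum(s for s, _ in cl)        # size of the whole tree
number_of_alleles = len(cl)                  # number of distinct types in the tree
```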
we shall also use the notation and whenever no generation is specified , the terminology individual will implicitly refer to the rank of that individual induced by depth - first search with mutation , that is , the individual means the individual at generation where .represent the different alleles .left : the label of an individual is given by the number of mutations and the sequence that specifies its genealogy ; for the sake of simplicity , the rank of the ancestor has been omitted .right : the same tree with individuals ranked by the depth - first search algorithm with mutations . ] [ le1 ] the variables are i.i.d . with the same law as .the sequence can be recovered from a.s .it should be plain from the definition of the depth - first search algorithm with mutations that for every , is a deterministic function of which takes values in . since is a sequence of i.i.d .variables with the same law as , this yields the first claim by induction .the second claim follows from the fact that each individual has a finite descent a.s .[ because the galton watson process is critical ] , which easily entails that the map is bijective .further , it is readily seen that the inverse bijection is a function of the sequence .henceforth , we shall therefore encode the galton watson process with neutral mutations by a sequence of i.i.d .copies of .we denote by the natural filtration generated by this sequence .we next briefly describe the genealogy of the galton watson process as a forest of i.i.d .genealogical trees .denote for every by so that is the increasing sequence of the ranks of ancestors induced by the depth - first search algorithm with mutations .for example , in the situation described by figure [ fig1 ] .the procedure for labeling individuals ensures that the descent of the ancestor corresponds to the integer interval ( that is to say , if we index the population model using generations , then the descent of is the image of by the inverse bijection ) .we write for the finite sequence of the numbers of clone - children and mutant - children of the individuals in the descent of the ancestor .so encodes ( by the depth - first search algorithm with mutations ) the genealogical tree of the ancestor , and it should be intuitively clear that the family is a forest consisting in a sequence of i.i.d . genealogical trees . to give a rigorous statement , it is convenient to introduce the downward skip - free ( or left - continuous ) random walk and the passage times we stress that the form an increasing sequence of -stopping times .[ le2 ] there is identity for every and , as a consequence , the sequence is i.i.d .this formula is a close relative of the classical identity of dwass and would be well known if individuals were enumerated by the usual depth - first search algorithm ( i.e. , without taking care of mutations ) , see , for example , lemma 63 in or .the proof in the present case is similar .indeed the formula is obvious for , and for , we have on the one hand that by expressing the fact that the predecessor of the second ancestor found by depth - first search with mutations has a rank given by the size of the population generated by eve , that is , eve herself and her descendants . on the other hand, we must have when , since otherwise the depth - first search algorithm with mutations would explore the second ancestor before having completed the exploration of the entire descent of eve .this proves the identity for , and the general case then follows by iteration . 
finally , the last claim is an immediate consequence of lemma [ le1](i ) and the strong markov property . we can now turn our attention to defining allelic partitions . in this direction , recall that every ancestor has a different type ( i.e. , bears a different allele ) , and thus should be viewed as an initial mutant .more generally , we call _ mutant _ an individual which either belongs to the initial generation or is the mutant - child of some individual , and then write for the ranks of mutants in the depth - first search algorithm with mutations . for example , , , , and in the situation depicted by figure [ fig1 ] . the upshot of this algorithm is that the set of individuals that bear the same allele as the mutant corresponds precisely to the integer interval . in this direction , it is therefore natural to introduce for every the _ allelic cluster _ that is , is the finite sequence of the numbers of clone - children and mutant - children of the individuals bearing the same allele as the mutant . the sequence encodes the allelic partition of the entire population .each allelic cluster is naturally endowed with a structure of rooted planar tree which is induced by the galton watson process .more precisely , the latter is encoded via the usual depth - first search algorithm by the sequence ; in particular the mutant is viewed as the root ( i.e. , ancestor ) of the cluster . in other words , the depth - first search algorithm with mutations for the galton watson process induces precisely the usual depth - first search applied to the forest of allelic clusters viewed as a sequence of planar rooted trees .we also stress that the initial galton watson process can be recovered from the allelic partition .indeed , the previous observation shows how to construct the portion of the genealogical tree corresponding to the allelic cluster generated by an initial mutant , and the latter also contains the information which is needed to identify the mutant - children of the first kind .mutant - children of the first kind are the roots of the subtrees corresponding to the allelic clusters of the second kind , and by iteration the entire genealogical forest can be recovered .just as above , it is now convenient to introduce the downward skip - free random walk and the passage times again , the form an increasing sequence of -stopping times .[ le3 ] there is identity for every . as a consequence , for every , is adapted to the sigma - field , whereas is independent of and has the same distribution as .in particular the sequence of the allelic clusters is i.i.d . 
the proof is similar to that of lemma [ le2 ] and therefore omitted .we also introduce the number of alleles , that is , of different types , which are present in the tree : for example , in the situation described by figure [ fig1 ] .note that there is the alternative expression [ c1 ] for every , we have equivalently , there is the identity the allelic partition of the tree , which is induced by restricting the allelic partition of the entire population to , is given by as a consequence , the sequence of the allelic partitions of the trees for , is i.i.d .\(i ) the first identity should be obvious from the definition of the depth - first search with mutations , as is the number of alleles which have been found after completing the exploration of the first trees and the next mutant is then the ancestor .the second then follows from lemmas [ le2 ] and [ le3 ] .\(ii ) the first assertion is immediately seen from ( i ) and the definitions of the trees and of the allelic clusters .then observe that the number of alleles in the tree is a function of that tree , and so is the allelic partition .the second assertion thus derives from lemma [ le2 ] .it may be interesting to point out that and are both increasing random walks .the range is the set of predecessors of ancestors ( in the depth - first search algorithm with mutations ) , whereas corresponds to predecessors of mutants .these are two regenerative subsets of , in the sense that each can be viewed as the set of renewal epochs of some recurrent event ( cf .feller ) .observe that both yield a partition of the set of positive integers into disjoint intervals : ^{(+)}_{i-1},t^{(+)}_{i}\bigr ] = \bigcup_{j\geq 1 } \bigl]t^{(\mathrm{c } ) } _ { j-1},t^{(\mathrm{c})}_{j}\bigr],\ ] ] that correspond respectively to the trees in the galton watson forest and to the allelic clusters . by corollary [ c1](i ) , there is the embedding and more precisely , this embedding is compatible with regeneration , in the sense that for every , conditionally on , the shifted sets and are independent of the sigma - field generated by and their joint law is the same as that of .we refer to for applications of this notion . roughly speaking, this implies that the allelic split of each interval ^{(+)}_{i-1},t^{(+)}_{i}] ] in a random way that only depends on the length ( i.e. , the size of ) , independently of its location and of the other integer intervals .this can be thought of as a fragmentation property ( see ) for the sizes of the trees . in order to analyze the structure of allelic partitions , we introduce some related notions .the genealogy of the population model naturally induces a structure of forest on the set of different alleles .more precisely , we enumerate this set by declaring that the allele is that of the cluster , and define a planar graph on the set of alleles ( which is thus identified as ) by drawing an edge between two integers if and only if the parent of the mutant is an individual of the allelic cluster .this graph is clearly a forest ( i.e. , it contains no cycles ) , which we call the _ allelic forest _ , and more precisely the allelic tree is that induced by the mutant descent of the ancestor . in other words ,the allelic tree is the genealogical tree of the different alleles present in . in particular, the sequence of allelic trees is i.i.d . 
andtheir sizes are given by .recall that the _ breadth - first search _ in a forest consists in enumerating individuals in the lexicographic order of their labels , where the label of the individual at generation is now given by the triplet , with the rank of the ancestor at the initial generation .after a ( short ) moment of thought , we see that the definition of depth - first search with mutations for the galton watson process ensures that the labeling of alleles by integers agrees with breadth - first search on the allelic forest , in the sense that the allele is found at the step of the breadth - first search on the allelic forest . for every , we consider the number of new mutants who are generated by the allelic cluster , viz . for instance , we have , and in the situation depicted by figures [ fig1 ] and [ fig2 ] . the allelic forest is thus encoded by breadth - first search via the sequence . .the labels represent the sizes of the allelic clusters . ][ le4 ] the sequence is i.i.d . , andtherefore the allelic forest is a galton watson forest with reproduction law the distribution of . as a consequence, the size of the first allelic tree is given by the identity showing that is an -stopping time .recall from lemma [ le3 ] that the sequence of the allelic clusters is i.i.d . clearly , each variable only depends on , which entails our first claim .the second follows from the well - known fact that breadth - first search induces a bijective transformation between the distributions of ( sub-)critical galton watson forests and those of i.i.d .sequences of integer - valued variables with mean less than or equal to one ( see , e.g. , section 6.2 in ) .finally , the identity for the number of alleles present in the tree follows from the preceding observations and again a variation of the celebrated formula of dwass ( see , e.g. , lemma [ le2 ] in the present work ) , as plainly , coincides with the total size of the first tree in the allelic forest .we start by stating a version of the classical ballot theorem that will be used in this section ; see .let be an -tuple of random variables with values in some space , which is cyclically exchangeable , in the sense that for every , there is the identity in law where we agree that addition of indices is taken modulo . 
consider a function and assume that for some .[ le6 ] under the assumptions above , the probability that the process of the partial sums of the sequence remains above until the -step is we have now introduced all the tools which are needed for describing some statistics of the allelic partition of a galton watson tree with neutral mutations .we only need one more notation .we write for the probability function of the reproduction law of the galton watson process with mutations .for every integer , we also write for the convolution product of that law , that is , suppose that the dynamics of the population can be described as follows .we start from a usual galton watson process with reproduction law on , say , and assume that at each step mutations affect each child with probability ,1[ ] with intensity .roughly speaking , we then get a random proper mass - partition by conditioning on ; see , for example , or proposition 2.4 in for a rigorous definition of this conditioning by a singular event .this family of random mass - partitions has appeared previously in a remarkable work by aldous and pitman , more precisely it arose by logging the continuum random tree according to poissonian cuts along its skeleton ; see also for related works . in the present setting , we may interpret such cuts as mutations which induce an allelic partition . as we know from aldous that the continuum random tree can be viewed as the limit when of galton watson trees conditioned to have total size , the fact that the preceding random mass - partitions appear again in the framework of this work should not come as a surprise .for the sake of simplicity , we shall focus on the case when the number of clone - children and the number of mutant - children are independent , although it seems likely that our argument should also apply to more general situations .recall that the expected number of clone - children of a typical individual is .we shall work under the hypothesis that by a suitable exponential tilting , this subcritical random variable can be turned into a critical one with finite variance .that is , we shall assume that there exists a real number such that it can be readily checked that ( [ e4 ] ) then specifies uniquely .suppose that and are independent , that neither distribution is supported by a strict subgroup of and that ( [ e4 ] ) holds .fix and let according to the regime . then the conditional law of given that the size of the total population is and the number of alleles converges weakly on the space of mass - partitions to the sequence of the atoms of a poisson random measure on ,\infty[|\mathbb{c}_1| , \ldots , and .observe that the latter is equivalent to conditioning on and .further , recall from lemma [ le3 ] that and hence , on this event , the variables are functions of .thus the assumption of independence between and enables us to ignore the conditioning on .finally , it should be clear that the exponential tilting does not affect such a conditional law , in the sense that the sequence has the same distribution under as under .we then estimate the distribution of the size of the eve cluster under , which is given again according to the dwass formula by recall that , by assumption , is critical with variance under , so an application of gnedenko s local central limit theorem gives putting the pieces together , we get that the conditional distribution of given and is the same as that obtained from an i.i.d . 
sequence by ranking in the decreasing order and conditioning on , where an application of corollary 2.2 in completes the proof of our claim .the purpose of this section is to point at an interpretation of a standard limit theorem involving left - continuous ( i.e. , downward skip - free ) random walks and lvy processes with no negative jumps , in terms of galton watson and lvy forests in the presence of neutral mutations .we first introduce some notation and hypotheses in this area , referring to the monograph by duquesne and le gall for details . for every integer ,let be a pair of integer - valued random variables with we consider two left - continuous random walks whose steps are ( jointly ) distributed as and , respectively .let also denote a lvy process with no negative jumps and laplace exponent , namely , we further suppose that does not drift to , which is equivalent to , and that we also need to introduce a different procedure for encoding forests by paths , which is more convenient to work with when discussing continuous limits of discrete structures .for each , we write for the ( discrete ) _ height function _ of the galton watson forest .that is , for , denotes the generation of the individual found by the usual depth - first search ( i.e. , mutations are discarded ) on the galton watson forest . in the continuoussetting , trees and forests can be defined for a fairly general class of lvy processes with no negative jumps , and in turn are encoded by ( continuous ) height functions ; cf . chapter 1 in for precise definitions and further references .the key hypothesis in this setting is the existence of a nondecreasing sequence of positive integers converging to and such that we also assume that the technical condition in is fulfilled . then the rescaled height function }(n)\dvtx t\geq0 \bigr)\ ] ] converges in distribution , in the sense of weak convergence on skorohod space as toward the height process which is constructed from the lvy process ; see theorem 2.3.1 in . similarly , we write for the height function of the galton watson forest , where each allelic cluster is endowed with the genealogical tree structure induced by the population model ( see remark , item 1 in section [ sec23 ] ) .[ p2 ] suppose that the preceding assumptions hold , and also that for some .then the rescaled height function }(n ) \dvtx t\geq0 \bigr)\ ] ] converges in distribution , in the sense of weak convergence on skorohod space as toward the height process which is constructed from the lvy process .more recently , duquesne and le gall ( see also the survey ) have developed the framework when l ' evy trees are viewed as random variables with values in the space of real trees , endowed with the gromov hausdorff distance .proposition [ p2 ] can also be restated in this setting .proof of proposition [ p2 ] the assumption ( [ e6 ] ) ensures the convergence in distribution }(n)\dvtx t\geq0 \bigr ) \longrightarrow(x_t\dvtx t\geq0 ) , \ ] ] see theorem 2.1.1 in and ( 2.3 ) there . on the other hand , by a routine argument based on martingales ,the assumption ( [ e7 ] ) entails that }(n)-s^ { ( \mathrm{c})}_{[tn\gamma_n]}(n)\bigr)=dt,\ ] ] uniformly for in compact intervals , in . 
the convergence in distribution }(n ) \dvtx t\geq0\bigr ) \longrightarrow(x_t - dt\dvtx t\geq0 ) \ ] ] follows .recall that depth - first search with mutations on the initial forest yields the usual depth - first search for the forest of allelic clusters ( cf .remark , item 1 in section [ sec23 ] ) .we can then complete the proof as in theorem 2.3.1 in .we now conclude this work by discussing a natural example .specifically , we suppose that the distribution of is the same for all . for the sake of simplicity , we assume also that and .we may then take , so by the central limit theorem , ( [ e6 ] ) holds and the lvy process is a standard brownian motion .we fix an arbitrary and consider the independent pruning model where for each integer , conditionally on the total number of children , the number of mutant - children of a typical individual has the binomial distribution . in other words , in the population model , mutations affect each child with probability , independently of the other children .then ( [ e7 ] ) clearly holds . roughly speaking ,theorem 2.3.1 of implies in this setting that the initial galton watson forest associated with the population model , converges in law after a suitable renormalization to the brownian forest , whereas proposition [ p2 ] of the present work shows that the allelic forest renormalized in the same way , converges in law to the forest generated by a brownian motion with drift .=1 this provides an explanation to the rather intriguing relation which identifies two seemingly different fragmentation processes : the fragmentation process constructed by aldous and pitman by logging the continuum random tree according to a poisson point process on its skeleton , and the fragmentation process constructed in by splitting the unit interval at instants when a brownian excursion with negative drift reaches a new infimum .it is interesting to mention that schweinsberg already pointed at several applications of the ( continuous ) ballot theorem in this framework .more generally , the transformation of lvy processes with no negative jumps also appeared in an article by miermont on certain eternal additive coalescents , whereas aldous and pitman showed that the latter arise asymptotically from independent pruning of certain sequences of birthday trees .finally , we also refer for another interesting recent work on pruning lvy random trees .i would like to thank two anonymous referees for their careful check of this work .
we consider a ( sub-)critical galton watson process with neutral mutations ( infinite alleles model ) , and decompose the entire population into clusters of individuals carrying the same allele . we specify the law of this allelic partition in terms of the distribution of the number of clone - children and the number of mutant - children of a typical individual . the approach combines an extension of harris representation of galton watson processes and a version of the ballot theorem . some limit theorems related to the distribution of the allelic partition are also given .
malware detection has evolved as one of the challenging problems in the field of cyber - security as the attackers continuously enhance the sophistication of malware to evade novel detection techniques .malware for various platforms such as desktop and mobile devices is growing at an alarming rate .for instance , kaspersky reports detecting 4 million malware infections in 2015 which is a 216% increase over 2014 .this volume and growth rate clearly highlights an imperative need for automated malware detection solutions .+ to perform automated malware detection , security analysts resort to program analysis and machine learning ( ml ) techniques . typically , this process involves extracting semantic features from suitable representations of programs ( e.g. , assembly code , call graphs ) and detecting malicious code or behavior patterns using ml classifiers .+ a major reason for such tremendous growth rate in malware is the production of _ malware variants_. typically , the attackers produce large number of _ variants _ of the same malware by resorting to techniques such as variable renaming and junk code insertion .these variants perform same malicious functionality , with apparently different syntax , thus evading syntax - based detectors. however , higher level semantic representations such as call graphs , control- and data - flow graphs , control- , data- and program - dependency graphs mostly stay similar even when the code is considerably altered . in this work ,we use a common term , _ program representation graph _( prg ) to refer to any of these aforementioned graphs .as prgs are resilient against variants , many works in the past have used them to perform malware detection .in essence , such works cast malware detection as a _ graph classification problem _ and apply existing graph mining and classification techniques . some methods such as note that ml classifiers are readily applicable on data represented as vectors and attempt to encode prgs as feature vectors .typically , these techniques face two challenges : * * ( c1 ) expressiveness . * prgs are complex and expressive data structures that characterize topological relationships among program entities . representing them as vectorsis a non - trivial task . in many cases vectorial representations of prgsfail to capture all the vital information .for instance , appcontext , a well - known android malware detection approach represents apps as prgs and ends up capturing features from individual nodes without their topological neighbourhood information . with such loss of expressiveness , attacks that span across multiple prg nodes could not be effectively detected . * * ( c2 ) efficiency . *the scale of malware detection problem is such that we have millions of samples already and thousands streaming in every day .many classic graph mining based approaches ( e.g. , ) are np hard and have severe scalability issues , making them impractical for real - world malware detection . 
* graph kernels .* one of the increasingly popular approaches in ml for graph - structured data is the use of graph kernels .recently , efficient and expressive graph kernels such as have been proposed and widely adopted in many application areas ( e.g , bio- and chemo - informatics ) .some of them support explicit feature vector representations of graphs ( e.g , ) .thus both the aforementioned challenges c1 and c2 are effectively addressed by these graph kernels .therefore , it just suffices to use a graph kernel together with a kernelized ml classifier ( e.g. , svm ) and we have a scalable , effective and ready - to - use malware detector .recently , three approaches , and , have successfully demonstrated using these general purpose graph kernels for malware detection .+ * research gap .* however , a major problem in using these general purpose graph kernels on prgs is that , they are not designed to take domain - specific observations into account .for instance , recent research on malware analysis has revealed that besides capturing neighbourhood ( i.e. , structural ) information from prgs it is important to capture the context under which the neighbourhoods are reachable to accurately detect malicious neighbourhoods ( explained in detail in [sec : bgm ] ) .many existing graph kernels such as and can capture and compare structural information from prgs effectively .however , they are not designed to capture the reachability context , as it is a strong domain - specific requirement and hence fail to do so . to address this, we develop a novel graph kernel which is capable of capturing both the aforementioned types of information . + for similar domain - specific reasons , researchers from other fields such as computer vision , bio- and chemo - informatics have developed a number of kernels that specifically suit their applications . despite graphs being natural representations of programs and amenable for various activities ,the program analysis research community has not devoted significant attention to development of domain - specific graph kernels .we take the first step towards this , by developing a kernel on prgs which specifically suits our task of malware detection . +* our approach . * to improve the accuracy of malware detection process, we propose a method to enrich the feature space of a graph kernel that inherently captures structural information with contextual information .we apply this feature - enrichment idea on a state - of - the - art graph kernel , namely , weisfeiler - lehman kernel ( wlk ) to obtain the contextual weisfeiler - lehman kernel ( cwlk ) . specifically , cwlk associates to each sub - structure feature of wlk a piece of information about the context under which the sub - structure is reachable in the course of execution of the program . a sub - structure appearing in two different prgswill match only if it is reachable under the same context in both prgs .we show that for the malware detection problem , cwlk is more expressive and hence more accurate than wlk and other state - of - the - art kernels while maintaining comparable efficiency .+ * experiments . 
* through our large - scale experiments with more than 50,000 android apps, we demonstrate that cwlk outperforms two state - of - the - art graph kernels ( including wlk ) and three malware detection techniques by more than 5.27% and 4.87% f - measure , respectively , while maintaining high efficiency .this , in essence shows the significance of incorporating the contextual information along with structural information in the graph kernel while performing malware detection . +* contributions . *the paper makes the following contributions : + ( 1 ) we develop a graph kernel that captures both structural and contextual information from prgs to perform accurate and scalable malware detection ( [ sec : cwlk ] ) . to the best of our knowledge ,this is the first graph kernel specifically addressing a problem from the field of program analysis .+ ( 2 ) through large - scale experiments and comparative analysis , we show that the proposed kernel outperforms two state - of - the - art graph kernels and three malware detection solutions in terms of accuracy , while maintaining high efficiency ( [ sec : eval ] ) .+ ( 3 ) we make an efficient implementation of the proposed kernel ( along with the dataset information ) publicly available .in this section , we motivate the design of our kernel by describing why considering just the structural information from prgs is insufficient to determine the maliciousness of a sample and how supplementing it with contextual information helps to increase detection accuracy . to this endwe use a real - world android malware from the _ geinimi _ family which steals users private information .we contrast its behavior with that of a well - known benign app , _ yahoo weather_. + * _ geinimi _ s execution . *the app is launched through a background event such as receiving a sms or call .once launched , it reads the user s personal information such as geographic location and contacts and leaks the same to a remote server .the ( simplified ) malicious code portion pertaining to the location information leak is shown in fig .[ fig : me ] ( a ) .the method _leak_location _ reads the geographic location through getlatitude and getlongitude application programming interfaces ( apis ) .subsequently , it calls _leak_info_to_url _ method to leak the location details ( through dataoutputstream.writebytes ) to a specific server .the data dependency graph ( ddg ) corresponding to the code snippet is shown in fig .[ fig : me ] ( b ) .the nodes in ddg are labeled with the sensitive apis that they invoke .+ * _ yahoo weather _ s execution .* on the other hand , _ yahoo weather _ could be launched only by user s interaction with the device ( e.g. , by clicking the app s icon on the dash board ) .the app then reads the user s location and sends the same to its weather server to retrieve location - specific weather predictions .hence , ddg portions of _ yahoo weather _ is same as that of _geinimi_. + * contextual information .* from the explanations above , it is clear that both the apps leak the same information in the same fashion .however , what makes _ geinimi _ malicious is the fact that its leak happens without the user s consent . in other words , unlike _ yahoo weather _ , _ geinimi _ leaks private information through an event which is not triggered by user s interaction .we refer to this as a leak happening in _ user - unaware _ context . on the same lines, we refer to _ yahoo weather s _ leak as happening in _ user - aware _ context . 
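to make the running example concrete, the sketch below builds the two context-annotated data-dependency graphs of fig . [ fig : me ] ( c ) and ( d ) as attributed digraphs : identical structure and node labels ( the sensitive apis ), but different reachability contexts. the networkx-based representation, the node identifiers and the function name are our own illustrative choices, not the representation used in the original implementation.

    import networkx as nx

    def make_ddg(context):
        """Toy data-dependency graph of the location-leak behaviour: node labels
        are the sensitive APIs invoked, and every node is annotated with the
        context under which it is reachable ('user-aware' / 'user-unaware')."""
        g = nx.DiGraph()
        g.add_node("n1", label="getLatitude", context=context)
        g.add_node("n2", label="getLongitude", context=context)
        g.add_node("n3", label="writeBytes", context=context)  # DataOutputStream.writeBytes
        g.add_edge("n1", "n3")  # latitude flows into the outgoing message
        g.add_edge("n2", "n3")  # longitude flows into the outgoing message
        return g

    # same structure, different reachability context
    geinimi_like = make_ddg("user-unaware")  # launched by a background event
    weather_like = make_ddg("user-aware")    # launched by user interaction

    for name, g in [("geinimi-like", geinimi_like), ("weather-like", weather_like)]:
        print(name, [(d["label"], d["context"]) for _, d in g.nodes(data=True)])

the same attributed-graph representation is reused further below when we sketch the contextual relabelling.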
+ as explained in and , in the case of android apps , one could determine whether a prg node is reachable under _ user - aware _ or _ user - unaware _context by examining its entry point nodes . following this procedurewe add the context as an attribute to every ddg node .this context annotated ddg of _ geinimi _ and _ yahoo weather _ are shown in fig .[ fig : me ] ( c ) and ( d ) , respectively . +* requirements for effective detection . * from the aforementioned example the two key requirements that makes a malware detection process effective can be identified : + * ( r1 ) _ capturing structural information . _ * since malicious behaviors often span across multiple nodes in prgs , just considering individual nodes ( and their attributes ) in isolation is not enough . capturing the structural ( i.e. , neighborhood ) information from prgs is of paramount importance . + * ( r2 ) _ capturing contextual information . _ * considering just the structural information without the context is not enough to determine whether a sensitive behavior is triggered with or without user s knowledge .for instance , if structural information alone is considered , the features of both _ geinimi _ and _ yahoo weather _ apps become identical , thus making the latter a false positive . hence , it is important for the detection process to capture the contextual information as well to make the detection process more accurate .+ many existing graph kernels could address the first requirement well .however , the second requirement which is more domain - specific makes the problem particularly challenging . to the best of our knowledge ,none of the existing graph kernels support capturing this reachability context information along with structural information .hence , this gives us a clear motivation to develop a new kernel that specifically addresses our two - fold requirement .the formal definitions and notations that will be used throughout the paper are presented in this section . +* definition 1 ( program representation graph ) . * is a directed graph where is a set of nodes and each node denotes program entity such as a function or instruction . is a set of edges and each edge denotes either control- or data - flow or dependency from to . is the set of labels that characterize the ( security - sensitive ) operations of a node and , is a labeling function which assigns a label to each node . is a set of events that denote the context of a node and , is a function which assigns the context to each node . + * definition 2 ( context ) . *the context of a node in the prg of a program is a set of attributes that govern the reachability of in the course of execution of .+ * examples of contexts . * in the case of windows executables , the _ guard conditions _ that govern the execution of a node could be considered as its context . unlike windows ( and other desktop os ) binaries , android and ios mobile appstypically have multiple entry points .hence , in the case of such mobile apps , besides guard conditions , the categories of entry points through which a node is reachable could also be considered as its context .similar platform - specific constraints and observations could be considered while defining the contexts for executables of other platforms .in this section , we begin by explaining how the regular wlk can be applied to perform malware detection using prgs and how it falls short .subsequently , we introduce our cwlk and discuss how it addresses the shortcomings of wlk . 
finally , we prove cwlk s semi - definitiveness and analyze its time complexity .wlk computes the similarities between graphs based on the 1-dimensional wl test of graph isomorphism . + * wl test of isomorphism . *suppose we are to determine whether a given a pair of graphs and are isomorphic . the wl test of isomorphism works by augmenting the node labels by the sorted set of labels of neighboring nodes .this process is referred to as _ label - enrichment _ and new labels are referred as _neighborhood labels_. thus , in each iteration _ i _ of the wl algorithm , for each node , we get a new neighborhood label , that encompass the degree neighborhood around . could be optionally compressed using a hash function such that , iff . to test graph isomorphism, the re - labeling process is repeated until the neighborhood label sets of and differ , or the number of iterations reaches a specific threshold .therefore , one iteration of wl relabeling is equivalent to a function that transforms all graphs in the same manner . + * definition 3 ( wl sequence ) . *define the wl graph at height of the graph as the graph .the sequence of graphs is called the wl sequence up to height of , where ( i.e. , ) is the original graph and is the graph resulting from the first relabeling , and so on . + * definition 4 ( wl kernel ) . * given a valid kernel and the wl sequence of graph of a pair of graphs and , the wl graph kernel with iterations is defined as where is the number of wl iterations and and are the wl sequences of and , respectively . is referred as _ height of the kernel_. + intuitively , wlk counts the common neighborhood labels in two graphs .hence we have , iff for , where is injective and the sets and are disjoint for all .+ * example & wlk s shortcoming .* we now apply wlk on the real - world examples discussed in [ sec : bgm ] to see if it distinguishes malicious and benign neighborhoods clearly , facilitating accurate detection . for the ease of illustration ,the label compression step is avoided . applying wlk on the ddg for both _geinimi _ and _ yahoo weather _ apps , shown in fig .[ fig : me ] ( b ) , for the node getlatitude , for heights , we get the neighborhood labels getlatitude and getlatitude , writebytes , respectively .clearly , wlk captures the neighborhood around the node getlatitude , incrementally in every iteration of .in fact , neighborhood label for captures that another sensitive node , writebytes lies in the neighborhood of getlatitude , which highlights a possible privacy leak .however , wlk does not capture whether the neighborhood involved in this leak is reached in _ user - aware _ or _ unaware _ context .this is precisely what we address through our cwlk . * input * : + | _ prg _ with set of nodes ( ) , set of edges ( ) and set of node labels ( ) and context for each node ( ) + | number of iterations + * output * : + - contextual wl sequence of height * return * \{ } [ algo : cr ] the goal of cwlk is to capture not only neighborhoods around the node , but also to include the contexts in which each of the neighborhoods is reachable in the prg . to this end, we modify the re - labeling step of wlk so as to accommodate the context of every neighborhood .we refer to this process as _ contextual - relabeling _ and the sequence of graphs thus obtained as _contextual wl sequence_. + * contextual re - labeling . 
* specifically , cwlk performs one additional step in the re - labeling process which is to attach the contexts of every node to its neighborhood label in every iteration .this in effect , indicates the contexts under which a particular neighborhood is reachable .the label thus obtained is referred to as _ contextual neighborhood label_. the contextual relabeling process is presented in detail in algorithm [ algo : cr ] .+ the inputs to the algorithm are prg , and the degree of neighbourhoods to be considered for re - labeling , .the output is the sequence of contextual wl graphs , , where are constructed using the contextual relabeling procedure .+ for the initial iteration , no neighborhood information needs to be considered .hence the contextual neighborhood label for all nodes is obtained by justing prefixing the contexts to the original node labels and compressing the same ( lines 6 - 8,17 - 18 ) . for ,the following procedure is used for contextual re - labeling .firstly , for a node , all of its neighboring nodes are obtained and stored in ( line 10 ) . for each node the neighborhood label up to degree is obtained and stored in multiset ( line 11 ) . , neighborhood label of till degree is concatenated to the sorted value of to obtain the current neighborhood label , ( line 12 ) .finally the current neighborhood label is prefixed with the contexts of node to obtain the string which is then compressed using the function to obtain the contextual neighborhood label , ( lines 13 - 15,17 - 18 ) . +* definition 5 ( cwl kernel ) . * given a valid kernel and the cwl sequence of graph of a pair of graphs and , the contextual wl graph kernel with iterations is defined as where is the number of cwl iterations and and are the cwl sequences of and , respectively .+ intuitively , cwlk counts the common contextual neighborhood labels in two graphs .hence we have , iff for .+ * example .* we now apply cwlk on the apps in our example to show how it overcomes wlk s shortcomings .the contextual neighborhood labels ( without compression ) of the node getlatitude in _ geinimi _ app for heights are , * user - unaware* and * user - unaware* , witebytes , respectively . for the same node in _ yahoo weather_ the contextual neighborhood labels are * user - aware* and * user - aware* , witebytes .hence , it is evident that the cwlk s contextual relabeling provides a means to clearly distinguish malicious prg neighborhoods from the benign ones .this is achieved by complementing the structural information with contextual information .therefore , unlike wlk , cwlk based classification does not detect _ yahoo weather _ as a false positive .this example clearly establishes the suitability of cwlk for the malware detection task .+ we now prove cwlk s positive definiteness and also analyze its time complexity . + * theorem 1 .* cwlk is positive definite . 
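before the proof, the following sketch spells out how the contextual relabelling and the resulting kernel value could be computed from explicit bag-of-features count vectors, as we read algorithm [ algo : cr ] and definition 5. it is not the authors' released implementation ; the graph representation ( networkx digraphs carrying 'label' and 'context' node attributes, as in the earlier sketch ), the string-based label compression and all names are assumptions made for illustration only.

    import networkx as nx
    from collections import Counter

    def cwl_features(g, h, compress):
        """Bag of contextual neighbourhood labels of graph g up to height h.
        At every iteration the WL neighbourhood label of a node is prefixed with
        the node's context before being compressed, so two neighbourhoods can
        only match if they are reachable under the same context."""
        def z(s):  # injective label compression shared via `compress`
            return compress.setdefault(s, str(len(compress)))

        labels = {n: str(d["label"]) for n, d in g.nodes(data=True)}
        feats = Counter()
        for it in range(h + 1):
            new_labels = {}
            for n, d in g.nodes(data=True):
                if it == 0:
                    neigh = labels[n]
                else:  # previous label + sorted labels of successors (successors only, for simplicity)
                    neigh = labels[n] + "|" + ",".join(sorted(labels[m] for m in g.neighbors(n)))
                new_labels[n] = z(str(d["context"]) + "@" + neigh)
                feats[new_labels[n]] += 1
            labels = new_labels
        return feats

    def cwl_kernel(g1, g2, h=2):
        """Kernel value = dot product of the two bag-of-features vectors,
        i.e. the number of matching contextual neighbourhood labels."""
        compress = {}  # compression table shared by both graphs
        f1, f2 = cwl_features(g1, h, compress), cwl_features(g2, h, compress)
        return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

    def toy_ddg(context):  # same attribute names as the earlier sketch
        g = nx.DiGraph()
        g.add_node(0, label="getLatitude", context=context)
        g.add_node(1, label="writeBytes", context=context)
        g.add_edge(0, 1)
        return g

    print(cwl_kernel(toy_ddg("user-unaware"), toy_ddg("user-aware"), h=1))    # 0: contexts differ
    print(cwl_kernel(toy_ddg("user-unaware"), toy_ddg("user-unaware"), h=1))  # > 0

on two toy graphs with identical structure but different contexts the kernel value is zero, whereas plain wlk ( i.e. , omitting the context prefix ) would treat them as identical ; this is exactly the failure mode discussed above.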
+ * proof .* let us define a mapping that counts the occurrences of a particular contextual neighborhood label sequence in ( generated in iterations of algorithm [ algo : cr ] ) .let denote the number of occurrences of in , and analogously for .then , summing over all from the vocabulary , we get latexmath:[\[\begin{gathered } k^{(h)}_{cwl}(g , g ' ) = \sum_{\sigma \in \sigma^*}^ { } k^{(h)}_{\sigma}(g , g ' ) = \sum_{\sigma \in \sigma^ * } \phi^{(h)}_\sigma(g)\phi^{(h)}_\sigma(g')\\ = |\{(\sigma_i(n),\sigma_i(n'))|\sigma_i(n ) = \sigma_i(n ' ) , i \in \{1, ...,h\ } , n \in n , n ' \in n'\}|\\ = where the last equality follows from the fact that is injective .+ as if , the string corresponds to exactly one contextual neighborhood label and defines a kernel with corresponding feature map , such that * complexity . *the runtime complexity of cwlk with iterations on a graph with nodes and edges is ( assuming that ) which is same as that of wlk .more specifically , the neighborhood label computation with sorting operations ( lines 10 - 12 of algorithm [ algo : cr ] ) take time for one iteration and the same for iterations take . the inclusion of context ( lines 6 - 8,13 - 15 ) , does not incur additional overhead as .hence the final time complexity remains as .for a detailed derivation and analysis of the time complexity of wlk , we refer the reader to . +* efficient computation of cwlk on k graphs .* when computing cwlk on graphs to obtain kernel matrix , a nave approach would involve comparisons , resulting a time complexity of . however , as mentioned in , a bag - of - features ( bof ) model based optimization could be performed to arrive the kernel matrix in time .this optimized computation involves the following steps : ( 1 ) a vocabulary of all the contextual neighbourhood labels of nodes across the graphs is obtained in time .this facilitates representing each of the graphs as feature vectors of dimensions .( 2 ) subsequently , kernel matrix can be computed by multiplying these vectors in time .+ in summary , cwlk has the same efficiency as that of wlk and supports explicit feature vector representations of prgs . +* relation to other spatial contextual kernels .* two recently proposed graph kernels and , consider incorporating the spatial context information to neighborhood subgraph features .they define _ context of a subgraph feature as another subgraph appearing in its vicinity_. as mentioned earlier , in our malware detection problem we refer to _ attributes of a node which determine its reachability as its context_. this _ reachability context _ is different from _ spatial context _ discussed in and .hence cwlk is consummately different from these two kernels .[ tab : ds ] .composition of dataset [ cols="^,^,^",options="header " , ] we now compare cwlk based detection with the state - of - the - art android malware detection solutions to study whether contextual prg neighborhoods makes good features for malware detection , through experiment e2 . for cwlk based detection , icfgrepresentation with is used as it offers the best performance .the precision , recall and f - measures of each of these methods are reported in table [ tab : maldetect ] .the following observations are made from the table : * clearly , cwlk based malware detection outperforms all the compared solutions in terms of f - measure .in particular , our approach outperforms the best performing technique ( i.e. , drebin ) by 4.87% f - measure . 
in terms of precision ,our approach outperforms adagio and allix _ et al ._ s methods and is comparable to drebin . in terms of recall ,ours outperforms other methods . * out of the methods compared , drebin does not use both structural and contextual features .adagio and allix _ et al . _s approaches use structural information but not contextual information .this reveals that capturing both these types of information is the reason for our approach s superior performance , reinforcing our findings from experiment e1 ., width=340,height=136 ] we now compare the efficiency of cwlk based detection against that of state - of - the - art malware detectors .it is noted that these techniques use different features and classifiers and hence a wide variation in training and testing durations is expected .the results of this comparison is presented in fig .[ fig : soaeff ] , from which the following observations are made : * drebin being a light - weight non prg based approach it has significantly higher efficiency than all other methods , including ours .* allix _ et al . _s method is similar to ours in terms of using prg based features .hence our efficiency is comparable to this method . *adagio uses nhgk and hi kernel svm in the primal formulation .hence it takes a prohibitively long time for training and testing .our method is far more efficient than adagio . in conclusion ,our method s efficiency is comparable to that of other prg based methods , far better than heavy - weight approaches and inferior to non prg based light - weight methods .* summary . * from experiment e2 , we conclude that when compared to state - of - the - art malware detectors , cwlk produces considerably higher accuracy with a practically tractable efficiency , making it suitable for large - scale real - world malware detection .in this paper , we present cwlk , a novel graph kernel that facilitates detecting malware using prgs . unlike the existing kernels which capture only the security - sensitive neighborhoods in prgs , cwlk captures these neighborhoods along with the context under which they are reachable .this makes cwlk more expressive and in turn more accurate than existing kernels . besides expressiveness , cwlk has two specific advantages : ( 1 ) shows high efficiency , ( 2 ) supports building explicit feature vector representations of prgs .cwlk is evaluated on a large - scale experiment with more than 50,000 android apps , and is found to outperform two state - of - the - art graph kernels and three malware detection techniques in terms of f - measure , while maintaining comparable efficiency . +* future work . * in our future work ,we plan to investigate incorporating contextual information in other sub - structure based graph kernels such as and and subsequently , study their suitability for performing malware detection. + * implementation & dataset .* we provide an efficient implementation of cwlk and information on the datasets used within this work at : https://sites.google.com/site/cwlkernelwe thank the authors of and , for their suggestions that helped us re - implement their methods .zhang , mu , et al .`` semantics - aware android malware classification using weighted contextual api dependency graphs . ''proceedings of the 2014 acm sigsac conference on computer and communications security .acm , 2014 .navarin , n. , sperduti , a. , & tesselli , r. ( 2015 , november ) . extending local features with contextual information in graph kernels . 
in neural information processing ( pp . 271 - 279 ) . springer international publishing . fröhlich , holger , jörg k. wegner , and andreas zell . `` assignment kernels for chemical compounds . '' neural networks , 2005 . ijcnn 05 . proceedings . 2005 ieee international joint conference on , vol . 2 . ieee , 2005 .
in this paper , we propose a novel graph kernel specifically to address a challenging problem in the field of cyber - security , namely , malware detection . previous research has revealed the following : ( 1 ) graph representations of programs are ideally suited for malware detection as they are robust against several attacks , ( 2 ) besides capturing topological neighbourhoods ( i.e. , structural information ) from these graphs it is important to capture the context under which the neighbourhoods are reachable to accurately detect malicious neighbourhoods . we observe that state - of - the - art graph kernels , such as weisfeiler - lehman kernel ( wlk ) capture the structural information well but fail to capture contextual information . to address this , we develop the contextual weisfeiler - lehman kernel ( cwlk ) which is capable of capturing both these types of information . we show that for the malware detection problem , cwlk is more expressive and hence more accurate than wlk while maintaining comparable efficiency . through our large - scale experiments with more than 50,000 real - world android apps , we demonstrate that cwlk outperforms two state - of - the - art graph kernels ( including wlk ) and three malware detection techniques by more than 5.27% and 4.87% f - measure , respectively , while maintaining high efficiency . this high accuracy and efficiency make cwlk suitable for large - scale real - world malware detection . keywords | graph kernels , malware detection , program analysis
let , be independent and identically distributed continuous variables and suppose that their common density has a support defined by the unknown function is called the _frontier_. we address the problem of estimating . in , we introduced a new kind of estimator based upon kernel regression on high power - transformed data .more precisely the estimator of was defined by where and are non random sequences , is a symmetrical probability density with support included in ] , so that . for fixed the method for estimating first consists in solving the following minimization problem then , denoting by the solution of this least square minimization , one considers as an estimate of . the originality andthe difficulty of our paper in contrast with these traditional lines is that here and that we consider as an estimate of so we write .we refer to for other definitions of local polynomials estimators ( i.e. without high power transform ) and to for the estimation of frontier functions under monotonicity assumptions . in orderto get simplified matricial expressions , let us denote by the matrix defined by the lines _ { i=1, ... n} ] defined by similarly , denoting by the diagonal matrix , is the matrix {0\leq j , l\leq k} ] and _ { 0\leq j , l\leq k} ] , we have with ^{p}\right ] ^{1/p}.\ ] ] since , let us focus on ^{p}\right ] ^{1/p}.\ ] ] taking , implies and thus ^{p}\mathbf{1}\left\ { y_{i}<g\left ( x\right ) \left ( 1+\delta\right ) \right\ } \right ] ^{1/p}\\ & \leq\left ( 1+\delta\right ) \left [ \sum_{i=1}^{n}a\left ( x_{i}\right ) \mathbf{1}\left\ { y_{i}<g\left ( x\right ) \left ( 1+\delta\right ) \right\ } \right ] ^{1/p}.\end{aligned}\ ] ] moreover , since , for large enough , , it follows that now , the only difference with the proof of theorem 1 in is that the positive kernel is replaced by the signed kernel of higher order . the case is easily treated in a similar way .here , the following model is simulated : is uniformly distributed on ] such that with .this conditional survival distribution function belongs to the weibull domain of attraction , with extreme value index , see for a review on this topic . in the following , three exponents are used .the case corresponds to the situation where given is uniformly distributed on ] with , we have .\ ] ] besides , introducing the vector , the asymptotic expression of established in proposition [ prop1 ] entails .\ ] ] let us first focus on the first term of the bias expansion ( [ bias ] ) : \\ & = g^{-p}\left ( x\right ) h^{k+1}\beta_{k+1}e_{1}^{t}\mathbf{s}^{-1}c\left [ 1+o_{p}\left ( 1\right ) \right],\end{aligned}\ ] ] and using the expression of in , we have leading to let us now consider the second term in ( [ bias ] ) : expanding we have = nh^{k+1}f\left ( x\right ) c\left [ 1+o_{p}\left ( 1\right ) \right],\ ] ] which entails collecting ( [ bias1 ] ) and ( [ bias2 ] ) , we obtain the announced result first quote a bernstein - frchet inequality adapted to our framework .[ lem8 ] let independent centered random variables such that for each positive integers and , and for some positive constant , we have then , for every , we have the proof is standard . 
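before continuing with the technical lemmas, here is a minimal numerical sketch of the estimator itself as we read it from the definitions above : a kernel-weighted polynomial least squares fit of the power-transformed responses, whose intercept is rescaled by the factor ( power + 1 ) and then raised to the inverse power. the simulated model is the uniform conditional case described above ; the frontier function, the bandwidth, the power, the kernel and all names are illustrative choices, not the exact experimental settings of the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def local_poly_frontier(x0, X, Y, p, h, k=1):
        """Frontier estimate at x0: weighted least squares fit of degree k to the
        power-transformed responses Y**p with Epanechnikov kernel weights, then
        g_hat = ((p + 1) * a0_hat) ** (1 / p), a0_hat being the fitted intercept."""
        u = (X - x0) / h
        w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
        design = np.vander(X - x0, N=k + 1, increasing=True)  # columns (X - x0)**j
        wd = w[:, None] * design
        a_hat = np.linalg.solve(design.T @ wd, design.T @ (w * Y**p))
        return max((p + 1) * a_hat[0], 0.0) ** (1.0 / p)

    # simulated model: X uniform on [0, 1]; given X = x, Y uniform on [0, g(x)]
    g = lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x)  # an arbitrary smooth frontier
    n = 2000
    X = rng.uniform(0, 1, n)
    Y = rng.uniform(0, 1, n) * g(X)

    p, h = 10, 0.1  # large power, small bandwidth
    grid = np.linspace(0.1, 0.9, 9)
    print(np.round([local_poly_frontier(x0, X, Y, p, h) for x0 in grid], 3))
    print(np.round(g(grid), 3))

letting the power grow with the sample size while the bandwidth shrinks is precisely the regime studied in this section ; the values above are only meant as a working starting point.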
note that condition ( [ condbf ] ) is verified under the boundedness assumption , .in the next lemma , an asymptotic expansion of the estimated regression function is introduced .it is known from the local polynomial fitting theory that **** admits the following asymptotic expression,\ ] ] where is the so - called _ equivalent kernel _ , see .the remaining of the proof consists in explicitly writing this equivalent kernel .it is worth noticing that depends exclusively of the design . from anda recurrence argument it is easily checked that where the are continuous functions .the triangular inequality entails and , from lemma [ lem10 ] , if and we get , for sufficiently large , where } | \phi_{j}(s)|$ ] .thus , and replacing in ( [ 4 - 1 ] ) yields and the result is proved .let us consider , for the random variables defined by the next two lemmas are preparing the application of the bernstein - frchet inequality given in lemma [ lem8 ] .first , it is established that the are bounded random variables .second , a control of the conditional variance is provided .since the kernel is bounded and has bounded support , it is easily seen that if and that uniformly in . noticing that and using lemma [ lem10 ] , we get and the result is proved .recalling that we can write ^{2}g^{2p}\left ( x_{i}\right ) \\ & = \frac{h^{2}}{g^{2p}\left ( x\right ) } \frac{1}{2p+1}\frac{1}{f^{2}\left ( x\right ) } \sum_{j , l=0}^{k}u_{j}u_{l}\sum_{i=1}^{n}k_{h}^{2}\left ( x_{i}-x\right ) \left ( \frac{x_{i}-x}{h}\right ) ^{j+l}g^{2p}\left ( x_{i}\right ) \\ & = \frac{h^{2}}{g^{2p}\left ( x\right ) } \frac{1}{2p+1}\frac{1}{f^{2}\left ( x\right ) } \sum_{j , l=0}^{k}u_{j}u_{l}\frac{1}{h^{j+l}}s_{n , j+l}^{\ast}.\end{aligned}\ ] ] now , substituting the asymptotic expression for into the above expression yields ,\ ] ] and the parts of this lemma follow .the next two lemmas are the key tools to prove theorem [ th4 ] .lemma [ lem14 ] is mainly a consequence of the bernstein - frchet inequality given in lemma [ lem8 ] .lemma [ lem15 ] is dedicated to the control of the random variable introduced in ( [ defdelta ] ) . following the asymptotic expression of in lemma [ lem9 ], we can write \geq\varepsilon r_{n}\left ( x\right ) /\mathcal{x}\right ) \\ & = { \mathbb{p}}\left ( \left\vert \sum_{i=1}^{n}a\left ( x_{i}\right ) \left ( \left ( p+1\right ) y_{i}^{p}-g^{p}\left ( x_{i}\right ) \right ) \right\vert \geq\left [ 1+o_{p}\left ( 1\right ) \right ] \varepsilon g^{p}\left ( x\right ) /\mathcal{x}\right).\end{aligned}\ ] ] it is worth noticing that , conditionally to , the sequence can be seen as a deterministic sequence converging to .we now introduce the bounded variables ( see lemma [ lem12 ] ) . 
in accordance with the bernstein - frchet inequality given in lemma [ lem8 ] , and with the expressions ( [ 4 - 5 ] ) and ( [ 4 - 6 ] ) in lemma [ lem13 ] , we write \varepsilon\frac{nh}{p}/\mathcal{x}\right ) \\ & = { \mathbb{p}}\left ( \left\vert \sum_{i=1}^{n}\xi_{i}\right\vert \geq\varepsilon\left [ 1+o_{p}\left ( 1\right ) \right ] \frac{nh}{p\sqrt{{\mathbb{v}}\left ( \sum_{i=1}^{n}\xi_{i}/x\right ) } } \sqrt{{\mathbb{v}}\left ( \sum\nolimits_{i=1}^{n}\xi_{i}/x\right ) } /\mathcal{x}\right ) \\ & \leq2\exp\left\ { -\frac{\left ( \varepsilon\left [ 1+o_{p}\left ( 1\right ) \right ] \frac{nh}{p\sqrt{{\mathbb{v}}\left ( \sum_{i=1}^{n}\xi _ { i}/\mathcal{x}\right ) } } \right ) ^{2}}{4 + 2\varepsilon\left [ 1+o_{p}\left ( 1\right ) \right ] \frac{nh}{p\sqrt{{\mathbb{v}}\left ( \sum_{i=1}^{n}\xi _ { i}/\mathcal{x}\right ) } } { c_2}/\sqrt{{\mathbb{v}}\left ( \sum_{i=1}^{n}\xi_{i}/\mathcal{x}\right ) } } \right\ } \\ & = 2\exp\left\ { -\frac{\left ( \varepsilon\sqrt{\frac{nh}{p}}\sqrt{{c_3}}\left [ 1+o_{p}\left ( 1\right ) \right ] \right ) ^{2}}{4+{c_2}\varepsilon\left [ 1+o_{p}\left ( 1\right ) \right ] \frac{nh}{p}/{\mathbb{v}}\left ( \sum_{i=1}^{n}\xi_{i}/\mathcal{x}\right ) } \right\ } \\ & = 2\exp\left\ { -\frac{\varepsilon^{2}\frac{nh}{p}^{{}}{c_3}\left [ 1+o_{p}\left ( 1\right ) \right ] } { 4+{c_2}{c_3}\varepsilon\left [ 1+o_{p}\left ( 1\right ) \right ] } \right\ } \\ & \leq2\exp\left\ { -{c_4}\frac { nh}{p}\varepsilon^{2}\left [ 1+o_{p}\left ( 1\right ) \right ] \right\},\end{aligned}\ ] ] and the conclusion follows . from inequality ( [ 4 - 7 ] ) , we have \\ & \leq\left ( \sum_{i=1}^{n}\left\vert a\left ( x_{i}\right ) \right\vert \left ( p+1\right ) g^{p}\left ( x_{i}\right ) \right ) \left [ 1+o_{p}\left ( 1\right ) \right ] \\ & = { c_1}\frac{p}{h}g^{p}\left ( x\right ) \left [ 1+o_{p}\left ( 1\right ) \right ] \frac{1}{n}card\left\ { i:\left\vert x_{i}-x\right\vert < h\right\}.\end{aligned}\ ] ] then , the strong law of large numbers entails \left [ 1+o_{p}\left ( 1\right ) \right],\ ] ] and from the continuity of the density , we have .\ ] ] consequently , ,\ ] ] with depending on the design .we thus write where is a positive constant under the conditioning by . as an immediate consequence , we get from and is clear that is bounded conditionally to .d. deprins , l. simar , and h. tulkens . measuring labor efficiency in post offices . in p.pestieau m. marchand and h. tulkens , editors , _ the performance of public enterprises : concepts and measurements_. north holland ed , amsterdam , 1984 .
we present a new method for estimating the frontier of a sample . the estimator is based on a local polynomial regression on the power - transformed data . we assume that the exponent of the transformation goes to infinity while the bandwidth goes to zero . we give conditions on these two parameters to obtain almost complete convergence . the asymptotic conditional bias and variance of the estimator are provided and its good performance is illustrated on some finite sample situations . + * keywords : * local polynomials estimator , power - transform , frontier estimation . * ams 2000 subject classification : * 62g05 , 62g07 , 62g20 .
the ancient greek legend reads that theseus volunteered to enter in the minotaur s labyrinth to kill the monster and liberate athens from periodically providing young women and men in sacrifice .the task was almost impossible to achieve because killing the minotaur was not even half of the problem : getting out of the labyrinth was even more difficult .but ariadne , the guardian of the labyrinth and daughter of the king of crete , provided theseus with a ball of thread , so that he could unroll it going inside and follow it back to get out of the minotaur s labyrinth . which he did .our labyrinth here is the history of the formation of the solar system .we are deep inside the labyrinth , with the earth and the planets formed , but we do nt know how exactly this happened .there are several paths that go into different directions , but what is the one that will bring us out of this labyrinth , the path nature followed to form the earth and the other solar system planets and bodies ?our story reads that once upon a time , it existed an interstellar cloud of gas and dust .then , about 4.6 billions years ago , one cloud fragment became the solar system .what happened to that primordial condensation ?when , why and how did it happen ?answering these questions involves putting together all of the information we have on the present day solar system bodies and micro particles .but this is not enough , and comparing that information with our understanding of the formation process of solar - type stars in our galaxy turns out to be indispensable too .our ariadne s thread for this chapter is the deuterium fractionation , namely the process that enriches the amount of deuterium with respect to hydrogen in molecules .although deuterium atoms are only ( tab .[ tab : definitions ] ) times as abundant as the hydrogen atoms in the universe , its relative abundance in molecules , larger than the elemental d / h abundance in very specific situations , provides a remarkable and almost unique diagnostic tool .analysing the deuterium fractionation in different galactic objects which will eventually form new suns , and in comets , meteorites and small bodies of the solar system is like having in our hands a box of old photos with the imprint of memories , from the very first steps of the solar system formation .the goal of this chapter is trying to understand the message that these photos bring , using our knowledge of the different objects and , in particular , the ariadne s thread of the deuterium fractionation to link them together in a sequence that is the one that followed the solar system formation .the chapter is organised as follows . in [ sec : set - stage ] , we review the mechanisms of the deuterium fractionation in the different environments and set the bases for understanding the language of the different communities involved in the study of the solar system formation .we then briefly review the major steps of the formation process in [ sec : brief - hist ] .the following sections , from [ sec : the - pre - stell ] to [ sec : solar - nebula ] , will review in detail observations and theories of deuterium fractionation in the different objects : pre - stellar cores , protostars , protoplanetary disks , comets and meteorites . 
in [ sec : summaryd ] , we will try to follow back the thread , unrolled in the precedent sections , to understand what happened to the solar system , including the formation of the terrestrial oceans .we will conclude with [ sec : conclusions ] .k ) gas , deuterium fractionation occurs through three basic steps : 1 ) formation of h ions from the interaction of cosmic rays with h and h ; 2 ) formation of h ( hd and d ) from the reaction of h ( h and hd ) with hd ; 3 ) formation of other d - bearing molecules from reactions with h ( hd and d ) in the gas phase ( step 3a ) and on the grain mantles ( step 3b).,width=340 ] deuterium is formed at the birth of the universe with an abundance d / h estimated to be ( tab .[ tab : definitions ] ) and destroyed in the interiors of the stars .therefore , its abundance may vary from place to place : for example , it is lower in regions close to the galactic center , where the star formation is high , than in the solar system neighborhood .if there were no deuterium fractionation , a species with one h - atom , like for example hcn , would have a relative abundance of d - atom over h - atom bearing molecules equal to , namely dcn / hcn . as another important example, water would have hdo / h= .similarly , a species with two hydrogens will have a relative abundance of molecules with two d - atoms proportional to ( e.g. , d / h ) and so on . in practice ,if there were no deuterium fractionation , the abundance of d - bearing molecules would be ridiculously low .but in space things are special enough to make the conditions propitious for deuterium fractionation ( or molecular deuteration or deuterium enrichment ) to occur .this can be summarised in three basic steps , shown in fig .[ fig : d - chemistry - scheme ] : + _ 1 ) formation of h : _ in cold ( k ) molecular gas , the fastest reactions are those involving ions , as neutral - neutral reactions have activation barriers and are generally slower .the first formed molecular ion is h , a product of the cosmic rays ionisation of h and h. + _ 2 ) formation of h , d and d : _ in cold molecular gas , h reacts with hd , the major reservoir of d - atoms , and once every three times the d - atom is transfered from hd to h .the inverse reaction h + h which would form hd has a ( small ) activation barrier so that at low temperatures h/h becomes larger than .similarly , d and d are formed by reactions with hd .+ _ 3 ) formation of other d - bearing molecules : _h , d and d react with other molecules and atoms transferring the d - atoms to all the other species .this can happen directly in the gas phase ( _ step 3a _ in fig .[ fig : d - chemistry - scheme ] ) or on the grain mantles ( _ step 3b _ ) via the d atoms created by the h , d and d dissociative recombination with electrons . in both cases ,the deuterium fractionation depends on the h/h , d/h and d/h abundance ratios . therefore , generally speaking , the basic molecule for the deuterium fractionation is h ( and d and d in extreme conditions ) . the cause for the enhancement of h with respect to h and ,consequently , deuterium fractionation is the larger mass ( equivalent to a higher zero energy ) of h with respect to h , which causes the activation barrier in step 2 .the quantity which governs whether the barrier can be overcome and , consequently , the deuterium fractionation is the temperature : the lower the temperature the larger the deuterium fractionation . 
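as a rough numerical illustration of these two effects, the temperature and the abundance of destruction partners such as co ( whose freeze-out is discussed just below ), the toy steady-state estimate sketched here balances the formation of h2d+ from the reaction of h3+ with hd against its destruction by the back reaction, by dissociative recombination with electrons and by proton transfer to co. the rate coefficients, the abundances and the 230 k endothermicity used below are assumed, order-of-magnitude values ; for quantitative work they should be taken from an astrochemical database such as kida or umist, and the sketch ignores the ortho / para h2 issue and the multiply deuterated isotopologues discussed in this section.

    import numpy as np

    def h2dp_to_h3p_ratio(T, x_HD=3.2e-5, x_CO=1e-4, x_e=1e-8):
        """Toy steady-state estimate of the n(H2D+)/n(H3+) abundance ratio.
        Formation:  H3+ + HD -> H2D+ + H2                    (rate k_f)
        Losses:     back reaction with H2 (endothermic, ~230 K),
                    dissociative recombination with electrons,
                    proton transfer to CO.
        Rate coefficients are assumed, order-of-magnitude values; x_* are
        abundances relative to H2."""
        k_f = 3.5e-10                            # cm3 s-1
        k_r = k_f * np.exp(-230.0 / T)           # endothermic back reaction
        k_e = 6.0e-8 * (T / 300.0) ** -0.5       # dissociative recombination
        k_CO = 2.0e-9                            # proton transfer to CO
        return (k_f * x_HD) / (k_r + k_e * x_e + k_CO * x_CO)

    for T in (10.0, 20.0, 50.0):
        print(f"T = {T:4.1f} K  H2D+/H3+ ~ {h2dp_to_h3p_ratio(T):.4f} (undepleted CO)  "
              f"{h2dp_to_h3p_ratio(T, x_CO=1e-6):.4f} (CO depleted)")

even this crude estimate reproduces the qualitative trend : the ratio rises sharply as the temperature drops from about 50 k to 10 k, and rises further when co is removed from the gas phase.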
besides , if abundant neutrals and important destruction partners of h isotopologues , such as o and co , deplete from the gas - phase ( for example because of the freeze - out onto dust grains in cold and dense regions ; [ sec : the - pre - stell ] and [ sec : the - prot - disk ] ) , the deuterium fraction further increases .this is due to the fact that the destruction rates of all the h isotopologues drop , while the formation rate of the deuterated species increases because of the enhanced h abundance .there is another factor that strongly affects the deuterium fractionation : the ortho - to - para abundance ratio of h molecules .in fact , if this ratio is larger than , the internal energy of the ortho h molecules ( whose lowest energy level is k ) can be enough to overcome the h + h hd + h barrier and limit the h/h ratio . in general, it is believed that ortho and para h are formed on the surface of dust grains with a statistical ratio of 3:1 .proton - exchange reactions then convert ortho- into para- h , especially at the low temperatures of dense cloud cores , where the ortho - to - para h ratio is predicted to drop below 10 .so far we have discussed the deuterium fractionation routes in cold ( k ) gas .different routes occur in warm ( k ) and hot ( k ) gas . in warm gas( k ) , the d - atoms can be transferred to molecules by ch , whose activation barrier of the reaction with h is larger than that of h . at even higher temperatures , od transfers d - atoms from hd to water molecules . in these last two cases ,ch and od play the role of h at lower temperatures . at water can directly exchange d and h atoms with h .finally , some molecules , notably water , are synthesised on the surfaces of interstellar and interplanetary grains by addition and/or substitution of h and d atoms ( [ sec : fract - water ] ) . in this case , the deuterium fractionation depends on the d / h ratio of the atomic gas .as discussed previously in this section , the enhanced abundance of h ( d and d ) also implies an increased atomic d / h ratio in the gas ( _ step 3b _ in fig .[ fig : d - chemistry - scheme ] ) , as deuterium atoms are formed upon dissociative recombination of the h deuterated isotopologues , whereas h atoms maintain an about constant density of , determined by the balance between surface formation and cosmic - ray dissociation of h molecules . given the particular role of deuterated water in understanding the solar system history , we summarize the three major processes ( reported in the literature ) that cause the water deuteration .they are schematically shown in fig .[ fig : d - water ] . + _1 ) formation and deuteration on the surfaces of cold grains : _ in cold molecular clouds and star forming regions , water is mostly formed by h and d atoms addition to o , o and o on the grain surfaces , as demonstrated by several laboratory experiments . 
in this case , therefore , the key parameter governing the water deuteration is the atomic d / h ratio in the gas , which depends on the h/h ratio as discussed in the previous section .+ & + a & + alma & + cso & + hso & + jcmt & + iom & + ism & + noema & + psn & + soc & + som & + vlt & + + psc & + class 0 & + hot corino & + ppd & + + jfc & + & + occ & + & + & + ccs & + idps & + & + + + + + + + + & * definition * & * d / h references * + & cosmic elemental deuterium abundance & -900 0.8 a + psn d / h & deuterium abundance in the psn & -860 1.0 b + vsmow & vienna standard mean ocean water ( refers to evaporated ocean waters ) & 0 7.1 c + _ 2 ) hydrogen - deuterium exchange in the gas phase : _ as for any other molecule , d - atoms can be transferred from h ( d and d ) to h in cold gas ( sec .[ sec : the - chem - proc ] ) , and more efficiently through the hd + oh hdo + h and hd + oh hdo + h reactions . in warm gas , it is in principle possible to have direct exchange between hd and h to form hdo .however , being a neutral - neutral reaction , it possesses an activation barrier , which makes this route very slow at k. on the contrary , for temperatures high enough ( k ) , the oh + hd and od + h reactions can form hdo . based on modelling, demonstrated that the hd + o od + h followed by the od + h hdo + h reaction is indeed a major route for the hdo formation in warm gas .+ _ 3 ) isotopic exchange between solid h and hdo with other solid species : _ laboratory experiments have shown that d and h atoms can be exchanged between water ice and other molecules trapped in the ice , like for example ch . very likely , the exchange occurs during the ice sublimation phase , with the re - organisation of the crystal .similarly , h - d exchange in ice can be promoted by photolysis .note that this mechanism not only can alter the hdo / h abundance ratio in the ice , but also it can pass d - atoms to organic matter trapped in the ice , enriching it of deuterium .this chapter has the ambition to bring together researchers from different communities .one of the disavantages , which we aim to overcome here , is that these different communities do not always speak the same language .table [ tab : definitions ] is a a sort of dictionary which will help the reader to translate the chapter in her / his own language .in addition , several acronyms used throughout this chapter are also listed in the table . with this , we are ready now to start our voyage through the different objects .according to the widely accepted scenario , the five major phases of solar type star formation are ( fig . [fig : sec3-fig1 ] ) : * * pre - stellar cores . *these are the starting point of solar - type star formation . in these `` small clouds '' with evidence of contraction motions , contrarily to starless cores, matter slowly accumulates toward the center , causing the increase of the density while the temperature is kept low ( ) .atoms and molecules in the gas - phase freeze - out onto the cold surfaces of the sub - micron dust grains , forming the so called icy grain mantles .this is the moment when the deuterium fractionation is most effective : the frozen molecules , including water , acquire a high deuterium fraction . ** protostars . *the collapse starts , the gravitational energy is converted into radiation and the envelope around the central object , the future star , warms up . 
when and where the temperature reaches the mantle sublimation temperature ( 100120 k ) , in the so - called hot corinos , the molecules in the mantles are injected into the gas - phase , where they are observed via their rotational lines .complex organic molecules , precursors of prebiotic species , are also detected at this stage . ** protoplanetary disks . *the envelope dissipates with time and eventually only a circumstellar , protoplanetary disk remains .in the hot regions , close to the central object or the disk surface , some molecules , notably water , can be d - enriched via neutral - neutral reactions . in the cold regions , in the midplane , where the vast majority of matter resides , the molecules formed in the protostellar phase freeze - out again onto the grain mantles , where part of the ice from the pre - stellar phase may still be present .the deuterium fractionation process becomes again important .* * planetesimals formation .* the process of `` conservation and heritage '' begins .the sub - micron dust grains coagulate into larger rocks , called planetesimals , the seeds of the future planets , comets and asteroids .some of the icy grain mantles are likely preserved while the grains glue together .at least part of the previous chemical history may be conserved in the building blocks of the forming planetary system rocky bodies and eventually passed as an heritage to the planets .however , migration and diffusion may scramble the original distribution of the d - enriched material . ** planet formation . * the last phase of rocky planet formation is characterized by giant impacts between planet embryos , which , in the case of the solar system , resulted in the formation of the moon and earth .giant planets may migrate , inducing a scattering of the small bodies all over the protoplanetary disk .oceans are formed on the young earth and , maybe in other rocky planets .the leftovers of the process become comets and asteroids . in the solar system, their fragments continuously rain on earth releasing the heritage stored in the primitive d - enriched ices .life takes over sometime around 2 billion years after the earth and moon formation . in the rest of the chapter, we will discuss each of these steps , the measured deuterium fractionation and the processes responsible for that .stars form within fragments of molecular clouds , the so - called dense cores , produced by the combined action of gravity , magnetic fields and turbulence. some of the starless dense cores can be transient entities and diffuse back into the parent cloud , while others ( the pre - stellar cores ) will dynamically evolve until the formation of one or more planetary systems .it is therefore important to gather kinematics information to identify pre - stellar cores , which represent the initial conditions in the process of star and planet formation .their structure and physical characteristics depend on the density and temperature of the surrounding cloud , i.e. on the external pressure ( tan et al . 
, this volume ) .the well - studied pre - stellar cores in nearby molecular clouds have sizes of ,000au , similar to the oort cloud , masses of a few solar masses and visual extinctions .they are centrally concentrated , with central densities larger than 10 h cm , central temperatures close to 7k and with evidence of subsonic gravitational contraction as well as gas accretion from the surrounding cloud .the esa herschel satellite has detected water vapour for the first time toward a pre - stellar core and unveiled contraction motions in the central ,000au : large amounts of water ( a few jupiter masses , mainly in ice form ) are transported toward the future solar - type star and its potential planetary system .a schematic summary of the main chemical and physical characteristics of pre - stellar cores is shown in figure [ fig : psc - fig1 ] .the upper left panel shows one of the best studied objects : l1544 in the taurus molecular cloud complex , 140pc away .the largest white contour roughly indicates the size of the dense core and the outer edge represents the transition region between l1544 and the surrounding molecular cloud , where the extinction drops below and photochemistry becomes important .this is where water ice copiously form on the surface of dust grains and low - levels of water deuteration are taking place . within the _ dark - cloud zone _, where the carbon is mostly locked in co , gas - phase chemistry is regulated by ion - molecule reactions ( fig.[fig : psc - fig1 ] , top right panel ) . within the central ,000au , the volume density becomes higher than a few ( see bottom panel ) and the freeze - out timescale ( / ) becomes shorter than a few .this is the _ deuteration zone _, where the freeze - out of abundant neutrals such as co and o , the main destruction partners of h isotopologues , favour the formation of deuterated molecules ( see sect .[ sec : the - chem - proc ] ) .deuteration is one of the main chemical process at work , making deuterated species the best tools to study the physical structure and dynamics of the earliest phases of star formation .pre - stellar cores chemically stand out among other starless dense cores , as they show the largest deuterium fractions and co depletions .the largest d - fractions have been observed in n , ammonia , and formaldehyde ( d / h 0.010.1 ; * ? ? ?hcn , hnc and hco show somewhat lower deuterations ( between 0.01 and 0.1 ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ? recently detected doubly deuterated cyclopropenylidene ( c - c ) , finding a c - c/c - c abundance ratio of about 0.02 .unfortunately , no measurement of water deuteration is available yet .pre - stellar cores are the strongest emitters of the ground state rotational transition of ortho - h and the only objects where para - d has been detected .it is interesting to note that the strength of the ortho - h line does not correlate with the amount of deuterium fraction found in other molecules .this is probably due to variations of the h ortho - to - para ratio in different environments , an important clue in the investigation of how external conditions affect the chemical and physical properties of pre - stellar cores ( see also * ? ? 
?pre - stellar cores are `` deuterium fractionation factories '' .the reason for this is twofold : firstly , they are very cold ( with typical gas and dust temperatures between 7 and 13k ) .this implies a one - way direction of the reaction h + hd h + h , the starting point of the whole process of molecular deuteration ( see fig.[fig : d - chemistry - scheme ] , step 2 and 3a ) .secondly , a large fraction of neutral heavy species such as co and o freeze - out onto dust grains . as mentioned in [ sec : the - chem - proc ] , the disappearance of neutrals from the gas phase implies less destruction events for h and its deuterated forms , with consequent increase of not just h but also d and d .this simple combination of low temperatures and the tendency for molecules to stick on icy mantles on top of dust grains , can easily explain the observed deuterium fraction measured in pre - stellar cores .the case of formaldehyde ( h ) deuteration requires an extra note , as not all the data can be explained by gas - phase models including freeze - out . as discussed in , another source of deuterationis needed .a promising mechanism is the chemical processing of icy mantles ( surface chemistry ) , coupled with partial desorption of surface molecules upon formation . in particular , once co freezes out onto the surface of dust grains , it can either be stored in the ice mantles , or be `` attacked '' by reactive elements , in particular atomic hydrogen . in the latter case ,co is first transformed into hco , then formaldehyde and eventually into methanol ( ch ) . in pre - stellar cores ,deuterium atoms are also abundant because of the dissociative recombination of the abundant h , d and possibly d ( see fig.[fig : d - chemistry - scheme ] , step 3b ) .chemical models predict d / h between 0.3 and 0.9 in the inner zones of pre - stellar cores with large co freeze - out , implying a large deuteration of formaldehyde and methanol on the surface of dust grains ( see dust grain cartoons overlaid on the bottom panel of fig.[fig : psc - fig1 ] ) .thus , the measured large deuteration of gas - phase formaldehyde in pre - stellar cores ( and possibly methanol , although this still awaits for observational evidence ) can be better understood with the contribution of surface chemistry , as a fraction of surface molecules can desorb upon formation ( thanks to their formation energy ) .class 0 sources are the youngest protostars . their luminosity is powered by the gravitational energy , namely the material falling towards the central object , accreting it at a rate of m/yr .they last for a short period , yr ( see also dunham et al .this volume ) .the central object , the future star , is totally obscured by the collapsing envelope , whose sizes are au , as the pre - stellar cores ( [ sec : the - pre - stell ] ) . it is not clear whether a disk exists at this stage , as the original magnetic field frozen on the infalling matter tends to inhibit its formation ( z .- y . li et al ., this volume ) . on the contrary ,powerful outflows of supersonic matter are one of the notable characteristics of these objects ( frank et al . ,this volume ) . in class0 protostars , the density of the envelope increases with decreasing distance from the centre ( ) , as well as the temperature . 
from a chemical point of view ,the envelope is approximatively divided in two regions , delimitated by the ice sublimation temperature ( 100120 k ) : a cold outer envelope , where molecules are more or less frozen onto the grain mantles , and an inner envelope , called hot corino , where the mantles , built up during the pre - stellar core phase ( [ sec : the - pre - stell ] ) , sublimate .this transitions occurs at distances from the center between 10 and 100 au , depending on the source luminosity .relevant to this chapter , in the hot corinos , the species formed at the pre - stellar core epoch are injected into the gas phase , bringing memory of their origin . for different reasons ,the outer envelope and the hot corino have molecules highly enriched in deuterium : in the first case because of the low temperatures and co depletion ( [ sec : the - chem - proc ] and [ sec : the - pre - stell ] ) , in the second case because of the inheritance of the pre - stellar ices .class 0 protostars are the objects where the highest deuterium fractionation has been detected so far and the first where the extreme deuteration , called in literature super - deuteration , has been discovered : doubly and even triply deuterated forms with d / h enhancements with respect to the elemental d / h abundance ( tab . [tab : definitions ] ) of up to 13 orders of magnitude .the first and the vast majority of measurements were obtained with single dish observations , so that they can not disentangle the outer envelope and the hot corino , if not indirectly by modelling the line emission in some cases . the following species with more than two atoms of deuterium have been detected ( see for a list of singly deuterated species ) : formaldehyde , methanol , ammonia , hydrogen sulphide ( d : * ? ? ?* ) , thioformaldehyde ( d : * ? ? ?* ) and water . in a few cases ,interferometric observations provided us with measurements of water deuterium fractionation in the hot corinos .finally , recent observations have detected deuterated species in molecular outflows . the situation is graphically summarized in fig . [fig : protostar-1 ] .o , h and ch suggests that the increasing deuteration reflects the formation time of the species on the ices .references : h : , , , , ; h : ; nh : , ; h : ; h : ; ; ch : . , width=302 ]we note that the methanol deuteration depends on the bond energy of the functional group which the hydrogen is located . in fact , the abundance ratio ch / ch is larger than , whereas it should be 3 if the d - atoms were statistically distributed . to explain this, it has been invoked that ch could be selectively destroyed in the gas - phase , that h d exchange reactions in solid state could contribute to reduce ch , or , finally , that the ch under abundance is due to d h exchange with water in the solid state .the reason for this over - deuteration of the methyl group with respect to the hydroxyl group may help to understand the origin of the deuterium fractionation in the different functional groups of the insoluble organic matter ( [ sec : meteorites ] ; see fig .[ fig : iom - fig3 ] ) .this point will be discussed further in [ sec : summaryd ] .the measure of the abundance of doubly deuterated species can also help to understand the formation / destruction routes of the species .[ fig : protostar-2 ] shows the d / d abundance ratio of the molecules in fig .[ fig : protostar-1 ] . 
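the counting argument behind the statistical value of 3 quoted above for the methyl and hydroxyl groups , and the purely statistical relation invoked in the next paragraph for the doubly deuterated species of fig . [ fig : protostar-2 ] , can be checked with a few lines of code . the sketch below ( python ) is our own illustration and assumes that d atoms are distributed binomially over equivalent hydrogen sites ; the atomic d / h accretion ratio of 0.3 is only indicative of the model predictions quoted for the inner zones of pre - stellar cores .

```python
# illustrative combinatorics (our own sketch, assumed numbers): purely
# statistical expectations for deuterated isotopologues formed by H/D
# addition on grain surfaces.
from math import comb

# (1) a single D placed at random on methanol's four hydrogen sites
#     (three methyl, one hydroxyl) gives CH2DOH/CH3OD = 3.
methyl_sites, hydroxyl_sites = 3, 1
print("statistical CH2DOH/CH3OD =", methyl_sites / hydroxyl_sites)

# (2) binomial H/D addition with an atomic D/H accretion ratio of 0.3
#     (indicative of the quoted model predictions for inner core zones).
atomic_d_over_h = 0.3
p = atomic_d_over_h / (1.0 + atomic_d_over_h)   # probability that an added atom is D

def isotopologue_fraction(n_sites, n_d):
    """probability of ending up with n_d deuterium atoms on n_sites sites."""
    return comb(n_sites, n_d) * p**n_d * (1.0 - p)**(n_sites - n_d)

hdco_over_h2co = isotopologue_fraction(2, 1) / isotopologue_fraction(2, 0)
d2co_over_h2co = isotopologue_fraction(2, 2) / isotopologue_fraction(2, 0)
print("HDCO/H2CO ~ %.2f   D2CO/H2CO ~ %.3f" % (hdco_over_h2co, d2co_over_h2co))

# (3) the same binomial logic gives, for any species with two equivalent
#     hydrogens, XD2/XHD = (1/4) * (XHD/XH2): the statistical reference
#     against which the observed D2/D ratios are compared.
xd2_over_xhd = isotopologue_fraction(2, 2) / isotopologue_fraction(2, 1)
xhd_over_xh2 = isotopologue_fraction(2, 1) / isotopologue_fraction(2, 0)
print("(XD2/XHD) / (XHD/XH2) =", xd2_over_xhd / xhd_over_xh2)   # 0.25
```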
for species forming on the grain surfaces ,if the d atoms were purely statistically distributed , namely just proportional to the d / h ratio , then it would hold : d - species / d-species = 4 ( d - species / h - species) . as shown in fig .[ fig : protostar-2 ] , this is not the case for h , nh , h and h .a plausible explanation is that the d and d-bearing forms of these species are formed at different times on the grain surfaces : the larger the deuterium fraction the younger the species ( see 5.3 ) . .adapted from .,width=302 ] in the context of this chapter , the deuteration of water deserves particular attention .being water very difficult to observe with ground based telescopes ( van dishoeck et al ., this volume ) , measurements of hdo / h exist only towards four class 0 sources , and they are , unfortunately , even in disagrement .table [ tab : protostars ] summarizes the situation ..measurement of the hdo / h ratio in class 0 sources . in the third columnwe report whether the measure refers to the outer envelope ( out ) or the hot corino ( hc ) .references : 1 : .4 : .5 : .6 : .[ cols="<,^ , > , > " , ] overall , the figure and the table , together with fig .[ fig : iom - fig1 ] , raise a number of questions , often found in the literature too , which we address here .why is the d / h ratio systematically lower in solar system bodies than in pre- and proto- stellar objects ? when and how did this change occur ?_ + may be here the answer is , after all , simple .all the solar system bodies examined in this chapter ( meteorites and comets ) were originally at distances less than 20 au from the sun .the d / h measurements of all pre- and protostellar objects examined in this chapter ( pre - stellar cores , protostars and protoplanetary disks ) refer to distances larger than that , where the temperatures are lower and , consequently , the deuteration is expected to be much larger ( [ sec : the - chem - proc ] ) . therefore , this systematic difference may tell us that the psn did not undergo a global scale re - mixing of the material from the outer ( au ) to the inner regions .there are exceptions , though , represented by the `` hot spots '' in meteorites , which , on the contrary , may be the only representatives of this large scale re - mixing of the material in the psn .+ _ 2 . why does organic material have a systematically higher d / h value than water , regardless the object and evolutionary phase ? _ + the study of the deuterium fractionation in the pre - stellar cores ( [ sec : the - pre - stell ] ) and protostars ( [ sec : prot - phase ] ) has taught us that the formation of water and organics in different epochs is likely the reason why they have a different d / h ratio ( fig .[ fig : protostar-1 ] ) .water ices form first , when the density is relatively low ( cm ) and co not depleted yet , so that the h/h ratio is moderate ( [ sec : the - chem - proc ] ) .organics form at a later stage , when the cloud is denser ( cm ) and co ( and other neutrals ) condenses on the grain surfaces , making possible a large enhancement of the h/h .can this also explain , _ mutatis mutandis _, the different deuterium fractionation of water and organics measured in ccs and comets ? 
after all ,if the psn cooled down from a hot phase , the condensation of volatiles would follow a similar sequence : oxygen / water first , when the temperature is higher , and carbon / organics later , when the temperature is lower .this would lead to deuterium fractions lower in water than in organics , regardless whether the synthesis of water and organic was on the gas phase or on the grain surfaces . obviously , this is at the moment speculative but a road to explore , in our opinion .we emphasise that this `` different epochs formation '' hypothesis is fully compatible with the theory described in [ sec : fract - organ ] of the origin of the organics deuteration from h .why do most comets exhibit higher d / h water values , about a factor 2 , than ccs and idps ? _ + a possible answer is that comets are formed at distances , au , larger than those where the ccs originate , au ( tab .[ tab : definitions ] ; see also 9 ) .a little more puzzling , though , is why the large majority of idps have d / h values lower than the cometary water , although at least 50% idps are predicted to be cometary fragments ( nesvorny et al .the new herschel observations of jfc indicate indeed lower d / h values than those found in occ ( tab . [tab : dcomets ] ; [ sec : the - comets ] ) , so that the observed idps d / h values may be consistent with the cometary fragments theory . in conclusion ,the d / h distribution of ccs and idps is a powerful diagnostic to probe the distribution of their origin in the psn .+ _ 4 . why does the d / h distribution of the idps , which are thought to be fragments of comets ,mimic the ccs bulk d / h distribution ? and why is it asymmetric , namely with a shoulder towards the large d / h values ? _ + as written above , the d / h ratio distribution depends on the distribution of the original distance from where ccs and idps come from or , in other words , their parental bodies .ccs are likely fragments of asteroids from the main belt and a good fraction is of cometary origin .idps are fragments from jfc and main belt asteroids .their similar d / h distribution strongly suggests that the mixture of the two classes of objects is roughly similar , which argue for a difference between the two , more in terms of sizes than in origin .besides , the asymmetric distribution testifies that a majority of ccs and idps originate from closer to the sun parent bodies ( see also * ? ? ? * ) .why is the d / h ratio distribution of cc organics and water so different ? _ + there are in principle two possibilities : water and organics formed in different locations at the same time and then were mixed together or , on the contrary , they were formed at the same heliocentric distance but at ( slightly ) different epochs .the discussion in point 2 would favor the latter hypothesis , although this is speculative at the moment .if this is true , the d / h distribution potentially provides us with their history , namely when each of the two components formed . 
again , being cc organics more d - enriched than water , they were formed at later stages .there are several differences between the proto planetary disks ( ppds ) and proto solar nebula ( psn ) models .a first and major one is that psn models assume a dense and hot phase for the solar ppd .for example , temperatures higher than 1000 k persist for yr at 3 au and yr at 1 au in the yang et al .( 2013 ) model .generally , models of ppds around solar - type stars , on the contrary , never predict such high temperatures at those distances .second , ppd models consider complex chemical networks with some horizontal and vertical turbulence which modifies , though not substantially , the chemical composition across the disk . on the contrary , in psn models , chemistry networks are generally very simplified , whereas the turbulence plays a major role in the final d - enrichment across the disk . however , since the density adopted in the psn models is very high , more complex chemical networks have a limited impact .the above mentioned psn models do not explicitly compute the dust particle coagulation and migration ( if not as diffusion ) , and gas - grains decoupling whereas ( some ) disk models do ( see turner et al . andtesti et al ., this volume ) . in our opinion ,the most significant difference is the first one and the urgent question to answer is : what is the good description of the psn ? a highly turbulent , hot and diffusive disk or a cooler and likely calmer disk , as the ones we see around t tauri stars ? or something else ?the community of psn models have reasons to think that the psn disk was once hot .a major one is the measurement of depletion of moderately volatile elements ( more volatile than silicon ) in chondritic meteorites ( palme et al .1988 ) , which can be explained by dust coagulation during the cooling of an initially hot ( k , the sublimation temperature of silicates ) disk in the terrestrial planet formation region ( au ; * ? ? ?* ; * ? ? ?* ; * ? ? ?the explanation , though , assumes that the heating of the dust at k was global in extent , ordered and systematic .alternatively , it is also possible that it was highly localised and , in this case , the hot initial nebula would be not necessary .for example , the so - called x - wind models assume that the hot processing occurs much closer to the star and then the matter is deposited outwards by the early solar wind . in addition , the detection of crystalline silicates in comets has been also taken as a prove that the solar system passed through a hot phase .for sure , t tau disks do not show the high temperatures ( k at au ) assumed by the psn model .these temperatures are predicted in the midplane very close ( au ) to the central star or in the high atmosphere of the disk , but always at distances lower than fractions of au .even the most recent models of very young and embedded ppds , with or without gravitational instabilities , predict temperatures much lower than k. one can ask whether the hot phase of the psn could in fact be the hot corino phase observed in class 0 sources ( [ sec : prot - phase ] ) . although the presently available facilities do not allow to probe regions of a few aus , the very rough extrapolation of the temperature profile predicted for the envelope of iras16293 - 2422 ( the prototype of class 0 sources ) by gives k at a few aus .the question is , therefore : should we not compare the psn with the hot corino models rather than the protoplanetary disks models ? 
at present , the hot corino models are very much focused only on the au regions , which can be observationally probed , and only consider the gas composition .should we not start , then , considering what happens in the very innermost regions of the class 0 sources and study the dust fate too ?if so , the link with the deuterium fractionation that we observe in the hot corino phase may become much more relevant in the construction of realistic psn models .last but not least , the present psn models are based on the transport and mixing of material with different initial d / h water because of the diffusion , responsible for the angular momentum dispersion . in class 0sources , though , the dispersion of the angular momentum is thought to be mainly due to the powerful jets and outflows and not by the diffusion of inward / outward matter . moreover , during the class 0 phase , material from the cold protostellar envelope continues to rain down onto the central region and the accreting disk ( z .- y .li et al . , this volume ) , replenishing them by highly deuterated material.the resulting d / h gradient across the psn may , therefore , be different than that predicted by current theories . several reviews discussing the origin of the terrestrial water are present in the literature , so that we will here just summarise the points emphasising the open issues . let first remind that the water budget on earth is itself subject of debate .in fact , while the lithosphere budget is relatively easy to measure ( m ; * ? ? ?* ) , the water contained in the mantle , which contains by far the largest mass of our planet , is extremely difficult to measure and indirect probes , usually noble gases , are used for that ( e.g. , * ? ? ?* ) , with associated larger error bars .it is even more difficult to evaluate the water content of the early earth , which was likely more volatile - rich than at present .the most recent estimates give m , namely 20 times larger than the value of the lithosphere water .if the water mantle is the dominant water reservoir , as it seems to be , then the d / h value of the terrestrial oceans may be misleading if geochemical processes can alter it .evidently , measurements of the mantle water d / h is even more difficult .the last estimates suggest a value slightly lower than the terrestrial oceans water . 
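since the comparison between astronomical d / h ratios and the values measured in terrestrial , cometary and meteoritic samples is usually made through the delta - d notation relative to vsmow ( tab . [ tab : definitions ] ) , a small conversion utility is handy . the sketch below ( python ) is ours : the vsmow reference ratio of 1.5576e-4 and the comet - like example value are assumptions used only for illustration .

```python
# small conversion sketch (assumed reference value): absolute D/H ratio
# <-> delta-D in permil relative to VSMOW, the notation used for ocean,
# mantle and meteoritic water.
R_VSMOW = 1.5576e-4   # assumed D/H of Vienna Standard Mean Ocean Water

def delta_d(d_over_h):
    """delta-D (permil) of a sample with the given absolute D/H."""
    return (d_over_h / R_VSMOW - 1.0) * 1000.0

def d_over_h(delta_d_permil):
    """inverse conversion: delta-D (permil) back to an absolute D/H."""
    return R_VSMOW * (1.0 + delta_d_permil / 1000.0)

print(delta_d(R_VSMOW))      # 0 permil: ocean water, by construction
print(d_over_h(-860.0))      # ~2.2e-5: the protosolar delta-D quoted in tab. [tab:definitions]
print(delta_d(3.0e-4))       # ~ +930 permil: an assumed, comet-like water value
```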
in this chapter ,given this uncertainty on the earth bulk water content and d / h , we adopted as reference the evaporated ocean water d / h , the vswom ( tab .[ tab : definitions ] ) .the `` problem '' of the origin of the terrestrial oceans rises because , if earth was formed by planetesimals at au heliocentric distance , they would have been `` dry '' and no water should exist on earth .one theory , called `` late veneer '' , assumes that water was brought after earth formed by , for example , comets .this theory is based on the assumptions that the d / h cometary water is the same than the earth water d / h , but , based on observations towards comets ( [ sec : the - comets ] ) , this assumption is probably wrong .the second theory , based on the work by , assumes that a fraction of the planetesimals that built the earth came from more distant ( 24 au ) regions and were , therefore , `` wet '' .dynamical simulations of the early solar system evolution have add support to this theory , challenging at the same time the idea that the flux of late veneer comets and asteroides could have been large enough to make up the amount of water on earth .moreover , the d / h value measured in ccs ( [ sec : meteorites ] ) adds support to this theory .the recent findings by would argue for a large contribution of a group of ccs , the ci type . in summary ,the origin of terrestrial water is still a source of intense debate .in this chapter we have established a link between the various phases in the process of solar - type star and planet formation and our solar system .this link has been metaphorically called the ariadne s thread and it is represented by the deuterium fractionation process .deuterium fractionation is active everywhere in time and space , from pre - stellar cores , to protostellar envelopes and hot corinos , to protoplanetary disks .its past activity can be witnessed by us today in comets , carbonaceous chondrites and interplanetary dust particles . trying to understand and connect this process in the various phases of star and planet formation ,while stretching the thread to our solar system , has opened new horizons in the quest of our origins .the ultimate goal is to chemically and physically connect the various phases and identify the particular route taken by our solar system .the steps toward this goal are of course many , but much will be learned just starting with the following important points : 1 .bridge the gap between pre - stellar cores and protoplanetary disks and understand how different initial conditions affect the physical and chemical structure and evolution of protoplanetary disks .2 . study the reprocessing of material during the early stages of protoplanetary disks ; in particular , can self - gravitating accretion disks , thought to be in present in the hot corino / protostellar phase , help in understanding some of the observed chemical and physical features of more evolved protoplanetary disks and our solar system ? 3 .study the reprocessing of material ( including dust coagulation and chemical evolution of trapped ices ) throughout the protoplanetary disk evolution ; in particular , which conditions do favour the production of the organic material observed in pristine bodies of our solar - system ?4 . compare psn models with protoplanetary disk models and include the main physical and chemical processes , in particular dust coagulation and ice mantle evolution .the future is bright thanks to the great instruments available now and in the near future ( e.g. 
, alma and noema , the esa rosetta mission ) and the advances in techniques used for the analysis of meteoritic material . trying to fill the d / h plot shown in fig.[fig : dh - summary ] with new observations is of course one priority , but this needs to proceed hand in hand with developments in theoretical chemistry and more laboratory work . for sure ,one lesson has been learned from this interdisciplinary work : our studies of the solar system and our studies of star / planet forming regions represent two treasures which can not be kept in two different coffers .it is now time to work together for a full exploitation of such treasures and eventually understand our astrochemical heritage . +* acknowledgments . *c. ceccarelli acknowledges the financial support from the french agence nationale pour la recherche ( anr ) ( project forcoms , contract anr-08-blan-0225 ) and the french spatial agency cnes .p. caselli acknowledges the financial support of the european research council ( erc ; project pals 320620 ) and of successive rolling grants awarded by the uk science and technology funding council .o. mousis acknowledges support from cnes .s. pizzarello acknowledges support through the years from the nasa exobiology and origins of the solar system programs .d. semenov acknowledges support by the _deutsche forschungsgemeinschaft _ through spp 1385 : `` the first ten million years of the solar system - a planetary materials approach '' ( se 1962/1 - 1 and 1 - 2 ) .this research made use of nasa s astrophysics data system .we wish to thank a. morbidelli , c. alexander and l. bonal for a critical reading of the manuscript .we also thank an anonymous referee and the editor , whose comments helped to improve the chapter clarity .
positron emission tomography ( pet ) [ 1 ] is currently one of the most perspective techniques in the field of medical imaging .pet is based on the fact that the electron and positron annihilate and their mass is converted to energy in the form of two gamma quanta flying in the opposite directions .the two gamma quanta registered in coincidence define a line referred to as line of response ( lor ) .the image of distribution of the radionuclide is obtained from the high statistics sample of reconstructed lors . in time of flight pet ( tof - pet ) systems[ 2 , 3 ] , the applied detectors measure the difference in the arrival time of the two gamma rays which enables to shorten significantly a range along the lor used for the reconstruction of the image .currently all commercial pet devices use inorganic scintillator materials , usually lso or lyso crystals , as radiation detectors .these are characterized by relatively long rise- and decay times , of the order of tens of nanoseconds .the j - pet collaboration investigates a possibility of construction of a pet scanner from plastic scintillators which would allow for simultaneous imaging of the whole human body .the j - pet chamber is built out from long strips forming the cylinder [ 4 , 5 ] .light signals from each strip are converted into electrical signals by two photomultipliers ( pm ) placed at opposite edges .it should be noted that the better the time resolutions of the detection system the better is the quality of a reconstructed images . in this paperwe will investigate the tof resolution of a novel j - pet scanner and we will show a simple method to improve the results by applying the compressive sensing theory . in the followingwe define the time resolution and present shortly the method of signal normalization based on compressive sensing theory .then we describe an experimental setup used for signal registration and present results of improving tof resolution in 30 cm long plastic scintillator strip , read out on both sides by the hamamatsu r4998 photomultipliers .signals from the photomultipliers were sampled in 50 ps steps using the lecroy signal data analyzer 6000a .in the following we will be interested in determination of moment of interaction of gamma quantum with a strip . the interaction moment ( )is given by : where and are the arrival times to the pm1(2 ) , respectively , is the length of whole strip , and is the effective speed of the light in used scintillator . in the recent work [ 6 ] ,the speed of the light in the scintillator was estimated to 12.6 cm / ns . in order to determine the resolution ( standard deviation ) of thit determination an indirect method based on the estimation of the resolution of time difference ( )will be provided .we assume for the sake of simplicity that in eq .1 is known exactly . since the time difference , we have and the resolution of based on eq .[ thit ] may be expressed as : which implies that the resolution of the determination of interaction moment ( ) is twice better than the resolution of the time difference .a necessary data to carry out the research have been acquired by a single module of the j - pet detector [ 6 ] .the scheme of experimental setup is presented in fig .the 30 cm long strip was connected on two sides to the photomultipliers ( pms ) .the radioactive source was moved from the first to the second end in steps of 6 mm . 
at each position ,about 10 000 pairs of signals from pm1 and pm2 were registered in coincidence with reference detector .the signals were sampled using the scope with a probing interval of 50 ps .examples of two signals registered at pm1 and pm2 are shown in fig .2 with blue ( red ) colors , respectively , for the case when the scintillator was irradiated at distance of 7 cm from pm2 ( 23 cm from pm1 ) . , scaledwidth=70.0% ] in the final , multi - modular devices with hundreds of photomultipliers probing with scopes will not be possible . therefore , a multi - threshold sampling method to generate samples of a pet event waveform with respect to four user - defined amplitudes was proposed .an electronic system for probing these signals in a voltage domain was developed and successfully tested [ 7 ] .based on the signals registered via scope , we simulate a four - level measurement with sampling in the voltage domain at 50 , 100 , 200 and 300 mv , indicated by four black horizontal lines in fig .it should be stressed that due to the time walk effect the resolution determined when applying the lowest threshold ( 50 mv ) is better with respect to the resolution obtained at the highest level ( 300 mv ) . therefore the simplest way to define the start of each pulse, times and , is to use the information from registration time at the lowest amplitude level , marked with vertical dashed lines in fig .2 . from this one may easily estimate the resolution of and therefore the resolution of thit ( see eq .3 ) . , scaledwidth=70.0% ] since the shape and amplitude of signals are predominantly related with the hit position , further improvement of the resolution may be provided by the analysis of the full time signals . according to the theory of compressive sensing ( cs ) [ 8 , 9 ] , a signal that is sparse in some domain can be recovered based on far fewer samples than required by the nyquist sampling theorem . in recent articles[ 10 , 11 , 12 ] we have proposed a novel signal recovery scheme based on the compress sensing method and the statistical analysis that fits to the signal processing scenario in j - pet devices . under this theoryonly a recovery of a sparse or compressible signals is possible . in articles[ 10 , 11 ] the sparse representation of signals was provided by the principal component analysis ( pca ) decomposition .we will not describe all the steps of signal processing here , but we just state that the recovery of full time signals based on eight samples is very accurate . for further details about signal recovery scheme in the j - pet framework the interested readeris referred to ref .[ 10 , 11 , 12 ] .the application of cs theory enable to take an advantage from fully sampled signals and open an area for completely new algorithms for the estimation of the values of times and ( see fig .2 ) . in the following we will use the recovered signals to provide the signal normalization . due to the low detection efficiency of the plastic scintillators ( and interaction of gamma quanta predominantly via compton effect ) , low number of photons reach the photomultipliers and the charge , as well as amplitude , of signals is subject to a large variations .however , the shapes of the signals are highly related with the position of the interaction . 
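to make the timing chain of eqs . ( 1)-(3 ) and the threshold sampling concrete , the sketch below ( python ) simulates a toy version of the measurement . it is our own illustration : the pulse shape , the single - photomultiplier jitter and the time offsets are assumptions , the effective light speed of 12.6 cm / ns is the value quoted in sec . 2.1 , and the threshold - crossing times are obtained by linear interpolation between 50 ps samples , in analogy with the multi - threshold sampling described above .

```python
# toy illustration (assumed pulse shape, jitter and thresholds): extraction of
# threshold-crossing times from 50 ps samples, reconstruction of the hit
# position, and the relation sigma(t_hit) = sigma(delta_t)/2 of eq. (3).
import numpy as np

rng = np.random.default_rng(1)
V_EFF = 12.6            # cm/ns, effective light speed quoted in sec. 2.1
L_STRIP = 30.0          # cm
DT_SAMPLE = 0.05        # ns (50 ps probing interval)
THRESHOLD = 50.0        # mV, the lowest of the four applied levels
T0 = 5.0                # ns, arbitrary time offset of the interaction
Z_TRUE = -8.0           # cm from the strip centre (7 cm from PM2), fixed as in the scans
SIGMA_J = 0.1           # ns, assumed single-photomultiplier time jitter

def pulse(t, t_start, amplitude=400.0, tau=2.0):
    """assumed analytic pulse shape in mV: zero before t_start, fast rise,
    slow decay; only a stand-in for the measured waveforms."""
    x = np.clip(t - t_start, 0.0, None)
    return amplitude * (x / tau) * np.exp(1.0 - x / tau)

def crossing_time(t, v, threshold):
    """first upward crossing of the threshold, linearly interpolated between samples."""
    i = np.nonzero(v >= threshold)[0][0]
    f = (threshold - v[i - 1]) / (v[i] - v[i - 1])
    return t[i - 1] + f * DT_SAMPLE

t_axis = np.arange(0.0, 20.0, DT_SAMPLE)
dts, thits = [], []
for _ in range(5000):
    t1 = T0 + (L_STRIP / 2 - Z_TRUE) / V_EFF + rng.normal(0.0, SIGMA_J)
    t2 = T0 + (L_STRIP / 2 + Z_TRUE) / V_EFF + rng.normal(0.0, SIGMA_J)
    c1 = crossing_time(t_axis, pulse(t_axis, t1), THRESHOLD)
    c2 = crossing_time(t_axis, pulse(t_axis, t2), THRESHOLD)
    dts.append(c1 - c2)
    thits.append(0.5 * (c1 + c2) - L_STRIP / (2.0 * V_EFF))  # eq. (1), up to T0 and a constant walk offset

print("reconstructed position ~ %.1f cm from the centre" % (-V_EFF * np.mean(dts) / 2.0))
print("sigma(delta_t) = %.0f ps,  sigma(t_hit) = %.0f ps" %
      (1e3 * np.std(dts), 1e3 * np.std(thits)))   # ratio ~2, as in eq. (3)
```

since the toy pulses all share the same amplitude , there is no time walk and no charge spread in this sketch ; in the real data the charge varies strongly from event to event , which is precisely what the normalization procedure introduced next tries to tame .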
in order to improve the time resolution , a method of signals normalizationis proposed which permits to decrease the smearing of signals charge .the procedure of signal normalization is as follows .consider data sets representing charges of signals at pm1 and pm2 gathered for positions .the mean values of the charges at positions along the strip ( ) at pm1 , pm2 will be denoted by and , respectively . furthermore , the standard deviation of charges at each position along the strip ( ) at pm1 , pm2 will be denoted by and , respectively .suppose that new pair of signals and have been recovered based on time samples registered at four amplitudes levels at pm1 and pm2 , respectively .the charges of the signals , and , are calculated as an integrations of the and functions , respectively .the proposed normalization procedure qualifies a new measurement , represented by and , to the position : next , the recovered signals and are normalized according to the formula : where and denote the normalized signals .in the first step of the analysis the charge distributions of signals were investigated .experimental results based on the signals registered along the scintillator strip are presented in fig .3 . mean values of charges at pm1(2 ) are marked with solid blue ( red ) curves .as expected , the curves are symmetrical with respect to the center of the scintillator strip ( position of 15 cm ) .the distributions of +/- and +/- at pm1(2 ) along the strip are marked with dashed blue ( red ) curves . from fig .3 one may observe that and have the same trend as and , respectively , and are in the range from about 6 to 20 pc ., scaledwidth=70.0% ] figure 4 shows an example of the normalization of signals registered at pm1 at fixed position 5 cm from pm1 .the three corresponding signals registered at second photomultiplier ( pm2 ) are not shown here , but were used during the normalization process to estimate the position of irradiation ( see eq .the left part of fig .4 shows a three randomly selected raw signals registered via scope .the right part of fig .4 presents the same signals after the normalization procedure provided according to the description in sec .. the same colors of the signals on the left and right part in fig . 4 indicate a corresponding pair of signals before and after normalization . as it is seen from left part of fig .4 the shapes of the three signals are similar but the charges differ .however , an estimated positions according to eq . 4 were found to be very close , and therefore the charges of the normalized signals were also very similar ( see right part of fig .4 ) . [ cols="^,^ " , ] the normalization method was verified using signals from all the irradiation positions .each pair of signals was recovered via compressed sensing method based on the information from four amplitude levels ( see fig .2 ) and next normalized according to the description in sec . 2.4 . in order to compare the time resolutions before and after normalization process ,the normalized signals , and , were sampled in the voltage domain at four levels from 50 to 300 mv ( see fig .2 ) . for each position and at each levelthe distribution of time difference was calculated . in fig .5 and 6 the resulting resolutions ( ) are presented as a function of irradiated position , determined when applying the lowest threshold ( 50 mv ) and the highest threshold ( 300 mv ) , respectively . in fig . 
5 and 6 ,the resolutions obtained based on the raw and normalized signals are indicated with blue and red squares , respectively .as expected , due to the time walk effect the resolution determined when applying threshold at 50 mv is better with respect to the resolution obtained at 300 mv. however , the influence of time walk effect is more visible in the case of raw signals . from fig . 5 and 6 one can infer that the time resolution is almost independent of the position of irradiation .an average resolution of the time difference along the strip at the lowest tested amplitude level ( fig .5 ) was determined to be 172 ps and 160 ps for the raw and normalized signals , respectively .this corresponds to the improvement of the resolution of the moment of the interaction from 86 ps to about 80 ps ., scaledwidth=70.0% ] , scaledwidth=70.0% ]in this paper the concept of signal normalization in a novel j - pet scanner was introduced .j - pet device is based on plastic scintillators and therefore is a promising solution in view of the tof resolution . in a related works [ 10 , 11 ] it was shown that compressive sensing theory can be successfully applied to the problem of signal recovery in a j - pet scanner .the information from fully recovered signals was utilized in order to provide the normalization of the signals .it was shown that with fully recovered signals a better time resolution of j - pet scanner is achieved ; the resolution of the moment of the interaction was improved from about 86 ps to 80 ps .it should be stressed that different approaches for utilizing the recovered information from compressive sensing theory may be considered and the studies are in progress .we acknowledge technical and administrative support of t. gucwa - ry , a. heczko , m. kajetanowicz , g. konopka - cupia , j. majewski , w. migda , a. misiak , and the financial support by the polish national center for development and research through grant innotech - k1/in1/64/159174/ncbr/12 , the foundation for polish science through mpd programme , the eu and mshe grant no .poig.02.03.00 - 161 00 - 013/09 , doctus - the lesser poland phd scholarship fund .99 j. l. humm et al . , from pet detectors to pet scanners eur .imaging 30 ( 2003 ) 1574 j. s. karp et al ., benefit of time - of - flight in pet : experimental and clinical results j. nucl . med .49 ( 2008 ) 462 m. conti , state of the art and challenges of time - of - flight pet phys . med . 25 ( 2009 ) 1 p. moskal et al . , novel detector systems for the positron emission tomography , bio - algorithms and med - systems 7 ( 2011 ) 73 ; [ arxiv:1305.5187 [ physics.med-ph ] ] .p. moskal et al . , a novel tof - pet detector based on organic scintillators radiotheraphy and oncology 110 ( 2014 ) s69 .p. moskal et al ., nuclear instruments and methods in physics research section a 764 ( 2014 ) 317 .m. palka et al . , a novel method based solely on field programmable gate array ( fpga ) units enabling measurement of time and charge of analog signals in positron emission tomography ( pet ) bio - algorithms and med - systems 10 ( 2014 ) 41 ; [ arxiv:1311.6127 [ physics.ins-det ] ] .e. candes , j. romberg , t. tao , ieee transaction on information theory 52 ( 2006 ) 489 .d. donoho , ieee transaction on information theory 52 ( 2006 ) 1289 .l. raczyski et al . , acta phys .b proceed .suppl . 6 ( 2013 ) 1121 , [ arxiv:1310.1612 [ physics.med-ph ] ] .l. raczyski et al . , nuclear instruments and methods in physics research section a , 786 ( 2015 ) 105 . l. raczyski et al . 
, nuclear instruments and methods in physics research section a , 764 ( 2014 ) 186 .
nowadays , in positron emission tomography ( pet ) systems , time of flight information is used to improve the image reconstruction process . in time of flight pet ( tof - pet ) , fast detectors measure the difference in the arrival times of the two gamma rays with a precision that makes it possible to significantly shorten the range along the line - of - response ( lor ) where the annihilation occurred . in the new concept , called the j - pet scanner , gamma rays are detected in plastic scintillators . in a single strip of the j - pet system , time values are obtained by probing the signals in the amplitude domain . owing to compressed sensing theory , information about the shape and amplitude of the signals is recovered . in this paper we demonstrate that , based on the recovered signal parameters , a better signal normalization may be performed in order to improve the tof resolution . the procedure was tested on a large sample of data registered by a dedicated detection setup enabling sampling of signals with 50 ps intervals . the experimental setup provided irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta . ` compressed sensing ` , ` positron emission tomography ` , ` time - of - flight `
many problems in science and engineering are described by nonlinear differential equations whose solutions are too complicated to be properly resolved .the problem of predicting the evolution of systems that are not well resolved has been addressed by the present authors and others in .nothing can be predicted without some knowledge about the unresolved ( `` subgrid '' ) degrees of freedom . in the optimal prediction methodsjust cited it is assumed that one possesses , as one often does , prior statistical information about the system in the form of an invariant measure ; what is sought is a mean solution with respect to this prior measure , compatible with the information initially at hand as well as with the limitations on the computing power one can bring to bear .the simplest version of this idea , markovian optimal prediction , generates an approximating system of ordinary differential equations and works well for a time that depends on the degree of underresolution and on the uncertainty in the data .this version is optimal in the class of markovian approximations , but it eventually exhibits errors , because the influence of partial initial data on the distribution of the solutions weakens in time if the system is ergodic , and this loss of information is not captured in full , see . to obtain an accurate approximation of a subset of variables without solving the full problem requires the addition of a memory " term , and the resulting prediction scheme becomes a generalized langevin equation , similar to those in irreversible statistical mechanics .we present a general formalism for separating resolved and unresolved degrees of freedom , analogous to the nonlinear projection formalism of zwanzig but using the language of probability theory .we find a zero - th order approximate solution of the equation for the orthogonal unresolved dynamics and find its statistics by monte - carlo integration ; we use the results to construct a prediction scheme with memory .we apply the scheme to a simple model problem . 
in the conclusionwe indicate how the construction is generalized to more complicated problems , and the new perspectives it opens for prediction as well as for irreversible statistical mechanics .consider a problem of the form where and are -dimensional vectors ( may be infinite ) , with components and ; is time .when is finite ( [ eq : system ] ) is a system of ordinary differential equations .our goal is to calculate the average values of components of , without calculating all the components ; the average is over all the values that the missing , unresolved , components may assume ; our prior information allows us to make statistical statements about these missing components .we denote the phase space ( the vector space in which resides ) by ; in classical statistical physics this phase space is the dimensional space of coordinates and momenta , where is the number of particles ; the at time are then entries of the vector .a solution of equation ( [ eq : system ] ) is defined when an initial value is given ; to each initial condition corresponds a trajectory , ; the initial value is emphasized by this notation in view of its key role in what follows .a phase variable is a function on ; may be a vector , whose components are labeled as .a phase variable varies when its argument varies in time , so that a phase variable whose value at was acquires at time the value .it is useful to examine the evolution of in a more abstract setting : introduce an evolution operator for phase variables by the relation differentiation of ( [ eq : pullback ] ) with respect to time yields where , the liouvillian , is the linear differential operator .thus the phase variable can be calculated in either of two ways : ( i ) for each integrate to time the equations of motion with initial conditions and evaluate the phase variable at the point ; or ( ii ) solve the equation it is convenient to write ; we do not inquire here as to the conditions under which this symbolic notation can be taken literally .the significant thing about equation ( [ liouville ] ) is that it is linear .one can check from the definitions that , for any function , and . in this notation, the symbol standing alone refers to the data at ; the time dependence is described by the exponential .suppose that the initial data are drawn from a probability distribution ; each initial datum gives rise to a solution of equation ( [ eq : system ] ) and the measure evolves into a measure at time .the evolution of is defined by the conditions for all sufficiently smooth phase variables .we assume that the measure is invariant under the flow ( [ eq : system ] ) : .many systems have invariant measures , in particular , any hamiltonian system with hamiltonian leaves invariant the canonical measure with density , where is a normalization constant and is the variance of the samples , which in physics is the temperature .given a phase variable , we denote by ] , which makes them elements of the hilbert space ] , where both and are phase variables ; it satisfies : 1 . ] is linear in : = \alpha\,{{\mathbb e}}[v_1|u ] + \beta\,{{\mathbb e}}[v_2|u ] .\nonumber\ ] ] 3 . ] is the orthogonal projection of on the space of functions of , and we can write ] , , are normalized coordinate functions .a short manipulation converts this sum into where , i.e. 
, is the correlation of the solution of the orthogonal dynamics equation that starts from with .substitution into the integral produces the term generalized langevin equation ( [ eq : langevin ] ) expresses the rate of change of the phase variable as a sum of terms that depend on and on the orthogonal dynamics .these expressions can not be used directly for approximation when the equations are nonlinear .indeed , the evaluation of a term such as =e[le^{tl}u|e^{tl}u] ] by {{{\hat{y}}}={{\mathbb e}}[{{\varphi}}(x , t)|{{\hat{x}}}]} ] ; at later time , in our approximation , {{{\hat{y}}}=\hat{{\varphi}}(x , t)} ] in as the mean of the right - hand - side of equations ( [ eq : system ] ) and as the fluctuation around that mean .this mean varies less than the fluctuations as spans , and we make it vary even less by anchoring it to the initial data for the specific problem we wish to solve , i.e. , first approximate {{{\hat{y}}}=\hat{{\varphi}}(x , t)} ] and then further fix at the specific initial value for which we want to solve the approximation of the system ( [ eq : system ] ) .let be the function {{{\hat{y}}}={{\mathbb e}}[\hat{{\varphi}}(x , t)|{{\hat{x}}}]} ] , become where ] , , and the expected value in is evaluated over all choices of in dimensions drawn from the invariant distribution with density . in figure 1we display some numerical results for in the example , with data .we show the truth " found by averaging many solutions of the full system , the galerkin approximation that sets all unknown functions to zero , the first - order optimal prediction , and the solution of equation ( [ example2 ] ) .the solutions are shown in a rather favorable case ( they look less striking if one exchanges for example the values of ) .at first sight , the results in figure 1 are interesting but not overwhelming : the cost of evaluating is comparable to the cost of evaluating the `` truth '' by monte - carlo , and the gain is not obvious .one may note however , that once has been evaluated , the cost of rerunning the calculation with any other initial data is negligible .this is the significant fact : is evaluated at equilibrium " , i.e. , with all components of , the initial data , sampled from the invariant measure ; does not depend on any specific initial value . as we shall show elsewhere, the analogous statement is true for an accurate solution of the orthogonal dynamics .once the heavy work of determining memory functions has been done , the solution of a specific problem is plain sailing . at equilibrium, one can bring to bear the panoply of scaling methods and equilibrium statistical mechanics .one can say that the mori - zwanzig formalism makes possible the use of universal " ( non problem specific ) results to solve specific problems . in some of the applications we have in mind , the large " ( -dimensional ) problems are partial differential equations , and then imperfections in the evaluation of memory terms are immaterial as long as the rate of convergence of finite - dimensional approximations is enhanced ( see ) . it is taken for granted in the physics literature that the memory kernels are autocorrelations of the noise " ( i.e. 
, the orthogonal dynamics ) .this is true also in the example we have presented here .however , it should be obvious from the discussion that this is an artifact of the use of , the linear projection , and that the full truth is more complicated and interesting .furthermore , in the physics literature one usually deals with memory by separating fast `` and ' ' slow " variables and assuming that the orthogonal dynamics generated by fast " variables generate noise " with delta - function memory ; note in contrast that in our problem the unresolved and the resolved variables have exactly the same time scale . finally , the heuristic arguments of the present paper will be replaced in general by a systematic approximation of the langevin equation , including a systematic evaluation of the orthogonal dynamics , as we will explain in future publications .we would like to thank prof .barenblatt , dr .e. chorin , mr .e. ingerman , dr . a. kast , mr .k. lin , for helpful discussions and comments , and mr .p. okunev for help in programming .this work was supported in part by by the applied mathematical sciences subprogram of the office of energy research of the us department of energy under contract de - ac03 - 76-sf00098 , and in part by the national science foundation under grant dms98 - 14631 .r.k . was supported by the israel science foundation founded by the israel academy of sciences and humanities .99 j bell , aj chorin and w crutchfield , stochastic optimal prediction with application to averaged euler equations , proc .fluid mech .lin ( ed ) , pingtung , taiwan , ( 2000 ) , pp . 1 - 13 .
optimal prediction methods compensate for a lack of resolution in the numerical solution of complex problems through the use of prior statistical information. we know from previous work that in the presence of strong underresolution a good approximation needs a non-markovian "memory", determined by an equation for the "orthogonal", i.e., unresolved, dynamics. we present a simple approximation of the orthogonal dynamics, which involves an ansatz and a monte-carlo evaluation of autocorrelations. the analysis provides a new understanding of the fluctuation-dissipation formulas of statistical physics. an example is given.
controlling uncertain cyber - physical systems ( cps ) or networked control systems ( ncs ) subjected to limited information is a challenging task . generally in cps or ncs ,multiple physical systems are interconnected and exchanged their local information through a digital network . due toshared nature of communicating network the continuous or periodic transmission of information causes a large bandwidth requirement .apart from bandwidth requirement , most of the cyber - physical systems are powered by dc battery , so efficient use of power is essential . it is observed that transmitting data over the communicating network has a proportional relation with the power consumption . this trade off motivates a large number of researchers to continue their research on ncs with minimal sensing and actuation - , - .recently an event - triggered based control technique is proposed by - , to reduce the information requirement for realizing a stabilizing control law . in event - triggered control , sensing at system end and actuation at controller end happens only when a pre - specified event condition is violated .this event - condition mostly depends on system s current states or outputs .the primary shortcoming of continuous - time event - triggered control is that it requires a continuous monitoring of event condition . in - ,heemels et al .proposes an event - triggering technique where event - condition is monitored periodically . to avoid continuous or periodic monitoring ,self - triggered control technique is reported in - where the next event occurring instant is computed analytically based on the state of previous instant .maximizing inter - event time is the key aim of the event - triggered or self - triggered control in order to reduce the total transmission requirements .a girad , proposes a new event - triggering mechanism named as dynamic event - triggering to achieve larger inter - event time with respect to previous approach .the above discussion says the efficacy of aperiodic sensing and actuation over the continuous or periodic one in the context of ncs . in ncs, uncertainty is mainly considered in communicating network in the form of time - delay , data - packet loss in between transreceiving process , , . on the other hand the unmodeled dynamics , time - varying system parameters , external disturbances are the primary sources of system uncertainty .the main shortcoming of the classical event - triggered system lies in the fact that one must know the exact model of the plant apriori .a system with an uncertain model is a more realistic scenario and has far greater significance .however , there are open problems of designing a control law and triggering conditions to deal with system uncertainties . to deal with parametric uncertainty , f. lin et.al .proposes a continuous - time robust control technique where control input is generated by solving an equivalent optimal control problem - .the optimal control problem is formulated based on the nominal or auxiliary dynamics by minimizing a quadratic cost - functional which depends on the upper bound of uncertainty .the similar concepts is extended for nonlinear continuous system in , , where a non - quadratic cost - functional is considered . however , this framework for discrete - time uncertain system is not reported .recently e. garcia et.al . have proposed an event - triggered based discrete - time robust control technique for ncs - . 
to realize the robust control law their prior assumptions are that the physical system is affected by matched uncertainty ( which is briefly discussed in section [ sec2 ] ) andthe uncertainty is only in system s state matrix . butconsidering mismatched uncertainty in both state and input matrices is more realistic control problem .this is due to the fact that , the existence of stabilizing control law can be guaranteed for matched uncertainty but difficult for mismatched system .stabilization of mismatched uncertain system with communication constraint is a challenging task .this motivates us to formulate the present problem .+ in this paper a novel discrete - time robust control technique is proposed for ncs , where physical systems are inter - connected through an unreliable communication link . due to unavailability of networkthe robust control law is designed using minimal state information .the designed control law acts on the physical system where the state model is affected by mismatched parametric uncertainty . for mathematical simplicitywe are avoiding other uncertainties like external disturbances , noises .the communication unreliability is resolved by considering an event - triggered control technique , where control input is computed and updated only when an event - condition is satisfied . to derive the discrete - time robust control input a virtual nominal system and a modified cost - functionalis defined .solution of the optimal control problem helps to design the stabilizing control input for uncertain system .the input - to - state stability ( iss ) technique is used to derive the event - triggering condition and as well as to ensure the stability of closed loop system .the contributions of this paper are summarized as follows : ( i ) : : present paper proposes a robust control framework for discrete time linear system where state matrix consists of mismatched uncertainty .the periodic robust control law is derived by formulating an equivalent optimal control problem .optimal control problem is solved for an virtual system with a quadratic cost - functional which depends on the upper bound on uncertainty .( ii ) : : the virtual nominal dynamics have two control inputs and .the concept of virtual input is used to derive the existence of stabilizing control input to tackle mismatched uncertainty .the proposed robust control law ensures asymptotic convergence of uncertain closed loop system .( iii ) : : an event - triggered robust control technique is proposed for discrete - time uncertain system , where controller is not collocated with the system and connected thorough a communication network . the aim of this control law is to achieve robustness against parameter variation with event based communication and control .the event condition is derived from the iss based stability criteria .( iv ) : : it is shown that some of the existing results of matched system is a special case of the proposed results - .a comparative study is carried out between periodic control over the event - triggered control on the numerical example .the notation is used to denote the euclidean norm of a vector . here denotes the dimensional euclidean real space and is a set of all real matrices . and denote the all possible set of positive real numbers and non - negative integers . 
, and represent the negative definiteness , transpose and inverse of matrix , respectively .symbol represents an identity matrix with appropriate dimensions and time implies .symbols and denote the minimum and maximum eigenvalue of symmetric matrix respectively . through out this paper following definitions are used to derived the theoretical results .[ def23 ] a system is globally iss if it satisfies with each input and each initial condition .the functions and are and functions respectively .[ def34 ] a discrete - time system , whose origin is an equilibrium point if .a positive function is an iss lyapunov function for that system if there exist class functions and a class function for all by satisfying the following conditions - ._ system description : _ a discrete - time uncertain linear system is described by the state equation in the form where is the state and is the control input .the matrices , are the nominal , known constant matrices .the unknown matrices is used to represent the system uncertainty due to bounded variation of .the uncertain parameter vector belongs to a predefined bounded set .generally system uncertainties are classified as matched and mismatched uncertainty .system ( [ sys2 ] ) suffers through the matched uncertainty if the uncertain matrix satisfy the following equality where is the upper bound of uncertainty . in other words, is in the range space of nominal input matrix . for mismatched case equality ( [ une3 ] ) does not holds . for simplification, uncertainty can be decomposed in matched and mismatched component such as where is matched and is the mismatched one .the matrix denotes the pseudo inverse of matrix .the perturbation is upper bounded by a known matrices and defined as where , the scalar is a design parameter . 
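the matched/mismatched split of the uncertainty described above is easy to compute explicitly: the matched part is the projection of the uncertainty matrix onto the range space of the nominal input matrix, formed with the pseudo-inverse, and the remainder is the mismatched part. the matrices below are placeholders, not the paper's numerical example.

```python
import numpy as np

# placeholder nominal input matrix and uncertainty (not the paper's example)
B = np.array([[0.0], [1.0]])
dA = np.array([[0.05, 0.02],
               [0.10, -0.03]])

B_pinv = np.linalg.pinv(B)              # pseudo-inverse of the input matrix
dA_matched = B @ B_pinv @ dA            # component in the range space of B
dA_mismatched = dA - dA_matched         # remainder: the mismatched component

print(dA_matched)
print(dA_mismatched)
# the uncertainty is matched exactly when the mismatched component vanishes
print(np.allclose(dA_mismatched, 0.0))
```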
to stabilize ( [ sys2 ] ) ,it is essential to formulate a robust control problem as discussed below .design a state feedback controller law such that the uncertain closed loop system ( [ sys2 ] ) with the nominal dynamics is asymptotically stable for all .+ in order to stabilize ( [ sys2 ] ) , the robust controller gain is designed through an optimal control approach .the essential idea is to design an optimal control law for a virtual nominal dynamics which minimizes a modified cost - functional , .the cost - functional is called modified as it consists with the upper - bound of uncertainty .an extra term is added with ( [ syss1 ] ) to define virtual system ( [ nomi1 ] ) .the derived optimal input for virtual system is also a robust input for original uncertain system .the virtual dynamics and cost - functional for ( [ sys2 ] ) are given bellow : where , , and the scaler is a design parameter .here is the stabilizing control input and is an virtual input .the input is called virtual since it is not used directly to stabilize ( [ sys2 ] ) .but it helps indirectly to design .the usefulness of l in the context of event - triggered control is discussed in section [ sec3 ] .the robust control law for ( [ sys2 ] ) is designed by minimizing ( [ cos1 ] ) for the virtual system ( [ nomi1 ] ) .the results are stated in the form a theorem .[ thr12 ] suppose there exists a scalar and positive definite solution of equation ( [ ri1 ] ) with moreover the controller gains and are computed as these gains and are the robust solution of ( [ sys2 ] ) , if it satisfies following inequality : where .the proof of this theorem is given in appendix [ a1 ] .the result in theorem [ thr12 ] does not consider any communication constraint in realising the control law .so we formulate a robust control problem for an uncertain system with event - triggered control input .the block diagram of proposed robust control technique is shown in figure [ fi : bld ] .it has three primary parts namely , system block , controller block and a unreliable communicating network between system and controller .the states of system are periodically measured by the sensor which is collocated with the system .the sensor is connected with the controller through a communication network .an event - monitoring unit verifies a state - dependent event condition periodically and transmits state information to the controller only when the event - condition is satisfied . the robust controller gain with eventual state information , which is received from uncertain system is used to generate the event - triggered control law to stabilize ( [ sys2 ] ) .here denotes the latest event - triggering instant and the control input is updated at aperiodic discrete - time instant .a zero - order - hold ( zoh ) is used to hold the last transmitted control input until the next input is transmitted . herethe actuator is collocated with the system and actuating control law is assumed to change instantly with the transmission of input . 
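before turning to the event-triggering analysis, note that the gain computation of theorem [ thr12 ] has a familiar skeleton. the modified riccati equation and gain formulas are not reproduced above, so the sketch below falls back on the standard discrete-time lqr machinery: a fixed-point iteration of the usual discrete algebraic riccati equation followed by the corresponding state-feedback gain. the model and weights are placeholders, and the equation actually used in the paper also carries the uncertainty bound and the virtual-input weighting.

```python
import numpy as np

def dare_fixed_point(A, B, Q, R, n_iter=2000, tol=1e-12):
    """iterate P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q to a fixed point."""
    P = np.eye(A.shape[0])
    for _ in range(n_iter):
        gain_term = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_new = A.T @ P @ A - A.T @ P @ B @ gain_term + Q
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new
    return P

# placeholder nominal model and weights (not the paper's numerical example)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

P = dare_fixed_point(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # state feedback u(k) = -K x(k)
print(K)
```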
for simplicity, this paper does not consider any time - delay between sensing , computation and actuation instant .but in real - practice there must be some delay .this delay will effect the system analysis and as well as in event - condition .+ a discrete - time linear uncertain system ( [ sys2 ] ) with event - triggered input is written as from , this can be modelled as the variable is named as measurement error .it is used to represent the eventual state information in the form * problem statement : * design a feedback control law to stabilize the uncertain discrete - time event - triggered system ( [ syse1 ] ) such that the closed loop system is iss with respect to its measurement error .+ _ proposed solution : _ this problem is solved in two different steps .firstly , the controller is designed by adopting the optimal control and secondly an event - triggering rule is derived to make ( [ syse1 ] ) iss .the design procedure of controller gains , based on theorems [ thr12 ] , is already discussed in section [ sec2 ] .the event - triggering law is derived from the definition [ def23 ] & [ def34 ] assuming an iss lyapunov function .the design procedure of event - triggering condition is discussed elaborately in section [ sec3 ] .this section discusses the design procedure of event - triggering condition and stability proof of ( [ syse1 ] ) under presence of bounded parametric uncertainty .the results are stated in the form of a theorem .[ th1 ] let be a solution of the riccati equation ( [ ri1 ] ) for a scalar and satisfy the following inequalities where controller gains and are computed by ( [ g1 ] ) , ( [ g2 ] ) . the event - triggered control law ( [ ine ] ) ensures the iss of ( [ syse1 ] ) if the input is updated through the following triggering - condition moreover the design parameter is explicitly defined as where scalar and positive matrix . to prove the above theorem some intermediate resultsare stated in the form of following lemmas .the proof of these lemmas are omitted due to limitation of pages .[ lem12 ] let be a positive definite solution of ( [ ri1 ] ) . then there exist a scalar such that with [ lem34 ]let be a solution of ( [ ri1 ] ) which satisfy the equation ( [ cons11 ] ) .using controller gain ( [ g1 ] ) and ( [ g2 ] ) , the following inequality holds assuming as a iss lyapunov function for ( [ syse1 ] ) and . the time difference of ] and .+ figure [ fig:42 ] shows the convergence of uncertain states for event - triggered control .for event triggered control the triggering instants are shown in figure [ fig:43 ] .a discrete - time periodic and aperiodic control of uncertain linear system is proposed in this paper .the control law is designed by formulating an optimal control problem for virtual nominal system with a modified cost - functional .an virtual input is defined to design the stabilizing controller gain along with the stability condition .the paper also proposes an event - triggered based control technique for ncs to achieve robustness . the event - condition and stability of uncertain systemare derived using the iss lyapunov function .a new event - triggering law is derived which depends on the virtual controller gain in order to tackle mismatched uncertainty . a comparative study between existing and proposed resultsis also reported . 
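the event-triggered loop of theorem [ th1 ] can be exercised in a short simulation: the last transmitted state is held by the zoh, a new transmission occurs only when the measurement error exceeds a threshold proportional to the current state norm, and the number of transmissions is counted for comparison with periodic control. the model, gain, uncertainty level and threshold below are placeholders rather than the values prescribed by the theorem.

```python
import numpy as np

rng = np.random.default_rng(3)

A = np.array([[1.0, 0.1], [0.0, 1.0]])        # placeholder nominal model
B = np.array([[0.0], [0.1]])
K = np.array([[3.2, 2.6]])                     # placeholder stabilizing gain
sigma = 0.2                                    # placeholder trigger threshold

x = np.array([1.0, -0.5])
x_held = x.copy()                              # last transmitted state (zoh)
transmissions = 0

for k in range(300):
    e = x_held - x                             # measurement error e(k)
    if np.linalg.norm(e) > sigma * np.linalg.norm(x):
        x_held = x.copy()                      # event: transmit the current state
        transmissions += 1
    u = -K @ x_held                            # control computed from the held state
    dA = 0.01 * rng.standard_normal((2, 2))    # crude stand-in for bounded uncertainty
    x = (A + dA) @ x + B @ u

print("transmissions:", transmissions, "final state norm:", np.linalg.norm(x))
```

counting the transmissions against the 300 updates that periodic control would require gives the kind of comparison reported in the numerical example.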
a challenging future work to extend the proposed robust control framework for discrete - time nonlinear system with and without event - triggering input .this frame work can be formulated as a differential game problem where the control inputs and can be treated as minimizing and maximizing inputs . the proof has two parts . at first , we solve an optimal control problem to minimize ( [ cos1 ] ) for the nominal system ( [ nomi1 ] ) . for this purposethe optimal input and should minimize the hamiltonian , that means and . after applying discrete - time lqr methods , the riccati equation ( [ ri1 ] ) and controller gains ( [ g1 ] ) , ( [ g2 ] )are achieved . to prove the stability of uncertain system , let be a lyapunov function for ( [ sys2 ] ) . then applying ( [ syse1 ] ) the time difference of along the is where . using matrix inversion lemma , following is achieved using ( [ ri1 ] ) and ( [ er2 ] ) in ( [ eq46 ] ) , followingis achieved (k)-x(k)^{t}[f-\epsilon^{-1}\delta a^{t}\delta a]x(k)\end{aligned}\ ] ] now applying lemmas [ lem12 ] , [ lem34 ] , the equation ( [ grad1 ] ) is simplified as the inequality ( [ grad2 ] ) will be negative semi - definite if and only if equation ( [ scon3 ] ) is satisfied .99 o. c. imer and t. basar,``optimal control with limited controls '' , _ american control conference _ , pp .298 - 303 , minneapolis , 2006 .l. zhang and d. h. varsakelis , `` lqg control under limited communication '' , _ 44th ieee conference on decision and control _ , pp .185 - 190 , spain , 2005 .k. astrom and b. bernhardsson , `` comparison of riemann and lebesgue sampling for first order stochastic systems '' , _41st ieee conference on decision and control _ , pp .2011 - 2016 , las vegas , 2002 .p. tabuada , `` event - triggered real - time scheduling of stabilizing control tasks '' , _ ieee transactions on automatic control _ , vol .52(9 ) , pp . 1680 - 1685 , 2007 .a. eqtami , d. v. dimarogonas and k. j. kyriakopoulos , `` event - triggered control for discrete - time systems '' , _ american control conference _ , pp .4719 - 4724 , baltimore , 2010 .a. girard , `` dynamic triggering mechanisms for event - triggered control '' , _ ieee transactions on automatic control _60 no . 7 , pp .1992 - 1997 , 2015 .n. marchand , s. durand , and j. f. g. castellanos , `` a general formula for event - based stabilization of nonlinear systems '' , _ ieee transactions on automatic control _58(5 ) , pp .1332 - 1337 , 2013 .a. anta , and p. tabuada , `` to sample or not to sample : self - triggered control for nonlinear systems '' ._ ieee transactions on automatic control _ , vol .55(9 ) , pp .2030 - 2042 , 2010 .x. wang and m. d. lemmon , `` self - triggered feedback control systems with finite - gain stability '' , _ ieee transactions on automatic control _ , vol .54(3 ) , pp . 452 - 462 , 2009 .e. garcia , p. j. antsaklis , `` optimal model - based control with limited communication '' , _ proceedings of the 19th ifac world _, cape town , 2014 .e. garcia , p. j. antsaklis , l. a. montestruque , `` model - based control of networked systems '' , _springer _ , switzerland , 2014 .k. zhou , p. p.khargonekar,``robust stabilization of linear systems with norm - bounded time - varying uncertainty '' , _ systems & control letters _17 - 20 , 1988 .i. r. petersen and c. v. hollt,``a riccati equation approach to the stabilization of uncertain linear systems '' , _ automatica _ , vol .397 - 411 , 1986 .g. garcia , j. bernussou and d. 
arzelier , `` robust stabilization of discrete - time linear systems with norm - bounded time varying uncertainty '' , _ systems & control letters _ ,327 - 339 , 1994 .m. h. heemels , m. c. f. donkers , a. r. tell `` periodic event - triggered control for linear systems '' , _ ieee transactions on automatic control _ , vol .58(4 ) , pp . 847 - 861 , 2013 .m. h. heemels , m. c. f. donkers `` model - based periodic event - triggered control for linear systems '' , _ automatica _ , vol .698 - 711 , 2013 .a. sahoo , h. xu and s. jagannathan,``near optimal event - triggered control of nonlinear discrete - time systems using neurodynamic programming '' , _ ieee transactions on neural networks and learning systems _ , ( early accesses ) , pp . 1 - 13 , 2015 . s. trimpe and r. dandrea , `` event - based state estimation with variance - based triggering '' , _ 51st ieee conference on decision and control _ , pp . 6583 - 6590 , hawaii , 2012 . p. tallapragada and n. chopra , `` on event triggered tracking for nonlinear systems '' , _ ieee transactions on automatic control _ , vol .58(9 ) , pp . 2343 - 2348 , 2013 .e. d. sontag , `` input to state stability : basic concepts and results '' , _ nonlinear and optimal control theory _ , pp .163 - 220 , 2008 .d. nesic and a.r .teel , `` input - to - state stability of networked control systems '' , _ automatica _ , vol .40(12 ) , pp . 2121 - 2128 , 2004 .d. s. naidu , optimal control systems , _ crc press _ , india , 2009 .r. a. horn and c. r. johnson , matrix analysis , _ cambridge university press _ , cambridge , 1990 .e. garcia and p. j. antsaklis , `` model - based event - triggered control for systems with quantization and time - varying network delays '' , _ ieee transactions on automatic control _ ,58(2 ) , pp . 422 - 434 , 2013 .`` an optimal control approach to robust control design '' , _ international journal of control _ vol .73(3 ) , pp .177 - 186 , 2000 .f. lin and r. d. brandt , `` an optimal control approach to robust control of robot manipulators '' , _ ieee transactions on robotics and automation _14(1 ) , pp . 69 - 77 , 1998 .f. lin , w. zhang and r. d. brandt,robust hovering control of a pvtol aircraft " , _ ieee transactions on control system technology _ , vol .7(3 ) , pp . 343 - 351 , 1999 .d. m. adhyaru , i.n .kar and m. gopal , `` fixed final time optimal control approach for bounded robust controller design using hamilton jacobi bellman solution '' , _ iet control theory and applications_. vol .3(9 ) , pp . 1183 - 1195 , 2009 .d. m. adhyaru , i. n. kar and m. gopal , `` bounded robust control of systems using neural network based hjb solution '' , _ neural comput and applic _ , vol .20(1 ) , pp . 91 - 103 , 2011 .m. xia , v. gupta and p. j. antsaklis .`` networked state estimation over a shared communication medium '' , _ american control conference _ , pp. 4128 - 4133 , washington , 2013 .w. wu , s. reimann , d. gorges , and s. liu , `` suboptimal event - triggered control for time - delayed linear systems '' , _ ieee transactions on automatic control _ , vol .60(5 ) , pp . 1386 - 1391 , 2015 .h. k. khalil , nonlinear systems , _ prentice hall _ , 3rd edition , new jersey , 2002 .w. wu , s. reimann , d. gorges and s.liu , `` event - triggered control for discrete - time linear systems subjected to bounded disturbance '' , _ international journal of robust and nonlinear control _ , wiley , 2015 .
this paper proposes a procedure to control an uncertain discrete-time networked control system through limited stabilizing input information. the system is primarily affected by time-varying, norm-bounded, mismatched parametric uncertainty. the input information is limited due to the unreliability of the communicating network. an event-triggered robust control strategy is adopted to capture this network unreliability. in event-triggered control the control input is computed and updated at the system end only when a pre-specified event condition is violated. the robust control input is derived to stabilize the uncertain system by solving an optimal control problem based on a virtual nominal dynamics and a modified cost-functional. the designed robust control law with limited information ensures input-to-state stability (iss) of the original system in the presence of mismatched uncertainty. deriving the event-triggering condition for the discrete-time uncertain system and analytically ensuring the stability of such a system are the key contributions of this paper. a numerical example is given to demonstrate the efficacy of the proposed event-based control algorithm over the conventional periodic one. keywords: discrete-time event-triggered control, discrete-time robust control, mismatched uncertainty, optimal control, input-to-state stability.
two body problems are very classical in celestial mechanics and have been studied thoroughly since kepler discovered the laws of motion of celestial objects ( e.g. , aitken 1964 , goldstein 1980 , danby 1988 , roy 1988 , murray and dermott 1999 , beutler 2004 ) .the regular orbits of a system of two masses in newtonian mechanics are of three types : ellipse , parabola and hyperbola .the latter two cases , in which the separation of two bodies will become infinite in the remote future , may be called open orbits .the singular orbit is a linear one corresponding to a head - on collision , which is extremely special . in this paper, we consider regular orbits mentioned above : for a binary system , the orbit determination may bring us some informations about its formation and evolution mechanism . in an open orbit , one may infer an impact parameter and an initial relative velocity of two masses , one of which may be ejected by some explosive mechanism such as a supernova or by three body scattering .for instance , some observations reveal that kicked pulsars move at unusually high speed ( anderson et al .1975 , hobbs et al . 2005 ) .the orbit determination of _ visual double stars _ was solved first by savary in 1827 , secondly by encke 1832 , thirdly by herschel 1833 and by many authors including kowalsky , thiele and innes ( aitken 1964 for a review on earlier works ; for the state - of - the - art techniques , e.g , eichhorn and xu 1990 , catovic and olevic 1992 , olevic and cvetkovic 2004 ) . here , a visual binary is a system of two stars both of which can be seen .the relative vector from the primary star to the secondary is in an elliptic motion with a focus at the primary .this relative vector is observable because the two stars are seen . on the other hand , an _astrometric binary _ is a system of two objects where one object can be seen but the other can not like a black hole or a very dim star . in this case , it is impossible to directly measure the relative vector connecting the two objects , because one end of the separation of the binary , namely the secondary , can not be seen .the measures are made in the position of the primary with respect to unrelated reference objects ( e.g. , a quasar ) whose proper motion is either negligible or known . as a method to determine the orbital elements of a binary , an analytic solution in an explicit formhas been found by asada , akasaka and kasai ( 2004 , henceforth aak ) .this solution is given in a closed form by requiring neither iterative nor numerical methods .one may naturally seek an analytic method of orbit determination for open orbits .an extension for open orbits done earlier by dommanget ( 1978 ) used the thiele - innes method , namely solved numerically the kepler equation . as a result , the method by dommanget does not provide an explicit solution in a closed form .therefore , let us extend the explicit solution by aak to open orbits .then , we would face the following problem .aak formalism uses a fact that the semimajor and semiminor axes of an ellipse divide it into quarters .this fact plays a crucial role in determining the position of the common center of mass on a celestial sphere ; we should note here that the projected common center of mass is not necessarily a focus of an apparent ellipse .the division into quarters is possible for neither a parabola nor a hyperbola .the purpose of this paper is to generalize aak approach so that we can treat an open orbit .this paper is organized as follows .sec . 
2 presents a generalized aak formalism . in sec .3 , the generalized approach is employed to obtain the method of orbit determination for a hyperbolic orbit . in sec . 4 , the formula for a parabolic orbit is presented . in sec .5 , we recover these formulae by making a suitable transformation of that for an elliptic orbit with some limiting procedures .6 is devoted to conclusion .we denote by the cartesian coordinates on a celestial sphere that is perpendicular to the line of sight .a general form of an ellipse on a celestial sphere is which is characterized by five parameters ; the position of its center , the length of its semimajor / semiminor axes and the rotational degree of freedom . by at least five measurements of the location of a star , one can determine all the parameters .henceforth , we adopt the cartesian coordinates such that the apparent ellipse can be reexpressed in the standard form as where we assume without loss of generality .the ellipticity , , is .a focus of the original keplerian ellipse is not always that of the apparent one because of the inclination of the orbital plane .we should note that a focus of the original keplerian orbit is the common center of mass of a binary , around which a component star moves at the constant - areal velocity following the keplerian second law ( the conservation law of the angular momentum in the classical mechanics ) .this enables us to find out the location of the common center of mass as shown below .a star is located at on a celestial sphere at each epoch for , where for . here, denotes the eccentric angle in the apparent ellipse but not the eccentric anomaly in the true one ; the eccentric anomaly of the original keplerian orbit is not observable .we assume anti - clockwise motion , such that for .all we must do in the case of the clockwise motion is to change the signature of the area in eq .( ) in the following .we define the time interval as .the common center of mass of the binary is projected onto the celestial sphere at .even after the projection , the law of constant - areal velocity still holds , where we should note that the area is swept by the line interval between the projected common center and the star .the area swept during the time interval , , is denoted by .the total area of the observed ellipse is denoted by .the law of the constant areal velocity on the celestial sphere becomes where .\label{areas}\ ] ] equation is rewritten explicitly as they are solved for and as where the periastron is projected onto the observed ellipse at .the ratio of the semimajor axis to the distance between the center and the focus of the ellipse remains unchanged , even after the projection .hence , we find the positional vector is still located on the apparent ellipse given by eq .we thus obtain the ellipticity as in the original derivation of aak formula , the fact that the semimajor and semiminor axes divide the area of the ellipse in quarters .this still holds even after the projection .namely the projected semimajor and semiminor axes divide the area of the apparent ellipse in quarters , though the original semimajor and semiminor axes are not always projected onto the apparent semimajor and semiminor ones . this way of the derivation, however , can be used for neither hyperbolic nor parabolic cases , where there are no counterparts of the semiminor axis .hence we shall employ another method . 
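as a practical aside, the first step of the procedure above, recovering the five parameters of the apparent ellipse from at least five measured positions, reduces to a linear least-squares problem once the ellipse is written as a general conic. the sketch below fits the conic coefficients to noisy synthetic positions; the observing geometry and noise level are invented for illustration, and the center, axes and orientation follow from the fitted coefficients by standard formulae.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic apparent ellipse: center (cx, cy), semi-axes (a, b), rotation theta
cx, cy, a, b, theta = 0.3, -0.1, 1.0, 0.6, 0.4
phi = rng.uniform(0.0, 2.0 * np.pi, 12)                 # twelve observed epochs
x = cx + a * np.cos(phi) * np.cos(theta) - b * np.sin(phi) * np.sin(theta)
y = cy + a * np.cos(phi) * np.sin(theta) + b * np.sin(phi) * np.cos(theta)
x += 0.005 * rng.standard_normal(x.size)                # measurement noise
y += 0.005 * rng.standard_normal(y.size)

# general conic  A x^2 + B x y + C y^2 + D x + E y + 1 = 0  (five unknowns)
M = np.column_stack([x**2, x * y, y**2, x, y])
coeffs, *_ = np.linalg.lstsq(M, -np.ones_like(x), rcond=None)
print(coeffs)   # (A, B, C, D, E); center, axes and rotation follow from these
```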
in this paragraph, we use the cartesian coordinates on the original orbital plane .let be the inclination angle between the original orbital plane and the celestial sphere .we define as the angular distance of the periastron , namely the angle between the periastron and the ascending node .let us express the original keplerian ellipse as we consider the line that is perpendicular to the semimajor axis at a focus .this line intersects the original ellipse at points and . for later convenience ,we adopt the coordinates whose origin is located at the focus , by making a translation as .then , one rewrites and . only in this paragraph, we adopt other cartesian coordinates so that the ascending node can be located on the -axis and the origin can be the common center of mass .the true periastron of the original ellipse is projected at .the point denoted by is projected at .it is useful to consider the following invariants because the components of a vector depend on the adopted coordinates .we consider the area surrounded by the ellipse and the line interval between and .this area is divided into equal halves by the semimajor axis . even after the projection , the divided areas are still equal halves. hence one can determine the location of the projected as in the apparent ellipse coordinates , where we defined in this computation , it is useful to stretch the apparent ellipse along its semiminor axis by so that one can consider a circle with radius . in this stretching ,importantly , the areal division into equal halves still holds .we make a translation as and so that the projected common center of mass can become the origin of the new coordinates .then , we have hence , we obtain the invariants from these vectors as whose values can be estimated because , , , and have been already all determined up to this point .equations ( )-( ) for , and are solved as where we define one can show because the arithmetic mean is not smaller than the geometric one .it is worthwhile to mention that eq .( ) is obtained by solving a quadratic equation for as which can be obtained from eqs .( )-( ) by eliminating and .furthermore , one can prove that a root of must be abandoned because eq .( ) implies that it is always larger than the unity . only in the case of , the apparent ellipse coincides with the true orbithence , the ascending node and consequently the angular distance of the periastron make no sense . as a result , the denominator of r. h. s. of eq .( ) vanishes .equations ( ) , ( ) , ( ) and ( ) agree with those of aak , where different notations were employed . in this paper ,the semiminor axis is not used for areal divisions .therefore , this formalism can be generalized straightforwardly to an open orbit , as shown below .let a star move in a hyperbola on a celestial sphere . without loss of generality, we can assume that the hyperbola is expressed as and the orbit is the left - hand side of the hyperbola , .then the position of the star at each epoch is denoted by the projected common center of mass is not necessarily a focus of the apparent hyperbola but the projected focus of the original keplerian hyperbola . 
the projected areal velocity with respect to the projected common center of mass is denoted by .the law of the constant areal velocity on the observed plane is written as where for we obtain .\label{areas - h}\end{aligned}\ ] ] equation is rewritten explicitly as they are solved for and as where the periastron is projected onto the observed hyperbola at .the ratio of the semimajor axis to the distance between the center and the focus of the hyperbola remains the same , even after the projection .hence , we find the positional vector is still located on the apparent hyperbola given by eq . .we thus obtain the ellipticity as in this paragraph , we use the cartesian coordinates on the original orbital plane .let us express an original keplerian hyperbola as we consider the line that is perpendicular to the semimajor axis at a focus .this line intersects the original hyperbola at and . only in this paragraph, we shall employ other cartesian coordinates so that the ascending node can be located on the -axis and the origin can be the common center of mass .the true periastron of the original hyperbola is projected at , where and are the inclination angle and the angular distance of the periastron , respectively .the point denoted by is projected at .we shall use the following invariants as we consider the area surrounded by the hyperbola and the line interval between and .this area is divided into equal halves by the semimajor axis . even after the projection , the divided areas are still equal halves. hence one can determine the location of the projected as in the apparent hyperbola coordinates , where we defined we make a translation as and so that the center of the coordinates can move to the projected common center of mass .then , we have where we used eqs .( ) and ( ) .hence , we obtain the invariants from these vectors as whose values can be estimated because , , , and have been all determined up to this point .equations ( )-( ) for , and are solved as where we define in the similar manner to the elliptic case , one can show hence , for a quadratic equation for as which is derived from eqs .( )-( ) .one can prove that a root of is larger than the unity according to eq .( ) and thus must be abandoned .let a star move in a parabola on a celestial sphere as then the position of the star at each epoch is denoted by the projected common center of mass is not necessarily a focus of the apparent parabola but the projected focus of the original keplerian parabola .the law of the constant areal velocity on the observed plane is written as where for we obtain .\label{areas - p}\end{aligned}\ ] ] equation is rewritten explicitly as they are solved for and as where , \label{gj } \\ h_j&=&\frac32 [ t(j+1 , j)x_{j+2}+t(j+2 , j+1)x_j -t(j+2 , j)x_{j+1 } ] , \label{hj } \\ i_j&=&- [ t(j+1 , j)x_{j+2}\sqrt{-x_{j+2}}+t(j+2 , j+1)x_j\sqrt{-x_j } \nonumber\\ & & -t(j+2 , j)x_{j+1}\sqrt{-x_{j+1 } } ] .\label{ij } \end{aligned}\ ] ] the periastron is projected onto the observed parabola at .the semimajor axis is projected onto a line , which may be expressed as .this line intersects the apparent parabola only at the projected periastron .therefore , we find because must be larger than for a sufficient large . 
in addition, the projected semimajor axis goes through the projected common center .this implies .hence we obtain in this paragraph , we use the cartesian coordinates on the original orbital plane .let a keplerian parabola be we consider the line that is perpendicular to the semimajor axis at a focus .this line intersects the original parabola at and . only in this paragraph , we employ other cartesian coordinates so that the ascending node can be located on the -axis and the origin can be the common center of mass .the true periastron of the original parabola is projected at , where and are the inclination angle and the angular distance of the periastron , respectively .the point denoted by is projected at .we shall use the following invariants as we consider the area surrounded by the parabola and the line interval between and .this area is divided into equal halves by the semimajor axis . even after the projection, the divided areas are still equal halves . hence one can determine the location of the projected as in the apparent parabola coordinates , where we defined we make a translation as and so that the origin of the coordinates can be the projected common center of mass .then , we obtain hence , we obtain the invariants from these vectors as ^{3/2}}{4q } , \label{times2-p}\end{aligned}\ ] ] whose values can be estimated because , and have been all determined up to this point .equations ( )-( ) for , and are solved as where we define in the similar manner to the above two cases , one can show hence , for a quadratic equation for as which is derived from eqs .( )-( ) .one can prove that a root of is larger than the unity according to eq .( ) and thus must be abandoned .let us rederive the formula for a hyperbolic case from that for an elliptic one by making a transformation as which imply where . then , from eqs .( )-( ) and ( )-( ) we find we can thus show that the location of the common center is transformed from eqs .( ) and ( ) to eqs .( ) and ( ) . equation ( ) is transformed into eq .( ) , eqs .( )-( ) into eqs .( )-( ) , eqs .( )-( ) into eqs .( )-( ) , because remains unchanged .to rederive the formula for a parabolic case , we perform a transformation from an elliptic case with a limiting procedure as where the finite implies and . then , we find we can transform the location of the common center from eqs .( ) and ( ) to which agrees with eq .( ) .we thus recover eq .( ) as equation ( ) is transformed as where we used and . by using eq .( ) and , we obtain ( 1-e_{\mbox{k } } ) } { 2a^3q } \nonumber\\ & \to & -\frac{(y_{\mbox{c}}^2 + 4q^2 ) ( y_{\mbox{c}}^2 + 4qx^{\prime}_{\mbox{c } } ) } { 4q^2 } , \label{tr - pq2-p}\\ & = & 2 a^{3/2}\sqrt{q } ( 1-e_{\mbox{k}})^{3/2 } \nonumber\\ & \to&\frac{[-(y_{\mbox{c}}^2 + 4qx^{\prime}_{\mbox{c}})]^{3/2}}{4q } , \label{tr - times2-p}\end{aligned}\ ] ] which agree with eqs .( )-( ) .equations ( )-( ) are transformed as where remains unchanged .they agree with eqs .( )-( ) .the formulae for orbit determination of elliptic , hyperbolic and parabolic orbits are obtained in a unified manner by generalizing aak approach , which originally needed a fact of the areal divisions by the semimajor and semiminor axes of an ellipse .we show also that the present formulae are recovered from aak result by a suitable transformation among an ellipse , hyperbola and parabola .the present author would like to thank the anonymous reviewers for invaluable information particularly regarding the earlier works .he would like to thank professor m. 
kasai and professor k. maeda for encouragement .aitken r. g. , 1964 _ the binary stars _ ( ny : dover ) anderson b. , lyne a. g. , peckham , r. j. , 1975 , ` proper motions of six pulsars ' , nature , 258 , 215 asada h. , akasaka t. , kasai m. , 2004 , ` inversion formula for determining the parameters of an astrometric binary ' , pasj . , 56 , l35 beutler g. , 2004 _ methods of celestial mechanics _ ( berlin : springer ) catovic z. and olevic d. , 1992 _ in iau colloquim 135 , asp conference series , vol .32 _ , ed .mcalister h.a . and hartkopf w.i .( san francisco : astronomical society of the pacific ) , 217 danby j. m. a. , 1988 _ fundamentals of celestial mechanics _( va : william - bell ) dommanget j. , 1978 , ` mthode de calcul dune orbite d'toile double visuelle valable dans tous les cas dexcentricit , a&a . , 68 , 315 eichhorn h. k. , xu y. , 1990 , ` an improved algorithm for the determination of the system parameters of a visual binary ' , apj . , 358 , 575 goldstein h. , 1980 _ classical mechanics _( ma : addison - wesley ) hobbs g. , lorimer d. r. , lyne a. g. , kramer m. , 2005 , ` a statistical study of 233 pulsar proper motions ' , mnras ., 360 , 974 , murray c. d. , dermott s. f. , 1999 _ solar system dynamics _( cambridge : cambridge univ . press ) olevic d. , cvetkovic z. , 2004 , ` orbits of 10 interferometric binary systems calculated by using the improved kovalskij method ' , a&a , 415 , 259 roy a. e. , 1988 _ orbital motion _( bristol : institute of physics publishing )
we present an exact solution of the equations for orbit determination of a two-body system in a hyperbolic or parabolic motion. in solving this problem, we extend the method employed by asada, akasaka and kasai (aak) for a binary system in an elliptic orbit. the solutions applicable to each of the elliptic, hyperbolic and parabolic orbits are obtained by the new approach, and they are all expressed in explicit form, remarkably, only in terms of elementary functions. we also show that the solutions for an open orbit are recovered by making a suitable transformation of the aak solution for the elliptic case. keywords: astrometry; celestial mechanics; orbit determination
the famous experiment of messages being forwarded to a target among a group of people , carried out by milgram in the 1960s , and also by dodds _et al . _ in 2003 on a larger scale , reveals the existence of short paths between pairs of distant vertices in networks that appear to be regular ( i.e. the small - world effect ) .one of the important quantities that characterize this small - world effect is the average shortest path length between two vertices . on small world networks, this value grows very slowly ( relative to the case of a fully regular network ) with the network size .recent empirical research has shown that a great variety of natural and artificial networks with their structure dominated by regularity are actually small worlds , and their average path lengths grow as , or more slowly .( see refs . for more reviews . )an alternative issue revealed by experiments , but less obvious , is about the realistic process of passing information on small world networks .this kind of information navigation has been studied by kleinberg .this process goes dynamically : when a message is to be sent to a designated target , each individual forwards the message to one of its nearest neighbors ( connected either by a regular link or a shortcut ) based on its limited information . without information of the whole network structure ,this actual path is usually longer than the shortest one given by the topological structure .while is the average shortest path length , the average actual path length is the average number of steps required to pass messages between randomly chosen vertex pairs . as usually referred to as the diameter of the system , in the rest of this paper shall be taken as the effective diameter , and .it has been noted that the topology of the network may significantly affect the behavior of .in other words , it may determine the efficiency of passing information .based on milgram s experiment , kleinberg studied navigation process on a variant of the watts - strogatz ( w - s ) small - world model on an open regular square lattice .each vertex sends out a long range link with probability , and the probability of the other end falling on a vertex at euclidean distance away decays as .kleinberg studied when each vertex sends out one long range link , and proved a lower bound of . when , the long range links are uniform , and was obtained . de moura _ et al . _ studied on the -dimensional w - s model with and varying , and obtained , and thus in the two - dimensional case . in the more recent work of zhu _ on the one - dimensional case , the variance of with both and was studied , and scaling relations were shown to exist . for the studies of the searching processes on other different networks , see refs . . in this paper, we systematically investigate the navigation process on a two - dimensional variant of the w - s network model .we study the behavior of by first working out the scaling relations in the two - dimensional case .our result also provides new understanding of the scaling analysis in ref . . in sec .[ sec.2 ] , the model used here is constructed and the navigation process is described , and then the average actual path length is obtained with some approximation based on a rigorous treatment . following that , in sec .[ sec.3 ] , the dependence of on and are presented based on scaling relations . 
special attention is paid to the cases studied in the works of kleinberg and de moura .summary and discussions can be found in sec .[ sec.4 ] .our model starts from a two - dimensional square lattice . with periodic boundary condition, the lattice distance between two vertices and can be written in a two - dimensional fashion as this value is actually the length of the shortest path connecting these two vertices through only regular links . to generate a small world , with probability ( )each vertex sends out an additional link to another vertex ( excluding its original nearest neighbors ) .if this other vertex is selected at random , then we are creating a small - world network with random shortcuts .based on realistic considerations ( for example , people tend to be brought together by similar interest , occupation , etc . ) , we shall add the shortcuts in a biased manner : if the shortcut starts from vertex , the probability that vertex is selected as the end depends on the lattice distance between them in the following way , where is a positive exponent and is the normalization factor . in the model described above , the navigation process can be simulated with the so - called `` greedy '' algorithm : without loss of generality , suppose the target is vertex .at each step , the current message holder , vertex , passes the message through one of its regular or long - range links .based on its limited local information , this link is believed to bring the message the closest to the target . based on this algorithm , the actual path length can be obtained after taking an ensemble average over all possible realizations of the network ( with a set of fixed parameters , , , , etc . ) . in the simplest case, we suppose that each vertex has information of only the vertices that can be reached within one step , and do the calculation as the following : ( 1 ) if the current message holder is vertex , we simply have it is the same for the other nearest neighbors of the target , , and .( 2 ) there are nodes with lattice distance from the target : , , , and .if the current message holder is , for example , , then with probability it is directly linked to the target via one shortcut , which means the message is sent directly to the target with this probability . on the other hand ,the probability that the message is forwarded along a regular bond is thus the calculation is the same for the other nodes mentioned above .( 3 ) in a general case , the message is held by vertex . denotes the probability that the message is forwarded in the next step to a vertex , which must be nearer to the target than by at least lattice distance .if the message holder is not able to find a shortcut , the message will be forwarded along a regular link with probability for example , if the vertex passes the message through a regular link , it will randomly choose or , which in the following will be denoted by .now , with this set of probabilities , we obtain , + w_{reg}\left [ 1+\left\langle l\left ( x_{reg},y_{reg}\right ) \right\rangle \right ] .\label{dituin}\ ] ] considering that is a relatively small quantity , can be expressed as where we have used the fact that . 
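before continuing with the analytic recursion, note that the navigation process described above is also straightforward to simulate directly, which provides an independent check of the averages computed below. the sketch builds the model on a small periodic lattice, giving each vertex one long-range link with probability p whose endpoint is drawn with probability proportional to the lattice distance raised to the power -alpha, and then runs the greedy rule with nearest-neighbor knowledge only. the lattice size and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def lattice_distance(a, b, L):
    """shortest lattice distance between two sites on an L x L torus."""
    dx = abs(a[0] - b[0]); dy = abs(a[1] - b[1])
    return min(dx, L - dx) + min(dy, L - dy)

def build_shortcuts(L, p, alpha):
    """each vertex sends one long-range link with probability p; the endpoint
    is drawn with weight d**(-alpha), nearest neighbors and self excluded."""
    sites = [(i, j) for i in range(L) for j in range(L)]
    shortcuts = {}
    for s in sites:
        if rng.random() < p:
            others = [t for t in sites if lattice_distance(s, t, L) > 1]
            w = np.array([float(lattice_distance(s, t, L)) ** -alpha for t in others])
            shortcuts[s] = others[rng.choice(len(others), p=w / w.sum())]
    return shortcuts

def greedy_path_length(source, target, L, shortcuts):
    """greedy navigation: forward to the known neighbor closest to the target."""
    cur, steps = source, 0
    while cur != target:
        x, y = cur
        candidates = [((x + 1) % L, y), ((x - 1) % L, y),
                      (x, (y + 1) % L), (x, (y - 1) % L)]
        if cur in shortcuts:
            candidates.append(shortcuts[cur])
        cur = min(candidates, key=lambda c: lattice_distance(c, target, L))
        steps += 1
    return steps

L, p, alpha = 20, 0.5, 3.0
shortcuts = build_shortcuts(L, p, alpha)
pairs = [(tuple(rng.integers(0, L, 2)), tuple(rng.integers(0, L, 2))) for _ in range(200)]
print("average actual path length:",
      np.mean([greedy_path_length(s, t, L, shortcuts) for s, t in pairs]))
```

averaging the step counts over many random source/target pairs gives a monte-carlo estimate of the quantity obtained analytically from the recursion above.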
then it is an easy task to obtain from eq .( [ wreg ] ) .recall that we have the definition of the average shortest path length , where the length of the shortest path between vertices and .by contrast , the average actual path length can be defined as further , with vertex being the target , we group the other nodes according to their lattice distance from the target , and in the following we shall also discuss the function , which for each value of is obtained by averaging all nodes with .the average actual path length depends on multiple parameters . herewe take into consideration varying , , and , but keep the range of view of each vertex limited to its nearest neighbors .our discussion of the navigation process starts from looking for the basic scaling relations .scaling relation is not new in the theories of small world effect .actually , it plays a central role in the current theoretical framework . in 1999 , newman showed that in the w - s model with uniform shortcuts the average shortest path length is a function of , and it sharply decreases when becomes larger than 1 .newman noticed that is simply the expected number of long - range links .the threshold of small world behavior is , that means the network becomes a small world when there is more than one long - range link . when the model network is generalized , this interpretation shall be generalized as well .for example , in a discussion of the scaling relations in the problem of dynamic navigation , zhu _ et al . _ considered inhomogeneous long range links ( the probability of linking two nodes falls when their lattice distance increases ) . in their study , the dynamic small world behavior is switched on when , where is the number of long - range links and is the average reduced link length ( the average length of long range links divided by the system size ) .although they focused on different aspects ( static and dynamic ) of the small world effect , we can still compare these two versions of scaling relations . because in the model studied by newman , , it is consistent with the interpretation of zhu _et al_. actually , as we shall see below , the interpretation of zhu _ et al ._ can be developed as well , when a more general model is considered . in the introductionwe have defined as the effective diameter , in the following we will use as the reduced effective diameter . if the network is regular , will appear as a constant .bearing this in mind , we first look at the results shown in fig.1 . for each value of ( with exception at , as will be shown below ) , appears as a function of , where is the average length of long - range links .thus our study clearly supports an interpretation different from that in ref . : instead of , the parameter should be . in the one - dimensional case , this equals , and thus consistent with ref . . when , , and , indicating that the network is virtually regular , and when increases beyond , the system begins to show a dynamic small world behavior . however , we find interesting exceptions at and .as shown in fig.2 , at , is a linear function of for significantly larger than . 
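the scaling variable used in these figures involves the average length of the long-range links, which for a given alpha and lattice size can be computed exactly by summing over the link-length distribution of the torus rather than being sampled. the brute-force sketch below does this; it is a numerical counterpart of the integral approximation introduced just below in the text.

```python
import numpy as np

def mean_shortcut_length(L, alpha):
    """average long-range link length on an L x L torus when the endpoint is
    drawn with weight d**(-alpha), nearest neighbors excluded."""
    counts = {}
    for dx in range(L):
        for dy in range(L):
            d = min(dx, L - dx) + min(dy, L - dy)
            if d > 1:
                counts[d] = counts.get(d, 0) + 1
    dists = np.array(sorted(counts), dtype=float)
    mult = np.array([counts[k] for k in sorted(counts)], dtype=float)
    w = mult * dists ** -alpha
    return np.sum(dists * w) / np.sum(w)

for alpha in (0.0, 2.0, 3.0, 4.0):
    print(alpha, [round(mean_shortcut_length(L, alpha), 2) for L in (50, 100, 200)])
```

how this quantity grows or saturates with the lattice size is what distinguishes the different regimes of alpha discussed below.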
due to this extra factor of , there is no way that can be written as a function of for because at the system shows the shortest , this is certainly a case of special interest .it becomes more curious when we notice that in one dimension there is not such an exception in the scaling analysis .as shown in fig.3 , at , appears as a function of , and the dynamic small world behavior is seen when this parameter exceeds .there is more discussion of these interesting points later .figs.1 , 2 and 3 can give us more information once we get an idea of .for large enough , we can use the following approximation , which gives and some comments on this approximation : we have chosen the upper and lower limits for the integrals with some arbitrariness . for and for very large , it gives results good enough for the later discussion .surely this approximation fails for , but in that region what is important is stays finite as goes to infinity . from eqs .( [ ravel ] ) to ( [ ravel3 ] ) , we have for large enough in the above equations , come out as special points . , corresponds to the totally random network .below , is always proportional to . in ref . , the authors proposed that for the system is in the random network phase , and is the continuous phase transition point from the random network phase to the small - world phase .as increases above , is finite and independent of in the limit of . as a result ,the system is virtually a regular network for .now we have the scaling relations shown in figs.1 , 2 and 3 , and as a function of given by eqs .( [ la0 ] ) to ( [ la34 ] ) . in the followingwe shall discuss the system behavior with starting from zero .\(1 ) : as shown in fig.1 , when , , indicating that the network is virtually regular .when increases beyond , the system begins to show a dynamic small world behavior , in the sense that or where depends only on . from the linear fit , at have obtained that and .note that is obtained from linear fit of limited data and can not be exact , but our result , , is close to kleinberg s result , .( we observe that in ref . , . ) as increases , increases , but remains positive as . with our present results , we are not able to give the full function of , because near , as a function of severely deviates from a power law for relatively small .\(2 ) : at this point , when , , showing that it is a regular network .when above , turns into a linear function of , and it means with for , this gives } { p\ln 2},\ ] ] or , in the limit of we have studied the case of , and the case of will be studied below .we shall see that at the smallest is achieved .\(3 ) : regularity dominates for . for , dynamic small world effect arises .similar to the case of , we have once again obtained and tends to zero as approaches .\(4 ) : when , , the network also shows dominating regularity .when greatly exceeds , substitute into the above equation , then for large enough , we have but the size of the networks used in the present study prevented us from getting a accurate estimate of .\(5 ) : since in this case stays finite when , the system is believed to behave like a regular network .it is confirmed by the numerical calculation , which gives nearly linear as .in this work , we investigate the navigation process on a variant of watts - strogatz ( w - s ) model embedded on a two - dimensional square lattice with periodic boundary condition . 
with probability , each vertex sends out a long range link , and the probability that the other end of this link falls on a vertex at lattice distance away decays as .vertices on the network have knowledge of only their nearest neighbors . in a navigation process, messages are forwarded to a designated target , and the average actual path length is obtained with varying , , and .our result is consistent with the existence of two phase transitions at ( random network to small world network ) , and ( small world network to regular network ) . for , and , it is found that , where is the average length of the additional long range links .this develops the scaling analysis in the works of newman and zhu _ et al ._ . given , dynamic small world effectis observed , and the behavior of at large enough gives .when , is close to , so , in agreement with kleinberg s result of for . as , increases ( but stays below ) , and once exceeds , begins to decrease , and approaches zero as . at ,this kind of scaling breaks down , and can no longer be written as a function of . in this casewe can still get for large enough .note that only at , grows as a polynomial of , and it is the closest point to the static small world effect . at ,the scaling is , and accordingly with large enough . for , is nearly linear with .it is reasonable that in social networks ( like various other networks ) the probability of connection falls as distance ( in various senses , e.g. , occupation ) increases , and the apparently very small value of in human society suggests human society might have its exponent being close to .the great success of the idea of small world has since motivated much effort in studying various dynamic processes based on the small world network model .the limited knowledge of the nodes of a network is an important limitation that has to be considered when studying navigation processes .another interesting and important limitation is due to the fact that the links in a network are usually associated with `` weights '' , as systematically studied in refs .further studies on link - weighted small - world model should help us gain insight in the navigation and other relevant phenomena in various artificial and natural networks , and help us design networks with higher efficiency .l. a. adamic , r. m. lukose , a. r. puniyani , and b. a. huberman , phys .e * 64 , * 046135 ( 2001 ) ; b. j. kim , c. n. yoon , s. k. han , and h. jeong , phys .e * 65 * , 027103 ( 2002 ) ; m. rosvall , p. minnhagen , and k. sneppen , phys .e * 71 * , 066111 ( 2005 ) .l. a. braunstein , s. v. buldyrev , r. cohen , s. havlin , and h. e. stanley , phys .91 , 168701 ( 2003 ) ; m. cieplak , a. maritan , and j. r. banavar , phys . rev .76 , 3754 ( 1996 ) ; t. kalisky , l. a. braunstein , s. v. buldyrev , s. havlin , and h. e. stanley phys . rev .e 72 , 025102(r ) ( 2005 ) . fig.1 .( color online ) the reduced average actual path length varies as for , where is the average length of the additional long range links . the data collapse with each specific contains curves with respectively .on each curve with fixed , , where ( has the same set of values on each curve in fig.2 and fig.3 ) . the solid line is a guide to the eye .
The navigation process is studied on a variant of the Watts-Strogatz small-world network model embedded on a square lattice. With a fixed probability, each vertex sends out a long-range link, and the probability that the other end of this link falls on a vertex at a given lattice distance decays as a power of that distance. Vertices on the network have knowledge of only their nearest neighbors. In a navigation process, messages are forwarded to a designated target. For most values of the exponent, a scaling relation is found between the average actual path length and the product of the link probability and the average length of the additional long-range links. When this product is large enough, a dynamic small-world effect is observed, and the behavior of the scaling function for large values of the product is obtained. At two special values of the exponent this kind of scaling breaks down, and different functional forms of the average actual path length are obtained. For sufficiently large exponents, the average actual path length is nearly linear in the network size.
while modeling and optimizing the energy consumption of different applications and mobile systems have been the active research interest for a decade , the overall performance of the smartphone power management systems has not received significant attention yet .the diverse set of smartphone models available today are powered with batteries of different capacity volumes and technologies , such as lithium - ion and lithium - polymer .they employ different charging mechanisms to charge their batteries and rely on different state of charge ( soc ) estimation techniques .the growth of smartphone battery size has been linear with time .charging large batteries with traditional charging techniques may take very long time .in addition , the context may not allow a user to charge long enough time .therefore , it is necessary that the battery should be charged to some reasonable amount , e.g. , 30 - 50% , within a short amount of time .consequently , users are increasingly relying on a number of fast charging techniques from qualcomm ( quick ) and samsung ( fast ) .nevertheless , the quality of charging plays an important role in the longevity of smartphone batteries .for example , if a battery is charged over the maximum battery voltage , the resulting chemical reactions may reduce the capacity significantly and increase the battery temperature beyond the safety limit . understandingthe inefficiency of the energy source , and other related contributing factors can enable better optimization of the applications , systems , and more accurate power consumption modeling .given the number of smart mobile devices available on the market , it is not feasible to investigate their charging and battery properties , and the performance of the charging methods on batteries in a laboratory environment .although there are studies on users charging behavior , it is not well understood how this battery and charging information could be presented in more meaningful ways to the users and mobile vendors other than just the battery level . in this article , we explore a large battery analytic dataset comprising various battery sensor information from 30k devices of 1.5k unique smartphone models collected by the carat application .we explore their battery voltage behavior , charging rate and charging time , and demonstrate how these properties can be used to expose the characteristics of their power management systems .we identify their charging mechanisms , soc estimation techniques and battery properties , and the distribution of these properties among the devices . to the best of our knowledgesuch comprehensive study on a large smartphone battery dataset has not been presented earlier .our findings and contributions are listed in the sidebar .the rest of the paper is organized as follows .next section provides an overview on smartphone s power management system and describes the crowdsourced dataset .the subsequent sections explore the dataset and identify the characteristics of various power management techniques used by the smartphones and properties of smartphone batteries while charging . before concluding the paper, we also discuss user behavior in charging their smartphones .the charger and three different ics , a fuel gauge , a charging controller , and a protection ic , manage the charging of a mobile device . 
the charging controlleris hosted in the device and the protection ic resides in the battery .the fuel gauge functionality may be distributed between the device and the battery .the fuel gauge determines the runtime battery capacity , i.e. , soc or battery level , using open circuit voltage , coulomb counter , or a combined mechanism of these two .it senses battery voltage , temperature , and charge or discharge current to / from the battery pack . at the same time , it also provides feedback to the charging ic . the charging controller applies the charging algorithm , such as cc - cv , and uses the fuel gauge provided information to control the charging current , voltage , and to terminate the charging .finally , the protection ic protects the battery from over voltage or current from the device .the android battery manager collects charging and battery information from the fuel gauge ( see table [ tab : charger_battery_info ] ) and broadcasts as events .carat collects information from mobile devices as samples with a broadcast receiver .a sample structure can be defined as , where is the epoch timestamp of a soc update event and are the attribute and value pairs . from all the information collected in a sample , we consider the timestamp , soc , battery voltage , battery health , battery temperature , charging status , charger type , and the screen status attributes . [ cols="<,<",options="header " , ]we consider all the charging events and the corresponding samples in this section as well . a charging event should contain one sample for each battery level or soc update .therefore , the number of samples for a specific battery level should be unique in an event .however , we have found more than one sample for a single battery level update in the form of soc fluctuation ( e.g. battery level = 5 ) .nevertheless , such soc fluctuations are not uniformly distributed , rather left skewed with respect to the battery level .2% of the charging events reside at the tail of the distribution , which contain fluctuation between two consecutive levels .the screen status of the corresponding samples suggests that the devices were being actively used .therefore , it took a longer time for an actual 1% increment .other than charging and actively using their devices at the same time , users may keep their devices connected with the chargers even when the batteries are completely charged . from the dataset , we have identified 3% of such charging events .the duration of such events can be a few to thousands of seconds . in this case, the phone stops charging the battery and begins recharging whenever 1 - 2% has been discharged .however , we have measured that the extra energy spent during a over night charging for 10 hours can be used to charge a iphone 6 to its full capacity ( 1810mah ) .in this study based on data gathered from in - the - wild devices , we have shown that a few thousand devices use inefficient charging mechanisms that can significantly reduce battery life .we have found that 2% of the devices charge their batteries well above the maximum battery voltage . 
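As a concrete illustration of the kind of per-event analysis described above (charging rate from consecutive SOC updates, and battery levels reported more than once within one charging event), the following sketch processes a list of samples. It is not the Carat pipeline; the field names are assumptions that merely mirror the attributes listed earlier.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    # attribute names are illustrative; they mirror the fields discussed above
    timestamp: float        # epoch time of the SOC update event
    soc: int                # battery level, 0-100
    voltage: float          # battery voltage (V)
    charger: str            # e.g. 'usb', 'ac', 'wireless', 'unplugged'
    screen_on: bool

def charging_rate(samples):
    """Percentage points gained per hour within one charging event
    (samples assumed sorted by timestamp and all plugged in)."""
    if len(samples) < 2:
        return 0.0
    dt_h = (samples[-1].timestamp - samples[0].timestamp) / 3600.0
    return (samples[-1].soc - samples[0].soc) / dt_h if dt_h > 0 else 0.0

def soc_fluctuations(samples):
    """Battery levels that appear more than once in the same charging event,
    i.e. the repeated SOC updates discussed above."""
    seen, repeated = set(), set()
    for s in samples:
        (repeated if s.soc in seen else seen).add(s.soc)
    return repeated

event = [Sample(0, 20, 3.80, 'ac', False), Sample(120, 21, 3.82, 'ac', True),
         Sample(300, 21, 3.81, 'ac', True), Sample(480, 22, 3.83, 'ac', False)]
print(charging_rate(event), soc_fluctuations(event))
```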
this charging method deteriorates battery capacity faster than normal .there has been very active discussion in various online forums identifying battery soc anomalies and such soc error is due to the capacity loss .a small number of devices had a charging current higher than 1.0c , which also degrades battery performance quickly .we have also observed that 85% of the devices suffered from 1 - 10% capacity loss .moreover , user behavior and interaction with the device during charging also contribute to energy waste .our future research includes investigating the performance of different charging algorithms with a larger dataset and developing a battery analytics api based on spark so that users and vendors can investigate the performance of their batteries and power management techniques .this work was funded by the academy of finland cubic project with grant number 277498 ., anand p. iyer , ion stoica , eemil lagerspetz , and sasu tarkoma .2013 . . in _ proceedings of the 11th acm conference on embedded networked sensor systems _ _ ( sensys 13)_. acm , new york , ny , usa , article 10 , 14 pages . ,mosharaf chowdhury , tathagata das , ankur dave , justin ma , murphy mccauly , michael j. franklin , scott shenker , and ion stoica . 2012 . .in _ presented as part of the 9th usenix symposium on networked systems design and implementation ( nsdi 12)_. usenix , san jose , ca , 1528 . , birjodh tiwana , zhiyun qian , zhaoguang wang , robert p. dick , zhuoqing morley mao , and lei yang. 2010 . . in _ proceedings of the eighth ieee / acm / ifip international conference on hardware / software codesign and system synthesis_ ( codes / isss 10)_. acm , new york , ny , usa , 105114 .
For better reliability and prolonged battery life, it is important that users and vendors understand the quality of charging and the performance of smartphone batteries. Given the diversity of devices and of user behavior, this is a challenge. In this work, we analyze a large battery analytics dataset collected from 30k devices covering 1.5k unique smartphone models. We study their battery properties and state of charge while charging, and reveal the characteristics of the different components of their power management systems: charging mechanisms, state-of-charge estimation techniques, and battery properties. We also explore the diverse charging behavior of the devices and their users.
we consider the problem of finding the global maxima of a function , where is assumed bounded , using the _ expected improvement _ ( ei ) criterion .many examples in the literature show that the ei algorithm is particularly interesting for dealing with the optimization of functions which are expensive to evaluate , as is often the case in design and analysis of computer experiments .however , going from the general framework expressed in to an actual computer implementation is a difficult issue .the main idea of an ei - based algorithm is a bayesian one : is viewed as a sample path of a random process defined on . for the sake of tractability , it is generally assumed that has a gaussian process distribution conditionally to a parameter , which tunes the mean and covariance functions of the process .then , given a prior distribution on and some initial evaluation results at , an ( idealized ) ei algorithm constructs a sequence of evaluations points such that , for each , where stands for the posterior distribution of , conditional on the -algebra generated by , and is the ei at given , with and the conditional expectation given and . in practice ,the computation of is easily carried out ( see ) but the answers to the following two questions will probably have a direct impact on the performance and applicability of a particular implementation : a ) how to deal with the integral in ?b ) how to deal with the maximization of at each step ?we can safely say that most implementations including the popular ego algorithm deal with the first issue by using an _ empirical bayes _ ( or _ plug - in _ ) approach , which consists in approximating by a dirac mass at the maximum likelihood estimate of . a plug - in approach using maximuma posteriori estimation has been used in ; _ fully bayesian _ methods are more difficult to implement ( see and references therein ) . regarding the optimization of at each step , several strategies have been proposed ( see , e.g. , ) .this article addresses both questions simultaneously , using a sequential monte carlo ( smc ) approach and taking particular care to control the numerical complexity of the algorithm .the main ideas are the following .first , as in , a weighted sample from is used to approximate ; that is , . besides , at each step , we attach to each a ( small ) population of candidate evaluation points which is expected to cover promising regions for that particular value of and such that .at each step of the algorithm , our objective is to construct a set of weighted particles so that , with where denotes the lebesgue measure , , is a criterion that reflects the interest of evaluating at ( given and past evaluation results ) , and is a normalizing term . for instance , a relevant choice for is to consider the probability that exceeds at , at step .( note that we consider less than in to keep the numerical complexity of the algorithm low . ) to initialize the algorithm , generate a weighted sample from the distribution , using for instance importance sampling with as the instrumental distribution , and pick a density over ( the uniform density , for example ) .then , for each : + _ step 1 : demarginalize _ using and , construct a weighted sample of the form , with , , and .+ _ step 2 : evaluate _ evaluate at .+ _ step 3 : reweight / resample / move _ construct from as in : reweight the using , resample ( e.g. 
, by multinomial resampling ) , and move the to get using an independant metropolis - hastings kernel .step 4 : forge _ form an estimate of the second marginal of from the weighted sample .hopefully , such a choice of will provide a good instrumental density for the next demarginalization step .any ( parametric or non - parametric ) density estimator can be used , as long as it is easy to sample from ; in this paper , a tree - based histogram estimator is used . * experiments . * preliminary numerical results , showing the relevance of a fully bayesian approach with respect to empirical bayes approach , have been provided in .the scope of these results , however , was limited by a rather simplistic implementation ( involving a quadrature approximation for and a non - adaptive grid - based optimization for the choice of ) .we present here some results that demonstrate the capability of our new smc - based algorithm to overcome these limitations .the experimental setup is as follows .we compare our smc - based algorithm , with , to an ei algorithm in which : 1 ) we fix ( at a `` good '' value obtained using maximum likelihood estimation on a large dataset ) ; 2 ) is obtained by exhaustive search on a fixed lhs of size . in both cases ,we consider a gaussian process with a constant but unknown mean function ( with a uniform distribution on ) and an anisotropic matrn covariance function with regularity parameter .moreover , for the smc approach , the variance parameter of the matrn covariance function is integrated out using a jeffreys prior and the range parameters are endowed with independent lognormal priors . * results .* figures [ fig : branin ] and [ fig : hart6 ] show the average error over runs of both algorithms , for the branin function ( ) and the log - transformed hartmann 6 function ( ) . for the branin function ,the reference algorithm performs better on the first iterations , probably thanks to the `` hand - tuned '' parameters , but soon stalls due to its non - adaptive search strategy .our smc - based algorithm , however , quickly catches up and eventually overtakes the reference algorithm . on the hartmann 6 function, we observe that the reference algorithm always lags behind our new algorithm .j. mockus , v. tiesis , and a. zilinskas .the application of bayesian methods for seeking the extremum . in l.dixon and g. szego , editors , _ towards global optimization _ ,volume 2 , pages 117129 .elsevier , 1978 .
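For readers who want to reproduce the basic ingredient of the algorithm, the sketch below shows the standard closed-form expected improvement for a Gaussian predictive distribution, together with one way it could be averaged over a weighted posterior sample of the covariance parameters. It is only an illustration under these assumptions, not the SMC implementation evaluated above; the function and parameter names are ours.

```python
import math

def expected_improvement(mu, sigma, current_max):
    """Closed-form EI for maximization when the posterior predictive
    distribution of f(x) is Gaussian with mean mu and std sigma."""
    if sigma <= 0.0:
        return max(mu - current_max, 0.0)
    u = (mu - current_max) / sigma
    pdf = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    return sigma * (u * cdf + pdf)

def averaged_ei(candidates, posterior_sample, predict, current_max):
    """posterior_sample: list of (theta, weight) pairs approximating the
    posterior over covariance parameters; predict(theta, x) -> (mu, sigma)
    is assumed to come from the Gaussian process model."""
    return {x: sum(w * expected_improvement(*predict(theta, x), current_max)
                   for theta, w in posterior_sample)
            for x in candidates}
```

In the full algorithm, the candidate points would themselves be proposed and moved by the SMC sampler rather than fixed in advance.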
We consider the problem of optimizing a real-valued continuous function using a Bayesian approach, where the evaluation points are chosen sequentially by combining prior information about the function, described by a random process model, with past evaluation results. The main difficulty with this approach is to compute the posterior distributions of the quantities of interest that are used to choose evaluation points. In this article, we address this difficulty with a sequential Monte Carlo (SMC) approach.
are widely applied in many areas of signal processing , where their popularity owes largely to efficient algorithms on the one hand and advantages of sparse wavelet representations on the other .the sparseness property means that while the distribution of the original signal values may be very diffuse , the distribution of the corresponding wavelet coefficients is often highly concentrated , having a small number of very large values and a large majority of very small values .it is easy to appreciate the importance of sparseness in signal compression , .the task of removing noise from signals , or _ denoising _ , has an intimate link to data compression , and many denoising methods are explicitly designed to take advantage of sparseness and compressibility in the wavelet domain , see e.g. , . among the various wavelet - based denoising methods those suggested by donoho and johnstone arethe best known .they follow the frequentist minimax approach , where the objective is to asymptotically minimize the worst - case risk simultaneously for signals , for instance , in the entire scale of hlder , sobolev , or besov classes , characterized by certain smoothness conditions .by contrast , bayesian denoising methods minimize the _ expected _ ( bayes ) risk , where the expectation is taken over a given prior distribution supposed to govern the unknown true signal .appropriate prior models with very good performance in typical benchmark tests , especially for images , include the class of generalized gaussian densities , and scale - mixtures of gaussians ( both of which include the gaussian and double exponential densities as special cases ) .a third approach to denoising is based on the minimum description length ( mdl ) principle .several different mdl denoising methods have been suggested .we focus on what we consider as the most pure mdl approach , namely that of rissanen .our motivation is two - fold : first , as an immediate result of refining and extending the earlier mdl denoising method , we obtain a new practical method with greatly improved performance and robustness .secondly , the denoising problem turns out to illustrate theoretical issues related to the mdl principle , involving the problem of unbounded parametric complexity and the necessity of encoding the model class .the study of denoising gives new insight to these issues .formally , the denoising problem is the following .let be a signal represented by a real - valued column vector of length .the signal can be , for instance , a time - series or an image with its pixels read in a row - by - row order .let be an regressor matrix whose columns are basis vectors .we model the signal as a linear combination of the basis vectors , weighted by coefficient vector , plus gaussian i.i.d .noise : where is the noise variance . given an observed signal , the ideal is to obtain a coefficient vector such that the signal given by the transform contains the informative part of the observed signal , and the difference is noise . 
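Since the model equation above lost its symbols in extraction, the following sketch restates the setup under standard assumptions: an n-by-n orthonormal basis matrix whose columns are the basis vectors (the orthonormality restriction is imposed in the next paragraph), a sparse coefficient vector, and additive i.i.d. Gaussian noise. The Haar basis is used only as a concrete example; this is not code from the paper.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar basis for n a power of two; rows are basis vectors."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                     # coarse / scaling part
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])    # detail part
    return np.vstack([top, bottom]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
n = 64
W = haar_matrix(n).T                 # columns are basis vectors, as in the text
beta = np.zeros(n)
beta[rng.choice(n, size=6, replace=False)] = rng.normal(0, 5, size=6)  # sparse coefficients
noise = rng.normal(0, 1.0, size=n)   # i.i.d. Gaussian noise
y = W @ beta + noise                 # observed signal

c = W.T @ y                          # forward wavelet transform of the observation
print(np.allclose(W.T @ W, np.eye(n)))          # orthonormality
print(np.allclose(np.sum(y**2), np.sum(c**2)))  # Parseval's equality
```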
for technical convenience , we adopt the common restriction on that the basis vectors span a _ complete orthonormal _ basis .this implies that the number of basis vectors is equal to the length of the signal , , and that all the basis vectors are orthogonal unit vectors .there are a number of wavelet transforms that conform to this restriction , for instance , the haar transform and the family of daubechies transforms .formally , the matrix is of size and orthogonal with its inverse equal to its transpose .also the mapping preserves the euclidean norm , and we have parseval s equality : geometrically this means that the mapping is a rotation and/or a reflection . from a statistical point of view, this implies that any spherically symmetric density , such as gaussian , is invariant under this mapping .all these properties are shared by the mapping .we call the inverse wavelet transform , and the forward wavelet transform. note that in practice the transforms are not implemented as matrix multiplications but by a fast wavelet transform similar to the fast fourier transform ( see ) , and in fact not even the matrices need be written down . for complete bases , the conventional maximum likelihood ( least squares )method obviously fails to provide denoising unless the coefficients are somehow restricted since the solution gives the reconstruction equal to the original signal , including noise . the solution proposed by rissanen is to consider each subset of the basis vectors separately and to choose the subset that allows the shortest description of the data at hand .the length of the description is determined by the normalized maximum likelihood ( nml ) code length .the nml model involves an integral , which is undefined unless the range of integration ( the support ) is restricted .this , in turn , implies hyper parameters , which have received increasing attention in various contexts involving , e.g. , gaussian , poisson and geometric models .rissanen used renormalization to remove them and to obtain a second - level nml model .although the range of integration has to be restricted also in the second - level nml model , the range for ordinary regression problems does not affect the resulting criterion and can be ignored .roos et al . give an interpretation of the method which avoids the renormalization procedure and at the same time gives a simplified view of the denoising process in terms of two gaussian distributions fitted to informative and non - informative coefficients , respectively . in this paperwe carry this interpretation further and show that viewing the denoising problem as a clustering problem suggests several refinements and extensions to the original method .the rest of this paper is organized as follows . in sec .[ sec : mdl ] we reformulate the denoising problem as a task of clustering the wavelet coefficients in two or more sets with different distributions . in sec .[ sec : refined ] we propose three different modifications of rissanen s method , suggested by the clustering interpretation . 
in sec .[ sec : results ] the modifications are shown to significantly improve the performance of the method in denoising both artificial and natural signals .the conclusions are summarized in sec .[ sec : conclusions ] .we rederive the basic model ( [ eq : model ] ) in such a way that there is no need for renormalization .this is achieved by inclusion of the coefficient vector in the model as a variable and by selection of a ( prior ) density for .while the resulting nml model will be equivalent to rissanen s renormalized solution , the new formulation is easier to interpret and directly suggests several refinements and extensions .consider a fixed subset of the coefficient indices .we model the coefficients for as independent outcomes from a gaussian distribution with variance . in the basic hard threshold versionall for are forced to equal zero .thus the extended model is given by this way of modeling the coefficients is akin to the so called _ spike and slab _ model often used in bayesian variable selection and applications to wavelet - based denoising ( and references therein ) . in relation to the sparseness property mentioned in the introduction , the ` spike ' consists of coefficients with that are equal to zero , while the ` slab ' consists of coefficients with described by a gaussian density with mean zerothis is a simple form of a scale - mixture of gaussians with two components . in sec .[ sec : subband ] we will consider a model with more than two components .let , where gives the representation of the noise in the wavelet domain .the vector is the wavelet representation of the signal , and we have it is easy to see that the maximum likelihood parameters are obtained directly from the i.i.d .gaussian distribution for in ( [ eq : extendedmodel ] ) implies that the distribution of is also i.i.d . andgaussian with the same variance , . as a sum of two independent random variates, each has a distribution given by the convolution of the densities of the summands , and the component of . in the case this is simply . in the case the density of the sum is also gaussian , with variance given by the sum of the variances , .all told , we have the following simplified representation of the extended model where the parameters are implicit : where denotes the variance of the informative coefficients , and we have the important restriction which we will discuss more below .the task of choosing a subset can now be seen as a clustering problem : each wavelet coefficient belongs either to the set of the informative coefficients with variance , or the set of non - informative coefficients with variance .the mdl principle gives a natural clustering criterion by minimization of the code - length achieved for the observed signal ( see ) .once the optimal subset is identified , the denoised signal is obtained by setting the wavelet coefficients to their maximum likelihood values ; i.e. 
, retaining the coefficients in and discarding the rest , and doing the inverse transformation .it is well known that this amounts to an orthogonal projection of the signal to the subspace spanned by the wavelet basis vectors in .the code length under the model depends on the values of the two parameters , and .the standard solution in such a case is to construct a single representative model for the whole model class such that the representative model is universal ( can mimic any of the densities in the represented model class ) .the minimax optimal universal model ( see ) is given by the so called normalized maximum likelihood ( nml ) model , originally proposed by shtarkov for data compression .we now consider the nml model corresponding to the extended model with the index set fixed. denote by the number of coefficients for which .the nml density under the extended model for a given coefficient subset is defined as where and are the maximum likelihood parameters for the data , and is the important normalizing constant .the constant is also known as the _ parametric complexity _ of the model class defined by . restricting the data such that the maximum likelihood parameters satisfy and ignoring the constraint , the code length under the extended model is approximated by bits . ] plus a constant independent of , with and denoting the sum of the squares of all the wavelet coefficients and the coefficients for which , respectively ( see the appendix for a proof ) .the code length formula is very accurate even for small since it involves only the stirling approximation of the gamma function .the set of sequences satisfying the restriction depends on .for instance , consider the case . in a model with , the restriction corresponds to a union of four squares , whereas in a model with either or , the relevant area is an annulus ( two - dimensional spherical shell ) .however , the restriction can be understood as a definition of the support of the corresponding nml model , not a rigid restriction on the data , and hence models with varying are still comparable as long as the maximum likelihood parameters for the observed sequence satisfy the restriction .the code length obtained is identical to that derived by rissanen with renormalization ( note the correction to the third term of in ) .the formula has a concise and suggestive form that originally lead to the interpretation in terms of two gaussian densities .it is also the form that has been used in subsequent experimental work with somewhat mixed conclusions : while for gaussian low variance noise it gives better results than a universal threshold of donoho and johnstone ( visushrink ) , over - fitting occurs in noisy cases ( see also sec .[ sec : results ] below ) , which is explained by the fact that omission of the third term is justified only in regression problems with few parameters .it was proved in that the criterion is minimized by a subset which consists of some number of the largest or smallest wavelet coefficients in absolute value .it was also felt that in denoising applications the data are such that the largest coefficients will minimize the criterion .the above alternative formulation gives a natural solution to this question : by the inequality , the set of coefficients with larger variance , i.e. , the one with larger absolute values should be retained , rather than _ vice versa_. 
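A simplified version of the resulting selection rule can be written down directly: for each candidate k, keep the k largest-magnitude coefficients, fit one zero-mean Gaussian to the retained set and one to the rest, and pick the k that gives the shortest description. The sketch below keeps only the dominant terms of the code length above (lower-order and constant terms are omitted), so it is an approximation for illustration, not the exact criterion of the paper.

```python
import numpy as np

def mdl_hard_threshold(c):
    """Return the denoised coefficients and the chosen k, using a simplified
    two-Gaussian code length: 0.5*k*ln(S_k/k) + 0.5*(n-k)*ln(S_rest/(n-k))."""
    c = np.asarray(c, dtype=float)
    n = len(c)
    order = np.argsort(-np.abs(c))            # indices, largest magnitude first
    sq = c[order] ** 2
    csum = np.cumsum(sq)
    total = csum[-1]
    best_k, best_len = 1, np.inf
    for k in range(1, n):                     # both clusters must be non-empty
        S_k, S_rest = csum[k - 1], total - csum[k - 1]
        if S_k <= 0.0 or S_rest <= 0.0:
            continue
        code_len = 0.5 * k * np.log(S_k / k) + 0.5 * (n - k) * np.log(S_rest / (n - k))
        if code_len < best_len:
            best_k, best_len = k, code_len
    denoised = np.zeros_like(c)
    keep = order[:best_k]
    denoised[keep] = c[keep]                  # retained at their observed values
    return denoised, best_k
```

The denoised signal is then obtained by applying the inverse orthonormal transform to the retained coefficients.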
in reality the nml model corresponding to the extended model ( [ eq : simplermodel ] ) is identical to rissanen s renormalized model only if the inequality is ignored in the calculations ( see the appendix ) .however , the following proposition ( proved in the appendix ) shows that the effect of doing so is independent of , and hence irrelevant .[ prop : ignore ] the effect of ignoring the constraint is exactly one bit .we can safely ignore the constraint and use the model without the constraint as a starting point for further developments for the sake of mathematical convenience .it is customary to ignore encoding of the index of the model class in mdl model selection ; i.e. , encoding the number of parameters when the class is in one - to - one correspondence with the number of parameters .one simply picks the class that enables the shortest description of the data without considering the number of bits needed to encode the class itself .note that here we do not refer to encoding the parameter values as in two - part codes , which are done implicitly in the so - called ` one - part codes ' such as the nml and mixture codes . in most casesthere are not too many classes and hence omitting the code length of the model index has no practical consequence .when the number of model classes is large , however , this issue does become of importance . in the case of denoising ,the number of different model classes is as large as ( with as large as ) and , as we show , encoding of the class index is crucial .the encoding method we adopt for the class index is simple .we first encode , the number of retained coefficients with a uniform code , which is possible since the maximal number is fixed .this part of the code can be ignored since it only adds a constant to all code lengths .secondly , for each there are a number of different model classes depending on which coefficients are retained .note that while the retained coefficients are always the _ largest _ coefficients , this information is not available to the decoder at this point and the index set to be retained has to be encoded .there are sets of size , and we use a uniform code yielding a code length nats , corresponding to a prior probability applying stirling s approximation to the factorials and ignoring all constants wrt . gives the final code length formula the proof can be found in the appendix .this way of encoding the class index is by no means the only possibility but it will be seen to work sufficiently well , except for one curious limitation : as a consequence of modeling both the informative coefficients and the noise by densities from the same gaussian model , the code length formula approaches the same value as approaches either zero or , which actually are disallowed .hence , it may be that in cases where there is little information to recover , the random fluctuations in the data may yield a minimizing solution near instead of a correct solution near .a similar phenomenon has been demonstrated for `` saturated '' bernoulli models with one parameter for each observation , and resembles the inconsistency problem of bic in markov chain order selection : in all these cases pure random noise is incorrectly identified as maximally regular data . in order to prevent this we simply restrict , which seems to avoid such problems .a general explanation and solution for these phenomena would be of interest terms in algorithmic information theory . 
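In the same simplified spirit, the cost of naming the retained index set can be added to the search of the previous sketch; the binomial term and the restriction on k below mirror the discussion above, but constant terms are again dropped, so this should be read as an approximation rather than the exact formula of the paper.

```python
from math import lgamma, log

def log_binomial(n, k):
    """ln C(n, k): code length (in nats) for naming which k coefficients are kept."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def code_length_with_index(S_k, S_rest, n, k):
    return (0.5 * k * log(S_k / k)
            + 0.5 * (n - k) * log(S_rest / (n - k))
            + log_binomial(n, k))

# in the previous sketch, restrict the loop to k <= n // 2 and replace
# `code_len` by code_length_with_index(S_k, S_rest, n, k)
```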
] .it is an empirical fact that for most natural signals the coefficients on different subbands corresponding to different frequencies ( and orientations in 2d data ) have different characteristics . basically , the finer the level , the more sparse the distribution of the coefficients , see fig . [fig : histo ] .( this is not the case for pure gaussian noise or , more interestingly , signals with fractal structure . ) within the levels the histograms of the subbands for different orientations of 2d transforms typically differ somewhat , but the differences between orientations are not as significant as between levels .finer levels have narrower ( more sparse ) distributions than coarser levels ; the finest level ( 9 ) is drawn with solid line . ] in order to take the subband structure of wavelet transforms into account , we let each subband have its own variance , . we choose the set of the retained coefficients separately on each subband , and let denote the set of the retained coefficients on subband , with . for convenience ,let be the set of all the coefficients that are not retained . note that this way we have . in order to encode the retained and the discarded coefficients on each subband, we use a similar code as in the ` flat ' case ( sec .[ sec : encodemodel ] ) . for each subband , the number of nats needed is . ignoring again the constraint ,the levels can be treated as separate sets of coefficients with their own gaussian densities just as in the previous subsection , where we had two such sets .the code length function , including the code length for , becomes after stirling s approximation to the gamma function and ignoring constants as follows : the proof is omitted since it is entirely analogous to the proof of eq .( see the appendix ) , the only difference being that now we have gaussian densities instead of only two .notwithstanding the added code - length for the retained indices , for the case this coincides with the original setting , where the subband structure is ignored , eq . , since we then have .this code can be extended to allow for some subbands simply by ignoring such subbands , which formally corresponds to reducing in such cases the constants ignored also get reduced .this effect is very small compared to terms in , and can be safely ignored since codes with positive constants added to the code lengths are always decodable . ] . finding the index sets that minimize the nml code length simultaneously for all subbands is computationally demanding .while on each subband the best choice always includes some largest coefficients , the optimal choice on subband depends on the choices made on the other subbands .a reasonable approximate solution to the search problem is obtained by iteration through the subbands and , on each iteration , finding the locally optimal coefficient set on each subband , given the current solution on the other subbands .since the total code length achieved by the current solution never increases , the algorithm eventually converges , typically after not more than five iterations .algorithm 1 in fig .[ fig : al1 ] implements the above described method . following established practice ,all coefficients are retained on the smallest ( coarsest ) subbands .ll + + 0 . & ' '' '' set + 1 .& initialize for all + 2 . & do until convergence + 3 .& for each + 4 . & optimize wrt .criterion + 5 . &end + 6 . & end + 7 .& for each + 8 .& if then set + 9 . &end + 10 . 
& output the methods described above can be used to determine the mdl model , defined by a subset of the wavelet coefficients , that gives the shortest description to the observed data . however , in many cases there are several models that achieve nearly as good a compression as the best one .intuitively , it seems then too strict to choose the single best model and discard all the others .a modification of the procedure is to consider a _ mixture _ , where all models indexed by are weighted by eq . : such a mixture model is universal ( see e.g. ) in the sense that with increasing sample size the per sample average of the code length approaches that of the best for all .consequently , predictions obtained by conditioning on past observations converge to the optimal ones achievable with the chosen model class .a similar approach with mixtures of trees has been applied in the context of compression . for denoising purposeswe need a slightly different setting since we can not let grow . instead , given an observed signal , consider another image from the same source .denoising is now equivalent to predicting the mean value of . obtaining predictions for given from the mixtureis in principle easy : one only needs to evaluate a conditional mixture with new updated ` posterior ' weights for the models , obtained by multiplying the nml density by the prior weights and normalizing wrt . : since in the denoising problem we only need the mean value instead of a full predictive distribution for the coefficients , we can obtain the predicted mean as a weighted average of the predicted means corresponding to each by replacing the density by the coefficient value obtained from for and zero otherwise , which gives the denoised coefficients where the indicator function takes value one if and zero otherwise .thus the mixture prediction of the coefficient value is simply times the sum of the weights of the models where with the weights given by eq . .the practical problem that arises in such a mixture model is that summing over all the models is intractable .since this sum appears as the denominator of , we can not evaluate the required weights .we now derive a tractable approximation . to this end , let denote a model determined by iff , and let denote a particular one with .also , let be the model with maximal nml posterior weight .the weight with which each individual coefficient contributes to the mixture prediction can be obtained from note that the ratio is equal to this can be approximated by which means that the exponential sums in the numerator and the denominator are replaced by their largest terms assuming that forcing to be one or zero has no effect on the other components of .the ratio of two weights can be evaluated without knowing their common denominator , and hence this gives an efficient recipe for approximating the weights needed in eq . . 
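A sketch of this recipe follows, under the assumption that a function code_length(mask), returning the code length in nats of the model retaining exactly the flagged coefficients, is available (for instance a subband-aware version of the earlier simplified criterion). The names are ours, and the exact weights used in the paper may differ.

```python
import numpy as np

def soft_mdl_weights(c, keep_best, code_length):
    """Approximate mixture weights: for each coefficient, compare the code
    length of the best model with that coefficient forced in versus forced
    out, and shrink the coefficient by the resulting posterior weight."""
    c = np.asarray(c, dtype=float)
    weights = np.empty_like(c)
    for i in range(len(c)):
        kept = keep_best.copy();    kept[i] = True
        dropped = keep_best.copy(); dropped[i] = False
        ratio = np.exp(code_length(dropped) - code_length(kept))  # weight(in)/weight(out)
        weights[i] = ratio / (1.0 + ratio)
    return weights * c              # 'soft' thresholded coefficients
```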
intuitively , if fixing decreases the posterior weight significantly compared to , the approximated value of becomes large and the coefficient is retained near its maximum likelihood value .conversely , coefficients that increase the code length when included in the model are shrunk towards zero .thus , the mixing procedure implements a general form of ` soft ' thresholding , of which a restricted piece - wise linear form has been found in many cases superior to hard thresholding in earlier work .such soft thresholding rules have been justified in earlier works by their improved theoretical and empirical properties , while here they arise naturally from a universal mixture code .the whole procedure for mixing different coefficient subsets can be implemented by replacing step 8 of algorithm 1 in fig .[ fig : al1 ] by the instruction where denotes the approximated value of .the behavior of the resulting soft threshold is illustrated in fig .[ fig : thres ] . ): the original wavelet coefficient value on the x - axis , and the thresholded value on the y - axis . for coefficients with large absolute value ,the curve approaches the diagonal ( dotted line ) .the general shape of the curve is always the same but the scale depends on the data : the more noise , the wider the non - linear part . ]the effect of the three refinements of the mdl denoising method was assessed separately and together on a set of artificial 1d signals and natural images commonly used for benchmarking .the signals were contaminated with gaussian pseudo - random noise of known variance , and the denoised signal was compared with the original signal .the daubechies d6 wavelet basis was used in all experiments , both in the 1d and 2d cases .the error was measured by the peak - signal - to - noise ratio ( psnr ) , defined as where is the difference between the maximum and minimum values of the signal ( for images ) ; and is the mean squared error .the experiment was repeated 15 times for each value of , and the mean value and standard deviation was recorded .the compared denoising methods were the original mdl method without modifications ; mdl with the modification of sec .[ sec : encodemodel ] ; mdl with the modifications of secs .[ sec : encodemodel ] and [ sec : subband ] ; and mdl with the modifications of secs .[ sec : encodemodel ] , [ sec : subband ] and [ sec : mixture ] . for comparison, we also give results for three general denoising methods applicable to both 1d and 2d signals , namely visushrink , sureshrink , and bayesshrink can be reproduced using the package . 
] .figure [ fig : blockdemo ] illustrates the denoising results for the _ blocks _ signal with signal length .the original signal , shown in the top - left display , is piece - wise constant .the standard deviation of the noise is .the best method , having the highest ( and equivalently , the smallest ) is the mdl method with all the modifications proposed in the present work , labeled mdl ( a - b - c ) in the figure .another case , the _ peppers _ image with noise standard deviation , is shown in fig .[ fig : pepperdemo ] , where the best method is bayesshrink .visually , sureshrink and bayesshrink give a similar result with some remainder noise left , while mdl ( a - b - c ) has removed almost all noise but suffers from some blurring .the relative performance of the methods depends strongly on the noise level .figure [ fig : curves ] illustrates this dependency in terms of the relative psnr compared to the mdl ( a - b - c ) method .it can be seen that the mdl ( a - b - c ) is uniformly the best among the four mdl methods except for a range of small noise levels in the _ peppers _ case , where the original method is slightly better .moreover , it can be seen that the modifications of secs .[ sec : subband ] and [ sec : mixture ] improve the performance on all noise levels for both signals .the right panels of fig .[ fig : curves ] show that the overall best method is bayesshrink , except for small noise levels in _ blocks _ , where the mdl ( a - b - c ) method is the best .this is explained by the fact that the generalized gaussian model used in bayesshrink is especially apt for natural images but less so for 1d signals of the kind used in the experiments .the above observations generalize to other 1d signals and images as well , as shown by tables [ tab:1d ] and [ tab:2d ] .for some 1d signals ( _ heavisine _ , _ doppler _ ) the sureshrink method is best for some noise levels . in images , bayesshrink is consistently superior for low noise cases , although it can be debated whether the test setting where the denoised image is compared to the original image , which in itself already contains some noise , gives meaningful results in the low noise regime . 
for moderate tohigh noise levels , bayesshrink , mdl ( a - b - c ) and sureshrink typically give similar psnr output .+ + [ cols="^,^,^ " , ] rccccccccc & ( rissanen , 2000 ) & mdl ( a ) & mdl ( a - b ) & mdl ( a - b - c ) & visushrink & sureshrink & bayesshrink & & sd + + & 39.1 & 36.6 & 38.5 & 39.3 & 37.3 & 43.2 & * 46.9 & & + 10 & 31.6 & 30.8 & 31.8 & 32.4 & 30.1 & 32.8 & * 33.1 & & 0.02 + 20 & 25.0 & 27.8 & 28.8 & 29.4 & 27.1 & 29.5 & * 29.9 & & 0.03 + 30 & 19.8 & 26.0 & 27.1 & 27.6 & 25.4 & 27.8 & * 28.2 & & 0.03 + 40 & 16.7 & 24.9 & 26.0 & 26.5 & 24.3 & 26.4 & * 27.0 & & 0.04 + + + & 36.2 & 33.2 & 35.1 & 35.9 & 32.9 & 39.2 & * 40.3 & & + 10 & 30.2 & 28.6 & 29.8 & 30.5 & 28.0 & 31.3 & * 31.7 & & 0.02 + 20 & 24.2 & 25.8 & 26.8 & 27.5 & 25.2 & 27.9 & * 28.3 & & 0.03 + 30 & 19.6 & 24.3 & 25.2 & 25.8 & 23.7 & 26.1 & * 26.5 & & 0.02 + 40 & 16.6 & 23.2 & 24.2 & 24.7 & 22.8 & 24.9 & * 25.3 & & 0.03 + + + & 41.4 & 36.7 & 42.5 & 43.5 & 41.0 & 47.4 & * 54.2 & & + 10 & 31.4 & 30.7 & 31.5 & 32.1 & 30.2 & 32.5 & * 32.8 & & 0.06 + 20 & 24.7 & 27.3 & 28.1 & 28.7 & 26.8 & 28.7 & * 29.2 & & 0.05 + 30 & 19.7 & 25.4 & 26.4 & 27.0 & 24.9 & 26.9 & * 27.4 & & 0.06 + 40 & 16.7 & 24.2 & 25.2 & 25.7 & 23.7 & 25.4 & * 26.2 & & 0.07 + + + & 38.9 & 36.1 & 37.9 & 38.7 & 36.9 & 42.7 & * 51.2 & & + 10 & 30.7 & 29.3 & 30.3 & 31.0 & 28.6 & * 31.5 & * 31.5 & & 0.04 + 20 & 24.7 & 25.9 & 26.9 & 27.6 & 25.1 & 27.1 & * 27.9 & & 0.05 + 30 & 19.9 & 23.9 & 24.9 & 25.5 & 23.1 & 24.6 & * 25.9 & & 0.05 + 40 & 16.8 & 22.4 & 23.3 & 23.9 & 21.6 & 22.8 & * 24.4 & & 0.08 + * * * * * * * * * * * * * * * * * * * * *we have revisited an earlier mdl method for wavelet - based denoising for signals with additive gaussian white noise . in doingso we gave an alternative interpretation of rissanen s renormalization technique for avoiding the problem of unbounded parametric complexity in normalized maximum likelihood ( nml ) codes .this new interpretation suggested three refinements to the basic mdl method which were shown to significantly improve empirical performance .the most significant contributions are : i ) an approach involving what we called the _ extended model _ , to the problem of unbounded parametric complexity which may be useful not only in the gaussian model but , for instance , in the poisson and geometric families of distributions with suitable prior densities for the parameters ; ii ) a demonstration of the importance of encoding the model index when the number of potential models is large ; iii ) a combination of universal models of the mixture and nml types , and a related predictive technique which should also be useful in mdl denoising methods ( e.g. ) that are based on finding a single best model , and other predictive tasks ._ proof of eq . : _ the proof of eq .is technically similar to the derivation of the _ renormalized _ nml model in , which goes back to .first note that due to orthonormality , the density of under the extended model is always equal to the density of evaluated at .thus , for instance , the maximum likelihood parameters for data are easily obtained by maximizing the density of at .the density of is given by where denotes a gaussian density function with mean and variance .let be the sum of squares of the wavelet coefficients with : and let denote the sum of all wavelet coefficients . 
with slight abuse of notation , we also denote these two by and , respectively .let be the size of the set .the likelihood is maximized by parameters given by with the maximum likelihood parameters ( [ eq : mlpar ] ) the likelihood ( [ eq : lik ] ) becomes the normalization constant is also easier to evaluate by integrating the likelihood in terms of : where is given by and the range of integration is defined by requiring that the maximum likelihood estimators are both within the interval $ ] .it will be seen that the integral diverges without these bounds . the integral factors in two parts involving only the coefficients with and respectively .furthermore , the resulting two integrals depend on the coefficients only through the values and , and thus , they can be expressed in terms of these two quantities as the integration variables we denote them respectively by and .the associated riemannian volume elements are infinitesimally thin spherical shells ( surfaces of balls ) ; the first one with dimension and radius , the second one with dimension and radius , given by + thus the integral in is equivalent to both integrands become simply of the form and hence , the value of the integral is given by plugging ( [ eq : integral ] ) into ( [ eq : rawnorm ] ) gives the value of the normalization constant normalizing the numerator ( [ eq : numerator ] ) by , and canceling like terms finally gives the nml density : and the corresponding code length becomes applying stirling s approximation to the gamma functions yields now rearranging the terms gives the formula where is a constant wrt . , given by _ proof of proposition [ prop : ignore ] : _ the maximum likelihood parameters may violate the restriction that arises from the definition .the restriction affects range of integration in eq .giving the non - constant terms as follows using the integral gives then where the first two terms can be written as combining with the third term of changes the plus into a minus and gives finally which is exactly half of the integral in eq ., the constant terms being the same .thus , the effect of the restriction on the code length where the _ logarithm _ of the integral is taken , is one bit , i.e. , nats ._ proof of eq .[ eq : withchoose ] : _ the relevant terms in the code length , i.e. those depending on , for the index of the model class are \\ & = -\ln ( k(n - k ) ) - \ln \gamma(k ) -\ln \gamma(n - k ) , \end{aligned}\ ] ] which gives after stirling s approximation ( ignoring constant terms ) adding this to eq .[ eq : onelevel ] ( without the constant ) gives eq . .the authors thank peter grnwald , steven de rooij , jukka heikkonen , vibhor kumar , and hannes wettig for valuable comments .moulin and j. liu , `` analysis of multiresolution image denoising schemes using generalized gaussian and complexity priors , '' _ ieee trans .information theory _ , vol .45 , no . 3 , pp .909919 , apr .m. wainwright and e. simoncelli , `` scale mixtures of gaussians and the statistics of natural images , '' in _ advances in neural information processing systems _ , s. solla , t. leen , and kmuller , eds .12.1em plus 0.5em minus 0.4emmit press , may 2000 , pp .855861 .j. portilla , v. strela , m. wainwright , and e. simoncelli , `` image denoising using scale mixtures of gaussians in the wavelet domain , '' _ ieee trans .image processing _ , vol .12 , no . 11 , pp . 13381351 , nov .2003 .p. 
grnwald , `` a tutorial introduction to the minimum description length principle , '' in _ advances in mdl : theory and applications _ , p. grnwald , i. myung , and m. pitt , eds.1em plus 0.5em minus 0.4emmit press , 2005 .n. saito , `` simultaneous noise suppression and signal compression using a library of orthonormal bases and the minimum description length criterion , '' in _ wavelets in geophysics_.1em plus 0.5em minus 0.4em academic press , 1994 , pp .299324 .f. liang and a. barron , `` exact minimax strategies for predictive density estimation , data compression , and model selection , '' _ ieee trans .information theory _ ,50 , no . 11 , pp . 27082726 , nov .2004 .t. roos , p. myllymki , and h. tirri , `` on the behavior of mdl denoising , '' in _ proc .tenth int .workshop on ai and stat ._ , r. g. cowell and z. ghahramani , eds.1em plus 0.5em minus 0.4emsociety for ai and statistics , 2005 , pp . 309316 .p. kontkanen , p. myllymki , w. buntine , j. rissanen , and h. tirri , `` an mdl framework for data clustering , '' in _ advances in mdl : theory and applications _ , p. grnwald , i. myung , and m. pitt , eds.1em plus 0.5em minus 0.4emmit press , 2005 .j. ojanen , t. miettinen , j. heikkonen , and j. rissanen , `` robust denoising of electrophoresis and mass spectrometry signals with minimum description length principle , '' _ febs letters _ , vol .13 , pp . 107113 , 2004 .
we refine and extend an earlier mdl denoising criterion for wavelet - based denoising . we start by showing that the denoising problem can be reformulated as a clustering problem , where the goal is to obtain separate clusters for informative and non - informative wavelet coefficients , respectively . this suggests two refinements , adding a code - length for the model index , and extending the model in order to account for subband - dependent coefficient distributions . a third refinement is derivation of soft thresholding inspired by predictive universal coding with weighted mixtures . we propose a practical method incorporating all three refinements , which is shown to achieve good performance and robustness in denoising both artificial and natural signals . minimum description length ( mdl ) principle , wavelets , denoising .
given symmetric matrices of size with rational coefficients , let denote the corresponding convex _ spectrahedron _ , defined by the linear matrix inequality ( lmi ) enforcing that is positive semidefinite , or equivalently that the eigenvalues of , as functions of , are all nonnegative .spectrahedra are a broad generalization of polyhedra . like polyhedra ,spectrahedra have facets , edges and vertices .however , while the facets of a polyhedron are necessarily flat , the facets of a spectrahedron can be curved outwards or inflated , see figure [ fig : spectrahedron ] for an example of a spectrahedron of dimension defined by an lmi of size . , scaledwidth=70.0% ] optimization of a linear function on a spectrahedron is called semidefinite programming ( sdp ) , a broad generalization of linear programming ( lp ) with many applications in control engineering , signal processing , combinatorial optimization , mechanical structure design , etc , see .the algebra and geometry of spectrahedra is an active area of study in real algebraic geometry , especially in connection with the problem of moments and the decomposition of real multivariate polynomials as sums - of - squares ( sos ) , see and references therein .our software spectra aims at either proving that is empty , or finding at least one point in , using _exact arithmetic_. contrary to numerical algorithms which are based on approximate computations and floating point arithmetic such as the projection and rounding heuristics of e.g. or spectra is exclusively based on computations with exact arithmetic . since exact computations are potentially expensive, spectra should be used when the number of variables or the size of the lmi are small .it should not be considered as a competitor to numerical algorithms such as interior - point methods for sdp . it should be primarily used either in potentially degenerate situations , for example when it is expected that has empty interior , or when a rigorous certificate of infeasibility or feasibility is required . the input providedto spectra is the set of matrices with rational coefficients describing the pencil and hence the spectrahedron . if is empty , spectra returns the empty listotherwise , the output generated by spectra is a finite set described by a collection of univariate polynomials with integer coefficients ] such that there exists with and : * and * .the probabilistic nature of the algorithm comes from random changes of variables performed during the procedure , allowing to put the sets in generic position .recall that the incidence varieties are defined by enforcing a full column rank constraint on the dual matrix . in spectrathis is achieved as follows ( * ? ? ?* section 3.1 ) : given a subset of dinstinct integers between and , we enforce the submatrix of whose rows are indexed by these integers to be equal to the identity matrix of size . for a given value of , there are distinct choices of row indices and hence the same number of normalized incidence varieties . for each value of , the algorithm in spectra processes iteratively these normalized incidence varieties .finally , let us explain briefly how spectra is able to certify the correctness of the output .this explanation was not included in our paper , but we believe it is useful for readers interested in the implementation details . 
for each computed solution belonging to a connected component of an incidence variety , spectra uses exact arithmetic to decide whether is positive semidefinite and to evaluate the rank of .if is not positive semidefinite , then the point is discarded . from theorem [ th : minrank ] we know that at least one computed point lies on the spectrahedron , and this point is of minimal rank , i.e. it solves problem ( [ minrank ] ) .we first build the following characteristic polynomial : where is the identity matrix of size .the coefficient ] reduces to a point in the plane .spectra can easily deal with such a degenerate case : .... > a : = matrix([[1+x1 , x2 , 0 ] , [ x2 , 1-x1 , 0 ] , [ 0 , 0 , x1 - 1 ] ] ) : > solvelmi(a ) ; [ [ x1 = [ 1 , 1 ] , x2 = [ 0 , 0 ] ] ] .... now let us modify further the bottom right entry , letting so that the corresponding spectrahedron is empty .spectra returns the empty list , and this is a certificate of emptiness : .... > a : = matrix([[1+x1 , x2 , 0 ] , [ x2 , 1-x1 , 0 ] , [ 0 , 0 , x1 - 2 ] ] ) : >solvelmi(a ) ; [ ] .... since spectra is based on exact arithmetic , it is not sensitive to numerical roundoff errors or small parameter changes : .... > a : = matrix([[1+x1 , x2 , 0 ] , [ x2 , 1-x1 , 0 ] , [ 0 , 0 , x1 - 1 - 10^(-20 ) ] ] ) : > solvelmi(a ) ; [ ] > a:=matrix([[1+x1 , x2 , 0 ] , [ x2 , 1-x1 , 0 ] , [ 0 , 0 , x1 - 1 + 10^(-20 ) ] ] ) : > solvelmi(a ) ; [ [ x1 = [ 36893488147418995335 / 36893488147419103232 , 4611686018427401391 / 4611686018427387904 ] , x2= [ -350142318592414077 / 2475880078570760549798248448 , -2801138548739304423 / 19807040628566084398385987584 ] ] .... displayed with 10 significant digits , the latter point reads : \approx 1.000000000 , \\[1em ] x_2 \in [ \frac{-350142318592414077}{2475880078570760549798248448 } , \frac{-2801138548739304423}{19807040628566084398385987584 } ] \approx -0.1414213562\cdot 10^{-9}. \end{array}\ ] ] the above point is an irrational solution , and the rational intervals are provided so that their floating point approximations are correct up to the number of digits specified in the maple environment variable digits , which is by default equal to 10 .use the command .... > digits:=100 : .... prior to calling solvelmi if you want an approximation correct to 100 digits . at the price of increased computational burden ,spectra then provides larger integer numerators and denominators in the coordinate intervals .in general , each coordinate of a point computed by spectra is an algebraic number , i.e. the root of a univariate polynomial with integer coefficients . for the classical univariate matrix the spectrahedron reduces to the irrational point . the simple call .... > a:=matrix([[1 , x1 , 0 , 0 ] , [ x1 , 2 , 0 , 0 ] , [ 0 , 0 , 2*x1 , 2 ] , [ 0 , 0 , 2 , x1 ] ] ) : > solvelmi(a ) ; [ [ x1 = [ 26087635650665550353 / 18446744073709551616 , 13043817825332807945 / 9223372036854775808 ] ] ] .... returns an interval enclosure valid to 10 digits .we can however obtain an exact representation of this point via a rational parametrization : .... > solvelmi(a , { par } ) ; [ [ x1 = [ .. ] , par = [ _ z^2 - 2,_z,[2 ] ] ] ] .... the output parameter par contains three univariate polynomials such that the computed point is contained in the finite set as in ( [ par ] ) . 
hereobviously the rational interval isolates the irrational point .the algebraic degree of semidefinite programming was studied in .let us consider the spectrahedron of example 4 in this reference , for which the following point can be easily found with spectra , and it has rank 2 , which is guaranteed to be the minimal rank achieved in the spectrahedron : .... > a:=matrix([[1+x3 , x1+x2 , x2 , x2+x3 ] , [ x1+x2 , 1-x1 , x2-x3 , x2 ] , [ x2 , x2-x3 , 1+x2 , x1+x3 ] , [ x2+x3 , x2 , x1+x3 , 1-x3 ] ] ) : > solvelmi(a , { rnk } ) ; [ [ x1 = [ 29909558235590963953/36893488147419103232 , 29909558235593946897/36893488147419103232 ] , x2 = [ -18555206088021567643/36893488147419103232 , -9277603044010395249/18446744073709551616 ] , x3 = [ -12556837519724045701/36893488147419103232 , -12556837519723709525/36893488147419103232 ] , rnk = 2 ] ] .... with the following instruction we can indeed certify that there is no point of rank 1 or less : .... > solvelmi(a , { } , [ 1 ] ) ; [ ] .... the command .... > solvelmi(a , { par } ) ; .... returns the following rational univariate parametrization ( [ par ] ) of the above rank 2 point : the degree of the polynomial in this parametrization can be obtained with the command ....> solvelmi(a , { deg } ) ; .... we can obtain more points in the spectrahedron as follows : .... > solvelmi(a , { all , rnk , deg } , [ 2 ] ) ; .... this returns 4 feasible solutions of rank , all parametrized by the above degree 10 polynomial .notice that this degree matches with the algebraic degree of a generic semidefinite programming problem with parameters , which is 10 according to ( * ? ? ?* table 2 ) .consider the matrix modeling the unit disk .two consecutive calls to solvelmi return two distinct points :after another call , or on your own computer , these intervals should still differ as spectra makes random changes of coordinates to ensure that the geometric objects computed are in general position .this kind of behavior is expected when there are infinitely many points of minimal rank in the spectrahedron . to generate reproducible outputs ,the instruction randomize can be used to seed the random number generator used by maple :let the spectrahedron is the orange region whose boundary is the internal oval of the smooth quartic determinantal curve represented in black on figure [ fig : quartic ] ., scaledwidth=50.0% ] with the following instructions .... > a:=matrix([[1+x1,x2,0,0],[x2,1-x1,x2,0],[0,x2,2+x1,x2],[0,0,x2,2-x1 ] ] ) : > solvelmi(a,{},[3 ] ) ; > solvelmi(a,{},[3 ] ) ; > ... .... we compute several points on the boundary of , they are plotted in red on figure [ fig : quartic ] .note the third input argument which specifies to solvelmi the expected rank of the computed point . since the determinantal curve is smooth , we know that the rank of equals 3 on the whole curve , and in particular on the boundary of .since the rank is specified , spectra does not have to process iteratively the incidence varieties corresponding to points of smaller ranks , thereby reducing the computational burden to find at least one point in the spectrahedron .each of these points is represented by a rational univariate parametrization of degree , obtained with the instruction ....> solvelmi(a,{par},[3 ] ) ; .... 
for example , for the point the polynomial in the rational parametrization ( [ par ] ) is recall that the algebraic degree of a point in is the degree of the minimal algebraic extension of the ground field ( here the rational numbers ) required to represent .the algebraic degree depends on the size of the pencil but also on the rank of . with and generic data ,the algebraic degree is , cf .* table 2 ) , which indeed coincides with the degree of the exact representation of computed by spectra .deciding whether a multivariate real polynomial is non - negative is difficult in general . a sufficient condition , or certificate for non - negativity , is that the polynomial can be expressed as a sum of squares ( sos ) of other polynomials . findinga polynomial sos decomposition amounts to finding a point in a specific spectrahedron called gram spectrahedron , see e.g. and references therein . as an example , consider the homogeneous ternary quartic the polynomial belongs to a series of examples provided by c. scheiderer in to answer ( in the negative ) the following question by b. sturmfels : let be a polynomial with rational coefficients which is an sos of polynomials with real coefficients ; is it an sos of polynomials with rational coefficients ?scheiderer s counterexamples prove that , generally speaking , there is no hope in obtaining nonnegativity certificates over the rationals .however , certificates exist in some algebraic extension of the field of rational numbers .in the graded reverse lexicographic ordered monomial basis , the gram matrix of is the matrix depending linearly on 6 real parameters .the gram spectrahedron parametrizes the set of all sos decompositions of .we deduce by the discussion above that does not contain rational points .in particular , its interior is empty .let us use spectra to compute points in and hence to get positivity certificates for : .... > a : = matrix([[1,0,x1,0,-3/2-x2,x3 ] , [ 0,-2*x1,1/2,x2,-2-x4,-x5 ] , [ x1,1/2,1,x4,0,x6 ] , [ 0,x2,x4,-2*x3 + 2,x5,1/2 ] , [ -3/2-x2,-2-x4,0,x5,-2*x6,1/2 ] , [ x3,-x5,x6,1/2,1/2,1 ] ] ) : > solvelmi(a , { rnk , deg , par } ) ; [ [ [ x1 = [ .. ] , x2 = [ .. ] , x3 = [ .. ] , x4 = [ .. ] , x5 = [ .. ] , x6 = [ .. ] ] , rnk = 2 , deg = 3 , par = [ 8*z^3 + 8*z+1 , 24*z^2 - 8 , [ 16*z+3 , -24*z^2 + 8 ,8*z^2 + 6*z+8 , -16*z^2 + 6*z+16 , -16*z-3 , 16*z+3 ] ] ] .... we obtain an irrational point whose coordinates are algebraic numbers of degree 3 , and which belongs to the finite set at this point , the gram matrix has rank 2 , and hence is an sos of 2 polynomials .let us compute more non - negativity certificates of rank 2 : .... > solvelmi(a,{rnk , deg , par , all},[2 ] ) ; .... in addition to the point already obtained above , we get another point .the user can compare this output with ( * ? ? ? * ex.2.8 ) : it turns out that these are the only 2 points of rank 2 .other points in the gram spectrahedron have rank 4 and they are convex combinations of these 2 .for a given , consider the spectrahedron for every it holds which shows that exponentially many bits are required to represent a point .it is elementary to check that each of the above matrices of size 2 can have rank 1 , and hence that we can compute a point of rank as follows : .... > with(linearalgebra ) : > a:=diagonalmatrix([<<1,2>|<2,x1>>,<<1,x1>|<x1,x2>>,<<1,x2>|<x2,x3 > > , .. ] ) : > solvelmi(a,{},[n ] ) ; .... 
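as an illustration of the certification step described earlier (deciding positive semidefiniteness and the rank of the pencil at a computed rational point from the signs of characteristic-polynomial coefficients), the following is a hedged python/sympy sketch. it is not part of spectra, which performs this check in maple with its own exact routines; the pencil and the rational point below are illustrative stand-ins (the unit-disk spectrahedron and a point on its boundary).
....
# Hedged sketch (not SPECTRA code): exact PSD / rank certification of a rational
# point x* for the pencil A(x) = A0 + x1*A1 + ... + xn*An, using sympy's exact
# rational arithmetic.  The matrices and the point are illustrative.
from sympy import Matrix, Rational, symbols

lam = symbols('lam')

def pencil_value(A_list, x_star):
    """Evaluate A(x*) = A0 + sum_i x*_i * Ai exactly over the rationals."""
    A = A_list[0].copy()
    for Ai, xi in zip(A_list[1:], x_star):
        A = A + Ai * xi
    return A

def is_psd_exact(A):
    """Symmetric rational A is PSD iff det(lam*I + A) has only nonnegative coefficients."""
    m = A.shape[0]
    p = (lam * Matrix.eye(m) + A).det()          # exact polynomial in lam
    coeffs = p.as_poly(lam).all_coeffs()
    return all(c >= 0 for c in coeffs)

# Example: the unit-disk spectrahedron {x : [[1+x1, x2], [x2, 1-x1]] >= 0}
A0 = Matrix([[1, 0], [0, 1]])
A1 = Matrix([[1, 0], [0, -1]])
A2 = Matrix([[0, 1], [1, 0]])
x_star = [Rational(3, 5), Rational(4, 5)]        # a rational point on the boundary

A = pencil_value([A0, A1, A2], x_star)
print("psd :", is_psd_exact(A))                  # True
print("rank:", A.rank())                         # 1, i.e. the point lies on the boundary
....
the check relies on the fact that a symmetric matrix with rational entries has only real eigenvalues, so it is positive semidefinite exactly when det(lambda*I + A) has no negative coefficient; the rank is obtained exactly by gaussian elimination over the rationals.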
recall from section [ background ] that spectra examines iteratively a family of incidence varieties , a number growing exponentially with .for example there are incidence varieties to test to solve our problem for .hence we could expect spectra to perform poorly on this example .however , on our standard desktop pc equipped with intel i7 processor at 2.5ghz and 16 gb ram , we were able to handle spectrahedra of size in 29 seconds , and of size in 505 seconds .finally , we report on randomly generated examples . the rational entries of are generated as quotients of integers drawn uniformly in the interval ] .the algorithm is run for each value in ranks by solving the quadratic system of equations for a vector and a matrix with rows and columns whose entries are stored in a vector .it may happen that the rank of at a computed solution is strictly less than .* eithter the empty list in which case is empty , or * a rational enclosure of a single point , in the form + .... > solvelmi(a ) [ [ x1 = [ a1 , b1 ] , x2 = [ a2 , b2 ] , ... , xn = [ an , bn ] ] ] .... + where are rational numbers , displayed as ratios of integers .this means that each coordinate belongs to the interval ] are provide to isolate the computed point from this set of points .fgb : a library for computing grbner bases . in k. fukuda ,j. van der hoeven , m. joswig , n. takayama , editors , mathematical software - icms 2010 , volume 6327 of lecture notes in computer science , pp .84 - 87 , springer , berlin , 2010 .software available at www-polsys.lip6.fr//fgb e. l. kaltofen , b. li , z. yang , l. zhi .exact certification in global polynomial optimization via sums - of - squares of rational functions with rational coefficients . j. symbolic comput . 47(1):1 - 15 , 2012 .
this document describes our freely distributed maple library spectra , for semidefinite programming solved exactly with computational tools of real algebra . it solves linear matrix inequalities with symbolic computation in exact arithmetic and it is targeted to small - size , possibly degenerate problems for which symbolic infeasibility or feasibility certificates are required . * keywords * computer algebra , symbolic computation , linear matrix inequalities , semidefinite programming , low rank matrices , real algebraic geometry .
cloud computing is a disruptive it model allowing enterprises to focus on their core business activities . instead of investing in their own it infrastructures , they can now rent ready - to - use preconfigured virtual resources from cloud providers in a `` pay - as - you - go '' manner .organisations relying on fixed size private infrastructures often realise it can not match their dynamic needs , thus frequently being either under or overutilised .in contrast , in a cloud environment one can automatically acquire or release resources as they are needed a distinctive characteristic known as _autoscaling_. this is especially important for large scale web applications , since the number of users fluctuates over time and is prone to flash crowds as a result of marketing campaigns and product releases .most such applications follow the 3-tier architectural pattern and are divided in three standard layers / tiers : * * presentation layer * the end user interface . * * business / domain layer * implements the business logic .hosted in one or several application servers ( as ) .* * data layer * manages the persistent data . deployed in one or several database ( db ) servers .a user interacts with the presentation layer , which redirects the requests to an as which in turn can access the data layer .the presentation layer is executed on the client s side ( e.g. in a browser ) and thus scalability is not an issue . scalingthe db layer is a notorious challenge , since system architects have to balance between consistency , availability and partition tolerance following the results of the cap theorem .this field has already been well explored ( cattel surveys more than 20 related projects ) .furthermore , google has published about their new database which scales within and across data centres without violating transaction consistency .hence data layer scaling is beyond the scope of our work . in general , autoscaling the application servers ( as ) is comparatively straightforward . in an infrastructure as a service ( iaas ) cloud environment , the as vms are deployed `` behind '' a load balancer which redirects the incoming requests among them . whenever the servers capacity is insufficient , one or several new as vms are provisioned and associated with the load balancer and the db layer see figure [ fig:3tier ] ._ but what should be the type of the new as vm ? _ most major cloud providers like amazon ec2 and google compute engine offer a predefined set of vm types with different performance capacities and prices. currently , system engineers `` hardcode '' preselected vm types in the autoscaling rules based on their intuition or at best on historical performance observations .however , user workload characteristics vary over time leading to constantly evolving as capacity requirements . for example, the proportion of browsing , bidding and buying requests in an e - commerce system can change significantly during a holiday season , which can change the server utilisation patterns .middleware and operating system updates and reconfigurations can lead to changes in the utilisation patterns as well .this can also happen as a result of releasing new application features or updates .moreover , vm performance can vary significantly over time because of other vms collocated on the same physical host causing resource contentions .hence even vm instances of the same type can perform very differently . from the viewpoint of the cloud s client this can not be predicted . 
to illustrate better ,let us consider a large scale web application with hundreds of dedicated as vms .its engineers can analyse historical performance data to specify the most appropriate vm type in the autoscaling rules .however , they will have to reconsider their choice every time a new feature or a system upgrade is deployed .they will also have to constantly monitor for workload pattern changes and to react by adjusting the austoscaling rules .given that vm performance capacities also vary over time , the job of selecting the most suitable vm type becomes practically unmanageable .this can result in significant financial losses , because of using suboptimal vms . to address this , the key * contributions * of our work are ( i ) a machine learning approach which continuously learns the application s resource requirements and ( ii ) a dynamic vm type selection ( dvts ) algorithm , which selects a vm type for new as vms .since both workload specifics and vm performance vary over time , we propose an online approach , which learns the application s behaviour and the typical vm performance capacities in real time .it relieves system maintainers from having to manually reconfigure the autoscaling rules .the rest of the paper is organised as follows : in section [ related_work ] we describe the related works .section [ overview ] provides a succinct overview of our approach .section [ learning ] discusses the machine learning approaches we employ to `` learn '' the application s requirements in real time .section [ selection ] describes how to select an optimal vm type .section [ prototype ] details the architecture of our prototype and the benchmark we use for evaluation .section [ experiments ] describes our experiments and results .finally , section [ conclusion ] concludes and defines pathways for future work .the area of static computing resource management has been well studied in the context of grids , clouds , and even multi - clouds . however , the field of dynamic resource management in response to continuously varying workloads , which is especially important for web facing applications , is still in its infancy .horizontal autoscaling policies are the predominant approach for dynamic resource management and thus they have gained significant attention in recent years . lorido - botran et al .classify autoscaling policies as _ reactive _ and _ predictive _ or _ proactive _the most widely adopted _ reactive _ approaches are based on threshold rules for performance metrics ( e.g. cpu and ram utilisation ) . for each such characteristicthe system administrator provides a lower and upper threshold values .resources are provisioned whenever an upper threshold is exceeded . 
similarly ,if a lower threshold is reached resources are released .how much resources are acquired or released when a threshold is reached is specified in user defined autoscaling rules .there are different `` flavours '' of threshold based approaches .for example in amazon auto scaling one would typically use the average metrics from the virtual server farm , while rightscale provides a voting scheme , where thresholds are considered per vm and an autoscaling action is taken if the majority of the vms `` agree '' on it .combinations and extensions of both of these techniques have also been proposed ._ predictive _ or _ proactive _ approaches try to predict demand changes in order to allocate or deallocate resources .multiple methods using approaches like reinforcement learning , queuing theory and kalman filters to name a few have been proposed .our work is complementary to all these approaches .they indicate at what time resources should be provisioned , but do not select the resource type .our approach selects the best resource ( i.e. vm type ) once it has been decided that the system should scale up horizontally .fernandez et al .propose a system for autoscaling web applications in clouds .they monitor the performance of different vm types to infer their capacities .our approach to this is different , as we inspect the available to each vm cpu capacity and measure the amount of `` stolen '' cpu instructions by the hypervisor from within the vm itself .this allows us to normalise the vms resource capacities to a common scale , which we use to compare them and for further analysis .furthermore , their approach relies on a workload predictor , while ours is usable even in the case of purely reactive autoscaling .singh et al .use k - means clustering to analyse the workload mix ( i.e. the different type of sessions ) and then use a queueing model to determine each server s suitability .however , they do not consider the performance variability of virtual machines , which we take into account . also , they do not select the type of resource ( e.g. vm ) to provision and assume there is only one type , while this is precisely the focus of our work. a part of our work is concerned with automated detection of application behaviour changes through a hierarchical temporal memory ( htm ) model .similar work has been carried out by cherkasova et al . , who propose a regression based anomaly detection approach . however , they analyse only the cpu utilisation .moreover they consider that a set of user transactions types is known beforehand .in contrast , our approach considers ram as well and does not require application specific information like transaction types .tan et al .propose the prepare performance anomaly detection system .however , their approach can not be used by a cloud client , as it is built on top of the xen virtual machine manager to which external clients have no access .another part of our method is concerned with automatic selection of the _ learning rate _ and _ momentum _ of an artificial neural network ( ann ) .there is a significant amount of literature in this area as surveyed by moreira and fiesler .however , the works they overview are applicable for static data sets and have not been applied to learning from streaming online data whose patterns can vary over time .moreover , they only consider how the intermediate parameters of the backpropagation algorithm vary and do not use additional domain specific logic .although our approach is inspired by the work of vogl et al . 
as it modifies the _ learning rate _ and _ momentum _ based on the prediction error , we go further and we modify them also based on the _ anomaly score _ as reported by the hierarchical temporal memory ( htm ) models .figure [ fig : overview ] depicts an overview of our machine learning approach and how the system components interact . within each asvm we install a monitoring program which periodically records utilisation metrics .these measurements are transferred to an _ autoscaling component _ , which can be hosted either in a cloud vm or on - premises .it is responsible for ( i ) monitoring as vms performance ( ii ) updating machine learning models of the application behaviour and ( iii ) autoscaling . within each as vm the _ utilisation monitors _ report statistics about the cpu , ram , disk and network card utilisations and the number of currently served users .these records are transferred every 5 seconds to the _ autoscaling _ component , where they are normalised , as different vms have different de facto resource capacities . in the machine learning approacheswe only consider the cpu and ram utilisations , as disk and network utilisations of as vms are typically small . for each as vm the _ autoscaler _ maintains a separate single - region hierarchical temporal memory ( htm ) model , which is overviewed in a later section .in essence we use htms to detect changes in the application behaviour of each as vm .we prefer htm to other regression based anomaly detection approaches , as it can detect anomalies on a stream of multiple parameters ( e.g. cpu and ram ) .whenever monitoring data is retrieved from an as vm , the _ autoscaler _ trains its htm with the received number of users , cpu and ram utilisations and outputs an _ anomaly score _ defining how `` unexpected '' the data is . as a next step we use these utilisation measurements to train a 3-tier artificial neural network ( ann ) about the relationship between the number of served users and resource consumptions .we choose to use an ann because of its suitability for online data streams .other `` sliding window '' approaches operate only on a portion of the data stream . asa system s utilisation patterns can remain the same for long time intervals , the window sizes may need to become impractically large or even be dynamically adjusted . on the contrary , an ann does not operate on a fixed time window and is more adept with changes in the incoming data stream , as we will detail in a later section .there is only one ann and training samples from all as vms are used to train it .in essence the ann represents a continuously updated regression model , which given a number of users predicts the needed resources to serve them within a single vm without causing resource contentions .thus , we need to filter all training samples , which were taken during anomalous conditions ( e.g. insufficient cpu or ram capacity causing intensive context switching or disk swapping respectively ) . such samples are not indicative of the relationship between number of users and the resource requirements in the absence of resource contentions .furthermore , we use the _ anomaly score _ of each training sample ( extracted from htm ) to determine the respective _ learning speed _ and _ momentum _ parameters of the back propagation algorithm so that the ann adapts quickly to changes in the utilisation patterns . training the ann and the htms happens online from the stream of vm measurements in parallel with the running application . 
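to make the data flow above concrete, the following is a minimal python sketch of the autoscaler's ingestion loop: one anomaly detector per as vm (an htm region in our implementation) and one shared utilisation model (the ann), both updated from the 5-second measurement stream. the class and function names are illustrative stand-ins rather than the actual prototype code, and the capacity repository mentioned next is included simply as a dictionary.
....
# Hedged sketch of the monitoring/training loop; the detector and model are stubs
# standing in for the HTM (nupic) and the ANN (fann) used by the prototype.
from dataclasses import dataclass

@dataclass
class VmMeasurement:
    vm_id: str
    users: int      # concurrently served users
    cpu: float      # normalised CPU utilisation, 0..1
    ram: float      # normalised RAM utilisation, 0..1

class AnomalyDetectorStub:
    """Stands in for the per-VM HTM region; returns an anomaly score in [0, 1]."""
    def score_and_learn(self, m: VmMeasurement) -> float:
        return 0.0  # a real HTM would report how "unexpected" the measurement is

class SharedUtilisationModel:
    """Stands in for the shared ANN mapping number of users -> (cpu, ram)."""
    def train(self, users: int, cpu: float, ram: float, anomaly: float) -> None:
        pass        # one back-propagation step; learning rate depends on `anomaly`
    def predict(self, users: int) -> tuple:
        return (0.0, 0.0)

detectors = {}                    # vm_id -> AnomalyDetectorStub
model = SharedUtilisationModel()
capacity_repository = {}          # vm type -> list of recently observed capacities

def on_measurement(m: VmMeasurement, vm_type: str, capacity: float) -> None:
    """Called roughly every 5 seconds for each AS VM with normalised readings."""
    det = detectors.setdefault(m.vm_id, AnomalyDetectorStub())
    anomaly = det.score_and_learn(m)
    model.train(m.users, m.cpu, m.ram, anomaly)
    capacity_repository.setdefault(vm_type, []).append(capacity)

# usage: the utilisation monitors feed the autoscaler one record per VM per interval
on_measurement(VmMeasurement("as-1", users=120, cpu=0.55, ram=0.40),
               vm_type="m1.small", capacity=0.15)
....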
simultaneously we also maintain a _ vm capacity repository _ of the latest vm capacity measurements .when a new vm is needed by the autoscaling component , we use this repository to infer the potential performance capacity of all vm types . at that timethe ann is already trained adequately and given the predicted performance capacities can be used to infer how many users each vm type could serve simultaneously . based onthat we select the vm type , with minimal cost to number of users ratio .to measure vm performance utilisation , we use the _ sar _ , _ mpstat _ , _ vmstat _ and _ netstat _ linux monitoring tools .we use the mpstat _ % idle _ metric to measure the percentage of time during which the cpu was idle .the _ % steal _ metric describes the percentage of `` stolen '' cpu cycles by a hypervisor ( i.e. the proportion of time the cpu was not available to the vm ) and can be used to evaluate the actual vm cpu capacity .similarly , sar provides the _ % util _ and _ % ifutil _ metrics as indicative of the disk s and network card s utilisations . measuring the ram utilisation is more complex as operating systems keep in memory cached copies of recently accessed disk sectors in order to reduce disk access .although in general this optimisation is essential for vm performance , web application servers ( as ) are not usually i / o bound , as most of the application persistence is delegated to the data base layer . hence , using the _ vmstat _ ram utilisation metrics can be an overestimation of the actual memory consumption as it includes rarely accessed disk caches .thus , we use the _ `` active memory '' _ _ vmstat _ metric to measure memory consumption instead . it denotes the amount of recently used memory , which is unlikely to be claimed for other purposes .lastly , we need to evaluate the number of concurrently served users in an as vm .this could be extracted from the as middleware , but that would mean writing specific code for each type of middleware . moreover, some proprietary solutions may not expose this information .therefore , we use the number of distinct ip addresses with which the server has an active tcp socket , which can be obtained through the _ netstat _ command . typically ,the as vm is dedicated to running the as and does not have other outgoing connections except for the connection to the persistence layer .therefore , the number of addresses with active tcp sockets is a good measure of the number of currently served users . before proceeding to train the machine learning approaches, we need to normalise the measurements which have different `` scales '' , as the vms have different ram sizes and cpus with different frequencies . moreover , the actual cpu capacities within a single vm vary over time as a result of the dynamic collocation of other vms on the same host . as a first step in normalising the cpu load , we need to evaluate the actual cpu capacity available to each vm .this can be extracted from the _ /proc/ cpuinfo _ linux kernel file . if the vm has cores , _ /proc/ cpuinfo _ will list meta information about the physical cpu cores serving the vm including their frequencies .the sum of these frequencies is the maximal processing capacity the vm can get , provided the hypervisor does not `` steal '' any processing time . 
using the _% steal _ mpstat parameter we can actually see what percentage of cpu operations have been taken away by the hypervisor .subtracting this percentage from the sum of frequencies gives us the actual vm cpu capacity at the time of measurement .to normalise we further divide by the maximal cpu core frequency multiplied by the maximal number of cores of all considered vms in the cloud provider .this is a measure of the maximal vm cpu capacity one can obtain from the considered vm types . as clouds are made of commodity hardware , we will consider .this ensures that all values are in the range ] . in our environmentevery 5 seconds we feed each htm with a time stamp , the number of users and the cpu and ram utilisations of the respective vm .we use the standard nupic scalar and date encoders to convert the input to binary input . as a resultwe get an _ anomaly score _ denoting how expected the input is , in the light of the previously described algorithms .figure [ fig : ann ] depicts the topology of the artificial neural network ( ann ) .it has one input the number of users .the hidden layer has 250 neurons with the sigmoid activation function .the output layer has two output nodes with linear activation functions , which predict the normalised cpu and ram utilisations within an as vm .once a vm s measurements are received and normalised and the _ anomaly score _ is computed by the respective htm region , the ann can be trained . as discussed , we need to filter out the vm measurements which are not representative of normal , contention free application execution , in order to `` learn '' the `` right '' relationship between number of users and resource utilisations .we filter all vm measurements in which the cpu , ram , hard disk or network card utilisations are above a certain threshold ( e.g. 70% ) .similarly , we filter measurements with negligible load i.e. less than 25 users or less than 10% cpu utilisation .we also ignore measurements from periods during which the number of users has changed significantly e.g. in the beginning of the period there were 100 users and at the end there were 200 .such performance observations are not indicative of an actual relationship between number of users and resource utilisations .thus , we ignore measurements for which the number of users is less than 50% or more than 150% of the average of the previous 3 measured numbers of users from the same vm . since we are training the ann with streaming data , we need to make sure it is not overfitted to the latest training samples .for example if we have constant workload for a few hours we will be receiving very similar training samples in the ann during this period .hence the ann can become overfitted for such samples and lose its fitness for the previous ones .to avoid this problem , we filter out measurements / training samples , which are already well predicted . more specifically , if a vm measurement is already predicted with a _ root mean square error _ ( rmse ) less than it is filtered out and the ann is not trained with it .we call this value because it is obtained for each training sample before the ann is trained with it .it is computed as per eq .[ eq : rmse ] , where and are the values of the output neurons and the expected values respectively . 
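the cpu and ram normalisation described above can be sketched as follows. the inputs would come from /proc/cpuinfo (per-core frequencies), mpstat (%steal, %idle) and vmstat (active memory); here they are passed in as plain parameters, and the maximal core frequency, core count and ram size of the considered vm types are assumed constants, so this is a hedged illustration rather than the prototype's monitoring code.
....
# Hedged sketch of the capacity/utilisation normalisation.  F_MAX_MHZ, N_MAX_CORES
# and the maximal RAM are assumptions standing in for "the largest values among the
# considered VM types".
F_MAX_MHZ = 3500.0     # assumed maximal core frequency across considered VM types
N_MAX_CORES = 8        # assumed maximal number of cores across considered VM types

def normalised_cpu_capacity(core_freqs_mhz, steal_pct):
    """Sum of core frequencies, minus the share 'stolen' by the hypervisor,
    scaled into [0, 1] by the largest capacity any considered VM type can offer."""
    raw = sum(core_freqs_mhz) * (1.0 - steal_pct / 100.0)
    return raw / (F_MAX_MHZ * N_MAX_CORES)

def normalised_cpu_utilisation(core_freqs_mhz, steal_pct, idle_pct):
    """Utilised share of the normalised capacity, using mpstat's %idle."""
    busy_fraction = 1.0 - idle_pct / 100.0
    return normalised_cpu_capacity(core_freqs_mhz, steal_pct) * busy_fraction

def normalised_ram_utilisation(active_kb, max_ram_kb):
    """'Active memory' from vmstat divided by the largest RAM of the considered types."""
    return active_kb / max_ram_kb

# Example: a 1-core VM at 2500 MHz with 10% stolen cycles and 40% idle CPU.
print(normalised_cpu_utilisation([2500.0], steal_pct=10.0, idle_pct=40.0))
....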
with each measurement , which is not filtered out , we perform one or several iterations / epochs of the back - propagation algorithm with the number of users as input and the normalised cpu and ram utilisations as expected output .the back - propagation algorithm has two important parameters the _ learning rate _ and the _ momentum_. in essence , the _ learning rate _ is a ratio number in the interval which defines the amount of weight update in the direction of the gradient descent for each training sample . for each weight update ,the _ momentum _ term defines what proportion of the previous weight update should be added to it .it is also a ratio number in the interval . using a _ momentum _the neural network becomes more resilient to oscillations in the training data by `` damping '' the optimisation procedure .for our training environment we need a low _ learning rate _ and a high _ momentum _ , as there are a lot of oscillations in the incoming vm measurements .we select the _ learning rate _ to be and the _ momentum _ .we call these values the _ ideal parameters _ , as these are the values we would like to use once the ann is close to convergence . however , the low _ learning rate _ and high _ momentum _ result in slow convergence in the initial stages , meaning that the ann may not be well trained before it is used . furthermore ,if the workload pattern changes , the ann may need a large number of training samples and thus time until it is tuned appropriately .hence the actual _ learning rate _ and _ momentum _ must be defined dynamically .one approach to resolve this is to start with a high _ learning rate _ and low _ momentum _ and then respectively decrease / increase them to the desired values .this allows the back - propagation algorithm to converge more rapidly during the initial steps of the training .we define these parameters in the initial stages using the asymptotic properties of the sigmoid function , given in eq .[ eq : sigmoid ] . as we need to start with a high _ learning rate _ and then decrease it gradually to , we could define the learning rate for the -th training sample as .however , the sigmoid function decreases too steeply for negative integer parameters and as a result the learning rate is higher than for just a few training samples . to solve this we use the square root of instead and thus our first approximation of the _ learning rate _ is : as a result gradually decreases as more training samples arrive .figure [ fig : lr - mom ] depicts how it changes over time .we also need to ensure that it increases in case unusual training data signalling a workload change arrives and thus we need to elaborate . for thiswe keep a record of the last 10 samples _ anomaly scores _ and errors ( i.e. ) .the higher the latest anomaly scores , the more `` unexpected '' the samples are and therefore the _ learning rate _ must be increased .similarly , the higher the sample s compared to the previous errors , the less fit for it the ann is and thus the _ learning rate _ must be increased as well .thus our second elaborated approximation of the _ learning rate _ is : where and are the _ anomaly score _ and the error of the -th sample and is the average error of the last 10 samples . note that we use the sigmoid function for the anomaly scores in order to diminish the effect of low values . in some cases the _ learning rate _ can become too big in the initial training iterations , which will in fact hamper the convergence . 
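the filtering rules above can be summarised in a short python sketch. the thresholds (70% utilisation, 25 users, 10% cpu, the 50%-150% band around the recent average) are taken from the text; the exact form of eq. [eq:rmse] is elided above, so the sketch assumes the usual root mean square error over the two ann outputs, and the rmse cut-off is left as a parameter.
....
# Hedged sketch of the training-sample filter; rmse_threshold is a parameter because
# its value is not restated here.
import math

def rmse_pre(predicted, expected):
    """RMSE over the two outputs (cpu, ram), computed *before* training the ANN."""
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(predicted, expected)) / 2.0)

def keep_sample(users, cpu, ram, disk, net, recent_users, predicted, rmse_threshold):
    # (1) drop contended samples: any resource above 70% utilisation
    if max(cpu, ram, disk, net) > 0.70:
        return False
    # (2) drop negligible load: fewer than 25 users or under 10% CPU
    if users < 25 or cpu < 0.10:
        return False
    # (3) drop samples taken while the user count was changing sharply:
    #     outside 50%-150% of the average of the previous 3 readings of this VM
    if len(recent_users) >= 3:
        avg = sum(recent_users[-3:]) / 3.0
        if not (0.5 * avg <= users <= 1.5 * avg):
            return False
    # (4) drop samples the ANN already predicts well, to avoid overfitting
    if rmse_pre(predicted, (cpu, ram)) < rmse_threshold:
        return False
    return True

# usage: a clean, still-informative sample is kept (prints True)
print(keep_sample(users=100, cpu=0.45, ram=0.30, disk=0.05, net=0.02,
                  recent_users=[95, 100, 105], predicted=(0.60, 0.40),
                  rmse_threshold=0.05))
....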
to overcome this problem , for each sample we run a training iteration with , compute its rmse and then revert the results of this iteration . by comparing and can see if training with this will contribute to the convergence .if not , we use the ideal parameter instead .thus we finally define the _ learning rate _parameter in eq .[ eq : lrk ] : similarly we have to gradually increase the _ momentum _ as we decrease the _ learning rate _ until the ideal _ momentum _ is reached .if a workload change is present we need to decrease the _ momentum _ in order to increase the learning speed .hence , we can just use the ratio of the ideal learning rate to the current one as shown in eq .[ eq : mk ] . figure [ fig : lr - mom ] depicts how the _ learning rate _ and _ momentum _ change during the initial training stages , given there are no anomalies , accuracy losses and i.e. when .figure [ fig : lr - init ] shows the actual given realistic workload .furthermore , to speed up convergence it is beneficial to run multiple _ epochs _( i.e. repeated training iterations ) with the first incoming samples and with samples taken after a workload change .the ideal _ learning rate _ and its approximation already embody this information and we could simply use their ratio .however , can easily exceed 300 given , resulting in over - training with particular samples .hence we take the logarithm of it as in eq .[ eq : ek ] : null 0 return when a new vm has to be provisioned the ann should be already trained so that we can estimate the relationship between number of users and cpu and ram requirements .the procedure is formalised in algorithm [ algo : vmtselect ] .we loop over all vm types ( line 3 ) and for each one we estimate its normalised cpu and ram capacity based on the _ capacity repository _ as explained earlier ( lines 5 - 6 ) .the vm cost per time unit ( e.g. hour in aws or minute in google compute engine ) is obtained from the provider s specification ( line 7 ) .next we approximate the number of users that a vm of this type is expected to be able to serve ( lines 10 - 18 ) .we iteratively increase by starting from , which is the minimal number of users we have encountered while training the neural network .we use the procedure ( defined separately in algorithm [ algo : utilest ] ) to estimate the normalised cpu and ram demands that each of these values of would cause .we do so until the cpu or ram demands exceed the capacity of the inspected vm type .hence , we use the previous value of as an estimation of the number of users a vm of that type can accommodate . finally , we select the vm type with the lowest cost to number of users ratio ( lines 20 - 23 ) .algorithm [ algo : utilest ] describes how to predict the normalised utilisations caused by concurrent users .if is less than the maximum number of users we trained the ann with , then we can just use the ann s prediction ( line 5 ) . however , if is greater than the ann may not predict accurately . for example if we have used a single _ small _ vm to train the ann , and then we try to predict the capacity of a _ large _vm , can become much larger than the entries of the training data and the regression model may be inaccurate .thus , we extrapolate the cpu and ram requirements ( lines 7 - 11 ) based on the range of values we trained the ann with and the performance model we have proposed in a previous work . 
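algorithms [algo:vmtselect] and [algo:utilest] can be sketched as follows. the ann prediction, the capacity figures, the step size and the prices are illustrative stand-ins (the real system reads capacities from the capacity repository and extrapolates beyond the trained user range, which is not reproduced here); the selection criterion, lowest cost per supported user, is the one described above.
....
# Hedged sketch of the VM-type selection step.  predict_utilisation, DELTA_USERS,
# MIN_TRAINED_USERS and the capacity/cost figures are illustrative assumptions.
DELTA_USERS = 5          # assumed step size of the iterative search
MIN_TRAINED_USERS = 25   # smallest user count seen while training the ANN

def predict_utilisation(users):
    """Stand-in for the ANN: normalised (cpu, ram) demand caused by `users` users."""
    return (0.002 * users, 0.0015 * users)      # toy linear model for illustration

def users_supported(cpu_capacity, ram_capacity):
    """Largest user count whose predicted demand still fits the VM type's capacity."""
    users = MIN_TRAINED_USERS
    while True:
        cpu, ram = predict_utilisation(users + DELTA_USERS)
        if cpu > cpu_capacity or ram > ram_capacity:
            return users
        users += DELTA_USERS

def select_vm_type(vm_types):
    """Pick the type with the lowest cost per supported user."""
    best, best_ratio = None, float("inf")
    for name, spec in vm_types.items():
        n = users_supported(spec["cpu"], spec["ram"])
        ratio = spec["cost"] / max(n, 1)
        if ratio < best_ratio:
            best, best_ratio = name, ratio
    return best

# Illustrative normalised capacities and hourly costs (not the paper's table):
vm_types = {
    "m1.small":  {"cpu": 0.15, "ram": 0.10, "cost": 0.058},
    "m1.medium": {"cpu": 0.30, "ram": 0.22, "cost": 0.117},
    "m3.medium": {"cpu": 0.35, "ram": 0.22, "cost": 0.098},
}
print(select_vm_type(vm_types))
....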
there are two main approaches for experimental validation of a distributed system s performance: through a simulation or a prototype . discrete event simulators like cloudsim have been used throughout industry and academia to quickly evaluate scheduling and provisioning approaches for large scale cloud infrastructure without having to pay for expensive test beds . unfortunately , such simulators work on a simplified cloud performance model and do not represent realistic vm performance variability , which is essential for testing our system . moreover , simulations can be quite inaccurate when the simulated system serves resource demanding workloads , as they do not consider aspects like cpu caching , disk data caching in ram and garbage collection . therefore , we test our method through a prototype and a standard benchmark deployed in a public cloud environment . we validate our approach with the cloudstone web benchmark deployed in amazon aws . it follows the standard 3-tier architecture . by default cloudstone is not scalable , meaning that it can only use a single as . thus we had to extend it to accommodate multiple servers . our installation scripts and configurations are available as open source code . for space considerations we will not discuss these technical details and will only provide an overview . the interested readers can refer to our online documentation and installation instructions . the benchmark deployment topology is depicted in figure [ fig : cloudstone ] . cloudstone uses the _ faban _ harness to manage the runs and to emulate users . the _ faban driver _ , which is deployed in the client vm , communicates with the _ faban agents _ deployed in other vms to start or stop tests . it also emulates the incoming user requests to the application . these requests arrive at a haproxy _ load balancer _ which distributes them across one or many application servers ( as ) . cloudstone is based on the olio application , which is a php social network website deployed in a nginx server . in the beginning we start with a single as `` behind '' the _ load balancer _ . when a new as vm is provisioned we associate it with the _ load balancer _ . we update its weighted round robin policy , so that incoming requests are distributed among the as vms proportionally to their declared cpu capacity ( i.e. ecu ) . the persistent layer is hosted in a mysql server deployed within a separate db vm . cloudstone has two additional components : ( i ) a geocoding service called _ geocoder _ , hosted in an apache tomcat server , and ( ii ) a shared _ file storage _ hosting media files . they are both required by all application servers . we have deployed the geocoding service in the db vm . the file storage is deployed in a network file system ( nfs ) server on a separate vm with 1 tb ebs storage , which is mounted from each as vm . we use `` m3.medium '' vms for the client , load balancer and db server and `` m1.small '' for the nfs server . the types of the as vms are defined differently for each experiment . all vms run 64bit ubuntu linux 14.04 . our prototype of an autoscaling component is hosted on an on - premises physical machine and implements the previously discussed algorithms and approaches . it uses the jclouds multi - cloud library to provision resources , and thus can be used in other clouds as well . we use the nupic and fann libraries to implement htm and ann respectively . we ignore the first 110 _ anomaly scores _ reported from the htm , as we observed that these results are inaccurate ( i.e.
always 1 or 0 ) until it receives initial training . whenever a new as vm is provisioned we initialise it with a deep copy of the htm of the first as vm , which is the most trained one .the monitoring programs deployed within each vm are implemented as bash scripts , and are accessed by the autoscaling component through ssh .our implementation of algorithm [ algo : utilest ] uses .previously we discussed that the number of current users could be approximated by counting the number of distinct ip addresses to which there is an active tcp session .however , in cloudstone all users are emulated from the same client vm and thus have the same source ip address .thus , we use the number of recently modified web server session files instead .our autoscaling component implementation follows the amazon auto scaling approach and provisions a new as vm once the average utilisation of the server farm reaches 70% for more than 10 seconds .hence , we ensure that in all experiments the as vms are not overloaded . thus ,even if there are sla violations , they are caused either by the network or the db layer , and the as layer does not contribute to them .we also implement a _ cool down _ period of 10 minutes .in our experiments , we consider three vm types : _m1.small_ , _m1.medium _ and _ m3.medium_. table [ tbl : vmt ] summarises their cost and declared capacities in the sydney aws region which we use . .awsvm type definitions . [ cols="<,>,<,<",options="header " , ] [ tbl : vmt ] in all experiments we use the same workload .we start by emulating 30 users and each 6 minutes we increase the total number of users with 10 until 400 users are reached . to achieve thiswe run a sequence of cloudstone benchmarks , each having 1 minute ramp - up and 5 minutes steady state execution time .given cloudstone s start - up and shut - down times , this amounts to more than 5 hours per experiment .the goal is to gradually increase the number of users , thus causing the system to scale up multiple times . to test our approach in the case of a workload characteristic changewe `` inject '' such a change 3.5 hours after each experiment s start . to doso we manipulate the _ utilisation monitors _ to report higher values .more specifically they increase the reported cpu utilisations with 10% and the reported ram utilisation with 1 gb plus 2 mb for every currently served user .we implement one experiment , which is initialised with a _m1.small _ as vm and each new vm s type is chosen based on our method ( dvts ) .we also execute 3 baseline experiments , each of which statically selects the same vm type whenever a new vm is needed , analogously to the standard aws auto scaling rules .first we investigate the behaviour of dvts before the workload change .it continuously trains one htm for the first as vm and the ann . in the initial stagesthe ann _ learning rate _ and _ momentum _ decrease and increase respectively to facilitate faster training .for example , the _ learning rate _ ( defined in eq .[ eq : lrk ] ) during the initial stages is depicted in fig [ fig : lr - init ] .it shows how drastically reduces as the ann improves its accuracy after only a few tens of training samples .once the as vm gets overloaded we select a new vm type . 
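for reference, the scale-up rule used by the prototype (provision a new as vm when the farm's average utilisation exceeds 70% for more than 10 seconds, subject to a 10 minute cool down) can be sketched as below; the timing constants come from the text, while the bookkeeping is an assumption.
....
# Hedged sketch of the reactive scale-up trigger with cool-down; not the prototype's
# actual code.
import time

UTIL_THRESHOLD = 0.70
BREACH_SECONDS = 10
COOL_DOWN_SECONDS = 600

class ScaleUpTrigger:
    def __init__(self):
        self.breach_started = None          # when the farm first exceeded the threshold
        self.last_scale_up = -float("inf")  # time of the last provisioning action

    def should_scale_up(self, avg_utilisation, now=None):
        now = time.time() if now is None else now
        if avg_utilisation <= UTIL_THRESHOLD:
            self.breach_started = None
            return False
        if self.breach_started is None:
            self.breach_started = now
        sustained = now - self.breach_started > BREACH_SECONDS
        cooled_down = now - self.last_scale_up > COOL_DOWN_SECONDS
        if sustained and cooled_down:
            self.last_scale_up = now
            self.breach_started = None
            return True
        return False

# usage: feed one averaged farm reading per monitoring interval
trigger = ScaleUpTrigger()
readings = [(0, 0.70), (5, 0.75), (10, 0.80), (16, 0.85)]   # (seconds, avg utilisation)
for t, util in readings:
    if trigger.should_scale_up(util, now=t):
        print("scale up at t =", t)          # fires at t = 16 (breach sustained > 10 s)
....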
at this pointwe only have information about _m1.small _ in the _ capacity repository _ and therefore we infer the other cpu capacities based on eq .[ eq : cpucapest ] .finally using algorithm [ algo : vmtselect ] we select _m3.medium _ as the type for the second vm .after the new vm is instantiated , the autoscaling component starts its monitoring .it trains the ann and a new dedicated htm with its measurements. it also updates the _ capacity repository _ with the cpu capacity of the new vm .surprisingly , we observe that on average its cpu capacity is about 35% better than the one of the _ m1.small _vm , even though according to the specification _m3.medium _ has 3 ecus and _ m1.small _ has 1 .therefore , the previous extrapolation of _m3.medium_ s capacity has been an overestimation .hence , when a new vm is needed again , the algorithm selects _m1.small _ again .3.5 hours after the start of the experiment the workload change is injected .this is reflected in the htms anomaly scores and the ann s errors .consequently , the _ learning rate _ , the _ momentum _ and the _ epochs _ also change to speed up the learning process as per equations [ eq : lrk ] , [ eq : mk ] and [ eq : ek ] and as a result the ann adapts quickly to the workload change .as discussed for each sample we compute its error ( rmse - pre ) before updating the ann .figure [ fig : rmse - inj ] depicts how these errors increase when the change is injected and decrease afterwards as the ann adapts timely .eventually the load increases enough so the system needs to scale up again . due to the injected change, the workload has become much more memory intensive , which is reflected in the ann s prediction . hence _m1.small _ can serve just a few users , given it has only 1.7 gb ram . at that pointthe cpu capacity of _m1.medium _ is inferred from the capacities of _ m1.small _ and _ m3.medium _ as per eq .[ eq : cpucapest ] , since it has not been used before .consequently algorithm [ algo : vmtselect ] selects _m1.medium _ for the 4th vm just before the experiment completes . for each experiment , figure [ fig : timeline ] depicts the timelines of the allocated vms and the total experiment costs . for each vm the type and costare specified to the right .our selection policy is listed as _dvts_. the baseline policy which statically selects _m1.small _ allocates 8 new vms after the workload change as _m1.small _ can serve just a few users under the new workload .in fact , if there was no _ cool down _ period in the autoscaling , this baseline would have exceeded the aws limit of allowed number of vm instances before the end of the experiment . 
the baselines which select _ m1.medium _ and _ m3.medium _fail to make use of _ m1.small _ instances before the change injection , which offers better performance for money .admittedly , in the beginning dvts did a misstep with the selection of _m3.medium_ , because it started with an empty _ capacity repository _ and had to populate it and infer cpu capacities `` on the go '' .this could have been avoided by prepopulating the _ capacity repository _ with test or historical data .we could expect that such inaccuracies are avoided at later stages , once more capacity and training data is present .still , our approach outperformed all baselines in terms of incurred costs with more than 20% even though its effectiveness was hampered by the lack of contextual data in the initial stages .our experiments tested dvts and the baselines with a workload , which is lower than what is observed in some applications . while our tests did not allocate more than 12 vms ( in the baseline experiment , which statically allocates _m1.small_ ) many real world systems allocate hundreds or even thousands of servers .we argue that in such cases , dvts will perform better than demonstrated , as there will be much more training data and thus the vm types capacity estimations will be determined more accurately and the machine learning approaches will converge faster . as discussed , that would allow some of the initial missteps of dvts to be avoided. moreover , as the number of as vms grows , so does the cost inefficiency caused by the wastage of allocated resources , which can be reduced by dvts . finally ,the response times in the dvts experiment and all baseline experiments were equivalent .all experiments scale up once the as vms utilisations exceed the predefined thresholds , and thus never become overloaded enough to cause response delays .the load balancer is equally utilised in all experiments , as it serves the same number of users , although it redirects them differently among the as vms .similarly , the db layer is equally utilised , as it always serves all users from all as vms .in this work we have introduced an approach for vm type selection when autoscaling application servers .it uses a combination of heuristics and machine learning approaches to `` learn '' the application s performance characteristics and to adapt to workload changes in real time . to validate our work ,we have developed a prototype , extended the cloudstone benchmark and executed experiments in aws ec2 .we have made improvements to ensure our machine learning techniques train quickly and are usable in real time .also we have introduced heuristics to approximate vm resource capacities and workload resource requirements even if there is no readily usable data , thus making our approach useful given only partial knowledge .results show that our approach can adapt timely to workload changes and can decrease the cost compared to typical static selection policies .our approach can achieve even greater efficiency , if it periodically replaces the already running vms with more suitable ones in terms of cost and performance , once there is a workload change .we will also work on new load balancing policies , which take into account the actual vm capacities .another promising avenue is optimising the scaling down mechanisms i.e. 
selecting which vms to terminate when the load decreases .also , we plan to extend our approach , which currently optimises cost , to also consider other factors like energy efficiency .this would be important when executing application servers in private clouds .finally , we plan to incorporate in our algorithms historical data about vm types resource capacity and workload characteristics .we thank rodrigo calheiros , amir vahid dastjerdi , adel nadjaran toosi , and simone romano for their comments on improving this work .we also thank amazon.com , inc for their support through the aws in education research grant .+ e. brewer , `` towards robust distributed systems , '' in _ proceedings of the annual acm symposium on principles of distributed computing _ , vol .19.1em plus 0.5em minus 0.4emnew york , ny , us : acm , jul 2000 , pp .710 . j. c. corbett , j. dean , m. epstein , a. fikes , c. frost , j. j. furman , s. ghemawat , a. gubarev , c. heiser , p. hochschild , w. hsieh , s. kanthak , e. kogan , h. li , a. lloyd , s. melnik , d. mwaura , d. nagle , s. quinlan , r. rao , l. rolig , y. saito , m. szymaniak , c. taylor , r. wang , and d. woodford , `` spanner : google s globally distributed database , '' _ acm trans ._ , vol . 31 , no . 3 , pp . 8:18:22 , aug . 2013 .l. cherkasova , k. ozonat , n. mi , j. symons , and e. smirni , `` automated anomaly detection and performance modeling of enterprise applications , '' _ acm transactions on computer systems _ , vol .27 , no . 3 , pp . 132 ,o. tickoo , r. iyer , r. illikkal , and d. newell , `` modeling virtual machine performance : challenges and approaches , '' _ acm sigmetrics performance evaluation review _ , vol .37 , no . 3 , pp .5560 , jan .j. dejun , g. pierre , and c .- h .chi , `` ec2 performance analysis for resource provisioning of service - oriented applications , '' in _ proceedings of the international conference on service - oriented computing ( icsoc 2009 ) _ , ser .icsoc / servicewave09.1em plus 0.5em minus 0.4emberlin , heidelberg : springer - verlag , 2009 , pp .197207 .j. schad , j. dittrich , and j .- a .quian - ruiz , `` runtime measurements in the cloud : observing , analyzing , and reducing variance , '' _ the proceedings of the vldb endowment ( pvldb ) _ , vol . 3 , no . 1 - 2 , pp .460471 , sep .j. tordsson , r. s. montero , r. moreno - vozmediano , and i. m. llorente , `` cloud brokering mechanisms for optimized placement of virtual machines across multiple providers , '' _ future generation computer systems _ , vol .28 , no . 2 ,pp . 358367 , 2012 .t. lorido - botrn , j. miguel - alonso , and j. a. lozano , `` auto - scaling techniques for elastic applications in cloud environments , '' department of computer architecture and technology , university of the basque country , tech .ehu - kat - ik-09 - 12 , 2012 .t. chieu , a. mohindra , a. karve , and a. segal , `` dynamic scaling of web applications in a virtualized cloud computing environment , '' in _ proceedings of the ieee international conference on e - business engineering ( icebe 2009)_.1em plus 0.5em minus 0.4emieee , oct .2009 , pp .281286 .t. chieu , a. mohindra , and a. karve , `` scalability and performance of web applications in a compute cloud , '' in _ proceedings of the ieee international conference on e - business engineering _ , 2011 , pp .317323 .b. simmons , h. ghanbari , m. litoiu , and g. 
iszlai , `` managing a saas application in the cloud using paas policy sets and a strategy - tree , '' in _ proceedings of the 7th international conference on network and services management _ , ser .cnsm 11.1em plus 0.5em minus 0.4emlaxenburg , austria , austria : international federation for information processing , 2011 , pp .343347 .e. barrett , e. howley , and j. duggan , `` applying reinforcement learning towards automating resource allocation and application scalability in the cloud , '' _ concurrency and computation : practice and experience _ , vol . 25 , no . 12 , pp . 16561674 , 2013 .x. dutreilh , s. kirgizov , o. melekhova , j. malenfant , n. rivierre , and i. truck , `` using reinforcement learning for autonomic resource allocation in clouds : towards a fully automated workflow , '' in _ proceedings of the 7th international conference on autonomic and autonomous systems ( icas 2011 ) _ , may 2011 , pp .a. ali - eldin , j. tordsson , and e. elmroth , `` an adaptive hybrid elasticity controller for cloud infrastructures , '' in _ network operations and management symposium ( noms ) , 2012 ieee _ , april 2012 , pp . 204212 .a. gandhi , p. dube , a. karve , a. kochut , and l. zhang , `` adaptive , model - driven autoscaling for cloud applications , '' in _11th international conference on autonomic computing , icac 14 _, 2014 , pp . 5764 .r. singh , u. sharma , e. cecchet , and p. shenoy , `` autonomic mix - aware provisioning for non - stationary data center workloads , '' in _ proceedings of the 7th international conference on autonomic computing _icac 10.1em plus 0.5em minus 0.4emnew york , ny , usa : acm , 2010 , pp . 2130 .y. tan , h. nguyen , z. shen , x. gu , c. venkatramani , and d. rajan , `` prepare : predictive performance anomaly prevention for virtualized cloud systems , '' in _ proceedings of the 32nd international conference on distributed computing systems ( icdcs ) _ , june 2012 , pp .285294 .w. lloyd , s. pallickara , o. david , j. lyon , m. arabi , and k. rojas , `` performance implications of multi - tier application deployments on infrastructure - as - a - service clouds : towards performance modeling , '' _ future generation computer systems _ , vol .29 , no . 5 , pp . 12541264 , 2013 .v. mountcastle , `` an organizing principle for cerebral function : the unit model and the distributed system , '' in _ the mindful brain _ , g. edelman and v. mountcastle , eds.1em plus 0.5em minus 0.4emcambridge , ma , us : mit press , 1978 .r. n. calheiros , r. ranjan , a. beloglazov , c. a. f. d. rose , and r. buyya , `` cloudsim : a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms , '' _ software : practice and experience _ , vol .41 , no . 1 ,pp . 2350 , january 2011 .w. sobel , s. subramanyam , a. sucharitakul , j. nguyen , h. wong , a. klepchukov , s. patil , a. fox , and d. patterson , `` cloudstone : multiplatform , multi - language benchmark and measurement tools for web 2.0 , '' in _ proceedings of cloud computing and its applications ( cca 08 ) _ , ser .cca 08 , 2008 .
autoscaling is a hallmark of cloud computing as it allows flexible just - in - time allocation and release of computational resources in response to dynamic and often unpredictable workloads . this is especially important for web applications whose workload is time dependent and prone to flash crowds . most of them follow the 3-tier architectural pattern , and are divided into presentation , application / domain and data layers . in this work we focus on the application layer . reactive autoscaling policies of the type _ `` instantiate a new virtual machine ( vm ) when the average server cpu utilisation reaches x% '' _ have been used successfully since the dawn of cloud computing . but which vm type is the most suitable for the specific application at the moment remains an open question . in this work , we propose an approach for dynamic vm type selection . it uses a combination of online machine learning techniques , works in real time and adapts to changes in the users workload patterns , application changes as well as middleware upgrades and reconfigurations . we have developed a prototype , which we tested with the cloudstone benchmark deployed on aws ec2 . results show that our method quickly adapts to workload changes and reduces the total cost compared to the industry standard approach .
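The reactive policies mentioned above ("instantiate a new vm when the average server cpu utilisation reaches x%") can be stated in a few lines of code. The sketch below is given only as a point of reference for the dynamic type selection discussed in this paper; the 70% threshold, the vm type label and the utilisation readings are illustrative assumptions rather than settings taken from the experiments.

```python
# Minimal sketch of a reactive autoscaling baseline of the form
# "instantiate a new VM when the average server CPU utilisation reaches x%".
# Threshold, VM type label and sample readings are illustrative assumptions.
from statistics import mean

def should_scale_up(cpu_utilisations, threshold=0.70):
    """True if average CPU utilisation across application-server VMs exceeds the threshold."""
    return mean(cpu_utilisations) > threshold

def reactive_step(cpu_utilisations, fleet, vm_type="m1.small"):
    """One control step of a static-type baseline: always provision the same VM type."""
    if should_scale_up(cpu_utilisations):
        fleet.append(vm_type)  # a dynamic selector would choose the type here instead
    return fleet

print(reactive_step([0.85, 0.90, 0.40], ["m1.small", "m1.small", "m1.small"]))
```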
competitions play an important role in society , economics , and politics .furthermore , competitions underlie biological evolution and are replete in ecology , where species compete for food and resources .sports are an ideal laboratory for studying competitions .in contrast with evolution , where records are incomplete , the results of sports events are accurate , complete , and widely available .randomness is inherent to competitions .the outcome of a single match is subject to a multitude of factors including game location , weather , injuries , etc , in addition to the inherent difference in the strengths of the opponents .just as the outcome of a single game is not predictable , the outcome of a long series of games is also not completely certain . in this paper, we review a series of our studies that focus on the role of randomness in competitions . among the questions we askare : what is the likelihood that the strongest team wins a championship ?what is the likelihood that the weakest team wins ?how efficient are the common competition formats and how `` accurate '' is their outcome ?we introduce an elementary model where a weaker team wins against a stronger team with a fixed _ upset probability _ , and use this elementary random process to analyze a series of competitions . to help calibrate our model , we first determine the favorite and the underdog from the win - loss record over many years of sports competition from several major sports .we find that the distribution of win percentage approaches a universal scaling function when the number of games and the number of teams are both large .we then simulate a realistic number of games and a realistic number of teams , and demonstrate that our basic competition process successfully captures the empirical distribution of win percentage in professional baseball .moreover , we study the empirical upset frequency and observe that this quantity differentiates professional sports leagues , and furthermore , illuminates the evolution of competitive balance .next , we apply the competition model to single - elimination tournaments where , in each match , the winner advances to the next round and the loser is eliminated .we use the very same competition rules where the underdog wins with a fixed probability . here , we introduce the notion of innate strength and assume that entering the competition , the teams are ranked .we find that the typical rank of the winner decays algebraically with the size of the tournament .moreover , the rank distribution for the winner has a power - law tail .hence , larger tournaments do produce stronger winners , but nevertheless , even the weakest team may have a realistic chance of winning the entire tournament .therefore , tournaments are efficient but unfair .further , we study the league format , where every team plays every other team .we note that the number of wins for each team performs a biased random walk . using heuristic scaling arguments, we establish that the top teams have a realistic chance of becoming champion , while it is highly unlikely that the weakest teams can win the championship .in addition , the total number of games required to guarantee that the best team wins is cubic in . 
in this sense , leagues are fair but inefficient .finally , we propose a gradual elimination algorithm as an efficient way to determine the champion .this hybrid algorithm utilizes a preliminary round where the teams play a small number of games and a small fraction of the teams advance to the next round .the number of games in the preliminary round is large enough to ensure the stronger teams advance . in the championship round ,each team plays every other team ample times to guarantee that the strongest team always wins .this algorithm yields a significant improvement in efficiency compared to a standard league schedule .the rest of this paper is organized as follows . in section ii , the basic competition model is introduced and its predictions are compared with empirical standings data . the notion of innate team strengthis incorporated in section iii , where the random competition process is used to model single - elimination tournaments . scaling laws for the league formatare derived in section iv .scaling concepts are further used to analyze the gradual elimination algorithm proposed in section v. finally , basic features of our results are summarized in section vi .in our competition model , teams participate in a series of games .two teams compete head to head and , at the end of each match , one team is declared the winner and the other as the loser .there are no ties .to study the effect of randomness on competitions , we consider the scenario where there is a fixed _ upset probability _ that a weaker team upsets a stronger team .this probability has the bounds .the lower bound corresponds to predictable games where the stronger team always wins , and the upper bound corresponds to random games .we consider the simplest case where the upset probability does not change with time and is furthermore independent of the relative strengths of the competitors . in each game , we determine the stronger and the weaker team from current win - loss records .let us consider a game between a team with wins and a team with wins .the competition outcome is stochastic : if , where .if , the winner is chosen randomly .initially , all teams have zero wins and zero losses .we use a kinetic framework to analyze the outcome of this random process , taking advantage of the fact that the number of games is a measure of time .we randomly choose the two competing teams and update the time by , with , after each competition .with this normalization , each team participates in one competition per unit time .let be the fraction of teams with wins at time .this probability distribution must be normalized , . in the limit , this distribution evolves according to for . herewe also introduced two cumulative distribution functions : is the fraction of teams with less than wins and is the fraction of teams with more than wins .of course , .the first two terms on the right - hand - side of account for games in which the stronger team wins , and the next two terms correspond to matches where the weaker team wins .the last two terms account for games between teams of equal strength ( the numerical prefactor is combinatorial ) .accounting for the boundary condition and summing the rate equations , we readily verify that the normalization is preserved .the initial conditions are . 
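Alongside the kinetic description, the stochastic rule above is easy to check by direct simulation. The sketch below runs the basic competition process for a modest league; the number of teams, the number of games and the upset probability are illustrative choices rather than values taken from the text.

```python
import random

def simulate_season(n_teams=20, games_per_team=100, q=0.25, seed=0):
    """Basic competition process: teams are paired at random; the team with the
    worse current record wins with probability q, and equal records are broken
    at random.  Each team plays games_per_team games on average."""
    rng = random.Random(seed)
    wins = [0] * n_teams
    for _ in range(n_teams * games_per_team // 2):
        i, j = rng.sample(range(n_teams), 2)
        if wins[i] == wins[j]:
            winner = rng.choice((i, j))
        elif rng.random() < q:
            winner = i if wins[i] < wins[j] else j   # upset: weaker record wins
        else:
            winner = i if wins[i] > wins[j] else j   # favourite wins
        wins[winner] += 1
    return wins

wins = simulate_season()
print(sorted(round(w / 100, 2) for w in wins))  # ~win fractions (100 games per team on average)
```

For a long season the sorted win fractions spread roughly uniformly over the interval between the upset probability and its complement, which is the limiting behaviour derived from the rate equations in what follows.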
in contrast to ,the cumulative distribution functions obey closed evolution equations .in particular , the quantity evolves according to which may be obtained by summing .the boundary conditions are and , and the initial condition is for .we note that the average number of wins , , where , follows from the fact that each team participates in one competition per unit time and that one win is awarded in each game .as , we can verify that by summing the rate equations .we first discuss the asymptotic behavior when the number of games is very large . in the limit , we use the continuum approach and replace the difference equations with the partial differential equation \frac{\partial f}{\partial k}=0\,.\ ] ] according to our model , the weakest team wins at least a fraction of its games , on average , and similarly , the strongest team wins no more than a fraction of its games .hence , the number of wins is proportional to time , .we thus seek the scaling solution here and throughout this paper , the quantity is the scaled cumulative distribution of win percentage ; that is , the fraction of teams that win less than a fraction of games played . the boundary conditions are and .we now substitute the scaling form into , and find that the scaling function satisfies where prime denotes derivative with respect to .there are two solutions : and the linear function .therefore , the distribution of win percentages is piecewise linear as expected , there are no teams with win percentage less than the upset probability , and there are no teams with win percentage greater than the complementary probability .furthermore , one can verify that .the linear behavior in indicates that the actual distribution of win percentage becomes uniform , for , when the number of games is very large .versus win percentage for at times and .also shown for reference is the limiting behavior .,scaledwidth=45.0% ] as shown in figure 1 , direct numerical integration of the rate equation confirms the scaling behavior .moreover , as the number of games increases , the function approaches the piecewise - linear function given by equation .however , there is a diffusive boundary layer near and , whose width decreases as in the long - time limit . generally , the win percentage is a convenient measure of team strength .for example , major league baseball ( mlb ) in the united states , where teams play games during the regular season , uses win percentage to rank teams .the fraction of games won is preferred over the number of wins because throughout the season there are small variations between the number of games played by various teams in the league .versus win percentage for : ( i ) monte carlo simulations of the competition process with , and ( ii ) season - end standings for major league baseball ( mlb ) over the past century ( 1901 - 2005).,scaledwidth=45.0% ] the piecewise - linear scaling function in holds in the asymptotic limits and .to apply the competition model , we must use a realistic number of games and a realistic number of teams . to test whether the competition model faithfully describes the win percentage of actual sports leagues , we compared the results of monte carlo simulations with historical data for a variety of sports leagues . in this paper, we give one representative example : major league baseball . in our simulations , there are teams , each participating in exactly games throughout the season . 
in each match ,two teams are selected at random , and the outcome of the competition follows the stochastic rule : with the upset probability , the team with the lower win percentage is victorious , but otherwise , the team with the higher win percentage wins . at the start of the simulated season , all teams have an identical record .we treated the upset frequency as a free parameter and found that the value best describes the historical data for mlb ( and ) .as shown in figure [ fig - phi - mlb ] , the competition model faithfully captures the empirical distribution of win percentages at the end of the season .the latter distribution is calculated from all season - end standings over the past century ( 1901 - 2005 ) .in addition , we directly measured the actual upset frequency from the outcome of all games played over the past century . to calculate the upset frequency , we chronologically ordered all games and recreated the standings at any given day .then we counted the number of games in which the winner was lower in the standings at the time of the current game .game location and the margin of victory were ignored . for mlb, we find the value , only slightly higher than the model estimate . the standard deviation in win percentage , , defined by ,is commonly used to quantify parity of a sports league .for example , in baseball , where the win percentage typically varies between and , the historical standard deviation is . from the cumulative distribution , it straightforwardly follows that the standard deviation varies linearly with the upset probability , there is an obvious relationship between the predictability of individual games and the competitive balance of a league : the more random the outcome of an individual game , the higher the degree of parity between teams in the league . as a function of time . shownare results of numerical integration of the rate equation with . also shown for referenceis the limiting value .,scaledwidth=45.0% ] the standard deviation is a convenient quantity because it requires only year - end standings , which consist of only data points per season . the upset frequency , on the other hand , requires the outcome of each game , and therefore involves a much larger number of data points , per season .yet , as a measure for competitive balance , the upset frequency has an advantage . 
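The measurement of the upset frequency described above translates directly into code. The sketch below recreates the standings game by game from a chronological list of results and counts how often the winner stood lower in the standings at game time; ranking teams by current win percentage, skipping games involving a team with no prior games, and the toy input are assumptions made for illustration.

```python
def upset_frequency(games):
    """games: chronological list of (winner, loser) pairs.
    Recreate the standings before each game (by current win percentage) and
    count how often the lower-ranked team won.  Games involving a team with no
    previous games, or teams with equal percentages, are not counted."""
    wins, played = {}, {}
    upsets = counted = 0
    for winner, loser in games:
        for t in (winner, loser):
            wins.setdefault(t, 0)
            played.setdefault(t, 0)
        if played[winner] and played[loser]:
            pw = wins[winner] / played[winner]
            pl = wins[loser] / played[loser]
            if pw != pl:
                counted += 1
                if pw < pl:          # winner stood lower in the standings
                    upsets += 1
        wins[winner] += 1
        played[winner] += 1
        played[loser] += 1
    return upsets / counted if counted else float("nan")

# toy chronological record with three teams
print(upset_frequency([("A", "B"), ("A", "C"), ("B", "A"), ("C", "A"), ("B", "C")]))
```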
as seen in figure [ fig - sigmat ] , the quantity consists of two contributions : one due to the intrinsic nature of the game and one due to the finite length of the season .for example , the large standard deviation in the national football league ( nfl ) is in large part due to the extremely short season , .therefore , the upset frequency , which is decoupled from the length of the season , provides a more accurate measure of competitive balance .the evolution of the upset frequency over time is truly fascinating ( figure [ fig - sigma ] ) .although varies over a narrow range , this quantity can differentiate the four sports leagues .the historical data shows that mlb has consistently had the least predictable games , while nba and nfl games have been the most predictable .the trends for for these sports leagues are even more interesting .certain sports leagues ( mlb and to a larger extent , nfl ) managed to increase competitiveness by changing competition formats , increasing the number of teams , having unbalanced schedules where stronger teams play more challenging opponents , or using a draft where the weakest team can first pick the most promising upcoming talent . in spite of the fact that nhl and nba implemented some of these same measures to increase competitiveness , there are no clear long - term trends in the evolution of the upset probability in these two leagues .another plausible interpretation of figure [ fig - sigma ] is that the sports leagues are striving to achieve an optimal upset frequency of .one may even speculate that the various sports leagues compete against each other to attract public interest , and that making the games less predictable , and hence , more interesting to follow is a key objective in this evolutionary - like process . in any event , the upset frequency is a natural and transparent measure for the evolution of competitive balance in sports leagues . with time .shown is data for : ( i ) major league baseball ( mlb ) , ( ii ) the national hockey league ( nfl ) ( iii ) the national basketball association ( nba ) , and ( iv ) the national football league ( nfl ) .the quantity is the cumulative upset frequency for all games played in the league up to the given year . in football ,a tie counts as one half of a win.,scaledwidth=45.0% ] the random process involves only a single parameter , .the model does not take into account many aspects of real competitions including the game score , the game location , the relative team strength , and the fact that in many sports leagues the schedule is unbalanced , as teams in the same geographical region may face each other more often .nevertheless , with appropriate implementation , the competition model specified in equation captures basic characteristics of real sports leagues .in particular , the model can be used to estimate the distribution of team win percentages as well as the upset frequency .thus far , our approach did not include the notion of innate team strength .randomness alone controlled which team reaches the top of the standings and which teams reaches at the bottom .indeed , the probability that a given team has the best record at the end of the season equals .furthermore , we have used the cumulative win - loss record to define team strength . 
however, this definition can not be used to describe tournaments where the number of games is small .we now focus on single - elimination tournaments , where the winner of a game advances to the next round of play while the loser is eliminated .a single - elimination tournament is the most efficient competition format : a tournament with teams requires only games through rounds of play to crown a champion . in the first round , there are teams and the winners advance to the next round .similarly , the second round produces winners . in general, the number of competitors is cut by half at each round in many tournaments , for example , the ncaa college basketball tournament in the united states or in tennis championships , the competitors are ranked according to some predetermined measure of their strength .thus , we introduce the notion of rank into our modeling framework .let be the rank of the team with in our definition , a team with lower rank is stronger .rank measures innate strength , and hence , it does not change with time .since ranking is strict , we use the uniform ranking scheme without loss of generality .again , we assume that there is a fixed probability that the underdog wins the game , so that the outcome of each match is stochastic .when a team with rank faces a team with rank , we have when .the important difference with is that the losing team is now eliminated .let be the distribution of rank for all competitors .this quantity is normalized , . in a two - team tournament ,the rank distribution of the winner , , is given by + 2q\,w_1(x)w_1(x),\ ] ] where is the cumulative distribution of rank .the structure of this equation resembles that of , with the first term corresponding to games where the favorite advances , and the second term to games where the underdog advances .mathematically , there is a basic difference with eq . in that equationdoes not contain loss terms .again , ties are not allowed to occur . by integrating ,we obtain the closed equation . in general , the cumulative distribution obeys the nonlinear recursion equation ^ 2.\ ] ] here , , and is the rank distribution for the winner of an -team tournament .the boundary conditions are and .the prefactor arises because there are two ways to choose the winner .the quadratic nature of equation reflects that two teams compete in each match ( competitions with three teams are described by cubic equations ) . starting with that corresponds to uniform ranking , , we can follow how the distribution of rank evolves by iterating the recursion equation . as shown in figure [ fig - wn ] , the rank of the winner decreases as the size of the tournament increases .hence , larger tournaments produce stronger winners .is calculated by iterating equation with .,scaledwidth=45.0% ] by substituting into equation , we find and in general , .this behavior suggests the scaling form where the scaling factor is the typical rank of the winner .this quantity decays algebraically with the size of the tournament , when games are perfectly random ( upset probability ) , the typical rank of the winner becomes independent of the number of teams , .when the games are highly predictable , the top teams tend to win the tournament , .again , the scaling behavior shows that larger tournaments tend to produce stronger champions . by substituting into, we see that the scaling function obeys the nonlocal and nonlinear equation the boundary conditions are and . 
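The recursion for the winner's rank distribution can also be iterated numerically, as in figure [fig-wn]. Because the equation is garbled in the extracted text, the form used below, F_{2N} = 2(1-q) F_N - (1-2q) F_N^2, is a reconstruction from the two-team rule stated above (the favourite advances with probability 1-q) and should be read as such; the value of q and the grid size are illustrative.

```python
def winner_rank_medians(q=0.3, rounds=10, grid=4000):
    """Iterate the winner's cumulative rank distribution for single-elimination
    tournaments of size N = 2, 4, ..., 2**rounds, starting from uniform ranking
    F_1(x) = x.  The recursion F_2N = 2(1-q)F_N - (1-2q)F_N**2 is reconstructed
    from the two-team rule (the favourite advances with probability 1-q)."""
    xs = [i / grid for i in range(grid + 1)]
    F = xs[:]
    medians = []
    for r in range(1, rounds + 1):
        F = [2 * (1 - q) * f - (1 - 2 * q) * f * f for f in F]
        medians.append((2 ** r, next(x for x, f in zip(xs, F) if f >= 0.5)))
    return medians

q = 0.3
prev = None
for n, x_star in winner_rank_medians(q):
    print(n, round(x_star, 4), round(x_star / prev, 3) if prev else "-")
    prev = x_star
print("predicted ratio per doubling:", round(1 / (2 * (1 - q)), 3))
```

Successive medians of the iterated distribution shrink by a roughly constant factor per doubling of the field, which is the algebraic decay of the typical winner rank discussed next.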
from equation, we deduce the asymptotic behaviors with the scaling exponent .the large- behavior is obtained by substituting into and noting that since when , the correction obeys the linear equation .the large- behavior of the scaling function gives the likelihood that a very weak team manages to win the entire tournament .the scaling behavior is equivalent to with . in the limit ,the distribution approaches a constant .however , the tail of the rank distribution is algebraic when .the exponent increases monotonically with , and it diverges in the limit .moreover , the probability that the weakest team wins the tournament , , decays algebraically with the total number of teams , . in the following section , we discuss sports leagues and find that : ( i ) the rank distribution of the winner has an _ exponential _ tail , and ( ii ) the probability that the weakest team is crowned league champion is exponentially small .the scaling behavior indicates universal statistics when the size of the tournament is sufficiently large .once rank is normalized by typical rank , the resulting distribution does not depend on tournament size .further , the scaling law and the power - law tail reflect that tournaments can produce major upsets . with a relatively small number of upset wins ,a `` cinderella '' team can emerge , and for this reason , tournaments can be very exciting .furthermore , tournaments are maximally efficient as they require a minimal number of games to decide a champion .figure [ fig - ncaa ] shows that our theoretical model nicely describes empirical data for the ncaa college basketball tournament in the united states . in the current format ,64 teams participate in four sub - tournaments , each with teams .the four winners of each sub - tournament advance to the final four , which ultimately decides the champion .prior to the tournament , a committee of experts ranks the teams from to .we note that the game schedule is not random , and is designed such that the top teams advance if there are no upsets . versus the rank for ( i ) ncaa tournament data ( 1979 - 2006 ) , ( ii ) iteration of the equation .,scaledwidth=45.0% ] consistent with our theoretical results , the ncaa tournament has been producing major upsets : the seed team has advanced to the final four twice over the past 30 years .moreover , only once did all of the four top - seeded teams advance simultaneously ( 2008 ) .our model estimates the probability of this event at , a figure that is of the same order of magnitude as the observed frequency .we also mention that in producing the theoretical curve in figure [ fig - ncaa ] , we used the upset frequency , whereas the actual game results yield .this larger discrepancy ( compared with the mlb analysis above ) is due to a number of factors including the much smaller dataset ( games ) and the non - random game schedule .indeed , our monte - carlo simulations which incorporate a realistic schedule give better estimates for the upset frequency .we now discuss the common competition format in which each team hosts every other team exactly once during the season .this format , first used in english soccer , has been adopted in many sports . in a league of size ,each team plays games and the total number of games equals .given this large number of games , does the strongest team always wins the championship ? to answer this question , we assume that each team has an innate strength and rank the teams according to strength . 
without loss of generality , we use the uniform rank distribution and its cumulative counterpart where .moreover , we implicitly take the large- limit . consider a team with rank . the probability that this team wins a game against a randomly - chosen opponent decreases linearly with rank , as follows from +qw_1(x)$ ][ see also equation ] .consistent with our competition rules and , the probability satisfies .since team strength does not change with time , the average number of wins for a team with rank grows linearly with the number of games , accordingly , the number of wins of a given team performs a biased random walk : after each game the number of wins increases by one with probability , and remains unchanged with the complementary probability .also , the uncertainty in the number of wins , , grows diffusively with , with diffusion coefficient .let us assume that each team plays games .if the number of games is sufficiently large , the best team has the most wins .however , at intermediate times , it is possible that a weaker team has the most wins .for a team with strength to still be in contention at time , the difference between its expected number of wins and that of the top team should be comparable with the diffusive uncertainty we now substitute equations - into this heuristic estimate and obtain the typical rank of the leader as a function of time , in obtaining this estimate , we tacitly ignored numeric prefactors , including in particular , the dependence on .this crude estimate shows that the best team does not always win the league championship .since , we have since rank is a normalized quantity , the top of the teams have a realistic chance of emerging with the best record at the end of the season .thus randomness plays a crucial role in determining the champion : since the result of an individual game is subject to randomness , the outcome of a long series of games reflects this randomness .needed for the best team to emerge as champion in a league of size .the simulation results represent an average over simulated sports leagues .also shown for reference is the theoretical prediction.,scaledwidth=45.0% ] we can also obtain the total number of games needed for the best team to always emerge as the champion , this scaling behavior follows by replacing in with which corresponds to the best team . for the best team to win , each team must play every other team times ! alternativelythe number of games played by each team scales quadratically with the size of the league .clearly , such a schedule is prohibitively long , and we conclude that the traditional schedule of playing each opponent with equal frequency is neither efficient nor does it guarantee the best champion .we confirmed the scaling law numerically . in our monte carlo simulations ,the teams are ranked from to at the start of the season .we implemented the traditional league format where every team plays every other team and kept track of the leader defined as the team with the best record .we then measured the last - passage time , that is , the time in which the best team takes the lead for good .we define the average of this fluctuating quantity as .as shown in figure [ fig - tn ] , the total number of games required is cubic . 
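The heuristic argument above is easy to probe with a direct league simulation. In the sketch below every team plays every other team a given number of times, the innately stronger team wins each game with probability 1-q, and we record how often the strongest team ends with the strictly best record; the team count, the upset probability and the tie-handling at the top are illustrative assumptions.

```python
import random

def league_probability(n_teams=16, round_robins=1, q=0.3, seasons=1000, seed=1):
    """Fraction of simulated seasons in which the innately strongest team ends with
    the strictly best record (ties at the top count as failures -- an assumption).
    The team index doubles as the innate rank, so team 0 is the strongest."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(seasons):
        wins = [0] * n_teams
        for _ in range(round_robins):
            for i in range(n_teams):
                for j in range(i + 1, n_teams):       # i < j, so i is the favourite
                    winner = j if rng.random() < q else i
                    wins[winner] += 1
        hits += wins[0] > max(wins[1:])
    return hits / seasons

for round_robins in (1, 4, 16):
    print(round_robins, league_probability(round_robins=round_robins))
```

Raising the number of round robins toward the order of the number of teams pushes this probability toward one, consistent with the cubic total-game estimate above.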
again , we expect that the probability distribution that a team with rank has the best record after games is characterized by the scale given in numerical results confirm this scaling behavior .since the number of wins performs a biased random walk , we expect that the distribution of the number of wins becomes normal in the long - time limit .moreover , the scaling function in has a gaussian tail as .using this scaling behavior , we can readily estimate the probability that worst team becomes champion ( in the standard league format ) .for the worst team , , and the corresponding scaling variable in equation is .hence , the gaussian tail shows that the probability that the weakest team wins the league is exponentially small , in sharp contrast with tournaments , where this probability is algebraic , leagues do not produce upset champions .leagues may not guarantee the absolute top team as champion , but nevertheless , they do produce worthy champions . ,the probability that the -ranked team has the best record at the end of the season in the format of playing all opponents with equal frequency , and the probability that the -ranked team wins an -team single - elimination tournament .the upset probability is and .,scaledwidth=45.0% ] to compare leagues and tournaments , we calculated the probability that the ranked team is champion for a realistic number of games and a realistic upset probability ( figure [ fig - lt ] ) . for leagues , we calculated this probability from monte carlo simulations , and for tournaments , we used equation .indeed , the top four teams fare better in a league format while the rest of the teams are better off in a tournament .this behavior is fully consistent with the above estimate that the top teams have a realistic chance to win the league . what is the probability that the top team ends the season with the best record in a realistic sports league ? to answer this question , we investigated the four major sports leagues in the us : mlb , nhl , nfl , and nba .we simulated a league with the actual number of teams and the actual number of games , using the empirical upset frequencies ( see figure [ fig - sigmat ] ) .all of these sports leagues have comparable number of teams , .surprisingly , we find almost identical probabilities for three of the sports leagues : ( i ) mlb with the longest season and most random games ( , ) has , ( ii ) nfl with the shortest season but most deterministic games ( , ) has , and ( iii ) nhl with intermediate season and intermediate randomness ( , ) has .standing out as an anomaly is the value for the nba which has a moderate - length season but less random games ( and ) .this interesting result reinforces our previous comments about sports leagues competing against each other for interest and our hypothesis that there are optimal randomness parameters .having a powerhouse win every year does not serve the league well , but having the strongest team finish with the best record once every three years may be optimal .our analysis demonstrates that single - elimination tournaments have optimal efficiency but may produce weak champions , whereas leagues which result in strong winners are highly inefficient .can we devise a competition `` algorithm '' that guarantees a strong champion within a minimal number of games ? 
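Before turning to that question, the league-format probabilities discussed above can be re-estimated with the same ingredients. The sketch below uses a simplified schedule in which opponents are drawn at random rather than from real fixture lists; the team count, season length and upset probability are illustrative stand-ins (the exact empirical values are not shown in the extracted text), and counting only a strictly best record is an assumption.

```python
import random

def best_record_probability(n_teams=30, games_per_team=162, q=0.44,
                            seasons=500, seed=2):
    """Estimate the probability that the innately strongest team finishes with the
    strictly best record, using randomly drawn pairings as a simplified schedule."""
    rng = random.Random(seed)
    total_games = n_teams * games_per_team // 2
    hits = 0
    for _ in range(seasons):
        wins = [0] * n_teams                  # index == innate rank, 0 strongest
        for _ in range(total_games):
            i, j = rng.sample(range(n_teams), 2)
            strong, weak = (i, j) if i < j else (j, i)
            wins[weak if rng.random() < q else strong] += 1
        if wins[0] > max(wins[1:]):
            hits += 1
    return hits / seasons

# baseball-like parameters used purely for illustration
print(best_record_probability())
```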
as an efficient algorithm , we propose a hybrid schedule consisting of a preliminary round and a championship round .the preliminary round is designed to weed out a majority of teams using a minimal number of games , while the championship round includes ample games to guarantee the best team wins . in the preliminary round ,every team competes in games .whereas the league schedule has complete graph structure with every team playing every other team , the preliminary round schedule has regular random graph structure with each team playing against the same number of randomly - chosen opponents . out of the teams , the teams with the largest number of wins in the preliminary - round advance to the championship round .the number of games is chosen such that the strongest team always qualifies . by the same heuristic argument leading to , the top team ranks no lower than after games .we thus require and consequently , each team plays preliminary games .the championship round uses a league format with each of the qualifying teams playing games against every other team .therefore , the total number of games , , has two components in writing this estimate , we ignore numeric prefactors , as well as the dependence on the upset frequency . the quantity is minimal when the two terms in are comparable .hence , the size of the championship round and the total number of games scale algebraically with , consequently , each team plays games in the preliminary round .interestingly , the existence of a preliminary round significantly reduces the number of games from to . without sacrificing the quality of the champion , the hybrid schedule yields a huge improvement in efficiency ! we can further improve the efficiency by using multiple elimination rounds . in this generalization ,there are consecutive rounds of preliminary play culminating in the championship round .the underlying graphical structure of the preliminary rounds is always a regular random graph , while the championship round remains a complete graph .each preliminary round is designed to advance the top teams , and the number of games is sufficiently large so that the top team advances with very high probability . when there are rounds , we anticipate the scaling laws where is the number of teams advancing out of the first round and is the total number of games . of course , when there are no preliminary rounds , and .following equation , the number of teams gradually declines in each round , .the exponents and in equation for . [ cols="^,^,^,^,^,^,^,^",options="header " , ] according to the first term in , the number of games in the first round scales as , and therefore , the total number of games obeys the recursion indeed , if we replace with in equation we can recognize the recursion .the second term scales as and becomes comparable to the second when .hence , the scaling exponents satisfy the recursion relations using and , we recover and in agreement with .the general solution of is hence , the efficiency is optimal , and the number of games becomes linear in the limit . 
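A direct simulation of this hybrid schedule is straightforward. The sketch below runs one preliminary round in which every team initiates a few games against random opponents, advances the teams with the most wins, and then plays a repeated round robin among the finalists; the specific sizes are illustrative choices rather than the optimal scaling values derived above.

```python
import random

def play(i, j, q, rng):
    """The team with the lower index (innately stronger) wins with probability 1 - q."""
    strong, weak = (i, j) if i < j else (j, i)
    return weak if rng.random() < q else strong

def gradual_elimination(n_teams=64, prelim_games=32, finalists=8,
                        championship_rounds=8, q=0.3, rng=None):
    """Hybrid schedule sketched above: a preliminary round in which every team
    initiates prelim_games games against random opponents, followed by a repeated
    round robin among the finalists with the most preliminary wins."""
    rng = rng or random.Random()
    wins = [0] * n_teams
    games = 0
    for i in range(n_teams):                              # preliminary round
        for j in rng.sample([t for t in range(n_teams) if t != i], prelim_games):
            wins[play(i, j, q, rng)] += 1
            games += 1
    order = list(range(n_teams))
    rng.shuffle(order)                                    # break ties in wins at random
    finals = sorted(order, key=lambda t: wins[t], reverse=True)[:finalists]
    final_wins = {t: 0 for t in finals}
    for _ in range(championship_rounds):                  # championship round robin
        for a in range(len(finals)):
            for b in range(a + 1, len(finals)):
                final_wins[play(finals[a], finals[b], q, rng)] += 1
                games += 1
    return max(final_wins, key=final_wins.get), games

rng = random.Random(3)
results = [gradual_elimination(rng=rng) for _ in range(200)]
print("fraction won by the strongest team:", sum(r[0] == 0 for r in results) / 200)
print("games per edition:", results[0][1],
      "vs roughly", 64 * 64 * 63 // 2, "for the cubic full-league estimate")
```

Applying the same construction recursively yields the multi-round schedules whose scaling is discussed above.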
for a modest number of teams , a small number of preliminary rounds , say 1 - 3 rounds , may suffice .indeed , with as few as four elimination rounds , the number of games becomes essentially linear , .interestingly , the result indicates that championship rounds or `` playoffs '' have the optimal size given by gradual elimination is often used in the arts and sciences to decide winners of design competitions , grant awards , and prizes .indeed , the selection process for prestigious prizes typically begins with a quick glance at all nominees to eliminate obviously weak candidates , but concludes with rigorous deliberations to select the winner .multiple elimination rounds may be used when the pool of candidates is very large . to verify numerically the scaling laws, we simulated a single preliminary round followed by a championship round .we chose the size of the preliminary round strictly according to and used a championship round where all teams play against all teams exactly times .we confirmed that as the number of teams increases from to to etc . , the probability that the best team emerges as champion is not only high but also , independent of .we also confirmed that the concept of preliminary rounds is useful for small . for teams ,the number of games can be reduced by a factor by using a single preliminary round .we introduced an elementary competition model in which a weaker team can upset a stronger team with fixed probability .the model includes a single control parameter , the upset frequency , a quantity that can be measured directly from historical game results .this idealized competition model can be conveniently applied to a variety of competition formats including tournaments and leagues .the random competition process is amenable to theoretical analysis and is straightforward to implement in numerical simulations .qualitatively , this model explains how tournaments , which require a small number of games , can produce major upsets , and how leagues which require a large number of games always produce quality champions .additionally , the random competition process enables us to quantify these intuitive features : the rank distribution of the champion is algebraic in the former schedule but gaussian in the latter . using our theoretical framework , we also suggested an efficient algorithm where the teams are gradually eliminated following a series of preliminary rounds . in each preliminary round ,the number of games is sufficient to guarantee that the best team qualifies to the next round .the final championship round is held in a league format in which every team plays many games against every other team to guarantee that the strongest team emerges as champion . using gradual elimination ,it is possible to choose the champion using a number of games that is proportional to the total number of teams .interestingly , the optimal size of the championship round scales as the one third power of the total number of teams .the upset frequency plays a major role in our model .our empirical studies show that the frequency of upsets , which shows interesting evolutionary trends , is effective in differentiating sports leagues. 
moreover , this quantity has the advantage that it is not coupled to the length of the season , which varies widely from one sport to another .nevertheless , our approach makes a very significant assumption : that the upset frequency is fixed and does not depend on the relative strength of the competitors .certainly , our approach can be generalized to account for strength - dependent upset frequencies .we note that our single - parameter model fares better when the games tend to be close to random , and that model estimates for the upset frequency have larger discrepancies with the empirical data when the games become more predictable .clearly , a more sophisticated set of competition rules are required when the competitors are very close in strength , as is the case for example , in chess .
we study the effects of randomness on competitions based on an elementary random process in which there is a finite probability that a weaker team upsets a stronger team . we apply this model to sports leagues and sports tournaments , and compare the theoretical results with empirical data . our model shows that single - elimination tournaments are efficient but unfair : the number of games is proportional to the number of teams , but the probability that the weakest team wins decays only algebraically with . in contrast , leagues , where every team plays every other team , are fair but inefficient : the top of teams remain in contention for the championship , while the probability that the weakest team becomes champion is exponentially small . we also propose a gradual elimination schedule that consists of a preliminary round and a championship round . initially , teams play a small number of preliminary games , and subsequently , a few teams qualify for the championship round . this algorithm is fair and efficient : the best team wins with a high probability and the number of games scales as , whereas traditional leagues require games to fairly determine a champion .
understanding the factors that influence the capability of a group of individuals to solve problems is a central issue on collective intelligence and on organizational design , nonetheless the meager interchange of ideas between these two research areas .conventional wisdom says that a group of cooperating individuals can solve a problem faster than the same group of individuals working in isolation , and that the higher the diversity of the group members , the better the performance .although there has been some progress on the quantitative understanding of the factors that make cooperative group work effective , only very recently a workable minimal agent - based model of distributed cooperative problem solving system was proposed ( see also ) .here we build on that model to dispute some common - sense views of the benefits of diversity in group organization .we consider a distributed cooperative problem solving system in which agents cooperate by broadcasting messages informing on their partial success towards the completion of the goal and use this information to imitate the more successful agent ( model ) in the system . in doing so , we follow bloom in conferring imitative learning the central role in the emergence of collective intelligence : `` imitative learning acts like a synapse , allowing information to leap the gap from one creature to another '' . the parameters of the model are the number of agents in the system and the copy or imitation propensities ] is the imitation or copy propensity of agent .if then agent will explore the solution space independently of the other agents . in previous studies of this model we assumed that the agents exhibited the same imitation behavior , i.e. , for .here we introduce variety in the behavior of the agents by endowing them with different copy propensities .in particular , we consider the case that the s are identically distributed independent random variables drawn from the uniform probability distribution for ] .the parameters of the smooth nk landscape are and ., scaledwidth=48.0% ] in fig .[ fig1_m18_k0 ] we show the performances , as measured by the mean computational cost , of a system composed of identical agents ( i.e. , for all agents ) , a system composed of agents with drawn from the uniform distribution and a system of agents with drawn from the ( biased ) trimodal distribution . as observed in previous analyses of the imitative search , for each condition there is a system size at which the computational cost is minimum .we note that , for a landscape without local maxima , the best performance of the imitative search is achieved by setting for all agents ( see fig .[ fig1_m18_k0 ] ) , since copying the fittest string at the trial is always a certain step towards the solution of the problem .this is the reason the trimodal distribution gives the best performance among the three distributions with exhibited in fig .[ fig1_m18_k0 ] : it simply produces systems with a large proportion of experts ( i.e. , agents with ) .in fact , we have verified that a bimodal distribution , in which half of the agents have and the other half , yields a better performance than the trimodal distribution . for greater than the optimal system size ,we observe two distinct growth regimes of the computational cost . 
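The NK landscape underlying these experiments can be generated in a few lines. The sketch below is a generic NK construction given for illustration: the neighbourhood scheme (adjacent, circular) and the exhaustive search for the global maximum are assumptions of the sketch, not details taken from the text.

```python
import random
from itertools import product

def make_nk_landscape(n_loci, k, seed=0):
    """Standard NK fitness landscape: each locus contributes a random value that
    depends on its own state and the states of its K right neighbours (circular).
    Whether the original study used adjacent or random neighbourhoods is not
    stated in the text above; adjacent neighbourhoods are assumed here."""
    rng = random.Random(seed)
    tables = [{cfg: rng.random() for cfg in product((0, 1), repeat=k + 1)}
              for _ in range(n_loci)]
    def fitness(bits):
        total = 0.0
        for i in range(n_loci):
            cfg = tuple(bits[(i + j) % n_loci] for j in range(k + 1))
            total += tables[i][cfg]
        return total / n_loci
    return fitness

fitness = make_nk_landscape(n_loci=12, k=0)        # K = 0: smooth, single global maximum
best = max(product((0, 1), repeat=12), key=fitness)
print(best, round(fitness(best), 4))
```

Increasing K introduces interactions between loci and hence local maxima, which is how the rugged landscapes discussed below are obtained.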
the first regime , which occurs for and holds over for nearly three decades for the data of fig .[ fig1_m18_k0 ] , is characterized by a sublinear growth with and signals a scenario of mild negative synergy among the agents since the time necessary to find the global maximum decreases with rather than with as in the case of the independent search ( absence of synergy ) .although this regime is important because for large it is the only growth regime that can be observed in the simulations , the specific value of the exponent is not very informative since it depends on the distribution of the copy propensities ( see fig .[ fig1_m18_k0 ] ) and it increases with increasing .for instance , for the uniform distribution we found for , for and for .the second regime , which takes place for , is described by the linear function and corresponds to the situation where the system size is so large that the solution is found in the first trials . in this regime , is not affected by the value of , i.e. , adding more agents to the system does not decrease the time required to find the solution .finally , we note that for the imitative search always performs better than the independent search . between the standard deviation of the computational cost and its mean value as function of the system size for the different system compositions shown in fig .[ fig1_m18_k0 ] . the symbols convention and the parameters of the nk landscape are the same as for that figure ., scaledwidth=48.0% ] it is also instructive to consider the ratio between the standard deviation of the computational cost and its mean value , i.e. , ^{1/2} ] .figure [ fig2_m18_k0 ] shows the probability that an agent belonging to one of those classes hits the global maximum .this figure corroborates our preceding remark that for a smooth landscape the best strategy for the agents is to copy the model string , since that string always displays faithful information about the location of the global maximum . for the determining factor for an agent to hitthe solution is its proximity to the global maximum when the initial strings are set randomly and so all copy propensity classes perform equally in this regime , as expected .the results for the trimodal distribution are qualitatively the same as those shown in fig .[ fig2_m18_k0 ] , except that the high - copy propensity class , which in this case is characterized by , has a slightly higher probability of finding the global maximum than it has for the uniform distribution .the study of the performance of the imitative search on rugged landscapes is way more compute - intensive than on smooth landscapes for two reasons : first , the number of trials to hit the solution for system sizes near the optimal size is about 100 times greater than for smooth landscapes .second , now we need to average the results over many ( at least , ) realizations of the nk landscape . hence to grasp the behavior of the system in all regimes of studied before, we will consider first a rather small landscape with parameters and then verify whether the results hold true for a larger landscape with parameters .note that for both landscapes the correlation between the fitness of neighboring strings is . 
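The search dynamics itself can be sketched compactly. The implementation below is one plausible reading of the imitative search, since the elementary moves are not spelled out in the extracted text: at each trial a randomly chosen agent either flips a random bit of its binary string or copies one bit at which it differs from the current fittest string, with probability set by its copy propensity. A smooth stand-in fitness (the fraction of ones) plays the role of a landscape with a single maximum; the NK construction from the previous sketch, together with a matching stopping test, can be substituted for the rugged case.

```python
import random

def imitative_search(copy_propensities, n_bits=12, max_trials=500_000, seed=1):
    """One plausible implementation of the imitative search (the elementary moves
    are assumptions, not details quoted from the text).  Returns the number of
    trials until some agent reaches the global maximum of the stand-in fitness."""
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits) / len(bits)     # smooth stand-in landscape
    agents = [[rng.randint(0, 1) for _ in range(n_bits)]
              for _ in range(len(copy_propensities))]
    for trial in range(1, max_trials + 1):
        a = rng.randrange(len(agents))
        model = max(agents, key=fitness)             # current fittest string
        diff = [i for i in range(n_bits) if agents[a][i] != model[i]]
        if diff and rng.random() < copy_propensities[a]:
            pos = rng.choice(diff)
            agents[a][pos] = model[pos]              # imitation move
        else:
            pos = rng.randrange(n_bits)
            agents[a][pos] ^= 1                      # independent exploration move
        if fitness(agents[a]) == 1.0:                # global maximum found
            return trial
    return max_trials

rng = random.Random(7)
print("homogeneous :", imitative_search([0.5] * 10))
print("heterogeneous:", imitative_search([rng.random() for _ in range(10)]))
```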
as function of the system size for a system of identical agents with ( ) , a system of agents with uniformly distributed in the unit interval ( ) , and a system of agents with generated using a trimodal distribution ( ) .the symbols are the results for the independent search .the solid line is the linear function .the parameters of the rugged nk landscape are and ., scaledwidth=48.0% ] figure [ fig1_m12_k3 ] summarizes our results for the nk landscape with parameters .this figure reveals that moderately large ( i.e. , ] , corroborating the puzzling finding that if the system size can be adjusted to maximize performance then the homogeneous system performs better than the heterogeneous one , given the constraint that is the same in all conditions .the results regarding the dispersion around the mean computational cost and the chances of agents in the different copy propensity classes to hit the solution are qualitatively the same as those discussed for the landscapes with parameters . finally , we note that since finding the global maxima of nk landscapes with is an np - complete problem , one should not expect that the imitative search ( or any other search strategy , for that matter ) would find those maxima much more rapidly than the independent search .our findings corroborate , in part , the prevalent views on the effects of diversity on the efficiency of cooperative problem - solving systems . in particular ,in the case of easy tasks , modeled here by smooth landscapes without local maxima , for which there is an optimal imitation strategy , the best performance is achieved by a homogeneous system of agents equipped with that strategy , the so - called experts ( see fig .[ fig1_m18_k0 ] ) . in the case of difficult tasks , modeled by landscapes plagued of local maxima, we find that diversity is a palliative for the main deficiency of the imitative search strategy , namely , the lure of the model strings in the vicinity of the local maxima , a phenomenon analogous to groupthink .in fact , for some system sizes diversity may produce a more than tenfold decrease of the computational cost in comparison with that of homogeneous systems .we note , however , that a more efficient strategy to bypass groupthink is to reduce the influence of the model string by decreasing the connectivity of the network .the main result of this paper is the surprising finding that if one is allowed to adust the system size to maximize the performance , then the homogeneous system will outperform the heterogeneous ones . to offer a clue to understand this finding, we note that the optimal size of the homogeneous system is ( see figs .[ fig1_m18_k0 ] , [ fig1_m12_k3 ] and [ fig1_m18_k5 ] ) , which means that the optimal system is composed of a model string together with a cloud of mutant strings that differ from it typically by one or two entries .since this is the manner viral quasispecies explore their fitness landscapes , it is probably the optimal ( or near - optimal ) way to explore rugged fitness landscapes .this research was partially supported by grant 2015/21689 - 2 , so paulo research foundation ( fapesp ) and by grant 303979/2013 - 5 , conselho nacional de desenvolvimento cientfico e tecnolgico ( cnpq ) .the research used resources of the lcca - laboratory of advanced scientific computation of the university of so paulo .herrmann s. , grahl j. and rothlauf f. 
, problem complexity in parallel problem solving ._ proceedings of the international workshop on modeling , analysis and management of social networks and their applications _ eds.:fischbach k. , grossmann m. , krieger u.r . and staake t. ( university of bamberg press , bamberg , 2014 ) pp .7783 .rendell l. , boyd r. , cownden d. , enquist m. , eriksson k. , feldman m.w . , fogarty l. , ghirlanda s. , lillicrap t. and laland k.n ., why copy others ?insights from the social learning strategies tournament ._ science _ * 328 * , 208 ( 2010 ) .
problem solving (e.g., drug design, traffic engineering, software development) by task forces represents a substantial portion of the economy of developed countries. here we use an agent-based model of cooperative problem solving systems to study the influence of diversity on the performance of a task force. we assume that agents cooperate by exchanging information on their partial success and use that information to imitate the most successful agent in the system (the model). the agents differ only in their propensities to copy the model. we find that, for easy tasks, the optimal organization is a homogeneous system composed of agents with the highest possible copy propensities. for difficult tasks, we find that diversity can prevent the system from being trapped in sub-optimal solutions. however, when the system size is adjusted to maximize performance, the homogeneous systems outperform the heterogeneous ones, i.e., for optimal performance, sameness should be preferred to diversity.
the setting in which abc operates is the approximation of a simulation from the posterior distribution when distributions associated with both the prior and the likelihood can be simulated ( the later being unavailable in closed form ) .the first abc algorithm was introduced by as follows : given a sample from a sample space , a sample is produced by + * algorithm 1 : abc sampler * [ algo : abc0 ] generate from the prior distribution generate from the likelihood set , the parameters of the abc algorithm are the so - called summary statistic , the distance , and the tolerance level .the approximation of the posterior distribution provided by the abc sampler is to instead sample from the marginal in of the joint distribution denotes the indicator function of and the basic justification of the abc approximation is that , when using a sufficient statistic and a small ( enough ) tolerance , we have in practice , the statistic is necessarily insufficient ( since only exponential families enjoy sufficient statistics with fixed dimension , see ) and the approximation then converges to the less informative when goes to zero .this loss of information is a necessary price to pay for the access to computable quantities and provides a convergent inference on when is identifiable in the distribution of .while acknowledging the gain brought by abc in handling bayesian inference in complex models , and the existence of involved summary selection mechanisms , we demonstrate here that the loss due to the abc approximation may be arbitrary in the specific setting of bayesian model choice via posterior model probabilities .the standard bayesian tool for model comparison is the marginal likelihood which leads to the bayes factor for comparing the evidences of models with likelihoods and , as detailed in , it provides a valid criterion for model comparison that is naturally penalised for model complexity .bayesian model choice proceeds by creating a probability structure across models ( or likelihoods ) .it introduces the model index as an extra unknown parameter , associated with its prior distribution , ( ) , while the prior distribution on the parameter is conditional on the value of the index , denoted by and defined on the parameter space . the choice between those modelsis then driven by the posterior distribution of , where denotes the marginal likelihood for model .while this posterior distribution is straightforward to interpret , it offers a challenging computational conundrum in bayesian analysis . when the likelihood is not available , abc represents the almost unique solution . describe the use of model choice based on abc for distinguishing between different mutation models .the justification behind the method is that the average abc acceptance rate associated with a given model is proportional to the posterior probability corresponding to this approximative model , when identical summary statistics , distance , and tolerance level are used over all models . in practice ,an estimate of the ratio of marginal likelihoods is given by the ratio of observed acceptance rates .using bayes formula , estimates of the posterior probabilities are straightforward to derive .this approach has been widely implemented in the literature ( see , e.g. 
, , , , and ) .a representative illustration of the use of an abc model choice approach is given by which analyses the european invasion of the western corn rootworm , north america s most destructive corn pest .because this pest was initially introduced in central europe , it was believed that subsequent outbreaks in western europe originated from this area . based on an abc model choice analysis of the genetic variability of the rootworm ,the authors conclude that this belief is false : there have been at least three independent introductions from north america during the past two decades .the above estimate is improved by regression regularisation , where model indices are processed as categorical variables in a polychotomous regression .when comparing two models , this involves a standard logistic regression .rejection - based approaches were lately introduced by , and , in a monte carlo simulation of model indices as well as model parameters .those recent extensions are already widely used in population genetics , as exemplified by .another illustration of the popularity of this approach is given by the availability of four softwares implementing abc model choice methodologies : * abc - sysbio , which relies on a smc - based abc for inference in system biology , including model - choice . * abctoolbox which proposes smc and mcmc implementations , as well as bayes factor approximation .* diyabc , which relies on a regularised abc - mc algorithm on population history using molecular markers .* popabc , which relies on a regular abc - mc algorithm for genealogical simulation .as exposed in e.g. , , or , once is incorporated within the parameters , the abc approximation to its posterior follows from the same principles as in regular abc .the corresponding implementation is as follows , using for the summary statistic a statistic that is the concatenation of the summary statistics used for all models ( with an obvious elimination of duplicates ) . + * algorithm 2 : abc - mc * [ algo : abcmoc ] generate from the prior generate from the prior generate from the model set and the abc estimate of the posterior probability is then the frequency of acceptances from model in the above simulation this also corresponds to the frequency of simulated pseudo - datasets from model that are closer to the data than the tolerance . in order to improve the estimation by smoothing , follow the rationale that motivated the use of a local linear regression in andrely on a weighted polychotomous regression to estimate based on the abc output .this modelling is implemented in the diyabc software .there is a fundamental discrepancy between the genuine bayes factors / posterior probabilities ) and the approximations resulting from abc - mc .the abc approximation to a bayes factor , say , resulting from algorithm 2 is an alternative representation is given by where the pairs are simulated from the joint prior and is the number of simulations necessary for acceptances in algorithm 2 . in order to study the limit of this approximation, we first let go to infinity .( for simplification purposes and without loss of generality , we choose a uniform prior on the model index . 
)the limit of is then } { \mathbb{p}[\mathcal{m}=2,\rho\{{\boldsymbol{\eta}}({\mathbf{z}}),{\boldsymbol{\eta}}({\mathbf{y}})\ } \le \epsilon]}\\ & = & \dfrac{\iint \mathbb{i}_{\rho\{{\boldsymbol{\eta}}({\mathbf{z}}),{\boldsymbol{\eta}}({\mathbf{y}})\ } \le \epsilon } \pi_1({\boldsymbol{\theta}}_1)f_1({\mathbf{z}}|{\boldsymbol{\theta}}_1)\,\text{d}{\mathbf{z}}\,\text{d}{\boldsymbol{\theta}}_1 } { \iint \mathbb{i}_{\rho\{{\boldsymbol{\eta}}({\mathbf{z}}),{\boldsymbol{\eta}}({\mathbf{y}})\ } \le \epsilon } \pi_2({\boldsymbol{\theta}}_2)f_2({\mathbf{z}}|{\boldsymbol{\theta}}_2)\,\text{d}{\mathbf{z}}\,\text{d}{\boldsymbol{\theta}}_2}\\ & = & \dfrac{\iint \mathbb{i}_{\rho\{{\boldsymbol{\eta}},{\boldsymbol{\eta}}({\mathbf{y}})\ } \le \epsilon } \pi_1({\boldsymbol{\theta}}_1)f_1^{{\boldsymbol{\eta}}}({\boldsymbol{\eta}}|{\boldsymbol{\theta}}_1)\,\text{d}{\boldsymbol{\eta}}\,\text{d}{\boldsymbol{\theta}}_1 } { \iint \mathbb{i}_{\rho\{{\boldsymbol{\eta}},{\boldsymbol{\eta}}({\mathbf{y}})\ } \le \epsilon } \pi_2({\boldsymbol{\theta}}_2)f_2^{{\boldsymbol{\eta}}}({\boldsymbol{\eta}}|{\boldsymbol{\theta}}_2)\,\text{d}{\boldsymbol{\eta}}\,\text{d}{\boldsymbol{\theta}}_2}\,,\end{aligned}\ ] ] where and denote the densities of when and , respectively . by lhospital formula ,if goes to zero , the above converges to namely the bayes factor for testing model versus model based on the sole observation of .this result reflects the current perspective on abc : the inference derived from the ideal abc output when only uses the information contained in .thus , in the limiting case , i.e. when the algorithm uses an infinite computational power , the abc odds ratio does not account for features of the data other than the value of , which is why the limiting bayes factor only depends on the distribution of under both models .when running abc for point estimation , the use of an insufficient statistic does not usually jeopardise convergence of the method .as shown , e.g. , in ( * ? ? ?* theorem 2 ) , the noisy version of abc as an inference method is convergent under usual regularity conditions for model - based bayesian inference , including identifiability of the parameter for the insufficient statistic .in contrast , the loss of information induced by may seriously impact model - choice bayesian inference . indeed, the information contained in is lesser than the information contained in and this even in most cases when is a sufficient statistic for _ both models_. in other words , _ being sufficient for both and does not usually imply that is sufficient for . _ to see why this is the case , consider the most favourable case , namely when is a sufficient statistic for both models .we then have by the factorisation theorem that , i.e. thus , unless , as in the special case of gibbs random fields detailed below , the two bayes factors differ by the ratio , which is only equal to one in a very small number of known cases . this decomposition is a straightforward proof that a model - wise sufficient statistic is usually not sufficient across models , hence for model comparison .an immediate corollary is that the abc - mc approximation does not always converge to the exact bayes factor .the discrepancy between limiting abc and genuine bayesian inferences does not come as a surprise , because abc is indeed an approximation method .users of abc algorithms are therefore prepared for some degree of imprecision in their final answer , a point stressed by and when they qualify abc as exact inference on a wrong model . 
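To make the two samplers described above concrete, the sketch below implements them in Python. The prior choices, the use of the sample sum as summary statistic, and the zero tolerance are illustrative assumptions on our part (the statistic is integer-valued here, so a zero tolerance is feasible); fixing the model index recovers Algorithm 1, while drawing it from its prior gives Algorithm 2, with the posterior model probabilities estimated by acceptance frequencies.

```python
import numpy as np

def abc_model_choice(y_obs, eta, rho, priors, simulators, n_sims, eps,
                     rng=np.random.default_rng(0)):
    """ABC-MC sketch: draw the model index from its (uniform) prior, then a
    parameter from that model's prior and a pseudo-dataset from the model;
    acceptance frequencies estimate the posterior model probabilities.
    Fixing the index instead recovers the basic ABC sampler of Algorithm 1."""
    eta_obs = eta(y_obs)
    accepted = np.zeros(len(simulators))
    for _ in range(n_sims):
        m = rng.integers(len(simulators))            # m ~ prior on the index
        theta = priors[m](rng)                       # theta ~ pi_m
        z = simulators[m](theta, len(y_obs), rng)    # z ~ f_m(.|theta)
        if rho(eta(z), eta_obs) <= eps:              # rho{eta(z), eta(y)} <= eps
            accepted[m] += 1
    return accepted / max(accepted.sum(), 1)         # ABC estimate of P(M=m|y)

# Illustrative choices: Poisson vs geometric models, eta = sample sum, eps = 0.
eta = lambda x: float(np.sum(x))
rho = lambda a, b: abs(a - b)
priors = [lambda rng: rng.exponential(1.0),          # lambda ~ Exp(1)
          lambda rng: rng.uniform(0.0, 1.0)]         # p ~ U(0, 1)
simulators = [lambda lam, n, rng: rng.poisson(lam, n),
              lambda p, n, rng: rng.geometric(p, n) - 1]   # support {0, 1, ...}
y = np.random.default_rng(1).poisson(1.0, size=30)
print(abc_model_choice(y, eta, rho, priors, simulators, n_sims=100_000, eps=0))
```

The sufficiency decomposition discussed above can also be restated compactly (the notation g_i for the conditional density of the data given the summary statistic is ours); the ratio g_1(y)/g_2(y) is precisely the term that the ABC approximation cannot recover:

\begin{aligned}
B_{12}(\mathbf{y})
  &= \frac{\int \pi_1({\boldsymbol{\theta}}_1)\,f_1(\mathbf{y}\mid{\boldsymbol{\theta}}_1)\,\text{d}{\boldsymbol{\theta}}_1}
          {\int \pi_2({\boldsymbol{\theta}}_2)\,f_2(\mathbf{y}\mid{\boldsymbol{\theta}}_2)\,\text{d}{\boldsymbol{\theta}}_2}
   = \frac{g_1(\mathbf{y})}{g_2(\mathbf{y})}\,
     \frac{\int \pi_1({\boldsymbol{\theta}}_1)\,f_1^{{\boldsymbol{\eta}}}({\boldsymbol{\eta}}(\mathbf{y})\mid{\boldsymbol{\theta}}_1)\,\text{d}{\boldsymbol{\theta}}_1}
          {\int \pi_2({\boldsymbol{\theta}}_2)\,f_2^{{\boldsymbol{\eta}}}({\boldsymbol{\eta}}(\mathbf{y})\mid{\boldsymbol{\theta}}_2)\,\text{d}{\boldsymbol{\theta}}_2}
   = \frac{g_1(\mathbf{y})}{g_2(\mathbf{y})}\;B^{{\boldsymbol{\eta}}}_{12}(\mathbf{y}),
\end{aligned}

where f_i(\mathbf{y}\mid{\boldsymbol{\theta}}_i) = g_i(\mathbf{y})\,f_i^{{\boldsymbol{\eta}}}({\boldsymbol{\eta}}(\mathbf{y})\mid{\boldsymbol{\theta}}_i) is the factorisation granted by sufficiency of the statistic within each model.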
however , the magnitude of the difference between and expressed by is such that there is no direct connection between both answers . in a generalsetting , if has the same dimension as one component of the components of , the ratio is equivalent to a density ratio for a sample of size , hence it can be arbitrarily small or arbitrarily large when grows .contrastingly , the bayes factor is based on an equivalent to a single observation , hence does not necessarily converge with to the correct limit , as shown by the poisson and normal examples below and in si .the conclusion derived from the abc - based bayes factor may therefore completely differ from the conclusion derived from the exact bayes factor and there is no possibility of a generic agreement between both , or even of a manageable correction factor .this discrepancy means that a theoretical validation of the abc - based model choice procedure is currently missing and that , due to this absence , potentialy costly simulation - based assessments are required when calling for this procedure .therefore , users must be warned that abc approximations to bayes factors do not perform as standard numerical or monte carlo approximations , with the exception of gibbs random fields detailed in the next section . in all cases when differs from one , no inference on the true bayes factor can be derived from the abc - mc approximation without further information on the ratio , most often unavailable in settings where abc is necessary . also derived this relation between both bayes factors in their formula * [ 18]*. while they still advocate the use of abc model choice in the absence of sufficient statistic , we stress that no theoretical guarantee can be given on the validity of the abc approximation to the bayes factor and hence of its use as a model choice procedure .note that resort to full allelic distributions in an abc framework , instead of chosing summary statistics .they show how to apply abc using allele frequencies to draw inferences in cases where selecting suitable summary statistics is difficult ( and where the complexity of the model or the size of dataset prohibits to use full - likelihood methods ) . in such settings, abc - mc does not suffer from the divergence exhibited here because the measure of distance does not involve a reduction of the sample .the same comment applies to the abc - sysbio software of , which relies on the whole dataset .the theoretical validation of abc inference in hidden markov models by should also extend to the model choice setting because the approach does not rely on summary statistics but instead on the whole sequence of observations .in an apparent contradiction with the above , showed that the computation of the posterior probabilities of gibbs random fields under competition can be done via abc techniques , which provide a converging approximation to the true bayes factor .the reason for this result is that , for these models in the above ratio , .the validation of an abc comparison of gibbs random fields is thus that their specific structure allows for a sufficient statistic vector that runs across models and therefore leads to an exact ( when ) simulation from the posterior probabilities of the models .each gibbs random field model has its own sufficient statistic and exposed the fact that the vector of statistics is also sufficient for the joint parameter . 
point out that this specific property of gibbs random fields can be extended to any exponential family ( hence to any setting with fixed - dimension sufficient statistics , see ) .their argument is that , by including all sufficient statistics and all dominating measure statistics in an encompassing model , models under comparison are submodels of the encompassing model .the concatenation of those statistics is then jointly sufficient across models . while this encompassing principle holds in full generality , in particular when comparing models that are already embedded , we think it leads to an overly optimistic perspective about the merits of abc for model choice : in practice , most complex models do not enjoy sufficient statistics ( if only because they are beyond exponential families ) .the gibbs case processed by therefore happens to be one of the very few realistic counterexamples . as demonstrated in the next section and in the normal example in si ,using insufficient statistics is more than a mere loss of information .looking at what happens in the limiting case when one relies on a common model - wise sufficient statistic is a formal but useful study since it brings light on the potentially huge discrepancy between the abc - based and the true bayes factors . to develop a solution to the problem in the formal case of the exponential families does not help in understanding the discrepancy for non - exponential models .the difficulty with the discrepancy between and is that this discrepancy is impossible to evaluate in a general setting , while there is no reason to expect a reasonable agreement between both quantities .a first illustration was produced by in the case of ma models . a simple illustration of the discrepancy due to the use of a model - wise sufficient statistic is a a sample that could come either from a poisson distribution or from a geometric distribution , already introduced in as a counter - example to gibbs random fields and later reprocessed in to support their sufficiency argument . in this case , the sum is a sufficient statistic for both models but not across models .the distribution of the sample given is a multinomial distribution when the data is poisson , while it is the uniform distribution over the s such that in the geometric case , since is then a negative binomial variable .the discrepancy ratio is therefore when simulating poisson or geometric variables and using prior distributions as exponential and uniform on the parameters of the respective models , the exact bayes factor is available and the distribution of the discrepancy is therefore available . fig [ fig : poisneg ] gives the range of versus , showing that is then unrelated with : the values produced by both approaches have nothing in common . 
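The Poisson-geometric comparison behind this figure can be reproduced numerically. Under the priors stated above (an Exp(1) prior on the Poisson mean and a uniform prior on the geometric probability), both the full-data marginal likelihoods and the marginals of the sufficient statistic S = sum(x) are available in closed form; the expressions below are our own derivation, offered as a sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.special import gammaln

def log_bf_full(x):
    """Exact log Bayes factor (Poisson vs geometric) based on the whole
    sample, with lambda ~ Exp(1) and p ~ U(0, 1) priors."""
    n, S = len(x), np.sum(x)
    log_m1 = gammaln(S + 1) - np.sum(gammaln(x + 1)) - (S + 1) * np.log(n + 1)
    log_m2 = gammaln(n + 1) + gammaln(S + 1) - gammaln(n + S + 2)
    return log_m1 - log_m2

def log_bf_sufficient(x):
    """Log Bayes factor based only on the model-wise sufficient statistic
    S = sum(x): S is Poisson(n * lambda) under model 1 and negative
    binomial under model 2."""
    n, S = len(x), np.sum(x)
    log_m1 = S * np.log(n) - (S + 1) * np.log(n + 1)
    log_m2 = np.log(n) + gammaln(n + S) - gammaln(n + S + 2)
    return log_m1 - log_m2

rng = np.random.default_rng(0)
for _ in range(5):                       # Poisson data, as in the left panel
    x = rng.poisson(1.0, size=100)
    print(log_bf_full(x), log_bf_sufficient(x))
```

On such samples the full-data log Bayes factor is of the order of the sample size, while the statistic-based version stays of the order of a single observation, in line with the figure.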
as noted above , the approximation based on the sufficient statistic is producing figures of the magnitude of a _ single _ observation , while the true bayes factor is of the order of the sample size .comparison between the true log - bayes factor _( first axis ) _ for the comparison of a poisson model versus a negative binomial model and of the log - bayes factor based on the sufficient statistic _ ( second axis ) _ , for poisson _( left ) _ and negative binomial _ ( left ) _ samples of size , based on replications ] the discrepancy between both bayes factors is in fact increasing with the sample size , as shown by the following result : consider model selection between model 1 : with prior distribution equal to an exponential distribution and model 2 : with a uniform prior distribution when the observed data consists of iid observations with expectation = \theta_0 > 0 ] for this parameter ( the true value being 30 ) .the is algorithm was performed using 100 coalescent trees per particle .the marginal likelihood of both scenarios has been computed for the same set of 1000 particles and they provide the posterior probability of each scenario .the abc computations have been performed with diyabc .a reference table of 2 million datasets has been simulated using 24 usual summary statistics ( provided in table s1 ) and the posterior probability of each scenario has been estimated as their proportion in the 500 simulated datasets closest to the pseudo observed one .this population genetic setting does not allow for a choice of sufficient statistics , even at the model level .summary statistics used in the population genetic experiments , the subset column corresponding to the abc operated with 15 summary statistics and the last three statistics being only used in this reduced collection name & subset & definition + + nal1 & yes & average number of alleles in population 1nal2 & yes & average number of alleles in population 2nal3 & yes & average number of alleles in population 3het1 & yes & average heterozygothy n population 1het2 & yes & average heterozygothy n population 2het3 & yes & average heterozygothy n population 3var1 & yes & average variance of the allele size in population 1var2 & yes & average variance of the allele size in population 2var3 & yes & average variance of the allele size in population 3mgw1 & no & garza - williamson m in population 1mgw2 & no & garza - williamson m in population 2mgw3 & no &garza - williamson m in population 3fst1 & no & average fst in population 1fst2 & no & average fst in population 2fst3 & no & average fst in population 3lik12 & no & probability that sample 1 is from population 1lik13 & no & probability that sample 1 is from population 3lik21 & no & probability that sample 2 is from population 1lik23 & no & probability that sample 2 is from population 3lik31 & no & probability that sample 3 is from population 1lik32 & no & probability that sample 3 is from population 2das12 & yes & shared allele distance between populations 1 and 2das13 & yes & shared allele distance between populations 1 and 3das23 & yes & shared allele distance between populations 2 and 3dm212 & yes & distance between populations 1 and 2dm213 & yes & distance between populations 1 and 3dm223 & yes & distance between populations 2 and 2 + the second experiment also opposes two scenarios including three populations , two of them having diverged 100 generations ago and the third one resulting of a recent admixture between the first two populations ( scenario 1 ) or simply diverging from 
population 1 ( scenario 2 ) at the same time of 5 generations in the past . in scenario 1 , the admixture rate is from population 1 .pseudo observed datasets ( 100 ) of the same size as in experiment 1 ( 15 diploid individuals per population , 5 independent microsatellite loci ) have been generated for an effective population size of 1000 and mutation rates of .in contrast with experiment 1 , analyses included the following 6 parameters ( provided with corresponding priors ) : admixture rate ( ] ) , the time of admixture / second divergence ( ] ) . to account for an higher complexity in the scenarios ,the is algorithm was performed with 10,000 coalescent trees per particle .apart from this change , both abc and likelihood analyses have been performed in the same way as experiment 1 .fig [ fig : res11 ] shows a reasonable fit between the exact posterior probability of model 1 ( evaluated by is ) and the abc approximation in the first experiment on most of the 100 simulated datasets , even though the abc approximation is biased towards . when using as the decision boundary between model 1 and model 2 , there is hardly any discrepancy between both approaches , demonstrating that model choice based on abc can be trusted in this case .fig [ fig : res12 ] considers the same setting when moving from 24 to 15 summary statistics ( given in table s1 ) : the fit somehow degrades .in particular , the number of opposite conclusions in the model choice moves to . in the more complex setting of the second experiment , the discrepancy worsens ,as shown on fig [ fig : res2 ] . the number of opposite conclusions reaches and the fit between both versions of the posterior probabilities is considerably degraded , with a correlation coefficient of .comparison of is and abc estimates of the posterior probability of scenario 1 in the first population genetic experiment , using 24 summary statistics ] same caption as fig [ fig : res11 ] when using 15 summary statistics ] comparison of is and abc estimates of the posterior probability of scenario 1 in the second population genetic experiment ] the validity of the importance sampling approximation can obviously be questioned in both experiments , however fig [ fig : repeat1 ] and fig [ fig : repeat2 ] display a strong stability of the posterior probability is approximation across 10 independent runs for 5 different datasets and gives proper confidence in this approach .increasing the number of loci to 50 and the sample size to 100 individuals per population ( see si ) leads to posterior probabilities of the true scenario overwhelmingly close to one ( fig [ fig : resml2 ] ) , thus bluring the distinction between abc and likelihood based estimates but also reassuring on the ability of abc to provide the right choice of model with a higher information content of the data . actually , we note that , for this experiment , all abc - based decisions conclude in favour of the correct model. 
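In the experiments above, the posterior probability of each scenario is estimated as its proportion among the 500 reference-table simulations closest to the observed summary statistics. A minimal sketch of that step is given below; the standardisation of the summaries by their median absolute deviation is an assumption on our part, and the exact distance used by DIYABC may differ.

```python
import numpy as np

def abc_knn_model_probs(s_obs, ref_summaries, ref_models, k=500, n_models=2):
    """Posterior model probabilities estimated as the proportion of each
    model among the k reference simulations whose summary statistics are
    closest to the observed ones. Summaries are scaled by their median
    absolute deviation (an assumption; DIYABC's distance may differ)."""
    scale = np.median(np.abs(ref_summaries - np.median(ref_summaries, axis=0)),
                      axis=0)
    scale[scale == 0] = 1.0
    d = np.linalg.norm((ref_summaries - s_obs) / scale, axis=1)
    nearest = np.argsort(d)[:k]
    counts = np.bincount(ref_models[nearest], minlength=n_models)
    return counts / k

# Toy usage with a small fake reference table (the experiments use 2 million
# rows and 24 summaries; 200,000 rows are enough for an illustration).
rng = np.random.default_rng(0)
ref_models = rng.integers(0, 2, size=200_000)            # scenario index
ref_summaries = rng.normal(ref_models[:, None], 1.0, size=(200_000, 24))
s_obs = rng.normal(1.0, 1.0, size=24)
print(abc_knn_model_probs(s_obs, ref_summaries, ref_models, k=500))
```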
boxplots of the posterior probabilities evaluated over 10 independent monte carlo evaluations , for five independent simulated datasets in the first population genetic experiment ] boxplots of the posterior probabilities evaluated over 10 independent monte carlo evaluations , for five independent simulated datasets in the second population genetic experiment ] comparison between two approximations of the posterior probabilities of scenario 1 based on importance sampling with 50,000 particles _ ( first axis ) _ and abc _( second axis ) _ for the larger population genetic experiment ]since its introduction by and , abc has been extensively used in several areas involving complex likelihoods , primarily in population genetics , both for point estimation and testing of hypotheses . in realistic settings , with the exception of gibbs random fields , which satisfy a resilience property with respect to their sufficient statistics ,the conclusions drawn on model comparison can not be trusted _ per se _ but require further simulations analyses as to the pertinence of the ( abc ) bayes factor based on the summary statistics .this paper has examined in details only the case when the summary statistics are sufficient for both models , while practical situations imply the use of insufficient statistics .the rapidly increasing number of applications estimating posterior probabilities by abc indicates a clear need for further evaluations of the worth of those estimations .further research is needed for producing trustworthy approximations to the posterior probabilities of models . at this stage , unless the whole data is involved in the abc approximation as in , our conclusion on abc - based model choice is to exploit the approximations in an exploratory manner as measures of discrepancies rather than genuine posterior probabilities .this direction relates with the analyses found in .furthermore , a version of this exploratory analysis is already provided in the diy - abc software of .an option in this software allows for the computation of a monte carlo evaluation of false allocation rates resulting from using the abc posterior probabilities in selecting a model as the most likely .for instance , in the setting of both our population genetic experiments , diy - abc gives false allocation rates equal to ( under scenarios 1 and 2 ) and and ( under scenarios 1 and 2 ) , respectively .this evaluation obviously shifts away from the performances of abc as an approximation to the posterior probability towards the performances of the whole bayesian apparatus for selecting a model , but this nonetheless represents a useful and manageable quality assessment for practitioners . 
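A sketch of the Monte Carlo evaluation of false allocation rates mentioned above: pseudo-observed datasets are simulated under a known scenario, ABC posterior probabilities are computed for each, and one records how often the most probable model is not the true one. The two callables are placeholders for the user's simulator and ABC estimator (for instance the nearest-neighbour estimator sketched earlier), not DIYABC's internal interface.

```python
import numpy as np

def false_allocation_rate(true_model, simulate_pseudo_obs, abc_model_probs,
                          n_repeats=100, rng=np.random.default_rng(0)):
    """Fraction of pseudo-observed datasets, simulated under `true_model`,
    for which the ABC posterior probabilities select another model. Both
    callables are placeholders for the user's simulator and ABC machinery."""
    errors = 0
    for _ in range(n_repeats):
        s_obs = simulate_pseudo_obs(true_model, rng)   # summaries of one pseudo-dataset
        if np.argmax(abc_model_probs(s_obs)) != true_model:
            errors += 1
    return errors / n_repeats

# Caricature check: summaries centred on the scenario index, threshold rule.
toy_sim = lambda m, rng: rng.normal(m, 1.0, size=24)
toy_abc = lambda s: np.array([np.mean(s) < 0.5, np.mean(s) >= 0.5], dtype=float)
print(false_allocation_rate(0, toy_sim, toy_abc))
print(false_allocation_rate(1, toy_sim, toy_abc))
```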
the first three authors work has been partly supported by agence nationale de la recherche via the 20092013 project emile .they are grateful to the reviewers and to michael stumpf for their comments .computations were performed on the inra cbgp and migale clusters .the following reproduces the poisson geometric illustration in a normal model .if we look at a fully normal setting , we have hence if we reparameterise the observations into , we do get ^ 2 \big/2 \right\}\end{aligned}\ ] ] since the jacobian is .hence ^ 2 /2 \right\ } \sigma^{-n}\ ] ] considering both models the discrepancy ratio is then given by ^ 2 \right)\right\}\ ] ] and is connected with the lack of consistency of the bayes factor : consider model selection between model 1 : and model 2 : , and being given , with prior distributions equal to a distribution and when the observed data consists of iid observations with finite mean and variance .then is the minimal sufficient statistic for both models and the bayes factor based on the sufficient statistic , , satisfies fig [ fig : twonormal ] illustrates the behaviour of the discrepancy ratio when and , for datasets of size simulated according to both models . the discrepancy ( expressed on a log scale )is once again dramatic , in concordance with the above lemma .empirical distributions of the log discrepancy for datasets of size simulated from _( left ) _ and _ ( right ) _ distributions when and , based on replications and a flat prior ] if we now turn to an alternative choice of sufficient statistic , using the pair with we follow the solution of . using a conjugate prior , the true bayes factor is equal to the bayes factor based on the corresponding distributions of the pair in the respective models .therefore , with sufficient computing power , the abc approximation to the bayes factor can be brought arbitrarily close to the true bayes factor . however, this coincidence does not bring any intuition on the behaviour of the abc approximations in realistic settings .we also considered a more informative population genetic experiment with the same scenarios ( 1 and 2 ) as in the second experiment .one hundred datasets were simulated under scenario 1 with 3 populations , i.e. 6 parameters .we take 100 diploid individuals per population , 50 loci per individual .this thus corresponds to 300 genotypes per dataset .the is algorithm was performed using 100 coalescent trees per particle .the marginal likelihood of both scenarios has been computed for the same set for both 1000 particles ( is1 ) and 50,000 particles ( is2 ) .a national cluster of 376 processors ( including 336 quad core processors ) was used for this massive experiment ( which required more than 12 calendar days for the importance sampling part ) .the confidence about the is approximation can be assessed on fig [ fig : resml2 ] , which shows that both runs most always provide the same numerical value , which almost uniformly is very close to one .this makes the fit of the abc approximation to the true value harder to assess , even though we can spot a trend towards under - estimation .furthermore , they almost all lead to correctly select model 1 .
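Returning to the normal example in the SI above: because the model specifications are garbled in the extracted text, the variances used below (1 against 4) and the flat prior on the common location are our own illustrative choices. With a flat prior, the Bayes factor based on the sample mean alone is identically one, since the mean integrates to the same marginal under both models, while the full-data Bayes factor depends on the within-sample variability and drifts away as the sample grows, in line with the lack-of-consistency statement above.

```python
import numpy as np

def log_bf_full_normal(y, s1=1.0, s2=2.0):
    """Full-data log Bayes factor between N(theta, s1^2) and N(theta, s2^2),
    both with a flat prior on the common location theta (s1 and s2 are
    illustrative values, not necessarily those of the paper's SI)."""
    n, ss = len(y), np.sum((y - np.mean(y)) ** 2)
    return (-(n - 1) / 2 * np.log(s1 ** 2 / s2 ** 2)
            - ss / 2 * (1 / s1 ** 2 - 1 / s2 ** 2))

# The Bayes factor based on the sample mean alone equals 1 for every dataset
# here, because ybar ~ N(theta, s_i^2 / n) integrates to the same flat-prior
# marginal under both models: the insufficient statistic is blind to the
# variance information that drives the full-data Bayes factor.
rng = np.random.default_rng(0)
for n in (10, 100, 1000):
    y = rng.normal(0.0, 1.0, size=n)          # data simulated from model 1
    print(n, log_bf_full_normal(y))           # grows roughly linearly in n
```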
Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models. Grelaud et al. (2009, Bayesian Anal. 3:427-442) advocated the use of ABC for model choice in the specific case of Gibbs random fields, relying on an inter-model sufficiency property to show that the approximation was legitimate. We implemented ABC model choice in a wide range of phylogenetic models in the DIY-ABC software (Cornuet et al. (2008) Bioinformatics 24:2713-2719). We now present arguments as to why the theoretical arguments for ABC model choice are missing, since the algorithm involves an unknown loss of information induced by the use of insufficient summary statistics. The approximation error of the posterior probabilities of the models under comparison may thus be unrelated to the computational effort spent in running an ABC algorithm. We then conclude that additional empirical verifications of the performances of the ABC procedure, such as those available in DIYABC, are necessary to conduct model choice. Inference on population genetic models such as coalescent trees is one representative example of cases where statistical analyses like Bayesian inference cannot easily operate, because the likelihood function associated with the data cannot be computed in a manageable time. The fundamental reason for this impossibility is that the model associated with coalescent data has to integrate over trees of high complexity. In such settings, traditional approximation tools like Monte Carlo simulation from the posterior distribution are unavailable for practical purposes. Indeed, due to the complexity of the latent structures defining the likelihood (like the coalescent tree), their simulation is too unstable to bring a reliable approximation in a manageable time. Such complex models call for a practical if cruder approximation method, the ABC methodology. This rejection technique bypasses the computation of the likelihood via simulations from the corresponding distribution (see and for recent surveys, and for the wide and successful array of applications based on implementations of ABC in genomics and ecology). We argue here that ABC is a generally valid approximation method for doing Bayesian inference in complex models. However, without further justification, ABC methods cannot be trusted to discriminate between two competing models when based on insufficient summary statistics. We exhibit simple examples in which the information loss due to insufficiency leads to inconsistency, i.e. when the ABC model selection fails to recover the true model, even with infinite amounts of observation and computation. On the one hand, ABC using the entire data leads to a consistent model choice decision, but it is clearly infeasible in most settings. On the other hand, too much information loss due to insufficiency leads to a statistically invalid decision procedure. The challenge is in achieving a balance between information loss and consistency. Theoretical results that mathematically validate model choice for insufficient statistics are currently lacking on a general basis. Our conclusion at this stage is to opt for a cautionary approach in ABC model choice, handling it as an exploratory tool rather than trusting the Bayes factor approximation. The corresponding degree of approximation cannot be evaluated, except via Monte Carlo evaluations of the model selection performances of ABC.
More empirical measures, such as those proposed in the DIY-ABC software and in , thus seem to be the only solution currently available for conducting model comparison. We stress that, while reservations have repeatedly been expressed about the formal validity of the ABC approach in statistical testing, those criticisms were rebutted in and are not relevant to the current paper.
the formation and dissolution of young stellar clusters is an important , but complex problem that requires computer simulations to explore in detail .stars form rapidly from turbulent molecular gas , most often in clusters of tens to thousands of stars ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?therefore the early phases of the dynamical evolution of star clusters involves young stars moving through a significant and dynamic gaseous background . to understand the complete process of cluster formation and evolution requires a combination of self - gravitating hydrodynamics for the gas from which stars and planets form , and the gravitational -body dynamics of ( multiple ) stars and planets once they have formed .previously , detailed hydrodynamics and accurate -body dynamics have been separated .hydrodynamical simulations have been used to simulate the turbulent gas dynamics leading to fragmentation and star formation , while -body simulations tend to follow the late gas - free stages of star cluster evolution .however , there is a very signifcant and important phase in the life of a star cluster in which stellar dynamics within a gas background is vitally important . in the ` gas - rich ' phase , which occurs around 1 5myr the stars are interacting dynamically in a live gas background .the star formation process tends to produce binary and multiple systems in complex hiearchical structures .dynamical interactions between single and multiple systems during the subsequent few myr changes the binary properties of the stars as well as the structure and dynamics of the whole cluster ( see * ? ? ? * ; * ? ? ?* and references therein ) .therefore , the binary properties of stars released into the field after gas expulsion will depend on stellar dynamics during the gas - rich phase ( see also * ? ? ?in addition , the early stages of planet formation will occur during this gas - rich phase and interactions may seriously alter the architecture and properties of planetary systems .accurate observations of the binary and dynamical properties of clusters are usually only available once gas is expelled ( especially those to be provided by gaia ) , which means they will have been altered by dynamical evolution in the gas - rich phase . in hydrodynamical simulations ,we generally replace the dense collapse phase of gas into stars with sink particles ( see for sph ; for amr implementations , see also ) .these sink particles can represent individual stars if their sizes are au ( e.g. * ? ? ? * ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) or larger regions perhaps containing primordial multiple systems which can not be resolved if their sizes are au ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?sinks have the huge advantage of allowing dense , computationally expensive , and unresolvable regions to be ` compressed ' into a particle which can interact with the surrounding gas and accrete from it . however , sink particles are not point - like -body particles as , even if each sink represents a single star , ( a ) their gravity is softened , and ( b ) sinks accrete from the surrounding gas .pure -body simulations of stellar systems have a long history ( see * ? ? ?* ; * ? ? ?* ) , but most ignore the early gas - rich phases of a star cluster s life . the usual way to include gas and model the gas - rich phase is to introduce an external potential , which is often a simple plummer or king model ( e.g. * ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? 
?* ) , although see used softened ` star particles ' in a live gas background .generally , the external potential is allowed to vary with time , e.g. to model the expulsion of gas from a cluster .however , the use of a simple analytic external potential to model the gas ( which is often the majority of the mass in an embedded cluster ) is clearly a vast over - simplification . in this paperwe introduce a new hybrid -body / smoothed particle hydrodynamics ( sph ) algorithm that has been implemented in the sph code seren .the stellar dynamics are computed with a 4th - order integrator allowing the details of -body interactions between stars to be followed .sph gas particles are used to represent a live background gas potential in which the stellar dynamics is modelled .we emphasise that this hybrid method is not a replacement for fully self - consistent , high - resolution star formation simulations or detailed -body simulations .rather , it represents a fast way of exploring stellar dynamics in a live background potential which can be used to perform large suites of simulations to explore large parameter space , or to inform the initial conditions of pure -body simulations of the post - gas phase . in section [s : method ] , we introduce the hydrodynamical and n - body methods used and how they are combined algorithmically . in section [ s : tests ] , we present a number of simple tests to demonstrate the accuracy and robustness of our method . in section [s : discussion ] , we discuss various important caveats of our method , in particular understanding resolution effects , and also discuss possible astrophysical problems that can be explored with this code .self - gravitating hydrodynamical simulations in astrophysics are usually modelled using either a lagrangian , particle - based approach such as smoothed particle hydrodynamics , or an eulerian , grid - based approach such as adaptive mesh refinement hydrodynamics . whereas sph derives interaction terms by computing particle - particle force terms and integrating the motion of each particle individually , grid codes operate by computing fluxes across neighbouring grid cells .since n - body codes also work by computing forces and integrating positions and velocities , sph is the most natural hydrodynamical method to merge directly with -body dynamics as the particle - nature of the gas and stars are easily compatible making it straight - forward to derive the coupling force terms and to merge their individual integration schemes .we use a conservative self - gravitating sph formulation to model the gas dynamics and include the star particles within the sph formulation as a special type of sph particle , rather than external -body particles .following most modern conservative sph schemes ( e.g. * ? ? ?* ; * ? ? ?* ) , the smoothing length of a gas particle is set by the relation , and the sph gas density is given by where , , , are the position , smoothing length , mass and density of particle respectively , , is the sph smoothing kernel and is a dimensionless number that controls the mean number of neighbours ( usually set to to have neighbours ) .since and depend on each other , we must iterate between equations [ eqn : hrho ] and [ eqn : sphrho ] in order to reach a consistent solution . 
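A brute-force sketch of the coupled smoothing-length and density iteration described above, assuming the usual three-dimensional relation h_a = eta (m_a / rho_a)^(1/3) and the standard M4 cubic-spline kernel; SEREN itself uses tree-based neighbour finding rather than the O(N^2) sums used here.

```python
import numpy as np

def w_m4(q):
    """Standard M4 cubic-spline kernel (3-D normalisation 1/pi), q = r/h."""
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / np.pi

def density_and_h(pos, m, eta=1.2, tol=1e-6, max_iter=100):
    """Iterate rho_a = sum_b m_b W(r_ab, h_a) together with
    h_a = eta (m_a / rho_a)^(1/3) until the smoothing lengths converge
    (a sketch with direct O(N^2) neighbour sums)."""
    n = len(m)
    rho = np.full(n, np.sum(m))                  # crude guess: total mass / unit volume
    h = eta * (m / rho) ** (1.0 / 3.0)
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    for _ in range(max_iter):
        rho_new = np.sum(m[None, :] * w_m4(r / h[:, None]) / h[:, None]**3, axis=1)
        h_new = eta * (m / rho_new) ** (1.0 / 3.0)
        if np.max(np.abs(h_new - h) / h) < tol:
            return rho_new, h_new
        rho, h = rho_new, h_new
    return rho, h

pos = np.random.default_rng(0).uniform(0.0, 1.0, size=(200, 3))
m = np.full(200, 1.0 / 200)
rho, h = density_and_h(pos, m)
print(rho.mean(), h.mean())
```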
in contrast , the star particles have a constant smoothing length which represents the gravitational softening length to prevent violent 2-body collisions with other stars , in place of using more complicated algorithms such as regularisation ( see * ? ? ?* for a description of common n - body techniques ) .in order to reduce ` scattering ' during star - gas interactions , we use the mean - smoothing length approach ( see appendix a of * ? ? ? * ) , to keep star - gas interactions as smooth as possible .we can now formulate the lagrangian of the system containing all interaction terms and then derive the equations of motion via the euler - lagrange equations .this simple approach allows us to develop a conservative scheme which in principle can be integrated to arbitrary accuracy ( i.e. if direct summation of gravitational forces and a constant , global timestep is used ) . due to the larger energy errors often produced by n - body encounters , we use a higher - order hermite integration scheme to integrate star particles , and a simpler 2nd - order leapfrog kick - drift - kick scheme to integrate the gas particles motion .the gravitational force softening between sph particles can be derived in a number of ways ( e.g. plummer softening , see * ? ? ?however , it has been suggested by that it is safest to use the sph kernel itself to derive the softening terms to prevent artificial gravitational fragmentation .they showed that for gas condensations where the jeans length was of order the smoothing length or greater than , the net hydrodynamical force is stronger than the net gravitational force from all neighbouring particles , thereby suppressing or even reversing the collapse of the condensation and preventing fragmentation .we therefore derive the softening terms from the sph kernel following the method and nomenclature of .first , we consider the case of uniform smoothing length .the gravitational potential at the position of particle due to a distribution of sph particles is given by where is the gravitational softening kernel and is the smoothing length of all sph particles .we note that eqn .[ eqn : ksgravpot ] requires the softening kernel to be a negative quantity .the potential is related to the density field by poisson s equation , where the sph density defined at is given by eqn .[ eqn : sphrho ] .this allows us to directly relate the softening kernel to the sph smoothing kernel for a consistent formulation of self - gravity in sph . substituting equations [ eqn : sphrho ] & [ eqn : ksgravpot ] into poisson s equation, we obtain by direct integration of eqn .[ eqn : wphi ] with the appropriate limits , we can obtain the gravitational softening kernel via the gravitational force kernel , where and where is the compact support of the kernel ( e.g. for the m4-kernel ) . when using variable smoothing lengths , we have two choices for symmetrising the gravitational interaction ; ( a ) use the average of the two softening kernels , or ( b ) use the mean smoothing length in the softening kernel , i.e. where . advocate using the average softening kernel approach since it requires less loops over all particles than the mean smoothing length approach . 
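The force kernel can be obtained numerically from the kernel density via Poisson's equation, i.e. the standard Gauss-theorem form phi'(r, h) = (4 pi / r^2) * integral_0^r W(r', h) r'^2 dr'. The quadrature below is a sketch that stands in for the closed-form kernel expressions used in practice; outside the compact support (r >= 2h) it recovers the Newtonian 1/r^2 force, as expected.

```python
import numpy as np
from scipy.integrate import quad

def kernel_m4(r, h):
    """M4 cubic-spline smoothing kernel in 3-D (scalar version)."""
    q = r / h
    if q < 1.0:
        w = 1.0 - 1.5 * q**2 + 0.75 * q**3
    elif q < 2.0:
        w = 0.25 * (2.0 - q)**3
    else:
        w = 0.0
    return w / (np.pi * h**3)

def force_kernel(r, h):
    """Softened gravitational force kernel phi'(r, h), computed by numerical
    quadrature of the kernel mass enclosed within radius r."""
    enclosed, _ = quad(lambda rp: kernel_m4(rp, h) * rp**2, 0.0, r)
    return 4.0 * np.pi * enclosed / r**2

h = 1.0
for r in (0.25 * h, 0.5 * h, h, 2.0 * h, 4.0 * h):
    # Beyond the compact support (r >= 2h) this matches Newton's 1/r^2.
    print(r, force_kernel(r, h), 1.0 / r**2)
```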
when using only sph particles where the smoothing length is determined by eqn .[ eqn : hrho ] , there is little difference in the results since the two methods give similar potentials and forces .however , when we include star particles which can have an arbitrary small smoothing length , there can be very large discrepancies between the two methods .for example , consider a ` collision ' between a gas particle and an sph particle , i.e. where the two particle lie at almost the same position in space .the mean softening kernel approach has two terms , one which will be quite small due to the smoothing length of the gas particle , and the second term using the star smoothing length which can become very large producing a corresponding large scattering force .the mean smoothing length approach however , can never produce a large scattering force , even if the smoothing length of the star becomes zero since the average of the two smoothing lengths can never be less than or .furthermore , the mean smoothing length method allows us to use star particles with zero smoothing length , permitting the study of truly collisional stellar dynamics with unsoftened star - star forces , but softened star - gas interactions .therefore , for the case of our hybrid formulation , we advocate using the mean smoothing length approach and we derive all subsequent equations of motion using this method . the sph equations of motion can be derived from lagrangian mechanics , resulting in a set of equations that automatically conserve linear momentum , to machine precision , and angular momentum and energy , both to integration error ( see * ? ? ?* ; * ? ? ? derived the equations of motion for self - gravitating systems with variable smoothing lengths using lagrangian mechanics .following their method , we derive the equations of motion for a set of sph particles ( with variable smoothing length given by eqns .[ eqn : hrho ] & [ eqn : sphrho ] ) and stars ( with a fixed smoothing length ) .if we have gas particles with labels and star particles with labels , then by inserting all terms into the lagrangian , we obtain where is the gravitational contribution to the lagrangian given by we note that throughout this paper , summations over all sph gas particles are given by the indices , and and summations over all star particles by , and .the equations of motion can be obtained by inserting this lagrangian into the euler - lagrange equations , after inserting the correct terms and evaluating the algebra ( see appendix [ a : sphderivation ] for a full derivation ) , we obtain the following equation of motion for sph gas particles \nonumber \\ & & - g \sum \limits_{b=1}^{n_g } m{_b}\ , \phi'{_{ab}}(\overline{h}{_{ab}})\,\hat{\bf r}{_{ab}}- g \sum \limits_{i=1}^{n_s } \,m{_i}\,\phi'{_{ai}}(\overline{h}{_{ai}})\,\hat{\bf r}{_{ai}}- \frac{g}{2 } \sum \limits_{b=1}^{n_g } m{_b}\,\left [ \frac{\left(\bar{\zeta}{_a}+ \bar{\chi}{_a}\right)}{\omega{_a}}\frac{\partial w{_{ab}}}{\partial { \bf r}{_a}}(h{_a } ) + \frac{\left(\bar{\zeta}{_b}+ \bar{\chi}{_b}\right)}{\omega{_b}}\frac{\partial w{_{ab}}}{\partial { \bf r}{_a}}(h{_b } ) \right ] \end{aligned}\ ] ] where is the thermal pressure of particle , and , , and are defined by the term is the familiar ` grad - h ' correction term that appears in conservative sph with varying smoothing length ( e.g. * ? ? ?* ; * ? ? 
?the term is the correction term derived by for gravitational interactions between gas particles in conservative sph .we obtain an analogous correction term for the star - gas interaction , , which is a summation over all neighbouring star particles .however , is still included in a summation over all neighbouring gas particles since it is the variation in the smoothing length ( which is determined by neighbouring particle positions ) that gives rise to the correction terms .we have some choice in how to evolve the thermal properties of the gas particles .we chose to evolve the specific internal energy equation , i.e. for the star particles , we obtain the following equations of motion for star , we note that since only the smoothing lengths of the gas particles are allowed to vary , all correction terms derived via the lagrangian appear in the equations of motion for the gas particles . the equations of motion for both gas and star particles can be integrated either with a single integration scheme , or with two independent integration schemes in parallel .current sph codes typically use 2nd - order schemes , such as the leapfrog or the runge - kutta - fehlberg , whereas n - body codes use at least 4th - order schemes such as the hermite scheme . in our implementation, we chose to use a 2nd - order leapfrog kick - drift - kick scheme to integrate the sph gas particles coupled with a 4th - order hermite integration scheme to integrate the star particles .one important reason for this choice is that a 4th - order hermite scheme can be considered as the higher - order equivalent of the leapfrog scheme ( see * ? ? ?* ) , where the force , prediction and correction steps are all computed at the same points in the timestep for both schemes .we discuss the details of our implementation of both integration schemes , and in particular we discuss the modifications to the 4th - order hermite scheme to include sph smoothing .we integrate the sph particles using a 2nd - order leapfrog kick - drift - kick integration scheme . a traditional leapfrog works by advancing the positions and velocities half - a - step apart , i.e. it is possible to transform the traditional leapfrog equations into a form where the positions and velocities are both updated at the end of the step , this form of the leapfrog ( also known as the velocity - verlet integration scheme ) has 3rd - order accuracy in integrating the positions and 2nd - order accuracy in integrating the velocities , and therefore 2nd - order overall . as we will see in section [ sss : hermite ], it also has the useful property of having its acceleration and ` correction ' terms calculated in sync with the corresponding terms for the 4th order hermite scheme used for integrating the star particles .other integration schemes ( e.g. the 2nd - order leapfrog drift - kick - drift ) do not necessarily share this property . the timesteps are determined by taking the minimum of three separate conditions , the sph courant - friedrichs - lewy condition , the acceleration condition , and the energy condition when cooling is employed ( see * ? ? ?* ) , i.e. where the terms are dimensionless timestep multipliers and is a small number used to prevent a divide - by - zero in the case of or .we use a standard 4th - order hermite integrator modified by including the same sph softening scheme as the gas particles to integrate the motion of the star particles . 
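A minimal sketch of the kick-drift-kick (velocity-Verlet) step used for the gas particles; the `accel` callable stands in for the full SPH pressure-plus-gravity evaluation, and the harmonic-oscillator usage is only a check that the scheme behaves as a second-order integrator with well-behaved energy errors.

```python
import numpy as np

def leapfrog_kdk_step(x, v, accel, dt):
    """One kick-drift-kick step: half-kick the velocities, drift the
    positions, recompute the accelerations, half-kick again."""
    v_half = v + 0.5 * dt * accel(x)             # kick
    x_new = x + dt * v_half                      # drift
    v_new = v_half + 0.5 * dt * accel(x_new)     # kick
    return x_new, v_new

# Toy usage: a particle in a harmonic potential, a(x) = -x.
x, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
for _ in range(1000):
    x, v = leapfrog_kdk_step(x, v, lambda y: -y, dt=0.01)
print(x, v, 0.5 * (v @ v + x @ x))               # energy stays close to 1.0
```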
in the hermite scheme , the first time derivative of the acceleration ( often referred to as the ` jerk ' )must be calculated explicitly from equation [ eqn : stargrav ] . by taking the time derivative of equation [ eqn : stargrav ] and using equation [ eqn : wphi ], we obtain for the jerk term , since the jerk can be computed from a single sum over all particles , we can compute it explicitly at the same time as computing the accelerations .once both terms are computed , we can calculate the predicted positions and velocities of the stars at the end of the steps , i.e. the acceleration and jerk are then recomputed at the end of the step , and .this then allows us to compute the higher order time derivatives , finally , we apply the correction step where the higher - order terms are added to the position and velocity vectors , i.e. we compute the n - body timesteps using the aarseth timestep criterion , is the timestep multiplier for stars . for the very first timestep , we must compute the 2nd and 3rd time derivatives explicitly since we do not yet have information on the 2nd and 3rd derivatives ( since they are only first computed at the end of the first timestep ) .hereafter , we use eqns .[ eqn : a2 ] & [ eqn : a3 ] for computing these derivatives .seren uses a barnes - hut gravity tree for computing gravitational forces for all self - gravitating gas particles .we use the same tree for computing the gravitational forces due to all gas particles for both sph gas particles , and star particles .seren has a variety of tree - opening criteria that can be selected at compilation - time . for most simulations in this paper, we use the eigenvalue multipole - acceptance criterion because it has better force error control , and therefore ultimately better energy error control , than the standard geometric opening - angle criterion often used . for nearby sph particles , which require direct computation of the gravitational acceleration, we also compute the jerk term when computing the acceleration of star particles . for tree cells ,we compute the jerk contribution due to the centre of mass of the cell ; however , we ignore the contribution due to the quadrupole moment terms for simplicity . for forces due to star particles , we sum all the contributions for the gravitational acceleration ( and jerk ) directly without using a gravity tree . in figure [fig : jerkerror ] , we plot the rms force ( monopole and quadrupole ) and jerk ( monopole only ) errors using the geometric opening - angle criterion ( in order to plot both monopole and quadrupole errors ) for stars in a star - gas plummer sphere ( as discussed in section [ ss : plummertest ] ) .we can see that the jerk error scales in the same way as the monopole - only force error .the force error using quadrupole moments scales much better than both monopole errors , as expected . 
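A sketch of one 4th-order Hermite predict-evaluate-correct step in the standard Makino-Aarseth form, written here for unsoftened point-mass stars for brevity (the hybrid scheme replaces the pairwise terms by their kernel-softened counterparts and adds the star-gas contributions). The predictor, the reconstruction of the second and third derivatives, and the corrector follow the usual textbook expressions and should be checked against the paper's own equations.

```python
import numpy as np

def accel_jerk(x, v, m, G=1.0):
    """Newtonian pairwise accelerations and jerks for point-mass stars."""
    n = len(m)
    a, j = np.zeros_like(x), np.zeros_like(x)
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            dr, dv = x[k] - x[i], v[k] - v[i]
            r2 = dr @ dr
            inv_r3 = r2 ** -1.5
            a[i] += G * m[k] * dr * inv_r3
            j[i] += G * m[k] * (dv * inv_r3 - 3.0 * (dr @ dv) * dr * r2 ** -2.5)
    return a, j

def hermite_step(x, v, m, dt):
    """Predict with a and its time derivative, re-evaluate at the end of the
    step, reconstruct the 2nd and 3rd derivatives, then correct."""
    a0, j0 = accel_jerk(x, v, m)
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vp = v + a0 * dt + j0 * dt**2 / 2
    a1, j1 = accel_jerk(xp, vp, m)
    a2 = (-6 * (a0 - a1) - dt * (4 * j0 + 2 * j1)) / dt**2
    a3 = (12 * (a0 - a1) + 6 * dt * (j0 + j1)) / dt**3
    return xp + a2 * dt**4 / 24 + a3 * dt**5 / 120, vp + a2 * dt**3 / 6 + a3 * dt**4 / 24

# Toy usage: a circular two-body orbit (G = M_total = 1, separation 1).
x = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
v = np.array([[0.0, -0.5, 0.0], [0.0, 0.5, 0.0]])
m = np.array([0.5, 0.5])
for _ in range(2000):
    x, v = hermite_step(x, v, m, dt=0.005)
print(np.linalg.norm(x[0] - x[1]))   # separation stays close to 1
```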
in order to allow high - accuracy in the calculation of the jerk while still using the tree , we use two different tree opening criteria ; one for sph gas particles ( which do not need to calculate the jerk ) , and a stricter one for n - body particles that use the tree .it should be noted that figure [ fig : jerkerror ] represents an upper limit to the expected jerk error .the dominant contribution to the jerk will be from close encounters with other stars , which is computed exactly .seren uses a standard block - timestepping scheme used in many n - body and sph codes .the timesteps of all gas and star particles are restricted to being where is a positive integer .all particles and stars therefore are all exactly synchronised on the longest timestep , when the timestep level structure is recomputed . in the standard block - timestepping scheme ,particle timesteps are only computed once their current timestep has been completed . at the end of the step ,particles are allowed to move to any lower timestep ( higher ) , or are allowed to move up one level ( lower ) provided the new higher level is synchronised with the old level .this approach means particles can rapidly and immediately drop to short timesteps when required , but are only allowed to rise back up slowly to prevent timesteps oscillating up and down too frequently .seren also contains the neighbour - timestep monitoring procedure of to prevent large timestep differences between sph neighbours generating large energy , momentum and angular momentum errors .although this algorithm is not necessarily needed for the tests presented in this paper , it will almost certainly be required for future applications where feedback processes can suddenly generate large discontinuities in density and temperature , resulting in large timestep disparities .we present a small suite of numerical tests demonstrating the accuracy of our hybrid sph/-body approach .tests using the sph and -body components independently were presented by .the sph component was tested using a variety of shock - tube , kelvin - helmholtz instability and gravity tests , as well as tests of the tree and the error - scaling of the code .the -body component was tested with some 3-body examples that had known solutions . in this paper , we present tests of the combined scheme to demonstrate that the conservative equations of motion derived in section [ ss : gradhsph ] are correct , and that the scheme does not exhibit any unexpected numerical effects .we perform a simple test investigating the scattering between gas and star particles . using star - gas plummer spheres , we test the expected error scalings of the sph and -body components and the combined scheme , as well as the stability of the star - gas plummer spheres .finally , we perform a simple test of colliding star - gas plummer spheres . in all tests we use an adiabatic equation of state in which heating and cooling are only due to work by expansion or contraction , thus enabling us to test the energy conservation of the code .we work in dimensionless units throughout , where .our hybrid sph/-body method enables investigation of stellar dynamics within a gas potential that may be time - evolving and irregular . 
for this goal, we must clearly understand the origin and impact of any numerical effects on the results of our simulations .one important difference between gas - only interactions and star - gas interactions in this scheme is that star particles are allowed to ` pass through ' sph gas particles , whereas artificial viscosity will prevent gas particle penetration by forming a shock .even though the gravitational interactions between stars and gas are smoothed , the stars are still interacting with a gravitational field defined by discrete points and thus can be deflected by those points .therefore , gas particles can in principle scatter star particles significantly if the resolution is too coarse .this test is designed to investigate how significantly gas particles can scatter star particles and to help determine resolution criteria to prevent significant numerical scattering . for point particles obeying newton s gravitational law , the scattering angle of a star of mass due to a hyperbolic encounter with a gas particle of mass in the centre - of - mass frame is given by ( e.g. * ? ? ?* ) where is the impact parameter , is the relative tangential velocity at infinity , and the approximation is for small deflections . for smoothed gravity , we would expect that the net deflection angle would be reduced by smoothing , and for the dependence on to be fundamentally altered since the gravitational force for neighbouring particles tends to zero as the distance becomes zero .for the m4-kernel , the gravitational force reaches a maximum at around ( see figure 1 of * ? ? ? * ) and then decreases to zero ( instead of ) .therefore , an approximation to the maximum possible scattering angle can be obtained by setting in equation [ eqn : scatterangle ] and . however, setting gives the same qualitative behaviour averaged over all impact parameters where is some multiplicative constant of order unity . ] .since the smoothing length is dependent solely on the gas particle positions , the strength of the star - gas interaction becomes solely a function of relative velocity , . from equation [ eqn : scatterangle ] , we can define a critical relative velocity , , where the interaction results in a significant deflection angle , where in this form , the deflection angle is simply .therefore , a simple resolution condition that ensures star - gas scattering is negligible , i.e. , is given by .this application of this criterion to various astrophysical scenarios will be discussed later in the paper . in order to test the validity of this assertion, we perform simulations of a single star that is moving with velocity through a periodic gas cube of side - length and uniform density where the gas velocity is fixed to zero everywhere . if the gas density field is perfectly uniform ( i.e. in the continuum limit where ) , then the net gravitational force due to the gas will be zero and the star will simply move with constant velocity without any deviation or deceleration .if the density field is not perfectly uniform , as is the case when represented by a discrete set of particles , then small deflections will alter the path of the star . in our test , the gas particles are first relaxed to a glass ( see * ? ? ? * for details ) which is the most uniform arrangement of particles used in typical simulations .we use periodic boundary conditions combined with ewald gravity to produce a net zero gravitational field in the gas , with the exception of the small deviations due to the smoothed particle distribution . 
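The scattering test can be mimicked with a single kernel-softened gas particle: integrate a star past it and measure the angle between the outgoing and incoming velocity. The impact parameter, speeds and the simple first-order integrator below are illustrative choices; the standard point-mass small-angle estimate, theta ~ 2 G m / (b v^2), is printed alongside for comparison, and the softened deflection falls below it once the impact parameter lies inside the kernel.

```python
import numpy as np

# Enclosed-mass fraction of the M4 kernel, tabulated once on q = r/h.
_q = np.linspace(1e-4, 2.0, 400)
_w = np.where(_q < 1.0, 1.0 - 1.5 * _q**2 + 0.75 * _q**3,
              0.25 * (2.0 - _q)**3) / np.pi
_mfrac = np.cumsum(4.0 * np.pi * _w * _q**2) * (_q[1] - _q[0])
_mfrac /= _mfrac[-1]

def softened_accel(x, m_gas=1.0, h=1.0, G=1.0):
    """Acceleration from one kernel-softened gas particle at the origin:
    Newtonian outside 2h, reduced by the enclosed kernel mass inside."""
    r = np.linalg.norm(x)
    frac = np.interp(min(r / h, 2.0), _q, _mfrac)
    return -G * m_gas * frac * x / r**3

def deflection_angle(v_inf, b, h=1.0, dt=1e-3):
    """Integrate a star past the gas particle (impact parameter b, speed
    v_inf) and return the angle of its final velocity to the initial one."""
    x, v = np.array([-20.0 * h, b]), np.array([v_inf, 0.0])
    while x[0] < 20.0 * h:
        v = v + softened_accel(x) * dt
        x = x + v * dt
    return np.arctan2(abs(v[1]), v[0])

for v_inf in (1.0, 2.0, 4.0, 8.0):
    # Point-mass small-angle estimate 2*G*m/(b*v^2) for comparison.
    print(v_inf, deflection_angle(v_inf, b=0.5), 2.0 / (0.5 * v_inf**2))
```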
performed a similar scattering test to quantify the effects of numerical relaxation , although in the context of large - scale galaxy simulations .the initial velocity of the star particle is varied between simulations .we test , , , , , , and . for each case, we place the particle at a random position in the gas cube in order to remove the systematic effects of using the same glass arrangement .we simulate 6 realisations for each of the selected velocities and take the average and standard deviation of the results .each star particle moves along its trajectory until , i.e. the crossing time of the cube .this ensures , in so far as is possible , that during a simulation a star encounters approximately equal number of gas particles for all choices of initial velocity . to quantify the effect of the scattering on the star particle , we measure the deflection angle , , i.e. the angle of the particle s trajectory relative to the initial velocity along the x - axis .the results of these tests can be seen in fig .[ fig : glassscatter ] .for , increasing the star particle velocity decreases the net effect of scattering due to the gas particles . in this regime ,the scattering angle , , falls as , the same dependency as suggested by eqn .[ eqn : scatterangle ] with .we notice that the net average scattering angle is lower than expected ( figure [ fig : glassscatter ] ; dashed line ) due to the effects of smoothing .we note the net deflection is the accumulation of several ( ) deflections , not necessarily in the same direction hence it will ` random - walk ' with each deflection .the error bars are due to the combination of the random - walk errors and the impact parameter dependence ( which is reduced , but not eliminated ) . however , for , extremely strong scattering occurs and dominates the dynamics of the star . at these velocities ,the power - law relation between the scattering angle and velocity is broken since the interaction is now effectively a parabolic or elliptical interaction instead of a hyperbolic encounter .the final deflection angle is almost random due to the strong nature of the perturbations .the kinetic energy of the star is smaller than the gravitational potential energy , even accounting for smoothing , and therefore a star can in principle become trapped and bound to individual sph particles .we note that this would not necessarily happen in a more realistic astronomical simulation because the gas particles in this test are static ( so the star moves as a test particle ) , and therefore can not be scattered off the star themselves . at such velocities ,the coarseness of the sph particle distribution clearly introduces potentially serious numerical effects which could corrupt any hybrid simulation . in future sections in this paper , we discuss possible gas resolution criteria as a means of avoiding unphysical scattering effects . the plummer sphere is commonly used in stellar dynamics simulations as it is described by simple , analytic formula . for a plummer sphere of mass , , and characteristic plummer radius , , the density distribution , ,is given by and the 1-d velocity dispersion , , is given by we simulate plummer spheres that are purely stars , purely gas , or a mixture of stars and gas .this means that they are not in exact equilibrium ; however this has a negligible effect on their evolution ] . for setting up the stellar component of our clusters ,we use the method outlined by . 
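A sketch of a Plummer-sphere set-up. The position sampling inverts the standard cumulative mass profile M(<r)/M = r^3 / (r^2 + a^2)^(3/2), and the velocity dispersion uses the standard isotropic result sigma^2(r) = G M / (6 sqrt(r^2 + a^2)); the paper's own (elided) expressions presumably match these, but the velocity sampling of the stellar component is omitted here.

```python
import numpy as np

def plummer_positions(n, a=1.0, rng=np.random.default_rng(0)):
    """Sample n positions from a Plummer sphere of scale radius a by
    inverting the cumulative mass profile (u = 0 is avoided to keep the
    inversion finite)."""
    u = rng.uniform(1e-8, 1.0, size=n)
    r = a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
    cos_t = rng.uniform(-1.0, 1.0, size=n)            # isotropic directions
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return r[:, None] * np.column_stack(
        (sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))

def plummer_sigma1d(r, a=1.0, M=1.0, G=1.0):
    """1-D velocity dispersion of the isotropic Plummer model."""
    return np.sqrt(G * M / (6.0 * np.sqrt(r**2 + a**2)))

pos = plummer_positions(1000)
r = np.linalg.norm(pos, axis=1)
print(np.median(r), plummer_sigma1d(0.0))   # half-mass radius is about 1.3 a
```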
for the gas distributions, the plummer model corresponds to a polytrope .a polytrope is a self - gravitating gas whose equation of state obeys the form and whose density structure is a solution of the lane - emden equation ( see * ? ? ?* for an in - depth description of polytropes ) . for the polytrope ,the radial density distribution is of the same form as equation [ eqn : plummerrho ] . instead of setting a velocity field complimentary to the density field to support against collapse ,a polytrope is supported by a thermal pressure gradient .the thermal energy of the gas is related to the velocity dispersion of the gas by equating it to the sound speed and then converting to specific internal energy by where is the ratio of specific heats of the gas .we note that the gas itself in our simulation does not need to obey a polytropic equation of state , only that the thermal energy distribution of the gas is set - up to mimic the pressure distribution of the equilibrium polytrope and therefore remain in hydrostatic balance .the gas responds adiabatically and therefore can heat by contraction or cool by expansion as it settles or is moved around by the potential of the stars .the initial positions are set - up using the method of , but the thermal energies are set using the above equation and the initial velocities are set to zero .we simulate the evolution of ( a ) a gas - only polytrope , ( b ) a star - only plummer sphere , and ( c ) a 50 - 50 mixture ( by mass ) of a star - plummer sphere and a gas polytrope .gas - only simulations are conducted with gas particles of total mass , and star - only simulations with star particles of total mass . for each case , and using dimensionless units ( where ) .mixed star - gas simulations have either either or star particles and either or as many gas particles . in all cases we use equal - mass star particles ( ) and equal - mass gas particles ( ) .the smoothing length of the stars in all simulations is .each simulation is run for 40 crossing times , where we define the crossing time here to be code time units .following the ideas discussed in section [ ss : scatteringtest ] , we can determine the resolution requirements of an equilibrium plummer sphere to significantly reduce the effects of unphysical star - gas scattering .let us assume that gas particles account for a fraction of the total mass , i.e. where each gas particle has mass .the central gas density is , and the central velocity dispersion is .we can then calculate the critical resolution velocity at the centre of the plummer sphere as where we have substituted equation [ eqn : hrho ] for and used the above expressions for . in order to avoid catastrophic numerical scattering of star particles off gas particles, star particles must be moving at velocities significantly larger than the critical velocity , i.e. .this leads to the following resolution criterion for the total gas particle number , , in the plummer sphere as assuming the typical value of . for a plummer sphere consisting of equal gas and stellar mass ( )we find that . for a gas - dominated plummer sphere ( ) , . 
demonstrated of order a hundred gas particles could only crudely represent an equilibrium polytrope , with reduced central density and a smaller radius .such structures require of order thousands or tens of thousands of gas particles to adequately resolve the density structure of the polytrope .therefore , it is clear that we require to resolve the density field , at which point the gas resolution is also sufficiently high to prevent serious star - gas scattering events from corrupting the simulation .we note that the stars are moving with a range of velocities , below and above the mean velocity dispersion , .no matter how high the resolution , there will always be a number of stars at some instant moving less than the critical velocity .therefore , we can not completely eliminate unphysical scattering in this case , but we can only reduce it to some acceptable level by using a reasonably high resolution . in order to test the energy conservation properties of hybrid code ,we investigate how the fractional global energy error , , varies as a function of the timestep size . instead of selecting a global , constant timestep , we allow an adaptive global timestep , and then vary the timestep multiplier , ( to which we equate all other timestep multipliers , , , and , for this test ) , which determines the adaptive timestep size. therefore we consider how the energy error scales with , which should scale in the same way with a constant stepsize .figure [ fig : energyerror ] shows the energy error scaling as a function of timestep multiplier for ( a ) gas - only polytrope , ( b ) star - only plummer sphere , and ( c ) the star - gas mixture as a function of . for the gas - only cluster ,the sph integration scheme is a 2nd - order leapfrog kick - drift - kick .therefore we expect the error to scale as . in figure[ fig : energyerror](a ) , we can see that the energy error ( filled black circles ) agrees very well with this expected scaling with only a small deviation from the added guideline ( solid red line ) . for the star - only cluster ,the -body integration scheme is a 4th - order leapfrog scheme , and therefore we expect .figure [ fig : energyerror](b ) shows that we obtain similar scaling to this .in fact , we obtain slightly steeper scaling relative to the expected 4th - order . for the star - gas cluster ,a combination of a 2nd - order scheme with a 4th - order scheme should in principle give either 2nd - order or 4th - order erros depending on whether gas - gas , star - gas or star - star interactions are dominating the error .figure [ fig : energyerror](c ) shows that we get 4th - order scaling ( filled black circles ) which suggests that star integration scheme dominates the error in this simple case , despite the formally larger error of the gas integration scheme .we also plot the energy errors for they star - gas cluster with and without the new correction term ( equations [ eqn : spheom ] & [ eqn : gradhchi ] ) introduced in this paper . without ,the equations of motion are non - conservative , and the energy error ( of order ) is dominated by this rather than integration error . as well asintegration error , block timesteps and gravity tree errors are other major sources of error in this scheme , and will likely dominate over the integration error in most practical simulations depending on the parameters chosen . 
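Before turning to how the tree error itself is controlled, it is worth noting that the order of the dominant integration error can be read off directly from runs like those in figure [fig:energyerror], by fitting the slope of log |Delta E / E| against the logarithm of the timestep multiplier. The sketch below illustrates this; the numerical values are placeholders, not measurements from the paper.

```python
import numpy as np

def error_scaling_order(dt_mult, frac_energy_error):
    """Least-squares slope of log10|dE/E| versus log10(timestep multiplier).
    A slope near 2 indicates leapfrog (2nd-order) dominated error,
    near 4 indicates Hermite (4th-order) dominated error."""
    x = np.log10(np.asarray(dt_mult, dtype=float))
    y = np.log10(np.abs(np.asarray(frac_energy_error, dtype=float)))
    slope, _ = np.polyfit(x, y, 1)
    return slope

# hypothetical measurements (placeholders only)
gamma_dt = [0.05, 0.1, 0.2, 0.4]
err = [1.0e-8, 1.6e-7, 2.6e-6, 4.1e-5]
print(error_scaling_order(gamma_dt, err))  # ~4 -> star integrator dominates
```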
the error in the tree can be controlled by an appropriate choice of multipole - acceptance criteria ( mac ) , such as the gadget - style mac or the seren eigenvalue mac , both of which can place an upper - bound on the force error due to individual cells and hence control the net tree force error and indirectly the global energy error .one of the aims of this hybrid scheme is to accurately follow the stellar dynamics of a system in a live gaseous background .the most significant difference between stars and gas in this context is that stars are collisional particles in the sense that they can be subject to strong two - body interactions , whilst gas is collisional in the sense that it can form shocks .a low- stellar plummer sphere can rapidly evolve due to two - body scattering . showed , both numerically and through semi - analytical models , that the effect of two - body relaxation in a plummer sphere is to redistribute energy ejecting some stars and causing the contraction ( eventually to core collapse ) of the remaining cluster . traced this with the lagrangian radii of the stellar clusters showing the contraction of the inner lagrangian radii and the expansion of the outer lagrangian radii due to energy conservation as stars are ejected .they showed that the 50 per cent lagrangian radius stays relatively constant throughout the evolution .the lagrangian radii evolve significantly on the two - body relaxation timescale , , given in terms of the crossing time , , as where is the number of stars ( see * ? ? ?the evolution of the star - only plummer sphere ( figure [ fig : lagrangianradii](a ) ) , containing stars , shows the same qualitative behaviour as found by demonstrating that our pure -body integration scheme is correctly capturing the qualitative effect of 2-body encounters over the expected timescale ( for , the relaxation timescale is using eqn .[ eqn : trelax ] ) . as observed by aarseth ( 1974 ) and aarseth et al .( 1974 ) , the 10 per cent lagrangian radius shrinks ( towards core collapse ) , the 90 per cent lagrangian radii expands ( due to ejections ) , and the 50 per cent lagrangian radii stays roughly constant .the gas - only plummer sphere ( figure [ fig : lagrangianradii](b ) ) , containing gas particles , is observed to evolve slightly over the first few crossing times as it settles into equilibrium but soon the lagrangian radii stay almost constant over time . 
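The diagnostics used in this test are straightforward to reproduce. The sketch below computes Lagrangian radii from a particle snapshot, together with the usual crossing-time estimate of the two-body relaxation time; the N/(8 ln N) prefactor is the standard textbook value and is an assumption here, since the exact form of eqn. [eqn:trelax] is not reproduced above.

```python
import numpy as np

def lagrangian_radii(pos, mass, fractions=(0.1, 0.5, 0.9), centre=None):
    """Radii enclosing the given mass fractions about `centre`
    (defaults to the centre of mass)."""
    pos = np.asarray(pos, dtype=float)
    mass = np.asarray(mass, dtype=float)
    if centre is None:
        centre = np.average(pos, axis=0, weights=mass)
    r = np.linalg.norm(pos - centre, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order]) / mass.sum()
    return [r[order][np.searchsorted(cum, f)] for f in fractions]

def relaxation_time(n_stars, t_cross):
    """Standard two-body relaxation time estimate (assumed prefactor):
    t_relax ~ N / (8 ln N) * t_cross."""
    return n_stars / (8.0 * np.log(n_stars)) * t_cross
```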
of far greater interestis the behaviour of a mixed star - gas plummer sphere .we run two sets of simulations : star particles with either or gas particles , and star particles with either or gas particles .we use two different gas resolutions for each case to help determine if the simulations are converged .the evolution of the 10 , 50 and 90 per cent lagrangian radii of both the stars and gas in each case are shown in figure [ fig : lagrangianradii ] .we notice , for both and , that the lagrangian radii for the stellar and gas components evolve in opposite directions .the stellar component is altered somewhat from the star - only case where it appears to shrink for the 10 per cent and 50 per cent lagrangian radii ( with the 90 per cent remaining fairly constant ) .conversely , the gas appears to expand at all radii , and on a timescale comparable to the stellar 2-body relaxation timescale .therefore , there is a transfer of energy from the stars to the gas , allowing the gas to heat and expand , and conversely the stars lose energy and contract .most importantly for this paper , the results converge for different gas particle numbers ( with the same number of star particles ) . for , we use both ( figure [ fig : lagrangianradii](c ) ) and ( figure [ fig : lagrangianradii](d ) ) gas particles .the evolution of both the simulations is basically identical .there is some deviation between the results at late times in different backgrounds .this is due to low- noise and the slightly earlier ` core collapse ' of the simulation .the reader might notice that the behaviour of the stars in the star - only simulation is somewhat different to that of the stars in the simulations with gas .this is an interesting physical ( not numerical ) effect due to the presence of gas .we will return to the physics and astrophysical implications of this behaviour in the next paper .for now , however , we will simply use these simulations to illustrate the convergence of the results for different numbers of gas particles but the same number of star particles . as a simple qualitative test of the hybrid code s ability to model more complex star - gas systems, we perform a small suite of simulations of the head - on impact between two star - gas plummer spheres .we create two star - gas plummer spheres following the procedure described in section [ ss : plummertest ] .we collide the plummer spheres at a velocity , such that the collision occurs either subsonically or supersonically for all gas particles .we also consider plummer spheres that are gas - dominated , and star - dominated .we therefore expect strong differences in the behaviour of the gas and stellar dynamics between the subsonic and supersonic tests , and also between the star and gas - domainted cases .each plummer sphere contains equal - mass gas particles and equal - mass star particles and is set - up in the same way as in section [ ss : plummertest ] . for gas - dominated cases ,90 per cent of the mass is in gas and for star - dominated cases , 90 per cent of the mass is in stars ( therefore the relative masses of star and gas particles are different by a factor of between the two cases ) . for subsonic collisions ,the relative collision velocity is ( in dimensionless units ) , and for supersonic collisions , the relative collision velocity is .figures [ fig : cp - ghss ] and [ fig : cp - shss ] show the results of supersonic collisions of a gas - dominated and a star - dominated collision respectively . 
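The initial conditions for these collision tests can be assembled from two copies of a single star-gas realisation, offset along the collision axis and boosted to the chosen relative velocity; whether the encounter is subsonic or supersonic then follows from comparing the relative velocity with the central sound speed of the gas. A minimal sketch follows; the initial separation is an illustrative choice, not a value taken from the text.

```python
import numpy as np

def head_on_collision_ics(pos, vel, v_rel, separation=10.0):
    """Duplicate one cluster realisation (positions and velocities centred
    on the origin) into two clusters approaching head-on along x with
    relative speed v_rel and centre-to-centre separation `separation`
    (same length units as `pos`)."""
    shift = np.array([0.5 * separation, 0.0, 0.0])
    boost = np.array([0.5 * v_rel, 0.0, 0.0])
    pos_a, vel_a = pos - shift, vel + boost   # cluster moving towards +x
    pos_b, vel_b = pos + shift, vel - boost   # cluster moving towards -x
    return np.vstack((pos_a, pos_b)), np.vstack((vel_a, vel_b))
```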
in the gas - dominated supersonic collision ( fig .[ fig : cp - ghss ] ) , the gas forms a shock around where the gas is heated up and is compressed to higher densities .meanwhile the stars pass through the shock front and also through the stellar - component of the other cluster .since the relative velocity of the two clusters is much greater than the individual velocity dispersions , the effects of two - body encounters are negligible and the two clusters pass through each other almost unperturbed . as the gaseous and stellar components have decoupled in the collision ,the gas - free stellar clusters are now unbound as they have had 90 per cent of their initial mass ( the gas ) removed .therefore they expand as their velocity dispersion is too high to maintain their initial dense configuration ; the cluster eventually dissolves over several crossing times ( this is analogous to gas expulsion , see ) . in the star - dominated supersonic collision ( fig .[ fig : cp - shss ] ) , the gas again shocks and decouples from the stars . however , as the stars dominate the potential , the cluster is only slightly super - virial ( ) and the degree of expansion whilst readjusting to the new potential is small . therefore, the two stellar clusters continue with almost no effect from the collision and the stripping of their gas .in figures [ fig : cp - ghns ] and [ fig : cp - shns ] , we show the results of subsonic collisions of gas - dominated and star - dominated clusters respectively . in both casesthe two clusters merge as the gas components merge and the stellar components can respond to the interaction as they interact on a timescale comparable to their crossing times .the shapes and details of the resulting clusters are different ; the star - dominated merger showing a more elongated final appearance ( compare the last panels of figures [ fig : cp - ghns ] and [ fig : cp - shns ] ) .this is explained as the post - shock velocity anisotropy of the stellar velocity field dominates in the star - dominated case , whilst in the gas - dominated merger the potential is dominated by the gas allowing significant violent relaxation of the stellar component in the spherical gas potential .the behaviour of the supersonic and subsonic collisions in these simulations is physically reasonable .we also track energy conservation and find it is very well conserved ( for the four simulations ) . as with the static plummer test , we will explore the physics of star - gas cluster collisions in more detail in a future paper .our main motivation for developing this new hybrid method is to provide an intermediate step between detailed hydrodynamical simulations and pure -body simulations . in particular , we wish to investigate the dynamics of stars in gas on pc to pc - scales in gmcs and young clusters .the main advantage of our method is that the dynamics of stars within a live gas background can be followed at relatively low computational expense compared to full hydrodynamical simulations ( hours or days on desktop computers compared to months on hpc facilities ) .we discuss a number of important practical considerations for preparing simulations using the hybrid code , as well as planned future developments and uses of the code .we reiterate that this method is not intended to act as a replacement for full hydrodynamical simulations which aim to model the star formation process itself , i.e. 
the fragmentation of molecular clouds into prestellar cores and finally multiple protostellar systems .the fragmentation of gas into stars is a complex hydrodynamical problem , involving much additional physics such as radiation transport , chemistry , and ( non - ideal ) mhd .we suggest that fragmentation should be artificially supressed for a number of reasons .firstly , we wish to avoid the complex physics and computational expense related to full hydrodynamical simulations since we are only currently interested in the global effects of the gaseous gravitational potential on the -body evolution of the cluster .secondly , if fragmentation occurs then sink particles should be introduced and it is unclear how to mix sink particles with -body star particles ( one will be softened and interacting hydrodynamically with the gas , the other will not despite both representing a star ) .therefore we suggest that the resolution be kept _ as low as possible _ and that the equation of state be designed to supress star formation and keep densities low .we note that this also has the advantage of avoiding the formation of discs around stars which are again complex , computationally expensive objects to model .we suggest that hybrid simulations are designed to avoid the regimes in which these processes are important , ie .sub - core ( pc ) scales and high gas densities ( g ) .the normal resolution criteria for sph simulations of star formation is the criteria in which the ( minimum ) jeans mass must be resolved in order to capture gravitational fragmentation ( or in amr , the * ? ? ?* jeans length criterion ) . as shown by , failure to meet the criteriameans that fragmentation is suppressed this is important as it means that low sph resolution results in no fragmentation rather than artificial fragmentation .we suggest that gas be kept at densities lower than g , well above the critical density for fragmentation ( i.e. g , * ? ? ?such densities are in the roughly isothermal regime , and so have a simple equation of state ( e.g. * ? ? ?we suggest that the equation of state be modified ( say by artificial heating ) to keep densities low .the potential to form cores is probably desirable , but not to follow their collapse and fragmentation .as discussed and demonstrated in the star - gas scattering test ( section [ ss : scatteringtest ] ) , a sufficiently high gas resolution is required to avoid the unphysical scattering of stars by gas particles . for a general situation , where there is a group of stars moving through a cloud of gas , but no equilbrium has been established , then the velocities are not linked in any way to the density of the gas and it is difficult to establish a simple criterion for the required gas resolution .there are various scenarios where we can establish a link between the star velocity and the gas density .for example , equilibrium plummer spheres , we were able to derive a resolution condition for equilibrium plummer spheres on the number of gas particles which was only a function of the gas mass fraction , ( eqn . [ eqn : plummerresolution ] ) .we note that this condition is only strictly true for equilibrium clusters where the velocity ( or velocity dispersion ) of the stars is well - known . 
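In practice, the two requirements discussed in this subsection (staying below the fragmentation regime, and knowing whether a chosen particle mass could resolve the local Jeans mass at all) are easy to check before a run. The sketch below uses one common form of the Jeans mass and a Bate & Burkert style resolvability factor of roughly two neighbour numbers; both the expression and the factor are assumptions, since the precise criterion and density thresholds quoted above are not reproduced here.

```python
import numpy as np

K_B = 1.380649e-16   # erg / K
G_CGS = 6.674e-8     # cm^3 g^-1 s^-2
M_H = 1.6726e-24     # g

def jeans_mass(rho, temp, mu=2.35):
    """One common form of the Jeans mass [g] for density rho [g cm^-3],
    temperature temp [K] and mean molecular weight mu."""
    num = 5.0 * K_B * temp / (G_CGS * mu * M_H)
    return num ** 1.5 * np.sqrt(3.0 / (4.0 * np.pi * rho))

def resolves_jeans_mass(m_particle, rho, temp, n_neigh=50, mu=2.35):
    """True if ~2*N_neigh particle masses fit inside the local Jeans mass
    (assumed resolvability factor)."""
    return jeans_mass(rho, temp, mu) > 2.0 * n_neigh * m_particle
```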
for non - equilbrium simulations, we may need to use further information infered from the initial conditions , such as the initial virial ratio , to infer how the velocity relates to the density , and hence to the required resolution of the gas .one other special scenario is a gaseous cluster with emedded primoridal binary and multiple systems which may be unphysically disrupted due to star - gas scattering .consider the simple case of a binary star containing two stars of mass and in a circular orbit of separation , with an orbital velocity . to avoid artificial scattering from potential destroying the binary , we require that .assume that a binary is located at the centre of a pure gas plummer sphere , consisting of gas particles , of total mass , and scale - length .the critical velocity of the central gas is given by equation [ eqn : vcritplummer ] .rearranging leads to the resolution condition , ^{3/2 } \ , .\end{aligned}\ ] ] inserting in reasonable values of pc , au , , and , we find . therefore , for a smooth distribution of gas, is sufficient to avoid star - gas particle scattering .we note that wide binaries would be most sensitive to star - gas scattering in comparison to tight binaries .therefore , using wider - separation primordial binaries would require higher resolution .however this scenario is highly idealised compared to more practical scenarios .a more irregular gas distribution may have higher values of in high density pockets resulting in stronger scattering and therefore , more stringent resolution requirements .there are many possible future astrophysical applications for our hybrid method .our first follow - up papers will address stellar dynamics in small- star - gas groups , and the collisions and mergers of such objects as touched upon in this paper .in the longer term , we plan to simulate larger more complex systems ( dynamics in turbulent gas or fractal distributions ) both as simple numerical experiments and to compare with observations .we plan to add simple stellar heating and gas cooling prescriptions in order to advance on the simple adiabatic eos we have employed here . 
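Returning briefly to the resolution condition for embedded binaries derived above: since the exact forms of eqns. [eqn:vcritplummer] and [eqn:hrho] are not reproduced here, the sketch below uses an assumed critical scattering velocity v_crit = sqrt(2 G m_gas / h), with h = eta (m_gas / rho_0)^(1/3) and rho_0 the central Plummer density. The prefactors, eta, and the example parameters are all illustrative rather than values from the text.

```python
import numpy as np

G_CGS = 6.674e-8
MSUN, AU, PC = 1.989e33, 1.496e13, 3.086e18

def binary_gas_resolution(m1, m2, a_bin, m_cloud, a_plummer,
                          n_gas=1000, eta=1.2):
    """Compare the orbital speed of a binary (m1, m2 [Msun], a_bin [au])
    at the centre of a gas Plummer sphere (m_cloud [Msun], a_plummer [pc],
    n_gas particles) with an assumed critical scattering velocity
    v_crit = sqrt(2 G m_p / h), h = eta * (m_p / rho_0)**(1/3)."""
    v_orb = np.sqrt(G_CGS * (m1 + m2) * MSUN / (a_bin * AU))
    m_p = m_cloud * MSUN / n_gas
    rho0 = 3.0 * m_cloud * MSUN / (4.0 * np.pi * (a_plummer * PC) ** 3)
    h = eta * (m_p / rho0) ** (1.0 / 3.0)
    v_crit = np.sqrt(2.0 * G_CGS * m_p / h)
    return v_orb, v_crit, v_orb > v_crit

# illustrative example: a 1 + 1 Msun, 100 au binary in a 100 Msun, 0.1 pc cloud
print(binary_gas_resolution(1.0, 1.0, 100.0, 100.0, 0.1, n_gas=1000))
```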
for feedback and gas dispersal simulations , we will add some simple feedback formulations , like gas - heating from supernovae , as well as mechanical winds and uv radiation using the healpix - based algorithm already implemented in seren .simple accretion models can be added to allow stars to accrete from the gas .one important algorithmic addition we are currently implementing in the code is a more sophisticated n - body integrator that will allow efficient evaluation of computationally expensive sub - sytems such as tight binaries and 3- or 4-body encounters .we are implementing an adaptive nearest - neighbour tree , similar to that used in the starlab n - body code suite and myriad , to decompose the stars into sub - systems and then use a higher - order integrator , such as the 6th or 8th order hermite integrators , to more accurately integrate the sub - system .this addition will allow us to model clusters containing primordial binaries and higher - order multiple systems , or clusters that form hard binaries .details and tests of any additional physics and optimisations will be explained in subsequent papers that introduce them .we have presented a new hybrid sph/-body method within the sph code seren .using conservative sph and 4th - order -body integrators , this scheme conserves energy extremely well with an adiabatic equation of state .we have presented a number of tests of the code showing that it works as expected in a number of simple situations .we will use this code in future to explore stellar dynamics in a live gas background to investigate problems involving star cluster formation and evolution .dah is funded by a leverhulme trust research project grant ( f/00 118/bj ) and an stfc post - doc , and was provided with a visitors grant through fondecyt grant 1095092 .rja is supported by a research fellowship from the alexander von humboldt foundation .rs acknowledges support from fondecyt grant number 3120135 .we thank the referee , nickolas moeckel , for some important comments and suggestions that helped to improve aspects of this paper .we also thank dr daniel price for making the splash code available , from which some of the figures in this paper were prepared .following , we derive the sph equations of motion , for both the gas and star particles , using lagrangian mechanics . for a set of gas particles with labels and star particles with labels ,then the lagrangian becomes where is the gravitational contribution to the lagrangian given by where is the mean smoothing length of particles and .the equations of motion can be obtained by inserting the lagrangian into the euler - lagrange equations , the lagrangian is symmetric in terms of interaction terms between the gas and star particles .the only difference lies in the method of calculating the smoothing length which leads to different forms of the equation of motion for both stars and gas .first , we derive the equation of motion for a general gas particle labelled by taking the derivative with respect to its position , , i.e. } \nonumber \\ & & - g \sum \limits_{b=1}^{n_g}\,\sum \limits_{i=1}^{n_s } { m{_b}\,m{_i}\,\left [ \phi'{_{bi}}({\overline{h}{_{bi}}})\,\hat{\bf r}{_{bi}}\,\delta{_{ba}}+ \frac{1}{2}\,\frac{\partial \phi{_{bi}}}{\partial { \overline{h}{_{bi}}}}\ , \frac{\partial h{_b}}{\partial \rho{_b}}\frac{\partial \rho{_b}}{\partial { \bf r}{_a } } \right ] } \ , . 
\end{aligned}\ ] ] we note there is no contribution from the star - only term in the lagrangian since there is no dependence on the position of any gas particles , i.e. . substituting the expression for , i.e. where is given by equation [ eqn : omega ] , we obtain expanding out the kronecker delta functions and simplifying , - \frac{g}{2 } \sum \limits_{b=1}^{n_g } m{_a}\,m{_b}\left [ \,\frac{\bar{\chi}{_a}}{\omega{_a } } \frac{\partial w{_{ab}}(h{_a})}{\partial { \bf r}{_a } } + \frac{\bar{\chi}{_b}}{\omega{_b}}\frac{\partial w{_{ab}}(h{_b})}{\partial { \bf r}{_a } } \right ] \end{aligned}\ ] ] where ( cf .* ) and are defined by substituting into the euler - lagrange equation ( equation [ eqn : eulerlagrange ] ) , we obtain the equation of motion for sph gas particles , \ , .\end{aligned}\ ] ] similarly for star particles , we derive the equation of motion for a general star particle labelled by taking the derivative of the gravitational component of the lagrangian with respect to its position , , i.e. } - g \sum \limits_{b=1}^{n_g}\,\sum \limits_{i=1}^{n_s } { m{_b}\,m{_i}\,\left [ \phi'{_{bi}}({\overline{h}{_{bi}}})\,\hat{\bf r}{_{bi}}\,(-\delta{_{ia } } ) \right ] } \nonumber \\ & = & -\frac{g}{2 } \sum \limits_{j=1}^{n_s } { m{_a}\,m{_j}\,\phi'{_{at}}({\overline{h}{_{aj}}})\,\hat{\bf r}{_{aj } } } \ , + \frac{g}{2 } \sum \limits_{i=1}^{n_s}\ , { m{_i}\,m{_a}\ , \phi'{_{sa}}({\overline{h}{_{ia}}})\,\hat{\bf r}{_{ia}}}\ , + g \sum \limits_{b=1}^{n_g}\ , { m{_b}\,m{_a}\,\phi'{_{ba}}({\overline{h}{_{ba}}})\,\hat{\bf r}{_{ba}}}\ , \nonumber \\ & = & -g \sum \limits_{i=1}^{n_s } { m{_a}\,m{_i}\,\phi'{_{ai}}({\overline{h}{_{ai}}})\,\hat{\bf r}{_{ai } } } \ , - g \sum \limits_{b=1}^{n_g}\ , { m{_a}\,m{_b}\,\phi'{_{ab}}({\overline{h}{_{ab}}})\,\hat{\bf r}{_{ab}}}\ , \end{aligned}\ ] ] due to the stars having constant smoothing length , we obtain somewhat simpler equations than for the case of gas particles . substituting into the euler - lagrange equations and renaming some summations for clarity ,we obtain the following expression for the acceleration of star ,
we present a new hybrid smoothed particle hydrodynamics (sph)/n-body method for modelling the collisional stellar dynamics of young clusters in a live gas background. by deriving the equations of motion from lagrangian mechanics we obtain a formally conservative combined sph/n-body scheme. the sph gas particles are integrated with a 2nd-order leapfrog, and the stars with a 4th-order hermite scheme. our new approach is intended to bridge the divide between the detailed, but expensive, full hydrodynamical simulations of star formation, and pure n-body simulations of gas-free star clusters. we have implemented this hybrid approach in the sph code seren and perform a series of simple tests to demonstrate the fidelity of the algorithm and its conservation properties. we investigate and present resolution criteria to adequately resolve the density field and to prevent strong numerical scattering effects. future developments will include a more sophisticated treatment of binaries. methods: numerical, n-body simulations - hydrodynamics - stellar dynamics
binary stars are frequent in the universe , composing approximately 50% of main sequence stars ( abt 1979 ; duquennoy & mayor 1991 ; raghavan et al . 2010 ) . due to inherent difficulties in monitoring the radial velocities of multi - star systems ,these have not been primary targets in exoplanet surveys ( eggenberger & udry 2010 ) .still , at least 10% of the currently known extra - solar planets are hosted in binary stars ( roell et al . 2012 ) .the gravitational perturbations of a binary star can drastically influence the motion of planetary systems .the dynamical stability of planets in such environments depends strongly on the orbital and physical parameters of the system ( rabl & dvorak 1988 ; holman & wiegert 1999 ; pilat - lohinger & dvorak 2002 ; morais & giuppone 2012 ; andrade - ines & michtchenko 2013 ) .the dynamical effects of the perturbation due to a secondary star can also affect the planetary formation , and even though many studies on this subject have been made ( nelson 2000 ; boss 2006 ; haghighipour 2006 ; nelson & kley 2008 ; thbault et al .2009 ; giuppone et al .2011 ) , recent theories still struggle to explain how giant planets can be formed so close to the stability boundary in close binaries ( thebault 2011 ; mart & beaug 2012 , 2015 ; silsbee & rafikov 2015 ) .nevertheless , secular perturbations rule the dynamics of many of these subjects .giuppone et al .( 2011 ) showed that the planet formation appears more favorable in orbital configurations corresponding to the secular stationary solution . during the later stages of the formation ,michtchenko & rodrguez ( 2011 ) showed that the migrating planets tend towards stationary configurations , independent of the specific migration mechanism .andrade - ines & michtchenko ( 2014 ) studied the orbital stability of the secular stationary solution . for the particular case of the habitable zone of the centauri binary system, they also showed that , for orbits close to the secular stationary solution , the variation of the orbital distance to the central star is comparable to that suffered by the earth despite the strong perturbations of the companion star . due to high eccentricities and larger perturbing masses usually found in close binary systems , the classical secular theories based on the laplace expansion of the disturbing function ( _ e.g. _ brouwer & clemence 1961 ) are of limited use .an alternative approach is the use of the legendre expansion of the disturbing function ( heppenheimer 1978 ; ford et al .2000 ; georgakarakos 2003 , 2005 ; laskar & bou 2010 ) . even though this expansion has a larger radius of convergence in terms of the eccentricity , it has a slow convergence rate with respect to the semimajor axis ratio , and is thus usually applied only in hierarchical systems .another possible approach is the construction of a semi - analytical model ( e.g. michtchenko & malhotra 2004 ) , as applied by andrade - ines & michtchenko ( 2014 ) . in this case , the averaging over short - period perturbations is performed numerically over the exact expression of the disturbing function , resulting in a model with no constraints in eccentricities or semimajor axis .however , the procedure is still limited to first - order averaging theories ( giuppone et al . 2011 ; andrade - ines & michtchenko 2014 ) . 
still in the planetary case , libert & sansottera ( 2013 ) developed an analytical second - order in the masses secular model using laplace coefficients , up to a high degree in the eccentricities .their model displayed a significant improvement over the first - order model in comparison with the integrations of the exact equations of motion of extra - solar planetary systems , specially close to mean motion resonances .the development , however , was explicitly displayed only for the andromedae system . a second - order ( in the masses )coplanar analytical secular model was developed for the cephei binary system , by giuppone et al .( 2011 ) using a legendre expansion of the disturbing function .using lie - series canonical perturbation theory and considering the restricted case ( when the gravitational effects of the planet over the binary stars were neglected ) , the authors constructed an analytical model that was able to match numerical integrations , at least for initial conditions close to the observed planet .however , the authors also have emphasized that their full model was overly complex and preferred the use of empiric corrections specific for the cephei binary system .due to the difficulty of constructing and implementing a second - order model , it is of the utmost interest to determine for which orbital configurations the first or second - order models are applicable .the aim of this paper is to develop a general approach for the second - order coplanar secular model and determine the regimes of applicability of the first- and second - order secular models .the limits of applicability will be evaluated in the space of parameters of the problem for planets in s - type orbits ( dvorak 1984 ) , comparing the predictions of each analytical model with direct numerical integrations of the exact equations of motion .the paper is structured as follows .section [ analytical ] presents the analytical foundation of this work , with the expansion of the disturbing function and the application of a canonical perturbation theory for the construction of the first- and second - order secular models .section [ num_sim ] presents a numerical method to obtain the main features of the mean secular motion from direct n - body simulations .a comparison between different analytical models and numerical integrations is presented in section [ compare ] .section [ app_limits ] presents the range of validity of each model in different parametric planes , while applications to real exoplanetary systems are discussed in section [ applications ] .conclusions close the paper in section [ discussions ] .let us consider a system composed by a main star of mass , a planet of mass and a secondary star of mass in the jacobian reference frame centered in .we denote the position vector of with respect to as , while marks the position vector of with respect to the center of mass of and ( figure [ jacobi1 ] ) .we assume that for all time . . 
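The Jacobi coordinates defined above are easy to construct from primary-centred positions; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def to_jacobi(r_planet, r_secondary, m_primary, m_planet):
    """Jacobi position vectors for the hierarchy described in the text:
    r1 is the planet relative to the primary, r2 is the secondary star
    relative to the barycentre of (primary + planet).
    Inputs are primary-centred position vectors."""
    r1 = np.asarray(r_planet, dtype=float)
    r2 = np.asarray(r_secondary, dtype=float) \
        - (m_planet / (m_primary + m_planet)) * r1
    return r1, r2
```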
is the center of mass of bodies and .,scaledwidth=80.0% ] the hamiltonian of the three - body system in these coordinates is given by where is the keplerian part and is the disturbing function , both given by in the expressions above , is the jacobian osculating semimajor axis of the orbit and is the distance between the bodies and , respectively ; is the gravitational constant .terms and can be expanded in legendre polynomials , from which the disturbing function acquires the form where is the legendre polynomial of degree , is the angle between and , , and the legendre polynomials may be written as ( whittaker & watson 1963 ; laskar & bou 2010 ) where substituting ( [ identidadeww4 ] ) into ( [ perturbadora4 ] ) leads to in the planar problem , the angle is given by where and and are the true anomaly and longitude of the pericenter of the body . the transformation to mean anomaliescan be accomplished using hansen coefficients and newcomb operators ( plummer 1918 ; kaula 1962 ; hughes 1981 ) where are the hansen coefficients , , .similarly , are the newcomb operators and and are ( respectively ) the mean anomaly and eccentricity of the orbit . introducing ( [ hansen1 ] ) into ( [ perturbadora5 ] ) we obtain } , \end{array } \label{perturbadora10}\ ] ] where in this last expression we have denoted , , and . the newcomb operators can be obtained using recurrence relations ( hughes 1981 ; ellis & murray 2000 ) , although this calculation can be extremely costly in cpu time .the good news is that are independent of both initial conditions and the parameters of the system and need only be calculated once .we calculated the coefficients for every value of the set , truncating the series expansion of the disturbing function considering values of the indexes in the range , and .we verified that the error caused by this truncation is of the order of the numerical error when comparing the integration of the complete equations of motion of the hamiltonian to the exact problem for any orbit with and .therefore , this truncation guarantees that any difference between the results of the secular models and the numerical simulations of the exact problem will be only due to the averaging theory adopted in the parameter range and , as we discuss in section [ app_limits ] .the advantage of this method is that the factor is calculated just once and then can be applied to any system with and by just reading a file of lines .we re - indexed our sum with respect of the line of that file and we reordered the lines of the file with respect to the magnitude of the term , for and ( for more details see appendix [ coef_num ] ) .this allows us to rewrite the disturbing function as where we introduced , , , and with respect to the new index .the calculated coefficients of the disturbing function are available as an electronic supplementary material , with the files description presented in appendix [ coef_num ] . to construct our secular model we applied hori s perturbation theory ( hori 1966 ; see also ferraz - mello 2007 ) to eliminate the short - period terms associated to the mean anomalies .in the particular case when , the mean motions of both bodies and are of different orders of magnitude and only one of the fast angles has to be eliminated .however , in the general case when is of the same order of , we should eliminate both the fast angles .our set of canonical variables is given by where , and .at this point , we notice that the disturbing function ( [ perturbadora11 ] ) does not depend on . 
therefore its conjugated action ( _ i.e. _ the total angular momentum ) is a constant of motion and the problem can be reduced to three degrees - of - freedom . from ( [ hamiltonian1 ] ) , ( [ hamiltonian2 ] ) and ( [ perturbadora11 ] ) , and introducing the definition of , the complete hamiltonian can be written as where and and taking the real part of ( [ variables5 ] ) and introducing the keplerian terms into the sum as the terms with and by defining the coefficients and , we obtain the first step in the application of hori s method is to reorganize the terms of the hamiltonian , separating the integrable part ( function only of the actions ) , the secular part ( function of the actions and the secular angle ) and the short - period part ( function of all variables ) .we rearranged the terms of the disturbing function such that : * for , all the terms have no angular dependence , that is , ; * for , all the terms depend only of the angle , that is and ; * for , all the other terms . finally , for the small parameter of the problem , we adopted this choice for allows the perturbation theory to be applied even to the case where the mass of the perturber is larger than the mass of the central body , provided that is small enough .we can therefore formally express the complete hamiltonian function as where , and are the angle variables , , and are their respective conjugated actions and where and are functions of the variables , and and of the constant of motion , and we introduced a new mass factor to construct our second - order secular theory , our goal is to find a lie - type transformation of the variables , generated by the function to a new set of variables , such that the new hamiltonian is independent of the angles and , and is the remainder of order . recalling that the hamiltonian is time independent , expanding the lie series on the left - hand side of ( [ firstorder4.2 ] ) and identifying the terms in same order in , we get from eq .[ firstorder4.3 ] we identify the _ homological equation _ where the three frequencies of non - resonant coplanar problem , defined by ( see appendix [ ap01 ] ) , and is a known function once all the previous normalization steps are performed ( for more details , see ferraz - mello 2007 , chapter 6 ) . however , in ( [ secondorder13 ] ) both functions and are unknown .this indetermination is solved , without loss of generality , by adopting the averaging rule : from ( [ firstorder3 ] ) and ( [ secondorder17 ] ) , for , we obtain the first - order solution : where and are expressed as functions of and of the parameters , and . from this pointforward , we will refer to the model composed by as the _ first - order secular model_. 
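A useful independent check on the first-order secular model defined above is to evaluate the averaged disturbing function numerically, i.e. to average a Legendre-series expansion over both mean anomalies on a grid, in the spirit of the semi-analytical approach mentioned in the introduction. The sketch below does this for coplanar orbits. The mass factor in the Legendre series is the standard hierarchical-Jacobi form, and the normalisation and choice of angles are assumptions, since the exact coefficients of eqs. [perturbadora4]-[perturbadora11] are not reproduced above; units are au, yr, Msun with G = 4 pi^2.

```python
import numpy as np
from scipy.special import eval_legendre

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e sin E = M by Newton iteration."""
    E = float(M)
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def coplanar_position(a, e, varpi, M):
    """Cartesian position of a coplanar orbit with longitude of
    pericentre varpi and mean anomaly M."""
    E = kepler_E(M, e)
    xo = a * (np.cos(E) - e)
    yo = a * np.sqrt(1.0 - e * e) * np.sin(E)
    c, s = np.cos(varpi), np.sin(varpi)
    return np.array([c * xo - s * yo, s * xo + c * yo, 0.0])

def disturbing_function(r1, r2, m0, m1, m2, n_max=12, G=4.0 * np.pi ** 2):
    """Legendre-series disturbing function in Jacobi coordinates
    (standard hierarchical form), summed up to degree n_max."""
    d1, d2 = np.linalg.norm(r1), np.linalg.norm(r2)
    cpsi = np.dot(r1, r2) / (d1 * d2)
    total = 0.0
    for n in range(2, n_max + 1):
        mfac = m0 * m1 * (m0 ** (n - 1) - (-m1) ** (n - 1)) / (m0 + m1) ** n
        total += mfac * (d1 / d2) ** n * eval_legendre(n, cpsi)
    return -G * m2 * total / d2

def averaged_disturbing_function(a1, e1, a2, e2, dvarpi, m0, m1, m2,
                                 n_grid=64):
    """<R> over both mean anomalies on an n_grid x n_grid mesh: a numerical
    stand-in for the first-order averaged (secular) model."""
    grid = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    acc = 0.0
    for M1 in grid:
        r1 = coplanar_position(a1, e1, dvarpi, M1)   # inner orbit, varpi1 = dvarpi
        for M2 in grid:
            r2 = coplanar_position(a2, e2, 0.0, M2)  # outer orbit, varpi2 = 0
            acc += disturbing_function(r1, r2, m0, m1, m2)
    return acc / n_grid ** 2
```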
from ( [ secondorder13 ] ) and ( [ firstorder18 ] ) the first - order of the generating function can be calculated and yields introducing ( [ firstorder18 ] ) and ( [ secondorder19 ] ) into ( [ secondorder13 ] ) and applying the averaging rule ( [ secondorder17 ] ) for , after a long but straightforward calculation , we finally obtain the second - order solution , which , explicitly , is written as : \\\vspace{0.4 cm } & \\ \times \cos[(p^{(3)}_i+p^{(3)}_j)\delta\varpi^ * ] \biggr\ } \\ \vspace{0.4 cm } & + \ \displaystyle\frac{{\cal g}^4}{4}\biggl\ { \displaystyle\sum_{i = n_0 + 1}^{n } \displaystyle\sum_{j = n_s+1}^{n } \displaystyle\frac{\sigma_i \delta_{p^{(1)}_j , p^{(1)}_i } \delta_{p^{(2)}_j , p^{(2)}_i } t_i { \cal m}_{n_i } { \cal k}_{n_i}t_j { \cal m}_{n_j } { \cal k}_{n_j}}{p^{(1)}_i\nu_1 + p^{(2)}_i\nu_2 + p^{(3)}_j\nu_3 } \\ \vspace{0.4 cm } & \times \ l_1^{*{\rm a}_i + { \rm a}_j-8 } l_2^{*{\rm b}_i+ { \rm b}_j-8}e_1^{*{\rm c}_i+{\rm c}_j}e_2^{*{\rm d}_i+{\rm d}_j } \\ \vspace{0.4 cm } & \times \biggl [ \ - \p^{(1)}_i \biggl(\displaystyle\frac{{\rm a}_j-{\rm a}_j - { \rm c}_i + { \rm c}_j}{l_1^ * } + \displaystyle\frac{{\rm c}_j - { \rm c}_i}{l_1^*e_1^{*2}}\biggr ) - \ p^{(2)}_i \biggl(\displaystyle\frac{{\rm b}_j-{\rm d}_j - { \rm b}_i + { \rm d}_j}{l_2^ * } + \displaystyle\frac{{\rm d}_j - { \rm d}_i}{l_2^*e_2^{*2}}\biggr ) \\\vspace{0.4 cm } & \ \ \ - \ p^{(3)}_i\biggl(\displaystyle\frac{{\rm d}_j\sqrt{1-e_2^{*2}}}{l_2^*e_2^{*2 } } - \displaystyle\frac{{\rm c}_j\sqrt{1-e_1^{*2}}}{l_1^*e_1^{*2 } } \biggr ) - \ p^{(3)}_j\biggl(\displaystyle\frac{{\rm d}_i\sqrt{1-e_2^{*2}}}{l_2^*e_2^{*2 } } - \displaystyle\frac{{\rm c}_i\sqrt{1-e_1^{*2}}}{l_1^*e_1^{*2 } } \biggr ) \biggr ] \\\vspace{0.4 cm } & \ \\times \cos[(p^{(3)}_i - p^{(3)}_j)\delta\varpi^ * ] \biggr\ } , \\\end{array } \label{secondorder28}\ ] ] where and are expressed as functions of and of the parameters , and and we introduced , defined as and is the kronecker delta , defined as it is worthy emphasizing that the second - order solution presented above is only valid when we consider the non - resonant condition , with . finally , the complete secular hamiltonian up to second order in is given by with , and given by ( [ firstorder2 ] ) , ( [ firstorder18 ] ) and ( [ secondorder28 ] ) , respectively . from this pointforward , we will refer to this model as the _ second - order secular model _, and the equations of motion of the one degree - of - freedom system are given by and , , , and are constants of motion , since their conjugated angle variables ( , and , respectively ) do not appear explicitly in the secular hamiltonian .let us remark that the secular hamiltonians possessing only one degree - of - freedom are integrable . as a consequence, chaotic motions can not be produced by these models .the evolution of the orbital parameters of the secular problem can be obtained by simultaneously integrating eqs .( [ secondorder29 ] ) numerically .although the model developed above was constructed for the general three - body problem , most secular models assume that . in the limit of the restricted three - body problem ( ) ,bodies and move in fixed ellipses as described by the two - body problem .up to first - order in the masses , it is possible to obtain an expression of the disturbing function which is exact with respect to ( _ e.g. _ , kaula 1962 ; laskar & bou 2010 ) . 
limiting the expansion in legendre polynomials to ( quadrupole problem ) and truncating the perturbation to order , heppenheimer ( 1978 ) obtained the averaged disturbing function in orbital elements as , \label{hepp1}\ ] ] where we omitted the constant terms . introducing the non - singular variables the modified lagrange - laplace planetary equations will be , up to order ( brouwer & clemence 1962 ) : where is the forced secular frequency and is the forced eccentricity . in both equations is the mean - motion of the planet .the general solution of the system of eqs .( [ hepp5 ] ) acquire the form where ( proper eccentricity ) and ( phase angle ) are constants of integration determined by the initial conditions .we can see from eqs .( [ hepp6 ] ) and ( [ hepp7 ] ) that and are functions only of the parameters of the problem . according to eqs .( [ hepp8 ] ) and ( [ hepp9 ] ) , the secular orbits define circles in the plane centered in and with only a single frequency .both are independent of the initial conditions of the planetary orbit .the trajectory starting with gives and , and therefore , is a stationary solution or _ fixed point_. since there is only one fixed point , located in the semi - plane , we can conclude that the secular angle will either circulate or oscillate around 0 .the resulting oscillation around is also known as _mode i _ ( michtchenko & ferraz - mello 2001 ) .the model of heppenheimer ( 1978 ) is a good approximation to the problem if and are sufficiently small . for larger values of these quantities , the expressions presented above for both and no longer yield quantitatively accurate values , although the topology of the secular problem remains unaltered ( giuppone et al .2011 ) .analytical approximations for the solutions including second - order terms have so far been estimated either by empirical approximations ( _ e.g. _ thbault et al .2006 ) or by functional approximations ( giuppone et al .2011 ) . in both cases, however , the resulting expressions for and are not general and valid only for a sub - set of the parameter space .in particular , the expressions for found by these authors are only valid for -cephei , but fail for other values of the perturbing mass and eccentricity . in order to obtain better estimations, we can use the first- and second - order secular models presented in section [ canonical ] .however , since the expressions for the averaged hamiltonian are too complex , we must open hand of explicit close formulas and determine and numerically .this can be achieved employing the geometric method introduced by ( michtchenko & malhotra 2004 ) and later applied by michtchenko et al .( 2006 ) and andrade - ines & michtchenko ( 2014 ) .this method consists in finding the eccentricity that gives the extreme value of the hamiltonian for and given values of the parameters , and ( the reader is referred to michtchenko & malhotra 2004 for a detailed description ) .in particular , the forced eccentricity is the solution of the algebraic equation where .similarly , the secular frequency for the first- and second - order models are given by where .the analytical expressions of the secular frequencies for the first- and second - order models are presented at the appendix [ ap01 ] .since any analytical theory is expected to be valid only for certain initial conditions or values of the parameters of the system , it is important to be able to deduce the main features of the secular solutions using numerical integration of newton s equations of motion . 
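Before turning to the numerical determination, the closed-form first-order solution of the restricted quadrupole model can be summarised in a few lines. The coefficients below, e_F = (5/4)(a1/a2) e2/(1-e2^2) and g = (3/4) n1 (m_B/(m_A+m_p)) (a1/a2)^3 (1-e2^2)^(-3/2), are the commonly quoted literature expressions and are an assumption here, since eqs. [hepp5]-[hepp9] are not reproduced explicitly above; the non-singular variables are assumed to be k = e cos(dvarpi), h = e sin(dvarpi).

```python
import numpy as np

def heppenheimer_secular(a1, a2, e2, m_a, m_b, m_p=0.0, G=4.0 * np.pi ** 2):
    """Forced eccentricity and secular frequency of the first-order
    (quadrupole) model in its commonly quoted form.
    Units: au, yr, Msun with G = 4 pi^2; frequency in rad/yr."""
    n1 = np.sqrt(G * (m_a + m_p) / a1 ** 3)
    e_forced = 1.25 * (a1 / a2) * e2 / (1.0 - e2 ** 2)
    g = 0.75 * n1 * (m_b / (m_a + m_p)) * (a1 / a2) ** 3 \
        * (1.0 - e2 ** 2) ** -1.5
    return e_forced, g

def linear_secular_solution(t, e0, dvarpi0, e_forced, g):
    """k + i h traces a circle of radius e_p (the proper eccentricity)
    centred on (e_forced, 0) with frequency g; returns e(t), dvarpi(t)."""
    k0 = e0 * np.cos(dvarpi0) - e_forced
    h0 = e0 * np.sin(dvarpi0)
    e_p, phi = np.hypot(k0, h0), np.arctan2(h0, k0)
    k = e_forced + e_p * np.cos(g * t + phi)
    h = e_p * np.sin(g * t + phi)
    return np.hypot(k, h), np.arctan2(h, k)
```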
in this sectionwe show how to determine the families of forced eccentricities and secular frequencies from n - body simulations . in the secular problem ,the fixed point at and is a stationary orbit with zero secular amplitude .when the averaged variables are transformed back to osculating values , the resulting trajectory will be a quasi - periodic solution with 3 main frequencies , and , although with a zero amplitude associated with the secular frequency .therefore , a fixed point in the secular ( averaged ) problem will correspond to a quasi - periodic orbit in osculating elements with frequencies and .determining such a quasi - periodic solution from n - body simulations can be a challenging task .fortunately , there exist several numerical tools that can be employed to simplify this work .one was used by noyelles et al .( 2008 ) and later by couetdic et al .( 2010 ) based on frequency analysis of the numerical integration ( laskar 1990 ; michtchenko et al .this method has proved to be very efficient and yields accurate results .its main steps are summarized as follows : 1 .numerical integration of an orbit for a given set of initial conditions ; 2 . harmonic decomposition of the time series of the orbital elements to determine the fundamental frequencies ; 3 .quasi - periodic decomposition of the time series in function of the fundamental frequencies ; 4 .elimination of the terms depending on and construction of a new time series of the orbital elements ; 5 .determination of a new set of initial conditions from the time series of the orbital elements with suppressed .this process is iterative in nature , with each new set of initial conditions being closer to the solution and the convergence is generally fast , reducing the amplitudes of the secular components by 2 orders of magnitude in just 4 steps ( couetdic et al .2010 ) . in order to identify all the 3 frequencies of motion with the harmonic decomposition, the integrations must be long enough to include at least one secular period .moreover , the integration step must be small enough such that the keplerian period of the planet is identifiable .for the present work , we used the _ naff _ ( laskar 1999 ) algorithm for the harmonic decomposition , with a time step of and a total time of integration of , where an approximate value for was adopted following ( [ hepp6 ] ) .since we expect the real secular frequency to be different from that estimated from first - order models , we used a total integration time at least 6 times the approximate secular period .the iterative process was stopped whenever the relative difference between the initial eccentricities of the planet in two consecutive iterations was smaller than .once the initial conditions of the quasi - periodic orbit were determined , the forced eccentricity was estimated by with the secular frequency obtained at the quasi - periodic decomposition step of the last iteration of the method .the method described above provides an accurate approximation of the fundamental frequencies as long as the trajectory satisfies two conditions : ( i ) is regular ( i.e. not chaotic ) , and ( ii ) is not dominated by mean - motion resonances ( mmrs ) . if one of these conditions is met , then the quasi - periodic secular approximation is no longer valid and the iterative process will not be convergent . 
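As a lightweight stand-in for the frequency-analysis step, the dominant secular frequency can be estimated from an evenly sampled time series of e(t) and dvarpi(t) by locating the strongest peak of the periodogram of z = e exp(i dvarpi) once its mean has been removed. This is far cruder than NAFF, which applies a window function and refines each frequency iteratively, but it illustrates the idea; treating |<z>| as a rough proxy for the forced eccentricity is an assumption.

```python
import numpy as np

def secular_frequency_estimate(time, ecc, dvarpi):
    """Estimate the secular frequency (rad per time unit) and a rough
    forced-eccentricity proxy from evenly sampled e(t), dvarpi(t)."""
    time = np.asarray(time, dtype=float)
    z = np.asarray(ecc, dtype=float) * np.exp(1j * np.asarray(dvarpi, dtype=float))
    z_mean = z.mean()
    spec = np.fft.fft(z - z_mean)
    freqs = 2.0 * np.pi * np.fft.fftfreq(len(z), d=time[1] - time[0])
    k = np.argmax(np.abs(spec))
    return abs(freqs[k]), abs(z_mean)
```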
even though these cases are not covered by the secular models ,the analysis of these orbits is important in order to compare the predictions of the analytical models with n - body simulations .applying the quasi - periodic decomposition method in an unstable or a resonant orbit will lead to an inaccurate determination of the fundamental frequencies that will compromise the convergence of the method .for this reason , it was imposed in the algorithm that if the convergence condition was not satisfied in 20 steps , the orbit would go through a stability check .the stability of the orbits was numerically estimated by determining the proper mean motion of the planet with a quasi - periodic decomposition routine ( robutel & laskar 2001 ) : the analysis of the first two thirds of the data defined a value for the mean - motion , while the last two thirds of the data was used to calculate a second value . if the difference was found to be greater than , the orbit was considered unstable and the algorithm issued fictitious values of and .to assess the quality of the analytical secular models , in this section we compare the results obtained from our first- and second - order models ( section [ canonical ] ) , with the classical model of heppenheimer ( 1978 ) ( section [ hepp78 ] ) and with numerical simulations ( section [ num_det ] ) , which we will take as the exact solution . as a working example we chose the binary star system_ hd 196885 ab _ ; physical and orbital data of this system , as well as the data of the detected planet around the star a , are presented in table [ dadoshd196885 ]. llllllll body & ( au ) & & ( deg ) & ( deg ) & ( deg ) & ( deg ) & + a & - & - & - & - & - & - & 1.3 + b & 21 & 0.42 & 116.8 & 121 & 241.9 & 79.8 & 0.45 + b & 2.6 & 0.48 & 116.8 * & 349 & 93.2 & 79.8 * or 259.8 * & * + most stable / probable solution according to giuppone et al .( 2012 ) .figure [ ex - curvas ] ( a ) shows the family of stationary secular solutions ( i.e. forced eccentricity ) as function of the semimajor axis of the planet , for the system _ hd 196885 ab_. all other parameters of the system were fixed according to the values given by table [ dadoshd196885 ] .the red circles correspond to the result obtained from the numerical integrations , the magenta curve shows the solution using the model by heppenheimer ( 1978 ) , while the blue and green curves present the solutions obtained from our first- and second - order models , respectively .the dashed vertical line marks the present osculating semimajor axis of the detected planet ( au ) . as function of the semimajor axis , calculated with different analytical models , compared with the results of numerical simulations ( red dots ) .the black curve shows the amplitude of the short - period variations . *( b ) * averaged mean - motion ratio , as function of the initial osculating semimajor axis , calculated from n - body simulations .the location of several first - degree mmrs are marked with horizontal lines . * ( c ) * secular frequency , as function of the semimajor axis , calculated for different models , as well as with the numerical integration ( red dots ) . *( d ) * secular frequency as function of the eccentricity of the perturber for au . 
in all panelsthe values of and of the planet are marked by dashed vertical lines .the scattered red dots are non - convergent solutions obtained by the numerical method ( see section [ rmm - prog]).,scaledwidth=90.0% ] both first - order models show a linear dependence of the forced eccentricity with the semimajor axis , while the second - order model and the numeric solution show a significant quadratic component .as expected , for sufficiently small values of all models coincide , while increasingly large deviations are seen for orbits closer to the perturber . at the present location of the planetthe predictions of the first - order model are not quantitatively correct , indicating that any model for the secular dynamics of this system should include second - order terms . in the same frame, the black curve represents the amplitude of the short period oscillations , calculated as the difference between the maximum and minimum values that reached in a single keplerian period of the star .this amplitude shows a strong correlation with the difference in forced eccentricity between both the first- and second - order models .this is not surprising , since the magnitude of the second - order terms scales with the short - period variations ( see eq .[ secondorder19 ] ) . fig .[ ex - curvas ] * ( b ) * shows the dependence of the numerically determined mean motion ratio with the semimajor axis of the planet , close to the family of secular stationary solutions .as discussed in section [ rmm - prog ] , the crossing of mmrs , defined by the condition , can lead to instabilities that can hinder the convergence of the iterative method described at section [ num_det ] . as a result , due to the mmrs, we see `` gaps '' in the curve of fig .[ ex - curvas ] * ( b)*. we have found that the gaps appear for each , with the gaps getting larger with the decrease of , up until , when we have the stability limit for this system . even though the resonant problem is a complex subject and each mmr should be studied individually , we can still estimate empirically where in the phase space the mmrs may begin to play an important part in the dynamical evolution of this system .for instance , we identify the first significant resonance as the gap with the lowest semimajor axis , that appears at the 18:1 mmr , at au .we see that the planet is located very close to the 20:1 mmr , but the short time dynamical effects of this resonance were not detected by this method and therefore we conclude that the secular dynamics will still play the major part in the dynamical evolution of this system .figure [ ex - curvas](c ) shows the variation of the secular frequency as function of . as before ,all other parameters were taken from table [ dadoshd196885 ] . 
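Regarding the N:1 commensurabilities responsible for the gaps in panel (b): their nominal locations follow directly from Kepler's third law in Jacobi coordinates (inner orbit about m_A + m_p, outer orbit about the total mass). The sketch below computes them; it ignores the resonance widths, which are what actually set the sizes of the gaps.

```python
import numpy as np

def n_to_one_mmr_locations(a2, m_a, m_b, m_p=0.0, n_values=range(10, 31)):
    """Nominal semimajor axes of the N:1 mean-motion resonances
    (n1/n2 = N) for an S-type planet, from
    n1^2 a1^3 = G(m_a + m_p) and n2^2 a2^3 = G(m_a + m_p + m_b)."""
    q = (m_a + m_p) / (m_a + m_p + m_b)
    return {n: a2 * (q / n ** 2) ** (1.0 / 3.0) for n in n_values}

# hd 196885-like parameters (a2 = 21 au, m_a = 1.3 Msun, m_b = 0.45 Msun):
locs = n_to_one_mmr_locations(21.0, 1.3, 0.45)
print(round(locs[20], 2))   # the 20:1 commensurability lies near a1 ~ 2.6 au
```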
in both panels we present the first- and second - order models ( blue and green curves , respectively ) , the heppenheimer ( 1978 ) model ( magenta curves ) and the solution obtained from the exact equations of motion ( red circles ) .as before , the second - order solution presents an excellent agreement with the numerical results , while the first - order models predict smaller values of the secular frequency for initial conditions closer to the perturber .also , it is interesting to note that our first - order version now shows a noticeable ( albeit small ) deviation with respect to heppenheimer s version , which was not evident in the case of the forced eccentricity .the scatter of the numerical results for larger semimajor axis is due to the effect of mean - motion resonances .in particular , the 10/1 mmr , located at au caused non - convergence of the numerical method for many initial conditions , assigning to them an artificial value .other mean - motion resonances are also noticeable , although with smaller effect . for the planet around hd 196885 considering different initial values ( indicated on top of each frame ) .results obtained from n - body simulations are shown in gray , while predictions of different analytical models are indicated in color curves .note that heppenheimer s model shows a good fit for the secular frequency for in accordance with the lower - right panel at figure [ ex - curvas].,scaledwidth=90.0% ] these results are similar to those found by giuppone et al .( 2012 ) in the case of -cephei , indicating that a second - order secular theory may be not only desirable but actually necessary in many planetary systems around close binary stars . while the analytical models of heppenheimer ( 1978 ) and giuppone et al .( 2012 ) assumed zero - amplitude secular solutions at the fixed point , our model has the advantage of allowing to map finite amplitude oscillations and find the complete secular solutions of the system even if the initial conditions are far from the stationary value .these will occur whenever the initial value of the eccentricity is different from the forced value and/or .one of the consequences of finite - amplitude oscillations is that the secular frequency is different from that given by its stationary value .figure [ ex - curvas ] ( d ) shows the dependence of with the initial eccentricity of the planet .it has a maximum value at , and decreases for increasing amplitudes of oscillation .our second - order model shows a very good agreement with the full numerical simulations up to , a value higher than expected due to the truncation of the disturbing function for .in contrast , the secular frequency predicted by heppenheimer s model shows no dependence with .therefore , there should be always a value of for which the secular frequency determined from both the second - order and heppenheimer s models coincide .particularly for the system hd 196885 , with au , this happens for , which is close to the current value of the eccentricity of the planet ( see the dashed vertical line in fig .[ ex - curvas](c ) ) .we emphasize , however , that this is a coincidence and there is no way of predicting with just first - order models for which value of this will happen for different systems . to illustrate the dependency of with , figure [ ex - orbita ] shows ( in gray ) the result of five numerical integrations which differ only in the initial values of the eccentricity .the predictions of the different analytical models are depicted in colored lines . 
In all cases our second-order model shows very good agreement with the N-body results, not only with respect to the frequency but also in the amplitude of oscillation. None of the other models appears reliable, although, again, Heppenheimer's solution does show a good fit for the frequency at one particular initial eccentricity.

Although the example described above shows that a second-order secular model must be employed for some real planetary systems around binary stars, others are not so extreme and may be adequately mapped with a simple first-order model. Since the second-order theory is, by construction, much more complex, it is important to predict when it is really necessary and when it may be avoided. Similarly, even a second-order model will break down for initial conditions too close to the perturber, and it is also important to have some idea of its range of validity. In this section we present a graphical representation of the _limits of applicability_ of each analytical secular model, in terms of the main parameters of the system: the mass and orbit of the secondary star and the semimajor axis of the planet. As a proxy we adopt the forced eccentricity determined by each model, compared with the value obtained from direct numerical integrations.

The top panel of Figure [def_lim] shows the variation of the forced eccentricity and of the amplitude of the short-period oscillations as a function of the initial semimajor axis, for a fictitious system with fixed stellar masses, binary semimajor axis and binary eccentricity. In the case of the forced eccentricity, numerical results are again shown as red circles, while colored curves indicate the predictions of the different analytical models. We define the relative error of the forced eccentricity estimated by a given model with respect to its exact (numerically determined) value. The amplitude of the short-period variations is shown with a black curve and was determined numerically with an N-body simulation. Finally, the bottom panel shows the mean-motion ratio as a function of the semimajor axis ratio.

[Figure [def_lim] caption: Top panel: forced eccentricity as a function of the semimajor axis ratio determined from numerical simulations (red circles); the error bars correspond to the adopted relative error. Colored curves show the predictions of the different analytical models: first- and second-order models (blue and green, respectively) and the Heppenheimer (1978) model (magenta). The numerical estimate of the amplitude of the short-period oscillations is indicated in black. The vertical dashed lines represent the characteristic limits FO (blue), SO (green) and SP (black); see text for details. Bottom panel: mean-motion ratio as a function of the semimajor axis ratio. The vertical dashed lines represent the MMR (red) and INST (magenta) characteristic limits. The MACD limit (Andrade-Ines & Michtchenko 2014) occurs at larger semimajor axes and is not drawn in this plot.]

The vertical dashed lines in both graphs represent a series of _characteristic limits_, defined as:

* *FO*: value of the semimajor axis ratio at which the relative error of the forced eccentricity, calculated with the 1st-order model, reaches the adopted threshold;
* *SO*: value of the semimajor axis ratio at which the relative error of the forced eccentricity, calculated with the 2nd-order model, reaches the adopted threshold;
* *MMR*: lowest value of the semimajor axis ratio for which mean-motion resonances cause significant non-convergence of the secular models;
* *SP*: value of the semimajor axis ratio at which the amplitude of the short-period oscillations equals the forced eccentricity;
* *INST*: lower limit of the semimajor axis ratio leading to orbital stability.
Beyond this point some (but not all) initial conditions result in collision or in expulsion of the planet from the binary system.
* *MACD*: upper limit of the semimajor axis ratio leading to orbital stability. Beyond this point _all_ initial conditions result in collision or in expulsion of the planet from the binary system.

The *SP* limit is an estimate of the region where the short-period dynamics may play an important role, and where their amplitude rivals that of the secular dynamics. As discussed in Section [forced_ecc], the generating function (Eq. [secondorder19]) depends only on the short-period terms; consequently, the larger the amplitude of these terms, the higher the order of the averaging theory we may need to apply. Therefore, the applicability limits *FO* and *SO* should be correlated with the *SP* limit. The lower instability limit *INST* signals the appearance of resonance overlap, where some (but not all) initial conditions exhibit unstable motion. Full orbital instability (for all initial conditions) roughly corresponds to the limit *MACD*, which was estimated following the criterion developed in Andrade-Ines & Michtchenko (2014). According to this model, global instability is said to occur for all values of the semimajor axis satisfying a threshold condition whose coefficients take different values for low and for high binary eccentricities (see that work for the explicit expression). Note that we assume that the initial conditions of the planet coincide with the stationary secular solution, which is not necessarily the case (e.g. HD 196885). However, it is sufficient for most purposes and serves as a proxy for the stability limit of S-type orbits in binary systems.

We calculated the characteristic limits defined in Section [sec-def_lim] for fictitious binary systems with a fixed central mass and seven different values of the secondary mass. These values were chosen to include cases in which the planet orbits the most massive component, as well as situations in which the opposite occurs. The semimajor axis of the binary was fixed, and the eccentricity was again varied over a set of values. The mass of the planet was chosen to be roughly twice the mass of Neptune, and its initial semimajor axis was varied over a wide interval.

System & m_A [M_Sun] & m_pl & m_B [M_Sun] & a_pl [au] & a_bin [au] & e_pl & e_bin & Refs.
ν Oct Ab & 1.4 & & 0.5 & 1.2 & 2.55 & 0.123 & 0.2359 & (1)
ν Oct (triple) & 0.496 & & 1.4 & 0.524 & 2.565 & 0.67 & 0.2504 & (2)
KOI-1257 & 0.99 & & 0.7 & 0.382 & 5.3 & 0.772 & 0.31 & (3)
HD 41004 Ab & & & & 1.60 & 20.0 & 0.48 & 0.40 & (4), (5)
HD 41004 Bb & & & & 0.0177 & 20.0 & 0.081 & 0.40 & (4), (5), (6)
γ Cep Ab & & & & 2.05 & 20.2 & 0.05 & 0.41 & (7), (8), (9)
HD 196885 Ab & & & & 2.60 & 21.0 & *0.48* & 0.42 & (10)
α Cen Bb & & & & 0.04 & 23.4 & 0.0 & 0.518 & (11), (12)
Gl 86 Ab (min) & 0.8 & & 0.59 & 0.11 & 30.58 & 0.046 & 0.1 & (13), (14)
Gl 86 Ab (max) & 0.8 & & 0.59 & 0.11 & 69.8 & 0.046 & 0.61 & (13), (14)
HD 126614 Ab & & & & 2.35 & 36.2 & 0.30 & & (15)

Notes: unconfirmed due to orbital instability of the coplanar solution; the planet could lie on a highly inclined or retrograde orbit. Planet is a candidate. The detection of this planet is contested by Hatzes (2013). The parameters of the binary are not well known and are constrained by a relation between its semimajor axis and eccentricity.
References: (1) Ramm et al. (2009), (2) Morais & Correia (2012), (3) Santerne et al.
(2014), (4) Zucker et al. (2004), (5) Roell et al. (2012), (6) Santos et al. (2002), (7) Neuhäuser et al. (2007), (8) Endl et al. (2011), (9) Reffert & Quirrenbach (2011), (10) Chauvin et al. (2011), (11) Pourbaix (1999), (12) Dumusque et al. (2012), (13) Queloz et al. (2000), (14) Farihi et al. (2013), (15) Howard et al. (2010).

System & [ ] & [%] & [ /yr] & [%] & [ /yr] & [%]
α Cen Bb & 0.1512 & 0.0370 & 0.05356 & 0.0907 & 0.05356 & 0.0907
Gl 86 Ab (min) & 0.04530 & 0.265 & 0.04019 & 0.412 & 0.04015 & 0.518
Gl 86 Ab (max) & 0.188 & 1.84 & 0.0067 & 0.554 & 0.006674 & 0.660
HD 41004 Bb & 0.05367 & 1.85 & 0.020 & 1.67 & 0.0195 & 1.62
HD 126614 Ab & 7.090 & 7.30 & 2.344 & 11.0 & 2.23023 & 6.47
KOI-1257 & 2.854 & 8.26 & 74.9 & 13.9 & 70.46 & 8.46
γ Cep Ab & 5.645 & 10.8 & 9.05 & 15.3 & 9.057 & 15.3
HD 41004 Ab & 4.26 & 11.8 & 9.27 & 16.0 & 7.872 & 1.08
HD 196885 Ab & 6.56 & 20.4 & 14.6179 & 23.7 & 12.45 & 10.4

In the following we analyse each system individually:

* ν Oct Ab: located in the orange region, indicating strong orbital instability. This system has been the subject of discussion in many works (Eberle & Cuntz 2010; Quarles et al. 2012; Goździewski et al. 2013; among others), suggesting that the planet may orbit the central star in a retrograde orbit.
* ν Oct (triple): an alternative description of the same system, proposed by Morais & Correia (2012), composed of a binary sub-system instead of a single secondary star. The planet predicted in this scenario is located in the magenta region, with strong dynamical effects from mean-motion resonances.
* KOI-1257: an unconfirmed planetary candidate (Santerne et al. 2014) with a very high eccentricity (0.772) that could lead to instabilities. This system lies on the borderline between the blue and green regions, indicating that a second-order secular model is probably necessary to model its dynamical evolution.
* HD 41004 Ab and Bb: a multi-planet system. Planet Bb is located very close to its star, with dynamics properly described by the first-order model. Planet Ab, however, lies in the borderline region between the blue and green regions, which means that, depending on the accuracy desired for the study, a second-order approach may be necessary.
* γ Cep Ab: located on the boundary between the blue and green regions, again indicating that a second-order approach may be necessary. This system also presents a low osculating eccentricity (about 0.05), very close to the forced value.
Due to the large perturbations from the binary companion, the origin of this planet has been the subject of many studies (Thébault et al. 2004; Giuppone et al. 2011, among others). A second-order secular theory has proved necessary in analytical studies.
* HD 196885 Ab: located in the green region, this is another system for which a second-order model is necessary. The planet lies close to the MMR region, and perhaps high-order resonances should also be taken into consideration. The dynamical evolution of the system and constraints on the orbital parameters have been the focus of several works (Giuppone et al. 2012; Satyal et al. 2014).
* α Cen Bb: even though its existence is still under debate, the planetary candidate is located in the blue region and a first-order model is adequate to describe its secular dynamics. However, studies of planetary formation around the B star, such as Thébault et al. (2009), suggest a non-linear behaviour of the eccentricity of putative planetesimals located at larger semimajor axes, whose dynamics would require a second-order model.
* Gl 86 Ab: owing to the difficulty of determining the orbital parameters of the binary (Farihi et al. 2013), the semimajor axis and eccentricity of the binary are poorly constrained for this system. Nevertheless, in both limiting cases the system is properly described by a first-order secular theory.
* HD 126614 Ab: located on the boundary between the blue and green regions, once again indicating the necessity of a second-order approach. This system also lies very close to the MMR region, which means that high-order resonances could play an important role in its dynamical evolution.

In this work we showed the importance and influence of high-order averaging theories in the study of S-type planetary orbits around tight binaries. To ensure that any difference arising with respect to simpler models would be due only to the averaging method itself, the disturbing function was expanded to high orders in the semimajor axis ratio and in the eccentricities, guaranteeing a precise representation of the perturbation over the relevant range of parameters. We then used this expansion to construct a second-order secular model by applying a Lie-series canonical perturbation technique. The basic properties of the secular dynamics are characterized by two quantities: the forced eccentricity and the secular frequency. We defined and applied a geometric method (Section [sec_ef]) to determine these quantities from a general secular Hamiltonian function. We showed that they can also be accurately obtained from N-body simulations with the aid of an iterative algorithm based on a quasi-periodic decomposition given by a frequency analysis method.
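The last step mentioned above, extracting the forced eccentricity and the secular frequency from an N-body integration, can be illustrated with a simplified stand-in for the frequency-analysis step. The sketch below is not the paper's algorithm; it assumes that, after the short-period terms have been filtered out, the complex variable z(t) = e exp(i Δϖ) is dominated by a constant term (approximately the forced eccentricity) plus a single proper mode (whose frequency approximates the secular frequency). All function and variable names are ours.

```python
import numpy as np

def secular_proxies(time, ecc, dvarpi):
    """Crude estimates of forced eccentricity and secular frequency.

    time    : array of epochs (yr), equally spaced
    ecc     : planetary eccentricity from an N-body integration
    dvarpi  : varpi_planet - varpi_binary (rad) at the same epochs

    Assumes z(t) = e*exp(i*dvarpi) ~ e_forced + e_proper*exp(i*g*t),
    i.e. a single dominant secular mode; short-period terms should be
    averaged out beforehand.
    """
    z = ecc * np.exp(1j * dvarpi)

    # constant term of the quasi-periodic decomposition ~ forced eccentricity
    e_forced = np.abs(z.mean())

    # dominant frequency of the remaining signal ~ secular frequency
    dt = time[1] - time[0]
    spec = np.fft.fft(z - z.mean())
    freqs = np.fft.fftfreq(len(time), d=dt)                     # cycles / yr
    g_sec = 2.0 * np.pi * abs(freqs[np.argmax(np.abs(spec))])   # rad / yr

    return e_forced, g_sec

# usage sketch with a noise-free synthetic signal, just to exercise the function:
t = np.linspace(0.0, 2.0e4, 4096)
z_true = 0.05 + 0.03 * np.exp(1j * 2.0 * np.pi * t / 5.0e3)
e_f, g = secular_proxies(t, np.abs(z_true), np.angle(z_true))
print(e_f, g)   # roughly 0.05 and 2*pi/5000 rad/yr
```

In practice one would refine the peak frequency iteratively (as a frequency-analysis code does) rather than read it off a single FFT bin, but the sketch shows where the two secular quantities come from.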
To compare the families of stationary solutions obtained from the secular models with those obtained from the N-body code, we introduced characteristic limits that define the domain of applicability of each analytical theory. We calculated these limits for a large grid of parameters and constructed parametric planes that show, for any given system, whether it should be studied with a first-, a second- or a higher-order model. These parametric planes also yield information concerning orbital stability, the influence of MMRs on the dynamics and the magnitude of the short-period oscillations. We then applied these parametric planes to several real examples, including confirmed, candidate and contested planets in binary star systems. These planes show that there is always a region in the space of parameters that can be properly described by the first-order model for both the forced eccentricity and the secular frequency. We also conclude that the second-order model is adequate up to higher values of the semimajor axis ratio, but there is still a region that cannot be properly described, especially for lower values of the secondary mass and larger values of the binary eccentricity. We believe this region (the white area in the parametric planes) should be properly described by third- or higher-order models. However, in many other cases the limit of applicability of the second-order model coincides with the limit of orbital stability, indicating that higher-order models are unnecessary and would not improve the existing results.

Part of this work was developed during a visit of E.A.-I. to the Universidad Nacional de Córdoba. We wish to express our gratitude to FAPESP (grants 2010/01209-2 and 2013/17102-0), CNPq, CONICET and SECYT/UNC for their support.

In this appendix we present the values of the coefficients n_i, c_i, d_i, x_i, y_i, z_i and t_i of the disturbing function (Eq. [perturbadora11]). Two files containing these coefficients are provided as electronic supplementary material of the journal and can be requested from the first author. The file "table-coefficients-ni_ci_di_xi_yi_zi_ti.dat" contains the data of the development used throughout this paper. The file lines were ordered with respect to decreasing values of the coefficients and then arranged as described in Section [angle-action]. With this development, the second-order term of the secular Hamiltonian (Eq. [secondorder28]) contains a large number of terms. Table [exemplo_coeficientes] shows an excerpt of the file as an example.

[Table [exemplo_coeficientes]: excerpt of the coefficient file; each row corresponds to one file line and lists the coefficients n_i, c_i, d_i, x_i, y_i, z_i and t_i.]

The file "table-coefficients-ni_ci_di_xi_yi_zi_ti-short.dat" contains the data of a second, more restricted development, valid over a narrower range of parameters. The file is arranged again as described in Section [angle-action].
using this development , the second - order term of the secular hamiltonian ( eq .[ secondorder28 ] ) has the order of terms , which decreases substantially the computation time in comparison with the first file .although it is a more limited development , it is still capable of reproducing the results presented in section [ applications ] , for orbits close to the secular stationary solution .the frequencies on the left hand side of eq .( [ secondorder13 ] ) are given by \\\vspace{0.4 cm } & \times \biggl [ \displaystyle\frac{({\rm d}_i + { \rm d}_j ) \sqrt{1-e_2^{*2}}}{l_2^ * e_2^{*2 } } - \displaystyle\frac{({\rm c}_i + { \rm c}_j ) \sqrt{1-e_1^{*2}}}{l_1^ * e_1^{*2 } } \biggl ] \\ \vspace{0.4 cm } & + \displaystyle\frac{1}{l_1^{*2 } e_1^{*4 } } \biggl [ 2p^{(1)}_i({\rm c}_i + { \rm c}_j)\sqrt{1 - e_1^{*2 } } + ( 1 - e_1^{*2 } ) ( p^{(3)}_j { \rm c}_i - p^{(3)}_i { \rm c}_j)\biggl ] \\\vspace{0.4 cm } & - \displaystyle\frac{1}{l_2^{*2 } e_2^{*4 } } \biggl [ 2p^{(2)}_i({\rm d}_i + { \rm d}_j)\sqrt{1 - e_2^{*2 } } + ( 1 - e_2^{*2})(p^{(3)}_i{\rm d}_j - p^{(3)}_j { \rm d}_i ) \biggl ] , \\ \end{array } \label{secondorder33}\ ] ] \\\vspace{0.4 cm } & \times \biggl [ \displaystyle\frac{({\rm d}_i + { \rm d}_j ) \sqrt{1-e_2^{*2}}}{l_2^ * e_2^{*2 } } - \displaystyle\frac{({\rm c}_i + { \rm c}_j ) \sqrt{1-e_1^{*2}}}{l_1^ * e_1^{*2 } } \biggl ] \\ \vspace{0.4 cm } & + \displaystyle\frac{1}{l_1^{*2 } e_1^{*4 } } \biggl [ - 2 p^{(1)}_i({\rm c}_i + { \rm c}_j)\sqrt{1 - e_1^{*2 } } + ( 1 - e_1^{*2 } ) ( p^{(3)}_j { \rm c}_i - p^{(3)}_i { \rm c}_j)\biggl ] \\ \vspace{0.4 cm } & + \displaystyle\frac{1}{l_2^{*2 } e_2^{*4 } } \biggl [ 2 p^{(2)}_i({\rm d}_i + { \rm d}_j)\sqrt{1 - e_2^{*2 } } + ( 1 - e_2^{*2})(p^{(3)}_i{\rm d}_j - p^{(3)}_j { \rm d}_i ) \biggl ]. \\ \end{array } \label{secondorder34}\ ] ] chauvin , g. , beust , h. , lagrange , a .- m . , & eggenberger , a. : planetary systems in close binary stars : the case of hd 196885 .combined astrometric and radial velocity study , astron ., 528 , a8 ( 2011 ) neuhuser , r. , mugrauer , m. , fukagawa , m. , torres , g. , & schmidt , t. : direct detection of exoplanet host star companion cep b and revised masses for both stars and the sub - stellar object , astron . astrophys . , 462 , 777 ( 2007 ) ramm , d. j. , pourbaix , d. , hearnshaw , j. b. , & komonjinda , s. : spectroscopic orbits for k giants reticuli and octantis : what is causing a low - amplitude radial velocity resonant perturbation in oct ? , mon . not .r. astron .soc . , 394 , 1695 ( 2009 ) reffert , s. , & quirrenbach , a. : mass constraints on substellar companion candidates from the re - reduced hipparcos intermediate astrometric data : nine confirmed planets and two confirmed brown dwarfs , astron .astrophys . , 527 , a140 ( 2011 ) santerne , a. , hbrard , g. , deleuil , m. , et al . :sophie velocimetry of kepler transit candidates .koi-1257 b : a highly eccentric three - month period transiting exoplanet , astron .astrophys . , 571 , a37 ( 2014 ) zucker , s. , mazeh , t. , santos , n. c. , udry , s. , & mayor , m. : multi - order todcor : application to observations taken with the coralie echelle spectrograph .a planet in the system hd 41004 , astron .astrophys . , 426 , 695 ( 2004 )
We analyse the secular dynamics of planets on S-type coplanar orbits in tight binary systems, based on first- and second-order analytical models, and compare their predictions with full N-body simulations. The perturbation parameter adopted for the development of these models depends on the masses of the stars and on the semimajor axis ratio between the planet and the binary. We show that each model has both advantages and limitations. While the first-order analytical model is algebraically simple and easy to implement, it is only applicable in regions of the parameter space where the perturbations are sufficiently small. The second-order model, although more complex, has a larger range of validity and must be taken into account for dynamical studies of some real exoplanetary systems, such as γ Cephei and HD 41004 A. However, in some extreme cases neither of these analytical models yields quantitatively correct results, requiring either higher-order theories or direct numerical simulations. Finally, we determine the limits of applicability of each analytical model in the parameter space of the system, giving an important visual aid to decide which secular theory should be adopted for any given planetary system in a close binary.
if a single photon impinges on a mach - zehnder interferometer , the probability of detecting the photon in a certain output channel depends on the difference between the phase delays imposed by the two interfering channels .the photon behaves as if it goes through both interfering channels . on the other hand ,if detectors are inserted into each interfering channel , the photon is only detected in one of the channels . in the delayed - choice experiment ,the detectors may even be inserted in the last instant . then the experimenter s choice whether to insert these detectors or not determines whether the photon shall behave as if " it goes through one or both interfering channels .it might seem that the detectors impart some sort of nonlocal collapse " to the photon state .otherwise , one may imagine that the photon itself goes through one channel only accompanied by an empty wave " in the other channel .such models have been developed by de broglie and later by bohm .they are known to be nonlocal . a common feature of both the collapse " picture and the empty wave " picture is that they refer to systems which appear to be delocalized " ( yielding interference ) in a certain experiment and localized " ( yielding anticorrelation ) in another .it may seem that such behavior is in some way nonlocal .although classical optical fields may display interference effects in the same way as single photons , they can not simultaneously be anticorrelated .one may also imagine stochastic classical fields which yield anticorrelation between two channels , but such two - channel systems may not give rise to interference effects when superposed .one might suspect that two - channel quantum states which yield _ both _ interference _ and _ anticorrelation in a mach - zehnder interferometer violate local realism . in this paperwe shall see that this is indeed the case , and that for such systems no local hidden variable model can be constructed .before deriving the bell inequality , we first need to specify the characteristic quantities that we observe in a mach - zehnder interferometer ( see fig . [fig : machzehnder ] ) .= 2.5 in it is well known that the interference visibility obtainable in a first order interferometer is represented by the degree of first order coherence here and are annihilation and creation operators for the interfering channel ( ) of the interferometer . for simplicitywe employ a single mode treatment .if , on the other hand , we insert detectors into each channel of the interferometer , we may observe the coincidence rate or the degree of second order coherence between the same two channels , the fact that classical field theories do not allow both interference and anticorrelation for the same system is illustrated by the fact that in these theories the inequality must be fulfilled .we first analyze the experiment shown in fig . [fig : bell ] .we shall see that the observables here are closely related to the mach - zehnder interferometer ( fig .[ fig : machzehnder ] ) .= 3.0 in grangier _ et al ._ were the first to propose the use of local oscillators in bell experiments .the bell experiment that we will use here ( fig . 
[ fig : bell ] ) was first proposed by oliver and stroud .they also showed that single photon states violate local realism in this interferometer .later tan , holland and walls ( thw ) performed a thorough derivation of the conditions for local realism for any state in this interferometer .the behavior of single photon states in this interferometer was treated extensively by tan , walls and collett .we shall generalize the work of thw .again , we restrict the attention to single - mode systems .let and be annihilation and creation operators for the channel ( , ) .the -channels are output channels from beamsplitter , in analogy with the mach - zehnder interferometer ( fig .[ fig : machzehnder ] ) .each channel is mixed with a local oscillator channel on a semireflecting beamsplitter .the local oscillator is represented as a coherent state with ( real ) amplitude and ( real ) phase .the connection between the input and output on beamsplitter is given by the transformation we find that the total photon number is preserved , also , we may rewrite the difference between photon numbers at the two beamsplitter output channels in terms of input operators , [ eq : sumdiff ] we will now study the quantity which may be termed the modulation depth " of the correlation between the two interferometers . note that it s modulus is restricted from above to unity . using the operator definitions ( [ eq : sumdiff ] ) we find + \langle \hat{a}_1 \hat{a}_2^{\dag } \rangle \exp[-i(\theta_1-\theta_2 ) ] \right .\nonumber \\ - \left . \langle \hat{a}_1^{\dag } \hat{a}_2^{\dag } \rangle \exp[i(\theta_1+\theta_2 ] - \langle \hat{a}_1 \hat{a}_2 \rangle \exp[- i(\theta_1+\theta_2 ) ] \right ] .\label{eq : bigbad}\end{aligned}\ ] ] the local oscillator amplitudes are of course independent parameters , and may be chosen freely .we now want to choose them so that the modulation depth is maximized .it can be shown that this is achieved by the choice inserting this into eq .( [ eq : bigbad ] ) we may write we are particularly interested in the coefficient , since it may be expressed in terms of well known coherence functions between the two -channels , if we consider the quantity local realism imposes the restriction thw showed that this is satisfied whenever therefore a minimal ( necessary , but not sufficient ) requirement for local realism is that or this may be considered as a rewriting of the bell inequality ( [ eq : chsh ] ) .we see that violation of this inequality implies that inequality ( [ eq : titulaerglauber ] ) is also violated. however , inequality ( [ eq : titulaerglauber ] ) must be strongly violated in order to imply violation of inequality ( [ eq : bell ] ) . 
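Because the explicit expressions are not reproduced in this copy of the text, the following block records, as a reminder rather than as a quotation of the paper's Eqs. [eq:titulaerglauber] or [eq:bell], the standard single-mode degrees of coherence that the discussion involves and the Cauchy-Schwarz-type constraint obeyed by classical fields; the authors' normalisation and the exact form of their Bell bound may differ.

```latex
% Standard single-mode degrees of first- and second-order coherence between
% the two interfering channels (assumed notation; not copied from the paper):
\begin{align}
  g^{(1)}_{12} &= \frac{\langle \hat a_1^{\dagger}\hat a_2\rangle}
                       {\sqrt{\langle \hat a_1^{\dagger}\hat a_1\rangle
                              \langle \hat a_2^{\dagger}\hat a_2\rangle}},
  &
  g^{(2)}_{12} &= \frac{\langle \hat a_1^{\dagger}\hat a_2^{\dagger}
                                \hat a_2 \hat a_1\rangle}
                       {\langle \hat a_1^{\dagger}\hat a_1\rangle
                        \langle \hat a_2^{\dagger}\hat a_2\rangle}.
\end{align}
% For classical stochastic fields the Cauchy--Schwarz inequality yields the
% kind of constraint referred to above; a split single photon
% (|g^{(1)}_{12}| = 1, g^{(2)}_{12} = 0) violates it maximally, while a
% Bell-type bound on g^{(2)}_{12} is necessarily stricter.
\begin{equation}
  \bigl|g^{(1)}_{12}\bigr|^{2} \;\le\; g^{(2)}_{12}
  \qquad\text{(classical fields)}.
\end{equation}
```

If the interference visibility and normalized coincidence rate quoted below are identified with |g^(1)| and g^(2) respectively, the experimental values 0.98 and 0.18 indeed violate the classical bound, since 0.98^2 is approximately 0.96, well above 0.18.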
in other words , if the state violates local realism , it also violates classical field theory , but the converse is not necessarily true .we of course note that the parameters involved in bell s inequality ( [ eq : bell ] ) are exactly the same that we observe in the mach - zehnder interferometer , namely the degree of first and second order coherence .in other words , this inequality involves the interference visibility and the coincidence rate for a mach - zehnder interferometer .this of course also means that we can _ test _ this inequality in a mach - zehnder interferometer .it can be noted that in order to observe violation of inequality ( [ eq : bell ] ) , a minimal requirement is that we see that the state must be both sufficiently anticorrelated and it must yield a sufficiently high interference visibility .we see that this corresponds with our suspicion that the combination of these two features in some way yields violation of local realism .the most extreme example in this respect is of course the split single photon state which yields and .this state has been shown to violate local realism in the same experiment as we consider here .but in this paper we have in addition seen that a whole class of states violates local realism , including also mixed states .these states possess certain common features , namely those of a high interference visibility in combination with a strong anticorrelation .grangier , roger and aspect have performed an experiment measuring the visibility and the coincidence rate in the mach - zehnder interferometer .they observed a visibility of 0.98 and a coincidence rate of 0.18 .although this is sufficient to demonstrate violation of inequality ( [ eq : titulaerglauber ] ) , the coincidence rate was slightly too high to demonstrate violation of inequality ( [ eq : bell ] ) .however , such a demonstration should be well within technological reach today .note that a direct contradiction with local realism is not achieved in a mach - zehnder interferometer .if inequality ( [ eq : titulaerglauber ] ) is violated , the experiments do show that classical field theories break down .however , even if inequality ( [ eq : bell ] ) is violated , the results can be explained by a _ local _ but _ contextual _ hidden variable model .we may , e.g. , explain the interference visibility in terms of a classical wave model and the anticorrelation in terms of a classical particle model .still , it is interesting to see that merely by observing the interference visibility and the coincidence rate on an unknown state , we may gain sufficient information to predict that this state will violate local realism in the thw - experiment .thus , any single - mode state , pure or mixed , which displays both a sufficiently high interference visibility and a sufficient degree of anticorrelation violates local realism .the author wishes to thank kristoffer gjtterud , paul kwiat , barry sanders and aephraim steinberg for useful discussions and comments on an earlier version of this paper .this work was financed by the university of oslo , and is a cooperative project with buskerud college .see , e.g. , the development in d. f. walls and g. j. milburn , quantum optics ( springer - verlag , berlin , 1994 ) . it should be noted that this derivation involves an additional no - enhancement " assumption ( j. f. clauser and a. 
Shimony, Rep. Prog. Phys. 41 (1978) 1881). This is a common feature of all Bell inequalities that have been experimentally tested so far. Inequalities derived without this assumption generally require higher detector efficiencies in order to be tested.
We show that no local hidden-variable model can be given for two-channel states exhibiting both a sufficiently high interference visibility _and_ a sufficient degree of anticorrelation in a Mach-Zehnder interferometer.
perhaps for many , ghosts , vampires , zombies and the like are no more than hollywood fantasy .however , increasingly these movies have come to reflect the popularity of pseudoscientific beliefs in the general public .for instance , the movie white noise , " starring michael keaton , is based on the new trend among paranormalists electronic voice phenomena ( evp ) .the occult underground in both america and europe is witnessing a trendy rise in vampirism and belief in voodoo zombiefication which is widespread in many parts of south america and africa . additionally , paranormal depictions in the media , especially tv and hollywood motion pictures , have a definite influence on the way in which people think about paranormal claims ( and references therein ) . in this articlewe point out inconsistencies associated with the ghost , vampire and zombie mythologies as portrayed in popular films and folklore , and give practical explanations to some of their features .we also use the occasion as an excuse to learn a little about physics and mathematics .of course the paranormalist or occultist could claim that the hollywood portrayal is a rather unsophisticated and inaccurate representation of their beliefs , and thus the discussion we give hear is moot .however , if they are to change their definition each time we raise issue , then all that they are really arguing is that there exists something out there which may be given the name ` ghost ' , for instance .surely , no skeptic could argue with this .it has become almost a hollywood cliche that the entrance of a ghostly presence be foreshadowed by a sudden and overwhelming chill ( see , for example , the sixth sense " , starring bruce willis ) .in fact , sharp temperature drops are commonly reported in association with supposed real - life encounters with ghosts or poltergeists .this feature of supposed ghost sightings lends itself naturally to physical explanation . the famous haunted gallery at hampton court palace near london , uk, is reputedly stalked by the spirit of catherine howard , who was executed on 13 february , 1542 , by henry viii .visitors to the room have described hearing screams and seeing apparitions in the gallery .a team of ghost - busting psychologists , led by dr richard wiseman of hertfordshire university , installed thermal cameras and air movement detectors in the gallery .about 400 palace visitors were then quizzed on whether they could feel a presence " in the gallery .more than half reported sudden drops in temperature and some said they sensed a ghostly presence .several people claimed to have seen elizabethan figures . before moving on to an explanation, we will need to outline the concept of heat .when a ` warm ' object is placed next to a ` cool ' object ( see figure [ heat ] ) energy will begin to flow from the warmer body , causing it to cool , to the cooler body , causing it to warm .this energy , which is being transferred between the two objects due to their difference in temperature , is called _heat_. note that an object is never said to ` possess ' any amount of heat .heat is only defined through transfer .for instance , no matter how high one turns their stove , it never possesses any degree of heat . 
in the instance where someone suddenly touches the stove, however , there is the feeling of heat it is the energy flowing from the stove to that person s hand .as heat continues to be transferred from the warmer body to the cooler one in figure [ heat ] , and the warmer body s temperature continues to drop while the cooler body s temperature climbs , there comes a point when the two bodies are at the same temperature . at this point heat ceases to flow between the two object since neither is the hotter one and heat has no definite direction in which to be transferred .this condition is called _thermal equilibrium_. in our stove example , heat was transferred via _ conduction _ the exchange of heat through direct contact .there are two other modes by which heat may be transferred .these two modes involve the exchange of heat by two objects which are separated by some distance .if these two objects are emersed in a fluid ( earth s atmosphere for example ) , then the warmer body may provide heat to the fluid in its immediate vicinity .this warmer fluid will then tend to rise thus coming in contact with a cooler body above .there may also be a lateral current in the fluid , thus allowing the heated fluid to affect a cooler body to the side .this type of heat transfer , by an intermediary fluid , is called _convection_. in figure [ convection](a )we give an example of what is known as _convection currents_. suppose that the right wall is kept warm and the left wall is kept cool .then air in contact with the right wall will tend to gain heat and rise while air in contact with the left wall will tend to loose heat and then sink .the circular flow that then forms is called a convection current .air cycles around a loop picking up some heat at the right wall , dropping it off at the left wall , and then coming back around again .actually , the air current pattern will be somewhat more complicated than what we just described .there will be all kinds of smaller cycles and eddies embedded in some complex pattern as in figure [ convection](b ) .the overall flow , however , will be as in figure [ convection](a ) .the third mode of heat transfer allows for exchange between two separated objects even if they are in a total vacuum .how can two objects exchange heat if there is no mater in between them ?the answer is _ radiation_. the thermal energy of a bodyis expressed in the ` jiggling ' of its various constituent particles .as electrically charged particles within a body jiggle about , they produce electromagnetic waves .when these waves hit another body , they cause the particles in that body to jiggle even more than they were before and thus the body heats up . since hotter bodies produce more of this radiation , there will be more radiation from the hotter body falling upon the cooler body than radiation from the cooler body falling upon the hotter body .thus , overall , the hotter body will be loosing heat while the cooler body will be gaining heat .we will not be too concerned with this particular mechanism for heat exchange here .returning to the haunted gallery at hampton court palace , dr wiseman s team reported that the experiences could be simply explained by the gallery s numerous concealed doors .these elderly exits are far from draught - proof and the combination of air currents which they let in cause sudden changes in the room s temperature . in two particular spots , the temperature of the gallery plummeted by up to ( ) . 
you do , literally , walk into a column of cold air sometimes , " said dr wiseman . it s possible that people are misattributing normal phenomena ... if you suddenly feel cold , and you re in a haunted place , that might bring on a sense of fear and a more scary experience . "the rumor that ` cold spots ' are associated with ghosts seems to be a myth created by the construction of old building and the vivid imagination of people . but how could a few degrees drop in temperature explain the dramatic chills described in so many in ghostly accounts ? first off , what we sense as cold is not correlated to temperature so much as the rate at which heat is being transferred from our body to the environment . for instance , even in a temperate pool , one feel a very sharp chill when one first enters . a moderate draft containingcondensed moisture could cause a very sharp sensation of cold .secondly , we are all aware of the ` tall - tale ' effect .memories tend to become distorted and exaggerated .it is exactly this reason why scientists tend not to rely on unchecked eyewitness accounts .popular myth holds that ghosts are material - less .for instance in the movie ghost " ( starring patrick swayze , demi moore , whoopi goldberg ) , the recently deceased main character tries desperately to save his former lover from a violent intruder .his attempts grant him no avail , as at each lunge he passes right through the perpetrator .it is interesting , however , that he was able to walk up the stairs just prior to this .in fact , this is a common feature of the ghost myth .ghosts are held to be able to walk about as they please , but they pass through walls and any attempt to pick up an object or affect their environment in any other way leads to material - less inefficacy unless they are poltergeists , of course !let us examine the process of walking in detail .now walking requires an interaction with the floor and such interactions are explained by _ newton s laws of motion_. newton s _ first law _ is the law of inertia .it states that a body at rest will remain at rest until acted upon by an _ external _ force .therefore , a person can not start walking unless a force , applied by some body other than herself , is acting upon her .but where is the force coming from ?the only object in contact with the person while walking is the floor .so , the force moving a person during walking is coming from the floor . but how does the floor know to exert a force when the person wants to start walking and stop exerting it when the person wants to stand ? actually , there is no magic here .the person actually tells the floor .she tells the floor by using newton s _ third law_. newton s third law says that if one object exerts a force on another object , then the second object exerts a force , that is equal but oppositely directed , on the first object hence for each action there is an equal but opposite reaction . 
"thus when the skate - boarder in figure [ thirdlaw ] pushes on the wall , the wall pushes right back on her , causing her to accelerate off to the left .thus walking goes like this ( see figure [ walking ] ) : the person wanting to do the walking must remain at rest unless a force acts on her .she gets the floor to apply a force to her by applying a backward force on the floor with her foot .she keeps repeating this action , alternating feet .the point is that for the ghost to walk , it must be applying forces to the floor .now the floor is part of the physical universe .thus the ghost has an affect on the physical universe .if this is so , then we can detect the ghost through physical observation .that is , the depiction of ghosts walking , contradicts the precept that ghosts are material - less .so which is it ?are ghosts material or material - less ?maybe they are only material when it comes to walking .well then we must assume that they ca nt control this selective material - lessness , otherwise patrick swayze would have saved his girlfriend in ghost . " in this case , we could place stress sensors on the floor and detect a ghost s presence .maybe they walk by some other supernatural means .well why ca nt they use this power to manipulate objects when they want to ? even more , it seems strange to have a supernatural power that only allows you to get around by mimicking human ambulation .this is a very slow and awkward way of moving about in the scheme of things . in any case, you d have to go to some lengths to make this whole thing consistent .incidentally , the reader may have noticed that we skipped a law in our discussion .we heard about the first law and the third of newton s laws .newton s second law of motion is that the acceleration of an object the rate at which it speeds up is proportional to the net force applied , the constant of proportionality being the mass .we did nt need the precise statement of this law but , we did make implicit use of it .the second law implies that the acceleration of an object will be nonzero ( and thus the object will be able to change its state of motion ) only if a net force is acting on it .this consistent with our statement ` therefore , a person can not start walking unless a force , applied by some body other than herself , is acting upon her . 'anyone whose seen john carpenter s vampires " or the movie blade " or any of the host of other vampire films is already quite familiar with how the legend goes .the vampires need to feed on human blood .after one has stuck his fangs into your neck and sucked you dry , you turn into a vampire yourself and carry on the blood sucking legacy . the fact of the matter is , if vampires truly feed with even a tiny fraction of the frequency that they are depicted to in movies and folklore , then the human race would have been wiped out quite quickly after the first vampire appeared .let us assume that a vampire need feed only once a month .this is certainly a highly conservative assumption given any hollywood vampire film .now two things happen when a vampire feeds .the human population decreases by one and the vampire population increases by one .let us suppose that the first vampire appeared in 1600 ad .it does nt really matter what date we choose for the first vampire to appear ; it has little bearing on our argument .we list a government website in the references which provides an estimate of the world population for any given date . 
For January 1, 1600 we will accept that the global population was 536,870,911. (Beyond mathematical simplification, our choice has little impact on the argument to follow; if we were to use any number in the range of possible values for the population in the year 1600, the end result of the calculations below would be essentially the same.) In our argument, we had at the same time 1 vampire. We will ignore the human mortality and birth rates for the time being and concentrate only on the effects of vampire feeding. On February 1st, 1600, one human will have died and a new vampire will have been born. This gives 2 vampires and 536,870,910 humans. The next month there are two vampires feeding, and thus two humans die and two new vampires are born. This gives 4 vampires and 536,870,908 humans. Now on April 1st, 1600 there are 4 vampires feeding, and thus we have 4 human deaths and 4 new vampires being born. This gives us 8 vampires and 536,870,904 humans. By now the reader has probably caught on to the progression. Each month the number of vampires doubles, so that after n months have passed there are 2^n vampires. This sort of progression is known in mathematics as a _geometric progression_, more specifically a geometric progression with ratio 2, since we multiply by 2 at each step. A geometric progression increases at a tremendous rate, a fact that will become clear shortly. Now all but one of these vampires were once human, so that the human population is its original size minus the number of vampires excluding the original one. So after n months have passed there are 536,870,911 - (2^n - 1) humans. The vampire population increases geometrically and the human population decreases geometrically.

[Table [table:vamp]: vampire and human population at the beginning of each month during a 29-month period.]

Note that by month number 30 the table lists a human population of zero. We conclude that if the first vampire appeared on January 1st of 1600 AD, humanity would have been wiped out by June of 1602, two and a half years later. All this may seem artificial, since we ignored other effects on the human population. Mortality due to factors other than vampires would only make the decline in humans more rapid and therefore strengthen our conclusion. The only thing that could weaken our conclusion is the human birth rate. Note that our vampires have gone from 1 to 536,870,912 in two and a half years. To keep up, the human population would have had to increase by the same amount. The website mentioned earlier also provides estimated birth rates for any given time. If you go to it, you will notice that the human birth rate never approaches anything near such a tremendous value. In fact, in the long run, for humans to survive, our population would have to _at least_ essentially double each month! This is clearly way beyond the human capacity for reproduction. If we factor the human birth rate into our discussion, we find that after a few months the number of births becomes a very small fraction of the number of deaths due to vampires. This means that ignoring this factor has a negligibly small impact on our conclusion.
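As a quick numerical cross-check of the doubling argument above, the short script below (ours, not part of the original article) iterates the monthly feeding model from the stated 1600 AD population and reports when the human population is exhausted.

```python
# Monthly feeding model described above: every vampire feeds once a month,
# each victim becomes a vampire, and human births and ordinary deaths are
# ignored.  Initial numbers are those quoted in the text.
humans = 536_870_911      # world population assumed for 1 January 1600
vampires = 1
feedings = 0

while humans > 0:
    victims = min(vampires, humans)   # each vampire claims one victim
    humans -= victims
    vampires += victims
    feedings += 1

print(f"Humans are gone after {feedings} feedings, i.e. at the start of "
      f"month {feedings + 1}, with {vampires:,} vampires.")
# Expected output: 29 feedings, i.e. the start of month 30 (June 1602),
# with 536,870,912 vampires, matching the table discussed above.
```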
in our example, the death of humanity would be prolonged by only one month .we conclude that vampires can not exist , since their existence contradicts the existence of human beings .incidently , the logical proof that we just presented is of a type known as _reductio ad absurdum _ ,that is , reduction to the absurd .another philosophical principal related to our argument is the truism given the elaborate title , the _ anthropic principle_. this states that if something is necessary for human existence , then it must be true since we do exist . in the present case ,the nonexistence of vampires is necessary for human existence .apparently , whomever devised the vampire legend had failed his college algebra and philosophy courses .the zombie legends portrayed in movies such as dawn of the dead " or 28 days later " follow a similar pattern to the vampire legends .once you are attacked by zombies , while you may manage to escape immediate death , you will eventually die and turn into a zombie yourself .thus this particular type of zombie legend suffers the same flaw that we pointed out for the vampire legend previously .we still have some more work to do , however .there exists a second sort of zombie legend which pops its head up throughout the western hemisphere the legend of ` voodoo zombiefication ' .this myth is somewhat different from the one just described in that zombies do not multiply by feeding on humans but come about by a voodoo hex being placed by a sorcerer on one of his enemy .the myth presents an additional problem for us : one can witness for them self very convincing examples of zombiefication by traveling to haiti or any number of other regions in the world where voodoo is practiced .we describe the particular case of wilfrid doricent , an adolescent school boy from a small village in haiti .one day wilfred had become terribly ill .he was experiencing dramatic convulsions , his body had swelled terribly and his eyes had turned yellow .eight days latter , wilfred appeared to have died .this was confirmed by not only by the family and family friends present but also by the local medical doctor who could detect no vital signs .wilfred s body appeared to show bloating due to rigor - mortis and gave off the foul stench of death and rot .his body was buried soon thereafter .some time afterward , the weekly village cock - fight was interrupted as an incognizant figure appeared .the villagers were shocked as they gazed upon the exact likeness of wilfred .the arrival was indeed wilfred as his family verified by noting scars from old injuries and other such details .wilfred , however , had lost his memory and was unable to speak or comprehend anything . his family had to keep him in shackles so that he would nt harm himself in his incoherent state .it appeared that wilfred s body had risen from death leaving his sole in the possession of some voodoo sorcerer .word of wilfred s ` zombiefication ' spread quickly throughout the village .it was believed that wilfred s uncle , a highly feared voodoo sorcerer who had been engaged in a dispute over land with wilfred s family , was the culprit . wilfred s uncle was later charged with zombiefication , a crime in haiti equivalent to murder .is this truly a case of supernatural magic ? 
to answer this question , we turn our attention to a highly toxic substance called tetrodotoxin ( ttx ) .bryan furlow gives an overview of ttx s effects blended with a story from the news : at first the us federal officers thought they had stumbled upon a shipment of heroin .the suspicious package they intercepted last year [ 2000 ] , en route from japan to a private address in the us contained several vials packed with a white crystalline powder .but on - the - spot tests revealed that it was no narcotic .it took a while for forensic scientists at the lawrence livermore national laboratory in california to identify a sample , and what they found was alarming .the powder turned out to be tetrodotoxin ( ttx ) : one of the deadliest poisons on earth .gram for gram , ttx is 10,000 times more lethal than cyanide ... this neurotoxin has a terrifying modus operandi25 minutes after exposure it begins to paralyze its victims , leaving the brain fully aware of what s happening .death usually results , within hours , from suffocation or heart failure .there is no antidote .but if luckless patients can hang on for 24 hours , they usually recover without further complications ... the livermore team estimated that to extract the 90 milligrams of ttx discovered by the feds , you d need between 45 and 90 kilograms of puffer fish livers and ovaries the animal s most deadly tissues .no one knows what use its intended recipient had in mind ...ttx is found in various sea creatures and , in particular , in various species of puffer fish .puffer fish are a delicacy in japan known as ` fugu ' where only trained and licensed individuals prepare it by carefully removing the viscera . of course , despite the care taken in preparation ,about 200 cases of puffer fish poisoning are reported per year with a mortality ate 50% .the symptoms of the poisoning are as follows : the first symptom of intoxication is a slight numbness of the lips and tongue , appearing between 20 minutes to three hours after eating poisonous puffer fish .the next symptom is increasing paraesthesia in the face and extremities , which may be followed by sensations of lightness or floating .headache , epigastric pain , nausea , diarrhea , and/or vomiting may occur . occasionally , some reeling or difficulty in walking may occur .the second stage of the intoxication is increasing paralysis .many victims are unable to move ; even sitting may be difficult .there is increasing respiratory distress .speech is affected , and the victim usually exhibits dyspnea , cyanosis , and hypotension .paralysis increases and convulsions , mental impairment , and cardiac arrhythmia may occur . the victim ,although completely paralyzed , may be conscious and in some cases completely lucid until shortly before death .death usually occurs within 4 to 6 hours , with a known range of about 20 minutes to 8 hours . sometimes however , a victim pronounced dead , is lucky enough to wake up just before his funeral and report to his bewildered family that he was fully conscious and aware of his surroundings the entire ordeal .therefore , ttx has the unusual characteristic that if a nonlethal dose is given , the brain will remain completely unaffected .if just the right dose is given , the toxin will mimic death in the victim , whose vitals will slow to an immeasurable state , and whose body will show signs of rigor - mortis and produce the odor of rot . 
getting such a precise dosewould be rare for the case of fugu poisoning , but can easily be caused deliberately by a voodoo sorcerer , say , who could slip the dose into someone s food or drink .the secrets of zombiefication are closely guarded by voodoo sorcerers .however , frre dodo , a once highly feared voodoo sorcerer who is now an evangelical preacher and firm denouncer of the voodoo faith , has revealed the process .it turns out that zombiefication is accomplished by slipping the victim a potion whose main ingredient is powder derived from the liver of a species of puffer fish native to haitian waters .well , we now have an explanation for how wilfred could have been made to seem dead , even under the examination of a doctor .however , we have already said that the ttx paralysis was unlikely to have affected his brain . how does one account for wilfred s comatose mental state ?the answer is oxygen deprivation .wilfred was buried in a coffin in which relatively little air could have been trapped . wilfred s story probably goes something like this : slowly , the air in wilfred s coffin began to run out so that by the time he snapped out his ttx - induced paralysis , he had already suffered some degree of brain damage . at this pointhis survival instincts kicked in and he managed to dig himself out of his grave graves tend to be dug shallow in haiti .he probably wondered around for some time before ending up back the village .neuropsychiatrist dr .roger mallory , of the haitian medical society , conducted a scan of zombiefied wilfred s brain .although the results were not as definite as had been hoped for , he and his colleagues found brain damage consistent with oxygen starvation .it would seem that zobiefication is nothing more then a skillful act of poisoning .the bodily functions of the poisoned person suspend so that he appears dead . after he is buried alive , lack of oxygen damages the brain .if the person is unburied before he really dies from suffocation , he will appear as a soulless creature ( ` zombie ' ) as he has lost what makes him human : the thinking process of the brain .we have examined the science behind three of the most popular pseudoscientific beliefs encountered in hollywood movies . for two of them the idea of ghosts and vampires we have shown that they are inconsistent and contradictory to simple facts .for one of them the idea of zombies we have made no attempt to deny that it relies on real cases .however , we have reviewed evidence showing that the concept is a misrepresentation of simple criminal acts .wide spread belief in such concepts , we feel , is an indication of a lack of critical thinking skills in the general population . 
With simple elementary arguments one can easily discredit the validity of such claims. We thus finish with the following quote by Carl Sagan:

_Both Barnum and H. L. Mencken are said to have made the depressing observation that no one ever lost money by underestimating the intelligence of the American public. The remark has worldwide application. But the lack is not intelligence, which is in plentiful supply; rather, the scarce commodity is systematic training in critical thinking._

_Palace ghost laid to rest_, BBC News, Thursday, 29 March 2001, `http://news.bbc.co.uk/2/hi/uk/1249366.stm`.
Episode _Zombies: the living dead?_ from _Arthur C. Clarke's Mysterious Universe_, DVD, American Home Treasures, 2002.
Wade Davis, _The Serpent and the Rainbow_, Simon & Schuster, 1985 (reissued 1997).
Bryant Furlow, _The freelance poisoner_, New Scientist, issue 2274, 20 January 2001.
R. Littlewood, C. Douyon, _Clinical findings in three cases of zombification_, The Lancet, *350* (1997) 1094.
Carl Sagan, _Broca's Brain: Reflections on the Romance of Science_, Ballantine Publishing Group, 1979, p. 58.
Glenn G. Sparks, _Paranormal depictions in the media: how do they affect what people believe?_, Skeptical Inquirer, July/August 1998, p. 35.
Glenn G. Sparks, _Media Effects Research: A Basic Overview_, 2nd ed., Thomson, 2006.
U.S. Census Bureau, `http://www.census.gov`.
Food & Drug Administration, `http://vm.cfsan.fda.gov/%7emow/chap39.html`.

After its initial release the present article received great attention from the media and the public. In addition, letters from readers were received after the publication of the article in the July/August 2008 issue of _Skeptical Inquirer_.
the authors would like to thank all readers who took the time to send their comments, although we were surprised (and partly disappointed) to see that the majority of the readers misunderstood the goal of the article. since most of the comments have considerable overlap, we thought we would summarize our explanations in one reply. we would like to point out that our article was not about definite proofs. we challenge the reader to devise an absolute, mathematical/logical proof that a given supernatural occurrence does not exist. our prediction is that the reader will fail. in particular, there is no universally agreed upon mathematical/logical definition of the various apparitions considered in our article, nor of the manner in which they operate or behave (e.g. whether vampires deliberately control which of their victims turns into a vampire, etc.; cf. the vast majority of the received comments). however, an inability to present a definite proof does not imply an inability to discover logical and scientific inconsistencies or other flaws in a claim. our article was intended merely as an entertaining vehicle of education aiming at stirring critical thinking. our goal was (a) to remind readers that pseudoscientific and paranormal ideas barely make sense when elementary logic and science are applied; (b) to point out that, when and if there is an element of truth, it is highly distorted and hidden behind elaborate myths; and (c) to teach readers a little about science and remind them to make probabilistic assessments of various claims using reason. many letters state that the authors have 'missed' essential ideas in the 'vampirization' of humans that invalidate the calculation and, thus, the final conclusion. we would like to assure the readers that we are familiar with all the variations of the myth. after all, how could we miss them if hollywood and novel writers bombard us with them daily? we know about buffy, angel, blade and the other vampire hunters. we have read about multiple bites, drained bodies, transfer of blood and other protocols necessary to create new vampires. however, none of them can change the final conclusion of the 'impossibility of the existence of vampires', albeit the line of reasoning might have to be modified slightly (or radically if the premises are changed considerably). no matter what assumptions are required, one can create a corresponding mathematical model. the authors intentionally simplified the assumptions and avoided sophisticated mathematical models lest the article become inaccessible to readers without advanced mathematical training. by introducing dynamical systems, one can construct highly sophisticated models for the vampire population versus the human population. for example, in one of the simplest models, known as the prey-predator model, the two populations fluctuate periodically. (see the references for a simple presentation.) if this model is used, the human population never disappears, but it fluctuates between a maximum and a minimum value. one can now immediately see an argument against the existence of vampires: the human population has not fluctuated in this way; on the contrary, it has kept (and keeps) increasing exponentially. one can start with this model and add additional features (such as vampire slayers, vampire diseases, accidental exposure to sunlight, vampire babies, etc.), but the final outcome will not change: vampires cannot exist, since the model would predict a human population curve different from the actual one.
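to make the population argument concrete, here is a minimal sketch, not taken from the original article, of the prey-predator dynamics mentioned above, with humans as prey and vampires as predators. every rate constant and initial population is a hypothetical choice made only to exhibit the periodic fluctuation the argument relies on.

```python
# illustrative sketch only: the "prey-predator" (lotka-volterra) dynamics mentioned above,
# with humans as prey and vampires as predators. every rate constant and initial population
# here is hypothetical, chosen purely to show the periodic fluctuation the argument relies on.
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, y, growth, predation, conversion, death):
    humans, vampires = y
    d_humans = growth * humans - predation * humans * vampires
    d_vampires = conversion * humans * vampires - death * vampires
    return [d_humans, d_vampires]

params = (0.05, 0.002, 0.0001, 0.3)   # hypothetical rates per unit time
solution = solve_ivp(dynamics, (0.0, 400.0), [5000.0, 10.0],
                     args=params, dense_output=True, rtol=1e-8)

times = np.linspace(0.0, 400.0, 2000)
humans, vampires = solution.sol(times)
print(f"human population oscillates between {humans.min():.0f} and {humans.max():.0f}")
print(f"vampire population oscillates between {vampires.min():.1f} and {vampires.max():.1f}")
```

whatever parameter values are chosen, models of this family predict recurring large swings in the human population curve, which is exactly the behavior that census records do not show.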
also, the reader should note that this discussion is based on elementary physics and mathematics. we never discussed the social implications. imagine what it would mean if every so often an exsanguinated human corpse was found (even if only one vampire existed). wouldn't this be _the_ headline in the news? unless, of course, all governments have conspired to keep these incidents secret ... other messages to the authors pointed out similar omissions or holes in the arguments about ghosts and tried to affirm the existence of ghosts based on faith or incorrect physics explanations. unfortunately, quantum mechanics and exotic matter cannot give more substance to ghosts, and faith cannot be used as a substitute for proof. for reasons of limited space and time we bypass a rebuttal of each of the (incorrect) attempts to use physics to make the concept of ghosts consistent. many hollywood movies have some ideas that are consistent with science, but the majority of movies greatly offend mathematical and scientific laws. our reference to the movies was mainly a motivational tool. the authors would enjoy a consistent script, although research on the issue indicates that hollywood has a negative impact on scientific literacy in the public. ignoring misrepresentation of particular situations, and even ignoring series like _sci fi investigates_, _ghost hunters_, etc., which involve people untrained in the scientific method of investigation, hollywood's current trend is to promote the supernatural over logic and scientific inquiry. in this portrayal, science is irrelevant, it means trouble, it is reflexively close-minded, and only laypersons will find the true solution. fortunately, a few people have tried to reverse the unchallenged way hollywood presents its ideas. there have been some excellent books based on hollywood products that explain science in an entertaining way. among our favorites are the classic book by lawrence krauss, _the physics of star trek_, and james kakalios's _the physics of superheroes_. one of us (c.e.) is involved in a more extensive project nicknamed _physics in films_ that uses hollywood movies as a vehicle of education to increase the scientific literacy and the quantitative fluency of the public. the project also attempts to reverse the unchallenged way that hollywood promotes its ideas.

references:
j. stewart, _calculus_, 6th edition, thomson, 2008.
c.j. efthimiou, r.a. llewellyn, _hollywood blockbusters: unlimited fun but limited science literacy_, `http://www.arxiv.org/abs/0707.1167`.
c.j. efthimiou, r. llewellyn, d. maronde, t. winningham, _physics in films: an assessment_, `http://www.arxiv.org/abs/physics/0609154`.
c.j. efthimiou, r. llewellyn, _is pseudoscience the solution to science literacy?_, `http://www.arxiv.org/abs/physics/0608061`.
c.j. efthimiou, r. llewellyn, _cinema, fermi problems, & general education_, `http://www.arxiv.org/abs/physics/0608058`.
c.j. efthimiou, r.a. llewellyn, _cinema as a tool for science literacy_, `http://www.arxiv.org/abs/physics/0404078`.
c.j. efthimiou, r.a. llewellyn, _physics in films: a new approach to teaching science_, `http://www.arxiv.org/abs/physics/0404064`.
c.j. efthimiou, r.a. llewellyn, _physical science: a revitalization of the traditional course by avatars of hollywood in the physics classroom_, `http://www.arxiv.org/abs/physics/0303005`.
we examine certain features of popular myths regarding ghosts, vampires and zombies as they appear in film and folklore. we use physics to illuminate inconsistencies associated with these myths and to give practical explanations of certain aspects.
*cinema fiction vs physics reality: ghosts, vampires and zombies*
costas j. efthimiou and sohang gandhi
in hierarchical clustering models, to estimate cluster abundances at any given time one must estimate the abundance of sufficiently overdense regions in the initial conditions (press & schechter 1974). the problem is to find those regions in the initial conditions that are sufficiently overdense on a given smoothing scale, but not on a larger scale. the framework for not double-counting smaller overdense regions that are embedded in larger ones is known as the excursion set approach (epstein 1983; bond et al. 1991; lacey & cole 1993; sheth 1998). in this approach, one looks at the overdensity around any given random point in space as a function of smoothing scale. the resulting curve resembles a random walk, whose height tends to zero on very large smoothing scales. in this overdensity versus scale plane, the critical density for collapse defines another curve, which we will call the barrier. the double-counting problem is solved by asking for the largest smoothing scale on which the walk first crosses the barrier. it is fairly straightforward to solve this problem numerically, by direct monte-carlo simulation of the path integrals (bond et al. 1991). this is particularly simple if the steps in the walk are independent, but one can also include correlations between the steps, whose nature depends on the underlying fluctuation field (i.e., for a gaussian field, on the power spectrum) and the form of the smoothing filter. when the steps are independent, exact solutions for constant (bond et al. 1991) or linear (sheth 1998) barriers are known. exact solutions for more general barrier shapes are not known, but good analytic approximations are available (sheth & tormen 2002; lam & sheth 2009). in the appropriate units, these solutions are self-similar, i.e. independent of the form of the power spectrum (of course, they depend strongly on the barrier shape). however, an exact solution of the first crossing problem for correlated steps is still unknown. the main goal of the present work is to provide a simple formula which works for a wide variety of barrier shapes, smoothing filters, and power spectra. this is done in section [main]; our main result is equation ([sfs]), and it is explicitly not self-similar. section [extend] describes a number of extensions of this calculation, having to do with walks conditioned to pass through a certain point in a certain way, or with barriers whose height depends on hidden variables. a final section summarizes our results, indicating how we expect our work to be used when fitting to the halo abundances which, recent simulations indicate, are not quite self-similar. in what follows, we will assume that the underlying fluctuation field is gaussian. we comment on non-gaussian fields in the discussion section. in hierarchical models, the variance of the fluctuation field is a monotonic function of the smoothing scale (the exact relation depending on the shape of the power spectrum and the smoothing window). therefore we can use the terms smoothing scale and variance interchangeably, and we will use to denote the variance. let denote the height of the barrier on 'scale' , and the probability that the walk has height on this scale. we will assume that , so that . we would like to write down the probability that for all and at . when the steps in the walk are uncorrelated the two conditions separate, simplifying the analysis.
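as a concrete illustration of the numerical route just mentioned, the following is a minimal sketch, ours rather than the authors' code, of the monte carlo solution for the simplest case: independent steps and a barrier of constant height. the barrier value, the variance grid and the number of walks are arbitrary illustrative choices.

```python
# a minimal sketch (ours, not the authors' code) of the excursion set monte carlo in the
# simplest case of independent steps and a constant barrier.
import numpy as np

rng = np.random.default_rng(1)
barrier = 1.686                        # constant barrier height (illustrative)
s_grid = np.linspace(0.0, 9.0, 451)    # grid in the variance s
ds = s_grid[1] - s_grid[0]
n_walks = 20000

# with independent steps, delta(s) is a brownian motion in the variance: each step adds a
# zero-mean gaussian increment of variance ds
steps = rng.normal(0.0, np.sqrt(ds), size=(n_walks, s_grid.size - 1))
walks = np.concatenate([np.zeros((n_walks, 1)), np.cumsum(steps, axis=1)], axis=1)

crossed = walks >= barrier
has_crossed = crossed.any(axis=1)
s_first = s_grid[crossed.argmax(axis=1)[has_crossed]]    # variance at the first crossing

# for independent steps and a constant barrier the answer is known in closed form:
# f(s) = barrier * exp(-barrier**2 / (2 s)) / sqrt(2 pi s**3)
hist, edges = np.histogram(s_first, bins=40, range=(0.0, 9.0), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
analytic = barrier * np.exp(-barrier**2 / (2 * centers)) / np.sqrt(2 * np.pi * centers**3)
analytic /= np.trapz(analytic, centers)    # renormalise over the same finite range of s
print("largest difference between histogram and analytic curve:", np.abs(hist - analytic).max())
```

because a walk can jump over the barrier between two grid points, the histogram approaches the analytic curve only as the step in the variance is refined.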
for correlated steps, this simplicity is lost. recently paranjape et al. (2012) have argued that there is considerable virtue in thinking of the limiting case in which the steps in the walk are completely correlated. in this case, if the walk had height on scale , then it has height on scale , so is a smooth monotonic function of . therefore, if first exceeds on scale , it was certainly below on all , and one need not account for this requirement explicitly. hence, the first crossing distribution is just , where is the scale on which is minimum (in many cases of interest ). note that this relates the shape of to that of on the same scale. paranjape et al. (2012) show that, despite the strong assumption about the deterministic smoothness of the walks, this expression provides a very good description of the first crossing distribution (at small ) even when the steps are not completely correlated. physically, this is because if one thinks of real walks as stochastic zig-zags superimposed on smooth, completely correlated trajectories, at small a walk has not fluctuated enough to depart significantly from its deterministic counterpart. one might thus expect that there is danger of double counting trajectories with two or more crossings only when becomes large. in cosmology the large regime is not nearly as interesting as the small . the discussion above suggests that it will be useful to construct an expansion in terms of the number of times walks cross the barrier. the first step in this program is to assume that no walks double-cross, but that the actual correlation structure scatters their crossing scale around the completely correlated prediction. accounting for this fact alone should allow one to estimate the first crossing distribution with greater accuracy than equation . accounting for one earlier crossing should be even more accurate, and so on. to proceed, we assume that when steps are strongly correlated one can replace the requirement that for all , which is a condition on all the steps in the walk prior to , with the milder requirement that for , a condition on the one preceding step. because the walk heights on the two scales are correlated but not deterministically related, the analysis is more involved than when the steps were completely correlated, but the increase in complexity is relatively minor because we only require a bivariate distribution. for small we can expand both and in a taylor series. the condition on the walk height at the previous step means that . if we use primes to denote derivatives with respect to , then the first crossing distribution of interest is given by the fraction of walks which have . that is to say, for a gaussian field is gaussian, and the conditional distribution in the integral is also gaussian, with shifted mean and reduced variance. if we define , the mean of is and the variance is . (our notation was set by the fact that, for gaussian smoothing filters, equals the spectral quantity which bardeen et al. 1986 called ; for tophat filters, the integrals over the power spectrum which define our are given by paranjape et al.) note that , and thus .
before we evaluate the integral, note that the completely correlated approximation corresponds to the limit in which and becomes a delta function centered on . the integral then yields , and the resulting expression is consistent with equation . notice that our analysis has indeed extended the completely correlated solution by replacing the delta function with a gaussian whose width depends explicitly on the underlying power spectrum. in the generic case, one still has an integral over a single gaussian distribution. if we define (the reason for our choice of sign for will become clear later), then equation gives . evaluating the integral yields equation ([sfs]) (recall that depends on ); this is our main result. comparison with equation ([fcc]) shows that the term in square brackets above represents the correction to the completely correlated solution. there are two points to be made here. first, this correction term depends on , indicating that the shape of the first crossing distribution depends explicitly on the form of the underlying power spectrum. in this respect, walks with correlated steps are fundamentally different from walks with uncorrelated steps. second, when , this term tends to unity, so the first crossing distribution reduces to that for completely correlated walks. to see what large implies, it is useful to consider some special cases.

figure [comparemc] caption: the scales on which walks first cross the barrier. histograms show numerical monte carlo results for gaussian smoothing of , for which ; top to bottom are for barriers with and . for comparison, the two dotted curves show the corresponding distributions for and when steps are uncorrelated. smooth curves show eq. ([sfs]) with and the appropriate values of ; the agreement indicates that our formula works well for a wide variety of barrier shapes. symbols with error bars show results for a constant barrier ( ) and tophat smoothing of a . this shows that, in contrast to when steps are uncorrelated, the first crossing distribution does indeed depend (weakly) on . the solid curve shows eq. ([sfsconstant]) with , for which ; the agreement shows that our formula works well for a wide range of .

for a constant barrier, and . the latter is conventionally denoted as , so that corresponds to large . in terms of , equation becomes equation ([sfsconstant]). in this form, it is clear that acts as the -scale below which the correction to the completely correlated distribution becomes important. (note however that our is not the same parameter defined by peacock & heavens 1990 and used in paranjape et al. 2012.) figure [comparemc] compares this formula to distributions generated via monte-carlo simulation of the walks (following bond et al. 1991). we present results for two rather different power spectra and smoothing filters, with two different values of . our first choice is gaussian smoothing of with , for which . our second is tophat smoothing of a power spectrum; in this case itself depends on scale, with (and hence ) on the scales where . the figure shows that equation ([sfsconstant]), with and respectively, provides an excellent approximation to the monte-carlo distributions. note in particular that, for , ignoring the scale dependence of (by simply using on all scales) works very well. this demonstrates that eq. ([sfsconstant]) is a simple and accurate approximation for the cdm family of models.
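the comparison shown in the figure can be mimicked with a short monte carlo of correlated walks. the sketch below is ours, not the authors' code; it draws walks whose covariance between smoothing scales is that of a gaussian-filtered power-law spectrum, and the spectral index, barrier height, scale grid and number of walks are all assumptions made for illustration, since the specific values used in the figure were lost from this copy of the text.

```python
# illustrative monte carlo of walks with correlated steps: for P(k) ~ k**n_eff and a gaussian
# filter, the covariance of the smoothed field between radii R_i and R_j is proportional to
# [(R_i**2 + R_j**2)/2]**(-(n_eff + 3)/2), which is all that is needed to draw the walks.
import numpy as np

n_eff = -1.2       # assumed effective spectral index (illustrative)
delta_c = 1.686    # assumed constant barrier height (illustrative)

radii = np.logspace(1.0, -0.5, 60)            # smoothing radii, ordered large to small
R2 = radii[:, None] ** 2 + radii[None, :] ** 2
cov = (0.5 * R2) ** (-(n_eff + 3.0) / 2.0)
s = np.diag(cov).copy()                       # variance on each smoothing scale

chol = np.linalg.cholesky(cov + 1e-10 * np.eye(radii.size))
rng = np.random.default_rng(2)
walks = rng.standard_normal((20000, radii.size)) @ chol.T   # rows are correlated walks

crossed = walks >= delta_c
has_crossed = crossed.any(axis=1)
s_first = s[crossed.argmax(axis=1)[has_crossed]]            # variance at first up-crossing

hist, edges = np.histogram(s_first, bins=25, density=True)
print("fraction of walks that crossed the barrier:", has_crossed.mean())
```

with steps correlated in this way the first crossing histogram is no longer the inverse gaussian of the independent-step case, and its shape shifts with the assumed spectral index, which is the power-spectrum dependence emphasized in the text.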
for linear barriers , , making and . figure [comparemc] shows results for and , both for gaussian smoothing of ; equation ([sfs]) with works very well. note in particular that our formula is quite different from the inverse gaussian distribution associated with uncorrelated steps. the analysis above has been so simple, and its results so accurate, that it is interesting and natural to extend it to a variety of other problems, some of which we outline below. the excursion set approach is often used to quantify the correlation between halo abundances and their environment. this is done by computing the ratio of , the first crossing distribution subject to the additional constraint that the walks passed through some on some large scale before first crossing on scale , to . motivated by the previous section, we now set in the limit where . this will give , which is very similar to equation , only with modified mean values and variances. the gaussian outside the integral has mean and variance , where . the one inside has mean , where . the first gaussian obeys the scaling assumed by paranjape et al. (2012), who showed it was a good approximation to their conditional monte carlo distributions. therefore, it is interesting to see if this scaling also holds for the integral. if (or, more carefully stated, if the term containing this quantity is smaller than the other) then and , so . since this is the same rescaling of as for the first gaussian, in this approximation follows the scaling with that was assumed by paranjape et al. therefore, the conclusions of paranjape & sheth (2012) about the difference between real and fourier space bias factors also hold for our calculation. in particular, the fourier space bias will be simply given by differentiating with respect to , whereas the real space cross correlation will carry an additional factor of . for a constant barrier this yields equation ([bias1]). the first term on the right hand side is the bias associated with completely correlated walks. the correction term is negligible for , and it tends to when . the expression above assumes that is negligible. this is reasonable when ; e.g. for gaussian smoothing filters, falls to zero faster than . but at intermediate scales it is not, and will have additional dependence on . in musso, paranjape & sheth (2012) we show that this will generically introduce scale dependence into the fourier space bias factors, even at the linear level. the excursion set theory can also be used to compute halo progenitor mass functions and merger rates (lacey & cole 1993). for this, we need the joint distribution of walks that first cross the barrier on scale , and first cross the barrier on scale . dividing by , which we already have, will give the conditional distribution. the most natural way to estimate it is to set (strictly speaking, we should adjust the limits on the integrals over to ensure that ). we have written the final expression in a suggestive form, to show that the left hand side should be thought of as a weighted average over first crossing distributions, each with its own value of .
in the limit where the scales are very different, , it should be a good approximation to ignore the , and correlations. this makes , where is the same quantity as , but on scale . this means there is some dependence on . note that if there were no correlation with (in effect ) then the conditional distribution would have the same form as the unconditional one, except that , and , where we have defined . this is similar to the rescaling for sharp-k filtering, for which . however, for gaussian smoothing of a power law, , so one may not set if one does not also set . therefore, the case with may be more relevant. in this case . since this same term appears in the exponential of , we have that has the same form as equation , except for the shift . therefore, except for the erf piece, the final integral over can also be done analytically. we leave a comparison of this approximation, and the full expression (in which we have not ignored these correlations, etc.), with the numerical monte carlo distributions, to future work. that depends explicitly on , even in this approximation, is significant, since it shows that the conditional distribution for first crossing depends not just on the fact that was crossed, but on how was crossed. if we interpret 'how was crossed' as a statement about the mass distribution on smoothing scales larger than , then the fact that depends on indicates that the formation history of the mass within depends on the surrounding environment. this shows that our formalism will naturally give rise to 'assembly bias' effects of the sort identified by sheth & tormen (2004), and studied since by many others. in triaxial collapse models, the barrier is a function of the values of the initial deformation tensor (rather than just its trace). this makes , where . if the scale dependence of and can be neglected, then this will simplify to . further, if the correlation between and and can be neglected, then , and the first crossing distribution is that for , weighted by the probability of having and . if we use equation (a3) of sheth et al. (2001) for the distribution of and given , then our analysis should be thought of as generalizing their equation (9). in particular, because the result is a weighted sum of first crossing distributions, it exhibits exactly the sort of stochasticity discussed in appendix c of paranjape et al. (2011). performing this calculation more carefully is the subject of ongoing work. one of the great virtues of our approach is that it provides a simple way to compare the excursion set description with that for peaks. the relation is particularly simple for gaussian smoothing filters, since then of our equation is the _same_ parameter which plays an important role in peaks theory. this correspondence means that our parameter is essentially the same as the peak curvature parameter; the only difference is that we define the derivative with respect to the variance, whereas peaks theory derivatives are with respect to smoothing scale. this means that our integrals over are really just integrals over curvature, which makes intuitive sense. hence, to implement our prescription for peaks, we only need to account for the fact that the distribution of curvatures around a peak position differs from that around random positions. if we write our equation as times , then we need only replace , where is given by equation (a15) of bardeen et al. (1986).
for a constant barrier, our equals their , so our expression is simply their equation (a14) weighted by and integrated over . (their additional factor of is just the usual factor in excursion set theory, which converts from mass fractions to halo abundances.) omitting the factor when performing the integral over yields the usual expression for peaks (their equation a18). therefore, one might think of their (a18) as representing the 'completely correlated limit' for peaks, whereas our analysis yields what should be thought of as the moving barrier excursion set model for peaks. we presented a formula, equation , which provides an excellent description (see fig. [comparemc]) of the first crossing distribution of a large variety of barriers, by walks exhibiting a large range of correlations (i.e., it is valid for a wide class of power spectra and smoothing filters). as we discuss below, we expect a special case of it, equation , to be a good physically motivated fitting formula for halo abundances. we then showed that our approach provides a simple expression for the first crossing distribution associated with walks conditioned to pass through a certain point, and hence a simple expression for how halo bias factors are modified because of the correction term (equation [bias1]). we sketched why a generic feature of this approach is that even the linear bias factor should be scale dependent. we also showed how to approximate the first crossing distributions associated with two non-intersecting barriers: the probability that a walk first crosses barrier on some scale and then on scale (equation [sfss]). these exhibit 'assembly bias', so they may provide useful approximations for halo progenitor mass functions and merger rates. finally, we argued that our approach makes it particularly easy to see how the first crossing distribution is modified if the barrier depends on hidden parameters, such as those associated with the triaxial collapse model (section [dcep]) or peaks (section [peaks]). in our approach, peaks differ from random positions only in that the integral over in equation is modified. a similar integral in the expression for peak abundances leads to scale dependent bias even at the linear level (desjacques et al. 2010), and is why we now expect -dependent bias even for random positions. we are not the first to have considered the correlated steps problem. peacock & heavens (1990) identified (our equation [gammas]) as the key parameter. their approximation for is more accurate than more recent approximations (maggiore & riotto 2010; achitouv & corasaniti 2011), which are, in any case, restricted to special combinations of power spectra and smoothing filter (paranjape et al. 2012). however, our equation is simpler and more accurate for a wider range of barrier shapes, power spectra and smoothing filters than any of these previous studies, and it can be easily extended. besides the problems outlined above, an obvious direction would be to include non-gaussianity, along the lines of musso & paranjape (2012). this extension is conceptually straightforward (the key equations will hold also for a non-gaussian field) and is currently being investigated. we argued that this accuracy stems from the small behavior of the walks.
in this regime, which is of most interest in cosmology, is well described by equation for completely correlated walks (paranjape et al. 2012). these walks are deterministic: the distribution of their heights is a delta function. for real walks, instead, it has a width that increases with . equation ([sfs]) was obtained by allowing for this broader distribution of heights in the one step prior to the crossing. however, it does not explicitly account for walks which may have criss-crossed the barrier more than once. musso et al. (2012) discuss how accounting for more zigs and zags may yield even greater accuracy at larger . equation ([sfs]) is explicitly the completely correlated first crossing rate times a correction factor. it depends on the power spectrum only because this factor depends on . since this factor is small at small , in this regime (which is the usual one in cosmology) one should expect to see departures from self-similarity, but they should be small. e.g., for a barrier of constant height, the first crossing distribution becomes equation , and the correction factor becomes important at , with for the family of . in cosmology the first crossing distribution is often used as a fitting formula. for this purpose, one might treat either or , or both, as free parameters (even though equation [sfsconstant] describes the first crossing distribution very well with no free parameters); a short numerical sketch of this fitting use is given after the reference list below. in this case, the value of will depend on a variety of factors (sheth et al. 2001; maggiore & riotto 2010b; paranjape et al. 2012), and we expect to depend on the effective slope of the power spectrum. in this sense, our formula provides a simple way to understand, interpret and quantify the departures from self-similarity which simulations are just beginning to show. it is a pleasure to thank aseem paranjape for helpful comments on the draft. rks is supported in part by nsf-ast .

references:
bardeen j. m., bond j. r., kaiser n., szalay a. s., 1986, apj, 304, 15
bond j. r., cole s., efstathiou g., kaiser n., 1991, apj, 379, 440
corasaniti p. s., achitouv i., 2011, prl, 106, 241302
desjacques v., crocce m., scoccimarro r., sheth r. k., 2010, prd, 82, 103529
epstein r. i., 1983, mnras, 205, 207
lacey c., cole s., 1993, mnras, 262, 627
lam t. y., sheth r. k., 2009, mnras, 398, 2143
maggiore m., riotto a., 2010a, apj, 711, 907
maggiore m., riotto a., 2010b, apj, 717, 515
musso m., paranjape a., arxiv:1108.0565, mnras, online early
paranjape a., sheth r. k., 2012, mnras, 419, 132
paranjape a., lam t.-y., sheth r. k., mnras, online early
peacock j. a., heavens a. f., 1990, mnras, 243, 133
press w. h., schechter p., 1974, apj, 187, 425
sheth r. k., 1998, mnras, 300, 1057
sheth r. k., mo h. j., tormen g., 2001, mnras, 323, 1
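to illustrate the fitting use described above, here is a brief sketch of treating the barrier height and the correlation parameter as free parameters when fitting a measured first crossing histogram. because the inline equations were lost from this copy, the functional form below is our own reconstruction of a constant-barrier up-crossing rate of the kind the text describes (it reduces to the completely correlated expression when the correlation parameter is large); it should be read as illustrative, not as the paper's exact equation ([sfsconstant]).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def f_cross(s, delta_c, gamma):
    """up-crossing-rate approximation for a constant barrier crossed by correlated walks.
    gamma measures how strongly successive steps are correlated; as gamma grows the bracket
    tends to one and the completely correlated result (delta_c / 2s) * p(delta_c; s) returns."""
    nu = delta_c / np.sqrt(s)
    p = np.exp(-0.5 * nu**2) / np.sqrt(2.0 * np.pi * s)
    bracket = 0.5 * (1.0 + erf(gamma * nu / np.sqrt(2.0))) \
        + np.exp(-0.5 * (gamma * nu) ** 2) / (np.sqrt(2.0 * np.pi) * gamma * nu)
    return (delta_c / (2.0 * s)) * p * bracket

# synthetic "measurement" (illustrative only): the same form plus 5 per cent scatter
rng = np.random.default_rng(3)
s_data = np.linspace(0.2, 6.0, 30)
y_data = f_cross(s_data, 1.686, 0.6) * (1.0 + 0.05 * rng.standard_normal(s_data.size))

popt, _ = curve_fit(f_cross, s_data, y_data, p0=(1.7, 0.5))
print("fitted delta_c = {:.3f}, fitted gamma = {:.3f}".format(*popt))
```

in practice one would fit a form of this kind to halo abundances measured in simulations, which is how the small departures from self-similarity discussed above could be quantified.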
we provide a simple formula that accurately approximates the first crossing distribution of barriers having a wide variety of shapes, by random walks with a wide range of correlations between steps. special cases of it are useful for estimating halo abundances, evolution, and bias, as well as the nonlinear counts in cells distribution. we discuss how it can be extended to allow for the dependence of the barrier on quantities other than overdensity, to construct an excursion set model for peaks, and to show why assembly and scale dependent bias are generic even at the linear level. keywords: large-scale structure of universe
graphene, a one-atom-thick layer of carbon atoms arranged in a hexagonal lattice, has outstanding electrical and mechanical properties, as well as high optical transmittance. for this reason, many electronic and photonic devices employing graphene, as either an active layer or a transparent electrode, have been demonstrated, such as light-emitting diodes (leds), solar cells, field-effect transistors, photodetectors, touch screens, terahertz wave modulators, and schottky junction devices. in many such demonstrations, a graphene layer has been deposited by transferring it onto a device substrate following the conventional wet-transfer method, where a graphene polymer bilayer floating on a water bath is scooped by the substrate. and when the patterning of graphene layers is required, it has mostly been performed after graphene transfer, typically using photolithography followed by reactive-ion etch (rie). however, this method of obtaining patterned graphene layers, the wet-transfer and subsequent patterning process, has only a limited range of applications, in which graphene layers must be deposited and patterned, when necessary, prior to deposition of any material that is too fragile to withstand a wet, high-temperature, or plasma process. notable, practically important examples of such materials are organic semiconductors and organometal trihalide perovskite compounds. attention, therefore, has been focused on the development of dry-transfer techniques. for example, a graphene layer grown on a cu layer on a donor substrate can be directly transferred onto a target substrate, by delaminating the graphene cu interface when the target substrate in contact with the graphene layer is peeled off from the donor substrate. however, for selective delamination, the target substrate needs to be coated with an epoxy adhesion layer, which makes this technique unsuitable for high-performance electronic devices: for example, it cannot be applied to fabrication of an led with a top graphene electrode, since the adhesion layer in this case would be placed in the device interior, just beneath the graphene electrode, impeding efficient charge injection. another approach is to transfer-print a graphene layer coated with a 'self-release' layer from an elastomeric stamp onto a target substrate, where reliable transfer is achieved by choosing an appropriate self-release layer that assures selective delamination at the interface between that layer and the elastomer. although the transfer process itself is dry, removing the self-release layer transferred along with the graphene is typically achieved with an organic solvent, ultimately limiting applications of this method. et al. demonstrated a technique capable of transferring graphene monolayers without an adhesion or a self-release layer. in this mechano-electro-thermal process, complete transfer, instead, requires application of high temperature ( ) and voltage ( ) while a graphene layer grown on cu foil is pressed onto a target substrate. here, we demonstrate a low-temperature, dry transfer process capable of transfer-printing a patterned graphene monolayer onto a target substrate that can be damaged or degraded by a wet, plasma or high-temperature process.
in this process, a graphene monolayer on cu foil , which is grown by chemical vapor deposition ( cvd ) and then patterned using a conventional lithographic process , is transferred onto a stamp made of poly(dimethylsiloxane ) ( pdms ) , and subsequently transfer - printed from the stamp onto the target substrate . the graphene transfer from cu foil to pdmsis achieved using the conventional wet - transfer process , with the following two modifications : the use of au , instead of poly(methyl methacrylate ) ( pmma ) , as a material for the support layer , and the decrease in surface tension of the liquid bath using a water - ethanol mixture .these modifications are critical in preventing defect formation in a graphene monolayer during its transfer onto a pdms stamp , thereby leading to a minimum sheet resistance of for a graphene monolayer transfer - printed onto a glass substrate .furthermore , we demonstrate transfer - printing of patterned graphene monolayers on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate ( pedot : pss ) and moo , which are representative examples of organic electronic materials and practically important metal oxides , respectively , that are usually damaged or degraded when exposed to aqueous or aggressive patterning processes .the morphological and elemental characterizations of the surfaces of transfer - printed graphene show the existence of contaminants that are likely to be siloxane oligomers transferred from the pdms stamp .we discuss the current range of application of this technique and possible means to expand it by eliminating the contamination problem .to transfer a graphene monolayer onto a target substrate that can be damaged or degraded by a wet or high - temperature process ( fig .[ fig : process_schematic ] ) , we first transfer a cvd - grown graphene onto a pdms stamp following the conventional wet - transfer method ( a to f ) : by scooping up , with the pdms stamp , a graphene support bilayer floating on liquid .after the support layer is removed by chemical etching , the graphene is transfer - printed on a target substrate ( g to h ) .the first part of this process ( a to f ) , although seemingly similar to the conventional wet - transfer technique , has two distinct features , which are crucial to obtain a high - quality graphene monolayer on a target substrate .first , as a support layer material , we use thermally deposited au , instead of pmma , which is mostly widely used for this purpose in the wet - transfer method .pdms , the material chosen for a stamp owing to its mechanical and chemical properties suitable for various transfer - printing techniques , swells when immersed in an organic solvent that can dissolve the pmma support layer , such as acetone and chloroform . when this occurs , the graphene monolayer cracks , creating a large number of defects ( supplementary fig .s1 ) . on the contrary, the use of a au support layer allows one to obtain a high - quality graphene monolayer on pdms , since au can be removed using an aqueous etchant , which does not swell pdms .second , for the liquid on which the graphene au bilayer floats and from which it is scooped with a pdms stamp [ fig .[ fig : process_schematic](d ) ] , we use an ethanol water mixture , instead of water commonly used in the conventional wet - transfer technique .this is to decrease the surface tension of the liquid . 
in the conventional case, after the graphene support bilayer is scooped with a hydrophilic substrate [as in fig. [fig:process_schematic](d)], a thin layer of water is present throughout the graphene substrate interface, providing sufficient lubrication at that interface. as a result, when the sample is blow-dried using a n gun, the graphene and substrate form a conformal contact without wrinkles throughout the substrate, as the water is laterally displaced [supplementary fig. s2(a)]. since the surface of a pdms stamp is hydrophobic, which is favorable for reliable transfer of a graphene monolayer onto a target substrate via stamping (g to h in fig. [fig:process_schematic]), the use of a water bath in fig. [fig:process_schematic](d) leads to a discontinuous lubrication layer between the bilayer and substrate, as schematically shown in supplementary fig. s2(b). therefore, blow-drying in this case results in bursting of trapped water droplets, tearing the graphene monolayer. this can be effectively prevented by using an ethanol water mixture as the liquid bath, which sufficiently wets the pdms surface to provide a continuous lubrication layer [supplementary fig. s2(c)]. when patterning of graphene is required, a conventional patterning process, such as o rie of graphene using photoresist patterned by photolithography as an etch mask, is performed before step (b) in fig. [fig:process_schematic]. then, performing the remaining processes [step (b) to (h)], one can obtain a patterned graphene monolayer on a target substrate. this pre-transfer patterning of graphene allows one to avoid possible damage to the fragile material that is likely to occur when a process such as photolithography, rie, or laser ablation is performed after the graphene is transferred to the target substrate. to show that the surface tension of the liquid used in step (d) in fig. [fig:process_schematic] is a critical factor determining the quality of transfer-printed graphene, we transfer-printed a graphene monolayer on a si substrate coated with a 285-nm-thick sio layer following the process described in fig. [fig:process_schematic], while varying the liquid bath: in one set of experiments we used water, and in the other, a water ethanol mixture ( water and ethanol by volume). when the water bath was used, although the entire graphene sheet ( by ) was seemingly well transferred, a closer observation revealed that there are randomly distributed, irregularly shaped holes where graphene is absent, as shown in fig. [fig:water_vs_mixture](a). the density of these defects is approximately . when the pdms stamp was observed by an optical microscope after step (f) in fig. [fig:process_schematic], it was found that similar defects, albeit smaller in size, were present (supplementary fig. s3), indicating that the defects are formed while transferring the graphene layer onto the pdms surface and are exacerbated during the transfer-printing onto the substrate. as described in the previous section, the defects arise from insufficient wetting of the pdms surface by water. since pdms is hydrophobic, immediately after a graphene au bilayer is scooped by a pdms stamp, water dewets the pdms surface in several locations, making the bilayer form contacts to the pdms surface that are only locally conformal [supplementary figs. s2(b) and s4(a)].
as the sample is blow-dried using a n gun, these locally conformal contacts laterally expand, generating narrow wrinkles with water droplets trapped inside, as shown in the right image of supplementary fig. . we speculate that further application of n pressure causes the water droplets to burst, resulting in defects such as that shown in supplementary fig. . in fact, as shown in fig. [fig:water_vs_mixture](a), the locations of many defects in the graphene transferred onto the substrate seem to coincide with the intersections of the wrinkles, where relatively large water droplets are expected to form: the linear regions in fig. [fig:water_vs_mixture](a) indicated by the white arrow are where the graphene monolayer is folded, which results from the wrinkles in the graphene au bilayer. in contrast, when the ethanol water mixture was used, its lower surface tension ( at ) allows a continuous lubrication layer to form between the graphene and pdms surfaces, providing effective "decoupling" of the bilayer from the pdms surface. therefore, no wrinkles, except a few with much smaller heights, were observed in the graphene au bilayer on the pdms stamp [supplementary fig. s4(b)]. we found that mild baking at removes these wrinkles, resulting in a flat graphene au bilayer that is globally conformal to the pdms stamp, and consequently, successful transfer-printing of the graphene monolayer was achieved without defects, as shown in fig. [fig:water_vs_mixture](b). the sheet resistance ( ) was measured for graphene monolayers transfer-printed on glass substrates, using the van der pauw method. the size of the graphene monolayers is approximately by . in the following, graphene monolayers transfer-printed onto a final substrate from a pdms stamp onto which a graphene au bilayer was scooped from a bath of water and from the ethanol water mixture are referred to as and , respectively. for , the sheet resistance, averaged over five samples ( ), is , with a minimum equal to . in contrast, for , is , with a minimum of . figure [fig:water_vs_mixture](c) shows raman spectra of the graphene monolayers shown in figs. [fig:water_vs_mixture](a) and (b), where for they were obtained from defect-free regions. the spectra show that, for both cases, (i) each raman peak occurs at the same location (d: , 2d: , g: ), (ii) the height of the d peaks is negligible, and (iii) the 2d/g peak ratios are larger than 2.7, confirming that the transfer-printed graphene is indeed a monolayer. this result indicates that the significantly larger values of for , in comparison to that for , are due not to the properties of graphene in defect-free regions, but to large-scale defects as shown in fig. [fig:water_vs_mixture](a), which have been prevented by decreasing the surface tension in the case of . figure [fig:water_vs_mixture](d) shows the optical transmission spectra of and transfer-printed on a 0.7-mm-thick glass substrate, averaged over five samples for each case. transmittance ( ), plotted on the -axis, is the intensity of the optical beam transmitted through a glass/graphene sample normalized to that transmitted through a glass substrate. the size of the optical beam at the sample location was approximately by .
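for reference, the van der pauw measurement mentioned above reduces to solving one transcendental equation for the sheet resistance. the short sketch below, with made-up resistance readings, only illustrates that step and is not the authors' analysis code.

```python
# sketch: recover the sheet resistance rs from two four-terminal van der pauw resistances
# r_a and r_b by solving exp(-pi*r_a/rs) + exp(-pi*r_b/rs) = 1. input values are hypothetical.
import math
from scipy.optimize import brentq

def sheet_resistance(r_a, r_b):
    f = lambda rs: math.exp(-math.pi * r_a / rs) + math.exp(-math.pi * r_b / rs) - 1.0
    lower = 1e-3 * max(r_a, r_b)                                   # f < 0 here
    upper = 10.0 * math.pi * (r_a + r_b) / (2.0 * math.log(2.0))   # f > 0 here
    return brentq(f, lower, upper)

# hypothetical four-terminal readings in ohms; the result is the sheet resistance in ohm/sq
print(sheet_resistance(160.0, 150.0))
```

for nearly identical readings the answer is close to pi times their average divided by ln 2, which is a quick sanity check on the solver.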
for both and , the values of are consistent with what was previously measured for a graphene monolayer on a quartz substrate. the value of for is slightly higher than that for , primarily because the absence of graphene in the defects in allows more light to be transmitted. under this hypothesis, the ratio of the total area of the defects to the entire area of the graphene sheet ( ) can be estimated as , where and are the transmittances of and , respectively. the value of calculated at each wavelength in fig. [fig:water_vs_mixture](d) ranges from to , which is consistent with our estimation based on optical microscope images. as expected, successful transfer-printing of graphene requires a defect-free graphene monolayer that is globally conformal to a pdms stamp. our proposed technique achieves this with the water ethanol mixture, which provides a continuous lubrication layer, and with an au support layer, which allows for its removal without swelling pdms. alternatively, one may attempt to obtain a defect-free graphene monolayer on a pdms stamp by pressing the stamp onto a graphene layer grown on cu foil and then etching away the cu foil by floating the cu/graphene/pdms on a bath of a cu etchant. since the surface of cu foil commonly used in cvd growth of graphene typically has corrugations on the micron scale, the pdms attached to the graphene in this case is in contact with the graphene only partially. as a result, subsequent processes such as n blow-dry and transfer-printing tend to cause defects in the graphene layer, as shown in supplementary fig. . in fact, it was previously reported that of a transfer-printed graphene monolayer obtained by this approach was , even with a self-release layer inserted for reliable graphene transfer. to fabricate practical electronic devices where graphene is used as active layers or electrodes, the patterning of graphene is required. our technique, described in fig. [fig:process_schematic], can achieve this with a simple modification: the process begins with patterned graphene on cu foil in step (a), instead of an unpatterned graphene layer. in our current demonstration, we first prepared a patterned graphene monolayer on cu foil by etching unpatterned graphene grown on cu foil by o rie, using a photoresist etch mask patterned by photolithography. next, the patterned graphene was transfer-printed on a si/sio substrate coated with moo or pedot:pss, both of which are susceptible to degradation when exposed to an aqueous condition or an aggressive patterning process. figures [fig:patterning](a) and (b) are optical micrographs of the substrates, where patterned graphene monolayers were transfer-printed in the regions indicated by the arrows, showing that the patterns defined on the photomasks were replicated in the transfer-printed graphene monolayers. the widths of the smallest features, lines in fig. [fig:patterning](a) and arcs in fig. [fig:patterning](b), are and , respectively, which are identical, within the resolution of the optical imaging system used ( ), to those of the corresponding features on the photomask. a closer observation of the pattern edge using a field emission scanning electron microscope (fe-sem) revealed that it is not straight on the nanoscale, with an "edge resolution" of , which is probably attributed to the edge resolution of the photomask patterns and/or limitations of the photolithography process (supplementary fig. s7).
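the defect-area estimate made from the transmittance spectra discussed above can be written down explicitly under one assumption: that the defect regions transmit like the bare substrate. since the inline expression was lost from this copy, the relation and the numbers below are our reconstruction and illustrative values, not figures from the paper.

```python
def defect_area_fraction(t_defective, t_intact):
    """if defects transmit like the bare substrate (t = 1 after normalisation) and intact
    graphene transmits t_intact, the measured transmittance of the defective film is
    t_defective = x * 1 + (1 - x) * t_intact, so x = (t_defective - t_intact) / (1 - t_intact)."""
    return (t_defective - t_intact) / (1.0 - t_intact)

# hypothetical transmittances at a single wavelength, both normalised to the bare glass
print(defect_area_fraction(t_defective=0.978, t_intact=0.977))   # about 0.04 here
```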
from this, together with the fact that previously demonstrated transfer-printing-based patterning techniques can create patterns whose size is well below 100 nm, we expect that our technique is capable of creating sub-micrometer graphene patterns if a nanopatterning process, for example electron-beam, nanoimprint, or nanosphere lithography, is employed instead of photolithography. raman spectra obtained from the graphene transfer-printed on the moo show the distinct g and 2d peaks, with a 2d/g intensity ratio of 2.5, and a negligible d peak, suggesting that the quality of the graphene is comparable to that in fig. [fig:water_vs_mixture](b). for the case of the graphene transfer-printed onto the pedot:pss, the peaks associated with graphene, except the 2d peak, cannot be identified due to the overlap with the raman spectra of pedot:pss. next, we observed the surface of the transfer-printed graphene on a si/sio substrate, using a fe-sem and an atomic force microscope (afm). as shown in fig. [fig:sem_afm](a), irregularly shaped dark patches, as enclosed by a white circle, are randomly distributed throughout the surface. also shown are dark lines, as marked by the white arrow. these two features, patches and lines, are commonly found in transferred graphene cvd-grown on cu foil, with the former and the latter attributed to graphene multilayers and wrinkles, respectively, and hence are not caused by our transfer technique. it is also shown that in the patches there are darker spots with diameters of approximately . the surface profile measured using an afm along the white dotted line in fig. [fig:sem_afm](c) shows that the spots have heights as high as approximately [fig. [fig:sem_afm](d)]. to identify the origin of the dark spots, elemental analysis was carried out using a scanning transmission electron microscope capable of energy dispersive x-ray spectroscopy (stem-eds). in order to prepare a sample for this analysis, a graphene layer was transfer-printed from a pdms stamp onto a si/sio substrate coated with a pedot:pss separation layer, and then transferred onto a lacey carbon tem grid using the conventional wet-transfer method (see the method section for the experimental details). an eds spectrum obtained from a region shown in supplementary fig. s8(a) shows that, in addition to carbon, silicon atoms are present on the graphene surface [supplementary fig. ]. given many previous reports showing that uncured siloxane oligomers were present on pdms surfaces, it is highly likely that the dark spots on the graphene surface are siloxane residues that have been transferred from the pdms stamp. this speculation was further supported by the fact that the dark spots can be eliminated by annealing the sample at under h and ar, as shown in fig. [fig:sem_afm](b). the afm measurements
[fig. [fig:sem_afm](c) and (d)] show that the surface in the background, that is, regions away from the patches and lines, is much rougher than that of a clean graphene surface, suggesting that the oligomer residues are also present throughout the surface, not only on the multilayer regions. the morphological and elemental characterizations of the surface of transfer-printed graphene discussed above help determine the range of application of our technique in its current form. since the oligomer residues are likely to be present only on the top surface, that is, the graphene surface that used to be in contact with the pdms, our technique can be applied to fabrication of (i) devices where only the bottom surface of the graphene electrode is involved in injection or collection of charge carriers, such as leds and solar cells, made of organic semiconductors or organometal trihalide perovskite compounds, with top graphene electrodes, and (ii) devices whose graphene electrodes are used to establish electric fields without charge carrier transport, such as thin-film transistors with graphene gate electrodes and terahertz wave modulators. meanwhile, when charge carrier injection or collection occurs on both sides of the graphene layer, such as in tandem leds and solar cells where it is part of the interlayers, our technique is not applicable. therefore, expanding the range of application of our technique by eliminating the oligomer contamination, possibly with the following modifications, is important future work: replacing pdms with another stamp material that can be completely cured; or depositing a blocking layer on the pdms surface to prevent possible transfer of uncured oligomers onto the graphene surface, with a potential candidate being a pressure sensitive adhesive layer. in summary, we have developed a low-temperature, dry process capable of transfer-printing a patterned graphene monolayer grown on cu foil onto a target substrate. two features distinct from the conventional wet-transfer method, the use of a support layer composed of au instead of pmma, and the decrease in surface tension of the liquid bath on which a graphene au bilayer floats, allow one to obtain a graphene monolayer on a pdms stamp without defects that would otherwise arise. subsequently, the graphene is transfer-printed from the stamp onto a target substrate. the characteristics of a graphene monolayer transfer-printed using our technique are comparable to those obtained with the conventional wet-transfer method, with a sheet resistance as low as and an optical transmittance of at . in addition, with pre-transfer patterning of graphene on cu foil using conventional patterning processes, our technique is capable of creating graphene monolayer patterns on materials that are easily degraded when exposed to high-temperature processes, organic solvents, or aqueous chemicals. as an example, using photolithography followed by reactive-ion etch to pattern graphene monolayers on cu foil and then transfer-printing them, we have obtained graphene monolayer patterns on moo and pedot:pss, with the smallest feature size and edge resolution of and , respectively.
immediate application areas of this technique include organic electronic devices whose top electrodes are composed of graphene. moreover, by eliminating siloxane oligomer residues on graphene using an alternative stamp material, the technique can be further applied to devices whose graphene electrodes are in their interiors, such as tandem leds and solar cells. finally, with appropriate modification, it may also be applied to dry transfer of other two-dimensional materials, including boron nitride and molybdenum disulfide. a graphene monolayer on cu foil was grown in a cvd system consisting of a tubular quartz reactor and a furnace. experimental details described in ref. were closely followed, except for the following: cu foil was annealed under a 5 sccm flow of h at , and during growth, the reactor was filled with a mixture of ch and h at a total pressure of , whose flow rates were 35 and , respectively. low-temperature, dry transfer of graphene monolayers was carried out by following the processes described in fig. [fig:process_schematic]. to form a support layer on a graphene monolayer on cu foil, a 200-nm-thick au layer was deposited by thermal evaporation in high vacuum ( , ) (b). to etch away the cu foil, the cu/graphene/au multilayer was floated on an ammonium persulfate solution, prepared by dissolving of ammonium persulfate (sigma aldrich) in of water (c). after the etch was completed, the graphene au bilayer was scooped with a glass slide, and then transferred onto a bath of water to remove residual ammonium persulfate. next, the graphene au bilayer was moved onto a bath composed of an ethanol water mixture (70 vol% ethanol and 30 vol% water), from which the bilayer was scooped by and transferred onto a pdms stamp (d). the sample was then blow-dried using a n gun (e), and was further dried on a hot plate at for more than . the au support layer was etched using an ammonium iodide solution (lae-202, cowon innotech.) (f), after which the pdms/graphene sample was rinsed with water. after water droplets on the sample were blown away using a n gun, the graphene-coated pdms stamp was gently pressed onto a target substrate, inducing intimate contact throughout the substrate area (g). before separation of the stamp from the substrate, the sample was stored at room temperature for under a pressure of , and then placed on a hot plate at for without application of pressure. finally, the stamp was carefully peeled off from the substrate (h), resulting in the transfer-printed graphene monolayer on the target substrate. in this process, a graphene monolayer on cu foil was first patterned using conventional photolithography and reactive-ion etch, as described in supplementary fig. s6. a 1.5- -thick photoresist (az gxr-601, ) was spin-coated on a cu/graphene sample, and then patterned by photolithography. the patterned graphene on cu foil was obtained when the graphene in the areas not covered by the photoresist was etched by reactive-ion etch in o ( , , , ). performing the processes described in fig. [fig:process_schematic] with this sample, rather than unpatterned graphene on cu foil, we transfer-printed a patterned graphene monolayer on a target substrate coated with a 75-nm-thick pedot:pss or a 20-nm moo layer. the target substrate was a 500- -thick si substrate pre-coated with a 285-nm-thick thermal sio layer, and the pedot:pss (heraeus) and moo (lts chemical inc.
)layers were deposited by spin - coating ( ) and thermal evaporation in high vacuum ( , ) , respectively .samples for the elemental analysis were prepared following the processes described in supplementary fig . s9 .after a graphene monolayer was transfer - printed from a pdms stamp onto a pepot : pss layer using our transfer method ( a ) , a layer of pmma was deposited on the graphene layer by spin coating at for ( b ) .the pmma solution was prepared by dissolving pmma ( , sigma aldrich ) into chlorobenzene ( , sigma aldrich ) .the sample was then immersed into a water bath , separating the pmma graphene bilayer from the si / sio substrate as the pedot : pss layer was dissolved ( c ) .next , the bilayer was transferred to another water bath and kept floating on it for more than to ensure that pedot : pss remaining on the graphene surface was removed .then , the bilayer was scooped with a lacey carbon tem grid ( ted pella , inc . ) ( d ) , after which the grid was placed on a hot plate at for more than .finally , the pmma layer was removed by acetone ( e ) , resulting in the graphene monolayer on the tem grid ( f ) .the surface morphology of transfer - printed graphene layers was characterized using a fe - sem ( jsm-6700f , jeol ) and an afm ( dimension edge , bruker ) .the sheet resistance was measured using a source meter ( 2400 , keithley ) and a multimeter ( 34410a , agilent ) .the raman spectra were obtained using a confocal raman microscope ( invia , renishaw ) with an excitation wavelength of emitted from an ar laser .an ultraviolet visible spectrophotometer ( lambda 35 , perkin elmer ) was used to measure the optical transmittance spectra .the elemental analysis was carried out using a stem equipped with an eds ( jem-2100f , jeol ) .this work was supported by the global frontier r&d program on the center for multiscale energy system ( grant no .2011 - 0031561 ) and the center for thz - bio application systems ( grant no .2009 - 0083512 ) , both by the national research foundation under the ministry of science , ict , and future planning , korea .m.c . and c.k .conceived the main idea of the transfer - printing process , and designed the experiments ; s.c . performed cvd - growth of graphene used in all experiments , prepared and characterized graphene patterns on moo and pedot : pss , and performed afm and sem measurements , and elemental characterization ; s.c . and m.c. carried out the remaining experiments , except the acquisition of the raman spectra , which was performed by j.h.k . ;, m.c . , and c.k .analyzed the data ; c.k , s.c . 
and s.l. wrote the manuscript. the authors declare no competing financial interests.

references (by doi): 10.1126/science.1102896; 10.1103/revmodphys.81.109; 10.1103/physrevb.76.064120; 10.1126/science.1157996; 10.1126/science.1156965; 10.1038/srep05380; 10.1063/1.3644496; 10.1002/adma.201003673; 10.1021/nl2029859; 10.1002/adma.200800150; 10.1016/j.orgel.2011.06.021; 10.1038/nnano.2010.132; 10.1038/ncomms1787; 10.1021/nl4041274; 10.1038/ncomms8082; 10.1126/science.1220527; 10.1103/physrevx.2.011002; 10.1126/science.1171245; 10.1002/adma.201004099; 10.1063/1.4795332; 10.1038/nature02498; 10.1021/jz4020162; 10.1039/c2nr31317k; 10.1021/nl204123h; 10.1038/nnano.2013.63; 10.1002/adma.201400773; 10.1021/am200729k; 10.1021/nl902623y; 10.1146/annurev.matsci.28.1.153; 10.1021/ac0346712; 10.1016/j.matlet.2011.02.057; 10.1108/03699429810246962; 537.723.1:53.081.7+538.632:083.9; 10.1103/physrevlett.97.187401; 10.1063/1.1481980; 10.1021/nl2020697; 10.1021/nl100750v; 10.1002/adma.201003847; 10.1021/la026558x; 10.1002/adma.200803000; 10.1063/1.3643444; 10.1021/acs.nanolett.5b00110; 10.1002/adma.201501145; 10.1021/nl202983x; 10.1039/c4nr06991a; 10.1021/acs.nanolett.5b00440; 10.1038/ncomms7519; 10.1021/nl204562j.

[figure caption: comparison of the two scooping baths. (a) when the graphene–au bilayer is scooped from pure water and blow-dried with an n2 gun, the bilayer has many wrinkles in which water is trapped (center); although mild annealing on a hot plate decreases the heights and the number of the wrinkles as it removes the residual water, the wrinkles cannot be completely eliminated (right). (b) in contrast, when the bilayer is scooped up from a mixture of water and ethanol, the mixture forms a continuous lubrication layer between the bilayer and the pdms stamp throughout the surface (left); as a result, n2 flow aimed at the center of the bilayer displaces the liquid outward, giving conformal contact between the bilayer and the pdms over almost the entire surface (center); the small number of wrinkles, with smaller heights than those in (a), can be removed by heat treatment on a hot plate (right).]
[figure caption: preparation of the sample for the elemental analysis. (a) a graphene monolayer transfer-printed from the pdms stamp onto a pedot:pss layer. (b) deposition of a pmma layer by spin coating. (c) lifting off the graphene–pmma bilayer by dissolving the pedot:pss layer in water. (d) transferring the graphene–pmma bilayer onto a lacey carbon grid. (e) removal of the pmma layer using acetone. (f) the graphene layer transfer-printed onto the target substrate is now placed on the lacey carbon grid for the elemental analysis.]
graphene has recently attracted much interest as a material for flexible , transparent electrodes or active layers in electronic and photonic devices . however , realization of such graphene - based devices is limited due to difficulties in obtaining patterned graphene monolayers on top of materials that are degraded when exposed to a high - temperature or wet process . we demonstrate a low - temperature , dry process capable of transfer - printing a patterned graphene monolayer grown on cu foil onto a target substrate using an elastomeric stamp . a challenge in realizing this is to obtain a high - quality graphene layer on a hydrophobic stamp made of poly(dimethylsiloxane ) , which is overcome by introducing two crucial modifications to the conventional wet - transfer method the use of a support layer composed of au and the decrease in surface tension of the liquid bath . using this technique , patterns of a graphene monolayer were transfer - printed on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate and moo , both of which are easily degraded when exposed to an aqueous or aggressive patterning process . we discuss the range of application of this technique , which is currently limited by oligomer contaminants , and possible means to expand it by eliminating the contamination problem .
since the watson - crick base - pairing rules of double - strand dna was established , template - directed dna replication became an important issue both in basic researches and application studies ( e.g. , polymerase chain reaction ) in biology .the match between the incoming nucleotide dntp and the template ( i.e. , the canonical watson - crick base pairing a - t and g - c ) in the replication process plays a central role for any organism to maintain its genome stability , whereas mismatch ( non - canonical base pairing like a - c ) may introduce harmful genetic variations into the genome , and thus the error rate of replication must be kept very low . in living cells ,the replication fidelity is controlled mainly by dna polymerase ( dnap) which catalyzes the template - directed dna synthesis , and the fidelity of dnap has been intensively studied since its discovery in 1950s. .pioneering theoretical studies on this issue were done by j.hopfield and j.ninio . regarding dna replicationapproximately as a binary copolymerization process of matched nucleotides ( denoted as a for convenience in the present paper ) and mismatched nucleotides ( denoted as b ) , they proposed independently the so - called kinetic proofreading mechanism which correctly points out that the replication fidelity is not determined thermodynamically by the free energy difference , but kinetically by the incorporation rate difference , between the match and the mismatch .this model , however , assumed that the proofreading occurs before nucleotide incorporation is accomplished ( as illustrated in fig .[ scheme_compare](a1 ) ) , which is not the case of real dnaps .structural and functional studies show that dnap often has two parts .the basic part of all dnaps is a synthetic domain ( i.e. , polymerase ) which binds the incoming dntp and catalyzes its incorporation into the nascent ssdna strand ( called as primer below for convenience ) .proofreading is performed by a second domain ( i.e , exonuclease ) which is not a necessary part of dnap .this domain may much likely excise the just - incorporated mismatched nucleotide , once the mismatched terminus is transferred from the polymerase site into the exonuclease site by thermal fluctuation .the first model that explicitly invokes the exonuclease , referred to as galas - branscomb model ( fig .[ scheme_compare](b1 ) ) , was proposed by galas et al . and revisited by many other groups .many experimental studies gave consistent results to this model .recently , improved experimental techniques revealed more details of the synthesizing and proofreading processes , and several detailed kinetic models have been proposed .however , all these models are based on the original simple galas - branscomb model and many important details such as higher - order neighbor effects of the primer terminus are not considered systematically ( see later sections ) . in particular , recent experimental works on phi29 dna polymerase revealed more details about the working mechanism of dnap , highlighting the importance of the forward and backward translocation steps which were absent from the galas - branscomb models . considering this point , as well as many other structural and kinetic experimental results , we propose a comprehensive reaction scheme of dnaps as shown in fig .[ scheme_full ] . or respectively throughout this paper . denotes either or .the superscript or means that the primer terminus is in the polymerase ( i.e. , synthetic ) site or the exonuclease site , respectively . 
denotes the state dntp binding to dnap before dnap undergoes further conformation change .( a1 ) the original kinetic proofreading mechanism . represents one or more high - energy intermediate states which dissociate much faster for than for .this proofreading occurs before the nucleotide is covalently incorporated into the primer .( b1 ) brief sketch of the galas - branscomb model .( c1 ) an alternative exonuclease proofreading mechanism proposed in this paper . considering only the exact calculation of the fidelity, one can simplify these schemes under steady - state conditions , i.e. , ( a1 ) can be simplified as minimal scheme ( a2 ) , ( b1 ) simplified as ( b2 ) , and ( c1 ) simplified as ( c2). there are several key features of this scheme .first , the template - primer duplex binds to dnap and forms two types of complexes . in the ` polymerase type ', the 3 terminus of the primer is located at the polymerase site . in the ` exonuclease type ' , the primer terminus is unzipped from the duplex and transferred to the exonuclease site . for the ` polymerase type ' complexes , two substateswere experimentally observed .one is the pre - translocation state of dnap in which the dntp binding site is occupied by the primer terminus .the other is the post - translocation state in which the dnap translocates forward ( relative to the template ) to expose the binding site to the next dntp .dnap can rapidly switch between these two states .correspondingly , one can assume two substates of dnap in the ` exonuclease type ' complexes , though there are not sufficient experimental evidences .one is the pre - translocation state in which the exonuclease site is occupied when the primer terminus is transferred from the polymerase site .the other is the post - translocation state in which the exonuclease site is exposed after the nucleotide excision while the newly - formed primer terminus does not return to the polymerase site .second , once the incoming dntp is incorporated into the primer , the dnap can either translocate forward to the post - translocation state and bind a new dntp in the polymerase site , or it pauses at the pre - translocation state and the primer terminus is unzipped from the duplex and transferred to the exonuclease site ( the terminus can switch between the two sites without being excised ). the large distance about between the two sites implies that more than one nucleotides of the primer terminus must be unzipped , and thus the stability of the entire terminal region may put an impact on the unzipping probability of the primer terminus .such neighbor effects , as well as other types of neighbor effects , can be very significant for the replication fidelity and should be taken account of in the kinetic models ( details see later sections ) . : dna polymerase ; : the state of the primer , being the length of the primer ; : dntp . and : the ` polymerase type ' complex when dnap is in the pre - translocation state and post - translocation state , respectively . and : the ` exonuclease type ' complex when dnap is in the pre - translocation state and post - translocation state , respectively. a free dntp can bind to dnap when the complex is at the post - translocation state .when the dntp is incorporated into the primer , the complex will return to the pre - translocation state .the primer terminus may be unzipped from the duplex and transferred to the exonuclease site .model i and model ii indicate two possible pathways after the nucleotide excision in the exonuclease site . 
]third , the exonuclease site can only excise the terminal nucleotide .what happens after the cleavage is not clear yet .here we propose two possible pathways , which are denoted as model i and model ii in fig .[ scheme_full ] . in modeli , dnap undergoes a backward translocation and the primer terminus can either be excised processively , or be transferred back to the polymerase site ( at the pre - translocation state ) . in model ii , the primer terminus is directly transferred back to the polymerase site ( at the post - translocation state ) .[ scheme_full ] can be further simplified as fig .[ scheme_simple ] , considering that the addition of dntp in the polymerase site is almost irreversible ( i.e. , the product ppi of the polymerization reaction is often released irreversibly under physiological conditions ) .one can also reasonably assume that the translocation of dnap in ` polymerase type ' complex is in a rapid equilibrium .in biochemical experimental studies such as steady - state kinetic assays , the translocation can not be observed ( for comparison , the subsequent dntp binding can be clearly observed ) . in other words ,the two substates can not be identified individually , indicating there exists a rapid equilibrium between them .thus one does not need to distinguish between the pre - translocation and the post - translocation states .under such an approximation , model ii can be reduced to the galas - branscomb model as shown in fig .[ scheme_compare](b1 ) , while model i is reduced to fig .[ scheme_compare](c1 ) .although model ii were widely accepted , there is no direct experimental evidence to exclude model i. moreover , it has been found that the ssdna binding to the exonuclease site can be processively excised , indicating that more than one nucleotide bind to the exonuclease site ( e.g. , three nucleotides bind to the exonuclease site for polymerase i kf ) and removing the terminal nucleotide may trigger backward translocation of dnap for the subsequent excisions .so we will discuss both models in this paper , but put a focus on model i due to the following technique consideration .kinetic proofreading models like fig .[ scheme_compare](a1 ) or ( a2 ) are irreversible reactions , so the corresponding kinetic equations are always closed ( i.e. , of finite number ) and can be rigorously calculated .the galas - branscomb models like fig .[ scheme_compare](b1 ) or ( b2 ) , however , are seemingly reversible , and the corresponding kinetic equations are always unclosed and hierarchically coupled , which is hard to solve .fortunately , a general rigorous treatment for such problems has been established recently by us , and this method can be directly applied to model ii ( some important results are given in appendix [ app : m2 ] ) . for modeli like fig .[ scheme_compare](c1 ) or ( c2 ) , however , the above methods are inapplicable and new method should be developed , which will be a focus of later sections ., ( or , ) : pre - translocation ( or post - translocation ) state of dnap when the primer terminus is in the synthetic(s ) site or the exonuclease(e ) site respectively .when the primer terminus is in the exonuclease site , one does not need to distinguish between .however , it s still convenient to use to denote the immediate state when the terminus switches back to the polymerase site . by setting all the excision rates equal to , we obtain the models for real dnaps . 
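for what follows it helps to state the quantity being computed. in ad hoc notation (the paper's own symbols are not preserved in this extract), the replication fidelity discussed below is the ratio of the steady-state effective incorporation rates of matched (a) and mismatched (b) nucleotides:

```latex
% ad hoc notation: J_A and J_B are the steady-state net incorporation fluxes of matches and mismatches
\varphi \;=\; \frac{J_{A}}{J_{B}}
        \;=\; \frac{k^{\mathrm{eff}}_{A}\,[A]}{k^{\mathrm{eff}}_{B}\,[B]}
        \;\xrightarrow{\;[A]=[B]\;}\; \frac{k^{\mathrm{eff}}_{A}}{k^{\mathrm{eff}}_{B}},
```

so the error rate is roughly the inverse of this ratio.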
under steady-state conditions, the effective rate of dntp addition can be expressed as a function of the kinetic rate constants of the scheme and of the concentration of the incoming dntp; to calculate the intrinsic fidelity, one often sets the concentrations of the matched and mismatched nucleotides equal, i.e. [a] = [b].
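as a concrete illustration of how a polymerase site and an exonuclease site combine into an effective incorporation rate, here is a minimal first-order sketch (no neighbour effects, a single excision per visit to the exonuclease site). it is not the model proposed in this paper: the branching structure, the function names and all rate values below are illustrative assumptions. the effective rate of a nucleotide is taken to be its polymerization rate multiplied by the probability that it is eventually buried by the next addition rather than excised, and the fidelity is the ratio of these effective rates at equal concentrations.

```python
# minimal first-order polymerase/exonuclease sketch; all rates are made-up, illustrative numbers
def p_fixed(k_next, k_to_exo, k_exo, k_back):
    """probability that a just-incorporated terminal nucleotide is never excised.

    from the polymerase site the terminus is either buried by the next addition
    (rate k_next) or transferred to the exonuclease site (rate k_to_exo); once
    there it is excised (rate k_exo) or transferred back (rate k_back), and the
    cycle may repeat.  summing the geometric series over repeated visits gives:
    """
    q_next = k_next / (k_next + k_to_exo)     # buried before ever visiting the exo site
    q_exo = 1.0 - q_next                      # visits the exo site first
    p_excise = k_exo / (k_exo + k_back)       # excised during a single visit
    return q_next / (1.0 - q_exo * (1.0 - p_excise))

def fidelity(match, mismatch):
    """ratio of effective incorporation rates at equal dNTP concentrations."""
    k_eff = lambda r: r["k_pol"] * p_fixed(r["k_next"], r["k_to_exo"], r["k_exo"], r["k_back"])
    return k_eff(match) / k_eff(mismatch)

if __name__ == "__main__":
    match = dict(k_pol=100.0, k_next=100.0, k_to_exo=0.2, k_exo=1.0, k_back=5.0)
    mismatch = dict(k_pol=0.01, k_next=0.5, k_to_exo=20.0, k_exo=1.0, k_back=5.0)
    print("polymerase-site discrimination alone :", match["k_pol"] / mismatch["k_pol"])
    print("fidelity with proofreading           :", round(fidelity(match, mismatch)))
```

with these made-up numbers the proofreading pathway multiplies the polymerase-site discrimination by an additional factor of roughly eight; the full model of the paper quantifies this kind of amplification with the higher-order terminal (neighbour) effects included.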
the fidelity of dna replication by dna polymerase (dnap) has long been an important issue in biology. while numerous experiments have revealed details of the molecular structure and working mechanism of dnap, which consists of both a polymerase site and an exonuclease (proofreading) site, there have been few theoretical studies of the fidelity issue. the first model which explicitly considered both sites was proposed in the 1970s, and its basic idea was widely adopted by later models. however, none of these models systematically and rigorously investigated the dominant factor in dnap fidelity, i.e., the higher-order terminal effects through which the polymerization pathway and the proofreading pathway coordinate to achieve high fidelity. in this paper, we propose a new and comprehensive kinetic model of dnap based on some recent experimental observations, which includes previous models as special cases. we present a rigorous and unified treatment of the corresponding steady-state kinetic equations with terminal effects of any order, and derive analytical expressions for the fidelity in terms of kinetic parameters under biologically relevant conditions. these expressions offer new insights into how the higher-order terminal effects contribute substantially to the fidelity in an order-by-order way, and also show that the polymerization-and-proofreading mechanism is dominated by only a very few key parameters. we then apply these results to calculate the fidelity of some real dnaps, and the results are in good agreement with previous intuitive estimates given by experimentalists.
theory of self - gravitating rotating bodies seems to be an unlimited reservoir of difficult problems hardly tractable even under severe simplifications .it is a subject of scientific effort since 1742 when maclaurin has initiated this field by his studies on incompressible rotating ellipsoids .development of modern numerical calculations resulted in progress in practical applications nowadays such as e.g. 3d hydrodynamical simulations of rotation of complex objects .analytical approach has succeeded for constant density , incompressible bodies .work of maclaurin , jacobi , poincare , schwarzschild and many others has explained the behaviour of those objects almost completely .behaviour of slowly rotating polytropes has been calculated by . by applying the differential equation of hydrostatical equilibrium modified by rotation he reduced the problem to an ordinary differential equation .this method however works only for a uniform rotation. this list would be incomplete without the roche model .it s simplicity makes it a very powerful tool for understanding behaviour of rotating objects .present computational methods allow one to handled numerically two- and three - dimensional problems with complicated governing equations in this paper we present simple analytical approach which can treat differentially rotating compressible barotropic stars in case of slow or moderately fast rotation . this model could fill a gap between simple analytical methods used for e.g. maclaurin spheroids or roche model , and complicated numerical methods such as e.g. hscf , or those applying straightforward newton - raphson technique .we attempt to find a density distribution ( iso - density contours ) of a single self - gravitating object under the following assumptions : 1 .barotropic eos 2 . simple rotation with angular velocity dependent only on the distance from rotation axis 3. newtonian gravity 4 .axisymmetric density distribution 5 .we seek solutions for stationary objects in full mechanical equilibrium , i.e. all quantities are time - independent with properties ( i)(v ) satisfied , the euler equation becomes , in cylindrical coordinates ( , , ) : continuity equation is then fulfilled automatically . introducing centrifugal potential : and enthalpy : get a simple equation : = 0\ ] ] with a solution equation ( [ main_eq ] ) is the most important equation in the study of the structure of rotating stars under conditions ( i)(v ) .we define the integration constant in eq .( [ enthalpy ] ) to be such that the enthalpy satisfies the condition .the only term which we havent specified yet is the gravitational potential .if we use the poisson equation : where is the gravitational constant , the equation ( [ main_eq ] ) becomes a non - linear second - order differential equation .this form , however , is very inconvenient , because we have to specify boundary conditions at a surface of the star , which is unknown _a priori_. 
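the displayed equations in the passage above are not preserved in this extract. the block below is a plausible reconstruction, consistent with the surrounding definitions (stationary euler equation for a barotrope in simple rotation, the centrifugal potential, the enthalpy, and the resulting first integral); the symbols Φ, Φ_c, H and the integration constant C are inferred from the prose, not copied from the paper.

```latex
% hedged reconstruction of the stripped formulas, in cylindrical coordinates (r, \varphi, z)
\frac{1}{\rho}\,\nabla P \;=\; -\nabla\Phi \;+\; \Omega^{2}(r)\,r\,\hat{e}_{r},
\qquad
\Phi_{c}(r) \;=\; \int_{0}^{r}\Omega^{2}(r')\,r'\,dr',
\qquad
H \;=\; \int\frac{dP}{\rho},
```

with which the euler equation reduces to

```latex
\nabla\!\left[\,H+\Phi-\Phi_{c}\,\right] \;=\; 0
\qquad\Longrightarrow\qquad
H+\Phi-\Phi_{c} \;=\; C .
```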
more powerful is an integral form of eq .( [ main_eq ] ) obtained by substitution : this integral form has been used in very successful numerical algorithm developed originally by , and recently improved by and by .this form will be also used to derive our approximation formula in the next section .the _ integral equation _ form of eq .( [ main_eq ] ) is : where is the integral operator acting on the density performing the integration on the right - hand side of eq .( [ int_potential ] ) over entire volume of the star .we define the surface of the star to be manifold consisting of points where , . explicit form of the operator in terms of coordinates will not be needed . for a given eos and for a fixed rotation lawi.e. for given functions and , the only free parameter is the constant .the values of label a family of the stellar models with the same eos and rotation pattern , which differ in total mass and maximum density etc .( [ int_eq ] ) has a form of the hammerstein non - linear integral equation and can be rewritten in a canonical form : \ ] ] where : in case of linear function , eq .( [ eq_canon ] ) could be easily solved by the von neumann series .this strongly suggests to try the following iteration scheme : , \\\nonumber f_2 & = \mathcal{r } [ f(f_1 ) ] , \\ & \cdots \\ \nonumber f_n & = \mathcal{r } [ f(f_{n-1 } ) ] \\ \nonumber & \cdots\end{aligned}\ ] ] indeed , an iteration procedure of this type was successfully applied in the so - called self - consistent field method ( , ) .we have introduced the canonical form to ensure that the first - order approximation is found in a correct order i.e. by using the first line of the sequence ( [ seq ] ) .when we go back to non - canonical form ( [ int_eq ] ) the first line of ( [ seq ] ) takes the form : from eq .( [ correct_order ] ) above we can find the first - order deviation from sphericity .it seems impossible at first sight to avoid explicit integration in eq .( [ correct_order ] ) . in case of a general , this is true . but let us look at equation ( [ int_eq ] ) in case of vanishing centrifugal potential , i.e. with no rotation : when we use a function which satisfies eq .( [ non_rot ] ) as zero - order approximation : integration in eq .( [ correct_order ] ) can be easily eliminated : finally , our formula takes the form : or simpler , using the enthalpy ( , ) : functions used as zero - order approximation ( or ) are simply density and enthalpy distributions of non - rotating barotropic stars . in case of polytropic eos , ,these quantities are given by lane - emden functions .in more general case we have to find solution of the ordinary differential equation of the hydrostatic equilibrium .the only unanswered question is what are the value of constants and gives and _ vice versa_. ] .it is an essential part of this work , so we decided to explain it in a separate section .when we try to find the enthalpy distribution using the formula ( [ formula ] ) we have to find the best zero - order function and the value of given by .an equivalent problem is to find the equation for which the function and the value are best zero - order approximations in this case we seek for .we consider the latter case , as we have to find only one real number .let us denote : in our approximation , the equation for the first - order enthalpy distribution , in terms of the initial spherical distribution and rotation law is , from ( [ formula ] ) : where is still to be determined .the terms in eq .( [ deltac ] ) behave as follows : 1 . 
` ' is spherical enthalpy distribution , thus it is only a function of the radius , has a maximum at the centre , and goes monotonically to zero , where usually is cut .however , from mathematical point of view , lane - emden functions extend beyond the first zero point with negative function values . ' is a monotonically increasing function of distance from the rotation axis .it starts with zero at the rotation axis .it does not change the enthalpy along the axis of symmetry .the strongest enthalpy increase takes place along the equatorial plane . ' shifts the sum of positive functions and down . at first sight , shifting down by seems not needed ( i.e. one would adopt ) , because we obtain correct qualitative behaviour the star is expanded along equator .but often it is enough to introduce slow rotation to get positive value of ( i.e. for our approximation to the enthalpy in this case ) for any i.e. equatorial radius becomes infinite .it leads directly to physically unacceptable results infinite volume and mass .so the value plays a non - trivial role and has to be found .[ terms ] shows the behaviour of all terms in eq .( [ deltac ] ) along the equator of the star , where the rotation acts most strongly .horizontal lines show points where enthalpy is cut for a given value of . ) , explaining the meaning of . the most general case is shown . in some particular cases and may not exist this depends on the both and .vertical dot - dashed line shows that , where is radius of non - rotating star .the dashed curve fragment below the axis reflects ambiguity of the lane - emden function continuation to negative values , as described in the text .this figure prepared with use of eq .( [ simple_case ] ) for .,height=317 ] we can distinguish some important values : 1 . for obtain infinite radius of a star .these values obviously have to be rejected .2 . for , where is the radius of a zero - order density distribution , we get finite volume of a star , but we use extension of with negative values .this introduces some problems which we discuss later in the article , although the resulting enthalpy and density are positive and physically acceptable .3 . for get a density distribution which is topologically equivalent to the ball .4 . for get toroidal density distribution .this case exists only if strong differential rotation is present .5 . for star disappears .we expect to find the solution in the range because we are looking for finite - volume non - toroidal stars .one can try to find both analytically and numerically . to keep the algebraic form and the simplicity of the formula, we now concentrate on the former method .when we substitute the formula ( [ deltac ] ) into our basic equation ( [ main_eq ] ) we get : in this formula we have made use of ( [ delta ] ) . after obvious simplifications , using ( [ zero_deg ] ) and denoting we have : this equality is true only if .the same holds for the enthalpy : using formula ( [ deltac ] ) again we finally obtain : left - hand side of eq .( [ deltac1 ] ) is constant , while the right - hand side is a function of distance from the rotation axis , monotonically decreasing from zero .this equality holds only in trivial case and with no rotation at all . 
in any other case ( [ deltac1 ] )can not be fulfilled .so instead we try another possibility and require that where ` hat ' denotes some mean value of the function .we have chosen integration is taken over the entire volume of a non - rotating initial star with the radius .this choice of gives good results .but using the mean value theorem : where is some value of in the integration area , and taking in account monotonicity of the centrifugal potential we get : i.e. the value of is in the range from fig .[ terms ] .it forces us to use negative values of non - rotating enthalpy . moreover , in case of polytropic eos with fractional polytropic index . ]lane - emden equation : has no real negative values , because of fractional power of negative term .but we can easily write equation , with solution identically equal to solution of lane - emden equation for , and real solution for e.g. : but , for example , solution of the following equation : is again identically equal to solution of lane - emden equation for , but differs from eq .( [ abs ] ) for .fortunately , difference between solution of eq .( [ abs ] ) ( fig .[ terms ] , below axis , dot - dashed ) and eq . ( [ sign ] ) ( fig . [ terms ] , solid ) for is small if .example from fig .[ terms ] ( for ) is representative for other values of .we will use form ( [ abs ] ) instead of the original lane - emden equation ( [ l - e ] ) for calculations in this article . to avoid problems with negative enthalpywe can put simply : which is strictly boundary value from fig .[ terms ] . the great advantage of the eq .( [ deltac3 ] ) is the possibility to analytically perform the integration of the centrifugal potential ( [ centrifugal ] ) for most often used forms of . in contrast , in formula ( [ meanc ] ) , not only angular velocity profile ( [ centrifugal ] ) , but also the centrifugal potential have to be analytically integrable function . in both cases ( [ meanc ] , [ deltac3 ] )however , possibility of analytical integration depends on the form of .the value of from eq .( [ deltac3 ] ) also gives reasonable iso - density contours , cf .[ v - const ] and [ j - const ] , but global accuracy is poor ( table [ a02jconstms ] ) .as we noticed , the best value of in formula ( [ deltac ] ) could be found numerically .for example , we can use virial theorem formula for rotating stars ( cf . ) : where and is the rotational kinetic energy and the gravitational energy , respectively .we define , so - called virial test parameter : where we introduced internal energy : parameter z is very common test of the global accuracy for rotating stars models .we may request that our enthalpy satisfy ( [ virial ] ) , i.e. we choose from equation : we can find from equation eq .( [ deltavt ] ) numerically only .vs for polytropic model ( [ n32_case ] ) with -const angular velocity profile for and . proper choose of can give density distribution satisfying virial theorem to arbitrary accuracy . from eq .( [ deltac3 ] ) is represented by circle .cross marks value given by formula ( [ meanc ] ) .virial theorem will be satisfied when we take , given by intersection of z with horizontal axis i.e. 
.,width=317 ] as it is shown on fig .[ vt_vs_deltac ] , we can find approximation of the rotating polytrope structure in form ( [ deltac ] ) satisfying virial theorem ( [ virial ] ) up to accuracy limited only by numerical precision .values of obtained with ( [ deltac3 ] ) , ( [ meanc ] ) and from virial test ( [ deltavt ] ) are compared on fig .[ 3xdelta ] .some of the global model properties are very sensitive to value of ( cf . figs [ vt_vs_deltac ] and [ etot_vs_j2 ] ) .because virial test is unable to check accuracy of our model , we may also try to compare directly eq .( [ deltac ] ) with enthalpy distribution from the numerical calculations of e.g. , and find minimizing e.g. the following formula : ^ 2 d^3\mathbf{r}= \mathrm{min}\ ] ] this method however , requires numerical results ( e.g. enthalpy distribution ) in machine - readable form .in the above sections , we tried to be as general as possible .now we give some examples , and test accuracy of approximation . in case of polytropic eos the enthalpyis : zero - order approximation of density ( density of non - rotating polytrope , ) with n - th ] lane - emden function is : ^n ,\qquad a^2 = \frac{4 \pi g}{n k \gamma } \rho_c^{\frac{n-1}{n}}\ ] ] and our formula for density becomes : ^n \label{polytropic_case}\ ] ] where is calculated from ( [ meanc ] ) , ( [ deltac3 ] ) or ( [ deltavt ] ) . in certain cases lane - emden functions are elementary functions as e.g. . in cases like this our formula may be expressed even by elementary functions .for example , for , , , , and from eq .( [ deltac3 ] ) we get a simple formula : functions like this can easily be visualized on a 2d plot .figure [ terms ] has been made from the formula ( [ simple_case ] ) while figures [ j - const ] and [ v - const ] from eq .( [ n32_case ] ) .now we concentrate on polytrope . in our calculations and figureswe will use , and .now formula ( [ polytropic_case ] ) becomes : iso - density contours of from ( [ n32_case ] ) are presented on fig .[ v - const ] and fig .[ j - const ] . to test accuracy of approximationwe have calculated axis ratio , total energy , kinetic to gravitational energy ratio , and dimensionless angular momentum .axis ratio is defined as usual as : where is distance from centre to pole and is equatorial radius .total energy : is normalized by : and dimensionless angular momentum is defined as : where and are total mass and angular momentum , respectively ; is maximum density .quantities ( [ axisratio])-([j_squared ] ) are computed numerically from ( [ n32_case ] ) , with given angular velocity and chosen .on , given by eq .( [ deltac3 ] ) ( dotted ) , calculated from ( [ meanc ] ) ( solid ) and given by virial theorem constrain ( [ deltavt ] ) ( dashed ) .both values estimated by ( [ meanc ] , [ deltavt ] ) are below i.e. 
from fig. [terms]. it shows that the continuation of the lane-emden function to negative values is required for a successful approximation of the structure of the rotating body. the density distribution was given by eq. ([n32_case]) with the -const angular velocity law.]

[table: columns include the axis ratio and the virial test parameter; the numerical entries were not preserved in this extract.]

[figure caption: ratio as a function of the square of the dimensionless angular momentum for our model ([n32_case]). the solid line represents the numerical results; crosses mark results using the constant from eq. ([meanc]); diamonds mark results satisfying the virial theorem (from eq. ([deltavt])). our formula is in good agreement with the numerical results for sufficiently weak rotation; as is apparent from the figure, the choice of the constant has no influence on this relation and cannot improve the accuracy of the formula ([formula]).]

[figure caption: total energy. results with the constant from eq. ([deltavt]) show a clear improvement, while the value from eq. ([meanc]) gives incorrect behaviour of the total energy. the solid line represents the numerical results; crosses and diamonds are results derived from our approximate formula ([n32_case]) with the constant taken from the virial test ([deltavt]) and from eq. ([meanc]), respectively. the approximation satisfying the virial equation gives results resembling the numerical calculations. the parameters of the model are given in the caption of fig. [twvsj2].]

we have made a detailed comparison of our model ([n32_case]) with the -const rotation law (middle row of fig. [j-const]) for different rotation strengths with the published numerical results (table 1b). table [a02jconstms] shows our results for the constant from eq. ([meanc]); the value from eq. ([deltac3]) and the corresponding virial test parameter are included for comparison. table [j-const02] shows the global properties of our approximation with the constant equal to the solution of eq. ([deltavt]), i.e. satisfying the virial theorem. a direct comparison of the values from table [a02jconstms] and table [j-const02] to table 1b of the numerical study is difficult, because our driving parameter is the central angular velocity, while the numerical study (following an earlier successful approach) uses the axis ratio ([axisratio]). a more convenient comparison in this case is between figures prepared from the data of table 1b and from our tables. this is especially true because the axis ratio is not well predicted by our formula (cf. fig. [axisratio_vs_tw] and fig. [axisratio_vs_j2]), while the global properties (cf. fig. [twvsj2], [etot_vs_j2]) and the virial test are in good agreement for moderate rotation. fig. [twvsj2] shows that our approximation is valid up to a limiting rotation strength and begins to diverge strongly from the numerical results beyond it. both values of the constant (from ([meanc]) and ([deltavt])) give similar behaviour here; however, the value from the virial test produces better results, and the values are more reasonable for the strongest rotation. in contrast, the total energy ([etot]) is very sensitive to the constant: the value from eq. ([meanc]) produces a wrong result, with the total energy beginning to increase at strong rotation while the numerical results decrease monotonically; using the value from ([deltavt]) instead gives the correct behaviour, cf. fig. [etot_vs_j2]. while the global properties of our model are in good agreement with the numerical results for moderate rotation, the axis ratio tends to be underestimated even for small rotation strengths. fig. [axisratio_vs_tw] and fig. [axisratio_vs_j2] show minor improvements when we use the constant from the virial test ([deltavt]) instead of the mean value ([meanc]).
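two of the formulas stripped from the preceding discussion can be usefully restated. first, the virial test: the standard virial identity for a stationary self-gravitating barotrope is 2T + W + 3∫P dV = 0, so a natural (reconstructed, not verbatim) form of the test parameter is

```latex
% reconstructed form of the virial test parameter; the paper's exact normalisation is not preserved here
z \;=\; \frac{\bigl|\,2T + W + 3(\gamma-1)\,U\,\bigr|}{|W|},
\qquad
U \;=\; \int \frac{P}{\gamma-1}\,dV .
```

second, the polytropic form of the approximation itself. a natural reading of the text — enthalpy additivity H(r,z) = H_0(s) + Φ_c(r) − Δc with s = (r²+z²)^{1/2}, and H = K(n+1)ρ^{1/n} for a polytrope — gives ρ(r,z) = ρ_c [θ(a s) + (Φ_c(r) − Δc)/H_c]^n. the sketch below evaluates this reconstructed form for n = 3/2 with an assumed j-const rotation law Ω(r) = Ω_c d²/(d²+r²) and the simple boundary choice Δc = Φ_c(r_0) mentioned in the text. every one of these identifications (the symbol Δc, inferred from the equation labels; the rotation-law form; the parameter values) is an assumption, and the output is illustrative only, not a reproduction of the paper's tables.

```python
# reconstructed first-order density of a rotating n = 3/2 polytrope (see the caveats above).
# units: G = K = rho_c = 1, so a^2 = 4*pi/(n+1), consistent with the visible fragment since n*gamma = n+1.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

n = 1.5
a = np.sqrt(4.0 * np.pi / (n + 1.0))       # scale of the lane-emden variable xi = a*s
H_c = n + 1.0                              # central enthalpy K*(n+1)*rho_c^(1/n)

# lane-emden function with the |theta|^n continuation past the first zero (cf. eq. [abs] in the text)
rhs = lambda x, y: [y[1], -2.0 * y[1] / x - abs(y[0]) ** n]
sol = solve_ivp(rhs, (1e-6, 5.0), [1.0, 0.0], dense_output=True, rtol=1e-9, atol=1e-12)
theta = lambda xi: float(sol.sol(min(max(xi, 1e-6), 5.0))[0])
xi1 = brentq(theta, 2.0, 5.0)              # first zero, ~3.654 for n = 3/2
r0 = xi1 / a                               # radius of the non-rotating star

# assumed j-const rotation law and its centrifugal potential (closed form of the integral)
Omega_c, d = 1.0, 0.5 * r0
phi_c = lambda r: 0.5 * Omega_c ** 2 * d ** 2 * r ** 2 / (d ** 2 + r ** 2)
delta_c = phi_c(r0)                        # the simple "boundary" choice discussed in the text

def density(r, z):
    """first-order density rho = [theta(a*s) + (phi_c(r) - delta_c)/H_c]^n, zero outside the star."""
    s = np.hypot(r, z)
    h = theta(a * s) + (phi_c(r) - delta_c) / H_c
    return max(h, 0.0) ** n

# polar radius (where the enthalpy vanishes on the rotation axis) and the resulting axis ratio
z_pole = brentq(lambda z: theta(a * z) - delta_c / H_c, 1e-6, r0)
print(f"equatorial radius ~ {r0:.3f}, polar radius ~ {z_pole:.3f}, axis ratio ~ {z_pole / r0:.3f}")
print(f"central density of the rotating model: rho(0,0) = {density(0.0, 0.0):.3f}")
```

the better choice of the constant used in the paper (the virial-test condition, eq. ([deltavt])) would replace the last assignment of delta_c by a one-dimensional root find on the virial residual; the simple boundary value is kept here only to keep the sketch short.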
[figure caption: we see that our formula underestimates the axis ratio; choosing the constant so that the virial theorem is satisfied improves the situation a bit. the solid line again is the numerical result.]

[figure caption: everything is the same as in fig. [axisratio_vs_tw]; the constant from eq. ([z]) gives a better approximation to the axis ratio than the formula ([meanc]).]

this subsection clearly shows the importance of the value of the constant. the best results are produced with the value from eq. ([deltavt]); therefore this value will be used in the next subsections to investigate the influence of the differential rotation parameter and of the type of rotation law on the accuracy of the formula.

[table: columns include the axis ratio and the virial test parameter; the numerical entries were not preserved in this extract.]

in addition to the results from the previous subsection (the -const law with a moderate differential rotation parameter), we have calculated the properties of an almost rigidly rotating and of an extremely differentially rotating model with the same rotation law. in all three cases we are able to find a value of the constant satisfying eq. ([deltavt]). however, this is not enough to find the correct solution, because other parameters describing the rotating body may be wrong. this is clearly shown in fig. [differ], where the energy ratio versus ([j_squared]) is plotted for the three cases of differential rotation. an apparent discrepancy exists for the almost rigid case: both the -const and -const angular velocity profiles behave as rigid rotation in this limit, and thus we conclude that our formula is unable to predict the correct structure in the case of uniform rotation, even if the rotation is small. if the rotation is concentrated near the rotation axis, as in the extremely differential case, our results and the numerical results are of the same order of magnitude, but quantitative agreement is achieved only for very small rotation strengths. let us note that in this case the constant required by the virial theorem ([deltavt]) is slightly below zero (table [j-const002]); this example shows that the constant may also be negative. all three cases are summarized in fig. [differ].

[table: columns include the axis ratio and the virial test parameter; the numerical entries were not preserved in this extract.]

[table: columns include the axis ratio and the virial test parameter; the numerical entries were not preserved in this extract.]

[figure caption: the energy ratio versus the square of the dimensionless angular momentum for the -const angular velocity law, for three values of the differential rotation parameter. quantitative agreement between our formula (symbols) and the numerical results (lines) is achieved for the intermediate case, provided the rotation is not too strong; this case is presented as the solid line and crosses. for the extremely differential case the results are of the same order, but they coincide only when the rotation strength is very small; this case is presented by the dashed line and diamonds. the formula fails (dotted line and circles) in the case of rigid rotation.]

the results from this section show that our formula is able to find the correct structure of a rotating body only for differential rotation. the range of application varies with the differential rotation parameters, and the best results are obtained in the middle range; for the extreme case the quality of our results is significantly degraded.
in the next subsection we examine whether this statement depends on the rotation law. in addition to the previously described cases, we have calculated the global properties of our model for the -const angular velocity profile, with a moderate differential rotation parameter (table [v-const02]) and a small one (table [v-const002]). results for the nearly rigid case are not presented, because they are similar to the corresponding -const case (cf. table [j-const2]), where both rotation laws behave as uniform rotation and our formula fails. figures [v-const_1] and [v-const_2] show very good agreement of the global physical quantities with the numerical results over the entire range of rotation strength covered by both methods; even the most extreme case behaves well. the axis ratio (fig. [v-const_3]), however, clearly distinguishes between the approximation and the precise solution: the results are quantitatively correct only for small rotation parameters.

[table: columns include the axis ratio and the virial test parameter; the numerical entries were not preserved in this extract.]

[figure caption: results for the -const rotation law for two values of the differential rotation parameter (dashed line and crosses; solid line and diamonds), where the lines refer to the numerical results and the symbols to our formula with the constant from ([deltavt]). in this case we have good quantitative agreement with the numerical results in both cases up to a limiting rotation strength.]

[table: columns include the axis ratio and the virial test parameter; only the entries of the last column were preserved (.04, .09, .17, .27, .39, .55, .75).]

[figure caption: the same quantity versus the square of the dimensionless angular momentum; the description of the symbols is the same as in the previous figure, fig. [v-const_1].]

[figure caption: axis ratio for the -const rotation law; the description of the symbols is the same as in the previous figures, fig. [v-const_1] and fig. [v-const_2].]

a comparison of the results obtained with our approximation formula (fig. [v-const]–fig. [v-const_3], tables [j-const02]–[v-const02]) with the other methods (fig. 25, fig. 9, tables 1 and 2) shows correct qualitative behaviour of even the most simplified version of our approximation formula for a wide range of parameters describing the differential rotation and the strength of rotation. this makes our formula an excellent tool for those who are interested in the structure of barotropic, differentially rotating stars but do not need exact, high-precision results. it can be applied to a qualitative analysis of the structure of rapidly rotating stellar cores (e.g. 'cusp' formation, degree of flattening, off-centre maximum density) with an arbitrary rotation law, and also as an initial guess for numerical algorithms. it can also be used as an alternative to high-quality numerical results, in a more convenient form, as long as the rotation is not too strong and we are interested mainly in the global properties of differentially rotating objects.

references: chandrasekhar, s. 1936, mnras, 93, 390; eriguchi, y. & müller, e. 1985, a&a, 146, 260; hachisu, i. 1986, apjs, 61, 479; hammerstein, a. 1930, acta mathematica, 54, 117; kippenhahn, r. & weigert, a. 1994, stellar structure and evolution, springer-verlag, p. 176; lyttleton, r. a. 1953, the stability of rotating liquid masses, cambridge university press; maclaurin, c. 1742, a treatise of fluxions, printed by t. w. & t. ruddimans, edinburgh; ostriker, j. p. & mark, j. w.-k. 1968, apj, 151, 1075; tassoul, j.-l. 2000, stellar rotation, cambridge university press, p. 56.
an approximate analytical formula for the density distribution in differentially rotating stars is derived. any barotropic eos and conservative rotation law can be handled with this method for a wide range of differential rotation strengths. the results are in good qualitative agreement with those of other methods. some applications are suggested and possible improvements of the formula are discussed. keywords: stars: rotation -- methods: analytical
a set of agents is deployed in a network represented by a weighted graph .an edge weight is a positive real representing the length of the edge , i.e. , the distance between its endpoints along the edge .the agents start simultaneously at different nodes of .every agent has a battery : a power source allowing it to move in a continuous way along the network edges .an agent may stop at any point of a network edge ( i.e. at any distance from the edge endpoints , up to the edge weight ) .the movements of an agent use its battery proportionally to the distance traveled .we assume that all agents move at the same speed that is equal to one , i.e. , we can interchange the notions of the distance traveled and the time spent while traveling . in the beginning , the agents start with the same amount of power noted , allowing all agents to travel the same distance .we consider two tasks : _ convergecast _ , in which at the beginning , each agent has some initial piece of information , and information of all agents has to be collected by some agent , not necessarily predetermined ; and _ broadcast _ in which information of one specified agent has to be made available to all other agents . in both tasks ,agents notice when they meet ( at a node or inside an edge ) and they exchange the currently held information at every meeting . the task of convergecastis important , e.g. ,when agents have partial information about the topology of the network and the aggregate information can be used to construct a map of it , or when individual agents hold measurements performed by sensors located at their initial positions and collected information serves to make some global decision based on all measurements .the task of broadcastis used , e.g. , when a preselected leader has to share some information with others agents in order to organize their collaboration in future tasks .agents try to cooperate so that convergecast ( respectively broadcast ) is achieved with the smallest possible agent s initial battery power ( respectively ) , i.e. , minimizing the maximum distance traveled by an agent .we investigate these two problems in two possible settings , centralized and distributed . in the centralized setting, the optimization problems must be solved by a central authority knowing the network and the initial positions of all the agents .we call _ _ a finite sequence of movements executed by the agents . during each movement ,starting at a specific time , an agent walks between two points belonging to the same network edge .a is a convergecast if the sequence of movements results in one agent getting the initial information of every agent .a is a broadcast if the sequence of movements results in all agents getting the initial information of the source agent .we consider two different versions of the problem : the decision problem , i.e. , deciding if there exists a convergecast or a broadcast using power ( where is the input of the problem ) and the optimization problem , i.e. , computing the smallest amount of power that is sufficient to achieve convergecastor broadcast . 
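written out with ad hoc symbols (the paper's notation is not preserved in this extract), the optimization version of each task asks for

```latex
% ad hoc notation for the optimization problems described above
P^{*}_{\mathrm{conv}} \;=\; \min\{\,P\ge 0 : \text{a convergecast strategy using power at most } P \text{ exists}\,\},
\qquad
P^{*}_{\mathrm{bcast}} \;=\; \min\{\,P\ge 0 : \text{a broadcast strategy using power at most } P \text{ exists}\,\},
```

while the decision version fixes P and asks whether such a strategy exists.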
in the distributed setting , the task of convergecastor broadcast must be approached individually by each agent .each agent is unaware of the network , of its position in the network and of the positions ( or even the presence ) of any other agents .the agents are anonymous and they execute the same deterministic algorithm .each agent has a very simple sensing device allowing it to detect the presence of other agents at its current location in the network .each agent is also aware of the degree of the node at which it is located , as well as the port through which it enters a node , called an _entry port_. we assume that the ports of a node of degree are represented by integers .agents can meet at a node or inside an edge .when two or more agents meet at a node , each of them is aware of the direction from which the other agent is coming , i.e. , the last entry port of each agent . since the measure of efficiency in this paper is the battery power ( or the maximum distance traveled by an agent , which is proportional to the battery power used ) we do not try to optimize the other resources ( e.g. global execution time , local computation time , memory size of the agents , communication bandwidth , etc . ) . in particular , we conservatively suppose that , whenever two agents meet , they automatically exchange the entire information they hold ( rather than the new information only ) .this information exchange procedure is never explicitly mentioned in our algorithms , supposing , by default , that it always takes place when a meeting occurs .the efficiency of a distributed solution is expressed by the _ competitive ratio _ , which is the worst - case ratio of the amount of power necessary to solve the convergecast or the broadcast problem by the distributed algorithm with respect to the amount of power computed by the optimal centralized algorithm , which is executed for the same agents initial positions .it is easy to see , that in the optimal centralized solution for the case of the line and the tree , the original network may be truncated by removing some portions and leaving only the connected part of it containing all the agents ( this way all leaves of the remaining tree contain initial positions of agents ) .we make this assumption also in the distributed setting , since no finite competitive ratio is achievable if this condition is dropped .indeed , two nearby anonymous agents inside a long line need to travel , in the worst case , a long distance to one of its endpoints in order to meet .rapidly developing network and computer industry fueled the research interest in mobile agents computing .mobile agents are often interpreted as software agents , i.e. , programs migrating from host to host in a network , performing some specific tasks . however , the recent developments in computer technology bring up problems related to physical mobile devices . these include robots or motor vehicles and various wireless gadgets .examples of agents also include living beings : humans ( e.g. soldiers in the battlefield or disaster relief personnel ) or animals ( e.g. birds , swarms of insects ) . 
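stated symbolically (again with ad hoc notation), the efficiency measure just defined is

```latex
% competitive ratio of a distributed algorithm A; notation introduced here for convenience
\mathrm{CR}(A) \;=\; \sup_{I}\ \frac{P_{A}(I)}{P_{\mathrm{OPT}}(I)},
```

where the supremum runs over all instances I (a network together with the initial positions of the agents), P_A(I) is the battery power with which the distributed algorithm A completes the task on I, and P_OPT(I) is the power used by an optimal centralized schedule for the same initial positions.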
in many applicationsthe involved mobile agents are small and have to be produced at low cost in massive numbers .consequently , in many papers , the computational power of mobile agents is assumed to be very limited and feasibility of some important distributed tasks for such collections of agents is investigated .for example introduced _ population protocols _ , modeling wireless sensor networks byextremely limited finite - state computational devices .the agents of population protocols move according to some mobility pattern totally out of their control and they interact randomly in pairs .this is called _ passive mobility _ , intended to model ,e.g. , some unstable environment , like a flow of water , chemical solution , human blood , wind or unpredictable mobility of agents carriers ( e.g. vehicles or flocks of birds ) . on the other hand , introduced anonymous , oblivious , asynchronous , mobile agents which can not directly communicate , but they can occasionally observe the environment .gathering and convergence , as well as pattern formation were studied for such agents .apart from the feasibility questions for limited agents , the optimization problems related to the efficient usage of agents resources have been also investigated .energy management of ( not necessarily mobile ) computational devices has been a major concern in recent research papers ( cf .fundamental techniques proposed to reduce power consumption of computer systems include power - down strategies ( see ) and speed scaling ( introduced in ) .several papers proposed centralized or distributed algorithms .however , most of this research on power efficiency concerned optimization of overall power used . similar to our setting , assignment of charges to the system components in order to minimize the maximal charge has a flavor of another important optimization problem which is load balancing ( cf . ) . in wireless sensor and ad hocnetworks the power awareness has been often related to the data communication via efficient routing protocols ( e.g. .however in many applications of mobile agents ( e.g. those involving actively mobile , physical agents ) the agent s energy is mostly used for it s mobility purpose rather than communication , since active moving often requires running some mechanical components , while communication mostly involves ( less energy - prone ) electronic devices .consequently , in most tasks involving moving agents , like exploration , searching or pattern formation , the distance traveled is the main optimization criterion ( cf . ) . single agent exploration of an unknown environment has been studied for graphs , e.g. , or geometric terrains , . while a single agent can not explore a graph of unknown size unless pebble ( landmark ) usage is permitted ( see ) , a pair of robots are able to explore and map a directed graph of maximal degree in time with high probability ( cf . ) . in the case of a team of collaborating mobile agents, the challenge is to balance the workload among the agents so that the time to achieve the required goal is minimized .however this task is often hard ( cf . ) , even in the case of two agents in a tree , . on the other hand , the authors of study the problem of agents exploring a tree , showing competitive ratio of their distributed algorithm provided that writing ( and reading ) at tree nodes is permitted .assumptions similar to our paper have been made in where the mobile agents are constrained to travel a fixed distance to explore an unknown graph , or tree . 
in mobile agent has to return to its home base to refuel ( or recharge its battery ) so that the same maximal distance may repeatedly be traversed . gives an 8-competitive distributed algorithm for a set of agents with the same amount of power exploring the tree starting at the same node .the convergecast problem is sometimes viewed as a special case of the data aggregation question ( e.g. ) and it has been studied mainly for wireless and sensor networks , where the battery power usage is an important issue ( cf .recently considered the online and offline settings of the scheduling problem when data has to be delivered to mobile clients while they travel within the communication range of wireless stations . presents a randomized distributed convergecast algorithm for geometric ad - hoc networks and study the trade - off between the energy used and the latency of convergecast .the broadcastproblem for stationary processors has been extensively studied both for the message passing model , see e.g. , and for the wireless model , see e.g. .to the best of our knowledge , the problem of the present paper , when the mobile agents perform convergecast or broadcast by exchanging the held information when meeting , while optimizing the maximal power used by a mobile agent , has never been investigated before . in the centralized setting ,we give a linear - time algorithm to compute the optimal battery power and the strategy using it , both for convergecastand for broadcast , when agents are on the line .we also show that finding the optimal battery power for convergecastor for broadcastis np - hard for the class of trees .in fact , the respective decision problem is strongly np - complete . on the other hand ,we give a polynomial algorithm that finds a 2-approximation for convergecastand a 4-approximation for broadcast , for arbitrary graphs . inthe distributed setting , we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcastin trees .the competitive ratio of 2 is proved to be the best for the problem of convergecast , even if we only consider line networks .indeed , we show that there is no ( )-competitive algorithm for convergecastor for broadcastin the class of lines , for any .the following table gives the summary of our results .[ -0.22cm]centralized & + & + & polynomial 2-approximation on arbitrary graphs& polynomial 4-approximation on arbitrary graphs + [ 0.22cm]distributed& 2-competitive algorithm for trees& 4-competitive algorithm for trees + & + * roadmap * in section 2 , we show that we can restrict the search for the optimal strategy for convergecast or broadcast on the line to some smaller subclass of strategies called regular strategies . in section 3 , we present our centralized algorithms for convergecast and broadcast on lines .section 4 is devoted to centralized convergecast and broadcast on trees and graphs . in section 5 , we investigate convergecast and broadcast in the distributed setting .section 6 contains conclusions and open problems .in this section , we show that if we are given a convergecast ( respectively broadcast ) strategy for some initial positions of agents in the line , then we can always modify it in order to get another convergecast ( respectively broadcast ) strategy , using the same amount of maximal power for every agent , satisfying some simple properties . such strategies will be called _ regular_. 
these observations permit to restrict the search for the optimal strategy to some smaller and easier to handle subclass of strategies .we order agents according to their positions on the line .hence we can assume w.l.o.g ., that agent , for is initially positioned at point ] .the set ] , a starting point , a target point ( ) , and an amount of power , we want to know if there exists a strategy for the agents enabling them to move the information from to so that the amount of power spent by each agent is at most .strategies that move information from point to point will be called _ carry _ strategies for ,s , t , p) ] such that |<p ] because otherwise either ] ) is useless or it is impossible to carry information from to . a _regular _ carry strategy for ,s , t , p) ] , getting there the information from the previous agent ( except that has to go to ) , then it goes forward to a point .moreover , we require that each agent travels the maximal possible distance , i.e. , it spends all its power .[ lem - subproblem ] if there exists a carrystrategy for ,s , t , p) ] .the push strategy that can be computed iteratively ( in linear time ) starting with the first agent : 1 .,s\} ] , 3. \geq b_i , \forall 1 \leq i \leq n ] with the minimum number of agents such that there exists a carrystrategy , but no pull strategy .we consider the smallest value such that ,s , t , p) ] , then either +p < s ] . in the first case , can not move the information between and , and then ,s , t , p) ] and there is no pull strategy for ,pos[1],t , p) ] .since there exists a carry strategy , let be the first agent that reaches .the rightmost point where can move the information from is ] . by minimality of the number of agents, the pull strategy solves the subproblem on ,s',t , p) ] .if , by minimality of , we have and thus is a pull strategy which is a contradiction . hence , suppose that . note that if =pos[1] ] and let ] , and thus +pos[1]+f_1 - 3p)/4 ] , and + f_1 ' - p)/2 ] .however , +pos[i]+f_1 - 3p)/4 = s + ( pos[1 ] - pos[i])/4 < s ] , then there exists a pull strategy on ,s , t , p) ] admits a carrystrategy . from the first part of the proof , we know that it admits a pull strategy .the push strategy for ,s , t , p) ] for be the set of intervals that induces the pull strategy for ,s , t , p). ] for induces the pull strategy for ,s , b_n , p). ] that induces a push strategy for ,s , b_n , p) ] and . ] induces a push strategy for ,s , t , p) ] , a target point , and an amount of power and enables to compute the smallest such that ,s , t , p) ] , a starting point , and an amount of power and enables to compute the largest such that ,s , t , p) ] on the segment ] and =\ell ] and right agents execute a reverse push strategy from ] for each agent , such that 1 . if , \} ] , 2 . if , \} ] , 3 . .suppose that we are given a partition of the agents into two disjoint sets and and values for each agent satisfying conditions ( 1)-(3 ) . then the following moves define a regular convergecast strategy : first , every agent moves to ; subsequently , every agent in moves to once it learns the initial information of ; then , every agent in moves to once it learns the initial information of .let be an agent from such that is maximum .once has moved to , it knows the initial information of all the agents such that . if , convergecast is achieved . otherwise , since , we know that there exists an agent such that . 
when reaches it knows the initial information of all the agents such that and thus , and know the initial information of all agents , which accomplishes convergecast .the following lemma shows that we can restrict attention to regular convergecast strategies .[ lem : regconv ] if there exists a convergecast strategy for a configuration ] using power at most . consider a convergecast strategy for a configuration ] .hence , by time , it must have learned the initial information of .it follows that every agent , for , must learn either the initial information of agent or of .therefore , we can partition the set of agents performing a convergecast strategy into two subsets and , such that each agent learns the initial information of agent before learning the initial information of agent ( or not learning at all the information of ) .all other agents belong to .we denote by ] the interval of points visited by .let and . since is a convergecast strategy , we have .observe that the agents in move the initial information of from ] to . from lemma [ lem - subproblem ] , we can assume that the agents in ( resp . ) execute a push strategy ( resp . a reverse push strategy ) and thus conditions ( 1)-(3 ) hold .suppose now that there exists an agent such that .let and ; note that \} ] .consider the strategy where we exchange the roles of and , i.e. , we put and .let \} ] , ] . if ] . if ] . in both cases , we still have a convergecast strategy .if ] , then > 2 f_{lr}(i+1 ) + p -pos[i+1 ] = f_{i+1} ]. consequently , we still have a convergecast strategy . applying this exchange a finite number of times, we get a regular convergecast strategy .we now define the notion of a regular broadcast strategy for ] , using power at most . without loss of generality, we suppose that =0 ] .intuitively , a regular broadcast strategy divides the set of all agents into the set of left agents and the set of right agents such that left agents execute a reverse pull strategy from ] . more formally , a _ regular _ broadcast strategy is given by points of segment ] , -p ] , 3 .if , and -p)/2 ] suppose that we are given points for each agent , satisfying conditions ( 1)-(4 ) . then the following moves define a regular broadcast strategy : initially every agent moves to . once learns the source information , moves to . since ( 1)-(4 )hold , this is a broadcast strategy and the maximum amount of power spent is at most . before proving that it is enough to only consider regular broadcast strategies , we need to prove the following technical lemma .[ lem - strat1-broadcast ] there exists a broadcast strategy for a configuration ,k , p) ] ; 3 . for each , | + \min(x_i+r_i-2l_i , 2r_i - x_i - l_i ) \leq p ] ( resp . ] and ( resp . ) . consider a broadcast strategy where the maximum amount of power spent is .for every agent , let be the position where learns the information that has to be broadcast , and let ( resp . ) be the leftmost ( resp .rightmost ) position reached by once it got the information . by definition of , ( 1 ) and( 2 ) hold .since the maximum amount of power spent by an agent is at most , and since the agent has to go from ] , or meets an agent in such that already has the information .assume that ] and let be the non - empty set of agents such that and learns the information before .let be the agent that is first to learn the information .since ] . thus ( 4 ) holds for .conversely , if we are given values satisfying ( 1)-(4 ) , we can exhibit a strategy for broadcast : initially every agent moves to . 
once learns the information , if , then moves to and to and if , then moves to and to . since ( 4 ) holds , this is a broadcast strategy and since ( 3 ) holds , the maximum amount of power spent is at most .the following lemma shows that we can restrict attention to regular broadcast strategies .[ lem - shape - algo - b ] if there exists a broadcast strategy for a configuration ] with source agent , using power at most .suppose that there exists a broadcaststrategy for ,k , p) ] .similarly , the agents execute a pull strategy between and -p ] , and \geq b_{k+1} ] , and - 2b_{k-1 } > p ] , it implies that + p ] .consequently , we can assume that > b_{k-1} ] . among all broadcast strategies ,consider the strategy that minimizes the size of . without loss of generality , assume that does not reach , and let such that reaches .for each agent , let be defined as in lemma [ lem - strat1-broadcast ] .note that +p ] .moreover , - p \leq l_i \leq b_{k-1 } \leq l_k ] , +pos[i])/2 ] ; let +pos[i])/2 ] .note that - 2l_i ' \leq p ] . since \cup [ l_k , r_k ] \subseteq [ pos[i]-p , pos[k]+p ] = [ l'_k , r'_k ] \cup [ l'_i , r'_i] ] , or - 2b_{k-1 } > p ] , each having power , may transport their total information .similarly , is the leftmost such point for agents at positions ] of agents .we first compute two lists , for and , for . then we scan them to determine if there exists an index , such that .in such a case , we set and and we apply lemma [ lem : regconv ] to obtain a regular convergecaststrategy where agents and meet and exchange their information which at this time is the entire initial information of the set of agents . if there is no such index , no convergecast strategy is possible .this implies in time we can decide if a configuration of agents on the line , each having a given maximal power , can perform convergecast .the remaining lemmas of this subsection bring up observations needed to construct an algorithm finding the optimal power and designing an optimal convergecast strategy . note that if the agents are not given enough power , then it can happen that some agent may never learn the information from ( resp . from ) .in this case , can not belong to ( resp . ) .we denote by the minimum amount of power needed to ensure that can learn the information from : if , \} ] . given a strategy using power , for each agent , we have and either ] .in the first case , +p ] .we define threshold functions and that compute , for each index , the minimal amount of power ensuring that agent does not go back when ( respectively ) , i.e. , such that ] ) . for each , let +p\} ] . clearly , .the next lemma shows how to compute and if we know and for every agent .[ lem - eqn - reach ] consider an amount of power and an index .if , then + ( 2^{q - p+1}-1)p - \sum_{i = p+1}^{q } 2^{q - i}pos[i] ] . we prove the first statement of the lemma ; the proof of the other statement is similar .we first show the following claim. * claim . *if for every ] .thus if , the statement holds .suppose now that .since , by the induction hypothesis , we have .\ ] ] consequently , we have \\ & = & 2^{q - p}{reach_{lr}^c\xspace}(p , p ) + ( 2^{q - p}-2)p - \sum_{i = p+1}^{q-1 } 2^{q - i}pos[i]+p - pos[q ] \\ & = & 2^{q - p}{reach_{lr}^c\xspace}(p , p ) + ( 2^{q - p}-1)p - \sum_{i = p+1}^{q } 2^{q - i}pos[i].\\\end{aligned}\ ] ] this concludes the proof of the claim .if , then for each ] . 
consequently , + ( 2^{q - p+1}-1)p - \sum_{i = p+1}^{q } 2^{q - i}pos[i].\ ] ] in the following , we denote ] .[ rem - slr ] for every , we have .we now show that for an optimal convergecast strategy , the last agent of and the first agent of meet at some point between their initial positions and that they need to use all the available power to meet .[ lem - egalite - reach ] suppose there exists an optimal convergecast strategy for a configuration ] moreover , , and , . in the proof we need the following claim. * claim . * for every , the function which assigns the value for any argument , is an increasing , continuous , piecewise linear function with at most pieces on . for every ,the function which assigns the value for any argument , is a decreasing continuous piecewise linear function with at most pieces on .we prove the first statement of the claim by induction on .for , + p ] and for every , ] .if , ] with at most pieces .if , + p ] , the function is an increasing , continuous , piecewise linear function on with at most pieces .one can show the second statement of the claim using similar arguments .this ends the proof of the claim .suppose we are given and consider the partition of the agents into and .consider a regular convergecast strategy for this partition and where the maximum amount of power used by an agent is minimized .we first show that .let .since is an increasing continuous function on and is a decreasing continuous function on , the difference is a continuous increasing function on .consider the case where ( the other case is similar ) .since +q ] and thus , . by definition of a regular convergecast strategy , there exists such that .consequently , since the difference is a continuous increasing function on , there exists a unique such that .consider an optimal regular convergecast strategy and let be the maximum amount of power used by any agent . by definition of a regular convergecast strategy, there exists an index such that .suppose that ] since . consequently , according to what we have shown above , there exists such that and is not the optimal value needed to solve convergecast .this contradiction shows that < { reach_{lr}^c\xspace}(p , p) ] , is not the optimal value needed to solve convergecastthis contradiction shows that ] , .this follows from the fact that for each such that , we have .consequently , for each ] and thus , - p > pos[p ] - p \geq { reach_{lr}^c\xspace}(p , p) ] , .we finally prove that for each ] and consequently , the first statement of the lemma implies that there exists such that .this implies that is not the optimal value needed to solve convergecast .this contradiction implies that for each ] , .we first sketch a suboptimal but much easier algorithm and later present and analyze in detail a more involved linear - time solution to our problem .first , we need to compute the functions and for all such that . 
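to make this step concrete, the following is a minimal sketch of how these reach values, and the resulting feasibility test for a given power, can be computed in a single left-to-right pass. it follows our reading of the push strategies above (in particular the recurrence reach(i) = 2 reach(i-1) + p - pos[i] when agent i starts to the right of the current delivery point, which is what the stripped formulas of the lemma expand to); all function names are ours and not part of the paper.

```python
def reach_right(pos, p):
    """furthest point to the right to which the agents at pos[0..i] (sorted,
    each with battery power p) can jointly deliver their combined information,
    for every prefix i; None marks a prefix whose information cannot be
    combined because some gap exceeds p.  sketch under the push-strategy
    reading above."""
    out, r = [], None
    for i, x in enumerate(pos):
        if i == 0:
            r = x + p                    # a single agent just walks right
        elif r is None or x - r > p:
            r = None                     # previous delivery point is out of reach
        elif x >= r:
            r = 2 * r + p - x            # walk back to r, then push forward
        else:
            r = x + p                    # already left of r: walk to r, then forward
        out.append(r)
    return out

def reach_left(pos, p):
    """mirror image: leftmost point reachable by every suffix pos[i..n-1]."""
    mirrored = reach_right([-x for x in reversed(pos)], p)
    return [None if r is None else -r for r in reversed(mirrored)]

def line_convergecast_feasible(pos, p):
    """o(n) test from the previous subsection: convergecast is possible iff
    some prefix can push its information at least as far right as the
    complementary suffix can bring its own information left."""
    if len(pos) <= 1:
        return True
    right, left = reach_right(pos, p), reach_left(pos, p)
    return any(right[i] is not None and left[i + 1] is not None
               and right[i] >= left[i + 1] for i in range(len(pos) - 1))
```

since feasibility can only improve when the power grows, bisection over p combined with this test already gives one simple (though not linear-time) way to approximate the optimal power; the stack-based method developed below avoids any such search.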
by lemma[ lem - eqn - reach ] , the function can be computed from the values for all such that .starting from , one can compute all these functions , since each value \} ] and to compute the optimal convergecast strategy .we first explain how to compute a stack of couples that we can subsequently use to calculate for any given .then , we present a linear algorithm that computes the value needed to solve convergecast when the last index is provided : given an index , we compute the optimal power needed to solve convergecast assuming that and .finally , we explain how to use techniques introduced in the two previous algorithms in order to compute the optimal power needed to solve convergecast .[ [ computing - the - threshold - values . ] ] computing the threshold values .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in order to describe explicitly the function , we need to identify the indices such that for every ] .we denote by this set of indices , { th_{lr}^c\xspace}(r ) > { th_{lr}^c\xspace}(p)\} ] , and then is the value of power such that + ( 2^{q - p+1}-1)p - { s_{lr}^c\xspace}(p , q ) = pos[q+1] ] .after that , the integer needed to compute is located on the top of the stack .finally , the couple is pushed on the stack before we proceed with the subsequent index the function returns the stack corresponding to the index .below , we give the pseudo - code of the function . ( , ) ( ) the number of stack operations performed during the execution of this function is .however , in order to obtain a linear number of arithmetic operations , we need to be able to compute and in constant time . in order to compute efficiently , we can store the values of , ] .since ] ` of real ` , `:integer):stack ` that returns a stack containing all pairs such that for every ] - { s_{rl}^c\xspace}(q , r+1 ) - 2^{r - p}pos[p]+{s_{lr}^c\xspace}(p , r))/(2^{r - p+1}+2^{q - r}-2) ] if , or -p_{<r+1 } = { reach_{rl}^c\xspace}(r+1,p_{<r+1 } ) = { reach_{lr}^c\xspace}(r , p_{<r+1}) ] , each having power , may pick the information and bring it back to .similarly , is the leftmost point from which the agents at positions ] of agents and a specified source agent .we first compute and . then we test if - { reach_{rl}^b\xspace}(k+1,p)| ] are less or equal than .if one of the inequalities is true then there is a broadcast strategy .otherwise , broadcast is not possible .this implies in time we can decide if a configuration ] .similarly , if we have +p\} ] .similarly , for each agent such that we have )/2 ] of agents on the line for any source agent and to compute an optimal broadcast strategy .we formulate function which computes in linear time the optimal power for the broadcast in the line . , q = n ] , then . by lemma [ lem - eqn - reach - b ], all functions are linear with coefficient 1 on .hence , if -{act_{lr}^b\xspace}(p) ] .this shows that is correctly computed .it remains to show that is correctly computed . by definition of a regular broadcast strategy, we have )/2 ] , then is correctly computed as the above formula is used by the function . otherwise , we have : -{act_{lr}^b\xspace}(p)-{reach_{lr}^b\xspace}(p,{act_{lr}^b\xspace}(p)))/2 ] , < { y_{rl}^b\xspace} ] to compute the additional power that has to be used . by definition of and , we conclude that is the optimal value of power to achieve broadcast by a regular strategy . 
in view of lemma [ lem - shape - algo - b ] , this concludes the proof that is the optimal value of power to achieve broadcast .the complexity of the function is straightforward by its formulation . since a regular broadcast strategyis fully determined by the value of and by the two possible values of for the source agent ( or ) , computing the optimal power yields an optimal broadcast strategy .this concludes the proof of theorem [ thm : optpower - b ] .we start the section by showing that for arbitrary trees the centralized convergecast problem and the centralized broadcast problem are substantially harder than on lines .a configuration for convergecast on arbitrary graphs is a couple where is a -node weighted graph representing the network and of size is the set of the starting nodes of the agents .a configuration for broadcast additionally specifies the starting node of the source agent .we consider the centralized convergecast decision problem and the broadcast decision problem formalized as follows . *centralized convergecast decision problem * + _ instance : _ a configuration and a real .+ _ question : _ is there a convergecast for , in which each agent uses at most battery power ?* centralized broadcast decision problem * + _ instance : _ a configuration with a specified source agent and a real .+ _ question : _ is there a broadcast for with the specified source agent , in which each agent uses at most battery power ? we will prove that both these problems are strongly np - complete . in order to do this , we consider _ star configurations _, i.e. , configurations in which is a star , i.e. , a tree of diameter 2 .we define a class of strategies in a star called _ simple _ that consist of the following two phases : * the strategy starts with a gathering phase lasting time , in which each agent uses all its available power to move towards the center of the star and then waits until time .the agents that have used all their power during this phase without reaching the center are called _depleted_. * in the second phase , the agents does not move past depleted agents , i.e. , never enter the segment between a leaf and a depleted agent .the following lemma shows that it is enough to consider simple strategies for convergecast and broadcast .[ cl : simpl ] if there exists a convergecast ( respectively a broadcast strategy ) in a star using power , then there exists a simple convergecast ( respectively a simple broadcast strategy ) using power .let be a convergecast or a broadcast .we construct a simple as follows . in ,each agent moves towards the center of the star until it has used all its battery power or has reached the center of the star .this gathering phase lasts from time to time .if an agent has not reached the center in strategy , then it stops forever in .otherwise , consider time at which it arrives at the center in .then , in strategy , the agent executes at time each movement performed at time in .however , if a movement of an agent would result in the agent moving past a depleted agent from time to in , then in strategy the agent waits at the position of the depleted agent instead of moving past it . 
by construction, is a simple .observe that in , the non - depleted agents share all their information at the center of the star at time .since two depleted agents can not meet , it remains to show that when a non - depleted agent meets a depleted agent at time in , they meet at time in .the final position of agent is not farther from the center in than in .hence , any agent that meets agent at time is at the new position of in at time .hence , the meeting between and occurs in as well .if was a convergecast strategy ( respectively a broadcast strategy ) then is a simple convergecast strategy ( respectively a simple broadcast strategy ) .[ th : np - graph ] the centralized convergecast decision problem and the centralized broadcast decision problem are strongly np - complete for trees .the proof of theorem [ th : np - graph ] is split into three lemmas .we first show that the centralized convergecast decision problem is strongly np - hard , then that the centralized broadcast decision problem is strongly np - hard , and finally that both problems are in np .[ th : np - hard - graph ] the centralized convergecast decision problem is strongly np - hard for trees . we construct a polynomial - time many to one reduction from the following strongly np - complete problem .* 3-partition problem * + _ instance : _ a multiset of positive integers such that for with .+ _ question : _ can be partitioned into disjoint sets of size 3 , such that for ?we construct an instance of the centralized convergecast problem from an instance of 3-partition as follows .the graph is a star with leaves and is the set of leaves of .hence , there are agents , each located at a leaf of the star .we consider a partition of the set of agents into three subsets : , and .the subset contains agents .the leaves containing these agents are incident to an edge of weight .the subset contains agents . for ,the weight of the edge incident to the leaf containing agent is .the subset contains one agent .the leaf containing agent is incident to an edge of weight .figure [ fig : reduc ] depicts the star obtained in this way .the battery power allocated to each agent is equal to .the construction can be done in polynomial time .we show that the constructed instance of the centralized convergecast problem gives answer yes if and only if the original instance of 3-partition gives answer yes .first , assume that there exists a solution for the instance of the 3-partition problem .we show that the agents can solve the corresponding instance of the centralized convergecast problem using the following .agent moves at distance from the center and for each , agent moves at distance from the center . at this point , all these agents have used all their battery power . each agent in moves to the center of the star . for and for each of the three agents such that , agent moves to meet and goes back to the center of the starthe cost of this movement is , which is exactly the remaining battery power of agent .observe that since agents in have met all agents in , agents in , located at the center of the star , have the information of all agents except agent . then agent moves to meet agent .agents and have the information of all the agents .hence , this is a solution of the instance of the centralized convergecast problem .now assume that there exists a solution ( strategy ) to the convergecast problem . 
by lemma [ cl : simpl ], we can assume that the convergecast is simple .consider the star after the gathering phase of the simple .each agent in is at the center of the star . for , the agent has the remaining power of . for , the agent is at distance from the center of the star and agent is at distance from the center .since the agents in are the only agents with remaining battery power , they must move to collect the information of agents in .we call this phase the collecting phase . observe that since agent is at distance from the center , it is impossible for agents in to transport this information . indeed , when an agent reaches , it has used all its battery power .hence , the entire information must be collected at the position of . in order to collect the information, agents in must go to the position of each agent in and transport the information of these agents to the center .the total cost to move these information is at least twice the sum of the distances between each agent in and the center .this is equal to .then , this information must be moved to the position of .this costs at least .hence , the total cost of collecting information after the gathering phase is at least .the amount of power available to the agents for the collecting phase is equal to the amount of power needed to collect the information , since there are agents each having power .this means that during the collecting phase , for , agents can not collectively use a power larger than to collect the information of .suppose by contradiction that during the collecting phase , more than one agent in enters an edge to collect the information of agent at distance from the center , for some such that .let be the agent that has reached the position of .if comes back to the center , it has used at least power . sinceat least one other agent has used some power to enter edge , these agents have used more than battery power to collect information of agent .if does not come back to the center , then some other agent has to move the information to the center .if the agent stops at distance from the center , then at least one other agent has to go to this position ( at distance from the center ) and come back .thus , the cost is at least . 
in both cases ,the agents have used more power than , which leads to a contradiction .hence , for each , there is only one agent that collects the information of agent and enters the corresponding edge .we can assume , without loss of generality , that agent is the agent that transports the information to .observe that can not collect information from other nodes since moving to uses exactly all its remaining power .hence , only agents in can collect the information of agents in .let be the partition of defined by , for each .we have since each agent from has battery power at most .the power needed to collect information of agents in is which is exactly equal to the combined power available to agents in .this means that each agent in must use all its power to collect information and .hence , is a solution to the instance of 3-partition .[ th : np - hard - graph - b ] the centralized broadcast decision problem is strongly np - hard for trees .again , we construct a polynomial - time many to one reduction from 3-partition .the general structure of the proof is similar as in lemma [ th : np - hard - graph ] but details differ .we construct an instance of the centralized broadcast problem from an instance of 3-partition as follows .the graph is a star with leaves and is the set of leaves of .hence , there are agents , each located at a leaf of the star .we consider a partition of the set of agents into three subsets : , and .the subset contains agents .the leaves containing these agents are incident to an edge of weight .the subset contains agents . for ,the weight of the edge incident to the leaf containing agent is .the subset contains agents .all leaves containing an agent in are incident to an edge of weight .figure [ fig : reducb ] depicts the star obtained in this way .the battery power allocated to each agent is equal to and agent is the source agent .the construction can be done in polynomial time .we show that the constructed instance of the centralized broadcast problem gives answer yes if and only if the original instance of 3-partition gives answer yes .first , assume that there exists a solution for the instance of the 3-partition problem .we show that the agents can solve the corresponding instance of the centralized broadcast problem using the following . for each ,agent moves at distance from the center and for each , agent moves at distance from the center . at this point , all these agents have used all their battery power .each agent in moves to the center of the star .hence , each agent obtains the information of . for and each of the three agents such that , agent moves to meet and goes back to the center of the starthe cost of this movement is .observe that since agents in have met all agents in , all agents except those in have the information of .each agent moves to meet agent .each agent obtains the information of .hence , this is a solution to the instance of the centralized broadcast problem .now assume that there is a solution ( strategy ) to the broadcast problem . by lemma [ cl : simpl ], we can assume that the centralized broadcast is simple .consider the star after the gathering phase of the simple .each agent in is at the center of the star . for , the agent has the remaining power of . for , the agent is at distance from the center of the star . for , agent is at distance from the center . 
since the agents in are the only agents with remaining battery power , they must move to give the information to agents in .observe that since each agent is at distance from the center , an agent in that moves to meet an agent has not enough power to meet another depleted agent afterwards .hence , each agent must meet exactly one agent . without loss of generality, we can assume that each agent meets .before agents in meet agents in , they must meet agents in .the total cost to give the information to all agents in is at least twice the sum of the distances between each agent in and the center .this is equal to .the total cost to give the information to agents in is .the amount of power available to the agents in is , which is exactly the power needed for broadcast .assume for the sake of contradiction that two or more agents in enter the same edge incident to the leaf of an agent .in this case , one of the agents must meet .this costs the agent and other agents have used some power to enter this edge .this gives a contradiction because the total cost is more than the available power .thus , we can assume that each agent meets exactly one agent .let be the partition of defined by , for each .we have since the total power that agents in can use to meet agents in is at most .the power needed to give information to agents in is which is exactly equal to the combined power available to agents in .this means that each agent in must use all its power to meet agents in and .hence , is a solution to the instance of 3-partition .[ lem : np - graph ] the centralized convergecast decision problem and the centralized broadcast decision problem are in np .we consider the verifier - based definition of np .consider the of the agents for an instance of the centralized convergecast or centralized broadcast problems .we construct the certificate for the instance as follows .we say that a meeting of two or more agents is _ useful _ if at least one of the agents received a new piece of information during this meeting .each agent participates in at most useful meetings where is the number of agents .hence , there are at most useful meetings .the certificate contains the list of all useful meetings in chronological order . for the -th meeting ,the certificate encodes the identities of the meeting agents and the location of the meeting : a node or an edge of the graph .if the meeting has occurred on an edge , the certificate encodes a variable .the variable represents the distance between and the meeting point . if a previous meeting of number has occurred on the same edge , the certificate encodes if , or or . for each of the meeting agents, the certificate also encodes the node from which it has entered the edge ( or ) just before the meeting and the node from which it exits the edge just after the meeting .we consider the defined as follows .for each useful meeting in chronological order , the meeting agents move to the meeting location following a shortest path from their previous position .if the meeting occurs on an edge , the meeting agents enter and exit the edge using the node encoded in the certificate . is a convergecast since each time an agent has collected a new piece of information in , it collects the same information during the corresponding meeting in . 
moreover , the agents use at most as much power in as in since they move to the same meeting points using shortest paths .the verifier simulates the defined by the certificate .the verifier first checks that all the agents possess the entire information at the end of the algorithm . this can be done in polynomial time .then , the verifier computes the distance traveled by each agent .these distances are linear sums of variables with and of a constant .finding an assignment of the variables , such that the distance traveled by each agent is less or equal than , can be done in polynomial time using linear programming .thus , the certificate can be verified in polynomial time .theorem [ th : np - graph ] is a direct consequence of lemmas [ th : np - hard - graph ] , [ th : np - hard - graph - b ] and [ lem : np - graph ] .since both decision problems concerning convergecast and broadcast are np - hard for the class of trees , the same is true for their optimization counterparts , i.e. , computing the smallest amount of power that is sufficient to achieve convergecast or broadcast . in spite of that, we will show how to obtain , in polynomial time , a 2-approximation of the power needed to achieve centralized convergecast on arbitrary graphs and a 4-approximation of the power needed to achieve centralized broadcast on arbitrary graphs .let , where is the distance between and in .the following proposition shows a relation between and the above optimal power values .[ lem : twice - p ] consider a configuration for convergecast and a configuration with a specified source agent for broadcast . then and for any source agent in .we prove the proposition for the case of convergecast .the proof for broadcast is similar .suppose , by contradiction , that there is a partition of into and such that for each and the distance between and is greater than .it means that no agents in can meet an agent in using power .this contradicts the fact that there is a convergecast in using battery power .hence , for every partition of into and , there exist agents and that are at distance at most . in view of proposition [ lem : twice - p ], the following theorem shows that the convergecast problem has a polynomial - time 2-approximation .[ cor : fourapr ] consider a configuration .there is a polynomial algorithm computing a convergecast strategy in which each agent uses power .we formulate algorithm which produces the desired convergecast strategy .the parameters of the algorithm are the graph and the nodes corresponding to the initial positions of agents ( stored in ] let be the nodes chosen at the -th iteration of the first loop and let be the value of at the end of the -th iteration .we set ] has all the information since \} ] of agents on the line such that : * there exists a centralized convergecast strategy using power and there is no deterministic distributed strategy allowing the agents to solve convergecast when the amount of power given to each agent is .* there exists a centralized broadcast strategy using power for source agent starting at ] allowing the agents to solve broadcast when the amount of power given to each agent is . 
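the lower bound behind proposition [ lem : twice - p ] can be made effective: the heaviest edge of a minimum spanning tree built on the agents' pairwise shortest-path distances induces a partition whose closest cross pair is exactly that far apart, so half of that weight lower-bounds the optimal power. the sketch below computes this bound; it is meant only as a companion to theorem [ cor : fourapr ] (for instance to check the power used by a 2-approximate strategy), and the names and packaging are ours, not the paper's.

```python
import heapq

def convergecast_power_lower_bound(graph, agents):
    """graph: dict mapping each node of a connected weighted graph to a list
    of (neighbor, edge_weight) pairs; agents: list of nodes initially holding
    an agent.  returns d / 2, where d is the heaviest edge of a minimum
    spanning tree of the complete graph on the agents under shortest-path
    distances: removing that edge yields a partition whose closest cross pair
    is d apart, so proposition [ lem : twice - p ] gives opt >= d / 2."""
    def dijkstra(src):
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in graph[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    dist = {a: dijkstra(a) for a in agents}
    # prim's algorithm restricted to the agents, tracking the heaviest tree edge.
    best = {a: dist[agents[0]][a] for a in agents[1:]}
    heaviest = 0.0
    while best:
        a = min(best, key=best.get)
        heaviest = max(heaviest, best.pop(a))
        for b in best:
            best[b] = min(best[b], dist[a][b])
    return heaviest / 2.0
```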
before proving theorem [ thm : twoapropt ] , we prove two technical lemmas .[ lem - offline - group ] consider any , an amount of power , and a set of agents located at positions ] , and if , there exists such that ] .therefore , in view of the claim from the proof of lemma [ lem - eqn - reach ] , we have \\ & = & { reach_{lr}^c\xspace}(1,p ) + ( 2^{k}-1)p - \sigma_{i=1}^{k } 2^{k - i}(pos[i]-{reach_{lr}^c\xspace}(1,p))\\ & \geq & { reach_{lr}^c\xspace}(1,p ) + ( 2^{k}-1)p - \sigma_{i=1}^{k } 2^{k - i}(p-\varepsilon)\\ & \geq & { reach_{lr}^c\xspace}(1,p ) + ( 2^{k}-1)p - ( 2^{k}-1)(p-\varepsilon)\\ & \geq & { reach_{lr}^c\xspace}(1,p ) + ( 2^{k}-1)\varepsilon\end{aligned}\ ] ] consequently , if , we have ] .let be the closest point from ] .suppose that all the agents execute the same distributed deterministic algorithm and do not know their initial position , and assume that some agent meets agent before any couple of agents in meet .then , and when meets , for each , agent is located on -d ] .since all agents are executing the same distributed deterministic algorithm , let us consider the execution of the algorithm until some agent meets agent .during this period , all the agents perform exactly the same moves . since they started simultaneously, no agent meets another agent before agent meets at point or to the left of .when agent meets , it has moved at least a distance of . until this meeting between and ,every other agent has also moved a distance of at least , and is located at distance to the left of its starting position .consequently , no agent can go further than to the right of ] of length such that for each , the distance between and is .in other words , for each , and ..,width=604 ] first , let us consider the execution of the optimal convergecast centralized algorithm for this configuration .we claim that if the amount of power given to each agent is , then convergecast is achievable .we show by induction on that for every , . for , +p > p-\epsilon = s_1-p+\epsilon ] .since , and since , we know by lemma [ lem - offline - group ] that ] .consequently , this concludes the proof by induction .since , is sufficient to solve convergecast .notice that the same strategy guarantees broadcast for source agent for configuration ] such that and for each ] and for each ] , if then , and for each ] , no agent in has met an agent of . hence , property hold for .suppose that the induction hypothesis holds for all and let and .note that by , we have and . by and , before step , no agent in , has met any other agent from a set , .thus , since all agents in execute the same deterministic distributed algorithm starting simutaneously , they have performed exactly the same moves and they have not met any other agent before step .suppose that an agent from meets another agent at step .then , either the leftmost agent from meets an agent from with , or the rightmost agent from meets an agent from with . by symmetry , it is enough to consider only one case . in the following ,we assume that meets an agent with at step . 
in this case , and thus and for each ; consequently , properties and hold for .moreover , by induction hypothesis , the meeting between and occurs at a point .first suppose that .by lemma [ lem - online - gap ] , we have , and thus property and holds for .then suppose that .we have .but this is impossible since the initial position of the leftmost agent of is \geq s_l$ ] and the power available to is .this concludes the proof by induction .in particular , no agent from ever meets any agent from and consequently , is neither a distributed convergecast strategy nor a distributed broadcast strategy for any source agent .theorems [ thm : fourcomp ] and [ thm : twoapropt ] show that for the distributed convergecast problem on the class of trees , the competitive ratio 2 is optimal .in the centralized setting , we showed that the breaking point in complexity between polynomial and np - hard , both for the convergecast and for the broadcast problem , is already present inside the class of trees .namely , agents optimal power and the strategy using it can be found in polynomial time for the class of lines but it is np - hard for the class of arbitrary trees .nevertheless , we found polynomial approximation algorithms for both these problems .it remains open if better approximation constants can be found .the problem of a single _ information transfer _ by mobile agents between two stationary points of the network , which we called _ carry _ in the case of lines , is also interesting . in particular, it is an open question whether the problem of finding optimal power for this task is np - hard for arbitrary tree networks or if a polynomial - time algorithm is possible in this case .our reduction from 3-partition is no longer valid for this problem . inthe distributed setting , we showed that 2 is the best competitive ratio for the problem of convergecast .however , our distributed algorithm for the broadcast problem is only 4-competitive .it remains open to find the best competitive ratio for the broadcast problem .additional natural questions related to our research include other variations of the agent model , e.g. , agents with unequal power , agents with non - zero visibility , labeled agents in the distributed setting , as well as fault - tolerant issues , such as unreliable agents or networks with possibly faulty components . c. ambhl .an optimal bound for the mst algorithm to compute energy efficient broadcast trees in wireless networks . in _ proceedings of the international colloquium on automata , languages , and programming ( icalp ) _ , volume 3580 of _ lecture notes in computer science _ , pages 11391150 , 2005 .m. bender and d. slonim .the power of team exploration : two robots can learn unlabeled directed graphs . in _ proceedings of the 35th annual symposium on foundations of computer science ( focs ) _ , pages 7585 , 1994 .m. cieliebak , p. flocchini , g. prencipe , and n. santoro . solving the robots gathering problem . in_ proceedings of the international colloquium of automata , languages and programming ( icalp ) _ , volume 2719 of _ lecture notes in computer science _ , pages 11811196 .springer berlin heidelberg , 2003 .a. cord - landwehr , b. degener , m. fischer , m. hllmann , b. kempkes , a. klaas , p. kling , s. kurras , m. mrtens , f. meyer auf der heide , c. raupach , k. swierkot , d. warner , c. weddemann , and d. wonisch . a new approach for analyzing convergence algorithms for mobile robots . in l.aceto , m. henzinger , and j. 
sgall, editors, in _ proceedings of the international colloquium of automata, languages and programming (icalp) _, volume 6756 of _ lecture notes in computer science _, pages 650-661, 2011. s. das, p. flocchini, n. santoro, and m. yamashita. on the computational power of oblivious robots: forming a series of geometric patterns. in _ proceedings of the 29th acm sigact-sigops symposium on principles of distributed computing (podc) _, pages 267-276, 2010. m. dynia, m. korzeniowski, and c. schindelhauer. power-aware collective tree exploration. in _ architecture of computing systems (arcs) _, volume 3894 of _ lecture notes in computer science _, pages 341-351, 2006. b. krishnamachari, d. estrin, and s. wicker. the impact of data aggregation in wireless sensor networks. in _ proceedings of the 22nd international conference on distributed computing systems workshops _, pages 575-578, 2002.
a set of identical, mobile agents is deployed in a weighted network. each agent has a battery, a power source allowing it to move along network edges. an agent uses its battery proportionally to the distance traveled. we consider two tasks: _ convergecast _, in which, at the beginning, each agent has some initial piece of information, and the information of all agents has to be collected by some agent; and _ broadcast _, in which the information of one specified agent has to be made available to all other agents. in both tasks, the agents exchange their currently possessed information whenever they meet. the objective of this paper is to investigate the minimal value of power, initially available to all agents, so that convergecast or broadcast can be achieved. we study this question in the centralized and the distributed settings. in the centralized setting, there is a central monitor that schedules the moves of all agents. in the distributed setting, every agent has to perform its algorithm while being unaware of the network. in the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when the agents are on a line. we also show that finding the optimal battery power for convergecast or for broadcast is np-hard for the class of trees. on the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs. in the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. the competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. indeed, we show that there is no (2 - ε)-competitive algorithm for convergecast or for broadcast in the class of lines, for any ε > 0.
sniffing is a usual technique for monitoring wireless networks .it consists in spreading within some target area a number of monitors ( or _ sniffers _ ) that capture all wireless traffic they hear and produce traces consisting of macframe exchanges .wireless sniffing is a fundamental step in a number of network operations , including network diagnosis , security enhancement , and behavioral analysis of protocols .wireless sniffing often involves a centralized process that is responsible for combining the traces .the objective is to have a global view of the wireless activity from multiple local measurements .individual sniffers can also compensate for their frame losses with data from other sniffers .merging is however a difficult task ; it requires precise synchronization among traces ( up to a few microseconds ) and bearing the unreliable nature of the medium ( frame loss is unavoidable ) .the literature has provided the community with a number of merging tool , but they either require a wired infrastructure or are too specific to the experimentations conducted in the papers ( see more details in section [ sec : probem_description ] ) . in this paper we present wipal , an trace merging tool that focuses on ease - of - use , flexibility , and speed . by explaining wipal s design choices and internals , we intend to complete existing papers and give additional insights about the complex process of trace merging .wipalhas multiple characteristics that distinguish it from the few other traces mergers : offline tool . : : being an offline tool enables wipalto be independent of the monitors : one may use any software to acquire data .most trace mergers expect monitors to embed specific software .independent of infrastructure .: : wipal s algorithms do not expect features from traces that would require monitors to access a network infrastructure ( e.g.,synchronization ) .monitors just need to record data in a compatible input format .compliant with multiple formats .: : wipalsupports most of the existing input formats , whereas other trace mergers require a specific format .some tools even require a custom dedicated format .hands - on tool .: : wipalis usable in a straightforward fashion by just calling the adequate programs on trace files .other mergers require more complex setups ( e.g.,a database server or a network setup involving multiple servers . 
)this paper provides an analysis that supports these choices ( cf.section [ sec : eval ] ) .first , the proposed synchronization mechanism exhibits better precision than existing algorithms .second , wipalis an order of magnitude faster than the other publicly available offline merger , wit .this analysis uses crawdad s uw / sigcomm2004 dataset , recorded during the sigcomm 2004 conference .it allows us to calibrate various parameters of wipal , validate its operation , and show its efficiency .wipalis however _ not _ designed for a specific dataset and works on any wireless traces using the appropriate input format ( wipal s test suite includes various synthetic traces with different formats ) .we do believe that wipalwill be of great utility for the research community working on wireless network measurements .wireless sniffing requires the use of multiple monitors for _ coverage _ and _ redundancy _ reasons .coverage is concerned when the distance between the monitor and at least one of the transmitters to be sniffed is too large to ensure a minimum reception threshold .redundancy is the consequence of the unreliability of the wireless medium . even in good radio conditions monitors may miss successfully transmitted frames .after the collection phase , traces must be combined into one .a merged trace holds all the frames recorded by the different monitors and gives a global view of the network traffic .the traditional approach to merging traces involves a _ synchronization _ step , which aligns frames according to their timestamps .this enables identifying all frames that are identical in traces so that they appear once and only once in the output trace ( cheng et al refer to it as _unification_. ) this process is illustrated in fig .[ fig : basics ] .synchronization is difficult to obtain because , in order to be useful , it must be very precise .imprecise frame timestamps may result in duplicate frames and incorrect ordering in the output trace .an invalid synchronization may also lead to distinct frames accounted for the same frame in the output trace . in order to avoid such undesirable effectsone needs precision of less than . to the extent of our knowledge , no existing hardware supports synchronizing network cards clocks with such a precision ( note that we are interested in frame arrival times in the card , not in the operating system ) .therefore , all merging tools post - process traces to _ resynchronize _ them with the help of _ reference frames _ , which are frames that appear in multiple traces .one may readjust the traces timing information using the timestamps of the reference frames ( see fig .[ fig : basics ] . 
) finding reference frames is however a hard task , since we must be sure a given reference frame is an occurrence of the same frame in every traces .that is , some frames that occur frequently ( e.g.,macacknowledgements ) can not be used as reference frames because their content does not vary enough .therefore , only a subset of frames are used as reference frames , as explained later in this paper ( cf.section [ sec : details ] ) .a few trace merging tools exist in the literature , but they do not focus on the same set of features as this paper .for instance , jigsaw is able to merge traces from hundreds of monitors , but requires monitors to access a network infrastructure .wismon is an online tool that has similar requirements .this paper however considers smaller - scale systems ( dozens of monitors ) but where no monitor can access a network infrastructure .another system close to ours is wit . despite witprovides valuable insights on how to develop a merging tool , it is difficult to use , modify , and extend in practice ( cf.authors note in crawdad ) . thus our motivation to propose a new trace merger .note that this paper only refers to wit s merging process ( as wit has other features like , e.g.,a module to infer missing packets ) .wipalhas been designed according to the following constraints : no wired connectivity .: : the sniffers must be able to work in environments where no wired connectivity is provided .this enables performing measurements when it is difficult to have all sniffers access a shared network infrastructure ( e.g.,in some conference venues , or when studying interferences between two wireless networks belonging to distinct entities ) .simplicity to the end - user .: : we believe simplicity is the key to re - usability .users are not expected to install and set up complex systems ( e.g.,a database backend ) in order to use wipal .clean design .: : wipalexhibits a modular design .developers can easily adapt part of the trace merger ( e.g.,the reference frames identification process , the synchronization , or merging algorithm . ) for these reasons , we opted for an offline trace merger that does not require that traces be synchronized a priori . concretely , the sniffers only have to record their measurements on a local storage device , using the widely used pcap ( packet capture ) file format .wipalcomes as a set of binaries to manipulate wireless traces , including the merging tool presented in this paper .it works directly on pcap files both as input and output .wipalis composed of roughly 10k lines of c++ and makes heavy usage of modern generic and static programming techniques .wipalis downloadable from http://wipal.lip6.fr .[ fig : design ] depicts wipal s structure .each box represents a distinct module and arrows show wipal s data flow .wipaltakes two wireless traces as input and produces a single merged trace . in the following ,we explain in detail the functioning of each one of the modules .this section explains the process of extracting reference frames .this operation involves two steps : extraction of unique frames and intersection of unique frames ( see fig .[ fig : design ] . 
)let us first define what a unique frame means .a frame is said to be unique when it appears `` in the air '' once and only once for the whole duration of the measurement .a frame that is unique within each trace but that actually appeared twice on the wireless medium should not be considered as unique .the process of extracting unique frames finds candidates to become reference frames .the process of intersecting unique frames identifies then identical unique frames from both traces to become reference frames .wipalconsider every beacon frame and non - retransmitted probe response as a unique frame .these are management frames that access points send on a regular basis ( e.g.,every 100 ms for beacon frames ) .the uniqueness of these frames is due to the 64-bit timestamps they embed ( these timestamps are not related to the actual timestamps used for synchronization ) . in practice ,the extraction process does not load full frames into memory .it uses 16-byte hashes instead , which are stored in memory and used for comparisons .limiting the size of stored information is an important aspect since , as we will see later , wipal s intersection process performs a lot of comparisons and needs to store many unique frames in memory .tests with crawdad s uw / sigcomm2004 dataset have shown that this technique is practical .concretely , wipalneeds less than 600 mb to load 7,700,000 unique frames .there are some rare cases where the assumption that beacons and probe responses are unique does not hold .the uw / sigcomm2004 dataset has a total number of 50,375,921 unique frames ( about 14% of 364,081,644 frames ) . among those frames, we detected 5 collisions ( distinct unique frames sharing identical hashes . )wipal s intersection process includes a filtering mechanism to detect and filter such collisions out .the intersection process intersects the sets of unique frames from both input traces .there are multiple algorithms to perform such a task . based on cheng et al . , a solution is to `` bootstrap '' the system by finding the first unique frame common to both traces and then use this reference frame as a basis for the synchronization mechanism , as shown in algorithm [ alg : inter - sync ] .one may also use subsequent reference frames to update synchronization .this algorithm is practical because the inner loop only searches a very limited subset of .it has several drawbacks though : ( i ) the performance of the algorithm strongly depends on the precision of the synchronization process ; ( ii ) finding the first reference frame is still an issue ; ( iii ) this algorithm couples intersection with synchronization , which is undesirable with respect to modularity ; and ( iv ) there is a possibility that some frames are read multiple times from .more specifically , access to is not sequential .* input : * two lists of unique frames and .* output : * a list of reference frames . 
synchronization precision append to output .wipalincludes an algorithm that is much simpler to implement and that avoids the drawbacks of the abovementioned solution .the main characteristics of the proposed algorithm ( detailed in algorithm [ alg : inter - nosync ] ) are : ( i ) it does not require a bootstrapping phase ; ( ii ) it does not depend on any kind of synchronization ; and ( iii ) it sequentially reads each frame only once from and .the algorithm starts by loading all unique frames of the first trace into memory .this precludes using it as an online tool .note that loading all unique frames from a trace into memory may hog resources ; this justifies the importance of having small identifiers for the unique frames .these constraints are however negligible compared to those of algorithm [ alg : inter - sync ] . to support our argument ,let us show an example using the uw / sigcomm2004 dataset .the biggest traces are those from sniffers _ mojave _ and _ sonoran _ on channel 11 ( roughly 19 gb each . ) extracting these traces unique frames and intersecting them using wipalneeds 575 mb of memory .therefore , memory aggressiveness is not a concern in algorithm [ alg : inter - nosync ]. * input : * two lists of unique frames and . *output : * a list of reference frames . insert into .append to output .another advantage of algorithm [ alg : inter - nosync ] is its ability to detect collisions of unique frames within the first trace .collisions are detected by duplicate elements in .wipaldetects such cases , memorizes collisions , and filter them out of the hash table before starting the algorithm s second loop .of course , collisions in the second trace remain undetected .even if wipaldetected them , there would still be the possibility that a collision spans across both traces ( i.e.,each trace contains one occurrence of a colliding unique frame ) .such cases lead to producing invalid reference frames . to detect them ,wipallooks at possible anomalies w.r.t.the interarrival times between unique frames . in practice ,invalid references are rare : only three occurrences when merging uw / sigcomm2004 s channel 11 ( a 73 gb input which produces a 22 gb output ) . synchronizing two tracesmeans mapping trace one s timestamps to values compatible with trace two s .wipalcomputes such a mapping with an affine function .it estimates and with the help of reference frames as the process runs .wipal s synchronization process operates on windows of reference frames ( finding an optimal value of is discussed below ) .for each reference frame , the process performs a linear regression using reference frames , , .at the beginning and at the end of the trace , we use and ( is the number of reference frames . ) the result gives and for all frames between and .we performed a number of experiments that revealed that the optimal value for is 2 ( i.e.,wipalperforms linear regressions on 3-frame windows ) .[ fig : prec ] shows the results of performing two merge operations with varying window sizes . the merges concern channel 11 of the _ sahara chihuahuan _ and _ kalahari mojave _ sniffers from uw / sigcomm2004 .the average synchronization error is computed as follows .consider only the subset of frames that are shared by both the first and second trace and .for a given frame , let be the arrival time of inside ( after clock synchronization ) and be the arrival time of inside .the average synchronization error is given by .as previously underlined , leads to the minimum average synchronization error . 
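to make the windowed regression concrete, here is a minimal sketch; it assumes the reference frames are already available as pairs of raw timestamps, and the exact window alignment as well as all names are our assumptions rather than wipal s api.

```python
import numpy as np

def fit_clock_mappings(refs, win=3):
    """refs: reference frames given as (t1, t2) pairs of raw arrival times in
    trace 1 and trace 2, in chronological order.  for each reference frame j,
    fit t2 = a * t1 + b by least squares over a window of about `win`
    consecutive reference frames centered on j (clipped at both ends of the
    trace); (a, b) is then used to remap the timestamps of the ordinary
    frames around reference frame j."""
    t1 = np.array([r[0] for r in refs], dtype=float)
    t2 = np.array([r[1] for r in refs], dtype=float)
    half = win // 2
    mappings = []
    for j in range(len(refs)):
        lo, hi = max(0, j - half), min(len(refs), j + half + 1)
        if hi - lo < 2:                    # a single reference: pure clock offset
            a, b = 1.0, float(t2[j] - t1[j])
        else:
            a, b = np.polyfit(t1[lo:hi], t2[lo:hi], 1)
        mappings.append((float(a), float(b)))
    return mappings
```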
note that techniques that use ( i.e.,that performs linear interpolations on couples of reference frames )would lead to the worst synchronization error . furthermore , merging traces with misses some shared frames .table [ tab : sync ] shows the number of frames that are identified as duplicates in the input traces .whereas using always gives identical results , using leads to some missed duplicates ( 7,455 for _sahara chihuahuan _ and 84 for _ kalahari mojave _ ) .although this is a small number compared to the total number of frames in the output traces , it indicates that synchronizing traces using linear interpolation ( as wit does ) may lead to incorrect results .unfortunately , it is difficult to know whether some duplicates were missed when ( we do not know which frames to expect as duplicates ) ..number of frames found to be shared by both input traces when merging _sahara chihuahuan _ and _ kalahari mojave _ with and ( channel 11 ) . [ cols="^,^,^ " , ] we now present how wipalperforms the final step , namely the merging process itself . its role is to copy frames from synchronized traces to the output trace . of course, it must order its output correctly while avoiding duplicate frames . algorithm [ alg : merge ] details wipal s merging algorithm . for the sake of illustration , we present here a simplified version that assumes that only one frame is emitted at a given time inside the monitoring area .it simultaneously iterates on both inputs , where each iteration adds the earliest input frame to the output ( lines [ alg : merge : it-1 ] and [ alg : merge : it-2 ] . )duplicate frames are the ones that have identical contents and that are spaced less than 106 ( line [ alg : merge : dup ] . )the rationale for this value is that 106 is half of the minimum gap between two valid frames .therefore , the appearance of identical frames during such an interval is in fact a unique occurrence of the same frame . * input : * two synchronized traces and .* output : * the merge of and .append to output ; s next frame ( or ) s first frame ; s first frame advance( , ) advance( , ) s time of arrival s time of arrival [ alg : merge : dup ] append either or to output .s next frame ( or ) s next frame ( or ) advance( , ) [ alg : merge : it-1 ] advance( , ) [ alg : merge : it-2 ]this section provides an evaluation of wipalusing crawdad s uw / sigcomm2004 dataset .we investigate both the correctness and the efficiency of wipal .we merge all traces sniffed from channel 11 and then use some heuristics to evaluate the quality of the result .we also analyze wipal s speed .traces from five sniffers compose the uw / sigcomm2004 dataset : _ chihuahuan _ , _ kalahari _ , _ mojave _ , _ sahara _ , and _sonoran_. fig .[ fig : eval : merge ] shows the merging sequence we used to merge all traces .the reason why _ kalahari _ and _ mojave _ share so few frames is that _ kalahari _ is an order of magnitude smaller than _mojave_. checkingthe correctness of the output is difficult .being able to test whether traces are correctly merged or not would be equivalent to knowing exactly in advance what the merge should look like .unfortunately , there is no reference output against which we could compare .thus , we propose several heuristics to check if wipalintroduces or not inconsistencies in its outputs .we also check wipal s correctness with a test - suite of synthetic traces for which we know exactly what to expect as output. a broken merging process could lead to several inconsistencies in the output traces . 
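as a concrete reading of the merging step described above , the following sketch merges two synchronized , time-ordered traces and collapses frames with identical contents that arrive within the 106-unit window into a single occurrence ( microseconds in the original description , treated as an assumption here ) . it mirrors the simplified algorithm in the text , with one frame in the air at a time , rather than wipal's actual code ; the data layout is an assumption .

```python
DUPLICATE_WINDOW_US = 106  # half the minimum gap between two valid frames (per the text)

def merge_traces(trace_a, trace_b):
    """Merge two synchronized traces, each a time-ordered list of
    (timestamp_us, frame_bytes) pairs.  Frames with identical contents that
    fall within DUPLICATE_WINDOW_US of each other are emitted only once."""
    out = []
    i = j = 0
    while i < len(trace_a) and j < len(trace_b):
        ta, fa = trace_a[i]
        tb, fb = trace_b[j]
        if fa == fb and abs(ta - tb) < DUPLICATE_WINDOW_US:
            out.append((min(ta, tb), fa))   # same physical frame seen by both sniffers
            i += 1
            j += 1
        elif ta <= tb:
            out.append((ta, fa))
            i += 1
        else:
            out.append((tb, fb))
            j += 1
    out.extend(trace_a[i:])
    out.extend(trace_b[j:])
    return out
```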
regarding the uw / sigcomm2004 dataset, we investigate in particular two of those inconsistencies : duplicate unique frames and duplicate data frames .duplicate unique frames .: : as seen previously , every unique frame should only occur once in the traces ( including merged traces ) .yet , it is difficult to avoid collisions in practice ( see section [ sec : unique - frames : extract ] ) .thus one should not consider all collisions as inconsistencies .when merging uw / sigcomm2004 , the final trace has 5 collisions .we manually verified that they are not inconsistencies introduced by wipal s merging process .duplicate data frames .: : we search traces on a per - sender basis for successive duplicate data frames ( only considering non - retransmitted frames ) .such cases should not occur in theory without retransmissions sequence numbers should at least vary .surprisingly , traces from uw / sigcomm2004 contain 20,303 such anomalies .we have no explanations why the dataset exhibits those phenomena .we checked however that the merged trace does not have more duplicates than the original traces .merging all the traces ( 73 gb ) takes about 2 hours and 20 minutes ( real time ) on a 3 ghz processor with 2 gb ram .we balance merge operations on two hard drives , whose average throughput during computations are about 60 mb / s and 30 mb / s .the average cpu usage is 75% , which means one could perform faster with faster hard drives ( about 1 hour and 40 minutes ) .comparing wipalwith online trace mergers does not make much sense : their mode of operation is different , and these also have different requirements ( e.g.,wired connectivity and loose synchronization . )the comparison would be unfair .we can however compare wipalwith wit , another offline merger .wit works on top of a database backend , which means that trace files need to be imported into a database before any further operation can begin ( e.g.,merging or inferring missing packets ) . using the same machine as before ,importing channel 11 of uw / sigcomm2004 into wit s database takes around 33 hours ( user time ) .this means that , before wit begins its merge operations , wipalcan perform at least 14 runs of a full merge with the same data .wipalallows then tremendous speed improvements .one of the reasons for such a difference is wipaluses high performance c++ code while wit is just a set of perl scripts using sql to interact with a database .this paper introduced the wipaltrace merger . as an _ offline _ merger , wipaldoes not require sniffers to be synchronized nor to have access to a wired infrastructure .wipalprovides several improvements over existing equivalent software : ( i ) it comes as a simple program able to manipulate trace files directly , instead of requiring a more complex software setup , ( ii ) its synchronization algorithm offer better precision than the existing algorithms ; and ( iii ) it has a clean modular design .furthermore , we also showed wipalis an order of magnitude faster than wit , the other available offline merger .we have several plans for the future of wipal .first , we are currently extending it to include other features ( besides merging ) . as a flavor of future features of wipal, it will perform traffic statistics on traces .we will also make better use of wipal s modularity and test other algorithms for the various stages of the merging operation .m. rodrig , c. reis , r. mahajan , d. wetherall , j. zahorjan , and e. lazowska , `` crawdad data set uw / sigcomm2004 ( v. 
2006-10-17 ) , '' downloaded from http://crawdad.cs.dartmouth.edu/uw/sigcomm2004 , oct .
merging wireless traces is a fundamental step in measurement - based studies involving multiple packet sniffers . existing merging tools either require a wired infrastructure or are limited in their usability . we propose wipal , an offline merging tool for traces that has been designed to be efficient and simple to use . wipal is flexible in the sense that it does not require any specific services , neither from monitors ( like synchronization , access to a wired network , or embedding specific software ) nor from its software environment ( e.g. , an sql server ) . we present wipal s operation and show how its features ( notably , its modular design ) improve both ease of use and efficiency . experiments on real traces show that wipal is an order of magnitude faster than other tools providing the same features . to our knowledge , wipal is the only offline trace merger that can be used by the research community in a straightforward fashion .
currently , some of the best tools for understanding the solar magnetic cycle are axisymmetric kinematic dynamo models and surface flux - transport simulations . on the one hand kinematic dynamo models ( which are usually based on an axisymmetric formulation ) , attempt to model the magnetic cycle self - consistently by using a prescribed meridional flow , differential rotation , turbulent diffusivity and poloidal source ( see sec .[ sec_kmfd ] ) .they have been successful in reproducing several of the characteristics of the solar cycle ( see for example : choudhuri , schssler & dikpati 1995 ; durney 1997 ; dikpati & charbonneau 1999 ; covas et al .2000 ; nandy & choudhuri 2001 ; rempel 2006 ; guerrero & de gouveia dal pino 2007 ; jouve & brun 2007 ; muoz - jaramillo , nandy & martens 2009 , mnm09 from here on ; for more information about kinematic dynamo models see review by charbonneau 2005 ) . on the other hand , surface flux - transport simulations study the evolution of the photospheric magnetic field by integrating the induction equation using a prescribed meridional flow , differential rotation and turbulent diffusivitythere are two main differences between surface flux - transport simulations and kinematic dynamo models : in the former the computational domain is restricted to the surface ( without imposing axisymmetry ) and they are not self - excited , but driven by the deposition of active region ( ar ) bipolar pairs .this type of models has proved a successful tool for understanding surface dynamics on long timescales ( see , for example , mackay , priest & lockwood 2002 ; wang , lean & sheeley 2002 ; schrijver , de rosa & title 2002 ) and the evolution of coronal and interplanetary magnetic field ( see for example lean , wang & sheeley 2002 ; yeates , mackay & van ballegooijen 2008 ) . however, a discrepancy exists between kinematic dynamo models and surface flux - transport simulations regarding the relationship between meridional flow amplitude and the strength of the polar field ( schrijver & liu 2008 ; hathaway & rightmire 2010 ; jiang et al .2010 ) . on the one hand kinematic dynamo modelsfind that a stronger meridional flow results in stronger polar field ( dikpati , de toma & gilman 2008 ) , on the other hand surface flux - transport simulations find an inverse relationship ( wang , sheeley & lean 2002 ; jiang et al .2010 ) . in this workwe improve upon the idea proposed by durney ( 1997 ) and further elucidated by nandy & choudhuri ( 2001 ) of using axisymmetric ring doublets to model individual ars .we show that this captures the surface dynamics better than the -effect formulation and resolves the discrepancy between dynamo models and surface flux - transport simulations regarding the relationship between meridional flow speed and polar field strength .as mentioned before , kinematic dynamo models are usually based on an axisymmetric formulation and our model is not an exception . 
given that herewe introduce an improved axisymmetric double - ring algorithm for modeling ar eruptions ( see below ) , but ar emergence is strictly a non - axisymmetric process , it is important to study the amount of information lost by averaging over the longitudinal dimension .we do this by performing surface transport simulations driven by a synthetic set of ar cycles based on kitt peak data using the model of yeates , mackay & van ballegooijen ( 2007 ) .we perform a regular surface flux - transport simulation in which the bipolar ars are distributed all across the surface of the sun ( case 1 ) and another in which the same set of ars is deposited at the same carrington longitude while leaving other properties ( time , tilt , latitude of emergence and flux ) intact ( case 2 ) .the difference between both simulations is clear from the top row of fig .[ fig_sft ] , where we show a snapshot of the surface magnetic field at the peak of the cycle for case 1 ( fig .[ fig_sft]-a ) and case 2 ( fig .[ fig_sft]-b ) . obviously these cases have entirely different magnetic configurations at the time of deposition . however , when the magnetic field is averaged in longitude and stacked in time to create a magnetic synoptic map ( also know as butterfly diagram ; figs .[ fig_sft]-c & [ fig_sft]-d ) , a careful examination shows that the results are essentially the same within a margin of 1% ( figs .[ fig_sft]-e & f ) .the reason the simulations have identical outcomes is that the differential rotation and the meridional flow are both independent of longitude in the simulations .note that non - axisymmetry is essential for the evolution of the corona and interplanetary magnetic field .this result simply indicates that an axisymmetric representation of surface dynamics is a reasonable approximation if we are only concerned with the general properties of the magnetic field at the surface over solar cycle timescales in the context of dynamo models .the initial implementation of the double - ring algorithm by durney ( 1997 ) and nandy & choudhuri ( 2001 ) consisted in searching the bottom of the convection zone ( cz ) for places in which the toroidal field exceeds a buoyant threshold and placing two axisymmetric rings of constant radial flux directly above them .this implementation had two important deficiencies : strong sensitivity to changes in grid resolution and the introduction of sharp discontinuities in the component of the vector potential .the first necessary step to address these problems is a careful mathematical definition of the vector potential associated with each ring doublet , which ensures a continuous first derivative in the computational domain .we do so by building a separable function : where is a constant we introduce to ensure super - critical solutions and defines the strength of the ring doublet . is defined as & r\geq r_\odot - r_{ar } \end{array}\right.,\ ] ] where m corresponds to the radius of the sun and represents the penetration depth of the ar .this depth is motivated from results indicating that the disconnection of an ar flux - tube happens deep down in the cz ( longcope & choudhuri 2002 ) . , on the other hand , is easier to define in integral form : \sin(\theta')d\theta',\ ] ] where ( ) defines the positive ( negative ) ring : & \theta_{ar}\mp\frac{\chi}{2}-\frac{\lambda}{2 } \leq \theta< \theta_{ar}\mp\frac{\chi}{2}+\frac{\lambda}{2}\\ 0 & \theta \geq \theta_{ar}\mp\frac{\chi}{2}+\frac{\lambda}{2 } \end{array}\right .. 
\ ] ] here is the co - latitude of emergence , is the diameter of each polarity of the doublet , for which we use a fixed value of ( heliocentric degrees ) and $ ] is the latitudinal distance between the centers , which in turn depends on the angular distance between polarity centers and the ar tilt angle ; is calculated using the spherical law of sines ( see fig . [fig_dr]-a for a diagram illustrating these quantities ) .[ fig_dr]-b shows the axisymmetric signature of one of such axisymmetric ars .given that the accumulated effect of all ars is what regenerates the poloidal field , we need to specify an algorithm for ar eruption and decay in the context of the solar cycle .on each solar day of our simulation we randomly chose one of the latitudes with fields higher than a buoyancy threshold of gauss at the bottom of the cz ( ) , and calculate the amount of magnetic flux present within its associated toroidal ring .the probability distribution we use is not uniform , but is restricted to observed active latitudes .we do this by making the probability function drop steadily to zero between 30 ( -30 ) and 40 ( -40 ) in the northern ( southern ) hemisphere : \right)\left ( 1 - \operatorname{erf}\left [ \frac{\theta - 0.694\pi}{0.055\pi } \right ] \right).\ ] ] we then calculate the corresponding ar tilt , using the local field strength , the calculated flux and the latitude of emergence . for thiswe use the expression found by fan , fisher & mcclymont ( 1994 ) : reducing the magnetic field of the toroidal ring from which the ar originates . in order to do this ,we first estimate how much magnetic energy is present on a partial toroidal ring ( after removing a chunk with the same angular size as the emerging ar ) . given that this energy is smaller than the one calculated with a full ring, we set the value of the toroidal field such that the energy of a full toroidal ring filled with the new magnetic field strength is the same as the one calculated with the old magnitude for a partial ring .finally , we deposit a double - ring ( as defined in section [ sec_ar ] ) with these calculated properties , at the chosen latitude .we perform dynamo simulations to explore how the double - ring formulation compares to the near surface -effect formulation .in particular we focus on the relationship between meridional flow speed and polar field strength .our model is based one the axisymmetric dynamo equations : = \eta\left ( \nabla^2 - \frac{1}{s^2 } \right)a + \alpha_0f(r,\theta)f(b_{tc})b_{tc}\ ] ] + ( \nabla \cdot \textbf{v}_p)b = \eta\left ( \nabla^2 - \frac{1}{s^2 } \right)b + s\left(\left [ \nabla \times ( a\bf \hat{e}_\phi ) \right]\cdot \nabla \omega\right ) + \frac{1}{s}\frac{\partial ( sb)}{\partial r}\frac{\partial \eta}{\partial r},\ ] ] where a is the -component of the vector potential ( from which and can be obtained ) , b is the toroidal field ( ) , is the meridional flow , the differential rotation , the turbulent magnetic diffusivity and .the second term on the right - hand side of equation [ eq_2.5dyna ] corresponds to the poloidal source in the mean - field formulation . 
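a minimal sketch of the eruption-latitude selection described above is given below . the second error-function term is quoted in the text ; the first is reconstructed by north - south symmetry and should be treated as an assumption , as should the helper names and the way the buoyancy threshold enters .

```python
import numpy as np
from scipy.special import erf

def active_latitude_mask(theta):
    """Probability mask restricting eruptions to observed active latitudes.
    theta is co-latitude in radians.  The second error-function factor is the
    one quoted in the text; the first is reconstructed by symmetry (assumed)."""
    return (1.0 + erf((theta - 0.306 * np.pi) / (0.055 * np.pi))) * \
           (1.0 - erf((theta - 0.694 * np.pi) / (0.055 * np.pi)))

def choose_eruption_latitude(theta_grid, b_bottom, b_threshold, rng):
    """Pick one co-latitude among those whose toroidal field at the bottom of
    the convection zone exceeds the buoyancy threshold, weighted by the mask."""
    eligible = np.abs(b_bottom) > b_threshold
    weights = active_latitude_mask(theta_grid) * eligible
    if weights.sum() == 0.0:
        return None                        # no eruption on this solar day
    weights = weights / weights.sum()
    return rng.choice(theta_grid, p=weights)
```

once a co-latitude is drawn , the ring doublet with the tilt and flux computed as above is deposited there and the underlying toroidal field is depleted accordingly .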
in this formulation a constant that sets the strength of the source term and is usually used to ensure super - critical solutions ; captures the spatial properties of the bl mechanism : confinement to the surface , observed active latitudes and latitudinal dependence of tilt , while adds nonlinearity to the dynamo by quenching the source term for values of the toroidal field at the bottom of the cz that are too strong or too weak .more information about this source can be found in mnm09 .note that for simulations using the double - ring algorithm this term is not present in the equations ( ) . in order to integrate these equations ,we need to prescribe four ingredients : meridional flow , differential rotation , the poloidal field regeneration mechanism , and turbulent magnetic diffusivity . for the differential rotation, we use the analytical form of charbonneau et al .( 1999 ) , with a tachocline centered at whose thickness is and we use the meridional flow profile defined in mnm09 .this meridional flow better captures the features present in helioseismic data , specially the latitudinal dependence .we use an amplitude of m / s for the results shown in figure [ fig_mfvsdr ] and a variable amplitude for the results shown in figure [ fig_mf_obs_ch3 ] ( see below ) .we use a double stepped diffusivity profile as described in mnm09 .it starts with a diffusivity value /s at the bottom of the cz , jumps to a value of /s in the cz , and then to a value of /s in the near - surface layers .the first step is centered at and has a half - width of and the second step is centered at and has a half - width of . for the poloidal field regeneration mechanism we use the improved ring - doublet algorithm described above , using a value of , in order to insure super - criticality ( for a meridional flow of m / s ) . for those simulationswhich use an -effect formulation , we use the non - local poloidal source described above ( more information in mnm09 ) using a value of , in order to insure super - criticality ( for a meridional flow of m / s ) .in order to have a net accumulation of unipolar field at the poles , it is necessary to have an equal amount of flux cancellation across the equator .since the meridional flow is poleward in the top part of the convection zone , it essentially acts as a barrier against flux cancellation by sweeping both positive and negative ar polarities towards the poles resulting in weak polar fields .this leads to an inverse correlation between flow speed and polar field strength which is accurately captured in surface flux transport simulations .contrarily , dynamo simulations in typically used parameter regimes obtain an opposite relationship not consistent with the above physics .this is because if there is already a strong separation of flux , a fast meridional flow will lead to an enhancement of the polar field due to flux concentration .this unrealistically strong separation is typical of kinematic dynamo models that use a non - local -effect bl source ( see fig .[ fig_mfvsdr]-c ) .the reason is that by increasing the vector potential proportionally to the toroidal field at the bottom of the cz ( eq . 
[ eq_2.5dyna ] ) , one creates strong gradients in the vector potential above the edges of the toroidal field belt ; this ends up immediately producing poloidal field which is as large in length scale as the toroidal field itself , circumventing the whole process of flux transport by circulation and diffusion .figure [ fig_mfvsdr ] illustrates this fundamental difference : the top row shows the evolution of the surface magnetic field for a dynamo model using the double - ring algorithm ( fig .[ fig_mfvsdr]-a ) versus one using the -effect formulation ( fig .[ fig_mfvsdr]-b ) . the different way in which each formulation handles the surface dynamics is evident .the double - ring simulation clearly shows a mixture of polarities and small - scale features which migrate to the poles ( very much like the observed evolution of the surface magnetic field ) . on the other hand ,the mean field formulation only shows two large scale polarities whose centroids drift apart as the cycle progresses .the bottom row depicts a snapshot of the poloidal field for the double - ring algorithm ( fig .[ fig_mfvsdr]-c ) and the -effect formulation ( fig .[ fig_mfvsdr]-d ) both snapshots taken at solar max . herethe presence of small - scale features and a mixture of polarities is evident for the double ring , whereas the -effect formulation only shows a large - scale magnetic field with two polarities .it is clear that although the large scale internal field is similar for both , the double - ring algorithm does a much better job of capturing the surface dynamics . in order to study the relationship between meridional flow and polar field strength, we perform simulations in which we randomly change the meridional flow amplitude from one sunspot cycle to another ( between m / s ) .this is illustrated in fig .[ fig_mfv ] where a series of sunspot cycles is plotted along with their associated meridional flow .we then evaluate the correlation between the amplitude of the meridional flow of a given cycle and the polar field strength at the end of it .since we want to evaluate the relative performance of the double - ring algorithm as opposed to the non - local bl source , we perform the same simulation for both types of sources . aside from the varying meridional flow amplitude and the poloidal source ,the rest of the ingredients are the same .it is important to note that partly due to difficulties in tracking the exact occurrence of solar minimum , the two hemispheres eventually drift out of phase in long simulations sometimes this phase difference leads to quadrupolar solutions which often go back to the observed dipolar solution .this parity issue only appears when the meridional flow is changed at solar minimum : if there are no variations , or if the variation takes place at solar maximum , the cycle is always locked in phase with dipolar parity . nevertheless , to compare our simulations with surface flux - transport models , we change the flow speed only at solar minimum . to be consistent, we accumulate statistics only from cycles in which the two hemispheres are in dipolar phase . 
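the post-processing implied by this experiment is straightforward ; the sketch below draws per-cycle flow amplitudes and correlates them with the end-of-cycle polar field . the amplitude range is not legible in this extraction , so the bounds used here are placeholders , and the polar-field array is assumed to come from the dynamo runs .

```python
import numpy as np

def draw_flow_amplitudes(n_cycles, v_min=15.0, v_max=30.0, seed=0):
    """Placeholder bounds in m/s; the range actually used in the paper is not
    recoverable from this text."""
    rng = np.random.default_rng(seed)
    return rng.uniform(v_min, v_max, size=n_cycles)

def flow_vs_polar_field_correlation(flow_amplitudes, polar_field_end_of_cycle):
    """Pearson correlation between a cycle's meridional flow amplitude and the
    polar field strength at the end of that cycle (dipolar-phase cycles only,
    filtered upstream)."""
    return float(np.corrcoef(flow_amplitudes, polar_field_end_of_cycle)[0, 1])
```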
the statistics performed for both types of source contain about 200 sunspot cycles .the values of polar field we find using the kinematic dynamo simulations are of the order of 10 kg which is a common feature of dynamo models , which are successful in simulating the strong toroidal field necessary to produce sunspots and sustain the solar cycle ( dikpati & charbonneau 1999 ; chatterjee , nandy & choudhuri 2004 ; jiang & wang 2007 ; jouve at al .recent high resolution observations of the polar region have now confirmed the existence of such strong kilo - gauss unipolar flux tubes ( tsuneta et al .[ fig_mf_obs_ch3 ] shows the results of both simulations .we find a weak positive correlation between meridional flow and polar field strength for the simulations using the non - local -effect formulation ( fig .[ fig_mf_obs_ch3]-top ) , which is in general agreement with the results of dikpati , de toma & gilman ( 2008 ) . on the other hand ,the simulations using the double - ring formulation distinctively show a negative correlation ( fig .[ fig_mf_obs_ch3]-bottom ) , in agreement with surface flux - transport simulations ( wang , sheeley & lean 2002 ) .this clearly establishes that the discrepancy between the models is resolved by introducing the double - ring algorithm and that the double - ring formalism does a better job at capturing the observed surface magnetic field dynamics than the non - local -effect formalism .in the first half of this work , we perform surface flux - transport simulations to test the validity of the axisymmetric formulation of the kinematic dynamo problem .our results suggest that this axisymmetric formulation captures well the surface flux dynamics over spatial and temporal scales that are relevant for the solar cycle . building upon this we introduce an improved version of the double - ring algorithm to model the babcock - leighton mechanism for poloidal field regeneration in axisymmetric , kinematic dynamo models .we show that this new double - ring formulation generates surface field evolution and polar field reversal which is in close agreement with observations . additionally , we find that this improved treatment of the babcock - leighton process generates an inverse relationship between meridional flow speed and polar field strength which is suggested by simple physical arguments and also predicted by surface - flux transport simulations .this resolves the discrepancy between kinematic dynamo models and surface flux - transport simulations regarding the dynamics of the surface magnetic field .since the latter drives the evolution of the corona and the heliosphere , our work opens up the possibility of coupling dynamo models of the solar cycle with coronal and heliospheric field evolution models .we want to thank aad van ballegooijen for useful discussions that were crucial for the development of the algorithm mentioned in section [ sec_dblr2pld ] .the computations required for this work were performed using the resources of montana state university and the harvard - smithsonian center for astrophysics .we thank keiji yoshimura at msu , and alisdair davey and henry ( trae ) winter at the cfa for much appreciated technical support .this research was funded by nasa living with a star grant nng05ge47 g and has made extensive use of nasa s astrophysics data system .d.n . acknowledges support from the government of india through the ramanujan fellowship .thanks the uk stfc for financial support . , s. , ichimoto , k. , katsukawa , y. , lites , b. w. , matsuzaki , k. 
, nagata , s. , orozco suárez , d. , shimizu , t. , shimojo , m. , shine , r. a. , suematsu , y. , suzuki , t. k. , tarbell , t. d. , & title , a. m. 2008 , astrophys . j. , 688 , 1374
the emergence of tilted bipolar active regions ( ars ) and the dispersal of their flux , mediated via processes such as diffusion , differential rotation and meridional circulation is believed to be responsible for the reversal of the sun s polar field . this process ( commonly known as the babcock - leighton mechanism ) is usually modeled as a near - surface , spatially distributed α-effect in kinematic mean - field dynamo models . however , this formulation leads to a relationship between polar field strength and meridional flow speed which is opposite to that suggested by physical insight and predicted by surface flux - transport simulations . with this in mind , we present an improved double - ring algorithm for modeling the babcock - leighton mechanism based on ar eruption , within the framework of an axisymmetric dynamo model . using surface flux - transport simulations we first show that an axisymmetric formulation which is usually invoked in kinematic dynamo models can reasonably approximate the surface flux dynamics . finally , we demonstrate that our treatment of the babcock - leighton mechanism through double - ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected , reconciling the discrepancy between surface flux - transport simulations and kinematic dynamo models .
in an online auction , bidders acting independently of each other , randomly place one of bids on a secure server . after a period of independent daily bidding , the server posts a cryptic message on a public website .our results show that for , such a message exists from which each bidder can deduce securely the highest bids , but no message exists to allow any of them to identify securely the winners . in general, suppose that the terminals in observe correlated signals , and that a subset of them are required to compute securely " a given ( single - letter ) function of all the signals . to this end , following their observations , all the terminals are allowed to communicate interactively over a public noiseless channel of unlimited capacity , with all such communication being observed by all the terminals .the terminals in seek to compute in such a manner as to keep its value information theoretically secret from an eavesdropper with access to the public interterminal communication .see figure [ f_sc ] .a typical application arises in a wireless network of colocated sensors which seek to compute a given function of their correlated measurements using public communication that does not give away the value of the function .our goal is to characterize necessary and sufficient conditions under which such secure computation is feasible .we formulate a new shannon theoretic multiterminal source model that addresses the elemental question : _ when can a function _ _ be computed so that its value is independent of the public communication used in its computation _ ?we establish that the answer to this question is innately connected to a problem of secret key ( sk ) generation in which all the terminals in seek to generate secret common randomness " at the largest rate possible , when the terminals in are provided with side information for limited use , by means of public communication from which an eavesdropper can glean only a negligible amount of information about the sk .the public communication from a terminal can be any function of its own observed signal and of all previous communication .side information is provided to the terminals in in the form of the value of , and can be used only for recovering the key .such a key , termed an aided secret key ( ask ) , constitutes a modification of the original notion of a sk in .the largest rate of such an ask , which can be used for encrypted communication , is the ask capacity . 
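for intuition on how such a key enables encrypted communication , a one-time-pad style sketch follows ; the message contents and the key length here are purely illustrative .

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """One-time-pad style combination of equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"highest bid: 42 "             # toy 16-byte message
key = secrets.token_bytes(len(message))   # stands in for the generated secret key
ciphertext = xor_bytes(message, key)      # safe to send over the public channel
assert xor_bytes(ciphertext, key) == message
```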
since a securely computable function for will yield an ask ( for ) of rate equal to its entropy , it is clear that necessarily must satisfy .we show that surprisingly , is a sufficient condition for the existence of a protocol for the secure computation of for .when all the terminals in seek to compute securely , the corresponding ask capacity reduces to the standard sk capacity for .we also show that a function that is securely computed by can be augmented by residual secret common randomness to yield a sk for of optimum rate .we also present the capacity for a general ask model involving _ arbitrary _ side information at the secrecy - seeking set of terminals for key recovery alone .its capacity is characterized in terms of the classic concept of maximum common function " .although this result is not needed in full dose for characterizing secure computability , it remains of independent interest .we do not tackle the difficult problem of determining the minimum rate of public communication needed for the secure computation of , which remains open even in the absence of a secrecy constraint .nor do we fashion efficient protocols for this purpose . instead , our mere objective in this work is to find conditions for the _ existence _ of such protocols .the study of problems of function computation , with and without secrecy requirements , has a long and varied history to which we can make only a skimpy allusion here .examples include : algorithms for exact function computation by multiple parties ( cf .e.g. , ) ; algorithms for asymptotically accurate ( in observation length ) function computation ( cf .e.g. , ) ; exact function computation with secrecy ( cf .e.g. , ) ; and problems of oblivious transfer .our results in section [ s_res ] are organized in three parts : capacity of ask model ; characterization of the secure computability of ; and a decomposition result for the total entropy of the model .proofs are provided in section [ s_pro ] and concluding remarks in section [ s_dis ] .[ s_pre ] let , , be rvs with finite alphabets , respectively .for any nonempty set , we denote .similarly , for real numbers and , we denote .let be the set .we denote i.i.d .repetitions of with values in by with values in . following , given , for rvs we say that is -_recoverable _ from if for some function of . all logarithms and exponentials are with respect to the base .we consider a multiterminal source model for secure computation with public communication ; this basic model was introduced in in the context of sk generation with public transaction .terminals observe , respectively , the sequences , of length .let be a given mapping , where is a finite alphabet . for , the mapping is defined by for convenience , we shall denote the rv by , and , in particular , simply by .the terminals in a given set wish to compute securely " the function for in . to this end, the terminals are allowed to communicate over a noiseless public channel , possibly interactively in several rounds .randomization at the terminals is permitted ; we assume that terminal generates a rv , such that and are mutually independent . 
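the secure-computability conditions discussed here compare the entropy of the function with a secret-key capacity ; a small helper for the first quantity , computed from a joint pmf , is sketched below ( the function names and the toy example are hypothetical ) .

```python
import itertools
import math

def entropy_of_function(joint_pmf, g):
    """H(g(X_1, ..., X_m)) in bits, given a dict mapping tuples of symbol
    values to probabilities and a single-letter function g of those tuples."""
    pushforward = {}
    for x, p in joint_pmf.items():
        pushforward[g(*x)] = pushforward.get(g(*x), 0.0) + p
    return -sum(p * math.log2(p) for p in pushforward.values() if p > 0.0)

# toy usage: three uniform binary sources, g = XOR of the first two
pmf = {x: 1 / 8 for x in itertools.product((0, 1), repeat=3)}
print(entropy_of_function(pmf, lambda x1, x2, x3: x1 ^ x2))  # 1.0 bit
```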
while the cardinalities of range spaces of are unrestricted , we assume that .[ d_pubcomm ] assume without any loss of generality that the communication of the terminals in occurs in consecutive time slots in rounds ; such communication is described in terms of the mappings with corresponding to a message in time slot by terminal , , ; in general , is allowed to yield any function of and of previous communication described in terms of .the corresponding rvs representing the communication will be depicted collectively as where .a special form of such communication will be termed _ noninteractive communication _ if , where , .[ d_sc1 ] for , we say that is -_securely computable _( - sc ) by ( the terminals in ) a given set with from observations of length , randomization and public communication , if + ( i ) is - recoverable from for every , i.e. , there exists satisfying and + ( ii ) satisfies the strong " secrecy condition by definition , an -sc function is recoverable ( as ) at the terminals in and is effectively concealed from an eavesdropper with access to the public communication .[ d_sc2 ] we say that is _ securely computable _ by if is - sc by from observations of length , suitable randomization and public communication , such that .we consider first the case when all the terminals in wish to compute securely the function , i.e. , .our result for this case will be seen to be linked inherently to the standard concept of sk capacity for a multiterminal source model , and serves to motivate our approach to the general case when .[ d_sk ] for , a function of is an -_secret key _ ( -sk ) for ( the terminals in ) a given set and the set pursuing secure computation . ] with , achievable from observations of length , randomization and public communication as above if + ( i ) is -recoverable from for every ; + ( ii ) satisfies the strong " secrecy condition where denotes the set of possible values of .the sk capacity for is the largest rate of -sks for as above , such that .\(i ) the secrecy condition ( [ e_sin ] ) is tantamount jointly to a nearly uniform distribution for ( i.e. , is small ) and to the near independence of and ( i.e. , is small ) .+ ( ii ) for the trivial case , clearly .a single - letter characterization of the sk capacity is provided in .[ t_csk]_ _ the sk capacity equals where with furthermore , the sk capacity can be achieved with noninteractive communication and without recourse to randomization at the terminals in .the sk capacity is not increased if the secrecy condition ( [ e_sin ] ) is replaced by either of the following weaker requirements is not permitted , the converse proof in uses only the first part of ( [ e_sin ] ) or ( [ e_sin ] ) .when randomization is allowed , since the cardinality of the range space of is unrestricted , the converse proof in uses additionally the second part of ( [ e_sin ] ) or ( [ e_sin ] ) . 
] : or we recall from that has the operational significance of being the smallest rate of communication for omniscience " for , namely the smallest rate of suitable communication for the terminals in whereby is -recoverable from at each terminal , with ; here denotes the cardinality of the set of values of .thus , is the smallest rate of interterminal communication among the terminals in that enables every terminal in to reconstruct with high probability all the sequences observed by all the other terminals in with the cooperation of the terminals in .the resulting omniscience for corresponds to total common randomness " of rate .the notion of omniscience , which plays a central role in sk generation for the multiterminal source model , will play a material role in the secure computation of as well .noting that implies a comparison of the conditions in ( [ e_sec ] , [ e_cardboundg ] ) and ( [ e_sin ] ) that must be met by a securely computable and a sk , respectively , shows for a given to be securely computable , it is necessary that remarkably , it transpires that is a sufficient condition for to be securely computable , and constitutes our first result .[ t_sc ] a function is securely computable by if conversely , if is securely computable by , then .theorem [ t_sc ] is , in fact , a special case of our main result in theorem [ t_gsc ] below .let , and let and be -valued rvs with let . from , ( and also theorem [ t_csk ] above ) , , where .since , by theorem [ t_sc ] is securely computable if we give a simple scheme for the secure computation of when , that relies on wyner s well - known method for slepian - wolf data compression and a derived sk generation scheme in , .we can write with being independent separately of and .we observe as in that there exists a binary linear code , of rate , with parity check matrix such that , and so , is -recoverable from at terminal 2 , where the slepian - wolf codeword constitutes public communication from terminal 1 , and where decays to exponentially rapidly in .let be the estimate of thereby formed at terminal 2 .further , let be the location of in the coset of the standard array corresponding to .by the previous observation , too is -recoverable from at terminal 2 . from , , constitutes a perfect " sk for terminals 1 and 2 , of rate , and satisfying also , observe from ( [ e_ex1ii ] ) that and , and for each fixed value of , the ( common ) arguments of and have the same distribution as .hence by ( [ e_ex1iii ] ) , since . then terminal 2 communicates in encrypted form as ( all represented in bits ) , with encryption feasible since by the sufficient condition ( [ e_ex1i ] ) .terminal 1 then decrypts using to recover .the computation of is secure since is small ; specifically , the first term equals since , while the second term is bounded using ( [ e_ex1iv ] ) according to where the inequality follows by fano s inequality and the exponential decay of to .next , we turn to the general model for the secure computability of by a given set . again in the manner of ( [ e_scn ] ) , it is clear that a necessary condition is in contrast , when , is _ not _ sufficient for to be securely computable by as seen by the following simple example .let , and consider rvs with , where is independent of and .let be defined by , , . 
clearly , .therefore , .however , for to be computed by the terminals and , its value must be conveyed to them necessarily by public communication from terminal .thus , is not securely computable .interestingly , the secure computability of can be examined in terms of a new sk generation problem that is formulated next .we consider an extension of the sk generation problem in definition [ d_sk ] , which involves additional side information that is correlated with and is provided to the terminals in for use in _ only the recovery stage _ of sk generation ; however , the public communication remains as in definition [ d_pubcomm ] .formally , the extension is described in terms of generic rvs , where the rvs too take values in finite sets , in .we note that the full force of this extension will not be needed to characterize the secure computability of ; an appropriate particularization will suffice . nevertheless , this concept is of independent interest .a function of is an - secret key aided by side information ( -ask ) for the terminals , , achievable from observations of length , randomization and public communication if it satisfies the conditions in definition [ d_sk ] with in the role of in condition ( i ) . the corresponding ask capacity is defined analogously as in definition [ d_sk ] .in contrast with the omniscience rate of that appears in the passage following theorem [ t_csk ] , now an underlying analogous notion of omniscience will involve total common randomness of rate exceeding .specifically , the enhanced common randomness rate will equal the entropy of the maximum common function " ( mcf ) of the rvs , introduced for a pair of rvs in ( see also ( * ? ? ?* problem 3.4.27 ) ) .[ d_mcf] for two rvs with values in finite sets , the equivalence relation in holds if there exist and sequences in with , and in satisfying and , .denote the corresponding equivalence classes in by .similarly , let denote the equivalence classes in . as argued in , and for , the mcf of the rvs is a rv with values in and pmf for rvs values in finite alphabets , we define the recursively by with as above .[ d_mcfn ] with denoting i.i.d .repetitions of the rv , we define note that is a function of _ each _ individual . as justification for the definition ( [ e_mcf : rec ] ) , consider a rv that satisfies and suppose for any other rv satisfying ( [ e_mcf : justification ] ) that .then lemma [ l_mcf ] below shows that must satisfy .the following result for the mcf of rvs is a simple extension of the classic result for ( * ? ? ?* theorem 1 ) .[ l_mcf ] given , if is -recoverable from for each , then * proof : * the proof involves a recursive application of ( * ? ? ?* lemma , section 4 ) to in ( [ e_mcf : rec ] ) , and is provided in appendix a. we are now in a position to characterize ask capacity . in a manner analogous to theorem [ t_csk ] , this is done in terms of and the smallest rate of communication for each terminal in to attain omniscience that corresponds to i.i.d .repetitions of .[ t_cask ] the ask capacity equals the proof of theorem [ t_cask ] is along the same lines as that of theorem [ t_csk ] and is provided in appendix b. 
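the maximum common function defined above can be computed mechanically from a joint pmf : two symbols are equivalent exactly when they are connected in the bipartite graph whose edges are the pairs of positive probability , and the mcf is the index of the connected component . the sketch below handles two variables ; the multi-variable mcf of definition [ d_mcf ] applies this construction recursively . function names are hypothetical .

```python
from collections import defaultdict

def mcf_labels(joint_pmf):
    """Map each x-symbol and y-symbol to the index of its connected component
    in the bipartite graph whose edges are the (x, y) pairs with positive
    probability.  mcf(X, Y) is then the component label of the observed symbol,
    and its entropy follows from the joint pmf."""
    adj = defaultdict(set)
    for (x, y), p in joint_pmf.items():
        if p > 0.0:
            adj[("x", x)].add(("y", y))
            adj[("y", y)].add(("x", x))
    label, comp = {}, 0
    for start in list(adj):
        if start in label:
            continue
        stack = [start]
        while stack:                       # depth-first search over one component
            node = stack.pop()
            if node in label:
                continue
            label[node] = comp
            stack.extend(n for n in adj[node] if n not in label)
        comp += 1
    x_label = {s: c for (kind, s), c in label.items() if kind == "x"}
    y_label = {s: c for (kind, s), c in label.items() if kind == "y"}
    return x_label, y_label

# toy usage: X uniform on {0,1,2,3}, Y = X mod 2 -> the mcf is X mod 2 (1 bit)
pmf = {(x, x % 2): 0.25 for x in range(4)}
xl, yl = mcf_labels(pmf)
assert xl[0] == xl[2] and xl[1] == xl[3] and xl[0] != xl[1]
```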
the remark following theorem [ t_csk ] also applies to the ask capacity , as will be seen from the proof of theorem [ t_cask ] .if is securely computable by the terminals in , then constitutes an ask for under the constraint ( [ e_sin ] ) , of rate , with side information in the form of provided only to the terminals in in the recovery stage of sk generation .thus , a necessary condition for to be securely computable by , in the manner of ( [ e_scn ] ) , is where with by particularizing theorem [ t_cask ] to the choice of as above , the right side of ( [ e_gscn1 ] ) reduces to our main result says that the necessary condition ( [ e_gscn1 ] ) is tight .[ t_gsc ] a function is securely computable by if furthermore , under the condition above , is securely computable with noninteractive communication and without recourse to randomization at the terminals in .conversely , if is securely computable by , then .\(i ) it is easy to see that .in particular , the second inequality holds since in the context of the side information for recovery in ( [ e_zm ] ) is not provided to the terminals in and by noting that a sk for is also a sk for .\(ii ) observe in example 2 that and so , by theorem [ t_gsc ] , is not securely computable as noted earlier . for the auction example in section [ s_int ] , and i.i.d .rvs distributed uniformly on , while .let and . then , straightforward computation yields for that and for both that where , by theorem [ t_csk ] , by theorem [ t_gsc ] , is securely computable whereas is not .in fact , is not securely computable by _ any _ terminal .this , too , is implied by theorem [ t_gsc ] upon nothing that for each and a restricted choice , where the first equality is a consequence of remark ( i ) following theorem [ t_gsc ] and remark ( ii ) after definition [ d_sk ] .the sufficiency condition ( [ e_gscs ] ) prompts the following two natural questions : does the difference possess an operational significance ? if is securely computable by the terminals in , clearly forms a sk for .can be augmented suitably to form a for of maximum achievable rate ?the answers to both these questions are in the affirmative .in particular , our approach to the second question involves a characterization of the minimum rate of communication for omniscience for , under the additional requirement that this communication be independent of .specifically , we show below that for a securely computable function , this minimum rate remains ( see ( [ e_rco ] ) ) . addressing the first question, we introduce a rv such that constitutes an -ask for with side information as in ( [ e_zm ] ) and satisfying the additional requirement let the largest rate of such an ask be .observe that since is required to be nearly independent of , where is the public communication involved in its formation , it follows by ( [ e_kg ] ) that is nearly independent of .turning to the second question , in the same vein let be a rv such that constitutes an -sk for and satisfying ( [ e_kg ] ) .let denote the largest rate of .as noted above , will be nearly independent of , where is the public communication involved in the formation of .[ p_csg ] for , it holds that \(i ) for the case , both ( i ) and ( ii ) above reduce to .\(ii ) theorem [ t_csk ] and proposition [ p_csg](ii ) lead to the observation which admits the following heuristic interpretation . 
the total randomness " that corresponds to omniscience decomposes into three nearly mutually independent " components : a minimum - sized communication for omniscience for and the independent parts of an optimum - rate sk for composed of and .the necessity of ( [ e_gscn1 ] ) follows by the comments preceding theorem [ t_gsc ] . the sufficiency of ( [ e_gscs ] )will be established by showing the existence of _ noninteractive _ public communication comprising source codes that enable omniscience corresponding to at the terminals in , and thereby the computation of .furthermore , the corresponding codewords are selected so as to be simultaneously independent of , thus assuring security .first , from ( [ e_gscs ] ) and ( [ e_gscn2 ] ) , there exists such that , using . for each and , consider a ( map - valued ) rv that is uniformly distributed on the family of all mappings .the rvs are taken to be mutually independent .fix , with and .it follows from the proof of the general source network coding theorem ( * ? ? ?* lemma 3.1.13 and theorem 3.1.14 ) that for all sufficiently large , provided , where vanishes exponentially rapidly in .this assertion follows exactly as in the proof of ( * ? ? ?* proposition 1 , with ) but with there equal to rather than , .in particular , we shall choose such that below we shall establish that for all sufficiently large , to which end it suffices to show that since then it would follow from ( [ e_sc : rel ] ) , ( [ e_sc : sec ] ) and definition of in ( [ e_gscn1 ] ) that this shows the existence of a particular realization of such that is -sc from + for each . it now remains to prove ( [ e_sc : bsec ] ) .fix and note that for each , with denoting the cardinality of the ( image ) set , where the right side above denotes the ( kullback - leibler ) divergence between the joint pmf of + , and the product of the uniform pmf on and the pmf of + . using ( * ? ? ?* lemma 1 ) , the right side of ( [ e_sc : sin ] ) is bounded above further by where is the variational distance between the pmfs in the divergence above . therefore , to prove ( [ e_sc : bsec ] ) , it suffices to show that on account of the fact that , and the exponential decay to of .defining we have by ( [ e_sc : rel ] ) that .thus , in ( [ e_sc : bbsec ] ) , since is independent of .thus , ( [ e_sc : bbsec ] ) , and hence ( [ e_sc : bsec ] ) , will follow upon showing that for all sufficiently large . fix .we take recourse to lemma [ l_b ] in appendix c , and set , and for some mapping . by the definition of , so that condition ( [ e_bound0])(i ) preceding lemma [ l_b ] is met .condition ( [ e_bound0])(ii ) , too , is met since conditioned on the events in ( [ e_bound0])(ii ) , only those can occur that are determined uniquely by their components . 
upon choosing ,\end{aligned}\ ] ] in ( [ e_bound1 ] ) ,the hypotheses of lemma [ l_b ] are satisfied with , for an appropriate exponentially vanishing .then , by lemma [ l_b ] , with \right\rceil,\quad r ' = \left\lceil\exp\left[n\left(\sum_{l \in { { \mathcal m}}\backslash\{i\ } } r_l + \frac{\delta}{6}\right)\right]\right\rceil,\end{aligned}\ ] ] and with in the role of , we get from ( [ e_bc ] ) and ( [ e_sc : r_m ] ) that decays to doubly exponentially in , which proves ( [ e_sc : bbbsec ] ) .this completes the proof of theorem [ t_gsc ] .\(i ) since the rv , with nearly independent components , constitutes an ask for with side information as in ( [ e_zm ] ) , it is clear that in order to prove the reverse of ( [ e_propinq1 ] ) , we show that is an achievable ask rate for that additionally satisfies ( [ e_kg ] ) . first, note that in the proof of theorem [ t_gsc ] , the assertions ( [ e_sc : rel ] ) and ( [ e_sc : bsec ] ) mean that for all sufficiently large , there exists a public communication , say , such that and is -recoverable from for every , with .fix , where is as in the proof of theorem [ t_gsc ] .apply lemma [ l_b ] , choosing ,\end{aligned}\ ] ] whereby the hypothesis ( [ e_bound1 ] ) of lemma [ l_b ] is satisfied for all sufficiently large .fixing \right\rceil,\end{aligned}\ ] ] by lemma [ l_b ] a randomly chosen of rate will yield an ask which is nearly independent of ( and , in particular , satisfies ( [ e_kg ] ) ) with positive probability , for all sufficiently large .\(ii ) the proof can be completed as that of part ( i ) upon showing that for a securely computable , for all and sufficiently large , there exists a public communication that meets the following requirements : its rate does not exceed ; ; and is -recoverable from for every . to that end , for as in the proof of theorem [ t_gsc ] , consider that satisfies for all and noting that .further , for and as in that proof , define a ( map - valued ) rv that is uniformly distributed on the family of all mappings from + to , .the random variables + are taken to be mutually independent .define as the set of mappings for which there exists a such that is -recoverable from + for every . by the general source network coding theorem ( * ? ? ?* lemma 3.1.13 and theorem 3.1.14 ) , applied to the random mapping , it follows that for all sufficiently large , this , together with ( [ e_sc : rel ] ) and ( [ e_sc : bsec ] ) in the proof of theorem [ t_gsc ] , imply that for a securely computable there exist and for which the public communication satisfies the aforementioned requirements .finally , apply lemma [ l_b ] with and as in ( [ e_applybc2 kg ] ) but with and \right\rceil.\ ] ] as in the proof above of part ( i ) , a sk of rate which is nearly independent of ( and , hence , satisfies ( [ e_kg ] ) ) exists for all sufficiently large .we obtain simple necessary and sufficient conditions for secure computability involving function entropy and ask capacity . 
the latter is the largest rate of a sk for a new model in which side information is provided for use in only the recovery stage of sk generation .this model could be of independent interest .in particular , a function is securely computable if its entropy is less than ask capacity of an associated secrecy model .the difference is shown to correspond to the maximum achievable rate of an ask which is independent of the securely computed function and , together with it , forms an ask of optimum rate .also , a function that is securely computed by can be augmented to form a sk for of maximum rate .our results extend to functions defined on a block of symbols of _ fixed _ length in an obvious manner by considering larger alphabets composed of supersymbols of such length .however , they do not cover functions of symbols of increasing length ( in ) . in our proof of theorem [ t_gsc ] ,g was securely computed from omniscience at all the terminals in that was attained using noninteractive public communication . however , as example 1 illustrates , omniscience is not necessary for the secure computation of , and it is possible to make do with communication of rate less than using an interactive protocol . a related unresolved question is : what is the minimum rate of public communication for secure computation ? a natural generalization of the conditions for secure computability of by given here entails a characterization of conditions for the secure computability of multiple functions by of , respectively .this unsolved problem , in general , will not permit omniscience for any .for instance with , , , and and being independent , the functions , , are securely computable trivially , but not through omniscience since , in this example , public communication is forbidden for the secure computation of .the proof of lemma [ l_mcf ] is based on ( * ? ? ? * lemma , section 4 ) , which is paraphrased first .let the rvs and take values in the finite set and , respectively .for a stochastic matrix , let be the ergodic decomposition ( into communicating classes ) ( cf .e.g. , ) of based on .let denote a fixed ergodic class of ( the -fold cartesian product of ) on the basis of ( the -fold product of ) .let and be any ( nonempty ) subsets of and , respectively . for as above , assume that ,\\\label{e_gk1 } { { \pr}\left(r^n \in { { { \mathcal r}}^{(n)}}\mid q^n \in { { { \mathcal d}}^{(n)}}\right ) } & \geq \exp[-n { \epsilon}_n],\end{aligned}\ ] ] where .then ( as stated in ( * ? ? ?* bottom of p. 157 ) ) , ,\end{aligned}\ ] ] for a ( positive ) constant that depends only on the pmf of and on .a simple consequence of ( [ e_gk2 ] ) is that for a given ergodic class and disjoint subsets of it , and subsets ( not necessarily distinct ) of , such that , satisfy ( [ e_gk1 ] ) , then .\end{aligned}\ ] ] note that the ergodic decomposition of on the basis of for the specific choice corresponds to the set of values of defined by ( [ e_mcfn ] ) . next ,pick , , and define the stochastic matrix by the ergodic decomposition of on the basis of ( with as in ( [ e_stochw ] ) ) will correspond to the set of values of , recalling ( [ e_mcf : rec ] ) . since is -recoverable from , note that also is -recoverable in the same sense , recalling definition [ d_mcfn ] .this implies the existence of mappings , satisfying for each fixed value of , let let denote the set of s such that then , as in ( * ? ? 
?* proposition 1 ) , it follows from ( [ e_recov ] ) that next , we observe for each fixed , that the disjoint sets lie in a fixed ergodic class of ( determined by ) .since ( [ e_gk1 ] ) are compatible with the assumption ( [ e_gk1 ] ) for all sufficiently large , we have from ( [ e_gkbound ] ) that ,\end{aligned}\ ] ] where depends on the pmf of and in ( [ e_stochw ] ) , and where . finally , where by ( [ e_ce ] ) and ( [ e_boundce ] ) .considering first the achievability part , fix . from the result for a general source network ( * ? ? ?* theorem 3.1.14 ) it follows , as in the proof of ( * ? ? ?* proposition 1 ) , that for and all sufficiently large , there exists a noninteractive communication with such that is -recoverable from .therefore , is -recoverable from .the last step takes recourse to lemma [ l_b ] in appendix c. specifically , choose , , , , ] , where } ) + \frac{1}{n}h\left(u_i , x_i^n \mid \mathbf{f } , k , u_{[1 , i-1 ] } , x^n_{[1 , i-1]}\right ) - h(u_i).\end{aligned}\ ] ] consider , . for , we have furthermore , since is -recoverable from and for with , } , u_{b^c } , x^n_{b^c } , z_j^n\right ) + \frac{1}{n}h\left(k\mid u_{b^c } , x^n_{b^c } , z_j^n , \mathbf{f}\right ) \\\nonumber & \quad + \frac{1}{n}\sum_{i \in b } h\left(u_i , x_i^n \mid u_{b^c \cap [ i+1,m ] } , x^n_{b^c \cap [ i+1 , m ] } , z_j^n , \mathbf{f } , k , u_{[1 , i-1 ] } , x^n_{[1 , i -1]}\right ) \\\nonumber & \leq \frac{1}{n}\sum_{i \in b}\left [ \sum_{\nu : \nu \equiv i \mod m } h\left(f_\nu \mid f_{[1 , \nu-1]}\right ) + h\left(u_i , x_i^n \mid \mathbf{f } , k , u_{[1 , i-1 ] } , x^n_{[1 , i -1]}\right)\right ] + \frac{{\epsilon}_n \log where it follows from ( [ e_ask1 ] ) and ( [ e_ask2])-([e_ask4 ] ) that where from ( [ e_ask4 ] ) , and therefore then , ( [ e_ask5 ] ) , ( [ e_ask6 ] ) imply the proof is completed using the second part of ( [ e_sin ] ) directly , or the second part of ( [ e_sin ] ) in the manner of ( * ? ? ?* theorem 3 ) .this completes the converse part .our proof of achievability in theorem [ t_cask ] and sufficiency in theorem [ t_gsc ] rely on a balanced coloring lemma " in ; we state below a version of it from .[ l_b0 ] ( * ? ? ?* lemma 3.1 ) let be any family of pmfs on a finite set , and let be such that satisfies for some . then the probability that a randomly selected mapping fails to satisfy simultaneously for each , is less than .in contrast to the application of lemma [ l_b0 ] in ( * ? ? ?* lemma b.2 ) , our mentioned proofs call for a balanced coloring of a set corresponding to a rv that differs from another rv for which probability bounds are used . however , both rvs agree with high probability when conditioned on a set of interest .consider rvs with values in finite sets , respectively , where is a function of , and a mapping .for , let be a subset of such that + ( i ) ; + ( ii ) given there exists satisfying then the following holds .[ l_b ] let the rvs and the set be as above .further , assume that then , a randomly selected mapping fails to satisfy with probability less than for a constant .* proof : * using the condition ( i ) in the definition of , the left side of ( [ e_bc ] ) is bounded above by therefore , it is sufficient to prove that with probability greater than for a constant .+ let . then , since we get from the extremities above that for and satisfying therefore , by ( [ e_b1 ] ) and ( [ e_bound1 ] ) , it follows that which is the same as the bound in ( [ e_b2 ] ) will now play the role of ( * ? ? ?* inequality ( 50 ) , p. 
3059 ) and the remaining steps of our proof , which are parallel to those in ( * ? ? ?* lemma b.2 ) , are provided here for completeness .setting we get that next , defining it holds for , also , further , for , if then from ( [ e_e3 ] ) , we have therefore , recalling the conditions that define in ( [ e_bound0 ] ) , we have for that where second equality is by ( [ e_bound0 ] ) , and the previous inequality is by ( [ e_e2a ] ) , ( [ e_e3a ] ) and ( [ e_d1 ] ) . also , using ( [ e_d2 ] ) , ( [ e_e2 ] ) , we get now , the left side of ( [ e_bcii ] ) is bounded , using ( [ e_ed2 ] ) , as using ( [ e_ed1 ] ) , the family of pmfs satisfies the hypothesis ( [ e_boundb0 ] ) of lemma [ l_b0 ] with replaced by and replaced by ; assume that so as to meet the condition following ( [ e_boundb0 ] ) .the mentioned family consists of at most pmfs .therefore , using lemma [ l_b0 ] , with probability greater than for a constant .this completes the proof of ( [ e_bcii ] ) , and thereby the lemma .the authors thank sirin nitinawarat for helpful discussions .n. ma , p. ishwar and p. gupta , `` information - theoretic bounds for multiround function computation in collocated networks , '' _ ieee international symposium on information theory ( isit ) _ , pp . 23062310 , 2009 .
a subset of a set of terminals that observe correlated signals seeks to compute a given function of the signals using public communication . it is required that the value of the function be kept secret from an eavesdropper with access to the communication . we show that the function is securely computable if and only if its entropy is less than the `` aided secret key '' capacity of an associated secrecy generation model , for which a single - letter characterization is provided .
index terms : aided secret key , balanced coloring lemma , function computation , maximum common function , omniscience , secret key capacity , secure computability .
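as an addendum to this paper , a purely illustrative monte carlo sketch ( python ) of the random - binning idea behind the `` balanced coloring lemma '' invoked in appendix c : a randomly selected mapping of a large alphabet into a few colors splits the mass of any pmf with small maximum point mass nearly evenly . the constants of the lemma are not reproduced here ; every size and parameter below is an assumption chosen only for illustration .

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative sizes only; these are assumptions, not the constants of the lemma
alphabet_size = 200_000   # size of the finite set on which the pmfs live
num_colors = 8            # size of the range of the random mapping phi
num_pmfs = 5              # a small family of pmfs

# pmfs whose maximum point mass is far below 1/num_colors, in the spirit
# of the hypothesis of the balanced coloring lemma
pmfs = rng.random((num_pmfs, alphabet_size))
pmfs /= pmfs.sum(axis=1, keepdims=True)

# a randomly selected mapping phi from the alphabet into the colors
phi = rng.integers(0, num_colors, size=alphabet_size)

# a "balanced" coloring gives every color class probability close to
# 1/num_colors simultaneously for every pmf in the family
for i, p in enumerate(pmfs):
    class_mass = np.bincount(phi, weights=p, minlength=num_colors)
    deviation = np.max(np.abs(class_mass - 1.0 / num_colors))
    print(f"pmf {i}: max |mass(color) - 1/r| = {deviation:.2e}")
```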
protein folding kinetics is usually modeled in either of three ways . first , there are mass - action models that capture the amplitudes and decay rates of the exponentials in the folding or unfolding relaxation process .mass - action models are useful for cataloging the different types of kinetic behavior , but give no insight into molecular structures or mechanisms. such models do not predict other experimental properties , such as -values .second , there are all - atom or lattice model simulations that can explore sequence - structure relationships ( see , e.g. , ) .they are usually limited by computational power to short time scales and to studying restricted conformational ensembles .third , between these macroscopic and microscopic extremes , another type of model has recently emerged .this class of models uses knowledge of the native structure to infer the sequences of folding events .some of these models define partially folded states with one or two contiguous sequences of native - like ordered residues .others are based on a go - model energy function that enforces the global stability of the native state .we describe here a folding model of the third type .our model uses knowledge of the native structure to predict the kinetics .however , it differs from previous models in several respects .first , our model focuses on chain entropies and estimates loop lengths from the graph - theoretical concept of effective contact order eco ( see below ) .we follow time sequences of loop - closure events because we expect that these events reveal how the kinetics is encoded in the native structure .we assume that folding proceeds mostly through closures of small loops , and that large - loop closures are much slower and less important processes .second , our model focuses on _ contacts _ within the chain , not on whether _ residues _ are native - like or not , because we think the formation of contacts is a more physical description of the folding process .therefore , in our model partially folded states are characterized by formed contacts , not by contiguous stretches of native - like ordered residues as in other simple models .third , the folding kinetics is described by a master equation that can be solved directly for the macrostates considered here , without stochastic simulations such as molecular dynamics or monte carlo .hence the present treatment can handle the full spectrum of temporal events .the present work is related to a recent model of protein zipping .our fundamental units of protein structure are _contact clusters_. a contact cluster is a collection of contacts that is localized on a contact map , corresponding roughly to the main structural elements of the native structure .examples of contact clusters are turns , -helices , -strand pairings , and tertiary pairings of helices .a central quantity in our models is the effective contact order ( eco) . 
the eco is the length of the loop that has to be closed in order to form a contact , given a set of previously formed contacts or contact clusters .the premise is that the formation of the _ nonlocal _ contact clusters requires the prior formation of other , more _ local _ , clusters .our model predicts average -values for secondary structural elements that are in good agreement with the experimentally observed values for several two - state proteins .it shows that -value distributions can be understood from loop - closure events that are defined by the native topology of a protein .the importance of topology for routes and -values has also been previously noted by other groups . to compute the dynamics, we use a master equation .several previous studies of the folding kinetics of lattice heteropolymer models have also been based on master equation methods .these methods have the advantage that they require no _ ad hoc _ assumptions about what the transition state is .the transition state emerges in a direct physical way from the solution to the master equation .however , the lattice models are too simplified to treat specific amino acid sequences or specific protein structures .lattice models focus on transitions between _ microstates _ , the individual chain conformations , since these are the fundamental units of structure in such models .our present master equation describes transitions between _ macrostates _ , defined by the contact clusters of a given protein structure . in this way, the present model aims to make closer contact with experiments .to compute the folding kinetics , we start with the native contact map , the matrix in which element equals 1 if the residues and are in contact , and equals 0 otherwise .two residues are defined as being in contact if the distance between their or atoms is less than 6 angstroms .next , we divide the native contact map into contact clusters .each contact cluster corresponds to a structural element of the protein .two contacts and are defined as being in the same cluster if they are close together on the contact map , according to the distance criterion that .we define two types of clusters : local and nonlocal .clusters are _ local _ if they contain at least one local contact having contact order .local clusters include helices , turns , or -hairpins , for example .a cluster is _ nonlocal _ if it has no local contacts ; examples include -strand pairings other than hairpins , and the tertiary interactions of helices . to qualify as nonlocal ,a cluster must also have more than two contacts ; isolated nonlocal contacts are not considered to be clusters .similarly , we do not consider as contributing to clusters any ` peripheral ' contacts with a minimum distance to the other contacts in the cluster .in general , typical contact maps have only a few isolated nonlocal or peripheral contacts .[ clusters ] shows examples of clusters , specifically for chymotrypsin inhibitor 2 ( ci2 ) and the src sh3 domain . by our criteria , ci2 has 5 local clusters and 2 nonlocal clusters ( and ) , and the src sh3 domain has 6 local clusters and 2 nonlocal clusters ( rt- and ) .we assume that each cluster is either formed or not ; we neglect partial degrees of formation .thus , for a protein with clusters , there are possible states that describe the progression to the native state .each of these macrostates is characterized by a vector , where indicates that cluster is formed and indicates that cluster is not formed . 
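as a concrete illustration of the construction just described , the following python sketch builds a contact map with the 6 angstrom cutoff and groups contacts into clusters by closeness on the map . the paper's exact closeness criterion , its choice of atoms , and its handling of sequence - local neighbours are not specified numerically above , so the corresponding thresholds in the sketch are assumptions .

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def contact_map(coords, cutoff=6.0, skip=2):
    """native contact map: entry (i, j) is 1 when residues i and j lie
    within `cutoff` angstroms of each other; the `skip` nearest sequence
    neighbours are excluded (an assumed convention)."""
    dist = squareform(pdist(coords))
    cmap = (dist < cutoff).astype(int)
    n = len(coords)
    for i in range(n):
        cmap[i, max(0, i - skip):i + skip + 1] = 0
    return np.triu(cmap)

def contact_clusters(cmap, map_cutoff=3.0):
    """group contacts that are close together on the contact map; the
    paper's numerical closeness criterion is not given above, so
    `map_cutoff` is an assumed placeholder."""
    contacts = np.argwhere(cmap == 1)            # list of (i, j) pairs
    if len(contacts) < 2:
        return [contacts]
    tree = linkage(pdist(contacts, metric="chebyshev"), method="single")
    labels = fcluster(tree, t=map_cutoff, criterion="distance")
    return [contacts[labels == k] for k in np.unique(labels)]

# toy example: a random compact chain of 40 "residues", one point each
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=2.0, size=(40, 3)), axis=0)
clusters = contact_clusters(contact_map(coords))
print(f"{len(clusters)} contact clusters; a protein with m clusters has "
      f"2**m macrostates, each a binary vector of formed/unformed clusters")
```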
in our model ,the free energy of the protein as a function of the state of cluster formation is given by : \label{statefs}\ ] ] each cluster that is formed ( ) contributes to the free energy of the state with two terms : a state - dependent free energy of loop closure ( ` initiation ' free energy ) , and a free energy for forming the cluster contacts ( ` propagation ' free energy ) .here , is a loop - closure parameter .the quantity is the _ initiation eco _ for cluster .the initiation eco of a cluster is the length of the smallest loop that must be closed in order to form that cluster from the other existing clusters . for a local cluster ,the initiation eco is the smallest co among the contacts . for a nonlocal cluster ,the initiation eco depends on the presence of other clusters in the state . in general, the initiation eco also depends on the sequence through which those clusters are formed .however , in order to apply the master equation formalism , we need a free energy and thus we require a definition of initiation eco that is only a function of state . for this purpose , we use the following scheme . if only one nonlocal cluster is formed in a certain state , the initiation eco of that cluster is the smallest eco among the cluster contacts , given all the local clusters formed in that state . if multiple nonlocal clusters are present in a state , we consider all the possible sequences along which these clusters can form , and determine the one having the smallest sum of ecos . for instance , for a state with two nonlocal clusters and , there are two sequences : ( 1 ) , and ( 2 ) .the minimum ecos for the clusters are determined sequentially : and along sequence ( 1 ) , and and along sequence ( 2 ) .if is smaller than , the initiation ecos and of the clusters and in the given state are taken to be and .the initiation ecos and are an estimate for the smallest loop lengths required to form the two clusters in the state . in eq .( [ statefs ] ) , the free energy cost of the loops is estimated by a simple linear approximation in the loop length . this is not unreasonable since the range of relevant ecos only spans roughly one order of magnitude , from about to or 40 .in general , determining the free energy of a chain molecule with multiple constraints or contacts is a complicated and unsolved problem . for the simpler problem of hairpin - like loop closures, several estimates have been given in the literature ( see , e.g. , ) . in principle, this model could treat the detailed energetics of each folding route , if each of the clusters were characterized by its own free energy .but here we consider a simpler version of the model .we assume that there are only two parameters for the free energy of formation : for propagating any local cluster , and for propagating any nonlocal cluster . to obtain two - state folding and agreement with experimental -values, we find that must be nonnegative and must be negative .this is consistent with the experimental observation that local structures , such as helices or -hairpins , are generally unstable in isolation .similar in spirit , the diffusion - collision model of karplus and weaver assumes that microdomains , e.g. 
helices , are individually unstable .thus , the rate - limiting barrier to folding in our model turns out to be the formation of mostly local structures needed to reduce the ecos of nonlocal clusters .the driving force for overcoming this barrier is the favorable free energy of assembling the nonlocal clusters .the predicted free energy landscape of the src sh3 domain is shown in fig .[ landscape ] , using the parameters and , where is boltzmann s constant temperature .the value of is chosen so that the equilibrium probability that the two nonlocal clusters rt- and are both folded ( ` native state ' ) is 0.9 , which gives for src sh3 . with these parameter settings ,we obtain a good agreement with average experimental -values for the src sh3 domain and other two - state folders ( see below ) . for clarity, we show in the figure only a reduced set of states based on the 5 major clusters , , , rt- , and .the three small clusters t , dt , and h have negligible effects on the folding kinetics and on the -values . only states differing by the formation of a single cluster are kinetically connected .the uphill steps in this model either are steps in which a local cluster is formed , or steps involving high ecos .the downhill steps are steps in which a nonlocal cluster is formed with a low eco , or steps in which a local cluster significantly reduces the ecos of previously formed nonlocal clusters .the model predicts two main folding routes . along the upper route ( e ) folds after ( d ) rt- ; along the lower route , they form in the opposite order . along these routes ,the barriers ( highest free energies states ) are the states in which two clusters are formed : bd and bc for the upper route , and ac for the lower route . in this section ,we describe the folding dynamics .we use the master equation , , \ ] ] which gives the time evolution of the probability that the protein is in state at time . here, is the transition rate from state to .the master equation can be written in matrix form where is the vector with elements , and the matrix elements of are given by the transition rates are given in terms of the free energies by ^{-1 } \label{transrates}\ ] ] where is a reference time scale .the only transitions that are assigned to have nonzero rates are those incremental steps that change the state by a single cluster unit .this is enforced by the term in eq .( [ transrates ] ) where the kronecker is one for and zero otherwise .the condition is only satisfied by pairs of states and with for a single cluster , and with for all other clusters .the transition rates ( [ transrates ] ) satisfy detailed balance , where ] .another standard choice satisfying detailed balance is the metropolis dynamics , which should lead to equivalent results .the detailed balance property of the transition rates implies that the eigenvalues of the matrix are real .one of the eigenvalues is zero , corresponding to the equilibrium distribution , while all other eigenvalues are positive .the solution to the master equation is given by \label{pdis}\ ] ] where is the eigenvector corresponding to the eigenvalue , and the coefficients are determined by the initial condition . for , the probability distribution tends towards the equilibrium distribution where is the eigenvector with eigenvalue .solving the master equation gives a set of eigenvalues , each with its associated eigenvector .each eigenvalue represents a relaxation rate . as initial conditions at , we start from the state in which no clusters are formed. 
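a minimal python sketch of this master - equation machinery , under assumed toy parameters : enumerate the 2**m macrostates , assign each a free energy , connect only states differing by a single cluster with glauber - type rates satisfying detailed balance , and read the folding rate off the smallest nonzero eigenvalue . the parameter values , the stand - in ecos , and the simplified free - energy bookkeeping are illustrative assumptions and do not reproduce the paper's fitted numbers or its state - dependent initiation ecos .

```python
import itertools
import numpy as np

# toy parameters in units of k_B * T; the loop-closure weight and the
# cluster free energies below are assumptions, not the paper's fitted values
m_local, m_nonlocal = 3, 2
g_local, g_nonlocal, gamma = 1.0, -4.0, 0.1
eco = np.array([4, 4, 4, 20, 25])        # stand-in initiation ecos per cluster

m = m_local + m_nonlocal
states = list(itertools.product((0, 1), repeat=m))   # the 2**m macrostates

def free_energy(state):
    """simplified stand-in for the state free energy: each formed cluster
    pays a loop-closure cost gamma * eco and gains a propagation term;
    the state-dependent recomputation of nonlocal ecos is omitted."""
    total = 0.0
    for i, formed in enumerate(state):
        if formed:
            g = g_local if i < m_local else g_nonlocal
            total += gamma * eco[i] + g
    return total

n = len(states)
F = np.array([free_energy(s) for s in states])
W = np.zeros((n, n))                     # generator of dp/dt = W @ p
for a, b in itertools.permutations(range(n), 2):
    if sum(x != y for x, y in zip(states[a], states[b])) == 1:
        # glauber-type rate satisfying detailed balance, reference time = 1
        W[b, a] = 1.0 / (1.0 + np.exp(F[b] - F[a]))
W -= np.diag(W.sum(axis=0))              # columns now sum to zero

rates = np.sort(-np.linalg.eigvals(W).real)   # one rate is (numerically) zero
print("slowest relaxation rate (folding rate):", rates[1])
print("next fastest rates:", rates[2:5])

# the initial condition used in the text: no clusters formed at t = 0
p0 = np.zeros(n)
p0[states.index((0,) * m)] = 1.0
```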
this corresponds to folding from high temperatures or high denaturant concentrations .the signature of two - state kinetics is the existence of one slow relaxation process ( described by a single exponential ) , separated in time from fast relaxations ( a ` burst ' phase ) .[ spectra ] shows the eigenvalue spectra for ci2 and the src sh3 domain , based on using the parameters , local cluster free energy , and a nonlocal cluster free energy chosen so that the equilibrium ` native ' population with all nonlocal clusters formed has probability 0.9 .the latter condition leads to for ci2 , and for src sh3 .[ evolution ] shows the predicted folding dynamics for the src sh3 domain .the spectra in fig .[ spectra ] show that for these proteins , the eigenvalues do indeed separate into a slow single - exponential step and a burst phase , consistent with the experimental observation of two - state behavior .the slowest relaxation rate is about one order of magnitude smaller than the other nonzero eigenvalues ( see fig .[ spectra ] ) . at times , the probability distribution ( [ pdis ] ) is well approximated by ] .this gives a broad non - two - state spectrum .hence , the separation of time scales and the two - state cooperativity arise in this model from the coupling of the clusters via the loop - closure term in eq .( [ statefs ] ) . to see the magnitude of the barrier , note that the folding rate is related to the height of the energy barrier on the folding landscape . for comparison , consider a mass - action model with three states d t n ( denatured state , ` transition ' state , native state ) and transition rates as in eq .( [ transrates ] ) .the folding rate is given , to a very good approximation , by ] is in good agreement with the folding rate ( see fig .[ spectra ] ) .experiments have been interpreted either as indicating that burst phases involve structure formation or that burst phases are processes of non - structured polymer collapse , depending on the protein and the experimental method . in our model ,the burst phase is a process of structure formation .non - structured collapse is beyond the scope , or resolution , of our model , because the model has only a single fully unstructured state the state in which none of the clusters is formed .the burst phase in our model captures fast preequilibration events within the denatured state in response to initiating the folding conditions at . in the model, this denatured state is an ensemble of macrostates on one side of the barrier in the energy landscape ( see fig .[ landscape ] ) .it is reasonable to assume that such preequilibration events within the denatured state exist also for real proteins .however , whether these events can be detected as burst phases in experiments should depend on the initial conditions , experimental probes , etc .table 1 : maximum probability and elements for transient states of the src sh3 domain . + [ cols="^,^,^",options="header " , ] during folding or unfolding , certain conformations will be populated transiently .if the populations of those conformations are always small , we call them ` hidden intermediates ' . the population of a hidden intermediate conformation rises to a maximum , , then falls as the protein ultimately becomes fully folded .the term ` hidden ' means that is always small enough that it does not contribute an additional kinetic phase ; i.e. , the folding kinetics is two - state . here , we consider two quantities .( 1 ) we compute for the transient states . 
for simplicity , we consider only the 5 major clusters , , , rt- , and .( 2 ) we look at the elements of the eigenvector , the eigenvector corresponding to the smallest eigenvalue .these elements show how the various conformations grow and decay with rate as folding proceeds .table 1 shows that the maximum population correlates well with the elements of . for a typical route of src sh3 ,fig . [ evolution ] ( bottom ) illustrates the decay of the denatured state and hidden intermediates and the growth of the native state , all with rate .the effects of a mutation on the folding kinetics are often explored through experimental measurements of a -value , which is defined as where is the folding rate of the native protein and is its stability , and and are the corresponding quantities for the mutant protein .since the minimal structural units in our model are clusters of contacts , we do not calculate -values for single - residue mutations .rather , we consider whole helices and strands as units . to compare with experiments, we average the experimental -values over all the residues composing a given secondary structural element . to calculate average -values for secondary structures , we consider ` mutations ' that change the free energy of a contact cluster according to where is the fraction of residues of the secondary structural element that are involved in contacts of the cluster , and is a small energy .for example , if the secondary structural element contains residues , and of these residues appear in contacts of the cluster , we have .note that , where the value is obtained if the whole secondary structural element has contacts in cluster .thus the -value for the secondary structural element is given by eq .( [ phi ] ) with where is the smallest nonzero eigenvalue of the mutant with cluster free energies , and for , we find that the calculated -values are nearly independent of .we choose here . predicted -values are compared with experiments in fig .[ phisi ] .the theoretical -values were calculated with the same parameters for all four proteins ( see figure caption ) .the predicted values agree well with the experimental values .this comparison indicates that the folding kinetics of these proteins is dominated by generic features of the fold topology , rather than by the specific energetic details i.e. , which residues form contacts , how much hydrogen bonds or hydrophobic interactions are worth , the details of sidechain packing , etc . in the case of protein g( see fig .[ phisii ] ) , the experimental -value distribution is largely reproduced by making the additional assumption that the -helix cluster has a free energy , rather than the value that we have otherwise used for local clusters ( see fig .[ phisi ] ) .however , even without changing this parameter , the -value distribution reflects the features of the experimental distribution that the -values for the strands and are larger than those for and .we have developed a simple model of the folding kinetics of two - state proteins .the model aims to predict the folding rates of the fast and slow processes , the folding routes , and -values for a protein , if the native structure is given .the dominant folding routes are found to be those having small ecos , i.e. 
, steps that involve only small ` loop closures ' .the model parameters include : , an intrinsic free energy for loop closure ; , the free energy for propagating contacts in local structures ; and , the free energy for propagating nonlocal contacts .the model predicts that the barrier to two - state folding is the formation of local structural elements like helices and hairpins , and that the steps involving their assembly into larger and more native - like structure are downhill in free energy .99 alm , e. , and baker , d. 1999 .prediction of protein- folding mechanisms from free - energy landscapes derived from native structures .usa _ * 96 * , 11305 - 11310 .alm , e. , morozov , a.v . ,kortemme , t. , and baker , d. 2002 .simple physical models connect theory and experiment in protein folding kinetics ._ j. mol .biol . _ * 322 * , 463 - 476 .bruscolini , p. , and pelizzola , a. 2002 . exact solution of the munoz - eaton model for protein folding .lett . _ * 88 * , 258101 .callender , r.h ., dyer , r.b . ,gilmanshin , r. , and woodruff , w.h . 1998 .fast events in protein folding : the time evolution of primary processes .chem . _ * 49 * , 173 - 202 .chan , h.s . , and dill , k.a . 1990. the effects of internal constraints on the configurations of chain molecules .j. chem .phys . * 92 * , 3118 - 3135 .chan , h.s . , anddill , k.a . 1993 .energy landscapes and the collapse dynamics of homopolymers . _j. chem ._ * 99 * , 2116 - 2127 .cieplak , m. , henkel , m. , karbowski , j. , and banavar , j.r .master equation approach to protein folding and kinetic traps .lett . _ * 80 * , 3654 - 3657 .clementi , c. , nymeyer , h. , and onuchic , j.n . 2000 .topological and energetic factors : what determines the structural details of the transition state ensemble and `` en - route '' intermediates for protein folding ?an investigation for small globular proteins . _j. mol .biol . _ * 298 * , 937 - 953 .daggett , v. 2002 .molecular dynamics simulations of the protein unfolding / folding reaction ._ acc . chem .res . _ * 35 * , 422 - 429 .debe , d.a ., and goddard , w.a .first principles prediction of protein folding rates ._ j. mol .biol . _ * 294 * , 619 - 625 .dill , k.a . , fiebig , k.m . , and chan , h.s . 1993 .cooperativity in protein - folding kinetics ._ proc . natl .usa _ * 90 * , 1942 - 1946 .dill , k.a . , and chan , h.s . 1997 . from levinthal to pathways to funnels .biol . _ * 4 * , 10 - 19 .duan , y. , and kollman , p.apathways to a protein folding intermediate observed in a 1-microsecond simulation in aqueous solution ._ science _ * 282 * , 740 - 744 .eaton , w.a . ,munoz , v. , thompson , p.a ., henry , e.r . , and hofrichter , j. 1998 .kinetics and dynamics of loops , -helices , -hairpins , and fast - folding proteins . _acc . chem .res . _ * 31 * , 741 - 753 .englander , s.w .protein folding intermediates and pathways studied by hydrogen exchange .biophys ._ * 29 * , 213 - 238 .ferguson , n. , and fersht , a.r .early events in protein folding . _ curr . opin .* 13 * , 75 - 81 .fiebig , k.m . , anddill , k.a . 1993 . protein core assembly processes . _j. chem .phys . _ * 98 * , 3475 - 3487 .flammini a. , banavar , j.r . , and maritan , a. 2002 . energy landscape and native - state structure of proteins - a simplified model ._ europhysics letters _ * 58 * , 623 - 629 .galzitskaya , o.v . , and finkelstein , a.v .1999 . a theoretical search for folding / unfolding nuclei in three - dimensional protein structures .usa _ * 96 * , 11299 - 11304 .garcia - mira , m.m . ,sadqi , m. 
, fischer , n. , sanchez - ruiz , j.m . , and munoz , v. 2002. experimental identification of downhill protein folding ._ science _ * 298 * , 2191 - 2195 .gruebele , m. , sabelko , j. , ballew , r. , and ervin , j. 1998 .laser temperature jump induced protein refolding . _res . _ * 31 * , 699 - 707 .hoang , t.x ., and cieplak , m. 2000 .sequencing of folding events in go - type proteins ._ j. chem .* 113 * , 8319 - 8328 .ikai , a. , and tanford , c. 1971 .kinetic evidence for incorrectly folded intermediate states in the refolding of denatured proteins ._ nature _ * 230 * , 100 - 102 .ivankov , d.n . , and finkelstein , a.v .2001 . theoretical study of a landscape of protein folding - unfolding pathways .folding rates at midtransition . _biochemistry _ * 40 * , 9957 - 9961 .karplus , m. , and weaver , d.l .protein - folding dynamics ._ nature _ * 260 * : 404 - 406 .karplus , m. , and weaver , d.l .protein - folding dynamics : the diffusion - collision model and experimental data . _ protein science _ * 3 * : 650 - 668 .klimov , d.k . , and thirumalai , d. 2002 .stiffness of the distal loop restricts the structural heterogeneity of the transition state ensemble in sh3 domains . _j. mol .biol . _ * 317 * , 721 - 737 .leopold , p.e . ,montal , m. , and onuchic , j.n .protein folding funnels : a kinetic approach to the sequence - structure relationship .usa _ * 89 * , 8721 - 8725 .li , l. , and shakhnovich , e.i .constructing , verifying , and dissecting the folding transition state of chymotrypsin inhibitor 2 with all - atom simulations .. natl .usa _ * 98 * , 13014 - 13018 .micheelsen , m.a . ,rischel , c. , ferkinghoff - borg , j. , guerois , r. , and serrano , l. 2003 .mean first - passage time analysis reveals rate - limiting steps , parallel pathways and dead ends in a simple model of protein folding ._ europhys .lett . _ * 61 * , 561 - 566 .ozkan , s.b . ,bahar , i. , and dill , k.a .transition states and the meaning of -values in protein folding kinetics .* 8 * , 765 - 769 .ozkan , s.b . ,dill , k.a . , and bahar , i. 2002 .fast - folding protein kinetics , hidden intermediates , and the sequential stabilization model ._ protein sci . _ * 11 * , 1958 - 1970 .ozkan , s.b . ,dill , k.a . , and bahar , i. 2003 .computing the transition state populations in simple protein models ._ biopolymers _ * 68 * : 35 - 46 .parker , m.j . , and marqusee , s. 2000 . a statistical appraisal of native state hydrogen exchange data : evidence for a burst phase continuum_ j. mol .* 300 * , 1361 - 1375 .portman , j.j ., takada , s. , and wolynes , p.g .2001 . microscopic theory of protein folding rates .i. fine structure of the free energy profile and folding routes from a variational approach ._ j. chem ._ * 114 * , 5069 - 5081 .schonbrun , j. , and dill , k.a .why do proteins fold with single - exponential kinetics .submitted shea , j .- e . , and brooks iii , c.l . 2001 . from folding theories to folding proteins : a review and assessment of simulation studies of protein folding and unfolding .* 52 * , 499 - 535 .shoemaker , b.a . ,wang , j. , and wolynes , p.g .1999 . exploring structures in protein foldingfunnels with free energy functionals : the transition state ensemble . _j. mol .biol . _ * 287 * , 675 - 694 .tsong , t.y . ,baldwin , r.l . , and elson , e.l . 1971 .the sequential unfolding of ribonuclease a : detection of a fast initial phase in the kinetics of unfolding .usa _ * 68 * , 2712 - 2715 .van kampen , n.g .1992 . 
_ stochastic processes in physics and chemistry _ , ( elsevier , amsterdam ) vendruscolo , m. , paci , e. , dobson , c.m . , and karplus , m. 2001 ._ nature ( london ) _ * 409 * , 641 - 645 .weikl , t.r . , and dill , k.a .folding rates and low - entropy - loss routes of 2-state proteins . _j. mol ._ * 329 * , 585 - 598 .weikl , t.r . , and dill , k.a . 2003 .folding kinetics 2-state proteins : effect of circularization , permutation , and crosslinks ._ j. mol ._ , in press
we present a solvable model that predicts the folding kinetics of two - state proteins from their native structures . the model is based on conditional chain entropies . it assumes that folding processes are dominated by small - loop closure events that can be inferred from native structures . for ci2 , the src sh3 domain , tnfn3 , and protein l , the model reproduces two - state kinetics , and it predicts well the average -values for secondary structures . the barrier to folding is the formation of predominantly local structures such as helices and hairpins , which are needed to bring nonlocal pairs of amino acids into contact .
keywords : protein folding kinetics ; two - state folding ; folding cooperativity ; -value analysis ; effective contact order ; loop - closure entropy ; master equation
the capacity region of interference channels ( ifcs ) , comprised of two or more interfering links ( transmitter - receiver pairs ) , remains an open problem .the sum capacity of a non - fading two - user ifc is known only when the interference is either stronger or much weaker at the unintended than at the intended receiver ( see , for e.g. , , and the references therein ) .recently , the sum capacity and optimal power policies for two - user ergodic fading ifcs are studied in and under the assumption that the instantaneous fading channel state information ( csi ) is known at all nodes . a sum capacity analysis for -user ergodic fading channels using ergodic interference alignmentis developed in and .in general , however , the instantaneous csi is not available at the transmitters and often involves feedback from the receivers .thus , it is useful to study the case in which only receivers have perfect csi and the transmitters are strictly restricted to knowledge only of the channel statistics .the sum capacity of multi - terminal networks without transmit csi remains a largely open problem with the capacity known only for ergodic fading gaussian multiaccess channels ( macs ) without transmit csi .for this class of channels , it is optimal for each user to transmit at its maximum average power in each use of the channel ( see for e.g. , or ) .the receiver , with perfect knowledge of the instantaneous csi , decodes the messages from all transmitters jointly over all fading realizations .recently , the sum capacity of ergodic fading two - receiver broadcast channels ( bcs ) without transmit csi has been studied in .the authors first develop the sum capacity achieving scheme for an _ ergodic layered erasure bc _ where the channel from the source to each receiver is modeled as a time - varying version of the binary expansion deterministic channel introduced in . in this model , the transmitted signal is viewed as a vector ( layers ) of bits from the most to the least significant bits .fading is modeled as an erasure of a random number of least significant bits and the instantaneous erasure levels , or equivalently the number of received layers ( or levels ) , are assumed to be known at the receivers . for a layered erasure fading bc ,the authors in show that a strategy of signaling independently on each layer to one receiver or the other based only on the fading statistics achieves the sum capacity .furthermore , the authors also demonstrate the optimality of their achievable scheme to within 1.44 bits / s / hz of the capacity region for a class of high - snr channel fading distributions . 
in this paper, we introduce an ergodic fading layered erasure one - sided ( two - user ) ifc in which , in each channel use , one of the receivers receives a random number of layers from its intended transmitter while the other receiver receives a random number of layers from both transmitters .one can view this channel as a time - varying one - sided version of a two - user binary expansion deterministic ifc introduced and studied in .the model in is a subset of the class of deterministic ifcs whose capacity region is developed in .more recently , in , the sum capacity of a class of one - sided two - user and three - user ifcs in which each transmitter has limited information about its connectivity to the receivers is developed .for the ergodic layered erasure one - sided ifc considered here , we develop outer bounds and identify fading regimes for which the strategies of either decoding or ignoring interference at the interfered receiver is tight .we classify the capacity achieving regimes based on the fading statistics of the direct and interfering links as follows : i ) weak , ii ) strong ( mix of strong but not very strong ( snvs ) and very strong ( vs ) ) , iii ) _ ergodic very strong _( mix of snvs , vs , and weak ) , and ( iv ) a sub - class of mixed interference ( mix of snvs and weak ) .the paper is organized as follows . in section [ sec_2 ]we introduce the channel model . in section [ sec_3 ], we develop the capacity region of a layered erasure multiple - access channel . in section [ sec_4 ] ,we develop outer bounds for the layered erasure ifc and identify the regimes where these bounds are tight using in part the results developed in section [ sec_3 ] .we conclude in section [ sec_5 ] .a two - user ifc consists of two point - to - point transmitter - receiver links where the receiver of each link also receives an interfering signal from the unintended transmitter . in a deterministic ifc ,the input at each transmitter is a vector of bits .we write ^{t} ] .associated with each transmitter and receiver is a non - negative integer that defines the number of bit levels of observed at receiver .the maximum level supported by any link is .specifically , an link erases least significant bits of such that only most significant bits of are received as the least significant bits of .the missing entries have been erased by the fading channel . thus , we have ^{t}\\ & = \mathbf{s}^{q - n_{jk}}x_{k}^{q}\ ] ] where is a shift matrix with entries that are non - zero only for . in a layered erasure ifc ,we model each of the four transmit - receive links as a -bit layered erasure channel__. a -bit layered erasure channel is defined in and summarized below .[ ][def1]a -bit layered erasure channel has input and output ] and = 0. ] to denote the probability mass function and to denote the complementary cumulative distribution function ( cdf ) .it is straightforward to verify that =\sum_{n=1}^{q}\overline{f}_{n}(n)=\sum_{n=1}^{q}\pr \left [ n\geq n\right ] .\label{en_fn}\ ] ] we also write .all logarithms are are taken to the base 2 and the rates are in units of bits per channel use . 
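a small python sketch of the q - bit layered erasure channel of definition [ def1 ] , together with a numerical check of the identity in ( [ en_fn ] ) relating the mean number of delivered layers to the complementary cdf ; the pmf below is an arbitrary assumed example .

```python
import numpy as np

rng = np.random.default_rng(0)
q = 4
# assumed pmf for the number of delivered layers n in {0, 1, ..., q}
pmf = np.array([0.10, 0.20, 0.30, 0.25, 0.15])

def layered_erasure_channel(x_bits, n):
    """keep the n most significant of the q input bits and erase the rest
    (erasures shown as zeros; the receiver is also told n)."""
    y = x_bits.copy()
    y[n:] = 0
    return y

x = rng.integers(0, 2, size=q)
n = rng.choice(np.arange(q + 1), p=pmf)
print("input layers:", x, " received:", layered_erasure_channel(x, n))

# the identity e[n] = sum over n of pr[n >= n], used repeatedly in the text
expectation = float(np.dot(np.arange(q + 1), pmf))
ccdf_sum = float(sum(pmf[k:].sum() for k in range(1, q + 1)))
print("e[n] =", expectation, " sum of pr[n >= n] =", ccdf_sum)
```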
throughout the sequel we use the words transmitters and users interchangeablyconsider a multiple access channel with the two transmitters transmitting and respectively , and a received signal given by where and are the channel states for the links from the two transmitters to the receiver respectively .both random variables and satisfy =1 ] .the capacity region for the layered erasure multiple access channel is given by \\ r_{2 } & \leq\mathbb{e}[n_{2}]\\ r_{1}+r_{2 } & \leq\mathbb{e}[\max(n_{1},n_{2 } ) ] .\end{aligned}\ ] ] we will describe the achievability here since the converse is straightforward .we prove the achievability of a corner point given by the rate pair ,\mathbb{e}[n_{2}]) ] bits from level of user 1 , where we have used the fact that the expected value of an indicator function of an event is the probability of that event .the codebook rate of user 1 at this level therefore allows the receiver to reliably decode the message of user after decoding the messages of user 1 its contribution from the received signal can be canceled and the remaining contribution of the second user can be decoded reliably .thus , across all levels , the average transmission rates of ] at users 1 and 2 , respectively , enable reliable communications .[ exmax ] consider a layered mac with and two fading states : the first state with and occurs with probability , and the second state with and with probability .the above achievability scheme for rate pair reduces to the following . at transmitter 1 ,a rate code is used on the first level while nothing is transmitted on the remaining levels . at transmitter 2 ,a rate 1 code is used at the top three levels while a rate code is used on the fourth level . note that in this case , whenever the channel is in the first state ( ) the top bit of user 1 reaches the receiver noiselessly .hence , the rate codeword of user 1 can be decoded from the occurrences of state 1 .thus , the contribution of the first transmitter can be cancelled by the receiver across both states . following this, the receiver uses the top 3 levels of the second transmitter that are interference - free in both the states and hence a rate of 1 bit / channel use can be achieved for each of the three levels .the fourth level reaches the receiver whenever the system is in the second state ( ) which happens with probability , and thus the codebook of rate can be decoded by the receiver from the occurrences of the second state .outer bounds on the capacity region of a class of deterministic ifcs , of which the binary expansion deterministic ifc is a sub - class , are developed in . for a time - varying ( ergodic )layered erasure ifc with perfect csi at the receivers , we follow the same steps as in ( * ? ? ?* theorem 1 ) while including the csi as a part of the received signal at each receiver .the following theorem summarizes the outer bounds on the capacity region of layered erasure one - sided ifcs .[ th_ob]an outer bound of the capacity region of an ergodic layered erasure one - sided ifc is given by the set of all rate tuples that satisfy [ cob]\\ r_{2 } & \leq\mathbb{e}[n_{22}]\\ r_{1}+r_{2 } & \leq\mathbb{e}[\max(n_{11},n_{22},n_{21},n_{11}+n_{22}-n_{21})].\end{aligned}\ ] ] we now prove the tightness of the sum capacity outer bounds for specific sub - classes of ergodic layered erasure ifcs . for the very strong sub - class ,the achievable scheme also achieves the capacity region . 
for the remaining sub - classes, we achieve a corner point of the capacity region .[ vs ] for a class of very strong layered erasure ifcs for which holds with probability 1 , the sum capacity is ] and ] .the sum capacity of a class of very strong layered erasure ifcs for which with probability 1 is ] .let \leq\mathbb{e}[n_{11}] ] .the transmitter at level sends data from this codebook while the second user at level uses a codebook of rate to transmit the data .the decoding scheme proceeds as follows .the first receiver receives across all channel states , i.e. , on average , =\sum_{n=1}^{q}\pr(n_{11}\geq n)=\mathbb{e}[n_{11}] ] .similarly , the second receiver receives across all channel states , i.e. , on average , =\sum_{n=1}^{q}\pr(n_{21}-n_{22}\geq n)=\mathbb{e}[(n_{21}-n_{22})^{+}] ]. one can proceed similarly for \geq\mathbb{e}[n_{11}] ] and the same strategy achieves the sum capacity .more generally , one can also consider the sub - class of ifcs with a mix of all types of sub - channels , i.e. , a mix of weak , snvs , and vs. in the following theorem we develop the sum capacity for subset of such a sub - class in which on average the conditions for very strong are satisfied .if \geq\mathbb{e}[n_{{11}}+n_{22}] ] .the first user forms a codebook of rate /q ] bits from all the levels of user 1 and is thus able to decode .similarly , the second receiver receives across all channel states , i.e. , on average , it receives \ge \mathbb{e}[n_{11}] ] .if with probability 1 , then the sum capacity is .consider the following achievable scheme : at level , the first user uses a codebook of rate , i.e. , at each level , the first user transmits at the erasure rate supported by that level at its receiver . on the other hand , at level , the second user uses a codebook of rate to transmit its message .the second receiver receives across all channel states , i.e. , on average , it receives =\pr(n_{22}-n_{21}\geq n) ] bits .hence , for reliable reception , transmitter needs to transmit at an average rate\ ] ] bits / channel use across all levels .the sum - rate is then given by ( [ mixed_sr ] ) .[ condifc ] for every , ] and adding $ ] , the sum - rate in ( [ mixed_sr ] ) then simplifies as+\mathbb{e}[(n_{11}-n_{21})^{+}]\\ & + \mathbb{e}[\min(n_{11},(n_{21}-n_{22})^{+})]\\ & = \mathbb{e}[\max(n_{11}-n_{21},0)]\nonumber\\ & \text { \ \ \ } + \mathbb{e}[\min(n_{11}+n_{22},\max(n_{21},n_{22}))]\\ & = \mathbb{e}[\min(n_{11}+n_{22}+(n_{11}-n_{21})^{+},\nonumber\\ & \text { \ \ \ \ } \max(n_{11},n_{21},n_{22},n_{11}+n_{22}-n_{21}))].\end{aligned}\ ] ] [ th_mix_sc]the sum capacity of a class of mixed layered erasure ifcs for which the condition ( [ lemma_cond ] ) of lemma [ condifc ] is satisfied and with probability 1 is given by .\ ] ] [ exevs](ergodic very strong ) consider a layered ifc with and two fading states : the first state with and occurs with probability , and the second state with and with probability .the first state is weak while the second is very strong , but overall the net mixture is ergodic very strong .thus , the sum capacity of bits / channel use can be attained .we now present two examples for the mixed ifc . 
for the first , the sum capacity is given by theorem [ th_mix_sc ]; for the second , we present a new sum capacity achieving strategy .[ exmix](mixed ) consider a layered ifc with and two fading states : the first state with and occurs with probability , and the second state with and with probability .the first state is weak while the second is strong , but overall the net mixture satisfies all the conditions in theorem [ th_mix_sc ] .( note that although is not deterministic , the condition in lemma [ condifc ] is satisfied . )thus , the ergodic sum capacity of bits / channel use can be attained .[ exifc ] ( mixed ) consider a layered ifc with and two fading states : the first state with and occurs with probability , and the second state with and with probability .the first state is very strong while the second is weak though the ifc is not ergodic very strong .the states satisfy the condition in lemma [ condifc ] , and thus , the sum rate of 5/2 bits / channel use can be achieved .however , applying theorem [ th_ob ] the outer bound on sum capacity is 3 bits / channel use .we here present an alternate achievable strategy that achieves this outer bound . at its second level, the first transmitter sends a message at a rate of 1 bit / channel use which its intended receiver can always decode but the second receiver can not .suppose receiver 2 does not decode this second level in either channel state .thus , with respect to receiver 2 , the equivalent channel has two fading states : the first state and with probability , and the second state and with probability .this is an ergodic strong ifc and hence a sum capacity of 2 bits / channel use can be achieved . combining that with the rate sent to receiver 1 from the second level of transmitter 1 , we achieve a sum capacity of 3 .note that our strategy uses a public and a private message from the first transmitter at the first and second levels , respectively .thus , while the second level from the first transmitter is received at the second receiver half of the time , the message on this level is considered private from the second user .this is in contrast with the deterministic interference channel where the message reaching the other receiver is always public .we have developed inner and outer bounds on the sum capacity of a class of layered erasure ergodic fading ifcs .we have shown that the outer bounds are tight for the following sub - classes : i ) weak , ii ) strong , iii ) _ ergodic very strong _ ( mix of strong and weak ) , and ( iv ) a sub - class of mixed interference ( mix of snvs and weak ) , where each sub - class is uniquely defined by the fading statistics .our work demonstrates that for layered erasure ifcs with sub - channels that are not uniquely of one kind , i.e. , that are not all strong but not very strong or very strong or weak , joint encoding is required across layers . of immediate interestis extending these results to the ergodic fading gaussian ifcs without transmitter csi .furthermore , we are also exploring extending the results of theorem [ th_mix_sc ] to both general layered ifcs as well as ergodic fading gaussian ifcs .s. annapureddy and v. veeravalli , `` gaussian interference networks : sum capacity in the low interference regime and new outer bounds on the capacity region , '' feb .2008 , submitted to the _ ieee trans .theory_. d. n. c. tse and s. v. hanly , `` multiaccess fading channels - part i : polymatroid structure , optimal resource allocation and throughput capacities , '' _ ieee trans .inform . 
theory _ 44 , no . 7 , pp . 2796 - 2815 , 1998 . s. shamai and a. d. wyner , `` information - theoretic considerations for symmetric , cellular , multiple - access fading channels - part i , '' _ ieee trans . inform . theory _ 43 , no . 6 , pp . 1877 - 1894 , 1997 .
the sum capacity of a class of layered erasure one - sided interference channels is developed under the assumption of no channel state information at the transmitters . outer bounds are presented for this model and are shown to be tight for the following sub - classes : i ) weak , ii ) strong ( mix of strong but not very strong ( snvs ) and very strong ( vs ) ) , iii ) _ ergodic very strong _ ( mix of strong and weak ) , and ( iv ) a sub - class of mixed interference ( mix of snvs and weak ) . each sub - class is uniquely defined by the fading statistics .
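as an addendum to this paper , a short python sketch that evaluates the outer bounds of theorem [ th_ob ] for a toy fading distribution and tests the ergodic - very - strong condition . the toy numbers , the identification of the elided left - hand side of that condition with e[n21 ] , and the resulting sum - capacity value e[n11]+e[n22 ] are assumptions consistent with the stated outer bounds .

```python
import numpy as np

# assumed joint fading distribution of (n11, n22, n21): each row is a
# fading state, with the state probabilities given separately
states = np.array([[3, 2, 1],     # a state with weak-looking interference
                   [2, 1, 7]])    # a state with very-strong-looking interference
prob = np.array([0.5, 0.5])
n11, n22, n21 = states.T

def mean(v):
    return float(np.dot(prob, v))

# outer bounds of theorem [th_ob]
r1_bound = mean(n11)
r2_bound = mean(n22)
sum_bound = mean(np.max(np.stack([n11, n22, n21, n11 + n22 - n21]), axis=0))
print(f"r1 <= {r1_bound},  r2 <= {r2_bound},  r1 + r2 <= {sum_bound}")

# ergodic-very-strong test (left-hand side taken to be e[n21], an assumption)
if mean(n21) >= mean(n11) + mean(n22):
    print("ergodic very strong: sum capacity =", mean(n11) + mean(n22))
else:
    print("ergodic-very-strong condition not met for this example")
```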
in this paper , we study an infinite dimensional stochastic optimal problem whose purpose is to minimize a cost functional subject to the semilinear stochastic evolution equation ( see ) \mathrm{d}t+[b(t)x(t)+g(t , x(t),u(t))]\,\mathrm{d}w_{t},\nonumber\\ x(0 ) & = x_{0}\ ] ] with the control variable valued in a subset of a metric space . herethe state variable takes values in a separable hilbert space , and are both stochastic evolution operators , and are given -valued functions , and are nonlinear functionals , and is a 1-dimensional standard wiener process .the research on the pontryagin maximum principle for infinite - dimensional _ nonlinear _ stochastic evolution systems has developed for a long time .all existing publications considered only the case that the control does not appear in the diffusion term ( see etc . ) .very recently , there are three preprints concerning the case that the control appears in the diffusion term .both the works discuss a simple form of system with being an infinitesimal generator of a strongly continuous semigroup and . in our previous work ,a maximum principle is proved under the same setting as in the present paper , but the characterization of the second - order adjoint process was indirect . in this paper, we give a direct and clear characterization of the 2nd - order adjoint process and establish a complete formulation of stochastic maximum principle for the optimal control in the general case that stochastic evolution operators , as well as the control variable with values in a general set , enter into both drift and diffusion terms of the state equation . a basic idea of establishing the general maximum principle for stochastic optimal control follows from a well - known work which solved the finite - dimensional case .the main difficulties in our problem are ( i ) the -estimate for sees , and ( ii ) the second - order duality analysis ( in other words , the characterization of the second - order adjoint process ) . to get over the former ,we first introduce a structural condition on the operator ( assumption [ ass : onb ] ) , which is refined from many applications . for the second ,a classical method is to characterize the second - order adjoint process by an operator - valued bsdes .this , working well for the finite - dimensional case , meets a big problem for the infinite - dimensional case , because the required solvability result of infinite - dimensional operator - valued bsdes is unknown so far .our approach of overcoming the second difficulty is novel . by utilizing the lebesgue differential theorem at an early step of the second - order duality analysis ( lemma [ lem : approfx1 ] ) , we simplify the problem into characterizing the limit of a class of bilinear functionals on -valued random variables ( for more explanations , see remark [ rem : last ] ) .we show that this limit is associated with a stochastic bilinear functional ( proposition [ prop : propofp ] ) which can be represented by an operator - valued stochastic process ( theorem [ thm : repres ] ) .the last thing is exactly the desired _ second - order adjoint process_. the rest of this paper is organized as follows . in section 2 ,we state the problem and our main results . in section 3 , the basic -estimate for sees is derived . 
in section 4, we study the representation and properties of a stochastic bilinear functional , and finally in section 5 , we derive the stochastic maximum principle .let be a filtered probability space where the filtration is generated by a 1-dimensional standard wiener process and satisfies the usual conditions .let and be two separable real hilbert spaces such that is densely embedded in .we identify with its dual space , and denote by the dual of .then we have .denote by the norms of , by the inner product in , and by the duality product between and .denote by the banach space of all bounded linear operators from banach space to banach space , with the norm .simply , denote . for a -algebra ,denote by the space of all essentially -valued weakly -measurable random variables satisfying .now recall the controlled stochastic evolution system \mathrm{d}t+[b(t)x(t)+g(t , x(t),u(t))]\,\mathrm{d}w_{t},\\ x(0 ) & = x_{0}\ ] ] with the control process valued in a set , given stochastic evolution operators\times\omega\rightarrow\mathfrak{b}(v\rightarrow v^{\ast})\ \ \ \ \text{and}\ \ \ \ b:[0,1]\times\omega\rightarrow\mathfrak{b}(v\rightarrow h),\ ] ] and nonlinear terms\times h\times u\times\omega\rightarrow h.\ ] ] here the _control set _ is a nonempty borel - measurable subset of a metric space whose metric is denoted by .fix an element ( denoted by ) in , and then define .an _ admissible control _ is a -valued -progressively measurable process such that}\mathbf{e}\left\vert u(t)\right\vert _ { u}^{4}<\infty.\ ] ] denote by the set of all admissible controls . our optimal control problem is to find minimizing the cost functional with given functions\times h\times u\times\omega\rightarrow\mathbb{r } \ \ \ \text{and\ \ \ } h : h\times\omega\rightarrow\mathbb{r}.\ ] ] we make the following assumptions .fix some constants and .[ ass : onab ] the operator processes and are weakly -progressively measurable , i.e. , and are both -progressively measurable processes for any ; and for each \times\omega ] , and are globally twice frchet differentiable with respect to .the functions are bounded by the constant ; are bounded by ; is bounded by . herein bounded " is in the sense of their corresponding norms . from a well - known result ( see e.g. ( * ? ? ?* theorem 2.2.1 ) ) , see has a unique -progressively measurable _ weak solution _ ,l^{2}(\omega , h)) ] , the stochastic bilinear functional is well - defined , more specifically , for any , (\xi,\zeta) ] and any , ( \xi,\zeta)\ \ ( \text{a.s.})\text{. } \label{eq : relationofpt}\ ] ] we call _ the riesz representation _ of .hereafter is a positive constant depending only on the values in the brackets ._ c ) ( weak stochastic continuity ) _ for each and any , we have _ _ .\label{eq : continuityofp}\ ] ] _ d ) ( uniqueness ) _ if and are both the riesz representations of , then a.s . 
for each proof of this theorem is placed in subsection 4.1 .now we turn to the control problem .define the _ hamiltonian function _\times h\times u\times h\times h\,\rightarrow\mathbb{\mathbb{r } } , \ ] ] as the form then our main result can be stated as follows .[ stochastic maximum principle][thm : mp]let assumptions [ ass : onab][ass : onb ] be satisfied .suppose is the state process with respect to an optimal control .then _ i ) _ _ ( first - order adjoint process ) _ the backward stochastic evolution equation ( bsee ) \,\text{\emph{d}}t+q(t)\,\text{\emph{d}}w_{t},\nonumber\label{eq:1adjoint}\\ p(1 ) & = h_{x}(\bar{x}(1)).\end{aligned}\ ] ] has a unique -progressively measurable ( weak ) solution ; _ ii ) _ _ ( second - order adjoint process ) _ the four - tuple with is _ appropriate " _ , consequently from theorem [ thm : repres ] there is a unique -valued process as the riesz representation of _ iii ) _ _ ( maximum condition ) _ for each , the inequality \right\rangle \geq0\end{aligned}\ ] ] holds for a.e . .the proof of this theorem will be completed in section 5 .the -estimate of the solutions to evolution equations plays a basic role in our approaches .now we consider the following linear equation\,\mathrm{d}t+[b(t)y(t)+b(t)]\mathrm{d}w_{t},\nonumber\label{eq : linearsee}\\ y(0 ) & = y_{0}\in h.\end{aligned}\ ] ] under assumption [ ass : onab ] , the above equation has a unique ( -progressively measurable ) solution ,l^{2}(\omega , h)) ] is a doob s martingale and the filtration is continuous , we can select and fix a continuous version of , denoted by .now fix an arbitrary ] \,\mathrm{d}t\\ & + \big [ b(z_{t}-\xi_{t})+\varepsilon^{-\frac{1}{2}}(w_{t}-w_{\tau})b\xi\big ] \,\mathrm{d}w_{t}.\end{aligned}\ ] ] by the it formula and some standard arguments , we have ^{2}\\ & \leqc\bigg [ \int_{\tau}^{\tau+\varepsilon}\varepsilon^{-1}\sqrt { \mathbf{e}\left\vert \xi\right\vert _ { v}^{4}}\sqrt{\mathbf{e}\left\vert w_{t}-w_{\tau}\right\vert ^{4}}\,\mathrm{d}t\bigg ] ^{2}\\ & = c\varepsilon^{2}\cdot\mathbf{e}\left\vert \xi\right\vert _ { v}^{4}.\end{aligned}\ ] ] this concludes the lemma .notice the fact that for any , now we let tend to .on the one hand , one can show that the term tends to similarly as in ; on the other hand , by means of the above lemma , the term must tend to the same limit as , where and .therefore , we have [ lem : approfteps ] for each and any , we have -t_{\tau}^{\varepsilon}(\xi,\zeta)\big\vert = 0.\ ] ] noticing that = \mathbf{e}\left\langle \xi_{\tau+\varepsilon},p_{\tau+\varepsilon}\zeta _ { \tau+\varepsilon}\right\rangle , \ ] ] we need only show indeed , from and lemma [ lem : appofz ] , we have this concludes the lemma . 
on the other hand , from the continuity of we have the following [ lem : approfp ] for each and any , we have = 0.\ ] ] it follows from , the boundedness of , and doob s martingale inequality ( see ) that } \mathbf{e}^{\mathcal{f}_{t}}\lambda_{1}\big)\left\vert \xi\right\vert _ { h } ^{2}\left\vert \zeta\right\vert _ { h}^{2}\in l^{1}(\omega).\ ] ] then from theorem [ thm : repres](c ) and the lebesgue dominated convergence theorem , we have \big\vert^{2}\\ & \leq\mathbf{e}\big[\varepsilon^{-2}\left\vert w_{\tau+\varepsilon}-w_{\tau } \right\vert ^{4}\big]\mathbf{e}\left\vert \left\langle \xi,\left ( p_{\tau+\varepsilon}-p_{\tau}\right ) \zeta\right\rangle \right\vert ^{2}\\ & \leq3\mathbf{e}\left\vert \left\langle \xi,\left ( p_{\tau+\varepsilon } -p_{\tau}\right ) \zeta\right\rangle \right\vert ^{2}\rightarrow 0,\ \ \ \ \text{as\ } \ \varepsilon\downarrow0.\end{aligned}\ ] ] the lemma is proved .finally we arrive at the position of completing the proof of proposition [ prop : propofp ] .[ proof of proposition [ prop : propofp ] ] combining lemmas [ lem : approfteps ] and [ lem : approfp ] , we have for each and any , next , fix any . for arbitrary ,we can find such that then one can show^{\frac{1}{4}}.\ ] ] thus we have^{\frac{1}{4}}.\ ] ] from and the arbitrariness of , we conclude the proposition .in this section , we are going to prove our main theorem , the stochastic maximum principle . following a classical technique in the optimal control ,we construct a perturbed admissible control in the following way ( named _ spike variation _ ){ll}u , & \text{if } t\in\lbrack\tau,\tau+\varepsilon],\\ \bar{u}(t ) , & \text{otherwise,}\end{array } \right.\ ] ] with fixed , sufficiently small positive , and an arbitrary -valued -measurable random variable satisfying .let be the state process with respect to control .for the sake of convenience , we denote for ,u^{\varepsilon } ( t)\right ) \mathrm{d}\lambda.\end{aligned}\ ] ] from the basic estimates , we have [ lem : errorestimate ] under assumptions [ ass : onab][ass : onb ] , we have }\mathbf{e}\left\vert \xi(t)\right\vert _ { h}^{2}:=\sup_{t\in\lbrack0,1]}\mathbf{e}\left\vert x^{\varepsilon}(t)-\bar { x}(t)-x_{1}(t)-x_{2}(t)\right\vert _ { h}^{2}=o(\varepsilon^{2}),\ ] ] where and are the solutions respectively to\,\mathrm{d}s\nonumber\\ & + \int_{0}^{t}[b(s)x_{1}(s)+\bar{g}_{x}(s)x_{1}(s)+g^{\delta}(s)]\mathrm{d}w_{s},\label{eq:1variation}\\ x_{2}(t)= & \int_{0}^{t}[a(s)x_{2}(s)+\bar{f}_{x}(s)x_{2}(s)+\frac{1}{2}\bar{f}_{xx}(s)\left ( x_{1}\otimes x_{1}\right ) ( s)+f_{x}^{\delta}(s)x_{1}(s)]\,\mathrm{d}s\nonumber\\ & + \int_{0}^{t}[b(s)x_{2}(s)+\bar{g}_{x}(s)x_{2}(s)+\frac{1}{2}\bar{g}_{xx}(s)\left ( x_{1}\otimes x_{1}\right ) ( s)+g_{x}^{\delta}(s)x_{1}(s)]\,\mathrm{d}w_{s}. \label{eq:2variation}\ ] ] the proof is rather standard ( see , e.g. ) , so here we sketch the process . 
from lemma [ lem : lpestimate ] , we have on the other hand , a direct calculation gives \,\mathrm{d}s\\ & + \int_{0}^{t}\left [ b(s)\xi(s)+\bar{g}_{x}(s)\xi(s)+\beta^{\varepsilon } ( s)\right ] \,\mathrm{d}w_{s},\end{aligned}\ ] ] where and from lemma [ lem : lpestimate ] , and legesbue s dominated convergence theorem we conclude that}\mathbf{e}\left\vert \xi(t)\right\vert _ { h}^{2}\leq\bigg[\int_{0}^{1}\left ( \mathbf{e}\left\vert \alpha^{\varepsilon } ( s)\right\vert _ { h}^{2}\right ) ^{\frac{1}{2}}\,\mathrm{d}s\bigg]^{2}+\int_{0}^{1}\mathbf{e}\left\vert \beta^{\varepsilon}(s)\right\vert _ { h}^{2}\,\mathrm{d}s = o(\varepsilon^{2}).\ ] ] the lemma is proved .[ lem:2expansion ] under assumptions [ ass : onab][ass : onb ] , we have \mathrm{d}t\nonumber\\ & + \mathbf{e}\big [ \left\langle h_{x}(\bar{x}(1)),x_{1}(1)+x_{2}(1)\right\rangle + \frac{1}{2}\left\langle x_{1}(1),h_{xx}(\bar{x}(1))x_{1}(1)\right\rangle \big ] .\end{aligned}\ ] ] the proof is also standard ( see , e.g. ) , we give a sketch here .a direct calculation shows that \,\mathrm{d}t\\ & + \mathbf{e}\big[\left\langle h_{x}(\bar{x}(1)),x_{1}(1)+x_{2}(1)\right\rangle + \frac{1}{2}\left\langle x_{1}(1),h_{xx}(\bar{x}(1))x_{1}(1)\right\rangle \big]+\gamma(\varepsilon),\end{aligned}\ ] ] where ( x^{\varepsilon}(1)-\bar { x}(1)),x^{\varepsilon}(1)-\bar{x}(1)\right\rangle \\ & + \frac{1}{2}\mathbf{e}\left\langle h_{xx}\left ( \bar{x}(1)\right ) ( x^{\varepsilon}(1)-\bar{x}(1)),x^{\varepsilon}(1)-\bar{x}(1)-x_{1}(1)\right\rangle \\ & + \frac{1}{2}\mathbf{e}\left\langle h_{xx}\left ( \bar{x}(1)\right ) ( x^{\varepsilon}(1)-\bar{x}(1)-x_{1}(1)),x_{1}(1)\right\rangle \\ & + \frac{1}{2}\mathbf{e}\int_{0}^{1}\left\langle \left [ \tilde{l}_{xx}^{\varepsilon}(t)-\bar{l}_{xx}(t)\right ] ( x^{\varepsilon}(t)-\bar { x}(t)),x^{\varepsilon}(t)-\bar{x}(t)\right\rangle \,\mathrm{d}t\\ & + \frac{1}{2}\mathbf{e}\int_{0}^{1}\left\langle \bar{l}_{xx}(t)(x^{\varepsilon}(t)-\bar{x}(t)),x^{\varepsilon}(t)-\bar{x}(t)-x_{1}(t)\right\rangle \,\mathrm{d}t\\ & + \frac{1}{2}\mathbf{e}\int_{0}^{1}\left\langle \bar{l}_{xx}(t)(x^{\varepsilon}(t)-\bar{x}(t)-x_{1}(t)),x_{1}(t)\right\rangle \,\mathrm{d}t\end{aligned}\ ] ] with \right ) \mathrm{d}\lambda.\ ] ] we need some duality analysis in order to tend to in inequality and get the maximum condition .recall the hamiltonian and bsee . under assumptions [ ass :onab ] and [ ass : onfglh ] , it follows from du - meng ( * ? ? ?* propostion 3.2 ) that equation has a unique -progressively measurable _ weak solution _ such that}\mathbf{e}\left\vert p(t)\right\vert _q(t)\right\vert _ { h}^{2}\,\mathrm{d}t\leq c(\kappa , k)\bigg(1+\sup_{t\in\lbrack0,1]}\mathbf{e}\left\vert \bar { u}(t)\right\vert _ { u}^{2}\bigg ) .\label{eq : estforajoint}\ ] ] thus the assertion ( i ) of theorem [ thm : mp ] holds true .furthermore , from lemma [ lem:2expansion ] we have [ cor:2expansion ] under assumptions [ ass : onab][ass : onb ] , we have \,\mathrm{d}t\nonumber\label{eq : varitionineq}\\ & + \frac{1}{2}\varepsilon^{-1}\mathbf{e}\int_{0}^{1}\left\langle x_{1}(t),\mathcal{h}_{xx}(t,\bar{x}(t),\bar{u}(t),p(t),q(t))x_{1}(t)\right\rangle \,\mathrm{d}t\nonumber\\ & + \frac{1}{2}\varepsilon^{-1}\mathbf{e}\left\langle x_{1}(1),h_{xx}(\bar { x}(1))x_{1}(1)\right\rangle , \end{aligned}\ ] ] where is the solution to equation . 
from the duality between the see and bsee ( or by the it formula ) , and by and , we have \,\mathrm{d}t+\mathbf{e}\left\langle h_{x}(\bar{x}(1)),x_{1}(1)+x_{2}(1)\right\rangle \\= ~ & \mathbf{e}\int_{0}^{1}\big[\big\langle p(t),f^{\delta}(t)+\frac{1}{2}\bar{f}_{xx}(t)\left ( x_{1}\otimes x_{1}\right ) ( t)+f_{x}^{\delta } ( t)x_{1}(t)\big\rangle\big]\,\mathrm{d}t\\ & + \mathbf{e}\int_{0}^{1}\big[\big\langle q(t),g^{\delta}(t)+\frac{1}{2}\bar{g}_{xx}(t)\left ( x_{1}\otimes x_{1}\right ) ( t)+g_{x}^{\delta}(t)x_{1}(t)\big\rangle\big]\,\mathrm{d}t\\ = ~ & o(\varepsilon)+\mathbf{e}\int_{\tau}^{\tau+\varepsilon}\left [ \left\langle p(t),f^{\delta}(t)\right\rangle + \left\langle q(t),g^{\delta } ( t)\right\rangle \right ] \,\mathrm{d}t\\ & + \frac{1}{2}\mathbf{e}\int_{0}^{1}\left [ \left\langle p(t),\bar{f}_{xx}(t)\left ( x_{1}\otimes x_{1}\right ) ( t)\right\rangle + \left\langle q(t),\bar{g}_{xx}(t)\left ( x_{1}\otimes x_{1}\right ) ( t)\right\rangle \right ] \,\mathrm{d}t,\end{aligned}\ ] ] this along with lemma [ lem:2expansion ] yields \,\mathrm{d}t\\ & + \frac{1}{2}\varepsilon^{-1}\mathbf{e}\int_{0}^{1}\left\langle x_{1}(t),\mathcal{h}_{xx}(t,\bar{x}(t),\bar{u}(t),p(t),q(t))x_{1}(t)\right\rangle \,\mathrm{d}t\\ & + \frac{1}{2}\varepsilon^{-1}\mathbf{e}\left\langle x_{1}(1),h_{xx}(\bar { x}(1))x_{1}(1)\right\rangle .\end{aligned}\ ] ] recalling the definition of , we conclude the lemma . from the lebesgue differentiation theorem , the first - order expansion part , which is the first term on the right hand side of inequality , tends to , \ \ \ \ \text{a.e .} \tau\ \ \ ] ] for each when tends to . by the arbitrariness of and some standard techniques , this yields the first term of maximum condition .recall the four - tuple with bearing in mind assumptions [ ass : onab][ass : onb ] and the estimate , we can easily obtain that the four - tuple is `` appropriate '' , and then from theorem [ thm : repres ] there is a unique -valued process as the riesz representation of .hence , the assertion ( ii ) of theorem [ thm : mp ] is proved .observe that the solution to equation can be decomposed as where and are the solutions to the equations\,\mathrm{d}t+\tilde{b}(t)x^{(1)}(t)\,\mathrm{d}w_{t},\\ \mathrm{d}x^{(2)}(t ) & = \tilde{a}(t)x^{(2)}(t)\,\mathrm{d}t+[\tilde { b}(t)x^{(2)}(t)+\varepsilon^{-\frac{1}{2}}g^{\delta}(t)]\,\mathrm{d}w_{t},\\ x^{(1)}|_{[0,\tau ] } & = x^{(2)}|_{[0,\tau]}=0.\end{aligned}\ ] ] it follows from the -estimates that}\mathbf{e}\big\vert x^{(1)}(t)\big\vert^{4}+\varepsilon^{-1}\sup_{t\in\lbrack0,1]}\mathbf{e}\big\vert x^{(1)}(t)\big\vert^{2 } & \leq c,\\ \sup_{t\in\lbrack0,1]}\mathbf{e}\big\vert x^{(2)}(t)\big\vert^{4}+\sup _{ t\in\lbrack0,1]}\mathbf{e}\big\vert x^{(2)}(t)\big\vert^{2 } & \leq c,\end{aligned}\ ] ] which implies by the it formula , we have for each $],}\mathbf{e}\big\vert x^{(2)}(t)-z^{\varepsilon } ( t)\big\vert_{h}^{4}\leq c(\kappa , k)\cdot\frac{1}{\varepsilon}\int_{\tau } ^{\tau+\varepsilon}\mathbf{e}\left\vert g^{\delta}(t)-g^{\delta}(\tau)\right\vert _ { h}^{4}\,\mathrm{d}t.\ ] ] from the lebesgue differentiation theorem , we have for each , since is separable , let run through a countable density subset in , and denote then we have . for arbitrary positive ,take an such that then for each , from the arbitrariness of , we conclude this lemma . 
from the the above lemma and, we have keeping in mind the above relation , and applying proposition [ prop : propofp ] , we conclude for each , this along with lemma [ cor:2expansion ] yields for each , \\ & + \frac{1}{2}\mathbf{e}\left\langle g^{\delta}(\tau),p_{\tau}g^{\delta } ( \tau)\right\rangle , \ \ \ \ \ \ \text{a.e . }\tau\in\lbrack0,1).\end{aligned}\ ] ] therefore , the desired maximum condition follows from a standard argument ; see , for example , .this completes the proof of the stochastic maximum principle .[ rem : last ] usually , the lebesgue differentiation theorem is used at the end of the derivation of maximum principle . however , we utilize this sharp result at an early step of the second - order duality analysis ( lemma [ lem : approfx1 ] ) .the benefit of such a different treatment can be seen from relation .indeed , the dynamic process on the left hand side of is affected by a time - variant function , while the process on the other side is related simply to a random variable . noting the use of proposition [ prop : propofp ] , this skill is a key - point of our approach .
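Since several of the displayed formulas above did not survive extraction, the following LaTeX sketch records the standard Peng-type form of the objects appearing in Theorem [thm:mp] (Hamiltonian, first-order adjoint equation, maximum condition). It follows the usual conventions for a controlled semilinear SEE with a cost of integral-plus-terminal type; it is a hedged reconstruction for orientation only, not a verbatim restoration of the paper's equations, and sign conventions may differ from the original.

% Controlled state equation (A, B possibly unbounded linear operators):
%   dx(t) = [A(t)x(t) + f(t,x(t),u(t))] dt + [B(t)x(t) + g(t,x(t),u(t))] dW_t .
% Hamiltonian (up to the usual minimisation/maximisation sign conventions):
\[
  \mathcal{H}(t,x,u,p,q) \;=\; \langle p,\, f(t,x,u)\rangle_{H} \;+\; \langle q,\, g(t,x,u)\rangle_{H} \;+\; l(t,x,u).
\]
% First-order adjoint pair (p,q), solving the BSEE of assertion (i):
\[
  \mathrm{d}p(t) \;=\; -\big[A(t)^{*}p(t) + B(t)^{*}q(t)
      + \mathcal{H}_{x}\big(t,\bar{x}(t),\bar{u}(t),p(t),q(t)\big)\big]\,\mathrm{d}t
      \;+\; q(t)\,\mathrm{d}W_{t}, \qquad p(1) = h_{x}(\bar{x}(1)).
\]
% Maximum condition of assertion (iii), for every admissible value u and a.e.-a.s. (t,\omega),
% writing \delta g(t,u) := g(t,\bar{x}(t),u) - g(t,\bar{x}(t),\bar{u}(t)):
\[
  \mathcal{H}\big(t,\bar{x}(t),u,p(t),q(t)\big) - \mathcal{H}\big(t,\bar{x}(t),\bar{u}(t),p(t),q(t)\big)
  \;+\; \tfrac{1}{2}\big\langle \delta g(t,u),\, P_{t}\,\delta g(t,u)\big\rangle \;\geq\; 0,
\]
% where P_t plays the role of the second-order adjoint, obtained here through the Riesz
% representation of Theorem [thm:repres] rather than as the solution of an operator-valued BSEE.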
A stochastic maximum principle is proved for optimal controls of semilinear stochastic evolution equations in the general case: stochastic evolution operators, as well as the control, which takes values in a general set, enter into both the drift and the diffusion terms.
ion - mobility spectrometry ( ims ) is a well established analytical technique used to separate and identify ionized molecules in the gas phase of a volatile compound .this technique is based on the molecules mobility in a carrier buffer gas .the ions acquire a drift velocity through their mobility due to interactions with an electric field of magnitude . to the lowest order ,drift velocity is with the ion mobility , which allows us to identify the compound .a dependence of with thermal fluctuation , electrical charge , gas density and collision cross - section , can be obtained from the balance between mobility and diffusion forces during an elastic collision of the ionized molecule against a neutral molecule traditional devices for time - of - flight ims come in a wide range of sizes ( often tailored for a specific application ) and are capable of operating under a broad range of conditions above the millimeters scale .these devices use a ion impulse field that is parallel to the flow of the carrier gas .in contrast , aspiration condenser ims also known as cross - flow methods use an impulse field that is transverse to the flow of the carrier gas .a transverse impulse field allows splitting of a stream of ions within the flow of the carrier buffer gas , according to the respective mobility of ions .two main movement vectors are obtained : one in the flow s direction and the other perpendicular to it .devices that apply this method are remarkably compact and relatively easy to manufacture using micro - system technology .there are many methods of cross flow combined with pattern recognition and also mass - spectrometry for ion identification .one problem with these method is the low resolving - power due to overlapping of ions on detectors caused by diffusion and space charge effects .such effects can be reduced by increasing the flow rate as long as laminar flow conditions remain the same .a swept - field aspiration condenser ims uses a variable electric drift field to move all ion species across a single detector electrode .a variable deflection voltage applied to a single detector electrode can replace a detector made by an array of electrodes .an ion mobility distribution is obtained by applying the discrete inverse tammet transformation to data . however , reconstruction of the ion mobility distribution is difficult when the signal to measure is comparable to the noise and , therefore , identification possibilities of the actual signal are reduced .a radioactive ionization source is commonly used to produce ions . however , in this way , ions completely cover the entrance to the detection zone difficulting their identification . to increase identification capabilities , some ion - focusing methods proposed the use of funnels to guide ions before the splitting caused by the transverse impulse field .basically , ion focusing creates a concentrated starting point for ions to start to travel forming a well - defined trajectory for each compound and allowing to improve ion identification .ions with different mobility will have different trajectories .it has been utterly expensive and complex to apply this kind of solution so far , because of the inconveniences existing at the sub - millimeter scale . 
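Before continuing with the focusing problem, the low-field mobility relation introduced above (eq. [eq1]) can be made concrete with a short numerical sketch. The Python snippet below evaluates a drift velocity for an assumed mobility and field, and estimates a mobility from the Mason-Schamp expression that the text alludes to (dependence on temperature, charge, gas density and collision cross-section). All numerical values (mobility, gap, voltage, temperature, cross-section) are placeholders for illustration and are not parameters of the device described in this paper.

import math

KB = 1.380649e-23            # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19   # elementary charge, C

def drift_velocity(K_cm2, E_field):
    # Low-field drift velocity v_d = K * E; K in cm^2/(V s), E in V/m.
    return (K_cm2 * 1e-4) * E_field   # m/s

def mason_schamp_mobility(T, N, mu_kg, omega_m2, q=E_CHARGE):
    # Illustrative low-field Mason-Schamp estimate (SI units):
    #   K = (3 q / 16 N) * sqrt(2*pi / (mu * kB * T)) / Omega
    return (3.0 * q / (16.0 * N)) * math.sqrt(2.0 * math.pi / (mu_kg * KB * T)) / omega_m2

if __name__ == "__main__":
    K = 2.0                       # assumed mobility, cm^2/(V s)
    E = 18.0 / 500e-6             # 18 V across a hypothetical 500 micron gap, V/m
    print("drift velocity v_d = %.1f m/s" % drift_velocity(K, E))

    T = 300.0                     # temperature, K
    N = 2.45e25                   # gas number density at ~1 atm and 300 K, 1/m^3
    mu = 0.5 * 28.0 * 1.66054e-27 # reduced mass of an N2-like ion/neutral pair, kg
    omega = 1.0e-18               # assumed collision cross-section, m^2 (placeholder)
    K_est = mason_schamp_mobility(T, N, mu, omega) * 1e4   # back to cm^2/(V s)
    print("Mason-Schamp estimate K ~ %.2f cm^2/(V s) for the assumed cross-section" % K_est)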
inthe ion - focusing aspiration condenser ims , funnels or intricate channels are implemented to narrow down the flow inside the system , thus generating potential turbulences which do not allow the ions to continue their travel .this results in a loss of identification efficiency when both high flow rates and a laminar condition are necessary . here , we present novel concepts for a different approach to produce ion focusing without any physical object acting as a funnel. drifts ions to produce the stream reaching the detectors .ions with larger mobility reach the detectors closer to the entrance , between positions and whereas ions with smaller mobility reaches the detectors at farther positions , between and .,scaledwidth=50.0% ] as illustrated in fig.[fig : separa ] , ions with mobility and , where starting at the same initial location near to generated lobes of ions and defined by a stream of ions with lower position and higher position travel towards the detection area forming two stream . one stream reaches the detection area that starts in with length for ions with mobility . the other stream reaches and area that starts in with length for ions with mobility .none of the ion streams will overlap on detectors , and the required separability will be verified as long as the trajectory of ions with larger mobility ( and moving on the lowest side of the ion stream ) and the trajectory of ions with smaller mobility ( moving on the higher side of the ion stream ) will meet at a point with , being the channel height .the present work is organized as follows .section [ sec : teoria ] presents basic equations describing the proposed model .section [ sec : dispositivo ] describes the integrated prototype that we designed containing the two main parts for ion - generation and ion - detector .section [ sec : generaion ] summarizes results for witness of corona discharge regime of the proposed ion - generation .section [ sec : crossflowfocal ] describe how to deal with the transport of micro - generated ions from their origin to detection .section [ sec : results ] presents analysis of the experimental prototype setup to generate localized ions or lobes of ions .this section also presents a numerical simulation about the resolving power and resolution to show the advantages of our proposed device .finally , section [ sec : conclusion ] includes our main conclusions .the hypothesis based on a localized ion generation and laminar flow means that the access of generated ion to the detection zone is very small , with height . for a constant carrier flow in the detection zone ,two orthogonal components exist in ion movements : the constant flow velocity and the drift velocity eq.([eq1 ] ) .therefore , considering that an ion from uses the same time to arrive to the detector in for each direction of movements , we obtain : where the drift field can be estimated by , assuming border effects are negligible and electric voltage , applied on the detectors , generates a uniform field in the detection zone .having defined a desired meeting point for those trajectories in the plane in fig .[ fig : separa ] and assuming uniform velocities as in eq.([ec : xinvv ] ) , we can estimate being voltage and flow velocity the same for both molecules we propose that separability will be guaranteed provided that . therefore , in the case we find an upper bound for the ion - focusing size needed to separate two ionized molecules according to their mobility and the channel height . 
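The upper bound of eq. ([ec:foco2]) did not survive extraction in this copy, but the worked examples quoted later in the paper (a 24% mobility difference separable with d/h = 0.1, and a 3.4% difference requiring d/h < 0.034) suggest a bound of the form d/h <= 1 - K2/K1 with K1 > K2, i.e. the relative mobility difference. The short Python check below uses that assumed reading; the mobilities and channel height are hypothetical, and the inferred form of the bound may differ in detail from the original equation.

def max_focus_ratio(k_large, k_small):
    # Assumed reading of the separability bound (eq. [ec:foco2]):
    # the initial lobe size d and the channel height h must satisfy
    #     d / h <= 1 - k_small / k_large .
    # This form is inferred from the numerical examples quoted later in the text.
    return 1.0 - k_small / k_large

if __name__ == "__main__":
    h = 500e-6   # hypothetical channel height, m
    cases = {
        "mobilities ~24% apart (toluene vs nitrogen case)": (2.0, 2.0 * (1.0 - 0.24)),
        "mobilities ~3.4% apart (acetone vs ethanol case)": (2.0, 2.0 * (1.0 - 0.034)),
    }
    for label, (k1, k2) in cases.items():
        ratio = max_focus_ratio(k1, k2)
        print("%s: d/h <= %.3f, i.e. d <= %.0f um for h = %.0f um"
              % (label, ratio, ratio * h * 1e6, h * 1e6))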
our main hypothesis centers on ion - focusing being possible via the geometric design of the ion - generator .eq.([ec : foco2 ] ) provides an upper bound for the scaling of this focalization with the detection size of the device allowing identification of the species .this requirement to scale the system prompted us to use micro - system technology .the originality of the present research lays in the solution reached .the device was designed considering both the position and direction of mobility vectors of ions , from the generation and transportation of ion up to their detection .this design was possible by arranging the respective directions of electric discharges , fluid transportation , and the electric fields used for detection , orthogonally to each other . in this way , we minimized influences among the ion movements in these directions . by employing a virtual model of the micro - device using the coventorware software we constructed the design shown in fig .[ fig : maqueta ] .this software allows us to simulate the construction of the model ; from the optic layers to the manufacturing process . with lift off ,the design proceeds ; a positive photo - resin is deposited on a glass substrate and protected with a plastic cover , containing the image of the target structure .the resin is modified by uv light in unprotected places , and unmodified part of the resin can be removed with the cover as well .the remaining construct is completely covered with a layer of copper using an anionic deposition process . finally , the resin layer is removed with acetone , leaving the geometry of the copper layer on the substrate . this manufacturing process has been used because of its flexibility and sturdiness .also , one goal of this work is obtaining a minimum size compact ion - generation / ion - detector , with low cost and simple manufacture .the proposed micro - device has two main zones ( see fig .[ fig : maqueta ] ) ; the ionization and the detection zone . in the ionization zone ,lobes of ions are generated by employing an electric discharge of corona type ( see section [ sec : generaion ] for details ) .charged particles of the sample are transported via a carrier gas to the detection zone . when entering the first zone , both the carrier gas and the sample move along the two guide through the device towards the ionization zone .the guides determine the separation distance between two substrates , only the inferior substrate is shown in fig.[fig : maqueta ] .the ionization zone consisting of six sources is mounted on the surface of this inferior substrate .each source has seven pairs of flat , metallic electrodes , the two electrodes ; placed in front of each other at 40 m .the anode has six triangular tips , at a distance of 300 m from one another , and one flat cathode .the distance between two sources is 1500 m and the last source is 540 m away from the detection zone .these can be individually connected across the connection electrodes , fig .[ fig : maqueta ] ( 2 ) and ( 8) . in the detection zone , the detectors are immersed in an electrostatic field that deflects and separates ions according to their mobility .this zone has a flat electrode with a single connection mounted on the inferior substrate . 
individually connected and flat sectioned electrodes are mounted on the superior substrate .each electrode is 900 m long and at a distance separated 100 m from the next electrode , as shown by ( 6 ) and ( 3 ) respectively in fig.[fig : maqueta ] .the total length of the channel is m but the maximum active zone is less than half of this length .channel width is m and height is m . with these dimensions and an air stream of around estimate a flow in laminar regime due to .the ion generation problem is solved using a localized production of a self - limited discharge .electric discharges due to dielectric rupture can be produced via enough energy accumulation between two electrodes on which an electrostatic difference voltage is applied . for our propose , the state before the beginning of luminescence ,also known as the corona discharge , is just enough to ionize the sample .therefore it is possible to develop the electric field and the electric voltage before the electric discharge by solving the laplace equation in a volume defined by a closed surface . in a separate work ,we study , if the empirical peek s law that establishes the necessary electric voltage in the anode to witness a corona discharge can be applied at micro - scale . in summary , in these analysis , the electric field was calculated inside the ionization volume defined in section [ sec : dispositivo ] by solving the laplace equations for the configuration displayed in fig . [fig : esimvcexp ] e ) .m for a ) m , v ; b ) m , v ; c) m , v ; d) m , v on the section indicated by the outline in red within the volume of the ion generator ( e ) . the magnitude of the field is plotted in longitudinal direction on a plane 20 m above the the upper part of the anode and cathode .the radius of the curvature ( m ) of the three prongs forming the anode is the same for all cases in which varies .the effective radius is the average distance in which the electric field falls to m from the anode.,scaledwidth=90.0% ] the left panel shows the magnitude of the electric field on a section of the ion generator volume that contain the tip of the central triangular anode .the voltage applied corresponds to critical values experimentally determined for nitrogen gas , and and and m , respectively , with constant curvature radius of the anode tip m .we found that it is possible to approximately establish a same average distance from the origin of the curvature radius to the surface defining a volume where v/ m .as peek s law is verified at the micrometer scale , this same average distance should be approximately equal to an effective radius , measuring the curvature radius in micrometers .this prediction becomes true with a 10% sampling error in every case .the proposed alternative solutions for ion focusing using a corona discharge ion micro - generation are described in sections [ sec : teoria ] , [ sec : dispositivo ] , and [ sec : generaion ] .considering that eq.([eq1 ] ) and cross - flow design concepts can only be applied in a non - turbulent regime , we analyzed how the fluid should travel along the channel .we propose a channel with a rectangular section that remains constant along both zones of the device , which means that there are no detours , expansions , or contractions along the entire flow . 
considering this ,first we verify the laminar flow condition by using the reynold number that we estimate for non - circular sections as in terms of the cinematic viscosity and the hydraulic radius corresponding to the ratio between the fluid normal section area and its perimeter , and as the channel width .the flow rate is implicit in the above equation , because it is estimated as the average velocity of flow multiplied by the section of the channel : .for the objective of our proposed design , we made a significant size reduction for and .therefore , we need to be low enough so that ensure laminar condition of the fluid regime. however , can not be too low , because the number of ions would be too small to be analyzed .also , longitudinal dimensions of the channel have to be small enough so that the lifespan of generated ions is longer than time they take to arrive to the detection zone .all this together allows us to establish a higher ion velocity and a range for the dimensions of the device .dimensions can be optimized to obtain a device efficiently functioning at the smallest possible scale .the flow is generated by the pressure difference between the device entrance and exit , and , respectively .velocity is modified by friction between the fluid and the walls due to glides of imaginary layers of fluids . assuming the fluid is irrotational and incompressible and fits the device static conditions of the wall, we analyze its dependence on vertical direction , obtaining by force equilibrium where is the channel length and is the height variation for the channel that goes from to as defined in section [ sec : intro ] . in order to investigate the role of velocities profiles in a sub millimeter scale, we consider numeric simulation for the trajectory in a plane of _ ideals _ ions with the same mobility as nitrogen when driven in direction acquiring a of eq.([ec : vf ] ) due to dyne / , m and s/ and driven in direction acquiring given by eq.([eq1 ] ) for /vs due to a voltage difference of and between flat detector electrodes ( anode and cathode ) separated m . in fig.[fig : trayec ] trajectories are shown for two initial positions ( 0,0 ) and ( 0,50 m ) which represent limit cases due to localizing of ions as showed in fig.[fig : separa ] , voltage differences between cathode and anode . the continuous line for the parabolic profile eqs .( [ ec : vf ] ) and ( [ eq1 ] ) for two initial positions ( 0,0 ) and ( 0,50 m ) . dashed dotted line illustrates a uniform profile with initial position ( 0,0).,scaledwidth=50.0% ] they are calculated for parabolic profile resulting a narrow laminar stream .we also show the trajectory for uniform profile and initial position ( 0,0 ) .the trajectory with initial position ( 0,50 m ) is not shown , because being for the uniform profile it belongs to a parallel line separated 50 m from the first one , defining a straight laminar stream .standard cross - flow method operates at a scale of ten of millimeters , where the change in the velocity profile due to viscosity effects is negligible .however , at a sub - millimeter scale viscosity effects must be taken into account . by neglecting the velocity depends on ,errors in position detections as high as the channel height could be made , as shown in fig.[fig : trayec ] .note that , when duplicating the applied voltage , ions are detected at a distance that is half as long as the corresponding one for the voltage applied originally . 
this is due to inverse proportionality eq.([ec : xinvv ] ) between horizontal position of detection and voltage applied to the detector , that is here approximately verified when considering viscosity .first , nitrogen gas flows through the channel of the device , controlled by a regulator .then , a pulse generator is connected to a dc source producing every a pulse that lasts 50 .we analyzed the ion charges by serially connecting a lv 8 keithley 6514 electrometer to the arrangement of detection cathodes , as illustrated in the upper panel of fig.[fig : esqmed ] .the accumulated charge is measured at 20s intervals .charge measurements using an electrometer .upper panel : measurements of the total charge as a function of the flow rate .lower panel : measurements of the charge in each detector.,title="fig:",scaledwidth=55.0% ] + charge measurements using an electrometer .upper panel : measurements of the total charge as a function of the flow rate .lower panel : measurements of the charge in each detector.,title="fig:",scaledwidth=55.0% ] the average is calculated over ten measures for each individual and all electrode .data dispersion according to the average is lower than 3 % . fig .[ fig : vscaudal ] shows the charge normalized with charge signal normalized to charge depending on the flow rate for .clear ( dark ) gray bars correspond to ( ).,scaledwidth=50.0% ] depending on the flow in the device and cases in which detection voltage is fixed at and .the reference charge results from the same experiment , but using nitrogen streaming in opposite direction to ensures absence of ions . is needed because the electrometer is sensitive enough to capture spurious signals during the experiment , such as signals produced by generation of ions .parameters established in these experiments are taken from the theoretical example analyzed in section [ sec : crossflowfocal ] .thus , the highest charge possible is obtained for a flow rate of 2 .when increasing the flow rate to 5 , the charge diminishes to more than half of the previous charge . note that also the difference of with higher signal for is consistent with our theoretical model , since concentrating detection towards half of the length of the detection zone increases the probability to detect ions . in the context of an optimal design that considers the lifespan of ions ,a better signal is obtained using rather than because ions at are detected in half of the time than ions at . with flow rates of 5 and 10 expect a transition towards the turbulent regime . because of the loss of laminar flow condition , as predicted in section [ sec : crossflowfocal ], no significant signal differences are expected for these flows at 9 or 18 . 
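The numerical trajectories referred to in this section combine a parabolic carrier profile along the channel with a constant transverse drift toward the detectors. A minimal Euler-integration sketch of that construction is given below; it assumes a plane-Poiseuille profile for the carrier gas and a uniform deflection field E = V/h, and all geometry, voltage, flow and mobility values are illustrative placeholders rather than the device's actual parameters.

def landing_position(K_cm2, V, h, L, v_mean, y0=0.0, dt=1e-6):
    # Euler integration of a single ion trajectory in the detection zone.
    # Assumptions (a sketch, not the paper's solver):
    #   carrier gas:      v_x(y) = 6 * v_mean * (y/h) * (1 - y/h)   (plane Poiseuille)
    #   transverse drift: v_y    = K * E, with uniform field E = V / h
    # Returns the x position (m) at which the ion reaches the detector plate y = h.
    K = K_cm2 * 1e-4          # mobility, m^2/(V s)
    vy = K * (V / h)          # constant drift velocity across the channel
    x, y = 0.0, y0
    while y < h and x < L:
        vx = 6.0 * v_mean * (y / h) * (1.0 - y / h)
        x += vx * dt
        y += vy * dt
    return x

if __name__ == "__main__":
    h, L = 500e-6, 10e-3      # channel height and active length, m (placeholders)
    v_mean, V = 2.0, 2.0      # mean carrier velocity (m/s) and deflection voltage (V)

    # Two hypothetical species whose mobilities differ by ~24%; each ion lobe is
    # assumed to occupy the first 50 um above the lower plate at the entrance.
    for name, K in [("species A", 2.0), ("species B", 2.0 * (1.0 - 0.24))]:
        ends = sorted(landing_position(K, V, h, L, v_mean, y0=y) for y in (0.0, 50e-6))
        print("%s: stream lands between x = %.2f mm and x = %.2f mm"
              % (name, ends[0] * 1e3, ends[1] * 1e3))

With these placeholder numbers the two streams land on disjoint intervals of the detector plate, consistent with the separability discussion of section [sec:teoria].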
to verifyif the generation of localized lobes of ion is enough to separate and identify ions , the electrometer has been connected in such a way to separately measure the signal of the charge at each detector ( see lower panel in fig.[fig : esqmed ] ) .we normalized the signal with for a 2 flow rate and four different detection voltages , 9 , 18 , 28 and 46 , respectively ( see fig.[fig : vspotencial ] ) .signal of charge normalized with for the flow rates 2 , 5 , 10 as function of the detector labels .clear and dark gray bars correspond to 9 and 18 , respectively.,scaledwidth=50.0% ] is obtained for each detection , by a neutral gas streaming in the opposite direction .when comparing results in fig.[fig : vspotencial ] with results from the numeric simulation for detection voltages of 9 and 18 ( fig.[fig : trayec ] ) , and additionally considering the dimensions of detectors in the micro - device ( see section [ sec : dispositivo ] ) , we observe that the highest signal values are measured in those detectors placed at positions and distances predicted by simulation . according to eq.([ec : xinvv ] ) , there is an inverse relationship between detection positions and applied voltages .this means that by increasing the applied voltage two- , three- , and five - fold ( from 9v to 18 , , and , respectively ) the highest signal is measured by the detector positioned at half , one third , and one fifth of the total distance , respectively , as predicted by simulations .this prediction is highly accurate in an ideal case of uniform profile . in the most realistic case ,a small correction is necessary , because collection distance predictions are higher than inferred by eq.([ec : xinvv ] ) ( see section [ sec : crossflowfocal ] , discussion in the last paragraph ) .a correct focalization by local ion micro - generation during our experiments was crucial to experimentally verify the model .the proposed experimental configuration allows us to determine an ionization volume with a diameter corresponding to as little as 10 of the channel height ( see analysis in section [ sec : generaion ] ) .higher ion localizing is possible by repeating analysis of section [ sec : generaion ] and shrinking the curvature radius of the ion generator . with respect to separation based on ion mobility, we could state that the performed experiments involving four voltages and one compound could ideally be translated into an experiment involving one voltage and four compounds .this is possible because channel height is constant , and is approximated to in eq.([eq1 ] ) , which means that drift velocity depends only on the product of mobilities and the applied electric voltages .therefore , to translate results from fig.[fig : vspotencial ] into the problem of separation by mobility , means to consider changes in which the compounds mobilities would duplicate , triplicate , and quintupling .however , expected changes in mobility among different compounds are usually small . for example , there is a difference of 24 between the mobility of toluene ( /vs ) and that of nitrogen , but for many other compounds such differences are actually much smaller .fig.[fig : separatnea ] shows numeric simulation of the ion trajectory using a voltage of 18 and a carrier flow rate of 2 .ideal trajectory of ions to the detectors when a voltage of 18 and carrier flow rate of 2 are applied to the system . 
the dimensions and other parameterscorrespond to real size of the device .insets show the details where ions impact and normalized charge distribution on detectors .segmented detector are indicated with alphanumeric label on each subdivision .trajectory and ims type spectrum for nitrogen ( nit . ) and toluene ( tol . ) in the upper panel , for acetone ( cet . ) and ethanol ( eth . ) in the lower panel.,scaledwidth=65.0% ] our design was able to separate signals corresponding to nitrogen and toluene because these were not overlapping as it is shown in the insets of upper panel fig .[ fig : separatnea ] .the insets show details of positions where ions impact and normalized charge respect to the total charge that reach detectors .detector are segmented and it is represented in fig . [ fig : separatnea ] with alphanumeric labels on each subdivision .according to our analysis in section [ sec : crossflowfocal ] and employing eq.([ec : xinvv ] ) , we can estimate that , for 18 , an increase of mobility of 24 would be equivalent to an average displacement of an ion current from the middle part of the fifth detector to the left by 870 m . because this distance is slightly bigger than the diameter of any ion stream reaching that position , ion streamsboth before and after displacement should not overlap , thus detecting two distinct signals .this result is consistent with our prediction about scaling between the ion current diameter at the entrance of the detection zone and the channel height ( see section [ sec : teoria ] ) . in our design , 0.1 , and we verify eq.([ec : foco2 ] ) for toluene and nitrogen .in contrast , our design was not able to separate the overlapping signals of acetone and ethanol , as shown at the lower panel of fig.[fig : separatnea ] , because the respective mobilities are only slightly different from one another ( 3.4 difference ) .thus , in according to eq.([ec : foco2 ] ) , to separate acetone from ethanol , should be less than 0.034 , that is not verified with the actual size of the designed device .the main advantage of this proposed device is its scalability that ultimately depends on the required application .scalability can be tested by applying eq.([ec : foco2 ] ) for ethanol and acetone which predicts that for channel height m an upper bound m is required for the initial diameter of the ion stream .it is possible to separate signals of acetone and ethanol by reducing the size of our proposed device by an order of magnitude , since localization would then be smaller than the predicted upper bound ( see fig.[fig : separatnea ] ) .ideal trajectory of ions of acetone and ethanol when a voltage of 180 is applied .each detector is 100 m long .the localization of ion micro - generation is concentrated to m in the upper(lower ) panel .the carrier flow rate is 2 l / min and other parameters are as in the experimental device .insets show the details where ions impact and the normalized charge distribution on detectors .segmented detector are indicated with alphanumeric label on each subdivision.,scaledwidth=65.0% ] the ideal trajectories of ion streams corresponding to acetone and ethanol in scaled conditions with detection voltage 180 v , and localization of ion generation smaller than 30 m , in order that , are shown in fig.[fig : separaea ] .the other parameters match those of the experimental device . 
with the proposed device the charge signalare not transformed to average time of ion detection as it was done with other ion - focusing method , although the implicit ion - focusing proposed in this work could also be adapted to be applied a tammet trasformation .a similar physical interpretation of resolving power might be followed by defining as the ratio of average trajectory position to the trajectory dispersion .note that with this definition does not consider the finite size of detector .furthermore , we are idealizing the fact that enough charge would be detected in such cases . accordingly, the resolution to separate two signals is defined as the ratio of the difference between the two average trajectory position to the larger of trajectory dispersion of both . and for our device to separate signals of acetone and ethanol was obtained from as it is shown in the inset of figs .[ fig : separatnea ] and [ fig : separaea ] by simulations of ion trajectories using a parabolic profile of velocity drift with random initial height of ions starting up to focalization length . in fig.[fig : respow ] we show for , 1000 , 1500 m and some comparative cases of , the and as function of . resolving power , and in the inset to separate signals of acetone and ethanol as function of .applied voltage is adjusted according variation of , 1000 , and 1500 m in order that is the same in all cases .the carrier flow rate is 2 and other parameters are as in the experimental device.,scaledwidth=50.0% ] we can see that is comparable with another ion focusing method for m and .however , to be able to separate acetone and ethanol from a mixture of the two compounds both a higher resolving power and an increased focusing are required .note in the inset of fig.[fig : respow ] that for m and m , and compounds can not be totally separated as it is shown in upper panel of fig.[fig : separaea ] , while for m and m , and compounds are well separated as it is shown in the lower panel of of fig.[fig : separaea ] .larger imposes a challenge for flow laminar condition . the device that we designed would be able to meet these requirements , because it does not need any physical object that could change fluids dynamics , as in currently available cross - flow methods .in this work we address the problem to identify ion species at the sub - millimeter scale based on their mobility and a standard cross - flow method . to solve this problem, we propose a novel manufacturing method using micro - system technology .we analyze the corona discharge and verify peek s law in the micro - scale . with this , we dimensioned the geometry for the proposed ion generator allowing it to localize the ion generation and , therefore , implicitly focus ions to the entrance of the detection zone , in a smaller size than the drift height . at the same time, we analyzed the flow of the system to design transportation of ions from their generation point to their detection point . by introducing orthogonal movements we made numerical simulations to analyze viscosity effects and state experimental configuration to guarantee the laminar flow condition .we found a simplified design for the construction of the micro - system , which avoids expansions or contractions . 
in this way , turbulence andstopped - flow zones are minimized in an adequate flow range , allowing us to optimize the functioning of the device .we performed experiments on a prototype of the proposed design to verify our hypothesis on localized ion generation as a solution for ion species identification at sub - millimeter scale .variation of the average trajectory of the ion stream with the variation of drift velocity can be analyzed with a parabolic velocity profile .we showed that for a case in which variations of drift velocity are 25 the charge signals displayed a negligible overlapping of 1 mm on the detection zone . for smaller differences in other volatile compoundswe would need to adjust the parameters used here ( as these depend on the application of the device ) , but not the architecture of the design .we have proposed an upper bound for the size of the ion lobes in terms of the total drift height and the ratio of ion mobilities to effectively separate ion specie .finally , we highlight how our design is elegantly simplified compared to more complex solutions in other methods .we obtained a compact micro - model that has two advantages : it does not use radioactive sources to generate ions , but an electrical source not exceeding 1kv , and it would reach a higher resolving power than currently available methods .this study was partially supported by anpcyt - foncyt ( grants pict - prh-135 - 2008 , picto - unne-190 - 2007 , pae 22594/2004 nodo nea:23016 , and pae - pav 22592/2004 nodo cac:23831 ) .we thank jessica lasorsa and brigitte marazzi for useful comments .20 h.e . revercomb and e.a ._ theory of plasma chromatography / gaseous electrophoresis- a review . _anal . chem . 1975 , 47:970 .ortiz gp , rinaldi c , boggio n , vorobioff j , ortiz j , gmez s , et al . _ development of an ims type device for volatile organic compounds detection : simulation and comparison of the ion distributions_. physics.ins-de [ internet ] .2009 mar 29 [ 2012 jan 10];0902.1206v2 .available from http://arxiv.org/abs/0902.1206 .eiceman ga and karpas z. _ ion mobility spectrometry_. 2nd .boca raton : crc press ; 2005 , 350 .roscioli k , davis e , siems w , mariano a , su w , et al ._ modular ion mobility spectrometer for explosives detection using corona ionization _2011 aug 1 ; 83(15):5965 - 71 .sabo m and matejek s , 2011 _ ion mobility spectrometry for monitoring high - purity oxygen _ , anal chem , 2011 ; 83 ( 6 ) , 1985 - 1989 .h. f. tammet , _ the aspiration method for determination of the determination of atmospheric - ion spectra _ , scientific notes of tartu state university , 1967 2(195):1 - 9 .vinopal r , jadamec j , de fur p , demars a , jakubielski s , green c , et .fingerprinting bacterial strains using ion mobility spectrometry_. elsevier .chim . acta .2002 ; 21745:1 - 13 .boggio n , alonso p , ortiz j , rinaldi c , lamagna a , boselli a. _ desarrollo de un espectrmetro por movilidad inica para la deteccin de compuestos orgnicos ( explosivos , drogas y contaminantes)_. anales afa issn 1850 - 1158 .2008;20(208):208 - 210 .k. tuovinen , m. kolehmainen , and h. paakkanen ._ determination and identification of pesticides from liquid matrices using ion mobility spectrometry .. 429:257 .m. utriainen , e. krpnoja , and h. paakkanen ._ combining miniaturized ion mobility spectrometer and metal oxide gas sensor for the fast detection of toxicchemical vapors ._ sens . actuators b 2003 , 93:17 .solis a and sacristan e. 
_ designing the measurement cell of a swept - field differential aspiration condenser_,rev .2006 mar 22;52(4):322 - 328 .mang zhang , anthony s. wexlerk ._ cross - flow ion mobility spectrometry : theory and initial prototype testing _ , int. j. mass spectrom .2006 , 258(1 - 3):13 - 20 .zimmermann s , abel n , baether w , barth s , 2007 _ an ion - focusing aspiration condenser as ion mobility spectrometer _ , 2007 .sciencedirect , sensors and actuators b , 125 ; 428 - 434 .barth s , zimmermann s , _ modeling ion motion in a miniaturized ion mobility spectrometer_. in : comsol conference .proceedings of the comsol conference ; 2008 ; hannover .rosasa g , murphya r , morenob w. _ smart antenna using mtm-mems.wireless and microwave technology conference ( wamicon)_. in ieee 11th annual ; 2010 ; melbourne , fl .ieee explorer , 2010 ( isbn : 978 - 1 - 4244 - 6688 - 7 ) 12 - 13 april 2010 .ortiz j , lamagna a , boselli a._design of a cf - ims ( cross flow ion mobility spectrometry ) with mems technology_,procedings of the 13th international meeting on chemical sensors , 2010 july 11 - 14 ; perth western , isbn : 978 - 1 - 74052 - 208 - 3 , pag.332 . mems and nanotechnology exchange ( mnx ) [ internet ] .reston , virginia .[ update 1999 , cited 2011 ] available from : www.memsnet.org/mems/processes/deposition.html .senturia s. _ microsystem design_.1ed .new york : kluwer academic publishers ; 2000 , 716 .escuela superior de ingenieria [ internet ] .[ update 2005 , cited 2011 ] available from : http://iecon02.us.es / asign / sea/. m. goldman , a. goldman and e.s .sigmon . _ the corona discharge , its properties and specific uses _ , pure & appl chem .1985 ; 57(9):1353 - 1362 .j.j.ortiz and g.p.ortiz _ design of a ion microgenerator by corona discharge _ , unpublished peek f. _ dielectric phenomena in high voltage engineering_. 1er .new york : mcgraw - hill book company , ink ; 1920 , 281 p. krantz s. _ handbook of complex variables_. 1er .boston : birkhauser ; 1999 , 290 .coventor [ internet ] .north carolina .[ update 2000 , cited 2010 ] .available from : http://www.coventor.com / solutions / resonators/. borg x. _ full analysis & design solutions for ehd thrusters at saturated corona current conditions _, the general science journal , 2004 jan 1 .available from : http://blazelabs.com/pdflib.asp .ortiz j , nigri c , rodrigez d , ortiz g , perillo p , lasorsa c , et al ._ verification of paschen curve and peek s law in micro glow - discharge_. proceding of the ii congreso de microelectrnica y aplicaciones ( uea ) , 2011 sep 6 - 9 .la plata , argentina .isbn 978 - 950 - 34 - 0749 - 3 , pag .235 - 237 .friedrich paschen ._ ueber die zum funkenubergang in luft , wasserstoff und kohlensaure bei verschiedenen drucken erforderliche potential differenz _annalen der physik , 1889 273(5):6975,48 - 56 .raizer y. _ gas discharge physics_. 1ed springer .springer ; 1991 sep 19 , 449 .the numerical calculation was done through laplace s equation extended to the entire volume .the algorithm used is called p - fft ( precorrected fast fourier transform ) impose conditions of symmetry on the top and sides of the volume . at the bottom edge conditions are voltage to the electrodes , positive on nails and zero for the plane , while the bottom surface charge surface boundary conditions on which find electrodes is zero .
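The appendix above notes that the electric field in the ionization volume was obtained by solving Laplace's equation with a precorrected-FFT boundary-element code. As a much simpler illustration of the same boundary-value problem, the sketch below relaxes Laplace's equation on a small two-dimensional grid with a high-potential patch standing in for an anode tip and a grounded plane opposite it. The grid, geometry, voltage and the Jacobi iteration itself are placeholders chosen for clarity; they are not the solver, mesh or boundary conditions used in the paper.

import numpy as np

def relax_laplace(n=80, v_anode=500.0, iters=5000):
    # Jacobi relaxation of Laplace's equation on an n x n grid.
    # A small patch near the left edge mimics a high-voltage tip (Dirichlet value
    # v_anode); the right edge is a grounded plane (0 V); the remaining edges are
    # held at 0 V as a crude stand-in for the symmetry conditions of the appendix.
    phi = np.zeros((n, n))
    tip = (slice(n // 2 - 2, n // 2 + 2), slice(5, 9))   # placeholder tip location
    for _ in range(iters):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2])
        phi[tip] = v_anode        # re-impose Dirichlet values after each sweep
        phi[:, -1] = 0.0          # grounded plane
        phi[0, :] = 0.0
        phi[-1, :] = 0.0
    # Field components from the potential (grid spacing left in grid units)
    ey, ex = np.gradient(-phi)
    return phi, np.hypot(ex, ey)

if __name__ == "__main__":
    phi, e_mag = relax_laplace()
    print("maximum |E| (grid units) occurs near the tip: %.1f" % e_mag.max())

As expected for a sharp electrode, the field magnitude computed this way peaks at the edges of the tip patch, which is the qualitative behaviour exploited in the corona-discharge analysis of section [sec:generaion].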
Ion-mobility spectrometry (IMS) is an analytical technique used to separate and identify ionized gas molecules based on their mobility in a carrier buffer gas. Such methods come in a large variety of versions that currently allow ion identification at and above the millimeter scale. Here, we present a design for a cross-flow IMS method able to generate and detect ions at the sub-millimeter scale. We propose a novel ion-focusing strategy and test it in a prototype device using nitrogen as a sample gas, as well as in simulations using four different sample gases. By introducing an original lobular ion generation localized to a few tens of microns and by substantially simplifying the design, our device maintains laminar flow conditions at high flow rates. In this way, it avoids the turbulence in the gas flow that would occur in other ion-focusing cross-flow methods and that limits their performance at the sub-millimeter scale. The scalability of the proposed design can help improve the resolving power and resolution of currently available cross-flow methods. _Keywords: IMS design, gas detectors, cross-flow, micro-devices_
the use of approximate bayesian computation ( abc ) methods in models with intractable likelihoods has gained increased momentum over recent years , extending beyond the original applications in the biological sciences .( see marin _ et al . , _2011 , sisson and fan , 2011 and robert , 2015 , for recent reviews . ) whilst abc evolved initially as a practical tool , attention has begun to shift to the investigation of its formal statistical properties , including as they relate to the choice of summary statistics on which the technique typically relies ; see , for example , fearnhead and prangle ( 2012 ) , gleim and pigorsch ( 2013 ) , marin _ et al . _( 2014 ) , creel and kristensen ( 2015 ) , creel _ et al . _( 2015 ) , drovandi _et al_. ( 2015 ) , li and fearnhead ( 2015 ) and martin _ et al . _( 2016 ) .in this paper we study large sample properties of both posterior distributions and posterior means * * * * obtained from abc . * * * * under mild regularity conditions on the summary statistics used in abc , we characterize the rate of posterior concentration and show that the limiting shape of the posterior crucially depends on the interplay between the rate at which the summaries converge ( in distribution ) and the rate at which the tolerance used to accept parameter draws in abc shrinks to zero. critically , concentration around the truth and , hence , bayesian consistency , * * * * places a less stringent condition on the speed with which the tolerance declines to zero than does asymptotic normality of the abc posterior .further , and in contrast to the textbook bernstein - von mises result , we show that asymptotic normality of the abc posterior mean does not necessarily require asymptotic normality of the posterior itself , with the former result being attainable under weaker conditions on the tolerance than required for the latter .validity of all of these results depends critically on the satisfaction of an identification condition guaranteeing that , asymptotically , simulated summary statistics are unique functions of parameter draws .the satisfaction of this condition is examined in several examples , with the results indicating that this condition can fail in even simple applications . while the asymptotic properties of likelihood - based bayesian methods are now well documented in the case of finite - dimensional parameters ( see , e.g. , le cam , 1953 , walker , 1969 , and chen , 1985 ; plus textbook treatments in van der vaart , 1998 , and ghosh and ramamoorthi , 2003 ) , a thorough study on the asymptotic properties of posterior distributions obtained from so - called likelihood free methods , such as abc , has yet to be undertaken .this represents an important gap in the literature , which we look to fill with this work .existing asymptotic results are often derived under boundedness conditions for the underlying density function of the true model in the style of ibragimov and hasminskii ( 1981 ) . in the abc setting , conditions based on this density functionare not practically useful since the density generating the data is by definition analytically unavailable . 
herein , we take a different approach and only consider conditions on the abc summaries themselves .the most stringent of these conditions ( as noted above ) * * requires that the simulated summaries converge toward some limiting counterpart at some known rate , and that this limit counterpart , viewed as a mapping from parameters to simulated summaries , be injective .these conditions have a close correspondence with those required for theoretical validity of indirect inference and related ( frequentist ) estimators ( gourieroux _ et al_. , 1993 , heggland and frigessi , 2004 ) .our focus on all three aspects of the asymptotic behavior of abc * * * * - * * * * posterior consistency , limiting posterior shape , and the asymptotic distribution of the posterior mean - is much broader than that of existing studies on the large sample properties of abc , in which the asymptotic properties of point estimators derived from abc have been the primary focus ; see , creel _ et al_. ( 2015 ) , jarsa ( 2015 ) and li and fearnhead ( 2015 ) .our approach allows for weaker conditions than those given in the aforementioned papers , permits a complete characterization of the limiting shape of the posterior , and distinguishes between the conditions ( on both the summary statistics and the tolerance ) required for concentration and the conditions required for specific distributional results . in particular , we highlight the fact that asymptotic normality is only one possible scenario for the limiting shape of the posterior , and that asymptotic normality of the posterior is not a necessary condition for asymptotic normality of the abc posterior mean .the separation of these two latter asymptotic results is a particular point of contrast with work cited above .our approach also allows for an exploration of asymptotic behaviour in a host of interesting situations not covered by existing results , such as cases where the summaries do not satisfy a central limit theorem , examples where summaries satisfy a central limit theorem at different rates , and situations where the tolerance used in abc converges to zero too slowly .the paper proceeds as follows . in section [ sect2 ]we briefly outline the setup and review the principles of abc .section [ aux ] discusses posterior concentration and consistency in abc , with the required identification condition given particular attention .the impact of adding statistics to a set for which consistency has already been established is also demonstrated , which has important consequences in practice . as part of this sectionwe also specialize the results to abc based on summaries generated from estimating equations derived from an auxiliary model .section [ norm ] develops results for the limiting shape of the posterior distribution obtained by abc , whilst section [ mean ] outlines the conditions required for asymptotic normality of the posterior mean . in section [ pract ]we highlight some important implications of the theoretical results for practitioners , in particular with regard to choosing a tolerance that is optimal from a computational perspective .we then conclude the paper in section [ disc ] with a summary of the asymptotic results presented herein and the important conclusions drawn regarding implementation of abc .all proofs are collected in an appendix .we observe data , , drawn from the model , where admits the corresponding conditional density , and * * * * . 
given a prior , the aim of abc is to produce draws from an approximation to the posterior distribution in the case where both the parameters and pseudo - data can be easily simulated from , but where is intractable .the simplest ( accept / reject ) form of the algorithm ( tavar _ _ et al . , _ _ 1997 , pritchard _ et al ._ , 1999 ) is detailed in algorithm [ abc ] .simulate , , from simulate , , from the likelihood , select such that: is a ( vector ) statistic , is a distance function ( or metric ) , and is the tolerance level .algorithm [ abc ] thus samples and from the joint _ _ _ _ posterior:}{\textstyle\int_{\boldsymbol{\theta } } \int_{\boldsymbol{z}}p(\boldsymbol{\theta } ) p(\boldsymbol{z|\theta } ) \mathbb{i}_{\varepsilon } [ \boldsymbol{z}(\boldsymbol{\theta } ) ] d\boldsymbol{z}d\boldsymbol{\theta } } , \]]where :=\mathbb{i}[d\{\boldsymbol{\eta } ( \boldsymbol{y}),\boldsymbol{\eta } ( \boldsymbol{z}(\boldsymbol{\theta } ) ) \}\leq \varepsilon ] ] and =\theta _ { 01}(1+\theta _ { 02}) ] and by construction =0 ] denotes the square upper sub - matrix of . also , let and if , for all , then .while assumption [ * a4 * ] requires the map be injective ( but not necessarily bijective ) in this section we also maintain the assumption that is continuously differentiable at , and that the jacobian has full rank .in addition to this assumption and assumptions * [ a1]*-*[a4 ] * in section [ aux ] , the following conditions are needed to establish the limiting shape of . *[ a5 ] * there exists such that for all , is the identity matrix . *[ a6 ] * the sequence of functions is equicontinuous at . the following condition will only be used in cases where at least one of the coordinates satisfies . * [ a7 ] * for some positive and all , and for all ellipsoids for all and all fixed , {[k_{1}]}(\boldsymbol{\eta } ( \boldsymbol{z})-\boldsymbol{b}(\boldsymbol{\theta } ) ) -u\in b_{t}\right ) } { \prod_{j=1}^{k_{1}}h_{t}(j ) } & = \varphi _ { k_{1}}(u ) , \\ \frac{p_{\boldsymbol{\theta } } \left ( [ \boldsymbol{\sigma } _ { t}(\boldsymbol{\theta } ) ] _ { [ k_{1}]}(\boldsymbol{\eta } ( \boldsymbol{z})-\boldsymbol{b}(\boldsymbol{\theta } ) ) -u\in b_{t}\right ) } { \prod_{j=1}^{k_{1}}h_{t}(j ) } & \leq h(u),\quad \int h(u)du<+\infty , \end{split } \label{dens : cond}\]]for the density of a -dimensional normal random variate .[ normal_thm]assume that assumptions * [ a1 ] * -*[a6 ] * are satisfied and take an arbitrary compact .the following results hold : * ( i ) * : with probability approaching one ( wpa1 ) , for , * ( ii ) * there exists such that , , and : assume that with positive definite and that _ { [ j]}\right\ } ^{2}\leq c\epsilon _ { t}^{2}\right ) = + \infty , ] in case * ( ii ) * : there exists a non - gaussian probability distribution on , which depends on and is such that precisely , * ( v ) * : assume that with positive definite and that assumption * [ a7 ] * , i.e. , , holds for , then * remark 4 : * theorem [ normal_thm ] asserts that the crucial feature for determining the limiting shape of the abc posterior is the behavior of , for . if too slowly so that some ( or all ) of the components satisfy , as in results * ( i)*-*(iii ) * above , the _ only _ conclusion that can be drawn is one of posterior concentration . 
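Algorithm 1 is straightforward to implement, and a concrete implementation may help fix ideas before turning to the posterior mean. The Python sketch below runs accept/reject ABC on a toy Gaussian location model with the sample mean as the summary statistic, purely to illustrate the algorithm and the role of the tolerance; the model, prior, summary and tolerance values are arbitrary choices for illustration and are not among the examples analysed in this paper.

import numpy as np

rng = np.random.default_rng(0)

def summary(x):
    # Summary statistic eta(.): here simply the sample mean.
    return np.array([x.mean()])

def abc_rejection(y_obs, prior_sampler, simulator, eps, n_draws=50_000):
    # Accept/reject ABC (Algorithm 1): keep theta^i whenever
    # d(eta(y), eta(z^i)) <= eps, with d the Euclidean distance.
    eta_obs = summary(y_obs)
    kept = []
    for _ in range(n_draws):
        theta = prior_sampler()
        z = simulator(theta, y_obs.size)
        if np.linalg.norm(summary(z) - eta_obs) <= eps:
            kept.append(theta)
    return np.array(kept)

if __name__ == "__main__":
    # Toy model: y_t ~ N(theta, 1) i.i.d., true theta_0 = 1, uniform prior on (-5, 5).
    T, theta0 = 100, 1.0
    y = rng.normal(theta0, 1.0, size=T)

    prior = lambda: rng.uniform(-5.0, 5.0)
    sim = lambda th, n: rng.normal(th, 1.0, size=n)

    # Shrinking the tolerance concentrates the ABC posterior around theta_0,
    # in line with the concentration results of section [aux].
    for eps in (0.5, 0.1, 0.05):
        draws = abc_rejection(y, prior, sim, eps)
        if draws.size:
            print("eps = %.2f: %6d accepted, ABC posterior mean %.3f, sd %.3f"
                  % (eps, draws.size, draws.mean(), draws.std()))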
* * * * in short , so long as for all with strict equality for at least one , no bernstein - von mises ( bvm , hereafter ) result is available for the abc posterior .however , if , a bvm result _ is _available for the abc posterior .taking for simplicity s sake the intuition behind theorem [ normal_thm ] is as follows : limiting shape information in abc requires , while gaussianity of the limiting distribution requires the stronger condition . without these rate requirements ,i.e. , if * * * * * * * * the posterior concentrates in a manner that can not yield a bvm result or other shape information .when the bvm result does not hold , the posterior concentrates at the rate , in the sense that large enough and .the upper bound in ( [ upper ] ) is a consequence of theorem [ thm1 ] .the lower bound in ( [ lower ] ) can be deduced using similar arguments : indeed , following from equation in the proof of theorem [ thm1 ] in the appendix , we have _ ( d_2\ { ( ) , ( _ 0)}_t_t| ( ) ) & = + & = o(1 ) , where the term appears as a consequence of assumption [ * a3 * ] . from this reasoning, we conclude that * remark 5 : * the abc bvm result deduced herein is thus of a different nature than that derived in li and fearnhead ( 2015 ) , which relies on arguments similar to those of yuan and clark ( 2004 ) . in particular, the bvm result of li and fearnhead ( 2015 ) requires uniformity conditions and strict differentiability conditions on the intractable likelihood function that guarantee , among other things , the validity of a second - order taylor series expansion .conversely , the bvm result obtained herein requires no such strict differentiability conditions , nor any strict uniformity conditions .* remark 6 : * the results derived in this section , as well as all remarks made above , remain applicable in the case where is taken to be a vector of estimating equations derived from an auxiliary model , provided conditions * [ b1 ] * to * [ b6 ] * and conditions corresponding to those in * [ a5 ] * to * [ a7 ] * are satisfied .* remark 7 : * condition * [ a7 ] * only applies to random variables that are absolutely continuous with respect to lebesgue measure ( or , in the case of sums of i.i.d random variables , to sums that are non - lattice ; see bhattacharya and rao , 1986 ) .the case of discrete s requires an adaptation of condition * [ a7 ] * that leads to the same conclusions in theorem [ normal_thm ] . for simplicity s sake we write this adaptation in the case where , so that we need only study case * ( v ) * in theorem [ normal_thm ] .then * [ a7 ] * can be replaced by : * * [ a7** * ] * there exist and a countable set such that for all , there exists a continuous and positive map at such that under this alternative condition , the conclusion of case**(v ) * * of theorem [ normal_thm ] still holds. condition * * [ a7** * ] * is satisfied , for instance , in the case when is a sum of i.i.d .lattice random variables , as in the population genetic experiment detailed in section 3.3 of marin _ et al . _furthermore , this population genetic example is such that assumptions * [ a1 ] * -*[a6 ] * and * [ a8 ] * also hold , which means that the conclusions of both theorems [ thm1 ] and [ normal_thm ] apply to this model .as noted above , the current literature on the asymptotics of abc has focused primarily on conditions guaranteeing asymptotic normality of the posterior mean ( or functions thereof ) . 
to this end , it is important to stress that the posterior normality result in theorem [ normal_thm ] is not a weaker , or stronger , result than that of asymptotic normality of an abc point estimator ; both results simply focus on different objects .that said , existing proofs of the asymptotic normality of the abc posterior mean all require asymptotic normality of the posterior . in this section , we demonstrate that asymptotic normality of the posterior is not a _ necessary _ condition for asymptotic normality of the abc posterior mean . to present the ideas in as transparent a manner as possible , we focus on the simple case of an unknown scalar parameter and known scalar summary .in addition to assumptions * [ a1 ] * to * [ a7 ] * , we maintain the following assumptions on the prior .[ * a8 * ] the prior density satisfies the following : * ( i ) * for , . *( ii ) * for some and all , . * ( iii ) * for , we have .[ mean_thm ] assume that assumptions * [ a1 ] * - * [ a6 ] * , together with assumption * [ a8 ] * , are satisfied and assume exists and is non - zero . denoting as the abc posterior mean, we then have the following results : if * ( i ) * : or if * ( ii ) * : and * [ a7 ] * holds , then where ] according to independent uniforms ] , then similarly to case * ( i ) * we can bound goes to zero when goes to infinity . since can be chosen arbitrarily large , is proven . and .we have similarly to the computations leading to : setting , under assumption * [ a7 ] * , })(1+o(1))\prod_{j=1}^{k_{1}}(d_{t}(j)\varepsilon _ { t } ) , \end{split}\]]when where is the centered gaussian density of the dimensional vector , with covariance }$ ] .this implies as in case * ( ii ) * that })dx}{\int_{|x|\leq m}\varphi _ { k_{1}}(x_{[k_{1}]})dx}\leq m^{-(k - k_{1})}+o(1)\]]and choosing arbitrary large leads to . if for all . to prove, we use the computation of case * ( ii ) * with , so that implies that for all , choosing large enough , and when is large enough can be chosen arbitrarily large and since when goes to infinity , is proved .we shall prove that if , for and , we define ( with ) , then first bound from above the numerator .note that and that .denote , then the condition is used in the representation of the real line over which the integral defining is specified**. * * we first study the second integral term after the inequality . if then implies that and choosing large enough implies that if , the last term in can be bounded by we study the middle term in . using the assumption that is lipshitz at , fixed but arbitrarily large . by the dominating convergence theorem and the gaussian limit of , is continuous else , for the same reasons as above , implies that the holds as goes to infinity .we now study the denominator .when , contains so that , since in this region , in conclusion , holds and -z_{t}^{0}=o_{p}(1)\]]since is asymptotically gaussian with mean 0 and variance the same holds for .fearnhead , p. and prangle , d. 2012 .constructing summary statistics for approximate bayesian computation : semi - automatic approximate bayesian computation ._ j. royal statistical soc .series b _ , 74 , 419474 .nott d. , fan , y. , marshall , l. and sisson , s. 2014 .approximate bayesian computation and bayes linear analysis : towards high - dimensional abc , _ journal of computational and graphical statistics _, 23 , 6586 .pritchard , j.k ., seilstad , m.t . , perez - lezaun , a. 
and feldman , m.w . 1999 . population growth of human y chromosomes : a study of y chromosome microsatellites , _ molecular biology and evolution _ , 16 , 1791 - 1798 .
approximate bayesian computation ( abc ) is becoming an accepted tool for statistical analysis in models with intractable likelihoods . with the initial focus being primarily on the practical import of abc , exploration of its formal statistical properties has begun to attract more attention . in this paper we consider the asymptotic behavior of the posterior obtained from abc and the ensuing posterior mean . we give general results on : ( i ) the rate of concentration of the abc posterior on sets containing the true parameter ( vector ) ; ( ii ) the limiting shape of the posterior ; and ( iii ) the asymptotic distribution of the abc posterior mean . these results hold under given rates for the tolerance used within abc , mild regularity conditions on the summary statistics , and a condition linked to identification of the true parameters . using simple illustrative examples that have featured in the literature , we demonstrate that the required identification condition is far from guaranteed . the implications of the theoretical results for practitioners of abc are also highlighted .
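to make the accept / reject scheme of algorithm [ abc ] concrete, the following is a minimal rejection-abc sketch in python for a toy model of our own choosing (i.i.d. gaussian observations with unknown mean); the prior, the sample-mean summary statistic, the tolerance and the numbers of draws are illustrative assumptions and are not taken from the examples discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(y_obs, n_draws, eps, prior_sampler, simulator, summary):
    """Accept/reject ABC: keep draws whose simulated summary lies
    within eps of the observed summary (Euclidean distance)."""
    eta_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()                 # theta ~ p(theta)
        z = simulator(theta, y_obs.size)        # pseudo-data z ~ p(z | theta)
        if np.linalg.norm(summary(z) - eta_obs) <= eps:
            accepted.append(theta)
    return np.asarray(accepted)

# toy example (illustrative assumptions): y_i ~ N(theta, 1), diffuse normal prior,
# summary statistic = sample mean, tolerance of order 1/sqrt(T)
T = 200
theta_true = 1.5
y = rng.normal(theta_true, 1.0, size=T)

draws = abc_rejection(
    y_obs=y,
    n_draws=50_000,
    eps=2.0 / np.sqrt(T),
    prior_sampler=lambda: rng.normal(0.0, 5.0),
    simulator=lambda th, n: rng.normal(th, 1.0, size=n),
    summary=lambda x: np.atleast_1d(x.mean()),
)
print(len(draws), draws.mean(), draws.std())
```

the fraction of accepted draws and the spread of the retained sample give a rough feel for how the tolerance governs the concentration of the abc posterior discussed above.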
image restoration , including image denoising , deblurring , inpainting , computed tomography , etc ., is one of the most important areas in imaging science .it aims at recovering an image of high - quality from a given measurement which is degraded during the process of imaging , acquisition , and communication .an image restoration problem is typically modeled as the following linear inverse problem : where is the degraded measurement or the observed image , is a certain additive noise , and is some linear operator which takes different forms for different image restoration problems . note that this paper involves both functions ( operators ) and their discrete counterparts .we shall use regular characters to denote functions or operators and use bold - faced characters to denote their discrete analogs .for example , we use to denote a linear operator between two function spaces and as an element in a function space , while we use and to denote their corresponding discretized versions ( the type of discretization will be made clear later ) .the operator is in general ill - conditioned ( e.g. for deblurring ) or non - invertible ( e.g. for inpainting ) .naive inversions of in the presence of noise will inevitably lead to significant noise amplification .hence , in order to obtain a high quality recovery from the ill - posed linear inverse problem , a proper regularization on the images to be recovered is needed .successful regularization based methods include the rudin - osher - fatemi model and its nonlocal variants , the inf - convolution model , the total generalized variation ( tgv ) model , the combined first and second order total variation model , and the applied harmonic analysis approach such as curvelets , gabor frames , shearlets , complex tight framelets , wavelet frames , etc .the common concept of these methods is to find sparse approximation of images using a properly designed linear transformation together with a sparsity promoting regularization term ( such as the widely used norm ) .a typical norm based regularization model takes the following form where is some sparsifying linear transform ( such as wavelet transform or ) .this general formulation is widely applied in image restoration for regularizing designed smooth image components while preserving image singularities .meanwhile , the idea of explicitly taking image singularities into consideration was first explored in the pioneer work , where the following model , known as the mumford - shah model , was introduced : here , denotes the length of one - dimensional curve representing edges . due to the smoothness promoting property of norm , the above mumford - shah functional encourages to be smooth except along ( see for detailed surveys on the mumford - shah model and for the applications to image restoration ) . 
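as a small illustration of the generic l1-norm regularization model recalled above (and not of the edge driven model proposed later in the paper), the following python sketch performs one analysis-thresholding-synthesis pass with a single-level undecimated haar frame in the denoising case (the operator a is taken to be the identity); the filter choice, the noise level and the threshold are assumptions made only for this example.

```python
import numpy as np

def haar_bands(u):
    """Single-level undecimated 2-D Haar frame transform (periodic boundary).
    Returns the low-pass band first, then the three high-pass bands."""
    a, b = np.array([0.5, 0.5]), np.array([0.5, -0.5])
    def sep(img, fr, fc):
        t = fr[0] * img + fr[1] * np.roll(img, -1, axis=0)
        return fc[0] * t + fc[1] * np.roll(t, -1, axis=1)
    return [sep(u, a, a), sep(u, a, b), sep(u, b, a), sep(u, b, b)]

def haar_adjoint(bands):
    """Adjoint transform; for this tight frame, adjoint(haar_bands(u)) == u."""
    a, b = np.array([0.5, 0.5]), np.array([0.5, -0.5])
    def sep_t(img, fr, fc):
        t = fc[0] * img + fc[1] * np.roll(img, 1, axis=1)
        return fr[0] * t + fr[1] * np.roll(t, 1, axis=0)
    pairs = [(a, a), (a, b), (b, a), (b, b)]
    return sum(sep_t(d, fr, fc) for d, (fr, fc) in zip(bands, pairs))

def soft(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def frame_denoise(f, lam):
    """One analysis-threshold-synthesis pass for 0.5*||u - f||^2 + lam*||Wu||_1;
    the low-pass band is left untouched, only the high-pass bands are shrunk."""
    low, *high = haar_bands(f)
    return haar_adjoint([low] + [soft(d, lam) for d in high])

# illustrative usage on a synthetic piecewise-constant image plus noise
rng = np.random.default_rng(1)
u_true = np.zeros((64, 64)); u_true[16:48, 16:48] = 1.0
f = u_true + 0.1 * rng.standard_normal(u_true.shape)
u_hat = frame_denoise(f, lam=0.15)
print(float(np.abs(u_hat - u_true).mean()))
```

keeping the low-pass band untouched and shrinking only the high-pass bands mirrors the usual convention that the coarse approximation coefficients are not penalized.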
in a discrete setting , if we know the exact locations of image singularities , then we can recover the image with sharp edges by solving the following minimization problem : where is the index set of pixels corresponding to image singularities .the problem is easy to solve once we know .however , the restoration result of can be highly sensitive to the estimation of , and the main challenge lies in how to identify as accurately as possible from degraded observed images .sparse regularization with wavelet frame transforms is successfully applied in various imaging problems , due to its effectiveness of capturing multiscale singularities using compactly supported wavelet frame functions of varied vanishing moments . in connection with mumford - shah model , the authors in exploited the favorable properties of wavelet frames , and proposed the following piecewise smooth wavelet frame image restoration model : where is a wavelet frame transform and is the image singularities set to be estimated . as image singularities can be well approximated by wavelet frame coefficients of large magnitude , uses the norm to promote the smoothness of image away from , and uses the norm to recover sharp features lying in .the authors proved that under the assumption of a fixed index set , the discrete model converges to a new variational model as the resolution goes to infinity .a special case of the variational model is related to ( and yet significantly different from ) the mumford - shah functional . as a byproduct of the analysis in , it demonstrated that the model is more computationally tractable than the mumford - shah model .interested readers should consult for more details .another model that exploits the similar idea is the following constrained minimization model proposed in : where is the feasible set for , and the constraint on is imposed to promote the regularity of the singularity set , by implication , the sparsity of the wavelet frame coefficients . unlike which directly updates by comparing the norm and norm of at each step , additional geometric constraints on in are utilized to regularize image singularities . even though both and showed significant improvements over the typical wavelet frame sparsity based image restoration model , the above two models have their own drawbacks . for , since is estimated solely depending on the wavelet frame coefficients , the estimated may capture the unwanted isolated singularities when the measurement is severely noisy .in addition , since is split into the and the part , the reconstructed image may suffer from the staircase effect on the interface of and .for , as the coefficients on are not directly penalized , may introduce overly sharpened singularities compared to , especially in the case of deblurring with a severely degraded .in addition , it is difficult to rigorously analyze the model and its solutions with the presence of the singularities set . in this paper, we propose a new edge driven wavelet frame based image restoration model .we use the term `` edge driven '' as the proposed model continues to exploit the idea of alternate recovery of the image and the estimation of its singularities set in a different form . here, we provide a first glance of the model as follows : where is the image to be reconstructed , denotes a relaxed set indicator of the singularities set , and , , and are three wavelet frame transforms applied to different components of the images . 
for the clarity of presentation , the detailed definition and the analysis of the model in a multi - level decomposition formare postponed until section [ waveletmodelalg]-[variationasymanal ] .our model is closely related to the piecewise smooth wavelet frame models and .in fact , can be viewed as a relaxation of where is the estimated singularities of and is its set indicator .the first term is used to restore smooth regions of an image , while the second term preserves singularities , and the third term provides the regularization on singularities to enhance sharp image features . in other words, our model inflicts a different strength of regularization in smooth image regions and near image singularities such as edges , and actively restores / enhances sharp image features at the same time .as the first two terms are exchangeable , an appropriate choice of the wavelet frame transforms as well as the associated parameters is needed to obtain desired effects .the details of the properties of the three transforms will be detailed in section [ modelalg ] .compared to the two existing models and , it should be noted that instead of using norm , norm is used to promote regularity in the smooth region , as the image singularities can be better protected if the singularity set is not accurate .this leads to a more robust image approximation that is less sensitive to the estimation of the singularities of the unknown true image from the degraded measurement .in addition , an implicit and relaxed representation of the singularity set allows continuous overlap between the smooth and the sharp image regions in the transform domain . we expect that such overlap helps to suppress the staircase effects near the interface .finally , representing the singularity set implicitly enables us to provide an asymptotic analysis of the model with respect to both and , in contrast to that of where the singularity set is assumed to be fixed . to facilitate a better understanding of the proposed model and its relation to some existing variational models, we will present an asymptotic analysis of the proposed model .we discover that the continuum limit of the proposed model ( after a reformulation ) takes the following form which is an edge driven variational model that includes several existing variational and partial differential equation ( pde ) models as special cases ( see subsection [ correspondingvariational ] for more details ) .the rest of this paper is organized as follows . in section [ waveletframereview ], we introduce some basics of wavelet frame that will be used in later sections .we propose the discrete edge driven wavelet frame based model and its associated algorithm in section [ waveletmodelalg ] .numerical simulations of our proposed model and comparisons with some of the existing models are conducted at the end of this section . in section [ variationasymanal ], we present the continuum limit of the the proposed discrete model and provide a rigorous asymptotic analysis .all technical proofs will be postponed to the appendix .in this section , we present some basics of wavelet frame theory and some preliminary results . in this subsection , we briefly introduce the concept of tight frames and wavelet tight frames . for the details ,one may consult for theories of frames and wavelet frames , for a short survey on the theory and applications of frames , and for more detailed surveys . 
a countable set with called a tight frame of if where is the inner product on , and is called the canonical coefficient of . for given and , the corresponding quasi - affine system generated by defined by the collection of the dilations and the shifts of the members in : where is defined as when forms a tight frame of , each is called a ( tight ) framelet and the entire system is called a ( tight ) wavelet frame . in particularwhen , we simply write . note that in the literature , the affine system is widely used , which corresponds to the decimate wavelet ( frame ) transform .the quasi - affine system , which corresponds to the undecimated wavelet ( frame ) transformation , was first introduced and analyzed in . throughout this paper, we only discuss the quasi - affine system because it generally performs better in image restoration and the connection to pde is more natural than the widely used affine system .the interested reader can find further details on the affine wavelet frame systems and its connections to the quasi - affine frames in . in the discrete setting , , where denotes the space of two - dimensional discrete images .throughout this paper , we assume for simplicity that all images are square images ; , and we only consider the mra based tensor product wavelet frame system .we denote the two - dimensional fast ( discrete ) framelet transform , or the analysis operator ( see , e.g. , ) with levels of decomposition as where is the framelet band .then is a linear operator with the frame coefficients of at level and band being defined as \circledast\bu.\end{aligned}\ ] ] here , denotes the discrete convolution with a certain boundary condition ( e.g. , periodic boundary condition ) , and is defined as =\left\{\begin{array}{rl } \bq_{\aal}[2^{-l}\bk],&\bk\in 2^l\z^2;\vspace{0.5em}\\ 0,&\bk\notin 2^l\z^2 .\end{array}\right.\end{aligned}\ ] ] notice that and \circledast\bu ] , and the coefficients in the band satisfy =\big(\bq_{\aal}[-\cdot]\ast\bu\big)[\bk]&=\sum_{\bsj\in\z^2}\bq_{\aal}[\bsj-\bk]\bu[\bsj]=2^n\sum_{\bsj\in\z^2}\bq_{\aal}[\bsj-\bk]\big\la u,\phi_{n,\bsj}\big\ra\\ & = 2^n\left\la u,\sum_{\bsj\in\z^2}\bq_{\aal}[\bsj-\bk]\phi_{n,\bsj}\right\ra=2^n\big\la u,\psi_{\aal , n-1,\bk}\big\ra,\end{aligned}\ ] ] where denotes the discrete convolution . the key observation made by that for the piecewise b - spline framelets , there exists a function associated to such that , and , and the explicit formulae of are given in . with the aid of the theory of distribution , proposition [ prop1 ] generalizes the same result to any tensor product framelet .the proof can be found in [ proofprop1 ] .[ prop1 ] assume that a framelet function has vanishing moments of order , and it is generated by the tensor product of univariate framelet functions . if its support is a two - dimensional box \times[a_2,b_2] ] . for and with , we have for every . by proposition [ prop1 ], there exists the unique corresponding to such that , , and a.e . then by the chain rule where and are defined as in .this means that the proof is completed by the integration by parts formula ( * ? ? ?* proposition 4.2 ) : here , , is the index set defined as and if and if with being the outward unit normal of .note that every integration on vanishes , because from the proof of proposition [ prop1 ] , it can be easily verified that for , so that for .hence , which completes the proof. recently in , the authors obtained a similar result as proposition [ prop1 ] for generic tensor product framelets . 
however , it was not clear from their analysis that as well as the regularity of .therefore , the conclusion of proposition [ prop1 ] is stronger .in this section , we present our edge driven wavelet frame based image restoration model with full details. we also present an alternating optimization algorithm which iteratively updates the image to be recovered and the set of singularities .the proposed model and algorithm are all in discrete settings , where all variables are discrete arrays .we denote by the set of indices of the cartesian grid which discretizes the domain .recall that the space of all two - dimensional array on the grid is denoted as .let be some linear operator mapping into itself , so that both the ( unknown ) true image and the degraded measurement ( or the observed image ) are the elements of .we propose our wavelet frame based image restoration model as where \big)\left(\sum_{\aal\in\bb}\lambda_{l,\aal}[\bk]\bigg|\big(\bsw_{l,\aal}\bu\big)[\bk]\bigg|^2\right)^{\f{1}{2}},\\ \left\|\bsv\cdot\big(\gga\cdot\bsw'\bu\big)\right\|_1&=\sum_{\bk\in\oo}\sum_{l=0}^{l-1}\bsv_l[\bk]\left(\sum_{\aal\in\bb'}\gamma_{l,\aal}[\bk]\bigg|\big(\bsw_{l,\aal}'\bu\big)[\bk]\bigg|^2\right)^{\f{1}{2}},\\ \big\|\rrh\cdot\bsw''\bsv\big\|_1&=\sum_{l=0}^{l-1}\underbrace{\sum_{\bk\in\oo}\sum_{m=0}^{l''-1}\left(\sum_{\aal\in\bb''}\rho_{l , m,\aal}[\bk]\bigg|\big(\bsw_{m,\aal}''\bsv_l\big)[\bk]\bigg|^2\right)^{\f{1}{2}}}_{:=\big\|\rrh_l\cdot\bsw''\bsv_l\big\|_1},\end{aligned}\ ] ] and , , and denote the framelet bands of , , and respectively : to better understand the proposed model , we observe that it can be regarded as a relaxation of the following model : where with being the estimated singularity region for , which will be denoted as the level singularity in what follows , and with being the labelling binary image of : =1 ] .this relaxation allows an overlap between the smooth and the sharp image regions in the transform domain , which will be helpful to suppress the staircase effects near the interface .furthermore , as will be rigorously analyzed in section [ variationasymanal ] , this implicit representation of the singularity set enables us to provide an asymptotic analysis of the model with respect to both and , in contrast to that of where the singularity set is assumed to be fixed .we would like to mention that our model mainly focuses on the restoration of images which can be well approximated by piecewise smooth functions .therefore , our model may not be suitable for images having textures .indeed , textures can be sparsely approximated by systems with oscillating patterns such as local cosine systems , rather than piecewise smooth functions .however , we can easily modify the proposed model by adopting the idea of a two system model ( e.g. ) to better handle images with textures .nonetheless , we will not discuss details on such variant of our model , as it is beyond the scope of this paper .we will focus on recovering images that are piecewise smoothness . * step 0 .* , the proposed alternating minimization algorithm for is given by algorithm [ alg1 ] .to solve the subproblem , we use the split bregman algorithm , which is a widely used method for solving various convex sparse optimization problems in variational image restoration . for completeness, we present the full details of the split bregman algorithm solving the subproblem as follows : let . 
for where we omit the outer iteration superscript for notational simplicity .note that each of the subproblem of has a closed - form solution and it can be rewritten as ^{-1}\left[\bsa^t\bsf+\mu_1\bsw^t\big(\bsd_1^j-\bb_1^j\big)+\mu_2\big(\bsw'\big)^t\big(\bsd_2^j-\bb_2^j\big)\right]\\ \bsd_1^{j+1}&=\mt_{(\one-\bsv)\cdot\lam/\mu_1}\big(\bsw\bu^{j+1}+\bb_1^j\big)\\ \bsd_2^{j+1}&=\mt_{\bsv\cdot\gga/\mu_2}\big(\bsw'\bu^{j+1}+\bb_2^j\big)\\ \bb_1^{j+1}&=\bb_1^j+\bsw\bu^{j+1}-\bsd_1^{j+1}\\ \bb_2^{j+1}&=\bb_2^j+\bsw'\bu^{j+1}-\bsd_2^{j+1}. \end{split}\end{aligned}\ ] ] here , the isotropic shrinkage is defined as =\left\{\begin{array}{ll } \w_{l,\aal}[\bk],&\aal=\0,\vspace{0.5em}\\ \f{\w_{l,\aal}[\bk]}{r_l[\bk]}\max\big\{r_l[\bk]-\bsv_l[\bk]\lam_{l,\aal}[\bk],0\big\},&\aal\in\bb , \end{array}\right.\end{aligned}\ ] ] with =\left(\sum_{\aal\in\bb}\big|\w_{l,\aal}[\bk]\big|^2\right)^{1/2} ] and . in our numerical simulations , we set for .in this subsection , we conduct some numerical simulations on image inpainting and image deblurring using algorithm [ alg1 ] . in all of the numerical simulations, we will use the piecewise cubic b - spline wavelet frame for , and the piecewise linear b - spline for and .the levels of decomposition , i.e. and are chosen differently depending on the image restoration problems .we compare the results obtained from our proposed model with the piecewise smooth ( ps ) model in , and the geometric structure ( gs ) model in .we also compare with the total generalized variation ( tgv ) model : which is solved by the modified primal - dual hybrid gradient method . here , , and we use forward difference with periodic boundary condition to discretize . in all image restoration problems ,the true image takes the integer values in ] for the fair comparison.,width=158 ] ] for the fair comparison.,width=158 ] + ] for the fair comparison.,width=158 ] ] .the proof is similar to ( * ? ? ?* theorem 2 ) .however , for completeness , we include the proof .since takes its values in ] , has to be a minimizer of . now , we consider the -subproblem of when . by virtue of proposition [ prop3 ], it suffices to consider the following problem : for a fixed .then we can see how is related to several existing variational and pde models for image restoration : 1 . when and , is reduced to which is a special type of the combined first and second order total variation ( tv ) model .more precisely , let and .then we have the following combined first and second order tv model with spatially varying parameters 2 . in , the gradient descent flow ofis studied : we can easily see that there are two different nonlinear diffusions in region and , where stands for the interior of .the second order nonlinear diffusion in plays a role of edge - enhancing , while the fourth order nonlinear diffusion in plays a role of preventing smooth regions from being blocky .3 . 
the -subproblem can be viewed ( formally ) as a generalized inf - convolution model as well ; we define and we set and as in .then almost everywhere in , and , namely reduces to the following inf - convolution model : moreover , can be rewritten as which is a special case of the following ( unsymmetrized ) tgv model as we can see from the above discussions , the variational model is an edge driven variational model which restores piecewise smooth functions by inflicting varied strength of regularization in smooth and sharp image regions and simultaneously restoring image singularities .since the proposed discrete model approximates the variational model as will be shown in the next subsection , we can make the same assertion on .furthermore , the proposed model can be viewed as a more general image restoration model than the aforementioned variational models . in this subsection, we find a connection between the model and the variational model .as will be revealed in our analysis , can approximate various differential operators by choosing an appropriate weight for each of framelet bands .therefore , for simplicity , we shall restrict in and analyze the following problem with , , and chosen differently for different framelet bands .we further assume , for simplicity , that is the wavelet frame transform of piecewise b - spline wavelet frame systems . by virtue of proposition [ prop1 ] , it is not hard to see that our analysis can be generalized to the more general case .we start with introducing some symbols and notation that will be used throughout the rest of the paper .[ notation1 ] we focus our analysis on , i.e. the two - dimensional cases . all the two - dimensional refinable functions and framelets are assumed to be constructed by tensor products of univariate b - splines and the associated framelets obtained from the uep . 1 .all functions we consider are defined on , and that their discrete versions , i.e. digital images are defined on an cartesian grid on ^ 2 ] . here , for given sets and , denotes the space of all functions mapping from to .note that since is a finite set , we have and ^{\mm_n}\simeq[0,1]^{|\mm_n|} ] .we define the index set by where is the support of .in other words , consists of double indices such that the boundary condition of \circledast\bu ] . here, is the sobolev space defined as and ) ] .let be the refinable function corresponding to .define a linear operator on by then we define for notational simplicity , we will denote the energy functional in by : where the subscript is used to emphasize the dependence of and on the image resolution .we first consider ^{\mm_n}\big\}\\ p_e&=\inf\big\{e_n(u , v):u\in w_1^s(\om),~v\in w_1^r(\om,[0,1])\big\}.\end{aligned}\ ] ] then it is obvious that because for every ) ] may not necessarily lie in ) ] where )=\big\{u\in l_2(\om):0\leq u\leq 1~\text{a.e .in}~\om\big\} ] .[ rmk ] we further mention that in fact it is not necessary to impose the restriction on . using the refinable function corresponding to the piecewise b - spline wavelet frame system and defining corresponding index sets appropriately, we can establish the relation between ( the reformulation of ) the following model and the variational model .nevertheless , for simplicity , we focus on analyzing the relation between and . 
for convenience , we write and respectively as and here , without loss of generality , we assume that for .to draw an asymptotic relation between and , we need the assumptions on the operator and its discretization , and the parameters , , and : 1 . is a continuous linear operator mapping into itself , and its discretization satisfies note that which corresponds to denoising , deblurring , and inpainting satisfies the above assumption .we split the framelet band into where is the index set in . for , we set , where is given in proposition [ prop1 ] . for , we set for some such that and .the remaining parameters and are defined as in the similar way except for changing with in and in respectively . in particular , we replace with when we set . it remains to impose an appropriate topology on ) ] is closed in . hence , in what follows , by a topology on ) ] , we have with theorem [ th1 ] , we can show that the sequence is equicontinuous .[ prop2 ] assume that a1 and a2 are satisfied .let ) ] with and , we have . see [ proofprop2]. with the aid of theorem [ th1 ] and proposition [ prop2 ] , we have the following theorem showing that the convergence of to is stronger than pointwise convergence . a direct consequence of such convergence is the -convergence of to in ) ] with we have consequently , -converges to in ) ] , let be the sequence as given in item 2 of the definition of -convergence .together with , we have which completes the proof. this paper , we proposed a new edge driven wavelet frame based image restoration model by approximating images as piecewise smooth functions .the proposed model inflicts different strength of regularization in smooth image regions and near image singularities such as edges , and actively regularize image singularities at the same time .the performance gain of the proposed model over the existing piecewise smooth image restoration models is mainly due to its robustness to the estimation of image singularities and better regularization on the singularity set .finally , the formulation of using an implicit representation of the singularities set also enables an asymptotic analysis of the proposed edge driven model and a rigorous connection between the discrete model and a general variational model in the continuum setting .since is constructed by the tensor product of the univariate framelets , we first consider one - dimensional case .let have vanishing moments of order , and let . from the assumption , is a closed interval .we also denote by the supporting function on : since has vanishing moments of order , it follows that for all , but .since is compactly supported , its fourier transform can be extended to an entire function of , called fourier - laplace transform , which satisfies. then the taylor series expansion of at satisfies in other words , there exists an entire function such that for a given , we define note that and for . then by maximum modulus principle ( e.g. ) , we have latexmath:[\ ] ] where is independent of , and the third inequality follows from the stability of . for a given , we choose and both of which are again independent of .therefore , whenever and , we have which completes the proof of proposition [ prop2 ] .10 , _ variational analysis in sobolev and bv spaces _ , mos - siam series on optimization , society for industrial and applied mathematics ( siam ) , philadelphia , pa ; mathematical optimization society , philadelphia , pa , second ed . ,applications to pdes and optimization . , _ mathematical problems in image processing . 
partial differential equations and the calculus of variations . foreword by olivier faugeras _ , vol . 147 of appl . sci . , springer , new york , 2nd ed . , 2006 . , _ image restoration : a data - driven perspective _ , in proceedings of the 8th international congress on industrial and applied mathematics , higher ed . press , beijing , 2015 , pp . 65 - 108 . , _ the analysis of linear partial differential operators i . distribution theory and fourier analysis _ , vol . 256 of grundlehren der mathematischen wissenschaften [ fundamental principles of mathematical sciences ] , springer - verlag , berlin , 1983 . , _ oscillating patterns in image processing and nonlinear evolution equations _ , vol . 22 of univ . lecture ser . , american mathematical society , providence , ri , 2001 . the fifteenth dean jacqueline b. lewis memorial lectures . , _ and approximation of vector fields in the plane _ , in nonlinear partial differential equations in applied science ( tokyo , 1982 ) , vol . 81 of north - holland math . , north - holland , amsterdam , 1983 , pp . 273 - 288 .
wavelet frame systems are known to be effective in capturing singularities from noisy and degraded images . in this paper , we introduce a new edge driven wavelet frame model for image restoration by approximating images as piecewise smooth functions . with an implicit representation of the image singularity set , the proposed model inflicts different strengths of regularization on smooth image regions and on singular regions such as edges . the proposed edge driven model is robust in both image approximation and singularity estimation . the implicit formulation also enables an asymptotic analysis of the proposed model and a rigorous connection between the discrete model and a general continuous variational model . finally , numerical results on image inpainting and deblurring show that the proposed model compares favorably with several popular image restoration models . image restoration , ( tight ) wavelet frames , framelets , edge estimation , variational method , pointwise convergence , γ-convergence
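to give a rough feel for the edge driven idea summarized in the abstract, the following python sketch is a drastically simplified caricature: it alternates between estimating a relaxed edge indicator v in [0, 1] from high-pass frame coefficients and applying a spatially varying shrinkage with threshold (1 - v) * lam, so that smooth regions are regularized strongly while coefficients near edges are barely penalized. it is not the three-transform model of the paper nor its split bregman solver; the single-level haar frame, the update rule for v and all parameter values are assumptions made for illustration only.

```python
import numpy as np

def haar2(u):
    """Single-level undecimated 2-D Haar analysis (periodic); low-pass first."""
    a, b = np.array([0.5, 0.5]), np.array([0.5, -0.5])
    def sep(x, fr, fc):
        t = fr[0] * x + fr[1] * np.roll(x, -1, axis=0)
        return fc[0] * t + fc[1] * np.roll(t, -1, axis=1)
    return [sep(u, a, a), sep(u, a, b), sep(u, b, a), sep(u, b, b)]

def haar2_adj(bands):
    """Adjoint (synthesis); this frame is tight, so haar2_adj(haar2(u)) == u."""
    a, b = np.array([0.5, 0.5]), np.array([0.5, -0.5])
    def sep_t(x, fr, fc):
        t = fc[0] * x + fc[1] * np.roll(x, 1, axis=1)
        return fr[0] * t + fr[1] * np.roll(t, 1, axis=0)
    pairs = [(a, a), (a, b), (b, a), (b, b)]
    return sum(sep_t(d, fr, fc) for d, (fr, fc) in zip(bands, pairs))

def soft(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def edge_driven_denoise(f, lam, n_iter=5):
    """Caricature of the edge-driven idea: (i) estimate a relaxed edge
    indicator v from high-pass magnitudes of the current estimate, then
    (ii) shrink the data's high-pass bands with threshold (1 - v) * lam."""
    u = f.copy()
    for _ in range(n_iter):
        _, *high = haar2(u)
        mag = np.sqrt(sum(d ** 2 for d in high))
        v = mag / (mag.max() + 1e-12)            # relaxed singularity indicator
        low_f, *high_f = haar2(f)
        u = haar2_adj([low_f] + [soft(d, (1.0 - v) * lam) for d in high_f])
    return u, v

rng = np.random.default_rng(2)
u_true = np.zeros((64, 64)); u_true[16:48, 16:48] = 1.0
f = u_true + 0.1 * rng.standard_normal(u_true.shape)
u_hat, v_hat = edge_driven_denoise(f, lam=0.25)
print(float(np.abs(u_hat - u_true).mean()), float(v_hat.mean()))
```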
two goals are targeted in this opus .the first stems from various indications that ir environments may demonstrate non - classical behavior , similar to that of quantum objects , and we would like to study the emergence of these effects on simple , ` tame ' models .the second is to operate the idea that the volume of data corpora became so large that it can be treated to be a continuous medium like it is done in solid state physics ( this option was highlighted in ) and information retrieval becomes akin to quantum measurement . in order to simulate the emergence of quantum behavior we suggest a toy model with few parameters which can be physically based on classical or quantum material .the principal assumption for the model are : * the number of retrieved documents is potentially infinite * both the relevance ( non - relevance ) of a document and the occurrence of a particular term in it are measurable properties , which are subject to physical measurement * the properties form an infinite set * the method we use to enhance the performance is query expansion : we take a term and , within a given query , pre - select documents by possessing the term based on this principles , we suggest a toy model of information retrieval .more precisely , we consider two models built in a similar way , but with the difference that the first is based on classical objects ( macro - objects ) governed by boolean logic , while the second deals with quantum microparticles .then , we carry out a series of numerical experiments with both models , the design of the experiments is similar and we explore the deviation of the internal logic of the second model from the boolean one . as a measure of this discrepancywe use accardi statistical invariant associated with each term .the results of the numerical evaluations are then compared with the results of a similar experiment performed over tipster test collection .let us first exactly describe the settings we assume for our toy model .we are not going to deal with average precision ( ap ) , rather , dwell on a simpler thing : just increasing the precision , saying nothing about the recall .that is , we use strictly one tool : query expansion by pre - filtering of term occurrence , and nothing more .[ [ step-1.-initial - setting . ] ] step 1 .initial setting .+ + + + + + + + + + + + + + + + + + + + + + + + first , the relevance is tested .both relevance and non - relevance are nothing but properties , in our simple model they will correspond to states , , respectively .after the relevance is justified , we check the occurrence of certain term .again , the term occurrence is nothing but a property , and we check the documents if they possess it ..8 mm [ [ step-2.-updated - search - query - expansion . ] ] step 2 .updated search : query expansion .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + once a term is set and the appropriate measurements are carried out , the search is modified by _ term pre - selection _ or , in other words , query expansion .this is done as follows : leaving the query , that is , the state the same , we perform a pre - selection and take only those documents which contain the term . then the result of the initial and of the updated queries are compared by forming which can take both positive and negative values depending on the choice of the term .melucci metaphor is a unified view to represent simplified ir environment with no reference to particular underlying logic . 
according to it , the ir procedureis represented by a two - slit experiment , widely known in physics .the ir system is thought of as a laboratory with the source , which supplies documents according to the input query .the documents within melucci metaphor are particles , they may be of classical or quantum nature , or , perhaps , of some other kind .we do not dwell on the mechanism of producing this flux of documents - particles .what is essential , is that the number of ejected documents is supposed to be potentially infinite , but we analyze only first documents .putting a particular query means preparing the source in a particular state . from this experimentwe get the value of : } \put(21,33){\vector(2,1){15 } } \put(22,32){\vector(4,1){14.5 } } \put(22,30){\vector(1,0){14 } } \put(22,28){\vector(4,-1){14.5 } } \put(21,27){\vector(2,-1){15 } } \put(50,3){\line(0,1){8 } } \put(50,19){\line(0,1){16 } } \put(50,43){\line(0,1){8 } } \multiput(49,11)(0,8){2}{\line(1,0){2}}\put(52,21){ } \multiput(49,35)(0,8){2}{\line(1,0){2}}\put(52,44){ } \multiput(48.8,10)(0.1,0){4}{\line(0,1){10}}\multiput(57,36)(0,3){3}{\vector(1,0){14 } } \put(77,24){\framebox(27,27){check } } \end{picture}\ ] ] and from this experiment we get the value of : } \put(21,33){\vector(2,1){15 } } \put(22,32){\vector(4,1){14.5 } } \put(22,30){\vector(1,0){14 } } \put(22,28){\vector(4,-1){14.5 } } \put(21,27){\vector(2,-1){15 } } \put(50,3){\line(0,1){8 } } \put(50,19){\line(0,1){16 } } \put(50,43){\line(0,1){8 } } \multiput(49,11)(0,8){2}{\line(1,0){2}}\put(52,21){ } \multiput(49,35)(0,8){2}{\line(1,0){2}}\put(52,44){ } \multiput(48.8,34)(0.1,0){4}{\line(0,1){10}}\multiput(57,12)(0,3){3}{\vector(1,0){14 } } \put(77,3){\framebox(27,27){check } } \end{picture}\ ] ] when we are in the classical realm , there is no need to calculate due to our boolean belief revision ( that is , the law of total probability ) : but just for fun we may attempt to measure directly , removing the relevance check : } \put(21,33){\vector(2,1){15 } } \put(22,32){\vector(4,1){14.5 } } \put(22,30){\vector(1,0){14 } } \put(22,28){\vector(4,-1){14.5 } } \put(21,27){\vector(2,-1){15 } } \put(50,3){\line(0,1){8 } } \put(50,19){\line(0,1){16 } } \put(50,43){\line(0,1){8 } } \multiput(49,11)(0,8){2}{\line(1,0){2}}\put(52,21){ } \multiput(49,35)(0,8){2}{\line(1,0){2}}\put(52,44){ } \multiput(57,36)(0,3){3}{\vector(1,0){14 } } \multiput(57,12)(0,3){3}{\vector(1,0){14 } } \put(77,11){\framebox(33,34){check } } \end{picture}\ ] ] and surprisingly discover that the result may drastically differ from .let us pass to exact numerical results .in order to evaluate the discrepancy , accardi statistical invariant is used : when the ir environment is classical , holds , therefore that is why in classical realm . in quantumsetting this is violated , see section [ squant ] for numerical results .in this case we suppose that the documents are like balls in an urn .we evaluate the probability of relevance ( non - relevance , respectively ) as the following ratios , introducing the notation : the next step of our toy scenario is to test _ afterwards _ the occurrence of a term .this gives rise to conditional probabilities , which are evaluated as follows together with the notations : then apply the bayes formula : using the parameters introduced above now we are in a position to evaluate the expected boost of precision . 
substituting to , we have in the sequel , for the comparison , we shall need the expression for accardi statistical invariant for this case which for classical case is .in this setting we assume that the documents form the flux of spin-1/2 quantum particles . for them ,the state space is two - dimensional complex hilbert space . for the sake of conveniencechoose the properties and to be basis vectors .the query state is a vector , denote it coordinates ; then the probabilities are expressed by the same formulas as in classical case , namely and the conditional probabilities for the term : due to the laws of quantum mechanics this is because we decided to wait for the same number of documents to arrive . in ` visible ' terms that means that the search engine with pre - selection will work longer than without it . as a result ,within our model we say nothing about recall , dealing only with precision of the ir process .the expression for the precision boost reads in this case : calculate the accardi statistical invariant for quantum case and see that it can take any real value .having the two models , classical and quantum , we perform numerical simulations .we take uniformly distributed values of the parameters of both models classical and quantum and create the scatterplots , each points with coordinates .the left plot corresponds to classical model ( balls from an urn ) , and the right one depicts the results from quantum model ( spin- particle ) . [ cols="^,^ " , ] looking at quantum picture we see that , if the physical model is quantum , better boost is obtained by expanding queries with ` non - classical ' terms , those violating the accardi restriction .now look at the results of the experiments carried out over tipster collection with the same coordinate axes : these are the data based on real - world ir environment and we see that they are more similar to quantum pattern than to classical one .we see that the experimental results over large data collections demonstrate features of quantum behavior . in order to mimic quantum indeterminacy ,the access to complete knowledge about the system was artificially restricted . but this phenomenon is generic for information retrieval !the point is that search engines store some limited data about the documents rather than the documents themselves .this is a natural restriction for the access to the documents to be complete , which , in turn , could be the reason for the observed non - classicality .the author appreciates cris calude , karl svozil and jozef tkadlec for stimulating discussions on quantum contextuality during my stay in technical university of vienna , supported by the ausseninstitut and the institute of theoretical physics of the vienna university of tech- nology .many new related ideas were acquired during the working group meeting ` foundations of quantum mechanics and relativistic spacetime ' for cost action mp1006 , 25 - 26 september 2012 , university of athens , greece . a financial support from russian basic research foundation ( grant 10 - 06 - 00178a ) is appreciated .
recent numerical results show that non - bayesian knowledge revision may be helpful in search engine training and optimization . in order to demonstrate how basic assumptions about the physical nature ( and hence the observed statistics ) of retrieved documents can affect the performance of search engines , we suggest an idealized toy model with a minimal number of parameters .
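the kind of monte-carlo comparison described above can be sketched as follows. since the exact expression of the accardi statistical invariant is not legible in this extract, the sketch uses the "total-probability defect" p(t) - [p(t|r)p(r) + p(t|n)p(n)] as a stand-in for the horizontal coordinate: it vanishes identically in the classical urn model and is generally nonzero in the spin-1/2 model. the random parametrization of the query and term states, the sample sizes and the function names are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def classical_sample():
    """Urn model: relevance and term occurrence obey Boolean/Kolmogorov rules."""
    p_r, p_t_r, p_t_n = rng.uniform(size=3)            # p(R), p(t|R), p(t|notR)
    p_t = p_t_r * p_r + p_t_n * (1 - p_r)               # law of total probability
    p_r_t = p_t_r * p_r / p_t if p_t > 0 else 0.0       # Bayes formula
    boost = p_r_t - p_r                                  # precision gain of expansion
    defect = p_t - (p_t_r * p_r + p_t_n * (1 - p_r))     # identically zero here
    return defect, boost

def quantum_sample():
    """Spin-1/2 model: |R>=(1,0), |N>=(0,1); the query state and the term
    projector are random unit vectors in C^2; probabilities follow the Born rule."""
    def rand_state():
        a, phi = rng.uniform(0, np.pi / 2), rng.uniform(0, 2 * np.pi)
        return np.array([np.cos(a), np.exp(1j * phi) * np.sin(a)])
    q, t = rand_state(), rand_state()
    p_r = abs(q[0]) ** 2                                 # p(R) under the query state
    p_t = abs(np.vdot(t, q)) ** 2                        # p(term) under the query state
    p_t_r, p_t_n = abs(t[0]) ** 2, abs(t[1]) ** 2
    p_r_t = abs(t[0]) ** 2                               # relevance after term pre-selection
    boost = p_r_t - p_r
    defect = p_t - (p_t_r * p_r + p_t_n * (1 - p_r))     # generally nonzero
    return defect, boost

cl = np.array([classical_sample() for _ in range(10_000)])
qu = np.array([quantum_sample() for _ in range(10_000)])
print("classical defect range:", cl[:, 0].min(), cl[:, 0].max())
print("quantum   defect range:", qu[:, 0].min(), qu[:, 0].max())
```

plotting boost against defect for the two samples reproduces qualitatively the difference between the two scatterplot patterns discussed in the simulations section.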
despite their growing success in industry ( e - commerce , social networks , vod , music streaming platforms ) and their impressive predictive performances , two major user concerns frequently show up about recommender systems in online services .first , people are more and more preoccupied by privacy issues . to maintain a good trust level, we should thus provide models and algorithms that offer the best compromise between quality of recommendations , ethics as regards data collection , and users policy .second , recommendations are still too often made out of context .recommending is not only a question of maximizing the accuracy , but also providing relevant items at the right time in the good manner .this is the reason why the literature about context - aware recommender systems is increasing fast .starting from these observations , we wondered what could possibly be the necessary and sufficient data to understand as quickly as possible the user context , and then to adapt recommendations .as regards privacy , cranor suggests to favor methods where personal data are transient ( _ i.e. _ deleted after the task or the session ) .the system should also rely on item profiles , rather than user profiles .thus , it is reasonable to study the short history of recently consulted items , and see what are the common features or differences that could explain or characterize the current user context .this line of reasoning implies that we have a precise description of each item available in the online service , or at least an exhaustive set of description attributes like those we have in product catalogs , but for every type of items ( music tracks , social network profiles of users and companies , ) . besides these considerations , castagnos _ et al ._ took an interest in the role of diversity within the user decision - making process .they provide two interesting conclusions within the frame of e - commerce applications . on one hand, the diversity in recommender systems seems to significantly improve user satisfaction , and is correlated to the intention to buy . on the other hand ,the user need for diversity evolves over time , and should be carefully controlled to provide the correct amount of diversity and novelty . bringing too much diversity risks to transform recommendations into novelty .recent works confirmed that satisfaction is negatively dependent on novelty , and badly - used diversity can lead users to mistrust the system .finally , in , we showed that recommender systems should increase the diversity level at the end of a session to make users more confident in their buying decisions . yet , predicting when the session will end is not an easy task .this conclusion led us to ask if we could take the opposite view : would it be possible to monitor the diversity level within user sequences of consultations over time , and find connections between variations of diversity and changes of context ? through an exploratory research , we proposed the first model that measure the diversity brought by each consulted item , relatively to a short user history .we showed that variations of diversity often match with ends of session .however , these conclusions were made _ a posteriori _ , _ i.e. _ by analyzing the whole sequence of consultations for each user , and then knowing how each session ended and how the next session started . 
furthermore , our model was built by considering that all consulted items were of the same type .as an example , if the active user is listening to music , it should be possible to measure the diversity between each pair of items . in this paper , we want to bring this model a step further .first , we aim at investigating if it allows us to predict ends of session in real time , without knowing what happens next. then , we will test the robustness of our model , by reconsidering our strong hypothesis according to which we always have a complete description of items .we will thus evaluate the performances of our model when we have sparse data about items .at last , we will extend our model to a situation where the active user consults different types of items ( _ e.g. _ music tracks , social network profiles , ... ) . in this case , it is not always possible to measure the diversity between items , since their attributes may be different . thank to a corpus of more than 210,000 consultations , we show that the performances of our system remain stable up to 60% of missing diversity measures .the rest of this paper is organized as follows : section [ related - work ] offers an overview of the literature as regards diversity and context in recommender systems .section [ model ] is dedicated to the presentation of our model and our hypotheses about its robustness to sparsity and diversification of types of items .section [ experiment ] presents and discusses its performances .diversity has long been proven to improve the interactions between users and recommender systems .this dimension is considered in two different ways in the literature .some analyze the impact of diversity on users behavior , while others integrate diversity in machine learning algorithms of recommender systems .diversity has first been defined by smyth and mcclave as the opposite dimension to similarity .more precisely , this measure quantifies the dissimilarity within a set of items .thus , diversifying recommendations consists in determining the best set of items that are highly similar to the users known preferences while reducing the similarity between those recommendations .a classification of diversity has been proposed by adomavicius and kwon .it distinguishes individual diversity and aggregated diversity , depending on if we are interested in generating recommendations to individuals , or to groups of users . here , we focus on individual diversity. many works focus on controlling the diversity level brought by recommender systems .diversity was initially dedicated to content - based algorithms , especially in the case we have attribute values for each item .we distinguish 3 practices : we can compute the diversity between two items , the diversity within a set of items , or the relative diversity brought by a single item relatively to a set of items ( see equation [ eq : reldiv ] ) .these metrics have then been used in content - based filtering to reorder the recommendation list , according to a diversity criterion .in addition to these content - based algorithms , some works have focused on a way to integrate diversity in collaborative filtering . in parallel to the integration of diversity in recommender systems , many user studies took interest in the role and perception of diversity . 
mcginty andsmyth showed that diversity improves the efficiency of recommendations .many works showed that diversity is perceived by users , and positively correlated to user satisfaction .nevertheless , it came out that the user need for diversity evolves over time and diversity should not be integrated in the same way at each recommendation stage . at last ,recent works focus on how the amount of diversity should be provided by recommender systems .contrary to this literature , we do not want to adapt the amount of diversity in recommendations .we aim at observing the natural diversity level within users navigation path to infer their context .thus , the following subsection will be dedicated to this notion of context .integrating the context into the recommendation process is an increasing research field known as ` cars ` , acronym for context aware recommender systems . in their state - of - the - art ,adomavicius _ et al ._ present several approaches like contextual modeling , pre / post filtering method for using contextual factors in order to adapt recommendation to the users context .contextual factors are all the information which can be gathered and used by a system to determine and characterize the current context of the user . for example, a system can use the location of the user to adapt the recommendation .the most important drawbacks of these kinds of systems lies in the fact that they are invasive , by using personal informations and most often require a complex representational model .for example , such systems can use ontologies to determine user context .yet , such an ontology can not be transferred from one domain to another . as adomavicius and tuzhilinexplain in their conclusion , `` most of the work on cars has focused on the representation view of the context and the alternative methods have been underexplored '' .this fact has also been highlighted by hariri _et al . _ who have developed a ` cars ` based on users feedback on items presented in a interactive recommender system .even if this approach dynamically adapts to changes of context , it requires user effort to obtain user feedback on which the system is based .we thus aim at proposing a similar method having the same objectives , but more transparent for users by relying on item profiles and users navigation path . in the following ,we propose to distinguish two different types of context : explicit context and implicit context .explicit context is close to the definition of contextual factors , that is to say physical context , social context , interaction media context and modal context are different kinds of explicit context .conversely , implicit context will refer to the common characteristics shared by the consulted items during a certain time lapse .the motivation behind this notion is that detecting implicit context does not increase user involvement , enhances the privacy and can be used in any application domain without heavy modifications .as explained above , the role of our model is to monitor the diversity level within users navigation path over time , and then derive their implicit context . concretely ,each time a user consults a new item , we compute the added value of this item called ` target ` relatively to a short history ( _ i.e. _ the previously consulted items ) as regards to diversity . 
to provide a better understanding of our model , we will rely on an example shown in figure [ fig : dance ] .let us imagine an online service that allows users to listen to music , and to browse different kinds of profiles like we can do on social networks ( profiles of other users , profiles of artists , information about record companies and so on ) . for each user, we can then pay attention to his / her sequence of consultations . in this example, we understand that there might be several contexts within a session , and several ways to classify them. one strength of our model is that it allows us to measure in real time the diversity brought by each item , for each attribute independently , and for the whole set of attributes .thus , it can be configured to detect and characterize various kinds of implicit contexts , or to cut the navigation path at some points where diversity reaches the highest levels ( _ i.e. _ what we called the changes of implicit context ) . in the rest of this article, we will give meaning to these changes of implicit context , by verifying that they match with some events such as ends of session in many cases .but , of course , there can be several successive implicit contexts , and several changes of context , within a session .let us notice that , in the case where we want to force the detection of events and to optimize the characterization of the implicit context according to user s expectations , all we have to do is to complete a learning phase to find the optimal weight of each attribute within our computation of the diversity over time .the quality of our model has been demonstrated in . however , the purpose of this paper is to test the robustness of our model in the case where we have sparse data within item descriptions , that is to say detecting the same changes of implicit context with less data .we see two different scenarios which can explain sparse data .either we have a single type of items ( for example music tracks ) , but an incomplete description of each item , which is often the case in real applications . or the users navigation path are made of different types of items , and there may be a partial overlap of attributes between items . in figure[ fig : dance ] , common attributes between items are displayed on the same line . before evaluating the robustness of our model, we will present it more formally and will introduce some notations .we call = \{_ , , ... , _ } the set of users . refers to the active user . = \{_ , , ... , _ } is the whole set of consulted items .the recent user history of size at time , called , can be written under the form of a sequence of items , ... , , , .at last , = _ , , ... , _ is the set of attributes of an item .let us note that each consulted item , such as , refers to an item of the set .our model is a markov model . at each time - step ( _ i.e. _ each timethe active user consults a new item ) , our model computes the relative diversity brought by the new consulted item relatively to . in order to do so , we strongly took inspiration from the formula proposed by smyth and mcclave ( see equation [ eq : reldiv ] ) .the only difference here is that we count the number of times when we can compute the similarity between the target item and one of the items in the history .as the active user can browse different types of items , there may be situations where there is no common attributes between two items , and no way to compute the similarity between this pair of items ( _ i.e. 
_ it returns nan ) .consequently , is included in $ ] . measuring rd ( equation [ eq : reldiv ] ) involves to compute the similarity between each pair of items , using equation [ eq : sim ] . in this equation, the function computes the similarity between two items relatively to a specific attribute . is the weight of this attribute in the computation of the similarity . in this paper , since we want mainly want to test the robustness of our model as regards sparse data , we will use a naive approach where each weight is equal to 1 .but we could parameter these weights to adapt our model , according to the kind of changes of implicit context and/or the kind of events we want to detect . in equation [ eq : sim ] , refers to the values ( or set of values ) of an attribute for a given item .starting from here , we developed 5 generic formulas to compute similarities per attribute , according to the type of attribute we have .if the values are expressed under the form of a list ( _ e.g. _ the attribute `` similar artists '' for a song ) , we will use equation [ eq : sima1 ] . if the values correspond to intervals ( _ e.g. _ the attribute `` period of activity of a singer '' ) , we will use equation [ eq : sima2 ] . if have binary values ( _ e.g. _ the mode of a song ) , we will use equation [ eq : sima3 ] . if take numerical values ( _ e.g. _ user ratings ) , we will use equation [ eq : sima4 ] . at last , if express coordinates ( _ e.g. _ the localization of two artists ) , we will use the equation [ eq : sima5 ] . finally , we are considering that there is a change of implicit context if the 4 conditions of equation [ eq : detection ] are met . allows us to focus on relative diversity measures that exceed a given threshold . the scientific question is now to test if our model is robust to a realistic situation where : ( 1 ) we do not know what will happen after the current time , ( 2 ) we have sparse data as regards item descriptions . for these reasons, we will make 3 assumptions that will be discussed in section [ experiment ] .this assumption has not been considered in our preliminary work in , since we were analyzing variations of diversity _ a posteriori _ on the whole user s navigation path , knowing consultations at each time .we will thus check how many ends of session we can retrieve by only using data at time , even if this does not lower the interests and relevancy of our other detections , as explained above ( see subsection [ overview ] ) . consideringthat we have a single type of items , we expect to retrieve the same amount of events and changes of implicit context. in this scenario , the attributes may be different from one type of items to another , leading to another form of sparsity .in this section , we present 3 experiments we developed to validate these assumptions . in the first experiment ( * h1 * ) , we test the ability of our model to detect changes of implicit context in real time , and check if the detected contexts could be correlated with some particular events like ends of sessions . however , unlike our exploratory research , our new model only uses data available at the current time ( that is to say , we do not look at how diversity evolves beyond the current time ) .indeed , our previous model was looking for local maxima on the curve of relative diversity and used thereby information unavailable at time to detect changes of context . 
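since the display equations [ eq : reldiv ] and [ eq : sima1 ] to [ eq : sima5 ] are garbled in the extracted text , the sketch below shows one plausible reading in python , not the authors ' code . items are assumed to be dictionaries mapping attribute names to values ( sets for list - valued attributes , booleans for binary ones , numbers pre - scaled to [ 0 , 1 ] for the rest ; the interval and coordinate cases are omitted ) , unit weights reproduce the naive setting used in this paper , and pairs with no comparable attribute are skipped so that the relative diversity is nan when nothing can be compared .

```python
import math

def sim_attribute(a, b):
    # per-attribute similarity in [0, 1]; the paper defines five formulas
    # (lists, intervals, binary, numeric, coordinates): only three simple
    # stand-ins are sketched here, numeric values assumed pre-scaled to [0, 1]
    if a is None or b is None:
        return None                            # value missing for one item
    if isinstance(a, (set, frozenset)):        # list-valued, e.g. similar artists
        union = a | b
        return len(a & b) / len(union) if union else 1.0
    if isinstance(a, bool):                    # binary, e.g. the mode of a song
        return 1.0 if a == b else 0.0
    return 1.0 - abs(float(a) - float(b))      # numeric, e.g. energy in [0, 1]

def item_similarity(x, y, weights=None):
    # average per-attribute similarity over the attributes present in both
    # items; unit weights correspond to the naive setting used in this paper
    total, wsum = 0.0, 0.0
    for attr in set(x) & set(y):
        s = sim_attribute(x[attr], y[attr])
        if s is not None:
            w = 1.0 if weights is None else weights.get(attr, 1.0)
            total, wsum = total + w * s, wsum + w
    return total / wsum if wsum else None      # None: no comparable attribute

def relative_diversity(target, history):
    # diversity added by `target` with respect to the recent history, in the
    # spirit of smyth and mcclave: average of (1 - similarity) over the pairs
    # for which a similarity could actually be computed, nan otherwise
    terms = [1.0 - s for h in history
             if (s := item_similarity(target, h)) is not None]
    return sum(terms) / len(terms) if terms else math.nan
```

returning nan rather than 0 when nothing can be compared matches the design choice discussed below : a zero would wrongly claim that the target brings no diversity at all .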
in real situations ,only present and past information are available .that is one of the reason which motivated us to extend our model ( the other one is the consultations of different types of items ) .the principle of our model remains quite similar to .however , the inputs used to detect changes of context are different . for each consulted item , we compute the corresponding values of relative diversity .as relative diversity can be computed for each attribute , there are as many relative diversity values as attributes . in this paper, we set the relative diversity of the current item to the average of all relative diversities per attribute .from now on , when we will talk about a relative diversity value according to an item , we will refer to the average relative diversity calculated from all the attributes for this item relatively to the history ( equation [ eq : sim ] ) . inside a given context , we assume that the relative diversity of each item is quite constant and low , but that the relative diversity suddenly increases when changes of implicit context occur .this increase is due to the fact that different contexts do not share the same characteristics ( _ i.e. _ the same attribute values ) .our model aims to detect these peaks of relative diversity over time . to achieve this , our model checks at each time - step if the conditions of equation [ eq : detection ] are satisfied . in this case, we assume that is the first item of a new implicit context .for each new implicit context detected , we check if corresponds to the beginning of a new session . in the second experiment ( * h2 * ) , we put to the proof our model by deleting information within our corpus .indeed , data sparsity is a well - known problem in the field of recommender systems , and we want to know how our model can face this problem . in , we were using a complete dataset ( _ i.e. _ with no missing information about items ) , but that is rarely the case in real situations .for instance , in a musical corpus , we could have the song title and artist name for each track but some information like the release date , the popularity or the keywords may be missing .thus , we want to test if : * our model is able to compute a relative diversity value , even if some pieces of information about attributes are not known ; * our model is robust to missing information and still performs well for detecting changes of context . to answer these questions , we randomly delete values of attributes in our dataset , until we reach an intended rate of sparsity .we test the performances of our model for rates of sparsity between 1 and 99% . because of that random deletion, some similarity measures between two items , or even some relative diversity measures could not be computed .as soon as we can compute the similarity on at least one attribute for at least one pair of items ( the target item and one of the items within the history ) , a value of relative diversity can be set for the target .otherwise , if we can not compute any similarity per attribute on any pair of items , we set the relative diversity of the target to _ nan_.let us notice that we set the diversity to _ nan _ , because a value of 0 would indicate that there is no diversity brought by the current item , not that the diversity can not be calculated .of course , we do not consider nan values as changes of context ( see equation [ eq : detection ] ) . 
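the controlled deterioration used in experiment h2 can be sketched as follows ; the representation of items as attribute dictionaries , the function name and the use of a fixed random seed are our own assumptions for illustration .

```python
import random

def degrade(items, sparsity, seed=0):
    """randomly blank attribute values until the requested fraction of all
    attribute values is missing (the controlled deterioration of experiment h2).
    `items` is a list of dicts mapping attribute names to values."""
    rng = random.Random(seed)
    slots = [(i, attr) for i, item in enumerate(items) for attr in item]
    rng.shuffle(slots)
    for i, attr in slots[: int(sparsity * len(slots))]:
        items[i][attr] = None          # treated as missing data downstream
    return items
```

a blanked value then propagates naturally : the corresponding per - attribute similarity is undefined , and the relative diversity of a consultation becomes nan only when no attribute of any history pair survives .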
in the last experiment ( * h3 * ), the purpose is to examine the consequences of having several types of items in our dataset on context detection performances .indeed , the previous experiments were tested with a single type of items but in practice , this may not be always the case .when the target item and the history items are of the same type ( _ i.e. _ music ) , the relative diversity can be computed on all attributes for all items ( except when there are missing data ) . however , when these types may change from a consultation to another , the relative diversity can only be computed for common attributes ( see figure [ fig : dance ] ) . considering that our initial dataset contained a single type of items ( songs ) , we modified it in order to test our third hypothesis .criteria for simulating the different types of items were as follows : first , a number of types of items is determined , and all items are randomly assigned to a type of items .afterward , for each type of items , we randomly select a subset of attributes ( from the whole set of attributes ) that will characterize these items .another parameter , called , corresponds to the minimum number of attributes in common with all the other types of items .let us notice that the common attributes between pairs of types of items are not necessarily the same ( _ i.e. _ ( ) . in this way , we can artificially obtain a dataset composed of different types of items , with only a few attributes in common . for instance , if the initial dataset contains 7 attributes ( ) and we want to create 3 types with and , we randomly get this kind of situation : , , and .in that case , , , and . in order to test our different hypotheses , we decided to base our evaluation on a musical dataset .this choice was made because musical items offer many advantages .first , musical items have their own consultation time , that is to say the time spent to consult a song can not vary from a user to another .second , meta data on songs can be easily retrieved using some specialized services like echnonest or musicbrainz . at last, users frequently listen to several songs consecutively , contrary to a movie corpus for example .our dataset contains 212,233 plays which were listened by 100 users .we obtained these consultations by using the last.fm api to collect listening events from 28 june 2005 to 18 december 2014 .our dataset is made of 41,742 single tracks , performed by 5,370 single artists . in order to create the sessions for all the users, we assumed that a session is composed by a sequence of consultations without any interruption bigger than 15 minutes .when this threshold is reached , we consider that the user started a new session .according to this standard , we computed 22,212 sessions with an average of 9.6 consultations per session ( 42.71 min per session ) .then , using the echonest api , we gathered meta data on these songs . for each song, we retrieved 13 attributes : 7 of these attributes are specific to songs , and 6 of them are related to artists .* song attributes : duration , tempo , mode , hotttness , danceability , energy and loudness ; * artist attributes : hotttness , familiarity , similar artists ( 10 artists names ) , terms , years of activity , and location of the artist ( geographical coordinates ) .table [ tab : corpus ] summarizes the values of the attributes .* results as regards the first experiment ( h1 ) . 
* previously , we presented equation [ eq : detection ] which allow our model to determine if the current consultation is the start of a new implicit context . in order to fix the threshold , we calculated the mean and the standard deviation of all values of relative diversity for all users within our corpus .in table [ tab : statistique_rd ] , we can notice that the standard deviation is pretty high compared to the mean of the relative diversity .this result means that users relative diversity over time takes a large range of values .we can not know _ a priori _ the best value for , since we do not know how many implicit contexts are present in our dataset .however , we previously assumed that diversity is pretty low within a given context and increases when a change of context occurs .this assessment can easily be confirmed _ a posteriori _ , by noticing that the average level of relative diversity for consultations that correspond to a session opening ( ) is much higher than those of other consultations ( ) .we finally decided to set to the global average of relative diversity within our dataset ( ) , so as to favor the detection of consultations above an average rate , but without fixing this threshold too high since there might be significant increase of diversity after a long period of decreasing ( leading to values near the global average ) .when relative diversity exceeds this threshold and all the conditions of equation [ eq : detection ] are satisfied , we consider that there is a change of implicit context .the results are reported in table [ tab : detection_naive ] . in total, our model detects 51,795 changes of implicit context . among those changes of context ,the number of sessions detected is important , since our model is able to detect more than 63% of the sessions .this significant overlap between changes of context and events indicates that our model remains efficient when we only use information available at the current time ( _ i.e. _ without considering consultations at time and beyond ) , since we can easily justify / explain these changes of context by a end of session .this means that , when the explicit context changes ( at least as regards the time dimension since there is a temporal gap between two sessions ) , the songs listened in those two explicit contexts do usually not share common characteristics ( since they are in different implicit contexts ) .we can also note that there are 37,743 changes of implicit context which do not match with changes of session .this is not a surprising result and can be explained in a simply manner .there can exist more than one implicit context within a session .we can easily imagine the case where a user starts listening to calm and down tempo songs , and suddenly changes to energetic and rapid tempo songs within the same session . as a conclusion of these results, we can say that our model seems to perform well by detecting possibly interesting points with the navigation path , that corresponds to changes of implicit context according to our definition , and can often by confirmed by changes of explicit context ( events ) .but , as a perspective , we need to confront these results to real users , in order to study how they perceive and accept these implicit contexts , before using them as a support for recommender systems .also , let us remind that we can easily change every parameter of our model ( weights of attributes , size of history , value of the threshold , ... 
) after a learning phase , to match users expectations and maximize the acceptance and adoption rates .* results as regards the second experiment ( h2 ) .* in order to understand how our model performs with a lack of data , we operated a controlled deterioration of our corpus . by controlled , we mean that the amount of missing data ( that is to say missing values of attributes for the songs ) was fixed for each execution .we monitored the number of sessions and implicit contexts detected , while progressively deteriorating the corpus percent after percent ( see figure [ fig : degradation_session ] ) . from figure[ fig : degradation_session ] , we can see that the performances of our model are pretty stable until up to 60% of missing data .these results highlight the fact that our model can perform well , even with a large and realistic amount of missing data .* results as regards the third experiment ( h3 ) . * derived from some popular social networks like facebook , linkedin , or yupeek , we observed that the number of different types of items was usually around 4 .that is why we decided to create 4 types of items from our initial corpus . on this basis, we tested different combinations as regards the number of attributes per item and the number of common attributes . for each combination , we compute the number of sessions and implicit contexts detected .the results are presented in table [ tab : types_differents ] .these values result from 10 executions , with the intent to limit bias due to the random selection of attributes . indeed , according to the attributes which are selected for each type of items , the performance could vary assome attributes may be more representative than others in the detection of implicit contexts .from table [ tab : types_differents ] , we can observe that performances are quite good even if the number of attributes per type of items is low .moreover , the highest the number of common attributes between types of items is , the more we detect changes of session and implicit contexts . we see that the standard deviation has high values when both the number of attributes and the number of common attributes are low .this confirms that all attributes have not the same impact in detecting changes of implicit context .it can be supposed that a difference between the value of the energy of two songs is more characteristic of a change of context than a variation of the artist location .adapting the weight of each attribute in the calculation of the relative diversity for a given item is a perspective .our model allows to monitor the natural diversity contained in users navigation path over time and , although part of an on - going research , already presents many strengths to characterize user context .first , it has a complexity in constant time since , at each time - step , we only compute relative diversity on a fixed and small history size .this makes our model highly scalable .in addition , it preserves privacy , since it does not require personal information about the active user ( even if it can make use of information that other users accept to share , as shown in figure [ fig : dance ] ) and allows to forget the navigation path beyond the recent history . 
at last, it is generic since our equations fit any kind of attributes , and does not require an ontology to put words on the context .one of the questions addressed in this paper was to check our ability to predict changes of implicit context at time , without knowing what will happen next .so as to give meaning to these implicit contexts detected by our model , we tried to find a matching with explicit factors and events such as ends of session .our results showed that we got a significant overlap between changes of implicit contexts and ends of session .thus , this reinforce our conviction that this model highlights interesting points within users navigation path .first , it allows us to anticipate ends of session , and will then be useful to adapt recommendations when users are near to reach a decision .second , the changes of implicit context detected by our model that do not match with events are very promising results to be , on the long - term , able to formally characterize the user context and provide context - aware recommendations that fit privacy issues .another purpose of this paper was to test the robustness of our model when confronted to sparse data .we distinguished two different scenarios where we have a single type of items with incomplete descriptions , or several types of items with small intersections of attributes . in both cases , the performances of our model remained stable in tough conditions , with about 60% of missing data . among our perspectives , we aim at confronting our model to real users , so as to measure their perception and acceptance rate of implicit contexts . we expect to map implicit and explicit contexts so as to reach the same performances as systems based on explicit contexts , but with a deeper consideration of privacy issues .finally , by characterizing implicit contexts , we will be able to explain recommendations based on implicit contexts and provide new interaction modes to make user decisions easier .this work was financed by the region of lorraine and the urban community of greater nancy , in collaboration with the yupeek company .l. f. cranor .hey , thats personal ! in l.ardissono , p. bruna , and a. mitrovic , editors , _ user modeling 2005 _ , volume 3538 of _ lecture notes in computer science _ , pages 44 .springer berlin heidelberg , 2005 .m. d. ekstrand , f. m. harper , m. c. willemsen , and j. a. konstan .user perception of differences in recommender algorithms . in _ proceedings of the 8th acm conference on recommender systems_ , recsys 14 , pages 161168 , new york , usa , 2014 .n. hariri , b. mobasher , and r. burke .context adaptation in interactive recommender systems . in _ proceedings of the 8th acm conference on recommender systems_ , recsys 14 , pages 4148 , new york , ny , usa , 2014 .m. hasan , a. kashyap , v. hristidis , and v. tsotras .user effort minimization through adaptive diversification . in _ proceedings of the 20th acm sigkdd international conference on knowledge discovery and data mining _ , kdd 14 , pages 203212 , new york , ny , usa , 2014 .m. kaminskas , f. ricci , and m. schedl .location - aware music recommendation using auto - tagging and hybrid matching . in _ proceedings of the 7th acm conference on recommender systems_ , recsys 13 , pages 1724 , new york , usa , 2013 .n. lathia , s. hailes , l. capra , and x. amatriain .temporal diversity in recommender systems . 
in _ proceedings of the 33rd international acm sigir conference on research and development in information retrieval _ , sigir 10 , pages 210 - 217 , new york , usa , 2010 . a. said , b. kille , b. j. jain , and s. albayrak . increasing diversity through furthest neighbor based recommendation . in _ proceedings of the workshop on diversity in document retrieval _ , wsdm 12 , seattle , usa , 2012 . b. smyth and p. mcclave . similarity vs. diversity . in _ proceedings of the 4th international conference on case - based reasoning : case - based reasoning research and development _ , iccbr 01 , pages 347 - 361 , london , uk , 2001 . m. zhang and n. hurley . avoiding monotony : improving the diversity of recommendation lists . in _ proceedings of the 2008 acm conference on recommender systems _ , recsys 08 , pages 123 - 130 , new york , ny , usa , 2008 . c .- n. ziegler , s. m. mcnee , j. a. konstan , and g. lausen . improving recommendation lists through topic diversification . in _ proceedings of the 14th international conference on world wide web _ , pages 22 - 32 , new york , ny , usa , 2005 .
being able to automatically and quickly understand the user context during a session is a key issue for recommender systems . as a first step toward achieving that goal , we propose a model that observes in real time the diversity brought by each item relative to a short sequence of consultations , corresponding to the recent user history . our model has constant - time complexity , and is generic since it applies to any type of item within an online service ( _ e.g. _ profiles , products , music tracks ) and any application domain ( e - commerce , social network , music streaming ) , as long as we have partial item descriptions . observing the diversity level over time allows us to detect changes of implicit context . in the long term , we plan to characterize the context , _ i.e. _ to find common features among a contiguous sub - sequence of items between two changes of context determined by our model . this will allow us to make context - aware and privacy - preserving recommendations , and to explain them to users . as this is on - going research , the first step here consists in studying the robustness of our model while detecting changes of context . in order to do so , we use a music corpus of 100 users and more than 210,000 consultations ( the number of songs played in the global history ) . we validate the relevance of our detections by finding connections between changes of context and events , such as ends of session . of course , these events are a subset of the possible changes of context , since there might be several contexts within a session . we altered the quality of our corpus in several ways , so as to test the performance of our model when confronted with sparsity and different types of items . the results show that our model is robust and constitutes a promising approach . user modeling ; diversity ; context ; real - time analysis of navigation path ; recommender systems
modern cosmology , the study of the large scale structure and evolution of our universe , has advanced to the point where we can now answer some very fundamental questions about the distribution of matter within our universe .ever since einstein postulated the theory of general relativity and , together with de sitter , showed how it could be applied to the universe as a whole , generations of physicists have pondered on the question of what is the overall geometry of our universe . within the past few years observations of the relic microwave radiation from the `` big bang '' have shown that the universe exhibits a geometry quite unlike that expected from theoretical prejudices alone .although on the largest scales the distribution of matter within our universe is both homogeneous and isotropic , on smaller scales less than 1/20th the size of our visible universe it is highly inhomogeneous .even though the matter distribution of the universe was exceptionally smooth 300,000 years after the creation event , over billions of years the ubiquitous attraction of the gravitational force amplifies the minute fluctuations in the early matter distribution into the structure we see today .moreover , the current best theories of structure formation suggest that the matter distribution we observe is formed in a ` hierarchical clustering ' manner with the small structures merging to form larger ones and so forth . this growth of structure is accelerated by an unseen massive ` dark matter ' component in our universe . although dark matter can not be observed directly , there is sufficient evidence within observations to conclusively infer its existence .modifications to newton s equations , to change gravitational accelerations on large scales , have had limited success , and can not presently be cast in a form compatible with general relativity .understanding the distribution of matter within our local universe can tell us much about the cosmic structure formation process . while on the very largest scales gravity is the dominant force , on smaller scales gas pressure forces , from the gaseous inter - galactic ( igm ) and inter - stellar mediums ( ism ) ,can play a significant role . in clusters of galaxies , for example , hydrodynamic forces produced by the igm lead to a distribution of gas that is held close to hydrostatic equilibrium .indeed , understanding the interaction between the ism and the stars that condense out of it , is currently one of the hottest research areas in cosmology .since if we can understand this process we are much closer to being able to infer how the galaxies we observe relate to the underlying distribution of dark matter that dominates the evolution of structure .although we are yet to absolutely determine the relation between galaxies and dark matter , measuring the distribution of galaxies is the only way of infering the distribution of all matter ( visible or not ) .measurements of the speed of recession of local galaxies , led to form the distance - redshift relation now know as ` hubble s law ' , which has become a bedrock for the development of cosmological theory .although modern surveys of galaxies use an updated , and more accurate , form of the distance - redshift relation to uncover the spatial distribution of galaxies , the principles involved remain the same as those used by hubble . 
aided byhighly automated observing and computer driven data analysis , a new generation of high quality galaxy redshift surveys is mapping our local universe with exquisite precision .the 2 degree field and sloan digital sky survey provide astronomers with a survey of the local universe out to a redshift of , and contain over 200,000 and one million ( when complete ) redshifts respectively . in figure [ 2df ]we show the distribution of galaxies for the 2df survey to give an visual impression of the type of inhomogeneity observed .traditionally , one of the primary goals of analysis of redshift surveys is the calculation of the two point auto - correlation function ( 2-pt cf ) .the large sample volumes provided by 2df and the sdss have allowed the 2-pt cf to be calculated with great accuracy . while the initial conditions produced by the `` big bang '' are widely believed to exhibit gaussian statistics ( kolb and turner , 1990 ) , the formation of structure by gravitational instability introduces non - gaussian features into the statistics of the matter distribution .hence , the 2-pt cf can not be a complete descriptor of the underlying matter distribution at late times .astronomers were aware of this issue comparatively early in the development of the field , and the theoretical basis for calculating higher order statistics was developed through the 1970 s ( see for a detailed summary ) .early attempts to measure higher order moments of the mass distribution , via the counts - in - cells method ( again see ) , suffered from inadequate sample size . because higher order moments tend to be progressively dominated by the most dense regions in a given sample , ensuring that adequate sampling has been performed is of utmost importance . ensuring low sample variance is also necessary , and given one sample the only way to check this is to analyse sub - samples , which rapidly depletes the available information . from a theoretical perspective , higher order statistics are interesting in relation to gravitational perturbation theory and the evolution of non - linear gravitational clustering .analyses examining the accuracy of numerical simulation methods often rely upon higher order statistics .this is especially important in the study of gravitational clustering in ` scale free ' universes .the development of fast , parallel , statistical algorithms is vital to progress in this arena . while the development of parallel simulation algorithms has advanced forward rapidly ( thacker et al . 
, 2003 )development of parallel analysis tools has lagged behind .this is partially due to the fact that the benefits of developing a parallel analysis code can be shorted lived because the required analyses can change rapidly ( much faster than the simulation algorithms themselves ) .the rapid development times available on shared memory parallel machines make them an ideal complement to large distributed memory machines which most simulations are now run on .although throughout this paper we discuss the application of our new method to cosmology , it can be applied equally well to the statistics of any point process .indeed the terms ` particle ' and ` point ' are often used interchangeably .the method can also be modified to apply to different dimensions , although in 2 dimensions the gains are expected to be less significant due to the reduced amount of work in the counts - in - cells method .the layout of this paper is as follows : in section [ sect : stats ] , we quickly review the statistics we wish to calculate .this is followed by an explicit description of our new algorithm , and an examination of its performance .next we present a brief case study on applying our algorithm to cosmology and conclude with a brief summary .due to space limitations a full discussion of the counts - in - cells method , and how it is related to higher order moments , is beyond the scope of this paper .however an excellent discussion of counts - in - cells and statistical measurement processes may be found in . for completeness, we briefly summarize the statistics we are interested in measuring .the 2-pt cf , , measures the radial excess / deficit over poisson noise for a point process .it is defined in terms of the joint probability , , of finding objects in volume elements and separated by a radial distance , viz , where is the average number density of the point process .the fourier transform pair of the 2-pt cf is the power spectrum , , which is used to describe the statistics of the initial density field in cosmology .the joint probability idea can be generalized to n - pt processes , for example , the reduced 3-pt cf is defined by ; where , and are defined by the triangle described by the three points under consideration .for cosmology , the assumptions of homogeneity and isotropy require that be a symmetric function of these three lengths .higher order correlation functions follow in a logical manner . using the counts - in - cells method, it can be shown that the second central moment , where n is the count of points within spheres of radius ( and volume ) , is given by the third central moment , is given by both these equations show how integrals over the correlation functions enter in to calculations of the central moments .relationships for the higher order moments can be constructed , but rapidly become lengthy to calculate ( fry and peebles , 1978 ) .the final definition we require is one that relates higher order cumulants to the variance . to aid our discussionwe introduce the following notation : the over - density of a point process relative to the mean density , , is given by where is the local deviation from the average density .although this is most usually recognized as a continuum description , it also provides a useful construct for our discussion of point processes .for example , since the local density of particles in the counts - in - cells method is given by , . 
from this definition of -th order connected moments of the point process define the ` ' statistics via the following definition statistics are motivated by the assumption that , given the 2-pt cf , , the -pt correlation functions scale as , see balian and schaeffer ( 1989 ) . ] : the statistics play a central role in analysis of redshift surveys . to date , up to has been calculated by researchers .while the counts - in - cells method is conceptually beautiful in its relation to the statistics , it is computationally strenuous to calculate .as the radius of the sampling sphere becomes larger , on average the work to calculate the count within the sphere will grow at a cubic rate . in realitythe situation can be potentially worse , since inefficiencies in particle book - keeping can appear ( having to search far down tree - nodes , or equivalently searching through very dense cells in a grid code ) . to counter this problem one can use a hierarchical ( tree ) storage of counts in cells on a grid , as discussed in .this greatly improves calculation time , since the summation over particles within cells is much reduced at large radii . using this methodit has been reported that samples from a data set with 47 million particles can be generated in 8 cpu hours .the basis of our alternative ` smooth field algorithm ' is that each counts - in - cells value is a discrete sample of the local density field smoothed over the scale of the sample sphere . in the continuum limit of an infinite number of particles , defining the density , the sampled value can be written as an integral over the spherical top - hat function of radius and the raw density field , to give , where is the volume of the periodic sample region and the volume of the sample sphere ( a 3 dimensional top - hat ) . via the convolution theorem , the fourier transform of , namely , is given by thus we can quickly calculate the _ entire _ field by fourier methods .the discrete calculation of counts can be expressed in almost the same way , except that the continuous density field is replaced by a discrete sum of three dimensional dirac delta functions , , where is the number of particles in the simulation , and gives the position of particle . in the counts - in - cellsmethod the integral over the volume is replaced by a summation within the given search volume . to connect these two approachesall that is needed is a smoothing function that will convert a discrete set of points to a continuous density field .we require a smoothing function , , which can be summed over the particle positions to reproduce a smooth field .provided we can do this , we can use fourier methods to precalculate all of the required values and greatly reduce the amount of work . in practiceit will be necessary to define a discrete density on a grid , and then use an interpolation process to provide a continuum limit .the smoothing idea has been studied in great depth ( see for explicit details ) and there exists a series of computationally efficient smoothing strategies that have good fourier space properties , as well as having well defined interpolation function pairs . 
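for reference , the display equations that did not survive the text extraction above presumably correspond to the standard counts - in - cells relations and the usual definition of the statistics ( peebles conventions ) ; here is the count in a cell of volume , its mean , the smoothed overdensity , and the volume averages of the 2- and 3-point correlation functions .

```latex
% standard counts-in-cells relations; the dropped display equations above
% presumably correspond to these forms
\begin{align}
  \langle (N-\bar N)^2 \rangle &= \bar N + \bar N^2 \,\bar\xi ,
  &
  \bar\xi &= \frac{1}{V^2}\int_V\!\!\int_V \xi(r_{12})\, dV_1\, dV_2 , \\
  \langle (N-\bar N)^3 \rangle &= \bar N + 3\bar N^2 \bar\xi + \bar N^3 \bar\zeta ,
  &
  \bar\zeta &= \frac{1}{V^3}\int_V\!\!\int_V\!\!\int_V \zeta \, dV_1\, dV_2\, dV_3 , \\
  S_n &= \frac{\langle \delta^n \rangle_c}{\langle \delta^2 \rangle^{\,n-1}} ,
  &
  \text{e.g.}\quad S_3 &= \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}} .
\end{align}
```

the first terms on the right - hand sides of the moment relations are the shot - noise contributions , which is why low - amplitude signals are the hard case discussed above .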
the most common smoothing function ( ` assignment function ' )mechanisms are ` cic ' ( cloud - in - cell ) , and ` tsc ' ( triangular shaped cloud ) .cloud - in - cell interpolation provides a continuous piece - wise linear density field , while tsc has a continuous value and first derivative .the only potential issue of difficulty is that sampling a continuous periodic variable at discrete points means that the fourier domain is finite and periodic and thus has the possibility of being polluted by aliased information ( with images separated by where l is the size of the period ) . in practice ,the higher order assignment functions have a sufficiently sharp cut - off in fourier space that this is not a significant problem . having established that we can convert our discrete set of points into a continuous density defined by a grid of values and an interpolation function , we must decide upon the size of grid to be used .the initial configuration of points ( corresponding to a low amplitude power spectrum ) is such that the majority of neighbouring particles have separations close to the mean inter - particle separation .therefore , for this configuration we use a grid defined such that .this is beneficial on two counts : firstly , the grid requires a comparatively small amount of memory to store than the particle data , and secondly , it captures almost all the density information stored in the particle distribution ( since most particles are separated by sizes close to the grid spacing ) . to summarize , the steps in the sfa are as follows : 1 .use an assignment function , , to smooth the mass ( ) associated with each of the particles on to a grid .this creates the grid representation of the density field , : 2 .fourier transform the density field to form 3 .multiply by , the product of the fourier transform of the real space top - hat filter ( ) and the inverse of the assignment function filter , which includes an alias sum out to two images 4 .fourier transform the resulting field back to real space 5 .calculate at all sampling positions using the interpolation function pair to the original assignment function 6 .calculate desired statistics in this paper we have used a 3rd order polynomial assignment function ( ` pqs ' , see hockney and eastwood , 1988 ) which is defined ( in 1-dimension ) by ; and the 3-dimensional function is defined .note that is not an isotropic function , which in this case is beneficial for speed , since it is unnecessary to calculate a square root .it also simplifies calculating the fourier transform of the assignment function since all the dimensions are now separable . note that has a comparatively wide smoothing profile , andtherefore its fourier transform is a strongly peaked function with good band - limiting properties .this is advantageous for dealing with the aliasing problem mentioned earlier .indeed , the fourier transform of is : which has a suppression of power .this is sufficiently sharp to ensure that only the first and second images need be accounted for in ( the green s function associated with top - hat filtering and the assignment process ) .before proceeding to parallelize the algorithm , it is instructive to compare the speed of the serial algorithm as compared to the counts - in - cells method . 
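as a concrete illustration of steps 1 to 6 , here is a compact numpy sketch of the smooth field algorithm . it is a simplification , not the code used in this paper : it uses nearest - grid - point assignment instead of the pqs kernel , and it omits the deconvolution of the assignment window and the alias sum of step 3 .

```python
import numpy as np

def tophat_k(kmag, R):
    # fourier transform of the 3-d spherical top-hat of radius R, with W(0) = 1
    kR = np.where(kmag > 0, kmag * R, 1.0)
    W = 3.0 * (np.sin(kR) - kR * np.cos(kR)) / kR**3
    return np.where(kmag > 0, W, 1.0)

def smooth_field_samples(pos, box, ngrid, R, nsample, rng):
    # step 1: assign unit-mass particles to a grid (ngp instead of pqs)
    cell = np.floor(pos / box * ngrid).astype(int) % ngrid
    rho = np.zeros((ngrid, ngrid, ngrid))
    np.add.at(rho, (cell[:, 0], cell[:, 1], cell[:, 2]), 1.0)
    rho /= rho.mean()                              # density in units of the mean
    # steps 2-4: convolve with the spherical top-hat via the fft
    k1d = 2 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
    kz1d = 2 * np.pi * np.fft.rfftfreq(ngrid, d=box / ngrid)
    kx, ky, kz = np.meshgrid(k1d, k1d, kz1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    smoothed = np.fft.irfftn(np.fft.rfftn(rho) * tophat_k(kmag, R), s=rho.shape)
    # step 5: sample the smoothed field at random positions (ngp lookup here;
    # the full method interpolates with the pair of the assignment function)
    s = rng.integers(0, ngrid, size=(nsample, 3))
    return smoothed[s[:, 0], s[:, 1], s[:, 2]]

# step 6: compute the statistics of interest from the samples
rng = np.random.default_rng(42)
pos = rng.random((200_000, 3)) * 100.0             # toy particle set, 100-unit box
delta = smooth_field_samples(pos, 100.0, 64, 8.0, 20_000, rng) - 1.0
var = delta.var()
print("variance:", var, " s3 estimate:", np.mean(delta**3) / var**2)
```

because the toy particle set above is a uniform poisson distribution , the printed s3 estimate is dominated by shot noise ; on a real simulation output the shot - noise correction and the proper interpolation - function pair matter , as described in the text .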
in figure [ comp ] we show the time to calculate samples on points as a function of the sample radius .a ( logarithmic ) least - squares fit showed that the time for the standard counts - in - cells method ( version 1 ) grows as , which is slightly lower than the expected value of .for the second counts - in - cells algorithm we developed , which is optimized by storing a list of counts in the chaining cells used to control particle book - keeping in the code , the dependence with radius was found to be .this is understood from the perspective that most of the work in each sample has already been performed in the summation within chaining cells and that the work for each sample thus becomes dependent on sorting over the cells at the surface of the sample area , which is proportional to .however comparison of both these methods to the sfa shows they are far slower in comparison . because the entire field is precalculated ( modulo the interpolation process to non - grid positions ) in the sfa method ,the time to calculate the samples is constant as a function of radius , and is exceptionally fast .based up the data presented in figure [ comp ] , we initially estimated being able to calculate sample points on a data set in less than 2 cpu hours , which is over 4x faster than the results reported for tree - optimized counts - in - cells methods .we have recently confirmed this result using our parallel code , which took 6.5 minutes on 32 processors to calculate samples on a particle data set produced for a project being conducted at the pittsburgh supercomputing center .typically when calculating statistics , the value of the sampling radius ( equivalently the top - hat radius ) is varied so that the entire sampling process must be repeated many times .thus the most obvious method of parallelization is to create several different grids for each smoothing radius and process them in parallel .however , available memory considerations may well make this impractical .instead , it is better to parallelize each calculation for each radius .this is non - trivial as the following algorithmic steps must be parallelized : 1. calculation of green s function 2 .forward fft of density grid to -space 3 .multiplication of density grid by green s function 4 .reverse fft to real space 5 .sum over sample points the first four items have all been parallelized previously for our main simulation code ( see thacker et al . , 1998 ) .the final step , while appearing to be somewhat straightforward , must be approached with care ( as we shall demonstrate ) .the obvious issues which need to addressed are ( 1 ) ensuring each thread has a different random seed for sample positions and ( 2 ) that the sum reduction of the final values across threads is performed . 
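the two book - keeping points just listed , distinct random seeds per thread and a final sum reduction , are handled in the openmp implementation discussed next with an array of seed values and a reduction clause . purely as a language - agnostic illustration , and not as the code used in this paper , the same two points can be sketched with python 's multiprocessing module ( a cubic grid is assumed ) :

```python
from multiprocessing import Pool
import numpy as np

def partial_moments(args):
    # each worker gets its own seed (issue 1) so sample positions differ
    seed, n_samples, field = args
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, field.shape[0], size=(n_samples, 3))
    vals = field[idx[:, 0], idx[:, 1], idx[:, 2]]
    # return partial sums; combining them across workers is issue 2
    return np.array([vals.size, vals.sum(), (vals**2).sum(), (vals**3).sum()])

def sampled_moments(field, n_samples, n_workers=4):
    chunks = [(seed, n_samples // n_workers, field) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        totals = sum(pool.map(partial_moments, chunks))    # sum reduction
    n, s1, s2, s3 = totals
    mean = s1 / n
    var = s2 / n - mean**2
    return mean, var, s3 / n - 3 * mean * var - mean**3    # third central moment
```

on platforms that spawn rather than fork worker processes , ` sampled_moments ` should be called from under an ` if __name__ == "__main__" : ` guard ; the sketch also ships the whole grid to every worker , which the shared - memory openmp version of course avoids .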
in practice ,both of these issues can be dealt with in very straightforward ways using the openmp shared memory programming standard .sum reductions can be controlled via the reduction primitive while different random seeds can be set using an array of initial values .parallelization in this environment turned out to be straightforward .tests on a 32 processor hp gs320 ( 1 ghz alpha ev6/7 processors ) at the canadian institute for theoretical astrophysics ( cita ) , showed reasonable speed - up ( see figure [ times ] ) , but comparatively poor efficiency ( 22% ) when 32 processors were used .there is also a noticeable step in the speed - up at 4 to 8 processors .this step is caused by memory for a job being moved to a second memory domain , or ` resource affinity domain ' ( rad ) , within the machine .the 32 processor machine has 8 rads in total , connected via a cross - bar , with 4 processors belonging to each rad .latency to remote rads is significantly higher than to local rads , which explains the increased execution time .additionally , as the amount of traffic on the cross - bar between the rads increases , latencies are known to increase by very large factors ( up to 3000 nanoseconds , cvetanovic , 2003 ) .this is a serious bottleneck in the gs320 design which has been removed in the latest gs1280 machine . ultimately , to improve performance on the gs320 ,it is necessary to increase the locality of the sampling technique to reflect the locality of memory within the machine , and avoid sending data across the cross - bar .note that using a block decomposition of data across the rads means that locality is only really necessary in one axis direction .therefore , we adopted the following strategy to improve performance : 1 .block decomposition of the grid across rads 2 .pre - calculate the list of random positions in the z - axis 3 .parallel sort the list of random positions in increasing z value 4 .parallelize over the list of z positions , calculating x and y values randomly the resulting sample still exhibits poisson noise statistics and is therefore valid for our purposes .however , the sample points are now local in the z direction , which greatly reduces the possibility of remote access due to the block assignment of data .the scaling improvement for this method is shown in figure [ times ] .the improvement is striking .we achieved a 1.2x increase in performance for the single processor result alone , while at 32 processors we have achieved a 4.8 improvement in speed - up and a tripling of the parallel efficiency ( 82% ) .note that the speed - up is still not perfect for the improved version .this may be a bandwidth issue since the interpolation at each sampling point requires 64 grid values , which breaks down into 16 cache lines , with only 8 floating point calculations performed for all the data in each cache line .note that it is unlikely that using the next lowest level of interpolation ( tsc ) would help .tsc requires 27 points grid points per sample , which is 9 cache lines , with 6 floating point calculations per cache - line .thus the overall ratio of calculation to memory fetches is actually reduced .the initial conditions for cosmological structure are prescribed by initial density , temperature and velocity fields .although there is debate over whether evolution in the early universe ( such as magnetic fields ) may induce a non - gaussian signal in the initial conditions , most researchers believe that the density field is gaussian process , and the velocity may be 
derived directly from it . in the absence of non - gaussian features , the density field , which is usually discussed in terms of the linear over - density , is completely described by its continuous power spectrum , where a is a normalization constant .this initially smooth field evolves under gravity to produce the locally inhomogeneous and biased distribution of galaxies we observe today ( see figure [ dstn ] , which compare particles positions from initial to final outputs ) .early evolution , when , is in the linear regime and can be described by perturbation theory .as the over - density values approach and later exceed unity , it is necessary to use simulations to calculate the non - linear evolution .thus , ideally , the initial conditions for simulations should correspond to the latest time that can be followed accurately by perturbation theory . has developed an algorithm for the fast calculation of the particle positions required for cosmological simulations via 2nd order lagrangian perturbation theory ( 2lpt ) .we have recently implemented this algorithm in parallel using openmp .although 2lpt requires more computation , it has significant advantages over the standard 1st order technique ( known as the zeldovich ( 1968 ) approximation ) as higher order moments exhibit far less transient deviations at the beginning of the simulation .further , one should in principle be able to follow the initial evolution to slightly later epochs using 2lpt and therefore begin simulations at a slightly later time . in practice ,the transient deviation issue is most significant . in general, the more negative the spectral index the faster the initial transients die away .this is helpful , since most simulations are conducted with an effective spectral index , , of between -1.5 to -3 ( depending on the size of the simulation volume ) .also , although we have focused solely on particle position statistics in this paper , it is worth noting that a similar analysis can be applied to velocity fields defined on the point process .analysis of the transients in the velocity divergence field , , shows an even greater improvement when using the 2lpt method . to test whether our new 2lpt code was reproducing the correct results we have compared the measured statistics for our 2lpt initial conditions versus those produces with the zeldovich approximation ( 1st order ) . at the initial expansion factor of ,the za predicts the following value for ; while 2lpt predicts ; thus after performing the 2nd order correction the value of should increase by 6/7 . in figure [ s3 ] we show the calculated values of for two sets of initial conditions , one created using the za and the other with the additional 2lpt correction . 
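for reference , the values elided from the sentence above are presumably the standard tree - level predictions for a top - hat - smoothed density field with effective power - law index n , whose difference is indeed 6/7 :

```latex
% tree-level skewness for a top-hat-smoothed field with spectral index n
% (the values elided in the extracted text above are presumably these)
\begin{align}
  S_3^{\mathrm{ZA}}   &= 4 - (n+3) , \\
  S_3^{\mathrm{2LPT}} &= \tfrac{34}{7} - (n+3) , \\
  S_3^{\mathrm{2LPT}} - S_3^{\mathrm{ZA}} &= \tfrac{34}{7} - 4 = \tfrac{6}{7} .
\end{align}
```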
both the sfa measured values of are high for this particular set of phases ( as compared to the theoretical prediction ) , but we have confirmed that alternative random seeds can produce similar results .indeed we have found the values of are quite dependent upon the phases of the fourier waves used , and achieving a value that is asymptotic to the theoretical value is extremely difficult .we are currently investigating this phenomenon in more detail .however , a brief visual inspection of figure [ s3 ] provides evidence that the residual , , between the za and 2lpt results is close to .analysis of the set of residuals between the two lines gives ( deviation ) , confirming that our code is accurately reproducing the difference in values .we have presented a new fast algorithm for rapid calculation of one point cumulants for point processes .our algorithm is based upon a smoothed field approach , which reproduces the underlying statistical properties of the point processes field from which it is derived .the method is significantly faster than counts - in - cells methods because the overhead of evaluating the number of particles in a given sphere has been removed .we are able to calculate sample points on a data set in less than 2 cpu hours , which is over 4x faster than the results reported for tree - optimized counts - in - cells methods .we also note that while tree methods also lead to very large speed ups , they are still subject to noise from the point process for low amplitude signals .we are currently applying this new technique to examine the evolution of high order moments in cosmological density fields at low amplitude levels and will present our findings elsewhere ( thacker , couchman and scoccimarro in prep ) .we also anticipate making the codes described in this paper publically available in the near future .rjt is partially supported by a cita national fellowship .hmpc acknowledges the support of nserc and the ciar .rjt would like to thank evan scannapieco and lars bildsten for hosting him at u. c. santa barbara where part of this research was conducted .this research utilized cita and sharcnet computing facilities .bennett , c.l ._ et al_. ( 2003 ) first - year wilkinson microwave anisotropy probe ( wmap ) observations : preliminary maps and basic results , _ the astrophysical journal : supplements _ , vol .1 , pp.127 .thacker , r.j . and couchman , h.m.p ( 2001 ) ` star formation , supernova feedback , and the angular momentum problem in numerical cold dark matter cosmogony : halfway there ? ' , _ the astrophysical journal _ , vol .555 , no . 1 ,pp.l17-l20 .thacker , r.j . ,pringle , g. , couchman , h.m.p . and booth , s. ` hydra - mpi : an adaptive particle - particle , particle - mesh code for conducting cosmological simulations on massively parallel architectures ' , _ high performance computing systems and applications 2003 _ nrc research press .fry , j.n .and peebles , p.j.e .( 1978 ) ` statistical analysis of catalogs of extragalactic objects .ix - the four - point galaxy correlation function ' , _ the astrophysical journal _ , vol .221 , no . 1, pp.1933 .szapudi , i. , meiksin a. and nichol , r.c .( 1996 ) ` higher order statistics from the edinburgh / durham southern galaxy catalogue survey .i. counts in cells ' , _ the astrophysical journal _ , vol .473 , no . 2 , pp.1521
higher order cumulants of point processes , such as skewness and kurtosis , require significant computational effort to calculate . the traditional counts - in - cells method implicitly requires a large amount of computation since , for each sampling sphere , a count of particles is necessary . although alternative methods based on tree algorithms can reduce execution time considerably , such methods still suffer from shot noise when measuring moments on low amplitude signals . we present a novel method for calculating higher order moments that is based upon first top - hat filtering the point process data onto a grid . after correcting for the smoothing process , we are able to sample this grid using an interpolation technique to calculate the statistics of interest . the filtering technique also suppresses noise and allows us to calculate skewness and kurtosis even when the point process is highly homogeneous . the algorithm can be implemented efficiently in a shared memory parallel environment provided a data - local random sampling technique is used . this local sampling technique allows us to obtain close to optimal speed - up for the sampling process on the alphaserver gs320 numa architecture .
total variation ( tv ) denoising is a nonlinear filtering method based on the assumption that the underlying signal is piecewise constant ( equivalently , the derivative of the underlying signal is _ sparse _ ) .such signals arise in geoscience , biophysics , and other areas .the tv denoising technique is also used in conjunction with other methods in order to process more general types of signals .total variation denoising is prototypical of methods based on sparse signal models .it is defined by the minimization of a convex cost function comprising a quadratic data fidelity term and a non - differentiable convex penalty term .the penalty term is the composition of a linear operator and the norm .although the norm stands out as the convex penalty that most effectively induces sparsity , non - convex penalties can lead to more accurate estimation of the underlying signal .a few recent papers consider the prescription of non - convex penalties that maintain the convexity of the tv denoising cost function .( the motivation for this is to leverage the benefits of both non - convex penalization and convex optimization , e.g. , to accurately estimate the amplitude of jump discontinuities while guaranteeing the uniqueness of the solution . )the penalties considered in these works are separable ( additive ) .but non - separable penalties can outperform separable penalties in this context .this is because preserving the convexity of the cost function is a severely limiting requirement .non - separable penalties can more successfully meet this requirement because they are more general than separable penalties .this paper proposes a non - separable non - convex penalty for total variation denoising that generalizes the standard penalty and maintains the convexity of the cost function to be minimized .the new penalty , which is based on the moreau envelope , can more accurately estimate the amplitudes of jump discontinuities in an underlying piecewise constant signal .numerous non - convex penalties and algorithms have been proposed to outperform -norm regularization for the estimation of sparse signals e.g. , .however , few of these methods maintain the convexity of the cost function .the prescription of non - convex penalties maintaining cost function convexity was pioneered by blake , zisserman , and nikolova , and further developed in refs .these works rely on the presence of both strongly and weakly convex terms , which is also exploited in .the proposed penalty is expressed as a differentiable convex function subtracted from the standard penalty ( i.e. , norm ) .previous works also use this idea .but the differentiable convex functions used therein are either separable or sums of bivariate functions . in parallel with the submission of this paper, carlsson has also proposed using moreau envelopes to prescribe non - trivial convex cost functions . while the approach in starts with a given non - convex cost function ( e.g. 
, with the pseudo - norm penalty ) andseeks the convex envelope , our approach starts with the -norm penalty and seeks a class of convexity - preserving penalties .some forms of generalized tv are based on infimal convolution ( related to the moreau envelope ) .but these works propose convex penalties suitable for non - piecewise - constant signals , while we propose non - convex penalties suitable for piecewise - constant signals .given and , total variation denoising is defined as where is the matrix as indicated in , tv denoising is the proximity operator of the function .it is convenient that tv denoising can be calculated exactly in finite - time .before we define the non - differentiable non - convex penalty in sec . [ sec : pen ] , we first define a differentiable convex function .we use the moreau envelope from convex analysis .let .we define as where is the first - order difference matrix . if , then is the _ moreau envelope _ of index of the function .[ prop : calcs ] the function can be calculated by for : setting and in gives . for : by the definition of tv denoising , the minimizing the function in is the tv denoising of , i.e. , .let .the function satisfies from , we have for all . in particular , leads to .also , since is defined as the minimum of a non - negative function .let .the function is convex and differentiable .it follows from proposition 12.15 in ref . .[ prop : sgrad ] let .the gradient of is given by where denotes total variation denoising .since is the moreau envelope of index of the function when , it follows by proposition 12.29 in ref . that this proximity operator is tv denoising , giving .to strongly induce sparsity of , we define a non - convex generalization of the standard tv penalty . the new penalty is defined by subtracting a differentiable convex function from the standard penalty .let .we define the penalty as where is the matrix and is defined by .the proposed penalty is upper bounded by the standard tv penalty , which is recovered as a special case .let .the penalty satisfies and it follows from and .when a convex function is subtracted from another convex function [ as in ] , the resulting function may well be negative on part of its domain .inequality states that the proposed penalty avoids this fate .this is relevant because the penalty function should be non - negative .figures in the supplemental material show examples of the proposed penalty and the function .we define ` moreau - enhanced ' tv denoising .if , then the proposed penalty penalizes large amplitude values of less than the norm does ( i.e. , ) , hence it is less likely to underestimate jump discontinuities . given , , and , we define moreau - enhanced total variation denoising as where is given by .the parameter controls the non - convexity of the penalty . if , then the penalty is convex and moreau - enhanced tv denoising reduces to tv denoising .greater values of make the penalty more non - convex .what is the greatest value of that maintains convexity of the cost function ?the critical value is given by theorem [ thm : cond ] .[ thm : cond ] let and .define as where is given by . if then is convex . 
if then is strongly convex .we write the cost function as where is affine in .the last term is convex as it is the point - wise maximum of a set of convex functions .hence , is a convex function if .if , then is strongly convex ( and strictly convex ) .let , , and .then produced by the iteration [ eq : alg ] converges to the solution of the moreau - enhanced tv denoising problem .if the cost function is strongly convex , then the minimizer can be calculated using the forward - backward splitting ( fbs ) algorithm .this algorithm minimizes a function of the form where both and are convex and is additionally lipschitz continuous .the fbs algorithm is given by [ eq : fbs ] \\ x { ^{(k+1 ) } } & = \arg \min_x \big\ { { \tfrac{1}{2}}\norm { z { ^{(k ) } } - x } _ 2 ^ 2 + \mu f_2 ( x ) \big\}\end{aligned}\ ] ] where and is the lipschitz constant of . the iterates converge to a minimizer of . to apply the fbs algorithm to the proposed cost function ,we write it as where [ eq : deff12 ] the gradient of is given by using proposition [ prop : sgrad ] .subtracting from does not increase the lipschitz constant of , the value of which is 1 .hence , we may set .using , the fbs algorithm becomes \\\label{eq : xupdate } x { ^{(k+1 ) } } & = \arg \min_x \big\ { { \tfrac{1}{2}}\norm { z { ^{(k ) } } - x } _ 2 ^ 2 + \mu { { \lambda } } \norm { d x } _ 1 \big\}. \end{aligned}\ ] ] \\ \label{eq : xupdate } x { ^{(k+1 ) } } & = \arg \min_x \big\ { { \tfrac{1}{2}}\norm { z { ^{(k ) } } - x } _ 2 ^ 2 + \mu { { \lambda } } \norm { d x } _ 1 \big\}. \end{aligned}\ ] ] note that is tv denoising . using the value iteration .( experimentally , we found this value yields fast convergence . ) each iteration of entails solving two standard tv denoising problems . in this work , we calculate tv denoising using the fast exact c language program by condat . like the iterative shrinkage / thresholding algorithm ( ista ) , algorithm can be accelerated in various ways .we suggest not setting too close to the critical value because the fbs algorithm generally converges faster when the cost function is more strongly convex ( ) .in summary , the proposed moreau - enhanced tv denoising method comprises the steps : 1 .set the regularization parameter ( ) .2 . set the non - convexity parameter ( ) .3 . initialize .4 . run iteration until convergence .to avoid terminating the iterative algorithm too early , it is useful to verify convergence using an optimality condition .[ prop : opt ] let , , and . if is a solution to , then \in \operatorname{sign } ( [ d x ] _ n ) \ ] ] for , where is given by = \sum _ { m { \leqslant}n } x_m\ ] ] and is the set - valued signum function , & t = 0 \\ \ { 1 \ } , & t > 0 .\end{cases}\ ] ] according to , if is a minimizer , then the points , u_n ) \in { \mathbb{r}}^2 $ ] must lie on the graph of the signum function , where denotes the value on the left - hand side of .hence , the optimality condition can be depicted as a scatter plot .figures in the supplemental material show how the points in the scatter plot converge to the signum function as the algorithm progresses .a vector minimizes a convex function if where is the subdifferential of at .the subdifferential of the cost function is given by which can be written as ) , \ , u \in { \mathbb{r}}^{n-1 } \}.\end{gathered}\ ] ] ) , \ , u \in { \mathbb{r}}^{n-1 } \}.\ ] ] hence , the condition can be written as ) , \ , u \in { \mathbb{r}}^{n-1 } \}.\end{gathered}\ ] ] ) , \ , u \in { \mathbb{r}}^{n-1 } \}.\ ] ] let be a matrix of size such that , e.g. 
, .it follows that the condition implies that \in \operatorname{sign } ( [ d x]_n ) \ ] ] for .using proposition [ prop : sgrad ] gives .this example applies tv denoising to the noisy piecewise constant signal shown in fig .[ fig : example1](a ) .this is the ` blocks ' signal ( length ) generated by the wavelab function ` makesignal ` with additive white gaussian noise ( ) .we set the regularization parameter to following a discussion in ref . . for moreau - enhanced tv denoising, we set the non - convexity parameter to .figure [ fig : example1 ] shows the result of tv denoising with three different penalties . in each case , a _convex _ cost function is minimized .figure [ fig : example1](b ) shows the result using standard tv denoising ( i.e. , using the -norm ) .this denoised signal consistently underestimates the amplitudes of jump discontinuities , especially those occurring near other jump discontinuities of opposite sign .figure [ fig : example1](c ) shows the result using a separable non - convex penalty .this method can use any non - convex scalar penalty satisfying a prescribed set of properties . herewe use the minimax - concave ( mc ) penalty with non - convexity parameter set to maintain cost function convexity .this result significantly improves the root - mean - square error ( rmse ) and mean - absolute - deviation ( mae ) , but still underestimates the amplitudes of jump discontinuities .moreau - enhanced tv denoising , shown in fig .[ fig : example1](d ) , further reduces the rmse and mae and more accurately estimates the amplitudes of jump discontinuities .the proposed non - separable non - convex penalty avoids the consistent underestimation of discontinuities seen in figs .[ fig : example1](b ) and [ fig : example1](c ) . to further compare the denoising capability of the considered penalties , we calculate the average rmse as a function of the noise level .we let the noise standard deviation span the interval .for each value , we calculate the average rmse of 100 noise realizations .figure [ fig : rmse ] shows that the proposed penalty yields the lowest average rmse for all .however , at low noise levels , separable convexity - preserving penalties perform better than the proposed non - separable convexity - preserving penalty .this paper demonstrates the use of the moreau envelope to define a non - separable non - convex tv denoising penalty that maintains the convexity of the tv denoising cost function .the basic idea is to subtract from a convex penalty its moreau envelope .this idea should also be useful for other problems , e.g. , analysis tight - frame denoising .separable convexity - preserving penalties outperformed the proposed one at low noise levels in the example .it is yet to be determined if a more general class of convexity - preserving penalties can outperform both across all noise levels .10 f. astrom and c. schnorr . on coupled regularization for non - convex variational image enhancement .in _ iapr asian conf . on pattern recognition ( acpr ) _ , pages 786790 , november 2015 .h. h. bauschke and p. l. combettes . .springer , 2011 .i. bayram .penalty functions derived from monotone mappings . , 22(3):265269 , march 2015 .i. bayram . on the convergence of the iterative shrinkage/ thresholding algorithm with a weakly convex penalty . ,64(6):15971608 , march 2016 .s. becker and p. l. combettes .an algorithm for splitting parallel sums of linearly composed monotone operators , with applications to signal recovery ., 15(1):137159 , 2014 .a. blake and a. 
zisserman . .mit press , 1987 .m. burger , k. papafitsoros , e. papoutsellis , and c .- b .infimal convolution regularisation functionals of bv and lp spaces . , 55(3):343369 , 2016 .e. j. cands , m. b. wakin , and s. boyd . enhancing sparsity by reweighted l1 minimization . , 14(5):877905 ,december 2008 .m. carlsson .on convexification / optimization of functionals including an l2-misfit term .september 2016 .m. castella and j .- c .optimization of a geman - mcclure like criterion for sparse signal deconvolution . in _ ieee int .workshop comput . adv .multi - sensor adaptive proc ._ , pages 309312 , december 2015. a. chambolle and p .- l . lions . image recovery via total variation minimization and related problems .76:167188 , 1997 . r. chartrand .shrinkage mappings and their induced penalty functions . in _ proc .ieee int .speech , signal processing ( icassp ) _ , pages 10261029 , may 2014 . l. chen and y. gu . the convergence guarantees of a non - convex approach for sparse recovery ., 62(15):37543767 , august 2014 .p .- y . chen and i. w. selesnick .group - sparse signal denoising : non - convex regularization , convex optimization . ,62(13):34643478 , july 2014 .e. chouzenoux , a. jezierska , j. pesquet , and h. talbot .a majorize - minimize subspace approach for image regularization ., 6(1):563591 , 2013 .p. l. combettes and j .- c .proximal splitting methods in signal processing . in h.h. bauschke et al . ,editors , _ fixed - point algorithms for inverse problems in science and engineering _ , pages 185212 .springer - verlag , 2011 .l. condat . a direct algorithm for 1-d total variation denoising ., 20(11):10541057 , november 2013 .j. darbon and m. sigelle .image restoration with discrete constrained total variation part i : fast and exact optimization ., 26(3):261276 , 2006 .i. daubechies , m. defrise , and c. de mol . an iterative thresholding algorithm for linear inverse problems with a sparsity constraint . , 57(11):14131457 , 2004 .y. ding and i. w. selesnick .artifact - free wavelet denoising : non - convex sparse regularization , convex optimization ., 22(9):13641368 , september 2015 .d. donoho , a. maleki , and m. shahram .wavelab 850 , 2005 . ` http://www-stat.stanford.edu/%7ewavelab/ ` .l. dmbgen and a. kovac .extensions of smoothing via taut strings . , 3:4175 , 2009 .s. durand and j. froment .reconstruction of wavelet coefficients using total variation minimization ., 24(5):17541767 , 2003 .g. r. easley , d. labate , and f. colonna .shearlet - based total variation diffusion for denoising ., 18(2):260268 , february 2009 .m. figueiredo and r. nowak .an em algorithm for wavelet - based image restoration ., 12(8):906916 , august 2003 .a. gholami and s. m. hosseini . a balanced combination of tikhonov and total variation regularizations for reconstruction of piecewise - smooth signals ., 93(7):19451960 , 2013 .t. hastie , r. tibshirani , and m. wainwright . .crc press , 2015 .w. he , y. ding , y. zi , and i. w. selesnick .sparsity - based algorithm for detecting faults in rotating machines ., 72 - 73:4664 , may 2016 .n. a. johnson . a dynamic programming algorithm for the fused lasso and -segmentation ., 22(2):246260 , 2013 .a. lanza , s. morigi , and f. sgallari .convex image denoising via non - convex regularization with parameter selection . ,pages 126 , 2016 .m. a. little and n. s. jones .generalized methods and solvers for noise removal from piecewise constant signals : part i background theory . , 467:30883114 , 2011 .m. malek - mohammadi , c. r. rojas , and b. 
wahlberg . a class of nonconvex penalties preserving overall convexity in optimization - based mean filtering . , 64(24):66506664 ,december 2016 .y. marnissi , a. benazza - benyahia , e. chouzenoux , and j .- c .generalized multivariate exponential power prior for wavelet - based multichannel image restoration . in _ proc .ieee int .image processing ( icip ) _ , pages 24022406 , september 2013 .h. mohimani , m. babaie - zadeh , and c. jutten . a fast approach for overcomplete sparse decomposition based on smoothed l0 norm . , 57(1):289301 , january 2009 .t. mllenhoff , e. strekalovskiy , m. moeller , and d. cremers .the primal - dual hybrid gradient method for semiconvex splittings .8(2):827857 , 2015 . m. nikolova .estimation of binary images by minimizing convex criteria . in _ proc. ieee int .image processing ( icip ) _ , pages 108112 vol . 2 , 1998 .m. nikolova. local strong homogeneity of a regularized estimator ., 61(2):633658 , 2000 .m. nikolova .analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least - squares ., 4(3):960991 , 2005 .m. nikolova .energy minimization methods .in o. scherzer , editor , _ handbook of mathematical methods in imaging _ , chapter 5 , pages 138186 .springer , 2011 .m. nikolova , m. k. ng , and c .- p . tam .fast nonconvex nonsmooth minimization methods for image restoration and reconstruction . , 19(12):30733088 , december 2010 .a. parekh and i. w. selesnick .convex denoising using non - convex tight frame regularization ., 22(10):17861790 , october 2015 .a. parekh and i. w. selesnick .enhanced low - rank matrix approximation . , 23(4):493497 , april 2016 .j. portilla and l. mancera . -based sparse approximation : two alternative methods and some applications . in _ proceedings of spie_ , volume 6701 ( wavelets xii ) , san diego , ca , usa , 2007 . p. rodriguez and b. wohlberg .efficient minimization method for a generalized total variation functional . , 18(2):322332 , february 2009 .l. rudin , s. osher , and e. fatemi .nonlinear total variation based noise removal algorithms ., 60:259268 , 1992 .i. w. selesnick and i. bayram .sparse signal estimation by maximally sparse convex optimization . ,62(5):10781092 , march 2014 .i. w. selesnick and i. bayram . enhanced sparsity by non - separable regularization . , 64(9):22982313 , may 2016 .i. w. selesnick , a. parekh , and i. bayram .convex 1-d total variation denoising with non - convex regularization ., 22(2):141144 , february 2015 .s. setzer , g. steidl , and t. teuber .infimal convolution regularizations with discrete l1-type functionals ., 9(3):797827 , 2011 .m. storath , a. weinmann , and l. demaret .jump - sparse and sparse recovery using potts functionals . , 62(14):36543666 , july 2014 .d. p. wipf , b. d. rao , and s. nagarajan .latent variable bayesian models for promoting sparsity ., 57(9):62366255 ,september 2011 .nearly unbiased variable selection under minimax concave penalty ., pages 894942 , 2010 .h. zou and r. 
li .one - step sparse estimates in nonconcave penalized likelihood models ., 36(4):15091533 , 2008 .to gain intuition about the proposed penalty function and how it induces sparsity of while maintaining convexity of the cost function , a few illustrations are useful .figure [ fig : pen1 ] illustrates the proposed penalty , its sparsity - inducing behavior , and its relationship to the differentiable convex function .figure [ fig : pen2 ] illustrates how the proposed penalty is able to maintain the convexity of the cost function .figure [ fig : pen1 ] shows the proposed penalty defined in for and .it can be seen that the penalty approximates the standard tv penalty for signals for which is approximately zero .but it increases more slowly than the standard tv penalty as . in that sense, it penalizes large values less than the standard tv penalty . as shown in fig .[ fig : pen1 ] , the proposed penalty is expressed as the standard tv penalty minus the differentiable convex non - negative function .since is flat around the null space of , the penalty approximates the standard tv penalty around the null space of .in addition , since is non - negative , the penalty lies below the standard tv penalty .figure [ fig : pen2 ] shows the differentiable part of the cost function for , , and .the differentiable part is given by in .the total cost function is obtained by adding the standard tv penalty to , see .hence , is convex if the differentiable part is convex . as can be seen in fig .[ fig : pen2 ] , the function is convex .we note that the function in this figure is not strongly convex .this is because we have used .if , then the function will be strongly convex ( and hence will also be strongly convex and have a unique minimizer ) .we recommend .figure [ fig : pen3 ] shows the differentiable part of the cost function for , , and . here, the function is non - convex because which violates the convexity condition . in order to simplify the illustration ,we have set in fig .[ fig : pen2 ] . in practice .but the only difference between the cases and is an additive affine function which does not alter the convexity properties of the function . in practicewe are interested in the case , i.e. , signals much longer than two samples . however , in order to illustrate the functions , we are limited to the case of .we note that the case of does not fully illustrate the behavior of the proposed penalty .in particular , when the penalty is simply a linear transformation of a scalar function , which does not convey the non - separable behavior of the penalty for .a separate document has additional supplemental figures illustrating the convergence of the iterative algorithm .these figures show the optimality condition as a scatter plot .the points in the scatter plot converge to the signum function as the algorithm converges .
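to make the forward-backward iteration of the main text concrete, the following is a minimal python sketch of moreau-enhanced tv denoising. it is an illustration under stated assumptions, not the authors' implementation: the smooth-part gradient is taken as grad_f1(x) = (x - y) - lam*alpha*(x - tvd(x, 1/alpha)), which is our reading of the propositions above; the inner tv-denoising proximity operator is computed here by a simple dual projected-gradient loop rather than by the fast exact solver of condat used in the paper; and the step size `mu` and the iteration counts are illustrative choices (any 0 < mu < 2 is admissible since the gradient is 1-lipschitz when alpha*lam <= 1).

```python
import numpy as np

def tvd(y, lam, n_iter=200):
    """1-d total variation denoising,
         argmin_x 0.5*||y - x||^2 + lam*||D x||_1,
       via projected gradient on the dual problem (illustrative solver;
       the paper uses condat's exact direct algorithm instead)."""
    D = lambda x: np.diff(x)                               # (Dx)_n = x_{n+1} - x_n
    DT = lambda z: -np.diff(z, prepend=0.0, append=0.0)    # adjoint of D
    z = np.zeros(len(y) - 1)                               # dual variable, |z_n| <= lam
    for _ in range(n_iter):
        z = np.clip(z - 0.25 * D(DT(z) - y), -lam, lam)    # step 1/||D D^T|| <= 1/4
    return y - DT(z)

def moreau_enhanced_tvd(y, lam, alpha, mu=1.0, n_iter=100):
    """forward-backward splitting for the moreau-enhanced tv cost.
       convexity of the cost requires alpha*lam <= 1; alpha > 0 is assumed
       here, and alpha -> 0 recovers standard tv denoising."""
    x = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        grad = (x - y) - lam * alpha * (x - tvd(x, 1.0 / alpha))  # gradient of smooth part
        z = x - mu * grad                                         # forward step
        x = tvd(z, mu * lam)                                      # backward (prox) step
    return x
```

as a usage example, `moreau_enhanced_tvd(y, lam=2.5, alpha=0.8/2.5)` keeps alpha*lam below one, so the overall cost stays convex while the penalty itself is non-convex; these numerical values are arbitrary illustrations, not the settings used in the paper's experiments.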
total variation denoising is a nonlinear filtering method well suited for the estimation of piecewise-constant signals observed in additive white gaussian noise. the method is defined by the minimization of a particular non-differentiable convex cost function. this paper describes a generalization of this cost function that can yield more accurate estimation of piecewise-constant signals. the new cost function involves a non-convex penalty (regularizer) designed to maintain the convexity of the cost function. the new penalty is based on the moreau envelope. the proposed total variation denoising method can be implemented using forward-backward splitting.
the nonlinear schrdinger ( nls ) equation arises as the model equation with second order dispersion and cubic nonlinearity describing the dynamics of slowly varying wave packets in nonlinear fiber optics , in water waves and in bose - einstein condensate theory .we consider the nls equation with the periodic boundary conditions . here is a complex valued function , is a parameter and .the nls equation is called `` focusing '' if and `` defocusing '' if ; for , it reduces to the linear schrdinger equation . in last two decades, various numerical methods were applied for solving nls equation , among them are the well - known symplectic and multisymplectic integrators and discontinuous galerkin methods .there is a strong need for model order reduction techniques to reduce the computational costs and storage requirements in large scale simulations , yielding low - dimensional approximations for the full high - dimensional dynamical system , which reproduce the characteristic dynamics of the system . among the model order reduction techniques ,the proper orthogonal decomposition ( pod ) is one of the most widely used method .it was first introduced for analyzing cohorent structures and turbulent flow in numerical simulation of fluid dynamics equations .it has been successfully used in different fields including signal processing , fluid dynamics , parameter estimation , control theory and optimal control of partial differential equations . in this paper , we apply the pod to the nls equation . to the best of our knowledge , there is only one paper where pod is applied to nls equation , where only one and two modes approximations of the nls equation are used in the fourier domain in connection with mode - locking ultra short laser applications . in this paper ,the nls equation being a semi - linear partial differential equation ( pde ) is discretized in space and time by preserving the symplectic structure and the energy ( hamiltonian ) . then ,from the snapshots of the fully discretized dynamical system , the pod basis functions are computed using the singular value decomposition ( svd ) .the reduced model consists of hamiltonian ordinary differential equations ( odes ) , which indicates that the geometric structure of the original system is preserved for the reduced model .the semi - disretized nls equations and the reduced equations are solved in time using strang splitting and mid - point rule .a priori error estimates are derived for pod reduced model , which is solved by mid - point rule .it turns out that most of the energy of the system can be accurately approximated by using few pod modes .numerical results for a nls equation with soliton solutions confirm that the energy of the system is well preserved by pod approximation and the solution of the reduced model are close to the solution of the fully discretized system .+ the paper is organized as follows . in section 2 , the pod method and its application to semi - linear dynamical systems are reviewed . in section 3 ,a priori error estimators are derived for the mid - point time - discretization of semi - linear pdes. numerical solution of the semi - discrete nls equation and the pod reduced form are described in section [ numnls ] . in the last section , section [ numres ] ,the numerical results for the reduced order models of nls equations are presented .in the following , we briefly describe the important features of the pod reduced order modeling ( rom ) ; more details can be found in . 
in the first step of the pod based model order reduction ,the set of snapshots , the discrete solutions of the nonlinear pde , are collected .the snapshots are usually equally spaced in time corresponding to the solution of pde obtained by finite difference or finite element method .the snapshots are then used to determine the pod bases which are much smaller than the snapshot set . in the last step ,the pod reduced order model is constructed to obtain approximate solutions of the pde .we mention that the choice of the snapshots representing the dynamics of the underlying pde is crucial for the effectiveness of pod based reduced model .+ let be a real hilbert space endowed with inner product and norm . for , we set as the ensemble consisting of the snapshots . in the finite differencecontext , the snapshots can be viewed as discrete solutions at time instances , , and \in { \mathbb r}^{m\times n} ] be a given matrix with rank .further , let be the svd of , where \in\mathbb{r}^{m\times m } , v=[v_1,\ldots , v_n]\in \mathbb{r}^{n\times n } ] is continuous in both arguments and locally lipschitz - continuous with respect to the second argument .the semi - discrete form of nls equation ( [ nlsdenk ] ) is a semi - linear equation as ( [ tsyst ] ) where the cubic nonlinear part is locally lipschitz continuous .suppose that we have determined a pod basis of rank in , then we make the ansatz .\end{aligned}\ ] ] substituting ( [ rom ] ) in ( [ tsyst ] ) , we obtain the reduced model , \quad \sum_{j=1}^l\mathrm{y}_j^l(0)\psi_j = y_0.\ ] ] the pod approximation ( [ pod ] ) holds after projection on the dimensional subspace . from ( [ pod ] ) and , we get for and ] by , \quad \mathrm{y}=(\mathrm{y}_1,\cdots,\mathrm{y}_l)\in \mathbb{r}^{l}\ ] ] and the vector \rightarrow \mathbb{r}^{l} ] , we obtain with both operation and the powers are hold elementwise .the reduced nls equation ( [ dismor ] ) is also hamiltonian and is solved , as the unreduced semi - discretized nls equation ( [ nlsdenk ] ) , with the symplectic midpoint method applying linear - nonlinear strang splitting : in order to solve ( [ nls ] ) efficiently , we apply the second order linear , non - linear strang splitting the nonlinear parts of the equations are solved by newton - raphson method . in the numerical examples , the boundary conditions are periodic , so that the resulting discretized matrices are circulant . for solving the linear system of equations , we have used the matlab toolbox * smt * , which is designed for solving linear systems with a structured coefficient matrix like the circulant and toepltiz matrices .it reduces the number of floating point operations for matrix factorization to .all weights in the pod approximation are taken equally as and .then the average rom error , difference between the numerical solutions of nls equation and rom is measured in the form of the error between the fully discrete nls solution the average hamiltonian rom error is given by where and refer to the discrete hamiltonian errors at the time instance corresponding to the full - order and rom solutions , respectively .the energy of the hamiltonian pdes is usually expressed by the hamiltonian .it is well known that symplectic integrators like the midpoint rule can preserve the only quadratic hamiltonians exactly .higher order polynomials and nonlinear hamiltonians are preserved by the symplectic integration approximately , i.e. the approximate hamiltonians do not show any drift in long term integration . 
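to fix ideas, the following python sketch illustrates the offline/online structure described above for a generic semilinear system y' = A y + f(y), as in the abstract setting of this section. it is not the authors' matlab code: the nls-specific details (the periodic finite-difference matrix, the cubic nonlinearity, the strang linear/nonlinear splitting, the smt toolbox) are left abstract, and all names and tolerances below are illustrative.

```python
import numpy as np

def pod_basis(snapshots, ell):
    """pod basis of rank ell from the snapshot matrix (rows: dofs, columns:
       time instances) via the thin svd; also returns the fraction of the
       snapshot 'energy' captured by the first ell singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    captured = np.sum(s[:ell]**2) / np.sum(s**2)
    return U[:, :ell], captured

def reduced_rhs(Psi, A, f):
    """galerkin projection of y' = A y + f(y) onto span(Psi):
       a' = Psi^H A Psi a + Psi^H f(Psi a)."""
    Ar = Psi.conj().T @ A @ Psi
    return lambda a: Ar @ a + Psi.conj().T @ f(Psi @ a)

def implicit_midpoint(rhs, a0, dt, n_steps, n_inner=20):
    """symplectic implicit mid-point rule; the implicit stage is solved by a
       plain fixed-point iteration (the paper instead combines the mid-point
       rule with a strang linear/nonlinear splitting and newton iterations)."""
    a = np.asarray(a0, dtype=complex)
    traj = [a.copy()]
    for _ in range(n_steps):
        a_new = a.copy()
        for _ in range(n_inner):
            a_new = a + dt * np.asarray(rhs(0.5 * (a + a_new)))
        a = a_new
        traj.append(a.copy())
    return np.column_stack(traj)
```

the reduced coefficients a(t) are mapped back to the full space through Psi @ a, and the rom error and discrete hamiltonian error quoted above are then computed from this reconstruction.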
for large matrices , the svd is very time consuming .recently several randomized methods are developed , which are very efficient when the rank is very small , i.e , .we compare the efficiency of matlab programs _ svd _ and _ fsvd _ ( based on the algorithm in ) for computation of singular values for the nls equations in this section , on a pc with amd fx(tm)-8150 eight - core processor and 32 gb ram .the accuracy of the svd is measured by norm , . the randomized version of svd , the fast svd _ fsvd _, requires the rank of the matrix as input parameter , which can be determined by matlab s _ rank _ routine .when the singular values decay rapidly and the size of the matrices is very large , then randomized methods are more efficient than matlab s * svd*. computation of the rank with _ rank _ and singular values with _fsvd _ requires much less time than the _ svd _ for one and two dimensional nls equations ( table 1 ) ..comparison of _ svd _ and _ fsvd _ [ cols="<,>,>,>,>,>,>,>",options="header " , ] , scaledwidth=40.0% ] and : full - order model ( left ) and rom with 5 pod modes ( right),title="fig:",scaledwidth=40.0% ] and : full - order model ( left ) and rom with 5 pod modes ( right),title="fig:",scaledwidth=40.0% ]a reduced model is derived for the nls equation by preserving the hamiltonian structure .a priori error estimates are obtained for the mid - point rule as time integrator for the reduced dynamical system .numerical results show that the energy and the phase space structure of the three different nls equations are well preserved by using few pod modes .the number of the pod modes containing most of the energy depends on the decay of the singular values of the snapshot matrix , reflecting the dynamics of the underlying systems . in a future work, we will investigate the dependence of the rom solutions on parameters for the cnls equation by performing a sensitivity analysis .a. l. islas , d. a. karpeev and c. m. schober , geometric integrators for the nonlinear schrdinger equation , journal of computational physics , 173 ( 2001 ) , 116148 .a. studingeer and s. volkwein , numerical analysis of pod a - posteriori error estimation for optimal control , in control and optimization with pde constraints , eds : k. bredies , c. clason , k. kunisch , g. von winckel , international series of numerical mathematics volume 164 ( 2013 ) 137158 .e. hairer , c. lubich and g. wanner , geometric numerical integration .structure- preserving algorithms for ordinary differential equations , springer series in computational mathematics , springer - verlag , berlin , 2nd edition , 31 2006 .e. schlizerman , e. ding , o. m. williams and j. n. kutz , the proper orthogonal decomposition for dimensionality reduction in mode - locked lasers and optical systems , int . j. optics , ( 2012 ) 831604 .g. berkooz , p. holmes and j. l. lumley , turbulence , coherent structuress , dynamical systems and symmetry , cambridge university press , cambridge monographs on mechanics , 1996 .j. a. c. weideman and b. m. herbst , split - step methods for the solution of the nonlinear schrdinger equation , siam j. numer ., 23 ( 1986 ) , 485507 . k. kunisch and s. volkwein , galerkin proper orthogonal decomposition methods for parabolic problems , numer ., 90 ( 2001 ) , 117148 .m. redivo - zaglia and g. rodriguez , smt : a matlab structured matrices toolbox , numer .algorithms , 59 ( 2012 ) , 639 - 659 .n. halko , p. g. martinsson , y. shkolnisky , m. 
tygert , an algorithm for the principal component analysis of large data set , siam j. sci .comput . , 33 , ( 2011 ) 2580-2594 ya - ming chen , zhu hua - jun zhu and he song , multi - symplectic splitting method for two - dimensional nonlinear schrdinger equation , commun ., 56 ( 2011 ) 617622 .
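for completeness, a generic sketch of a randomized truncated svd in the spirit of halko et al., of the kind compared against the full svd in the timing table above, is given below; the matlab fsvd routine referenced in the text may differ in its details, and the oversampling and power-iteration parameters here are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_power_iter=2, seed=0):
    """randomized truncated svd (halko-martinsson-tropp type): sample the
       range of A with a gaussian test matrix, orthonormalize, and take an
       exact svd of the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversample, min(m, n))
    Y = A @ rng.standard_normal((n, k))
    for _ in range(n_power_iter):              # power iterations sharpen the range estimate
        Y = A @ (A.conj().T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.conj().T @ A                         # small k x n matrix
    Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vh[:rank, :]
```

in the pod setting only the first `rank` left singular vectors are needed, which is why the rapid decay of the singular values makes this approach attractive for large snapshot matrices.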
we apply the proper orthogonal decomposition (pod) to the nonlinear schrödinger (nls) equation to derive a reduced order model. the nls equation is discretized in space by finite differences and is solved in time by the structure-preserving symplectic mid-point rule. a priori error estimates are derived for the pod reduced dynamical system. numerical results for one- and two-dimensional nls equations and for the coupled nls equation with soliton solutions show that the low-dimensional approximations obtained by pod reproduce very well the characteristic dynamics of the system, such as preservation of the energy and of the solutions. nonlinear schrödinger equation; proper orthogonal decomposition; model order reduction; error analysis
over the last decade , important advancements in the field of optical frequency combs have been reported .these have led to remarkable progresses in extending the accuracy of atomic clocks to the optical frequency region , with profound implications in several research areas , spanning from optical metrology and high precision spectroscopy to telecommunication technologies . in two recent papers the implementation of radiation comb generators using dc superconducting quantum interference devices ( squids ) or extended josephson junctionswere discussed .assuming realistic experimental parameters , it was shown that such devices would be able to generate hundreds of harmonics of the driving frequency .for example , at ghz a substantial output power of the order of a fraction of nw could be delivered using a standard ghz frequency drive .this extraordinary frequency up - conversion opens the way to many applications from low - temperature microwave electronics to on - chip sub - millimeter wave generation .the devices discussed in refs . were `` ideal '' in the sense that parasitic effects which can be present in a real structure were neglected . in light of a realistic implementation ,such effects are unavoidable and must be taken into account . which induces voltage pulses across the interferometer .the red regions denote the two josephson tunnel junctions , is the constant bias current , is the superconducting phase across the -th junction and are the superconducting electrodes .( b ) rcsj model circuit where , , and are the resistance , the josephson inductance and the capacitance of the squid , respectively .( c ) sketch of a linear array of squids , connected together via a superconducting wire , and coupled to a load resistance .each squid is pierced by a uniform magnetic flux .the total voltage which develops across the array is given by the sum of all the voltage drops across each single squid . ] in the present work we investigate extensively the impact of several parasitic effects on the phenomenology and performance of the squid - based radiation comb generator theoretically proposed in ref . .namely , we analyze the case in which the squids have a finite loop geometrical inductance and junction capacitance , and then we estimate the role of adding uncertainty in the squids areas and asymmetry parameters when building up a chain .we treat each one of these effects separately in order to emphasize their impact both on the physics and on the performance of our device .in particular we show that the junction capacitance plays a negligible role for our choice of parameters , whereas the loop geometrical inductance has a beneficial effect on the performance of the device .on the other hand , the errors on the squid areas and junction resistance asymmetries may deteriorate the radiation comb generator performance , but their effect remains quite moderate if such errors are within a tolerance of and for the areas and the junction resistance asymmetry parameters , respectively .the paper is structured as follows : first , we review the device theoretical analysis in sec .[ sec : model ] . in sec .[ sec : results ] we discuss how each parasitic effect alter the device performance : the role of a finite squid geometrical inductance and junction capacitance are investigated in secs .[ sec : results_inductance ] and [ sec : results_capacitance ] , respectively .then , in sec . 
[ sec : results_areas ] we estimate the impact of an uncertainty in the squids areas when adding them in series to build a linear array , whereas in sec .[ sec : results_asymmetries ] we consider squids with different asymmetry parameters . a discussion about the experimental feasibility of the proposed system as well as the estimate of its realistic performance when all the aforementioned effects are taken into account at once are the content of sec .[ sec : feasibility ] . finally , our conclusions are gathered in sec .[ sec : conclusions ] .in this section we briefly review the physical arguments leading to the prediction of the -jumps of the superconducting phase , and the consequent generation of voltage pulses , using squid devices . since these were extensively discussed in refs . both for devices based on squids and on extended josephson junctions , we recall here only the basic principles , without focusing on the details .we consider a squid biased by a constant current and driven by an external , time - dependent magnetic flux [ see fig .[ fig : squid_figuracompleta](a ) ] .due to the first josephson relation , the josephson current through the squid is i_j = i_c1 _ 1 + i_c2 _ 2 , where and ( =1,2 ) are the phase across and the critical current of the -th junction , respectively . in the limit of negligible inductance ] , by introducing the superconducting phase across the squid and using the flux quantization relation , the current ( ) vs phase relation of the squid can be written as i_j(;)=i_+[+r ] , [ eq : ij_squid ] where ( wb is the flux quantum ) , , and expresses the degree of asymmetry of the interferometer .equation describes the well - known oscillations of the squid critical current as a function of the magnetic flux , with minima occurring at integer multiples of . for a fixed bias current ,when crosses a critical - current minimum we see from eq . that a change of sign in must be accompanied by a change of sign in in order for the current to maintain its direction .this is accomplished by a phase jump of which , owing to the second josephson relation , results in a voltage pulse across the squid .the physical origin of the -jump of the superconducting phase can be also easily understood on an energetic ground . for a symmetric ( ) squid in the absence of any bias current ,the time - dependent josephson potential is , where and . at the initial time ( )this potential has minima at ( with integer ) .when the magnetic flux reaches the diffraction node at , vanishes , and for changes its sign .the former equilibrium points have become unstable and hence , to remain in a minimum energy state , must change sign , meaning that the superconducting phase must undergo a -jump to reach a new minimum at . notice that a finite bias current is then necessary to induce a preferential direction to the phase jumps . to determine the details of the voltage pulses , we rely on the so - called resistively and capacitively shunted josephson junction ( rcsj ) model adapted to a squid [ see fig .[ fig : squid_figuracompleta](b ) ] , in which each josephson junction is modelized as a circuit with a capacitor , a resistor , and a non - linear ( josephson ) inductance arranged in a parallel configuration .we consider an external sinusoidally - driven magnetic flux with frequency and amplitude , centered in the first node of the interference pattern , so that_ e(t ) = [ 1-(2 t ) ] . 
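before discussing the node crossings, a minimal python sketch of the overdamped limit of this rcsj equation may be useful. it is only an illustration: the dimensionless form below (time in units of the josephson time constant, voltage as dphi/dtau, bias current in units of i_+) and the sign conventions are our assumptions rather than the paper's normalization, and the drive frequency is set artificially close to the josephson time scale so that the run stays short; in the real devices the drive period is many orders of magnitude longer.

```python
import numpy as np

def drive_flux(tau, omega):
    """external flux in units of Phi_0, centered on the first node:
       Phi_e/Phi_0 = 0.5*(1 - cos(omega*tau))."""
    return 0.5 * (1.0 - np.cos(omega * tau))

def squid_pulses_overdamped(omega=2*np.pi/2000, i_bias=0.05, r=0.0,
                            dtau=0.01, n_steps=400_000, phi_init=0.0):
    """euler integration of the overdamped rcsj equation for a driven dc squid,
         dphi/dtau = i_bias - cos(pi*Phi_e/Phi_0)*(sin(phi) + r*cos(phi)),
       with negligible capacitance and loop inductance.  returns the
       dimensionless voltage v = dphi/dtau, which shows one pulse (a pi-jump
       of the phase) each time the flux crosses a half-integer of Phi_0."""
    tau = np.arange(n_steps) * dtau
    phi = phi_init
    v = np.empty(n_steps)
    for k in range(n_steps):
        f = drive_flux(tau[k], omega)
        dphi = i_bias - np.cos(np.pi * f) * (np.sin(phi) + r * np.cos(phi))
        v[k] = dphi
        phi += dtau * dphi
    return tau, v
```

under these conventions the physical voltage of a single squid is recovered by rescaling v with R*I_+, and summing the traces of the individual squids (with their parameters possibly drawn at random, as in the later sections) gives the total array voltage.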
as a result ,the magnetic flux crosses the nodes of the interference pattern at , with integer .the equation of motion for can be written as : where is the junction capacitance , is the total shunting resistance of the squid , is the external bias current and /i_+ ] is the voltage drop generated across the squid . in writing the flux quantization : _ 2-_1=-2 , [ eq : flux_quantization ] now the _total _ magnetic flux piercing the squid is , which differs from the external ( time - dependent ) term because of the geometrical inductance of the loop . using eqs . and ,after some straightforward algebra , we can express the total current through the squid and the total magnetic flux as : where the phase /2 $ ] is related to the voltage drop across the squid via .equation offers the following physical interpretation : at any instant of time , the finite loop inductance modifies the external flux piercing the squid , and the resulting total magnetic flux has to be evaluated self - consistently .once this is done , the dynamics of the squid phase [ as well as the total voltage drop across the device ] can be evaluated via eq . .in writing as the mean voltage generated across the two junctions , we have implicitly assumed that the two squid arms have the same inductance .accounting for different arms inductances would result in an additional correction to the magnetic flux piercing the squid , which would become , where , , while and are defined by eqs . .from this expression we see that unless the difference between and is large ( i.e. , comparable to ) the term is a minor correction to the magnetic flux , with respect to . in order to quantify the effect of the inductance , we have solved numerically the rcsj - equation for an array of 50 symmetric ( ) squids made of nb / alox / nb junctions , and computed the voltage pulses for different values of the loop geometrical inductance , compatible with typical squid dimensions . in fig .[ fig : squid_nb_inductance_effect_nu1 ] we show the effect of a finite inductance on the shape of the voltage pulse generated by each squid of the chain .we notice that a geometrical inductance of the order is a reasonably good assumption for a squid with radius of the order m if we approximate , being the vacuum permeability . from the figurewe see that the principal effect of increasing is that the voltage pulses are delayed with respect to the case , and furthermore they are sharper and higher .this is a direct consequence of the change in the time - dependent magnetic flux profile . indeed , starting at , it turns out that is initially reduced by virtue of the second term in eq . .this means that the condition at which the -jump of the phase is met ( that is , ) is verified at a later time than ( see sec .[ sec : model ] ) , and the same holds for the voltage pulse . in addition , the fact that the shape of is altered from the original cosinusoidal profile induces a faster relaxation of the phase toward the energy minimum . as a consequence , the voltage peaks for finite geometrical inductance are sharper and skewed with respect to the case ( leftmost curve in fig . 
[fig : squid_nb_inductance_effect_nu1 ] ) .this has a beneficial impact on the emitted radiation spectrum , as it is confirmed in fig .[ fig : squid_nb_inductance_power_nu1 ] , where we show the power generated by a chain of nominally identical and symmetric squids made of nb / alox / nb junctions , driven by a 1 ghz oscillating magnetic field , for different values of the loop geometrical inductance . as we can see , the device with =10 ph is able to provide a power of about 0.1 nw at 20 ghz ( corresponding to the 20-th harmonics of the driving frequency ) . notice finally that only the even harmonics of the driving frequency are shown in the power spectrum of the emitted radiation , the contribution of the odd ones being vanishingly small for symmetric ( ) squids . ) of the array , for different values of its ( geometrical ) inductance .the driving frequency is =1 ghz , whereas the other parameters are those typical of a nb / alox / nb josephson junction , given at the end of sec .[ sec : model ] . ] .the calculation is performed for a chain of nominally identical and symmetric nb / alox / nb squids , subject to a = 1 ghz driving .the parameters are the same as in fig .[ fig : squid_nb_inductance_effect_nu1 ] .notice that only the even harmonics of the driving frequency are shown , the contribution associated to the odd ones being vanishingly small . ] in this section we investigate the effect of taking into account a finite squids junction capacitance . in order to do this, we have solved the differential rcsj equation [ eq .] for the squid phase dynamics without neglecting the second - order ( diffusive ) term .details on the numerical procedure are given in appendix [ sec : app_capacitance ] . in fig .[ fig : squid_nb_capacitance_effect_nu1 ] we show how the typical voltage pulse generated by each squid of the chain is altered due to the effect of a finite junction capacitance .we notice that increasing up to pf has the only effect of making the voltage peak slightly skewed and sharper : this would be beneficial in terms of output power . for larger values of the junctions capacitance , the second order term in eq. becomes more important and the system starts operating in the under - damped regime .this is evident for pf ( rightmost curve in fig .[ fig : squid_nb_capacitance_effect_nu1 ] ) , at which the voltage exhibits small oscillations before relaxing to zero , taking also negative values. however , all these effects would be relevant for large josephson junctions , whereas in this work we focus rather on small nb / alox / nb junctions , typically characterized by a relatively low capacitance ( ff ) . in this case , we see from fig . [fig : squid_nb_capacitance_effect_nu1 ] that there is no appreciable difference with respect to the zero - capacitance case ( the corresponding curves are essentially indistinguishable ) . as a consequence ,our device operates always in the over - damped regime . according to these results, we do not expect any relevant modifications in the power spectrum of the emitted radiation with respect to the ideal ( zero capacitance ) case , and thus we decided not to show it .in addition , we have also performed numerical simulations taking into account the combined effect of both a finite junction capacitance _ and _ loop inductance , but we did not observe any relevant modification with respect to the results discussed in this and the previous subsection [ sec : results_inductance ] . 
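as a complement, the damping regime discussed in this subsection can be explored with the following sketch of the full second-order equation, written here with the stewart-mccumber parameter beta_c as the dimensionless capacitance; this normalization is our choice for illustration and is not the downwind finite-difference scheme of the paper's appendix.

```python
import numpy as np
from scipy.integrate import solve_ivp

def squid_with_capacitance(beta_c=0.5, omega=2*np.pi/2000, i_bias=0.05,
                           r=0.0, tau_max=4000.0):
    """second-order (finite-capacitance) rcsj dynamics,
         beta_c*phi'' + phi' + cos(pi*Phi_e/Phi_0)*(sin(phi) + r*cos(phi)) = i_bias,
       with beta_c > 0.  small beta_c reproduces the overdamped pulses of the
       previous sketch; beta_c of order one and larger shows the ringing
       (under-damped) relaxation described for large junction capacitance."""
    def rhs(tau, u):
        phi, v = u                      # v = dphi/dtau
        f = 0.5 * (1.0 - np.cos(omega * tau))
        dv = (i_bias - v - np.cos(np.pi * f) * (np.sin(phi) + r * np.cos(phi))) / beta_c
        return [v, dv]
    sol = solve_ivp(rhs, (0.0, tau_max), [0.0, 0.0],
                    max_step=0.05, rtol=1e-8, atol=1e-10)
    return sol.t, sol.y[0], sol.y[1]    # time, phase, dimensionless voltage
```

comparing runs at different beta_c makes the onset of the oscillatory relaxation visible, and the over-damped limit is recovered as beta_c decreases.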
) of the array , for different values of the junction capacitance .the driving frequency is =1 ghz , whereas the squids parameters are the same as in the previous figures . ] when fabricating an array of squids , it is most unlikely to be able to make them all identical .inevitable imprecisions in the lithographic processes imply that the squids will have slightly different areas . as a consequence ,if the array is embedded in a coil which generates an ideally uniform magnetic field , the resulting flux piercing each squid of the array will be different : larger squids will be pierced by a larger magnetic flux , and vice - versa .this will induce a shift in the time at which the condition ( when the superconducting phase experiences a -jump ) is met : the phase will jump earlier in larger squids .to better quantify this effect , let us associate a gaussian statistical distribution for the squid areas : a = a_0(1+_a ) ( _ a)=(- ) , [ eq : areasdistribution ] where is a dimensionless parameter quantifying the degree of uncertainty on the squids areas , being normally distributed around zero with variance , whereas is the reference value for the surface delimited by the squid loop .the standard deviation can thus be seen as the percentage error within which the value of the area is known .we can write the external magnetic flux as : ,\end{aligned}\ ] ] where we defined and .the phase jump occurs at , that is , at a _ switch time _ determined by : where is a non - negative integer . for sufficiently small ,the above expression for simplifies to : |t(1 + 2k)- t_k - , [ eq : switchtime ] where , as in sec .[ sec : model ] , we have defined . from this expressionit is evident that larger squids ( ) switch before ( ) , and vice - versa .notice also that , since the relation between and is linear , we can understand this result in terms of the distribution of the switch - times , which can be easily computed : ( |t)=(- ) , with .this can be interpreted by stating that the times at which the phase of the squids undergo a -jump is normally distributed around with a variance which is directly proportional to the uncertainty on the squids areas . in fig .[ fig : squid_nb_errareas_effect_nu1 ] we show how a _ typical _ voltage pulse generated by a linear array of symmetric squids is altered by assuming different uncertainties on the areas , up to five percent . by `` typical '' we mean that we have first computed for a single array of squids with random areas [ according to eq . ] , and then we have iterated this procedure for many realizations of the array .we have finally calculated the average voltage pattern , and defined it as the typical one ( see appendix [ sec : app_statisticalapproach ] ) .squids made of nb / alox / nb junctions with areas statistically distributed according to eq . for different values of the standard deviation .the driving frequency is =1 ghz , whereas the squids parameters are the same as in the previous figures . ]we notice that the main effect is that the voltage peaks are broadened and lowered , due to the fact that a certain number of squids switch before and after , the reference switch time for a squid of area [ see eq . ] . as a consequence ,the power spectrum of the emitted radiation is lowered , exhibiting an exponential cut - off at high frequency . despite this , we notice in fig . 
[fig : squid_nb_errareas_power_nu1 ] that this reduction is still very moderate for an uncertainty , in which case the power is reduced by less than one order of magnitude around ghz ( corresponding to the -th harmonics of the driving frequency ) , whereas it is basically unchanged at ghz . by increasing the error to ,on the other hand , the power is reduced in a substantial way .we finally note that , in contrast to fig .[ fig : squid_nb_inductance_power_nu1 ] , in the power spectra for the non - dominant ( odd ) harmonics are visible ( bottom curves ) . remarkably , they show complex structure when increasing .this is evident for : in this case , for ghz , the power associated to odd harmonics becomes of the same order , if not larger , than that associated to the odd ones .of the areas distributions .the calculation is performed for a chain of nb / alox / nb squids subject to a =1 ghz driving . ]another possible source of non - ideality in the fabrication of an array of squids stems from the asymmetry between the two josephson junctions composing each element of the array .this is quantified in terms of the asymmetry parameter , as explained in sec .[ sec : model ] .we notice that assuming a statistical symmetric distribution for the parameter around 0 ( corresponding to an ideally symmetric squid ) would be much detrimental for the device performance , because squids with generate opposite voltage pulses with respect to squids with , for small bias current .thus , when summing up all the pulses to compute the total voltage , the contributions associated to would basically compensate those associated to , resulting in a poor performance in terms of output power . to overcome this problem, we assume that the squids are fabricated with a small _ preferential _ asymmetry , for instance , which correspond to .we introduce a gaussian statistical distribution for the parameter : r = r_0+_r ( _ r)=(- ) , [ eq : asymmetrydistribution ] where is a dimensionless parameter which quantifies the uncertainty on the squids asymmetry , being normally distributed around zero with variance , whereas is the chosen reference value for the squids asymmetry .we have solved numerically the rcsj dynamics of the linear array of squids following the same procedure outlined in the previous section . in fig .[ fig : squid_nb_errasymmetries_effect_nu1 ] we show how the _ _ typical _ _ , the typical voltage is the result of a statistical average over many realizations . ]voltage pulses generated by an array of squids are altered by assuming different uncertainties on the parameter , up to a standard deviation of one percent .notice that the main qualitative difference with respect to the previous cases , in which symmetric squids were considered , is that here the sequence of voltage pulses exhibits alternating signs .this feature was observed and explained in ref .: its major consequence is that in the power spectrum the odd harmonics are predominant over the even ones .figure [ fig : squid_nb_errasymmetries_power_nu1 ] shows the average power spectrum of the emitted radiation for an array of squids .we notice that increasing the uncertainty on the asymmetry parameter reduces the power , especially at high frequency .similarly to what we observed in fig . [ fig : squid_nb_errareas_power_nu1 ] , the non - dominant harmonics ( in this case the even ones ) show complex structure when increasing . 
for , at high frequency ( ghz ) , the power associated to even harmonics becomes comparable or even larger than that associated to the odd ones .squids made of nb / alox / nb junctions .the squid chain is characterized by an asymmetry parameter distribution which is gaussian and centered around with a standard deviation [ see eq . ] .the driving frequency is =1 ghz , whereas the other squids parameters are the same as in the previous figures . ] of the asymmetry parameter distributions ( centered around ) .the calculation is performed for a chain of nb / alox / nb squids subject to a =1 ghz driving . ]in this final section we discuss the experimental feasibility of the setup , and we estimate its realistic performance when _ all _ the parasitic effects studied so far are taken into account at once .some of the effects we are going to discuss were studied in ref ., so here we just review them briefly .first of all , in our analysis so far we have neglected the coupling between the squids via mutual inductance and/or cross capacitance and inductance of the superconducting wire .this condition , which basically relies only on the current conservation through each squid in the chain , implies that the dynamics of each squid is independent from the rest of the array , and it can be realized in practice by a suitable design choice . as a consequence ,the voltage at the extremes of the array scales as the number of squids .accordingly , the _ intrinsic _ power generated by the device ( that is , the power delivered to an ideally infinite load ) scales as . on the other hand ,the _ extrinsic _ power depends on the detection system used . in our casethe jrcg array is supposed to be attached to a finite load , which effectively couples the dynamics of the squids : for realistic devices the extrinsic power is then found to scale as , rather than ( see related discussion in sec . [sec : model ] ) . as shown in sec .[ sec : results ] , this scaling is not a limitation in the region of tens of ghz , where sizable output power can be generated .conversely at higher frequency , e.g. sub - millimeter region , the output power drops and the device design must be modified to compensate for this decrease .one possibility is to operate with more jrcg arrays arranged in a parallel configuration : in this case the contribution of each jrcg array would add up and the total power would be given by , where is the number of squid arrays in parallel .another important issue concerns the way the emitted radiation propagates across the device . when discussing the scaling of the power with the number of squids in the chain , we have implicitly assumed such radiation to propagate _instantly _ across the device . strictly speaking, this lumped - element model is justified if the propagation time of the radiation through the whole array is much shorter than the typical voltage pulse width , i.e. the voltage transient .this condition strongly depends on the specific values of the parameters , which in turn are set by the device fabrication , its design , and the materials used .all these can be optimized with the aim of decreasing the propagation time . in any case , this lumped model approximation does by no means set any sharp boundary condition on the working operation of the device . 
even if was notshorter , but rather comparable with the voltage transient , the only consequence would be that the interference effects shall be taken into account .but since the device generates _ all _ the harmonics of the fundamental frequency , some of them will be partially attenuated because of destructive interference , some others will be ( almost ) unchanged because of constructive interference . therefore the output signal may be attenuated at some specific frequencies , remaining unchanged at the others , but the device would still work and can be used if the specific output frequency we want to extract has enough output power . in fig .[ fig : squid_nb_realistic_power_nu1 ] we show the estimated power spectrum generated by a single _ realistic _ array of squids made of nb / alox / nb josephson junctions . by `` realistic '' we mean subject to the fabrication errors discussed in secs .[ sec : results_areas ] and [ sec : results_asymmetries ] , having random ( normally distributed ) areas _ and _ asymmetry parameters .furthermore , we assume them to have a finite loop geometrical inductance ph ( see sec . [sec : results_inductance ] ) . on the other hand, we do not consider any corrections due to their finite junction capacitance since we showed in sec .[ sec : results_capacitance ] that they were completely negligible .from the figure , we notice that this device is still able to provide an output power of about 0.1 nw around 20 ghz ( corresponding to the 20-th harmonics of the driving frequency , see the corresponding black arrow ) .if we compare this to the results of fig .[ fig : squid_nb_inductance_power_nu1 ] , we notice that the power in this frequency range is only slightly reduced , as a consequence of the errors on the areas and the asymmetry parameters . a larger deterioration of the performance - of about two orders of magnitude - is otherwise expected at higher frequency ( around ghz , see the corresponding black arrow ) .nevertheless , the device is still able to produce an output power between and pw in this range , which can be relevant for several applications .all these considerations enforce the message that if the squids of the array can be fabricated with an accuracy of the order of on the areas and of on the asymmetry between the junctions , the expected performance is not altered significantly with respect to the ideal situation for frequencies around ghz .= 10 ph , and standard deviations =0.01 and =0.005 on the areas and the asymmetry parameter , respectively ( the latter centered around ) .the calculation is performed for a chain of nb / alox / nb squids subject to a =1 ghz driving .blue and red symbols represent the odd and even harmonics , respectively , whereas the black arrows emphasize the frequency ranges around ghz and ghz . ] finally , we stress that all our analysis has been carried out at zero temperature , being more focused on the fabrication parasitic effects .the effects of thermal noise were indeed already addressed in ref . for a similar setup made of yttrium barium copper oxide ( ybco ) josephson junctions . in that caseit was shown that its contribution was basically negligible , the signal to noise ratio being of the order of at a temperature of k. hence , we do not expect a finite temperature to alter significantly the results presented in this paper .in summary , we have discussed extensively several parasitic effects on the working operation of the squid - based radiation comb generator originally proposed in ref . . 
under certain conditions, we found that taking into account the finite loop geometrical inductance of the squids has a beneficial impact on the device performance , whereas the fabrication errors ( uncertainties in the squids areas and asymmetries ) tend to decrease it .also , in the range of parameters considered , we showed that a finite junction capacitance does not alter the results , meaning that the device operates always in the overdamped regime .+ when all these effects are taken into account at once , we have estimated that a realistic array of squids made of nb / alox / nb junctions is able to deliver a power of nw around 20 ghz , and of pw around 100 ghz , to a standard load resistance of 50 ohm .this may opens interesting perspectives in the realm of quantum information technology .+ the device has room for optimization by modeling the geometry of the single junctions , the fabrication materials , the driving signal and the array design .for instance , besides squids made of tunneling junction considered in this work , one may investigate devices made of weak - link superconductor - normal metal - superconductor sns junctions , such as nb / hfti / nb josephson junctions .+ finally , the discussed implementation would have the advantage to be built on - chip and integrated in low - temperature superconducting microwave electronics .stimulating discussions with c. altimiras are gratefully acknowledged .the work of r.b . has been supported by miur - firb2013 project coca ( grant no . rbfr1379ux ) .has received funding from the european union fp7/2007 - 2013 under rea grant agreement no .630925 coheat and from miur - firb2013 project coca ( grant no . rbfr1379ux ) .f.g . acknowledges the european research council under the european union s seventh framework program ( fp7/2007 - 2013)/erc grant agreement no .615187-comanche for partial financial support .to test the performance of this radiation generator , we have calculated the power spectrum vs frequency . to this goal , first we have computed the fourier transform of the voltage v()=_0^tdte^it v(t ) . the power spectral density ( psd )is then .finally , the power is calculated by integrating the psd around the resonances ( where is the monochromatic driving frequency ) and dividing for a standard load resistance of 50 ohm .this is the power we would measure at a given resonance frequency with a bandwidth exceeding the linewidth of the resonance .to study the dynamics of the squid phase in sec .[ sec : results_capacitance ] , we have used a downwind finite difference approach to discretize the derivatives in eq . , and the resulting equation implemented numerically is ( in the dimensionless time notation ) : =0,\end{aligned}\ ] ] where is the phase at time , is the reduced flux , is the asymmetry parameter of the squid , is the reduced junction capacitance , and is the dimensionless bias current .in order to estimate the effects of imperfections in the squids fabrication , we have followed a statistical approach .we describe here the procedure adopted in sec .[ sec : results_areas ] , the one in sec .[ sec : results_asymmetries ] being equivalent . given a certain value of the standard deviation ,we have sampled an interval of width by introducing a number of bins .we have then solved the rcsj dynamics for values of [ corresponding to values of areas , according to eq . ]taken as the centers of each bin . the computed voltage versustime has been stored aside . 
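the next paragraph completes the description of this procedure; a minimal python sketch of the precompute-and-lookup strategy is given here, with the grid width, bin count and all names being illustrative choices rather than the values used in the paper.

```python
import numpy as np

def precompute_voltage_table(solve_pulse, sigma, n_bins=201, half_width=4.0):
    """tabulate the single-squid voltage trace on a grid of deviations
       delta in [-half_width*sigma, +half_width*sigma]; solve_pulse(delta)
       is any routine returning v(tau) for that deviation (for instance the
       overdamped rcsj sketch above with the flux scaled by 1 + delta)."""
    centers = np.linspace(-half_width * sigma, half_width * sigma, n_bins)
    table = np.array([solve_pulse(d) for d in centers])
    return centers, table

def array_realization(centers, table, n_squids=50, sigma=0.01, rng=None):
    """one realization of the array: draw gaussian deviations, replace each
       by the nearest precomputed trace, and sum the single-squid voltages."""
    rng = np.random.default_rng() if rng is None else rng
    deltas = rng.normal(0.0, sigma, size=n_squids)
    idx = np.abs(deltas[:, None] - centers[None, :]).argmin(axis=1)
    return table[idx].sum(axis=0)
```

averaging many such realizations then yields the typical array voltage defined below.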
at this stage , to simulate the dynamics of an array , we have generated =50 values of taken from a random gaussian probability distribution with zero mean and standard deviation , and to each one of these we have associated the voltage corresponding to the closest value of , calculated and stored previously . for an array of squids , under the hypothesis of independent squid dynamics ( see sec . [sec : feasibility ] ) the total voltage is simply the sum of all the voltages generated by each squid : v_(t)=_i=1^n v_i(t ) . indeedthe presence of the load , and the fact that it effectively couples the dynamics of the squids , has been taken into account by substituting the shunt resistance with in the rcsj equation , as discussed in secs .[ sec : model ] .finally , this procedure has been iterated for a relatively large ( =10000 ) number of realizations of different arrays , and the typical voltage of an array has been defined as : v_(t)=_j v^(j)(t ) , where the index labels the -th realization of an array .we have done this , instead of simulating the dynamics of _ all _ the squids of _ each _ array many times , in order to reduce the computational burden , otherwise enormous .t. udem , r. holzwarth and t. w. hnsch , nature * 416 * , 233 ( 2002 ) .s. t. cundiff and j. ye , rev .phys . * 75 * , 325 ( 2003 ) .p. delhaye , a. schliesser , o. arcizet , t. wilken , r. holzwarth and t. j. kippenberg , nature * 450 * , 1214 ( 2007 ) .t. hnsch and h. walther , rev .phys . * 71 * , s242 ( 1999 ) .a. t. a. m. de waele , and r. de bruyn ouboter , physica * 41 * , 225 ( 1969 ) .a. barone , and g. patern , _ physics and applications of the josephson effect _( john wiley & sons , new york , 1982 ) .p. solinas , s. gasparinetti , d. golubev , and f. giazotto , scientific reports * 5 * , 12260 ( 2015 ) .p. solinas , r. bosisio , and f. giazotto , j. appl . phys . * 118 * , 113901 ( 2015 ) . m. tinkham , _ introduction to superconductivity _ ( courier dover publications , 2012 ) .f. giazotto , m. j. martnez - prez , and p. solinas , phys .b * 88 * , 094506 ( 2013 ) .f. giazotto , and m. j. martnez - prez , nature * 492 * , 401 ( 2012 ) .m. j. martnez - prez , and f. giazotto , nat .commun . * 5 * , 3579 ( 2014 ) .r. gross , and a. marx , lecture on `` applied superconductivity '' , http://www.wmi.badw.de / teaching / lecturenotes/. v. patel , and j. lukens , ieee trans .. supercond .* 9 * , 3247 ( 1999 ) .d. hagedorn , r. dolata , f .-buchholz , and j. niemeyer , physica c * 372 * , 7 ( 2002 ) .r. wlbing , j. nagel , t. schwarz , o. kieler , t. weimann , j. kohlmann , a. b. zorin , m. kemmler , r. kleiner , and d. koelle , appl .. lett . * 102 * , 192601 ( 2013 ) .a. blais , r .- s .huang , a. wallraff , s. m. girvin , and r. j. schoelkopf , phys .a * 69 * , 062320 ( 2004 ) .a. wallraff , nature * 431 * , 162 ( 2004 ) .j. koch , t. m. yu , j. gambetta , a. a. houck , d. i. schuster , j. majer , a. blais , m. h. devoret , s. m. girvin , and r. j. schoelkopf , phys . rev .a * 76 * , 042319 ( 2007 ) .
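The array-averaging and power-extraction procedure described in the appendices above lends itself to a compact numerical sketch. The Python fragment below is illustrative rather than the authors' code: the array `v_lookup` stands for the precomputed single-SQUID voltage traces (one per bin of the fabrication-error parameter), and the defaults for the number of SQUIDs per array, the number of array realizations, and the integration bandwidth are placeholders chosen here for illustration.

    import numpy as np

    def ensemble_voltage(v_lookup, bin_centers, sigma, n_squids=50,
                         n_arrays=10000, rng=None):
        """Average total array voltage over many random realizations.

        v_lookup    : (n_bins, n_t) precomputed single-SQUID voltage traces,
                      one per bin of the fabrication-error parameter
        bin_centers : (n_bins,) centers of the error bins
        sigma       : standard deviation of the zero-mean Gaussian errors
        """
        rng = np.random.default_rng() if rng is None else rng
        v_mean = np.zeros(v_lookup.shape[1])
        for _ in range(n_arrays):
            errors = rng.normal(0.0, sigma, size=n_squids)
            # associate each sampled error with the closest precomputed bin
            idx = np.abs(errors[:, None] - bin_centers[None, :]).argmin(axis=1)
            v_mean += v_lookup[idx].sum(axis=0)      # V_tot(t) = sum_i v_i(t)
        return v_mean / n_arrays

    def harmonic_power(v, dt, f_drive, n_harmonics=100, bandwidth=None,
                       r_load=50.0):
        """Integrate the power spectral density around each harmonic of the
        driving frequency and convert to the power delivered to r_load."""
        n = len(v)
        freqs = np.fft.rfftfreq(n, dt)
        v_ft = np.fft.rfft(v) * dt                   # approximate Fourier transform
        psd = np.abs(v_ft) ** 2 / (n * dt)           # one-sided, up to a constant
        bw = bandwidth if bandwidth is not None else f_drive / 2
        powers = []
        for k in range(1, n_harmonics + 1):
            mask = np.abs(freqs - k * f_drive) < bw / 2
            powers.append(np.trapz(psd[mask], freqs[mask]) / r_load)
        return np.array(powers)

The returned per-harmonic powers can then be inspected in the frequency ranges of interest, as done for the spectra discussed above.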
we study several parasitic effects on the implementation of a josephson radiation comb generator (jrcg) based on a dc superconducting quantum interference device (squid) driven by an external magnetic field. this system can be used as a radiation generator, similarly to what is done in optics and metrology, and allows one to generate up to several hundreds of harmonics of the driving frequency. first we examine how a finite loop geometrical inductance and a finite junction capacitance in each squid alter the operation of the device. then we estimate the effect of imperfections in the fabrication of an array of squids, which are an unavoidable source of errors in practice. we show that the role of the junction capacitance is in general negligible, whereas the geometrical inductance has a beneficial effect on the performance of the device. errors on the areas and on the junction resistance asymmetries may deteriorate the performance, but their effect can be limited to a large extent by a suitable choice of fabrication parameters.
long - term nasa plans for placing scientific equipment on the moon face uncertainty regarding the environmental impact on such devices as hard information about the lunar environmental effect on scientific instruments has not been available . from a quantitative analysis of the performance of the laser reflectors , we find clear evidence for degradation of the retroreflectors , and note that degradation began within one decade of placement on the lunar surface . from 19691985 ,the mcdonald observatory 2.7 m smith telescope ( mst : * ? ? ?* ) dominated lunar laser ranging ( llr ) , using a 634 nm ruby laser . starting around 1985 ,the mcdonald operation moved away from the competitively - scheduled mst to a dedicated 0.76 m telescope designed to perform both satellite and lunar laser ranging , becoming the mcdonald laser ranging system ( mlrs : * ? ? ?in 1984 , other llr operations began at the observatoire de la cte dazur ( oca : * ? ? ? * ) in france and at the haleakala site in hawaii , that used 1.5 m and 1.74 m telescopes , respectively .these systems all operate nd : yag lasers at 532 nm . in 2006 ,the apache point observatory lunar laser - ranging operation ( apollo : * ? ? ?* ) began science operations using the 3.5 m telescope and a 532 nm laser at the apache point observatory in new mexico .primarily geared toward improving tests of gravity , apollo is designed to reach a range precision of one millimeter via a substantial increase in the rate of return photons .the large telescope aperture and good image quality at the site , when coupled with a single - photon detector array , produces return photon rates from all three apollo reflectors that are about 70 times higher than the best rates experienced by the previous llr record - holder ( oca ) .consequently , apollo is able to obtain ranges through the full moon phase for the first time since mst llr measurements ceased around 1985 .we find that the performance of the reflectors themselves degrades during the period surrounding full moon . in this paperwe describe the full - moon deficit , report its statistical significance , and eliminate the possibility that it results from reduced system sensitivity at full moon .we show that this deficit began in the 1970 s , and examine the significance of successful total - eclipse observations by oca and mlrs .we see an additional factor - of - ten signal deficit that applies at all lunar phases , but this observation requires a detailed technical evaluation of the link , and is deferred to a later publication .we briefly discuss possible mechanisms that might account for the observed deficits .apollo observing sessions typically last less than one hour , with a cadence of one observing session every 23 nights . for a variety of practical reasons , apollo observationsare confined to 75% of the lunar phase distribution , from to , where is the synodic phase relative to new moon at . within an observing session , multiple short `` runs ''are carried out , where a run is defined as a contiguous sequence of laser shots to a specific reflector .typical runs last 250 or 500 seconds , consisting of 5000 or 10000 shots at a 20 hz repetition rate .each shot sends about photons toward the moon , and in good conditions we detect about one return photon per shot . 
if the signal level acquired on the larger apollo 15 reflector is adequate , we cycle to the other two apollo reflectors in turn , sometimes completing multiple cycles among the reflectors in the allotted time .when the lunokhod 2 reflector is in the dark , we range to it as well .its design leads to substantial signal degradation from thermal gradients , rendering it effectively unobservable during the lunar day .figure [ fig : apollo - rate ] displays apollo s return rates , in return photons per shot , for the apollo 15 reflector as a function of lunar phase , with 338 data points spanning 2006 - 10 - 03 to 2009 - 06 - 15 .signal rate is highly dependent on atmospheric seeing ( turbulence - related image quality ) . when the seeing is greater than 2 arcsec , the signal rate scales like the inverse fourth power of the seeing scale .variability in seeing and transparency dominate the observed spread of signal strength , resulting in at least two orders - of - magnitude of variation . )is apparent .the vertical scatter is predominantly due to variable atmospheric seeing and transparency .the dotted line across the top is a simple ad - hoc model of the signal deficit used to constrain background suppression in fig .[ fig : background ] .the dotted line across the bottom indicates the background rate in a 1 ns temporal window , against which signal identification must compete.[fig : apollo - rate],width=336 ] below about 0.001 photons per shot ( pps ) , we have difficulty identifying the signal against photon background and detector dark rate . a typical peak rate across phases is pps , with the best runs reaching pps ._ the key observation is the order - of - magnitude dip in signal rate in the vicinity of full moon _, at .the best return rates at full moon were associated with pristine observing conditions that would have been expected to deliver photon per shot at other phases , but only delivered 0.063 pps at full moon .thus the deficit is approximately a factor of 15 .the deficit appears to be confined to a relatively narrow range of around the full moon , and is not due to uncharacteristically poor observing conditions during this period . 
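Because the observed spread in the return rate is dominated by atmospheric seeing and transparency, it can be useful to normalize each run to a common seeing before comparing phases. The short sketch below applies the inverse-fourth-power scaling quoted above for seeing worse than 2 arcsec; the function and its reference value are illustrative assumptions, not part of the actual APOLLO reduction.

    def normalize_rate_for_seeing(rate_pps, seeing_arcsec, ref_arcsec=2.0):
        """Rescale an observed return rate to the reference seeing, using the
        empirical inverse-fourth-power dependence quoted for seeing > 2 arcsec.
        Below the reference value the rate is left unchanged (no scaling is
        claimed there)."""
        if seeing_arcsec <= ref_arcsec:
            return rate_pps
        return rate_pps * (seeing_arcsec / ref_arcsec) ** 4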
a kolmogorov - smirnov ( k - s ) test confirms the improbability that random chance could produce a full - moon dip as large as that seen in fig .[ fig : apollo - rate ] .there is % chance that the measurements within of full moon were drawn from the same distribution as the out - of - window points .similar tests using -wide windows centered away from full moon do not produce comparably low probabilities .additional evidence for the full - moon deficit is provided by the instances of failure to acquire a signal .failure can occur for a variety of reasons _ not _ related to the health of the lunar arrays : poor seeing ; poor atmospheric transparency ; inaccurate telescope pointing ; optical misalignment between transmit and receive beams ; time - of - flight prediction error ; instrumental component failure .but none of these causes depend on the phase of the moon .we therefore plot a histogram of apollo 15 acquisition failures as a function of lunar phase in fig .[ fig : failure ] .failures due to known instrumental problems were removed from this analysis , as were failures due to causes such as pointing errors that were ultimately remedied within the session .the bars are shaded to reflect observing conditions : light gray indicates poor conditions ( seeing or transparency ) ; medium gray indicates medium conditions ; and black indicates excellent observing conditions , for which the lack of signal is especially puzzling .note the cluster of failures centered around full moon .the phase distribution of run attempts is roughly uniform .bins of failed run attempts ( left - hand scale ) on apollo 15 during the period from 2006 - 10 - 03 to 2009 - 06 - 15 , excluding those due to known technical difficulties .black indicates good observing conditions , medium gray corresponds to medium conditions , and light gray reflects bad conditions .the line histogram shows the phase distribution of all run attempts in 15 bins ( right - hand scale).[fig : failure],width=336 ] the other apollo reflectors are similarly impacted at full moon .on the few occasions when the full - moon apollo 15 signal was strong enough to encourage attempts on the other reflectors , we found that the expected 1:1:3 ratio between the apollo 11 , 14 , and 15 rates is approximately preserved . in no casehave we been able to raise a signal on other reflectors after repeated failures to acquire signal from apollo 15 .could the full - moon deficit be explained by paralysis of our single - photon avalanche photodiode ( apd ) detectors in response to the increased background at full moon ?figure [ fig : background ] indicates that apollo sees a maximum background rate at full moon of avalanche events per 100 ns detection gate across the apd array in agreement with throughput calculations .therefore , a typical gate - opening has a % chance that _one _ of the 11 consistently - functioning avalanche photodiode elements ( out of 16 ) will be rendered blind _prior _ to the arrival of a lunar photon halfway into the 100 ns gate .the sensitivity for the entire array to signal return photons therefore remains above 97% even at full moon .the background rates presented here are extrapolated from a 20 ns window in the early part of the gate , before any lunar return signal . 
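The two-sample Kolmogorov-Smirnov comparison described above is straightforward to reproduce once the per-run return rates and lunar phases are tabulated. A minimal sketch, assuming full moon at a synodic phase of 180 degrees and an illustrative window half-width (the value used in the actual analysis is not reproduced here):

    import numpy as np
    from scipy.stats import ks_2samp

    def full_moon_ks(phase_deg, rate_pps, half_width_deg=15.0):
        """Two-sample K-S test: are the return rates measured within
        half_width_deg of full moon drawn from the same distribution as the
        rates at all other phases?"""
        phase_deg = np.asarray(phase_deg, dtype=float)
        rate_pps = np.asarray(rate_pps, dtype=float)
        # angular distance from full moon (phase = 180 deg from new moon)
        d = np.abs((phase_deg - 180.0 + 180.0) % 360.0 - 180.0)
        in_window = d < half_width_deg
        stat, p_value = ks_2samp(rate_pps[in_window], rate_pps[~in_window])
        return stat, p_value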
.apollo s detector clearly has high sensitivity at full moon.[fig : background],width=336 ] the apollo 15 site is near the lunar prime meridian , so that its illumination curve is roughly symmetric about full moon .small - aperture photometry measurements by and more recently by show that the surface brightness increases roughly linearly on approach to full moon , with an additional % enhancement very near full moon .a linear illumination curve is provided in fig .[ fig : background ] for reference .apollo clearly sees the expected background enhancement at full moon .if _ any _ phenomenon suppressed apd sensitivity to laser returns from the reflector array , it would _ likewise _ suppress sensitivity to the background photons , as suggested by the dashed curve in fig .[ fig : background ] .there is no hint of detector suppression in the background counts , so we conclude that the diminished return rate observed near full moon constitutes a genuine reduction in signal returning from the reflector .it is natural to ask if we can determine the timescale over which the full - moon deficit developed .mst llr data reveal that from 1973 to 1976 there was no indication of a full - moon deficit .figure [ fig : old_mcd ] shows the photon count per run for two periods of the mst operation , where a run typically consisted of 150200 shots at a rate of 20 shots per minute .a full - moon deficit began to develop in the period from 19771978 ( not shown ) , and is markedly evident in the period from 19791984but it appears somewhat narrower than the deficit now observed by apollo .k - s tests indicate that the probability that the distribution of photon counts between is the same as that outside the window is 18% , 0.9% , and 0.03% for the three periods indicated above , in time - order . in the last period , roughly one decade after placement of the apollo 15 array in 1971 , the deficit was approximately a factor of three .the mst apparatus did not change in a substantial way between 1973 to 1984 . because the mst s larger beam divergence and receiver aperture made it less sensitive to atmospheric seeing .[ fig : old_mcd],width=623 ] by 1985 , llr was performed only on smaller telescopes , and attempts at full moon ranging subsided except during four total lunar eclipses .the 35 eclipse range measurements by oca and mlrs add an interesting twist : the return strength during eclipse is statistically indistinguishable from that at other phases and not compatible with an order - of - magnitude signal deficit .existing data do not probe the time evolution of reflector efficiency into and out of the total eclipse , but it appears that the arrays perform normally as soon as 15 minutes into the totality .apollo could not observe the eclipses of 2007 august 28 and 2008 february 21 because of bad weather , but will have a chance to follow a complete total eclipse on 2010 december 21 .in addition to the full - moon deficit , analysis of apollo s return rate reveals an overall factor - of - ten signal deficit at all lunar phases .supporting evidence requires more analysis than can be covered here . 
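The estimate of detector sensitivity at full moon discussed above can be reproduced with a short Poisson calculation. The sketch below assumes that background avalanches are uniformly distributed over the gate and shared equally among the live elements; the default background rate is a placeholder (the measured full-moon value should be used), and the whole model is our reconstruction of the estimate rather than the APOLLO analysis itself.

    import numpy as np

    def apd_array_sensitivity(bg_events_per_gate=0.5, n_live_elements=11,
                              signal_fraction_of_gate=0.5):
        """Estimate APD-array sensitivity to a lunar return photon in the
        presence of background.

        bg_events_per_gate      : mean background avalanches per gate over the
                                  array (assumed placeholder value)
        n_live_elements         : consistently functioning APD elements
        signal_fraction_of_gate : arrival time of the lunar photon within the gate
        """
        # mean background events per element before the signal arrives
        mu = bg_events_per_gate * signal_fraction_of_gate / n_live_elements
        p_element_blind = 1.0 - np.exp(-mu)
        # probability that at least one element has fired before the signal
        p_any_blind = 1.0 - (1.0 - p_element_blind) ** n_live_elements
        # a return photon landing on a given element is detected only if that
        # element has not already fired on background
        array_sensitivity = 1.0 - p_element_blind
        return p_any_blind, array_sensitivity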
in brief , the dominant contributors to the photon throughput loss arise from beam divergence on both the uplink and downlink .we can measure the former by deliberately scanning the beam across the reflector on the moon , confirming a seeing - limited beam profile .we additionally measure the atmospheric seeing via the spatial distribution of the return point source on the detector array .the downlink divergence is set by diffraction from the corner cubes , verified by measurements of the actual flight cubes .receiver throughput losses , which constitute a small fraction of the total loss , were measured by imaging stars or the bright lunar surface on the apd , and agree well with a model of the optical and detector system .careful analysis does not account for apollo s missing factor of ten in signal return , while early ranging data from mst do agree with the anticipated return rate .further evidence for the damaging effects of the lunar environment comes from the lunokhod 2 reflector . in the first six months of lunokhod 2 observations in 1973 ,its signal was 25% stronger than that from the apollo 15 array .today , we find that it is 10 times weaker . the lunokhod corner cubes are more exposed than the recessed apollo cubes , and unlike the apollo cubes , have a silver coating on the rear surfaces .both factors may contribute to the accelerated degradation of the lunokhod array relative to the apollo arrays .the full - moon deficit , the overall signal shortfall experienced by apollo , and the relative decline in performance of the lunokhod array all show that the lunar reflectors have degraded with time. it may be possible to explain these observations with a single mechanism that causes both an optical impairment at all phases , and a thermal influence near full moon that abates during eclipse .one possibility is alteration of the corner - cube prisms front surfaces either via dust deposition or surface abrasion from high - velocity impact ejecta or micrometeorites .alternatively , any material coating on the back of the corner cubes perhaps originating from the teflon mounting rings could impact performance of the apollo reflectors via frustration of total internal reflection ( tir ) and absorption of solar energy .bulk absorption in the glass could also produce the observed effects .the impact on reflection efficiency at all phases from each of these possibilities is obvious .the full - moon effect would arise from an enhancement of solar energy absorption by the corner - cube prisms and their housings defeating the careful thermal design intended to keep the prisms nearly isothermal . because the uncoated apollo corner cubes work via tir ,their rejection of solar flux should be complete when sunlight arrives within 17 of normal incidence .but the temperature uniformity within the corner cubes is upset either by absorption of energy at the cube surfaces , or by defeat of tir via scattering which results in energy deposition in the pocket behind the cubes , heating the cubes from the rear .temperature gradients in a corner - cube prism produce refractive index gradients , generating wavefront distortion within the prism .a 4 k gradient between the vertex and front face of the apollo corner cubes reduces the peak intensity in their far - field diffraction pattern by a factor of ten ( * ? ? ?apollo corner cubes are recessed by half their diameter in a tray oriented toward the earth . 
near full moon ,the weathered corner cubes are most fully exposed to solar illumination , maximizing the degradation . during eclipses, the reflector response may be expected to recover on a short timescale , governed by the minute thermal diffusion timescale for 38 mm diameter fused silica corner - cube prisms .while any of the proposed mechanisms could account for the observations , objections can be raised to each of them .bulk absorption is not expected in the suprasil fused silica used for the apollo cubes after 40 years of exposure .micrometeorite rates on the lunar surface gleaned from study of return samples , and summarized in , suggest the fill - factor of craters on an exposed surface to be after 40 years , dominated by craters in the 10100 m range .opportunities for a contaminant coating on the rear surfaces of the corner cubes are limited given that the only substance within the closed aluminum pocket besides the glass corner cube is the teflon support ring .moreover , the lunokhod array would not be subject to the same rear - surface phenomena as the apollo cubes , yet shows an even more marked degradation .dust is perhaps the most likely candidate for the observed degradation .astronaut accounts from the surface and from lunar orbit , as well as a horizon glow seen by surveyor 7 , suggest the presence of levitated dust possibly to altitudes in excess of 100 km , for which a lofting mechanism has been suggested by .the dust monitor placed on the lunar surface by the apollo 17 mission measured large fluxes of dust in the east - west direction around the time of lunar sunrise and sunset consistent with the electrostatic charging mechanisms described by .the main difficulty with the dust explanation is that electrostatic charging alone is not strong enough to liberate dust grains from surface adhesion .but mechanical disturbance seeded by micrometeorite and impact ejecta activity may be enough to free the already - charged grains . whether or not dust is responsible ,the supposed health of the reflector arrays has been used to argue that dust dynamics on the surface of the moon are of minimal importance .our observations of the reduced reflector performance invalidate the invocation of reflector health in this argument .the only other relevant data for the environmental impact on optical devices on the lunar surface comes from the surveyor 3 camera lens , retrieved by the apollo 12 mission .after 945 days on the surface , the glass cover of the camera lens had dust obscuring an estimated 25% of its surface though it is suspected that much of this was due to surveyor and apollo 12 landing and surface activities .clearly , the ascent of the lunar modules could result in dust deposition on the nearby reflectors .but the effect reported here became established after several years on the lunar surface ( e.g. , fig . [fig : old_mcd ] ) , and is therefore not related to liftoff of the lunar modules .the evidence for substantially worsened performance of the lunar reflectors over time makes it important to consider the long - term usefulness of next - generation devices proposed for the lunar surface . finding the mechanism responsible for the observed deficits is a high priority . 
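The recovery timescale invoked above can be checked with a one-line estimate. Taking the thermal diffusivity of fused silica as roughly 8e-7 m^2/s (a textbook value, assumed here rather than quoted in the text) and the 38 mm cube size as the relevant length scale gives a diffusion time of order ten minutes to half an hour, depending on whether the radius or the diameter is used, which is consistent with recovery early in an eclipse totality.

    def thermal_diffusion_time(length_m=0.038, diffusivity_m2_s=8e-7):
        """Characteristic thermal diffusion time t ~ L^2 / alpha for a
        fused-silica corner cube; alpha ~ 8e-7 m^2/s is an assumed value."""
        return length_m ** 2 / diffusivity_m2_s

    # full 38 mm diameter: ~30 minutes; 19 mm radius: ~8 minutes
    print(thermal_diffusion_time(0.038) / 60.0, thermal_diffusion_time(0.019) / 60.0)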
thermal simulations or testing deliberately altered corner - cube prisms in a simulated lunar environmentwould likely expose the nature of the problem with the apollo arrays .especially important would be to differentiate between permanent abrasion versus removable dust .the results could impact the designs of a wide variety of space hardware especially next - generation laser ranging reflectors , telescopes , optical communication devices , or equipment dependent on passive thermal control .we thank doug currie , eric silverberg , and kim griest for comments .apollo is indebted to the staff at the apache point observatory , and to suzanne hawley and the university of washington astronomy department for apollo s telescope time .we also acknowledge the technological prowess of apollo - era scientists and engineers , who designed , tested , and delivered the first functional reflector array for launch on apollo 11 within six months of receipt of the contract .apollo is jointly funded by nsf and nasa , and some of this analysis was supported by the nasa lunar science institute as part of the lunar consortium ( nna09db30a ) .arthur d. little , inc .laser ranging retro - reflector array for the early apollo scientific experiments package _ , available at : link : www.physics.ucsd.edu/~{}tmurphy / apollo / doc / adl.pdf[www.physics.ucsd.edu/~\{}tmurphy / apollo / doc / adl.pdf ] farrell , w.m . , stubbs , t.j ., vondrak , r.r . ,delory , g.t . , & halekas , j.s . , 2007 .complex electric fields near the lunar terminator : the near - surface wake and accelerated dust . _ geophysical research letters _ , * 34 * , l14201 ( 5 pages ) johnson , s.w . ,taylor , g.j . , & wetzel , j.p . , 1992 .environmental effects on lunar astronomical observatories ._ lunar bases and space activities of the 21 century ii _mendell , w.w .et al . , 329335
forty years ago, apollo astronauts placed the first of several retroreflector arrays on the lunar surface. their continued usefulness for laser ranging might suggest that the lunar environment does not damage optical devices. however, new laser ranging data reveal that the efficiency of the three apollo reflector arrays is now diminished by a factor of ten at all lunar phases and by an additional factor of ten when the lunar phase is near full moon. these deficits did not exist in the earliest years of lunar ranging, indicating that the lunar environment damages optical equipment on the timescale of decades. dust or abrasion on the front faces of the corner-cube prisms may be responsible, reducing their reflectivity and degrading their thermal performance when exposed to face-on sunlight at full moon. these mechanisms can be tested using laboratory simulations and must be understood before designing equipment destined for the moon.
impact events violent collisions of objects with targets are ubiquitous in nature and industry .they range from the impact of asteroids on planet surfaces , raindrops falling onto soil or sea , to the interaction of metal droplets and melts in the metallurgical industry .air entrapped during impact events may be beneficial allowing for the aeration of the sea or detrimental , when gas bubbles are trapped in liquid steel . in both casesit is crucial to understand the processes that take place below the surface of the target .when an object impacts on a deep layer of water a splash is formed and a few milliseconds later a jet shoots out of the water . because of the transparent nature of water it is possible to directly observe what happens below the surface : while the intruder moves through the water layer , an air cavity is formed .the walls of the cavity move toward each other due to hydrostatic pressure . at the momentthe cavity walls collide two jets are formed , one going up and one going down .the air - pocket entrapped below the pinch - off point moves down with the intruder , detaches , and then slowly rises to the surface . for impacts onto non - transparent media such as sand direct imagingis not feasible . upon impact of a steel ball on a bed of fine , very loose sand a splash and jetappear above the surface that are very similar to those observed for water .the questions that arise are : what happens below the surface of the granular bed and to what extent is this similar to the sequence of events in water ? to answer these questions we must `` look '' inside the sand bed , for which we use a unique , custom - built high - speed x - ray tomographic setup .previously , high speed x - ray imaging has been used by royer _ et al ._ in a series of pioneering papers , using a very different setup that makes use of a parallel x - ray beam . due to restrictions on the x - ray apparatus used , those experiments were conducted in a setup that is smaller than the setups used in which can lead to unwelcome boundary effects .for the same reason , and similarly with possible side - effects , the silica particles ( sand ) of were substituted by boron carbide particles . in this paperwe present impact experiments done in a custom - made high - speed x - ray tomography setup which is large enough to allow for the direct study of the experiments described in , _ i.e. _ , in the original size and using the same silica sand bed . in fact , we will show that the jet originates from the pinch - off point created by the collapse of the air cavity formed behind the penetrating ball .in addition , we measure the density changes in front of the sphere and in the pinch - off region during the collapse of the bed . finally , we observe how the entrapped air bubble rises through the sand bed . in the next sectionthe experimental setup will be introduced , whereafter we describe three different ways to analyze the data .first the air - cavity and jet formation will be reconstructed in ` cavity reconstruction ' . 
in the ` rising air bubble 'section we will take a close look into the shape and rising mechanism of the air bubble .last , in ` local packing fraction ' , the density changes of the sand around the ball and the air - cavity are explored .a cylindrical container that is 1 m high and with an inner diameter of 15 cm is filled with sand until a certain height ( see fig .[ xray : fig : setup ] ) .the bottom of the container consists of a porous material to allow for fluidization of the sand and the container is fully closed .an electromagnet is suspended from a rod , such that a metal ball ( diameter cm ) can be released from different heights . before every experimentthe sand is fluidized to destroy the existing network of contact forces , and subsequently the airflow is turned off slowly to allow the sand to settle into a very loose state .the height of the sand bed above the plate after fluidization ( ) and the release height ( ) are measured .the size distribution of the individual sand grains ( with density g/ ) is between 20 m and 40 m and the average packing fraction after fluidization is 0.41 . for this research several sets of experiments were carried out with different experimental conditions .the key parameter we varied is the impact velocity and can be described using the froude number , or equivalently , .varying the froude number from 9 to 92 resulted in the same qualitative behavior .therefore , in this paper , we do not extensively discuss the influence of this parameter , but merely state which experimental condition is used .the container is placed in an unique custom - built x - ray setup ( also shown in fig .[ xray : fig : setup ] ) , which consist of three powerful x - ray sources ( _ yxlon _ , 133 - 150kev , 4 amp . ) with three arrays of detectors placed in a triangular configuration . a single detector bank consists of two horizontal rows ( spaced 40 mm apart ) of 32 detectors that are positioned on an arc such that the distance between the source and the detectors is constant at 1386 mm .each detector consists of an cdwo scintillation crystal ( mm ) coupled to a photo diode and the data is collected with a sampling frequency of 2500 hz .two sets of experiments were carried out : one in which the container was positioned in the center of the x - ray setup and measurements are taken by all three detector banks , and a second set of experiments where the container was placed close to one of the sources ( at a distance of 275 mm ) for enhanced spatial resolution . in this last set measurements are taken from only one detector bank .each detector measures the attenuation of the x - rays on the path between the source and the detector .the attenuation of single wavelength x - rays is described by the lambert - beer law , which states that there is a logarithmic dependence between the number of registered photons per second and the absorption coefficient of the specific material times the path length . in this problemthe only parameters that change are the path length through sand and the path length through air , where the latter can be neglected in this setup due to the very low absorption of x - rays by air .every single detector is calibrated such that it gives the length of prepared sand on the path between the source and the detector , . 
as a first point in the calibration we used a fully fluidized bed .note that because the container has a circular cross - section the length of the path through the sand varies for different detectors .next , we place a rectangular container filled with air inside the bed in the path of the rays , and again prepare the bed .this changes the amount of sand in - between the x - ray source and the detectors .the boxes are made of very thin plastic to minimize their influence on the x - ray signal .when the exact position of the containers is known it is possible to calculate the equivalent path length ( _ i.e. _ , the length of the path the x - ray travels through the sand , as calculated from the measured signal ) for each detector .how can we reconstruct what happens inside the sand when a ball impacts ? with the setup described above we measure the response of the bed in one horizontal cross - section as a function of time . because we are interested in the complete cavity shape within the bed, the experiment will have to be repeated while measuring at different heights , .the results can later be stitched together .this method requires that the experiment is very reproducible . to check this we first examine the center detector signals for several repetitions of the experiment at an average depth within the sand while the ball penetrates the bed .the signals of the center detector for different experiments at a fixed height are shown in fig .[ xray : fig : reproducibility]a . on the vertical axis the change of the equivalent path length ( _ i.e. _ , compared to the situation before impact )is plotted during the impact .when the ball passes through the measurement plane the signal of the central detectors drops due to the higher absorption coefficient for x - rays of metal compared to sand .this leads to an increase of ( ) .immediately after the ball passes , the signal becomes negative ( ) , indicating that in the path of the ray there is less sand than would fit in the container , _e.g. _ , as would happen when an air cavity has formed in the wake of the ball .this implies that there is more air in the sand at this height , but it does not reveal how this air is distributed .the bed may have become very loose such that the air is evenly distributed , or the air may be concentrated in the center as an air cavity . some time latera negative peak again suggests the presence of air ( ) .schematic of the setup used in the x - ray experiments .the setup consists of a container filled with very fine sand . 
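The per-detector calibration described above, with the fully fluidized bed as one reference and the same bed containing a thin-walled air-filled box of known extent as the other, amounts to a two-point fit of the Lambert-Beer law. The following sketch, with hypothetical variable names, shows how a measured count rate would then be converted to an equivalent length of prepared sand; it is a minimal reading of the procedure, not the actual calibration code.

    import numpy as np

    def calibrate_detector(counts_full_bed, counts_with_box,
                           path_full_bed_m, air_gap_m):
        """Two-point Lambert-Beer calibration for one detector.

        counts_full_bed : count rate with the prepared bed only
        counts_with_box : count rate with the known air-filled box in the beam
        path_full_bed_m : geometric chord length through the sand for this ray
        air_gap_m       : length of the beam path replaced by air in the box
        Returns (mu, log_i0): effective attenuation coefficient of the prepared
        sand and the logarithm of the unattenuated intensity.
        """
        # ln(I) = ln(I0) - mu * l_sand for the two known sand path lengths
        l1, l2 = path_full_bed_m, path_full_bed_m - air_gap_m
        y1, y2 = np.log(counts_full_bed), np.log(counts_with_box)
        mu = (y2 - y1) / (l1 - l2)
        log_i0 = y1 + mu * l1
        return mu, log_i0

    def equivalent_path_length(counts, mu, log_i0):
        """Convert a measured count rate to an equivalent sand path length."""
        return (log_i0 - np.log(counts)) / mu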
near the bottom a porous plateis mounted such that air can be blown in , fluidizing the sand .a ball is dropped from various heights using an electromagnet into a loosely settled bed .the setup is placed in a custom - made tomographical x - ray device consisting of 3 x - ray sources and 6 arrays of 32 detectors : opposite to each x - ray source two arrays of detectors are placed in one detector bank ( only one bank is shown).,width=325 ] the measured signal of the center detector as a function of time for four different realizations of an impact experiment using the same measurement height ( =8 cm below the surface , ) .two of the measurements are recorded with the upper detector bank and two measurements with the lower detector bank .when the ball passes through a ray the signal becomes higher , whereas an air cavity accounts for a lower signal .the first part of the signal , which corresponds to the passing of the ball ( ) , cavity creation , and cavity collapse ( ) , is very reproducible .the second part of the signal , corresponding to the rising air bubble ( ) , shows poor reproducibility . the measured signals of all the sensors of one detector array measured in a single experiment at one fixed height , . the number of detectors that see the ball ( i ) or the air cavities ( ii and iii ) provides an estimate of the size of the object .similar - sized objects that are visible in the signal for longer time move with a lower velocity through the measurement plane ., width=325 ] the different curves show four distinct realizations of the experiments all recorded at the same measurement plane in the sand .two of them are measured with the upper detector row and two of them with the lower array of detectors .the first part is very reproducible , which can be concluded from the fact that the equivalent path lengths and the duration of the peaks are equal .the second part , 200 ms to 400 ms after impact , is less reproducible .the measured values are similar , but the shape and timing of the peak are very different among different experiments. from all of the above we can deduce that it must be possible to accurately reconstruct the impact within the sand bed , up to a certain amount of time after the ball has impacted ( ms ) .this timespan must at least be sufficient to image the formation of the jet , judging from the time scale ( ms ) on which the latter forms .since data is available from an entire array of detectors it is possible to obtain information about sizes and positions .the signals from the different detectors of one of the arrays are plotted above each other in fig .[ xray : fig : reproducibility]b .the number of sensors that detect the ball ( the positive part of the signal ) reflect the width of the ball .the negative signals are concentrated in the center of the container indicating that the additional air exists in the form of an air cavity rather than somehow dispersed through the sand bed .these negative signals are found immediately after the ball passes ( ) showing that the air cavity is attached to the ball . since a similar number of detectors`` see '' both the cavity and the ball , the air cavity must have a width similar to that of the ball . 
the second air cavity ( ) , the signals of which arrive at the detectors after a considerable delay ,is visible in more sensors than the ball , demonstrating a larger size of this second cavity , which can be interpreted as a detached air bubble rising through the sand bed .note that an object moving at a lower velocity will have a longer x - ray signal duration because it is longer in field of view of the detector .the fact that the signal duration of the air bubble is longer than that of the air cavity does not necessarily imply that the air bubble is bigger ; the magnitude of the signal however does give information about the size . a series of plots at different times after the ball has impacted onto the sand bed at for .the grey - scale value represents the normalized signal . for each plot, the horizontal axis displays the signals of the different detectors and the vertical axis repetitions of the experiment with cross - sections taken at different depths in the bed .we clearly see the cavity first being formed and subsequently closing , the resulting pinch - off , the formation of the jet , and finally the entrapment of an air bubble in the sand . in the next plot the air bubble detaches from the ball and slowly rises to the surface . tomographic reconstruction of a single horizontal cross - section through the bed , see red dashed line in top figure , at 4 different times for . from left to right : a measurement plane through the center of the ball , through the air cavity immediately behind the ball , the air cavity close to the collapse , and a cross - section through the rising air bubble . ] in fig .[ xray : fig : density]a - f the change in equivalent path length is plotted as a grey scale value ( white for positive and black for negative ) for different times during the experiment .the pixels in each row indicate the signal of the different detectors at a single height , whereas the different rows correspond to experiments done at different depths in the bed .this gives a first indication of what happens inside the sand bed .as the ball moves through the sand an air cavity behind the ball is generated .this air cavity grows while the ball moves and then starts to collapse under the influence of the lithostatic pressure ( i.e. , the `` hydrostatic pressure '' due to the mass of grains above that point ) in the bed . when the walls of the cavity touch , a jet shoots upward and an air bubble is entrained .the air bubble moves down with the ball and after it detaches it slowly rises to the surface . from this analysisit is clear that the events in are highly reproducible , whereas the randomness and irregularity in the last two plots reflects that the rising of the air bubble is not reproducible at all .the size of the latter varies considerably with height , indicating that in the different experiments to which they correspond the bubble detaches from the ball at different points in time , leaving empty gaps in the reconstruction .the critical question that remains is : to what extent are the cavities and resulting jets that are created axisymmetric ? to obtain more insight into this issue , we will now look at the full tomographic information that is available from the setup .when the container is positioned in the center of the x - ray setup , a single horizontal cross - section can be imaged by three detector arrays spaced evenly around the contianer , allowing for tomographic measurements . 
using, we obtain the full 2d shape of the cross - section of the ball and the air - cavities . in figs .[ xray : fig : density]g - j the tomographic reconstruction at a single height , for four different times during the experiment is shown .for the tomography the signals of all three detector banks are super - positioned on a square lattice of 140 by 140 pixels . by applying a threshold intensity value the cavity and the ball shape can be extracted . in the first image the ball is visible , of which the size and shape are known .indeed , within the limits of the reconstruction the ball is found to be round , and also the size is correctly estimated . in fig .[ xray : fig : density]h a reconstruction of the air cavity just after the ball passed is provided .the air cavity is seen to have a similar degree of roundness as the ball , and has the same size as the ball , which is indeed what one would expect directly after the cavity is created .the second image of the air cavity is taken just before the collapse , showing that the air cavity is still axisymmetric .the last reconstruction is made at the time the bubble passes by .the bubble is not completely circular , and is rising slightly off - center .this is one of the origins of the poor reproducibility of the air bubble . by moving the setup closer to one of the x - ray sources we are able to obtain a much higher spatial resolution . from the tomography analysisthere are good indications that the cavity remains axisymmetric at least until the collapse .for such an axisymmetric cavity we can anticipate what the signals in the different detectors should look like , given the radius of the cross - section of the cavity .this is illustrated in the top half of fig .[ xray : fig : fit]a . to quantify the cavity shape as a function of timethe cavity radius for every measurement ( each height ) needs to be determined at each point in time . geometry of one horizontal cross - section of the bed . when the exact locations of the source and detectors with respect to the container are known it is possible to calculate the change of the equivalent path length as a function of the angle for both the case with a circular air cavity ( upper half ) and the situation in which a sand jet is present in the center of the air cavity ( lower half ) . in the equivalent path length is plotted as a function of the angle for two situations : an air cavity in the sand bed and a jet within the air cavity .both signals are fitted to the theoretical case of a circular cavity and a circular jet.,width=325 ] in fig .[ xray : fig : fit]b , for a given time and height , the measured equivalent path length is plotted as a function of the angle between the detector and the center detector when there is an air cavity present .this data can be fitted with the expectation for an axisymmetric air cavity in the center of the container , as shown in the top half of fig .[ xray : fig : fit]a .the equivalent path length of a circular cavity of radius as a function of the angle is calculated to be where is the known distance between the x - ray source and the center of the container .this function is fitted to the obtained data to get the cavity radius , red line in fig .[ xray : fig : fit]b . 
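The fit just described can be written down compactly. For a ray leaving the source at an angle theta from the central ray, the perpendicular distance of the ray from the container axis is D sin(theta), so a centred circular cavity of radius R removes a chord of length 2 sqrt(R^2 - D^2 sin^2(theta)) of sand from the path. This geometric form is our reading of the relation used in the text (whose equation is not reproduced here), and the 275 mm source distance is assumed to be the source-to-axis distance of the high-resolution configuration.

    import numpy as np
    from scipy.optimize import curve_fit

    D_SOURCE_M = 0.275  # assumed source-to-axis distance (high-resolution setup)

    def delta_l_cavity(theta_rad, r_cavity):
        """Change in equivalent sand path length for a centred circular cavity."""
        b = D_SOURCE_M * np.sin(theta_rad)           # impact parameter of the ray
        chord = 2.0 * np.sqrt(np.clip(r_cavity ** 2 - b ** 2, 0.0, None))
        return -chord                                # less sand -> negative change

    def fit_cavity_radius(theta_rad, delta_l_measured, r_guess=0.01):
        popt, _ = curve_fit(delta_l_cavity, theta_rad, delta_l_measured,
                            p0=[r_guess])
        return popt[0]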
afterthe collapse a jet occurs inside the air cavity in some of the cross - sections , as illustrated in the bottom half of fig .[ xray : fig : fit]a .this will change the signal as shown in fig .[ xray : fig : fit]c , where we observe a shape similar to that of fig .[ xray : fig : fit]b , but with a pronounced dimple in the center . to calculate both the cavity radius and the jet radius , equation ( [ xray : eq : ltheoretisch ] )is adapted such that the change in equivalent path length is calculated through two concentric circles .the larger circle is filled with air and the smaller one filled with sand : , where is the radius of the jet .a fit for both and is plotted as a green line in fig .[ xray : fig : fit]c .this fitting procedure is repeated for every time step and every measurement height , such that we are able to reconstruct the full axisymmetric cavity- and jet - shape as a function of time .the result of this analysis is shown in fig .[ xray : fig:3d ] . in blue the cavity radius as a function of heightis represented in the figure at several times after impact .the exact position of the ball could be extracted from the original data by looking at the maximum ( see fig . [xray : fig : reproducibility ] ) .the ball is plotted in fig .[ xray : fig:3d ] in red .despite its small scale we were also able to reconstruct the jet that is created during the collapse .the last plot of fig .[ xray : fig:3d ] shows the jet in purple . from the analogy to cavity collapse in a liquid , a secondary , downward moving jet is expected to form from the pinch - off point as well .we find no clear evidence for this secondary jet , which may be connected to the fact that it is expected to be weaker than the upward one such that it is simply too thin to measure with the spatial accuracy of our experimental setup .several snapshots of the results from the analysis described in section [ sec : cavrec ] . to obtain these imagesthe equivalent path lengths measured in the x - ray setup are fitted to theoretical cavity shapes .stitching together the experiments executed at different heights results in the blue air cavity .the ball position is also measured from the data and the ball is added to the images in red . for the last image the jet recreated from the data is visible in purple .the plots represent the situation 40 , 60 , 80 , 100 , and 120 ms after the impact for ] . with the analysis described above the cavity radiusis extracted as a function of time .it is now possible to take a closer look at the dynamics of the cavity collapse , both at and below the closure depth .collapsing air bubbles in incompressible liquids have been studied both theoretically and experimentally + citepeggers2007,gordillo2006,gekle2009,duclaux2007,gekle2009,bergmann2006 .theory has shown that the time evolution of the cavity radius asymptotically and slowly converges to a power law with exponent .more specifically , the local slope has been shown to satisfy ] ( triangles ) . both methods are consistent in the sense that they give the same trend with depth , but as expected the second methods provides larger values for since the velocity diverges towards . in any case , clearly , the results of both methods show that increases slowly with depth . in the literaturethere has been some discussion about secondary , lower pinch - offs that may be responsible for the thickening of the lower part of the jet . 
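A convenient way to extract the slowly varying exponent discussed above is to estimate the local logarithmic slope of the neck radius against the time remaining to collapse. A minimal sketch, in which the collapse time is assumed to be known or fitted separately and all names are illustrative:

    import numpy as np

    def local_pinch_exponent(t, radius, t_collapse):
        """Local exponent alpha = d ln R / d ln(t_c - t) of the collapsing
        cavity neck, evaluated by finite differences on the valid samples."""
        t = np.asarray(t, dtype=float)
        radius = np.asarray(radius, dtype=float)
        tau = t_collapse - t
        ok = (tau > 0) & (radius > 0)
        x, y = np.log(tau[ok]), np.log(radius[ok])
        return x, np.gradient(y, x)   # log(t_c - t) and the local slope alpha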
in the context of this discussionour present result indicates that a second deeper pinch - off is capable of creating a jet that could be almost as strong and fast as the first pinch - off .this supports the view put forward in that the thick - thin structure first reported in is caused by a secondary jet catching up with the first . by bursting through the primary pinch - off regionit is then assumed to create the thick part of the visible jet .+ + concluding this section , for the froude numbers studied in our setup ( ) we have unambiguously shown that the jet at atmospheric pressures originates from the primary pinch - off point of the cavity , and not from the pressurized air bubble as was suggested in based upon xray experiments in a much smaller setup .in fact the entrained air bubble is observed to move down with the sphere ( see figs .[ xray : fig : density]d and [ xray : fig:3d ] ) until it detaches and starts rising through the sand bed .this latter process will be discussed in the next section .the experimental technique used in this paper opens the possibility of directly observing the formation of the thick - thin structure reported in refs . at reduced ambient pressure .however , from high - speed imaging experiments in the current setup it turned out not to be possible to observe these structures at atmospheric pressures , such that additional research at reduced ambient pressures is needed for this purpose .a remaining question regarding the impact events is the mechanism by which the detached air bubble moves towards the surface .an intuitive way of thinking about rising air bubbles in a granular medium is that the unsupported grains on top of the bubble `` rain '' down through the center into a pile at the bottom .this transport of material will give the bubble a net upward velocity . a second mechanism , often used for continuously fluidized beds ,is closer to the rising of air bubbles in water .material from the perimeter of the bubble is transported along the interface towards the bottom of the bubble , where a wake is formed . in our experiment we do measure a rising bubble , but due to poor reproducibility of this part of the experiment we can not stitch the different experiments at different heights together to obtain a spatial image ( see figs .[ xray : fig : reproducibility ] and [ xray : fig : density ] ) . to reconstruct the bubble shape we therefore have to find a different method to analyze the data .because the bubble is moving in the vertical direction , in due time the entire bubble will pass any horizontal cross section .this means that if the velocity of the bubble is known it is also possible to retrieve the shape of the bubble from a single experiment done at one height , such that we do nt have to worry about reproducibility . as shown in fig .[ xray : fig : setup ] we record the data with two detector arrays simultaneously . in a single experimentthe air bubble will pass the two measurement planes that go from the x - ray source to the upper and lower array of detectors .the distance between these two planes in the center of the container is 4 mm . by determining the time difference of the front and back of the bubble passing the two measurement planes we obtain the speed of the front and back of the bubble .the difference in velocity between the front and the back is found to be small enough such that we can assume that the bubble rises with a constant speed . 
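The front and back velocities quoted above follow from the time lag between the two measurement planes, which are 4 mm apart at the container axis. A minimal sketch based on threshold crossings of the (negative) bubble signal; the threshold value and the names are illustrative assumptions.

    import numpy as np

    def crossing_times(t, signal, threshold):
        """First and last times at which the signal drops below the threshold
        (the bubble produces a negative change in equivalent path length)."""
        below = np.flatnonzero(np.asarray(signal) < threshold)
        return t[below[0]], t[below[-1]]

    def bubble_velocities(t, sig_lower, sig_upper, plane_sep_m=0.004,
                          threshold=-0.002):
        """Front and back rise velocities from the lag between the lower and
        upper measurement planes (the rising bubble crosses the lower one first)."""
        t = np.asarray(t, dtype=float)
        front_lo, back_lo = crossing_times(t, sig_lower, threshold)
        front_up, back_up = crossing_times(t, sig_upper, threshold)
        v_front = plane_sep_m / (front_up - front_lo)
        v_back = plane_sep_m / (back_up - back_lo)
        return v_front, v_back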
comparing measurements at different heights, we find no clear trend of the bubble velocity as a function of height , which is at least partially due to the poor reproducibility of the experiment in this regime .we find that all bubble rise velocities are around m / s . the shape of the rising air bubble for an experiment at .this shape is obtained by recording the radius of the air bubble that passes by in time at a single height .plotting the signal from the different detectors gives the complete shape .the color indicates the width of the bubble perpendicular to the paper , dark blue is a width of 4.5 cm.,width=325 ] when the time axes are rescaled with the constant bubble speed we get a bubble shape as shown in fig .[ xray : fig : bubble ] . in the horizontal directionthe information from the different detectors is displayed .the colors represent the depth of the bubble perpendicular to the plane of view .the bubble is spherical cap shaped , like a bubble rising in a fluid , or in a continuously fluidized bed .the bottom of the bubble is concave , which is consistent with either a pile or a wake . in 1963 davidson and harrison presented a relation for the rising velocity of a single bubble in a fluidized bed : , where is the equivalent bubble diameter with the bubble volume . now that we have the shape of the bubble we can estimate the velocity using this model which gives a value of m / s .this is close to our experimental value , which is slightly lower .this stands to reason , since our bubble is not rising in a continuously fluidized bed and thus a lower velocity is expected .we are not able to see if there is a rain of particles within the bubble , since we measure the average signal over a line instead of locally , and the density of the rain " would be very low . however , the shape of the bubble and the rising velocity are close to bubbles rising in a continuously fluidized bed , suggesting that the rise mechanism will be similar as well .the shape of our measured bubble is similar to the air bubble measured by royer _ et al . _ in although they have a different explanation for the shape .they attribute the concave bottom of the bubble to an impinging second jet , that grows to meet the first jet .we however find that the rise velocity is consistent with a rising air bubble , that will finally erupt at the surface , rather than overtaking the primary jet .note that the shape of the rising air bubble is very different from the shape it had when it was still attached to and dragged along with the moving ball , as can be appreciated by comparing fig .[ xray : fig : bubble ] to , e.g. , fig .[ xray : fig : density](d ) .finally , in all of the over repetitions of the experiment that have been measured for this work , we have never observed more than a single rising bubble .until now we have assumed that the packing fraction of the bed does not change significantly .this assumption was necessary to calculate the air path lengths in the bed . by simply observing the experimentit is obvious that the packing fraction must change , since when we compare the bed height before and after impact we find that it has lowered . 
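For reference, the Davidson-Harrison estimate referred to above is usually written as u_b = 0.711 (g d_e)^(1/2), with d_e the diameter of the sphere having the same volume as the bubble; this prefactor and form are the standard textbook expression and are assumed here rather than taken from the (unreproduced) equation in the text.

    import numpy as np

    def davidson_harrison_velocity(bubble_volume_m3, g=9.81):
        """Rise velocity u_b = 0.711 * sqrt(g * d_e) of a single bubble in a
        fluidized bed, with d_e the equivalent spherical diameter."""
        d_equiv = (6.0 * bubble_volume_m3 / np.pi) ** (1.0 / 3.0)
        return 0.711 * np.sqrt(g * d_equiv)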
from the initialvery loose state that is created by the fluidization procedure we end up with a more compactified bed .we want to determine the corresponding change in the packing fraction , and we want to discover how the compactification is distributed throughout the bed .the experiment provides the equivalent path length of the x - rays through the sand , .whenever a given x - ray does not encounter an air cavity in its path the change in this path length ( ) before and after the experiment can , in first order , be related to a change in packing fraction by : where needs to be interpreted as the average packing fraction changes along the path .note that the values for given in this section correspond to much smaller than those discussed in the previous section .in fact , we measure packing fraction changes of only a few percent .this however needs to be contrasted with the maximum range that can be expected during compaction , i.e. , the difference between loose and dense packings , which is typically of the order of 10 % or less : for the experimental setup and the sand used in our experiments we have determined in a tapping experiment that the difference between the densest and the loosest obtainable packing is of the order of 10 % . a representation of the packing fraction of the bed after an experiment at .the data is taken several seconds after the impact events have terminated , assuring that there are no air pockets left in the sand .the color indicates the packing fraction ( ) . even though the change in packing fraction is small ,a compactified area next to the ball can be observed , while the vertical strip above the ball is relatively loose .the compaction below the ball decreases with depth . averaged packing fraction plotted as a function of time . to obtain the curvesthe signal of 20 different experiments is averaged .the three different curves give the signal from three different detectors , _i.e. _ , for three different values of . the transparent area around the curves indicates the statistical error .all three signals show a clear increase in packing fraction just before the ball the ball blocks out most x - rays.,width=325 ] the local packing fraction after the experiment in the entire sand bed , calculated with equation ( [ xray : eq : packingfraction ] ) , is shown in fig .[ xray : fig : density_differences]a .these measurements were taken several seconds after the ball has come to a halt , which assures that there are no air - cavities or bubbles left , and that only packing fraction variations are detected .note that the packing fraction before the experiment was equal to 0.41 , uniformly throughout the container .this means that the bed is compactified during the experiment .we see a clear compacted region ( pink area ) next to where the ball has stopped .the packing fraction above the ball ( in the center of the plot ) is relatively low , and is the lowest just on top of the ball .the packing fraction below the ball slowly decreases with depth back to a value of 0.41 .wherefrom do these packing fraction variations originate , and what do they teach us about the events below the surface ?the packing fraction around the ball at several times during impact with a froude number of . to obtain these images we move along with the ball and averaged the signals around it .the first image shows the bed before impact ( average packing fraction of 0.41 ) . 
in the other frames the ball moves from 8.5 cm ( 67 ms after impact ) below the surface to 16 cm ( 147 ms after impact ) below the surface . during this movementwe observe a compacted region to the sides and in front of the ball ( red ) , a growing compactified region ( yellow ) and a strong compression above the ball where the air cavity pinches off .note that the white area does not exactly represent the measured ball shape but simply the area in which no reliable data for density changes in the sand could be calculated due to the presence of the ball . ] to understand the packing fraction of the sand around the ball after the experiment we need to look into the local compaction while the ball is moving through the sand . to determine what happens with the sand just in front of the ball we zoom in on the signal before the ball passes by at a given height .this gives the bed density underneath the moving ball . to obtainsufficient data the signals of 20 different experiments at 10 different heights are averaged .the moment the ball passes by is used to synchronize the signals in time . to smoothen the signala central scheme is used where the trend of the 10 previous points is extrapolated beyond the central point . in fig .[ xray : fig : density_differences]b the result of this analysis is shown .the blue curve ( ) passes through the center of the ball and therefore detects the ball first .the other two curves show the signal through the side of the ball ( ) and completely beside the ball ( ) .all three curves show a clear increase of the signal before the front of the ball passes .this shows that there is a compaction of the sand just before the ball arrives . or, there is a compacted region being pushed in front of the ball . from the red curve ( )we deduce that there is also compaction next to the ball .this compacted region is still present after the experiment is finished , as can be seen in fig .[ xray : fig : density_differences]a .the density differences of the sand during the experiment are very small , but it is possible to detect them if the signal is averaged over a sufficiently large time - window ( see fig .[ xray : fig : density_differences]b ) . to image the packing fraction variations during the penetration around the ball we switch to the frame of reference of the ball such that we are able to do a time averaging .the result of this procedure is shown in fig .[ xray : fig : around_the_ball ] . the red area below and next to the ball indicates a compacted region , just as we saw in the previous section . in time we see that the compacted region below the ball ( yellow area ) grows downwards relative to the ball and obtains a size that is several times that of the sphere . when comparing these findings to the data obtained by royer _ et al_ in refs . , who reported a much smaller compacted region in front of the ball at atmospheric pressures for comparable froude numbers , we may attribute the difference to the higher sensitivity of our setup to packing fraction changes .in addition to what happens below the ball , we can also investigate the compaction above the ball .first the air cavity is visible ( blue ) and when the cavity collapses a growing red area indicates a compacted region next to the pinch - off . 
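the averaging in the frame of reference of the ball described above can be sketched as follows : every recorded frame is shifted so that the ball sits at a fixed reference row , and the shifted frames are then averaged over repetitions . the array names and synthetic data below are illustrative only ; the actual detector layout and ball trajectories are not given here :

```python
import numpy as np

def comoving_average(frames, ball_rows, out_height=81, ref_row=40):
    """average density frames in the ball's frame of reference.

    frames    : array (n_frames, n_rows, n_cols) of packing-fraction estimates
    ball_rows : array (n_frames,) with the ball's row index in each frame
    the output is an (out_height, n_cols) average in which the ball centre
    always sits at row `ref_row`.
    """
    n_frames, n_rows, n_cols = frames.shape
    acc = np.zeros((out_height, n_cols))
    cnt = np.zeros((out_height, n_cols))
    for f in range(n_frames):
        shift = ref_row - int(ball_rows[f])
        for r in range(n_rows):
            rr = r + shift
            if 0 <= rr < out_height:
                acc[rr] += frames[f, r]
                cnt[rr] += 1
    return np.divide(acc, cnt, out=np.full_like(acc, np.nan), where=cnt > 0)

# synthetic data: 20 frames, ball moving down one row per frame, with a slightly
# compacted band just below the ball position (values are illustrative)
rng = np.random.default_rng(0)
frames = 0.41 + 0.002 * rng.standard_normal((20, 60, 30))
ball_rows = np.arange(10, 30)
for f, row in enumerate(ball_rows):
    frames[f, row + 2 : row + 6, 10:20] += 0.01

avg = comoving_average(frames, ball_rows)
print("averaged packing fraction just below the ball:", round(float(np.nanmean(avg[42:46, 10:20])), 3))
```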
the data in fig .[ xray : fig : density_differences]a ( taken after the experiment was done ) shows a relatively uncompacted area in the center above the ball .this must be connected to the rising bubble rearranging the sand particles in its path .it suggests that the sand at the bottom of the bubble is deposited loosely , pointing to a slow and unpressurized mechanism .using a custom - made high - speed x - ray tomography setup we measured the events that occur below the surface when a ball impacts on a bed of fine , very loose sand .we were able to reconstruct the air cavity until and beyond the collapse by stitching together measurements done at different depths . from the cavity reconstruction we learned that the phenomena below the surface are similar to the events that occur during and after an impact in water :a cavity is formed behind the penetrating ball , the cavity collapses , creating a jet and entraining an air - bubble .even the power - law behavior with which the cavity collapses is consistent with the pinch - off of an air cavity in a liquid . using the signal of a single experiment done at one height we were able to retrieve the shape of the rising air bubble .both the shape and the rising velocity of the bubble are very similar to those of bubbles rising in a continuously fluidized bed . during the experimentthe sand bed is compactified .even though the change in the signal caused by the compaction is very small compared to that from the air cavities , we were able to measure sand that is compressed in front and to the side of the ball while the ball moves through the bed .this compacted area grows in time and typically has a size of several ball diameters when the ball comes to rest .this is much larger than the compacted region reported at atmospheric pressures in refs . , which may have implications for explaining the pressure dependence of the drag entirely from the `` cushioning '' effect put forward in these papers .the compaction does decrease with increasing distance from the ball , and is most pronounced in the center of of the container , below the ball .moreover , during the cavity collapse the sand at the collapse height is also greatly compressed and in the last step ( the rising of the air bubble ) the sand has time to rearrange itself : with the deposition of sand at the bottom of the bubble we end up with a relatively loose center and compacted sides . just above the position where the ball stops we have the area with the lowest compactionthis is where the air bubble pinches off from the ball , giving rise to the existence of a region that is depleted of sand grains .
when a ball is dropped into fine , very loose sand , a splash and subsequently a jet are observed above the bed , followed by a granular eruption . to directly and quantitatively determine what happens inside the sand bed , high - speed x - ray tomography measurements are carried out in a custom - made setup that allows for imaging of a large sand bed at atmospheric pressures . with this setup we show that the jet originates from the pinch - off point created by the collapse of the air cavity formed behind the penetrating ball . subsequently we measure how the entrapped air bubble rises through the sand and show that this is consistent with bubbles rising in continuously fluidized beds . finally , we measure the packing fraction variation throughout the bed , from which we show that there is ( i ) a compressed area of sand in front of and next to the ball while the ball is moving down , ( ii ) a strongly compacted region at the pinch - off height after the cavity collapse , and ( iii ) a relatively loosely packed center in the wake of the rising bubble .
how energy is converted from one form to another form has been extensively studied in these days . in biochemical reaction, some molecules are known to work for energy transducer " .such energy transduction is important in enzyme function , molecular motor , and so forth , while recent progress in nanotechnology makes us possible even to design a microscopic machine that works as such energy transducer .the question whether such molecular machine functions in the same way as a macroscopic machine was first addressed up by oosawa as a dichotomy between tight and loose coupling between input and output . in the tight coupling, chemical energy is converted into mechanical work via definitely scheduled successive conformational changes of molecules , and at each stage the conformational change has one - to - one correspondence to a mechanical work .this mechanism is similar to most of our macroscopic machine .consider a typical machine with gears .it converts some form of energy to work via several rigid gears , following external operations .each gear transforms one kind of motion to another directly and instantaneously , where fluctuations in the transformation process are negligible .in contrast , in the loose coupling mechanism , the manner or the route to establish energy conversion does fluctuate by each event , and the energy is not necessarily converted all in once at each stage . here , the amount of output work is not precisely specified , but varies by each event . by repeating trials , the output work extracted from the same amount of input energy ( e.g. , by a single reaction from atp to adp ) is distributed with some fluctuations .oosawa proposed that this loose coupling is a general property of a system that _ robustly works _ under large thermal fluctuations of the configuration of molecules even though the input energy is of the same order with the energy of thermal fluctuations .this problem is first posed in the study of molecular motors , where conversion from chemical energy to directional mechanical work within a fluctuating environment is studied .indeed , there are some experimental results suggesting loose coupling mechanisms . for instance, the energy is stored in a molecule over an anomalously long time between the chemical reaction event and emergence of directional motion , and multiple steps of conversion to work are repeated during molecular processes within one cycle of the chemical reaction . therethe resulting output steps is widely distributed as is expected in loose coupling mechanism . whether the energy conversion of a biomotor is loose or tight may depend on detailed molecular structure .still , the question how the energy conversion in loose coupling mechanism works is quite general , and can be discussed in any interacting molecules .one possible mechanism for the loose coupling was proposed as thermal ratchet at a _ statistical _ level , although _ dynamical _ process for the loose energy conversion is not yet clarified .little is known what types of loose conversion are possible , or in what condition and by what mechanism the loose coupling functions . 
in the present paper, we propose a scheme of _ dynamical energy transduction of autonomous regulation _ in highly fluctuating configurations , by introducing a system consisting of interacting molecules with internal dynamics .the specific choice of a model we adopt for this energy conversion is inspired by some experiments suggesting oscillatory molecular dynamics at a time scale of molecular work , whereas we do not pursue to make a direct correspondence with the experiment .rather , we adopt the model , expecting that the proposed mechanism itself is rather general . by setting up an example of a dynamical systems model that works as a loose coupling ,we show that energy transduction is possible within fluctuating environment . by analyzing the dynamical process of this energy transduction, we find some characteristic features of the dynamics that makes the coupling in this scheme generally loose .robust transduction of energy within fluctuating configuration is discussed from the viewpoint of dynamical systems .the proposed mechanism may also shed new light on the function of enzyme in chemical reaction , design of a nano - machine that works at a molecular scale , and biomotors .to study this problem , we start by defining a machine that works at a molecular scale , with which we are concerned here . the requisite here is essential to a system that works with large fluctuations of configurations in molecules .consider a system composed of several degrees of freedom .` input ' is injected into the system through some restricted degrees of freedom of it , that are pre - defined .then , as a reaction to the input , there arises dynamical change in the system , which leads to ` output ' as a typical motion of another set of restricted degrees of freedom . here , the degrees of freedom for the input and for the output are distinct .statistically , the output brings directional flow or mechanical work , although individual events of the output would be rather probabilistic .the dynamics of the system is given by the interaction among the degrees of the freedom , as well as noise from the heat bath .through the dynamics , input energy is transformed to ( stochastic ) output motion , that gives an output work statistically .this energy transducer works with large fluctuations of configurations of the system , without any external control , once the energy is injected as input .accordingly we postulate the following properties : * details in input event are not controlled . neither the timing of the injection with regards to the molecule fluctuations , nor the direction of the input motion is specified precisely .even the magnitude of the input is not restricted .* once the input is injected , no external control is allowed , and the process is governed by a given rule for temporal evolution , while fluctuations at a molecular scale are inevitable . * in spite of this uncontrolled condition , mechanical work or directional flow can be extracted from the output motion , on the average .the system adapts itself to produce output , against the change of the input timing , direction , or magnitude .then , we address the question how such system is possible . 
in the present paper, we provide a specific example of a system with such loose - coupling , and discuss its characteristic features which realize the above three conditions .the characteristic features of dynamical process are summarized as follows : after the energy is given as the input , the input energy is sustained with very weak dissipation for some time , and is not directly transduced to the output .this suppression of energy flow into other modes comes from some restriction to the interaction among the system . on the one hand, this restriction makes a change of molecular dynamics from the fluctuating motion to the directional motion later . during the timewhile the energy is stored , detailed information of the input ( such as the direction or the timing ) , except the energy value , is lost due to noisy chaotic dynamics .this makes the output distributed . in other words ,the input - output relationship is loose , in the sense that the output ( i.e. , displacement or extracted work ) is distributed around its mean .there is a rather general relationship between the variance and the mean of the outputs , as will be shown . by studying a dynamical model with several degrees of freedom ,we have confirmed the above features , when the model satisfies the requisite ( i)-(iii ) , to extract mechanical work from the energy flow within highly fluctuating configurations . on the other hand ,no substantial work is extracted from the output , when these features are not observed in the models .these results imply that the autonomous energy transduction works as a loose coupling mechanism .as a specific system to satisfy the condition ( i)-(iii ) , we consider the following model .the motor part consists of a ` head ' , of position , and an internal ` pendulum ' represented by .the motor interacts with rail , represented by a one - dimensional chain of lattice sites positioned at with index .the internal pendulum is excited when energy is applied to the system .every degree of freedom , except for , is in contact with a heat bath , generating random fluctuations described by a langevin equation with damping .the interaction potential between the chain and the head is spatially asymmetric and its form depends on the angle of the pendulum ( see fig.[fig : model ] ) .the periodic lattice is adopted as the chain , to study directional motion in an asymmetric periodic potential , as is often studied in the study of thermal ratchets .every degree of freedom , except for the internal pendulum , is in contact with the heat bath , generating random fluctuations described by a langevin equation .the equations of motion for this system are chosen as where is the temperature , is a friction coefficient and represents gaussian white noise . here, we use the units boltzmann constant . and are the spring constant and the natural interval between two neighboring lattice sites in the chain . to observe directional motion of the head, the chain is also connected to a fixed ground via a spring with a constant . here , , and are mass of the respective degrees of freedom , and .the inertia is not ignored , as small friction coefficient is adopted .although this could be reasonable for the application to enzymatic reaction or to a design of nano - machine , one might think that biomotors should be treated as overdamped systems , considering that for small object that moves with a slow time scale , water is highly viscous fluid . 
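the following is only a generic sketch of an underdamped langevin integrator for a head - pendulum - chain system of the kind described above , with the internal pendulum deliberately not coupled to the heat bath . the asymmetric head - site potential , all parameter values and the energy - injection protocol are placeholders chosen for illustration , not the authors' actual choices :

```python
import numpy as np

rng = np.random.default_rng(1)

# --- illustrative parameters, NOT the paper's values ---
N, a, k_chain, k_ground = 20, 1.0, 5.0, 0.5
m_head, m_pend, m_site = 1.0, 1.0, 1.0
gamma, T, dt, E_inj = 0.1, 0.05, 1e-3, 5.0   # friction, temperature (k_B = 1), time step, injected energy

def head_site_potential(dx, theta):
    """placeholder asymmetric head-site interaction, modulated by the pendulum angle.
    the paper's actual potential form is not reproduced here."""
    return -np.exp(-(dx - 0.2 * np.sin(theta)) ** 2 / 0.1) * (1.0 + 0.5 * np.tanh(3.0 * dx))

def d(f, s, eps=1e-6):                      # simple numerical derivative
    return (f(s + eps) - f(s - eps)) / (2.0 * eps)

def forces(X, theta, x):
    fX  = -sum(d(lambda s: head_site_potential(s - xi, theta), X) for xi in x)
    fth = -sum(d(lambda t: head_site_potential(X - xi, t), theta) for xi in x)
    fc = np.zeros(N)
    for i in range(N):
        fc[i] -= d(lambda s: head_site_potential(X - s, theta), x[i])   # force from the head
        fc[i] -= k_ground * (x[i] - a * i)                              # spring to the fixed ground
        if i > 0:     fc[i] -= k_chain * (x[i] - x[i - 1] - a)
        if i < N - 1: fc[i] += k_chain * (x[i + 1] - x[i] - a)
    return fX, fth, fc

# state: head, internal pendulum (no heat bath), chain sites (with heat bath)
X, Vx = 0.5 * a * N, 0.0
theta, omega = 0.0, np.sqrt(2.0 * E_inj / m_pend)     # energy injection into the pendulum
x, v = a * np.arange(N, dtype=float), np.zeros(N)

amp = np.sqrt(2.0 * gamma * T / dt)
for step in range(5000):
    fX, fth, fc = forces(X, theta, x)
    Vx += dt / m_head * (fX - gamma * Vx + amp * rng.standard_normal())
    v  += dt / m_site * (fc - gamma * v + amp * rng.standard_normal(N))
    omega += dt / m_pend * fth                         # pendulum: deterministic, undamped
    X += dt * Vx; x += dt * v; theta += dt * omega

print(f"net head displacement after the run: {(X - 0.5 * a * N) / a:.2f} lattice units")
```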
here , we introduce inertia term , because several experimental results by yanagida s group may not necessarily match with this standard view .also , each variable in our model does not necessarily correspond to the atomic configuration of a protein .rather it is a mesoscopic variable , for which effective inertia may be relevant to long - term storage of energy in dynamics of proteins , and allow for oscillatory dynamics among interacting degrees of freedom .we will come back to this inertia problem in 8 again .the potential form is asymmetric in space as shown in fig.1 , where the characteristic decay length of the interaction is set at a smaller value than , so that the interaction is confined mostly to the nearest lattice sites .here we adopt the following potential form , with and here the parameters and determine the degree of asymmetry , while and give the coupling strength and decay length of the interaction , respectively .specific choice of this form is not important .we have simulated our model choosing several other potential forms with asymmetry , for instance and obtained qualitatively the same results with regards to the directional motion . in this paper ,the parameters for the potential are fixed at , , and .the pendulum mode without direct coupling to the heat bath is adopted here just as one representation of long - term storage of energy experimentally suggested in molecular motors . as long as there exists such slow relaxation mode , our results follow , and this specific choice is _ just one example _ for it .any mode realizing slow relaxation can be adopted instead .once a sufficient energy is injected to the pendulum to drive the system out of equilibrium , we have observed that the head moves a few steps in one direction , on its way in the relaxation to thermal equilibrium .an example of the time course of the head motion is given in fig.[fig : jikeiretsu ] .how many steps the head moves after the energy injection depends on each run with different seed for the random force , even though the values of parameters including are kept constant . in fig .[ fig : temperature](a ) , the average step size is plotted as a function of temperature ( ) .the mean step size only weakly depends on the temperature for small .it shows only slight decrease by decreasing the temperature .this is in strong contrast with the thermally activated process , in which the rate of crossing the potential barrier ( for the head to shift ) is given by with an activation energy , according to arrhenius law . at thermal equilibrium , indeed , the head shows such thermally activated motion .the resulting diffusion rate of the head brownian motion obeys this exponential form as shown in fig.[fig : temperature](b ) , whereas the values of agree with the activation energy derived analytically . the dependence in fig.[fig : temperature](a ) is distinct from such dependence , and the motion of the head is not thermally activatedthis is natural , since the energy for the directional motion is extracted from the injected energy . as a result ,the head motion possesses different properties from the thermally activated motion as follows . 
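the arrhenius dependence invoked for the equilibrium , thermally activated hopping , i.e. a diffusion rate proportional to exp ( - e_a / t ) in units with k_b = 1 , is usually checked by a linear fit of ln d against 1 / t . a short sketch with made - up diffusion rates :

```python
import numpy as np

# hypothetical equilibrium diffusion rates D of the head at several temperatures T
# (units with k_B = 1); values are illustrative only
T = np.array([0.05, 0.07, 0.10, 0.15, 0.20])
D = np.array([2.1e-5, 2.4e-4, 2.2e-3, 1.6e-2, 5.3e-2])

# arrhenius law D = D0 * exp(-Ea / T)  =>  ln D = ln D0 - Ea * (1 / T)
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea, D0 = -slope, np.exp(intercept)

print(f"activation energy Ea ≈ {Ea:.3f}, prefactor D0 ≈ {D0:.3f}")
```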
for a fixed temperature, the average step of the head increases monotonically ( almost linearly ) with , when the injected energy is larger than a threshold ( see fig.[fig : injected_energy](a ) ) .this threshold is not equal to for the brownian motion , but shows the following dependence upon temperature : for higher temperature , this threshold energy goes to zero , and any small injected energy can lead to the directional motion on the average , whereas , with decreasing temperature , the threshold energy tends to converge to a certain value around .this result suggests that there is a certain minimum energy required for the head to shift to the next lattice site .the temperature dependence suggests that the head motion is not thermally activated .rather , it seems that the motion takes place when a suitable temporal correlation exists between the head motion and thermal fluctuations of the chain .it should also be noted that the directional motion is clearly observed even by the injection of the energy of times of .for example , the average motion is about 2 steps , for the injection of the energy of , for .the linear increase of the step size with implies that the injected energy is not wasted even if it is larger than the required threshold value for the directional motion of the head .this implies that the injected energy is converted to the directional motion not at once but step by step with the successive step motion of the head .this feature is revisited in section 6 . to make this successive energy conversion possible, it is important that the chain be sufficiently flexible .indeed , as is shown in fig.[fig : stiffness ] , if the spring constant is too large , directional motion of the head is not observed .the directional motion is not possible , if the chain is too tight , or too flexible .the average step size has a peak around .this suggests that a flexible motion that appears under a proper stiffness , plays an important role to produce the directional motion as an output of the energy transduction . indeed , around this parameter value, there is a transition in the dynamics of the head and neighboring few lattice points , to show correlated motion , and accordingly a rather long - range correlation appears there .now we discuss how the above directional motion is achieved in the term of dynamical systems .we note that the present mechanism works only as a loose coupling mechanism in this section .first , let us discuss how the system ` forgets ' initial conditions and how broad distribution of output step motion appears . before the directional motion of the head , the motion of the triplet \{head , pendulum , rail } shows a weakly chaotic motion , with which the input information ( e.g. , the configuration of the system when the input is applied ) is lost .the chaotic motion here only works as weak perturbation to the two - body problems consisting of the interactions between the head and the pendulum and that between the head and the adjacent lattice site . on the other hand ,when the head exhibits a step motion , there is strong interaction between the head and the lattice sites , that lead to a stronger chaotic instability arising from three - body or five - body ( including the neighboring lattice points ) motions .the change of the degree of instability is demonstrated in fig.[fig : difference ] , where the evolution of difference between two close orbits is plotted . 
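the reported behaviour of the mean step number , i.e. roughly linear growth with the injected energy above a temperature - dependent threshold , suggests a fit of the form mean steps ≈ a · max ( e - e_th , 0 ) . a sketch of such a fit with hypothetical data points ( a simple grid search over the threshold , pure numpy ) :

```python
import numpy as np

# hypothetical mean output steps versus injected energy E (units with k_B = 1),
# showing a threshold followed by a roughly linear increase (illustrative values)
E = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
mean_steps = np.array([0.02, 0.05, 0.3, 0.7, 1.4, 2.1, 3.6, 5.0])

best = None
for E_th in np.linspace(0.0, 2.0, 201):
    x = np.clip(E - E_th, 0.0, None)
    a = np.dot(x, mean_steps) / np.dot(x, x)      # least-squares slope through the origin
    err = np.sum((mean_steps - a * x) ** 2)
    if best is None or err < best[0]:
        best = (err, a, E_th)

_, a, E_th = best
print(f"slope ≈ {a:.2f} steps per unit injected energy, threshold E_th ≈ {E_th:.2f}")
```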
as shown, there are two rates of exponential divergence , one that continues during the energy storage , and the other with the flow of energy from the pendulum part .the former is given by perturbations to the two - body motions , and the latter is induced by strong interaction with the three- or five - body motion .the orbital instability here is examined by taking two identical initial configurations , and applying identical input energy and identical random noise ( i.e. , sequence of random number ) .a slight disturbance is added to the reference system at the timing of energy injection .the evolution of these two systems with tiny difference in configurations results in quite different number of output steps ( see fig.[fig : difference](a ) ) .then , it is impossible to predict or control the number and the direction of steps by the configuration of the system at the excitation . on the other hand ,the directional motion appears rather independently of the configuration and , therefore the energy conversion of the system is robust against thermal noise , and disturbances in the input event .one consequence of these chaotic dynamics together with the influence of the heat bath is a _loose _ relationship between the input and the output .in fact , the output number of steps shows a rather broad distribution , as shown in fig.[fig : distribution ] , where one step means the displacement of the head for one lattice site along the chain .the distribution was obtained by taking samples with arbitrarily chosen configurations at the moment of energy injection , while satisfying thermal equilibrium , and also by taking different random sequences .the resulting distribution is rather broad , although the input energy is identical for all the samples .the difference in the output step comes mainly from intrinsic chaotic dynamics of the system mentioned above . in the distribution of the number of steps, we note that at low temperature the mean displacement is almost equal to its variance ( see fig.[fig : distribution](b ) ) .if the step motion is a result of rare stochastic events , one could expect poisson distribution .indeed , if one disregards the small tail of the distribution at negative steps , the obtained distribution is not far from poissonian , for low temperature . even for high temperature ,the linear relationship between the mean and variance is still valid with an offset depending on the temperature .this poisson - type distribution suggests that random process underlies the output head motion , as expected from the above chaotic dynamics . the relationship in fig.[fig : distribution](b ) indicates that to obtain sufficient output amount in directional motion , large fluctuation is inevitable .indeed , it is not possible , at least in the present model , to make the step distribution sharp while keeping sufficient directional motion of the head , even if the parameter values are changed .for example , one could make our model rigid , by increasing stiffness in the rail and suppressing the fluctuations of the configuration .then , the directional motion is highly suppressed , as already shown in fig.[fig : stiffness ] .the average displacement as a function of the stiffness has a maximum around , where the fluctuation of the displacement of rail elements is rather large . on the other hand , for ,i.e. , for a rigid spring of the rail , the average displacement is only 2% of the maximum .there , most of the input energy then is wasted . 
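the orbital - instability test described above , i.e. two copies of the system driven by the same random sequence , one copy slightly perturbed at the moment of energy injection , and the logarithm of their separation followed in time , can be sketched generically . the dynamics used below is a stand - in chaotic map , not the paper's model ; only the measurement procedure is the point :

```python
import numpy as np

def twin_divergence(step, state0, n_steps, delta0=1e-8, seed=2):
    """log-separation of two copies of a noisy system driven by identical noise.

    `step(state, xi)` advances one copy by one time step using the noise sample
    `xi`; the second copy starts displaced by delta0, mimicking the disturbance
    added at the moment of energy injection."""
    rng = np.random.default_rng(seed)
    s1 = np.array(state0, dtype=float)
    s2 = s1 + delta0 * rng.standard_normal(s1.shape)
    logsep = []
    for _ in range(n_steps):
        xi = rng.standard_normal(s1.shape)          # the SAME noise for both copies
        s1, s2 = step(s1, xi), step(s2, xi)
        logsep.append(np.log(np.linalg.norm(s1 - s2) + 1e-300))
    return np.array(logsep)

def toy_step(s, xi, K=1.5, noise_amp=0.01):
    """stand-in chaotic dynamics (a noisy standard map), NOT the paper's model."""
    x, p = s
    p = p + K * np.sin(x) + noise_amp * xi[0]
    x = (x + p) % (2.0 * np.pi)
    return np.array([x, p])

logsep = twin_divergence(toy_step, [0.1, 0.0], 200)
rate = np.polyfit(np.arange(1, 25), logsep[1:25], 1)[0]
print(f"divergence rate estimated from the slope of the log-separation: {rate:.3f} per step")
```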
accordingly , to achieve larger directional steps without wasting input energy , the step distribution has to be broad .this demonstrates the relevance of loose coupling mechanism to energy transduction .this relevance of loose coupling is expected to be general under our postulates for the molecular transducer .note that rigid process in tight coupling mechanism can transfer energy very well when configurations of every two parts between which the energy is transferred is precisely determined , but the efficiency will go down drastically if this precise condition for the configurations is not satisfied . in the molecular energy transducer we are concerned , such precise control of configurations of molecules is impossible .furthermore , as the detailed information on the input is lost , control of the motion gets more difficult .hence , the tight coupling mechanism is difficult to keep high efficiency for a molecular transducer .on the other hand , in the present loose coupling mechanism for the energy transduction , there exists chaotic motion .then , the efficiency could be much lower than the maximum efficiency achieved by tuning initial condition in the tight coupling .however , the probability to achieve ` good ' configurations starting from any initial conditions or from a variety of boundary conditions remains to be large .hence energy transduction in present loose coupling mechanism works over a wide range of initial conditions , and therefore is robust .in spite of chaotic motion mentioned above , the output should be directional , in order to extract some work from the system . here , recall that there is some time lag between the injection of energy and the emergence of the step motion , during which chaotic motion is maintained . even after the stronger chaos appears with the flow of energy from the pendulum, there is a certain time lag before the emergence of the step motion ( see fig.[fig : difference ] ) . during such time lag ,the energy may be sustained or may be used degree by degree to bring about the step motion . in this section ,we examine the energy flow in detail during this ` active state ' preceding to the directional motion .the duration of active state is suggested to be important to realize the directional motion under large fluctuations of the configurations of molecules .note that the motion of the head to the neighboring lattice site is possible only if some condition for the configuration among the pendulum , head and the adjacent lattice sites is satisfied . in fig.[fig : jikeiretsu ] , we have plotted the time course of energy flow to the chain from the head - pendulum part . here , the time series for the kinetic energy of each lattice site is plotted together with that of the head . as shown ,the energy flow from the head to farther lattice site is suppressed , before the step motion occurs .this restriction in the energy flow is generally observed whenever the directional motion is observed . on the other hand, there is a continuous gradual flow when the step motion is not possible . 
to study the relevance of this restriction in the energy dissipation , we have computed energy storage by measuring the following two quantities ; is the work from the pendulum to the head and the lattice , up to time since the energy injection , and is the work from the three lattice sites adjacent to the head , to farther lattice sites , again up to the time .here we call the head , pendulum , and the adjacent three lattice sites as `` active part '' , and other lattice sites of the chain as `` residual chain part '' . by adopting these terms , is the energy dissipated to the residual chain part from the active part . in the present set of simulations for the measurement of and ( and only in this set of simulations ) , the contact with the heat bath to the active part is eliminated , to distinguish clearly the stored energy from the wasted work to other lattice sites .a time course of and is shown in figs.[fig : flow ] . in a certain range of parameters where the step motion frequently occurs , dissipation from the active partis delayed for some time after the energy flow from the pendulum into the system ( fig.[fig : flow](a ) ) . during the interval for this delay ,the energy is sustained in the active part ( i.e. , at the head and the adjacent lattice sites ) for a while . as is shown in the left figure of fig.[fig : flow](a ) , the sustained energy is dissipated to the residual chain , together with the event of the step motion .( note that steep changes in and after the step motion is an artifact since the computation here is based on the active part consisting of the head that is adjacent with the site , and loses its meaning after the head steps from the site . ) for the parameters with much smaller directional motion , the increase of and begins simultaneously with the energy flow from the pendulum to the system ( fig.[fig : flow](b ) ) .the mean profile of the energy flow from the active part is given by the time course of , where the temporal ensemble average is computed by setting the origin of as the time of the energy injection , since , for the parameters adopted here , the energy flows rapidly ( in less than time units ) from the pendulum to the other degrees of freedom .the rate takes a large value only in the initial stage ( ) of the relaxation process .the ensemble average at time is computed only over the samples in which the head stays at the site up to time .accordingly , the number of samples decreases with the increase of the time . as is seen in fig.[fig : mean_flow](a ) , the out - flow rate of the energy is kept small at the early stage ( up to ) of the relaxation , for .this is in strong contrast with the temporal profile for larger values of ( e.g. 
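the two bookkeeping quantities introduced here amount to accumulating the mechanical power transmitted across the relevant couplings : the work delivered by the pendulum into the head and adjacent sites , and the work passed on from this " active part " to the residual chain . a minimal sketch of such an accumulation for one harmonic bond at the boundary of the active part , with hypothetical trajectory arrays and one possible sign convention ( positive values mean energy leaving the active part ) :

```python
import numpy as np

def work_across_bond(x_left, x_right, v_right, k=5.0, a=1.0, dt=1e-3):
    """accumulate the work transmitted across one harmonic chain bond.

    the site on the 'active' side (x_left) exerts the spring force
    F = -k (x_right - x_left - a) on its outer neighbour; the power it
    delivers is F * v_right, and the transmitted work is the time integral."""
    F = -k * (x_right - x_left - a)
    return np.cumsum(F * v_right) * dt

# hypothetical trajectories for the boundary pair of lattice sites: the active-side
# site is briefly excited and the disturbance leaks slowly into the residual chain
t = np.arange(0.0, 20.0, 1e-3)
x_left  = 0.0 + 0.05 * np.exp(-t / 5.0) * np.sin(6.0 * t)
x_right = 1.0 + 0.01 * np.exp(-t / 5.0) * np.sin(6.0 * t - 1.0)
v_right = np.gradient(x_right, t)

W_out = work_across_bond(x_left, x_right, v_right)
print("work passed to the residual chain after 5, 10, 20 time units:",
      [round(float(W_out[int(T / 1e-3) - 1]), 4) for T in (5, 10, 20)])
```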
) .the in - flow rate to the active part is approximately in maximum for both the two cases .then , the active part in the flexible system does not respond instantaneously to the sudden flow of energy , and therefore the energy is sustained in the active part without dissipation into the residual chain for a while .this sustained energy up to is estimated by .the ensemble average of the time course of this energy is given in fig.[fig : mean_flow](b ) .the energy is sustained up to the time when and dissipated more quickly when .if the injected energy were dissipated completely into a large number of degrees of freedom , the conversion of energy into mechanical work would suffer a rather large loss .contrastingly , the motion generated by our mechanism remains confined within just a few degrees of freedom where some correlation among degrees of freedom is maintained .supplied energy to the pendulum is not diffused as heat . as a result ,the conversion is efficient .the restriction of energy transfer to the farther lattice sites comes from the correlated motion among the pendulum , head , and the lattice sites .the effective rate for the sustained energy in the active part is estimated by by suitably choosing .( as is increased to infinity , these quantities go to zero , of course ) . by choosing the time as an average waiting time for the step motion, one can estimate the sustained energy within the time scale for the step motion .as shown in fig.[fig : maintained_energy ] approximately of the energy flow is sustained in the active part for , where the average steps in directional motion becomes maximum .the mean waiting time is almost for parameter values adapted in fig.[fig : maintained_energy ] , which is much shorter than the time for the complete dissipation of the energy .now it is shown that the combination of chaotic motion ( for the loss of details of input ) and the restriction of dissipation ( that leads to directional motion ) gives a basis of our mechanism how the energy is transduced in a microscopic system under large fluctuations of their configurations . due to the chaotic motion , if the head shifts to the next site by crossing the energy barrier or not is probabilistic . here , the active part can sustain the excess energy effectively . therefore , the head comes close to the barrier several times even if it fails in the first trial .hence the motion to cross the barrier can be robust and adaptive .revisiting fig.[fig : jikeiretsu ] , it is noted that the amount of the stored energy in the pendulum decreases step by step , corresponding to the step motion of the head .the energy once extracted from the pendulum is partly restored after the completion of the step motion .the fact that the energy is restored to the pendulum allows the delayed use for the multiple step motion .thus , in our mechanism the energy is used step by step . to confirm the delayed use in multiple steps , we have measured , by restricting only to such case that multiple steps are observed ( fig.[fig : degree_by_degree ] ) .here we have computed up to the time of the first , second , and third steps respectively .these measurements show that the extracted energy from the pendulum up to the -step motion increases with .these results imply that the multiple step motion is supported by the delayed step - by - step use of the stored energy .this is why output motion with more and more steps is possible as the injected energy is increased . 
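the estimate for the sustained energy presumably compares the energy that has entered the active part with the energy already dissipated onward , evaluated at the mean waiting time before a step . a sketch assuming the ratio ( w_in ( tau ) - w_out ( tau ) ) / w_in ( tau ) , with made - up cumulative - work curves :

```python
import numpy as np

def sustained_fraction(W_in, W_out, t, tau):
    """fraction of the injected energy still held in the active part at time tau.

    assumes the estimate (W_in(tau) - W_out(tau)) / W_in(tau); W_in and W_out are
    cumulative-work time series sampled at the times t."""
    i = np.searchsorted(t, tau)
    return (W_in[i] - W_out[i]) / W_in[i]

# hypothetical cumulative-work curves: energy enters the active part quickly and
# leaks out to the residual chain on a slower time scale (illustrative only)
t = np.linspace(0.0, 50.0, 5001)
W_in  = 5.0 * (1.0 - np.exp(-t / 2.0))
W_out = 5.0 * (1.0 - np.exp(-t / 15.0))

tau_wait = 10.0      # stand-in for the mean waiting time before a step
print(f"sustained fraction at tau = {tau_wait}: {sustained_fraction(W_in, W_out, t, tau_wait):.2f}")
```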
in fig.[fig :degree_by_degree ] , one may note that the amount of the available energy in the pendulum for the next steps is smaller than that for the previous step .indeed , the extracted energy in fig.[fig : degree_by_degree ] is estimated to take a rather large value due to the timing to measure it , i.e. , is measured at the timing when the head crosses the energy barrier to move to the next site .then , the energy stored in the pendulum is expected to be larger than expected .this is because some part of the energy is restored to the pendulum when the head approaches the bottom of the potential for the next site , as is seen in several rises of after the first step motion shown in fig.[fig : jikeiretsu ] .in the paper , we have proposed a concept of autonomous energy transducer at a molecular scale . in this transducer , we have set the constraints ( i)-(iii ) in section 2 : output is produced , even though input energy is of the same order of the thermal energy , and even though the magnitude or the timing of input is not specified , nor any control after the input is introduced .this requisite is postulated so that the function of the autonomous energy transducers is robust at a molecular scale . we have shown a possible scheme that satisfies this requisite , by providing a simple example , with a dynamical system of several degrees of freedom . in the model ,the energy transducer is robust by the following two properties : the first one is chaotic dynamics .the chaotic dynamics amplifies difference of initial conditions by each input and thus detailed condition on the input event is lost .chaotic dynamics also brings mixing of orbits , and each orbit from different initial condition has a chance to visit a certain part of phase space that is necessary to generate directional motion .hence , an output motion can be obtained on the average , independently of details of the input .the second property is energy `` storage '' at `` active part '' in the system .the active part consists of several degrees of freedom , and is self - organized by absorbing energy from the input degrees of freedom .this active part is sustained over rather long time interval , with very weakly dissipating the energy . during this long time span, each orbit has more chances to visit the part of phase space inducing directional motion .then , the ratio of the events with the directional output motion is increased .in addition , a storage unit " , like a pendulum , enhances the potentiality of autonomous regulation of the system .the dynamic linking between the storage and the active parts makes restorage of the extracted energy possible , which makes the delayed use for multiple steps easier .such regulation would not be possible if the energy were simply provided to the active part , as assumed for a single conformational change per one chemical process . as a consequence of our scheme of energy transduction ,the output has large fluctuations by each event .there exists variance of the same order with the mean of the output .hence input - output relationship is not one - to - one , but the output from the same input is distributed .now , a mechanism is provided for a molecular machine with loose coupling between the input and the output , as was originally proposed by oosawa . 
for the autonomous energy transducer ,the output has to be extracted without specifying input conditions , even under the fluctuations of the same order of the input .hence , in order for the transducer to work at a molecular scale , several degrees of freedom are necessary to exhibit flexible change .the input energy is first stored into some of these degrees of freedom , and then is used step by step . accordingly , the output is not directly coupled to the input .in contrast with a tight - coupling machine consisting of `` rigid gears '' , the machine with this loose coupling is flexible .after the output is extracted from the transducer , a stationary final state is restored , that is identical with the initial state before the input , except the change for the output ( i.e. , translational motion ) . in the model we presented , such stationary stateis represented by a combination of stable two - body motions , that are almost decoupled each other . during the transduction process , however , several degrees of freedom in the internal states are coupled .this mode coupling leads to flexible dynamics on the one hand , but also brings about the directional motion , on the other hand .relevance of the change of effective degrees of freedom to energy transduction has also been discussed in relationship with chaotic itinerancy .although we presented our scheme by a specific model with a head and a rail , the mechanism proposed here is expected to be rather general .what we have required is just a dynamical system with several degrees of freedom , including those for input and output .there , the system changes effective degrees of freedom through the internal dynamics in the system .we note here that each degree of freedom we used in the model does not necessarily have to correspond to an atomic motion .rather , it may represent a collective variable consisting of a large number of atoms .it should be also noted that the present scheme for the energy transduction is not necessarily restricted to a langevin system with weak damping , but it is hoped that a model with overdamped dynamics will be constructed that realizes energy transduction with loose coupling by the present scheme . in this sense ,the concept of autonomous transducer at a molecular scale , which we have proposed here , would be general and applied widely .the importance of loose coupling mechanism lies in its adaptability to different conditions .although the motion is not precisely optimized with regards to the efficiency of the transduction , instead , the present transducer works even if external conditions are changed . as an extreme case ,consider such external condition that makes directional motion much harder .for example , by adding a load to the transducer , it is expected that the output motion is suppressed .then the question is raised if the fluctuation in the output may also be decreased .if the answer is in the affirmative , the coupling between input and output will be more tight , as added load is increased .in other words , a transducer with loose coupling can show tight coupling depending on external condition . on the other hand , a system designed to have a tight - coupling can not work as a machine with a loose - coupling , under the change of external condition .it just does not work when the condition is changed . to check this plasticity of energy transducer with loose - coupling ,we have carried out a numerical experiment of our model by adding a load . 
in order to study the effect of a load, we make a slight modification of the present model ; we reverse a condition for fixing the total system , i.e. , the head is fixed ( e.g. , by attached to a glass plate ) , and the rail ( chain ) is set free , and its center of mass can move . due to the friction from the heat bath ,the rail itself imposes a certain amount of load to the head .the amount of the load increases with the length of the rail given by . in fig.[fig :load - distribution ] , the step distributions are displayed for two different lengths of chain , ( and ) . when the length of the chain is short , the mean step size is multiple that is distributed rather broadly by samples , similarly to the original system ( refer fig.[fig : distribution ] ) . on the other hand , for a sufficiently long chain , the mean is still kept positive but becomes smaller in amount .moreover , the distribution gets much sharper .the fluctuation of the output is very small , i , e , the output is almost constant , as is expected in the tight coupling mechanism .such trend is also observed in the experiments of molecular motors .a good candidate of the autonomous energy transducer is protein . in applying the present idea to proteins ,it is interesting to recall that several reports show that protein is not rigid , but flexible .actin , a rail protein , is known to be not so stiff , as assumed in the chain of our model .some proteins are known to take several forms and spontaneously switch over these forms under thermal energy .recently , there are several reports on single - molecule protein dynamics , by adopting fluorescent techniques .some enzymes show long - term relaxation of a time scale of m - sec to seconds .in particular , oscillatory dynamics with order of seconds are observed in the conformational change of single enzyme molecules , which suggest cyclic dynamics over transient states .anomalously long - term energy storage is also observed by ishijima et al .there the chemical energy from atp hydrolysis is stored in the motor for 0.1 - 1sec and then the energy is used for the directional motion gradually .the energy is used step - by - step for multiple steps .oscillatory dynamics are also suggested in dynein motor .summing up , several experiments support long - term dynamics in the relaxation of protein after excitations , as assumed in the dynamics of our model .if the output is simply provided by chemical reaction , for instance by hydrolysis of atp , such dynamical behavior would not be possible . for the dynamics, the proteins in concern should possess a storage unit " between the domain for chemical reaction and the active part . in the present model ,the storage unit is a simple pendulum and is not directly contacted with the heat bath .this is just an idealization , and the storage unit need not be so simple .also , the obtained results would be valid qualitatively even though a weak contact with the heat bath is added . from our result, it is postulated that the storage unit should interact with the active part bi - directionally , and the dissipation of energy from the unit is rather weak .then , multiple step - by - step motion with restorage of the energy is possible .we hope that the storage unit that satisfies the above requisite is discovered in motor proteins . although the purpose of our paper is to propose a new general mechanism for loose coupling for energy conversion , application of the idea to a molecular motor may look straightforward . 
in doing so , however , one might claim that one should construct a model with overdamped dynamics , rather than a model with inertia .although it should be important to construct an overdamped - dynamics version of the present mechanism , still the present model may be relevant to some molecular motors , for the following reasons .first , it is not completely sure if the model for biomotor should be definitely overdamped . proteins there exhibit fluctuations of very slow time scales , while the dynamics to realize directional motion is considerably rapid .the time scale for the stepping motion is suggested to be faster than microsecond order and could be of nanosecond order , while the waiting time before stepping is milliseconds . in equilibrium , proteins show slow thermal fluctuations of the scale of millisecond or so . thus the time scale of the relaxation of protein might be slower than , or of the same order as the stepping motion , in contrast with the standard estimate .it is still open if the stepping of the protein motor should be definitely treated as a overdamped motion .it should also be noted that in the usual estimate for the damping , one uses the viscosity of macroscopic water , whereas if the water around a protein can be treated just as macroscopic water or not is an open question .second , as already mentioned , each variable in our model does not necessarily correspond to the atomic configuration of a protein .rather it can correspond to some mode at a mesoscopic scale , and the inertia term in our model does not directly correspond to that of an atomic motion .when the head swings along the chain , microscopic configurations do not necessarily return to the original , but may change along the relaxation .as long as some collective mode swings , the present mechanism works .a protein includes a large number of atoms . to understand energy transduction with such large molecule , it is important to use a reduced model with a smaller number of degrees of freedom representing a collective mode consisting of a large number of atoms .as a theory for loose coupling , thermal ratchet models of various forms is proposed , and they have captured some features of energy conversion . however , some problems remain .first , to attain a reasonable efficiency , one needs to assume a rather tuned timing for the change of switching of potential or the time scale of colored noise . when tuned , the energy conversion often becomes one - to - one .second , such external switching or a specific form of noise is given externally , and is not given within the theory in a self - contained form .on the other hand , in a feynmann - type ratchet , use of different temperatures is assumed , but the temperature localized at one part of a molecule is not necessarily well defined .some exogenous process is required .third , the ratchet model adopts statistical description , and a single event of dynamics is not pursued , while in the recent single molecule experiments , each event of the conversion from chemical energy of atp to mechanical work is pursued . whether the concept of autonomous energy transducer is valid for biomotor or not has to be judged in experiments .we discuss here possible predictions that can be tested experimentally in molecular motors . 
*mean step size per one atp hydrolysis is modified by the change of stiffness for the rail filament .( see fig.[fig : stiffness ] for the stiffness dependence of the mean step size ) .although it might be subtle which type of stiffness should be measured to compare with our prediction , it would be important to measure stiffness dependence in any form , as a first step .* mean step size is modified also by the change of the amount of energy from atp hydrolysis .if the amount of input energy is modified , the step size is predicted to change , roughly in proportion to it . although artificial modification of the available energy by atp is not easy experimentally , such modification of , might be already realized by the long term storage of energy in molecules .if the molecule can store the energy over several times of atp hydrolysis , the amount of available energy will simply increase .then , the step size realized in a single sequence would be observed to be significantly large , compared with the usual step size of molecular motors . in such cases , one - to - one correspondence would be lost between atp hydrolysis and one sequence of stepping .* molecular motors can step occasionally even without activation by atp hydrolysis .a rail element which has experienced stepping of motors accompanied by atp hydrolysis could possess long term memory and store a part of energy transduced from the motors during these events .then , a motor attached to this rail protein would later receive this stored energy and use it for the stepping motion .similarly , stepping events without atp may occur due to the energy propagated from the neighboring active motors with sufficient energy from atp hydrolysis . * more definite proof for the relevance of autonomous dynamics in the stepping motioncould be provided by direct observation of proteins dynamics .detection of oscillation in configuration over a few periods preceding to the stepping motion will confirm the validity of the autonomous dynamics to biomotors .measurement of conformational fluctuations with using frets or with using optical traps might be effective to explore this point , although for complete test of the theory , rather fine time precision , say nano or micro seconds , might be required . to close the paper, we note that design of a nano - scale machine will be interesting that works under the present mechanism .since our scheme is general and the model is simple , there can be several ways to realize the required situation . with the recent advances in nano - technology, the design of autonomous energy transducer at a molecular scale may be realized in near future .we are grateful for stimulating discussion with f.oosawa , t.yomo , m.peyrard and k.kitamura .we also thank m. peyrard for critical reading of this manuscript .the present work is supported by grants - in - aids for scientific research from japan society for the promotion of science and from the ministry of education , science and culture of japan .
we propose the concept of an autonomous energy transducer at a molecular scale , where output is produced with small input energy , of the same order as the thermal energy , without restrictions on the magnitude or timing of the input , and without any control after the input . as an example that satisfies these requisites , a dynamical systems model with several degrees of freedom is proposed , which transduces input energy to output motion on the average . it is shown that this transduction is robust and that the coupling between the input and output is generally loose . how this transducer works is analyzed in terms of dynamical systems theory , where chaotic dynamics of the internal degrees of freedom , as well as the duration of an active state that is self - organized by the energy flow , are essential . we also discuss possible relationships to enzyme dynamics and protein motors .
according to the central paradigm of classical cognitive science and to the church - turing thesis of computation theory ( cf ., e.g. , ) , cognitive processes are essentially rule - based manipulations of discrete symbols in discrete time that can be carried out by turing machines . on the other hand , cognitive and computational neuroscienceincreasingly provide experimental and theoretical evidence , how cognitive processes might be implemented by neural networks in the brain .the crucial question , how to bridge the gap , how to realize a turing machine by state and time continuous dynamical systems has been hotly debated by `` computationalists '' ( such as fodor and pylyshyn ) and `` dynamicists '' ( such as smolensky ) over the last decades . while computationalists argued that dynamical systems , such as neural networks , and symbolic architectures were either incompatible to each other , or the former were mere implementations of the latter , dynamicists have retorted that neural networks could be incompatible with symbolic architectures because the latter can not be implementations of the former ; see for discussion . moore has proven that a turing machine can be mapped onto a generalized shift as a generalization of symbolic dynamics , which in turn becomes represented by a piecewise affine - linear map at the unit square using gdel encoding and symbologram reconstruction .these _ nonlinear dynamical automata _ have been studied and further developed by . using a similar representation of the machine tape but a localist one of the machine s control states , siegelmann and sontaghave proven that a turing machine can be realized as a recurrent neural network with rational synaptic weights . along a different vain , deploying sequential cascaded networks , pollack and later moore and tabor introduced and further generalized _ dynamical automata _ as nonautonomous dynamical systems ( see for a unified treatment of these different approaches ) . inspired by population codes studied in neuroscience , schner and co - workers devised _ dynamic field theory _ as a framework for cognitive architectures and embodied cognition where symbolic representations correspond to regions in abstract feature spaces ( e.g. the visual field , color space , limb angle spaces ) . because dynamic field theory relies upon the same dynamical equations as _ neural field theory _ investigated in theoretical neuroscience , one often speaks also about _ dynamic neural fields _ in this context . in this communicationwe unify the abovementioned approaches . starting from a nonlinear dynamical automaton as point dynamics in phase space in sec .[ sec : nda ] , which bears interpretational peculiarities , we consider uniform probability distributions evolving in function space in sec . [ sec : dfa ] .there we prove the central theorem of our proposal , that uniform distributions with rectangular support are mapped onto uniform distributions with rectangular support by the underlying nda dynamics .therefore , the corresponding dynamic field , implementing a turing machine , shall be referred to as _ dynamic field automaton_. 
in the concluding sec .[ sec : discu ] we discuss possible generalizations and advances of our approach .additionally , we point out that symbolic computation in a dynamic field automaton can be interpreted in terms of contextual emergence .a nonlinear dynamical automaton ( nda : ) is a triple where is a time - discrete dynamical system with phase space ^ 2 \subset \mathbb{r}^2 ] for each bi - index .moreover , the cells are the domains of the branches of which is a piecewise affine - linear map when .the vectors characterize parallel translations , while the matrix coefficients mediate either stretchings ( ) , squeezings ( ) , or identities ( ) along the - and -axes , respectively .the nda s dynamics , obtained by iterating an orbit from initial condition through describes a symbolic computation by means of a generalized shift when subjected to the coarse - graining . to this end , one considers the set of bi - infinite , `` dotted '' symbolic sequences with symbols taken from a finite set , an alphabet . in eq .( [ eq : symseq ] ) the dot denotes the observation time such that the symbol right to the dot , , displays the current state , dissecting the string into two one - sided infinite strings with as the left - hand part in reversed order and as the right - hand part . applying a gdel encoding to the pair , where is an integer gdel number for symbol and are the numbers of symbols that could appear either in or in , respectively , yields the so - called symbol plane or symbologram representation of in the unit square . a generalized shift emulating a turing machine as the current tape symbol underneath the head and as the current control state .then the remainder of is the tape left to the head and the remainder of is the tape right to the head .the dod is the word of length . ]is a pair where is the space of bi - infinite , dotted sequences with and is given as with where is the usual left - shift from symbolic dynamics , dictates a number of shifts to the right ( ) , to the left ( ) or no shift at all ( ) , is a word of length in the domain of effect ( doe ) replacing the content , which is a word of length , in the domain of dependence ( dod ) of , and denotes this replacement function . from a generalized shift with dod of length nda can be constructed as follows : in the gdel encoding ( [ eq : goedel ] ) the word contained in the dod at the left - hand - side of the dot , partitions the -axis of the symbologram into intervals , while the word contained in the dod at the right - hand - side of the dot partitions its -axis into intervals , such that the rectangle ( ) becomes the image of the dod .moore has proven that the map is then represented by a piecewise affine - linear ( yet , globally nonlinear ) map with branches at .in general , a turing machine has a distinguished blank symbol , delimiting the machine tape and also some distinguished final states indicating termination of a computation .if there are no final states , the automaton is said to terminate with empty tape . by mapping through the gdel encoding ,the terminating state becomes a fixed point attractor in the symbologram representation .moreover , sequences of finite length are then described by pairs of rational numbers by virtue of eq .( [ eq : goedel ] ) . therefore , nda turing machine computation becomes essentially rational dynamics . 
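for a concrete feel for the symbologram construction described above, the following python fragment (ours, not from the paper) encodes a dotted symbol sequence into a point of the unit square. it assumes, for simplicity, the same alphabet size b on both sides of the dot, whereas the text allows different alphabet sizes for the two half-sequences, and it truncates the sequences to finitely many symbols.

```python
# a minimal sketch of the symbologram (goedel) encoding of a dotted sequence,
# assuming integer goedel numbers 0..b-1 and truncation to finitely many symbols.

def goedel(left, right, b):
    """map a dotted sequence ...a3 a2 a1 . b1 b2 b3... to a point (x, y) in the unit square.

    left  -- symbols left of the dot, nearest-first (i.e. already in reversed order)
    right -- symbols right of the dot, nearest-first (current state/tape symbol first)
    b     -- alphabet size (goedel numbers run from 0 to b-1)
    """
    x = sum(s * b ** -(k + 1) for k, s in enumerate(left))
    y = sum(s * b ** -(k + 1) for k, s in enumerate(right))
    return x, y

# example: binary alphabet, left part "10", right part "01"
print(goedel([1, 0], [0, 1], b=2))   # -> (0.5, 0.25)
```

fixing a finite prefix on either side of the dot and letting the remaining symbols vary then sweeps out exactly the rectangles discussed in the next paragraphs.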
in the framework of generalized shifts and nonlinear dynamical automata, however , another solution appears to be more appropriate for at least three important reasons : firstly , siegelmann further generalized generalized shifts to so - called analog shifts , where the doe in eq .( [ eq : genshift3 ] ) could be infinity ( e.g. by replacing the finite word in the dod by the infinite binary representation of ) .secondly , the nda representation of a generalized shift should preserve structural relationships of the symbolic description , such as the word semigroup property of strings .beim graben et al . have shown that a representation of finite strings by means of equivalence classes of infinite strings , the so - called cylinder sets in symbolic dynamics lead to monoid homomorphisms from symbolic sequences to the symbologram representation .then , the empty word , the neutral element of the word semigroup , is represented by the unit interval ] and the complete cylinder onto the cartesian product of intervals ^ 2 ] . fixing the prefixes of both part cylinders and allowing for random symbolic continuation beyond the defining building blocks , results in a cloud of randomly scattered points across a rectangle in the symbologram .these rectangles are consistent with the symbol processing dynamics of the nda , while individual points ^ 2 ] as to nda macrostates , distinguishing them from nda microstates of the underlying dynamical system . in other words ,the symbolically meaningful macrostates are emergent on the microscopic nda dynamics .we discuss in sec .[ sec : discu ] how a particular concept , called contextual emergence , could describe this phenomenon .from a conceptional point of view it does not seem very satisfactory to include such a kind of stochasticity into a deterministic dynamical system .however , as we shall demonstrate in this section , this apparent defect could be easily remedied by a change of perspective . instead of iterating clouds of randomly prepared initial conditions according to a deterministic dynamics , one could also study the deterministic dynamics of probability measures over phase space . at this higher level of description , introduced by koopman et al . into theoretical physics , the point dynamics in phase space is replaced by functional dynamics in banach or hilbert spaces .this approach has its counterpart in neural and dynamic field theory in theoretical neuroscience . in dynamical system theorythe abovementioned approach is derived from the conservation of probability as expressed by a frobenius - perron equation where denotes a probability density function over the phase space at time of a dynamical system , refers to either a continuous - time ( ) or discrete - time ( ) flow and the integral over the delta function expresses the probability summation of alternative trajectories all leading into the same state at time . in the case of an nda ,the flow is discrete and piecewise affine - linear on the domains as given by eq .( [ eq : ndamap ] ) . as initial probability distribution densities we consider uniform distributions with rectangular support , corresponding to an initial nda macrostate , where is the `` volume '' ( actually the area ) of and is the characteristic function for a set .a crucial requirement for these distributions is that they must be consistent with the partition of the nda , i.e. there must be a bi - index such that the support . 
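since the macrostates introduced here are uniform p.d.f.s whose rectangular supports must fit into one cell of the nda partition, and since (as the next paragraphs prove) each affine branch of the nda maps such a rectangle onto another rectangle, one dfa iteration can be sketched purely as bookkeeping on rectangles. the fragment below is our illustration, not code from the paper; the partition is assumed to be given by axis-aligned cut points, and the branch parameter names (ax, ay for the translations, lx, ly for the stretch/squeeze factors) are ours.

```python
# a minimal sketch of macrostates as rectangles: consistency with the nda partition
# and the image of a macrostate under one affine branch of the nda map.

import bisect

def cell_index(rect, x_cuts, y_cuts):
    """bi-index (i, j) of the partition cell containing the rectangle, or None if the
    rectangle straddles a cell boundary (i.e. the macrostate is not consistent)."""
    (x0, x1), (y0, y1) = rect
    i = bisect.bisect_right(x_cuts, x0)
    j = bisect.bisect_right(y_cuts, y0)
    x_hi = x_cuts[i] if i < len(x_cuts) else 1.0
    y_hi = y_cuts[j] if j < len(y_cuts) else 1.0
    return (i, j) if (x1 <= x_hi and y1 <= y_hi) else None

def dfa_step(rect, branch):
    """image of a uniform p.d.f. with rectangular support under one affine branch
    (x, y) -> (ax + lx*x, ay + ly*y); the image support is again a rectangle and the
    density is 1 / (area of the image rectangle)."""
    (x0, x1), (y0, y1) = rect
    ax, lx, ay, ly = branch["ax"], branch["lx"], branch["ay"], branch["ly"]
    return ((ax + lx * x0, ax + lx * x1), (ay + ly * y0, ay + ly * y1))

# example: a macrostate sitting in cell (0, 0) of a 2 x 2 partition, squeezed in x,
# stretched in y and translated by the (illustrative) branch acting on that cell
rect = ((0.0, 0.5), (0.0, 0.25))
print(cell_index(rect, [0.5], [0.5]))                                  # (0, 0)
print(dfa_step(rect, {"ax": 0.5, "lx": 0.5, "ay": 0.5, "ly": 2.0}))    # ((0.5, 0.75), (0.5, 1.0))
```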
inserting ( [ eq : iniuni ] ) into the frobenius - perron equation ( [ eq : froper ] ) yields for one iteration in order to evaluate ( [ eq : froper2 ] ) , we first use the product decomposition of the involved functions : with and where the intervals are the projections of onto - and -axes , respectively .correspondingly , and are the projections of onto - and -axes , respectively .these are obtained from ( [ eq : ndamap ] ) as using this factorization , the frobenius - perron equation ( [ eq : froper2 ] ) separates into } \delta(x - a^{\nu}_x - \lambda^{\nu}_x x ' ) u_x(x ' , t ) { \ , \mathrm{d}}x ' \\\label{eq : fpy } u_y(y , t + 1 ) & = & \int_{[0 , 1 ] } \delta(y - a^{\nu}_y - \lambda^{\nu}_y y ' ) u_y(y ' , t ) { \ , \mathrm{d}}y'\end{aligned}\ ] ] next , we evaluate the delta functions according to the well - known lemma where indicates the first derivative of in .( [ eq : evadelta ] ) yields for the -axis i.e. one zero for each -branch , and hence inserting ( [ eq : evadelta ] ) , ( [ eq : zeros ] ) and ( [ eq : slope ] ) into ( [ eq : fpx ] ) , gives } \frac{1}{\lambda^{\nu}_x } \delta\left ( x ' - \frac{x - a^{\nu}_x}{\lambda^{\nu}_x } \right ) u_x(x ' , t ) { \ , \mathrm{d}}x ' \\ & = & \sum_\nu \frac{1}{\lambda^{\nu}_x } u_x\left ( \frac{x - a^{\nu}_x}{\lambda^{\nu}_x } , t \right)\end{aligned}\ ] ] next , we take into account that the distributions must be consistent with the nda s partition . therefore ,for given there is only one branch of contributing a simple zero to the sum above .hence , the evolution of uniform p.d.f.s with rectangular support according to the nda dynamics eq .( [ eq : froper2 ] ) is governed by * proof ( by means of induction ) .inserting the initial uniform density distribution ( [ eq : iniuni ] ) for into eq .( [ eq : uniter ] ) , we obtain by virtue of ( [ eq : decounix ] ) deploying ( [ eq : charfn ] ) yields let now \subset [ 0 , 1] ] .therefore , the same argumentation applies to the -axis , such that we eventually obtain with the image of the initial rectangle .thus , the image of a uniform density function with rectangular support is a uniform density function with rectangular support again .assume ( [ eq : fpuniform ] ) is valid for some .then it is obvious that ( [ eq : fpuniform ] ) also holds for by inserting the -projection of ( [ eq : fpuniform ] ) into ( [ eq : uniter ] ) using ( [ eq : decounix ] ) , again .then , the same calculation as under 1 . applies when every occurrence of is replaced by and every occurrence of is replaced by . by means of this constructionwe have implemented an nda by a dynamically evolving field .therefore , we call this representation _ dynamic field automaton ( dfa)_. the frobenius - perron equation ( [ eq : froper2 ] ) can be regarded as a time - discretized amari dynamic neural field equation which is generally written as here , is the characteristic time constant of activation decay , denotes the synaptic weight kernel , describing the connectivity between sites and is a typically sigmoidal activation function for converting membrane potential into spike rate .discretizing time according to euler s rule with increment yields for and the amari equation becomes the frobenius - perron equation ( [ eq : froper2 ] ) when we set this is the general solution of the kernel construction problem .note that is not injective , i.e. 
for fixed arguments the kernel is a sum of delta functions coding the influence from different parts of the space [0, 1]^2. note further that higher-order discretization methods of explicit or implicit type, such as the runge-kutta scheme, could be applied to eq. ([eq : amari]) as well. but in this case the relationship between the turing dynamics as expressed by the frobenius-perron equation ([eq : froper]) and the neural field dynamics would become much more involved. we leave this as an interesting question for further research. in this communication we combined nonlinear dynamical automata, as implementations of turing machines by nonlinear dynamical systems, with dynamic field theory, where computations are characterized as evolution in function spaces over abstract feature spaces. choosing the unit square of ndas as feature space, we demonstrated that turing computation becomes represented as dynamics in the space of uniform probability density functions with rectangular support. the suggested framework of dynamic field automata may exhibit several advantages. first of all, massively parallel computation could become possible by extending the space of admissible p.d.f.s. by allowing either for supports that overlap the partition of the underlying nda or for multimodal distribution functions, one could prepare as many symbolic representations as one wants and process them in parallel by the dfa. moreover, dfas could easily be integrated into wider dynamic field architectures for object recognition or movement preparation. they could be programmed for problem solving, logical inference or syntactic language processing. in particular, bayesian inference or the processing of stochastic grammars could be implemented by means of appropriate p.d.f.s. for those applications, dfas should be embedded into time-continuous dynamics. this involves the construction of more complicated kernels through solving inverse problems along the lines of potthast et al. . we shall leave these questions for future research. the construction of dfas also has interesting philosophical implications. one of the long-standing problems in the philosophy of science has been the precise relationship between point mechanics, statistical mechanics and thermodynamics in theoretical physics: is thermodynamics merely reducible to point mechanics via statistical mechanics? or are thermodynamic properties, such as temperature, emergent on mechanical descriptions? according to the accurate analysis of bishop and atmanspacher , point mechanics and statistical mechanics simply provide two different levels of description: on one hand, point mechanics deals with the dynamics of microstates in phase space. on the other hand, statistical mechanics, in the formulation of koopman et al. (see sec.
[sec : dfa ]), deals with the evolution of probability distributions over phase space, namely macrostates, in abstract function spaces. both are completely disparate descriptions, neither reducible to the other. however, the huge space of (largely unphysical) macrostates must be restricted to a subspace of physically meaningful thermal equilibrium states that obey a particular stability criterion (essentially the maximum-entropy principle). this restriction of states bears upon a contingent context, and in this sense thermodynamic properties have been called _contextually emergent_ by . our construction of dfas exhibits an interesting analogy to the relationship between mechanical micro- and thermal macrostates: starting from the microscopic nonlinear dynamics of an nda, we used the frobenius-perron equation for probability density functions to derive an evolution law for macrostates: the time-discretized amari equation ([eq : amari]) with kernel ([eq : nftkernel]). however, with respect to the underlying nda, not every p.d.f. can be interpreted as a symbolic representation of a turing machine configuration. therefore, we had to restrict the space of all possible p.d.f.s by taking only uniform p.d.f.s with rectangular support into account. for those macrostates we were able to prove that the resulting dfa implements the original turing machine. in this sense, the restriction to uniform p.d.f.s with rectangular support introduces a contingent context from which symbolic computation emerges. (note that uniform p.d.f.s also have maximal entropy.) this research was supported by a heisenberg grant (gr 3711/1-1) of the german research foundation (dfg) awarded to pbg. preliminary results were presented at a special session ``cognitive architectures in dynamical field theory'', which was partially funded by an eucogiii grant, at the 2nd international conference on neural field theory, hosted by the university of reading (uk). we thank yulia sandamirskaya, slawomir nasuto and gregor schöner for inspiring discussions.
cognitive computation, such as language processing, is conventionally regarded as turing computation, and turing machines can be uniquely implemented as nonlinear dynamical systems using generalized shifts and a subsequent gödel encoding of the symbolic repertoire. the resulting nonlinear dynamical automata (nda) are piecewise affine-linear maps acting on the unit square, which is partitioned into rectangular domains. iterating a single point, i.e. a microstate, by the dynamics yields a trajectory of, in principle, infinitely many points scattered through phase space. therefore, the nda's microstate dynamics does not necessarily terminate, in contrast to its counterpart, the symbolic dynamics obtained from the rectangular partition. in order to regain the proper symbolic interpretation, one has to prepare ensembles of randomly distributed microstates with rectangular supports. only the resulting macrostate evolution then corresponds to the original turing machine computation. however, the introduction of random initial conditions into a deterministic dynamics is not really satisfactory. as a possible solution to this problem we suggest a change of perspective. instead of looking at point dynamics in phase space, we consider functional dynamics of probability distribution functions (p.d.f.s) over phase space. this is generally described by a frobenius-perron integral transformation that can be regarded as a neural field equation over the unit square as the feature space of a dynamic field theory (dft). solving the frobenius-perron equation shows that uniform p.d.f.s with rectangular support are again mapped onto uniform p.d.f.s with rectangular support. thus, the symbolically meaningful nda macrostate dynamics becomes represented by iterated function dynamics in dft; hence we call the resulting representation _dynamic field automata_.
as a concise abstract model , the concept of network captures the most essential ingredients of a complex system , namely , its basic component units and their interaction configuration .this advantage simple in form but powerful in modelling has attracted intensive studies of complex networks in a wide spectrum of contexts , ranging from natural sciences to engineering problems and human societies .roughly speaking , the investigations mainly fall into two categories : seeking the topological characteristics and their origins in one and understanding how they interact with the dynamical processes supported by the networks in the other . it has been found that topological characteristics , such as small - world and scale - free properties , are quite general ; they are common features in a large set of networks from various fields .moreover , they are closely related to the dynamical processes on the networks .illuminating examples among many others include epidemic spreading , to which the surprising implications of the scale - free property have been well illustrated ; and network synchronization , where the role played by the topology can be marvellously separated and appreciated by analyzing the master stability function .such progress has greatly enhanced our belief in the significance of identification and detection of these important topological characteristics .community is another common topological feature that exists in many complex networks .intuitively , a community refers to a set of nodes whose connections between themselves are denser than their connections to the nodes outside the set .community detection is very important in network studies , because communities usually govern certain functions as seen in many biochemical networks and social networks .communities also have important implications to the dynamical processes based on the networks , such as synchronization , percolation and diffusion .in addition , in networks of large size , community structure may serve as a crucial guide for reducing the network , which is believed to be helpful in shedding light on the most essential properties of a complex system . in view of the importance of the community structure ,there have been a lot of studies devoted to the issue of community detection .( see ref . for a recent and comprehensive review . )recently , attempts have also been made to extend the community detection methods developed in these studies to weighted networks and directed networks . however , community is not the only perspective for partitioning a network .for example , in a bipartite network , the best justified partition is to separate all the nodes into two groups such that nodes in one group only link to the nodes in the other .indeed , partition perspectives other than that of community is necessary in order to have a better understanding of both the structures of complex networks and the dynamical processes they support , as shown in by the study of synchronous motions on bipartite networks .an insightful idea is to partition a network into groups where nodes in each group share a similar connection pattern .as the connection patterns are various and can vary from group to group , this group model is very general and powerful in representing many different types of structures in a network .this idea has a long history .it was first introduced in social science by lorrain and white , where the nodes of similar connection pattern are referred to as being _structurally equivalent_. 
this idea has fruitfully led to the analysis of networks in social and computer science based on block modelling .a recent review can be found in ref . . in a recent study , newman and leicht came up with a novel and general partition scheme based on this idea .it divides a network into groups of similar connection pattern .the most striking advantage of their scheme lies in that it can be applied for seeking a very broad range of types of structures in networks without any prior knowledge of the structures to be detected .in addition , the algorithm thus developed is ready to be used for both the directed and undirected networks , and it is straightforward to generalize it to analyze weighted networks .the efficiency of the algorithm is also high in terms of computation complexity .recently , ramasco and mungan have analyzed this method in detail and devised a generalized newman and leicht algorithm based on their study .other than the newman and leicht algorithm and its variant , another intriguing and insightful scheme for partitioning a network into groups of similar connection pattern has also been developed based on the information theory .the newman and leicht theory assumes that in a group the total outgoing degree must be larger than zero .this assumption limits the application of their theory . in order to overcome this limitation ,it has been suggested in to deal with the incoming degrees , outgoing degrees , and bidirectional degrees separately . in this paper , we show that by assuming that all nodes in a group share the same _ a prior _ probability to connect unidirectionally to a given node ( see analysis in sec .iii ) , this problem can be solved straightforwardly .the algorithm we develop based on this assumption can be applied without any restriction on the degree distribution . moreover, the partition of a network given by our algorithm can be shown to be exactly the same as that of its complementary network ( see sec .this is required by the definition of a group of similar connection pattern .another advantage of our algorithm is that it allows an analysis of the heterogeneity effects , which reveals further useful information of the network structure . in addition to all of these, our algorithm shows clearly that it is the _ information _ whether there is a link between two given nodes , rather than the link exclusively ( if it exists between the two nodes ) , that contributes to the partition .the information that there is no link between two given nodes is important .this insight provides a new and different view for partitioning weighted networks .our algorithm also inherits all the advantages of that by newman and leicht . in the next section ,we first review briefly the theory by newman and leicht , and then point out the extent of its applicability .next , in sec .iii , we develop our algorithm based on the _ a priori _ probability assumption and discuss its properties .after that we present examples of various types of groups together with the analysis of two real networks .we discuss in sec .iv the role played by the involved heterogeneity effects , and show how a group partition can depend on it by the example of the karate network .finally , before summarizing the results of this paper , we discuss in sec . 
v how to extend our algorithm to weighted networks .in search of the structures in a network , a dilemma we often encounter is that we have to _ input _ initially what structures we are intending to look for but this information is however usually unavailable before the structures have been found successfully . as a result what we can find eventually may strongly depend on whether we have enough prior knowledge of the structures to be detected . to overcome this difficulty , newman and leicht insightfully focused on the groups of similar connection pattern . in their theory, the connection pattern for a group is specified by sets of parameters to be determined .initially , the information of these connection patterns is not required as input to the search algorithm thus designed ; rather , they are shaped up during the search process ( running of the algorithm ) and produced as outputs .finally , what the algorithm provides simultaneously is not only the best way for grouping the nodes , but also the common connection pattern that nodes in each group share .they made this possible by skillfully harnessing the probabilistic mixture models and the expectation - maximization algorithm .as the groups of similar connection pattern are effective in modelling various structures in networks , their algorithm is very general and has a wide application spectrum .the main points of the newman and leicht theory are as follows .( for the sake of convenience and clarity , we take the same notation as in throughout this paper . )let us consider a network of nodes belonging to groups .its connection configuration is given by the adjacency matrix .if there is a link between node and node then otherwise . in the newman and leicht theory , , and assumed to be known and used as the input for their algorithm . herethe number of groups is the only information needed in advance about the partition .if it is unavailable , it should be assumed or estimated based on other known information of the network .next , the connection configuration is assumed to be a realization of an underlying statistical model defined by two sets of probabilities denoted by and , respectively , with and .this statistical model assumes that each node has probability to fall in a group and for all nodes in that group they have the same probability closely related to to connect to a given node . here is equivalent to the portion of the outgoing links of group that connect to node .the outgoing links of group refers to the outgoing links that all nodes in group have . in this sense the connection pattern shared by all nodes in group .as long as and are known , together with the adjacency matrix as measured data , one can obtain the probability for observing the node being in the group , namely , and thus all the information about the group partition . here represents the group to which the node is regarded to belong in a certain partition ; we use and to denote and respectively .hence the key is to specify and .newman and leicht assumed that the right values of the elements of and are those that maximize the likelihood to observe the connection configuration and a certain partition , namely , or equivalently those that maximize its logarithm in this way , the problem is converted to a solvable fitting model problem with the help of the maximum likelihood method .the next task is then reduced to find and that satisfy this requirement . 
to proceed further, newman and leicht adopted a crucial simplification : they suggested instead to maximize the averaged over all possible partitions : as are summed out , this simplification allows one to write down analytically the solutions of and in terms of and , and develop an efficient iterative algorithm based on them . in detail , starting from and newman and leicht obtained and \label{eq26}\end{aligned}\ ] ] with then and that maximize were deduced in terms of and as where denotes the outgoing degree of node .( [ eq27 ] ) , ( [ eq28 ] ) and ( [ eq29 ] ) thus define the newman - leicht algorithm ( nla ) .it runs in an iterative way : at each step , the old values of the elements of , and are substituted into the right hand side of these equations to generate their updated values .the convergent result of then defines the connection patterns of groups and that of suggests grouping . in practice, the calculation converges rapidly .( we found that the convergence time goes as in all the networks we have analyzed with the nla , including those that are not presented in this paper . )it should be noted that in getting eqs .( [ eq28 ] ) and ( [ eq29 ] ) the following constraints imposed on and have been taken into consideration : and indeed , the results given by eqs .( [ eq28 ] ) and ( [ eq29 ] ) satisfy these requirements .in addition , the results of eq .( [ eq28 ] ) and eq .( [ eq29 ] ) are in consistency with the definitions of and . in particular , eq . ( [ eq29 ] ) makes it clear that is the expected portion of the outgoing links of group that connect to node . the definition of and the corresponding normalization condition imposed by eq .( [ eq211 ] ) imply that the partition given by the nla must be such that each group has at least one outgoing link .this constraint limits the application range of the nla .an example cited in ( see fig . 2 in )is a directed bipartite network which is reproduced in fig .according to the definition of a group of similar connection pattern , this network should be partitioned into two groups such that one contains the left two nodes and one contains the right two nodes , respectively . however , as the right group has no outgoing links , nla would suggest instead a partition into the upper two nodes and the lower two nodes , or the whole network as a single group .another example is the directed star as shown in fig .1(b ) ; nla partitions all nodes into one group though from the viewpoint of similar connection pattern or symmetry we expect the center node to be in one group and other peripheral nodes in another .in this section we present an expectation maximization algorithm that does not have any restriction on the degree distribution of a group .in addition , it also has many other advantages which will be discussed in the following sections .our method is in the same spirit as the nla , but the statistical model of the group is different . first let us suppose the network under consideration has nodes that belong to groups , and the connection configuration is given by the adjacency matrix .similarly , we assume , and are known and serve as the input .next , as in the nla , we assume that each node has probability to fall in group . in effect reflects the size of group , which is expected to be . 
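since the displayed eqs. (27)-(29) are only partly legible in this copy, the following python fragment gives our reading of the nla iteration from the verbal description of the newman-leicht mixture model; variable names pi, theta and q mirror the text, while the random initialization and the iteration count are arbitrary choices of ours. treat it as a sketch rather than the authors' reference implementation.

```python
# a minimal sketch of the newman-leicht em iteration (nla), as we read eqs. (27)-(29):
# q[i, r] -- probability that node i belongs to group r
# pi[r]   -- expected fraction of nodes in group r
# theta[r, i] -- fraction of the outgoing links of group r that point to node i

import numpy as np

def nla(A, c, n_iter=200, seed=0):
    """A: n x n adjacency matrix (0/1, directed); c: assumed number of groups."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    k_out = A.sum(axis=1)                                 # outgoing degrees
    q = rng.dirichlet(np.ones(c), size=n)                 # random soft memberships
    for _ in range(n_iter):
        pi = q.mean(axis=0)
        theta = (q.T @ A) / np.maximum(q.T @ k_out, 1e-300)[:, None]
        # responsibilities: log pr(node i in group r | A), up to normalization
        logq = np.log(pi + 1e-300) + A @ np.log(theta + 1e-300).T
        logq -= logq.max(axis=1, keepdims=True)
        q = np.exp(logq)
        q /= q.sum(axis=1, keepdims=True)
    return q, pi, theta
```

note that theta is normalized over the target nodes i, which is exactly the constraint that makes the nla require at least one outgoing link per group, as the bipartite and star counterexamples above illustrate.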
as any node must be in the network , we have however , to specify the connection pattern of a group , we take the _ a priori _ probability assumption instead .we assume that in a given group all its nodes share the same _ a priori _ probability , denoted by , to connect unidirectionally to a given node .as such should satisfy .we also assume that is independent of for ; namely , the probabilities for a node ( in group ) to connect to two different nodes are completely independent .the normalization condition for can be expressed as , where stands for the probability with which a node in group does not connect to node . as compared with the nla , here we need not introduce a normalization condition like eq .( [ eq211 ] ) ; can take any allowed value ( ) independently .it is this flexibility and adaptability that makes our algorithm applicable in principle to any network .now we follow the nla to develop the algorithm based on and . in order to introduce less notations , here we take all other symbols adopted in the nla except and maintain their original meaning ( with being replaced by where necessary ) .we also refer to our algorithm the _ a priori _ probability based expectation maximization algorithm ( apbema ) in the following .our starting point is the conditional probabilities and it should be stressed that the right hand side of eq .( [ eq33 ] ) accounts for not only the probability for the presence of a link ( ) but also that for a null link ( ) , hence honestly reflects the conditional probability for observing the configuration given by .as can be seen in the following , it also implies the null links are as _ equally _ important as links for partitioning a network , which agrees well with our intuition .our next task is to find and that maximize it can be rewritten as \label{eq35}\end{aligned}\ ] ] if we substitute eqs .( [ eq32 ] ) and ( [ eq33 ] ) into eq .( [ eq34 ] ) with here .apparently , it satisfies the normalization condition as required .now we are ready to obtain and that maximize with the _ only _ constraint .we set with being given by eq .( [ eq35 ] ) and the lagrange multiplier introduced . by solving the following equations we obtain and then we get the apbema defined by eqs .( [ eq36 ] ) , ( [ eq39 ] ) and ( [ eq310 ] ) .its iterative implementation is the same as that for the nla , hence it has the same efficiency in terms of computational complexity .also as in the nla , the convergent values of suggest the partition , and those of describe the connection patterns of groups .it is worthwhile noting that according to eq .( [ eq310 ] ) as expected .in addition , eq .( [ eq310 ] ) is consistent with the meaning of , namely , the probability with which a node in group is unidirectionally linked to node .this can be seen further from , which represents the averaged outgoing degree a node in group has . indeed , according to eq .( [ eq310 ] ) ( is the outgoing degree of node . ) the right hand side of eq .( [ eq311 ] ) is exactly the expected outgoing degree of a node in group . to summarize ,our algorithm is based on the probability assumption .it is this difference in the meaning between and that makes the apbema radically different from the nla despite their similarity in form .the apbema developed previously has the following properties : \(i ) _ applicable without any restriction on the degree distribution_. 
even in the trivial and less meaningful example where the network contains some isolated nodes the apbema can successfully assign them into one group , say group , that is characterized by . for the examples shown in fig .1 , the apbema partitions them without any ambiguity in the sense that the output values of and are all virtually zero or one . for the directed bipartite network shown in fig .1(a ) it suggests the left two nodes in one group and the right two in another while for the directed star ( fig .1(b ) ) it separates the center node from the rest just as expected .( to apply the apbema to these two networks , the number of groups has been assumed to be . )\(ii ) _ suggesting the same partition for the complementary network_. by the complementary network of a network specified by the adjacency matrix , we mean the network which has the same nodes but its adjacency matrix is related to via .namely , a link in network is a null link in its complementary network and vice versa .obviously , a group in characterized by ( ) is still a group in with according to the definition of group . hence an algorithm aiming at identifyingthe groups of similar connection pattern should suggest the same partition for both a network and its complementary network .this is the case for apbema , which is guaranteed by the symmetry of , , and in eqs .( [ eq36 ] ) , ( [ eq39 ] ) and ( [ eq310 ] ) .this symmetry also implies that null links play the same important role as links in partitioning a network .a further discussion will be given in sec .v. \(iii ) _ applicable to both directed and undirected networks_. although the apbema we obtain here is for directed networks , it can be extended without any modifications in form to undirected networks .the argument is similar to that given in : in an undirected network , is still the probability for a node in group to connect to node ; the probabilities for there is and there is no link between node and node are and , respectively .hence which is the same as eq . ( [ eq33 ] ) .( has been used . )other derivations are then exactly the same as in the directed case .\(iv ) _ powerful in accounting for the heterogeneity effects on grouping . _the apbema allows us to prescribe the involved heterogeneity effects of the outgoing degree distribution .this can be done by conveniently introducing a tunable parameter to the apbema . with this extension, we can study how the degree heterogeneity may affect the grouping results in a controlled way . in the situations where we desire to bias the heterogeneity effects on the groupingthis extended algorithm would be superior .this algorithm will be discussed in detail in sec .\(v ) _ applicable to weighted networks_. with a straightforward extension , the apbema can also be used to analyze weighted networks .a detailed discussion will be presented in sec . v.\(vi ) _ the same efficiency as the nla in terms of computational complexity_. to show how well the apbema works , we present in this subsection several typical examples . just as in the nla , besides the adjacency matrix we also need to set the number of groups , , as another input . for all the examples throughout this paper we assume that this information has been known . 
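as a concrete illustration of the algorithm and of property (ii), here is a minimal python sketch of the apbema; it follows our reading of eqs. (36), (39) and (310), i.e. a bernoulli mixture over the rows of the adjacency matrix in which null links enter the likelihood on the same footing as links. the toy bipartite example is ours, and since em only finds local maxima of the likelihood, a few random restarts (different seeds) may be needed to reach the partition discussed in the text. the later sketches in this section reuse this function.

```python
# a minimal sketch of the a priori probability based em algorithm (apbema):
# theta[r, i] -- probability that a node in group r connects to node i
# the update theta[r, i] = sum_j q[j, r] * A[j, i] / sum_j q[j, r] is our reading of eq. (310).

import numpy as np

def apbema(A, c, n_iter=200, seed=0):
    """A: n x n adjacency matrix (0/1, directed or undirected); c: number of groups."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.dirichlet(np.ones(c), size=n)
    for _ in range(n_iter):
        pi = q.mean(axis=0)                                   # group sizes
        w = np.maximum(q.sum(axis=0), 1e-300)                 # soft group occupancies
        theta = np.clip((q.T @ A) / w[:, None], 1e-12, 1 - 1e-12)
        # responsibilities from links *and* null links
        logq = (np.log(pi + 1e-300)
                + A @ np.log(theta).T
                + (1 - A) @ np.log(1 - theta).T)
        logq -= logq.max(axis=1, keepdims=True)
        q = np.exp(logq)
        q /= q.sum(axis=1, keepdims=True)
    return q, pi, theta

# toy directed bipartite network as in fig. 1(a): two "senders", two "receivers"
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
Abar = 1 - A - np.eye(4, dtype=int)        # complementary network, no self-loops
q1 = apbema(A, c=2)[0].argmax(axis=1)
q2 = apbema(Abar, c=2)[0].argmax(axis=1)
# when the runs reach the global maximum, nodes {0, 1} and {2, 3} end up in different
# groups for both A and Abar (the group labels themselves may be permuted)
print(q1, q2)
```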
in particular , we set in all other examples except for the case of the american college football teams where is assumed .network constructed according to the definition of group .the network contains nodes which by construction are divided into two sets of equal size .in each set the nodes are randomly connected with the average intra - group degree , and between the two sets the links are randomly connected with the average inter - group degree .the error rate by the apbema is shown as a function of the inter - group degree .the two sets are successfully recognized for and when the group structure is clear . ]the first example is a homogeneous undirected network .we simply divide nodes into two sets of equal size and in each of them nodes are randomly intra - connected with the average intra - degree . after that the inter - group links are randomly added with the average inter - group degree .obviously , these two sets are two groups according to the definition , and when ( ) they are assortatively ( disassortatively ) connected . in practice, the larger the difference between and is , the clearer the group structure would be , and the easier it should be to detect the groups . the results for , against are summarized in fig .we find that the apbema works well : it identifies successfully both the assortatively and disassortatively linked groups when their structures are clear . if and are too close it fails just as expected .it is interesting to note that when the two groups can be seen as two communities .this fact suggests that in the cases when groups and communities overlap with each other in a network the apbema can be used to detect communities as well .given this , it is expected that for , when the network becomes bipartite - like , the apbema works equally well .this is because the complementary network in this case is a community network , and as having been pointed out in the last subsection , the apbema is symmetric for a network and its complementary network .indeed , such a symmetry has manifested itself clearly on the error rate curve presented in fig .2 . to measure the error of group detection, we define the error rate as the sum of the portions of nodes wrongly partitioned into the opposite group : where ( ) is the number of nodes in the first ( second ) group and ( ) the number of nodes belonging to group 1 ( 2 ) but are assigned to group 2 ( 1 ) by the algorithm .if the nodes are randomly assigned to each group , or all nodes are simply regarded as belonging to a single group , the error rate so defined takes the value one and implies a complete detection failure .it is zero only when all the nodes are correctly grouped . to suppress the fluctuations , for every data point presented in fig .2 we have averaged the error rates evaluated over 1000 realizations of the network .we have also checked that with other definitions of the detection error , for example , that used in ref . , which is based on the normalized mutual information , the results are qualitatively the same .this is also the case for all other examples throughout this paper where the error rate is evaluated .network constructed according to the definition of group .the error rate ( solid dots ) is for the group detection result by the apbema in identifying a fully connected clique of nodes immersed in a randomly connected background of 63 nodes whose average degree is varied for investigating how the error rate depends on it . 
for the apbema works very well ( the error rate is smaller than ) , and the error rate due to wrongly partitioning the clique nodes into the background ( open squares ) is small and can be neglected . in this casethe error rate is mainly contributed by wrongly partitioning the background nodes into the clique as a result of fluctuations in building the network . ] in our second example the groups are connected in a way neither purely assortative nor purely disassortative .first we build a random homogeneous and undirected network of nodes with the average degree , then we chose from them nodes randomly and fully connect them to form a clique .we then have two sets of nodes : the clique , whose nodes have an average degree , and the one consists of the rest nodes which we call the background , whose nodes have an average degree . we restrict ourselves to the case , namely , the degrees of the nodes in the clique are much larger than those in the background , thus making the clique quite outstanding to the background .hence the network under consideration is in fact highly heterogeneous .it should be pointed out that in this case the communities occasionally formed in the background due to fluctuations can be neglected , and according to the definition the clique and the background are two groups since nodes in themselves share the same connection pattern that can be appropriately specified in terms of .furthermore , this network is neither assortative nor disassortative ; it is not a community network either because the background nodes are connected between themselves the same densely as they are connected to the clique nodes . in fig .3 the partition results by the apbema for and are shown against the average degree of the background nodes , . it can be seen that for it gives the correct partition perfectly . in fact, the apbema works well all the way up to with the error rate smaller than 10% .as is increased further the clique becomes less distinct from the background , and the fluctuations in the background begin to play a role . as a resultthe error rate starts to increase quickly .further investigations show that for the detection error due to wrongly partitioning the clique nodes into the background ( open squares in fig .3 ) , namely in eq .( [ eq313])(subscript 1 ( 2 ) indicates the clique ( background ) ) , is very small and can be safely neglected .the detection error is mainly contributed by wrongly partitioning the background nodes into the clique in certain network realizations due to fluctuations where the wrongly partitioned background nodes happen to have a higher degree and more links connecting to the clique nodes . on average the total number of the wrongly partitioned nodes ( mainly from the background to the clique )is about and for and respectively . in this calculation1000 realizations of the network are considered again to average the error rate . which regards nodes sn89 and pl belonging to the opposite subdivision but all others nodes to their own subdivisions .this is one real network example where the apbema can be used to detect the community structure . ]the network studied in this example could be relevant for studying some real networks containing cliques .the success of the apbema is a good indication of the flexibility and adaptability of the probability assumption , and suggests that the apbema may find some unique applications in certain partition problems . 
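for completeness, here is a sketch of how the two synthetic test cases above can be generated and scored; it reuses the apbema sketch given earlier. the specific sizes and degrees below are illustrative, since several numerical values are not legible in this copy, and the error rate follows the definition above (sum of the misassigned fractions of the two groups, minimized over the two possible matchings of group labels).

```python
# sketches of the two synthetic test networks and of the error rate used to score them.

import numpy as np

def two_group_network(n, k_intra, k_inter, seed=0):
    """undirected 0/1 adjacency; nodes 0..n/2-1 form group 1, the rest group 2."""
    rng = np.random.default_rng(seed)
    half = n // 2
    p_in, p_out = k_intra / (half - 1), k_inter / half     # link probabilities
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < half) == (j < half)
            if rng.random() < (p_in if same else p_out):
                A[i, j] = A[j, i] = 1
    return A

def clique_in_background(n_background, m_clique, k_background, seed=0):
    """undirected 0/1 adjacency; the first m_clique nodes are fully interconnected."""
    rng = np.random.default_rng(seed)
    n = n_background + m_clique
    p = k_background / (n - 1)                 # background link probability
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1); A = A + A.T             # symmetrize, no self-loops yet
    A[:m_clique, :m_clique] = 1                # fully connect the clique
    np.fill_diagonal(A, 0)
    return A

def error_rate(labels, true):
    """misassigned fraction of each true group, summed, minimized over label swaps."""
    labels, true = np.asarray(labels), np.asarray(true)
    err = lambda lab: sum((lab[true == g] != g).mean() for g in (0, 1))
    return min(err(labels), err(1 - labels))

A1 = two_group_network(n=64, k_intra=8, k_inter=2)           # illustrative parameters
print(error_rate(apbema(A1, c=2)[0].argmax(axis=1), np.array([0] * 32 + [1] * 32)))

A2 = clique_in_background(n_background=63, m_clique=12, k_background=6)
print(error_rate(apbema(A2, c=2)[0].argmax(axis=1), np.array([0] * 12 + [1] * 63)))
```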
in general , in a community network the nodes in a communitymay not share the same connection pattern .in such cases the group partition can be different from that of the community partition .such an example will be discussed in the next section . however , in the cases where they do share the same connection pattern , or approximately do , our algorithm can then be used to find the community structure .this has been seen in the first example ( fig .2 ) when the two groups are assortatively connected . in the followingwe show two examples of real community network where the partition result given by our algorithm is in good agreement with the community partition .the first one is a network of bottlenose dolphin living in doubtful sound , new zealand which is composed of 62 dolphins ( nodes ) and 159 social ties ( edges ) .it is assembled by researchers over years ( fig .during the course of the investigation of this network , it split into two disjointed subdivisions of unequal size ( represented by solid squares and solid dots in fig .4 respectively ) following the departure of a key member named sn100 ( denoted by the open dot in fig .4 ) . the group partition provided by the apbema corresponding to the largest value of agrees very well with the natural splitting except two nodes named pl and sn89 .is represented by the clusters .stars stand for the `` ia independence '' conference which are scattered due to their sparser connections inside . in this casethe groups given by the apbema coincide with the communities very well despite the scattering of the `` ia independence '' conference .this is another example in addition to the dolphin network ( see fig .4 ) where the apbema can be used to detect the community structure . ]the second example is the network of the american college football teams .the network is a map of the schedule of division i games for the 2000 season where 115 nodes represent the teams and 616 edges represent regular - season games between the two teams they connect .all 115 teams are organized into 12 conferences each of which contains about 8 - 12 teams . as games are usually more frequent between members of the same conference than between members of different conferences , most conferences can be seen as communities .but because there are few of them whose teams played more or nearly as many games against teams in other conferences than / as those in their own conference , the network structure does not reflect the genuine conference structure perfectly .the partition suggested by apbema is presented in fig .( the number of the groups is assumed to be as input . )it can be seen that the group structure suggested has a fairly accurate coincidence with that of the conference . in particular ,five groups ( the top five ) are completely the same as the corresponding conferences without any nodes wrongly assigned to / from other conferences , and five others have only one or two nodes being assigned to / from other conferences .the most obvious mismatch lies in the partition of the conference `` ia independence '' .its members , central florida , connecticut , navy , notre dame and utah state ( denoted by stars in fig .5 ) are assigned to other groups rather than in their own .considering the fact that they have more games in the conferences they are assigned to than in their own , this is reasonable and somehow expected . 
to summarize this subsection, the apbema performs well in identifying various structures in a network .more examples and further discussions of the presented ones will be given in the following sections .in this section we study how the degree heterogeneity may affect the grouping results .theoretically this problem is interesting as it is related to a general issue in network study , namely , whether / how two different types of topological characteristics are coupled .obviously , in the apbema the coupling between the degree distribution and the group structure is inherent : the apbema suggests the grouping based on the connection patterns it recognizes , but the connection patterns are in turn evaluated based on the outgoing degrees . the close relation between the connection patterns ( given by ) and the outgoing degrees , ,can be seen clearly in eq .( [ eq311 ] ) .then the next question for our aim here is how the apbema captures the degree heterogeneity .a key observation is that the apbema models the network in a coarse - graining way .it uses the groups as the ` patches ' to represent different parts of the network , hence in effect the network is characterized at two different levels . at the lower level , namely inside each group , the apbema has assumed that all nodes are identical and statistically independent .therefore the structure of a group , its degree distribution as well , has been assumed to be homogeneous .so at this level the heterogeneity is not captured by the apbema , which can be seen as a simplification adopted by the apbema .the difference between the outgoing degree of a node from its expected value ( i.e. , see eq .( [ eq311 ] ) ) in a group is treated by the apbema as a result of the statistical fluctuations . however , at the level of groups the apbema is flexible .it allows the statistical characteristics of the groups to vary from group to group so that the local structures of the network are given the best matching .therefore it is at this level that the heterogeneity is taken into account by the apbema . with this understandingwe may imagine that the apbema tries to mimic the degree distribution function with a series of peak - like functions .each peak - like function corresponds to a homogeneous degree distribution in a group , and its position represents the average outgoing degree of the group . hence if the network is heterogeneous , then the heterogeneity would be characterized by the distances between these peaks .a good example is the network studied in fig . 3 ; its degree distribution function happens to be one of two narrow peaks representing the clique and the background .the distance between them tells directly how heterogeneous the whole network is . 
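the "peak positions" discussed here, i.e. the expected outgoing degrees of the groups, are directly available from the converged memberships; a small sketch (ours, reusing the apbema function above) shows how their spread around the network average can serve as a simple indicator of how much degree heterogeneity the algorithm has absorbed into the groups.

```python
# per-group expected outgoing degrees, i.e. the positions of the "peaks" discussed above.

import numpy as np

def group_degrees(A, q):
    """expected outgoing degree of a node in each group, given soft memberships q."""
    k_out = A.sum(axis=1)
    return (q.T @ k_out) / np.maximum(q.sum(axis=0), 1e-300)

# usage with the apbema sketch above:
# q, pi, theta = apbema(A, c=2)
# print(group_degrees(A, q), A.sum(axis=1).mean())   # group peaks vs. network average
```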
for a more general degree distribution function , though it is hard to infer all the information of the heterogeneity based on the distances between these peak - like functions , they are still a good indicator of it .another ( opposite ) extreme case is for the homogeneous networks , see for example the one presented in fig.2 , where all these peak - like functions overlap with each other and the distances between them are all zero .what we have learned here implies that if we can appropriately preset the positions of these peak - like functions , namely the average outgoing degrees of the groups , then we can interfere the way the apbema considers the heterogeneity effects .our aim in this section is to develop such an algorithm .for example , if all the average outgoing degrees are taken to be equal , then we have in effect suppressed the heterogeneity effects to be considered completely .this extreme case will be discussed in the first subsection in the following .the apbema discussed in sec .iii has taken into account the heterogeneity effects as fully as it can , so it stands as another extreme . in the second subsectionwe will discuss how to introduce a control parameter to build an interpolating algorithm such that the heterogeneity effects involved can be tuned between these two extremes continuously .then we will show in the third subsection by the example of the karate network how the heterogeneity plays its role in grouping .a comparison with the dolphin network will reveal an interesting underlying structural difference between the two networks . as discussed in sec .iii , gives the expected outgoing degree for a node in group .if we assume that all the nodes , regardless of which group they belong to , have the same expected outgoing degree , then should satisfy where is the average outgoing degree over the whole network . with this consideration , we can build up a grouping algorithm where the effect of heterogeneity is completely suppressed .first we start from eqs .( [ eq32 ] ) and ( [ eq33 ] ) and get as in eq .( [ eq35 ] ) and as in eq .( [ eq36 ] ) , namely , again. then we can get and with constraints of and those imposed by eq .( [ eq41 ] ) by setting and requiring that the partial derivatives of with respect to its variables to be zero . and serve as lagrange multipliers of the constrains .it leads to and with we refer to this algorithm defined by eqs .( [ eq42])-([eq45 ] ) the heterogeneity suppressed algorithm ( hsa ) . as expected ,if we impose zero to all , then the apbema is retrieved .compared with the apbema , the change in form of the hsa caused by makes its implementation different : here in fact two cycles of iteration , the outer one and the inner one , are involved . at each step of the outer cycle , we update and via eqs .( [ eq42 ] ) and ( [ eq43 ] ) first , then we come into the inner cycle given by eqs .( [ eq44 ] ) and ( [ eq45 ] ) with which the values of and are iterated till they converge .then a whole step of the outer cycle is finished .the outer cycle is continued till all the values of , , and become stable .we notice that among various ways to perform the inner iteration according to the equivalent transforms of eqs .( [ eq44 ] ) and ( [ eq45 ] ) the one given by eqs .( [ eq44 ] ) and ( [ eq45 ] ) is the best : it converges in all the cases we have ever tested and the running time is the shortest .( we find the running time also scales with as but is about two times of that consumed by the nla and apbema . 
)now we have two extreme algorithms at hand : in one ( the apbema ) the heterogeneity is given full consideration and in another ( the hsa ) it is completely suppressed .inspired by the way we construct the hsa , we realize that an ` interpolating ' algorithm bridging the two extremes can be created by introducing a tunable parameter into eq . ( [ eq41 ] ) such that with now is the average outgoing degree we impose on the group , and the parameter prescribes the weight of the heterogeneity . for , , then no difference of the expected outgoing degrees between the groupsis considered ; eq .( [ eq46 ] ) is then reduced to eq .( [ eq41 ] ) . for , , which is exactly the average outgoing degree of group when the heterogeneity is fully considered ; it is then reduced to eq .( [ eq311 ] ) . for other values of ( )the average outgoing degree takes the linear interpolating values between and as a result .following the derivations as in the hsa , the solution of and under constraints and are still given by eqs .( [ eq42])-([eq44 ] ) , but now reads instead .it is easy to show that for it reduces to eq .( [ eq45 ] ) and the hsa is retrieved , and for as we have the apbema again . for thus have an intermediate algorithm in between where only partial effects of heterogeneity are considered , hence in effect it is a heterogeneity weighted algorithm ( hwa ) . by changing can therefore conveniently adjust the degree of heterogeneity involved and investigate how it may affect the grouping results .the numerical implementation of this algorithm is the same as the hsa . as a trivial testthis heterogeneity weighted algorithm has been applied to the example in fig .2 . as it is a homogeneous network , we can expect that weighting the heterogeneity will not produce any effects .namely , the partition results shown in fig .2 does not depend on .another trivial test is the clique - background network studied in fig .3 . as in this examplethe groups are characterized by their own average degrees , we may expect that suppressing the heterogeneity effects may blur the line of distinction of the two groups and hence cause a detection deterioration .these conjectures have been fully verified by our simulations ( the data of which are not shown here ) . in the followingwe will consider some more meaningful and inspiring examples .in particular we will apply the hwa to two real social networks .interesting results will be discussed in detail . in ref . , zachary reported an anthropological study of a karate club in a university . during the development of the club , two groups led by the instructor and the president formed gradually and in the end , due to the lack of a solution to a dispute , the club split . 
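the lagrange-multiplier inner iteration of eqs. (44)-(45) is not fully legible in this copy, so the fragment below does not reproduce it; it only illustrates the interpolated degree constraint of eq. (46) (with our parameter name alpha for the heterogeneity weight) by crudely rescaling the unconstrained m-step to the imposed target degrees, together with the warm-started sweep over alpha that is used in the next subsection to trace a partition across heterogeneity weights. treat it as an illustration of the idea, not the paper's exact algorithm; it reuses the apbema e-step from the earlier sketch.

```python
# a sketch of the heterogeneity weighted idea: alpha = 0 imposes the common
# network-average out-degree of the hsa, alpha = 1 reproduces the unconstrained apbema
# m-step, and intermediate alpha interpolates the imposed group degrees linearly.

import numpy as np

def hwa_m_step(A, q, alpha):
    n = A.shape[0]
    k_out = A.sum(axis=1)
    w = np.maximum(q.sum(axis=0), 1e-300)                # soft group occupancies
    d_bar = k_out.mean()                                 # network average out-degree
    d_grp = (q.T @ k_out) / w                            # group average out-degrees
    d_target = (1 - alpha) * d_bar + alpha * d_grp       # interpolated constraint
    theta = (q.T @ A) / w[:, None]                       # unconstrained update
    theta *= (d_target / np.maximum(theta.sum(axis=1), 1e-300))[:, None]  # crude rescaling
    return np.clip(theta, 1e-12, 1 - 1e-12), q.mean(axis=0)

def trace_partition(A, c, alphas, n_iter=100, seed=0):
    """sweep alpha in small steps, warm-starting each em run from the previous one,
    so that a given local maximum of the likelihood can be followed as alpha changes."""
    rng = np.random.default_rng(seed)
    q = rng.dirichlet(np.ones(c), size=A.shape[0])
    partitions = []
    for alpha in alphas:
        for _ in range(n_iter):
            theta, pi = hwa_m_step(A, q, alpha)
            logq = (np.log(pi + 1e-300)
                    + A @ np.log(theta).T
                    + (1 - A) @ np.log(1 - theta).T)
            logq -= logq.max(axis=1, keepdims=True)
            q = np.exp(logq); q /= q.sum(axis=1, keepdims=True)
        partitions.append(q.argmax(axis=1).copy())
    return partitions

# e.g. trace_partition(A, c=2, alphas=np.linspace(0.0, 1.0, 21))
```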
in recent years , the network of this karate club has been widely used for testing various community finding techniques , including the nla in where it has been found that the result of the nla is in good agreement with the true splitting .( a ) and ( b ) of the heterogeneity weighted algorithm ( hwa ) .the groups are distinguished by different symbols representing the nodes .the partition in ( b ) shows the groups may not be identical with the communities in a community network.,title="fig : " ] ( a ) and ( b ) of the heterogeneity weighted algorithm ( hwa ) .the groups are distinguished by different symbols representing the nodes .the partition in ( b ) shows the groups may not be identical with the communities in a community network.,title="fig : " ] to apply our heterogeneity weighted algorithm , it is found that for , namely the heterogeneity effects are completely suppressed , the partition result is the same as that given by the nla ( fig .6(a ) ) . but for ( fig .6(b ) ) , when the heterogeneity effects are fully considered , it suggests that those dominant nodes ( open dots in fig .6(b ) ) belong to one group and the others belong to another group .such a result ( fig .6(b ) ) is not surprising because nodes in each group are indeed much more , which agrees better with our definition of group .for example , nodes in each group have more similar degrees ; they have the similar connection pattern as well : in the dominant group nodes are weakly connected to each other and serve as the branches of the whole network , while in the other group nodes are only sparsely connected between themselves and look like leaves attached to the dominant group . this partition is also meaningful in reality : it recognizes the leaders and coordinators from the other members .it is important to note that from a different viewpoint based on the information theory , similar partition result has been obtained ( see fig .4b in ) .this example shows clearly that the groups of similar components may not be the same as the communities in a community network . in order to have a better understanding of the network structure , analysis of bothis necessary .now let us look at what happens if the weight of the heterogeneity is changed .starting from , each time we increase with a small step and then iterate the stabilized results of , and obtained at until they converge .in this way , we can trace the partition shown in fig .6(a ) up to .similarly , starting from , the partition shown in fig .6(b ) can be traced back up to close to zero .the values of evaluated by eq .( [ eq35 ] ) that correspond to these two groupings are presented in fig .we can find that the corresponding value for the partition in fig .6(a ) changes only very slightly during this process , but that for the partition in fig .6(b ) is , first , smaller when is close to zero , but it increases continuously with and at it begins to become larger .for , the fact that the partition of fig .6(a ) can still be traced suggests that the corresponding value of is , though not global , still a local maximum as well .( as both partitions coexist for our algorithm as maxima of , we believe that a network analysis by the expectation maximization method would be more powerful if local maxima solutions other than that of the global maximum are considered in addition . )values corresponding to the two groupings shown in fig . 
6 are presented as functions of , the weight of the heterogeneity .they are two maxima and intersect at .it suggests that when the heterogeneity effects are suppressed ( ) the partition as in fig .6(a ) is preferred but when the heterogeneity effects are more fully considered ( ) the partition as in fig .6(b ) is recommended instead .it shows that a group partition can depend on the heterogeneity effects strongly . ]7 shows clearly the important role played by the heterogeneity in the definition and detection of the groups and communities . in this examplewe have both groups and communities .as they are identical for , that is where our algorithm can be used to detect the communities .if we insist that only the solution corresponding to the global maximum of defines the groups , then they are different from the communities when .on the other hand , as sets the weight of the heterogeneity to be considered , this tunable algorithm is quite flexible and may find some interesting applications in practice , in particular in those situations where we wish to stress or weaken the effects of the heterogeneity on purpose .next let us cite the social network of dolphin as a comparison . in fig .8 the three largest maxima of value are shown as functions of the weight of the heterogeneity .there are not any intersections between them .this fact may suggest that we have a unique grouping and it is robust to the heterogeneity .this is verified by the careful investigation that shows the partitions corresponding to these curves indeed do not change with .the groupings corresponding to the largest two maxima are given by fig .4 and fig . 9respectively . a comparison between these two partitions is interesting : the only difference lies in the node pl . on one handthe nuance between their values may be a signature that our algorithm lacks confidence in partitioning node pl due to its special role in between the two subdivisions , and on the other hand their overwhelming agreement may suggest that our algorithm is quite confident in partitioning all other nodes except pl . this is consistent with the big gap between the second and the third maxima of , which indicates that our algorithm would prefer to discard any other groupings except those shown in fig . 4 and fig .value against the weight of heterogeneity , , are shown .the grouping of the network corresponding to the top ( middle ) curve is given in fig .4 ( fig . 9 ) .it suggests that in this example the group structure depends insensitively on the heterogeneity effects . ]these results may be an indication that the natural subdivisions formed after the splitting of the network are the only main topological structure from the view point of group partition in this network . unlike the karate network where different structures may coexist , the network of dolphin lacks a ` core ' of dominant nodes around which the other nodes are organized .this topological difference may have implications in understanding the different social behaviors of the two societies .as the expectation maximization algorithms have so many advantages , it is desirable to extend them to weighted networks .in fact the newman and leicht scheme favors such an extension .a straightforward method was suggested in where the weight of each link was related to its contribution to the value . in this sectionwe discuss this problem based on the apbema , but the derivations are similar and straightforward for the heterogeneity suppressed and the heterogeneity weighted algorithm . 
the radical difference between our scheme and that in is that in our algorithm it is the _ information _ provided by each entry of the adjacency matrix that is weighted .( see fig .8) instead . in this partitiononly the node sn89 is not classified into the natural subdivision it belongs to . a comparison with the partition corresponding to the first maximum of ( see fig .4 ) indicates a special role node pl may play . ]we rewrite eq .( [ eq35 ] ) in the form of \label{eq51}\end{aligned}\ ] ] from which we can tell that the term between the square brackets represents the contribution to the value given by , namely the information of the connection state between node and node .obviously , no matter or its contribution is equally important and counts .hence if we attach a weight to the information provided by , then the value for the aim of grouping should naturally be replaced by .\label{eq52}\end{aligned}\ ] ] next , we assume the right grouping should be the one that maximize with the constrain .the deduction is then the same as in the apbema and finally we have and where is still given by eq .( [ eq36 ] ) .it is apparent that , for an unweighted network where , this algorithm is reduced to the apbema as expected . similarly , if the constraints of eq .( [ eq41 ] ) or eq .( [ eq46 ] ) are taken into account , we can get the heterogeneity suppressed or heterogeneity weighted algorithm for the weighted network as well .it is important to note that is the weight of the information provided by rather than of the link between node and .( note that though in calculating ( eq .( [ eq54 ] ) ) does not count in evaluating the numerator if , it does in evaluating the denominator . ) in other words , even if there is no link between node and node , this piece of information ( ) is equally important for recognizing the group structure .this result is consistent with our intuition and experience . in order to well appreciate the implications of this algorithm ,let us take the network studied in fig .3 as an illustration . for the sake of simplicity, we assume that all the weights take only two values : 1 and . here is a constant used to weight a selected potion of entries of the adjacency matrix and ; it is introduced to control the information of that potion the algorithm can use and so that we can investigate how the grouping results depend on it .we consider the following three cases : ( i ) for and for ; ( ii ) for and for ; ( iii ) if both node and node are in the clique and otherwise . for , since a crucial part of information of the network topology lacks , we may expect a failure of grouping . as is increased , more and more information are taken into account , the grouping should be more and more accurate .finally , as is approached , all the topological information is considered , our algorithm should suggest the grouping as perfectly as the apbema does .this conjecture has been well verified by the simulations . in fig .10 the grouping error rate against is summarized for the case where the network has nodes , the clique size is and the average degree of the background nodes .each data point represents the averaged error rate over 1000 realizations of the network . in the first case ( solid squares in fig .10 ) , the information associated with the null links is fully considered but that associated with the links is controlled by . 
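for concreteness, the three weighting schemes just introduced can be written down explicitly. the pairing of the weights 1 and epsilon in cases (i) and (ii) is read off from the discussion of fig. 10 given below (links down-weighted in the first case, null links in the second), the diagonal entries are not treated specially, and the helper below is only an illustrative sketch with names of our choosing, not the code used for the reported simulations.

```python
import numpy as np

def case_weights(adj, clique_nodes, eps, case):
    """weight matrix for the three schemes described in the text.

    case 1: links (a_ij = 1) get weight eps, null links keep weight 1.
    case 2: null links (a_ij = 0) get weight eps, links keep weight 1.
    case 3: entries with both endpoints inside the clique get weight eps,
            all other entries keep weight 1.
    the weighted objective then multiplies each entry's contribution by w_ij.
    """
    adj = np.asarray(adj)
    n = adj.shape[0]
    w = np.ones((n, n))
    in_clique = np.zeros(n, dtype=bool)
    in_clique[list(clique_nodes)] = True
    if case == 1:
        w[adj == 1] = eps
    elif case == 2:
        w[adj == 0] = eps
    elif case == 3:
        w[np.ix_(in_clique, in_clique)] = eps
    else:
        raise ValueError("case must be 1, 2 or 3")
    return w
```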
for their contributionsare completely ignored ; as a consequence the algorithm ` sees ' all the nodes isolated from each other and classifies them into a single group . to increase from zero , thought slightly ,would stop the algorithm from classifying all the nodes in a single group , but the error rate is still high . as is increased further , more and more information of the links is available and the partition becomes more and more accurate . when it comes to the point , the information seems to have been enough for the algorithm to recognize well the clique from the background .this phenomenon is interesting : it suggests that in fact there is a redundance in information for the use of partition in the network under study . in the second case ( open squares in fig .10 ) , the information associated with the links is fully considered but that with the null links is tuned by .similarly , for the algorithm can not ` see ' the null links and thus the background .all nodes are regarded to be in one well connected group .this result shows clearly the information of the null links is a requisite for a correct partition . as increased from zero , the error rate undergoes an abrupt drop . this isbecause here we have much more null links than links and hence even a small value of may release much more information than in the case ( i ) .to increase further would improve the grouping correspondingly just as expected . in the last case ( solid dots ) the weights of the information associated with the cliqueis varied instead , but again we have qualitatively the same result as in the first two cases .these results are in good consistency with our discussions on the weighted apbema from the information perspective .nodes immersed in a randomly connected background of 63 nodes whose average degree is .the information contained in each entry of the adjacency matrix is weighted by either or 1 , and solid squares , open squares and solid dots represent three different ways for assigning the weights among the entries , which correspond to the case ( i ) , ( ii ) and ( iii ) as described in the text ( see the text ) .in all the cases as is increased the grouping becomes more accurate , which supports the viewpoint that the information of links and null links are equally important . ] to weight the information contained in can be more relevant in practice . to construct a network representation of a real complex system , it involves unavoidably the measurement of the connection state between any two nodes . in a general case ,the measurement does not generate a definite zero / one output ; rather , the errors and uncertainties are entangled intrinsically . in many cases , such as in some biological systems , biochemical systems and human societies , asthe relations between the elements can be numerous and of various types on one hand , and these relations themselves can be coupled with each other on the other hand , the problem of measurement is even more subtle and difficult .hence for any network abstracted in the end , the evaluations of the confidence in the measured connection states are important and necessary .these evaluations of the confidence are the ideal measures of the weights considered here .in this work we have studied how to detect the groups in a complex network that consist of nodes having the similar connection pattern .our algorithm is based on the mixture models and the exploratory analysis suggested by newman and leicht , but significant differences exist . 
in our algorithmthe connection pattern is modelled by the _ a priori _ probability assumption instead .the main advantages of our algorithm are that ( i ) it can be applied without any restriction on the degree distribution ; ( ii ) it possesses the symmetry between the links and the null links ; ( iii ) it is flexible in dealing with the heterogeneity effects ; and ( iv ) it can be extended to the connection information weighted networks .these advantages have been illustrated by various network examples . with our algorithmwe have studied the role played by the heterogeneity .we find that the grouping result may depend on the heterogeneity effects involved .this finding suggests that in order to have a thorough knowledge of the network structure , this dependence should be analyzed .for this reason all the groupings found ( at various values of , see sec .iv ) are justified .this can be seen as an extension to the definition of group formally defined at when the heterogeneity effects are fully considered .based on our analysis , it is natural to extend our algorithm to the connection information weighted networks .this result is a direct implication of our _ a priori _ probability based group connection pattern model .as the connection information weighted networks can be closely related to the measurement of networks , we expect our extended algorithm may find wide applications .finally , our study has also suggested that groupings associated with other top maxima of the merit function ( ) could be meaningful and useful as well .this may be a common feature among the expectation maximization algorithms . how to interpret these groupings seems to be interesting and potentially important that deserves further investigations .albert r and barabasi a l 2002 statistical mechanics of complex networks _ rev .* 74 * 47 newman m e j 2003 the structure and function of complex networks _siam rev . _* 45 * 167 holme p 2004 form and function of complex networks ( doctoral thesis ) , umea university , umea watts d j and strogatz s h 1998 collective dynamics of ` small - world ' networks _ nature _ * 393 * 440 barabasi a l and albert r 1999 emergence of scaling in random networks _ science _ * 286 * 509 moreno y and vzquez a 2003 disease spreading in structured scale - free networks _ eurb * 31 * 265 bogu m and pastor - satorras r 2002 epidemic spreading in correlated complex networks _ phys . rev ._ e * 66 * 047104 pecora l m and carroll t l 1998 master stability functions for synchronized coupled systems _ phys .lett . _ * 80 * 2109 girvan m and newman m e j 2002 community structure in social and biological networks _natl acad .usa _ * 99 * 7821 newman m e j and girvan m 2004 finding and evaluating community structure in networks __ e * 69 * 026113 newman m e j 2006 modularity and community structure in networks _ proc .natl acad .usa _ * 103 * 8577 fortunato s and barthlemy m 2007 resolution limit in community detection _natl acad .* 104 * 36 shen - orr s , milo r , mangan s and alon u 2002 network motifs in the transcriptional regulation network of escherichia coli _ nature genetics _* 31 * 64 porter m a , mucha p j , newman m e j and warmbrand c m 2005 a network analysis of committees in the u.s .house of representatives _ proc .natl acad .usa _ * 102 * 7057 arenas a , daz - guilera a and prez - vicente c j 2006 synchronization reveals topological scales in complex networks _ phys .lett . 
_ * 96 * 114102 huang l , park k , lai y c , yang l and yang k q 2006 abnormal synchronization in complex clustered networks _lett . _ * 97 * 164101 park k , lai y c , gupte s and kim j w 2005 synchronization in complex networks with a modular structure _ chaos _ * 16 * 015105 zhao m , zhou t and wang b h 2007 synchronization on complex networks with different sorts of communities _ preprint _physics.data-an/0711.0530 eriksen k a , simonsen i , maslov s and sneppen k 2003 modularity and extreme edges of the internet _ phys .lett . _ * 90 * 148701 zhou h 2003 network landscape from a brownian particles perspective _ phys ._ e * 67 * 041908 zhou h 2003 distance , dissimilarity index , and network community structure _ phys ._ e * 67 * 061901 galstyan a and cohen p 2007 cascading dynamics in modular networks _ phys ._ e * 75 * 036109 gfeller d and de los rios p 2007 spectral coarse graining of complex networks _ phys . rev .lett . _ * 99 * 038701 arenas a , duch j , fernndex a and gmez s 2007 size reduction of complex networks preserving modularity_ new j. phys _ * 9 * 176 chavez m , hwang d u , amann a and boccaletti s 2006 synchronizing weighted complex networks _ chaos _ * 16 * 015106 alves n a 2007 unveiling community stuctures in weighted networks _ phys ._ e * 76 * 036101 restrepo j g , ott e and hunt b r 2005 synchronization in large directed networks of coupled phase oscillators _ chaos _ * 16 * 015107 leicht e a and newman m e j 2008 community structure in directed networks _ phys . rev ._ * 100 * 118703 sorrentino f and ott e 2007 network synchronization of groups _ phys ._ e * 76 * 056114 lorrain f and white h c 1971 structural equivalence of individuals in social networks _* 1 * 49 white h c , boorman s a and breiger r l 1976 social structure from multiple networks .i. blockmodels of roles and positions _ am .j. sociol . _* 81 * 730 doreian p , batagelj v and ferligoj a 2005 generalized blockmodeling , cambridge university press , cambridge newman m e j and leicht e a 2007 mixture models and exploratory analysis in networks _ proc .natl acad .usa _ * 104 * 9564 ren w , yan g y and liao x p 2007 a simple probabilistic algorithm for detecting community structure in social networks _ preprint _physics.soc-ph/0710.3422 ramasco j j and mungan m 2008 inversion method for content - based networks _ phys ._ e * 77 * 036122 rosvall m and bergstrom c t 2007 an information - theoretic framework for resolving community structure in complex networks _ proc .natl acad .usa _ * 104 * 7327 zachary w w 1977 an information flow model for conflict and fission in small groups _j. anthropol res _ * 33 * 452 danon l , daz - guilera a , duch j and arenas a 2005 comparing community structure identification __ p09008 kuncheva li and hadjitodorov s t 2004 _ systems , 2004 mand and cybernetics , 2004 ieee int ._ vol 2 , pp 1214 fred a l n and jain a k 2003 _ 2003 proc .ieee computer society conf . on computer vision and pattern recognition _ , piscataway , nj : ieee , pp ii-128 - 33 guimer r , sales - pardo m and amaral lan 2004 modularity from fluctuations in random graphs and complex networks _ phys ._ e * 70 * 025101(r ) lusseau d , schneider k , boisseau o j , haase p , slooten e and dawson s m 2003 the bottlenose dolphin community of doubtful sound features a large propotion of long - lasting associations _ behav .. sociobiol _ * 54 * 396 lusseau d and newman m e j 2004 identifying the role that animals play in their social networks _ proc_ b(suppl . 
) * 271 * s477-s481 newman m e j 2006 finding community structure in networks using the eigenvectors of matrices _ phys ._ e * 74 * 036104 girvan m and newman m e j 2002 community structure in social and biological networks _natl acad .usa _ * 99 * 7821
we study how to detect groups in a complex network, where each group consists of nodes sharing a similar connection pattern. based on the mixture models and the exploratory analysis introduced by newman and leicht (newman and leicht 2007 _proc. natl. acad. sci. usa_ *104* 9564), we develop an algorithm that is applicable to networks with any degree distribution. the partition suggested by this algorithm for a network also applies to its complementary network. in general, groups of similar components need not coincide with the communities of a community network; partitioning a network into groups of similar components therefore provides additional information about its structure. the proposed algorithm can also be used for community detection when the groups and the communities overlap. by introducing a tunable parameter that controls how strongly the heterogeneity is taken into account, we can conveniently investigate how the group structure is coupled to the heterogeneity characteristics. in particular, an instructive example shows that a group partition can evolve into a community partition when the heterogeneity effects are tuned. the extension of the algorithm to weighted networks is discussed as well.
from the very beginning , 3d numerical relativity has not been an easy domain .difficulties arise either from the computational side ( the large amount of variables to evolve , the large number of operations to perform , the stability of the evolution code ) or from the physical side , like the complexity of the einstein equations themselves , boundary conditions , singularity avoidant gauge choices , and so on . sometimes there is a connection between both sides .for instance , the use of singularity avoidant slicings generates large gradients in the vicinity of black holes .numerical instabilities can be produced by these steep gradients .the reason for this is that the standard evolution algorithms are unable to deal with sharp profiles .the instability shows up in the form of spurious oscillations which usually grow and break down the code . numerical advanced methods from cfd ( computational fluid dynamics )can be used to avoid this .stable codes are obtained which evolve in a more robust way , without too much dissipation , so that the shape of the profiles of the evolved quantities is not lost .these advanced methods are then specially suited for the problem of shock propagation , but they apply only to strongly hyperbolic systems , where one is able get a full set of eigenfields which generates all the physical quantities to be evolved . in the 1d case, these methods usually fulfill the tvd ( total variation diminishing ) condition when applied to transport equations .this ensures that no new local extreme appear in the profiles of the eigenfields , so that spurious oscillations are ruled out ab initio ( monotonicity preserving condition ) .unluckily , there is no general method with this property in the 3d case , mainly because the eigenfield basis depends on the direction of propagation .we will show how this can be achieved at least in some cases .the specific methods we will use are known as flux limiter algorithms .we will consider plane waves in 3d as a first generalization of the 1d case , because the propagation direction is constant .this specific direction leads then to an specific eigenfield basis , so that the 1d numerical method can be easily generalized to the 3d case .the algorithm will be checked with a `` minkowski waves '' metric .it can be obtained by a coordinate transformation from minkowski space - time .all the metric components are transported while preserving their initial profiles .the line element has the following form : where is any positive function .we can choose a periodic profile with sharp peaks so both the space and the time derivatives of will have discontinuous step - like profiles . if we can solve well this case ( the most extreme ) , we can hope that the algorithm will work as well in more realistic cases where discontinuities do not appear .minkowski waves are a nice test bed because the instabilities can arise only from the gauge ( there are pure gauge after all ! 
) .this is a first step to deal with evolution instabilities in the einstein equations by the use of flux limiter methods .this will allow us to keep all our gauge freedom available to deal with more physical problems , like going to a co - rotating frame or adapting to some special geometry .advanced numerical methods take care of numerical problems so that physical gauge choices can be used to take care of physics requirements .we will use the well known 3 + 1 description of spacetime which starts by decomposing the line element as follows : where is the metric induced on the three - dimensional slices and is the shift . for simplicitythe case ( normal coordinates ) will be considered .the intrinsic curvature of the slices is then given by the three - dimensional ricci tensor , whereas their extrinsic curvature is given by : in what follows , all the geometrical operations ( index raising , covariant derivations , etc ) will be performed in the framework of the intrinsic three - dimensional geometry of every constant time slice . with the help of the quantities defined in ( [ metric],[kij ] ) , the ten fourdimensional field equations can be expressed as a set of six evolution equations : \end{aligned}\ ] ] plus four constraint equations the evolution system ( [ admkij ] ) has been used by numerical relativists since the very beginning of the field ( see for instance the seminal work of eardley and smarr ) , both in spherically symmetric ( 1d ) and axially symmetric ( 2d ) spacetimes . by the turn of the century , the second order system ( [ admkij ] ) has been rewritten as a first - order flux conservative hyperbolic ( fofch ) system in order to deal with the generic ( 3d ) case , where no symmetries are present .but the second order system ( [ admkij ] ) is still being used in 3d numerical calculations , mainly when combined with the conformal decomposition of as introduced by shibata and nakamura . in the system([kij],[admkij ] ) there is a degree of freedom to be fixed because the evolution equation for the lapse function is not given . in the study of black holes ,the slicing is usually chosen in order to avoid the singularity : where : three basic steps are needed to obtain a fofch system from the adm system .first , one must introduce some new auxiliary variables to express the second order derivatives in space as first order .these new quantities correspond to the space derivatives : the evolution equations for these variables are : at the second step the system is expressed in a first order balance law form where the array displays the set of independent variables to evolve and both `` fluxes '' and `` sources '' are vector valued functions . 
at the third step another additional independent variable is introduced to obtain a strongly hyperbolic system : and its evolution equation is obtained using the definition of from ( [ kij ] ) and switching space and time derivatives in the momentum constraint ( [ momentum_constraint ] ) .the result is an independent evolution equation for while the previous definition ( [ v ] ) in terms of space derivatives can be instead be considered as a first integral of the extended system .the extended array will then contain the following 37 functions , , , , , .due to the structure of the equations , the evolution(represented by the operator described by ( [ fofch ] ) can be decomposed into two separate processes ; the first one is a transport process and the second one is the contribution of the sources .the sources step ( represented by the operator does not involve space derivatives of the fields , so that it consists in a system of coupled non - linear ode ( ordinary differential equations ) : the transport step ( represented by the operator ) contains the principal part and it is given by a set of quasi - linear transport equations : the numerical implementation of these separated processes is quite easy .second order accuracy in can be obtained by using the well known strang splitting . according to ( [ kij],[lapse_evolution ] ) the lapse and the metric have no flux terms .it means that a reduced set of 30 quantities are transported in the second step over an inhomogeneous static background composed by .the equations for the transport step ( [ process_fluxes ] ) are given by : where : and m is an arbitrary parameter .to evolve the transport step , we will consider flux - conservative numeric algorithms , obtained by applying the balance to a single computational cell . in the 1d case the cell goes from n to n+1 in time ( ) and from j-1/2 to j+1/2 in space ( ) , so that we have : \ ] ] interface fluxes can be calculated in many different ways , leading to different numerical methods .we will use here the well known maccormack method .this flux - conservative standard algorithm works well for smooth profiles , as it can be appreciated in figure 1 .[ sinus1d ] but this standard algorithm is not appropriate for step - like profiles because it produces spurious oscillations near the steep regions , as it can be appreciated in figure 2 .[ step1d ] more advanced numerical methods must be used to eliminate ( or at least to reduce ) these oscillations .these advanced methods use information about the eigenfields and the propagation direction , so the flux characteristic matrix along the propagation direction must be diagonalized .we will use a convenient method to compute the eigenfields . let us study the propagation of a step - like discontinuity in the transported variables which will move along a specific direction with a given velocity .information about the corresponding eigenfields can be extracted from the well known rankine - hugoniot shock conditions : = n_k[f^k(u)]\ ] ] where ] symbol .if we develop this expressions we arrive at the following conclusions , where is the projection of the quantity over and are the transverse components : \1 ) ,[a_{\bot}],[d_{\bot ij}] ] propagate along with speed .there are 18 such eigenfields . for the line element given by ( [ metric_minkowski ] ) is along the x axis and all these fields are actually zero .\2 ) ] do generate eigenfields propagating along with speed ( light cones ) .there are only 10 such eigenfields because all of them are traceless . 
for minkowski waves , where there is only gauge , all these combinations are zero .this indicates that the correct way to get the traceless part of a given tensor in this context is just to take , so that the contribution of gauge modes will disappear .\3 ) ] do generate eigenfields propagating along with speed ( gauge cones ) .there are 2 such eigenfields corresponding to the gauge sector . for minkowski waves, there are the only non - zero components .we are left with : & = & \alpha [ a_n ] \\v[a_k ] & = & \alpha\ : f(\alpha ) [ trk]\ : n_k\end{aligned}\ ] ] so that $ ] is proportional to .now we can get the gauge eigenfields : these eigenfields propagate along according simple advection equations , a familiar situation in the 1d case .although this decomposition and diagonalization is trivial in 1d , it is very useful in the multidimensional case for a generic direction .the flux limiter methods we will use can be decomposed into some basic steps .first of all the interface fluxes have to be calculated with any standard second order accurate method ( maccormack in our case ) .then , the propagation direction and the corresponding eigenfields can be properly identified at every cell interface .two advection equations ( one for every sense of propagation ) are now available for the gauge eigenfluxes ( [ gauge_eigenfield ] ) .let us choose for instance the eigenflux which propagates to the right ( an equivalent process will be valid for the other eigenflux propagating to the left ) .this interface eigenflux can be understood as the grid point flux plus some increment . in general , the purpose of the limiter is to use of a mixture of the increments and to ensure monotonicity . in our casewe are using a robust mixture which goes by applying the well known minmod rule to and . in that way, the limiter acts only in steep regions , where the proportion between neighbouring increments exceeds a factor of two .we can apply this method to the step - like initial data propagating along the x axis .we can see in figure 3 that the result is much better than before. it can be ( hardly ) observed a small deviation from the tvd condition , which is produced by the artificial separation produced by the strang splitting into transport and non - linear source steps .this method can be applied , with the general decomposition described in section 4 , to discontinuities which propagate along any constant direction , and not only to the trivial case , aligned with the x axis , that we have considered until now . to prove it ,we have rotated the metric of minkowski waves in the x - z plane to obtain a diagonal propagation of the profile . the line element in this case has the following form : \;(dx^2+dz^2 ) + dy^2 \nonumber \\ & + & \frac{1}{2}[-1+h(\frac{x+z}{\sqrt{2}}-t)]\;(dx\:dz + dz\:dx)\end{aligned}\ ] ] we show the results in the figure 4 ._ acknowledgements : this work has been supported by the eu programme improving the human research potential and the socio - economic knowledge base ( research training network contract ( hprn - ct-2000 - 00137 ) and by a grant from the conselleria dinnovacio i energia of the govern de les illes balears _
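as a purely illustrative aside (this is not the 3d relativity code described above), the qualitative contrast between a standard second-order scheme and a limited one can be reproduced in a few lines for 1d linear advection of a step profile. the limiter below is a standard minmod slope limiter rather than the exact maccormack-plus-increment mixture described in the text, and the grid size, courant number and initial profile are arbitrary choices.

```python
import numpy as np

def maccormack_step(u, c):
    """one maccormack step for u_t + a u_x = 0 (a > 0), c = a*dt/dx, periodic grid."""
    predictor = u - c * (np.roll(u, -1) - u)                                # forward difference
    return 0.5 * (u + predictor - c * (predictor - np.roll(predictor, 1)))  # backward difference

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, c):
    """one step of an upwind scheme with minmod-limited slopes (tvd for 0 <= c <= 1)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * (1.0 - c) * slope                                    # right interface values
    return u - c * (u_face - np.roll(u_face, 1))

n, c, steps = 200, 0.5, 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)                             # step-like initial profile
u_mac, u_lim = u0.copy(), u0.copy()
for _ in range(steps):
    u_mac = maccormack_step(u_mac, c)
    u_lim = limited_step(u_lim, c)
```

after a few hundred steps the maccormack solution shows over- and undershoots near the jumps while the limited scheme keeps the profile monotone, mirroring the behaviour illustrated in figures 2 and 3.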
new numerical methods have been applied in relativity to obtain a more robust and stable numerical evolution of the einstein equations. starting from the 3+1 formalism, with the evolution equations written as a fofch (first-order flux conservative hyperbolic) system, advanced numerical methods from cfd (computational fluid dynamics) have been successfully applied. a flux limiter mechanism has been implemented to deal with steep gradients like those usually associated with black hole spacetimes. as a test bed, the method has been applied to 3d metrics describing the propagation of nonlinear gauge waves. results are compared with those obtained with standard methods, showing a marked increase in both the robustness and the stability of the numerical algorithm.
inferring the causal relations that have generated statistical dependencies among a set of observed random variables is challenging if no controlled randomized studies can be made . here, causal relations are represented as arrows connecting the variables , and the structure to be inferred is a directed acyclic graph ( dag ) .the constraint - based approach to causal discovery , one of the best known methods , selects directed acyclic graphs that satisfy both the causal markov condition and faithfulness : one accepts only those causal hypotheses that explain the observed dependencies and demand that all the observed _ in_dependencies are imposed by the structure , i.e. , common to all distributions that can be generated by the respective causal dag .however , the methods are fundamentally unable to distinguish between dags that induce the same set of dependencies ( markov - equivalent graphs ) .moreover , causal faithfulness is known to be violated if some of the causal relations are deterministic .solving these problem requires reasonable prior assumptions , either implicitly or explicitly as priors on conditional probabilities , as in bayesian settings .however , the fact that deterministic dependencies exist in real - world settings shows that priors that are densities on the parameters of the bayesian networks , as it is usually assumed , are problematic , and the construction of good priors becomes difficult .recently , several methods have been proposed that are able to distinguish between markov - equivalent dags causes or causes '' has been part of the challenge at the nips 2008 workshop `` causality : objectives and assessment '' ] .linear causal relations among non - gaussian random variables can be inferred via independent - component - analysis ( ica ) methods .the method of is able to infer causal directions among real - valued variables if every effect is a ( possibly non - linear ) function of its causes up to an additive noise term that is independent of the causes .the work of augmented these models by applying a non - linear function after adding the noise term . if the noise term vanishes or if all of the variables are gaussian and the relation is linear , all these methods fail .moreover , if the data are high - dimensional , the non - linear regression involved in the methods becomes hard to estimate .here we present a method that also works for these cases provided that the variables are multi - dimensional with sufficiently anisotropic covariance matrices .the underlying idea is that the causal hypothesis is only acceptable if the shortest description of the joint distribution is given by separate descriptions of the input distribution and the conditional distribution , expressing the fact that they represent independent mechanisms of nature . shows toy examples where such an independent choice often leads to joint distributions where and satisfy non - generic relations indicating that is wrong .here we develop this idea for the case of multi - dimensional variables and with a linear causal relation . 
we start with a motivating example .assume that is a multivariate gaussian variable with values in and the isotropic covariance matrix .let be another -valued variable that is deterministically influenced by via the linear relation for some -matrix .this induces the covariance matrix the converse causal hypothesis becomes unlikely because ( which is determined by the covariance matrix ) and ( which is given by with probability ) are related in a suspicious way , since the same matrix appears in both descriptions .this untypical relationship between and can also be considered from the point of view of symmetries : consider the set of covariance matrices with , where denotes the orthogonal group . among them, is special because it is the only one that is transformed into the isotropic covariance matrix .more generally speaking , in light of the fact of how anisotropic the matrices are for _ generic _ , the hypothetical effect variable is surprisingly isotropic for ( here we have used the short notation ) .we will show below that this remains true with high probability ( in high dimensions ) if we start with an arbitrary covariance matrix and apply a random linear transformation chosen independently of . to understand why independent choices of and typically induce untypical relations between and we also discuss the simple case that and are simultaneously diagonal with and as corresponding diagonal entries .thus is also diagonal and its diagonal entries ( eigenvalues ) are .we now assume that `` nature has chosen '' the values with independently from some distribution and from some other distribution .we can then interpret the values as instances of -fold sampling of the random variable with expectation and the same for .if we assume that and are independent , we have due to the law of large numbers , this equation will for large approximatively be satisfied by the empirical averages , i.e. , for the backward direction we observe that the diagonal entries of and the diagonal entries of have not been chosen independently because whereas the last inequality holds because the random variables and are always negatively correlated ( this follows easily from the cauchy - schwarz inequality ) except for the trivial case when they are constant .we thus observe a systematic violation of ( [ tracecomm ] ) in the backward direction .the proof for non - diagonal matrices in section [ iden ] uses standard spectral theory , but is based upon the same idea .the paper is structured as follows . in section [ iden ], we define an expression with traces on covariance matrices and show that typical linear models induce backward models for which this expression attains values that would be untypical for the forward direction . in section [ exp ]we describe an algorithm that is based upon this result and discuss experiments with simulated and real data .section [ gen ] proposes possible generalizations .given a hypothetical causal model ( where and are - and -dimensional , respectively ) we want to check whether the pair satisfies some relation that typical pairs only satisfy with low probability if is randomly chosen .to this end , we introduce the renormalized trace for dimension and compare the values one shows easily that the expectation of both values coincide if is randomly drawn from a distribution that is invariant under transformations this is because averaging the matrices over all projects onto since the average commutes with all matrices and is therefore a multiple of the identity . 
for our purposes , it is decisive that the typical case is close to this average , i.e. , the two expressions in ( [ tracecomp ] ) almost coincide . to show this, we need the following result : + [ lev ] let be a lipschitz continuous function on the -dimensional sphere with if a point on is randomly chosen according to an -invariant prior , it satisfies with probability at least for some constant , where can be interpreted as the median or the average of . given the above lemma , we can prove the following theorem : + [ ind ] let be a symmetric , positive definite -matrix and an arbitrary -matrix .let be randomly chosen from according to the unique -invariant distribution ( i.e. the haar measure ) . introducing the operator norm we have with probability at least for some constant ( independent of ) .proof : for an arbitrary orthonormal system we have we define the unit vectors dropping the index , we introduce the function for a randomly chosen , is a randomly chosen unit vector according to a uniform prior on the -dimensional sphere .the average of is given by .the lipschitz constant is given by the operator norm of , i.e. , an arbitrarily chosen satisfies with probability .this follows from lemma [ lev ] after replacing with .hence due to we thus have it is convenient to introduce as a scale - invariant measure for the strength of the violation of the equality of the expressions ( [ tracecomp ] ) .we now restrict the attention to two special cases where we can show that is non - zero for the backward direction .first , we restrict the attention to deterministic models and the case that where has rank .this ensures that the backward model is also deterministic , i.e. , with denoting the pseudo inverse .the following theorem shows that implies : + let and denote the dimensions of and , respectively . if and , the covariance matrices satisfy where is a real - valued random variable whose distribution is the empirical distribution of eigenvalues of , i.e. , for all .proof : we have using and taking the logarithm we obtain then the statement follows from note that the term in eq .( [ delta ] ) will not converge to zero for dimension to infinity if the random matrices are drawn in a way that ensures that the distribution of converges to some distribution on with non - zero variance . assuming this, tends to some negative value if tends to zero for .we should , however , mention a problem that occurs for in the noise - less case discussed here : since has only rank , we could equally well replace with some other matrix that coincides with on all of the observed -values .for those matrices , the value can get closer to zero because the term expresses the fact that the image of is orthogonal to the kernel of , which is already untypical for a generic model .it turns out that the observed violation of the multiplicativity of traces can be interpreted in terms of relative entropy distances . to show this , we need the following result : + let be the covariance matrix of a centralized non - degenerate multi - variate gaussian distribution in dimensions .let the anisotropy of be defined by the relative entropy distance to the closest isotropic gaussian then proof : the relative entropy distance of two centralized gaussians with covariance matrices in dimensions is given by setting , the distance is minimized for , which yields eq .( [ d ] ) . 
straightforward computations show : + let and be -matrices with positive definite .then hence , for independently chosen and , the anisotropy of the output covariance matrix is approximately given by the anisotropy of plus the anisotropy of , which is the anisotropy of the output that induces on an isotropic input .for the backward direction , the anisotropy is smaller than the typical value .we now discuss an example with a stochastic relation between and .we first consider the general linear model where is an matrix and is a noise term ( statistically independent of ) with covariance matrix .we obtain the corresponding backward model , this induces a joint distribution that does not admit a linear backward model with an _ independent _ noise , we can then only obtain _ uncorrelated _ noise .we could in principle already use this fact for causal inference . however , our method also works for the gaussian case and if the dimension is too high for testing higher - order statistical dependences reliably . ]reads with now we focus on the special case where is an orthogonal transformation and is isotropic , i.e. , with .we then obtain a case where and are related in a way that makes positive : + let with and the covariance matrix of be given by .then we have proof : we have with . therefore , one checks easily that the orthogonal transformation is irrelevant for the traces and we thus have where is a random variable of which distribution reflects the distribution of eigenvalues of .the function is monotonously increasing for positive and and thus also .hence and are positively correlated , i.e. , for all distributions of with non - zero variance .hence the logarithm is positive and thus . since the violation of the equality of the terms in ( [ tracecomp ] ) can be in both directions , we propose to prefer the causal direction for which is closer to zero .motivated by the above theoretical results , we propose to infer the causal direction using alg .[ algtr ] . in light of the theoretical results , the following issues have to be clarified by experiments with simulated data : 1 . is the limit for dimension to infinity already justified for moderate dimensions ? 2 .is the multiplicativity of traces sufficiently violated for noisy models ?furthermore , the following issue has to be clarified by experiments with real data : 1 .is the behaviour of real causal structures qualitatively sufficiently close to our model with independent choices of and according to a uniform prior ? for the simulated data , we have generated random models as follows :we independently draw each element of the structure matrix from a standardized gaussian distribution .this implies that the distribution of column vectors as well as the distribution of row vectors is isotropic . to generate a random covariance matrix , we similarly draw an matrix and set . due to the invariance of our decision rule with respect to the scaling of and , the structure matrix and the covariance can have the same scale without loss of generality .the covariance of the noise is generated in the same way , although with an adjustable parameter governing the scaling of the noise with respect to the signal : yields the deterministic setting , while equates the power of the noise to that of the signal .first , we demonstrate the performance of the method in the close - to deterministic setting ( ) as a function of the dimensionality of the simulations , ranging from dimension 2 to 50 . 
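a minimal sketch of this random-model generation follows; function names are ours, and the noise rescaling used below (noise power equal to gamma times the signal power) is one natural reading of the text rather than a quoted formula.

```python
import numpy as np

def random_linear_model(n, m, gamma, rng):
    """random structure matrix, input covariance and noise covariance."""
    a = rng.standard_normal((m, n))            # structure matrix mapping x to y
    b = rng.standard_normal((n, n))
    cov_x = b @ b.T                            # random input covariance
    c = rng.standard_normal((m, m))
    cov_e = c @ c.T                            # raw noise covariance
    signal_power = np.trace(a @ cov_x @ a.T)
    cov_e *= gamma * signal_power / np.trace(cov_e)   # assumed normalization
    return a, cov_x, cov_e

def sample_pairs(a, cov_x, cov_e, n_samples, rng):
    """draw x ~ n(0, cov_x) and y = a x + e with e ~ n(0, cov_e)."""
    x = rng.multivariate_normal(np.zeros(cov_x.shape[0]), cov_x, size=n_samples)
    e = rng.multivariate_normal(np.zeros(cov_e.shape[0]), cov_e, size=n_samples)
    return x, x @ a.T + e
```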
to show that the method is feasible even with a relatively small number of samples , we choose the number of samples to scale with the dimension as .( note that we must have to obtain invertible estimates of the covariance matrices . )the resulting proportion of correct vs wrong decisions is given in fig .[ fig : simulations]a , with the corresponding values of in fig .[ fig : simulations]b . as can be seen , even at as few as 5 dimensions and 10 samples , the method is able to reliably identify the direction of causality in these simulations . + to illustrate the degree to which identifiability is hampered by noise , the solid line in fig .[ fig : simulations]c gives the performance of the method for a fixed dimension ( ) and fixed sample size ( ) as a function of the noise level .as can be seen , the performance drops markedly as is increased .as soon as there is significantly more noise than signal ( say , ) , the number of samples is not sufficient to reliably estimate the required covariance matrices and hence the direction of causality .this is clear from looking at the much better performance of the method when based on the exact , true covariance matrices , given by the dashed lines . in fig .[ fig : simulations]d we show the corresponding values of , from which it is clear that the estimate based on the samples is quite biased for the forward direction . as experiments with real data with known ground truth ,we have chosen pixel images of handwritten digits .as the linear map we have used both random local translation - invariant linear filters and also standard blurring of the images .( we added a small amount of noise to both original and processed images , to avoid problems with very close - to singular covariances . )see fig .[ fig : digits ] for some example original and processed image pairs .the task is then : given a sample of pairs consisting of the picture and its processed counterpart infer which of the set of pictures or are the originals ( ` causes ' ) . by partitioning the image set by the digit class ( 0 - 9 ) , and by testing a variety of random filters ( and the standard blur ) , we obtained a number of test cases to run our algorithm on . out of the total of 100 tested cases ,the method was able to correctly identify the set of original images 94 times , with 4 unknowns ( i.e. only two falsely classified cases ) .these simulations and experiments are quite preliminary and mainly serve to illustrate the theory developed in the paper .they point out at least one important issue for future work : the construction of unbiased estimators for the trace values or the .the systematic deviation of the sample - based experiments from the covariance - matrix based experiments in fig .[ fig : simulations]c d suggest that this could be a major improvement .in this section , we want to rephrase our theoretical results in a more abstract way to show the general structure . we have rejected the causal hypothesis if we observe that attains values that are not typical among the set of transformed input covariance matrices . in principle, we could have any function that maps the output distribution to some value .moreover , we could have any group of transformations on the input variable that define transformed input distributions via applying the conditional to defines output distributions that we compare to .in particular , we check whether the value is typical for the set . 
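for readers who want to experiment, a compact sketch of the decision rule follows. the regression matrices are estimated by ordinary least squares and the deviation measure is taken as the logarithmic departure from multiplicativity of the renormalized traces; both choices are readings of the construction above rather than a verbatim transcription of the algorithm, and no attempt is made here to correct the finite-sample bias just mentioned.

```python
import numpy as np

def tau(mat):
    """renormalized trace: trace divided by dimension."""
    return np.trace(mat) / mat.shape[0]

def delta(a, cov):
    """log deviation from multiplicativity of renormalized traces for y = a x (+ noise)."""
    return np.log(tau(a @ cov @ a.T)) - np.log(tau(a @ a.T)) - np.log(tau(cov))

def infer_direction(x, y):
    """prefer the direction whose deviation measure is closer to zero."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    n = x.shape[0]
    cov_x, cov_y = x.T @ x / n, y.T @ y / n
    cov_yx = y.T @ x / n
    a_fwd = cov_yx @ np.linalg.pinv(cov_x)     # least-squares matrix for x -> y
    a_bwd = cov_yx.T @ np.linalg.pinv(cov_y)   # least-squares matrix for y -> x
    d_fwd, d_bwd = delta(a_fwd, cov_x), delta(a_bwd, cov_y)
    return ("x -> y" if abs(d_fwd) < abs(d_bwd) else "y -> x"), d_fwd, d_bwd
```

together with the model generator sketched above, this can be used to re-run a simplified version of the simulations reported in this section.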
+ let and be random variables with joint distribution and be a group of transformations of the value set of .let be some real - valued function on the probability distributions of .the causal hypothesis is unlikely if is smaller or greater than the big majority of all distributions our prior knowledge about the structure of the data set determines the appropriate choice of .the idea is that expresses a set of transformations that generate input distributions that we consider equally likely .the permutation of components of also defines an interesting transformation group . for timeseries , the translation group would be the most natural choice . interpreting this approach in a bayesian way, we thus use symmetry properties of priors without the need to explicitly define the priors themselves .our experiments with simulated data suggest that the method performs quite well already for moderate dimensions provided that the noiselevel is not too high .certainly , the model of drawing according to a distribution that is invariant under may be inappropriate for many practical applications .however , as the example with diagonal matrices in section [ mot ] shows , the statement holds for a much broader class of models . for this reason , the method could also be used as a sanity check for causal hypotheses among one - dimensional variables .assume , for instance , one has a causal dag connecting variables attaining values in .if is an ordering that is consistent with , we define and and check the hypothesis using our method .provided that the true causal relations are linear , such a hypothesis should be accepted for every possible ordering that is consistent with the true causal dag .this way one could , for instance , check the causal relation between genes by clustering their expression levels to vector - valued variables .d. heckerman , c. meek , and g. cooper .a bayesian approach to causal discovery . in c.glymour and g. cooper , editors , _ computation , causation , and discovery _ , pages 141165 , cambridge , ma , 1999 .mit press .y. kano and s. shimizu .causal inference using nonnormality . in _ proceedings of the international symposium on science of modeling , the 30th anniversary of the information criterion _ , pages 261270 , tokyo , japan , 2003 .p. hoyer , d. janzing , j. mooij , j. peters , and b schlkopf .nonlinear causal discovery with additive noise models . in d.koller , d. schuurmans , y. bengio , and l. bottou , editors , _ advances in neural information processing systems 21 _ , vancouver , canada , 2009 . mit press .y. le cun , b. boser , j. s. denker , d. henderson , r. e. howard , w. hubbard , and l. d. jackel .handwritten digit recognition with a back - propagation network . in _ advances in neural information processing systems _, pages 396404 .morgan kaufmann , 1990 .
we describe a method for inferring linear causal relations among multi-dimensional variables. the idea is to exploit an asymmetry between the distributions of cause and effect that arises when the covariance matrix of the cause and the structure matrix mapping the cause to the effect are chosen independently. the method works for both stochastic and deterministic causal relations, provided that the dimensionality is sufficiently high (in some experiments, was enough). it is applicable to gaussian as well as non-gaussian data.
this technical note is a supplement to the manuscript . in that work the dynamics of cells in a cell cyclewere considered when cells are subjected to feedback from other cells in the cell cycle and the response to the feedback is also dependent on the cells location within the cell cycle .we refer the reader to that manuscript for the biological motivation for this study and for background on the model we study .we consider a piecewise affine model for the dynamics of cell populations along the cell cycle .let a population of cells be organized in equal clusters ( divides ) labeled by a discrete index .for the cluster , the progression along the cycle is represented by the periodic variable which evolves in time according to the entire population status .more specifically , the variables follows the deterministic flow associated with a real vector field , _ i.e. _ , that is the same for each . here are two parameters respectively governing the length of the _ signaling _ and of the _ responsive _ regions. let represent the fraction of cells in the signaling region , _i.e. _ then we will consider the case that assumes a simple expression where denotes the floor function . in short terms , clusters in the responsive region are accelerated by a fraction equal to the proportion of cells in the signaling region .in we consider systems in which the feedback experienced by cells is a general monotone increasing or decreasing function of .relabeling , integer translation of coordinates , and time translation are symmetries of the dynamics .thus , one can assume that all coordinates are initially well - ordered and belong to the same unit interval , _i.e. _ we have the definition of the vector field implies that the coordinates can not cross each other as time evolves ; thus this ordering is preserved under the dynamics .moreover , the first coordinate must eventually reach 1 ( not later than at time 1 ) , _ i.e. _ there exists such that , and more generally , it must reach any positive integer as time runs .thus the set defines a poincar section for the dynamics and the mapping defines the corresponding return map .we rely on the following considerations . starting from ,compute the time that needs to reach 1 and compute the location of the remaining cells at this date . define to be this mapping .( we assume first that for simplicity , but we will relax this soon . ) notice that by assumption on . nowthe time that needs to reach 1 , together with the population configuration at , follow by applying to the configuration . by repeating the argument, the desired return time is given by and the desired return map is .therefore , to understand the dynamics , one only has to compute the first map .it was noted in that we may regard as a continuous piecewise affine map of the -dimensional simplex into itself .( although the boundaries 0 and 1 are identified in the original flow , in the analysis here , we consider them as being distinct points for . ) on the edges of the simplex , has a relatively simple dynamics . indeedif , initially , all coordinates are equal , then they must all reach the boundary 1 simultaneously .in other words , on the diagonal ( for all ) , we have where depends on and ( for , we have independently of and ) .moreover , starting with implies which yields whatever the remaining coordinates are . 
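as an aside, the flow just described is easy to explore numerically (the authors' own scripts are referenced at the end of this note). the forward-euler sketch below is only an illustration: the placement of the two regions, signaling taken just before division and responsive just after it, and all parameter values are assumptions of ours rather than the definitions fixed by the equations above.

```python
import numpy as np

def simulate(x0, s_len, r_len, t_max, dt=1e-3):
    """clusters advance at unit speed; those in the responsive region are
    accelerated by the fraction of clusters currently in the signaling region.
    assumed regions: signaling = [1 - s_len, 1), responsive = [0, r_len)."""
    x = np.array(x0, dtype=float)
    steps = int(t_max / dt)
    traj = np.empty((steps + 1, x.size))
    traj[0] = x
    for n in range(steps):
        signaling_fraction = np.mean(x >= 1.0 - s_len)
        responsive = x < r_len
        velocity = 1.0 + signaling_fraction * responsive
        x = (x + dt * velocity) % 1.0              # wrap at division
        traj[n + 1] = x
    return traj

# example: three clusters started near the uniform cyclic configuration
trajectory = simulate([0.0, 0.34, 0.66], s_len=0.2, r_len=0.2, t_max=20.0)
```

with this picture in mind, we return to the behaviour of the return map on the edges of the simplex.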
as a consequence , the edge \right\}\ ] ]is mapped onto \right\}\ ] ] which is mapped onto \right\} ], the quantity varies in the interval ] .the orbits can be viewed as rotating around the fixed point which is included in the family ( _ i.e. _ for , the orbit actually reduces to a fixed point ) .the closure of the set of orbit coordinates forms a triangle whose corners are the coordinates of the orbit ( see figure [ neutraltriangle ] ) . at the upper boundary of the `` 7''-fixed point s existence domain, its first coordinate meets the vertical boundary with 6 .we then consider the fixed point in 6 .* its coordinates are given by and this neutral fixed point exists iff the bifurcation scenario at is similar to the one above at the lower boundary .* it turns out that this unique orbit emerges from the fixed point 7 at and exists up to ( or up to whichever is smaller ) .it is expanding with double real eigenvalues . at boththe component in 6 and its successor cross the horizontal line and the second component in 7 cross the upper domain boundary .+ however , the value does not involve any existence condition related to the fixed point in 6 . as shown below, the reason is that the periodic orbit can be continued up to provided that we consider the appropriate sequence of symbols .* this orbit continues the orbit when .indeed , it exists provided that it is expanding with 2 real eigenvalues ( the same as those associated with ) and coincide with at . * similarly to as before , a triangle of neutral 3-periodic orbits is created from the fixed point 7 at .the coordinates have the following expression and these orbits exist in the same parameter domain as the fixed point in 6 .when ] . at the boundary , the expanding periodic orbit and the family of neutral orbits meet with the neutral fixed point `` 6 '' in an inverse pitchfork bifurcation to create the fixed point in 3 .* the fixed point is expanding with double real eigenvalues and exists in the domain at the upper boundary of its existence domain , the fixed point `` 3 '' meets the region 1 . *the fixed point is neutral with double real eigenvalue 1 and exists in the domain * same parameter domain .expression of coordinates expanding with double real eigenvalue. emerges from the fixed point 3 .* same parameter domain .triangle of neutral 3-periodic orbits where is arbitrary in and .as before , the corners coincides with the period-3 orbit . to recapitulate , the return map possesses the following orbits depending on parameters in the considered regions ( see figure [ domains ] ) * .three sources ( two of them belong to 7 , the other one lies in 8) and a two - parameter family of neutral fixed points lying inside the triangle whose corners are the 3 sources .* .one source in 7 . * .similarly to as in ( 1 ) ; namely three sources ( two in 7 , one 6 if , and two sources belong to 3 and one is in 6 for ) and a triangle of neutral fixed points in 6 . * . similarly to as in ( 2 ) ; namely a single source in 1 .* .similarly to as in ( 1 ) ; namely three sources ( two lie in 3 and 1 lies in 1 ) and a triangle of neutral fixed points in 1 .note that in case ( 5 ) we have and this corresponds to the case that the three clusters in the cyclic solution ( which has initial conditions do not interact , _i.e. _ leaves before enters and no feedback is experienced . 
in the cases ( 1 ) and ( 3 ) the clusters in the cyclic solution experience feedback , but in a non - essential way , by which we mean that small perturbations do not lead either to further separation or contraction between the clusters .for instance in case ( 3 ) begins in and begins in between and and leaves before either of these states changes . in case( 1 ) both and begin in , but leaves before either leaves . a python script that will produce movies for the dynamics for arbitrary can be found at : ` http://oak.cats.ohiou.edu/~rb301008/research.html ` . besides orbits bifurcating with fixed points , based on numerics and on properties of the symbolic dynamics , additional orbits of exist depending on parameters .their existence domains do not coincide with those listed above .* the most important orbits are the one - parameter family of period-3 neutral - stable orbits lying on the edges ( and with code ) these orbits exist for arbitrary provided that .they are stable with respect to transverse perturbations .the coordinates form 3 open intervals , each included in one the edges . moreover ,the interval boundaries respectively form a period-3 hyperbolic orbit ( code ) and a period-3 neutral - unstable orbit ( code ) which exist in the same parameter domains and which merge for .orbits in the family correspond to 2 clusters solutions in the original system with one cluster composed of two clusters ( and the two clusters being certainly isolated one from each other ) . *interestingly , the bifurcation at generates two distinct periodic orbits with identical code ( ) .both orbits are neutral - unstable and the first has coordinates with the first point being in 2 .it exists under the condition .+ the other solution is actually a one - parameter family of periodic orbits ( which forms a segment and ) whose component in 11 is where can be chosen arbitrary with the condition the family exists under the condition . the bifurcations taking place at and are unclear .the fact that the one - parameter family reaches the boundary with 5 in the former case suggests to investigate orbits with code .surprisingly , the analysis concludes that such an orbit exits only if , a value that is unrelated to the one - parameter family existence condition .two additional orbits have been found on the edge when .one is hyperbolic with code and coordinates the other one is neutral - unstable with code and coordinates the two orbits merge for in a seemingly saddle - node bifurcation .in this note we have investigated only a portion of the possible parameter space .it should be clear to the reader by this point that further investigations into other subsets of the parameters in this fashion are possible , but perhaps prohibitively time - consuming .we see that such studies are likely to also prove unprofitable , since numerical simulations show that no other types of dynamics occur other than those described here .one can download a python script that to investigate the dynamics for arbitrary at : ` http://oak.cats.ohiou.edu/~rb301008/research.html ` .the dynamics we have observed for these parameter sets closely resembles the dynamics of cluster systems analyzed in .the fixed point of with positive feedback , like that for is either : in the later case the edges of the neutrally stable set are unstable .this case exists if either the three clusters are isolated from each other , or , if they interact in a non - essential way . 
In both cases the orbits of all other interior points are asymptotic to the boundary. Thus cyclic solutions are, in either case, practically unstable in the sense that arbitrarily small perturbations may lead to loss of stability and eventual merger of clusters. Since the single-cluster solution (synchronization) is the only solution that is asymptotically stable, it would seem to be the most likely to be observed in applications if the feedback is similar to the form we propose and is positive.

*Acknowledgments:* B.F. thanks the Courant Institute (NYU) for hospitality. He was supported by CNRS and by the EU Marie Curie fellowship PIOF-GA-2009-235741. T.Y. and this work were supported by the NIH-NIGMS grant R01GM090207.
In this technical note we calculate the dynamics of a linear feedback model of progression in the cell cycle in the case that the cells are organized into clusters. We examine the dynamics in detail for a specific subset of parameters with non-empty interior. There is an interior fixed point of the Poincaré map defined by the system. This fixed point corresponds to a periodic solution in which the three clusters exchange positions; we call this solution cyclic. For all the parameters studied, the fixed point is either: * isolated and locally unstable, or, * contained in a neutrally stable set of periodic points. In the latter case the edges of the neutrally stable set are unstable. This case exists if either the three clusters are isolated from each other, or if they interact in a non-essential way. In both cases the orbits of all other interior points are asymptotic to the boundary. Thus cyclic solutions are practically unstable, in the sense that arbitrarily small perturbations may lead to loss of stability and eventual merger of clusters. Since the single-cluster solution (synchronization) is the only solution that is asymptotically stable, it would seem to be the most likely to be observed in applications if the feedback is similar to the form we propose and is positive.
recently oliver and soundararajan computed the distribution of the last digits of consecutive primes for the first prime numbers .their calculations revealed a bias : the pairs , , and occur about a third less often than other ordered pairs of last digits of consecutive primes .their calculations are shown in table [ biastable ] . for the past several yearswe have been studying the cycle of gaps that arises at each stage of eratosthenes sieve .our work to this point is summarized in .we have identified a population model that describes the growth of the populations of any gap in the cycle of gaps , across the stages of eratosthenes sieve .the recursion from one cycle of gaps , , to the next , , leads to a discrete dynamic model that provides exact populations for a gap in the cycle , provided that .the model provides precise asymptotics for the ratio of the population of the gap to the population of the gap once the prime is larger than any prime factor of .this discrete dynamic system is deterministic , not probabilistic .the discrete dynamic model provides some insight into the phenomenon that oliver and soundararajan have observed . 1 .we look at the asymptotic ratios of the populations of small gaps to the gap .these asymptotic ratios suggest that the reported biases will erode away for samples of much larger primes .we look at additional terms in the model , to understand rates of convergence to the asymptotic values . to first orderthis explains some of the biases exhibited in table [ biastable ] .we initially work in base , so we then examine the results for a few different bases , to see how the biases depend on the base . a graph of the ratios of the populations of gaps in each residue class modulo , normalized by the population of gaps . herethe ratios in are approximated by equation [ eqeigsys ] to twelve terms .we used initial conditions from for gaps up to .the dashed line indicates where the calculations by oliver and soundararajan lie ., width=480 ] these observations apply to the stages of eratosthenes sieve as the sieve proceeds .all gaps between prime numbers arise in a cycle of gaps . to connect our results to the desired results on gaps between primes, we would need to better understand how gaps survive later stages of the sieve , to be affirmed as gaps between primes .until the models for survival have a higher accuracy , the results based on the exact models for can only be approximately applied to gaps between prime numbers .we offer the exact model on populations of gaps in as a constructive complement to the approaches working from the probabilistic models pioneered by hardy and littlewood ..[biastable ] oliver and soundararajan s table of computed distributions of last digits of consecutive primes for the first primes . herethey are working in base . 
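The tabulated bias is straightforward to reproduce on a smaller sample. The self-contained sketch below uses a plain sieve to tally ordered pairs of last digits of consecutive primes below one million (a much smaller sample than the one used for the table above); even at this scale the repeated-digit pairs are clearly under-represented.

```python
from collections import Counter
from math import gcd

def primes_up_to(n):
    """Basic sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def last_digit_pair_counts(limit, base=10):
    """Tally ordered pairs of last digits (mod `base`) of consecutive primes up
    to `limit`, skipping the primes that share a factor with the base."""
    ps = [p for p in primes_up_to(limit) if gcd(p, base) == 1]
    pairs = Counter(zip((p % base for p in ps), (q % base for q in ps[1:])))
    total = sum(pairs.values())
    return {pair: count / total for pair, count in sorted(pairs.items())}

# Frequencies for primes below one million; the diagonal pairs (1,1), (3,3),
# (7,7), (9,9) come out noticeably under-represented, as in the table above.
for pair, freq in last_digit_pair_counts(10**6).items():
    print(pair, round(freq, 4))
```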
in section [ secbases ]we address their calculations in base as well .[ cols="^,^,>,^,^,>",options="header " , ]by identifying structure among the gaps in each stage of eratosthenes sieve , we have been able to develop an exact model for the populations of gaps and their driving terms across stages of the sieve .we have identified a model for a discrete dynamic system that takes the initial populations of a gap and all its driving terms in a cycle of gaps such that , and thereafter provides the exact populations of this gap and its driving terms through all subsequent cycles of gaps .all of the gaps between primes are generated out of these cycles of gaps , with the gaps at the front of the cycle surviving subsequent closures .the trends across the stages of eratosthenes sieve indicate probable trends for gaps between primes .we are not yet able to translate the precision of the model for populations of gaps in into a robust analogue for gaps between primes .for the first primes , oliver and soundararajan calculated how often the possible pairs of last digits of consecutive primes occurred , and they observed biases .regarding their calculations they raised two questions : does the observed bias persist ? is the observed bias dependent upon the base ?we have addressed both of these questions by using the dynamic system that exactly models the populations of gaps across stages of eratosthenes sieve .the observed biases are transient phenomena .the biases persist through the range of computationally tractable primes .the asymptotics of the dynamic system play out on superhuman scales for example , continuing eratosthenes sieve at least through all -digit primes . to put this in perspective , the cycle has more gaps than there are particles in the known universe ; yet in for a -digit prime , small gaps like will still be appearing in frequencies well below their ultimate ratios .gaps the size of will just be emerging , relative to the prevailing populations of small gaps at this stage . our work on the relative frequency of gaps modulo for has addressed the bias between the residue classes . the observed biases are due to the quick appearance of small gaps and the slow evolution of the dynamic system .while we have addressed the inter - class bias , we have said nothing about the intra - class bias , that is , unequal distributions across the ordered pairs within a given residue class modulo .our initial calculations here indicate that this bias should also disappear eventually , but this exploration needs to be more thorough .the model developed by oliver and soundararajan also depends only on the residue class .our calculations use a sample of gaps . 
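As a concrete companion to the cycle of gaps used throughout, the sketch below constructs G(p#) by brute force, as the gaps between consecutive integers coprime to the primorial p# over one full period, and tallies the population of each gap. This direct construction is only feasible for small primorials; the recursion described in the text is what makes the larger stages tractable.

```python
from math import gcd, prod
from collections import Counter

def cycle_of_gaps(primes):
    """Return the cycle of gaps G(p#) for p# = product of the given primes:
    the differences between consecutive integers coprime to p# over one full
    period, starting from 1."""
    P = prod(primes)
    coprime = [n for n in range(1, P + 2) if gcd(n, P) == 1]
    return [b - a for a, b in zip(coprime, coprime[1:])]

# Populations of each gap size in G(7#) = G(210) and G(13#) = G(30030).
for ps in ([2, 3, 5, 7], [2, 3, 5, 7, 11, 13]):
    gaps = cycle_of_gaps(ps)
    print(ps[-1], Counter(gaps))
```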
To improve the precision of our calculations of the asymptotic ratios across residue classes, it would be useful to find a normalization that makes working with all gaps manageable. Once we understand the model for gaps, then any choice of base reassigns the gaps across the residue classes for this base. The number of ordered pairs corresponding to a residue class is proportional to the asymptotic relative frequency. The initial biases and more rapid convergence that favor the small gaps can be observed, over any computationally tractable range, for the residue classes to which these small gaps are assigned.

[Figure: two examples of the polynomial approximations in equation [eqeigsys]. The approximations differ from the exact discrete model by substituting for . The gap has driving terms up to length , so the approximations of degree and higher coincide with that of degree .]
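As a small companion to the residue-class discussion, this last sketch tallies the gaps of one cycle G(p#) by residue class modulo 10 and normalizes each class by the population of the gap 2, in the spirit of the ratios plotted in the figure described earlier; as before, the brute-force construction is only practical for small primorials.

```python
from math import gcd, prod
from collections import Counter

def cycle_of_gaps(primes):
    """Gaps between consecutive integers coprime to p# over one full period."""
    P = prod(primes)
    coprime = [n for n in range(1, P + 2) if gcd(n, P) == 1]
    return [b - a for a, b in zip(coprime, coprime[1:])]

gaps = cycle_of_gaps([2, 3, 5, 7, 11, 13])
by_class = Counter(g % 10 for g in gaps)      # residue classes of the gaps, base 10
pop_of_2 = gaps.count(2)                      # population of the gap 2 in this cycle
for residue in sorted(by_class):
    print(residue, by_class[residue], round(by_class[residue] / pop_of_2, 3))
```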
recently oliver and soundararajan made conjectures based on computational enumerations about the frequency of occurrence of pairs of last digits for consecutive primes . by studying eratosthenes sieve , we have identified discrete dynamic systems that exactly model the populations of gaps across stages of eratosthenes sieve . our models provide some insight into the observed biases in the occurrences of last digits in consecutive primes , and the models suggest that the biases will ultimately be reversed for large enough primes . the exact model for populations of gaps across stages of eratosthenes sieve provides a constructive complement to the probabilistic models rooted in the work of hardy and littlewood .
the process of magnetic reconnection underpins our understanding of many astrophysical phenomena .examples include solar flares , geomagnetic storms and saw tooth crashes in tokamaks .yet a complete understanding of this enigmatic plasma process remains illusive , despite decades of research .fundamentally , magnetic reconnection is the process whereby excess energy in a magnetic field is liberated by the reorganization of a magnetic field s connectivity in the from of plasma heating , bulk fluid motions and particle acceleration .classically , this is envisioned to occur in a single well defined region of high electric current , within which non - ideal effects dominate and the plasma becomes decoupled from the magnetic field .however , in recent years the importance of instabilities which fragment reconnection regions has been more fully appreciated .in particular , in two dimensions high aspect ratio current sheets have been shown to be highly unstable to tearing with the resulting dynamics dominated by the formation and ejection of magnetic islands , whilst 3d simulations have emphasized the importance of flux rope formation , braiding and the possible development of turbulence .observations of plasma blobs and bursty radio emissions in the extended magnetic field beneath erupting cme s as well as bursty signatures of reconnection in the earth s magnetotail appear to somewhat corroborate this picture .an important diagnostic of any reconnection scenario is the rate at which the process occurs . in two dimensions reconnectionoccurs only at x - points , with the rate of reconnection given simply by the electric field at this position .if the current layer is fragmented then the only topologically stable situation is one in which only a single x - point resides at the boundary between the global flux domains .the reconnection rate is then the electric field measured at this dominant " x - point ( e.g. ) . in three dimensions ( 3d )the picture is more complex .when reconnection involves 3d nulls , separatrix surfaces divide up the magnetic field into regions of differing connectivity .the rate of reconnection can then be defined as the flux transfer across these surfaces , or past separators which sit at the intersection of different separatrix surfaces . if the non - ideal regions spanning the separatrix surfaces are fragmented then considering flux transfer across segments of a separatrix surface or along multiple separators if they exist allows the reconnection rate to be quantified .unlike 2d , where x - points other than the dominant x - point do not directly contribute to the reconnection rate ( although they may indirectly affect it ) , in 3d reconnection across a separatrix surface in multiple places or at multiple separators all contribute towards the total rate of flux transfer between the main topological domains .this leads to the surprising result that in 3d _ two _ measures of reconnection may be used when reconnection occurs in fragmented current layers .one that measures the total rate at which flux is reconnected ( taking account of recursively reconnected magnetic flux ) and a net measure of the combined effects of each of the fragmented non - ideal regions .the former is the true reconnection rate for any problem , but the latter may be of interest when the large scale effects of a reconnection site are being considered .furthermore , in 3d reconnection may also occur in the absence of magnetic null points . 
in this casethe lack separatrix surfaces against which reconnection can be defined requires a more general approach to the problem .the theory of general magnetic reconnection ( gmr ) encompasses reconnection across separatrices as well as describing reconnection in situations without them .the theory of gmr has shown that for a single isolated non - ideal region the rate of reconnection is given by the maximum of on all field lines threading the non - ideal region .however , the question remains as to how to measure reconnection in fragmented current layers without the presence of separatrix surfaces or in situations where separatrix surfaces are difficult to identify .the aim of this work is to extend the framework of gmr to quantify the reconnection process in this case .the paper is structured as follows . in sec .ii , we review the theory of gmr and introduce the relevant mathematical tools .section iii contrasts the manner in which new connections are created for single and multiple reconnection regions .sections iv and v recap the derivation of the reconnection rate for an isolated region and then derive expressions for the reconnection rate in fragmented current layers .in particular , we show that as with reconnection involving null points a total and a net rate may be defined . the interpretation of each is then discussed .section vi demonstrates the developed theory for two simple kinematic examples . finally , sec .vii summarises the new results and presents our conclusions . and seen by plasma elements on either side of the non - ideal region , .,scaledwidth=40.0% ]gmr is most readily developed within the framework of euler potentials . a pair of euler potentials ( and say ) are scalar functions which locally describe regions of non - vanishing magnetic field through the relation as long as field linesare simply connected and only enter and leave through the boundaries of the region of interest once , and are single valued and can be used to label individual field lines . and are also flux coordinates and are related to the magnetic flux through a given surface via coupled with an arc length ( ) satisfying , any position within the volume of interest can be expressed in space . within this formulation the electric field can be expressed as where the quasi - potential ( so named as it contains a time varying component ) is related to the electrostatic potential via when the magnetic vector potential is assumed to take the form .see for a discussion of the dependance of gmr on the choice of gauge taken for . for maximum applicability a general form of ohm s law is assumed where the contributing non - ideal terms are grouped together into a single vector such that where is assumed to be localized within a small region inside the domain of interest . by expanding in covariant form and inserting it into eqn .( [ ohms ] ) along with eqns .( [ bfield ] ) , ( [ efield ] ) and ( [ phi ] ) eventually leads to an expression giving the relative difference between the evolutions of and that are locally seen " by plasma elements on either side of the non - ideal region where is given by and are the quasi - potential functions on either side of the non - ideal region .( [ evos1 ] ) and ( [ evos ] ) show that plasma elements initially on the same field line threading a localized non - ideal region , measure a different evolution of and and so are not connected by the same field line at a later time .a sketch of this idea is shown in fig .[ fig : cart1 ] . 
the power of eqns .( [ evos1 ] ) and ( [ evos ] ) is that by considering only the relative difference in the evolutions of plasma elements , ideal flow components are removed , leaving only the components resulting in changes of field line connectivity and thus reconnection .if there is no variation in in -space then the evolutions of plasma elements are the same on both sides of . in this caseplasma elements which begin on the same field line ( and so initially have the same value of and ) are subject to the same change in and and so will be found on the same field line at a later time .therefore , a necessary and sufficient condition for reconnection is that i.e. that there be gradients in from one field line to another .to understand the nature of any resulting connectivity change for a given problem it is useful to map the problem from 3d real space into flux coordinate ( ) space .we will work in this space repeatedly throughout the rest of the paper .when the reconnection region is assumed to be localized within a single isolated region of the contours of in flux coordinate space form closed loops .the hamiltonian nature of eqns .( [ evos1 ] ) and ( [ evos ] ) dictates that new connections be formed tangential to the contours of .thus , when has only a single extrema these new connections will form in a circular manner .figure [ fig : mapping](a ) shows a sketch of this concept where the green arrows indicate the direction along which new connections form. however , when a single reconnection region has an inhomogeneous or multiple reconnection regions exist within the region of interest then the mapping of in flux coordinate space contains multiple maxima and minima , fig .[ fig : mapping](b ) . in generalwe restrict ourselves here to scenarios where the multiple regions still only make up a small fraction of the volume under consideration .this means that approaches zero outside of a flux tube encircling the multiple reconnection sites .the new connections which form now do so along multiple closed loops embedded within a larger scale set of loops , fig .[ fig : mapping](b ) ( right panel ) .the way that this connection change is achieved depends upon the global constraints of the system under consideration . in general the formation of new connections along these loopsis a weighted combination of two extremes : _ steady state _ and purely _ time dependent _ connection change .it is instructive to consider each in turn . in steady statethe electric field is potential and the magnetic field remains fixed in time .considering again the case when has a single extrema , let us then assume that on one side of the non - ideal region . the only way that new connections can form in the manner shown in fig .[ fig : mapping](a ) , whilst also maintaining is by inducing a circular plasma flow of the form shown in fig . [ fig : connection](a ) .ideal flows may be superposed on both sides , however the connection change of the magnetic field within this ideal transporting flow will remain the same . 
considered one such example of this scenario .extending this concept to multiple reconnection regions , each individual non - ideal region will behave locally like the single reconnection region shown in fig .[ fig : connection](a ) .the key difference is that now a subset of field lines thread through multiple reconnection regions .thus , circular plasma flows are induced on field lines leaving a reconnection region which then feed into other secondary regions further along the same field line .each secondary region superposes a circular plasma flow on to the flow pattern associated with the field lines which thread into it . in some casesthis will enhance the induced flow at the exit of the patchy reconnection volume . in others it will act to reduce it .figure [ fig : connection](b ) shows a conceptual sketch of this idea .thus , steady state patchy reconnection within a localized volume gives rise to an induced localized rotating flow with multiple internal vortices on field lines threading out of the reconnection volume . as with the single reconnection regionany background ideal plasma flow may be superposed on to this non - ideal flow . in the opposite extreme of purely timedependent reconnection the electric field is assumed to be zero on both sides of the non - ideal region .this is particularly relevant to the solar corona , e.g. . in this casenew connections can only be formed by a time dependent change in the magnetic field within .the circular nature of this connection change in situations with a single extrema in implies that helical magnetic fields are formed in the process , fig .[ fig : connection](c ) .when the volume under consideration contains several reconnection sites , each helical region of field may contain a subset of field lines which threads into other helical reconnection regions . fig .[ fig : connection](d ) depicts this idea .this shows that patchy time dependent reconnection can generate ( or relax ) braided magnetic fields which are thought to be important in the context of coronal heating . in any given 3d reconnection scenario a combination of both manners of connection change are likely to occur .if a given magnetic field contains separatrix surfaces and separator lines then these topological structures can be used as a reference against which the rate of flux transfer may be measured .for instance , when reconnection occurs along a separator the rate of reconnection is simply given by the integral of along the separator line . however , in the absence of such structures a more general theory is required . developed such a theory for an isolated single reconnection region , , extending those of previous works . using a similar approachwe now reproduce their results before generalizing the theory to quantify reconnection with multiple reconnection sites . without an obvious reference surface against which to measure an arbitrary flux surface ( i.e. 
a surface comprised of magnetic field lines ) which intersects the single region of parallel electric field and contains the field line along which the integral of is maximal .when mapped into -space this surface appears as a line , which they call the line .figure [ fig : gamma](a ) shows a sketch of this concept , where the contours depict the quasi - potential .now , this flux surface is comprised of field lines embedded in the ideal regions on either side of .generally , in each ideal region the evolution is comprised of a background ideal transporting component ( which by definition is the same on both sides ) and a non - ideal reconnecting component . without loss of generalitywe now focus on the non - ideal component by fixing the evolution of field lines threading _ into _ the non - ideal region to zero , i.e. .this is equivalent to using a coordinate system which moves with the plasma on field lines entering the non - ideal region , allowing the connection change to be entirely characterized by the evolution of the field lines threading _ out of _ the non - ideal region which evolve according to .if is then defined at some arbitrary time ( ) , then at some later time ( ) the differing evolution on either side of the non - ideal region splits into two new flux surfaces . in spacethese appear as two lines , shown in solid blue and dashed black in fig .[ fig : gamma](b ) .note that as one of these lines is coincident with the original line .the two new surfaces overlap at the edge of the non - ideal region and at ( where ) since at these places .the magnetic flux reconnected up to this time is simply given by the flux bounded within one of the two flux tubes formed by these two new flux surfaces , denoted and ( fig .[ fig : gamma](c ) ) .each flux tube must have the same cross sectional area due to the rotational nature of the connection change . in fluxcoordinate space this area is equal to the magnetic flux within each flux tube , recall the nature of the euler potentials ( eqn .( [ eulerflux ] ) ) .the rate of reconnection is then defined to be the rate at which ( representing either or ) grows at , where on one side of and on the other . is the outward normal of the boundary . as the boundary of collapses to become the section of the line on one side of the peak in , referred to hereafter as .the integral around the boundary of at then becomes the superposition of integrals along , i.e. where is the arc length along . in coordinate space the local normal to given by whilst eqns .( [ evos1 ] ) and ( [ evos ] ) give that combining eqns .( [ recon1 ] ) , ( [ recon2 ] ) , ( [ normal ] ) and ( [ diffevo ] ) then gives the reconnection rate as thus , for an isolated region of with a single maximum of the reconnection rate is given by the value of this maximum .this can be interpreted as the rate at which flux is transferred in one direction across any arbitrarily defined flux surface which intersects the non - ideal region and includes the field line upon which the maximum of occurs .when there are multiple reconnection sites or inhomogeneity of within a single site there is likely to be multiple peaks in .we now aim to develop expressions which quantify the rate of reconnection in this case and explain their interpretations . as discussed in sec .[ sec : nature ] , when there are multiple peaks in new connections are formed along multiple embedded closed paths in the plane . 
near positive extrema ( peaks ) of direction that this new connection formation takes is clockwise , whereas for negative extrema ( troughs ) it is anti - clockwise , fig .[ fig : mapping](b ) .the places where there is no connection change occur where .these correspond to the field lines not threading into any non - ideal region ( the neighboring ideal field ) and special field lines along which the net difference in connection change along their length is zero , i.e. field lines along which the induced connection change from multiple reconnection sites cancels out .these special field lines sit at the critical points ( x - points " and o - points " ) of the divergence free field defining the direction of new connection formation : in terms of the quasi - potential the `` o - points '' correspond to peaks and troughs of , whereas the `` x - points '' occur at saddle points .figure [ fig : arc](a ) shows a sketch of this idea , where the green and pink circles show the position of the `` o - points '' and `` x - points '' respectively .the key idea here is that just like x - points divide up two dimensional magnetic fields into distinct topological regions , so also the rotational formation of new connections described by is partitioned into localized rotational regions ( witho - points " at their centers ) by a series of `` x - points '' .the different topological regions of the field are shown in different colors in fig .[ fig : arc ] to better illustrate them . in fluxcoordinate space .the contours depict and the colors denote the topological regions associated with .( b ) as a function of arc length along . and ( where ] . at some later time ( )the differing field evolutions on either side of the multiple non - ideal regions forms a chain of flux tubes with cross sectional areas of or associated with and respectively .note , that the rotational nature of means that the sum of each set of area elements must be the same , i.e. now , if we compare the area segments swept out by the series of bounded flux regions discussed earlier ( fig .[ fig : arc](c ) ) to those generated by this continuous surface ( fig .[ fig : area](b ) ) we find that they match .this shows that the total rate of reconnection can be interpreted as the rate of growth as of the collective area associated with flux swept in the same direction ( all to the left _ or _ all to the right ) across , i.e. a similar conclusion can also be drawn for other choices of connecting the chains of maxima and minima .similarly , the net rate can be interpreted as the rate of growth as of the _ difference _ in the areas associated with flux swept in one ( or the other ) direction on one side of , i.e. 
where and and sum over the area segments formed along the portion of on one ( or other ) side of .it should be noted that the existence of such a large scale surface is not necessary for the application of eqns .( [ rrnet ] ) and ( [ rrtot ] ) , and indeed if is sufficiently complex or contains discontinuities such a surface may not be definable .however , we have shown that at least when is smooth and relatively simple the intuitive idea that the reconnection rate should measure the rate at which flux is reconnected across some large scale flux surface ( akin to that of a true separatrix when reconnection occurs between distinct topological regions ) still holds .finally , we now consider the case where rather than wanting to know the true rate of reconnection , one is interested in knowing the rate at which flux is reconnected past a particular flux surface .an example of such a surface would be one associated with an observed flare ribbon on the photosphere .another would be if the global topology is such that field lines from a separatrix surface or surfaces pass through the domain of interest and one wishes to know the rate of flux transfer between two different topological domains .equations ( [ rrnet ] ) and ( [ rrtot ] ) are easily generalized to this scenario .consider some arbitrary flux surface spanning a fragmented reconnection region with multiple peaks in , fig .[ fig : gen_gamma](a ) . along the length of a number of local maxima and minima of occur . between each of these local extrema fluxis transferred in one or other direction depending upon the sign of the gradient of , fig .[ fig : gen_gamma](b ) . in analogy to the previous sections the net rate at which flux is transferred across this surface is given by where the subscript denotes measurement of each quantity along the line .similarly the total rate of flux transfer across this particular flux surface is given by depending upon the path take by the line as it crosses in flux coordinate space the value of can be greater or less than the value measured by eqn .( [ rrtot ] ) .for instance if is chosen so that it crosses many times , then it would be likely that .however , by definition the net rate of transfer will at most be the same as the net rate of rotational connection change around the field line with , so that . .model parameters [ cols="^,^,^,^,^,^,^,^ " , ] [ table : runs ] of the maximum of , showing the three localized non - ideal regions at in both models . in redare a selection of field lines plotted from footpoints along ., scaledwidth=45.0% ]to illustrate the theory we now present two simple kinematic models of an idealized fragmented current layer . starting with an initial magnetic field ( at ) of the form we assume some non - ideal process occurs to produce multiple non - ideal regions such that where ( ) , ( ) and control the dimensions , position and the strength respectively of each non - ideal region .we choose three non - ideal regions ( ) , one larger central region and two smaller identical offset regions , see fig .[ fig : vapor_t0 ] .the chosen parameter values are given in table [ table : runs ] . depending upon the constraints placed upon the system, reconnection solutions describing purely time dependent , steady state or a combination of both scenarios can be constructed . in what followswe will consider the two extreme cases and verify in each case the validity of the eqns .( [ rrnet ] ) and ( [ rrtot ] ). 
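Before specializing to the two extremes, it may help to see how the peaks and saddles of the quasi-potential are located in practice. The sketch below mirrors the configuration described above (one stronger central region and two weaker offset ones) with purely illustrative Gaussian profiles placed on the symmetry line, and classifies the critical points along that line. The "net" value printed is the largest peak, and the "total" follows a peaks-minus-saddles reading of eq. [rrtot] that avoids double counting the outer contours; both should be checked against the exact expressions in the text rather than taken as definitive.

```python
import numpy as np

def xi(x, y, regions):
    """Quasi-potential built from a superposition of localized non-ideal
    regions. Gaussians are used purely for illustration; the profiles of the
    kinematic model are set by the expressions in the text."""
    total = np.zeros_like(x, dtype=float)
    for (xc, yc, amp, width) in regions:
        total += amp * np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / width ** 2)
    return total

# One stronger central region and two weaker offset regions, all centred on
# the symmetry line y = 0 (positions, amplitudes and widths are illustrative).
regions = [(0.0, 0.0, 1.0, 0.7), (-1.5, 0.0, 0.5, 0.5), (1.5, 0.0, 0.5, 0.5)]

# By symmetry the o-points and x-points lie on y = 0, so it suffices to scan
# the quasi-potential along that line.
xs = np.linspace(-3.0, 3.0, 2001)
profile = xi(xs, np.zeros_like(xs), regions)

peaks, saddles = [], []
for i in range(1, len(xs) - 1):
    if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]:
        peaks.append(profile[i])        # local maxima along the line: o-points
    elif profile[i] < profile[i - 1] and profile[i] < profile[i + 1]:
        saddles.append(profile[i])      # interior dips along the line: x-points

print("peak values  :", np.round(peaks, 4))
print("saddle values:", np.round(saddles, 4))
print("net rate (largest peak):", round(max(peaks), 4))
# One reading of eq. [rrtot]: count every peak but subtract the saddle values
# so that flux circulating around the outer contours is not double counted.
print("total rate (peaks minus saddles):", round(sum(peaks) - sum(saddles), 4))
```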
mapped on to flux coordinate space .( b ) along the line , passing through the five critical points . , scaledwidth=45.0% ] in this extremewe impose that the sections of field lines threading into and out of the non - ideal region are held fixed such that the electric field vanishes on each side of the non - ideal region .this is equivalent to assuming that the plasma velocity everywhere .ohm s law then gives directly that , i.e. faraday s law , then dictates that at later times the magnetic field evolves such that where sums over each non - ideal region and at the magnetic field is initially straight , but as time progresses each flux ring introduces an ever increasing twist to the field .note that in this simple example we are only considering small periods in time , . at the straight magnetic field can be described with the two euler potentials and .since each non - ideal region is negligibly strong in the vicinity of the others can be constructed from the superpositions of across each region giving where is an arbitrary function .in what follows we will trace field lines from to so for convenience we set at to give figure [ fig : phi_t0]a shows a contour plot of at mapped on to the -plane .the profile contains three distinct peaks ( o - points ) with two saddle points ( x - points ) between them . by symmetrythe x - points and o - points of lie along , so we choose this as our line .the variation of the quasi - potential along this line is shown in fig .[ fig : phi_t0]b .the peaks occur at and , with the saddle points located at and . applying eqn .( [ rrtot ] ) gives the total reconnection rate of this system as with a net rate of flux transfer given by in this extreme , these values represent the total and net rate respectively at which magnetic field is generated normal to the surface collectively by the non - ideal regions .we now go on to verify these values by comparing them with values obtained numerically from a flux counting procedure , explained below .a large number of field lines were traced from a grid on as far as . at both positionsthe magnetic field has reached its asymptotic value of .this is done for the field at some time , and some later time .the amount of flux transfer ( ) in this period is obtained by comparing the final positions ( on ) at both times and summing the number of field lines to have crossed the line , weighted by their area element on the starting grid and the field strength perpendicular to the surface of starting points , i.e. where is the number of field lines under consideration .the rate of reconnection is then estimated as to obtain all field lines found to have crossed in are counted and the value halved so as not to double count the flux transfer ( recall that the connection change is circular and so will cross the line twice ) . 
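The flux-counting check can also be sketched, although the magnetic field of the model is not reproduced here. The snippet below substitutes a toy footpoint map (a time-dependent rotation about each region centre, standing in for the twist injected by the flux rings) for genuine field-line tracing, so its numbers are purely illustrative; the counting logic, however, mirrors the procedure just described: compare endpoint positions at two times, sum B0 dA over starting points whose endpoints have crossed the reference line, halve to avoid double counting, and divide by the elapsed time.

```python
import numpy as np

def endpoint_map(x0, y0, t, centers, strengths, width=0.5):
    """Toy footpoint mapping used in place of real field-line tracing: each
    localized region rotates endpoints about its centre by an angle that grows
    linearly in time and falls off with distance (an illustrative stand-in for
    the twist injected by the flux rings of the model)."""
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for (xc, yc), k in zip(centers, strengths):
        dx, dy = x - xc, y - yc
        theta = k * t * np.exp(-(dx ** 2 + dy ** 2) / width ** 2)
        x = xc + np.cos(theta) * dx - np.sin(theta) * dy
        y = yc + np.sin(theta) * dx + np.cos(theta) * dy
    return x, y

def flux_counting_rate(t1, t2, n=400, b0=1.0, extent=3.0):
    """Flux-counting estimate: compare endpoint positions at two times, sum
    b0*dA over starting points whose endpoints crossed the y = 0 line, halve
    to avoid double counting, and divide by the elapsed time."""
    centers = [(0.0, 0.0), (-1.5, 0.0), (1.5, 0.0)]   # illustrative positions
    strengths = [1.0, 0.5, 0.5]                        # illustrative strengths
    g = np.linspace(-extent, extent, n)
    xx, yy = np.meshgrid(g, g)
    da = (g[1] - g[0]) ** 2
    _, y1 = endpoint_map(xx, yy, t1, centers, strengths)
    _, y2 = endpoint_map(xx, yy, t2, centers, strengths)
    crossed = np.sign(y1) != np.sign(y2)
    return 0.5 * b0 * da * np.count_nonzero(crossed) / (t2 - t1)

print(flux_counting_rate(0.0, 0.05))
```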
is approximated by counting only the net transfer across a half segment of the line .the mapping of field lines on at , color coded according to whether they start above or below on the other side of the non - ideal region ( ) is shown in fig .[ fig : fluxcount](a ) .figure [ fig : fluxcount](b ) shows the regions within which field lines have changed connectivity compared with the mapping at .white areas depict where flux has reconnected across from to , and black regions where flux has been reconnected in the other direction .grey shows regions where field lines have not crossed .figure [ fig : vapor_t1 ] shows a 3d visualization of the field at , were the iso - contours depict the shape and position of each non - ideal region . applying the flux counting procedure we obtain that for a grid of starting points . aside from a small variation due to the discrete nature of the method , these results agree closely with the value obtained by applying eqns .( [ rrtot ] ) and ( [ rrnet ] ) . for the time dependent model .black show field lines with starting points below and white those with starting points above .( b ) connectivity plot of the field lines to have changed connection between and .black regions have moved from to , white have moved from to and grey regions have stayed the same.,scaledwidth=50.0% ] of the maximum of , showing the three localized non - ideal regions at in the time dependent model . in redare a selection of field lines plotted from footpoints along , demonstrating the injection of twist into the field and the overlap of the field line mappings.,scaledwidth=45.0% ] .,scaledwidth=45.0% ] lastly , consider now the instantaneous reconnection rate at the later time ( ) . at each non - ideal regionnow adds a non - zero twist to the field line mapping .the overlapping nature of the mappings distorts the shape of and therefore the positions of the extrema and saddle points , fig .[ fig : phi_t1 ] . as a result the conceptual flux surface against which reconnection rate is being measured by eqn .( [ rrtot ] ) moves to pass through these points at this later time . for comparisonwe now consider the opposite extreme of steady state reconnection for the same initial magnetic field and non - ideal term ( ) . in steady statethe electric field can be expressed in the form of a potential giving that for illustration we set which removes background ideal motions .thus , this electric field differs from , with a non - zero part outside of the non - ideal region which induces a perpendicular plasma flow of the form the magnetic field in this case remains straight for all time , and the quasi - potential is simply the same as the time dependent case at , i.e. in the steady state example .( b ) the induced perpendicular plasma flow at .,scaledwidth=45.0% ] line ( ) in the steady state example .( b ) variation of along the line .note that the zeros in the velocity field correspond to peaks or troughs of .,scaledwidth=45.0% ] figure [ fig : vel ] shows the induced plasma flows on one side of the reconnection regions when the electric field is assumed to be zero at .the generated flux transporting flows follow the contours of the quasi - potential , producing three overlapping vortices . as the contours of now form the stream lines of the perpendicular plasma flow , the zeros in the flow pattern are co - located with the peaks and saddle points in , fig .[ fig : vslice ] . 
as the quasi - potential is the same as the time dependent scenario at the two measures of reconnection rate are then also where can be chosen to lie along . in this extremethese quantities are measures of the total and net rate at which flux is swept past by the induced plasma flow on one side of the collective non - ideal regions , i.e. where denotes integration over either the positive or negative values only .an approximate expression for this flux transporting flow evaluated on at is which when substituted into eqn .( [ rrvel1 ] ) and integrated over the regions of negative velocity leads to note that integrating over the positive value gives the same result .substituting the above expression for into eqn .( [ rrvel2 ] ) then also gives that eqns .( [ phitot ] ) and ( [ phinet ] ) are simply eqns .( [ rrtot ] ) and ( [ rrnet ] ) applied to this particular profile .thus , we have verified the two rates of reconnection for our idealized fragmented reconnection region in each of the two extreme cases of steady state and purely time dependent reconnection and by extension the continuum of cases in - between .the aim of this paper was to extend the theory of general magnetic reconnection to situations with fragmented current layers within a localized volume .we considered the manner in which new connections may be formed , derived expressions for the rate at which this occurs and verified these expressions with two simple examples . in terms of facilitating the formation of new connections we showed that in the extreme of steady state reconnection a large scale rotational non - ideal flow with internal vortices is produced , whilst purely time dependent reconnection leads to spontaneously braided magnetic fields .however , it should be emphasized that the reverse is also true .that is , the existence of non - ideal regions is guaranteed by the right evolution of the magnetic field ( given the necessary non - ideal plasma conditions ) .in particular , if a magnetic field is initially braided with the field lines entering and leaving the volume held fixed , then multiple current layers must form to remove this braiding .this second scenario is readily observed by numerical experiments examining the non - ideal relaxation of braided magnetic fields ( e.g. ) . by consideringthe closed paths along which these new connections formed we also showed that when current layers are fragmented _ two rates of reconnection _ can be defined which describe the process . which measures the true rate at which new connections are formed collectively by the multiple non - ideal regions and a second , measuring the net rate at which changes in the global field occurs .when applied to a single reconnection region both rates are equal .we chose to define such that it measures the total rate at which flux is locally and globally cycled when viewed in flux coordinate space .this requires evaluating the quasi - potential at the saddle points of as well as the extrema .we chose this rather than a simple sum over each extrema as summing over only the extrema overestimates the rate flux is cycled ( although if each non - ideal region has little overlap this may give a close approximation , e.g. ) .this occurs as each extrema taken on its own measures the net rate of transfer of flux between itself and the background ideal field .therefore , summing over all extrema double counts the flux being cycled around outer loops , such as those depicted in orange and yellow in fig .[ fig : arc ] . 
By involving the quasi-potential measured at the saddle points, this double counting is avoided. It is also worth emphasizing that our total reconnection rate does not measure the sum of the reconnection rates of each individual reconnection region within the volume. The only way that this could be quantified would be to consider the local quasi-potential drop across each non-ideal region in turn. However, each region would have to be surrounded by ideal magnetic field for this to be meaningful. In fragmented current layers this is rarely the case, as different current sheets partially overlap when merging or breaking apart. Considering the collective behavior, as we have done here, is the only way to properly quantify such a system. Given that we have introduced two different rates to describe this collective behavior, which should be used to characterise a given reconnection process? It depends upon what is of most interest for the problem at hand. For instance, if one is considering the scaling of energy release compared with reconnection rate, then the total rate is the better choice. It would also be the more relevant choice in situations where the rate at which flux is swept up by a fragmented reconnection region is of interest, as is thought to be related to photospheric brightening in solar flares (e.g.). However, the net rate may be more useful when the multiple reconnection regions are fluctuating and transient (as occurs during an increasingly turbulent evolution of the magnetic field) and there are some simple large-scale symmetries against which flux transfer is wished to be known (e.g.). Ultimately the non-ideal physics associated with the plasma, any gradients in the mapping of the magnetic field, and the way in which excess magnetic energy is built up will dictate where non-ideal regions form and whether they subsequently fragment. The present analysis serves as a way of interpreting how the subsequent reconnection proceeds and how best to quantify it.

This research was supported by NASA's Magnetospheric Multiscale mission. PW acknowledges support from an appointment to the NASA Postdoctoral Program at Goddard Space Flight Center, administered by Oak Ridge Associated Universities through a contract with NASA. Figs. 9 and 12 were made using the VAPOR visualization package (www.vapor.ucar.edu).
there is growing evidence that when magnetic reconnection occurs in high lundquist number plasmas such as in the solar corona or the earth s magnetosphere it does so within a fragmented , rather than a smooth current layer . within the extent of these fragmented current regions the associated magnetic flux transfer and energy release occurs simultaneously in many different places . this investigation focusses on how best to quantify the rate at which reconnection occurs in such layers . an analytical theory is developed which describes the manner in which new connections form within fragmented current layers in the absence of magnetic nulls . it is shown that the collective rate at which new connections form can be characterized by two measures ; a total rate which measures the true rate at which new connections are formed and a net rate which measures the net change of connection associated with the largest value of the integral of through all of the non - ideal regions . two simple analytical models are presented which demonstrate how each should be applied and what they quantify .
more than twenty years ago edwards and oakeshott proposed a statistical mechanics framework for granular materials in mechanical equilibrium .the idea was to replace the energy by the total volume of the sample .thus the entropy of the system is defined as , where is the number of stable states for a given volume ( and given number of grains ) .the quantity equivalent to temperature within this description is the compactivity . in connection with edwards proposal ,aste _ et al _ has found for static packings of spheres an invariant distribution of voronoi volumes at the grain level .also , the random close packing fraction and the random loose packing fraction have been associated with configurations with and , respectively .it has been shown recently that two granular samples with the same packing fraction may not have the same properties , and an aditional macroscopic variable must be introduced in the statistical description . in reference was found that we can produce two samples with the same but different stresses .indeed , the stress tensor has been included in a more general statistical description ( see and references therein ) .we deal in this article with the so called _ parking lot model _ ( plm ) introduced by nowak _et al _ in the context of experiments of compaction of grains .starting with the statistical mechanics for the plm proposed by tarjus and viot , we obtain the random loose packing fraction for this model and introduce an order parameter that characterizes how far from a steady state situation the model is .we propose that a quantity analog to can be used as the order parameter in the continuum description of slow and dense granular flows by aranson and tsimring , mediating how solid and fluid is the form of the stress tensor .the plm is a model of random adsorption and desorption of particles on a substrate .particles are disorbed with rate and adsorbed with rate , with a no overlapping condition . for a given initial condition of the substrate, the model converges to a stationary state packing fraction around which it fluctuates . depends only on the ratio , which allow us to map the parameter to or of references . for large have and the convergence to a stationary state is very slow , reminiscent of glassy behavior .the plm is though to represent an average column of grains .tarjus and viot characterized the configurations produced by the plm with two variables .one of them is the packing fraction and the other is the insertion probability , which is the available line fraction for a new insertion .tarjus and viot recognized the need for an additional variable because of some memory effects observed in experiments that can be reproduced within the plm if we change in the course of a monte carlo simulation : we can generate two configurations with the same but with different subsequent evolution and by allowing a variation in , for a given finite time , we can obtain more dense substrates .thus , the history of the configuration is encoded in , which is a structure variable .the aditional variable is only needed to characterize configurations which are not produced in steady state , since in steady state the insertion probability is given by ( eq . 2 in ) : .\ ] ] obtained with monte carlo simulations ( circles ) .the continuum line is a gamma distribution with shape parameter . in the insetwe show vs. the insertion probability for the plm as given by eqs .[ rlp1 ] and [ rlp2 ] .the dotted line is equation [ rlpphi ] . 
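A minimal Monte Carlo sketch of the parking lot model makes the quantities used in this section concrete. The rate convention below (adsorption attempts at rate k+ per unit length, desorption at rate k- per adsorbed rod) and all parameter values are illustrative assumptions; the two measured quantities are the packing fraction and the insertion probability, the latter estimated as the fraction of random trial insertions that would be accepted.

```python
import bisect
import random

def fits(lefts, x, length=1.0, line=100.0):
    """Return True if a rod of the given length can sit with its left edge at x
    without overlapping the rods whose (sorted) left edges are in `lefts` and
    without sticking out of the segment [0, line]."""
    if x < 0.0 or x + length > line:
        return False
    i = bisect.bisect_left(lefts, x)
    if i > 0 and lefts[i - 1] + length > x:        # overlap with left neighbour
        return False
    if i < len(lefts) and x + length > lefts[i]:   # overlap with right neighbour
        return False
    return True

def parking_lot(k_plus=10.0, k_minus=1.0, line=100.0, t_max=300.0, seed=1):
    """Gillespie-style simulation: adsorption attempts at rate k_plus per unit
    length, desorption at rate k_minus per adsorbed rod (illustrative choice)."""
    rng = random.Random(seed)
    lefts, t = [], 0.0
    while t < t_max:
        total_rate = k_plus * line + k_minus * len(lefts)
        t += rng.expovariate(total_rate)
        if rng.random() < k_plus * line / total_rate:
            x = rng.uniform(0.0, line)             # adsorption attempt
            if fits(lefts, x, line=line):
                bisect.insort(lefts, x)
        elif lefts:                                # desorb a randomly chosen rod
            lefts.pop(rng.randrange(len(lefts)))
    phi = len(lefts) / line                        # packing fraction
    trials = 20000                                 # Monte Carlo insertion probability
    accepted = sum(fits(lefts, rng.uniform(0.0, line), line=line) for _ in range(trials))
    return phi, accepted / trials

print(parking_lot(k_plus=10.0, k_minus=1.0))
```

Changing the ratio k+/k- partway through such a run and re-measuring the insertion probability at fixed packing fraction is a quick way to see the memory effects mentioned above.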
at difference in is less than % .[phirlp ] ]if is the total lenght available for a new insertion ( , where is the size of the system ) , for given , and we have a configurational integral in terms of the gaps ( eq . 25 in ) : with being the step function .equation [ zgrande ] can be solved with a saddle point method in the limit of large , and , with and fixed . in steady state , for we have . in the limit of small obtain a random loose packing fraction by taking the limit in the above description ( see appendix ) : since the random loose packing of granular media depends on friction , from equation [ rlpphi ] we see that greater suggest grains with larger friction coefficient . for a given , a greater can also be associated to greater heterogeneity of voids at the grain level . in the limit , from equation [ rlpphi ]we get .this value for correspond to smooth grains .a gamma distribution of voronoi lenghts is obtained for ( see figure [ phirlp ] ) .the invariant distribution found in experiments by aste _et al _ for spheres packings in mechanical equilibrium is also a gamma distribution .it have been reported that at a packing fraction near the found by us the process of compaction is slowered . before reaching ,typically we have four differents regimes as we record the evolution of the packing fraction in a monte carlo simulation of the plm . during a first stage , `` ... increases rapidly until a value of around '' , and from this point afterwards the increase in is considerably slower .thus , can play a role in the onset of jamming in granular materials , as have been speculated in reference .. [ rho ] ) as a function of packing fraction for different values of the insertion probability . from right to left : .b ) 1- on a logarithmic scale as a function of for different values of ( same data than in a ) ) . from the intersection with the horizontal axis we can estimate the maximum value of packing fraction for a given .c ) circles : vs. as obtained from b ) . when , then .the solid line is equation [ phie ] , the steady state relation between and .[ parorden ] ] we introduce now a parameter which is , basically , the quotient between given by equation [ zgrande ] and which is the configurational integral without the restriction of having a definite value of , _ i.e. _ when only the first in equation [ zgrande ] is considered .this leads to : with and the entropy density is given by equation [ entropy ] in the appendix . in figure[ parorden]a we can see . for a given value of , as is increased up to a maximum value .this maximum value of can be estimated from a graph like the one shown in figure [ parorden]b . in figure[ parorden]c it can be seen that is equivalent to say that we are approaching a steady state situation . with this in mind , in figure [ orden ] we plot the insertion probability vs. the packing fraction for a given value of the parameter .we did this by solving numerically the relevant equations needed for to evaluate equation [ rho ] .it is worth to remember that a statistical description for the plm makes sense only if we need to consider configurations out of steady state , since in steady state these two variables are related by equation [ phie ] .thus , configurations with age .vs packing fraction for different values of the parameter .the particular case is given by equation [ phie ] , the steady state relation between and .the dark region are not available ( ) configurations and its frontier is defined by the curve . 
configurations with age .the dotted line is a schematic representation of .[ orden ] ] in the last stages of a monte carlo simulation of the plm is increasing into with a small variation in the insertion probability .talbot , tarjus and viot found in reference that for , there is a minimum in as a function of time that occurs at a packing fraction .this implies that for slow compaction if the system can increase its packing fraction while increasing or decreasing , depending if is greater or smaller than .this should have consequences for _ processes _ ( curves ) in a plane like the one shown in figure [ orden ] .thus , for in the plm we have two zones in which we expect different behaviour . in terms of zones are : and , with and . in reference the authors reported a phase transition when inserting slowly a rod into a column of grains : the system s response to shear changes at a certain packing fraction . can be localized by monitoring the change in height of the column , after removing the rod , as a function of .it would be interesting to put the results of reference in terms of a process that starts from a packing fraction on the curve of figure [ orden ] and ends on a packing fraction , with .can this two - variable description of the plm be related to the volume - stress proposal of edwards and others ?we believe that a connection can be made by using a quantity analog to given by equation [ rho ] as the order parameter in the stress tensor proposed by aranson and tsimring in their continuum description of slow and dense granular flows : ,\ ] ] where is the stress under static conditions with the same geometry .thus , in eq .[ rho ] can control how fluid and solid is the form for the stress tensor .for we have a partially fluidized granular medium . at best ,this is a first step , a suggestion , towards a real connection between edwards statistical mechanics and granular hydrodynamics . finally , since configurations with in the plm age we must say something about the relevant time scales for this model .kolan , nowak and tkachenko have found that the low relaxation frequency and the high relaxation frequency for this model are given by and , with .thus , in order to speak of thermodynamic processes in figure [ orden ] , the observation time must satisfy . from figure 5 of reference , we can see that this condition on can be satisfied only for high packing fractions .only for we have at least two orders of magnitude of separation between and .we have obtained in this article the random loose packing fraction for the parking lot model , where is the lower packing fraction in which we can find a sample in mechanical equilibrium .we have done this by taking the limit of infinite compactivity in the statistical description of tarjus and viot , in which a macro state is characterized by its packing fraction and its insertion probability .the compactivity is the analog of temperature in the statistical mechanics for granular materials proposed by edwards .we have proposed an order parameter that characterizes how far from a steady situation the model is .thus , configurations with age . with , we proposed a connection of statistical mechanics with the continuum description of slow and dense granular flows by aranson and tsimring . 
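for concreteness, the monte carlo dynamics of the plm described earlier (random adsorption of unit-length particles with rate k+ and desorption with rate k-, subject to a no-overlap condition) can be sketched in a few lines. the system size, rate ratio, number of steps, and hard-wall boundaries below are illustrative assumptions, not the exact protocol of the paper; the insertion probability is estimated by random trial insertions.

```python
import numpy as np

rng = np.random.default_rng(1)
L, sigma = 200.0, 1.0          # line length and particle size (illustrative)
K = 200.0                      # ratio k+/k- of adsorption to desorption rates (illustrative)

def insertion_probability(xs, n_trial=5000):
    """Monte Carlo estimate of the line fraction available for a new insertion."""
    if not xs:
        return 1.0
    arr = np.array(xs)
    trials = rng.uniform(0.0, L - sigma, n_trial)
    # a trial position x is allowed if every particle lies entirely left or right of [x, x+sigma]
    ok = np.all((arr[None, :] + sigma <= trials[:, None]) |
                (arr[None, :] >= trials[:, None] + sigma), axis=1)
    return ok.mean()

xs = []                        # left edges of adsorbed particles
for step in range(100000):
    if rng.random() < K / (K + 1.0):                 # adsorption attempt
        x = rng.uniform(0.0, L - sigma)
        if all(abs(x - y) >= sigma for y in xs):
            xs.append(x)
    elif xs:                                         # desorption attempt
        xs.pop(rng.integers(len(xs)))

phi = len(xs) * sigma / L
print(f"packing fraction {phi:.3f}, insertion probability {insertion_probability(xs):.4f}")
```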
by considering the relevant time scales for this model obtained by kolan , nowak and tkachenko , we have argued that for blocked configurations ( ) , only for even higher packing fractions we expect to be able to speak of thermodynamic processes for this model .we thank gustavo gutirrez for useful suggestions .this work has been done under the pcp - fonacit franco - venezuelan program _ dynamics and statics of granular materials _ , and was supported in part by did of the universidad simn bolvar . , we have a maximum , which implies a maximum packing fraction .the horizontal line corresponds to , the maximum random loose packing fraction for the plm ( see eq .[ rlpphi ] ) .[ eqs2930 ] ]tarjus and viot obtained the entropy density ( ) ( eq . 28 in ) : }{z(z+y)}\right)\ ] ] with and being solutions to the coupled equations 29 and 30 of reference , which can be seen in figure [ eqs2930 ] .we have that can be interpreted as the inverse of compactivity and .s. edwards and r. oakeshott , physica a * 57 * , 1080 ( 1989 ) . t. aste _et al _ , europhys. lett . * 79 * , 24003 ( 2007 ) . s. f. edwards and d. v. grinev , advances in physics , * 51 * , 1669 ( 2002 ) .c. song , p. wang and h.a .makse , nature * 453 * , 629 ( 2008 ) .m. p. ciamarra and a. coniglio , phys .* 101 * , 128001 ( 2008 ) .l. a. pugnaloni _et al _ , arxiv:1002.3264v2 ( 2010 ) .this work was presented in the _ southern workshop on granular materials _ held in chile , nov - dic 2009 .s. f. edwards , physica a * 353 * , 114 ( 2005 ) .e. r. nowak _ et al _ , phys .e * 57 * , 1971 ( 1998 ) .g. tarjus and p. viot , phys .e * 69 * , 011307 ( 2004 ) .i. s. aranson and l.s .tsimring , phys .e * 64 * , 020301 ( 2001 ) .d. volfson , l.s .tsimring and i.s .aranson , phys .e * 68 * , 021301 ( 2003 ) .m. schrter , d.i .goldman and h.l .swinney , phys .e * 71 * , 030301 ( 2005 ) .p. krapivsky and e. ben - naim , j. chem . phys . *100 * , 6778 ( 1994 ) .kolan , e.r . nowak and a. v. tkachenko , phys .e * 59 * , 3094 ( 1999 ) .j. talbot , g. tarjus and p. viot , phys .e * 61 * , 5429 ( 2000 ) .l. j. budinski - petkovi and s. b. vrhovac , eur .j. e * 16 * , 89 ( 2005 ) .m. jerkins , m. schrter and h.l .swinney , phys . rev* 101 * , 018301 ( 2008 ) .g. tarjus and p. viot , in _ unifying concepts in granular media and glasses _ , elsevier , 2004 .edited by a. coniglio , a. fierro , h.j .herrmann and m. nicodemi .goldmann and h.l .swinney , phys .lett . * 96 * , 145702 ( 2006 ) .k. hernndez and l.i .reyes , phys .e * 77 * , 062301 ( 2008 ) .m. schrter _et al _ , europhys. lett . * 78 * , 44004 ( 2007 ) . i. s. aranson and l.s .tsimring , rev .phys . * 78 * , 641 ( 2006 ) .shang - keng ma , _ statistical mechanics _ , world scientific , 1985 .
we have obtained the random loose packing fraction of the parking lot model (plm) by taking the limit of infinite compactivity in the two-variable statistical description of tarjus and viot for the plm. the plm is a stochastic model of adsorption and desorption of particles on a substrate that has been used as a model for the compaction of granular materials. an order parameter is introduced that characterizes how far the model is from a steady state; configurations away from the steady state age. we propose that this order parameter can be a starting point for establishing a connection between edwards statistical mechanics and granular hydrodynamics. random loose packing and an order parameter for the parking lot model + k. hernández and l.i. reyes + departamento de física, universidad simón bolívar, apartado 89000, caracas 1080-a, venezuela
few - body systems provide a useful tool for studying the dynamics of hadronic systems .the combination of short - ranged interactions and finite density means that the dynamics of complex hadronic systems can be understood by studying the dynamics of few - degree of freedom sub - systems .few - body systems are simple enough to perform nearly complete high - precision measurements and to perform ab - initio calculations that are exact to within the experimental precision .this clean connection between theory and experiment has led to an excellent understanding of two - body interactions in low - energy nuclear physics , and a good understanding of the three - body interactions .our knowledge of low - energy hadronic dynamics is largely due to the interplay between experimental and computational advances .a complete understanding of even the simplest few - hadron system requires measurements of a complete set of spin observables which have small cross sections and require state of the art detectors . at the same time , the model calculations with realistic interactions are limited by computer speed and memory . in additionthe equations are either singular or have complicated boundary conditions which require specialized numerical treatments .one of the most interesting energy scales is the one where the natural choice of few - body degrees of freedom changes from nucleons and mesons to sub - nucleon degrees of freedom .the qcd string tension or nucleon size suggest that the relevant scale for the onset of this transition is about a gev . a consistent dynamics of hadrons or sub - nuclear particles on this scale must be relativistic ; a galilean invariant theory can not simultaneously preserve momentum conservation in the lab and center of momentum frames if the initial and final reaction products have different masses .relativistic dynamical models are most naturally formulated in momentum space .this is due to the presence of momentum - dependent wigner and/or melosh rotations as well as square roots that appear in the relationship between energy and momentum .non - relativistic few - body calculations formulated in configuration space with local potentials have the advantage that the matrices obtained after discretizing the dynamical equations are banded , thus reducing the size of the numerical calculations .equivalent momentum - space calculations lead to dense matrices of comparable dimensions .in addition , the embedding of the two - body interactions in the three - body hilbert space leads to non - localities .realistic relativistic three - body calculations are just beginning to be solved .numerical methods that can reduce the size of these calculations could make relativistic calculations of realistic systems more tractable . in this paperwe explore the use of wavelet basis functions to reduce the size of momentum space scattering calculations .the resulting linear system can be can be accurately approximated by a linear system with a sparse kernel .it is our contention that the use of this sparse kernel results in a reduction in the size of the numerical calculation that is comparable to the corresponding configuration space calculations .the advantage is that the wavelet methods can be applied in momentum space and are not limited to local interactions .the long - term goal is to apply wavelet methods to solve the relativistic three - body problem . 
in a previous paper , we tested this method to solve the non - relativistic lippmann - schwinger equation with a malfliet tjon v potential . in this test problem, the s - wave k - matrix was computed .the wavelet method led to a significant reduction in the size of the problem .we found that 96% of the matrix elements of the kernel of the integral equation could be eliminated leading to an error of only a few parts in a million .the success of wavelet method in suggests that the method should be tested on a more complicated problem . in this paper , we test the wavelet method on the same problem without using partial waves .this leads to a singular two - variable integral equation , which has the same number of continuous variables as the three - body faddeev equations with partial waves .it is simpler than the full three - body calculation , but is a much larger calculation than was needed in ref .in addition , computations that employ conventional methods are available for comparison . in solving this problemit is necessary to address issues involving the storage and computations with large matrices .one well known use of wavelets is in the data compression algorithm used in jpeg files .our motivation for applying wavelet methods to scattering problems is based on the observation that both a digital photograph and a discretized kernel of an integral equation are two - dimensional arrays of numbers .if wavelets can reduce the size of a digital image , they should have a similar effect on the size of the kernel of an integral equation .given the utility of wavelets in digital data processing , it is natural to ask why they have not been used extensively in numerical computations in scattering .one possible reason is because there is a non - trivial learning curve that must be overcome for a successful application to singular integral equations .a relevant feature is that the basis functions have a fractal structure ; they are solutions to a linear renormalization group equation and thus have structure on all scales .numerical techniques that exploit the local smoothness of functions do not work effectively with functions that have structure on all scales . in , we concluded that these limitations could be overcome by exploiting the renormalization group transformation properties of the basis functions in numerical computations .these equations were used to compute moments of the basis functions with polynomials .these moments were used to construct efficient quadrature methods for evaluating overlap integrals .in addition , these moments could be combined with the renormalization group equations to perform accurate calculations of the types of singular integrals that appear in scattering problems .a key conclusion of was that wavelet methods provide an accurate and effective method for solving the scattering equations .in addition , the expected reduction in the size of the numerical problem could be achieved with minimal loss of precision .there are many kinds of wavelets .in we found that the daubechies-3 wavelets proved to be the most useful for our calculations .numerical methods based on wavelets utilize the existence of two orthogonal bases for a model space .the two bases are related by an orthogonal transformation . 
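before the two bases are described in detail, the compression effect itself can be illustrated with a small numerical experiment: build a smooth dense matrix standing in for the discretized kernel of an integral equation, apply a two-dimensional daubechies-3 wavelet transform, discard coefficients below a threshold, and measure how many coefficients survive and how large the reconstruction error is. the sketch assumes the third-party pywavelets (`pywt`) package; the kernel, threshold, and decomposition level are illustrative choices rather than those of the paper.

```python
import numpy as np
import pywt  # PyWavelets -- a third-party package, assumed available

# smooth dense matrix standing in for a discretized integral-equation kernel
n = 256
p = np.linspace(0.1, 10.0, n)
kernel = 1.0 / (np.add.outer(p**2, p**2) + 1.0)

# 2-D wavelet transform in the Daubechies-3 ('db3') basis
coeffs = pywt.wavedec2(kernel, 'db3', mode='periodization', level=4)

# hard-threshold the detail (mother-function) coefficients; where the kernel is
# locally smooth these coefficients are tiny and can be set to zero
thr = 1e-6 * np.abs(coeffs[0]).max()
compressed = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode='hard') for d in detail)
                            for detail in coeffs[1:]]

kept = sum(np.count_nonzero(a) for c in compressed
           for a in (c if isinstance(c, tuple) else (c,)))
approx = pywt.waverec2(compressed, 'db3', mode='periodization')

print(f"retained {kept / kernel.size:.1%} of the coefficients, "
      f"max reconstruction error {np.abs(approx - kernel).max():.2e}")
```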
the first basis , called the father function basis , samples the data by averaging on small scales .it is the numerical equivalent of a raw digital photograph .the orthogonal transformation is generated by filtering the coefficients of the father function basis into equal numbers of high and low frequency parts .the high frequency parts are associated with another type of basis function known as the mother function .the same filter is again applied only to the to the remaining low frequency parts , which are divided into high and low frequency parts .this is repeated until there is only one low frequency coefficient .this orthogonal transformation and its inverse can be generated with the same type of efficiency as a fast fourier transform .the new basis is called the wavelet basis . for the daubechies-3 wavelets ,both sets of basis functions have compact support .the support of the father function basis functions is small and is determined by the resolution of the model space .the support of the wavelet basis functions is compact , but occurs on all scales between the finest resolution and the coarsest resolution .the father function for the daubechies-3 wavelets has the property that a finite linear combination of such functions can locally pointwise represent a polynomial of degree two or less .integrals over these polynomials and the scaling basis functions can be done exactly and efficiently using a one - point quadrature .the mother functions have the property that they are orthogonal to polynomials of degree two .this means that the expansion coefficient for a given mother basis function is zero if the function can be well - approximated by a polynomial on the support of the basis function .it is for this reason that most of the kernel matrix elements in this representation are small .setting these small coefficients to zero is the key approximation that leads to sparse matrices .some of the properties that make the daubechies wavelets interesting for numerical computations are * the basis functions have compact support .* the basis functions are orthonormal .* the basis functions can pointwise represent low degree polynomials * the wavelet transform automatically identifies the important basis functions .* there is a simple one point quadrature rule that is exact for low - degree local polynomials .* these are accurate methods for computing the singular integrals of scattering theory .* the basis functions never have to be computed .the above list indicates that wavelet bases have many advantages in common with spline bases , which have proven to be very useful in large few - body calculations .both the spline and wavelet basis functions have compact support , which allows them to efficiently model local structures , both provide pointwise representations of low - degree polynomials , both can be easily integrated using simple quadrature rules , and both can be accurately integrated over the scattering singularity .one feature that distinguishes the wavelet method from the spline method is that the wavelet transform automatically identifies the important basis functions that need to be retained . 
with splines ,the regions that have a lot of structure and require extra splines need to be identified by hand .this is a non - trivial problem in large calculations .the automatic nature of this step is an important advantage of the wavelet method in large calculations .in addition , unlike the spline basis functions , the wavelet basis functions are orthogonal , and the one - point quadrature only requires the evaluation of the driving term or kernel at a single point to compute matrix elements .this leads to numerical approximations that combine the efficiency of the collocation method with the stability of the galerkin method . in the next sectionwe give an overview of the properties of wavelets that are used in our numerical computations .our model problem is defined in section three .the methods of section two are used in section four to reduce the scattering integral equation in section three to an approximate linear system .the transformation to a sparse - matrix linear system and the methods used to solve the linear equations are discussed in section five .the considerations discussed in this section are important for realistic applications .the results of the model calculations are discussed and compared to the results of partial - wave calculations in section six .our conclusions are summarized in section seven .the complex biconjugate gradient algorithm that was used to solve the resulting system of linear equations is outlined in the appendix .in our work , we use daubechies original bases of compactly supported wavelets .in addition to their simplicity , these functions possess many useful properties for numeric calculations , which are discussed at the end of this section .there are two primal basis functions called the father , , and mother , .the primal father function is defined as the solution of the homogeneous scaling equation with normalization the primal mother function is defined in terms of the father by a similar scaling equation , where the parameter is the order of the daubechies wavelet and the are a unique set of numerical coefficients that satisfy certain relations such as orthogonality of basis functions .we employ wavelets of order , henceforth called daubechies-3 wavelets .the numerical values of the are given in table [ coef ] .equation ( [ scale ] ) is the most important in all of wavelet analysis , as all the properties of a wavelet basis are determined by the so - called filter coefficients , .a simple property that follows from the is that the father and mother function both have compact support on the interval .all other basis functions are related to the primal father and mother by means of dyadic ( power of two ) scale transformations and unit translations , .scaling coefficients for daubechies-3 wavelets [ cols= " < , < " , ] [ shifts ]we have shown that it is possible to use wavelets to calculate the two - body scattering matrix in terms of momentum vectors without resorting to partial waves .we were able to accurately reproduce the phase shifts of the malfliet - tjon potential .these calculations lead to sparse matrices , which can be efficiently inverted using standard iterative methods .application of a simple preconditioning matrix was shown to be necessary to achieve convergence of the iterative methods .traditional methods for solving scattering equations in momentum space typically produce dense matrices that require a large amount of storage and are time consuming to invert .these are promising results because relativistic scattering 
equations are naturally formulated in momentum space .also , the scattering boundary conditions are most easily treated in momentum space .wavelet methods can help treat both of these of problems .one of the main advantages of wavelet methods over methods such as splines is that the wavelet transform presents a method that automatically determines what basis functions are necessary for a given accuracy .unfortunately , this also leads to one of the main drawbacks of this method . in our procedure , a large dense matrix , , needs to be produced first and then this is transformed to a sparse matrix .most of the computational time is spent constructing and transforming this matrix into a sparse format .the subsequent solution of the sparse linear system takes relatively little computational effort . for this specific problem, wavelet methods based on momentum vectors may not be necessary .the maximum number of partial waves that needs to be included to achieve convergence , , is simply too small to gain a computational benefit from using wavelets in the angular variable . to achieve a computational benefitwe should use less basis functions in the angular variable than the maximum number of partial waves .in the three - body problem or at much higher energies , the number of partial waves that need to be included increases considerably and computational benefits may be gained from employing a momentum vector approach .this work supported in part by the u.s .department of energy , under contract de - fg02 - 86er40286 .the biconjugate gradient method is an iterative technique for solving large matrix equations of the form the advantage of this method for large sparse matrices is that it only involves matrix multiplication by and its adjoint , both of which can be accomplished efficiently in a sparse storage format such as ccs .the algorithm generates a sequence of approximate solutions , with residual .one iterates until the norm of the residual is less than some predetermined value .this method is traditionally formulated for real matrices , but the extension to complex matrices is straightforward .below we present the algorithm for general complex matrices . for our calculations , we start with the initial approximate solution with the residual for the initial values of the bi - residual , the direction vector , and bi - direction we use w .- c .shann , `` quadrature rules needed in galerkin - wavelets methods . ''proceedings for the 1993 annual meeting of chinese mathematics association .chiao - tung univ dec ( 1993 ) ; w .- c .shann , j .- c .yan , `` quadratures involving polynomials and daubechies wavelets . ''technical report 9301 .department of mathematics , national central university ( 1993 ) ( http://www.math.ncu.edu.tw/ shann / math/ + pre.html ) .r. barret , et al ._ templates for the solution of linear systems : building blocks for iterative methods _( siam , philadelphia , 1994 ) + ( http://www.netlib.org/linalg/html/ + report.html ) .
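as a concrete companion to the appendix, the iterative solve of a sparse complex system can be sketched with standard library routines. the matrix below is a random sparse complex stand-in for the thresholded, wavelet-transformed kernel, preconditioned with a simple diagonal (jacobi) operator in the spirit of the preconditioning reported as necessary for convergence; scipy's biconjugate gradient routine is used in place of the paper's own implementation, and all sizes and tolerances are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicg

n = 2000
rng = np.random.default_rng(0)

# random sparse complex matrix standing in for the compressed kernel; the strong
# diagonal keeps this toy system well conditioned
A = sp.random(n, n, density=0.002, format='csr', random_state=1) \
    + 1j * sp.random(n, n, density=0.002, format='csr', random_state=2) \
    + (10.0 + 2.0j) * sp.eye(n, format='csr')
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Jacobi (diagonal) preconditioner
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d, dtype=complex)

x, info = bicg(A, b, M=M)     # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```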
the use of orthonormal wavelet basis functions for solving singular integral scattering equations is investigated. it is shown that these basis functions lead to sparse matrix equations which can be solved by iterative techniques. the scaling properties of wavelets are used to derive an efficient method for evaluating the singular integrals. the accuracy and efficiency of the wavelet transforms are demonstrated by solving the two-body t-matrix equation without partial-wave projection. the resulting matrix equation, which is characteristic of multiparticle integral scattering equations, is found to provide an efficient method for obtaining accurate approximate solutions to the integral equation. these results indicate that wavelet transforms may provide a useful tool for studying few-body systems.
the success or failure of magnetic field modeling of the solar corona depends on both the choice of the theoretical model , as well as on the choice of the used data sets .the simplest method is potential field modeling , which requires only a line - of - sight magnetogram , but only few solar active regions match a potential field model . also linear force - free field ( lfff ) models are generally considered as unrealistic , where a single constant value for the force - free parameter represents the multi - current system of an entire active region in the solar corona .the state - of - the - art is the nonlinear force - free field ( nlfff ) model , which can accomodate for an arbitrary configuration of current systems , described by a spatially varying parameter distribution in an active region .the next strategic decision is the choice of data sets to constrain the theoretical model .nlfff models generally require vector magnetograph data , ] ( as a function of the curvilinear abscissa ) have been used to constrain : ( i ) potential field models in terms of buried unipolar magnetic charges ( aschwanden and sandman 2010 ) or buried dipoles ( sandman and aschwanden 2011 ) , ( ii ) linear force - free fields ( feng , 2007 ; inhester , 2008 ; conlon and gallagher 2010 ) , and ( iii ) nonlinear force - free fields ( aschwanden , 2012a ) .the proof of concept to fit nlfff codes to prescribed field lines was also demonstrated with artificial ( non - solar ) loop data , fitting either 3d field line coordinates ] ( malanushenko , 2009 , 2012 ) .the method of malanushenko , ( 2012 ) employs a grad - rubin type nlfff code ( grad and rubin , 1958 ) that fits a lfff with a local -value to each coronal loop and then iteratively relaxes to the closest nlfff solution , while the method of aschwanden , ( 2012a ) uses an approximative analytical solution of a force - free and divergence - free field , parameterized by a number of buried magnetic charges that have a variable twist around their vertical axis , and is forward - fitted to observed loop coordinates .the latter method is numerically quite efficient and achieves a factor of two better agreement in the misalignment angle ( ) than standard nlfff codes using magnetic vector data .however , the major limitation of the latter method is the availability of solar stereoscopic data , which restricts the method to the beginning of the stereo mission ( i.e. 
, the year of 2007 ) , when stereo had a small spacecraft separation angle that is suitable for stereoscopy ( aschwanden , 2012b ) .hence , the application of nlfff magnetic modeling using coronal constraints could be considerably enhanced , if the methodical restriction to coronal 3d data , as it can be provided only by true stereoscopic measurements , could be relaxed to 2d data , which could be furnished by any high - resolution euv imager , such as from the soho / eit , trace , and sdo / aia missions .this generalization is exactly the purpose of the present study .we develop a modified code that requires only a line - of - sight magnetogram and a high - resolution euv image , where we trace 2d loop coordinates to constrain the forward - fitting of the analytical nlfff code described in aschwanden ( 2012a ) , and compare the results with those obtained from stereoscopically triangulated 3d loop coordinates ( described in aschwanden , 2012a ) .furthermore we test also magnetic forward - fitting to automatically traced 2d loop data , and compare the results with manually traced 2d loop data .the latter effort brings us closer to the ultimate goal of fully automated ( nlfff ) magnetic field modeling with widely accessible input data .the content of the paper is a follows : the theory of the analytical nlfff forward - fitting code is briefly summarized in section 2 , the numerical code is described in section 3 , tests of nlfff forward - fitting to simulated data are presented in section 4 , and to stereoscopic and single - image data in section 5 , while a discussion of the application is given in section 6 , with a summary of the conclusions provided in section 7 .a nonlinear force - free field ( nlfff ) is implicitly defined by maxwell s force - free and divergence - free conditions , where is a scalar function that varies in space , but is constant along a given field line , and the current density is co - aligned and proportional to the magnetic field .a general solution of equations ( 1)-(2 ) is not available , but numerical solutions are computed ( see review by wiegelmann and sakurai 2012 ) using ( i ) force - free and divergence - free optimization algorithms ( wheatland , 2000 ; wiegelmann 2004 ) , ( ii ) evolutionary magneto - frictional methods ( yang , 1986 ; valori , 2007 ) , or grad - rubin - style ( grad and rubin , 1958 ) current - field iteration methods ( amari , 1999 , 2006 ; wheatland 2006 ; wheatland and regnier 2009 ; malanushenko , 2009 ) .numerical nlfff solutions bear two major problems : ( i ) every method based on extrapolation of force - free magnetic field lines from photospheric boundary conditions suffers from the inconsistency of the photospheric boundary conditions with the force - free assumption ( metcalf 1995 ) , and ( ii ) the calculation of a single nlfff solution with conventional numerical codes is so computing - intensive that forward - fitting to additional constraints ( requiring many iteration steps ) is unfeasible .hence an explicit analytical solution of equations ( 1)-(2 ) would be extremely useful , which could be computed much faster and be forward - fitted to coronal loops in force - free domains circumventing the non - force - free ( photospheric ) boundary condition .an approximate analytical solution of equations ( 1)-(2 ) was recently calculated ( aschwanden 2012a ) that can be expressed by a superposition of an arbitrary number of magnetic field components , , where each magnetic field component can be decomposed into a radial and an 
azimuthal field component , where ( ) are the spherical coordinates of a magnetic field component system ( with a unipolar magnetic charge that is buried at position ( , has a depth , a vertical twist , and ^{1/2} ] for an ensemble of stereoscopically triangulated loops in a solar active region is described in detail and tested in aschwanden and malanushenko ( 2012 ) .we are using a cartesian coordinate system with the origin in the center of the sun and the plane - of - the - sky is in the -plane , while the z - axis is the line - of - sight .this allows us to take the curvature of the solar surface into full account , in contrast to some other nlfff codes that approximate the solar surface with a flat plane .the forward - fitting part of the code consists of two major parts , ( i ) the decomposition of buried unipolar magnetic charges from a line - of - sight magnetogram ( see appendix a of aschwanden , 2012a ) , and ( ii ) iterative optimization of the nonlinear force - free parameters by minimizing the misalignment angles between the loop data and the fitted nlfff model .the new approach in this work is the generalization of the forward - fitting code from 3d loop coordinates ( using stereoscopic measurements before ) to 2d loop coordinates ] , this is illustrated in figure ( 1 ) , where the loop directions ( black arrows ) and magnetic field directions ( red arrows ) are depicted at three loop segment positions , for two orthogonal projections ( figure 1 left panels ) .the root - mean - square value of all misalignment angles for each loop segment and loop is then minimized in the forward - fitting procedure to find the best nlfff approximation , ^ 2 \right)^{1/2 } \ .\ ] ] in addition to the 3d misalignment angle , we can also define a 2d misalignment angle with the same equations ( 9 ) and ( 10 ) , except that the magnetic field vectors and loop vectors are a function of two - dimensional space coordinates , as they are seen in the 2d projection into the -plane ( figure 1 , bottom right panel ) .if we have only 2d loop coordinates available ( in the case without stereoscopy ) , we can only forward - fit the field lines parameterized with a nlfff code by minimizing the 2d misalignment angle , because the third space variable is not available .our strategy here is to calculate the 2d misalignment angle in each position for an array of altitudes , , that covers a limited altitude range ] solar radii here .this yields multiple misalignment angles for each 2d loop position , which are shown in the -plane ( fig .1 top right panel ) and -plane ( fig .1 bottom right panel ) .our strategy is then to estimate the unknown third -coordinate in each 2d position from that height that shows the smallest 2d misalignment angle , and can then proceed with the forward - fitting procedure like in the case of 3d stereoscopic data , which is described in detail in aschwanden and malanushenko ( 2012 ) .a side - effect of the generalized 2d method is that the parameter space is enlarged by an additional dimension , the unknown or coordinate of the altitude of each loop position , which in principle increases the computation time by a linear factor of the number of altitude levels .however , we optimized the code by vectorization and by organizing the optimization of the altitude variables ( for each loop segment and loop ) and the -parameter variables ( for each magnetic charge ) in an interleaved mode , so that the computation time reduced by 1 - 2 orders of magnitude compared with earlier versions of the 
code ( aschwanden and malanushenko 2012 ; aschwanden , 2012a ) , without loss of accuracy .when comparing 3d with 2d misalignment angles , we have to be aware that the unknown third dimension at every loop position is handled differently in the two methods . in our new 2d methodwe optimize the third coordinate by minimizing the 2d misalignment angle independently at every loop segment position , interleaved with the optimization of the nonlinear force - free parameter . in the ( stereoscopic )3d method the third coordinate is used as a fixed constraint like the observables . however , we can calculate the median 2d misalignment angle with both methods , while the 3d misalignment angle is only defined for the ( stereoscopic ) 3d - fit method , but not for the ( loop - tracing ) 2d - fit method .in order to test the numerical convergence behavior , the uniqueness of the solutions , and the accuracy of the method in terms of misalignment angles we test our code first with simulated data .we simulate six cases that correspond to the same six nonpotential cases presented in aschwanden and malanushenko ( 2012 ; cases # 7 - 12 ) , consisting of a unipolar case ( n7 ; figure 2 top ) , a dipolar case ( n8 ; figure 2 middle ) , a quadrupolar case ( n9 ; figure 2 bottom ) , and three decapolar cases with 10 randomly buried magnetic charges each ( n10 , n11 , n12 ; figure 3 ) . in each casewe run both the 3d - fitting code ( mimicking the availability of stereoscopic loop data ) , as well as the 2d - fitting code ( corresponding to loop tracings from a single euv image without stereoscopic information ) . thus , in the 3d - fitting code the target loops are parameterized with 3d data ] of the target loops .the results are shown in figures 2 and 3 , with the 2d fits in the left - hand panels , and the 3d fits in the right - hand panels .the convergence of the 2d - code can be judged by comparing with the previously tested 3d - code ( aschwanden and malanushenko 2012 ) .we list a summary of the results in table 1 .the 3d - fits achieve a mean 3d misalignment angle of and a 2d misalignment angle of .this is reasonable and slightly better than the results of an earlier version of the 3d - code ( in table 4 of aschwanden and malanushenko 2012 ) . in comparison ,our new 2d - fitting code achieves an even better agreement with for the 2d - misalignment angle , while the 3d - misalignment angle is not defined for the 2d code ( due to the lack of line - of - sight coordinates ) . the better performance of the 2d code is due to the smaller number of constraints ( i.e. , 200 constrains for 2d - loop coordinates of 10 loops with 10 segments , compared with 300 constraints for the 3d - fit method ) .the smaller number of free parameters generally improves the accuracy of the solution .the uniqueness of the solution can be best expressed by the mean misalignment angle .strictly speaking , a nlfff solution would only be unique if the number of free parameters ( i.e. , the number of magnetic charges in our case ) matches the number of constraints ( i.e. , the number of fitting positions ( i.e. , the product of the number of loops times the number of fitted segment positions , i.e. 
, in our case ) .moreover , the los magnetogram is approximated by a number of magnetic charges that neglects weak magnetic sources , and there are residuals in the decomposition of magnetic charges that contribute to the noise or uncertainty and non - uniqueness of the solutions .therefore , the uniqueness of a nlfff solution can best be specified by an uncertainty measure for each field line , which can be quantified either by the misalignment angle or by a maximum transverse displacement of a field line , i.e , for a particular field line with full length .we show in table 1 also the ratios of the nonpotential to the potential energies for the 3d and 2d fit methods , which agree within an accuracy of order .note , that the simulated data have 1 , 2 , 4 , and 10 magnetic charges , which corresponds to the number of free parameters in the fit , while we used the double number of magnetic charges in the decomposition of the simulated los magnetogram , in order to make the parameterization of the fitted model somewhat different from the target model. nevertheless , although the 2d solutions have a high accuracy ( with a mean misalignment of ) , we have to be aware that the simulated data and the forward - fitting code use the same parameterization of nonlinear -parameters , which warrants a higher accurcay in forward - fitting than real data with a unknown parameterization .hence , we test the code with real solar data in the following section .for testing the feasibility , fidelity , and accuracy of the new analytical nlfff forward - fitting code based on 2d tracing of loops ( rather than 3d stereoscopy ) we are using the same observations for which either stereoscopic 3d reconstruction has been attempted earlier ( aschwanden , 2008b , c , 2009 , 2012a ; sandman , 2009 ; derosa , 2009 ; aschwanden and sandman 2010 ; sandman and aschwanden 2011 ; aschwanden 2012a ) , such as for active regions observed with stereo on 2007 april 30 , may 9 , may 19 , and december 11 , or where 2d loop tracing was performed and documented , such as for an active region observed on 1998 may 19 with trace ( aschwanden , 2008a ) . the active region numbers , observing times , spacecraft separation angles , number of traced loops , and maximum magnetic field strengths of these observations are listed in table 2 . 
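the approximation of the line-of-sight magnetogram by buried unipolar magnetic charges, mentioned above, can be made concrete with a short sketch. in this parameterization the potential part of the field is a superposition of point charges buried below the surface, each contributing a field directed radially away from the charge that falls off with the square of the distance; the charge positions, depths, strengths, and the flat-surface geometry below are illustrative assumptions (the full model adds a twisted azimuthal component and accounts for the sphericity of the solar surface, both omitted here).

```python
import numpy as np

# buried unipolar charges: position below the surface z = 0 and strength (illustrative)
charges = [
    {"pos": np.array([0.0, 0.0, -0.10]), "B0":  1500.0},   # gauss
    {"pos": np.array([0.5, 0.2, -0.15]), "B0": -1200.0},
]

def b_potential(x):
    """Potential field at point x: each buried charge contributes
    B0 * (d/r)^2 along the unit vector from the charge to x."""
    b = np.zeros(3)
    for c in charges:
        d = abs(c["pos"][2])               # depth of the buried charge
        rvec = x - c["pos"]
        r = np.linalg.norm(rvec)
        b += c["B0"] * (d / r) ** 2 * rvec / r
    return b

# line-of-sight (z) component sampled along the surface, a toy "magnetogram" cut
for xv in np.linspace(-1.0, 1.0, 5):
    bz = b_potential(np.array([xv, 0.0, 0.0]))[2]
    print(f"x = {xv:+.2f}   Bz = {bz:+9.1f} G")
```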
in all cases we used line - of - sight magnetograms from the michelson doppler imager ( mdi ; scherrer , 1995 ) on board the _ solar and heliospheric observatory ( soho ) _ , while euv images were used either from the _ transition region and coronal explorer _( trace ; handy , 1999 ) , or from the _ extreme ultraviolet imager _ ( euvi ; wlser , 2004 ) onboard the stereo spacecraft a(head ) and b(ehind ) .we show the results of the nlfff forward - fitting of the six active regions in figures 4 to 9 , all in the same format , which includes the decomposed line - of - sight magnetogram of soho / mdi ( grey scale in center of figures 4 to 9 ) , the stereoscopically triangulated or visually traced loops ( blue curves in figures 4 to 9 ) , and the best - fit magnetic field lines ( red curves for the segments covered by the observed loops , and in orange color for complementary loops parts ( although truncated at a height of 0.15 solar radii ) .the orthogonal projections of the best - fit magnetic field lines are also shown in the right - hand and top panels of figures 4 to 9 , as well as the histograms of 2d and 3d ( ) misalignment angles ( in bottom panels of figures 4 to 9 ) .the median misalignment angles and are also listed in table 3 , for both the previous stereoscopic reconstructions ( aschwanden , 2012a ; aschwanden 2012b ) , marked with `` stereo '' in table 3 , and based on 2d loop tracing in the present study , marked with `` tracing '' in table 3 .the active region a ( 2007 april 30 ; figure 4 ) shows a lack of stereoscopically triangulated loops in the core of the active region ( due to the high level of confusion for loop tracing over the `` mossy '' regions ) , where the highest shear and degree of non - potentiality is expected , and thus deprives us from measuring the largest amount of free magnetic energy , while standard nlfff codes have stronger constraints in these core regions . 
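the median misalignment angles quoted in table 3 follow directly from the loop coordinates and the model field. the sketch below forms loop direction vectors from consecutive loop positions, evaluates a model field at the segment midpoints, and returns the 3d and plane-of-sky (2d) misalignment angles; the near-semicircular loop and the placeholder field used for the demonstration are assumptions for illustration only, not the fitted nlfff solution.

```python
import numpy as np

def misalignment_angles(loop_xyz, field_fn):
    """3D and 2D (x-y plane-of-sky) misalignment angles in degrees between the
    loop direction vectors and the model field at the segment midpoints."""
    seg = np.diff(loop_xyz, axis=0)                   # loop direction vectors
    mid = 0.5 * (loop_xyz[1:] + loop_xyz[:-1])
    b = np.array([field_fn(p) for p in mid])

    def angle(u, v):
        cosa = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
        return np.degrees(np.arccos(np.clip(np.abs(cosa), 0.0, 1.0)))  # direction-insensitive

    return angle(seg, b), angle(seg[:, :2], b[:, :2])

# toy example: a near-semicircular loop and a placeholder field tangent to circles
# in the x-z plane (both assumptions, standing in for traced loops and a fitted field)
t = np.linspace(0.0, np.pi, 11)
loop = np.column_stack([np.cos(t), 0.05 * np.sin(2.0 * t), np.sin(t)])
field = lambda p: np.array([-p[2], 0.0, p[0]])

a3d, a2d = misalignment_angles(loop, field)
print(f"median 3D misalignment {np.median(a3d):.1f} deg, median 2D {np.median(a2d):.1f} deg")
```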
in all 5 active regions we see sunspots with strong magnetic fields , but since we limit our nlfff solutions to an altitude range of solar radii , we can not see whether the diverging field lines above the sunspots are open or closed field lines .in principle we could display our nlfff solutions to larger altitudes to diagnose where open and closed field regions are , but the accuracy of reconstructed field lines is expected to decrease with height with our method , especially because of the second - order approximation that can represent helical twist well for vertical segments of loops ( near the surface ) , but not so well for horizontal segments of loops ( in large altitudes ) .we see in table 3 that the 3d misalignment angles have a mean value of for the stereoscopy method .if we compare the 2d misalignment angles between the two methods in table 3 , we see that the stereo method has a somewhat larger mean , , than the 2d loop tracing method , with .this is an effect of the optimization of the 2d misalignment angles in the 2d loop tracing method , where the third coordinate is a free variable and thus has a larger flexibility to find an appropriate model field line with a small 2d misalignment angle , in contrast to the stereoscopic method , where the third coordinate of every observed loop is entirely fixed and leaves less room in the minimization of the misalignment angles between observed loops and theoretical field models .ideally , if stereoscopy would work perfectly , the third coordinate should be sufficiently accurate so that the best field solution can be found easier with fewer free parameters .however , reality apparently reveals that there is a significant stereoscopic error that hinders optimum field fitting , which is non - existent in the 2d forward - fitting situation .actually , from the two stereoscopic misalignment angles we can estimate the stereoscopic error , assuming isotropic errors .thus , defining the 2d misalignment angle as , and the 3d misalignment angle as , with isotropic errors , we expect which yields a stereoscopic error of .this is somewhat larger than estimated earlier from the parallelity of stereoscopically triangulated loops with close spatial proximity , which amounted to ( aschwanden and sandman 2010 ) .thus both estimates assess a substantial value to the stereoscopic error that exceeds the accuracy of the best - fit nlfff solution based on 2d loop tracing ( ) by far. nevertheless , both best - fit misalignment angles are consistent with each other for the two forward - fitting methods .this forward - fitting experiment thus demonstrates that _ we obtain equally accurate nlfff fits to coronal data with or without stereoscopy _ , and thus makes the new method extremely useful . for the 2d loop tracing method of stereoscopic data ( casesa , b , c , and d in table 1 ) we just ignored information on the -coordinate of loops in the 2d forward - fitting algorithm . for cases e and f , we compare visual tracing of loops ( case e ) with automated loop tracing ( case f ) .interestingly , we find the most accurate forward - fit for case e , which has a 2d misalignment error of only ( figure 8) , which may have resulted from the higher spatial resolution of loop tracing using trace data ( with a pixel size of ) in case e , compared with the three times lower resolution of stereo ( with a pixel size of ) in the cases a , b , c , and d. 
comparing visually traced ( figure 8) versus automatically traced loops ( figure 9 ) , we note that the automated tracing leads to a less accurate nlfff fit , with versus for visual tracing .apparently , the automated loop tracing method ( aschwanden , 2008a ) can easily be mis - guided or side - tracked by about onto near - cospatial loops with similar coherent large curvature .comparing individual field lines in figure 8 with figure 9 , we see also that the automated loop tracing algorithm produces a number of short loop segments with large misalignment angles , which are obviously a weakness of the automated tracing code , resulting into a larger misalignment error for the fitted nlfff solution ( which is a factor of larger in the average for this case ) . in figure 10we show scatterplots of the magnetic field strengths retrieved at the photospheric level for the stereoscopic 3d method versus the field strength obtained from the 2d loop tracing method .the analytical nlfff forward - fitting algorithm starts first a decomposition of point - like buried magnetic charges from the line - of - sight component , but has no constraints for the transverse components and , except for the coronal loop coordinates .thus , the directions of coronal loops near the footpoints will determine the transverse components and in the photosphere .the scatterplots of versus in figure 10 show that the smallest scatter between the two methods appears for active region b , which is the one closest to a potential field ( although we are not able to trace and triangulate loops in the core of the active region , where supposedly the highest level of non - potentiality occurs ) .the linear regression fits show a good correspondence in the order of between the two methods , which results from different fitting criteria of the nonpotential field components .a ratio of in the azimuthal field ( or nonpotential ) field component ( equation 5 ) with respect to the potential field component ( equation 4 ) would result into a change of in the magnetic field strength .the free magnetic energy , which is the difference between the nonpotential and the potential field energy , integrated over the spatial volume of an active region , has been calculated for the stereoscopic forward - fitting method in aschwanden 2012b ( table 1 therein ) . herewe calculate these quantities also for the 2d loop tracing method , both methods being juxtaposed in table 3 and in figure 11 .the absolute values of the potential field energy agree within a factor of between the two methods , while the free energies show differences of 1%-2% of the potential energy .if we consider the difference in the free energy between two methods as a measure of a systematic error , we assess an uncertainty of , or about .quantitative comparisons of various nonlinear force - free field ( nlfff ) calculation methods applied to coronal volumes that encompass an active region included optimizational , magneto - frictional , grad - rubin based , and green s function based models ( schrijver , 2006 ; 2008 ; derosa , 2009 ) . 
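returning to the energy comparison above: the free energy is the volume integral of the difference in magnetic energy density between the nonpotential and the potential field. a minimal sketch of that bookkeeping on a cartesian grid is given below (cgs units, energy density b^2/8π); the two field arrays are random placeholders standing in for the fitted nonpotential and potential solutions, and the grid spacing is an arbitrary assumption.

```python
import numpy as np

def magnetic_energy(b, dv_cm3):
    """Total magnetic energy in erg for a field sampled on a grid (cgs):
    E = sum(|B|^2 / 8 pi) * dV, with B in gauss and dV in cm^3."""
    return np.sum(b ** 2) / (8.0 * np.pi) * dv_cm3

# placeholder fields on a small grid; in practice these are the NLFFF and the
# potential-field solutions evaluated on the same volume
rng = np.random.default_rng(0)
shape = (32, 32, 16, 3)                     # (nx, ny, nz, vector component)
b_pot = rng.normal(0.0, 50.0, size=shape)   # gauss
b_np = b_pot + rng.normal(0.0, 10.0, size=shape)

dx_cm = 7.0e7                               # roughly one arcsecond pixel, an assumption
dv = dx_cm ** 3

e_pot = magnetic_energy(b_pot, dv)
e_np = magnetic_energy(b_np, dv)
print(f"E_P = {e_pot:.2e} erg, E_NP = {e_np:.2e} erg, "
      f"free energy fraction (E_NP - E_P)/E_P = {(e_np - e_pot) / e_pot:.3f}")
```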
a critical assessment of 11 nlfff methods revealed significant differences in the nonpotential magnetic energy and in the degree of misalignment ( ) with respect to stereoscopically triangulated coronal loops .the chief problem responsible for these discrepancies were identified in terms of the non - force - freeness of the lower ( photospheric ) boundary , too small boundary areas , and uncertainties of the boundary data ( derosa , 2009 ) .obviously , the non - force - freeness of the photosphere can only be circumvented by incorporating coronal magnetic field geometries into nlfff extrapolations , such as the information from stereoscopically triangulated loops .implementation of coronal magnetic field data into nlfff codes is not straightforward , because most of the conventional nlfff codes are designed to extrapolate from a lower boundary in upward direction , and thus the volume - filling coronal information can not be treated as a boundary problem .a natural method to match arbitrary constraints that are not necessarily a boundary problem is a forward - fitting approach , which however , requires a parameterization of a magnetic field model .since nlfff models are implicitly defined by the two differential equations of force - freeness and divergence - freeness , an explicit parameterization of a magnetic field is not trivial .there are essentially three approaches that have been attempted so far : ( i ) preprocessing of magnetic boundary data by minimizing the lorentz force at multiple spatial scales ( wiegelmann 2004 ; jing , 2009 ) ; ( ii ) linear force - free fitting with subsequent relaxation to a nonlinear force - free field ( malanushenko , 2009 , 2012 ) ; and ( iii ) magnetic field parameterization with buried magnetic charges and forward - fitting of an approximative analytical nlfff solution ( aschwanden 2012a , b ; aschwanden and malanushenko 2012 ) .the third method requires stereoscopically triangulated 3d coordinates of coronal loops , which have been successfully modeled in four different active regions , improving the misalignment angles between the observed loops and a potential field ( ) by a factor of about two , when fitted with the analytical nlfff solution ( ; aschwanden , 2012a ) .however , although this method provides a more realistic nlfff model , the main restriction is the availability of stereo data , as well as the inferior spatial resolution of stereo / euvi compared with other euv imagers ( e.g. , trace , or sdo / aia ) . 
in order to circumvent this restriction we developed a generalized code in this study that requires only a high - resolution euv image , from which a sufficient large number of coronal loops can be traced in two dimensions , plus a line - of - sight magnetogram , which were essentially always available since the lauch of the soho mission in 1995 .a fortunate outcome of this study is that the reduction of 3d information from coronal loops to 2d coordinates does not handicap the accuracy of a nlfff forward - fitting method , as we demonstrated here .one potential limitation of forward - fitting of analytical nlfff approximations may be the accuracy for strong nonpotential cases ( say near flaring times ) , since our analytical nlfff approximation is only accurate to second order of the force - free -parameter .another caveat may be the universality of the analytical nlfff parameterization .our approximation is designed to model azimuthal ( rotational ) twist around a vertical axis with respect to the solar surface , which corresponds to electric currents flowing in vertical direction .this geometry may not be appropriate for horizontally twisted structures , such as horizontally extended filaments .for such cases , a more general nlfff solution could be desirable . in this respect ,more general nlfff solutions such as currently developed by malanushenko , ( 2009 ; 2012 ) may provide new tools , after they will be generalized for a spherical solar surface and optimized for computational speed .nevertheless , our analytical nlfff forward - fitting algorithm can always be used to find a quick first approximation of a nlfff solution , which then could be refined by alternative ( more time - consuming ) nlfff codes .note that our forward - fitting code calculates one trial magnetic field configuration in about 0.01 s , while some iterations are needed for the convergence of an approximate nlfff solution , accumulating to a few minutes total computation time .our forward - fitting code computes a solution to an active region with an average computation time of minutes ( see cpu times in table 3 ) on a mac os x , 2 3.2 ghz quad - core intel xeon , 32 gb memory , 800 mhz ddr2 fb - dimm computer .simple cases that can be represented with magnetic charges , require only computation times of order s ( see cpu times in table 1 ) .we developed an analytical nlfff forward - fitting code that requires only the input of a line - of - sight magnetogram and a set of 2d loop tracing coordinates that can be obtained from any high - resolution euv image .this is the first attempt to measure the coronal magnetic field based on directly observed 2d images alone , while we needed stereoscopically triangulated 3d loop coordinates from stereo in previous studies .there exists only one other study ( to our knowledge ) that heads into the same direction of forward - fitting a nlfff model to 2d data ( malanushenko , 2012 ) , using simulated 2d data based on analytical nlfff solutions ( from low and lou 1990 ) or previous nlfff modeling ( from schrijver , 2008 ) .we forward - fitted analytical nlfff approximations to four active regions ( cases a , b , c , d ) using 2d loop tracings only and compared them with previous nlfff fits using ( stereoscopic ) 3d loop coordinates .furthermore we forward - fitted analytical nlfff approximations to another active region using visual 2d loop tracings ( case e ) or automatically traced 2d loop coordinates ( case f ) .our findings of these exercises are the following : 1 . 
our forward - fitting experiment with two different methods demonstrated that a nonlinear force - free magnetic field ( nlfff ) solution can be obtained with equal accuracy _ with or without stereoscopy_. this result relinquishes the necessity of stereo data for future magnetic modeling of active regions on the solar disk , but the availability of suitable stereo data was crucial to establish this result .the accuracy of a forward - fitted nlfff approximation that includes vertical currents ( with twisted azimuthal magnetic field components ) matches coronal loops observed in euv with a median 2d misalignment angle of , while stereoscopic 3d data exhibit a commensurable 2d misalignment angle , but a substantially larger 3d misalignment angle ( ) , which implies stereoscopic measurement errors in the order of .these substantial stereoscopic measurement errors lead to less accurate nlfff fits than 2d loop tracings with unconstrained line - of - sight positions .2d loop tracings in high resolution images ( pixels with trace ) lead to more accurate nlfff fits ( with a misalignment angle of in case e ) than images with lower spatial resolution ( pixels with stereo ) , yielding for the cases a , b , c , and d. 4 . visually ( or manually ) traced 2d loop coordinates appear to be still superior to the best automated loop tracing algorithms , yielding a misalignment angle of ( in case e ) versus ( in case f ) .magnetic field strengths of best - fit nlfff approximations are retrieved with an accuracy of , comparing the 2d loop tracing method with the stereoscopic 3d triangulation method .magnetic energies differ by a factor of between the two methods , while the free energy has a systematic error of ( of the total magnetic energy ) between the two methods , which is about an order of magnitude smaller than found between other ( standard ) nlfff codes .the computational speed of our nlfff code allows the computation of a space - filling magnetic field configuration of an active region in about 0.01 s , while forward - fitting to a set of coronal loops is feasible with about iterations , requiring a total computation time of a few minutes .these results demonstrate clearly that we can perform accurate nlfff magnetic modeling based on 2d loop tracings , which will relinquish the need of stereo data in future , at least for active regions near the solar disk center .the analyzed active regions extended up to 0.65 solar radii away from disk center , so in principle we can perform nlfff modeling for at least half of the number of active regions observed on the solar disk , especially since the parameterization of our analytical nlfff approximation takes the sphericity of the solar surface fully into account ( which is not the case in most other nlfff codes ) . 
for future developmentswe expect that other nlfff codes could implement minimization of the misalignment angle with coronal loops in parallel to the optimization of force - freeness and divergence - freeness , rather than by preprocessing of vector boundary data .further improvements in automated 2d loop tracings could replace visual / manual tracing methods and this way render nlfff modeling with coronal constraints in a fully automated way .the author appreciates the constructive comments by an anonymous referee and helpful discussions with allen gary , anna malanushenko , marc derosa , and karel schrijver .part of the work was supported by nasa contract nng 04ea00c of the sdo / aia instrument and the nasa stereo mission under nrl contract n00173 - 02-c-2035 .amari , t. , boulmezaoud , t.z . and mikic , z . , 1999 , 350 , 1051 .amari , t. , boulmezaoud , t.z . , and aly , j.j . , 2006 , , 446 , 691 .aschwanden , m.j . ,lee , j.k . ,gary , g.a . , smith , m . , and inhester , b .2008a , , 248 , 359 .aschwanden , m.j . ,wuelser , j.p . ,nitta , n.v . , and lemen , j.r .2008b , , 679 , 827 .aschwanden , m.j . ,nitta , n.v ., wuelser , j.p . , and lemen , j.r .2008c , , 680 , 1477 .aschwanden , m.j ., wuelser , j.p . , nitta , n. , lemen , j. , and sandman , a. 2009 , , 695 , 12 .aschwanden , m.j . andsandman , a.w .2010 , astronomical j. 140 , 723 .aschwanden , m.j .2011 , living reviews in solar physics 8 , 5 .aschwanden , m.j .2012a , , online - first , doi 10.1007/s11207 - 012 - 0069 - 7 .aschwanden , m.j . and malanushenko , a. 2012 , , online - first .aschwanden , m.j . ,wuelser , j.p . ,nitta , n. , lemen , j. , schrijver , c.j .derosa , m. , and malanushenko , a. , 2012a , , 756 , 124 .aschwanden , m.j . , 2012b , , ( in press ) .aschwanden , m.j . ,wuelser , j.p . ,nitta , n.v . , and lemen , j.r . , 2012b , , 281 , 101 .conlon , p.a . andgallagher , p.t . , 2010 , , 715 , 59 .derosa , m.l ., schrijver , c.j . ,barnes , g. , leka , k.d . , lites , b.w ., aschwanden , m.j . , amari , t. , canou , a. , mctiernan , j.m . ,regnier , s. , thalmann , j. , valori , g. , wheatland , m.s . ,wiegelmann , t. , cheung , m.c.m . , conlon , p.a . ,fuhrmann , m. , inhester , b. , and tadesse , t. 2009 , , 696 , 1780 .feng , l. , wiegelmann , t. , inhester , b. , solanki , s. , gan , w. q. , and ruan , p. , 2007, , 241 , 235 .gold , t. and hoyle , f .1960 , mnras 120/2 , 89 .grad , h. , rubin , h. , 1958 , _ proc .2nd un int .peaceful uses of atomic energy _ , 31 , 190 .handy , b. , 1999 , , 187 , 229 .inhester , b. 2006 , arxiv e - print : astro - ph/0612649 .inhester , b. , feng , l . , and wiegelmann , t . , 2008 , , 248 , 379 .jing , j. , tan , c . ,yuan , y. , wang , b. , wiegelmann , t. , xu , y . , wang h. , 2010 , , 713 , 440 .low , b.c . , andlou , y.q ., 1990 , , 408 , 689 .malanushenko , a. , longcope , d.w . , and mckenzie , d.e . , 2009 , , 707 , 1044 .malanushenko , a. , schrijver , c.j . ,derosa , m.l . ,wheatland , m.s . ,gilchrist , s.a ., 2012 , , 756 , 153 .metcalf , t.r . ,jiao , l. , uitenbroek , h. , mcclymont , a.n . , canfield , r.c .1995 , , 439 , 474 .sandman , a. , aschwanden , m.j . , derosa , m. , wuelser , j.p . and alexander , d. 2009 , , 259 , 1 .sandman , a.w .and aschwanden , m.j .2011 , , 270 , 503 .scherrer , p.h . , , 1995 , , 162 , 129 .schrijver , c.j . , derosa , m. , metcalf , t.r . ,liu , y. , mctiernan , j. , regnier , s. , valori , g. , wheatland , m.s . , and wiegelmann , t. 2006 , , 235 , 161 .schrijver , c.j ., derosa , m. 
, metcalf , t.r ., barnes , g. , lites , b. , tarbell , t. , mctiernan , j. , valori , g. , wiegelmann , t. , wheatland , m.s . ,amari , t. , aulanier , g. , demoulin , p. , fuhrmann , m. , kusano , k. , regnier , s. , and thalmann j.k . , 2008 , , 675 , 1637 .valori , g. , kliem , b. , and fuhrmann , m. , 2007 , , 245 , 263 .wheatland , m.s . ,sturrock , p.a . ,roumeliotis , g. 2000 , , 540 , 1150 .wheatland , m.s ., 2006 , , 238 , 29 .wheatland , m.s . and regnier , s. , 2009 , 700 , l88 .wiegelmann , t. , 2004 , , 219 , 87 .wiegelmann , t. , inhester , b. , and feng , l. 2009 , annales geophysicae 27/7 , 2925 .wiegelmann , t. and sakurai t. 2012 , living rev .solar phys . ,wlser j.p . , 2004 , spie , 5171 , 111 .yang , w.h . ,sturrock , p.a . , and antiochos , s.k ., 1986 , , 309 , 383 .lrrrrrr n7 & 0.3 & & & & 1.009 & 1.034 + n8 & 2.1 & & & & 1.010 & 1.009 + n9 & 3.0 & & & & 1.016 & 1.015 + n10 & 16.5 & & & & 1.055 & 1.054 + n11 & 12.5 & & & & 1.066 & 1.050 + n12 & 17.8 & & & & 1.140 & 1.094 + & & & & & + mean & & & & & & + lllllrcc a & 10953 ( s05e20 ) & 2007-apr-30 & 23:00 - 23:20 & 22:24 & stereo 6.0 & 200 & [ -3134,+1425 ] + b & 10955 ( s09e24 ) & 2007-may-9 & 20:30 - 20:50 & 20:47 & stereo 7.1 & 70 & [ -2396,+1926 ] + c & 10953 ( n03w03 ) & 2007-may-19 & 12:40 - 13:00 & 12:47 & stereo 8.6 & 100 & [ -2056,+2307 ] + d & 10978 ( s09e06 ) & 2007-dec-11 & 16:30 - 16:50 & 14:23 & stereo 42.7 & 87 & [ -2270,+2037 ] + e & 8222 ( n22w30 ) & 1998-may-19 & 22:21 - 22:22 & 20:48 & trace ( manu ) & 201 & [ -1787,+1200 ] + f & 8222 ( n22w30 ) & 1998-may-19 & 22:21 - 22:22 & 20:48 &trace ( auto ) & 222 & [ -1787,+1200 ] + lrrrrrr a ) 2007-apr-30 & 1103 & & & & 1.006 & 1.007 + b ) 2007-may-9 & 208 & & & & 1.023 & 1.009 + c ) 2007-may-19 & 415 & & & & 1.085 & 1.053 + d ) 2007-dec-11 & 390 & & & & 1.044 & 1.026 + e ) 1998-may-19 ( manu ) & 631 & & & + f ) 1998-may-19 ( auto ) & 915 & & & + & & & & & + ( a - f ) mean & & & & +
We developed a new nonlinear force-free magnetic field (NLFFF) forward-fitting algorithm based on an analytical approximation of force-free and divergence-free NLFFF solutions, which requires as input only a line-of-sight magnetogram and traced 2D coordinates of coronal loops, in contrast to the stereoscopically triangulated 3D loop coordinates used in previous studies. Test results for simulated magnetic configurations and for four active regions observed with STEREO demonstrate that NLFFF solutions can be fitted with equal accuracy with or without stereoscopy, which removes the need for STEREO data in magnetic modeling of active regions (on the solar disk). The 2D loop-tracing method achieves a 2D misalignment of between the model field lines and observed loops, and an accuracy of for the magnetic energy or free-energy ratio. The three times higher spatial resolution of TRACE or SDO/AIA (compared with STEREO) also yields a proportionally smaller misalignment angle between the model fit and observations. Visual/manual loop tracings are found to produce more accurate magnetic model fits than automated tracing algorithms. The computation time of the new forward-fitting code amounts to a few minutes per active region.
a virtual organisation is a logical orchestration of globally dispersed resources to achieve common goals .it couples a wide variety of geographically distributed computational resources ( such as pcs , workstations and supercomputers ) , storage systems , databases , libraries and special purpose scientific instruments to present them as a unified integrated resource that can be shared transparently by communities .we are living in the era of virtual collaborations , where resources are logical and solutions are virtual .advancements on conceptual and technological level have enhanced the way people communicate .the exchange of information and resources between researchers is one driving stimulus for development .this is just as valid for the neural information processing community as for any other research community . as described by the uk e - science initiative goals can be reached by the usage of new stimulating techniques , such as enabling more effective and seamless collaboration of dispersed communities , both scientific and commercial , enabling large - scale applications and transparent access to high - end resources from the desktop , providing a uniform look & feel to a wide range of resources and location independence of computational resources as well as data . in the computational intelligence communitythese current developments are not used to the maximum possible extent until now . as an illustration for this we highlight the large number of neural network simulators that have been developed , as for instance the self - organizing map program package ( som - pak ) and the stuttgart neural network simulator ( snns ) to name only a few .many scientists , scared of existing programs failing to provide an easy - to - use , comprehensive interface , develop systems for their specific neural network applications .this is also because most of these systems lack a generalized framework for handling data sets and neural networks homogeneously .this is why we believe that there is a need for a neural network simulation system that can be accessed from everywhere .we see a solution to this problem in the n2sky system .sky computing is an emerging computing model where resources from multiple cloud providers are leveraged to create large scale distributed infrastructures .the term _ sky computing _ was coined in and was defined as an architectural concept that denotes federated cloud computing .it allows for the creation of large infrastructures consisting of clouds of different affinity , i.e. providing different types of resources , e.g. computational power , disk space , networks , etc ., which work together to form one giant cloud or , so to say , a _ sky computer_. n2sky is an artificial neural network simulation environment providing functions like creating , training and evaluating neural networks .the system is cloud based in order to allow for a growing virtual user community .the simulator interacts with cloud data resources ( i.e. databases ) to store and retrieve all relevant data about the static and dynamic components of neural network objects and with cloud computing resources to harness free processing cycles for the power - hungry neural network simulations .furthermore the system allows to be extended by additional neural network paradigms provided by arbitrary users .the layout of the paper is as follows : in the following section we give the motivation behind the work done . 
in section [ sec : design ]we present the design principles behind the n2sky development .the system deployment within a cloud environment is described in section [ sec : deployment ] .the architecture of n2sky is laid out in section [ sec : architecture ] . in section [ sec : interface ] the interface of n2sky is presented .the paper closes with a look at future developments and research directions in section 5 .over the last few years , the authors have developed several neural network simulation systems , fostered by current computer science paradigms .neuroweb is a simulator for neural networks which exploits internet - based networks as a transparent layer to exchange information ( neural network objects , neural network paradigms ) .neuroaccess deals with the conceptual and physical integration of neural networks into relational database systems .the n2cloud system is based on a service oriented architecture ( soa ) and is a further evolution step of the n2grid systems .the original idea behind the n2grid system was to consider all components of an artificial neural network as data objects that can be serialized and stored at some data site in the grid , whereas n2cloud will use the storage services provided by the cloud environment .this concept covers not only cloud storage but also all heterogeneous date sources available on the web .the motivation to use cloud technology lies in the essential characteristics of this model for enabling ubiquitous , convenient , on - demand network access to a shared pool of configurable computing resources ( e.g. , networks , servers , storage , applications , and services ) that can be rapidly provisioned and released with minimal management effort or service provider interaction . in the light of the development of n2sky and the goal to develop a virtual organisation for the neural network community five cloud characteristics can be revisited by the following : * * shared pool of resources . *resources are shared by multiple tenants .a tenant is defined by the type of cloud being used ; therefore a tenant can be either a department , organization , institution , etc .+ n2sky shares besides hardware resources also knowledge resources .this allows the creation of a shared pool of neural net paradigms , neural net objects and other data and information between researchers , developers and end users worldwide . * * on - demand self - service .* consumers can create their computing resources ( software , operating system , or server ) within mere minutes of deciding they need it without requiring human interaction with each service provider .+ n2sky allows for transparent access to `` high - end '' resources ( computing and knowledge resources ) stored within the cloud on a global scale from desktop or smart phone , i.e. whenever the consumer needs it independently from the consumer local infrastructure situation . * * broad network access .* users can access the computing resources from anywhere they need it as long as they are connected to the network .+ n2sky fosters location independence of computational , storage and network resources .* * rapid elasticity .* computing resources can scale up or scale down based on the users needs .to end users this appears to be unlimited resources .+ n2sky delivers to the users a resource infrastructure which scales according to the problem .this leads to the situation that always the necessary resources are available for any neural network problem . 
** measured service .* services of cloud systems automatically control and optimize resource use by leveraging a metering capability enabling the pay - as - you - go model .this allows consumers of the computing resources to pay based on their use of the resource .+ n2sky supports the creation of neural network business models .access to neural network resources , as novel paradigms or trained neural networks for specific problem solutions , can be free or following certain business regulations , e.g. a certain fee for usage or access only for specific user groups .the presented n2sky environment takes up the technology of n2cloud to a new dimension using the virtual organisation paradigm .hereby the ravo reference architecture is used to allow the easy integration of n2sky into the cloud service stack using saas , paas , and iaas .cloud computing is a large scale distributed computing paradigm for utility computing based on virtualized , dynamically scalable pool of resources and services that can be delivered on - demand over the internet . in the scientific community it is sometimes stated as the natural evolution of grid computing .cloud computing therefore became a buzz word after ibm and google collaborated in this field followed by ibm s blue cloud " launch .three categories can be identified in the field of cloud computing : * * software as a service ( saas ) . *this type of cloud delivers configurable software applications offered by third party providers on an on - demand base and made available to geographically distributed users via the internet .examples are salesforce.com , crm , google docs , and so on . ** platform as a service ( paas ) .* acts as a runtime - system and application framework that presents itself as an execution environment and computing platform .it is accessible over the internet with the sole purpose of acting as a host for application software .this paradigm offers customers to develop new applications by using the available development tools and api s .examples are google s app engine and microsoft s azure , and so on . * * infrastructure as a service ( iaas ) . *traditional computing resources such as servers , storage , and other forms of low level network and physical hardware resources are hereby offered in a virtual , on - demand fashion over the internet .it provides the ability to provide on - demand resources in specific configurations .examples include amazon s ec2 and s3 , and so on .information technology ( it ) has become an essential part of our daily life .utilization of electronic platforms to solve logical and physical problems is extensive .grid computing is often related with virtual organisations ( vos ) when it comes to creation of an e - collaboration .the layered architecture for grid computing has remained ideal for vos . however , the grid computing paradigm has some limitations .existing grid environments are categorized as data grid or computational grid . 
today, problems being solved using vos require both data and storage resources simultaneously .scalability and dynamic nature of the problem solving environment is another serious concern .grid computing environments are not very flexible to allow the participant entities enter and leave the trust .cloud computing seems to be a promising solution to these issues .only , demand driven , scalable and dynamic problem solving environments are target of this newborn approach .cloud computing is not a deviation concept from the existing technological paradigms , rather it is an evolution .cloud computing centers around the concept of `` everything as a service '' ( xaas ) , ranging from hardware / software , infrastructure , platform , applications and even humans are configured as a service .most popular service types are iaas , paas and saas . existing paradigms and technologyare used to form vos , but lack of standards remained a critical issue for the last two decades .our research endeavor focused on developing a reference architecture for virtual organizations ( ravo ) .it is intended as a standard for building virtual organizations ( vo ) .it gives a starting point for the developers , organizations and individuals to collaborate electronically for achieving common goals in one or more domains .ravo consists of two parts , 1 .the requirement analysis phase , where boundaries of the vo are defined and components are identified .a gap analysis is also performed in case of evolution ( up - gradation ) of an existing system to a vo .2 . the blueprint for a layered architecture , which defines mandatory and optional components of the vo . this approach allows to foster new technologies ( specifically the soa / soi paradigm realized by clouds ) and the extensibility and changeability of the vo to be developed .the basic categorization of the the n2sky design depends on the three layers of the cloud service stack as they are : infrastructure as a service ( iaas ) , platform as a service ( paas ) and software as a service ( saas ) .figure [ fig : ravon2sky ] depicts the components of the n2sky framework , where yellow components are mandatory , and white and grey components are optional .* infrastructure as a service ( iaas ) * basically provides enhanced virtualisation capabilities . accordingly, different resources may be provided via a service interface . in n2skythe iaas layer consists of two sub - layers : a factory layer and an infrastructure enabler layer .users need administrative rights for accessing the resources in layer 0 over the resource management services in layer 1. * factory layer ( layer 0 ) .contains physical and logical resources for the n2sky .physical resources comprise of hardware devices for storage , computation cycles and network traffic in a distributed manner .logical resources contain expert s knowledge helping solving special problems like the paradigm matching .* infrastructure enabler layer ( layer 1 ). 
allows access to the resources provided by the factory layer .it consists of protocols , procedures and methods to manage the desired resources .* platform as a service ( paas ) * provides computational resources via a platform upon which applications and services can be developed and hosted .paas typically makes use of dedicated apis to control the behaviour of a server hosting engine which executes and replicates the execution according to user requests .it provides transparent access to the resources offered by the iaas layer and applications offered by the saas layer . in n2skyit is divided into two sublayers : * abstract layer ( layer 2 ) .this layer contains domain - independent tools that are designed not only for use in connection with neural networks . *neural network layer ( layer 3 ) .this layer is composed of domain - specific ( i.e. neural network ) applications .* software as a service ( saas ) * offers `` implementations of specific business functions and business processes that are provided with specific cloud capabilities , i.e. they provide applications / services using a cloud infrastructure or platform , rather than providing cloud features themselves '' . in context of n2sky ,saas is composed of one layer , namely the service layer .* service layer ( layer 4 ) .this layer contains the user interfaces of applications provided in layer 3 and is an entry point for both end users and contributors .components are hosted in the cloud or can be downloaded to local workstations or mobile devices .each of the five layers provide its functionality in a pure service - oriented manner so we can say that n2sky realizes the everything - as - a - service ( xaas ) paradigm .at the moment n2sky facilitates eucalyptus , which is an open source software platform that implements a cloud infrastructure ( similar to amazon s elastic compute cloud ) used within a data center .eucalyptus provides a highly robust and scalable infrastructure as a service ( iaas ) solution for service providers and enterprises .a eucalyptus cloud setup consists of three components the cloud controller ( clc ) , the cluster controller(s ) ( cc ) and node controller(s ) ( nc ) .the cloud controller is a java program that , in addition to high - level resource scheduling and system accounting , offers a web services interface and a web interface to the outside world .cluster controller and node controller are written in the programming language c and deployed as web services inside an apache environment .communication among these three types of components is accomplished via soap with ws - security .the n2sky system itself is a java - based environment for the simulation and evaluation of neural networks in a distributed environment .the apache axis library and an apache tomcat web container are used as a hosting environment for the web services . 
to access these services javaservlets / jsps have been employed as the web frontend .this design approach allows easy portability of n2sky to other cloud platforms .we just finished a deployment of n2sky onto the openstack environment .the motivation for this move is the change in the policy of eucalyptus towards a stronger commercial orientation and the increase in popularity of openstack within the cloud community .we are also working on a port to opennebula , the flagship cloud project of the european union .all these ports are simple for the n2sky software .the effort of porting n2sky lies in the fact getting competence into the new cloud environment .we maintain that the n2sky system can be deployed on various cloud platforms of the underlying infrastructure .this allows naturally to implement a federated cloud model , by fostering the specific affinities ( capabilities ) of different cloud providers ( e.g. data clouds , compute clouds , etc . ) .a possible specific deployment is show in figure [ deployfig ] .three different clouds are depicted providing unique capabilities : the cloud on the left hand side is a computing cloud , providing strong computing capabilities , responsible for the time consuming training an devaluation phases of neural networks . the cloud on the right hand side is a data cloud , which offers extensive storage resources , e.g. by access to relational or nosql database systems .the center cloud is the administrative cloud , which does not provide specific hardware resources but acts as central access point for the user and acts as mediator to the n2sky environment , e.g. by applying business models .the whole system architecture and its components are depicted in figure [ n2skyfig ] .a neural network has to be configured or trained ( supervised or unsupervised ) so that it may be able to adjust its weights in such a way that the application of a set of inputs produces the desired set of outputs . by using a particular paradigm selected by the userthe * n2sky simulation service * allows basically three tasks : * train * ( the training of an untrained neural network ) , * retrain * ( training of a previously trained network again in order to increase the training accuracy ) , * evaluate * ( evaluating an already trained network ) .the * n2sky data archive * is responsible to provide access to data of different objects ( respectively paradigms ) of neural networks by archiving or retrieving them from a database storage service .it can also publish evaluation data .it provides the method * put * ( inserts data into a data source ) and * get * ( retrieves data from a data source ) .the main objective of the * n2sky database service * is to facilitate users to benefit from already trained neural networks to solve their problems .so this service archives all the available neural network objects , their instances , or input / output data related to a particular neural network paradigm .this service dynamically updates the database as the user gives new input / output patterns , defines a new paradigm or evaluates the neural network .the * n2sky service monitor * keeps tracks of the available services , publishes these services to the whole system .initially users interact with it by selecting already published paradigms like backpropagation , quickpropagation , jordan etc . 
or submit jobs by defining own parameters .this module takes advantage of virtualization and provides a transparent way for the user to interact with the simulation services of the system .it also allows to implement business models by an accounting functionality and restricting access to specific paradigms .the * n2sky paradigm / replication service * contains the paradigm implementation that can be seen as the business logic of a neural network service implementation .the * n2sky registry * administrates the stored neural network paradigms .the main purpose of n2sky system is to provide neural network data and objects to users .thus the * n2sky java application / applet * provides a graphical user interface ( gui ) to the user .it especially supports experienced users to easily run their simulations by accessing data related neural network objects that has been published by the n2sky service manager and the n2sky data service .moreover the applet provides a facility to end - users to solve their problems by using predefined objects and paradigms . for the purpose of thin clients a simple web browser , which can execute on a pc or a smart phone ,can be used to access the front - end , the * n2sky ( mobile ) web portal*. it is relying on the * n2sky user management service * which grants access to the system . based on this service layout the following exemplary execution workflow can be derived ( the numbers refer to the labels in figure [ n2skyfig ] ) : 1 .the developer publishes a paradigm service to n2sky .2 . during paradigm service activation the paradigmis replicated to all running instances ( e.g. a java web archive is deployed to all running application server instances ) .3 . users log in per ( mobile ) web browser per restful web service .4 . central monitor service dispatches login requests to user management and access control component per restful web service .callback to service monitor either sending a new session i d or deny access .callback to ( mobile ) web browser redirecting session i d or deny access . 7 .query registry for neural network paradigms per restful web servicecallback to ( mobile ) web browser by sending paradigm metadatacreate a new neural object by using selected paradigm , start new cloud node instance , start training and after them start a new evaluation by using training result .start a new training thread , then get result and store it over data archive in database .start a new evaluation thread , then return result and store it in data archive to database .it is a design goal that no simulation service needs database connection .these services are able to run on an arbitrary node without having to deal with database replication .the design of the user interface of n2sky is driven by the following guiding principles : * * acceptance . * to be accepted by the userthe system has to provide an intuitive and flexible interface with all necessary ( computing and knowledge ) resources easily at hand . * * simplicity . 
*the handling of a neural network has to be simple .the environment has to supply functions to manipulate neural networks in an easy and ( more important ) natural way .this can only be achieved by a natural and adequate representation of the network in the provided environment of the system but also by an embedding of the network into the conventional framework of the system .a neural network software instantiation has to look , to react and to be administrated as any other object .the creation , update and deletion of a neural network instantiation has to be as simple as that of a conventional data object . * * originality . *a neural network object has to be simple , but it has not to loose its originality .a common framework always runs the risk to destroy the original properties of the unique objects .this leads to the situation that either objects of different types loose their distinguishable properties or loose their functionality .a suitable framework for neural networks has to pay attention to the properties and to the functional power of neural networks and should give the user the characteristics he is expecting . * * homogeneity .* neural networks have to be considered as conventional data , which can be stored in any data management environment , as database system or the distributed data stores in the cloud . from the logical point of view a neural network is a complex data value and can be stored as a normal data object .* * system extensibility . *n2sky offers an easy to use interface for neural network researchers to extend the set of neural network paradigms .this can be done by accessing paradigms from a n2sky paradigm service , or by uploading new paradigms .a new paradigm can be both easily integrated into the system and provided to other researcher on the grid by storing it on the paradigm server .following these principles we developed a portable interface , which self - adapts to different user platforms . technically speaking , we based the development of the n2sky interface on html5 , css 3.0 and jquery. thus the user needs a conventional web browser only ( as safari , chrome , mozilla firefox , or internet explorer ) to access the n2sky portal and can use arbitrary user platforms , as workstations , pcs , macs , tablets , or smart phones . in figure[ login ] the n2sky login screen is shown on an iphone and a mac as example for this portability issue . ]basically n2sky provides screens for the classical neural network tasks . in the followingwe present a short walk - through of the training of a backpropagation network . in this presentationwe show the screenshots of an iphone only . * * paradigm subscription .* the user chooses an published available neural network paradigm on the n2sky paradigm server and instantiates a new neural network object based on this paradigm . in figure [ subscription ]the most important steps of this workflow are shown ( from left to right ) .+ first the paradigm is chosen from paradigm store on the paradigm service by using a sql query statement .then the information on the chosen paradigm is shown to the user by the vienna neural netwrok specification language .after deciding for an appropriate paradigm the user can instantiate a new network object . in the shown example the user creates a three layer , fully connected backpropagation network .+ ] * * training . 
*the user starts a training phase on the newly created backpropagation network .first the user specifies the training parameters ( as momentum term , activation function , etc . ) . then the training data set ( training patterns ) have to defined .hereby the n2sky allows both the explicit specification of the data used ( as shown in the figure ) and the specification of the dataset by a sql or nosql query statement ( see next section for more details ) .the training is started in the cloud and the error graph is shown on the iphone .after completion of the training the training results , ( e.g. calculated weight values , error values , etc . ), are shown and stored in the database for further usage ( e.g. evaluation ) .+ ] * * evaluation . *the last step is the evaluation of the trained neural networks for problem solution .an evaluation object is created , which is using an existing training object . also herethe user has the possibility to define the evaluation data by an explicit list or a query statement .after the evaluation is performed in the cloud the finalized evaluation object ( the solution to the given problem ) can be used elsewhere .due to the situation that it is stored in the data store , it is available and accessible from everywhere .+ ] a highlight of the n2sky system is the use of standardized and user - friendly database query languages for searching of network paradigms and objects and defining the training and evaluation data set .the functional data stream definition allows to specify the data sets in a comfortable and natural way .it is not necessary to specify the data values explicitly , but the data streams can be described by sql and nosql statements.so it is easily possible to use real world data sets as training or evaluation data on a global scale .this approach implements interface homogeneity to the user too , who applies the the same tool both for administration tasks ( choosing a neural network paradigm or object ) and the analysis of the stored information . specifically for the definition of the training and evaluation data set , which can be huge data volumes , this functional specification by a query language statement is extremely comfortable .this unique feature allows for combining globally stored , distributed data within the n2sky environment easily . 
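As an illustration of this functional data-stream definition, the following sketch shows how a client might pass a query string, rather than explicit value lists, when submitting training and evaluation jobs over RESTful services. All endpoint paths, parameter names, table and collection names below are hypothetical placeholders and are not the actual N2Sky API.

```python
import requests

N2SKY = "https://n2sky.example.org/api"   # hypothetical base URL, not the real service

# Training data defined functionally: a SQL statement for a relational source and a
# MongoDB-style query for a NoSQL source, instead of explicit value lists.
training_stream = {
    "type": "sql",
    "statement": "SELECT input_vec, target_vec FROM iris_samples WHERE split = 'train'",
}
evaluation_stream = {
    "type": "nosql",
    "collection": "sensor_readings",                       # placeholder collection
    "query": {"split": "eval", "quality": {"$gte": 0.9}},
}

session = requests.Session()
# 1) Log in and obtain a session id (dispatched via the service monitor in the real workflow).
token = session.post(f"{N2SKY}/login",
                     json={"user": "demo", "password": "demo"}).json()["session"]

# 2) Instantiate a network object from a published paradigm (e.g. backpropagation).
net = session.post(f"{N2SKY}/paradigms/backpropagation/instances",
                   headers={"X-Session": token},
                   json={"layers": [4, 8, 3], "fully_connected": True}).json()

# 3) Submit training and evaluation jobs, passing the query-based data streams.
session.post(f"{N2SKY}/instances/{net['id']}/train",
             headers={"X-Session": token},
             json={"data": training_stream, "momentum": 0.9, "epochs": 500})
session.post(f"{N2SKY}/instances/{net['id']}/evaluate",
             headers={"X-Session": token},
             json={"data": evaluation_stream})
```

The point of the sketch is only that the payload carries a query, so that the simulation services can pull arbitrarily large, globally stored data sets without the client ever materializing them.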
in figure [ datastream ] examples for sql using a relational database system ( left ) and nosql using mongodb ( right )are shown .this figure also exemplifies the usage of this approach on the one hand for administrative tasks ( choosing a neural network paradigm on the paradigm server ) and on the other hand for data set specification ( accessing big data on an internet nosql data store via a restful access mechanism ) ]in this paper we presented n2sky , a cloud - based framework enabling the computational intelligence community to share and exchange the neural network resources within a virtual organisation .n2sky is a prototype system with quite some room for further enhancement .ongoing research is done in the following areas : * we are working on an enhancement of the neural network paradigm description language vinnsl to allow for easier sharing of resources between the paradigm provider and the customers .we are also aiming to build a generalized semantic description of resources for exchanging data .* parallelization of neural network training is a further key for increasing the overall performance .based on our research on neural network parallelization we envision an automatically definition and usage of parallelization patterns for specific paradigms .furthermore the automatic selection of capable resources in the cloud for execution , e.g. multi - core or cluster systems is also a hot topic within this area . *key for fostering of cloud resources are service level agreements ( slas ) which give guarantees on quality of the delivered services .we are working on the embedment of our research findings on slas into n2sky to allow for novel business models on the selection and usage of neural network resources based on quality of service attributes . *a further important issue is to find neural network solvers for given problems , similar to a `` neural network google '' .in the course of this research we are using ontology alignment by mapping problem ontology onto solution ontology .peter paul beran , elisabeth vinek , erich schikuta , and thomas weishupl . .in _ proceedings of the international joint conference on neural networks , ijcnn 2008 , part of the ieee world congress on computational intelligence , wcci 2008 _ , pages 18721879 , 2008 .peter brezany , thomas mck , and erich schikuta . a software architecture for massively parallel input - output . in jerzy wasniewski , jack dongarra , kaj madsen , and dorte olesen , editors , _ third international workshop applied parallel computing industrial computation and optimization ( para96 ) _ , volume 1184 of _ lecture notes in computer science _, page 8596 , lyngby , denmark , 1996 .springer berlin / heidelberg .10.1007/3 - 540 - 62095 - 8_10 .irfan ul haq , rehab alnemr , adrian paschke , erich schikuta , harold boley , and christoph meinel .distributed trust management for validating sla choreographies . in philipp wieder , ramin yahyapour , and wolfgang ziegler , editors , _ grids and service - oriented architectures for service level agreements _ , pages 4555springer us , 2010 .irfan ul haq , ivona brandic , and erich schikuta .validation in layered cloud infrastructures . in _ economics of grids , clouds , systems , and services , 7th international workshop ,gecon10 _ , volume 6296 of _ lecture notes in computer science _, page 153164 , ischia , italy , 2010 .springer berlin / heidelberg .peter mell and timothy grance . 
the nist definition of cloud computing .nist special publication 800 - 145 , computer security division , national institute of standards and technology , gaithersburg , md 20899 - 8930 , usa ,september 2011 .hannes schabauer , erich schikuta , and thomas weishupl .solving very large traveling salesman problems by som parallelization on cluster architectures . in _sixth international conference on parallel and distributed computing applications and technologies ( pdcat05 ) _ , page 954958 , dalian , china , 2005 . ieee computer society .erich schikuta , flavia donno , heinz stockinger , helmut wanek , thomas weishupl , elisabeth vinek , and christoph witzany .business in the grid : project results . in _1st austrian grid symposium _ , hagenberg , austria , 2005 .erich schikuta and thomas weishupl .neural networks in the grid . in _ieee international joint conference on neural networks ( ijcnn04 ) _ , volume 2 , page 14091414 vol.2 , budapest , hungary , 2004 .ieee computer society .elisabeth vinek , peter paul beran , and erich schikuta . classification and composition of qos attributes in distributed , heterogeneous systems . in _11th ieee / acm international symposium on cluster , cloud , and grid computing ( ccgrid 2011 ) _ , newport beach , ca , usa , may 2011 .ieee computer society press .thomas weishupl , flavia donno , erich schikuta , heinz stockinger , and helmut wanek .business in the grid : big project . in _ grid economics & business models ( gecon 2005 ) of global grid forum _, volume 13 , seoul , korea , 2005 .thomas weishupl and erich schikuta . towards the merger of grid and economy . in _ international workshop on agents and autonomiccomputing and grid enabled virtual organizations ( aac - gevo04 ) at the 3rd international conference on grid and cooperative computing ( gcc04 ) _ , volume 3252/2004 of _ lecture notes in computer science _, page 563570 , wuhan , china , 2004 .springer berlin / heidelberg .thomas weishupl , christoph witzany , and erich schikuta .trust management and secure accounting for business in the grid . in _6th ieee international symposium on cluster computing and the grid ( ccgrid06 ) _ , page 349356 , singapore , 2006 .ieee computer society .thomas weishupl and erich schikuta . .in _ cnna 04 : proceedings of the 8th ieee international biannual workshop on cellular neural networks and their applications _, los alamitos , ca , usa , 2004 .ieee computer society .
We present the N2Sky system, which provides a framework for exchanging neural-network-specific knowledge, such as neural network paradigms and objects, within a virtual organization environment. It follows the sky computing paradigm, delivering ample resources through the use of federated clouds. N2Sky is a novel cloud-based neural network simulation environment that follows a pure service-oriented approach. The system implements a transparent environment that aims to enable both novice and experienced users to do neural network research easily and comfortably. N2Sky is built using RAVO, a reference architecture for virtual organizations, which allows it to integrate naturally into the cloud service stack (SaaS, PaaS, and IaaS) of service-oriented architectures.
during the last two decades great progress has been made in understanding synchronization of chaotic systems .nevertheless , synchrony in dissipative dynamical systems with coexisting attractors remains relatively unexplored and poorly understood to this very day .this relative lack of activity is hard to reconcile with the fact that multistability has been observed in numerous nonlinear systems in many fields of science , such as laser physics , neuroscience , cardiac dynamics , genetics , cell signaling , and ecology amongst others ; moreover , in situations where synchronization is actually a collective behavior known to play a primary role .even forms of extreme multistability , i.e. the coexistence of infinitely many attractors in phase space , have been recently observed in experiments .many of these results as well as some known coupling mechanisms and dynamical phenomena that seem to be correlated to the emergence of multistability are reviewed in .the dynamics of two unidirectionally coupled systems ( master - slave configuration ) of rssler - like , duffing and rssler - lorenz oscillators , as well as hnon maps have been studied , and some experiments along these lines have been carried out .bidirectionally coupled neuronal models also display very rich synchronous dynamics .one of the most prominent features of all these examples is the intricate dependence of synchronization on the initial conditions , a distinct feature of multistable systems that is nowhere to be found in monostable systems .phenomena such as anticipated intermittent phase synchronization , period - doubling synchronization , and intermittent switches between coexisting type - i and on - off intermittencies have been discovered . on the other hand , even though the existence and stability of multistable synchronous solutions in locally coupled kuramoto models have been studied , to our knowledge , the issue of under what conditions synchronization of more than two coupled generic ( and possibly chaotic ) multistable oscillators is guaranteed has not been addressed yet .furthermore , synchronization of multistable systems in the presence of intermittency still remains an unexplored problem . in this paper, we propose a methodology for studying the synchronization of multistable oscillators , which we illustrate with the example of a bistable system which has the great advantage of being experimentally implemented in electronic circuits .first , we demonstrate the high complexity of the basins of attraction of coexisting states in a solitary bistable oscillator , and the increasing complexity when two of such oscillators interact with each other giving rise to intermittency .second , we investigate the influence of both the initial conditions and the coupling strength on the synchronization of two bidirectionally coupled bistable systems ( with diffusive coupling ) in different coexisting synchronous states , including the existence of intermittency . 
then , we discuss the master stability function ( msf ) approach to the study of the stability of a synchronization manifold of coupled multistable systems .specifically , we obtain the msf for different coexisting chaotic attractors in a dynamical system separately , and then we evaluate how the modification of the coupling parameter allows the system to leave / enter a particular synchronization regime associated with a particular attractor without loss of synchrony in the whole network , even in the presence of intermittency .finally , we check the robustness of our theoretical predictions with electronic circuits to show the validity of our results for real systems where a certain parameter mismatch always exists .the main aim of our work is to develop a methodology that adequately predicts synchronizability in ensembles of bidirectionally coupled multistable systems . with this aim, we choose the piecewise linear rssler oscillator as a paradigmatic example of a bistable system with two coexisting chaotic attractors . when an unidirectional coupling is introduced , the coupled rssler - like system exhibits very rich dynamics , including such phenomena as intermittency , frequency - shifting and frequency - locking .nevertheless , little is known about synchronization scenarios for ensembles of bidirectionally coupled multistable systems , despite the fact that bidirectional coupling itself can lead to the emergence of multiple attractors .specifically , the equations describing the dynamics of the rssler - like oscillators are y ) , \\\dot{z}=-\alpha_3 ( -g(x)+z ) , \label{rossler1 } \end{array}\ ] ] with , , and are the state variables .the piecewise linear function introduces the nonlinearity in the system that leads to a chaotic behavior .the parameter values are , , , , , , and . for this parameter choice ,the system is known to be a bistable chaotic system ( i.e. the phase portrait displays two different chaotic attractors ) , as previously reported in .unlike most previously studied multistable systems ( see , e.g. ) , this system exhibits multistability in the autonomous evolution , without the need for chaotic driving .moreover , this system can be implemented in electronic circuits to experimentally assess the validity of the theoretical predictions . from any arbitrary initial condition within a bounded region in phase spacethe system rapidly converges to one of the two chaotic attractors shown in fig .[ fig1 ] ( a ) .we denote the larger attractor by ( fig .[ fig1 ] ( a ) , blue ( dark gray ) ) and the smaller one by ( fig .[ fig1 ] ( a ) , red ( gray ) ) .the basins of attraction of ( blue ( dark gray ) ) and ( red ( gray ) ) are shown in fig .[ fig1 ] ( b ) for initial conditions such that ] and .the basin of attraction of is seen to be much larger than the basin of .two spirals are clearly visible , where initial conditions leading to one or the other attractor seem to be intertwined .each of these spirals has a fixed point of the system as its focus , as has been previously reported in . 
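Because the right-hand sides of Eq. ([rossler1]) are only partially reproduced here, the following sketch is a template rather than a faithful implementation: only the z-component matches the form quoted above, while the x and y components and all parameter values are placeholders to be replaced with those of the original system. It illustrates how a basin-of-attraction map such as Fig. [fig1](b) can be computed by integrating a grid of initial conditions and labelling the attractor each trajectory converges to (here by the peak-to-peak amplitude of x, an assumed criterion).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters and x/y dynamics -- replace with those of Eq. (rossler1);
# only the z equation below matches the form quoted in the text.
a1, a2, a3, beta, mu = 0.2, 0.5, 1.0, 0.5, 15.0

def g(x):
    # piecewise-linear nonlinearity (placeholder breakpoint and slope)
    return 0.0 if x <= 3.0 else mu * (x - 3.0)

def rhs(t, s):
    x, y, z = s
    dx = -a1 * (x + beta * y + z)        # placeholder
    dy = -a2 * (-x + 0.02 * y)           # placeholder
    dz = -a3 * (-g(x) + z)               # form quoted in the text
    return [dx, dy, dz]

def attractor_label(x0, y0, t_end=500.0):
    """Integrate from (x0, y0, 0) and classify the asymptotic attractor from the
    amplitude of the x oscillations in the final part of the run (the large
    attractor shows a visibly larger x range than the small one)."""
    sol = solve_ivp(rhs, (0.0, t_end), [x0, y0, 0.0], max_step=0.05)
    if not sol.success or np.any(~np.isfinite(sol.y)):
        return -1                                  # unbounded / failed trajectory
    tail = sol.y[0, sol.t > 0.8 * t_end]           # x(t) on the final 20% of the run
    return 1 if np.ptp(tail) > 4.0 else 0          # 1 = large, 0 = small (placeholder threshold)

# Coarse basin scan over a grid of initial conditions in the (x, y) plane:
xs, ys = np.linspace(-1, 1, 21), np.linspace(-1, 1, 21)
basin = np.array([[attractor_label(x0, y0) for x0 in xs] for y0 in ys])
```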
In Fig. [fig1](c) we focus on a smaller region of the plane to better appreciate the details of the spiral that has its center at the origin. Indeed, the mixing of initial conditions close to the center of the spiral seems to be present at arbitrarily small spatial scales, as the further zoom shown in Fig. [fig1](d) demonstrates. We have checked that this structure is preserved for another 4 orders of magnitude, with no end in sight at even smaller scales. This is not altogether surprising, since basins that are interwoven in a complicated fashion and fractal basin boundaries feature prominently in the phase portraits of many multistable systems (see the cited reviews of fractal basin boundaries and fractal sets in nonlinear dynamics). Although the precise characterization of these boundaries is outside the scope of this paper, we stress how difficult it is to control the asymptotic dynamics of even a single oscillator in the presence of noise or uncertainties when the initial conditions lie in certain regions of phase space.
(Color online) Coexisting attractors and their basins of attraction in the Rössler-like oscillator (Eq. [rossler1]). (a) Large (blue/dark gray) and small (red/gray) attractors. (b) Basins of attraction of the large (blue) and small (red) attractors in the plane; initial conditions leading to unstable trajectories appear in white. (c) Basins of attraction of the two attractors in the $[-1,1]\times[-1,1]$ region. (d) Further zoom of a small square of the plane.
To start our study of synchronization of bidirectionally coupled multistable systems, we first consider the simple case of two coupled Rössler-like oscillators. This particular system has been thoroughly analyzed for the case of a master-slave configuration. Here, we investigate, for the first time to our knowledge, bidirectionally coupled multistable chaotic systems, and also for the first time the coupling is diffusive. The coupling is introduced through one state variable with coupling strength such that each oscillator follows Eq. ([rossler1]) plus a diffusive coupling term; in particular, $\dot{z}_{1,2}=-\alpha_3\left(-g(x_{1,2})+z_{1,2}\right)$ (Eq. [rossler2]), where $g$ is given by Eq. ([geq]). Our numerical simulations show the existence of four possible asymptotic regimes: a) both systems end up in an attractor indistinguishable from the large attractor, b) both systems end up in an attractor indistinguishable from the small attractor, c) one system asymptotes to the large attractor and the other to the small one, d) the systems switch intermittently back and forth between the two attractors in an irregular way (the intermittent behavior of these systems in master-slave configurations has been described previously). As the coupling strength is increased, all these cases appear, disappear and mix in a very complicated manner, depending on the initial conditions.
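A minimal sketch of how the four regimes can be detected numerically is shown below. The uncoupled dynamics reuse the placeholder right-hand side from the previous sketch (again, not the exact Eq. ([rossler1])); the choice of x as the coupled variable, the symbol `eps` for the coupling strength, the amplitude-based attractor criterion, and the labels LL/SS/LS are all assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, a3, beta, mu = 0.2, 0.5, 1.0, 0.5, 15.0     # placeholders (see previous sketch)

def g(x):
    return 0.0 if x <= 3.0 else mu * (x - 3.0)

def coupled_rhs(t, s, eps):
    x1, y1, z1, x2, y2, z2 = s
    def f(x, y, z, x_other):
        dx = -a1 * (x + beta * y + z) + eps * (x_other - x)   # diffusive coupling in x (assumed)
        dy = -a2 * (-x + 0.02 * y)                            # placeholder
        dz = -a3 * (-g(x) + z)                                # form quoted in the text
        return dx, dy, dz
    return [*f(x1, y1, z1, x2), *f(x2, y2, z2, x1)]

def window_labels(x, t, t_win):
    """Label successive time windows as large (1) or small (0) attractor from the
    peak-to-peak amplitude of x in each window (placeholder criterion)."""
    labels = []
    for t0 in np.arange(t[0], t[-1] - t_win, t_win):
        seg = x[(t >= t0) & (t < t0 + t_win)]
        labels.append(1 if np.ptp(seg) > 4.0 else 0)
    return np.array(labels)

def regime(eps, s0, t_end=2000.0):
    sol = solve_ivp(coupled_rhs, (0.0, t_end), s0, args=(eps,), max_step=0.05)
    keep = sol.t > 0.25 * t_end                        # discard the initial transient
    l1 = window_labels(sol.y[0, keep], sol.t[keep], 100.0)
    l2 = window_labels(sol.y[3, keep], sol.t[keep], 100.0)
    if len(set(l1)) > 1 or len(set(l2)) > 1:
        return "intermittent"
    return {(1, 1): "LL", (0, 0): "SS"}.get((int(l1[0]), int(l2[0])), "LS/SL")

print(regime(eps=0.05, s0=[0.1, 0.0, 0.0, -0.4, 0.2, 0.0]))
```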
in fig .[ fig2 ] we show the basins of attraction for these asymptotic regimes that result from fixing and , and exploring a finely discretized grid for ] . * in ( a ) , the largest lyapunov exponents corresponding to attractors ( magenta diamonds ) and ( grey circles ) as functions of .the dashed lines in the background correspond to the estimates obtained with ( ) . in ( b ) , the fraction of phase space points within the interval ] . * in ( a ) , the largest lyapunov exponents corresponding to attractors ( magenta diamonds ) and ( grey circles ) as functions of .the dashed lines in the background correspond to the estimates obtained with ( ) . in ( b ) , the fraction of phase space points within the interval ] times the total number of orbit points considered is considerably smaller than a number on the order of unity . even if a few points in hundreds of thousands or millions of phase - space points were affected by the discontinuity in the jacobian or by the small modification in the dynamics introduced by the polynomial , the effect on long time averages along phase - space orbits , such as those lyapunov exponent estimatesare based upon , would be negligible . in conclusion , in order to avoid the discontinuity in the jacobian of our dynamical system, we can compute the lyapunov exponents using a slightly modified dynamical system . doing this, we must ensure that the modification should be sufficiently small so that the results become independent of the size of the phase - space region whose dynamics is modified and of the modification form ( independent of the size of the modified region and of the degree of the polynomial in the example above ) .but then , the resulting lyapunov exponent estimates coincide with those obtained from the original dynamical system .we classify the dynamics as corresponding to or based on counting the number of local maxima in the variable along every semi - cycle for which , which is systematically larger than one for and exactly one for . in the presence of intermittency , however , the system may jump from to , or the other way around , while , which compromises the quality of the identification of , and time windows in fig .[ fig4 ] .the minimum value of the synchronization error , which never reaches zero due to the intrinsic noise of the electronic circuits , is vpp , and it is determined by the error of two unidirectionally coupled rssler - like circuits in the limit of strong coupling , as explained in .
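The semi-cycle criterion described above can be written as a short time-series classifier. In the sketch below, the choice of x as the counted variable and y < 0 as the condition defining a semi-cycle are assumptions (the variable names are not recoverable here), and `scipy.signal.find_peaks` stands in for whatever peak detection was actually used. In the presence of intermittency the returned labels alternate between consecutive semi-cycles, which is exactly what complicates the identification of the time windows mentioned above.

```python
import numpy as np
from scipy.signal import find_peaks

def classify_semicycles(x, y):
    """For every semi-cycle (a contiguous run where y < 0, an assumed condition),
    count the local maxima of x: exactly one maximum -> small attractor (0),
    more than one -> large attractor (1). Returns one label per semi-cycle."""
    mask = y < 0.0
    change = np.flatnonzero(np.diff(mask.astype(int))) + 1   # run boundaries
    labels = []
    for seg in np.split(np.arange(len(x)), change):
        if not mask[seg[0]] or len(seg) < 3:
            continue                                  # not a semi-cycle, or too short
        n_peaks = len(find_peaks(x[seg])[0])
        labels.append(1 if n_peaks > 1 else 0)        # >1 maxima -> large, ==1 -> small
    return np.array(labels)

# Demo with a synthetic signal that has several ripples per semi-cycle of y:
t = np.linspace(0, 40, 4000)
y = np.sin(t)                          # semi-cycles where sin(t) < 0
x = np.cos(t) + 0.5 * np.cos(5 * t)    # multiple local maxima per semi-cycle
print(classify_semicycles(x, y))
```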
we propose a methodology to analyze synchronization in an ensemble of diffusively coupled multistable systems . first , we study how two bidirectionally coupled multistable oscillators synchronize and demonstrate the high complexity of the basins of attraction of coexisting synchronous states . then , we propose the use of the master stability function ( msf ) for multistable systems to describe synchronizability , even during intermittent behaviour , of a network of multistable oscillators , regardless of both the number of coupled oscillators and the interaction structure . in particular , we show that a network of multistable elements is synchronizable for a given range of topology spectra and coupling strengths , irrespective of specific attractor dynamics to which different oscillators are locked , and even in the presence of intermittency . finally , we experimentally demonstrate the feasibility and robustness of the msf approach with a network of multistable electronic circuits .
arma identification methods usually lead to nonconvex optimization problems for which global convergence is not guaranteed , cf .e.g. .although these algorithms are simple and perform effectively , as observed in , ( * ? ? ?* section 1 ) , no theoretically satisfactory approach to arma parameter estimation appears to be available .alternative , convex optimization approaches have been recently proposed by byrnes , georgiou , lindquist and co - workers in the frame of a broad research effort on analytic interpolation with degree contraint , see , , , , , , , , , , , , , , , , , , , and references therein . in particular , describes a new setting for spectral estimation .this so - called _ three _ algorithm appears to allow for higher resolution in prescribed frequency bands and to be particularly suitable in case of short observation records .it effectively detects spectral lines and steep variations ( see for a recent biomedical application ) .an outline of this method is as follows .a given realization of a stochastic process ( a finite collection of data ) is fed to a suitably structured bank of filters , and the steady - state covariance matrix of the resulting output is estimated by statistical methods .only zeroth - order covariance lags of the output of the filters need to be estimated , ensuring statistical robustness of the method .finding now an input process whose _ rational _ spectrum is compatible with the estimated covariance poses naturally a nevanlinna - pick interpolation problem with bounded degree .the solution of this interpolation problem is considered as a mean of estimating the spectrum .a particular case described in the paper is the _ maximum differential entropy _ spectrum estimate , which amounts to the so - called central solution in the nevanlinna - pick theory .more generally , the scheme allows for a non constant _ a priori _estimate of the spectrum .the byrnes - georgiou - lindquist school has shown how this and other important problems of control theory may be advantageously cast in the frame of convex optimization .these problems admit a finite dimensional dual ( multipliers are matrices ! ) that can be shown to be solvable .the latter result , due to byrnes and lindquist ( see also ) is , however , nontrivial since the optimization occurs on an open , unbounded set of hermitian matrices .the numerical solution of the dual problem is also challenging , since the gradient of the dual functional tends to infinity at the boundary of the feasible set . finally , reparametrization of the problem may lead to loss of global concavity , see the discussion in ( * ? ? ?* section vii ) .this paper adds to this effort in that we consider estimation of a multivariate spectral density in the spirit of three , but employing a different metric for the optimization part , namely the _hellinger distance _ as in . in papers , byrnes , gusev and lindquist chose the kullback - leibler divergence as a frequency weighted entropy measure , thus introducing a broad generalization of burg s maximum entropy method .more recently , this motivation was supported by the well - known connection with prediction error methods , see e.g. . 
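The outline above — feed the observed record to a bank of filters and estimate the steady-state covariance of the output — can be illustrated with a small numerical sketch. The first-order filter bank with poles p_k chosen below is only one common choice and is not taken from the paper; the covariance estimator is the plain sample average.

```python
import numpy as np

def filter_bank_covariance(y, poles):
    """Run the scalar data y through the bank x_{t+1} = A x_t + B y_t with
    A = diag(poles), B = ones, and return the sample estimate of the
    steady-state state covariance Sigma = E[x x^*]."""
    y = np.asarray(y, dtype=complex)
    n = len(poles)
    A = np.diag(poles)
    B = np.ones(n, dtype=complex)
    x = np.zeros(n, dtype=complex)
    acc = np.zeros((n, n), dtype=complex)
    count = 0
    for t, yt in enumerate(y):
        x = A @ x + B * yt
        if t > len(y) // 5:                     # discard an initial transient
            acc += np.outer(x, np.conj(x))
            count += 1
    return acc / count

# Example: AR(1) data fed to three filters with poles inside the unit disc.
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
data = np.zeros(20000)
for t in range(1, 20000):
    data[t] = 0.8 * data[t - 1] + e[t]
poles = [0.0, 0.5 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 0.3)]
Sigma = filter_bank_covariance(data, poles)
print(np.round(Sigma, 2))
```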
in the multivariable case ,a kullback - leibler pseudodistance may also be readily defined inspired by the _ von neumann s relative entropy _ of statistical quantum mechanics .the resulting spectrum approximation problem , however , leads to computable solutions of bounded mcmillan degree only in the case when the prior spectral density is the identity matrix ( maximum entropy solution ) . on the contrary , with a suitable extension of the scalar hellinger distance introduced in , the hellinger approximation generalizes nicely to the multivariable case for any prior estimate of the spectrum .the main contributions of this paper , after some background material in sections ii - iv , are found in sections v - viii . in sectionv , we establish _ strong convexity _ and _ smoothness _ of the dual functional on a certain domain of hermitian matrices . in sectionvi , we analyze in detail a variant of a newton - type _ matricial _ iteration designed to numerically solve the dual of the multivariable spectrum approximation problem . it had originally been sketched in .the computational burden is dramatically reduced by systematically resorting to solutions of lyapunov and riccati equations thanks to various nontrivial results of _ spectral factorization_. we then show in section vii that the algorithm is _ globally _ convergent . finally , in section viii, we present guidelines for its application to multivariate spectral estimation and present some simulations comparing to existing methods .simulation in the multivariable case shows that , at the price of some moderate extra complexity in the model , our method may perform much better than matlab s pem and matlab s n4sid in the case of a short observation record .paper introduces and solves the following moment problem : given a bank of filters described by an input - to - state stable transfer function and a state covariance matrix , give necessary and sufficient conditions for the existence of input spectra such that the steady state output has variance , that is , moreover , parametrize the set of all such spectra ( here , and in the sequel , integration takes place on the unit circle with respect to normalized lebesgue measure ) . throughout this paperwe use the following notations : for matrices and for spectra and transfer functions .the scalar product between square matrices is defined as .let be the family of -valued functions defined on the unit circle which are hermitian , positive - definite , bounded and coercive .we have the following _ existence _result : there exists satisfying ( [ constraint ] ) if and only if there exists such that paper deals with the following ( scalar ) spectrum _ approximation _ problem : when constraint ( [ constraint ] ) is feasible , find the spectrum which minimizes the kullback - leibler pseudo distance from an `` a priori '' spectrum , subject to the _ constraint _ ( [ constraint ] ) .it turns out that , if the prior is rational , the solution is also rational , and with degree that can be bounded in terms of the degrees of and .this problem again admits the maximum differential entropy spectrum ( compatible with the constraint ) as a particular case ( ) .the above minimization poses naturally a variational problem , which can be solved using lagrange theory .its dual problem admits a maximum and can be solved exploiting numerical algorithms . 
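A sketch of the moment constraint and of the Hellinger-type metric used later as the approximation criterion is given below, assuming the same simple first-order filter bank as in the previous snippet (a modelling choice, not the paper's G) and the standard scalar form of the Hellinger distance between spectral densities; the true AR(1) spectrum and the flat prior are illustrative only.

```python
import numpy as np

def gamma_of_phi(phi, poles, n_grid=4096):
    """Numerically evaluate Gamma(Phi) = int G(e^{jt}) Phi(t) G(e^{jt})^* dt/(2*pi)
    for the first-order filter bank G(z) = (I - A z^{-1})^{-1} B used in the
    previous sketch (a modelling choice, not the paper's G)."""
    th = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    A = np.diag(poles)
    n = len(poles)
    B = np.ones((n, 1), dtype=complex)
    acc = np.zeros((n, n), dtype=complex)
    for t in th:
        G = np.linalg.solve(np.eye(n) - A * np.exp(-1j * t), B)   # G(e^{jt})
        acc += phi(t) * (G @ G.conj().T)
    return acc / n_grid

def hellinger2(phi, psi, n_grid=4096):
    """Squared scalar Hellinger distance (standard form) between two spectral densities,
    integrated with respect to normalized Lebesgue measure on the unit circle."""
    th = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    return np.mean((np.sqrt(phi(th)) - np.sqrt(psi(th))) ** 2)

# Example: the true AR(1) spectrum of the data above versus a flat prior.
phi_ar1 = lambda t: 1.0 / np.abs(1.0 - 0.8 * np.exp(-1j * t)) ** 2
psi_flat = lambda t: np.ones_like(np.asarray(t, dtype=float))
poles = [0.0, 0.5 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 0.3)]
print(np.round(gamma_of_phi(phi_ar1, poles), 2))  # should be close to the Sigma estimated above
print(hellinger2(phi_ar1, psi_flat))
```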
in restated and solved a similar variational problem with respect to a different metric , namely the hellinger distance : equation ( [ scalarhellinger ] ) defines a _ bona fide _ distance , well - known in mathematical statistics .the main advantage of this approach to spectral approximation is that it easily generalizes to the multivariable case , whereas log - like functionals do not enjoy this property .in this section , we discuss in depth the feasibility of ( [ constraint ] ) . following and ,let , let be the space of -valued continuous functions defined on the unit circle , and let the operator be defined as follows : we are interested in the _ range _ of the operator which , having to deal with hermitian matrices , we consider as a vector space over the reals .the following facts hold : [ rangegammaprop ] 1 .let .the following are equivalent : * there exists such that identity ( [ feasibility ] ) holds .* there exists such that .* there exists , such that .2 . let ( not necessarily positive definite ) .there exists such that identity ( [ feasibility ] ) holds if and only if .3 . if and only if ] , a compact set that does not contain any singular matrix . now recall that the matrix inversion operator is continuous at any nonsingular matrix .hence , admits a uniform bound on ] be an ortho - normal basis of ] needs to be computed only once ) .this enables us to solve equation ( [ newtonstep ] ) .the backtracking stage involves similar , though easier , computations .we must check the following conditions : checking ( [ backtrackingdomainj ] ) is really a matter of checking whether we can factorize .thus must be halved until the are ( [ riccati ] ) is solvable having .finally , to check ( [ backtrackingj ] ) , we need to compute .this can be done in a way similar to the above computations : that the minimum of exists _ and is unique _ , we investigate global convergence of our newton algorithm .first , we recall the following _ definition _ : a function twice differentiable in a set is said to be _ strongly convex _ in if there exists a constant such that for , where is the hessian of at .+ we restrict our analysis to a sublevel set of .let .the set is _ compact _ ( as it was shown in ( * ? ? ?* section vii ) ) . because of the backtracking in the algorithm , the sequence is decreasing . thus .we now wish to apply a theorem in ( * ? ? ?* , p. 488 ) on convergence of the newton algorithm with backtraking for strongly convex functions on .this theorem ensures linear decrease for a finite number of steps , and quadratic convergence to the minimum after the linear stage , thus establishing _ global _ convergence of the newton algorithm with backtracking .we proceed to establish first _ strong convexity _ of on . to do that, we employ the following result .[ strongconvexity ] let be defined over an open convex subset of a finite - dimensional linear space .assume that is twice continuously differentiable and strictly convex on .then is _ strongly _ convex on any compact set .first , recall that since is twice continuously differentiable and strictly convex , its hessian is an hermitian positive - definite matrix at each point . 
by lemma [ lemma0 ] ,the mapping from to its minimum ( real ) eigenvalue is continuous .it follows that the mapping from to the minimum eigenvalue of the hessian of at is also continuous , being a composition of continuous functions .hence the latter admits a minimum in the compact set by weierstrass theorem .thus is the minimum of the eigenvalues of all the hessians computed in , and can not be zero , since otherwise there would be an with singular , and this can not happen since is strictly convex .hence , i.e. is strongly convex on . 2ex by an argument similar to that of lemma [ lemma0 ] , it can be shown that for a twice continuously differentiable function which is strictly convex on , there exists such that for all .moreover , strong convexity on a _ closed _set implies boundedness of the latter .thus , strong convexity and boundedness of the hessian are intertwined , and both are _ essential _ in the proof of theorem [ bvtheorem ] ( see ) .[ bvtheorem ] the following facts hold true : 1 . is twice _ continuously _differentiable on ; 2 . is strongly convex on ; 3 .the hessian of is lipschitz - continuous over ; 4 .the sequence generated by the newton algorithm of section v ( [ newtonstep0])-([backtrackingj ] ) converges to the unique minimum point of in .property 1 is a trivial consequence of theorem [ jpsiregularityprop ] . to prove 2 ,remember that is strictly convex on , hence also on , and apply lemma [ strongconvexity ] . as for property 3 , what it really says is that the following operator : is lipschitz continuous on .theorem [ jpsiregularityprop ] implies that or , which is the same , that .the continuous differentiability of implies its lipschitz continuity over an arbitrary compact subset of , hence also over the sublevel set , and property 3 follows . finally , to prove 4 ,notice that all the hypotheses of ( * ? ? ?* , p. 488 ) are satisfied .namely , the function to be minimized is strongly convex on the compact set , and its hessian is lipschitz - continuous over .it remains to observe that is defined over a subset of the linear space which has _ finite dimension _ over ( recall that is spanned by a finite set of matrices .see proposition [ rangegammaprop ] and remark [ rangegammaremark ] , where ) .thus , once we choose a base in , to every there corresponds a vector in , to every positive definite bilinear form over there corresponds a positive definite matrix in , and to every compact set in there corresponds a compact set in .hence , every convergence result that holds in must also hold in the abstract setting , in view of the homeomorphism between one space and the other .following the purposes of the three method presented in , now we describe an application of the above approximation algorithm to the estimation of spectral densities .consider first the scalar case , and suppose that the finite sequence is extracted from a realization of a zero - mean , weakly stationary discrete - time process .we want to estimate the spectral density of .the idea is the following : * fix a transfer function , feed the data to it , and collect the output data . 
*compute a consistent , and possibly unbiased , estimate of the covariance matrix of the outputs .note that some output samples should be discarded so that the filter can be considered to operate in steady state .* choose as `` prior '' spectrum a coarse , low - order , estimate of the true spectrum of obtained by means of another ( simple ) identification method .* `` refine '' the estimate by solving the approximation problem ( [ primalproblem ] ) with respect to , , and . to be clear, the result of the above procedure is _ the only spectrum , compatible with the output variance , which is closest to the rough estimate in the distance_. note that we are left with significant degrees of freedom in applying the above procedure : the method for estimating , in particular its degree , and the whole structure of , which has no contraints other than being a stability matrix and being reachable .the coarsest possible estimate of is the constant spectrum equal to the sample variance of the , i.e. , where .the resulting spectrum has the form .another simple choice is , where is a low - order ar , ma or arma model estimated from by means of predictive error minimization methods or the like .the flexibility in the choice of is more essential , and has more profound implications . as described in , , and , the following choice : },\quad b={\left [ \begin{array}}{c}1\\1\\\vdots\\1\\1{\end{array } \right]}\ ] ] where the s lie inside the unit circle , implies that the ( true ) steady - state variance has the structure of a pick matrix , and the corresponding problem of finding _ any _ spectrum that satisfies ( [ constraint ] ) is a nevanlinna - pick interpolation . moreover ,the following choice : } , \quad b = { \left [ \begin{array}}{c}0\\0\\\vdots\\0\\1{\end{array } \right]}\ ] ] implies that the steady - state variance is a toeplitz matrix whose diagonals contain the lags of the covariance signal of the input , and the corresponding problem of finding _ any _ spectrum that satisfies ( [ constraint ] ) is a covariance extension problem .these facts justify the theoretical interest in algorithms for constrained spectrum approximation , if for no other reason , as tools to compute at least _ one _ solution to a nevanlinna - pick interpolation or to a covariance extension problem , respectively .but the freedom in choosing has implications also in the above practical application to spectral estimation , where the key properties , not surprisingly , depend on the poles of , i.e. , the eigenvalues of . in general , as described in , the _ magnitude _ of the latter has implications on the variance of the sample covariance : the closer the eigenvalues to the origin , the smaller that variance ( see ( * ? ? ?* section ii.d ) ) .moreover , at least as far as three is concerned , the _ phase _ of the eigenvalues influences resolution capability : more precisely , _ the spectrum estimation procedure has higher resolution in those sectors of the unit circle where more eigenvalues are located_. according to simulations , the latter statement appears to be true also in our setting ( the fundamental difference being that the metric which is minimized is the hellinger distance instead of the kullback - leibler one ) . in the above setting is a consistent estimate of the true steady - state variance . 
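As a concrete rendering of the pipeline just described, the sketch below (illustrative code of ours with a placeholder data record, not the authors' implementation) builds the covariance-extension filter bank mentioned above, with A a shift matrix and b the last canonical basis vector, runs the state recursion on the data, discards an initial transient so that the filter is close to steady state, and averages the outer products of the state.

```python
import numpy as np

def covariance_extension_bank(n):
    """A with ones on the superdiagonal and b = (0,...,0,1)^T: the state
    then holds the last n input samples."""
    A = np.diag(np.ones(n - 1), k=1)
    b = np.zeros((n, 1))
    b[-1, 0] = 1.0
    return A, b

def estimate_state_covariance(y, A, b, discard=50):
    """Run x_{t+1} = A x_t + b y_t and average x x^T over the steady-state part."""
    n = A.shape[0]
    x = np.zeros((n, 1))
    states = []
    for t, yt in enumerate(y):
        x = A @ x + b * yt
        if t >= discard:
            states.append(x.copy())
    X = np.hstack(states)
    return (X @ X.T) / X.shape[1]

rng = np.random.default_rng(0)
y = rng.standard_normal(2000)       # placeholder record; real data goes here
A, b = covariance_extension_bank(6)
Sigma_hat = estimate_state_covariance(y, A, b)
print(np.round(Sigma_hat, 2))
```

Entry (i, j) of the result estimates the covariance lag c_{|i-j|} of the input, so the matrix is approximately Toeplitz, in line with the covariance extension setting recalled above.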
although must belong to as ( this being the caseeven if is the sum of a purely nondeterministic process and some sinusoids , as in the simulations that follow ) , it is almost certainly not the case that when we have available only the finitely many data . strictly speaking, this implies that the contraint ( [ constraint ] ) with is almost always not feasible .it turns out that , increasing the tolerance threshold in its step 5 , the newton algorithm exhibits some kind of robustness in this respect .that is , it leads to a whose corresponding spectrum is _ close _ to satisfying the constraint .nevertheless , we prefer a clear understanding of what the resulting spectrum really is .thus , we choose to enforce feasibility of the approximation problem , at least as permitted by machine number representation , before starting the optimization procedure . to this end , following the same approach employed in , we pose the approximation problem not in terms of the estimated , but in terms of its _ orthogonal projection _ onto , which can be easily computed by means of algebraic methods .that is to say : we can not approximate in the preimage , because that set is empty , thus we choose to approximate in , where is the matrix closest to such that its preimage is not empty .this seems a reasonable choice and by the way it is , _ mutatis mutandis _, what the moore - penrose pseudoinverse does for the `` solution '' , when the linear system is not solvable . note that it is not guaranteed at all that the projection of a positive definite matrix onto a subspace of the hermitian matrices is itself positive definite . in practice, this is not really a problem , inasmuch is `` sufficiently positive '' and close to .the positivity of must anyway be checked before proceeding .this approach and the considerations on the positivity issue should be compared to ( * ? ? ?* section ii.d ) , which deals with the particular case when is the space of toeplitz matrices , and to ( * ? ? ?* section 4 ) , where , to find a matrix a close to , a kullback - leibler criterion is adopted instead of least squares .figure [ covextfigure ] shows the results of the above estimation procedure with structured according to the covariance extension setting ( [ covarianceextension ] ) with covariance lags ( i.e. , is ) , run over samples of the following arma process : ( poles in ) where is a zero - mean gaussian white noise with unit variance .two priors , both estimated from data , have been considered : the constant spectrum and the spectrum , where is an ar model of order obtained from the data by means of the predictive error method procedure in matlab s system identification toolbox .figure [ bgl1figure ] shows the performance of the above procedure in a setting that resembles that of ( * ? ? ?* section iv.b , example 1 ) .the estimation procedure was run on 300 samples of a superposition of two sinusoids in colored noise : with , and independent normal random variables with zero mean and unit variance , and . 
the prior here considered is the constant spectrum equal to the sample variance of the data .following , was chosen real block - diagonal with the following poles ( equispaced in a narrow range where the frequencies of the two sinusoids lie , to increase resolution in that region ) : ( and a column of ones ) .it can be seen that hellinger - distance based approximation does a good job , as does the three algorithm , at detecting the spectral lines at frequencies and .we now consider spectral estimation for a multivariate process . here , 100 samples of a bivariate process with a high order spectrum were generated by feeding a bivariate gaussian white noise with mean 0 and variance to a square ( stable ) shaping filter of order .the latter was constructed with random coefficients , except for one fixed conjugate pair of poles with radius and argument , and one fixed conjugate pair of zeros with radius and argument .the transfer function was chosen with one pole in the origin and complex pole pairs with radius and frequencies equispaced in the range $ ] . then the above estimating procedure was applied , with prior spectrum chosen as the constant density equal to the sample covariance of the bivariate process .figure [ estimfigure ] shows a plot of , , and , respectively for the true spectrum and for the estimation of the latter based on one run of 100 samples . in figure [ comparisonfigure ]we compare the performances of various spectral estimation methods in the following way .we consider four estimates , , , and of .the spectral density is the estimate obtained by the procedure described above in subsection [ sep ] .the spectral density is the maximum entropy estimate obtained using the same employed to obtain our estimate .the spectral densities and are the estimates of obtained by using `` off - the - shelf '' matlab procedures for the prediction error method ( see i.e. or ) and for the n4sid method ( see or ) : the former is a multivariable extension of the classical approach to armax identification , while the latter is a standard algorithm in the modern field of subspace identification . in order to obtain a comparison reasonably independent of the specific data set, we have performed independent runs each with samples of . in such a way we have obtained different estimates , , for each method ., , , and ( average over 50 simulations).,width=491 ] we have then defined where denotes the spectral norm .this is understood as the average estimation error of our method at each frequency .similarly , we have defined the average errors , , and of the other methods . in the each of the plots of figure [ comparisonfigure ], we depict the average error of our method together with the average error of one of the other methods .more explicitly , the first diagram shows the error for the hellinger approximation method and for the maximum entropy spectrum described in .the second diagram shows the error for the hellinger approximation and for the spectrum obtained via matlab s pem identification method .the third diagram shows the same for hellinger approximation and matlab s n4sid method .the hellinger approximation based approach appears to perform better or much better than the other methods .the simulation yields similar results with data points . with data samples , pem and n4sid perform as well as our method .of course , one should always take into account the complexity of the resulting spectrum . 
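The error curves just defined can be computed from stored estimates with a short helper; the sketch below is only illustrative (array shapes and names are ours) and averages the spectral-norm error over the independent runs at each frequency of the grid.

```python
import numpy as np

def average_spectral_error(Phi_true, Phi_estimates):
    """Average over runs of || Phi_hat_i(e^{j theta}) - Phi(e^{j theta}) ||
    (spectral norm) at each frequency of the grid.

    Phi_true:      array of shape (n_freq, m, m)
    Phi_estimates: array of shape (n_runs, n_freq, m, m)
    returns:       array of shape (n_freq,)
    """
    diffs = Phi_estimates - Phi_true[None, ...]
    norms = np.linalg.norm(diffs, ord=2, axis=(-2, -1))   # largest singular value
    return norms.mean(axis=0)

# e.g. err_h  = average_spectral_error(Phi_true, Phi_hat_hellinger)
#      err_me = average_spectral_error(Phi_true, Phi_hat_maxent)
```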
in this example , being of order , the resulting spectral factor ( or `` model '' ) produced by the hellinger approximation has order , whereas the corresponding maximum entropy model has order and both n4sid and pem usually choose order . in our simulation ,the norm of the difference of two estimates produced by pem or by n4sid is sometimes very large when compared to the norm of the difference between any two of the estimates produced by our method .that is , although pem and n4sid are provably consistent as , when few data are available both of them may introduce occasional artifacts , which are well visible as `` peaks '' in figure [ comparisonfigure ] ( a `` peak '' in the 50-run average is due to a very high error in one of the runs , not to a systematic error ) .our method appears to be more robust in this respect .in this paper , we considered the the new approach to multivariate spectrum approximation problem with respect to the multivariable hellinger distance , which was proposed in .we developed in detail the matricial newton algorithm which was sketched there , and proved its global convergence .finally , we described an application of this approach to spectral estimation , and tested it against the well - known pem and n4sid algorithms .it appears that approximation in the hellinger distance may be a useful tool to gain insight into the dynamics of a multivariate process when fewer data are available .in particular , simulations suggest that this method is less prone to produce artifacts than pem and n4sid .another advantage of our method and of the maximum entropy paradigm is that a higher resolution estimate in a prescribed frequency band can be easily achieved by properly placing some poles of close to the unit circle and with phase in the prescribed band . _numerical _ robustness of the algorithm with respect to the number and the position of the poles is an open challenge .also , the analysis of the achievable precision of the results ( in a statistical sense ) has still to be developed .the detailed comments of the anonymous reviewers are gratefully acknowledged . c. i. byrnes , t. georgiou , a. lindquist and a. megretski , _ generalized interpolation in h - infinity with a complexity constraint _ , trans .american math .society vol . * 358(3 ) * , pp .965 - 987 , 2006 ( electronically published on december 9 , 2004 ) . c. i. byrnes and a. lindquist , _ the generalized moment problem with complexity constraint _ , integral equations and operator theory vol .* 56(2 ) * , pp .163 - 180 , 2006 . c. i. byrnes , t. georgiou , and a. lindquist , _ a new approach to spectral estimation : a tunable high - resolution spectral estimator _ , ieee trans .49 * , pp .3189 - 3205 , 2000 .m. deistler , _ a birds eye view on system identification _ , in modeling , estimation and control : festschrift in honor of giorgio picci on the occasion of his sixty - fifth birthday , a. chiuso , a. ferrante and s. pinzoni ( eds ) , springer - verlag , 2007 .a. ferrante , m. pavon and f. ramponi , _ further results on the byrnes - georgiou - lindquist generalized moment problem _ , in modeling , estimation and control : festschrift in honor of giorgio picci on the occasion of his sixty - fifth birthday , a. chiuso , a. ferrante and s. pinzoni ( eds ) , springer - verlag , pp . 73 - 83 , 2007 .t. 
georgiou , _ distances between time - series and their autocorrelation statistics _ , in modeling , estimation and control : festschrift in honor of giorgio picci on the occasion of his sixty - fifth birthday , a. chiuso , a. ferrante and s. pinzoni ( eds ) , springer - verlag , pp . 123 - 133 , 2007 . a. lindquist , _ prediction - error approximation by convex optimization _ , in modeling ,estimation and control : festschrift in honor of giorgio picci on the occasion of his sixty - fifth birthday , a. chiuso , a. ferrante and s. pinzoni ( eds ) , springer - verlag , pp . 265 - 275 , 2007 . a. nasiri amini , e. ebbini , and t.t .georgiou , _ noninvasive estimation of tissue temperature via high - resolution spectral analysis techniques _ , ieee trans . on biomedical engineering vol .* 52 * , pp .221 - 228 , 2005 .a. a. stoorvogel , and j. h. van schuppen , system identification with information theoretic criteria . in : bittantis , picci g ( eds ) _ identification , adaptation , learning : the science of learning models from data_. springer , berlin heidelberg , 1996 .
in this paper , we first describe a _ matricial _ newton - type algorithm designed to solve the multivariable spectrum approximation problem . we then prove its _ global _ convergence . finally , we apply this approximation procedure to _ multivariate spectral estimation _ , and test its effectiveness through simulation . simulation shows that , in the case of _ short observation records _ , this method may provide a valid alternative to standard multivariable identification techniques such as matlab s pem and matlab s n4sid . multivariable spectrum approximation , hellinger distance , convex optimization , matricial newton algorithm , global convergence , spectral estimation .
a classical method for model - checking timed behavioral properties such as those expressed using timed extensions of temporal logic is to rely on the use of observers . in this approach, we check that a given property , , is valid for a system by checking the behavior of the system composed with an observer for the property .that is , for every property of interest , we need a pair of a system ( the observer ) and a formula .then property is valid if and only if the composition of with , denoted ( ) , satisfies .this approach is useful when the properties are complex , for instance when they include realtime constraints or involve arithmetic expressions on variables .another advantage is that we can often reduce the initial verification problem to a much simpler model - checking problem , for example when is a simple reachability property . in this context ,a major problem is to prove the correctness of observers .essentially , this boils down to proving that every trace that contradicts a property can be detected .but this also involve proving that an observer will never block the execution of a valid trace ; we say that it is _ innocuous _ or non - intrusive .in other words , we need to assure that the `` measurements '' performed by the observer can be made without affecting the system . in the present work ,we propose to use a model - checking tool chain in order to check the correctness of observers .we consider observers related to linear time properties obtained by extending the pattern specification language of dwyer et al . with hard , realtime constraints . in this paper , we take the example of the pattern `` '' , meaning that event must occur within units of time ( u.t . ) of the first occurrence of , if any , but not later than . our approach can be used to prove both the soundness and correctness of an observer when we fix the values of the timing constraints ( the values of and in this particular case ) .our method is not enough , by itself , to prove the correctness of a verification tool .indeed , to be totally trustworthy , this will require the use of more heavy - duty software verification methods , such as interactive theorem proving .nonetheless our method is complementary to these approaches . in particularit can be used to debug new or optimized definitions of an observer for a given property before engaging in a more complex formal proof of its correctness .our method is obtained by automating an approach often referred to as _ visual verification _ , in which the correctness of a system is performed by inspecting a graphical representation of its state space . 
instead of visual inspection, we check a set of branching time ( modal -calculus ) properties on the discrete time state space of a system .these formulas are derived automatically from a definition of the pattern expressed as a first - order formula over timed traces .the gist of this method is that , in a discrete time setting , first - order formulas over timed traces can be expressed , interchangeably , as regular expressions , ltl formulas or modal -calculus formulas .this approach has been implemented on the tool tina , a model - checking toolbox for time petri net ( tpn ) .this implementation takes advantage of several components of tina : state space exploration algorithms with a discrete time semantics ( using the option ` -f1 ` of tina ) ; model - checkers for ltl and for modal -calculus , called _ selt _ and _ muse _ respectively ; a new notion of _ verification probes _recently added to fiacre , one of the input specification language of tina .while model checkers are used to replace visual verification , probes are used to ensure innocuousness of the observers .the rest of the paper is organized as follows . in sect .[ sec2:fiacre ] , we give a brief definition of fiacre and the use of probes and observers in this language . in sect .[ sec3:timedtrace ] , we introduce the technical notations necessary to define the semantics of patterns and time traces and focus on an example of timed patterns . before concluding , we describe the graphical verification method and show how to use a model - checker to automatize the verification process .the theory and technologies underlying our verification method are not new : model - checking algorithms , semantics of realtime patterns , connection between path properties and modal logics , nonetheless , we propose a novel way to combine these techniques in order to check the implementation of observers and in order to replace traditional `` visual '' verification methods that are prone to human errors .our paper also makes some contributions at the technical level .in particular , this is the first paper that documents the notion of probe , that was only recently added to fiacre .we believe that our ( language - level ) notion of probes is interesting in its own right and could be adopted in other specification languages .we consider systems modeled using the specification language fiacre .( both the system and the observers are expressed in the same language . )fiacre is a high - level , formal specification language designed to represent both the behavioral and timing aspects of reactive systems .fiacre programs are stratified in two main notions : _ processes _ , which are well - suited for modeling structured activities , and _ components _ , which describes a system as a composition of processes .components can be hierarchically composed .we give in fig .[ fig / fiacre - process ] a simple example of fiacre specification for a computer mouse button capable of emitting a double - click event . the behavior , in this case , is to emit the event if there are more than two events in strictly less than one unit of time ( u.t . ) . [cols="^,^ " , ] we can already debug the pattern by visually inspecting the state graph . for _ soundness _ , we need to check that , when the pattern is not satisfied for traces that do not satisfy formula the observer will detect a problem ( observer eventually reaches a state in the set _ errors _ ) . 
for _ innocuousness _we need to check that , from any state , it is always possible to reach a state where event ( respectively and ) can fire .indeed , this means that the observer can not selectively remove the observation of a particular sequence of external transitions or the passing of time .this graphical verification method has some drawbacks .as such , it relies on a discrete time model and only works for fixed values of the timing parameters ( we have to fix the value of and ) . nonetheless , it is usually enough to catch many errors in the observer before we try to prove the observer correct more formally .a problem with the previous approach is that it essentially relies on an informal inspection ( and on human interaction ) .we show how to solve this problem by replacing the visual inspection of the state graph by the verification of modal -calculus formulas .( the tina toolset includes a model - checker for the -calculus called _muse_. ) the general idea rests on the fact that we can interpret the state graph as a finite state automaton and ( some ) sets of traces as regular languages .this analogy is generally quite useful when dealing with model - checking problems .we start by defining some useful notations .label expressions are boolean expressions denoting a set of ( transition ) labels .for instance , denotes the external transitions , while the expression ` - ( ) ` is only matched by the silent transition label .we will also use the expression to denote the conjunction of all possible labels , e.g. . the model checker _muse _ allows the definition of label expressions using the same syntax . in the following ,we consider regular expressions build from label expressions .for example , the regular expression denotes traces of duration 1 with no events occurring at time . we remark that it is possible to define the set of ( discrete ) traces where the fott formula pres holds using the union of two regular languages : ( 1 ) the traces where never occurs , ; and ( 2 ) the traces where there is an four units of time after the first . in this particular case, is a regular expression corresponding to the property ) by construction , the regular language associated to is exactly the set of finite traces matching ( the discrete semantics ) of pres . in the most general case ,a regular expressions can always be automatically generated from an existential fott formula when the time constraints of delay expressions are fixed ( the intervals in the occurrences of ( ) ) .the next step is to check that the observer agrees with every trace conforming to . for thiswe simply need to check that , starting from the initial state of ( ) , it is not possible to reach a state in the set _ errors _ by following a sequence of transitions labeled by a word in . this is a simple instance of a language inclusion problem between finite state automata .more precisely , if is the set of states visited when accepting the traces in , we need to check that is included in the complement of the set _ present _ ( denoted ) . in our example of fig .[ fig : example ] , we have that , and therefore .this automata - based approach has still some drawbacks .this is what will motivate our use of a branching time logic in the next section . in particular , this method is not enough to check the soundness or the innocuousness of the observer . for innocuousness, we need to check that every event may always eventually happen . 
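The inclusion check described above reduces to a reachability computation on the synchronous product of the state graph with a finite automaton for the regular expression. The fragment below is a minimal illustration, with plain Python dictionaries, single concrete labels instead of muse label expressions, and hypothetical names; it returns the set of graph states visited along traces accepted by the automaton, which can then be tested for disjointness with the set of _present_ states.

```python
from collections import deque

def product_edges(graph, nfa_delta):
    """Synchronous product of a labelled state graph and an NFA for R.
    graph:     dict state -> list of (label, successor)
    nfa_delta: dict (nfa_state, label) -> set of successor nfa states
    """
    edges = {}
    for s, outgoing in graph.items():
        for label, s2 in outgoing:
            for (q, a), targets in nfa_delta.items():
                if a == label:
                    for q2 in targets:
                        edges.setdefault((s, q), []).append((s2, q2))
    return edges

def reachable(edges, sources):
    seen, todo = set(sources), deque(sources)
    while todo:
        u = todo.popleft()
        for v in edges.get(u, []):
            if v not in seen:
                seen.add(v)
                todo.append(v)
    return seen

def visited_while_accepting(graph, init, nfa_delta, nfa_init, nfa_accept):
    """Graph states visited along traces that the automaton accepts."""
    fwd = product_edges(graph, nfa_delta)
    bwd = {}
    for u, vs in fwd.items():
        for v in vs:
            bwd.setdefault(v, []).append(u)
    forward = reachable(fwd, [(init, nfa_init)])
    accepting = [p for p in forward if p[1] in nfa_accept]
    backward = reachable(bwd, accepting)
    return {s for (s, q) in forward & backward}

# the inclusion check described above then reads:
#   visited_while_accepting(G, s0, delta, q0, final).isdisjoint(present_states)
```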
concerning soundness , we need to prove that ; which is false in our case .the problem lies in the treatment of time divergence ( and of fairness ) , as can be seen from one of the counter - example produced when we use our ltl model - checker to check the soundness property , namely : ` b.start.z.t.t.t.t.watch.t.t.\cdots ` ( ending with a cycle of ` t ` transitions ) .this is an example where the error transition is continuously enabled but never fired .we show how to interpret regular expressions over traces using a modal logic . in this case, the target logic is a modal -calculus with operators for forward and backward traversal of a state graph .( many temporal logics can be encoded in the -calculus , including ctl ) . in this context , the semantics of a formula over a kripke structure ( a state graph ) is the set of states where holds . the basic modalities in the logic are ` < a>\psi ` and ` \psi < a > ` , where is a label expression .a state is in if and only if there is a ( successor ) state in and a transition from to with a label in a. symmetrically , is in if and only if there is a ( predecessor ) state in and a transition from to with a label in a. in the following , we will also use two constants , ` t ` , the true formula ( matching all the states ) , and ` 0 ` , that denotes the initial state of the model ; and the least fixpoint operator ` min x | \psi(x ) ` .for example , the formula ` < > t ` matches all the states that are the source of an -transition , likewise ` reach _ ` ` min x|(<>t \vee < z > x ) ` matches all the states that can lead to an -transition using only internal transitions . as a consequence , we can test innocuousness by checking that the formula ` ( reach_reach_reach _ ) ` is true for all states .the soundness proof rely on an encoding from regular path expressions into modal formulas .we define two encodings : that matches the states encountered while firing a trace matching a regular expression ; and that matches the state reached ( at the end ) of a finite trace in .these encodings rely on two derived operators .( again , we assume here that is a label expression . 
)

\[
\psi \;\texttt{o}\; a \;\overset{\text{def}}{=}\; \psi\texttt{<a>}
\qquad\qquad
\psi \;\texttt{*}\; a \;\overset{\text{def}}{=}\; \texttt{min x | } \psi \vee \texttt{x<a>}
\]
\[
\begin{array}{lcl|lcl}
(\!( r \cdot a )\!)_{e} & \overset{\text{def}}{=} & (\!( r )\!)_{e} \;\texttt{o}\; a
 & (\!( r \cdot a )\!) & \overset{\text{def}}{=} & (\!( r )\!) \vee (\!( r \cdot a )\!)_{e} \\
(\!( r \cdot a^{*} )\!)_{e} & \overset{\text{def}}{=} & (\!( r )\!)_{e} \;\texttt{*}\; a
 & (\!( r \cdot a^{*} )\!) & \overset{\text{def}}{=} & (\!( r )\!) \vee (\!( r \cdot a^{*} )\!)_{e} \\
(\!( r \cdot \mathit{tick} )\!)_{e} & \overset{\text{def}}{=} & ((\!( r )\!)_{e} \;\texttt{o}\; \texttt{t}) \;\texttt{*}\; (\texttt{-t})
 & (\!( r \cdot \mathit{tick} )\!) & \overset{\text{def}}{=} & (\!( r )\!) \vee (\!( r \cdot \mathit{tick} )\!)_{e} \\
(\!( r_1 \vee r_2 )\!)_{e} & \overset{\text{def}}{=} & (\!( r_1 )\!)_{e} \vee (\!( r_2 )\!)_{e}
 & (\!( r_1 \vee r_2 )\!) & \overset{\text{def}}{=} & (\!( r_1 )\!) \vee (\!( r_2 )\!) \\
(\!( \epsilon )\!)_{e} & \overset{\text{def}}{=} & \texttt{`0}
 & (\!( \epsilon )\!) & \overset{\text{def}}{=} & \texttt{`0}
\end{array}
\]

given a kripke structure , the states matching the formula $(\!( r )\!)_{e}$ ( respectively $(\!( r )\!)$ ) in it are the states reachable from the initial state after firing ( resp . all the states reachable while firing ) a sequence of transitions matching $r$ . the proof is by induction on the definition of $r$ . for example , if we assume that $\psi$ corresponds to the regular expression $r$ , then $\psi \;\texttt{*}\; a$ matches all the states reachable from states where $\psi$ is true using ( finite ) sequences of transitions with label in $a$ ; i.e. , the formula $\psi \;\texttt{*}\; a$ corresponds to $r \cdot a^{*}$ . likewise , we use the interpretation of the empty expression , $\epsilon$ , to prefix every formula with the constant ` 0 ` ( that will only match the initial state ) . this is necessary since $\mu$-calculus formulas are evaluated on all states whereas regular path expressions are evaluated from the initial state . for example , we give the formula for below , where stands for the expression : if is a modal $\mu$-calculus formula that matches the error condition of the observer , then we can check the correctness and soundness of the observer by proving that the equivalence ( eq ) , below , is a tautology ( that it is true on every state of ( ) ) .
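Both derived operators are easy to evaluate on a finite state graph by set iteration; the fragment below is an illustrative encoding of ours, with transitions stored as (source, label, target) triples, and mirrors how least fixpoints are computed by µ-calculus model checkers in general.

```python
def post(states, edges, labels):
    """psi<a>, i.e. psi o a: states reached in one step from a state in
    `states` through a transition whose label belongs to `labels`."""
    return {t for (s, a, t) in edges if s in states and a in labels}

def star(states, edges, labels):
    """psi * a = min x | psi or x<a>: least fixpoint, i.e. everything
    reachable from `states` through transitions labelled in `labels`."""
    x = set(states)
    while True:
        bigger = x | post(x, edges, labels)
        if bigger == x:
            return x
        x = bigger

# tiny labelled graph, edges given as (source, label, target) triples
edges = {(0, "b", 1), (1, "t", 2), (2, "t", 2), (2, "e", 3)}
print(star({1}, edges, {"t"}))                       # {1, 2}
print(post(star({1}, edges, {"t"}), edges, {"e"}))   # {3}
```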
again , we can interpret the `` error condition '' using the -calculus .the definition of errors is a little bit more involved than in the previous case .we say that a state is in error if the transition is enabled ( the formula ` < error > t ` is true ) or if the state can only be reached by firing the transition ( which corresponds to the formula ` ( t < error > t)\wedge(0(- error ) ) ` .hence is the disjunction of these two properties : the formula ( eq ) can be checked almost immediately ( less than on a standard computer ) for models of a few thousands states using _ muse_.listing [ muse ] gives a _ muse _ script file that can be used to test this equivalence relation .few works consider the verification of model - checking tools . indeed , most of the existing approaches concentrate on the verification of the model - checking algorithms , rather than on the verification of the tools themselves .for example , smaus et al . provide a formal proof of an algorithm for generating bchi automata from a ltl formula using the isabelle interactive theorem prover .this algorithm is at the heart of many ltl model - checker based on an automata - theoretic approach .the problem of verifying verification tools also appears in conjunction with certification issues . in particular , many certification norms , such as the do-178b, requires that any tool used for the development of a critical equipment be qualified at the same level of criticality than the equipment .( of course , certification does not necessarily mean formal proof ! ) in this context , we can cite the work done on the certification of the scade compiler , a tool - suite based on the synchronous language lustre that integrates a model - checking engine . nonetheless , only the code - generation part of the compiler is certified and not the verification part . concerning observer - based model - checking , most of the works rely on an automatic way to synthesize observers from a formal definition of the properties .for instance , aceto et al . propose a method to verify properties based on the use of test automata . in this framework ,verification is limited to safety and bounded liveness properties since the authors focus on properties that can be reduced to reachability checking . in the context of timepetri net , toussaint et al . also propose a verification technique based on `` timed observers '' , but they only consider four specific kinds of time constraints .none of these works consider the complexity or the correctness of the verification problem .another related work is , where the authors define observers based on timed automata for each pattern .our approach is quite orthogonal to the `` synthesis approach '' .indeed we seek , for each property , to come up with the best possible observer in practice . to this end , using our toolchain , we compare the complexity of different implementations on a fixed set of representative examples and for a specific set of properties and kept the best candidates .the need to check multiple implementations for the same patterns has motivated the need to develop a lightweight verification method for checking their correctness . compared to these works ,we make several contributions . 
we define a complete verification framework for checking observers with hard realtime constraints .this framework has been tested on a set of observers derived from high - level timed specification patterns .this work is also our first public application of the probe technology , that was added to fiacre only recently . to the best of our knowledge , the notion of _ probes _ is totally new in the context of formal specification language .paun and chechik propose a somewhat similar mechanism in in an untimed setting where they define new categories of events .however our approach is more general , as we define probes for a richer set of events , such as variables changing state .we believe that this ( language - level ) notion of probes is interesting in its own right and could be adopted by other formal specification languages .finally , we propose a formal approach that can be used to gain confidence on the implementation of our model - checking tools and that replaces traditional `` visual verification methods '' that are prone to human errors .this result also prove the usefulness of having access to a complete toolbox that provides different kind of tools : editors , model - checkers for different kind of logics , b. berthomieu , j .-bodeveix , and m. fillali and g. hubert and f. lang and f. peres and r. saad and s. jan and f. vernadat .the syntax and semantics of fiacre version 3.0 .http://www.laas.fr/fiacre/ , 2012 . a janowska , w penczek , a. prola , a zbrzezny . towards discrete - time verification of time petri nets with dense - time semantics. in _ proc . of the int. workshop on concurrency , specification and programming _ , 2011 .
a classical method for model - checking timed properties such as those expressed using timed extensions of temporal logic is to rely on the use of observers . in this context , a major problem is to prove the correctness of observers . essentially , this boils down to proving that : ( 1 ) every trace that contradicts a property can be detected by the observer ; but also that ( 2 ) the observer is innocuous , meaning that it can not interfere with the system under observation . in this paper , we describe a method for automatically testing the correctness of realtime observers . this method is obtained by automating an approach often referred to as _ visual verification _ , in which the correctness of a system is performed by inspecting a graphical representation of its state space . our approach has been implemented on the tool tina , a model - checking toolbox for time petri net .
many complex systems in nature and society can be successfully represented in terms of networks capturing the intricate web of connections among the units they are made of . in recent years , several large - scale properties of real - world webs have been uncovered , e.g. , a low average distance combined with a high average clustering coefficient , the broad , scale - free distribution of node degree and various signatures of hierarchical and modular organization . beside the mentioned global characteristics ,there has been a quickly growing interest in the local structural units of networks as well .small and well defined sub - graphs consisting of a few vertices have been introduced as motifs , whereas somewhat larger units , associated with more highly interconnected parts are usually called _ communities _ , clusters , cohesive groups , or modules .these structural sub - units can correspond to multi - protein functional units in molecular biology , a set of tightly coupled stocks or industrial sectors in economy , groups of people , cooperative players , etc .the location of such building blocks can be crucial to the understanding of the structural and functional properties of the systems under investigation .the complexity and the size of the investigated data sets are increasing every year . in parallel, the increasing number of available computational cores within a single computer or the advent of cloud computing provides an infrastructure , where such data can be processed . however , the performance potential of these systems is accessible only for problems , where the data processing can be distributed among several computing units .here we introduce the parallel version of cfinder , suitable for finding and visualizing overlapping clusters in large networks .this application is based on the earlier , serial version of cfinder , which turned out to be a quite popular network clustering program .the paper is organized as follows . in section 2we give a summary of the clique percolation method ( cpm ) .this is followed by the description of the method in section 3 , which distributes the computation among several cpus or computing units .the section 4 is devoted for experimental analysis of the time complexity of the method . in the last sectionwe conclude our findings .communities are usually defined as dense parts of networks and the majority of the community finding approaches separate these regions from each other by a relatively small number of links in a disjoint manner . however , in reality communities may even overlap as well . in this casethe nodes in the overlap are members of more than one community .a recently introduced , link density - based community finding technique allowing community overlaps is given by the cpm .in this approach a community is built up from adjacent blocks of the same size .these blocks correspond to -cliques , corresponding to subgraphs with the highest possible density : each of the members of the -clique is linked to every other member .two blocks are considered adjacent if they overlap with each other as strongly as possible , i.e. 
, if they share nodes .note that removing one link from a -clique leads to two adjacent -cliques sharing nodes .a community is a set of blocks that can be reached from one to the other through a sequence of adjacent blocks .note that any block belongs always to exactly one community , however , there may be nodes belonging to several communities at the same time .a consequence of the above definition is that the communities contain only densely connected nodes .thus , nodes with only a few connections or not participating in a densely connected subgraph are not classified into any community .we note that the parameter can be chosen according to the needs of the user .if one is interested in broader community covers , then communities at small values are appropriate. if the most dense community cores are the target of the study , then the communities at larger values of apply . for a general casewe recommend a value just below the percolation threshold .the pseudocode for cpm is given in algorithm [ alg : sercpm ] .the cpm is robust against removal or insertion of a single link . due to the local nature of this approach, such perturbations can alter only the communities containing at least one of the end points of a link .( in contrast , for global methods optimizing a homogeneously defined quantity , the removal or insertion of a single link can result in the change of the overall community structure . )we note , that beside the mentioned advantages the cpm has certain limits as well .e.g. , if there are not enough cliques in the network the method will not find any valuable community structure , whereas for many large overlapping cliques we may easily obtain a single percolating community for too low values .due to the deterministic nature , the cpm may find communities in a particular realization of a random network ensemble . in a general case , though , the members of communities are usually different in each realizations of the ensemble , if is below the percolation threshold . finally , we point out that the cpm will find the same communities in a given subgraph irrespective to the fact whether the subgraph is linked to a larger network or not ( see fig . [fig : subcl ] ) .therefore , a heterogeneous network can be analyzed by first dividing it into homogeneous parts , and applying the method to these subnetworks separately .the distributed version of the cpm takes advantage of the local property of the community definition .since the communities depend only on the local network structure , the network can be divided into small pieces .then the communities ( or the building blocks for the communities ) can be located in each piece of the network independently .the distributed cpm is composed of the following main stages : 1 . splitting up the network into pieces , 2 .finding communities in each piece of the network , 3 . merging the communities found in the previous step .we provide a pseudocode description in algorithm [ alg : pcpm ] .( ) the first step is the most crucial one in the process , since it has to satisfy the following conditions : * each part must be sufficiently small to be processable by one computing unit . 
*the network should not be split up into too many pieces , since the community finding procedure is not optimal on too small networks , and the computational overhead in the last , merging step becomes too high .* since the splitting step might divide communities as well , the nodes at split borders should appear in both subnetworks .in the final step these duplicated nodes can be used to construct the global community structure from the local communities of the subnetworks ( see fig . [fig : grsplit ] ) .the first and the second condition are contradictory : if one optimizes for memory usage on a single processing host , the network has to be split into numerous tiny subnetworks . however , as more subnetworks are created , the number of nodes appearing in mutual split borders is increasing as well , resulting in inefficient overall memory consumption and cpu usage .naturally , the optimal solution depends on the available resources . as a rule of thumb, one should distribute the tasks among the processing units such that each unit works with the largest piece of network processable on the given unit. the third condition , which requires the ability to reconstruct the global community structure from the locally found communities ( and community parts ) can be satisfied as follows . for simplicitylet us suppose that we would like to split the investigated network into two parts , as shown in fig .[ fig : grsplit ] .first we select a set of links ( indicated with dashed lines ) , whose removal cuts the network into two separate subnetworks .the end - nodes of these links ( indicated by filled squares ) define the boundary region of the subnetworks .we split the network into two pieces by removing the selected links , and for each subnetwork we separately insert back all nodes and links in the boundary region ( including links between boundary nodes that were not cut - links ) which means that the boundary region is duplicated . as a result, the -cliques located in the boundary region of the original network will appear in both subnetworks .thus , the communities found in the two pieces will overlap in these -cliques , enabling the reconstruction of the original communities .-cliques of the boundary region will appear in both pieces . ]the resulting isolated subgraphs can be clustered independently , therefore , the calculation can be distributed among several computational units ( pcs or processor cores ) .each individual task of the clustering process calculates the cpm - communities on each subnetwork .thus for each network piece the chains of maximally overlapping -cliques are known . sincea given -clique can be part of only one community , the communities for the whole network can be built up by merging the -cliques from the boundary regions of the subnetworks as follows .first we build a hyper - network from the network pieces in which nodes correspond to subnetworks , and links signal a shared boundary region between the subnetworks . for each hyper - node we check whether the cpm has found any communities in the corresponding subnetwork or not .if communities were found separately for adjacent hyper - nodes , the overlapping region of the two corresponding subnetworks is checked , and communities ( originally in different subnetworks ) sharing a common -clique are merged . 
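To experiment with the splitting idea described above without reimplementing CPM, one can rely on an existing clique-percolation routine for the per-piece step. The sketch below is our own illustration built on networkx's `k_clique_communities`, not on the cfinder code base: it duplicates the boundary region in both pieces and then merges communities from different pieces whenever their intersection contains a k-clique, a simplified stand-in for the k-clique bookkeeping described above.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

def split_with_boundary(G, part1, part2):
    """Cut G along the links between part1 and part2 and duplicate the
    boundary region (endpoints of cut links) in both pieces."""
    boundary = set()
    for u, v in G.edges():
        if (u in part1) != (v in part1):
            boundary.update((u, v))
    return G.subgraph(part1 | boundary).copy(), G.subgraph(part2 | boundary).copy()

def merge_pieces(comms1, comms2, G, k):
    """Merge communities from different pieces if their intersection
    contains a k-clique (simplified merging rule)."""
    merged = [set(c) for c in comms1]
    for c2 in map(set, comms2):
        for c1 in merged:
            common = c1 & c2
            if len(common) >= k and any(len(q) >= k
                                        for q in nx.find_cliques(G.subgraph(common))):
                c1 |= c2
                break
        else:
            merged.append(c2)
    return merged

G = nx.karate_club_graph()              # small stand-in for a large network
nodes = list(G.nodes())
part1, part2 = set(nodes[:20]), set(nodes[20:])
k = 4
p1, p2 = split_with_boundary(G, part1, part2)
communities = merge_pieces(k_clique_communities(p1, k),
                           k_clique_communities(p2, k), G, k)
print([sorted(c) for c in communities])
```

A full implementation would track the shared k-cliques themselves (for example with a union-find structure) so that transitive merges across many pieces are handled; the loop above only attaches each community to the first compatible one.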
by iterating over the hyperlinks in this manner , the communities of the original network build up from the merged communities .note that the hyperlinks can be processed in parallel , where the communities are indexed by an array in shared memory or in a shared database .we have tested the method on the two largest example networks available in the cfinder package : the coauthorship network ( number of nodes = 30561 , number of links = 125959 ) and the undirected word association network ( number of nodes = 10627 , number of links = 63788 ) .the main parameter , which has impact on the performance of the algorithm , is the size of the subnetworks the network is split into .note that this is the minimum size for a subnetwork .if a large clique is attached to a subnetwork , it can not be split up , it is either contained in one subnetwork or it is fully contained in several subnetworks .the parallel version has three main type of computational overhead compared to the serial version .the first one is the splitting step , where the graph is split into smaller pieces .the second source of the processing overhead is queuing the parallel jobs into a scheduler and waiting for free computing units .the third one is the merging of the communities from several subnetworks .if the total cpu consumption is also an issue , one has to take into account a fourth type of overhead : the processing time of the overlapping network regions , as these computations are performed more than once . since the merging step is implemented as a simple database update command , we measured the time consumption of the scheduling and the merging steps together .first we analyze the splitting step . in this stepthe subnetworks are built up by the breadth first search algorithm .building a larger network is less time consuming than building several small networks , thus , the time complexity is proportional to the logarithm of the subnetwork size . when plotting the splitting time as a function of the subnetwork size on a semilogarithmic plot , the slope of its decay is proportional to the average branching factor in the breadth first search process ( see fig .[ fig : split ] ) .if the high degree nodes in the network are close to each other , the breadth first algorithm builds up large subnetworks in a few steps and the size limit for the subnetwork is reached .hence the rest of the nodes outside the large subnetworks will form many disconnected small subnetwork pieces .our graph splitting algorithm collects such tiny subnetworks and attaches them to the larger network pieces . for networks , where this collecting step is needed, the graph splitting algorithm may consume more computing time ( see fig .[ fig : split ] ) . ,title="fig:",width=238 ] , title="fig:",width=238 ] now we turn to the second source of the computing overhead of the parallel method .if the network is split up into more subnetworks than the number of available computing units , they can not be processed in parallel , and some jobs must wait until the previous jobs finish .this effect dominates the running time as shown on fig .[ fig : run ] .the running time decays linearly with the number of subnetworks , which is inversely proportional to the subnetwork size .this trend is valid until the number of processes reaches the number of available computing nodes . 
above this subnetwork size the computing time is practically constant . the faster the communities are found in the smaller subnetworks , the more time is needed for merging the results from the various subnetworks . in our implementation we used a small grid of personal computers , where the condor scheduling system distributed the jobs among 30 cores on linux computers with 2ghz amd opteron cpus connected by 100 mb/s ethernet network . here the scheduling time and the communication overhead among the computing units are comparable to the processing time of the largest network that is manageable in one computing unit . in similar environments we advise using the serial version for small networks , since the parallel version will not give any advantage . the main targets of the parallel version are very large networks that do not fit into the memory of the computers available for the user . we note that for typical sparse networks the parallel version will not run faster on common architectures than the serial version . we expect that for special networks , where the splitting step results in a large number of subnetworks with a negligible number of cliques in the overlapping regions , the parallel version can be faster than the serial one provided that enough computing resources are available , e.g. using gpus with a high bandwidth interface . such networks are not typical , therefore our current implementation is aimed mainly at handling very large networks . the size of the processable network is limited by the first splitting step and by the last merging step , since here the network must be stored either in memory or on disks . if the network does not fit into the memory it is possible to apply effective disk - based methods in these steps .

we have presented a parallel implementation of the cfinder algorithm . we have shown that due to the local nature of the underlying clique percolation method , the computation can be distributed among several computational units . the parallel version may solve large scale network clustering tasks where the lack of sufficient computing resources , e.g. the main memory of the available computer , would otherwise not allow the community structure to be found .

the project is supported by the european union and co - financed by the european social fund ( grant agreement no . tamop 4.2.1/b-09/1/kmr-2010 - 0003 ) and the national research and technological office ( nkth textrend ) .

h. papadakis , c. panagiotakis , p. fragopoulou , local community finding using synthetic coordinates , in _ future information technology _ , eds . j. j. park , l. t. yang and c. lee , _ communications in computer and information science _ , * 185 * ( 2011 ) 915 . t. heimo , j. saramaki , j .- p . onnela and k. kaski , spectral and network methods in the analysis of correlation matrices of stock returns , _ physica a - statistical mechanics and its applications _ , * 383 * ( 2007 ) 147 - 151 .
the amount of available data about complex systems is increasing every year ; measurements of larger and larger systems are collected and recorded . a natural representation of such data is given by networks , whose size follows the size of the original system . the current trend of multiple cores in computing infrastructures calls for a parallel reimplementation of earlier methods . here we present the grid version of cfinder , which can locate overlapping communities in directed , weighted or undirected networks based on the clique percolation method ( cpm ) . we show that the computation of the communities can be distributed among several cpus or computers . although switching to the parallel version does not necessarily lead to a gain in computing time , it definitely makes the community structure of extremely large networks accessible . electronic version of an article published as parallel processing letters ( ppl ) 22:(1 ) p. 1240001 ( 2012 ) , http://dx.doi.org/10.1142/s0129626412400014 , copyright world scientific publishing company , http://www.worldscinet.com/ppl/22/2201/s0129626412400014.html
over the past decade mathematical models have continued to provide insights into complex systems .much of this work was grounded in the exploration of simple models from many areas , including ecology and epidemiology .as the theoretical tools for the analysis of networks have been refined , the focus of current work has shifted to extend the range of systems that can be treated as networks . while earlier network models focused on structural properties of static systems , recent advances focus on dynamic and properties of time varying networks .similarly the information encoded in networks has evolved from simple , often binary , variables to structures in which nodes and links can assume complex states in what are often described as multi - layer or multiplex networks .theoretical progress in the analysis of complex networks both enables and requires a progression towards more complex models .for example one extension to simple epidemic models are coinfection models , which describe the simultaneous and interdependent spreading of two diseases .while these models have received some attention , very similar challenges encountered in ecology remain largely unexplored . in ecologyit is widely recognized that our environment is not evenly distributed .typical landscapes are broken into distinct habitat patches .examples include patches of forest remaining in an agricultural landscape , islands , in an archipelago , systems of lakes , or parks in a city , .a network representation of the environment can be constructed by using nodes to represent the discrete patches of habitat with links between pairs of patches between which a species can spread .another type of network that is considered in ecology are food webs , the networks of who eats who . in a food webthe nodes represent populations of different species and directed links represent trophic ( i.e. predator - prey ) interactions . an emerging topic in the ecological literatureare so - called meta - foodwebs , which combine trophic and geographical complexity .meta - foodwebs describe the interactions between several different food webs in space and one particular class of meta - foodwebs is described by the colonization - extinction model proposed by pillai .meta - foodwebs can be described as networks of networks or multilayer networks . to connect the ecological system to physical terminologyone can regard the system from two different perspectives .the first of these focusses on the food web : we can say that meta - foodwebs are collections of food - webs that exist in different spatial patches and interact through the dispersal of individuals between patches . seen from this perspective , the food webs are the layers of the network , predator - prey interactions are within - layer interactions , whereas dispersal of individuals between patches constitutes between layer interactions. we can describe the same class of systems in a different way by saying that meta - foodwebs are geographical networks of species dispersal that interact through feeding interactions .now the network layers are formed by the geographical network , dispersal between patches is a within - layer interaction , whereas the feeding interactions constitute between layer interactions .the former perspective is useful when the food web is more complex than the geographical network , whereas the latter is useful when the geographical network is more complex than the food web . 
in ecology plenty of examples for both cases are encountered , and thus it may be useful to apply the elegant notation proposed in . however , in the present paper we focus on the case where the food web is very simple ( a linear chain ) , whereas the geographical network is both larger and more complex in structure ; we therefore employ the latter perspective . we thus regard the spatial dispersal networks as network layers , which interact through feeding interactions . pillai s meta - foodweb model has been studied in the ecological literature using agent - based simulations . the central ecological question driving this work is how landscape structure impacts food web structure . it was shown that higher connectivity in the geographical networks generally benefits the persistence of species on the landscape level . however , in very strongly connected systems , specialist species tend to outcompete generalist species , such that the most complex food webs are found at intermediate geographical connectivity . a theoretical approach for the computation of persistence thresholds in colonization - extinction models has been proposed in . this work used the so called _ homogeneous approximation _ , in which all patches are considered to have the same number of links ( degree ) . however , previous work on epidemics has demonstrated that spreading processes can be understood well by utilising the power of generating functions . in particular , such approaches can be used to reveal the degree distribution ( the probability distribution of links per node ) of nodes in a particular state . by using generating functions we can find the degree distribution of the network of geographical patches experienced by each species . these degree distributions differ from the degree distribution of the underlying geographical network , because some patches may be inhospitable to a given species due to its interactions with other species . for instance the absence of suitable prey can make a patch inhospitable to a given predator and thus removes the node from the network accessible to that predator . by revealing the degree distributions experienced by the different species , generating functions hold the promise of enabling a deeper understanding of the impact of landscape connectivity on ecological dynamics . here we present a generating function approach to analyse how the degree distribution of the patch network affects food chains , in which each species has at most one predator and one prey . we find that properties such as the shape and mean of the patch degree distribution affect the occupation probability of all species in the food chain , and the viability of survival for the species at the top of the food chain . beyond the ecological insights , this paper highlights meta - foodchains and meta - foodwebs as promising example systems for the future refinement of tools of statistical physics . we study a version of the model proposed by pillai et al . in . a set of species numbered to inhabits an environment comprising a set of discrete patches . this environment is represented by a network , where nodes represent the patches and links represent the possible routes of dispersal between patches . the model accounts for the presence , or absence , of each species in the food chain at each patch . the populations of different species interact with each other via trophic ( feeding ) relationships . species is a so - called primary producer , a species that can persist on abiotic resources .
in the model this species can colonize any patch independently of the presence of other species .all other species , , are specialist consumers who each prey upon a single species , .therefore , a species can only inhabit a patch that species also inhabits . the system varies dynamically due to random extinctions of species at individual patches and colonisations of patches by new species .when established at a patch , species is subject to random extinction at rate .the interaction between species means that when species goes extinct on a patch , all species must also go extinct at that patch , because they now lack an essential resource farther down the chain .this indirect extinction means that a species will go extinct on a particular patch at an effective rate equal to the sum of its direct extinction rate , , and the extinction rates of all species below it in the food chain .when established at a patch species may also colonise neighbouring patches at a rate , , however due to the trophic interactions a species can only colonise a patch at which its prey , species , is already established . while indirect extinction is a process between nodes of a food chain at a single patch , colonisation is a process between different food webs . despite this difference bothimply that for the dynamics of a given species only the subnetwork of patches where species is established is relevant .we call this network the effective network for species . over time the effective network for any species changes due to colonization and extinction events of species .these events affect s effective network by changing its size , the number of nodes ; its connectivity , the number of links per node ; and its degree distribution , the probability distribution of the number of links per node . herewe present a method for finding the effective network of all species and hence the pattern of dispersal for all species in the food chain .to study the dispersal of species we describe the system on the level of the configuration model , where a given network is characterised by its degree distribution . from the degree distribution of the patch network we consider the dispersal of species to find the degree distribution of the effective network for species via two steps ( fig .[ dischange ] ) : 1 . finding the expected degree distribution of patches inwhich species is established .2 . removing links from this distribution which lead to patches in which species is not established , and which are hence inaccessible to species .similarly we use the same two steps to consider the dispersal of species over the degree distribution of its effective network to find the effective network of species . by repeating this process through successive levelswe find the degree distribution of all effective networks , and thus the properties of dispersal for all species to .example of the two step process for finding the degree distribution of the effective network of species .we start with the patch network and its degree distribution , , ( a ) .the first step identifies the nodes colonised by the species , , ( b ) .the second step removes links from colonised nodes to uncolonised nodes from the distribution giving the effective network , , ( c ) .this is the network upon which species disperses . 
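to make the dynamics concrete, the following minimal sketch simulates the colonisation-extinction process just described with a small discrete time step; the function and variable names are ours, and the encoding of a patch state as the number of consecutively occupied trophic levels works only because the food web is a chain:

```python
import random

def simulate_chain(adj, n_species, col, ext, dt, steps, seed=0):
    """adj maps each patch to the set of neighbouring patches ; col[i] and
    ext[i] are the colonisation and extinction rates of species i
    (0-indexed , species 0 is the primary producer) .  occ[p] is the number
    of consecutively occupied levels at patch p (0 means empty)."""
    rng = random.Random(seed)
    occ = {p: n_species for p in adj}   # illustrative start : all patches fully occupied
    for _ in range(steps):
        new = dict(occ)
        for p in adj:
            # direct extinction of species i also removes every species above it
            for i in range(occ[p]):
                if rng.random() < ext[i] * dt:
                    new[p] = min(new[p], i)
            # colonisation along each link ; species i can establish at p only
            # if species i-1 is already established there
            for q in adj[p]:
                for i in range(occ[q]):
                    if new[p] == i and rng.random() < col[i] * dt:
                        new[p] = i + 1
        occ = new
    return occ
```

the fraction of patches on which species i persists is then sum(1 for v in occ.values() if v > i) / len(occ), which is the abundance measure compared across poisson and scale-free patch networks below.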
] to find the expected degree distribution of the patches inhabited by species we find the degree - dependent probability that species inhabits a patch with degree .the probability that a patch has degree is .furthermore , we define to be the probability that a patch has both degree and is inhabited by species ( fig .[ dischange ] ) . in contrasts to , is a dynamical variable which changes in time due to colonization and extinction events . for a sufficieantly large systemthese dynamics can be captured by the differential equation where the first term captures random extinctions and the second term captures the effect of colonisations . in the colonisation term is the probability that a link with an empty patch at one end has a colonised patch at the other . if we assume no correlation between the state of neighbouring patches then .this overestimates the potential for colonisation and it is shown in that for a significant parameter region a better approximation can be obtained by correcting for backtracking , which yields .systems of the form of eq .( [ dchi ] ) typically either approach a steady state where for all , such that the species goes extinct , or a nontrivial steady state in which the population survives . to compute this nontrivial steady statewe must solve the system of differential equations for all patch degrees .here we follow the approach of and use the method of generating functions to transform the system of equations into a single partial differential equation , which can then be solved for the desired steady state .we encode the degree distribution of the patches by the probability generating function , and the dynamically chaining probability generating function for the colonised patches .we can now write which with eq .( [ dchi ] ) yields where we used the dash to indicate the derivative with respect to .these derivatives appear as we use the common trick of shifting the summation index by moving factors of outside of sums to create expressions that can be written as , or their derivatives , . setting the left hand side of eq .( [ cpde ] ) to zero we find the steady state condition where we have let /[p'(1)-c(1)] ] ,the probability of a link which has a colonised patch at one end having an empty patch at the other is \bigg[1-\frac{c'(1)}{p'(1)}\bigg]}{\bigg[p'(1)-c(1)\bigg]\frac{c'(1)}{p'(1)}}\\ & = \frac{\bigg[c'(1)-c(1)\bigg]\bigg[\frac{p'(1)}{c'(1)}-1\bigg]}{p'(1)-c(1)}.\end{aligned}\ ] ] we define , a function that generates the distribution and , ie .the distribution of whether a link from a colonised node ends at an inhabited node . to find consider each link in to exist with probability .therefore and so as and eq . ( [ intg1 ] ) and eq .( [ alpha1 ] ) can be solved together to find the degree distribution of the effective network for species directly from the degree distribution of the patch network .we now have all the tools we need to find the effective network of all species .species disperses on its effective network , with the relative extinction by colonisation rate .hence the degree distribution of the effective network of species is given by with we find that there is good agreement between theoretical results from eq .[ gi ] and eq .[ alphai ] with results and simulations for the degree distributions of effective networks for species at various trophic levels for patch networks with both poisson ( fig .[ erexample ] ) and scale - free ( fig . 
[ baexample ] ) degree distributions . the peak of the distribution moves left for successive effective networks , indicating that the preferential inhabitation of high degree nodes is counteracted by a larger effect of the removal of links to empty nodes . therefore the mean degree of the distribution decreases for the effective network of successive levels and thus dispersal is more difficult for species higher in the food chain . for the poisson patch degree distribution we note that the excess mean degree decreases faster than the mean degree and therefore the effective networks do not have poisson degree distributions . the theoretical abundance of species is given by , which we find by setting in eq . [ gi ] . comparing the abundance of species at different levels for a particular value of , we find that the patch network which gives the largest abundance of species low in the food chain is not necessarily the one that gives the greater abundance for species high in the food chain , fig . [ levelsfig ] . the poisson network has the highest abundance of species low in the food chain , however the scale - free network has greater abundance of species high in the food chain . further , the scale - free network can support more species than the poisson network . for infinite scale - free networks with , it is known that there is no epidemic limit and a species will survive for all values of . this is due to the infinite variance of the degree distribution . we find that the variance of the effective degree distribution of species is , from which we see that if the patch degree distribution has infinite variance , as is infinite , then the variance of all effective degree distributions will also be infinite . therefore we expect that there is no epidemic limit for all species in a food chain on an infinite scale - free network . ( colour online ) colonised abundance for many trophic levels of a food chain . shown are the fraction of patches inhabited by each species in food chains on patch networks with poisson degree distributions ( blue ) and scale - free degree distributions ( green ) . all species have and both patch networks have . for species in the low trophic levels the networks with poisson degree distribution have greater abundance , but for species higher in the food chain it is the scale - free distribution that has the greater abundance . further , the scale - free network can support more species than the poisson network . we have presented a mathematical approach to the degree distribution of the network accessible to species at various levels of a food chain . for finite patch networks we found that the network accessible to a species both shrinks and becomes harder to spread over at successive levels . hence the maximum effective extinction rate with which a species can survive decreases as we consider species higher in the food chain .
the analytical solutions also indicate a maximum abundance for a species , , which is dependent on the effective extinction rate , , and the mean degree of the network it disperses over , . our results indicate that there is no degree distribution that is advantageous at all values of for a particular mean degree . importantly , we found that while very heterogeneous , scale - free , distributions come very close to the theoretical optimum at high , less heterogeneous distributions lead to higher abundances when is lower . one implication of the results is that species close to the bottom of the food chain can sometimes profit from homogeneous degree distributions . however , species higher up in the same food chain may nevertheless be more abundant if the underlying topology is more heterogeneous . that means a finite scale - free distribution is likely to allow more species to survive than a poisson distribution , even when the latter allows a greater abundance of species low in the food chain . in land use planning one is often forced to decide which patches of forest to conserve , or where to place green spaces in a city . hence the planner has some control over the degree distribution of the patch networks created by these processes . if seeking to maximise the abundance of a particular species , our results show that a good estimation of is required to inform any decisions . in addition , maximising the abundance of the lowest species in the food chain may not maximise the number of species that survive . this is of particular note in the real world , where small populations may be less resilient to external shocks . the approach we use provides the tools required to analyse the behaviour of food chains on patch networks with varying degree distributions . investigations into degree distributions other than those presented here will provide more information about the implications of landscape distributions on the resident species . in this paper we have focused solely on food chains , an important class of food webs that is the focus of many ecological studies . however , many more complex food web topologies also play a significant role in ecology . in contrast to chains , these webs also contain inter - species competition for prey , and predation on multiple prey species . this paper has established two operations in the algebra of pillai - style colonization extinction models : the pruning of links to patches where an essential resource is missing , and the subsequent renormalization of the degree distribution . to accommodate the additional interactions that occur in food webs an additional operation is necessary : the pruning of patches where a superior competitor is established . this operation is similar to the pruning of patches studied here and , although the notation does get more cumbersome , no fundamental obstacles should arise in this step . thus the approach proposed here can be adapted to deal with more complex webs . the treatment presented here was relatively easy because we were able to compute the degree distributions iteratively , from the bottom layer up . the same is also true for some more complex food web topologies as long as there is a clear hierarchy in the strength of competitors . some ecological interactions break these hierarchies . a simple example is neutral competition , where two competitors can defend a patch in which they are established against the other species .
a mathematically more interesting scenario ( which is fortunately rare in ecology ) is the case where predators can drive their own prey to extinction . in both cases interdependenciesarise such that systems of generating functions have to be solved simultaneously .dealing with these non - iterative cases is mathematically more difficult , but could also lead to much richer dynamics .they thus present a promising target for future work .s. boccaletti , g. bianconi , r. criado , c. del genio , j. gmez - gardees , m. romance , i. sendia - nadal , z. wang , and m. zanin , `` the structure and dynamics of multilayer networks , '' _ physics reports _ , vol .544 , pp .1122 , jul 2014 .m. de domenico , a. sol - ribalta , e. cozzo , m. kivel , y. moreno , m. a. porter , s. gmez , and a. arenas , `` mathematical formulation of multilayer networks , '' _ physical review x _ , vol . 3 , p. 041022, dec 2013 .ahn , h. jeong , n. masuda , and j. d. noh , `` epidemic dynamics of two species of interacting particles on scale - free networks ., '' _ physical review .e , statistical , nonlinear , and soft matter physics _ , vol .74 , p. 066113, dec 2006 .y. zhu , d. li , and f. zhang , `` modeling the sis immunization epidemic on finite size of ba network , '' in _ 2013 international conference on communications , circuits and systems ( icccas ) _ , vol . 2 , pp .98102 , ieee , nov 2013 .p. pillai , a. gonzalez , and m. loreau , `` metacommunity theory explains the emergence of food web complexity ., '' _ proceedings of the national academy of sciences of the united states of america _ , vol . 108 , pp . 192938 , nov 2011 .
notable recent works have focused on the multi - layer properties of coevolving diseases . we point out that very similar systems play an important role in population ecology . specifically we study a meta food - web model that was recently proposed by pillai et al . this model describes a network of species connected by feeding interactions , which spread over a network of spatial patches . focusing on the essential case , where the network of feeding interactions is a chain , we develop an analytical approach for the computation of the degree distributions of colonized spatial patches for the different species in the chain . this framework allows us to address ecologically relevant questions . considering configuration model ensembles of spatial networks , we find that there is an upper bound for the fraction of patches that a given species can occupy , which depends only on the networks mean degree . for a given mean degree there is then an optimal degree distribution that comes closest to the upper bound . notably scale - free degree distributions perform worse than more homogeneous degree distributions if the mean degree is sufficiently high . because species experience the underlying network differently the optimal degree distribution for one particular species is generally not the optimal distribution for the other species in the same food web . these results are of interest for conservation ecology , where , for instance , the task of selecting areas of old - growth forest to preserve in an agricultural landscape , amounts to the design of a patch network .
contact between spheres has intrigued researchers for more than a century , and still no simple closed - form analytical solution exists .one of the first and most important developments in the field , due to heinrich hertz in 1881 , is an approximate solution for the normal , frictionless contact of linear elastic spheres .the major assumption in hertz s model is that the contact area was small compared to the radii of curvature , which has served as a useful engineering approximation in many applications .ever since then many have tried to relax this assumption while maintaining a compact , workable solution . the green s function for symmetric loading on a sphere provides the means to find the exact response for arbitrary loading , a first step towards improving on hertz classic solution .existing forms of the green s function are however not suitable for fast and ready computation , either due to slow convergence of series or analytically cumbersome expressions .the goal of the present paper is to provide an alternative form of the green s function suitable for fast computation of solutions under arbitrary loading .sternberg and rosenthal present an in - depth study of the nature of the singularities on elastic sphere loaded by two opposing concentrated point forces .as expected , the dominant inverse square singularity in the stress components can be removed by subtraction of an appropriate multiple of boussinesq s solution for a point load at the surface of a half space .sternberg and rosenthal showed that the quickly convergent residual field retains a weaker singularity of logarithmic form , a result that is also evident in the solution developed here .the singular solutions obtained by sternberg and rosenthal were extended to arbitrarily oriented point forces by guerrero et al .our interest here is in developing an analogous separation of the green s function ( circular ring loading ) . in this regard ,a relatively compact form of the green s function for the sphere was derived by bondareva who used it to solve the problem of the weighted sphere . in , bondareva formulates an example with a sphere contacting a rigid surface .this has been used to solve the rebound of a sphere from a surface .bondareva s solution starts with the known series expansion for the solution of the elasticity problem of a sphere , and replaces it with finite integrals of known functions . in this paperwe introduce an alternative form for the green s function for a sphere , comprised of analytical functions and a quickly convergent series .no direct integration is required .the methodology for determining the analytical functions is motivated by the simple example of a point load on a sphere , for which we derive a solution similar in spirit to that of , but using a fundamentally different approach : partial summation of infinite series as compared with a functional _ ansatz_. the present methods allows us to readily generalize the point load solution to arbitrary symmetric normal loading .a typical contact problem involves solving a complicated integral equation for the contact stress once a displacement is specified .instead , we will use the derived green s function in the direct sense , solving for the displacements for a given load .this is used to check the validity of hertz contact theory through the assumed form of the stress distribution .the outline of the paper is as follows .the known series solution for symmetric loading on a sphere is reviewed in [ sec2 ] . 
the proposed method for simplificationis first illustrated in [ sec3 ] by deriving a quickly convergent form of the solution for a point force .the green s function for symmetric loading is then developed in [ sec4 ] , and is illustrated by application to different loadings .conclusions are given in consider a solid sphere of radius , with surface , , in spherical polar coordinates ( , , ) .the sphere is linear elastic with shear modulus and poisson s ratio .the surface is subject to tractions using the known properties of legendre functions , see .eqs . , allows us to express the normal stress as where the legendre series coefficients are the displacements and tractions for the sphere can also be represented in series form ( * ? ? ?* eq . 5 ) \ , r^n\ , p_n(\cos \theta ) , \\ 2 g u_\theta & = \sum\limits_{n=1}^\infty \big [ n(n+5 - 4\nu)a_n r + ( n+1 ) b_nr^{-1 } \big ] \ , r^n\,\frac { p_n^{1}(\cos \theta ) } { n(n+1 ) } , \\ \sigma_{rr } & = \sum\limits_{n=0}^\infty \big [ [ n(n-1 ) -2(1+\nu ) ] a_n + ( n-1 ) b_nr^{-2 } \big ] \ , r^n\,p_n(\cos \theta ) , \\ \sigma_{r\theta } & = \sum\limits_{n=1}^\infty \big [ n [ ( n-1)(n+3 ) + 2(1+\nu ) ] a_n + ( n^2 - 1 ) b_nr^{-2 } \big ] \ , r^n\ , \frac { p_n^{1}(\cos \theta ) } { n(n+1 ) } , \end{aligned}\ ] ] with , and corresponds to a rigid body translation via .it follows from that } , \ \n \ge 2 , \\ & b_n = \frac{-n}{n^2 - 1}\ , [ ( n-1)(n+3 ) + 2(1+\nu ) ] \ , r^2 a_n , \ \\end{aligned}\ ] ] thus , noting that , we have [ 6 ] bondareva , using a different representation , replaced the infinite summation of legendre functions by a combination of closed form expressions and an integral , each dependent on .the integral term contains a logarithmic singularity which , together with the complex - valued nature of its coefficients , makes its evaluation indirect .here we propose an alternative form for the green s function in a combination of closed - form expressions and a standard summation of legendre functions that is , by design , quickly convergent .in order to illustrate the method , we first consider the simpler problem of the point force of magnitude applied at defined by where we have used the property .the difficulty with the infinite summations is two fold : first , it is not a suitable form to reproduce the singular nature of the green s function ; secondly , it does not converge quickly as a function of the truncated value for .the idea here is to replace the summation by closed form expressions plus a summation that is both regular and quickly convergent .the fundamental idea behind the present method is to write , of eqs . in the form [ 24 ] where the functions and , are closed - form expressions , in this case , [ 232 ] and , are regular functions of defined by quickly convergent series in , the coefficients , are defined so that as .this criterion uniquely provides the constants , as solutions of a system of linear equations .similarly , , are uniquely defined by as .here we consider the specific case of .other values of could be treated in the same manner ; however , we will show that is adequate for the purpose of improving convergence . in this case. 
becomes [ 242 ] \notag \\ = & \frac{-f } { 8\pi gr } \ , \big [ \sum\limits_{n=2}^\infty \big ( 4(1-\nu ) + \frac{a_0}{n+1 } + \frac{a_1}{n } + \frac{a_2}{n-1 } + c_n \big ) p_n(\theta ) + c_0 p_0(\theta ) + c_1 p_1(\theta ) \notag \\ & + 4(1-\nu)\big(p_0 ( \theta ) + p_1(\theta ) \big ) + a_0\big(p_0 ( \theta ) + \frac{1}{2}p_1(\theta)\big ) + a_1 p_1 ( \theta ) % \notag \\ & \big ] , \\u_\theta ( r,\theta ) = & \frac{-f } { 8\pi gr}\ , \frac{d}{d\theta } \big [ b_0 s_0(\theta ) + b_1 s_1(\theta ) + b_2 s_2(\theta ) + g(\theta ) \big ] \notag \\ = & \frac{-f } { 8\pi gr } \ , \frac{d}{d\theta } \big [ \sum\limits_{n=2}^\infty \big ( \frac{b_0}{n+1 } + \frac{b_1}{n } + \frac{b_2}{n-1 } + d_n\big ) p_n(\theta ) \notag \\ & + b_0 \big(p_0 ( \theta ) + \frac{1}{2}p_1(\theta)\big ) + b_1 p_1(\theta ) + d_0 p_0(\theta ) + d_1 p_1(\theta ) \big ] , \end{aligned}\ ] ] where the associated three functions , are ( see the appendix ) [ 22 ] equations , and indicate the expected boussinesq - like singularity as well as the weaker singularity first described by sternberg and rosenthal .the logarithmic singularities in , can be compared to the potential functions ] , and ] .the integrands are smooth and bounded functions of for , which is always the case if the displacements are evaluated at points outside the region of the loading .however , for points under the load , the integration of involves a logarithmic singularity at .a simple means of dealing with this is described next .[ htb ] under a hertzian - type load distributed up to . left to right : n=4 , 10 , 100.,width=624 ] [ htb ] under a hertzian - type load distributed up to .left to right : n=4 , 10 , 100.,width=624 ] [ htb ] and given in equation with for a constant distributed load given by .the load was distributed up to .,width=624 ] the function exhibits a logarithmic singularity by virtue of the asymptotic behavior the integral in is evaluated by rewriting eq . in the equivalent form \sin\phi d \phi \notag \\ & + 4(1-\nu ) \sigma ( \theta ) \int_0^{\phi_0 } \hat s ( \theta , \phi ) \sin\phi d \phi \big\ } , \\ 0 \le \theta \le \phi_0,\end{aligned}\ ] ] where the angle defines the domain of the loading , which is normally for contact problems , much less that .the function has the same singularity as and has a relatively simple integral .we choose the integrand of the first integral in is now a smoothly varying function with no singularity , and the second integral is , explicitly , in summary , the solution for with the singularity removed has the following form ( see also eq . for and eq . for ) \sin\phi d \phi + h(\theta ) \big\ } , \\ \hat h_r ( \theta , \phi ) = & 4(1-\nu ) \hat s ( \theta , \phi ) , \\h(\theta ) = & 4(1-\nu ) \bigg [ \frac{g(\cos\tfrac\theta 2 , 1 ) } { \sin\frac\theta 2 } + \frac { g(\sin\tfrac\theta 2 , \sin\tfrac{\phi_0}{2 } ) } { \cos\frac\theta 2 } \bigg]\sigma ( \theta ) . \end{aligned}\ ] ] [ h ] and given in equation with for a hertzian - type distributed load given by .the load was distributed up to .,width=624 ] to check the convergence of the expressions in we will consider a symmetric constant distributed load of the form and a symmetric hertzian - type load of the form both loads have been normalized such that their resultant forces are for all ranges of the angle , which is equivalent to the point force given by equation .the solution on the interval is obtained using and for we apply equations directly .firstly , the convergence of the proposed solution ( eq . 
)is compared to the series solution for a hertzian - type load in figures [ fighsr ] and [ fighst ] .these curves indicate that the convergence of the radial displacement in the proposed solution is substantially superior to the series solution .figures [ fig5 ] and [ fig6 ] show the convergence of the displacements with the truncation limit under both types of loading .subsequently , figures [ fig7 ] and [ fig8 ] demonstrate that in the limit as the displacements due to the distributed loads approach those obtained for the point load .moreover , the normalized radial displacement , , is almost indistinguishable from the point load for a as large as 10 degrees .poisson s ratio of =0.4 has been used throughout .we would also like to investigate how the displacement due to a hertzian - type load compares with that from the hertzian contact theory .the dimensionless vertical displacement that we obtain by the methods outlined in this paper has the form where is the physical vertical displacement .[ h ] = 300 .the loads were distributed over =(red ) , (green ) , (blue).,width=624 ] [ h ] = 300 .the loads were distributed over =(red ) , (green ) , (blue).,width=624 ] the hertz contact theory is formulated in terms of the radius of the contact area , the displacements directly under the load , and the magnitude of the applied load . we need to reformulate these quantities in terms of the contact angle .the radius of the contact area is simply the maximum vertical displacement is related to in the following manner where eq .was used and the last equality arises from the fact that the hertzian solution presented here is for the contact of two spheres hence we need to half the total displacement .furthermore , hertz contact theory tells us that the resultant force is proportional to , or more accurately this allows to rewrite equation for the dimensionless vertical displacement via hertz contact theory , denoted as .substituting equations and into yields [ h ] as obtained by the methods in this paper for a hertzian - type load and that obtained from hertz contact theory defined in , as a function of the contact angle .,width=432 ] equation gives a way to compare the presented solution for the hertzian - type load to the solution from hertz contact theory .the numerical results are presented in figure [ fig9 ] , which compares the vertical displacements ( eq . with eq . ) as a function of the contact angle .note that along with and we also plot , which according to hertz theory should be equal to .the normalized difference between the displacements is shown in figure [ fig10 ] .as expected , the solutions are close for small contact areas and diverge as this area increases .the same can be said about the relationship between the displacements and . comparing the maximum displacements with does not tell us anything about the shape of the contact area for a sphere loaded by a hertzian - type load .hertz contact theory states that the contact area between two identical spheres is flat , and thus we can describe it using .therefore , we define a function to determine how close is our calculated displacement to the hertzian solution as where is a constant determined by enforcing , which results in the function is plotted in figure [ fig11 ] for several angles .these results show that the contact area is flat for small contact angles , but gains curvature for larger angles . 
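for reference, the textbook hertz relations used in this comparison can be evaluated directly; the sketch below uses the standard formulas for frictionless contact of two identical linear elastic spheres, with our own function names, and its normalisation may differ from the paper's angle-based one by the factors of two discussed above:

```python
import math

def hertz_identical_spheres(force, radius, youngs_modulus, poisson):
    """standard hertz results for two identical spheres : contact radius a ,
    total approach delta of the two centres , and peak pressure p0 ."""
    e_star = youngs_modulus / (2.0 * (1.0 - poisson ** 2))   # effective modulus
    r_eff = radius / 2.0                                     # effective radius
    a = (3.0 * force * r_eff / (4.0 * e_star)) ** (1.0 / 3.0)
    delta = a ** 2 / r_eff
    p0 = 3.0 * force / (2.0 * math.pi * a ** 2)
    return a, delta, p0

def force_for_contact_angle(phi0, radius, youngs_modulus, poisson):
    """invert a = radius * sin(phi0) and a**3 = 3 f r_eff / (4 e_star) to get
    the load corresponding to a prescribed contact angle phi0 (radians)."""
    e_star = youngs_modulus / (2.0 * (1.0 - poisson ** 2))
    a = radius * math.sin(phi0)
    return 4.0 * e_star * a ** 3 / (3.0 * (radius / 2.0))
```

these closed-form values supply the hertz-theory side of the comparison shown in the figures above.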
according to hertz theory , for small contact angles , the function behaves as a constant .the angles shown in figure [ fig11 ] are too large to see this behaviour , however , at the values are close with and .[ h ] as a function of the contact angle .,width=432 ]a compact green s function for a sphere is presented which uses the fundamental idea of expressing a slowly convergent series with analytical functions and a quickly convergent series .the increased speed of convergence is demonstrated for the point force solution , which is also shown to be consistent with the more general distributed loading in the limit as the contact angle approaches zero . since the general green s function contains elliptical integrals , an easy method for dealing with the singularity in the integrand is presented . comparing the exact displacement due to a hertzian - type distributed load to the displacement given by hertz contact theorywe conclude that the hertz contact theory gives accurate results for contact angles up to about , with a steadily increasing error . for larger contact angles ,hertz theory overestimates the displacements and can not account for the shape of the contact area .this is to say that the stress distribution assumed in hertz theory results in a curved contact surface for larger contact angles .each curve corresponds to a different contact angle ranging from to in increments.,width=432 ]the orthogonality and completeness relations for the legendre functions are [ 4 ] starting with the definition for , and using , , the well known generating function follows integrating the identity with respect to implies taking the limit as yields . of follows from a similar result ( * ? ? ?* eq . 5.10.1.4 ) , while of follows from the recurrence relation after dividing by and summing from to ( agrees with ( * ? ? ?* eq . 5.10.1.6 ) ) .the recurrence relation can be used to then find for , .series of products of legendre functions given by eqs .6.11.3.1 and 6.11.3.2 of for .equation can be derived by operating on both sides by the legendre differential operator , and using the eigenvalue property to arrive at ( for ) . at the same time, the constants in the right member of follow by considering the formula for , in which case the sum on the left can be found .equation gives by noting that .the following is a simple consequence of legendre s addition formula ( * ? ? ?* eq . 3.19 ) multiply both sides of by and sum , implies , using eq ., the identity ( * ? ? ?* eq . 5.10.2.1 ) for , require the derivatives with respect to of the functions defined in eq . .they are similarly , the analytical function used to find ( eq . ) in section [ sec4 ] is the derivative of is
a compact form for the static green s function for symmetric loading of an elastic sphere is derived . the expression captures the singularity in closed form using standard functions and quickly convergent series . applications to problems involving contact between elastic spheres are discussed . an exact solution for a point load on a sphere is presented and subsequently generalized for distributed loads . examples for constant and hertzian - type distributed loads are provided , where the latter is also compared to the hertz contact theory for identical spheres . the results show that the form of the loading assumed in hertz contact theory is valid for contact angles up to about 10 degrees . for larger angles , the actual displacement is smaller and the contact surface is no longer flat .
let be the set of symmetric matrices with real entries , and its subset of nonnegative matrices . for , we consider the partial differential equation \right\},\ ] ] where , and where the unknown is a family of probability density functions on .the spatially homogeneous landau ( or fokker - planck - landau ) equation corresponds , in dimension , to the case where for some , physically , one assumes that , for some ] and for all , where .all the terms make sense due to our conditions on , , .see villani for a similar formulation .to our knowledge , the first ( and only ) paper proving a rate of convergence for a numerical scheme to solve ( [ eqa ] ) is that of fontbona - gurin - mlard .their method relies on a stochastic particle system .the aim of this paper is to go further in this direction .let us thus recall briefly the method of , relying on the probabilistic interpretation of ( [ eqa ] ) developped by funaki , gurin .let and be lipschitz continuous functions , and let .a -valued process is said to solve if , and if for all , setting , here is a -valued white noise on , independent of , with independent coordinates , each of which having covariance measure ( see walsh ) .existence and uniqueness in law for have been proved in gurin .if furthermore and , then is a weak solution to ( [ eqa ] ) .the condition that and are lipschitz continuous is satisfied in the case of the landau equation for maxwell or pseudo - maxwell molecules ..2 cm in , one considers an exchangeable stochastic particle system , satisfying a s.d.e . driven by brownian motions .it is then shown that one may find a coupling between a solution to and such a particle system in such a way that } |x^{1,n}_t - x_t^1|^2\right ] \leq c_{t } n^{-2/(d+4)},\ ] ] under the condition that has a finite moment of order .the proof relies on a clever coupling between the the white noise and brownian motions .in particular , one has to assume that has a density for all , in order to guarantee the uniqueness of some optimal couplings . for , satisfying and , we introduce for each , , is a nonnegative symmetric matrix and thus admits an unique symmetric nonnegative square root ^{\frac{1}{2}} ] for all and if for all , setting , this equation is nonlinear in the sense that its coefficients involve the law of the solution . compared to ( [ sdew ] ) , equation ( [ sdeb ] ) is simpler , since it is driven by a finite - dimensional brownian motion , and since the nonlinearity does not involve the driving process .however , one may check that at least formally , solutions to ( [ sdew ] ) and ( [ sdeb ] ) have the same law .the link with ( [ eqa ] ) relies on a simple application of the it formula .[ lien ] let solve .assume that , and that .then is a weak solution to ( [ eqa ] ) ..2 cm the natural linearization of ( [ sdeb ] ) consists of considering particles solving here are i.i.d . with law .we thus use brownian motions .when linearizing ( [ sdew ] ) , one needs to use brownian motions , since the white noise is infinite dimensional . however , one may check that the solution to ( [ sden ] ) and the particle system built in have the same distribution ( provided in ( * ? ? 
?* equation ( 4 ) ) ) .the main result of this paper is the following .[ main ] assume that is lipschitz continuous , that is of class , with all its derivatives of order bounded , and that .\(i ) there is strong existence and uniqueness for : for any , there is an unique solution to .\(ii ) let be i.i.d .with law .there is an unique solution to ( [ sden ] ) .assume that , and consider the unique solution to .there is a constant depending only on such that } |x^{1,n}_t - x^1_t|^2\right ] \leq c_{t } \int_0^t \min\left ( n^{-1/2 } , n^{-1 } \sup_{x\in{{{{\mathbb{r}}}^d } } } ( 1+|a(x , p_t)^{-1}| ) \right ) dt \leq c_t n^{-1/2}.\ ] ] in the general case , we thus prove a rate of convergence in , which is faster than . if we have some information on the nondegeneracy of , then is smooth around , and we can get a better rate of convergence. assume for example that is uniformly elliptic ( which is unfortunately not the case of ( [ alandau ] ) , since for all ) .then , and we get a convergence rate in . in the case of the landau equation for true maxwell molecules , we obtain the following result . [ cormax ] consider the landau equation for maxwell molecules , where is given by ( [ alandau ] ) with and .then satisfy the assumptions of theorem [ main ] .let , and adopt the notation of theorem [ main]-(ii ) .\(i ) we have } |x^{1,n}_t - x^1_t|^2 ] \leq c_{t } n^{-1 } ( 1+\log n) ] .we finally consider the case of pseudo - maxwell molecules .[ corpm ] consider the landau equation for pseudo - maxwell molecules , where is given by ( [ alandau ] ) with , and .assume that has a bounded support .then satisfy the assumptions of theorem [ main]-(ii ) .assume furthermore that has a density with a finite entropy , and that is bounded below by a positive constant . with the notation of theorem [ main ], we have } |x^{1,n}_t - x^1_t|^2 ] \leq c_{t } n^{-1} ] .see villani for many informations on the wasserstein distance .our results are mainly based on the two following lemmas .[ ll ] for all , all , _ step 1 . _ for fixed , we consider the map defined by . then , is clearly uniformly bounded .lemma [ a1 ] ensures us that is uniformly bounded , so that ..2 cm _ step 2 ._ we now fix , and consider .we introduce a couple of random variables such that , , and ]. then = a(x,\nu) ] .furthermore , ) ] |\leq ||d^2a||_\infty { \mathbb{e}}[|x - y|^2]= c { { { \mathcal w}}}_2 ^ 2(\mu,\nu).\ ] ] lemma [ a1 ] ensures us that , so that ..2 cm _ step 3 ._ the growth estimate ( for ) follows from the lipschitz estimate , since , and since ..2 cm _ step 4 ._ the case of is much simpler . for , we introduce as in step 2. then |^2\leq c(|x - y|^2+{\mathbb{e}}[|x - y|]^2 ) \leq c(|x - y|^2+{{{\mathcal w}}}_2 ^ 2(\mu,\nu)) ] , whence ] for all .\(ii ) let solve ( [ sden ] ) . for all , \leq c_t |t - s| ] .furthermore , since and , we deduce that ] , whence the result by the gronwall lemma . 
.2 cm _ point ( ii ) ._ using the cauchy - scharz and doob inequalities , we see that for , \leq & c \int_s^t du { \mathbb{e}}\left[|{a^\frac{1}{2}}(x_u^{1,n},\frac{1}{n}\sum_{1}^n \delta_{x^{i , n}_u})|^2 \right ] + c_{t } \int_s^t du { \mathbb{e}}\left[|b(x_u^{1,n},\frac{1}{n}\sum_{1}^n \delta_{x^{i , n}_u})|^2 \right ] { \nonumber \\}\leq & c_t \int_s^t du { \mathbb{e}}\left [ 1 + |x^{1,n}_u|^2 + m_2\left(\frac{1}{n}\sum_{1}^n \delta_{x^{i , n}_u } \right)\right ] \leq c_t \int_s^t du { \mathbb{e}}\left [ 1 + |x^{1,n}_u|^2 \right].\end{aligned}\ ] ] we used lemma [ ll ] and that =\frac{1}{n}\sum_{1}^n { \mathbb{e}}[|x^{i , n}_u|^2 ] = { \mathbb{e}}[|x^{1,n}_u|^2 ] ] .the gronwall lemma allows us to conclude that }e[|x^{1,n}_t|^2 ] \leq c_t ] ._ of theorem [ main ] .we consider fixed ..2 cm _ point ( i ) ._ let ._ uniqueness ._ assume that we have two solutions to , and set , .using the cauchy - schwarz and doob inequalities , we obtain , for , } |x_s - y_s|^2\right]\leq & c_t { { \displaystyle}\int _ 0^t } { \mathbb{e}}[|{a^\frac{1}{2}}(x_s , p_s ) - { a^\frac{1}{2}}(y_s , q_s ) |^2 + |b(x_s , p_s ) - b(y_s , q_s ) |^2]ds { \nonumber \\}\leq & c_t { { \displaystyle}\int _ 0^t } { \mathbb{e}}\left [ |x_s - y_s|^2 + { { { \mathcal w}}}_2 ^ 2(p_s , q_s ) \right]ds \leq c_t { { \displaystyle}\int _ 0^t } { \mathbb{e}}\left [ |x_s - y_s|^2\right]ds.\end{aligned}\ ] ] we used lemma [ ll ] and the obvious inequality ] .thus there classically exists such that } |x_t^{n}-x_t|^2]=0 ] . passing to the limit in ( [ pic ] ), we see that solves ..2 cm _ point ( ii ) ._ first of all , the strong existence and uniqueness for ( [ sden ] ) follows from standard theory ( see e.g. stroock - varadhan ) , since for each , the maps and are lipschitz continuous ( use lemmas [ ll ] and [ a2 ] ) .we now consider i.i.d .with law , the solution to ( [ sden ] ) , and for each , the unique solution to . 
for each , let .due to the cauchy - schwarz and doob inequalities , for , }|x^{1,n}_s - x^1_s|^2\right ] \leq c_t { { \displaystyle}\int _ 0^t } ds { \mathbb{e}}\big[|{a^\frac{1}{2}}\left(x^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{x^{i , n}_s}\right ) - { a^\frac{1}{2}}(x^{1}_s , p_s)|^2{\nonumber \\}&\hskip6 cm + |b\left(x^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{x^{i , n}_s}\right ) - b(x^{1}_s , p_s)|^2 \big ] { \nonumber \\ } & \leq c_t { { \displaystyle}\int _ 0^t } ds \big ( { \mathbb{e}}\big[|{a^\frac{1}{2}}\left(x^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{x^{i , n}_s}\right ) - { a^\frac{1}{2}}\left(x^{1}_s,\frac{1}{n}\sum_1^n \delta_{x^{i}_s } \right)|^2 { \nonumber \\}&\hskip4 cm + |b\left(x^{1,n}_s,\frac{1}{n}\sum_1^n \delta_{x^{i , n}_s}\right ) - b\left(x^{1}_s,\frac{1}{n}\sum_1^n \delta_{x^{i}_s } \right)|^2 \big ] + \delta_n(s ) \big),\end{aligned}\ ] ] where {\nonumber \\}= : & \delta_n^1(s)+\delta_n^2(s).\end{aligned}\ ] ] using lemmas [ ll ] and [ a2 ] , we obtain , for , }|x^{1,n}_s - x^1_s|^2\right ] & \leq c_t \int_0^t ds \left ( { \mathbb{e}}\left[|x^{1,n}_s - x^1_s|^2 + { { { \mathcal w}}}^2_2\left(\frac{1}{n}\sum_1^n \delta_{x^{i , n}_s},\frac{1}{n}\sum_1^n\delta_{x^{i}_s}\right)\right ] + \delta_n(s ) \right){\nonumber \\}&\leq c_t \int_0^t ds { \mathbb{e}}\left[|x^{1,n}_s - x^1_s|^2 + \frac{1}{n}\sum_1^n |x^{i , n}_s - x^{i}_s|^2\right ] + c_t \int_0^t ds \delta_n(s ) { \nonumber \\ } & \leq c_t { { \displaystyle}\int _ 0^t } ds { \mathbb{e}}\left[|x^{1,n}_s - x^1_s|^2\right ] + c_t { { \displaystyle}\int _ 0^t } ds \delta_n(s)\end{aligned}\ ] ] by exchangeability .the gronwall lemma ensures us that }|x^{1,n}_s - x^1_s|^2\right ] \leq c_t \int_0^t ds \delta_n(s).\end{aligned}\ ] ] it remains to estimate .the random variables are i.i.d . with law .thus lemma [ nesti ] shows that for , due to lemma [ mom]-(i ) and since by assumption .next , we use lemma [ a1bis]-(i ) , the cauchy - schwarz inequality , and then lemma [ nesti ] : for , \leq c\left(\frac{1+m_4(p_s)}{n } \right){{^\frac{1}{2}}}\leq \frac{c_t}{\sqrt n}.\end{aligned}\ ] ] but one may also use lemma [ a1bis]-(ii ) instead of lemma [ a1bis]-(i ) , and this gives , for , {\nonumber \\}\leq & c \sup_x|a(x , p_s)^{-1}| \left(\frac{1+m_4(p_s)}{n } \right ) \leq \frac{c_t}{n}\sup_x|a(x , p_s)^{-1}|.\end{aligned}\ ] ] thus . inserting this into ( [ gron ] ) , we obtain ( [ obj1 ] ) ..2 cm _ of theorem [ totaldisc ] ._ using lemmas [ ll ] and [ a2 ] , we get as usual ( see ( [ tech ] ) ) , by exchangeability , }|x^{1,n}_s - x^{1,n , n}_s|^2 \right ] & \leq c_t{{\displaystyle}\int _ 0^t } { \mathbb{e}}\left[|x^{1,n}_s - x^{1,n , n}_{{\rho_n(s)}}|^2 + { { { \mathcal w}}}_2 ^ 2\left ( \frac{1}{n}\sum_1^n\delta_{x^{i , n}_s},\frac{1}{n}\sum_1^n\delta_{x^{i , n , n}_{{\rho_n(s ) } } } \right)\right ] ds { \nonumber \\}&\leq c_t{{\displaystyle}\int _ 0^t } { \mathbb{e}}\left[|x^{1,n}_s - x^{1,n , n}_{{\rho_n(s)}}|^2 + \frac{1}{n}\sum_1^n|x^{i , n}_s - x^{i , n , n}_{{\rho_n(s)}}|^2 \right ] ds{\nonumber \\}&\leq c_t{{\displaystyle}\int _ 0^t } { \mathbb{e}}\left[|x^{1,n}_s - x^{1,n , n}_{{\rho_n(s)}}|^2\right ] ds{\nonumber \\}&\leq c_t{{\displaystyle}\int _ 0^t } { \mathbb{e}}\left[|x^{1,n}_s - x^{1,n , n}_s|^2\right ] ds + c_t{{\displaystyle}\int _ 0^t } { \mathbb{e}}\left[|x^{1,n}_s - x^{1,n}_{{\rho_n(s)}}|^2\right ] ds.\end{aligned}\ ] ] using finally lemma [ mom]-(ii ) , and since , we deduce that \leq c_t / n ] . 
inserting this into ( [ obj1 ] ), we get }|x^1_t - x^{1,n}_t|^2]\leq c_t\int_0^{t } \min(n^{-1/2},n^{-1}+(nt)^{-1 } ) dt \leq c_t n^{-1 } ( 1+\log n) ] ..2 cm it remains to give the .2 cm _ of corollary [ corpm ] ._ recall here that and that , that is and that has a bounded support , so that has bounded derivatives of order , and is lipschitz continuous .we consider a weak solution to ( [ eqa ] ) .as previously , we classically have .furthermore , it is again classical and widely used that the entropy of is non - increasing , so that for all times , see villani .if we prove that there is such that for all , , , then we deduce that is uniformly bounded , so that the corollary follows from ( [ obj1 ] ) .observe that setting , we have , where is a lowerbound of .but it is shown in desvillettes - villani ( * ? ? ?* proposition 4 ) that for , there is a constant such that for any probability density function on such that and , . actually , they consider the case where for some , but one can check that their proof works without modification when .we finally obtain for all , , which concludes the proof .we consider in this section the spatially homogeneous landau equation for soft potentials , which writes ( [ eqa ] ) with for some , the coulomb case being the most interesting from a physical point of view .then we have .we restrict our study to the case where ] , with .consider then be defined as with replaced by ^\gamma ] ) , will give something like }|x^{1,n , n,{{\varepsilon}}}_t - x^{1,{{\varepsilon}}}_t|^2]\leq ( n^{-1}+n^{-1})\exp(c_t{{\varepsilon}}^{2\gamma}) ] ..2 cm one would thus get }{\mathbb{e}}[|x^{1,n , n,{{\varepsilon}}}_t - x^1_t|^2]\leq c_t \left({{\varepsilon}}^2 + \left(n^{-1}+n^{-1}\right)e^{c_t{{\varepsilon}}^{2\gamma}}\right) ] ) , at least if we replace by and if has a density .based on the well - posedness result of , we hope that , at least when ] and =m_2(p_0) ] , described in the previous section , in dimension .we use no cutoff procedure in the case .we consider the initial condition with density , where is the gaussian density with mean and variance , while . the momentum and energy of given by and .thus in large time , the solution should converge to the gaussian distribution with mean and covariance matrix , see villani .we use the particle system ( [ sdenn ] ) with particles , and steps per unit of time .easy considerations show that the computation of ( [ sdenn ] ) until time is essentially proportionnal to , and should not depend too much on . 
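to fix ideas, here is a minimal numpy sketch of one euler step of the simulated particle system for maxwell molecules (gamma = 0, dimension 3); the variable names and the tiny regularising ridge are ours, the coefficient formulas a(z) = |z|^2 i - z z^t and b(z) = -(d-1) z are the standard maxwell-molecule expressions spelled out here because the displayed equations did not survive extraction, and none of the optimisations behind the timings quoted below are attempted:

```python
import numpy as np

def landau_maxwell_step(x, dt, rng):
    """one euler step of the particle system : x has shape (n , 3) , the
    empirical coefficients a and b are averaged over all particles , and the
    diffusion matrix is taken as a cholesky factor of the averaged a , as in
    the text .  the direct implementation below costs o(n^2) per step ."""
    n, d = x.shape
    diff = x[:, None, :] - x[None, :, :]                   # z_ij = x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)                        # |z_ij|^2
    a_bar = (np.mean(sq, axis=1)[:, None, None] * np.eye(d)
             - np.einsum('ijk,ijl->ikl', diff, diff) / n)  # mean_j a(x_i - x_j)
    b_bar = -(d - 1) * np.mean(diff, axis=1)               # mean_j b(x_i - x_j)
    noise = rng.standard_normal((n, d))
    new_x = np.empty_like(x)
    for i in range(n):
        # a small ridge keeps the factorisation defined if a_bar[i] is nearly singular
        chol = np.linalg.cholesky(a_bar[i] + 1e-12 * np.eye(d))
        new_x[i] = x[i] + b_bar[i] * dt + np.sqrt(dt) * chol @ noise[i]
    return new_x

# usage sketch : n particles sampled from the initial law , k steps per unit time
# rng = np.random.default_rng(0) ; x = rng.standard_normal((1000, 3))
# for _ in range(100): x = landau_maxwell_step(x, 1.0 / 100, rng)
```

the choice of the cholesky factor in place of the symmetric square root is exactly the remark made below about replacing the square root by any matrix with the same product, which leaves the law of the scheme unchanged.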
however , it is consequently faster when for obvious computational reasons .let us also remark that the law of ( [ sdenn ] ) does not change when replacing by any such that .we thus use a cholesky decomposition , which is numerically quite fast .let us give an idea of the time needed to perform one time - step : with , it takes around seconds ( ) , s ( ) , s ( ) , and s ( ) .the computations are around times slower when ..2 cm now we alway use particles , and steps per unit of time .we draw , for different values of and , the histogram ( with sticks ) based on the second coordinates of .the plain curve is the expected asymptotic gaussian density , with mean and variance .the convergence to equilibrium seems to be slower and slower as is more and more negative .-0.3 cm -0.3 cm -0.3 cm -0.3 cm -0.3 cm -0.3 cm for too small values of ( say ) , the numerical results are not so convincing .this is not surprising , since the coefficients are more and more singular as becomes smaller and smaller .the following lemma can be found in stroock - varadhan ( when ) ( * ? ? ? *theorem 5.2.3 ) , or in villani ( * ? ? ?* theorem 1 ) ( for a more refined statement including all possible values of and ) .we start with point ( i ) .let .there is a unit vector such that , and we may assume that ( else , change the roles of ) .then , using that is nonnegative , we now prove ( ii ) .first observe that for all .as previously , whence .
We consider a class of nonlinear partial differential equations, including the spatially homogeneous Fokker-Planck-Landau equation for Maxwell (or pseudo-Maxwell) molecules. Continuing earlier work, we propose a probabilistic interpretation of such a PDE in terms of a nonlinear stochastic differential equation driven by a standard Brownian motion. We derive a numerical scheme, based on a system of particles driven by Brownian motions, and study its rate of convergence. We finally deal with the possible extension of our numerical scheme to the case of the Landau equation for soft potentials, and give some numerical results. *Mathematics Subject Classification (2000)*: 82C40, 60K35. *Keywords*: Fokker-Planck-Landau equation, plasma physics, stochastic particle systems.
the lisa ( laser interferometer space antenna ) sensitivity goal requires that the test masses ( nominally 2 kg ) are kept in free fall with an acceleration noise below 3 fm s at frequencies down to 0.1 mhz . in order to achieve this high purity of geodesic motion ,environmental noisy forces are screened by shielding the test masses in a drag - free satellite , with precision thrusters driven by a position sensor in order to minimize the test mass - satellite relative displacement . among the residual disturbance sources ,magnetic effects play a paramount role , as discussed in : the fluctuations of both magnetic field and magnetic field gradient will couple to the test mass remnant dipole moment and susceptibility , to produce force noise . in the limit of weakly magnetic materials ,the component of the force acting on the mass along the lisa interferometer axis can be expressed as similar relations holding for and . herewe describe the test mass magnetic proprieties by its permanent , remnant magnetic dipole moment and its magnetization induced through the ( small ) magnetic susceptibility by the externally applied magnetic field . for lisa, fluctuations of are expected to be dominated by the interplanetary magnetic field , while fluctuations of the magnetic field gradient are expected to be produced by sources on the satellite itself . in order to relax the consequent environmental requirements on the satellite ,it is crucial to obtain a test mass with very good magnetic proprieties ; for lisa the requirements are na m and .the test mass design calls for a 70%au-30%pt alloy , with composition chosen in order to achieve the lowest susceptibility , while retaining high density to minimize the displacement caused by a given force disturbance .the characterization of the full - sized test mass for lisa and its flight precursor ltp , particularly important given the very stringent requirements on the magnetic cleanliness level , is made difficult with the standard magnetic characterization techniques , such as squid magnetometers and susceptometers , by its relatively large dimensions . in this articlewe discuss an application of a high sensitivity torsion pendulum facility , developed for several testing of force disturbances for lisa , to the independent characterization of both the test mass remnant moment and susceptibility , assessing these proprieties directly through the forces and torques associated with the variation of magnetic fields .in the proposed experiment we will measure with high resolution the aupt test mass remnant moment and susceptibility , by exploiting the high torque sensitivity of a torsion pendulum where the test mass is included in a light , non magnetic holder and is suspended by a thin fiber , as sketched in .the magnetic proprieties of the lisa test mass will be measured by observing the torques acting on it when subjected to a controlled oscillating field produced by a suitable excitation coils configuration .assuming the addition of the external , residual dc magnetic field , the total applied field is then . 
in order to evaluate the effect of the applied magnetic field we need to account 3 components of and , which depend on position within the test mass .the test mass is then modelled by meshing it into a grid of small elements with volume , located at the positions , each small enough to assume locally uniform field .each element interacting with the externally applied magnetic field is characterized by a remnant magnetic moment ( with ) and a susceptibility .the force along the axis acting on each test mass element can then be expressed using as a combination of a dc term term at the modulation frequency \sin{\omega_m t}\ ] ] and a term at twice the modulation frequency torque around the vertical axis running through the test mass center of mass is given instead by the interaction of the horizontal ( , ) projections of the remnant magnetic moment , , with the total applied magnetic field: and can be written as a combination of a dc term and a term the component vanishes because the magnetic moment induced through the susceptibility is parallel to the applied field .after evaluating the forces acting on each test mass element and given by , together with the torque about the axis given by , we account also for the torque arising from the net force difference at different locations within the test mass due to the remnant and induced magnetic moment : and are the horizontal distances of the element from the torsional axis . for given external and applied fields ,the overall torque about the axis , which the torsion pendulum is sensitive to , is then evaluated by adding the contributions from all the mass elements : , the torque can be read as the superposition of terms at dc , and . the test mass remnant magneticmoment can be measured from the torque it feels by an _uniform _ oscillating magnetic field .the torque in becomes then and can be detected by coherent demodulation of the fiber angular deflection at the excitation frequency , through the torsion pendulum transfer function ^{-1}. 
\label{e : transfer : function}\ ] ] here is the pendulum resonance frequency , is the torsional spring constant , is the pendulum mechanical quality factor , given by the inverse of the fiber loss angle .this experiment is routinely performed as a preliminary step to evaluate the impact of the laboratory magnetic noise on high sensitivity torsion pendula performance .m , diameter m , supporting along its axis one lisa cubic test mass , with side mm , mass kg , enclosed in a cylindrical al holder as sketched in the inset .the pendulum thermal noise , conservatively assuming a quality factor , is combined with the angular noise of the commercial autocollimator currently installed on the torsion pendulum facility , converted into equivalent mechanical noise through the torsion pendulum transfer function ( [ e : transfer : function ] ) .this noise performance can be applied to the measurement of both the test mass remnant moment and susceptibility , described in and [ s:1m : chi ] , respectively ., title="fig:",height=226 ] + the expected torque sensitivity of the proposed torsion pendulum , assuming a thermal torque noise spectrum for a quality factor , is shown in .the maximum sensitivity is set by the combination of the optical readout and the pendulum thermal noise at 30 fn m around 10 mhz .applying an oscillating magnetic field with amplitude 10 t , and assuming 3 hour integration time , this noise performance yields an expected remnant moment resolution of pa , well below the lisa requirements .suitable flipping of the test mass within the holder will allow measurement of all components of the magnetic moment . to take advantage of this resolution , the remnant moment of the sample holder , without test mass , should be measured and subtracted ( typical values for torsion pendula are na ) .( a ) top view , with the test mass displaced with respect to the coil axis and to the coil plane .the red arrows represent the components of the applied magnetic field in the horizontal plane and .( b ) side view .the single coil configuration creates a magnetic field and field gradient which differ on opposite sides of the test mass , inducing then a net torque .the relative test mass - coil position has been chosen in order to maximize the susceptibility induced torque ( see ) .,title="fig:",height=226 ] + assuming homogeneous test mass , the component in can be written as a forcing magnetic field pattern which exerts different forces on opposing test mass sides , it is then possible to single out the effect induced by the susceptibility as a net torque signal at twice the excitation frequency directly proportional to the test mass average susceptibility .any effect from the remnant moment will not directly couple to this measurement and will appear only in . a realistic configuration , compatible with the dimensions of the facility vacuum vessel , is described in , and employs a relatively small coil , with symmetry axis placed away from the pendulum torsion axis . in the case of a test mass average susceptibility , the expected induced torque at twice the excitation frequency is shown in , and has a maximum of fn m. 
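As an illustration of the element-wise bookkeeping described above, the following Python sketch sums the magnetic torque about the vertical axis over a mesh of test-mass elements, combining the direct term from the horizontal components of the moments crossed with the field and the lever-arm term from the net forces on the elements. All names are ours, the field model `B_func`/`gradB_func` is a placeholder, and the force expression assumes the usual weak-magnetization form in which the total moment of an element is its remnant moment plus the induced moment chi*V*B/mu0; it is a sketch of the torque evaluation, not the authors' modelling code.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m / A]

def torque_z(positions, m_r, chi, dV, B_func, gradB_func):
    """Torque about the vertical (z) axis on a meshed, weakly magnetic test mass.

    positions : (M, 3) element centers, relative to the torsion (fiber) axis [m]
    m_r       : (M, 3) remnant dipole moments of the elements [A m^2]
    chi       : magnetic susceptibility (dimensionless, assumed uniform)
    dV        : element volume [m^3]
    B_func    : r -> B(r), magnetic field [T]
    gradB_func: r -> 3x3 matrix G with G[i, j] = dB_i/dx_j [T/m]
    """
    N_z = 0.0
    for r, m in zip(positions, m_r):
        B = B_func(r)
        G = gradB_func(r)
        m_tot = m + chi * dV * B / MU0      # remnant + induced moment
        F = G.T @ m_tot                     # F_j = sum_i (m_tot)_i dB_i/dx_j
        # direct moment-field torque about z (the induced part cancels
        # identically here, since the induced moment is parallel to B) ...
        N_z += m_tot[0] * B[1] - m_tot[1] * B[0]
        # ... plus the lever-arm torque x F_y - y F_x from the net force
        N_z += r[0] * F[1] - r[1] * F[0]
    return N_z
```

Demodulating such a summed torque at the excitation frequency and at twice that frequency would then separate, as in the text, the remnant-moment and susceptibility contributions.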
assuming the same torque noise level and a 3 hour measurement as in , this signal amplitude will permit a resolution , corresponding to % of the lisa goal .the measurement resolution grows as , so it is possible to effectively improve the signal to noise ratio by increasing the source coil drive current .however , the major uncertainty is given here by the model assumed to evaluate the magnetic field and field gradients , and thus to estimate the forces and torques .nevertheless , the power of the technique is evident because it will allow characterization of the lisa test mass directly from the torques exerted by time - depending magnetic fields , and independent estimate of the material proprieties to be compared with other characterization methods .in , due to the interaction of the test mass with the magnetic field produced by the coil in the arrangement sketched in , as a function of the displacement between the test mass and the coil axis .the coil parameters are : radius cm , turns , excitation current a , on - axis displacement cm ; the maximum torque is obtained with off - axis displacement cm.,title="fig:",height=226 ] +the wide range of properties requested for the lisa test masses ( good optical quality to serve as end mirrors of the interferometers , stringent machining tolerances to avoid stray cross coupling , high mechanical strength to sustain the launch vibrations , high density and homogeneous composition to minimize acceleration for given force , good magnetic cleanliness ) makes its production a fundamental process within the lisa technology development .in addition , the strict requirements on the magnetic cleanliness make the verification of these proprieties a very important issue .even if a comprehensive testing campaign is planned during the preliminary phases of the flight , with the aim of establishing the `` feedthrough '' of the magnetic field fluctuations to acceleration noise , a ground based experimental investigation of the force / torques associated with varying magnetic field is highly desirable .the torsion pendulum technique , with its widely demonstrated high torque sensitivity , can be applied to a significant characterization of the magnetic proprieties of the lisa / ltp test masses .the principle of operation , based on the coherent detection of small torques associated with modulation of external magnetic fields , is analogous to the test procedure to be employed during the flight , and thus represents an important validation step in view of the mission .work is currently in progress to modify the existing facility in order to host the proposed experiment , and modelling is being performed in order to assess the validity of the analysis and the method performance .it is a pleasure to acknowledge many fruitful discussions with e adelberger , j mester , d l gill , s anza , a sanchez , d chen .
Achieving the low-frequency LISA sensitivity requires that the test masses acting as the interferometer end mirrors are free-falling with an unprecedentedly small degree of deviation. Magnetic disturbances, originating in the interaction of the test mass with the environmental magnetic field, can significantly degrade the LISA performance and can be parameterized through the test mass remnant dipole moment and magnetic susceptibility. While the LISA test flight precursor LTP will investigate these effects during the preliminary phases of the mission, the very stringent requirements on test mass magnetic cleanliness make ground-based characterization of its magnetic properties paramount. We propose a torsion pendulum technique to accurately measure on ground the magnetic properties of the LISA/LTP test masses.
many crop plants of major significance that produce seeded fruit display alternate bearing , in which a year of heavy crop yield , known as a mast year , is followed by a year of extremely light yield and vice versa .this causes a cascading effect throughout the ecosystem , including leading to serious health problems for human beings . the alternate bearing phenomenon is quite common , andhas been observed in wild plants in forests as well as domesticated plants .understanding the dynamics of this natural phenomenon is a difficult task due to the complexities involved in the natural systems , and the limitations of experimental design and verification . in this paper, we attempt to illuminate some of the dynamical complexities of this phenomenon .various attempts have been made to determine the characteristics of the alternate bearing phenomenon , including , but not limited to , discussions in refs .however isagi were the first to attempt to study the characteristics of the phenomenon using a simple mathematical model which captures some of its key behaviors .however , even after this proposition , except for a few attempts , this system has not been studied in detail . in this work , we endeavor to study this system in detail and correlate the mathematical model with the ecological behavior .isagi proposed the following model based on the energy resource present in a plant : a constant amount of photosynthate is produced every year in individual plant .this photosynthate is used for growth and maintenance of the plant .the remaining photosynthate ( ) is stored within the plant body .the accumulated photosynthate stored in the plant is expressed as . for a particular year , if the accumulated photosynthate ( ) exceed a certain threshold , then the remaining amount is used for flowering , with the cost of flowering expressed as .these flowers are pollinated and bear fruits , the cost of which is designated as . usually , under relatively stable conditions , the fruiting cost is proportional to the cost of flowering , , where is a proportionality constant .after flowering , the leftover accumulated photosynthate is .once fruiting is over , this is reduced to , , . however , in non - mast years the accumulated energy becomes .this phenomenon , considering and , is modeled as follows : where represent the years and . here is considered as a parameter for this model . sincethis model is based on energy ( photosynthate ) stored in a plant it is termed a resource budget model . here , we explore the characteristic behavior of this model , considering it significance with reference to previous discussions . in this workwe explain the reason of existence of chaotic bands and their merging .we found that the bands chaotic attractors become bands when the period- unstable fixed points simultaneously collide with the chaotic bands . in this model and generally considered to be constants for mature trees .however , the branches of a natural tree may sometimes be cut , or suffer some destruction , which implies that there must be some variation in .in addition , due to weather fluctuations or other sources of external disturbances , the yearly photosynthate production , and therefore the quantity , also does not remain constant . 
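For readers who want to experiment with the model, here is a minimal Python sketch of the resource budget dynamics in the standard form due to Isagi et al.: the reserve grows by the yearly photosynthate each year, and whenever it exceeds the threshold the excess is spent on flowering and a proportional amount on fruiting. The variable names (PS, LT, Rc) are our own labels for the quantities discussed in the text, and the noise-free version is shown; yearly variations of PS and LT, as examined next, can be added by drawing them anew at each step.

```python
def resource_budget(S0, PS, LT, Rc, n_years):
    """Iterate the resource budget model of Isagi et al. (standard form).

    S0 : initial stored photosynthate
    PS : yearly photosynthate production
    LT : threshold above which the plant flowers
    Rc : ratio of fruiting cost to flowering cost
    Returns the yearly reserves and the yearly fruiting costs (seed output).
    """
    S = S0
    reserves, fruit_cost = [], []
    for _ in range(n_years):
        if S + PS <= LT:            # non-mast year: just accumulate
            S = S + PS
            Ca = 0.0
        else:                       # mast year: flower and then fruit
            Cf = S + PS - LT        # flowering cost (the excess over LT)
            Ca = Rc * Cf            # fruiting cost, proportional to Cf
            S = LT - Ca             # reserve left after flowering and fruiting
        reserves.append(S)
        fruit_cost.append(Ca)
    return reserves, fruit_cost


# Example: alternate-bearing-like time series appear once Rc exceeds 1
reserves, fruits = resource_budget(S0=0.0, PS=1.0, LT=1.0, Rc=2.0, n_years=50)
```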
here , we examine the importance of variations in and to explore their effects on the characteristic behavior of the system , and correlation with the real - world phenomenon of alternate bearing .results show that the ratio of cost of fruiting to the cost of flowering , which occurs in multi - bands regime , does not get changed due to the yearly variations in photosynthesis ( when yearly photosynthesis exceeds certain critical value ) .however the initial distance between the chaotic bands does depend on photosynthesis .this model looks like as a tent map , as shown schematically in fig .[ fig : model](a ) . however , it can be seen that if is below then it does not depend on parameter , the slope is always one . as discussed below, this independence leads to new results compared to standard tent maps this leads to the creation of separate chaotic bands which are useful for understanding the phenomenon of alternate bearing .note that this is the simplified model representing the phenomenon of alternate bearing .shown in fig .[ fig : model](b ) is the time - series generated from this model at which clearly indicates the high and low fruiting alternatively . shown in fig .[ fig : model](c ) is the number of fruits of an individual citrus unshiu tree at the nebukawa experimental station in kanagawa prefecture over many years .this is an example of alternate bearing at individual level whose high and low fruiting is clearly captured by the model ( fig . [ fig : model](b ) ) .[ fig : bif](a ) shows the bifurcation of as a function of in the model , eq .( [ eq : model ] ) . at low values of is only a single stable solution ( shown by the red solid line ) , giving the fixed point of period-1 , .we will denote the fixed points by , where represents the sequential order of the fixed points of period- .as increases above 1 , the fixed point becomes unstable ( shown by the red dashed line ) .simultaneously , chaos appears both above and below the period-1 unstable fixed point .the presence of chaos ( irregular behavior in ) in these trajectories is confirmed by determining the lyapunov exponent , which is positive for ( fig .[ fig : bif](b ) ) . here ,upper and lower chaotic bands correspond to the high and low yield years respectively .we will now try to understand the bifurcation , from stable period- to chaos , in the model in detail .an expanded view of bifurcation diagram of fig .[ fig : bif](a ) is shown in fig . [fig : small](a ) , over the range ] , which are always positive .the exact values depend on the relative values of the fixed points with respect to , as indicated in eq .( [ eq:4 ] ) .hence , these fixed points are unstable .this suggests that an infinite number of chaotic bands are created just after , separated by one unstable fixed point of period and period .these bands appear in very small regions which are difficult to detect . shown in fig .[ fig : pp ] are the zoomed regions of these bands at where we can detect up to period- fixed points , , -chaotic bands . to understand the complete dynamics of the system of eq .( [ eq : model ] ) it is necessary to determine the effect of variation of .all the infinite bands of chaotic attractors are created just after .the bands are separated by period- unstable fixed points .the two unstable period-2 fixed points lie above and below the unstable period-1 fixed point , separated near to by a distance , as shown in fig .[ fig : small](a ) . depends only on the distance between the two unstable fixed points of period . 
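A hedged sketch of how the bifurcation diagram and Lyapunov exponent reported above can be reproduced numerically: for each value of the fruiting-to-flowering ratio, iterate the map, discard a transient, and average the logarithm of the local slope along the orbit. Because the map is piecewise linear, the local slope magnitude is 1 below the threshold and equals the ratio above it; the normalization (PS = LT = 1) and all names below are our own choices, not the paper's notation.

```python
import numpy as np

def step(S, PS, LT, Rc):
    """One year of the resource budget map; returns (new value, |local slope|)."""
    if S + PS <= LT:
        return S + PS, 1.0
    return LT - Rc * (S + PS - LT), Rc

def bifurcation_and_lyapunov(Rc_values, PS=1.0, LT=1.0,
                             n_transient=500, n_keep=200):
    """For each Rc, return the post-transient orbit (bifurcation data)
    and the Lyapunov exponent, i.e. the orbit average of log|slope|."""
    orbits, lyap = [], []
    for Rc in Rc_values:
        S = 0.1
        for _ in range(n_transient):
            S, _ = step(S, PS, LT, Rc)
        pts, acc = [], 0.0
        for _ in range(n_keep):
            S, slope = step(S, PS, LT, Rc)
            pts.append(S)
            acc += np.log(slope)
        orbits.append(pts)
        lyap.append(acc / n_keep)
    return orbits, lyap

orbits, lyap = bifurcation_and_lyapunov(np.linspace(0.2, 3.0, 281))
```

Plotting each orbit against its parameter value reproduces a bifurcation diagram of the kind described above, with the Lyapunov exponent turning positive once the ratio exceeds 1.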
expanding in terms of , expressions for the two period- fixed points , eq .( [ eq:2 ] ) , become this suggests that upper fixed point , , is independent of , while the lower one , , varies linearly with .therefore , the distance between these two fixed points is , , the starting separation of the chaotic bands which appear above and below the unstable fixed is dependent on only .therefore , similar bifurcation diagrams , as those given in figs .[ fig : bif ] and [ fig : small ] , are observed for different values ( figures are shown here ) .note that the upper fixed point , is independent of ; hence , irrespective of the values of , the fixed point remains at .however , the fixed point , which depends on , moves up to as decreases .ecologically , this dependency of on suggests that , if there is high value of photosynthate in a particular year ( for example , with more sunshine , or less snow fall or clouds ) then there will be a huge difference in seed output in the following years .this means that , as per the above discussion , the distance , and hence the separation of the upper and lower chaotic bands will be large .this simple model thus explains the important ecological phenomenon of alternate bearing , explaining origin and variation in magnitude of the effect of the alternate bearing .this also predicts that if there is a very high value of yearly photosynthesis , then one can expect alternate bearing to be observed in the next year. note that if yearly photosynthesis is less in a plant then there will be small fluctuation in around , and hence the the magnitude of variations in yields across the years will also be less .1.cm .5 cm a closer look at fig .[ fig : small](a ) near to shows that two bands chaos are merged into a single band when the period- unstable fixed point collides with the two bands of chaotic attractors .this collision is termed a band - merging crisis .one method to detect such a crisis is to determine the variance in after -iterations , within the regime of two bands , the trajectory visits each band alternately , while in the single merged band the trajectory may move anywhere .therefore , when a crisis occurs there is large jump in the variance of .this is shown in fig . [fig : d](a ) , where a clear jump can be seen at , when two bands merge .similarly , at , four bands of chaotic attractors become two bands , as the upper and lower period-2 unstable fixed points each collide with the attractor bands .also , sixteen chaotic bands become 8 bands at the point at which the period-8 unstable fixed points collide separately at lower values of .these band merging crises are shown in fig .[ fig : d](a ) .this figure is generated by finding the variance in after iterations ( for detection of crises up to the merging of bands ) .note that , due to the infinitely small width of these bands ( as shown in fig .[ fig : pp ] ) , the higher order merging crisis that it is possible to detect numerically is the merging of the 32 bands containing the period-32 fixed points .this shows that , near to , the bands chaotic attractors become bands when the period- fixed points simultaneously collide with the chaotic bands . 
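The band-merging diagnostic described above can be sketched as follows: for each parameter value one records the variance of the m-step increment S(t+m) - S(t) along the orbit, which stays small as long as the trajectory cycles among m separate bands and jumps when the bands merge. The choice m = 2 below targets the two-band to one-band crisis; larger powers of two detect the higher-order mergings, within the numerical resolution limits mentioned in the text. This is our own minimal reading of the procedure, with self-chosen defaults.

```python
import numpy as np

def rb_map(S, PS, LT, Rc):
    """Resource budget map (same standard form as in the earlier sketch)."""
    return S + PS if S + PS <= LT else LT - Rc * (S + PS - LT)

def crisis_diagnostic(Rc_values, m=2, PS=1.0, LT=1.0,
                      n_transient=2000, n_samples=4000):
    """Variance of the m-step increment S(t+m) - S(t) along the orbit.

    While the orbit cycles among m separate chaotic bands this variance stays
    small; it jumps sharply at the band-merging crisis (e.g. two bands merging
    into one for m = 2), which is the signature used in the text.
    """
    variances = []
    for Rc in Rc_values:
        S = 0.1
        for _ in range(n_transient):
            S = rb_map(S, PS, LT, Rc)
        orbit = np.empty(n_samples)
        for k in range(n_samples):
            S = rb_map(S, PS, LT, Rc)
            orbit[k] = S
        variances.append(np.var(orbit[m:] - orbit[:-m]))
    return np.array(variances)
```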
the infinite number of chaotic bands created near to start to merge on further increase of .this process of merging ends when the period-1 fixed point collides with the remaining two bands of chaos near to .this regime , in which collision occurs , is labeled as in fig .[ fig : small](a ) , from to the values of where the two bands of chaos become one band of chaos . in order to see how the width of the multi - band regime changes with , is numerically determined for different values of , as shown in fig .[ fig : d](b ) .the black circles and red squares correspond to the two values of and respectively . in the inset to fig .[ fig : d](b ) , it can be seen that the perfect overlapping of these values shows that the variation of with is independent of .a fit to the these curves shows that it is a nonlinear function of the form where and are fitting parameters .this function is shown for in fig .[ fig : d](b ) as a dashed blue line , with fitting parameters and .one important observation is that at smaller values of there is drastic change in .however , for large values it becomes saturated around , , .note that the alternate bearing occurs when the values of , , ratio of cost of fruiting to the cost of flowering , is in these multi - bands regime ( for ) .since is independent of ( after the photosynthate , which is always expected ) implies that does not get changed due to the yearly variation in photosynthate .one remarkable observation is that this value is very close to the value of estimated from field data . therefore the present analysis explains / supports this important characteristic of ecological alternate bearing phenomenon .the slight deviation in these numerical and experimental values could be due to the external fluctuation as discussed below .note that this saturation values of may be different for different type of plants , and may depends on regions and climates which needs to be probed further .natural systems are rarely free from external perturbations .therefore , in order to fully understand the dynamics of this system , we must also consider the effect of noise . here , we consider uniformly distributed random noise in , with values between ] . fig .[ fig : noise](a ) shows the bifurcation diagram generated with noise of strength ( of signal ) for fixed and .the features remain similar to those without noise , fig .[ fig : bif](a ) .the islands and the distance still persist under noise . to understand the effects of the variations of cause by this noise ,two data sets are shown in fig .[ fig : noise](b ) , with black - circles and blue - squares representing data for noise levels ( 10% ) and ( 5% ) respectively .the fits to these datasets , using the function eq .( [ eq : fit ] ) , show similar trends to that without noise ( fig .[ fig : d ] ) .however , the saturation distance decreases for higher strength of noise .therefore , the results presented in this work reveals that , under natural conditions in which fluctuations always exist , the characteristic properties and of alternate bearing phenomenon can persist .piecewise - smooth dynamical systems , which model many natural phenomena , are well studied and found to be important to understand the systems .piecewise - linear maps have been also well studied ( see recent paper and references therein ) . here, we have studied the resource budget model which captures the phenomenon of alternate bearing in plants .however the map we considered here , eq . ( [ eq : model ] ) , is different to standard tent maps . 
in this model ,one side ( ) has a constant coefficient linear equation , while in all reported works the slopes of both sides change. therefore this work adds a new class of map to tent - type maps , and we were able to study the variations of and . from a mathematical point of view , it is important to study this new class of tent map in detail .this system shows rich bifurcation behavior , where infinite islands containing period- unstable orbits are observed .the islands are destroyed due to band merging crises when the unstable fixed points collide with chaotic bands .we found that the distance only depends on and independent of .this suggests that if there is a high value of photosynthate in a year then there will be a huge difference in seed output in the next year .this variation is independent of the threshold value of the individual tree .therefore , this simple model demonstrates the characteristic properties of an ecological phenomenon , explaining the magnitude and variation of alternate bearing .we observe that only for smaller values of there is drastic change in .however , for large values is saturated at approximately , which is very close to estimations from real data .this shows that variations in the ratio of cost of fruiting to cost of flowering does not dependent on the amount of photosynthate . these results, the independence of on and the saturation of regime of multi - bands , may be useful for ecological perspectives and hence its open up to new challenges for further analysis and verification in experimental situations .these analysis may also be extended to other types of important systems .it will be also interesting to explore these analysis in coupled systems where masting ( synchronized production ) occurs .w. d. koenig and j. m. h. knopes , american scientist , * 93 * , 340 ( 2005 ) .w. d. koenig , m. h. knopes , w. j. carmen and i. s. pearse , ecology , * 96 * , 184 ( 2015 ) .d. kelly and v. l. sork , annu .. syst . * 33 * , 427 ( 1002 ) .s. p. monselise and e. e. goldschmidt , horticulture review , * 4 * , 128 ( 2011 ) .y. isagi , k. sugimura , a. sumida and h. ito , j. theor . biol . * 187 * , 231 ( 1997 ) . x. ye and k. sakai , chaos , * 23 * , 043124 ( 2013 ) .d. lyles , t. s. rosenstock , a. hasting and p. h. brown , j. theor .* 259 * , 701 ( 2009 ) .e. e. crone , l. polansky , and p. lesica , the american naturalist , * 166 * , 396 ( 2005 ) .m. rees , d. kelly , and o. n. bjornstad , the american naturalist , * 160 * , 44 ( 2002 ) .w. f. h. al - shameri and m. a. mahiub , i. j. math .analysis , * 7 * 1433 ( 2013 ). b. futter , v. avrutin and m. schanz , chaos , solitons & fractals , * 45 * , 465 ( 2012 ) . m. tabor , _ chaos and integrability in nonlinear dynamics : an introduction _ , ( wiley - blackwell , 1989 ) . c. grebogi , e. ott , f. romeiras and j. a. yorke , phys .a * 35 * , 5365 ( 1987 ) .k. satoh and t. aihara , j. phys .japan , * 59 * , 1184 ( 1990 ) .
We consider here the resource budget model of plant energy resources, which characterizes the ecological alternate bearing phenomenon in fruit crops, in which high and low yields occur in alternate years. The resource budget model is a tent-type map, which we study in detail. An infinite number of chaotic bands are observed in this map, separated by periodic unstable fixed points. These 2^n-band chaotic attractors become 2^{n-1}-band attractors when the period-2^{n-1} unstable fixed points simultaneously collide with the chaotic bands. The distance between two sets of coexisting chaotic bands that are separated by a periodic unstable fixed point is discussed. We explore the effects of varying a range of parameters of the model. The presented results explain the characteristic behavior of alternate bearing estimated from real field data. The effect of noise is also explored. The significance of these results for ecological perspectives on the alternate bearing phenomenon is highlighted. * Major plants which produce large seed crops usually show alternate bearing, i.e., a heavy yield year is followed by an extremely light one, and vice versa. This causes a cascading effect throughout the ecosystem, and may cause serious health problems for human beings. Therefore, it is very important to understand this natural phenomenon. In this paper we attempt to understand some of the dynamical complexities of this phenomenon. We consider one of the simplest mathematical models that captures many of its characteristic behaviors. *
there has been a vast amount of recent literature dedicated to algorithms for sparse recovery , both in the context of inverse imaging problems and of _ compressed sensing_. as an alternative to the usual quadratic penalties used in regularization theory for ill - posed or ill - conditioned inverse problems , the use of -type penalties has been advocated in order to recover regularized solutions having sparse expansions on a given basis or frame , such as e.g. a wavelet system . denoting by the vector of coefficients describing the unknown object , by the vector of ( noisy ) data and by the linear operator ( matrix ) modelling the link between the two , the inverse problem amounts to finding a regularized solution of the equation . when it is known a priori that is a sparse vector, one can resort to the following penalized least - squares strategy , also referred to as the _ lasso _ after tibshirani : where is a positive regularization parameter regulating the balance between the penalty and the data misfit terms .the norm denotes the usual norm whereas is the norm of the vector . in compressedsensing ( also called _ compressive sampling _ ) , the aim is to reconstruct a _sparse _ signal or object from a small number of linear measurements .the recovery of such an object can then be achieved by searching for the sparsest solution to the linear system representing the measurement process , or equivalently by looking for a solution with minimum `` -norm '' . to avoid the combinatorial complexity of the latter problem, one can use as a proxy a convex -norm minimization strategy .when the data are affected by measurement errors , the problem is reformulated as a penalized least - squares optimization analogous to ( [ minimizer ] ) .let us observe that problem ( [ minimizer ] ) is equivalent to the constrained minimization problem : for a certain .one can show that and are piecewise linear functions of and .one always has that for .the relationship between and is given by and .several iterative methods for solving the minimization problems ( [ minimizer ] ) or ( [ constrmin ] ) have been proposed in the literature . for the purpose of comparison with our new acceleration scheme, we will focus on the following algorithms : 1 . the iterative soft - thresholding algorithm ( `` ista '' ) proposed in )goes as follows : ] if and zero otherwise . for any initial vector and under the condition , this scheme has been shown to converge to the minimizer defined by ( [ minimizer ] ) . when reinterpreted as a forward - backward proximal scheme , convergence can be seen to hold also for .[ tlwalg ] 2 .the fast iterative soft - thresholding algorithm ( `` fista '' ) , proposed in , is a variation of ista .defining the operator by ] , with . denotes the projection onto the -ball of radius .[psdalg ] the figures in section [ sec4 ] provide a visual way to compute the performance of these algorithms in two problem examples .note that these are the same as in , where the reader can find comparisons to yet other methods , including e.g. the -ls method , an interior point algorithm proposed in .in this section we describe the acceleration scheme we propose for solving the optimization problem ( [ constrmin ] ) .this problem is a particular case of the general problem of minimizing a convex and continuously differentiable function over a closed convex set . here . 
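Before turning to the gradient projection scheme, here is a minimal, self-contained Python sketch of the soft-thresholding iteration (ISTA) recalled above. Conventions for the penalty vary between papers; the sketch minimizes 0.5*||Kx - y||^2 + lam*||x||_1, so each iteration is a gradient step of length 1/L on the quadratic term followed by componentwise soft-thresholding with threshold lam/L. The operator is passed as a dense matrix purely for simplicity.

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft-thresholding: the proximal map of tau * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(K, y, lam, n_iter=500):
    """ISTA for min_x 0.5 * ||K x - y||^2 + lam * ||x||_1  (our convention).

    Each iteration is a gradient step on the quadratic misfit, with step 1/L
    for L an upper bound on ||K^T K||, followed by soft-thresholding.
    """
    L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

FISTA adds a Nesterov-type extrapolation on top of exactly this update.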
a gradient projection method for solving this problem can be stated as in algorithm [ gpm ] .some comments about the main steps of algorithm gp are in order .+ first of all , it is worth to stress that any choice of the steplength in a closed interval is permitted .this is very important from a practical point of view since it allows to make the updating rule of problem - related and oriented at optimizing the performance .+ if the projection performed in step 2 returns a vector equal to , then is a stationary point and the algorithm stops . when , it is possible to prove that is a descent direction for in and the backtracking loop in step 5 terminates with a finite number of runs ; thus the algorithm is well defined .+ the nonmonotone line - search strategy implemented in step 5 ensures that is lower than the maximum of the objective function in the last iterations ; of course , if then the strategy reduces to the standard monotone armijo rule .concerning the convergence properties of the algorithm , the following result can be derived from the analysis carried out in for more general gradient projection schemes : if the level set is bounded , then every accumulation point of the sequence generated by the algorithm gp is a stationary point of in .we observe that the assumption is trivially satisfied for problem since in this case the feasible region is bounded .now , we may discuss the choice of the steplengths ] ; projection : ; + if then stop , declaring that is a stationary point ; descent direction : ; set and ; backtracking loop : if then + go to step 6 ; else + set and go to step 5 ; endif set .end first of all we must recall the two bb rules usually exploited by the main steplength updating strategies . to this end , by denoting with the identity matrix , we can regard the matrix as an approximation of the hessian and derive two updating rules for by forcing quasi - newton properties on : where and . in this way, the steplengths are obtained .if then + set $ ] , and a non - negative integer ; + else + if then + ; + else + ; + ; + + if then + ; + ; + else + ; + ; + endif + endif + endif at this point , inspired by the steplength alternations successfully implemented in recent gradient methods , we propose a steplength updating rule for gp which adaptively alternates the values provided by .the details of the gp steplength selection are given in algorithm [ ss ] .this rule decides the alternation between two different selection strategies by means of the variable threshold instead of a constant parameter as done in and .this trick makes the choice of less important for the gp performance and , in our experience , seems able to avoid the drawbacks due to the use of the same steplength rule in too many consecutive iterations . in the following we denote by gpss the algorithm [ gpm ] equipped with the steplength selection [ ss ] .+ we end this section by describing the setting for the gpss parameters used in the computational study of this work : * _ line - search parameters _ : ( monotone line - search ) , , ; * _ steplength parameters _ : , , + , , . in our experience the above setting often provides satisfactory performance ; however , it can not be considered optimal for every application and a careful parameter tuning is always advisable .to assess the performances of our gpss algorithm and estimate the gain in speed it can provide with respect to the algorithms 1 to 5 , we perform some numerical tests . 
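To illustrate the kind of steplength alternation at the heart of the scheme just described, the sketch below implements a generic gradient projection iteration with the two Barzilai-Borwein steplengths computed from s = x_k - x_{k-1} and z = grad f(x_k) - grad f(x_{k-1}), an adaptive switch between them, and a monotone Armijo backtracking along the feasible direction. It is a simplified illustration of those ingredients with a generic projection operator and a fixed switching test, not a reproduction of Algorithm GP or of the variable-threshold rule of the paper.

```python
import numpy as np

def gradient_projection_bb(f, grad, project, x0, n_iter=200,
                           a_min=1e-5, a_max=1e5, beta=1e-4, theta=0.5):
    """Generic gradient projection with alternating Barzilai-Borwein steps.

    f, grad : objective and its gradient
    project : Euclidean projection onto the (closed convex) feasible set
    This is the monotone (Armijo) variant; a nonmonotone rule would replace
    f(x) in the acceptance test by the max of f over the last few iterates.
    """
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    alpha = 1.0
    for _ in range(n_iter):
        d = project(x - alpha * g) - x          # feasible descent direction
        if np.linalg.norm(d) < 1e-12:
            break                               # (numerically) stationary point
        lam, fx = 1.0, f(x)
        while f(x + lam * d) > fx + beta * lam * (g @ d):
            lam *= theta                        # backtracking line search
        x_new = x + lam * d
        g_new = grad(x_new)
        s, z = x_new - x, g_new - g
        sz = s @ z
        if sz <= 0:
            alpha = a_max
        else:
            bb1 = (s @ s) / sz                  # first BB rule
            bb2 = sz / (z @ z)                  # second BB rule
            alpha = bb2 if bb2 / bb1 < 0.5 else bb1   # simple switching test
            alpha = min(max(alpha, a_min), a_max)
        x, g = x_new, g_new
    return x
```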
to this purposewe adopt the methodology proposed in and based on the notion of _ approximation isochrones_. it improves on the comparisons made for a single value of or , i.e. for a single level of sparsity of the recovered object . for values of in a given interval ,one computes the minimizer of ( [ minimizer ] ) .when the number of nonzero components in is not too large , this can be done by means of the direct ( non - iterative ) _ homotopy _ method or lars algorithm .then , for a fixed and given computation time , one runs one of the algorithms for each value of ( or ) . the relative error reached at the end of the computationis plotted as a function of and hence this plot is just the approximation isochrone showing the degree of accuracy reached in the given amount of computing time for each value of .a set of such plots allow to quickly grasp the performances of a given algorithm in various parameter regimes and to easily compare it with other methods ; it reveals in one glance under which circumstances the algorithms do well or fail .the paper also demonstrates the fact that the relative performances of the algorithms may strongly depend on the specific application one considers , and in particular on the properties of the linear operator modelling the problem .we test the different algorithms on two different operators arising typically either from a compressed sensing or from an inverse problem . in both casesthe matrix is of size 1848x8192 . in the first case ,the elements of are taken from a gaussian distribution with zero mean and variance such that .this matrix is rather well conditioned and can serve as a paradigm of compressed sensing applications .it is applied to a sparse vector and perturbed by additive gaussian noise ( about ) to yield the data .the second matrix models a severely ill - conditioned linear inverse problem that finds its origin in a problem of seismic tomography described in detail in . for both operators , the minimizer is computed for 50 different values of ( or equivalently , 50 different values of ) .then , for each iterative algorithm , we make plots having the relative error on the vertical axis and on the bottom horizontal axis ( on the top horizontal axis the value of is also reported ) .the number of nonzero components in is indicated by vertical dashed lines . in each plotwe report the isochrone lines that correspond to a given amount of computer time . in this wayone can see how close , for the different values of , the iterates approach the minimizer after a given time .let us remark that although the reported computing times are of course specific to a given computer and implementation , the overall behavior of the isochrones should be fairly general .for example , the fact that they get very close to each other in some places can be interpreted as a bottleneck feature of the algorithm . in figure [ gausspic ] ,we report the results for the ista , fista , gpsr , sparsa , psd and our new algorithm gpss for the case of the gaussian random matrix .the proposed gpss algorithm compares favorably with the other five , especially for small values of .experiments made by varying the parameter showing no significant difference , we report here only the results obtained with ( monotonic line search ) . however , the behavior for large penalties is not clearly visible on figure [ gausspic ] .it is better demonstrated when using a logarithmic scale for the relative error on the vertical axes as reported in figure [ gausspiclog ] . 
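The isochrone methodology itself can be summarized in a few lines of Python: for each value of the regularization parameter on a grid, run a given iterative solver with a fixed wall-clock budget and record the relative distance to a reference minimizer computed beforehand (for instance by a direct homotopy/LARS solver). The helper below treats the solver as a black box advanced a few iterations at a time; the interface, names and budgeting granularity are our own assumptions.

```python
import time
import numpy as np

def approximation_isochrone(make_solver, lambdas, x_ref, budget_seconds,
                            chunk=10):
    """Relative error reached by a solver within a fixed time budget, per lambda.

    make_solver(lam) -> object with .step() advancing one iteration and .x
    x_ref            : dict lam -> reference minimizer (e.g. from LARS)
    Plotting the returned errors against lambda, for several budgets, gives
    one isochrone curve per budget.
    """
    errors = []
    for lam in lambdas:
        solver, t0 = make_solver(lam), time.perf_counter()
        while time.perf_counter() - t0 < budget_seconds:
            for _ in range(chunk):          # amortize the clock reads
                solver.step()
        ref = x_ref[lam]
        errors.append(np.linalg.norm(solver.x - ref) / np.linalg.norm(ref))
    return np.array(errors)
```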
in figures[ geopic ] and [ geopiclog ] , we report the results for the case of the ill - conditioned matrix arising from the seismic inverse problem . clearly , for this operator , ista , gpsr and psd have a lot of difficulty in approaching the minimizer for small values of ( lines not approaching ) .the fista algorithm appears to work best for small penalty parameters whereas gpss and sparsa compete for the second place in such instance . from figure [ geopiclog ], we see that the gpss and sparsa algorithms are performing best for large values of .the reported encouraging numerical results call of course for further experiments , but we believe that they are sufficiently representative to allow honest extrapolation to reliable conclusions holding more generally . as seen , the proposed gpss algorithm performs well for the compressed sensing problem : for small values of , it clearly outperforms the other algorithms ( see figure [ gausspic ] ) whereas it is still competitive for larger values of . in the ill - conditioned inversion problem ,gpss an sparsa appear to perform better than all other tested algorithms for large values of , whereas they are challenged by the fista method for smaller values .i.l . and c.d.m .are supported by grant goa-062 of the vub .i.l . is supported by grant g.0564.09n of the fwo - vlaanderen .m.b . , r.z . and l.z .are partly supported by mur grant 2006018748 .figueiredo , r.d .nowak , s.j .wright , gradient projection for sparse reconstruction : application to compressed sensing and other inverse problems , ieee j. selected topics in signal process . 1 ( 2007 ) 586597 .
We propose a new gradient projection algorithm that compares favorably with the fastest algorithms available to date for $\ell_1$-constrained sparse recovery from noisy data, both in the compressed sensing and inverse problem frameworks. The method exploits a line search along the feasible direction and an adaptive steplength selection based on recent strategies for alternating the well-known Barzilai-Borwein rules. The convergence of the proposed approach is discussed, and a computational study on both well-conditioned and ill-conditioned problems is carried out to evaluate its performance in comparison with five other algorithms proposed in the literature.
we consider the initial boundary value problem for the scalar integro - differential equation with initial condition note that the flux includes a non - local integral term . for notational convenience ,we introduce the function is called the _erosion function_. the following assumptions apply to : we remark that the characteristic speed of ( [ eq1 ] ) is by ( [ k ] ) and ( [ eq : f ] ) , the characteristic speed is always positive , therefore no boundary condition is assigned at for ( [ eq1 ] ) .the equation ( [ eq1 ] ) arises as the _ slow erosion limit _ in a model of granular flow , studied in , with a specific erosion function note that this function satisfies all the assumptions in ( [ eq : f ] ) . in more details , let be the height of the moving layer , and be the slope of the standing profile . assuming , the following system of balance laws was proposed in this model describes the following phenomenon .the material is divided in two parts : a moving layer with height on top and a standing layer with slope at the bottom .the moving layer slides downhill with speed . if the slope ( the critical slope ) , the moving layer passes through without interaction with the standing layer . if the slope , then grains initially at rest are hit by rolling grains of the moving layer and start moving as well .hence the moving layer gets bigger . on the other hand , if , grains which are rolling can be deposited on the bed .hence the moving layer becomes smaller . in the slow erosion limit as , we proved in that the solution for the slope in provides the weak solution of the following scalar integro - differential equation here , the new time variable accounts for the total mass of granular material being poured downhill . introducing and writing for , we obtain the equation ( [ eq1 ] ) with ( [ fex ] ) . the result in the existence of entropy weak solutions to the initial boundary value problem ( [ eq1 ] ) with given in ( [ fex ] ) for finite `` time '' ( which is actually finite total mass ) .however , well - posedness property was left open due to the technical difficulties caused by the non - local term in the flux .furthermore , due to the discontinuities in , the function is only lipschitz continuous in its variables , therefore one can not apply directly previous results . indeed , classical results as require more smoothness on the coefficients ; see also .some closer results can be found in where the coefficient does not depend on time . in this paperwe consider a class of more general erosion functions that satisfy the assumptions in ( [ eq : f ] ) , and we study existence and well - posedness of bv solutions of ( [ eq1 ] ) . assuming that the slope is always positive , i.e. , , we seek bv solutions with bounded total mass .therefore , we define as the set of functions that satisfy assume that the initial data satisfies for some constants . a natural definition of entropy weak solution is given below .[ def1 ] let .a function is an * entropy weak solution * to on \times{{\mathbb{r}}}_- ] , , and the map \nit\mapsto q(t) ]. 
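As a purely illustrative aside before the main result, the following Python sketch shows the structure of one fractional (operator-splitting) step for an equation of this type: the nonlocal coefficient is computed from the current profile and frozen, and a first-order upwind step is then taken for the local conservation law. The exponential form of the coefficient used below, k(x) = exp( integral from x to 0 of f(q) ), is an assumption chosen to be consistent with the relation k_x = -k f(q) used later in the paper, and all names and discretization choices are ours; the sketch mirrors, at a crude numerical level, the fractional-step construction used later for the approximate solutions, but it is not the authors' scheme.

```python
import numpy as np

def fractional_step(q, dx, dt, f):
    """One splitting step for a conservation law with a nonlocal coefficient.

    q  : cell averages on a uniform grid over an interval of the half line x < 0,
         ordered from left to right with the boundary x = 0 at the right end
    f  : erosion function (increasing, so the characteristic speed k f'(q) > 0)
    Assumed model for this sketch: q_t + ( k(x) f(q) )_x = 0 with
    k(x) = exp( integral_x^0 f(q) dxi ), frozen during the transport step.
    """
    # 1) freeze the nonlocal coefficient computed from the current profile
    #    (cumulative integral from each cell up to the right boundary x = 0)
    tail = np.cumsum(f(q)[::-1])[::-1] * dx
    k = np.exp(tail)
    # 2) first-order upwind step; the speed is positive, so information comes
    #    from the left and no boundary condition is needed at x = 0
    #    (dt must satisfy a CFL restriction relative to max k * f'(q))
    flux = k * f(q)
    q_new = q.copy()
    q_new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    # the leftmost cell is kept frozen here, standing in for inflow data
    return q_new
```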
now we state the main result of this paper .[ th:1 ] assume and let , be given constants .then for any initial data there exists an entropy weak solution to the initial - boundary value problem for all .moreover , consider two solutions , of the integro - differential equation ( [ eq1 ] ) , corresponding to the initial data with , .then for any there exists such that \,.\end{aligned}\ ] ] recalling that , the solution established by theorem [ th:1 ] allows us to recover the profile of the standing layer : moreover , since , the equation can be rewritten as integrating in space on , using and that , we arrive at this nonlocal hamilton - jacobi equation is studied in , with a different class of erosion functions .assuming more erosion for large slope , i.e. , , the slope of the standing layer would blowup , leading to jumps in the standing profile .notice that , in our case , only upward jumps in can occur as singularities , which corresponds to convex kinks in the profile . about the continuous dependence notice that , when is a prescribed coefficient , the stability estimate ( [ continous - dep - on - init - data ] ) holds with , see . on the other hand , for the integral equation, one can not expect in general .indeed , a small variation in the norm of the initial data may cause a variation in the global term and then in the overall solution .however , a special case in which ( [ continous - dep - on - init - data ] ) holds with is when , which indeed is a solution of .other problems involving a nonlocal term in the flux have been considered in .well - known integro - differential equations which lead to blow up of the gradients include the camassa - holm equation and the variational wave equation .the cauchy problem for ( [ eq1 ] ) with initial data with bounded support is studied in where we use piecewise constant approximation generated by front tracing and obtain similar results .the rest of the paper is structured in the following way . as a step toward the final result , in section 2 we study the existence and well - posedness of the scalar equation ( [ eq : fixed_k - intro ] ) for a _ given _coefficient . here is a local term , and preserves the properties of the global integral term. such equation does not fall directly within the classical framework of , where more regularity on the coefficients is required ( ) .in particular , bv estimates for solutions of ( [ eq : fixed_k - intro ] ) are needed to obtain the continuous dependence on the initial data , see .we employ a fractional step argument to deal with the time dependence of , and then follow an approach similar to ( see also ) , where the authors deal with the case of .we further refer to on total variation estimates for general scalar balance laws : their result , in our context , would require more regularity ( ) on the coefficient .the properties of the integral operator , defined at , are summarized in the last appendix .in this section we study the well - posedness of the scalar equation ( [ eq : fixed_k - intro ] ) for a _ given _ coefficient , by reviewing some related results and completing the arguments where needed . throughout this section , we will use as the unknown variable . consider where satisfies the following assumptions , for some : [ cols= " < , < " , ] the above assumptions on are motivated by the properties of the integral operator , see proposition [ properties_of_k ] in the appendix .[ properties_semigroup_k_fixed ] assume satisfies and satisfies * ( k)*. 
let , be given constants .then there exist two constants and , with possibly and , and an operator \times\d_{c_0,\kappa_0}\to\d_{c_1,\kappa_1 } ] ( ) .then , the following estimate holds }{\mathrm{tv}\,}\left\{k(t,\cdot)- \tilde k(t,\cdot ) \right\ } \\ & & \qquad + ~ \hat c_2 \left(1+\sup_\tau { \mathrm{tv}\,}u(\tau,\cdot ) + \sup_\tau { \mathrm{tv}\,}\tilde u(\tau,\cdot ) \right)\|k- \tilde k\|_{\l^\infty([0,t]\times{{\mathbb{r}}}_- ) } \label{dep_on_coefficients } \,,\end{aligned}\ ] ] where is a bound on over the range of the solutions and depends on the bounds on the solutions , the coefficients and their total variation , .the ibvp can be extended to the following cauchy problem with extended initial data and the extended coefficient function due to the fact that the characteristic speed is positive , the solution for the cauchy problem restricted on will match the solution for the ibvp . in a same way , the ibvp is extended to the cauchy problem for with data . without causing confusion ,let s still denote and the solutions for and , respectively , and let and be the corresponding approximate solutions , constructed in the same way as in the proof of theorem [ properties_semigroup_k_fixed ] , with approximate coefficients and as in . denote the distance between these two solutions by notice that and that . on each time interval the coefficient is constant in time and the assumptions of proposition [ prop : klaris ] are satisfied .hence , from , we have the following estimate for some constants and that are uniform on ] that satisfies * ; * ; * .we define a sequence of approximate solution to the scalar equation . we fix and set , .the approximation is generated recursively , as starts from 0 and increases by 1 after each step .for each step with , let be defined on and set then we define on as the solution of the problem q(t_n , x)= q(t_n -,x ) \ , . &\end{array}\right.\end{aligned}\ ] ] this procedure leads to a solution operator , defined up to a certain time , of the problem q(0,x)= \bar q(x)\ , , & \end{array}\right.\end{aligned}\ ] ] where is defined by notice that the operator has the semigroup property for , .now we prove uniform bounds , independent of , on the family of approximate solutions .[ [ the - l1-bound . ] ] the bound .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + this follows by the application of in theorem [ properties_semigroup_k_fixed ] , at each time step , and the fact that is continuous in .until the solution is defined , we have [ [ lower - and - upper - bound - on - q . ] ] lower and upper bound on .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + define we observe that , by comparison with the equilibrium solution , ( i ) if then ; and ( ii ) if then for all .now consider and . choose and such that and .for example , one can take and .let be the first time that one of the following bounds fails , then , for , from the analysis of equation ( see ( [ eq : u_across_chars ] ) ) , we find that and are continuous and satisfy note that in ( [ eq - for - z ] ) we have , and in ( [ eqw ] ) we have .for , we have the estimate this gives us we conclude that the bounds in ( [ claim ] ) hold for with where yielding the lower and upper bounds . finally , if and , or if and , then we would only need to establish one of the bounds in ( [ claim ] ) , and the result follows . [ [ bounds - on - ffk . 
] ] bounds on .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + once we have a lower , upper bound on and the bound on , we immediately find that \times { { \mathbb{r}}}_- \right)\end{aligned}\ ] ] uniformly w.r.t . . by definition of , we can easily verify that the following properties hold uniformly w.r.t . : * \times { { \mathbb{r}}}_-\right) ] ; * is bounded uniformly in time .indeed , ( i ) follows from the definition of and .about ( ii ) , at each time have for some , and .then because of ( i ) and .finally where , that depends on the lower bound on .lastly , from ( i ) and one obtains a uniform bound on the characteristic speed .[ [ bound - on - the - total - variation - of - q . ] ] bound on the total variation of .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + by definition of the total variation we have , for any the total variation of does not change at time when is updated .now consider a time interval , and we estimate the change of the total variation of in this time interval .we have where here is the entropy solution to on the other hand , is a solution of using the estimate we find for a suitable constant .notice that and that & \le & \left\|\left(k_n(\cdot - h)- k_n(\cdot)\right)\,f\left(q(\tau,\cdot)\right ) \right\|_{\l^1 } + \left\| k_n(\cdot - h ) \cdot \left(f\left(q(\tau,\cdot - h)\right)- f\left(q(\tau,\cdot)\right ) \right ) \right\|_{\l^1 } \nonumber\\[2 mm ] & \le & h \|k_nf\|_\infty \cdot \left\| f\left(q(\tau,\cdot)\right)\right\|_{\l^1 } ~+~ \left\| k_n\right\|_{\l^\infty } \|f'\|_\infty\ , \left\|q(\tau,\cdot)- q(\tau,\cdot - h)\right\|_{\l^1}\,.\nonumber \end{aligned}\ ] ] in conclusion , using also , we obtain where depend only on a - priori bounded quantities . now from we obtain \,d\tau\,.\end{aligned}\ ] ] we conclude that the total variation of may grow exponentially in on each interval , but it remains bounded for any bounded time .[ [ convergence - to - weak - solutions - existence - of - bv - solutions . ] ] convergence to weak solutions ; existence of bv solutions .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + now , without causing confusion , we will use for the approximate solution , where is the step size .let be the approximated coefficient of the equation , defined in . by compactness, there exists a subsequence of , as , that converges to a limit function in .let be the integral term , , corresponding to , which is uniformly bounded as well as the .we have \,d\xi \right\}\end{aligned}\ ] ] that vanishes as . therefore we can pass to the limit in the weak formulation .on the interval ] . summing this up over , we get \ , dx\,.\end{aligned}\ ] ] since in , in , pointwise and , uniformly bounded , by dominated convergence we can take the limit as and have the convergence of to \ , dx \ , dt & = & \int_{-\infty}^0 \left [ q \phi(t , x ) - q \phi(0,x ) \right ] \ , dx\,.\end{aligned}\ ] ] this completes the proof of existence of bv solutions for ( [ eq1 ] ) . once the bv solutions exist locally in time, we can further show that they enjoy better properties than the ones deduced from the approximate solutions . in particular we show that the lower and upper bounds on do not depend on time , leading to global in time existence of bv solutions .let be an entropy weak solution of on \times{{\mathbb{r}}}_- ] . by setting , we have q'(t ) & = & -k_x(t , x(t ) ) f(q ) = k f(q)^2 \ge 0\ , , \end{array } \right . 
& & \begin{array}{l } x(\bar t)=\bar x\,,\\[2 mm ] q(\bar t ) = q(\bar t , \bar x-)\ , .\end{array } \label{eq : q - along - chars } \end{aligned}\ ] ] we see that the solution is non - decreasing along any characteristics .therefore , we have for all . [ [ upper - bound - on - q . ] ] upper bound on .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + again , consider a point and let be the minimal backward characteristic through it . from the second equation in ( [ eq : q - along - chars ] ) we see that if , then as .now consider , and we have for all .define that satisfies using ( [ kruzkov ] ) with , we have the variation of along the characteristic is here we remove the absolute value signs because .then , ( [ w - along - chars ] ) implies that along characteristics .this gives the bound where can be chosen independently of .recalling , we have therefore , ( [ qbound ] ) implies an upper bound for for all .the uniform bound on the total variation follows because the constants in are now bounded uniformly in time . in this sectionwe prove the last part of theorem [ th:1 ] , showing that the flow generated by the integro - differential equation ( [ eq1 ] ) is lipschitz continuous , restricted to any domain of functions satisfying the following uniform bounds in , for some constants , . consider two solutions , of the integro - differential equation ( [ eq1 ] ) , say with initial data and satisfying the conditions in ( [ def : cald ] ) for ] ,let be the solution of the conservation law observe that , for each fixed , the distance between any two entropy - admissible solutions of the conservation law ( [ 7.5 ] ) is non - increasing in time .in particular , for , call the solution of with initial data ( see figure [ figl ] ) .we have q_1(t,)-q(t,)_^1(_- ) |q_1-|q_2_^1(_- ) t. for the integro - differential equation.,width=377 ] moreover we can use the lipschitz property of the solution operator for ( [ eq1 ] ) with fixed , and get the distance estimate q(t,)- q_2(t,)_^1(_- ) _ 0^t e()d , where indeed , observe that whenever , for any ] . from the bounds one easily deduces that by the assumptions on we find that hence the integral term is bounded and satisfies moreover , for all we have \,d\xi\right| & \le & \|f(q(t_1,\cdot ) ) - f(q(t_2,\cdot))\|_{\l^1({{\mathbb{r}}}_- ) } ~\le~ l |f'(\kappa_0)| \cdot |t_1 - t_2|\ , .\nonumber\end{aligned}\ ] ] this leads to the lipschitz continuity in for .namely , for all we have \,d\xi\right|\le\hat l ~~ |t_1-t_2|\,.\ ] ] here the lipschitz constant depends on the parameters , , . from the definition of ,the derivative function satisfies this immediately shows three facts : ( i ) is lipschitz in space variable , ( ii ) where the bv bounds are uniform in , and ( iii ) . fromwe get the estimate on the total variation of with depending on the parameters .finally , we show that \ni t\to k_x(t,\cdot)\in\l^1({{\mathbb{r}}}_-)$ ] is lipschitz continuous . by using , and ,one has with depending on the parameters , , .this paper was started as part of the international research program on nonlinear partial differential equations at the centre for advanced study at the norwegian academy of science and letters in oslo during the academic year 200809 .the first author would like to acknowledge also the kind hospitality of the department of mathematics , university of ferrara .the work of the second author is partially supported by nsf grant dms-0908047 .karlsen , k .- h . and risebro , n .- h . 
; on the uniqueness and stability of entropy solutions of nonlinear degenerate parabolic equations with rough coefficients . _ discrete contin . dyn . syst . _ * 9 * ( 2003 ) , 1081 - 1104 .
we study a scalar integro - differential conservation law . the equation was first derived in as the slow erosion limit of granular flow . considering a set of more general erosion functions , we study the initial boundary value problem for which one can not adapt the standard theory of conservation laws . we construct approximate solutions with a fractional step method , by recomputing the integral term at each time step . a - priori bounds and bv estimates yield convergence and global existence of bv solutions . furthermore , we present a well - posedness analysis , showing that the solutions are stable in with respect to the initial data .
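to make the fractional - step construction described above more tangible , here is a small numerical sketch in python . it is only an illustration under assumed ingredients : a model of the form q_t + ( k f(q) )_x = 0 on x < 0 with k(t , x ) = exp( \int_x^0 f(q(t , s ) ) ds ) , which is consistent with the characteristic relation k_x = -k f(q) quoted in the proof , together with an invented erosion function f(q) = q , grid , initial and inflow data ; none of these choices are taken from the paper . on every step the coefficient k is frozen , the conservation law is advanced by a first - order upwind scheme ( the characteristic speed is positive ) , and k is then recomputed from the updated profile , mirroring the recomputation of the integral term at each time step .

....
import numpy as np

# toy stand-in for the slow-erosion model (illustrative assumptions only):
#   q_t + ( k(t,x) f(q) )_x = 0   on  x < 0 ,
#   k(t,x) = exp( int_x^0 f(q(t,s)) ds ) ,  so that  k_x = -k f(q) ,
#   f(q) = q  (assumed erosion function, not taken from the paper)
f  = lambda q: q
df = lambda q: np.ones_like(q)

L, N = 10.0, 400                              # domain (-L, 0) split into N cells
dx = L / N
x = -L + (np.arange(N) + 0.5) * dx            # cell centres

q = 0.05 + 0.02 * np.exp(-(x + 5.0) ** 2)     # small positive initial data
q_in = 0.05                                   # inflow value at the left boundary

def coefficient(q):
    """k_i ~ exp( int_{x_i}^0 f(q) ds ), via a right-to-left cumulative sum."""
    return np.exp(np.cumsum((f(q) * dx)[::-1])[::-1])

T, t = 1.0, 0.0
while t < T:
    k = coefficient(q)                        # fractional step: recompute the integral term
    speed = np.max(k * np.abs(df(q))) + 1e-12
    dt = min(0.4 * dx / speed, T - t)         # CFL-limited time step
    F = k * f(q)                              # flux with k frozen on [t, t+dt)
    F_left = np.concatenate(([k[0] * f(q_in)], F[:-1]))
    q = q - dt / dx * (F - F_left)            # first-order upwind (positive speed)
    t += dt

print("final total mass:", q.sum() * dx)
....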
with the advent of initiatives like open data and new data publication paradigms as linked data , the volume of data available as rdf datasets in the semantic web has grown dramatically .projects such as the linking open data community ( lod ) encourage the publication of open data using the linked data principles which recommend using rdf as data publication format . by september 2010 ( last update of the lod diagram ) , more than 200 datasets were available at the lod site , which consisted of over 25 billion rdf triples .this massive amount of semi - structured , interlinked and distributed data publicly at hand , faces the database community with new challenges and opportunities : published data need to be loaded , updated , and queried efficiently .one question that immediately arises is : could traditional data management techniques be adapted to this new context , and help us deal with problems such as data integration from heterogeneous and autonomous data sources , query rewriting and optimization , control access , data security , etc . ?in particular , in this paper we address the issue of view definition mechanisms over rdf datasets .rdf datasets are formed by triples , where each triple _ ( s , p , o ) _ represents that subject _ s _ is related to object _ o _ through the property _p_. usually , triples representing schema and instance data coexist in rdf datasets ( these are denoted tbox and abox , respectively in description logics ontologies ) .a set of reserved words defined in rdf schema ( called the rdfs - vocabulary) is used to define classes , properties , and to represent hierarchical relationships between them .for example , the triple _( s , ` rdf : type ` , c ) _ explicitly states that _ s _ is an instance of _ c _ but it also implicitly states that object _c _ is an instance of ` rdf : class ` since there exists at least one resource that is an instance of _ c _ ( see section [ sec2:rdf ] for further details on rdf ) .the standard query language for rdf data is sparql , which is based on the evaluation of graph patterns ( see below for examples on sparql queries ) .although view definition mechanisms for rdf have been discussed in the literature , there is no consensus on what a view over rdf should be , and the requirements it should fulfill .moreover , although we could expect views to be useful over the web of linked data , as they have proved to be in many traditional data management application scenarios ( e.g. , data integration , query answering ) there is no evidence so far that this will be the case in the near future . in this workwe discuss the usage of views in those scenarios , and study current rdf view definition mechanisms , with focus on key issues such as expressiveness , scalability , rdfs inference support and the integration of views into existent tools and platforms .the dbtune project gathers more than 14 billion triples from different music - related websites .figure [ fig.lod ] presents a lod diagram that represents dbtune datasets ( purple nodes ) , their inter - relationships and the relationships with other lod datasets ( white nodes ) .each of the datasets included in the dbtune project has its own particularities .for instance , their structures or schemas differ from each other . 
this is because although dbtune datasets are described in terms of concepts and relationships defined in the music ontology ( mo ) , they do not strictly adhere to it , producing semantic and syntactic heterogeneities among them .we have selected three datasets from the dbtune project : bbc john peel sessions dataset , the jamendo website dataset and the magnatune record label dataset ( section [ sec5:datasets ] presents detailed information on this selection process , and explains the rationale behind this decision ) .information about the ` schema ' of the datasets can be extracted by means of sparql queries .figure [ fig : sourcesch ] presents a graphical representation of this information . in these graphs , light grey nodes represent classes for which at least one instance is found in the dataset ( we denote them used classes ) , dark grey nodes represent classes from the mo that are related to used classes ( either as subclasses or superclasses ) , solid arcs represent predicates between used classes , and dashed arcs represent the ` rdfs : subclassof ` predicate .predicates that relate classes with untyped uris are represented in italics .appendix [ sec : app1 ] describes how these graphs have been constructed .figure [ fig : sourcesch ] shows that there are differences between the schemas of each data source .let us consider , for example , the representation of the authoring relationship between _ musicartists _ and _ records_. in the jamendo dataset this relationship is represented using the ` foaf : made ` predicate ( figure [ fig : jamendo ] ) that connects artists with their records but also using its inverse relationship , namely the ` foaf : maker ` predicate between _ records _ and _ musicartists_. although these two relationships are the inverse of each other , no assumption can be made on the consistency of data , namely that the existence of a triple ( _ jam : artist1 foaf : made jam : record1 _ ) does not enforce the existence of another triple of the form ( _ jam : record1 foaf : maker jam : artist1 _ ) . in the magnatune dataset _ musicartists _ and_ records _ are related using the ` foaf : maker ` predicate ( figure [ fig : magnatune ] ) .we next present some use cases over the selected datasets that show how the notion of view ( in the traditional sense ) could be applied .* use case 1 : retrieving artists and their records .* a user needs to collect information about artists and their records . to fulfill this simple requirement ,a not trivial sparql query must be written .this query must take into consideration all the different representations of the relationship between artists and records in each dataset .example [ fig.uc1select ] presents a sparql query that returns the expected answer .a sparql 1.0 ` select ` query that retrieves artists and their records .+ .... select distinct ?record from named < http://dbtune.org/jamendo > from named < http://dbtune.org/magnatune > where { { graph < http://dbtune.org/jamendo/ > { ?artist foaf : made ?record . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } union { graph < http://dbtune.org/jamendo/ > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } union { graph < http://dbtune.org/magnatune/ > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } } .... [ fig.uc1select ] sparql queries are too complex to be written by an end user , and require a precise knowledge of the schema . 
therefore , it would be desirable to somehow provide a uniform representation of this relationship in order to simplify querying the integrated information .several strategies could be used to provide a uniform view of all datasets .one possibility would be to materialize the missing triples , which in this case leads to the creation of new triples in the magnatune and jamendo datasets .( record , ` foaf : maker ` , artist ) _ triple that relates a record _ record _ with an artist _ artist _ , a new triple _ ( artist , ` foaf : made ` , record ) _ must be added to the dataset .this strategy would be hard to maintain and could also interfere with the independence of the sources . to avoidmaintenance issues , approaches that dynamically generate virtual triples are needed .some of them use reasoning and rules to create mappings between concepts and infer knowledge that is not explicitly stated .another approach could be to build new graphs that encapsulate underlying heterogeneities .for instance , sparql ` construct ` queries return graphs dynamically created from existent ones and allow the creation of new triples as the next example shows .the following sparql ` construct ` query returns a graph that contains all the _( artist , ` foaf : made ` , record ) _triples from the jamendo dataset but also generates new triples .that is , for each _( record , ` foaf : maker ` , artist ) _ triple in the magnatune and jamendo datasets it creates a _ ( artist , ` foaf : made ` , record ) _triple ) ( i.e , the query of example [ fig.uc1select ] ) ..... construct { ?artist foaf : made ?record } from named < http://dbtune.org/jamendo > from named < http://dbtune.org/magnatune > where { { graph < http://dbtune.org/jamendo > { ?artist foaf : made ?record . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } union { graph < http://dbtune.org/jamendo > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } union { graph < http://dbtune.org/magnatune > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } } .... [ fig.uc1construct ] let us suppose that now our user wants to reutilize this query to retrieve the title of each record made by an artist .although the query in example [ fig.uc1construct ] generates a new graph , sparql does not provide mechanisms to pose queries against dynamically generated graphs ( e.g. , using graphs as sub - queries in the ` from ` clause ) . to answer this query in sparql 1.0 existent queriescan not be reused , and a new query must be formulated ( see next example ) .the sparql 1.0 ` select ` query below , retrieves artists , records and record titles ..... select distinct ?title from < http://dbtune.org/jamendo > from < http://dbtune.org/magnatune > from named < http://dbtune.org/jamendo > from named < http://dbtune.org/magnatune > where { ?record dc : title ?title . { graph < http://dbtune.org/jamendo/ > { ?artist foaf : made ?record . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record . } } union { graph < http://dbtune.org/jamendo/ > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record . } } union { graph < http://dbtune.org/magnatune/ > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record . } } } .... 
[ fig.uc1sparql10app ] the sparql 1.1 proposal ( see section [ sec2 ] ) partially supports sub - queries , allowing only ` select ` queries to be part of the ` where ` clause .existent ` construct ` queries can not be reused either in the ` from ` clause ( e.g. : as datasets ) nor in the ` where ` clause ( e.g. : as graph patterns ) .example [ fig.uc1sparql11app ] presents a sparql 1.1 ` select ` query that retrieves artists , their records and their titles .it shows that , in order to reuse the query presented in example [ fig.uc1select ] , the code must be ` copy - pasted ' , which is hard to maintain , error - prone , and limits the use of optimization strategies based on view materialization .a sparql 1.1 ` select ` query that retrieves artists , records and record titles . ....recordtitle where { ?record dc : title ?record from < http://dbtune.org/magnatune > where { ?record foaf : maker ?artist . ?artist a mo : musicartist . ?record a mo : record } } union { select ?record from < http://dbtune.org/jamendo > where { ?artist foaf : made ?record . ?artist a mo : musicartist . ?record a mo : record } } union { select ?record from < http://dbtune.org/jamendo > where { ?record foaf : maker ?artist . ?artist a mo : musicartist . ?record a mo : record } } } .... [ fig.uc1sparql11app ] in light of the above , sparql extensions have been proposed to allow ` construct ` queries to be used as subqueries .for instance , networked graphs ( ng ) allow defining and storing graphs for later use in other queries .example [ fig.uc1ngdef ] shows , using rdf trig syntax , how the graph in example [ fig.uc1select ] can be implemented using ngs .an ng is defined by means of an rdf triple whose subject is the uri that identifies the graph , its predicate is denoted ` ng : definedby ` , and its object is a string that represents the ` construct ` query that will be evaluated at runtime , and whose results will populate the graph .applying networked graphs to use case 1 : definition .... def : query1 { def : query1 ng : definedby `` construct { ?artist foaf : made ?record } where { { graph < http://dbtune.org/jamendo/ > { ?artist foaf : made ?record . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } union { graph < http://dbtune.org/jamendo/ > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } union { graph < http://dbtune.org/magnatune/ > { ?record foaf : maker ?artist . ?artist rdf : type mo : musicartist . ?record rdf : type mo : record } } } ' ' ^^ng : query } .... [ fig.uc1ngdef ] once defined , the ng can be reused in further queries .example [ fig.uc1ngapp ] presents a sparql query that uses the previously defined ng , encapsulating the different representations of the relationship between artists and their records . applying networked graphs to use case 1 : usage ....select distinct ?recordtitle where { ?record dc : title ?recordtitle .{ graph < http://definedviews / query1 > { ?artist foaf : made ?record } } } .... [ fig.uc1ngapp ] * use case 2 : musical manifestations and their authors .* let us now consider that the user wants to retrieve information about all musical manifestations stored in the datasets .figure [ fig : sourcesch ] shows that there are no instances of the _ musicalmanifestation _ class in the datasets but there are instances of two of their sub - classes : _ record _ and _ track_. sparql supports different entailment regimes , in particular rdf , rdfs , and owl . 
under rdfsentailment the application of inference rules generates results that are not explicitly stated in the datasets . for example, one of such rules allows inferring that , since _record _ and _ track _ are sub - classes of _ musicalmanifestation _ all the instances of _ record _ and _ track _ are also instances of _musicalmanifestation_. we take a closer look at inference mechanisms in section [ sec2:rdf ] example [ fig.usecase2 ] shows a sparql ` construct ` query that creates a graph that contains all the musical manifestation instances and for each instance its author , in case available . since _record _ and _ track _ are sub - classes of _ musicalmanifestation _ , all instances of the former two are also instances of the latter .thus , they should appear in the resulting graph .this query can be stored using ngs or implemented using sparql++ .we discuss sparql++ later in this paper .musical manifestations and their authors . ....construct { ?mm rdf : type mo : musicalmanifestation . ?mm foaf : maker ?artist } where { ?mm rdf : type mo : musicalmanifestation . optional { ?mm foaf : maker ?artist } .optional { ?mm a mo : track . ?record mo : track ?mmanifestation . ?record foaf : maker ?artist } . } .... [ fig.usecase2 ] this use case exemplifies a problem orthogonal to the one stated in use case 1 : the need of support entailment regimes in sparql implementations and in view definition mechanisms .although these mechanisms , at first sight , seem to solve the problems above , little information can be found in the literature regarding how to use them , the volume of data they can handle and also on the restrictions that may apply to the queries they support .the purpose of this work is two - fold .first , study different application scenarios in which views over rdf datasets could be useful ; second , discuss to what extent existent view definition mechanisms can be used on the described scenarios .this paper is aimed at providing an analysis of the state - of - the - art in view definition mechanisms over rdf datasets , and identifying open research problems in the field .we first introduce the basic concepts on rdf , rdfs and sparql ( section [ sec2 ] ) . in section [ sec3 ] , to give a framework to our study , we propose a definition of views over rdf datasets , along with four scenarios in which views have been traditionally applied in relational database systems . in section [ sec4 ]we study current view definition mechanisms , with a focus on the three ones that fulfill most of the conditions of our definition of views , and support the scenarios mentioned above .these proposals are sparql++ , networked graphs , and vsparql .we also provide a wider view , discussing other proposals in the field . 
in section [ sec5 ]we analyze the three selected proposals with respect to four goals : sparql 1.0 support , inference support , scalability , and facility for integration with existent platforms .we also perform experiments over the current a networked graphs implementation .finally , in section [ sec6 ] we present our conclusions and analyze open research directions .to make this paper self - contained in this section we present a brief review of basic concepts on rdf , rdfs and sparql .the resource description framework ( rdf ) is a data model for expressing assertions over resources identified by an universal resource identifier ( uri ) .assertions are expressed as _ subject - predicate - object _triples , where _ subject _ are always resources , and _ predicate _ and _ object _ could be resources or strings . _blank nodes _ ( _ bnodes _ ) are used to represent anonymous resources or resources without an uri , typically with a structural function , e.g. , to group a set of statements .data values in rdf are called _ literals _ and can only be _ objects _ in triples . a set of rdf triples or _rdf dataset _ can be seen as a directed graph where _ subject _ and _ object _ are nodes , and _ predicates _ are arcs . formally : consider the following sets u ( uri references ) ; ( blank nodes ) ; and l ( rdf literals ) .a triple is called an rdf triple .we denote ubl the union u b l. an rdf graph is a set of rdf triples .a subgraph is a subset of a graph .a graph is _ ground _ if it has no blank nodes .although the standard rdf serialization format is rdf / xml , several formats coexist in the web such as ntriples , turtle , n3 , trig , and several serialization formats over json .rdf schema ( rdfs ) is a particular rdf vocabulary supporting inheritance of classes and properties , as well as typing , among other features . in this workwe restrict ourselves to a fragment of this vocabulary which includes the most used features of rdf , contains the essential semantics , and is computationally more efficient than the complete rdfs vocabulary this fragment , called , contains the following predicates : rdfs : range ` [ range ] ` , rdfs : domain ` [ dom ] ` , rdf : type ` [ type ] ` , rdfs : subclassof ` [ sc ] ` , and rdfs : subpropertyof ` [ sp ] ` .the following set of rules captures the semantics of and allows reasoning over rdf .capital letters represent variables to be instantiated by elements of ubl .we use this subset of rdfs for addressing inference capabilities in view definitions .sparql is a query language for rdf graphs , which became a w3c standard in 2008 .the query evaluation mechanism of sparql is based on subgraph matching : rdf triples in the queried data and a query pattern are interpreted as nodes and edges of directed graphs , and the query graph is matched to the data graph , instantiating the variables in the query graph definition .the selection criteria is expressed as a graph pattern in the ` where ` clause , and it is composed of basic graph patterns defined as follows : sparql queries are built using an infinite set v of variables disjoint from ubl . a variable v v is denoted using either ? or $ as a prefix .a triple pattern is member of the set , that binds variables in v to rdf terms in the graph .a basic graph pattern ( bgp ) is a set of triple patterns connected by the ` . 'operator . 
basic graph patterns are combined into more complex graph patterns , namely : * * group graph patterns * , a graph pattern containing multiple graph patterns that must all match , * * optional graph patterns * , a graph pattern that may match and extend the solution , but will not cause the query to fail , * * union graph patterns * , a set of graph patterns that are tried to match independently , and * * patterns on named graphs * , a graph pattern that is matched against named graphs . sparql provides four query forms : * ` select ` , which returns a set of the variables bound in the query pattern , * ` construct ` , which returns an rdf graph constructed by substituting variables in a set of triple templates , * ` ask ` , which returns a boolean value indicating whether a query pattern matches or not , and * ` describe ` , which returns an rdf graph that describes resources found .
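to make the ` construct ` - query - as - view idea of use case 1 more tangible , the following python sketch materializes such a view over a tiny in - memory graph and then re - queries it . it uses the rdflib library and assumes its usual behaviour of yielding the constructed triples when iterating over a ` construct ` query result ; the data , prefixes and resources are invented for the example and do not come from the dbtune datasets .

....
from rdflib import Graph

# tiny in-memory stand-in for the jamendo / magnatune data (all triples invented)
data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix mo:   <http://purl.org/ontology/mo/> .
@prefix ex:   <http://example.org/> .

ex:artist1 a mo:MusicArtist ; foaf:made ex:record1 .
ex:record1 a mo:Record .
ex:artist2 a mo:MusicArtist .
ex:record2 a mo:Record ; foaf:maker ex:artist2 .
"""
g = Graph()
g.parse(data=data, format="turtle")

# the 'view': normalize both representations to foaf:made, as in use case 1
view_query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX mo:   <http://purl.org/ontology/mo/>
CONSTRUCT { ?artist foaf:made ?record }
WHERE {
  { ?artist foaf:made ?record } UNION { ?record foaf:maker ?artist } .
  ?artist a mo:MusicArtist .
  ?record a mo:Record .
}
"""
view = Graph()
for triple in g.query(view_query):        # a CONSTRUCT result iterates over triples
    view.add(triple)

# the materialized view can now be queried again, which plain SPARQL 1.0 cannot do
select_query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?artist ?record WHERE { ?artist foaf:made ?record }
"""
for artist, record in view.query(select_query):
    print(artist, record)
....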
views on rdf datasets have been discussed in several works , nevertheless there is no consensus on their definition nor the requirements they should fulfill . in traditional data management systems , views have proved to be useful in different application scenarios such as data integration , query answering , data security , and query modularization . in this work we have reviewed existent work on views over rdf datasets , and discussed the application of existent view definition mechanisms to four scenarios in which views have proved to be useful in traditional ( relational ) data management systems . to give a framework for the discussion we provided a definition of views over rdf datasets , an issue over which there is no consensus so far . we finally chose the three proposals closer to this definition , and analyzed them with respect to four selected goals .
in relativistic heavy - ion collisions the azimuthal anisotropy of the produced particles as a function of transverse momentum has emerged as the most renowned observable to study the collective properties of nuclear matter . due to the collision geometry in non - central heavy - ion collisions ,the initial volume containing the interacting nuclear matter is anisotropic in coordinate space .of particular interest is the scenario in which the produced nuclear matter managed to thermalize in this anisotropic volume , causing its initial anisotropy from the coordinate space to be transfered via mutual interactions into the resulting and observable anisotropy in momentum space .we refer to this phenomenon in this work as _ collective anisotropic flow _ , or just simply as _flow_. clearly , collective anisotropic flow is a direct probe of the degree of thermalization of the produced matter , and correspondingly an indirect probe of its transport properties ( e.g. viscosity ) .whatever its underlying cause is , the resulting anisotropic distribution in momentum space can always be expanded into fourier series : \right]\ , .\label{eq : fourier}\ ] ] the first few coefficients ( harmonics ) in the above series have by now been thoroughly studied by experimentalists as well as theorists : the first coefficient , is usually referred to as _ directed flow _ , the second coefficient , , is referred to as _ elliptic flow _ , the third coefficient , is referred to as _ triangular flow _ , etc . denotes the _ symmetry plane _ of the harmonic ( in general different harmonics will have different symmetry planes ) . denotes the azimuthal angles of the produced particles . for the case of an idealized initial geometry in heavy - ion collisions ,all symmetry planes coincide and are equal to the _ reaction plane _ of the collision ( a plane spanned by the impact parameter and the beam axis ) . given the above fourier series expansion , one can show , using just the orthogonality properties of trigonometric functions , that \right>\ , , \label{eq : vn}\ ] ] where angular brackets denote an average over all particles in an event . due to only mathematical steps involved in its derivation, we stress that eq .( [ eq : vn ] ) _ per se _ has no physical meaning .in particular , eq .( [ eq : vn ] ) can give rise to non - vanishing flow harmonics irrespectively of whether the azimuthal anisotropy in the momentum distribution has its origin in collective anisotropic flow or in some other completely unrelated physical process which can also yield event - by - event anisotropies ( e.g. mini - jets ) .we now attempt to attach a more rigorous treatment to the concept of collectivity " by discussing which tools and observables we can utilize experimentally in order to disentangle it from processes which generally involve only a small subset of the produced particles , generally termed nonflow " . in order to make a statement on whether the harmonics in eq .( [ eq : fourier ] ) are dominated by contributions from collective anisotropic flow or by some other processes which are non - collective in nature , we can use correlation techniques involving two or more particles . in this paperour main focus will be on the latter , to which we refer to as _ multi - particle correlation techniques_. 
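several of the inline formulas above were lost in extraction ; for orientation , the two quoted relations in their standard form ( restated here from the usual conventions of the field rather than copied from the source ) read :

% standard azimuthal Fourier decomposition (cf. eq. (fourier)) and the flow
% coefficients obtained from orthogonality (cf. eq. (vn)); Psi_n is the n-th
% symmetry plane and <...> averages over the particles of one event
\begin{align}
  \frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty}
      v_n \cos\!\bigl[n(\varphi-\Psi_n)\bigr] ,
  \label{eq:fourier-restated}\\
  v_n \;=\; \bigl\langle \cos\!\bigl[n(\varphi-\Psi_n)\bigr] \bigr\rangle .
  \label{eq:vn-restated}
\end{align}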
when only collective anisotropic flow is present , all produced particles are independently emitted , and are correlated only to some common reference planes .this physical observation translates into the following mathematical statement : the left - hand side of eq .( [ eq : factorization ] ) is a joint multi - variate probability density function ( p.d.f . ) of observables .the right - hand side of eq .( [ eq : factorization ] ) is the product of the normalized marginalized p.d.f , , where , which are the same and are given by eq .( [ eq : fourier ] ) . therefore ,when all particles are emitted independently , as is the case for collective anisotropic flow , the joint p.d.f . for _ any _ number of particles will factorize as in eq .( [ eq : factorization ] ) . based on this reasoning, one can build up , in principle , infinitely many independent azimuthal observables sensitive to various combinations of flow harmonic moments and corresponding symmetry planes by adding more and more particles to the observables .when flow harmonics fluctuate event - by - event , different underlying p.d.f.s of flow fluctuations will result in different values of flow harmonic moments and corresponding symmetry planes .this illustrates our main point : in order to determine the underlying p.d.f . of flow fluctuations , oneis necessarily led towards multi - particle correlation techniques .we will elaborate on this point in detail and generalize it further in the main part of the paper . for completeness, we now present the historical overview of the utilization of multi - particle correlation techniques in anisotropic flow analyses , together with all of the technical limitations and issues inherit to them , which this paper overcomes .multi - particle correlation techniques in anisotropic flow analyses have been used for more than three decades . in the theoretical studies of global eventshapes and in the subsequent study presented in , the joint multi - variate p.d.f . of particles for an event with multiplicity was utilized in flow analyses for the first time . on the other hand , the very first experimental attempt to go beyond two - particle azimuthal correlations date back to bevalac work published in . in that paper , a quantitative description of collectivitywas attempted by generalizing the observable for two - particle correlations , namely the smaller angle between the transverse momenta of two produced particles , into the geometric mean of azimuthal separations within the -particle multiplet .however , it was realized immediately that the net contribution of low - order few - particle correlations is cumulative if one increases the number of particles in such multiplets , which triggered the demand for more sophisticated techniques that would instead suppress systematically such contributions for increasingly large multiplets .this was pursued further in a series of papers on multi - particle correlations and cumulants by borghini _et al _ ( for a summary of the mathematical and statistical properties of cumulants we refer the reader to ) . in the first paper of the series , borghini _ etal _ defined cumulants in the context of flow analyses in terms of the moments of the distribution of the -vector amplitude . 
as a landmark of their approach ,the authors have introduced a formalism of generating functions accompanied with interpolation methods in the complex plane as the simplest and fastest way to calculate cumulants from experimental data .the formalism of generating functions is particularly robust against biases stemming from non - uniform detector acceptance , which is frequently the dominant systematic bias in anisotropic flow analyses .however , there were some serious drawbacks , which were recognized and discussed already by the authors in the original paper .most notably , both two- and multi - particle cumulants were plagued by trivial and non - negligible contributions from autocorrelations , which caused an interference between the various harmonics .this led the authors to propose an improved version of the generating function in , which by design generated cumulants free from autocorrelations .in essence , the way cumulants were defined conceptually has changed between the two papers : in cumulants were defined directly in terms of multi - particle azimuthal correlations , which are free from autocorrelations by definition , while in cumulants were defined in terms of the moments of the distribution of the -vector amplitude , which by definition have contributions from autocorrelations . both methods to calculate cumulantswere capable of estimating reference and differential flow .further improvement , still relying on the formalism of generating functions , came with the lee - yang zero ( lyz ) method , which isolates the genuine multi - particle estimate for flow harmonics , corresponding to the asymptotic behavior of the cumulant series .the formalism of generating functions , however , has its own built - in systematic biases .most importantly , the proposed interpolating methods in the complex plane to calculate cumulants are not numerically stable for all values of flow harmonics and multiplicity ( parameter has to be tuned " ) ; in addition , one never exactly recovers the cumulants as they are defined ( the series expansion of the generating functions has to be terminated manually at a certain order , in order to close the coupled system of equations for the cumulants " ) ; finally , the formalism as presented in these papers is limited to the cases where all harmonics in multi - particle correlators coincide .a notable alternative cumulant approach in terms of implementation was used in , which , at the expense of reducing statistics , removed autocorrelations by explicitly constructing multiple subevents from the original event .these limitations were removed partially with -cumulants ( qc ) published recently in , which do not rely on the formalism of generating functions , but instead utilize voloshin s original idea of expressing multi - particle azimuthal correlations analytically in terms of -vectors evaluated ( in general ) in different harmonics .-cumulants , however , are very tedious to calculate analytically , and such calculations were accomplished only for a rather limited subset of multi - particle azimuthal correlations which have been most frequently used in anisotropic flow analyses to date .the present paper surpasses completely all technical limitations of these previous publications and provides a _ generic framework _ allowing _ all _ multi - particle azimuthal correlations to be evaluated analytically , with a fast single pass over the data , free from autocorrelations by definition , and corrected for systematic biases due to various detector 
inefficiencies ( e.g. non - uniform azimuthal acceptance , -dependent reconstruction efficiency , finite detector granularity , etc . ) . with this framework ,a plethora of new multi - particle azimuthal observables are now accessible experimentally . in this paperwe propose and discuss some new concrete examples ( so - called _ standard candles _ ) .we have paid special attention to the development of algorithms , which can be used to calculate recursively higher - order multi - particle azimuthal correlators in terms of lower - order ones , for the cases when their standalone generic formulae are too long and impractical for direct use and implementation .finally , we point out the existence of a peculiar systematic bias in traditional differential flow analyses , when all particles are divided into the two groups of reference particles ( rp ) and particles of interest ( poi ) .this systematic bias stems solely from the selection criteria for rps and pois , and is present also in the ideal case when all nonflow correlations are absent .the paper is organized as follows . in section [s : two- and multi - particle azimuthal correlations ] , we introduce two- and multi - particle azimuthal correlations , motivate and discuss their usage in anisotropic flow analyses , and point out the technical issues which plagued their evaluation in the past . in section [ s : generic equations ], we outline our new generic framework which enables exact and fast evaluation of all multi - particle azimuthal correlations , and can also be used to correct for systematic biases due to various detector inefficiencies . in section [ s : monte carlo studies ]we use two toy monte carlo studies to demonstrate the framework s ability to correct for biases due to non - uniform azimuthal acceptance and non - uniform reconstruction efficiency .we then use a realistic monte carlo to demonstrate its usage in the measurement of some new flow observables that we propose and discuss in detail . in section [s : detectors with finite granularity ] , we point out how biases due to finite granularity of the detector must be considered and corrected for in the measurement of multi - particle azimuthal correlations .finally , in section [ s : systematic bias due to particle selection criteria ] , we discuss the systematic bias which is present in traditional differential flow analyses even when all nonflow correlations are absent , but arise from the selection criteria of particles used for the differential flow analysis . in the appendices we present all technical steps in detail .we consider two- and multi - particle azimuthal correlations measured event - by - event as our basic observables whose moments can be related to moments of the flow harmonics and the corresponding symmetry planes .this relation can be illustrated with the simple example of the two - particle azimuthal correlation of harmonics and .for the dataset consisting of azimuthal angles we have : the constraint removes contributions from autocorrelations in each sum by definition . using the factorization property in eq .( [ eq : factorization ] ) for the case of joint two - particle p.d.f . and using the orthogonality properties of trigonometric functions , one can show that the first and second moment of are given as : these are the analytic expressions for the mean and variance of the two - particle azimuthal correlations , which are valid for the general case when the fourier - like p.d.f .( [ eq : fourier ] ) is parametrized with all harmonics . 
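the displayed formulas of this two - particle example did not survive extraction either ; a minimal restatement , in the standard notation of the flow literature , with unit particle weights and the harmonics chosen as ( n , -n ) for the first moment , is :

% single-event two-particle correlation (unit weights, multiplicity M) and its
% first moment for harmonics (n, -n); the k != l constraint removes
% autocorrelations, and the mean follows from factorization plus orthogonality
\begin{align}
  \langle 2 \rangle_{n_1,n_2}
    \;=\; \frac{1}{M(M-1)} \sum_{\substack{k,l=1\\ k\neq l}}^{M}
          e^{\,i(n_1\varphi_k + n_2\varphi_l)} ,\\
  \mathrm{E}\bigl[\langle 2 \rangle_{n,-n}\bigr] \;=\; v_n^{2} .
\end{align}

the corresponding second moment follows from the same factorization and orthogonality argument but involves higher harmonics and explicit multiplicity terms , and is not restated here .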
motivated with the previous simple example , we now introduce our main observables , namely multi - particle azimuthal correlations , in a generic way .the average -particle correlation in harmonics is given by the following generic definition : in the above definition , is the multiplicity of an event , labels the azimuthal angles of the produced particles , while labels particle weights whose physical meaning and use cases will be elaborated on .we have in summation enforced the condition in order to remove the trivial and non - negligible contributions from all possible autocorrelations ( self - correlations ) by definition in all summands .we stress that we consider any correlation technique utilized in anisotropic flow analyses to be unsound and unusable if it has any kind of contribution stemming from autocorrelations .particle weights appearing in definition ( [ eq : mpcorrelation ] ) can be used to remove systematic biases originating from detector inefficiencies of various types .well known examples of particle weights are so - called -weights , , which deal with the systematic bias due to non - uniform acceptance in azimuth , and -weights , , which deal with the non - uniform transverse momentum reconstruction efficiency of produced particles . in general , we allow the particle weight to be the most general function of the azimuthal angle , transverse momentum , pseudorapidity , particle type , etc . : the new generic framework presented in this paper allows one to use the above general particle weights for any multi - particle azimuthal correlation . in subsequent sections in toy monte carlo studies we provide two concrete examples .we can straightforwardly relate various moments of the observables defined in eq .( [ eq : mpcorrelation ] ) to various moments of the harmonics and the symmetry planes .in particular , relying solely on factorization as in eq .( [ eq : factorization ] ) and orthogonality properties of trigonometric functions , the following analytic expression follows for the first moment : this result was first presented in . when the averaging is extended to all events , only the isotropic correlators , i.e. 
the ones for which , will have non - zero values .it is obvious from the expression ( [ eq : mixedharmonicsexpversion ] ) that the trivial periodicity of each symmetry plane is automatically accounted for .as already remarked in the introduction , for the case of an idealized initial geometry all symmetry planes coincide and the imaginary part of eq .( [ eq : mixedharmonicsexpversion ] ) is identically zero for isotropic correlators .however , we point out that , in the more realistic case , the effects of flow fluctuations can be independently quantified by measuring the imaginary parts of isotropic correlators in mixed harmonics as well , which a priori are non - vanishing .the importance of our new generic framework is that it makes it possible for the first time to measure the above observables ( [ eq : mixedharmonicsexpversion ] ) for any number of particles in the correlators , for any values of the harmonics , and for both the real and imaginary parts .one of the consequences of event - by - event flow fluctuations is the fact that , where flow moments are defined as different underlying p.d.f.s , , of event - by - event flow fluctuations will yield different values for the moments .looking at this statement from a different angle , we can also conclude that two completely different p.d.f.s , reflecting completely different physical mechanisms that drive flow fluctuations , can have , accidentally , the very same first moment .thus , the traditional way of reporting results of anisotropic flow analyses by estimating only the first moment of the underlying p.d.f , namely , is , from our point of view , rather incomplete . instead , one should measure as many moments as possible of the underlying p.d.f , , because each moment by construction carries independent information . to finalize this discussion, we stress that a priori it is not guaranteed that a p.d.f .is uniquely determined by its moments .necessary and sufficient conditions for the p.d.f . to be uniquely determined in terms of its moments have been worked out only recently and are known as the krein - lin conditions : \equiv\int_0^\infty\frac{-\ln f(x^2)}{1+x^2}\ , dx\quad\rightarrow\quad k[f]=\infty\ , , \label{eq : krein}\ ] ] the generic framework presented in this paper enables one to measure the flow moments for any .such results , in combination with the krein - lin conditions outlined above , can be used to experimentally constrain the nature of the p.d.f . for flow fluctuations .in this section , we present and discuss our main results . for an event with multiplicity we construct the following two sets : where labels the azimuthal angles of particles , while labels particle weights introduced in eq .( [ eq : generalparticleweights ] ) . 
given these two sets , we calculate in each eventweighted -vectors as complex numbers defined by from the above definition , it immediately follows that : which shall be used in the implementation of our final results in order to reduce the amount of needed computations .we remark that we need a single pass over the particles to calculate the -vectors for multiple values of indices and .we first observe that the expressions in the numerator and the denominator of eq .( [ eq : mpcorrelation ] ) are trivially related .namely , given the result for the numerator which depends on harmonics , the result for the denominator can be obtained by using the result for numerator and setting all harmonics to 0 .therefore in what follows we focus mostly on the results for the numerator , and introduce the following shortcuts : the key experimental question in anisotropic flow analyses relying on correlation techniques was how to enforce the condition in the summations ( [ eq : num ] ) and ( [ eq : den ] ) without using the brute force approach of nested loops .such an approach is not feasible even for four - particle correlators and events with a multiplicity of the order of 100 particles .it is therefore unusable for events with multiplicities of the order of 1000 particles , characteristic of present day relativistic heavy - ion collisions . how this problem was resolved approximately and forsome specific correlators has been summarized in section [ s : introduction ] . herewe provide an exact and general answer .we outline explicitly the results for the case of 2- , 3- , and 4-p correlators expressed analytically in terms of -vectors defined in eq .( [ eq : qvector ] ) . for 2-p correlatorsit follows : additionally , for 3-p correlators it follows : finally , for 4-p correlators we have obtained : the analogous results for higher order correlators can be spelled out in a similar manner , but they are too long to fit in this paper . instead , we provide them calculated and implemented ( in .cpp and .nb file formats ) up to and including 8-particle correlators at the following link . as an alternative, we have developed recursive algorithms which , at the expense of runtime performance , calculate analytically higher order correlators in terms of lower order ones .the recursive algorithms will be presented in detail in section [ ss : algorithm ] . as the number of particles in correlators increases , the above analytical standalone expressions for multi - particle correlators quickly become impractical for direct use and implementation .for instance , the analogous analytic result for the 8-p correlator contains 4140 distinct terms , each of which is a product of up to eight distinct complex -vectors. a closer look at the structure of these analytic solutions revealed that the number of distinct terms per correlator form a well known bell sequence : which gives the number of different ways to partition a set with elements . 
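the standalone expressions above , and their higher - order analogues , all arise from the same set - partition structure that produces the bell numbers just mentioned . the following python sketch is an independent illustration of that structure ( it is not the authors' implementation linked in the text ) : it builds the weighted q - vectors of eq . ( [ eq : qvector ] ) and evaluates the single - event average of eq . ( [ eq : mpcorrelation ] ) for arbitrary harmonics by summing over set partitions with the coefficients ( -1 )^{ |b| - 1 } ( |b| - 1 ) ! ; for two , three and four particles this reproduces the expressions written out above .

....
import cmath
from math import factorial

def qvector(phis, weights, n, p):
    """Weighted Q-vector Q_{n,p} = sum_k w_k^p exp(i n phi_k), cf. eq. (qvector)."""
    return sum((w ** p) * cmath.exp(1j * n * phi) for phi, w in zip(phis, weights))

def set_partitions(items):
    """Generate all set partitions of a list (there are Bell-number many of them)."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):          # put 'head' into an existing block ...
            yield partition[:i] + [[head] + partition[i]] + partition[i + 1:]
        yield [[head]] + partition               # ... or into a block of its own

def correlator(phis, weights, harmonics):
    """Single-event m-particle average <m>_{n1,...,nm} of eq. (mpcorrelation),
    with autocorrelations removed analytically:
      numerator   = sum over set partitions sigma of {1..m} of
                    prod over blocks B in sigma of
                    (-1)^(|B|-1) (|B|-1)!  Q_{sum of n_i for i in B, |B|}
      denominator = the same expression with every harmonic set to zero."""
    def bracket(ns):
        total = 0j
        for sigma in set_partitions(list(range(len(ns)))):
            term = 1 + 0j
            for block in sigma:
                coeff = (-1) ** (len(block) - 1) * factorial(len(block) - 1)
                term *= coeff * qvector(phis, weights, sum(ns[i] for i in block), len(block))
            total += term
        return total
    return bracket(harmonics) / bracket([0] * len(harmonics))

# tiny usage example with invented angles and unit weights
phis = [0.1, 0.7, 1.9, 2.8, 4.2, 5.5]
weights = [1.0] * len(phis)
print(correlator(phis, weights, [2, -2]))          # <2>_{2,-2}
print(correlator(phis, weights, [2, 3, -2, -3]))   # <4>_{2,3,-2,-3}
....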
in our context , is the number of particles in the correlator , and different way to partition " corresponds to different possible contributions from autocorrelations .the above results can be straightforwardly extended to the case of differential multi - particle correlators , for which one particle in the multiplet is restricted to belong only to the narrow differential bin of interest ; the self - contained treatment of differential multi - particle correlators is presented in appendix [ s : appendix to differential multi - particle correlators ] .as already remarked , direct evaluation of expression for higher order correlators quickly becomes impractical due to the number of terms .for that reason , we have developed algorithms which recursively express all higher order correlators in terms of the lower order ones . observing that is clear that the is determined through ordered partitions of the numbers .we can use this property to calculate for _ any _ as outlined in pseudo code in = = : return + : + + for = do + for = each combination of do + + + end for each + end for + return [ eq : algo1 ] ( ) a different recursive relation can be developed by examining eq .( [ eq : num ] ) itself .it can be seen that the innermost sum can be rewritten without the constraint of not being equal to any other index in the following way : this can be expanded into the following recursive formula , where , however , one must be careful to set the power of the weights equal to the number of summands ( i.e. would have a corresponding term , would have a corresponding term , etc . ) : an optimized version of this recursive formula , which ensures that unique terms are evaluated only once , is shown in pseudo code in , where initially all = : return + : + = + if = then + for = do + + end for + end if + return [ eq : algo2 ] ( ) the available implementation provides both and , as well as direct implementations of expansions of , like the ones presented in eqs .( [ eq:2pcorrelation])-([eq:4pcorrelation ] ) , for all higher order correlators up to and including .more details about the implementation are available in appendix [ s : algorithm ] .in this section we illustrate with monte carlo studies how the generic framework outlined in previous sections can be used .our exposition will branch into two main directions .firstly , in a toy monte carlo study we illustrate how our framework can serve to correct for detector effects by working out two concrete examples which are regularly encountered as systematic biases in the anisotropic flow analyses .the first one is the systematic bias stemming from the non - uniform azimuthal detector acceptance .the second one is the systematic bias stemming from the non - uniform reconstruction efficiency as a function of transverse momentum . in order to correct for such effects, we will construct and use -weights and -weights , respectively .secondly , in a realistic monte carlo study , we demonstrate how our framework can be used in the measurement of some new observables that we propose , and which were , with the techniques available so far experimentally inaccessible .we will conclude this section with estimates for these new observables in heavy - ion collisions at both rhic and lhc energies .we start by introducing the probability density function ( p.d.f . ) , , which will be used to sample the azimuthal angles of all particles .we consider to be a normalized fourier - like p.d.f .parametrized with six harmonics , and the reaction plane . 
written explicitly :\ , .\label{eq : pdf}\end{aligned}\ ] ] for each event we randomly determine the reaction plane by uniformly sampling its value from an interval . due to this randomization , which was directly motivated by random fluctuations in the direction of the impact parameter vector in real heavy - ion collisions , only the isotropic multi - particle correlators will have non - vanishing values once the data sample has been extended from a single event to multiple events . in the above p.d.f .we assign to the flow harmonics the following input values : which are constant for all events . atfirst we set all six harmonics to be independent of transverse momentum and pseudorapidity , but we will relax this setting in the second part of this section when we allow the harmonic to have a non - trivial dependence on transverse momentum .( [ eq : pdf ] ) then governs the distribution of the azimuthal angles of all particles , while the distribution of the other two kinematic variables , namely transverse momentum and pseudorapidity , are governed by the boltzmann and uniform p.d.f.s , respectively . for the boltzmann p.d.f .we have used the following parametrization : where is the mass of the particle , is the `` temperature '' , and is the multiplicity of the event .we have set to be the mass of the charged pions , i.e. gev/ . by increasing the parameter , one shifts the mean of the boltzmann distribution towards higher values , and we have used gev/ . in each eventwe have sampled precisely 500 particles , so as to avoid potential systematic biases due to trivial multiplicity fluctuations .finally , we remark that in all separate toy mc studies we have set the random seed to be the same in order to isolate genuine systematic effects from trivial effects due to statistical fluctuations .we start with an example in which we illustrate how our formalism can be used to correct for systematic biases due to non - uniform acceptance in the azimuthal angles , after which we switch to an example that corrects for systematic biases due to non - uniform efficiency in particle reconstruction as a function of transverse momentum .we select randomly one example for isotropic 2- , 3- , , and 8-p correlations , and , for simplicity , we use in this section a shorthand notation without subscripts for them .in particular , we have selected : numerical values on the right - hand side in the above equations were obtained by calculating the theoretical values for each correlator from the eq .( [ eq : mixedharmonicsexpversion ] ) , and inserting input values for flow harmonics from ( [ eq : inputvalues ] ) .we have rescaled observable by in all figures , in order to plot all values on the same scale .-weights for the case of non - uniform azimuthal acceptance shown in fig .[ fig : acceptance].,scaledwidth=100.0% ] -weights for the case of non - uniform azimuthal acceptance shown in fig .[ fig : acceptance].,scaledwidth=100.0% ] -weights compared to input values and values for uniform acceptance ( see the text for the precise explanation of the ordinate.),scaledwidth=50.0% ] our toy mc procedure consists of three separate runs .firstly , we run our simulation for the case of uniform azimuthal acceptance , to demonstrate that the generic equations which we have derived reproduce correctly the input values for all multi - particle observables . 
this can be seen by comparing filled and open black markers in fig .[ fig : phiresults ] .secondly , we have rerun the simulation using the same seed for random generation , but now have selected for analysis each particle with a probability which depends on its azimuthal angle .in particular , the particles which were sampled in the azimuthal range have been reduced by for this analysis . in this waywe have simulated a non - uniform azimuthal detector acceptance ( see fig .[ fig : acceptance ] ) , and the corresponding non - negligible systematic bias in anisotropic flow analyses , which is depicted with red filled markers in fig . [fig : phiresults ] . in order to correct for this systematic bias ,we have constructed -weights , , by inverting the histogram for non - uniform acceptance in fig .[ fig : acceptance ] .the resulting -weights are shown in fig .[ fig : phiweights ] .we remark that in our framework the weights do not have to be normalized explicitly , because the analytic equations we provide for multi - particle correlations are normalized by definition ( see eq .( [ eq : mpcorrelation ] ) ) .finally , we rerun the simulation for the third time with the same configuration as in the second run , now utilizing the constructed -weights from fig . [fig : phiweights ] when we are filling -vectors ( [ eq : qvector ] ) in each event . as can be seen from the blue open circles in fig .[ fig : phiresults ] , -weights completely suppress the systematic bias from non - uniform acceptance for all multi - particle observables we have selected in this example . based on the previous example , we conclude that as far as -weights can be constructed for the measured azimuthal distribution , our generic framework can be used to correct for the systematic bias for the cases when that distribution is non - uniform , and it is applicable for any multi - particle observable even when multiple harmonics are present in the system .these two points improve and generalize the prescription outlined in appendix b of . in the next example , we will demonstrate the usage of -weights . in this part of the study we use the same mc setup established in the previous example for the -weights with one exception .in this example we introduce the following dependence of : and we have set the above parameters to gev/ and .again , we have randomly selected one example for isotropic 2- , 3- , , and 8-p correlations ( suppressing their subscripts for simplicity in the rest of this section ) : some of the selected observables ( and ) do not have an explicit dependence on , so we do not expect them to exhibit any systematic bias in this example .analogously as in the previous example , our toy mc procedure consists of three separate runs .firstly , we run our simulation for the case of uniform reconstruction efficiency , in order to obtain the true yield ; this result is illustrated with the blue line in fig .[ fig : efficiency ] .secondly , we have rerun the same simulation , but now have selected for the analysis each particle with a probability which depends on its transverse momentum . the particles in the transverse momentum interval have been reduced by .the resulting yield is depicted by the red line in fig .[ fig : efficiency ] . the resulting systematic bias on the selected multi - particle observables ( [ eq : choicept ] ) can be seen by inspecting the red filled markers in fig . 
[fig : ptresults ] .as already remarked , such a bias is absent in observables and , because they do not have the explicit dependence on the harmonic ( see ( [ eq : choicept ] ) ) , which is the only harmonic in this study which has a non - trivial dependence . to correct for reconstruction efficiency ,we have constructed -weights , , by taking the ratio of the two histograms in fig .[ fig : efficiency ] .the result is shown in fig .[ fig : ptweights ] .finally , in the third run we use the same mc setup as in the second run , only now we make use of the constructed -weights from fig .[ fig : ptweights ] when filling the -vectors ( [ eq : qvector ] ) .the agreement between the results shown with black open squares ( uniform efficiency ) and the ones shown with blue open circles ( non - uniform efficiency using the -weights ) in fig .[ fig : ptresults ] , demonstrates clearly that the generic framework is capable of suppressing the systematic bias from non - uniform efficiency for all of the multi - particle observables in question .-weights corresponding to the non - uniform efficiency shown in fig .[ fig : efficiency].,scaledwidth=100.0% ] -weights corresponding to the non - uniform efficiency shown in fig .[ fig : efficiency].,scaledwidth=100.0% ] -weights.,scaledwidth=50.0% ] with the previous two examples we have demonstrated that , in a simple toy mc study , our generic framework can be utilized to correct for various detector inefficiencies .next , we will illustrate , in a study based on a realistic mc model , its use in the measurement of some new physical observables which we now propose .we now introduce a new type of observable for anisotropic flow analyses , the so - called _ standard candles ( sc ) _ , which can be measured with the generic framework we have presented in the previous sections .this observable is particularly useful for systems in which flow harmonics fluctuate in magnitude event - by - event ( the case we have in reality ) .we start with the following generic four - particle correlation : and we impose the constraint .the isotropic part of corresponding four - particle cumulant is given by : \right>\right>\left<\left<\cos[n(\varphi_1\!-\!\varphi_2)]\right>\right>\nonumber\\ & = & \left < v_{m}^2v_{n}^2\right>-\left < v_{m}^2\right>\left < v_{n}^2\right>\nonumber\\ & = & 0\ , , \label{eq:4p_sc_cumulant}\end{aligned}\ ] ] where double angular brackets indicate that the averaging from definition ( [ eq : mpcorrelation ] ) has been extended to all events .due to the condition that , a lot of terms which appear in the general cumulant expansion , for instance , are non - isotropic and , therefore , average to zero for a detector with uniform acceptance when the averaging is extended to all events . 
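The weight construction used in the two toy examples above can be summarized in code. The sketch below assumes simple equidistant binning starting at zero and unit weights wherever a bin is empty; it is meant only to show how the φ-weights (inverse of the measured azimuthal distribution), the p_T-weights (ratio of the true to the measured yield, so that depleted bins receive weights above unity) and the weighted Q-vectors of eq. ([eq:qvector]) fit together, not to reproduce the analysis code.

```cpp
// Sketch: build phi- and pT-weights from control histograms and fill weighted Q-vectors,
// Q_{n,p} = sum_k w_k^p exp(i n phi_k). Binning and histogram granularity are illustrative.
#include <cmath>
#include <complex>
#include <vector>

struct Particle { double phi, pt; };

// w_phi(phi) ~ 1 / (measured azimuthal distribution), bin-by-bin.
std::vector<double> PhiWeights(const std::vector<double>& measuredPhiHist) {
  std::vector<double> w(measuredPhiHist.size(), 1.0);
  for (std::size_t i = 0; i < w.size(); ++i)
    if (measuredPhiHist[i] > 0.0) w[i] = 1.0 / measuredPhiHist[i];
  return w;  // no explicit normalization needed: eq. (mpcorrelation) is normalized by definition
}

// w_pt(pt) ~ (true yield) / (measured yield), bin-by-bin.
std::vector<double> PtWeights(const std::vector<double>& trueYield,
                              const std::vector<double>& measuredYield) {
  std::vector<double> w(trueYield.size(), 1.0);
  for (std::size_t i = 0; i < w.size(); ++i)
    if (measuredYield[i] > 0.0) w[i] = trueYield[i] / measuredYield[i];
  return w;
}

// Q_{n,p} for one event, with total particle weight w_k = w_phi(phi_k) * w_pt(pt_k).
std::complex<double> Q(int n, int p, const std::vector<Particle>& event,
                       const std::vector<double>& wPhi, double phiBinWidth,
                       const std::vector<double>& wPt, double ptBinWidth) {
  std::complex<double> q(0.0, 0.0);
  for (const auto& part : event) {
    std::size_t iPhi = static_cast<std::size_t>(part.phi / phiBinWidth);
    std::size_t iPt  = static_cast<std::size_t>(part.pt / ptBinWidth);
    if (iPhi >= wPhi.size()) iPhi = wPhi.size() - 1;
    if (iPt >= wPt.size())   iPt  = wPt.size() - 1;
    const double w = wPhi[iPhi] * wPt[iPt];
    q += std::pow(w, p) * std::exp(std::complex<double>(0.0, n * part.phi));
  }
  return q;
}
```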
for fixed values of and over all events , the four - particle cumulant as defined in eq .( [ eq:4p_sc_cumulant ] ) , is zero by definition .any dependence on the symmetry planes and is also canceled by definition .we can get the result in the last line of eq .( [ eq:4p_sc_cumulant ] ) not only when and are fixed for all events , but also when event - by - event fluctuations of and are uncorrelated , since the expression can then be factorized .taking all these statements into account , the four - particle cumulant ( [ eq:4p_sc_cumulant ] ) is non - zero only if the event - by - event fluctuations of and are correlated .therefore , by measuring the observable ( [ eq:4p_sc_cumulant ] ) we can conclude whether finding larger than in an event will enhance or reduce the probability of finding larger than in that event , which is not constrained by any measurement performed yet .since by definition everything cancels out from the observable ( [ eq:4p_sc_cumulant ] ) except the last contribution , namely the correlation of event - by - event fluctuations of and , we name it a standard candle " .recently , by using different observables and methodology , these correlations between fluctuations of various harmonics have been studied in . in this study , the monte carlo event generator , a multiphase transport ( ampt ) model , has been used .ampt is a hybrid model consisting of four main parts : the initial conditions , partonic interactions , hadronization , and hadronic rescatterings .the initial conditions , which include the spatial and momentum distributions of minijet partons and soft string excitations , are obtained from the heavy ion jet interaction generator ( hijing ) .the following stage which describes the interactions between partons is modeled by zhang s parton cascade ( zpc ) , which presently includes only two body scatterings with cross sections obtained from pqcd with screening masses . in ampt with string melting ,the transition from partonic to hadronic matter is done through a simple coalescence model , which combines two quarks into mesons and three quarks into baryons . to describe the dynamics of the subsequent hadronic stage , a hadronic cascade , which is based on a relativistic transport ( art ) model ,is used .several configurations of the ampt model have been investigated to better understand the results based on ampt simulations .the partonic interactions can be tweaked by changing the partonic cross section : for rhic the default value is 10 mb , while using 3 mb generates weaker partonic interactions in zpc .we can also change the hadronic interactions by controlling the termination time in art .setting ntmax = 3 will turn off the hadronic interactions effectively .good agreement has been observed recently between anisotropic flow measurements and the ampt .therefore , we calculate multi - particle azimuthal correlations using ampt simulations with the input parameters suggested in at the lhc energy . for rhic energies we followed the parameters in while different configurations have also been used in this study . in fig .[ fig : sc ] we see a clear non - zero value for both ( red markers ) and ( black markers ) at the lhc energy .the positive results of suggest a positive correlation between the event - by - event fluctuations of and , which indicates that finding larger than in an event enhances the probability of finding larger than in that event . 
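In practice the standard candle is evaluated from event-by-event single-event averages. The sketch below assumes that the single-event correlators ⟨2⟩ and ⟨4⟩ and their event weights (typically the corresponding number of particle combinations) have already been computed with the Q-vector expressions of the framework; it only shows the final all-event averaging, SC(m,n) = ⟨⟨4⟩⟩_{m,n,-m,-n} − ⟨⟨2⟩⟩_{m,-m}⟨⟨2⟩⟩_{n,-n} = ⟨v_m²v_n²⟩ − ⟨v_m²⟩⟨v_n²⟩.

```cpp
// Sketch: the "standard candle" SC(m,n) from per-event correlators.
// Double angular brackets denote an average over all events; here each event enters
// with a caller-supplied weight (e.g. its number of particle combinations).
#include <vector>

struct EventCorrelators {
  double two_m, two_n, four_mn;  // single-event <2>_{m,-m}, <2>_{n,-n}, <4>_{m,n,-m,-n}
  double w2, w4;                 // event weights for the 2- and 4-particle averages
};

double SC(const std::vector<EventCorrelators>& events) {
  double s2m = 0, s2n = 0, s4 = 0, sw2 = 0, sw4 = 0;
  for (const auto& e : events) {
    s2m += e.w2 * e.two_m;
    s2n += e.w2 * e.two_n;
    s4  += e.w4 * e.four_mn;
    sw2 += e.w2;
    sw4 += e.w4;
  }
  // SC(m,n) = <<4>> - <<2>>_m * <<2>>_n
  return s4 / sw4 - (s2m / sw2) * (s2n / sw2);
}
```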
on the other hand, the negative results of predict that finding larger than enhances the probability of finding smaller than .a similar centrality dependence of and is also found at the rhic energy , see fig .in addition , we compare the and calculations for three different scenarios : ( a ) 3 mb ; ( b ) 10 mb ; ( c ) 10 mb , no rescattering .it was shown that the relative flow fluctuations of do not depend on the partonic interactions and only relate to the initial eccentricity fluctuations .therefore , the expectation is that and do not depend on the magnitudes of or ( which depend on both partonic interactions and hadronic interactions ) , but depend only on the initial correlations of event - by - event fluctuations of and .thus , both and remain the same for different configurations , since the initial state was kept the same each time . however , we find that when the partonic cross section is decreasing from 10 mb ( lower shear viscosity , see ) to 3 mb ( higher shear viscosity ) , the strength of decreases .additionally , the ` 10 mb , no rescattering ' setup seems to give slightly smaller magnitudes of and . consideringthe ampt model can quantitatively describe flow measurements at the lhc , our ampt calculations for these new observables provide predictions for the correlations of event - by - event fluctuations of and , and of and for the measurements at the lhc .such measurements have the potential to shed new light on the underlying physical mechanisms behind flow fluctuations .( red markers ) and ( black markers ) at = 2.76 tev pb pb collisions with ampt - stringmelting.,scaledwidth=50.0% ] ( solid markers ) and ( open markers ) at 200 gev au au collisions with ampt - stringmelting .different scenarios : ( a ) 3 mb ( green circle ) ; ( b ) 10 mb ( red square ) and ( c ) 10 mb , no rescattering ( azure star ) are presented ., scaledwidth=50.0% ]the previous results and examples are applicable only directly to detectors that have infinite resolution . finite resolution will both bias measurements and cause interference between harmonics . to study this, we define a detector with equal size adjacent azimuthal sectors where the edge of the first sector is shifted from 0 by .then the low and high edges of the sector are defined as follows : by integrating eq .( [ eq : fourier ] ) between and ( derivation is shown in appendix [ s : appendix_to_finite_granularity ] ) , the probability , , for a particle to be detected in the sector is found to be : \ , .\label{eq : pi}\ ] ] the expectation value for an observable for a single particle , , can then be evaluated from using the following formula : = \sum_{i=0}^{n-1}\theta_i p_i\,,\ ] ] where is the value of observable evaluated at the center of the sector .it follows that the expectation value of ( see derivation in appendix [ s : appendix_to_finite_granularity ] ) is given by : = \left\ { \begin{tabular}{ll } & for \\ & for \end{tabular } \right .\label{eq : harmonic_expectation}\ ] ] where is the set of all integers .it is evident from this formula that it is not possible to measure any harmonic which is a multiple of the number of sectors .if and the harmonics above can be neglected , eq .( [ eq : harmonic_expectation ] ) becomes : \approx v_{n } e^{in\psi_n } \frac{\sin\frac{n}{n}\pi}{\frac{n}{n}\pi}\,.\ ] ] in this case , the multi - particle azimuthal correlations defined in eq .( [ eq : mpcorrelation ] ) become ( under the assumption ( [ eq : factorization ] ) of a factorizable p.d.f . 
) : \approx \prod_{k=1}^m v_{n_k } e^{in_k\psi_{n_k } } \frac{\sin\frac{n_k}{n}\pi}{\frac{n_k}{n}\pi}\ , .\label{eq : product}\ ] ] in this way , the term is a correction factor for a bias from finite granularity that must be applied for each harmonic that the multi - particle correlator is composed of due to an overall reduction in the measured value .figure [ fig : v2vsnsectors ] shows the result obtained by calculating and for detectors with various segmentations , and when the toy fourier - like p.d.f .was parametrized only with the single harmonic .the simulated values lie on the dashed line suppressed by 2 or 4 factors of ( see eq .( [ eq : product ] ) ) . in this case( if ) , the values can be corrected to reproduce the input values of or .the ` blip ' at is a special case where multiple factors proportional to in eq .( [ eq : harmonic_expectation ] ) contribute making the measured value 2 times bigger for and 6 times bigger for than the expected suppressed value ( the black dashed line ) when averaging over all events .and ( left and right , respectively ) evaluated for a range of sectors when only exists ( red squares ) .the magenta line shows the input value of or .the black dashed line shows the expected measured value of the correlator .the blue circles are the simulated values corrected for the reduction to the measured value due to finite granularity.,scaledwidth=100.0% ] and ( left and right , respectively ) evaluated for a range of sectors when only exists ( red squares ) .the magenta line shows the input value of or .the black dashed line shows the expected measured value of the correlator .the blue circles are the simulated values corrected for the reduction to the measured value due to finite granularity.,scaledwidth=100.0% ] if harmonics above are significant , eq . ( [ eq : harmonic_expectation ] ) shows that finite segmentation will introduce an interference from other harmonics ( in fact , from an infinite number of harmonics ) .if one , for example , considers the case where the first harmonics are non - zero , there will be a contribution from 2 terms in eq .( [ eq : harmonic_expectation ] ) . as an example, we will once again consider the case of the p.d.f in eq .( [ eq : pdf ] ) where the values of the first 6 harmonics are as in eq .( [ eq : inputvalues ] ) . in generalif one considers the case where the first harmonics are non - zero , then eq . ( [ eq : harmonic_expectation ] ) produces the following relationship for when a factorizable p.d.f .( [ eq : factorization ] ) exists and one averages over many events : = v_{n}^{2 } \frac{\sin^{2}\left(\frac{n \pi}{n}\right)}{\left(\frac{n \pi}{n}\right)^{2 } } + v_{n - n}^{2 } \frac{\sin^{2}\left(\frac{\left(n - n\right ) \pi}{n}\right)}{\left(\frac{\left(n - n\right ) \pi}{n}\right)^{2 } } = e\left[\left<2\right>_{n - n , n - n}\right]\ , .\label{eq:2partcorrmultharm}\ ] ] the harmonic , therefore , contaminates the measurement of , although the harmonic below is dominant ( i.e. is suppressed less ) . for low segmentation , this can cause significant interference from other harmonics .if one tries to compute with 8 sectors , for instance , then corresponds to which contaminates the calculation .( [ eq:2partcorrmultharm ] ) also explains the origin of the ` blip ' at in fig .[ fig : v2vsnsectors ] , where both terms are proportional to the same harmonic .figure [ fig : v2 - 6nsectors ] shows this interference for a detector with 8 , 12 , and 20 azimuthal sectors . 
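The granularity correction implied by eq. ([eq:product]) can be applied as a simple post-processing step. The sketch below divides a measured correlator by the product of attenuation factors sin(n_kπ/N)/(n_kπ/N), one per harmonic; as discussed above, this is only valid when no harmonic is a multiple of the number of sectors N and when the contaminating harmonic of order N − n_k is negligible.

```cpp
// Sketch: finite-granularity correction of a multi-particle correlator, following eq. (product).
// For N equal azimuthal sectors each harmonic n_k is suppressed by sin(n_k pi/N)/(n_k pi/N).
#include <cmath>
#include <cstdlib>
#include <vector>

double AttenuationFactor(int n, int nSectors) {
  const double x = n * std::acos(-1.0) / nSectors;
  return std::sin(x) / x;
}

double CorrectForGranularity(double measuredCorrelator,
                             const std::vector<int>& harmonics, int nSectors) {
  double suppression = 1.0;
  for (int n : harmonics) suppression *= AttenuationFactor(std::abs(n), nSectors);
  return measuredCorrelator / suppression;
}
// Example: correct a measured <2>_{2,-2} for an 8-sector detector:
//   double corrected = CorrectForGranularity(raw, {2, -2}, 8);
```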
for 8 sectors ,only can be corrected for .all other harmonics are contaminated and it is easily seen that , , and for the measured values .these cases can not be corrected for . however , unless the high harmonics are larger than the lower ones , the measured value will be closest to the lowest harmonic ( in the example could be corrected exactly only because was 0 in our toy mc study ( [ eq : inputvalues ] ) ) .for 12 sectors , all of the existing harmonics can be calculated ( and corrected for finite granularity ) .however , is still calculated incorrectly because it is actually measuring . for the case of 20 sectors ,all contaminations have disappeared and one can accurately determine the harmonics . in general , one can only measure up to and one should have a reasonable estimate of the size of the other harmonics to determine if the contamination from harmonics with will be significant . when the first 6 harmonics exist for detectors with 8 , 12 , and 20 sectors are shown ( red squares ) .the values of the input harmonics squared are represented by the magenta lines .the black dashed lines show the expected measured value with that number of sectors .the blue circles show the result obtained when correcting for the reduction to the measured value for the harmonic being measured.,scaledwidth=100.0% ] when the first 6 harmonics exist for detectors with 8 , 12 , and 20 sectors are shown ( red squares ) .the values of the input harmonics squared are represented by the magenta lines .the black dashed lines show the expected measured value with that number of sectors .the blue circles show the result obtained when correcting for the reduction to the measured value for the harmonic being measured.,scaledwidth=100.0% ] when the first 6 harmonics exist for detectors with 8 , 12 , and 20 sectors are shown ( red squares ) .the values of the input harmonics squared are represented by the magenta lines .the black dashed lines show the expected measured value with that number of sectors .the blue circles show the result obtained when correcting for the reduction to the measured value for the harmonic being measured.,scaledwidth=50.0% ]in this section we discuss our final topic which concerns the results based on multi - particle correlation techniques .in particular , we point out the existence of a systematic bias in traditional differential flow analyses with two- and multi - particle cumulants , which stems solely from the selection criteria applied on reference particles ( rp ) and on particles of interest ( poi ) , and which is present also in the ideal case when all nonflow correlations are absent .we need two separate groups of particles , rps and pois , in the traditional differential flow analyses to get a statistically stable result in the cases where there exists a small number of pois in a narrow differential bin of interest .the direct evaluation of the multi - particle correlators in eq .( [ eq : mpcorrelation ] ) using only pois would result in statistically unstable results . to circumvent this , borghini _ et al _ proposed in to use rps for all particles _ except the first _ in two- and multi - particle correlators , where rps are selected from some large statistical sample of particles in an event ( e.g. 
from all charged particles ) .any dependence of the differential flow of pois on rps would then be eliminated by separately evaluating multi - particle correlators by using only rps and then by explicitly dividing out their corresponding contribution to differential multi - particle correlators , in which only the first particle was restricted to be a poi . for a detailed description of traditional differential flow analyseswe refer the reader to ; now we quantify the systematic bias which stems solely from the applied selection criteria on rps and pois .usually it is said that collective anisotropic flow measured with is enhanced by flow fluctuations and is suppressed by flow fluctuations .when only using reference flow it is also easily shown that ( for a detailed derivation , see appendix a in ) : where is the mean value of the flow moment of interest , and the variance of that flow moment .however , in the more generally applied case , where the reference flow is used to obtain a differential flow , the situation becomes more complicated .the differential 2-particle cumulant estimate , , is obtained as : using , where is the correlation coefficient between the reference flow and the differential flow and is defined in the range ] one can obtain : this is very similar to eq .( [ eq : flow fluc 2-p ] ) .once again it is clear that the bias to the differential flow may not be the same as for the reference flow , an enhancement _ or _ a suppression is possible .three cases are explored in more detail below , while details of the calculations are provided in appendix [ s : appendix to fluctuations ] . + * and are perfectly correlated ( ) and .* for this case , rps and pois can have a full overlap , but it is not required .( [ eq : flow fluc 2-p ] ) can be written as : this case reduces to the regular case where is systematically _ enhanced _ by flow fluctuations , just as for the reference flow .since simply has opposite signs on the fluctuation terms , it follows that in this case it is _ suppressed _ , once again the same as the reference flow . +* and are uncorrelated ( ) . * in realitythis covers a case where the rps or pois are chosen from a two groups of particles that do not overlap and do not contain the same underlying correlations . for this case , so eq . ( [ eq : flow fluc 2-p ] ) trivially turns into : this means the differential 2-particle cumulant is systematically _ suppressed _ by the flow fluctuations in the reference flow , and that the 4-particle differential cumulant is systematically _enhanced_. fluctuations from the pois do not play any role . + * and are correlated , but the relative fluctuations are different .* once again the rps and pois may have a full overlap , but it is not required . in this caseit is assumed that , leading to : and the observed bias for the 2-particle ( 4-particle ) differential cumulant is an _ enhancement _ ( _ suppression _ ) as long as . in generalthe bias observed in the differential flow is influenced by the fluctuations in the reference flow .+ events with 10000 rps and 1000 pois .input flow is , reference flow fluctuations have . depending on the choice of particles for differential flow and the differential flow fluctuations , it is possible to get very different biases to the 2- and 4-particle cumulants.,scaledwidth=50.0% ] to illustrate the different cases a simulation of events with 10000 rps and 1000 pois has been made .the results are shown in fig .[ fig : flow fluc ] with input values , and . 
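A stripped-down version of such a simulation, for the reference flow only, is sketched below. It assumes that nonflow and finite-multiplicity effects are ignored, so that ⟨2⟩ = v² and ⟨4⟩ = v⁴ hold exactly event-by-event, and it uses the standard cumulant estimates v{2}² = ⟨⟨2⟩⟩ and v{4}⁴ = 2⟨⟨2⟩⟩² − ⟨⟨4⟩⟩; the mean flow and the Gaussian width below are placeholders rather than the input values used for fig. [fig:flow fluc].

```cpp
// Toy sketch: bias of the 2- and 4-particle reference cumulants from Gaussian flow fluctuations.
// With nonflow and statistical fluctuations switched off, <2> = v^2 and <4> = v^4 event-by-event,
// so the cumulants can be built directly from sampled per-event v values.
#include <cmath>
#include <iostream>
#include <random>

int main() {
  const double mean = 0.05, sigma = 0.02;  // placeholder <v> and fluctuation width
  std::mt19937_64 rng(44);
  std::normal_distribution<double> gauss(mean, sigma);

  double sum2 = 0.0, sum4 = 0.0;
  const int nEvents = 1000000;
  for (int e = 0; e < nEvents; ++e) {
    double v = gauss(rng);
    if (v < 0.0) v = 0.0;  // v_n is non-negative; truncation is negligible for sigma << mean
    sum2 += v * v;
    sum4 += v * v * v * v;
  }
  const double avg2 = sum2 / nEvents, avg4 = sum4 / nEvents;
  const double v2 = std::sqrt(avg2);                            // enhanced by fluctuations
  const double v4 = std::pow(2.0 * avg2 * avg2 - avg4, 0.25);   // suppressed by fluctuations
  std::cout << "v{2} = " << v2 << "  v{4} = " << v4 << "  input <v> = " << mean << "\n";
  return 0;
}
```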
in the figuregaussian fluctuations are assumed , but other fluctuations , e.g. , uniform fluctuations , would yield similar results .the shaded bands indicate the reference flow of and , calculated with eqs .( [ eq : reference qc2 ] ) and ( [ eq : reference qc4 ] ) respectively , showing the usual enhancement or suppression .the first two points are from a simulation illustrating the first case above , where the pois and rps are perfectly correlated and share the same relative fluctuations and have a full overlap .the dotted lines are calculated using eq .( [ eq : case 1 ] ) for and the corresponding equation for .for the next two points the pois and rps are chosen with independent fluctuations and no overlap . in this case and are swapped , as expected from eq .( [ eq : case 2 ] ) .the last points show cases where the relative fluctuations in the pois differ from those in the rps , this can cause the usual enhancement and suppression to be larger , swapped or even be removed completely , depending on how the relative fluctuations are chosen . in the example simulations shown here , rps and pois do not overlap . for the casewhere eq .( [ eq : case 3 ] ) yields : for eq .( [ eq : case 3 ] ) yields : and finally for : it is tempting to use eqs .( [ eq : reference qc2 ] ) and ( [ eq : reference qc4 ] ) to estimate the magnitude of the flow fluctuations . however , when doing differential flow analysis with cumulants it is clear from eqs .( [ eq : flow fluc 2-p ] ) and ( [ eq : flow fluc 4-p ] ) that it may not be feasible .in fact , any analysis using differential flow should be very careful to describe the choice of rps and pois in great detail , such that comparison between different experiments and theories is not biased by mixing two or more of the cases shown in fig .[ fig : flow fluc ] and described above . as mentioned in section [s : systematic bias due to particle selection criteria ] for reference flow : where is the mean value of the flow moment of interest and is the variance of that flow moment .this can be obtained by assuming and using : \approx f(\mu_x ) + \frac{\sigma_x^2}{2}f''(\mu_x)\ , , \label{eq : expectation value app}\ ] ] where ] , where specifically in the case where and are perfectly correlated , when they are uncorrelated and when they are anticorrelated . is the standard deviation of the flow moment for pois .this means : from which it is clearly seen that can be either _ suppressed _ or _ enhanced _ by flow fluctuations depending on the value of . + the differential 4-particle cumulant estimate , , is obtained by : using eq . ([ eq : reference qc4 app ] ) this becomes : must now be estimated . by using : &\equiv&e[f(x)^2]-e[f(x)]^2\nonumber\\ & \approx & \left ( f'\left(\mu_x\right)\right ) ^2var[x]\ , , \label{eq : variance - cubed}\end{aligned}\ ] ] then where eq .( [ eq : expectation value app ] ) was also used for . is the correlation between and , applying the approximation in eq .( [ eq : variance - cubed ] ) to get to yields the correlation between and , which is to first order .the next term to be estimated : the last term in eq .( [ eq : kgintervention ] ) can be neglected . 
inserting these results into eq .( [ eq : kgintervention_2 ] ) it is seen that flow fluctuations bias in the following way : which once again can lead to either _ suppression _ or _enhancement _ of flow fluctuations .in general one can write : showing that the bias to the 2- and 4-particle cumulants are similar but opposite .we have presented the new generic framework within which all multi - particle azimuthal correlations can be evaluated analytically , with a fast single pass over the particles , free from autocorrelations by definition , and corrected for systematic biases due the various detector inefficiencies . for higher order correlatorsthe direct implementation of analytic solutions is not feasible due to their size ; this issue was resolved with the development of new recursive algorithms .we have proposed new multi - particle observables to be used in anisotropic flow analyses ( standard candles ) which can be measured for the first time within our generic framework .the systematic biases due to finite granularity of detector on multi - particle correlators have been quantified .we have pointed out the existence of a systematic bias characteristic for traditional differential flow analyses when all particles are divided into two groups of reference particles ( rp ) and particles of interest ( poi ) , which originates solely from the selection criteria for rps and pois , and which is present also in the ideal case when all nonflow correlations are absent .finally , we have straightforwardly generalized our generic framework to the case of differential multi - particle correlators .as mentioned in section [ ss : algorithm ] , we provide implementations for calculating generic multi - particle correlators defined in eq .( [ eq : mpcorrelation ] ) for : _ fully expanded _ : : expressions for ( see ( [ eq : num ] ) and ( [ eq:2pcorrelation]-[eq:4pcorrelation ] ) ) for ; _ recurrence _ : : expression ( see ) for any ; _recursive _ : : expression ( see ) for any .the largest feasible for the two latter methods above is of course limited by computing time , resources and machine precision . however , there is no inherent limitations on in the implementations .the implementation is done in plain callable c++ with no external dependencies .it can be integrated into any existing framework , including root based ones , by simple inclusion of the appropriate headers .examples of standalone and root applications are provided in the code .the code itself is further heavily documented at .the choice of method , using either _ expanded _ , _ recurrence _ , or _recursive _ expression , is left to the user .however , it should be noted , that using the truly general _ recurrence _ , or _recursive _ expressions does incur a performance penalty , as can be seen from fig .[ fig : app : algorithmstiming ] . : _ fully expanded _ are red circles , _ recurrence _ are green squares , and _ recursive _ are blue triangles . ]in this appendix , the equations used in section [ s : detectors with finite granularity ] to evaluate the effects of finite granularity are derived .we start by defining a detector with equal size adjacent azimuthal sectors with sectors being labeled by an integer where .furthermore the low edge of the first sector is shifted from 0 by .the edges of sector are then defined by : the p.d.f . 
for any particleis taken to be : \,.\ ] ] the probability of a particle going into sector is then found by integrating over the limits of the sector : \notag\\ & & = \frac{1}{2 \pi } \left [ \frac{2 \pi}{n } + \sum_{n=1}^{\infty } 2v_n \frac{\sin \left(n\left(\varphi_{l_{i } } - \psi_n\right)\right ) - \sin \left(n\left(\varphi_{h_{i } } - \psi_n\right)\right)}{n } \right ] \notag\\ & & = \frac{1}{2 \pi } \left [ \frac{2 \pi}{n } + \sum_{n=1}^{\infty } 2v_n \frac{2 \sin \left(n\frac{\left(\varphi_{h_{i } } - \varphi_{l_{i}}\right)}{2}\right ) \cos \left(n\left(\frac{\varphi_{h_{i}}+\varphi_{l_{i}}}{2 } - \psi_n\right)\right)}{n } \right ] \notag\\ & & = \frac{1}{n } \left [ 1 + \sum_{n=1}^{\infty } 2v_n \frac{\sin \left(n\frac{\left(\varphi_{h_{i } } - \varphi_{l_{i}}\right)}{2}\right)}{\frac{n \pi}{n } } \cos \left(n\left(\frac{\varphi_{h_{i}}+\varphi_{l_{i}}}{2 } - \psi_n\right)\right ) \right ] \notag\\ & & = \frac{1}{n } \left [ 1 + \sum_{n=1}^{\infty } 2v_n \frac{\sin \frac{n \pi}{n}}{\frac{n \pi}{n } } \cos \left(n\left(\left(i+\frac{1}{2}\right)\frac{2 \pi}{n } + \varphi_\delta - \psi_n\right)\right ) \right]\,.\end{aligned}\ ] ] the expected value of must then be evaluated as follows : & = & \sum\limits_{j=0}^{n-1 } e^{im\left[\left(j+\frac{1}{2}\right)\frac{2\pi}{n}+\varphi_\delta\right ] } p_j \notag\\ & = & \frac{1}{n } \sum_{j=0}^{n-1 } e^{im\left[(j+\frac{1}{2})\frac{2\pi}{n}+\varphi_{\delta}\right ] } \notag\\ & & { } + \frac{1}{n}\sum_{n=1}^{\infty}v_n\frac{\sin\frac{n\pi}{n}}{\frac{n\pi}{n } } \left [ e^{-in\psi_{n } } \sum_{j=0}^{n-1}e^{i(m+n)\left[(j+\frac{1}{2})\frac{2\pi}{n}+\varphi_{\delta}\right ] } + e^{in\psi_{n}}\sum_{j=0}^{n-1}e^{i(m - n)\left[(j+\frac{1}{2})\frac{2\pi}{n}+\varphi_{\delta}\right]}\right]\ , .\label{eq : expectexp}\end{aligned}\ ] ] eq .( [ eq : expectexp ] ) has terms of the form }n(-1)^{\frac{k}{n}}e^{ik\varphi_\delta} \frac{k}{n } \in \mathbb{z} \frac{k}{n } \notin \mathbb{z} ( -1)^{\frac{m}{n } } e^{im\varphi_\delta} \frac{m}{n } \in \mathbb{z} \sum\limits_{j=-\infty}^{\infty } v_{\left|jn - m\right|}\frac{\sin(j-\frac{m}{n})\pi}{(j-\frac{m}{n})\pi}(-1)^j e^{-i\left\{(jn - m)\psi_{\left|jn - m\right|}-jn\varphi_{\delta}\right\}} \frac{m}{n } \notin \mathbb{z} ] agrees with what is expected .if , is always a multiple of and one should use the equation for with which gives 1 . if , any fixed value of will not be a multiple of as and one should use the equation for . as , all other terms , except for the term ,become 0 because .the term has as leaving a value of .in this appendix we present generic equations for the differential ( or reduced ) correlators up to and including order four .all particles which are taken for the analysis are divided in each event into two groups : reference particles ( rp ) and particles of interest ( poi ) , which in general can overlap . in each differentialmulti - particle correlator we specify the first particle to be poi , and all remaining particles to be rp . by adopting the original notation introduced by borghini _ , we label azimuthal angles of pois with , and azimuthal angles of rps with . in practice , pois will correspond to particles in a differential bin of interest in an event ( e.g. particles in a narrow bin , particles in a narrow bin , etc . ) , while rps correspond to some large statistical sample of particles in an event ( e.g. 
all charged particles ) .the average differential -particle correlation in harmonics is given by the following generic definition : in the above definition is the number of rps in an event , is number of pois in a narrow differential bin in an event , labels the azimuthal angles of rps , labels the azimuthal angles of pois , while labels particle weights . in general, we allow independent particle weights for rps and pois .all trivial effects from autocorrelations are removed by the constraint , which enforces all indices in all summands to be unique in definition ( [ eq : mpdifferentialcorrelation ] ) .the only harmonic which corresponds to pois is underlined , in order to distinguish it from the all other harmonics which correspond to rps . as in the case of referencemulti - particle correlators studied in the main part of this paper , we first observe that the expressions in the numerator and the denominator of eq .( [ eq : mpdifferentialcorrelation ] ) are trivially related .therefore we introduce the following shortcuts : we will present our results for expressions ( [ eq : numdiff ] ) and ( [ eq : dendiff ] ) in terms of weighted - , - and -vectors , that we now define .the weighted -vector is a complex number defined by and filled with all particles labeled as rps in an event ( in total ) .the weighted -vector is constructed out of all pois ( in total ) in a narrow differential bin of interest in an event : , the weighted -vector is constructed only from particles in a narrow differential bin of interest in an event which are labeled both as pois and rps ( in total ) : the -vector was introduced in order to analytically remove all effects of autocorrelations in our final results .the indices and in definitions ( [ eq : qvector : app])-([q - vectordefinition ] ) are determined from the original indices in ( [ eq : mpdifferentialcorrelation ] ) , as will become clear shortly . in general, we will need - , - and -vectors evaluated for multiple values of indices and , which will be determined by the precise nature of the differential multi - particle correlator in question .the key point , however , is that to obtain - , - and -vectors for , in principle , any number of different values of indices and , a single pass over all particles still suffices .given the above definitions , and by following the same strategy and notation as in the main part of the paper , we have obtained the following analytic results for differential 2- , 3- and 4-particle correlations : the above relations are generic equations for differential multi - particle correlators , and they improve and generalize over the limited results presented in , which were applicable only for the special case in which all harmonics coincide .the further improvement consists of the fact that with these new results we allow for an independent weighting of pois and rps straight from the definition ( see eqs .( [ eq : qvector : app ] ) and ( [ p - vectordefinition ] ) ) which will have an obvious use case in experimental analyses when reconstruction efficiency for pois and rps differs .finally , we have preserved the full generality when it comes to different possible outcomes of particle labeling ; the results above are applicable for all three distinct cases of labeling , namely no overlap " , partial overlap " and full overlap " , between rps and pois .j. -y .ollitrault , phys .d * 46 * ( 1992 ) 229 .s. voloshin and y. zhang , z. 
phys .c * 70 * ( 1996 ) 665 [ arxiv : http://arxiv.org / abs / hep - ph/9407282[hep - ph/9407282 ] ] .p. danielewicz and m. gyulassy , phys .b * 129 * ( 1983 ) 283 .s. wang , y. z. jiang , y. m. liu , d. keane , d. beavis , s. y. chu , s. y. fung and m. vient _ et al ._ , phys .c * 44 * ( 1991 ) 1091 .j. jiang , d. beavis , s. y. chu , g. i. fai , s. y. fung , y. z. jiang , d. keane and q. j. liu _ et al ._ , phys . rev. lett .* 68 * ( 1992 ) 2739 .r. kubo , `` generalized cumulant expansion method , '' journal of the physical society of japan , vol .17 , no . 7 , ( 1962 ) .n. borghini , p. m. dinh , j. -y .ollitrault , phys .* c63 * ( 2001 ) 054906 .j. barrette _ et al ._ [ e877 collaboration ] , phys . rev .* 73 * ( 1994 ) 2532 .n. borghini , p. m. dinh , j. -y .ollitrault , phys .* c64 * ( 2001 ) 054901 .r. s. bhalerao , n. borghini and j. y. ollitrault nucl .a * 727 * ( 2003 ) 373 [ arxiv : http://arxiv.org / abs / nucl - th/0310016[nucl - th/0310016 ] ] .r. s. bhalerao , n. borghini , j. y. ollitrault , phys .* b580 * ( 2004 ) 157 - 162 . c. adler _ et al . _ [ star collaboration ] , phys .c * 66 * , 034904 ( 2002 ) [ nucl - ex/0206001 ] .a. bilandzic , r. snellings , s. voloshin , phys .* c83 * ( 2011 ) 044913 .r. s. bhalerao , m. luzum and j. -y .ollitrault , phys .c * 84 * ( 2011 ) 034910 .j. stoyanov , section 3 in `` determinacy of distributions by their moments '' , proceedings for _ international conference on mathematics & statistical modelling _ , 2006 .h. niemi , g. s. denicol , h. holopainen and p. huovinen , phys .c * 87 * ( 2013 ) 054901 [ arxiv:1212.1008 [ nucl - th ] ] .p. huo , j. jia and s. mohapatra , arxiv:1311.7091 [ nucl - ex ] .x. -n .wang and m. gyulassy , phys .lett . * 86 * , 3496 ( 2001 ) .b. zhang , comput .commun .* 109 * , 193 ( 1998 ) .z. -w .lin and c. m. ko , j. phys .g * 30 * , s263 ( 2004 ) .l. -w . chen andc. m. ko , phys .b * 634 * , 205 ( 2006 ) .b. -a .li and c. m. ko , phys .c * 52 * , 2037 ( 1995 ) .y. zhou , s. s. shi , k. xiao , k. j. wu and f. liu , phys .c * 82 * , 014905 ( 2010 ) [ arxiv:1004.2558 [ nucl - th ] ] .j. xu and c. m. ko , phys .c * 83 * , 034904 ( 2011 ) .y. zhou , k. xiao , f. liu and r. snellings , in preparation .a. bilandzic , `` anisotropic flow measurements in alice at the large hadron collider , '' cern - thesis-2012 - 018 .`` root system '' , http://root.cern.ch/.
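As a concrete illustration of the differential correlators of the last appendix, the sketch below evaluates the differential 2-particle correlator ⟨2'⟩_{n1,n2} from the p-, q- and Q-vectors for the special case of unit particle weights; the general weighted expressions are the ones given in the appendix, and the q-vector subtraction is what removes the autocorrelation of particles labelled both as POI and RP.

```cpp
// Sketch: differential 2-particle correlator <2'>_{n1,n2} from p-, q- and Q-vectors,
// for unit particle weights. The number of distinct (POI, RP) pairs is m_p*M - m_q.
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;

cd Qvec(int n, const std::vector<double>& angles) {
  cd q(0.0, 0.0);
  for (double a : angles) q += std::exp(cd(0.0, n * a));
  return q;
}

// phiRP:      azimuthal angles of all reference particles (RP) in the event
// psiPOI:     azimuthal angles of all particles of interest (POI) in the differential bin
// phiOverlap: angles of particles labelled both POI and RP
cd DiffTwoParticle(int n1, int n2, const std::vector<double>& phiRP,
                   const std::vector<double>& psiPOI,
                   const std::vector<double>& phiOverlap) {
  const double M = phiRP.size(), mp = psiPOI.size(), mq = phiOverlap.size();
  const cd num = Qvec(n1, psiPOI) * Qvec(n2, phiRP) - Qvec(n1 + n2, phiOverlap);
  const double den = mp * M - mq;
  return num / den;
}
```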
We present a new generic framework which enables exact and fast evaluation of all multi-particle azimuthal correlations. The framework can be readily used along with a correction framework for systematic biases in anisotropic flow analyses due to various detector inefficiencies. A new recursive algorithm has been developed for higher-order correlators for the cases where their direct implementation is not feasible. We propose and discuss new azimuthal observables for anisotropic flow analyses which can be measured for the first time with our new framework. The effect of finite detector granularity on multi-particle correlations is quantified and discussed in detail. We point out the existence of a systematic bias in traditional differential flow analyses which stems solely from the applied selection criteria on particles used in the analyses, and is also present in the ideal case when only flow correlations are present. Finally, we extend the applicability of our generic framework to the case of differential multi-particle correlations.
in quantum information theory it is generally accepted that ( i ) total correlations of a bipartite quantum state are quantified by quantum mutual information ( see e.g. ) ( ii ) the total correlations can be decomposed into classical and quantum correlations ( see e.g. ) ( iii ) the quantum correlations are dominated by the classical correlations ( see e.g. ) notice that the quantum correlations are not only limited to quantum entanglement , because separable quantum states can also have correlations which are responsible for the improvements of some quantum tasks that can not be simulated by classical methods . however , there are some results that may raise doubts as to whether the statements ( i ) , ( ii ) and ( iii ) hold for all . in the following webriefly review and discuss a few of them .for example , it has been shown that there are quantum states for which statements ( i ) and ( ii ) can not be simultaneously true , if quantum correlations are measured by relative entropy of entanglement while classical correlations are quantified by a measure based on the maximum information that could be extracted on one system by making a povm measurement on the other one .recently , it has been shown that for certain quantum states the quantum correlations , as measured by entanglement of formation , exceed half of the total correlations , as measured by quantum mutual information , .if one assumes that statements ( i ) , ( ii ) and ( iii ) hold for all , then one concludes that entanglement of formation can not be considered as a measure of quantum correlations .however , if one assumes that entanglement of formation is a measure of quantum correlations and statements ( ii ) and ( iii ) hold , then it can be shown that for these quantum states .moreover , it has been shown that for certain quantum states quantum correlations , as measured by entanglement of formation , exceed total correlations , as measured by quantum mutual information , .it means that entanglement of formation can not be considered as a measure of quantum correlations , if one assumes that statements ( i ) , ( ii ) and ( iii ) hold for all .however , if one assumes that entanglement of formation is a measure of quantum correlations and statements ( ii ) and ( iii ) hold , then again one comes to conclusion that for these quantum states .furthermore , it has been shown that for some quantum states quantum correlations , as measured by entanglement of formation , exceed , i.e. . if one assumes that statements ( i ) and ( ii ) hold for all , then one concludes that for these states .however , if one assumes that entanglement of formation is a measure of quantum correlations and statements ( ii ) and ( iii ) hold , then once again one comes to conclusion that for these quantum states .the above examples show that statement ( i ) may not be true for all bipartite quantum states .it is clear that if we assume that it is true , then we immediately conclude that quantum mutual information must be a measure of total correlations also in the case when the quantum state has only classical correlations , i.e. 
in the present paper , we show that this is not the case for some classically correlated quantum mixed states and therefore in general case quantum mutual information can not be considered as a measure of total correlations of bipartite quantum states .assume that alice and bob share a pair of qubits in the following state where .this state can not have quantum correlations because it is separable and qubit ( ) is in the state which is a mixture of orthogonal states and .therefore , it is clear that the correlations between two orthogonal states of qubits and can be purely classical ( see e.g. ) .suppose now that alice and bob measure two observables and , respectively .if the measurement outcome of ( ) is ( ) , then qubit ( ) is certainly in the state .therefore , it is clear that the classical correlations between two orthogonal states of qubits and are simply correlations between two classical random variables and corresponding to the measurement outcomes of and , respectively .notice that and are random variables with alphabets , and probability mass functions ] denotes the probability that the measurement outcome of ( ) is ( ) . in the case under consideration, it can be shown that assume now that first alice performs a measurement of and then bob performs a measurement of .if the measurement outcome of is , then the post - measurement state of the system is given by .therefore , the conditional probability that bob s outcome is provided that alice s was is ] . in the case under consideration, it can be shown that = \left ( \begin{array}{cc } 1 & 0 \\ 0 & 1 \end{array } \right)\ ] ] and = \left ( \begin{array}{cc } \alpha & 0 \\ 0 & 1-\alpha \end{array } \right).\ ] ] thus , we see that random variables and are not independent and therefore there exist classical correlations between qubits and in the state ( [ stan qubitow ] ) .assume now , according to the statement ( i ) , that total correlations of a bipartite quantum state are quantified by quantum mutual information which is defined in formal analogy to classical mutual information as where denotes the von neumann entropy .then , according to eq .( [ eq : cm ] ) the classical correlations between qubits and are measured by the quantum mutual information . in the case under consideration, it can be shown that is just classical mutual information of random variables and given by therefore , we see that the classical correlations content of the quantum state ( [ stan qubitow ] ) can be arbitrarily small , as measured by classical mutual information , because .now , we check if these correlations can be really arbitrarily small . from eq .( [ prawdopodobienstwa warunkowe dla qubitow ] ) it follows that if alice s measurement outcome is , then bob s one is , i.e. is a one - to - one function of , ( see fig .[ fig1 ] ) .it means that the random variables and and what follows the states of qubits are perfectly correlated in the information - theoretic sense . therefore, we see that in the general case quantum mutual information can not be considered as a measure of total correlations in bipartite quantum states . 
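The point can also be checked numerically. In the sketch below the joint distribution of the two measurement outcomes is the diagonal matrix diag(α, 1−α) quoted above; although the outcomes determine each other exactly (the conditional-probability matrix is the identity), the mutual information I(X;Y) = H(α) can be made arbitrarily small by letting α approach 0 or 1.

```cpp
// Numerical check for the two-qubit example: Y is a deterministic function of X,
// yet I(X;Y) = H(alpha) can be arbitrarily small.
#include <cmath>
#include <iostream>

double h(double q) {  // binary entropy in bits
  if (q <= 0.0 || q >= 1.0) return 0.0;
  return -q * std::log2(q) - (1.0 - q) * std::log2(1.0 - q);
}

int main() {
  for (double alpha : {0.5, 0.1, 0.01, 0.001}) {
    // joint distribution p(x,y) = diag(alpha, 1-alpha); both marginals are (alpha, 1-alpha)
    const double HX = h(alpha), HY = h(alpha);
    const double HXY = h(alpha);      // only the outcomes (0,0) and (1,1) occur
    const double I = HX + HY - HXY;   // = H(alpha)
    std::cout << "alpha = " << alpha << "  I(X;Y) = " << I
              << "  H(Y|X) = " << HXY - HX << "  (Y determined by X)\n";
  }
  return 0;
}
```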
now , we show why the classical mutual information does not capture all the correlations between random variables and , except the case when .we know that shannon entropy of random variable , , is a measure of alice s _ a priori _ uncertainty about the measurement outcome of and if the measurement outcome of is , then alice s uncertainty about the measurement outcome of is changed , preferably reduced , to .therefore , the information she gained about the measurement outcome of due to the measurement of is given by .thus , the average information gain about the measurement outcome of due to the knowledge of the measurement outcome of is , and it can be shown that it is equal to . therefore , we see that the average information gain about one random variable due to the knowledge of other one can be arbitrarily small although they are perfectly correlated . thus it is clear that classical mutual information is not a measure of correlations between two random variables , it is rather a measure of their mutual dependency .this conclusion leads us to the following question : what is an information - theoretic measure of correlations between random variables and ? for a pair of random variables with identical probability mass functions cover and thomas define it in the following way in the case under consideration , and it can be shown that , therefore , i.e. and are perfectly correlated for all .in the next section we show how to extend this definition to the case when the probability mass functions are not identical .assume now that alice and bob share a pair of qutrits in the following separable state in which orthogonal states of qutrits and are classically correlated .notice that and .suppose now that alice and bob measure two observables and .it can be easily shown that the probability mass functions and are not identical , and they are given by assume now that first alice performs a measurement of and then bob performs a measurement of .it can be shown that [ prawdopodobienstwa warunkowe dla qutritow ] and = \left ( \begin{array}{ccc } 0 & 0 & 0 \\ 0 & \frac{1}{3 } & 0 \\ \frac{1}{3 } & 0 & \frac{1}{3 } \end{array } \right ) .\label{prawdopodobienstwa laczne dla qutritow}\ ] ] thus , we see that the random variables and are not independent , and from eqs .( [ prawdopodobienstwa warunkowe dla qutritow ] ) it follows that bob s measurement outcomes are correlated with alice s ones , but they are not perfectly correlated ( see fig .[ fig2 ] ) .now , we show how to quantify these correlations . we know that for any two random variables and ( i ) with equality if and only if they are independent , and ( ii ) with equality if and only if , i.e. is perfectly correlated with in the information - theoretic sense .therefore , it is clear that if , then notice that this inequality can be rewritten in the following form therefore , the correlations between and can be measured by . 
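The quantities entering this discussion can be evaluated directly from the joint probabilities quoted above. The sketch below uses the matrix of eq. ([prawdopodobienstwa laczne dla qutritow]), with Alice's outcome as the row index and Bob's as the column index (consistent with the conditional-probability matrices quoted in the text), and prints the entropies, conditional entropies and mutual information; the key feature is that H(Y|X) stays strictly positive, so Bob's outcome is not fully determined by Alice's.

```cpp
// Numerical illustration for the two-qutrit example:
// p(x,y) = 1/3 for (x,y) in {(1,1),(2,0),(2,2)} and zero otherwise.
#include <cmath>
#include <iostream>

int main() {
  const double p[3][3] = {{0, 0, 0}, {0, 1.0 / 3.0, 0}, {1.0 / 3.0, 0, 1.0 / 3.0}};
  auto plog2 = [](double q) { return q > 0.0 ? -q * std::log2(q) : 0.0; };

  double px[3] = {0, 0, 0}, py[3] = {0, 0, 0}, HXY = 0.0;
  for (int x = 0; x < 3; ++x)
    for (int y = 0; y < 3; ++y) {
      px[x] += p[x][y];
      py[y] += p[x][y];
      HXY += plog2(p[x][y]);
    }
  double HX = 0.0, HY = 0.0;
  for (int i = 0; i < 3; ++i) { HX += plog2(px[i]); HY += plog2(py[i]); }

  const double HYgX = HXY - HX;  // uncertainty about Bob's outcome given Alice's (~0.667 bit)
  const double HXgY = HXY - HY;  // uncertainty about Alice's outcome given Bob's (= 0)
  const double I = HX + HY - HXY;
  std::cout << "H(X)=" << HX << " H(Y)=" << HY << " H(Y|X)=" << HYgX
            << " H(X|Y)=" << HXgY << " I(X;Y)=" << I << "\n";
  return 0;
}
```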
in the case under consideration , it can be shown that , and therefore assume now that first bob performs a measurement of and then alice performs a measurement of .if the measurement outcome of is , then the post - measurement state of the system is given by .therefore , the conditional probability that alice s outcome is provided that bob s was is ] .it this case , the joint probabilities are given by ( [ prawdopodobienstwa laczne dla qutritow ] ) while the conditional probabilities are as follows = \left ( \begin{array}{ccc } 0 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{array } \right).\ ] ] thus , we see that alice s measurement outcomes are perfectly correlated with bob s ones ( see fig .[ fig3 ] ) .now , we explain why this is the case . we know that for any two random variables and ( i ) with equality if and only if they are independent , and ( ii ) with equality if and only if , i.e. is perfectly correlated with . therefore , it is clear that if , then notice that this inequality can be rewritten in the following form therefore , the correlations between and can be measured by . in the case under consideration , it can be shown that and therefore thus , we see that the correlations between two random variables and corresponding to the measurement outcomes of and depend on the temporal order of the measurements performed by alice and bob .therefore , in order to capture all classical correlations that can be observed in the state ( [ stanab2 ] ) , we propose to define a measure of correlations between two random variables and in the following way notice that in the case when a state has only classical correlations the shannon entropies are just equal to the von neumann entropies and from eqs .( [ eq : cm ] ) and ( [ eq : miara korelacji ] ) it follows that thus , we see that total correlations of a bipartite quantum state should be quantified by ( [ eq : tc ] ) instead of quantum mutual information alone , at least for the classically correlated quantum states .in this paper , we have shown that for bipartite quantum systems there exist quantum states for which quantum mutual information can not be considered as a proper measure of total correlations , understood as the correlations between the measurement outcomes of two local observables . moreover , for these states we have proposed a different way of quantifying total correlations , which takes into account that the correlations can depend on the temporal order of the measurements .
In quantum information theory it is generally accepted that quantum mutual information is an information-theoretic measure of total correlations of a bipartite quantum state. We argue that there exist quantum states for which quantum mutual information cannot be considered as a measure of total correlations. Moreover, for these states we propose a different way of quantifying total correlations.
consider the gaussian relay problem shown in figure [ fig : gaussian ] .suppose the receiver and the relay each receive information about the transmitted signal of power .specifically , let where have correlation coefficient and are jointly gaussian with zero mean and equal variance .what should the relay say to the ultimate receiver ?if the relay sends information at rate , what is the capacity of the resulting relay channel ?we first note that the capacity from to , ignoring the relay , is the channel from the relay to the ultimate receiver has capacity .this relay information is sent on a side channel that does not affect the distribution of , and the information becomes freely available to as long as it does nt exceed rate .we focus on three cases for the noise correlation : and . if , then , the relay is useless , and the capacity of the relay channel is for all .now consider , i.e. , the noises and are independent .then the relay has no more information about than does , but the relay furnishes an independent look at .what should the relay say to ?this capacity , mentioned in , remains unsolved and typifies the primary open problem of the relay channel . as a partial converse ,zhang obtained the strict inequality for all .how about the case ? this is the problem that we solve and generalize in this note . herethe relay , while having no more information than the receiver , has much to say , since knowledge of and allows the perfect determination of .however , the relay is limited to communication at rate .thus , by a simple cut - set argument , the total received information is limited to bits per transmission .we argue that this rate can actually be achieved .since it is obviously the best possible rate , the capacity for is given as ( see figure [ fig : graph ] . ) every bit sent by the relay counts as one bit of information , despite the fact that the relay does nt know what it is doing .we present two distinct methods of achieving the capacity .our first coding scheme consists of hashing into bits , then checking the codewords , , one by one , with respect to the ultimate receiver s output and the hash check of .more specifically , we check whether the corresponding estimated noise is typical , and then check whether the resulting satisfies the hash of the observed . since the typicality check reduces the uncertainty in by a factor of while the hash check reduces the uncertainty by a factor of , we can achieve the capacity .it turns out hashing is not the unique way of achieving .we can compress into using bits with as side information in the same manner as in wyner ziv source coding , which requires thus , bits are sufficient to reveal to the ultimate receiver . then , based upon the observation , the decoder can distinguish messages if for this scheme , we now choose the appropriate distribution of given .letting where is independent of , we can obtain the following parametric expression of over all : setting in , solving for , and inserting it in , we find the achievable rate is given by so `` compress - and - forward '' also achieves the capacity . 
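For orientation, the sketch below tabulates the rates discussed above using the standard AWGN expressions (the formulas are standard, but the numerical values of P, N and R0 chosen here are purely illustrative): the direct capacity ½log₂(1+P/N), which is all one gets for ρ = +1, and the cut-set value ½log₂(1+P/N) + R₀, which is achieved for ρ = −1 by both coding schemes.

```cpp
// Sketch of the rates for the Gaussian example (bits per transmission), assuming the
// standard AWGN formulas: C0 = 0.5*log2(1 + P/N); for rho = -1 the relay link adds R0
// bit-for-bit, C = C0 + R0 (the cut-set bound); for rho = +1 the relay is useless, C = C0.
#include <cmath>
#include <iostream>

int main() {
  const double P = 1.0, N = 1.0;  // illustrative signal power and noise variance
  const double C0 = 0.5 * std::log2(1.0 + P / N);
  for (double R0 : {0.0, 0.25, 0.5, 1.0}) {
    std::cout << "R0 = " << R0
              << "  C(rho=+1) = " << C0
              << "  C(rho=-1) = " << C0 + R0 << "\n";
  }
  return 0;
}
```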
inspecting what it is about this problem that allows this solution, we see that the critical ingredient is that the relay output is a deterministic function of the input and the receiver output .this leads to the more general result stated in theorem [ thm : main ] in the next section .we consider the following relay channel with a noiseless link as depicted in figure [ fig : det - relay ] .we define a _ relay channel with a noiseless link _ as the channel where the input signal is received by the relay and the receiver through a channel , and the relay can communicate to the receiver over a separate noiseless link of rate .we wish to communicate a message index = \{1,2,\ldots , 2^{nr}\} ] is interpreted to mean . ]we specify a code with an encoding function \to \mathcal{x}^n ] , and the decoding function \to [ 2^{nr}] ] .the capacity is the supremum of the rates for which can be made to tend to zero as .we state our main result . for the relay channel with a noiseless link of rate from the relay to the receiver ,if the relay output is a deterministic function of the input and the receiver output , then the capacity is given by [ thm : main ] the converse is immediate from the simple application of the max - flow min - cut theorem on information flow ( * ? ? ?* section 15.10 ) .the achievability has several interesting features .first , as we will show in the next section , a novel application of random binning achieves the cut - set bound . in this coding scheme, the relay simply sends the hash index of its received output .what is perhaps more interesting is that the same capacity can be achieved also via the well - known `` compress - and - forward '' coding scheme of cover and el gamal . in this coding scheme, the relay compresses its received output as in wyner ziv source coding with the ultimate receiver output as side information . in both coding schemes ,every bit of relay information carries one bit of information about the channel input , although the relay does not know the channel input . andthe relay information can be summarized in a manner completely independent of geometry ( random binning ) or completely dependent on geometry ( random covering ) . more surprisingly , we can partition the relay space using both random binning and random covering .thus , a combination of `` hash - and - forward '' and `` compress - and - forward '' achieves the capacity .the next section proves the achievability using the `` hash - and - forward '' coding scheme .the `` compress - and - forward '' scheme is deferred to section [ sec : second ] and the combination will be discussed in sections [ sec : discuss ] and [ sec : third ] .we combine the usual random codebook generation with list decoding and random binning of the relay output sequences : _ codebook generation ._ generate independent codewords of length according to . independently , assign all possible relay output sequences in into bins uniformly at random . __ to send the message index $ ] , the transmitter sends the codeword . upon receiving the output sequence , the relay sends the bin index to the receiver . _ decoding ._ let ( * ? ? ?* section 7.6 ) denote the set of jointly typical sequences under the distribution .the receiver constructs a list , ( x^n(w ) , y^n ) \in a_{\epsilon}^{(n)}\}\ ] ] of codewords that are jointly typical with . 
since the relay output is a deterministic function of , then for each codeword in , we can determine the corresponding relay output exactly .the receiver declares was sent if there exists a unique codeword with the corresponding relay bin index matching the true bin index received from the relay ._ analysis of the probability of error . _ without loss of generality , assume was sent .the sources of error are as follows ( see figure [ fig : scheme1 ] ) : 1 .the pair is not typical .the probability of this event vanishes as tends to infinity .the pair is typical , but there is more than one relay output sequence with the observed bin index , i.e. , . by markovs inequality , the probability of this event is upper bounded by the expected number of codewords in with the corresponding relay bin index equal to the true bin index . since the bin indexis assigned independently and uniformly , this is bounded by which vanishes asymptotically as if .the pair is typical and there is exactly one matching the true relay bin index , but there is more than one codeword that is jointly typical with and corresponds to the same relay output , i.e. , .the probability of this kind of error is upper bounded by which vanishes asymptotically if .the general relay channel was introduced by van der meulen .we refer the readers to cover and el gamal for the history and the definition of the general relay channel . for recent progress , refer to kramer et al . , el gamal et al . , and the references therein .we recall the following achievable rate for the general relay channel investigated in . for any relay channel , the capacity is lower bounded by where the supremum is taken over all joint probability distributions of the form subject to the constraint [ thm : ceg ] roughly speaking , the achievability of the rate in theorem [ thm : ceg ] is based on a superposition of `` decode - and - forward '' ( in which the relay decodes the message and sends it to the receiver ) and `` compress - and - forward '' ( in which the relay compresses its own received signal without decoding and sends it to the receiver ) .this coding scheme turns out to be optimal for many special cases ; theorem [ thm : ceg ] reduces to the capacity when the relay channel is degraded or reversely degraded and when there is feedback from the receiver to the relay .furthermore , for the semideterministic relay channel with the sender , the relay sender , the relay receiver and the receiver , el gamal and aref showed that theorem [ thm : ceg ] reduces to the capacity given by although this setup looks similar to ours , we note that neither nor theorem [ thm : main ] implies the other . in a sense ,our model is more deterministic in the relay - to - receiver link , while the el gamal aref model is more deterministic in the transmitter - to - relay link .a natural question arises whether our theorem [ thm : main ] follows from theorem [ thm : ceg ] as a special case .we first note that in the coding scheme described in section [ sec : main ] , the relay does neither `` decode '' nor `` compress '' , but instead `` hashes '' its received output .indeed , as a coding scheme , this `` hash - and - forward '' appears to be a novel method of summarizing the relay s information .however , `` hash - and - forward '' is not the unique coding scheme achieving the capacity in the next section , we show that `` compress - and - forward '' can achieve the same rate .theorem [ thm : main ] was proved using `` hash - and - forward '' in section [ sec : first ] . 
herewe argue that the capacity in theorem [ thm : main ] can also be achieved by `` compress - and - forward '' .we start with a special case of theorem [ thm : ceg ] .the `` compress - and - forward '' part ( cf .* theorem 6 ) ) , combined with the relay - to - receiver communication of rate , gives the achievable rate where the supremum is over all joint distributions of the form satisfying here the inequality comes from the wyner ziv compression of the relay s output based on the side information .the achievable rate captures the idea of decoding based on the receiver s output and the compressed version of the relay s output .we now derive the achievability of the capacity from an algebraic reduction of the achievable rate given by and .first observe that , because of the deterministic relationship , we have also note that , for any triple , if , there exists a distribution such that and . henceforth , maximums are taken over joint distributions of the form with .we have on the other hand , thus , we have in words , `` compress - and - forward '' achieves the capacity .it is rather surprising that both `` hash - and - forward '' and `` compress - and - forward '' optimally convey the relay information to the receiver , especially because of the dual nature of compression ( random covering ) and hashing ( random binning ) .( and the hashing in `` hash - and - forward '' should be distinguished from the hashing in wyner ziv source coding . ) the example in figure [ fig : bsc ] illuminates the difference between the two coding schemes . herethe binary input is sent over a binary symmetric channel with cross - over probability , or equivalently , the channel output is given as where the binary additive noise is independent of the input . with no information on available at the transmitter or the receiver , the capacity is now suppose there is an intermediate node which observes and `` relays '' that information to the decoder through a side channel of rate . since is a deterministic function of , theorem [ thm : main ] applies and we have for there are two ways of achieving the capacity .first , hashing .the relay hashes the entire binary into bins , then sends the bin index of to the decoder .the decoder checks whether a specific codeword is typical with the received output and then whether matches the bin index .next , covering . the relay compresses the state sequence using the binary lossy source code with rate .more specifically , we use the standard backward channel for the binary rate distortion problem ( see figure [ fig : dist ] ) : here is the reconstruction symbol and is independent of ( and ) with parameter satisfying thus , using bits , the ultimate receiver can reconstruct .finally , decoding based on , we can achieve the rate in summary , the optimal relay can partition its received signal space into either random bins or hamming spheres .the situation is somewhat reminiscent of that of lossless block source coding .suppose is independent and identically distributed ( i.i.d . ) . hereare two basic methods of compressing into bits with asymptotically negligible error. 1 . _ hashing . _the encoder simply hashes into one of indices . with high probability, there is a unique typical sequence with matching hash index .enumeration _the encoder enumerates typical sequences .then bits are required to give the enumeration index of the observed typical sequence . 
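to make the equivalence of hashing and covering in this binary example concrete , the sketch below evaluates the two rates described above — 1 - h ( p ) + r_0 for hashing , and 1 - h ( d ) for covering with d chosen so that h ( p ) - h ( d ) = r_0 — and checks that they coincide for r_0 below h ( p ) . the numerical values of p and r_0 are illustrative , and the closed - form expressions are our reading of the argument rather than quoted equations .

```python
# Hashing vs. covering in the binary-symmetric-channel relay example:
# both should deliver 1 - H(p) + R0 bits per channel use (capped at 1).
import math

def Hb(q):
    """Binary entropy in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def inv_Hb(h, tol=1e-12):
    """Inverse of Hb on [0, 1/2] by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Hb(mid) < h:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = 0.11    # BSC crossover probability (illustrative)
R0 = 0.25   # relay-to-receiver link rate, assumed <= Hb(p)

rate_hashing = min(1.0, 1.0 - Hb(p) + R0)
d = inv_Hb(Hb(p) - R0)             # distortion of the binary lossy code for the noise Z
rate_covering = 1.0 - Hb(d)        # decode X from Y xor Zhat = X xor V, with V ~ Bern(d)

print(f"hashing : {rate_hashing:.6f}")
print(f"covering: {rate_covering:.6f}")   # matches hashing up to bisection tolerance
```

for r_0 above h ( p ) the relay can describe its observation losslessly and the hashing rate simply saturates at one bit per channel use .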
with high probability , the given sequence is typical .while these two schemes are apparently unrelated , they are both extreme cases of the following coding scheme . 1 . _ covering with hashing ._ by fixing and generating independent sequences each i.i.d . , we can induce a set of coverings for the space of typical s .for each cover , there are sequences that are jointly typical with .therefore , by hashing into one of hash indices and sending it along the cover index , we can recover a typical with high probability .this scheme requires bits .now if we take independent of , then we have the case of hashing only .on the other hand , if we take , then we have enumeration only , in which case the covers are hamming spheres of radius zero .it is interesting to note that the combination scheme works under any .thus motivated , we combine `` hash - and - forward '' with `` compress - and - forward '' in the next section .here we show that a combination of `` compress - and - forward '' and `` hash - and - forward '' can achieve the capacity for the setup in theorem [ thm : main ] .we first fix an _ arbitrary _conditional distribution and generate sequences each i.i.d . . then , with high probability , a typical has a jointly typical cover .( if there is more than one , pick the one with the smallest index .if there is none , assign . )there are two cases to consider , depending on our choice of ( and the input codebook distribution ) .first suppose if we treat as the relay output , is a deterministic function of and thus of .therefore , we can use `` hash - and - forward '' on sequences .( markov lemma justifies treating as the output of the memoryless channel . )this implies that we can achieve but from and the functional relationship between and , we have therefore , which is achieved by the above `` compress - hash - and - forward '' scheme with and satisfying . alternatively ,suppose then , we can easily achieve the rate by the `` compress - and - forward '' scheme .the rate suffices to convey to the ultimate receiver .but we can do better by using the remaining bits to further hash itself .( this hashing of should be distinguished from that of wyner ziv coding which bins codewords . ) by treating as a new ultimate receiver output and as the relay output , `` hash - and - forward '' on top of `` compress - and - forward '' can achieve since and the achievable rate in reduces to thus , by maximizing over input distributions , we can achieve the capacity for either case or .it should be stressed that our combined `` compress - hash - and - forward '' is optimal , regardless of the covering distribution . in other words ,any covering ( geometric partitioning ) of space achieves the capacity if properly combined with hashing ( nongeometric partitioning ) of the same space .in particular , taking leads to `` hash - and - forward '' while taking the optimal covering distribution for and in section [ sec : second ] leads to `` compress - and - forward '' .in this section , we show that theorem [ thm : main ] confirms the following conjecture by ahlswede and han on the capacity of channels with rate - limited state information at the receiver , for the special case in which the state is a deterministic function of the channel input and the output .first , we discuss the general setup considered by ahlswede and han , as shown in figure [ fig : ah ] . 
herewe assume that the channel has independent and identically distributed state and the decoder can be informed about the outcome of via a separate communication channel at a fixed rate .ahlswede and han offered the following conjecture on the capacity of this channel .the capacity of the state - dependent channel as depicted in figure [ fig : ah ] with rate - limited state information available at the receiver via a separate communication link of rate is given by where the maximum is over all joint distributions of the form such that and the auxiliary random variable has cardinality .it is immediately seen that this problem is a special case of a relay channel with a noiseless link ( figure [ fig : det - relay ] ) . indeed, we can identify the relay output with the channel state and identify the relay channel with the state - dependent channel .thus , the channel with rate - limited state information at the receiver is a relay channel in which the relay channel output is independent of the input .the binary symmetric channel example in section [ sec : discuss ] corresponds to this setup .now when the channel state is a deterministic function of , for example , as in the binary example in section [ sec : discuss ] , theorem [ thm : main ] proves the following . for the state - dependent channel with state information available at the decoder via a separate communication link of rate ,if the state is a deterministic function of the channel input and the channel output , then the capacity is given by our analysis of `` compress - and - forward '' coding scheme in section [ sec : second ] shows that reduces to , confirming the ahlswede han conjecture when is a function of . on the other hand , our proof of achievability ( section [ sec : first ] ) shows that `` hash - and - forward '' is equally efficient for informing the decoder of the state information .even a completely oblivious relay can boost the capacity to the cut set bound , if the relay reception is fully recoverable from the channel input and the ultimate receiver output . and there are two basic alternatives for the optimal relay function one can either compress the relay information as in the traditional method of `` compress - and - forward , '' or simply hash the relay information .in fact , infinitely many relaying schemes that combine hashing and compression can achieve the capacity .while this development depends heavily on the deterministic nature of the channel , it reveals an interesting role of hashing in communication .r. ahlswede and t. s. han , `` on source coding with side information via a multiple - access channel and related problems in multi - user information theory , '' _ ieee trans .inform . theory _it-29 , no . 3 , pp .396412 , 1983 .
the capacity of a class of deterministic relay channels with the transmitter input , the receiver output , the relay output , and a separate communication link from the relay to the receiver with capacity , is shown to be thus every bit from the relay is worth exactly one bit to the receiver . two alternative coding schemes are presented that achieve this capacity . the first scheme , `` hash - and - forward '' , is based on a simple yet novel use of random binning on the space of relay outputs , while the second scheme uses the usual `` compress - and - forward '' . in fact , these two schemes can be combined together to give a class of optimal coding schemes . as a corollary , this relay capacity result confirms a conjecture by ahlswede and han on the capacity of a channel with rate - limited state information at the decoder in the special case when the channel state is recoverable from the channel input and the output .
the main drawback of visible light communication ( vlc ) systems is the narrow modulation bandwidth of the light sources , which forms a barrier to achieving rival data rates .recently , the development of high - rate vlc systems has been an active research area . to this end , equalization techniques ,adaptive modulation schemes , and multiple - input - multiple - output ( mimo ) technology have been considered for achieving higher data - rates in vlc systems .orthogonal frequency division multiplexing ( ofdm ) and orthogonal frequency division multiple access ( ofdma ) schemes have also attracted attention in vlc systems due to their high spectral efficiency .however , conventional ofdm and ofdma techniques can not be directly applied to vlc systems , due to the restriction of positive and real signals imposed by intensity modulation and the illumination requirements . for this reason , dc - biasing and clipping techniqueshave been proposed to adapt ofdm and ofdma to vlc systems , but such techniques degrade the spectral efficiency and the bit error rate ( ber ) performance . _ power domain multiple access _ , also known as _ non - orthogonal multiple access_ ( noma ) , has been recently proposed as a promising candidate for 5 g wireless networks . in noma, users are multiplexed in the power domain using superposition coding at the transmitter side and successive interference cancellation ( sic ) at the receivers . in noma, each user can exploit the entire bandwidth for the whole time . as a result , significant enhancement in the sum ratecan be achieved .recent investigations on noma for rf systems are shown to yield substantial enhancement in throughput . in this paper, we propose noma as an efficient and flexible multiple access protocol for boosting spectral efficiency in vlc downlink ( dl ) systems thanks to the following reasons : * noma is efficient in multiplexing a few number of users .this is in line with vlc systems , which depend on transmitting leds that act as small cells to accommodate a small number of users in room environments . 
* sic requires channel state information ( csi ) at both the receivers and the transmitters to assist the functionalities of user demultiplexing , decoding order , and power allocation . this is a major limitation in rf but not in a vlc system , where the channel remains constant most of the time , changing only with the movement of the users . * noma performs better in high signal - to - noise ratio ( snr ) scenarios . this is the case in vlc links , which inherently offer high snrs due to the short separation between the led and the photo detector ( pd ) and the dominant line of sight ( los ) path . * vlc system performance can be optimized by tuning the transmission angles of the leds and the fields of view ( fovs ) of the pds . these two degrees of freedom can enhance the channel gain differences among users , which is critical for the performance of noma . the contribution of this paper is twofold : first , to the best of our knowledge , this is the first work suggesting noma as a potential multiple access scheme for high - rate vlc systems ; second , we develop a complete framework for indoor noma - vlc multi - led dl networks by adopting a novel channel - dependent power allocation strategy called _ gain ratio power allocation _ ( grpa ) . grpa significantly enhances the system's performance compared to the static power allocation approach by maximizing the users' sum rate . moreover , the proposed framework can adjust the transmission angles of the leds and the fovs of the pds to maximize system throughput . notice that our framework considers user mobility to establish a realistic scenario . we consider a realistic scenario with multiple leds in an indoor environment , such as a library or a conference room , with the beams formed by adjacent leds slightly overlapping , as shown in fig . [ fig : vlc_network ] . in this way , users located at the cell boundary may receive data streams from two adjacent leds . in our setup , we assume that the first user is associated with the first led , since it lies in the coverage of its beam ; similarly , the second user is associated with the second led ; finally , the third user , located in the intersection area of the two beams , can receive data from both leds . a random walk mobility model is implemented to mimic the movements of the indoor users : in this model , a user moves from its current location to a new one by randomly choosing a direction and a speed , each drawn from a fixed interval ( the speed measured in m / s ) .
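as an illustration of the mobility model just described ( the numerical bounds on the direction and speed are not given above , so the uniform direction over [ 0 , 2*pi ) , the speed interval , the slot duration and the room size below are all assumptions ) , one movement step can be sketched as :

```python
# One step of a 2-D random-walk mobility model: pick a direction and a speed,
# move for one time slot, and clip the new position to the room boundaries.
# Speed interval, slot duration and room size are illustrative assumptions.
import math
import random

ROOM_X, ROOM_Y = 5.0, 5.0      # room dimensions in metres (assumed)
V_MIN, V_MAX = 0.0, 1.0        # speed range in m/s (assumed)
SLOT = 1.0                     # duration of one movement slot in seconds (assumed)

def random_walk_step(x, y):
    theta = random.uniform(0.0, 2.0 * math.pi)   # movement direction
    v = random.uniform(V_MIN, V_MAX)             # movement speed
    x_new = min(max(x + v * SLOT * math.cos(theta), 0.0), ROOM_X)
    y_new = min(max(y + v * SLOT * math.sin(theta), 0.0), ROOM_Y)
    return x_new, y_new

# example: track one user for a few slots
pos = (2.5, 2.5)
for _ in range(5):
    pos = random_walk_step(*pos)
    print(f"({pos[0]:.2f}, {pos[1]:.2f})")
```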
( figure [ fig : vlc_network ] : leds and users . ) using noma , the first led transmits real and positive signals , each with its own power value , conveying the information intended for the first and third users , while the second led likewise transmits the signals intended for the second and third users . ( the third user can potentially achieve diversity gain by receiving , and combining , the two copies of the same symbol transmitted from the two different leds . ) for each led , the transmitted signal is a superposition of the signals intended for its users . the index set of users connected to a given led defines that led's user group ; the signal transmitted from the led is the sum of the signals of the users in this group , and the assigned powers add up to the total transmitted power of the led . the received signal at a user is formed by the contribution of all signals transmitted from the leds in the network plus additive white gaussian noise ( awgn ) of zero mean , whose total variance is the sum of contributions from shot noise and thermal noise at the receiver . the los path gain from an led to a user is determined by the user pd area , the distance between the led and the user , the angle of irradiance with respect to the transmitter perpendicular axis , the angle of incidence with respect to the receiver axis , the fov of the user , the gain of the optical filter , and the gain of the optical concentrator , which depends on the refractive index ; the gain is zero whenever the incidence angle exceeds the fov . the lambertian radiant intensity of the led is set by the order of lambertian emission , which is determined by the transmitter semi - angle at half power . the multi - user interference is eliminated by means of sic , and the decoding is performed in the order of increasing channel gain . based on this order , a user can correctly decode the signals of all users with lower decoding order , while the interference from users of higher decoding order is not eliminated and is treated as noise ; the instantaneous signal - to - interference - plus - noise ratio ( sinr ) of a user therefore contains interference terms only from users that are higher in the decoding order . evidently , the strategy adopted for the allocation of the transmitted power among users is critical for the performance of the vlc system . in this section , we present the noma - vlc framework with the gain ratio power allocation ( grpa ) scheme . the goal is to implement noma in a realistic multi - led scenario for achieving the highest possible throughput . we consider the existence of a _ central control unit _ ( ccu ) that collects the necessary information about user locations and their associated channel gains . thanks to its deterministic nature , the vlc channel remains constant for a fixed receiver location , which simplifies channel estimation ; when a user changes its location , the ccu updates its information accordingly . user association is based on the spatial dimension : it exploits user position in order to determine which led can provide access . if a user is located in the overlapping area of two adjacent leds , it will be associated with both of them . in this way , diversity gain can be achieved to enhance the performance of the cell edge users .
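since the channel - gain and sinr expressions above appear without their symbols , the sketch below implements the textbook lambertian los model that matches the quantities listed ( pd area , distance , irradiance and incidence angles , fov , optical filter gain , concentrator gain and lambertian order ) together with a generic sic - style sinr computation ; responsivity and modulation constants are folded into the gains , and all numerical values are assumptions rather than the paper's parameters .

```python
# Textbook Lambertian line-of-sight VLC channel gain and a simple SIC SINR sweep.
# All numerical parameters are illustrative assumptions.
import numpy as np

def lambertian_order(semi_angle):
    """Lambertian emission order m from the LED semi-angle at half power (radians)."""
    return -np.log(2.0) / np.log(np.cos(semi_angle))

def los_gain(d, phi, psi, fov, pd_area=1e-4, T_filter=1.0, n_ref=1.5,
             semi_angle=np.radians(60.0)):
    """LOS gain from an LED to a PD; zero outside the receiver field of view."""
    if psi > fov:
        return 0.0
    m = lambertian_order(semi_angle)
    g_conc = n_ref ** 2 / np.sin(fov) ** 2            # optical concentrator gain
    return ((m + 1) * pd_area / (2.0 * np.pi * d ** 2)
            * np.cos(phi) ** m * T_filter * g_conc * np.cos(psi))

def sic_sinrs(gains, powers, noise_var):
    """SINR per user when decoding follows increasing channel gain.

    gains/powers are listed in decoding order: index 0 is decoded first, and
    the signals of users decoded later are treated as interference."""
    sinrs = []
    for k, (h, p) in enumerate(zip(gains, powers)):
        interference = h ** 2 * sum(powers[k + 1:])
        sinrs.append(h ** 2 * p / (interference + noise_var))
    return sinrs

# two users at different distances from the same LED (illustrative geometry)
h_far = los_gain(d=3.0, phi=np.radians(30), psi=np.radians(30), fov=np.radians(60))
h_near = los_gain(d=2.2, phi=np.radians(10), psi=np.radians(10), fov=np.radians(60))
print(sic_sinrs([h_far, h_near], powers=[3.0, 1.0], noise_var=1e-13))
```

note that in this sketch a narrower fov raises the concentrator gain , which is the effect exploited by the fov tuning discussed next .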
to gain more insight, we investigate the effect of adjusting the leds transmission angles at half power in order to eliminate inter - beam interference . transmitting angle tuningcan be related to cell zooming approach in , where the cell size can be adjusted by modifying the transmitted power of the led . in this casethe dc component of the transmitted power needs to be calculated and changed each time the cell size needs to be altered . on the other hand ,our transmitting angle tuning approach would give similar performance without the need to alter the transmitted optical power , and thus a uniform illumination can be ensured .for practical feasibility , we assume that each led has two different transmission angle tunings .once a user steps out of the coverage of its associated led , it will be handed over to the nearest led in order to continue its reception .the sic decoding order among the users of the led is decided based on the channel gain of each user , . by substituting and into and substituting with , where is the height between the pds and the leds ( which is assumed to be fixed , i.e. , at table level ) , and assuming vertical alignment of leds and pds , the channel gain can be expressed as from , it can be seen that the channel gain depends on two parameters ; the distance and the fov of the pd .if the fovs of the pds are fixed ( i.e. , not tunable ) , then the sic ordering can be easily made in the decreasing order of the distance .users in the index set of led are sorted in the order of decreasing distance .then , users existing at the cell boundary ( if any ) , are moved to the end of the decoding order . in this way, cell boundary users can decode their signals after subtracting the signals components intended for other users in both cells .thus , if the decreasing order of the users distances from the led is for users in the cell centre , and for cell edge users , then the decoding order is set to where denotes the sic decoding order for user u . as a further step , we examine the effect of changing the fovs of the users .as shown in , the gain of the optical concentrator of the pd can be increased by reducing the fov .thus , fov tuning can be utilized to enhance channel gain differences among users , which is beneficial for the success of the noma technique . however , if the fov is smaller than it should be , the pd will no longer be able to observe the required led beam .the location of the user with respect to the transmitting led determines the optimal fov adjustment .we assume that each pd has three tunable fov settings .the fovs of cell edge users are tuned to receive the beam of one led only ( if possible ) to reduce beam overlapping and enhance spectral efficiency .this is done by tuning the fov to the lowest setting that allows reception from the nearest led .moreover , the fovs of users in the center of the cell are set to the lowest setting to enhance channel gain .moreover , fov tuning can be exploited to minimize the number of handovers in the system .particularly , as the users are moving to the proximity of the transmitting led , they can use a wider fov setting to stay connected to the same led and avoid unnecessary handovers . after adjusting the fovs of the users , sic decoding orderis done based on the distance . 
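the ordering rule described above — sort each led's users by decreasing distance and move cell - boundary users to the end of the decoding order — is simple enough to state as code ; the user record and the cell - edge flag below are hypothetical structures introduced for illustration :

```python
# SIC decoding order for one LED: decreasing distance first, cell-edge users last.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    distance: float      # horizontal distance to this LED
    cell_edge: bool      # True if the user lies in the overlap of two beams

def sic_order(users):
    """Return users in decoding order as described in the text."""
    centre = sorted((u for u in users if not u.cell_edge),
                    key=lambda u: u.distance, reverse=True)
    edge = sorted((u for u in users if u.cell_edge),
                  key=lambda u: u.distance, reverse=True)
    return centre + edge   # boundary users decode after subtracting the others

users = [User("u1", 1.2, False), User("u2", 2.4, False), User("u3", 2.9, True)]
print([u.name for u in sic_order(users)])   # ['u2', 'u1', 'u3']
```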
based on the previous step , each led transmits the signals of its users using noma . to do so , different power values are allocated to the users based on their channel gains , and the sum of the assigned power values is equal to the led transmitting power . we propose a novel gain ratio power allocation strategy and compare it to static power allocation , in which the transmission power of each sorted user is determined by a fixed power allocation factor . according to grpa , the power allocation depends on the user's gain compared to the gain of the first sorted user , as well as on the decoding order . after setting the power of the first sorted user , the power allocated to each subsequent user decreases with its decoding order , since lower power levels are sufficient for users with good channel conditions to decode their signals after subtracting the signals of users with lower decoding order . moreover , the gain ratio is raised to the power of the decoding order to ensure fairness , as users with low decoding order need much higher power due to the large interference they receive . in this section , we evaluate the performance of the proposed noma - vlc framework . we consider a room with two transmitting leds . in the cases with no tuning , we set the led transmission angles and the fovs to fixed values ; in the tunable cases , the transmission angles and the fovs take the discrete settings described in the previous section . we used the same led and pd characteristics as in the cited reference . ( the figures of this section plot the average bit error rate and the users' sum rate ( bps ) against the number of users , with legend entries for fixed power allocation , grpa , no tuning , and tuning . ) first , we compare the ber performance of grpa and static power allocation . it was found by simulations that static power allocation gives its best ber performance at particular values of the allocation factor ; at these values , the power allocated to users experiencing bad channel conditions is high enough to enable correct signal decoding . figure [ fig : fixedvsdynamic ] shows the ber for all users for the two power allocation schemes . the proposed grpa strategy performs better than static power allocation as it compensates for channel differences among users ; for a fixed target ber , grpa was able to serve more users than static power allocation while maintaining the same ber performance . it should be pointed out that grpa is more sensitive to channel knowledge ; we assumed that the rf uplink channel is noiseless in our simulations . next , the effect of transmission angle and fov tuning on system performance is studied . we primarily compare the following scenarios : 1 ) no tuning , 2 ) transmission angle tuning , and 3 ) fov tuning . figures [ fig : sumratevsusers ] and [ fig : sumratevsusers_dynamic ] show the users' sum rate for the three scenarios under static power allocation and the proposed grpa , respectively .
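for concreteness , the grpa rule described at the start of this section can be sketched as follows . the displayed allocation formulas are not reproduced above , so the rule below — power proportional to ( g_1 / g_k ) raised to the decoding order k , normalized to the led power budget — is our reading of the description and should be treated as an assumption rather than the paper's exact equation .

```python
# Gain-ratio power allocation (GRPA) sketch for one LED with K NOMA users.
# Assumed rule: P_k proportional to (g_1 / g_k)**k, with g_1 the channel gain of the
# first user in the decoding order, normalized so the powers sum to the LED budget.
def grpa_powers(gains_in_decoding_order, p_total):
    g1 = gains_in_decoding_order[0]
    weights = [(g1 / gk) ** k for k, gk in enumerate(gains_in_decoding_order, start=1)]
    scale = p_total / sum(weights)
    return [w * scale for w in weights]

# example: three users ordered from weakest (decoded first) to strongest channel
gains = [0.2e-5, 0.6e-5, 1.3e-5]
powers = grpa_powers(gains, p_total=1.0)
print([round(p, 4) for p in powers], sum(powers))
```

with the example gains , the weakest user receives most of the budget and the allocated power decays quickly along the decoding order , which is the qualitative behaviour described above .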
as it can be seen , for a small number of users , transmission angles and fovs tuning increases the sum data rate .this is because the interference between the two beams is completely eliminated .however , as the number of users increases , less power is allocated to each user , which makes it better for cell edge users to receive their signals from both leds .thus , transmission angles tuning will degrade system throughput , as it limits cell edge users to receive from one led only .on the other hand , fovs tuning will have the best performance as it allows each user to optimize reception according to its position .as expected , the performance improvement induced by fovs tuning is significantly increased when grpa is adopted , as the latter accounts for channel gain variations .furthermore , fov tuning can be exploited to decrease the number of handovers performed in the system .figure [ fig : handovers ] shows the number of handovers for two scenarios : 1 ) fixed fov , and 2 ) fov of the moving users is adjusted to the highest setting while they move in the proximity of their associated leds .it can be seen that the number of handovers can be significantly decreased with the tunable fovs strategy .h. li , x. chen , b. huang , d. tang , and h. chen , `` high bandwidth visible light communications based on a post - equalization circuit , '' _ ieee photon ._ , vol . 26 , no . 2 , pp .119122 , jan .2014 .h. marshoud , d. dawoud , v. m. kapinas , g. k. karagiannidis , s. muhaidat , and b. sharif , `` mu - mimo precoding for vlc with imperfect csi , '' in _ proc .international workshop on optical wireless communication ( iwow ) _ , sep .2015 .j. dang and z. zhang , `` comparison of optical ofdm - idma and optical ofdma for uplink visible light communications , '' in _ proc .international conference on wireless communications signal processing ( wcsp ) _ , oct . 2012 .a. benjebbour , y. saito , y. kishiyama , a. li , a. harada , and t. nakamura , `` concept and practical considerations of non - orthogonal multiple access ( noma ) for future radio access , '' in _ proc .international symposium on intelligent signal processing and communications systems ( ispacs ) _ , nov . 2013 .z. ding , z. yang , p. fan , and h. poor , `` on the performance of non - orthogonal multiple access in g systems with randomly deployed users , '' _ ieee signal process ._ , vol . 21 , no . 12 , pp .15011505 , dec . 2014 .t. camp , j. boleng , and v. davies , `` a survey of mobility models for ad hoc network research , '' _ wireless communications & mobile computing ( wcmc ) : special issue on mobile ad hoc networking : research , trends and applications _ , vol. 2 , no . 5 , pp . 483502 , sep .
the main limitation of visible light communication ( vlc ) is the narrow modulation bandwidth , which reduces the achievable data rates . in this paper , we apply the non - orthogonal multiple access ( noma ) scheme to enhance the achievable throughput in high - rate vlc downlink networks . we first propose a novel gain ratio power allocation ( grpa ) strategy that takes into account the users channel conditions to ensure efficient and fair power allocation . our results indicate that grpa significantly enhances system performance compared to the static power allocation . we also study the effect of tuning the transmission angles of the light emitting diodes ( leds ) and the field of views ( fovs ) of the receivers , and demonstrate that these parameters can offer new degrees of freedom to boost noma performance . simulation results reveal that noma is a promising multiple access scheme for the downlink of vlc networks . multiple access , noma , power allocation , power domain multiple access , visible light communication .
opacity distribution functions are used to describe line opacities in lte for a set of pairs of temperature t and gas pressure p assuming a fixed chemical composition and ( in our case ) a fixed value for the microturbulence : the frequency dependence of the is described by dividing the whole wavelength range relevant to the desired stellar atmosphere computations into 300 to 1200 channels .each of these channels is devided into 10 - 12 subchannels which provide a statistical representation of line opacity ( ranging from high to low values ) .the ( radiative ) flux integrated over a set of subchannels approximates the overall ( radiative ) flux for each particular channel .thus , one can compute the total radiative flux throughout the model atmosphere , as well as surface fluxes and intensities .this technique was described in strom and kurucz ( 1966 ) and later used , for instance , by gustafsson et al .( 1975 ) and by kurucz ( 1979 ) .muthsam ( 1979 ) was the first to compute model atmospheres for cp stars . in the mean time, the reliability of atomic data obtained from experiments has been improved .similarly , available line lists have increased both in size and reliability by up to an order of magnitude .hence , it is worthwhile to bring individual model atmospheres for cp stars to the standards for stars with solar elemental abundance ( cf .kurucz 1993 ). for his work muthsam ( 1979 ) used the opacity sampling ( os ) technique which requires the computation of a crude synthetic spectrum .the latter has to be sufficiently accurate for both the calculation of the integral flux in a wavelength region similar in size to the channels used for the odf technique as well as for the calculation of the total radiative flux in each model layer .the os technique is more efficient for the computation of _ single _ models .as opposed to the odf technique , opacity sampling allows the study of vertical stratification of elemental abundances .moreover , the os technique is also capable of representing the blanketing effect in cool stars where the wavelength distribution of the opacities may change dramatically with depth as new molecules are formed ( cf.ekberg et al . 1986 ) . however , as we did not intend to study stratification or late type stars with our code , we rather decided to benefit from the main advantage of the odf technique : the rapid computation of small _ grids _ of model atmospheres ( as a function of and ) .such grids are very convenient for spectroscopic analyses , the computation of color and flux grids , the investigation of the flux distribution of pulsating stars , and doppler imaging .speed and accuracy requirements for an odf computation are determined by a ) the number of t - p pairs in ( [ kupka_eq1 ] ) used to represent the whole odf , b ) the number of spectral lines used for the opacity computation , and c ) the size of the wavelength grid used for each channel `` '' . under the assumption that the structure of the model atmosphere does not change dramatically for a different chemical composition , we can adapt our computations according to each of these three quantities .first , the t - p pairs are selected from a model atmosphere that is closest in and as well as chemical composition to the `` target '' model atmosphere ( or model grid within some limited and range ) . 
in a second step , we use the vienna atomic line data base ( vald , piskunov et al . 1995 ) together with its extraction tools preselect and select to choose between 70000 and 600000 lines ( a range valid from boo stars to extreme si peculiar stars ) out of 42 million lines . the lines are selected according to wavelength range , ionization stage , excitation potential , and the ratio of line vs. continuous opacity for the desired range of t - p pairs or a set of t - p pairs taken from the `` closest '' standard model atmosphere . finally , the odf as well as rosseland and/or planck mean opacities are computed in a two - stage process : an adaptive wavelength grid ( similar to the one used for synth as described in piskunov 1992 ) is used by the vopdf code to compute line opacities for each t - p pair . these are processed by the opdf code , which uses an adaptive histogram and a running geometric mean to compute the desired opacity tables . details will be given in piskunov & kupka ( 1998 ) . currently , the equation of state and the continuous opacities are taken from atlas9 as published by kurucz ( 1993 ) to simplify the comparison with standard models . without line extraction , a typical odf computation takes between 2 and 24 hours on an alpha workstation with a dec-21164/a cpu running at 600 mhz ( or between 6 and 72 hours for a dec-21064/a at 266 mhz ) . the vopdf / opdf codes can immediately run in parallel , as opacities for different t - p pairs do not depend on each other . ( table [ kupka_tab1 ] : parameters used for odf and model atmosphere computations . ) we present here some of the results of opacity calculations for three different cp stars . for the first case , vega , we compared available _ observations _ of its flux distribution from the lyman to the paschen series with surface fluxes derived from our own _ individual model atmospheres _ for vega ( see table [ kupka_tab1 ] ) as well as with fluxes derived from a set of _ standard model atmospheres _ assuming a scaled solar abundance ( by -0.5 dex , using an odf published on cdrom 3 of kurucz ) . the stellar parameters and chemical compositions were taken from castelli & kurucz ( 1994 ) . the model atmosphere computations were done with different variants of the atlas9 code of kurucz . details will be given in piskunov & kupka ( 1998 ) . the overall agreement between observations , individual models , and standard models is quite good . the fluxes of individual and standard models are usually closer to each other than to the observations . small differences between the `` standard models '' and the `` individual models '' mainly originate from differences in the gf - values of the line lists used , a different treatment of hydrogen lines , and from the fact that we used the individual chemical composition determined for vega . the latter does not change the model structure , but generates some specific flux features . this is supported by the result that the standard and the individual model atmospheres deviate by less than 30 k for all layers within the continuum as well as within the line formation region . numerical experiments provide no evidence for a difference between individual models created from odfs with 1% or 0.1% as a minimum ratio of line vs. continuum opacities . hence , we used the larger minimum ratio for most of the other odf computations to save cpu time .
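to make the odf construction described at the beginning of this section concrete , the sketch below collapses a finely sampled line - opacity spectrum within one wavelength channel into a small number of subchannel values by sorting the opacities and averaging over sub - bins . the sub - bin fractions and the plain arithmetic mean are illustrative choices , not the vopdf / opdf algorithm , which the text says uses an adaptive histogram and a running geometric mean .

```python
# Collapse sampled line opacity within one wavelength channel into an ODF step function.
# Sub-bin fractions and the arithmetic mean are illustrative; the production code
# described in the text uses an adaptive histogram and a running geometric mean.
import numpy as np

def channel_odf(kappa_samples, sub_fractions=(0.5, 0.2, 0.1, 0.08, 0.06, 0.03, 0.02, 0.01)):
    """Return one representative opacity per sub-bin, ordered from low to high opacity."""
    assert abs(sum(sub_fractions) - 1.0) < 1e-9
    k = np.sort(np.asarray(kappa_samples))          # low to high opacity
    edges = np.concatenate(([0.0], np.cumsum(sub_fractions))) * len(k)
    idx = np.round(edges).astype(int)
    return np.array([k[a:b].mean() for a, b in zip(idx[:-1], idx[1:])])

# toy opacity spectrum: a smooth continuum-like floor plus a few strong "lines"
rng = np.random.default_rng(42)
kappa = 10 ** rng.normal(loc=-2.0, scale=1.0, size=10_000)
print(channel_odf(kappa))
```

summing the sub - bin values weighted by their fractions approximately recovers the mean opacity of the channel , which is the sense in which the step function statistically represents the full line spectrum .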
as a second case , we compared model atmospheres based on kurucz odfs for scaled solar abundances ( by -0.2 dex , 0.0 dex , and + 0.2 dex ) with models based on an individual odf with an abundance pattern as described in kupka et al .( 1996 ) for the mildly overabundant roap star cir ( cf .table [ kupka_tab1 ] ) . here , the t vs. optical depth and the t vs. p relations fall essentially between the scaled solar abundance patterns with 0.0 dex and + 0.2 dex . hence , the line profiles remain unaffected by the new model atmosphere .the behaviour of the flux distribution is more complicated and is shown in figure [ kupka_fig1 ] .though fe appears to be slightly underabundant in cir , the individual model atmosphere of this star produces a flux distribution closer to the case of an overabundance of + 0.2 dex , a slightly enhanced uv line blanketing after the balmer jump , various features of sr ii ( at 4077 and at 4215 ) and a mild depression at 5200 .finally , we computed odfs with different he , si , and fe abundances representing the mean composition and abundance spots of the strongly si overabundant star et and and compared the individual models with those computed under the assumption of a scaled solar abundance ( + 0.0 dex and + 1.0 dex ) . already for a mean overabundance of si of + 1.0 dex , deviations from the `` standard models '' occur that can not be `` simulated '' by changing , , or by choosing a specific scaling factor for the abundance . though a specific feature or a single layer may be `` fitted '' easily , one can not match the height of the balmer discontinuity , or the relation of t vs. optical depth , or the flux distribution in the visual and the uv at the same time for this star , and abundance analyses based on scaled solar abundance patterns suffer from systematic errors .we have presented a new method for the computation of model atmospheres for cp stars based on the odf approach .the method was successfully compared with standard models .application to various cp stars shows that model atmospheres for some abundance patterns ( boo stars or mildly peculiar roap stars ) can be computed using scaled solar abundances . for other patterns ,systematic deviations occur which can not be approximated by choosing a model atmosphere based on properly scaled solar abundances , making the computation of individual odfs ( or model atmospheres based on the os technique ) mandatory for applications such as abundance analyses and doppler imaging .one prominent example are si peculiar stars , essentially because si is genuinely abundant and many of its absorption lines cluster in certain wavelength regions .more details on this work will be given in piskunov & kupka ( 1998 ) .international cooperation and hardware for computations were supported by the fonds zur frderung der wissenschaftlichen forschung ( project s7303-ast ) who also provided funds for f. kupka .we are grateful to r. l. kurucz for his permission to use atlas9 and line lists from his cdrom distribution .anders , e. , grevesse , n. : 1989 , _ geochimica et cosmochimica acta _ , * 53 * , 197 kurucz , r.l . : 1993 , _ kurucz cd - rom no. 13 _ , smithsonian astrophysical observatory piskunov , n.e . , kupka , f. : 1998 , in preparation
we describe a new method for the computation of opacity distribution functions ( odfs ) useful to calculate one - dimensional model atmospheres in local thermal equilibrium ( lte ) . the new method is fast enough to be applied on current workstations and allows the computation of model atmospheres which deviate significantly from ( scaled ) solar chemical composition . it has reproduced existing odfs and model atmospheres for solar abundances . depending on the type of chemical peculiarity the `` individual '' model atmosphere may have a structure and surface fluxes similar to atmospheres based on ( scaled ) solar abundances or deviate in a way that can not be reproduced by any of the conventional models . examples are given to illustrate this behavior . the availability of models with `` individualized '' abundances is crucial for abundance analyses and doppler imaging of extreme cp stars .