This is a sample of a commit dataset with five string columns per record:

- diff — unified diff of the commit (string, length 41 to 2.03M)
- msg — commit message (string, length 1 to 1.5k)
- repo — repository slug, "owner/name" (string, length 5 to 40)
- sha — commit hash (string, always length 40)
- time — commit timestamp, ISO 8601 UTC (string, always length 20)
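As a quick illustration of this schema, a single row can be modeled and sanity-checked as in the following minimal Python sketch. This is not part of the dataset or its tooling; the `CommitRecord` name and the validation checks are assumptions of mine, and only the five column names and their length bounds come from the schema above.

# Hypothetical sketch of one row of this dataset (not shipped with it).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CommitRecord:
    diff: str   # unified diff, 41 to ~2.03M chars
    msg: str    # commit message, 1 to 1.5k chars
    repo: str   # "owner/name", 5 to 40 chars
    sha: str    # commit hash, always 40 hex chars
    time: str   # ISO 8601 UTC timestamp, always 20 chars

    def validate(self) -> None:
        # sha is a 40-character lowercase hex digest
        assert len(self.sha) == 40
        assert all(c in "0123456789abcdef" for c in self.sha)
        # time like "2016-10-27T22:34:48Z" is exactly 20 characters
        assert len(self.time) == 20
        datetime.strptime(self.time, "%Y-%m-%dT%H:%M:%SZ")

# Example built from the fields of the first record below:
row = CommitRecord(diff="...", msg="Update generated Python Op docs.",
                   repo="tensorflow/tensorflow",
                   sha="1f64f2fad3a7a984564c03663243f705b8c2e35c",
                   time="2016-10-27T22:34:48Z")
row.validate()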
--- a/tensorflow/g3doc/api_docs/python/contrib.distributions.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Implements `(log o p o g^{-1})(y) + (log o det o J o g^{-1})(y)`,

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Implements `p(g^{-1}(y)) det|J(g^{-1}(y))|`, where `g^{-1}` is the

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Samples from the base distribution and then passes through

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.TransformedDistribution.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.TransformedDistribution.md
Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Implements `(log o p o g^{-1})(y) + (log o det o J o g^{-1})(y)`,

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Implements `p(g^{-1}(y)) det|J(g^{-1}(y))|`, where `g^{-1}` is the

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Samples from the base distribution and then passes through

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

Additional documentation from `TransformedDistribution`:

##### <b>`condition_kwargs`</b>:

-* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.
* <b>`bijector_kwargs`</b>: Python dictionary of arg names/values forwarded to the bijector.
+* <b>`distribution_kwargs`</b>: Python dictionary of arg names/values forwarded to the distribution.

##### Args:

--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.StepCounterHook.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.StepCounterHook.md

Steps per second monitor.
---

-#### `tf.train.StepCounterHook.__init__(every_n_steps=100, output_dir=None, summary_writer=None)` {#StepCounterHook.__init__}
+#### `tf.train.StepCounterHook.__init__(every_n_steps=100, every_n_secs=None, output_dir=None, summary_writer=None)` {#StepCounterHook.__init__}


--- a/tensorflow/g3doc/api_docs/python/train.md
+++ b/tensorflow/g3doc/api_docs/python/train.md
Initialize CheckpointSaverHook monitor.
Steps per second monitor.
---

-#### `tf.train.StepCounterHook.__init__(every_n_steps=100, output_dir=None, summary_writer=None)` {#StepCounterHook.__init__}
+#### `tf.train.StepCounterHook.__init__(every_n_steps=100, every_n_secs=None, output_dir=None, summary_writer=None)` {#StepCounterHook.__init__}

Update generated Python Op docs.
tensorflow/tensorflow
1f64f2fad3a7a984564c03663243f705b8c2e35c
2016-10-27T22:34:48Z
--- a/data/widgets/options.xml
+++ b/data/widgets/options.xml

  <check text="Smooth auto-scroll" name="smooth" />
  <check text="2 Click Movement" name="move_click2" />
  <check text="2 Click Drawing" name="draw_click2" />
-  <box horizontal="true">
+  <grid columns="2">
    <label text="Cursor:" />
    <box name="cursor_color_box" /> <!-- custom widget -->
-  </box>
+
+    <label text="Grid Color:" />
+    <box name="grid_color_box" /> <!-- custom widget -->
+
+    <label text="Pixel Grid:" />
+    <box name="pixel_grid_color_box" /> <!-- custom widget -->
+  </grid>

  <!-- Undo -->

--- a/src/commands/cmd_options.cpp
+++ b/src/commands/cmd_options.cpp

 #include "app.h"
 #include "commands/command.h"
+#include "context.h"
 #include "core/cfg.h"
 #include "modules/editors.h"
 #include "modules/gui.h"
void OptionsCommand::execute(Context* context)
{
  JWidget check_smooth;
  JWidget cursor_color, cursor_color_box;
+  JWidget grid_color, grid_color_box;
+  JWidget pixel_grid_color, pixel_grid_color_box;
  JWidget button_ok;
  JWidget move_click2, draw_click2;
  JWidget checked_bg_reset;
void OptionsCommand::execute(Context* context)
              "move_click2", &move_click2,
              "draw_click2", &draw_click2,
              "cursor_color_box", &cursor_color_box,
+              "grid_color_box", &grid_color_box,
+              "pixel_grid_color_box", &pixel_grid_color_box,
              "checked_bg_size", &checked_bg,
              "checked_bg_zoom", &checked_bg_zoom,
              "checked_bg_color1_box", &checked_bg_color1_box,
void OptionsCommand::execute(Context* context)
              "button_ok", &button_ok, NULL);

  // Cursor color
-  cursor_color = colorbutton_new(Editor::get_cursor_color(), IMAGE_INDEXED);
+  cursor_color = colorbutton_new(Editor::get_cursor_color(), IMAGE_RGB);
  cursor_color->setName("cursor_color");
  jwidget_add_child(cursor_color_box, cursor_color);

+  // Grid color
+  grid_color = colorbutton_new(context->getSettings()->getGridColor(), IMAGE_RGB);
+  grid_color->setName("grid_color");
+  jwidget_add_child(grid_color_box, grid_color);
+
+  // Pixel grid color
+  pixel_grid_color = colorbutton_new(context->getSettings()->getPixelGridColor(), IMAGE_RGB);
+  pixel_grid_color->setName("pixel_grid_color");
+  jwidget_add_child(pixel_grid_color_box, pixel_grid_color);
+
+  // Others
  if (get_config_bool("Options", "MoveClick2", false))
    jwidget_select(move_click2);

void OptionsCommand::execute(Context* context)
  int undo_size_limit_value;

  Editor::set_cursor_color(colorbutton_get_color(cursor_color));
+  context->getSettings()->setGridColor(colorbutton_get_color(grid_color));
+  context->getSettings()->setPixelGridColor(colorbutton_get_color(pixel_grid_color));

  set_config_bool("Options", "MoveSmooth", jwidget_is_selected(check_smooth));
  set_config_bool("Options", "MoveClick2", jwidget_is_selected(move_click2));
Added buttons in Options dialog to change grid colors (normal grid and pixel grid).
aseprite/aseprite
e1bdcb989995cc205eb570596244bc19669fb50f
2010-04-29T02:47:08Z
new file mode 100644
index 000000000000..9bf7d4fcd81b
--- /dev/null
+++ b/validation-test/Sema/type_checker_perf/fast/simd_add.gyb

+// RUN: %scale-test --begin 3 --end 7 --step 1 --select NumLeafScopes %s
+// REQUIRES: OS=macosx
+// REQUIRES: asserts
+
+import SIMDOperators
+
+func test(_ s: SIMD4<Float>,
+% for i in range(0, N):
+          _ s${i}: SIMD4<Float>,
+% end
+          _ s${N}: SIMD4<Float>
+) -> SIMD4<Float> {
+  return s
+% for i in range(0, N):
+    + s${i}
+% end
+}
Merge pull request from rudkx/scale-test-simd-add
apple/swift
bcf046a5f90aa15e3d5671a5ec40ccc43528d11f
2018-12-04T05:55:29Z
--- a/drivers/unix/dir_access_unix.cpp
+++ b/drivers/unix/dir_access_unix.cpp
void DirAccessUnix::list_dir_end() {
  _cisdir = false;
}

-#ifdef HAVE_MNTENT
+#if defined(HAVE_MNTENT) && defined(X11_ENABLED)
static bool _filter_drive(struct mntent *mnt) {
  // Ignore devices that don't point to /dev
  if (strncmp(mnt->mnt_fsname, "/dev", 4) != 0) {
    return false;
  }

-  // Accept devices mounted at /media, /mnt or /home
+  // Accept devices mounted at common locations
  if (strncmp(mnt->mnt_dir, "/media", 6) == 0 ||
      strncmp(mnt->mnt_dir, "/mnt", 4) == 0 ||
-      strncmp(mnt->mnt_dir, "/home", 5) == 0) {
+      strncmp(mnt->mnt_dir, "/home", 5) == 0 ||
+      strncmp(mnt->mnt_dir, "/run/media", 10) == 0) {
    return true;
  }

static bool _filter_drive(struct mntent *mnt) {

static void _get_drives(List<String> *list) {

-#ifdef HAVE_MNTENT
+#if defined(HAVE_MNTENT) && defined(X11_ENABLED)
  // Check /etc/mtab for the list of mounted partitions
  FILE *mtab = setmntent("/etc/mtab", "r");
  if (mtab) {
Only do 'drive' discovery on X11
godotengine/godot
65af96eab0708c02c5a72bb7d2a18444cc728046
2017-09-14T21:04:30Z
--- a/include/LightGBM/boosting.h
+++ b/include/LightGBM/boosting.h
class Boosting {
  * \param other
  */
  virtual void MergeFrom(const Boosting* other) = 0;
-  /*!
-  * \brief Reset Config for current boosting
-  * \param config Configs for boosting
-  */
-  virtual void ResetConfig(const BoostingConfig* config) = 0;

  /*!
  * \brief Reset training data for current boosting
+  * \param config Configs for boosting
  * \param train_data Training data
  * \param object_function Training objective function
  * \param training_metrics Training metric
  */
-  virtual void ResetTrainingData(const Dataset* train_data, const ObjectiveFunction* object_function, const std::vector<const Metric*>& training_metrics) = 0;
+  virtual void ResetTrainingData(const BoostingConfig* config, const Dataset* train_data, const ObjectiveFunction* object_function, const std::vector<const Metric*>& training_metrics) = 0;

  /*!
  * \brief Add a validation data
--- a/include/LightGBM/config.h
+++ b/include/LightGBM/config.h
struct ConfigBase {
  inline bool GetBool(
    const std::unordered_map<std::string, std::string>& params,
    const std::string& name, bool* out);
+
+  static std::unordered_map<std::string, std::string> Str2Map(const char* parameters);
};

/*! \brief Types of boosting */
struct OverallConfig : public ConfigBase {
  MetricConfig metric_config;

  void Set(const std::unordered_map<std::string, std::string>& params) override;
-  void LoadFromString(const char* str);
+
private:
  void GetBoostingType(const std::unordered_map<std::string, std::string>& params);
--- a/src/boosting/gbdt.cpp
+++ b/src/boosting/gbdt.cpp
GBDT::~GBDT() {

void GBDT::Init(const BoostingConfig* config, const Dataset* train_data, const ObjectiveFunction* object_function,
                const std::vector<const Metric*>& training_metrics) {
-  gbdt_config_ = config;
  iter_ = 0;
  saved_model_size_ = -1;
  num_iteration_for_pred_ = 0;
  max_feature_idx_ = 0;
-  early_stopping_round_ = gbdt_config_->early_stopping_round;
-  shrinkage_rate_ = gbdt_config_->learning_rate;
  num_class_ = config->num_class;
  train_data_ = nullptr;
-  ResetTrainingData(train_data, object_function, training_metrics);
-  // initialize random generator
-  random_ = Random(gbdt_config_->bagging_seed);
-
-}
-
-void GBDT::ResetConfig(const BoostingConfig* config) {
-  gbdt_config_ = config;
-  early_stopping_round_ = gbdt_config_->early_stopping_round;
-  shrinkage_rate_ = gbdt_config_->learning_rate;
-  // create tree learner
-  tree_learner_.clear();
-  for (int i = 0; i < num_class_; ++i) {
-    auto new_tree_learner = std::unique_ptr<TreeLearner>(TreeLearner::CreateTreeLearner(gbdt_config_->tree_learner_type, gbdt_config_->tree_config));
-    new_tree_learner->Init(train_data_);
-    // init tree learner
-    tree_learner_.push_back(std::move(new_tree_learner));
-  }
-  tree_learner_.shrink_to_fit();
-  // if need bagging, create buffer
-  if (gbdt_config_->bagging_fraction < 1.0 && gbdt_config_->bagging_freq > 0) {
-    out_of_bag_data_indices_ = std::vector<data_size_t>(num_data_);
-    bag_data_indices_ = std::vector<data_size_t>(num_data_);
-  } else {
-    out_of_bag_data_cnt_ = 0;
-    out_of_bag_data_indices_.clear();
-    bag_data_cnt_ = num_data_;
-    bag_data_indices_.clear();
-  }
-  // initialize random generator
-  random_ = Random(gbdt_config_->bagging_seed);
+  ResetTrainingData(config, train_data, object_function, training_metrics);
}

-void GBDT::ResetTrainingData(const Dataset* train_data, const ObjectiveFunction* object_function, const std::vector<const Metric*>& training_metrics) {
+void GBDT::ResetTrainingData(const BoostingConfig* config, const Dataset* train_data, const ObjectiveFunction* object_function,
+                             const std::vector<const Metric*>& training_metrics) {
  if (train_data_ != nullptr && !train_data_->CheckAlign(*train_data)) {
    Log::Fatal("cannot reset training data, since new training data has different bin mappers");
  }
+  gbdt_config_ = config;
+  early_stopping_round_ = gbdt_config_->early_stopping_round;
+  shrinkage_rate_ = gbdt_config_->learning_rate;
  train_data_ = train_data;
  // create tree learner
  tree_learner_.clear();
void GBDT::ResetTrainingData(const Dataset* train_data, const ObjectiveFunction*
    bag_data_cnt_ = num_data_;
    bag_data_indices_.clear();
  }
+  random_ = Random(gbdt_config_->bagging_seed);
  // update score
  for (int i = 0; i < iter_; ++i) {
    for (int curr_class = 0; curr_class < num_class_; ++curr_class) {
--- a/src/boosting/gbdt.h
+++ b/src/boosting/gbdt.h
class GBDT : public Boosting {
    }
  }

-  /*!
-  * \brief Reset Config for current boosting
-  * \param config Configs for boosting
-  */
-  void ResetConfig(const BoostingConfig* config) override;
-
  /*!
  * \brief Reset training data for current boosting
  * \param train_data Training data
  * \param object_function Training objective function
  * \param training_metrics Training metric
  */
-  void ResetTrainingData(const Dataset* train_data, const ObjectiveFunction* object_function, const std::vector<const Metric*>& training_metrics) override;
+  void ResetTrainingData(const BoostingConfig* config, const Dataset* train_data, const ObjectiveFunction* object_function, const std::vector<const Metric*>& training_metrics) override;

  /*!
  * \brief Adding a validation dataset
--- a/src/c_api.cpp
+++ b/src/c_api.cpp
class Booster {

  Booster(const Dataset* train_data,
          const char* parameters) {
-    config_.LoadFromString(parameters);
+    auto param = ConfigBase::Str2Map(parameters);
+    config_.Set(param);
    // create boosting
    if (config_.io_config.input_model.size() > 0) {
      Log::Warning("continued train from model is not support for c_api, \
class Booster {
  }

  void ResetTrainingData(const Dataset* train_data) {
-    ConstructObjectAndTrainingMetrics(train_data);
+    train_data_ = train_data;
+    ConstructObjectAndTrainingMetrics(train_data_);
    // initialize the boosting
-    boosting_->ResetTrainingData(train_data, objective_fun_.get(), Common::ConstPtrInVectorWrapper<Metric>(train_metric_));
+    boosting_->ResetTrainingData(&config_.boosting_config, train_data_,
+                                 objective_fun_.get(), Common::ConstPtrInVectorWrapper<Metric>(train_metric_));
+  }
+
+  void ResetConfig(const char* parameters) {
+    auto param = ConfigBase::Str2Map(parameters);
+    if (param.count("num_class")) {
+      Log::Fatal("cannot change num class during training");
+    }
+    if (param.count("boosting_type")) {
+      Log::Fatal("cannot change boosting_type during training");
+    }
+    config_.Set(param);
+    ResetTrainingData(train_data_);
  }

  void AddValidData(const Dataset* valid_data) {
class Booster {
    return idx;
  }

-  void ResetBoostingConfig(const char* parameters) {
-    config_.LoadFromString(parameters);
-    boosting_->ResetConfig(&config_.boosting_config);
-  }

  void RollbackOneIter() {
    boosting_->RollbackOneIter();
class Booster {
  const Boosting* GetBoosting() const { return boosting_.get(); }

private:
+  const Dataset* train_data_;
  std::unique_ptr<Boosting> boosting_;
  /*! \brief All configs */
  OverallConfig config_;
DllExport int LGBM_CreateDatasetFromFile(const char* filename,
                                         const DatesetHandle* reference,
                                         DatesetHandle* out) {
  API_BEGIN();
-  OverallConfig config;
-  config.LoadFromString(parameters);
-  DatasetLoader loader(config.io_config, nullptr);
+  auto param = ConfigBase::Str2Map(parameters);
+  IOConfig io_config;
+  io_config.Set(param);
+  DatasetLoader loader(io_config, nullptr);
  loader.SetHeader(filename);
  if (reference == nullptr) {
    *out = loader.LoadFromFile(filename);
DllExport int LGBM_CreateDatasetFromMat(const void* data,
                                        const DatesetHandle* reference,
                                        DatesetHandle* out) {
  API_BEGIN();
-  OverallConfig config;
-  config.LoadFromString(parameters);
-  DatasetLoader loader(config.io_config, nullptr);
+  auto param = ConfigBase::Str2Map(parameters);
+  IOConfig io_config;
+  io_config.Set(param);
+  DatasetLoader loader(io_config, nullptr);
  std::unique_ptr<Dataset> ret;
  auto get_row_fun = RowFunctionFromDenseMatric(data, nrow, ncol, data_type, is_row_major);
  if (reference == nullptr) {
    // sample data first
-    Random rand(config.io_config.data_random_seed);
-    const int sample_cnt = static_cast<int>(nrow < config.io_config.bin_construct_sample_cnt ? nrow : config.io_config.bin_construct_sample_cnt);
+    Random rand(io_config.data_random_seed);
+    const int sample_cnt = static_cast<int>(nrow < io_config.bin_construct_sample_cnt ? nrow : io_config.bin_construct_sample_cnt);
    auto sample_indices = rand.Sample(nrow, sample_cnt);
    std::vector<std::vector<double>> sample_values(ncol);
    for (size_t i = 0; i < sample_indices.size(); ++i) {
DllExport int LGBM_CreateDatasetFromMat(const void* data,
    }
    ret.reset(loader.CostructFromSampleData(sample_values, sample_cnt, nrow));
  } else {
-    ret.reset(new Dataset(nrow, config.io_config.num_class));
+    ret.reset(new Dataset(nrow, io_config.num_class));
    ret->CopyFeatureMapperFrom(
      reinterpret_cast<const Dataset*>(*reference),
-      config.io_config.is_enable_sparse);
+      io_config.is_enable_sparse);
  }

#pragma omp parallel for schedule(guided)
DllExport int LGBM_CreateDatasetFromCSR(const void* indptr,
                                        const DatesetHandle* reference,
                                        DatesetHandle* out) {
  API_BEGIN();
-  OverallConfig config;
-  config.LoadFromString(parameters);
-  DatasetLoader loader(config.io_config, nullptr);
+  auto param = ConfigBase::Str2Map(parameters);
+  IOConfig io_config;
+  io_config.Set(param);
+  DatasetLoader loader(io_config, nullptr);
  std::unique_ptr<Dataset> ret;
  auto get_row_fun = RowFunctionFromCSR(indptr, indptr_type, indices, data, data_type, nindptr, nelem);
  int32_t nrow = static_cast<int32_t>(nindptr - 1);
  if (reference == nullptr) {
    // sample data first
-    Random rand(config.io_config.data_random_seed);
-    const int sample_cnt = static_cast<int>(nrow < config.io_config.bin_construct_sample_cnt ? nrow : config.io_config.bin_construct_sample_cnt);
+    Random rand(io_config.data_random_seed);
+    const int sample_cnt = static_cast<int>(nrow < io_config.bin_construct_sample_cnt ? nrow : io_config.bin_construct_sample_cnt);
    auto sample_indices = rand.Sample(nrow, sample_cnt);
    std::vector<std::vector<double>> sample_values;
    for (size_t i = 0; i < sample_indices.size(); ++i) {
DllExport int LGBM_CreateDatasetFromCSR(const void* indptr,
    CHECK(num_col >= static_cast<int>(sample_values.size()));
    ret.reset(loader.CostructFromSampleData(sample_values, sample_cnt, nrow));
  } else {
-    ret.reset(new Dataset(nrow, config.io_config.num_class));
+    ret.reset(new Dataset(nrow, io_config.num_class));
    ret->CopyFeatureMapperFrom(
      reinterpret_cast<const Dataset*>(*reference),
-      config.io_config.is_enable_sparse);
+      io_config.is_enable_sparse);
  }

#pragma omp parallel for schedule(guided)
DllExport int LGBM_CreateDatasetFromCSC(const void* col_ptr,
                                        const DatesetHandle* reference,
                                        DatesetHandle* out) {
  API_BEGIN();
-  OverallConfig config;
-  config.LoadFromString(parameters);
-  DatasetLoader loader(config.io_config, nullptr);
+  auto param = ConfigBase::Str2Map(parameters);
+  IOConfig io_config;
+  io_config.Set(param);
+  DatasetLoader loader(io_config, nullptr);
  std::unique_ptr<Dataset> ret;
  auto get_col_fun = ColumnFunctionFromCSC(col_ptr, col_ptr_type, indices, data, data_type, ncol_ptr, nelem);
  int32_t nrow = static_cast<int32_t>(num_row);
  if (reference == nullptr) {
    Log::Warning("Construct from CSC format is not efficient");
    // sample data first
-    Random rand(config.io_config.data_random_seed);
-    const int sample_cnt = static_cast<int>(nrow < config.io_config.bin_construct_sample_cnt ? nrow : config.io_config.bin_construct_sample_cnt);
+    Random rand(io_config.data_random_seed);
+    const int sample_cnt = static_cast<int>(nrow < io_config.bin_construct_sample_cnt ? nrow : io_config.bin_construct_sample_cnt);
    auto sample_indices = rand.Sample(nrow, sample_cnt);
    std::vector<std::vector<double>> sample_values(ncol_ptr - 1);
#pragma omp parallel for schedule(guided)
DllExport int LGBM_CreateDatasetFromCSC(const void* col_ptr,
    }
    ret.reset(loader.CostructFromSampleData(sample_values, sample_cnt, nrow));
  } else {
-    ret.reset(new Dataset(nrow, config.io_config.num_class));
+    ret.reset(new Dataset(nrow, io_config.num_class));
    ret->CopyFeatureMapperFrom(
      reinterpret_cast<const Dataset*>(*reference),
-      config.io_config.is_enable_sparse);
+      io_config.is_enable_sparse);
  }

#pragma omp parallel for schedule(guided)
DllExport int LGBM_BoosterResetTrainingData(BoosterHandle handle,
DllExport int LGBM_BoosterResetParameter(BoosterHandle handle, const char* parameters) {
  API_BEGIN();
  Booster* ref_booster = reinterpret_cast<Booster*>(handle);
-  ref_booster->ResetBoostingConfig(parameters);
+  ref_booster->ResetConfig(parameters);
  API_END();
}

--- a/src/io/config.cpp
+++ b/src/io/config.cpp

namespace LightGBM {

-void OverallConfig::LoadFromString(const char* str) {
+std::unordered_map<std::string, std::string> ConfigBase::Str2Map(const char* parameters) {
  std::unordered_map<std::string, std::string> params;
-  auto args = Common::Split(str, " \t\n\r");
+  auto args = Common::Split(parameters, " \t\n\r");
  for (auto arg : args) {
    std::vector<std::string> tmp_strs = Common::Split(arg.c_str(), '=');
    if (tmp_strs.size() == 2) {
void OverallConfig::LoadFromString(const char* str) {
    }
  }
  ParameterAlias::KeyAliasTransform(&params);
-  Set(params);
+  return params;
}

void OverallConfig::Set(const std::unordered_map<std::string, std::string>& params) {
more flexible reset config/training data logic for boosting
microsoft/LightGBM
b41e0f0afd2340b86772d66b6ea869dedf4caf10
2016-11-24T04:58:46Z
--- a/test/cpp/interop/metrics_client.cc
+++ b/test/cpp/interop/metrics_client.cc

/*
 *
- * Copyright 2015, Google Inc.
+ * Copyright 2015-2016, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without

#include <gflags/gflags.h>
#include <grpc++/grpc++.h>

-#include "test/cpp/util/metrics_server.h"
-#include "test/cpp/util/test_config.h"
#include "src/proto/grpc/testing/metrics.grpc.pb.h"
#include "src/proto/grpc/testing/metrics.pb.h"
+#include "test/cpp/util/metrics_server.h"
+#include "test/cpp/util/test_config.h"

DEFINE_string(metrics_server_address, "",
              "The metrics server addresses in the fomrat <hostname>:<port>");
+DEFINE_bool(total_only, false,
+            "If true, this prints only the total value of all gauges");
+
+int kDeadlineSecs = 10;

using grpc::testing::EmptyMessage;
using grpc::testing::GaugeResponse;
using grpc::testing::MetricsService;
using grpc::testing::MetricsServiceImpl;

-void PrintMetrics(const grpc::string& server_address) {
-  gpr_log(GPR_INFO, "creating a channel to %s", server_address.c_str());
-  std::shared_ptr<grpc::Channel> channel(
-      grpc::CreateChannel(server_address, grpc::InsecureChannelCredentials()));
-
-  std::unique_ptr<MetricsService::Stub> stub(MetricsService::NewStub(channel));
-
+// Prints the values of all Gauges (unless total_only is set to 'true' in which
+// case this only prints the sum of all gauge values).
+bool PrintMetrics(std::unique_ptr<MetricsService::Stub> stub, bool total_only) {
  grpc::ClientContext context;
  EmptyMessage message;

+  std::chrono::system_clock::time_point deadline =
+      std::chrono::system_clock::now() + std::chrono::seconds(kDeadlineSecs);
+
+  context.set_deadline(deadline);
+
  std::unique_ptr<grpc::ClientReader<GaugeResponse>> reader(
      stub->GetAllGauges(&context, message));

  GaugeResponse gauge_response;
  long overall_qps = 0;
-  int idx = 0;
  while (reader->Read(&gauge_response)) {
    if (gauge_response.value_case() == GaugeResponse::kLongValue) {
-      gpr_log(GPR_INFO, "Gauge: %d (%s: %ld)", ++idx,
-              gauge_response.name().c_str(), gauge_response.long_value());
+      if (!total_only) {
+        gpr_log(GPR_INFO, "%s: %ld", gauge_response.name().c_str(),
+                gauge_response.long_value());
+      }
      overall_qps += gauge_response.long_value();
    } else {
      gpr_log(GPR_INFO, "Gauge %s is not a long value",
void PrintMetrics(const grpc::string& server_address) {
    }
  }

-  gpr_log(GPR_INFO, "OVERALL: %ld", overall_qps);
+  gpr_log(GPR_INFO, "%ld", overall_qps);

  const grpc::Status status = reader->Finish();
  if (!status.ok()) {
    gpr_log(GPR_ERROR, "Error in getting metrics from the client");
  }
+
+  return status.ok();
}

int main(int argc, char** argv) {
int main(int argc, char** argv) {
    return 1;
  }

-  PrintMetrics(FLAGS_metrics_server_address);
+  std::shared_ptr<grpc::Channel> channel(grpc::CreateChannel(
+      FLAGS_metrics_server_address, grpc::InsecureChannelCredentials()));
+
+  if (!PrintMetrics(MetricsService::NewStub(channel), FLAGS_total_only)) {
+    return 1;
+  }

  return 0;
}
--- a/test/cpp/util/metrics_server.cc
+++ b/test/cpp/util/metrics_server.cc
long Gauge::Get() {
grpc::Status MetricsServiceImpl::GetAllGauges(
    ServerContext* context, const EmptyMessage* request,
    ServerWriter<GaugeResponse>* writer) {
-  gpr_log(GPR_INFO, "GetAllGauges called");
+  gpr_log(GPR_DEBUG, "GetAllGauges called");

  std::lock_guard<std::mutex> lock(mu_);
  for (auto it = gauges_.begin(); it != gauges_.end(); it++) {
--- a/tools/dockerfile/grpc_interop_stress_cxx/Dockerfile
+++ b/tools/dockerfile/grpc_interop_stress_cxx/Dockerfile
RUN apt-get update && apt-get install -y \
    wget \
    zip && apt-get clean

+RUN easy_install -U pip
+
# Prepare ccache
RUN ln -s /usr/bin/ccache /usr/local/bin/gcc
RUN ln -s /usr/bin/ccache /usr/local/bin/g++
RUN ln -s /usr/bin/ccache /usr/local/bin/clang++
# C++ dependencies
RUN apt-get update && apt-get -y install libgflags-dev libgtest-dev libc++-dev clang

+# Google Cloud platform API libraries (for BigQuery)
+RUN pip install --upgrade google-api-python-client
+
# Define the default command.
CMD ["bash"]
--- a/tools/dockerfile/grpc_interop_stress_cxx/build_interop_stress.sh
+++ b/tools/dockerfile/grpc_interop_stress_cxx/build_interop_stress.sh
cd /var/local/git/grpc
make install-certs

# build C++ interop stress client, interop client and server
-make stress_test interop_client interop_server
+make stress_test metrics_client interop_client interop_server
new file mode 100755
index 00000000000..0fa1bf1cb97
--- /dev/null
+++ b/tools/gcp/stress_test/run_client.py

+#!/usr/bin/env python2.7
+# Copyright 2015-2016, Google Inc.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import datetime
+import os
+import re
+import select
+import subprocess
+import sys
+import time
+
+from stress_test_utils import EventType
+from stress_test_utils import BigQueryHelper
+
+
+# TODO(sree): Write a python grpc client to directly query the metrics instead
+# of calling metrics_client
+def _get_qps(metrics_cmd):
+  qps = 0
+  try:
+    # Note: gpr_log() writes even non-error messages to stderr stream. So it is
+    # important that we set stderr = subprocess.STDOUT
+    p = subprocess.Popen(args=metrics_cmd,
+                         stdout=subprocess.PIPE,
+                         stderr=subprocess.STDOUT)
+    retcode = p.wait()
+    (out_str, err_str) = p.communicate()
+    if retcode != 0:
+      print 'Error in reading metrics information'
+      print 'Output: ', out_str
+    else:
+      # The overall qps is printed at the end of the line
+      m = re.search('\d+$', out_str)
+      qps = int(m.group()) if m else 0
+  except Exception as ex:
+    print 'Exception while reading metrics information: ' + str(ex)
+  return qps
+
+
+def run_client():
+  """This is a wrapper around the stress test client and performs the following:
+      1) Create the following two tables in Big Query:
+         (i) Summary table: To record events like the test started, completed
+             successfully or failed
+         (ii) Qps table: To periodically record the QPS sent by this client
+      2) Start the stress test client and add a row in the Big Query summary
+         table
+      3) Once every few seconds (as specificed by the poll_interval_secs) poll
+         the status of the stress test client process and perform the
+         following:
+          3.1) If the process is still running, get the current qps by invoking
+               the metrics client program and add a row in the Big Query
+               Qps table. Sleep for a duration specified by poll_interval_secs
+          3.2) If the process exited successfully, add a row in the Big Query
+               Summary table and exit
+          3.3) If the process failed, add a row in Big Query summary table and
+               wait forever.
+               NOTE: This script typically runs inside a GKE pod which means
+               that the pod gets destroyed when the script exits. However, in
+               case the stress test client fails, we would not want the pod to
+               be destroyed (since we might want to connect to the pod for
+               examining logs). This is the reason why the script waits forever
+               in case of failures
+  """
+  env = dict(os.environ)
+  image_type = env['STRESS_TEST_IMAGE_TYPE']
+  image_name = env['STRESS_TEST_IMAGE']
+  args_str = env['STRESS_TEST_ARGS_STR']
+  metrics_client_image = env['METRICS_CLIENT_IMAGE']
+  metrics_client_args_str = env['METRICS_CLIENT_ARGS_STR']
+  run_id = env['RUN_ID']
+  pod_name = env['POD_NAME']
+  logfile_name = env.get('LOGFILE_NAME')
+  poll_interval_secs = float(env['POLL_INTERVAL_SECS'])
+  project_id = env['GCP_PROJECT_ID']
+  dataset_id = env['DATASET_ID']
+  summary_table_id = env['SUMMARY_TABLE_ID']
+  qps_table_id = env['QPS_TABLE_ID']
+
+  bq_helper = BigQueryHelper(run_id, image_type, pod_name, project_id,
+                             dataset_id, summary_table_id, qps_table_id)
+  bq_helper.initialize()
+
+  # Create BigQuery Dataset and Tables: Summary Table and Metrics Table
+  if not bq_helper.setup_tables():
+    print 'Error in creating BigQuery tables'
+    return
+
+  start_time = datetime.datetime.now()
+
+  logfile = None
+  details = 'Logging to stdout'
+  if logfile_name is not None:
+    print 'Opening logfile: %s ...' % logfile_name
+    details = 'Logfile: %s' % logfile_name
+    logfile = open(logfile_name, 'w')
+
+  # Update status that the test is starting (in the status table)
+  bq_helper.insert_summary_row(EventType.STARTING, details)
+
+  metrics_cmd = [metrics_client_image
+                ] + [x for x in metrics_client_args_str.split()]
+  stress_cmd = [image_name] + [x for x in args_str.split()]
+
+  print 'Launching process %s ...' % stress_cmd
+  stress_p = subprocess.Popen(args=stress_cmd,
+                              stdout=logfile,
+                              stderr=subprocess.STDOUT)
+
+  qps_history = [1, 1, 1]  # Maintain the last 3 qps readings
+  qps_history_idx = 0  # Index into the qps_history list
+
+  is_error = False
+  while True:
+    # Check if stress_client is still running. If so, collect metrics and upload
+    # to BigQuery status table
+    if stress_p.poll() is not None:
+      end_time = datetime.datetime.now().isoformat()
+      event_type = EventType.SUCCESS
+      details = 'End time: %s' % end_time
+      if stress_p.returncode != 0:
+        event_type = EventType.FAILURE
+        details = 'Return code = %d. End time: %s' % (stress_p.returncode,
+                                                      end_time)
+        is_error = True
+      bq_helper.insert_summary_row(event_type, details)
+      print details
+      break
+
+    # Stress client still running. Get metrics
+    qps = _get_qps(metrics_cmd)
+    qps_recorded_at = datetime.datetime.now().isoformat()
+    print 'qps: %d at %s' % (qps, qps_recorded_at)
+
+    # If QPS has been zero for the last 3 iterations, flag it as error and exit
+    qps_history[qps_history_idx] = qps
+    qps_history_idx = (qps_history_idx + 1) % len(qps_history)
+    if sum(qps_history) == 0:
+      details = 'QPS has been zero for the last %d seconds - as of: %s' % (
+          poll_interval_secs * 3, qps_recorded_at)
+      is_error = True
+      bq_helper.insert_summary_row(EventType.FAILURE, details)
+      print details
+      break
+
+    # Upload qps metrics to BiqQuery
+    bq_helper.insert_qps_row(qps, qps_recorded_at)
+
+    time.sleep(poll_interval_secs)
+
+  if is_error:
+    print 'Waiting indefinitely..'
+    select.select([], [], [])
+
+  print 'Completed'
+  return
+
+
+if __name__ == '__main__':
+  run_client()
new file mode 100755
index 00000000000..64322f61004
--- /dev/null
+++ b/tools/gcp/stress_test/run_server.py

+#!/usr/bin/env python2.7
+# Copyright 2015-2016, Google Inc.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import datetime
+import os
+import select
+import subprocess
+import sys
+import time
+
+from stress_test_utils import BigQueryHelper
+from stress_test_utils import EventType
+
+
+def run_server():
+  """This is a wrapper around the interop server and performs the following:
+      1) Create a 'Summary table' in Big Query to record events like the server
+         started, completed successfully or failed. NOTE: This also creates
+         another table called the QPS table which is currently NOT needed on the
+         server (it is needed on the stress test clients)
+      2) Start the server process and add a row in Big Query summary table
+      3) Wait for the server process to terminate. The server process does not
+         terminate unless there is an error.
+         If the server process terminated with a failure, add a row in Big Query
+         and wait forever.
+         NOTE: This script typically runs inside a GKE pod which means that the
+         pod gets destroyed when the script exits. However, in case the server
+         process fails, we would not want the pod to be destroyed (since we
+         might want to connect to the pod for examining logs). This is the
+         reason why the script waits forever in case of failures.
+  """
+
+  # Read the parameters from environment variables
+  env = dict(os.environ)
+
+  run_id = env['RUN_ID']  # The unique run id for this test
+  image_type = env['STRESS_TEST_IMAGE_TYPE']
+  image_name = env['STRESS_TEST_IMAGE']
+  args_str = env['STRESS_TEST_ARGS_STR']
+  pod_name = env['POD_NAME']
+  project_id = env['GCP_PROJECT_ID']
+  dataset_id = env['DATASET_ID']
+  summary_table_id = env['SUMMARY_TABLE_ID']
+  qps_table_id = env['QPS_TABLE_ID']
+
+  logfile_name = env.get('LOGFILE_NAME')
+
+  print('pod_name: %s, project_id: %s, run_id: %s, dataset_id: %s, '
+        'summary_table_id: %s, qps_table_id: %s') % (
+            pod_name, project_id, run_id, dataset_id, summary_table_id,
+            qps_table_id)
+
+  bq_helper = BigQueryHelper(run_id, image_type, pod_name, project_id,
+                             dataset_id, summary_table_id, qps_table_id)
+  bq_helper.initialize()
+
+  # Create BigQuery Dataset and Tables: Summary Table and Metrics Table
+  if not bq_helper.setup_tables():
+    print 'Error in creating BigQuery tables'
+    return
+
+  start_time = datetime.datetime.now()
+
+  logfile = None
+  details = 'Logging to stdout'
+  if logfile_name is not None:
+    print 'Opening log file: ', logfile_name
+    logfile = open(logfile_name, 'w')
+    details = 'Logfile: %s' % logfile_name
+
+  # Update status that the test is starting (in the status table)
+  bq_helper.insert_summary_row(EventType.STARTING, details)
+
+  stress_cmd = [image_name] + [x for x in args_str.split()]
+
+  print 'Launching process %s ...' % stress_cmd
+  stress_p = subprocess.Popen(args=stress_cmd,
+                              stdout=logfile,
+                              stderr=subprocess.STDOUT)
+
+  returncode = stress_p.wait()
+  if returncode != 0:
+    end_time = datetime.datetime.now().isoformat()
+    event_type = EventType.FAILURE
+    details = 'Returncode: %d; End time: %s' % (returncode, end_time)
+    bq_helper.insert_summary_row(event_type, details)
+    print 'Waiting indefinitely..'
+    select.select([], [], [])
+  return returncode
+
+
+if __name__ == '__main__':
+  run_server()
new file mode 100755
index 00000000000..c4b437e3459
--- /dev/null
+++ b/tools/gcp/stress_test/stress_test_utils.py

+#!/usr/bin/env python2.7
+# Copyright 2015-2016, Google Inc.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import datetime
+import json
+import os
+import re
+import select
+import subprocess
+import sys
+import time
+
+# Import big_query_utils module
+bq_utils_dir = os.path.abspath(os.path.join(
+    os.path.dirname(__file__), '../utils'))
+sys.path.append(bq_utils_dir)
+import big_query_utils as bq_utils
+
+
+class EventType:
+  STARTING = 'STARTING'
+  SUCCESS = 'SUCCESS'
+  FAILURE = 'FAILURE'
+
+
+class BigQueryHelper:
+  """Helper class for the stress test wrappers to interact with BigQuery.
+  """
+
+  def __init__(self, run_id, image_type, pod_name, project_id, dataset_id,
+               summary_table_id, qps_table_id):
+    self.run_id = run_id
+    self.image_type = image_type
+    self.pod_name = pod_name
+    self.project_id = project_id
+    self.dataset_id = dataset_id
+    self.summary_table_id = summary_table_id
+    self.qps_table_id = qps_table_id
+
+  def initialize(self):
+    self.bq = bq_utils.create_big_query()
+
+  def setup_tables(self):
+    return bq_utils.create_dataset(self.bq, self.project_id, self.dataset_id) \
+        and self.__create_summary_table() \
+        and self.__create_qps_table()
+
+  def insert_summary_row(self, event_type, details):
+    row_values_dict = {
+        'run_id': self.run_id,
+        'image_type': self.image_type,
+        'pod_name': self.pod_name,
+        'event_date': datetime.datetime.now().isoformat(),
+        'event_type': event_type,
+        'details': details
+    }
+    # row_unique_id is something that uniquely identifies the row (BigQuery uses
+    # it for duplicate detection).
+    row_unique_id = '%s_%s_%s' % (self.run_id, self.pod_name, event_type)
+    row = bq_utils.make_row(row_unique_id, row_values_dict)
+    return bq_utils.insert_rows(self.bq, self.project_id, self.dataset_id,
+                                self.summary_table_id, [row])
+
+  def insert_qps_row(self, qps, recorded_at):
+    row_values_dict = {
+        'run_id': self.run_id,
+        'pod_name': self.pod_name,
+        'recorded_at': recorded_at,
+        'qps': qps
+    }
+
+    # row_unique_id is something that uniquely identifies the row (BigQuery uses
+    # it for duplicate detection).
+    row_unique_id = '%s_%s_%s' % (self.run_id, self.pod_name, recorded_at)
+    row = bq_utils.make_row(row_unique_id, row_values_dict)
+    return bq_utils.insert_rows(self.bq, self.project_id, self.dataset_id,
+                                self.qps_table_id, [row])
+
+  def check_if_any_tests_failed(self, num_query_retries=3):
+    query = ('SELECT event_type FROM %s.%s WHERE run_id = \'%s\' AND '
+             'event_type="%s"') % (self.dataset_id, self.summary_table_id,
+                                   self.run_id, EventType.FAILURE)
+    query_job = bq_utils.sync_query_job(self.bq, self.project_id, query)
+    page = self.bq.jobs().getQueryResults(**query_job['jobReference']).execute(
+        num_retries=num_query_retries)
+    num_failures = int(page['totalRows'])
+    print 'num rows: ', num_failures
+    return num_failures > 0
+
+  def print_summary_records(self, num_query_retries=3):
+    line = '-' * 120
+    print line
+    print 'Summary records'
+    print 'Run Id: ', self.run_id
+    print 'Dataset Id: ', self.dataset_id
+    print line
+    query = ('SELECT pod_name, image_type, event_type, event_date, details'
+             ' FROM %s.%s WHERE run_id = \'%s\' ORDER by event_date;') % (
+                 self.dataset_id, self.summary_table_id, self.run_id)
+    query_job = bq_utils.sync_query_job(self.bq, self.project_id, query)
+
+    print '{:<25} {:<12} {:<12} {:<30} {}'.format(
+        'Pod name', 'Image type', 'Event type', 'Date', 'Details')
+    print line
+    page_token = None
+    while True:
+      page = self.bq.jobs().getQueryResults(
+          pageToken=page_token,
+          **query_job['jobReference']).execute(num_retries=num_query_retries)
+      rows = page.get('rows', [])
+      for row in rows:
+        print '{:<25} {:<12} {:<12} {:<30} {}'.format(
+            row['f'][0]['v'], row['f'][1]['v'], row['f'][2]['v'],
+            row['f'][3]['v'], row['f'][4]['v'])
+      page_token = page.get('pageToken')
+      if not page_token:
+        break
+
+  def print_qps_records(self, num_query_retries=3):
+    line = '-' * 80
+    print line
+    print 'QPS Summary'
+    print 'Run Id: ', self.run_id
+    print 'Dataset Id: ', self.dataset_id
+    print line
+    query = (
+        'SELECT pod_name, recorded_at, qps FROM %s.%s WHERE run_id = \'%s\''
+        ' ORDER by recorded_at;') % (self.dataset_id, self.qps_table_id,
+                                     self.run_id)
+    query_job = bq_utils.sync_query_job(self.bq, self.project_id, query)
+    print '{:<25} {:30} {}'.format('Pod name', 'Recorded at', 'Qps')
+    print line
+    page_token = None
+    while True:
+      page = self.bq.jobs().getQueryResults(
+          pageToken=page_token,
+          **query_job['jobReference']).execute(num_retries=num_query_retries)
+      rows = page.get('rows', [])
+      for row in rows:
+        print '{:<25} {:30} {}'.format(row['f'][0]['v'], row['f'][1]['v'],
+                                       row['f'][2]['v'])
+      page_token = page.get('pageToken')
+      if not page_token:
+        break
+
+  def __create_summary_table(self):
+    summary_table_schema = [
+        ('run_id', 'STRING', 'Test run id'),
+        ('image_type', 'STRING', 'Client or Server?'),
+        ('pod_name', 'STRING', 'GKE pod hosting this image'),
+        ('event_date', 'STRING', 'The date of this event'),
+        ('event_type', 'STRING', 'STARTED/SUCCESS/FAILURE'),
+        ('details', 'STRING', 'Any other relevant details')
+    ]
+    desc = ('The table that contains START/SUCCESS/FAILURE events for'
+            ' the stress test clients and servers')
+    return bq_utils.create_table(self.bq, self.project_id, self.dataset_id,
+                                 self.summary_table_id, summary_table_schema,
+                                 desc)
+
+  def __create_qps_table(self):
+    qps_table_schema = [
+        ('run_id', 'STRING', 'Test run id'),
+        ('pod_name', 'STRING', 'GKE pod hosting this image'),
+        ('recorded_at', 'STRING', 'Metrics recorded at time'),
+        ('qps', 'INTEGER', 'Queries per second')
+    ]
+    desc = 'The table that cointains the qps recorded at various intervals'
+    return bq_utils.create_table(self.bq, self.project_id, self.dataset_id,
+                                 self.qps_table_id, qps_table_schema, desc)
new file mode 100755
index 00000000000..7bb1e143549
--- /dev/null
+++ b/tools/gcp/utils/big_query_utils.py

+#!/usr/bin/env python2.7
+# Copyright 2015-2016, Google Inc.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import argparse
+import json
+import uuid
+import httplib2
+
+from apiclient import discovery
+from apiclient.errors import HttpError
+from oauth2client.client import GoogleCredentials
+
+NUM_RETRIES = 3
+
+
+def create_big_query():
+  """Authenticates with cloud platform and gets a BiqQuery service object
+  """
+  creds = GoogleCredentials.get_application_default()
+  return discovery.build('bigquery', 'v2', credentials=creds)
+
+
+def create_dataset(biq_query, project_id, dataset_id):
+  is_success = True
+  body = {
+      'datasetReference': {
+          'projectId': project_id,
+          'datasetId': dataset_id
+      }
+  }
+
+  try:
+    dataset_req = biq_query.datasets().insert(projectId=project_id, body=body)
+    dataset_req.execute(num_retries=NUM_RETRIES)
+  except HttpError as http_error:
+    if http_error.resp.status == 409:
+      print 'Warning: The dataset %s already exists' % dataset_id
+    else:
+      # Note: For more debugging info, print "http_error.content"
+      print 'Error in creating dataset: %s. Err: %s' % (dataset_id, http_error)
+      is_success = False
+  return is_success
+
+
+def create_table(big_query, project_id, dataset_id, table_id, table_schema,
+                 description):
+  is_success = True
+
+  body = {
+      'description': description,
+      'schema': {
+          'fields': [{
+              'name': field_name,
+              'type': field_type,
+              'description': field_description
+          } for (field_name, field_type, field_description) in table_schema]
+      },
+      'tableReference': {
+          'datasetId': dataset_id,
+          'projectId': project_id,
+          'tableId': table_id
+      }
+  }
+
+  try:
+    table_req = big_query.tables().insert(projectId=project_id,
+                                          datasetId=dataset_id,
+                                          body=body)
+    res = table_req.execute(num_retries=NUM_RETRIES)
+    print 'Successfully created %s "%s"' % (res['kind'], res['id'])
+  except HttpError as http_error:
+    if http_error.resp.status == 409:
+      print 'Warning: Table %s already exists' % table_id
+    else:
+      print 'Error in creating table: %s. Err: %s' % (table_id, http_error)
+      is_success = False
+  return is_success
+
+
+def insert_rows(big_query, project_id, dataset_id, table_id, rows_list):
+  is_success = True
+  body = {'rows': rows_list}
+  try:
+    insert_req = big_query.tabledata().
insertAll ( projectId = project_id , <nl> + datasetId = dataset_id , <nl> + tableId = table_id , <nl> + body = body ) <nl> + print body <nl> + res = insert_req . execute ( num_retries = NUM_RETRIES ) <nl> + print res <nl> + except HttpError as http_error : <nl> + print ' Error in inserting rows in the table % s ' % table_id <nl> + is_success = False <nl> + return is_success <nl> + <nl> + <nl> + def sync_query_job ( big_query , project_id , query , timeout = 5000 ) : <nl> + query_data = { ' query ' : query , ' timeoutMs ' : timeout } <nl> + query_job = None <nl> + try : <nl> + query_job = big_query . jobs ( ) . query ( <nl> + projectId = project_id , <nl> + body = query_data ) . execute ( num_retries = NUM_RETRIES ) <nl> + except HttpError as http_error : <nl> + print ' Query execute job failed with error : % s ' % http_error <nl> + print http_error . content <nl> + return query_job <nl> + <nl> + # List of ( column name , column type , description ) tuples <nl> + def make_row ( unique_row_id , row_values_dict ) : <nl> + " " " row_values_dict is a dictionary of column name and column value . <nl> + " " " <nl> + return { ' insertId ' : unique_row_id , ' json ' : row_values_dict } <nl> similarity index 74 % <nl> rename from tools / gke / kubernetes_api . py <nl> rename to tools / gcp / utils / kubernetes_api . py <nl> mmm a / tools / gke / kubernetes_api . py <nl> ppp b / tools / gcp / utils / kubernetes_api . py <nl> <nl> # ! / usr / bin / env python2 . 7 <nl> - # Copyright 2015 , Google Inc . <nl> + # Copyright 2015 - 2016 , Google Inc . <nl> # All rights reserved . <nl> # <nl> # Redistribution and use in source and binary forms , with or without <nl> <nl> <nl> _REQUEST_TIMEOUT_SECS = 10 <nl> <nl> + <nl> def _make_pod_config ( pod_name , image_name , container_port_list , cmd_list , <nl> - arg_list ) : <nl> + arg_list , env_dict ) : <nl> " " " Creates a string containing the Pod defintion as required by the Kubernetes API " " " <nl> body = { <nl> ' kind ' : ' Pod ' , <nl> def _make_pod_config ( pod_name , image_name , container_port_list , cmd_list , <nl> { <nl> ' name ' : pod_name , <nl> ' image ' : image_name , <nl> - ' ports ' : [ ] <nl> + ' ports ' : [ { ' containerPort ' : port , <nl> + ' protocol ' : ' TCP ' } <nl> + for port in container_port_list ] , <nl> + ' imagePullPolicy ' : ' Always ' <nl> } <nl> ] <nl> } <nl> } <nl> - # Populate the ' ports ' list <nl> - for port in container_port_list : <nl> - port_entry = { ' containerPort ' : port , ' protocol ' : ' TCP ' } <nl> - body [ ' spec ' ] [ ' containers ' ] [ 0 ] [ ' ports ' ] . append ( port_entry ) <nl> + <nl> + env_list = [ { ' name ' : k , ' value ' : v } for ( k , v ) in env_dict . iteritems ( ) ] <nl> + if len ( env_list ) > 0 : <nl> + body [ ' spec ' ] [ ' containers ' ] [ 0 ] [ ' env ' ] = env_list <nl> <nl> # Add the ' Command ' and ' Args ' attributes if they are passed . <nl> # Note : <nl> # - ' Command ' overrides the ENTRYPOINT in the Docker Image <nl> - # - ' Args ' override the COMMAND in Docker image ( yes , it is confusing ! ) <nl> + # - ' Args ' override the CMD in Docker image ( yes , it is confusing ! 
) <nl> if len ( cmd_list ) > 0 : <nl> body [ ' spec ' ] [ ' containers ' ] [ 0 ] [ ' command ' ] = cmd_list <nl> if len ( arg_list ) > 0 : <nl> def _make_pod_config ( pod_name , image_name , container_port_list , cmd_list , <nl> <nl> <nl> def _make_service_config ( service_name , pod_name , service_port_list , <nl> - container_port_list , is_headless ) : <nl> + container_port_list , is_headless ) : <nl> " " " Creates a string containing the Service definition as required by the Kubernetes API . <nl> <nl> NOTE : <nl> def _print_connection_error ( msg ) : <nl> print ( ' ERROR : Connection failed . Did you remember to run Kubenetes proxy on ' <nl> ' localhost ( i . e kubectl proxy - - port = < proxy_port > ) ? . Error : % s ' % msg ) <nl> <nl> + <nl> def _do_post ( post_url , api_name , request_body ) : <nl> " " " Helper to do HTTP POST . <nl> <nl> def _do_post ( post_url , api_name , request_body ) : <nl> " " " <nl> is_success = True <nl> try : <nl> - r = requests . post ( post_url , data = request_body , timeout = _REQUEST_TIMEOUT_SECS ) <nl> + r = requests . post ( post_url , <nl> + data = request_body , <nl> + timeout = _REQUEST_TIMEOUT_SECS ) <nl> if r . status_code = = requests . codes . conflict : <nl> print ( ' WARN : Looks like the resource already exists . Api : % s , url : % s ' % <nl> ( api_name , post_url ) ) <nl> def _do_post ( post_url , api_name , request_body ) : <nl> print ( ' ERROR : % s API returned error . HTTP response : ( % d ) % s ' % <nl> ( api_name , r . status_code , r . text ) ) <nl> is_success = False <nl> - except ( requests . exceptions . Timeout , requests . exceptions . ConnectionError ) as e : <nl> + except ( requests . exceptions . Timeout , <nl> + requests . exceptions . ConnectionError ) as e : <nl> is_success = False <nl> _print_connection_error ( str ( e ) ) <nl> return is_success <nl> def _do_delete ( del_url , api_name ) : <nl> print ( ' ERROR : % s API returned error . HTTP response : % s ' % <nl> ( api_name , r . text ) ) <nl> is_success = False <nl> - except ( requests . exceptions . Timeout , requests . exceptions . ConnectionError ) as e : <nl> + except ( requests . exceptions . Timeout , <nl> + requests . exceptions . ConnectionError ) as e : <nl> is_success = False <nl> _print_connection_error ( str ( e ) ) <nl> return is_success <nl> def create_service ( kube_host , kube_port , namespace , service_name , pod_name , <nl> post_url = ' http : / / % s : % d / api / v1 / namespaces / % s / services ' % ( <nl> kube_host , kube_port , namespace ) <nl> request_body = _make_service_config ( service_name , pod_name , service_port_list , <nl> - container_port_list , is_headless ) <nl> + container_port_list , is_headless ) <nl> return _do_post ( post_url , ' Create Service ' , request_body ) <nl> <nl> <nl> def create_pod ( kube_host , kube_port , namespace , pod_name , image_name , <nl> - container_port_list , cmd_list , arg_list ) : <nl> + container_port_list , cmd_list , arg_list , env_dict ) : <nl> " " " Creates a Kubernetes Pod . 
<nl> <nl> Note that it is generally NOT considered a good practice to directly create <nl> def create_pod ( kube_host , kube_port , namespace , pod_name , image_name , <nl> post_url = ' http : / / % s : % d / api / v1 / namespaces / % s / pods ' % ( kube_host , kube_port , <nl> namespace ) <nl> request_body = _make_pod_config ( pod_name , image_name , container_port_list , <nl> - cmd_list , arg_list ) <nl> + cmd_list , arg_list , env_dict ) <nl> return _do_post ( post_url , ' Create Pod ' , request_body ) <nl> <nl> <nl> def delete_pod ( kube_host , kube_port , namespace , pod_name ) : <nl> del_url = ' http : / / % s : % d / api / v1 / namespaces / % s / pods / % s ' % ( kube_host , kube_port , <nl> namespace , pod_name ) <nl> return _do_delete ( del_url , ' Delete Pod ' ) <nl> + <nl> + <nl> + def create_pod_and_service ( kube_host , kube_port , namespace , pod_name , <nl> + image_name , container_port_list , cmd_list , arg_list , <nl> + env_dict , is_headless_service ) : <nl> + " " " A helper function that creates a pod and a service ( if pod creation was successful ) . " " " <nl> + is_success = create_pod ( kube_host , kube_port , namespace , pod_name , image_name , <nl> + container_port_list , cmd_list , arg_list , env_dict ) <nl> + if not is_success : <nl> + print ' Error in creating Pod ' <nl> + return False <nl> + <nl> + is_success = create_service ( <nl> + kube_host , <nl> + kube_port , <nl> + namespace , <nl> + pod_name , # Use pod_name for service <nl> + pod_name , <nl> + container_port_list , # Service port list same as container port list <nl> + container_port_list , <nl> + is_headless_service ) <nl> + if not is_success : <nl> + print ' Error in creating Service ' <nl> + return False <nl> + <nl> + print ' Successfully created the pod / service % s ' % pod_name <nl> + return True <nl> + <nl> + <nl> + def delete_pod_and_service ( kube_host , kube_port , namespace , pod_name ) : <nl> + " " " A helper function that calls delete_pod and delete_service " " " <nl> + is_success = delete_pod ( kube_host , kube_port , namespace , pod_name ) <nl> + if not is_success : <nl> + print ' Error in deleting pod % s ' % pod_name <nl> + return False <nl> + <nl> + # Note : service name assumed to the the same as pod name <nl> + is_success = delete_service ( kube_host , kube_port , namespace , pod_name ) <nl> + if not is_success : <nl> + print ' Error in deleting service % s ' % pod_name <nl> + return False <nl> + <nl> + print ' Successfully deleted the Pod / Service : % s ' % pod_name <nl> + return True <nl> mmm a / tools / jenkins / build_interop_stress_image . sh <nl> ppp b / tools / jenkins / build_interop_stress_image . sh <nl> set - x <nl> <nl> # Params : <nl> # INTEROP_IMAGE - name of tag of the final interop image <nl> + # INTEROP_IMAGE_TAG - Optional . If set , the created image will be tagged using <nl> + # the command : ' docker tag $ INTEROP_IMAGE $ INTEROP_IMAGE_REPOSITORY_TAG ' <nl> # BASE_NAME - base name used to locate the base Dockerfile and build script <nl> # TTY_FLAG - optional - t flag to make docker allocate tty <nl> # BUILD_INTEROP_DOCKER_EXTRA_ARGS - optional args to be passed to the <nl> CONTAINER_NAME = " build_ $ { BASE_NAME } _ $ ( uuidgen ) " <nl> $ BASE_IMAGE \ <nl> bash - l / var / local / jenkins / grpc / tools / dockerfile / $ BASE_NAME / build_interop_stress . 
sh \ <nl> & & docker commit $ CONTAINER_NAME $ INTEROP_IMAGE \ <nl> + & & ( if [ - n " $ INTEROP_IMAGE_REPOSITORY_TAG " ] ; then docker tag - f $ INTEROP_IMAGE $ INTEROP_IMAGE_REPOSITORY_TAG ; fi ) \ <nl> & & echo " Successfully built image $ INTEROP_IMAGE " ) <nl> EXITCODE = $ ? <nl> <nl> new file mode 100755 <nl> index 00000000000 . . 634eb1aca53 <nl> mmm / dev / null <nl> ppp b / tools / run_tests / stress_test / run_stress_tests_on_gke . py <nl> <nl> + # ! / usr / bin / env python2 . 7 <nl> + # Copyright 2015 - 2016 , Google Inc . <nl> + # All rights reserved . <nl> + # <nl> + # Redistribution and use in source and binary forms , with or without <nl> + # modification , are permitted provided that the following conditions are <nl> + # met : <nl> + # <nl> + # * Redistributions of source code must retain the above copyright <nl> + # notice , this list of conditions and the following disclaimer . <nl> + # * Redistributions in binary form must reproduce the above <nl> + # copyright notice , this list of conditions and the following disclaimer <nl> + # in the documentation and / or other materials provided with the <nl> + # distribution . <nl> + # * Neither the name of Google Inc . nor the names of its <nl> + # contributors may be used to endorse or promote products derived from <nl> + # this software without specific prior written permission . <nl> + # <nl> + # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS <nl> + # " AS IS " AND ANY EXPRESS OR IMPLIED WARRANTIES , INCLUDING , BUT NOT <nl> + # LIMITED TO , THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR <nl> + # A PARTICULAR PURPOSE ARE DISCLAIMED . IN NO EVENT SHALL THE COPYRIGHT <nl> + # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT , INDIRECT , INCIDENTAL , <nl> + # SPECIAL , EXEMPLARY , OR CONSEQUENTIAL DAMAGES ( INCLUDING , BUT NOT <nl> + # LIMITED TO , PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES ; LOSS OF USE , <nl> + # DATA , OR PROFITS ; OR BUSINESS INTERRUPTION ) HOWEVER CAUSED AND ON ANY <nl> + # THEORY OF LIABILITY , WHETHER IN CONTRACT , STRICT LIABILITY , OR TORT <nl> + # ( INCLUDING NEGLIGENCE OR OTHERWISE ) ARISING IN ANY WAY OUT OF THE USE <nl> + # OF THIS SOFTWARE , EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE . <nl> + import argparse <nl> + import datetime <nl> + import os <nl> + import subprocess <nl> + import sys <nl> + import time <nl> + <nl> + stress_test_utils_dir = os . path . abspath ( os . path . join ( <nl> + os . path . dirname ( __file__ ) , ' . . / . . / gcp / stress_test ' ) ) <nl> + sys . path . append ( stress_test_utils_dir ) <nl> + from stress_test_utils import BigQueryHelper <nl> + <nl> + kubernetes_api_dir = os . path . abspath ( os . path . join ( <nl> + os . path . dirname ( __file__ ) , ' . . / . . / gcp / utils ' ) ) <nl> + sys . path . append ( kubernetes_api_dir ) <nl> + <nl> + import kubernetes_api <nl> + <nl> + _GRPC_ROOT = os . path . abspath ( os . path . join ( <nl> + os . path . dirname ( sys . argv [ 0 ] ) , ' . . / . . / . . ' ) ) <nl> + os . 
chdir ( _GRPC_ROOT ) <nl> + <nl> + # num of seconds to wait for the GKE image to start and warmup <nl> + _GKE_IMAGE_WARMUP_WAIT_SECS = 60 <nl> + <nl> + _SERVER_POD_NAME = ' stress - server ' <nl> + _CLIENT_POD_NAME_PREFIX = ' stress - client ' <nl> + _DATASET_ID_PREFIX = ' stress_test ' <nl> + _SUMMARY_TABLE_ID = ' summary ' <nl> + _QPS_TABLE_ID = ' qps ' <nl> + <nl> + _DEFAULT_DOCKER_IMAGE_NAME = ' grpc_stress_test ' <nl> + <nl> + # The default port on which the kubernetes proxy server is started on localhost <nl> + # ( i . e kubectl proxy - - port = < port > ) <nl> + _DEFAULT_KUBERNETES_PROXY_PORT = 8001 <nl> + <nl> + # How frequently should the stress client wrapper script ( running inside a GKE <nl> + # container ) poll the health of the stress client ( also running inside the GKE <nl> + # container ) and upload metrics to BigQuery <nl> + _DEFAULT_STRESS_CLIENT_POLL_INTERVAL_SECS = 60 <nl> + <nl> + # The default setting for stress test server and client <nl> + _DEFAULT_STRESS_SERVER_PORT = 8080 <nl> + _DEFAULT_METRICS_PORT = 8081 <nl> + _DEFAULT_TEST_CASES_STR = ' empty_unary : 1 , large_unary : 1 , client_streaming : 1 , server_streaming : 1 , empty_stream : 1 ' <nl> + _DEFAULT_NUM_CHANNELS_PER_SERVER = 5 <nl> + _DEFAULT_NUM_STUBS_PER_CHANNEL = 10 <nl> + _DEFAULT_METRICS_COLLECTION_INTERVAL_SECS = 30 <nl> + <nl> + # Number of stress client instances to launch <nl> + _DEFAULT_NUM_CLIENTS = 3 <nl> + <nl> + # How frequently should this test monitor the health of Stress clients and <nl> + # Servers running in GKE <nl> + _DEFAULT_TEST_POLL_INTERVAL_SECS = 60 <nl> + <nl> + # Default run time for this test ( 2 hour ) <nl> + _DEFAULT_TEST_DURATION_SECS = 7200 <nl> + <nl> + # The number of seconds it would take a GKE pod to warm up ( i . e get to ' Running ' <nl> + # state from the time of creation ) . Ideally this is something the test should <nl> + # automatically determine by using Kubernetes API to poll the pods status . <nl> + _DEFAULT_GKE_WARMUP_SECS = 60 <nl> + <nl> + <nl> + class KubernetesProxy : <nl> + " " " Class to start a proxy on localhost to the Kubernetes API server " " " <nl> + <nl> + def __init__ ( self , api_port ) : <nl> + self . port = api_port <nl> + self . p = None <nl> + self . started = False <nl> + <nl> + def start ( self ) : <nl> + cmd = [ ' kubectl ' , ' proxy ' , ' - - port = % d ' % self . port ] <nl> + self . p = subprocess . Popen ( args = cmd ) <nl> + self . started = True <nl> + time . sleep ( 2 ) <nl> + print ' . . Started ' <nl> + <nl> + def get_port ( self ) : <nl> + return self . port <nl> + <nl> + def is_started ( self ) : <nl> + return self . started <nl> + <nl> + def __del__ ( self ) : <nl> + if self . p is not None : <nl> + print ' Shutting down Kubernetes proxy . . ' <nl> + self . p . kill ( ) <nl> + <nl> + <nl> + class TestSettings : <nl> + <nl> + def __init__ ( self , build_docker_image , test_poll_interval_secs , <nl> + test_duration_secs , kubernetes_proxy_port ) : <nl> + self . build_docker_image = build_docker_image <nl> + self . test_poll_interval_secs = test_poll_interval_secs <nl> + self . test_duration_secs = test_duration_secs <nl> + self . kubernetes_proxy_port = kubernetes_proxy_port <nl> + <nl> + <nl> + class GkeSettings : <nl> + <nl> + def __init__ ( self , project_id , docker_image_name ) : <nl> + self . project_id = project_id <nl> + self . docker_image_name = docker_image_name <nl> + self . tag_name = ' gcr . 
io / % s / % s ' % ( project_id , docker_image_name ) <nl> + <nl> + <nl> + class BigQuerySettings : <nl> + <nl> + def __init__ ( self , run_id , dataset_id , summary_table_id , qps_table_id ) : <nl> + self . run_id = run_id <nl> + self . dataset_id = dataset_id <nl> + self . summary_table_id = summary_table_id <nl> + self . qps_table_id = qps_table_id <nl> + <nl> + <nl> + class StressServerSettings : <nl> + <nl> + def __init__ ( self , server_pod_name , server_port ) : <nl> + self . server_pod_name = server_pod_name <nl> + self . server_port = server_port <nl> + <nl> + <nl> + class StressClientSettings : <nl> + <nl> + def __init__ ( self , num_clients , client_pod_name_prefix , server_pod_name , <nl> + server_port , metrics_port , metrics_collection_interval_secs , <nl> + stress_client_poll_interval_secs , num_channels_per_server , <nl> + num_stubs_per_channel , test_cases_str ) : <nl> + self . num_clients = num_clients <nl> + self . client_pod_name_prefix = client_pod_name_prefix <nl> + self . server_pod_name = server_pod_name <nl> + self . server_port = server_port <nl> + self . metrics_port = metrics_port <nl> + self . metrics_collection_interval_secs = metrics_collection_interval_secs <nl> + self . stress_client_poll_interval_secs = stress_client_poll_interval_secs <nl> + self . num_channels_per_server = num_channels_per_server <nl> + self . num_stubs_per_channel = num_stubs_per_channel <nl> + self . test_cases_str = test_cases_str <nl> + <nl> + # = = Derived properties = = <nl> + # Note : Client can accept a list of server addresses ( a comma separated list <nl> + # of ' server_name : server_port ' ) . In this case , we only have one server <nl> + # address to pass <nl> + self . server_addresses = ' % s . default . svc . cluster . local : % d ' % ( <nl> + server_pod_name , server_port ) <nl> + self . client_pod_names_list = [ ' % s - % d ' % ( client_pod_name_prefix , i ) <nl> + for i in range ( 1 , num_clients + 1 ) ] <nl> + <nl> + <nl> + def _build_docker_image ( image_name , tag_name ) : <nl> + " " " Build the docker image and add tag it to the GKE repository " " " <nl> + print ' Building docker image : % s ' % image_name <nl> + os . environ [ ' INTEROP_IMAGE ' ] = image_name <nl> + os . environ [ ' INTEROP_IMAGE_REPOSITORY_TAG ' ] = tag_name <nl> + # Note that ' BASE_NAME ' HAS to be ' grpc_interop_stress_cxx ' since the script <nl> + # build_interop_stress_image . sh invokes the following script : <nl> + # tools / dockerfile / $ BASE_NAME / build_interop_stress . sh <nl> + os . environ [ ' BASE_NAME ' ] = ' grpc_interop_stress_cxx ' <nl> + cmd = [ ' tools / jenkins / build_interop_stress_image . sh ' ] <nl> + retcode = subprocess . call ( args = cmd ) <nl> + if retcode ! = 0 : <nl> + print ' Error in building docker image ' <nl> + return False <nl> + return True <nl> + <nl> + <nl> + def _push_docker_image_to_gke_registry ( docker_tag_name ) : <nl> + " " " Executes ' gcloud docker push < docker_tag_name > ' to push the image to GKE registry " " " <nl> + cmd = [ ' gcloud ' , ' docker ' , ' push ' , docker_tag_name ] <nl> + print ' Pushing % s to GKE registry . . ' % docker_tag_name <nl> + retcode = subprocess . call ( args = cmd ) <nl> + if retcode ! 
= 0 : <nl> + print ' Error in pushing docker image % s to the GKE registry ' % docker_tag_name <nl> + return False <nl> + return True <nl> + <nl> + <nl> + def _launch_server ( gke_settings , stress_server_settings , bq_settings , <nl> + kubernetes_proxy ) : <nl> + " " " Launches a stress test server instance in GKE cluster " " " <nl> + if not kubernetes_proxy . is_started : <nl> + print ' Kubernetes proxy must be started before calling this function ' <nl> + return False <nl> + <nl> + # This is the wrapper script that is run in the container . This script runs <nl> + # the actual stress test server <nl> + server_cmd_list = [ ' / var / local / git / grpc / tools / gcp / stress_test / run_server . py ' ] <nl> + <nl> + # run_server . py does not take any args from the command line . The args are <nl> + # instead passed via environment variables ( see server_env below ) <nl> + server_arg_list = [ ] <nl> + <nl> + # The parameters to the script run_server . py are injected into the container <nl> + # via environment variables <nl> + server_env = { <nl> + ' STRESS_TEST_IMAGE_TYPE ' : ' SERVER ' , <nl> + ' STRESS_TEST_IMAGE ' : ' / var / local / git / grpc / bins / opt / interop_server ' , <nl> + ' STRESS_TEST_ARGS_STR ' : ' - - port = % s ' % stress_server_settings . server_port , <nl> + ' RUN_ID ' : bq_settings . run_id , <nl> + ' POD_NAME ' : stress_server_settings . server_pod_name , <nl> + ' GCP_PROJECT_ID ' : gke_settings . project_id , <nl> + ' DATASET_ID ' : bq_settings . dataset_id , <nl> + ' SUMMARY_TABLE_ID ' : bq_settings . summary_table_id , <nl> + ' QPS_TABLE_ID ' : bq_settings . qps_table_id <nl> + } <nl> + <nl> + # Launch Server <nl> + is_success = kubernetes_api . create_pod_and_service ( <nl> + ' localhost ' , <nl> + kubernetes_proxy . get_port ( ) , <nl> + ' default ' , # Use ' default ' namespace <nl> + stress_server_settings . server_pod_name , <nl> + gke_settings . tag_name , <nl> + [ stress_server_settings . server_port ] , # Port that should be exposed <nl> + server_cmd_list , <nl> + server_arg_list , <nl> + server_env , <nl> + True # Headless = True for server . Since we want DNS records to be created by GKE <nl> + ) <nl> + <nl> + return is_success <nl> + <nl> + <nl> + def _launch_client ( gke_settings , stress_server_settings , stress_client_settings , <nl> + bq_settings , kubernetes_proxy ) : <nl> + " " " Launches a configurable number of stress test clients on GKE cluster " " " <nl> + if not kubernetes_proxy . is_started : <nl> + print ' Kubernetes proxy must be started before calling this function ' <nl> + return False <nl> + <nl> + stress_client_arg_list = [ <nl> + ' - - server_addresses = % s ' % stress_client_settings . server_addresses , <nl> + ' - - test_cases = % s ' % stress_client_settings . test_cases_str , <nl> + ' - - num_stubs_per_channel = % d ' % <nl> + stress_client_settings . num_stubs_per_channel <nl> + ] <nl> + <nl> + # This is the wrapper script that is run in the container . This script runs <nl> + # the actual stress client <nl> + client_cmd_list = [ ' / var / local / git / grpc / tools / gcp / stress_test / run_client . py ' ] <nl> + <nl> + # run_client . py takes no args . All args are passed as env variables ( see <nl> + # client_env ) <nl> + client_arg_list = [ ] <nl> + <nl> + metrics_server_address = ' localhost : % d ' % stress_client_settings . 
metrics_port <nl> + metrics_client_arg_list = [ <nl> + ' - - metrics_server_address = % s ' % metrics_server_address , <nl> + ' - - total_only = true ' <nl> + ] <nl> + <nl> + # The parameters to the script run_client . py are injected into the container <nl> + # via environment variables <nl> + client_env = { <nl> + ' STRESS_TEST_IMAGE_TYPE ' : ' CLIENT ' , <nl> + ' STRESS_TEST_IMAGE ' : ' / var / local / git / grpc / bins / opt / stress_test ' , <nl> + ' STRESS_TEST_ARGS_STR ' : ' ' . join ( stress_client_arg_list ) , <nl> + ' METRICS_CLIENT_IMAGE ' : ' / var / local / git / grpc / bins / opt / metrics_client ' , <nl> + ' METRICS_CLIENT_ARGS_STR ' : ' ' . join ( metrics_client_arg_list ) , <nl> + ' RUN_ID ' : bq_settings . run_id , <nl> + ' POLL_INTERVAL_SECS ' : <nl> + str ( stress_client_settings . stress_client_poll_interval_secs ) , <nl> + ' GCP_PROJECT_ID ' : gke_settings . project_id , <nl> + ' DATASET_ID ' : bq_settings . dataset_id , <nl> + ' SUMMARY_TABLE_ID ' : bq_settings . summary_table_id , <nl> + ' QPS_TABLE_ID ' : bq_settings . qps_table_id <nl> + } <nl> + <nl> + for pod_name in stress_client_settings . client_pod_names_list : <nl> + client_env [ ' POD_NAME ' ] = pod_name <nl> + is_success = kubernetes_api . create_pod_and_service ( <nl> + ' localhost ' , # Since proxy is running on localhost <nl> + kubernetes_proxy . get_port ( ) , <nl> + ' default ' , # default namespace <nl> + pod_name , <nl> + gke_settings . tag_name , <nl> + [ stress_client_settings . metrics_port <nl> + ] , # Client pods expose metrics port <nl> + client_cmd_list , <nl> + client_arg_list , <nl> + client_env , <nl> + False # Client is not a headless service <nl> + ) <nl> + if not is_success : <nl> + print ' Error in launching client % s ' % pod_name <nl> + return False <nl> + <nl> + return True <nl> + <nl> + <nl> + def _launch_server_and_client ( gke_settings , stress_server_settings , <nl> + stress_client_settings , bq_settings , <nl> + kubernetes_proxy_port ) : <nl> + # Start kubernetes proxy <nl> + print ' Kubernetes proxy ' <nl> + kubernetes_proxy = KubernetesProxy ( kubernetes_proxy_port ) <nl> + kubernetes_proxy . start ( ) <nl> + <nl> + print ' Launching server . . ' <nl> + is_success = _launch_server ( gke_settings , stress_server_settings , bq_settings , <nl> + kubernetes_proxy ) <nl> + if not is_success : <nl> + print ' Error in launching server ' <nl> + return False <nl> + <nl> + # Server takes a while to start . <nl> + # TODO ( sree ) Use Kubernetes API to query the status of the server instead of <nl> + # sleeping <nl> + print ' Waiting for % s seconds for the server to start . . . ' % _GKE_IMAGE_WARMUP_WAIT_SECS <nl> + time . sleep ( _GKE_IMAGE_WARMUP_WAIT_SECS ) <nl> + <nl> + # Launch client <nl> + client_pod_name_prefix = ' stress - client ' <nl> + is_success = _launch_client ( gke_settings , stress_server_settings , <nl> + stress_client_settings , bq_settings , <nl> + kubernetes_proxy ) <nl> + <nl> + if not is_success : <nl> + print ' Error in launching client ( s ) ' <nl> + return False <nl> + <nl> + print ' Waiting for % s seconds for the client images to start . . . ' % _GKE_IMAGE_WARMUP_WAIT_SECS <nl> + time . sleep ( _GKE_IMAGE_WARMUP_WAIT_SECS ) <nl> + return True <nl> + <nl> + <nl> + def _delete_server_and_client ( stress_server_settings , stress_client_settings , <nl> + kubernetes_proxy_port ) : <nl> + kubernetes_proxy = KubernetesProxy ( kubernetes_proxy_port ) <nl> + kubernetes_proxy . 
start ( ) <nl> + <nl> + # Delete clients first <nl> + is_success = True <nl> + for pod_name in stress_client_settings . client_pod_names_list : <nl> + is_success = kubernetes_api . delete_pod_and_service ( <nl> + ' localhost ' , kubernetes_proxy_port , ' default ' , pod_name ) <nl> + if not is_success : <nl> + return False <nl> + <nl> + # Delete server <nl> + is_success = kubernetes_api . delete_pod_and_service ( <nl> + ' localhost ' , kubernetes_proxy_port , ' default ' , <nl> + stress_server_settings . server_pod_name ) <nl> + return is_success <nl> + <nl> + <nl> + def run_test_main ( test_settings , gke_settings , stress_server_settings , <nl> + stress_client_clients ) : <nl> + is_success = True <nl> + <nl> + if test_settings . build_docker_image : <nl> + is_success = _build_docker_image ( gke_settings . docker_image_name , <nl> + gke_settings . tag_name ) <nl> + if not is_success : <nl> + return False <nl> + <nl> + is_success = _push_docker_image_to_gke_registry ( gke_settings . tag_name ) <nl> + if not is_success : <nl> + return False <nl> + <nl> + # Create a unique id for this run ( Note : Using timestamp instead of UUID to <nl> + # make it easier to deduce the date / time of the run just by looking at the run <nl> + # run id . This is useful in debugging when looking at records in Biq query ) <nl> + run_id = datetime . datetime . now ( ) . strftime ( ' % Y_ % m_ % d_ % H_ % M_ % S ' ) <nl> + dataset_id = ' % s_ % s ' % ( _DATASET_ID_PREFIX , run_id ) <nl> + <nl> + # Big Query settings ( common for both Stress Server and Client ) <nl> + bq_settings = BigQuerySettings ( run_id , dataset_id , _SUMMARY_TABLE_ID , <nl> + _QPS_TABLE_ID ) <nl> + <nl> + bq_helper = BigQueryHelper ( run_id , ' ' , ' ' , args . project_id , dataset_id , <nl> + _SUMMARY_TABLE_ID , _QPS_TABLE_ID ) <nl> + bq_helper . initialize ( ) <nl> + <nl> + try : <nl> + is_success = _launch_server_and_client ( gke_settings , stress_server_settings , <nl> + stress_client_settings , bq_settings , <nl> + test_settings . kubernetes_proxy_port ) <nl> + if not is_success : <nl> + return False <nl> + <nl> + start_time = datetime . datetime . now ( ) <nl> + end_time = start_time + datetime . timedelta ( <nl> + seconds = test_settings . test_duration_secs ) <nl> + print ' Running the test until % s ' % end_time . isoformat ( ) <nl> + <nl> + while True : <nl> + if datetime . datetime . now ( ) > end_time : <nl> + print ' Test was run for % d seconds ' % test_settings . test_duration_secs <nl> + break <nl> + <nl> + # Check if either stress server or clients have failed <nl> + if bq_helper . check_if_any_tests_failed ( ) : <nl> + is_success = False <nl> + print ' Some tests failed . ' <nl> + break <nl> + <nl> + # Things seem to be running fine . Wait until next poll time to check the <nl> + # status <nl> + print ' Sleeping for % d seconds . . ' % test_settings . test_poll_interval_secs <nl> + time . sleep ( test_settings . test_poll_interval_secs ) <nl> + <nl> + # Print BiqQuery tables <nl> + bq_helper . print_summary_records ( ) <nl> + bq_helper . print_qps_records ( ) <nl> + <nl> + finally : <nl> + # If is_success is False at this point , it means that the stress tests were <nl> + # started successfully but failed while running the tests . In this case we <nl> + # do should not delete the pods ( since they contain all the failure <nl> + # information ) <nl> + if is_success : <nl> + _delete_server_and_client ( stress_server_settings , stress_client_settings , <nl> + test_settings . 
kubernetes_proxy_port ) <nl> + <nl> + return is_success <nl> + <nl> + <nl> + argp = argparse . ArgumentParser ( <nl> + description = ' Launch stress tests in GKE ' , <nl> + formatter_class = argparse . ArgumentDefaultsHelpFormatter ) <nl> + argp . add_argument ( ' - - project_id ' , <nl> + required = True , <nl> + help = ' The Google Cloud Platform Project Id ' ) <nl> + argp . add_argument ( ' - - num_clients ' , <nl> + default = 1 , <nl> + type = int , <nl> + help = ' Number of client instances to start ' ) <nl> + argp . add_argument ( ' - - docker_image_name ' , <nl> + default = _DEFAULT_DOCKER_IMAGE_NAME , <nl> + help = ' The name of the docker image containing stress client ' <nl> + ' and stress servers ' ) <nl> + argp . add_argument ( ' - - build_docker_image ' , <nl> + dest = ' build_docker_image ' , <nl> + action = ' store_true ' , <nl> + help = ' Build a docker image and push to Google Container ' <nl> + ' Registry ' ) <nl> + argp . add_argument ( ' - - do_not_build_docker_image ' , <nl> + dest = ' build_docker_image ' , <nl> + action = ' store_false ' , <nl> + help = ' Do not build and push docker image to Google Container ' <nl> + ' Registry ' ) <nl> + argp . set_defaults ( build_docker_image = True ) <nl> + <nl> + argp . add_argument ( ' - - test_poll_interval_secs ' , <nl> + default = _DEFAULT_TEST_POLL_INTERVAL_SECS , <nl> + type = int , <nl> + help = ' How frequently should this script should monitor the ' <nl> + ' health of stress clients and servers running in the GKE ' <nl> + ' cluster ' ) <nl> + argp . add_argument ( ' - - test_duration_secs ' , <nl> + default = _DEFAULT_TEST_DURATION_SECS , <nl> + type = int , <nl> + help = ' How long should this test be run ' ) <nl> + argp . add_argument ( ' - - kubernetes_proxy_port ' , <nl> + default = _DEFAULT_KUBERNETES_PROXY_PORT , <nl> + type = int , <nl> + help = ' The port on which the kubernetes proxy ( on localhost ) ' <nl> + ' is started ' ) <nl> + argp . add_argument ( ' - - stress_server_port ' , <nl> + default = _DEFAULT_STRESS_SERVER_PORT , <nl> + type = int , <nl> + help = ' The port on which the stress server ( in GKE ' <nl> + ' containers ) listens ' ) <nl> + argp . add_argument ( ' - - stress_client_metrics_port ' , <nl> + default = _DEFAULT_METRICS_PORT , <nl> + type = int , <nl> + help = ' The port on which the stress clients ( in GKE ' <nl> + ' containers ) expose metrics ' ) <nl> + argp . add_argument ( ' - - stress_client_poll_interval_secs ' , <nl> + default = _DEFAULT_STRESS_CLIENT_POLL_INTERVAL_SECS , <nl> + type = int , <nl> + help = ' How frequently should the stress client wrapper script ' <nl> + ' running inside GKE should monitor health of the actual ' <nl> + ' stress client process and upload the metrics to BigQuery ' ) <nl> + argp . add_argument ( ' - - stress_client_metrics_collection_interval_secs ' , <nl> + default = _DEFAULT_METRICS_COLLECTION_INTERVAL_SECS , <nl> + type = int , <nl> + help = ' How frequently should metrics be collected in - memory on ' <nl> + ' the stress clients ( running inside GKE containers ) . Note ' <nl> + ' that this is NOT the same as the upload - to - BigQuery ' <nl> + ' frequency . The metrics upload frequency is controlled by the ' <nl> + ' - - stress_client_poll_interval_secs flag ' ) <nl> + argp . add_argument ( ' - - stress_client_num_channels_per_server ' , <nl> + default = _DEFAULT_NUM_CHANNELS_PER_SERVER , <nl> + type = int , <nl> + help = ' The number of channels created to each server from a ' <nl> + ' stress client ' ) <nl> + argp . 
add_argument ( ' - - stress_client_num_stubs_per_channel ' , <nl> + default = _DEFAULT_NUM_STUBS_PER_CHANNEL , <nl> + type = int , <nl> + help = ' The number of stubs created per channel . This number ' <nl> + ' indicates the max number of RPCs that can be made in ' <nl> + ' parallel on each channel at any given time ' ) <nl> + argp . add_argument ( ' - - stress_client_test_cases ' , <nl> + default = _DEFAULT_TEST_CASES_STR , <nl> + help = ' List of test cases ( with weights ) to be executed by the ' <nl> + ' stress test client . The list is in the following format : \ n ' <nl> + ' < testcase_1 : w_1 , < test_case2 : w_2 > . . < testcase_n : w_n > \ n ' <nl> + ' ( Note : The weights do not have to add up to 100 ) ' ) <nl> + <nl> + if __name__ = = ' __main__ ' : <nl> + args = argp . parse_args ( ) <nl> + <nl> + test_settings = TestSettings ( <nl> + args . build_docker_image , args . test_poll_interval_secs , <nl> + args . test_duration_secs , args . kubernetes_proxy_port ) <nl> + <nl> + gke_settings = GkeSettings ( args . project_id , args . docker_image_name ) <nl> + <nl> + stress_server_settings = StressServerSettings ( _SERVER_POD_NAME , <nl> + args . stress_server_port ) <nl> + stress_client_settings = StressClientSettings ( <nl> + args . num_clients , _CLIENT_POD_NAME_PREFIX , _SERVER_POD_NAME , <nl> + args . stress_server_port , args . stress_client_metrics_port , <nl> + args . stress_client_metrics_collection_interval_secs , <nl> + args . stress_client_poll_interval_secs , <nl> + args . stress_client_num_channels_per_server , <nl> + args . stress_client_num_stubs_per_channel , args . stress_client_test_cases ) <nl> + <nl> + run_test_main ( test_settings , gke_settings , stress_server_settings , <nl> + stress_client_settings ) <nl>
Merge pull request from sreecha/stress_test_scripts
grpc/grpc
8ba6a6b3211efbb90355239899cda7a08d8ffbe2
2016-02-29T23:01:54Z
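The run wrappers in this commit lean on BigQuery streaming-insert deduplication: every row carries an insertId, and a retried insertAll drops rows whose insertId the service has already seen, which is why make_row derives the id from run/pod/event rather than from a random UUID. Below is a minimal standalone sketch of that same make_row / insertAll pattern, using the same google-api-python-client calls the commit itself uses; the project, dataset, and table names are placeholders, not values taken from this commit.

from apiclient import discovery
from oauth2client.client import GoogleCredentials

NUM_RETRIES = 3

def make_row(unique_row_id, row_values_dict):
    # 'insertId' is what BigQuery uses for best-effort duplicate
    # detection when the same row is re-sent after a timeout.
    return {'insertId': unique_row_id, 'json': row_values_dict}

def insert_rows(big_query, project_id, dataset_id, table_id, rows_list):
    req = big_query.tabledata().insertAll(projectId=project_id,
                                          datasetId=dataset_id,
                                          tableId=table_id,
                                          body={'rows': rows_list})
    return req.execute(num_retries=NUM_RETRIES)

if __name__ == '__main__':
    creds = GoogleCredentials.get_application_default()
    bq = discovery.build('bigquery', 'v2', credentials=creds)
    # 'my-project', 'stress_test', 'summary' are hypothetical names.
    row = make_row('run1_pod1_SUCCESS',
                   {'run_id': 'run1', 'event_type': 'SUCCESS'})
    print insert_rows(bq, 'my-project', 'stress_test', 'summary', [row])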
mmm a / lib / SILPasses / ConstantPropagation . cpp <nl> ppp b / lib / SILPasses / ConstantPropagation . cpp <nl> static SILInstruction * constantFoldInstruction ( SILInstruction & I , <nl> } <nl> <nl> static bool CCPFunctionBody ( SILFunction & F , SILModule & M ) { <nl> + DEBUG ( llvm : : errs ( ) < < " * * * ConstPropagation processing : " < < F . getName ( ) <nl> + < < " \ n " ) ; <nl> + <nl> / / Initialize the worklist to all of the instructions ready to process . . . <nl> std : : set < SILInstruction * > WorkList ; <nl> for ( auto & BB : F ) { <nl> static bool CCPFunctionBody ( SILFunction & F , SILModule & M ) { <nl> <nl> / / Try to fold the instruction . <nl> if ( SILInstruction * C = constantFoldInstruction ( * I , M ) ) { <nl> - / / We were able to fold , so all users should use the new folded value . <nl> - assert ( I - > getTypes ( ) . size ( ) = = 1 & & <nl> - " Currently , we only support single result instructions . " ) ; <nl> - SILValue ( I ) . replaceAllUsesWith ( C ) ; <nl> - <nl> / / The users could be constant propagatable now . <nl> for ( auto UseI = I - > use_begin ( ) , <nl> UseE = I - > use_end ( ) ; UseI ! = UseE ; + + UseI ) { <nl> WorkList . insert ( cast < SILInstruction > ( UseI . getUser ( ) ) ) ; <nl> } <nl> <nl> + / / We were able to fold , so all users should use the new folded value . <nl> + assert ( I - > getTypes ( ) . size ( ) = = 1 & & <nl> + " Currently , we only support single result instructions . " ) ; <nl> + SILValue ( I ) . replaceAllUsesWith ( C ) ; <nl> + <nl> / / Remove the unused instruction . <nl> WorkList . erase ( I ) ; <nl> <nl> mmm a / test / SILPasses / constant_propagation . sil <nl> ppp b / test / SILPasses / constant_propagation . sil <nl> bb0 : <nl> / / CHECK - NEXT : } <nl> } <nl> <nl> + sil @ testChainingCCP : $ [ thin ] ( ) - > Builtin . Int1 { <nl> + bb0 : <nl> + % 1 = builtin_function_ref # Builtin . trunc_Int64_Int1 : $ [ thin ] Builtin . Int64 - > Builtin . Int1 <nl> + % 2 = integer_literal $ Builtin . Int64 , 0 <nl> + % 3 = struct $ Int64 ( % 2 : $ Builtin . Int64 ) <nl> + % 4 = struct_extract % 3 : $ Int64 , # value <nl> + % 5 = apply % 1 ( % 4 ) : $ [ thin ] Builtin . Int64 - > Builtin . Int1 <nl> + return % 5 : $ Builtin . Int1 <nl> + <nl> + / / CHECK - LABEL : sil @ testChainingCCP <nl> + / / CHECK : bb0 : <nl> + / / CHECK - NEXT : % 0 = integer_literal $ Builtin . Int1 , 0 <nl> + / / CHECK - NEXT : return % 0 : $ Builtin . Int1 <nl> + / / CHECK - NEXT : } <nl> + } <nl> + <nl>
[SIL: CCP] Fix a bug that was preventing chaining constant propagation.
apple/swift
7a1233be7447feff43979a1ccc2b3eeea149c308
2013-09-19T23:41:02Z
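The bug fixed here is an ordering hazard common to worklist passes: replaceAllUsesWith rewires, and thereby empties, the folded instruction's use list, so users must be enqueued before that call or chained folds never re-enter the worklist. A toy Python sketch of the same worklist shape (an illustration of the hazard, not the SIL implementation):

class Inst(object):
    def __init__(self, name, operands=()):
        self.name = name
        self.operands = list(operands)
        self.uses = []                    # instructions consuming this value
        for op in self.operands:
            op.uses.append(self)

    def replace_all_uses_with(self, new_val):
        # Rewires every user to new_val and empties self.uses -- which is
        # exactly why users must be collected *before* this call.
        for user in list(self.uses):
            user.operands = [new_val if op is self else op
                             for op in user.operands]
            new_val.uses.append(user)
        self.uses = []

def propagate(worklist, fold):
    # worklist: a set of Inst; fold(inst) returns a constant Inst or None.
    while worklist:
        inst = worklist.pop()
        folded = fold(inst)
        if folded is None:
            continue
        for user in inst.uses:            # enqueue the users first (the fix)
            worklist.add(user)
        inst.replace_all_uses_with(folded)
        # With the two steps flipped, inst.uses is already empty here and
        # folding stops after a single step instead of chaining.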
mmm a / core / class_db . cpp <nl> ppp b / core / class_db . cpp <nl> void ClassDB : : get_extensions_for_type ( const StringName & p_class , List < String > * p <nl> <nl> while ( ( K = resource_base_extensions . next ( K ) ) ) { <nl> StringName cmp = resource_base_extensions [ * K ] ; <nl> - if ( is_parent_class ( p_class , cmp ) ) <nl> + if ( is_parent_class ( p_class , cmp ) | | is_parent_class ( cmp , p_class ) ) <nl> p_extensions - > push_back ( * K ) ; <nl> } <nl> } <nl> mmm a / editor / property_editor . cpp <nl> ppp b / editor / property_editor . cpp <nl> void CustomPropertyEditor : : _menu_option ( int p_which ) { <nl> <nl> Set < String > valid_extensions ; <nl> for ( List < String > : : Element * E = extensions . front ( ) ; E ; E = E - > next ( ) ) { <nl> - print_line ( " found : " + E - > get ( ) ) ; <nl> valid_extensions . insert ( E - > get ( ) ) ; <nl> } <nl> <nl>
Fix recognition of resource extensions.
godotengine/godot
c530d8f43cdeba26aeb87e61e66ab24e9ade9121
2017-04-26T21:07:23Z
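The one-line change makes the extension query symmetric: a lookup for a class now also returns extensions registered against classes derived from it, not only against its ancestors. A small sketch of the predicate over a toy inheritance map (class and extension names are illustrative, not Godot's real registry):

PARENTS = {'ImageTexture': 'Texture', 'Texture': 'Resource'}

def derives_from(cls, ancestor):
    # True if cls is ancestor or inherits from it, walking the parent chain.
    while cls is not None:
        if cls == ancestor:
            return True
        cls = PARENTS.get(cls)
    return False

def extensions_for_type(queried, registered):
    # registered maps a file extension to the class it was registered under.
    # Checking both directions mirrors the one-line change in the commit.
    return sorted(ext for ext, cls in registered.items()
                  if derives_from(queried, cls) or derives_from(cls, queried))

print extensions_for_type('Texture', {'png': 'ImageTexture', 'res': 'Resource'})
# ['png', 'res'] -- 'png' matches only via the newly added direction.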
mmm a / include / reporters / catch_reporter_teamcity . hpp <nl> ppp b / include / reporters / catch_reporter_teamcity . hpp <nl> namespace Catch { <nl> } <nl> <nl> virtual void testCaseStarting ( TestCaseInfo const & testInfo ) CATCH_OVERRIDE { <nl> + testTimer . start ( ) ; <nl> StreamingReporterBase : : testCaseStarting ( testInfo ) ; <nl> stream < < " # # teamcity [ testStarted name = ' " <nl> < < escape ( testInfo . name ) < < " ' ] \ n " ; <nl> namespace Catch { <nl> < < escape ( testCaseStats . testInfo . name ) <nl> < < " ' out = ' " < < escape ( testCaseStats . stdErr ) < < " ' ] \ n " ; <nl> stream < < " # # teamcity [ testFinished name = ' " <nl> - < < escape ( testCaseStats . testInfo . name ) < < " ' ] \ n " ; <nl> + < < escape ( testCaseStats . testInfo . name ) < < " ' duration = ' " <nl> + < < testTimer . getElapsedMilliseconds ( ) < < " ' ] \ n " ; <nl> } <nl> <nl> private : <nl> namespace Catch { <nl> } <nl> private : <nl> bool m_headerPrintedForThisSection ; <nl> - <nl> + Timer testTimer ; <nl> } ; <nl> <nl> # ifdef CATCH_IMPL <nl>
teamcity reporter should time durations explicitly
catchorg/Catch2
3afd077b554779607ed8ccbf34c365eb73ba460d
2017-03-06T15:35:03Z
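TeamCity's testFinished service message takes an optional duration attribute in milliseconds; when it is absent the server estimates durations itself, which is what this patch replaces by starting a Timer in testCaseStarting. A Python sketch of the same service-message handshake around a toy test callable (not Catch's reporter interface):

import time

def escape(s):
    # TeamCity service messages escape '|' first, then the other specials.
    for ch, rep in (('|', '||'), ("'", "|'"), ('\n', '|n'), ('\r', '|r'),
                    ('[', '|['), (']', '|]')):
        s = s.replace(ch, rep)
    return s

def run_test(name, fn):
    print "##teamcity[testStarted name='%s']" % escape(name)
    start = time.time()
    try:
        fn()
    except Exception as e:
        print "##teamcity[testFailed name='%s' message='%s']" % (
            escape(name), escape(str(e)))
    finally:
        elapsed_ms = int((time.time() - start) * 1000)
        print "##teamcity[testFinished name='%s' duration='%d']" % (
            escape(name), elapsed_ms)

run_test('vector resize', lambda: time.sleep(0.05))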
mmm a / xbmc / VideoDatabase . cpp <nl> ppp b / xbmc / VideoDatabase . cpp <nl> void CVideoDatabase : : CleanDatabase ( IVideoInfoScannerObserver * pObserver , const v <nl> CLog : : Log ( LOGDEBUG , " % s Cleaning tvshow table " , __FUNCTION__ ) ; <nl> sql = " delete from tvshow where idShow not in ( select idShow from tvshowlinkpath ) " ; <nl> m_pDS - > exec ( sql . c_str ( ) ) ; <nl> - sql = " DELETE tvshow . * FROM tvshow " <nl> - " JOIN tvshowlinkpath ON tvshow . idShow = tvshowlinkpath . idShow " <nl> - " JOIN path ON path . idPath = tvshowlinkpath . idPath " <nl> - " WHERE " <nl> - " tvshow . idShow NOT IN ( SELECT idShow from tvshowlinkepisode ) AND " <nl> - " path . strContent = ' ' " ; <nl> - m_pDS - > exec ( sql . c_str ( ) ) ; <nl> + <nl> + CStdString showsToDelete ; <nl> + sql = " select tvshow . idShow from tvshow " <nl> + " join tvshowlinkpath on tvshow . idShow = tvshowlinkpath . idShow " <nl> + " join path on path . idPath = tvshowlinkpath . idPath " <nl> + " where tvshow . idShow not in ( select idShow from tvshowlinkepisode ) " <nl> + " and path . strContent = = ' ' " ; <nl> + m_pDS - > query ( sql . c_str ( ) ) ; <nl> + while ( ! m_pDS - > eof ( ) ) <nl> + { <nl> + showsToDelete + = m_pDS - > fv ( 0 ) . get_asString ( ) + " , " ; <nl> + m_pDS - > next ( ) ; <nl> + } <nl> + m_pDS - > close ( ) ; <nl> + if ( ! showsToDelete . IsEmpty ( ) ) <nl> + { <nl> + sql = " delete from tvshow where idShow in ( " + showsToDelete . TrimRight ( " , " ) + " ) " ; <nl> + m_pDS - > exec ( sql . c_str ( ) ) ; <nl> + } <nl> <nl> CLog : : Log ( LOGDEBUG , " % s Cleaning actorlinktvshow table " , __FUNCTION__ ) ; <nl> sql = " delete from actorlinktvshow where idShow not in ( select idShow from tvshow ) " ; <nl>
fixed: Clean video database was broken in sqlite
xbmc/xbmc
77d40bb55c75e530522fdceb462aa2404f711821
2010-08-23T06:23:35Z
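The broken statement used MySQL's multi-table DELETE ... JOIN syntax, which SQLite rejects outright; the rewrite gathers the doomed ids with a plain SELECT and deletes with WHERE ... IN, at the cost of building the id list in memory. The same select-then-delete shape in Python's sqlite3, against a throwaway schema that simplifies the video database (the tvshowlinkepisode condition is omitted for brevity):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE tvshow (idShow INTEGER PRIMARY KEY);
    CREATE TABLE tvshowlinkpath (idShow INTEGER, idPath INTEGER);
    CREATE TABLE path (idPath INTEGER PRIMARY KEY, strContent TEXT);
    INSERT INTO tvshow VALUES (1), (2);
    INSERT INTO tvshowlinkpath VALUES (1, 10), (2, 20);
    INSERT INTO path VALUES (10, ''), (20, 'tvshows');
''')

# SQLite has no DELETE ... JOIN, so collect the ids to delete first.
ids = [str(r[0]) for r in conn.execute(
    "SELECT tvshow.idShow FROM tvshow "
    "JOIN tvshowlinkpath ON tvshow.idShow = tvshowlinkpath.idShow "
    "JOIN path ON path.idPath = tvshowlinkpath.idPath "
    "WHERE path.strContent = ''")]
if ids:
    conn.execute("DELETE FROM tvshow WHERE idShow IN (%s)" % ','.join(ids))

print conn.execute("SELECT idShow FROM tvshow").fetchall()   # [(2,)]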
mmm a / cmake / modules / AddSwift . cmake <nl> ppp b / cmake / modules / AddSwift . cmake <nl> function ( _add_host_variant_link_flags target ) <nl> endif ( ) <nl> endfunction ( ) <nl> <nl> - # Add a single variant of a new Swift library . <nl> + # Add a new Swift host library . <nl> # <nl> # Usage : <nl> - # _add_swift_host_library_single ( <nl> - # target <nl> + # add_swift_host_library ( name <nl> # [ SHARED ] <nl> # [ STATIC ] <nl> # [ LLVM_LINK_COMPONENTS comp1 . . . ] <nl> # source1 [ source2 source3 . . . ] ) <nl> # <nl> - # target <nl> + # name <nl> # Name of the library ( e . g . , swiftParse ) . <nl> # <nl> # SHARED <nl> endfunction ( ) <nl> # LLVM components this library depends on . <nl> # <nl> # source1 . . . <nl> - # Sources to add into this library <nl> - function ( _add_swift_host_library_single target ) <nl> + # Sources to add into this library . <nl> + function ( add_swift_host_library name ) <nl> set ( options <nl> SHARED <nl> STATIC ) <nl> function ( _add_swift_host_library_single target ) <nl> set ( multiple_parameter_options <nl> LLVM_LINK_COMPONENTS ) <nl> <nl> - cmake_parse_arguments ( ASHLS <nl> + cmake_parse_arguments ( ASHL <nl> " $ { options } " <nl> " $ { single_parameter_options } " <nl> " $ { multiple_parameter_options } " <nl> $ { ARGN } ) <nl> - set ( ASHLS_SOURCES $ { ASHLS_UNPARSED_ARGUMENTS } ) <nl> + set ( ASHL_SOURCES $ { ASHL_UNPARSED_ARGUMENTS } ) <nl> <nl> - translate_flags ( ASHLS " $ { options } " ) <nl> + translate_flags ( ASHL " $ { options } " ) <nl> <nl> - if ( NOT ASHLS_SHARED AND NOT ASHLS_STATIC ) <nl> + if ( NOT ASHL_SHARED AND NOT ASHL_STATIC ) <nl> message ( FATAL_ERROR " Either SHARED or STATIC must be specified " ) <nl> endif ( ) <nl> <nl> - # Include LLVM Bitcode slices for iOS , Watch OS , and Apple TV OS device libraries . <nl> - set ( embed_bitcode_arg ) <nl> - if ( SWIFT_EMBED_BITCODE_SECTION ) <nl> - if ( SWIFT_HOST_VARIANT_SDK MATCHES " ( I | TV | WATCH ) OS " ) <nl> - list ( APPEND ASHLS_C_COMPILE_FLAGS " - fembed - bitcode " ) <nl> - set ( embed_bitcode_arg EMBED_BITCODE ) <nl> - endif ( ) <nl> - endif ( ) <nl> - <nl> if ( XCODE ) <nl> - string ( REGEX MATCHALL " / [ ^ / ] + " split_path $ { CMAKE_CURRENT_SOURCE_DIR } ) <nl> - list ( GET split_path - 1 dir ) <nl> - file ( GLOB_RECURSE ASHLS_HEADERS <nl> + get_filename_component ( dir $ { CMAKE_CURRENT_SOURCE_DIR } DIRECTORY ) <nl> + <nl> + file ( GLOB_RECURSE ASHL_HEADERS <nl> $ { SWIFT_SOURCE_DIR } / include / swift $ { dir } / * . h <nl> $ { SWIFT_SOURCE_DIR } / include / swift $ { dir } / * . def <nl> $ { CMAKE_CURRENT_SOURCE_DIR } / * . def ) <nl> - <nl> - file ( GLOB_RECURSE ASHLS_TDS <nl> + file ( GLOB_RECURSE ASHL_TDS <nl> $ { SWIFT_SOURCE_DIR } / include / swift $ { dir } / * . 
td ) <nl> <nl> - set_source_files_properties ( $ { ASHLS_HEADERS } $ { ASHLS_TDS } <nl> - PROPERTIES <nl> + set_source_files_properties ( $ { ASHL_HEADERS } $ { ASHL_TDS } PROPERTIES <nl> HEADER_FILE_ONLY true ) <nl> - source_group ( " TableGen descriptions " FILES $ { ASHLS_TDS } ) <nl> + source_group ( " TableGen descriptions " FILES $ { ASHL_TDS } ) <nl> <nl> - set ( ASHLS_SOURCES $ { ASHLS_SOURCES } $ { ASHLS_HEADERS } $ { ASHLS_TDS } ) <nl> + set ( ASHL_SOURCES $ { ASHL_SOURCES } $ { ASHL_HEADERS } $ { ASHL_TDS } ) <nl> endif ( ) <nl> <nl> - if ( ASHLS_SHARED ) <nl> + if ( ASHL_SHARED ) <nl> set ( libkind SHARED ) <nl> - elseif ( ASHLS_STATIC ) <nl> + elseif ( ASHL_STATIC ) <nl> set ( libkind STATIC ) <nl> endif ( ) <nl> <nl> - add_library ( " $ { target } " $ { libkind } $ { ASHLS_SOURCES } ) <nl> - _set_target_prefix_and_suffix ( " $ { target } " " $ { libkind } " " $ { SWIFT_HOST_VARIANT_SDK } " ) <nl> - add_dependencies ( $ { target } $ { LLVM_COMMON_DEPENDS } ) <nl> - <nl> - if ( SWIFT_HOST_VARIANT_SDK STREQUAL WINDOWS ) <nl> - swift_windows_include_for_arch ( $ { SWIFT_HOST_VARIANT_ARCH } SWIFTLIB_INCLUDE ) <nl> - target_include_directories ( " $ { target } " SYSTEM PRIVATE $ { SWIFTLIB_INCLUDE } ) <nl> - set_target_properties ( $ { target } <nl> - PROPERTIES <nl> - CXX_STANDARD 14 ) <nl> - endif ( ) <nl> - <nl> - if ( SWIFT_HOST_VARIANT_SDK STREQUAL WINDOWS ) <nl> - set_property ( TARGET " $ { target } " PROPERTY NO_SONAME ON ) <nl> - endif ( ) <nl> - <nl> - llvm_update_compile_flags ( $ { target } ) <nl> - <nl> - set_output_directory ( $ { target } <nl> + add_library ( $ { name } $ { libkind } $ { ASHL_SOURCES } ) <nl> + add_dependencies ( $ { name } $ { LLVM_COMMON_DEPENDS } ) <nl> + llvm_update_compile_flags ( $ { name } ) <nl> + swift_common_llvm_config ( $ { name } $ { ASHL_LLVM_LINK_COMPONENTS } ) <nl> + set_output_directory ( $ { name } <nl> BINARY_DIR $ { SWIFT_RUNTIME_OUTPUT_INTDIR } <nl> LIBRARY_DIR $ { SWIFT_LIBRARY_OUTPUT_INTDIR } ) <nl> <nl> if ( SWIFT_HOST_VARIANT_SDK IN_LIST SWIFT_APPLE_PLATFORMS ) <nl> - set_target_properties ( " $ { target } " <nl> + set_target_properties ( $ { name } <nl> PROPERTIES <nl> INSTALL_NAME_DIR " @ rpath " ) <nl> elseif ( SWIFT_HOST_VARIANT_SDK STREQUAL LINUX ) <nl> - set_target_properties ( " $ { target } " <nl> + set_target_properties ( $ { name } <nl> PROPERTIES <nl> INSTALL_RPATH " $ ORIGIN : / usr / lib / swift / linux " ) <nl> elseif ( SWIFT_HOST_VARIANT_SDK STREQUAL CYGWIN ) <nl> - set_target_properties ( " $ { target } " <nl> + set_target_properties ( $ { name } <nl> PROPERTIES <nl> INSTALL_RPATH " $ ORIGIN : / usr / lib / swift / cygwin " ) <nl> elseif ( SWIFT_HOST_VARIANT_SDK STREQUAL " ANDROID " ) <nl> - set_target_properties ( " $ { target } " <nl> + set_target_properties ( $ { name } <nl> PROPERTIES <nl> INSTALL_RPATH " $ ORIGIN " ) <nl> endif ( ) <nl> <nl> - set_target_properties ( " $ { target } " PROPERTIES BUILD_WITH_INSTALL_RPATH YES ) <nl> - set_target_properties ( " $ { target } " PROPERTIES FOLDER " Swift libraries " ) <nl> - <nl> - # Call llvm_config ( ) only for libraries that are part of the compiler . 
<nl> - swift_common_llvm_config ( " $ { target } " $ { ASHLS_LLVM_LINK_COMPONENTS } ) <nl> + set_target_properties ( $ { name } PROPERTIES <nl> + BUILD_WITH_INSTALL_RPATH YES <nl> + FOLDER " Swift libraries " ) <nl> <nl> - target_compile_options ( $ { target } PRIVATE <nl> - $ { ASHLS_C_COMPILE_FLAGS } ) <nl> - if ( SWIFT_HOST_VARIANT_SDK STREQUAL WINDOWS ) <nl> - if ( libkind STREQUAL SHARED ) <nl> - target_compile_definitions ( $ { target } PRIVATE <nl> - _WINDLL ) <nl> - endif ( ) <nl> - endif ( ) <nl> - <nl> - _add_host_variant_c_compile_flags ( $ { target } ) <nl> - _add_host_variant_link_flags ( $ { target } ) <nl> + _add_host_variant_c_compile_flags ( $ { name } ) <nl> + _add_host_variant_link_flags ( $ { name } ) <nl> + _set_target_prefix_and_suffix ( $ { name } " $ { libkind } " " $ { SWIFT_HOST_VARIANT_SDK } " ) <nl> <nl> # Set compilation and link flags . <nl> if ( SWIFT_HOST_VARIANT_SDK STREQUAL WINDOWS ) <nl> swift_windows_include_for_arch ( $ { SWIFT_HOST_VARIANT_ARCH } <nl> $ { SWIFT_HOST_VARIANT_ARCH } _INCLUDE ) <nl> - target_include_directories ( $ { target } SYSTEM PRIVATE <nl> + target_include_directories ( $ { name } SYSTEM PRIVATE <nl> $ { $ { SWIFT_HOST_VARIANT_ARCH } _INCLUDE } ) <nl> <nl> + if ( libkind STREQUAL SHARED ) <nl> + target_compile_definitions ( $ { name } PRIVATE <nl> + _WINDLL ) <nl> + endif ( ) <nl> + <nl> if ( NOT $ { CMAKE_C_COMPILER_ID } STREQUAL MSVC ) <nl> - swift_windows_get_sdk_vfs_overlay ( ASHLS_VFS_OVERLAY ) <nl> - target_compile_options ( $ { target } PRIVATE <nl> - " SHELL : - Xclang - ivfsoverlay - Xclang $ { ASHLS_VFS_OVERLAY } " ) <nl> + swift_windows_get_sdk_vfs_overlay ( ASHL_VFS_OVERLAY ) <nl> + target_compile_options ( $ { name } PRIVATE <nl> + " SHELL : - Xclang - ivfsoverlay - Xclang $ { ASHL_VFS_OVERLAY } " ) <nl> <nl> # MSVC doesn ' t support - Xclang . We don ' t need to manually specify <nl> # the dependent libraries as ` cl ` does so . <nl> - target_compile_options ( $ { target } PRIVATE <nl> + target_compile_options ( $ { name } PRIVATE <nl> " SHELL : - Xclang - - dependent - lib = oldnames " <nl> # TODO ( compnerd ) handle / MT , / MTd <nl> " SHELL : - Xclang - - dependent - lib = msvcrt $ < $ < CONFIG : Debug > : d > " ) <nl> endif ( ) <nl> + <nl> + set_target_properties ( $ { name } PROPERTIES <nl> + CXX_STANDARD 14 <nl> + NO_SONAME YES ) <nl> endif ( ) <nl> <nl> if ( $ { SWIFT_HOST_VARIANT_SDK } IN_LIST SWIFT_APPLE_PLATFORMS ) <nl> - target_link_options ( $ { target } PRIVATE <nl> - " LINKER : - compatibility_version , 1 " ) <nl> - if ( SWIFT_COMPILER_VERSION ) <nl> - target_link_options ( $ { target } PRIVATE <nl> - " LINKER : - current_version , $ { SWIFT_COMPILER_VERSION } " ) <nl> - endif ( ) <nl> # Include LLVM Bitcode slices for iOS , Watch OS , and Apple TV OS device libraries . <nl> if ( SWIFT_EMBED_BITCODE_SECTION ) <nl> - if ( $ { SWIFT_HOST_VARIANT_SDK } MATCHES " ( I | TV | WATCH ) OS " ) <nl> - target_link_options ( $ { target } PRIVATE <nl> - " LINKER : - bitcode_bundle " <nl> - " LINKER : - lto_library , $ { LLVM_LIBRARY_DIR } / libLTO . dylib " ) <nl> - <nl> - # Please note that using a generator expression to fit <nl> - # this in a single target_link_options does not work <nl> - # ( at least in CMake 3 . 15 and 3 . 16 ) , <nl> - # since that seems not to allow the LINKER : prefix to be <nl> - # evaluated ( i . e . 
it will be added as - is to the linker parameters ) <nl> - if ( SWIFT_EMBED_BITCODE_SECTION_HIDE_SYMBOLS ) <nl> - target_link_options ( $ { target } PRIVATE <nl> - " LINKER : - bitcode_hide_symbols " ) <nl> - endif ( ) <nl> + target_compile_options ( $ { name } PRIVATE <nl> + - fembed - bitcode ) <nl> + target_link_options ( $ { name } PRIVATE <nl> + " LINKER : - bitcode_bundle " <nl> + " LINKER : - lto_library , $ { LLVM_LIBRARY_DIR } / libLTO . dylib " ) <nl> + <nl> + # Please note that using a generator expression to fit this in a single <nl> + # target_link_options does not work ( at least in CMake 3 . 15 and 3 . 16 ) , <nl> + # since that seems not to allow the LINKER : prefix to be evaluated ( i . e . <nl> + # it will be added as - is to the linker parameters ) <nl> + if ( SWIFT_EMBED_BITCODE_SECTION_HIDE_SYMBOLS ) <nl> + target_link_options ( $ { name } PRIVATE <nl> + " LINKER : - bitcode_hide_symbols " ) <nl> endif ( ) <nl> endif ( ) <nl> - endif ( ) <nl> - <nl> - # Do not add code here . <nl> - endfunction ( ) <nl> - <nl> - # Add a new Swift host library . <nl> - # <nl> - # Usage : <nl> - # add_swift_host_library ( name <nl> - # [ SHARED ] <nl> - # [ STATIC ] <nl> - # [ LLVM_LINK_COMPONENTS comp1 . . . ] <nl> - # source1 [ source2 source3 . . . ] ) <nl> - # <nl> - # name <nl> - # Name of the library ( e . g . , swiftParse ) . <nl> - # <nl> - # SHARED <nl> - # Build a shared library . <nl> - # <nl> - # STATIC <nl> - # Build a static library . <nl> - # <nl> - # LLVM_LINK_COMPONENTS <nl> - # LLVM components this library depends on . <nl> - # <nl> - # source1 . . . <nl> - # Sources to add into this library . <nl> - function ( add_swift_host_library name ) <nl> - set ( options <nl> - SHARED <nl> - STATIC ) <nl> - set ( single_parameter_options ) <nl> - set ( multiple_parameter_options <nl> - LLVM_LINK_COMPONENTS ) <nl> - <nl> - cmake_parse_arguments ( ASHL <nl> - " $ { options } " <nl> - " $ { single_parameter_options } " <nl> - " $ { multiple_parameter_options } " <nl> - $ { ARGN } ) <nl> - set ( ASHL_SOURCES $ { ASHL_UNPARSED_ARGUMENTS } ) <nl> <nl> - translate_flags ( ASHL " $ { options } " ) <nl> - <nl> - if ( NOT ASHL_SHARED AND NOT ASHL_STATIC ) <nl> - message ( FATAL_ERROR " Either SHARED or STATIC must be specified " ) <nl> + target_link_options ( $ { name } PRIVATE <nl> + " LINKER : - compatibility_version , 1 " ) <nl> + if ( SWIFT_COMPILER_VERSION ) <nl> + target_link_options ( $ { name } PRIVATE <nl> + " LINKER : - current_version , $ { SWIFT_COMPILER_VERSION } " ) <nl> + endif ( ) <nl> endif ( ) <nl> <nl> - _add_swift_host_library_single ( <nl> - $ { name } <nl> - $ { ASHL_SHARED_keyword } <nl> - $ { ASHL_STATIC_keyword } <nl> - $ { ASHL_SOURCES } <nl> - LLVM_LINK_COMPONENTS $ { ASHL_LLVM_LINK_COMPONENTS } <nl> - ) <nl> - <nl> add_dependencies ( dev $ { name } ) <nl> if ( NOT LLVM_INSTALL_TOOLCHAIN_ONLY ) <nl> swift_install_in_component ( TARGETS $ { name } <nl>
Merge pull request from compnerd / host - library - handling
apple/swift
32a189c2816ceece8ac3bf0dd51912d0395e7c19
2020-05-12T19:23:38Z
mmm a / g3doc / DeveloperGuide . md <nl> ppp b / g3doc / DeveloperGuide . md <nl> methods ) . <nl> <nl> # # Style guide <nl> <nl> - MLIR follows the [ LLVM style ] ( https : / / llvm . org / docs / CodingStandards . html ) guide <nl> - except : <nl> + MLIR follows the [ LLVM style ] ( https : / / llvm . org / docs / CodingStandards . html ) guide . <nl> + We also adhere to the following conventions , which deviate from , or aren ' t <nl> + specified in , the LLVM style guide : <nl> <nl> * Adopts [ camelBack ] ( https : / / llvm . org / docs / Proposals / VariableNames . html ) ; <nl> + * Except for IR units ( Region , Block , and Operation ) , non - nullable output <nl> + arguments are passed by non - const reference in general . <nl> <nl> # # Pass name and other command line options <nl> <nl>
Document that non - IR units are passed by non - const reference instead of by pointer in general
tensorflow/tensorflow
57ebdf105583cb39bd8d29b83dc50dd0cf50868e
2019-08-31T22:18:11Z
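A minimal C++ sketch of the two conventions the amended guide documents — camelBack naming, and non-nullable output arguments passed by non-const reference, with IR units still passed as pointers. Everything here (the sketch namespace, the Operation stand-in, the function names) is illustrative and not code from the MLIR tree:

    #include <string>
    #include <vector>

    namespace sketch {

    struct Operation;  // stand-in for an MLIR IR unit

    // camelBack naming; the non-nullable output argument is a
    // non-const reference rather than a pointer.
    void collectSymbolNames(const std::vector<std::string> &inputNames,
                            std::vector<std::string> &symbolNames) {
      symbolNames = inputNames;
    }

    // IR units (Region, Block, Operation) are the exception: still pointers.
    void visitOperation(Operation *op) { (void)op; }

    }  // namespace sketch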
mmm a / dlib / image_keypoint / hog_abstract . h <nl> ppp b / dlib / image_keypoint / hog_abstract . h <nl> namespace dlib <nl> ensures <nl> - Each local feature is extracted from a certain point in the input image . <nl> This function returns the identity of the local feature corresponding <nl> - to any particular image location p . Or in other words , <nl> - let P = = image_to_feat_space ( p ) , then ( * this ) ( P . y ( ) , P . x ( ) ) = = the local <nl> - feature closest to , or centered at , the point p in the input image . Note <nl> - that some image points might not have corresponding feature locations . <nl> - E . g . border points or points outside the image . In these cases the returned <nl> - point will be outside get_rect ( * this ) . <nl> + to the image location p . Or in other words , let P = = image_to_feat_space ( p ) , <nl> + then ( * this ) ( P . y ( ) , P . x ( ) ) = = the local feature closest to , or centered at , <nl> + the point p in the input image . Note that some image points might not have <nl> + corresponding feature locations . E . g . border points or points outside the <nl> + image . In these cases the returned point will be outside get_rect ( * this ) . <nl> ! * / <nl> <nl> const rectangle image_to_feat_space ( <nl>
Clarified the image_to_feat_space spec .
davisking/dlib
7a8dcf2f2fc05037e02da5da0b99174fcab2160c
2011-08-10T02:37:42Z
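A usage sketch of the contract the reworded spec pins down, assuming `fe` is a hog_image-style dlib feature extractor already loaded with an image and `p` is a pixel location in that image; the helper name `lookup_feature` and the guard are illustrative:

    #include <dlib/geometry.h>

    template <typename feature_extractor>
    void lookup_feature(const feature_extractor &fe, const dlib::point &p)
    {
        // Map the image location p into feature space.
        const dlib::point P = fe.image_to_feat_space(p);

        // Border points or points outside the image land outside
        // get_rect(fe), so guard before indexing.
        if (get_rect(fe).contains(P))
        {
            // Per the spec: the local feature closest to, or centered at, p.
            auto feat = fe(P.y(), P.x());
            (void)feat;
        }
    }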
mmm a / lib / SIL / Dominance . cpp <nl> ppp b / lib / SIL / Dominance . cpp <nl> using namespace swift ; <nl> template class llvm : : DominatorTreeBase < SILBasicBlock , false > ; <nl> template class llvm : : DominatorTreeBase < SILBasicBlock , true > ; <nl> template class llvm : : DomTreeNodeBase < SILBasicBlock > ; <nl> + using SILDomTree = llvm : : DomTreeBase < SILBasicBlock > ; <nl> + using SILPostDomTree = llvm : : PostDomTreeBase < SILBasicBlock > ; <nl> + template void <nl> + llvm : : DomTreeBuilder : : Calculate < SILDomTree , swift : : SILFunction > ( <nl> + SILDomTree & DT , swift : : SILFunction & F ) ; <nl> + template void <nl> + llvm : : DomTreeBuilder : : Calculate < SILPostDomTree , swift : : SILFunction > ( <nl> + SILPostDomTree & DT , swift : : SILFunction & F ) ; <nl> <nl> / / / Compute the immediate - dominators map . <nl> DominanceInfo : : DominanceInfo ( SILFunction * F ) <nl> mmm a / lib / SILOptimizer / Transforms / StackPromotion . cpp <nl> ppp b / lib / SILOptimizer / Transforms / StackPromotion . cpp <nl> class StackPromoter { <nl> if ( ! PostDomTreeValid ) { <nl> / / The StackPromoter acts as a " graph " for which the post - dominator - tree <nl> / / is calculated . <nl> - PostDomTree . recalculate ( * this ) ; <nl> + PostDomTree . recalculate ( * F ) ; <nl> PostDomTreeValid = true ; <nl> } <nl> } <nl>
Merge pull request from bob - wilson / master - next - dominators
apple/swift
df7810e3a81f0131d540a6de6f5d65e3c3d32bc2
2017-07-20T06:04:32Z
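The added `template void ... Calculate < ... >` lines in Dominance.cpp are explicit instantiation definitions, and the StackPromotion change simply recalculates the post-dominator tree over the SILFunction `*F` rather than the promoter object. A generic, self-contained illustration of the instantiation mechanism (stand-in types, not the LLVM API):

    // Explicit instantiation definition: emits this specialization in the
    // current translation unit so other files can link against it without
    // seeing the template body.
    template <typename DomTreeT, typename FuncT>
    void calculate(DomTreeT &, FuncT &) { /* ... build the tree ... */ }

    struct DomTreeStandIn {};
    struct FunctionStandIn {};

    template void calculate<DomTreeStandIn, FunctionStandIn>(DomTreeStandIn &,
                                                             FunctionStandIn &);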
mmm a / hphp / hack / src / parser / full_fidelity_type_parser . ml <nl> ppp b / hphp / hack / src / parser / full_fidelity_type_parser . ml <nl> and parse_simple_type_or_type_constant parser = <nl> let token = peek_token parser in <nl> match Token . kind token with <nl> | ColonColon - > parse_remaining_type_constant parser ( make_token name ) <nl> - | Self | Parent - > <nl> - begin <nl> - match peek_token_kind ~ lookahead : 1 parser with <nl> - | ColonColon - > parse_remaining_type_constant parser ( make_token name ) <nl> - | _ - > <nl> - ( parser , make_type_constant <nl> - ( make_token token ) <nl> - ( make_missing ( ) ) <nl> - ( make_missing ( ) ) ) <nl> - end <nl> | _ - > ( parser , make_simple_type_specifier ( make_token name ) ) <nl> <nl> and parse_simple_type_or_type_constant_or_generic parser = <nl>
properly handle type constants that include parent / self
facebook/hhvm
cb6b6c14110b31e83b82feaef1b8fd8702369916
2017-09-05T21:19:49Z
mmm a / libraries / chain / include / eos / chain / types . hpp <nl> ppp b / libraries / chain / include / eos / chain / types . hpp <nl> namespace eos { namespace chain { <nl> chain_property_object_type , <nl> account_transaction_history_object_type , / / / < Defined by account_history_plugin <nl> transaction_history_object_type , / / / < Defined by account_history_plugin <nl> + public_key_history_object_type , / / / < Defined by account_history_plugin <nl> balance_object_type , / / / < Defined by native_contract library <nl> staked_balance_object_type , / / / < Defined by native_contract library <nl> producer_votes_object_type , / / / < Defined by native_contract library <nl> FC_REFLECT_ENUM ( eos : : chain : : object_type , <nl> ( chain_property_object_type ) <nl> ( account_transaction_history_object_type ) <nl> ( transaction_history_object_type ) <nl> + ( public_key_history_object_type ) <nl> ( balance_object_type ) <nl> ( staked_balance_object_type ) <nl> ( producer_votes_object_type ) <nl> mmm a / plugins / account_history_api_plugin / account_history_api_plugin . cpp <nl> ppp b / plugins / account_history_api_plugin / account_history_api_plugin . cpp <nl> void account_history_api_plugin : : plugin_startup ( ) { <nl> <nl> app ( ) . get_plugin < http_plugin > ( ) . add_api ( { <nl> CHAIN_RO_CALL ( get_transaction ) , <nl> - CHAIN_RO_CALL ( get_transactions ) <nl> + CHAIN_RO_CALL ( get_transactions ) , <nl> + CHAIN_RO_CALL ( get_key_accounts ) <nl> } ) ; <nl> } <nl> <nl> mmm a / plugins / account_history_plugin / account_history_plugin . cpp <nl> ppp b / plugins / account_history_plugin / account_history_plugin . cpp <nl> <nl> # include < eos / account_history_plugin / account_history_plugin . hpp > <nl> # include < eos / account_history_plugin / account_transaction_history_object . hpp > <nl> + # include < eos / account_history_plugin / public_key_history_object . hpp > <nl> # include < eos / account_history_plugin / transaction_history_object . hpp > <nl> # include < eos / chain / chain_controller . hpp > <nl> # include < eos / chain / config . 
hpp > <nl> class account_history_plugin_impl { <nl> public : <nl> ProcessedTransaction get_transaction ( const chain : : transaction_id_type & transaction_id ) const ; <nl> get_transactions_results get_transactions ( const AccountName & account_name , const optional < uint32_t > & skip_seq , const optional < uint32_t > & num_seq ) const ; <nl> + get_transactions_results get_transactions ( const AccountName & account_name , const optional < uint32_t > & start_seq , const optional < uint32_t > & stop_seq ) const ; <nl> + vector < AccountName > get_key_accounts ( const public_key_type & public_key ) const ; <nl> void applied_block ( const signed_block & ) ; <nl> <nl> chain_plugin * chain_plug ; <nl> class account_history_plugin_impl { <nl> <nl> optional < block_id_type > find_block_id ( const chainbase : : database & db , const transaction_id_type & transaction_id ) const ; <nl> ProcessedTransaction find_transaction ( const chain : : transaction_id_type & transaction_id , const signed_block & block ) const ; <nl> - ProcessedTransaction find_transaction ( const chain : : transaction_id_type & transaction_id , const block_id_type & block_id ) const ; <nl> bool is_scope_relevant ( const eos : : types : : Vector < AccountName > & scope ) ; <nl> get_transactions_results ordered_transactions ( const block_transaction_id_map & block_transaction_ids , const fc : : time_point & start_time , const uint32_t begin , const uint32_t end ) const ; <nl> + static void add ( chainbase : : database & db , const vector < types : : KeyPermissionWeight > & keys , const AccountName & account_name , const PermissionName & permission ) ; <nl> + static void remove ( chainbase : : database & db , const AccountName & account_name , const PermissionName & permission ) ; <nl> bool time_exceeded ( const fc : : time_point & start_time ) const ; <nl> + static const AccountName NEW_ACCOUNT ; <nl> + static const AccountName UPDATE_AUTH ; <nl> + static const AccountName DELETE_AUTH ; <nl> + static const PermissionName OWNER ; <nl> + static const PermissionName ACTIVE ; <nl> + static const PermissionName RECOVERY ; <nl> } ; <nl> const int64_t account_history_plugin_impl : : DEFAULT_TRANSACTION_TIME_LIMIT = 3 ; <nl> + const AccountName account_history_plugin_impl : : NEW_ACCOUNT = " newaccount " ; <nl> + const AccountName account_history_plugin_impl : : UPDATE_AUTH = " updateauth " ; <nl> + const AccountName account_history_plugin_impl : : DELETE_AUTH = " deleteauth " ; <nl> + const PermissionName account_history_plugin_impl : : OWNER = " owner " ; <nl> + const PermissionName account_history_plugin_impl : : ACTIVE = " active " ; <nl> + const PermissionName account_history_plugin_impl : : RECOVERY = " recovery " ; <nl> <nl> optional < block_id_type > account_history_plugin_impl : : find_block_id ( const chainbase : : database & db , const transaction_id_type & transaction_id ) const <nl> { <nl> ProcessedTransaction account_history_plugin_impl : : find_transaction ( const chain : : <nl> FC_THROW ( " Transaction with ID $ { tid } was indexed as being in block ID $ { bid } , but was not found in that block " , ( " tid " , transaction_id ) ( " bid " , block . id ( ) ) ) ; <nl> } <nl> <nl> - ProcessedTransaction account_history_plugin_impl : : find_transaction ( const chain : : transaction_id_type & transaction_id , const chain : : block_id_type & block_id ) const <nl> - { <nl> - auto block = chain_plug - > chain ( ) . 
fetch_block_by_id ( block_id ) ; <nl> - FC_ASSERT ( block , " Transaction with ID $ { tid } was indexed as being in block ID $ { bid } , but no such block was found " , ( " tid " , transaction_id ) ( " bid " , block_id ) ) ; <nl> - return find_transaction ( transaction_id , * block ) ; <nl> - } <nl> - <nl> ProcessedTransaction account_history_plugin_impl : : get_transaction ( const chain : : transaction_id_type & transaction_id ) const <nl> { <nl> const auto & db = chain_plug - > chain ( ) . get_database ( ) ; <nl> ProcessedTransaction account_history_plugin_impl : : get_transaction ( const chain : : t <nl> } ) ; <nl> if ( block_id . valid ( ) ) <nl> { <nl> - return find_transaction ( transaction_id , * block_id ) ; <nl> + auto block = chain_plug - > chain ( ) . fetch_block_by_id ( * block_id ) ; <nl> + FC_ASSERT ( block , " Transaction with ID $ { tid } was indexed as being in block ID $ { bid } , but no such block was found " , ( " tid " , transaction_id ) ( " bid " , block_id ) ) ; <nl> + return find_transaction ( transaction_id , * block ) ; <nl> } <nl> <nl> # warning TODO : lookup of recent transactions <nl> bool account_history_plugin_impl : : time_exceeded ( const fc : : time_point & start_time <nl> return ( fc : : time_point : : now ( ) - start_time ) . count ( ) > transactions_time_limit ; <nl> } <nl> <nl> + vector < AccountName > account_history_plugin_impl : : get_key_accounts ( const public_key_type & public_key ) const <nl> + { <nl> + std : : set < AccountName > accounts ; <nl> + const auto & db = chain_plug - > chain ( ) . get_database ( ) ; <nl> + db . with_read_lock ( [ & ] ( ) { <nl> + const auto & pub_key_idx = db . get_index < public_key_history_multi_index , by_pub_key > ( ) ; <nl> + auto range = pub_key_idx . equal_range ( public_key ) ; <nl> + for ( auto obj = range . first ; obj ! = range . second ; + + obj ) <nl> + { <nl> + accounts . insert ( obj - > account_name ) ; <nl> + } <nl> + } ) ; <nl> + return vector < AccountName > ( accounts . begin ( ) , accounts . end ( ) ) ; <nl> + } <nl> + <nl> void account_history_plugin_impl : : applied_block ( const signed_block & block ) <nl> { <nl> const auto block_id = block . id ( ) ; <nl> auto & db = chain_plug - > chain ( ) . get_mutable_database ( ) ; <nl> const bool check_relevance = filter_on . size ( ) ; <nl> for ( const auto & cycle : block . cycles ) <nl> + { <nl> for ( const auto & thread : cycle ) <nl> + { <nl> for ( const auto & trx : thread . user_input ) <nl> { <nl> if ( check_relevance & & ! is_scope_relevant ( trx . scope ) ) <nl> void account_history_plugin_impl : : applied_block ( const signed_block & block ) <nl> account_transaction_history . transaction_id = trx . id ( ) ; <nl> } ) ; <nl> } <nl> + <nl> + for ( const chain : : Message & msg : trx . messages ) <nl> + { <nl> + if ( msg . code = = config : : EosContractName ) <nl> + { <nl> + if ( msg . type = = NEW_ACCOUNT ) <nl> + { <nl> + const auto create = msg . as < types : : newaccount > ( ) ; <nl> + auto count = create . owner . keys . size ( ) + create . active . keys . size ( ) + create . recovery . keys . size ( ) ; <nl> + add ( db , create . owner . keys , create . name , OWNER ) ; <nl> + add ( db , create . active . keys , create . name , ACTIVE ) ; <nl> + add ( db , create . recovery . keys , create . name , RECOVERY ) ; <nl> + } <nl> + else if ( msg . type = = UPDATE_AUTH ) <nl> + { <nl> + const auto update = msg . as < types : : updateauth > ( ) ; <nl> + remove ( db , update . account , update . permission ) ; <nl> + add ( db , update . 
authority . keys , update . account , update . permission ) ; <nl> + } <nl> + else if ( msg . type = = DELETE_AUTH ) <nl> + { <nl> + const auto del = msg . as < types : : deleteauth > ( ) ; <nl> + remove ( db , del . account , del . permission ) ; <nl> + } <nl> + } <nl> + } <nl> } <nl> + } <nl> + } <nl> + } <nl> + <nl> + void account_history_plugin_impl : : add ( chainbase : : database & db , const vector < types : : KeyPermissionWeight > & keys , const AccountName & account_name , const PermissionName & permission ) <nl> + { <nl> + for ( auto pub_key_weight : keys ) <nl> + { <nl> + db . create < public_key_history_object > ( [ & ] ( public_key_history_object & obj ) { <nl> + obj . public_key = pub_key_weight . key ; <nl> + obj . account_name = account_name ; <nl> + obj . permission = permission ; <nl> + } ) ; <nl> + } <nl> + } <nl> + <nl> + void account_history_plugin_impl : : remove ( chainbase : : database & db , const AccountName & account_name , const PermissionName & permission ) <nl> + { <nl> + const auto & acct_perm_idx = db . get_index < public_key_history_multi_index , by_account_permission > ( ) ; <nl> + auto & mutatable_acct_perm_idx = db . get_mutable_index < public_key_history_multi_index > ( ) ; <nl> + auto range = acct_perm_idx . equal_range ( boost : : make_tuple ( account_name , permission ) ) ; <nl> + <nl> + for ( auto acct_perm = range . first ; acct_perm ! = range . second ; + + acct_perm ) <nl> + { <nl> + mutatable_acct_perm_idx . remove ( * acct_perm ) ; <nl> + } <nl> } <nl> <nl> bool account_history_plugin_impl : : is_scope_relevant ( const eos : : types : : Vector < AccountName > & scope ) <nl> void account_history_plugin : : plugin_startup ( ) <nl> my - > chain_plug = app ( ) . find_plugin < chain_plugin > ( ) ; <nl> auto & db = my - > chain_plug - > chain ( ) . get_mutable_database ( ) ; <nl> db . add_index < account_transaction_history_multi_index > ( ) ; <nl> + db . add_index < public_key_history_multi_index > ( ) ; <nl> db . add_index < transaction_history_multi_index > ( ) ; <nl> <nl> my - > chain_plug - > chain ( ) . applied_block . connect ( [ & impl = my ] ( const signed_block & block ) { <nl> read_only : : get_transactions_results read_only : : get_transactions ( const read_only : <nl> return account_history - > get_transactions ( params . account_name , params . skip_seq , params . num_seq ) ; <nl> } <nl> <nl> + read_only : : get_key_accounts_results read_only : : get_key_accounts ( const get_key_accounts_params & params ) const <nl> + { <nl> + return { account_history - > get_key_accounts ( params . public_key ) } ; <nl> + } <nl> } / / namespace account_history_apis <nl> } / / namespace eos <nl> mmm a / plugins / account_history_plugin / include / eos / account_history_plugin / account_history_plugin . hpp <nl> ppp b / plugins / account_history_plugin / include / eos / account_history_plugin / account_history_plugin . 
hpp <nl> class read_only { <nl> chain : : transaction_id_type transaction_id ; <nl> fc : : variant transaction ; <nl> } ; <nl> - <nl> get_transaction_results get_transaction ( const get_transaction_params & params ) const ; <nl> + <nl> struct get_transactions_params { <nl> chain : : AccountName account_name ; <nl> optional < uint32_t > skip_seq ; <nl> class read_only { <nl> } ; <nl> <nl> get_transactions_results get_transactions ( const get_transactions_params & params ) const ; <nl> + <nl> + struct get_key_accounts_params { <nl> + chain : : public_key_type public_key ; <nl> + } ; <nl> + struct get_key_accounts_results { <nl> + vector < chain : : AccountName > account_names ; <nl> + } ; <nl> + get_key_accounts_results get_key_accounts ( const get_key_accounts_params & params ) const ; <nl> } ; <nl> <nl> class read_write { <nl> FC_REFLECT ( eos : : account_history_apis : : read_only : : get_transaction_results , ( trans <nl> FC_REFLECT ( eos : : account_history_apis : : read_only : : get_transactions_params , ( account_name ) ( skip_seq ) ( num_seq ) ) <nl> FC_REFLECT ( eos : : account_history_apis : : read_only : : ordered_transaction_results , ( seq_num ) ( transaction_id ) ( transaction ) ) <nl> FC_REFLECT ( eos : : account_history_apis : : read_only : : get_transactions_results , ( transactions ) ( time_limit_exceeded_error ) ) <nl> + FC_REFLECT ( eos : : account_history_apis : : read_only : : get_key_accounts_params , ( public_key ) ) <nl> + FC_REFLECT ( eos : : account_history_apis : : read_only : : get_key_accounts_results , ( account_names ) ) <nl> mmm a / programs / eosc / main . cpp <nl> ppp b / programs / eosc / main . cpp <nl> const string get_account_func = chain_func_base + " / get_account " ; <nl> const string account_history_func_base = " / v1 / account_history " ; <nl> const string get_transaction_func = account_history_func_base + " / get_transaction " ; <nl> const string get_transactions_func = account_history_func_base + " / get_transactions " ; <nl> + const string get_key_accounts_func = account_history_func_base + " / get_key_accounts " ; <nl> <nl> inline std : : vector < Name > sort_names ( std : : vector < Name > & & names ) { <nl> std : : sort ( names . begin ( ) , names . end ( ) ) ; <nl> void create_account ( const vector < string > & cmd_line ) { <nl> transaction_helpers : : emplace_message ( trx , config : : EosContractName , vector < types : : AccountPermission > { { creator , " active " } } , " newaccount " , <nl> types : : newaccount { creator , newaccount , owner_auth , <nl> active_auth , recovery_auth , deposit } ) ; <nl> + if ( creator = = " inita " ) <nl> + { <nl> + fc : : optional < fc : : ecc : : private_key > private_key = eos : : utilities : : wif_to_key ( " 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3 " ) ; <nl> + if ( private_key ) <nl> + { <nl> + wlog ( " public key $ { k } " , ( " k " , private_key - > get_public_key ( ) ) ) ; <nl> + trx . sign ( * private_key , eos : : chain : : chain_id_type { } ) ; <nl> + } <nl> + } <nl> <nl> std : : cout < < fc : : json : : to_pretty_string ( push_transaction ( trx ) ) < < std : : endl ; <nl> <nl> int send_command ( const vector < string > & cmd_line ) <nl> ? 
fc : : mutable_variant_object ( " account_name " , account_name ) ( " skip_seq " , cmd_line [ 2 ] ) <nl> : fc : : mutable_variant_object ( " account_name " , account_name ) ( " skip_seq " , cmd_line [ 2 ] ) ( " num_seq " , cmd_line [ 3 ] ) ; <nl> std : : cout < < fc : : json : : to_pretty_string ( call ( get_transactions_func , arg ) ) < < std : : endl ; <nl> + } else if ( command = = " accounts " ) { <nl> + if ( cmd_line . size ( ) ! = 2 ) <nl> + { <nl> + std : : cerr < < " usage : " < < program < < " accounts PUBLIC_KEY \ n " ; <nl> + return - 1 ; <nl> + } <nl> + chain : : public_key_type public_key ( cmd_line [ 1 ] ) ; <nl> + auto arg = fc : : mutable_variant_object ( " public_key " , public_key ) ; <nl> + std : : cout < < fc : : json : : to_pretty_string ( call ( get_key_accounts_func , arg ) ) < < std : : endl ; <nl> } <nl> return 0 ; <nl> } <nl>
Add tracking of accounts by public key .
EOSIO/eos
1a1ec658f338d33e075b122307a93273d25e0b1e
2017-08-28T15:27:32Z
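The new `get_key_accounts` walks an `equal_range` over a public-key index and de-duplicates the account names through a set. An analogous self-contained C++ sketch, with `std::multimap` standing in for the chainbase multi_index and `accounts_for_key` as a hypothetical name:

    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // Collect every account bound to a public key, de-duplicated via a set,
    // mirroring the loop in account_history_plugin_impl::get_key_accounts.
    std::vector<std::string> accounts_for_key(
        const std::multimap<std::string, std::string> &key_to_account,
        const std::string &public_key)
    {
      std::set<std::string> names;
      auto range = key_to_account.equal_range(public_key);
      for (auto it = range.first; it != range.second; ++it)
        names.insert(it->second);
      return std::vector<std::string>(names.begin(), names.end());
    }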
mmm a / data / pref . xml <nl> ppp b / data / pref . xml <nl> <nl> < option id = " brush_angle " type = " bool " default = " true " / > <nl> < option id = " fg_color " type = " bool " default = " false " / > <nl> < option id = " bg_color " type = " bool " default = " false " / > <nl> + < option id = " image_color " type = " bool " default = " true " / > <nl> < option id = " ink_type " type = " bool " default = " true " / > <nl> < option id = " ink_opacity " type = " bool " default = " true " / > <nl> < option id = " shade " type = " bool " default = " true " / > <nl> mmm a / data / widgets / brush_slot_params . xml <nl> ppp b / data / widgets / brush_slot_params . xml <nl> <nl> < ! - - ASEPRITE - - > <nl> - < ! - - Copyright ( C ) 2015 by David Capello - - > <nl> + < ! - - Copyright ( C ) 2015 - 2016 by David Capello - - > <nl> < gui > <nl> < vbox id = " brush_slot_params " > <nl> < grid columns = " 2 " > <nl> <nl> < buttonset id = " color_params " columns = " 2 " multiple = " true " > <nl> < item id = " fg_color " text = " Foreground " / > <nl> < item id = " bg_color " text = " Background " / > <nl> + < item id = " image_color " text = " Image Color " hspan = " 2 " / > <nl> < / buttonset > <nl> <nl> < label text = " Ink : " / > <nl> mmm a / src / app / app_brushes . cpp <nl> ppp b / src / app / app_brushes . cpp <nl> void AppBrushes : : load ( const std : : string & filename ) <nl> flags | = int ( BrushSlot : : Flags : : PixelPerfect ) ; <nl> } <nl> <nl> + / / Image color ( enabled by default for backward compatibility ) <nl> + if ( ! brushElem - > Attribute ( " imagecolor " ) | | <nl> + bool_attr_is_true ( brushElem , " imagecolor " ) ) <nl> + flags | = int ( BrushSlot : : Flags : : ImageColor ) ; <nl> + <nl> if ( flags ! = 0 ) <nl> flags | = int ( BrushSlot : : Flags : : Locked ) ; <nl> <nl> void AppBrushes : : save ( const std : : string & filename ) const <nl> save_xml_image ( & maskElem , slot . brush ( ) - > maskBitmap ( ) ) ; <nl> brushElem . InsertEndChild ( maskElem ) ; <nl> } <nl> + <nl> + / / Image color <nl> + brushElem . SetAttribute ( <nl> + " imagecolor " , <nl> + ( flags & int ( BrushSlot : : Flags : : ImageColor ) ) ? " true " : " false " ) ; <nl> } <nl> } <nl> <nl> mmm a / src / app / brush_slot . h <nl> ppp b / src / app / brush_slot . h <nl> <nl> / / Aseprite <nl> - / / Copyright ( C ) 2001 - 2015 David Capello <nl> + / / Copyright ( C ) 2001 - 2016 David Capello <nl> / / <nl> / / This program is distributed under the terms of <nl> / / the End - User License Agreement for Aseprite . <nl> class BrushSlot { <nl> InkType = 0x0040 , <nl> InkOpacity = 0x0080 , <nl> Shade = 0x0100 , <nl> - PixelPerfect = 0x0200 <nl> + PixelPerfect = 0x0200 , <nl> + ImageColor = 0x0400 , <nl> } ; <nl> <nl> BrushSlot ( Flags flags = Flags ( 0 ) , <nl> mmm a / src / app / commands / cmd_new_brush . cpp <nl> ppp b / src / app / commands / cmd_new_brush . cpp <nl> void NewBrushCommand : : createBrush ( const Site & site , const Mask * mask ) <nl> brush - > setPatternOrigin ( mask - > bounds ( ) . origin ( ) ) ; <nl> <nl> ContextBar * ctxBar = App : : instance ( ) - > contextBar ( ) ; <nl> + int flags = int ( BrushSlot : : Flags : : BrushType ) ; <nl> + { <nl> + / / TODO merge this code with ContextBar : : createBrushSlotFromPreferences ( ) ? <nl> + auto & pref = Preferences : : instance ( ) ; <nl> + auto & saveBrush = pref . saveBrush ; <nl> + if ( saveBrush . 
imageColor ( ) ) <nl> + flags | = int ( BrushSlot : : Flags : : ImageColor ) ; <nl> + } <nl> + <nl> int slot = App : : instance ( ) - > brushes ( ) . addBrushSlot ( <nl> - BrushSlot ( BrushSlot : : Flags : : BrushType , brush ) ) ; <nl> + BrushSlot ( BrushSlot : : Flags ( flags ) , brush ) ) ; <nl> ctxBar - > setActiveBrush ( brush ) ; <nl> <nl> / / Get the shortcut for this brush and show it to the user <nl> mmm a / src / app / ui / brush_popup . cpp <nl> ppp b / src / app / ui / brush_popup . cpp <nl> class BrushOptionsItem : public ButtonSet : : Item { <nl> params . brushAngle ( ) - > setSelected ( brush . hasFlag ( BrushSlot : : Flags : : BrushAngle ) ) ; <nl> params . fgColor ( ) - > setSelected ( brush . hasFlag ( BrushSlot : : Flags : : FgColor ) ) ; <nl> params . bgColor ( ) - > setSelected ( brush . hasFlag ( BrushSlot : : Flags : : BgColor ) ) ; <nl> + params . imageColor ( ) - > setSelected ( brush . hasFlag ( BrushSlot : : Flags : : ImageColor ) ) ; <nl> params . inkType ( ) - > setSelected ( brush . hasFlag ( BrushSlot : : Flags : : InkType ) ) ; <nl> params . inkOpacity ( ) - > setSelected ( brush . hasFlag ( BrushSlot : : Flags : : InkOpacity ) ) ; <nl> params . shade ( ) - > setSelected ( brush . hasFlag ( BrushSlot : : Flags : : Shade ) ) ; <nl> class BrushOptionsItem : public ButtonSet : : Item { <nl> if ( params . brushAngle ( ) - > isSelected ( ) ) flags | = int ( BrushSlot : : Flags : : BrushAngle ) ; <nl> if ( params . fgColor ( ) - > isSelected ( ) ) flags | = int ( BrushSlot : : Flags : : FgColor ) ; <nl> if ( params . bgColor ( ) - > isSelected ( ) ) flags | = int ( BrushSlot : : Flags : : BgColor ) ; <nl> + if ( params . imageColor ( ) - > isSelected ( ) ) flags | = int ( BrushSlot : : Flags : : ImageColor ) ; <nl> if ( params . inkType ( ) - > isSelected ( ) ) flags | = int ( BrushSlot : : Flags : : InkType ) ; <nl> if ( params . inkOpacity ( ) - > isSelected ( ) ) flags | = int ( BrushSlot : : Flags : : InkOpacity ) ; <nl> if ( params . shade ( ) - > isSelected ( ) ) flags | = int ( BrushSlot : : Flags : : Shade ) ; <nl> class NewBrushOptionsItem : public ButtonSet : : Item { <nl> params . brushAngle ( ) - > setSelected ( saveBrush . brushAngle ( ) ) ; <nl> params . fgColor ( ) - > setSelected ( saveBrush . fgColor ( ) ) ; <nl> params . bgColor ( ) - > setSelected ( saveBrush . bgColor ( ) ) ; <nl> + params . imageColor ( ) - > setSelected ( saveBrush . imageColor ( ) ) ; <nl> params . inkType ( ) - > setSelected ( saveBrush . inkType ( ) ) ; <nl> params . inkOpacity ( ) - > setSelected ( saveBrush . inkOpacity ( ) ) ; <nl> params . shade ( ) - > setSelected ( saveBrush . shade ( ) ) ; <nl> class NewBrushOptionsItem : public ButtonSet : : Item { <nl> saveBrush . fgColor ( params . fgColor ( ) - > isSelected ( ) ) ; <nl> if ( saveBrush . bgColor ( ) ! = params . bgColor ( ) - > isSelected ( ) ) <nl> saveBrush . bgColor ( params . bgColor ( ) - > isSelected ( ) ) ; <nl> + if ( saveBrush . imageColor ( ) ! = params . imageColor ( ) - > isSelected ( ) ) <nl> + saveBrush . imageColor ( params . imageColor ( ) - > isSelected ( ) ) ; <nl> if ( saveBrush . inkType ( ) ! = params . inkType ( ) - > isSelected ( ) ) <nl> saveBrush . inkType ( params . inkType ( ) - > isSelected ( ) ) ; <nl> if ( saveBrush . inkOpacity ( ) ! = params . inkOpacity ( ) - > isSelected ( ) ) <nl> mmm a / src / app / ui / context_bar . cpp <nl> ppp b / src / app / ui / context_bar . cpp <nl> void ContextBar : : setActiveBrushBySlot ( tools : : Tool * tool , int slot ) <nl> if ( brush . 
hasFlag ( BrushSlot : : Flags : : BgColor ) ) <nl> pref . colorBar . bgColor ( brush . bgColor ( ) ) ; <nl> <nl> + / / If the image / stamp brush doesn ' t have the " ImageColor " flag , it <nl> + / / means that we have to change the image color to the current <nl> + / / " foreground color " . <nl> + if ( brush . brush ( ) & & <nl> + brush . brush ( ) - > type ( ) = = doc : : kImageBrushType & & <nl> + ! brush . hasFlag ( BrushSlot : : Flags : : ImageColor ) ) { <nl> + auto pixelFormat = brush . brush ( ) - > image ( ) - > pixelFormat ( ) ; <nl> + <nl> + brush . brush ( ) - > setImageColor ( <nl> + Brush : : ImageColor : : MainColor , <nl> + color_utils : : color_for_image ( pref . colorBar . fgColor ( ) , <nl> + pixelFormat ) ) ; <nl> + <nl> + brush . brush ( ) - > setImageColor ( <nl> + Brush : : ImageColor : : BackgroundColor , <nl> + color_utils : : color_for_image ( pref . colorBar . bgColor ( ) , <nl> + pixelFormat ) ) ; <nl> + } <nl> + <nl> if ( brush . hasFlag ( BrushSlot : : Flags : : InkType ) ) <nl> setInkType ( brush . inkType ( ) ) ; <nl> <nl> BrushSlot ContextBar : : createBrushSlotFromPreferences ( ) <nl> if ( saveBrush . brushAngle ( ) ) flags | = int ( BrushSlot : : Flags : : BrushAngle ) ; <nl> if ( saveBrush . fgColor ( ) ) flags | = int ( BrushSlot : : Flags : : FgColor ) ; <nl> if ( saveBrush . bgColor ( ) ) flags | = int ( BrushSlot : : Flags : : BgColor ) ; <nl> + if ( saveBrush . imageColor ( ) ) flags | = int ( BrushSlot : : Flags : : ImageColor ) ; <nl> if ( saveBrush . inkType ( ) ) flags | = int ( BrushSlot : : Flags : : InkType ) ; <nl> if ( saveBrush . inkOpacity ( ) ) flags | = int ( BrushSlot : : Flags : : InkOpacity ) ; <nl> if ( saveBrush . shade ( ) ) flags | = int ( BrushSlot : : Flags : : Shade ) ; <nl>
Add new " Image Color " parameter in brush slots ( fix )
aseprite/aseprite
1ffbd4c3430b0e7f1fc6f5263e550cb420313062
2016-09-20T13:26:02Z
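The new slot option is one more bit in the `BrushSlot::Flags` mask. A compilable sketch of the flag-bit pattern, reusing the values shown in the diff (the `hasFlag` helper here is illustrative; the real one is a BrushSlot member):

    #include <cstdio>

    enum class Flags : int {
      InkType      = 0x0040,
      InkOpacity   = 0x0080,
      Shade        = 0x0100,
      PixelPerfect = 0x0200,
      ImageColor   = 0x0400,  // the bit this commit adds
    };

    static bool hasFlag(int flags, Flags f) { return (flags & int(f)) != 0; }

    int main() {
      int flags = 0;
      flags |= int(Flags::ImageColor);  // "Image Color" checked in the dialog
      std::printf("image color: %d\n", hasFlag(flags, Flags::ImageColor));
    }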
mmm a / src / rdb_protocol / terms / rewrites . cc <nl> ppp b / src / rdb_protocol / terms / rewrites . cc <nl> class eq_join_term_t : public rewrite_term_t { <nl> r : : optarg ( " right " , v ) ) ) ) ) ) ) ; <nl> <nl> } <nl> - virtual const char * name ( ) const { return " inner_join " ; } <nl> + virtual const char * name ( ) const { return " eq_join " ; } <nl> } ; <nl> <nl> class delete_term_t : public rewrite_term_t { <nl>
Use actual name for ` eq_join `
rethinkdb/rethinkdb
138a5e96a04dd499c90dbe7e7bf47af494f862e4
2014-09-18T01:29:31Z
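A trimmed sketch of why the one-word fix matters: `name()` is what surfaces in diagnostics, so the copy-pasted override made `eq_join` report itself as `inner_join`. Only the two class names come from the diff; the rest is a compilable stand-in:

    #include <cstdio>

    struct rewrite_term_t {
      virtual const char *name() const = 0;
      virtual ~rewrite_term_t() {}
    };

    struct eq_join_term_t : rewrite_term_t {
      const char *name() const override { return "eq_join"; }  // was "inner_join"
    };

    int main() {
      eq_join_term_t term;
      std::printf("%s\n", term.name());  // now reports the actual term name
    }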
mmm a / stdlib / private / SwiftReflectionTest / SwiftReflectionTest . swift <nl> ppp b / stdlib / private / SwiftReflectionTest / SwiftReflectionTest . swift <nl> internal func readUInt ( ) - > UInt { <nl> / / / process . <nl> internal func sendReflectionInfos ( ) { <nl> debugLog ( " BEGIN \ ( # function ) " ) ; defer { debugLog ( " END \ ( # function ) " ) } <nl> - let infos = ( 0 . . < _dyld_image_count ( ) ) . flatMap ( getReflectionInfoForImage ) <nl> + let infos = ( 0 . . < _dyld_image_count ( ) ) . compactMap ( getReflectionInfoForImage ) <nl> <nl> var numInfos = infos . count <nl> debugLog ( " \ ( numInfos ) reflection info bundles . " ) <nl> mmm a / stdlib / public / SDK / Foundation / JSONEncoder . swift <nl> ppp b / stdlib / public / SDK / Foundation / JSONEncoder . swift <nl> fileprivate struct _JSONKeyedDecodingContainer < K : CodingKey > : KeyedDecodingCon <nl> / / MARK : - KeyedDecodingContainerProtocol Methods <nl> <nl> public var allKeys : [ Key ] { <nl> - return self . container . keys . flatMap { Key ( stringValue : $ 0 ) } <nl> + return self . container . keys . compactMap { Key ( stringValue : $ 0 ) } <nl> } <nl> <nl> public func contains ( _ key : Key ) - > Bool { <nl> mmm a / stdlib / public / SDK / Foundation / PlistEncoder . swift <nl> ppp b / stdlib / public / SDK / Foundation / PlistEncoder . swift <nl> fileprivate struct _PlistKeyedDecodingContainer < K : CodingKey > : KeyedDecodingCo <nl> / / MARK : - KeyedDecodingContainerProtocol Methods <nl> <nl> public var allKeys : [ Key ] { <nl> - return self . container . keys . flatMap { Key ( stringValue : $ 0 ) } <nl> + return self . container . keys . compactMap { Key ( stringValue : $ 0 ) } <nl> } <nl> <nl> public func contains ( _ key : Key ) - > Bool { <nl> mmm a / stdlib / public / core / FlatMap . swift <nl> ppp b / stdlib / public / core / FlatMap . swift <nl> extension LazySequenceProtocol { <nl> FlattenSequence < LazyMapSequence < Elements , SegmentOfResult > > > { <nl> return self . map ( transform ) . joined ( ) <nl> } <nl> - <nl> + <nl> / / / Returns the non - ` nil ` results of mapping the given transformation over <nl> / / / this sequence . <nl> / / / <nl> extension LazySequenceProtocol { <nl> / / / <nl> / / / - Complexity : O ( 1 ) <nl> @ _inlineable / / FIXME ( sil - serialize - all ) <nl> - public func flatMap < ElementOfResult > ( <nl> + public func compactMap < ElementOfResult > ( <nl> _ transform : @ escaping ( Elements . Element ) - > ElementOfResult ? <nl> ) - > LazyMapSequence < <nl> LazyFilterSequence < <nl> extension LazySequenceProtocol { <nl> > { <nl> return self . map ( transform ) . filter { $ 0 ! = nil } . map { $ 0 ! } <nl> } <nl> + <nl> + / / / Returns the non - ` nil ` results of mapping the given transformation over <nl> + / / / this sequence . <nl> + / / / <nl> + / / / Use this method to receive a sequence of nonoptional values when your <nl> + / / / transformation produces an optional value . <nl> + / / / <nl> + / / / - Parameter transform : A closure that accepts an element of this sequence <nl> + / / / as its argument and returns an optional value . <nl> + / / / <nl> + / / / - Complexity : O ( 1 ) <nl> + @ inline ( __always ) <nl> + @ available ( * , deprecated , renamed : " compactMap ( _ : ) " ) <nl> + public func flatMap < ElementOfResult > ( <nl> + _ transform : @ escaping ( Elements . Element ) - > ElementOfResult ? <nl> + ) - > LazyMapSequence < <nl> + LazyFilterSequence < <nl> + LazyMapSequence < Elements , ElementOfResult ? 
> > , <nl> + ElementOfResult <nl> + > { <nl> + return self . compactMap ( transform ) <nl> + } <nl> } <nl> <nl> extension LazyCollectionProtocol { <nl> extension LazyCollectionProtocol { <nl> > { <nl> return self . map ( transform ) . joined ( ) <nl> } <nl> - <nl> + <nl> + / / / Returns the non - ` nil ` results of mapping the given transformation over <nl> + / / / this collection . <nl> + / / / <nl> + / / / Use this method to receive a collection of nonoptional values when your <nl> + / / / transformation produces an optional value . <nl> + / / / <nl> + / / / - Parameter transform : A closure that accepts an element of this <nl> + / / / collection as its argument and returns an optional value . <nl> + / / / <nl> + / / / - Complexity : O ( 1 ) <nl> + @ _inlineable / / FIXME ( sil - serialize - all ) <nl> + public func compactMap < ElementOfResult > ( <nl> + _ transform : @ escaping ( Elements . Element ) - > ElementOfResult ? <nl> + ) - > LazyMapCollection < <nl> + LazyFilterCollection < <nl> + LazyMapCollection < Elements , ElementOfResult ? > > , <nl> + ElementOfResult <nl> + > { <nl> + return self . map ( transform ) . filter { $ 0 ! = nil } . map { $ 0 ! } <nl> + } <nl> + <nl> / / / Returns the non - ` nil ` results of mapping the given transformation over <nl> / / / this collection . <nl> / / / <nl> extension LazyCollectionProtocol { <nl> / / / collection as its argument and returns an optional value . <nl> / / / <nl> / / / - Complexity : O ( 1 ) <nl> + @ available ( * , deprecated , renamed : " compactMap ( _ : ) " ) <nl> @ _inlineable / / FIXME ( sil - serialize - all ) <nl> public func flatMap < ElementOfResult > ( <nl> _ transform : @ escaping ( Elements . Element ) - > ElementOfResult ? <nl> mmm a / stdlib / public / core / Flatten . swift <nl> ppp b / stdlib / public / core / Flatten . swift <nl> extension LazySequenceProtocol where Element : Sequence { <nl> @ _fixed_layout / / FIXME ( sil - serialize - all ) <nl> public struct FlattenCollection < Base > <nl> where Base : Collection , Base . Element : Collection { <nl> - / / FIXME : swift - 3 - indexing - model : check test coverage for collection . <nl> - <nl> @ _versioned / / FIXME ( sil - serialize - all ) <nl> internal var _base : Base <nl> <nl> extension FlattenCollection : Collection { <nl> <nl> @ _inlineable / / FIXME ( sil - serialize - all ) <nl> public func distance ( from start : Index , to end : Index ) - > Int { <nl> - / / The following line makes sure that distance ( from : to : ) is invoked on the <nl> + / / The following check makes sure that distance ( from : to : ) is invoked on the <nl> / / _base at least once , to trigger a _precondition in forward only <nl> / / collections . <nl> + if end < start { <nl> + _ = _base . distance ( from : _base . endIndex , to : _base . startIndex ) <nl> + } <nl> var _start : Index <nl> let _end : Index <nl> let step : Int <nl> mmm a / stdlib / public / core / SequenceAlgorithms . swift . gyb <nl> ppp b / stdlib / public / core / SequenceAlgorithms . swift . gyb <nl> extension Sequence { <nl> } <nl> <nl> extension Sequence { <nl> + / / / Returns an array containing the non - ` nil ` results of calling the given <nl> + / / / transformation with each element of this sequence . <nl> + / / / <nl> + / / / Use this method to receive an array of nonoptional values when your <nl> + / / / transformation produces an optional value . 
<nl> + / / / <nl> + / / / In this example , note the difference in the result of using ` map ` and <nl> + / / / ` compactMap ` with a transformation that returns an optional ` Int ` value . <nl> + / / / <nl> + / / / let possibleNumbers = [ " 1 " , " 2 " , " three " , " / / / 4 / / / " , " 5 " ] <nl> + / / / <nl> + / / / let mapped : [ Int ? ] = possibleNumbers . map { str in Int ( str ) } <nl> + / / / / / [ 1 , 2 , nil , nil , 5 ] <nl> + / / / <nl> + / / / let flatMapped : [ Int ] = possibleNumbers . compactMap { str in Int ( str ) } <nl> + / / / / / [ 1 , 2 , 5 ] <nl> + / / / <nl> + / / / - Parameter transform : A closure that accepts an element of this <nl> + / / / sequence as its argument and returns an optional value . <nl> + / / / - Returns : An array of the non - ` nil ` results of calling ` transform ` <nl> + / / / with each element of the sequence . <nl> + / / / <nl> + / / / - Complexity : O ( * m * + * n * ) , where * m * is the length of this sequence <nl> + / / / and * n * is the length of the result . <nl> + @ _inlineable <nl> + public func compactMap < ElementOfResult > ( <nl> + _ transform : ( Element ) throws - > ElementOfResult ? <nl> + ) rethrows - > [ ElementOfResult ] { <nl> + return try _compactMap ( transform ) <nl> + } <nl> + <nl> / / / Returns an array containing the non - ` nil ` results of calling the given <nl> / / / transformation with each element of this sequence . <nl> / / / <nl> extension Sequence { <nl> / / / <nl> / / / - Complexity : O ( * m * + * n * ) , where * m * is the length of this sequence <nl> / / / and * n * is the length of the result . <nl> - @ _inlineable <nl> + @ inline ( __always ) <nl> + @ available ( * , deprecated , renamed : " compactMap ( _ : ) " ) <nl> public func flatMap < ElementOfResult > ( <nl> _ transform : ( Element ) throws - > ElementOfResult ? <nl> ) rethrows - > [ ElementOfResult ] { <nl> - return try _flatMap ( transform ) <nl> + return try _compactMap ( transform ) <nl> } <nl> <nl> / / The implementation of flatMap accepting a closure with an optional result . <nl> extension Sequence { <nl> / / overloads . <nl> @ _inlineable / / FIXME ( sil - serialize - all ) <nl> @ inline ( __always ) <nl> - public func _flatMap < ElementOfResult > ( <nl> + public func _compactMap < ElementOfResult > ( <nl> _ transform : ( Element ) throws - > ElementOfResult ? <nl> ) rethrows - > [ ElementOfResult ] { <nl> var result : [ ElementOfResult ] = [ ] <nl> mmm a / stdlib / public / core / StringRangeReplaceableCollection . swift . gyb <nl> ppp b / stdlib / public / core / StringRangeReplaceableCollection . swift . gyb <nl> extension Sequence { <nl> <nl> extension Collection { <nl> @ _inlineable / / FIXME ( sil - serialize - all ) <nl> + public func compactMap ( <nl> + _ transform : ( Element ) throws - > String ? <nl> + ) rethrows - > [ String ] { <nl> + return try _compactMap ( transform ) <nl> + } <nl> + <nl> + @ available ( * , deprecated , renamed : " compactMap ( _ : ) " ) <nl> + @ inline ( __always ) <nl> public func flatMap ( <nl> _ transform : ( Element ) throws - > String ? <nl> ) rethrows - > [ String ] { <nl> - return try _flatMap ( transform ) <nl> + return try _compactMap ( transform ) <nl> } <nl> } <nl> / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> mmm a / test / Constraints / casts . swift <nl> ppp b / test / Constraints / casts . 
swift <nl> _ = b1 as Int / / expected - error { { cannot convert value of type ' Bool ' to type <nl> _ = seven as Int / / expected - error { { cannot convert value of type ' Double ' to type ' Int ' in coercion } } <nl> <nl> func rdar29894174 ( v : B ? ) { <nl> - let _ = [ v ] . flatMap { $ 0 as ? D } <nl> + let _ = [ v ] . compactMap { $ 0 as ? D } <nl> } <nl> <nl> / / When re - typechecking a solution with an ' is ' cast applied , <nl> mmm a / test / Constraints / closures . swift <nl> ppp b / test / Constraints / closures . swift <nl> let _ : ( ( Int ? ) - > Void ) = { ( arg : Int ! ) in } <nl> / / ( ) - > T to ( ) - > Optional < ( ) > . <nl> func returnsArray ( ) - > [ Int ] { return [ ] } <nl> <nl> - returnsArray ( ) . flatMap { $ 0 } . flatMap { } <nl> + returnsArray ( ) . compactMap { $ 0 } . compactMap { } <nl> / / expected - warning @ - 1 { { expression of type ' Int ' is unused } } <nl> - / / expected - warning @ - 2 { { result of call to ' flatMap ' is unused } } <nl> + / / expected - warning @ - 2 { { result of call to ' compactMap ' is unused } } <nl> <nl> / / rdar : / / problem / 30271695 <nl> - _ = [ " hi " ] . flatMap { $ 0 . isEmpty ? nil : $ 0 } <nl> + _ = [ " hi " ] . compactMap { $ 0 . isEmpty ? nil : $ 0 } <nl> <nl> / / rdar : / / problem / 32432145 - compiler should emit fixit to remove " _ in " in closures if 0 parameters is expected <nl> <nl> mmm a / test / Constraints / tuple_arguments . swift <nl> ppp b / test / Constraints / tuple_arguments . swift <nl> extension Sequence where Iterator . Element = = ( key : String , value : String ? ) { <nl> } <nl> <nl> func rdar33043106 ( _ records : [ ( Int ) ] , _ other : [ ( ( Int ) ) ] ) - > [ Int ] { <nl> - let x : [ Int ] = records . flatMap { _ in <nl> + let x : [ Int ] = records . map { _ in <nl> let i = 1 <nl> return i <nl> } <nl> - let y : [ Int ] = other . flatMap { _ in <nl> + let y : [ Int ] = other . map { _ in <nl> let i = 1 <nl> return i <nl> } <nl> func itsFalse ( _ : Int ) - > Bool ? { <nl> } <nl> <nl> func rdar33159366 ( s : AnySequence < Int > ) { <nl> - _ = s . flatMap ( itsFalse ) <nl> + _ = s . compactMap ( itsFalse ) <nl> let a = Array ( s ) <nl> - _ = a . flatMap ( itsFalse ) <nl> + _ = a . compactMap ( itsFalse ) <nl> } <nl> <nl> func sr5429 < T > ( t : T ) { <nl> new file mode 100644 <nl> index 000000000000 . . 4567b2beae0b <nl> mmm / dev / null <nl> ppp b / test / stdlib / FlatMapDeprecation . swift <nl> <nl> + / / RUN : % target - typecheck - verify - swift % s <nl> + <nl> + func flatMapOnSequence < <nl> + S : Sequence <nl> + > ( xs : S , f : ( S . Element ) - > S . Element ? ) { <nl> + _ = xs . flatMap ( f ) / / expected - warning { { deprecated } } expected - note { { compactMap } } <nl> + } <nl> + <nl> + func flatMapOnLazySequence < <nl> + S : LazySequenceProtocol <nl> + > ( xs : S , f : ( S . Element ) - > S . Element ? ) { <nl> + _ = xs . flatMap ( f ) / / expected - warning { { deprecated } } expected - note { { compactMap } } <nl> + } <nl> + <nl> + func flatMapOnLazyCollection < <nl> + C : LazyCollectionProtocol <nl> + > ( xs : C , f : ( C . Element ) - > C . Element ? ) { <nl> + _ = xs . flatMap ( f ) / / expected - warning { { deprecated } } expected - note { { compactMap } } <nl> + } <nl> + <nl> + func flatMapOnLazyBidirectionalCollection < <nl> + C : LazyCollectionProtocol & BidirectionalCollection <nl> + > ( xs : C , f : ( C . Element ) - > C . Element ? ) <nl> + where C . Elements : BidirectionalCollection { <nl> + _ = xs . 
flatMap ( f ) / / expected - warning { { deprecated } } expected - note { { compactMap } } <nl> + } <nl> + <nl> + func flatMapOnCollectinoOfStrings < <nl> + C : Collection <nl> + > ( xs : C , f : ( C . Element ) - > String ? ) { <nl> + _ = xs . flatMap ( f ) / / expected - warning { { deprecated } } expected - note { { compactMap } } <nl> + } <nl> new file mode 100644 <nl> index 000000000000 . . 26849a5f3c87 <nl> mmm / dev / null <nl> ppp b / validation - test / stdlib / Collection / FlattenCollection . swift . gyb <nl> <nl> + / / - * - swift - * - <nl> + / / RUN : % target - run - simple - swiftgyb <nl> + / / REQUIRES : executable_test <nl> + <nl> + import SwiftPrivate <nl> + import StdlibUnittest <nl> + import StdlibCollectionUnittest <nl> + <nl> + var tests = TestSuite ( " FlattenCollection " ) <nl> + <nl> + % { <nl> + variations = [ ( ' ' , ' Sequence ' ) , ( ' ' , ' Collection ' ) , ( ' Bidirectional ' , ' Collection ' ) ] <nl> + } % <nl> + <nl> + / / Test collections using value types as elements . <nl> + % for ( traversal , kind ) in variations : <nl> + tests . add $ { traversal } $ { kind } Tests ( <nl> + make $ { kind } : { ( elements : [ OpaqueValue < Int > ] ) - > Flatten $ { kind } < Minimal $ { traversal } $ { kind } < Minimal $ { traversal } $ { kind } < OpaqueValue < Int > > > > in <nl> + Minimal $ { traversal } $ { kind } ( elements : elements . map { Minimal $ { traversal } $ { kind } ( elements : [ $ 0 ] ) } ) . joined ( ) <nl> + } , <nl> + wrapValue : identity , <nl> + extractValue : identity , <nl> + make $ { kind } OfEquatable : { ( elements : [ MinimalEquatableValue ] ) - > Flatten $ { kind } < Minimal $ { traversal } $ { kind } < Minimal $ { traversal } $ { kind } < MinimalEquatableValue > > > in <nl> + Minimal $ { traversal } $ { kind } ( elements : elements . map { Minimal $ { traversal } $ { kind } ( elements : [ $ 0 ] ) } ) . joined ( ) <nl> + } , <nl> + wrapValueIntoEquatable : identityEq , <nl> + extractValueFromEquatable : identityEq <nl> + ) <nl> + % end <nl> + <nl> + / / Test collections using reference types as elements . <nl> + % for ( traversal , kind ) in variations : <nl> + tests . add $ { traversal } $ { kind } Tests ( <nl> + make $ { kind } : { ( elements : [ LifetimeTracked ] ) - > Flatten $ { kind } < Minimal $ { traversal } $ { kind } < Minimal $ { traversal } $ { kind } < LifetimeTracked > > > in <nl> + Minimal $ { traversal } $ { kind } ( elements : elements . map { Minimal $ { traversal } $ { kind } ( elements : [ $ 0 ] ) } ) . joined ( ) <nl> + } , <nl> + wrapValue : { ( element : OpaqueValue < Int > ) in <nl> + LifetimeTracked ( element . value , identity : element . identity ) <nl> + } , <nl> + extractValue : { ( element : LifetimeTracked ) in <nl> + OpaqueValue ( element . value , identity : element . identity ) <nl> + } , <nl> + make $ { kind } OfEquatable : { ( elements : [ LifetimeTracked ] ) - > Flatten $ { kind } < Minimal $ { traversal } $ { kind } < Minimal $ { traversal } $ { kind } < LifetimeTracked > > > in <nl> + Minimal $ { traversal } $ { kind } ( elements : elements . map { Minimal $ { traversal } $ { kind } ( elements : [ $ 0 ] ) } ) . joined ( ) <nl> + } , <nl> + wrapValueIntoEquatable : { ( element : MinimalEquatableValue ) in <nl> + LifetimeTracked ( element . value , identity : element . identity ) <nl> + } , <nl> + extractValueFromEquatable : { ( element : LifetimeTracked ) in <nl> + MinimalEquatableValue ( element . value , identity : element . 
identity ) <nl> + } <nl> + ) <nl> + % end <nl> + <nl> + / / Test collection instances and iterators . <nl> + % for ( traversal , kind ) in variations : <nl> + tests . test ( " FlattenCollection instances ( $ { traversal } $ { kind } ) " ) { <nl> + do { <nl> + let expected : [ String ] = [ ] <nl> + let base : [ [ String ] ] = [ [ ] , [ ] , [ ] ] <nl> + check $ { traversal } $ { kind } ( <nl> + expected , <nl> + Minimal $ { traversal } $ { kind } ( elements : base ) . joined ( ) , <nl> + sameValue : { $ 0 = = $ 1 } ) <nl> + } <nl> + do { <nl> + let expected = [ " apple " , " orange " , " banana " , " grapefruit " , " lychee " ] <nl> + let base = [ [ " apple " , " orange " ] , [ " banana " , " grapefruit " ] , [ " lychee " ] ] <nl> + check $ { traversal } $ { kind } ( <nl> + expected , <nl> + Minimal $ { traversal } $ { kind } ( elements : base ) . joined ( ) , <nl> + sameValue : { $ 0 = = $ 1 } ) <nl> + } <nl> + } <nl> + % end <nl> + <nl> + runAllTests ( ) <nl>
Merge remote - tracking branch ' origin / master ' into master - next
apple/swift
b19045a81a2b31b776ba9183e726f4af59c1874c
2017-12-20T01:09:50Z
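The doc comments in this record define `compactMap` as a `map` that drops `nil` results. An analogous helper in C++ with `std::optional`, illustrating those semantics only (the `compact_map` name and parsing lambda are assumptions, not part of the Swift change):

    #include <charconv>
    #include <optional>
    #include <string>
    #include <type_traits>
    #include <vector>

    // Keep only the engaged results of an optional-returning transform.
    template <typename T, typename F>
    auto compact_map(const std::vector<T> &xs, F f) {
      std::vector<typename std::invoke_result_t<F, const T &>::value_type> out;
      for (const auto &x : xs)
        if (auto r = f(x)) out.push_back(*r);
      return out;
    }

    int main() {
      std::vector<std::string> possibleNumbers = {"1", "2", "three", "///4///", "5"};
      auto numbers = compact_map(possibleNumbers,
          [](const std::string &s) -> std::optional<int> {
            int v = 0;
            auto res = std::from_chars(s.data(), s.data() + s.size(), v);
            if (res.ec == std::errc() && res.ptr == s.data() + s.size())
              return v;
            return std::nullopt;  // "three" and "///4///" drop out
          });
      // numbers == {1, 2, 5}, matching the [1, 2, 5] doc-comment example
      (void)numbers;
    }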
mmm a / src / mongo / db / auth / authorization_manager . cpp <nl> ppp b / src / mongo / db / auth / authorization_manager . cpp <nl> <nl> <nl> namespace mongo { <nl> <nl> + const std : : string AuthorizationManager : : SERVER_RESOURCE_NAME = " $ SERVER " ; <nl> + const std : : string AuthorizationManager : : CLUSTER_RESOURCE_NAME = " $ CLUSTER " ; <nl> + <nl> namespace { <nl> Principal specialAdminPrincipal ( " special " ) ; <nl> const std : : string ADMIN_DBNAME = " admin " ; <nl> namespace mongo { <nl> readRoleActions . addAction ( ActionType : : collStats ) ; <nl> readRoleActions . addAction ( ActionType : : dbStats ) ; <nl> readRoleActions . addAction ( ActionType : : find ) ; <nl> + / / TODO : should dbHash go here ? <nl> <nl> / / Read - write role <nl> readWriteRoleActions . addAllActionsFromSet ( readRoleActions ) ; <nl> namespace mongo { <nl> serverAdminRoleActions . addAction ( ActionType : : hostInfo ) ; <nl> serverAdminRoleActions . addAction ( ActionType : : listDatabases ) ; <nl> serverAdminRoleActions . addAction ( ActionType : : logRotate ) ; <nl> - serverAdminRoleActions . addAction ( ActionType : : profile ) ; <nl> + serverAdminRoleActions . addAction ( ActionType : : profile ) ; / / TODO : should this be dbAdmin ? <nl> serverAdminRoleActions . addAction ( ActionType : : repairDatabase ) ; <nl> serverAdminRoleActions . addAction ( ActionType : : replSetFreeze ) ; <nl> serverAdminRoleActions . addAction ( ActionType : : replSetGetStatus ) ; <nl> namespace mongo { <nl> return & specialAdminPrincipal ; <nl> } <nl> <nl> + / / TODO : If resource is a ns , check against the dbname portion only . <nl> + <nl> const AcquiredPrivilege * privilege ; <nl> privilege = _acquiredPrivileges . getPrivilegeForAction ( resource , action ) ; <nl> if ( privilege ) { <nl> mmm a / src / mongo / db / auth / authorization_manager . h <nl> ppp b / src / mongo / db / auth / authorization_manager . h <nl> namespace mongo { <nl> class AuthorizationManager { <nl> MONGO_DISALLOW_COPYING ( AuthorizationManager ) ; <nl> public : <nl> + <nl> + static const std : : string SERVER_RESOURCE_NAME ; <nl> + static const std : : string CLUSTER_RESOURCE_NAME ; <nl> + <nl> / / Takes ownership of the externalState . <nl> explicit AuthorizationManager ( AuthExternalState * externalState ) ; <nl> ~ AuthorizationManager ( ) ; <nl> mmm a / src / mongo / db / commands . cpp <nl> ppp b / src / mongo / db / commands . cpp <nl> namespace mongo { <nl> return c - > locktype ( ) ; <nl> } <nl> <nl> - <nl> + / / TODO : remove this default implementation so that all Command subclasses have to explicitly <nl> + / / declare their own . <nl> void Command : : addRequiredPrivileges ( const std : : string & dbname , <nl> const BSONObj & cmdObj , <nl> std : : vector < Privilege > * out ) { <nl> mmm a / src / mongo / db / commands . h <nl> ppp b / src / mongo / db / commands . h <nl> <nl> <nl> # include < vector > <nl> <nl> + # include " mongo / db / auth / action_set . h " <nl> + # include " mongo / db / auth / action_type . h " <nl> + # include " mongo / db / auth / authorization_manager . h " <nl> # include " mongo / db / auth / privilege . h " <nl> # include " mongo / db / jsobj . h " <nl> # include " mongo / util / mongoutils / str . h " <nl> namespace mongo { <nl> virtual bool slaveOk ( ) const { <nl> return true ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . 
addAction ( ActionType : : shutdown ) ; <nl> + out - > push_back ( Privilege ( AuthorizationManager : : SERVER_RESOURCE_NAME , actions ) ) ; <nl> + } <nl> virtual LockType locktype ( ) const { return NONE ; } <nl> virtual void help ( stringstream & help ) const ; <nl> CmdShutdown ( ) : Command ( " shutdown " ) { } <nl> mmm a / src / mongo / db / dbcommands . cpp <nl> ppp b / src / mongo / db / dbcommands . cpp <nl> <nl> # include < time . h > <nl> <nl> # include " mongo / bson / util / builder . h " <nl> + # include " mongo / db / auth / action_set . h " <nl> + # include " mongo / db / auth / action_type . h " <nl> + # include " mongo / db / auth / authorization_manager . h " <nl> + # include " mongo / db / auth / privilege . h " <nl> # include " mongo / db / background . h " <nl> # include " mongo / db / btreecursor . h " <nl> # include " mongo / db / commands . h " <nl> namespace mongo { <nl> virtual bool slaveOk ( ) const { <nl> return true ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { } / / No auth required <nl> virtual void help ( stringstream & help ) const { <nl> help < < " reset error state ( used with getpreverror ) " ; <nl> } <nl> namespace mongo { <nl> virtual LockType locktype ( ) const { return NONE ; } <nl> virtual bool logTheOp ( ) { return false ; } <nl> virtual bool slaveOk ( ) const { return true ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { } / / No auth required <nl> virtual void help ( stringstream & help ) const { <nl> help < < " return error status of the last operation on this connection \ n " <nl> < < " options : \ n " <nl> namespace mongo { <nl> virtual bool slaveOk ( ) const { <nl> return true ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { } / / No auth required <nl> CmdGetPrevError ( ) : Command ( " getPrevError " , false , " getpreverror " ) { } <nl> bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> LastError * le = lastError . disableForCommand ( ) ; <nl> namespace mongo { <nl> return false ; <nl> } <nl> <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : dropDatabase ) ; <nl> + out - > push_back ( Privilege ( AuthorizationManager : : CLUSTER_RESOURCE_NAME , actions ) ) ; <nl> + } <nl> + <nl> / / this is suboptimal but syncDataAndTruncateJournal is called from dropDatabase , and that <nl> / / may need a global lock . <nl> virtual bool lockGlobally ( ) const { return true ; } <nl> namespace mongo { <nl> virtual LockType locktype ( ) const { return WRITE ; } <nl> / / SERVER - 4328 todo don ' t lock globally . currently syncDataAndTruncateJournal is being called within , and that requires a global lock i believe . <nl> virtual bool lockGlobally ( ) const { return true ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . 
addAction ( ActionType : : repairDatabase ) ; <nl> + out - > push_back ( Privilege ( AuthorizationManager : : SERVER_RESOURCE_NAME , actions ) ) ; <nl> + } <nl> CmdRepairDatabase ( ) : Command ( " repairDatabase " ) { } <nl> bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> BSONElement e = cmdObj . firstElement ( ) ; <nl> namespace mongo { <nl> help < < " http : / / dochub . mongodb . org / core / databaseprofiler " ; <nl> } <nl> virtual LockType locktype ( ) const { return WRITE ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : profile ) ; <nl> + / / TODO : should the resource here be the database instead of server ? <nl> + out - > push_back ( Privilege ( AuthorizationManager : : SERVER_RESOURCE_NAME , actions ) ) ; <nl> + } <nl> CmdProfile ( ) : Command ( " profile " ) { } <nl> bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> BSONElement e = cmdObj . firstElement ( ) ; <nl> namespace mongo { <nl> virtual void help ( stringstream & help ) const { help < < " internal " ; } <nl> virtual LockType locktype ( ) const { return NONE ; } <nl> CmdGetOpTime ( ) : Command ( " getoptime " ) { } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { } / / No auth required <nl> bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> mutex : : scoped_lock lk ( OpTime : : m ) ; <nl> result . appendDate ( " optime " , OpTime : : now ( lk ) . asDate ( ) ) ; <nl> namespace mongo { <nl> } <nl> void help ( stringstream & h ) const { h < < " http : / / dochub . mongodb . org / core / monitoring # MonitoringandDiagnostics - DatabaseRecord % 2FReplay % 28diagLoggingcommand % 29 " ; } <nl> virtual LockType locktype ( ) const { return WRITE ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : diagLogging ) ; <nl> + out - > push_back ( Privilege ( AuthorizationManager : : SERVER_RESOURCE_NAME , actions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> int was = _diaglog . setLevel ( cmdObj . firstElement ( ) . numberInt ( ) ) ; <nl> _diaglog . flush ( ) ; <nl> namespace mongo { <nl> virtual bool adminOnly ( ) const { <nl> return false ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . 
addAction ( ActionType : : dropCollection ) ; <nl> + out - > push_back ( Privilege ( dbname , actions ) ) ; <nl> + } <nl> virtual void help ( stringstream & help ) const { help < < " drop a collection \ n { drop : < collectionName > } " ; } <nl> virtual LockType locktype ( ) const { return WRITE ; } <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> namespace mongo { <nl> virtual bool maintenanceOk ( ) const { return false ; } <nl> virtual bool adminOnly ( ) const { return false ; } <nl> virtual void help ( stringstream & help ) const { help < < " count objects in collection " ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : find ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> <nl> long long skip = 0 ; <nl> namespace mongo { <nl> help < < " create a collection explicitly \ n " <nl> " { create : < ns > [ , capped : < bool > , size : < collSizeInBytes > , max : < nDocs > ] } " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : createCollection ) ; <nl> + out - > push_back ( Privilege ( dbname , actions ) ) ; <nl> + } <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> uassert ( 15888 , " must pass name of collection to create " , cmdObj . firstElement ( ) . valuestrsafe ( ) [ 0 ] ! = ' \ 0 ' ) ; <nl> string ns = dbname + ' . ' + cmdObj . firstElement ( ) . valuestr ( ) ; <nl> namespace mongo { <nl> virtual void help ( stringstream & help ) const { <nl> help < < " drop indexes for a collection " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : dropIndexes ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> CmdDropIndexes ( ) : Command ( " dropIndexes " , false , " deleteIndexes " ) { } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & anObjBuilder , bool / * fromRepl * / ) { <nl> BSONElement e = jsobj . firstElement ( ) ; <nl> namespace mongo { <nl> virtual void help ( stringstream & help ) const { <nl> help < < " re - index a collection " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . 
addAction ( ActionType : : reIndex ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> CmdReIndex ( ) : Command ( " reIndex " ) { } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool / * fromRepl * / ) { <nl> static DBDirectClient db ; <nl> namespace mongo { <nl> } <nl> virtual LockType locktype ( ) const { return NONE ; } <nl> virtual void help ( stringstream & help ) const { help < < " list databases on this server " ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : listDatabases ) ; <nl> + out - > push_back ( Privilege ( AuthorizationManager : : SERVER_RESOURCE_NAME , actions ) ) ; <nl> + } <nl> CmdListDatabases ( ) : Command ( " listDatabases " , true ) { } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool / * fromRepl * / ) { <nl> vector < string > dbNames ; <nl> namespace mongo { <nl> virtual bool slaveOk ( ) const { return false ; } <nl> virtual LockType locktype ( ) const { return WRITE ; } <nl> virtual bool lockGlobally ( ) const { return true ; } <nl> - <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : closeAllDatabases ) ; <nl> + out - > push_back ( Privilege ( AuthorizationManager : : SERVER_RESOURCE_NAME , actions ) ) ; <nl> + } <nl> CmdCloseAllDatabases ( ) : Command ( " closeAllDatabases " ) { } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool / * fromRepl * / ) { <nl> bool ok ; <nl> namespace mongo { <nl> help < < " example : { filemd5 : ObjectId ( aaaaaaa ) , root : \ " fs \ " } " ; <nl> } <nl> virtual LockType locktype ( ) const { return READ ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : find ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> string ns = dbname ; <nl> ns + = " . " ; <nl> namespace mongo { <nl> " the structure of min . " <nl> " \ nnote : This command may take a while to run " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : find ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> Timer timer ; <nl> <nl> namespace mongo { <nl> help < < " { collStats : \ " blog . posts \ " , scale : 1 } scale divides sizes e . g . for KB use 1024 \ n " <nl> " avgObjSize - in bytes " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . 
addAction ( ActionType : : collStats ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> string ns = dbname + " . " + jsobj . firstElement ( ) . valuestr ( ) ; <nl> Client : : Context cx ( ns ) ; <nl> namespace mongo { <nl> " Sets collection options . \ n " <nl> " Example : { collMod : ' foo ' , usePowerOf2Sizes : true } " ; <nl> } <nl> - <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : collMod ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> string ns = dbname + " . " + jsobj . firstElement ( ) . valuestr ( ) ; <nl> Client : : Context ctx ( ns ) ; <nl> namespace mongo { <nl> " Get stats on a database . Not instantaneous . Slower for databases with large . ns files . \ n " < < <nl> " Example : { dbStats : 1 , scale : 1 } " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : dbStats ) ; <nl> + out - > push_back ( Privilege ( dbname , actions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> int scale = 1 ; <nl> if ( jsobj [ " scale " ] . isNumber ( ) ) { <nl> namespace mongo { <nl> virtual void help ( stringstream & help ) const { <nl> help < < " { cloneCollectionAsCapped : < fromName > , toCollection : < toName > , size : < sizeInBytes > } " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet sourceActions ; <nl> + sourceActions . addAction ( ActionType : : find ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , sourceActions ) ) ; <nl> + <nl> + ActionSet targetActions ; <nl> + targetActions . addAction ( ActionType : : insert ) ; <nl> + targetActions . addAction ( ActionType : : ensureIndex ) ; <nl> + std : : string targetNs = dbname + " . " + cmdObj . getStringField ( " toCollection " ) ; <nl> + out - > push_back ( Privilege ( targetNs , targetActions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> string from = jsobj . getStringField ( " cloneCollectionAsCapped " ) ; <nl> string to = jsobj . getStringField ( " toCollection " ) ; <nl> namespace mongo { <nl> virtual void help ( stringstream & help ) const { <nl> help < < " { convertToCapped : < fromCollectionName > , size : < sizeInBytes > } " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . 
addAction ( ActionType : : convertToCapped ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> bool run ( const string & dbname , BSONObj & jsobj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> BackgroundOperation : : assertNoBgOpInProgForDb ( dbname . c_str ( ) ) ; <nl> <nl> namespace mongo { <nl> virtual void help ( stringstream & help ) const { <nl> help < < " { whatsmyuri : 1 } " ; <nl> } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { } / / No auth required <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> BSONObj info = cc ( ) . curop ( ) - > infoNoauth ( ) ; <nl> result < < " you " < < info [ " client " ] ; <nl> namespace mongo { <nl> virtual void help ( stringstream & help ) const { <nl> help < < " internal . for testing only . " ; <nl> } <nl> + / / No auth required , only enabled via command line for testing <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { } <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> <nl> AuthenticationInfo * ai = cc ( ) . getAuthenticationInfo ( ) ; <nl> namespace mongo { <nl> DBHashCmd ( ) : Command ( " dbHash " , false , " dbhash " ) { } <nl> virtual bool slaveOk ( ) const { return true ; } <nl> virtual LockType locktype ( ) const { return READ ; } <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : dbHash ) ; <nl> + out - > push_back ( Privilege ( dbname , actions ) ) ; <nl> + } <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> list < string > colls ; <nl> Database * db = cc ( ) . database ( ) ; <nl> namespace mongo { <nl> help < < " internal testing command . Makes db block ( in a read lock ) for 100 seconds \ n " ; <nl> help < < " w : true write lock . secs : < seconds > " ; <nl> } <nl> + / / No auth required , only enabled via command line for testing <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { } <nl> CmdSleep ( ) : Command ( " sleep " ) { } <nl> bool run ( const string & ns , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> log ( ) < < " test only command sleep invoked " < < endl ; <nl> namespace mongo { <nl> virtual bool slaveOk ( ) const { return false ; } <nl> virtual LockType locktype ( ) const { return WRITE ; } <nl> virtual bool requiresAuth ( ) { return true ; } <nl> + / / Only enabled via command line for testing <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . 
addAction ( ActionType : : captrunc ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> string coll = cmdObj [ " captrunc " ] . valuestrsafe ( ) ; <nl> uassert ( 13416 , " captrunc must specify a collection " , ! coll . empty ( ) ) ; <nl> namespace mongo { <nl> virtual LockType locktype ( ) const { return WRITE ; } <nl> virtual bool requiresAuth ( ) { return true ; } <nl> virtual bool logTheOp ( ) { return true ; } <nl> + / / Only enabled via command line for testing <nl> + virtual void addRequiredPrivileges ( const std : : string & dbname , <nl> + const BSONObj & cmdObj , <nl> + std : : vector < Privilege > * out ) { <nl> + ActionSet actions ; <nl> + actions . addAction ( ActionType : : emptycapped ) ; <nl> + out - > push_back ( Privilege ( parseNs ( dbname , cmdObj ) , actions ) ) ; <nl> + } <nl> virtual bool run ( const string & dbname , BSONObj & cmdObj , int , string & errmsg , BSONObjBuilder & result , bool ) { <nl> string coll = cmdObj [ " emptycapped " ] . valuestrsafe ( ) ; <nl> uassert ( 13428 , " emptycapped must specify a collection " , ! coll . empty ( ) ) ; <nl>
SERVER - 7122 Begin assigning required privileges to commands .
mongodb/mongo
a4526bad1cf9984e11529aca3e6cb7ad58102f94
2012-11-26T23:10:23Z
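A minimal sketch of the pattern this commit introduces: each Command subclass enumerates the privileges it needs into a caller-supplied vector, using the special `$SERVER` / `$CLUSTER` resource names for server- and cluster-wide actions. The `ActionType` values and resource strings below come from the diff; everything else is a simplified stand-in, not MongoDB's real headers (the real hook also receives the command's `BSONObj`).

```cpp
#include <iostream>
#include <string>
#include <vector>

enum class ActionType { dropDatabase, find, shutdown };

struct ActionSet {
    std::vector<ActionType> actions;
    void addAction(ActionType a) { actions.push_back(a); }
};

struct Privilege {
    std::string resource;  // a namespace, a dbname, or a special resource name
    ActionSet actions;
};

// Special resource names, as introduced in authorization_manager.cpp.
static const std::string SERVER_RESOURCE_NAME = "$SERVER";
static const std::string CLUSTER_RESOURCE_NAME = "$CLUSTER";

struct Command {
    // Each command appends the privileges it requires; an empty body
    // means "no auth required", as with getPrevError or whatsmyuri.
    virtual void addRequiredPrivileges(const std::string& dbname,
                                       std::vector<Privilege>* out) = 0;
    virtual ~Command() = default;
};

struct CmdDropDatabase : Command {
    void addRequiredPrivileges(const std::string& /*dbname*/,
                               std::vector<Privilege>* out) override {
        ActionSet actions;
        actions.addAction(ActionType::dropDatabase);
        out->push_back(Privilege{CLUSTER_RESOURCE_NAME, actions});
    }
};

int main() {
    std::vector<Privilege> required;
    CmdDropDatabase{}.addRequiredPrivileges("test", &required);
    std::cout << "resource: " << required[0].resource << '\n';  // $CLUSTER
}
```

The design choice worth noting is that authorization data lives with each command rather than in a central table, which is why the commit's TODO plans to drop the permissive default implementation once every subclass declares its own.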
mmm a / test / core / client_channel / service_config_test . cc <nl> ppp b / test / core / client_channel / service_config_test . cc <nl> TEST_F ( MessageSizeParserTest , InvalidMaxResponseMessageBytes ) { <nl> } / / namespace grpc_core <nl> <nl> int main ( int argc , char * * argv ) { <nl> - / / Regexes don ' t work in gcc4 . 8 and below , so just skip testing in those cases <nl> - # if defined ( __GNUC__ ) & & \ <nl> - ( ( __GNUC__ < 4 ) | | ( ( __GNUC__ = = 4 ) & & ( __GNUC_MINOR__ ) < = 8 ) ) <nl> + / / Regexes don ' t work in old libstdc + + versions , so just skip testing in those <nl> + / / cases <nl> + # if defined ( __GLIBCXX__ ) & & ( __GLIBCXX__ < = 20150623 ) <nl> + gpr_log ( GPR_ERROR , <nl> + " Skipping service_config_test since std : : regex is not supported on " <nl> + " this system . " ) ; <nl> return 0 ; <nl> # endif <nl> grpc : : testing : : TestEnvironment env ( argc , argv ) ; <nl>
Skip running service config test on older libstdc + + versions and log to ERROR
grpc/grpc
3883c577f1169f827f2ba7341a19699e93a0aeb4
2019-10-15T18:48:42Z
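The guard keys off libstdc++'s `__GLIBCXX__` date macro rather than `__GNUC__`, since the broken `std::regex` is a property of the library, not the compiler front end. A standalone sketch of the same gate, assuming only that the date constant `20150623` from the diff marks the last affected release:

```cpp
#include <cstdio>
#include <regex>

int main() {
    // Old libstdc++ releases (the GCC <= 4.8 era) ship a non-functional
    // std::regex; skip and log instead of failing, mirroring the diff.
#if defined(__GLIBCXX__) && (__GLIBCXX__ <= 20150623)
    std::fprintf(stderr,
                 "Skipping test: std::regex is not supported on this libstdc++.\n");
    return 0;
#else
    std::regex pattern("grpc\\.service_config");
    return std::regex_search("grpc.service_config", pattern) ? 0 : 1;
#endif
}
```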
mmm a / tensorflow / contrib / data / python / kernel_tests / BUILD <nl> ppp b / tensorflow / contrib / data / python / kernel_tests / BUILD <nl> py_test ( <nl> " nomac " , # b / 62040583 <nl> ] , <nl> deps = [ <nl> + " : dataset_serialization_test " , <nl> " / / tensorflow / contrib / data / python / ops : dataset_ops " , <nl> - " / / tensorflow / contrib / data / python / ops : iterator_ops " , <nl> " / / tensorflow / contrib / data / python / ops : transformation_ops " , <nl> " / / tensorflow / core : protos_all_py " , <nl> " / / tensorflow / python : array_ops " , <nl> py_test ( <nl> " / / tensorflow / python : resource_variable_ops " , <nl> " / / tensorflow / python : session " , <nl> " / / tensorflow / python : sparse_tensor " , <nl> - " / / tensorflow / python : training " , <nl> " / / tensorflow / python / data / util : nest " , <nl> " / / third_party / py / numpy " , <nl> ] , <nl> py_library ( <nl> visibility = [ " / / visibility : private " ] , <nl> deps = [ <nl> " / / tensorflow / contrib / data / python / ops : iterator_ops " , <nl> + " / / tensorflow / python : client_testlib " , <nl> " / / tensorflow / python : errors " , <nl> " / / tensorflow / python : framework_ops " , <nl> " / / tensorflow / python : platform " , <nl> + " / / tensorflow / python : sparse_tensor " , <nl> " / / tensorflow / python : training " , <nl> " / / tensorflow / python : util " , <nl> " / / third_party / py / numpy " , <nl> mmm a / tensorflow / contrib / data / python / kernel_tests / dataset_constructor_op_test . py <nl> ppp b / tensorflow / contrib / data / python / kernel_tests / dataset_constructor_op_test . py <nl> <nl> from __future__ import division <nl> from __future__ import print_function <nl> <nl> - import os <nl> import threading <nl> <nl> import numpy as np <nl> <nl> + from tensorflow . contrib . data . python . kernel_tests import dataset_serialization_test_base <nl> from tensorflow . contrib . data . python . ops import batching <nl> from tensorflow . contrib . data . python . ops import dataset_ops <nl> - from tensorflow . contrib . data . python . ops import iterator_ops <nl> from tensorflow . core . protobuf import config_pb2 <nl> from tensorflow . python . client import session <nl> from tensorflow . python . data . util import nest <nl> <nl> from tensorflow . python . ops import math_ops <nl> from tensorflow . python . ops import resource_variable_ops <nl> from tensorflow . python . platform import test <nl> - from tensorflow . python . training import saver as saver_lib <nl> <nl> <nl> class DatasetConstructorTest ( test . TestCase ) : <nl> def testRestructureDataset ( self ) : <nl> new = batching . _RestructuredDataset ( dataset , new_types , new_shape_lists ) <nl> # pylint : enable = protected - access <nl> <nl> - def _iterator_checkpoint_prefix ( self ) : <nl> - return os . path . join ( self . get_temp_dir ( ) , " iterator " ) <nl> <nl> - def _testSaveRestoreFromTensorsUtility ( self , start , break_range , stop ) : <nl> - path = self . _iterator_checkpoint_prefix ( ) <nl> - step = 0 <nl> - meta_filename = path + " - % d . meta " % step <nl> + class DatasetConstructorSerializationTest ( <nl> + dataset_serialization_test_base . DatasetSerializationTestBase ) : <nl> <nl> - components = ( np . array ( 1 ) , np . array ( [ 1 , 2 , 3 ] ) , np . array ( 37 . 0 ) ) <nl> + def _build_tensor_dataset ( self , variable_array ) : <nl> + components = ( variable_array , np . array ( [ 1 , 2 , 3 ] ) , np . array ( 37 . 0 ) ) <nl> <nl> - with ops . Graph ( ) . 
as_default ( ) as g : <nl> - iterator = ( <nl> - dataset_ops . Dataset . from_tensors ( components ) <nl> - . make_initializable_iterator ( ) ) <nl> - init_op = iterator . initializer <nl> - get_next = iterator . get_next ( ) <nl> - saveable = iterator_ops . make_saveable_from_iterator ( iterator ) <nl> - ops . add_to_collection ( ops . GraphKeys . SAVEABLE_OBJECTS , saveable ) <nl> - for t in nest . flatten ( get_next ) : <nl> - ops . add_to_collection ( " get_next " , t ) <nl> - saver = saver_lib . Saver ( ) <nl> - with self . test_session ( graph = g ) as sess : <nl> - sess . run ( init_op ) <nl> - for _ in range ( start , break_range ) : <nl> - result = sess . run ( get_next ) <nl> - for component , result_component in zip ( components , result ) : <nl> - self . assertAllEqual ( component , result_component ) <nl> - saver . save ( sess , path , step ) <nl> - <nl> - with ops . Graph ( ) . as_default ( ) as g : <nl> - saver = saver_lib . import_meta_graph ( meta_filename ) <nl> - with self . test_session ( graph = g ) as sess : <nl> - get_next = nest . pack_sequence_as ( ( " a " , " b " , " c " ) , <nl> - ops . get_collection ( " get_next " ) ) <nl> - saver . restore ( sess , saver_lib . latest_checkpoint ( self . get_temp_dir ( ) ) ) <nl> - for _ in range ( break_range , stop ) : <nl> - result = sess . run ( get_next ) <nl> - for component , result_component in zip ( components , result ) : <nl> - self . assertAllEqual ( component , result_component ) <nl> - with self . assertRaises ( errors . OutOfRangeError ) : <nl> - sess . run ( get_next ) <nl> + return dataset_ops . Dataset . from_tensors ( components ) <nl> <nl> - def testRestoreFromTensors ( self ) : <nl> - self . _testSaveRestoreFromTensorsUtility ( 0 , 0 , 1 ) <nl> + def testFromTensorsCore ( self ) : <nl> + # Equal length components <nl> + arr = np . array ( 1 ) <nl> + num_outputs = 1 <nl> + diff_arr = np . array ( 2 ) <nl> + self . run_core_tests ( lambda : self . _build_tensor_dataset ( arr ) , <nl> + lambda : self . _build_tensor_dataset ( diff_arr ) , <nl> + num_outputs ) <nl> <nl> - def testRestoreExhuatedIteratorFromTensors ( self ) : <nl> - self . _testSaveRestoreFromTensorsUtility ( 0 , 1 , 1 ) <nl> + def _build_tensor_slices_dataset ( self , components ) : <nl> + return dataset_ops . Dataset . from_tensor_slices ( components ) <nl> <nl> - def _build_graph_tensor_slices ( self , components ) : <nl> - iterator = dataset_ops . Dataset . from_tensor_slices ( <nl> - components ) . make_initializable_iterator ( ) <nl> - init_op = iterator . initializer <nl> - get_next = iterator . get_next ( ) <nl> - saveable = iterator_ops . make_saveable_from_iterator ( iterator ) <nl> - ops . add_to_collection ( ops . GraphKeys . SAVEABLE_OBJECTS , saveable ) <nl> - for t in nest . flatten ( get_next ) : <nl> - ops . add_to_collection ( " get_next " , t ) <nl> - return init_op , get_next <nl> - <nl> - def _testSaveRestoreFromTensorSlicesUtility ( self , start , break_range , stop ) : <nl> - path = self . _iterator_checkpoint_prefix ( ) <nl> - step = 0 <nl> - meta_filename = path + " - % d . meta " % step <nl> - <nl> - components = ( np . tile ( np . array ( [ [ 1 ] , [ 2 ] , [ 3 ] , [ 4 ] ] ) , 20 ) , np . tile ( <nl> - np . array ( [ [ 12 ] , [ 13 ] , [ 14 ] , [ 15 ] ] ) , 22 ) , <nl> + def testFromTensorSlicesCore ( self ) : <nl> + # Equal length components <nl> + components = ( np . tile ( np . array ( [ [ 1 ] , [ 2 ] , [ 3 ] , [ 4 ] ] ) , 20 ) , <nl> + np . tile ( np . 
array ( [ [ 12 ] , [ 13 ] , [ 14 ] , [ 15 ] ] ) , 22 ) , <nl> np . array ( [ 37 . 0 , 38 . 0 , 39 . 0 , 40 . 0 ] ) ) <nl> <nl> - with ops . Graph ( ) . as_default ( ) as g : <nl> - init_op , get_next = self . _build_graph_tensor_slices ( components ) <nl> - saver = saver_lib . Saver ( ) <nl> - with self . test_session ( graph = g ) as sess : <nl> - sess . run ( init_op ) <nl> - for i in range ( start , break_range ) : <nl> - result = sess . run ( get_next ) <nl> - for component , result_component in zip ( components , result ) : <nl> - self . assertAllEqual ( component [ i ] , result_component ) <nl> - saver . save ( sess , path , step ) <nl> - <nl> - with ops . Graph ( ) . as_default ( ) as g : <nl> - saver = saver_lib . import_meta_graph ( meta_filename ) <nl> - with self . test_session ( graph = g ) as sess : <nl> - get_next = nest . pack_sequence_as ( ( " a " , " b " , " c " ) , <nl> - ops . get_collection ( " get_next " ) ) <nl> - saver . restore ( sess , saver_lib . latest_checkpoint ( self . get_temp_dir ( ) ) ) <nl> - for i in range ( break_range , stop ) : <nl> - result = sess . run ( get_next ) <nl> - for component , result_component in zip ( components , result ) : <nl> - self . assertAllEqual ( component [ i ] , result_component ) <nl> - with self . assertRaises ( errors . OutOfRangeError ) : <nl> - sess . run ( get_next ) <nl> - <nl> - def testRestoreFromTensorSlices ( self ) : <nl> - self . _testSaveRestoreFromTensorSlicesUtility ( 0 , 4 , 2 ) <nl> - <nl> - def testRestoreExhaustedIteratorFromTensorSlices ( self ) : <nl> - self . _testSaveRestoreFromTensorSlicesUtility ( 0 , 4 , 4 ) <nl> - <nl> - def tesRestoreFromTensorSlicesWithDict ( self ) : <nl> - <nl> - path = self . _iterator_checkpoint_prefix ( ) <nl> - step = 0 <nl> - meta_filename = path + " - % d . meta " % step <nl> - <nl> - components = { " foo " : [ 1 , 2 , 3 ] , " bar " : [ [ 4 . 0 ] , [ 5 . 0 ] , [ 6 . 0 ] ] } <nl> - <nl> - with ops . Graph ( ) . as_default ( ) as g : <nl> - init_op , get_next = self . _build_graph_tensor_slices ( components ) <nl> - saver = saver_lib . Saver ( ) <nl> - with self . test_session ( graph = g ) as sess : <nl> - sess . run ( init_op ) <nl> - for i in range ( 2 ) : <nl> - results = sess . run ( get_next ) <nl> - self . assertEqual ( components [ " foo " ] [ i ] , results [ " foo " ] ) <nl> - self . assertEqual ( components [ " bar " ] [ i ] , results [ " bar " ] ) <nl> - saver . save ( sess , path , step ) <nl> - <nl> - with ops . Graph ( ) . as_default ( ) as g : <nl> - saver = saver_lib . import_meta_graph ( meta_filename ) <nl> - with self . test_session ( graph = g ) as sess : <nl> - get_next = nest . pack_sequence_as ( ( " a " , " b " ) , <nl> - ops . get_collection ( " get_next " ) ) <nl> - saver . restore ( sess , saver_lib . latest_checkpoint ( self . get_temp_dir ( ) ) ) <nl> - for i in range ( 2 , 3 ) : <nl> - results = sess . run ( get_next ) <nl> - self . assertEqual ( components [ " foo " ] [ i ] , results [ " foo " ] ) <nl> - self . assertEqual ( components [ " bar " ] [ i ] , results [ " bar " ] ) <nl> - with self . assertRaises ( errors . OutOfRangeError ) : <nl> - sess . run ( get_next ) <nl> + diff_comp = ( np . tile ( np . array ( [ [ 1 ] , [ 2 ] , [ 3 ] , [ 4 ] ] ) , 20 ) , <nl> + np . tile ( np . array ( [ [ 5 ] , [ 6 ] , [ 7 ] , [ 8 ] ] ) , 22 ) , <nl> + np . array ( [ 1 . 0 , 2 . 0 , 3 . 0 , 4 . 0 ] ) ) <nl> + <nl> + dict_components = { " foo " : [ 1 , 2 , 3 ] , " bar " : [ [ 4 . 0 ] , [ 5 . 0 ] , [ 6 . 0 ] ] } <nl> + <nl> + self . 
run_core_tests ( lambda : self . _build_tensor_slices_dataset ( components ) , <nl> + lambda : self . _build_tensor_slices_dataset ( diff_comp ) , 4 ) <nl> + self . run_core_tests ( <nl> + lambda : self . _build_tensor_slices_dataset ( dict_components ) , None , 3 ) <nl> + <nl> + def _build_sparse_tensor_slice_dataset ( self , slices ) : <nl> + indices = np . array ( <nl> + [ [ i , j ] for i in range ( len ( slices ) ) for j in range ( len ( slices [ i ] ) ) ] , <nl> + dtype = np . int64 ) <nl> + values = np . array ( [ val for s in slices for val in s ] , dtype = np . float64 ) <nl> + dense_shape = np . array ( <nl> + [ len ( slices ) , max ( len ( s ) for s in slices ) + 1 ] , dtype = np . int64 ) <nl> + sparse_components = sparse_tensor . SparseTensor ( indices , values , dense_shape ) <nl> + return dataset_ops . Dataset . from_sparse_tensor_slices ( sparse_components ) <nl> + <nl> + def testFromSparseTensorSlicesCore ( self ) : <nl> + slices = [ [ 1 . , 2 . , 3 . ] , [ 1 . ] , [ 1 . ] , [ 1 . , 2 . ] , [ ] , [ 1 . , 2 . ] , [ ] , [ ] , [ ] ] <nl> + diff_slices = [ [ 1 . , 2 . ] , [ 2 . ] , [ 2 . , 3 . , 4 . ] , [ ] , [ ] , [ ] ] <nl> + <nl> + self . run_core_tests ( <nl> + lambda : self . _build_sparse_tensor_slice_dataset ( slices ) , <nl> + lambda : self . _build_sparse_tensor_slice_dataset ( diff_slices ) , <nl> + 9 , <nl> + sparse_tensors = True ) <nl> <nl> <nl> if __name__ = = " __main__ " : <nl> mmm a / tensorflow / contrib / data / python / kernel_tests / dataset_serialization_test_base . py <nl> ppp b / tensorflow / contrib / data / python / kernel_tests / dataset_serialization_test_base . py <nl> <nl> from tensorflow . contrib . data . python . ops import iterator_ops as contrib_iterator_ops <nl> from tensorflow . python . framework import errors <nl> from tensorflow . python . framework import ops <nl> + from tensorflow . python . framework import sparse_tensor <nl> from tensorflow . python . platform import gfile <nl> from tensorflow . python . platform import test <nl> from tensorflow . python . training import saver as saver_lib <nl> class DatasetSerializationTestBase ( test . TestCase ) : <nl> def tearDown ( self ) : <nl> self . _delete_ckpt ( ) <nl> <nl> - def run_core_tests ( self , ds_fn1 , ds_fn2 , num_outputs ) : <nl> + def run_core_tests ( self , ds_fn1 , ds_fn2 , num_outputs , sparse_tensors = False ) : <nl> " " " Runs the core tests . <nl> <nl> Args : <nl> def run_core_tests ( self , ds_fn1 , ds_fn2 , num_outputs ) : <nl> ds_fn2 : 0 - argument function that returns a Dataset different from <nl> ds_fn1 . If None , verify_restore_in_modified_graph test is not run . <nl> num_outputs : Total number of outputs expected from this Dataset . <nl> + sparse_tensors : Whether dataset is built from SparseTensor ( s ) . <nl> <nl> Raises : <nl> AssertionError if any test fails . <nl> " " " <nl> - self . verify_unused_iterator ( ds_fn1 , num_outputs ) <nl> - self . verify_fully_used_iterator ( ds_fn1 , num_outputs ) <nl> - self . verify_exhausted_iterator ( ds_fn1 , num_outputs ) <nl> - self . verify_init_before_restore ( ds_fn1 , num_outputs ) <nl> - self . verify_multiple_breaks ( ds_fn1 , num_outputs ) <nl> - self . verify_reset_restored_iterator ( ds_fn1 , num_outputs ) <nl> + self . verify_unused_iterator ( <nl> + ds_fn1 , num_outputs , sparse_tensors = sparse_tensors ) <nl> + self . verify_fully_used_iterator ( <nl> + ds_fn1 , num_outputs , sparse_tensors = sparse_tensors ) <nl> + self . 
verify_exhausted_iterator ( <nl> + ds_fn1 , num_outputs , sparse_tensors = sparse_tensors ) <nl> + self . verify_init_before_restore ( <nl> + ds_fn1 , num_outputs , sparse_tensors = sparse_tensors ) <nl> + self . verify_multiple_breaks ( <nl> + ds_fn1 , num_outputs , sparse_tensors = sparse_tensors ) <nl> + self . verify_reset_restored_iterator ( <nl> + ds_fn1 , num_outputs , sparse_tensors = sparse_tensors ) <nl> if ds_fn2 : <nl> - self . verify_restore_in_modified_graph ( ds_fn1 , ds_fn2 , num_outputs ) <nl> + self . verify_restore_in_modified_graph ( <nl> + ds_fn1 , ds_fn2 , num_outputs , sparse_tensors = sparse_tensors ) <nl> <nl> - def verify_unused_iterator ( self , ds_fn , num_outputs , verify_exhausted = True ) : <nl> + def verify_unused_iterator ( self , <nl> + ds_fn , <nl> + num_outputs , <nl> + sparse_tensors = False , <nl> + verify_exhausted = True ) : <nl> " " " Verifies that saving and restoring an unused iterator works . <nl> <nl> Args : <nl> ds_fn : See ` run_core_tests ` . <nl> num_outputs : See ` run_core_tests ` . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> verify_exhausted : See ` gen_outputs ` . <nl> <nl> Raises : <nl> AssertionError if any test fails . <nl> " " " <nl> self . verify_run_with_breaks ( <nl> - ds_fn , [ 0 ] , num_outputs , verify_exhausted = verify_exhausted ) <nl> + ds_fn , [ 0 ] , <nl> + num_outputs , <nl> + sparse_tensors = sparse_tensors , <nl> + verify_exhausted = verify_exhausted ) <nl> <nl> - def verify_fully_used_iterator ( self , ds_fn , num_outputs ) : <nl> + def verify_fully_used_iterator ( self , ds_fn , num_outputs , <nl> + sparse_tensors = False ) : <nl> " " " Verifies that saving and restoring a fully used iterator works . <nl> <nl> Note that this only checks saving and restoring an iterator from which <nl> def verify_fully_used_iterator ( self , ds_fn , num_outputs ) : <nl> Args : <nl> ds_fn : See ` run_core_tests ` . <nl> num_outputs : See ` run_core_tests ` . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> <nl> Raises : <nl> AssertionError if test fails . <nl> " " " <nl> - self . verify_run_with_breaks ( ds_fn , [ num_outputs ] , num_outputs ) <nl> + self . verify_run_with_breaks ( <nl> + ds_fn , [ num_outputs ] , num_outputs , sparse_tensors = sparse_tensors ) <nl> <nl> - def verify_exhausted_iterator ( self , ds_fn , num_outputs ) : <nl> + def verify_exhausted_iterator ( self , ds_fn , num_outputs , sparse_tensors = False ) : <nl> " " " Verifies that saving and restoring an exhausted iterator works . <nl> <nl> An exhausted iterator is one which has returned an OutOfRange error . <nl> def verify_exhausted_iterator ( self , ds_fn , num_outputs ) : <nl> Args : <nl> ds_fn : See ` run_core_tests ` . <nl> num_outputs : See ` run_core_tests ` . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> <nl> Raises : <nl> AssertionError if any test fails . <nl> " " " <nl> - self . gen_outputs ( ds_fn , [ ] , num_outputs , verify_exhausted = True ) <nl> + self . gen_outputs ( <nl> + ds_fn , [ ] , <nl> + num_outputs , <nl> + verify_exhausted = True , <nl> + sparse_tensors = sparse_tensors ) <nl> actual = self . gen_outputs ( <nl> - ds_fn , [ ] , 0 , ckpt_saved = True , verify_exhausted = True ) <nl> + ds_fn , [ ] , <nl> + 0 , <nl> + ckpt_saved = True , <nl> + verify_exhausted = True , <nl> + sparse_tensors = sparse_tensors ) <nl> self . 
assertEqual ( len ( actual ) , 0 ) <nl> <nl> def verify_init_before_restore ( self , <nl> ds_fn , <nl> num_outputs , <nl> + sparse_tensors = False , <nl> verify_exhausted = True ) : <nl> - " " " Verifies that retoring into an already initilized iterator works . <nl> + " " " Verifies that restoring into an already initilized iterator works . <nl> <nl> Args : <nl> ds_fn : See ` run_core_tests ` . <nl> num_outputs : See ` run_core_tests ` . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> verify_exhausted : See ` gen_outputs ` . <nl> <nl> Raises : <nl> def verify_init_before_restore ( self , <nl> self . gen_break_points ( num_outputs ) , <nl> num_outputs , <nl> init_before_restore = True , <nl> + sparse_tensors = sparse_tensors , <nl> verify_exhausted = verify_exhausted ) <nl> <nl> def verify_multiple_breaks ( self , <nl> ds_fn , <nl> num_outputs , <nl> num_breaks = 10 , <nl> + sparse_tensors = False , <nl> verify_exhausted = True ) : <nl> " " " Attempts to save / restore at multiple break points . <nl> <nl> def verify_multiple_breaks ( self , <nl> num_outputs : See ` run_core_tests ` . <nl> num_breaks : The number of break points . These are uniformly spread in <nl> [ 0 , num_outputs ] both inclusive . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> verify_exhausted : See ` gen_outputs ` . <nl> <nl> Raises : <nl> def verify_multiple_breaks ( self , <nl> " " " <nl> self . verify_run_with_breaks ( <nl> ds_fn , <nl> - self . gen_break_points ( num_outputs ) , <nl> + self . gen_break_points ( num_outputs , num_breaks ) , <nl> num_outputs , <nl> + sparse_tensors = sparse_tensors , <nl> verify_exhausted = verify_exhausted ) <nl> <nl> def verify_reset_restored_iterator ( self , <nl> ds_fn , <nl> num_outputs , <nl> break_point = None , <nl> + sparse_tensors = False , <nl> verify_exhausted = True ) : <nl> " " " Attempts to re - initialize a restored iterator . <nl> <nl> def verify_reset_restored_iterator ( self , <nl> ds_fn : See ` run_core_tests ` . <nl> num_outputs : See ` run_core_tests ` . <nl> break_point : Break point . Optional . Defaults to num_outputs / 2 . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> verify_exhausted : See ` gen_outputs ` . <nl> <nl> Raises : <nl> def verify_reset_restored_iterator ( self , <nl> <nl> # Collect ground truth containing all outputs . <nl> expected = self . gen_outputs ( <nl> - ds_fn , [ ] , num_outputs , verify_exhausted = verify_exhausted ) <nl> + ds_fn , [ ] , <nl> + num_outputs , <nl> + sparse_tensors = sparse_tensors , <nl> + verify_exhausted = verify_exhausted ) <nl> <nl> # Skip some items and save checkpoint . <nl> - self . gen_outputs ( ds_fn , [ ] , break_point , verify_exhausted = False ) <nl> + self . gen_outputs ( <nl> + ds_fn , [ ] , <nl> + break_point , <nl> + sparse_tensors = sparse_tensors , <nl> + verify_exhausted = False ) <nl> <nl> actual = [ ] <nl> # Restore from checkpoint and then run init_op . <nl> with ops . Graph ( ) . as_default ( ) as g : <nl> saver = self . _import_meta_graph ( ) <nl> - init_op , get_next_op = self . _get_iterator_ops_from_collection ( ds_fn ) <nl> + init_op , get_next_op = self . _get_iterator_ops_from_collection ( <nl> + ds_fn , sparse_tensors = sparse_tensors ) <nl> with self . test_session ( graph = g ) as sess : <nl> self . _restore ( saver , sess ) <nl> sess . 
run ( init_op ) <nl> def verify_restore_in_modified_graph ( self , <nl> ds_fn2 , <nl> num_outputs , <nl> break_point = None , <nl> + sparse_tensors = False , <nl> verify_exhausted = True ) : <nl> " " " Attempts to restore an iterator in a modified graph . <nl> <nl> def verify_restore_in_modified_graph ( self , <nl> ds_fn2 : See ` run_core_tests ` . <nl> num_outputs : See ` run_core_tests ` . <nl> break_point : Break point . Optional . Defaults to num_outputs / 2 . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> verify_exhausted : See ` gen_outputs ` . <nl> <nl> Raises : <nl> def verify_restore_in_modified_graph ( self , <nl> <nl> # Skip ` break_point ` items and store the remaining produced from ds_fn1 <nl> # in ` expected ` . <nl> - self . gen_outputs ( ds_fn1 , [ ] , break_point , verify_exhausted = False ) <nl> + self . gen_outputs ( <nl> + ds_fn1 , [ ] , <nl> + break_point , <nl> + sparse_tensors = sparse_tensors , <nl> + verify_exhausted = False ) <nl> expected = self . gen_outputs ( <nl> ds_fn1 , [ ] , <nl> num_outputs - break_point , <nl> ckpt_saved = True , <nl> + sparse_tensors = sparse_tensors , <nl> verify_exhausted = verify_exhausted ) <nl> <nl> # Generate ` break_point ` items from ds_fn1 and save checkpoint . <nl> - self . gen_outputs ( ds_fn1 , [ ] , break_point , verify_exhausted = False ) <nl> + self . gen_outputs ( <nl> + ds_fn1 , [ ] , <nl> + break_point , <nl> + sparse_tensors = sparse_tensors , <nl> + verify_exhausted = False ) <nl> <nl> actual = [ ] <nl> # Build graph for ds_fn2 but load checkpoint for ds_fn1 . <nl> with ops . Graph ( ) . as_default ( ) as g : <nl> - _ , get_next_op , saver = self . _build_graph ( ds_fn2 ) <nl> + _ , get_next_op , saver = self . _build_graph ( <nl> + ds_fn2 , sparse_tensors = sparse_tensors ) <nl> with self . test_session ( graph = g ) as sess : <nl> self . _restore ( saver , sess ) <nl> for _ in range ( num_outputs - break_point ) : <nl> def verify_run_with_breaks ( self , <nl> ds_fn , <nl> break_points , <nl> num_outputs , <nl> - verify_exhausted = True , <nl> - init_before_restore = False ) : <nl> + init_before_restore = False , <nl> + sparse_tensors = False , <nl> + verify_exhausted = True ) : <nl> " " " Verifies that ds_fn ( ) produces the same outputs with and without breaks . <nl> <nl> 1 . Builds a Dataset using ` ds_fn ` and produces ` num_outputs ` items from it <nl> def verify_run_with_breaks ( self , <nl> ds_fn : See ` gen_outputs ` . <nl> break_points : See ` gen_outputs ` . <nl> num_outputs : See ` gen_outputs ` . <nl> - verify_exhausted : See ` gen_outputs ` . <nl> init_before_restore : See ` gen_outputs ` . <nl> + sparse_tensors : See ` run_core_tests ` . <nl> + verify_exhausted : See ` gen_outputs ` . <nl> <nl> Raises : <nl> AssertionError if any test fails . <nl> def verify_run_with_breaks ( self , <nl> expected = self . gen_outputs ( <nl> ds_fn , [ ] , <nl> num_outputs , <nl> - verify_exhausted = verify_exhausted , <nl> - init_before_restore = init_before_restore ) <nl> + init_before_restore = init_before_restore , <nl> + sparse_tensors = sparse_tensors , <nl> + verify_exhausted = verify_exhausted ) <nl> + <nl> actual = self . gen_outputs ( <nl> ds_fn , <nl> break_points , <nl> num_outputs , <nl> - verify_exhausted = verify_exhausted , <nl> - init_before_restore = init_before_restore ) <nl> + init_before_restore = init_before_restore , <nl> + sparse_tensors = sparse_tensors , <nl> + verify_exhausted = verify_exhausted ) <nl> + <nl> self . 
match ( expected , actual ) <nl> <nl> def gen_outputs ( self , <nl> def gen_outputs ( self , <nl> num_outputs , <nl> ckpt_saved = False , <nl> init_before_restore = False , <nl> + sparse_tensors = False , <nl> verify_exhausted = True ) : <nl> " " " Generates elements from input dataset while stopping at break points . <nl> <nl> def gen_outputs ( self , <nl> init_before_restore : Whether init should be called before saver . restore . <nl> This is just so that we can verify that restoring an already initialized <nl> iterator works . <nl> + sparse_tensors : Whether dataset is built from SparseTensor ( s ) . <nl> verify_exhausted : Whether to verify that the iterator has been exhausted <nl> after producing ` num_outputs ` elements . <nl> <nl> def gen_outputs ( self , <nl> def get_ops ( ) : <nl> if ckpt_saved : <nl> saver = self . _import_meta_graph ( ) <nl> - init_op , get_next_op = self . _get_iterator_ops_from_collection ( ds_fn ) <nl> + init_op , get_next_op = self . _get_iterator_ops_from_collection ( <nl> + ds_fn , sparse_tensors = sparse_tensors ) <nl> else : <nl> - init_op , get_next_op , saver = self . _build_graph ( ds_fn ) <nl> + init_op , get_next_op , saver = self . _build_graph ( <nl> + ds_fn , sparse_tensors = sparse_tensors ) <nl> return init_op , get_next_op , saver <nl> <nl> for i in range ( len ( break_points ) + 1 ) : <nl> def match ( self , expected , actual ) : <nl> if nest . is_sequence ( expected ) : <nl> self . assertEqual ( len ( expected ) , len ( actual ) ) <nl> if isinstance ( expected , dict ) : <nl> - for key1 , key2 in sorted ( expected , actual ) : <nl> + for key1 , key2 in zip ( sorted ( expected ) , sorted ( actual ) ) : <nl> self . assertEqual ( key1 , key2 ) <nl> self . match ( expected [ key1 ] , actual [ key2 ] ) <nl> else : <nl> def gen_break_points ( self , num_outputs , num_samples = 10 ) : <nl> " " " Generates ` num_samples ` breaks points in [ 0 , num_outputs ] . " " " <nl> return np . linspace ( 0 , num_outputs , num_samples , dtype = int ) <nl> <nl> - def _build_graph ( self , ds_fn ) : <nl> + def _build_graph ( self , ds_fn , sparse_tensors = False ) : <nl> iterator = ds_fn ( ) . make_initializable_iterator ( ) <nl> <nl> saveable = contrib_iterator_ops . make_saveable_from_iterator ( iterator ) <nl> ops . add_to_collection ( ops . GraphKeys . SAVEABLE_OBJECTS , saveable ) <nl> init_op = iterator . initializer <nl> - get_next = iterator . get_next ( ) <nl> - self . _add_iterator_ops_to_collection ( init_op , get_next ) <nl> + if sparse_tensors : <nl> + get_next = sparse_tensor . SparseTensor ( * iterator . get_next ( ) ) <nl> + else : <nl> + get_next = iterator . get_next ( ) <nl> + self . _add_iterator_ops_to_collection ( init_op , get_next , sparse_tensors ) <nl> saver = saver_lib . Saver ( allow_empty = True ) <nl> return init_op , get_next , saver <nl> <nl> - def _add_iterator_ops_to_collection ( self , init_op , get_next ) : <nl> + def _add_iterator_ops_to_collection ( self , <nl> + init_op , <nl> + get_next , <nl> + sparse_tensors = False ) : <nl> ops . add_to_collection ( " iterator_ops " , init_op ) <nl> # ` get_next ` may be a tuple e . g . in TensorSliceDataset . Since Collections <nl> # do not support tuples we flatten the tensors and restore the shape in <nl> # ` _get_iterator_ops_from_collection ` . <nl> - for el in nest . flatten ( get_next ) : <nl> - ops . add_to_collection ( " iterator_ops " , el ) <nl> + if sparse_tensors : <nl> + ops . add_to_collection ( " iterator_ops " , get_next . indices ) <nl> + ops . 
add_to_collection ( " iterator_ops " , get_next . values ) <nl> + ops . add_to_collection ( " iterator_ops " , get_next . dense_shape ) <nl> + else : <nl> + for el in nest . flatten ( get_next ) : <nl> + ops . add_to_collection ( " iterator_ops " , el ) <nl> <nl> - def _get_iterator_ops_from_collection ( self , ds_fn ) : <nl> + def _get_iterator_ops_from_collection ( self , ds_fn , sparse_tensors = False ) : <nl> all_ops = ops . get_collection ( " iterator_ops " ) <nl> - return all_ops [ 0 ] , nest . pack_sequence_as ( <nl> - self . _get_output_types ( ds_fn ) , all_ops [ 1 : ] ) <nl> + if sparse_tensors : <nl> + init_op , indices , values , dense_shape = all_ops <nl> + return init_op , sparse_tensor . SparseTensor ( indices , values , dense_shape ) <nl> + else : <nl> + return all_ops [ 0 ] , nest . pack_sequence_as ( <nl> + self . _get_output_types ( ds_fn ) , all_ops [ 1 : ] ) <nl> <nl> def _get_output_types ( self , ds_fn ) : <nl> with ops . Graph ( ) . as_default ( ) : <nl> mmm a / tensorflow / core / kernels / dataset . h <nl> ppp b / tensorflow / core / kernels / dataset . h <nl> class GraphDefBuilderWrapper { <nl> / / ` * output ` contains a pointer to the output ` Node ` . It is guaranteed to be <nl> / / non - null if the method returns with an OK status . <nl> / / The returned Node pointer is owned by the backing Graph of GraphDefBuilder . <nl> + / / TODO ( shivaniagrawal ) : Consider changing to gtl : : ArraySlice ? <nl> template < typename T > <nl> Status AddVector ( const std : : vector < T > & val , Node * * output ) { <nl> Tensor val_t = Tensor ( DataTypeToEnum < T > : : v ( ) , <nl> mmm a / tensorflow / core / kernels / sparse_tensor_slice_dataset_op . cc <nl> ppp b / tensorflow / core / kernels / sparse_tensor_slice_dataset_op . cc <nl> namespace { <nl> / / description of the following op . <nl> <nl> template < typename T > <nl> - class Dataset : public DatasetBase { <nl> + class Dataset : public GraphDatasetBase { <nl> public : <nl> - explicit Dataset ( const sparse : : SparseTensor & sparse_tensor ) <nl> - : sparse_tensor_ ( sparse_tensor ) , <nl> + explicit Dataset ( OpKernelContext * ctx , <nl> + const sparse : : SparseTensor & sparse_tensor ) <nl> + : GraphDatasetBase ( ctx ) , <nl> + sparse_tensor_ ( sparse_tensor ) , <nl> dtypes_ ( { DT_INT64 , sparse_tensor . dtype ( ) , DT_INT64 } ) , <nl> shapes_ ( { { - 1 , sparse_tensor . dims ( ) - 1 } , <nl> { - 1 } , <nl> class Dataset : public DatasetBase { <nl> return " SparseTensorSliceDatasetOp : : Dataset " ; <nl> } <nl> <nl> + protected : <nl> + Status AsGraphDefInternal ( DatasetGraphDefBuilder * b , <nl> + Node * * output ) const override { <nl> + Node * indices_node ; <nl> + TF_RETURN_IF_ERROR ( b - > AddTensor ( sparse_tensor_ . indices ( ) , & indices_node ) ) ; <nl> + Node * value_node ; <nl> + TF_RETURN_IF_ERROR ( b - > AddTensor ( sparse_tensor_ . values ( ) , & value_node ) ) ; <nl> + Node * dense_shape_node ; <nl> + std : : vector < int64 > dense_shape ; <nl> + dense_shape . reserve ( sparse_tensor_ . shape ( ) . size ( ) ) ; <nl> + for ( int i = 0 ; i < sparse_tensor_ . shape ( ) . size ( ) ; i + + ) <nl> + dense_shape . emplace_back ( sparse_tensor_ . shape ( ) [ i ] ) ; <nl> + TF_RETURN_IF_ERROR ( b - > AddVector ( dense_shape , & dense_shape_node ) ) ; <nl> + AttrValue val_dtype ; <nl> + b - > BuildAttrValue ( sparse_tensor_ . 
dtype ( ) , & val_dtype ) ; <nl> + TF_RETURN_IF_ERROR ( <nl> + b - > AddDataset ( this , { indices_node , value_node , dense_shape_node } , <nl> + { { " Tvalues " , val_dtype } } , output ) ) ; <nl> + return Status : : OK ( ) ; <nl> + } <nl> + <nl> private : <nl> class Iterator : public DatasetIterator < Dataset < T > > { <nl> public : <nl> class Dataset : public DatasetBase { <nl> <nl> + + iter_ ; <nl> } <nl> - <nl> if ( i_ = = next_non_empty_i_ ) { <nl> / / The current position is non - empty in the input <nl> / / ` SparseTensor ` , and we have already read the value from the <nl> class Dataset : public DatasetBase { <nl> return Status : : OK ( ) ; <nl> } <nl> <nl> + protected : <nl> + Status SaveInternal ( IteratorStateWriter * writer ) override { <nl> + mutex_lock l ( mu_ ) ; <nl> + TF_RETURN_IF_ERROR ( writer - > WriteScalar ( Iterator : : full_name ( " i " ) , i_ ) ) ; <nl> + TF_RETURN_IF_ERROR ( <nl> + writer - > WriteScalar ( Iterator : : full_name ( " iter_loc " ) , iter_ . loc ( ) ) ) ; <nl> + TF_RETURN_IF_ERROR ( writer - > WriteScalar ( <nl> + Iterator : : full_name ( " next_non_empty_i_ " ) , next_non_empty_i_ ) ) ; <nl> + if ( i_ < = next_non_empty_i_ ) { <nl> + TF_RETURN_IF_ERROR ( writer - > WriteTensor ( <nl> + Iterator : : full_name ( " next_indices_ " ) , next_indices_ ) ) ; <nl> + TF_RETURN_IF_ERROR ( writer - > WriteTensor ( <nl> + Iterator : : full_name ( " next_values_ " ) , next_values_ ) ) ; <nl> + } <nl> + return Status : : OK ( ) ; <nl> + } <nl> + <nl> + Status RestoreInternal ( OpKernelContext * ctx , <nl> + IteratorStateReader * reader ) override { <nl> + mutex_lock l ( mu_ ) ; <nl> + TF_RETURN_IF_ERROR ( reader - > ReadScalar ( Iterator : : full_name ( " i " ) , & i_ ) ) ; <nl> + int64 iter_loc ; <nl> + TF_RETURN_IF_ERROR ( <nl> + reader - > ReadScalar ( Iterator : : full_name ( " iter_loc " ) , & iter_loc ) ) ; <nl> + iter_ = group_iterable_ . at ( iter_loc ) ; <nl> + TF_RETURN_IF_ERROR ( reader - > ReadScalar ( <nl> + Iterator : : full_name ( " next_non_empty_i_ " ) , & next_non_empty_i_ ) ) ; <nl> + if ( i_ < = next_non_empty_i_ ) { <nl> + TF_RETURN_IF_ERROR ( reader - > ReadTensor ( <nl> + Iterator : : full_name ( " next_indices_ " ) , & next_indices_ ) ) ; <nl> + TF_RETURN_IF_ERROR ( reader - > ReadTensor ( <nl> + Iterator : : full_name ( " next_values_ " ) , & next_values_ ) ) ; <nl> + } <nl> + return Status : : OK ( ) ; <nl> + } <nl> + <nl> private : <nl> const int64 num_elements_ ; <nl> <nl> class SparseTensorSliceDatasetOp : public DatasetOpKernel { <nl> sparse : : SparseTensor sparse_tensor ( <nl> * indices , * values , TensorShape ( dense_shape - > vec < int64 > ( ) ) , std_order ) ; <nl> <nl> - * output = new Dataset < T > ( sparse_tensor ) ; <nl> + * output = new Dataset < T > ( ctx , sparse_tensor ) ; <nl> } <nl> <nl> private : <nl> mmm a / tensorflow / core / util / sparse / group_iterator . h <nl> ppp b / tensorflow / core / util / sparse / group_iterator . h <nl> class GroupIterable { <nl> class IteratorStep ; <nl> <nl> IteratorStep begin ( ) { return IteratorStep ( this , 0 ) ; } <nl> + IteratorStep at ( int64 loc ) { <nl> + CHECK ( loc > = 0 & & loc < = ix_ . dim_size ( 0 ) ) <nl> + < < " loc provided must lie between 0 and " < < ix_ . dim_size ( 0 ) ; <nl> + return IteratorStep ( this , loc ) ; <nl> + } <nl> IteratorStep end ( ) { return IteratorStep ( this , ix_ . 
dim_size ( 0 ) ) ; } <nl> <nl> template < typename TIX > <nl> class GroupIterable { <nl> IteratorStep & operator + + ( ) ; / / prefix + + <nl> IteratorStep operator + + ( int ) ; / / postfix + + <nl> Group operator * ( ) const { return Group ( iter_ , loc_ , next_loc_ ) ; } <nl> + int64 loc ( ) const { return loc_ ; } <nl> <nl> private : <nl> GroupIterable * iter_ ; <nl>
Saveable iterator for Dataset . from_sparse_tensor_slice ( . . ) .
tensorflow/tensorflow
692ee62a63a54d9fe02b3ef6e4e62de490046719
2017-11-11T18:26:46Z
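On the C++ side, the commit follows the usual iterator-checkpoint recipe: `SaveInternal` persists the fields that fix the iterator's position (`i_`, the group iterator's `loc()`, and the buffered next element), and `RestoreInternal` reads them back, rebuilding the group iterator through the new `GroupIterable::at(loc)`. Below is a simplified, framework-free analogue of that recipe; `StateWriter`/`StateReader` are hypothetical stand-ins for TensorFlow's `IteratorStateWriter`/`IteratorStateReader`, not the real interfaces.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-ins for IteratorStateWriter / IteratorStateReader:
// a flat string -> int64 key/value store.
using StateWriter = std::map<std::string, int64_t>;
using StateReader = const std::map<std::string, int64_t>;

class SliceIterator {
 public:
    explicit SliceIterator(std::vector<int64_t> data) : data_(std::move(data)) {}

    bool GetNext(int64_t* out) {
        if (i_ >= static_cast<int64_t>(data_.size())) return false;
        *out = data_[i_++];
        return true;
    }

    // Persist only what is needed to resume: the cursor position.
    void Save(StateWriter* writer) const { (*writer)["i"] = i_; }

    // Rebuild the position from the checkpoint, like GroupIterable::at(loc).
    void Restore(StateReader& reader) { i_ = reader.at("i"); }

 private:
    std::vector<int64_t> data_;
    int64_t i_ = 0;
};

int main() {
    SliceIterator it({10, 20, 30, 40});
    int64_t v;
    it.GetNext(&v);           // consume element 0
    StateWriter ckpt;
    it.Save(&ckpt);           // checkpoint after one element

    SliceIterator restored({10, 20, 30, 40});
    restored.Restore(ckpt);   // resume at element 1
    while (restored.GetNext(&v)) std::cout << v << ' ';  // 20 30 40
    std::cout << '\n';
}
```

The test-side changes mirror this: `run_core_tests` gains a `sparse_tensors` flag because a `SparseTensor` element round-trips through collections as its three component tensors (indices, values, dense_shape) rather than as a single op.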
mmm a / src / core / arm / dyncom / arm_dyncom_interpreter . cpp <nl> ppp b / src / core / arm / dyncom / arm_dyncom_interpreter . cpp <nl> ARM_INST_PTR INTERPRETER_TRANSLATE ( ldrb ) ( unsigned int inst , int index ) <nl> } <nl> ARM_INST_PTR INTERPRETER_TRANSLATE ( ldrbt ) ( unsigned int inst , int index ) <nl> { <nl> - arm_inst * inst_base = ( arm_inst * ) AllocBuffer ( sizeof ( arm_inst ) + sizeof ( ldst_inst ) ) ; <nl> - ldst_inst * inst_cream = ( ldst_inst * ) inst_base - > component ; <nl> + arm_inst * inst_base = ( arm_inst * ) AllocBuffer ( sizeof ( arm_inst ) + sizeof ( ldst_inst ) ) ; <nl> + ldst_inst * inst_cream = ( ldst_inst * ) inst_base - > component ; <nl> <nl> inst_base - > cond = BITS ( inst , 28 , 31 ) ; <nl> - inst_base - > idx = index ; <nl> - inst_base - > br = NON_BRANCH ; <nl> + inst_base - > idx = index ; <nl> + inst_base - > br = NON_BRANCH ; <nl> <nl> inst_cream - > inst = inst ; <nl> - if ( I_BIT = = 0 ) { <nl> + if ( BITS ( inst , 25 , 27 ) = = 2 ) { <nl> inst_cream - > get_addr = LnSWoUB ( ImmediatePostIndexed ) ; <nl> + } else if ( BITS ( inst , 25 , 27 ) = = 3 ) { <nl> + inst_cream - > get_addr = LnSWoUB ( ScaledRegisterPostIndexed ) ; <nl> } else { <nl> DEBUG_MSG ; <nl> } <nl> - # if 0 <nl> - inst_cream - > get_addr = get_calc_addr_op ( inst ) ; <nl> - if ( inst = = 0x54f13001 ) { <nl> - DEBUG_LOG ( ARM11 , " get_calc_addr_op : % llx \ n " , inst_cream - > get_addr ) ; <nl> - } <nl> - # endif <nl> <nl> if ( BITS ( inst , 12 , 15 ) = = 15 ) { <nl> inst_base - > br = INDIRECT_BRANCH ; <nl> ARM_INST_PTR INTERPRETER_TRANSLATE ( strb ) ( unsigned int inst , int index ) <nl> } <nl> ARM_INST_PTR INTERPRETER_TRANSLATE ( strbt ) ( unsigned int inst , int index ) <nl> { <nl> - arm_inst * inst_base = ( arm_inst * ) AllocBuffer ( sizeof ( arm_inst ) + sizeof ( ldst_inst ) ) ; <nl> - ldst_inst * inst_cream = ( ldst_inst * ) inst_base - > component ; <nl> + arm_inst * inst_base = ( arm_inst * ) AllocBuffer ( sizeof ( arm_inst ) + sizeof ( ldst_inst ) ) ; <nl> + ldst_inst * inst_cream = ( ldst_inst * ) inst_base - > component ; <nl> <nl> inst_base - > cond = BITS ( inst , 28 , 31 ) ; <nl> - inst_base - > idx = index ; <nl> - inst_base - > br = NON_BRANCH ; <nl> + inst_base - > idx = index ; <nl> + inst_base - > br = NON_BRANCH ; <nl> <nl> inst_cream - > inst = inst ; <nl> - / / inst_cream - > get_addr = get_calc_addr_op ( inst ) ; <nl> - if ( I_BIT = = 0 ) { <nl> + <nl> + if ( BITS ( inst , 25 , 27 ) = = 2 ) { <nl> inst_cream - > get_addr = LnSWoUB ( ImmediatePostIndexed ) ; <nl> + } else if ( BITS ( inst , 25 , 27 ) = = 3 ) { <nl> + inst_cream - > get_addr = LnSWoUB ( ScaledRegisterPostIndexed ) ; <nl> } else { <nl> DEBUG_MSG ; <nl> } <nl>
Merge pull request from lioncash / strbt
yuzu-emu/yuzu
14308a88a70edb110855aaedf1723f1b5941d721
2015-01-17T07:15:47Z
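The fix replaces the old `I_BIT` test with a dispatch on the three-bit field at bits 25-27, distinguishing the immediate and scaled-register post-indexed forms of LDRBT/STRBT. A self-contained sketch of that decode; the `BITS` helper is a simplified analogue of the interpreter's macro, and the sample word `0x54f13001` is the one named in the removed debug code:

```cpp
#include <cstdint>
#include <cstdio>

// Analogue of the interpreter's BITS(inst, a, b) helper: extract bits
// a..b (inclusive) from a 32-bit ARM instruction word.
static inline uint32_t BITS(uint32_t inst, int a, int b) {
    return (inst >> a) & ((1u << (b - a + 1)) - 1u);
}

int main() {
    const uint32_t inst = 0x54f13001;  // word from the removed debug code
    switch (BITS(inst, 25, 27)) {
        case 2:  std::puts("immediate post-indexed");        break;  // LnSWoUB(ImmediatePostIndexed)
        case 3:  std::puts("scaled register post-indexed");  break;  // LnSWoUB(ScaledRegisterPostIndexed)
        default: std::puts("unhandled addressing form");     break;  // DEBUG_MSG in the original
    }
}
```

For `0x54f13001`, bits 25-27 are `0b010`, so the dispatch selects the immediate post-indexed path, which is the case the old `I_BIT == 0` check happened to cover; case 3 is the form the commit adds.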
mmm a / tools / detect - builtins . js <nl> ppp b / tools / detect - builtins . js <nl> <nl> } <nl> / / Avoid endless recursion . <nl> if ( this_name = = = " prototype " & & name = = = " constructor " ) continue ; <nl> + / / Avoid needless duplication . <nl> + if ( this_name = = = " __PROTO__ " & & name = = = " constructor " ) continue ; <nl> / / Could get this from the parent , but having it locally is easier . <nl> var property = { " name " : name } ; <nl> try { <nl> <nl> property . length = value . length ; <nl> property . prototype = GetProperties ( " prototype " , value . prototype ) ; <nl> } <nl> - property . properties = GetProperties ( name , value ) ; <nl> + if ( type = = = " string " | | type = = = " number " ) { <nl> + property . value = value ; <nl> + } else { <nl> + property . properties = GetProperties ( name , value ) ; <nl> + } <nl> result [ name ] = property ; <nl> } <nl> + / / Print the __proto__ if it ' s not the default Object prototype . <nl> + if ( typeof object = = = " object " & & object . __proto__ ! = = null & & <nl> + ! object . __proto__ . hasOwnProperty ( " __proto__ " ) ) { <nl> + result . __PROTO__ = GetProperties ( " __PROTO__ " , object . __proto__ ) ; <nl> + } <nl> return result ; <nl> } ; <nl> <nl>
[ tools ] Fix detect - builtins . js
v8/v8
bf5f2b5998a1d37f82e29784228ab41432636377
2016-06-09T10:17:32Z
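The detect-builtins change guards the traversal against revisiting objects through `__PROTO__`/`constructor` back-edges and records primitive values directly instead of recursing into them. A rough C++ analogue of that guarded walk, with a toy object graph standing in for the JS heap; every name below is hypothetical.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

// A toy object graph: each node has named children and an optional
// primitive value, loosely mirroring the JS objects the tool walks.
struct Node {
    std::string value;                       // empty means "not a primitive"
    std::map<std::string, Node*> children;
};

// Dump the graph, cutting off edges that would only revisit or duplicate
// nodes -- the same idea as the tool's "constructor" / __PROTO__ guards.
void Dump(const std::string& name, const Node* n, std::set<const Node*>& seen,
          int depth) {
    std::string indent(depth * 2, ' ');
    if (!n->value.empty()) {                 // primitive: record value, stop
        std::cout << indent << name << " = " << n->value << "\n";
        return;
    }
    if (!seen.insert(n).second) {            // already visited: avoid recursion
        std::cout << indent << name << " (already seen)\n";
        return;
    }
    std::cout << indent << name << "\n";
    for (const auto& [child_name, child] : n->children)
        Dump(child_name, child, seen, depth + 1);
}

int main() {
    Node leaf{"42", {}};
    Node proto{"", {}};
    Node obj{"", {{"answer", &leaf}, {"__proto__", &proto}}};
    proto.children["constructor"] = &obj;    // back-edge, like JS prototypes
    std::set<const Node*> seen;
    Dump("object", &obj, seen, 0);
}
```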
mmm a / db / db . h <nl> ppp b / db / db . h <nl> namespace mongo { <nl> variables . <nl> * / <nl> assert ( dbMutexInfo . isLocked ( ) ) ; <nl> + <nl> + log ( 5 ) < < " setClient : " < < ns < < endl ; <nl> + <nl> Top : : clientStart ( ns ) ; <nl> <nl> curNs = ns ; <nl> mmm a / db / repl . cpp <nl> ppp b / db / repl . cpp <nl> namespace mongo { <nl> } <nl> } cmdReplacePeer ; <nl> <nl> + class CmdForceDead : public Command { <nl> + public : <nl> + virtual bool slaveOk ( ) { <nl> + return true ; <nl> + } <nl> + virtual bool adminOnly ( ) { <nl> + return true ; <nl> + } <nl> + virtual bool logTheOp ( ) { <nl> + return false ; <nl> + } <nl> + CmdForceDead ( ) : Command ( " forcedead " ) { } <nl> + virtual bool run ( const char * ns , BSONObj & cmdObj , string & errmsg , BSONObjBuilder & result , bool fromRepl ) { <nl> + replAllDead = " forced by command " ; <nl> + return true ; <nl> + } <nl> + } cmdForceDead ; <nl> + <nl> class CmdResync : public Command { <nl> public : <nl> virtual bool slaveOk ( ) { <nl> namespace mongo { <nl> see logOp ( ) comments . <nl> * / <nl> void ReplSource : : sync_pullOpLog_applyOperation ( BSONObj & op , IdSets & ids , IdSets & modIds , OpTime * localLogTail ) { <nl> + log ( 6 ) < < " processing op : " < < op < < endl ; <nl> / / skip no - op <nl> if ( op . getStringField ( " op " ) [ 0 ] = = ' n ' ) <nl> return ; <nl> namespace mongo { <nl> <nl> bool incompleteClone = incompleteCloneDbs . count ( clientName ) ! = 0 ; <nl> <nl> + log ( 6 ) < < " ns : " < < ns < < " , justCreated : " < < justCreated < < " , incompleteClone : " < < incompleteClone < < endl ; <nl> + <nl> if ( justCreated | | incompleteClone ) { <nl> if ( incompleteClone ) { <nl> log ( ) < < " An earlier initial clone of ' " < < clientName < < " ' did not complete , will resync . " < < endl ; <nl> namespace mongo { <nl> save ( ) ; <nl> cursor . reset ( ) ; <nl> } <nl> - BSONObj info ; <nl> - massert ( " request for slave to resync failed " , <nl> - conn - > runCommand ( " admin " , fromjson ( " { resync : 1 , force : true } " ) , info ) ) ; <nl> + massert ( " request to kill slave replication falied " , <nl> + conn - > simpleCommand ( " admin " , 0 , " forcedead " ) ) ; <nl> } <nl> <nl> bool ReplSource : : updateSetsWithLocalOps ( IdSets & ids , IdSets & modIds , OpTime & localLogTail , bool unlock ) { <nl> namespace mongo { <nl> string name = e . embeddedObject ( ) . getField ( " name " ) . valuestr ( ) ; <nl> if ( name ! = " local " ) { <nl> if ( only . empty ( ) | | only = = name ) { <nl> + log ( 2 ) < < " adding to ' addDbNextPass ' : " < < name < < endl ; <nl> addDbNextPass . insert ( name ) ; <nl> } <nl> } <nl> mmm a / jstests / repl / pair6 . js <nl> ppp b / jstests / repl / pair6 . js <nl> disconnect = function ( ) { <nl> checkCount = function ( m , c ) { <nl> m . setSlaveOk ( ) ; <nl> assert . soon ( function ( ) { <nl> + if ( - 1 = = m . getDBNames ( ) . indexOf ( baseName ) ) { <nl> + return false ; <nl> + } <nl> + if ( - 1 = = m . getDB ( baseName ) . getCollectionNames ( ) . indexOf ( baseName ) ) { <nl> + return false ; <nl> + } <nl> actual = m . getDB ( baseName ) . getCollection ( baseName ) . find ( ) . count ( ) ; <nl> print ( actual ) ; <nl> return c = = actual ; } , <nl> " expected count " + c + " for " + m ) ; <nl> } <nl> <nl> + resetSlave = function ( s ) { <nl> + s . setSlaveOk ( ) ; <nl> + assert . soon ( function ( ) { <nl> + ret = s . getDB ( " admin " ) . runCommand ( { " resync " : 1 } ) ; <nl> + / / printjson ( ret ) ; <nl> + return 1 = = ret . 
ok ; <nl> + } ) ; <nl> + } <nl> + <nl> big = new Array ( 2000 ) . toString ( ) ; <nl> <nl> doTest = function ( ) { <nl> doTest = function ( ) { <nl> return ( lm = = 0 & & rm = = 1 ) ; <nl> } ) ; <nl> <nl> + resetSlave ( l ) ; <nl> + <nl> checkCount ( l , 1 ) ; <nl> checkCount ( r , 1 ) ; <nl> <nl> doTest = function ( ) { <nl> return ( lm = = 0 & & rm = = 1 ) ; <nl> } ) ; <nl> <nl> - sleep ( 30000 ) ; <nl> + sleep ( 15000 ) ; <nl> + <nl> + resetSlave ( l ) ; <nl> <nl> checkCount ( l , 1000 ) ; <nl> checkCount ( r , 1000 ) ; <nl>
require manual resync on filled oplog
mongodb/mongo
8bf9e88cf1136c9385e56d02f165da998082b6c1
2009-04-23T16:16:18Z
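The new `CmdForceDead` follows the codebase's self-registration idiom: the trailing `} cmdForceDead ;` defines a namespace-scope instance whose constructor wires the command name into a dispatcher. A stripped-down sketch of that idiom, assuming a hypothetical registry rather than mongod's real `Command` plumbing.

```cpp
#include <iostream>
#include <map>
#include <string>

class Command {
public:
    explicit Command(const std::string& name) { registry()[name] = this; }
    virtual ~Command() = default;
    virtual bool run(std::string& errmsg) = 0;

    static Command* find(const std::string& name) {
        auto it = registry().find(name);
        return it == registry().end() ? nullptr : it->second;
    }
private:
    static std::map<std::string, Command*>& registry() {
        static std::map<std::string, Command*> r;  // lazily built, order-safe
        return r;
    }
};

static std::string replAllDead;  // stand-in for the global in repl.cpp

class CmdForceDead : public Command {
public:
    CmdForceDead() : Command("forcedead") {}
    bool run(std::string&) override {
        replAllDead = "forced by command";   // mark replication dead
        return true;
    }
} cmdForceDead;  // defining the instance is what registers the command

int main() {
    std::string err;
    if (Command* c = Command::find("forcedead")) c->run(err);
    std::cout << "replAllDead: " << replAllDead << "\n";
}
```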
mmm a / addons / resource . language . en_gb / resources / strings . po <nl> ppp b / addons / resource . language . en_gb / resources / strings . po <nl> msgctxt " # 642 " <nl> msgid " Files without ReplayGain information " <nl> msgstr " " <nl> <nl> - # : system / settings / settings . xml <nl> - msgctxt " # 643 " <nl> - msgid " Avoid clipping on ReplayGained files " <nl> - msgstr " " <nl> + # empty string with id 643 <nl> <nl> # : system / settings / settings . xml <nl> msgctxt " # 644 " <nl> msgctxt " # 36269 " <nl> msgid " Reference volume ( PreAmp level ) to use for files without encoded ReplayGain information . Default is 89dB as per standard . Change with caution . " <nl> msgstr " " <nl> <nl> - # . Description of setting with label # 643 " Avoid clipping on ReplayGained files " <nl> - # : system / settings / settings . xml <nl> - msgctxt " # 36270 " <nl> - msgid " Reduce the volume of the file if clipping will occur . " <nl> - msgstr " " <nl> + # empty string with id 36270 <nl> <nl> # . Description of setting with label # 13314 " Crossfade between songs " <nl> # : system / settings / settings . xml <nl> mmm a / system / settings / settings . xml <nl> ppp b / system / settings / settings . xml <nl> <nl> < dependency type = " enable " setting = " musicplayer . replaygaintype " operator = " ! is " > 0 < / dependency > <nl> < / dependencies > <nl> < / setting > <nl> - < setting id = " musicplayer . replaygainavoidclipping " type = " boolean " parent = " musicplayer . replaygaintype " label = " 643 " help = " 36270 " > <nl> - < level > 3 < / level > <nl> - < default > false < / default > <nl> - < control type = " toggle " / > <nl> - < dependencies > <nl> - < dependency type = " enable " setting = " musicplayer . replaygaintype " operator = " ! is " > 0 < / dependency > <nl> - < / dependencies > <nl> - < / setting > <nl> < / group > <nl> < / category > <nl> < category id = " discs " label = " 14087 " help = " 36193 " > <nl> mmm a / xbmc / Application . cpp <nl> ppp b / xbmc / Application . cpp <nl> bool CApplication : : Create ( ) <nl> m_replayGainSettings . iType = CSettings : : GetInstance ( ) . GetInt ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINTYPE ) ; <nl> m_replayGainSettings . iPreAmp = CSettings : : GetInstance ( ) . GetInt ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINPREAMP ) ; <nl> m_replayGainSettings . iNoGainPreAmp = CSettings : : GetInstance ( ) . GetInt ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINNOGAINPREAMP ) ; <nl> - m_replayGainSettings . bAvoidClipping = CSettings : : GetInstance ( ) . GetBool ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINAVOIDCLIPPING ) ; <nl> <nl> / / Create the Mouse , Keyboard , Remote , and Joystick devices <nl> / / Initialize after loading settings to get joystick deadzone setting <nl> void CApplication : : OnSettingChanged ( const CSetting * setting ) <nl> m_replayGainSettings . iPreAmp = ( ( CSettingInt * ) setting ) - > GetValue ( ) ; <nl> else if ( StringUtils : : EqualsNoCase ( settingId , CSettings : : SETTING_MUSICPLAYER_REPLAYGAINNOGAINPREAMP ) ) <nl> m_replayGainSettings . iNoGainPreAmp = ( ( CSettingInt * ) setting ) - > GetValue ( ) ; <nl> - else if ( StringUtils : : EqualsNoCase ( settingId , CSettings : : SETTING_MUSICPLAYER_REPLAYGAINAVOIDCLIPPING ) ) <nl> - m_replayGainSettings . bAvoidClipping = ( ( CSettingBool * ) setting ) - > GetValue ( ) ; <nl> } <nl> <nl> void CApplication : : OnSettingAction ( const CSetting * setting ) <nl> mmm a / xbmc / Application . h <nl> ppp b / xbmc / Application . 
h <nl> struct ReplayGainSettings <nl> int iPreAmp ; <nl> int iNoGainPreAmp ; <nl> int iType ; <nl> - bool bAvoidClipping ; <nl> } ; <nl> <nl> class CBackgroundPlayer : public CThread <nl> mmm a / xbmc / settings / Settings . cpp <nl> ppp b / xbmc / settings / Settings . cpp <nl> const std : : string CSettings : : SETTING_MUSICPLAYER_SEEKDELAY = " musicplayer . seekde <nl> const std : : string CSettings : : SETTING_MUSICPLAYER_REPLAYGAINTYPE = " musicplayer . replaygaintype " ; <nl> const std : : string CSettings : : SETTING_MUSICPLAYER_REPLAYGAINPREAMP = " musicplayer . replaygainpreamp " ; <nl> const std : : string CSettings : : SETTING_MUSICPLAYER_REPLAYGAINNOGAINPREAMP = " musicplayer . replaygainnogainpreamp " ; <nl> - const std : : string CSettings : : SETTING_MUSICPLAYER_REPLAYGAINAVOIDCLIPPING = " musicplayer . replaygainavoidclipping " ; <nl> const std : : string CSettings : : SETTING_MUSICPLAYER_CROSSFADE = " musicplayer . crossfade " ; <nl> const std : : string CSettings : : SETTING_MUSICPLAYER_CROSSFADEALBUMTRACKS = " musicplayer . crossfadealbumtracks " ; <nl> const std : : string CSettings : : SETTING_MUSICPLAYER_VISUALISATION = " musicplayer . visualisation " ; <nl> void CSettings : : InitializeISettingCallbacks ( ) <nl> settingSet . insert ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINPREAMP ) ; <nl> settingSet . insert ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINNOGAINPREAMP ) ; <nl> settingSet . insert ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINTYPE ) ; <nl> - settingSet . insert ( CSettings : : SETTING_MUSICPLAYER_REPLAYGAINAVOIDCLIPPING ) ; <nl> settingSet . insert ( CSettings : : SETTING_SCRAPERS_MUSICVIDEOSDEFAULT ) ; <nl> settingSet . insert ( CSettings : : SETTING_SCREENSAVER_MODE ) ; <nl> settingSet . insert ( CSettings : : SETTING_SCREENSAVER_PREVIEW ) ; <nl> mmm a / xbmc / settings / Settings . h <nl> ppp b / xbmc / settings / Settings . h <nl> class CSettings : public CSettingCreator , public CSettingControlCreator <nl> static const std : : string SETTING_MUSICPLAYER_REPLAYGAINTYPE ; <nl> static const std : : string SETTING_MUSICPLAYER_REPLAYGAINPREAMP ; <nl> static const std : : string SETTING_MUSICPLAYER_REPLAYGAINNOGAINPREAMP ; <nl> - static const std : : string SETTING_MUSICPLAYER_REPLAYGAINAVOIDCLIPPING ; <nl> static const std : : string SETTING_MUSICPLAYER_CROSSFADE ; <nl> static const std : : string SETTING_MUSICPLAYER_CROSSFADEALBUMTRACKS ; <nl> static const std : : string SETTING_MUSICPLAYER_VISUALISATION ; <nl>
Remove the " avoid clipping " replay gain setting
xbmc/xbmc
016facf572d19802f7c9bec31b411b162ce06ac2
2016-11-14T15:34:55Z
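Removing the setting cleanly requires deleting three coupled pieces: the id constant, the `OnSettingChanged` branch, and the cached struct field. A small sketch of the string-keyed dispatch that couples them, assuming a hypothetical `Settings` observer rather than Kodi's real `CSettings` class.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

// Each cached field subscribes to one setting id, so removing a setting
// means deleting its id, its subscription, and its cached field together.
struct ReplayGainSettings { int iType = 0; int iPreAmp = 0; };

class Settings {
public:
    void subscribe(const std::string& id, std::function<void(int)> cb) {
        callbacks_[id] = std::move(cb);
    }
    void set(const std::string& id, int value) {
        auto it = callbacks_.find(id);
        if (it != callbacks_.end()) it->second(value);  // notify the cache
    }
private:
    std::map<std::string, std::function<void(int)>> callbacks_;
};

int main() {
    ReplayGainSettings rg;
    Settings s;
    s.subscribe("musicplayer.replaygaintype", [&](int v) { rg.iType = v; });
    s.subscribe("musicplayer.replaygainpreamp", [&](int v) { rg.iPreAmp = v; });
    s.set("musicplayer.replaygaintype", 2);
    std::cout << rg.iType << " " << rg.iPreAmp << "\n";  // 2 0
}
```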
mmm a / . gitignore <nl> ppp b / . gitignore <nl> docs / doxygen <nl> # Caffe <nl> third_party / caffe . BUILD <nl> <nl> + # python proto <nl> + py_proto <nl> mmm a / apollo . sh <nl> ppp b / apollo . sh <nl> function apollo_build ( ) { <nl> fail ' Build failed ! ' <nl> fi <nl> # build python proto <nl> - chmod - R + w bazel - genfiles / modules <nl> - PROTOC = ' . / bazel - out / host / bin / external / com_google_protobuf / protoc ' <nl> - find modules / - name " * . proto " | grep - v gnss | xargs $ { PROTOC } - - python_out = bazel - genfiles <nl> - find bazel - genfiles / * - type d - exec touch " { } / __init__ . py " \ ; <nl> + build_py_proto <nl> if [ - d " / home / tmp / conf " ] ; then <nl> sudo cp - r / home / tmp / conf bazel - apollo / external / ros / share / gnss_driver / <nl> sudo chown - R $ { DOCKER_USER } : $ { DOCKER_GRP } " bazel - apollo / external / ros / share / gnss_driver / conf " <nl> fi <nl> } <nl> <nl> + function build_py_proto ( ) { <nl> + if [ - d " . / py_proto " ] ; then <nl> + rm - rf py_proto <nl> + fi <nl> + mkdir py_proto <nl> + PROTOC = ' . / bazel - out / host / bin / external / com_google_protobuf / protoc ' <nl> + find modules / - name " * . proto " | grep - v gnss | xargs $ { PROTOC } - - python_out = py_proto <nl> + find py_proto / * - type d - exec touch " { } / __init__ . py " \ ; <nl> + } <nl> + <nl> function check ( ) { <nl> local check_start_time = $ ( get_now ) <nl> apollo_build & & run_test & & run_lint <nl> function release ( ) { <nl> mkdir - p $ MODULES_DIR / monitor / hwmonitor / hw / tools / <nl> cp bazel - bin / modules / monitor / hwmonitor / hw / tools / esdcan_test_app $ MODULES_DIR / monitor / hwmonitor / hw / tools / <nl> fi <nl> - cp - r bazel - genfiles / * $ LIB_DIR <nl> + cp - r bazel - genfiles / external $ LIB_DIR <nl> + cp - r py_proto / modules $ LIB_DIR <nl> <nl> # doc <nl> cp - r docs $ ROOT_DIR <nl> function main ( ) { <nl> buildgnss ) <nl> build_gnss <nl> ; ; <nl> + build_py ) <nl> + build_py_proto <nl> + ; ; <nl> buildvelodyne ) <nl> build_velodyne <nl> ; ; <nl> mmm a / scripts / apollo_base . sh <nl> ppp b / scripts / apollo_base . sh <nl> function set_lib_path ( ) { <nl> if [ - e " $ { APOLLO_ROOT_DIR } / bazel - apollo / external / ros / setup . bash " ] ; then <nl> source " $ { APOLLO_ROOT_DIR } / bazel - apollo / external / ros / setup . bash " <nl> fi <nl> - PY_LIB_PATH = $ { APOLLO_ROOT_DIR } / bazel - genfiles <nl> + PY_LIB_PATH = $ { APOLLO_ROOT_DIR } / py_proto <nl> PY_TOOLS_PATH = $ { APOLLO_ROOT_DIR } / modules / tools <nl> fi <nl> export PYTHONPATH = $ { PY_LIB_PATH } : $ { PY_TOOLS_PATH } : $ { PYTHONPATH } <nl>
Change python proto output path .
ApolloAuto/apollo
3a9efa56340a02fa095a9a97ee508957da63909c
2017-08-16T00:34:19Z
mmm a / src / library_sdl . js <nl> ppp b / src / library_sdl . js <nl> var LibrarySDL = { <nl> } <nl> } , <nl> <nl> + makeFontString : function ( height , fontName ) { <nl> + if ( fontName . charAt ( 0 ) ! = " ' " & & fontName . charAt ( 0 ) ! = ' " ' ) { <nl> + / / https : / / developer . mozilla . org / ru / docs / Web / CSS / font - family <nl> + / / Font family names containing whitespace should be quoted . <nl> + / / BTW , quote all font names is easier than searching spaces <nl> + fontName = ' " ' + fontName + ' " ' ; <nl> + } <nl> + return height + ' px ' + fontName + ' , serif ' ; <nl> + } , <nl> + <nl> estimateTextWidth : function ( fontData , text ) { <nl> var h = fontData . size ; <nl> - var fontString = h + ' px ' + fontData . name ; <nl> + var fontString = SDL . makeFontString ( h , fontData . name ) ; <nl> var tempCtx = SDL . ttfContext ; <nl> # if ASSERTIONS <nl> assert ( tempCtx , ' TTF_Init must have been called ' ) ; <nl> var LibrarySDL = { <nl> var w = SDL . estimateTextWidth ( fontData , text ) ; <nl> var h = fontData . size ; <nl> var color = SDL . loadColorToCSSRGB ( color ) ; / / XXX alpha breaks fonts ? <nl> - var fontString = h + ' px ' + fontData . name + ' , serif ' ; <nl> + var fontString = SDL . makeFontString ( h , fontData . name ) ; <nl> var surf = SDL . makeSurface ( w , h , 0 , false , ' text : ' + text ) ; / / bogus numbers . . <nl> var surfData = SDL . surfaces [ surf ] ; <nl> surfData . ctx . save ( ) ; <nl>
Add to SDL . estimateTextWidth the same generic font family as in TTF_RenderText_Solid ( )
emscripten-core/emscripten
ec59f6d22c7e195c8a30dd988edbbc9c4a43da3b
2017-07-11T19:05:02Z
mmm a / src / Icons / qBittorrent . desktop <nl> ppp b / src / Icons / qBittorrent . desktop <nl> <nl> [ Desktop Entry ] <nl> Categories = Qt ; Network ; P2P ; <nl> Comment = V1 . 6 . 0 <nl> - Exec = qbittorrent % f <nl> + Exec = qbittorrent % U <nl> GenericName = Bittorrent client <nl> GenericName [ bg ] = Торент клиент <nl> GenericName [ cs ] = Bittorrent klient <nl>
- Updated desktop file to % U because qBittorrent actually takes a list of URLs as argument
qbittorrent/qBittorrent
e3a29d8ebf9c1b56c755cab4cbf60bc4275eab5c
2009-11-06T16:21:56Z
mmm a / cocos / 2d / CCNode . h <nl> ppp b / cocos / 2d / CCNode . h <nl> class CC_DLL Node : public Ref <nl> * Resumes all scheduled selectors , actions and event listeners . <nl> * This method is called internally by onEnter <nl> * / <nl> - void resume ( void ) ; <nl> + virtual void resume ( void ) ; <nl> / * * <nl> * Pauses all scheduled selectors , actions and event listeners . . <nl> * This method is called internally by onExit <nl> * / <nl> - void pause ( void ) ; <nl> + virtual void pause ( void ) ; <nl> <nl> / * * <nl> * Resumes all scheduled selectors , actions and event listeners . <nl>
Merge pull request from zawasp / virtual_pause_resume
cocos2d/cocos2d-x
7d2ca95988cbe4cf2818dbabe9c96b2735d3bdd5
2014-08-14T01:31:10Z
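Marking `pause ( )` / `resume ( )` virtual matters because engine code pauses nodes through base-class pointers; without virtual dispatch a subclass override is silently skipped. A minimal illustration, with `VideoNode` as a hypothetical subclass:

```cpp
#include <iostream>
#include <memory>

struct Node {
    virtual ~Node() = default;
    virtual void pause()  { std::cout << "Node::pause\n"; }
    virtual void resume() { std::cout << "Node::resume\n"; }
};

struct VideoNode : Node {
    void pause() override  { std::cout << "VideoNode::pause (stop decoder)\n"; }
    void resume() override { std::cout << "VideoNode::resume (restart decoder)\n"; }
};

int main() {
    std::unique_ptr<Node> n = std::make_unique<VideoNode>();
    n->pause();   // reaches VideoNode::pause only because pause() is virtual
    n->resume();
}
```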
mmm a / website / index . html <nl> ppp b / website / index . html <nl> < h1 id = " main - title " > <nl> < div class = " clear " > < / div > <nl> < / div > <nl> < / div > <nl> - < div id = " announcement " class = " colored - block " > <nl> - < div class = " page " > <nl> - ClickHouse Community Meetup in < a id = " announcement - link " href = " https : / / bitly . com / 2Jv9Bug " rel = " external nofollow " target = " _blank " > Berlin on July 3 < / a > <nl> - < / div > <nl> - < / div > <nl> < div class = " page " > <nl> < h2 id = " slogan " > ClickHouse . Just makes you think faster . < / h2 > <nl> <nl>
Remove Berlin meetup link
ClickHouse/ClickHouse
a552bcddbe1a5b7415854b74daf13bdf3709e463
2018-07-03T18:53:44Z
mmm a / LICENSE <nl> ppp b / LICENSE <nl> src / brpc / callback . h : 3 - clause BSD <nl> <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> <nl> + src / butil / third_party / rapidjson / msinttypes : licensed under the following terms : <nl> + <nl> + Copyright ( c ) 2006 - 2013 Alexander Chemeris <nl> + <nl> + Redistribution and use in source and binary forms , with or without <nl> + modification , are permitted provided that the following conditions are met : <nl> + <nl> + 1 . Redistributions of source code must retain the above copyright notice , <nl> + this list of conditions and the following disclaimer . <nl> + <nl> + 2 . Redistributions in binary form must reproduce the above copyright <nl> + notice , this list of conditions and the following disclaimer in the <nl> + documentation and / or other materials provided with the distribution . <nl> + <nl> + 3 . Neither the name of the product nor the names of its contributors may <nl> + be used to endorse or promote products derived from this software <nl> + without specific prior written permission . <nl> + <nl> + THIS SOFTWARE IS PROVIDED BY THE AUTHOR ` ` AS IS ' ' AND ANY EXPRESS OR IMPLIED <nl> + WARRANTIES , INCLUDING , BUT NOT LIMITED TO , THE IMPLIED WARRANTIES OF <nl> + MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED . IN NO <nl> + EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT , INDIRECT , INCIDENTAL , <nl> + SPECIAL , EXEMPLARY , OR CONSEQUENTIAL DAMAGES ( INCLUDING , BUT NOT LIMITED TO , <nl> + PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES ; LOSS OF USE , DATA , OR PROFITS ; <nl> + OR BUSINESS INTERRUPTION ) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY , <nl> + WHETHER IN CONTRACT , STRICT LIABILITY , OR TORT ( INCLUDING NEGLIGENCE OR <nl> + OTHERWISE ) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE , EVEN IF <nl> + ADVISED OF THE POSSIBILITY OF SUCH DAMAGE . <nl> + <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + <nl> src / butil / ( some files ) , test / ( some files ) : 3 - clause BSD <nl> <nl> Some portions of these files are derived from code in the Chromium project , <nl>
add_license_in_rapidjson
apache/incubator-brpc
ab174b2bda6216e908cf36221ce74b020e0bd472
2020-01-03T09:25:11Z
mmm a / tools / editor / plugins / shader_graph_editor_plugin . h <nl> ppp b / tools / editor / plugins / shader_graph_editor_plugin . h <nl> class GraphCurveMapEdit : public Control { <nl> <nl> class ShaderGraphView : public Control { <nl> <nl> - OBJ_TYPE ( ShaderGraphView , Node ) ; <nl> + OBJ_TYPE ( ShaderGraphView , Control ) ; <nl> <nl> <nl> <nl>
fix error when opening a project and closing the editor
godotengine/godot
f3a3596295e9a2d0f86588c579eb6f0b43590f99
2016-05-22T11:07:31Z
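The one-line fix corrects the base class named in `OBJ_TYPE`. In a hand-rolled RTTI scheme of this shape, inheritance queries walk the chain of declared parents, so declaring `Node` instead of `Control` breaks every "is this a Control?" check for the type. A simplified sketch of such a scheme; this is hypothetical, not Godot's actual macro.

```cpp
#include <iostream>
#include <string>

// is_class() walks the chain of *declared* parents, so the second macro
// argument must name the real C++ base class or the chain is wrong.
#define OBJ_TYPE(m_class, m_inherits)                                  \
    static const char* get_class_static() { return #m_class; }         \
    const char* get_class() const override { return #m_class; }        \
    bool is_class(const std::string& c) const override {               \
        return c == #m_class || m_inherits::is_class(c);               \
    }

struct Object {
    virtual ~Object() = default;
    virtual const char* get_class() const { return "Object"; }
    virtual bool is_class(const std::string& c) const { return c == "Object"; }
};

struct Node : Object { OBJ_TYPE(Node, Object) };
struct Control : Node { OBJ_TYPE(Control, Node) };

// Correct declaration: chain is ShaderGraphView -> Control -> Node -> Object.
struct ShaderGraphView : Control { OBJ_TYPE(ShaderGraphView, Control) };

int main() {
    ShaderGraphView v;
    std::cout << v.is_class("Control") << "\n";  // 1: Control is in the chain
}
```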
mmm a / java / demo / pom . xml <nl> ppp b / java / demo / pom . xml <nl> <nl> < parent > <nl> < artifactId > libphonenumber - parent < / artifactId > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> < / parent > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > demo < / artifactId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> <nl> < properties > <nl> < gae . version > 1 . 5 . 4 < / gae . version > <nl> <nl> < dependency > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > libphonenumber < / artifactId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> < / dependency > <nl> < dependency > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > geocoder < / artifactId > <nl> - < version > 2 . 3 < / version > <nl> + < version > 2 . 4 - SNAPSHOT < / version > <nl> < / dependency > <nl> < / dependencies > <nl> <nl> mmm a / java / geocoder / pom . xml <nl> ppp b / java / geocoder / pom . xml <nl> <nl> < modelVersion > 4 . 0 . 0 < / modelVersion > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > geocoder < / artifactId > <nl> - < version > 2 . 3 < / version > <nl> + < version > 2 . 4 - SNAPSHOT < / version > <nl> < packaging > jar < / packaging > <nl> < url > http : / / code . google . com / p / libphonenumber / < / url > <nl> <nl> < parent > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > libphonenumber - parent < / artifactId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> < / parent > <nl> <nl> < build > <nl> <nl> < dependency > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > libphonenumber < / artifactId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> < / dependency > <nl> < / dependencies > <nl> <nl> mmm a / java / libphonenumber / pom . xml <nl> ppp b / java / libphonenumber / pom . xml <nl> <nl> < modelVersion > 4 . 0 . 0 < / modelVersion > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > libphonenumber < / artifactId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> < packaging > jar < / packaging > <nl> < url > http : / / code . google . com / p / libphonenumber / < / url > <nl> <nl> < parent > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > libphonenumber - parent < / artifactId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> < / parent > <nl> <nl> < build > <nl> mmm a / java / pom . xml <nl> ppp b / java / pom . xml <nl> <nl> < modelVersion > 4 . 0 . 0 < / modelVersion > <nl> < groupId > com . googlecode . libphonenumber < / groupId > <nl> < artifactId > libphonenumber - parent < / artifactId > <nl> - < version > 5 . 2 < / version > <nl> + < version > 5 . 3 - SNAPSHOT < / version > <nl> < packaging > pom < / packaging > <nl> < url > http : / / code . google . com / p / libphonenumber / < / url > <nl> <nl> <nl> < / licenses > <nl> <nl> < scm > <nl> - < connection > scm : svn : http : / / libphonenumber . googlecode . com / svn / tags / libphonenumber - 5 . 
2 < / connection > <nl> - < developerConnection > scm : svn : https : / / libphonenumber . googlecode . com / svn / tags / libphonenumber - 5 . 2 < / developerConnection > <nl> - < url > scm : svn : http : / / libphonenumber . googlecode . com / svn / tags / libphonenumber - 5 . 2 < / url > <nl> + < connection > scm : svn : http : / / libphonenumber . googlecode . com / svn / trunk / java / < / connection > <nl> + < developerConnection > scm : svn : https : / / libphonenumber . googlecode . com / svn / trunk / java / < / developerConnection > <nl> + < url > scm : svn : http : / / libphonenumber . googlecode . com / svn / trunk / java / < / url > <nl> < / scm > <nl> <nl> < properties > <nl>
[ maven - release - plugin ] prepare for next development iteration
google/libphonenumber
61fc69f36eeccdfd90c99638da4234449f8f95eb
2014-12-03T12:22:10Z
mmm a / cocos / 2d / CCLabel . cpp <nl> ppp b / cocos / 2d / CCLabel . cpp <nl> void Label : : computeStringNumLines ( ) <nl> _currNumLines = quantityOfLines ; <nl> } <nl> <nl> + int Label : : getStringNumLines ( ) const { <nl> + if ( _contentDirty ) <nl> + { <nl> + const_cast < Label * > ( this ) - > updateContent ( ) ; <nl> + } <nl> + <nl> + return _currNumLines ; <nl> + } <nl> + <nl> int Label : : getStringLength ( ) const <nl> { <nl> return static_cast < int > ( _currentUTF16String . length ( ) ) ; <nl> mmm a / cocos / 2d / CCLabel . h <nl> ppp b / cocos / 2d / CCLabel . h <nl> class CC_DLL Label : public SpriteBatchNode , public LabelProtocol <nl> float getAdditionalKerning ( ) const ; <nl> <nl> / / string related stuff <nl> - int getStringNumLines ( ) const { return _currNumLines ; } <nl> + int getStringNumLines ( ) const ; <nl> int getStringLength ( ) const ; <nl> <nl> FontAtlas * getFontAtlas ( ) { return _fontAtlas ; } <nl>
[ CCLabel ] Apply updateContent ( ) for getStringNumLines ( ) when Label is dirty
cocos2d/cocos2d-x
d1faf75b583ca4caccd30f272ef6fab41851b579
2014-09-24T16:46:17Z
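`getStringNumLines ( )` now forces a lazy rebuild when the label content is dirty, using `const_cast` inside a logically const getter. A compact sketch of that dirty-flag pattern, with simple line counting standing in for the real text layout:

```cpp
#include <iostream>
#include <string>

class Label {
public:
    void setString(const std::string& s) { text_ = s; contentDirty_ = true; }

    int getStringNumLines() const {
        if (contentDirty_)
            const_cast<Label*>(this)->updateContent();  // rebuild cache lazily
        return numLines_;                               // never stale
    }
private:
    void updateContent() {
        numLines_ = 1;
        for (char c : text_)
            if (c == '\n') ++numLines_;   // stand-in for real text layout
        contentDirty_ = false;
    }
    std::string text_;
    int numLines_ = 0;
    bool contentDirty_ = false;
};

int main() {
    Label l;
    l.setString("two\nlines");
    std::cout << l.getStringNumLines() << "\n";  // 2, computed on demand
}
```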
mmm a / tensorflow / contrib / tensorrt / convert / convert_nodes . cc <nl> ppp b / tensorflow / contrib / tensorrt / convert / convert_nodes . cc <nl> tensorflow : : Status InjectCalibrationNode ( tensorrt : : convert : : SubGraphParams & s ) { <nl> return tensorflow : : Status : : OK ( ) ; <nl> } <nl> <nl> + string GetCommonNameScope ( const string & op_name_a , const string & op_name_b ) { <nl> + size_t last_scope_separator = 0 ; <nl> + for ( size_t i = 0 ; i < std : : min ( op_name_a . size ( ) , op_name_b . size ( ) ) ; + + i ) { <nl> + if ( op_name_a [ i ] ! = op_name_b [ i ] ) { <nl> + break ; <nl> + } else if ( op_name_a [ i ] = = ' / ' ) { <nl> + last_scope_separator = i + 1 ; <nl> + } <nl> + } <nl> + return op_name_a . substr ( 0 , last_scope_separator ) ; <nl> + } <nl> + <nl> tensorflow : : Status ConvertSubGraphToTensorRTNodeDef ( <nl> tensorrt : : convert : : SubGraphParams & s ) { <nl> / / Visit nodes in reverse topological order and construct the TRT network . <nl> tensorflow : : Status ConvertSubGraphToTensorRTNodeDef ( <nl> return tensorflow : : errors : : Internal ( <nl> " Failed to create TensorRT network object " ) ; <nl> } <nl> + <nl> + string subgraph_name_scope ; <nl> + if ( ! order . empty ( ) ) { <nl> + subgraph_name_scope = order . front ( ) - > name ( ) ; <nl> + } <nl> + for ( const tensorflow : : Node * node : order ) { <nl> + subgraph_name_scope = GetCommonNameScope ( <nl> + subgraph_name_scope , node - > name ( ) ) ; <nl> + } <nl> static int static_id = 0 ; <nl> - string engine_name = tensorflow : : strings : : StrCat ( " my_trt_op " , static_id + + ) ; <nl> + / / TODO ( sami , ben , jie ) : proper naming ! <nl> + string engine_name = <nl> + tensorflow : : strings : : StrCat ( subgraph_name_scope , " my_trt_op " ) ; <nl> + engine_name = tensorflow : : strings : : StrCat ( engine_name , static_id + + ) ; <nl> auto trt_rmgr = tensorflow : : trt : : TRTResourceManager : : instance ( ) ; <nl> auto weight_rmgr = trt_rmgr - > getManager ( " WeightStore " ) ; <nl> auto ws = new tensorflow : : trt : : TRTWeightStore ( ) ; <nl> TF_CHECK_OK ( weight_rmgr - > Create ( engine_name , engine_name , ws ) ) ; <nl> - <nl> + <nl> / / Build the network <nl> Converter converter ( trt_network . get ( ) , ws ) ; <nl> <nl> tensorflow : : Status ConvertSubGraphToTensorRTNodeDef ( <nl> <nl> VLOG ( 2 ) < < " Finished conversion " ; <nl> <nl> - / / TODO ( sami , ben , jie ) : proper naming ! <nl> - <nl> / / Gather output metadata <nl> std : : vector < string > output_names ; <nl> std : : vector < tensorflow : : DataType > output_dtypes ; <nl>
Fix name scope of generated TensorRT engine ops
tensorflow/tensorflow
9cfa96fdf6cfa10e7cdd97f4dd2e0fd644fb5c02
2018-02-22T00:15:52Z
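`GetCommonNameScope` returns the shared name-scope prefix of two op names, cut at the last `/` where they still agree; folding it over all nodes in the subgraph yields the enclosing scope that the commit prepends to the generated `my_trt_op<N>` name. A standalone copy with a few checks (the sample op names are made up):

```cpp
#include <algorithm>
#include <cassert>
#include <string>

std::string GetCommonNameScope(const std::string& a, const std::string& b) {
    size_t last_scope_separator = 0;
    for (size_t i = 0; i < std::min(a.size(), b.size()); ++i) {
        if (a[i] != b[i]) break;                       // names diverge here
        if (a[i] == '/') last_scope_separator = i + 1; // scope still shared
    }
    return a.substr(0, last_scope_separator);
}

int main() {
    assert(GetCommonNameScope("vgg/conv1/w", "vgg/conv2/b") == "vgg/");
    assert(GetCommonNameScope("vgg/conv1/w", "vgg/conv1/b") == "vgg/conv1/");
    assert(GetCommonNameScope("a", "b").empty());
}
```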
mmm a / CMakeLists . txt <nl> ppp b / CMakeLists . txt <nl> if ( XCODE ) <nl> add_custom_target ( Miscellaneous <nl> SOURCES $ { SWIFT_TOPLEVEL_HEADERS } ) <nl> endif ( ) <nl> - <nl> - # Configure CPack . <nl> - set ( CPACK_GENERATOR " TGZ " ) <nl> - set ( CPACK_PACKAGE_RELOCATABLE " false " ) <nl> - set ( CPACK_PACKAGE_VENDOR " LLVM Project " ) <nl> - set ( CPACK_INCLUDE_TOPLEVEL_DIRECTORY " OFF " ) <nl> - set ( CPACK_SET_DESTDIR " ON " ) <nl> - <nl> - set ( CPACK_PACKAGE_NAME " swift " ) <nl> - set ( CPACK_SYSTEM_NAME " macosx " ) <nl> - <nl> - # FIXME : Real version number . <nl> - execute_process ( COMMAND date " + % Y % m % d " <nl> - OUTPUT_VARIABLE CPACK_PACKAGE_VERSION <nl> - OUTPUT_STRIP_TRAILING_WHITESPACE ) <nl> - <nl> - # CPack must be included * after * its configuration variables are set . <nl> - include ( CPack ) <nl> mmm a / lib / AST / CMakeLists . txt <nl> ppp b / lib / AST / CMakeLists . txt <nl> add_swift_library ( swiftAST <nl> ) <nl> <nl> if ( NOT SWIFT_BUILT_STANDALONE ) <nl> - add_dependencies ( swiftAST intrinsics_gen ) <nl> + get_property ( CLANG_TABLEGEN_TARGETS GLOBAL PROPERTY CLANG_TABLEGEN_TARGETS ) <nl> + add_dependencies ( swiftAST intrinsics_gen $ { CLANG_TABLEGEN_TARGETS } ) <nl> endif ( ) <nl> <nl> set ( swift_ast_verifier_flag ) <nl>
Merge pull request from gottesmm / in - tree - swift
apple/swift
90e50ef580d92bbcf302de420c44f71610bd500b
2016-02-08T21:28:49Z
mmm a / s / strategy_shard . cpp <nl> ppp b / s / strategy_shard . cpp <nl> namespace mongo { <nl> } <nl> <nl> assert ( cursor ) ; <nl> - cursor - > init ( ) ; <nl> <nl> try { <nl> + cursor - > init ( ) ; <nl> + <nl> log ( 5 ) < < " cursor type : " < < cursor - > type ( ) < < endl ; <nl> shardedCursorTypes . hit ( cursor - > type ( ) ) ; <nl> <nl>
fix mem leak
mongodb/mongo
4d6f5306c59d80af7e87c6293ab993c9845d88e1
2010-08-02T15:28:09Z
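The fix moves `cursor - > init ( )` inside the `try` block so the `catch` path can release the cursor when `init ( )` throws; previously a throw from `init ( )` escaped before any cleanup ran. A distilled sketch of the leak and the fix; the `Cursor` type here is illustrative, not the real sharding cursor.

```cpp
#include <iostream>
#include <stdexcept>

struct Cursor {
    void init() { throw std::runtime_error("shard unreachable"); }
};

void query(Cursor* cursor) {
    try {
        cursor->init();               // may throw -- now inside the try
        // ... iterate results ...
        delete cursor;
    } catch (...) {
        delete cursor;                // cleanup runs even when init() throws
        throw;
    }
}

int main() {
    try {
        query(new Cursor);
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << "\n";  // no leak on this path
    }
    // A modern alternative is std::unique_ptr<Cursor>, which makes the
    // ordering question moot; the 2010-era code manages the pointer by hand.
}
```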
mmm a / jstests / concurrency / fsm_all . js <nl> ppp b / jstests / concurrency / fsm_all . js <nl> load ( ' jstests / concurrency / fsm_libs / runner . js ' ) ; <nl> <nl> var dir = ' jstests / concurrency / fsm_workloads ' ; <nl> <nl> - var blacklist = [ <nl> - / / Disabled due to MongoDB restrictions and / or workload restrictions <nl> - <nl> - / / This workload assumes it is running against a sharded cluster . <nl> - ' sharded_moveChunk_drop_shard_key_index . js ' , <nl> - ] . map ( function ( file ) { <nl> + var blacklist = [ ] . map ( function ( file ) { <nl> return dir + ' / ' + file ; <nl> } ) ; <nl> <nl> mmm a / jstests / concurrency / fsm_all_composed . js <nl> ppp b / jstests / concurrency / fsm_all_composed . js <nl> var blacklist = [ <nl> / / is slow and the composer doesn ' t honor iteration counts : <nl> ' remove_single_document_eval_nolock . js ' , <nl> ' update_simple_eval_nolock . js ' , <nl> - <nl> - / / This workload assumes it is running against a sharded cluster . <nl> - ' sharded_moveChunk_drop_shard_key_index . js ' , <nl> ] . map ( function ( file ) { <nl> return dir + ' / ' + file ; <nl> } ) ; <nl> mmm a / jstests / concurrency / fsm_all_replication . js <nl> ppp b / jstests / concurrency / fsm_all_replication . js <nl> var blacklist = [ <nl> ' agg_group_external . js ' , / / uses > 100MB of data , which can overwhelm test hosts <nl> ' agg_sort_external . js ' , / / uses > 100MB of data , which can overwhelm test hosts <nl> ' findAndModify_update_grow . js ' , / / can cause OOM kills on test hosts <nl> - <nl> - / / This workload assumes it is running against a sharded cluster . <nl> - ' sharded_moveChunk_drop_shard_key_index . js ' , <nl> ] . map ( function ( file ) { <nl> return dir + ' / ' + file ; <nl> } ) ; <nl> mmm a / jstests / concurrency / fsm_all_sharded_replication_with_balancer . js <nl> ppp b / jstests / concurrency / fsm_all_sharded_replication_with_balancer . js <nl> var blacklist = [ <nl> ' rename_collection_dbname_droptarget . js ' , <nl> ' rename_collection_droptarget . js ' , <nl> <nl> - / / This workload assumes that the distributed lock can always be acquired when running the split <nl> - / / command in its setup ( ) function ; however , a LockBusy error may be returned if the balancer is <nl> - / / running . <nl> - ' sharded_moveChunk_drop_shard_key_index . js ' , <nl> - <nl> ' update_simple_eval . js ' , / / eval doesn ' t work with sharded collections <nl> ' update_simple_eval_nolock . js ' , / / eval doesn ' t work with sharded collections <nl> ' update_upsert_multi . js ' , / / our update queries lack shard keys <nl> mmm a / jstests / concurrency / fsm_all_simultaneous . js <nl> ppp b / jstests / concurrency / fsm_all_simultaneous . js <nl> var blacklist = [ <nl> <nl> ' agg_group_external . js ' , / / uses > 100MB of data , which can overwhelm test hosts <nl> ' agg_sort_external . js ' , / / uses > 100MB of data , which can overwhelm test hosts <nl> - <nl> - / / This workload assumes it is running against a sharded cluster . <nl> - ' sharded_moveChunk_drop_shard_key_index . js ' , <nl> ] . map ( function ( file ) { <nl> return dir + ' / ' + file ; <nl> } ) ; <nl> mmm a / jstests / concurrency / fsm_workloads / sharded_moveChunk_drop_shard_key_index . js <nl> ppp b / jstests / concurrency / fsm_workloads / sharded_moveChunk_drop_shard_key_index . js <nl> var $ config = ( function ( ) { <nl> } <nl> } <nl> <nl> + function skip ( cluster ) { <nl> + if ( ! cluster . isSharded ( ) | | cluster . 
isBalancerEnabled ( ) ) { <nl> + return { skip : true , msg : ' only runs in a sharded cluster with the balancer disabled . ' } ; <nl> + } <nl> + return { skip : false } ; <nl> + } <nl> + <nl> return { <nl> threadCount : 10 , <nl> iterations : 100 , <nl> data : data , <nl> states : states , <nl> transitions : transitions , <nl> - setup : setup <nl> + setup : setup , <nl> + skip : skip <nl> } ; <nl> <nl> } ) ( ) ; <nl>
SERVER - 27039 Use skip ( ) in sharded_moveChunk_drop_shard_key_index . js .
mongodb/mongo
632871ad320932d35ce4180e82988d9326ad06c6
2016-12-14T15:26:02Z
mmm a / java / src / main / java / com / google / protobuf / GeneratedMessage . java <nl> ppp b / java / src / main / java / com / google / protobuf / GeneratedMessage . java <nl> <nl> <nl> / * * For use by generated code only . * / <nl> protected UnknownFieldSet unknownFields ; <nl> - <nl> + <nl> protected GeneratedMessage ( ) { <nl> unknownFields = UnknownFieldSet . getDefaultInstance ( ) ; <nl> } <nl> protected final void onChanged ( ) { <nl> * Gets the map field with the given field number . This method should be <nl> * overridden in the generated message class if the message contains map <nl> * fields . <nl> - * <nl> + * <nl> * Unlike other field types , reflection support for map fields can ' t be <nl> * implemented based on generated public API because we need to access a <nl> * map field as a list in reflection API but the generated API only allows <nl> * us to access it as a map . This method returns the underlying map field <nl> - * directly and thus enables us to access the map field as a list . <nl> + * directly and thus enables us to access the map field as a list . <nl> * / <nl> @ SuppressWarnings ( { " unused " , " rawtypes " } ) <nl> protected MapField internalGetMapField ( int fieldNumber ) { <nl> private void verifyExtensionContainingType ( <nl> public final < Type > Type getExtension ( <nl> final ExtensionLite < MessageType , Type > extensionLite ) { <nl> Extension < MessageType , Type > extension = checkNotLite ( extensionLite ) ; <nl> - <nl> + <nl> verifyExtensionContainingType ( extension ) ; <nl> FieldDescriptor descriptor = extension . getDescriptor ( ) ; <nl> final Object value = extensions . getField ( descriptor ) ; <nl> public FieldDescriptor loadDescriptor ( ) { <nl> implements ExtensionDescriptorRetriever { <nl> private volatile FieldDescriptor descriptor ; <nl> protected abstract FieldDescriptor loadDescriptor ( ) ; <nl> - <nl> + <nl> public FieldDescriptor getDescriptor ( ) { <nl> if ( descriptor = = null ) { <nl> synchronized ( this ) { <nl> private static Object invokeOrDie ( <nl> } <nl> } <nl> } <nl> - <nl> + <nl> / * * <nl> * Gets the map field with the given field number . This method should be <nl> * overridden in the generated message class if the message contains map <nl> * fields . <nl> - * <nl> + * <nl> * Unlike other field types , reflection support for map fields can ' t be <nl> * implemented based on generated public API because we need to access a <nl> * map field as a list in reflection API but the generated API only allows <nl> * us to access it as a map . This method returns the underlying map field <nl> - * directly and thus enables us to access the map field as a list . <nl> + * directly and thus enables us to access the map field as a list . <nl> * / <nl> @ SuppressWarnings ( { " rawtypes " , " unused " } ) <nl> protected MapField internalGetMapField ( int fieldNumber ) { <nl> public FieldAccessorTable ( <nl> oneofs = new OneofAccessor [ descriptor . getOneofs ( ) . size ( ) ] ; <nl> initialized = false ; <nl> } <nl> - <nl> + <nl> private boolean isMapFieldEnabled ( FieldDescriptor field ) { <nl> boolean result = true ; <nl> return result ; <nl> private static boolean supportFieldPresence ( FileDescriptor file ) { <nl> protected final FieldDescriptor field ; <nl> protected final boolean isOneofField ; <nl> protected final boolean hasHasMethod ; <nl> - <nl> + <nl> private int getOneofFieldNumber ( final GeneratedMessage message ) { <nl> return ( ( Internal . EnumLite ) invokeOrDie ( caseMethod , message ) ) . 
getNumber ( ) ; <nl> } <nl> - <nl> + <nl> private int getOneofFieldNumber ( final GeneratedMessage . Builder builder ) { <nl> return ( ( Internal . EnumLite ) invokeOrDie ( caseMethodBuilder , builder ) ) . getNumber ( ) ; <nl> } <nl> public void clear ( final Builder builder ) { <nl> <nl> private final FieldDescriptor field ; <nl> private final Message mapEntryMessageDefaultInstance ; <nl> - <nl> + <nl> private MapField < ? , ? > getMapField ( GeneratedMessage message ) { <nl> return ( MapField < ? , ? > ) message . internalGetMapField ( field . getNumber ( ) ) ; <nl> } <nl> - <nl> + <nl> private MapField < ? , ? > getMapField ( GeneratedMessage . Builder builder ) { <nl> return ( MapField < ? , ? > ) builder . internalGetMapField ( field . getNumber ( ) ) ; <nl> } <nl> - <nl> + <nl> public Object get ( GeneratedMessage message ) { <nl> List result = new ArrayList ( ) ; <nl> for ( int i = 0 ; i < getRepeatedCount ( message ) ; i + + ) { <nl> public Object getRepeated ( Builder builder , int index ) { <nl> } <nl> <nl> public void setRepeated ( Builder builder , int index , Object value ) { <nl> + builder . onChanged ( ) ; <nl> getMapField ( builder ) . getMutableList ( ) . set ( index , ( Message ) value ) ; <nl> } <nl> <nl> public void addRepeated ( Builder builder , Object value ) { <nl> + builder . onChanged ( ) ; <nl> getMapField ( builder ) . getMutableList ( ) . add ( ( Message ) value ) ; <nl> } <nl> <nl> public int getRepeatedCount ( Builder builder ) { <nl> } <nl> <nl> public void clear ( Builder builder ) { <nl> + builder . onChanged ( ) ; <nl> getMapField ( builder ) . getMutableList ( ) . clear ( ) ; <nl> } <nl> <nl> public void clear ( Builder builder ) { <nl> throw new UnsupportedOperationException ( <nl> " Nested builder not supported for map fields . " ) ; <nl> } <nl> - <nl> + <nl> public com . google . protobuf . Message . Builder getRepeatedBuilder ( <nl> Builder builder , int index ) { <nl> throw new UnsupportedOperationException ( <nl> public void clear ( Builder builder ) { <nl> final Class < ? extends Builder > builderClass , <nl> final String containingOneofCamelCaseName ) { <nl> super ( descriptor , camelCaseName , messageClass , builderClass , containingOneofCamelCaseName ) ; <nl> - <nl> + <nl> enumDescriptor = descriptor . getEnumType ( ) ; <nl> <nl> valueOfMethod = getMethodOrDie ( type , " valueOf " , <nl> public void clear ( Builder builder ) { <nl> getMethodOrDie ( builderClass , " set " + camelCaseName + " Value " , int . class ) ; <nl> } <nl> } <nl> - <nl> + <nl> private EnumDescriptor enumDescriptor ; <nl> <nl> private Method valueOfMethod ; <nl> private Method getValueDescriptorMethod ; <nl> - <nl> + <nl> private boolean supportUnknownEnumValue ; <nl> private Method getValueMethod ; <nl> private Method getValueMethodBuilder ; <nl> public void set ( final Builder builder , final Object value ) { <nl> final Class < ? extends GeneratedMessage > messageClass , <nl> final Class < ? extends Builder > builderClass ) { <nl> super ( descriptor , camelCaseName , messageClass , builderClass ) ; <nl> - <nl> + <nl> enumDescriptor = descriptor . 
getEnumType ( ) ; <nl> <nl> valueOfMethod = getMethodOrDie ( type , " valueOf " , <nl> public void set ( final Builder builder , final Object value ) { <nl> <nl> private final Method valueOfMethod ; <nl> private final Method getValueDescriptorMethod ; <nl> - <nl> + <nl> private boolean supportUnknownEnumValue ; <nl> private Method getRepeatedValueMethod ; <nl> private Method getRepeatedValueMethodBuilder ; <nl> public void addRepeated ( final Builder builder , final Object value ) { <nl> final Class < ? extends GeneratedMessage > messageClass , <nl> final Class < ? extends Builder > builderClass , <nl> final String containingOneofCamelCaseName ) { <nl> - super ( descriptor , camelCaseName , messageClass , builderClass , containingOneofCamelCaseName ) ; <nl> + super ( descriptor , camelCaseName , messageClass , builderClass , <nl> + containingOneofCamelCaseName ) ; <nl> <nl> newBuilderMethod = getMethodOrDie ( type , " newBuilder " ) ; <nl> getBuilderMethodBuilder = <nl> public void addRepeated ( final Builder builder , final Object value ) { <nl> protected Object writeReplace ( ) throws ObjectStreamException { <nl> return new GeneratedMessageLite . SerializedForm ( this ) ; <nl> } <nl> - <nl> + <nl> / * * <nl> * Checks that the { @ link Extension } is non - Lite and returns it as a <nl> * { @ link GeneratedExtension } . <nl> protected Object writeReplace ( ) throws ObjectStreamException { <nl> if ( extension . isLite ( ) ) { <nl> throw new IllegalArgumentException ( " Expected non - lite extension . " ) ; <nl> } <nl> - <nl> + <nl> return ( Extension < MessageType , T > ) extension ; <nl> } <nl> } <nl> mmm a / java / src / test / java / com / google / protobuf / MapTest . java <nl> ppp b / java / src / test / java / com / google / protobuf / MapTest . java <nl> private void setMapValues ( TestMap . Builder builder ) { <nl> builder . getMutableInt32ToStringField ( ) . put ( 1 , " 11 " ) ; <nl> builder . getMutableInt32ToStringField ( ) . put ( 2 , " 22 " ) ; <nl> builder . getMutableInt32ToStringField ( ) . put ( 3 , " 33 " ) ; <nl> - <nl> + <nl> builder . getMutableInt32ToBytesField ( ) . put ( 1 , TestUtil . toBytes ( " 11 " ) ) ; <nl> builder . getMutableInt32ToBytesField ( ) . put ( 2 , TestUtil . toBytes ( " 22 " ) ) ; <nl> builder . getMutableInt32ToBytesField ( ) . put ( 3 , TestUtil . toBytes ( " 33 " ) ) ; <nl> - <nl> + <nl> builder . getMutableInt32ToEnumField ( ) . put ( 1 , TestMap . EnumValue . FOO ) ; <nl> builder . getMutableInt32ToEnumField ( ) . put ( 2 , TestMap . EnumValue . BAR ) ; <nl> builder . getMutableInt32ToEnumField ( ) . put ( 3 , TestMap . EnumValue . BAZ ) ; <nl> - <nl> + <nl> builder . getMutableInt32ToMessageField ( ) . put ( <nl> 1 , MessageValue . newBuilder ( ) . setValue ( 11 ) . build ( ) ) ; <nl> builder . getMutableInt32ToMessageField ( ) . put ( <nl> 2 , MessageValue . newBuilder ( ) . setValue ( 22 ) . build ( ) ) ; <nl> builder . getMutableInt32ToMessageField ( ) . put ( <nl> 3 , MessageValue . newBuilder ( ) . setValue ( 33 ) . build ( ) ) ; <nl> - <nl> + <nl> builder . getMutableStringToInt32Field ( ) . put ( " 1 " , 11 ) ; <nl> builder . getMutableStringToInt32Field ( ) . put ( " 2 " , 22 ) ; <nl> builder . getMutableStringToInt32Field ( ) . put ( " 3 " , 33 ) ; <nl> private void assertMapValuesSet ( TestMap message ) { <nl> assertEquals ( " 11 " , message . getInt32ToStringField ( ) . get ( 1 ) ) ; <nl> assertEquals ( " 22 " , message . getInt32ToStringField ( ) . get ( 2 ) ) ; <nl> assertEquals ( " 33 " , message . getInt32ToStringField ( ) . 
get ( 3 ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . getInt32ToBytesField ( ) . size ( ) ) ; <nl> assertEquals ( TestUtil . toBytes ( " 11 " ) , message . getInt32ToBytesField ( ) . get ( 1 ) ) ; <nl> assertEquals ( TestUtil . toBytes ( " 22 " ) , message . getInt32ToBytesField ( ) . get ( 2 ) ) ; <nl> assertEquals ( TestUtil . toBytes ( " 33 " ) , message . getInt32ToBytesField ( ) . get ( 3 ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . getInt32ToEnumField ( ) . size ( ) ) ; <nl> assertEquals ( TestMap . EnumValue . FOO , message . getInt32ToEnumField ( ) . get ( 1 ) ) ; <nl> assertEquals ( TestMap . EnumValue . BAR , message . getInt32ToEnumField ( ) . get ( 2 ) ) ; <nl> assertEquals ( TestMap . EnumValue . BAZ , message . getInt32ToEnumField ( ) . get ( 3 ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . getInt32ToMessageField ( ) . size ( ) ) ; <nl> assertEquals ( 11 , message . getInt32ToMessageField ( ) . get ( 1 ) . getValue ( ) ) ; <nl> assertEquals ( 22 , message . getInt32ToMessageField ( ) . get ( 2 ) . getValue ( ) ) ; <nl> assertEquals ( 33 , message . getInt32ToMessageField ( ) . get ( 3 ) . getValue ( ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . getStringToInt32Field ( ) . size ( ) ) ; <nl> assertEquals ( 11 , message . getStringToInt32Field ( ) . get ( " 1 " ) . intValue ( ) ) ; <nl> assertEquals ( 22 , message . getStringToInt32Field ( ) . get ( " 2 " ) . intValue ( ) ) ; <nl> private void updateMapValues ( TestMap . Builder builder ) { <nl> builder . getMutableInt32ToStringField ( ) . put ( 1 , " 111 " ) ; <nl> builder . getMutableInt32ToStringField ( ) . remove ( 2 ) ; <nl> builder . getMutableInt32ToStringField ( ) . put ( 4 , " 44 " ) ; <nl> - <nl> + <nl> builder . getMutableInt32ToBytesField ( ) . put ( 1 , TestUtil . toBytes ( " 111 " ) ) ; <nl> builder . getMutableInt32ToBytesField ( ) . remove ( 2 ) ; <nl> builder . getMutableInt32ToBytesField ( ) . put ( 4 , TestUtil . toBytes ( " 44 " ) ) ; <nl> - <nl> + <nl> builder . getMutableInt32ToEnumField ( ) . put ( 1 , TestMap . EnumValue . BAR ) ; <nl> builder . getMutableInt32ToEnumField ( ) . remove ( 2 ) ; <nl> builder . getMutableInt32ToEnumField ( ) . put ( 4 , TestMap . EnumValue . QUX ) ; <nl> - <nl> + <nl> builder . getMutableInt32ToMessageField ( ) . put ( <nl> 1 , MessageValue . newBuilder ( ) . setValue ( 111 ) . build ( ) ) ; <nl> builder . getMutableInt32ToMessageField ( ) . remove ( 2 ) ; <nl> builder . getMutableInt32ToMessageField ( ) . put ( <nl> 4 , MessageValue . newBuilder ( ) . setValue ( 44 ) . build ( ) ) ; <nl> - <nl> + <nl> builder . getMutableStringToInt32Field ( ) . put ( " 1 " , 111 ) ; <nl> builder . getMutableStringToInt32Field ( ) . remove ( " 2 " ) ; <nl> builder . getMutableStringToInt32Field ( ) . put ( " 4 " , 44 ) ; <nl> private void assertMapValuesUpdated ( TestMap message ) { <nl> assertEquals ( " 111 " , message . getInt32ToStringField ( ) . get ( 1 ) ) ; <nl> assertEquals ( " 33 " , message . getInt32ToStringField ( ) . get ( 3 ) ) ; <nl> assertEquals ( " 44 " , message . getInt32ToStringField ( ) . get ( 4 ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . getInt32ToBytesField ( ) . size ( ) ) ; <nl> assertEquals ( TestUtil . toBytes ( " 111 " ) , message . getInt32ToBytesField ( ) . get ( 1 ) ) ; <nl> assertEquals ( TestUtil . toBytes ( " 33 " ) , message . getInt32ToBytesField ( ) . get ( 3 ) ) ; <nl> assertEquals ( TestUtil . toBytes ( " 44 " ) , message . getInt32ToBytesField ( ) . get ( 4 ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . 
getInt32ToEnumField ( ) . size ( ) ) ; <nl> assertEquals ( TestMap . EnumValue . BAR , message . getInt32ToEnumField ( ) . get ( 1 ) ) ; <nl> assertEquals ( TestMap . EnumValue . BAZ , message . getInt32ToEnumField ( ) . get ( 3 ) ) ; <nl> assertEquals ( TestMap . EnumValue . QUX , message . getInt32ToEnumField ( ) . get ( 4 ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . getInt32ToMessageField ( ) . size ( ) ) ; <nl> assertEquals ( 111 , message . getInt32ToMessageField ( ) . get ( 1 ) . getValue ( ) ) ; <nl> assertEquals ( 33 , message . getInt32ToMessageField ( ) . get ( 3 ) . getValue ( ) ) ; <nl> assertEquals ( 44 , message . getInt32ToMessageField ( ) . get ( 4 ) . getValue ( ) ) ; <nl> - <nl> + <nl> assertEquals ( 3 , message . getStringToInt32Field ( ) . size ( ) ) ; <nl> assertEquals ( 111 , message . getStringToInt32Field ( ) . get ( " 1 " ) . intValue ( ) ) ; <nl> assertEquals ( 33 , message . getStringToInt32Field ( ) . get ( " 3 " ) . intValue ( ) ) ; <nl> public void testGettersAndSetters ( ) throws Exception { <nl> TestMap . Builder builder = TestMap . newBuilder ( ) ; <nl> TestMap message = builder . build ( ) ; <nl> assertMapValuesCleared ( message ) ; <nl> - <nl> + <nl> builder = message . toBuilder ( ) ; <nl> setMapValues ( builder ) ; <nl> message = builder . build ( ) ; <nl> assertMapValuesSet ( message ) ; <nl> - <nl> + <nl> builder = message . toBuilder ( ) ; <nl> updateMapValues ( builder ) ; <nl> message = builder . build ( ) ; <nl> assertMapValuesUpdated ( message ) ; <nl> - <nl> + <nl> builder = message . toBuilder ( ) ; <nl> builder . clear ( ) ; <nl> message = builder . build ( ) ; <nl> public void testSerializeAndParse ( ) throws Exception { <nl> assertEquals ( message . getSerializedSize ( ) , message . toByteString ( ) . size ( ) ) ; <nl> message = TestMap . PARSER . parseFrom ( message . toByteString ( ) ) ; <nl> assertMapValuesSet ( message ) ; <nl> - <nl> + <nl> builder = message . toBuilder ( ) ; <nl> updateMapValues ( builder ) ; <nl> message = builder . build ( ) ; <nl> assertEquals ( message . getSerializedSize ( ) , message . toByteString ( ) . size ( ) ) ; <nl> message = TestMap . PARSER . parseFrom ( message . toByteString ( ) ) ; <nl> assertMapValuesUpdated ( message ) ; <nl> - <nl> + <nl> builder = message . toBuilder ( ) ; <nl> builder . clear ( ) ; <nl> message = builder . build ( ) ; <nl> public void testSerializeAndParse ( ) throws Exception { <nl> message = TestMap . PARSER . parseFrom ( message . toByteString ( ) ) ; <nl> assertMapValuesCleared ( message ) ; <nl> } <nl> - <nl> + <nl> public void testMergeFrom ( ) throws Exception { <nl> TestMap . Builder builder = TestMap . newBuilder ( ) ; <nl> setMapValues ( builder ) ; <nl> TestMap message = builder . build ( ) ; <nl> - <nl> + <nl> TestMap . Builder other = TestMap . newBuilder ( ) ; <nl> other . mergeFrom ( message ) ; <nl> assertMapValuesSet ( other . build ( ) ) ; <nl> public void testMergeFrom ( ) throws Exception { <nl> public void testEqualsAndHashCode ( ) throws Exception { <nl> / / Test that generated equals ( ) and hashCode ( ) will disregard the order <nl> / / of map entries when comparing / hashing map fields . <nl> - <nl> + <nl> / / We can ' t control the order of elements in a HashMap . The best we can do <nl> / / here is to add elements in different order . <nl> TestMap . Builder b1 = TestMap . newBuilder ( ) ; <nl> public void testEqualsAndHashCode ( ) throws Exception { <nl> b1 . getMutableInt32ToInt32Field ( ) . put ( 3 , 4 ) ; <nl> b1 . 
getMutableInt32ToInt32Field ( ) . put ( 5 , 6 ) ; <nl> TestMap m1 = b1 . build ( ) ; <nl> - <nl> + <nl> TestMap . Builder b2 = TestMap . newBuilder ( ) ; <nl> b2 . getMutableInt32ToInt32Field ( ) . put ( 5 , 6 ) ; <nl> b2 . getMutableInt32ToInt32Field ( ) . put ( 1 , 2 ) ; <nl> b2 . getMutableInt32ToInt32Field ( ) . put ( 3 , 4 ) ; <nl> TestMap m2 = b2 . build ( ) ; <nl> - <nl> + <nl> assertEquals ( m1 , m2 ) ; <nl> assertEquals ( m1 . hashCode ( ) , m2 . hashCode ( ) ) ; <nl> - <nl> + <nl> / / Make sure we did compare map fields . <nl> b2 . getMutableInt32ToInt32Field ( ) . put ( 1 , 0 ) ; <nl> m2 = b2 . build ( ) ; <nl> assertFalse ( m1 . equals ( m2 ) ) ; <nl> / / Don ' t check m1 . hashCode ( ) ! = m2 . hashCode ( ) because it ' s not guaranteed <nl> / / to be different . <nl> - <nl> + <nl> / / Regression test for b / 18549190 : if a map is a subset of the other map , <nl> / / equals ( ) should return false . <nl> b2 . getMutableInt32ToInt32Field ( ) . remove ( 1 ) ; <nl> public void testEqualsAndHashCode ( ) throws Exception { <nl> assertFalse ( m1 . equals ( m2 ) ) ; <nl> assertFalse ( m2 . equals ( m1 ) ) ; <nl> } <nl> - <nl> - <nl> + <nl> + <nl> public void testNestedBuilderOnChangeEventPropagation ( ) { <nl> TestOnChangeEventPropagation . Builder parent = <nl> TestOnChangeEventPropagation . newBuilder ( ) ; <nl> parent . getOptionalMessageBuilder ( ) . getMutableInt32ToInt32Field ( ) . put ( 1 , 2 ) ; <nl> TestOnChangeEventPropagation message = parent . build ( ) ; <nl> assertEquals ( 2 , message . getOptionalMessage ( ) . getInt32ToInt32Field ( ) . get ( 1 ) . intValue ( ) ) ; <nl> - <nl> + <nl> / / Make a change using nested builder . <nl> parent . getOptionalMessageBuilder ( ) . getMutableInt32ToInt32Field ( ) . put ( 1 , 3 ) ; <nl> - <nl> + <nl> / / Should be able to observe the change . <nl> message = parent . build ( ) ; <nl> assertEquals ( 3 , message . getOptionalMessage ( ) . getInt32ToInt32Field ( ) . get ( 1 ) . intValue ( ) ) ; <nl> - <nl> + <nl> / / Make another change using mergeFrom ( ) <nl> TestMap . Builder other = TestMap . newBuilder ( ) ; <nl> other . getMutableInt32ToInt32Field ( ) . put ( 1 , 4 ) ; <nl> parent . getOptionalMessageBuilder ( ) . mergeFrom ( other . build ( ) ) ; <nl> - <nl> + <nl> / / Should be able to observe the change . <nl> message = parent . build ( ) ; <nl> assertEquals ( 4 , message . getOptionalMessage ( ) . getInt32ToInt32Field ( ) . get ( 1 ) . intValue ( ) ) ; <nl> - <nl> + <nl> / / Make yet another change by clearing the nested builder . <nl> parent . getOptionalMessageBuilder ( ) . clear ( ) ; <nl> - <nl> + <nl> / / Should be able to observe the change . <nl> message = parent . build ( ) ; <nl> assertEquals ( 0 , message . getOptionalMessage ( ) . getInt32ToInt32Field ( ) . size ( ) ) ; <nl> } <nl> - <nl> + <nl> + public void testNestedBuilderOnChangeEventPropagationReflection ( ) { <nl> + FieldDescriptor intMapField = f ( " int32_to_int32_field " ) ; <nl> + / / Create an outer message builder with nested builder . <nl> + TestOnChangeEventPropagation . Builder parentBuilder = <nl> + TestOnChangeEventPropagation . newBuilder ( ) ; <nl> + TestMap . Builder testMapBuilder = parentBuilder . getOptionalMessageBuilder ( ) ; <nl> + <nl> + / / Create a map entry message . <nl> + TestMap . Builder entryBuilder = TestMap . newBuilder ( ) ; <nl> + entryBuilder . getMutableInt32ToInt32Field ( ) . put ( 1 , 1 ) ; <nl> + <nl> + / / Put the entry into the nested builder . <nl> + testMapBuilder . 
addRepeatedField ( <nl> + intMapField , entryBuilder . getRepeatedField ( intMapField , 0 ) ) ; <nl> + <nl> + / / Should be able to observe the change . <nl> + TestOnChangeEventPropagation message = parentBuilder . build ( ) ; <nl> + assertEquals ( 1 , message . getOptionalMessage ( ) . getInt32ToInt32Field ( ) . size ( ) ) ; <nl> + <nl> + / / Change the entry value . <nl> + entryBuilder . getMutableInt32ToInt32Field ( ) . put ( 1 , 4 ) ; <nl> + testMapBuilder = parentBuilder . getOptionalMessageBuilder ( ) ; <nl> + testMapBuilder . setRepeatedField ( <nl> + intMapField , 0 , entryBuilder . getRepeatedField ( intMapField , 0 ) ) ; <nl> + <nl> + / / Should be able to observe the change . <nl> + message = parentBuilder . build ( ) ; <nl> + assertEquals ( 4 , <nl> + message . getOptionalMessage ( ) . getInt32ToInt32Field ( ) . get ( 1 ) . intValue ( ) ) ; <nl> + <nl> + / / Clear the nested builder . <nl> + testMapBuilder = parentBuilder . getOptionalMessageBuilder ( ) ; <nl> + testMapBuilder . clearField ( intMapField ) ; <nl> + <nl> + / / Should be able to observe the change . <nl> + message = parentBuilder . build ( ) ; <nl> + assertEquals ( 0 , message . getOptionalMessage ( ) . getInt32ToInt32Field ( ) . size ( ) ) ; <nl> + } <nl> + <nl> / / The following methods are used to test reflection API . <nl> - <nl> + <nl> private static FieldDescriptor f ( String name ) { <nl> return TestMap . getDescriptor ( ) . findFieldByName ( name ) ; <nl> } <nl> - <nl> + <nl> private static Object getFieldValue ( Message mapEntry , String name ) { <nl> FieldDescriptor field = mapEntry . getDescriptorForType ( ) . findFieldByName ( name ) ; <nl> return mapEntry . getField ( field ) ; <nl> } <nl> - <nl> + <nl> private static Message . Builder setFieldValue ( <nl> Message . Builder mapEntry , String name , Object value ) { <nl> FieldDescriptor field = mapEntry . getDescriptorForType ( ) . findFieldByName ( name ) ; <nl> mapEntry . setField ( field , value ) ; <nl> return mapEntry ; <nl> } <nl> - <nl> + <nl> private static void assertHasMapValues ( Message message , String name , Map < ? , ? > values ) { <nl> FieldDescriptor field = f ( name ) ; <nl> for ( Object entry : ( List < ? > ) message . getField ( field ) ) { <nl> private static void assertHasMapValues ( Message message , String name , Map < ? , ? > v <nl> assertEquals ( value , values . get ( key ) ) ; <nl> } <nl> } <nl> - <nl> + <nl> private static < KeyType , ValueType > <nl> Message newMapEntry ( Message . Builder builder , String name , KeyType key , ValueType value ) { <nl> FieldDescriptor field = builder . getDescriptorForType ( ) . findFieldByName ( name ) ; <nl> Message newMapEntry ( Message . Builder builder , String name , KeyType key , ValueType <nl> entryBuilder . setField ( valueField , value ) ; <nl> return entryBuilder . build ( ) ; <nl> } <nl> - <nl> + <nl> private static void setMapValues ( Message . Builder builder , String name , Map < ? , ? > values ) { <nl> List < Message > entryList = new ArrayList < Message > ( ) ; <nl> for ( Map . Entry < ? , ? > entry : values . entrySet ( ) ) { <nl> private static void setMapValues ( Message . Builder builder , String name , Map < ? , ? > <nl> FieldDescriptor field = builder . getDescriptorForType ( ) . findFieldByName ( name ) ; <nl> builder . 
setField ( field , entryList ) ; <nl> } <nl> - <nl> + <nl> private static < KeyType , ValueType > <nl> Map < KeyType , ValueType > mapForValues ( <nl> KeyType key1 , ValueType value1 , KeyType key2 , ValueType value2 ) { <nl> public void testReflectionApi ( ) throws Exception { <nl> mapForValues ( <nl> 11 , MessageValue . newBuilder ( ) . setValue ( 22 ) . build ( ) , <nl> 33 , MessageValue . newBuilder ( ) . setValue ( 44 ) . build ( ) ) ) ; <nl> - <nl> + <nl> / / Test clearField ( ) <nl> builder . clearField ( f ( " int32_to_int32_field " ) ) ; <nl> builder . clearField ( f ( " int32_to_message_field " ) ) ; <nl> message = builder . build ( ) ; <nl> assertEquals ( 0 , message . getInt32ToInt32Field ( ) . size ( ) ) ; <nl> assertEquals ( 0 , message . getInt32ToMessageField ( ) . size ( ) ) ; <nl> - <nl> + <nl> / / Test setField ( ) <nl> setMapValues ( builder , " int32_to_int32_field " , <nl> mapForValues ( 11 , 22 , 33 , 44 ) ) ; <nl> public void testReflectionApi ( ) throws Exception { <nl> assertEquals ( 44 , message . getInt32ToInt32Field ( ) . get ( 33 ) . intValue ( ) ) ; <nl> assertEquals ( 222 , message . getInt32ToMessageField ( ) . get ( 111 ) . getValue ( ) ) ; <nl> assertEquals ( 444 , message . getInt32ToMessageField ( ) . get ( 333 ) . getValue ( ) ) ; <nl> - <nl> + <nl> / / Test addRepeatedField <nl> builder . addRepeatedField ( f ( " int32_to_int32_field " ) , <nl> newMapEntry ( builder , " int32_to_int32_field " , 55 , 66 ) ) ; <nl> public void testReflectionApi ( ) throws Exception { <nl> message = builder . build ( ) ; <nl> assertEquals ( 55 , message . getInt32ToInt32Field ( ) . get ( 55 ) . intValue ( ) ) ; <nl> assertEquals ( 555 , message . getInt32ToMessageField ( ) . get ( 555 ) . getValue ( ) ) ; <nl> - <nl> + <nl> / / Test setRepeatedField <nl> for ( int i = 0 ; i < builder . getRepeatedFieldCount ( f ( " int32_to_int32_field " ) ) ; i + + ) { <nl> Message mapEntry = ( Message ) builder . getRepeatedField ( f ( " int32_to_int32_field " ) , i ) ; <nl> public void testReflectionApi ( ) throws Exception { <nl> assertEquals ( 33 , message . getInt32ToInt32Field ( ) . get ( 44 ) . intValue ( ) ) ; <nl> assertEquals ( 55 , message . getInt32ToInt32Field ( ) . get ( 55 ) . intValue ( ) ) ; <nl> } <nl> - <nl> + <nl> public void testTextFormat ( ) throws Exception { <nl> TestMap . Builder builder = TestMap . newBuilder ( ) ; <nl> setMapValues ( builder ) ; <nl> TestMap message = builder . build ( ) ; <nl> - <nl> + <nl> String textData = TextFormat . printToString ( message ) ; <nl> - <nl> + <nl> builder = TestMap . newBuilder ( ) ; <nl> TextFormat . merge ( textData , builder ) ; <nl> message = builder . build ( ) ; <nl> - <nl> + <nl> assertMapValuesSet ( message ) ; <nl> } <nl> - <nl> + <nl> public void testDynamicMessage ( ) throws Exception { <nl> TestMap . Builder builder = TestMap . newBuilder ( ) ; <nl> setMapValues ( builder ) ; <nl> TestMap message = builder . build ( ) ; <nl> - <nl> + <nl> Message dynamicDefaultInstance = <nl> DynamicMessage . getDefaultInstance ( TestMap . getDescriptor ( ) ) ; <nl> Message dynamicMessage = dynamicDefaultInstance <nl> . newBuilderForType ( ) . mergeFrom ( message . toByteString ( ) ) . build ( ) ; <nl> - <nl> + <nl> assertEquals ( message , dynamicMessage ) ; <nl> assertEquals ( message . hashCode ( ) , dynamicMessage . 
hashCode ( ) ) ; <nl> } <nl> - <nl> + <nl> public void testReflectionEqualsAndHashCode ( ) throws Exception { <nl> / / Test that generated equals ( ) and hashCode ( ) will disregard the order <nl> / / of map entries when comparing / hashing map fields . <nl> public void testReflectionEqualsAndHashCode ( ) throws Exception { <nl> Message dynamicDefaultInstance = <nl> DynamicMessage . getDefaultInstance ( TestMap . getDescriptor ( ) ) ; <nl> FieldDescriptor field = f ( " int32_to_int32_field " ) ; <nl> - <nl> + <nl> Message . Builder b1 = dynamicDefaultInstance . newBuilderForType ( ) ; <nl> b1 . addRepeatedField ( field , newMapEntry ( b1 , " int32_to_int32_field " , 1 , 2 ) ) ; <nl> b1 . addRepeatedField ( field , newMapEntry ( b1 , " int32_to_int32_field " , 3 , 4 ) ) ; <nl> b1 . addRepeatedField ( field , newMapEntry ( b1 , " int32_to_int32_field " , 5 , 6 ) ) ; <nl> Message m1 = b1 . build ( ) ; <nl> - <nl> + <nl> Message . Builder b2 = dynamicDefaultInstance . newBuilderForType ( ) ; <nl> b2 . addRepeatedField ( field , newMapEntry ( b2 , " int32_to_int32_field " , 5 , 6 ) ) ; <nl> b2 . addRepeatedField ( field , newMapEntry ( b2 , " int32_to_int32_field " , 1 , 2 ) ) ; <nl> b2 . addRepeatedField ( field , newMapEntry ( b2 , " int32_to_int32_field " , 3 , 4 ) ) ; <nl> Message m2 = b2 . build ( ) ; <nl> - <nl> + <nl> assertEquals ( m1 , m2 ) ; <nl> assertEquals ( m1 . hashCode ( ) , m2 . hashCode ( ) ) ; <nl> - <nl> + <nl> / / Make sure we did compare map fields . <nl> b2 . setRepeatedField ( field , 0 , newMapEntry ( b1 , " int32_to_int32_field " , 0 , 0 ) ) ; <nl> m2 = b2 . build ( ) ; <nl> public void testReflectionEqualsAndHashCode ( ) throws Exception { <nl> / / Don ' t check m1 . hashCode ( ) ! = m2 . hashCode ( ) because it ' s not guaranteed <nl> / / to be different . <nl> } <nl> - <nl> + <nl> public void testUnknownEnumValues ( ) throws Exception { <nl> TestMap . Builder builder = TestMap . newBuilder ( ) ; <nl> builder . getMutableInt32ToEnumFieldValue ( ) . put ( 0 , 0 ) ; <nl> public void testUnknownEnumValues ( ) throws Exception { <nl> assertEquals ( TestMap . EnumValue . UNRECOGNIZED , <nl> message . getInt32ToEnumField ( ) . get ( 2 ) ) ; <nl> assertEquals ( 1000 , message . getInt32ToEnumFieldValue ( ) . get ( 2 ) . intValue ( ) ) ; <nl> - <nl> + <nl> / / Unknown enum values should be preserved after : <nl> / / 1 . Serialization and parsing . <nl> / / 2 . toBuild ( ) . <nl> public void testUnknownEnumValues ( ) throws Exception { <nl> assertEquals ( 1000 , builder . getInt32ToEnumFieldValue ( ) . get ( 2 ) . intValue ( ) ) ; <nl> builder = TestMap . newBuilder ( ) . mergeFrom ( message ) ; <nl> assertEquals ( 1000 , builder . getInt32ToEnumFieldValue ( ) . get ( 2 ) . intValue ( ) ) ; <nl> - <nl> + <nl> / / hashCode ( ) / equals ( ) should take unknown enum values into account . <nl> builder . getMutableInt32ToEnumFieldValue ( ) . put ( 2 , 1001 ) ; <nl> TestMap message2 = builder . build ( ) ; <nl> public void testUnknownEnumValues ( ) throws Exception { <nl> / / should be the same . <nl> assertTrue ( message . getInt32ToEnumField ( ) . equals ( message2 . getInt32ToEnumField ( ) ) ) ; <nl> } <nl> - <nl> + <nl> public void testUnknownEnumValuesInReflectionApi ( ) throws Exception { <nl> Descriptor descriptor = TestMap . getDescriptor ( ) ; <nl> EnumDescriptor enumDescriptor = TestMap . EnumValue . getDescriptor ( ) ; <nl> FieldDescriptor field = descriptor . 
findFieldByName ( " int32_to_enum_field " ) ; <nl> - <nl> + <nl> Map < Integer , Integer > data = new HashMap < Integer , Integer > ( ) ; <nl> data . put ( 0 , 0 ) ; <nl> data . put ( 1 , 1 ) ; <nl> data . put ( 2 , 1000 ) ; / / unknown value . <nl> - <nl> + <nl> TestMap . Builder builder = TestMap . newBuilder ( ) ; <nl> for ( Map . Entry < Integer , Integer > entry : data . entrySet ( ) ) { <nl> builder . getMutableInt32ToEnumFieldValue ( ) . put ( entry . getKey ( ) , entry . getValue ( ) ) ; <nl>
Fix Java maps reflection to call onChange to propagate changes to the parent
protocolbuffers/protobuf
20042b72da0a4fef46fd90e1e7766d124f16e465
2015-02-24T00:25:52Z
mmm a / . pre - commit - config . yaml <nl> ppp b / . pre - commit - config . yaml <nl> repos : <nl> - id : flake8 <nl> exclude : ' ^ ( pyextra ) | ( external ) | ( cereal ) | ( rednose ) | ( panda ) | ( laika ) | ( laika_repo ) | ( rednose_repo ) / ' <nl> args : <nl> - - - - ignore = E111 , E114 , E121 , E122 , E123 , E124 , E126 , E127 , E128 , E201 , E202 , E203 , E221 , E225 , E226 , E231 , E241 , E251 , E261 , E265 , E266 , E302 , E303 , E305 , E402 , E501 , E502 , E722 , E741 , W504 <nl> + - - - ignore = E111 , E114 , E121 , E122 , E123 , E124 , E126 , E127 , E128 , E201 , E202 , E203 , E226 , E231 , E241 , E251 , E261 , E265 , E266 , E302 , E303 , E305 , E402 , E501 , E502 , E722 , E741 , W504 <nl> - - - statistics <nl> - repo : local <nl> hooks : <nl> mmm a / common / stat_live . py <nl> ppp b / common / stat_live . py <nl> def push_and_update ( self , new_data ) : <nl> _std_last = self . raw_stat . std ( ) <nl> self . raw_stat . push_data ( new_data ) <nl> _delta_std = self . raw_stat . std ( ) - _std_last <nl> - if _delta_std < = 0 : <nl> + if _delta_std < = 0 : <nl> self . filtered_stat . push_data ( new_data ) <nl> else : <nl> pass <nl> mmm a / common / transformations / orientation . py <nl> ppp b / common / transformations / orientation . py <nl> def rot2euler ( rots ) : <nl> quats_from_rotations = rot2quat <nl> quat_from_rot = rot2quat <nl> rotations_from_quats = quat2rot <nl> - rot_from_quat = quat2rot <nl> - rot_from_quat = quat2rot <nl> + rot_from_quat = quat2rot <nl> + rot_from_quat = quat2rot <nl> euler_from_rot = rot2euler <nl> euler_from_quat = quat2euler <nl> rot_from_euler = euler2rot <nl> mmm a / opendbc <nl> ppp b / opendbc <nl> @ @ - 1 + 1 @ @ <nl> - Subproject commit b15edbc1b5a68fd725ea45ba9442a6c9be875971 <nl> + Subproject commit 73685b609d25cfc8f838d53f69cf78e136c612c2 <nl> mmm a / selfdrive / camerad / test / frame_test . py <nl> ppp b / selfdrive / camerad / test / frame_test . py <nl> <nl> font = ImageFont . truetype ( " arial " , size = 72 ) <nl> def get_frame ( idx ) : <nl> img = np . zeros ( ( 874 , 1164 , 3 ) , np . uint8 ) <nl> - img [ 100 : 400 , 100 : 100 + ( idx % 10 ) * 100 ] = 255 <nl> + img [ 100 : 400 , 100 : 100 + ( idx % 10 ) * 100 ] = 255 <nl> <nl> # big number <nl> im2 = Image . new ( " RGB " , ( 200 , 200 ) ) <nl> mmm a / selfdrive / car / chrysler / chryslercan . py <nl> ppp b / selfdrive / car / chrysler / chryslercan . py <nl> def create_lkas_hud ( packer , gear , lkas_active , hud_alert , hud_count , lkas_car_mo <nl> lines = 1 <nl> alerts = 0 <nl> <nl> - if hud_count < ( 1 * 4 ) : # first 3 seconds , 4Hz <nl> + if hud_count < ( 1 * 4 ) : # first 3 seconds , 4Hz <nl> alerts = 1 <nl> # CAR . PACIFICA_2018_HYBRID and CAR . PACIFICA_2019_HYBRID <nl> # had color = 1 and lines = 1 but trying 2017 hybrid style for now . <nl> mmm a / selfdrive / car / ford / interface . py <nl> ppp b / selfdrive / car / ford / interface . py <nl> def update ( self , c , can_strings ) : <nl> # events <nl> events = self . create_common_events ( ret ) <nl> <nl> - if self . CS . lkas_state not in [ 2 , 3 ] and ret . vEgo > 13 . * CV . MPH_TO_MS and ret . cruiseState . enabled : <nl> + if self . CS . lkas_state not in [ 2 , 3 ] and ret . vEgo > 13 . * CV . MPH_TO_MS and ret . cruiseState . enabled : <nl> events . add ( car . CarEvent . EventName . steerTempUnavailableMute ) <nl> <nl> ret . events = events . to_msg ( ) <nl> mmm a / selfdrive / car / ford / radar_interface . py <nl> ppp b / selfdrive / car / ford / radar_interface . 
py <nl> def update ( self , can_strings ) : <nl> if cpt [ ' X_Rel ' ] > 0 . 00001 : <nl> self . validCnt [ ii ] + = 1 <nl> else : <nl> - self . validCnt [ ii ] = max ( self . validCnt [ ii ] - 1 , 0 ) <nl> + self . validCnt [ ii ] = max ( self . validCnt [ ii ] - 1 , 0 ) <nl> # print ii , self . validCnt [ ii ] , cpt [ ' VALID ' ] , cpt [ ' X_Rel ' ] , cpt [ ' Angle ' ] <nl> <nl> # radar point only valid if there have been enough valid measurements <nl> mmm a / selfdrive / car / gm / gmcan . py <nl> ppp b / selfdrive / car / gm / gmcan . py <nl> def create_gas_regen_command ( packer , bus , throttle , idx , acc_engaged , at_full_st <nl> } <nl> <nl> dat = packer . make_can_msg ( " ASCMGasRegenCmd " , bus , values ) [ 2 ] <nl> - values [ " GasRegenChecksum " ] = ( ( ( 0xff - dat [ 1 ] ) & 0xff ) < < 16 ) | \ <nl> + values [ " GasRegenChecksum " ] = ( ( ( 0xff - dat [ 1 ] ) & 0xff ) < < 16 ) | \ <nl> ( ( ( 0xff - dat [ 2 ] ) & 0xff ) < < 8 ) | \ <nl> ( ( 0x100 - dat [ 3 ] - idx ) & 0xff ) <nl> <nl> mmm a / selfdrive / car / gm / values . py <nl> ppp b / selfdrive / car / gm / values . py <nl> class CAR : <nl> BUICK_REGAL = " BUICK REGAL ESSENCE 2018 " <nl> <nl> class CruiseButtons : <nl> - INIT = 0 <nl> - UNPRESS = 1 <nl> - RES_ACCEL = 2 <nl> - DECEL_SET = 3 <nl> - MAIN = 5 <nl> - CANCEL = 6 <nl> + INIT = 0 <nl> + UNPRESS = 1 <nl> + RES_ACCEL = 2 <nl> + DECEL_SET = 3 <nl> + MAIN = 5 <nl> + CANCEL = 6 <nl> <nl> class AccState : <nl> - OFF = 0 <nl> - ACTIVE = 1 <nl> - FAULTED = 3 <nl> + OFF = 0 <nl> + ACTIVE = 1 <nl> + FAULTED = 3 <nl> STANDSTILL = 4 <nl> <nl> class CanBus : <nl> POWERTRAIN = 0 <nl> - OBSTACLE = 1 <nl> - CHASSIS = 2 <nl> - SW_GMLAN = 3 <nl> + OBSTACLE = 1 <nl> + CHASSIS = 2 <nl> + SW_GMLAN = 3 <nl> <nl> def is_eps_status_ok ( eps_status , car_fingerprint ) : <nl> return eps_status in [ 0 , 1 ] <nl> mmm a / selfdrive / car / honda / values . py <nl> ppp b / selfdrive / car / honda / values . py <nl> <nl> <nl> # Car button codes <nl> class CruiseButtons : <nl> - RES_ACCEL = 4 <nl> - DECEL_SET = 3 <nl> - CANCEL = 2 <nl> - MAIN = 1 <nl> + RES_ACCEL = 4 <nl> + DECEL_SET = 3 <nl> + CANCEL = 2 <nl> + MAIN = 1 <nl> <nl> # See dbc files for info on values " <nl> VISUAL_HUD = { <nl> mmm a / selfdrive / car / mazda / mazdacan . py <nl> ppp b / selfdrive / car / mazda / mazdacan . py <nl> def create_steering_control ( packer , car_fingerprint , frame , apply_steer , lkas ) : <nl> <nl> b1 = int ( lkas [ " BIT_1 " ] ) <nl> ldw = int ( lkas [ " LDW " ] ) <nl> - er1 = int ( lkas [ " ERR_BIT_1 " ] ) <nl> + er1 = int ( lkas [ " ERR_BIT_1 " ] ) <nl> lnv = 0 <nl> - er2 = int ( lkas [ " ERR_BIT_2 " ] ) <nl> + er2 = int ( lkas [ " ERR_BIT_2 " ] ) <nl> <nl> steering_angle = int ( lkas [ " STEERING_ANGLE " ] ) <nl> b2 = int ( lkas [ " ANGLE_ENABLED " ] ) <nl> <nl> tmp = steering_angle + 2048 <nl> ahi = tmp > > 10 <nl> - amd = ( tmp & 0x3FF ) > > 2 <nl> + amd = ( tmp & 0x3FF ) > > 2 <nl> amd = ( amd > > 4 ) | ( ( amd & 0xF ) < < 4 ) <nl> alo = ( tmp & 0x3 ) < < 2 <nl> <nl> ctr = frame % 16 <nl> # bytes : [ 1 ] [ 2 ] [ 3 ] [ 4 ] <nl> - csum = 249 - ctr - hi - lo - ( lnv < < 3 ) - er1 - ( ldw < < 7 ) - ( er2 < < 4 ) - ( b1 < < 5 ) <nl> + csum = 249 - ctr - hi - lo - ( lnv < < 3 ) - er1 - ( ldw < < 7 ) - ( er2 < < 4 ) - ( b1 < < 5 ) <nl> <nl> # bytes [ 5 ] [ 6 ] [ 7 ] <nl> csum = csum - ahi - amd - alo - b2 <nl> mmm a / selfdrive / car / toyota / radar_interface . py <nl> ppp b / selfdrive / car / toyota / radar_interface . py <nl> def _update ( self , updated_messages ) : <nl> if ii in self . 
RADAR_A_MSGS : <nl> cpt = self . rcp . vl [ ii ] <nl> <nl> - if cpt [ ' LONG_DIST ' ] > = 255 or cpt [ ' NEW_TRACK ' ] : <nl> + if cpt [ ' LONG_DIST ' ] > = 255 or cpt [ ' NEW_TRACK ' ] : <nl> self . valid_cnt [ ii ] = 0 # reset counter <nl> if cpt [ ' VALID ' ] and cpt [ ' LONG_DIST ' ] < 255 : <nl> self . valid_cnt [ ii ] + = 1 <nl> else : <nl> - self . valid_cnt [ ii ] = max ( self . valid_cnt [ ii ] - 1 , 0 ) <nl> + self . valid_cnt [ ii ] = max ( self . valid_cnt [ ii ] - 1 , 0 ) <nl> <nl> score = self . rcp . vl [ ii + 16 ] [ ' SCORE ' ] <nl> # print ii , self . valid_cnt [ ii ] , score , cpt [ ' VALID ' ] , cpt [ ' LONG_DIST ' ] , cpt [ ' LAT_DIST ' ] <nl> mmm a / selfdrive / config . py <nl> ppp b / selfdrive / config . py <nl> class UIParams : <nl> lidar_car_x , lidar_car_y = lidar_x / 2 . , lidar_y / 1 . 1 <nl> car_hwidth = 1 . 7272 / 2 * lidar_zoom <nl> car_front = 2 . 6924 * lidar_zoom <nl> - car_back = 1 . 8796 * lidar_zoom <nl> + car_back = 1 . 8796 * lidar_zoom <nl> car_color = 110 <nl> mmm a / selfdrive / controls / lib / driver_monitor . py <nl> ppp b / selfdrive / controls / lib / driver_monitor . py <nl> def get_pose ( self , driver_state , cal_rpy , car_speed , op_engaged ) : <nl> # self . pose . roll_std = driver_state . faceOrientationStd [ 2 ] <nl> model_std_max = max ( self . pose . pitch_std , self . pose . yaw_std ) <nl> self . pose . low_std = model_std_max < _POSESTD_THRESHOLD <nl> - self . blink . left_blink = driver_state . leftBlinkProb * ( driver_state . leftEyeProb > _EYE_THRESHOLD ) <nl> - self . blink . right_blink = driver_state . rightBlinkProb * ( driver_state . rightEyeProb > _EYE_THRESHOLD ) <nl> + self . blink . left_blink = driver_state . leftBlinkProb * ( driver_state . leftEyeProb > _EYE_THRESHOLD ) <nl> + self . blink . right_blink = driver_state . rightBlinkProb * ( driver_state . rightEyeProb > _EYE_THRESHOLD ) <nl> self . face_detected = driver_state . faceProb > _FACE_THRESHOLD and \ <nl> abs ( driver_state . facePosition [ 0 ] ) < = 0 . 4 and abs ( driver_state . facePosition [ 1 ] ) < = 0 . 45 <nl> <nl> def get_pose ( self , driver_state , cal_rpy , car_speed , op_engaged ) : <nl> <nl> # update offseter <nl> # only update when driver is actively driving the car above a certain speed <nl> - if self . face_detected and car_speed > _POSE_CALIB_MIN_SPEED and self . pose . low_std and ( not op_engaged or not self . driver_distracted ) : <nl> + if self . face_detected and car_speed > _POSE_CALIB_MIN_SPEED and self . pose . low_std and ( not op_engaged or not self . driver_distracted ) : <nl> self . pose . pitch_offseter . push_and_update ( self . pose . pitch ) <nl> self . pose . yaw_offseter . push_and_update ( self . pose . yaw ) <nl> <nl> mmm a / selfdrive / controls / lib / latcontrol_indi . py <nl> ppp b / selfdrive / controls / lib / latcontrol_indi . py <nl> def update ( self , active , CS , CP , path_plan ) : <nl> indi_log . delta = float ( delta_u ) <nl> indi_log . output = float ( self . output_steer ) <nl> <nl> - check_saturation = ( CS . vEgo > 10 . ) and not CS . steeringRateLimited and not CS . steeringPressed <nl> + check_saturation = ( CS . vEgo > 10 . ) and not CS . steeringRateLimited and not CS . steeringPressed <nl> indi_log . saturated = self . _check_saturation ( self . output_steer , check_saturation , steers_max ) <nl> <nl> return float ( self . output_steer ) , float ( self . angle_steers_des ) , indi_log <nl> mmm a / selfdrive / controls / lib / planner . py <nl> ppp b / selfdrive / controls / lib / planner . 
py <nl> <nl> <nl> # lookup tables VS speed to determine min and max accels in cruise <nl> # make sure these accelerations are smaller than mpc limits <nl> - _A_CRUISE_MIN_V = [ - 1 . 0 , - . 8 , - . 67 , - . 5 , - . 30 ] <nl> + _A_CRUISE_MIN_V = [ - 1 . 0 , - . 8 , - . 67 , - . 5 , - . 30 ] <nl> _A_CRUISE_MIN_BP = [ 0 . , 5 . , 10 . , 20 . , 40 . ] <nl> <nl> # need fast accel at very low speed for stop and go <nl> mmm a / selfdrive / controls / tests / test_monitoring . py <nl> ppp b / selfdrive / controls / tests / test_monitoring . py <nl> class TestMonitoring ( unittest . TestCase ) : <nl> # 0 . op engaged , driver is doing fine all the time <nl> def test_fully_aware_driver ( self ) : <nl> events_output = run_DState_seq ( always_attentive , always_false , always_true , always_false ) [ 0 ] <nl> - self . assertTrue ( np . sum ( [ len ( event ) for event in events_output ] ) = = 0 ) <nl> + self . assertTrue ( np . sum ( [ len ( event ) for event in events_output ] ) = = 0 ) <nl> <nl> # 1 . op engaged , driver is distracted and does nothing <nl> def test_fully_distracted_driver ( self ) : <nl> events_output , d_status = run_DState_seq ( always_distracted , always_false , always_true , always_false ) <nl> - self . assertTrue ( len ( events_output [ int ( ( _DISTRACTED_TIME - _DISTRACTED_PRE_TIME_TILL_TERMINAL ) / 2 / DT_DMON ) ] ) = = 0 ) <nl> - self . assertEqual ( events_output [ int ( ( _DISTRACTED_TIME - _DISTRACTED_PRE_TIME_TILL_TERMINAL + \ <nl> + self . assertTrue ( len ( events_output [ int ( ( _DISTRACTED_TIME - _DISTRACTED_PRE_TIME_TILL_TERMINAL ) / 2 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertEqual ( events_output [ int ( ( _DISTRACTED_TIME - _DISTRACTED_PRE_TIME_TILL_TERMINAL + \ <nl> ( ( _DISTRACTED_PRE_TIME_TILL_TERMINAL - _DISTRACTED_PROMPT_TIME_TILL_TERMINAL ) / 2 ) ) / DT_DMON ) ] . names [ 0 ] , EventName . preDriverDistracted ) <nl> - self . assertEqual ( events_output [ int ( ( _DISTRACTED_TIME - _DISTRACTED_PROMPT_TIME_TILL_TERMINAL + \ <nl> + self . assertEqual ( events_output [ int ( ( _DISTRACTED_TIME - _DISTRACTED_PROMPT_TIME_TILL_TERMINAL + \ <nl> ( ( _DISTRACTED_PROMPT_TIME_TILL_TERMINAL ) / 2 ) ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverDistracted ) <nl> - self . assertEqual ( events_output [ int ( ( _DISTRACTED_TIME + \ <nl> + self . assertEqual ( events_output [ int ( ( _DISTRACTED_TIME + \ <nl> ( ( _TEST_TIMESPAN - 10 - _DISTRACTED_TIME ) / 2 ) ) / DT_DMON ) ] . names [ 0 ] , EventName . driverDistracted ) <nl> self . assertIs ( type ( d_status . awareness ) , float ) <nl> <nl> # 2 . op engaged , no face detected the whole time , no action <nl> def test_fully_invisible_driver ( self ) : <nl> events_output = run_DState_seq ( always_no_face , always_false , always_true , always_false ) [ 0 ] <nl> - self . assertTrue ( len ( events_output [ int ( ( _AWARENESS_TIME - _AWARENESS_PRE_TIME_TILL_TERMINAL ) / 2 / DT_DMON ) ] ) = = 0 ) <nl> - self . assertEqual ( events_output [ int ( ( _AWARENESS_TIME - _AWARENESS_PRE_TIME_TILL_TERMINAL + \ <nl> + self . assertTrue ( len ( events_output [ int ( ( _AWARENESS_TIME - _AWARENESS_PRE_TIME_TILL_TERMINAL ) / 2 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertEqual ( events_output [ int ( ( _AWARENESS_TIME - _AWARENESS_PRE_TIME_TILL_TERMINAL + \ <nl> ( ( _AWARENESS_PRE_TIME_TILL_TERMINAL - _AWARENESS_PROMPT_TIME_TILL_TERMINAL ) / 2 ) ) / DT_DMON ) ] . names [ 0 ] , EventName . preDriverUnresponsive ) <nl> - self . 
assertEqual ( events_output [ int ( ( _AWARENESS_TIME - _AWARENESS_PROMPT_TIME_TILL_TERMINAL + \ <nl> + self . assertEqual ( events_output [ int ( ( _AWARENESS_TIME - _AWARENESS_PROMPT_TIME_TILL_TERMINAL + \ <nl> ( ( _AWARENESS_PROMPT_TIME_TILL_TERMINAL ) / 2 ) ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverUnresponsive ) <nl> - self . assertEqual ( events_output [ int ( ( _AWARENESS_TIME + \ <nl> + self . assertEqual ( events_output [ int ( ( _AWARENESS_TIME + \ <nl> ( ( _TEST_TIMESPAN - 10 - _AWARENESS_TIME ) / 2 ) ) / DT_DMON ) ] . names [ 0 ] , EventName . driverUnresponsive ) <nl> <nl> # 3 . op engaged , down to orange , driver pays attention , back to normal ; then down to orange , driver touches wheel <nl> def test_normal_driver ( self ) : <nl> interaction_vector = [ car_interaction_NOT_DETECTED ] * int ( _DISTRACTED_SECONDS_TO_ORANGE * 3 / DT_DMON ) + \ <nl> [ car_interaction_DETECTED ] * ( int ( _TEST_TIMESPAN / DT_DMON ) - int ( _DISTRACTED_SECONDS_TO_ORANGE * 3 / DT_DMON ) ) <nl> events_output = run_DState_seq ( ds_vector , interaction_vector , always_true , always_false ) [ 0 ] <nl> - self . assertTrue ( len ( events_output [ int ( _DISTRACTED_SECONDS_TO_ORANGE * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( _DISTRACTED_SECONDS_TO_ORANGE * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> self . assertEqual ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_ORANGE - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverDistracted ) <nl> - self . assertTrue ( len ( events_output [ int ( _DISTRACTED_SECONDS_TO_ORANGE * 1 . 5 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( _DISTRACTED_SECONDS_TO_ORANGE * 1 . 5 / DT_DMON ) ] ) = = 0 ) <nl> self . assertEqual ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_ORANGE * 3 - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverDistracted ) <nl> - self . assertTrue ( len ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_ORANGE * 3 + 0 . 1 ) / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_ORANGE * 3 + 0 . 1 ) / DT_DMON ) ] ) = = 0 ) <nl> <nl> # 4 . op engaged , down to orange , driver dodges camera , then comes back still distracted , down to red , \ <nl> # driver dodges , and then touches wheel to no avail , disengages and reengages <nl> def test_biggest_comma_fan ( self ) : <nl> self . assertEqual ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_ORANGE + 0 . 5 * _invisible_time ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverDistracted ) <nl> self . assertEqual ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_RED + 1 . 5 * _invisible_time ) / DT_DMON ) ] . names [ 0 ] , EventName . driverDistracted ) <nl> self . assertEqual ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_RED + 2 * _invisible_time + 1 . 5 ) / DT_DMON ) ] . names [ 0 ] , EventName . driverDistracted ) <nl> - self . assertTrue ( len ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_RED + 2 * _invisible_time + 3 . 5 ) / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( ( _DISTRACTED_SECONDS_TO_RED + 2 * _invisible_time + 3 . 5 ) / DT_DMON ) ] ) = = 0 ) <nl> <nl> # 5 . 
op engaged , invisible driver , down to orange , driver touches wheel ; then down to orange again , driver appears <nl> # - both actions should clear the alert , but momentary appearence should not <nl> def test_sometimes_transparent_commuter ( self ) : <nl> ds_vector [ int ( ( 2 * _INVISIBLE_SECONDS_TO_ORANGE + 1 ) / DT_DMON ) : int ( ( 2 * _INVISIBLE_SECONDS_TO_ORANGE + 1 + _visible_time ) / DT_DMON ) ] = [ msg_ATTENTIVE ] * int ( _visible_time / DT_DMON ) <nl> interaction_vector [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE ) / DT_DMON ) : int ( ( _INVISIBLE_SECONDS_TO_ORANGE + 1 ) / DT_DMON ) ] = [ True ] * int ( 1 / DT_DMON ) <nl> events_output = run_DState_seq ( ds_vector , interaction_vector , 2 * always_true , 2 * always_false ) [ 0 ] <nl> - self . assertTrue ( len ( events_output [ int ( _INVISIBLE_SECONDS_TO_ORANGE * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( _INVISIBLE_SECONDS_TO_ORANGE * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> self . assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverUnresponsive ) <nl> - self . assertTrue ( len ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE + 0 . 1 ) / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE + 0 . 1 ) / DT_DMON ) ] ) = = 0 ) <nl> if _visible_time = = 1 : <nl> self . assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE * 2 + 1 - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverUnresponsive ) <nl> self . assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE * 2 + 1 + 0 . 1 + _visible_time ) / DT_DMON ) ] . names [ 0 ] , EventName . preDriverUnresponsive ) <nl> elif _visible_time = = 10 : <nl> self . assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE * 2 + 1 - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverUnresponsive ) <nl> - self . assertTrue ( len ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE * 2 + 1 + 0 . 1 + _visible_time ) / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE * 2 + 1 + 0 . 1 + _visible_time ) / DT_DMON ) ] ) = = 0 ) <nl> else : <nl> pass <nl> <nl> def test_last_second_responder ( self ) : <nl> interaction_vector [ int ( ( _INVISIBLE_SECONDS_TO_RED + _visible_time ) / DT_DMON ) : int ( ( _INVISIBLE_SECONDS_TO_RED + _visible_time + 1 ) / DT_DMON ) ] = [ True ] * int ( 1 / DT_DMON ) <nl> op_vector [ int ( ( _INVISIBLE_SECONDS_TO_RED + _visible_time + 1 ) / DT_DMON ) : int ( ( _INVISIBLE_SECONDS_TO_RED + _visible_time + 0 . 5 ) / DT_DMON ) ] = [ False ] * int ( 0 . 5 / DT_DMON ) <nl> events_output = run_DState_seq ( ds_vector , interaction_vector , op_vector , always_false ) [ 0 ] <nl> - self . assertTrue ( len ( events_output [ int ( _INVISIBLE_SECONDS_TO_ORANGE * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( _INVISIBLE_SECONDS_TO_ORANGE * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> self . assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_ORANGE - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . promptDriverUnresponsive ) <nl> self . assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_RED - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . driverUnresponsive ) <nl> self . assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_RED + 0 . 5 * _visible_time ) / DT_DMON ) ] . names [ 0 ] , EventName . driverUnresponsive ) <nl> self . 
assertEqual ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_RED + _visible_time + 0 . 5 ) / DT_DMON ) ] . names [ 0 ] , EventName . driverUnresponsive ) <nl> - self . assertTrue ( len ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_RED + _visible_time + 1 + 0 . 1 ) / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( ( _INVISIBLE_SECONDS_TO_RED + _visible_time + 1 + 0 . 1 ) / DT_DMON ) ] ) = = 0 ) <nl> <nl> # 7 . op not engaged , always distracted driver <nl> # - dm should stay quiet when not engaged <nl> def test_pure_dashcam_user ( self ) : <nl> events_output = run_DState_seq ( always_distracted , always_false , always_false , always_false ) [ 0 ] <nl> - self . assertTrue ( np . sum ( [ len ( event ) for event in events_output ] ) = = 0 ) <nl> + self . assertTrue ( np . sum ( [ len ( event ) for event in events_output ] ) = = 0 ) <nl> <nl> # 8 . op engaged , car stops at traffic light , down to orange , no action , then car starts moving <nl> # - should only reach green when stopped , but continues counting down on launch <nl> def test_one_indecisive_model ( self ) : <nl> [ msg_DISTRACTED_UNCERTAIN ] * ( int ( _TEST_TIMESPAN / DT_DMON ) - int ( ( _DISTRACTED_SECONDS_TO_ORANGE + _UNCERTAIN_SECONDS_TO_GREEN ) / DT_DMON ) ) <nl> interaction_vector = always_false [ : ] <nl> events_output = run_DState_seq ( ds_vector , interaction_vector , always_true , always_false ) [ 0 ] <nl> - self . assertTrue ( len ( events_output [ int ( _UNCERTAIN_SECONDS_TO_GREEN * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( _UNCERTAIN_SECONDS_TO_GREEN * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> self . assertEqual ( events_output [ int ( ( _UNCERTAIN_SECONDS_TO_GREEN - 0 . 1 ) / DT_DMON ) ] . names [ 0 ] , EventName . driverMonitorLowAcc ) <nl> - self . assertTrue ( len ( events_output [ int ( ( _UNCERTAIN_SECONDS_TO_GREEN + _DISTRACTED_SECONDS_TO_ORANGE - 0 . 5 ) / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( ( _UNCERTAIN_SECONDS_TO_GREEN + _DISTRACTED_SECONDS_TO_ORANGE - 0 . 5 ) / DT_DMON ) ] ) = = 0 ) <nl> self . assertEqual ( events_output [ int ( ( _TEST_TIMESPAN - 5 . ) / DT_DMON ) ] . names [ 0 ] , EventName . driverMonitorLowAcc ) <nl> <nl> # 10 . op engaged , model is somehow uncertain and driver is distracted <nl> def test_somehow_indecisive_model ( self ) : <nl> ds_vector = [ msg_DISTRACTED_BUT_SOMEHOW_UNCERTAIN ] * int ( _TEST_TIMESPAN / DT_DMON ) <nl> interaction_vector = always_false [ : ] <nl> events_output = run_DState_seq ( ds_vector , interaction_vector , always_true , always_false ) [ 0 ] <nl> - self . assertTrue ( len ( events_output [ int ( _UNCERTAIN_SECONDS_TO_GREEN * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> + self . assertTrue ( len ( events_output [ int ( _UNCERTAIN_SECONDS_TO_GREEN * 0 . 5 / DT_DMON ) ] ) = = 0 ) <nl> self . assertEqual ( events_output [ int ( ( _UNCERTAIN_SECONDS_TO_GREEN ) / DT_DMON ) ] . names [ 0 ] , EventName . driverMonitorLowAcc ) <nl> self . assertEqual ( events_output [ int ( ( 2 . 5 * ( _DISTRACTED_TIME - _DISTRACTED_PRE_TIME_TILL_TERMINAL ) ) / DT_DMON ) ] . names [ 1 ] , EventName . preDriverDistracted ) <nl> self . assertEqual ( events_output [ int ( ( 2 . 5 * ( _DISTRACTED_TIME - _DISTRACTED_PROMPT_TIME_TILL_TERMINAL ) ) / DT_DMON ) ] . names [ 1 ] , EventName . promptDriverDistracted ) <nl> mmm a / selfdrive / debug / mpc / live_lateral_mpc . py <nl> ppp b / selfdrive / debug / mpc / live_lateral_mpc . py <nl> def mpc_vwr_thread ( addr = " 127 . 0 . 0 . 1 " ) : <nl> lineP . 
set_ydata ( path_x ) <nl> <nl> if lMpc is not None : <nl> - mpc_path_x = list ( lMpc . liveMpc . x ) [ 1 : ] <nl> - mpc_path_y = list ( lMpc . liveMpc . y ) [ 1 : ] <nl> - mpc_steer_angle = list ( lMpc . liveMpc . delta ) [ 1 : ] <nl> - mpc_psi = list ( lMpc . liveMpc . psi ) [ 1 : ] <nl> + mpc_path_x = list ( lMpc . liveMpc . x ) [ 1 : ] <nl> + mpc_path_y = list ( lMpc . liveMpc . y ) [ 1 : ] <nl> + mpc_steer_angle = list ( lMpc . liveMpc . delta ) [ 1 : ] <nl> + mpc_psi = list ( lMpc . liveMpc . psi ) [ 1 : ] <nl> <nl> line1 . set_xdata ( mpc_path_y ) <nl> line1 . set_ydata ( mpc_path_x ) <nl> mmm a / selfdrive / locationd / test / ubloxd . py <nl> ppp b / selfdrive / locationd / test / ubloxd . py <nl> def gen_nav_data ( msg , nav_frame_buffer ) : <nl> <nl> # parse GPS ephem <nl> gnssId = msg_meta_data [ ' gnssId ' ] <nl> - if gnssId = = 0 : <nl> + if gnssId = = 0 : <nl> svId = msg_meta_data [ ' svid ' ] <nl> subframeId = GET_FIELD_U ( measurements [ 1 ] [ ' dwrd ' ] , 3 , 8 ) <nl> words = [ ] <nl> mmm a / selfdrive / test / longitudinal_maneuvers / plant . py <nl> ppp b / selfdrive / test / longitudinal_maneuvers / plant . py <nl> def close ( self ) : <nl> Plant . live_params . close ( ) <nl> <nl> def speed_sensor ( self , speed ) : <nl> - if speed < 0 . 3 : <nl> + if speed < 0 . 3 : <nl> return 0 <nl> else : <nl> return speed * CV . MS_TO_KPH <nl> mmm a / selfdrive / test / process_replay / process_replay . py <nl> ppp b / selfdrive / test / process_replay / process_replay . py <nl> def calibration_rcv_callback ( msg , CP , cfg , fsm ) : <nl> # calibrationd publishes 1 calibrationData every 5 cameraOdometry packets . <nl> # should_recv always true to increment frame <nl> if msg . which ( ) = = ' carState ' : <nl> - if ( ( fsm . frame + 1 ) % 25 ) = = 0 : <nl> + if ( ( fsm . frame + 1 ) % 25 ) = = 0 : <nl> recv_socks = [ " liveCalibration " ] <nl> else : <nl> recv_socks = [ ] <nl> mmm a / tools / carcontrols / joystick_test . py <nl> ppp b / tools / carcontrols / joystick_test . py <nl> <nl> import pygame # pylint : disable = import - error <nl> <nl> # Define some colors <nl> - BLACK = ( 0 , 0 , 0 ) <nl> - WHITE = ( 255 , 255 , 255 ) <nl> + BLACK = ( 0 , 0 , 0 ) <nl> + WHITE = ( 255 , 255 , 255 ) <nl> <nl> # This is a simple class that will help us print to the screen <nl> # It has nothing to do with the joysticks , just outputting the <nl> def unindent ( self ) : <nl> # EVENT PROCESSING STEP <nl> for event in pygame . event . get ( ) : # User did something <nl> if event . type = = pygame . QUIT : # If user clicked close <nl> - done = True # Flag that we are done so we exit this loop <nl> + done = True # Flag that we are done so we exit this loop <nl> <nl> # Possible joystick actions : JOYAXISMOTION JOYBALLMOTION JOYBUTTONDOWN JOYBUTTONUP JOYHATMOTION <nl> if event . type = = pygame . JOYBUTTONDOWN : <nl> mmm a / tools / replay / unlogger . py <nl> ppp b / tools / replay / unlogger . py <nl> def keyboard_controller_thread ( q , route_start_time ) : <nl> kb = KBHit ( ) <nl> while 1 : <nl> c = kb . getch ( ) <nl> - if c = = ' m ' : # Move forward by 1m <nl> + if c = = ' m ' : # Move forward by 1m <nl> q . send_pyobj ( SeekRelativeTime ( 60 ) ) <nl> - elif c = = ' M ' : # Move backward by 1m <nl> + elif c = = ' M ' : # Move backward by 1m <nl> q . send_pyobj ( SeekRelativeTime ( - 60 ) ) <nl> - elif c = = ' s ' : # Move forward by 10s <nl> + elif c = = ' s ' : # Move forward by 10s <nl> q . 
send_pyobj ( SeekRelativeTime ( 10 ) ) <nl> - elif c = = ' S ' : # Move backward by 10s <nl> + elif c = = ' S ' : # Move backward by 10s <nl> q . send_pyobj ( SeekRelativeTime ( - 10 ) ) <nl> - elif c = = ' G ' : # Move backward by 10s <nl> + elif c = = ' G ' : # Move backward by 10s <nl> q . send_pyobj ( SeekAbsoluteTime ( 0 . ) ) <nl> - elif c = = " \ x20 " : # Space bar . <nl> + elif c = = " \ x20 " : # Space bar . <nl> q . send_pyobj ( TogglePause ( ) ) <nl> - elif c = = " \ n " : <nl> + elif c = = " \ n " : <nl> try : <nl> seek_time_input = input ( ' time : ' ) <nl> seek_time = absolute_time_str ( seek_time_input , route_start_time ) <nl>
Flake8 E22X ( )
commaai/openpilot
6051061ff8e7809960cc1f2bad9a582801d5a83e
2020-05-31T07:48:47Z
deleted file mode 100755 <nl> index 74718b022c7e . . 000000000000 <nl> Binary files a / nix and / dev / null differ <nl>
Merge pull request from practicalswift / remove - binary
apple/swift
92175c49e398c239f4b884892939fc2f8563374e
2016-01-13T08:24:25Z
new file mode 100644 <nl> index 000000000000 . . 02f45d8b86f5 <nl> mmm / dev / null <nl> ppp b / docs / HowSwiftImportsCAPIs . md <nl> <nl> + # How Swift imports C APIs <nl> + <nl> + When Swift imports a module or parses a bridging header from a C - based language <nl> + ( C , Objective - C ) , the APIs are mapped into Swift APIs and can be used directly <nl> + from Swift code . This provides the basis for Swift ' s Foreign Function Interface <nl> + ( FFI ) , providing interoperability with existing libraries written in C - based <nl> + languages . This document describes how APIs from C - based languages are mapped <nl> + into Swift APIs . <nl> + <nl> + this document is written for a broad audience , including Swift and C users who <nl> + might not be language experts . Therefore , it explains some advanced concepts <nl> + where necessary . <nl> + <nl> + * [ Names , identifiers and keywords ] ( # names - identifiers - and - keywords ) <nl> + * [ Unicode ] ( # unicode ) <nl> + * [ Names that are keywords in Swift ] ( # names - that - are - keywords - in - swift ) <nl> + * [ Name translation ] ( # name - translation ) <nl> + * [ Name customization ] ( # name - customization ) <nl> + * [ Size , stride , and alignment of types ] ( # size - stride - and - alignment - of - types ) <nl> + * [ Fundamental types ] ( # fundamental - types ) <nl> + * [ Free functions ] ( # free - functions ) <nl> + * [ Argument labels ] ( # argument - labels ) <nl> + * [ Variadic arguments ] ( # variadic - arguments ) <nl> + * [ Inline functions ] ( # inline - functions ) <nl> + * [ Global variables ] ( # global - variables ) <nl> + * [ Pointers to data ] ( # pointers - to - data ) <nl> + * [ Nullable and non - nullable pointers ] ( # nullable - and - non - nullable - pointers ) <nl> + * [ Incomplete types and pointers to them ] ( # incomplete - types - and - pointers - to - them ) <nl> + * [ Function pointers ] ( # function - pointers ) <nl> + * [ Fixed - size arrays ] ( # fixed - size - arrays ) <nl> + * [ Structs ] ( # structs ) <nl> + * [ Unions ] ( # unions ) <nl> + * [ Enums ] ( # enums ) <nl> + * [ Typedefs ] ( # typedefs ) <nl> + * [ Macros ] ( # macros ) <nl> + <nl> + # Names , identifiers and keywords <nl> + <nl> + # # Unicode <nl> + <nl> + C ( and C + + ) permit non - ASCII Unicode code points in identifiers . While Swift <nl> + does not permit arbitrary Unicode code points in identifiers ( so compatibility <nl> + might not be perfect ) , it tries to allow reasonable ones . However , C , <nl> + Objective - C , and C + + code in practice does not tend to use non - ASCII code points <nl> + in identifiers , so mapping them to Swift is not an important concern . <nl> + <nl> + # # Names that are keywords in Swift <nl> + <nl> + Some C and C + + identifiers are keywords in Swift . Despite that , such names are <nl> + imported as - is into Swift , because Swift permits escaping keywords to use them <nl> + as identifiers : <nl> + <nl> + ` ` ` c <nl> + / / C header . <nl> + <nl> + / / The name of this function is a keyword in Swift . <nl> + void func ( ) ; <nl> + ` ` ` <nl> + <nl> + ` ` ` swift <nl> + / / C header imported in Swift . <nl> + <nl> + / / The name of the function is still ` func ` , but it is escaped to make the <nl> + / / keyword into an identifier . <nl> + func ` func ` ( ) <nl> + ` ` ` <nl> + <nl> + ` ` ` swift <nl> + / / Swift user . <nl> + <nl> + func test ( ) { <nl> + / / Call the C function declared above . 
<nl> + ` func ` ( ) <nl> + } <nl> + ` ` ` <nl> + <nl> + # # Name translation <nl> + <nl> + Names of some C declarations appear in Swift differently . In particular , names <nl> + of enumerators ( enum constants ) go through a translation process that is <nl> + hardcoded into the compiler . For more details , see [ Name Translation from C to <nl> + Swift ] ( CToSwiftNameTranslation . md ) . <nl> + <nl> + # # Name customization <nl> + <nl> + As a general principle of Swift / C interoperability , the C API vendor has broad <nl> + control over how their APIs appear in Swift . In particular , the vendor can use <nl> + the ` swift_name ` Clang attribute to customize the names of their C APIs in order <nl> + to be more idiomatic in Swift . For more details , see [ Name Translation from C to <nl> + Swift ] ( CToSwiftNameTranslation . md ) . <nl> + <nl> + # Size , stride , and alignment of types <nl> + <nl> + In C , every type has a size ( computed with the ` sizeof ` operator ) , and an <nl> + alignment ( computed with ` alignof ` ) . <nl> + <nl> + In Swift , types have size , stride , and alignment . <nl> + <nl> + The concept of alignment in C and Swift is exactly the same . <nl> + <nl> + Size and stride is more complicated . In Swift , stride is the distance between <nl> + two elements in an array . The size of a type in Swift is the stride minus the <nl> + tail padding . For example : <nl> + <nl> + ` ` ` swift <nl> + struct SwiftStructWithPadding { <nl> + var x : Int16 <nl> + var y : Int8 <nl> + } <nl> + <nl> + print ( MemoryLayout < SwiftStructWithPadding > . size ) / / 3 <nl> + print ( MemoryLayout < SwiftStructWithPadding > . stride ) / / 4 <nl> + ` ` ` <nl> + <nl> + C ' s concept of size corresponds to Swift ' s stride ( not size ! ) C does not have an <nl> + equivalent of Swift ' s size . <nl> + <nl> + Swift tracks the exact size of the data stored in a type so that it can pack <nl> + additional data into bytes that otherwise would be wasted as padding . Swift also <nl> + tracks possible and impossible bit patterns for each type , and reuses impossible <nl> + bit patterns to encode more information , similarly to ` llvm : : PointerIntPair ` and <nl> + ` llvm : : PointerUnion ` . The language does this automatically , and transparently <nl> + for users . For example : <nl> + <nl> + ` ` ` swift <nl> + / / This enum takes 1 byte in memory , which has 256 possible bit patterns . <nl> + / / However , only 2 bit patterns are used . <nl> + enum Foo { <nl> + case A <nl> + case B <nl> + } <nl> + <nl> + print ( MemoryLayout < Foo > . size ) / / 1 <nl> + print ( MemoryLayout < Foo ? > . size ) / / also 1 : ` nil ` is represented as one of the 254 bit patterns that are not used by ` Foo . A ` or ` Foo . B ` . <nl> + ` ` ` <nl> + <nl> + Nevertheless , for types imported from C , the size and the stride are equal . <nl> + <nl> + ` ` ` c <nl> + / / C header . <nl> + <nl> + struct CStructWithPadding { <nl> + int16_t x ; <nl> + int8_t y ; <nl> + } ; <nl> + ` ` ` <nl> + <nl> + ` ` ` swift <nl> + / / C header imported in Swift . <nl> + <nl> + struct CStructWithPadding { <nl> + var x : Int16 <nl> + var y : Int8 <nl> + } <nl> + ` ` ` <nl> + <nl> + ` ` ` <nl> + print ( MemoryLayout < CStructWithPadding > . size ) / / 4 <nl> + print ( MemoryLayout < CStructWithPadding > . stride ) / / 4 <nl> + ` ` ` <nl> + <nl> + # Fundamental types <nl> + <nl> + In C , certain types ( ` char ` , ` int ` , ` float ` etc . ) are built into the language <nl> + and into the compiler . 
These builtin types have behaviors that are not possible <nl> + to imitate in user - defined types ( e . g . , usual arithmetic conversions ) . <nl> + <nl> + C and C + + standard libraries provide headers that define character and integer <nl> + types that are typedefs to one of the underlying builtin types , for example , <nl> + ` int16_t ` , ` size_t ` , and ` ptrdiff_t ` . <nl> + <nl> + Swift does not have such builtin types . Swift ' s equivalents to C ' s fundamental <nl> + types are defined in the Swift standard library as ordinary structs . They are <nl> + implemented using compiler intrinsics , but the API surface is defined in <nl> + ordinary Swift code : <nl> + <nl> + ` ` ` swift <nl> + / / From the Swift standard library : <nl> + <nl> + struct Int32 { <nl> + internal var _value : Builtin . Int32 <nl> + <nl> + / / Note : ` Builtin . Xyz ` types are only accessible to the standard library . <nl> + } <nl> + <nl> + func + ( lhs : Int32 , rhs : Int32 ) - > Int32 { <nl> + return Int32 ( _value : Builtin . add_Int32 ( lhs . _value , rhs . _value ) ) <nl> + } <nl> + ` ` ` <nl> + <nl> + Memory layout of these " fundamental " Swift types does not have any surprises , <nl> + hidden vtable pointers , metadata , or reference counting ; it is exactly what you <nl> + expect from a corresponding C type : just the data , stored inline . For example , <nl> + Swift ' s ` Int32 ` is a contiguous chunk of four bytes , all of which store the <nl> + number . <nl> + <nl> + Fundamental types in C , with a few exceptions , have an implementation - defined <nl> + size , alignment , and stride ( distance between two array elements ) . <nl> + <nl> + Swift ' s integer and floating point types have fixed size , alignment and stride <nl> + across platforms , with two exceptions : ` Int ` and ` UInt ` . The sizes of ` Int ` and <nl> + ` UInt ` match the size of the pointer on the platform , similarly to how ` size_t ` , <nl> + ` ptrdiff_t ` , ` intptr_t ` , and ` uintptr_t ` have the same size as a pointer in most <nl> + C implementations . ` Int ` and ` UInt ` are distinct types , they are not typealiases <nl> + to explicitly - sized types . An explicit conversion is required to convert between <nl> + ` Int ` , ` UInt ` , and any other integer type , even if sizes happen to match on the <nl> + current platform . <nl> + <nl> + The table below summarizes mapping between fundamental types of C and C + + and <nl> + Swift types . This table is based on <nl> + [ ` swift . git / include / swift / ClangImporter / BuiltinMappedTypes . def ` ] ( . . / include / swift / ClangImporter / BuiltinMappedTypes . def ) . 
<nl> + <nl> + | C and C + + types | Swift types | <nl> + | mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm | mmmmmmmmm - - | <nl> + | C ` _Bool ` , C + + ` bool ` | ` typealias CBool = Bool ` | <nl> + | ` char ` , regardless if the target defines it as signed or unsigned | ` typealias CChar = Int8 ` | <nl> + | ` signed char ` , explicitly signed | ` typealias CSignedChar = Int8 ` | <nl> + | ` unsigned char ` , explicitly unsigned | ` typealias CUnsignedChar = UInt8 ` | <nl> + | ` short ` , ` signed short ` | ` typealias CShort = Int16 ` | <nl> + | ` unsigned short ` | ` typealias CUnsignedShort = UInt16 ` | <nl> + | ` int ` , ` signed int ` | ` typealias CInt = Int32 ` | <nl> + | ` unsigned int ` | ` typealias CUnsignedInt = UInt32 ` | <nl> + | ` long ` , ` signed long ` | Windows x86 \ _64 : ` typealias CLong = Int32 ` < br > Everywhere else : ` typealias CLong = Int ` | <nl> + | ` unsigned long ` | Windows x86 \ _64 : ` typealias CUnsignedLong = UInt32 ` < br > Everywhere else : ` typealias CUnsignedLong = UInt ` | <nl> + | ` long long ` , ` signed long long ` | ` typealias CLongLong = Int64 ` | <nl> + | ` unsigned long long ` | ` typealias CUnsignedLongLong = UInt64 ` | <nl> + | ` wchar_t ` , regardless if the target defines it as signed or unsigned | ` typealias CWideChar = Unicode . Scalar ` < br > ` Unicode . Scalar ` is a wrapper around ` UInt32 ` | <nl> + | ` char8_t ` ( proposed for C + + 20 ) | Not mapped | <nl> + | ` char16_t ` | ` typealias CChar16 = UInt16 ` | <nl> + | ` char32_t ` | ` typealias CChar32 = Unicode . Scalar ` | <nl> + | ` float ` | ` typealias CFloat = Float ` | <nl> + | ` double ` | ` typealias CDouble = Double ` | <nl> + | ` long double ` | ` CLongDouble ` , which is a typealias to ` Float80 ` or ` Double ` , depending on the platform . < br > There is no support for 128 - bit floating point . | <nl> + <nl> + First of all , notice that C types are mapped to Swift typealiases , not directly <nl> + to the underlying Swift types . The names of the typealiases are based on the <nl> + original C types . These typealiases allow developers to easily write Swift code <nl> + and APIs that work with APIs and data imported from C without a lot of ` # if ` <nl> + conditions . <nl> + <nl> + This table is generally unsurprising : C types are mapped to corresponding <nl> + explicitly - sized Swift types , except for the C ` long ` , which is mapped to ` Int ` <nl> + on 32 - bit and 64 - bit LP64 platforms . This was done to enhance portability <nl> + between 32 - bit and 64 - bit code . <nl> + <nl> + The difficulty with the C ` long ` type is that it can be 32 - bit or 64 - bit <nl> + depending on the platform . If C ` long ` was mapped to an explicitly - sized Swift <nl> + type , it would map to different Swift types on different platforms , making it <nl> + more difficult to write portable Swift code ( the user would have to compile <nl> + Swift code for both platforms to see all errors ; fixing compilation errors on <nl> + one platform can break another platform . ) By mapping ` long ` a distinct type , <nl> + ` Int ` , the language forces the user to think about both cases when compiling for <nl> + either platform . <nl> + <nl> + Nevertheless , mapping C ` long ` to ` Int ` does not work universally . <nl> + Specifically , it does not work on LLP64 platforms ( for example , Windows <nl> + x86 \ _64 ) , where C ` long ` is 32 - bit and Swift ' s ` Int ` is 64 - bit . On LLP64 <nl> + platforms , C ` long ` is mapped to Swift ' s explicitly - sized ` Int32 ` . 
<nl> + <nl> + Typedefs for integer types in the C standard library are mapped like this ( from <nl> + [ ` swift . git / swift / lib / ClangImporter / MappedTypes . def ` ] ( . . / lib / ClangImporter / MappedTypes . def ) : <nl> + <nl> + | C and C + + types | Swift types | <nl> + | mmmmmmmmmmmmmmm - | mmmmmmmmm - - | <nl> + | ` uint8_t ` | ` UInt8 ` | <nl> + | ` uint16_t ` | ` UInt16 ` | <nl> + | ` uint32_t ` | ` UInt32 ` | <nl> + | ` uint64_t ` | ` UInt64 ` | <nl> + | ` int8_t ` | ` Int8 ` | <nl> + | ` int16_t ` | ` Int16 ` | <nl> + | ` int32_t ` | ` Int32 ` | <nl> + | ` int64_t ` | ` Int64 ` | <nl> + | ` intptr_t ` | ` Int ` | <nl> + | ` uintptr_t ` | ` UInt ` | <nl> + | ` ptrdiff_t ` | ` Int ` | <nl> + | ` size_t ` | ` Int ` | <nl> + | ` rsize_t ` | ` Int ` | <nl> + | ` ssize_t ` | ` Int ` | <nl> + <nl> + ` ` ` c <nl> + / / C header . <nl> + <nl> + double Add ( int x , long y ) ; <nl> + ` ` ` <nl> + <nl> + ` ` ` swift <nl> + / / C header imported in Swift . <nl> + <nl> + func Add ( _ x : CInt , _ y : CLong ) - > CDouble <nl> + ` ` ` <nl> + <nl> + # Free functions <nl> + <nl> + C functions are imported as free functions in Swift . Each type in the signature <nl> + of the C function is mapped to the corresponding Swift type . <nl> + <nl> + # # Argument labels <nl> + <nl> + Imported C functions don ' t have argument labels in Swift by default . Argument <nl> + labels can be added by API owners through annotations in the C header . <nl> + <nl> + ` ` ` c <nl> + / / C header . <nl> + <nl> + # define SWIFT_NAME ( X ) __attribute__ ( ( swift_name ( # X ) ) ) <nl> + <nl> + / / No argument labels by default . <nl> + void drawString ( const char * , int xPos , int yPos ) ; <nl> + <nl> + / / The attribute specifies the argument labels . <nl> + void drawStringRenamed ( const char * , int xPos , int yPos ) <nl> + SWIFT_NAME ( drawStringRenamed ( _ : x : y : ) ) ; <nl> + ` ` ` <nl> + <nl> + ` ` ` swift <nl> + / / C header imported in Swift . <nl> + <nl> + func drawString ( _ : UnsafePointer < CChar > ! , _ xPos : Int , _ yPos : Int ) <nl> + func drawStringRenamed ( _ : UnsafePointer < CChar > ! , x : Int , y : Int ) <nl> + <nl> + drawString ( " hello " , 10 , 20 ) <nl> + drawStringRenamed ( " hello " , x : 10 , y : 20 ) <nl> + ` ` ` <nl> + <nl> + # # Variadic arguments <nl> + <nl> + C functions with variadic arguments are not imported into Swift , however , there <nl> + are no technical reasons why they can ' t be imported . <nl> + <nl> + Note that functions with ` va_list ` arguments are imported into Swift . ` va_list ` <nl> + corresponds to ` CVaListPointer ` in Swift . <nl> + <nl> + C APIs don ' t define a lot of variadic functions , so this limitation has not <nl> + caused a big problem so far . <nl> + <nl> + Often , for each variadic function there is a corresponding function that takes a <nl> + ` va_list ` which can be called from Swift . A motivated developer can write an <nl> + overlay that exposes a Swift variadic function that looks just like the C <nl> + variadic function , and implement it in terms of the ` va_list ` - based C API . You <nl> + can find examples of such overlays and wrappers by searching for usages of the <nl> + ` withVaList ` function in the [ ` swift . git / stdlib ` ] ( . . / stdlib ) directory . <nl> + <nl> + See also Apple ' s documentation about this topic : [ Use a CVaListPointer to Call <nl> + Variadic <nl> + Functions ] ( https : / / developer . apple . 
com / documentation / swift / imported_c_and_objective - c_apis / using_imported_c_functions_in_swift # 2992073 ) . <nl> + <nl> + # # Inline functions <nl> + <nl> + Inline C functions that are defined in headers are imported as regular Swift <nl> + functions . However , unlike free functions , inline functions require the caller <nl> + to emit a definition of the function , because no other translation unit is <nl> + guaranteed to provide a definition . <nl> + <nl> + Therefore , the Swift compiler uses Clang ' s CodeGen library to emit LLVM IR for <nl> + the C inline function . LLVM IR for C inline functions and LLVM IR for Swift code <nl> + is put into one LLVM module , allowing all LLVM optimizations ( like inlining ) to <nl> + work transparently across language boundaries . <nl> + <nl> + # Global variables <nl> + <nl> + Global C variables are imported as Swift variables or constants , depending on <nl> + constness . <nl> + <nl> + ` ` ` c <nl> + / / C header . <nl> + <nl> + extern int NumAlpacas ; <nl> + extern const int NumLlamas ; <nl> + ` ` ` <nl> + <nl> + ` ` ` swift <nl> + / / C header imported in Swift . <nl> + <nl> + var NumAlpacas : CInt <nl> + let NumLlamas : CInt <nl> + ` ` ` <nl> + <nl> + # Pointers to data <nl> + <nl> + C has one way to form a pointer to a value of type ` T ` - - ` T * ` . <nl> + <nl> + Swift language does not provide a builtin pointer type . The standard library <nl> + defines multiple pointer types : <nl> + <nl> + * ` UnsafePointer < T > ` : equivalent to ` const T * ` in C . <nl> + <nl> + * ` UnsafeMutablePointer < T > ` : equivalent to ` T * ` in C , where ` T ` is non - const . <nl> + <nl> + * ` UnsafeRawPointer ` : a pointer for accessing data that is not statically typed , <nl> + similar to ` const void * ` in C . Unlike C pointer types , ` UnsafeRawPointer ` <nl> + allows type punning , and provides special APIs to do it correctly and safely . <nl> + <nl> + * ` UnsafeMutableRawPointer ` : like ` UnsafeRawPointer ` , but can mutate the data it <nl> + points to . <nl> + <nl> + * ` OpaquePointer ` : a pointer to typed data , however the type of the pointee is <nl> + not known , for example , it is determined by the value of some other variable , <nl> + or the type of the pointee is not representable in Swift . <nl> + <nl> + * ` AutoreleasingUnsafeMutablePointer < T > ` : only used for Objective - C <nl> + interoperability ; corresponds to an Objective - C pointer T ` __autoreleasing * ` , <nl> + where ` T ` is an Objective - C pointer type . <nl> + <nl> + C pointer types can be trivially imported in Swift as long as the memory layout <nl> + of the pointee is identical in C and Swift . So far , this document only described <nl> + primitive types , whose memory layout in C and Swift is indeed identical , for <nl> + example , ` char ` in C and ` Int8 ` in Swift . Pointers to such types are trivial to <nl> + import into Swift , for example , ` char * ` in C corresponds to <nl> + ` UnsafeMutablePointer < Int8 > ! ` in Swift . <nl> + <nl> + ` ` ` c <nl> + / / C header . <nl> + <nl> + void AddSecondToFirst ( int * x , const long * y ) ; <nl> + ` ` ` <nl> + <nl> + ` ` ` swift <nl> + / / C header imported in Swift . <nl> + <nl> + func AddSecondToFirst ( _ x : UnsafeMutablePointer < CInt > ! , _ y : UnsafePointer < CLong > ! ) <nl> + ` ` ` <nl> + <nl> + # Nullable and non - nullable pointers <nl> + <nl> + Any C pointer can be null . However , in practice , many pointers are never null . 
<nl> + Therefore , code often does not expect certain pointers to be null and does not <nl> + handle null values gracefully . <nl> + <nl> + C does not provide a way to distinguish nullable and non - nullable pointers . <nl> + However , Swift makes this distinction : all pointer types ( for example , <nl> + ` UnsafePointer < T > ` and ` UnsafeMutablePointer < T > ` ) are non - nullable . Swift <nl> + represents the possibility of a missing value with a type called " Optional " , <nl> + spelled ` T ? ` in the shorthand form , or ` Optional < T > ` fully . The missing value is <nl> + called " nil " . For example , ` UnsafePointer < T > ? ` ( shorthand for <nl> + ` Optional < UnsafePointer < T > > ` ) can store a nil value . <nl> + <nl> + Swift also provides a different syntax for declaring an optional , ` T ! ` , which <nl> + creates a so - called " implicitly unwrapped optional " . These optionals are <nl> + automatically checked for nil and unwrapped if it is necessary for the <nl> + expression to compile . Unwrapping a ` T ? ` or a ` T ! ` optional that contains nil is <nl> + a fatal error ( the program is terminated with an error message ) . <nl> + <nl> + Formally , since any C pointer can be null , C pointers must be imported as <nl> + optional unsafe pointers in Swift . However , that is not idiomatic in Swift : <nl> + optional should be used when value can be truly missing , and when it is <nl> + meaningful for the API . C APIs do not provide this information in a <nl> + machine - readable form . Information about which pointers are nullable is <nl> + typically provided in free - form documentation for C APIs , if it is provided at <nl> + all . <nl> + <nl> + Clang implements an [ extension to the C language that allows C API vendors to <nl> + annotate pointers as nullable or <nl> + non - nullable ] ( https : / / clang . llvm . org / docs / AttributeReference . html # nullability - attributes ) . <nl> + <nl> + Quoting the Clang manual : <nl> + <nl> + > The ` _Nonnull ` nullability qualifier indicates that null is not a meaningful <nl> + > value for a value of the ` _Nonnull ` pointer type . For example , given a <nl> + > declaration such as : <nl> + <nl> + ` ` ` c <nl> + int fetch ( int * _Nonnull ptr ) ; <nl> + ` ` ` <nl> + <nl> + > a caller of ` fetch ` should not provide a null value , and the compiler will <nl> + > produce a warning if it sees a literal null value passed to fetch . Note that , <nl> + > unlike the declaration attribute ` nonnull ` , the presence of ` _Nonnull ` does <nl> + > not imply that passing null is undefined behavior : ` fetch ` is free to consider <nl> + > null undefined behavior or ( perhaps for backward - compatibility reasons ) <nl> + > defensively handle null . <nl> + <nl> + ` _Nonnull ` C pointers are imported in Swift as non - optional ` UnsafePointer < T > ` <nl> + or ` UnsafeMutablePointer < T > ` , depending on the constness of the pointer . <nl> + <nl> + ` ` ` swift <nl> + / / C declaration above imported in Swift . <nl> + <nl> + func fetch ( _ ptr : UnsafeMutablePointer < CInt > ) - > CInt <nl> + ` ` ` <nl> + <nl> + Quoting the Clang manual : <nl> + <nl> + > The ` _Nullable ` nullability qualifier indicates that a value of the <nl> + > ` _Nullable ` pointer type can be null . For example , given : <nl> + <nl> + ` ` ` c <nl> + int fetch_or_zero ( int * _Nullable ptr ) ; <nl> + ` ` ` <nl> + <nl> + > a caller of ` fetch_or_zero ` can provide null . <nl> + <nl> + ` _Nullable ` pointers are imported in Swift as ` UnsafePointer < T > ? 
+ <nl>
+ Quoting the Clang manual : <nl>
+ <nl>
+ > The ` _Nullable ` nullability qualifier indicates that a value of the <nl>
+ > ` _Nullable ` pointer type can be null . For example , given : <nl>
+ <nl>
+ ` ` ` c <nl>
+ int fetch_or_zero ( int * _Nullable ptr ) ; <nl>
+ ` ` ` <nl>
+ <nl>
+ > a caller of ` fetch_or_zero ` can provide null . <nl>
+ <nl>
+ ` _Nullable ` pointers are imported in Swift as ` UnsafePointer < T > ? ` or <nl>
+ ` UnsafeMutablePointer < T > ? ` , depending on the constness of the pointer . <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C declaration above imported in Swift . <nl>
+ <nl>
+ func fetch_or_zero ( _ ptr : UnsafeMutablePointer < CInt > ? ) - > CInt <nl>
+ ` ` ` <nl>
+ <nl>
+ Quoting the Clang manual : <nl>
+ <nl>
+ > The ` _Null_unspecified ` nullability qualifier indicates that neither the <nl>
+ > ` _Nonnull ` nor ` _Nullable ` qualifiers make sense for a particular pointer <nl>
+ > type . It is used primarily to indicate that the role of null with specific <nl>
+ > pointers in a nullability - annotated header is unclear , e . g . , due to <nl>
+ > overly - complex implementations or historical factors with a long - lived API . <nl>
+ <nl>
+ ` _Null_unspecified ` and unannotated C pointers are imported in Swift as <nl>
+ implicitly - unwrapped optional pointers , ` UnsafePointer < T > ! ` or <nl>
+ ` UnsafeMutablePointer < T > ! ` . This strategy provides ergonomics equivalent to the <nl>
+ original C API ( no need to explicitly unwrap ) , and the safety expected by Swift <nl>
+ code ( a dynamic check for null during implicit unwrapping ) . <nl>
+ <nl>
+ These qualifiers do not affect program semantics in C and C + + , allowing C API <nl>
+ vendors to safely add them to headers for the benefit of Swift users without <nl>
+ disturbing existing C and C + + users . <nl>
+ <nl>
+ In C APIs , most pointers are non - nullable . To reduce the annotation burden , Clang <nl>
+ provides a way to mass - annotate pointers as non - nullable , and then mark the <nl>
+ exceptions with ` _Nullable ` . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ void Func1 ( int * _Nonnull x , int * _Nonnull y , int * _Nullable z ) ; <nl>
+ <nl>
+ # pragma clang assume_nonnull begin <nl>
+ <nl>
+ void Func2 ( int * x , int * y , int * _Nullable z ) ; <nl>
+ <nl>
+ # pragma clang assume_nonnull end <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ / / Note that ` Func1 ` and ` Func2 ` are imported identically , but ` Func2 ` required <nl>
+ / / fewer annotations in the C header . <nl>
+ <nl>
+ func Func1 ( <nl>
+ _ x : UnsafeMutablePointer < CInt > , <nl>
+ _ y : UnsafeMutablePointer < CInt > , <nl>
+ _ z : UnsafeMutablePointer < CInt > ? <nl>
+ ) <nl>
+ <nl>
+ func Func2 ( <nl>
+ _ x : UnsafeMutablePointer < CInt > , <nl>
+ _ y : UnsafeMutablePointer < CInt > , <nl>
+ _ z : UnsafeMutablePointer < CInt > ? <nl>
+ ) <nl>
+ ` ` ` <nl>
+ <nl>
+ API owners that adopt nullability qualifiers usually wrap all declarations in <nl>
+ the header with a single ` assume_nonnull begin / end ` pair of pragmas , and then <nl>
+ annotate the nullable pointers . <nl>
+ <nl>
+ See also Apple ' s documentation about this topic : [ Designating Nullability in <nl>
+ Objective - C <nl>
+ APIs ] ( https : / / developer . apple . com / documentation / swift / objective - c_and_c_code_customization / designating_nullability_in_objective - c_apis ) . <nl>
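+ <nl>
+ As a further sketch ( the call sites below are not part of the header ) , the <nl>
+ imported ` Func2 ` accepts nil only for its ` _Nullable ` parameter : <nl>
+ <nl>
+ ` ` ` swift <nl>
+ var a : CInt = 1 <nl>
+ var b : CInt = 2 <nl>
+ var c : CInt = 3 <nl>
+ Func2 ( & a , & b , nil ) / / OK : ` z ` is an Optional , so nil is accepted . <nl>
+ Func2 ( & a , & b , & c ) / / A non - nil pointer also works . <nl>
+ / / Func2 ( nil , & b , nil ) / / error : ` x ` is non - optional . <nl>
+ ` ` ` <nl>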
+ <nl>
+ # Incomplete types and pointers to them <nl>
+ <nl>
+ C and C + + have a notion of incomplete types ; Swift does not have anything <nl>
+ similar . Incomplete C types are not imported in Swift in any form . <nl>
+ <nl>
+ Sometimes types are incomplete only accidentally , for example , when a file just <nl>
+ happens to forward declare a type instead of including a header with a complete <nl>
+ definition , although it could include that header . In cases like that , to enable <nl>
+ Swift to import the C API , it is recommended to change the C headers and to <nl>
+ replace forward declarations with ` # include ` s of the header that defines the <nl>
+ type . <nl>
+ <nl>
+ Incomplete types are often used intentionally to define opaque types . This <nl>
+ pattern is too common for Swift to ignore . Swift imports pointers to incomplete <nl>
+ types as ` OpaquePointer ` . For example : <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ struct Foo ; <nl>
+ void Print ( const Foo * foo ) ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ / / Swift can ' t import the incomplete type ` Foo ` and has to drop some type <nl>
+ / / information when importing a pointer to ` Foo ` . <nl>
+ func Print ( _ foo : OpaquePointer ) <nl>
+ ` ` ` <nl>
+ <nl>
+ # Function pointers <nl>
+ <nl>
+ C supports only one form of function pointers : ` Result ( * ) ( Arg1 , Arg2 , Arg3 ) ` . <nl>
+ <nl>
+ Swift ' s closest native equivalent to a function pointer is a closure : ` ( Arg1 , <nl>
+ Arg2 , Arg3 ) - > Result ` . Swift closures don ' t have the same memory layout as C <nl>
+ function pointers : closures in Swift consist of two pointers , a pointer to the <nl>
+ code and a pointer to the captured data ( the context ) . A C function pointer can <nl>
+ be converted to a Swift closure ; however , bridging is required to adjust the <nl>
+ memory layout . <nl>
+ <nl>
+ As discussed above , there are cases where bridging that adjusts memory layout is <nl>
+ not possible , for example , when importing pointers to function pointers . For <nl>
+ instance , while C ' s ` int ( * ) ( char ) ` can be imported as ` ( Int8 ) - > Int ` ( which <nl>
+ requires an adjustment of memory layout ) , C ' s ` int ( * * ) ( char ) ` can ' t be imported as <nl>
+ ` UnsafePointer < ( Int8 ) - > Int > ` , because the pointee must have identical memory <nl>
+ layout in C and in Swift . <nl>
+ <nl>
+ Therefore , we need a Swift type that has a memory layout identical to C function <nl>
+ pointers , at least for such fallback cases . This type is spelled ` @ convention ( c ) <nl>
+ ( Arg1 , Arg2 , Arg3 ) - > Result ` . <nl>
+ <nl>
+ Even though it is possible to import C function pointers as Swift closures with a <nl>
+ context pointer in some cases , C function pointers are always imported as <nl>
+ ` @ convention ( c ) ` " closures " ( in quotes because they don ' t have a context <nl>
+ pointer , so they are not real closures ) . Swift provides an implicit conversion <nl>
+ from ` @ convention ( c ) ` closures to Swift closures with a context . <nl>
+ <nl>
+ Importing C function pointers also takes pointer nullability into account : a <nl>
+ nullable C function pointer is imported as optional . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ void qsort ( <nl>
+ void * base , <nl>
+ size_t nmemb , <nl>
+ size_t size , <nl>
+ int ( * compar ) ( const void * , const void * ) ) ; <nl>
+ <nl>
+ void qsort_annotated ( <nl>
+ void * _Nonnull base , <nl>
+ size_t nmemb , <nl>
+ size_t size , <nl>
+ int ( * _Nonnull compar ) ( const void * _Nonnull , const void * _Nonnull ) ) ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ func qsort ( <nl>
+ _ base : UnsafeMutableRawPointer ! , <nl>
+ _ nmemb : CInt , <nl>
+ _ size : CInt , <nl>
+ _ compar : ( @ convention ( c ) ( UnsafeRawPointer ? , UnsafeRawPointer ? ) - > CInt ) ! <nl>
+ ) <nl>
+ <nl>
+ func qsort_annotated ( <nl>
+ _ base : UnsafeMutableRawPointer , <nl>
+ _ nmemb : CInt , <nl>
+ _ size : CInt , <nl>
+ _ compar : @ convention ( c ) ( UnsafeRawPointer , UnsafeRawPointer ) - > CInt <nl>
+ ) <nl>
+ ` ` ` <nl>
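+ <nl>
+ For illustration only , here is one way to call the imported ` qsort ` from Swift , <nl>
+ assuming the signature shown above ( the exact integer types depend on how <nl>
+ ` size_t ` is imported on the platform ) . A closure that captures no state <nl>
+ converts implicitly to ` @ convention ( c ) ` . <nl>
+ <nl>
+ ` ` ` swift <nl>
+ var numbers : [ CInt ] = [ 3 , 1 , 2 ] <nl>
+ let count = CInt ( numbers . count ) <nl>
+ numbers . withUnsafeMutableBytes { raw in <nl>
+ qsort ( raw . baseAddress , count , CInt ( MemoryLayout < CInt > . stride ) ) { lhs , rhs in <nl>
+ / / The comparator captures nothing , so it can become a C function pointer . <nl>
+ let a = lhs ! . load ( as : CInt . self ) <nl>
+ let b = rhs ! . load ( as : CInt . self ) <nl>
+ return a < b ? -1 : ( a == b ? 0 : 1 ) <nl>
+ } <nl>
+ } <nl>
+ ` ` ` <nl>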
+ <nl>
+ See also Apple ' s documentation about this topic : [ Using Imported C Functions in <nl>
+ Swift ] ( https : / / developer . apple . com / documentation / swift / imported_c_and_objective - c_apis / using_imported_c_functions_in_swift ) <nl>
+ <nl>
+ # Fixed - size arrays <nl>
+ <nl>
+ C ' s fixed - size arrays are imported as Swift tuples . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ extern int x [ 4 ] ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ var x : ( CInt , CInt , CInt , CInt ) { get set } <nl>
+ ` ` ` <nl>
+ <nl>
+ This mapping strategy is widely recognized as being far from optimal because the <nl>
+ ergonomics of Swift tuples do not match C ' s fixed - size arrays . For example , <nl>
+ Swift tuples cannot be accessed through an index that only becomes known at <nl>
+ runtime . If you need to access a tuple element by index , you have to get an <nl>
+ unsafe pointer to the values in a homogeneous tuple with <nl>
+ ` withUnsafeMutablePointer ( to : & myTuple ) { . . . } ` , and then perform pointer <nl>
+ arithmetic . <nl>
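+ <nl>
+ One possible workaround is sketched below ( this helper is not part of any <nl>
+ official API ) : it reads a tuple element at a runtime index by viewing the <nl>
+ tuple ' s raw bytes . <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / Assumes ` var x : ( CInt , CInt , CInt , CInt ) ` imported as above . <nl>
+ func element ( at i : Int ) - > CInt { <nl>
+ precondition ( i >= 0 && i < 4 ) <nl>
+ return withUnsafeBytes ( of : & x ) { buffer in <nl>
+ buffer . load ( fromByteOffset : i * MemoryLayout < CInt > . stride , as : CInt . self ) <nl>
+ } <nl>
+ } <nl>
+ ` ` ` <nl>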
+ <nl>
+ Fixed - size arrays are a commonly requested feature in Swift , and a good proposal <nl>
+ is likely to be accepted . Once Swift has fixed - size arrays natively in the <nl>
+ language , we can use them to improve C interoperability . <nl>
+ <nl>
+ # Structs <nl>
+ <nl>
+ C structs are imported as Swift structs ; their fields are mapped to stored Swift <nl>
+ properties . Bitfields are mapped to computed Swift properties . Imported Swift <nl>
+ structs also get a synthesized default initializer ( that sets all properties to <nl>
+ zero ) , and an elementwise initializer ( that sets all properties to the provided <nl>
+ values ) . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ struct Point { <nl>
+ int x ; <nl>
+ int y ; <nl>
+ } ; <nl>
+ <nl>
+ struct Line { <nl>
+ struct Point start ; <nl>
+ struct Point end ; <nl>
+ unsigned int brush : 4 ; <nl>
+ unsigned int stroke : 3 ; <nl>
+ } ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ struct Point { <nl>
+ var x : CInt { get set } <nl>
+ var y : CInt { get set } <nl>
+ init ( ) <nl>
+ init ( x : CInt , y : CInt ) <nl>
+ } <nl>
+ <nl>
+ struct Line { <nl>
+ var start : Point { get set } <nl>
+ var end : Point { get set } <nl>
+ var brush : CUnsignedInt { get set } <nl>
+ var stroke : CUnsignedInt { get set } <nl>
+ <nl>
+ / / Default initializer that sets all properties to zero . <nl>
+ init ( ) <nl>
+ <nl>
+ / / Elementwise initializer . <nl>
+ init ( start : Point , end : Point , brush : CUnsignedInt , stroke : CUnsignedInt ) <nl>
+ } <nl>
+ ` ` ` <nl>
+ <nl>
+ Swift can also import unnamed and anonymous structs . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ struct StructWithAnonymousStructs { <nl>
+ struct { <nl>
+ int x ; <nl>
+ } ; <nl>
+ struct { <nl>
+ int y ; <nl>
+ } containerForY ; <nl>
+ } ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ struct StructWithAnonymousStructs { <nl>
+ struct __Unnamed_struct___Anonymous_field0 { <nl>
+ var x : Int32 <nl>
+ init ( ) <nl>
+ init ( x : Int32 ) <nl>
+ } <nl>
+ struct __Unnamed_struct_containerForY { <nl>
+ var y : Int32 <nl>
+ init ( ) <nl>
+ init ( y : Int32 ) <nl>
+ } <nl>
+ var __Anonymous_field0 : StructWithAnonymousStructs . __Unnamed_struct___Anonymous_field0 <nl>
+ var x : Int32 <nl>
+ var containerForY : StructWithAnonymousStructs . __Unnamed_struct_containerForY <nl>
+ <nl>
+ / / Default initializer that sets all properties to zero . <nl>
+ init ( ) <nl>
+ <nl>
+ / / Elementwise initializer . <nl>
+ init ( <nl>
+ _ __Anonymous_field0 : StructWithAnonymousStructs . __Unnamed_struct___Anonymous_field0 , <nl>
+ containerForY : StructWithAnonymousStructs . __Unnamed_struct_containerForY <nl>
+ ) <nl>
+ } <nl>
+ ` ` ` <nl>
+ <nl>
+ See also Apple ' s documentation about this topic : [ Using Imported C Structs and <nl>
+ Unions in <nl>
+ Swift ] ( https : / / developer . apple . com / documentation / swift / imported_c_and_objective - c_apis / using_imported_c_structs_and_unions_in_swift ) . <nl>
+ <nl>
+ # Unions <nl>
+ <nl>
+ Swift does not have a direct equivalent to a C union . C unions are mapped to <nl>
+ Swift structs with computed properties that read from / write to the same <nl>
+ underlying storage . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ union IntOrFloat { <nl>
+ int i ; <nl>
+ float f ; <nl>
+ } ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ struct IntOrFloat { <nl>
+ var i : CInt { get set } / / Computed property . <nl>
+ var f : Float { get set } / / Computed property . <nl>
+ init ( i : CInt ) <nl>
+ init ( f : Float ) <nl>
+ init ( ) <nl>
+ } <nl>
+ ` ` ` <nl>
+ <nl>
+ See also Apple ' s documentation about this topic : [ Using Imported C Structs and <nl>
+ Unions in <nl>
+ Swift ] ( https : / / developer . apple . com / documentation / swift / imported_c_and_objective - c_apis / using_imported_c_structs_and_unions_in_swift ) . <nl>
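+ <nl>
+ A short usage sketch ( not part of the header ) : both computed properties read <nl>
+ and write the same underlying storage . <nl>
+ <nl>
+ ` ` ` swift <nl>
+ var v = IntOrFloat ( i : 1 ) <nl>
+ v . f = 1.5 / / Overwrites the shared storage . <nl>
+ let bits = v . i / / Reinterprets the float ' s bit pattern as a CInt . <nl>
+ ` ` ` <nl>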
+ <nl>
+ # Enums <nl>
+ <nl>
+ We would have liked to map C enums to Swift enums , like this : <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ / / Enum that is not explicitly marked as either open or closed . <nl>
+ enum HomeworkExcuse { <nl>
+ EatenByPet , <nl>
+ ForgotAtHome , <nl>
+ ThoughtItWasDueNextWeek , <nl>
+ } ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift : aspiration , not an actual mapping ! <nl>
+ <nl>
+ enum HomeworkExcuse : UInt32 { <nl>
+ case EatenByPet <nl>
+ case ForgotAtHome <nl>
+ case ThoughtItWasDueNextWeek <nl>
+ } <nl>
+ ` ` ` <nl>
+ <nl>
+ However , in practice , plain C enums are mapped to Swift structs like this : <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift : actual mapping . <nl>
+ <nl>
+ struct HomeworkExcuse : Equatable , RawRepresentable { <nl>
+ init ( _ rawValue : UInt32 ) <nl>
+ init ( rawValue : UInt32 ) <nl>
+ var rawValue : UInt32 <nl>
+ typealias RawValue = UInt32 <nl>
+ } <nl>
+ var EatenByPet : HomeworkExcuse { get } <nl>
+ var ForgotAtHome : HomeworkExcuse { get } <nl>
+ var ThoughtItWasDueNextWeek : HomeworkExcuse { get } <nl>
+ ` ` ` <nl>
+ <nl>
+ To explain why this mapping was chosen , we need to discuss certain features of C <nl>
+ enums : <nl>
+ <nl>
+ * In C , adding new enumerators is not source - breaking ; in the Itanium C ABI , it is <nl>
+ not ABI - breaking either , as long as the size of the enum does not change . <nl>
+ Therefore , C library vendors add enumerators without expecting downstream <nl>
+ breakage . <nl>
+ <nl>
+ * In C , it is common to use enums as bitfields : enumerators are assigned values <nl>
+ that are powers of two , and enum values are bitwise - or combinations of <nl>
+ enumerators . <nl>
+ <nl>
+ * Some C API vendors define two sets of enumerators : " public " enumerators that <nl>
+ are listed in the header file , and " private " enumerators that are used only in <nl>
+ the implementation of the library . <nl>
+ <nl>
+ Due to these coding patterns , at runtime , C enums can carry values that were not <nl>
+ listed in the enum declaration at compile time ( either such values were added <nl>
+ after the code was compiled , or they are a result of an intentional cast from an <nl>
+ integer to an enum ) . <nl>
+ <nl>
+ The Swift compiler performs exhaustiveness checks for switch statements , which <nl>
+ becomes problematic when performed on C enums , where expectations about <nl>
+ exhaustiveness are different . <nl>
+ <nl>
+ From Swift ' s point of view , C enums come in two flavors : closed ( AKA frozen ) and <nl>
+ open ( AKA non - frozen ) . This distinction is aimed at supporting library evolution <nl>
+ and ABI stability , while allowing the user to work ergonomically with their <nl>
+ code . Swift ' s solution also supports the unusual C enum coding patterns . <nl>
+ <nl>
+ Frozen enums have a fixed set of cases ( enumerators in C terms ) . A library <nl>
+ vendor can change ( add or remove ) cases in a frozen enum ; however , doing so is <nl>
+ both ABI - breaking and source - breaking . In other words , there is a guarantee that <nl>
+ the set of enum cases that was seen at compile time exactly matches the set of <nl>
+ values that an enum variable can carry at runtime . Swift performs an <nl>
+ exhaustiveness check for switch statements on frozen enums : if the switch does not <nl>
+ handle all enum cases , the user gets a warning . Moreover , the optimizer can make <nl>
+ the assumption that a variable that has a frozen enum type will only store values <nl>
+ that correspond to enum cases visible at compile time ; unused bit patterns can <nl>
+ be reused for other purposes . <nl>
+ <nl>
+ Non - frozen enums have an extensible set of cases . A library vendor can add cases <nl>
+ without breaking ABI or source compatibility . Swift performs a different flavor <nl>
+ of exhaustiveness check for switch statements on non - frozen enums : it always <nl>
+ requires an ` @ unknown default ` clause , but only produces a warning if the code <nl>
+ does not handle all cases available at compile time . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ / / Enum that is not explicitly marked as either open or closed . <nl>
+ enum HomeworkExcuse { <nl>
+ EatenByPet , <nl>
+ ForgotAtHome , <nl>
+ ThoughtItWasDueNextWeek , <nl>
+ } ; <nl>
+ <nl>
+ / / An open enum : we expect to add more kinds of input devices in the future . <nl>
+ enum InputDevice { <nl>
+ Keyboard , <nl>
+ Mouse , <nl>
+ Touchscreen , <nl>
+ } __attribute__ ( ( enum_extensibility ( open ) ) ) ; <nl>
+ <nl>
+ / / A closed enum : we think we know enough about the geometry of Earth to <nl>
+ / / confidently say that these are all the cardinal directions we will ever need . <nl>
+ enum CardinalDirection { <nl>
+ East , <nl>
+ West , <nl>
+ North , <nl>
+ South , <nl>
+ } __attribute__ ( ( enum_extensibility ( closed ) ) ) ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ struct HomeworkExcuse : Equatable , RawRepresentable { <nl>
+ init ( _ rawValue : UInt32 ) <nl>
+ init ( rawValue : UInt32 ) <nl>
+ var rawValue : UInt32 <nl>
+ typealias RawValue = UInt32 <nl>
+ } <nl>
+ var EatenByPet : HomeworkExcuse { get } <nl>
+ var ForgotAtHome : HomeworkExcuse { get } <nl>
+ var ThoughtItWasDueNextWeek : HomeworkExcuse { get } <nl>
+ <nl>
+ enum InputDevice : UInt32 { <nl>
+ init ? ( rawValue : UInt32 ) <nl>
+ var rawValue : UInt32 { get } <nl>
+ typealias RawValue = UInt32 <nl>
+ case Keyboard <nl>
+ case Mouse <nl>
+ case Touchscreen <nl>
+ } <nl>
+ <nl>
+ @ _frozen <nl>
+ enum CardinalDirection : UInt32 { <nl>
+ init ? ( rawValue : UInt32 ) <nl>
+ var rawValue : UInt32 { get } <nl>
+ typealias RawValue = UInt32 <nl>
+ case East <nl>
+ case West <nl>
+ case North <nl>
+ case South <nl>
+ } <nl>
+ ` ` ` <nl>
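+ <nl>
+ To illustrate the two exhaustiveness checks ( these functions are sketches , not <nl>
+ part of the header ) : a switch over the open ` InputDevice ` requires an <nl>
+ ` @ unknown default ` clause , while a switch over the frozen ` CardinalDirection ` <nl>
+ can be exhaustive without a default . <nl>
+ <nl>
+ ` ` ` swift <nl>
+ func describe ( _ device : InputDevice ) - > String { <nl>
+ switch device { <nl>
+ case . Keyboard : return " keyboard " <nl>
+ case . Mouse : return " mouse " <nl>
+ case . Touchscreen : return " touchscreen " <nl>
+ @ unknown default : return " some new input device " <nl>
+ } <nl>
+ } <nl>
+ <nl>
+ func isVertical ( _ direction : CardinalDirection ) - > Bool { <nl>
+ / / Exhaustive : no default is needed for a frozen enum . <nl>
+ switch direction { <nl>
+ case . North , . South : return true <nl>
+ case . East , . West : return false <nl>
+ } <nl>
+ } <nl>
+ ` ` ` <nl>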
+ <nl>
+ C enums that are not marked as open or closed have been mapped to structs since <nl>
+ Swift 1 . 0 . At that time , we realized that mapping C enums to Swift enums is not <nl>
+ correct if the Swift compiler makes Swift - like assumptions about such imported <nl>
+ enums . Specifically , the Swift compiler assumes that the only allowed bit patterns <nl>
+ of an enum value are those declared in the enum ' s cases . This assumption is valid <nl>
+ for frozen Swift enums ( which were the only flavor of enums in Swift 1 . 0 ) . However , <nl>
+ this assumption does not hold for C enums , for which any bit pattern is a valid <nl>
+ value ; this assumption was creating an undefined behavior hazard . To resolve <nl>
+ this issue in Swift 1 . 0 , C enums were imported as Swift structs by default , and <nl>
+ their enumerators were exposed as global computed variables . Objective - C enums <nl>
+ declared with ` NS_ENUM ` were assumed to have " enum nature " and were imported as <nl>
+ Swift enums . <nl>
+ <nl>
+ The concept of open enums was added in Swift 5 ( [ SE - 0192 Handling Future <nl>
+ Enum <nl>
+ Cases ] ( https : / / github . com / apple / swift - evolution / blob / master / proposals / 0192 - non - exhaustive - enums . md ) ) , <nl>
+ but that proposal did not change the importing strategy of non - annotated C <nl>
+ enums , in part because of source compatibility concerns . It might still be <nl>
+ possible to change C enums to be imported as open Swift enums , but as time <nl>
+ passes , it will become more difficult to change . <nl>
+ <nl>
+ Another feature of C enums is that they expose integer values to the user ; <nl>
+ furthermore , enum values are implicitly convertible to integers . Swift enums are <nl>
+ opaque by default . When imported in Swift , C enums conform to the <nl>
+ ` RawRepresentable ` protocol , allowing the user to explicitly convert between <nl>
+ numeric and typed values . <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / Converting enum values to integers and back . <nl>
+ <nl>
+ var south : CardinalDirection = . South <nl>
+ / / var southAsInteger : UInt32 = south / / error : type mismatch <nl>
+ var southAsInteger : UInt32 = south . rawValue / / = 3 <nl>
+ var southAsEnum = CardinalDirection ( rawValue : 3 ) / / = South <nl>
+ ` ` ` <nl>
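+ <nl>
+ Because ` HomeworkExcuse ` is imported as a struct , a switch over it matches <nl>
+ values via ` Equatable ` and always needs a ` default ` clause ( a sketch , assuming <nl>
+ the mapping shown earlier ) : <nl>
+ <nl>
+ ` ` ` swift <nl>
+ func isPlausible ( _ excuse : HomeworkExcuse ) - > Bool { <nl>
+ switch excuse { <nl>
+ case EatenByPet : return false <nl>
+ case ForgotAtHome : return true <nl>
+ default : return false / / Required : any bit pattern is a valid value . <nl>
+ } <nl>
+ } <nl>
+ ` ` ` <nl>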
+ <nl>
+ # Typedefs <nl>
+ <nl>
+ C typedefs are generally mapped to Swift typealiases , except for a few common C <nl>
+ coding patterns that are handled in a special way . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ / / An ordinary typedef . <nl>
+ typedef int Money ; <nl>
+ <nl>
+ / / A special case pattern that is mapped to a named struct . <nl>
+ typedef struct { <nl>
+ int x ; <nl>
+ int y ; <nl>
+ } Point ; <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ typealias Money = CInt <nl>
+ <nl>
+ struct Point { <nl>
+ var x : CInt { get set } <nl>
+ var y : CInt { get set } <nl>
+ init ( ) <nl>
+ init ( x : CInt , y : CInt ) <nl>
+ } <nl>
+ ` ` ` <nl>
+ <nl>
+ # Macros <nl>
+ <nl>
+ C macros are generally not imported in Swift . Macros that define constants are <nl>
+ imported as readonly variables . <nl>
+ <nl>
+ ` ` ` c <nl>
+ / / C header . <nl>
+ <nl>
+ # define BUFFER_SIZE 4096 <nl>
+ # define SERVER_VERSION " 3 . 14 " <nl>
+ ` ` ` <nl>
+ <nl>
+ ` ` ` swift <nl>
+ / / C header imported in Swift . <nl>
+ <nl>
+ var BUFFER_SIZE : CInt { get } <nl>
+ var SERVER_VERSION : String { get } <nl>
+ ` ` ` <nl>
+ <nl>
+ See also Apple ' s documentation about this topic : [ Using Imported C Macros in <nl>
+ Swift ] ( https : / / developer . apple . com / documentation / swift / imported_c_and_objective - c_apis / using_imported_c_macros_in_swift ) . <nl>
Merge pull request from gribozavr / how - swift - imports - c
apple/swift
029afb15d646d86632a82e9240d426dacc1aece4
2020-02-06T14:08:32Z
mmm a / test / test_autograd . py <nl> ppp b / test / test_autograd . py <nl> def f ( x , y ) : <nl> <nl> self . assertTrue ( ' my_func ' in str ( p ) ) <nl> <nl> + def test_record_function_multithreaded ( self ) : <nl> + rf = record_function ( " outer " ) <nl> + rf . __enter__ ( ) <nl> + with profile ( ) : <nl> + # test that exiting the record function after starting a profile <nl> + # doesn ' t throw . <nl> + rf . __exit__ ( ) <nl> + <nl> + with profile ( ) : <nl> + rf . __enter__ ( ) <nl> + # test that exiting the record function after the profile has ended <nl> + # doesn ' t throw . <nl> + rf . __exit__ ( ) <nl> + <nl> + <nl> def test_dir ( self ) : <nl> x = torch . randn ( 10 , 10 ) <nl> keys = dir ( x ) <nl> mmm a / torch / csrc / autograd / record_function . h <nl> ppp b / torch / csrc / autograd / record_function . h <nl> struct TORCH_API RecordFunction { <nl> return parent_ ; <nl> } <nl> <nl> + bool active ( ) const { <nl> + return initialized_ ; <nl> + } <nl> + <nl> void setRunSampled ( bool run_sampled ) { <nl> run_sampled_ = run_sampled ; <nl> } <nl> mmm a / torch / csrc / autograd / record_function_ops . cpp <nl> ppp b / torch / csrc / autograd / record_function_ops . cpp <nl> void record_function_exit ( const at : : Tensor & handle ) { <nl> / / We don ' t actually need to do anything with handle just need to persist the <nl> / / lifetime until now . <nl> auto & rec = at : : cpp_custom_type_hack : : cast < RecordFunction > ( handle ) ; <nl> - if ( auto * current = RecordFunction : : current ( ) ) { <nl> - AT_ASSERT ( current - > parent ( ) = = & rec , " rec must be parent " ) ; <nl> - AT_ASSERT ( current - > name ( ) = = StringView ( " profiler : : _record_function_exit " ) ) ; <nl> - current - > end ( ) ; <nl> + auto * current = RecordFunction : : current ( ) ; <nl> + if ( rec . active ( ) & & current ) { <nl> + if ( current ! = & rec ) { <nl> + AT_ASSERT ( current - > parent ( ) = = & rec , " rec must be parent " ) ; <nl> + AT_ASSERT ( current - > name ( ) = = StringView ( " profiler : : _record_function_exit " ) ) ; <nl> + current - > end ( ) ; <nl> + } else { <nl> + AT_ASSERT ( current = = & rec , " rec must be active " ) ; <nl> + } <nl> rec . end ( ) ; <nl> } <nl> } <nl>
autograd / profiler : make record_function more threadsafe ( )
pytorch/pytorch
1e80ff7a67d5c54c11913d7c8ee1ded5caa6c708
2019-12-19T00:27:42Z
mmm a / depends / hosts / darwin . mk <nl> ppp b / depends / hosts / darwin . mk <nl> OSX_SDK = $ ( SDK_PATH ) / Xcode - $ ( XCODE_VERSION ) - $ ( XCODE_BUILD_ID ) - extracted - SDK - with - <nl> # When cross - compiling for Darwin using Clang , - mlinker - version must be passed to <nl> # ensure that modern linker features are enabled . <nl> darwin_CC = clang - target $ ( host ) - mmacosx - version - min = $ ( OSX_MIN_VERSION ) - - sysroot $ ( OSX_SDK ) - mlinker - version = $ ( LD64_VERSION ) - B $ ( build_prefix ) / bin <nl> - darwin_CXX = clang + + - target $ ( host ) - mmacosx - version - min = $ ( OSX_MIN_VERSION ) - - sysroot $ ( OSX_SDK ) - stdlib = libc + + - mlinker - version = $ ( LD64_VERSION ) - B $ ( build_prefix ) / bin <nl> + darwin_CXX = clang + + - target $ ( host ) - mmacosx - version - min = $ ( OSX_MIN_VERSION ) - - sysroot $ ( OSX_SDK ) - stdlib = libc + + - mlinker - version = $ ( LD64_VERSION ) - B $ ( build_prefix ) / bin - nostdinc + + - isystem $ ( OSX_SDK ) / usr / include / c + + / v1 <nl> <nl> darwin_CFLAGS = - pipe <nl> darwin_CXXFLAGS = $ ( darwin_CFLAGS ) <nl>
depends : specify libc + + header location for darwin
bitcoin/bitcoin
6b8e497eeaf38f272715c490f317fdc98a2174be
2020-07-11T01:05:54Z
mmm a / src / misc_utilities / classifychars . cpp <nl> ppp b / src / misc_utilities / classifychars . cpp <nl> int main ( int argc , const char * * argv ) <nl> PipelineData pipeline_data ( frame , Rect ( 0 , 0 , frame . cols , frame . rows ) , & config ) ; <nl> cvtColor ( frame , frame , CV_BGR2GRAY ) ; <nl> pipeline_data . crop_gray = Mat ( frame , Rect ( 0 , 0 , frame . cols , frame . rows ) ) ; <nl> + pipeline_data . thresholds = produceThresholds ( pipeline_data . crop_gray , & config ) ; <nl> + <nl> char statecode [ 3 ] ; <nl> statecode [ 0 ] = files [ i ] [ 0 ] ; <nl> statecode [ 1 ] = files [ i ] [ 1 ] ; <nl>
Calculating thresholds before char analysis for classifychars
openalpr/openalpr
d317cb9e71787e1b1d2e47f67f9b16c06aebff29
2017-01-16T21:39:05Z
mmm a / BUILD . gn <nl> ppp b / BUILD . gn <nl> declare_args ( ) { <nl> # TODO ( jgruber , v8 : 6666 ) : Support ia32 and maybe MSVC . <nl> # TODO ( jgruber , v8 : 6666 ) : Enable for remaining architectures once performance <nl> # regressions are addressed . <nl> - v8_enable_embedded_builtins = v8_current_cpu = = " x64 " & & ( ! is_win | | is_clang ) <nl> + v8_enable_embedded_builtins = <nl> + v8_use_snapshot & & v8_current_cpu = = " x64 " & & ( ! is_win | | is_clang ) <nl> <nl> # Enable code - generation - time checking of types in the CodeStubAssembler . <nl> v8_enable_verify_csa = false <nl> if ( v8_check_microtasks_scopes_consistency = = " " ) { <nl> v8_enable_debugging_features | | dcheck_always_on <nl> } <nl> <nl> + assert ( ! v8_enable_embedded_builtins | | v8_use_snapshot , <nl> + " Embedded builtins only work with snapshots " ) <nl> + <nl> # Specifies if the target build is a simulator build . Comparing target cpu <nl> # with v8 target cpu to not affect simulator builds for making cross - compile <nl> # snapshots . <nl> config ( " features " ) { <nl> if ( v8_check_microtasks_scopes_consistency ) { <nl> defines + = [ " V8_CHECK_MICROTASKS_SCOPES_CONSISTENCY " ] <nl> } <nl> - if ( v8_enable_embedded_builtins & & v8_use_snapshot ) { <nl> + if ( v8_enable_embedded_builtins ) { <nl> defines + = [ " V8_EMBEDDED_BUILTINS " ] <nl> } <nl> if ( v8_use_multi_snapshots ) { <nl>
[ build ] Tweak default value of v8_enable_embedded_builtins
v8/v8
9b0b3ab0a8c2de9abf3fd3ffffa7a8700c5dc832
2018-06-13T19:36:23Z
mmm a / docs / en / sql - reference / functions / url - functions . md <nl> ppp b / docs / en / sql - reference / functions / url - functions . md <nl> <nl> mmm <nl> toc_priority : 54 <nl> - toc_title : Working with URLs <nl> + toc_title : URLs <nl> mmm <nl> <nl> # Functions for Working with URLs { # functions - for - working - with - urls } <nl>
Update url - functions . md
ClickHouse/ClickHouse
fa0e2461e74b7076938ddb06804b0a1b62e0d86e
2020-06-19T10:15:32Z
mmm a / tools / emterpretify . py <nl> ppp b / tools / emterpretify . py <nl> <nl> ' 203 ' : ' SETTR0 ' , # [ l , 0 , 0 ] tempRet0 = l <nl> ' 240 ' : ' GETGLBI ' , # [ l , vl , vh ] get global value , int , indexed by v <nl> ' 241 ' : ' GETGLBD ' , # [ l , vl , vh ] get global value , double , indexed by v <nl> + ' 245 ' : ' SETGLBI ' , # [ vl , vh , l ] set global value , int , indexed by v ( v = l ) <nl> ' 250 ' : ' CALL ' , # [ lx , target , sig ] [ params . . . ] ( lx = ) target ( params . . ) lx ' s existence and type depend on the target ' s actual callsig ; <nl> # this instruction can take multiple 32 - bit instruction chunks <nl> # if target is a function table , then the first param is the index of the register holding the function pointer <nl> def make_target_call_sig ( sig ) : <nl> ' \ n } ' <nl> <nl> if ROPCODES [ ' GETGLBI ' ] not in CASES : <nl> - def make_target_access ( i ) : <nl> + def make_load ( i ) : <nl> + sig = ' i ' <nl> name = rglobal_vars [ i ] <nl> - <nl> - def make_target_access_sig ( sig ) : <nl> - return ' ' + get_access ( ' lx ' , sig [ 0 ] ) + ' = ' + name + ' ; break ; ' <nl> - <nl> - return make_target_access_sig ( ' i ' ) # XXX ' i ' is the assumption for now <nl> - <nl> + return ' ' + get_access ( ' lx ' , sig [ 0 ] ) + ' = ' + name + ' ; break ; ' <nl> CASES [ ROPCODES [ ' GETGLBI ' ] ] = ' switch ( ly | 0 ) { \ n ' + \ <nl> - ' \ n ' . join ( filter ( lambda x : ' None ' not in x , [ ' case % d : { \ n % s \ n } ' % ( i , make_target_access ( i ) ) for i in range ( global_var_id ) ] ) ) + \ <nl> + ' \ n ' . join ( filter ( lambda x : ' None ' not in x , [ ' case % d : { \ n % s \ n } ' % ( i , make_load ( i ) ) for i in range ( global_var_id ) ] ) ) + \ <nl> + ' \ n default : assert ( 0 ) ; ' + \ <nl> + ' \ n } ' <nl> + def make_store ( i ) : <nl> + sig = ' i ' <nl> + name = rglobal_vars [ i ] <nl> + return ' ' + name + ' = ' + get_coerced_access ( ' lz ' , sig [ 0 ] ) + ' ; break ; ' <nl> + CASES [ ROPCODES [ ' SETGLBI ' ] ] = ' switch ( ( inst > > 8 ) & 255 ) { \ n ' + \ <nl> + ' \ n ' . 
join ( filter ( lambda x : ' None ' not in x , [ ' case % d : { \ n % s \ n } ' % ( i , make_store ( i ) ) for i in range ( global_var_id ) ] ) ) + \ <nl> ' \ n default : assert ( 0 ) ; ' + \ <nl> ' \ n } ' <nl> <nl> def fix_case ( case ) : <nl> <nl> call_sigs = { } # signatures appearing for each call target <nl> def process_code ( func , code , absolute_targets ) : <nl> + global global_var_id <nl> absolute_start = code_start + len ( all_code ) # true absolute starting point of this function <nl> # print ' processing code ' , func , absolute_start <nl> for i in range ( len ( code ) / 4 ) : <nl> def process_code ( func , code , absolute_targets ) : <nl> # fix global - accessing instructions ' targets <nl> target = code [ j + 2 ] <nl> if target not in global_vars : <nl> - global global_var_id <nl> global_vars [ target ] = global_var_id <nl> rglobal_vars [ global_var_id ] = target <nl> global_var_id + = 1 <nl> code [ j + 2 ] = global_vars [ target ] <nl> + elif code [ j ] in [ ' SETGLBI ' ] : <nl> + # fix global - accessing instructions ' targets <nl> + target = code [ j + 1 ] <nl> + if target not in global_vars : <nl> + global_vars [ target ] = global_var_id <nl> + rglobal_vars [ global_var_id ] = target <nl> + global_var_id + = 1 <nl> + code [ j + 1 ] = global_vars [ target ] <nl> elif code [ j ] = = ' absolute - value ' : <nl> # put the 32 - bit absolute value of an abolute target here <nl> # print ' fixing absolute value ' , code [ j + 1 ] , absolute_targets [ unicode ( code [ j + 1 ] ) ] , absolute_start + absolute_targets [ unicode ( code [ j + 1 ] ) ] <nl> mmm a / tools / js - optimizer . js <nl> ppp b / tools / js - optimizer . js <nl> function emterpretify ( ast ) { <nl> switch ( name ) { <nl> case ' STACKTOP ' : opcode = ' SETST ' ; break ; <nl> case ' tempRet0 ' : opcode = ' SETTR0 ' ; break ; <nl> - default : throw ' assign global wha ? ' + name ; <nl> + default : { <nl> + var type = detectType ( value , asmData ) ; <nl> + assert ( type = = = ASM_INT ) ; <nl> + reg [ 1 ] . push ( ' SETGLBI ' , name , 0 , releaseIfFree ( reg [ 0 ] ) ) ; <nl> + reg [ 0 ] = - 1 ; <nl> + return reg ; <nl> + } <nl> } <nl> reg [ 1 ] . push ( opcode , releaseIfFree ( reg [ 0 ] ) , 0 , 0 ) ; <nl> - return [ - 1 , reg [ 1 ] ] ; <nl> + reg [ 0 ] = - 1 ; <nl> + return reg ; <nl> } <nl> } else if ( target [ 0 ] = = = ' sub ' ) { <nl> / / assign to memory <nl>
SETGLBI
emscripten-core/emscripten
da2f582f7bf15e629908b693107b75f850522e71
2014-09-26T18:22:42Z
mmm a / src / runtime / vm / translator / hopt / irtranslator . cpp <nl> ppp b / src / runtime / vm / translator / hopt / irtranslator . cpp <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> * / <nl> # include < stdint . h > <nl> - # include < assert . h > <nl> - # include < unistd . h > <nl> - # include < sys / mman . h > <nl> - # include < strstream > <nl> - # include < stdio . h > <nl> - # include < stdarg . h > <nl> # include < strings . h > <nl> - # include < string > <nl> - # include < queue > <nl> - # include < zlib . h > <nl> - # include < unwind . h > <nl> <nl> # include " folly / Format . h " <nl> - # include " folly / ScopeGuard . h " <nl> + # include " util / trace . h " <nl> <nl> # include " runtime / vm / bytecode . h " <nl> # include " runtime / vm / runtime . h " <nl> # include " runtime / base / complex_types . h " <nl> - # include " runtime / base / execution_context . h " <nl> - # include " runtime / base / strings . h " <nl> - # include " runtime / base / zend / zend_string . h " <nl> # include " runtime / base / runtime_option . h " <nl> - # include " runtime / base / server / source_root_info . h " <nl> + # include " runtime / vm / translator / targetcache . h " <nl> + # include " runtime / vm / translator / translator - deps . h " <nl> + # include " runtime / vm / translator / translator - inline . h " <nl> # include " runtime / vm / translator / translator - x64 . h " <nl> # include " runtime / vm / stats . h " <nl> <nl> deleted file mode 100644 <nl> index 7335eb15b54 . . 00000000000 <nl> mmm a / src / runtime / vm / translator / log . cpp <nl> ppp / dev / null <nl> <nl> - / * <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - | HipHop for PHP | <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - | Copyright ( c ) 2010 - Facebook , Inc . ( http : / / www . facebook . com ) | <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - | This source file is subject to version 3 . 01 of the PHP license , | <nl> - | that is bundled with this package in the file LICENSE , and is | <nl> - | available through the world - wide - web at the following url : | <nl> - | http : / / www . php . net / license / 3_01 . txt | <nl> - | If you did not receive a copy of the PHP license and are unable to | <nl> - | obtain it through the world - wide - web , please send a note to | <nl> - | license @ php . net so we can mail you a copy immediately . | <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - * / <nl> - # include < stdio . h > <nl> - # include < stdarg . h > <nl> - # include < stdlib . h > <nl> - <nl> - # include " log . h " <nl> - <nl> - namespace HPHP { <nl> - namespace VM { <nl> - namespace Log { <nl> - <nl> - # ifndef RELEASE / * { * / <nl> - <nl> - static FILE * out ; <nl> - <nl> - void initLogFile ( ) { <nl> - if ( ! out ) { <nl> - out = fopen ( " vm . log " , " w " ) ; <nl> - if ( ! out ) { <nl> - fprintf ( stderr , " could not create log file \ n " ) ; <nl> - exit ( 1 ) ; <nl> - } <nl> - } <nl> - } <nl> - <nl> - void vlog ( const char * fmt , va_list args ) { <nl> - initLogFile ( ) ; <nl> - vfprintf ( out , fmt , args ) ; <nl> - fflush ( out ) ; <nl> - } <nl> - <nl> - void log ( const char * fmt , . . . 
) { <nl> - va_list args ; <nl> - va_start ( args , fmt ) ; <nl> - vlog ( fmt , args ) ; <nl> - va_end ( args ) ; <nl> - } <nl> - <nl> - # endif / * } * / <nl> - <nl> - } } } <nl> - <nl> deleted file mode 100644 <nl> index 0c58b5b8195 . . 00000000000 <nl> mmm a / src / runtime / vm / translator / log . h <nl> ppp / dev / null <nl> <nl> - / * <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - | HipHop for PHP | <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - | Copyright ( c ) 2010 - Facebook , Inc . ( http : / / www . facebook . com ) | <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - | This source file is subject to version 3 . 01 of the PHP license , | <nl> - | that is bundled with this package in the file LICENSE , and is | <nl> - | available through the world - wide - web at the following url : | <nl> - | http : / / www . php . net / license / 3_01 . txt | <nl> - | If you did not receive a copy of the PHP license and are unable to | <nl> - | obtain it through the world - wide - web , please send a note to | <nl> - | license @ php . net so we can mail you a copy immediately . | <nl> - + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> - * / <nl> - # ifndef incl_LOG_H_ <nl> - # define incl_LOG_H_ <nl> - <nl> - # include < stdarg . h > <nl> - <nl> - namespace HPHP { <nl> - namespace VM { <nl> - namespace Log { <nl> - # ifdef RELEASE <nl> - static inline void log ( const char * fmt , . . . ) { } <nl> - static inline void vlog ( const char * fmt , va_list ap ) { } <nl> - # else <nl> - extern void log ( const char * fmt , . . . ) ; <nl> - extern void vlog ( const char * fmt , va_list ap ) ; <nl> - # endif <nl> - } <nl> - } <nl> - } <nl> - <nl> - # endif <nl> mmm a / src / runtime / vm / translator / translator - x64 . cpp <nl> ppp b / src / runtime / vm / translator / translator - x64 . cpp <nl> <nl> # include < strings . h > <nl> # include < string > <nl> # include < queue > <nl> - # include < zlib . h > <nl> # include < unwind . h > <nl> <nl> # ifdef __FreeBSD__ <nl> typedef __sighandler_t * sighandler_t ; <nl> # include " runtime / ext / ext_continuation . h " <nl> # include " runtime / vm / debug / debug . h " <nl> # include " runtime / vm / translator / targetcache . h " <nl> - # include " runtime / vm / translator / log . h " <nl> # include " runtime / vm / translator / translator - deps . h " <nl> # include " runtime / vm / translator / translator - inline . h " <nl> # include " runtime / vm / translator / translator - x64 . h " <nl> # include " runtime / vm / translator / srcdb . h " <nl> # include " runtime / vm / translator / x64 - util . h " <nl> # include " runtime / vm / translator / unwind - x64 . h " <nl> - # include " runtime / vm / pendq . h " <nl> - # include " runtime / vm / treadmill . h " <nl> # include " runtime / vm / stats . h " <nl> # include " runtime / vm / pendq . h " <nl> # include " runtime / vm / treadmill . h " <nl>
Reduce irtranslator ' s dependency footprint .
facebook/hhvm
6d2f5c1419ead0585fa4c7482024e948a390ccbf
2013-01-17T18:43:50Z
mmm a / src / builtins / ia32 / builtins - ia32 . cc <nl> ppp b / src / builtins / ia32 / builtins - ia32 . cc <nl> void Builtins : : Generate_FunctionPrototypeApply ( MacroAssembler * masm ) { <nl> / / - - esp [ 12 ] : receiver <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> <nl> - / / 1 . Load receiver into edi , argArray into ebx ( if present ) , remove all <nl> + / / 1 . Load receiver into xmm0 , argArray into edx ( if present ) , remove all <nl> / / arguments from the stack ( including the receiver ) , and push thisArg ( if <nl> / / present ) instead . <nl> { <nl> void Builtins : : Generate_FunctionPrototypeApply ( MacroAssembler * masm ) { <nl> } <nl> <nl> / / mmmmmmmmm - - S t a t e mmmmmmmmmmmm - <nl> - / / - - ebx : argArray <nl> + / / - - edx : argArray <nl> / / - - edi : receiver <nl> / / - - esp [ 0 ] : return address <nl> / / - - esp [ 4 ] : thisArg <nl>
[ ia32 , root ] Update comment
v8/v8
2577a7df39ac301ca02e3e043e5bd13ace0d7fd7
2018-09-21T13:04:14Z
mmm a / stdlib / public / Reflection / TypeLowering . cpp <nl> ppp b / stdlib / public / Reflection / TypeLowering . cpp <nl> class ExistentialTypeInfoBuilder { <nl> switch ( FD - > Kind ) { <nl> case FieldDescriptorKind : : Class : <nl> Refcounting = ReferenceCounting : : Native ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> <nl> case FieldDescriptorKind : : ObjCClass : <nl> addAnyObject ( ) ; <nl> mmm a / stdlib / public / Reflection / TypeRef . cpp <nl> ppp b / stdlib / public / Reflection / TypeRef . cpp <nl> class DemanglingForTypeRef <nl> } <nl> <nl> / / Otherwise it requires a tuple wrapper . <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> } <nl> <nl> / / This covers both none and multiple parameters . <nl> mmm a / stdlib / public / SwiftShims / Visibility . h <nl> ppp b / stdlib / public / SwiftShims / Visibility . h <nl> <nl> # define __has_builtin ( builtin ) 0 <nl> # endif <nl> <nl> + # if ! defined ( __has_cpp_attribute ) <nl> + # define __has_cpp_attribute ( attribute ) 0 <nl> + # endif <nl> + <nl> # if __has_feature ( nullability ) <nl> / / Provide macros to temporarily suppress warning about the use of <nl> / / _Nullable and _Nonnull . <nl> <nl> # define SWIFT_RUNTIME_EXPORT SWIFT_EXPORT_ATTRIBUTE <nl> # endif <nl> <nl> + # if __cplusplus > 201402l & & __has_cpp_attribute ( fallthrough ) <nl> + # define SWIFT_FALLTHROUGH [ [ fallthrough ] ] <nl> + # elif __has_cpp_attribute ( gnu : : fallthrough ) <nl> + # define SWIFT_FALLTHROUGH [ [ gnu : : fallthrough ] ] <nl> + # elif __has_cpp_attribute ( clang : : fallthrough ) <nl> + # define SWIFT_FALLTHROUGH [ [ clang : : fallthrough ] ] <nl> + # elif __has_attribute ( fallthrough ) <nl> + # define SWIFT_FALLTHROUGH __attribute__ ( ( __fallthrough__ ) ) <nl> + # else <nl> + # define SWIFT_FALLTHROUGH <nl> + # endif <nl> + <nl> <nl> / / / Attributes for runtime - stdlib interfaces . <nl> / / / Use these for C implementations that are imported into Swift via SwiftShims <nl> mmm a / stdlib / public / runtime / Casting . cpp <nl> ppp b / stdlib / public / runtime / Casting . cpp <nl> static bool _dynamicCastToExistential ( OpaqueValue * dest , <nl> maybeDeallocateSource ( result ) ; <nl> return result ; <nl> } <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> <nl> case MetadataKind : : Enum : <nl> case MetadataKind : : Optional : <nl> static bool _dynamicCastToExistential ( OpaqueValue * dest , <nl> maybeDeallocateSource ( success ) ; <nl> return success ; <nl> } <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> <nl> default : <nl> return fallbackForNonClass ( ) ; <nl> swift_dynamicCastMetatypeImpl ( const Metadata * sourceType , <nl> / / Get the actual class object . <nl> targetType = static_cast < const ObjCClassWrapperMetadata * > ( targetType ) <nl> - > Class ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> case MetadataKind : : Class : <nl> / / The source value must also be a class ; otherwise the cast fails . <nl> switch ( sourceType - > getKind ( ) ) { <nl> swift_dynamicCastMetatypeImpl ( const Metadata * sourceType , <nl> / / Get the actual class object . <nl> sourceType = static_cast < const ObjCClassWrapperMetadata * > ( sourceType ) <nl> - > Class ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> case MetadataKind : : Class : { <nl> / / Check if the source is a subclass of the target . <nl> # if SWIFT_OBJC_INTEROP <nl> swift_dynamicCastMetatypeImpl ( const Metadata * sourceType , <nl> / / Get the actual class object . 
<nl> sourceType = static_cast < const ObjCClassWrapperMetadata * > ( sourceType ) <nl> - > Class ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> case MetadataKind : : Class : <nl> case MetadataKind : : ForeignClass : <nl> / / Check if the source is a subclass of the target . <nl> swift_dynamicCastMetatypeUnconditionalImpl ( const Metadata * sourceType , <nl> / / Get the actual class object . <nl> targetType = static_cast < const ObjCClassWrapperMetadata * > ( targetType ) <nl> - > Class ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> case MetadataKind : : Class : <nl> / / The source value must also be a class ; otherwise the cast fails . <nl> switch ( sourceType - > getKind ( ) ) { <nl> swift_dynamicCastMetatypeUnconditionalImpl ( const Metadata * sourceType , <nl> / / Get the actual class object . <nl> sourceType = static_cast < const ObjCClassWrapperMetadata * > ( sourceType ) <nl> - > Class ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> case MetadataKind : : Class : { <nl> / / Check if the source is a subclass of the target . <nl> # if SWIFT_OBJC_INTEROP <nl> swift_dynamicCastMetatypeUnconditionalImpl ( const Metadata * sourceType , <nl> / / Get the actual class object . <nl> sourceType = static_cast < const ObjCClassWrapperMetadata * > ( sourceType ) <nl> - > Class ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> case MetadataKind : : Class : <nl> case MetadataKind : : ForeignClass : <nl> / / Check if the source is a subclass of the target . <nl> static bool swift_dynamicCastImpl ( OpaqueValue * dest , OpaqueValue * src , <nl> return _dynamicCastFromAnyHashable ( dest , src , srcType , <nl> targetType , flags ) ; <nl> } <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> <nl> case MetadataKind : : Enum : <nl> case MetadataKind : : Optional : { <nl> static bool swift_dynamicCastImpl ( OpaqueValue * dest , OpaqueValue * src , <nl> break ; <nl> } <nl> <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> <nl> / / The non - polymorphic types . <nl> default : <nl> mmm a / stdlib / public / runtime / Demangle . cpp <nl> ppp b / stdlib / public / runtime / Demangle . cpp <nl> swift : : _swift_buildDemanglingForMetadata ( const Metadata * type , <nl> } <nl> <nl> / / Otherwise it requires a tuple wrapper . <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> } <nl> <nl> / / This covers both none and multiple parameters . <nl> mmm a / stdlib / public / runtime / MetadataCache . h <nl> ppp b / stdlib / public / runtime / MetadataCache . h <nl> class MetadataCacheEntryBase <nl> case LSK : : CompletionQueue : <nl> / / Move the existing completion queue to the cache entry . <nl> queueEntry - > CompletionQueue = LockedStorage . CompletionQueue ; <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> <nl> case LSK : : AllocatingThread : <nl> LockedStorageKind = LSK : : QueueEntry ; <nl> mmm a / stdlib / public / runtime / ReflectionMirror . mm <nl> ppp b / stdlib / public / runtime / ReflectionMirror . mm <nl> auto call ( OpaqueValue * passedValue , const Metadata * T , const Metadata * passedTyp <nl> return callClass ( ) ; <nl> } <nl> } <nl> - LLVM_FALLTHROUGH ; <nl> + SWIFT_FALLTHROUGH ; <nl> } <nl> <nl> / / / TODO : Implement specialized mirror witnesses for all kinds . <nl>
Merge pull request from compnerd / fall - into - the - gap
apple/swift
ec31346c4babe41f17d22b28ec77800e9a27b159
2020-05-07T21:16:27Z
mmm a / dbms / include / DB / DataStreams / IBlockInputStream . h <nl> ppp b / dbms / include / DB / DataStreams / IBlockInputStream . h <nl> class IBlockInputStream : private boost : : noncopyable <nl> <nl> BlockInputStreams children ; <nl> <nl> - size_t aio_threshold = std : : numeric_limits < size_t > : : max ( ) ; <nl> + size_t aio_threshold = 0 ; <nl> <nl> private : <nl> void getLeavesImpl ( BlockInputStreams & res , BlockInputStreamPtr this_shared_ptr = nullptr ) ; <nl> mmm a / dbms / include / DB / Storages / IStorage . h <nl> ppp b / dbms / include / DB / Storages / IStorage . h <nl> class IStorage : private boost : : noncopyable , public ITableDeclaration <nl> / * * Выполнить какую - либо фоновую работу . Например , объединение кусков в таблице типа MergeTree . <nl> * Возвращает - была ли выполнена какая - либо работа . <nl> * / <nl> - virtual bool optimize ( size_t aio_threshold ) <nl> + bool optimize ( size_t aio_threshold = 0 ) <nl> { <nl> - throw Exception ( " Method optimize is not supported by storage " + getName ( ) , ErrorCodes : : NOT_IMPLEMENTED ) ; <nl> + return performOptimize ( aio_threshold ) ; <nl> } <nl> <nl> / * * Получить запрос CREATE TABLE , который описывает данную таблицу . <nl> class IStorage : private boost : : noncopyable , public ITableDeclaration <nl> / / / проверяет валидность данных <nl> virtual bool checkData ( ) const { throw DB : : Exception ( " Check query is not supported for " + getName ( ) + " storage " ) ; } <nl> <nl> + protected : <nl> + virtual bool performOptimize ( size_t aio_threshold ) <nl> + { <nl> + throw Exception ( " Method optimize is not supported by storage " + getName ( ) , ErrorCodes : : NOT_IMPLEMENTED ) ; <nl> + } <nl> + <nl> protected : <nl> using ITableDeclaration : : ITableDeclaration ; <nl> <nl> mmm a / dbms / include / DB / Storages / MergeTree / MergedBlockOutputStream . h <nl> ppp b / dbms / include / DB / Storages / MergeTree / MergedBlockOutputStream . h <nl> class MergedBlockOutputStream : public IMergedBlockOutputStream <nl> String part_path_ , <nl> const NamesAndTypesList & columns_list_ , <nl> CompressionMethod compression_method , <nl> - const std : : map < std : : string , size_t > & merged_column_to_size_ , <nl> + const MergeTreeData : : DataPart : : ColumnToSize & merged_column_to_size_ , <nl> size_t aio_threshold_ ) <nl> : IMergedBlockOutputStream ( <nl> storage_ , storage_ . context . getSettings ( ) . min_compress_block_size , <nl> class MergedBlockOutputStream : public IMergedBlockOutputStream <nl> for ( const auto & it : columns_list ) <nl> { <nl> size_t estimated_size = 0 ; <nl> - auto it2 = merged_column_to_size_ . find ( it . name ) ; <nl> - if ( it2 ! = merged_column_to_size_ . end ( ) ) <nl> - estimated_size = it2 - > second ; <nl> + if ( aio_threshold > 0 ) <nl> + { <nl> + auto it2 = merged_column_to_size_ . find ( it . name ) ; <nl> + if ( it2 ! = merged_column_to_size_ . end ( ) ) <nl> + estimated_size = it2 - > second ; <nl> + } <nl> addStream ( part_path , it . name , * it . type , estimated_size ) ; <nl> } <nl> } <nl> mmm a / dbms / include / DB / Storages / StorageBuffer . h <nl> ppp b / dbms / include / DB / Storages / StorageBuffer . h <nl> friend class BufferBlockOutputStream ; <nl> <nl> / / / Сбрасывает все буферы в подчинённую таблицу . 
<nl> void shutdown ( ) override ; <nl> - bool optimize ( size_t aio_threshold ) override ; <nl> <nl> void rename ( const String & new_path_to_db , const String & new_database_name , const String & new_table_name ) override { name = new_table_name ; } <nl> <nl> friend class BufferBlockOutputStream ; <nl> void writeBlockToDestination ( const Block & block , StoragePtr table ) ; <nl> <nl> void flushThread ( ) ; <nl> + <nl> + bool performOptimize ( size_t aio_threshold ) override ; <nl> } ; <nl> <nl> } <nl> mmm a / dbms / include / DB / Storages / StorageMaterializedView . h <nl> ppp b / dbms / include / DB / Storages / StorageMaterializedView . h <nl> class StorageMaterializedView : public StorageView { <nl> <nl> BlockOutputStreamPtr write ( ASTPtr query ) override ; <nl> void drop ( ) override ; <nl> - bool optimize ( size_t aio_threshold ) override ; <nl> <nl> BlockInputStreams read ( <nl> const Names & column_names , <nl> class StorageMaterializedView : public StorageView { <nl> const NamesAndTypesList & alias_columns_ , <nl> const ColumnDefaults & column_defaults_ , <nl> bool attach_ ) ; <nl> + <nl> + bool performOptimize ( size_t aio_threshold ) override ; <nl> } ; <nl> <nl> } <nl> mmm a / dbms / include / DB / Storages / StorageMergeTree . h <nl> ppp b / dbms / include / DB / Storages / StorageMergeTree . h <nl> friend class MergeTreeBlockOutputStream ; <nl> <nl> BlockOutputStreamPtr write ( ASTPtr query ) override ; <nl> <nl> - / * * Выполнить очередной шаг объединения кусков . <nl> - * / <nl> - bool optimize ( size_t aio_threshold ) override <nl> - { <nl> - return merge ( aio_threshold , true ) ; <nl> - } <nl> - <nl> void dropPartition ( const Field & partition , bool detach , const Settings & settings ) override ; <nl> void attachPartition ( const Field & partition , bool unreplicated , bool part , const Settings & settings ) override ; <nl> void freezePartition ( const Field & partition , const Settings & settings ) override ; <nl> friend class MergeTreeBlockOutputStream ; <nl> <nl> MergeTreeData & getData ( ) { return data ; } <nl> <nl> + private : <nl> + / * * Выполнить очередной шаг объединения кусков . <nl> + * / <nl> + bool performOptimize ( size_t aio_threshold ) override <nl> + { <nl> + return merge ( aio_threshold , true ) ; <nl> + } <nl> + <nl> private : <nl> String path ; <nl> String database_name ; <nl> mmm a / dbms / include / DB / Storages / StorageReplicatedMergeTree . h <nl> ppp b / dbms / include / DB / Storages / StorageReplicatedMergeTree . h <nl> class StorageReplicatedMergeTree : public IStorage <nl> <nl> BlockOutputStreamPtr write ( ASTPtr query ) override ; <nl> <nl> - bool optimize ( size_t aio_threshold ) override ; <nl> - <nl> void alter ( const AlterCommands & params , const String & database_name , const String & table_name , Context & context ) override ; <nl> <nl> void dropPartition ( const Field & partition , bool detach , const Settings & settings ) override ; <nl> class StorageReplicatedMergeTree : public IStorage <nl> * / <nl> void waitForReplicaToProcessLogEntry ( const String & replica_name , const LogEntry & entry ) ; <nl> <nl> + bool performOptimize ( size_t aio_threshold ) override ; <nl> <nl> / / / Преобразовать число в строку формате суффиксов автоинкрементных нод в ZooKeeper . <nl> static String padIndex ( UInt64 index ) <nl> mmm a / dbms / src / IO / createReadBufferFromFileBase . cpp <nl> ppp b / dbms / src / IO / createReadBufferFromFileBase . cpp <nl> <nl> # include < DB / IO / createReadBufferFromFileBase . 
h >
#include <DB/IO/ReadBufferFromFile.h>
#include <DB/IO/ReadBufferAIO.h>
+#include <Poco/File.h>

namespace DB
{
namespace DB
ReadBufferFromFileBase * createReadBufferFromFileBase(const std::string & filename_, size_t aio_threshold,
    size_t buffer_size_, int flags_, char * existing_memory_, size_t alignment)
{
-   struct stat info;
-   int res = ::stat(filename_.c_str(), &info);
-   if (res != 0)
-       throwFromErrno("Cannot stat file " + filename_);
+   size_t file_size = (aio_threshold > 0) ? Poco::File(filename_).getSize() : 0;

-   if (info.st_size < 0)
-       throw Exception("Corrupted file " + filename_, ErrorCodes::LOGICAL_ERROR);
-
-   if (static_cast<size_t>(info.st_size) < aio_threshold)
+   if ((aio_threshold == 0) || (file_size < aio_threshold))
        return new ReadBufferFromFile(filename_, buffer_size_, flags_, existing_memory_, alignment);
    else
        return new ReadBufferAIO(filename_, buffer_size_, flags_, existing_memory_);
--- a/dbms/src/IO/createWriteBufferFromFileBase.cpp
+++ b/dbms/src/IO/createWriteBufferFromFileBase.cpp
WriteBufferFromFileBase * createWriteBufferFromFileBase(const std::string & file
    size_t aio_threshold, size_t buffer_size_, int flags_, mode_t mode, char * existing_memory_,
    size_t alignment)
{
-   if (estimated_size < aio_threshold)
+   if ((aio_threshold == 0) || (estimated_size < aio_threshold))
        return new WriteBufferFromFile(filename_, buffer_size_, flags_, mode, existing_memory_, alignment);
    else
        return new WriteBufferAIO(filename_, buffer_size_, flags_, mode, existing_memory_);
--- a/dbms/src/Storages/MergeTree/MergeTreeDataMerger.cpp
+++ b/dbms/src/Storages/MergeTree/MergeTreeDataMerger.cpp
MergeTreeData::DataPartPtr MergeTreeDataMerger::mergeParts(
    Names union_column_names = union_columns.getNames();

    MergeTreeData::DataPart::ColumnToSize merged_column_to_size;
-   for (const MergeTreeData::DataPartPtr & part : parts)
-       part->accumulateColumnSizes(merged_column_to_size);
+   if (aio_threshold > 0)
+   {
+       for (const MergeTreeData::DataPartPtr & part : parts)
+           part->accumulateColumnSizes(merged_column_to_size);
+   }

    MergeTreeData::MutableDataPartPtr new_data_part = std::make_shared<MergeTreeData::DataPart>(data);
    ActiveDataPartSet::parsePartName(merged_name, *new_data_part);
--- a/dbms/src/Storages/StorageBuffer.cpp
+++ b/dbms/src/Storages/StorageBuffer.cpp
void StorageBuffer::shutdown()
    if (flush_thread.joinable())
        flush_thread.join();

-   /// The parameter is ignored.
-   optimize(0);
+   optimize();
}


-bool StorageBuffer::optimize(size_t /*aio_threshold*/)
+bool StorageBuffer::performOptimize(size_t aio_threshold)
{
    flushAllBuffers(false);

void StorageBuffer::alter(const AlterCommands & params, const String & database_
    auto lock = lockStructureForAlter();

    /// So that no blocks of the old structure remain.
-   /// The parameter is ignored.
-   optimize(0);
+   optimize();

    params.apply(*columns, materialized_columns, alias_columns, column_defaults);
    InterpreterAlterQuery::updateMetadata(database_name, table_name,
--- a/dbms/src/Storages/StorageMaterializedView.cpp
+++ b/dbms/src/Storages/StorageMaterializedView.cpp
void StorageMaterializedView::drop()
    }
}

-bool StorageMaterializedView::optimize(size_t aio_threshold) {
+bool StorageMaterializedView::performOptimize(size_t aio_threshold)
+{
    return data->optimize(aio_threshold);
}

--- a/dbms/src/Storages/StorageReplicatedMergeTree.cpp
+++ b/dbms/src/Storages/StorageReplicatedMergeTree.cpp
BlockOutputStreamPtr StorageReplicatedMergeTree::write(ASTPtr query)
}


-bool StorageReplicatedMergeTree::optimize(size_t aio_threshold)
+bool StorageReplicatedMergeTree::performOptimize(size_t aio_threshold)
{
    /// Merge some parts from the unreplicated directory.
    /// TODO: Merge replicated parts too.
dbms: Server: min_bytes_to_use_direct_io = 0 means no AIO; various cleanups. [#METR-15090]
ClickHouse/ClickHouse
c7ab47d84ddfc7d23e361d30d90987ec76406847
2015-04-10T17:09:16Z
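The ClickHouse entry above settles on one rule: aio_threshold == 0 means "never use direct I/O", and the file size is only checked when a positive threshold makes it relevant. Below is a minimal C++17 sketch of that dispatch, not ClickHouse's actual API; the reader types are hypothetical stand-ins.

```cpp
// Sketch of threshold-based I/O dispatch: buffered reads for small files,
// direct I/O for large ones, with 0 meaning "direct I/O disabled".
#include <cstddef>
#include <filesystem>
#include <memory>
#include <string>

struct ReadBuffer { virtual ~ReadBuffer() = default; };                         // hypothetical base
struct BufferedReader : ReadBuffer { explicit BufferedReader(const std::string&) {} };
struct DirectIOReader : ReadBuffer { explicit DirectIOReader(const std::string&) {} };

std::unique_ptr<ReadBuffer> makeReader(const std::string& path, size_t aio_threshold)
{
    // With aio_threshold == 0 we skip the size check entirely -- no stat call,
    // mirroring how the commit avoids Poco::File(...).getSize() in that case.
    size_t file_size = (aio_threshold > 0) ? std::filesystem::file_size(path) : 0;

    if (aio_threshold == 0 || file_size < aio_threshold)
        return std::make_unique<BufferedReader>(path);   // small file or AIO disabled
    return std::make_unique<DirectIOReader>(path);       // large file: direct I/O pays off
}
```

Note the ordering: the "disabled" case is tested before the size comparison, so a zero threshold is never misread as "every file exceeds the threshold".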
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
cuDNN can operate on both formats, it is faster to operate in its default
 format.

 The best practice is to build models that work with both `NCHW` and `NHWC` as it
-is common to train using `NCHW` on GPU, and then do inference with NHWC on CPU.
+is common to train using `NCHW` on GPU, and then do inference with `NHWC` on CPU.

 There are edge cases where `NCHW` can be slower on GPU than `NHWC`. One
 [case](https://github.com/tensorflow/tensorflow/issues/7551#issuecomment-280421351)
Minor formatting fix
tensorflow/tensorflow
6503c937b2b0ec85212ad22a07aa2330ba94f94b
2017-08-03T22:23:36Z
--- a/utils/swift_build_support/swift_build_support/products/pythonkit.py
+++ b/utils/swift_build_support/swift_build_support/products/pythonkit.py
def build(self, host_target):
             '-G', 'Ninja',
             '-D', 'BUILD_SHARED_LIBS=YES',
             '-D', 'CMAKE_INSTALL_PREFIX={}/usr'.format(
-                self.args.install_destdir),
+                self.install_toolchain_path()),
             '-D', 'CMAKE_MAKE_PROGRAM={}'.format(self.toolchain.ninja),
             '-D', 'CMAKE_Swift_COMPILER={}'.format(swiftc),
             '-B', self.build_dir,
--- a/utils/swift_build_support/swift_build_support/products/tensorflow.py
+++ b/utils/swift_build_support/swift_build_support/products/tensorflow.py
def build(self, host_target):
             '-G', 'Ninja',
             '-D', 'BUILD_SHARED_LIBS=YES',
             '-D', 'CMAKE_INSTALL_PREFIX={}/usr'.format(
-                self.args.install_destdir),
+                self.install_toolchain_path()),
             '-D', 'CMAKE_MAKE_PROGRAM={}'.format(self.toolchain.ninja),
             '-D', 'CMAKE_Swift_COMPILER={}'.format(swiftc),
             '-B', self.build_dir,
Merge remote-tracking branch 'origin/master' into master-next
apple/swift
96fc206cf12a232c0e523262985d8d03d9e2de78
2020-02-28T16:44:03Z
--- a/cmake/modules/SwiftConfigureSDK.cmake
+++ b/cmake/modules/SwiftConfigureSDK.cmake
macro(configure_sdk_windows name environment architectures)
     set(WinSDK${arch}UMDir "$ENV{UniversalCRTSdkDir}/Lib/$ENV{UCRTVersion}/um/${WinSDKArchitecture}")
     set(OverlayDirectory "${CMAKE_BINARY_DIR}/winsdk_lib_${arch}_symlinks")

-    file(MAKE_DIRECTORY ${OverlayDirectory})
-
-    file(GLOB libraries RELATIVE "${WinSDK${arch}UMDir}" "${WinSDK${arch}UMDir}/*")
-    foreach(library ${libraries})
-      get_filename_component(name_we "${library}" NAME_WE)
-      get_filename_component(ext "${library}" EXT)
-      string(TOLOWER "${ext}" lowercase_ext)
-      set(lowercase_ext_symlink_name "${name_we}${lowercase_ext}")
-      if(NOT library STREQUAL lowercase_ext_symlink_name)
-        execute_process(COMMAND
-                        "${CMAKE_COMMAND}" -E create_symlink "${WinSDK${arch}UMDir}/${library}" "${OverlayDirectory}/${lowercase_ext_symlink_name}")
-      endif()
-    endforeach()
+    if(NOT EXISTS "$ENV{UniversalCRTSdkDir}/Include/$ENV{UCRTVersion}/um/WINDOWS.H")
+      file(MAKE_DIRECTORY ${OverlayDirectory})
+
+      file(GLOB libraries RELATIVE "${WinSDK${arch}UMDir}" "${WinSDK${arch}UMDir}/*")
+      foreach(library ${libraries})
+        get_filename_component(name_we "${library}" NAME_WE)
+        get_filename_component(ext "${library}" EXT)
+        string(TOLOWER "${ext}" lowercase_ext)
+        set(lowercase_ext_symlink_name "${name_we}${lowercase_ext}")
+        if(NOT library STREQUAL lowercase_ext_symlink_name)
+          execute_process(COMMAND
+                          "${CMAKE_COMMAND}" -E create_symlink "${WinSDK${arch}UMDir}/${library}" "${OverlayDirectory}/${lowercase_ext_symlink_name}")
+        endif()
+      endforeach()
+    endif()
   endforeach()

   # Add this to the list of known SDKs.
build: do not create WinSDK symlinks when unneeded
apple/swift
a5d360ba7a2f80fa87e25b709afc66d08beb51cb
2018-11-06T00:16:08Z
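The swift commit above probes one canonically-cased SDK header and skips the whole lowercase-symlink overlay when the probe succeeds, i.e. when the filesystem is already case-insensitive. The change itself is CMake; here is a hedged C++17 sketch of the same guard, with illustrative paths and a probe location simplified relative to the real build logic.

```cpp
// Sketch: build a lowercase-extension symlink overlay over an SDK library
// directory, but only when a case-sensitive filesystem makes it necessary.
#include <cctype>
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

void makeLowercaseOverlay(const fs::path& sdkUmDir, const fs::path& overlayDir)
{
    // If an exact-uppercase probe resolves, the filesystem is case-insensitive
    // and no overlay is needed -- analogous to the WINDOWS.H check above.
    if (fs::exists(sdkUmDir / "WINDOWS.H"))
        return;

    fs::create_directories(overlayDir);
    for (const auto& entry : fs::directory_iterator(sdkUmDir)) {
        // Lowercase only the extension (FOO.Lib -> FOO.lib), as the CMake loop does.
        std::string ext = entry.path().extension().string();
        for (char& c : ext)
            c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        fs::path lowered = entry.path().stem();
        lowered += ext;
        if (lowered.string() != entry.path().filename().string())
            fs::create_symlink(entry.path(), overlayDir / lowered);
    }
}
```

The design point is that the symlink farm is a workaround, so its cost (thousands of execute_process calls in the original) should only be paid on hosts that actually need it.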
--- a/xbmc/cores/VideoPlayer/DVDCodecs/Video/DVDVideoCodecAndroidMediaCodec.cpp
+++ b/xbmc/cores/VideoPlayer/DVDCodecs/Video/DVDVideoCodecAndroidMediaCodec.cpp
CDVDVideoCodecAndroidMediaCodec::CDVDVideoCodecAndroidMediaCodec(CProcessInfo &p
 , m_OutputDuration(0)
 , m_fpsDuration(0)
 , m_lastPTS(-1)
+, m_dtsShift(DVD_NOPTS_VALUE)
 , m_bitstream(nullptr)
 , m_render_surface(surface_render)
 , m_mpeg2_sequence(nullptr)
bool CDVDVideoCodecAndroidMediaCodec::Open(CDVDStreamInfo &hints, CDVDCodecOptio
  m_codecControlFlags = 0;
  m_hints = hints;
  m_indexInputBuffer = -1;
+ m_dtsShift = DVD_NOPTS_VALUE;

  CLog::Log(LOGDEBUG, LOGVIDEO, "CDVDVideoCodecAndroidMediaCodec::Open hints: fpsrate %d / fpsscale %d\n", m_hints.fpsrate, m_hints.fpsscale);
  CLog::Log(LOGDEBUG, LOGVIDEO, "CDVDVideoCodecAndroidMediaCodec::Open hints: CodecID %d\n", m_hints.codec);
bool CDVDVideoCodecAndroidMediaCodec::AddData(const DemuxPacket &packet)
  // Do not try to pass pts as a unioned double/int64_t,
  // some android devices will diddle with presentationTimeUs
  // and you will get NaN back and VideoPlayerVideo will barf.
+ if (m_dtsShift == DVD_NOPTS_VALUE)
+   m_dtsShift = (dts == DVD_NOPTS_VALUE) ? 0 : dts;
+
  int64_t presentationTimeUs = 0;
  if (pts != DVD_NOPTS_VALUE)
-   presentationTimeUs = pts;
- else if (dts != DVD_NOPTS_VALUE)
-   presentationTimeUs = dts;
+   presentationTimeUs = (pts - m_dtsShift);

  int flags = 0;
  int offset = 0;
void CDVDVideoCodecAndroidMediaCodec::FlushInternal()
  // new ones to match the number of output buffers
  m_OutputDuration = 0;
  m_lastPTS = -1;
+ m_dtsShift = DVD_NOPTS_VALUE;
}

bool CDVDVideoCodecAndroidMediaCodec::ConfigureMediaCodec(void)
int CDVDVideoCodecAndroidMediaCodec::GetOutputPicture(void)
  if (pts != AV_NOPTS_VALUE)
  {
    m_videobuffer.pts = pts;
+   m_videobuffer.pts += m_dtsShift;
    if (m_lastPTS >= 0 && pts > m_lastPTS)
      m_OutputDuration += pts - m_lastPTS;
    m_lastPTS = pts;
--- a/xbmc/cores/VideoPlayer/DVDCodecs/Video/DVDVideoCodecAndroidMediaCodec.h
+++ b/xbmc/cores/VideoPlayer/DVDCodecs/Video/DVDVideoCodecAndroidMediaCodec.h
class CDVDVideoCodecAndroidMediaCodec : public CDVDVideoCodec, public CJNISurfac

  uint32_t m_OutputDuration, m_fpsDuration;
  int64_t m_lastPTS;
+ double m_dtsShift;

  static std::atomic<bool> m_InstanceGuard;
Merge pull request from peak3d/negative
xbmc/xbmc
5880ccf21c62d60ae2779cc69b4d8952ea4174ad
2018-11-28T18:21:54Z
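The xbmc change above latches the first DTS as a shift, feeds the decoder rebased (non-negative) timestamps, and adds the shift back on output, so devices that mangle negative presentationTimeUs values never see one. A distilled sketch of that pattern, with simplified stand-ins for the DVD_NOPTS_VALUE constant and the codec plumbing:

```cpp
// Timestamp rebasing: shift input pts into the decoder's comfort zone,
// then restore the original timeline on the way out.
#include <cstdint>

constexpr int64_t NOPTS = INT64_MIN;  // stand-in for DVD_NOPTS_VALUE

struct TimestampRebaser {
    int64_t shift = NOPTS;

    // Called per input packet, before handing pts to the decoder.
    int64_t toDecoder(int64_t pts, int64_t dts) {
        if (shift == NOPTS)                        // latch once, on the first packet
            shift = (dts == NOPTS) ? 0 : dts;
        return (pts == NOPTS) ? 0 : pts - shift;   // decoder only sees rebased pts
    }

    // Called per output picture, to restore the original timeline.
    int64_t fromDecoder(int64_t pts) const { return pts + shift; }

    // A seek or flush invalidates the latched shift; re-latch on the next packet.
    void flush() { shift = NOPTS; }
};
```

Resetting the shift in flush() matters: after a seek, the first packet's DTS defines a new baseline, exactly as FlushInternal() does in the commit.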
--- a/src/classify/protos.cpp
+++ b/src/classify/protos.cpp
int AddConfigToClass(CLASS_TYPE Class) {
 * @param Class The class to add to
 */
int AddProtoToClass(CLASS_TYPE Class) {
-  int i;
-  int Bit;
-  int NewNumProtos;
-  int NewProto;
-  BIT_VECTOR Config;
-
  if (Class->NumProtos >= Class->MaxNumProtos) {
    /* add protos in PROTO_INCREMENT chunks at a time */
-    NewNumProtos = (((Class->MaxNumProtos + PROTO_INCREMENT) /
+    int NewNumProtos = (((Class->MaxNumProtos + PROTO_INCREMENT) /
                    PROTO_INCREMENT) * PROTO_INCREMENT);

    Class->Prototypes = static_cast<PROTO>(Erealloc(Class->Prototypes,
int AddProtoToClass(CLASS_TYPE Class) {
    Class->MaxNumProtos = NewNumProtos;
    ASSERT_HOST(NewNumProtos <= MAX_NUM_PROTOS);
  }
-  NewProto = Class->NumProtos++;
+  int NewProto = Class->NumProtos++;
  ASSERT_HOST(Class->NumProtos <= MAX_NUM_PROTOS);
  return (NewProto);
}
Fix some compiler warnings (unused local variables)
tesseract-ocr/tesseract
f92181561cecefccf63e94a7f0e3c64ba60a804b
2019-07-17T05:47:28Z
--- a/README.md
+++ b/README.md
Supported operating systems and hardware:
 [#304](https://github.com/winlinvip/simple-rtmp-server/issues/304).
 1. Support push RTSP to SRS, read
 [#133](https://github.com/winlinvip/simple-rtmp-server/issues/133).
+1. Support DVR http api, read
+[#179](https://github.com/winlinvip/simple-rtmp-server/issues/179).
 1. [no-plan] Support <500ms latency, FRSC (Fast RTMP-compatible Stream Channel tech).
 1. [no-plan] Support RTMP 302 redirect [#92](https://github.com/winlinvip/simple-rtmp-server/issues/92).
 1. [no-plan] Support multiple processes, for both origin and edge
for , update readme.
ossrs/srs
691f7322041ebbd877eb9bd3cec646807de4027e
2015-02-23T11:51:13Z
--- a/fdbmonitor/CMakeLists.txt
+++ b/fdbmonitor/CMakeLists.txt
endif()
 # Create a local sandbox for quick manual testing without simulator
 file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/sandbox/data)
 file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/sandbox/logs)
-configure_file(${CMAKE_SOURCE_DIR}/cmake/Sandbox.conf.cmake
-  ${CMAKE_BINARY_DIR}/sandbox/foundationdb.conf)
+if(NOT EXISTS ${CMAKE_BINARY_DIR}/sandbox/foundationdb.conf)
+  configure_file(${CMAKE_SOURCE_DIR}/cmake/Sandbox.conf.cmake
+    ${CMAKE_BINARY_DIR}/sandbox/foundationdb.conf)
+endif()

 # this is not portable on Windows - but fdbmonitor isn't built there anyways...
 add_custom_target(clean_sandbox
Don't overwrite sandbox config on recompile
apple/foundationdb
438be4edd50fb703687eb49994d346b1e8a6b109
2020-08-12T20:32:31Z
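The FoundationDB fix is a small but general idea: generate a default config only when none exists, so a developer's local edits survive rebuilds. The commit is CMake; the same guard in C++17 looks like the sketch below, with an illustrative file path and comment text.

```cpp
// Write a default config file once; never clobber a hand-edited one.
#include <filesystem>
#include <fstream>

void writeDefaultConfigIfAbsent(const std::filesystem::path& conf)
{
    if (std::filesystem::exists(conf))
        return;  // a config already exists -- leave the user's edits alone
    std::ofstream out(conf);
    out << "# generated default -- edit freely, the build will not regenerate this\n";
}
```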
--- a/tensorflow/lite/delegates/gpu/cl/gpu_api_delegate.cc
+++ b/tensorflow/lite/delegates/gpu/cl/gpu_api_delegate.cc
class Delegate {
    absl::Status status =
        NewInferenceEnvironment(env_options, &environment_, &properties);
    if (!properties.is_opencl_available) {
-     context->ReportError(context,
-                          "TfLiteGpuDelegate: OpenCL is not available");
+     TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate: OpenCL is not available");
    }
    if (!properties.is_gl_sharing_supported) {
-     context->ReportError(context,
-                          "TfLiteGpuDelegate: GL sharing is not supported");
+     TF_LITE_KERNEL_LOG(context,
+                        "TfLiteGpuDelegate: GL sharing is not supported");
    }
    if (!properties.is_cl_to_gl_fast_sync_supported) {
-     context->ReportError(
+     TF_LITE_KERNEL_LOG(
          context, "TfLiteGpuDelegate: fast CL to GL sync is not supported");
    }
    if (!properties.is_gl_to_cl_fast_sync_supported) {
-     context->ReportError(
+     TF_LITE_KERNEL_LOG(
          context, "TfLiteGpuDelegate: fast GL to CL sync is not supported");
    }
    RETURN_IF_ERROR(status);
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  // for whatever reason forbids that.
  const auto status = gpu_delegate->Prepare(context, params);
  if (!status.ok()) {
-   context->ReportError(context, "TfLiteGpuDelegate Init: %s",
-                        std::string(status.message()).c_str());
+   TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate Init: %s",
+                      std::string(status.message()).c_str());
    return nullptr;
  }
  return gpu_delegate;
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  // .prepare
  [](TfLiteContext* context, TfLiteNode* node) -> TfLiteStatus {
    if (!node->user_data) {
-     context->ReportError(
+     TF_LITE_KERNEL_LOG(
          context,
          "TfLiteGpuDelegate Prepare: delegate is not initialized");
      return kTfLiteError;
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  [](TfLiteContext* context, TfLiteNode* node) -> TfLiteStatus {
    const auto status = GetDelegate(node)->Invoke(context);
    if (!status.ok()) {
-     context->ReportError(context, "TfLiteGpuDelegate Invoke: %s",
-                          std::string(status.message()).c_str());
+     TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate Invoke: %s",
+                        std::string(status.message()).c_str());
      return kTfLiteError;
    }
    return kTfLiteOk;
--- a/tensorflow/lite/delegates/gpu/delegate.cc
+++ b/tensorflow/lite/delegates/gpu/delegate.cc
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
      absl::make_unique<DelegateKernel>(gpu_delegate);
  const auto status = gpu_delegate_kernel->Prepare(context, params);
  if (!status.ok()) {
-   context->ReportError(context, "TfLiteGpuDelegate Init: %s",
-                        std::string(status.message()).c_str());
+   TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate Init: %s",
+                      std::string(status.message()).c_str());
    return nullptr;
  }
  return gpu_delegate_kernel.release();
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  // .prepare
  [](TfLiteContext* context, TfLiteNode* node) -> TfLiteStatus {
    if (!node->user_data) {
-     context->ReportError(
+     TF_LITE_KERNEL_LOG(
          context,
          "TfLiteGpuDelegate Prepare: delegate is not initialized");
      return kTfLiteError;
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  [](TfLiteContext* context, TfLiteNode* node) -> TfLiteStatus {
    const auto status = GetDelegateKernel(node)->Invoke(context);
    if (!status.ok()) {
-     context->ReportError(context, "TfLiteGpuDelegate Invoke: %s",
-                          std::string(status.message()).c_str());
+     TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate Invoke: %s",
+                        std::string(status.message()).c_str());
      return kTfLiteError;
    }
    return kTfLiteOk;
--- a/tensorflow/lite/delegates/gpu/gl_delegate.cc
+++ b/tensorflow/lite/delegates/gpu/gl_delegate.cc
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  // for whatever reason forbids that.
  const auto status = gpu_delegate->Prepare(context, params);
  if (status.ok()) return gpu_delegate;
- context->ReportError(context, "TfLiteGpuDelegate Prepare: %s",
-                      std::string(status.message()).c_str());
+ TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate Prepare: %s",
+                    std::string(status.message()).c_str());
  return nullptr;
  },
  // .free
TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  [](TfLiteContext* context, TfLiteNode* node) -> TfLiteStatus {
    const auto status = GetGpuDelegate(node)->Invoke(context);
    if (status.ok()) return kTfLiteOk;
-   context->ReportError(context, "TfLiteGpuDelegate Invoke: %s",
-                        std::string(status.message()).c_str());
+   TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate Invoke: %s",
+                      std::string(status.message()).c_str());
    return kTfLiteError;
  },
  nullptr,  // .profiling_string
TfLiteStatus DelegateCopyFromBufferHandle(TfLiteContext* context,
  if (!gpu_delegate) return kTfLiteError;
  const auto status = gpu_delegate->CopyFromBufferHandle(buffer_handle, tensor);
  if (status.ok()) return kTfLiteOk;
- context->ReportError(context, "TfLiteGpuDelegate CopyFromBufferHandle: %s",
-                      std::string(status.message()).c_str());
+ TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate CopyFromBufferHandle: %s",
+                    std::string(status.message()).c_str());
  return kTfLiteError;
}

TfLiteStatus DelegateCopyToBufferHandle(TfLiteContext* context,
  if (!gpu_delegate) return kTfLiteError;
  const auto status = gpu_delegate->CopyToBufferHandle(buffer_handle, tensor);
  if (status.ok()) return kTfLiteOk;
- context->ReportError(context, "TfLiteGpuDelegate CopyToBufferHandle: %s",
-                      std::string(status.message()).c_str());
+ TF_LITE_KERNEL_LOG(context, "TfLiteGpuDelegate CopyToBufferHandle: %s",
+                    std::string(status.message()).c_str());
  return kTfLiteError;
}
Replace context->ReportError calls with TF_LITE_KERNEL_LOG macro.
tensorflow/tensorflow
946b3753ae848752a24aa03d6f90f2c9736c664c
2020-06-10T16:54:25Z
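Why funnel every error through a macro instead of calling the context's function pointer directly: one call-site shape everywhere, and a single place to add guards such as a null-context check. The sketch below is a hedged illustration of such a macro, not TF Lite's actual definition of TF_LITE_KERNEL_LOG; the Context struct is a stand-in for TfLiteContext.

```cpp
// A logging macro wrapping a C-style error-report callback.
#include <cstdio>

struct Context {
    // Variadic callback, printf-style, as delegate contexts commonly expose.
    void (*report_error)(Context*, const char* fmt, ...);
};

#define KERNEL_LOG(ctx, ...)                          \
    do {                                              \
        if ((ctx) != nullptr && (ctx)->report_error)  \
            (ctx)->report_error((ctx), __VA_ARGS__);  \
    } while (false)

// Usage: KERNEL_LOG(context, "TfLiteGpuDelegate Invoke: %s", msg);
```

The do/while(false) wrapper keeps the macro safe inside unbraced if/else branches, which is exactly where these error paths tend to live.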
deleted file mode 100644
index e69de29bb2d..00000000000
deleted file mode 100644
index e69de29bb2d..00000000000
deleted file mode 100644
index e69de29bb2d..00000000000
Delete empty config files
facebook/hhvm
a951c9f704e2a10678581bc1176e47388dcbc849
2020-06-08T22:57:42Z
--- a/test/expr/expressions.swift
+++ b/test/expr/expressions.swift
func r22913570() {
func swift22_deprecation_increment_decrement() {
  var i = 0
  var f = 1.0
-  var si = "foo".startIndex

  i++ // expected-warning {{'++' is deprecated: it will be removed in Swift 3}} {{4-6=+= 1}}
  --i // expected-warning {{'--' is deprecated: it will be removed in Swift 3}} {{3-5=}} {{6-6=-= 1}}
Drop unused variable
apple/swift
a12370bf950e1f71e14c13e645ddedb07f21d804
2016-03-18T22:15:06Z
--- a/src/net.cpp
+++ b/src/net.cpp
#include <miniupnpc/upnperrors.h>
 #endif

-#include <boost/filesystem.hpp>
-#include <boost/thread.hpp>

 #include <math.h>

--- a/src/netbase.cpp
+++ b/src/netbase.cpp
#include "util.h"
 #include "utilstrencodings.h"

-#ifdef HAVE_GETADDRINFO_A
-#include <netdb.h>
-#endif
 #include <atomic>

 #ifndef WIN32
-#if HAVE_INET_PTON
-#include <arpa/inet.h>
-#endif
 #include <fcntl.h>
 #endif

 #include <boost/algorithm/string/case_conv.hpp> // for to_lower()
 #include <boost/algorithm/string/predicate.hpp> // for startswith() and endswith()
-#include <boost/thread.hpp>

 #if !defined(HAVE_MSG_NOSIGNAL) && !defined(MSG_NOSIGNAL)
 #define MSG_NOSIGNAL 0
net: misc header cleanups
bitcoin/bitcoin
67ee4ec9015592c8447955356adfcbb1bf473e32
2017-01-03T22:56:21Z
--- a/python/mxnet/callback.py
+++ b/python/mxnet/callback.py
def _callback(param):


 class Speedometer(object):
-    """Calculate and log training speed periodically.
+    """Logs training speed and evaluation metrics periodically.

     Parameters
     ----------
     batch_size: int
-        batch_size of data.
+        Batch size of data.
     frequent: int
-        How many batches between calculations.
-        Defaults to calculating & logging every 50 batches.
+        Specifies how frequently training speed and evaluation metrics
+        must be logged. Default behavior is to log once every 50 batches.
     auto_reset: bool
-        Reset the metric after each log.
+        Reset the evaluation metrics after each log.
+
+    Example:
+    --------
+    >>> # Print training speed and evaluation metrics every ten batches. Batch size is one.
+    ...
+    >>> module.fit(iterator, num_epoch=n_epoch,
+    ...            batch_end_callback=mx.callback.Speedometer(1, 10))
+    Epoch[0] Batch [10] Speed: 1910.41 samples/sec Train-accuracy=0.200000
+    Epoch[0] Batch [20] Speed: 1764.83 samples/sec Train-accuracy=0.400000
+    Epoch[0] Batch [30] Speed: 1740.59 samples/sec Train-accuracy=0.500000
     """
     def __init__(self, batch_size, frequent=50, auto_reset=True):
         self.batch_size = batch_size
Update documentation for mx.callback.Speedometer. ()
apache/incubator-mxnet
fa704d7eebb879dc739446c1cc811dc4cd505ae9
2017-05-06T11:56:30Z
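The documented callback is a simple throttled throughput logger: every N batch-end events, report samples/second since the last report and optionally reset the measurement window. The same pattern in C++ (the class name and log format below are illustrative, not mxnet's API):

```cpp
// Throttled throughput logging at batch boundaries.
#include <chrono>
#include <cstdio>

class Speedometer {
public:
    Speedometer(int batch_size, int frequent = 50)
        : batch_size_(batch_size), frequent_(frequent) {}

    void onBatchEnd(int epoch, int batch) {
        if (batch == 0 || batch % frequent_ != 0)
            return;  // only report every `frequent_` batches
        auto now = std::chrono::steady_clock::now();
        double secs = std::chrono::duration<double>(now - last_).count();
        double samples_per_sec = frequent_ * batch_size_ / secs;
        std::printf("Epoch[%d] Batch [%d] Speed: %.2f samples/sec\n",
                    epoch, batch, samples_per_sec);
        last_ = now;  // like auto_reset=True: start a fresh measurement window
    }

private:
    int batch_size_, frequent_;
    std::chrono::steady_clock::time_point last_ = std::chrono::steady_clock::now();
};
```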
--- a/src/operator/contrib/batch_norm_relu.cc
+++ b/src/operator/contrib/batch_norm_relu.cc
static bool BatchNormWithReLUShape(const nnvm::NodeAttrs& attrs,
  CHECK_EQ(in_shape->size(), 5U) << "Input:[data, gamma, beta, MovingMean, MovingVar]";
  CHECK_EQ(out_shape->size(), 4U);
  const mxnet::TShape& dshape = in_shape->at(batchnormrelu::kData);
+ if (!mxnet::ndim_is_known(dshape)) {
+   return false;
+ }

  const size_t channelAxis = static_cast<size_t>(param.axis < 0
      ? static_cast<int>(dshape.ndim()) + param.axis
static bool BatchNormWithReLUShape(const nnvm::NodeAttrs& attrs,

  const int channelCount = dshape[channelAxis];

- if (!mxnet::ndim_is_known(dshape)) {
-   return false;
- }
-
  in_shape->at(batchnormrelu::kGamma) = mxnet::TShape(Shape1(channelCount));
  in_shape->at(batchnormrelu::kBeta) = mxnet::TShape(Shape1(channelCount));
  in_shape->at(batchnormrelu::kInMovingMean) = mxnet::TShape(Shape1(channelCount));  // kMovingMean
--- a/src/operator/nn/batch_norm.cc
+++ b/src/operator/nn/batch_norm.cc
static bool BatchNormShape(const nnvm::NodeAttrs& attrs,
  CHECK_EQ(in_shape->size(), 5U) << "Input:[data, gamma, beta, MovingMean, MovingVar]";
  CHECK_EQ(out_shape->size(), 3U);
  const mxnet::TShape& dshape = in_shape->at(batchnorm::kData);
+ if (!mxnet::ndim_is_known(dshape)) {
+   return false;
+ }

  const size_t channelAxis = static_cast<size_t>(param.axis < 0
      ? static_cast<int>(dshape.ndim()) + param.axis
static bool BatchNormShape(const nnvm::NodeAttrs& attrs,

  const index_t channelCount = dshape[channelAxis];

- if (!mxnet::ndim_is_known(dshape)) {
-   return false;
- }
-
  in_shape->at(batchnorm::kGamma) = mxnet::TShape(Shape1(channelCount));
  in_shape->at(batchnorm::kBeta) = mxnet::TShape(Shape1(channelCount));
  in_shape->at(batchnorm::kInMovingMean) = mxnet::TShape(Shape1(channelCount));  // kMovingMean
--- a/src/operator/nn/group_norm.cc
+++ b/src/operator/nn/group_norm.cc
static bool GroupNormShape(const nnvm::NodeAttrs& attrs,
  using namespace mshadow;
  CHECK_EQ(in_shape->size(), 3U) << "Input:[data, gamma, beta]";
  const mxnet::TShape& dshape = in_shape->at(groupnorm::kData);
- CHECK_GE(dshape.ndim(), 3U);
- const int num_groups = param.num_groups;
- CHECK_EQ(dshape[1] % num_groups, 0) << "# of channels must be divisible by # of groups";
-
  if (!mxnet::ndim_is_known(dshape)) {
    return false;
  }

+ CHECK_GE(dshape.ndim(), 3U);
+ const int num_groups = param.num_groups;
+ CHECK_EQ(dshape[1] % num_groups, 0) << "# of channels must be divisible by # of groups";
+
  in_shape->at(groupnorm::kGamma) = mxnet::TShape(Shape1(dshape[1]));
  in_shape->at(groupnorm::kBeta) = mxnet::TShape(Shape1(dshape[1]));

--- a/src/operator/nn/layer_norm.cc
+++ b/src/operator/nn/layer_norm.cc
static bool LayerNormShape(const nnvm::NodeAttrs& attrs,
  using namespace mshadow;
  CHECK_EQ(in_shape->size(), 3U) << "Input:[data, gamma, beta]";
  const mxnet::TShape& dshape = in_shape->at(layernorm::kData);
+ if (!mxnet::ndim_is_known(dshape)) {
+   return false;
+ }
+
  int axis = GetRealAxis(param.axis, dshape.ndim());
  CHECK(axis >= 0 && axis < dshape.ndim())
      << "Channel axis out of range: axis=" << param.axis;

  const index_t channelCount = dshape[axis];

- if (!mxnet::ndim_is_known(dshape)) {
-   return false;
- }
  SHAPE_ASSIGN_CHECK(*in_shape,
                     layernorm::kGamma,
                     mxnet::TShape(Shape1(channelCount)));
--- a/src/operator/nn/pooling.cc
+++ b/src/operator/nn/pooling.cc
static bool PoolingShape(const nnvm::NodeAttrs& attrs,
                         mxnet::ShapeVector* out_shape) {
  const PoolingParam& param = nnvm::get<PoolingParam>(attrs.parsed);
  CHECK_EQ(in_shape->size(), 1U);
+ const mxnet::TShape& dshape = (*in_shape)[0];
+ if (!mxnet::ndim_is_known(dshape)) {
+   return false;
+ }
+
  if (param.pool_type == pool_enum::kLpPooling) {
    CHECK(param.p_value.has_value());
  }
- const mxnet::TShape& dshape = (*in_shape)[0];
+
  if (param.pooling_convention == pool_enum::kSame) {
    CHECK_EQ(dshape.ndim(), 3U)
        << "Pooling: Input data should be 3D in (batch, channel, x)"
static bool PoolingShape(const nnvm::NodeAttrs& attrs,
        << "Pooling: Input data should be 3D in (batch, channel, x)"
        << " Or 4D in (batch, channel, y, x)"
        << " Or 5D in (batch, channel, d, y, x)";
- if (!mxnet::ndim_is_known(dshape)) return false;
+
  int layout = param.GetLayout(dshape.ndim());
  if (param.global_pool) {
    mxnet::TShape oshape = dshape;
--- a/src/operator/tensor/matrix_op-inl.h
+++ b/src/operator/tensor/matrix_op-inl.h
inline bool TransposeShape(const nnvm::NodeAttrs& attrs,
  CHECK_EQ(out_attrs->size(), 1U);
  mxnet::TShape& shp = (*in_attrs)[0];
  mxnet::TShape& out_shp = (*out_attrs)[0];
- CHECK_LE(shp.ndim(), 6) << "Transpose support at most 6 dimensions";
- if (shp.ndim() == -1 && out_shp.ndim() == -1)
+ if (!mxnet::ndim_is_known(shp) && !mxnet::ndim_is_known(out_shp))
    return false;  // none of the shapes is known
+ CHECK_LE(shp.ndim(), 6) << "Transpose support at most 6 dimensions";
  if (out_shp.ndim() >= 0 && shp.ndim() >= 0)
    CHECK_EQ(out_shp.ndim(), shp.ndim());
  mxnet::TShape get(std::max(shp.ndim(), out_shp.ndim()), -1);
inline bool ExpandDimShape(const nnvm::NodeAttrs& attrs,
  const ExpandDimParam& param = nnvm::get<ExpandDimParam>(attrs.parsed);
  CHECK_EQ(in_attrs->size(), 1U);
  CHECK_EQ(out_attrs->size(), 1U);
- if (!mxnet::ndim_is_known(in_attrs->at(0)) && !mxnet::ndim_is_known(out_attrs->at(0))) {
+ mxnet::TShape& ishape = (*in_attrs)[0];
+ mxnet::TShape& oshape = (*out_attrs)[0];
+ if (!mxnet::ndim_is_known(ishape) && !mxnet::ndim_is_known(oshape)) {
    return false;
  }

- mxnet::TShape& ishape = (*in_attrs)[0];
- mxnet::TShape& oshape = (*out_attrs)[0];
  int indim = ishape.ndim();
  bool unknown_ishape = false;
  if (-1 == indim) {
inline bool SliceLikeShape(const nnvm::NodeAttrs& attrs,
  CHECK_EQ(out_attrs->size(), 1U);
  mxnet::TShape& ishape = (*in_attrs)[0];
  mxnet::TShape& from_shape = (*in_attrs)[1];
+ if (!mxnet::ndim_is_known(ishape) || !mxnet::ndim_is_known(from_shape)) {
+   return false;
+ }
  if (param.axes.ndim() == 0) {
    CHECK_EQ(ishape.ndim(), from_shape.ndim())
        << "By default slice_axis performs slice on all axes, but ndim mismatch "
inline bool RepeatOpShape(const nnvm::NodeAttrs& attrs,
  CHECK_EQ(in_attrs->size(), 1U);
  CHECK_EQ(out_attrs->size(), 1U);
  const mxnet::TShape& ishape = (*in_attrs)[0];
+ if (!mxnet::ndim_is_known(ishape)) {
+   return false;
+ }
  int repeats = 0;
  dmlc::optional<int> axisOpt;
  GetRepeatParams(param, ishape, &repeats, &axisOpt);
inline bool DepthToSpaceOpShape(const nnvm::NodeAttrs& attrs,
  mxnet::TShape expected_out(4, -1);

  mxnet::TShape& in_shape = in_attrs->at(0);
+ if (!mxnet::ndim_is_known(in_shape)) {
+   return false;
+ }
  int block = param.block_size;
  CHECK_NE(block, 0) << "block_size must be a positive integer value";
  CHECK_NE(in_shape[1], 0) << "Depth dimension:1 cannot be 0";
inline bool SpaceToDepthOpShape(const nnvm::NodeAttrs& attrs,
  mxnet::TShape expected_out(in_attrs->at(0).ndim(), -1);

  mxnet::TShape& in_shape = in_attrs->at(0);
+ if (!mxnet::ndim_is_known(in_shape)) {
+   return false;
+ }
  int block = param.block_size;
  CHECK_NE(block, 0) << "block_size must be a positive integer value";
  CHECK_NE(in_shape[0], 0)
Fix FInferShape for some ops to support partial type inference ()
apache/incubator-mxnet
d9fc74ecb6efa2c32501ba343b6e9d0c0ee43f57
2020-05-22T17:21:04Z
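Every hunk in the mxnet commit applies the same ordering rule: return false ("not inferable yet") before reading any dimension of an unknown shape, instead of CHECK-ing dimensions first and crashing on partially known inputs. A distilled sketch of that contract, with a simplified Shape type standing in for mxnet::TShape:

```cpp
// Shape-inference functions return false to defer, never assert on unknowns.
#include <vector>

struct Shape { std::vector<long> dims; };  // simplified stand-in for TShape
inline bool ndim_is_known(const Shape& s) { return !s.dims.empty(); }

// Returns true when the output shape was inferred; false means "retry once
// more information has propagated through the graph".
bool inferShape(const Shape& in, Shape* out)
{
    if (!ndim_is_known(in))
        return false;  // defer: do NOT touch in.dims below this point
    *out = in;         // a real operator would compute output dims here
    return true;
}
```

The framework calls such functions repeatedly as shapes flow through the graph, so the early false is not a failure, just a request to be called again later.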
--- a/include/LightGBM/boosting.h
+++ b/include/LightGBM/boosting.h
class Boosting {
  /*!
  * \brief Prediction for one record, not sigmoid transform
  * \param feature_values Feature value on this record
- * \param num_used_model Number of used model
  * \return Prediction result for this record
  */
- virtual double PredictRaw(const double* feature_values,
-   int num_used_model) const = 0;
+ virtual double PredictRaw(const double* feature_values) const = 0;

  /*!
  * \brief Prediction for one record, sigmoid transformation will be used if needed
  * \param feature_values Feature value on this record
- * \param num_used_model Number of used model
  * \return Prediction result for this record
  */
- virtual double Predict(const double* feature_values,
-   int num_used_model) const = 0;
+ virtual double Predict(const double* feature_values) const = 0;

  /*!
  * \brief Predtion for one record with leaf index
  * \param feature_values Feature value on this record
- * \param num_used_model Number of used model
  * \return Predicted leaf index for this record
  */
  virtual std::vector<int> PredictLeafIndex(
-   const double* feature_values,
-   int num_used_model) const = 0;
+   const double* feature_values) const = 0;

  /*!
  * \brief Predtion for multiclass classification
  * \param feature_values Feature value on this record
  * \return Prediction result, num_class numbers per line
  */
- virtual std::vector<double> PredictMulticlass(const double* value, int num_used_model) const = 0;
+ virtual std::vector<double> PredictMulticlass(const double* value) const = 0;

  /*!
  * \brief save model to file
class Boosting {
  * \return Number of classes
  */
  virtual int NumberOfClass() const = 0;
+
+ /*!
+ * \brief Set number of used model for prediction
+ */
+ virtual void SetNumUsedModel(int num_used_model) = 0;

  /*!
  * \brief Get Type name of this boosting object
--- a/src/application/application.cpp
+++ b/src/application/application.cpp
void Application::LoadData() {
  Predictor* predictor = nullptr;
  // need to continue train
  if (boosting_->NumberOfSubModels() > 0) {
-   predictor = new Predictor(boosting_, config_.io_config.is_sigmoid, config_.predict_leaf_index, -1);
+   predictor = new Predictor(boosting_, config_.io_config.is_sigmoid, config_.predict_leaf_index);
    if (config_.io_config.num_class == 1) {
      predict_fun =
        [&predictor](const std::vector<std::pair<int, double>>& features) {
void Application::Train() {


void Application::Predict() {
+ boosting_->SetNumUsedModel(config_.io_config.num_model_predict);
  // create predictor
  Predictor predictor(boosting_, config_.io_config.is_sigmoid,
-   config_.predict_leaf_index, config_.io_config.num_model_predict);
+   config_.predict_leaf_index);
  predictor.Predict(config_.io_config.data_filename.c_str(),
    config_.io_config.output_result.c_str(), config_.io_config.has_header);
  Log::Info("Finish predict.");
--- a/src/application/predictor.hpp
+++ b/src/application/predictor.hpp
class Predictor {
  * \param is_sigmoid True if need to predict result with sigmoid transform (if needed, like binary classification)
  * \param predict_leaf_index True if output leaf index instead of prediction score
  */
- Predictor(const Boosting* boosting, bool is_simgoid, bool is_predict_leaf_index, int num_used_model)
-   : is_simgoid_(is_simgoid), is_predict_leaf_index_(is_predict_leaf_index),
-     num_used_model_(num_used_model) {
+ Predictor(const Boosting* boosting, bool is_simgoid, bool is_predict_leaf_index)
+   : is_simgoid_(is_simgoid), is_predict_leaf_index_(is_predict_leaf_index) {
    boosting_ = boosting;
    num_features_ = boosting_->MaxFeatureIdx() + 1;
    num_class_ = boosting_->NumberOfClass();
class Predictor {
  std::vector<double> PredictRawOneLine(const std::vector<std::pair<int, double>>& features) {
    const int tid = PutFeatureValuesToBuffer(features);
    // get result without sigmoid transformation
-   return std::vector<double>(1, boosting_->PredictRaw(features_[tid], num_used_model_));
+   return std::vector<double>(1, boosting_->PredictRaw(features_[tid]));
  }

  /*!
class Predictor {
  std::vector<int> PredictLeafIndexOneLine(const std::vector<std::pair<int, double>>& features) {
    const int tid = PutFeatureValuesToBuffer(features);
    // get result for leaf index
-   return boosting_->PredictLeafIndex(features_[tid], num_used_model_);
+   return boosting_->PredictLeafIndex(features_[tid]);
  }

  /*!
class Predictor {
  std::vector<double> PredictOneLine(const std::vector<std::pair<int, double>>& features) {
    const int tid = PutFeatureValuesToBuffer(features);
    // get result with sigmoid transform if needed
-   return std::vector<double>(1, boosting_->Predict(features_[tid], num_used_model_));
+   return std::vector<double>(1, boosting_->Predict(features_[tid]));
  }

  /*!
class Predictor {
  std::vector<double> PredictMulticlassOneLine(const std::vector<std::pair<int, double>>& features) {
    const int tid = PutFeatureValuesToBuffer(features);
    // get result with sigmoid transform if needed
-   return boosting_->PredictMulticlass(features_[tid], num_used_model_);
+   return boosting_->PredictMulticlass(features_[tid]);
  }

  /*!
class Predictor {
  int num_threads_;
  /*! \brief True if output leaf index instead of prediction score */
  bool is_predict_leaf_index_;
- /*! \brief Number of used model */
- int num_used_model_;
};

}  // namespace LightGBM
--- a/src/boosting/gbdt.cpp
+++ b/src/boosting/gbdt.cpp
namespace LightGBM {
GBDT::GBDT()
  : train_score_updater_(nullptr),
    gradients_(nullptr), hessians_(nullptr),
-   out_of_bag_data_indices_(nullptr), bag_data_indices_(nullptr) {
+   out_of_bag_data_indices_(nullptr), bag_data_indices_(nullptr),
+   saved_model_size_(-1), num_used_model_(0) {
}

GBDT::~GBDT() {
void GBDT::Init(const BoostingConfig* config, const Dataset* train_data, const O
  const std::vector<const Metric*>& training_metrics) {
  gbdt_config_ = dynamic_cast<const GBDTConfig*>(config);
  iter_ = 0;
+ saved_model_size_ = -1;
  max_feature_idx_ = 0;
  early_stopping_round_ = gbdt_config_->early_stopping_round;
  train_data_ = train_data;
void GBDT::ModelsFromString(const std::string& model_str) {
  }
  }
  Log::Info("%d models has been loaded\n", models_.size());
+ num_used_model_ = static_cast<int>(models_.size()) / num_class_;
}

std::string GBDT::FeatureImportance() const {
std::string GBDT::FeatureImportance() const {
  return str_buf.str();
}

-double GBDT::PredictRaw(const double* value, int num_used_model) const {
- if (num_used_model < 0) {
-   num_used_model = static_cast<int>(models_.size());
- }
+double GBDT::PredictRaw(const double* value) const {
  double ret = 0.0f;
- for (int i = 0; i < num_used_model; ++i) {
+ for (int i = 0; i < num_used_model_; ++i) {
    ret += models_[i]->Predict(value);
  }
  return ret;
}

-double GBDT::Predict(const double* value, int num_used_model) const {
- if (num_used_model < 0) {
-   num_used_model = static_cast<int>(models_.size());
- }
+double GBDT::Predict(const double* value) const {
  double ret = 0.0f;
- for (int i = 0; i < num_used_model; ++i) {
+ for (int i = 0; i < num_used_model_; ++i) {
    ret += models_[i]->Predict(value);
  }
  // if need sigmoid transform
double GBDT::Predict(const double* value, int num_used_model) const {
  return ret;
}

-std::vector<double> GBDT::PredictMulticlass(const double* value, int num_used_model) const {
- if (num_used_model < 0) {
-   num_used_model = static_cast<int>(models_.size()) / num_class_;
- }
+std::vector<double> GBDT::PredictMulticlass(const double* value) const {
  std::vector<double> ret(num_class_, 0.0f);
- for (int i = 0; i < num_used_model; ++i) {
+ for (int i = 0; i < num_used_model_; ++i) {
    for (int j = 0; j < num_class_; ++j) {
      ret[j] += models_[i * num_class_ + j]->Predict(value);
    }
std::vector<double> GBDT::PredictMulticlass(const double* value, int num_used_mo
  return ret;
}

-std::vector<int> GBDT::PredictLeafIndex(const double* value, int num_used_model) const {
- if (num_used_model < 0) {
-   num_used_model = static_cast<int>(models_.size());
- }
+std::vector<int> GBDT::PredictLeafIndex(const double* value) const {
  std::vector<int> ret;
- for (int i = 0; i < num_used_model; ++i) {
+ for (int i = 0; i < num_used_model_; ++i) {
    ret.push_back(models_[i]->PredictLeafIndex(value));
  }
  return ret;
--- a/src/boosting/gbdt.h
+++ b/src/boosting/gbdt.h
class GBDT : public Boosting {
  /*!
  * \brief Predtion for one record without sigmoid transformation
  * \param feature_values Feature value on this record
- * \param num_used_model Number of used model
  * \return Prediction result for this record
  */
- double PredictRaw(const double* feature_values, int num_used_model) const override;
+ double PredictRaw(const double* feature_values) const override;

  /*!
  * \brief Predtion for one record with sigmoid transformation if enabled
  * \param feature_values Feature value on this record
- * \param num_used_model Number of used model
  * \return Prediction result for this record
  */
- double Predict(const double* feature_values, int num_used_model) const override;
+ double Predict(const double* feature_values) const override;

  /*!
  * \brief Predtion for multiclass classification
  * \param feature_values Feature value on this record
  * \return Prediction result, num_class numbers per line
  */
- std::vector<double> PredictMulticlass(const double* value, int num_used_model) const override;
+ std::vector<double> PredictMulticlass(const double* value) const override;

  /*!
  * \brief Predtion for one record with leaf index
  * \param feature_values Feature value on this record
- * \param num_used_model Number of used model
  * \return Predicted leaf index for this record
  */
- std::vector<int> PredictLeafIndex(const double* value, int num_used_model) const override;
+ std::vector<int> PredictLeafIndex(const double* value) const override;

  /*!
  * \brief Serialize models by string
class GBDT : public Boosting {
  * \return Number of classes
  */
  inline int NumberOfClass() const override { return num_class_; }
+
+ /*!
+ * \brief Set number of used model for prediction
+ */
+ inline void SetNumUsedModel(int num_used_model) {
+   if (num_used_model >= 0) {
+     num_used_model_ = static_cast<int>(num_used_model / num_class_);
+   }
+ }
+

  /*!
  * \brief Get Type name of this boosting object
class GBDT : public Boosting {
  /*! \brief Index of label column */
  data_size_t label_idx_;
  /*! \brief Saved number of models */
- int saved_model_size_ = -1;
+ int saved_model_size_;
  /*! \brief File to write models */
  std::ofstream model_output_file_;
+ /*! \brief number of used model */
+ int num_used_model_;
};

}  // namespace LightGBM
move num_used_model out of predict function
microsoft/LightGBM
4e29145993371cba4b32c4d61ea36e721a0ccdb1
2016-11-03T06:14:37Z
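The LightGBM refactor replaces a num_used_model argument threaded through every Predict* call (each re-deriving the "negative means all models" default) with one piece of object state set once before prediction. A minimal sketch of the before/after shape of that API, with toy "models" standing in for trees:

```cpp
// State set once via a setter, instead of a parameter repeated per call.
#include <utility>
#include <vector>

class Booster {
public:
    explicit Booster(std::vector<double> models)
        : models_(std::move(models)), num_used_(static_cast<int>(models_.size())) {}

    // After: configure once; out-of-range or negative keeps all models,
    // so the defaulting logic lives in exactly one place.
    void SetNumUsedModel(int n) {
        num_used_ = (n >= 0 && n <= static_cast<int>(models_.size()))
                        ? n : static_cast<int>(models_.size());
    }

    // Prediction paths no longer carry or validate the count themselves.
    double Predict(double x) const {
        double sum = 0.0;
        for (int i = 0; i < num_used_; ++i) sum += models_[i] * x;  // toy "trees"
        return sum;
    }

private:
    std::vector<double> models_;
    int num_used_;
};
```

The trade-off is that prediction becomes stateful: callers must set the count before predicting, which is fine here because prediction is configured once per run rather than per record.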
--- a/code/unclassified/src/maximum_subarray_sum/maxsubarraysum.cpp
+++ b/code/unclassified/src/maximum_subarray_sum/maxsubarraysum.cpp
int maxSubarraySum(const std::vector<int>& arr)
    int currMax = 0;
    for (int element : arr)
    {
-        currMax = currMax + element;
+        currMax += element;
        if (maxSumSoFar < currMax)
            maxSumSoFar = currMax;
        if (currMax < 0) // if the current maximum sum becomes less than 0 then we make it 0
Update maxsubarraysum.cpp
OpenGenus/cosmos
98375343c521799d7794ac57ac22cb92a48a9054
2020-03-22T13:03:42Z
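The file touched above implements the reset-at-negative variant of Kadane's algorithm. For reference, here is a self-contained version of the whole function as that variant implies it; O(n) time, O(1) extra space. Note the behavioral consequence: with the running sum clamped at zero, an all-negative input yields 0 (the empty subarray), whereas the classic variant seeds the answer with the first element instead.

```cpp
// Kadane's algorithm, reset-at-negative variant.
#include <vector>

int maxSubarraySum(const std::vector<int>& arr)
{
    int maxSumSoFar = 0;  // best sum over any subarray seen so far
    int currMax = 0;      // best sum of a subarray ending at the current element
    for (int element : arr) {
        currMax += element;
        if (maxSumSoFar < currMax)
            maxSumSoFar = currMax;
        if (currMax < 0)
            currMax = 0;  // a negative running sum can never help; restart here
    }
    return maxSumSoFar;
}
```

The key invariant is that currMax always equals the maximum sum of a subarray ending at the current position, because extending a negative-sum prefix is never better than starting fresh.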
new file mode 100644
index 00000000000..7e808ac4da3
--- /dev/null
+++ b/hphp/hack/test/typecheck/perf/HH_FLAGS

+--timeout 1
+--all-errors
new file mode 100644
index 00000000000..0fa0c479036
--- /dev/null
+++ b/hphp/hack/test/typecheck/perf/vec_update.php

+<?hh // strict
+// Copyright 2004-present Facebook. All Rights Reserved.
+
+function test2(int $id): void {
+  $res = Map {};
+  $res[$id] = vec[1];
+  $res[$id][0] += 0; // +3 tyvars
+  $res[$id][1] += 0; // +6 tyvars
+  $res[$id][2] += 0; // +9 tyvars...
+  $res[$id][3] += 0;
+  $res[$id][4] += 0;
+  $res[$id][5] += 0;
+  $res[$id][6] += 0;
+  $res[$id][7] += 0;
+  $res[$id][8] += 0;
+  $res[$id][9] += 0;
+  $res[$id][10] += 0;
+  $res[$id][11] += 0;
+  $res[$id][12] += 0;
+  $res[$id][13] += 0;
+  $res[$id][14] += 0;
+  $res[$id][15] += 0;
+  $res[$id][16] += 0;
+  $res[$id][17] += 0;
+  $res[$id][18] += 0;
+  $res[$id][19] += 0;
+  $res[$id][20] += 0;
+  $res[$id][21] += 0;
+  $res[$id][22] += 0;
+  // hh_show_env();
+}
new file mode 100644
index 00000000000..4269126fceb
--- /dev/null
+++ b/hphp/hack/test/typecheck/perf/vec_update.php.exp
@@ -0,0 +1 @@
+No errors
New inference: add performance regression test
facebook/hhvm
b798fe1ea5fd80f700d5f656570ae70c219e8a7c
2019-05-02T09:05:39Z
--- a/src/mongo/db/SConscript
+++ b/src/mongo/db/SConscript
env.Library(
        "$BUILD_DIR/mongo/db/logical_session_cache",
        "$BUILD_DIR/mongo/db/logical_session_id",
        "$BUILD_DIR/mongo/util/background_job",
-       "query/query",
+       "cursor_server_params",
        "background",
+       "query/query",
    ],
)

env.Library(
        'logical_session_id',
        'sessions_collection',
    ],
+   LIBDEPS_PRIVATE=[
+       '$BUILD_DIR/mongo/s/query/cluster_query',
+   ],
)

env.Library(
    ],
)

+env.Library(
+   target='cursor_server_params',
+   source=[
+       'cursor_server_params.cpp',
+   ],
+   LIBDEPS=[
+       '$BUILD_DIR/mongo/db/server_parameters',
+   ],
+)
+
env.Library(
    target='ttl_collection_cache',
    source=[
--- a/src/mongo/db/clientcursor.cpp
+++ b/src/mongo/db/clientcursor.cpp
 #include "mongo/db/commands.h"
 #include "mongo/db/commands/server_status.h"
 #include "mongo/db/commands/server_status_metric.h"
+#include "mongo/db/cursor_server_params.h"
 #include "mongo/db/jsobj.h"
 #include "mongo/db/repl/repl_client_info.h"
 #include "mongo/db/repl/replication_coordinator_global.h"
-#include "mongo/db/server_parameters.h"
 #include "mongo/util/background.h"
 #include "mongo/util/concurrency/idle_thread_block.h"
 #include "mongo/util/exit.h"
static ServerStatusMetricField<Counter64> dCursorStatsOpenNoTimeout("cursor.open
static ServerStatusMetricField<Counter64> dCursorStatusTimedout("cursor.timedOut",
    &cursorStatsTimedOut);

-MONGO_EXPORT_SERVER_PARAMETER(clientCursorMonitorFrequencySecs, int, 4);
-
long long ClientCursor::totalOpen() {
    return cursorStatsOpen.get();
}
class ClientCursorMonitor : public BackgroundJob {
                CursorManager::timeoutCursorsGlobal(opCtx.get(), now));
        }
        MONGO_IDLE_THREAD_BLOCK;
-       sleepsecs(clientCursorMonitorFrequencySecs.load());
+       sleepsecs(getClientCursorMonitorFrequencySecs());
    }
}
};
--- a/src/mongo/db/cursor_manager.cpp
+++ b/src/mongo/db/cursor_manager.cpp
 #include "mongo/db/catalog/database.h"
 #include "mongo/db/catalog/database_holder.h"
 #include "mongo/db/client.h"
+#include "mongo/db/cursor_server_params.h"
 #include "mongo/db/db_raii.h"
 #include "mongo/db/kill_sessions_common.h"
 #include "mongo/db/logical_session_cache.h"

namespace mongo {
using std::vector;

-constexpr Minutes CursorManager::kDefaultCursorTimeoutMinutes;
-
-MONGO_EXPORT_SERVER_PARAMETER(
-   cursorTimeoutMillis,
-   int,
-   durationCount<Milliseconds>(CursorManager::kDefaultCursorTimeoutMinutes));
-
constexpr int CursorManager::kNumPartitions;

namespace {
bool CursorManager::cursorShouldTimeout_inlock(const ClientCursor* cursor, Date_
    if (cursor->isNoTimeout() || cursor->_isPinned) {
        return false;
    }
-   return (now - cursor->_lastUseDate) >= Milliseconds(cursorTimeoutMillis.load());
+   return (now - cursor->_lastUseDate) >= Milliseconds(getCursorTimeoutMillis());
}

std::size_t CursorManager::timeoutCursors(OperationContext* opCtx, Date_t now) {
--- a/src/mongo/db/cursor_manager.h
+++ b/src/mongo/db/cursor_manager.h
class PlanExecutor;
 */
class CursorManager {
public:
-   // The number of minutes a cursor is allowed to be idle before timing out.
-   static constexpr Minutes kDefaultCursorTimeoutMinutes{10};
    using RegistrationToken = Partitioned<unordered_set<PlanExecutor*>>::PartitionId;

    /**
new file mode 100644
index 000000000000..39c3038c0b85
--- /dev/null
+++ b/src/mongo/db/cursor_server_params.cpp

+/**
+ * Copyright (C) 2017 MongoDB Inc.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU Affero General Public License for more details.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * As a special exception, the copyright holders give permission to link the
+ * code of portions of this program with the OpenSSL library under certain
+ * conditions as described in each individual source file and distribute
+ * linked combinations including the program with the OpenSSL library. You
+ * must comply with the GNU Affero General Public License in all respects
+ * for all of the code used other than as permitted herein. If you modify
+ * file(s) with this exception, you may extend this exception to your
+ * version of the file(s), but you are not obligated to do so. If you do not
+ * wish to do so, delete this exception statement from your version. If you
+ * delete this exception statement from all source files in the program,
+ * then also delete it in the license file.
+ */
+
+#include "mongo/platform/basic.h"
+
+#include "mongo/db/cursor_server_params.h"
+
+#include "mongo/db/server_parameters.h"
+
+namespace mongo {
+namespace {
+
+static constexpr Minutes kDefaultCursorTimeoutMinutes{10};
+
+MONGO_EXPORT_SERVER_PARAMETER(clientCursorMonitorFrequencySecs, int, 4);
+MONGO_EXPORT_SERVER_PARAMETER(cursorTimeoutMillis,
+                              long long,
+                              durationCount<Milliseconds>(kDefaultCursorTimeoutMinutes));
+
+}  // namespace
+
+int getClientCursorMonitorFrequencySecs() {
+    return clientCursorMonitorFrequencySecs.load();
+}
+
+long long getCursorTimeoutMillis() {
+    return cursorTimeoutMillis.load();
+}
+
+Milliseconds getDefaultCursorTimeoutMillis() {
+    return kDefaultCursorTimeoutMinutes;
+}
+
+}  // namespace mongo
new file mode 100644
index 000000000000..eadc7f6d0b10
--- /dev/null
+++ b/src/mongo/db/cursor_server_params.h

+/**
+ * Copyright (C) 2017 MongoDB Inc.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU Affero General Public License for more details.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * As a special exception, the copyright holders give permission to link the
+ * code of portions of this program with the OpenSSL library under certain
+ * conditions as described in each individual source file and distribute
+ * linked combinations including the program with the OpenSSL library. You
+ * must comply with the GNU Affero General Public License in all respects
+ * for all of the code used other than as permitted herein. If you modify
+ * file(s) with this exception, you may extend this exception to your
+ * version of the file(s), but you are not obligated to do so. If you do not
+ * wish to do so, delete this exception statement from your version. If you
+ * delete this exception statement from all source files in the program,
+ * then also delete it in the license file.
+ */
+
+#pragma once
+
+#include "mongo/util/duration.h"
+
+namespace mongo {
+
+int getClientCursorMonitorFrequencySecs();
+
+// Period of time after which mortal cursors are killed for inactivity. Configurable with server
+// parameter "cursorTimeoutMillis".
+long long getCursorTimeoutMillis();
+
+Milliseconds getDefaultCursorTimeoutMillis();
+
+}  // namespace mongo
--- a/src/mongo/db/logical_session_id.idl
+++ b/src/mongo/db/logical_session_id.idl
structs:
                          operation executes."
            type: TxnNumber
            optional: true
+
+   SessionsCollectionFetchResultIndividualResult:
+       description: "Individual result"
+       strict: true
+       fields:
+           _id: LogicalSessionId
+
+   SessionsCollectionFetchResultCursor:
+       description: "Cursor object"
+       strict: false
+       fields:
+           firstBatch: array<SessionsCollectionFetchResultIndividualResult>
+
+   SessionsCollectionFetchResult:
+       description: "Parser for pulling out the fetch results from SessionsCollection::fetch"
+       strict: false
+       fields:
+           cursor: SessionsCollectionFetchResultCursor
+
+   SessionsCollectionFetchRequestFilterId:
+       description: "Id"
+       strict: true
+       fields:
+           $in:
+               type: array<LogicalSessionId>
+               cpp_name: "in"
+
+   SessionsCollectionFetchRequestFilter:
+       description: "filter"
+       strict: true
+       fields:
+           _id: SessionsCollectionFetchRequestFilterId
+
+   SessionsCollectionFetchRequestProjection:
+       description: "projection"
+       strict: true
+       fields:
+           _id: int
+
+   SessionsCollectionFetchRequest:
+       description: "Parser for forming the fetch request for SessionsCollection::fetch"
+       strict: true
+       fields:
+           find: namespacestring
+           filter: SessionsCollectionFetchRequestFilter
+           projection: SessionsCollectionFetchRequestProjection
+           batchSize: int
+           singleBatch: bool
+           allowPartialResults: bool
+           limit: int
--- a/src/mongo/db/sessions_collection.cpp
+++ b/src/mongo/db/sessions_collection.cpp
SessionsCollection::SendBatchFn SessionsCollection::makeSendFnForCommand(DBClien
    return send;
}

+SessionsCollection::FindBatchFn SessionsCollection::makeFindFnForCommand(DBClientBase* client) {
+   auto send = [client](BSONObj cmd) -> StatusWith<BSONObj> {
+       BSONObj res;
+       if (!client->runCommand(SessionsCollection::kSessionsDb.toString(), cmd, res)) {
+           return getStatusFromCommandResult(res);
+       }
+
+       return res;
+   };
+
+   return send;
+}
+
Status SessionsCollection::doRefresh(const LogicalSessionRecordSet& sessions,
                                     Date_t refreshTime,
                                     SendBatchFn send) {
Status SessionsCollection::doRemoveExternal(const LogicalSessionIdSet& sessions,
    return Status::OK();
}

+StatusWith<LogicalSessionIdSet> SessionsCollection::doFetch(const LogicalSessionIdSet& sessions,
+                                                            FindBatchFn send) {
+   auto makeT = [] { return std::vector<LogicalSessionId>{}; };
+
+   auto add = [](std::vector<LogicalSessionId>& batch, const LogicalSessionId& record) {
+       batch.push_back(record);
+   };
+
+   LogicalSessionIdSet removed = sessions;
+
+   auto wrappedSend = [&](BSONObj batch) {
+       auto swBatchResult = send(batch);
+
+       if (!swBatchResult.isOK()) {
+           return swBatchResult.getStatus();
+       } else {
+           auto result = SessionsCollectionFetchResult::parse("SessionsCollectionFetchResult"_sd,
+                                                              swBatchResult.getValue());
+
+           for (const auto& lsid : result.getCursor().getFirstBatch()) {
+               removed.erase(lsid.get_id());
+           }
+
+           return Status::OK();
+       }
+   };
+
+   auto sendLocal = [&](std::vector<LogicalSessionId>& batch) {
+       SessionsCollectionFetchRequest request;
+
+       request.setFind(NamespaceString{SessionsCollection::kSessionsCollection});
+       request.setFilter({});
+       request.getFilter().set_id({});
+       request.getFilter().get_id().setIn(batch);
+
+       request.setProjection({});
+       request.getProjection().set_id(1);
+       request.setBatchSize(batch.size());
+       request.setLimit(batch.size());
+       request.setAllowPartialResults(true);
+       request.setSingleBatch(true);
+
+       return wrappedSend(request.toBSON());
+   };
+
+   auto status = runBulkGeneric(makeT, add, sendLocal, sessions);
+
+   if (!status.isOK()) {
+       return status;
+   }
+
+   return removed;
+}
+
}  // namespace mongo
--- a/src/mongo/db/sessions_collection.h
+++ b/src/mongo/db/sessions_collection.h
class SessionsCollection {
    */
    virtual Status removeRecords(OperationContext* opCtx, const LogicalSessionIdSet& sessions) = 0;

+   /**
+    * Checks a set of lsids and returns the set that no longer exists
+    *
+    * Returns an error if the fetch cannot occur, for example from a network error.
+    */
+   virtual StatusWith<LogicalSessionIdSet> findRemovedSessions(
+       OperationContext* opCtx, const LogicalSessionIdSet& sessions) = 0;
+
protected:
    /**
     * Makes a send function for the given client.
class SessionsCollection {
    using SendBatchFn = stdx::function<Status(BSONObj batch)>;
    SendBatchFn makeSendFnForCommand(DBClientBase* client);
    SendBatchFn makeSendFnForBatchWrite(DBClientBase* client);
+   using FindBatchFn = stdx::function<StatusWith<BSONObj>(BSONObj batch)>;
+   FindBatchFn makeFindFnForCommand(DBClientBase* client);

    /**
     * Formats and sends batches of refreshes for the given set of sessions.
class SessionsCollection {
    */
    Status doRemove(const LogicalSessionIdSet& sessions, SendBatchFn send);
    Status doRemoveExternal(const LogicalSessionIdSet& sessions, SendBatchFn send);
+
+   /**
+    * Formats and sends batches of fetches for the given set of sessions.
+    */
+   StatusWith<LogicalSessionIdSet> doFetch(const LogicalSessionIdSet& sessions, FindBatchFn send);
};

}  // namespace mongo
--- a/src/mongo/db/sessions_collection_mock.h
+++ b/src/mongo/db/sessions_collection_mock.h
class MockSessionsCollection : public SessionsCollection {
        return _impl->removeRecords(sessions);
    }

+   StatusWith<LogicalSessionIdSet> findRemovedSessions(
+       OperationContext* opCtx, const LogicalSessionIdSet& sessions) override {
+       return LogicalSessionIdSet{};
+   }
+
private:
    std::shared_ptr<MockSessionsCollectionImpl> _impl;
};
--- a/src/mongo/db/sessions_collection_rs.cpp
+++ b/src/mongo/db/sessions_collection_rs.cpp
cpp <nl> Status makePrimaryConnection ( OperationContext * opCtx , boost : : optional < ScopedDbCo <nl> } <nl> <nl> template < typename Callback > <nl> - Status runIfStandaloneOrPrimary ( OperationContext * opCtx , Callback callback ) { <nl> + auto runIfStandaloneOrPrimary ( OperationContext * opCtx , Callback callback ) <nl> + - > boost : : optional < decltype ( std : : declval < Callback > ( ) ( ) ) > { <nl> Lock : : DBLock lk ( opCtx , SessionsCollection : : kSessionsDb , MODE_IX ) ; <nl> Lock : : CollectionLock lock ( opCtx - > lockState ( ) , SessionsCollection : : kSessionsFullNS , MODE_IX ) ; <nl> <nl> Status runIfStandaloneOrPrimary ( OperationContext * opCtx , Callback callback ) { <nl> return callback ( ) ; <nl> } <nl> <nl> - return { ErrorCodes : : NotMaster , " Cannot perform a local write " } ; <nl> + return boost : : none ; <nl> } <nl> <nl> - } / / namespace <nl> - <nl> - Status SessionsCollectionRS : : refreshSessions ( OperationContext * opCtx , <nl> - const LogicalSessionRecordSet & sessions , <nl> - Date_t refreshTime ) { <nl> - bool ran = false ; <nl> - <nl> - / / If we are the primary , write directly to ourself . <nl> - auto status = runIfStandaloneOrPrimary ( opCtx , [ & ] { <nl> - ran = true ; <nl> - DBDirectClient client ( opCtx ) ; <nl> - return doRefresh ( sessions , refreshTime , makeSendFnForBatchWrite ( & client ) ) ; <nl> - } ) ; <nl> - <nl> - if ( ran ) { <nl> - return status ; <nl> - } <nl> - <nl> - / / If we are not writeable , then send refreshSessions cmd to the primary . <nl> + template < typename Callback > <nl> + auto sendToPrimary ( OperationContext * opCtx , Callback callback ) <nl> + - > decltype ( std : : declval < Callback > ( ) ( static_cast < DBClientBase * > ( nullptr ) ) ) { <nl> boost : : optional < ScopedDbConnection > conn ; <nl> auto res = makePrimaryConnection ( opCtx , & conn ) ; <nl> if ( ! res . isOK ( ) ) { <nl> return res ; <nl> } <nl> <nl> - return doRefreshExternal ( sessions , refreshTime , makeSendFnForCommand ( conn - > get ( ) ) ) ; <nl> + return callback ( conn - > get ( ) ) ; <nl> } <nl> <nl> - Status SessionsCollectionRS : : removeRecords ( OperationContext * opCtx , <nl> - const LogicalSessionIdSet & sessions ) { <nl> - bool ran = false ; <nl> - <nl> + template < typename LocalCallback , typename RemoteCallback > <nl> + auto dispatch ( OperationContext * opCtx , LocalCallback localCallback , RemoteCallback remoteCallback ) <nl> + - > decltype ( std : : declval < RemoteCallback > ( ) ( static_cast < DBClientBase * > ( nullptr ) ) ) { <nl> / / If we are the primary , write directly to ourself . <nl> - auto status = runIfStandaloneOrPrimary ( opCtx , [ & ] { <nl> - ran = true ; <nl> - DBDirectClient client ( opCtx ) ; <nl> - return doRemove ( sessions , makeSendFnForBatchWrite ( & client ) ) ; <nl> - } ) ; <nl> - <nl> - if ( ran ) { <nl> - return status ; <nl> - } <nl> + auto result = runIfStandaloneOrPrimary ( opCtx , [ & ] { return localCallback ( ) ; } ) ; <nl> <nl> - / / If we are not writeable , then send endSessions cmd to the primary <nl> - boost : : optional < ScopedDbConnection > conn ; <nl> - auto res = makePrimaryConnection ( opCtx , & conn ) ; <nl> - if ( ! res . 
isOK ( ) ) { <nl> - return res ; <nl> + if ( result ) { <nl> + return * result ; <nl> } <nl> <nl> - return doRemoveExternal ( sessions , makeSendFnForCommand ( conn - > get ( ) ) ) ; <nl> + return sendToPrimary ( opCtx , remoteCallback ) ; <nl> } <nl> <nl> + } / / namespace <nl> + <nl> + Status SessionsCollectionRS : : refreshSessions ( OperationContext * opCtx , <nl> + const LogicalSessionRecordSet & sessions , <nl> + Date_t refreshTime ) { <nl> + return dispatch ( opCtx , <nl> + [ & ] { <nl> + DBDirectClient client ( opCtx ) ; <nl> + return doRefresh ( sessions , refreshTime , makeSendFnForBatchWrite ( & client ) ) ; <nl> + } , <nl> + [ & ] ( DBClientBase * client ) { <nl> + return doRefreshExternal ( <nl> + sessions , refreshTime , makeSendFnForCommand ( client ) ) ; <nl> + } ) ; <nl> + } <nl> + <nl> + Status SessionsCollectionRS : : removeRecords ( OperationContext * opCtx , <nl> + const LogicalSessionIdSet & sessions ) { <nl> + return dispatch ( opCtx , <nl> + [ & ] { <nl> + DBDirectClient client ( opCtx ) ; <nl> + return doRemove ( sessions , makeSendFnForBatchWrite ( & client ) ) ; <nl> + } , <nl> + [ & ] ( DBClientBase * client ) { <nl> + return doRemoveExternal ( sessions , makeSendFnForCommand ( client ) ) ; <nl> + } ) ; <nl> + } <nl> + <nl> + StatusWith < LogicalSessionIdSet > SessionsCollectionRS : : findRemovedSessions ( <nl> + OperationContext * opCtx , const LogicalSessionIdSet & sessions ) { <nl> + return dispatch ( <nl> + opCtx , <nl> + [ & ] { <nl> + DBDirectClient client ( opCtx ) ; <nl> + return doFetch ( sessions , makeFindFnForCommand ( & client ) ) ; <nl> + } , <nl> + [ & ] ( DBClientBase * client ) { return doFetch ( sessions , makeFindFnForCommand ( client ) ) ; } ) ; <nl> + } <nl> <nl> } / / namespace mongo <nl> mmm a / src / mongo / db / sessions_collection_rs . h <nl> ppp b / src / mongo / db / sessions_collection_rs . h <nl> class SessionsCollectionRS : public SessionsCollection { <nl> * Removes the authoritative records for the specified sessions . <nl> * / <nl> Status removeRecords ( OperationContext * opCtx , const LogicalSessionIdSet & sessions ) override ; <nl> + <nl> + StatusWith < LogicalSessionIdSet > findRemovedSessions ( <nl> + OperationContext * opCtx , const LogicalSessionIdSet & sessions ) override ; <nl> } ; <nl> <nl> } / / namespace mongo <nl> mmm a / src / mongo / db / sessions_collection_sharded . cpp <nl> ppp b / src / mongo / db / sessions_collection_sharded . cpp <nl> <nl> <nl> # include " mongo / db / sessions_collection_sharded . h " <nl> <nl> + # include " mongo / db / matcher / extensions_callback_noop . h " <nl> # include " mongo / db / operation_context . h " <nl> + # include " mongo / db / query / canonical_query . h " <nl> + # include " mongo / db / query / query_request . h " <nl> # include " mongo / s / commands / cluster_write . h " <nl> + # include " mongo / s / query / cluster_find . h " <nl> # include " mongo / s / write_ops / batch_write_exec . h " <nl> # include " mongo / s / write_ops / batched_command_request . h " <nl> # include " mongo / s / write_ops / batched_command_response . 
h " <nl> Status SessionsCollectionSharded : : removeRecords ( OperationContext * opCtx , <nl> return doRemove ( sessions , send ) ; <nl> } <nl> <nl> + StatusWith < LogicalSessionIdSet > SessionsCollectionSharded : : findRemovedSessions ( <nl> + OperationContext * opCtx , const LogicalSessionIdSet & sessions ) { <nl> + <nl> + auto send = [ & ] ( BSONObj toSend ) - > StatusWith < BSONObj > { <nl> + const NamespaceString nss ( SessionsCollection : : kSessionsFullNS ) ; <nl> + <nl> + auto qr = QueryRequest : : makeFromFindCommand ( nss , toSend , false ) ; <nl> + if ( ! qr . isOK ( ) ) { <nl> + return qr . getStatus ( ) ; <nl> + } <nl> + <nl> + const boost : : intrusive_ptr < ExpressionContext > expCtx ; <nl> + auto cq = CanonicalQuery : : canonicalize ( opCtx , <nl> + std : : move ( qr . getValue ( ) ) , <nl> + expCtx , <nl> + ExtensionsCallbackNoop ( ) , <nl> + MatchExpressionParser : : kAllowAllSpecialFeatures & <nl> + ~ MatchExpressionParser : : AllowedFeatures : : kExpr ) ; <nl> + if ( ! cq . isOK ( ) ) { <nl> + return cq . getStatus ( ) ; <nl> + } <nl> + <nl> + / / Do the work to generate the first batch of results . This blocks waiting to get responses <nl> + / / from the shard ( s ) . <nl> + std : : vector < BSONObj > batch ; <nl> + BSONObj viewDefinition ; <nl> + auto cursorId = ClusterFind : : runQuery ( <nl> + opCtx , * cq . getValue ( ) , ReadPreferenceSetting : : get ( opCtx ) , & batch , & viewDefinition ) ; <nl> + <nl> + if ( ! cursorId . isOK ( ) ) { <nl> + return cursorId . getStatus ( ) ; <nl> + } <nl> + <nl> + BSONObjBuilder result ; <nl> + CursorResponseBuilder firstBatch ( / * firstBatch * / true , & result ) ; <nl> + for ( const auto & obj : batch ) { <nl> + firstBatch . append ( obj ) ; <nl> + } <nl> + firstBatch . done ( cursorId . getValue ( ) , nss . ns ( ) ) ; <nl> + <nl> + return result . obj ( ) ; <nl> + } ; <nl> + <nl> + return doFetch ( sessions , send ) ; <nl> + } <nl> <nl> } / / namespace mongo <nl> mmm a / src / mongo / db / sessions_collection_sharded . h <nl> ppp b / src / mongo / db / sessions_collection_sharded . h <nl> class SessionsCollectionSharded : public SessionsCollection { <nl> * Removes the authoritative records for the specified sessions . <nl> * / <nl> Status removeRecords ( OperationContext * opCtx , const LogicalSessionIdSet & sessions ) override ; <nl> + <nl> + StatusWith < LogicalSessionIdSet > findRemovedSessions ( <nl> + OperationContext * opCtx , const LogicalSessionIdSet & sessions ) override ; <nl> } ; <nl> <nl> } / / namespace mongo <nl> mmm a / src / mongo / db / sessions_collection_standalone . cpp <nl> ppp b / src / mongo / db / sessions_collection_standalone . cpp <nl> Status SessionsCollectionStandalone : : removeRecords ( OperationContext * opCtx , <nl> return doRemove ( sessions , makeSendFnForBatchWrite ( & client ) ) ; <nl> } <nl> <nl> + StatusWith < LogicalSessionIdSet > SessionsCollectionStandalone : : findRemovedSessions ( <nl> + OperationContext * opCtx , const LogicalSessionIdSet & sessions ) { <nl> + DBDirectClient client ( opCtx ) ; <nl> + return doFetch ( sessions , makeFindFnForCommand ( & client ) ) ; <nl> + } <nl> + <nl> } / / namespace mongo <nl> mmm a / src / mongo / db / sessions_collection_standalone . h <nl> ppp b / src / mongo / db / sessions_collection_standalone . h <nl> class SessionsCollectionStandalone : public SessionsCollection { <nl> * Removes the authoritative records for the specified sessions . 
<nl> * / <nl> Status removeRecords ( OperationContext * opCtx , const LogicalSessionIdSet & sessions ) override ; <nl> + <nl> + StatusWith < LogicalSessionIdSet > findRemovedSessions ( <nl> + OperationContext * opCtx , const LogicalSessionIdSet & sessions ) override ; <nl> } ; <nl> <nl> } / / namespace mongo <nl> mmm a / src / mongo / dbtests / cursor_manager_test . cpp <nl> ppp b / src / mongo / dbtests / cursor_manager_test . cpp <nl> <nl> # include " mongo / db / client . h " <nl> # include " mongo / db / clientcursor . h " <nl> # include " mongo / db / cursor_manager . h " <nl> + # include " mongo / db / cursor_server_params . h " <nl> # include " mongo / db / exec / queued_data_stage . h " <nl> # include " mongo / db / exec / working_set . h " <nl> # include " mongo / db / exec / working_set_common . h " <nl> TEST_F ( CursorManagerTest , InactiveCursorShouldTimeout ) { <nl> <nl> ASSERT_EQ ( 0UL , cursorManager - > timeoutCursors ( _opCtx . get ( ) , Date_t ( ) ) ) ; <nl> <nl> - clock - > advance ( Milliseconds ( CursorManager : : kDefaultCursorTimeoutMinutes ) ) ; <nl> + clock - > advance ( getDefaultCursorTimeoutMillis ( ) ) ; <nl> ASSERT_EQ ( 1UL , cursorManager - > timeoutCursors ( _opCtx . get ( ) , clock - > now ( ) ) ) ; <nl> ASSERT_EQ ( 0UL , cursorManager - > numCursors ( ) ) ; <nl> <nl> TEST_F ( CursorManagerTest , InactivePinnedCursorShouldNotTimeout ) { <nl> { makeFakePlanExecutor ( ) , NamespaceString { " test . collection " } , { } , false , BSONObj ( ) } ) ; <nl> <nl> / / The pin is still in scope , so it should not time out . <nl> - clock - > advance ( Milliseconds ( CursorManager : : kDefaultCursorTimeoutMinutes ) ) ; <nl> + clock - > advance ( getDefaultCursorTimeoutMillis ( ) ) ; <nl> ASSERT_EQ ( 0UL , cursorManager - > timeoutCursors ( _opCtx . get ( ) , clock - > now ( ) ) ) ; <nl> } <nl> <nl> TEST_F ( CursorManagerTest , InactiveKilledCursorsShouldTimeout ) { <nl> _opCtx . get ( ) , collectionGoingAway , " KilledCursorsShouldTimeoutTest " ) ; <nl> <nl> / / Advance the clock to simulate time passing . <nl> - clock - > advance ( Milliseconds ( CursorManager : : kDefaultCursorTimeoutMinutes ) ) ; <nl> + clock - > advance ( getDefaultCursorTimeoutMillis ( ) ) ; <nl> <nl> ASSERT_EQ ( 1UL , cursorManager - > timeoutCursors ( _opCtx . get ( ) , clock - > now ( ) ) ) ; <nl> ASSERT_EQ ( 0UL , cursorManager - > numCursors ( ) ) ; <nl> TEST_F ( CursorManagerTest , InactiveKilledCursorsThatAreStillPinnedShouldNotTimeou <nl> _opCtx . get ( ) , collectionGoingAway , " KilledCursorsShouldTimeoutTest " ) ; <nl> <nl> / / Advance the clock to simulate time passing . <nl> - clock - > advance ( Milliseconds ( CursorManager : : kDefaultCursorTimeoutMinutes ) ) ; <nl> + clock - > advance ( getDefaultCursorTimeoutMillis ( ) ) ; <nl> <nl> / / The pin is still in scope , so it should not time out . <nl> ASSERT_EQ ( 0UL , cursorManager - > timeoutCursors ( _opCtx . get ( ) , clock - > now ( ) ) ) ; <nl> TEST_F ( CursorManagerTest , UsingACursorShouldUpdateTimeOfLastUse ) { <nl> <nl> / / We should be able to time out the unused cursor , but the one we used should stay alive . <nl> ASSERT_EQ ( 2UL , cursorManager - > numCursors ( ) ) ; <nl> - clock - > advance ( Milliseconds ( CursorManager : : kDefaultCursorTimeoutMinutes ) - Milliseconds ( 1 ) ) ; <nl> + clock - > advance ( getDefaultCursorTimeoutMillis ( ) - Milliseconds ( 1 ) ) ; <nl> ASSERT_EQ ( 1UL , cursorManager - > timeoutCursors ( _opCtx . 
get ( ) , clock - > now ( ) ) ) ; <nl> ASSERT_EQ ( 1UL , cursorManager - > numCursors ( ) ) ; <nl> <nl> TEST_F ( CursorManagerTest , CursorShouldNotTimeOutUntilIdleForLongEnoughAfterBeing <nl> _opCtx . get ( ) , { makeFakePlanExecutor ( ) , kTestNss , { } , false , BSONObj ( ) } ) ; <nl> <nl> / / Advance the clock to simulate time passing . <nl> - clock - > advance ( CursorManager : : kDefaultCursorTimeoutMinutes + Milliseconds ( 1 ) ) ; <nl> + clock - > advance ( getDefaultCursorTimeoutMillis ( ) + Milliseconds ( 1 ) ) ; <nl> <nl> / / Make sure the pinned cursor does not time out , before or after unpinning it . <nl> ASSERT_EQ ( 1UL , cursorManager - > numCursors ( ) ) ; <nl> TEST_F ( CursorManagerTest , CursorShouldNotTimeOutUntilIdleForLongEnoughAfterBeing <nl> <nl> / / Advance the clock to simulate more time passing , then assert that the now - inactive cursor <nl> / / times out . <nl> - clock - > advance ( CursorManager : : kDefaultCursorTimeoutMinutes + Milliseconds ( 1 ) ) ; <nl> + clock - > advance ( getDefaultCursorTimeoutMillis ( ) + Milliseconds ( 1 ) ) ; <nl> ASSERT_EQ ( 1UL , cursorManager - > timeoutCursors ( _opCtx . get ( ) , clock - > now ( ) ) ) ; <nl> ASSERT_EQ ( 0UL , cursorManager - > numCursors ( ) ) ; <nl> } <nl> mmm a / src / mongo / dbtests / logical_sessions_tests . cpp <nl> ppp b / src / mongo / dbtests / logical_sessions_tests . cpp <nl> class SessionsCollectionStandaloneRefreshTest : public SessionsCollectionStandal <nl> } <nl> } ; <nl> <nl> + / / Test that finding entries in this collection works . <nl> + class SessionsCollectionStandaloneFindTest : public SessionsCollectionStandaloneTest { <nl> + public : <nl> + void run ( ) { <nl> + DBDirectClient db ( opCtx ( ) ) ; <nl> + auto notInsertedRecord = makeRecord ( ) ; <nl> + <nl> + auto insertedRecord = makeRecord ( ) ; <nl> + ASSERT ( insertRecord ( opCtx ( ) , insertedRecord ) . isOK ( ) ) ; <nl> + <nl> + / / if a record isn ' t there , it ' s been removed <nl> + { <nl> + LogicalSessionIdSet lsids { notInsertedRecord . getId ( ) } ; <nl> + <nl> + auto response = collection ( ) - > findRemovedSessions ( opCtx ( ) , lsids ) ; <nl> + ASSERT_EQ ( response . isOK ( ) , true ) ; <nl> + ASSERT_EQ ( response . getValue ( ) . size ( ) , 1u ) ; <nl> + ASSERT ( * ( response . getValue ( ) . begin ( ) ) = = notInsertedRecord . getId ( ) ) ; <nl> + } <nl> + <nl> + / / if a record is there , it hasn ' t been removed <nl> + { <nl> + LogicalSessionIdSet lsids { insertedRecord . getId ( ) } ; <nl> + <nl> + auto response = collection ( ) - > findRemovedSessions ( opCtx ( ) , lsids ) ; <nl> + ASSERT_EQ ( response . isOK ( ) , true ) ; <nl> + ASSERT_EQ ( response . getValue ( ) . size ( ) , 0u ) ; <nl> + } <nl> + <nl> + / / We can tell the difference with multiple records <nl> + { <nl> + LogicalSessionIdSet lsids { insertedRecord . getId ( ) , notInsertedRecord . getId ( ) } ; <nl> + <nl> + auto response = collection ( ) - > findRemovedSessions ( opCtx ( ) , lsids ) ; <nl> + ASSERT_EQ ( response . isOK ( ) , true ) ; <nl> + ASSERT_EQ ( response . getValue ( ) . size ( ) , 1u ) ; <nl> + ASSERT ( * ( response . getValue ( ) . begin ( ) ) = = notInsertedRecord . getId ( ) ) ; <nl> + } <nl> + <nl> + / / Batch logic works <nl> + { <nl> + LogicalSessionIdSet insertedRecords ; <nl> + LogicalSessionIdSet uninsertedRecords ; <nl> + LogicalSessionIdSet mixedRecords ; <nl> + <nl> + for ( int i = 0 ; i < 5000 ; + + i ) { <nl> + auto insertedRecord = makeRecord ( ) ; <nl> + ASSERT ( insertRecord ( opCtx ( ) , insertedRecord ) . 
isOK ( ) ) ; <nl> + insertedRecords . insert ( insertedRecord . getId ( ) ) ; <nl> + <nl> + auto uninsertedRecord = makeRecord ( ) ; <nl> + uninsertedRecords . insert ( uninsertedRecord . getId ( ) ) ; <nl> + <nl> + mixedRecords . insert ( insertedRecord . getId ( ) ) ; <nl> + mixedRecords . insert ( uninsertedRecord . getId ( ) ) ; <nl> + } <nl> + <nl> + auto response = collection ( ) - > findRemovedSessions ( opCtx ( ) , mixedRecords ) ; <nl> + ASSERT_EQ ( response . isOK ( ) , true ) ; <nl> + ASSERT_EQ ( response . getValue ( ) . size ( ) , 5000u ) ; <nl> + ASSERT ( response . getValue ( ) = = uninsertedRecords ) ; <nl> + } <nl> + } <nl> + } ; <nl> class All : public Suite { <nl> public : <nl> All ( ) : Suite ( " logical_sessions " ) { } <nl> class All : public Suite { <nl> void setupTests ( ) { <nl> add < SessionsCollectionStandaloneRemoveTest > ( ) ; <nl> add < SessionsCollectionStandaloneRefreshTest > ( ) ; <nl> + add < SessionsCollectionStandaloneFindTest > ( ) ; <nl> } <nl> } ; <nl> <nl> mmm a / src / mongo / s / query / SConscript <nl> ppp b / src / mongo / s / query / SConscript <nl> env . Library ( <nl> " cluster_cursor_cleanup_job . cpp " , <nl> ] , <nl> LIBDEPS = [ <nl> + " $ BUILD_DIR / mongo / db / cursor_server_params " , <nl> " $ BUILD_DIR / mongo / s / coreshard " , <nl> " $ BUILD_DIR / mongo / util / background_job " , <nl> ] , <nl> mmm a / src / mongo / s / query / cluster_cursor_cleanup_job . cpp <nl> ppp b / src / mongo / s / query / cluster_cursor_cleanup_job . cpp <nl> <nl> # include " mongo / s / query / cluster_cursor_cleanup_job . h " <nl> <nl> # include " mongo / db / client . h " <nl> + # include " mongo / db / cursor_server_params . h " <nl> # include " mongo / db / server_parameters . h " <nl> # include " mongo / s / grid . h " <nl> # include " mongo / s / query / cluster_cursor_manager . h " <nl> <nl> <nl> namespace mongo { <nl> <nl> - namespace { <nl> - <nl> - / / Period of time after which mortal cursors are killed for inactivity . Configurable with server <nl> - / / parameter " cursorTimeoutMillis " . <nl> - AtomicInt64 cursorTimeoutMillis ( durationCount < Milliseconds > ( Minutes ( 10 ) ) ) ; <nl> - <nl> - ExportedServerParameter < long long , ServerParameterType : : kStartupAndRuntime > <nl> - cursorTimeoutMillisConfig ( ServerParameterSet : : getGlobal ( ) , <nl> - " cursorTimeoutMillis " , <nl> - & cursorTimeoutMillis ) ; <nl> - <nl> - / / Frequency with which ClusterCursorCleanupJob is run . <nl> - MONGO_EXPORT_SERVER_PARAMETER ( clientCursorMonitorFrequencySecs , long long , 4 ) ; <nl> - <nl> - } / / namespace <nl> - <nl> ClusterCursorCleanupJob clusterCursorCleanupJob ; <nl> <nl> std : : string ClusterCursorCleanupJob : : name ( ) const { <nl> void ClusterCursorCleanupJob : : run ( ) { <nl> while ( ! globalInShutdownDeprecated ( ) ) { <nl> / / Mirroring the behavior in CursorManager : : timeoutCursors ( ) , a negative value for <nl> / / cursorTimeoutMillis has the same effect as a 0 value : cursors are cleaned immediately . <nl> - auto cursorTimeoutValue = cursorTimeoutMillis . load ( ) ; <nl> + auto cursorTimeoutValue = getCursorTimeoutMillis ( ) ; <nl> const auto opCtx = client - > makeOperationContext ( ) ; <nl> Date_t cutoff = ( cursorTimeoutValue > 0 ) <nl> ? ( Date_t : : now ( ) - Milliseconds ( cursorTimeoutValue ) ) <nl> void ClusterCursorCleanupJob : : run ( ) { <nl> manager - > incrementCursorsTimedOut ( manager - > reapZombieCursors ( opCtx . 
get ( ) ) ) ; <nl> <nl> MONGO_IDLE_THREAD_BLOCK ; <nl> - sleepsecs ( clientCursorMonitorFrequencySecs . load ( ) ) ; <nl> + sleepsecs ( getClientCursorMonitorFrequencySecs ( ) ) ; <nl> } <nl> } <nl> <nl>
SERVER - 30805 add LSC : : findRemovedSessions ( )
mongodb/mongo
c351caa6815218c5b4a9801342ccbb1b050f6aea
2017-08-31T20:10:18Z
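A note on the mongo commit above: findRemovedSessions computes a set difference over batched find commands — every requested lsid starts out presumed removed, and each id the sessions collection still returns is erased from that set. Below is a minimal standalone C++ sketch of just that bookkeeping, assuming a stand-in FindBatchFn that yields the ids a batch query found; SessionId, fetchRemoved, and the single-batch simplification are hypothetical, and the BSON/IDL plumbing and runBulkGeneric batching of the real doFetch are elided.

    #include <functional>
    #include <set>
    #include <string>
    #include <vector>

    using SessionId = std::string;  // hypothetical stand-in for LogicalSessionId
    // Stand-in for the find path: returns the subset of ids the collection still has.
    using FindBatchFn = std::function<std::vector<SessionId>(const std::vector<SessionId>&)>;

    std::set<SessionId> fetchRemoved(const std::set<SessionId>& sessions, const FindBatchFn& find) {
        std::set<SessionId> removed = sessions;  // presume everything was removed
        std::vector<SessionId> batch(sessions.begin(), sessions.end());  // one batch for brevity
        for (const SessionId& stillThere : find(batch))
            removed.erase(stillThere);           // found => not removed after all
        return removed;                          // ids the collection no longer holds
    }

The real implementation additionally caps batch sizes and sets singleBatch/allowPartialResults on the find request, but the erase-what-you-find loop is the essential step.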
mmm a / tensorflow / contrib / kfac / python / kernel_tests / BUILD <nl> ppp b / tensorflow / contrib / kfac / python / kernel_tests / BUILD <nl> py_test ( <nl> " / / tensorflow / python : framework_ops " , <nl> " / / tensorflow / python : linalg_ops " , <nl> " / / tensorflow / python : random_seed " , <nl> + " / / tensorflow / python : variable_scope " , <nl> + " / / tensorflow / python : variables " , <nl> " / / third_party / py / numpy " , <nl> ] , <nl> ) <nl> mmm a / tensorflow / contrib / kfac / python / kernel_tests / utils_test . py <nl> ppp b / tensorflow / contrib / kfac / python / kernel_tests / utils_test . py <nl> <nl> from tensorflow . python . ops import linalg_ops <nl> from tensorflow . python . ops import math_ops <nl> from tensorflow . python . ops import variable_scope <nl> + from tensorflow . python . ops import variables <nl> from tensorflow . python . platform import test <nl> <nl> <nl> def testCrossReplicaMean ( self ) : <nl> tensor = array_ops . zeros ( [ ] , dtype = dtypes . float32 ) <nl> mean = utils . cross_replica_mean ( tensor ) <nl> <nl> + def testBatchExecute ( self ) : <nl> + " " " Ensure batch_execute runs in a round - robin fashion . " " " <nl> + <nl> + def increment_var ( var ) : <nl> + return lambda : var . assign_add ( 1 ) <nl> + <nl> + with ops . Graph ( ) . as_default ( ) , self . test_session ( ) as sess : <nl> + i = variable_scope . get_variable ( ' i ' , initializer = 0 ) <nl> + accumulators = [ <nl> + variable_scope . get_variable ( ' var % d ' % j , initializer = 0 ) <nl> + for j in range ( 3 ) <nl> + ] <nl> + thunks = [ increment_var ( var ) for var in accumulators ] <nl> + increment_accumulators = utils . batch_execute ( i , thunks , 2 ) <nl> + increment_i = i . assign_add ( 1 ) <nl> + <nl> + sess . run ( variables . global_variables_initializer ( ) ) <nl> + <nl> + # Ensure one op per thunk . <nl> + self . assertEqual ( 3 , len ( increment_accumulators ) ) <nl> + <nl> + # Ensure round - robin execution . <nl> + values = [ ] <nl> + for _ in range ( 5 ) : <nl> + sess . run ( increment_accumulators ) <nl> + sess . run ( increment_i ) <nl> + values . append ( sess . run ( accumulators ) ) <nl> + self . assertAllClose ( <nl> + [ <nl> + [ 1 , 1 , 0 ] , # <nl> + [ 2 , 1 , 1 ] , # <nl> + [ 2 , 2 , 2 ] , # <nl> + [ 3 , 3 , 2 ] , # <nl> + [ 4 , 3 , 3 ] <nl> + ] , <nl> + values ) <nl> + <nl> <nl> if __name__ = = ' __main__ ' : <nl> test . main ( ) <nl> mmm a / tensorflow / contrib / kfac / python / ops / BUILD <nl> ppp b / tensorflow / contrib / kfac / python / ops / BUILD <nl> py_library ( <nl> deps = [ <nl> " / / tensorflow / contrib / tpu " , <nl> " / / tensorflow / python : array_ops " , <nl> + " / / tensorflow / python : control_flow_ops " , <nl> " / / tensorflow / python : dtypes " , <nl> " / / tensorflow / python : framework_ops " , <nl> " / / tensorflow / python : gradients " , <nl> mmm a / tensorflow / contrib / kfac / python / ops / utils . py <nl> ppp b / tensorflow / contrib / kfac / python / ops / utils . py <nl> <nl> from tensorflow . python . framework import dtypes <nl> from tensorflow . python . framework import ops <nl> from tensorflow . python . ops import array_ops <nl> + from tensorflow . python . ops import control_flow_ops <nl> from tensorflow . python . ops import gradients_impl <nl> from tensorflow . python . ops import linalg_ops <nl> from tensorflow . python . 
ops import math_ops <nl> def ensure_sequence ( obj ) : <nl> return ( obj , ) <nl> <nl> <nl> + def batch_execute ( global_step , thunks , batch_size , name = None ) : <nl> + " " " Executes a subset of ops per global step . <nl> + <nl> + Given a list of thunks , each of which produces a single stateful op , <nl> + ensures that exactly ' batch_size ' ops are run per global step . Ops are <nl> + scheduled in a round - robin fashion . For example , with 3 ops <nl> + <nl> + global_step | op0 | op1 | op2 <nl> + mmmmmmmmmmmm + mmm - - + mmm - - + mmm - - <nl> + 0 | x | x | <nl> + mmmmmmmmmmmm + mmm - - + mmm - - + mmm - - <nl> + 1 | x | | x <nl> + mmmmmmmmmmmm + mmm - - + mmm - - + mmm - - <nl> + 2 | | x | x <nl> + mmmmmmmmmmmm + mmm - - + mmm - - + mmm - - <nl> + 3 | x | x | <nl> + mmmmmmmmmmmm + mmm - - + mmm - - + mmm - - <nl> + 4 | x | | x <nl> + <nl> + Does not guarantee order of op execution within a single global step . <nl> + <nl> + Args : <nl> + global_step : Tensor indicating time . Determines which ops run . <nl> + thunks : List of thunks . Each thunk encapsulates one op . Return values are <nl> + ignored . <nl> + batch_size : int . Number of ops to execute per global_step . <nl> + name : string or None . Name scope for newly added ops . <nl> + <nl> + Returns : <nl> + List of ops . Exactly ' batch_size ' ops are guaranteed to have an effect <nl> + every global step . <nl> + " " " <nl> + <nl> + def true_fn ( thunk ) : <nl> + " " " Ensures thunk is executed and returns an Op ( not a Tensor ) . " " " <nl> + <nl> + def result ( ) : <nl> + with ops . control_dependencies ( [ thunk ( ) ] ) : <nl> + return control_flow_ops . no_op ( ) <nl> + <nl> + return result <nl> + <nl> + def false_fn ( _ ) : <nl> + " " " Executes a no - op . " " " <nl> + <nl> + def result ( ) : <nl> + return control_flow_ops . no_op ( ) <nl> + <nl> + return result <nl> + <nl> + with ops . name_scope ( name , " batch_execute " ) : <nl> + true_fns = [ true_fn ( thunk ) for thunk in thunks ] <nl> + false_fns = [ false_fn ( thunk ) for thunk in thunks ] <nl> + num_thunks = len ( thunks ) <nl> + conditions = [ <nl> + math_ops . less ( <nl> + math_ops . mod ( batch_size - 1 + global_step * batch_size - j , <nl> + num_thunks ) , batch_size ) for j in range ( num_thunks ) <nl> + ] <nl> + result = [ <nl> + control_flow_ops . cond ( condition , true_fn , false_fn ) <nl> + for ( condition , true_fn , <nl> + false_fn ) in zip ( conditions , true_fns , false_fns ) <nl> + ] <nl> + return result <nl> + <nl> + <nl> # TODO ( b / 69623235 ) : Add a function for finding tensors that share gradients <nl> # to eliminate redundant fisher factor computations . <nl> mmm a / tensorflow / contrib / kfac / python / ops / utils_lib . py <nl> ppp b / tensorflow / contrib / kfac / python / ops / utils_lib . py <nl> <nl> " generate_random_signs " , <nl> " fwd_gradients " , <nl> " ensure_sequence " , <nl> + " batch_execute " , <nl> ] <nl> <nl> remove_undocumented ( __name__ , allowed_exception_list = _allowed_symbols ) <nl>
K - FAC : Utility function for scheduling N ops per global_step .
tensorflow/tensorflow
91526e1505e947fc64aece30ecfcd7ecec5de2c1
2018-01-12T02:13:37Z
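The scheduling predicate inside batch_execute rewards a quick sanity check: thunk j runs at a given global step exactly when (batch_size - 1 + global_step * batch_size - j) mod num_thunks < batch_size, with a modulo that never goes negative (Python semantics). The short C++ simulation below — nothing TensorFlow-specific, just the arithmetic — reproduces the round-robin table from the docstring for num_thunks = 3 and batch_size = 2; pymod is a hypothetical helper for Python-style modulo.

    #include <cstdio>

    // Python-style modulo: result always lies in [0, m).
    static int pymod(int a, int m) { return ((a % m) + m) % m; }

    int main() {
        const int num_thunks = 3, batch_size = 2;
        for (int step = 0; step < 5; ++step) {
            std::printf("step %d:", step);
            for (int j = 0; j < num_thunks; ++j) {
                const bool runs =
                    pymod(batch_size - 1 + step * batch_size - j, num_thunks) < batch_size;
                std::printf(" %c", runs ? 'x' : '.');
            }
            std::printf("\n");
        }
        // Output: x x . / x . x / . x x / x x . / x . x -- matching the docstring table.
        return 0;
    }

Exactly batch_size of the conditions are true at every step, which is the invariant the unit test's accumulator values [1,1,0], [2,1,1], ... are checking.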
mmm a / src / core / tsi / ssl_transport_security . c <nl> ppp b / src / core / tsi / ssl_transport_security . c <nl> static int does_entry_match_name ( const char * entry , size_t entry_length , <nl> return 0 ; <nl> } <nl> name_subdomain = strchr ( name , ' . ' ) ; <nl> + if ( name_subdomain = = NULL ) return 0 ; <nl> name_subdomain_length = strlen ( name_subdomain ) ; <nl> - if ( name_subdomain = = NULL | | name_subdomain_length < 2 ) return 0 ; <nl> + if ( name_subdomain_length < 2 ) return 0 ; <nl> name_subdomain + + ; / * Starts after the dot . * / <nl> name_subdomain_length - - ; <nl> entry + = 2 ; / * Remove * . * / <nl>
Merge pull request from ctiller / sec - wtf
grpc/grpc
46f8495266e8ec3cf0639b10847e7c9b1d1485d3
2015-02-25T18:18:48Z
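The grpc fix above is a textbook use-before-check: the old code took strlen(name_subdomain) and only then tested name_subdomain == NULL, so matching a certificate entry against a dotless name dereferenced a null pointer before the guard could run. A minimal sketch of the corrected shape, with a hypothetical helper name (only the statement ordering matters):

    #include <cstring>

    // Every use of strchr's result is now dominated by the null check.
    static bool has_usable_subdomain(const char* name) {
        const char* sub = std::strchr(name, '.');
        if (sub == nullptr) return false;   // guard first -- this reorder is the whole fix
        return std::strlen(sub) >= 2;       // safe: sub is provably non-null here
    }

Static analyzers flag the original pattern as a null check after dereference; the combined `name_subdomain == NULL || name_subdomain_length < 2` test was simply one statement too late.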
mmm a / admin / static / coffee / dataexplorer . coffee <nl> ppp b / admin / static / coffee / dataexplorer . coffee <nl> module ' DataExplorerView ' , - > <nl> databases_suggestions_template : Handlebars . templates [ ' dataexplorer - databases_suggestions - template ' ] <nl> namespaces_suggestions_template : Handlebars . templates [ ' dataexplorer - namespaces_suggestions - template ' ] <nl> reason_dataexplorer_broken_template : Handlebars . templates [ ' dataexplorer - reason_broken - template ' ] <nl> - dataexplorer_toggle_size_template : Handlebars . templates [ ' dataexplorer - toggle_size - template ' ] <nl> <nl> # Constants <nl> limit : 40 # How many results we display per page / / Final for now <nl> module ' DataExplorerView ' , - > <nl> <nl> clear_history_view : ( event ) = > <nl> @ history_view . clear_history event <nl> + <nl> open_close_history : ( event ) = > <nl> @ history_view . open_close_history event , @ $ ( ' . close_queries_link ' ) <nl> + if @ history_view . state is ' visible ' <nl> + @ $ ( ' . clear_queries_link ' ) . fadeIn ' fast ' <nl> + @ $ ( ' . close_queries_link ' ) . addClass ' active ' <nl> + else <nl> + @ $ ( ' . clear_queries_link ' ) . fadeOut ' fast ' <nl> + @ $ ( ' . close_queries_link ' ) . removeClass ' active ' <nl> <nl> displaying_full_view : false # Boolean for the full view ( true if full view ) <nl> <nl> module ' DataExplorerView ' , - > <nl> $ ( ' # cluster ' ) . addClass ' container ' <nl> $ ( ' # cluster ' ) . removeClass ' cluster_with_margin ' <nl> @ . $ ( ' . wrapper_scrollbar ' ) . css ' width ' , ' 888px ' <nl> - @ $ ( ' . change_size ' ) . html @ dataexplorer_toggle_size_template <nl> - normal_view : true <nl> + @ $ ( ' . option_icon ' ) . removeClass ' fullscreen_exit ' <nl> + @ $ ( ' . option_icon ' ) . addClass ' fullscreen ' <nl> <nl> display_full : = > <nl> $ ( ' # cluster ' ) . removeClass ' container ' <nl> $ ( ' # cluster ' ) . addClass ' cluster_with_margin ' <nl> @ . $ ( ' . wrapper_scrollbar ' ) . css ' width ' , ( $ ( window ) . width ( ) - 92 ) + ' px ' <nl> - @ $ ( ' . change_size ' ) . html @ dataexplorer_toggle_size_template <nl> - normal_view : false <nl> - <nl> + @ $ ( ' . option_icon ' ) . removeClass ' fullscreen ' <nl> + @ $ ( ' . option_icon ' ) . addClass ' fullscreen_exit ' <nl> <nl> destroy : = > <nl> @ results_view . destroy ( ) <nl> module ' DataExplorerView ' , - > <nl> class @ HistoryView extends Backbone . View <nl> dataexplorer_history_template : Handlebars . templates [ ' dataexplorer - history - template ' ] <nl> dataexplorer_query_li_template : Handlebars . templates [ ' dataexplorer - query_li - template ' ] <nl> - dataexplorer_toggle_history_template : Handlebars . templates [ ' dataexplorer - toggle_history - template ' ] <nl> className : ' history ' <nl> <nl> size_history_displayed : 300 <nl> module ' DataExplorerView ' , - > <nl> initialize : ( args ) = > <nl> @ container = args . container <nl> @ history = args . history <nl> - @ height_history = 200 <nl> + @ height_history = 204 <nl> <nl> render : = > <nl> @ $ el . html @ dataexplorer_history_template ( ) <nl> module ' DataExplorerView ' , - > <nl> $ ( ' . arrow_history ' ) . hide ( ) # In case the user trigger hide / show really fast <nl> @ $ ( ' . nano_border ' ) . slideUp ' fast ' <nl> @ $ ( ' . arrow_history ' ) . slideUp ' fast ' <nl> - container . html @ dataexplorer_toggle_history_template <nl> - displayed : true <nl> else <nl> @ state = ' visible ' <nl> @ $ ( ' . arrow_history ' ) . show ( ) <nl> @ $ ( ' . nano_border ' ) . 
show ( ) <nl> @ resize ( ) <nl> @ $ ( ' . nano > . content ' ) . scrollTop $ ( ' . history_list ' ) . height ( ) <nl> - container . html @ dataexplorer_toggle_history_template <nl> - displayed : false <nl> <nl> <nl> # args = <nl> mmm a / admin / static / handlebars / dataexplorer . html <nl> ppp b / admin / static / handlebars / dataexplorer . html <nl> <nl> < div class = " section " > <nl> < div class = " options_container " > <nl> <nl> - < button class = " clear_queries_link btn " > Clear history < / button > <nl> - < button class = " close_queries_link btn " > Show history < / button > <nl> - < button class = " btn button_query change_size " > < img src = " / images / arrow - expand_right . png " / > < / button > <nl> + < button class = " clear_queries_link btn " title = " Clear history " > Clear history < div class = " option_icon clear_history " > < / div > < / button > <nl> + < button class = " close_queries_link btn " title = " History " > History < div class = " option_icon show_history " > < / div > < / button > <nl> + < button class = " btn button_query change_size " title = " Full screen " > < div class = " option_icon fullscreen " > < / div > < / button > <nl> < / div > <nl> < h1 class = " title small_margin_bottom " > Data Explorer < / h1 > <nl> < div id = " user - alert - space " > < / div > <nl> < h1 class = " title small_margin_bottom " > Data Explorer < / h1 > <nl> < div class = " nano_border nano_border_bottom " > < / div > <nl> < / script > <nl> <nl> - < script id = " dataexplorer - toggle_size - template " type = " text / x - handlebars - template " > <nl> - { { # if normal_view } } < img src = " / images / arrow - expand_right . png " / > { { else } } < img src = " / images / arrow - expand_left . png " / > { { / if } } <nl> - < / script > <nl> - <nl> - < script id = " dataexplorer - toggle_history - template " type = " text / x - handlebars - template " > <nl> - { { # if displayed } } Show history { { else } } Hide history { { / if } } <nl> - < / script > <nl> - <nl> < script id = " dataexplorer - query_li - template " type = " text / x - handlebars - template " > <nl> { { # if no_query } } <nl> < li class = " no_history { { displayed_class } } " > No history available < / li > <nl> deleted file mode 100644 <nl> index 320d85d348d . . 00000000000 <nl> Binary files a / admin / static / images / arrow - expand_left . png and / dev / null differ <nl> deleted file mode 100644 <nl> index 6be3323bdc9 . . 00000000000 <nl> Binary files a / admin / static / images / arrow - expand_right . png and / dev / null differ <nl> new file mode 100644 <nl> index 00000000000 . . 4b6eff51cd6 <nl> Binary files / dev / null and b / admin / static / images / book_alt_16x16 . png differ <nl> deleted file mode 100644 <nl> index 4988fa87e18 . . 00000000000 <nl> Binary files a / admin / static / images / full_view . png and / dev / null differ <nl> new file mode 100644 <nl> index 00000000000 . . 1d5292c4623 <nl> Binary files / dev / null and b / admin / static / images / fullscreen_16x16 . png differ <nl> new file mode 100644 <nl> index 00000000000 . . ecf3814c89d <nl> Binary files / dev / null and b / admin / static / images / fullscreen_exit_16x16 . png differ <nl> new file mode 100644 <nl> index 00000000000 . . adc9debe6a6 <nl> Binary files / dev / null and b / admin / static / images / trash_stroke_16x16 . png differ <nl> mmm a / admin / static / less / styles . less <nl> ppp b / admin / static / less / styles . less <nl> TODO : retire this <nl> . 
options_container { <nl> float : right ; <nl> margin - top : 5px ; <nl> + . option_icon { <nl> + width : 16px ; <nl> + height : 16px ; <nl> + opacity : 0 . 3 ; <nl> + } <nl> + . fullscreen { <nl> + background : url ( / images / fullscreen_16x16 . png ) center no - repeat ; <nl> + } <nl> + . fullscreen_exit { <nl> + background : url ( / images / fullscreen_exit_16x16 . png ) center no - repeat ; <nl> + } <nl> + . show_history { <nl> + background : url ( / images / book_alt_16x16 . png ) center no - repeat ; <nl> + float : left ; <nl> + margin - right : 5px ; <nl> + } <nl> + . clear_queries_link { <nl> + display : none ; <nl> + . clear_history { <nl> + background : url ( / images / trash_stroke_16x16 . png ) center no - repeat ; <nl> + float : left ; <nl> + margin - right : 5px ; <nl> + } <nl> + } <nl> . close_queries_link { <nl> - width : 110px ; / / 108 should be enough , but firefox on osx isn ' t happy . Anyway , it ' s going to be replaced with icons . <nl> + & . active { <nl> + background - color : # f2f2f2 ; / / layer fill content <nl> + . box - shadow ( ~ " inset 0 2px 1px rgba ( 0 , 0 , 0 , . 06 ) " ) ; / / inner shadow <nl> + . gradient ( ~ " linear - gradient ( bottom , rgba ( 255 , 255 , 255 , . 02 ) 0 % , rgba ( 0 , 0 , 0 , . 02 ) 100 % ) " ) ; / / gradient overlay <nl> + } <nl> } <nl> + <nl> + <nl> } <nl> <nl> h1 . title { <nl> mmm a / src / clustering / administration / http / server . cc <nl> ppp b / src / clustering / administration / http / server . cc <nl> administrative_http_server_manager_t : : administrative_http_server_manager_t ( <nl> white_list . insert ( " / fonts / opensans - semibolditalic - webfont . ttf " ) ; <nl> white_list . insert ( " / fonts / opensans - semibolditalic - webfont . woff " ) ; <nl> white_list . insert ( " / fonts / stylesheet . css " ) ; <nl> + white_list . insert ( " / images / fullscreen_16x16 . png " ) ; <nl> + white_list . insert ( " / images / trash_stroke_16x16 . png " ) ; <nl> + white_list . insert ( " / images / book_alt_16x16 . png " ) ; <nl> + white_list . insert ( " / images / book_alt2_16x14 . png " ) ; <nl> + white_list . insert ( " / images / fullscreen_exit_16x16 . png " ) ; <nl> white_list . insert ( " / images / ajax - loader . gif " ) ; <nl> - white_list . insert ( " / images / arrow - expand_right . png " ) ; <nl> - white_list . insert ( " / images / arrow - expand_left . png " ) ; <nl> - white_list . insert ( " / images / full_view . png " ) ; <nl> white_list . insert ( " / images / arrow_down . png " ) ; <nl> white_list . insert ( " / images / arrow_right . png " ) ; <nl> white_list . insert ( " / images / bar - line - graph - icon . png " ) ; <nl>
Fix icons / style for the history
rethinkdb/rethinkdb
9c50e3c5031e34fd1774f1420fbab4c6d7797d41
2013-02-28T00:06:40Z
mmm a / addons / skin . confluence / 720p / DialogSongInfo . xml <nl> ppp b / addons / skin . confluence / 720p / DialogSongInfo . xml <nl> <nl> < aligny > center < / aligny > <nl> < font > font13 < / font > <nl> < textcolor > white < / textcolor > <nl> - < label fallback = " 161 " > $ INFO [ ListItem . Album ] $ INFO [ musicplayer . discnumber , - $ LOCALIZE [ 427 ] : ] < / label > <nl> + < label fallback = " 161 " > $ INFO [ ListItem . Album ] $ INFO [ listitem . discnumber , - $ LOCALIZE [ 427 ] ] < / label > <nl> < / control > <nl> < control type = " label " > <nl> < description > Genre Title < / description > <nl>
fix the disc number tag in the song dialog
xbmc/xbmc
bae008c47c8e438f685b433e9f6ee5b29be531c3
2011-03-27T07:32:23Z
mmm a / src / net . cpp <nl> ppp b / src / net . cpp <nl> void CConnman : : ThreadMessageHandler ( ) <nl> / / Send messages <nl> { <nl> LOCK ( pnode - > cs_sendProcessing ) ; <nl> - m_msgproc - > SendMessages ( pnode , flagInterruptMsgProc ) ; <nl> + m_msgproc - > SendMessages ( pnode ) ; <nl> } <nl> <nl> if ( flagInterruptMsgProc ) <nl> mmm a / src / net . h <nl> ppp b / src / net . h <nl> class NetEventsInterface <nl> { <nl> public : <nl> virtual bool ProcessMessages ( CNode * pnode , std : : atomic < bool > & interrupt ) = 0 ; <nl> - virtual bool SendMessages ( CNode * pnode , std : : atomic < bool > & interrupt ) = 0 ; <nl> + virtual bool SendMessages ( CNode * pnode ) = 0 ; <nl> virtual void InitializeNode ( CNode * pnode ) = 0 ; <nl> virtual void FinalizeNode ( NodeId id , bool & update_connection_time ) = 0 ; <nl> <nl> mmm a / src / net_processing . cpp <nl> ppp b / src / net_processing . cpp <nl> static void RelayAddress ( const CAddress & addr , bool fReachable , CConnman * connma <nl> connman - > ForEachNodeThen ( std : : move ( sortfunc ) , std : : move ( pushfunc ) ) ; <nl> } <nl> <nl> - void static ProcessGetBlockData ( CNode * pfrom , const CChainParams & chainparams , const CInv & inv , CConnman * connman , const std : : atomic < bool > & interruptMsgProc ) <nl> + void static ProcessGetBlockData ( CNode * pfrom , const CChainParams & chainparams , const CInv & inv , CConnman * connman ) <nl> { <nl> bool send = false ; <nl> std : : shared_ptr < const CBlock > a_recent_block ; <nl> void static ProcessGetData ( CNode * pfrom , const CChainParams & chainparams , CConnm <nl> const CInv & inv = * it ; <nl> if ( inv . type = = MSG_BLOCK | | inv . type = = MSG_FILTERED_BLOCK | | inv . type = = MSG_CMPCT_BLOCK | | inv . type = = MSG_WITNESS_BLOCK ) { <nl> it + + ; <nl> - ProcessGetBlockData ( pfrom , chainparams , inv , connman , interruptMsgProc ) ; <nl> + ProcessGetBlockData ( pfrom , chainparams , inv , connman ) ; <nl> } <nl> } <nl> <nl> class CompareInvMempoolOrder <nl> } <nl> } ; <nl> <nl> - bool PeerLogicValidation : : SendMessages ( CNode * pto , std : : atomic < bool > & interruptMsgProc ) <nl> + bool PeerLogicValidation : : SendMessages ( CNode * pto ) <nl> { <nl> const Consensus : : Params & consensusParams = Params ( ) . GetConsensus ( ) ; <nl> { <nl> mmm a / src / net_processing . h <nl> ppp b / src / net_processing . h <nl> class PeerLogicValidation final : public CValidationInterface , public NetEventsI <nl> void InitializeNode ( CNode * pnode ) override ; <nl> / * * Handle removal of a peer by updating various state and removing it from mapNodeState * / <nl> void FinalizeNode ( NodeId nodeid , bool & fUpdateConnectionTime ) override ; <nl> - / * * Process protocol messages received from a given node * / <nl> + / * * <nl> + * Process protocol messages received from a given node <nl> + * <nl> + * @ param [ in ] pfrom The node which we have received messages from . <nl> + * @ param [ in ] interrupt Interrupt condition for processing threads <nl> + * / <nl> bool ProcessMessages ( CNode * pfrom , std : : atomic < bool > & interrupt ) override ; <nl> / * * <nl> * Send queued protocol messages to be sent to a give node . <nl> * <nl> * @ param [ in ] pto The node which we are sending messages to . 
<nl> - * @ param [ in ] interrupt Interrupt condition for processing threads <nl> * @ return True if there is more work to be done <nl> * / <nl> - bool SendMessages ( CNode * pto , std : : atomic < bool > & interrupt ) override EXCLUSIVE_LOCKS_REQUIRED ( pto - > cs_sendProcessing ) ; <nl> + bool SendMessages ( CNode * pto ) override EXCLUSIVE_LOCKS_REQUIRED ( pto - > cs_sendProcessing ) ; <nl> <nl> / * * Consider evicting an outbound peer based on the amount of time they ' ve been behind our tip * / <nl> void ConsiderEviction ( CNode * pto , int64_t time_in_seconds ) ; <nl> mmm a / src / test / denialofservice_tests . cpp <nl> ppp b / src / test / denialofservice_tests . cpp <nl> BOOST_FIXTURE_TEST_SUITE ( denialofservice_tests , TestingSetup ) <nl> / / work . <nl> BOOST_AUTO_TEST_CASE ( outbound_slow_chain_eviction ) <nl> { <nl> - std : : atomic < bool > interruptDummy ( false ) ; <nl> <nl> / / Mock an outbound peer <nl> CAddress addr1 ( ip ( 0xa0b0c001 ) , NODE_NONE ) ; <nl> BOOST_AUTO_TEST_CASE ( outbound_slow_chain_eviction ) <nl> / / Test starts here <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode1 , interruptDummy ) ; / / should result in getheaders <nl> + peerLogic - > SendMessages ( & dummyNode1 ) ; / / should result in getheaders <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_vSend ) ; <nl> BOOST_AUTO_TEST_CASE ( outbound_slow_chain_eviction ) <nl> SetMockTime ( nStartTime + 21 * 60 ) ; <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode1 , interruptDummy ) ; / / should result in getheaders <nl> + peerLogic - > SendMessages ( & dummyNode1 ) ; / / should result in getheaders <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_vSend ) ; <nl> BOOST_AUTO_TEST_CASE ( outbound_slow_chain_eviction ) <nl> SetMockTime ( nStartTime + 24 * 60 ) ; <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode1 , interruptDummy ) ; / / should result in disconnect <nl> + peerLogic - > SendMessages ( & dummyNode1 ) ; / / should result in disconnect <nl> } <nl> BOOST_CHECK ( dummyNode1 . fDisconnect = = true ) ; <nl> SetMockTime ( 0 ) ; <nl> BOOST_AUTO_TEST_CASE ( stale_tip_peer_management ) <nl> <nl> BOOST_AUTO_TEST_CASE ( DoS_banning ) <nl> { <nl> - std : : atomic < bool > interruptDummy ( false ) ; <nl> <nl> connman - > ClearBanned ( ) ; <nl> CAddress addr1 ( ip ( 0xa0b0c001 ) , NODE_NONE ) ; <nl> BOOST_AUTO_TEST_CASE ( DoS_banning ) <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode1 , interruptDummy ) ; <nl> + peerLogic - > SendMessages ( & dummyNode1 ) ; <nl> } <nl> BOOST_CHECK ( connman - > IsBanned ( addr1 ) ) ; <nl> BOOST_CHECK ( ! connman - > IsBanned ( ip ( 0xa0b0c001 | 0x0000ff00 ) ) ) ; / / Different IP , not banned <nl> BOOST_AUTO_TEST_CASE ( DoS_banning ) <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode2 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode2 , interruptDummy ) ; <nl> + peerLogic - > SendMessages ( & dummyNode2 ) ; <nl> } <nl> BOOST_CHECK ( ! connman - > IsBanned ( addr2 ) ) ; / / 2 not banned yet . . . <nl> BOOST_CHECK ( connman - > IsBanned ( addr1 ) ) ; / / . . . but 1 still should be <nl> BOOST_AUTO_TEST_CASE ( DoS_banning ) <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode2 . 
cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode2 , interruptDummy ) ; <nl> + peerLogic - > SendMessages ( & dummyNode2 ) ; <nl> } <nl> BOOST_CHECK ( connman - > IsBanned ( addr2 ) ) ; <nl> <nl> BOOST_AUTO_TEST_CASE ( DoS_banning ) <nl> <nl> BOOST_AUTO_TEST_CASE ( DoS_banscore ) <nl> { <nl> - std : : atomic < bool > interruptDummy ( false ) ; <nl> <nl> connman - > ClearBanned ( ) ; <nl> gArgs . ForceSetArg ( " - banscore " , " 111 " ) ; / / because 11 is my favorite number <nl> BOOST_AUTO_TEST_CASE ( DoS_banscore ) <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode1 , interruptDummy ) ; <nl> + peerLogic - > SendMessages ( & dummyNode1 ) ; <nl> } <nl> BOOST_CHECK ( ! connman - > IsBanned ( addr1 ) ) ; <nl> { <nl> BOOST_AUTO_TEST_CASE ( DoS_banscore ) <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode1 , interruptDummy ) ; <nl> + peerLogic - > SendMessages ( & dummyNode1 ) ; <nl> } <nl> BOOST_CHECK ( ! connman - > IsBanned ( addr1 ) ) ; <nl> { <nl> BOOST_AUTO_TEST_CASE ( DoS_banscore ) <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode1 . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode1 , interruptDummy ) ; <nl> + peerLogic - > SendMessages ( & dummyNode1 ) ; <nl> } <nl> BOOST_CHECK ( connman - > IsBanned ( addr1 ) ) ; <nl> gArgs . ForceSetArg ( " - banscore " , std : : to_string ( DEFAULT_BANSCORE_THRESHOLD ) ) ; <nl> BOOST_AUTO_TEST_CASE ( DoS_banscore ) <nl> <nl> BOOST_AUTO_TEST_CASE ( DoS_bantime ) <nl> { <nl> - std : : atomic < bool > interruptDummy ( false ) ; <nl> <nl> connman - > ClearBanned ( ) ; <nl> int64_t nStartTime = GetTime ( ) ; <nl> BOOST_AUTO_TEST_CASE ( DoS_bantime ) <nl> } <nl> { <nl> LOCK2 ( cs_main , dummyNode . cs_sendProcessing ) ; <nl> - peerLogic - > SendMessages ( & dummyNode , interruptDummy ) ; <nl> + peerLogic - > SendMessages ( & dummyNode ) ; <nl> } <nl> BOOST_CHECK ( connman - > IsBanned ( addr ) ) ; <nl> <nl>
Merge : net : Remove unused interrupt from SendMessages
bitcoin/bitcoin
172f984f598f471f970d2ed4bf6379e9aa33901e
2018-07-09T14:50:09Z
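The bitcoin change is purely subtractive: SendMessages never read its interrupt argument, yet the parameter forced every caller — including each test, which constructed a std::atomic<bool> interruptDummy(false) solely to satisfy the signature — to thread dead state. A cut-down sketch of the call-site effect follows; the type and function names here are illustrative, not Bitcoin Core's.

    #include <atomic>

    struct Node;  // opaque peer handle for the sketch

    // Before: the interface demands an atomic the implementation ignores.
    struct EventsBefore {
        virtual bool SendMessages(Node* pto, std::atomic<bool>& interrupt) = 0;
    };

    // After: the dead parameter is gone; the interrupt flag now lives only in
    // ProcessMessages, the method that actually consumes it.
    struct EventsAfter {
        virtual bool SendMessages(Node* pto) = 0;
    };

    void drive(EventsAfter& events, Node* pto) {
        events.SendMessages(pto);  // no fabricated interruptDummy at the call site
    }

With the parameter gone, the EXCLUSIVE_LOCKS_REQUIRED(pto->cs_sendProcessing) annotation is the only extra obligation left at call sites.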
mmm a / . gitignore <nl> ppp b / . gitignore <nl> cmake_install . cmake <nl> CTestTestfile . cmake <nl> * . a <nl> * . o <nl> + cmake - build - * <nl> <nl> # Python cache <nl> * . pyc <nl> mmm a / CMakeLists . txt <nl> ppp b / CMakeLists . txt <nl> include ( cmake / find_readline_edit . cmake ) <nl> include ( cmake / find_zookeeper . cmake ) <nl> include ( cmake / find_re2 . cmake ) <nl> include ( cmake / find_rdkafka . cmake ) <nl> + include ( cmake / find_capnp . cmake ) <nl> <nl> include ( cmake / find_contrib_lib . cmake ) <nl> find_contrib_lib ( cityhash ) <nl> new file mode 100644 <nl> index 00000000000 . . 26407d51817 <nl> mmm / dev / null <nl> ppp b / cmake / find_capnp . cmake <nl> <nl> + option ( ENABLE_CAPNP " Enable Cap ' n Proto " ON ) <nl> + <nl> + if ( ENABLE_CAPNP ) <nl> + set ( CAPNP_PATHS " / usr / local / opt / capnp / lib " ) <nl> + set ( CAPNP_INCLUDE_PATHS " / usr / local / opt / capnp / include " ) <nl> + find_library ( CAPNP capnp PATHS $ { CAPNP_PATHS } ) <nl> + find_library ( CAPNPC capnpc PATHS $ { CAPNP_PATHS } ) <nl> + find_library ( KJ kj PATHS $ { CAPNP_PATHS } ) <nl> + set ( CAPNP_LIBS $ { CAPNP } $ { CAPNPC } $ { KJ } ) <nl> + <nl> + find_path ( CAPNP_INCLUDE_DIR NAMES capnp / schema - parser . h PATHS $ { CAPNP_INCLUDE_PATHS } ) <nl> + if ( CAPNP_INCLUDE_DIR AND CAPNP_LIBS ) <nl> + include_directories ( $ { CAPNP_INCLUDE_DIR } ) <nl> + set ( USE_CAPNP 1 ) <nl> + endif ( ) <nl> + endif ( ) <nl> + <nl> + if ( USE_CAPNP ) <nl> + message ( STATUS " Using capnp = $ { USE_CAPNP } : $ { CAPNP_INCLUDE_DIR } : $ { CAPNP_LIBS } " ) <nl> + else ( ) <nl> + message ( STATUS " Build without capnp ( support for Cap ' n Proto format will be disabled ) " ) <nl> + endif ( ) <nl> mmm a / contrib / poco <nl> ppp b / contrib / poco <nl> @ @ - 1 + 1 @ @ <nl> - Subproject commit ad1643c6698a8c890b68186d5c9d72e496c27af2 <nl> + Subproject commit 1366df1c7e068bb2efd846bc8dc8e286b090904e <nl> mmm a / dbms / CMakeLists . txt <nl> ppp b / dbms / CMakeLists . txt <nl> if ( USE_ICU ) <nl> target_link_libraries ( dbms $ { ICU_LIBS } ) <nl> endif ( ) <nl> <nl> + if ( USE_CAPNP ) <nl> + target_link_libraries ( dbms $ { CAPNP_LIBS } ) <nl> + endif ( ) <nl> + <nl> target_link_libraries ( dbms <nl> $ { PLATFORM_LIBS } <nl> $ { CMAKE_DL_LIBS } <nl> mmm a / dbms / src / Common / Allocator . cpp <nl> ppp b / dbms / src / Common / Allocator . cpp <nl> <nl> + # include < Common / Allocator . h > <nl> + <nl> # if ! defined ( __APPLE__ ) & & ! defined ( __FreeBSD__ ) <nl> # include < malloc . h > <nl> # endif <nl> <nl> <nl> # include < Common / MemoryTracker . h > <nl> # include < Common / Exception . h > <nl> - # include < Common / Allocator . h > <nl> - <nl> + # include < Common / formatReadable . h > <nl> # include < IO / WriteHelpers . h > <nl> <nl> + <nl> / / / Required for older Darwin builds , that lack definition of MAP_ANONYMOUS <nl> # ifndef MAP_ANONYMOUS <nl> # define MAP_ANONYMOUS MAP_ANON <nl> void * Allocator < clear_memory_ > : : alloc ( size_t size , size_t alignment ) <nl> if ( size > = MMAP_THRESHOLD ) <nl> { <nl> if ( alignment > MMAP_MIN_ALIGNMENT ) <nl> - throw DB : : Exception ( " Too large alignment : more than page size . " , DB : : ErrorCodes : : BAD_ARGUMENTS ) ; <nl> + throw DB : : Exception ( " Too large alignment " + formatReadableSizeWithBinarySuffix ( alignment ) + " : more than page size when allocating " <nl> + + formatReadableSizeWithBinarySuffix ( size ) + " . 
" , DB : : ErrorCodes : : BAD_ARGUMENTS ) ; <nl> <nl> buf = mmap ( nullptr , size , PROT_READ | PROT_WRITE , MAP_PRIVATE | MAP_ANONYMOUS , - 1 , 0 ) ; <nl> if ( MAP_FAILED = = buf ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot mmap . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot mmap " + formatReadableSizeWithBinarySuffix ( size ) + " . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> <nl> / / / No need for zero - fill , because mmap guarantees it . <nl> } <nl> void * Allocator < clear_memory_ > : : alloc ( size_t size , size_t alignment ) <nl> buf = : : malloc ( size ) ; <nl> <nl> if ( nullptr = = buf ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot malloc . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot malloc " + formatReadableSizeWithBinarySuffix ( size ) + " . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> } <nl> else <nl> { <nl> void * Allocator < clear_memory_ > : : alloc ( size_t size , size_t alignment ) <nl> int res = posix_memalign ( & buf , alignment , size ) ; <nl> <nl> if ( 0 ! = res ) <nl> - DB : : throwFromErrno ( " Cannot allocate memory ( posix_memalign ) " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY , res ) ; <nl> + DB : : throwFromErrno ( " Cannot allocate memory ( posix_memalign ) " + formatReadableSizeWithBinarySuffix ( size ) + " . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY , res ) ; <nl> <nl> if ( clear_memory ) <nl> memset ( buf , 0 , size ) ; <nl> void Allocator < clear_memory_ > : : free ( void * buf , size_t size ) <nl> if ( size > = MMAP_THRESHOLD ) <nl> { <nl> if ( 0 ! = munmap ( buf , size ) ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot munmap . " , DB : : ErrorCodes : : CANNOT_MUNMAP ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot munmap " + formatReadableSizeWithBinarySuffix ( size ) + " . " , DB : : ErrorCodes : : CANNOT_MUNMAP ) ; <nl> } <nl> else <nl> { <nl> void * Allocator < clear_memory_ > : : realloc ( void * buf , size_t old_size , size_t new <nl> buf = : : realloc ( buf , new_size ) ; <nl> <nl> if ( nullptr = = buf ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot realloc . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot realloc from " + formatReadableSizeWithBinarySuffix ( old_size ) + " to " + formatReadableSizeWithBinarySuffix ( new_size ) + " . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> <nl> if ( clear_memory ) <nl> memset ( reinterpret_cast < char * > ( buf ) + old_size , 0 , new_size - old_size ) ; <nl> void * Allocator < clear_memory_ > : : realloc ( void * buf , size_t old_size , size_t new <nl> <nl> buf = mremap ( buf , old_size , new_size , MREMAP_MAYMOVE ) ; <nl> if ( MAP_FAILED = = buf ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot mremap memory chunk from " + DB : : toString ( old_size ) + " to " + DB : : toString ( new_size ) + " bytes . " , DB : : ErrorCodes : : CANNOT_MREMAP ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot mremap memory chunk from " + formatReadableSizeWithBinarySuffix ( old_size ) + " to " + formatReadableSizeWithBinarySuffix ( new_size ) + " . " , DB : : ErrorCodes : : CANNOT_MREMAP ) ; <nl> <nl> / / / No need for zero - fill , because mmap guarantees it . 
<nl> } <nl> void * Allocator < clear_memory_ > : : realloc ( void * buf , size_t old_size , size_t new <nl> buf = : : realloc ( buf , new_size ) ; <nl> <nl> if ( nullptr = = buf ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot realloc . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot realloc from " + formatReadableSizeWithBinarySuffix ( old_size ) + " to " + formatReadableSizeWithBinarySuffix ( new_size ) + " . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> <nl> if ( clear_memory ) <nl> memset ( reinterpret_cast < char * > ( buf ) + old_size , 0 , new_size - old_size ) ; <nl> mmm a / dbms / src / Common / ArrayCache . h <nl> ppp b / dbms / src / Common / ArrayCache . h <nl> <nl> <nl> # include < Common / Exception . h > <nl> # include < Common / randomSeed . h > <nl> + # include < Common / formatReadable . h > <nl> <nl> / / / Required for older Darwin builds , that lack definition of MAP_ANONYMOUS <nl> # ifndef MAP_ANONYMOUS <nl> class ArrayCache : private boost : : noncopyable <nl> { <nl> ptr = mmap ( address_hint , size , PROT_READ | PROT_WRITE , MAP_PRIVATE | MAP_ANONYMOUS , - 1 , 0 ) ; <nl> if ( MAP_FAILED = = ptr ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot mmap . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot mmap " + formatReadableSizeWithBinarySuffix ( size ) + " . " , DB : : ErrorCodes : : CANNOT_ALLOCATE_MEMORY ) ; <nl> } <nl> <nl> ~ Chunk ( ) <nl> { <nl> if ( ptr & & 0 ! = munmap ( ptr , size ) ) <nl> - DB : : throwFromErrno ( " Allocator : Cannot munmap . " , DB : : ErrorCodes : : CANNOT_MUNMAP ) ; <nl> + DB : : throwFromErrno ( " Allocator : Cannot munmap " + formatReadableSizeWithBinarySuffix ( size ) + " . " , DB : : ErrorCodes : : CANNOT_MUNMAP ) ; <nl> } <nl> <nl> Chunk ( Chunk & & other ) : ptr ( other . ptr ) , size ( other . size ) <nl> mmm a / dbms / src / Common / MemoryTracker . cpp <nl> ppp b / dbms / src / Common / MemoryTracker . cpp <nl> namespace DB <nl> MemoryTracker : : ~ MemoryTracker ( ) <nl> { <nl> if ( peak ) <nl> - logPeakMemoryUsage ( ) ; <nl> + { <nl> + try <nl> + { <nl> + logPeakMemoryUsage ( ) ; <nl> + } <nl> + catch ( . . . ) <nl> + { <nl> + / / / Exception in Logger , intentionally swallow . <nl> + } <nl> + } <nl> <nl> / * * This is needed for next memory tracker to be consistent with sum of all referring memory trackers . <nl> * <nl> mmm a / dbms / src / Common / ZooKeeper / KeeperException . h <nl> ppp b / dbms / src / Common / ZooKeeper / KeeperException . 
h <nl> namespace ProfileEvents <nl> namespace zkutil <nl> { <nl> <nl> + <nl> + / / / You should reinitialize ZooKeeper session in case of these errors <nl> + inline bool isUnrecoverableErrorCode ( int32_t zk_return_code ) <nl> + { <nl> + return zk_return_code = = ZINVALIDSTATE | | zk_return_code = = ZSESSIONEXPIRED | | zk_return_code = = ZSESSIONMOVED ; <nl> + } <nl> + <nl> + / / / Errors related to temporary network problems <nl> + inline bool isTemporaryErrorCode ( int32_t zk_return_code ) <nl> + { <nl> + return zk_return_code = = ZCONNECTIONLOSS | | zk_return_code = = ZOPERATIONTIMEOUT ; <nl> + } <nl> + <nl> + / / / Any error related to network or master election <nl> + / / / In case of these errors you should retry the query or reinitialize ZooKeeper session ( see isUnrecoverable ( ) ) <nl> + inline bool isHardwareErrorCode ( int32_t zk_return_code ) <nl> + { <nl> + return isUnrecoverableErrorCode ( zk_return_code ) | | isTemporaryErrorCode ( zk_return_code ) ; <nl> + } <nl> + <nl> + <nl> class KeeperException : public DB : : Exception <nl> { <nl> private : <nl> class KeeperException : public DB : : Exception <nl> : DB : : Exception ( msg , DB : : ErrorCodes : : KEEPER_EXCEPTION ) , code ( code ) { incrementEventCounter ( ) ; } <nl> <nl> public : <nl> - KeeperException ( const std : : string & msg ) : KeeperException ( msg , ZOK , 0 ) { } <nl> + explicit KeeperException ( const std : : string & msg ) : KeeperException ( msg , ZOK , 0 ) { } <nl> KeeperException ( const std : : string & msg , const int32_t code ) <nl> : KeeperException ( msg + " ( " + zerror ( code ) + " ) " , code , 0 ) { } <nl> - KeeperException ( const int32_t code ) : KeeperException ( zerror ( code ) , code , 0 ) { } <nl> + explicit KeeperException ( const int32_t code ) : KeeperException ( zerror ( code ) , code , 0 ) { } <nl> KeeperException ( const int32_t code , const std : : string & path ) <nl> : KeeperException ( std : : string { zerror ( code ) } + " , path : " + path , code , 0 ) { } <nl> <nl> KeeperException ( const KeeperException & exc ) : DB : : Exception ( exc ) , code ( exc . code ) { incrementEventCounter ( ) ; } <nl> <nl> - const char * name ( ) const throw ( ) { return " zkutil : : KeeperException " ; } <nl> - const char * className ( ) const throw ( ) { return " zkutil : : KeeperException " ; } <nl> - KeeperException * clone ( ) const { return new KeeperException ( * this ) ; } <nl> + const char * name ( ) const throw ( ) override { return " zkutil : : KeeperException " ; } <nl> + const char * className ( ) const throw ( ) override { return " zkutil : : KeeperException " ; } <nl> + KeeperException * clone ( ) const override { return new KeeperException ( * this ) ; } <nl> <nl> - / / / in case of these errors , the session with zookeeper must be reinitialized <nl> bool isUnrecoverable ( ) const <nl> { <nl> - return code = = ZINVALIDSTATE | | code = = ZSESSIONEXPIRED | | code = = ZSESSIONMOVED ; <nl> + return isUnrecoverableErrorCode ( code ) ; <nl> } <nl> <nl> - / / / any error related to networking or master re - election <nl> - / / / in case of these errors you should either retry the query or reinitialize the session ( see isUnrecoverable ( ) ) <nl> - bool isHardwareError ( ) const <nl> + / / / Errors related to temporary network problems <nl> + bool isTemporaryError ( ) const <nl> { <nl> - return isUnrecoverable ( ) | | code = = ZCONNECTIONLOSS | | code = = ZOPERATIONTIMEOUT ; <nl> + return isTemporaryErrorCode ( code ) ; <nl> } <nl> <nl> - bool isTemporaryError ( ) const <nl> + / / / Any error related to network or master election <nl> + / / / In case of these errors you should retry the query or reinitialize ZooKeeper session ( see isUnrecoverable ( ) ) <nl> + bool isHardwareError ( ) const <nl> { <nl> - return code = = ZCONNECTIONLOSS | | code = = ZOPERATIONTIMEOUT ; <nl> + return isHardwareErrorCode ( code ) ; <nl> } <nl> <nl> const int32_t code ; <nl> mmm a / dbms / src / Common / config . h . in <nl> ppp b / dbms / src / Common / config . h . in <nl> <nl> # cmakedefine01 USE_RE2_ST <nl> # cmakedefine01 USE_VECTORCLASS <nl> # cmakedefine01 USE_RDKAFKA <nl> + # cmakedefine01 USE_CAPNP <nl> # cmakedefine01 Poco_DataODBC_FOUND <nl> # cmakedefine01 Poco_MongoDB_FOUND <nl> # cmakedefine01 Poco_NetSSL_FOUND <nl> new file mode 100644 <nl> index 00000000000 . . fa48e450ca8 <nl> mmm / dev / null <nl> ppp b / dbms / src / DataStreams / CapnProtoRowInputStream . cpp <nl> <nl> + # if USE_CAPNP <nl> + <nl> + # include < Core / Block . h > <nl> + # include < IO / ReadBuffer . h > <nl> + # include < DataStreams / CapnProtoRowInputStream . h > <nl> + <nl> + # include < capnp / serialize . h > <nl> + # include < capnp / dynamic . h > <nl> + # include < boost / algorithm / string . hpp > <nl> + # include < boost / range / join . hpp > <nl> + # include < common / logger_useful . h > <nl> + <nl> + <nl> + namespace DB <nl> + { <nl> + <nl> + <nl> + CapnProtoRowInputStream : : NestedField split ( const Block & sample , size_t i ) <nl> + { <nl> + CapnProtoRowInputStream : : NestedField field = { { } , i } ; <nl> + <nl> + / / Remove leading dot in field definition , e . g . " . msg " - > " msg " <nl> + String name ( sample . safeGetByPosition ( i ) . name ) ; <nl> + if ( name . size ( ) > 0 & & name [ 0 ] = = ' . ' ) <nl> + name . erase ( 0 , 1 ) ; <nl> + <nl> + boost : : split ( field . tokens , name , boost : : is_any_of ( " . " ) ) ; <nl> + return field ; <nl> + } <nl> + <nl> + <nl> + Field convertNodeToField ( capnp : : DynamicValue : : Reader value ) <nl> + { <nl> + switch ( value . getType ( ) ) { <nl> + case capnp : : DynamicValue : : UNKNOWN : <nl> + throw Exception ( " Unknown field type " ) ; <nl> + case capnp : : DynamicValue : : VOID : <nl> + return Field ( ) ; <nl> + case capnp : : DynamicValue : : BOOL : <nl> + return UInt64 ( value . as < bool > ( ) ? 1 : 0 ) ; <nl> + case capnp : : DynamicValue : : INT : <nl> + return Int64 ( ( value . as < int64_t > ( ) ) ) ; <nl> + case capnp : : DynamicValue : : UINT : <nl> + return UInt64 ( value . as < uint64_t > ( ) ) ; <nl> + case capnp : : DynamicValue : : FLOAT : <nl> + return Float64 ( value . as < double > ( ) ) ; <nl> + case capnp : : DynamicValue : : TEXT : <nl> + { <nl> + auto arr = value . as < capnp : : Text > ( ) ; <nl> + return String ( arr . begin ( ) , arr . size ( ) ) ; <nl> + } <nl> + case capnp : : DynamicValue : : DATA : <nl> + { <nl> + auto arr = value . as < capnp : : Data > ( ) . asChars ( ) ; <nl> + return String ( arr . begin ( ) , arr . size ( ) ) ; <nl> + } <nl> + case capnp : : DynamicValue : : LIST : <nl> + { <nl> + auto listValue = value . as < capnp : : DynamicList > ( ) ; <nl> + Array res ( listValue . 
size ( ) ) ; <nl> + for ( auto i : kj : : indices ( listValue ) ) <nl> + res [ i ] = convertNodeToField ( listValue [ i ] ) ; <nl> + return res ; <nl> + } <nl> + case capnp : : DynamicValue : : ENUM : <nl> + return UInt64 ( value . as < capnp : : DynamicEnum > ( ) . getRaw ( ) ) ; <nl> + case capnp : : DynamicValue : : STRUCT : <nl> + throw Exception ( " STRUCT type not supported , read individual fields instead " ) ; <nl> + case capnp : : DynamicValue : : CAPABILITY : <nl> + throw Exception ( " CAPABILITY type not supported " ) ; <nl> + case capnp : : DynamicValue : : ANY_POINTER : <nl> + throw Exception ( " ANY_POINTER type not supported " ) ; <nl> + } <nl> + } <nl> + <nl> + capnp : : StructSchema : : Field getFieldOrThrow ( capnp : : StructSchema node , const std : : string & field ) <nl> + { <nl> + KJ_IF_MAYBE ( child , node . findFieldByName ( field ) ) <nl> + return * child ; <nl> + else <nl> + throw Exception ( " Field " + field + " doesn ' t exist in schema . " ) ; <nl> + } <nl> + <nl> + void CapnProtoRowInputStream : : createActions ( const NestedFieldList & sortedFields , capnp : : StructSchema reader ) <nl> + { <nl> + String last ; <nl> + size_t level = 0 ; <nl> + capnp : : StructSchema : : Field parent ; <nl> + <nl> + for ( const auto & field : sortedFields ) <nl> + { <nl> + / / Move to a different field in the same structure , keep parent <nl> + if ( level > 0 & & field . tokens [ level - 1 ] ! = last ) <nl> + { <nl> + auto child = getFieldOrThrow ( parent . getContainingStruct ( ) , field . tokens [ level - 1 ] ) ; <nl> + reader = child . getType ( ) . asStruct ( ) ; <nl> + actions . push_back ( { Action : : POP } ) ; <nl> + actions . push_back ( { Action : : PUSH , child } ) ; <nl> + } <nl> + / / Descend to a nested structure <nl> + for ( ; level < field . tokens . size ( ) - 1 ; + + level ) <nl> + { <nl> + last = field . tokens [ level ] ; <nl> + parent = getFieldOrThrow ( reader , last ) ; <nl> + reader = parent . getType ( ) . asStruct ( ) ; <nl> + actions . push_back ( { Action : : PUSH , parent } ) ; <nl> + } <nl> + / / Read field from the structure <nl> + actions . push_back ( { Action : : READ , getFieldOrThrow ( reader , field . tokens [ level ] ) , field . pos } ) ; <nl> + } <nl> + } <nl> + <nl> + CapnProtoRowInputStream : : CapnProtoRowInputStream ( ReadBuffer & istr_ , const Block & sample_ , const String & schema_file , const String & root_object ) <nl> + : istr ( istr_ ) , sample ( sample_ ) , parser ( std : : make_shared < SchemaParser > ( ) ) <nl> + { <nl> + / / Parse the schema and fetch the root object <nl> + auto schema = parser - > impl . parseDiskFile ( schema_file , schema_file , { } ) ; <nl> + root = schema . getNested ( root_object ) . asStruct ( ) ; <nl> + <nl> + / * * <nl> + * The schema typically consists of fields in various nested structures . <nl> + * Here we gather the list of fields and sort them so that fields in the same structure are adjacent , <nl> + * and the nesting level doesn ' t decrease , to make traversal easier . <nl> + * / <nl> + NestedFieldList list ; <nl> + size_t columns = sample . columns ( ) ; <nl> + for ( size_t i = 0 ; i < columns ; + + i ) <nl> + list . push_back ( split ( sample , i ) ) ; <nl> + <nl> + / / Reorder list to make sure we don ' t have to backtrack <nl> + std : : sort ( list . begin ( ) , list . end ( ) , [ ] ( const NestedField & a , const NestedField & b ) <nl> + { <nl> + if ( a . tokens . size ( ) = = b . tokens . size ( ) ) <nl> + return a . tokens < b . tokens ; <nl> + return a . tokens . 
size ( ) < b . tokens . size ( ) ; <nl> + } ) ; <nl> + <nl> + createActions ( list , root ) ; <nl> + } <nl> + <nl> + <nl> + bool CapnProtoRowInputStream : : read ( Block & block ) <nl> + { <nl> + if ( istr . eof ( ) ) <nl> + return false ; <nl> + <nl> + / / Read from underlying buffer directly <nl> + auto buf = istr . buffer ( ) ; <nl> + auto base = reinterpret_cast < const capnp : : word * > ( istr . position ( ) ) ; <nl> + <nl> + / / Check if there are enough bytes in the buffer to read the full message <nl> + kj : : Array < capnp : : word > heap_array ; <nl> + auto array = kj : : arrayPtr ( base , buf . size ( ) - istr . offset ( ) ) ; <nl> + auto expected_words = capnp : : expectedSizeInWordsFromPrefix ( array ) ; <nl> + if ( expected_words * sizeof ( capnp : : word ) > array . size ( ) ) <nl> + { <nl> + / / We ' ll need to reassemble the message in a contiguous buffer <nl> + heap_array = kj : : heapArray < capnp : : word > ( expected_words ) ; <nl> + istr . readStrict ( heap_array . asChars ( ) . begin ( ) , heap_array . asChars ( ) . size ( ) ) ; <nl> + array = heap_array . asPtr ( ) ; <nl> + } <nl> + <nl> + capnp : : FlatArrayMessageReader msg ( array ) ; <nl> + std : : vector < capnp : : DynamicStruct : : Reader > stack ; <nl> + stack . push_back ( msg . getRoot < capnp : : DynamicStruct > ( root ) ) ; <nl> + <nl> + for ( auto action : actions ) <nl> + { <nl> + switch ( action . type ) { <nl> + case Action : : READ : { <nl> + auto & col = block . getByPosition ( action . column ) ; <nl> + Field value = convertNodeToField ( stack . back ( ) . get ( action . field ) ) ; <nl> + col . column - > insert ( value ) ; <nl> + break ; <nl> + } <nl> + case Action : : POP : <nl> + stack . pop_back ( ) ; <nl> + break ; <nl> + case Action : : PUSH : <nl> + stack . push_back ( stack . back ( ) . get ( action . field ) . as < capnp : : DynamicStruct > ( ) ) ; <nl> + break ; <nl> + } <nl> + } <nl> + <nl> + / / Advance buffer position if used directly <nl> + if ( heap_array . size ( ) = = 0 ) <nl> + { <nl> + auto parsed = ( msg . getEnd ( ) - base ) * sizeof ( capnp : : word ) ; <nl> + istr . position ( ) + = parsed ; <nl> + } <nl> + <nl> + return true ; <nl> + } <nl> + <nl> + } <nl> + <nl> + # endif <nl> new file mode 100644 <nl> index 00000000000 . . f5712945b3d <nl> mmm / dev / null <nl> ppp b / dbms / src / DataStreams / CapnProtoRowInputStream . h <nl> <nl> + # pragma once <nl> + <nl> + # include < Core / Block . h > <nl> + # include < DataStreams / IRowInputStream . h > <nl> + <nl> + # include < capnp / schema - parser . h > <nl> + <nl> namespace DB <nl> { <nl> <nl> class ReadBuffer ; <nl> <nl> / * * A stream for reading messages in Cap ' n Proto format using a given schema . <nl> + * Like Protocol Buffers and Thrift ( but unlike JSON or MessagePack ) , <nl> + * Cap ' n Proto messages are strongly - typed and not self - describing . <nl> + * The schema in this case cannot be compiled in , so it uses a runtime schema parser . <nl> + * See https : / / capnproto . org / cxx . html <nl> + * / <nl> + class CapnProtoRowInputStream : public IRowInputStream <nl> + { <nl> + public : <nl> + struct NestedField <nl> + { <nl> + std : : vector < std : : string > tokens ; <nl> + size_t pos ; <nl> + } ; <nl> + using NestedFieldList = std : : vector < NestedField > ; <nl> + <nl> + / * * schema_file - location of the capnproto schema , e . g . " schema . capnp " <nl> + * root_object - name of the root object , e . g . 
" Message " <nl> + * / <nl> + CapnProtoRowInputStream ( ReadBuffer & istr_ , const Block & sample_ , const String & schema_file , const String & root_object ) ; <nl> + <nl> + bool read ( Block & block ) override ; <nl> + <nl> + private : <nl> + / / Build a traversal plan from a sorted list of fields <nl> + void createActions ( const NestedFieldList & sortedFields , capnp : : StructSchema reader ) ; <nl> + <nl> + / * Action for state machine for traversing nested structures . * / <nl> + struct Action <nl> + { <nl> + enum Type { POP , PUSH , READ } ; <nl> + Type type ; <nl> + capnp : : StructSchema : : Field field ; <nl> + size_t column ; <nl> + } ; <nl> + <nl> + / / Wrapper for classes that could throw in destructor <nl> + / / https : / / github . com / capnproto / capnproto / issues / 553 <nl> + template < typename T > <nl> + struct DestructorCatcher <nl> + { <nl> + T impl ; <nl> + template < typename . . . Arg > <nl> + DestructorCatcher ( Arg & & . . . args ) : impl ( kj : : fwd < Arg > ( args ) . . . ) { } <nl> + ~ DestructorCatcher ( ) noexcept try { } catch ( . . . ) { } <nl> + } ; <nl> + using SchemaParser = DestructorCatcher < capnp : : SchemaParser > ; <nl> + <nl> + ReadBuffer & istr ; <nl> + const Block sample ; <nl> + std : : shared_ptr < SchemaParser > parser ; <nl> + capnp : : StructSchema root ; <nl> + std : : vector < Action > actions ; <nl> + } ; <nl> + <nl> + } <nl> mmm a / dbms / src / DataStreams / FormatFactory . cpp <nl> ppp b / dbms / src / DataStreams / FormatFactory . cpp <nl> <nl> + # include < Common / config . h > <nl> # include < Interpreters / Context . h > <nl> # include < DataStreams / NativeBlockInputStream . h > <nl> # include < DataStreams / NativeBlockOutputStream . h > <nl> <nl> # include < DataStreams / FormatFactory . h > <nl> # include < DataStreams / SquashingBlockOutputStream . h > <nl> # include < DataTypes / FormatSettingsJSON . h > <nl> + # if USE_CAPNP <nl> + # include < DataStreams / CapnProtoRowInputStream . h > <nl> + # endif <nl> + <nl> + # include < boost / algorithm / string . hpp > <nl> <nl> namespace DB <nl> { <nl> BlockInputStreamPtr FormatFactory : : getInput ( const String & name , ReadBuffer & bu <nl> { <nl> return wrap_row_stream ( std : : make_shared < JSONEachRowRowInputStream > ( buf , sample , settings . input_format_skip_unknown_fields ) ) ; <nl> } <nl> + # if USE_CAPNP <nl> + else if ( name = = " CapnProto " ) <nl> + { <nl> + std : : vector < String > tokens ; <nl> + auto schema_and_root = settings . format_schema . toString ( ) ; <nl> + boost : : split ( tokens , schema_and_root , boost : : is_any_of ( " : " ) ) ; <nl> + if ( tokens . size ( ) ! = 2 ) <nl> + throw Exception ( " Format CapnProto requires ' format_schema ' setting to have schema_file : root_object format , e . g . ' schema . capnp : Message ' " ) ; <nl> + <nl> + return wrap_row_stream ( std : : make_shared < CapnProtoRowInputStream > ( buf , sample , tokens [ 0 ] , tokens [ 1 ] ) ) ; <nl> + } <nl> + # endif <nl> else if ( name = = " TabSeparatedRaw " <nl> | | name = = " TSVRaw " <nl> | | name = = " BlockTabSeparated " <nl> new file mode 100644 <nl> index 00000000000 . . 6a18e2c7fa3 <nl> mmm / dev / null <nl> ppp b / dbms / src / DataStreams / PushingToViewsBlockOutputStream . cpp <nl> <nl> + # include " PushingToViewsBlockOutputStream . h " <nl> + # include < Storages / MergeTree / ReplicatedMergeTreeBlockOutputStream . 
h > <nl> + <nl> + <nl> + namespace DB <nl> + { <nl> + <nl> + PushingToViewsBlockOutputStream : : PushingToViewsBlockOutputStream ( String database , String table , const Context & context_ , <nl> + const ASTPtr & query_ptr_ , bool no_destination ) <nl> + : context ( context_ ) , query_ptr ( query_ptr_ ) <nl> + { <nl> + storage = context . getTable ( database , table ) ; <nl> + <nl> + / * * TODO This is a very important line . At any insertion into the table one of streams should own lock . <nl> + * Although now any insertion into the table is done via PushingToViewsBlockOutputStream , <nl> + * but it ' s clear that here is not the best place for this functionality . <nl> + * / <nl> + addTableLock ( storage - > lockStructure ( true , __PRETTY_FUNCTION__ ) ) ; <nl> + <nl> + Dependencies dependencies = context . getDependencies ( database , table ) ; <nl> + <nl> + / / / We need special context for materialized views insertions <nl> + if ( ! dependencies . empty ( ) ) <nl> + { <nl> + views_context = std : : make_unique < Context > ( context ) ; <nl> + / / Do not deduplicate insertions into MV if the main insertion is Ok <nl> + views_context - > getSettingsRef ( ) . insert_deduplicate = false ; <nl> + } <nl> + <nl> + for ( const auto & database_table : dependencies ) <nl> + { <nl> + auto dependent_table = context . getTable ( database_table . first , database_table . second ) ; <nl> + auto & materialized_view = dynamic_cast < const StorageMaterializedView & > ( * dependent_table ) ; <nl> + <nl> + auto query = materialized_view . getInnerQuery ( ) ; <nl> + auto next = std : : make_shared < PushingToViewsBlockOutputStream > ( database_table . first , database_table . second , * views_context , ASTPtr ( ) ) ; <nl> + <nl> + views . emplace_back ( std : : move ( query ) , std : : move ( next ) ) ; <nl> + } <nl> + <nl> + / * Do not push to destination table if the flag is set * / <nl> + if ( ! no_destination ) <nl> + { <nl> + output = storage - > write ( query_ptr , context . getSettingsRef ( ) ) ; <nl> + replicated_output = dynamic_cast < ReplicatedMergeTreeBlockOutputStream * > ( output . get ( ) ) ; <nl> + } <nl> + } <nl> + <nl> + <nl> + void PushingToViewsBlockOutputStream : : write ( const Block & block ) <nl> + { <nl> + if ( output ) <nl> + output - > write ( block ) ; <nl> + <nl> + / / / Don ' t process materialized views if this block is duplicate <nl> + if ( replicated_output & & replicated_output - > lastBlockIsDuplicate ( ) ) <nl> + return ; <nl> + <nl> + / / / Insert data into materialized views only after successful insert into main table <nl> + for ( auto & view : views ) <nl> + { <nl> + BlockInputStreamPtr from = std : : make_shared < OneBlockInputStream > ( block ) ; <nl> + InterpreterSelectQuery select ( view . first , * views_context , QueryProcessingStage : : Complete , 0 , from ) ; <nl> + BlockInputStreamPtr data = std : : make_shared < MaterializingBlockInputStream > ( select . execute ( ) . in ) ; <nl> + copyData ( * data , * view . second ) ; <nl> + } <nl> + } <nl> + <nl> + } <nl> mmm a / dbms / src / DataStreams / PushingToViewsBlockOutputStream . h <nl> ppp b / dbms / src / DataStreams / PushingToViewsBlockOutputStream . h <nl> <nl> namespace DB <nl> { <nl> <nl> + class ReplicatedMergeTreeBlockOutputStream ; <nl> + <nl> <nl> / * * Writes data to the specified table and to all dependent materialized views . 
<nl> * / <nl> class PushingToViewsBlockOutputStream : public IBlockOutputStream <nl> { <nl> public : <nl> - PushingToViewsBlockOutputStream ( String database , String table , const Context & context_ , const ASTPtr & query_ptr_ , bool no_destination = false ) <nl> - : context ( context_ ) , query_ptr ( query_ptr_ ) <nl> - { <nl> - storage = context . getTable ( database , table ) ; <nl> - <nl> - / * * TODO This is a very important line . At any insertion into the table one of streams should own lock . <nl> - * Although now any insertion into the table is done via PushingToViewsBlockOutputStream , <nl> - * but it ' s clear that here is not the best place for this functionality . <nl> - * / <nl> - addTableLock ( storage - > lockStructure ( true , __PRETTY_FUNCTION__ ) ) ; <nl> - <nl> - Dependencies dependencies = context . getDependencies ( database , table ) ; <nl> - for ( const auto & database_table : dependencies ) <nl> - views . emplace_back ( <nl> - dynamic_cast < const StorageMaterializedView & > ( * context . getTable ( database_table . first , database_table . second ) ) . getInnerQuery ( ) , <nl> - std : : make_shared < PushingToViewsBlockOutputStream > ( database_table . first , database_table . second , context , ASTPtr ( ) ) ) ; <nl> - <nl> - / * Do not push to destination table if the flag is set * / <nl> - if ( ! no_destination ) <nl> - output = storage - > write ( query_ptr , context . getSettingsRef ( ) ) ; <nl> - } <nl> - <nl> - void write ( const Block & block ) override <nl> - { <nl> - for ( auto & view : views ) <nl> - { <nl> - BlockInputStreamPtr from = std : : make_shared < OneBlockInputStream > ( block ) ; <nl> - InterpreterSelectQuery select ( view . first , context , QueryProcessingStage : : Complete , 0 , from ) ; <nl> - BlockInputStreamPtr data = std : : make_shared < MaterializingBlockInputStream > ( select . execute ( ) . in ) ; <nl> - copyData ( * data , * view . second ) ; <nl> - } <nl> + PushingToViewsBlockOutputStream ( String database , String table , const Context & context_ , const ASTPtr & query_ptr_ , bool no_destination = false ) ; <nl> <nl> - if ( output ) <nl> - output - > write ( block ) ; <nl> - } <nl> + void write ( const Block & block ) override ; <nl> <nl> void flush ( ) override <nl> { <nl> class PushingToViewsBlockOutputStream : public IBlockOutputStream <nl> } <nl> <nl> private : <nl> + <nl> StoragePtr storage ; <nl> BlockOutputStreamPtr output ; <nl> + ReplicatedMergeTreeBlockOutputStream * replicated_output = nullptr ; <nl> + <nl> const Context & context ; <nl> ASTPtr query_ptr ; <nl> + <nl> std : : vector < std : : pair < ASTPtr , BlockOutputStreamPtr > > views ; <nl> + std : : unique_ptr < Context > views_context ; <nl> } ; <nl> <nl> <nl> mmm a / dbms / src / Functions / FunctionHelpers . cpp <nl> ppp b / dbms / src / Functions / FunctionHelpers . cpp <nl> Block createBlockWithNestedColumns ( const Block & block , ColumnNumbers args , size <nl> if ( col . type - > isNullable ( ) ) <nl> { <nl> bool is_const = col . column - > isConst ( ) ; <nl> - auto const_col = static_cast < const ColumnConst * > ( col . column . get ( ) ) ; <nl> + auto const_col = typeid_cast < const ColumnConst * > ( col . column . get ( ) ) ; <nl> <nl> if ( is_const & & ! const_col - > getDataColumn ( ) . isNullable ( ) ) <nl> throw Exception ( " Column at position " + toString ( i + 1 ) + " with type " + col . type - > getName ( ) + <nl> mmm a / dbms / src / Functions / IFunction . cpp <nl> ppp b / dbms / src / Functions / IFunction . 
cpp <nl> bool defaultImplementationForNulls ( <nl> const ColumnWithTypeAndName & source_col = temporary_block . getByPosition ( result ) ; <nl> ColumnWithTypeAndName & dest_col = block . getByPosition ( result ) ; <nl> <nl> - if ( source_col . column - > isConst ( ) ) <nl> - dest_col . column = source_col . column ; <nl> - else <nl> - { <nl> - / / / Initialize the result column . <nl> - ColumnPtr null_map = std : : make_shared < ColumnUInt8 > ( block . rows ( ) , 0 ) ; <nl> - dest_col . column = std : : make_shared < ColumnNullable > ( source_col . column , null_map ) ; <nl> + / / / Initialize the result column . <nl> + ColumnPtr null_map = std : : make_shared < ColumnUInt8 > ( block . rows ( ) , 0 ) ; <nl> + dest_col . column = std : : make_shared < ColumnNullable > ( source_col . column , null_map ) ; <nl> <nl> - / / / Deduce the null map of the result from the null maps of the nullable columns . <nl> - createNullMap ( block , args , result ) ; <nl> - } <nl> + / / / Deduce the null map of the result from the null maps of the nullable columns . <nl> + createNullMap ( block , args , result ) ; <nl> <nl> return true ; <nl> } <nl> mmm a / dbms / src / Interpreters / Settings . h <nl> ppp b / dbms / src / Interpreters / Settings . h <nl> struct Settings <nl> / * * The maximum number of concurrent requests per user . * / \ <nl> M ( SettingUInt64 , max_concurrent_queries_for_user , 0 ) \ <nl> \ <nl> + / * * For INSERT queries in the replicated table , specifies that deduplication of inserted blocks should be performed * / \ <nl> + M ( SettingBool , insert_deduplicate , true ) \ <nl> + \ <nl> / * * For INSERT queries in the replicated table , wait writing for the specified number of replicas and linearize the addition of the data . 0 - disabled . * / \ <nl> M ( SettingUInt64 , insert_quorum , 0 ) \ <nl> M ( SettingMilliseconds , insert_quorum_timeout , 600000 ) \ <nl> mmm a / dbms / src / Storages / MergeTree / DataPartsExchange . cpp <nl> ppp b / dbms / src / Storages / MergeTree / DataPartsExchange . cpp <nl> void Service : : processQuery ( const Poco : : Net : : HTMLForm & params , ReadBuffer & body <nl> <nl> MergeTreeData : : DataPartPtr Service : : findPart ( const String & name ) <nl> { <nl> - MergeTreeData : : DataPartPtr part = data . getPartIfExists ( name ) ; <nl> + / / / It is important to include PreCommitted parts here , <nl> + / / / because the part could actually be committed into ZooKeeper while the response from ZooKeeper to the server is delayed <nl> + auto part = data . getPartIfExists ( name , { MergeTreeDataPart : : State : : PreCommitted , MergeTreeDataPart : : State : : Committed } ) ; <nl> if ( part ) <nl> return part ; <nl> - throw Exception ( " No part " + name + " in table " ) ; <nl> + <nl> + throw Exception ( " No part " + name + " in table " , ErrorCodes : : NO_SUCH_DATA_PART ) ; <nl> } <nl> <nl> MergeTreeData : : DataPartPtr Service : : findShardedPart ( const String & name , size_t shard_no ) <nl> mmm a / dbms / src / Storages / MergeTree / MergeTreeData . cpp <nl> ppp b / dbms / src / Storages / MergeTree / MergeTreeData . cpp <nl> <nl> <nl> # include < Poco / DirectoryIterator . h > <nl> <nl> + # include < boost / range / adaptor / filtered . 
hpp > <nl> + <nl> # include < algorithm > <nl> # include < iomanip > <nl> # include < thread > <nl> MergeTreeData : : MergeTreeData ( <nl> database_name ( database_ ) , table_name ( table_ ) , <nl> full_path ( full_path_ ) , columns ( columns_ ) , <nl> broken_part_callback ( broken_part_callback_ ) , <nl> - parts_clean_callback ( parts_clean_callback_ ? parts_clean_callback_ : [ this ] ( ) { clearOldParts ( ) ; } ) , <nl> log_name ( log_name_ ) , log ( & Logger : : get ( log_name + " ( Data ) " ) ) <nl> { <nl> merging_params . check ( * columns ) ; <nl> String MergeTreeData : : MergingParams : : getModeName ( ) const <nl> { <nl> switch ( mode ) <nl> { <nl> - case Ordinary : return " " ; <nl> - case Collapsing : return " Collapsing " ; <nl> - case Summing : return " Summing " ; <nl> - case Aggregating : return " Aggregating " ; <nl> - case Unsorted : return " Unsorted " ; <nl> + case Ordinary : return " " ; <nl> + case Collapsing : return " Collapsing " ; <nl> + case Summing : return " Summing " ; <nl> + case Aggregating : return " Aggregating " ; <nl> + case Unsorted : return " Unsorted " ; <nl> case Replacing : return " Replacing " ; <nl> - case Graphite : return " Graphite " ; <nl> + case Graphite : return " Graphite " ; <nl> <nl> default : <nl> throw Exception ( " Unknown mode of operation for MergeTreeData : " + toString < int > ( mode ) , ErrorCodes : : LOGICAL_ERROR ) ; <nl> String MergeTreeData : : MergingParams : : getModeName ( ) const <nl> <nl> Int64 MergeTreeData : : getMaxDataPartIndex ( ) <nl> { <nl> - std : : lock_guard < std : : mutex > lock_all ( all_data_parts_mutex ) ; <nl> + std : : lock_guard < std : : mutex > lock_all ( data_parts_mutex ) ; <nl> <nl> Int64 max_block_id = 0 ; <nl> - for ( const auto & part : all_data_parts ) <nl> + for ( const auto & part : data_parts ) <nl> max_block_id = std : : max ( max_block_id , part - > info . max_block ) ; <nl> <nl> return max_block_id ; <nl> void MergeTreeData : : loadDataParts ( bool skip_sanity_checks ) <nl> LOG_DEBUG ( log , " Loading data parts " ) ; <nl> <nl> std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> - std : : lock_guard < std : : mutex > lock_all ( all_data_parts_mutex ) ; <nl> - <nl> data_parts . clear ( ) ; <nl> - all_data_parts . clear ( ) ; <nl> <nl> Strings part_file_names ; <nl> Poco : : DirectoryIterator end ; <nl> void MergeTreeData : : loadDataParts ( bool skip_sanity_checks ) <nl> } <nl> <nl> part - > modification_time = Poco : : File ( full_path + file_name ) . getLastModified ( ) . epochTime ( ) ; <nl> + / / / Assume that all parts are Committed , covered parts will be detected and marked as Outdated later <nl> + part - > state = DataPartState : : Committed ; <nl> <nl> data_parts . insert ( part ) ; <nl> } <nl> void MergeTreeData : : loadDataParts ( bool skip_sanity_checks ) <nl> for ( auto & part : broken_parts_to_detach ) <nl> part - > renameAddPrefix ( true , " " ) ; <nl> <nl> - all_data_parts = data_parts ; <nl> - <nl> / / / Delete from the set of current parts those parts that are covered by another part ( those parts that <nl> / / / were merged ) , but that for some reason are still not deleted from the filesystem . <nl> / / / Deletion of files will be performed later in the clearOldParts ( ) method . <nl> <nl> if ( data_parts . size ( ) > = 2 ) <nl> { <nl> - DataParts : : iterator prev_jt = data_parts . begin ( ) ; <nl> - DataParts : : iterator curr_jt = prev_jt ; <nl> - + + curr_jt ; <nl> - while ( curr_jt ! = data_parts . 
end ( ) ) <nl> + auto committed_parts = getDataPartsRange ( { DataPartState : : Committed } ) ; <nl> + auto prev_jt = committed_parts . begin ( ) ; <nl> + auto curr_jt = std : : next ( prev_jt ) ; <nl> + <nl> + while ( curr_jt ! = committed_parts . end ( ) ) <nl> { <nl> / / / Don ' t consider data parts belonging to different partitions . <nl> if ( ( * curr_jt ) - > info . partition_id ! = ( * prev_jt ) - > info . partition_id ) <nl> void MergeTreeData : : loadDataParts ( bool skip_sanity_checks ) <nl> if ( ( * curr_jt ) - > contains ( * * prev_jt ) ) <nl> { <nl> ( * prev_jt ) - > remove_time = ( * prev_jt ) - > modification_time ; <nl> - data_parts . erase ( prev_jt ) ; <nl> + ( * prev_jt ) - > state = DataPartState : : Outdated ; / / / prev_jt becomes invalid here <nl> prev_jt = curr_jt ; <nl> + + curr_jt ; <nl> } <nl> else if ( ( * prev_jt ) - > contains ( * * curr_jt ) ) <nl> { <nl> ( * curr_jt ) - > remove_time = ( * curr_jt ) - > modification_time ; <nl> - data_parts . erase ( curr_jt + + ) ; <nl> + ( * curr_jt ) - > state = DataPartState : : Outdated ; / / / curr_jt becomes invalid here <nl> + + + curr_jt ; <nl> } <nl> else <nl> { <nl> MergeTreeData : : DataPartsVector MergeTreeData : : grabOldParts ( ) <nl> time_t now = time ( nullptr ) ; <nl> <nl> { <nl> - std : : lock_guard < std : : mutex > lock_all_parts ( all_data_parts_mutex ) ; <nl> + std : : lock_guard < std : : mutex > lock_parts ( data_parts_mutex ) ; <nl> <nl> - for ( auto it = all_data_parts . begin ( ) ; it ! = all_data_parts . end ( ) ; ) <nl> + for ( auto it = data_parts . begin ( ) ; it ! = data_parts . end ( ) ; + + it ) <nl> { <nl> - if ( it - > unique ( ) & & / / / After this ref_count cannot increase . <nl> + if ( ( * it ) - > state = = DataPartState : : Outdated & & <nl> + it - > unique ( ) & & / / / Grab only parts that are not in use by anyone ( SELECTs , for example ) <nl> ( * it ) - > remove_time < now & & <nl> now - ( * it ) - > remove_time > settings . old_parts_lifetime . totalSeconds ( ) ) <nl> { <nl> + ( * it ) - > state = DataPartState : : Deleting ; <nl> res . push_back ( * it ) ; <nl> - all_data_parts . erase ( it + + ) ; <nl> } <nl> - else <nl> - + + it ; <nl> } <nl> } <nl> <nl> MergeTreeData : : DataPartsVector MergeTreeData : : grabOldParts ( ) <nl> } <nl> <nl> <nl> - void MergeTreeData : : addOldParts ( const MergeTreeData : : DataPartsVector & parts ) <nl> + void MergeTreeData : : rollbackDeletingParts ( const MergeTreeData : : DataPartsVector & parts ) <nl> + { <nl> + std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> + for ( auto & part : parts ) <nl> + { <nl> + / / / We should modify it under data_parts_mutex <nl> + part - > assertState ( { DataPartState : : Deleting } ) ; <nl> + part - > state = DataPartState : : Outdated ; <nl> + } <nl> + } <nl> + <nl> + void MergeTreeData : : removePartsFinally ( const MergeTreeData : : DataPartsVector & parts ) <nl> { <nl> - std : : lock_guard < std : : mutex > lock ( all_data_parts_mutex ) ; <nl> - all_data_parts . insert ( parts . begin ( ) , parts . end ( ) ) ; <nl> + std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> + <nl> + / / / TODO : use data_parts iterators instead of pointers <nl> + for ( auto & part : parts ) <nl> + { <nl> + if ( part - > state ! = DataPartState : : Deleting ) <nl> + throw Exception ( " An attempt to delete part " + part - > getNameWithState ( ) + " with unexpected state " , ErrorCodes : : LOGICAL_ERROR ) ; <nl> + <nl> + auto it = data_parts . find ( part ) ; <nl> + if ( it = = data_parts . 
end ( ) ) <nl> + throw Exception ( " Data part " + part - > name + " to be deleted does not exist " , ErrorCodes : : LOGICAL_ERROR ) ; <nl> + <nl> + data_parts . erase ( it ) ; <nl> + } <nl> } <nl> <nl> void MergeTreeData : : clearOldParts ( ) <nl> void MergeTreeData : : dropAllData ( ) <nl> LOG_TRACE ( log , " dropAllData : waiting for locks . " ) ; <nl> <nl> std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> - std : : lock_guard < std : : mutex > lock_all ( all_data_parts_mutex ) ; <nl> <nl> LOG_TRACE ( log , " dropAllData : removing data from memory . " ) ; <nl> <nl> data_parts . clear ( ) ; <nl> - all_data_parts . clear ( ) ; <nl> column_sizes . clear ( ) ; <nl> <nl> context . dropCaches ( ) ; <nl> void MergeTreeData : : createConvertExpression ( const DataPartPtr & part , const Name <nl> const IDataType * observed_type ; <nl> if ( is_nullable ) <nl> { <nl> - const DataTypeNullable & nullable_type = static_cast < const DataTypeNullable & > ( * column . type ) ; <nl> + auto & nullable_type = static_cast < const DataTypeNullable & > ( * column . type ) ; <nl> observed_type = nullable_type . getNestedType ( ) . get ( ) ; <nl> } <nl> else <nl> void MergeTreeData : : renameTempPartAndAdd ( MutableDataPartPtr & part , SimpleIncrem <nl> + " existing part ( s ) ( including " + removed [ 0 ] - > name + " ) " , ErrorCodes : : LOGICAL_ERROR ) ; <nl> } <nl> <nl> + <nl> + <nl> MergeTreeData : : DataPartsVector MergeTreeData : : renameTempPartAndReplace ( <nl> MutableDataPartPtr & part , SimpleIncrement * increment , Transaction * out_transaction ) <nl> { <nl> if ( out_transaction & & out_transaction - > data ) <nl> throw Exception ( " Using the same MergeTreeData : : Transaction for overlapping transactions is invalid " , ErrorCodes : : LOGICAL_ERROR ) ; <nl> <nl> + part - > assertState ( { DataPartState : : Temporary } ) ; <nl> + <nl> DataPartsVector replaced ; <nl> { <nl> std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> MergeTreeData : : DataPartsVector MergeTreeData : : renameTempPartAndReplace ( <nl> <nl> LOG_TRACE ( log , " Renaming temporary part " < < part - > relative_path < < " to " < < new_name < < " . " ) ; <nl> <nl> - if ( data_parts . count ( part ) ) <nl> - throw Exception ( " Part " + new_name + " already exists " , ErrorCodes : : DUPLICATE_DATA_PART ) ; <nl> - <nl> - bool in_all_data_parts ; <nl> + auto it_duplicate = data_parts . find ( part ) ; <nl> + if ( it_duplicate ! = data_parts . end ( ) ) <nl> { <nl> - std : : lock_guard < std : : mutex > lock_all ( all_data_parts_mutex ) ; <nl> - in_all_data_parts = all_data_parts . count ( part ) ! = 0 ; <nl> + String message = " Part " + ( * it_duplicate ) - > getNameWithState ( ) + " already exists " ; <nl> + if ( ( * it_duplicate ) - > checkState ( { DataPartState : : Outdated , DataPartState : : Deleting } ) ) <nl> + message + = " , but it will be deleted soon " ; <nl> + <nl> + throw Exception ( message , ErrorCodes : : DUPLICATE_DATA_PART ) ; <nl> } <nl> - / / / New part can be removed from data_parts but not from filesystem and ZooKeeper <nl> - if ( in_all_data_parts ) <nl> - clearOldPartsAndRemoveFromZK ( ) ; <nl> <nl> - / / / Rename the part . <nl> - part - > renameTo ( new_name ) ; <nl> - part - > is_temp = false ; <nl> + / / / Rename the part only in memory . It will be renamed on disk only if all checks have passed . 
<nl> + / / / It allows us to maintain the invariant : if non - temporary parts are in the filesystem , then they are in data_parts <nl> part - > name = new_name ; <nl> <nl> - bool obsolete = false ; / / / Is the part covered by some other part ? <nl> + / / / Is the part covered by some other part ? <nl> + bool obsolete = false ; <nl> + <nl> + auto check_replacing_part_state = [ & ] ( const DataPartPtr & cur_part ) <nl> + { <nl> + cur_part - > assertState ( { DataPartState : : PreCommitted , DataPartState : : Committed } ) ; <nl> + if ( cur_part - > state = = DataPartState : : PreCommitted ) <nl> + throw Exception ( " Could not add part " + new_name + " while the replaced part " + cur_part - > name + " is in pre - committed state " , ErrorCodes : : LOGICAL_ERROR ) ; <nl> + } ; <nl> + <nl> + / / / Don ' t consider parts that are going to be deleted <nl> + auto active_parts = getDataPartsRange ( { DataPartState : : Committed , DataPartState : : PreCommitted } ) ; <nl> + / / / Parts contained in the part are consecutive in data_parts , intersecting the insertion place for the part itself . <nl> + auto it_middle = active_parts . convert ( data_parts . lower_bound ( part ) ) ; <nl> <nl> - / / / Parts contained in the part are consecutive in data_parts , intersecting the insertion place <nl> - / / / for the part itself . <nl> - auto it = data_parts . lower_bound ( part ) ; <nl> / / / Go to the left . <nl> - while ( it ! = data_parts . begin ( ) ) <nl> + for ( auto it = it_middle ; it ! = active_parts . begin ( ) ; ) <nl> { <nl> - - it ; <nl> + <nl> if ( ! part - > contains ( * * it ) ) <nl> { <nl> if ( ( * it ) - > contains ( * part ) ) <nl> MergeTreeData : : DataPartsVector MergeTreeData : : renameTempPartAndReplace ( <nl> + + it ; <nl> break ; <nl> } <nl> + <nl> + check_replacing_part_state ( * it ) ; <nl> replaced . push_back ( * it ) ; <nl> - ( * it ) - > remove_time = time ( nullptr ) ; <nl> - removePartContributionToColumnSizes ( * it ) ; <nl> - data_parts . erase ( it + + ) ; / / / Yes , + + , not - - . <nl> + / / replaced . push_back ( * it ) ; <nl> + / / ( * it ) - > remove_time = time ( nullptr ) ; <nl> + / / ( * it ) - > state = replaced_parts_state ; <nl> + / / removePartContributionToColumnSizes ( * it ) ; <nl> + / / data_parts . erase ( it + + ) ; / / / Yes , + + , not - - . <nl> } <nl> - std : : reverse ( replaced . begin ( ) , replaced . end ( ) ) ; / / / Parts must be in ascending order . <nl> + <nl> + / / / Parts must be in ascending order . <nl> + std : : reverse ( replaced . begin ( ) , replaced . end ( ) ) ; <nl> + <nl> / / / Go to the right . <nl> - while ( it ! = data_parts . end ( ) ) <nl> + for ( auto it = it_middle ; it ! = active_parts . end ( ) ; ) <nl> { <nl> + if ( ( * it ) - > name = = part - > name ) <nl> + throw Exception ( " Unexpected duplicate part " + part - > getNameWithState ( ) + " . It is a bug . " , ErrorCodes : : LOGICAL_ERROR ) ; <nl> + <nl> if ( ! part - > contains ( * * it ) ) <nl> { <nl> - if ( ( * it ) - > name = = part - > name | | ( * it ) - > contains ( * part ) ) <nl> + if ( ( * it ) - > contains ( * part ) ) <nl> obsolete = true ; <nl> break ; <nl> } <nl> + <nl> + check_replacing_part_state ( * it ) ; <nl> replaced . push_back ( * it ) ; <nl> - ( * it ) - > remove_time = time ( nullptr ) ; <nl> - removePartContributionToColumnSizes ( * it ) ; <nl> - data_parts . erase ( it + + ) ; <nl> + + + it ; <nl> + / / replaced . 
push_back ( * it ) ; <nl> + / / ( * it ) - > remove_time = time ( nullptr ) ; <nl> + / / ( * it ) - > state = replaced_parts_state ; <nl> + / / removePartContributionToColumnSizes ( * it ) ; <nl> + / / data_parts . erase ( it + + ) ; <nl> } <nl> <nl> if ( obsolete ) <nl> { <nl> LOG_WARNING ( log , " Obsolete part " < < part - > name < < " added " ) ; <nl> part - > remove_time = time ( nullptr ) ; <nl> + / / / In case of failure , we want to delete the part from the filesystem immediately ( to avoid any conflicts ) <nl> + part - > is_temp = true ; <nl> } <nl> else <nl> { <nl> + / / / Now we can rename the part on the filesystem <nl> + part - > is_temp = false ; <nl> + part - > renameTo ( new_name ) ; <nl> + <nl> + if ( ! out_transaction ) <nl> + { <nl> + / / / Ordinary MergeTree engines ( they don ' t use out_transaction ) commit parts immediately <nl> + part - > state = DataPartState : : Committed ; <nl> + addPartContributionToColumnSizes ( part ) ; <nl> + } <nl> + else <nl> + { <nl> + / / / Whereas ReplicatedMergeTree uses an intermediate PreCommitted state <nl> + part - > state = DataPartState : : PreCommitted ; <nl> + } <nl> + <nl> data_parts . insert ( part ) ; <nl> - addPartContributionToColumnSizes ( part ) ; <nl> - } <nl> <nl> - { <nl> - std : : lock_guard < std : : mutex > lock_all ( all_data_parts_mutex ) ; <nl> - all_data_parts . insert ( part ) ; <nl> + auto current_time = time ( nullptr ) ; <nl> + for ( auto & replacing_part : replaced ) <nl> + { <nl> + if ( ! out_transaction ) <nl> + { <nl> + replacing_part - > remove_time = current_time ; <nl> + replacing_part - > state = DataPartState : : Outdated ; <nl> + removePartContributionToColumnSizes ( replacing_part ) ; <nl> + } <nl> + } <nl> } <nl> } <nl> <nl> MergeTreeData : : DataPartsVector MergeTreeData : : renameTempPartAndReplace ( <nl> { <nl> out_transaction - > data = this ; <nl> out_transaction - > parts_to_add_on_rollback = replaced ; <nl> - out_transaction - > parts_to_remove_on_rollback = DataPartsVector ( 1 , part ) ; <nl> + out_transaction - > parts_to_remove_on_rollback = { part } ; <nl> } <nl> <nl> return replaced ; <nl> } <nl> <nl> - void MergeTreeData : : replaceParts ( const DataPartsVector & remove , const DataPartsVector & add , bool clear_without_timeout ) <nl> + void MergeTreeData : : removePartsFromWorkingSet ( const DataPartsVector & remove , bool clear_without_timeout ) <nl> { <nl> std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> <nl> - for ( const DataPartPtr & part : remove ) <nl> + for ( auto & part : remove ) <nl> { <nl> - part - > remove_time = clear_without_timeout ? 0 : time ( nullptr ) ; <nl> + if ( ! data_parts . count ( part ) ) <nl> + throw Exception ( " Part " + part - > getNameWithState ( ) + " not found in data_parts " , ErrorCodes : : LOGICAL_ERROR ) ; <nl> <nl> - if ( data_parts . erase ( part ) ) <nl> - removePartContributionToColumnSizes ( part ) ; <nl> + part - > assertState ( { DataPartState : : PreCommitted , DataPartState : : Committed , DataPartState : : Outdated } ) ; <nl> } <nl> <nl> - for ( const DataPartPtr & part : add ) <nl> + auto remove_time = clear_without_timeout ? 0 : time ( nullptr ) ; <nl> + for ( const DataPartPtr & part : remove ) <nl> { <nl> - if ( data_parts . insert ( part ) . 
second ) <nl> - addPartContributionToColumnSizes ( part ) ; <nl> + if ( part - > state = = DataPartState : : Committed ) <nl> + removePartContributionToColumnSizes ( part ) ; <nl> + part - > state = DataPartState : : Outdated ; <nl> + part - > remove_time = remove_time ; <nl> } <nl> } <nl> <nl> - void MergeTreeData : : renameAndDetachPart ( const DataPartPtr & part , const String & prefix , bool restore_covered , bool move_to_detached ) <nl> + <nl> + void MergeTreeData : : renameAndDetachPart ( const DataPartPtr & part_to_detach , const String & prefix , bool restore_covered , <nl> + bool move_to_detached ) <nl> { <nl> - LOG_INFO ( log , " Renaming " < < part - > relative_path < < " to " < < prefix < < part - > name < < " and detaching it . " ) ; <nl> + LOG_INFO ( log , " Renaming " < < part_to_detach - > relative_path < < " to " < < prefix < < part_to_detach - > name < < " and detaching it . " ) ; <nl> <nl> std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> - std : : lock_guard < std : : mutex > lock_all ( all_data_parts_mutex ) ; <nl> + / / std : : lock_guard < std : : mutex > lock_all ( all_data_parts_mutex ) ; <nl> <nl> - if ( ! all_data_parts . erase ( part ) ) <nl> - throw Exception ( " No such data part " , ErrorCodes : : NO_SUCH_DATA_PART ) ; <nl> + auto it_part = data_parts . find ( part_to_detach ) ; <nl> + if ( it_part = = data_parts . end ( ) ) <nl> + throw Exception ( " No such data part " + part_to_detach - > getNameWithState ( ) , ErrorCodes : : NO_SUCH_DATA_PART ) ; <nl> + <nl> + / / / What if part_to_detach is reference to * it_part ? Make a new owner just in case . <nl> + auto part = * it_part ; <nl> <nl> removePartContributionToColumnSizes ( part ) ; <nl> - data_parts . erase ( part ) ; <nl> + part - > state = DataPartState : : Deleting ; <nl> if ( move_to_detached | | ! prefix . empty ( ) ) <nl> part - > renameAddPrefix ( move_to_detached , prefix ) ; <nl> <nl> if ( restore_covered ) <nl> { <nl> - auto it = all_data_parts . lower_bound ( part ) ; <nl> + auto suitable_parts = getDataPartsRange ( { DataPartState : : PreCommitted , DataPartState : : Committed , DataPartState : : Outdated } ) ; <nl> + auto it = suitable_parts . convert ( data_parts . lower_bound ( part ) ) ; <nl> + <nl> Strings restored ; <nl> bool error = false ; <nl> <nl> Int64 pos = part - > info . min_block ; <nl> <nl> - if ( it ! = all_data_parts . begin ( ) ) <nl> + if ( it ! = suitable_parts . begin ( ) ) <nl> { <nl> - - it ; <nl> if ( part - > contains ( * * it ) ) <nl> { <nl> if ( ( * it ) - > info . min_block ! = part - > info . min_block ) <nl> error = true ; <nl> - data_parts . insert ( * it ) ; <nl> - addPartContributionToColumnSizes ( * it ) ; <nl> + <nl> + if ( ( * it ) - > state ! = DataPartState : : Committed ) <nl> + { <nl> + addPartContributionToColumnSizes ( * it ) ; <nl> + ( * it ) - > state = DataPartState : : Committed ; <nl> + } <nl> + <nl> pos = ( * it ) - > info . max_block + 1 ; <nl> restored . push_back ( ( * it ) - > name ) ; <nl> } <nl> void MergeTreeData : : renameAndDetachPart ( const DataPartPtr & part , const String & <nl> else <nl> error = true ; <nl> <nl> - for ( ; it ! = all_data_parts . end ( ) & & part - > contains ( * * it ) ; + + it ) <nl> + for ( ; it ! = suitable_parts . end ( ) & & part - > contains ( * * it ) ; + + it ) <nl> { <nl> if ( ( * it ) - > info . min_block < pos ) <nl> continue ; <nl> if ( ( * it ) - > info . min_block > pos ) <nl> error = true ; <nl> - data_parts . 
insert ( * it ) ; <nl> - addPartContributionToColumnSizes ( * it ) ; <nl> + <nl> + if ( ( * it ) - > state ! = DataPartState : : Committed ) <nl> + { <nl> + addPartContributionToColumnSizes ( * it ) ; <nl> + ( * it ) - > state = DataPartState : : Committed ; <nl> + } <nl> + <nl> pos = ( * it ) - > info . max_block + 1 ; <nl> restored . push_back ( ( * it ) - > name ) ; <nl> } <nl> void MergeTreeData : : renameAndDetachPart ( const DataPartPtr & part , const String & <nl> } <nl> } <nl> <nl> - void MergeTreeData : : detachPartInPlace ( const DataPartPtr & part ) <nl> - { <nl> - renameAndDetachPart ( part , " " , false , false ) ; <nl> - } <nl> - <nl> - MergeTreeData : : DataParts MergeTreeData : : getDataParts ( ) const <nl> - { <nl> - std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> - return data_parts ; <nl> - } <nl> - <nl> - MergeTreeData : : DataPartsVector MergeTreeData : : getDataPartsVector ( ) const <nl> - { <nl> - std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> - return DataPartsVector ( std : : begin ( data_parts ) , std : : end ( data_parts ) ) ; <nl> - } <nl> <nl> size_t MergeTreeData : : getTotalActiveSizeInBytes ( ) const <nl> { <nl> std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> <nl> size_t res = 0 ; <nl> - for ( auto & part : data_parts ) <nl> + for ( auto & part : getDataPartsRange ( { DataPartState : : Committed } ) ) <nl> res + = part - > size_in_bytes ; <nl> <nl> return res ; <nl> } <nl> <nl> - MergeTreeData : : DataParts MergeTreeData : : getAllDataParts ( ) const <nl> - { <nl> - std : : lock_guard < std : : mutex > lock ( all_data_parts_mutex ) ; <nl> - return all_data_parts ; <nl> - } <nl> <nl> size_t MergeTreeData : : getMaxPartsCountForPartition ( ) const <nl> { <nl> size_t MergeTreeData : : getMaxPartsCountForPartition ( ) const <nl> size_t cur_count = 0 ; <nl> const String * cur_partition_id = nullptr ; <nl> <nl> - for ( const auto & part : data_parts ) <nl> + for ( const auto & part : getDataPartsRange ( { DataPartState : : Committed } ) ) <nl> { <nl> if ( cur_partition_id & & part - > info . partition_id = = * cur_partition_id ) <nl> { <nl> MergeTreeData : : DataPartPtr MergeTreeData : : getActiveContainingPart ( const String & <nl> std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> <nl> / / / The part can be covered only by the previous or the next one in data_parts . <nl> - auto it = data_parts . lower_bound ( part_info ) ; <nl> + auto committed_parts = getDataPartsRange ( { DataPartState : : Committed } ) ; <nl> + auto it = committed_parts . convert ( data_parts . lower_bound ( part_info ) ) ; <nl> <nl> - if ( it ! = data_parts . end ( ) ) <nl> + if ( it ! = committed_parts . end ( ) ) <nl> { <nl> if ( ( * it ) - > name = = part_name ) <nl> return * it ; <nl> MergeTreeData : : DataPartPtr MergeTreeData : : getActiveContainingPart ( const String & <nl> return * it ; <nl> } <nl> <nl> - if ( it ! = data_parts . begin ( ) ) <nl> + if ( it ! = committed_parts . begin ( ) ) <nl> { <nl> - - it ; <nl> if ( ( * it ) - > info . 
contains ( part_info ) ) <nl> MergeTreeData : : DataPartPtr MergeTreeData : : getActiveContainingPart ( const String & <nl> return nullptr ; <nl> } <nl> <nl> - MergeTreeData : : DataPartPtr MergeTreeData : : getPartIfExists ( const String & part_name ) <nl> + <nl> + MergeTreeData : : DataPartPtr MergeTreeData : : getPartIfExists ( const String & part_name , const MergeTreeData : : DataPartStates & valid_states ) <nl> { <nl> auto part_info = MergeTreePartInfo : : fromPartName ( part_name , format_version ) ; <nl> <nl> - std : : lock_guard < std : : mutex > lock ( all_data_parts_mutex ) ; <nl> - auto it = all_data_parts . lower_bound ( part_info ) ; <nl> - if ( it ! = all_data_parts . end ( ) & & ( * it ) - > name = = part_name ) <nl> + std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> + <nl> + auto filtered_parts = getDataPartsRange ( valid_states ) ; <nl> + auto it = filtered_parts . convert ( data_parts . find ( part_info ) ) ; <nl> + if ( it ! = filtered_parts . end ( ) & & ( * it ) - > name = = part_name ) <nl> return * it ; <nl> <nl> return nullptr ; <nl> void MergeTreeData : : calculateColumnSizesImpl ( ) <nl> { <nl> column_sizes . clear ( ) ; <nl> <nl> - for ( const auto & part : data_parts ) <nl> + / / / Take into account only committed parts <nl> + for ( const auto & part : getDataPartsRange ( { DataPartState : : Committed } ) ) <nl> addPartContributionToColumnSizes ( part ) ; <nl> } <nl> <nl> String MergeTreeData : : getPartitionIDFromQuery ( const ASTPtr & ast , const Context <nl> return partition_id ; <nl> } <nl> <nl> + MergeTreeData : : DataPartsVector MergeTreeData : : getDataPartsVector ( const DataPartStates & affordable_states ) const <nl> + { <nl> + DataPartsVector res ; <nl> + { <nl> + std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> + std : : copy_if ( data_parts . begin ( ) , data_parts . end ( ) , std : : back_inserter ( res ) , DataPart : : getStatesFilter ( affordable_states ) ) ; <nl> + } <nl> + return res ; <nl> + } <nl> + <nl> + MergeTreeData : : DataPartsVector MergeTreeData : : getDataPartsVector ( const MergeTreeData : : DataPartStates & affordable_states , <nl> + MergeTreeData : : DataPartStateVector & out_states_snapshot ) const <nl> + { <nl> + DataPartsVector res ; <nl> + { <nl> + std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> + std : : copy_if ( data_parts . begin ( ) , data_parts . end ( ) , std : : back_inserter ( res ) , DataPart : : getStatesFilter ( affordable_states ) ) ; <nl> + <nl> + out_states_snapshot . resize ( res . size ( ) ) ; <nl> + for ( size_t i = 0 ; i < res . size ( ) ; + + i ) <nl> + out_states_snapshot [ i ] = res [ i ] - > state ; <nl> + } <nl> + return res ; <nl> + } <nl> + <nl> + MergeTreeData : : DataParts MergeTreeData : : getDataParts ( const DataPartStates & affordable_states ) const <nl> + { <nl> + DataParts res ; <nl> + { <nl> + std : : lock_guard < std : : mutex > lock ( data_parts_mutex ) ; <nl> + std : : copy_if ( data_parts . begin ( ) , data_parts . end ( ) , std : : inserter ( res , res . 
end ( ) ) , DataPart : : getStatesFilter ( affordable_states ) ) ; <nl> + } <nl> + return res ; <nl> + } <nl> + <nl> + MergeTreeData : : DataParts MergeTreeData : : getDataParts ( ) const <nl> + { <nl> + return getDataParts ( { DataPartState : : Committed } ) ; <nl> + } <nl> + <nl> + MergeTreeData : : DataPartsVector MergeTreeData : : getDataPartsVector ( ) const <nl> + { <nl> + return getDataPartsVector ( { DataPartState : : Committed } ) ; <nl> + } <nl> + <nl> + MergeTreeData : : DataParts MergeTreeData : : getAllDataParts ( ) const <nl> + { <nl> + return getDataParts ( { DataPartState : : PreCommitted , DataPartState : : Committed , DataPartState : : Outdated } ) ; <nl> + } <nl> + <nl> + MergeTreeData : : DataPartPtr MergeTreeData : : getAnyPartInPartition ( <nl> + const String & partition_id , std : : lock_guard < std : : mutex > & data_parts_lock ) <nl> + { <nl> + auto min_block = std : : numeric_limits < Int64 > : : min ( ) ; <nl> + MergeTreePartInfo dummy_part_info ( partition_id , min_block , min_block , 0 ) ; <nl> + <nl> + auto committed_parts = getDataPartsRange ( { DataPartState : : Committed } ) ; <nl> + auto it = committed_parts . convert ( data_parts . lower_bound ( dummy_part_info ) ) ; <nl> + <nl> + if ( it ! = committed_parts . end ( ) & & ( * it ) - > info . partition_id = = partition_id ) <nl> + return * it ; <nl> + return { } ; <nl> + } <nl> + <nl> void MergeTreeData : : Transaction : : rollback ( ) <nl> { <nl> if ( data & & ( ! parts_to_remove_on_rollback . empty ( ) | | ! parts_to_add_on_rollback . empty ( ) ) ) <nl> void MergeTreeData : : Transaction : : rollback ( ) <nl> <nl> LOG_DEBUG ( data - > log , " Undoing transaction . " < < ss . str ( ) ) ; <nl> <nl> - data - > replaceParts ( parts_to_remove_on_rollback , parts_to_add_on_rollback , true ) ; <nl> - <nl> + / / / PreCommitted - > Outdated <nl> + replaceParts ( DataPartState : : Outdated , DataPartState : : Committed , true ) ; <nl> clear ( ) ; <nl> } <nl> } <nl> <nl> - MergeTreeData : : DataPartPtr MergeTreeData : : getAnyPartInPartition ( <nl> - const String & partition_id , std : : lock_guard < std : : mutex > & data_parts_lock ) <nl> + void MergeTreeData : : Transaction : : commit ( ) <nl> { <nl> - auto min_block = std : : numeric_limits < Int64 > : : min ( ) ; <nl> - MergeTreePartInfo dummy_part_info ( partition_id , min_block , min_block , 0 ) ; <nl> - auto it = data_parts . lower_bound ( dummy_part_info ) ; <nl> - if ( it ! = data_parts . end ( ) & & ( * it ) - > info . partition_id = = partition_id ) <nl> - return * it ; <nl> - return { } ; <nl> + / / / PreCommitted - > Committed , Committed - > Outdated <nl> + replaceParts ( DataPartState : : Committed , DataPartState : : Outdated , false ) ; <nl> + clear ( ) ; <nl> } <nl> <nl> + void MergeTreeData : : Transaction : : replaceParts ( MergeTreeData : : DataPartState move_precommitted_to , <nl> + MergeTreeData : : DataPartState move_committed_to , bool remove_without_delay ) <nl> + { <nl> + auto & committed_parts = parts_to_add_on_rollback ; <nl> + auto & precommitted_parts = parts_to_remove_on_rollback ; <nl> + <nl> + / / / TODO : also make sense to activate CleanupThread ' s cv <nl> + auto remove_time = ( remove_without_delay ) ? 
0 : time ( nullptr ) ; <nl> + <nl> + { <nl> + std : : lock_guard < std : : mutex > lock ( data - > data_parts_mutex ) ; <nl> + <nl> + for ( auto & part : committed_parts ) <nl> + part - > assertState ( { DataPartState : : Committed } ) ; <nl> + for ( auto & part : precommitted_parts ) <nl> + part - > assertState ( { DataPartState : : PreCommitted } ) ; <nl> + <nl> + / / / If it is a rollback , do nothing ; otherwise make the parts Outdated and remove their size contribution <nl> + if ( move_committed_to ! = DataPartState : : Committed ) <nl> + { <nl> + for ( auto & part : committed_parts ) <nl> + { <nl> + part - > state = move_committed_to ; <nl> + part - > remove_time = remove_time ; <nl> + data - > removePartContributionToColumnSizes ( part ) ; <nl> + } <nl> + } <nl> + <nl> + / / / If it is a rollback , just change the state to Outdated ; otherwise change it to Committed and add their size contribution <nl> + for ( auto & part : precommitted_parts ) <nl> + { <nl> + part - > state = move_precommitted_to ; <nl> + if ( move_precommitted_to = = DataPartState : : Committed ) <nl> + data - > addPartContributionToColumnSizes ( part ) ; <nl> + else <nl> + part - > remove_time = remove_time ; <nl> + } <nl> + } <nl> + } <nl> + <nl> + <nl> } <nl> mmm a / dbms / src / Storages / MergeTree / MergeTreeData . h <nl> ppp b / dbms / src / Storages / MergeTree / MergeTreeData . h <nl> <nl> # include < DataStreams / GraphiteRollupSortedBlockInputStream . h > <nl> # include < Storages / MergeTree / MergeTreeDataPart . h > <nl> <nl> + # include < common / RangeFiltered . h > <nl> <nl> namespace DB <nl> { <nl> class MergeTreeData : public ITableDeclaration <nl> / / / After the DataPart is added to the working set , it cannot be changed . <nl> using DataPartPtr = std : : shared_ptr < const DataPart > ; <nl> <nl> + using DataPartState = MergeTreeDataPart : : State ; <nl> + using DataPartStates = std : : initializer_list < DataPartState > ; <nl> + using DataPartStateVector = std : : vector < DataPartState > ; <nl> + <nl> struct DataPartPtrLess <nl> { <nl> using is_transparent = void ; <nl> class MergeTreeData : public ITableDeclaration <nl> public : <nl> Transaction ( ) { } <nl> <nl> - void commit ( ) <nl> - { <nl> - clear ( ) ; <nl> - } <nl> + void commit ( ) ; <nl> <nl> void rollback ( ) ; <nl> <nl> class MergeTreeData : public ITableDeclaration <nl> parts_to_remove_on_rollback . clear ( ) ; <nl> parts_to_add_on_rollback . clear ( ) ; <nl> } <nl> + <nl> + void replaceParts ( DataPartState move_precommitted_to , DataPartState move_committed_to , bool remove_without_delay ) ; <nl> } ; <nl> <nl> / / / An object that stores the names of temporary files created in the part directory during ALTER of its <nl> class MergeTreeData : public ITableDeclaration <nl> String getLogName ( ) const { return log_name ; } <nl>
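The replaceParts helper above drives both Transaction::commit and Transaction::rollback. To make the state swap easier to follow, here is a minimal standalone sketch of the same idea; the Part and Transaction types are simplified stand-ins for illustration, not the real MergeTreeData classes, and locking, assertions, and remove_time bookkeeping are omitted.

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Toy model: on commit, PreCommitted parts become Committed and the parts they
// replace become Outdated; on rollback, PreCommitted parts go straight to Outdated.
enum class State { PreCommitted, Committed, Outdated };

struct Part { std::string name; State state; };
using PartPtr = std::shared_ptr<Part>;

struct Transaction
{
    std::vector<PartPtr> precommitted_parts;  // parts added by this transaction
    std::vector<PartPtr> committed_parts;     // parts the new ones cover

    void commit()
    {
        for (auto & part : committed_parts)
            part->state = State::Outdated;    // replaced by a covering part
        for (auto & part : precommitted_parts)
            part->state = State::Committed;   // now visible to SELECTs
    }

    void rollback()
    {
        // The covered parts stay Committed; only the new parts are retired.
        for (auto & part : precommitted_parts)
            part->state = State::Outdated;
    }
};

int main()
{
    auto merged = std::make_shared<Part>(Part{"all_1_2_1", State::PreCommitted});
    auto src1 = std::make_shared<Part>(Part{"all_1_1_0", State::Committed});
    auto src2 = std::make_shared<Part>(Part{"all_2_2_0", State::Committed});

    Transaction txn{{merged}, {src1, src2}};
    txn.commit();
    std::cout << (merged->state == State::Committed) << '\n';  // 1
}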
/ / / Returns a copy of the list so that the caller shouldn ' t worry about locks . <nl> + DataParts getDataParts ( const DataPartStates & affordable_states ) const ; <nl> + DataPartsVector getDataPartsVector ( const DataPartStates & affordable_states ) const ; <nl> + DataPartsVector getDataPartsVector ( const DataPartStates & affordable_states , DataPartStateVector & out_states_snapshot ) const ; <nl> + <nl> + / / / Returns a virtual container that iterates only through parts with the specified states <nl> + decltype ( auto ) getDataPartsRange ( const DataPartStates & affordable_states ) const <nl> + { <nl> + return createRangeFiltered ( DataPart : : getStatesFilter ( affordable_states ) , data_parts ) ; <nl> + } <nl> + <nl> + / / / Returns Committed parts <nl> DataParts getDataParts ( ) const ; <nl> DataPartsVector getDataPartsVector ( ) const ; <nl> + <nl> + / / / Returns all parts except Temporary and Deleting ones <nl> DataParts getAllDataParts ( ) const ; <nl> <nl> + / / / Returns a committed part with the given name or a part containing it . If there is no such part , returns nullptr . <nl> + DataPartPtr getActiveContainingPart ( const String & part_name ) ; <nl> + <nl> + / / / Returns the part with the given name ( and state ) or nullptr if no such part . <nl> + DataPartPtr getPartIfExists ( const String & part_name , const DataPartStates & valid_states = { DataPartState : : Committed } ) ; <nl> + <nl> / / / Total size of active parts in bytes . <nl> size_t getTotalActiveSizeInBytes ( ) const ; <nl> <nl> class MergeTreeData : public ITableDeclaration <nl> / / / If until is non - null , wake up from the sleep earlier if the event happened . <nl> void delayInsertIfNeeded ( Poco : : Event * until = nullptr ) ; <nl> <nl> - / / / Returns an active part with the given name or a part containing it . If there is no such part , <nl> - / / / returns nullptr . <nl> - DataPartPtr getActiveContainingPart ( const String & part_name ) ; <nl> - <nl> - / / / Returns the part with the given name or nullptr if no such part . <nl> - DataPartPtr getPartIfExists ( const String & part_name ) ; <nl> DataPartPtr getShardedPartIfExists ( const String & part_name , size_t shard_no ) ; <nl> <nl> / / / Renames temporary part to a permanent part and adds it to the working set . <nl> class MergeTreeData : public ITableDeclaration <nl> DataPartsVector renameTempPartAndReplace ( <nl> MutableDataPartPtr & part , SimpleIncrement * increment = nullptr , Transaction * out_transaction = nullptr ) ; <nl> <nl> - / / / Removes from the working set parts in remove and adds parts in add . Parts in add must already be in <nl> - / / / all_data_parts . <nl> + / / / Removes parts from the working set . <nl> + / / / Parts in remove must already be in data_parts with PreCommitted , Committed , or Outdated states . <nl> / / / If clear_without_timeout is true , the parts will be deleted at once , or during the next call to <nl> / / / clearOldParts ( ignoring old_parts_lifetime ) . <nl> - void replaceParts ( const DataPartsVector & remove , const DataPartsVector & add , bool clear_without_timeout ) ; <nl> + void removePartsFromWorkingSet ( const DataPartsVector & remove , bool clear_without_timeout ) ; <nl>
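The new getDataParts / getDataPartsVector overloads all reduce to the same pattern: under the data_parts mutex, copy out the parts whose state passes a filter. Below is a minimal standalone sketch of that pattern; the toy Part and StatesFilter types are illustrative assumptions, and the real code additionally holds data_parts_mutex while copying.

#include <algorithm>
#include <initializer_list>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

enum class State { Temporary, PreCommitted, Committed, Outdated, Deleting };

struct Part
{
    std::string name;
    State state = State::Temporary;
};

using PartPtr = std::shared_ptr<Part>;

// Analogue of DataPart::getStatesFilter: a copyable predicate over allowed states.
struct StatesFilter
{
    std::vector<State> allowed;
    bool operator()(const PartPtr & part) const
    {
        for (State s : allowed)
            if (part->state == s)
                return true;
        return false;
    }
};

// Analogue of getDataPartsVector: copy out matching parts (locking elided here).
std::vector<PartPtr> getParts(const std::vector<PartPtr> & all, std::initializer_list<State> states)
{
    std::vector<PartPtr> res;
    std::copy_if(all.begin(), all.end(), std::back_inserter(res), StatesFilter{std::vector<State>(states)});
    return res;
}

int main()
{
    std::vector<PartPtr> parts{
        std::make_shared<Part>(Part{"all_1_1_0", State::Committed}),
        std::make_shared<Part>(Part{"all_2_2_0", State::Outdated}),
        std::make_shared<Part>(Part{"all_3_3_0", State::Committed})};

    // "Active" parts are exactly the Committed ones.
    for (const auto & p : getParts(parts, {State::Committed}))
        std::cout << p->name << '\n';
}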
/ / / Renames the part to detached / < prefix > _ < part > and forgets about it . The data won ' t be deleted in <nl> / / / clearOldParts . <nl> / / / If restore_covered is true , adds to the working set inactive parts , which were merged into the deleted part . <nl> void renameAndDetachPart ( const DataPartPtr & part , const String & prefix = " " , bool restore_covered = false , bool move_to_detached = true ) ; <nl> <nl> - / / / Removes the part from the list of parts ( including all_data_parts ) , but doesn ' t move the directory . <nl> - void detachPartInPlace ( const DataPartPtr & part ) ; <nl> - <nl> / / / Returns old inactive parts that can be deleted . At the same time removes them from the list of parts <nl> / / / but not from the disk . <nl> DataPartsVector grabOldParts ( ) ; <nl> <nl> - / / / Reverts the changes made by grabOldParts ( ) . <nl> - void addOldParts ( const DataPartsVector & parts ) ; <nl> + / / / Reverts the changes made by grabOldParts ( ) ; the parts should be in Deleting state . <nl> + void rollbackDeletingParts ( const DataPartsVector & parts ) ; <nl> + <nl> + / / / Removes parts from data_parts ; they should be in Deleting state <nl> + void removePartsFinally ( const DataPartsVector & parts ) ; <nl> <nl> / / / Delete irrelevant parts . <nl> void clearOldParts ( ) ; <nl> class MergeTreeData : public ITableDeclaration <nl> broken_part_callback ( name ) ; <nl> } <nl> <nl> - / / / Delete old parts from disk and ZooKeeper ( in replicated case ) <nl> - void clearOldPartsAndRemoveFromZK ( ) <nl> - { <nl> - parts_clean_callback ( ) ; <nl> - } <nl> - <nl> ExpressionActionsPtr getPrimaryExpression ( ) const { return primary_expr ; } <nl> SortDescription getSortDescription ( ) const { return sort_descr ; } <nl> <nl> class MergeTreeData : public ITableDeclaration <nl> <nl> / / / Engine - specific methods <nl> BrokenPartCallback broken_part_callback ; <nl> - / / / Use to delete outdated parts immediately from memory , disk and ZooKeeper <nl> - PartsCleanCallback parts_clean_callback ; <nl> <nl> String log_name ; <nl> Logger * log ; <nl> class MergeTreeData : public ITableDeclaration <nl> / / / The set of all data parts including already merged but not yet deleted . Usually it is small ( tens of elements ) . <nl> / / / The part is referenced from here , from the list of current parts and from each thread reading from it . <nl> / / / This means that if reference count is 1 - the part is not used right now and can be deleted . <nl> - DataParts all_data_parts ; <nl> - mutable std : : mutex all_data_parts_mutex ; <nl> + / / DataParts all_data_parts ; <nl> + / / mutable std : : mutex all_data_parts_mutex ; <nl> <nl> / / / Used to serialize calls to grabOldParts . <nl> std : : mutex grab_old_parts_mutex ; <nl> class MergeTreeData : public ITableDeclaration <nl> void addPartContributionToColumnSizes ( const DataPartPtr & part ) ; <nl> void removePartContributionToColumnSizes ( const DataPartPtr & part ) ; <nl> <nl> - / / / If there is no part in the partition with ID ` partition_id ` , returns empty ptr . <nl> + / / / If there is no part in the partition with ID ` partition_id ` , returns empty ptr . Should be called under the lock . <nl> DataPartPtr getAnyPartInPartition ( const String & partition_id , std : : lock_guard < std : : mutex > & data_parts_lock ) ; <nl> } ; <nl>
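grabOldParts, rollbackDeletingParts and removePartsFinally together form a two-phase deletion protocol: parts are first moved to the Deleting state, and depending on whether the physical removal succeeds they are either dropped for good or returned to Outdated for a retry. The sketch below models this protocol with toy types; it is an illustration under simplified assumptions, not the actual implementation (locking, remove_time checks, and the state assertions are omitted).

#include <algorithm>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

enum class State { Committed, Outdated, Deleting };

struct Part { std::string name; State state = State::Committed; };
using PartPtr = std::shared_ptr<Part>;

struct PartSet
{
    std::vector<PartPtr> parts;

    // Phase 1: mark old parts as Deleting and hand them to the caller.
    std::vector<PartPtr> grabOldParts()
    {
        std::vector<PartPtr> res;
        for (auto & p : parts)
            if (p->state == State::Outdated)
            {
                p->state = State::Deleting;
                res.push_back(p);
            }
        return res;
    }

    // Phase 2a: deletion failed (e.g. a ZooKeeper error), return parts to Outdated for retry.
    void rollbackDeletingParts(const std::vector<PartPtr> & failed)
    {
        for (auto & p : failed)
            p->state = State::Outdated;
    }

    // Phase 2b: deletion succeeded, finally drop the parts from the in-memory set.
    void removePartsFinally(const std::vector<PartPtr> & deleted)
    {
        for (auto & p : deleted)
            parts.erase(std::remove(parts.begin(), parts.end(), p), parts.end());
    }
};

int main()
{
    PartSet set;
    set.parts = {std::make_shared<Part>(Part{"old", State::Outdated}),
                 std::make_shared<Part>(Part{"live", State::Committed})};

    auto grabbed = set.grabOldParts();      // "old" is now Deleting
    set.removePartsFinally(grabbed);        // pretend the disk removal succeeded
    std::cout << set.parts.size() << '\n';  // 1: only "live" remains
}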
mmm a / dbms / src / Storages / MergeTree / MergeTreeDataPart . cpp <nl> ppp b / dbms / src / Storages / MergeTree / MergeTreeDataPart . cpp <nl> size_t MergeTreeDataPart : : calcTotalSize ( const String & from ) <nl> std : : vector < std : : string > files ; <nl> cur . list ( files ) ; <nl> size_t res = 0 ; <nl> - for ( size_t i = 0 ; i < files . size ( ) ; + + i ) <nl> - res + = calcTotalSize ( from + files [ i ] ) ; <nl> + for ( const auto & file : files ) <nl> + res + = calcTotalSize ( from + file ) ; <nl> return res ; <nl> } <nl> <nl> void MergeTreeDataPart : : remove ( ) const <nl> LOG_WARNING ( storage . log , " Directory " < < from < < " ( part to remove ) doesn ' t exist or one of nested files has gone . " <nl> " Most likely this is due to manual removing . This should be discouraged . Ignoring . " ) ; <nl> <nl> return ; <nl> } <nl> <nl> void MergeTreeDataPart : : renameTo ( const String & new_relative_path , bool remove_n <nl> } <nl> } <nl> <nl> - from_file . setLastModified ( Poco : : Timestamp : : fromEpochTime ( time ( 0 ) ) ) ; <nl> + from_file . setLastModified ( Poco : : Timestamp : : fromEpochTime ( time ( nullptr ) ) ) ; <nl> from_file . renameTo ( to ) ; <nl> relative_path = new_relative_path ; <nl> } <nl> size_t MergeTreeDataPart : : getIndexSizeInAllocatedBytes ( ) const <nl> return res ; <nl> } <nl> <nl> + String MergeTreeDataPart : : stateToString ( MergeTreeDataPart : : State state ) <nl> + { <nl> + switch ( state ) <nl> + { <nl> + case State : : Temporary : <nl> + return " Temporary " ; <nl> + case State : : PreCommitted : <nl> + return " PreCommitted " ; <nl> + case State : : Committed : <nl> + return " Committed " ; <nl> + case State : : Outdated : <nl> + return " Outdated " ; <nl> + case State : : Deleting : <nl> + return " Deleting " ; <nl> + default : <nl> + throw Exception ( " Unknown part state " + std : : to_string ( static_cast < int > ( state ) ) , ErrorCodes : : LOGICAL_ERROR ) ; <nl> + } <nl> + } <nl> + <nl> + String MergeTreeDataPart : : stateString ( ) const <nl> + { <nl> + return stateToString ( state ) ; <nl> + } <nl> + <nl> } <nl> mmm a / dbms / src / Storages / MergeTree / MergeTreeDataPart . h <nl> ppp b / dbms / src / Storages / MergeTree / MergeTreeDataPart . h <nl> struct MergeTreeDataPart <nl> / / / If true , the destructor will delete the directory with the part . <nl> bool is_temp = false ; <nl> <nl> + / / / If true it means that there is no ZooKeeper node for this part , so it should be deleted only from the filesystem <nl> + bool is_duplicate = false ; <nl> + <nl> / / / For resharding . <nl> size_t shard_no = 0 ; <nl>
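The is_duplicate flag is set when an insert is recognized as a repeat of an earlier block, which the replicated output stream detects by deriving a deterministic block id from the data. The sketch below shows only the shape of that check: it uses std::hash in place of the SipHash-of-checksums used by the real code, and an in-memory set stands in for the per-table blocks nodes in ZooKeeper.

#include <cstdint>
#include <functional>
#include <iostream>
#include <set>
#include <string>

// Sketch of content-based insert deduplication under simplified assumptions.
std::string makeBlockId(const std::string & block_data)
{
    // The diff builds the id from two 64-bit SipHash words; one std::hash word
    // is enough to demonstrate the idea in a self-contained example.
    std::uint64_t h = std::hash<std::string>{}(block_data);
    return std::to_string(h);
}

int main()
{
    std::set<std::string> known_block_ids;  // stands in for the /blocks nodes in ZooKeeper

    for (std::string block : {"a,b,c", "d,e,f", "a,b,c"})
    {
        auto id = makeBlockId(block);
        bool is_duplicate = !known_block_ids.insert(id).second;
        if (is_duplicate)
            std::cout << "block " << id << " already exists; ignoring it\n";
        else
            std::cout << "wrote block " << id << '\n';
    }
}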
+ / * * <nl> + * Part state is a stage of its lifetime . States are ordered and the state of a part can only be increased . <nl> + * Part state should be modified under the data_parts mutex . <nl> + * <nl> + * Possible state transitions : <nl> + * Temporary - > PreCommitted : we are trying to commit a fetched , inserted or merged part to the active set <nl> + * PreCommitted - > Outdated : we could not add a part to the active set and are rolling back ( for example , it is a duplicated part ) <nl> + * PreCommitted - > Committed : we successfully committed a part to the active dataset <nl> + * Committed - > Outdated : a part was replaced by a covering part or DROP PARTITION <nl> + * Outdated - > Deleting : a cleaner selected this part for deletion <nl> + * Deleting - > Outdated : if a ZooKeeper error occurred during the deletion , we will retry it <nl> + * / <nl> + enum class State <nl> + { <nl> + Temporary , / / / the part is being generated now , it is not in the data_parts list <nl> + PreCommitted , / / / the part is in data_parts , but not used for SELECTs <nl> + Committed , / / / active data part , used by current and upcoming SELECTs <nl> + Outdated , / / / inactive data part that could still be used by current SELECTs ; it can be deleted after those SELECTs finish <nl> + Deleting / / / inactive data part with a refcounter equal to one ; it is being deleted right now by a cleaner <nl> + } ; <nl> + <nl> + / / / Current state of the part . If the part is already in the working set , it should be accessed via the data_parts mutex <nl> + mutable State state { State : : Temporary } ; <nl> + <nl> + / / / Returns the name of the state <nl> + static String stateToString ( State state ) ; <nl> + String stateString ( ) const ; <nl> + <nl> + String getNameWithState ( ) const <nl> + { <nl> + return name + " ( state " + stateString ( ) + " ) " ; <nl> + } <nl> + <nl> + / / / Returns true if the state of the part is one of affordable_states <nl> + bool checkState ( const std : : initializer_list < State > & affordable_states ) const <nl> + { <nl> + for ( auto affordable_state : affordable_states ) <nl> + { <nl> + if ( state = = affordable_state ) <nl> + return true ; <nl> + } <nl> + return false ; <nl> + } <nl> + <nl> + / / / Throws an exception if the state of the part is not in affordable_states <nl> + void assertState ( const std : : initializer_list < State > & affordable_states ) const <nl> + { <nl> + if ( ! checkState ( affordable_states ) ) <nl> + { <nl> + String states_str ; <nl> + for ( auto state : affordable_states ) <nl> + states_str + = stateToString ( state ) + " " ; <nl> + <nl> + throw Exception ( " Unexpected state of part " + getNameWithState ( ) + " . Expected : " + states_str ) ; <nl> + } <nl> + } <nl> + <nl> + / / / In comparison with lambdas , it is move - assignable and can have several overloads of operator ( ) <nl> + struct StatesFilter <nl> + { <nl> + std : : initializer_list < State > affordable_states ; <nl> + StatesFilter ( const std : : initializer_list < State > & affordable_states ) : affordable_states ( affordable_states ) { } <nl> + <nl> + bool operator ( ) ( const std : : shared_ptr < const MergeTreeDataPart > & part ) const <nl> + { <nl> + return part - > checkState ( affordable_states ) ; <nl> + } <nl> + } ; <nl> + <nl> + / / / Returns a filter functor that returns true only for parts with states from the specified list <nl> + static inline StatesFilter getStatesFilter ( const std : : initializer_list < State > & affordable_states ) <nl> + { <nl> + return StatesFilter ( affordable_states ) ; <nl> + } <nl> + <nl> / / / Primary key ( correspond to primary . idx file ) . <nl> / / / Always loaded in RAM . Contains each index_granularity - th value of primary key tuple .
<nl> / / / Note that marks ( also correspond to primary key ) is not always in RAM , but cached . See MarkCache . h . <nl> mmm a / dbms / src / Storages / MergeTree / ReplicatedMergeTreeBlockOutputStream . cpp <nl> ppp b / dbms / src / Storages / MergeTree / ReplicatedMergeTreeBlockOutputStream . cpp <nl> namespace ErrorCodes <nl> <nl> <nl> ReplicatedMergeTreeBlockOutputStream : : ReplicatedMergeTreeBlockOutputStream ( <nl> - StorageReplicatedMergeTree & storage_ , size_t quorum_ , size_t quorum_timeout_ms_ ) <nl> - : storage ( storage_ ) , quorum ( quorum_ ) , quorum_timeout_ms ( quorum_timeout_ms_ ) , <nl> + StorageReplicatedMergeTree & storage_ , size_t quorum_ , size_t quorum_timeout_ms_ , bool deduplicate_ ) <nl> + : storage ( storage_ ) , quorum ( quorum_ ) , quorum_timeout_ms ( quorum_timeout_ms_ ) , deduplicate ( deduplicate_ ) , <nl> log ( & Logger : : get ( storage . data . getLogName ( ) + " ( Replicated OutputStream ) " ) ) <nl> { <nl> / / / The quorum value ` 1 ` has the same meaning as if it is disabled . <nl> void ReplicatedMergeTreeBlockOutputStream : : checkQuorumPrecondition ( zkutil : : ZooKe <nl> <nl> void ReplicatedMergeTreeBlockOutputStream : : write ( const Block & block ) <nl> { <nl> + last_block_is_duplicate = false ; <nl> + <nl> / / / TODO Is it possible to not lock the table structure here ? <nl> storage . data . delayInsertIfNeeded ( & storage . restarting_thread - > getWakeupEvent ( ) ) ; <nl> <nl> void ReplicatedMergeTreeBlockOutputStream : : write ( const Block & block ) <nl> <nl> MergeTreeData : : MutableDataPartPtr part = storage . writer . writeTempPart ( current_block ) ; <nl> <nl> - SipHash hash ; <nl> - part - > checksums . summaryDataChecksum ( hash ) ; <nl> - union <nl> - { <nl> - char bytes [ 16 ] ; <nl> - UInt64 words [ 2 ] ; <nl> - } hash_value ; <nl> - hash . get128 ( hash_value . bytes ) ; <nl> + String block_id ; <nl> <nl> - String checksum ( hash_value . bytes , 16 ) ; <nl> + if ( deduplicate ) <nl> + { <nl> + SipHash hash ; <nl> + part - > checksums . summaryDataChecksum ( hash ) ; <nl> + union <nl> + { <nl> + char bytes [ 16 ] ; <nl> + UInt64 words [ 2 ] ; <nl> + } hash_value ; <nl> + hash . get128 ( hash_value . bytes ) ; <nl> <nl> - / / / We take the hash from the data as ID . That is , do not insert the same data twice . <nl> - String block_id = toString ( hash_value . words [ 0 ] ) + " _ " + toString ( hash_value . words [ 1 ] ) ; <nl> + / / / We take the hash from the data as ID . That is , do not insert the same data twice . <nl> + block_id = toString ( hash_value . words [ 0 ] ) + " _ " + toString ( hash_value . words [ 1 ] ) ; <nl> <nl> - LOG_DEBUG ( log , " Wrote block with ID ' " < < block_id < < " ' , " < < block . rows ( ) < < " rows " ) ; <nl> + LOG_DEBUG ( log , " Wrote block with ID ' " < < block_id < < " ' , " < < block . rows ( ) < < " rows " ) ; <nl> + } <nl> + else <nl> + { <nl> + LOG_DEBUG ( log , " Wrote block with " < < block . rows ( ) < < " rows " ) ; <nl> + } <nl> <nl> commitPart ( zookeeper , part , block_id ) ; <nl> <nl> void ReplicatedMergeTreeBlockOutputStream : : write ( const Block & block ) <nl> <nl> void ReplicatedMergeTreeBlockOutputStream : : writeExistingPart ( MergeTreeData : : MutableDataPartPtr & part ) <nl> { <nl> + last_block_is_duplicate = false ; <nl> + <nl> / / / NOTE No delay in this case . That ' s Ok . <nl> <nl> auto zookeeper = storage . 
getZooKeeper ( ) ; <nl> void ReplicatedMergeTreeBlockOutputStream : : commitPart ( zkutil : : ZooKeeperPtr & zoo <nl> { <nl> LOG_INFO ( log , " Block with ID " < < block_id < < " already exists ; ignoring it ( removing part " < < part - > name < < " ) " ) ; <nl> <nl> + part - > is_duplicate = true ; <nl> transaction . rollback ( ) ; <nl> + last_block_is_duplicate = true ; <nl> } <nl> else if ( zookeeper - > exists ( quorum_info . status_path ) ) <nl> { <nl> mmm a / dbms / src / Storages / MergeTree / ReplicatedMergeTreeBlockOutputStream . h <nl> ppp b / dbms / src / Storages / MergeTree / ReplicatedMergeTreeBlockOutputStream . h <nl> class StorageReplicatedMergeTree ; <nl> class ReplicatedMergeTreeBlockOutputStream : public IBlockOutputStream <nl> { <nl> public : <nl> - ReplicatedMergeTreeBlockOutputStream ( StorageReplicatedMergeTree & storage_ , <nl> - size_t quorum_ , size_t quorum_timeout_ms_ ) ; <nl> + ReplicatedMergeTreeBlockOutputStream ( StorageReplicatedMergeTree & storage_ , size_t quorum_ , size_t quorum_timeout_ms_ , <nl> + bool deduplicate_ ) ; <nl> <nl> void write ( const Block & block ) override ; <nl> <nl> / / / For ATTACHing existing data on filesystem . <nl> void writeExistingPart ( MergeTreeData : : MutableDataPartPtr & part ) ; <nl> <nl> + / / / For proper deduplication in MaterializedViews <nl> + bool lastBlockIsDuplicate ( ) const <nl> + { <nl> + return last_block_is_duplicate ; <nl> + } <nl> + <nl> private : <nl> struct QuorumInfo <nl> { <nl> class ReplicatedMergeTreeBlockOutputStream : public IBlockOutputStream <nl> size_t quorum ; <nl> size_t quorum_timeout_ms ; <nl> <nl> + bool deduplicate = true ; <nl> + bool last_block_is_duplicate = false ; <nl> + <nl> using Logger = Poco : : Logger ; <nl> Logger * log ; <nl> } ; <nl> mmm a / dbms / src / Storages / MergeTree / ReplicatedMergeTreeCleanupThread . cpp <nl> ppp b / dbms / src / Storages / MergeTree / ReplicatedMergeTreeCleanupThread . cpp <nl> void ReplicatedMergeTreeCleanupThread : : run ( ) <nl> tryLogCurrentException ( __PRETTY_FUNCTION__ ) ; <nl> } <nl> <nl> - storage . shutdown_event . tryWait ( CLEANUP_SLEEP_MS ) ; <nl> + storage . cleanup_thread_event . tryWait ( CLEANUP_SLEEP_MS ) ; <nl> } <nl> <nl> LOG_DEBUG ( log , " Cleanup thread finished " ) ; <nl> void ReplicatedMergeTreeCleanupThread : : run ( ) <nl> <nl> void ReplicatedMergeTreeCleanupThread : : iterate ( ) <nl> { <nl> - storage . clearOldPartsAndRemoveFromZK ( log ) ; <nl> + storage . clearOldPartsAndRemoveFromZK ( ) ; <nl> storage . data . clearOldTemporaryDirectories ( ) ; <nl> <nl> if ( storage . is_leader_node ) <nl> mmm a / dbms / src / Storages / MergeTree / ReplicatedMergeTreeRestartingThread . cpp <nl> ppp b / dbms / src / Storages / MergeTree / ReplicatedMergeTreeRestartingThread . cpp <nl> void ReplicatedMergeTreeRestartingThread : : partialShutdown ( ) <nl> storage . merge_selecting_event . set ( ) ; <nl> storage . queue_updating_event - > set ( ) ; <nl> storage . alter_query_event - > set ( ) ; <nl> + storage . cleanup_thread_event . set ( ) ; <nl> storage . replica_is_active_node = nullptr ; <nl> <nl> LOG_TRACE ( log , " Waiting for threads to finish " ) ; <nl> mmm a / dbms / src / Storages / StorageFactory . cpp <nl> ppp b / dbms / src / Storages / StorageFactory . cpp <nl> StoragePtr StorageFactory : : get ( <nl> <nl> if ( query . 
is_materialized_view ) <nl> { <nl> + / / / Pass local_context here to convey setting for inner table <nl> return StorageMaterializedView : : create ( <nl> - table_name , database_name , context , query , columns , <nl> + table_name , database_name , local_context , query , columns , <nl> materialized_columns , alias_columns , column_defaults , <nl> attach ) ; <nl> } <nl> mmm a / dbms / src / Storages / StorageMaterializedView . cpp <nl> ppp b / dbms / src / Storages / StorageMaterializedView . cpp <nl> <nl> # include < Storages / VirtualColumnFactory . h > <nl> <nl> # include < Common / typeid_cast . h > <nl> + # include " StorageReplicatedMergeTree . h " <nl> <nl> <nl> namespace DB <nl> static void extractDependentTable ( const ASTSelectQuery & query , String & select_ <nl> StorageMaterializedView : : StorageMaterializedView ( <nl> const String & table_name_ , <nl> const String & database_name_ , <nl> - Context & context_ , <nl> + Context & local_context , <nl> const ASTCreateQuery & query , <nl> NamesAndTypesListPtr columns_ , <nl> const NamesAndTypesList & materialized_columns_ , <nl> StorageMaterializedView : : StorageMaterializedView ( <nl> const ColumnDefaults & column_defaults_ , <nl> bool attach_ ) <nl> : IStorage { materialized_columns_ , alias_columns_ , column_defaults_ } , table_name ( table_name_ ) , <nl> - database_name ( database_name_ ) , context ( context_ ) , columns ( columns_ ) <nl> + database_name ( database_name_ ) , global_context ( local_context . getGlobalContext ( ) ) , columns ( columns_ ) <nl> { <nl> if ( ! query . select ) <nl> throw Exception ( " SELECT query is not specified for " + getName ( ) , ErrorCodes : : INCORRECT_QUERY ) ; <nl> StorageMaterializedView : : StorageMaterializedView ( <nl> extractDependentTable ( * query . select , select_database_name , select_table_name ) ; <nl> <nl> if ( ! select_table_name . empty ( ) ) <nl> - context . getGlobalContext ( ) . addDependency ( <nl> + global_context . addDependency ( <nl> DatabaseAndTableName ( select_database_name , select_table_name ) , <nl> DatabaseAndTableName ( database_name , table_name ) ) ; <nl> <nl> StorageMaterializedView : : StorageMaterializedView ( <nl> / / / Execute the query . <nl> try <nl> { <nl> - InterpreterCreateQuery create_interpreter ( manual_create_query , context ) ; <nl> + InterpreterCreateQuery create_interpreter ( manual_create_query , local_context ) ; <nl> create_interpreter . execute ( ) ; <nl> } <nl> catch ( . . . ) <nl> { <nl> / / / In case of any error we should remove dependency to the view . <nl> if ( ! select_table_name . empty ( ) ) <nl> - context . getGlobalContext ( ) . removeDependency ( <nl> + global_context . removeDependency ( <nl> DatabaseAndTableName ( select_database_name , select_table_name ) , <nl> DatabaseAndTableName ( database_name , table_name ) ) ; <nl> <nl> BlockOutputStreamPtr StorageMaterializedView : : write ( const ASTPtr & query , const <nl> <nl> void StorageMaterializedView : : drop ( ) <nl> { <nl> - context . getGlobalContext ( ) . removeDependency ( <nl> + global_context . removeDependency ( <nl> DatabaseAndTableName ( select_database_name , select_table_name ) , <nl> DatabaseAndTableName ( database_name , table_name ) ) ; <nl> <nl> auto inner_table_name = getInnerTableName ( ) ; <nl> <nl> - if ( context . tryGetTable ( database_name , inner_table_name ) ) <nl> + if ( global_context . tryGetTable ( database_name , inner_table_name ) ) <nl> { <nl> / / / We create and execute ` drop ` query for internal table . 
<nl> auto drop_query = std : : make_shared < ASTDropQuery > ( ) ; <nl> drop_query - > database = database_name ; <nl> drop_query - > table = inner_table_name ; <nl> ASTPtr ast_drop_query = drop_query ; <nl> - InterpreterDropQuery drop_interpreter ( ast_drop_query , context ) ; <nl> + InterpreterDropQuery drop_interpreter ( ast_drop_query , global_context ) ; <nl> drop_interpreter . execute ( ) ; <nl> } <nl> } <nl> bool StorageMaterializedView : : optimize ( const ASTPtr & query , const ASTPtr & part <nl> <nl> StoragePtr StorageMaterializedView : : getInnerTable ( ) const <nl> { <nl> - return context . getTable ( database_name , getInnerTableName ( ) ) ; <nl> + return global_context . getTable ( database_name , getInnerTableName ( ) ) ; <nl> } <nl> <nl> } <nl> mmm a / dbms / src / Storages / StorageMaterializedView . h <nl> ppp b / dbms / src / Storages / StorageMaterializedView . h <nl> friend class ext : : shared_ptr_helper < StorageMaterializedView > ; <nl> String table_name ; <nl> String database_name ; <nl> ASTPtr inner_query ; <nl> - Context & context ; <nl> + Context & global_context ; <nl> NamesAndTypesListPtr columns ; <nl> <nl> StorageMaterializedView ( <nl> const String & table_name_ , <nl> const String & database_name_ , <nl> - Context & context_ , <nl> + Context & local_context , <nl> const ASTCreateQuery & query , <nl> NamesAndTypesListPtr columns_ , <nl> const NamesAndTypesList & materialized_columns_ , <nl> mmm a / dbms / src / Storages / StorageMergeTree . cpp <nl> ppp b / dbms / src / Storages / StorageMergeTree . cpp <nl> void StorageMergeTree : : dropPartition ( const ASTPtr & query , const ASTPtr & partit <nl> if ( detach ) <nl> data . renameAndDetachPart ( part , " " ) ; <nl> else <nl> - data . replaceParts ( { part } , { } , false ) ; <nl> + data . removePartsFromWorkingSet ( { part } , false ) ; <nl> } <nl> <nl> LOG_INFO ( log , ( detach ? " Detached " : " Removed " ) < < removed_parts < < " parts inside partition ID " < < partition_id < < " . " ) ; <nl> mmm a / dbms / src / Storages / StorageReplicatedMergeTree . cpp <nl> ppp b / dbms / src / Storages / StorageReplicatedMergeTree . cpp <nl> void StorageReplicatedMergeTree : : executeDropRange ( const StorageReplicatedMergeTr <nl> <nl> / / / If the part needs to be removed , it is more reliable to delete the directory after the changes in ZooKeeper . <nl> if ( ! entry . detach ) <nl> - data . replaceParts ( { part } , { } , true ) ; <nl> + data . removePartsFromWorkingSet ( { part } , true ) ; <nl> } <nl> <nl> LOG_INFO ( log , ( entry . detach ? " Detached " : " Removed " ) < < removed_parts < < " parts inside " < < entry . new_part_name < < " . " ) ; <nl> void StorageReplicatedMergeTree : : updateQuorum ( const String & part_name ) <nl> <nl> bool StorageReplicatedMergeTree : : fetchPart ( const String & part_name , const String & replica_path , bool to_detached , size_t quorum ) <nl> { <nl> + if ( auto part = data . getPartIfExists ( part_name , { MergeTreeDataPart : : State : : Outdated , MergeTreeDataPart : : State : : Deleting } ) ) <nl> + { <nl> + LOG_DEBUG ( log , " Part " < < part - > getNameWithState ( ) < < " should be deleted after previous attempt before fetch " ) ; <nl> + / / / Force premature parts cleanup <nl> + cleanup_thread_event . set ( ) ; <nl> + return false ; <nl> + } <nl> + <nl> { <nl> std : : lock_guard < std : : mutex > lock ( currently_fetching_parts_mutex ) ; <nl> if ( ! currently_fetching_parts . insert ( part_name ) . 
second ) <nl> BlockOutputStreamPtr StorageReplicatedMergeTree : : write ( const ASTPtr & query , con <nl> { <nl> assertNotReadonly ( ) ; <nl> <nl> + bool deduplicate = data . settings . replicated_deduplication_window ! = 0 & & settings . insert_deduplicate ; <nl> + <nl> return std : : make_shared < ReplicatedMergeTreeBlockOutputStream > ( * this , <nl> - settings . insert_quorum , settings . insert_quorum_timeout . totalMilliseconds ( ) ) ; <nl> + settings . insert_quorum , settings . insert_quorum_timeout . totalMilliseconds ( ) , deduplicate ) ; <nl> } <nl> <nl> <nl> void StorageReplicatedMergeTree : : attachPartition ( const ASTPtr & partition , bool <nl> loaded_parts . push_back ( data . loadPartAndFixMetadata ( source_dir + part ) ) ; <nl> } <nl> <nl> - ReplicatedMergeTreeBlockOutputStream output ( * this , 0 , 0 ) ; / / / TODO Allow to use quorum here . <nl> + ReplicatedMergeTreeBlockOutputStream output ( * this , 0 , 0 , false ) ; / / / TODO Allow to use quorum here . <nl> for ( auto & part : loaded_parts ) <nl> { <nl> String old_name = part - > name ; <nl> void StorageReplicatedMergeTree : : reshardPartitions ( <nl> <nl> / / / Make a list of local partitions that need to be resharded . <nl> std : : set < std : : string > unique_partition_list ; <nl> - const MergeTreeData : : DataParts & data_parts = data . getDataParts ( ) ; <nl> - for ( MergeTreeData : : DataParts : : iterator it = data_parts . cbegin ( ) ; it ! = data_parts . cend ( ) ; + + it ) <nl> + auto data_parts = data . getDataParts ( ) ; <nl> + for ( auto & part : data_parts ) <nl> { <nl> - const String & current_partition_id = ( * it ) - > info . partition_id ; <nl> + const String & current_partition_id = part - > info . partition_id ; <nl> if ( include_all | | partition_id = = current_partition_id ) <nl> unique_partition_list . insert ( current_partition_id ) ; <nl> } <nl> bool StorageReplicatedMergeTree : : checkSpaceForResharding ( const ReplicaToSpaceInf <nl> } <nl> <nl> <nl> - void StorageReplicatedMergeTree : : clearOldPartsAndRemoveFromZK ( Logger * log_ ) <nl> + void StorageReplicatedMergeTree : : clearOldPartsAndRemoveFromZK ( ) <nl> { <nl> / / / Critical section is not required ( since grabOldParts ( ) returns unique part set on each call ) <nl> <nl> - Logger * log = log_ ? log_ : this - > log ; <nl> - <nl> auto table_lock = lockStructure ( false , __PRETTY_FUNCTION__ ) ; <nl> auto zookeeper = getZooKeeper ( ) ; <nl> <nl> MergeTreeData : : DataPartsVector parts = data . grabOldParts ( ) ; <nl> - size_t count = parts . size ( ) ; <nl> - <nl> - if ( ! count ) <nl> + if ( parts . empty ( ) ) <nl> return ; <nl> <nl> - / / / Part names that were successfully deleted from filesystem and should be deleted from ZooKeeper <nl> - Strings part_names ; <nl> - auto remove_from_zookeeper = [ & ] ( ) <nl> + MergeTreeData : : DataPartsVector parts_to_delete_only_from_filesystem ; / / Only duplicates <nl> + MergeTreeData : : DataPartsVector parts_to_delete_completely ; / / All parts except duplicates <nl> + MergeTreeData : : DataPartsVector parts_to_retry_deletion ; / / Parts that should be retried due to network problems <nl> + MergeTreeData : : DataPartsVector parts_to_remove_from_filesystem ; / / Parts removed from ZK <nl> + <nl> + for ( const auto & part : parts ) <nl> { <nl> - LOG_DEBUG ( log , " Removed " < < part_names . size ( ) < < " old parts from filesystem . Removing them from ZooKeeper . " ) ; <nl> + if ( ! part - > is_duplicate ) <nl> + parts_to_delete_completely . 
emplace_back ( part ) ; <nl> + else <nl> + parts_to_delete_only_from_filesystem . emplace_back ( part ) ; <nl> + } <nl> + parts . clear ( ) ; <nl> <nl> - try <nl> - { <nl> - removePartsFromZooKeeper ( zookeeper , part_names ) ; <nl> - } <nl> - catch ( . . . ) <nl> + auto remove_parts_from_filesystem = [ log = log ] ( const MergeTreeData : : DataPartsVector & parts_to_remove ) <nl> + { <nl> + for ( auto & part : parts_to_remove ) <nl> { <nl> - LOG_ERROR ( log , " There is a problem with deleting parts from ZooKeeper : " < < getCurrentExceptionMessage ( false ) ) ; <nl> + try <nl> + { <nl> + part - > remove ( ) ; <nl> + } <nl> + catch ( . . . ) <nl> + { <nl> + tryLogCurrentException ( log , " There is a problem with deleting part " + part - > name + " from filesystem " ) ; <nl> + } <nl> } <nl> } ; <nl> <nl> + / / / Delete duplicate parts from filesystem <nl> + if ( ! parts_to_delete_only_from_filesystem . empty ( ) ) <nl> + { <nl> + remove_parts_from_filesystem ( parts_to_delete_only_from_filesystem ) ; <nl> + data . removePartsFinally ( parts_to_delete_only_from_filesystem ) ; <nl> + <nl> + LOG_DEBUG ( log , " Removed " < < parts_to_delete_only_from_filesystem . size ( ) < < " old duplicate parts " ) ; <nl> + } <nl> + <nl> + / / / Delete normal parts from ZooKeeper <nl> + NameSet part_names_to_retry_deletion ; <nl> try <nl> { <nl> - LOG_DEBUG ( log , " Removing " < < parts . size ( ) < < " old parts from filesystem " ) ; <nl> + Strings part_names_to_delete_completely ; <nl> + for ( const auto & part : parts_to_delete_completely ) <nl> + part_names_to_delete_completely . emplace_back ( part - > name ) ; <nl> <nl> - while ( ! parts . empty ( ) ) <nl> - { <nl> - MergeTreeData : : DataPartPtr & part = parts . back ( ) ; <nl> - part - > remove ( ) ; <nl> - part_names . emplace_back ( part - > name ) ; <nl> - parts . pop_back ( ) ; <nl> - } <nl> + LOG_DEBUG ( log , " Removing " < < parts_to_delete_completely . size ( ) < < " old parts from ZooKeeper " ) ; <nl> + removePartsFromZooKeeper ( zookeeper , part_names_to_delete_completely , & part_names_to_retry_deletion ) ; <nl> } <nl> catch ( . . . ) <nl> { <nl> - tryLogCurrentException ( __PRETTY_FUNCTION__ ) ; <nl> + LOG_ERROR ( log , " There is a problem with deleting parts from ZooKeeper : " < < getCurrentExceptionMessage ( false ) ) ; <nl> + } <nl> <nl> - / / / Finalize deletion of parts already deleted from filesystem , rollback remaining parts <nl> - data . addOldParts ( parts ) ; <nl> - remove_from_zookeeper ( ) ; <nl> + / / / Part names that were reliably deleted from ZooKeeper should be deleted from filesystem <nl> + auto num_reliably_deleted_parts = parts_to_delete_completely . size ( ) - part_names_to_retry_deletion . size ( ) ; <nl> + LOG_DEBUG ( log , " Removed " < < num_reliably_deleted_parts < < " old parts from ZooKeeper . Removing them from filesystem . " ) ; <nl> <nl> - throw ; <nl> + / / / Delete normal parts on two sets <nl> + for ( auto & part : parts_to_delete_completely ) <nl> + { <nl> + if ( part_names_to_retry_deletion . count ( part - > name ) = = 0 ) <nl> + parts_to_remove_from_filesystem . emplace_back ( part ) ; <nl> + else <nl> + parts_to_retry_deletion . emplace_back ( part ) ; <nl> } <nl> <nl> - / / / Finalize deletion <nl> - remove_from_zookeeper ( ) ; <nl> + / / / Will retry deletion <nl> + if ( ! parts_to_retry_deletion . empty ( ) ) <nl> + { <nl> + data . rollbackDeletingParts ( parts_to_retry_deletion ) ; <nl> + LOG_DEBUG ( log , " Will retry deletion of " < < parts_to_retry_deletion . 
size ( ) < < " parts in the next time " ) ; <nl> + } <nl> <nl> - LOG_DEBUG ( log , " Removed " < < count < < " old parts " ) ; <nl> + / / / Remove parts from filesystem and finally from data_parts <nl> + if ( ! parts_to_remove_from_filesystem . empty ( ) ) <nl> + { <nl> + remove_parts_from_filesystem ( parts_to_remove_from_filesystem ) ; <nl> + data . removePartsFinally ( parts_to_remove_from_filesystem ) ; <nl> + <nl> + LOG_DEBUG ( log , " Removed " < < parts_to_remove_from_filesystem . size ( ) < < " old parts " ) ; <nl> + } <nl> } <nl> <nl> <nl> static int32_t tryMultiWithRetries ( zkutil : : ZooKeeperPtr & zookeeper , zkutil : : Ops <nl> } <nl> <nl> <nl> - void StorageReplicatedMergeTree : : removePartsFromZooKeeper ( zkutil : : ZooKeeperPtr & zookeeper , const Strings & part_names ) <nl> + void StorageReplicatedMergeTree : : removePartsFromZooKeeper ( zkutil : : ZooKeeperPtr & zookeeper , const Strings & part_names , <nl> + NameSet * parts_should_be_retied ) <nl> { <nl> zkutil : : Ops ops ; <nl> auto it_first_node_in_batch = part_names . cbegin ( ) ; <nl> void StorageReplicatedMergeTree : : removePartsFromZooKeeper ( zkutil : : ZooKeeperPtr & <nl> auto cur_code = tryMultiWithRetries ( zookeeper , cur_ops ) ; <nl> <nl> if ( cur_code = = ZNONODE ) <nl> + { <nl> LOG_DEBUG ( log , " There is no part " < < * it_in_batch < < " in ZooKeeper , it was only in filesystem " ) ; <nl> + } <nl> + else if ( parts_should_be_retied & & zkutil : : isHardwareErrorCode ( cur_code ) ) <nl> + { <nl> + parts_should_be_retied - > emplace ( * it_in_batch ) ; <nl> + } <nl> else if ( cur_code ! = ZOK ) <nl> + { <nl> LOG_WARNING ( log , " Cannot remove part " < < * it_in_batch < < " from ZooKeeper : " < < : : zerror ( cur_code ) ) ; <nl> + } <nl> } <nl> } <nl> + else if ( parts_should_be_retied & & zkutil : : isHardwareErrorCode ( code ) ) <nl> + { <nl> + for ( auto it_in_batch = it_first_node_in_batch ; it_in_batch ! = it_next ; + + it_in_batch ) <nl> + parts_should_be_retied - > emplace ( * it_in_batch ) ; <nl> + } <nl> else if ( code ! = ZOK ) <nl> { <nl> LOG_WARNING ( log , " There was a problem with deleting " < < ( it_next - it_first_node_in_batch ) <nl> mmm a / dbms / src / Storages / StorageReplicatedMergeTree . h <nl> ppp b / dbms / src / Storages / StorageReplicatedMergeTree . h <nl> friend class ext : : shared_ptr_helper < StorageReplicatedMergeTree > ; <nl> <nl> private : <nl> / / / Delete old chunks from disk and from ZooKeeper . <nl> - void clearOldPartsAndRemoveFromZK ( Logger * log_ = nullptr ) ; <nl> + void clearOldPartsAndRemoveFromZK ( ) ; <nl> <nl> friend class ReplicatedMergeTreeBlockOutputStream ; <nl> friend class ReplicatedMergeTreeRestartingThread ; <nl> friend class ext : : shared_ptr_helper < StorageReplicatedMergeTree > ; <nl> <nl> / / / A thread that removes old parts , log entries , and blocks . <nl> std : : unique_ptr < ReplicatedMergeTreeCleanupThread > cleanup_thread ; <nl> + / / / Is used to wakeup cleanup_thread <nl> + Poco : : Event cleanup_thread_event ; <nl> <nl> / / / A thread that processes reconnection to ZooKeeper when the session expires . 
/ / / A thread that processes reconnection to ZooKeeper when the session expires . <nl> std : : unique_ptr < ReplicatedMergeTreeRestartingThread > restarting_thread ; <nl> friend class ext : : shared_ptr_helper < StorageReplicatedMergeTree > ; <nl> void removePartFromZooKeeper ( const String & part_name , zkutil : : Ops & ops ) ; <nl> <nl> / / / Quickly removes a big set of parts from ZooKeeper ( using async multi queries ) <nl> - void removePartsFromZooKeeper ( zkutil : : ZooKeeperPtr & zookeeper , const Strings & part_names ) ; <nl> + void removePartsFromZooKeeper ( zkutil : : ZooKeeperPtr & zookeeper , const Strings & part_names , <nl> + NameSet * parts_should_be_retried = nullptr ) ; <nl> <nl> / / / Removes a part from ZooKeeper and adds a task to the queue to download it . It is supposed to do this with broken parts . <nl> void removePartAndEnqueueFetch ( const String & part_name ) ; <nl>
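The parts_should_be_retried out-parameter splits ZooKeeper removal failures into transient ones, worth retrying on the next cleanup iteration, and permanent ones. The sketch below shows that classification with an invented result-code enum and a fake tryRemove; the real code inspects actual ZooKeeper return codes via zkutil::isHardwareErrorCode.

#include <iostream>
#include <set>
#include <string>
#include <vector>

enum class Rc { Ok, NoNode, ConnectionLoss, Other };  // invented codes for the example

Rc tryRemove(const std::string & name)
{
    // Pretend parts containing "net" hit a network problem and "ghost" never existed.
    if (name.find("net") != std::string::npos) return Rc::ConnectionLoss;
    if (name.find("ghost") != std::string::npos) return Rc::NoNode;
    return Rc::Ok;
}

void removeParts(const std::vector<std::string> & names, std::set<std::string> * should_be_retried)
{
    for (const auto & name : names)
    {
        switch (tryRemove(name))
        {
            case Rc::Ok:
                break;
            case Rc::NoNode:
                std::cout << "no node for " << name << ", it was only on the filesystem\n";
                break;
            case Rc::ConnectionLoss:
                if (should_be_retried)
                    should_be_retried->insert(name);  // transient: retry on the next iteration
                break;
            case Rc::Other:
                std::cout << "cannot remove " << name << '\n';
                break;
        }
    }
}

int main()
{
    std::set<std::string> retry;
    removeParts({"all_1_1_0", "net_2_2_0", "ghost_3_3_0"}, &retry);
    std::cout << retry.size() << " part(s) to retry\n";  // 1
}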
mmm a / dbms / src / Storages / System / StorageSystemParts . cpp <nl> ppp b / dbms / src / Storages / System / StorageSystemParts . cpp <nl> BlockInputStreams StorageSystemParts : : read ( <nl> * / <nl> if ( e . code ( ) = = ErrorCodes : : TABLE_IS_DROPPED ) <nl> continue ; <nl> - else <nl> - throw ; <nl> + <nl> + throw ; <nl> } <nl> <nl> String engine = storage - > getName ( ) ; <nl> <nl> MergeTreeData * data = nullptr ; <nl> <nl> - if ( StorageMergeTree * merge_tree = dynamic_cast < StorageMergeTree * > ( & * storage ) ) <nl> + if ( auto merge_tree = dynamic_cast < StorageMergeTree * > ( & * storage ) ) <nl> { <nl> data = & merge_tree - > getData ( ) ; <nl> } <nl> - else if ( StorageReplicatedMergeTree * replicated_merge_tree = dynamic_cast < StorageReplicatedMergeTree * > ( & * storage ) ) <nl> + else if ( auto replicated_merge_tree = dynamic_cast < StorageReplicatedMergeTree * > ( & * storage ) ) <nl> { <nl> data = & replicated_merge_tree - > getData ( ) ; <nl> } <nl> + else <nl> + { <nl> + throw Exception ( " Unknown engine " + engine , ErrorCodes : : LOGICAL_ERROR ) ; <nl> + } <nl> <nl> - MergeTreeData : : DataParts active_parts = data - > getDataParts ( ) ; <nl> - MergeTreeData : : DataParts all_parts ; <nl> + using State = MergeTreeDataPart : : State ; <nl> + MergeTreeData : : DataPartStateVector all_parts_state ; <nl> + MergeTreeData : : DataPartsVector all_parts ; <nl> if ( need [ 0 ] ) <nl> - all_parts = data - > getAllDataParts ( ) ; <nl> + all_parts = data - > getDataPartsVector ( { State : : Committed , State : : Outdated } , all_parts_state ) ; <nl> else <nl> - all_parts = active_parts ; <nl> + all_parts = data - > getDataPartsVector ( { State : : Committed } , all_parts_state ) ; <nl> <nl> / / / Finally , we ' ll go through the list of parts . <nl> - for ( const MergeTreeData : : DataPartPtr & part : all_parts ) <nl> + for ( size_t part_number = 0 ; part_number < all_parts . size ( ) ; + + part_number ) <nl> { <nl> + const auto & part = all_parts [ part_number ] ; <nl> + auto part_state = all_parts_state [ part_number ] ; <nl> + <nl> size_t i = 0 ; <nl> { <nl> WriteBufferFromOwnString out ; <nl> BlockInputStreams StorageSystemParts : : read ( <nl> block . getByPosition ( i + + ) . column - > insert ( out . str ( ) ) ; <nl> } <nl> block . getByPosition ( i + + ) . column - > insert ( part - > name ) ; <nl> - block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( active_parts . count ( part ) ) ) ; <nl> + block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( part_state = = State : : Committed ) ) ; <nl> block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( part - > marks_count ) ) ; <nl> <nl> size_t marks_size = 0 ; <nl> BlockInputStreams StorageSystemParts : : read ( <nl> block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( part - > remove_time ) ) ; <nl> <nl> / / / For convenience , in the returned refcount , don ' t count the references due to local variables in this method ( all_parts ) . <nl> - block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( part . use_count ( ) - ( active_parts . count ( part ) ? 2 : 1 ) ) ) ; <nl> + block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( part . use_count ( ) - 1 ) ) ; <nl> <nl> block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( part - > getMinDate ( ) ) ) ; <nl> block . getByPosition ( i + + ) . column - > insert ( static_cast < UInt64 > ( part - > getMaxDate ( ) ) ) ; <nl> new file mode 100644 <nl> index 00000000000 . . da45942aeff <nl> mmm / dev / null <nl> ppp b / dbms / src / Storages / tests / gtest_range_filtered . cpp . cpp <nl> <nl> + # include < gtest / gtest . h > <nl> + # include < common / RangeFiltered . h > <nl> + # include < vector > <nl> + # include < set > <nl> + <nl> + <nl> + TEST ( RangeFiltered , simple ) <nl> + { <nl> + std : : vector < int > v ; <nl> + <nl> + for ( int i = 0 ; i < 10 ; + + i ) <nl> + v . push_back ( i ) ; <nl> + <nl> + auto v30 = createRangeFiltered ( [ ] ( int i ) { return i % 3 = = 0 ; } , v ) ; <nl> + auto v31 = createRangeFiltered ( [ ] ( int i ) { return i % 3 ! = 0 ; } , v ) ; <nl> + <nl> + for ( const int & i : v30 ) <nl> + ASSERT_EQ ( i % 3 , 0 ) ; <nl> + <nl> + for ( const int & i : v31 ) <nl> + ASSERT_NE ( i % 3 , 0 ) ; <nl> + <nl> + { <nl> + auto it = v30 . begin ( ) ; <nl> + ASSERT_EQ ( * it , 0 ) ; <nl> + <nl> + auto it2 = std : : next ( it ) ; <nl> + ASSERT_EQ ( * it2 , 3 ) ; <nl> + <nl> + auto it3 = it ; <nl> + it = std : : next ( it2 ) ; <nl> + ASSERT_EQ ( * it , 6 ) ; <nl> + } <nl> + <nl> + { <nl> + auto it = std : : next ( v30 . begin ( ) ) ; <nl> + ASSERT_EQ ( * it , 3 ) ; <nl> + <nl> + * it = 2 ; / / / it becomes invalid <nl> + ASSERT_EQ ( * ( + + it ) , 6 ) ; / / / but iteration is successful <nl> + <nl> + * v30 . begin ( ) = 1 ; <nl> + ASSERT_EQ ( * v30 . begin ( ) , 6 ) ; <nl> + } <nl> + } <nl> mmm a / dbms / tests / instructions / coverity . txt <nl> ppp b / dbms / tests / instructions / coverity . txt <nl> export PATH = $ PATH : / home / milovidov / cov - analysis - linux64 - 2017 . 07 / bin <nl> <nl> mkdir ClickHouse_coverity <nl> cd ClickHouse_coverity <nl> - git clone git @ github . com : yandex / ClickHouse . git . <nl> + git clone - - recursive git @ github . com : yandex / ClickHouse . git . <nl> <nl> mkdir build <nl> cd build <nl> mmm a / dbms / tests / integration / test_extreme_deduplication / configs / conf . d / merge_tree . xml <nl> ppp b / dbms / tests / integration / test_extreme_deduplication / configs / conf . d / merge_tree . xml <nl> <nl> < replicated_deduplication_window > 999999999 < / replicated_deduplication_window > <nl> < replicated_deduplication_window_seconds > 1 < / replicated_deduplication_window_seconds > <nl> < cleanup_delay_period > 1 < / cleanup_delay_period > <nl> + < old_parts_lifetime > 1 < / old_parts_lifetime > <nl> < / merge_tree > <nl> < / yandex > <nl>
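The gtest above exercises common/RangeFiltered.h, the helper that getDataPartsRange relies on. For reference, here is a minimal re-implementation of a filtered range; the FilteredRange and makeFilteredRange names are illustrative stand-ins (the real helper is createRangeFiltered, and it additionally supports converting iterators of the underlying container, as the convert calls earlier in the diff use).

#include <iostream>
#include <utility>
#include <vector>

template <typename Filter, typename Container>
class FilteredRange
{
public:
    using BaseIt = typename Container::const_iterator;

    class Iterator
    {
    public:
        Iterator(BaseIt pos_, BaseIt end_, const Filter * filter_) : pos(pos_), end(end_), filter(filter_)
        {
            skip();  // start at the first element that passes the filter
        }
        const typename Container::value_type & operator*() const { return *pos; }
        Iterator & operator++() { ++pos; skip(); return *this; }
        bool operator!=(const Iterator & rhs) const { return pos != rhs.pos; }

    private:
        void skip()
        {
            while (pos != end && !(*filter)(*pos))
                ++pos;
        }
        BaseIt pos;
        BaseIt end;
        const Filter * filter;
    };

    FilteredRange(Filter filter_, const Container & container_) : filter(std::move(filter_)), container(container_) {}

    Iterator begin() const { return Iterator(container.begin(), container.end(), &filter); }
    Iterator end() const { return Iterator(container.end(), container.end(), &filter); }

private:
    Filter filter;
    const Container & container;
};

template <typename Filter, typename Container>
FilteredRange<Filter, Container> makeFilteredRange(Filter filter, const Container & container)
{
    return FilteredRange<Filter, Container>(std::move(filter), container);
}

int main()
{
    std::vector<int> v{0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    auto divisible_by_3 = makeFilteredRange([](int i) { return i % 3 == 0; }, v);
    for (int i : divisible_by_3)
        std::cout << i << ' ';  // 0 3 6 9
    std::cout << '\n';
}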
mmm a / dbms / tests / integration / test_extreme_deduplication / test . py <nl> ppp b / dbms / tests / integration / test_extreme_deduplication / test . py <nl> from helpers . network import PartitionManager <nl> from helpers . test_tools import TSV <nl> from helpers . client import CommandRequest <nl> + from helpers . client import QueryTimeoutExceedException <nl> <nl> <nl> cluster = ClickHouseCluster ( __file__ ) <nl> <nl> - node1 = cluster . add_instance ( ' node1 ' , config_dir = ' configs ' , with_zookeeper = True ) <nl> - node2 = cluster . add_instance ( ' node2 ' , config_dir = ' configs ' , with_zookeeper = True ) <nl> + node1 = cluster . add_instance ( ' node1 ' , config_dir = ' configs ' , with_zookeeper = True , macroses = { " layer " : 0 , " shard " : 0 , " replica " : 1 } ) <nl> + node2 = cluster . add_instance ( ' node2 ' , config_dir = ' configs ' , with_zookeeper = True , macroses = { " layer " : 0 , " shard " : 0 , " replica " : 2 } ) <nl> nodes = [ node1 , node2 ] <nl> <nl> @ pytest . fixture ( scope = " module " ) <nl> def started_cluster ( ) : <nl> try : <nl> cluster . start ( ) <nl> - <nl> - for node in nodes : <nl> - node . query ( ' ' ' <nl> - CREATE TABLE simple ( date Date , id UInt32 ) <nl> - ENGINE = ReplicatedMergeTree ( ' / clickhouse / tables / 0 / simple ' , ' { replica } ' , date , id , 8192 ) ; <nl> - ' ' ' . format ( replica = node . name ) ) <nl> - node . query ( " INSERT INTO simple VALUES ( 0 , 0 ) " ) <nl> - <nl> - node . query ( ' ' ' <nl> - CREATE TABLE simple2 ( date Date , id UInt32 ) <nl> - ENGINE = ReplicatedMergeTree ( ' / clickhouse / tables / 0 / simple2 ' , ' { replica } ' , date , id , 8192 ) ; <nl> - ' ' ' . format ( replica = node . name ) ) <nl> - <nl> yield cluster <nl> <nl> finally : <nl> def started_cluster ( ) : <nl> def test_deduplication_window_in_seconds ( started_cluster ) : <nl> node = node1 <nl> <nl> - node . query ( " INSERT INTO simple2 VALUES ( 0 , 0 ) " ) <nl> + node1 . query ( " " " <nl> + CREATE TABLE simple ON CLUSTER test_cluster ( date Date , id UInt32 ) <nl> + ENGINE = ReplicatedMergeTree ( ' / clickhouse / tables / { shard } / simple ' , ' { replica } ' , date , id , 8192 ) " " " ) <nl> + <nl> + node . query ( " INSERT INTO simple VALUES ( 0 , 0 ) " ) <nl> time . sleep ( 1 ) <nl> - node . query ( " INSERT INTO simple2 VALUES ( 0 , 0 ) " ) # deduplication works here <nl> - node . query ( " INSERT INTO simple2 VALUES ( 0 , 1 ) " ) <nl> - assert TSV ( node . query ( " SELECT count ( ) FROM simple2 " ) ) = = TSV ( " 2 \ n " ) <nl> + node . query ( " INSERT INTO simple VALUES ( 0 , 0 ) " ) # deduplication works here <nl> + node . query ( " INSERT INTO simple VALUES ( 0 , 1 ) " ) <nl> + assert TSV ( node . query ( " SELECT count ( ) FROM simple " ) ) = = TSV ( " 2 \ n " ) <nl> <nl> # wait for the cleanup thread <nl> time . sleep ( 2 ) <nl> <nl> - assert TSV . toMat ( node . query ( " SELECT count ( ) FROM system . zookeeper WHERE path = ' / clickhouse / tables / 0 / simple2 / blocks ' " ) ) [ 0 ] [ 0 ] = = " 1 " <nl> - node . query ( " INSERT INTO simple2 VALUES ( 0 , 0 ) " ) # deduplication doesn ' t works here , the first hash node was deleted <nl> - assert TSV . toMat ( node . query ( " SELECT count ( ) FROM simple2 " ) ) [ 0 ] [ 0 ] = = " 3 " <nl> - <nl> + assert TSV . toMat ( node . query ( " SELECT count ( ) FROM system . zookeeper WHERE path = ' / clickhouse / tables / 0 / simple / blocks ' " ) ) [ 0 ] [ 0 ] = = " 1 " <nl> + node . query ( " INSERT INTO simple VALUES ( 0 , 0 ) " ) # deduplication doesn ' t work here , the first hash node was deleted <nl> + assert TSV . toMat ( node . query ( " SELECT count ( ) FROM simple " ) ) [ 0 ] [ 0 ] = = " 3 " <nl>
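The test above relies on replicated_deduplication_window_seconds: a block hash is only remembered for a limited time, so re-inserting identical data after the window succeeds again. Below is a toy model of such a time-bounded deduplication cache; in reality the hashes live under the table's blocks path in ZooKeeper and are pruned by the cleanup thread, so this in-process map is purely illustrative.

#include <chrono>
#include <iostream>
#include <iterator>
#include <map>
#include <string>
#include <thread>

using Clock = std::chrono::steady_clock;

class DedupWindow
{
public:
    explicit DedupWindow(std::chrono::seconds window_) : window(window_) {}

    // Returns true if the insert should proceed, false if it is a duplicate.
    bool tryInsert(const std::string & block_id)
    {
        auto now = Clock::now();
        // Prune hashes that fell out of the window (the cleanup thread's job in the diff).
        for (auto it = seen.begin(); it != seen.end();)
            it = (now - it->second > window) ? seen.erase(it) : std::next(it);
        return seen.emplace(block_id, now).second;
    }

private:
    std::chrono::seconds window;
    std::map<std::string, Clock::time_point> seen;
};

int main()
{
    DedupWindow dedup(std::chrono::seconds(1));
    std::cout << dedup.tryInsert("h1") << '\n';  // 1: first insert
    std::cout << dedup.tryInsert("h1") << '\n';  // 0: deduplicated
    std::this_thread::sleep_for(std::chrono::seconds(2));
    std::cout << dedup.tryInsert("h1") << '\n';  // 1: the hash expired
}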
query ( " SELECT count ( ) FROM simple " ) ) [ 0 ] [ 0 ] = = " 3 " <nl> <nl> - def check_timeout_exception ( e ) : <nl> - s = str ( e ) <nl> - # print s <nl> - assert s . find ( ' timed out ! ' ) > = 0 or s . find ( ' Return code : - 9 ' ) > = 0 <nl> + node1 . query ( " " " DROP TABLE simple ON CLUSTER test_cluster " " " ) <nl> <nl> <nl> # Currently this test just reproduce incorrect behavior that sould be fixed <nl> def test_deduplication_works_in_case_of_intensive_inserts ( started_cluster ) : <nl> inserters = [ ] <nl> fetchers = [ ] <nl> <nl> + node1 . query ( " " " <nl> + CREATE TABLE simple ON CLUSTER test_cluster ( date Date , id UInt32 ) <nl> + ENGINE = ReplicatedMergeTree ( ' / clickhouse / tables / { shard } / simple ' , ' { replica } ' , date , id , 8192 ) " " " ) <nl> + <nl> + node1 . query ( " INSERT INTO simple VALUES ( 0 , 0 ) " ) <nl> + <nl> for node in nodes : <nl> host = node . ip_address <nl> <nl> def test_deduplication_works_in_case_of_intensive_inserts ( started_cluster ) : <nl> set - e <nl> for i in ` seq 1000 ` ; do <nl> res = ` clickhouse - client - - host { } - q " SELECT count ( ) FROM simple " ` <nl> - if [ [ $ res - ne 1 ] ] ; then <nl> + if [ [ $ ? - ne 0 | | $ res - ne 1 ] ] ; then <nl> echo " Selected $ res elements ! Host : { } " 1 > & 2 <nl> exit - 1 <nl> fi ; <nl> def test_deduplication_works_in_case_of_intensive_inserts ( started_cluster ) : <nl> for inserter in inserters : <nl> try : <nl> inserter . get_answer ( ) <nl> - except Exception as e : <nl> - check_timeout_exception ( e ) <nl> + except QueryTimeoutExceedException : <nl> + # Only timeout is accepted <nl> + pass <nl> <nl> # There were not errors during SELECTs <nl> for fetcher in fetchers : <nl> try : <nl> fetcher . get_answer ( ) <nl> - except Exception as e : <nl> + except QueryTimeoutExceedException : <nl> + # Only timeout is accepted <nl> pass <nl> - # Uncomment when problem will be fixed <nl> - # check_timeout_exception ( e ) <nl> + <nl> + node1 . query ( " " " DROP TABLE simple ON CLUSTER test_cluster " " " ) <nl> new file mode 100644 <nl> index 00000000000 . . 02efd126b77 <nl> mmm / dev / null <nl> ppp b / dbms / tests / integration / test_random_inserts / configs / conf . d / merge_tree . xml <nl> <nl> + < yandex > <nl> + < merge_tree > <nl> + < replicated_deduplication_window > 999999999 < / replicated_deduplication_window > <nl> + < replicated_deduplication_window_seconds > 999999999 < / replicated_deduplication_window_seconds > <nl> + < cleanup_delay_period > 10 < / cleanup_delay_period > <nl> + < old_parts_lifetime > 1 < / old_parts_lifetime > <nl> + < / merge_tree > <nl> + < / yandex > <nl> new file mode 100644 <nl> index 00000000000 . . 64239dfdb6c <nl> mmm / dev / null <nl> ppp b / dbms / tests / integration / test_random_inserts / configs / conf . d / remote_servers . xml <nl> <nl> + < yandex > <nl> + < remote_servers > <nl> + < test_cluster > <nl> + < shard > <nl> + < internal_replication > true < / internal_replication > <nl> + < replica > <nl> + < host > node1 < / host > <nl> + < port > 9000 < / port > <nl> + < / replica > <nl> + < replica > <nl> + < host > node2 < / host > <nl> + < port > 9000 < / port > <nl> + < / replica > <nl> + < / shard > <nl> + < / test_cluster > <nl> + < / remote_servers > <nl> + < / yandex > <nl> new file mode 100644 <nl> index 00000000000 . . d9325c91191 <nl> mmm / dev / null <nl> ppp b / dbms / tests / integration / test_random_inserts / test . 
py <nl> <nl> + import time <nl> + import os <nl> + from contextlib import contextmanager <nl> + <nl> + import pytest <nl> + <nl> + from helpers . cluster import ClickHouseCluster <nl> + from helpers . network import PartitionManager <nl> + from helpers . test_tools import TSV <nl> + from helpers . client import CommandRequest <nl> + <nl> + <nl> + cluster = ClickHouseCluster ( __file__ ) <nl> + <nl> + node1 = cluster . add_instance ( ' node1 ' , config_dir = ' configs ' , with_zookeeper = True , macroses = { " layer " : 0 , " shard " : 0 , " replica " : 1 } ) <nl> + node2 = cluster . add_instance ( ' node2 ' , config_dir = ' configs ' , with_zookeeper = True , macroses = { " layer " : 0 , " shard " : 0 , " replica " : 2 } ) <nl> + nodes = [ node1 , node2 ] <nl> + <nl> + @ pytest . fixture ( scope = " module " ) <nl> + def started_cluster ( ) : <nl> + try : <nl> + cluster . start ( ) <nl> + yield cluster <nl> + <nl> + finally : <nl> + pass <nl> + cluster . shutdown ( ) <nl> + <nl> + def test_random_inserts ( started_cluster ) : <nl> + # Duration of the test , reduce it if don ' t want to wait <nl> + DURATION_SECONDS = 10 # * 60 <nl> + <nl> + node1 . query ( " " " <nl> + CREATE TABLE simple ON CLUSTER test_cluster ( date Date , i UInt32 , s String ) <nl> + ENGINE = ReplicatedMergeTree ( ' / clickhouse / tables / { shard } / simple ' , ' { replica } ' , date , i , 8192 ) " " " ) <nl> + <nl> + with PartitionManager ( ) as pm_random_drops : <nl> + for sacrifice in nodes : <nl> + pass # This test doesn ' t work with partition problems still <nl> + # pm_random_drops . _add_rule ( { ' probability ' : 0 . 01 , ' destination ' : sacrifice . ip_address , ' source_port ' : 2181 , ' action ' : ' REJECT - - reject - with tcp - reset ' } ) <nl> + # pm_random_drops . _add_rule ( { ' probability ' : 0 . 01 , ' source ' : sacrifice . ip_address , ' destination_port ' : 2181 , ' action ' : ' REJECT - - reject - with tcp - reset ' } ) <nl> + <nl> + min_timestamp = int ( time . time ( ) ) <nl> + max_timestamp = min_timestamp + DURATION_SECONDS <nl> + num_timestamps = max_timestamp - min_timestamp + 1 <nl> + <nl> + bash_script = os . path . join ( os . path . dirname ( __file__ ) , " test . sh " ) <nl> + inserters = [ ] <nl> + for node in nodes : <nl> + cmd = [ ' / bin / bash ' , bash_script , node . ip_address , str ( min_timestamp ) , str ( max_timestamp ) ] <nl> + inserters . append ( CommandRequest ( cmd , timeout = DURATION_SECONDS * 2 , stdin = ' ' ) ) <nl> + print node . name , node . ip_address <nl> + <nl> + for inserter in inserters : <nl> + inserter . get_answer ( ) <nl> + <nl> + answer = " { } \ t { } \ t { } \ t { } \ n " . format ( num_timestamps , num_timestamps , min_timestamp , max_timestamp ) <nl> + for node in nodes : <nl> + assert TSV ( node . query ( " SELECT count ( ) , uniqExact ( i ) , min ( i ) , max ( i ) FROM simple " ) ) = = TSV ( answer ) , node . name + " : " + node . query ( " SELECT groupArray ( _part ) , i , count ( ) AS c FROM simple GROUP BY i ORDER BY c DESC LIMIT 1 " ) <nl> + <nl> + node1 . query ( " " " DROP TABLE simple ON CLUSTER test_cluster " " " ) <nl> new file mode 100755 <nl> index 00000000000 . . d743ffe4e91 <nl> mmm / dev / null <nl> ppp b / dbms / tests / integration / test_random_inserts / test . sh <nl> <nl> + # ! / bin / bash <nl> + # set - e <nl> + <nl> + [ [ - n " $ 1 " ] ] & & host = " $ 1 " | | host = " 127 . 0 . 0 . 
1 " <nl> + [ [ - n " $ 2 " ] ] & & min_timestamp = " $ 2 " | | min_timestamp = $ ( ( $ ( date + % s ) - 10 ) ) <nl> + [ [ - n " $ 3 " ] ] & & max_timestamp = " $ 3 " | | max_timestamp = $ ( ( $ ( date + % s ) + 10 ) ) <nl> + <nl> + timestamps = ` seq $ min_timestamp $ max_timestamp ` <nl> + <nl> + function reliable_insert { <nl> + local ts = " $ 1 " <nl> + num_tries = 0 <nl> + while true ; do <nl> + if ( ( $ num_tries > 20 ) ) ; then <nl> + echo " Too many retries " 1 > & 2 <nl> + exit - 1 <nl> + fi <nl> + <nl> + # echo clickhouse - client - - host $ host - q " INSERT INTO simple VALUES ( 0 , $ ts , ' $ ts ' ) " <nl> + res = ` clickhouse - client - - host $ host - q " INSERT INTO simple VALUES ( 0 , $ ts , ' $ ts ' ) " 2 > & 1 ` <nl> + rt = $ ? <nl> + num_tries = $ ( ( $ num_tries + 1 ) ) <nl> + <nl> + if ( ( $ rt = = 0 ) ) ; then break ; fi <nl> + if [ [ $ res = = * " Code : 319 . " * " Unknown status , client must retry " * | | $ res = = * " Code : 999 . " * ] ] ; then <nl> + continue <nl> + else <nl> + echo FAIL " $ res " 1 > & 2 <nl> + exit - 1 <nl> + fi <nl> + done ; <nl> + } <nl> + <nl> + for i in $ timestamps ; do <nl> + <nl> + cur_timestamp = $ ( date + % s ) <nl> + while ( ( $ cur_timestamp < $ i ) ) ; do <nl> + ts = ` shuf - i $ min_timestamp - $ cur_timestamp - n 1 ` <nl> + reliable_insert " $ ts " <nl> + cur_timestamp = $ ( date + % s ) <nl> + done <nl> + <nl> + # echo $ i > > $ host " . txt " <nl> + reliable_insert " $ i " <nl> + done <nl> \ No newline at end of file <nl> mmm a / dbms / tests / queries / 0_stateless / 00503_cast_const_nullable . reference <nl> ppp b / dbms / tests / queries / 0_stateless / 00503_cast_const_nullable . reference <nl> <nl> 1 <nl> 1 <nl> + \ N <nl> mmm a / dbms / tests / queries / 0_stateless / 00503_cast_const_nullable . sql <nl> ppp b / dbms / tests / queries / 0_stateless / 00503_cast_const_nullable . sql <nl> <nl> SELECT CAST ( 1 AS Nullable ( UInt8 ) ) AS id WHERE id = CAST ( 1 AS Nullable ( UInt8 ) ) ; <nl> SELECT CAST ( 1 AS Nullable ( UInt8 ) ) AS id WHERE id = 1 ; <nl> + SELECT NULL = = CAST ( toUInt8 ( 0 ) AS Nullable ( UInt8 ) ) ; <nl> new file mode 100644 <nl> index 00000000000 . . adf6abb7298 <nl> mmm / dev / null <nl> ppp b / dbms / tests / queries / 0_stateless / 00510_materizlized_view_and_deduplication . reference <nl> <nl> + 2 <nl> + 3 <nl> + <nl> + 2 <nl> + 3 <nl> + <nl> + 1 <nl> + 1 <nl> new file mode 100644 <nl> index 00000000000 . . c74ee7e423e <nl> mmm / dev / null <nl> ppp b / dbms / tests / queries / 0_stateless / 00510_materizlized_view_and_deduplication . sql <nl> <nl> + SET experimental_allow_extended_storage_definition_syntax = 1 ; <nl> + <nl> + DROP TABLE IF EXISTS test . with_deduplication ; <nl> + DROP TABLE IF EXISTS test . without_deduplication ; <nl> + DROP TABLE IF EXISTS test . with_deduplication_mv ; <nl> + DROP TABLE IF EXISTS test . without_deduplication_mv ; <nl> + <nl> + CREATE TABLE test . with_deduplication ( x UInt32 ) <nl> + ENGINE ReplicatedMergeTree ( ' / clickhouse / tables / test / with_deduplication ' , ' r1 ' ) ORDER BY x ; <nl> + CREATE TABLE test . without_deduplication ( x UInt32 ) <nl> + ENGINE ReplicatedMergeTree ( ' / clickhouse / tables / test / without_deduplication ' , ' r1 ' ) ORDER BY x SETTINGS replicated_deduplication_window = 0 ; <nl> + <nl> + CREATE MATERIALIZED VIEW test . 
with_deduplication_mv <nl> + ENGINE = ReplicatedAggregatingMergeTree ( ' / clickhouse / tables / test / with_deduplication_mv ' , ' r1 ' ) ORDER BY dummy <nl> + AS SELECT 0 AS dummy , countState ( x ) AS cnt FROM test . with_deduplication ; <nl> + CREATE MATERIALIZED VIEW test . without_deduplication_mv <nl> + ENGINE = ReplicatedAggregatingMergeTree ( ' / clickhouse / tables / test / without_deduplication_mv ' , ' r1 ' ) ORDER BY dummy <nl> + AS SELECT 0 AS dummy , countState ( x ) AS cnt FROM test . without_deduplication ; <nl> + <nl> + INSERT INTO test . with_deduplication VALUES ( 42 ) ; <nl> + INSERT INTO test . with_deduplication VALUES ( 42 ) ; <nl> + INSERT INTO test . with_deduplication VALUES ( 43 ) ; <nl> + <nl> + INSERT INTO test . without_deduplication VALUES ( 42 ) ; <nl> + INSERT INTO test . without_deduplication VALUES ( 42 ) ; <nl> + INSERT INTO test . without_deduplication VALUES ( 43 ) ; <nl> + <nl> + SELECT count ( ) FROM test . with_deduplication ; <nl> + SELECT count ( ) FROM test . without_deduplication ; <nl> + <nl> + - - Implicit insert isn ' t deduplicated <nl> + SELECT ' ' ; <nl> + SELECT countMerge ( cnt ) FROM test . with_deduplication_mv ; <nl> + SELECT countMerge ( cnt ) FROM test . without_deduplication_mv ; <nl> + <nl> + - - Explicit insert is deduplicated <nl> + ALTER TABLE test . ` . inner . with_deduplication_mv ` DROP PARTITION ID ' all ' ; <nl> + ALTER TABLE test . ` . inner . without_deduplication_mv ` DROP PARTITION ID ' all ' ; <nl> + INSERT INTO test . ` . inner . with_deduplication_mv ` SELECT 0 AS dummy , arrayReduce ( ' countState ' , [ toUInt32 ( 42 ) ] ) AS cnt ; <nl> + INSERT INTO test . ` . inner . with_deduplication_mv ` SELECT 0 AS dummy , arrayReduce ( ' countState ' , [ toUInt32 ( 42 ) ] ) AS cnt ; <nl> + INSERT INTO test . ` . inner . without_deduplication_mv ` SELECT 0 AS dummy , arrayReduce ( ' countState ' , [ toUInt32 ( 42 ) ] ) AS cnt ; <nl> + INSERT INTO test . ` . inner . without_deduplication_mv ` SELECT 0 AS dummy , arrayReduce ( ' countState ' , [ toUInt32 ( 42 ) ] ) AS cnt ; <nl> + <nl> + SELECT ' ' ; <nl> + SELECT countMerge ( cnt ) FROM test . with_deduplication_mv ; <nl> + SELECT countMerge ( cnt ) FROM test . without_deduplication_mv ; <nl> + <nl> + DROP TABLE IF EXISTS test . with_deduplication ; <nl> + DROP TABLE IF EXISTS test . without_deduplication ; <nl> + DROP TABLE IF EXISTS test . with_deduplication_mv ; <nl> + DROP TABLE IF EXISTS test . without_deduplication_mv ; <nl> new file mode 100644 <nl> index 00000000000 . . d7ad7ba44db <nl> mmm / dev / null <nl> ppp b / docs / en / formats / capnproto . rst <nl> <nl> + CapnProto <nl> + mmmmmmmmm <nl> + <nl> + Cap ' n Proto is a binary message format . Like Protocol Buffers and Thrift ( but unlike JSON or MessagePack ) , Cap ' n Proto messages are strongly - typed and not self - describing . Due to this , it requires a ` ` schema ` ` setting to specify schema file and the root object . The schema is parsed on runtime and cached for each SQL statement . <nl> + <nl> + . . code - block : : sql <nl> + <nl> + SELECT SearchPhrase , count ( ) AS c FROM test . hits GROUP BY SearchPhrase FORMAT CapnProto SETTINGS schema = ' schema . capnp : Message ' <nl> + <nl> + When the schema file looks like : <nl> + <nl> + . . code - block : : text <nl> + <nl> + struct Message { <nl> + SearchPhrase @ 0 : Text ; <nl> + c @ 1 : Uint64 ; <nl> + } <nl> + <nl> + Deserialization is almost as efficient as the binary rows format , with typically zero allocation overhead per message . 
<nl> + <nl> + You can use this format as an efficient exchange message format in your data processing pipeline . <nl> new file mode 100644 <nl> index 00000000000 . . 3301a08e44c <nl> mmm / dev / null <nl> ppp b / libs / libcommon / include / common / RangeFiltered . h <nl> <nl> + # pragma once <nl> + # include < type_traits > <nl> + <nl> + <nl> + / / / Similar to boost : : filtered_range but a little bit easier and allows to convert ordinary iterators to filtered <nl> + template < typename F , typename C > <nl> + struct RangeFiltered <nl> + { <nl> + using RawIterator = typename C : : iterator ; <nl> + class Iterator ; <nl> + <nl> + / / / Will iterate over elements for which filter ( * it ) = = true <nl> + RangeFiltered ( F & & filter , const C & container ) <nl> + : filter ( std : : move ( filter ) ) , container ( container ) { } <nl> + <nl> + Iterator begin ( ) const <nl> + { <nl> + return { * this , std : : begin ( container ) } ; <nl> + } <nl> + <nl> + Iterator end ( ) const <nl> + { <nl> + return { * this , std : : end ( container ) } ; <nl> + } <nl> + <nl> + / / / Convert ordinary iterator to filtered one <nl> + / / / Real position will be in range [ ordinary_iterator ; end ( ) ] , so it is suitable to use with lower [ upper ] _bound ( ) <nl> + inline Iterator convert ( RawIterator ordinary_iterator ) const <nl> + { <nl> + return { * this , ordinary_iterator } ; <nl> + } <nl> + <nl> + <nl> + / / / It is similar to boost : : filtered_iterator , but has additional features : <nl> + / / / it doesn ' t store end ( ) iterator <nl> + / / / it doesn ' t store predicate , so it allows to implement operator = ( ) <nl> + / / / it guarantees that operator + + ( ) works properly in case of filter ( * it ) = = false <nl> + class Iterator <nl> + { <nl> + public : <nl> + using Range = RangeFiltered < F , C > ; <nl> + <nl> + typedef Iterator self_type ; <nl> + typedef typename std : : iterator_traits < RawIterator > : : value_type value_type ; <nl> + typedef typename std : : iterator_traits < RawIterator > : : reference reference ; <nl> + typedef const value_type & const_reference ; <nl> + typedef typename std : : iterator_traits < RawIterator > : : pointer pointer ; <nl> + typedef const value_type * const_pointer ; <nl> + typedef typename std : : iterator_traits < RawIterator > : : difference_type difference_type ; <nl> + typedef std : : bidirectional_iterator_tag iterator_category ; <nl> + <nl> + Iterator ( const Range & range_ , RawIterator iter_ ) <nl> + : range ( & range_ ) , iter ( iter_ ) <nl> + { <nl> + for ( ; iter ! = std : : end ( range - > container ) & & ! range - > filter ( * iter ) ; + + iter ) ; <nl> + } <nl> + <nl> + Iterator ( const Iterator & rhs ) = default ; <nl> + Iterator ( Iterator & & rhs ) noexcept = default ; <nl> + <nl> + Iterator operator + + ( ) <nl> + { <nl> + + + iter ; <nl> + for ( ; iter ! = std : : end ( range - > container ) & & ! range - > filter ( * iter ) ; + + iter ) ; <nl> + return * this ; <nl> + } <nl> + <nl> + Iterator operator - - ( ) <nl> + { <nl> + - - iter ; <nl> + for ( ; ! range - > filter ( * iter ) ; - - iter ) ; / / / Don ' t check std : : begin ( ) bound <nl> + return * this ; <nl> + } <nl> + <nl> + pointer operator - > ( ) <nl> + { <nl> + return iter . operator - > ( ) ; <nl> + } <nl> + <nl> + const_pointer operator - > ( ) const <nl> + { <nl> + return iter . 
operator - > ( ) ; <nl> + } <nl> + <nl> + reference operator * ( ) <nl> + { <nl> + return * iter ; <nl> + } <nl> + <nl> + const_reference operator * ( ) const <nl> + { <nl> + return * iter ; <nl> + } <nl> + <nl> + bool operator = = ( const self_type & rhs ) const <nl> + { <nl> + return iter = = rhs . iter ; <nl> + } <nl> + <nl> + bool operator ! = ( const self_type & rhs ) const <nl> + { <nl> + return iter ! = rhs . iter ; <nl> + } <nl> + <nl> + self_type & operator = ( const self_type & rhs ) = default ; <nl> + self_type & operator = ( self_type & & rhs ) noexcept = default ; <nl> + <nl> + ~ Iterator ( ) = default ; <nl> + <nl> + private : <nl> + const Range * range = nullptr ; <nl> + RawIterator iter ; <nl> + } ; <nl> + <nl> + protected : <nl> + F filter ; <nl> + const C & container ; <nl> + } ; <nl> + <nl> + <nl> + template < typename F , typename C > <nl> + inline RangeFiltered < std : : decay_t < F > , std : : decay_t < C > > createRangeFiltered ( F & & filter , C & & container ) <nl> + { <nl> + return { std : : forward < F > ( filter ) , std : : forward < C > ( container ) } ; <nl> + } ; <nl>
Merge remote-tracking branch 'upstream/master' into fix4
ClickHouse/ClickHouse
c878af87405f3b886ba636a9be6d82ddaf96d4f1
2017-10-27T20:13:35Z
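The `RangeFiltered.h` header added in this commit is, in effect, a hand-rolled precursor to C++20's `std::views::filter`: lazy iteration over only the elements that satisfy a predicate, with no filtered copy of the container ever materialized. A minimal sketch of the same usage pattern, written against the standard facility rather than the ClickHouse header so it is self-contained (compile with `-std=c++20`):

```cpp
#include <iostream>
#include <ranges>
#include <vector>

int main()
{
    std::vector<int> values{1, 2, 3, 4, 5, 6};

    // Lazily visit only the elements passing the predicate -- the same
    // contract RangeFiltered::begin()/end() provide over its container.
    for (int x : values | std::views::filter([](int v) { return v % 2 == 0; }))
        std::cout << x << ' ';  // prints: 2 4 6
    std::cout << '\n';
}
```

The one extra feature `RangeFiltered` carries beyond the standard view is `convert()`, which snaps an ordinary iterator forward to the next element passing the filter, keeping `lower_bound()`/`upper_bound()` results usable inside the filtered range.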
mmm a / Marlin / Configuration_adv . h <nl> ppp b / Marlin / Configuration_adv . h <nl> <nl> / / # define JOYSTICK_DEBUG <nl> # endif <nl> <nl> + / * * <nl> + * Mechanical Gantry Calibration <nl> + * Modern replacement for the Prusa TMC_Z_CALIBRATION . <nl> + * Adds capability to work with any adjustable current drivers . <nl> + * Implemented as G34 because M915 is deprecated . <nl> + * / <nl> + / / # define MECHANICAL_GANTRY_CALIBRATION <nl> + # if ENABLED ( MECHANICAL_GANTRY_CALIBRATION ) <nl> + # define GANTRY_CALIBRATION_CURRENT 600 / / Default calibration current in ma <nl> + # define GANTRY_CALIBRATION_EXTRA_HEIGHT 15 / / Extra distance in mm past Z_ # # # _POS to move <nl> + # define GANTRY_CALIBRATION_FEEDRATE 500 / / Feedrate for correction move <nl> + / / # define GANTRY_CALIBRATION_TO_MIN / / Enable to calibrate Z in the MIN direction <nl> + <nl> + / / # define GANTRY_CALIBRATION_SAFE_POSITION { X_CENTER , Y_CENTER } / / Safe position for nozzle <nl> + / / # define GANTRY_CALIBRATION_XY_PARK_FEEDRATE 3000 / / XY Park Feedrate - MMM <nl> + / / # define GANTRY_CALIBRATION_COMMANDS_PRE " " <nl> + # define GANTRY_CALIBRATION_COMMANDS_POST " G28 " / / G28 highly recommended to ensure an accurate position <nl> + # endif <nl> + <nl> / * * <nl> * MAX7219 Debug Matrix <nl> * <nl> new file mode 100644 <nl> index 00000000000 . . eb1d32f9092 <nl> mmm / dev / null <nl> ppp b / Marlin / src / gcode / calibrate / G34 . cpp <nl> <nl> + / * * <nl> + * Marlin 3D Printer Firmware <nl> + * Copyright ( c ) 2020 MarlinFirmware [ https : / / github . com / MarlinFirmware / Marlin ] <nl> + * <nl> + * Based on Sprinter and grbl . <nl> + * Copyright ( c ) 2011 Camiel Gubbels / Erik van der Zalm <nl> + * <nl> + * This program is free software : you can redistribute it and / or modify <nl> + * it under the terms of the GNU General Public License as published by <nl> + * the Free Software Foundation , either version 3 of the License , or <nl> + * ( at your option ) any later version . <nl> + * <nl> + * This program is distributed in the hope that it will be useful , <nl> + * but WITHOUT ANY WARRANTY ; without even the implied warranty of <nl> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE . See the <nl> + * GNU General Public License for more details . <nl> + * <nl> + * You should have received a copy of the GNU General Public License <nl> + * along with this program . If not , see < https : / / www . gnu . org / licenses / > . <nl> + * <nl> + * / <nl> + <nl> + # include " . . / . . / inc / MarlinConfigPre . h " <nl> + <nl> + # if ENABLED ( MECHANICAL_GANTRY_CALIBRATION ) <nl> + <nl> + # include " . . / gcode . h " <nl> + # include " . . / . . / module / motion . h " <nl> + # include " . . / . . / module / stepper . h " <nl> + # include " . . / . . / module / endstops . h " <nl> + <nl> + # if HAS_LEVELING <nl> + # include " . . / . . / feature / bedlevel / bedlevel . h " <nl> + # endif <nl> + <nl> + # define DEBUG_OUT ENABLED ( DEBUG_LEVELING_FEATURE ) <nl> + # include " . . / . . / core / debug_out . h " <nl> + <nl> + void GcodeSuite : : G34 ( ) { <nl> + <nl> + if ( homing_needed ( ) ) return ; <nl> + <nl> + TEMPORARY_SOFT_ENDSTOP_STATE ( false ) ; <nl> + TEMPORARY_BED_LEVELING_STATE ( false ) ; <nl> + TemporaryGlobalEndstopsState unlock_z ( false ) ; <nl> + <nl> + # ifdef GANTRY_CALIBRATION_COMMANDS_PRE <nl> + gcode . 
process_subcommands_now_P ( PSTR ( GANTRY_CALIBRATION_COMMANDS_PRE ) ) ; <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Sub Commands Processed " ) ; <nl> + # endif <nl> + <nl> + # ifdef GANTRY_CALIBRATION_SAFE_POSITION <nl> + / / Move XY to safe position <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Parking XY " ) ; <nl> + const xy_pos_t safe_pos = GANTRY_CALIBRATION_SAFE_POSITION ; <nl> + do_blocking_move_to ( safe_pos , MMM_TO_MMS ( GANTRY_CALIBRATION_XY_PARK_FEEDRATE ) ) ; <nl> + # endif <nl> + <nl> + const float move_distance = parser . intval ( ' Z ' , GANTRY_CALIBRATION_EXTRA_HEIGHT ) , <nl> + zbase = ENABLED ( GANTRY_CALIBRATION_TO_MIN ) ? Z_MIN_POS : Z_MAX_POS , <nl> + zpounce = zbase - move_distance , zgrind = zbase + move_distance ; <nl> + <nl> + / / Move Z to pounce position <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Setting Z Pounce " ) ; <nl> + do_blocking_move_to_z ( zpounce , MMM_TO_MMS ( HOMING_FEEDRATE_Z ) ) ; <nl> + <nl> + / / Store current motor settings , then apply reduced value <nl> + <nl> + # define _REDUCE_CURRENT ANY ( HAS_MOTOR_CURRENT_SPI , HAS_MOTOR_CURRENT_PWM , HAS_MOTOR_CURRENT_DAC , HAS_MOTOR_CURRENT_I2C , HAS_TRINAMIC_CONFIG ) <nl> + # if _REDUCE_CURRENT <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Reducing Current " ) ; <nl> + # endif <nl> + <nl> + # if HAS_MOTOR_CURRENT_SPI <nl> + const uint16_t target_current = parser . intval ( ' S ' , GANTRY_CALIBRATION_CURRENT ) ; <nl> + const uint32_t previous_current = stepper . motor_current_setting [ Z_AXIS ] ; <nl> + stepper . set_digipot_current ( Z_AXIS , target_current ) ; <nl> + # elif HAS_MOTOR_CURRENT_PWM <nl> + const uint16_t target_current = parser . intval ( ' S ' , GANTRY_CALIBRATION_CURRENT ) ; <nl> + const uint32_t previous_current = stepper . motor_current_setting [ Z_AXIS ] ; <nl> + stepper . set_digipot_current ( 1 , target_current ) ; <nl> + # elif HAS_MOTOR_CURRENT_DAC <nl> + const float target_current = parser . floatval ( ' S ' , GANTRY_CALIBRATION_CURRENT ) ; <nl> + const float previous_current = dac_amps ( Z_AXIS , target_current ) ; <nl> + stepper_dac . set_current_value ( Z_AXIS , target_current ) ; <nl> + # elif ENABLED ( HAS_MOTOR_CURRENT_I2C ) <nl> + const uint16_t target_current = parser . intval ( ' S ' , GANTRY_CALIBRATION_CURRENT ) ; <nl> + previous_current = dac_amps ( Z_AXIS ) ; <nl> + digipot_i2c . set_current ( Z_AXIS , target_current ) <nl> + # elif HAS_TRINAMIC_CONFIG <nl> + const uint16_t target_current = parser . intval ( ' S ' , GANTRY_CALIBRATION_CURRENT ) ; <nl> + static uint16_t previous_current_arr [ NUM_Z_STEPPER_DRIVERS ] ; <nl> + # if AXIS_IS_TMC ( Z ) <nl> + previous_current_arr [ 0 ] = stepperZ . getMilliamps ( ) ; <nl> + stepperZ . rms_current ( target_current ) ; <nl> + # endif <nl> + # if AXIS_IS_TMC ( Z2 ) <nl> + previous_current_arr [ 1 ] = stepperZ2 . getMilliamps ( ) ; <nl> + stepperZ2 . rms_current ( target_current ) ; <nl> + # endif <nl> + # if AXIS_IS_TMC ( Z3 ) <nl> + previous_current_arr [ 2 ] = stepperZ3 . getMilliamps ( ) ; <nl> + stepperZ3 . rms_current ( target_current ) ; <nl> + # endif <nl> + # if AXIS_IS_TMC ( Z4 ) <nl> + previous_current_arr [ 3 ] = stepperZ4 . getMilliamps ( ) ; <nl> + stepperZ4 . 
rms_current ( target_current ) ; <nl> + # endif <nl> + # endif <nl> + <nl> + / / Do Final Z move to adjust <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Final Z Move " ) ; <nl> + do_blocking_move_to_z ( zgrind , MMM_TO_MMS ( GANTRY_CALIBRATION_FEEDRATE ) ) ; <nl> + <nl> + / / Back off end plate , back to normal motion range <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Z Backoff " ) ; <nl> + do_blocking_move_to_z ( zpounce , MMM_TO_MMS ( GANTRY_CALIBRATION_FEEDRATE ) ) ; <nl> + <nl> + # if _REDUCE_CURRENT <nl> + / / Reset current to original values <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Restore Current " ) ; <nl> + # endif <nl> + <nl> + # if HAS_MOTOR_CURRENT_SPI <nl> + stepper . set_digipot_current ( Z_AXIS , previous_current ) ; <nl> + # elif HAS_MOTOR_CURRENT_PWM <nl> + stepper . set_digipot_current ( 1 , previous_current ) ; <nl> + # elif HAS_MOTOR_CURRENT_DAC <nl> + stepper_dac . set_current_value ( Z_AXIS , previous_current ) ; <nl> + # elif ENABLED ( HAS_MOTOR_CURRENT_I2C ) <nl> + digipot_i2c . set_current ( Z_AXIS , previous_current ) <nl> + # elif HAS_TRINAMIC_CONFIG <nl> + # if AXIS_IS_TMC ( Z ) <nl> + stepperZ . rms_current ( previous_current_arr [ 0 ] ) ; <nl> + # endif <nl> + # if AXIS_IS_TMC ( Z2 ) <nl> + stepperZ2 . rms_current ( previous_current_arr [ 1 ] ) ; <nl> + # endif <nl> + # if AXIS_IS_TMC ( Z3 ) <nl> + stepperZ3 . rms_current ( previous_current_arr [ 2 ] ) ; <nl> + # endif <nl> + # if AXIS_IS_TMC ( Z4 ) <nl> + stepperZ4 . rms_current ( previous_current_arr [ 3 ] ) ; <nl> + # endif <nl> + # endif <nl> + <nl> + # ifdef GANTRY_CALIBRATION_COMMANDS_POST <nl> + if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " Running Post Commands " ) ; <nl> + gcode . process_subcommands_now_P ( PSTR ( GANTRY_CALIBRATION_COMMANDS_POST ) ) ; <nl> + # endif <nl> + } <nl> + <nl> + # endif / / MECHANICAL_GANTRY_CALIBRATION <nl> mmm a / Marlin / src / gcode / calibrate / G34_M422 . cpp <nl> ppp b / Marlin / src / gcode / calibrate / G34_M422 . cpp <nl> <nl> * <nl> * / <nl> <nl> - # include " . . / . . / inc / MarlinConfig . h " <nl> + # include " . . / . . / inc / MarlinConfigPre . h " <nl> <nl> # if ENABLED ( Z_STEPPER_AUTO_ALIGN ) <nl> <nl> # include " . . / . . / feature / z_stepper_align . h " <nl> <nl> # include " . . / gcode . h " <nl> - # include " . . / . . / module / planner . h " <nl> - # include " . . / . . / module / stepper . h " <nl> # include " . . / . . / module / motion . h " <nl> + # include " . . / . . / module / stepper . h " <nl> + # include " . . / . . / module / planner . h " <nl> # include " . . / . . / module / probe . h " <nl> - <nl> - # if HAS_MULTI_HOTEND <nl> - # include " . . / . . / module / tool_change . h " <nl> - # endif <nl> + # include " . . / . . / lcd / ultralcd . h " / / for LCD_MESSAGEPGM <nl> <nl> # if HAS_LEVELING <nl> # include " . . / . . / feature / bedlevel / bedlevel . h " <nl> # endif <nl> <nl> + # if HAS_MULTI_HOTEND <nl> + # include " . . / . . / module / tool_change . h " <nl> + # endif <nl> + <nl> # if ENABLED ( Z_STEPPER_ALIGN_KNOWN_STEPPER_POSITIONS ) <nl> - # include " . . / . . / libs / least_squares_fit . h " <nl> + # include " . . / . . / libs / least_squares_fit . h " <nl> # endif <nl> <nl> # define DEBUG_OUT ENABLED ( DEBUG_LEVELING_FEATURE ) <nl> void GcodeSuite : : G34 ( ) { <nl> / / In BLTOUCH HS mode , the probe travels in a deployed state . 
<nl> / / Users of G34 might have a badly misaligned bed , so raise Z by the <nl> / / length of the deployed pin ( BLTOUCH stroke < 7mm ) <nl> - # define Z_BASIC_CLEARANCE Z_CLEARANCE_BETWEEN_PROBES + 7 . 0f * BOTH ( BLTOUCH , BLTOUCH_HS_MODE ) <nl> + # define Z_BASIC_CLEARANCE ( Z_CLEARANCE_BETWEEN_PROBES + 7 . 0f * BOTH ( BLTOUCH , BLTOUCH_HS_MODE ) ) <nl> <nl> / / Compute a worst - case clearance height to probe from . After the first <nl> / / iteration this will be re - calculated based on the actual bed position <nl> void GcodeSuite : : G34 ( ) { <nl> z_maxdiff = 0 . 0f , <nl> amplification = z_auto_align_amplification ; <nl> <nl> - / / These are needed after the for - loop <nl> - uint8_t iteration ; <nl> - bool err_break = false ; <nl> - float z_measured_min ; <nl> - <nl> # if DISABLED ( Z_STEPPER_ALIGN_KNOWN_STEPPER_POSITIONS ) <nl> bool adjustment_reverse = false ; <nl> # endif <nl> <nl> - / / ' iteration ' is declared above and is also used after the for - loop . <nl> - / / * not * the same as LOOP_L_N ( iteration , z_auto_align_iterations ) <nl> - for ( iteration = 0 ; iteration < z_auto_align_iterations ; + + iteration ) { <nl> + # if HAS_DISPLAY <nl> + PGM_P const msg_iteration = GET_TEXT ( MSG_ITERATION ) ; <nl> + const uint8_t iter_str_len = strlen_P ( msg_iteration ) ; <nl> + # endif <nl> + <nl> + / / Final z and iteration values will be used after breaking the loop <nl> + float z_measured_min ; <nl> + uint8_t iteration = 0 ; <nl> + bool err_break = false ; / / To break out of nested loops <nl> + while ( iteration < z_auto_align_iterations ) { <nl> if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPGM ( " > probing all positions . " ) ; <nl> <nl> - SERIAL_ECHOLNPAIR ( " \ nITERATION : " , int ( iteration + 1 ) ) ; <nl> + const int iter = iteration + 1 ; <nl> + SERIAL_ECHOLNPAIR ( " \ nG34 Iteration : " , iter ) ; <nl> + # if HAS_DISPLAY <nl> + char str [ iter_str_len + 2 + 1 ] ; <nl> + sprintf_P ( str , msg_iteration , iter ) ; <nl> + ui . set_status ( str ) ; <nl> + # endif <nl> <nl> / / Initialize minimum value <nl> z_measured_min = 100000 . 0f ; <nl> void GcodeSuite : : G34 ( ) { <nl> / / current_position . z has been manually altered in the " dirty trick " above . <nl> const float z_probed_height = probe . probe_at_point ( z_stepper_align . xy [ iprobe ] , raise_after , 0 , true , false ) ; <nl> if ( isnan ( z_probed_height ) ) { <nl> - SERIAL_ECHOLNPGM ( " Probing failed . " ) ; <nl> + SERIAL_ECHOLNPGM ( " Probing failed " ) ; <nl> + LCD_MESSAGEPGM ( MSG_LCD_PROBING_FAILED ) ; <nl> err_break = true ; <nl> break ; <nl> } <nl> void GcodeSuite : : G34 ( ) { <nl> , " Z3 - Z1 = " , ABS ( z_measured [ 2 ] - z_measured [ 0 ] ) <nl> # endif <nl> ) ; <nl> + # if HAS_DISPLAY <nl> + char fstr1 [ 10 ] ; <nl> + # if NUM_Z_STEPPER_DRIVERS = = 2 <nl> + char msg [ 6 + ( 6 + 5 ) * 1 + 1 ] ; <nl> + # else <nl> + char msg [ 6 + ( 6 + 5 ) * 3 + 1 ] , fstr2 [ 10 ] , fstr3 [ 10 ] ; <nl> + # endif <nl> + sprintf_P ( msg , <nl> + PSTR ( " Diffs Z1 - Z2 = % s " <nl> + # if NUM_Z_STEPPER_DRIVERS = = 3 <nl> + " Z2 - Z3 = % s " <nl> + " Z3 - Z1 = % s " <nl> + # endif <nl> + ) , dtostrf ( ABS ( z_measured [ 0 ] - z_measured [ 1 ] ) , 1 , 3 , fstr1 ) <nl> + # if NUM_Z_STEPPER_DRIVERS = = 3 <nl> + , dtostrf ( ABS ( z_measured [ 1 ] - z_measured [ 2 ] ) , 1 , 3 , fstr2 ) <nl> + , dtostrf ( ABS ( z_measured [ 2 ] - z_measured [ 0 ] ) , 1 , 3 , fstr3 ) <nl> + # endif <nl> + ) ; <nl> + ui . 
set_status ( msg ) ; <nl> + # endif <nl> + <nl> + auto decreasing_accuracy = [ ] ( const float & v1 , const float & v2 ) { <nl> + if ( v1 < v2 * 0 . 7f ) { <nl> + SERIAL_ECHOLNPGM ( " Decreasing Accuracy Detected . " ) ; <nl> + LCD_MESSAGEPGM ( MSG_DECREASING_ACCURACY ) ; <nl> + return true ; <nl> + } <nl> + return false ; <nl> + } ; <nl> <nl> # if ENABLED ( Z_STEPPER_ALIGN_KNOWN_STEPPER_POSITIONS ) <nl> + <nl> / / Check if the applied corrections go in the correct direction . <nl> / / Calculate the sum of the absolute deviations from the mean of the probe measurements . <nl> / / Compare to the last iteration to ensure it ' s getting better . <nl> void GcodeSuite : : G34 ( ) { <nl> z_align_level_indicator + = ABS ( z_measured [ zstepper ] - z_measured_mean ) ; <nl> <nl> / / If it ' s getting worse , stop and throw an error <nl> - if ( last_z_align_level_indicator < z_align_level_indicator * 0 . 7f ) { <nl> - SERIAL_ECHOLNPGM ( " Decreasing accuracy detected . " ) ; <nl> - err_break = true ; <nl> - break ; <nl> - } <nl> + err_break = decreasing_accuracy ( last_z_align_level_indicator , z_align_level_indicator ) ; <nl> + if ( err_break ) break ; <nl> <nl> last_z_align_level_indicator = z_align_level_indicator ; <nl> # endif <nl> void GcodeSuite : : G34 ( ) { <nl> if ( z_align_abs ) amplification = ( iteration = = 1 ) ? _MIN ( last_z_align_move [ zstepper ] / z_align_abs , 2 . 0f ) : z_auto_align_amplification ; <nl> <nl> / / Check for less accuracy compared to last move <nl> - if ( last_z_align_move [ zstepper ] < z_align_abs * 0 . 7f ) { <nl> - SERIAL_ECHOLNPGM ( " Decreasing accuracy detected . " ) ; <nl> + if ( decreasing_accuracy ( last_z_align_move [ zstepper ] , z_align_abs ) ) { <nl> if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPAIR ( " > Z " , int ( zstepper + 1 ) , " last_z_align_move = " , last_z_align_move [ zstepper ] ) ; <nl> if ( DEBUGGING ( LEVELING ) ) DEBUG_ECHOLNPAIR ( " > Z " , int ( zstepper + 1 ) , " z_align_abs = " , z_align_abs ) ; <nl> adjustment_reverse = ! adjustment_reverse ; <nl> void GcodeSuite : : G34 ( ) { <nl> <nl> if ( err_break ) break ; <nl> <nl> - if ( success_break ) { SERIAL_ECHOLNPGM ( " Target accuracy achieved . " ) ; break ; } <nl> + if ( success_break ) { <nl> + SERIAL_ECHOLNPGM ( " Target accuracy achieved . " ) ; <nl> + LCD_MESSAGEPGM ( MSG_ACCURACY_ACHIEVED ) ; <nl> + break ; <nl> + } <nl> <nl> - } / / for ( iteration ) <nl> + iteration + + ; <nl> + } / / while ( iteration < z_auto_align_iterations ) <nl> <nl> if ( err_break ) <nl> SERIAL_ECHOLNPGM ( " G34 aborted . " ) ; <nl> mmm a / Marlin / src / gcode / gcode . cpp <nl> ppp b / Marlin / src / gcode / gcode . cpp <nl> void GcodeSuite : : process_parsed_command ( const bool no_ok / * = false * / ) { <nl> case 33 : G33 ( ) ; break ; / / G33 : Delta Auto - Calibration <nl> # endif <nl> <nl> - # if ENABLED ( Z_STEPPER_AUTO_ALIGN ) <nl> + # if EITHER ( Z_STEPPER_AUTO_ALIGN , MECHANICAL_GANTRY_CALIBRATION ) <nl> case 34 : G34 ( ) ; break ; / / G34 : Z Stepper automatic alignment using probe <nl> # endif <nl> <nl> mmm a / Marlin / src / gcode / gcode . h <nl> ppp b / Marlin / src / gcode / gcode . 
h <nl> class GcodeSuite { <nl> <nl> TERN_ ( DELTA_AUTO_CALIBRATION , static void G33 ( ) ) ; <nl> <nl> - # if ENABLED ( Z_STEPPER_AUTO_ALIGN ) <nl> + # if EITHER ( Z_STEPPER_AUTO_ALIGN , MECHANICAL_GANTRY_CALIBRATION ) <nl> static void G34 ( ) ; <nl> - static void M422 ( ) ; <nl> # endif <nl> <nl> + TERN_ ( Z_STEPPER_AUTO_ALIGN , static void M422 ( ) ) ; <nl> + <nl> TERN_ ( ASSISTED_TRAMMING , static void G35 ( ) ) ; <nl> <nl> TERN_ ( G38_PROBE_TARGET , static void G38 ( const int8_t subcode ) ) ; <nl> mmm a / Marlin / src / inc / SanityCheck . h <nl> ppp b / Marlin / src / inc / SanityCheck . h <nl> <nl> # elif defined ( CHAMBER_HEATER_PIN ) <nl> # error " CHAMBER_HEATER_PIN is now HEATER_CHAMBER_PIN . Please update your configuration and / or pins . " <nl> # elif defined ( TMC_Z_CALIBRATION ) <nl> - # error " TMC_Z_CALIBRATION has been deprecated in favor of Z_STEPPER_AUTO_ALIGN . Please update your configuration . " <nl> + # error " TMC_Z_CALIBRATION has been deprecated in favor of MECHANICAL_GANTRY_CALIBRATION . Please update your configuration . " <nl> # elif defined ( Z_MIN_PROBE_ENDSTOP ) <nl> # error " Z_MIN_PROBE_ENDSTOP is no longer required . Please remove it from Configuration . h . " <nl> # elif defined ( DUAL_NOZZLE_DUPLICATION_MODE ) <nl> static_assert ( _ARR_TEST ( 3 , 0 ) & & _ARR_TEST ( 3 , 1 ) & & _ARR_TEST ( 3 , 2 ) <nl> # endif <nl> # endif <nl> <nl> + # if ENABLED ( MECHANICAL_GANTRY_CALIBRATION ) <nl> + # if NONE ( HAS_MOTOR_CURRENT_DAC , HAS_MOTOR_CURRENT_SPI , HAS_MOTOR_CURRENT_DAC , HAS_TRINAMIC_CONFIG , HAS_MOTOR_CURRENT_PWM ) <nl> + # error " It is highly recommended to have adjustable current drivers to prevent damage . Disable this line to continue anyway . " <nl> + # elif ! defined ( GANTRY_CALIBRATION_CURRENT ) <nl> + # error " MECHANICAL_GANTRY_CALIBRATION Requires GANTRY_CALIBRATION_CURRENT to be set . " <nl> + # elif ! defined ( GANTRY_CALIBRATION_EXTRA_HEIGHT ) <nl> + # error " MECHANICAL_GANTRY_CALIBRATION Requires GANTRY_CALIBRATION_EXTRA_HEIGHT to be set . " <nl> + # elif ! defined ( GANTRY_CALIBRATION_FEEDRATE ) <nl> + # error " MECHANICAL_GANTRY_CALIBRATION Requires GANTRY_CALIBRATION_FEEDRATE to be set . " <nl> + # endif <nl> + # if defined ( GANTRY_CALIBRATION_SAFE_POSITION ) & & ! defined ( GANTRY_CALIBRATION_XY_PARK_FEEDRATE ) <nl> + # error " GANTRY_CALIBRATION_SAFE_POSITION Requires GANTRY_CALIBRATION_XY_PARK_FEEDRATE to be set . " <nl> + # endif <nl> + # endif <nl> + <nl> # if ENABLED ( PRINTCOUNTER ) & & DISABLED ( EEPROM_SETTINGS ) <nl> # error " PRINTCOUNTER requires EEPROM_SETTINGS . Please update your Configuration . " <nl> # endif <nl> mmm a / Marlin / src / lcd / language / language_en . h <nl> ppp b / Marlin / src / lcd / language / language_en . h <nl> namespace Language_en { <nl> PROGMEM Language_Str MSG_AUTO_HOME_Z = _UxGT ( " Home Z " ) ; <nl> PROGMEM Language_Str MSG_AUTO_Z_ALIGN = _UxGT ( " Auto Z - Align " ) ; <nl> PROGMEM Language_Str MSG_ASSISTED_TRAMMING = _UxGT ( " Assisted Tramming " ) ; <nl> + PROGMEM Language_Str MSG_ITERATION = _UxGT ( " G34 Iteration : % i " ) ; <nl> + PROGMEM Language_Str MSG_DECREASING_ACCURACY = _UxGT ( " Accuracy Decreasing ! 
" ) ; <nl> + PROGMEM Language_Str MSG_ACCURACY_ACHIEVED = _UxGT ( " Accuracy Achieved " ) ; <nl> PROGMEM Language_Str MSG_LEVEL_BED_HOMING = _UxGT ( " Homing XYZ " ) ; <nl> PROGMEM Language_Str MSG_LEVEL_BED_WAITING = _UxGT ( " Click to Begin " ) ; <nl> PROGMEM Language_Str MSG_LEVEL_BED_NEXT_POINT = _UxGT ( " Next Point " ) ; <nl> mmm a / Marlin / src / lcd / menu / menu_motion . cpp <nl> ppp b / Marlin / src / lcd / menu / menu_motion . cpp <nl> void menu_motion ( ) { <nl> / / <nl> / / Auto Z - Align <nl> / / <nl> - # if ENABLED ( Z_STEPPER_AUTO_ALIGN ) <nl> + # if EITHER ( Z_STEPPER_AUTO_ALIGN , MECHANICAL_GANTRY_CALIBRATION ) <nl> GCODES_ITEM ( MSG_AUTO_Z_ALIGN , PSTR ( " G34 " ) ) ; <nl> # endif <nl> <nl> mmm a / buildroot / tests / LPC1769 - tests <nl> ppp b / buildroot / tests / LPC1769 - tests <nl> opt_set MOTHERBOARD BOARD_COHESION3D_REMIX <nl> opt_set X_DRIVER_TYPE TMC2130 <nl> opt_set Y_DRIVER_TYPE TMC2130 <nl> opt_set Z_DRIVER_TYPE TMC2130 <nl> - opt_enable AUTO_BED_LEVELING_BILINEAR EEPROM_SETTINGS EEPROM_CHITCHAT \ <nl> + opt_enable AUTO_BED_LEVELING_BILINEAR EEPROM_SETTINGS EEPROM_CHITCHAT MECHANICAL_GANTRY_CALIBRATION \ <nl> TMC_USE_SW_SPI MONITOR_DRIVER_STATUS STEALTHCHOP_XY STEALTHCHOP_Z HYBRID_THRESHOLD \ <nl> SENSORLESS_PROBING Z_SAFE_HOMING X_STALL_SENSITIVITY Y_STALL_SENSITIVITY Z_STALL_SENSITIVITY TMC_DEBUG \ <nl> EXPERIMENTAL_I2CBUS <nl> mmm a / platformio . ini <nl> ppp b / platformio . ini <nl> default_src_filter = + < src / * > - < src / config > - < src / HAL > + < src / HAL / shared > <nl> - < src / gcode / bedlevel / G42 . cpp > <nl> - < src / gcode / bedlevel / M420 . cpp > <nl> - < src / gcode / calibrate / G33 . cpp > <nl> + - < src / gcode / calibrate / G34 . cpp > <nl> - < src / gcode / calibrate / G34_M422 . cpp > <nl> - < src / gcode / calibrate / G76_M192_M871 . cpp > <nl> - < src / gcode / calibrate / G425 . cpp > <nl> MK2_MULTIPLEXER = src_filter = + < src / feature / snmm . cpp > <nl> EXT_SOLENOID | MANUAL_SOLENOID_CONTROL = src_filter = + < src / feature / solenoid . cpp > + < src / gcode / control / M380_M381 . cpp > <nl> HAS_CUTTER = src_filter = + < src / feature / spindle_laser . cpp > + < src / gcode / control / M3 - M5 . cpp > <nl> EXPERIMENTAL_I2CBUS = src_filter = + < src / feature / twibus . cpp > + < src / gcode / feature / i2c > <nl> + MECHANICAL_GANTRY_CAL . + = src_filter = + < src / gcode / calibrate / G34 . cpp > <nl> Z_STEPPER_AUTO_ALIGN = src_filter = + < src / feature / z_stepper_align . cpp > + < src / gcode / calibrate / G34_M422 . cpp > <nl> G26_MESH_VALIDATION = src_filter = + < src / gcode / bedlevel / G26 . cpp > <nl> ASSISTED_TRAMMING = src_filter = + < src / gcode / bedlevel / G35 . cpp > <nl>
G34 Mechanical Gantry Calibration (like Prusa M915) ()
MarlinFirmware/Marlin
8b060a3902f6c05c9079c1919eea80558c7fe4f1
2020-10-16T21:39:55Z
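The heart of the new G34 is a save/reduce/restore dance around the Z stepper current: the gantry is deliberately driven past its end position at a current low enough that lost steps realign the two ends against the frame instead of bending hardware, then the configured current comes back. A compressed sketch of that pattern; the `Driver` struct and the move placeholder are hypothetical stand-ins for the various `stepperZ*`/`stepper_dac`/digipot backends the real code dispatches between:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical driver interface standing in for the TMC/DAC/digipot backends.
struct Driver {
    uint16_t rms_current_ma = 800;
};

// Sketch of the G34 current dance: remember the configured current, grind the
// axis into the stops at a reduced current so stalls are harmless, restore.
void gantry_calibrate(Driver &z, uint16_t calibration_ma)
{
    const uint16_t previous_ma = z.rms_current_ma;  // save
    z.rms_current_ma = calibration_ma;              // reduce (e.g. 600 mA)

    // ... move Z past Z_MAX_POS so both gantry ends stall and level out ...
    std::cout << "grinding at " << z.rms_current_ma << " mA\n";

    z.rms_current_ma = previous_ma;                 // restore
}

int main()
{
    Driver z;
    gantry_calibrate(z, 600);
    std::cout << "restored to " << z.rms_current_ma << " mA\n";
}
```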
mmm a / src / training / lstmtester . cpp <nl> ppp b / src / training / lstmtester . cpp <nl> <nl> namespace tesseract { <nl> <nl> LSTMTester : : LSTMTester ( int64_t max_memory ) <nl> - : test_data_ ( max_memory ) , total_pages_ ( 0 ) , async_running_ ( false ) { } <nl> + : test_data_ ( max_memory ) { } <nl> <nl> / / Loads a set of lstmf files that were created using the lstm . train config to <nl> / / tesseract into memory ready for testing . Returns false if nothing was <nl> mmm a / src / training / lstmtester . h <nl> ppp b / src / training / lstmtester . h <nl> class LSTMTester { <nl> <nl> / / The data to test with . <nl> DocumentCache test_data_ ; <nl> - int total_pages_ ; <nl> + int total_pages_ = 0 ; <nl> / / Flag that indicates an asynchronous test is currently running . <nl> / / Protected by running_mutex_ . <nl> - bool async_running_ ; <nl> + bool async_running_ = false ; <nl> std : : mutex running_mutex_ ; <nl> / / Stored copies of the args for use while running asynchronously . <nl> - int test_iteration_ ; <nl> - const double * test_training_errors_ ; <nl> + int test_iteration_ = 0 ; <nl> + const double * test_training_errors_ = nullptr ; <nl> TessdataManager test_model_mgr_ ; <nl> - int test_training_stage_ ; <nl> + int test_training_stage_ = 0 ; <nl> STRING test_result_ ; <nl> } ; <nl> <nl>
Fix CID 1386099 (Uninitialized pointer field)
tesseract-ocr/tesseract
97dda3d53500a6460dcddc9418dce2ab1bfdff66
2019-09-14T13:43:50Z
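The fix is the canonical use of C++11 default member initializers: the defaults move next to the declarations, so every constructor, including ones added later, starts from a defined state, and Coverity's uninitialized-field finding goes away. A self-contained before/after sketch of the idiom (member names borrowed from `LSTMTester`, the rest hypothetical):

```cpp
#include <iostream>

// Before the fix: members are only initialized if every constructor
// remembers to list them.
struct Before {
    int total_pages_;                     // indeterminate unless a ctor sets it
    const double *test_training_errors_;  // CID 1386099: may hold garbage
    Before() {}                           // init list forgotten -> UB on read
};

// After the fix: defaults live next to the declarations, so any constructor,
// present or future, starts from a well-defined state.
struct After {
    int total_pages_ = 0;
    const double *test_training_errors_ = nullptr;
};

int main()
{
    After a;
    std::cout << a.total_pages_ << ' '
              << (a.test_training_errors_ == nullptr) << '\n';  // 0 1
}
```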
mmm a / build / cocos2d_libs . xcodeproj / project . pbxproj <nl> ppp b / build / cocos2d_libs . xcodeproj / project . pbxproj <nl> <nl> CLANG_WARN_BOOL_CONVERSION = YES ; <nl> CLANG_WARN_CONSTANT_CONVERSION = YES ; <nl> COPY_PHASE_STRIP = NO ; <nl> - ENABLE_BITCODE = NO ; <nl> ENABLE_TESTABILITY = YES ; <nl> GCC_C_LANGUAGE_STANDARD = c99 ; <nl> GCC_DYNAMIC_NO_PIC = NO ; <nl> <nl> CLANG_CXX_LIBRARY = " libc + + " ; <nl> CLANG_WARN_BOOL_CONVERSION = YES ; <nl> CLANG_WARN_CONSTANT_CONVERSION = YES ; <nl> - ENABLE_BITCODE = NO ; <nl> GCC_C_LANGUAGE_STANDARD = c99 ; <nl> GCC_PREPROCESSOR_DEFINITIONS = ( <nl> " CC_ENABLE_CHIPMUNK_INTEGRATION = 1 " , <nl> <nl> buildSettings = { <nl> ALWAYS_SEARCH_USER_PATHS = YES ; <nl> COMBINE_HIDPI_IMAGES = YES ; <nl> - ENABLE_BITCODE = NO ; <nl> EXECUTABLE_EXTENSION = a ; <nl> EXECUTABLE_PREFIX = " " ; <nl> GCC_PRECOMPILE_PREFIX_HEADER = YES ; <nl> <nl> buildSettings = { <nl> ALWAYS_SEARCH_USER_PATHS = YES ; <nl> COMBINE_HIDPI_IMAGES = YES ; <nl> - ENABLE_BITCODE = NO ; <nl> EXECUTABLE_EXTENSION = a ; <nl> EXECUTABLE_PREFIX = " " ; <nl> GCC_GENERATE_DEBUGGING_SYMBOLS = NO ; <nl> <nl> isa = XCBuildConfiguration ; <nl> buildSettings = { <nl> ALWAYS_SEARCH_USER_PATHS = YES ; <nl> + ENABLE_BITCODE = NO ; <nl> EXECUTABLE_PREFIX = " " ; <nl> GCC_PRECOMPILE_PREFIX_HEADER = YES ; <nl> GCC_PREFIX_HEADER = " . . / cocos / platform / ios / cocos2d - prefix . pch " ; <nl> <nl> isa = XCBuildConfiguration ; <nl> buildSettings = { <nl> ALWAYS_SEARCH_USER_PATHS = YES ; <nl> + ENABLE_BITCODE = NO ; <nl> EXECUTABLE_PREFIX = " " ; <nl> GCC_GENERATE_DEBUGGING_SYMBOLS = NO ; <nl> GCC_PRECOMPILE_PREFIX_HEADER = YES ; <nl> mmm a / build / cocos2d_tests . xcodeproj / project . pbxproj <nl> ppp b / build / cocos2d_tests . xcodeproj / project . pbxproj <nl> <nl> 1D6058900D05DD3D006BFB54 = { <nl> DevelopmentTeam = U7E7529TA5 ; <nl> } ; <nl> + 507B40FE1C31BEA60067B53E = { <nl> + DevelopmentTeam = MDDB52YB8L ; <nl> + } ; <nl> + 507B427E1C31E6070067B53E = { <nl> + DevelopmentTeam = MDDB52YB8L ; <nl> + } ; <nl> 507B43541C31FB340067B53E = { <nl> CreatedOnToolsVersion = 7 . 
2 ; <nl> } ; <nl> + 507B43611C31FB670067B53E = { <nl> + DevelopmentTeam = MDDB52YB8L ; <nl> + } ; <nl> + 507B43C31C3201360067B53E = { <nl> + DevelopmentTeam = MDDB52YB8L ; <nl> + } ; <nl> } ; <nl> } ; <nl> buildConfigurationList = C01FCF4E08A954540054247B / * Build configuration list for PBXProject " cocos2d_tests " * / ; <nl> <nl> 507B43B71C31FB670067B53E / * Debug * / = { <nl> isa = XCBuildConfiguration ; <nl> buildSettings = { <nl> - CODE_SIGN_IDENTITY = " iPhone Developer " ; <nl> - ENABLE_BITCODE = NO ; <nl> GCC_PREPROCESSOR_DEFINITIONS = ( <nl> " $ ( inherited ) " , <nl> CC_TARGET_OS_TVOS , <nl> <nl> 507B43B81C31FB670067B53E / * Release * / = { <nl> isa = XCBuildConfiguration ; <nl> buildSettings = { <nl> - CODE_SIGN_IDENTITY = " iPhone Developer " ; <nl> - ENABLE_BITCODE = NO ; <nl> GCC_PREPROCESSOR_DEFINITIONS = ( <nl> " $ ( inherited ) " , <nl> CC_TARGET_OS_TVOS , <nl> <nl> 507B43F01C3201360067B53E / * Debug * / = { <nl> isa = XCBuildConfiguration ; <nl> buildSettings = { <nl> - CODE_SIGN_IDENTITY = " iPhone Developer " ; <nl> GCC_PREPROCESSOR_DEFINITIONS = ( <nl> " $ ( inherited ) " , <nl> CC_TARGET_OS_TVOS , <nl> <nl> 507B43F11C3201360067B53E / * Release * / = { <nl> isa = XCBuildConfiguration ; <nl> buildSettings = { <nl> - CODE_SIGN_IDENTITY = " iPhone Developer " ; <nl> GCC_PREPROCESSOR_DEFINITIONS = ( <nl> " $ ( inherited ) " , <nl> CC_TARGET_OS_TVOS , <nl> <nl> 507B435A1C31FB350067B53E / * Release * / , <nl> ) ; <nl> defaultConfigurationIsVisible = 0 ; <nl> + defaultConfigurationName = Debug ; <nl> } ; <nl> 507B43B61C31FB670067B53E / * Build configuration list for PBXNativeTarget " lua - tests tvOS " * / = { <nl> isa = XCConfigurationList ; <nl> mmm a / cocos / audio / ios / CDAudioManager . m <nl> ppp b / cocos / audio / ios / CDAudioManager . m <nl> - ( BOOL ) isBackgroundMusicPlaying { <nl> / / determine ringer switch state <nl> - ( BOOL ) isDeviceMuted { <nl> <nl> - # if TARGET_IPHONE_SIMULATOR <nl> + # if TARGET_IPHONE_SIMULATOR | | defined ( CC_TARGET_OS_TVOS ) <nl> / / Calling audio route stuff on the simulator causes problems <nl> return NO ; <nl> # else <nl>
fixes compilation issues on tvOS
cocos2d/cocos2d-x
77aa3871b6874593efc97c88715d9f01ebe26389
2015-12-29T20:30:00Z
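Besides the project-file churn, the functional change is the compile-time platform guard in `isDeviceMuted`: tvOS has no ringer switch and the simulator misbehaves on audio-route queries, so both targets short-circuit to "not muted". A runnable C++ sketch of the same guard pattern; the macro and function names here are illustrative stand-ins, not the cocos2d API:

```cpp
#include <iostream>

// Hypothetical platform macro standing in for CC_TARGET_OS_TVOS /
// TARGET_IPHONE_SIMULATOR; the guard pattern is the point, not the names.
#define CC_TARGET_OS_TVOS 1

bool isDeviceMuted()
{
#if defined(TARGET_IPHONE_SIMULATOR) || defined(CC_TARGET_OS_TVOS)
    // No ringer switch on these targets, and the simulator chokes on
    // audio-route queries, so skip the hardware check entirely.
    return false;
#else
    return queryAudioRoute();  // real-device path (not compiled here)
#endif
}

int main() { std::cout << isDeviceMuted() << '\n'; }  // prints: 0
```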
mmm a / src / EditTableDialog . cpp <nl> ppp b / src / EditTableDialog . cpp <nl> void EditTableDialog : : itemChanged ( QTreeWidgetItem * item , int column ) <nl> { <nl> case kName : <nl> / / When a field of that name already exists , show a warning to the user and don ' t apply the new name <nl> - if ( m_table . findField ( item - > text ( column ) ) ) <nl> + if ( m_table . findField ( item - > text ( column ) ) ! = - 1 ) <nl> { <nl> QMessageBox : : warning ( this , qApp - > applicationName ( ) , tr ( " There already is a field with that name . Please rename it first or choose a different " <nl> " name for this field . " ) ) ; <nl>
fix: message 'field already exists' for non-existent fields
sqlitebrowser/sqlitebrowser
0c10dbdd8ddf8e9c3568f923f9f5226c725789aa
2017-01-26T13:03:21Z
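This one-token-looking fix hides a classic sentinel bug: `findField` returns an index where 0 is a valid first field and -1 means "not found", so using the result as a boolean inverts the check at both edges. A self-contained sketch with a hypothetical `findField` of the assumed signature:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for sqlitebrowser's Table::findField: returns the
// field's index, or -1 when no field has that name.
int findField(const std::vector<std::string> &fields, const std::string &name)
{
    for (size_t i = 0; i < fields.size(); ++i)
        if (fields[i] == name)
            return static_cast<int>(i);
    return -1;
}

int main()
{
    const std::vector<std::string> fields{"id", "name"};

    // Buggy truthiness check: index 0 ("id") is falsy, so an existing first
    // field looks missing, while -1 ("missing") is truthy and falsely
    // triggers the "field already exists" warning.
    std::cout << std::boolalpha
              << static_cast<bool>(findField(fields, "id")) << ' '        // false (wrong)
              << static_cast<bool>(findField(fields, "missing")) << '\n'; // true (wrong)

    // Fixed check from the commit: compare against the -1 sentinel.
    std::cout << (findField(fields, "id") != -1) << ' '        // true
              << (findField(fields, "missing") != -1) << '\n'; // false
}
```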
mmm a / tensorflow / python / training / monitored_session . py <nl> ppp b / tensorflow / python / training / monitored_session . py <nl> def _create_session ( self ) : <nl> ' or parameter server . A new session will be created . ' <nl> ' Error : % s ' , e ) <nl> <nl> + def _check_stop ( self ) : <nl> + try : <nl> + if self . _sess : <nl> + return self . _sess . _check_stop ( ) # pylint : disable = protected - access <nl> + else : <nl> + return True <nl> + except _PREEMPTION_ERRORS as e : <nl> + logging . info ( ' An error was raised while considering whether the ' <nl> + ' session is complete . This may be due to a preemption in ' <nl> + ' a connected worker or parameter server . The current ' <nl> + ' session will be closed and a new session will be ' <nl> + ' created . Error : % s ' , e ) <nl> + self . close ( ) <nl> + self . _sess = self . _create_session ( ) <nl> + # Since we have just recreated the session , the overall computation should <nl> + # not stop : <nl> + return False <nl> + except Exception : # pylint : disable = broad - except <nl> + # ` should_stop ` should return True instead of raising an exception . <nl> + return True <nl> + <nl> def run ( self , fetches , feed_dict = None , options = None , run_metadata = None ) : <nl> while True : <nl> try : <nl> def __init__ ( self , sess , coord , stop_grace_period_secs = 120 ) : <nl> self . _stop_grace_period_secs = stop_grace_period_secs <nl> <nl> def _check_stop ( self ) : <nl> - # Check with the coordinator if we should stop . <nl> + # If the coordinator was asked to stop due to an exception , then it needs <nl> + # to be propagated to this stack . <nl> + self . _coord . raise_requested_exception ( ) <nl> + # At this point , no exceptions are recorded in the coordinator . <nl> return self . _coord . should_stop ( ) <nl> <nl> def close ( self ) : <nl> def close ( self ) : <nl> # useful exceptions are already reported by join ( ) . <nl> pass <nl> <nl> + def run ( self , * args , * * kwargs ) : <nl> + try : <nl> + return self . _sess . run ( * args , * * kwargs ) <nl> + except _PREEMPTION_ERRORS as original_exception : <nl> + raise original_exception <nl> + except Exception as original_exception : # pylint : disable = broad - except <nl> + # A non - preemption error could have been caused by a preemption error <nl> + # in the coordinator . If this is the case , raise that exception instead , <nl> + # since it ' s the root cause . Otherwise , stick to the ` original_exception ` . <nl> + try : <nl> + self . _coord . raise_requested_exception ( ) <nl> + except _PREEMPTION_ERRORS as preemption_in_coordinator : <nl> + raise preemption_in_coordinator <nl> + except Exception : # pylint : disable = broad - except <nl> + raise original_exception <nl> + else : <nl> + raise original_exception <nl> + <nl> <nl> class _HookedSession ( _WrappedSession ) : <nl> " " " A _WrappedSession that calls hooks during calls to run ( ) . <nl> mmm a / tensorflow / python / training / monitored_session_test . py <nl> ppp b / tensorflow / python / training / monitored_session_test . py <nl> def test_stop_threads_on_close ( self ) : <nl> <nl> <nl> class AbortAtNSession ( object ) : <nl> - " " " A mock sessionthat aborts at the N - th run call . " " " <nl> + " " " A mock session that aborts at the N - th run call . " " " <nl> <nl> def __init__ ( self , sess , n ) : <nl> self . _sess = sess <nl> def run ( self , * args , * * kwargs ) : <nl> return self . _sess . run ( * args , * * kwargs ) <nl> <nl> <nl> + class StopCoordinatorWithException ( session_run_hook . 
SessionRunHook ) : <nl> + " " " With this hook Coordinator throws an exception after N - runs . " " " <nl> + <nl> + def __init__ ( self , calls_before_stopping , exception_to_raise = None ) : <nl> + self . _started_the_side_thread_already = False <nl> + self . _lock = threading . Lock ( ) <nl> + self . _stored_exception_event = threading . Event ( ) <nl> + self . _calls_before_stopping = calls_before_stopping <nl> + self . _exception_to_raise = ( exception_to_raise or errors_impl . AbortedError ( <nl> + None , None , ' Aborted at N ' ) ) <nl> + <nl> + def _maybe_stop_with_exception ( self , coord ) : <nl> + while True : <nl> + with self . _lock : <nl> + if self . _calls_before_stopping = = 0 : <nl> + try : <nl> + raise self . _exception_to_raise <nl> + except Exception as e : # pylint : disable = broad - except <nl> + coord . request_stop ( e ) <nl> + self . _stored_exception_event . set ( ) <nl> + break <nl> + <nl> + def after_create_session ( self , session , coord ) : <nl> + if self . _started_the_side_thread_already : <nl> + return <nl> + <nl> + separate_thread = threading . Thread ( <nl> + target = self . _maybe_stop_with_exception , args = ( coord , ) ) <nl> + <nl> + coord . register_thread ( separate_thread ) <nl> + separate_thread . start ( ) <nl> + self . _started_the_side_thread_already = True <nl> + # Coordinator will take care of joining ` separate_thread ` . <nl> + <nl> + def after_run ( self , run_context , run_values ) : <nl> + stopping_now = False <nl> + with self . _lock : <nl> + self . _calls_before_stopping - = 1 <nl> + if self . _calls_before_stopping = = 0 : <nl> + stopping_now = True <nl> + <nl> + if stopping_now : <nl> + self . _stored_exception_event . wait ( ) <nl> + <nl> + <nl> + class FailTrainingAfterCoordinatorStopped ( StopCoordinatorWithException ) : <nl> + " " " With this hook training encounters an exception after N - runs . " " " <nl> + <nl> + def __init__ ( self , calls_before_stopping ) : <nl> + StopCoordinatorWithException . __init__ ( self , calls_before_stopping ) <nl> + self . _coord = None <nl> + <nl> + def after_create_session ( self , session , coord ) : <nl> + self . _coord = coord <nl> + return StopCoordinatorWithException . after_create_session ( <nl> + self , session , coord ) <nl> + <nl> + def after_run ( self , run_context , run_values ) : <nl> + StopCoordinatorWithException . after_run ( self , run_context , run_values ) <nl> + try : <nl> + # After a ` run ` , an exception could have been stored inside the <nl> + # coordinator . <nl> + self . _coord . raise_requested_exception ( ) <nl> + except errors_impl . AbortedError : <nl> + # In real world , the main thread may or may not know about the exception <nl> + # that stopped the coordinator . Because the coordinator has stopped , the <nl> + # main thread could have gotten stuck as well ( for example , the <nl> + # coordinator was supposed to execute ` FIFOQueue . enqueue ` while the main <nl> + # thread is executing a blocking ` FIFOQueue . dequeue ` ) . After it got stuck , <nl> + # the session is going to get garbage collected after some time with : <nl> + raise errors_impl . CancelledError ( None , None , <nl> + ' Session got garbage - collected . ' ) <nl> + <nl> + <nl> + class CountingSessionCreator ( object ) : <nl> + " " " A creator that counts the number of created sessions . " " " <nl> + <nl> + def __init__ ( self , session ) : <nl> + self . _initial_session = session <nl> + # We only have one session per test case . 
We can ' t re - create it , thus <nl> + # it shouldn ' t be closed . <nl> + self . _initial_session . close = lambda * args : None <nl> + self . _create_session_calls = 0 <nl> + <nl> + @ property <nl> + def number_of_sessions_created ( self ) : <nl> + return self . _create_session_calls <nl> + <nl> + def create_session ( self ) : <nl> + self . _create_session_calls + = 1 <nl> + return self . _initial_session <nl> + <nl> + <nl> class RecoverableSessionTest ( test . TestCase ) : <nl> " " " _RecoverableSession tests . " " " <nl> <nl> def create_session ( self ) : <nl> with self . assertRaisesRegexp ( IndexError , ' pop from empty list ' ) : <nl> recoverable_sess . run ( v , feed_dict = { c : - 12 } ) <nl> <nl> + def test_recovery_from_coordinator_exception ( self ) : <nl> + with self . test_session ( ) as test_session : <nl> + session_creator = CountingSessionCreator ( test_session ) <nl> + session = monitored_session . MonitoredSession ( <nl> + session_creator , <nl> + [ StopCoordinatorWithException ( calls_before_stopping = 2 ) ] ) <nl> + <nl> + self . assertEqual ( 1 , session_creator . number_of_sessions_created ) <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + <nl> + c = constant_op . constant ( 0 ) <nl> + v = array_ops . identity ( c ) <nl> + <nl> + # The coordinator will not abort during this call , since it ' s the call <nl> + # number 0 . <nl> + self . assertEqual ( 51 , session . run ( v , feed_dict = { c : 51 } ) ) <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + # The coordinator will abort during the next call , since it ' s the call <nl> + # number 1 . <nl> + self . assertEqual ( 42 , session . run ( v , feed_dict = { c : 42 } ) ) <nl> + # Even though the coordinator was asked to stop , the underlying session is <nl> + # recreated and is to be continued . <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + self . assertEqual ( 2 , session_creator . number_of_sessions_created ) <nl> + <nl> + def test_recovery_from_non_preemption_in_coordinator ( self ) : <nl> + with self . test_session ( ) as test_session : <nl> + session_creator = CountingSessionCreator ( test_session ) <nl> + hook = StopCoordinatorWithException ( <nl> + calls_before_stopping = 2 , <nl> + exception_to_raise = errors_impl . UnknownError ( <nl> + None , None , ' Some fatal exception inside the coordinator . ' ) ) <nl> + session = monitored_session . MonitoredSession ( session_creator , [ hook ] ) <nl> + <nl> + self . assertEqual ( 1 , session_creator . number_of_sessions_created ) <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + <nl> + c = constant_op . constant ( 0 ) <nl> + v = array_ops . identity ( c ) <nl> + <nl> + # The coordinator will not abort during this call , since it ' s the call <nl> + # number 0 . <nl> + self . assertEqual ( 51 , session . run ( v , feed_dict = { c : 51 } ) ) <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + # The coordinator will abort during the next call , since it ' s the call <nl> + # number 1 . <nl> + self . assertEqual ( 42 , session . run ( v , feed_dict = { c : 42 } ) ) <nl> + # The coordinator was asked to stop due to non - redeemable error . Training <nl> + # should stop and the session should not be recreated . <nl> + self . assertTrue ( session . should_stop ( ) ) <nl> + self . assertEqual ( 1 , session_creator . number_of_sessions_created ) <nl> + with self . assertRaises ( errors_impl . UnknownError ) : <nl> + session . 
close ( ) <nl> + <nl> + def test_recovery_from_session_getting_stuck ( self ) : <nl> + with self . test_session ( ) as test_session : <nl> + session_creator = CountingSessionCreator ( test_session ) <nl> + session = monitored_session . MonitoredSession ( <nl> + session_creator , <nl> + [ FailTrainingAfterCoordinatorStopped ( calls_before_stopping = 2 ) ] ) <nl> + <nl> + self . assertEqual ( 1 , session_creator . number_of_sessions_created ) <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + <nl> + c = constant_op . constant ( 0 ) <nl> + v = array_ops . identity ( c ) <nl> + <nl> + # Training will not fail , since it ' s the call number 0 . <nl> + self . assertEqual ( 51 , session . run ( v , feed_dict = { c : 51 } ) ) <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + # Training will fail during the next call , since it ' s the call <nl> + # number 1 . <nl> + self . assertEqual ( 42 , session . run ( v , feed_dict = { c : 42 } ) ) <nl> + # Even though the coordinator stopped which and training failed , the <nl> + # underlying session is recreated and training is to be continued . <nl> + self . assertFalse ( session . should_stop ( ) ) <nl> + self . assertEqual ( 2 , session_creator . number_of_sessions_created ) <nl> + <nl> <nl> class FakeSession ( monitored_session . _WrappedSession ) : <nl> <nl>
Recover MonitoredSession when the Coordinator is requested to stop with one of the _PREEMPTION_ERRORS.
tensorflow/tensorflow
8f9b1af8ae3b3e0f0b2e252e004ed9179be66529
2017-08-10T19:39:31Z
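The recovery contract this commit implements, reduced to its skeleton: retry the training step in a loop, recreating the session only for preemption-class errors, and let everything else propagate (or, in `_check_stop`, map it to stopping). A minimal C++ analogue of that control flow; `PreemptionError` and both callbacks are hypothetical stand-ins for the `_PREEMPTION_ERRORS` tuple and the session plumbing:

```cpp
#include <functional>
#include <iostream>
#include <stdexcept>

// Plays the role of the _PREEMPTION_ERRORS tuple (AbortedError, ...).
struct PreemptionError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Sketch of _RecoverableSession::run(): retry the step, rebuilding the
// underlying session only for preemption-class failures; any other
// exception deliberately escapes to the caller.
int recoverable_run(std::function<int()> step, std::function<void()> recreate)
{
    while (true) {
        try {
            return step();
        } catch (const PreemptionError &e) {
            std::cout << "preempted (" << e.what() << "), recreating session\n";
            recreate();  // close + build a fresh session, then retry
        }
    }
}

int main()
{
    int attempts = 0;
    int result = recoverable_run(
        [&] {
            if (++attempts < 3) throw PreemptionError("worker lost");
            return 42;
        },
        [] { /* rebuild session state here */ });
    std::cout << "result after " << attempts << " attempts: " << result << '\n';
}
```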
mmm a / tensorflow / lite / tools / versioning / op_version . cc <nl> ppp b / tensorflow / lite / tools / versioning / op_version . cc <nl> int GetBuiltinOperatorVersion ( const OpSignature & op_sig ) { <nl> } <nl> return 1 ; <nl> <nl> - case BuiltinOperator_ADD : <nl> case BuiltinOperator_PAD : <nl> case BuiltinOperator_PADV2 : <nl> case BuiltinOperator_SPACE_TO_DEPTH : <nl>
Fix for broken build.
tensorflow/tensorflow
24f3f5593a06d24fa1ca6be257f1265b5293d492
2020-05-15T16:16:02Z
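The commit message and the shape of the diff suggest the removed `case BuiltinOperator_ADD:` duplicated an ADD case already present elsewhere in the same switch, and duplicate case labels are a hard compile error in C++, hence "broken build". A compressed, entirely hypothetical illustration of why an op with its own version logic must not also sit in the generic fall-through group:

```cpp
#include <iostream>

enum class Op { ADD, PAD, SPACE_TO_DEPTH };

// Sketch of the op-versioning switch. If ADD also appeared in the generic
// group below, the duplicate label would not compile; keeping it in its own
// case preserves its op-specific versioning (version numbers invented here).
int GetOpVersion(Op op, bool int8_inputs)
{
    switch (op) {
        case Op::ADD:
            return int8_inputs ? 2 : 1;  // op-specific rule
        case Op::PAD:
        case Op::SPACE_TO_DEPTH:
            return 1;                    // generic ops share the default
    }
    return 1;
}

int main()
{
    std::cout << GetOpVersion(Op::ADD, true) << '\n';  // 2, not 1
}
```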
mmm a / xbmc / addons / Addon . cpp <nl> ppp b / xbmc / addons / Addon . cpp <nl> CAddon : : CAddon ( cp_plugin_info_t * props ) <nl> : m_props ( props ) <nl> , m_parent ( AddonPtr ( ) ) <nl> { <nl> - if ( m_props . type ! = ADDON_VIZ & & m_props . type ! = ADDON_SCREENSAVER ) <nl> - BuildLibName ( props ) ; <nl> + BuildLibName ( props ) ; <nl> BuildProfilePath ( ) ; <nl> CUtil : : AddFileToFolder ( Profile ( ) , " settings . xml " , m_userSettingsPath ) ; <nl> m_enabled = true ; <nl> mmm a / xbmc / addons / Addon . h <nl> ppp b / xbmc / addons / Addon . h <nl> class CAddon : public IAddon <nl> CAddon ( const CAddon & ) ; / / protected as all copying is handled by Clone ( ) <nl> CAddon ( const CAddon & , const AddonPtr & ) ; <nl> bool LoadUserSettings ( ) ; <nl> + virtual void BuildLibName ( cp_plugin_info_t * props = NULL ) ; <nl> TiXmlDocument m_addonXmlDoc ; <nl> TiXmlDocument m_userXmlDoc ; <nl> CStdString m_userSettingsPath ; <nl> class CAddon : public IAddon <nl> bool m_hasStrings ; <nl> bool m_checkedStrings ; <nl> <nl> - void BuildLibName ( cp_plugin_info_t * props = NULL ) ; <nl> CStdString m_profile ; <nl> bool m_enabled ; <nl> CLocalizeStrings m_strings ; <nl> mmm a / xbmc / addons / AddonDll . h <nl> ppp b / xbmc / addons / AddonDll . h <nl> namespace ADDON <nl> protected : <nl> void HandleException ( std : : exception & e , const char * context ) ; <nl> bool Initialized ( ) { return m_initialized ; } <nl> + virtual void BuildLibName ( cp_plugin_info_t * props = NULL ) { } <nl> TheStruct * m_pStruct ; <nl> TheProps * m_pInfo ; <nl> <nl>
redo 29887 , less stupid this time
xbmc/xbmc
ff78847336a55c4819d750832755e26e435ae5d5
2010-05-07T08:46:34Z
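Making BuildLibName virtual and overriding it with a no-op in CAddonDll replaces the type check in the base constructor, but one C++ caveat is worth noting when reading this diff: a virtual call made from a base-class constructor dispatches to the base implementation, because the derived part of the object does not exist yet. Self-contained demonstration:

    #include <iostream>

    struct Base {
      Base() { Build(); }  // runs Base::Build, even when constructing Derived
      virtual void Build() { std::cout << "Base::Build\n"; }
    };

    struct Derived : Base {
      void Build() override { std::cout << "Derived::Build\n"; }  // not used here
    };

    int main() {
      Derived d;   // prints "Base::Build"
      d.Build();   // prints "Derived::Build" once construction is complete
      return 0;
    }

So the empty override in AddonDll.h only suppresses BuildLibName for calls made after construction, not for the call in CAddon's constructor.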
mmm a / src / php / ext / grpc / php_grpc . c <nl> ppp b / src / php / ext / grpc / php_grpc . c <nl> ZEND_GET_MODULE ( grpc ) <nl> <nl> / * { { { PHP_INI <nl> * / <nl> - / * Remove comments and fill if you need to have entries in php . ini <nl> PHP_INI_BEGIN ( ) <nl> - STD_PHP_INI_ENTRY ( " grpc . global_value " , " 42 " , PHP_INI_ALL , OnUpdateLong , <nl> - global_value , zend_grpc_globals , grpc_globals ) <nl> - STD_PHP_INI_ENTRY ( " grpc . global_string " , " foobar " , PHP_INI_ALL , <nl> - OnUpdateString , global_string , zend_grpc_globals , <nl> - grpc_globals ) <nl> + STD_PHP_INI_ENTRY ( " grpc . enable_fork_support " , " 0 " , PHP_INI_SYSTEM , OnUpdateBool , <nl> + enable_fork_support , zend_grpc_globals , grpc_globals ) <nl> + STD_PHP_INI_ENTRY ( " grpc . poll_strategy " , NULL , PHP_INI_SYSTEM , OnUpdateString , <nl> + poll_strategy , zend_grpc_globals , grpc_globals ) <nl> PHP_INI_END ( ) <nl> - * / <nl> / * } } } * / <nl> <nl> / * { { { php_grpc_init_globals <nl> * / <nl> - / * Uncomment this function if you have INI entries <nl> - static void php_grpc_init_globals ( zend_grpc_globals * grpc_globals ) <nl> - { <nl> - grpc_globals - > global_value = 0 ; <nl> - grpc_globals - > global_string = NULL ; <nl> - } <nl> - * / <nl> + static void php_grpc_init_globals ( zend_grpc_globals * grpc_globals ) { <nl> + grpc_globals - > enable_fork_support = 0 ; <nl> + grpc_globals - > poll_strategy = NULL ; <nl> + } <nl> / * } } } * / <nl> + <nl> void create_new_channel ( <nl> wrapped_grpc_channel * channel , <nl> char * target , <nl> void register_fork_handlers ( ) { <nl> } <nl> } <nl> <nl> + void apply_ini_settings ( ) { <nl> + if ( GRPC_G ( enable_fork_support ) ) { <nl> + setenv ( " GRPC_ENABLE_FORK_SUPPORT " , " 1 " , 1 / * overwrite ? * / ) ; <nl> + } <nl> + <nl> + if ( GRPC_G ( poll_strategy ) ) { <nl> + setenv ( " GRPC_POLL_STRATEGY " , GRPC_G ( poll_strategy ) , 1 / * overwrite ? * / ) ; <nl> + } <nl> + } <nl> + <nl> / * { { { PHP_MINIT_FUNCTION <nl> * / <nl> PHP_MINIT_FUNCTION ( grpc ) { <nl> - / * If you have INI entries , uncomment these lines <nl> - REGISTER_INI_ENTRIES ( ) ; <nl> - * / <nl> + ZEND_INIT_MODULE_GLOBALS ( grpc , php_grpc_init_globals , NULL ) ; <nl> + REGISTER_INI_ENTRIES ( ) ; <nl> + <nl> / * Register call error constants * / <nl> REGISTER_LONG_CONSTANT ( " Grpc \ \ CALL_OK " , GRPC_CALL_OK , <nl> CONST_CS | CONST_PERSISTENT ) ; <nl> PHP_MINIT_FUNCTION ( grpc ) { <nl> / * { { { PHP_MSHUTDOWN_FUNCTION <nl> * / <nl> PHP_MSHUTDOWN_FUNCTION ( grpc ) { <nl> - / * uncomment this line if you have INI entries <nl> - UNREGISTER_INI_ENTRIES ( ) ; <nl> - * / <nl> + UNREGISTER_INI_ENTRIES ( ) ; <nl> / / WARNING : This function IS being called by PHP when the extension <nl> / / is unloaded but the logs were somehow suppressed . <nl> if ( GRPC_G ( initialized ) ) { <nl> PHP_MINFO_FUNCTION ( grpc ) { <nl> php_info_print_table_row ( 2 , " grpc support " , " enabled " ) ; <nl> php_info_print_table_row ( 2 , " grpc module version " , PHP_GRPC_VERSION ) ; <nl> php_info_print_table_end ( ) ; <nl> - / * Remove comments if you have entries in php . ini <nl> - DISPLAY_INI_ENTRIES ( ) ; <nl> - * / <nl> + DISPLAY_INI_ENTRIES ( ) ; <nl> } <nl> / * } } } * / <nl> <nl> PHP_MINFO_FUNCTION ( grpc ) { <nl> * / <nl> PHP_RINIT_FUNCTION ( grpc ) { <nl> if ( ! GRPC_G ( initialized ) ) { <nl> + apply_ini_settings ( ) ; <nl> grpc_init ( ) ; <nl> register_fork_handlers ( ) ; <nl> grpc_php_init_completion_queue ( TSRMLS_C ) ; <nl> mmm a / src / php / ext / grpc / php_grpc . 
h <nl> ppp b / src / php / ext / grpc / php_grpc . h <nl> PHP_RINIT_FUNCTION ( grpc ) ; <nl> * / <nl> ZEND_BEGIN_MODULE_GLOBALS ( grpc ) <nl> zend_bool initialized ; <nl> + zend_bool enable_fork_support ; <nl> + char * poll_strategy ; <nl> ZEND_END_MODULE_GLOBALS ( grpc ) <nl> <nl> / * In every utility function you add that needs to use variables <nl> new file mode 100644 <nl> index 00000000000 . . 0fbcc1f119e <nl> mmm / dev / null <nl> ppp b / src / php / ext / grpc / tests / grpc - default - ini . phpt <nl> <nl> + - - TEST - - <nl> + Ensure default ini settings <nl> + - - SKIPIF - - <nl> + < ? php if ( ! extension_loaded ( " grpc " ) ) print " skip " ; ? > <nl> + - - FILE - - <nl> + < ? php <nl> + if ( ini_get ( ' grpc . enable_fork_support ' ) ) { <nl> + die ( ' grpc . enable_fork_support not off by default ' ) ; <nl> + } <nl> + if ( ini_get ( ' grpc . poll_strategy ' ) ! = = " " ) { <nl> + die ( ' grpc . poll_strategy not empty by default ' ) ; <nl> + } <nl> + echo ' ok ' ; <nl> + - - EXPECT - - <nl> + ok <nl> new file mode 100644 <nl> index 00000000000 . . 55c18ee526e <nl> mmm / dev / null <nl> ppp b / src / php / ext / grpc / tests / grpc - set - ini . phpt <nl> <nl> + - - TEST - - <nl> + Ensure ini settings are handled <nl> + - - SKIPIF - - <nl> + < ? php if ( ! extension_loaded ( " grpc " ) ) print " skip " ; ? > <nl> + - - INI - - <nl> + grpc . enable_fork_support = 1 <nl> + grpc . poll_strategy = epoll1 <nl> + - - FILE - - <nl> + < ? php <nl> + if ( ! ini_get ( ' grpc . enable_fork_support ' ) ) { <nl> + die ( ' grpc . enable_fork_support not set ' ) ; <nl> + } <nl> + if ( ! getenv ( ' GRPC_ENABLE_FORK_SUPPORT ' ) ) { <nl> + die ( ' env GRPC_ENABLE_FORK_SUPPORT not set ' ) ; <nl> + } <nl> + if ( ini_get ( ' grpc . poll_strategy ' ) ! = = ' epoll1 ' ) { <nl> + die ( ' grpc . poll_strategy ! = = epoll1 ' ) ; <nl> + } <nl> + if ( getenv ( ' GRPC_POLL_STRATEGY ' ) ! = = ' epoll1 ' ) { <nl> + die ( ' env GRPC_POLL_STRATEGY not epoll1 ' ) ; <nl> + } <nl> + echo ' ok ' ; <nl> + - - EXPECT - - <nl> + ok <nl>
Merge pull request from kellegous / ini
grpc/grpc
e4c2c4fd1a6414334bd94f58b993b1e58ee99a71
2019-03-28T20:59:04Z
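The new apply_ini_settings() simply forwards the two INI values into process environment variables that the gRPC core reads at initialization. A standalone sketch of the same mapping, with a plain struct standing in for the zend globals:

    #include <cstdlib>  // setenv (POSIX)

    struct GrpcGlobals {
      bool enable_fork_support = false;
      const char* poll_strategy = nullptr;  // e.g. "epoll1"
    };

    void ApplyIniSettings(const GrpcGlobals& g) {
      if (g.enable_fork_support) {
        setenv("GRPC_ENABLE_FORK_SUPPORT", "1", 1 /* overwrite */);
      }
      if (g.poll_strategy != nullptr) {
        setenv("GRPC_POLL_STRATEGY", g.poll_strategy, 1 /* overwrite */);
      }
    }

Calling this from RINIT before grpc_init(), as the diff does, ensures the environment is set before the core library inspects it; the two .phpt tests then check both the INI values and the resulting environment.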
deleted file mode 100644 <nl> index 2d62cb0d060 . . 00000000000 <nl> mmm a / src / citra_qt / callstack . cpp <nl> ppp / dev / null <nl> <nl> - # include < QStandardItemModel > <nl> - # include " callstack . hxx " <nl> - <nl> - / / # include " debugger / debugger . h " <nl> - <nl> - GCallstackView : : GCallstackView ( QWidget * parent ) : QDockWidget ( parent ) <nl> - { <nl> - ui . setupUi ( this ) ; <nl> - <nl> - callstack_model = new QStandardItemModel ( this ) ; <nl> - callstack_model - > setColumnCount ( 3 ) ; <nl> - callstack_model - > setHeaderData ( 0 , Qt : : Horizontal , " Depth " ) ; <nl> - callstack_model - > setHeaderData ( 1 , Qt : : Horizontal , " Address " ) ; <nl> - callstack_model - > setHeaderData ( 2 , Qt : : Horizontal , " Function Name " ) ; <nl> - ui . treeView - > setModel ( callstack_model ) ; <nl> - <nl> - / / TODO : Make single clicking a callstack entry jump to the corresponding disassembly position <nl> - } <nl> - <nl> - void GCallstackView : : OnCPUStepped ( ) <nl> - { <nl> - / * <nl> - Debugger : : Callstack callstack ; <nl> - Debugger : : GetCallstack ( callstack ) ; <nl> - callstack_model - > setRowCount ( callstack . size ( ) ) ; <nl> - <nl> - for ( int i = 0 ; i < callstack . size ( ) ; + + i ) <nl> - for ( Debugger : : CallstackIterator it = callstack . begin ( ) ; it ! = callstack . end ( ) ; + + it ) <nl> - { <nl> - Debugger : : CallstackEntry entry = callstack [ i ] ; <nl> - callstack_model - > setItem ( i , 0 , new QStandardItem ( QString ( " % 1 " ) . arg ( i + 1 ) ) ) ; <nl> - callstack_model - > setItem ( i , 1 , new QStandardItem ( QString ( " 0x % 1 " ) . arg ( entry . addr , 8 , 16 , QLatin1Char ( ' 0 ' ) ) ) ) ; <nl> - callstack_model - > setItem ( i , 2 , new QStandardItem ( QString : : fromStdString ( entry . name ) ) ) ; <nl> - } <nl> - * / <nl> - } <nl> \ No newline at end of file <nl> mmm a / src / citra_qt / citra_qt . vcxproj <nl> ppp b / src / citra_qt / citra_qt . vcxproj <nl> <nl> < ClCompile Include = " . . \ . . \ externals \ qhexedit \ qhexedit . cpp " / > <nl> < ClCompile Include = " . . \ . . \ externals \ qhexedit \ qhexedit_p . cpp " / > <nl> < ClCompile Include = " . . \ . . \ externals \ qhexedit \ xbytearray . cpp " / > <nl> - < ClCompile Include = " bootmanager . cpp " / > <nl> - < ClCompile Include = " callstack . cpp " / > <nl> < ClCompile Include = " config \ controller_config . cpp " / > <nl> < ClCompile Include = " config \ controller_config_util . cpp " / > <nl> - < ClCompile Include = " cpu_regs . cpp " / > <nl> - < ClCompile Include = " disasm . cpp " / > <nl> + < ClCompile Include = " debugger \ callstack . cpp " / > <nl> + < ClCompile Include = " debugger \ registers . cpp " / > <nl> + < ClCompile Include = " debugger \ disassembler . cpp " / > <nl> + < ClCompile Include = " debugger \ ramview . cpp " / > <nl> + < ClCompile Include = " bootmanager . cpp " / > <nl> < ClCompile Include = " hotkeys . cpp " / > <nl> < ClCompile Include = " main . cpp " / > <nl> - < ClCompile Include = " ramview . cpp " / > <nl> < / ItemGroup > <nl> < ItemGroup > <nl> < MOC Include = " . . \ . . \ externals \ qhexedit \ commands . h " / > <nl> < MOC Include = " . . \ . . \ externals \ qhexedit \ qhexedit . h " / > <nl> < MOC Include = " . . \ . . \ externals \ qhexedit \ qhexedit_p . h " / > <nl> < MOC Include = " . . \ . . \ externals \ qhexedit \ xbytearray . h " / > <nl> - < MOC Include = " config \ controller_config . hxx " / > <nl> - < MOC Include = " config \ controller_config_util . hxx " / > <nl> - < MOC Include = " callstack . 
hxx " / > <nl> - < MOC Include = " cpu_regs . hxx " / > <nl> - < MOC Include = " disasm . hxx " / > <nl> - < MOC Include = " ramview . hxx " / > <nl> + < MOC Include = " debugger \ callstack . hxx " / > <nl> + < MOC Include = " debugger \ registers . hxx " / > <nl> + < MOC Include = " debugger \ disassembler . hxx " / > <nl> + < MOC Include = " debugger \ ramview . hxx " / > <nl> < MOC Include = " bootmanager . hxx " / > <nl> < MOC Include = " hotkeys . hxx " / > <nl> < MOC Include = " main . hxx " / > <nl> <nl> < / ProjectReference > <nl> < / ItemGroup > <nl> < ItemGroup > <nl> - < ClInclude Include = " callstack . hxx " / > <nl> < ClInclude Include = " config \ controller_config . hxx " / > <nl> < ClInclude Include = " config \ controller_config_util . hxx " / > <nl> - < ClInclude Include = " cpu_regs . hxx " / > <nl> - < ClInclude Include = " disasm . hxx " / > <nl> - < ClInclude Include = " ramview . hxx " / > <nl> - < ClInclude Include = " ui_callstack . h " / > <nl> < ClInclude Include = " ui_controller_config . h " / > <nl> - < ClInclude Include = " ui_cpu_regs . h " / > <nl> - < ClInclude Include = " ui_disasm . h " / > <nl> - < ClInclude Include = " ui_hotkeys . h " / > <nl> - < ClInclude Include = " ui_main . h " / > <nl> < ClInclude Include = " version . h " / > <nl> < / ItemGroup > <nl> < ItemGroup > <nl> - < UIC Include = " callstack . ui " / > <nl> < UIC Include = " config \ controller_config . ui " / > <nl> - < UIC Include = " cpu_regs . ui " / > <nl> - < UIC Include = " disasm . ui " / > <nl> + < UIC Include = " debugger \ callstack . ui " / > <nl> + < UIC Include = " debugger \ registers . ui " / > <nl> + < UIC Include = " debugger \ disassembler . ui " / > <nl> < UIC Include = " hotkeys . ui " / > <nl> < UIC Include = " main . ui " / > <nl> < / ItemGroup > <nl> mmm a / src / citra_qt / citra_qt . vcxproj . filters <nl> ppp b / src / citra_qt / citra_qt . vcxproj . filters <nl> <nl>  < ? xml version = " 1 . 0 " encoding = " utf - 8 " ? > <nl> < Project ToolsVersion = " 4 . 0 " xmlns = " http : / / schemas . microsoft . com / developer / msbuild / 2003 " > <nl> < ItemGroup > <nl> - < Filter Include = " debugger " > <nl> - < UniqueIdentifier > { 1b8f77c1 - 61e8 - 4a9f - 95f8 - 8d1c53015ad8 } < / UniqueIdentifier > <nl> - < / Filter > <nl> < Filter Include = " qhexedit " > <nl> < UniqueIdentifier > { dede739c - 939b - 4147 - 9e72 - 4a326b97d237 } < / UniqueIdentifier > <nl> < / Filter > <nl> < Filter Include = " config " > <nl> < UniqueIdentifier > { 80178741 - d3ab - 4031 - 892c - ec58490ea8bf } < / UniqueIdentifier > <nl> < / Filter > <nl> + < Filter Include = " debugger " > <nl> + < UniqueIdentifier > { 9495d0e7 - 87d6 - 4fe1 - 92f1 - cfa1bbec7025 } < / UniqueIdentifier > <nl> + < / Filter > <nl> < / ItemGroup > <nl> < ItemGroup > <nl> < ClCompile Include = " . . \ . . \ externals \ qhexedit \ commands . cpp " > <nl> <nl> < ClCompile Include = " config \ controller_config_util . cpp " > <nl> < Filter > config < / Filter > <nl> < / ClCompile > <nl> - < ClCompile Include = " cpu_regs . cpp " > <nl> + < ClCompile Include = " debugger \ callstack . cpp " > <nl> < Filter > debugger < / Filter > <nl> < / ClCompile > <nl> - < ClCompile Include = " disasm . cpp " > <nl> + < ClCompile Include = " debugger \ ramview . cpp " > <nl> < Filter > debugger < / Filter > <nl> < / ClCompile > <nl> - < ClCompile Include = " ramview . cpp " > <nl> + < ClCompile Include = " debugger \ disassembler . 
cpp " > <nl> < Filter > debugger < / Filter > <nl> < / ClCompile > <nl> - < ClCompile Include = " callstack . cpp " > <nl> + < ClCompile Include = " debugger \ registers . cpp " > <nl> < Filter > debugger < / Filter > <nl> < / ClCompile > <nl> < / ItemGroup > <nl> <nl> < MOC Include = " . . \ . . \ externals \ qhexedit \ qhexedit . h " > <nl> < Filter > qhexedit < / Filter > <nl> < / MOC > <nl> + < MOC Include = " bootmanager . hxx " / > <nl> + < MOC Include = " hotkeys . hxx " / > <nl> + < MOC Include = " main . hxx " / > <nl> + < MOC Include = " debugger \ callstack . hxx " > <nl> + < Filter > debugger < / Filter > <nl> + < / MOC > <nl> + < MOC Include = " debugger \ ramview . hxx " > <nl> + < Filter > debugger < / Filter > <nl> + < / MOC > <nl> + < MOC Include = " debugger \ disassembler . hxx " > <nl> + < Filter > debugger < / Filter > <nl> + < / MOC > <nl> + < MOC Include = " debugger \ registers . hxx " > <nl> + < Filter > debugger < / Filter > <nl> + < / MOC > <nl> < / ItemGroup > <nl> < ItemGroup > <nl> - < ClInclude Include = " hotkeys . hxx " / > <nl> - < ClInclude Include = " ui_hotkeys . h " / > <nl> - < ClInclude Include = " main . hxx " / > <nl> - < ClInclude Include = " ui_main . h " / > <nl> < ClInclude Include = " version . h " / > <nl> < ClInclude Include = " config \ controller_config . hxx " > <nl> < Filter > config < / Filter > <nl> <nl> < ClInclude Include = " config \ controller_config_util . hxx " > <nl> < Filter > config < / Filter > <nl> < / ClInclude > <nl> - < ClInclude Include = " cpu_regs . hxx " > <nl> - < Filter > debugger < / Filter > <nl> - < / ClInclude > <nl> - < ClInclude Include = " disasm . hxx " > <nl> - < Filter > debugger < / Filter > <nl> - < / ClInclude > <nl> - < ClInclude Include = " ramview . hxx " > <nl> - < Filter > debugger < / Filter > <nl> - < / ClInclude > <nl> - < ClInclude Include = " ui_callstack . h " > <nl> - < Filter > debugger < / Filter > <nl> - < / ClInclude > <nl> - < ClInclude Include = " ui_cpu_regs . h " > <nl> - < Filter > debugger < / Filter > <nl> - < / ClInclude > <nl> - < ClInclude Include = " ui_disasm . h " > <nl> - < Filter > debugger < / Filter > <nl> - < / ClInclude > <nl> - < ClInclude Include = " callstack . hxx " > <nl> - < Filter > debugger < / Filter > <nl> - < / ClInclude > <nl> < ClInclude Include = " ui_controller_config . h " > <nl> < Filter > config < / Filter > <nl> < / ClInclude > <nl> <nl> < UIC Include = " config \ controller_config . ui " > <nl> < Filter > config < / Filter > <nl> < / UIC > <nl> - < UIC Include = " callstack . ui " > <nl> + < UIC Include = " debugger \ callstack . ui " > <nl> < Filter > debugger < / Filter > <nl> < / UIC > <nl> - < UIC Include = " cpu_regs . ui " > <nl> + < UIC Include = " debugger \ disassembler . ui " > <nl> < Filter > debugger < / Filter > <nl> < / UIC > <nl> - < UIC Include = " disasm . ui " > <nl> + < UIC Include = " debugger \ registers . ui " > <nl> < Filter > debugger < / Filter > <nl> < / UIC > <nl> < / ItemGroup > <nl> new file mode 100644 <nl> index 00000000000 . . f59f2d8c86d <nl> mmm / dev / null <nl> ppp b / src / citra_qt / debugger / callstack . cpp <nl> <nl> + # include < QStandardItemModel > <nl> + <nl> + # include " callstack . hxx " <nl> + <nl> + # include " core / core . h " <nl> + # include " core / arm / arm_interface . h " <nl> + # include " core / mem_map . h " <nl> + # include " common / symbols . h " <nl> + # include " core / arm / disassembler / arm_disasm . 
h " <nl> + <nl> + CallstackWidget : : CallstackWidget ( QWidget * parent ) : QDockWidget ( parent ) <nl> + { <nl> + ui . setupUi ( this ) ; <nl> + <nl> + callstack_model = new QStandardItemModel ( this ) ; <nl> + callstack_model - > setColumnCount ( 4 ) ; <nl> + callstack_model - > setHeaderData ( 0 , Qt : : Horizontal , " Stack pointer " ) ; <nl> + callstack_model - > setHeaderData ( 2 , Qt : : Horizontal , " Return address " ) ; <nl> + callstack_model - > setHeaderData ( 1 , Qt : : Horizontal , " Call address " ) ; <nl> + callstack_model - > setHeaderData ( 3 , Qt : : Horizontal , " Function " ) ; <nl> + ui . treeView - > setModel ( callstack_model ) ; <nl> + } <nl> + <nl> + void CallstackWidget : : OnCPUStepped ( ) <nl> + { <nl> + ARM_Disasm * disasm = new ARM_Disasm ( ) ; <nl> + ARM_Interface * app_core = Core : : g_app_core ; <nl> + <nl> + u32 sp = app_core - > GetReg ( 13 ) ; / / stack pointer <nl> + u32 addr , ret_addr , call_addr , func_addr ; <nl> + <nl> + int counter = 0 ; <nl> + for ( int addr = 0x10000000 ; addr > = sp ; addr - = 4 ) <nl> + { <nl> + ret_addr = Memory : : Read32 ( addr ) ; <nl> + call_addr = ret_addr - 4 ; / / get call address ? ? ? <nl> + <nl> + / * TODO ( mattvail ) clean me , move to debugger interface * / <nl> + u32 insn = Memory : : Read32 ( call_addr ) ; <nl> + if ( disasm - > decode ( insn ) = = OP_BL ) <nl> + { <nl> + std : : string name ; <nl> + / / ripped from disasm <nl> + uint8_t cond = ( insn > > 28 ) & 0xf ; <nl> + uint32_t i_offset = insn & 0xffffff ; <nl> + / / Sign - extend the 24 - bit offset <nl> + if ( ( i_offset > > 23 ) & 1 ) <nl> + i_offset | = 0xff000000 ; <nl> + <nl> + / / Pre - compute the left - shift and the prefetch offset <nl> + i_offset < < = 2 ; <nl> + i_offset + = 8 ; <nl> + func_addr = call_addr + i_offset ; <nl> + <nl> + callstack_model - > setItem ( counter , 0 , new QStandardItem ( QString ( " 0x % 1 " ) . arg ( addr , 8 , 16 , QLatin1Char ( ' 0 ' ) ) ) ) ; <nl> + callstack_model - > setItem ( counter , 1 , new QStandardItem ( QString ( " 0x % 1 " ) . arg ( ret_addr , 8 , 16 , QLatin1Char ( ' 0 ' ) ) ) ) ; <nl> + callstack_model - > setItem ( counter , 2 , new QStandardItem ( QString ( " 0x % 1 " ) . arg ( call_addr , 8 , 16 , QLatin1Char ( ' 0 ' ) ) ) ) ; <nl> + <nl> + name = Symbols : : HasSymbol ( func_addr ) ? Symbols : : GetSymbol ( func_addr ) . name : " unknown " ; <nl> + callstack_model - > setItem ( counter , 3 , new QStandardItem ( QString ( " % 1_ % 2 " ) . arg ( QString : : fromStdString ( name ) ) <nl> + . arg ( QString ( " 0x % 1 " ) . arg ( func_addr , 8 , 16 , QLatin1Char ( ' 0 ' ) ) ) ) ) ; <nl> + <nl> + counter + + ; <nl> + } <nl> + } <nl> + } <nl> \ No newline at end of file <nl> similarity index 58 % <nl> rename from src / citra_qt / callstack . hxx <nl> rename to src / citra_qt / debugger / callstack . hxx <nl> mmm a / src / citra_qt / callstack . hxx <nl> ppp b / src / citra_qt / debugger / callstack . hxx <nl> <nl> # include < QDockWidget > <nl> - # include " ui_callstack . h " <nl> - # include " common / platform . h " <nl> + # include " . . / ui_callstack . h " <nl> <nl> class QStandardItemModel ; <nl> <nl> - class GCallstackView : public QDockWidget <nl> + class CallstackWidget : public QDockWidget <nl> { <nl> Q_OBJECT <nl> <nl> public : <nl> - GCallstackView ( QWidget * parent = 0 ) ; <nl> + CallstackWidget ( QWidget * parent = 0 ) ; <nl> <nl> public slots : <nl> void OnCPUStepped ( ) ; <nl> similarity index 100 % <nl> rename from src / citra_qt / callstack . 
ui <nl> rename to src / citra_qt / debugger / callstack . ui <nl> similarity index 75 % <nl> rename from src / citra_qt / disasm . cpp <nl> rename to src / citra_qt / debugger / disassembler . cpp <nl> mmm a / src / citra_qt / disasm . cpp <nl> ppp b / src / citra_qt / debugger / disassembler . cpp <nl> <nl> # include < QtGui > <nl> - # include " ui_disasm . h " <nl> - # include " disasm . hxx " <nl> <nl> - # include " bootmanager . hxx " <nl> - # include " hotkeys . hxx " <nl> + # include " disassembler . hxx " <nl> + <nl> + # include " . . / bootmanager . hxx " <nl> + # include " . . / hotkeys . hxx " <nl> <nl> # include " common / common . h " <nl> # include " core / mem_map . h " <nl> <nl> # include " core / arm / interpreter / armdefs . h " <nl> # include " core / arm / disassembler / arm_disasm . h " <nl> <nl> - GDisAsmView : : GDisAsmView ( QWidget * parent , EmuThread & emu_thread ) : QDockWidget ( parent ) , base_addr ( 0 ) , emu_thread ( emu_thread ) <nl> + DisassemblerWidget : : DisassemblerWidget ( QWidget * parent , EmuThread & emu_thread ) : QDockWidget ( parent ) , base_addr ( 0 ) , emu_thread ( emu_thread ) <nl> { <nl> disasm_ui . setupUi ( this ) ; <nl> <nl> GDisAsmView : : GDisAsmView ( QWidget * parent , EmuThread & emu_thread ) : QDockWidget ( p <nl> model = new QStandardItemModel ( this ) ; <nl> model - > setColumnCount ( 3 ) ; <nl> disasm_ui . treeView - > setModel ( model ) ; <nl> - <nl> + disasm_ui . tableView - > setModel ( model ) ; <nl> RegisterHotkey ( " Disassembler " , " Start / Stop " , QKeySequence ( Qt : : Key_F5 ) , Qt : : ApplicationShortcut ) ; <nl> RegisterHotkey ( " Disassembler " , " Step " , QKeySequence ( Qt : : Key_F10 ) , Qt : : ApplicationShortcut ) ; <nl> RegisterHotkey ( " Disassembler " , " Step into " , QKeySequence ( Qt : : Key_F11 ) , Qt : : ApplicationShortcut ) ; <nl> GDisAsmView : : GDisAsmView ( QWidget * parent , EmuThread & emu_thread ) : QDockWidget ( p <nl> connect ( GetHotkey ( " Disassembler " , " Set Breakpoint " , this ) , SIGNAL ( activated ( ) ) , this , SLOT ( OnSetBreakpoint ( ) ) ) ; <nl> } <nl> <nl> - void GDisAsmView : : Init ( ) <nl> + void DisassemblerWidget : : Init ( ) <nl> { <nl> ARM_Disasm * disasm = new ARM_Disasm ( ) ; <nl> <nl> void GDisAsmView : : Init ( ) <nl> } <nl> disasm_ui . treeView - > resizeColumnToContents ( 0 ) ; <nl> disasm_ui . treeView - > resizeColumnToContents ( 1 ) ; <nl> - <nl> + disasm_ui . treeView - > resizeColumnToContents ( 2 ) ; <nl> + disasm_ui . tableView - > resizeColumnToContents ( 0 ) ; <nl> + disasm_ui . tableView - > resizeColumnToContents ( 1 ) ; <nl> + disasm_ui . tableView - > resizeColumnToContents ( 2 ) ; <nl> + <nl> QModelIndex model_index = model - > index ( 0 , 0 ) ; <nl> disasm_ui . treeView - > scrollTo ( model_index ) ; <nl> disasm_ui . treeView - > selectionModel ( ) - > setCurrentIndex ( model_index , QItemSelectionModel : : SelectCurrent | QItemSelectionModel : : Rows ) ; <nl> + <nl> + disasm_ui . tableView - > scrollTo ( model_index ) ; <nl> + disasm_ui . tableView - > selectionModel ( ) - > setCurrentIndex ( model_index , QItemSelectionModel : : SelectCurrent | QItemSelectionModel : : Rows ) ; <nl> } <nl> <nl> - void GDisAsmView : : OnSetBreakpoint ( ) <nl> + void DisassemblerWidget : : OnSetBreakpoint ( ) <nl> { <nl> int selected_row = SelectedRow ( ) ; <nl> <nl> void GDisAsmView : : OnSetBreakpoint ( ) <nl> } <nl> } <nl> <nl> - void GDisAsmView : : OnContinue ( ) <nl> + void DisassemblerWidget : : OnContinue ( ) <nl> { <nl> emu_thread . 
SetCpuRunning ( true ) ; <nl> } <nl> <nl> - void GDisAsmView : : OnStep ( ) <nl> + void DisassemblerWidget : : OnStep ( ) <nl> { <nl> OnStepInto ( ) ; / / change later <nl> } <nl> <nl> - void GDisAsmView : : OnStepInto ( ) <nl> + void DisassemblerWidget : : OnStepInto ( ) <nl> { <nl> emu_thread . SetCpuRunning ( false ) ; <nl> emu_thread . ExecStep ( ) ; <nl> } <nl> <nl> - void GDisAsmView : : OnPause ( ) <nl> + void DisassemblerWidget : : OnPause ( ) <nl> { <nl> emu_thread . SetCpuRunning ( false ) ; <nl> } <nl> <nl> - void GDisAsmView : : OnToggleStartStop ( ) <nl> + void DisassemblerWidget : : OnToggleStartStop ( ) <nl> { <nl> emu_thread . SetCpuRunning ( ! emu_thread . IsCpuRunning ( ) ) ; <nl> } <nl> <nl> - void GDisAsmView : : OnCPUStepped ( ) <nl> + void DisassemblerWidget : : OnCPUStepped ( ) <nl> { <nl> ARMword next_instr = Core : : g_app_core - > GetPC ( ) ; <nl> <nl> void GDisAsmView : : OnCPUStepped ( ) <nl> QModelIndex model_index = model - > index ( index , 0 ) ; <nl> disasm_ui . treeView - > scrollTo ( model_index ) ; <nl> disasm_ui . treeView - > selectionModel ( ) - > setCurrentIndex ( model_index , QItemSelectionModel : : SelectCurrent | QItemSelectionModel : : Rows ) ; <nl> + <nl> + disasm_ui . tableView - > scrollTo ( model_index ) ; <nl> + disasm_ui . tableView - > selectionModel ( ) - > setCurrentIndex ( model_index , QItemSelectionModel : : SelectCurrent | QItemSelectionModel : : Rows ) ; <nl> + disasm_ui . tableView - > selectionModel ( ) - > select ( model_index , QItemSelectionModel : : SelectCurrent | QItemSelectionModel : : Rows ) ; <nl> } <nl> <nl> - int GDisAsmView : : SelectedRow ( ) <nl> + int DisassemblerWidget : : SelectedRow ( ) <nl> { <nl> QModelIndex index = disasm_ui . treeView - > selectionModel ( ) - > currentIndex ( ) ; <nl> if ( ! index . isValid ( ) ) <nl> return - 1 ; <nl> <nl> return model - > itemFromIndex ( disasm_ui . treeView - > selectionModel ( ) - > currentIndex ( ) ) - > row ( ) ; <nl> - } <nl> \ No newline at end of file <nl> + } <nl> + / * <nl> + void DisassemblerWidget : : paintEvent ( ) <nl> + { <nl> + QPainter painter ( this ) ; <nl> + painter . drawRect ( 10 , 10 , 50 , 50 ) ; <nl> + } <nl> + * / <nl> \ No newline at end of file <nl> similarity index 80 % <nl> rename from src / citra_qt / disasm . hxx <nl> rename to src / citra_qt / debugger / disassembler . hxx <nl> mmm a / src / citra_qt / disasm . hxx <nl> ppp b / src / citra_qt / debugger / disassembler . hxx <nl> <nl> # include < QDockWidget > <nl> - # include " ui_disasm . h " <nl> + # include " . . / ui_disassembler . h " <nl> <nl> # include " common / common . h " <nl> # include " common / break_points . h " <nl> class QAction ; <nl> class QStandardItemModel ; <nl> class EmuThread ; <nl> <nl> - class GDisAsmView : public QDockWidget <nl> + class DisassemblerWidget : public QDockWidget <nl> { <nl> Q_OBJECT <nl> <nl> public : <nl> - GDisAsmView ( QWidget * parent , EmuThread & emu_thread ) ; <nl> + DisassemblerWidget ( QWidget * parent , EmuThread & emu_thread ) ; <nl> <nl> void Init ( ) ; <nl> <nl> new file mode 100644 <nl> index 00000000000 . . e65b0aa9b67 <nl> mmm / dev / null <nl> ppp b / src / citra_qt / debugger / disassembler . ui <nl> <nl> + < ? xml version = " 1 . 0 " encoding = " UTF - 8 " ? > <nl> + < ui version = " 4 . 
0 " > <nl> + < class > DockWidget < / class > <nl> + < widget class = " QDockWidget " name = " DockWidget " > <nl> + < property name = " geometry " > <nl> + < rect > <nl> + < x > 0 < / x > <nl> + < y > 0 < / y > <nl> + < width > 430 < / width > <nl> + < height > 401 < / height > <nl> + < / rect > <nl> + < / property > <nl> + < property name = " windowTitle " > <nl> + < string > Disassembly < / string > <nl> + < / property > <nl> + < widget class = " QWidget " name = " dockWidgetContents " > <nl> + < layout class = " QVBoxLayout " name = " verticalLayout " > <nl> + < item > <nl> + < layout class = " QHBoxLayout " name = " horizontalLayout " > <nl> + < item > <nl> + < widget class = " QPushButton " name = " button_step " > <nl> + < property name = " text " > <nl> + < string > Step < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QPushButton " name = " button_pause " > <nl> + < property name = " text " > <nl> + < string > Pause < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QPushButton " name = " button_continue " > <nl> + < property name = " text " > <nl> + < string > Continue < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QPushButton " name = " pushButton " > <nl> + < property name = " text " > <nl> + < string > Step Into < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QPushButton " name = " button_breakpoint " > <nl> + < property name = " text " > <nl> + < string > Set Breakpoint < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < / layout > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QTreeView " name = " treeView " > <nl> + < property name = " alternatingRowColors " > <nl> + < bool > true < / bool > <nl> + < / property > <nl> + < property name = " indentation " > <nl> + < number > 20 < / number > <nl> + < / property > <nl> + < property name = " rootIsDecorated " > <nl> + < bool > false < / bool > <nl> + < / property > <nl> + < attribute name = " headerVisible " > <nl> + < bool > false < / bool > <nl> + < / attribute > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QTableView " name = " tableView " > <nl> + < property name = " alternatingRowColors " > <nl> + < bool > true < / bool > <nl> + < / property > <nl> + < attribute name = " headerVisible " > <nl> + < bool > false < / bool > <nl> + < / attribute > <nl> + < / widget > <nl> + < / item > <nl> + < / layout > <nl> + < / widget > <nl> + < / widget > <nl> + < resources / > <nl> + < connections / > <nl> + < / ui > <nl> similarity index 100 % <nl> rename from src / citra_qt / ramview . cpp <nl> rename to src / citra_qt / debugger / ramview . cpp <nl> similarity index 100 % <nl> rename from src / citra_qt / ramview . hxx <nl> rename to src / citra_qt / debugger / ramview . hxx <nl> similarity index 95 % <nl> rename from src / citra_qt / cpu_regs . cpp <nl> rename to src / citra_qt / debugger / registers . cpp <nl> mmm a / src / citra_qt / cpu_regs . cpp <nl> ppp b / src / citra_qt / debugger / registers . cpp <nl> <nl> - # include " cpu_regs . hxx " <nl> + # include " registers . hxx " <nl> <nl> # include " core / core . h " <nl> - # include " core / arm / interpreter / armdefs . h " <nl> + # include " core / arm / arm_interface . 
h " <nl> <nl> - GARM11RegsView : : GARM11RegsView ( QWidget * parent ) : QDockWidget ( parent ) <nl> + RegistersWidget : : RegistersWidget ( QWidget * parent ) : QDockWidget ( parent ) <nl> { <nl> cpu_regs_ui . setupUi ( this ) ; <nl> <nl> GARM11RegsView : : GARM11RegsView ( QWidget * parent ) : QDockWidget ( parent ) <nl> CSPR - > addChild ( new QTreeWidgetItem ( QStringList ( " N " ) ) ) ; <nl> } <nl> <nl> - void GARM11RegsView : : OnCPUStepped ( ) <nl> + void RegistersWidget : : OnCPUStepped ( ) <nl> { <nl> ARM_Interface * app_core = Core : : g_app_core ; <nl> <nl> similarity index 64 % <nl> rename from src / citra_qt / cpu_regs . hxx <nl> rename to src / citra_qt / debugger / registers . hxx <nl> mmm a / src / citra_qt / cpu_regs . hxx <nl> ppp b / src / citra_qt / debugger / registers . hxx <nl> <nl> - # include " ui_cpu_regs . h " <nl> + # include " . . / ui_registers . h " <nl> <nl> # include < QDockWidget > <nl> # include < QTreeWidgetItem > <nl> <nl> - / / # include " ui_gekko_regs . h " <nl> - <nl> class QTreeWidget ; <nl> <nl> - class GARM11RegsView : public QDockWidget <nl> + class RegistersWidget : public QDockWidget <nl> { <nl> Q_OBJECT <nl> <nl> public : <nl> - GARM11RegsView ( QWidget * parent = NULL ) ; <nl> + RegistersWidget ( QWidget * parent = NULL ) ; <nl> <nl> public slots : <nl> void OnCPUStepped ( ) ; <nl> similarity index 100 % <nl> rename from src / citra_qt / cpu_regs . ui <nl> rename to src / citra_qt / debugger / registers . ui <nl> deleted file mode 100644 <nl> index fb384516421 . . 00000000000 <nl> mmm a / src / citra_qt / disasm . ui <nl> ppp / dev / null <nl> <nl> - < ? xml version = " 1 . 0 " encoding = " UTF - 8 " ? > <nl> - < ui version = " 4 . 0 " > <nl> - < class > DockWidget < / class > <nl> - < widget class = " QDockWidget " name = " DockWidget " > <nl> - < property name = " geometry " > <nl> - < rect > <nl> - < x > 0 < / x > <nl> - < y > 0 < / y > <nl> - < width > 430 < / width > <nl> - < height > 401 < / height > <nl> - < / rect > <nl> - < / property > <nl> - < property name = " windowTitle " > <nl> - < string > Disassembly < / string > <nl> - < / property > <nl> - < widget class = " QWidget " name = " dockWidgetContents " > <nl> - < layout class = " QVBoxLayout " name = " verticalLayout " > <nl> - < item > <nl> - < layout class = " QHBoxLayout " name = " horizontalLayout " > <nl> - < item > <nl> - < widget class = " QPushButton " name = " button_step " > <nl> - < property name = " text " > <nl> - < string > Step < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QPushButton " name = " button_pause " > <nl> - < property name = " text " > <nl> - < string > Pause < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QPushButton " name = " button_continue " > <nl> - < property name = " text " > <nl> - < string > Continue < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QPushButton " name = " pushButton " > <nl> - < property name = " text " > <nl> - < string > Step Into < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QPushButton " name = " button_breakpoint " > <nl> - < property name = " text " > <nl> - < string > Set Breakpoint < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < / layout > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QTreeView " name = " 
treeView " > <nl> - < property name = " alternatingRowColors " > <nl> - < bool > true < / bool > <nl> - < / property > <nl> - < property name = " indentation " > <nl> - < number > 20 < / number > <nl> - < / property > <nl> - < property name = " rootIsDecorated " > <nl> - < bool > false < / bool > <nl> - < / property > <nl> - < attribute name = " headerVisible " > <nl> - < bool > false < / bool > <nl> - < / attribute > <nl> - < / widget > <nl> - < / item > <nl> - < / layout > <nl> - < / widget > <nl> - < / widget > <nl> - < resources / > <nl> - < connections / > <nl> - < / ui > <nl> mmm a / src / citra_qt / main . cpp <nl> ppp b / src / citra_qt / main . cpp <nl> <nl> # include " hotkeys . hxx " <nl> <nl> / / debugger <nl> - # include " disasm . hxx " <nl> - # include " cpu_regs . hxx " <nl> - # include " callstack . hxx " <nl> - # include " ramview . hxx " <nl> + # include " debugger / disassembler . hxx " <nl> + # include " debugger / registers . hxx " <nl> + # include " debugger / callstack . hxx " <nl> + # include " debugger / ramview . hxx " <nl> <nl> # include " core / system . h " <nl> # include " core / loader . h " <nl> GMainWindow : : GMainWindow ( ) <nl> ui . horizontalLayout - > addWidget ( render_window ) ; <nl> / / render_window - > hide ( ) ; <nl> <nl> - disasm = new GDisAsmView ( this , render_window - > GetEmuThread ( ) ) ; <nl> - addDockWidget ( Qt : : BottomDockWidgetArea , disasm ) ; <nl> - disasm - > hide ( ) ; <nl> + disasmWidget = new DisassemblerWidget ( this , render_window - > GetEmuThread ( ) ) ; <nl> + addDockWidget ( Qt : : BottomDockWidgetArea , disasmWidget ) ; <nl> + disasmWidget - > hide ( ) ; <nl> <nl> - arm_regs = new GARM11RegsView ( this ) ; <nl> - addDockWidget ( Qt : : RightDockWidgetArea , arm_regs ) ; <nl> - arm_regs - > hide ( ) ; <nl> + registersWidget = new RegistersWidget ( this ) ; <nl> + addDockWidget ( Qt : : RightDockWidgetArea , registersWidget ) ; <nl> + registersWidget - > hide ( ) ; <nl> + <nl> + callstackWidget = new CallstackWidget ( this ) ; <nl> + addDockWidget ( Qt : : RightDockWidgetArea , callstackWidget ) ; <nl> + callstackWidget - > hide ( ) ; <nl> <nl> QMenu * debug_menu = ui . menu_View - > addMenu ( tr ( " Debugging " ) ) ; <nl> - debug_menu - > addAction ( disasm - > toggleViewAction ( ) ) ; <nl> - debug_menu - > addAction ( arm_regs - > toggleViewAction ( ) ) ; <nl> + debug_menu - > addAction ( disasmWidget - > toggleViewAction ( ) ) ; <nl> + debug_menu - > addAction ( registersWidget - > toggleViewAction ( ) ) ; <nl> + debug_menu - > addAction ( callstackWidget - > toggleViewAction ( ) ) ; <nl> <nl> / / Set default UI state <nl> / / geometry : 55 % of the window contents are in the upper screen half , 45 % in the lower half <nl> GMainWindow : : GMainWindow ( ) <nl> connect ( ui . 
action_Hotkeys , SIGNAL ( triggered ( ) ) , this , SLOT ( OnOpenHotkeysDialog ( ) ) ) ; <nl> <nl> / / BlockingQueuedConnection is important here , it makes sure we ' ve finished refreshing our views before the CPU continues <nl> - connect ( & render_window - > GetEmuThread ( ) , SIGNAL ( CPUStepped ( ) ) , disasm , SLOT ( OnCPUStepped ( ) ) , Qt : : BlockingQueuedConnection ) ; <nl> - connect ( & render_window - > GetEmuThread ( ) , SIGNAL ( CPUStepped ( ) ) , arm_regs , SLOT ( OnCPUStepped ( ) ) , Qt : : BlockingQueuedConnection ) ; <nl> + connect ( & render_window - > GetEmuThread ( ) , SIGNAL ( CPUStepped ( ) ) , disasmWidget , SLOT ( OnCPUStepped ( ) ) , Qt : : BlockingQueuedConnection ) ; <nl> + connect ( & render_window - > GetEmuThread ( ) , SIGNAL ( CPUStepped ( ) ) , registersWidget , SLOT ( OnCPUStepped ( ) ) , Qt : : BlockingQueuedConnection ) ; <nl> + connect ( & render_window - > GetEmuThread ( ) , SIGNAL ( CPUStepped ( ) ) , callstackWidget , SLOT ( OnCPUStepped ( ) ) , Qt : : BlockingQueuedConnection ) ; <nl> <nl> / / Setup hotkeys <nl> RegisterHotkey ( " Main Window " , " Load Image " , QKeySequence : : Open ) ; <nl> void GMainWindow : : BootGame ( const char * filename ) <nl> ERROR_LOG ( BOOT , " Failed to load ROM : % s " , error_str . c_str ( ) ) ; <nl> } <nl> <nl> - disasm - > Init ( ) ; <nl> - arm_regs - > OnCPUStepped ( ) ; <nl> + disasmWidget - > Init ( ) ; <nl> + registersWidget - > OnCPUStepped ( ) ; <nl> + callstackWidget - > OnCPUStepped ( ) ; <nl> <nl> render_window - > GetEmuThread ( ) . start ( ) ; <nl> } <nl> mmm a / src / citra_qt / main . hxx <nl> ppp b / src / citra_qt / main . hxx <nl> <nl> <nl> class GImageInfo ; <nl> class GRenderWindow ; <nl> - class GDisAsmView ; <nl> - class GARM11RegsView ; <nl> + class DisassemblerWidget ; <nl> + class RegistersWidget ; <nl> + class CallstackWidget ; <nl> <nl> class GMainWindow : public QMainWindow <nl> { <nl> private : <nl> Ui : : MainWindow ui ; <nl> <nl> GRenderWindow * render_window ; <nl> - GDisAsmView * disasm ; <nl> - GARM11RegsView * arm_regs ; <nl> + <nl> + DisassemblerWidget * disasmWidget ; <nl> + RegistersWidget * registersWidget ; <nl> + CallstackWidget * callstackWidget ; <nl> } ; <nl> <nl> # endif / / _CITRA_QT_MAIN_HXX_ <nl> similarity index 90 % <nl> rename from src / citra_qt / ui_disasm . h <nl> rename to src / citra_qt / ui_disassembler . h <nl> mmm a / src / citra_qt / ui_disasm . h <nl> ppp b / src / citra_qt / ui_disassembler . h <nl> <nl> / * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * <nl> - * * Form generated from reading UI file ' disasm . ui ' <nl> + * * Form generated from reading UI file ' disassembler . ui ' <nl> * * <nl> * * Created by : Qt User Interface Compiler version 4 . 8 . 5 <nl> * * <nl> * * WARNING ! All changes made in this file will be lost when recompiling UI file ! 
<nl> * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * / <nl> <nl> - # ifndef UI_DISASM_H <nl> - # define UI_DISASM_H <nl> + # ifndef UI_DISASSEMBLER_H <nl> + # define UI_DISASSEMBLER_H <nl> <nl> # include < QtCore / QVariant > <nl> # include < QtGui / QAction > <nl> <nl> # include < QtGui / QHBoxLayout > <nl> # include < QtGui / QHeaderView > <nl> # include < QtGui / QPushButton > <nl> + # include < QtGui / QTableView > <nl> # include < QtGui / QTreeView > <nl> # include < QtGui / QVBoxLayout > <nl> # include < QtGui / QWidget > <nl> class Ui_DockWidget <nl> QPushButton * pushButton ; <nl> QPushButton * button_breakpoint ; <nl> QTreeView * treeView ; <nl> + QTableView * tableView ; <nl> <nl> void setupUi ( QDockWidget * DockWidget ) <nl> { <nl> class Ui_DockWidget <nl> <nl> verticalLayout - > addWidget ( treeView ) ; <nl> <nl> + tableView = new QTableView ( dockWidgetContents ) ; <nl> + tableView - > setObjectName ( QString : : fromUtf8 ( " tableView " ) ) ; <nl> + tableView - > setAlternatingRowColors ( true ) ; <nl> + <nl> + verticalLayout - > addWidget ( tableView ) ; <nl> + <nl> DockWidget - > setWidget ( dockWidgetContents ) ; <nl> <nl> retranslateUi ( DockWidget ) ; <nl> namespace Ui { <nl> <nl> QT_END_NAMESPACE <nl> <nl> - # endif / / UI_DISASM_H <nl> + # endif / / UI_DISASSEMBLER_H <nl> similarity index 94 % <nl> rename from src / citra_qt / ui_cpu_regs . h <nl> rename to src / citra_qt / ui_registers . h <nl> mmm a / src / citra_qt / ui_cpu_regs . h <nl> ppp b / src / citra_qt / ui_registers . h <nl> <nl> / * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * <nl> - * * Form generated from reading UI file ' cpu_regs . ui ' <nl> + * * Form generated from reading UI file ' registers . ui ' <nl> * * <nl> * * Created by : Qt User Interface Compiler version 4 . 8 . 5 <nl> * * <nl> * * WARNING ! All changes made in this file will be lost when recompiling UI file ! <nl> * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * / <nl> <nl> - # ifndef UI_CPU_REGS_H <nl> - # define UI_CPU_REGS_H <nl> + # ifndef UI_REGISTERS_H <nl> + # define UI_REGISTERS_H <nl> <nl> # include < QtCore / QVariant > <nl> # include < QtGui / QAction > <nl> namespace Ui { <nl> <nl> QT_END_NAMESPACE <nl> <nl> - # endif / / UI_CPU_REGS_H <nl> + # endif / / UI_REGISTERS_H <nl>
UI / debugger changes
yuzu-emu/yuzu
e5f09b8be65c06927164428b5d400024e2388dbc
2014-04-18T22:34:23Z
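The new CallstackWidget recovers function addresses by scanning stack words for return addresses and decoding the preceding BL instruction. The offset arithmetic from the diff, factored into a standalone helper for clarity:

    #include <cstdint>

    // Returns the branch target of an ARM BL instruction located at call_addr.
    uint32_t BranchTargetFromBL(uint32_t insn, uint32_t call_addr) {
      uint32_t offset = insn & 0xffffff;  // 24-bit signed word offset
      if ((offset >> 23) & 1)
        offset |= 0xff000000;             // sign-extend to 32 bits
      offset <<= 2;                       // words -> bytes
      offset += 8;                        // ARM prefetch: PC reads as insn + 8
      return call_addr + offset;
    }

The "get call address ???" comment in the diff reflects the heuristic nature of this walk: it assumes each candidate return address is preceded by a 4-byte BL, which holds for ARM mode but not for Thumb.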
mmm a / src / relooper / Relooper . cpp <nl> ppp b / src / relooper / Relooper . cpp <nl> <nl> + / / We are implementing the Relooper C API , so always export from this file . <nl> + # ifndef RELOOPERDLL_EXPORTS <nl> + # define RELOOPERDLL_EXPORTS <nl> + # endif <nl> <nl> # include " Relooper . h " <nl> <nl> VoidIntMap __blockDebugMap__ ; / / maps block pointers in currently running code t <nl> <nl> extern " C " { <nl> <nl> - void rl_set_output_buffer ( char * buffer , int size ) { <nl> + RELOOPERDLL_API void rl_set_output_buffer ( char * buffer , int size ) { <nl> # if DEBUG <nl> printf ( " # include \ " Relooper . h \ " \ n " ) ; <nl> printf ( " int main ( ) { \ n " ) ; <nl> void rl_set_output_buffer ( char * buffer , int size ) { <nl> Relooper : : SetOutputBuffer ( buffer , size ) ; <nl> } <nl> <nl> - void rl_make_output_buffer ( int size ) { <nl> + RELOOPERDLL_API void rl_make_output_buffer ( int size ) { <nl> Relooper : : SetOutputBuffer ( ( char * ) malloc ( size ) , size ) ; <nl> } <nl> <nl> - void rl_set_asm_js_mode ( int on ) { <nl> + RELOOPERDLL_API void rl_set_asm_js_mode ( int on ) { <nl> Relooper : : SetAsmJSMode ( on ) ; <nl> } <nl> <nl> - void * rl_new_block ( const char * text , const char * branch_var ) { <nl> + RELOOPERDLL_API void * rl_new_block ( const char * text , const char * branch_var ) { <nl> Block * ret = new Block ( text , branch_var ) ; <nl> # if DEBUG <nl> printf ( " void * b % d = rl_new_block ( \ " / / code % d \ " ) ; \ n " , ret - > Id , ret - > Id ) ; <nl> void * rl_new_block ( const char * text , const char * branch_var ) { <nl> return ret ; <nl> } <nl> <nl> - void rl_delete_block ( void * block ) { <nl> + RELOOPERDLL_API void rl_delete_block ( void * block ) { <nl> # if DEBUG <nl> printf ( " rl_delete_block ( block_map [ % d ] ) ; \ n " , ( ( Block * ) block ) - > Id ) ; <nl> # endif <nl> delete ( Block * ) block ; <nl> } <nl> <nl> - void rl_block_add_branch_to ( void * from , void * to , const char * condition , const char * code ) { <nl> + RELOOPERDLL_API void rl_block_add_branch_to ( void * from , void * to , const char * condition , const char * code ) { <nl> # if DEBUG <nl> printf ( " rl_block_add_branch_to ( block_map [ % d ] , block_map [ % d ] , % s % s % s , % s % s % s ) ; \ n " , ( ( Block * ) from ) - > Id , ( ( Block * ) to ) - > Id , condition ? " \ " " : " " , condition ? condition : " NULL " , condition ? " \ " " : " " , code ? " \ " " : " " , code ? code : " NULL " , code ? 
" \ " " : " " ) ; <nl> # endif <nl> ( ( Block * ) from ) - > AddBranchTo ( ( Block * ) to , condition , code ) ; <nl> } <nl> <nl> - void * rl_new_relooper ( ) { <nl> + RELOOPERDLL_API void * rl_new_relooper ( ) { <nl> # if DEBUG <nl> printf ( " void * block_map [ 10000 ] ; \ n " ) ; <nl> printf ( " void * rl = rl_new_relooper ( ) ; \ n " ) ; <nl> void * rl_new_relooper ( ) { <nl> return new Relooper ; <nl> } <nl> <nl> - void rl_delete_relooper ( void * relooper ) { <nl> + RELOOPERDLL_API void rl_delete_relooper ( void * relooper ) { <nl> delete ( Relooper * ) relooper ; <nl> } <nl> <nl> - void rl_relooper_add_block ( void * relooper , void * block ) { <nl> + RELOOPERDLL_API void rl_relooper_add_block ( void * relooper , void * block ) { <nl> # if DEBUG <nl> printf ( " rl_relooper_add_block ( rl , block_map [ % d ] ) ; \ n " , ( ( Block * ) block ) - > Id ) ; <nl> # endif <nl> ( ( Relooper * ) relooper ) - > AddBlock ( ( Block * ) block ) ; <nl> } <nl> <nl> - void rl_relooper_calculate ( void * relooper , void * entry ) { <nl> + RELOOPERDLL_API void rl_relooper_calculate ( void * relooper , void * entry ) { <nl> # if DEBUG <nl> printf ( " rl_relooper_calculate ( rl , block_map [ % d ] ) ; \ n " , ( ( Block * ) entry ) - > Id ) ; <nl> printf ( " rl_relooper_render ( rl ) ; \ n " ) ; <nl> void rl_relooper_calculate ( void * relooper , void * entry ) { <nl> ( ( Relooper * ) relooper ) - > Calculate ( ( Block * ) entry ) ; <nl> } <nl> <nl> - void rl_relooper_render ( void * relooper ) { <nl> + RELOOPERDLL_API void rl_relooper_render ( void * relooper ) { <nl> ( ( Relooper * ) relooper ) - > Render ( ) ; <nl> } <nl> <nl> mmm a / tools / shared . py <nl> ppp b / tools / shared . py <nl> def find_temp_directory ( ) : <nl> # we re - check sanity when the settings are changed ) <nl> # We also re - check sanity and clear the cache when the version changes <nl> <nl> - EMSCRIPTEN_VERSION = ' 1 . 9 . 4 ' <nl> + EMSCRIPTEN_VERSION = ' 1 . 9 . 5 ' <nl> <nl> def generate_sanity ( ) : <nl> return EMSCRIPTEN_VERSION + ' | ' + get_llvm_target ( ) + ' | ' + LLVM_ROOT + ' | ' + get_clang_version ( ) <nl>
Merge pull request from juj / relooper_linkage
emscripten-core/emscripten
ed697ccbcd43f1dc72024c2157a1f8f6f18d9e96
2014-01-25T04:16:50Z
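Annotating the extern "C" functions with RELOOPERDLL_API, and force-defining RELOOPERDLL_EXPORTS inside the translation unit itself, is the standard export/import macro idiom for building the relooper as a DLL. The typical shape of such a macro (the real definition lives in Relooper.h, which this diff does not show):

    #if defined(_WIN32)
      #if defined(RELOOPERDLL_EXPORTS)
        #define RELOOPERDLL_API __declspec(dllexport)  // building the DLL
      #else
        #define RELOOPERDLL_API __declspec(dllimport)  // consuming the DLL
      #endif
    #else
      #define RELOOPERDLL_API  // no-op where DLL linkage does not apply
    #endif

    extern "C" {
    RELOOPERDLL_API void rl_make_output_buffer(int size);  // exported C entry point
    }

Defining the macro unconditionally at the top of Relooper.cpp guarantees the definitions are compiled as exports regardless of how the build system sets flags.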
mmm a / lib / Sema / CSApply . cpp <nl> ppp b / lib / Sema / CSApply . cpp <nl> namespace { <nl> / / If we ' re performing an assignment to a weak or unowned variable from <nl> / / a constructor call , emit a warning that the instance will be <nl> / / immediately deallocated . <nl> - diagnoseUnownedImmediateDeallocation ( cs . getTypeChecker ( ) , expr ) ; <nl> + diagnoseUnownedImmediateDeallocation ( cs . getASTContext ( ) , expr ) ; <nl> } <nl> return expr ; <nl> } <nl> mmm a / lib / Sema / MiscDiagnostics . cpp <nl> ppp b / lib / Sema / MiscDiagnostics . cpp <nl> static const Expr * lookThroughExprsToImmediateDeallocation ( const Expr * E ) { <nl> } <nl> } <nl> <nl> - static void diagnoseUnownedImmediateDeallocationImpl ( TypeChecker & TC , <nl> + static void diagnoseUnownedImmediateDeallocationImpl ( ASTContext & ctx , <nl> const VarDecl * varDecl , <nl> const Expr * initExpr , <nl> SourceLoc diagLoc , <nl> static void diagnoseUnownedImmediateDeallocationImpl ( TypeChecker & TC , <nl> if ( varDecl - > getDeclContext ( ) - > isTypeContext ( ) ) <nl> storageKind = SK_Property ; <nl> <nl> - TC . diagnose ( diagLoc , diag : : unowned_assignment_immediate_deallocation , <nl> - varDecl - > getName ( ) , ownershipAttr - > get ( ) , unsigned ( storageKind ) ) <nl> + ctx . Diags . diagnose ( diagLoc , diag : : unowned_assignment_immediate_deallocation , <nl> + varDecl - > getName ( ) , ownershipAttr - > get ( ) , <nl> + unsigned ( storageKind ) ) <nl> . highlight ( diagRange ) ; <nl> <nl> - TC . diagnose ( diagLoc , diag : : unowned_assignment_requires_strong ) <nl> + ctx . Diags . diagnose ( diagLoc , diag : : unowned_assignment_requires_strong ) <nl> . highlight ( diagRange ) ; <nl> <nl> - TC . diagnose ( varDecl , diag : : decl_declared_here , varDecl - > getFullName ( ) ) ; <nl> + ctx . Diags . diagnose ( varDecl , diag : : decl_declared_here , varDecl - > getFullName ( ) ) ; <nl> } <nl> <nl> - void swift : : diagnoseUnownedImmediateDeallocation ( TypeChecker & TC , <nl> + void swift : : diagnoseUnownedImmediateDeallocation ( ASTContext & ctx , <nl> const AssignExpr * assignExpr ) { <nl> auto * destExpr = assignExpr - > getDest ( ) - > getValueProvidingExpr ( ) ; <nl> auto * initExpr = assignExpr - > getSrc ( ) ; <nl> void swift : : diagnoseUnownedImmediateDeallocation ( TypeChecker & TC , <nl> } <nl> <nl> if ( VD ) <nl> - diagnoseUnownedImmediateDeallocationImpl ( TC , VD , initExpr , <nl> + diagnoseUnownedImmediateDeallocationImpl ( ctx , VD , initExpr , <nl> assignExpr - > getLoc ( ) , <nl> initExpr - > getSourceRange ( ) ) ; <nl> } <nl> <nl> - void swift : : diagnoseUnownedImmediateDeallocation ( TypeChecker & TC , <nl> + void swift : : diagnoseUnownedImmediateDeallocation ( ASTContext & ctx , <nl> const Pattern * pattern , <nl> SourceLoc equalLoc , <nl> const Expr * initExpr ) { <nl> void swift : : diagnoseUnownedImmediateDeallocation ( TypeChecker & TC , <nl> const Pattern * subPattern = elt . 
getPattern ( ) ; <nl> Expr * subInitExpr = TE - > getElement ( i ) ; <nl> <nl> - diagnoseUnownedImmediateDeallocation ( TC , subPattern , equalLoc , <nl> + diagnoseUnownedImmediateDeallocation ( ctx , subPattern , equalLoc , <nl> subInitExpr ) ; <nl> } <nl> } <nl> } else if ( auto * NP = dyn_cast < NamedPattern > ( pattern ) ) { <nl> - diagnoseUnownedImmediateDeallocationImpl ( TC , NP - > getDecl ( ) , initExpr , <nl> + diagnoseUnownedImmediateDeallocationImpl ( ctx , NP - > getDecl ( ) , initExpr , <nl> equalLoc , <nl> initExpr - > getSourceRange ( ) ) ; <nl> } <nl> mmm a / lib / Sema / MiscDiagnostics . h <nl> ppp b / lib / Sema / MiscDiagnostics . h <nl> bool diagnoseArgumentLabelError ( ASTContext & ctx , <nl> / / / with a non - owning attribute , such as ' weak ' or ' unowned ' and the initializer <nl> / / / expression refers to a class constructor , emit a warning that the assigned <nl> / / / instance will be immediately deallocated . <nl> - void diagnoseUnownedImmediateDeallocation ( TypeChecker & TC , <nl> + void diagnoseUnownedImmediateDeallocation ( ASTContext & ctx , <nl> const AssignExpr * assignExpr ) ; <nl> <nl> / / / If \ p pattern binds to a declaration with a non - owning attribute , such as <nl> / / / ' weak ' or ' unowned ' and \ p initializer refers to a class constructor , <nl> / / / emit a warning that the bound instance will be immediately deallocated . <nl> - void diagnoseUnownedImmediateDeallocation ( TypeChecker & TC , <nl> + void diagnoseUnownedImmediateDeallocation ( ASTContext & ctx , <nl> const Pattern * pattern , <nl> SourceLoc equalLoc , <nl> const Expr * initializer ) ; <nl> mmm a / lib / Sema / TypeCheckDecl . cpp <nl> ppp b / lib / Sema / TypeCheckDecl . cpp <nl> class DeclChecker : public DeclVisitor < DeclChecker > { <nl> / / If we ' re performing an binding to a weak or unowned variable from a <nl> / / constructor call , emit a warning that the instance will be immediately <nl> / / deallocated . <nl> - diagnoseUnownedImmediateDeallocation ( TC , PBD - > getPattern ( i ) , <nl> - PBD - > getEqualLoc ( i ) , <nl> - init ) ; <nl> + diagnoseUnownedImmediateDeallocation ( Ctx , PBD - > getPattern ( i ) , <nl> + PBD - > getEqualLoc ( i ) , <nl> + init ) ; <nl> <nl> / / If we entered an initializer context , contextualize any <nl> / / auto - closures we might have created . <nl>
diagnoseUnownedImmediateDeallocation doesn ' t need a TypeChecker
apple/swift
fb55c032f82292da6066ad9248bb3fcbf8c099c0
2019-11-07T20:41:37Z
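The mechanical change is that every TC.diagnose(...) becomes ctx.Diags.diagnose(...), so the helpers can take the ASTContext (which owns the DiagnosticEngine) instead of the whole TypeChecker. Schematically, with abbreviated stand-in types rather than the real Swift compiler headers:

    struct DiagnosticEngine { /* diagnose(...) lives here */ };
    struct ASTContext { DiagnosticEngine Diags; };

    // Before: helpers took TypeChecker& only to call TC.diagnose(...).
    // After:  they take ASTContext&, whose Diags member is the same engine.
    void diagnoseExample(ASTContext& ctx) {
      // ctx.Diags.diagnose(loc, diag::some_id, args...);
    }

Narrowing the parameter this way lets callers that never construct a TypeChecker still emit the diagnostic.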
mmm a / cocos / ui / UIImageView . h <nl> ppp b / cocos / ui / UIImageView . h <nl> class ImageView : public Widget <nl> / * * <nl> * create a imageview <nl> * <nl> - * @ param fileName file name of texture . <nl> + * @ param imageFileName file name of texture . <nl> * <nl> * @ param texType @ see UI_TEX_TYPE_LOCAL <nl> * / <nl> class ImageView : public Widget <nl> / * * <nl> * Sets if imageview is using scale9 renderer . <nl> * <nl> - * @ param true that using scale9 renderer , false otherwise . <nl> + * @ param able true that using scale9 renderer , false otherwise . <nl> * / <nl> void setScale9Enabled ( bool able ) ; <nl> <nl>
Merge pull request from favorcode / v3
cocos2d/cocos2d-x
dd428b527b8b463b15340d2d330e3169df1ed973
2014-05-20T09:34:56Z
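Both hunks fix Doxygen @param tags whose names did not match the declared parameters; the tag only attaches to a parameter when the names agree. The corrected form, as it appears after this commit:

    /**
     * Sets if imageview is using scale9 renderer.
     *
     * @param able  true to use the scale9 renderer, false otherwise.
     */
    void setScale9Enabled(bool able);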
mmm a / torch / autograd / profiler . py <nl> ppp b / torch / autograd / profiler . py <nl> def get_key ( event , group_by_input_shapes ) : <nl> for evt in self : <nl> stats [ get_key ( evt , group_by_input_shapes ) ] . add ( <nl> evt , group_by_input_shapes ) <nl> + <nl> return EventList ( stats . values ( ) , use_cuda = self . _use_cuda , profile_memory = self . _profile_memory ) <nl> <nl> def total_average ( self ) : <nl> class FormattedTimesMixin ( object ) : <nl> cpu_time_total_str = attr_formatter ( ' cpu_time_total ' ) <nl> cuda_time_total_str = attr_formatter ( ' cuda_time_total ' ) <nl> self_cpu_time_total_str = attr_formatter ( ' self_cpu_time_total ' ) <nl> + self_cuda_time_total_str = attr_formatter ( ' self_cuda_time_total ' ) <nl> <nl> @ property <nl> def cpu_time ( self ) : <nl> def self_cpu_time_total ( self ) : <nl> def cuda_time_total ( self ) : <nl> return sum ( kinfo . interval . elapsed_us ( ) for kinfo in self . kernels ) <nl> <nl> + @ property <nl> + def self_cuda_time_total ( self ) : <nl> + return sum ( kinfo . interval . elapsed_us ( ) for kinfo in self . kernels ) - \ <nl> + sum ( [ child . cuda_time_total for child in self . cpu_children ] ) <nl> + <nl> @ property <nl> def cpu_time_total ( self ) : <nl> return self . cpu_interval . elapsed_us ( ) <nl> def __init__ ( self ) : <nl> self . cpu_time_total = 0 <nl> self . cuda_time_total = 0 <nl> self . self_cpu_time_total = 0 <nl> + self . self_cuda_time_total = 0 <nl> self . input_shapes = None <nl> self . cpu_memory_usage = 0 <nl> self . cuda_memory_usage = 0 <nl> def add ( self , other , group_by_input_shapes = False ) : <nl> self . cpu_time_total + = other . cpu_time_total <nl> self . cuda_time_total + = other . cuda_time_total <nl> self . self_cpu_time_total + = other . self_cpu_time_total <nl> + self . self_cuda_time_total + = other . self_cuda_time_total <nl> self . cpu_memory_usage + = other . cpu_memory_usage <nl> self . cuda_memory_usage + = other . cuda_memory_usage <nl> self . self_cpu_memory_usage + = other . self_cpu_memory_usage <nl> def __iadd__ ( self , other ) : <nl> def __repr__ ( self ) : <nl> return ( <nl> ' < FunctionEventAvg key = { } self_cpu_time = { } cpu_time = { } ' <nl> - ' cuda_time = { } input_shapes = { } > ' <nl> + ' self_cuda_time = { } cuda_time = { } input_shapes = { } > ' <nl> ' cpu_memory_usage = { } cuda_memory_usage = { } ' . format ( <nl> self . key , <nl> self . self_cpu_time_total_str , <nl> self . cpu_time_str , <nl> + self . self_cuda_time_total_str , <nl> self . cuda_time_str , <nl> str ( self . input_shapes ) , <nl> self . cpu_memory_usage , <nl> def build_table ( <nl> has_input_shapes = any ( <nl> [ event . input_shapes is not None for event in events ] ) <nl> name_column_width = max ( [ len ( evt . key ) for evt in events ] ) + 4 <nl> - DEFAULT_COLUMN_WIDTH = 15 <nl> + DEFAULT_COLUMN_WIDTH = 12 <nl> SHAPES_COLUMN_WIDTH = 45 <nl> <nl> headers = [ <nl> ' Name ' , <nl> - ' Self CPU total % ' , <nl> - ' Self CPU total ' , <nl> + ' Self CPU % ' , <nl> + ' Self CPU ' , <nl> ' CPU total % ' , <nl> ' CPU total ' , <nl> ' CPU time avg ' , <nl> ] <nl> if use_cuda : <nl> headers . extend ( [ <nl> - ' CUDA total % ' , <nl> + ' Self CUDA ' , <nl> + ' Self CUDA % ' , <nl> ' CUDA total ' , <nl> ' CUDA time avg ' , <nl> ] ) <nl> def build_table ( <nl> ' Self CUDA Mem ' , <nl> ] ) <nl> headers . append ( <nl> - ' Number of Calls ' <nl> + ' # of Calls ' <nl> ) <nl> # Only append Node ID if any event has a valid ( > = 0 ) Node ID <nl> append_node_id = any ( [ evt . node_id ! 
= - 1 for evt in events ] ) <nl> def build_table ( <nl> line_length = [ - SPACING_SIZE ] <nl> <nl> def add_column ( padding ) : <nl> - row_format [ 0 ] + = ' { : < ' + str ( padding ) + ' } ' <nl> + row_format [ 0 ] + = ' { : > ' + str ( padding ) + ' } ' <nl> header_sep [ 0 ] + = ' - ' * padding + ' ' <nl> line_length [ 0 ] + = padding + SPACING_SIZE <nl> <nl> def append ( s ) : <nl> result . append ( ' \ n ' ) # Yes , newline after the end as well <nl> <nl> self_cpu_time_total = sum ( [ event . self_cpu_time_total for event in events ] ) <nl> - cuda_time_total = sum ( [ evt . cuda_time_total for evt in events ] ) <nl> + cuda_time_total = sum ( [ evt . self_cuda_time_total for evt in events ] ) <nl> # Actual printing <nl> if header is not None : <nl> append ( ' = ' * line_length ) <nl> def append ( s ) : <nl> ] <nl> if use_cuda : <nl> row_values . extend ( [ <nl> + evt . self_cuda_time_total_str , <nl> # CUDA time total % <nl> - format_time_share ( evt . cuda_time_total , cuda_time_total ) , <nl> + format_time_share ( evt . self_cuda_time_total , cuda_time_total ) , <nl> evt . cuda_time_total_str , <nl> evt . cuda_time_str , # Cuda time avg <nl> ] ) <nl> mmm a / torch / csrc / autograd / profiler . cpp <nl> ppp b / torch / csrc / autograd / profiler . cpp <nl> namespace { <nl> NUM_PROFILER_CFG_IVALUE_IDX / / must be last in list <nl> } ; <nl> <nl> + const std : : unordered_set < std : : string > disable_cuda_profiling = { <nl> + " aten : : view " , <nl> + " aten : : t " , <nl> + " aten : : transpose " , <nl> + " aten : : stride " , <nl> + " aten : : empty " , <nl> + " aten : : empty_like " , <nl> + " aten : : empty_strided " , <nl> + " aten : : as_strided " , <nl> + " aten : : expand " , <nl> + " aten : : resize_ " , <nl> + " aten : : squeeze " , <nl> + " aten : : unsqueeze " , <nl> + " aten : : slice " , <nl> + " aten : : _unsafe_view " , <nl> + " aten : : size " <nl> + } ; <nl> + <nl> CUDAStubs default_stubs ; <nl> constexpr CUDAStubs * default_stubs_addr = & default_stubs ; <nl> / / Constant initialization , so it is guaranteed to be initialized before <nl> static CUDAStubs * cuda_stubs = default_stubs_addr ; <nl> / / - TorchScript functions / methods <nl> / / - user defined named ranges ( see ` record_function ` python context manager ) <nl> / / <nl> - / / Profiler setups a pair of callbacks that record profiling events and save them <nl> - / / into the thread local profiler struct ( ThreadLocalDebugInfo , PROFILER_STATE slot ) <nl> + / / Profiler setups a pair of callbacks that record profiling events and save <nl> + / / them into the thread local profiler struct ( ThreadLocalDebugInfo , <nl> + / / PROFILER_STATE slot ) <nl> / / <nl> / / <nl> / / Thus , the overall logic is : <nl> static CUDAStubs * cuda_stubs = default_stubs_addr ; <nl> / / <nl> <nl> / / Profiler state <nl> - struct ProfilerThreadLocalState <nl> - : public c10 : : MemoryReportingInfoBase { <nl> - explicit ProfilerThreadLocalState ( <nl> - const ProfilerConfig & config ) <nl> - : config_ ( config ) , remoteProfiledEvents_ { c10 : : nullopt } { } <nl> + struct ProfilerThreadLocalState : public c10 : : MemoryReportingInfoBase { <nl> + explicit ProfilerThreadLocalState ( const ProfilerConfig & config ) <nl> + : config_ ( config ) , remoteProfiledEvents_ { c10 : : nullopt } { } <nl> ~ ProfilerThreadLocalState ( ) override = default ; <nl> <nl> inline const ProfilerConfig & config ( ) const { <nl> struct ProfilerThreadLocalState <nl> return result ; <nl> } <nl> <nl> - void mark ( <nl> - std : : string name , <nl> - bool 
include_cuda = true ) { <nl> + void mark ( std : : string name , bool include_cuda = true ) { <nl> if ( config_ . state = = ProfilerState : : Disabled ) { <nl> return ; <nl> } <nl> struct ProfilerThreadLocalState <nl> cuda_stubs - > nvtxMarkA ( name . c_str ( ) ) ; <nl> } else { <nl> Event evt ( <nl> - EventKind : : Mark , <nl> - at : : StringView ( std : : move ( name ) ) , <nl> - at : : RecordFunction : : currentThreadId ( ) , <nl> - include_cuda & & config_ . state = = ProfilerState : : CUDA <nl> - ) ; <nl> + EventKind : : Mark , <nl> + at : : StringView ( std : : move ( name ) ) , <nl> + at : : RecordFunction : : currentThreadId ( ) , <nl> + include_cuda & & config_ . state = = ProfilerState : : CUDA ) ; <nl> evt . setNodeId ( at : : RecordFunction : : getDefaultNodeId ( ) ) ; <nl> getEventList ( ) . record ( std : : move ( evt ) ) ; <nl> } <nl> } <nl> <nl> - void setOrAddRemoteProfiledEvents ( std : : vector < Event > & & remoteProfiledEvents ) { <nl> + void setOrAddRemoteProfiledEvents ( <nl> + std : : vector < Event > & & remoteProfiledEvents ) { <nl> / / Lock to serialize access from multiple callback threads . <nl> std : : lock_guard < std : : mutex > guard ( state_mutex_ ) ; <nl> if ( remoteProfiledEvents_ ) { <nl> struct ProfilerThreadLocalState <nl> <nl> void pushRange ( <nl> const at : : StringView & name , <nl> + const bool record_cuda , <nl> const char * msg = " " , <nl> int64_t sequence_nr = - 1 , <nl> std : : vector < std : : vector < int64_t > > & & shapes = { } , <nl> struct ProfilerThreadLocalState <nl> return ; <nl> } <nl> if ( config_ . state = = ProfilerState : : NVTX ) { <nl> - cuda_stubs - > nvtxRangePushA ( getNvtxStr ( <nl> - name , msg , sequence_nr , shapes ) . c_str ( ) ) ; <nl> + cuda_stubs - > nvtxRangePushA ( <nl> + getNvtxStr ( name , msg , sequence_nr , shapes ) . c_str ( ) ) ; <nl> } else { <nl> - Event evt ( EventKind : : PushRange , <nl> + Event evt ( <nl> + EventKind : : PushRange , <nl> name , <nl> at : : RecordFunction : : currentThreadId ( ) , <nl> - config_ . state = = ProfilerState : : CUDA , <nl> + record_cuda , <nl> handle , <nl> std : : move ( shapes ) , <nl> at : : RecordFunction : : getDefaultNodeId ( ) ) ; <nl> struct ProfilerThreadLocalState <nl> } <nl> } <nl> <nl> - void popRange ( uint64_t thread_id , at : : RecordFunctionHandle handle ) { <nl> + void popRange ( <nl> + uint64_t thread_id , <nl> + const bool record_cuda , <nl> + at : : RecordFunctionHandle handle ) { <nl> if ( config_ . state = = ProfilerState : : Disabled ) { <nl> return ; <nl> } <nl> struct ProfilerThreadLocalState <nl> / / called on a different thread than pushRange <nl> / / As a convention , we put the async pop on the original <nl> / / thread and save current thread id in pop event <nl> - Event evt ( EventKind : : PopRange , <nl> + Event evt ( <nl> + EventKind : : PopRange , <nl> at : : StringView ( " " ) , <nl> at : : RecordFunction : : currentThreadId ( ) , <nl> - config_ . state = = ProfilerState : : CUDA , <nl> + record_cuda , <nl> handle ) ; <nl> evt . setNodeId ( at : : RecordFunction : : getDefaultNodeId ( ) ) ; <nl> getEventList ( thread_id ) . record ( std : : move ( evt ) ) ; <nl> struct ProfilerThreadLocalState <nl> } <nl> <nl> void reportMemoryUsage ( <nl> - void * / * unused * / , int64_t alloc_size , c10 : : Device device ) override { <nl> + void * / * unused * / , <nl> + int64_t alloc_size , <nl> + c10 : : Device device ) override { <nl> if ( config_ . profile_memory & & config_ . state ! 
= ProfilerState : : Disabled ) { <nl> uint64_t thread_id = at : : RecordFunction : : currentThreadId ( ) ; <nl> Event evt ( <nl> struct ProfilerThreadLocalState <nl> return config_ . profile_memory ; <nl> } <nl> <nl> - private : <nl> + private : <nl> std : : string getNvtxStr ( <nl> const at : : StringView & name , <nl> const char * msg , <nl> struct ProfilerThreadLocalState <nl> std : : unordered_map < uint64_t , std : : shared_ptr < RangeEventList > > <nl> event_lists_map_ ; <nl> <nl> - ProfilerConfig config_ = ProfilerConfig ( ProfilerState : : Disabled , false , false ) ; <nl> + ProfilerConfig config_ = <nl> + ProfilerConfig ( ProfilerState : : Disabled , false , false ) ; <nl> at : : CallbackHandle handle_ = 0 ; <nl> c10 : : optional < std : : vector < std : : vector < Event > > > remoteProfiledEvents_ ; <nl> } ; <nl> void pushProfilingCallbacks ( ) { <nl> if ( ! state_ptr | | state_ptr - > config ( ) . state = = ProfilerState : : Disabled ) { <nl> return ; <nl> } <nl> + bool record_cuda = <nl> + state_ptr - > config ( ) . state = = ProfilerState : : CUDA ; <nl> + if ( record_cuda & & disable_cuda_profiling . find ( fn . name ( ) . str ( ) ) ! = disable_cuda_profiling . end ( ) ) { <nl> + record_cuda = false ; <nl> + } <nl> <nl> auto * msg = ( fn . seqNr ( ) > = 0 ) ? " , seq = " : " " ; <nl> if ( state_ptr - > config ( ) . report_input_shapes ) { <nl> void pushProfilingCallbacks ( ) { <nl> } <nl> } <nl> state_ptr - > pushRange ( <nl> - fn . name ( ) , msg , fn . seqNr ( ) , std : : move ( inputSizes ) , fn . handle ( ) ) ; <nl> + fn . name ( ) , record_cuda , msg , fn . seqNr ( ) , std : : move ( inputSizes ) , fn . handle ( ) ) ; <nl> } else { <nl> - state_ptr - > pushRange ( fn . name ( ) , msg , fn . seqNr ( ) , { } , fn . handle ( ) ) ; <nl> + state_ptr - > pushRange ( fn . name ( ) , record_cuda , msg , fn . seqNr ( ) , { } , fn . handle ( ) ) ; <nl> } <nl> } , <nl> [ ] ( const at : : RecordFunction & fn ) { <nl> void pushProfilingCallbacks ( ) { <nl> if ( ! state_ptr | | state_ptr - > config ( ) . state = = ProfilerState : : Disabled ) { <nl> return ; <nl> } <nl> - state_ptr - > popRange ( fn . getStartCallbacksThreadId ( ) , fn . handle ( ) ) ; <nl> + bool record_cuda = <nl> + state_ptr - > config ( ) . state = = ProfilerState : : CUDA ; <nl> + if ( record_cuda & & disable_cuda_profiling . find ( fn . name ( ) . str ( ) ) ! = disable_cuda_profiling . end ( ) ) { <nl> + record_cuda = false ; <nl> + } <nl> + state_ptr - > popRange ( fn . getStartCallbacksThreadId ( ) , record_cuda , fn . handle ( ) ) ; <nl> } ) <nl> . needsInputs ( state_ptr - > config ( ) . report_input_shapes ) <nl> . needsIds ( true ) ) ; <nl> mmm a / torch / testing / _internal / distributed / rpc / rpc_test . py <nl> ppp b / torch / testing / _internal / distributed / rpc / rpc_test . py <nl> def test_profiler_remote_cuda ( self ) : <nl> fut1 . wait ( ) <nl> fut2 . wait ( ) <nl> <nl> + def get_name ( event ) : <nl> + return event . name [ event . name . find ( REMOTE_OP_STR ) + len ( REMOTE_OP_STR ) : ] <nl> + <nl> function_events = p . function_events <nl> for event in function_events : <nl> if event . is_async : <nl> def test_profiler_remote_cuda ( self ) : <nl> if event . node_id = = 1 : <nl> continue <nl> self . assertTrue ( event . node_id in [ dst_cuda_0 , dst_cuda_1 ] ) <nl> - self . assertGreater ( event . cuda_time_total , 0 ) <nl> - self . assertEqual ( 1 , len ( event . kernels ) ) <nl> - kernel = event . kernels [ 0 ] <nl> - if event . node_id = = dst_cuda_0 : <nl> - self . 
assertEqual ( kernel . device , 0 ) <nl> - if event . node_id = = dst_cuda_1 : <nl> - self . assertEqual ( kernel . device , 1 ) <nl> - <nl> - self . assertGreater ( event . cuda_time , 0 ) <nl> + if get_name ( event ) in EXPECTED_REMOTE_EVENTS : <nl> + self . assertGreater ( event . cuda_time_total , 0 ) <nl> + self . assertEqual ( 1 , len ( event . kernels ) ) <nl> + kernel = event . kernels [ 0 ] <nl> + if event . node_id = = dst_cuda_0 : <nl> + self . assertEqual ( kernel . device , 0 ) <nl> + if event . node_id = = dst_cuda_1 : <nl> + self . assertEqual ( kernel . device , 1 ) <nl> + self . assertGreater ( event . cuda_time , 0 ) <nl> <nl> # Validate that EXPECTED_REMOTE_EVENTS is a subset of remotely profiled <nl> # events . <nl> - <nl> - def get_name ( event ) : <nl> - return event . name [ event . name . find ( REMOTE_OP_STR ) + len ( REMOTE_OP_STR ) : ] <nl> - <nl> remote_events = [ event for event in function_events if event . is_remote ] <nl> remote_event_names = [ get_name ( event ) for event in remote_events if get_name ( event ) in EXPECTED_REMOTE_EVENTS ] <nl> self . assertEqual ( set ( remote_event_names ) , set ( EXPECTED_REMOTE_EVENTS ) ) <nl>
add self cuda time to avoid double / quadruple counting ( )
pytorch/pytorch
50b91103a90ac87c7836a330d692f5db3949f1ee
2020-09-29T04:51:13Z
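A usage sketch for the record above, assuming the torch.autograd.profiler API of this era (profile(use_cuda=True), key_averages(), and table() are real entry points; the workload and sort key are illustrative):

import torch
from torch.autograd import profiler

x = torch.randn(1024, 1024, device='cuda')
with profiler.profile(use_cuda=True) as prof:
    y = torch.relu(x @ x)

# key_averages() folds FunctionEvents into FunctionEventAvg entries, which now
# carry self_cuda_time_total; the printed table gains "Self CUDA" and
# "Self CUDA %" columns, so kernel time spent in a child op is no longer also
# counted against every ancestor.
print(prof.key_averages().table(sort_by="self_cuda_time_total"))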
mmm a / tensorflow / core / BUILD <nl> ppp b / tensorflow / core / BUILD <nl> cc_library ( <nl> " : protos_cc " , <nl> " / / third_party / eigen3 " , <nl> ] , <nl> + alwayslink = 1 , <nl> ) <nl> <nl> # Full Tensorflow library with operator support . Use this unless reducing <nl> cc_library ( <nl> " : protos_cc " , <nl> " / / third_party / eigen3 " , <nl> ] , <nl> + alwayslink = 1 , <nl> ) <nl> <nl> filegroup ( <nl>
Add alwayslink = 1 to Android build rules . This allows native Android binaries to
tensorflow/tensorflow
4cacbfb66529d7ca4ca950af1d024359072bc163
2016-03-28T20:33:32Z
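A minimal Starlark (Python-syntax) sketch of the effect; the target and source labels are hypothetical, while the deps and the alwayslink attribute come from the diff. Without alwayslink = 1 the linker may drop archive members that are reachable only through static registration initializers, which is exactly how TensorFlow kernels and ops register themselves:

cc_library(
    name = "android_tensorflow_lib",  # hypothetical name
    srcs = ["core_cpp_srcs"],         # hypothetical sources
    deps = [
        ":protos_cc",
        "//third_party/eigen3",
    ],
    # Keep every object file in the final binary even when no symbol in it is
    # referenced directly, so static-initializer registrations survive linking.
    alwayslink = 1,
)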
mmm a / hphp / hack / src / client / clientGetDefinition . ml <nl> ppp b / hphp / hack / src / client / clientGetDefinition . ml <nl> open Hh_json <nl> <nl> let to_json x = <nl> JSON_Array ( List . map x begin function ( symbol , definition ) - > <nl> - let definition_pos , definition_span = Option . value_map definition <nl> + let definition_pos , definition_span , definition_id = <nl> + Option . value_map definition <nl> ~ f : ( fun x - > Pos . json x . SymbolDefinition . pos , <nl> - Pos . multiline_json x . SymbolDefinition . span ) <nl> - ~ default : ( JSON_Null , JSON_Null ) <nl> + Pos . multiline_json x . SymbolDefinition . span , <nl> + x . SymbolDefinition . id ) <nl> + ~ default : ( JSON_Null , JSON_Null , None ) <nl> + in <nl> + let definition_id = Option . value_map definition_id <nl> + ~ f : ( fun x - > JSON_String x ) ~ default : JSON_Null <nl> in <nl> JSON_Object [ <nl> " name " , JSON_String symbol . SymbolOccurrence . name ; <nl> let to_json x = <nl> " pos " , Pos . json ( symbol . SymbolOccurrence . pos ) ; <nl> " definition_pos " , definition_pos ; <nl> " definition_span " , definition_span ; <nl> + " definition_id " , definition_id ; <nl> ] <nl> end ) <nl> <nl> mmm a / hphp / hack / test / integration / common_tests . py <nl> ppp b / hphp / hack / test / integration / common_tests . py <nl> def test_ide_get_definition ( self ) : <nl> ' " pos " : { { " filename " : " " , " line " : 1 , " char_start " : 42 , " char_end " : 44 } } , ' <nl> ' " definition_pos " : { { " filename " : " " , " line " : 1 , " char_start " : 15 , ' <nl> ' " char_end " : 17 } } , " definition_span " : { { " filename " : " " , " line_start " : 1 , ' <nl> - ' " char_start " : 6 , " line_end " : 1 , " char_end " : 22 } } } } ] ' <nl> + ' " char_start " : 6 , " line_end " : 1 , " char_end " : 22 } } , ' <nl> + ' " definition_id " : " function : : bar " } } ] ' <nl> ] , <nl> options = [ ' - - ide - get - definition ' , ' 1 : 43 ' ] , <nl> stdin = ' < ? hh function bar ( ) { } function test ( ) { bar ( ) } ' ) <nl> def test_ide_get_definition_multi_file ( self ) : <nl> ' " char_start " : 45 , " char_end " : 64 } } , " definition_pos " : ' <nl> ' { { " filename " : " { root } foo_5 . php " , " line " : 4 , " char_start " : 26 , ' <nl> ' " char_end " : 45 } } , " definition_span " : { { " filename " : " { root } foo_5 . php " , ' <nl> - ' " line_start " : 4 , " char_start " : 3 , " line_end " : 4 , " char_end " : 50 } } } } ] ' <nl> + ' " line_start " : 4 , " char_start " : 3 , " line_end " : 4 , " char_end " : 50 } } , ' <nl> + ' " definition_id " : ' <nl> + ' " method : : ClassToBeIdentified : : methodToBeIdentified " } } ] ' <nl> ] , <nl> options = [ ' - - ide - get - definition ' , ' 1 : 50 ' ] , <nl> stdin = ' < ? hh function test ( ) { ' <nl>
Expose symbol ID in - - ide - get - definition mode
facebook/hhvm
cb388da33335109cbe2c8d5908898c81d3b930b1
2016-06-28T19:53:06Z
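A sketch of consuming the new field from a tool. The flag, the stdin form, and the JSON keys are taken from the integration test in the diff; treating hh_client as available on PATH is an assumption:

import json
import subprocess

# Mirrors the test: ask for the definition of the symbol at line 1, column 43.
source = '<?hh function bar() {} function test() { bar() }'
out = subprocess.run(
    ['hh_client', '--ide-get-definition', '1:43'],
    input=source, capture_output=True, text=True, check=True,
).stdout

for occurrence in json.loads(out):
    # definition_id is the new stable handle (e.g. "function::bar") and is
    # JSON null when the definition carries no id.
    print(occurrence['name'], occurrence['definition_id'])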
mmm a / dbms / src / DataStreams / AddingDefaultsBlockInputStream . cpp <nl> ppp b / dbms / src / DataStreams / AddingDefaultsBlockInputStream . cpp <nl> Block AddingDefaultsBlockInputStream : : readImpl ( ) <nl> evaluate_block . erase ( column . first ) ; <nl> } <nl> <nl> - evaluateMissingDefaultsUnsafe ( evaluate_block , header . getNamesAndTypesList ( ) , column_defaults , context ) ; <nl> + evaluateMissingDefaults ( evaluate_block , header . getNamesAndTypesList ( ) , column_defaults , context , false ) ; <nl> <nl> std : : unordered_map < size_t , MutableColumnPtr > mixed_columns ; <nl> <nl> mmm a / dbms / src / DataStreams / IBlockInputStream . h <nl> ppp b / dbms / src / DataStreams / IBlockInputStream . h <nl> class IBlockInputStream ; <nl> using BlockInputStreamPtr = std : : shared_ptr < IBlockInputStream > ; <nl> using BlockInputStreams = std : : vector < BlockInputStreamPtr > ; <nl> <nl> - class BlockMissingValues ; <nl> class TableStructureReadLock ; <nl> <nl> using TableStructureReadLockPtr = std : : shared_ptr < TableStructureReadLock > ; <nl> mmm a / dbms / src / Interpreters / evaluateMissingDefaults . cpp <nl> ppp b / dbms / src / Interpreters / evaluateMissingDefaults . cpp <nl> static ASTPtr requiredExpressions ( Block & block , const NamesAndTypesList & requi <nl> setAlias ( it - > second . expression - > clone ( ) , it - > first ) ) ; <nl> } <nl> <nl> + if ( default_expr_list - > children . empty ( ) ) <nl> + return nullptr ; <nl> return default_expr_list ; <nl> } <nl> <nl> - <nl> void evaluateMissingDefaults ( Block & block , <nl> const NamesAndTypesList & required_columns , <nl> const ColumnDefaults & column_defaults , <nl> - const Context & context ) <nl> + const Context & context , bool with_block_copy ) <nl> { <nl> if ( column_defaults . empty ( ) ) <nl> return ; <nl> <nl> ASTPtr default_expr_list = requiredExpressions ( block , required_columns , column_defaults ) ; <nl> - / / / nothing to evaluate <nl> - if ( default_expr_list - > children . empty ( ) ) <nl> + if ( ! default_expr_list ) <nl> return ; <nl> <nl> + if ( ! with_block_copy ) <nl> + { <nl> + auto syntax_result = SyntaxAnalyzer ( context , { } ) . analyze ( default_expr_list , block . getNamesAndTypesList ( ) ) ; <nl> + ExpressionAnalyzer { default_expr_list , syntax_result , context } . getActions ( true ) - > execute ( block ) ; <nl> + return ; <nl> + } <nl> + <nl> / * * ExpressionAnalyzer eliminates " unused " columns , in order to ensure their safety <nl> * we are going to operate on a copy instead of the original block * / <nl> Block copy_block { block } ; <nl> / / / evaluate default values for defaulted columns <nl> <nl> - NamesAndTypesList available_columns ; <nl> - for ( size_t i = 0 , size = block . columns ( ) ; i < size ; + + i ) <nl> - available_columns . emplace_back ( block . getByPosition ( i ) . name , block . getByPosition ( i ) . type ) ; <nl> - <nl> - auto syntax_result = SyntaxAnalyzer ( context , { } ) . analyze ( default_expr_list , available_columns ) ; <nl> + auto syntax_result = SyntaxAnalyzer ( context , { } ) . analyze ( default_expr_list , block . getNamesAndTypesList ( ) ) ; <nl> ExpressionAnalyzer { default_expr_list , syntax_result , context } . 
getActions ( true ) - > execute ( copy_block ) ; <nl> <nl> / / / move evaluated columns to the original block , materializing them at the same time <nl> void evaluateMissingDefaults ( Block & block , <nl> } <nl> } <nl> <nl> - <nl> - void evaluateMissingDefaultsUnsafe ( Block & block , <nl> - const NamesAndTypesList & required_columns , <nl> - const std : : unordered_map < std : : string , ColumnDefault > & column_defaults , <nl> - const Context & context ) <nl> - { <nl> - if ( column_defaults . empty ( ) ) <nl> - return ; <nl> - <nl> - ASTPtr default_expr_list = requiredExpressions ( block , required_columns , column_defaults ) ; <nl> - if ( default_expr_list - > children . empty ( ) ) <nl> - return ; <nl> - <nl> - NamesAndTypesList available_columns ; <nl> - for ( size_t i = 0 , size = block . columns ( ) ; i < size ; + + i ) <nl> - available_columns . emplace_back ( block . getByPosition ( i ) . name , block . getByPosition ( i ) . type ) ; <nl> - <nl> - auto syntax_result = SyntaxAnalyzer ( context , { } ) . analyze ( default_expr_list , available_columns ) ; <nl> - ExpressionAnalyzer { default_expr_list , syntax_result , context } . getActions ( true ) - > execute ( block ) ; <nl> - } <nl> - <nl> } <nl> mmm a / dbms / src / Interpreters / evaluateMissingDefaults . h <nl> ppp b / dbms / src / Interpreters / evaluateMissingDefaults . h <nl> struct ColumnDefault ; <nl> void evaluateMissingDefaults ( Block & block , <nl> const NamesAndTypesList & required_columns , <nl> const std : : unordered_map < std : : string , ColumnDefault > & column_defaults , <nl> - const Context & context ) ; <nl> - <nl> - void evaluateMissingDefaultsUnsafe ( Block & block , <nl> - const NamesAndTypesList & required_columns , <nl> - const std : : unordered_map < std : : string , ColumnDefault > & column_defaults , <nl> - const Context & context ) ; <nl> + const Context & context , bool with_block_copy = true ) ; <nl> <nl> } <nl>
clearer evaluateMissingDefaults [ CLICKHOUSE - 3578 ]
ClickHouse/ClickHouse
c642e16ee1ac3a5c9986c898e5d8c58c4e705bb7
2018-11-15T16:57:20Z
similarity index 100 % <nl> rename from test / expect / TestJit . test_concat_fusion . expect <nl> rename to test / expect / TestJit . test_concat_fusion_cuda . expect <nl> similarity index 100 % <nl> rename from test / expect / TestJit . test_cpp . expect <nl> rename to test / expect / TestJit . test_cpp_cuda . expect <nl> similarity index 100 % <nl> rename from test / expect / TestJit . test_fuse_last_device . expect <nl> rename to test / expect / TestJit . test_fuse_last_device_cuda . expect <nl> similarity index 100 % <nl> rename from test / expect / TestJit . test_fusion_distribute . expect <nl> rename to test / expect / TestJit . test_fusion_distribute_cuda . expect <nl> similarity index 100 % <nl> rename from test / expect / TestJit . test_lstm_fusion_concat . expect <nl> rename to test / expect / TestJit . test_lstm_fusion_concat_cuda . expect <nl> mmm a / test / test_jit . py <nl> ppp b / test / test_jit . py <nl> def test_lstm_fusion_cpu ( self ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_lstm_fusion_concat ( self ) : <nl> + def test_lstm_fusion_concat_cuda ( self ) : <nl> inputs = get_lstm_inputs ( ' cuda ' ) <nl> ge = self . checkTrace ( LSTMCellC , inputs ) <nl> self . assertExpectedGraph ( ge . graph_for ( * inputs ) ) <nl> def test_lstm_fusion_concat ( self ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_concat_fusion ( self ) : <nl> + def test_concat_fusion_cuda ( self ) : <nl> hx = torch . randn ( 3 , 20 , dtype = torch . float , device = ' cuda ' ) <nl> cx = torch . randn ( 3 , 20 , dtype = torch . float , device = ' cuda ' ) <nl> <nl> def fn ( x , y , z ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_fusion_distribute ( self ) : <nl> + def test_fusion_distribute_cuda ( self ) : <nl> def f ( x , y ) : <nl> z1 , z2 = ( x + y ) . chunk ( 2 , dim = 1 ) <nl> return z1 * z2 <nl> def f ( x , y ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_fusion_rand ( self ) : <nl> + def test_fusion_rand_cuda ( self ) : <nl> class M ( torch . jit . ScriptModule ) : <nl> __constants__ = [ ' d ' ] <nl> <nl> def create ( self , x ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_fusion_arg_configurations ( self ) : <nl> + def test_fusion_arg_configurations_cuda ( self ) : <nl> # A smoke test to make sure we won ' t use the same kernel for contiguous <nl> # and non - contiguous arguments . <nl> # TODO : add optionally enabled debug counters to the fuser to verify <nl> def fn_test_comparison_gt_lt ( x , y ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_comparison_gt_lt ( self ) : <nl> + def test_comparison_gt_lt_cuda ( self ) : <nl> x = torch . randn ( 4 , 4 , dtype = torch . float , device = ' cuda ' ) <nl> y = torch . randn ( 4 , 4 , dtype = torch . 
float , device = ' cuda ' ) <nl> <nl> def test_comparison_gt_lt ( self ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_comparison_ge_le ( self ) : <nl> + def test_comparison_ge_le_cuda ( self ) : <nl> def f ( x , y ) : <nl> mask = ( x > = 0 ) . type_as ( x ) <nl> z = x * mask + y <nl> def fn_test_relu ( x , y ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_relu ( self ) : <nl> + def test_relu_cuda ( self ) : <nl> x = torch . randn ( 4 , 4 , dtype = torch . float , device = ' cuda ' ) <nl> y = torch . randn ( 4 , 4 , dtype = torch . float , device = ' cuda ' ) <nl> <nl> def test_relu ( self ) : <nl> <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> - def test_small_constant ( self ) : <nl> + def test_small_constant_cuda ( self ) : <nl> def fn_test_small_constant ( x , y ) : <nl> return ( 1e - 8 * x + 5e - 9 * y ) * 1e8 <nl> x = torch . randn ( 4 , 4 , dtype = torch . float , device = ' cuda ' ) <nl> def fn_test_exp ( x , y ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " fuser requires CUDA " ) <nl> @ skipIfRocm <nl> - def test_exp ( self ) : <nl> + def test_exp_cuda ( self ) : <nl> x = torch . randn ( 4 , 4 , dtype = torch . float , device = ' cuda ' ) <nl> y = torch . randn ( 4 , 4 , dtype = torch . float , device = ' cuda ' ) <nl> <nl> def broadcast ( a , b ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA_MULTI_GPU , " needs non - zero device " ) <nl> @ skipIfRocm <nl> - def test_fuse_last_device ( self ) : <nl> + def test_fuse_last_device_cuda ( self ) : <nl> device = ' cuda : ' + str ( 1 ) <nl> x = torch . tensor ( [ 0 . 4 ] , dtype = torch . float , device = device ) <nl> y = torch . tensor ( [ 0 . 7 ] , dtype = torch . float , device = device ) <nl> def doit ( x , y ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " cpp tests require CUDA " ) <nl> @ skipIfRocm <nl> - def test_cpp ( self ) : <nl> + def test_cpp_cuda ( self ) : <nl> # rather than rebuild assertExpected in cpp , <nl> # just glob all the cpp outputs into one file for now <nl> self . assertExpected ( torch . _C . _jit_run_cpp_tests ( ) ) <nl> def foo ( a ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " NYI : fuser support for Windows " ) <nl> @ unittest . skipIf ( not RUN_CUDA , " calls . cuda ( ) " ) <nl> @ skipIfRocm <nl> - def test_traced_module ( self ) : <nl> + def test_traced_module_cuda ( self ) : <nl> class Model ( nn . Module ) : <nl> def __init__ ( self , num_features , num_layers ) : <nl> super ( Model , self ) . __init__ ( ) <nl>
Rename cuda tests to have ' cuda ' in their names ( )
pytorch/pytorch
1ad61a18b27d33315cde64cf2d072329b519a99e
2018-09-06T18:57:52Z
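One practical payoff of the suffix is name-based selection. A sketch, assuming test_jit.py forwards its arguments to the standard unittest runner (which PyTorch's run_tests helper did at the time):

import subprocess

# Run a single renamed test by its unittest dotted name.
subprocess.check_call(
    ['python', 'test/test_jit.py', 'TestJit.test_relu_cuda'])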
mmm a / test / test_cpp_extensions . py <nl> ppp b / test / test_cpp_extensions . py <nl> <nl> + import os <nl> import unittest <nl> import sys <nl> <nl> <nl> <nl> from torch . utils . cpp_extension import CUDA_HOME <nl> TEST_CUDA = torch . cuda . is_available ( ) and CUDA_HOME is not None <nl> - TEST_CUDNN = TEST_CUDA and torch . backends . cudnn . is_available ( ) <nl> + TEST_CUDNN = False <nl> + if TEST_CUDA : <nl> + CUDNN_HEADER_EXISTS = os . path . isfile ( os . path . join ( CUDA_HOME , ' include / cudnn . h ' ) ) <nl> + TEST_CUDNN = TEST_CUDA and CUDNN_HEADER_EXISTS and torch . backends . cudnn . is_available ( ) <nl> <nl> <nl> class TestCppExtension ( common . TestCase ) : <nl> mmm a / test / test_utils . py <nl> ppp b / test / test_utils . py <nl> def test_cpu ( self ) : <nl> @ unittest . skipIf ( IS_WINDOWS , " ffi doesn ' t currently work on Windows " ) <nl> @ skipIfRocm <nl> def test_gpu ( self ) : <nl> + from torch . utils . cpp_extension import CUDA_HOME <nl> create_extension ( <nl> name = ' gpulib ' , <nl> headers = [ test_dir + ' / ffi / src / cuda / cudalib . h ' ] , <nl> def test_gpu ( self ) : <nl> ] , <nl> with_cuda = True , <nl> verbose = False , <nl> + include_dirs = [ os . path . join ( CUDA_HOME , ' include ' ) ] , <nl> ) . build ( ) <nl> import gpulib <nl> tensor = torch . ones ( 2 , 2 ) . float ( ) <nl> mmm a / torch / utils / cpp_extension . py <nl> ppp b / torch / utils / cpp_extension . py <nl> def _find_cuda_home ( ) : <nl> BUILT_FROM_SOURCE_VERSION_PATTERN = re . compile ( r ' \ d + \ . \ d + \ . \ d + \ w + \ + \ w + ' ) <nl> <nl> <nl> + def is_binary_build ( ) : <nl> + return not BUILT_FROM_SOURCE_VERSION_PATTERN . match ( torch . version . __version__ ) <nl> + <nl> + <nl> def check_compiler_abi_compatibility ( compiler ) : <nl> ' ' ' <nl> Verifies that the given compiler is ABI - compatible with PyTorch . <nl> def check_compiler_abi_compatibility ( compiler ) : <nl> False if the compiler is ( likely ) ABI - incompatible with PyTorch , <nl> else True . <nl> ' ' ' <nl> - if BUILT_FROM_SOURCE_VERSION_PATTERN . match ( torch . version . __version__ ) : <nl> + if not is_binary_build ( ) : <nl> return True <nl> try : <nl> check_cmd = ' { } ' if sys . platform = = ' win32 ' else ' { } - - version ' <nl> def build_extensions ( self ) : <nl> self . _check_abi ( ) <nl> for extension in self . extensions : <nl> self . _define_torch_extension_name ( extension ) <nl> + self . _add_gnu_abi_flag_if_binary ( extension ) <nl> <nl> # Register . cu and . cuh as valid source extensions . <nl> self . compiler . src_extensions + = [ ' . cu ' , ' . cuh ' ] <nl> def _define_torch_extension_name ( self , extension ) : <nl> else : <nl> extension . extra_compile_args . append ( define ) <nl> <nl> + def _add_gnu_abi_flag_if_binary ( self , extension ) : <nl> + # If the version string looks like a binary build , <nl> + # we know that PyTorch was compiled with gcc 4 . 9 . 2 . <nl> + # if the extension is compiled with gcc > = 5 . 1 , <nl> + # then we have to define _GLIBCXX_USE_CXX11_ABI = 0 <nl> + # so that the std : : string in the API is resolved to <nl> + # non - C + + 11 symbols <nl> + define = ' - D_GLIBCXX_USE_CXX11_ABI = 0 ' <nl> + if is_binary_build ( ) : <nl> + if isinstance ( extension . extra_compile_args , dict ) : <nl> + for args in extension . extra_compile_args . values ( ) : <nl> + args . append ( define ) <nl> + else : <nl> + extension . extra_compile_args . 
append ( define ) <nl> + <nl> <nl> def CppExtension ( name , sources , * args , * * kwargs ) : <nl> ' ' ' <nl> def _write_ninja_file ( path , <nl> common_cflags = [ ' - DTORCH_EXTENSION_NAME = { } ' . format ( name ) ] <nl> common_cflags + = [ ' - I { } ' . format ( include ) for include in includes ] <nl> <nl> + if is_binary_build ( ) : <nl> + common_cflags + = [ ' - D_GLIBCXX_USE_CXX11_ABI = 0 ' ] <nl> + <nl> cflags = common_cflags + [ ' - fPIC ' , ' - std = c + + 11 ' ] + extra_cflags <nl> if sys . platform = = ' win32 ' : <nl> from distutils . spawn import _nt_quote_args <nl>
Soumith ' s last few patches to v0 . 4 . 1
pytorch/pytorch
5c0d9a24937a7e9561eccb8dac64b35fce05a034
2018-08-21T01:28:27Z
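A sketch of the build path the patch changes; the extension name and source file are hypothetical, while CppExtension and BuildExtension are the real torch.utils.cpp_extension entry points. On a binary build no extra flags are needed in the setup script:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='my_ext',  # hypothetical
    ext_modules=[CppExtension('my_ext', ['my_ext.cpp'])],
    # The patched BuildExtension appends -D_GLIBCXX_USE_CXX11_ABI=0 by itself
    # when torch.version.__version__ looks like a binary build (compiled with
    # gcc 4.9.2), so std::string symbols resolve against the pre-C++11 ABI
    # even if this extension is compiled with gcc >= 5.1.
    cmdclass={'build_ext': BuildExtension},
)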
mmm a / src / core / lib / http / httpcli . c <nl> ppp b / src / core / lib / http / httpcli . c <nl> static void next_address ( grpc_exec_ctx * exec_ctx , internal_request * req , <nl> static void on_resolved ( grpc_exec_ctx * exec_ctx , void * arg , grpc_error * error ) { <nl> internal_request * req = arg ; <nl> if ( error ! = GRPC_ERROR_NONE ) { <nl> - finish ( exec_ctx , req , error ) ; <nl> + finish ( exec_ctx , req , GRPC_ERROR_REF ( error ) ) ; <nl> return ; <nl> } <nl> req - > next_address = 0 ; <nl>
Add missing ref
grpc/grpc
d6466872c8ccf09fdc30a80ad86b78542069dd66
2017-05-23T23:29:00Z
mmm a / docs / conf . py <nl> ppp b / docs / conf . py <nl> <nl> <nl> # General information about the project . <nl> project = u ' envoy ' <nl> - copyright = u ' 2016 , Lyft ' <nl> + copyright = u ' 2016 - 2017 , Lyft ' <nl> author = u ' Lyft ' <nl> <nl> # The version info for the project you ' re documenting , acts as replacement for <nl> mmm a / docs / requirements . txt <nl> ppp b / docs / requirements . txt <nl> <nl> docutils = = 0 . 12 <nl> - sphinx = = 1 . 5 . 0 <nl> + sphinx = = 1 . 5 . 5 <nl> sphinxcontrib - httpdomain = = 1 . 5 . 0 <nl> + sphinx_rtd_theme = = 0 . 2 . 4 <nl> GitPython = = 2 . 0 . 8 <nl> - <nl> - # Includes fix for https : / / github . com / snide / sphinx_rtd_theme / issues / 348 <nl> - git + https : / / github . com / snide / sphinx_rtd_theme @ abfa98539a2bfc44198a9ca8c2f16efe84cc4d26 <nl>
docs : upgrade sphinx and theme ( )
envoyproxy/envoy
774b039b1dcc48d5f3c7cddbf03db4f25f5469b7
2017-04-27T15:02:03Z
mmm a / xbmc / guilib / GUIListItem . cpp <nl> ppp b / xbmc / guilib / GUIListItem . cpp <nl> const CStdStringW & CGUIListItem : : GetSortLabel ( ) const <nl> return m_sortLabel ; <nl> } <nl> <nl> - void CGUIListItem : : SetThumbnailImage ( const CStdString & strThumbnail ) <nl> - { <nl> - SetArt ( " thumb " , strThumbnail ) ; <nl> - } <nl> - <nl> - CStdString CGUIListItem : : GetThumbnailImage ( ) const <nl> - { <nl> - return GetArt ( " thumb " ) ; <nl> - } <nl> - <nl> void CGUIListItem : : SetArt ( const std : : string & type , const std : : string & url ) <nl> { <nl> ArtMap : : iterator i = m_art . find ( type ) ; <nl> bool CGUIListItem : : HasIcon ( ) const <nl> return ( m_strIcon . size ( ) ! = 0 ) ; <nl> } <nl> <nl> - <nl> - bool CGUIListItem : : HasThumbnail ( ) const <nl> - { <nl> - return HasArt ( " thumb " ) ; <nl> - } <nl> - <nl> bool CGUIListItem : : HasOverlay ( ) const <nl> { <nl> return ( m_overlayIcon ! = CGUIListItem : : ICON_OVERLAY_NONE ) ; <nl> mmm a / xbmc / guilib / GUIListItem . h <nl> ppp b / xbmc / guilib / GUIListItem . h <nl> class CGUIListItem <nl> void SetIconImage ( const CStdString & strIcon ) ; <nl> const CStdString & GetIconImage ( ) const ; <nl> <nl> - void SetThumbnailImage ( const CStdString & strThumbnail ) ; <nl> - CStdString GetThumbnailImage ( ) const ; <nl> - <nl> void SetOverlayImage ( GUIIconOverlay icon , bool bOnOff = false ) ; <nl> CStdString GetOverlayImage ( ) const ; <nl> <nl> class CGUIListItem <nl> bool IsSelected ( ) const ; <nl> <nl> bool HasIcon ( ) const ; <nl> - bool HasThumbnail ( ) const ; <nl> bool HasOverlay ( ) const ; <nl> virtual bool IsFileItem ( ) const { return false ; } ; <nl> <nl>
drop Get / Set / HasThumbnail ( Image ) from GUIListItem - use Has / Get / SetArt instead
xbmc/xbmc
bd5f7e6147495f040279d034cea0508c7c3a80e4
2012-10-10T09:29:10Z
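The analogous migration as seen from the Python addon API, assuming the xbmcgui ListItem binding of the same era exposes setArt (an assumption: the diff itself only touches the C++ GUIListItem):

import xbmcgui

item = xbmcgui.ListItem(label='Example')
# Old single-purpose setter, dropped on the C++ side by this commit:
#   item.setThumbnailImage('special://skin/media/thumb.png')
# Under the art-map model the thumbnail is just one art type among many:
item.setArt({'thumb': 'special://skin/media/thumb.png'})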
mmm a / emcc <nl> ppp b / emcc <nl> try : <nl> fastcomp_opts + = [ ' - enable - emscripten - cxx - exceptions ' ] <nl> if shared . Settings . DISABLE_EXCEPTION_CATCHING = = 2 : <nl> fastcomp_opts + = [ ' - emscripten - cxx - exceptions - whitelist = ' + ' , ' . join ( shared . Settings . EXCEPTION_CATCHING_WHITELIST or [ ' fake ' ] ) ] <nl> + if shared . Settings . ASYNCIFY : <nl> + fastcomp_opts + = [ ' - emscripten - asyncify ' ] <nl> + fastcomp_opts + = [ ' - emscripten - asyncify - functions = ' + ' , ' . join ( shared . Settings . ASYNCIFY_FUNCTIONS ) ] <nl> + fastcomp_opts + = [ ' - emscripten - asyncify - whitelist = ' + ' , ' . join ( shared . Settings . ASYNCIFY_WHITELIST ) ] <nl> <nl> if shared . Settings . ASM_JS : <nl> assert opt_level > = 1 or fastcomp , ' asm . js requires - O1 or above ' <nl> mmm a / emscripten . py <nl> ppp b / emscripten . py <nl> def keyfunc ( other ) : <nl> basic_vars + = [ ' ___rand_seed ' ] <nl> <nl> asm_runtime_funcs = [ ' stackAlloc ' , ' stackSave ' , ' stackRestore ' , ' setThrew ' , ' setTempRet0 ' , ' getTempRet0 ' ] <nl> + <nl> + # See if we need ASYNCIFY functions <nl> + # We might not need them even if ASYNCIFY is enabled <nl> + need_asyncify = ' _emscripten_alloc_async_context ' in exported_implemented_functions <nl> + if need_asyncify : <nl> + basic_vars + = [ ' ___async ' , ' ___async_unwind ' , ' ___async_retval ' , ' ___async_cur_frame ' ] <nl> + asm_runtime_funcs + = [ ' setAsync ' ] <nl> + <nl> # function tables <nl> function_tables = [ ' dynCall_ ' + table for table in last_forwarded_json [ ' Functions ' ] [ ' tables ' ] ] <nl> function_tables_impls = [ ] <nl> def math_fix ( g ) : <nl> top = top | 0 ; <nl> STACKTOP = top ; <nl> } <nl> + ' ' ' + ( ' ' ' <nl> + function setAsync ( ) { <nl> + ___async = 1 ; <nl> + } ' ' ' if need_asyncify else ' ' ) + ' ' ' <nl> function setThrew ( threw , value ) { <nl> threw = threw | 0 ; <nl> value = value | 0 ; <nl> new file mode 100644 <nl> index 00000000000 . . 55360a51080 <nl> mmm / dev / null <nl> ppp b / src / library_async . js <nl> <nl> + / * <nl> + * The layout of normal and async stack frames <nl> + * <nl> + * mmmmmmmmmmmmmmmmmmmmm < - - saved sp for the current function <nl> + * < last normal stack frame > <nl> + * mmmmmmmmmmmmmmmmmmmmm <nl> + * pointer to the previous frame < - - __async_cur_frame <nl> + * saved sp <nl> + * callback function < - - ctx , returned by alloc / reallloc , used by the program <nl> + * saved local variable1 <nl> + * saved local variable2 <nl> + * . . . <nl> + * mmmmmmmmmmmmmmmmmmmmm < - - STACKTOP <nl> + * <nl> + * / <nl> + <nl> + mergeInto ( LibraryManager . 
library , { <nl> + __async : 0 , / / whether a truly async function has been called <nl> + __async_unwind : 1 , / / whether to unwind the async stack frame <nl> + __async_retval : ' allocate ( 2 , " i32 " , ALLOC_STATIC ) ' , / / store the return value for async functions <nl> + __async_cur_frame : 0 , / / address to the current frame , which stores previous frame , stack pointer and async context <nl> + <nl> + # if ASYNCIFY <nl> + emscripten_async_resume__deps : [ ' __async ' , ' __async_unwind ' , ' __async_cur_frame ' ] , <nl> + # else <nl> + emscripten_async_resume__deps : [ function ( ) { throw ' ERROR : Please compile your program with - s ASYNCIFY = 1 in order to use asynchronous operations like emscripten_sleep ' ; } ] , <nl> + # endif <nl> + emscripten_async_resume__sig : ' v ' , <nl> + emscripten_async_resume__asm : true , <nl> + emscripten_async_resume : function ( ) { <nl> + var callback = 0 ; <nl> + ___async = 0 ; <nl> + ___async_unwind = 1 ; <nl> + while ( 1 ) { <nl> + if ( ! ___async_cur_frame ) return ; <nl> + callback = { { { makeGetValueAsm ( ' ___async_cur_frame ' , 8 , ' i32 ' ) } } } ; <nl> + / / the signature of callback is always vi <nl> + / / the only argument is ctx <nl> + dynCall_vi ( callback , ( ___async_cur_frame + 8 ) | 0 ) ; <nl> + if ( ___async ) return ; / / that was an async call <nl> + if ( ! ___async_unwind ) { <nl> + / / keep the async stack <nl> + ___async_unwind = 1 ; <nl> + continue ; <nl> + } <nl> + / / unwind normal stack frame <nl> + stackRestore ( { { { makeGetValueAsm ( ' ___async_cur_frame ' , 4 , ' i32 ' ) } } } ) ; <nl> + / / pop the last async stack frame <nl> + ___async_cur_frame = { { { makeGetValueAsm ( ' ___async_cur_frame ' , 0 , ' i32 ' ) } } } ; <nl> + } <nl> + } , <nl> + <nl> + emscripten_sleep__deps : [ ' emscripten_async_resume ' ] , <nl> + emscripten_sleep : function ( ms ) { <nl> + asm . setAsync ( ) ; / / tell the scheduler that we have a callback on hold <nl> + Browser . 
safeSetTimeout ( _emscripten_async_resume , ms ) ; <nl> + } , <nl> + <nl> + emscripten_alloc_async_context__deps : [ ' __async_cur_frame ' ] , <nl> + emscripten_alloc_async_context__sig : ' iii ' , <nl> + emscripten_alloc_async_context__asm : true , <nl> + emscripten_alloc_async_context : function ( len , sp ) { <nl> + len = len | 0 ; <nl> + sp = sp | 0 ; <nl> + / / len is the size of ctx <nl> + / / we also need to store prev_frame , stack pointer before ctx <nl> + var new_frame = 0 ; new_frame = stackAlloc ( ( len + 8 ) | 0 ) | 0 ; <nl> + / / save sp <nl> + { { { makeSetValueAsm ( ' new_frame ' , 4 , ' sp ' , ' i32 ' ) } } } ; <nl> + / / link the frame with previous one <nl> + { { { makeSetValueAsm ( ' new_frame ' , 0 , ' ___async_cur_frame ' , ' i32 ' ) } } } ; <nl> + ___async_cur_frame = new_frame ; <nl> + return ( ___async_cur_frame + 8 ) | 0 ; <nl> + } , <nl> + <nl> + emscripten_realloc_async_context__deps : [ ' __async_cur_frame ' ] , <nl> + emscripten_realloc_async_context__sig : ' ii ' , <nl> + emscripten_realloc_async_context__asm : true , <nl> + emscripten_realloc_async_context : function ( len ) { <nl> + len = len | 0 ; <nl> + / / assuming that we have on the stacktop <nl> + stackRestore ( ___async_cur_frame ) ; <nl> + return ( ( stackAlloc ( ( len + 8 ) | 0 ) | 0 ) + 8 ) | 0 ; <nl> + } , <nl> + <nl> + emscripten_free_async_context__deps : [ ' __async_cur_frame ' ] , <nl> + emscripten_free_async_context__sig : ' vi ' , <nl> + emscripten_free_async_context__asm : true , <nl> + emscripten_free_async_context : function ( ctx ) { <nl> + / / this function is called when a possibly async function turned out to be sync <nl> + / / just undo a recent emscripten_alloc_async_context <nl> + ctx = ctx | 0 ; <nl> + # if ASSERTIONS <nl> + assert ( ___async_cur_frame + 8 = = ctx ) ; <nl> + # endif <nl> + stackRestore ( ___async_cur_frame ) ; <nl> + ___async_cur_frame = { { { makeGetValueAsm ( ' ___async_cur_frame ' , 0 , ' i32 ' ) } } } ; <nl> + } , <nl> + <nl> + emscripten_check_async : true , <nl> + emscripten_do_not_unwind : true , <nl> + emscripten_do_not_unwind_async : true , <nl> + <nl> + emscripten_get_async_return_value_addr__deps : [ ' __async_retval ' ] , <nl> + emscripten_get_async_return_value_addr : true <nl> + } ) ; <nl> mmm a / src / modules . js <nl> ppp b / src / modules . js <nl> var LibraryManager = { <nl> load : function ( ) { <nl> if ( this . library ) return ; <nl> <nl> - var libraries = [ ' library . js ' , ' library_path . js ' , ' library_fs . js ' , ' library_idbfs . js ' , ' library_memfs . js ' , ' library_nodefs . js ' , ' library_sockfs . js ' , ' library_tty . js ' , ' library_browser . js ' , ' library_sdl . js ' , ' library_gl . js ' , ' library_glut . js ' , ' library_xlib . js ' , ' library_egl . js ' , ' library_gc . js ' , ' library_jansson . js ' , ' library_openal . js ' , ' library_glfw . js ' , ' library_uuid . js ' , ' library_glew . js ' , ' library_html5 . js ' ] . concat ( additionalLibraries ) ; <nl> + var libraries = [ ' library . js ' , ' library_path . js ' , ' library_fs . js ' , ' library_idbfs . js ' , ' library_memfs . js ' , ' library_nodefs . js ' , ' library_sockfs . js ' , ' library_tty . js ' , ' library_browser . js ' , ' library_sdl . js ' , ' library_gl . js ' , ' library_glut . js ' , ' library_xlib . js ' , ' library_egl . js ' , ' library_gc . js ' , ' library_jansson . js ' , ' library_openal . js ' , ' library_glfw . js ' , ' library_uuid . js ' , ' library_glew . js ' , ' library_html5 . js ' , ' library_async . js ' ] . 
concat ( additionalLibraries ) ; <nl> for ( var i = 0 ; i < libraries . length ; i + + ) { <nl> var filename = libraries [ i ] ; <nl> var src = read ( filename ) ; <nl> mmm a / src / settings . js <nl> ppp b / src / settings . js <nl> var DISABLE_EXCEPTION_CATCHING = 0 ; / / Disables generating code to actually catc <nl> var EXCEPTION_CATCHING_WHITELIST = [ ] ; / / Enables catching exception in the listed functions only , if <nl> / / DISABLE_EXCEPTION_CATCHING = 2 is set <nl> <nl> + / / For more explanations of this option , please visit <nl> + / / https : / / github . com / kripken / emscripten / wiki / Asyncify <nl> + var ASYNCIFY = 0 ; / / Whether to enable asyncify transformation <nl> + / / This allows to inject some async functions to the C code that appear to be sync <nl> + / / e . g . emscripten_sleep <nl> + var ASYNCIFY_FUNCTIONS = [ ' emscripten_sleep ' ] ; / / Functions that call any funcion in the list , directly or indirectly <nl> + / / will be transfromed <nl> + var ASYNCIFY_WHITELIST = [ ' qsort ' , / / Functions in this list are never considered async , even if they appear in ASYNCIFY_FUNCTIONS <nl> + ' trinkle ' , / / In the asyncify transformation , any function that calls a function pointer is considered async <nl> + ' __toread ' , / / This whitelist is useful when a function is known to be sync <nl> + ' __uflow ' , / / currently this link contains some functions in libc <nl> + ' __fwritex ' , <nl> + ' MUSL_vfprintf ' ] ; <nl> + <nl> + <nl> var EXECUTION_TIMEOUT = - 1 ; / / Throw an exception after X seconds - useful to debug infinite loops <nl> var CHECK_OVERFLOWS = 0 ; / / Add code that checks for overflows in integer math operations . <nl> / / There is currently not much to do to handle overflows if they occur . <nl> mmm a / system / include / emscripten / emscripten . h <nl> ppp b / system / include / emscripten / emscripten . h <nl> void emscripten_asm_const ( const char * code ) ; <nl> int emscripten_asm_const_int ( const char * code , . . . ) ; <nl> double emscripten_asm_const_double ( const char * code , . . . ) ; <nl> <nl> + / * <nl> + * Sleep for ` ms ` milliseconds <nl> + * This function should only be used when ASYNCIFY is enabled <nl> + * / <nl> + # if __EMSCRIPTEN__ <nl> + void emscripten_sleep ( unsigned int ms ) ; <nl> + # else <nl> + # define emscripten_sleep SDL_Delay <nl> + # endif <nl> + <nl> # ifdef __cplusplus <nl> } <nl> # endif <nl> mmm a / tests / test_core . py <nl> ppp b / tests / test_core . py <nl> def test_64bit_return_value ( self ) : <nl> assert " low = 5678 " in out <nl> assert " high = 1234 " in out <nl> <nl> + def test_asyncify ( self ) : <nl> + if not Settings . ASM_JS : <nl> + return self . skip ( ' asyncify requires asm . js ' ) <nl> + if os . environ . get ( ' EMCC_FAST_COMPILER ' ) = = ' 0 ' : <nl> + return self . skip ( ' asyncify requires fastcomp ' ) <nl> + <nl> + src = r ' ' ' <nl> + # include < stdio . h > <nl> + # include < emscripten . h > <nl> + void f ( void * p ) { <nl> + * ( int * ) p = 99 ; <nl> + printf ( " ! " ) ; <nl> + } <nl> + int main ( ) { <nl> + int i = 0 ; <nl> + printf ( " Hello " ) ; <nl> + emscripten_async_call ( f , & i , 1 ) ; <nl> + printf ( " World " ) ; <nl> + emscripten_sleep ( 100 ) ; <nl> + printf ( " % d \ n " , i ) ; <nl> + } <nl> + ' ' ' <nl> + Settings . ASYNCIFY = 1 ; <nl> + self . do_run ( src , ' HelloWorld ! 
99 ' ) ; <nl> + <nl> # Generate tests for everything <nl> def make_run ( fullname , name = - 1 , compiler = - 1 , embetter = 0 , quantum_size = 0 , <nl> typed_arrays = 0 , emcc_args = None , env = None ) : <nl>
Merge pull request from coolwanglu / async_v5
emscripten-core/emscripten
52e9cf9e8bb941bc3af8bba0447cb0ecbdcb462f
2014-07-25T00:34:56Z
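A sketch of driving the new option from a build script; the input and output file names are hypothetical, while -s ASYNCIFY=1 and the function lists come straight from settings.js:

import subprocess

# The asyncify transform instruments every function that can reach one of
# ASYNCIFY_FUNCTIONS (emscripten_sleep by default), so compiled code can
# unwind to the JS event loop and resume later via _emscripten_async_resume.
subprocess.check_call([
    'emcc', 'main.c', '-O1',
    '-s', 'ASYNCIFY=1',
    '-o', 'main.html',
])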
mmm a / doc / release - notes . md <nl> ppp b / doc / release - notes . md <nl> processing the entire blockchain . <nl> Compatibility <nl> = = = = = = = = = = = = = = <nl> <nl> - Bitcoin Core is extensively tested on multiple operating systems using <nl> - the Linux kernel , macOS 10 . 10 + , and Windows 7 and newer ( Windows XP is not supported ) . <nl> + Bitcoin Core is supported and extensively tested on operating systems using <nl> + the Linux kernel , macOS 10 . 10 + , and Windows 7 and newer . It is not recommended <nl> + to use Bitcoin Core on unsupported systems . <nl> <nl> Bitcoin Core should also work on most other Unix - like systems but is not <nl> frequently tested on them . <nl>
Merge : doc : Removing redundant line : " Windows XP not supported "
bitcoin/bitcoin
f5a70d1462592a23bbad4aa150e6b2beaeec7c42
2018-12-31T15:02:47Z
mmm a / Marlin / planner . cpp <nl> ppp b / Marlin / planner . cpp <nl> void Planner : : set_position_mm ( const AxisEnum axis , const float & v ) { <nl> <nl> / / Recalculate the steps / s ^ 2 acceleration rates , based on the mm / s ^ 2 <nl> void Planner : : reset_acceleration_rates ( ) { <nl> - uint32_t highest_acceleration_allaxes_steps_per_s2 ; <nl> + uint32_t highest_rate = 1 ; <nl> LOOP_XYZE ( i ) { <nl> max_acceleration_steps_per_s2 [ i ] = max_acceleration_mm_per_s2 [ i ] * axis_steps_per_mm [ i ] ; <nl> - if ( max_acceleration_steps_per_s2 [ i ] > highest_acceleration_allaxes_steps_per_s2 ) highest_acceleration_allaxes_steps_per_s2 = max_acceleration_steps_per_s2 [ i ] ; <nl> + NOLESS ( highest_rate , max_acceleration_steps_per_s2 [ i ] ) ; <nl> } <nl> - cutoff_long = 4294967295UL / highest_acceleration_allaxes_steps_per_s2 ; <nl> + cutoff_long = 4294967295UL / highest_rate ; <nl> } <nl> <nl> / / Recalculate position , steps_to_mm if axis_steps_per_mm changes ! <nl>
Fix uninitialized var in reset_acceleration_rates
MarlinFirmware/Marlin
3fcf91580872bf0c7a415a8ff539e5f0cf01a244
2016-10-24T16:55:41Z
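The fix is easy to sanity-check with plain arithmetic. In the sketch below the per-axis values are made up; only the seed of 1, the NOLESS fold, and the 4294967295UL divisor come from the diff:

# Before the patch highest_rate started out uninitialized, so cutoff_long
# could be derived from garbage; seeding at 1 also keeps the divisor nonzero
# even if every max_acceleration_steps_per_s2 entry were 0.
rates = [3000 * 80, 3000 * 80, 100 * 400, 10000 * 500]  # accel (mm/s^2) * steps/mm
highest_rate = 1
for r in rates:
    highest_rate = max(highest_rate, r)  # NOLESS(highest_rate, r)
cutoff_long = 4294967295 // highest_rate
print(highest_rate, cutoff_long)  # -> 5000000 858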