diff | msg | repo | sha | time |
---|---|---|---|---|
```diff
--- a/tensorflow/stream_executor/stream_executor_pimpl.h
+++ b/tensorflow/stream_executor/stream_executor_pimpl.h
 class StreamExecutor {
   DeviceMemory<T> GetSubBuffer(DeviceMemory<T> *parent, uint64 element_offset,
                                uint64 element_count);

-  // As GetSubBuffer(), but returns a ScopedDeviceMemory<T>.
-  template <typename T>
-  ScopedDeviceMemory<T> AllocateOwnedSubBuffer(DeviceMemory<T> *parent,
-                                               uint64 element_offset,
-                                               uint64 element_count) {
-    return ScopedDeviceMemory<T>(
-        this, GetSubBuffer<T>(parent, element_offset, element_count));
-  }
-
   // Finds a symbol and returns device memory allocated to the symbol. The
   // symbol is searched in any kernels that were previously loaded through
   // GetKernel() before the GetSymbol() call. The user has to make sure that the
```
|
Remove unused function with inconsistent API.
|
tensorflow/tensorflow
|
dd4234d7472c90be49286d7bac6bae8f61cbc815
|
2019-04-15T18:45:11Z
|
```diff
--- a/tensorflow/core/framework/tensor.h
+++ b/tensorflow/core/framework/tensor.h
 void Tensor::FillDimsAndValidateCompatibleShape(
 template <typename T, size_t NDIMS>
 typename TTypes<T, NDIMS>::Tensor Tensor::shaped(
     gtl::ArraySlice<int64> new_sizes) {
-  CheckType(DataTypeToEnum<T>::v());
-  CHECK(IsAligned());
+  CheckTypeAndIsAligned(DataTypeToEnum<T>::v());
   Eigen::array<Eigen::DenseIndex, NDIMS> dims;
   FillDimsAndValidateCompatibleShape(new_sizes, &dims);
   return typename TTypes<T, NDIMS>::Tensor(base<T>(), dims);
```
|
Another rollback of a performance regression.
|
tensorflow/tensorflow
|
5eba03ef1098de5918973d6e691e5cda31e3147a
|
2018-02-06T22:03:14Z
|
```diff
--- a/include/swift/AST/DiagnosticsDriver.def
+++ b/include/swift/AST/DiagnosticsDriver.def
 //
 // This source file is part of the Swift.org open source project
 //
-// Copyright (c) 2014 - 2017 Apple Inc. and the Swift project authors
+// Copyright (c) 2014 - 2020 Apple Inc. and the Swift project authors
 // Licensed under Apache License v2.0 with Runtime Library Exception
 //
 // See https://swift.org/LICENSE.txt for license information
 ERROR(error_conflicting_options, none,
 ERROR(error_option_not_supported, none,
       "'%0' is not supported with '%1'",
       (StringRef, StringRef))
+ERROR(error_requirement_not_met, none,
+      "'%0' requires '%1'",
+      (StringRef, StringRef))

 WARNING(warn_ignore_embed_bitcode, none,
         "ignoring -embed-bitcode since no object file is being generated", ())
--- a/include/swift/AST/DiagnosticsFrontend.def
+++ b/include/swift/AST/DiagnosticsFrontend.def
 ERROR(explicit_swift_module_map_corrupted, none,
       "explicit Swift module map from %0 is malformed",
       (StringRef))

+ERROR(placeholder_dependency_module_map_missing, none,
+      "cannot open Swift placeholder dependency module map from %0",
+      (StringRef))
+
+ERROR(placeholder_dependency_module_map_corrupted, none,
+      "Swift placeholder dependency module map from %0 is malformed",
+      (StringRef))
+
 REMARK(default_previous_install_name, none,
        "default previous install name for %0 is %1", (StringRef, StringRef))

--- a/include/swift/AST/ModuleDependencies.h
+++ b/include/swift/AST/ModuleDependencies.h
 class Identifier;
 /// Which kind of module dependencies we are looking for.
 enum class ModuleDependenciesKind : int8_t {
   Swift,
+  // Placeholder dependencies are a kind of dependencies used only by the
+  // dependency scanner. They are swift modules that the scanner will not be
+  // able to locate in its search paths and which are the responsibility of the
+  // scanner's client to ensure are provided.
+  //
+  // Placeholder dependencies will be specified in the scanner's output
+  // dependency graph where it is the responsibility of the scanner's client to
+  // ensure required post-processing takes place to "resolve" them. In order to
+  // do so, the client (swift driver, or any other client build system) is
+  // expected to have access to a full dependency graph of all placeholder
+  // dependencies and be able to replace placeholder nodes in the dependency
+  // graph with their full dependency trees, `uniquing` common dependency module
+  // nodes in the process.
+  //
+  // One example where placeholder dependencies are employed is when using
+  // SwiftPM in Explicit Module Build mode. SwiftPM constructs a build plan for
+  // all targets ahead-of-time. When planning a build for a target that depends
+  // on other targets, the dependency scanning action is not able to locate
+  // dependency target modules, because they have not yet been built. Instead,
+  // the build system treats them as placeholder dependencies and resolves them
+  // with `actual` dependencies in a post-processing step once dependency graphs
+  // of all targets, individually, have been computed.
+  SwiftPlaceholder,
   Clang,
 };

 enum class ModuleDependenciesKind : int8_t {
 /// This class is mostly an implementation detail for \c ModuleDependencies.
 class ModuleDependenciesStorageBase {
 public:
-  const bool isSwiftModule;
+  const ModuleDependenciesKind dependencyKind;

-  ModuleDependenciesStorageBase(bool isSwiftModule,
+  ModuleDependenciesStorageBase(ModuleDependenciesKind dependencyKind,
                                 const std::string &compiledModulePath)
-      : isSwiftModule(isSwiftModule),
+      : dependencyKind(dependencyKind),
         compiledModulePath(compiledModulePath) {}

   virtual ModuleDependenciesStorageBase *clone() const = 0;
 class SwiftModuleDependenciesStorage : public ModuleDependenciesStorageBase {
       ArrayRef<StringRef> buildCommandLine,
       ArrayRef<StringRef> extraPCMArgs,
       StringRef contextHash
-  ) : ModuleDependenciesStorageBase(/*isSwiftModule=*/true, compiledModulePath),
+  ) : ModuleDependenciesStorageBase(ModuleDependenciesKind::Swift,
+                                    compiledModulePath),
       swiftInterfaceFile(swiftInterfaceFile),
       compiledModuleCandidates(compiledModuleCandidates.begin(),
                                compiledModuleCandidates.end()),
 class SwiftModuleDependenciesStorage : public ModuleDependenciesStorageBase {
   }

   static bool classof(const ModuleDependenciesStorageBase *base) {
-    return base->isSwiftModule;
+    return base->dependencyKind == ModuleDependenciesKind::Swift;
   }
 };

 class ClangModuleDependenciesStorage : public ModuleDependenciesStorageBase {
       const std::string &contextHash,
       const std::vector<std::string> &nonPathCommandLine,
       const std::vector<std::string> &fileDependencies
-  ) : ModuleDependenciesStorageBase(/*isSwiftModule=*/false,
+  ) : ModuleDependenciesStorageBase(ModuleDependenciesKind::Clang,
                                     compiledModulePath),
       moduleMapFile(moduleMapFile),
       contextHash(contextHash),
 class ClangModuleDependenciesStorage : public ModuleDependenciesStorageBase {
   }

   static bool classof(const ModuleDependenciesStorageBase *base) {
-    return !base->isSwiftModule;
+    return base->dependencyKind == ModuleDependenciesKind::Clang;
+  }
+};
+
+/// Describes a placeholder Swift module dependency module stub.
+///
+/// This class is mostly an implementation detail for \c ModuleDependencies.
+class PlaceholderSwiftModuleDependencyStorage : public ModuleDependenciesStorageBase {
+public:
+  PlaceholderSwiftModuleDependencyStorage(const std::string &compiledModulePath,
+                                          const std::string &moduleDocPath,
+                                          const std::string &sourceInfoPath)
+      : ModuleDependenciesStorageBase(ModuleDependenciesKind::SwiftPlaceholder,
+                                      compiledModulePath),
+        moduleDocPath(moduleDocPath),
+        sourceInfoPath(sourceInfoPath) {}
+
+  ModuleDependenciesStorageBase *clone() const override {
+    return new PlaceholderSwiftModuleDependencyStorage(*this);
+  }
+
+  /// The path to the .swiftModuleDoc file.
+  const std::string moduleDocPath;
+
+  /// The path to the .swiftSourceInfo file.
+  const std::string sourceInfoPath;
+
+  static bool classof(const ModuleDependenciesStorageBase *base) {
+    return base->dependencyKind == ModuleDependenciesKind::SwiftPlaceholder;
   }
 };

 class ModuleDependencies {
         fileDependencies));
   }

+  /// Describe a placeholder dependency swift module.
+  static ModuleDependencies forPlaceholderSwiftModuleStub(
+      const std::string &compiledModulePath,
+      const std::string &moduleDocPath,
+      const std::string &sourceInfoPath) {
+    return ModuleDependencies(
+        std::make_unique<PlaceholderSwiftModuleDependencyStorage>(
+            compiledModulePath, moduleDocPath, sourceInfoPath));
+  }
+
   /// Retrieve the path to the compiled module.
   const std::string getCompiledModulePath() const {
     return storage->compiledModulePath;
 class ModuleDependencies {
   /// Whether the dependencies are for a Swift module.
   bool isSwiftModule() const;

+  /// Whether this represents a placeholder module stub
+  bool isPlaceholderSwiftModule() const;
+
   ModuleDependenciesKind getKind() const {
-    return isSwiftModule() ? ModuleDependenciesKind::Swift
-                           : ModuleDependenciesKind::Clang;
+    return storage->dependencyKind;
   }
   /// Retrieve the dependencies for a Swift module.
   const SwiftModuleDependenciesStorage *getAsSwiftModule() const;
 class ModuleDependencies {
   /// Retrieve the dependencies for a Clang module.
   const ClangModuleDependenciesStorage *getAsClangModule() const;

+  /// Retrieve the dependencies for a placeholder dependency module stub.
+  const PlaceholderSwiftModuleDependencyStorage *
+  getAsPlaceholderDependencyModule() const;
+
   /// Add a dependency on the given module, if it was not already in the set.
   void addModuleDependency(StringRef module,
                            llvm::StringSet<> *alreadyAddedModules = nullptr);
 class ModuleDependenciesCache {
   /// Dependencies for Swift modules that have already been computed.
   llvm::StringMap<ModuleDependencies> SwiftModuleDependencies;

+  /// Dependencies for Swift placeholder dependency modules.
+  llvm::StringMap<ModuleDependencies> PlaceholderSwiftModuleDependencies;
+
   /// Dependencies for Clang modules that have already been computed.
   llvm::StringMap<ModuleDependencies> ClangModuleDependencies;

--- a/include/swift/AST/SearchPathOptions.h
+++ b/include/swift/AST/SearchPathOptions.h
 class SearchPathOptions {

   /// A map of explict Swift module information.
   std::string ExplicitSwiftModuleMap;
+
+  /// A map of placeholder Swift module dependency information.
+  std::string PlaceholderDependencyModuleMap;
 private:
   static StringRef
   pathStringFromFrameworkSearchPath(const FrameworkSearchPath &next) {
--- a/include/swift/Frontend/ModuleInterfaceLoader.h
+++ b/include/swift/Frontend/ModuleInterfaceLoader.h
 class ExplicitSwiftModuleLoader : public SerializedModuleLoaderBase {
   ~ExplicitSwiftModuleLoader();
 };

+/// Information about explicitly specified Swift module files.
+struct ExplicitModuleInfo {
+  // Path of the .swiftmodule file.
+  StringRef modulePath;
+  // Path of the .swiftmoduledoc file.
+  StringRef moduleDocPath;
+  // Path of the .swiftsourceinfo file.
+  StringRef moduleSourceInfoPath;
+  // Opened buffer for the .swiftmodule file.
+  std::unique_ptr<llvm::MemoryBuffer> moduleBuffer;
+};
+
+/// Parser of explicit module maps passed into the compiler.
+//  [
+//    {
+//      "moduleName": "A",
+//      "modulePath": "A.swiftmodule",
+//      "docPath": "A.swiftdoc",
+//      "sourceInfoPath": "A.swiftsourceinfo"
+//    },
+//    {
+//      "moduleName": "B",
+//      "modulePath": "B.swiftmodule",
+//      "docPath": "B.swiftdoc",
+//      "sourceInfoPath": "B.swiftsourceinfo"
+//    }
+//  ]
+class ExplicitModuleMapParser {
+public:
+  ExplicitModuleMapParser(llvm::BumpPtrAllocator &Allocator) : Saver(Allocator) {}
+
+  std::error_code
+  parseSwiftExplicitModuleMap(llvm::StringRef fileName,
+                              llvm::StringMap<ExplicitModuleInfo> &moduleMap) {
+    using namespace llvm::yaml;
+    // Load the input file.
+    llvm::ErrorOr<std::unique_ptr<llvm::MemoryBuffer>> fileBufOrErr =
+        llvm::MemoryBuffer::getFile(fileName);
+    if (!fileBufOrErr) {
+      return std::make_error_code(std::errc::no_such_file_or_directory);
+    }
+    StringRef Buffer = fileBufOrErr->get()->getBuffer();
+    // Use a new source manager instead of the one from ASTContext because we
+    // don't want the JSON file to be persistent.
+    llvm::SourceMgr SM;
+    Stream Stream(llvm::MemoryBufferRef(Buffer, fileName), SM);
+    for (auto DI = Stream.begin(); DI != Stream.end(); ++DI) {
+      assert(DI != Stream.end() && "Failed to read a document");
+      if (auto *MN = dyn_cast_or_null<SequenceNode>(DI->getRoot())) {
+        for (auto &entry : *MN) {
+          if (parseSingleModuleEntry(entry, moduleMap)) {
+            return std::make_error_code(std::errc::invalid_argument);
+          }
+        }
+      } else {
+        return std::make_error_code(std::errc::invalid_argument);
+      }
+    }
+    return std::error_code{}; // success
+  }
+
+private:
+  StringRef getScalaNodeText(llvm::yaml::Node *N) {
+    SmallString<32> Buffer;
+    return Saver.save(cast<llvm::yaml::ScalarNode>(N)->getValue(Buffer));
+  }
+
+  bool parseSingleModuleEntry(llvm::yaml::Node &node,
+                              llvm::StringMap<ExplicitModuleInfo> &moduleMap) {
+    using namespace llvm::yaml;
+    auto *mapNode = dyn_cast<MappingNode>(&node);
+    if (!mapNode)
+      return true;
+    StringRef moduleName;
+    ExplicitModuleInfo result;
+    for (auto &entry : *mapNode) {
+      auto key = getScalaNodeText(entry.getKey());
+      auto val = getScalaNodeText(entry.getValue());
+      if (key == "moduleName") {
+        moduleName = val;
+      } else if (key == "modulePath") {
+        result.modulePath = val;
+      } else if (key == "docPath") {
+        result.moduleDocPath = val;
+      } else if (key == "sourceInfoPath") {
+        result.moduleSourceInfoPath = val;
+      } else {
+        // Being forgiving for future fields.
+        continue;
+      }
+    }
+    if (moduleName.empty())
+      return true;
+    moduleMap[moduleName] = std::move(result);
+    return false;
+  }
+
+  llvm::StringSaver Saver;
+};
+
 struct ModuleInterfaceLoaderOptions {
   bool remarkOnRebuildFromInterface = false;
   bool disableInterfaceLock = false;
--- a/include/swift/Option/FrontendOptions.td
+++ b/include/swift/Option/FrontendOptions.td
 def swift_module_file
 def explict_swift_module_map
   : Separate<["-"], "explicit-swift-module-map-file">, MetaVarName<"<path>">,
     HelpText<"Specify a JSON file containing information of explict Swift modules">;
+
+def placeholder_dependency_module_map
+  : Separate<["-"], "placeholder-dependency-module-map-file">, MetaVarName<"<path>">,
+    HelpText<"Specify a JSON file containing information of external Swift module dependencies">;
 }

--- a/lib/AST/ModuleDependencies.cpp
+++ b/lib/AST/ModuleDependencies.cpp
 bool ModuleDependencies::isSwiftModule() const {
   return isa<SwiftModuleDependenciesStorage>(storage.get());
 }

+bool ModuleDependencies::isPlaceholderSwiftModule() const {
+  return isa<PlaceholderSwiftModuleDependencyStorage>(storage.get());
+}
+
 /// Retrieve the dependencies for a Swift module.
 const SwiftModuleDependenciesStorage *
 ModuleDependencies::getAsSwiftModule() const {
 ModuleDependencies::getAsClangModule() const {
   return dyn_cast<ClangModuleDependenciesStorage>(storage.get());
 }

+/// Retrieve the dependencies for a placeholder dependency module stub.
+const PlaceholderSwiftModuleDependencyStorage *
+ModuleDependencies::getAsPlaceholderDependencyModule() const {
+  return dyn_cast<PlaceholderSwiftModuleDependencyStorage>(storage.get());
+}
+
 void ModuleDependencies::addModuleDependency(
     StringRef module, llvm::StringSet<> *alreadyAddedModules) {
   if (!alreadyAddedModules || alreadyAddedModules->insert(module).second)
 ModuleDependenciesCache::getDependenciesMap(ModuleDependenciesKind kind) {
   switch (kind) {
   case ModuleDependenciesKind::Swift:
     return SwiftModuleDependencies;
-
+  case ModuleDependenciesKind::SwiftPlaceholder:
+    return PlaceholderSwiftModuleDependencies;
   case ModuleDependenciesKind::Clang:
     return ClangModuleDependencies;
   }
 ModuleDependenciesCache::getDependenciesMap(ModuleDependenciesKind kind) const {
   switch (kind) {
   case ModuleDependenciesKind::Swift:
     return SwiftModuleDependencies;
-
+  case ModuleDependenciesKind::SwiftPlaceholder:
+    return PlaceholderSwiftModuleDependencies;
   case ModuleDependenciesKind::Clang:
     return ClangModuleDependencies;
   }
 bool ModuleDependenciesCache::hasDependencies(
     Optional<ModuleDependenciesKind> kind) const {
   if (!kind) {
     return hasDependencies(moduleName, ModuleDependenciesKind::Swift) ||
+           hasDependencies(moduleName, ModuleDependenciesKind::SwiftPlaceholder) ||
            hasDependencies(moduleName, ModuleDependenciesKind::Clang);
   }

 Optional<ModuleDependencies> ModuleDependenciesCache::findDependencies(
     if (auto swiftDep = findDependencies(
             moduleName, ModuleDependenciesKind::Swift))
       return swiftDep;
-
-    return findDependencies(moduleName, ModuleDependenciesKind::Clang);
+    else if (auto swiftPlaceholderDep = findDependencies(
+                 moduleName, ModuleDependenciesKind::SwiftPlaceholder))
+      return swiftPlaceholderDep;
+    else
+      return findDependencies(moduleName, ModuleDependenciesKind::Clang);
   }

   const auto &map = getDependenciesMap(*kind);
--- a/lib/AST/ModuleLoader.cpp
+++ b/lib/AST/ModuleLoader.cpp
 ModuleDependencies::collectCrossImportOverlayNames(ASTContext &ctx,
   using namespace llvm::sys;
   using namespace file_types;
   Optional<std::string> modulePath;
+  // A map from secondary module name to a vector of overlay names.
+  llvm::StringMap<llvm::SmallSetVector<Identifier, 4>> result;
   // Mimic getModuleDefiningPath() for Swift and Clang module.
   if (auto *swiftDep = dyn_cast<SwiftModuleDependenciesStorage>(storage.get())) {
     // Prefer interface path to binary module path if we have it.
 ModuleDependencies::collectCrossImportOverlayNames(ASTContext &ctx,
     if (llvm::sys::path::extension(parentDir) == ".swiftmodule") {
       modulePath = parentDir.str();
     }
-  } else {
-    modulePath = cast<ClangModuleDependenciesStorage>(storage.get())->moduleMapFile;
+  } else if (auto *clangDep = dyn_cast<ClangModuleDependenciesStorage>(storage.get())) {
+    modulePath = clangDep->moduleMapFile;
     assert(modulePath.hasValue());
+  } else { // PlaceholderSwiftModuleDependencies
+    return result;
   }
-  // A map from secondary module name to a vector of overlay names.
-  llvm::StringMap<llvm::SmallSetVector<Identifier, 4>> result;
   findOverlayFilesInternal(ctx, *modulePath, moduleName, SourceLoc(),
                            [&](StringRef file) {
     StringRef bystandingModule;
--- a/lib/Driver/Driver.cpp
+++ b/lib/Driver/Driver.cpp
 static void validateProfilingArgs(DiagnosticEngine &diags,
   }
 }

+static void validateDependencyScanningArgs(DiagnosticEngine &diags,
+                                           const ArgList &args) {
+  const Arg *ExternalDependencyMap = args.getLastArg(options::OPT_placeholder_dependency_module_map);
+  const Arg *ScanDependencies = args.getLastArg(options::OPT_scan_dependencies);
+  if (ExternalDependencyMap && !ScanDependencies) {
+    diags.diagnose(SourceLoc(), diag::error_requirement_not_met,
+                   "-placeholder-dependency-module-map-file", "-scan-dependencies");
+  }
+}
+
 static void validateDebugInfoArgs(DiagnosticEngine &diags,
                                   const ArgList &args) {
   // Check for missing debug option when verifying debug info.
 static void validateArgs(DiagnosticEngine &diags, const ArgList &args,
   validateBridgingHeaderArgs(diags, args);
   validateWarningControlArgs(diags, args);
   validateProfilingArgs(diags, args);
+  validateDependencyScanningArgs(diags, args);
   validateDebugInfoArgs(diags, args);
   validateCompilationConditionArgs(diags, args);
   validateSearchPathArgs(diags, args);
--- a/lib/Frontend/CompilerInvocation.cpp
+++ b/lib/Frontend/CompilerInvocation.cpp
 static bool ParseSearchPathArgs(SearchPathOptions &Opts,
   for (auto A : Args.filtered(OPT_candidate_module_file)) {
     Opts.CandidateCompiledModules.push_back(resolveSearchPath(A->getValue()));
   }
+  if (const Arg *A = Args.getLastArg(OPT_placeholder_dependency_module_map))
+    Opts.PlaceholderDependencyModuleMap = A->getValue();

   // Opts.RuntimeIncludePath is set by calls to
   // setRuntimeIncludePath() or setMainExecutablePath().
--- a/lib/Frontend/ModuleInterfaceLoader.cpp
+++ b/lib/Frontend/ModuleInterfaceLoader.cpp
 bool InterfaceSubContextDelegateImpl::runInSubCompilerInstance(StringRef moduleN
 }

 struct ExplicitSwiftModuleLoader::Implementation {
-  // Information about explicitly specified Swift module files.
-  struct ExplicitModuleInfo {
-    // Path of the .swiftmodule file.
-    StringRef modulePath;
-    // Path of the .swiftmoduledoc file.
-    StringRef moduleDocPath;
-    // Path of the .swiftsourceinfo file.
-    StringRef moduleSourceInfoPath;
-    // Opened buffer for the .swiftmodule file.
-    std::unique_ptr<llvm::MemoryBuffer> moduleBuffer;
-  };
   ASTContext &Ctx;
   llvm::BumpPtrAllocator Allocator;
-  llvm::StringSaver Saver;
   llvm::StringMap<ExplicitModuleInfo> ExplicitModuleMap;
-  Implementation(ASTContext &Ctx) : Ctx(Ctx), Saver(Allocator) {}
+  Implementation(ASTContext &Ctx) : Ctx(Ctx) {}

-  StringRef getScalaNodeText(llvm::yaml::Node *N) {
-    SmallString<32> Buffer;
-    return Saver.save(cast<llvm::yaml::ScalarNode>(N)->getValue(Buffer));
-  }
-
-  bool parseSingleModuleEntry(llvm::yaml::Node &node) {
-    using namespace llvm::yaml;
-    auto *mapNode = dyn_cast<MappingNode>(&node);
-    if (!mapNode)
-      return true;
-    StringRef moduleName;
-    ExplicitModuleInfo result;
-    for (auto &entry : *mapNode) {
-      auto key = getScalaNodeText(entry.getKey());
-      auto val = getScalaNodeText(entry.getValue());
-      if (key == "moduleName") {
-        moduleName = val;
-      } else if (key == "modulePath") {
-        result.modulePath = val;
-      } else if (key == "docPath") {
-        result.moduleDocPath = val;
-      } else if (key == "sourceInfoPath") {
-        result.moduleSourceInfoPath = val;
-      } else {
-        // Being forgiving for future fields.
-        continue;
-      }
-    }
-    if (moduleName.empty())
-      return true;
-    ExplicitModuleMap[moduleName] = std::move(result);
-    return false;
-  }
-  // [
-  //   {
-  //     "moduleName": "A",
-  //     "modulePath": "A.swiftmodule",
-  //     "docPath": "A.swiftdoc",
-  //     "sourceInfoPath": "A.swiftsourceinfo"
-  //   },
-  //   {
-  //     "moduleName": "B",
-  //     "modulePath": "B.swiftmodule",
-  //     "docPath": "B.swiftdoc",
-  //     "sourceInfoPath": "B.swiftsourceinfo"
-  //   }
-  // ]
   void parseSwiftExplicitModuleMap(StringRef fileName) {
-    using namespace llvm::yaml;
-    // Load the input file.
-    llvm::ErrorOr<std::unique_ptr<llvm::MemoryBuffer>> fileBufOrErr =
-        llvm::MemoryBuffer::getFile(fileName);
-    if (!fileBufOrErr) {
+    ExplicitModuleMapParser parser(Allocator);
+    auto result =
+        parser.parseSwiftExplicitModuleMap(fileName, ExplicitModuleMap);
+    if (result == std::errc::invalid_argument)
+      Ctx.Diags.diagnose(SourceLoc(), diag::explicit_swift_module_map_corrupted,
+                         fileName);
+    else if (result == std::errc::no_such_file_or_directory)
       Ctx.Diags.diagnose(SourceLoc(), diag::explicit_swift_module_map_missing,
                          fileName);
-      return;
-    }
-    StringRef Buffer = fileBufOrErr->get()->getBuffer();
-    // Use a new source manager instead of the one from ASTContext because we
-    // don't want the JSON file to be persistent.
-    llvm::SourceMgr SM;
-    Stream Stream(llvm::MemoryBufferRef(Buffer, fileName), SM);
-    for (auto DI = Stream.begin(); DI != Stream.end(); ++DI) {
-      assert(DI != Stream.end() && "Failed to read a document");
-      if (auto *MN = dyn_cast_or_null<SequenceNode>(DI->getRoot())) {
-        for (auto &entry : *MN) {
-          if (parseSingleModuleEntry(entry)) {
-            Ctx.Diags.diagnose(SourceLoc(),
-                               diag::explicit_swift_module_map_corrupted,
-                               fileName);
-            return;
-          }
-        }
-      } else {
-        Ctx.Diags.diagnose(SourceLoc(),
-                           diag::explicit_swift_module_map_corrupted,
-                           fileName);
-        return;
-      }
-    }
   }
 };

--- a/lib/FrontendTool/ScanDependencies.cpp
+++ b/lib/FrontendTool/ScanDependencies.cpp
 namespace {
                          const ModuleDependencyID &module,
                          unsigned indentLevel) {
   out << "{\n";
+  std::string moduleKind;
+  if (module.second == ModuleDependenciesKind::Swift)
+    moduleKind = "swift";
+  else if (module.second == ModuleDependenciesKind::SwiftPlaceholder)
+    moduleKind = "swiftPlaceholder";
+  else
+    moduleKind = "clang";

   writeJSONSingleField(
       out,
-      module.second == ModuleDependenciesKind::Swift ? "swift" : "clang",
+      moduleKind,
       module.first,
       indentLevel + 1,
       /*trailingComma=*/false);
 static void writeJSON(llvm::raw_ostream &out,
     out.indent(2 * 2);
     out << "{\n";

+    auto externalSwiftDep = moduleDeps.getAsPlaceholderDependencyModule();
+    auto swiftDeps = moduleDeps.getAsSwiftModule();
+    auto clangDeps = moduleDeps.getAsClangModule();
+
     // Module path.
     const char *modulePathSuffix =
         moduleDeps.isSwiftModule() ? ".swiftmodule" : ".pcm";
-    std::string modulePath = module.first + modulePathSuffix;
+
+    std::string modulePath = externalSwiftDep
+                                 ? externalSwiftDep->compiledModulePath
+                                 : module.first + modulePathSuffix;
     writeJSONSingleField(out, "modulePath", modulePath, /*indentLevel=*/3,
                          /*trailingComma=*/true);

     // Source files.
-    auto swiftDeps = moduleDeps.getAsSwiftModule();
-    auto clangDeps = moduleDeps.getAsClangModule();
     if (swiftDeps) {
       writeJSONSingleField(out, "sourceFiles", swiftDeps->sourceFiles, 3,
-                          /*trailingComma=*/true);
-    } else {
+                           /*trailingComma=*/true);
+    } else if (clangDeps) {
       writeJSONSingleField(out, "sourceFiles", clangDeps->fileDependencies, 3,
                            /*trailingComma=*/true);
     }

     // Direct dependencies.
-    writeJSONSingleField(out, "directDependencies", directDependencies,
-                         3, /*trailingComma=*/true);
+    if (swiftDeps || clangDeps)
+      writeJSONSingleField(out, "directDependencies", directDependencies, 3,
+                           /*trailingComma=*/true);

     // Swift and Clang-specific details.
     out.indent(3 * 2);
 static void writeJSON(llvm::raw_ostream &out,
       out.indent(5 * 2);
       out << "\"commandLine\": [\n";
       for (auto &arg : swiftDeps->buildCommandLine) {
+
         out.indent(6 * 2);
         out << "\"" << arg << "\"";
         if (&arg != &swiftDeps->buildCommandLine.back())
 static void writeJSON(llvm::raw_ostream &out,
       if (!swiftDeps->extraPCMArgs.empty()) {
         out.indent(5 * 2);
         out << "\"extraPcmArgs\": [\n";
-        for (auto &arg :  swiftDeps->extraPCMArgs) {
+        for (auto &arg : swiftDeps->extraPCMArgs) {
           out.indent(6 * 2);
           out << "\"" << arg << "\"";
           if (&arg != &swiftDeps->extraPCMArgs.back())
 static void writeJSON(llvm::raw_ostream &out,
       if (swiftDeps->bridgingHeaderFile) {
         out.indent(5 * 2);
         out << "\"bridgingHeader\": {\n";
-        writeJSONSingleField(out, "path",
-                             *swiftDeps->bridgingHeaderFile, 6,
+        writeJSONSingleField(out, "path", *swiftDeps->bridgingHeaderFile, 6,
                              /*trailingComma=*/true);
-        writeJSONSingleField(out, "sourceFiles",
-                             swiftDeps->bridgingSourceFiles, 6,
+        writeJSONSingleField(out, "sourceFiles", swiftDeps->bridgingSourceFiles,
+                             6,
                              /*trailingComma=*/true);
         writeJSONSingleField(out, "moduleDependencies",
                              swiftDeps->bridgingModuleDependencies, 6,
 static void writeJSON(llvm::raw_ostream &out,
         out.indent(5 * 2);
         out << "}\n";
       }
+    } else if (externalSwiftDep) {
+      out << "\"swiftPlaceholder\": {\n";
+
+      // Module doc file
+      if (externalSwiftDep->moduleDocPath != "")
+        writeJSONSingleField(out, "moduleDocPath",
+                             externalSwiftDep->moduleDocPath,
+                             /*indentLevel=*/5,
+                             /*trailingComma=*/true);
+
+      // Module Source Info file
+      if (externalSwiftDep->moduleDocPath != "")
+        writeJSONSingleField(out, "moduleSourceInfoPath",
+                             externalSwiftDep->sourceInfoPath,
+                             /*indentLevel=*/5,
+                             /*trailingComma=*/true);
     } else {
       out << "\"clang\": {\n";

       // Module map file.
-      writeJSONSingleField(out, "moduleMapPath",
-                           clangDeps->moduleMapFile, 5,
+      writeJSONSingleField(out, "moduleMapPath", clangDeps->moduleMapFile, 5,
                            /*trailingComma=*/true);

       // Context hash.
-      writeJSONSingleField(out, "contextHash",
-                           clangDeps->contextHash, 5,
+      writeJSONSingleField(out, "contextHash", clangDeps->contextHash, 5,
                            /*trailingComma=*/true);

       // Command line.
-      writeJSONSingleField(out, "commandLine",
-                           clangDeps->nonPathCommandLine, 5,
+      writeJSONSingleField(out, "commandLine", clangDeps->nonPathCommandLine, 5,
                            /*trailingComma=*/false);
     }

 static void writeJSON(llvm::raw_ostream &out,
     out << "}\n";
     out.indent(3 * 2);
     out << "}\n";
-
     out.indent(2 * 2);
     out << "}";

 bool swift::scanDependencies(CompilerInstance &instance) {
       depTracker->addDependency(sourceFile, /*IsSystem=*/false);
     for (const auto &bridgingSourceFile : swiftDeps->bridgingSourceFiles)
       depTracker->addDependency(bridgingSourceFile, /*IsSystem=*/false);
-  } else {
-    auto clangDeps = deps->getAsClangModule();
+  } else if (auto clangDeps = deps->getAsClangModule()) {
     if (!clangDeps->moduleMapFile.empty())
       depTracker->addDependency(clangDeps->moduleMapFile, /*IsSystem=*/false);
     for (const auto &sourceFile : clangDeps->fileDependencies)
--- a/lib/Serialization/ModuleDependencyScanner.cpp
+++ b/lib/Serialization/ModuleDependencyScanner.cpp
 //
 //===----------------------------------------------------------------------===//

-#include "swift/Serialization/SerializedModuleLoader.h"
 #include "swift/AST/ASTContext.h"
 #include "swift/AST/DiagnosticSuppression.h"
+#include "swift/AST/DiagnosticsFrontend.h"
 #include "swift/AST/ModuleDependencies.h"
 #include "swift/AST/SourceFile.h"
 #include "swift/Basic/FileTypes.h"
+#include "swift/Frontend/ModuleInterfaceLoader.h"
+#include "swift/Serialization/SerializedModuleLoader.h"
 #include "swift/Subsystems.
```
h " <nl> using namespace swift ; <nl> using llvm : : ErrorOr ; <nl> class ModuleDependencyScanner : public SerializedModuleLoaderBase { <nl> public : <nl> Optional < ModuleDependencies > dependencies ; <nl> <nl> - ModuleDependencyScanner ( ASTContext & ctx , ModuleLoadingMode LoadMode , <nl> - Identifier moduleName , <nl> - InterfaceSubContextDelegate & astDelegate ) <nl> + / / / Describes the kind of dependencies this scanner is able to identify <nl> + ModuleDependenciesKind dependencyKind ; <nl> + <nl> + ModuleDependencyScanner ( <nl> + ASTContext & ctx , ModuleLoadingMode LoadMode , Identifier moduleName , <nl> + InterfaceSubContextDelegate & astDelegate , <nl> + ModuleDependenciesKind dependencyKind = ModuleDependenciesKind : : Swift ) <nl> : SerializedModuleLoaderBase ( ctx , nullptr , LoadMode , <nl> / * IgnoreSwiftSourceInfoFile = * / true ) , <nl> - moduleName ( moduleName ) , astDelegate ( astDelegate ) { } <nl> + moduleName ( moduleName ) , astDelegate ( astDelegate ) , <nl> + dependencyKind ( dependencyKind ) { } <nl> <nl> virtual std : : error_code findModuleFilesInDirectory ( <nl> AccessPathElem ModuleID , <nl> class ModuleDependencyScanner : public SerializedModuleLoaderBase { <nl> llvm_unreachable ( " Not used " ) ; <nl> } <nl> } ; <nl> - } <nl> + <nl> + / / / A ModuleLoader that loads placeholder dependency module stubs specified in <nl> + / / / - placeholder - dependency - module - map - file <nl> + / / / This loader is used only in dependency scanning to inform the scanner that a <nl> + / / / set of modules constitute placeholder dependencies that are not visible to the <nl> + / / / scanner but will nevertheless be provided by the scanner ' s clients . <nl> + / / / This " loader " will not attempt to load any module files . 
<nl> + class PlaceholderSwiftModuleScanner : public ModuleDependencyScanner { <nl> + / / / Scan the given placeholder module map <nl> + void parsePlaceholderModuleMap ( StringRef fileName ) { <nl> + ExplicitModuleMapParser parser ( Allocator ) ; <nl> + auto result = <nl> + parser . parseSwiftExplicitModuleMap ( fileName , PlaceholderDependencyModuleMap ) ; <nl> + if ( result = = std : : errc : : invalid_argument ) { <nl> + Ctx . Diags . diagnose ( SourceLoc ( ) , <nl> + diag : : placeholder_dependency_module_map_corrupted , <nl> + fileName ) ; <nl> + } <nl> + else if ( result = = std : : errc : : no_such_file_or_directory ) { <nl> + Ctx . Diags . diagnose ( SourceLoc ( ) , <nl> + diag : : placeholder_dependency_module_map_missing , <nl> + fileName ) ; <nl> + } <nl> + } <nl> + <nl> + llvm : : StringMap < ExplicitModuleInfo > PlaceholderDependencyModuleMap ; <nl> + llvm : : BumpPtrAllocator Allocator ; <nl> + <nl> + public : <nl> + PlaceholderSwiftModuleScanner ( ASTContext & ctx , ModuleLoadingMode LoadMode , <nl> + Identifier moduleName , <nl> + StringRef PlaceholderDependencyModuleMap , <nl> + InterfaceSubContextDelegate & astDelegate ) <nl> + : ModuleDependencyScanner ( ctx , LoadMode , moduleName , astDelegate , <nl> + ModuleDependenciesKind : : SwiftPlaceholder ) { <nl> + <nl> + / / FIXME : Find a better place for this map to live , to avoid <nl> + / / doing the parsing on every module . <nl> + parsePlaceholderModuleMap ( PlaceholderDependencyModuleMap ) ; <nl> + } <nl> + <nl> + std : : error_code findModuleFilesInDirectory ( <nl> + AccessPathElem ModuleID , const SerializedModuleBaseName & BaseName , <nl> + SmallVectorImpl < char > * ModuleInterfacePath , <nl> + std : : unique_ptr < llvm : : MemoryBuffer > * ModuleBuffer , <nl> + std : : unique_ptr < llvm : : MemoryBuffer > * ModuleDocBuffer , <nl> + std : : unique_ptr < llvm : : MemoryBuffer > * ModuleSourceInfoBuffer ) override { <nl> + StringRef moduleName = ModuleID . Item . 
str ( ) ; <nl> + auto it = PlaceholderDependencyModuleMap . find ( moduleName ) ; <nl> + / / If no placeholder module stub path is given matches the name , return with an <nl> + / / error code . <nl> + if ( it = = PlaceholderDependencyModuleMap . end ( ) ) { <nl> + return std : : make_error_code ( std : : errc : : not_supported ) ; <nl> + } <nl> + auto & moduleInfo = it - > getValue ( ) ; <nl> + assert ( ! moduleInfo . moduleBuffer & & <nl> + " Placeholder dependency module stubs cannot have an associated buffer " ) ; <nl> + <nl> + auto dependencies = ModuleDependencies : : forPlaceholderSwiftModuleStub ( <nl> + moduleInfo . modulePath , moduleInfo . moduleDocPath , <nl> + moduleInfo . moduleSourceInfoPath ) ; <nl> + this - > dependencies = std : : move ( dependencies ) ; <nl> + return std : : error_code { } ; <nl> + } <nl> + } ; <nl> + } / / namespace <nl> <nl> static std : : vector < std : : string > getCompiledCandidates ( ASTContext & ctx , <nl> StringRef moduleName , <nl> Optional < ModuleDependencies > SerializedModuleLoaderBase : : getModuleDependencies ( <nl> if ( auto found = cache . findDependencies ( <nl> moduleName , ModuleDependenciesKind : : Swift ) ) <nl> return found ; <nl> + if ( auto found = <nl> + cache . findDependencies ( moduleName , ModuleDependenciesKind : : SwiftPlaceholder ) ) <nl> + return found ; <nl> <nl> - / / Check whether there is a module with this name that we can import . <nl> auto moduleId = Ctx . getIdentifier ( moduleName ) ; <nl> - ModuleDependencyScanner scanner ( Ctx , LoadMode , moduleId , delegate ) ; <nl> - if ( ! scanner . canImportModule ( { moduleId , SourceLoc ( ) } ) ) <nl> - return None ; <nl> - <nl> - / / Record the dependencies . <nl> - cache . recordDependencies ( moduleName , * scanner . dependencies , <nl> - ModuleDependenciesKind : : Swift ) ; <nl> - return std : : move ( scanner . dependencies ) ; <nl> + / / Instantiate dependency scanning " loaders " . 
<nl> + SmallVector < std : : unique_ptr < ModuleDependencyScanner > , 2 > scanners ; <nl> + scanners . push_back ( std : : make_unique < ModuleDependencyScanner > ( <nl> + Ctx , LoadMode , moduleId , delegate ) ) ; <nl> + scanners . push_back ( std : : make_unique < PlaceholderSwiftModuleScanner > ( <nl> + Ctx , LoadMode , moduleId , Ctx . SearchPathOpts . PlaceholderDependencyModuleMap , <nl> + delegate ) ) ; <nl> + <nl> + / / Check whether there is a module with this name that we can import . <nl> + for ( auto & scanner : scanners ) { <nl> + if ( scanner - > canImportModule ( { moduleId , SourceLoc ( ) } ) ) { <nl> + / / Record the dependencies . <nl> + cache . recordDependencies ( moduleName , * ( scanner - > dependencies ) , <nl> + scanner - > dependencyKind ) ; <nl> + return std : : move ( scanner - > dependencies ) ; <nl> + } <nl> + } <nl> + <nl> + return None ; <nl> } <nl> mmm a / test / ScanDependencies / Inputs / ModuleDependencyGraph . swift <nl> ppp b / test / ScanDependencies / Inputs / ModuleDependencyGraph . swift <nl> import Foundation <nl> <nl> enum ModuleDependencyId : Hashable { <nl> case swift ( String ) <nl> + case swiftPlaceholder ( String ) <nl> case clang ( String ) <nl> <nl> var moduleName : String { <nl> switch self { <nl> - case . swift ( let name ) : return name <nl> - case . clang ( let name ) : return name <nl> + case . swift ( let name ) : return name <nl> + case . swiftPlaceholder ( let name ) : return name <nl> + case . clang ( let name ) : return name <nl> } <nl> } <nl> } <nl> enum ModuleDependencyId : Hashable { <nl> extension ModuleDependencyId : Codable { <nl> enum CodingKeys : CodingKey { <nl> case swift <nl> + case swiftPlaceholder <nl> case clang <nl> } <nl> <nl> extension ModuleDependencyId : Codable { <nl> let moduleName = try container . decode ( String . self , forKey : . swift ) <nl> self = . swift ( moduleName ) <nl> } catch { <nl> - let moduleName = try container . decode ( String . self , forKey : . 
clang ) <nl> - self = . clang ( moduleName ) <nl> + do { <nl> + let moduleName = try container . decode ( String . self , forKey : . swiftPlaceholder ) <nl> + self = . swiftPlaceholder ( moduleName ) <nl> + } catch { <nl> + let moduleName = try container . decode ( String . self , forKey : . clang ) <nl> + self = . clang ( moduleName ) <nl> + } <nl> } <nl> } <nl> <nl> extension ModuleDependencyId : Codable { <nl> switch self { <nl> case . swift ( let moduleName ) : <nl> try container . encode ( moduleName , forKey : . swift ) <nl> + case . swiftPlaceholder ( let moduleName ) : <nl> + try container . encode ( moduleName , forKey : . swift ) <nl> case . clang ( let moduleName ) : <nl> try container . encode ( moduleName , forKey : . clang ) <nl> } <nl> struct SwiftModuleDetails : Codable { <nl> var extraPcmArgs : [ String ] ? <nl> } <nl> <nl> + / / / Details specific to Swift external modules . <nl> + struct swiftPlaceholderModuleDetails : Codable { <nl> + / / / The path to the . swiftModuleDoc file . <nl> + var moduleDocPath : String ? <nl> + <nl> + / / / The path to the . swiftSourceInfo file . <nl> + var moduleSourceInfoPath : String ? <nl> + } <nl> + <nl> / / / Details specific to Clang modules . <nl> struct ClangModuleDetails : Codable { <nl> / / / The path to the module map used to build this module . <nl> struct ModuleDependencies : Codable { <nl> var modulePath : String <nl> <nl> / / / The source files used to build this module . <nl> - var sourceFiles : [ String ] = [ ] <nl> + var sourceFiles : [ String ] ? = [ ] <nl> <nl> / / / The set of direct module dependencies of this module . <nl> - var directDependencies : [ ModuleDependencyId ] = [ ] <nl> + var directDependencies : [ ModuleDependencyId ] ? = [ ] <nl> <nl> / / / Specific details of a particular kind of module . <nl> var details : Details <nl> struct ModuleDependencies : Codable { <nl> / / / a bridging header . 
<nl> case swift ( SwiftModuleDetails ) <nl> <nl> + / / / Swift external modules carry additional details that specify their <nl> + / / / module doc path and source info paths . <nl> + case swiftPlaceholder ( swiftPlaceholderModuleDetails ) <nl> + <nl> / / / Clang modules are built from a module map file . <nl> case clang ( ClangModuleDetails ) <nl> } <nl> struct ModuleDependencies : Codable { <nl> extension ModuleDependencies . Details : Codable { <nl> enum CodingKeys : CodingKey { <nl> case swift <nl> + case swiftPlaceholder <nl> case clang <nl> } <nl> <nl> extension ModuleDependencies . Details : Codable { <nl> let details = try container . decode ( SwiftModuleDetails . self , forKey : . swift ) <nl> self = . swift ( details ) <nl> } catch { <nl> - let details = try container . decode ( ClangModuleDetails . self , forKey : . clang ) <nl> - self = . clang ( details ) <nl> + do { <nl> + let details = try container . decode ( swiftPlaceholderModuleDetails . self , forKey : . swiftPlaceholder ) <nl> + self = . swiftPlaceholder ( details ) <nl> + } catch { <nl> + let details = try container . decode ( ClangModuleDetails . self , forKey : . clang ) <nl> + self = . clang ( details ) <nl> + } <nl> } <nl> } <nl> <nl> extension ModuleDependencies . Details : Codable { <nl> switch self { <nl> case . swift ( let details ) : <nl> try container . encode ( details , forKey : . swift ) <nl> + case . swiftPlaceholder ( let details ) : <nl> + try container . encode ( details , forKey : . swiftPlaceholder ) <nl> case . clang ( let details ) : <nl> try container . encode ( details , forKey : . clang ) <nl> } <nl> new file mode 100644 <nl> index 000000000000 . . 601b60d62b4e <nl> mmm / dev / null <nl> ppp b / test / ScanDependencies / module_deps_external . swift <nl> <nl> + / / RUN : % empty - directory ( % t ) <nl> + / / RUN : mkdir - p % t / clang - module - cache <nl> + / / RUN : mkdir - p % t / inputs <nl> + <nl> + / / RUN : echo " [ { " > % / t / inputs / map . 
json <nl> + / / RUN : echo " \ " moduleName \ " : \ " SomeExternalModule \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " modulePath \ " : \ " % / t / inputs / SomeExternalModule . swiftmodule \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " docPath \ " : \ " % / t / inputs / SomeExternalModule . swiftdoc \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " sourceInfoPath \ " : \ " % / t / inputs / SomeExternalModule . swiftsourceinfo \ " " > > % / t / inputs / map . json <nl> + / / RUN : echo " } , " > > % / t / inputs / map . json <nl> + / / RUN : echo " { " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " moduleName \ " : \ " Swift \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " modulePath \ " : \ " % / stdlib_module \ " " > > % / t / inputs / map . json <nl> + / / RUN : echo " } , " > > % / t / inputs / map . json <nl> + / / RUN : echo " { " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " moduleName \ " : \ " SwiftOnoneSupport \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " modulePath \ " : \ " % / ononesupport_module \ " " > > % / t / inputs / map . json <nl> + / / RUN : echo " } ] " > > % / t / inputs / map . json <nl> + <nl> + <nl> + / / RUN : echo " [ { " > % / t / inputs / map . json <nl> + / / RUN : echo " \ " moduleName \ " : \ " SomeExternalModule \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " modulePath \ " : \ " % / t / inputs / SomeExternalModule . swiftmodule \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " docPath \ " : \ " % / t / inputs / SomeExternalModule . swiftdoc \ " , " > > % / t / inputs / map . json <nl> + / / RUN : echo " \ " sourceInfoPath \ " : \ " % / t / inputs / SomeExternalModule . swiftsourceinfo \ " " > > % / t / inputs / map . json <nl> + / / RUN : echo " } ] " > > % / t / inputs / map . 
json <nl> + <nl> + / / RUN : % target - swift - frontend - scan - dependencies - module - cache - path % t / clang - module - cache % s - placeholder - dependency - module - map - file % t / inputs / map . json - o % t / deps . json - I % S / Inputs / CHeaders - I % S / Inputs / Swift - emit - dependencies - emit - dependencies - path % t / deps . d - import - objc - header % S / Inputs / CHeaders / Bridging . h - swift - version 4 - disable - implicit - swift - modules - Xcc - Xclang - Xcc - fno - implicit - modules <nl> + <nl> + / / Check the contents of the JSON output <nl> + / / RUN : % FileCheck % s < % t / deps . json <nl> + <nl> + / / Check the make - style dependencies file <nl> + / / RUN : % FileCheck % s - check - prefix CHECK - MAKE - DEPS < % t / deps . d <nl> + <nl> + / / Check that the JSON parses correctly into the canonical Swift data <nl> + / / structures . <nl> + <nl> + / / RUN : mkdir - p % t / PrintGraph <nl> + / / RUN : cp % S / Inputs / PrintGraph . swift % t / main . swift <nl> + / / RUN : % target - build - swift % S / Inputs / ModuleDependencyGraph . swift % t / main . swift - o % t / main <nl> + / / RUN : % target - codesign % t / main <nl> + / / RUN : % target - run % t / main % t / deps . json <nl> + <nl> + / / REQUIRES : executable_test <nl> + / / REQUIRES : objc_interop <nl> + import SomeExternalModule <nl> + <nl> + / / CHECK : " mainModuleName " : " deps " <nl> + <nl> + / / / mmmmmm - - Main module <nl> + / / CHECK - LABEL : " modulePath " : " deps . swiftmodule " , <nl> + / / CHECK - NEXT : sourceFiles <nl> + / / CHECK - NEXT : module_deps_external . 
swift <nl> + <nl> + / / CHECK : directDependencies <nl> + / / CHECK - NEXT : { <nl> + / / CHECK - NEXT : " swiftPlaceholder " : " SomeExternalModule " <nl> + / / CHECK - NEXT : } <nl> + / / CHECK - NEXT : { <nl> + / / CHECK - NEXT : " swift " : " Swift " <nl> + / / CHECK - NEXT : } <nl> + / / CHECK - NEXT : { <nl> + / / CHECK - NEXT : " swift " : " SwiftOnoneSupport " <nl> + / / CHECK - NEXT : } <nl> + / / CHECK - NEXT : { <nl> + / / CHECK - NEXT : " swift " : " F " <nl> + / / CHECK - NEXT : } <nl> + / / CHECK - NEXT : ] , <nl> + <nl> + / / CHECK : " extraPcmArgs " : [ <nl> + / / CHECK - NEXT : " - Xcc " , <nl> + / / CHECK - NEXT : " - target " , <nl> + / / CHECK - NEXT : " - Xcc " , <nl> + / / CHECK : " - fapinotes - swift - version = 4 " <nl> + <nl> + / / CHECK : " bridgingHeader " : <nl> + / / CHECK - NEXT : " path " : <nl> + / / CHECK - SAME : Bridging . h <nl> + <nl> + / / CHECK - NEXT : " sourceFiles " : <nl> + / / CHECK - NEXT : Bridging . h <nl> + / / CHECK - NEXT : BridgingOther . h <nl> + <nl> + / / CHECK : " moduleDependencies " : [ <nl> + / / CHECK - NEXT : " F " <nl> + / / CHECK - NEXT : ] <nl> + <nl> + / / / mmmmmm - - Swift external module SomeExternalModule <nl> + / / CHECK - LABEL : " modulePath " : " BUILD_DIR / { { . * } } / ScanDependencies / Output / module_deps_external . swift . tmp / inputs / SomeExternalModule . swiftmodule " , <nl> + / / CHECK - NEXT : " details " : { <nl> + / / CHECK - NEXT : " swiftPlaceholder " : { <nl> + / / CHECK - NEXT : " moduleDocPath " : " BUILD_DIR / { { . * } } / ScanDependencies / Output / module_deps_external . swift . tmp / inputs / SomeExternalModule . swiftdoc " , <nl> + / / CHECK - NEXT : " moduleSourceInfoPath " : " BUILD_DIR / { { . * } } / ScanDependencies / Output / module_deps_external . swift . tmp / inputs / SomeExternalModule . swiftsourceinfo " , <nl> + <nl> + / / / mmmmmm - - Swift module Swift <nl> + / / CHECK - LABEL : " modulePath " : " Swift . 
swiftmodule " , <nl> + <nl> + / / CHECK : directDependencies <nl> + / / CHECK - NEXT : { <nl> + / / CHECK - NEXT : " clang " : " SwiftShims " <nl> + <nl> + / / / mmmmmm - - Clang module SwiftShims <nl> + / / CHECK - LABEL : " modulePath " : " SwiftShims . pcm " , <nl> + <nl> + / / Check make - style dependencies <nl> + / / CHECK - MAKE - DEPS : module_deps_external . swift <nl> + / / CHECK - MAKE - DEPS - SAME : Bridging . h <nl> + / / CHECK - MAKE - DEPS - SAME : BridgingOther . h <nl> + / / CHECK - MAKE - DEPS - SAME : module . modulemap <nl>
|
Merge remote - tracking branch ' origin / master ' into master - rebranch
|
apple/swift
|
113db8c6d8db99f646b5390839c821aa63dc2265
|
2020-07-28T21:43:59Z
|
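In the serialization change above, `getModuleDependencies` now walks an ordered list of scanners (the regular Swift scanner first, then the new `PlaceholderSwiftModuleScanner`) and records dependencies from the first one that can import the module. A minimal standalone sketch of that first-match chain follows; the types and names here are simplified stand-ins for illustration, not the swift compiler's real API.

```cpp
#include <memory>
#include <optional>
#include <string>
#include <vector>

// Each scanner either resolves a module name to a dependency kind or
// declines, loosely mirroring canImportModule() plus dependencyKind
// in the patch above.
struct Scanner {
    virtual ~Scanner() = default;
    virtual std::optional<std::string> scan(const std::string& module) const = 0;
};

struct SwiftScanner : Scanner {
    std::optional<std::string> scan(const std::string& m) const override {
        if (m == "Swift")
            return std::string("swift");
        return std::nullopt;
    }
};

struct PlaceholderScanner : Scanner {
    std::optional<std::string> scan(const std::string& m) const override {
        if (m == "SomeExternalModule")
            return std::string("swiftPlaceholder");
        return std::nullopt;
    }
};

// The first scanner that recognizes the module wins; otherwise None,
// as in the rewritten tail of getModuleDependencies().
inline std::optional<std::string>
getModuleDependencies(const std::string& module,
                      const std::vector<std::unique_ptr<Scanner>>& scanners) {
    for (const auto& s : scanners)
        if (auto kind = s->scan(module))
            return kind;
    return std::nullopt;
}
```

Ordering matters in this pattern: putting the regular scanner ahead of the placeholder scanner means a module that is genuinely importable is never shadowed by a placeholder stub of the same name.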
mmm a / hphp / runtime / vm / func . h <nl> ppp b / hphp / runtime / vm / func . h <nl> struct Func { <nl> ClonedFlag ( const ClonedFlag & ) { } <nl> ClonedFlag & operator = ( const ClonedFlag & ) = delete ; <nl> <nl> - std : : atomic_flag flag { ATOMIC_FLAG_INIT } ; <nl> + std : : atomic_flag flag = ATOMIC_FLAG_INIT ; <nl> } ; <nl> <nl> <nl>
|
Fix atomic_flag initialization in Func
|
facebook/hhvm
|
9218bdb8f7bb4933d6187df427807077516a1de7
|
2015-06-12T18:32:19Z
|
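The hhvm record above replaces the braced member initializer `flag { ATOMIC_FLAG_INIT }` with `flag = ATOMIC_FLAG_INIT`. A minimal standalone sketch of why (the struct here is an illustrative stand-in, not hhvm's real `Func` internals): before C++20, `ATOMIC_FLAG_INIT` is the only portable way to initialize a `std::atomic_flag` to the clear state, and the standard prescribes exactly the `= ATOMIC_FLAG_INIT` spelling; wrapping the macro in extra braces is not guaranteed to compile, because the macro may itself expand to a braced initializer.

```cpp
#include <atomic>

// Sketch of the pattern the patch fixes: an atomic_flag data member.
// Pre-C++20, ATOMIC_FLAG_INIT must appear exactly as "= ATOMIC_FLAG_INIT";
// the old brace form could expand to nested braces that do not match the
// type. (Since C++20 the default constructor clears the flag and the
// macro is deprecated.)
struct ClonedFlag {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;  // starts in the clear state
};

// test_and_set() returns the previous state, so the first caller sees
// "true" here and every later caller sees the flag already set.
inline bool tryClaim(ClonedFlag& c) {
    return !c.flag.test_and_set();
}
```

With this form the member compiles as a non-static data member initializer under C++11 through C++17, which is exactly how the patched `ClonedFlag` uses it.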
mmm a / data / widgets / options . xml <nl> ppp b / data / widgets / options . xml <nl> <nl> < listitem text = " Editor " value = " section_editor " / > <nl> < listitem text = " Timeline " value = " section_timeline " / > <nl> < listitem text = " Cursors " value = " section_cursors " / > <nl> - < listitem text = " Grid & amp ; & amp ; Background " value = " section_grid " / > <nl> + < listitem text = " Background " value = " section_bg " / > <nl> + < listitem text = " Grid " value = " section_grid " / > <nl> < listitem text = " Undo " value = " section_undo " / > <nl> < listitem text = " Theme " value = " section_theme " / > <nl> < listitem text = " Experimental " value = " section_experimental " / > <nl> <nl> < / grid > <nl> < / vbox > <nl> <nl> - < ! - - Grid & background - - > <nl> + < ! - - Background - - > <nl> + < vbox id = " section_bg " > <nl> + < combobox id = " bg_scope " / > <nl> + <nl> + < separator text = " Checked Background " horizontal = " true " / > <nl> + < hbox > <nl> + < label text = " Size : " / > <nl> + < combobox id = " checked_bg_size " expansive = " true " / > <nl> + < / hbox > <nl> + < check text = " Apply Zoom " id = " checked_bg_zoom " / > <nl> + < hbox > <nl> + < label text = " Colors : " / > <nl> + < box horizontal = " true " id = " checked_bg_color1_box " / > <nl> + < box horizontal = " true " id = " checked_bg_color2_box " / > <nl> + < / hbox > <nl> + <nl> + < hbox > <nl> + < hbox expansive = " true " / > <nl> + < button id = " reset_bg " text = " Reset " width = " 60 " / > <nl> + < / hbox > <nl> + < / vbox > <nl> + <nl> + < ! 
- - Grid - - > <nl> < vbox id = " section_grid " > <nl> < combobox id = " grid_scope " / > <nl> < separator text = " Grid " horizontal = " true " / > <nl> <nl> < check id = " pixel_grid_auto_opacity " text = " Auto " / > <nl> < / grid > <nl> <nl> - < separator text = " Checked Background " horizontal = " true " / > <nl> - < hbox > <nl> - < label text = " Size : " / > <nl> - < combobox id = " checked_bg_size " expansive = " true " / > <nl> - < / hbox > <nl> - < check text = " Apply Zoom " id = " checked_bg_zoom " / > <nl> - < hbox > <nl> - < label text = " Colors : " / > <nl> - < box horizontal = " true " id = " checked_bg_color1_box " / > <nl> - < box horizontal = " true " id = " checked_bg_color2_box " / > <nl> - < / hbox > <nl> - <nl> < hbox > <nl> < hbox expansive = " true " / > <nl> - < button id = " reset " text = " Reset " width = " 60 " / > <nl> + < button id = " reset_grid " text = " Reset " width = " 60 " / > <nl> < / hbox > <nl> < / vbox > <nl> <nl> mmm a / src / app / commands / cmd_options . cpp <nl> ppp b / src / app / commands / cmd_options . cpp <nl> class OptionsWindow : public app : : gen : : Options { <nl> autoScroll ( ) - > setSelected ( true ) ; <nl> <nl> / / Scope <nl> + bgScope ( ) - > addItem ( " New Documents " ) ; <nl> gridScope ( ) - > addItem ( " New Documents " ) ; <nl> if ( context - > activeDocument ( ) ) { <nl> + bgScope ( ) - > addItem ( " Active Document " ) ; <nl> + bgScope ( ) - > setSelectedItemIndex ( 1 ) ; <nl> + bgScope ( ) - > Change . connect ( base : : Bind < void > ( & OptionsWindow : : onChangeBgScope , this ) ) ; <nl> + <nl> gridScope ( ) - > addItem ( " Active Document " ) ; <nl> gridScope ( ) - > setSelectedItemIndex ( 1 ) ; <nl> gridScope ( ) - > Change . 
connect ( base : : Bind < void > ( & OptionsWindow : : onChangeGridScope , this ) ) ; <nl> class OptionsWindow : public app : : gen : : Options { <nl> checkedBgColor1Box ( ) - > addChild ( m_checked_bg_color1 ) ; <nl> checkedBgColor2Box ( ) - > addChild ( m_checked_bg_color2 ) ; <nl> <nl> - / / Reset button <nl> - reset ( ) - > Click . connect ( base : : Bind < void > ( & OptionsWindow : : onReset , this ) ) ; <nl> + / / Reset buttons <nl> + resetBg ( ) - > Click . connect ( base : : Bind < void > ( & OptionsWindow : : onResetBg , this ) ) ; <nl> + resetGrid ( ) - > Click . connect ( base : : Bind < void > ( & OptionsWindow : : onResetGrid , this ) ) ; <nl> <nl> / / Links <nl> locateFile ( ) - > Click . connect ( base : : Bind < void > ( & OptionsWindow : : onLocateConfigFile , this ) ) ; <nl> class OptionsWindow : public app : : gen : : Options { <nl> / / Apply button <nl> buttonApply ( ) - > Click . connect ( base : : Bind < void > ( & OptionsWindow : : saveConfig , this ) ) ; <nl> <nl> + onChangeBgScope ( ) ; <nl> onChangeGridScope ( ) ; <nl> sectionListbox ( ) - > selectIndex ( m_curSection ) ; <nl> } <nl> class OptionsWindow : public app : : gen : : Options { <nl> loadThemes ( ) ; <nl> } <nl> <nl> + void onChangeBgScope ( ) { <nl> + int item = bgScope ( ) - > getSelectedItemIndex ( ) ; <nl> + <nl> + switch ( item ) { <nl> + case 0 : m_curPref = & m_globPref ; break ; <nl> + case 1 : m_curPref = & m_docPref ; break ; <nl> + } <nl> + <nl> + checkedBgSize ( ) - > setSelectedItemIndex ( int ( m_curPref - > bg . type ( ) ) ) ; <nl> + checkedBgZoom ( ) - > setSelected ( m_curPref - > bg . zoom ( ) ) ; <nl> + m_checked_bg_color1 - > setColor ( m_curPref - > bg . color1 ( ) ) ; <nl> + m_checked_bg_color2 - > setColor ( m_curPref - > bg . 
color2 ( ) ) ; <nl> + } <nl> + <nl> void onChangeGridScope ( ) { <nl> int item = gridScope ( ) - > getSelectedItemIndex ( ) ; <nl> <nl> class OptionsWindow : public app : : gen : : Options { <nl> m_pixelGridColor - > setColor ( m_curPref - > pixelGrid . color ( ) ) ; <nl> pixelGridOpacity ( ) - > setValue ( m_curPref - > pixelGrid . opacity ( ) ) ; <nl> pixelGridAutoOpacity ( ) - > setSelected ( m_curPref - > pixelGrid . autoOpacity ( ) ) ; <nl> + } <nl> <nl> - checkedBgSize ( ) - > setSelectedItemIndex ( int ( m_curPref - > bg . type ( ) ) ) ; <nl> - checkedBgZoom ( ) - > setSelected ( m_curPref - > bg . zoom ( ) ) ; <nl> - m_checked_bg_color1 - > setColor ( m_curPref - > bg . color1 ( ) ) ; <nl> - m_checked_bg_color2 - > setColor ( m_curPref - > bg . color2 ( ) ) ; <nl> + void onResetBg ( ) { <nl> + / / Reset global preferences ( use default values specified in pref . xml ) <nl> + if ( m_curPref = = & m_globPref ) { <nl> + DocumentPreferences & pref = m_globPref ; <nl> + <nl> + checkedBgSize ( ) - > setSelectedItemIndex ( int ( pref . bg . type . defaultValue ( ) ) ) ; <nl> + checkedBgZoom ( ) - > setSelected ( pref . bg . zoom . defaultValue ( ) ) ; <nl> + m_checked_bg_color1 - > setColor ( pref . bg . color1 . defaultValue ( ) ) ; <nl> + m_checked_bg_color2 - > setColor ( pref . bg . color2 . defaultValue ( ) ) ; <nl> + } <nl> + / / Reset document preferences with global settings <nl> + else { <nl> + DocumentPreferences & pref = m_globPref ; <nl> + <nl> + checkedBgSize ( ) - > setSelectedItemIndex ( int ( pref . bg . type ( ) ) ) ; <nl> + checkedBgZoom ( ) - > setSelected ( pref . bg . zoom ( ) ) ; <nl> + m_checked_bg_color1 - > setColor ( pref . bg . color1 ( ) ) ; <nl> + m_checked_bg_color2 - > setColor ( pref . bg . color2 ( ) ) ; <nl> + } <nl> } <nl> <nl> - void onReset ( ) { <nl> + void onResetGrid ( ) { <nl> / / Reset global preferences ( use default values specified in pref . 
xml ) <nl> if ( m_curPref = = & m_globPref ) { <nl> DocumentPreferences & pref = m_globPref ; <nl> class OptionsWindow : public app : : gen : : Options { <nl> m_pixelGridColor - > setColor ( pref . pixelGrid . color . defaultValue ( ) ) ; <nl> pixelGridOpacity ( ) - > setValue ( pref . pixelGrid . opacity . defaultValue ( ) ) ; <nl> pixelGridAutoOpacity ( ) - > setSelected ( pref . pixelGrid . autoOpacity . defaultValue ( ) ) ; <nl> - <nl> - checkedBgSize ( ) - > setSelectedItemIndex ( int ( pref . bg . type . defaultValue ( ) ) ) ; <nl> - checkedBgZoom ( ) - > setSelected ( pref . bg . zoom . defaultValue ( ) ) ; <nl> - m_checked_bg_color1 - > setColor ( pref . bg . color1 . defaultValue ( ) ) ; <nl> - m_checked_bg_color2 - > setColor ( pref . bg . color2 . defaultValue ( ) ) ; <nl> } <nl> / / Reset document preferences with global settings <nl> else { <nl> class OptionsWindow : public app : : gen : : Options { <nl> m_pixelGridColor - > setColor ( pref . pixelGrid . color ( ) ) ; <nl> pixelGridOpacity ( ) - > setValue ( pref . pixelGrid . opacity ( ) ) ; <nl> pixelGridAutoOpacity ( ) - > setSelected ( pref . pixelGrid . autoOpacity ( ) ) ; <nl> - <nl> - checkedBgSize ( ) - > setSelectedItemIndex ( int ( pref . bg . type ( ) ) ) ; <nl> - checkedBgZoom ( ) - > setSelected ( pref . bg . zoom ( ) ) ; <nl> - m_checked_bg_color1 - > setColor ( pref . bg . color1 ( ) ) ; <nl> - m_checked_bg_color2 - > setColor ( pref . bg . color2 ( ) ) ; <nl> } <nl> } <nl> <nl>
|
Divide Grid & Background sections in preferences
|
aseprite/aseprite
|
b0df5ac3f44569cccf95ec873cfd08705a460292
|
2016-12-05T14:06:32Z
|
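The aseprite record splits one options page into separate Background and Grid sections, each with its own scope combobox and reset button. The scope mechanism underneath is just a pointer swapped between the global and per-document preference sets; a minimal sketch of that pattern follows, with simplified stand-in types rather than aseprite's real Preferences API.

```cpp
// Sketch of the scope-switching pattern in cmd_options.cpp: the combobox
// index picks whether edits target the global defaults ("New Documents")
// or the active document's overrides, via a pointer to the current set.
struct BgPref {
    int  checkedSize = 16;   // checked-background square size
    bool zoom        = true; // "Apply Zoom"
};

struct OptionsState {
    BgPref  globPref;            // app-wide defaults
    BgPref  docPref;             // active document's overrides
    BgPref* curPref = &globPref; // whichever set the UI is editing

    // Mirrors onChangeBgScope(): 0 = "New Documents", 1 = "Active Document".
    void onChangeBgScope(int item) {
        curPref = (item == 0) ? &globPref : &docPref;
    }

    // Mirrors the document-scope branch of onResetBg(): document
    // preferences reset back to the current global settings.
    void onResetBg() {
        if (curPref != &globPref)
            *curPref = globPref;
    }
};
```

Splitting the sections also means each reset button can call only its own handler, so resetting the background no longer clobbers grid settings and vice versa, which is the behavioral point of the patch.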
mmm a / src / json . hpp <nl> ppp b / src / json . hpp <nl> struct hash < nlohmann : : json > <nl> <nl> / / / specialization for std : : less < value_t > <nl> template < > <nl> - struct less < : : nlohmann : : detail : : value_t > <nl> + struct less < : : nlohmann : : detail : : value_t > / / do not remove the space after ' < ' , <nl> + / / see https : / / github . com / nlohmann / json / pull / 679 <nl> { <nl> / * ! <nl> @ brief compare two value_t enum values <nl>
|
Merge pull request from traits / patch - 1
|
nlohmann/json
|
a46afd4008bed1f1b5d57a006fd2d2d0c24ae3a3
|
2017-08-10T09:39:13Z
|
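The nlohmann/json record adds a comment defending the space in `less < : : nlohmann : : detail : : value_t >`. The reason is C++ digraphs: `<:` is a standard alternative token for `[`, so under C++03 maximal-munch lexing `less<::nlohmann::detail::value_t>` tokenized `<:` as `[` and failed to parse. C++11 added a carve-out for the `<::` sequence, but the spaced spelling stays valid under every standard. A tiny standalone demonstration of the tokens involved (the names here are illustrative):

```cpp
// "<:" and ":>" are the standard alternative tokens (digraphs) for "["
// and "]", and unlike trigraphs they are always enabled. This line is
// lexed identically to: int digraph_demo[3] = {1, 2, 3};
int digraph_demo<:3:> = {1, 2, 3};

// Digraphs work anywhere "[" and "]" do, including subscripting.
inline int first_of_demo() { return digraph_demo<:0:>; }
```

Nothing in the json sources actually uses digraphs; keeping the space simply guarantees an old lexer never sees the `<:` pair in the first place, which is what the referenced pull request #679 settled on.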
mmm a / cocos / scripting / lua - bindings / auto / api / TextField . lua <nl> ppp b / cocos / scripting / lua - bindings / auto / api / TextField . lua <nl> <nl> - - @ param self <nl> - - @ return string # string ret ( return value : string ) <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + - - @ function [ parent = # TextField ] setPasswordStyleText <nl> + - - @ param self <nl> + - - @ param # char char <nl> + <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - @ function [ parent = # TextField ] getDeleteBackward <nl> - - @ param self <nl> <nl> - - @ param self <nl> - - @ param # bool bool <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + - - @ function [ parent = # TextField ] getPlaceHolderColor <nl> + - - @ param self <nl> + - - @ return color4b_table # color4b_table ret ( return value : color4b_table ) <nl> + <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - @ function [ parent = # TextField ] getPasswordStyleText <nl> - - @ param self <nl> <nl> - - @ param # bool bool <nl> <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> mmm @ function [ parent = # TextField ] setPasswordStyleText <nl> + - - @ function [ parent = # TextField ] isPasswordEnabled <nl> - - @ param self <nl> mmm @ param # char char <nl> + - - @ return bool # bool ret ( return value : bool ) <nl> <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - @ function [ parent = # TextField ] setDeleteBackward <nl> <nl> - - @ param # string str <nl> <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> mmm @ function [ parent = # TextField ] isPasswordEnabled <nl> + - - @ overload self , color4b_table <nl> + - - @ overload self , color3b_table <nl> + - - @ function [ parent = # TextField ] setPlaceHolderColor <nl> - - @ param self <nl> mmm @ return bool # bool ret ( return value : bool ) <nl> - <nl> + - - @ param # color3b_table color3b <nl> + <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - @ function [ parent = # TextField ] setTextHorizontalAlignment <nl> - - @ param self <nl> - - @ param # int texthalignment 
<nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + - - @ function [ parent = # TextField ] setTextColor <nl> + - - @ param self <nl> + - - @ param # color4b_table color4b <nl> + <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - @ function [ parent = # TextField ] getMaxLength <nl> - - @ param self <nl> mmm a / cocos / scripting / lua - bindings / auto / lua_cocos2dx_ui_auto . cpp <nl> ppp b / cocos / scripting / lua - bindings / auto / lua_cocos2dx_ui_auto . cpp <nl> int lua_cocos2dx_ui_TextField_getStringValue ( lua_State * tolua_S ) <nl> <nl> return 0 ; <nl> } <nl> + int lua_cocos2dx_ui_TextField_setPasswordStyleText ( lua_State * tolua_S ) <nl> + { <nl> + int argc = 0 ; <nl> + cocos2d : : ui : : TextField * cobj = nullptr ; <nl> + bool ok = true ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_Error tolua_err ; <nl> + # endif <nl> + <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! tolua_isusertype ( tolua_S , 1 , " ccui . TextField " , 0 , & tolua_err ) ) goto tolua_lerror ; <nl> + # endif <nl> + <nl> + cobj = ( cocos2d : : ui : : TextField * ) tolua_tousertype ( tolua_S , 1 , 0 ) ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! cobj ) <nl> + { <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_setPasswordStyleText ' " , nullptr ) ; <nl> + return 0 ; <nl> + } <nl> + # endif <nl> + <nl> + argc = lua_gettop ( tolua_S ) - 1 ; <nl> + if ( argc = = 1 ) <nl> + { <nl> + const char * arg0 ; <nl> + <nl> + std : : string arg0_tmp ; ok & = luaval_to_std_string ( tolua_S , 2 , & arg0_tmp , " ccui . TextField : setPasswordStyleText " ) ; arg0 = arg0_tmp . c_str ( ) ; <nl> + if ( ! ok ) <nl> + return 0 ; <nl> + cobj - > setPasswordStyleText ( arg0 ) ; <nl> + return 0 ; <nl> + } <nl> + CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . 
TextField : setPasswordStyleText " , argc , 1 ) ; <nl> + return 0 ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_lerror : <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_setPasswordStyleText ' . " , & tolua_err ) ; <nl> + # endif <nl> + <nl> + return 0 ; <nl> + } <nl> int lua_cocos2dx_ui_TextField_getDeleteBackward ( lua_State * tolua_S ) <nl> { <nl> int argc = 0 ; <nl> int lua_cocos2dx_ui_TextField_setPasswordEnabled ( lua_State * tolua_S ) <nl> <nl> return 0 ; <nl> } <nl> + int lua_cocos2dx_ui_TextField_getPlaceHolderColor ( lua_State * tolua_S ) <nl> + { <nl> + int argc = 0 ; <nl> + cocos2d : : ui : : TextField * cobj = nullptr ; <nl> + bool ok = true ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_Error tolua_err ; <nl> + # endif <nl> + <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! tolua_isusertype ( tolua_S , 1 , " ccui . TextField " , 0 , & tolua_err ) ) goto tolua_lerror ; <nl> + # endif <nl> + <nl> + cobj = ( cocos2d : : ui : : TextField * ) tolua_tousertype ( tolua_S , 1 , 0 ) ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! cobj ) <nl> + { <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_getPlaceHolderColor ' " , nullptr ) ; <nl> + return 0 ; <nl> + } <nl> + # endif <nl> + <nl> + argc = lua_gettop ( tolua_S ) - 1 ; <nl> + if ( argc = = 0 ) <nl> + { <nl> + if ( ! ok ) <nl> + return 0 ; <nl> + const cocos2d : : Color4B & ret = cobj - > getPlaceHolderColor ( ) ; <nl> + color4b_to_luaval ( tolua_S , ret ) ; <nl> + return 1 ; <nl> + } <nl> + CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . TextField : getPlaceHolderColor " , argc , 0 ) ; <nl> + return 0 ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_lerror : <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_getPlaceHolderColor ' . 
" , & tolua_err ) ; <nl> + # endif <nl> + <nl> + return 0 ; <nl> + } <nl> int lua_cocos2dx_ui_TextField_getPasswordStyleText ( lua_State * tolua_S ) <nl> { <nl> int argc = 0 ; <nl> int lua_cocos2dx_ui_TextField_setMaxLengthEnabled ( lua_State * tolua_S ) <nl> <nl> return 0 ; <nl> } <nl> - int lua_cocos2dx_ui_TextField_setPasswordStyleText ( lua_State * tolua_S ) <nl> + int lua_cocos2dx_ui_TextField_isPasswordEnabled ( lua_State * tolua_S ) <nl> { <nl> int argc = 0 ; <nl> cocos2d : : ui : : TextField * cobj = nullptr ; <nl> int lua_cocos2dx_ui_TextField_setPasswordStyleText ( lua_State * tolua_S ) <nl> # if COCOS2D_DEBUG > = 1 <nl> if ( ! cobj ) <nl> { <nl> - tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_setPasswordStyleText ' " , nullptr ) ; <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_isPasswordEnabled ' " , nullptr ) ; <nl> return 0 ; <nl> } <nl> # endif <nl> <nl> argc = lua_gettop ( tolua_S ) - 1 ; <nl> - if ( argc = = 1 ) <nl> + if ( argc = = 0 ) <nl> { <nl> - const char * arg0 ; <nl> - <nl> - std : : string arg0_tmp ; ok & = luaval_to_std_string ( tolua_S , 2 , & arg0_tmp , " ccui . TextField : setPasswordStyleText " ) ; arg0 = arg0_tmp . c_str ( ) ; <nl> if ( ! ok ) <nl> return 0 ; <nl> - cobj - > setPasswordStyleText ( arg0 ) ; <nl> - return 0 ; <nl> + bool ret = cobj - > isPasswordEnabled ( ) ; <nl> + tolua_pushboolean ( tolua_S , ( bool ) ret ) ; <nl> + return 1 ; <nl> } <nl> - CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . TextField : setPasswordStyleText " , argc , 1 ) ; <nl> + CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . TextField : isPasswordEnabled " , argc , 0 ) ; <nl> return 0 ; <nl> <nl> # if COCOS2D_DEBUG > = 1 <nl> tolua_lerror : <nl> - tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_setPasswordStyleText ' . 
" , & tolua_err ) ; <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_isPasswordEnabled ' . " , & tolua_err ) ; <nl> # endif <nl> <nl> return 0 ; <nl> int lua_cocos2dx_ui_TextField_setPlaceHolder ( lua_State * tolua_S ) <nl> <nl> return 0 ; <nl> } <nl> - int lua_cocos2dx_ui_TextField_isPasswordEnabled ( lua_State * tolua_S ) <nl> + int lua_cocos2dx_ui_TextField_setPlaceHolderColor ( lua_State * tolua_S ) <nl> + { <nl> + int argc = 0 ; <nl> + cocos2d : : ui : : TextField * cobj = nullptr ; <nl> + bool ok = true ; <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_Error tolua_err ; <nl> + # endif <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! tolua_isusertype ( tolua_S , 1 , " ccui . TextField " , 0 , & tolua_err ) ) goto tolua_lerror ; <nl> + # endif <nl> + cobj = ( cocos2d : : ui : : TextField * ) tolua_tousertype ( tolua_S , 1 , 0 ) ; <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! cobj ) <nl> + { <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_setPlaceHolderColor ' " , nullptr ) ; <nl> + return 0 ; <nl> + } <nl> + # endif <nl> + argc = lua_gettop ( tolua_S ) - 1 ; <nl> + do { <nl> + if ( argc = = 1 ) { <nl> + cocos2d : : Color4B arg0 ; <nl> + ok & = luaval_to_color4b ( tolua_S , 2 , & arg0 , " ccui . TextField : setPlaceHolderColor " ) ; <nl> + <nl> + if ( ! ok ) { break ; } <nl> + cobj - > setPlaceHolderColor ( arg0 ) ; <nl> + return 0 ; <nl> + } <nl> + } while ( 0 ) ; <nl> + ok = true ; <nl> + do { <nl> + if ( argc = = 1 ) { <nl> + cocos2d : : Color3B arg0 ; <nl> + ok & = luaval_to_color3b ( tolua_S , 2 , & arg0 , " ccui . TextField : setPlaceHolderColor " ) ; <nl> + <nl> + if ( ! ok ) { break ; } <nl> + cobj - > setPlaceHolderColor ( arg0 ) ; <nl> + return 0 ; <nl> + } <nl> + } while ( 0 ) ; <nl> + ok = true ; <nl> + CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . 
TextField : setPlaceHolderColor " , argc , 1 ) ; <nl> + return 0 ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_lerror : <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_setPlaceHolderColor ' . " , & tolua_err ) ; <nl> + # endif <nl> + <nl> + return 0 ; <nl> + } <nl> + int lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ( lua_State * tolua_S ) <nl> { <nl> int argc = 0 ; <nl> cocos2d : : ui : : TextField * cobj = nullptr ; <nl> int lua_cocos2dx_ui_TextField_isPasswordEnabled ( lua_State * tolua_S ) <nl> # if COCOS2D_DEBUG > = 1 <nl> if ( ! cobj ) <nl> { <nl> - tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_isPasswordEnabled ' " , nullptr ) ; <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ' " , nullptr ) ; <nl> return 0 ; <nl> } <nl> # endif <nl> <nl> argc = lua_gettop ( tolua_S ) - 1 ; <nl> - if ( argc = = 0 ) <nl> + if ( argc = = 1 ) <nl> { <nl> + cocos2d : : TextHAlignment arg0 ; <nl> + <nl> + ok & = luaval_to_int32 ( tolua_S , 2 , ( int * ) & arg0 , " ccui . TextField : setTextHorizontalAlignment " ) ; <nl> if ( ! ok ) <nl> return 0 ; <nl> - bool ret = cobj - > isPasswordEnabled ( ) ; <nl> - tolua_pushboolean ( tolua_S , ( bool ) ret ) ; <nl> - return 1 ; <nl> + cobj - > setTextHorizontalAlignment ( arg0 ) ; <nl> + return 0 ; <nl> } <nl> - CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . TextField : isPasswordEnabled " , argc , 0 ) ; <nl> + CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . TextField : setTextHorizontalAlignment " , argc , 1 ) ; <nl> return 0 ; <nl> <nl> # if COCOS2D_DEBUG > = 1 <nl> tolua_lerror : <nl> - tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_isPasswordEnabled ' . 
" , & tolua_err ) ; <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ' . " , & tolua_err ) ; <nl> # endif <nl> <nl> return 0 ; <nl> } <nl> - int lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ( lua_State * tolua_S ) <nl> + int lua_cocos2dx_ui_TextField_setTextColor ( lua_State * tolua_S ) <nl> { <nl> int argc = 0 ; <nl> cocos2d : : ui : : TextField * cobj = nullptr ; <nl> int lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ( lua_State * tolua_S ) <nl> # if COCOS2D_DEBUG > = 1 <nl> if ( ! cobj ) <nl> { <nl> - tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ' " , nullptr ) ; <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_ui_TextField_setTextColor ' " , nullptr ) ; <nl> return 0 ; <nl> } <nl> # endif <nl> int lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ( lua_State * tolua_S ) <nl> argc = lua_gettop ( tolua_S ) - 1 ; <nl> if ( argc = = 1 ) <nl> { <nl> - cocos2d : : TextHAlignment arg0 ; <nl> + cocos2d : : Color4B arg0 ; <nl> <nl> - ok & = luaval_to_int32 ( tolua_S , 2 , ( int * ) & arg0 , " ccui . TextField : setTextHorizontalAlignment " ) ; <nl> + ok & = luaval_to_color4b ( tolua_S , 2 , & arg0 , " ccui . TextField : setTextColor " ) ; <nl> if ( ! ok ) <nl> return 0 ; <nl> - cobj - > setTextHorizontalAlignment ( arg0 ) ; <nl> + cobj - > setTextColor ( arg0 ) ; <nl> return 0 ; <nl> } <nl> - CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . TextField : setTextHorizontalAlignment " , argc , 1 ) ; <nl> + CCLOG ( " % s has wrong number of arguments : % d , was expecting % d \ n " , " ccui . TextField : setTextColor " , argc , 1 ) ; <nl> return 0 ; <nl> <nl> # if COCOS2D_DEBUG > = 1 <nl> tolua_lerror : <nl> - tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ' . 
" , & tolua_err ) ; <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_ui_TextField_setTextColor ' . " , & tolua_err ) ; <nl> # endif <nl> <nl> return 0 ; <nl> int lua_register_cocos2dx_ui_TextField ( lua_State * tolua_S ) <nl> tolua_function ( tolua_S , " setAttachWithIME " , lua_cocos2dx_ui_TextField_setAttachWithIME ) ; <nl> tolua_function ( tolua_S , " getFontSize " , lua_cocos2dx_ui_TextField_getFontSize ) ; <nl> tolua_function ( tolua_S , " getStringValue " , lua_cocos2dx_ui_TextField_getStringValue ) ; <nl> + tolua_function ( tolua_S , " setPasswordStyleText " , lua_cocos2dx_ui_TextField_setPasswordStyleText ) ; <nl> tolua_function ( tolua_S , " getDeleteBackward " , lua_cocos2dx_ui_TextField_getDeleteBackward ) ; <nl> tolua_function ( tolua_S , " getPlaceHolder " , lua_cocos2dx_ui_TextField_getPlaceHolder ) ; <nl> tolua_function ( tolua_S , " getAttachWithIME " , lua_cocos2dx_ui_TextField_getAttachWithIME ) ; <nl> int lua_register_cocos2dx_ui_TextField ( lua_State * tolua_S ) <nl> tolua_function ( tolua_S , " attachWithIME " , lua_cocos2dx_ui_TextField_attachWithIME ) ; <nl> tolua_function ( tolua_S , " getStringLength " , lua_cocos2dx_ui_TextField_getStringLength ) ; <nl> tolua_function ( tolua_S , " setPasswordEnabled " , lua_cocos2dx_ui_TextField_setPasswordEnabled ) ; <nl> + tolua_function ( tolua_S , " getPlaceHolderColor " , lua_cocos2dx_ui_TextField_getPlaceHolderColor ) ; <nl> tolua_function ( tolua_S , " getPasswordStyleText " , lua_cocos2dx_ui_TextField_getPasswordStyleText ) ; <nl> tolua_function ( tolua_S , " setMaxLengthEnabled " , lua_cocos2dx_ui_TextField_setMaxLengthEnabled ) ; <nl> - tolua_function ( tolua_S , " setPasswordStyleText " , lua_cocos2dx_ui_TextField_setPasswordStyleText ) ; <nl> + tolua_function ( tolua_S , " isPasswordEnabled " , lua_cocos2dx_ui_TextField_isPasswordEnabled ) ; <nl> tolua_function ( tolua_S , " setDeleteBackward " , lua_cocos2dx_ui_TextField_setDeleteBackward ) ; <nl> tolua_function ( tolua_S 
, " setFontSize " , lua_cocos2dx_ui_TextField_setFontSize ) ; <nl> tolua_function ( tolua_S , " setPlaceHolder " , lua_cocos2dx_ui_TextField_setPlaceHolder ) ; <nl> - tolua_function ( tolua_S , " isPasswordEnabled " , lua_cocos2dx_ui_TextField_isPasswordEnabled ) ; <nl> + tolua_function ( tolua_S , " setPlaceHolderColor " , lua_cocos2dx_ui_TextField_setPlaceHolderColor ) ; <nl> tolua_function ( tolua_S , " setTextHorizontalAlignment " , lua_cocos2dx_ui_TextField_setTextHorizontalAlignment ) ; <nl> + tolua_function ( tolua_S , " setTextColor " , lua_cocos2dx_ui_TextField_setTextColor ) ; <nl> tolua_function ( tolua_S , " getMaxLength " , lua_cocos2dx_ui_TextField_getMaxLength ) ; <nl> tolua_function ( tolua_S , " isMaxLengthEnabled " , lua_cocos2dx_ui_TextField_isMaxLengthEnabled ) ; <nl> tolua_function ( tolua_S , " setDetachWithIME " , lua_cocos2dx_ui_TextField_setDetachWithIME ) ; <nl> mmm a / cocos / scripting / lua - bindings / auto / lua_cocos2dx_ui_auto . hpp <nl> ppp b / cocos / scripting / lua - bindings / auto / lua_cocos2dx_ui_auto . hpp <nl> int register_all_cocos2dx_ui ( lua_State * tolua_S ) ; <nl> <nl> <nl> <nl> + <nl> + <nl> + <nl> <nl> <nl> <nl>
|
[ AUTO ] : updating luabinding automatically
|
cocos2d/cocos2d-x
|
e2857097d610f22f122f331926da92e1d7411bd4
|
2014-08-14T02:30:08Z
|
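The generated `setPlaceHolderColor` binding in the record above dispatches between its `Color4B` and `Color3B` overloads by trial parsing: each `do { ... } while (0)` block attempts one argument form, and `ok` is reset before the next candidate. A minimal self-contained C++ sketch of that try-each-form dispatch idiom (the `dispatch` function and `Parsed` enum are hypothetical names, and `stoi`/`stod` stand in for the `luaval_to_*` converters):

```cpp
#include <cassert>
#include <string>

enum class Parsed { AsInt, AsDouble, None };

// Overload dispatch by trial parsing, as in the generated
// setPlaceHolderColor binding: try each accepted argument form in
// order, resetting state between attempts, and stop at the first
// candidate that consumes the whole argument.
Parsed dispatch(const std::string& arg) {
    try {
        size_t pos = 0;
        (void)std::stoi(arg, &pos);
        if (pos == arg.size()) return Parsed::AsInt;    // whole string parsed as int
    } catch (...) {}                                    // reset: fall through to next form
    try {
        size_t pos = 0;
        (void)std::stod(arg, &pos);
        if (pos == arg.size()) return Parsed::AsDouble; // whole string parsed as double
    } catch (...) {}
    return Parsed::None;                                // no overload matched
}
```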
mmm a / xbmc / cores / VideoPlayer / DVDInputStreams / DVDInputStreamBluray . cpp <nl> ppp b / xbmc / cores / VideoPlayer / DVDInputStreams / DVDInputStreamBluray . cpp <nl> void CDVDInputStreamBluray : : SetupPlayerSettings ( ) <nl> } <nl> m_dll - > bd_set_player_setting ( m_bd , BLURAY_PLAYER_SETTING_REGION_CODE , region ) ; <nl> m_dll - > bd_set_player_setting ( m_bd , BLURAY_PLAYER_SETTING_PARENTAL , 99 ) ; <nl> - m_dll - > bd_set_player_setting ( m_bd , BLURAY_PLAYER_SETTING_PLAYER_PROFILE , BLURAY_PLAYER_PROFILE_2_v2_0 ) ; <nl> + m_dll - > bd_set_player_setting ( m_bd , BLURAY_PLAYER_SETTING_PLAYER_PROFILE , BLURAY_PLAYER_PROFILE_5_v2_4 ) ; <nl> + m_dll - > bd_set_player_setting ( m_bd , BLURAY_PLAYER_SETTING_3D_CAP , 0xffffffff ) ; <nl> <nl> std : : string langCode ; <nl> g_LangCodeExpander . ConvertToISO6392T ( g_langInfo . GetDVDAudioLanguage ( ) , langCode ) ; <nl>
|
[ bluray ] Set player profile to 5 . 0 ( Blu - ray 3D ) and enable player 3D - cap .
|
xbmc/xbmc
|
51f61f4caf53db1cd247c290aa6624c228371b80
|
2017-07-07T07:02:53Z
|
mmm a / BUILD . gn <nl> ppp b / BUILD . gn <nl> source_set ( " v8_base " ) { <nl> " src / compiler / ia32 / instruction - codes - ia32 . h " , <nl> " src / compiler / ia32 / instruction - selector - ia32 . cc " , <nl> " src / compiler / ia32 / linkage - ia32 . cc " , <nl> + " src / ic / ia32 / access - compiler - ia32 . cc " , <nl> + " src / ic / ia32 / handler - compiler - ia32 . cc " , <nl> " src / ic / ia32 / ic - ia32 . cc " , <nl> " src / ic / ia32 / ic - compiler - ia32 . cc " , <nl> " src / ic / ia32 / stub - cache - ia32 . cc " , <nl>
|
gn : Add missing source files to x86 build
|
v8/v8
|
aee775a50935059c86ba36978e5a7af52a447c8d
|
2014-09-30T08:22:16Z
|
mmm a / hphp / runtime / ext / icu / ext_icu_collator . cpp <nl> ppp b / hphp / runtime / ext / icu / ext_icu_collator . cpp <nl> static bool HHVM_METHOD ( Collator , setAttribute , int64_t attr , int64_t val ) { <nl> return true ; <nl> } <nl> <nl> + static Variant HHVM_METHOD ( Collator , getSortKey , const String & val ) { <nl> + FETCH_COL ( data , this_ ) ; <nl> + UErrorCode error = U_ZERO_ERROR ; <nl> + icu : : UnicodeString strval ( u16 ( val , error ) ) ; <nl> + if ( U_FAILURE ( error ) ) { <nl> + return false ; <nl> + } <nl> + <nl> + int sortkey_len = ucol_getSortKey ( data - > collator ( ) , <nl> + strval . getBuffer ( ) , strval . length ( ) , <nl> + nullptr , <nl> + 0 ) ; <nl> + if ( sortkey_len < = 0 ) { <nl> + return false ; <nl> + } <nl> + <nl> + String ret ( sortkey_len + 1 , ReserveString ) ; <nl> + sortkey_len = ucol_getSortKey ( data - > collator ( ) , <nl> + strval . getBuffer ( ) , strval . length ( ) , <nl> + ( uint8_t * ) ret . get ( ) - > mutableData ( ) , <nl> + ret . get ( ) - > capacity ( ) ) ; <nl> + if ( sortkey_len < = 0 ) { <nl> + return false ; <nl> + } <nl> + <nl> + ret . setSize ( sortkey_len ) ; <nl> + return ret ; <nl> + } <nl> + <nl> static bool HHVM_METHOD ( Collator , setStrength , int64_t strength ) { <nl> FETCH_COL ( data , this_ ) ; <nl> ucol_setStrength ( data - > collator ( ) , ( UCollationStrength ) strength ) ; <nl> void IntlExtension : : initCollator ( ) { <nl> HHVM_ME ( Collator , getErrorCode ) ; <nl> HHVM_ME ( Collator , getErrorMessage ) ; <nl> HHVM_ME ( Collator , getLocale ) ; <nl> + HHVM_ME ( Collator , getSortKey ) ; <nl> HHVM_ME ( Collator , getStrength ) ; <nl> HHVM_ME ( Collator , setAttribute ) ; <nl> HHVM_ME ( Collator , setStrength ) ; <nl> mmm a / hphp / runtime / ext / icu / ext_icu_collator . php <nl> ppp b / hphp / runtime / ext / icu / ext_icu_collator . php <nl> public function getLocale ( int $ type ) : string ; <nl> * keys can be compared directly instead of strings . 
<nl> * / <nl> < < __Native > > <nl> - public function getSortKey ( string $ str ) : string ; <nl> + public function getSortKey ( string $ str ) : mixed ; <nl> <nl> / * * <nl> * Get current collation strength <nl> mmm a / hphp / test / frameworks / results / mediawiki . expect <nl> ppp b / hphp / test / frameworks / results / mediawiki . expect <nl> S <nl> CdbTest : : testMediaWikiTestCaseParentSetupCalled <nl> S <nl> CleanUpTest : : testAllBytes <nl> - . <nl> + F <nl> CleanUpTest : : testAscii <nl> - . <nl> + F <nl> CleanUpTest : : testBomRegression <nl> - . <nl> + F <nl> CleanUpTest : : testChunkRegression <nl> - . <nl> + F <nl> CleanUpTest : : testDoubleBytes <nl> - . <nl> + F <nl> CleanUpTest : : testForbiddenRegression <nl> - . <nl> + F <nl> CleanUpTest : : testHangulRegression <nl> - . <nl> + F <nl> CleanUpTest : : testInterposeRegression <nl> - . <nl> + F <nl> CleanUpTest : : testLatin <nl> - . <nl> + F <nl> CleanUpTest : : testLatinNormal <nl> - . <nl> + F <nl> CleanUpTest : : testMediaWikiTestCaseParentSetupCalled <nl> . <nl> CleanUpTest : : testNull <nl> - . <nl> + F <nl> CleanUpTest : : testOverlongRegression <nl> - . <nl> + F <nl> CleanUpTest : : testSurrogateRegression <nl> - . <nl> + F <nl> CleanUpTest : : testTripleBytes <nl> - . <nl> + F <nl> CollationTest : : testGetFirstLetter with data set # 0 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 1 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 10 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 11 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 12 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 13 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 14 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 15 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 16 <nl> - S <nl> + . 
<nl> CollationTest : : testGetFirstLetter with data set # 2 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 3 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 4 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 5 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 6 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 7 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 8 <nl> - S <nl> + . <nl> CollationTest : : testGetFirstLetter with data set # 9 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 0 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 1 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 2 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 3 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 4 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 5 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 6 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 7 <nl> - S <nl> + . <nl> CollationTest : : testIsPrefix with data set # 8 <nl> - S <nl> + . <nl> CollationTest : : testMediaWikiTestCaseParentSetupCalled <nl> - S <nl> + . <nl> CollationTest : : testNotIsPrefix with data set # 0 <nl> - S <nl> + . <nl> CollationTest : : testNotIsPrefix with data set # 1 <nl> - S <nl> + . <nl> CollationTest : : testNotIsPrefix with data set # 2 <nl> - S <nl> + . <nl> CollationTest : : testNotIsPrefix with data set # 3 <nl> - S <nl> + . <nl> DatabaseMysqlBaseTest : : testAddIdentifierQuotes with data set # 0 <nl> . <nl> DatabaseMysqlBaseTest : : testAddIdentifierQuotes with data set # 1 <nl> ORMTableTest : : testMediaWikiTestCaseParentSetupCalled <nl> ORMTableTest : : testSingleton <nl> . <nl> OutputPageTest : : testHandheld <nl> - . 
<nl> + F <nl> OutputPageTest : : testMediaWikiTestCaseParentSetupCalled <nl> . <nl> OutputPageTest : : testPrintRequests <nl> - . <nl> + F <nl> OutputPageTest : : testScreenRequests <nl> . <nl> PNGHandlerTest : : testGetImageArea with data set # 0 <nl> new file mode 100644 <nl> index 00000000000 . . 3d7449ae8e6 <nl> mmm / dev / null <nl> ppp b / hphp / test / slow / ext_collator / getSortKey . php <nl> <nl> + < ? hh <nl> + <nl> + $ inputs = array ( <nl> + array ( ' 1 ' , ' 2 ' , ' 10 ' ) , <nl> + array ( ' y ' , ' k ' , ' i ' ) , <nl> + ) ; <nl> + <nl> + $ locales = array ( <nl> + ' en_US ' , <nl> + ' lt_LT ' , <nl> + ) ; <nl> + <nl> + function sort_key_cmp ( Collator $ c , string $ a , string $ b ) { <nl> + $ ka = $ c - > getSortKey ( $ a ) ; <nl> + $ kb = $ c - > getSortKey ( $ b ) ; <nl> + if ( $ ka < $ kb ) { <nl> + return - 1 ; <nl> + } else if ( $ ka = = = $ kb ) { <nl> + return 0 ; <nl> + } else { <nl> + return 1 ; <nl> + } <nl> + } <nl> + <nl> + foreach ( $ inputs as $ input ) { <nl> + foreach ( $ locales as $ locale ) { <nl> + $ c = new Collator ( $ locale ) ; <nl> + usort ( $ input , function ( $ a , $ b ) use ( $ c ) { <nl> + return sort_key_cmp ( $ c , $ a , $ b ) ; <nl> + } ) ; <nl> + var_dump ( array ( $ locale = > $ input ) ) ; <nl> + $ c - > setAttribute ( Collator : : NUMERIC_COLLATION , Collator : : ON ) ; <nl> + usort ( $ input , function ( $ a , $ b ) use ( $ c ) { <nl> + return sort_key_cmp ( $ c , $ a , $ b ) ; <nl> + } ) ; <nl> + var_dump ( array ( $ locale . ' numeric ' = > $ input ) ) ; <nl> + } <nl> + } <nl> new file mode 100644 <nl> index 00000000000 . . 9915d6a466b <nl> mmm / dev / null <nl> ppp b / hphp / test / slow / ext_collator / getSortKey . php . 
expect <nl> <nl> + array ( 1 ) { <nl> + [ " en_US " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " 1 " <nl> + [ 1 ] = > <nl> + string ( 2 ) " 10 " <nl> + [ 2 ] = > <nl> + string ( 1 ) " 2 " <nl> + } <nl> + } <nl> + array ( 1 ) { <nl> + [ " en_US numeric " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " 1 " <nl> + [ 1 ] = > <nl> + string ( 1 ) " 2 " <nl> + [ 2 ] = > <nl> + string ( 2 ) " 10 " <nl> + } <nl> + } <nl> + array ( 1 ) { <nl> + [ " lt_LT " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " 1 " <nl> + [ 1 ] = > <nl> + string ( 2 ) " 10 " <nl> + [ 2 ] = > <nl> + string ( 1 ) " 2 " <nl> + } <nl> + } <nl> + array ( 1 ) { <nl> + [ " lt_LT numeric " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " 1 " <nl> + [ 1 ] = > <nl> + string ( 1 ) " 2 " <nl> + [ 2 ] = > <nl> + string ( 2 ) " 10 " <nl> + } <nl> + } <nl> + array ( 1 ) { <nl> + [ " en_US " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " i " <nl> + [ 1 ] = > <nl> + string ( 1 ) " k " <nl> + [ 2 ] = > <nl> + string ( 1 ) " y " <nl> + } <nl> + } <nl> + array ( 1 ) { <nl> + [ " en_US numeric " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " i " <nl> + [ 1 ] = > <nl> + string ( 1 ) " k " <nl> + [ 2 ] = > <nl> + string ( 1 ) " y " <nl> + } <nl> + } <nl> + array ( 1 ) { <nl> + [ " lt_LT " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " i " <nl> + [ 1 ] = > <nl> + string ( 1 ) " y " <nl> + [ 2 ] = > <nl> + string ( 1 ) " k " <nl> + } <nl> + } <nl> + array ( 1 ) { <nl> + [ " lt_LT numeric " ] = > <nl> + array ( 3 ) { <nl> + [ 0 ] = > <nl> + string ( 1 ) " i " <nl> + [ 1 ] = > <nl> + string ( 1 ) " y " <nl> + [ 2 ] = > <nl> + string ( 1 ) " k " <nl> + } <nl> + } <nl>
|
Implement Collator : : getSortKey
|
facebook/hhvm
|
2f71bf1611fff30aa21f6656552ad0adcff27e9c
|
2014-03-18T21:00:06Z
|
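The `Collator::getSortKey` implementation in the record above uses the standard size-then-fill idiom: `ucol_getSortKey` is called once with a null buffer to learn the required length, a buffer of that size is reserved, and the call is repeated to fill it. A self-contained C++ sketch of the same idiom, with `std::snprintf` standing in for the ICU call (the `formatted` function and format string are illustrative assumptions):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Size-then-fill: first call reports the needed length, second call
// writes into a buffer of exactly that capacity, mirroring the two
// ucol_getSortKey() calls in the diff above.
std::string formatted(int value) {
    int len = std::snprintf(nullptr, 0, "key_%d", value);   // first call: length only
    if (len <= 0) {
        return std::string();                               // formatting failed
    }
    std::string out(static_cast<size_t>(len) + 1, '\0');    // reserve len + 1, as the binding does
    std::snprintf(&out[0], out.size(), "key_%d", value);    // second call: fill the buffer
    out.resize(static_cast<size_t>(len));                   // trim the terminator (cf. setSize)
    return out;
}
```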
mmm a / BUILD <nl> ppp b / BUILD <nl> py_library ( <nl> ) <nl> <nl> cc_binary ( <nl> - name = " internal / _api_implementation . so " , <nl> + name = " python / google / protobuf / internal / _api_implementation . so " , <nl> srcs = [ " python / google / protobuf / internal / api_implementation . cc " ] , <nl> copts = COPTS + [ <nl> " - DPYTHON_PROTO2_CPP_IMPL_V2 " , <nl> cc_binary ( <nl> ) <nl> <nl> cc_binary ( <nl> - name = " pyext / _message . so " , <nl> + name = " python / google / protobuf / pyext / _message . so " , <nl> srcs = glob ( [ <nl> " python / google / protobuf / pyext / * . cc " , <nl> " python / google / protobuf / pyext / * . h " , <nl> py_proto_library ( <nl> data = select ( { <nl> " / / conditions : default " : [ ] , <nl> " : use_fast_cpp_protos " : [ <nl> - " : internal / _api_implementation . so " , <nl> - " : pyext / _message . so " , <nl> + " : python / google / protobuf / internal / _api_implementation . so " , <nl> + " : python / google / protobuf / pyext / _message . so " , <nl> ] , <nl> } ) , <nl> default_runtime = " " , <nl>
|
Place Python extensions correctly in Bazel build .
|
protocolbuffers/protobuf
|
df5841f0b2a523abeb2fc304e024cd623f13d2f1
|
2016-10-18T20:17:27Z
|
mmm a / CONTRIBUTING . md <nl> ppp b / CONTRIBUTING . md <nl> maximize the chances of your PR being merged . <nl> PR . This person does not necessarily need to have commit access . <nl> * The previous two points generally mean that every PR should have two approvals . ( Exceptions can <nl> be made by the senior committers ) . <nl> + * The above rules may be waived for PRs which only update docs or comments , or trivial changes to <nl> + tests and tools ( where trivial is decided by the maintainer in question ) . <nl> * In general , we should also attempt to make sure that at least one of the approvals is * from an <nl> organization different from the PR author . * E . g . , if Lyft authors a PR , at least one approver <nl> should be from an organization other than Lyft . This helps us make sure that we aren ' t putting <nl>
|
updating reviewer guidelines ( )
|
envoyproxy/envoy
|
dde9b2061000b341c4e402df8d32c5b333a144d2
|
2017-12-13T22:49:30Z
|
mmm a / src / leveldb / . gitignore <nl> ppp b / src / leveldb / . gitignore <nl> build_config . mk <nl> * . so . * <nl> * _test <nl> db_bench <nl> + leveldbutil <nl> Release <nl> Debug <nl> Benchmark <nl> mmm a / src / leveldb / Makefile <nl> ppp b / src / leveldb / Makefile <nl> OPT ? = - O2 - DNDEBUG # ( A ) Production use ( optimized mode ) <nl> # mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> <nl> # detect what platform we ' re building on <nl> - $ ( shell CC = $ ( CC ) CXX = $ ( CXX ) TARGET_OS = $ ( TARGET_OS ) \ <nl> + $ ( shell CC = " $ ( CC ) " CXX = " $ ( CXX ) " TARGET_OS = " $ ( TARGET_OS ) " \ <nl> . / build_detect_platform build_config . mk . / ) <nl> # this file is generated by the previous line to set build flags and sources <nl> include build_config . mk <nl> mmm a / src / leveldb / build_detect_platform <nl> ppp b / src / leveldb / build_detect_platform <nl> if test - z " $ CXX " ; then <nl> CXX = g + + <nl> fi <nl> <nl> + if test - z " $ TMPDIR " ; then <nl> + TMPDIR = / tmp <nl> + fi <nl> + <nl> # Detect OS <nl> if test - z " $ TARGET_OS " ; then <nl> TARGET_OS = ` uname - s ` <nl> if [ " $ CROSS_COMPILE " = " true " ] ; then <nl> # Cross - compiling ; do not try any compilation tests . <nl> true <nl> else <nl> + CXXOUTPUT = " $ { TMPDIR } / leveldb_build_detect_platform - cxx . $ $ " <nl> + <nl> # If - std = c + + 0x works , use < cstdatomic > . Otherwise use port_posix . h . <nl> - $ CXX $ CXXFLAGS - std = c + + 0x - x c + + - - o / dev / null 2 > / dev / null < < EOF <nl> + $ CXX $ CXXFLAGS - std = c + + 0x - x c + + - - o $ CXXOUTPUT 2 > / dev / null < < EOF <nl> # include < cstdatomic > <nl> int main ( ) { } <nl> EOF <nl> EOF <nl> fi <nl> <nl> # Test whether tcmalloc is available <nl> - $ CXX $ CXXFLAGS - x c + + - - o / dev / null - ltcmalloc 2 > / dev / null < < EOF <nl> + $ CXX $ CXXFLAGS - x c + + - - o $ CXXOUTPUT - ltcmalloc 2 > / dev / null < < EOF <nl> int main ( ) { } <nl> EOF <nl> if [ " $ ? 
" = 0 ] ; then <nl> PLATFORM_LIBS = " $ PLATFORM_LIBS - ltcmalloc " <nl> fi <nl> + <nl> + rm - f $ CXXOUTPUT 2 > / dev / null <nl> fi <nl> <nl> PLATFORM_CCFLAGS = " $ PLATFORM_CCFLAGS $ COMMON_FLAGS " <nl> mmm a / src / leveldb / db / db_impl . cc <nl> ppp b / src / leveldb / db / db_impl . cc <nl> Status DBImpl : : Recover ( VersionEdit * edit ) { <nl> if ( ParseFileName ( filenames [ i ] , & number , & type ) ) { <nl> expected . erase ( number ) ; <nl> if ( type = = kLogFile & & ( ( number > = min_log ) | | ( number = = prev_log ) ) ) <nl> - logs . push_back ( number ) ; <nl> + logs . push_back ( number ) ; <nl> } <nl> } <nl> if ( ! expected . empty ( ) ) { <nl> Status DBImpl : : FinishCompactionOutputFile ( CompactionState * compact , <nl> ( unsigned long long ) output_number , <nl> ( unsigned long long ) current_entries , <nl> ( unsigned long long ) current_bytes ) ; <nl> - <nl> - / / rate - limit compaction file creation with a 100ms pause <nl> - env_ - > SleepForMicroseconds ( 100000 ) ; <nl> } <nl> } <nl> return s ; <nl> new file mode 100644 <nl> index 000000000000 . . 1b1cf8bb28da <nl> mmm / dev / null <nl> ppp b / src / leveldb / issues / issue178_test . cc <nl> <nl> + / / Copyright ( c ) 2013 The LevelDB Authors . All rights reserved . <nl> + / / Use of this source code is governed by a BSD - style license that can be <nl> + / / found in the LICENSE file . See the AUTHORS file for names of contributors . <nl> + <nl> + / / Test for issue 178 : a manual compaction causes deleted data to reappear . <nl> + # include < iostream > <nl> + # include < sstream > <nl> + # include < cstdlib > <nl> + <nl> + # include " leveldb / db . h " <nl> + # include " leveldb / write_batch . h " <nl> + # include " util / testharness . 
h " <nl> + <nl> + namespace { <nl> + <nl> + const int kNumKeys = 1100000 ; <nl> + <nl> + std : : string Key1 ( int i ) { <nl> + char buf [ 100 ] ; <nl> + snprintf ( buf , sizeof ( buf ) , " my_key_ % d " , i ) ; <nl> + return buf ; <nl> + } <nl> + <nl> + std : : string Key2 ( int i ) { <nl> + return Key1 ( i ) + " _xxx " ; <nl> + } <nl> + <nl> + class Issue178 { } ; <nl> + <nl> + TEST ( Issue178 , Test ) { <nl> + / / Get rid of any state from an old run . <nl> + std : : string dbpath = leveldb : : test : : TmpDir ( ) + " / leveldb_cbug_test " ; <nl> + DestroyDB ( dbpath , leveldb : : Options ( ) ) ; <nl> + <nl> + / / Open database . Disable compression since it affects the creation <nl> + / / of layers and the code below is trying to test against a very <nl> + / / specific scenario . <nl> + leveldb : : DB * db ; <nl> + leveldb : : Options db_options ; <nl> + db_options . create_if_missing = true ; <nl> + db_options . compression = leveldb : : kNoCompression ; <nl> + ASSERT_OK ( leveldb : : DB : : Open ( db_options , dbpath , & db ) ) ; <nl> + <nl> + / / create first key range <nl> + leveldb : : WriteBatch batch ; <nl> + for ( size_t i = 0 ; i < kNumKeys ; i + + ) { <nl> + batch . Put ( Key1 ( i ) , " value for range 1 key " ) ; <nl> + } <nl> + ASSERT_OK ( db - > Write ( leveldb : : WriteOptions ( ) , & batch ) ) ; <nl> + <nl> + / / create second key range <nl> + batch . Clear ( ) ; <nl> + for ( size_t i = 0 ; i < kNumKeys ; i + + ) { <nl> + batch . Put ( Key2 ( i ) , " value for range 2 key " ) ; <nl> + } <nl> + ASSERT_OK ( db - > Write ( leveldb : : WriteOptions ( ) , & batch ) ) ; <nl> + <nl> + / / delete second key range <nl> + batch . Clear ( ) ; <nl> + for ( size_t i = 0 ; i < kNumKeys ; i + + ) { <nl> + batch . 
Delete ( Key2 ( i ) ) ; <nl> + } <nl> + ASSERT_OK ( db - > Write ( leveldb : : WriteOptions ( ) , & batch ) ) ; <nl> + <nl> + / / compact database <nl> + std : : string start_key = Key1 ( 0 ) ; <nl> + std : : string end_key = Key1 ( kNumKeys - 1 ) ; <nl> + leveldb : : Slice least ( start_key . data ( ) , start_key . size ( ) ) ; <nl> + leveldb : : Slice greatest ( end_key . data ( ) , end_key . size ( ) ) ; <nl> + <nl> + / / commenting out the line below causes the example to work correctly <nl> + db - > CompactRange ( & least , & greatest ) ; <nl> + <nl> + / / count the keys <nl> + leveldb : : Iterator * iter = db - > NewIterator ( leveldb : : ReadOptions ( ) ) ; <nl> + size_t num_keys = 0 ; <nl> + for ( iter - > SeekToFirst ( ) ; iter - > Valid ( ) ; iter - > Next ( ) ) { <nl> + num_keys + + ; <nl> + } <nl> + delete iter ; <nl> + ASSERT_EQ ( kNumKeys , num_keys ) < < " Bad number of keys " ; <nl> + <nl> + / / close database <nl> + delete db ; <nl> + DestroyDB ( dbpath , leveldb : : Options ( ) ) ; <nl> + } <nl> + <nl> + } / / anonymous namespace <nl> + <nl> + int main ( int argc , char * * argv ) { <nl> + return leveldb : : test : : RunAllTests ( ) ; <nl> + } <nl> mmm a / src / leveldb / util / comparator . cc <nl> ppp b / src / leveldb / util / comparator . cc <nl> class BytewiseComparatorImpl : public Comparator { <nl> } ; <nl> } / / namespace <nl> <nl> - static port : : OnceType once_comparator = LEVELDB_ONCE_INIT ; <nl> + static port : : OnceType once = LEVELDB_ONCE_INIT ; <nl> static const Comparator * bytewise ; <nl> <nl> static void InitModule ( ) { <nl> static void InitModule ( ) { <nl> } <nl> <nl> const Comparator * BytewiseComparator ( ) { <nl> - port : : InitOnce ( & once_comparator , InitModule ) ; <nl> + port : : InitOnce ( & once , InitModule ) ; <nl> return bytewise ; <nl> } <nl> <nl>
|
Merge commit ' 84d6d69fc69662b2709fffbeaf3c3b4f53c535b1 '
|
bitcoin/bitcoin
|
4a9a8f3f48f7820cff2964489f09a34160b1671b
|
2013-08-17T22:58:04Z
|
mmm a / filament / src / driver / opengl / GLUtils . h <nl> ppp b / filament / src / driver / opengl / GLUtils . h <nl> constexpr / * inline * / GLenum getInternalFormat ( filament : : driver : : TextureFormat <nl> / / this should not happen <nl> return 0 ; <nl> # endif <nl> + <nl> + # if defined ( GL_KHR_texture_compression_astc_hdr ) <nl> + case TextureFormat : : RGBA_ASTC_4x4 : return GL_COMPRESSED_RGBA_ASTC_4x4_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_5x4 : return GL_COMPRESSED_RGBA_ASTC_5x4_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_5x5 : return GL_COMPRESSED_RGBA_ASTC_5x5_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_6x5 : return GL_COMPRESSED_RGBA_ASTC_6x5_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_6x6 : return GL_COMPRESSED_RGBA_ASTC_6x6_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_8x5 : return GL_COMPRESSED_RGBA_ASTC_8x5_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_8x6 : return GL_COMPRESSED_RGBA_ASTC_8x6_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_8x8 : return GL_COMPRESSED_RGBA_ASTC_8x8_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_10x5 : return GL_COMPRESSED_RGBA_ASTC_10x5_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_10x6 : return GL_COMPRESSED_RGBA_ASTC_10x6_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_10x8 : return GL_COMPRESSED_RGBA_ASTC_10x8_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_10x10 : return GL_COMPRESSED_RGBA_ASTC_10x10_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_12x10 : return GL_COMPRESSED_RGBA_ASTC_12x10_KHR ; <nl> + case TextureFormat : : RGBA_ASTC_12x12 : return GL_COMPRESSED_RGBA_ASTC_12x12_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_4x4 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_5x4 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_5x5 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_6x5 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR ; <nl> + case 
TextureFormat : : SRGB8_ALPHA8_ASTC_6x6 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_8x5 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_8x6 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_8x8 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x5 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x6 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x8 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x10 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_12x10 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR ; <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_12x12 : return GL_COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR ; <nl> + # else <nl> + case TextureFormat : : RGBA_ASTC_4x4 : <nl> + case TextureFormat : : RGBA_ASTC_5x4 : <nl> + case TextureFormat : : RGBA_ASTC_5x5 : <nl> + case TextureFormat : : RGBA_ASTC_6x5 : <nl> + case TextureFormat : : RGBA_ASTC_6x6 : <nl> + case TextureFormat : : RGBA_ASTC_8x5 : <nl> + case TextureFormat : : RGBA_ASTC_8x6 : <nl> + case TextureFormat : : RGBA_ASTC_8x8 : <nl> + case TextureFormat : : RGBA_ASTC_10x5 : <nl> + case TextureFormat : : RGBA_ASTC_10x6 : <nl> + case TextureFormat : : RGBA_ASTC_10x8 : <nl> + case TextureFormat : : RGBA_ASTC_10x10 : <nl> + case TextureFormat : : RGBA_ASTC_12x10 : <nl> + case TextureFormat : : RGBA_ASTC_12x12 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_4x4 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_5x4 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_5x5 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_6x5 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_6x6 : <nl> + case TextureFormat 
: : SRGB8_ALPHA8_ASTC_8x5 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_8x6 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_8x8 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x5 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x6 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x8 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_10x10 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_12x10 : <nl> + case TextureFormat : : SRGB8_ALPHA8_ASTC_12x12 : <nl> + / / this should not happen <nl> + return 0 ; <nl> + # endif <nl> } <nl> } <nl> <nl> mmm a / libs / filabridge / include / filament / driver / DriverEnums . h <nl> ppp b / libs / filabridge / include / filament / driver / DriverEnums . h <nl> enum class CompressedPixelDataType : uint16_t { <nl> ETC2_EAC_RGBA8 , ETC2_EAC_SRGBA8 , <nl> <nl> / / Available everywhere except Android / iOS <nl> - DXT1_RGB , DXT1_RGBA , DXT3_RGBA , DXT5_RGBA <nl> + DXT1_RGB , DXT1_RGBA , DXT3_RGBA , DXT5_RGBA , <nl> + <nl> + / / ASTC formats are available with a GLES extension <nl> + RGBA_ASTC_4x4 , <nl> + RGBA_ASTC_5x4 , <nl> + RGBA_ASTC_5x5 , <nl> + RGBA_ASTC_6x5 , <nl> + RGBA_ASTC_6x6 , <nl> + RGBA_ASTC_8x5 , <nl> + RGBA_ASTC_8x6 , <nl> + RGBA_ASTC_8x8 , <nl> + RGBA_ASTC_10x5 , <nl> + RGBA_ASTC_10x6 , <nl> + RGBA_ASTC_10x8 , <nl> + RGBA_ASTC_10x10 , <nl> + RGBA_ASTC_12x10 , <nl> + RGBA_ASTC_12x12 , <nl> + SRGB8_ALPHA8_ASTC_4x4 , <nl> + SRGB8_ALPHA8_ASTC_5x4 , <nl> + SRGB8_ALPHA8_ASTC_5x5 , <nl> + SRGB8_ALPHA8_ASTC_6x5 , <nl> + SRGB8_ALPHA8_ASTC_6x6 , <nl> + SRGB8_ALPHA8_ASTC_8x5 , <nl> + SRGB8_ALPHA8_ASTC_8x6 , <nl> + SRGB8_ALPHA8_ASTC_8x8 , <nl> + SRGB8_ALPHA8_ASTC_10x5 , <nl> + SRGB8_ALPHA8_ASTC_10x6 , <nl> + SRGB8_ALPHA8_ASTC_10x8 , <nl> + SRGB8_ALPHA8_ASTC_10x10 , <nl> + SRGB8_ALPHA8_ASTC_12x10 , <nl> + SRGB8_ALPHA8_ASTC_12x12 , <nl> } ; <nl> <nl> / * * Supported texel formats <nl> enum class CompressedPixelDataType : uint16_t { <nl> * Compressed texture formats <nl> * mmmmmmmmmmmmmmmmmmmmmmmm - - <nl> * <nl> - * A 
few compressed texture formats are supported as well : <nl> + * Many compressed texture formats are supported as well , which include ( but are not limited to ) <nl> + * the following list : <nl> * <nl> * Name | Format <nl> * : mmmmmmmmmmmmmmm - | : mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> enum class TextureFormat : uint16_t { <nl> ETC2_EAC_RGBA8 , ETC2_EAC_SRGBA8 , <nl> <nl> / / Available everywhere except Android / iOS <nl> - DXT1_RGB , DXT1_RGBA , DXT3_RGBA , DXT5_RGBA <nl> + DXT1_RGB , DXT1_RGBA , DXT3_RGBA , DXT5_RGBA , <nl> + <nl> + / / ASTC formats are available with a GLES extension <nl> + RGBA_ASTC_4x4 , <nl> + RGBA_ASTC_5x4 , <nl> + RGBA_ASTC_5x5 , <nl> + RGBA_ASTC_6x5 , <nl> + RGBA_ASTC_6x6 , <nl> + RGBA_ASTC_8x5 , <nl> + RGBA_ASTC_8x6 , <nl> + RGBA_ASTC_8x8 , <nl> + RGBA_ASTC_10x5 , <nl> + RGBA_ASTC_10x6 , <nl> + RGBA_ASTC_10x8 , <nl> + RGBA_ASTC_10x10 , <nl> + RGBA_ASTC_12x10 , <nl> + RGBA_ASTC_12x12 , <nl> + SRGB8_ALPHA8_ASTC_4x4 , <nl> + SRGB8_ALPHA8_ASTC_5x4 , <nl> + SRGB8_ALPHA8_ASTC_5x5 , <nl> + SRGB8_ALPHA8_ASTC_6x5 , <nl> + SRGB8_ALPHA8_ASTC_6x6 , <nl> + SRGB8_ALPHA8_ASTC_8x5 , <nl> + SRGB8_ALPHA8_ASTC_8x6 , <nl> + SRGB8_ALPHA8_ASTC_8x8 , <nl> + SRGB8_ALPHA8_ASTC_10x5 , <nl> + SRGB8_ALPHA8_ASTC_10x6 , <nl> + SRGB8_ALPHA8_ASTC_10x8 , <nl> + SRGB8_ALPHA8_ASTC_10x10 , <nl> + SRGB8_ALPHA8_ASTC_12x10 , <nl> + SRGB8_ALPHA8_ASTC_12x12 , <nl> } ; <nl> <nl> enum class TextureUsage : uint8_t { <nl> mmm a / libs / image / include / image / KtxBundle . h <nl> ppp b / libs / image / include / image / KtxBundle . 
h <nl> class KtxBundle { <nl> static constexpr uint32_t RGBA_S3TC_DXT3 = 0x83F2 ; <nl> static constexpr uint32_t RGBA_S3TC_DXT5 = 0x83F3 ; <nl> <nl> + static constexpr uint32_t RGBA_ASTC_4x4 = 0x93B0 ; <nl> + static constexpr uint32_t RGBA_ASTC_5x4 = 0x93B1 ; <nl> + static constexpr uint32_t RGBA_ASTC_5x5 = 0x93B2 ; <nl> + static constexpr uint32_t RGBA_ASTC_6x5 = 0x93B3 ; <nl> + static constexpr uint32_t RGBA_ASTC_6x6 = 0x93B4 ; <nl> + static constexpr uint32_t RGBA_ASTC_8x5 = 0x93B5 ; <nl> + static constexpr uint32_t RGBA_ASTC_8x6 = 0x93B6 ; <nl> + static constexpr uint32_t RGBA_ASTC_8x8 = 0x93B7 ; <nl> + static constexpr uint32_t RGBA_ASTC_10x5 = 0x93B8 ; <nl> + static constexpr uint32_t RGBA_ASTC_10x6 = 0x93B9 ; <nl> + static constexpr uint32_t RGBA_ASTC_10x8 = 0x93BA ; <nl> + static constexpr uint32_t RGBA_ASTC_10x10 = 0x93BB ; <nl> + static constexpr uint32_t RGBA_ASTC_12x10 = 0x93BC ; <nl> + static constexpr uint32_t RGBA_ASTC_12x12 = 0x93BD ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_4x4 = 0x93D0 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_5x4 = 0x93D1 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_5x5 = 0x93D2 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_6x5 = 0x93D3 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_6x6 = 0x93D4 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_8x5 = 0x93D5 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_8x6 = 0x93D6 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_8x8 = 0x93D7 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_10x5 = 0x93D8 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_10x6 = 0x93D9 ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_10x8 = 0x93DA ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_10x10 = 0x93DB ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_12x10 = 0x93DC ; <nl> + static constexpr uint32_t SRGB8_ALPHA8_ASTC_12x12 = 0x93DD ; <nl> + <nl> private : <nl> image : : KtxInfo mInfo = { } ; <nl> uint32_t mNumMipLevels ; <nl> mmm a / 
samples / web / filaweb . cpp <nl> ppp b / samples / web / filaweb . cpp <nl> SkyLight getSkyLight ( Engine & engine , const char * name ) { <nl> return result ; <nl> } <nl> <nl> - filament : : driver : : CompressedPixelDataType toPixelDataType ( uint32_t format ) { <nl> - using DstFormat = filament : : driver : : CompressedPixelDataType ; <nl> + template < typename T > <nl> + T toFilamentEnum ( uint32_t format ) { <nl> switch ( format ) { <nl> - case KtxBundle : : RGB_S3TC_DXT1 : return DstFormat : : DXT1_RGB ; <nl> - case KtxBundle : : RGBA_S3TC_DXT1 : return DstFormat : : DXT1_RGBA ; <nl> - case KtxBundle : : RGBA_S3TC_DXT3 : return DstFormat : : DXT3_RGBA ; <nl> - case KtxBundle : : RGBA_S3TC_DXT5 : return DstFormat : : DXT5_RGBA ; <nl> + case KtxBundle : : RGB_S3TC_DXT1 : return T : : DXT1_RGB ; <nl> + case KtxBundle : : RGBA_S3TC_DXT1 : return T : : DXT1_RGBA ; <nl> + case KtxBundle : : RGBA_S3TC_DXT3 : return T : : DXT3_RGBA ; <nl> + case KtxBundle : : RGBA_S3TC_DXT5 : return T : : DXT5_RGBA ; <nl> + case KtxBundle : : RGBA_ASTC_4x4 : return T : : RGBA_ASTC_4x4 ; <nl> + case KtxBundle : : RGBA_ASTC_5x4 : return T : : RGBA_ASTC_5x4 ; <nl> + case KtxBundle : : RGBA_ASTC_5x5 : return T : : RGBA_ASTC_5x5 ; <nl> + case KtxBundle : : RGBA_ASTC_6x5 : return T : : RGBA_ASTC_6x5 ; <nl> + case KtxBundle : : RGBA_ASTC_6x6 : return T : : RGBA_ASTC_6x6 ; <nl> + case KtxBundle : : RGBA_ASTC_8x5 : return T : : RGBA_ASTC_8x5 ; <nl> + case KtxBundle : : RGBA_ASTC_8x6 : return T : : RGBA_ASTC_8x6 ; <nl> + case KtxBundle : : RGBA_ASTC_8x8 : return T : : RGBA_ASTC_8x8 ; <nl> + case KtxBundle : : RGBA_ASTC_10x5 : return T : : RGBA_ASTC_10x5 ; <nl> + case KtxBundle : : RGBA_ASTC_10x6 : return T : : RGBA_ASTC_10x6 ; <nl> + case KtxBundle : : RGBA_ASTC_10x8 : return T : : RGBA_ASTC_10x8 ; <nl> + case KtxBundle : : RGBA_ASTC_10x10 : return T : : RGBA_ASTC_10x10 ; <nl> + case KtxBundle : : RGBA_ASTC_12x10 : return T : : RGBA_ASTC_12x10 ; <nl> + case KtxBundle : : RGBA_ASTC_12x12 : 
return T : : RGBA_ASTC_12x12 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_4x4 : return T : : SRGB8_ALPHA8_ASTC_4x4 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_5x4 : return T : : SRGB8_ALPHA8_ASTC_5x4 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_5x5 : return T : : SRGB8_ALPHA8_ASTC_5x5 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_6x5 : return T : : SRGB8_ALPHA8_ASTC_6x5 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_6x6 : return T : : SRGB8_ALPHA8_ASTC_6x6 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_8x5 : return T : : SRGB8_ALPHA8_ASTC_8x5 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_8x6 : return T : : SRGB8_ALPHA8_ASTC_8x6 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_8x8 : return T : : SRGB8_ALPHA8_ASTC_8x8 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_10x5 : return T : : SRGB8_ALPHA8_ASTC_10x5 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_10x6 : return T : : SRGB8_ALPHA8_ASTC_10x6 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_10x8 : return T : : SRGB8_ALPHA8_ASTC_10x8 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_10x10 : return T : : SRGB8_ALPHA8_ASTC_10x10 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_12x10 : return T : : SRGB8_ALPHA8_ASTC_12x10 ; <nl> + case KtxBundle : : SRGB8_ALPHA8_ASTC_12x12 : return T : : SRGB8_ALPHA8_ASTC_12x12 ; <nl> } <nl> - return ( filament : : driver : : CompressedPixelDataType ) 0xffff ; <nl> + return ( T ) 0xffff ; <nl> + } <nl> + <nl> + filament : : driver : : CompressedPixelDataType toPixelDataType ( uint32_t format ) { <nl> + return toFilamentEnum < filament : : driver : : CompressedPixelDataType > ( format ) ; <nl> } <nl> <nl> filament : : driver : : TextureFormat toTextureFormat ( uint32_t format ) { <nl> - using DstFormat = filament : : driver : : TextureFormat ; <nl> - switch ( format ) { <nl> - case KtxBundle : : RGB_S3TC_DXT1 : return DstFormat : : DXT1_RGB ; <nl> - case KtxBundle : : RGBA_S3TC_DXT1 : return DstFormat : : DXT1_RGBA ; <nl> - case KtxBundle : : RGBA_S3TC_DXT3 : return DstFormat : : DXT3_RGBA ; <nl> - 
case KtxBundle : : RGBA_S3TC_DXT5 : return DstFormat : : DXT5_RGBA ; <nl> - } <nl> - return ( filament : : driver : : TextureFormat ) 0xffff ; <nl> + return toFilamentEnum < filament : : driver : : TextureFormat > ( format ) ; <nl> } <nl> <nl> } / / namespace filaweb <nl> mmm a / samples / web / filaweb . js <nl> ppp b / samples / web / filaweb . js <nl> for ( let ext of Module . ctx . getSupportedExtensions ( ) ) { <nl> if ( ext = = " WEBGL_compressed_texture_s3tc " ) { <nl> use_s3tc = true ; <nl> } else if ( ext = = " WEBGL_compressed_texture_astc " ) { <nl> + Module . ctx . getExtension ( ' WEBGL_compressed_texture_astc ' ) ; <nl> use_astc = true ; <nl> } else if ( ext = = " WEBGL_compressed_texture_etc1 " ) { <nl> use_etc1 = true ; <nl> mmm a / samples / web / suzanne . html <nl> ppp b / samples / web / suzanne . html <nl> <nl> < script src = " filaweb . js " > < / script > <nl> < script > <nl> load ( { <nl> - ' albedo ' : use_s3tc ? load_rawfile ( ' monkey / albedo_s3tc . ktx ' ) : <nl> - load_rawfile ( ' monkey / albedo . ktx ' ) , <nl> + ' albedo ' : ( use_s3tc ? load_rawfile ( ' monkey / albedo_s3tc . ktx ' ) : <nl> + ( use_astc ? load_rawfile ( ' monkey / albedo_astc . ktx ' ) : <nl> + load_rawfile ( ' monkey / albedo . ktx ' ) ) ) , <nl> ' metallic ' : load_rawfile ( ' monkey / metallic . ktx ' ) , <nl> ' roughness ' : load_rawfile ( ' monkey / roughness . ktx ' ) , <nl> ' normal ' : load_rawfile ( ' monkey / normal . ktx ' ) , <nl>
|
Add ASTC support to Suzanne .
|
google/filament
|
3ad0574878c563d4d48f8242792ac7f5f2794ebe
|
2018-10-02T14:46:51Z
|
mmm a / lib / Parse / SyntaxParsingContext . cpp <nl> ppp b / lib / Parse / SyntaxParsingContext . cpp <nl> void finalizeSourceFile ( RootContextData & RootData , <nl> if ( SF . hasSyntaxRoot ( ) ) { <nl> auto SourceRaw = SF . getSyntaxRoot ( ) . getRaw ( ) ; <nl> auto Decls = <nl> - SourceRaw - > getChild ( SourceFileSyntax : : Cursor : : Items ) - > getLayout ( ) ; <nl> + SourceRaw - > getChild ( SourceFileSyntax : : Cursor : : Statements ) - > getLayout ( ) ; <nl> std : : copy ( Decls . begin ( ) , Decls . end ( ) , std : : back_inserter ( AllTopLevel ) ) ; <nl> EOFToken = SourceRaw - > getChild ( SourceFileSyntax : : Cursor : : EOFToken ) ; <nl> } <nl> mmm a / utils / gyb_syntax_support / CommonNodes . py <nl> ppp b / utils / gyb_syntax_support / CommonNodes . py <nl> <nl> <nl> # code - block - > ' { ' stmt - list ' } ' <nl> Node ( ' CodeBlock ' , kind = ' Syntax ' , <nl> - traits = [ ' Braced ' ] , <nl> + traits = [ ' Braced ' , ' WithStatements ' ] , <nl> children = [ <nl> Child ( ' LeftBrace ' , kind = ' LeftBraceToken ' ) , <nl> Child ( ' Statements ' , kind = ' CodeBlockItemList ' ) , <nl> mmm a / utils / gyb_syntax_support / DeclNodes . py <nl> ppp b / utils / gyb_syntax_support / DeclNodes . py <nl> <nl> <nl> # else - if - directive - clause - > ' # elseif ' expr stmt - list <nl> Node ( ' ElseifDirectiveClause ' , kind = ' Syntax ' , <nl> + traits = [ ' WithStatements ' ] , <nl> children = [ <nl> Child ( ' PoundElseif ' , kind = ' PoundElseifToken ' ) , <nl> Child ( ' Condition ' , kind = ' Expr ' ) , <nl> - Child ( ' Body ' , kind = ' CodeBlockItemList ' ) , <nl> + Child ( ' Statements ' , kind = ' CodeBlockItemList ' ) , <nl> ] ) , <nl> <nl> # if - config - decl - > ' # if ' expr stmt - list else - if - directive - clause - list <nl> # else - clause ? 
' # endif ' <nl> Node ( ' IfConfigDecl ' , kind = ' Decl ' , <nl> + traits = [ ' WithStatements ' ] , <nl> children = [ <nl> Child ( ' PoundIf ' , kind = ' PoundIfToken ' ) , <nl> Child ( ' Condition ' , kind = ' Expr ' ) , <nl> - Child ( ' Body ' , kind = ' CodeBlockItemList ' ) , <nl> + Child ( ' Statements ' , kind = ' CodeBlockItemList ' ) , <nl> Child ( ' ElseifDirectiveClauses ' , kind = ' ElseifDirectiveClauseList ' , <nl> is_optional = True ) , <nl> Child ( ' ElseClause ' , kind = ' ElseDirectiveClause ' , <nl> <nl> <nl> # source - file = code - block - item - list eof <nl> Node ( ' SourceFile ' , kind = ' Syntax ' , <nl> + traits = [ ' WithStatements ' ] , <nl> children = [ <nl> - Child ( ' Items ' , kind = ' CodeBlockItemList ' ) , <nl> + Child ( ' Statements ' , kind = ' CodeBlockItemList ' ) , <nl> Child ( ' EOFToken ' , kind = ' EOFToken ' ) <nl> ] ) , <nl> <nl> <nl> <nl> # else - directive - clause - > ' # else ' stmt - list <nl> Node ( ' ElseDirectiveClause ' , kind = ' Syntax ' , <nl> + traits = [ ' WithStatements ' ] , <nl> children = [ <nl> Child ( ' PoundElse ' , kind = ' PoundElseToken ' ) , <nl> - Child ( ' Body ' , kind = ' CodeBlockItemList ' ) , <nl> + Child ( ' Statements ' , kind = ' CodeBlockItemList ' ) , <nl> ] ) , <nl> <nl> # access - level - modifier - > ' private ' | ' private ' ' ( ' ' set ' ' ) ' <nl> mmm a / utils / gyb_syntax_support / ExprNodes . py <nl> ppp b / utils / gyb_syntax_support / ExprNodes . py <nl> <nl> ] ) , <nl> <nl> Node ( ' ClosureExpr ' , kind = ' Expr ' , <nl> - traits = [ ' Braced ' ] , <nl> + traits = [ ' Braced ' , ' WithStatements ' ] , <nl> children = [ <nl> Child ( ' LeftBrace ' , kind = ' LeftBraceToken ' ) , <nl> Child ( ' Signature ' , kind = ' ClosureSignature ' , is_optional = True ) , <nl> mmm a / utils / gyb_syntax_support / StmtNodes . py <nl> ppp b / utils / gyb_syntax_support / StmtNodes . 
py <nl> <nl> # switch - case - > switch - case - label stmt - list <nl> # | default - label stmt - list <nl> Node ( ' SwitchCase ' , kind = ' Syntax ' , <nl> + traits = [ ' WithStatements ' ] , <nl> children = [ <nl> Child ( ' Label ' , kind = ' Syntax ' ) , <nl> - Child ( ' Body ' , kind = ' CodeBlockItemList ' ) , <nl> + Child ( ' Statements ' , kind = ' CodeBlockItemList ' ) , <nl> ] ) , <nl> <nl> # switch - default - label - > ' default ' ' : ' <nl> mmm a / utils / gyb_syntax_support / Traits . py <nl> ppp b / utils / gyb_syntax_support / Traits . py <nl> def __init__ ( self , trait_name , children ) : <nl> Child ( ' LabelName ' , kind = ' IdentifierToken ' , is_optional = True ) , <nl> Child ( ' LabelColon ' , kind = ' ColonToken ' , is_optional = True ) , <nl> ] ) , <nl> + <nl> + Trait ( ' WithStatements ' , <nl> + children = [ <nl> + Child ( ' Statements ' , kind = ' CodeBlockItemList ' ) , <nl> + ] ) , <nl> ] <nl>
|
Merge remote - tracking branch ' origin / master ' into master - next
|
apple/swift
|
31af7587e97b02f1b64cabf25eb27e2554b4e53d
|
2018-02-22T22:51:22Z
|
mmm a / . gitignore <nl> ppp b / . gitignore <nl> config . log <nl> / addons / skin . pm3 - hd / media / Textures . xbt <nl> / addons / visualization . glspectrum / opengl_spectrum . vis <nl> / addons / visualization . projectm / projectM . vis <nl> - / addons / visualization . projectm / * . milk <nl> - / addons / visualization . projectm / * . tga <nl> - / addons / visualization . projectm / * . prjm <nl> / addons / visualization . waveform / Waveform . vis <nl> <nl> # / guilib / <nl>
|
Update . gitignore for last change .
|
xbmc/xbmc
|
dafdf8fe570671132dff78c8528d9b6271370611
|
2010-05-15T22:01:39Z
|
mmm a / . gitattributes <nl> ppp b / . gitattributes <nl> Examples / SequenceToSequence / CMUDict / Data / cmudict - 0 . 7b * text <nl> * . zip binary <nl> * . dnn binary <nl> Examples / Image / Detection / FastRCNN / fastRCNN / * / * . pyd binary <nl> + Examples / Image / Detection / FastRCNN / fastRCNN / * / * . so binary <nl> Tests / UnitTests / V2LibraryTests / data / * . bin binary <nl> Tests / UnitTests / ReaderTests / Data / CNTKBinaryReader / * . bin binary <nl> mmm a / Examples / Image / Detection / FastRCNN / PARAMETERS . py <nl> ppp b / Examples / Image / Detection / FastRCNN / PARAMETERS . py <nl> <nl> print ( datetime . datetime . fromtimestamp ( time . time ( ) ) . strftime ( ' % Y - % m - % d % H : % M : % S ' ) ) <nl> <nl> # dataset name <nl> - datasetName = " grocery " <nl> + datasetName = " Grocery " <nl> # datasetName = " pascalVoc " <nl> # datasetName = " pascalVoc_aeroplanesOnly " <nl> <nl> <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # project - specific parameters <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> - if datasetName . startswith ( " grocery " ) : <nl> + if datasetName . startswith ( " Grocery " ) : <nl> classes = ( ' __background__ ' , # always index 0 <nl> ' avocado ' , ' orange ' , ' butter ' , ' champagne ' , ' eggBox ' , ' gerkin ' , ' joghurt ' , ' ketchup ' , <nl> ' orangeJuice ' , ' onion ' , ' pepper ' , ' tomato ' , ' water ' , ' milk ' , ' tabasco ' , ' mustard ' ) <nl> mmm a / Examples / Image / Detection / FastRCNN / cntk_helpers . py <nl> ppp b / Examples / Image / Detection / FastRCNN / cntk_helpers . py <nl> <nl> from fastRCNN . nms import nms as nmsPython <nl> from builtins import range <nl> <nl> + available_font = " arial . ttf " <nl> + try : <nl> + dummy = ImageFont . truetype ( available_font , 16 ) <nl> + except : <nl> + available_font = " FreeMono . 
ttf " <nl> <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # Region - of - interest <nl> mmm a / Examples / Image / Detection / FastRCNN / fastRCNN / imdb . py <nl> ppp b / Examples / Image / Detection / FastRCNN / fastRCNN / imdb . py <nl> <nl> from builtins import range <nl> <nl> import sys <nl> - if sys . version_info [ 0 ] < 3 : <nl> - from utils2_win64 . cython_bbox import bbox_overlaps <nl> - else : <nl> - from . utils3_win64 . cython_bbox import bbox_overlaps <nl> + from . utils . cython_bbox import bbox_overlaps <nl> <nl> class imdb ( object ) : <nl> " " " Image database . " " " <nl> mmm a / Examples / Image / Detection / FastRCNN / fastRCNN / test . py <nl> ppp b / Examples / Image / Detection / FastRCNN / fastRCNN / test . py <nl> <nl> <nl> " " " Test a Fast R - CNN network on an imdb ( image database ) . " " " <nl> <nl> - # from config import cfg # , get_output_dir <nl> - # from blob import im_list_to_blob <nl> from __future__ import print_function <nl> import os , sys , cv2 , numpy as np , pickle as cp , heapq <nl> from . nms import nms as nmsPython <nl> - if sys . version_info [ 0 ] < 3 : <nl> - from utils2_win64 . cython_nms import nms <nl> - else : <nl> - from . utils3_win64 . cython_nms import nms <nl> + from . utils . cython_nms import nms <nl> from . timer import Timer <nl> from cntk_helpers import im_detect , apply_nms <nl> from builtins import range <nl> similarity index 100 % <nl> rename from Examples / Image / Detection / FastRCNN / fastRCNN / utils3_win64 / cython_bbox . pyd <nl> rename to Examples / Image / Detection / FastRCNN / fastRCNN / utils / cython_bbox . cp34 - win_amd64 . pyd <nl> new file mode 100644 <nl> index 00000000000 . . 1a649c31eda <nl> Binary files / dev / null and b / Examples / Image / Detection / FastRCNN / fastRCNN / utils / cython_bbox . cp35 - win_amd64 . pyd differ <nl> new file mode 100644 <nl> index 00000000000 . . 
715c2de3928 <nl> Binary files / dev / null and b / Examples / Image / Detection / FastRCNN / fastRCNN / utils / cython_bbox . cpython - 34m . so differ <nl> similarity index 100 % <nl> rename from Examples / Image / Detection / FastRCNN / fastRCNN / utils3_win64 / cython_nms . pyd <nl> rename to Examples / Image / Detection / FastRCNN / fastRCNN / utils / cython_nms . cp34 - win_amd64 . pyd <nl> new file mode 100644 <nl> index 00000000000 . . 841b1894275 <nl> Binary files / dev / null and b / Examples / Image / Detection / FastRCNN / fastRCNN / utils / cython_nms . cp35 - win_amd64 . pyd differ <nl> new file mode 100644 <nl> index 00000000000 . . 75bc0ac5505 <nl> Binary files / dev / null and b / Examples / Image / Detection / FastRCNN / fastRCNN / utils / cython_nms . cpython - 34m . so differ <nl> deleted file mode 100644 <nl> index aa48dacb39e . . 00000000000 <nl> Binary files a / Examples / Image / Detection / FastRCNN / fastRCNN / utils2_win32 / cython_bbox . pyd and / dev / null differ <nl> deleted file mode 100644 <nl> index d85c7c0179f . . 00000000000 <nl> Binary files a / Examples / Image / Detection / FastRCNN / fastRCNN / utils2_win32 / cython_nms . pyd and / dev / null differ <nl> deleted file mode 100644 <nl> index 139597f9cb0 . . 00000000000 <nl> mmm a / Examples / Image / Detection / FastRCNN / fastRCNN / utils2_win64 / __init__ . py <nl> ppp / dev / null <nl> <nl> - <nl> - <nl> deleted file mode 100644 <nl> index ae3c9592ef6 . . 00000000000 <nl> Binary files a / Examples / Image / Detection / FastRCNN / fastRCNN / utils2_win64 / cython_bbox . pyd and / dev / null differ <nl> deleted file mode 100644 <nl> index 45d6b08d9a6 . . 00000000000 <nl> Binary files a / Examples / Image / Detection / FastRCNN / fastRCNN / utils2_win64 / cython_nms . pyd and / dev / null differ <nl> mmm a / Examples / Image / TransferLearning / README . md <nl> ppp b / Examples / Image / TransferLearning / README . 
md <nl> We use the ` Flowers ` data set ( [ Examples / Image / DataSets / Flowers ] ( . . / DataSets / Flo <nl> <nl> # # # Details <nl> <nl> - Run ` python TransferLearning . py ` to train and evaluate the transfer learning model . The model achieves 93 % accuracy on the Flowers data set after training for 20 epochs . A detailed walk through is provided in the [ CNTK github wiki ] ( https : / / github . com / Microsoft / CNTK / wiki / How - do - I - in - Python ) . <nl> + Run ` python TransferLearning . py ` to train and evaluate the transfer learning model . The model achieves 93 % accuracy on the Flowers data set after training for 20 epochs . A detailed walk through is provided in the [ ' Build your own image classifier using Transfer Learning ' ] ( https : / / github . com / Microsoft / CNTK / wiki / Build - your - own - image - classifier - using - Transfer - Learning ) tutorial on the CNTK github wiki . <nl>
|
updated Fast R - CNN to py35
|
microsoft/CNTK
|
50beeadbfb955c6c9d79ceb773513e4e03e03350
|
2017-02-15T08:28:34Z
|
new file mode 100644 <nl> index 000000000000 . . c2c0bf22f5da <nl> mmm / dev / null <nl> ppp b / cocos2dx / platform / win32 / CCAccelerometer_win32 . cpp <nl> <nl> + / * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * <nl> + Copyright ( c ) 2011 cocos2d - x . org http : / / cocos2d - x . org <nl> + Copyright ( c ) 2012 Rocco Loscalzo ( Wartortle ) <nl> + <nl> + Permission is hereby granted , free of charge , to any person obtaining a copy <nl> + of this software and associated documentation files ( the " Software " ) , to deal <nl> + in the Software without restriction , including without limitation the rights <nl> + to use , copy , modify , merge , publish , distribute , sublicense , and / or sell <nl> + copies of the Software , and to permit persons to whom the Software is <nl> + furnished to do so , subject to the following conditions : <nl> + <nl> + The above copyright notice and this permission notice shall be included in <nl> + all copies or substantial portions of the Software . <nl> + <nl> + THE SOFTWARE IS PROVIDED " AS IS " , WITHOUT WARRANTY OF ANY KIND , EXPRESS OR <nl> + IMPLIED , INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY , <nl> + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT . IN NO EVENT SHALL THE <nl> + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM , DAMAGES OR OTHER <nl> + LIABILITY , WHETHER IN AN ACTION OF CONTRACT , TORT OR OTHERWISE , ARISING FROM , <nl> + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN <nl> + THE SOFTWARE . <nl> + * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * / <nl> + <nl> + # include " CCAccelerometer_win32 . h " <nl> + # include " CCEGLView_win32 . h " <nl> + # include " CCDirector . h " <nl> + # include " ccMacros . 
h " <nl> + <nl> + namespace <nl> + { <nl> + <nl> + double g_accelX = 0 . 0 ; <nl> + double g_accelY = 0 . 0 ; <nl> + double g_accelZ = 0 . 0 ; <nl> + const double g_accelerationStep = 0 . 2f ; <nl> + const double g_minAcceleration = - 1 . 0f ; <nl> + const double g_maxAcceleration = 1 . 0f ; <nl> + <nl> + template < class T > <nl> + T CLAMP ( const T val , const T minVal , const T maxVal ) <nl> + { <nl> + CC_ASSERT ( minVal < = maxVal ) ; <nl> + T result = val ; <nl> + if ( result < minVal ) <nl> + result = minVal ; <nl> + else if ( result > maxVal ) <nl> + result = maxVal ; <nl> + <nl> + CC_ASSERT ( minVal < = result & & result < = maxVal ) ; <nl> + return result ; <nl> + } <nl> + <nl> + bool handleKeyDown ( WPARAM wParam ) <nl> + { <nl> + bool sendUpdate = false ; <nl> + switch ( wParam ) <nl> + { <nl> + case VK_LEFT : <nl> + sendUpdate = true ; <nl> + g_accelX = CLAMP ( g_accelX - g_accelerationStep , g_minAcceleration , g_maxAcceleration ) ; <nl> + break ; <nl> + case VK_RIGHT : <nl> + sendUpdate = true ; <nl> + g_accelX = CLAMP ( g_accelX + g_accelerationStep , g_minAcceleration , g_maxAcceleration ) ; <nl> + break ; <nl> + case VK_UP : <nl> + sendUpdate = true ; <nl> + g_accelY = CLAMP ( g_accelY + g_accelerationStep , g_minAcceleration , g_maxAcceleration ) ; <nl> + break ; <nl> + case VK_DOWN : <nl> + sendUpdate = true ; <nl> + g_accelY = CLAMP ( g_accelY - g_accelerationStep , g_minAcceleration , g_maxAcceleration ) ; <nl> + break ; <nl> + case VK_OEM_COMMA : <nl> + sendUpdate = true ; <nl> + g_accelZ = CLAMP ( g_accelZ + g_accelerationStep , g_minAcceleration , g_maxAcceleration ) ; <nl> + break ; <nl> + case VK_OEM_PERIOD : <nl> + sendUpdate = true ; <nl> + g_accelZ = CLAMP ( g_accelZ - g_accelerationStep , g_minAcceleration , g_maxAcceleration ) ; <nl> + break ; <nl> + } <nl> + return sendUpdate ; <nl> + } <nl> + <nl> + bool handleKeyUp ( WPARAM wParam ) <nl> + { <nl> + bool sendUpdate = false ; <nl> + switch ( wParam ) <nl> + { <nl> + case VK_LEFT : 
<nl> + case VK_RIGHT : <nl> + sendUpdate = true ; <nl> + g_accelX = 0 . 0 ; <nl> + break ; <nl> + case VK_UP : <nl> + case VK_DOWN : <nl> + sendUpdate = true ; <nl> + g_accelY = 0 . 0 ; <nl> + break ; <nl> + case VK_OEM_COMMA : <nl> + case VK_OEM_PERIOD : <nl> + sendUpdate = true ; <nl> + g_accelZ = 0 . 0 ; <nl> + break ; <nl> + } <nl> + return sendUpdate ; <nl> + } <nl> + <nl> + void myAccelerometerKeyHook ( UINT message , WPARAM wParam , LPARAM lParam ) <nl> + { <nl> + cocos2d : : CCAccelerometer * pAccelerometer = cocos2d : : CCAccelerometer : : sharedAccelerometer ( ) ; <nl> + bool sendUpdate = false ; <nl> + switch ( message ) <nl> + { <nl> + case WM_KEYDOWN : <nl> + sendUpdate = handleKeyDown ( wParam ) ; <nl> + break ; <nl> + case WM_KEYUP : <nl> + sendUpdate = handleKeyUp ( wParam ) ; <nl> + break ; <nl> + case WM_CHAR : <nl> + / / Deliberately empty - all handled through key up and down events <nl> + break ; <nl> + default : <nl> + / / Not expected to get here ! ! <nl> + CC_ASSERT ( false ) ; <nl> + break ; <nl> + } <nl> + <nl> + if ( sendUpdate ) <nl> + { <nl> + const time_t theTime = time ( NULL ) ; <nl> + const double timestamp = ( double ) theTime / 100 . 0 ; <nl> + pAccelerometer - > update ( g_accelX , g_accelY , g_accelZ , timestamp ) ; <nl> + } <nl> + } <nl> + <nl> + void resetAccelerometer ( ) <nl> + { <nl> + g_accelX = 0 . 0 ; <nl> + g_accelY = 0 . 0 ; <nl> + g_accelZ = 0 . 
0 ; <nl> + } <nl> + <nl> + } <nl> + <nl> + namespace cocos2d <nl> + { <nl> + <nl> + / / static members <nl> + CCAccelerometer * CCAccelerometer : : m_spCCAccelerometer = NULL ; <nl> + <nl> + CCAccelerometer : : CCAccelerometer ( ) : <nl> + m_pAccelDelegate ( NULL ) <nl> + { <nl> + } <nl> + <nl> + CCAccelerometer : : ~ CCAccelerometer ( ) <nl> + { <nl> + if ( m_spCCAccelerometer ) <nl> + { <nl> + delete m_spCCAccelerometer ; <nl> + m_spCCAccelerometer = NULL ; <nl> + } <nl> + } <nl> + <nl> + / / static <nl> + CCAccelerometer * CCAccelerometer : : sharedAccelerometer ( ) <nl> + { <nl> + if ( m_spCCAccelerometer = = NULL ) <nl> + { <nl> + m_spCCAccelerometer = new CCAccelerometer ( ) ; <nl> + } <nl> + return m_spCCAccelerometer ; <nl> + } <nl> + <nl> + void CCAccelerometer : : setDelegate ( CCAccelerometerDelegate * pDelegate ) <nl> + { <nl> + m_pAccelDelegate = pDelegate ; <nl> + <nl> + / / Enable / disable the accelerometer . <nl> + / / Well , there isn ' t one on Win32 so we don ' t do anything other than register <nl> + / / and deregister ourselves from the Windows Key handler . <nl> + if ( pDelegate ) <nl> + { <nl> + / / Register our handler <nl> + CCEGLView : : sharedOpenGLView ( ) . setAccelerometerKeyHook ( & myAccelerometerKeyHook ) ; <nl> + } <nl> + else <nl> + { <nl> + / / De - register our handler <nl> + CCEGLView : : sharedOpenGLView ( ) . setAccelerometerKeyHook ( NULL ) ; <nl> + resetAccelerometer ( ) ; <nl> + } <nl> + } <nl> + <nl> + void CCAccelerometer : : update ( double x , double y , double z , double timestamp ) <nl> + { <nl> + if ( m_pAccelDelegate ) <nl> + { <nl> + m_obAccelerationValue . x = x ; <nl> + m_obAccelerationValue . y = y ; <nl> + m_obAccelerationValue . z = z ; <nl> + m_obAccelerationValue . 
timestamp = timestamp ; <nl> + <nl> + / / Handle orientation changes <nl> + CCDirector * pDirector = CCDirector : : sharedDirector ( ) ; <nl> + const ccDeviceOrientation orientation = pDirector - > getDeviceOrientation ( ) ; <nl> + const double tmp = m_obAccelerationValue . x ; <nl> + switch ( orientation ) <nl> + { <nl> + case kCCDeviceOrientationLandscapeRight : <nl> + m_obAccelerationValue . x = - m_obAccelerationValue . y ; <nl> + m_obAccelerationValue . y = tmp ; <nl> + break ; <nl> + <nl> + case kCCDeviceOrientationLandscapeLeft : <nl> + m_obAccelerationValue . x = m_obAccelerationValue . y ; <nl> + m_obAccelerationValue . y = - tmp ; <nl> + break ; <nl> + <nl> + case kCCDeviceOrientationPortraitUpsideDown : <nl> + m_obAccelerationValue . x = - m_obAccelerationValue . y ; <nl> + m_obAccelerationValue . y = - tmp ; <nl> + break ; <nl> + <nl> + case kCCDeviceOrientationPortrait : <nl> + break ; <nl> + } <nl> + <nl> + / / Delegate <nl> + m_pAccelDelegate - > didAccelerate ( & m_obAccelerationValue ) ; <nl> + } <nl> + } <nl> + <nl> + } / / end of namespace cococs2d <nl> + <nl>
|
On Win32 platform, there's now support for simulating the Accelerometer
|
cocos2d/cocos2d-x
|
ffc445799190f446f0d54b39065e1d1c43c54819
|
2012-02-06T11:29:28Z
|
mmm a / test / mjsunit / mjsunit . status <nl> ppp b / test / mjsunit / mjsunit . status <nl> <nl> <nl> # BUG ( v8 : 3148 ) : Invalid branch instruction emitted . <nl> ' debug - references ' : [ PASS , [ ' mode = = debug ' , SKIP ] ] , <nl> + ' mirror - array ' : [ PASS , [ ' mode = = debug ' , SKIP ] ] , <nl> + <nl> + # BUG ( v8 : 3156 ) : Fails on gc stress bots . <nl> + ' compiler / concurrent - invalidate - transition - map ' : [ PASS , [ ' gc_stress = = True ' , FAIL ] ] , <nl> + # BUG ( v8 : 3157 ) : Fails on gc stress bots . <nl> + ' sparse - array - reverse ' : [ PASS , [ ' gc_stress = = True ' , FAIL ] ] , <nl> } ] , # ' arch = = a64 ' <nl> <nl> [ ' arch = = a64 and mode = = debug and simulator_run = = True ' , { <nl>
|
A64: Skip tests failing on gc stress bots
|
v8/v8
|
b0fcc801e9c1a8a26543d8cb494251b7917a567d
|
2014-02-12T12:18:36Z
|
mmm a / python / caffe / pycaffe . cpp <nl> ppp b / python / caffe / pycaffe . cpp <nl> <nl> <nl> # define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION <nl> <nl> - # include < string > <nl> - # include < vector > <nl> - <nl> # include " boost / python . hpp " <nl> # include " boost / python / suite / indexing / vector_indexing_suite . hpp " <nl> # include " numpy / arrayobject . h " <nl> + <nl> + # include < string > / / NOLINT ( build / include_order ) <nl> + # include < vector > / / NOLINT ( build / include_order ) <nl> + <nl> # include " caffe / caffe . hpp " <nl> <nl> / / Temporary solution for numpy < 1 . 7 versions : old macro , no promises . <nl>
|
fix include order for pycaffe on osx, override lint
|
BVLC/caffe
|
fffae6cd690ad4c2151e16ea0b268a6a1d1f81e0
|
2014-02-27T23:53:32Z
|
mmm a / scene / 2d / light_2d . cpp <nl> ppp b / scene / 2d / light_2d . cpp <nl> void Light2D : : _update_light_visibility ( ) { <nl> if ( ! is_inside_tree ( ) ) <nl> return ; <nl> <nl> - VS : : get_singleton ( ) - > canvas_light_set_enabled ( canvas_light , enabled & & is_visible ( ) ) ; <nl> + bool editor_ok = true ; <nl> + <nl> + # ifdef TOOLS_ENABLED <nl> + if ( editor_only ) { <nl> + if ( ! get_tree ( ) - > is_editor_hint ( ) ) { <nl> + editor_ok = false ; <nl> + } else { <nl> + editor_ok = ( get_tree ( ) - > get_edited_scene_root ( ) & & ( this = = get_tree ( ) - > get_edited_scene_root ( ) | | get_owner ( ) = = get_tree ( ) - > get_edited_scene_root ( ) ) ) ; <nl> + } <nl> + } <nl> + # else <nl> + if ( editor_only ) { <nl> + editor_ok = false ; <nl> + } <nl> + # endif <nl> + <nl> + VS : : get_singleton ( ) - > canvas_light_set_enabled ( canvas_light , enabled & & is_visible ( ) & & editor_ok ) ; <nl> } <nl> <nl> void Light2D : : set_enabled ( bool p_enabled ) { <nl> bool Light2D : : is_enabled ( ) const { <nl> return enabled ; <nl> } <nl> <nl> + void Light2D : : set_editor_only ( bool p_editor_only ) { <nl> + <nl> + editor_only = p_editor_only ; <nl> + _update_light_visibility ( ) ; <nl> + } <nl> + <nl> + bool Light2D : : is_editor_only ( ) const { <nl> + <nl> + return editor_only ; <nl> + } <nl> + <nl> void Light2D : : set_texture ( const Ref < Texture > & p_texture ) { <nl> <nl> texture = p_texture ; <nl> void Light2D : : _bind_methods ( ) { <nl> ObjectTypeDB : : bind_method ( _MD ( " set_enabled " , " enabled " ) , & Light2D : : set_enabled ) ; <nl> ObjectTypeDB : : bind_method ( _MD ( " is_enabled " ) , & Light2D : : is_enabled ) ; <nl> <nl> + ObjectTypeDB : : bind_method ( _MD ( " set_editor_only " , " editor_only " ) , & Light2D : : set_editor_only ) ; <nl> + ObjectTypeDB : : bind_method ( _MD ( " is_editor_only " ) , & Light2D : : is_editor_only ) ; <nl> + <nl> ObjectTypeDB : : bind_method ( _MD ( " set_texture " , " texture " ) , & Light2D : : 
set_texture ) ; <nl> ObjectTypeDB : : bind_method ( _MD ( " get_texture " ) , & Light2D : : get_texture ) ; <nl> <nl> void Light2D : : _bind_methods ( ) { <nl> <nl> <nl> ADD_PROPERTY ( PropertyInfo ( Variant : : BOOL , " enabled " ) , _SCS ( " set_enabled " ) , _SCS ( " is_enabled " ) ) ; <nl> + ADD_PROPERTY ( PropertyInfo ( Variant : : BOOL , " editor_only " ) , _SCS ( " set_editor_only " ) , _SCS ( " is_editor_only " ) ) ; <nl> ADD_PROPERTY ( PropertyInfo ( Variant : : OBJECT , " texture " , PROPERTY_HINT_RESOURCE_TYPE , " Texture " ) , _SCS ( " set_texture " ) , _SCS ( " get_texture " ) ) ; <nl> ADD_PROPERTY ( PropertyInfo ( Variant : : VECTOR2 , " offset " ) , _SCS ( " set_texture_offset " ) , _SCS ( " get_texture_offset " ) ) ; <nl> ADD_PROPERTY ( PropertyInfo ( Variant : : REAL , " scale " , PROPERTY_HINT_RANGE , " 0 . 01 , 50 , 0 . 01 " ) , _SCS ( " set_texture_scale " ) , _SCS ( " get_texture_scale " ) ) ; <nl> Light2D : : Light2D ( ) { <nl> <nl> canvas_light = VisualServer : : get_singleton ( ) - > canvas_light_create ( ) ; <nl> enabled = true ; <nl> + editor_only = false ; <nl> shadow = false ; <nl> color = Color ( 1 , 1 , 1 ) ; <nl> height = 0 ; <nl> mmm a / scene / 2d / light_2d . h <nl> ppp b / scene / 2d / light_2d . h <nl> class Light2D : public Node2D { <nl> private : <nl> RID canvas_light ; <nl> bool enabled ; <nl> + bool editor_only ; <nl> bool shadow ; <nl> Color color ; <nl> Color shadow_color ; <nl> class Light2D : public Node2D { <nl> void set_enabled ( bool p_enabled ) ; <nl> bool is_enabled ( ) const ; <nl> <nl> + void set_editor_only ( bool p_editor_only ) ; <nl> + bool is_editor_only ( ) const ; <nl> + <nl> void set_texture ( const Ref < Texture > & p_texture ) ; <nl> Ref < Texture > get_texture ( ) const ; <nl> <nl>
|
Merge pull request from RandomShaper/light2d-editor-only
|
godotengine/godot
|
e9521523a2e170ef0aace47d44f068cc755a817e
|
2016-10-09T12:36:55Z
|
mmm a / Changelog <nl> ppp b / Changelog <nl> <nl> - FEATURE : Torrents can be rechecked from Web UI ( Stephanos Antaris ) <nl> - FEATURE : New peers can manually be added to the torrents <nl> - FEATURE : Support per - peer rate limiting <nl> + - FEATURE : Support peer manual ban <nl> - COSMETIC : Merged download / upload lists <nl> - COSMETIC : Torrents can be filtered based on their status <nl> - COSMETIC : Torrent properties are now displayed in main window <nl> deleted file mode 100644 <nl> index a28a8a2e9b . . 0000000000 <nl> Binary files a / src / Icons / oxygen / add_peer . png and / dev / null differ <nl> new file mode 100644 <nl> index 0000000000 . . 13d300c2d2 <nl> Binary files / dev / null and b / src / Icons / oxygen / user - group - delete . png differ <nl> new file mode 100644 <nl> index 0000000000 . . c39092037c <nl> Binary files / dev / null and b / src / Icons / oxygen / user - group - new . png differ <nl> mmm a / src / bittorrent . cpp <nl> ppp b / src / bittorrent . cpp <nl> void bittorrent : : configureSession ( ) { <nl> / / * Maximum ratio <nl> setDeleteRatio ( Preferences : : getDeleteRatio ( ) ) ; <nl> / / Ip Filter <nl> + FilterParserThread : : processFilterList ( s , Preferences : : bannedIPs ( ) ) ; <nl> if ( Preferences : : isFilteringEnabled ( ) ) { <nl> enableIPFilter ( Preferences : : getFilter ( ) ) ; <nl> } else { <nl> bool bittorrent : : hasActiveTorrents ( ) const { <nl> return false ; <nl> } <nl> <nl> + void bittorrent : : banIP ( QString ip ) { <nl> + FilterParserThread : : processFilterList ( s , QStringList ( ip ) ) ; <nl> + Preferences : : banIP ( ip ) ; <nl> + } <nl> + <nl> / / Delete a torrent from the session , given its hash <nl> / / permanent = true means that the torrent will be removed from the hard - drive too <nl> void bittorrent : : deleteTorrent ( QString hash , bool permanent ) { <nl> mmm a / src / bittorrent . h <nl> ppp b / src / bittorrent . 
h <nl> class bittorrent : public QObject { <nl> void addMagnetSkipAddDlg ( QString uri ) ; <nl> void downloadFromURLList ( const QStringList & urls ) ; <nl> void configureSession ( ) ; <nl> + void banIP ( QString ip ) ; <nl> <nl> protected slots : <nl> void addTorrentsFromScanFolder ( QStringList & ) ; <nl> mmm a / src / filterParserThread . h <nl> ppp b / src / filterParserThread . h <nl> class FilterParserThread : public QThread { <nl> / / * PeerGuardian Text ( P2P ) : http : / / wiki . phoenixlabs . org / wiki / P2P_Format <nl> / / * PeerGuardian Binary ( P2B ) : http : / / wiki . phoenixlabs . org / wiki / P2B_Format <nl> void processFilterFile ( QString _filePath ) { <nl> + / / First , import current filter <nl> + filter = s - > get_ip_filter ( ) ; <nl> if ( isRunning ( ) ) { <nl> / / Already parsing a filter , abort first <nl> abort = true ; <nl> class FilterParserThread : public QThread { <nl> start ( ) ; <nl> } <nl> <nl> + static void processFilterList ( session * s , QStringList IPs ) { <nl> + / / First , import current filter <nl> + ip_filter filter = s - > get_ip_filter ( ) ; <nl> + foreach ( const QString & ip , IPs ) { <nl> + qDebug ( " Manual ban of peer % s " , ip . toLocal8Bit ( ) . data ( ) ) ; <nl> + address_v4 addr = address_v4 : : from_string ( ip . toLocal8Bit ( ) . data ( ) ) ; <nl> + filter . add_rule ( addr , addr , ip_filter : : blocked ) ; <nl> + } <nl> + s - > set_ip_filter ( filter ) ; <nl> + } <nl> + <nl> } ; <nl> <nl> # endif <nl> mmm a / src / icons . qrc <nl> ppp b / src / icons . qrc <nl> <nl> < file > Icons / oxygen / edit - paste . png < / file > <nl> < file > Icons / oxygen / run - build . png < / file > <nl> < file > Icons / oxygen / proxy . png < / file > <nl> + < file > Icons / oxygen / user - group - delete . png < / file > <nl> + < file > Icons / oxygen / user - group - new . png < / file > <nl> < file > Icons / oxygen / log . png < / file > <nl> < file > Icons / oxygen / unavailable . 
png < / file > <nl> < file > Icons / oxygen / button_ok . png < / file > <nl> <nl> < file > Icons / oxygen / unsubscribe . png < / file > <nl> < file > Icons / oxygen / draw - rectangle . png < / file > <nl> < file > Icons / oxygen / subscribe16 . png < / file > <nl> - < file > Icons / oxygen / add_peer . png < / file > <nl> < / qresource > <nl> < / RCC > <nl> \ No newline at end of file <nl> mmm a / src / peerlistdelegate . h <nl> ppp b / src / peerlistdelegate . h <nl> <nl> # include < QItemDelegate > <nl> # include " misc . h " <nl> <nl> - enum PeerListColumns { IP , CLIENT , PROGRESS , DOWN_SPEED , UP_SPEED , TOT_DOWN , TOT_UP } ; <nl> + enum PeerListColumns { IP , CLIENT , PROGRESS , DOWN_SPEED , UP_SPEED , TOT_DOWN , TOT_UP , IP_HIDDEN } ; <nl> <nl> class PeerListDelegate : public QItemDelegate { <nl> Q_OBJECT <nl> mmm a / src / peerlistwidget . cpp <nl> ppp b / src / peerlistwidget . cpp <nl> PeerListWidget : : PeerListWidget ( PropertiesWidget * parent ) : properties ( parent ) , di <nl> setRootIsDecorated ( false ) ; <nl> setItemsExpandable ( false ) ; <nl> setAllColumnsShowFocus ( true ) ; <nl> + setSelectionMode ( QAbstractItemView : : ExtendedSelection ) ; <nl> / / List Model <nl> - listModel = new QStandardItemModel ( 0 , 7 ) ; <nl> + listModel = new QStandardItemModel ( 0 , 8 ) ; <nl> listModel - > setHeaderData ( IP , Qt : : Horizontal , tr ( " IP " ) ) ; <nl> listModel - > setHeaderData ( CLIENT , Qt : : Horizontal , tr ( " Client " , " i . e . : Client application " ) ) ; <nl> listModel - > setHeaderData ( PROGRESS , Qt : : Horizontal , tr ( " Progress " , " i . 
e : % downloaded " ) ) ; <nl> PeerListWidget : : PeerListWidget ( PropertiesWidget * parent ) : properties ( parent ) , di <nl> proxyModel - > setDynamicSortFilter ( true ) ; <nl> proxyModel - > setSourceModel ( listModel ) ; <nl> setModel ( proxyModel ) ; <nl> + hideColumn ( IP_HIDDEN ) ; <nl> / / Context menu <nl> setContextMenuPolicy ( Qt : : CustomContextMenu ) ; <nl> connect ( this , SIGNAL ( customContextMenuRequested ( QPoint ) ) , this , SLOT ( showPeerListMenu ( QPoint ) ) ) ; <nl> void PeerListWidget : : updatePeerCountryResolutionState ( ) { <nl> <nl> void PeerListWidget : : showPeerListMenu ( QPoint ) { <nl> QMenu menu ; <nl> + bool empty_menu = true ; <nl> QTorrentHandle h = properties - > getCurrentTorrent ( ) ; <nl> if ( ! h . is_valid ( ) ) return ; <nl> QModelIndexList selectedIndexes = selectionModel ( ) - > selectedRows ( ) ; <nl> QStringList selectedPeerIPs ; <nl> foreach ( const QModelIndex & index , selectedIndexes ) { <nl> - QString IP = proxyModel - > data ( index ) . toString ( ) ; <nl> + int row = proxyModel - > mapToSource ( index ) . row ( ) ; <nl> + QString IP = listModel - > data ( listModel - > index ( row , IP_HIDDEN ) ) . toString ( ) ; <nl> selectedPeerIPs < < IP ; <nl> } <nl> / / Add Peer Action <nl> QAction * addPeerAct = 0 ; <nl> if ( ! h . is_queued ( ) & & ! h . is_checking ( ) ) { <nl> - addPeerAct = menu . addAction ( QIcon ( " : / Icons / oxygen / add_peer . png " ) , tr ( " Add a new peer " ) ) ; <nl> + addPeerAct = menu . addAction ( QIcon ( " : / Icons / oxygen / user - group - new . png " ) , tr ( " Add a new peer " ) ) ; <nl> + empty_menu = false ; <nl> } <nl> / / Per Peer Speed limiting actions <nl> QAction * upLimitAct = 0 ; <nl> QAction * dlLimitAct = 0 ; <nl> + QAction * banAct = 0 ; <nl> if ( ! selectedPeerIPs . isEmpty ( ) ) { <nl> upLimitAct = menu . addAction ( QIcon ( " : / Icons / skin / seeding . png " ) , tr ( " Limit upload rate " ) ) ; <nl> dlLimitAct = menu . 
addAction ( QIcon ( " : / Icons / skin / downloading . png " ) , tr ( " Limit download rate " ) ) ; <nl> + banAct = menu . addAction ( QIcon ( " : / Icons / oxygen / user - group - delete . png " ) , tr ( " Ban peer permanently " ) ) ; <nl> + empty_menu = false ; <nl> } <nl> + if ( empty_menu ) return ; <nl> QAction * act = menu . exec ( QCursor : : pos ( ) ) ; <nl> if ( act = = addPeerAct ) { <nl> boost : : asio : : ip : : tcp : : endpoint ep = PeerAdditionDlg : : askForPeerEndpoint ( ) ; <nl> void PeerListWidget : : showPeerListMenu ( QPoint ) { <nl> limitDlRateSelectedPeers ( selectedPeerIPs ) ; <nl> return ; <nl> } <nl> + if ( act = = banAct ) { <nl> + banSelectedPeers ( selectedPeerIPs ) ; <nl> + return ; <nl> + } <nl> + } <nl> + <nl> + void PeerListWidget : : banSelectedPeers ( QStringList peer_ips ) { <nl> + / / Confirm first <nl> + int ret = QMessageBox : : question ( this , tr ( " Are you sure ? - - qBittorrent " ) , tr ( " Are you sure you want to ban permanently the selected peers ? " ) , <nl> + tr ( " & Yes " ) , tr ( " & No " ) , <nl> + QString ( ) , 0 , 1 ) ; <nl> + if ( ret ) return ; <nl> + foreach ( const QString & ip , peer_ips ) { <nl> + qDebug ( " Banning peer % s . . . " , ip . toLocal8Bit ( ) . data ( ) ) ; <nl> + properties - > getBTSession ( ) - > addConsoleMessage ( tr ( " Manually banning peer % 1 . . . " ) . 
arg ( ip ) ) ; <nl> + properties - > getBTSession ( ) - > banIP ( ip ) ; <nl> + } <nl> + / / Refresh list <nl> + loadPeers ( properties - > getCurrentTorrent ( ) ) ; <nl> } <nl> <nl> void PeerListWidget : : limitUpRateSelectedPeers ( QStringList peer_ips ) { <nl> QStandardItem * PeerListWidget : : addPeer ( QString ip , peer_info peer ) { <nl> / / Adding Peer to peer list <nl> listModel - > insertRow ( row ) ; <nl> listModel - > setData ( listModel - > index ( row , IP ) , ip ) ; <nl> + listModel - > setData ( listModel - > index ( row , IP_HIDDEN ) , ip ) ; <nl> / / Resolve peer host name is asked <nl> if ( resolver ) <nl> resolver - > resolve ( peer . ip ) ; <nl> mmm a / src / peerlistwidget . h <nl> ppp b / src / peerlistwidget . h <nl> protected slots : <nl> void showPeerListMenu ( QPoint ) ; <nl> void limitUpRateSelectedPeers ( QStringList peer_ips ) ; <nl> void limitDlRateSelectedPeers ( QStringList peer_ips ) ; <nl> + void banSelectedPeers ( QStringList peer_ips ) ; <nl> } ; <nl> <nl> # endif / / PEERLISTWIDGET_H <nl> mmm a / src / preferences . h <nl> ppp b / src / preferences . h <nl> class Preferences { <nl> return settings . value ( QString : : fromUtf8 ( " Preferences / IPFilter / File " ) , QString ( ) ) . toString ( ) ; <nl> } <nl> <nl> + static void banIP ( QString ip ) { <nl> + QSettings settings ( " qBittorrent " , " qBittorrent " ) ; <nl> + QStringList banned_ips = settings . value ( QString : : fromUtf8 ( " Preferences / IPFilter / BannedIPs " ) , QStringList ( ) ) . toStringList ( ) ; <nl> + if ( ! banned_ips . contains ( ip ) ) { <nl> + banned_ips < < ip ; <nl> + settings . setValue ( " Preferences / IPFilter / BannedIPs " , banned_ips ) ; <nl> + } <nl> + } <nl> + <nl> + static QStringList bannedIPs ( ) { <nl> + QSettings settings ( " qBittorrent " , " qBittorrent " ) ; <nl> + return settings . value ( QString : : fromUtf8 ( " Preferences / IPFilter / BannedIPs " ) , QStringList ( ) ) . 
toStringList ( ) ; <nl> + } <nl> + <nl> / / RSS <nl> static bool isRSSEnabled ( ) { <nl> QSettings settings ( " qBittorrent " , " qBittorrent " ) ; <nl> class Preferences { <nl> QSettings settings ( " qBittorrent " , " qBittorrent " ) ; <nl> return settings . value ( " Preferences / WebUI / Password " , " " ) . toString ( ) ; <nl> } <nl> + <nl> } ; <nl> <nl> # endif / / PREFERENCES_H <nl> mmm a / src / propertieswidget . cpp <nl> ppp b / src / propertieswidget . cpp <nl> const QTorrentHandle & PropertiesWidget : : getCurrentTorrent ( ) const { <nl> return h ; <nl> } <nl> <nl> + bittorrent * PropertiesWidget : : getBTSession ( ) const { <nl> + return BTSession ; <nl> + } <nl> + <nl> void PropertiesWidget : : loadTorrentInfos ( QTorrentHandle & _h ) { <nl> h = _h ; <nl> if ( ! h . is_valid ( ) ) { <nl> mmm a / src / propertieswidget . h <nl> ppp b / src / propertieswidget . h <nl> public slots : <nl> PropertiesWidget ( QWidget * parent , TransferListWidget * transferList , bittorrent * BTSession ) ; <nl> ~ PropertiesWidget ( ) ; <nl> const QTorrentHandle & getCurrentTorrent ( ) const ; <nl> + bittorrent * getBTSession ( ) const ; <nl> } ; <nl> <nl> # endif / / PROPERTIESWIDGET_H <nl>
|
- Support peer manual ban (from peer list)
|
qbittorrent/qBittorrent
|
7c8455115008fd65283e6c4b4f17490105a04f42
|
2009-11-17T16:02:35Z
|
mmm a / tensorflow / core / grappler / utils / graph_view . cc <nl> ppp b / tensorflow / core / grappler / utils / graph_view . cc <nl> void MutableGraphView : : RemoveNodesInternal ( <nl> removed_node_index ; <nl> } <nl> nodes_ . pop_back ( ) ; <nl> - graph ( ) - > mutable_node ( ) - > RemoveLast ( ) ; <nl> + } <nl> + if ( ! sorted_node_indices_to_remove . empty ( ) ) { <nl> + const int current_size = graph ( ) - > node_size ( ) ; <nl> + const int num_to_remove = sorted_node_indices_to_remove . size ( ) ; <nl> + graph ( ) - > mutable_node ( ) - > DeleteSubrange ( current_size - num_to_remove , <nl> + num_to_remove ) ; <nl> } <nl> } <nl> <nl>
|
Use DeleteSubrange when deleting nodes from the graph in the new GraphView.
|
tensorflow/tensorflow
|
98b54f7d934d9fa77c095d18f77eff4654b8ee84
|
2019-09-16T21:51:55Z
|
mmm a / stdlib / public / SDK / AppKit / AppKit . swift <nl> ppp b / stdlib / public / SDK / AppKit / AppKit . swift <nl> struct _NSViewMirror : _MirrorType { <nl> <nl> init ( _ v : NSView ) { _v = v } <nl> <nl> - var value : Any { get { return _v } } <nl> + var value : Any { return _v } <nl> <nl> - var valueType : Any . Type { get { return ( _v as Any ) . dynamicType } } <nl> + var valueType : Any . Type { return ( _v as Any ) . dynamicType } <nl> <nl> - var objectIdentifier : ObjectIdentifier ? { get { return . None } } <nl> + var objectIdentifier : ObjectIdentifier ? { return . None } <nl> <nl> - var count : Int { get { return 0 } } <nl> + var count : Int { return 0 } <nl> <nl> subscript ( _ : Int ) - > ( String , _MirrorType ) { <nl> _preconditionFailure ( " _MirrorType access out of bounds " ) <nl> } <nl> <nl> - var summary : String { get { return " " } } <nl> + var summary : String { return " " } <nl> <nl> - var quickLookObject : PlaygroundQuickLook ? { get { <nl> + var quickLookObject : PlaygroundQuickLook ? { <nl> / / adapted from the Xcode QuickLooks implementation <nl> <nl> var result : PlaygroundQuickLook ? = nil <nl> struct _NSViewMirror : _MirrorType { <nl> <nl> return result <nl> <nl> - } } <nl> + } <nl> <nl> - var disposition : _MirrorDisposition { get { return . Aggregate } } <nl> + var disposition : _MirrorDisposition { return . Aggregate } <nl> } <nl> <nl> extension NSView : _Reflectable { <nl> mmm a / stdlib / public / core / UnicodeScalar . swift <nl> ppp b / stdlib / public / core / UnicodeScalar . swift <nl> public struct UnicodeScalar : <nl> var _value : UInt32 <nl> <nl> / / / A numeric representation of ` self ` . <nl> - public var value : UInt32 { <nl> - get { <nl> - return _value <nl> - } <nl> - } <nl> + public var value : UInt32 { return _value } <nl> <nl> @ _transparent <nl> public init ( _builtinUnicodeScalarLiteral value : Builtin . Int32 ) { <nl>
|
[stdlib] Remove the get keyword of read-only computed property
|
apple/swift
|
c5c63519e55a5d3a80f8bb2b8bbec017f958310b
|
2015-12-24T12:59:34Z
|
mmm a / src / compiler / pipeline . cc <nl> ppp b / src / compiler / pipeline . cc <nl> PipelineWasmCompilationJob : : Status PipelineWasmCompilationJob : : FinalizeJobImpl ( <nl> code_desc , code_generator - > frame ( ) - > GetTotalFrameSlotCount ( ) , <nl> data_ . wasm_function_index ( ) , code_generator - > GetSafepointTableOffset ( ) , <nl> code_generator - > GetHandlerTableOffset ( ) , <nl> - data_ . wasm_compilation_data ( ) - > ReleaseProtectedInstructions ( ) , <nl> + data_ . wasm_compilation_data ( ) - > GetProtectedInstructions ( ) , <nl> code_generator - > GetSourcePositionTable ( ) , wasm : : WasmCode : : kTurbofan ) ; <nl> <nl> if ( ! code ) return FAILED ; <nl> mmm a / src / compiler / wasm - compiler . cc <nl> ppp b / src / compiler / wasm - compiler . cc <nl> MaybeHandle < Code > CompileCWasmEntry ( Isolate * isolate , wasm : : FunctionSig * sig ) { <nl> return code ; <nl> } <nl> <nl> - WasmCompilationData : : WasmCompilationData ( <nl> - wasm : : RuntimeExceptionSupport runtime_exception_support ) <nl> - : protected_instructions_ ( <nl> - new std : : vector < trap_handler : : ProtectedInstructionData > ( ) ) , <nl> - runtime_exception_support_ ( runtime_exception_support ) { } <nl> - <nl> - void WasmCompilationData : : AddProtectedInstruction ( uint32_t instr_offset , <nl> - uint32_t landing_offset ) { <nl> - protected_instructions_ - > emplace_back ( <nl> - trap_handler : : ProtectedInstructionData { instr_offset , landing_offset } ) ; <nl> - } <nl> - <nl> TurbofanWasmCompilationUnit : : TurbofanWasmCompilationUnit ( <nl> wasm : : WasmCompilationUnit * wasm_unit ) <nl> : wasm_unit_ ( wasm_unit ) , <nl> mmm a / src / compiler / wasm - compiler . h <nl> ppp b / src / compiler / wasm - compiler . 
h <nl> namespace compiler { <nl> class WasmCompilationData { <nl> public : <nl> explicit WasmCompilationData ( <nl> - wasm : : RuntimeExceptionSupport runtime_exception_support ) ; <nl> + wasm : : RuntimeExceptionSupport runtime_exception_support ) <nl> + : runtime_exception_support_ ( runtime_exception_support ) { } <nl> <nl> - void AddProtectedInstruction ( uint32_t instr_offset , uint32_t landing_offset ) ; <nl> + void AddProtectedInstruction ( uint32_t instr_offset , uint32_t landing_offset ) { <nl> + protected_instructions_ . push_back ( { instr_offset , landing_offset } ) ; <nl> + } <nl> <nl> - std : : unique_ptr < std : : vector < trap_handler : : ProtectedInstructionData > > <nl> - ReleaseProtectedInstructions ( ) { <nl> - return std : : move ( protected_instructions_ ) ; <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > <nl> + GetProtectedInstructions ( ) { <nl> + return OwnedVector < trap_handler : : ProtectedInstructionData > : : Of ( <nl> + protected_instructions_ ) ; <nl> } <nl> <nl> wasm : : RuntimeExceptionSupport runtime_exception_support ( ) const { <nl> class WasmCompilationData { <nl> } <nl> <nl> private : <nl> - std : : unique_ptr < std : : vector < trap_handler : : ProtectedInstructionData > > <nl> - protected_instructions_ ; <nl> + std : : vector < trap_handler : : ProtectedInstructionData > protected_instructions_ ; <nl> <nl> / / See ModuleEnv : : runtime_exception_support_ . <nl> wasm : : RuntimeExceptionSupport runtime_exception_support_ ; <nl> mmm a / src / source - position - table . cc <nl> ppp b / src / source - position - table . cc <nl> OwnedVector < byte > SourcePositionTableBuilder : : ToSourcePositionTableVector ( ) { <nl> if ( bytes_ . empty ( ) ) return OwnedVector < byte > ( ) ; <nl> DCHECK ( ! Omit ( ) ) ; <nl> <nl> - OwnedVector < byte > table = OwnedVector < byte > : : New ( bytes_ . size ( ) ) ; <nl> - MemCopy ( table . start ( ) , bytes_ . data ( ) , bytes_ . 
size ( ) ) ; <nl> + OwnedVector < byte > table = OwnedVector < byte > : : Of ( bytes_ ) ; <nl> <nl> # ifdef ENABLE_SLOW_DCHECKS <nl> / / Brute force testing : Record all positions and decode <nl> mmm a / src / vector . h <nl> ppp b / src / vector . h <nl> <nl> # ifndef V8_VECTOR_H_ <nl> # define V8_VECTOR_H_ <nl> <nl> - # include < string . h > <nl> # include < algorithm > <nl> + # include < cstring > <nl> + # include < iterator > <nl> <nl> # include " src / allocation . h " <nl> # include " src / checks . h " <nl> class OwnedVector { <nl> <nl> / / Allocates a new vector of the specified size via the default allocator . <nl> static OwnedVector < T > New ( size_t size ) { <nl> + if ( size = = 0 ) return { } ; <nl> return OwnedVector < T > ( std : : unique_ptr < T [ ] > ( new T [ size ] ) , size ) ; <nl> } <nl> <nl> + / / Allocates a new vector containing the specified collection of values . <nl> + / / { Iterator } is the common type of { std : : begin } and { std : : end } called on a <nl> + / / { const U & } . This function is only instantiable if that type exists . <nl> + template < typename U , typename Iterator = typename std : : common_type < <nl> + decltype ( std : : begin ( std : : declval < const U & > ( ) ) ) , <nl> + decltype ( std : : end ( std : : declval < const U & > ( ) ) ) > : : type > <nl> + static OwnedVector < T > Of ( const U & collection ) { <nl> + Iterator begin = std : : begin ( collection ) ; <nl> + Iterator end = std : : end ( collection ) ; <nl> + OwnedVector < T > vec = New ( std : : distance ( begin , end ) ) ; <nl> + std : : copy ( begin , end , vec . start ( ) ) ; <nl> + return vec ; <nl> + } <nl> + <nl> private : <nl> std : : unique_ptr < T [ ] > data_ ; <nl> size_t length_ = 0 ; <nl> mmm a / src / wasm / baseline / liftoff - compiler . cc <nl> ppp b / src / wasm / baseline / liftoff - compiler . 
cc <nl> bool LiftoffCompilationUnit : : ExecuteCompilation ( ) { <nl> compiler : : GetWasmCallDescriptor ( & zone , wasm_unit_ - > func_body_ . sig ) ; <nl> base : : Optional < TimedHistogramScope > liftoff_compile_time_scope ( <nl> base : : in_place , wasm_unit_ - > counters_ - > liftoff_compile_time ( ) ) ; <nl> - DCHECK ( ! protected_instructions_ ) ; <nl> - protected_instructions_ . reset ( <nl> - new std : : vector < trap_handler : : ProtectedInstructionData > ( ) ) ; <nl> + DCHECK ( protected_instructions_ . empty ( ) ) ; <nl> wasm : : WasmFullDecoder < wasm : : Decoder : : kValidate , wasm : : LiftoffCompiler > <nl> decoder ( & zone , module , wasm_unit_ - > func_body_ , & asm_ , call_descriptor , <nl> wasm_unit_ - > env_ , & source_position_table_builder_ , <nl> - protected_instructions_ . get ( ) , & zone ) ; <nl> + & protected_instructions_ , & zone ) ; <nl> decoder . Decode ( ) ; <nl> liftoff_compile_time_scope . reset ( ) ; <nl> if ( ! decoder . interface ( ) . ok ( ) ) { <nl> bool LiftoffCompilationUnit : : ExecuteCompilation ( ) { <nl> / / Record the memory cost this unit places on the system until <nl> / / it is finalized . <nl> wasm_unit_ - > memory_cost_ = <nl> - asm_ . pc_offset ( ) + protected_instructions_ - > size ( ) * <nl> + asm_ . pc_offset ( ) + protected_instructions_ . size ( ) * <nl> sizeof ( trap_handler : : ProtectedInstructionData ) ; <nl> <nl> safepoint_table_offset_ = decoder . interface ( ) . GetSafepointTableOffset ( ) ; <nl> wasm : : WasmCode * LiftoffCompilationUnit : : FinishCompilation ( <nl> <nl> OwnedVector < byte > source_positions = <nl> source_position_table_builder_ . ToSourcePositionTableVector ( ) ; <nl> - <nl> + auto protected_instructions_copy = <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > : : Of ( <nl> + protected_instructions_ ) ; <nl> wasm : : WasmCode * code = wasm_unit_ - > native_module_ - > AddCode ( <nl> desc , asm_ . 
GetTotalFrameSlotCount ( ) , wasm_unit_ - > func_index_ , <nl> - safepoint_table_offset_ , 0 , std : : move ( protected_instructions_ ) , <nl> + safepoint_table_offset_ , 0 , std : : move ( protected_instructions_copy ) , <nl> std : : move ( source_positions ) , wasm : : WasmCode : : kLiftoff ) ; <nl> <nl> return code ; <nl> mmm a / src / wasm / baseline / liftoff - compiler . h <nl> ppp b / src / wasm / baseline / liftoff - compiler . h <nl> class LiftoffCompilationUnit final { <nl> wasm : : LiftoffAssembler asm_ ; <nl> int safepoint_table_offset_ ; <nl> SourcePositionTableBuilder source_position_table_builder_ ; <nl> - std : : unique_ptr < std : : vector < trap_handler : : ProtectedInstructionData > > <nl> - protected_instructions_ ; <nl> + std : : vector < trap_handler : : ProtectedInstructionData > protected_instructions_ ; <nl> <nl> DISALLOW_COPY_AND_ASSIGN ( LiftoffCompilationUnit ) ; <nl> } ; <nl> mmm a / src / wasm / wasm - code - manager . cc <nl> ppp b / src / wasm / wasm - code - manager . cc <nl> void WasmCode : : RegisterTrapHandlerData ( ) { <nl> size_t size = instructions ( ) . size ( ) ; <nl> const int index = <nl> RegisterHandlerData ( base , size , protected_instructions ( ) . size ( ) , <nl> - protected_instructions ( ) . data ( ) ) ; <nl> + protected_instructions ( ) . start ( ) ) ; <nl> <nl> / / TODO ( eholk ) : if index is negative , fail . 
<nl> CHECK_LE ( 0 , index ) ; <nl> WasmCode * NativeModule : : AddOwnedCode ( <nl> Maybe < uint32_t > index , WasmCode : : Kind kind , size_t constant_pool_offset , <nl> uint32_t stack_slots , size_t safepoint_table_offset , <nl> size_t handler_table_offset , <nl> - std : : unique_ptr < ProtectedInstructions > protected_instructions , <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > protected_instructions , <nl> WasmCode : : Tier tier , WasmCode : : FlushICache flush_icache ) { <nl> / / both allocation and insertion in owned_code_ happen in the same critical <nl> / / section , thus ensuring owned_code_ ' s elements are rarely if ever moved . <nl> WasmCode * NativeModule : : AddAnonymousCode ( Handle < Code > code , <nl> source_pos . reset ( new byte [ source_pos_table - > length ( ) ] ) ; <nl> source_pos_table - > copy_out ( 0 , source_pos . get ( ) , source_pos_table - > length ( ) ) ; <nl> } <nl> - std : : unique_ptr < ProtectedInstructions > protected_instructions ( <nl> - new ProtectedInstructions ( 0 ) ) ; <nl> Vector < const byte > orig_instructions ( <nl> reinterpret_cast < byte * > ( code - > InstructionStart ( ) ) , <nl> static_cast < size_t > ( code - > InstructionSize ( ) ) ) ; <nl> WasmCode * NativeModule : : AddAnonymousCode ( Handle < Code > code , <nl> static_cast < size_t > ( code - > relocation_size ( ) ) , / / reloc_size <nl> std : : move ( source_pos ) , / / source positions <nl> static_cast < size_t > ( source_pos_table - > length ( ) ) , <nl> - Nothing < uint32_t > ( ) , / / index <nl> - kind , / / kind <nl> - code - > constant_pool_offset ( ) , / / constant_pool_offset <nl> - stack_slots , / / stack_slots <nl> - safepoint_table_offset , / / safepoint_table_offset <nl> - code - > handler_table_offset ( ) , / / handler_table_offset <nl> - std : : move ( protected_instructions ) , / / protected_instructions <nl> - WasmCode : : kOther , / / kind <nl> - WasmCode : : kNoFlushICache ) ; / / flush_icache <nl> + Nothing < uint32_t > ( ) 
, / / index <nl> + kind , / / kind <nl> + code - > constant_pool_offset ( ) , / / constant_pool_offset <nl> + stack_slots , / / stack_slots <nl> + safepoint_table_offset , / / safepoint_table_offset <nl> + code - > handler_table_offset ( ) , / / handler_table_offset <nl> + { } , / / protected_instructions <nl> + WasmCode : : kOther , / / kind <nl> + WasmCode : : kNoFlushICache ) ; / / flush_icache <nl> <nl> / / Apply the relocation delta by iterating over the RelocInfo . <nl> intptr_t delta = ret - > instruction_start ( ) - code - > InstructionStart ( ) ; <nl> WasmCode * NativeModule : : AddAnonymousCode ( Handle < Code > code , <nl> WasmCode * NativeModule : : AddCode ( <nl> const CodeDesc & desc , uint32_t frame_slots , uint32_t index , <nl> size_t safepoint_table_offset , size_t handler_table_offset , <nl> - std : : unique_ptr < ProtectedInstructions > protected_instructions , <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > protected_instructions , <nl> OwnedVector < byte > source_pos_table , WasmCode : : Tier tier ) { <nl> std : : unique_ptr < byte [ ] > reloc_info ; <nl> if ( desc . reloc_size ) { <nl> mmm a / src / wasm / wasm - code - manager . h <nl> ppp b / src / wasm / wasm - code - manager . h <nl> class V8_EXPORT_PRIVATE DisjointAllocationPool final { <nl> DISALLOW_COPY_AND_ASSIGN ( DisjointAllocationPool ) <nl> } ; <nl> <nl> - using ProtectedInstructions = <nl> - std : : vector < trap_handler : : ProtectedInstructionData > ; <nl> - <nl> class V8_EXPORT_PRIVATE WasmCode final { <nl> public : <nl> enum Kind { <nl> class V8_EXPORT_PRIVATE WasmCode final { <nl> pc < reinterpret_cast < Address > ( instructions_ . end ( ) ) ; <nl> } <nl> <nl> - const ProtectedInstructions & protected_instructions ( ) const { <nl> - / / TODO ( mstarzinger ) : Code that doesn ' t have trapping instruction should <nl> - / / not be required to have this vector , make it possible to be null . 
<nl> - DCHECK_NOT_NULL ( protected_instructions_ ) ; <nl> - return * protected_instructions_ . get ( ) ; <nl> + Vector < trap_handler : : ProtectedInstructionData > protected_instructions ( ) <nl> + const { <nl> + return protected_instructions_ . as_vector ( ) ; <nl> } <nl> <nl> void Validate ( ) const ; <nl> class V8_EXPORT_PRIVATE WasmCode final { <nl> Maybe < uint32_t > index , Kind kind , size_t constant_pool_offset , <nl> uint32_t stack_slots , size_t safepoint_table_offset , <nl> size_t handler_table_offset , <nl> - std : : unique_ptr < ProtectedInstructions > protected_instructions , <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > <nl> + protected_instructions , <nl> Tier tier ) <nl> : instructions_ ( instructions ) , <nl> reloc_info_ ( std : : move ( reloc_info ) ) , <nl> class V8_EXPORT_PRIVATE WasmCode final { <nl> size_t safepoint_table_offset_ = 0 ; <nl> size_t handler_table_offset_ = 0 ; <nl> intptr_t trap_handler_index_ = - 1 ; <nl> - std : : unique_ptr < ProtectedInstructions > protected_instructions_ ; <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > protected_instructions_ ; <nl> Tier tier_ ; <nl> <nl> DISALLOW_COPY_AND_ASSIGN ( WasmCode ) ; <nl> class V8_EXPORT_PRIVATE NativeModule final { <nl> public : <nl> WasmCode * AddCode ( const CodeDesc & desc , uint32_t frame_count , uint32_t index , <nl> size_t safepoint_table_offset , size_t handler_table_offset , <nl> - std : : unique_ptr < ProtectedInstructions > , <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > <nl> + protected_instructions , <nl> OwnedVector < byte > source_position_table , <nl> WasmCode : : Tier tier ) ; <nl> <nl> class V8_EXPORT_PRIVATE NativeModule final { <nl> WasmCode : : Kind kind , size_t constant_pool_offset , <nl> uint32_t stack_slots , size_t safepoint_table_offset , <nl> size_t handler_table_offset , <nl> - std : : unique_ptr < ProtectedInstructions > , WasmCode : : Tier , <nl> - WasmCode : : FlushICache ) ; <nl> + 
OwnedVector < trap_handler : : ProtectedInstructionData > , <nl> + WasmCode : : Tier , WasmCode : : FlushICache ) ; <nl> <nl> WasmCode * CreateEmptyJumpTable ( uint32_t num_wasm_functions ) ; <nl> <nl> mmm a / src / wasm / wasm - serialization . cc <nl> ppp b / src / wasm / wasm - serialization . cc <nl> void NativeModuleSerializer : : WriteCode ( const WasmCode * code , Writer * writer ) { <nl> / / Write the reloc info , source positions , and protected code . <nl> writer - > WriteVector ( code - > reloc_info ( ) ) ; <nl> writer - > WriteVector ( code - > source_positions ( ) ) ; <nl> - writer - > WriteVector ( <nl> - { reinterpret_cast < const byte * > ( code - > protected_instructions ( ) . data ( ) ) , <nl> - sizeof ( trap_handler : : ProtectedInstructionData ) * <nl> - code - > protected_instructions ( ) . size ( ) } ) ; <nl> + writer - > WriteVector ( Vector < byte > : : cast ( code - > protected_instructions ( ) ) ) ; <nl> # if V8_TARGET_ARCH_MIPS | | V8_TARGET_ARCH_MIPS64 | | V8_TARGET_ARCH_ARM <nl> / / On platforms that don ' t support misaligned word stores , copy to an aligned <nl> / / buffer if necessary so we can relocate the serialized code . <nl> bool NativeModuleDeserializer : : ReadCode ( uint32_t fn_index , Reader * reader ) { <nl> source_pos . reset ( new byte [ source_position_size ] ) ; <nl> reader - > ReadVector ( { source_pos . 
get ( ) , source_position_size } ) ; <nl> } <nl> - std : : unique_ptr < ProtectedInstructions > protected_instructions ( <nl> - new ProtectedInstructions ( protected_instructions_size ) ) ; <nl> - if ( protected_instructions_size > 0 ) { <nl> - size_t size = sizeof ( trap_handler : : ProtectedInstructionData ) * <nl> - protected_instructions - > size ( ) ; <nl> - Vector < byte > data ( reinterpret_cast < byte * > ( protected_instructions - > data ( ) ) , <nl> - size ) ; <nl> - reader - > ReadVector ( data ) ; <nl> - } <nl> + auto protected_instructions = <nl> + OwnedVector < trap_handler : : ProtectedInstructionData > : : New ( <nl> + protected_instructions_size ) ; <nl> + reader - > ReadVector ( Vector < byte > : : cast ( protected_instructions . as_vector ( ) ) ) ; <nl> WasmCode * ret = native_module_ - > AddOwnedCode ( <nl> code_buffer , std : : move ( reloc_info ) , reloc_size , std : : move ( source_pos ) , <nl> source_position_size , Just ( fn_index ) , WasmCode : : kFunction , <nl> mmm a / test / cctest / wasm / test - run - wasm . cc <nl> ppp b / test / cctest / wasm / test - run - wasm . cc <nl> TEST ( Liftoff_tier_up ) { <nl> memcpy ( buffer . get ( ) , sub_code - > instructions ( ) . start ( ) , sub_size ) ; <nl> desc . buffer = buffer . get ( ) ; <nl> desc . instr_size = static_cast < int > ( sub_size ) ; <nl> - std : : unique_ptr < ProtectedInstructions > protected_instructions ( <nl> - new ProtectedInstructions ( sub_code - > protected_instructions ( ) ) ) ; <nl> - native_module - > AddCode ( desc , 0 , add . function_index ( ) , 0 , 0 , <nl> - std : : move ( protected_instructions ) , <nl> + native_module - > AddCode ( desc , 0 , add . function_index ( ) , 0 , 0 , { } , <nl> OwnedVector < byte > ( ) , WasmCode : : kOther ) ; <nl> <nl> / / Second run should now execute { sub } . <nl>
|
[wasm] Store protected instructions in an OwnedVector
|
v8/v8
|
ce2d01bca3aad086ff7cea0353e7f90f95910dc0
|
2018-06-27T12:22:10Z
|
mmm a / tensorflow / core / kernels / data / concatenate_dataset_op_test . cc <nl> ppp b / tensorflow / core / kernels / data / concatenate_dataset_op_test . cc <nl> class ConcatenateDatasetOpTest : public DatasetOpsTestBase { <nl> const DataTypeVector & output_types , <nl> const std : : vector < PartialTensorShape > & output_shapes , <nl> std : : unique_ptr < OpKernel > * op_kernel ) { <nl> - node_def_ = test : : function : : NDef ( <nl> + NodeDef node_def = test : : function : : NDef ( <nl> kNodeName , kOpName , { " input_dataset " , " another_dataset " } , <nl> { { " output_types " , output_types } , { " output_shapes " , output_shapes } } ) ; <nl> - TF_RETURN_IF_ERROR ( CreateOpKernel ( node_def_ , op_kernel ) ) ; <nl> + TF_RETURN_IF_ERROR ( CreateOpKernel ( node_def , op_kernel ) ) ; <nl> return Status : : OK ( ) ; <nl> } <nl> <nl> class ConcatenateDatasetOpTest : public DatasetOpsTestBase { <nl> TF_RETURN_IF_ERROR ( CreateOpKernelContext ( op_kernel , inputs , context ) ) ; <nl> return Status : : OK ( ) ; <nl> } <nl> - <nl> - private : <nl> - NodeDef node_def_ ; <nl> } ; <nl> <nl> - struct TestParam { <nl> + struct TestCase { <nl> std : : vector < std : : vector < Tensor > > input_tensors ; <nl> std : : vector < Tensor > expected_outputs ; <nl> DataTypeVector expected_output_dtypes ; <nl> struct TestParam { <nl> int64 expected_cardinality ; <nl> std : : vector < int > breakpoints ; <nl> } ; <nl> - TestParam TestCase1 ( ) { <nl> - / / Test case 1 : same shape . <nl> + <nl> + / / Test case 1 : same shape . <nl> + TestCase SameShapeTestCase ( ) { <nl> return { / * input_tensors * / <nl> { { DatasetOpsTestBase : : CreateTensor < int64 > ( TensorShape { 2 , 2 } , <nl> { 1 , 2 , 3 , 4 } ) , <nl> TestParam TestCase1 ( ) { <nl> / * breakpoints * / { 0 , 2 , 5 } } ; <nl> } <nl> <nl> - TestParam TestCase2 ( ) { <nl> - / / Test case 2 : different shape . <nl> + / / Test case 2 : different shape . 
<nl> + TestCase DifferentShapeTestCase ( ) { <nl> return { <nl> / * input_tensors * / <nl> { { DatasetOpsTestBase : : CreateTensor < int64 > ( TensorShape { 2 , 3 } , <nl> TestParam TestCase2 ( ) { <nl> / * breakpoints * / { 0 , 2 , 5 } } ; <nl> } <nl> <nl> - class ConcatenateDatasetOpTestHelper : public ConcatenateDatasetOpTest { <nl> - public : <nl> - ~ ConcatenateDatasetOpTestHelper ( ) override { <nl> - if ( dataset_ ) dataset_ - > Unref ( ) ; <nl> - } <nl> - <nl> - protected : <nl> - Status CreateDatasetFromTestCase ( const TestParam & test_case ) { <nl> - std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> - TF_RETURN_IF_ERROR ( CreateTensorSliceDatasetTensors ( <nl> - test_case . input_tensors , & tensor_slice_dataset_tensors ) ) ; <nl> - gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> - for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> - inputs . emplace_back ( & tensor ) ; <nl> - } <nl> - TF_RETURN_IF_ERROR ( CreateConcatenateDatasetKernel ( <nl> - test_case . expected_output_dtypes , test_case . expected_output_shapes , <nl> - & dataset_kernel_ ) ) ; <nl> - TF_RETURN_IF_ERROR ( CreateConcatenateDatasetContext ( <nl> - dataset_kernel_ . get ( ) , & inputs , & dataset_kernel_ctx_ ) ) ; <nl> - TF_RETURN_IF_ERROR ( CreateDataset ( dataset_kernel_ . get ( ) , <nl> - dataset_kernel_ctx_ . get ( ) , & dataset_ ) ) ; <nl> - return Status : : OK ( ) ; <nl> - } <nl> - <nl> - Status CreateIteratorFromTestCase ( const TestParam & test_case ) { <nl> - TF_RETURN_IF_ERROR ( CreateDatasetFromTestCase ( test_case ) ) ; <nl> - TF_RETURN_IF_ERROR ( <nl> - CreateIteratorContext ( dataset_kernel_ctx_ . get ( ) , & iterator_ctx_ ) ) ; <nl> - TF_RETURN_IF_ERROR ( <nl> - dataset_ - > MakeIterator ( iterator_ctx_ . 
get ( ) , " Iterator " , & iterator_ ) ) ; <nl> - return Status : : OK ( ) ; <nl> - } <nl> - <nl> - std : : unique_ptr < OpKernel > dataset_kernel_ ; <nl> - std : : unique_ptr < OpKernelContext > dataset_kernel_ctx_ ; <nl> - DatasetBase * dataset_ = nullptr ; / / owned by this class . <nl> - std : : unique_ptr < IteratorContext > iterator_ctx_ ; <nl> - std : : unique_ptr < IteratorBase > iterator_ ; <nl> - } ; <nl> + / / Test case 3 : different dtypes <nl> + TestCase DifferentDtypeTestCase ( ) { <nl> + return { / * input_tensors * / { { DatasetOpsTestBase : : CreateTensor < int64 > ( <nl> + TensorShape ( { 2 , 2 } ) , { 1 , 2 , 3 , 4 } ) } , <nl> + { DatasetOpsTestBase : : CreateTensor < double > ( <nl> + TensorShape ( { 2 , 2 } ) , { 1 . 0 , 2 . 0 , 3 . 0 , 4 . 0 } ) } } , <nl> + / * expected_outputs * / { } , <nl> + / * expected_output_dtypes * / { DT_INT64 } , <nl> + / * expected_output_shapes * / { PartialTensorShape ( { 2 } ) } , <nl> + / * expected_cardinality * / 0 , <nl> + / * breakpoints * / { } } ; <nl> + } <nl> <nl> - class ParameterizedDatasetTest <nl> - : public ConcatenateDatasetOpTestHelper , <nl> - public : : testing : : WithParamInterface < TestParam > { } ; <nl> + class ParameterizedConcatenateDatasetOpTest <nl> + : public ConcatenateDatasetOpTest , <nl> + public : : testing : : WithParamInterface < TestCase > { } ; <nl> <nl> - TEST_P ( ParameterizedDatasetTest , GetNext ) { <nl> + TEST_P ( ParameterizedConcatenateDatasetOpTest , GetNext ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - const TestParam & test_case = GetParam ( ) ; <nl> - TF_ASSERT_OK ( CreateIteratorFromTestCase ( test_case ) ) ; <nl> + <nl> + const TestCase & test_case = GetParam ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . 
input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + std : : unique_ptr < IteratorContext > iterator_ctx ; <nl> + TF_ASSERT_OK ( CreateIteratorContext ( dataset_kernel_ctx . get ( ) , & iterator_ctx ) ) ; <nl> + std : : unique_ptr < IteratorBase > iterator ; <nl> + TF_ASSERT_OK ( concatenate_dataset - > MakeIterator ( iterator_ctx . get ( ) , " Iterator " , <nl> + & iterator ) ) ; <nl> <nl> auto expected_outputs_it = test_case . expected_outputs . begin ( ) ; <nl> bool end_of_sequence = false ; <nl> std : : vector < Tensor > out_tensors ; <nl> while ( ! end_of_sequence ) { <nl> - TF_EXPECT_OK ( iterator_ - > GetNext ( iterator_ctx_ . get ( ) , & out_tensors , <nl> - & end_of_sequence ) ) ; <nl> + TF_EXPECT_OK ( <nl> + iterator - > GetNext ( iterator_ctx . get ( ) , & out_tensors , & end_of_sequence ) ) ; <nl> if ( ! end_of_sequence ) { <nl> for ( const auto & tensor : out_tensors ) { <nl> EXPECT_NE ( expected_outputs_it , test_case . expected_outputs . end ( ) ) ; <nl> TEST_P ( ParameterizedDatasetTest , GetNext ) { <nl> EXPECT_EQ ( expected_outputs_it , test_case . expected_outputs . 
end ( ) ) ; <nl> } <nl> <nl> - TEST_F ( ConcatenateDatasetOpTestHelper , DifferentDtypes ) { <nl> + TEST_F ( ConcatenateDatasetOpTest , DifferentDtypes ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> <nl> - TestParam test_case_with_different_dtypes = { <nl> - / * input_tensors * / { <nl> - { CreateTensor < int64 > ( TensorShape ( { 2 , 2 } ) , { 1 , 2 , 3 , 4 } ) } , <nl> - { CreateTensor < double > ( TensorShape ( { 2 , 2 } ) , { 1 . 0 , 2 . 0 , 3 . 0 , 4 . 0 } ) } } , <nl> - / * expected_outputs * / { } , <nl> - / * expected_output_dtypes * / { DT_INT64 } , <nl> - / * expected_output_shapes * / { PartialTensorShape ( { 2 } ) } , <nl> - / * expected_cardinality * / 0 , <nl> - / * breakpoints * / { } } ; <nl> - <nl> - EXPECT_EQ ( CreateDatasetFromTestCase ( test_case_with_different_dtypes ) . code ( ) , <nl> + const TestCase & test_case = DifferentDtypeTestCase ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + EXPECT_EQ ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) <nl> + . 
code ( ) , <nl> tensorflow : : error : : INVALID_ARGUMENT ) ; <nl> } <nl> <nl> - TEST_F ( ConcatenateDatasetOpTestHelper , DatasetName ) { <nl> + TEST_F ( ConcatenateDatasetOpTest , DatasetNodeName ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - TF_ASSERT_OK ( CreateDatasetFromTestCase ( TestCase1 ( ) ) ) ; <nl> <nl> - EXPECT_EQ ( dataset_ - > type_string ( ) , kOpName ) ; <nl> + const TestCase & test_case = SameShapeTestCase ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . 
get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + <nl> + EXPECT_EQ ( concatenate_dataset - > node_name ( ) , kNodeName ) ; <nl> } <nl> <nl> - TEST_P ( ParameterizedDatasetTest , DatasetOutputDtypes ) { <nl> + TEST_F ( ConcatenateDatasetOpTest , DatasetTypeString ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - const TestParam & test_case = GetParam ( ) ; <nl> - TF_ASSERT_OK ( CreateDatasetFromTestCase ( test_case ) ) ; <nl> - TF_EXPECT_OK ( VerifyTypesMatch ( dataset_ - > output_dtypes ( ) , <nl> + <nl> + const TestCase & test_case = SameShapeTestCase ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . 
get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + <nl> + EXPECT_EQ ( concatenate_dataset - > type_string ( ) , kOpName ) ; <nl> + } <nl> + <nl> + TEST_P ( ParameterizedConcatenateDatasetOpTest , DatasetOutputDtypes ) { <nl> + int thread_num = 2 , cpu_num = 2 ; <nl> + TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> + TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> + <nl> + const TestCase & test_case = GetParam ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + TF_EXPECT_OK ( VerifyTypesMatch ( concatenate_dataset - > output_dtypes ( ) , <nl> test_case . 
expected_output_dtypes ) ) ; <nl> } <nl> <nl> - TEST_P ( ParameterizedDatasetTest , DatasetOutputShapes ) { <nl> + TEST_P ( ParameterizedConcatenateDatasetOpTest , DatasetOutputShapes ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - const TestParam & test_case = GetParam ( ) ; <nl> - TF_ASSERT_OK ( CreateDatasetFromTestCase ( test_case ) ) ; <nl> - TF_EXPECT_OK ( VerifyShapesCompatible ( dataset_ - > output_shapes ( ) , <nl> + <nl> + const TestCase & test_case = GetParam ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + <nl> + TF_EXPECT_OK ( VerifyShapesCompatible ( concatenate_dataset - > output_shapes ( ) , <nl> test_case . 
expected_output_shapes ) ) ; <nl> } <nl> <nl> - TEST_P ( ParameterizedDatasetTest , Cardinality ) { <nl> + TEST_P ( ParameterizedConcatenateDatasetOpTest , Cardinality ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - const TestParam & test_case = GetParam ( ) ; <nl> - TF_ASSERT_OK ( CreateDatasetFromTestCase ( test_case ) ) ; <nl> <nl> - EXPECT_EQ ( dataset_ - > Cardinality ( ) , GetParam ( ) . expected_cardinality ) ; <nl> + const TestCase & test_case = GetParam ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + <nl> + EXPECT_EQ ( concatenate_dataset - > Cardinality ( ) , test_case . 
expected_cardinality ) ; <nl> } <nl> <nl> - TEST_F ( ConcatenateDatasetOpTestHelper , DatasetSave ) { <nl> + TEST_F ( ConcatenateDatasetOpTest , DatasetSave ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - TF_ASSERT_OK ( CreateDatasetFromTestCase ( TestCase1 ( ) ) ) ; <nl> + <nl> + const TestCase & test_case = SameShapeTestCase ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> <nl> std : : unique_ptr < SerializationContext > serialization_ctx ; <nl> TF_ASSERT_OK ( CreateSerializationContext ( & serialization_ctx ) ) ; <nl> VariantTensorData data ; <nl> VariantTensorDataWriter writer ( & data ) ; <nl> - TF_ASSERT_OK ( dataset_ - > Save ( serialization_ctx . get ( ) , & writer ) ) ; <nl> + TF_ASSERT_OK ( concatenate_dataset - > Save ( serialization_ctx . get ( ) , & writer ) ) ; <nl> TF_ASSERT_OK ( writer . 
Flush ( ) ) ; <nl> } <nl> <nl> - TEST_P ( ParameterizedDatasetTest , IteratorOutputDtypes ) { <nl> + TEST_P ( ParameterizedConcatenateDatasetOpTest , IteratorOutputDtypes ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - const TestParam & test_case = GetParam ( ) ; <nl> - TF_ASSERT_OK ( CreateIteratorFromTestCase ( test_case ) ) ; <nl> - TF_EXPECT_OK ( VerifyTypesMatch ( iterator_ - > output_dtypes ( ) , <nl> + <nl> + const TestCase & test_case = GetParam ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + std : : unique_ptr < IteratorContext > iterator_ctx ; <nl> + TF_ASSERT_OK ( CreateIteratorContext ( dataset_kernel_ctx . get ( ) , & iterator_ctx ) ) ; <nl> + std : : unique_ptr < IteratorBase > iterator ; <nl> + TF_ASSERT_OK ( concatenate_dataset - > MakeIterator ( iterator_ctx . 
get ( ) , " Iterator " , <nl> + & iterator ) ) ; <nl> + <nl> + TF_EXPECT_OK ( VerifyTypesMatch ( iterator - > output_dtypes ( ) , <nl> test_case . expected_output_dtypes ) ) ; <nl> } <nl> <nl> - TEST_P ( ParameterizedDatasetTest , IteratorOutputShapes ) { <nl> + TEST_P ( ParameterizedConcatenateDatasetOpTest , IteratorOutputShapes ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - const TestParam & test_case = GetParam ( ) ; <nl> - TF_ASSERT_OK ( CreateIteratorFromTestCase ( test_case ) ) ; <nl> - TF_EXPECT_OK ( VerifyShapesCompatible ( iterator_ - > output_shapes ( ) , <nl> + <nl> + const TestCase & test_case = GetParam ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + std : : unique_ptr < IteratorContext > iterator_ctx ; <nl> + TF_ASSERT_OK ( CreateIteratorContext ( dataset_kernel_ctx . 
get ( ) , & iterator_ctx ) ) ; <nl> + std : : unique_ptr < IteratorBase > iterator ; <nl> + TF_ASSERT_OK ( concatenate_dataset - > MakeIterator ( iterator_ctx . get ( ) , " Iterator " , <nl> + & iterator ) ) ; <nl> + TF_EXPECT_OK ( VerifyShapesCompatible ( iterator - > output_shapes ( ) , <nl> test_case . expected_output_shapes ) ) ; <nl> } <nl> <nl> - TEST_F ( ConcatenateDatasetOpTestHelper , IteratorOutputPrefix ) { <nl> + TEST_F ( ConcatenateDatasetOpTest , IteratorOutputPrefix ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - TF_ASSERT_OK ( CreateIteratorFromTestCase ( TestCase1 ( ) ) ) ; <nl> - EXPECT_EQ ( iterator_ - > prefix ( ) , " Iterator : : Concatenate " ) ; <nl> + <nl> + const TestCase & test_case = SameShapeTestCase ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . 
get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + std : : unique_ptr < IteratorContext > iterator_ctx ; <nl> + TF_ASSERT_OK ( CreateIteratorContext ( dataset_kernel_ctx . get ( ) , & iterator_ctx ) ) ; <nl> + std : : unique_ptr < IteratorBase > iterator ; <nl> + TF_ASSERT_OK ( concatenate_dataset - > MakeIterator ( iterator_ctx . get ( ) , " Iterator " , <nl> + & iterator ) ) ; <nl> + EXPECT_EQ ( iterator - > prefix ( ) , " Iterator : : Concatenate " ) ; <nl> } <nl> <nl> - TEST_P ( ParameterizedDatasetTest , Roundtrip ) { <nl> + TEST_P ( ParameterizedConcatenateDatasetOpTest , Roundtrip ) { <nl> int thread_num = 2 , cpu_num = 2 ; <nl> TF_ASSERT_OK ( InitThreadPool ( thread_num ) ) ; <nl> TF_ASSERT_OK ( InitFunctionLibraryRuntime ( { } , cpu_num ) ) ; <nl> - const TestParam & test_case = GetParam ( ) ; <nl> - auto expected_outputs_it = test_case . expected_outputs . begin ( ) ; <nl> - TF_ASSERT_OK ( CreateIteratorFromTestCase ( test_case ) ) ; <nl> + const TestCase & test_case = GetParam ( ) ; <nl> + std : : vector < Tensor > tensor_slice_dataset_tensors ; <nl> + TF_ASSERT_OK ( CreateTensorSliceDatasetTensors ( test_case . input_tensors , <nl> + & tensor_slice_dataset_tensors ) ) ; <nl> + gtl : : InlinedVector < TensorValue , 4 > inputs ; <nl> + for ( auto & tensor : tensor_slice_dataset_tensors ) { <nl> + inputs . emplace_back ( & tensor ) ; <nl> + } <nl> + std : : unique_ptr < OpKernel > dataset_kernel ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetKernel ( test_case . expected_output_dtypes , <nl> + test_case . expected_output_shapes , <nl> + & dataset_kernel ) ) ; <nl> + std : : unique_ptr < OpKernelContext > dataset_kernel_ctx ; <nl> + TF_ASSERT_OK ( CreateConcatenateDatasetContext ( dataset_kernel . get ( ) , & inputs , <nl> + & dataset_kernel_ctx ) ) ; <nl> + DatasetBase * concatenate_dataset ; <nl> + TF_ASSERT_OK ( CreateDataset ( dataset_kernel . get ( ) , dataset_kernel_ctx . 
get ( ) , <nl> + & concatenate_dataset ) ) ; <nl> + core : : ScopedUnref scored_unref ( concatenate_dataset ) ; <nl> + std : : unique_ptr < IteratorContext > iterator_ctx ; <nl> + TF_ASSERT_OK ( CreateIteratorContext ( dataset_kernel_ctx . get ( ) , & iterator_ctx ) ) ; <nl> + std : : unique_ptr < IteratorBase > iterator ; <nl> + TF_ASSERT_OK ( concatenate_dataset - > MakeIterator ( iterator_ctx . get ( ) , " Iterator " , <nl> + & iterator ) ) ; <nl> <nl> std : : unique_ptr < SerializationContext > serialization_ctx ; <nl> TF_ASSERT_OK ( CreateSerializationContext ( & serialization_ctx ) ) ; <nl> TEST_P ( ParameterizedDatasetTest , Roundtrip ) { <nl> bool end_of_sequence = false ; <nl> std : : vector < Tensor > out_tensors ; <nl> int cur_iteration = 0 ; <nl> + auto expected_outputs_it = test_case . expected_outputs . begin ( ) ; <nl> std : : vector < int > breakpoints = GetParam ( ) . breakpoints ; <nl> for ( int breakpoint : breakpoints ) { <nl> VariantTensorData data ; <nl> VariantTensorDataWriter writer ( & data ) ; <nl> - TF_EXPECT_OK ( iterator_ - > Save ( serialization_ctx . get ( ) , & writer ) ) ; <nl> + TF_EXPECT_OK ( iterator - > Save ( serialization_ctx . get ( ) , & writer ) ) ; <nl> TF_EXPECT_OK ( writer . Flush ( ) ) ; <nl> VariantTensorDataReader reader ( & data ) ; <nl> - TF_EXPECT_OK ( iterator_ - > Restore ( iterator_ctx_ . get ( ) , & reader ) ) ; <nl> + TF_EXPECT_OK ( iterator - > Restore ( iterator_ctx . get ( ) , & reader ) ) ; <nl> <nl> while ( cur_iteration < breakpoint ) { <nl> - TF_EXPECT_OK ( iterator_ - > GetNext ( iterator_ctx_ . get ( ) , & out_tensors , <nl> - & end_of_sequence ) ) ; <nl> + TF_EXPECT_OK ( iterator - > GetNext ( iterator_ctx . get ( ) , & out_tensors , <nl> + & end_of_sequence ) ) ; <nl> if ( ! end_of_sequence ) { <nl> for ( auto & tensor : out_tensors ) { <nl> EXPECT_NE ( expected_outputs_it , test_case . expected_outputs . 
end ( ) ) ; <nl> TEST_P ( ParameterizedDatasetTest , Roundtrip ) { <nl> cur_iteration + + ; <nl> } <nl> <nl> - if ( breakpoint > = dataset_ - > Cardinality ( ) ) { <nl> + if ( breakpoint > = concatenate_dataset - > Cardinality ( ) ) { <nl> EXPECT_TRUE ( end_of_sequence ) ; <nl> EXPECT_EQ ( expected_outputs_it , test_case . expected_outputs . end ( ) ) ; <nl> } else { <nl> TEST_P ( ParameterizedDatasetTest , Roundtrip ) { <nl> } <nl> } <nl> <nl> - INSTANTIATE_TEST_SUITE_P ( <nl> - ConcatenateDatasetOpTest , ParameterizedDatasetTest , <nl> - : : testing : : ValuesIn ( std : : vector < TestParam > ( { TestCase1 ( ) , TestCase2 ( ) } ) ) ) ; <nl> + INSTANTIATE_TEST_SUITE_P ( ConcatenateDatasetOpTest , <nl> + ParameterizedConcatenateDatasetOpTest , <nl> + : : testing : : ValuesIn ( std : : vector < TestCase > ( <nl> + { SameShapeTestCase ( ) , DifferentShapeTestCase ( ) } ) ) ) ; <nl> } / / namespace <nl> } / / namespace data <nl> } / / namespace tensorflow <nl>
|
Refactor ConcatenateDatasetOpTest
|
tensorflow/tensorflow
|
8aacb4828c5e2eadf2f64872efa1cbf383f63b2a
|
2019-03-19T04:28:37Z
|
mmm a / AUTHORS <nl> ppp b / AUTHORS <nl> Felix Geisendörfer < haimuiba @ gmail . com > <nl> Filipe David Manana < fdmanana @ gmail . com > <nl> Franziska Hinkelmann < franziska . hinkelmann @ gmail . com > <nl> Geoffrey Garside < ggarside @ gmail . com > <nl> + Gus Caplan < me @ gus . host > <nl> Gwang Yoon Hwang < ryumiel @ company100 . net > <nl> Henrique Ferreiro < henrique . ferreiro @ gmail . com > <nl> Hirofumi Mako < mkhrfm @ gmail . com > <nl> mmm a / include / v8 . h <nl> ppp b / include / v8 . h <nl> class V8_EXPORT Value : public Data { <nl> <nl> bool IsWebAssemblyCompiledModule ( ) const ; <nl> <nl> + / * * <nl> + * Returns true if the value is a Module Namespace Object . <nl> + * / <nl> + bool IsModuleNamespaceObject ( ) const ; <nl> + <nl> V8_WARN_UNUSED_RESULT MaybeLocal < BigInt > ToBigInt ( <nl> Local < Context > context ) const ; <nl> V8_WARN_UNUSED_RESULT MaybeLocal < Boolean > ToBoolean ( <nl> mmm a / src / api . cc <nl> ppp b / src / api . cc <nl> bool Value : : IsSetIterator ( ) const { <nl> <nl> bool Value : : IsPromise ( ) const { return Utils : : OpenHandle ( this ) - > IsJSPromise ( ) ; } <nl> <nl> + bool Value : : IsModuleNamespaceObject ( ) const { <nl> + return Utils : : OpenHandle ( this ) - > IsJSModuleNamespace ( ) ; <nl> + } <nl> + <nl> MaybeLocal < String > Value : : ToString ( Local < Context > context ) const { <nl> auto obj = Utils : : OpenHandle ( this ) ; <nl> if ( obj - > IsString ( ) ) return ToApiHandle < String > ( obj ) ; <nl> mmm a / test / cctest / test - api . cc <nl> ppp b / test / cctest / test - api . cc <nl> TEST ( ImportMeta ) { <nl> CHECK ( result - > StrictEquals ( Local < v8 : : Value > : : Cast ( v8 : : Utils : : ToLocal ( meta ) ) ) ) ; <nl> } <nl> <nl> + TEST ( GetModuleNamespace ) { <nl> + LocalContext context ; <nl> + v8 : : Isolate * isolate = context - > GetIsolate ( ) ; <nl> + v8 : : HandleScope scope ( isolate ) ; <nl> + <nl> + Local < String > url = v8_str ( " www . google . 
com " ) ; <nl> + Local < String > source_text = v8_str ( " export default 5 ; export const a = 10 ; " ) ; <nl> + v8 : : ScriptOrigin origin ( url , Local < v8 : : Integer > ( ) , Local < v8 : : Integer > ( ) , <nl> + Local < v8 : : Boolean > ( ) , Local < v8 : : Integer > ( ) , <nl> + Local < v8 : : Value > ( ) , Local < v8 : : Boolean > ( ) , <nl> + Local < v8 : : Boolean > ( ) , True ( isolate ) ) ; <nl> + v8 : : ScriptCompiler : : Source source ( source_text , origin ) ; <nl> + Local < Module > module = <nl> + v8 : : ScriptCompiler : : CompileModule ( isolate , & source ) . ToLocalChecked ( ) ; <nl> + module - > InstantiateModule ( context . local ( ) , UnexpectedModuleResolveCallback ) <nl> + . ToChecked ( ) ; <nl> + module - > Evaluate ( context . local ( ) ) . ToLocalChecked ( ) ; <nl> + <nl> + Local < Value > ns_val = module - > GetModuleNamespace ( ) ; <nl> + CHECK ( ns_val - > IsModuleNamespaceObject ( ) ) ; <nl> + Local < Object > ns = ns_val . As < Object > ( ) ; <nl> + CHECK ( ns - > Get ( context . local ( ) , v8_str ( " default " ) ) <nl> + . ToLocalChecked ( ) <nl> + - > StrictEquals ( v8 : : Number : : New ( isolate , 5 ) ) ) ; <nl> + CHECK ( ns - > Get ( context . local ( ) , v8_str ( " a " ) ) <nl> + . ToLocalChecked ( ) <nl> + - > StrictEquals ( v8 : : Number : : New ( isolate , 10 ) ) ) ; <nl> + } <nl> + <nl> TEST ( GlobalTemplateWithDoubleProperty ) { <nl> v8 : : Isolate * isolate = CcTest : : isolate ( ) ; <nl> v8 : : HandleScope handle_scope ( isolate ) ; <nl>
|
[ api ] introduce v8 : : Value : : IsModuleNamespaceObject
|
v8/v8
|
39d546a24022b62b00aedf7b556ac6c9e2306aab
|
2018-04-13T18:26:36Z
|
new file mode 100644 <nl> index 00000000000 . . affc21cf848 <nl> Binary files / dev / null and b / logo . png differ <nl>
|
logo . png
|
godotengine/godot
|
03fd40149f2b3e74cfb8c9e2cbc521f340fa932e
|
2014-02-12T15:21:47Z
|
mmm a / cyber / record / file / record_file_base . h <nl> ppp b / cyber / record / file / record_file_base . h <nl> class RecordFileBase { <nl> std : : string path_ ; <nl> proto : : Header header_ ; <nl> proto : : Index index_ ; <nl> - int fd_ = - 1 ; <nl> + int fd_ ; <nl> } ; <nl> <nl> } / / namespace record <nl> mmm a / cyber / record / file / record_file_test . cc <nl> ppp b / cyber / record / file / record_file_test . cc <nl> TEST ( RecordFileTest , TestOneChunkFile ) { <nl> <nl> TEST ( RecordFileTest , TestIndex ) { <nl> { <nl> - RecordFileWriter rfw ; <nl> + RecordFileWriter * rfw = new RecordFileWriter ( ) ; <nl> <nl> - ASSERT_TRUE ( rfw . Open ( kTestFile2 ) ) ; <nl> - ASSERT_EQ ( kTestFile2 , rfw . GetPath ( ) ) ; <nl> + ASSERT_TRUE ( rfw - > Open ( kTestFile2 ) ) ; <nl> + ASSERT_EQ ( kTestFile2 , rfw - > GetPath ( ) ) ; <nl> <nl> Header header = HeaderBuilder : : GetHeaderWithChunkParams ( 0 , 0 ) ; <nl> header . set_segment_interval ( 0 ) ; <nl> header . set_segment_raw_size ( 0 ) ; <nl> - ASSERT_TRUE ( rfw . WriteHeader ( header ) ) ; <nl> - ASSERT_FALSE ( rfw . GetHeader ( ) . is_complete ( ) ) ; <nl> + ASSERT_TRUE ( rfw - > WriteHeader ( header ) ) ; <nl> + ASSERT_FALSE ( rfw - > GetHeader ( ) . is_complete ( ) ) ; <nl> <nl> Channel chan1 ; <nl> chan1 . set_name ( kChan1 ) ; <nl> chan1 . set_message_type ( kMsgType ) ; <nl> chan1 . set_proto_desc ( kStr10B ) ; <nl> - ASSERT_TRUE ( rfw . WriteChannel ( chan1 ) ) ; <nl> + ASSERT_TRUE ( rfw - > WriteChannel ( chan1 ) ) ; <nl> <nl> Channel chan2 ; <nl> chan2 . set_name ( kChan2 ) ; <nl> chan2 . set_message_type ( kMsgType ) ; <nl> chan2 . set_proto_desc ( kStr10B ) ; <nl> - ASSERT_TRUE ( rfw . WriteChannel ( chan2 ) ) ; <nl> + ASSERT_TRUE ( rfw - > WriteChannel ( chan2 ) ) ; <nl> <nl> SingleMessage msg1 ; <nl> msg1 . set_channel_name ( chan1 . name ( ) ) ; <nl> msg1 . set_content ( kStr10B ) ; <nl> msg1 . set_time ( 1e9 ) ; <nl> - ASSERT_TRUE ( rfw . WriteMessage ( msg1 ) ) ; <nl> - ASSERT_EQ ( 1 , rfw . 
GetMessageNumber ( chan1 . name ( ) ) ) ; <nl> + ASSERT_TRUE ( rfw - > WriteMessage ( msg1 ) ) ; <nl> + ASSERT_EQ ( 1 , rfw - > GetMessageNumber ( chan1 . name ( ) ) ) ; <nl> <nl> SingleMessage msg2 ; <nl> msg2 . set_channel_name ( chan2 . name ( ) ) ; <nl> msg2 . set_content ( kStr10B ) ; <nl> msg2 . set_time ( 2e9 ) ; <nl> - ASSERT_TRUE ( rfw . WriteMessage ( msg2 ) ) ; <nl> - ASSERT_EQ ( 1 , rfw . GetMessageNumber ( chan2 . name ( ) ) ) ; <nl> + ASSERT_TRUE ( rfw - > WriteMessage ( msg2 ) ) ; <nl> + ASSERT_EQ ( 1 , rfw - > GetMessageNumber ( chan2 . name ( ) ) ) ; <nl> <nl> SingleMessage msg3 ; <nl> msg3 . set_channel_name ( chan1 . name ( ) ) ; <nl> msg3 . set_content ( kStr10B ) ; <nl> msg3 . set_time ( 3e9 ) ; <nl> - ASSERT_TRUE ( rfw . WriteMessage ( msg3 ) ) ; <nl> - ASSERT_EQ ( 2 , rfw . GetMessageNumber ( chan1 . name ( ) ) ) ; <nl> - <nl> - rfw . Close ( ) ; <nl> - ASSERT_TRUE ( rfw . GetHeader ( ) . is_complete ( ) ) ; <nl> - ASSERT_EQ ( 1 , rfw . GetHeader ( ) . chunk_number ( ) ) ; <nl> - ASSERT_EQ ( 1e9 , rfw . GetHeader ( ) . begin_time ( ) ) ; <nl> - ASSERT_EQ ( 3e9 , rfw . GetHeader ( ) . end_time ( ) ) ; <nl> - ASSERT_EQ ( 3 , rfw . GetHeader ( ) . message_number ( ) ) ; <nl> + ASSERT_TRUE ( rfw - > WriteMessage ( msg3 ) ) ; <nl> + ASSERT_EQ ( 2 , rfw - > GetMessageNumber ( chan1 . name ( ) ) ) ; <nl> + <nl> + rfw - > Close ( ) ; <nl> + ASSERT_TRUE ( rfw - > GetHeader ( ) . is_complete ( ) ) ; <nl> + ASSERT_EQ ( 1 , rfw - > GetHeader ( ) . chunk_number ( ) ) ; <nl> + ASSERT_EQ ( 1e9 , rfw - > GetHeader ( ) . begin_time ( ) ) ; <nl> + ASSERT_EQ ( 3e9 , rfw - > GetHeader ( ) . end_time ( ) ) ; <nl> + ASSERT_EQ ( 3 , rfw - > GetHeader ( ) . message_number ( ) ) ; <nl> } <nl> { <nl> RecordFileReader reader ; <nl> TEST ( RecordFileTest , TestIndex ) { <nl> } <nl> } <nl> } <nl> - ASSERT_FALSE ( remove ( kTestFile2 ) ) ; <nl> } <nl> <nl> } / / namespace record <nl> mmm a / cyber / record / file / record_file_writer . 
cc <nl> ppp b / cyber / record / file / record_file_writer . cc <nl> using apollo : : cyber : : proto : : Header ; <nl> using apollo : : cyber : : proto : : SectionType ; <nl> using apollo : : cyber : : proto : : SingleIndex ; <nl> <nl> + RecordFileWriter : : RecordFileWriter ( ) { } <nl> + <nl> RecordFileWriter : : ~ RecordFileWriter ( ) { Close ( ) ; } <nl> <nl> bool RecordFileWriter : : Open ( const std : : string & path ) { <nl> bool RecordFileWriter : : Open ( const std : : string & path ) { <nl> < < " , errno : " < < errno ; <nl> return false ; <nl> } <nl> - chunk_active_ = std : : make_unique < Chunk > ( ) ; <nl> + chunk_active_ . reset ( new Chunk ( ) ) ; <nl> + chunk_flush_ . reset ( new Chunk ( ) ) ; <nl> + is_writing_ = true ; <nl> + flush_thread_ = std : : make_shared < std : : thread > ( [ this ] ( ) { this - > Flush ( ) ; } ) ; <nl> + if ( flush_thread_ = = nullptr ) { <nl> + AERROR < < " Init flush thread error . " ; <nl> + return false ; <nl> + } <nl> return true ; <nl> } <nl> <nl> void RecordFileWriter : : Close ( ) { <nl> - if ( fd_ < 0 ) { <nl> - return ; <nl> - } <nl> - flush_task_ . wait ( ) ; <nl> - Flush ( * chunk_active_ ) ; <nl> + if ( is_writing_ ) { <nl> + / / wait for the flush operation that may exist now <nl> + while ( ! chunk_flush_ - > empty ( ) ) { <nl> + std : : this_thread : : sleep_for ( std : : chrono : : milliseconds ( 100 ) ) ; <nl> + } <nl> <nl> - if ( ! WriteIndex ( ) ) { <nl> - AERROR < < " Write index section failed , file : " < < path_ ; <nl> - } <nl> + / / last swap <nl> + { <nl> + std : : unique_lock < std : : mutex > flush_lock ( flush_mutex_ ) ; <nl> + chunk_flush_ . swap ( chunk_active_ ) ; <nl> + flush_cv_ . notify_one ( ) ; <nl> + } <nl> <nl> - header_ . set_is_complete ( true ) ; <nl> - if ( ! WriteHeader ( header_ ) ) { <nl> - AERROR < < " Overwrite header section failed , file : " < < path_ ; <nl> - } <nl> + / / wait for the last flush operation <nl> + while ( ! 
chunk_flush_ - > empty ( ) ) { <nl> + std : : this_thread : : sleep_for ( std : : chrono : : milliseconds ( 100 ) ) ; <nl> + } <nl> <nl> - if ( close ( fd_ ) < 0 ) { <nl> - AERROR < < " Close file failed , file : " < < path_ < < " , fd : " < < fd_ <nl> - < < " , errno : " < < errno ; <nl> + is_writing_ = false ; <nl> + flush_cv_ . notify_all ( ) ; <nl> + if ( flush_thread_ & & flush_thread_ - > joinable ( ) ) { <nl> + flush_thread_ - > join ( ) ; <nl> + flush_thread_ = nullptr ; <nl> + } <nl> + <nl> + if ( ! WriteIndex ( ) ) { <nl> + AERROR < < " Write index section failed , file : " < < path_ ; <nl> + } <nl> + <nl> + header_ . set_is_complete ( true ) ; <nl> + if ( ! WriteHeader ( header_ ) ) { <nl> + AERROR < < " Overwrite header section failed , file : " < < path_ ; <nl> + } <nl> + <nl> + if ( close ( fd_ ) < 0 ) { <nl> + AERROR < < " Close file failed , file : " < < path_ < < " , fd : " < < fd_ <nl> + < < " , errno : " < < errno ; <nl> + } <nl> } <nl> - fd_ = - 1 ; <nl> } <nl> <nl> bool RecordFileWriter : : WriteHeader ( const Header & header ) { <nl> bool RecordFileWriter : : WriteChunk ( const ChunkHeader & chunk_header , <nl> } <nl> <nl> bool RecordFileWriter : : WriteMessage ( const proto : : SingleMessage & message ) { <nl> - CHECK_GE ( fd_ , 0 ) < < " First , call Open " ; <nl> chunk_active_ - > add ( message ) ; <nl> auto it = channel_message_number_map_ . find ( message . channel_name ( ) ) ; <nl> if ( it ! = channel_message_number_map_ . end ( ) ) { <nl> bool RecordFileWriter : : WriteMessage ( const proto : : SingleMessage & message ) { <nl> if ( ! need_flush ) { <nl> return true ; <nl> } <nl> - <nl> - ACHECK ( flush_task_ . wait_for ( std : : chrono : : milliseconds ( 0 ) ) = = <nl> - std : : future_status : : ready ) <nl> - < < " Flushing didn ' t finish . Either the hardware cannot keep up or the " <nl> - " flush rate is too fast . 
" ; <nl> - <nl> - flush_task_ = std : : async ( <nl> - std : : launch : : async , <nl> - [ this , chunk = std : : move ( chunk_active_ ) ] ( ) { this - > Flush ( * chunk ) ; } ) ; <nl> - chunk_active_ = std : : make_unique < Chunk > ( ) ; <nl> - <nl> + { <nl> + std : : unique_lock < std : : mutex > flush_lock ( flush_mutex_ ) ; <nl> + chunk_flush_ . swap ( chunk_active_ ) ; <nl> + flush_cv_ . notify_one ( ) ; <nl> + } <nl> return true ; <nl> } <nl> <nl> - void RecordFileWriter : : Flush ( const Chunk & chunk ) { <nl> - if ( ! WriteChunk ( chunk . header_ , * ( chunk . body_ . get ( ) ) ) ) { <nl> - AERROR < < " Write chunk fail . " ; <nl> + void RecordFileWriter : : Flush ( ) { <nl> + while ( is_writing_ ) { <nl> + std : : unique_lock < std : : mutex > flush_lock ( flush_mutex_ ) ; <nl> + flush_cv_ . wait ( flush_lock , <nl> + [ this ] { return ! chunk_flush_ - > empty ( ) | | ! is_writing_ ; } ) ; <nl> + if ( ! is_writing_ ) { <nl> + break ; <nl> + } <nl> + if ( chunk_flush_ - > empty ( ) ) { <nl> + continue ; <nl> + } <nl> + if ( ! WriteChunk ( chunk_flush_ - > header_ , * ( chunk_flush_ - > body_ . get ( ) ) ) ) { <nl> + AERROR < < " Write chunk fail . " ; <nl> + } <nl> + chunk_flush_ - > clear ( ) ; <nl> } <nl> } <nl> <nl> - void RecordFileWriter : : WaitForWrite ( ) { flush_task_ . wait ( ) ; } <nl> - <nl> uint64_t RecordFileWriter : : GetMessageNumber ( <nl> const std : : string & channel_name ) const { <nl> auto search = channel_message_number_map_ . find ( channel_name ) ; <nl> mmm a / cyber / record / file / record_file_writer . h <nl> ppp b / cyber / record / file / record_file_writer . 
h <nl> <nl> <nl> # include < condition_variable > <nl> # include < fstream > <nl> - # include < future > <nl> # include < memory > <nl> # include < string > <nl> + # include < thread > <nl> # include < type_traits > <nl> # include < unordered_map > <nl> # include < utility > <nl> struct Chunk { <nl> std : : unique_ptr < proto : : ChunkBody > body_ = nullptr ; <nl> } ; <nl> <nl> - / * * <nl> - Writes cyber record files on an asynchronous task <nl> - * / <nl> class RecordFileWriter : public RecordFileBase { <nl> public : <nl> - RecordFileWriter ( ) = default ; <nl> - ~ RecordFileWriter ( ) ; <nl> + RecordFileWriter ( ) ; <nl> + virtual ~ RecordFileWriter ( ) ; <nl> bool Open ( const std : : string & path ) override ; <nl> void Close ( ) override ; <nl> bool WriteHeader ( const proto : : Header & header ) ; <nl> class RecordFileWriter : public RecordFileBase { <nl> bool WriteMessage ( const proto : : SingleMessage & message ) ; <nl> uint64_t GetMessageNumber ( const std : : string & channel_name ) const ; <nl> <nl> - / / For testing <nl> - void WaitForWrite ( ) ; <nl> - <nl> private : <nl> bool WriteChunk ( const proto : : ChunkHeader & chunk_header , <nl> const proto : : ChunkBody & chunk_body ) ; <nl> template < typename T > <nl> bool WriteSection ( const T & message ) ; <nl> bool WriteIndex ( ) ; <nl> - void Flush ( const Chunk & chunk ) ; <nl> - bool IsChunkFlushEmpty ( ) ; <nl> - void BlockUntilSpaceAvailable ( ) ; <nl> - / / make moveable <nl> - std : : unique_ptr < Chunk > chunk_active_ ; <nl> - / / Initialize with a dummy value to simplify checking later <nl> - std : : future < void > flush_task_ = std : : async ( std : : launch : : async , [ ] ( ) { } ) ; <nl> + void Flush ( ) ; <nl> + bool is_writing_ = false ; <nl> + std : : unique_ptr < Chunk > chunk_active_ = nullptr ; <nl> + std : : unique_ptr < Chunk > chunk_flush_ = nullptr ; <nl> + std : : shared_ptr < std : : thread > flush_thread_ = nullptr ; <nl> + std : : mutex flush_mutex_ ; <nl> + std : : 
condition_variable flush_cv_ ; <nl> std : : unordered_map < std : : string , uint64_t > channel_message_number_map_ ; <nl> } ; <nl> <nl> bool RecordFileWriter : : WriteSection ( const T & message ) { <nl> Section section ; <nl> / / / zero out whole struct even if padded <nl> memset ( & section , 0 , sizeof ( section ) ) ; <nl> - section . type = type ; <nl> - section . size = static_cast < int64_t > ( message . ByteSizeLong ( ) ) ; <nl> + section = { type , static_cast < int64_t > ( message . ByteSizeLong ( ) ) } ; <nl> ssize_t count = write ( fd_ , & section , sizeof ( section ) ) ; <nl> if ( count < 0 ) { <nl> AERROR < < " Write fd failed , fd : " < < fd_ < < " , errno : " < < errno ; <nl> mmm a / cyber / record / record_viewer_test . cc <nl> ppp b / cyber / record / record_viewer_test . cc <nl> <nl> <nl> # include " gtest / gtest . h " <nl> <nl> - # include " cyber / common / file . h " <nl> # include " cyber / common / log . h " <nl> # include " cyber / record / record_reader . h " <nl> # include " cyber / record / record_writer . h " <nl> static void ConstructRecord ( uint64_t msg_num , uint64_t begin_time , <nl> ai = msg_num - 1 - i ; <nl> } <nl> auto msg = std : : make_shared < RawMessage > ( std : : to_string ( ai ) ) ; <nl> - / / Since writer is meant for real time operations at a set rate , <nl> - / / we need to wait in the test . Otherwise , we should write synchronously <nl> - writer . WaitForWrite ( ) ; <nl> writer . WriteMessage ( kChannelName1 , msg , begin_time + time_step * ai ) ; <nl> } <nl> ASSERT_EQ ( msg_num , writer . GetMessageNumber ( kChannelName1 ) ) ; <nl> TEST ( RecordTest , iterator_test ) { <nl> <nl> uint64_t i = 0 ; <nl> for ( auto & msg : viewer ) { <nl> - ASSERT_EQ ( kChannelName1 , msg . channel_name ) ; <nl> - ASSERT_EQ ( begin_time + step_time * i , msg . time ) ; <nl> - ASSERT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> - + + i ; <nl> + EXPECT_EQ ( kChannelName1 , msg . 
channel_name ) ; <nl> + EXPECT_EQ ( begin_time + step_time * i , msg . time ) ; <nl> + EXPECT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> + i + + ; <nl> } <nl> EXPECT_EQ ( msg_num , i ) ; <nl> <nl> i = 0 ; <nl> std : : for_each ( viewer . begin ( ) , viewer . end ( ) , [ & i ] ( RecordMessage & msg ) { <nl> - ASSERT_EQ ( kChannelName1 , msg . channel_name ) ; <nl> + EXPECT_EQ ( kChannelName1 , msg . channel_name ) ; <nl> / / EXPECT_EQ ( begin_time + step_time * i , msg . time ) ; <nl> - ASSERT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> - + + i ; <nl> + EXPECT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> + i + + ; <nl> } ) ; <nl> EXPECT_EQ ( msg_num , i ) ; <nl> <nl> i = 0 ; <nl> for ( auto it = viewer . begin ( ) ; it ! = viewer . end ( ) ; + + it ) { <nl> - ASSERT_EQ ( kChannelName1 , it - > channel_name ) ; <nl> - ASSERT_EQ ( begin_time + step_time * i , it - > time ) ; <nl> - ASSERT_EQ ( std : : to_string ( i ) , it - > content ) ; <nl> - + + i ; <nl> + EXPECT_EQ ( kChannelName1 , it - > channel_name ) ; <nl> + EXPECT_EQ ( begin_time + step_time * i , it - > time ) ; <nl> + EXPECT_EQ ( std : : to_string ( i ) , it - > content ) ; <nl> + i + + ; <nl> } <nl> EXPECT_EQ ( msg_num , i ) ; <nl> ASSERT_FALSE ( remove ( kTestFile ) ) ; <nl> TEST ( RecordTest , iterator_test_reverse ) { <nl> <nl> uint64_t i = 0 ; <nl> for ( auto & msg : viewer ) { <nl> - ASSERT_EQ ( kChannelName1 , msg . channel_name ) ; <nl> - ASSERT_EQ ( begin_time + step_time * i , msg . time ) ; <nl> - ASSERT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> - + + i ; <nl> + EXPECT_EQ ( kChannelName1 , msg . channel_name ) ; <nl> + EXPECT_EQ ( begin_time + step_time * i , msg . time ) ; <nl> + EXPECT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> + i + + ; <nl> } <nl> EXPECT_EQ ( msg_num , i ) ; <nl> <nl> i = 0 ; <nl> for ( auto it = viewer . begin ( ) ; it ! = viewer . 
end ( ) ; + + it ) { <nl> - ASSERT_EQ ( kChannelName1 , it - > channel_name ) ; <nl> - ASSERT_EQ ( begin_time + step_time * i , it - > time ) ; <nl> - ASSERT_EQ ( std : : to_string ( i ) , it - > content ) ; <nl> - + + i ; <nl> + EXPECT_EQ ( kChannelName1 , it - > channel_name ) ; <nl> + EXPECT_EQ ( begin_time + step_time * i , it - > time ) ; <nl> + EXPECT_EQ ( std : : to_string ( i ) , it - > content ) ; <nl> + i + + ; <nl> } <nl> EXPECT_EQ ( msg_num , i ) ; <nl> ASSERT_FALSE ( remove ( kTestFile ) ) ; <nl> TEST ( RecordTest , mult_iterator_test ) { <nl> <nl> uint64_t i = 0 ; <nl> for ( auto & msg : viewer ) { / / # 2 iterator <nl> - ASSERT_EQ ( kChannelName1 , msg . channel_name ) ; <nl> - ASSERT_EQ ( begin_time + step_time * i , msg . time ) ; <nl> - ASSERT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> - + + i ; <nl> + EXPECT_EQ ( kChannelName1 , msg . channel_name ) ; <nl> + EXPECT_EQ ( begin_time + step_time * i , msg . time ) ; <nl> + EXPECT_EQ ( std : : to_string ( i ) , msg . content ) ; <nl> + i + + ; <nl> } <nl> EXPECT_EQ ( msg_num , i ) ; <nl> ASSERT_FALSE ( remove ( kTestFile ) ) ; <nl> mmm a / cyber / record / record_writer . cc <nl> ppp b / cyber / record / record_writer . cc <nl> bool RecordWriter : : IsNewChannel ( const std : : string & channel_name ) const { <nl> channel_message_number_map_ . end ( ) ; <nl> } <nl> <nl> - void RecordWriter : : WaitForWrite ( ) { file_writer_ - > WaitForWrite ( ) ; } <nl> - <nl> void RecordWriter : : OnNewChannel ( const std : : string & channel_name , <nl> const std : : string & message_type , <nl> const std : : string & proto_desc ) { <nl> mmm a / cyber / record / record_writer . h <nl> ppp b / cyber / record / record_writer . 
h <nl> class RecordWriter : public RecordBase { <nl> * / <nl> bool IsNewChannel ( const std : : string & channel_name ) const ; <nl> <nl> - / * * <nl> - * @ brief Meant for testing <nl> - * / <nl> - void WaitForWrite ( ) ; <nl> - <nl> private : <nl> bool WriteMessage ( const proto : : SingleMessage & single_msg ) ; <nl> bool SplitOutfile ( ) ; <nl> mmm a / tools / bazel . rc <nl> ppp b / tools / bazel . rc <nl> build - - cxxopt = " - fdiagnostics - color = always " <nl> <nl> build - - per_file_copt = external / upb / . * @ - Wno - sign - compare <nl> build - - copt = " - Werror = sign - compare " <nl> - build - - per_file_copt = ^ modules / . * \ . cc , ^ cyber / . * \ . cc , @ " - Werror = return - type " <nl> + build - - copt = " - Werror = return - type " <nl> build - - copt = " - Werror = unused - variable " <nl> build - - copt = " - Werror = unused - but - set - variable " <nl> build - - copt = " - Werror = switch " <nl>
|
Revert " cyber_recorder file writer threading bug fix ( ) "
|
ApolloAuto/apollo
|
0b2b252780122fa55f9b406dc27d69079abe6cae
|
2020-10-27T13:25:00Z
|
mmm a / src / core / console_user_server / include / configuration_manager . hpp <nl> ppp b / src / core / console_user_server / include / configuration_manager . hpp <nl> class configuration_manager final { <nl> std : : vector < std : : pair < std : : string , std : : vector < std : : string > > > targets = { <nl> { constants : : get_user_configuration_directory ( ) , { core_configuration_file_path } } , <nl> } ; <nl> - file_monitor_ = std : : make_unique < file_monitor > ( logger_ , targets , std : : bind ( & configuration_manager : : reload_core_configuration , this , std : : placeholders : : _1 ) ) ; <nl> + file_monitor_ = std : : make_unique < file_monitor > ( logger_ , targets , <nl> + std : : bind ( & configuration_manager : : core_configuration_file_updated_callback , this , std : : placeholders : : _1 ) ) ; <nl> <nl> - reload_core_configuration ( core_configuration_file_path ) ; <nl> + core_configuration_file_updated_callback ( core_configuration_file_path ) ; <nl> } <nl> <nl> ~ configuration_manager ( void ) { <nl> class configuration_manager final { <nl> } <nl> <nl> private : <nl> - void reload_core_configuration ( const std : : string & file_path ) { <nl> + void core_configuration_file_updated_callback ( const std : : string & file_path ) { <nl> auto new_ptr = std : : make_unique < core_configuration > ( logger_ , file_path ) ; <nl> / / skip if karabiner . json is broken . <nl> if ( core_configuration_ & & ! new_ptr - > is_loaded ( ) ) { <nl>
|
reload_core_configuration -> core_configuration_file_updated_callback
|
pqrs-org/Karabiner-Elements
|
03ef7adf488371625f49bc2fab1b3222b452978b
|
2017-01-19T12:01:18Z
|
mmm a / CHANGES . txt <nl> ppp b / CHANGES . txt <nl> Unreleased Changes <nl> * Update go_package options to reference google . golang . org / protobuf module . <nl> <nl> <nl> - 2020 - 07 - 14 version 3 . 13 . 0 - rc1 ( C + + / Java / Python / PHP / Objective - C / C # / Ruby / JavaScript ) <nl> + 2020 - 07 - 14 version 3 . 13 . 0 ( C + + / Java / Python / PHP / Objective - C / C # / Ruby / JavaScript ) <nl> + <nl> + PHP : <nl> + * The C extension is completely rewritten . The new C extension has significantly <nl> + better parsing performance and fixes a handful of conformance issues . It will <nl> + also make it easier to add support for more features like proto2 and proto3 presence . <nl> + * The new C extension does not support PHP 5 . x . PHP 5 . x users can still use pure - PHP . <nl> <nl> C + + : <nl> * Removed deprecated unsafe arena string accessors <nl> Unreleased Changes <nl> performance ( the legacy generated code will still work , but might incur <nl> a slight performance penalty ) . <nl> <nl> + 2020 - 07 - 28 version 3 . 12 . 4 ( C + + / Java / Python / PHP / Objective - C / C # / Ruby / JavaScript ) <nl> + <nl> + This release contains no significant changes , but exists because 3 . 12 . 3 was <nl> + mistakenly tagged at the wrong commit . <nl> + <nl> 2020 - 06 - 01 version 3 . 12 . 3 ( C + + / Java / Python / PHP / Objective - C / C # / Ruby / JavaScript ) <nl> <nl> Objective - C <nl>
|
Updated CHANGES.txt
|
protocolbuffers/protobuf
|
e5fe9b8cbdfe7d25852ff2e34e3250e2c0c98860
|
2020-08-19T17:31:31Z
|
mmm a / SConstruct <nl> ppp b / SConstruct <nl> def findVersion ( root , choices ) : <nl> raise " can ' t find a version of [ " + root + " ] choices : " + choices <nl> <nl> if " darwin " = = os . sys . platform : <nl> - env . Append ( CPPPATH = [ " / sw / include " , " - I / System / Library / Frameworks / JavaVM . framework / Versions / CurrentJDK / Headers / " ] ) <nl> - env . Append ( LIBPATH = [ " / sw / lib / " ] ) <nl> + env . Append ( CPPPATH = [ " / sw / include " , " / opt / local / include " , " - I / System / Library / Frameworks / JavaVM . framework / Versions / CurrentJDK / Headers / " ] ) <nl> + env . Append ( LIBPATH = [ " / sw / lib / " , " / opt / local / lib " ] ) <nl> <nl> env . Append ( CPPFLAGS = " - mmacosx - version - min = 10 . 4 " ) <nl> env . Append ( FRAMEWORKS = [ " JavaVM " ] ) <nl>
|
add macports include/lib locations
|
mongodb/mongo
|
015ebd1407f22dec035dc26d4a05a774810011e4
|
2009-01-23T18:37:41Z
|
mmm a / CONTRIBUTORS . md <nl> ppp b / CONTRIBUTORS . md <nl> List of Contributors <nl> * [ Aaron Markham ] ( https : / / github . com / aaronmarkham ) <nl> * [ Sam Skalicky ] ( https : / / github . com / samskalicky ) <nl> * [ Per Goncalves da Silva ] ( https : / / github . com / perdasilva ) <nl> - * [ Cheng - Che Lee ] ( https : / / github . com / stu1130 ) <nl> similarity index 82 % <nl> rename from python / mxnet / io / io . py <nl> rename to python / mxnet / io . py <nl> mmm a / python / mxnet / io / io . py <nl> ppp b / python / mxnet / io . py <nl> <nl> <nl> " " " Data iterators for common data formats . " " " <nl> from __future__ import absolute_import <nl> - from collections import namedtuple <nl> + from collections import OrderedDict , namedtuple <nl> <nl> import sys <nl> import ctypes <nl> import logging <nl> import threading <nl> + try : <nl> + import h5py <nl> + except ImportError : <nl> + h5py = None <nl> import numpy as np <nl> - <nl> - from . . base import _LIB <nl> - from . . base import c_str_array , mx_uint , py_str <nl> - from . . base import DataIterHandle , NDArrayHandle <nl> - from . . base import mx_real_t <nl> - from . . base import check_call , build_param_doc as _build_param_doc <nl> - from . . ndarray import NDArray <nl> - from . . ndarray . sparse import CSRNDArray <nl> - from . . ndarray import _ndarray_cls <nl> - from . . ndarray import array <nl> - from . . ndarray import concat <nl> - <nl> - from . utils import init_data , has_instance , getdata_by_idx <nl> + from . base import _LIB <nl> + from . base import c_str_array , mx_uint , py_str <nl> + from . base import DataIterHandle , NDArrayHandle <nl> + from . base import mx_real_t <nl> + from . base import check_call , build_param_doc as _build_param_doc <nl> + from . ndarray import NDArray <nl> + from . ndarray . sparse import CSRNDArray <nl> + from . ndarray . sparse import array as sparse_array <nl> + from . ndarray import _ndarray_cls <nl> + from . 
ndarray import array <nl> + from . ndarray import concatenate <nl> + from . ndarray import arange <nl> + from . ndarray . random import shuffle as random_shuffle <nl> <nl> class DataDesc ( namedtuple ( ' DataDesc ' , [ ' name ' , ' shape ' ] ) ) : <nl> " " " DataDesc is used to store name , shape , type and layout <nl> def getindex ( self ) : <nl> def getpad ( self ) : <nl> return self . current_batch . pad <nl> <nl> + def _init_data ( data , allow_empty , default_name ) : <nl> + " " " Convert data into canonical form . " " " <nl> + assert ( data is not None ) or allow_empty <nl> + if data is None : <nl> + data = [ ] <nl> + <nl> + if isinstance ( data , ( np . ndarray , NDArray , h5py . Dataset ) <nl> + if h5py else ( np . ndarray , NDArray ) ) : <nl> + data = [ data ] <nl> + if isinstance ( data , list ) : <nl> + if not allow_empty : <nl> + assert ( len ( data ) > 0 ) <nl> + if len ( data ) = = 1 : <nl> + data = OrderedDict ( [ ( default_name , data [ 0 ] ) ] ) # pylint : disable = redefined - variable - type <nl> + else : <nl> + data = OrderedDict ( # pylint : disable = redefined - variable - type <nl> + [ ( ' _ % d_ % s ' % ( i , default_name ) , d ) for i , d in enumerate ( data ) ] ) <nl> + if not isinstance ( data , dict ) : <nl> + raise TypeError ( " Input must be NDArray , numpy . ndarray , h5py . Dataset " + \ <nl> + " a list of them or dict with them as values " ) <nl> + for k , v in data . items ( ) : <nl> + if not isinstance ( v , ( NDArray , h5py . Dataset ) if h5py else NDArray ) : <nl> + try : <nl> + data [ k ] = array ( v ) <nl> + except : <nl> + raise TypeError ( ( " Invalid type ' % s ' for % s , " % ( type ( v ) , k ) ) + \ <nl> + " should be NDArray , numpy . ndarray or h5py . Dataset " ) <nl> + <nl> + return list ( sorted ( data . items ( ) ) ) <nl> + <nl> + def _has_instance ( data , dtype ) : <nl> + " " " Return True if ` ` data ` ` has instance of ` ` dtype ` ` . <nl> + This function is called after _init_data . 
<nl> + ` ` data ` ` is a list of ( str , NDArray ) " " " <nl> + for item in data : <nl> + _ , arr = item <nl> + if isinstance ( arr , dtype ) : <nl> + return True <nl> + return False <nl> + <nl> + def _shuffle ( data , idx ) : <nl> + " " " Shuffle the data . " " " <nl> + shuffle_data = [ ] <nl> + <nl> + for k , v in data : <nl> + if ( isinstance ( v , h5py . Dataset ) if h5py else False ) : <nl> + shuffle_data . append ( ( k , v ) ) <nl> + elif isinstance ( v , CSRNDArray ) : <nl> + shuffle_data . append ( ( k , sparse_array ( v . asscipy ( ) [ idx ] , v . context ) ) ) <nl> + else : <nl> + shuffle_data . append ( ( k , array ( v . asnumpy ( ) [ idx ] , v . context ) ) ) <nl> + <nl> + return shuffle_data <nl> <nl> class NDArrayIter ( DataIter ) : <nl> " " " Returns an iterator for ` ` mx . nd . NDArray ` ` , ` ` numpy . ndarray ` ` , ` ` h5py . Dataset ` ` <nl> class NDArrayIter ( DataIter ) : <nl> . . . <nl> > > > batchidx # Remaining examples are discarded . So , 10 / 3 batches are created . <nl> 3 <nl> - > > > dataiter = mx . io . NDArrayIter ( data , labels , 3 , False , last_batch_handle = ' roll_over ' ) <nl> - > > > batchidx = 0 <nl> - > > > for batch in dataiter : <nl> - . . . batchidx + = 1 <nl> - . . . <nl> - > > > batchidx # Remaining examples are rolled over to the next iteration . <nl> - 3 <nl> - > > > dataiter . reset ( ) <nl> - > > > dataiter . next ( ) . data [ 0 ] . asnumpy ( ) <nl> - [ [ [ 36 . 37 . ] <nl> - [ 38 . 39 . ] ] <nl> - [ [ 0 . 1 . ] <nl> - [ 2 . 3 . ] ] <nl> - [ [ 4 . 5 . ] <nl> - [ 6 . 7 . ] ] ] <nl> - ( 3L , 2L , 2L ) <nl> <nl> ` NDArrayIter ` also supports multiple input and labels . <nl> <nl> class NDArrayIter ( DataIter ) : <nl> Only supported if no h5py . Dataset inputs are used . <nl> last_batch_handle : str , optional <nl> How to handle the last batch . This parameter can be ' pad ' , ' discard ' or <nl> - ' roll_over ' . 
<nl> - If ' pad ' , the last batch will be padded with data starting from the begining <nl> - If ' discard ' , the last batch will be discarded <nl> - If ' roll_over ' , the remaining elements will be rolled over to the next iteration and <nl> - note that it is intended for training and can cause problems if used for prediction . <nl> + ' roll_over ' . ' roll_over ' is intended for training and can cause problems <nl> + if used for prediction . <nl> data_name : str , optional <nl> The data name . <nl> label_name : str , optional <nl> def __init__ ( self , data , label = None , batch_size = 1 , shuffle = False , <nl> label_name = ' softmax_label ' ) : <nl> super ( NDArrayIter , self ) . __init__ ( batch_size ) <nl> <nl> - self . data = init_data ( data , allow_empty = False , default_name = data_name ) <nl> - self . label = init_data ( label , allow_empty = True , default_name = label_name ) <nl> + self . data = _init_data ( data , allow_empty = False , default_name = data_name ) <nl> + self . label = _init_data ( label , allow_empty = True , default_name = label_name ) <nl> <nl> - if ( ( has_instance ( self . data , CSRNDArray ) or has_instance ( self . label , CSRNDArray ) ) and <nl> + if ( ( _has_instance ( self . data , CSRNDArray ) or _has_instance ( self . label , CSRNDArray ) ) and <nl> ( last_batch_handle ! = ' discard ' ) ) : <nl> raise NotImplementedError ( " ` NDArrayIter ` only supports ` ` CSRNDArray ` ` " \ <nl> " with ` last_batch_handle ` set to ` discard ` . " ) <nl> <nl> - self . idx = np . arange ( self . data [ 0 ] [ 1 ] . shape [ 0 ] ) <nl> - self . shuffle = shuffle <nl> - self . last_batch_handle = last_batch_handle <nl> - self . batch_size = batch_size <nl> - self . cursor = - self . batch_size <nl> - self . num_data = self . idx . shape [ 0 ] <nl> - # shuffle <nl> - self . reset ( ) <nl> + # shuffle data <nl> + if shuffle : <nl> + tmp_idx = arange ( self . data [ 0 ] [ 1 ] . shape [ 0 ] , dtype = np . int32 ) <nl> + self . 
idx = random_shuffle ( tmp_idx , out = tmp_idx ) . asnumpy ( ) <nl> + self . data = _shuffle ( self . data , self . idx ) <nl> + self . label = _shuffle ( self . label , self . idx ) <nl> + else : <nl> + self . idx = np . arange ( self . data [ 0 ] [ 1 ] . shape [ 0 ] ) <nl> + <nl> + # batching <nl> + if last_batch_handle = = ' discard ' : <nl> + new_n = self . data [ 0 ] [ 1 ] . shape [ 0 ] - self . data [ 0 ] [ 1 ] . shape [ 0 ] % batch_size <nl> + self . idx = self . idx [ : new_n ] <nl> <nl> self . data_list = [ x [ 1 ] for x in self . data ] + [ x [ 1 ] for x in self . label ] <nl> self . num_source = len ( self . data_list ) <nl> - # used for ' roll_over ' <nl> - self . _cache_data = None <nl> - self . _cache_label = None <nl> + self . num_data = self . idx . shape [ 0 ] <nl> + assert self . num_data > = batch_size , \ <nl> + " batch_size needs to be smaller than data size . " <nl> + self . cursor = - batch_size <nl> + self . batch_size = batch_size <nl> + self . last_batch_handle = last_batch_handle <nl> <nl> @ property <nl> def provide_data ( self ) : <nl> def provide_label ( self ) : <nl> <nl> def hard_reset ( self ) : <nl> " " " Ignore roll over data and set to start . " " " <nl> - if self . shuffle : <nl> - self . _shuffle_data ( ) <nl> self . cursor = - self . batch_size <nl> - self . _cache_data = None <nl> - self . _cache_label = None <nl> <nl> def reset ( self ) : <nl> - " " " Resets the iterator to the beginning of the data . " " " <nl> - if self . shuffle : <nl> - self . _shuffle_data ( ) <nl> - # the range below indicate the last batch <nl> - if self . last_batch_handle = = ' roll_over ' and \ <nl> - self . num_data - self . batch_size < self . cursor < self . num_data : <nl> - # ( self . cursor - self . num_data ) represents the data we have for the last batch <nl> - self . cursor = self . cursor - self . num_data - self . batch_size <nl> + if self . last_batch_handle = = ' roll_over ' and self . cursor > self . num_data : <nl> + self . 
cursor = - self . batch_size + ( self . cursor % self . num_data ) % self . batch_size <nl> else : <nl> self . cursor = - self . batch_size <nl> <nl> def iter_next ( self ) : <nl> - " " " Increments the coursor by batch_size for next batch <nl> - and check current cursor if it exceed the number of data points . " " " <nl> self . cursor + = self . batch_size <nl> return self . cursor < self . num_data <nl> <nl> def next ( self ) : <nl> - " " " Returns the next batch of data . " " " <nl> - if not self . iter_next ( ) : <nl> - raise StopIteration <nl> - data = self . getdata ( ) <nl> - label = self . getlabel ( ) <nl> - # iter should stop when last batch is not complete <nl> - if data [ 0 ] . shape [ 0 ] ! = self . batch_size : <nl> - # in this case , cache it for next epoch <nl> - self . _cache_data = data <nl> - self . _cache_label = label <nl> + if self . iter_next ( ) : <nl> + return DataBatch ( data = self . getdata ( ) , label = self . getlabel ( ) , \ <nl> + pad = self . getpad ( ) , index = None ) <nl> + else : <nl> raise StopIteration <nl> - return DataBatch ( data = data , label = label , \ <nl> - pad = self . getpad ( ) , index = None ) <nl> - <nl> - def _getdata ( self , data_source , start = None , end = None ) : <nl> - " " " Load data from underlying arrays . " " " <nl> - assert start is not None or end is not None , ' should at least specify start or end ' <nl> - start = start if start is not None else 0 <nl> - end = end if end is not None else data_source [ 0 ] [ 1 ] . shape [ 0 ] <nl> - s = slice ( start , end ) <nl> - return [ <nl> - x [ 1 ] [ s ] <nl> - if isinstance ( x [ 1 ] , ( np . ndarray , NDArray ) ) else <nl> - # h5py ( only supports indices in increasing order ) <nl> - array ( x [ 1 ] [ sorted ( self . idx [ s ] ) ] [ [ <nl> - list ( self . idx [ s ] ) . index ( i ) <nl> - for i in sorted ( self . 
idx [ s ] ) <nl> - ] ] ) for x in data_source <nl> - ] <nl> <nl> - def _concat ( self , first_data , second_data ) : <nl> - " " " Helper function to concat two NDArrays . " " " <nl> - return [ <nl> - concat ( first_data [ 0 ] , second_data [ 0 ] , dim = 0 ) <nl> - ] <nl> - <nl> - def _batchify ( self , data_source ) : <nl> + def _getdata ( self , data_source ) : <nl> " " " Load data from underlying arrays , internal use only . " " " <nl> - assert self . cursor < self . num_data , ' DataIter needs reset . ' <nl> - # first batch of next epoch with ' roll_over ' <nl> - if self . last_batch_handle = = ' roll_over ' and \ <nl> - - self . batch_size < self . cursor < 0 : <nl> - assert self . _cache_data is not None or self . _cache_label is not None , \ <nl> - ' next epoch should have cached data ' <nl> - cache_data = self . _cache_data if self . _cache_data is not None else self . _cache_label <nl> - second_data = self . _getdata ( <nl> - data_source , end = self . cursor + self . batch_size ) <nl> - if self . _cache_data is not None : <nl> - self . _cache_data = None <nl> - else : <nl> - self . _cache_label = None <nl> - return self . _concat ( cache_data , second_data ) <nl> - # last batch with ' pad ' <nl> - elif self . last_batch_handle = = ' pad ' and \ <nl> - self . cursor + self . batch_size > self . num_data : <nl> - pad = self . batch_size - self . num_data + self . cursor <nl> - first_data = self . _getdata ( data_source , start = self . cursor ) <nl> - second_data = self . _getdata ( data_source , end = pad ) <nl> - return self . _concat ( first_data , second_data ) <nl> - # normal case <nl> + assert ( self . cursor < self . num_data ) , " DataIter needs reset . " <nl> + if self . cursor + self . batch_size < = self . num_data : <nl> + return [ <nl> + # np . ndarray or NDArray case <nl> + x [ 1 ] [ self . cursor : self . cursor + self . batch_size ] <nl> + if isinstance ( x [ 1 ] , ( np . 
ndarray , NDArray ) ) else <nl> + # h5py ( only supports indices in increasing order ) <nl> + array ( x [ 1 ] [ sorted ( self . idx [ <nl> + self . cursor : self . cursor + self . batch_size ] ) ] [ [ <nl> + list ( self . idx [ self . cursor : <nl> + self . cursor + self . batch_size ] ) . index ( i ) <nl> + for i in sorted ( self . idx [ <nl> + self . cursor : self . cursor + self . batch_size ] ) <nl> + ] ] ) for x in data_source <nl> + ] <nl> else : <nl> - if self . cursor + self . batch_size < self . num_data : <nl> - end_idx = self . cursor + self . batch_size <nl> - # get incomplete last batch <nl> - else : <nl> - end_idx = self . num_data <nl> - return self . _getdata ( data_source , self . cursor , end_idx ) <nl> + pad = self . batch_size - self . num_data + self . cursor <nl> + return [ <nl> + # np . ndarray or NDArray case <nl> + concatenate ( [ x [ 1 ] [ self . cursor : ] , x [ 1 ] [ : pad ] ] ) <nl> + if isinstance ( x [ 1 ] , ( np . ndarray , NDArray ) ) else <nl> + # h5py ( only supports indices in increasing order ) <nl> + concatenate ( [ <nl> + array ( x [ 1 ] [ sorted ( self . idx [ self . cursor : ] ) ] [ [ <nl> + list ( self . idx [ self . cursor : ] ) . index ( i ) <nl> + for i in sorted ( self . idx [ self . cursor : ] ) <nl> + ] ] ) , <nl> + array ( x [ 1 ] [ sorted ( self . idx [ : pad ] ) ] [ [ <nl> + list ( self . idx [ : pad ] ) . index ( i ) <nl> + for i in sorted ( self . idx [ : pad ] ) <nl> + ] ] ) <nl> + ] ) for x in data_source <nl> + ] <nl> <nl> def getdata ( self ) : <nl> - " " " Get data . " " " <nl> - return self . _batchify ( self . data ) <nl> + return self . _getdata ( self . data ) <nl> <nl> def getlabel ( self ) : <nl> - " " " Get label . " " " <nl> - return self . _batchify ( self . label ) <nl> + return self . _getdata ( self . label ) <nl> <nl> def getpad ( self ) : <nl> - " " " Get pad value of DataBatch . " " " <nl> if self . last_batch_handle = = ' pad ' and \ <nl> self . cursor + self . batch_size > self . 
num_data : <nl> return self . cursor + self . batch_size - self . num_data <nl> - # check the first batch <nl> - elif self . last_batch_handle = = ' roll_over ' and \ <nl> - - self . batch_size < self . cursor < 0 : <nl> - return - self . cursor <nl> else : <nl> return 0 <nl> <nl> - def _shuffle_data ( self ) : <nl> - " " " Shuffle the data . " " " <nl> - # shuffle index <nl> - np . random . shuffle ( self . idx ) <nl> - # get the data by corresponding index <nl> - self . data = getdata_by_idx ( self . data , self . idx ) <nl> - self . label = getdata_by_idx ( self . label , self . idx ) <nl> <nl> class MXDataIter ( DataIter ) : <nl> " " " A python wrapper a C + + data iterator . <nl> class MXDataIter ( DataIter ) : <nl> underlying C + + data iterators . <nl> <nl> Usually you don ' t need to interact with ` MXDataIter ` directly unless you are <nl> - implementing your own data iterators in C + + . To do that , please refer to <nl> + implementing your own data iterators in C + + . To do that , please refer to <nl> examples under the ` src / io ` folder . <nl> <nl> Parameters <nl> deleted file mode 100644 <nl> index 5c5e2e68d84 . . 00000000000 <nl> mmm a / python / mxnet / io / __init__ . py <nl> ppp / dev / null <nl> <nl> - # ! / usr / bin / env python <nl> - <nl> - # Licensed to the Apache Software Foundation ( ASF ) under one <nl> - # or more contributor license agreements . See the NOTICE file <nl> - # distributed with this work for additional information <nl> - # regarding copyright ownership . The ASF licenses this file <nl> - # to you under the Apache License , Version 2 . 0 ( the <nl> - # " License " ) ; you may not use this file except in compliance <nl> - # with the License . You may obtain a copy of the License at <nl> - # <nl> - # http : / / www . apache . org / licenses / LICENSE - 2 . 
0 <nl> - # <nl> - # Unless required by applicable law or agreed to in writing , <nl> - # software distributed under the License is distributed on an <nl> - # " AS IS " BASIS , WITHOUT WARRANTIES OR CONDITIONS OF ANY <nl> - # KIND , either express or implied . See the License for the <nl> - # specific language governing permissions and limitations <nl> - # under the License . <nl> - <nl> - # coding : utf - 8 <nl> - # pylint : disable = wildcard - import <nl> - " " " Data iterators for common data formats and utility functions . " " " <nl> - from __future__ import absolute_import <nl> - <nl> - from . import io <nl> - from . io import * <nl> - <nl> - from . import utils <nl> - from . utils import * <nl> deleted file mode 100644 <nl> index 872e6410d7d . . 00000000000 <nl> mmm a / python / mxnet / io / utils . py <nl> ppp / dev / null <nl> <nl> - # Licensed to the Apache Software Foundation ( ASF ) under one <nl> - # or more contributor license agreements . See the NOTICE file <nl> - # distributed with this work for additional information <nl> - # regarding copyright ownership . The ASF licenses this file <nl> - # to you under the Apache License , Version 2 . 0 ( the <nl> - # " License " ) ; you may not use this file except in compliance <nl> - # with the License . You may obtain a copy of the License at <nl> - # <nl> - # http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - # <nl> - # Unless required by applicable law or agreed to in writing , <nl> - # software distributed under the License is distributed on an <nl> - # " AS IS " BASIS , WITHOUT WARRANTIES OR CONDITIONS OF ANY <nl> - # KIND , either express or implied . See the License for the <nl> - # specific language governing permissions and limitations <nl> - # under the License . <nl> - <nl> - " " " utility functions for io . 
py " " " <nl> - from collections import OrderedDict <nl> - <nl> - import numpy as np <nl> - try : <nl> - import h5py <nl> - except ImportError : <nl> - h5py = None <nl> - <nl> - from . . ndarray . sparse import CSRNDArray <nl> - from . . ndarray . sparse import array as sparse_array <nl> - from . . ndarray import NDArray <nl> - from . . ndarray import array <nl> - <nl> - def init_data ( data , allow_empty , default_name ) : <nl> - " " " Convert data into canonical form . " " " <nl> - assert ( data is not None ) or allow_empty <nl> - if data is None : <nl> - data = [ ] <nl> - <nl> - if isinstance ( data , ( np . ndarray , NDArray , h5py . Dataset ) <nl> - if h5py else ( np . ndarray , NDArray ) ) : <nl> - data = [ data ] <nl> - if isinstance ( data , list ) : <nl> - if not allow_empty : <nl> - assert ( len ( data ) > 0 ) <nl> - if len ( data ) = = 1 : <nl> - data = OrderedDict ( [ ( default_name , data [ 0 ] ) ] ) # pylint : disable = redefined - variable - type <nl> - else : <nl> - data = OrderedDict ( # pylint : disable = redefined - variable - type <nl> - [ ( ' _ % d_ % s ' % ( i , default_name ) , d ) for i , d in enumerate ( data ) ] ) <nl> - if not isinstance ( data , dict ) : <nl> - raise TypeError ( " Input must be NDArray , numpy . ndarray , h5py . Dataset " + <nl> - " a list of them or dict with them as values " ) <nl> - for k , v in data . items ( ) : <nl> - if not isinstance ( v , ( NDArray , h5py . Dataset ) if h5py else NDArray ) : <nl> - try : <nl> - data [ k ] = array ( v ) <nl> - except : <nl> - raise TypeError ( ( " Invalid type ' % s ' for % s , " % ( type ( v ) , k ) ) + <nl> - " should be NDArray , numpy . ndarray or h5py . Dataset " ) <nl> - <nl> - return list ( sorted ( data . items ( ) ) ) <nl> - <nl> - <nl> - def has_instance ( data , dtype ) : <nl> - " " " Return True if ` ` data ` ` has instance of ` ` dtype ` ` . <nl> - This function is called after _init_data . 
<nl> - ` ` data ` ` is a list of ( str , NDArray ) " " " <nl> - for item in data : <nl> - _ , arr = item <nl> - if isinstance ( arr , dtype ) : <nl> - return True <nl> - return False <nl> - <nl> - <nl> - def getdata_by_idx ( data , idx ) : <nl> - " " " Shuffle the data . " " " <nl> - shuffle_data = [ ] <nl> - <nl> - for k , v in data : <nl> - if ( isinstance ( v , h5py . Dataset ) if h5py else False ) : <nl> - shuffle_data . append ( ( k , v ) ) <nl> - elif isinstance ( v , CSRNDArray ) : <nl> - shuffle_data . append ( ( k , sparse_array ( v . asscipy ( ) [ idx ] , v . context ) ) ) <nl> - else : <nl> - shuffle_data . append ( ( k , array ( v . asnumpy ( ) [ idx ] , v . context ) ) ) <nl> - <nl> - return shuffle_data <nl> mmm a / tests / python / unittest / test_io . py <nl> ppp b / tests / python / unittest / test_io . py <nl> def test_Cifar10Rec ( ) : <nl> assert ( labelcount [ i ] = = 5000 ) <nl> <nl> <nl> - def _init_NDArrayIter_data ( ) : <nl> + def test_NDArrayIter ( ) : <nl> data = np . ones ( [ 1000 , 2 , 2 ] ) <nl> - labels = np . ones ( [ 1000 , 1 ] ) <nl> + label = np . ones ( [ 1000 , 1 ] ) <nl> for i in range ( 1000 ) : <nl> data [ i ] = i / 100 <nl> - labels [ i ] = i / 100 <nl> - return data , labels <nl> - <nl> - <nl> - def _test_last_batch_handle ( data , labels ) : <nl> - # Test the three parameters ' pad ' , ' discard ' , ' roll_over ' <nl> - last_batch_handle_list = [ ' pad ' , ' discard ' , ' roll_over ' ] <nl> - labelcount_list = [ ( 124 , 100 ) , ( 100 , 96 ) , ( 100 , 96 ) ] <nl> - batch_count_list = [ 8 , 7 , 7 ] <nl> - <nl> - for idx in range ( len ( last_batch_handle_list ) ) : <nl> - dataiter = mx . io . NDArrayIter ( <nl> - data , labels , 128 , False , last_batch_handle = last_batch_handle_list [ idx ] ) <nl> - batch_count = 0 <nl> - labelcount = [ 0 for i in range ( 10 ) ] <nl> - for batch in dataiter : <nl> - label = batch . label [ 0 ] . asnumpy ( ) . 
flatten ( ) <nl> - # check data if it matches corresponding labels <nl> - assert ( ( batch . data [ 0 ] . asnumpy ( ) [ : , 0 , 0 ] = = label ) . all ( ) ) , last_batch_handle_list [ idx ] <nl> - for i in range ( label . shape [ 0 ] ) : <nl> - labelcount [ int ( label [ i ] ) ] + = 1 <nl> - # keep the last batch of ' pad ' to be used later <nl> - # to test first batch of roll_over in second iteration <nl> - batch_count + = 1 <nl> - if last_batch_handle_list [ idx ] = = ' pad ' and \ <nl> - batch_count = = 8 : <nl> - cache = batch . data [ 0 ] . asnumpy ( ) <nl> - # check if batchifying functionality work properly <nl> - assert labelcount [ 0 ] = = labelcount_list [ idx ] [ 0 ] , last_batch_handle_list [ idx ] <nl> - assert labelcount [ 8 ] = = labelcount_list [ idx ] [ 1 ] , last_batch_handle_list [ idx ] <nl> - assert batch_count = = batch_count_list [ idx ] <nl> - # roll_over option <nl> - dataiter . reset ( ) <nl> - assert np . array_equal ( dataiter . next ( ) . data [ 0 ] . asnumpy ( ) , cache ) <nl> - <nl> - <nl> - def _test_shuffle ( data , labels ) : <nl> - dataiter = mx . io . NDArrayIter ( data , labels , 1 , False ) <nl> - batch_list = [ ] <nl> + label [ i ] = i / 100 <nl> + dataiter = mx . io . NDArrayIter ( <nl> + data , label , 128 , True , last_batch_handle = ' pad ' ) <nl> + batchidx = 0 <nl> for batch in dataiter : <nl> - # cache the original data <nl> - batch_list . append ( batch . data [ 0 ] . asnumpy ( ) ) <nl> - dataiter = mx . io . NDArrayIter ( data , labels , 1 , True ) <nl> - idx_list = dataiter . idx <nl> - i = 0 <nl> + batchidx + = 1 <nl> + assert ( batchidx = = 8 ) <nl> + dataiter = mx . io . NDArrayIter ( <nl> + data , label , 128 , False , last_batch_handle = ' pad ' ) <nl> + batchidx = 0 <nl> + labelcount = [ 0 for i in range ( 10 ) ] <nl> for batch in dataiter : <nl> - # check if each data point have been shuffled to corresponding positions <nl> - assert np . array_equal ( batch . data [ 0 ] . 
asnumpy ( ) , batch_list [ idx_list [ i ] ] ) <nl> - i + = 1 <nl> - <nl> + label = batch . label [ 0 ] . asnumpy ( ) . flatten ( ) <nl> + assert ( ( batch . data [ 0 ] . asnumpy ( ) [ : , 0 , 0 ] = = label ) . all ( ) ) <nl> + for i in range ( label . shape [ 0 ] ) : <nl> + labelcount [ int ( label [ i ] ) ] + = 1 <nl> <nl> - def test_NDArrayIter ( ) : <nl> - data , labels = _init_NDArrayIter_data ( ) <nl> - _test_last_batch_handle ( data , labels ) <nl> - _test_shuffle ( data , labels ) <nl> + for i in range ( 10 ) : <nl> + if i = = 0 : <nl> + assert ( labelcount [ i ] = = 124 ) <nl> + else : <nl> + assert ( labelcount [ i ] = = 100 ) <nl> <nl> <nl> def test_NDArrayIter_h5py ( ) : <nl> if not h5py : <nl> return <nl> <nl> - data , labels = _init_NDArrayIter_data ( ) <nl> + data = np . ones ( [ 1000 , 2 , 2 ] ) <nl> + label = np . ones ( [ 1000 , 1 ] ) <nl> + for i in range ( 1000 ) : <nl> + data [ i ] = i / 100 <nl> + label [ i ] = i / 100 <nl> <nl> try : <nl> - os . remove ( ' ndarraytest . h5 ' ) <nl> + os . remove ( " ndarraytest . h5 " ) <nl> except OSError : <nl> pass <nl> - with h5py . File ( ' ndarraytest . h5 ' ) as f : <nl> - f . create_dataset ( ' data ' , data = data ) <nl> - f . create_dataset ( ' label ' , data = labels ) <nl> + with h5py . File ( " ndarraytest . h5 " ) as f : <nl> + f . create_dataset ( " data " , data = data ) <nl> + f . create_dataset ( " label " , data = label ) <nl> + <nl> + dataiter = mx . io . NDArrayIter ( <nl> + f [ " data " ] , f [ " label " ] , 128 , True , last_batch_handle = ' pad ' ) <nl> + batchidx = 0 <nl> + for batch in dataiter : <nl> + batchidx + = 1 <nl> + assert ( batchidx = = 8 ) <nl> + <nl> + dataiter = mx . io . NDArrayIter ( <nl> + f [ " data " ] , f [ " label " ] , 128 , False , last_batch_handle = ' pad ' ) <nl> + labelcount = [ 0 for i in range ( 10 ) ] <nl> + for batch in dataiter : <nl> + label = batch . label [ 0 ] . asnumpy ( ) . flatten ( ) <nl> + assert ( ( batch . data [ 0 ] . 
asnumpy ( ) [ : , 0 , 0 ] = = label ) . all ( ) ) <nl> + for i in range ( label . shape [ 0 ] ) : <nl> + labelcount [ int ( label [ i ] ) ] + = 1 <nl> <nl> - _test_last_batch_handle ( f [ ' data ' ] , f [ ' label ' ] ) <nl> try : <nl> os . remove ( " ndarraytest . h5 " ) <nl> except OSError : <nl> pass <nl> <nl> + for i in range ( 10 ) : <nl> + if i = = 0 : <nl> + assert ( labelcount [ i ] = = 124 ) <nl> + else : <nl> + assert ( labelcount [ i ] = = 100 ) <nl> + <nl> <nl> def test_NDArrayIter_csr ( ) : <nl> # creating toy data <nl> def test_NDArrayIter_csr ( ) : <nl> { ' data ' : train_data } , dns , batch_size ) <nl> except ImportError : <nl> pass <nl> - # scipy . sparse . csr_matrix with shuffle <nl> - num_batch = 0 <nl> - csr_iter = iter ( mx . io . NDArrayIter ( { ' data ' : train_data } , dns , batch_size , <nl> - shuffle = True , last_batch_handle = ' discard ' ) ) <nl> - for _ in csr_iter : <nl> - num_batch + = 1 <nl> - <nl> - assert ( num_batch = = num_rows / / batch_size ) <nl> <nl> # CSRNDArray with shuffle <nl> csr_iter = iter ( mx . io . NDArrayIter ( { ' csr_data ' : csr , ' dns_data ' : dns } , dns , batch_size , <nl> shuffle = True , last_batch_handle = ' discard ' ) ) <nl> num_batch = 0 <nl> - for _ in csr_iter : <nl> + for batch in csr_iter : <nl> num_batch + = 1 <nl> <nl> assert ( num_batch = = num_rows / / batch_size ) <nl>
|
Revert "Change the way NDArrayIter handle the last batch" ()
|
apache/incubator-mxnet
|
8ff50c95201e02e849e0592de5fb7af87489be53
|
2018-09-12T20:32:57Z
|
mmm a / src / config / cmd_args . cc <nl> ppp b / src / config / cmd_args . cc <nl> void usage ( const char * name ) { <nl> printf ( " \ t % s [ OPTIONS ] [ FILE ] \ n " , name ) ; <nl> <nl> printf ( " \ nOptions : \ n " ) ; <nl> - <nl> - printf ( " - h , - - help \ t \ tPrint these usage options . \ n " ) ; <nl> - <nl> - printf ( " - - create \ t \ tCreate a new database . \ n " ) ; <nl> - printf ( " - f , - - file \ t \ tPath to file or block device where database goes . Can be \ n " <nl> - " \ t \ t \ tspecified multiple times to use multiple files . " ) ; <nl> - <nl> - printf ( " - c , - - cores \ t \ tNumber of cores to use for handling requests . \ n " ) ; <nl> - <nl> - printf ( " - m , - - max - cache - size \ tMaximum amount of RAM to use for caching disk \ n " ) ; <nl> - printf ( " \ t \ t \ tblocks , in megabytes . \ n " ) ; <nl> - <nl> - printf ( " - l , - - log - file \ tFile to log to . If not provided , messages will be printed to stderr . \ n " ) ; <nl> - printf ( " - p , - - port \ t \ tSocket port to listen on . Defaults to % d . \ n " , DEFAULT_LISTEN_PORT ) ; <nl> - printf ( " - - wait - for - flush \ tDo not respond to commands until changes are durable . Expects \ n " <nl> - " \ t \ t \ t ' y ' or ' n ' . \ n " ) ; <nl> - printf ( " - - flush - timer \ tTime in milliseconds that the server should allow changes to sit \ n " <nl> - " \ t \ t \ tin memory before flushing it to disk . Pass ' disable ' to allow modified data to \ n " <nl> - " \ t \ t \ tsit in memory indefinitely . \ n " ) ; <nl> - if ( DEFAULT_FLUSH_TIMER_MS = = NEVER_FLUSH ) printf ( " \ t \ t \ tDefaults to ' disable ' . \ n " ) ; <nl> - else printf ( " \ t \ t \ tDefaults to % dms . \ n " , DEFAULT_FLUSH_TIMER_MS ) ; <nl> - printf ( " - - flush - threshold \ tIf more than X % % of the server ' s maximum cache size is \ n " <nl> - " \ t \ t \ tmodified data , the server will flush it all to disk . Pass 0 to flush \ n " <nl> - " \ t \ t \ timmediately when changes are made . 
\ n " ) ; <nl> - <nl> + / / " 24 characters start here . <nl> + printf ( " - h , - - help Print these usage options . \ n " ) ; <nl> + printf ( " - - create Create a new database . \ n " ) ; <nl> + printf ( " - f , - - file Path to file or block device where database goes . Can be \ n " <nl> + " specified multiple times to use multiple files . \ n " ) ; <nl> + printf ( " - c , - - cores Number of cores to use for handling requests . \ n " ) ; <nl> + printf ( " - m , - - max - cache - size Maximum amount of RAM to use for caching disk \ n " ) ; <nl> + printf ( " blocks , in megabytes . \ n " ) ; <nl> + printf ( " - l , - - log - file File to log to . If not provided , messages will be printed to stderr . \ n " ) ; <nl> + printf ( " - p , - - port Socket port to listen on . Defaults to % d . \ n " , DEFAULT_LISTEN_PORT ) ; <nl> + printf ( " - - wait - for - flush Do not respond to commands until changes are durable . Expects \ n " <nl> + " ' y ' or ' n ' . \ n " ) ; <nl> + printf ( " - - flush - timer Time in milliseconds that the server should allow changes to sit \ n " <nl> + " in memory before flushing it to disk . Pass ' disable ' to allow modified data to \ n " <nl> + " sit in memory indefinitely . \ n " ) ; <nl> + if ( DEFAULT_FLUSH_TIMER_MS = = NEVER_FLUSH ) { <nl> + printf ( " Defaults to ' disable ' . \ n " ) ; <nl> + } <nl> + else { <nl> + printf ( " Defaults to % dms . \ n " , DEFAULT_FLUSH_TIMER_MS ) ; <nl> + } <nl> + printf ( " - - flush - threshold If more than X % % of the server ' s maximum cache size is \ n " <nl> + " modified data , the server will flush it all to disk . Pass 0 to flush \ n " <nl> + " immediately when changes are made . \ n " ) ; <nl> printf ( " - - gc - range low - high ( e . g . - - gc - range 0 . 5 - 0 . 75 ) \ n " <nl> - " \ t \ t \ tThe proportion of garbage maintained by garbage collection . \ n " ) ; <nl> - printf ( " - - active - data - extents \ t \ tHow many places in the file to write to at once . 
\ n " ) ; <nl> - <nl> + " The proportion of garbage maintained by garbage collection . \ n " ) ; <nl> + printf ( " - - active - data - extents \ n " <nl> + " How many places in the file to write to at once . \ n " ) ; <nl> printf ( " \ nOptions for new databases : \ n " ) ; <nl> - printf ( " - s , - - slices \ t \ tShards total . \ n " ) ; <nl> - printf ( " - - block - size \ t \ tSize of a block , in bytes . \ n " ) ; <nl> - printf ( " - - extent - size \ t \ tSize of an extent , in bytes . \ n " ) ; <nl> + printf ( " - s , - - slices Shards total . \ n " ) ; <nl> + printf ( " - - block - size Size of a block , in bytes . \ n " ) ; <nl> + printf ( " - - extent - size Size of an extent , in bytes . \ n " ) ; <nl> <nl> exit ( - 1 ) ; <nl> } <nl>
|
Adjusted help message to use spaces , not tabs .
|
rethinkdb/rethinkdb
|
3f0a4f10d549ab3cb75354f74ef4972a8458c414
|
2010-11-03T22:23:09Z
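The diff above swaps tab-based alignment for a fixed 24-character option column built from spaces (per the `" 24 characters start here"` marker comment). A minimal sketch of that technique, assuming a helper name and buffer size of our own choosing rather than anything in the RethinkDB source:

```cpp
#include <cstdio>
#include <string>

// Format one usage line: the flag is left-justified in a 24-character
// column so every description starts at the same offset, with no tabs.
std::string help_line(const std::string &flag, const std::string &desc) {
    char buf[256];
    std::snprintf(buf, sizeof(buf), "  %-24s%s", flag.c_str(), desc.c_str());
    return std::string(buf);
}
```

Flags that approach or exceed the column width overflow it, which is presumably why the diff moves the `--active-data-extents` description onto its own line.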
|
new file mode 100644 <nl> index 000000000000 . . 5150fbe1c724 <nl> mmm / dev / null <nl> ppp b / doc / release - notes - 19133 . md <nl> <nl> + # # CLI <nl> + <nl> + A new ` bitcoin - cli - generate ` command , equivalent to RPC ` generatenewaddress ` <nl> + followed by ` generatetoaddress ` , can generate blocks for command line testing <nl> + purposes . This is a client - side version of the <nl> + [ former ] ( https : / / github . com / bitcoin / bitcoin / issues / 14299 ) ` generate ` RPC . See <nl> + the help for details . ( # 19133 ) <nl>
|
doc : add release note for bitcoin - cli - generate
|
bitcoin/bitcoin
|
9886c7d98d3386dfdd3ecce1e6189c49ff1c3a65
|
2020-06-23T05:09:27Z
|
mmm a / PowerEditor / src / Notepad_plus . cpp <nl> ppp b / PowerEditor / src / Notepad_plus . cpp <nl> ToolBarButtonUnit toolBarIcons [ ] = { <nl> { IDM_VIEW_ALL_CHARACTERS , IDI_VIEW_ALL_CHAR_OFF_ICON , IDI_VIEW_ALL_CHAR_ON_ICON , IDI_VIEW_ALL_CHAR_OFF_ICON , IDR_INVISIBLECHAR } , <nl> { IDM_VIEW_INDENT_GUIDE , IDI_VIEW_INDENT_OFF_ICON , IDI_VIEW_INDENT_ON_ICON , IDI_VIEW_INDENT_OFF_ICON , IDR_INDENTGUIDE } , <nl> { IDM_LANG_USER_DLG , IDI_VIEW_UD_DLG_OFF_ICON , IDI_VIEW_UD_DLG_ON_ICON , IDI_VIEW_UD_DLG_OFF_ICON , IDR_SHOWPANNEL } , <nl> - { IDM_VIEW_DOC_MAP , IDI_VIEW_UD_DLG_OFF_ICON , IDI_VIEW_UD_DLG_ON_ICON , IDI_VIEW_UD_DLG_OFF_ICON , IDR_DOCMAP } , <nl> - { IDM_VIEW_FUNC_LIST , IDI_VIEW_UD_DLG_OFF_ICON , IDI_VIEW_UD_DLG_ON_ICON , IDI_VIEW_UD_DLG_OFF_ICON , IDR_FUNC_LIST } , <nl> - { IDM_VIEW_FILEBROWSER , IDI_VIEW_UD_DLG_OFF_ICON , IDI_VIEW_UD_DLG_ON_ICON , IDI_VIEW_UD_DLG_OFF_ICON , IDR_FILEBROWSER } , <nl> - { IDM_VIEW_MONITORING , IDI_VIEW_UD_DLG_OFF_ICON , IDI_VIEW_UD_DLG_ON_ICON , IDI_VIEW_UD_DLG_OFF_ICON , IDR_FILEMONITORING } , <nl> + { IDM_VIEW_DOC_MAP , IDI_VIEW_DOC_MAP_OFF_ICON , IDI_VIEW_DOC_MAP_ON_ICON , IDI_VIEW_DOC_MAP_OFF_ICON , IDR_DOCMAP } , <nl> + / / { IDM_VIEW_FUNC_LIST , IDI_VIEW_UD_DLG_OFF_ICON , IDI_VIEW_UD_DLG_ON_ICON , IDI_VIEW_UD_DLG_OFF_ICON , IDR_FUNC_LIST } , <nl> + { IDM_VIEW_FUNC_LIST , IDI_VIEW_FUNCLIST_OFF_ICON , IDI_VIEW_FUNCLIST_ON_ICON , IDI_VIEW_FUNCLIST_OFF_ICON , IDR_FUNC_LIST } , <nl> + { IDM_VIEW_FILEBROWSER , IDI_VIEW_FILEBROWSER_OFF_ICON , IDI_VIEW_FILEBROWSER_ON_ICON , IDI_VIEW_FILEBROWSER_OFF_ICON , IDR_FILEBROWSER } , <nl> + { IDM_VIEW_MONITORING , IDI_VIEW_MONITORING_OFF_ICON , IDI_VIEW_MONITORING_ON_ICON , IDI_VIEW_MONITORING_OFF_ICON , IDR_FILEMONITORING } , <nl> <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - / / <nl> { 0 , IDI_SEPARATOR_ICON , IDI_SEPARATOR_ICON , IDI_SEPARATOR_ICON , IDI_SEPARATOR_ICON } , <nl> mmm a / PowerEditor / src / Notepad_plus . 
rc <nl> ppp b / PowerEditor / src / Notepad_plus . rc <nl> IDI_FUNCLIST_LEAF BITMAP " icons / funcList_leaf . bmp " <nl> IDI_FUNCLIST_SORTBUTTON BITMAP " icons / funclstSort . bmp " <nl> IDI_FUNCLIST_RELOADBUTTON BITMAP " icons / funclstReload . bmp " <nl> <nl> + <nl> + IDI_VIEW_DOC_MAP_ON_ICON ICON " icons / docMap_on . ico " <nl> + IDI_VIEW_DOC_MAP_OFF_ICON ICON " icons / docMap_off . ico " <nl> + IDI_VIEW_FUNCLIST_ON_ICON ICON " icons / funcList_on . ico " <nl> + IDI_VIEW_FUNCLIST_OFF_ICON ICON " icons / funcList_off . ico " <nl> + IDI_VIEW_FILEBROWSER_ON_ICON ICON " icons / fileBrowser_on . ico " <nl> + IDI_VIEW_FILEBROWSER_OFF_ICON ICON " icons / fileBrowser_off . ico " <nl> + IDI_VIEW_MONITORING_ON_ICON ICON " icons / monitoring_on . ico " <nl> + IDI_VIEW_MONITORING_OFF_ICON ICON " icons / monitoring_off . ico " <nl> + <nl> + <nl> IDR_M30_MENU MENU <nl> BEGIN <nl> POPUP " & File " <nl> new file mode 100644 <nl> index 0000000000 . . c9bd485f04 <nl> Binary files / dev / null and b / PowerEditor / src / icons / docMap_off . ico differ <nl> new file mode 100644 <nl> index 0000000000 . . 34c35a7407 <nl> Binary files / dev / null and b / PowerEditor / src / icons / docMap_on . ico differ <nl> new file mode 100644 <nl> index 0000000000 . . 091f44b811 <nl> Binary files / dev / null and b / PowerEditor / src / icons / fileBrowser_off . ico differ <nl> new file mode 100644 <nl> index 0000000000 . . fd065a278e <nl> Binary files / dev / null and b / PowerEditor / src / icons / fileBrowser_on . ico differ <nl> new file mode 100644 <nl> index 0000000000 . . 859d5eba70 <nl> Binary files / dev / null and b / PowerEditor / src / icons / funcList_off . ico differ <nl> new file mode 100644 <nl> index 0000000000 . . 286e7740d6 <nl> Binary files / dev / null and b / PowerEditor / src / icons / funcList_on . ico differ <nl> new file mode 100644 <nl> index 0000000000 . . aed126c86f <nl> Binary files / dev / null and b / PowerEditor / src / icons / monitoring_off . 
ico differ <nl> new file mode 100644 <nl> index 0000000000 . . 0699a4c4a0 <nl> Binary files / dev / null and b / PowerEditor / src / icons / monitoring_on . ico differ <nl> mmm a / PowerEditor / src / resource . h <nl> ppp b / PowerEditor / src / resource . h <nl> <nl> # define IDI_FUNCLIST_SORTBUTTON 631 <nl> # define IDI_FUNCLIST_RELOADBUTTON 632 <nl> <nl> + <nl> + # define IDI_VIEW_DOC_MAP_ON_ICON 633 <nl> + # define IDI_VIEW_DOC_MAP_OFF_ICON 634 <nl> + # define IDI_VIEW_FILEBROWSER_ON_ICON 635 <nl> + # define IDI_VIEW_FILEBROWSER_OFF_ICON 636 <nl> + # define IDI_VIEW_FUNCLIST_ON_ICON 637 <nl> + # define IDI_VIEW_FUNCLIST_OFF_ICON 638 <nl> + # define IDI_VIEW_MONITORING_ON_ICON 639 <nl> + # define IDI_VIEW_MONITORING_OFF_ICON 640 <nl> + <nl> + <nl> + <nl> # define IDC_MY_CUR 1402 <nl> # define IDC_UP_ARROW 1403 <nl> # define IDC_DRAG_TAB 1404 <nl>
|
Fix toolbar display bug in big icon mode issue
|
notepad-plus-plus/notepad-plus-plus
|
0a6b19fedce7eb43742400ec75beb87163f6cd53
|
2018-05-25T07:53:32Z
|
mmm a / PowerEditor / src / MISC / Common / Sorters . h <nl> ppp b / PowerEditor / src / MISC / Common / Sorters . h <nl> class NaturalSorter : public ISorter <nl> else if ( aChunkIsNum ) <nl> { <nl> size_t delta = 0 ; <nl> - compareResult = std : : stoll ( a . substr ( i ) ) - std : : stoll ( b . substr ( i ) , & delta ) ; <nl> + <nl> + / / stoll crashes if number exceeds the limit for unsigned long long <nl> + / / Maximum value for a variable of type unsigned long long | 18446744073709551615 <nl> + / / So take the max length 18 to convert the number <nl> + const size_t maxLen = 18 ; <nl> + size_t aLen = a . length ( ) - i , bLen = b . length ( ) - i ; <nl> + if ( aLen > maxLen | | bLen > maxLen ) <nl> + { <nl> + delta = min ( min ( aLen , bLen ) , maxLen ) ; <nl> + compareResult = std : : stoll ( a . substr ( i , delta ) ) - std : : stoll ( b . substr ( i , delta ) ) ; <nl> + } <nl> + else <nl> + { <nl> + compareResult = std : : stoll ( a . substr ( i ) ) - std : : stoll ( b . substr ( i ) , & delta ) ; <nl> + } <nl> i + = delta ; <nl> } <nl> / / Both are strings <nl> class NaturalSorter : public ISorter <nl> else if ( aChunkIsNum ) <nl> { <nl> size_t delta = 0 ; <nl> - compareResult = std : : stoll ( a . substr ( i ) ) - std : : stoll ( b . substr ( i ) , & delta ) ; <nl> + <nl> + / / stoll crashes if number exceeds the limit for unsigned long long <nl> + / / Maximum value for a variable of type unsigned long long | 18446744073709551615 <nl> + / / So take the max length 18 to convert the number <nl> + const size_t maxLen = 18 ; <nl> + size_t aLen = a . length ( ) - i , bLen = b . length ( ) - i ; <nl> + if ( aLen > maxLen | | bLen > maxLen ) <nl> + { <nl> + delta = min ( min ( aLen , bLen ) , maxLen ) ; <nl> + compareResult = std : : stoll ( a . substr ( i , delta ) ) - std : : stoll ( b . substr ( i , delta ) ) ; <nl> + } <nl> + else <nl> + { <nl> + compareResult = std : : stoll ( a . substr ( i ) ) - std : : stoll ( b . 
substr ( i ) , & delta ) ; <nl> + } <nl> i + = delta ; <nl> } <nl> / / Both are strings <nl>
|
Fix crash while sorting lines with numbers longer than 20 digits
|
notepad-plus-plus/notepad-plus-plus
|
ff20c264df4167943fff6247fec4b0c0ce6227fb
|
2019-05-30T15:26:22Z
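For context, `std::stoll` throws `std::out_of_range` once a digit run no longer fits in a signed 64-bit value, which is the failure the commit guards against by capping the parsed prefix at 18 digits (every 18-digit decimal number fits). A self-contained sketch of that comparison; the helper name is illustrative and not part of the Notepad++ sorter API:

```cpp
#include <algorithm>
#include <string>

// Compare two strings of decimal digits numerically without letting
// std::stoll overflow: runs longer than 18 digits are compared by a
// common truncated prefix, mirroring the guard in the diff above.
long long digit_chunk_compare(const std::string &a, const std::string &b) {
    const size_t maxLen = 18;  // 18 digits always fit in a signed 64-bit value
    if (a.length() > maxLen || b.length() > maxLen) {
        size_t n = std::min(std::min(a.length(), b.length()), maxLen);
        return std::stoll(a.substr(0, n)) - std::stoll(b.substr(0, n));
    }
    return std::stoll(a) - std::stoll(b);
}
```

Note the trade-off the original fix accepts: two long runs that agree in their first 18 digits compare as equal, so ordering beyond that prefix is no longer strictly numeric.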
|
mmm a / db / db_filesnapshot . cc <nl> ppp b / db / db_filesnapshot . cc <nl> Status DBImpl : : EnableFileDeletions ( bool force ) { <nl> } <nl> <nl> int DBImpl : : IsFileDeletionsEnabled ( ) const { <nl> - return disable_delete_obsolete_files_ ; <nl> + return ! disable_delete_obsolete_files_ ; <nl> } <nl> <nl> Status DBImpl : : GetLiveFiles ( std : : vector < std : : string > & ret , <nl> mmm a / db / db_properties_test . cc <nl> ppp b / db / db_properties_test . cc <nl> TEST_F ( DBPropertiesTest , Empty ) { <nl> ASSERT_OK ( db_ - > DisableFileDeletions ( ) ) ; <nl> ASSERT_TRUE ( <nl> dbfull ( ) - > GetProperty ( " rocksdb . is - file - deletions - enabled " , & num ) ) ; <nl> - ASSERT_EQ ( " 1 " , num ) ; <nl> + ASSERT_EQ ( " 0 " , num ) ; <nl> <nl> ASSERT_OK ( db_ - > DisableFileDeletions ( ) ) ; <nl> ASSERT_TRUE ( <nl> dbfull ( ) - > GetProperty ( " rocksdb . is - file - deletions - enabled " , & num ) ) ; <nl> - ASSERT_EQ ( " 2 " , num ) ; <nl> + ASSERT_EQ ( " 0 " , num ) ; <nl> <nl> ASSERT_OK ( db_ - > DisableFileDeletions ( ) ) ; <nl> ASSERT_TRUE ( <nl> dbfull ( ) - > GetProperty ( " rocksdb . is - file - deletions - enabled " , & num ) ) ; <nl> - ASSERT_EQ ( " 3 " , num ) ; <nl> + ASSERT_EQ ( " 0 " , num ) ; <nl> <nl> ASSERT_OK ( db_ - > EnableFileDeletions ( false ) ) ; <nl> ASSERT_TRUE ( <nl> dbfull ( ) - > GetProperty ( " rocksdb . is - file - deletions - enabled " , & num ) ) ; <nl> - ASSERT_EQ ( " 2 " , num ) ; <nl> + ASSERT_EQ ( " 0 " , num ) ; <nl> <nl> ASSERT_OK ( db_ - > EnableFileDeletions ( ) ) ; <nl> ASSERT_TRUE ( <nl> dbfull ( ) - > GetProperty ( " rocksdb . is - file - deletions - enabled " , & num ) ) ; <nl> - ASSERT_EQ ( " 0 " , num ) ; <nl> + ASSERT_EQ ( " 1 " , num ) ; <nl> } while ( ChangeOptions ( ) ) ; <nl> } <nl> <nl>
|
Fix behavior that does not match the name " IsFileDeletionsEnabled "
|
facebook/rocksdb
|
70282cf876597ae2d51bf9838dccd2b214448c02
|
2018-03-22T05:13:34Z
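The one-line fix above is an inverted-predicate bug: a getter named `Is...Enabled` returned the disable flag itself, so the property read back the opposite of its name, and because the flag is a nesting counter, non-boolean values like "2" leaked into the property string, as the old test expectations show. A minimal sketch of the corrected shape; the class and member names are illustrative stand-ins, not the RocksDB `DBImpl` API:

```cpp
// Deletions are enabled only while no caller holds a "disable";
// Disable/Enable calls nest via a counter.
class FileDeletionGuard {
  int disable_count_ = 0;

 public:
  void Disable() { ++disable_count_; }
  void Enable(bool force) {
    if (force)
      disable_count_ = 0;       // drop all outstanding disables at once
    else if (disable_count_ > 0)
      --disable_count_;         // release one level of nesting
  }
  // The fixed accessor: test the counter against zero instead of
  // returning it, so "is enabled" holds exactly when nothing disabled it.
  bool IsFileDeletionsEnabled() const { return disable_count_ == 0; }
};
```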
|
mmm a / include / osquery / filesystem . h <nl> ppp b / include / osquery / filesystem . h <nl> const std : : string kSQLGlobRecursive = kSQLGlobWildcard + kSQLGlobWildcard ; <nl> * @ param path the path of the file that you would like to read . <nl> * @ param content a reference to a string which will be populated with the <nl> * contents of the path indicated by the path parameter . <nl> + * @ param dry_run do not actually read the file content . <nl> * <nl> * @ return an instance of Status , indicating success or failure . <nl> * / <nl> - Status readFile ( const boost : : filesystem : : path & path , std : : string & content ) ; <nl> + Status readFile ( const boost : : filesystem : : path & path , <nl> + std : : string & content , <nl> + bool dry_run = false ) ; <nl> + <nl> + / * * <nl> + * @ brief Return the status of an attempted file read . <nl> + * <nl> + * @ param path the path of the file that you would like to read . <nl> + * <nl> + * @ return success iff the file would have been read . On success the status <nl> + * message is the complete / absolute path . <nl> + * / <nl> + Status readFile ( const boost : : filesystem : : path & path ) ; <nl> <nl> / * * <nl> * @ brief Write text to disk . <nl> mmm a / osquery / core / hash . cpp <nl> ppp b / osquery / core / hash . cpp <nl> <nl> # include < iomanip > <nl> # include < sstream > <nl> <nl> + # include < osquery / filesystem . h > <nl> # include < osquery / hash . h > <nl> # include < osquery / logger . h > <nl> <nl> std : : string hashFromBuffer ( HashType hash_type , const void * buffer , size_t size ) <nl> } <nl> <nl> std : : string hashFromFile ( HashType hash_type , const std : : string & path ) { <nl> - Hash hash ( hash_type ) ; <nl> + / / Perform a dry - run of a file read without filling in any content . <nl> + auto status = readFile ( path ) ; <nl> + if ( ! status . ok ( ) ) { <nl> + return " " ; <nl> + } <nl> <nl> - FILE * file = fopen ( path . 
c_str ( ) , " rb " ) ; <nl> + Hash hash ( hash_type ) ; <nl> + / / Use the canonicalized path returned from a successful readFile dry - run . <nl> + FILE * file = fopen ( status . what ( ) . c_str ( ) , " rb " ) ; <nl> if ( file = = nullptr ) { <nl> VLOG ( 1 ) < < " Cannot hash / open file " < < path ; <nl> return " " ; <nl> mmm a / osquery / filesystem / filesystem . cpp <nl> ppp b / osquery / filesystem / filesystem . cpp <nl> Status writeTextFile ( const fs : : path & path , <nl> return Status ( 0 , " OK " ) ; <nl> } <nl> <nl> - Status readFile ( const fs : : path & path , std : : string & content ) { <nl> + Status readFile ( const fs : : path & path , std : : string & content , bool dry_run ) { <nl> struct stat file ; <nl> - if ( lstat ( path . string ( ) . c_str ( ) , & file ) = = 0 ) { <nl> + if ( lstat ( path . string ( ) . c_str ( ) , & file ) = = 0 & & S_ISLNK ( file . st_mode ) ) { <nl> if ( file . st_uid ! = 0 & & ! FLAGS_read_user_links ) { <nl> return Status ( 1 , " User link reads disabled " ) ; <nl> } <nl> Status readFile ( const fs : : path & path , std : : string & content ) { <nl> return Status ( 1 , " File exceeds read limits " ) ; <nl> } <nl> <nl> + if ( dry_run ) { <nl> + / / The caller is only interested in performing file read checks . <nl> + boost : : system : : error_code ec ; <nl> + return Status ( 0 , fs : : canonical ( path , ec ) . string ( ) ) ; <nl> + } <nl> + <nl> if ( size = = - 1 | | size = = 0 ) { <nl> / / Size could not be determined . This may be a special device . <nl> std : : stringstream buffer ; <nl> Status readFile ( const fs : : path & path , std : : string & content ) { <nl> return Status ( 0 , " OK " ) ; <nl> } <nl> <nl> + Status readFile ( const fs : : path & path ) { <nl> + std : : string blank ; <nl> + return readFile ( path , blank , true ) ; <nl> + } <nl> + <nl> Status isWritable ( const fs : : path & path ) { <nl> auto path_exists = pathExists ( path ) ; <nl> if ( ! path_exists . 
ok ( ) ) { <nl> mmm a / osquery / filesystem / tests / filesystem_tests . cpp <nl> ppp b / osquery / filesystem / tests / filesystem_tests . cpp <nl> TEST_F ( FilesystemTests , test_read_limit ) { <nl> FLAGS_read_user_max = user_max ; <nl> <nl> / / Test that user symlinks aren ' t followed if configured . <nl> + / / ' root2 . txt ' is a symlink in this case . <nl> FLAGS_read_user_links = false ; <nl> content . erase ( ) ; <nl> status = readFile ( kFakeDirectory + " / root2 . txt " , content ) ; <nl> EXPECT_FALSE ( status . ok ( ) ) ; <nl> <nl> - / / But they are read if enabled . <nl> + / / Make sure non - link files are still readable . <nl> + content . erase ( ) ; <nl> + status = readFile ( kFakeDirectory + " / root . txt " , content ) ; <nl> + EXPECT_TRUE ( status . ok ( ) ) ; <nl> + <nl> + / / Any the links are readable if enabled . <nl> FLAGS_read_user_links = true ; <nl> status = readFile ( kFakeDirectory + " / root2 . txt " , content ) ; <nl> EXPECT_TRUE ( status . ok ( ) ) ; <nl>
|
Merge pull request from theopolis / fix_1332
|
osquery/osquery
|
7d463180f9bcc2f1458c50a5e1999f8300dbbbde
|
2015-07-15T02:21:28Z
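The pattern in this commit is a "dry run" flag plus a convenience overload: callers that only need to know whether a read would succeed reuse the full validation path, and on success the canonicalized path travels back in the status message. A simplified sketch of that shape, with an illustrative `Status` struct and stand-in checks rather than osquery's real types:

```cpp
#include <string>

struct Status {
  int code;
  std::string msg;  // on a dry-run success, carries the resolved path
  bool ok() const { return code == 0; }
  const std::string &what() const { return msg; }
};

// Full read: run all access checks, then fill `content` unless this is
// a dry run, in which case report success plus the resolved path.
Status readFile(const std::string &path, std::string &content,
                bool dry_run = false) {
  if (path.empty()) return {1, "invalid path"};  // stand-in for real checks
  if (dry_run) return {0, path};                 // checks only, no I/O
  content = "<file bytes>";                      // stand-in for the real read
  return {0, "OK"};
}

// Check-only overload: same validation, throwaway buffer.
Status readFile(const std::string &path) {
  std::string blank;
  return readFile(path, blank, true);
}
```

`hashFromFile` above uses the dry run the same way: it validates first, then opens the canonicalized path returned in the status message.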
|
mmm a / include / swift / SIL / SILLocation . h <nl> ppp b / include / swift / SIL / SILLocation . h <nl> class RegularLocation : public SILLocation { <nl> return AL ; <nl> } <nl> <nl> + / / / Returns a location that is compiler - generated , but with a hint as to where <nl> + / / / it may have been generated from . These locations will have an artificial <nl> + / / / line location of zero in DWARF , but in CodeView we want to use the given <nl> + / / / line since line zero does not represent an artificial line in CodeView . <nl> + static RegularLocation getAutoGeneratedLocation ( SourceLoc L ) { <nl> + RegularLocation AL ( L ) ; <nl> + AL . markAutoGenerated ( ) ; <nl> + return AL ; <nl> + } <nl> + <nl> private : <nl> RegularLocation ( ) : SILLocation ( RegularKind ) { } <nl> <nl> mmm a / lib / IRGen / IRGenDebugInfo . cpp <nl> ppp b / lib / IRGen / IRGenDebugInfo . cpp <nl> class IRGenDebugInfoImpl : public IRGenDebugInfo { <nl> void clearLoc ( IRBuilder & Builder ) ; <nl> void pushLoc ( ) ; <nl> void popLoc ( ) ; <nl> + void setInlinedTrapLocation ( IRBuilder & Builder , const SILDebugScope * Scope ) ; <nl> void setEntryPointLoc ( IRBuilder & Builder ) ; <nl> llvm : : DIScope * getEntryPointFn ( ) ; <nl> llvm : : DIScope * getOrCreateScope ( const SILDebugScope * DS ) ; <nl> class IRGenDebugInfoImpl : public IRGenDebugInfo { <nl> / / / Decode ( and cache ) a SourceLoc . <nl> SILLocation : : DebugLoc decodeSourceLoc ( SourceLoc SL ) ; <nl> <nl> + IRGenDebugInfoFormat getDebugInfoFormat ( ) { return Opts . DebugInfoFormat ; } <nl> + <nl> private : <nl> static StringRef getFilenameFromDC ( const DeclContext * DC ) { <nl> if ( auto LF = dyn_cast < LoadedFile > ( DC ) ) <nl> class IRGenDebugInfoImpl : public IRGenDebugInfo { <nl> return decodeSourceLoc ( OptLoc - > getStartSourceLoc ( ) ) ; <nl> } <nl> <nl> + SILLocation : : DebugLoc sanitizeCodeViewDebugLoc ( SILLocation : : DebugLoc DLoc ) { <nl> + if ( Opts . 
DebugInfoFormat = = IRGenDebugInfoFormat : : CodeView ) <nl> + / / When WinDbg finds two locations with the same line but different <nl> + / / columns , the user must select an address when they break on that <nl> + / / line . Also , clang does not emit column locations in CodeView for C + + . <nl> + DLoc . Column = 0 ; <nl> + return DLoc ; <nl> + } <nl> + <nl> SILLocation : : DebugLoc decodeDebugLoc ( SILLocation Loc ) { <nl> if ( Loc . isDebugInfoLoc ( ) ) <nl> - return Loc . getDebugInfoLoc ( ) ; <nl> + return sanitizeCodeViewDebugLoc ( Loc . getDebugInfoLoc ( ) ) ; <nl> return decodeSourceLoc ( Loc . getDebugSourceLoc ( ) ) ; <nl> } <nl> <nl> SILLocation : : DebugLoc getDebugLocation ( Optional < SILLocation > OptLoc ) { <nl> - if ( ! OptLoc | | OptLoc - > isInPrologue ( ) ) <nl> + if ( ! OptLoc | | ( Opts . DebugInfoFormat ! = IRGenDebugInfoFormat : : CodeView & & <nl> + OptLoc - > isInPrologue ( ) ) ) <nl> return { } ; <nl> return decodeDebugLoc ( * OptLoc ) ; <nl> } <nl> void IRGenDebugInfoImpl : : setCurrentLoc ( IRBuilder & Builder , <nl> / / Reuse the last source location if we are still in the same <nl> / / scope to get a more contiguous line table . <nl> L = LastDebugLoc ; <nl> + } else if ( DS = = LastScope & & Loc . is < ArtificialUnreachableLocation > ( ) & & <nl> + Opts . DebugInfoFormat = = IRGenDebugInfoFormat : : CodeView ) { <nl> + / / Remove unreachable locations with line zero because line zero does not <nl> + / / represent an artificial location in CodeView . <nl> + L = LastDebugLoc ; <nl> } else { <nl> / / Decode the location . <nl> L = getDebugLocation ( Loc ) ; <nl> / / Otherwise use a line 0 artificial location , but the file from the <nl> - / / location . <nl> - if ( Loc . isAutoGenerated ( ) ) { <nl> + / / location . If we are emitting CodeView , we do not want to use line zero <nl> + / / since it does not represent an artificial line location . <nl> + if ( Loc . isAutoGenerated ( ) & & <nl> + Opts . DebugInfoFormat ! 
= IRGenDebugInfoFormat : : CodeView ) { <nl> L . Line = 0 ; <nl> L . Column = 0 ; <nl> } <nl> void IRGenDebugInfoImpl : : popLoc ( ) { <nl> std : : tie ( LastDebugLoc , LastScope ) = LocationStack . pop_back_val ( ) ; <nl> } <nl> <nl> + / / / This is done for WinDbg to avoid having two non - contiguous sets of <nl> + / / / instructions because the ` ` @ llvm . trap ` ` instruction gets placed at the end <nl> + / / / of the function . <nl> + void IRGenDebugInfoImpl : : setInlinedTrapLocation ( IRBuilder & Builder , <nl> + const SILDebugScope * Scope ) { <nl> + if ( Opts . DebugInfoFormat ! = IRGenDebugInfoFormat : : CodeView ) <nl> + return ; <nl> + auto DLInlinedAt = llvm : : DebugLoc : : get ( LastDebugLoc . Line , LastDebugLoc . Column , <nl> + getOrCreateScope ( LastScope ) ) ; <nl> + / / FIXME : This location should point to stdlib instead of being artificial . <nl> + auto DL = llvm : : DebugLoc : : get ( 0 , 0 , getOrCreateScope ( Scope ) , DLInlinedAt ) ; <nl> + Builder . SetCurrentDebugLocation ( DL ) ; <nl> + } <nl> + <nl> void IRGenDebugInfoImpl : : setEntryPointLoc ( IRBuilder & Builder ) { <nl> auto DL = llvm : : DebugLoc : : get ( 0 , 0 , getEntryPointFn ( ) , nullptr ) ; <nl> Builder . SetCurrentDebugLocation ( DL ) ; <nl> void IRGenDebugInfoImpl : : emitTypeMetadata ( IRGenFunction & IGF , <nl> SILLocation : : DebugLoc IRGenDebugInfoImpl : : decodeSourceLoc ( SourceLoc SL ) { <nl> auto & Cached = DebugLocCache [ SL . getOpaquePointerValue ( ) ] ; <nl> if ( Cached . Filename . 
empty ( ) ) <nl> - Cached = SILLocation : : decode ( SL , SM ) ; <nl> + Cached = sanitizeCodeViewDebugLoc ( SILLocation : : decode ( SL , SM ) ) ; <nl> return Cached ; <nl> } <nl> <nl> void IRGenDebugInfo : : popLoc ( ) { <nl> static_cast < IRGenDebugInfoImpl * > ( this ) - > popLoc ( ) ; <nl> } <nl> <nl> + void IRGenDebugInfo : : setInlinedTrapLocation ( IRBuilder & Builder , <nl> + const SILDebugScope * Scope ) { <nl> + static_cast < IRGenDebugInfoImpl * > ( this ) - > setInlinedTrapLocation ( Builder , <nl> + Scope ) ; <nl> + } <nl> + <nl> void IRGenDebugInfo : : setEntryPointLoc ( IRBuilder & Builder ) { <nl> static_cast < IRGenDebugInfoImpl * > ( this ) - > setEntryPointLoc ( Builder ) ; <nl> } <nl> ArtificialLocation : : ArtificialLocation ( const SILDebugScope * DS , <nl> IRGenDebugInfo * DI , IRBuilder & Builder ) <nl> : AutoRestoreLocation ( DI , Builder ) { <nl> if ( DI ) { <nl> - auto DL = llvm : : DebugLoc : : get ( 0 , 0 , DI - > getOrCreateScope ( DS ) ) ; <nl> + unsigned Line = 0 ; <nl> + auto * Scope = DI - > getOrCreateScope ( DS ) ; <nl> + if ( static_cast < IRGenDebugInfoImpl * > ( DI ) - > getDebugInfoFormat ( ) = = <nl> + IRGenDebugInfoFormat : : CodeView ) { <nl> + / / In CodeView , line zero is not an artificial line location and so we <nl> + / / try to use the location of the scope . <nl> + if ( auto * LB = dyn_cast < llvm : : DILexicalBlock > ( Scope ) ) <nl> + Line = LB - > getLine ( ) ; <nl> + else if ( auto * SP = dyn_cast < llvm : : DISubprogram > ( Scope ) ) <nl> + Line = SP - > getLine ( ) ; <nl> + } <nl> + auto DL = llvm : : DebugLoc : : get ( Line , 0 , Scope ) ; <nl> Builder . SetCurrentDebugLocation ( DL ) ; <nl> } <nl> } <nl> mmm a / lib / IRGen / IRGenDebugInfo . h <nl> ppp b / lib / IRGen / IRGenDebugInfo . h <nl> class IRGenDebugInfo { <nl> / / / Restore the current debug location from the stack . 
<nl> void popLoc ( ) ; <nl> <nl> - / / / Emit the final line 0 location for the unified trap block at the <nl> - / / / end of the function . <nl> - void setArtificialTrapLocation ( IRBuilder & Builder , <nl> - const SILDebugScope * Scope ) ; <nl> + / / / If we are not emitting CodeView , this does nothing since the ` ` llvm . trap ` ` <nl> + / / / instructions should already have an artificial location of zero . <nl> + / / / In CodeView , since zero is not an artificial location , we emit the <nl> + / / / location of the unified trap block at the end of the function as an <nl> + / / / artificial inline location pointing to the user ' s instruction . <nl> + void setInlinedTrapLocation ( IRBuilder & Builder , const SILDebugScope * Scope ) ; <nl> <nl> / / / Set the location for SWIFT_ENTRY_POINT_FUNCTION . <nl> void setEntryPointLoc ( IRBuilder & Builder ) ; <nl> mmm a / lib / IRGen / IRGenSIL . cpp <nl> ppp b / lib / IRGen / IRGenSIL . cpp <nl> class IRGenSILFunction : <nl> SILType Type = SILVal - > getType ( ) ; <nl> auto & LTI = cast < LoadableTypeInfo > ( IGM . getTypeInfo ( Type ) ) ; <nl> auto Alloca = LTI . allocateStack ( * this , Type , " debug . copy " ) ; <nl> + ArtificialLocation AutoRestore ( Scope , IGM . DebugInfo , Builder ) ; <nl> LTI . initialize ( * this , e , Alloca . getAddress ( ) , false / * isOutlined * / ) ; <nl> copy . push_back ( Alloca . getAddressPointer ( ) ) ; <nl> } <nl> void IRGenSILFunction : : visitCondFailInst ( swift : : CondFailInst * i ) { <nl> llvm : : BasicBlock * contBB = llvm : : BasicBlock : : Create ( IGM . getLLVMContext ( ) ) ; <nl> Builder . CreateCondBr ( cond , failBB , contBB ) ; <nl> Builder . emitBlock ( failBB ) ; <nl> + if ( IGM . DebugInfo ) <nl> + / / If we are emitting DWARF , this does nothing . Otherwise the ` ` llvm . trap ` ` <nl> + / / instruction emitted from ` ` Builtin . condfail ` ` should have an inlined <nl> + / / debug location .
This is because zero is not an artificial line location <nl> + / / in CodeView . <nl> + IGM . DebugInfo - > setInlinedTrapLocation ( Builder , i - > getDebugScope ( ) ) ; <nl> emitTrap ( / * EmitUnreachable = * / true ) ; <nl> Builder . emitBlock ( contBB ) ; <nl> FailBBs . push_back ( failBB ) ; <nl> mmm a / lib / SILGen / SILGenPattern . cpp <nl> ppp b / lib / SILGen / SILGenPattern . cpp <nl> void PatternMatchEmission : : emitCaseBody ( CaseStmt * caseBlock ) { <nl> <nl> / / Implicitly break out of the pattern match statement . <nl> if ( SGF . B . hasValidInsertionPoint ( ) ) { <nl> - / / Case blocks without trailing braces have ambiguous cleanup locations . <nl> - SILLocation cleanupLoc = RegularLocation : : getAutoGeneratedLocation ( ) ; <nl> + / / Case blocks without trailing braces have a line location of the last <nl> + / / instruction in the case block . <nl> + SILLocation cleanupLoc = <nl> + RegularLocation : : getAutoGeneratedLocation ( caseBlock - > getEndLoc ( ) ) ; <nl> if ( auto * braces = dyn_cast < BraceStmt > ( caseBlock - > getBody ( ) ) ) <nl> if ( braces - > getNumElements ( ) = = 1 & & <nl> dyn_cast_or_null < DoStmt > ( braces - > getElement ( 0 ) . dyn_cast < Stmt * > ( ) ) ) <nl> mmm a / test / DebugInfo / basic . swift <nl> ppp b / test / DebugInfo / basic . swift <nl> <nl> / / CHECK - LINETABLES - NOT : DW_TAG_basic_type <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> / / Now check that we do generate line + scope info with - g . <nl> - / / RUN : % target - swift - frontend % s - emit - ir - g - o - | % FileCheck % s <nl> + / / RUN : % target - swift - frontend % s - emit - ir - g - o - \ <nl> + / / RUN : | % FileCheck % s - - check - prefixes CHECK , DWARF - CHECK <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> / / Currently - gdwarf - types should give the same results as - g . 
<nl> - / / RUN : % target - swift - frontend % s - emit - ir - gdwarf - types - o - | % FileCheck % s <nl> + / / RUN : % target - swift - frontend % s - emit - ir - gdwarf - types - o - \ <nl> + / / RUN : | % FileCheck % s - - check - prefixes CHECK , DWARF - CHECK <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> / / Verify that - g - debug - info - format = dwarf gives the same results as - g . <nl> / / RUN : % target - swift - frontend % s - emit - ir - g - debug - info - format = dwarf - o - \ <nl> - / / RUN : | % FileCheck % s <nl> + / / RUN : | % FileCheck % s - - check - prefixes CHECK , DWARF - CHECK <nl> + / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + / / RUN : % target - swift - frontend % s - emit - ir - g - debug - info - format = codeview - o - \ <nl> + / / RUN : | % FileCheck % s - - check - prefixes CHECK , CV - CHECK <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> / / <nl> / / CHECK : foo <nl> public <nl> func foo ( _ a : Int64 , _ b : Int64 ) - > Int64 { <nl> var a = a <nl> var b = b <nl> - / / CHECK - DAG : ! DILexicalBlock ( scope : ! [ [ FOO ] ] , { { . * } } line : [ [ @ LINE - 3 ] ] , column : 43 ) <nl> - / / CHECK - DAG : ! [ [ ASCOPE : . * ] ] = ! DILocation ( line : [ [ @ LINE - 4 ] ] , column : 10 , scope : ! [ [ FOO ] ] ) <nl> + / / CHECK - DAG : ! DILexicalBlock ( scope : ! [ [ FOO ] ] , { { . * } } line : [ [ @ LINE - 3 ] ] <nl> + / / CHECK - DAG : ! [ [ ASCOPE : . * ] ] = ! DILocation ( line : [ [ @ LINE - 4 ] ] , { { . * } } scope : ! [ [ FOO ] ] ) <nl> / / Check that a is the first and b is the second argument . <nl> / / CHECK - DAG : store i64 % 0 , i64 * [ [ AADDR : . * ] ] , align <nl> / / CHECK - DAG : store i64 % 1 , i64 * [ [ BADDR : . * ] ] , align <nl> func foo ( _ a : Int64 , _ b : Int64 ) - > Int64 { <nl> / / CHECK - DAG : ! DILexicalBlock ( { { . 
* } } line : [ [ @ LINE - 1 ] ] <nl> / / Transparent inlined multiply : <nl> / / CHECK - DAG : smul { { . * } } , ! dbg ! [ [ MUL : [ 0 - 9 ] + ] ] <nl> - / / CHECK - DAG : [ [ MUL ] ] = ! DILocation ( line : [ [ @ LINE + 1 ] ] , column : 16 , <nl> + / / CHECK - DAG : [ [ MUL ] ] = ! DILocation ( line : [ [ @ LINE + 1 ] ] , <nl> return a * b <nl> } else { <nl> - / / CHECK - DAG : ! [ [ PARENT : [ 0 - 9 ] + ] ] = distinct ! DILexicalBlock ( { { . * } } line : [ [ @ LINE - 1 ] ] , column : 13 ) <nl> + / / CHECK - DAG : ! [ [ PARENT : [ 0 - 9 ] + ] ] = distinct ! DILexicalBlock ( { { . * } } line : [ [ @ LINE - 1 ] ] <nl> var c : Int64 = 42 <nl> - / / CHECK - DAG : ! [ [ CONDITION : [ 0 - 9 ] + ] ] = distinct ! DILexicalBlock ( scope : ! [ [ PARENT ] ] , { { . * } } , line : [ [ @ LINE + 1 ] ] , <nl> + / / CHECK - DAG : ! [ [ CONDITION : [ 0 - 9 ] + ] ] = distinct ! DILexicalBlock ( scope : ! [ [ PARENT ] ] , { { . * } } , line : [ [ @ LINE + 1 ] ] <nl> if a = = 0 { <nl> - / / CHECK - DAG : ! DILexicalBlock ( scope : ! [ [ CONDITION ] ] , { { . * } } , line : [ [ @ LINE - 1 ] ] , column : 18 ) <nl> + / / CHECK - DAG : ! DILexicalBlock ( scope : ! [ [ CONDITION ] ] , { { . * } } , line : [ [ @ LINE - 1 ] ] <nl> / / What about a nested scope ? <nl> return 0 <nl> } <nl> func foo ( _ a : Int64 , _ b : Int64 ) - > Int64 { <nl> / / CHECK - DAG : ! [ [ MAINMODULE ] ] = ! DIModule ( { { . * } } , name : " basic " <nl> <nl> / / DWARF Version <nl> - / / CHECK - DAG : i32 2 , ! " Dwarf Version " , i32 4 } <nl> + / / DWARF - CHECK - DAG : i32 2 , ! " Dwarf Version " , i32 4 } <nl> + / / CV - CHECK - DAG : i32 2 , ! " CodeView " , i32 1 } <nl> <nl> / / Debug Info Version <nl> / / CHECK - DAG : i32 2 , ! " Debug Info Version " , i32 <nl> new file mode 100644 <nl> index 000000000000 . . 6512f66d5b9a <nl> mmm / dev / null <nl> ppp b / test / DebugInfo / columns . 
swift <nl> <nl> + / / RUN : % target - swift - frontend % s - emit - ir - g - o - | % FileCheck % s - - check - prefixes CHECK , DWARF - CHECK <nl> + / / RUN : % target - swift - frontend % s - emit - ir - g - debug - info - format = codeview - o - \ <nl> + / / RUN : | % FileCheck % s - - check - prefixes CHECK , CV - CHECK <nl> + <nl> + public func foo ( _ a : Int64 , _ b : Int64 ) - > Int64 { / / line 5 <nl> + / / CHECK : sdiv i64 { { . * } } , ! dbg ! [ [ DIV : [ 0 - 9 ] + ] ] <nl> + / / CHECK : call { { . * } } @ llvm . sadd . with . overflow { { . * } } , ! dbg ! [ [ ADD : [ 0 - 9 ] + ] ] <nl> + let c = a + b / a ; / / line 8 <nl> + / / CHECK : icmp slt i64 { { . * } } , ! dbg ! [ [ SLT : [ 0 - 9 ] + ] ] <nl> + if ( c < b ) { / / line 10 <nl> + / / CHECK : call { { . * } } @ llvm . ssub . with . overflow { { . * } } , ! dbg ! [ [ SUB : [ 0 - 9 ] + ] ] <nl> + return a - b / / line 12 <nl> + } <nl> + return c <nl> + } <nl> + <nl> + / / CHECK - DAG : ! DISubprogram ( name : " foo " , { { . * } } line : 5 , <nl> + / / DWARF - CHECK - DAG : ! DILexicalBlock ( { { . * } } , line : 5 , column : 50 ) <nl> + / / DWARF - CHECK - DAG : ! [ [ DIV ] ] = ! DILocation ( line : 8 , column : 17 , <nl> + / / DWARF - CHECK - DAG : ! [ [ ADD ] ] = ! DILocation ( line : 8 , column : 13 , <nl> + / / DWARF - CHECK - DAG : ! [ [ SLT ] ] = ! DILocation ( line : 10 , column : 9 , <nl> + / / DWARF - CHECK - DAG : ! DILexicalBlock ( { { . * } } , line : 10 , column : 14 ) <nl> + / / DWARF - CHECK - DAG : ! [ [ SUB ] ] = ! DILocation ( line : 12 , column : 14 , <nl> + <nl> + / / CV - CHECK - DAG : ! DILexicalBlock ( { { . * } } , line : 5 ) <nl> + / / CV - CHECK - DAG : ! [ [ DIV ] ] = ! DILocation ( line : 8 , scope : <nl> + / / CV - CHECK - DAG : ! [ [ ADD ] ] = ! DILocation ( line : 8 , scope : <nl> + / / CV - CHECK - DAG : ! [ [ SLT ] ] = ! DILocation ( line : 10 , scope : <nl> + / / CV - CHECK - DAG : ! DILexicalBlock ( { { . * } } , line : 10 ) <nl> + / / CV - CHECK - DAG : ! 
[ [ SUB ] ] = ! DILocation ( line : 12 , scope : <nl> new file mode 100644 <nl> index 000000000000 . . bb8cb1a0e407 <nl> mmm / dev / null <nl> ppp b / test / DebugInfo / line - directive - codeview . swift <nl> <nl> + func markUsed < T > ( _ t : T ) { } <nl> + func myFunc ( ) { <nl> + if 1 = = 1 { <nl> + # sourceLocation ( file : " abc . swift " , line : 42 ) <nl> + markUsed ( " Hello World " ) <nl> + # sourceLocation ( ) <nl> + } <nl> + markUsed ( " Test " ) <nl> + # sourceLocation ( file : " abc . swift " , line : 142 ) <nl> + markUsed ( " abc again " ) <nl> + # sourceLocation ( file : " def . swift " , line : 142 ) <nl> + markUsed ( " jump directly to def " ) <nl> + } <nl> + <nl> + / / REQUIRES : OS = Windows <nl> + / / RUN : % swiftc_driver % s - S - g - debug - info - format = codeview - target x86_64 - unknown - windows - msvc - o - | % FileCheck - - check - prefix CV - CHECK % s <nl> + / / CV - CHECK : . cv_file [ [ MAIN : [ 0 - 9 ] + ] ] " { { . * } } line - directive - codeview . swift " <nl> + / / CV - CHECK : . cv_loc { { [ 0 - 9 ] + } } [ [ MAIN ] ] 1 { { 0 ? } } <nl> + / / CV - CHECK : . def $ S4main6myFuncyyF ; <nl> + / / CV - CHECK - NOT : . def <nl> + / / CV - CHECK : . cv_func_id [ [ MYFUNC : [ 0 - 9 ] + ] ] <nl> + / / CV - CHECK : . cv_file [ [ ABC : [ 0 - 9 ] + ] ] " { { . * } } abc . swift " <nl> + / / CV - CHECK : . cv_loc [ [ MYFUNC ] ] [ [ ABC ] ] 42 { { 0 ? } } <nl> + / / CV - CHECK : . cv_loc [ [ MYFUNC ] ] [ [ MAIN ] ] 8 { { 0 ? } } <nl> + / / CV - CHECK : . cv_loc [ [ MYFUNC ] ] [ [ ABC ] ] 142 { { 0 ? } } <nl> + / / CV - CHECK : . cv_file [ [ DEF : [ 0 - 9 ] + ] ] " { { . * } } def . swift " <nl> + / / CV - CHECK : . cv_loc [ [ MYFUNC ] ] [ [ DEF ] ] 142 { { 0 ? } } <nl> + / / CV - CHECK : . cv_linetable [ [ MYFUNC ] ] , $ S4main6myFuncyyF <nl> new file mode 100644 <nl> index 000000000000 . . 5edab44da53d <nl> mmm / dev / null <nl> ppp b / test / DebugInfo / linetable - codeview . 
swift <nl> <nl> + / / RUN : % swiftc_driver % s - g - debug - info - format = codeview - emit - ir - o - | % FileCheck % s <nl> + func markUsed < T > ( _ t : T ) { } <nl> + func arithmetic ( _ a : Int64 , _ b : Int64 ) { <nl> + markUsed ( a + b ) / / line 4 <nl> + markUsed ( a / b ) / / line 5 <nl> + } <nl> + struct SimpleStruct { / / NOTE : Classes do not work in Windows yet . <nl> + var myVal1 : Float <nl> + var myVal2 : Float <nl> + func sum ( myArg : Float ) { / / line 10 <nl> + markUsed ( myVal1 + myVal2 + myArg ) <nl> + } <nl> + } <nl> + func myLoop ( ) { <nl> + for index in 0 . . . 3 { / / line 15 <nl> + markUsed ( index ) / / line 16 <nl> + } <nl> + } <nl> + func mySwitch ( _ a : Int64 ) { <nl> + switch a { <nl> + case 0 : <nl> + markUsed ( a - 1 ) / / line 22 <nl> + default : do { <nl> + markUsed ( a + 1 ) <nl> + } / / line 25 <nl> + } <nl> + } <nl> + <nl> + / / func arithmetic ( _ a : Int64 , _ b : Int64 ) <nl> + / / CHECK : define { { . * } } @ " $ S4main10arithmeticyys5Int64V_ADtF " ( i64 , i64 ) <nl> + / / CHECK : call { i64 , i1 } @ llvm . sadd . with . overflow . i64 ( { { . * } } ) , ! dbg ! [ [ ADD : [ 0 - 9 ] + ] ] <nl> + / / NOTE : The division will emit an ` ` unreachable ` ` instruction sandwiched <nl> + / / between other instructions for the division . We want to make sure <nl> + / / all instructions from the division have the same debug location and <nl> + / / are contiguous . <nl> + / / CHECK : call { { . * } } @ " $ Ss18_fatalErrorMessage__4file4line5flagss5NeverOs12StaticStringV_A2HSus6UInt32VtF " { { . * } } , ! dbg ! [ [ DIV : [ 0 - 9 ] + ] ] <nl> + / / CHECK - NEXT : unreachable , ! dbg ! [ [ DIV ] ] <nl> + / / CHECK sdiv i64 % 0 , % 1 , ! dbg ! [ [ DIV ] ] <nl> + / / CHECK : call void @ llvm . trap ( ) , ! dbg ! [ [ INLINEDADD : [ 0 - 9 ] + ] ] <nl> + / / CHECK - NEXT : unreachable , ! dbg ! [ [ INLINEDADD ] ] <nl> + <nl> + / / func sum ( myArg : Float ) <nl> + / / CHECK : define { { . 
* } } @ " $ S4main12SimpleStructV3sum5myArgySf_tF " { { . * } } ! dbg ! [ [ SUM : [ 0 - 9 ] + ] ] <nl> + / / NOTE : The point of this test is to trigger IRGenSIL : : emitShadowCopy ( ) <nl> + / / and IRGenSIL : : emitShadowCopyIfNeeded ( ) . It may be worthwhile to <nl> + / / simplify this testcase . <nl> + / / CHECK : store float % 0 , float * % myArg . addr , { { . * } } , ! dbg ! [ [ PROLOGUE : [ 0 - 9 ] + ] ] <nl> + / / CHECK : store float { { . * } } , float * % debug . copy . myVal1 . _value , { { . * } } , ! dbg ! [ [ PROLOGUE ] ] <nl> + <nl> + / / func myLoop ( ) { <nl> + / / CHECK : define { { . * } } @ " $ S4main6myLoopyyF " <nl> + / / CHECK : call void @ llvm . dbg . declare ( metadata i64 * % index . addr , { { . * } } ) , ! dbg ! [ [ FORLOOP : [ 0 - 9 ] + ] ] <nl> + / / CHECK : phi i64 [ % { { . [ 0 - 9 ] + } } , % { { . [ 0 - 9 ] + } } ] , ! dbg ! [ [ FORLOOP ] ] <nl> + / / CHECK : call { { . * } } @ " $ S4main8markUsedyyxlF " { { . * } } , ! dbg ! [ [ FORBODY : [ 0 - 9 ] + ] ] <nl> + / / CHECK : ret void <nl> + <nl> + / / func mySwitch ( _ a : Int64 ) <nl> + / / CHECK : call { i64 , i1 } @ llvm . ssub . with . overflow . i64 { { . * } } <nl> + / / CHECK : br label % [ [ RETLABEL : [ 0 - 9 ] + ] ] , ! dbg ! [ [ CASE : [ 0 - 9 ] + ] ] <nl> + / / CHECK : call { i64 , i1 } @ llvm . sadd . with . overflow . i64 { { . * } } <nl> + / / CHECK : br label % [ [ RETLABEL ] ] , ! dbg ! [ [ DEFAULTCLEANUP : [ 0 - 9 ] + ] ] <nl> + / / CHECK : ; < label > : [ [ RETLABEL ] ] : <nl> + / / CHECK - NEXT : ret void <nl> + <nl> + <nl> + / / CHECK - DAG : ! [ [ ADD ] ] = ! DILocation ( line : 4 , scope : <nl> + / / CHECK - DAG : ! [ [ DIV ] ] = ! DILocation ( line : 5 , scope : <nl> + / / FIXME : The location of ` ` @ llvm . trap ` ` should be in Integers . swift . gyb <nl> + / / instead of being artificial . <nl> + / / CHECK - DAG : ! [ [ INLINEDADD ] ] = ! DILocation ( line : 0 , scope : ! { { [ 0 - 9 ] + } } , inlinedAt : ! 
[ [ ADD ] ] <nl> + <nl> + / / NOTE : These prologue instructions are given artificial line locations for <nl> + / / LLDB , but for CodeView they should have the location of the function <nl> + / / to keep the linetables contiguous . <nl> + / / CHECK - DAG : ! [ [ SUM ] ] = distinct ! DISubprogram ( name : " sum " , linkageName : " $ S4main12SimpleStructV3sum5myArgySf_tF " <nl> + / / CHECK - DAG : ! [ [ PROLOGUE ] ] = ! DILocation ( line : 10 , scope : ! [ [ SUM ] ] ) <nl> + / / CHECK - DAG : ! [ [ FORLOOP ] ] = ! DILocation ( line : 15 , scope : <nl> + / / CHECK - DAG : ! [ [ FORBODY ] ] = ! DILocation ( line : 16 , scope : <nl> + / / CHECK - DAG : ! [ [ CASE ] ] = ! DILocation ( line : 22 , scope : <nl> + / / CHECK - DAG : ! [ [ DEFAULTCLEANUP ] ] = ! DILocation ( line : 25 , scope : <nl>
|
Merge pull request from sparkasaurusRex / codeview - linetables
|
apple/swift
|
dc85c0a46977feafa1e89a2ad260a1339ee26cdf
|
2018-07-20T16:54:31Z
|
mmm a / src / qt / notificator . cpp <nl> ppp b / src / qt / notificator . cpp <nl> <nl> # include < QSystemTrayIcon > <nl> # include < QTemporaryFile > <nl> # include < QVariant > <nl> - <nl> - # ifdef Q_OS_MAC <nl> - # include " macnotificationhandler . h " <nl> - <nl> - # include < ApplicationServices / ApplicationServices . h > <nl> - # endif <nl> - <nl> # ifdef USE_DBUS <nl> # include < stdint . h > <nl> - <nl> # include < QtDBus > <nl> # endif <nl> + / / Include ApplicationServices . h after QtDbus to avoid redefinition of check ( ) . <nl> + / / This affects at least OSX 10 . 6 . See / usr / include / AssertMacros . h for details . <nl> + / / Note : This could also be worked around using : <nl> + / / # define __ASSERT_MACROS_DEFINE_VERSIONS_WITHOUT_UNDERSCORES 0 <nl> + # ifdef Q_OS_MAC <nl> + # include < ApplicationServices / ApplicationServices . h > <nl> + # include " macnotificationhandler . h " <nl> + # endif <nl> <nl> <nl> / / https : / / wiki . ubuntu . com / NotificationDevelopmentGuidelines recommends at least 128 <nl>
|
qt5 : fix a build issue with osx and qtdbus
|
bitcoin/bitcoin
|
c614bd718b91ec5560a14ca85bffa5e9a3e78b84
|
2014-01-10T21:30:33Z
|
mmm a / tensorflow / core / profiler / utils / BUILD <nl> ppp b / tensorflow / core / profiler / utils / BUILD <nl> cc_library ( <nl> " / / tensorflow / core : lib " , <nl> " / / tensorflow / core : lib_internal " , <nl> " @ com_google_absl / / absl / container : flat_hash_map " , <nl> + " @ com_google_absl / / absl / container : flat_hash_set " , <nl> " @ com_google_absl / / absl / strings " , <nl> " @ com_google_absl / / absl / types : optional " , <nl> ] , <nl> mmm a / tensorflow / core / profiler / utils / xplane_schema . cc <nl> ppp b / tensorflow / core / profiler / utils / xplane_schema . cc <nl> limitations under the License . <nl> # include " tensorflow / core / profiler / utils / xplane_schema . h " <nl> <nl> # include " absl / container / flat_hash_map . h " <nl> + # include " absl / container / flat_hash_set . h " <nl> # include " absl / strings / string_view . h " <nl> # include " absl / types / optional . h " <nl> # include " tensorflow / core / lib / gtl / map_util . h " <nl> absl : : optional < int64 > FindStatType ( absl : : string_view stat_name ) { <nl> return absl : : nullopt ; <nl> } <nl> <nl> + bool IsInternalStat ( absl : : optional < int64 > stat_type ) { <nl> + static const auto * const kInternalStats = new absl : : flat_hash_set < int64 > { <nl> + StatType : : kKernelDetails , StatType : : kLevel0 , <nl> + StatType : : kProducerType , StatType : : kProducerId , <nl> + StatType : : kConsumerType , StatType : : kConsumerId , <nl> + StatType : : kIsRoot , StatType : : kIsAsync } ; <nl> + return stat_type . has_value ( ) & & kInternalStats - > contains ( * stat_type ) ; <nl> + } <nl> + <nl> } / / namespace profiler <nl> } / / namespace tensorflow <nl> mmm a / tensorflow / core / profiler / utils / xplane_schema . h <nl> ppp b / tensorflow / core / profiler / utils / xplane_schema . 
h <nl> inline bool IsStatType ( StatType stat_type , absl : : string_view stat_name ) { <nl> absl : : optional < int64 > FindStatType ( absl : : string_view stat_name ) ; <nl> <nl> / / Returns true if the given stat shouldn ' t be shown in the trace viewer . <nl> - inline bool IsInternalStat ( absl : : optional < int64 > stat_type ) { <nl> - return stat_type = = StatType : : kKernelDetails | | <nl> - stat_type = = StatType : : kLevel0 ; <nl> - } <nl> + bool IsInternalStat ( absl : : optional < int64 > stat_type ) ; <nl> <nl> / / Support for flow events : <nl> / / This class enables encoding / decoding the flow id and direction , stored as <nl>
|
Register the semantic stats as internal .
|
tensorflow/tensorflow
|
8b3f0347e74bc69206f5ee36ccccd61ce6fec26f
|
2020-05-29T21:48:39Z
|
mmm a / tensorflow / compiler / xla / service / gpu / BUILD <nl> ppp b / tensorflow / compiler / xla / service / gpu / BUILD <nl> cc_library ( <nl> " / / tensorflow / stream_executor : device_memory_allocator " , <nl> " @ com_google_absl / / absl / container : flat_hash_map " , <nl> " @ com_google_absl / / absl / memory " , <nl> + " @ com_google_absl / / absl / strings : str_format " , <nl> " @ com_google_absl / / absl / types : span " , <nl> ] , <nl> ) <nl> cc_library ( <nl> " : buffer_allocations " , <nl> " : cusolver_context " , <nl> " : cudnn_batchnorm_runner " , <nl> + " : gpu_constants " , <nl> " : gpu_conv_runner " , <nl> " : gpu_debug_info_manager " , <nl> " : gpu_executable_run_options " , <nl> mmm a / tensorflow / compiler / xla / service / gpu / buffer_allocations . cc <nl> ppp b / tensorflow / compiler / xla / service / gpu / buffer_allocations . cc <nl> limitations under the License . <nl> namespace xla { <nl> namespace gpu { <nl> <nl> - void BufferAllocations : : Builder : : RegisterBuffer ( BufferAllocation : : Index index , <nl> - se : : DeviceMemoryBase address ) { <nl> - InsertOrDie ( & registered_buffers_ , index , address ) ; <nl> - } <nl> - <nl> - StatusOr < std : : unique_ptr < BufferAllocations > > BufferAllocations : : Builder : : Build ( <nl> - const BufferAssignment * buffer_assignment , int device_ordinal , <nl> - se : : DeviceMemoryAllocator * memory_allocator ) { <nl> - const int64 num_buffers = buffer_assignment - > Allocations ( ) . size ( ) ; <nl> - auto buffer_allocations = absl : : WrapUnique ( new BufferAllocations ( <nl> - num_buffers , device_ordinal , memory_allocator , buffer_assignment ) ) ; <nl> - <nl> - for ( BufferAllocation : : Index i = 0 ; i < num_buffers ; + + i ) { <nl> - const BufferAllocation & allocation = buffer_assignment - > GetAllocation ( i ) ; <nl> - const int64 expected_alignment = [ & ] { <nl> - if ( allocation . 
is_entry_computation_parameter ( ) ) { <nl> - return kEntryParameterAlignBytes ; <nl> - } else if ( allocation . is_constant ( ) ) { <nl> - return kConstantBufferAlignBytes ; <nl> - } else { <nl> - return kXlaAllocatedBufferAlignBytes ; <nl> - } <nl> - } ( ) ; <nl> - <nl> - / / If buffer # i ' s address is already registered ( e . g . external arguments or <nl> - / / result buffers ) , use that registered buffer . <nl> - if ( se : : DeviceMemoryBase * address = <nl> - tensorflow : : gtl : : FindOrNull ( registered_buffers_ , i ) ) { <nl> - if ( reinterpret_cast < uintptr_t > ( address - > opaque ( ) ) % expected_alignment ! = <nl> - 0 ) { <nl> - return InternalError ( <nl> - " Address of registered buffer % d must be a multiple of % x , but " <nl> - " was % p " , <nl> - i , kEntryParameterAlignBytes , address - > opaque ( ) ) ; <nl> - } <nl> - buffer_allocations - > SetBuffer ( i , * address ) ; <nl> - continue ; <nl> - } <nl> - <nl> - / / Allocate each allocation that might escape , or is the temp buffer . <nl> - bool seen_temp_buffer = false ; <nl> - if ( allocation . maybe_live_out ( ) | | allocation . IsPreallocatedTempBuffer ( ) ) { <nl> - const int64 buffer_size = allocation . size ( ) ; <nl> - se : : DeviceMemoryBase buffer_address ; <nl> - if ( buffer_size > 0 ) { <nl> - se : : OwningDeviceMemory buffer ; <nl> - TF_ASSIGN_OR_RETURN ( <nl> - buffer , memory_allocator - > Allocate ( device_ordinal , buffer_size ) ) ; <nl> - if ( reinterpret_cast < uintptr_t > ( buffer - > opaque ( ) ) % <nl> - expected_alignment ! = <nl> - 0 ) { <nl> - return InternalError ( <nl> - " Address returned by memory_allocator - > Allocate must be a " <nl> - " multiple of 0x % x , but was % p " , <nl> - kXlaAllocatedBufferAlignBytes , buffer - > opaque ( ) ) ; <nl> - } <nl> - / / We do manual memory management within BufferAllocations . 
Be sure not <nl> - / / to do a TF_RETURN_IF_ERROR between this line and the <nl> - / / buffer_allocations - > SetBuffer ( buffer_address ) call below ! <nl> - buffer_address = buffer . Release ( ) ; <nl> - } <nl> - <nl> - buffer_allocations - > SetBuffer ( i , buffer_address ) ; <nl> - if ( allocation . IsPreallocatedTempBuffer ( ) ) { <nl> - if ( seen_temp_buffer ) { <nl> - LOG ( FATAL ) < < " Multiple temporary buffers detected . BufferAssigner " <nl> - < < " must guarantee at most one temporary buffer . " ; <nl> - } <nl> - seen_temp_buffer = true ; <nl> - buffer_allocations - > temp_buffer_base_ = buffer_address ; <nl> - } <nl> - } <nl> - } <nl> - <nl> - if ( VLOG_IS_ON ( 2 ) ) { <nl> - for ( BufferAllocation : : Index i = 0 ; i < num_buffers ; + + i ) { <nl> - const auto & buf = buffer_allocations - > buffers_ [ i ] ; <nl> - VLOG ( 2 ) < < " Buffer " < < i < < " - > " < < buf . opaque ( ) < < " ( " < < buf . size ( ) <nl> - < < " B ) " ; <nl> - } <nl> - } <nl> - return std : : move ( buffer_allocations ) ; <nl> - } <nl> - <nl> - BufferAllocations : : ~ BufferAllocations ( ) { <nl> - if ( ! torn_down_ ) { <nl> - / / Presumably if we ' re executing this branch , the caller is in an error <nl> - / / state , otherwise it would have explicitly called TearDown so it could <nl> - / / save some set of live addresses . So ignoring any errors in TearDown is <nl> - / / sensible . <nl> - TearDown ( / * live_addresses = * / { } ) . IgnoreError ( ) ; <nl> - } <nl> - } <nl> - <nl> Status BufferAllocations : : TearDown ( <nl> - const std : : set < se : : DeviceMemoryBase > & live_addresses ) { <nl> + const std : : set < se : : DeviceMemoryBase > & live_addresses , <nl> + const BufferAssignment * buffer_assignment ) { <nl> / / Deallocate temporary buffers , taking care to try to deallocate all of them <nl> / / even if one of the deallocations fails . <nl> Status status ; <nl> - const int64 num_buffers = buffer_assignment_ - > Allocations ( ) . 
size ( ) ; <nl> + const int64 num_buffers = buffer_assignment - > Allocations ( ) . size ( ) ; <nl> for ( BufferAllocation : : Index i = 0 ; i < num_buffers ; + + i ) { <nl> - const BufferAllocation & allocation = buffer_assignment_ - > GetAllocation ( i ) ; <nl> + const BufferAllocation & allocation = buffer_assignment - > GetAllocation ( i ) ; <nl> se : : DeviceMemoryBase buffer_address = GetDeviceAddress ( allocation . index ( ) ) ; <nl> / / Deallocate buffers marked " maybe_live_out " but aren ' t actually live out , <nl> / / and temp buffers . <nl> Status BufferAllocations : : TearDown ( <nl> } <nl> } <nl> } <nl> - torn_down_ = true ; <nl> return status ; <nl> } <nl> <nl> se : : DeviceMemoryBase BufferAllocations : : GetDeviceAddress ( <nl> buffer_slice . size ( ) ) ; <nl> } <nl> <nl> - void BufferAllocations : : SetBuffer ( BufferAllocation : : Index buffer_index , <nl> - se : : DeviceMemoryBase buffer ) { <nl> - CHECK_GE ( buffer_index , 0 ) ; <nl> - CHECK_LT ( buffer_index , buffers_ . size ( ) ) ; <nl> - buffers_ [ buffer_index ] = buffer ; <nl> - } <nl> - <nl> bool ShouldEmitLiteralInLlvmIr ( const Literal & literal ) { <nl> / / LLVM can sometimes do interesting optimizations using scalar constants . <nl> return ShapeUtil : : IsScalar ( literal . shape ( ) ) ; <nl> mmm a / tensorflow / compiler / xla / service / gpu / buffer_allocations . h <nl> ppp b / tensorflow / compiler / xla / service / gpu / buffer_allocations . h <nl> limitations under the License . <nl> # include < vector > <nl> <nl> # include " absl / container / flat_hash_map . h " <nl> + # include " absl / strings / str_format . h " <nl> # include " absl / types / span . h " <nl> # include " tensorflow / compiler / xla / service / buffer_assignment . h " <nl> # include " tensorflow / compiler / xla / statusor . h " <nl> namespace gpu { <nl> / / allocated device buffers . 
<nl> class BufferAllocations { <nl> public : <nl> - / / This inner class encapsulates methods that build a BufferAllocations from <nl> - / / the given buffer assignment . <nl> - class Builder { <nl> - public : <nl> - / / Registers preallocated buffers ( such as parameter addresses and <nl> - / / user - specified result buffers ) to the given buffer index . The builder <nl> - / / will skip allocating buffers for registered buffer indices . <nl> - void RegisterBuffer ( BufferAllocation : : Index index , <nl> - se : : DeviceMemoryBase address ) ; <nl> - <nl> - / / Builds a BufferAllocations object from the given buffer assignment . <nl> - / / ` memory_allocator ` is what this function uses to allocate device memory . <nl> - / / ` device_ordinal ` is the number of the device this function allocates <nl> - / / memory on . <nl> - StatusOr < std : : unique_ptr < BufferAllocations > > Build ( <nl> - const BufferAssignment * buffer_assignment , int device_ordinal , <nl> - se : : DeviceMemoryAllocator * memory_allocator ) ; <nl> - <nl> - private : <nl> - absl : : flat_hash_map < BufferAllocation : : Index , se : : DeviceMemoryBase > <nl> - registered_buffers_ ; <nl> - } ; <nl> - <nl> - ~ BufferAllocations ( ) ; <nl> + BufferAllocations ( absl : : Span < se : : DeviceMemoryBase const > buffers , <nl> + int device_ordinal , <nl> + se : : DeviceMemoryAllocator * memory_allocator ) <nl> + : buffers_ ( buffers . begin ( ) , buffers . 
end ( ) ) , <nl> + device_ordinal_ ( device_ordinal ) , <nl> + memory_allocator_ ( memory_allocator ) { } <nl> <nl> + BufferAllocations ( BufferAllocations & & other ) = default ; <nl> + BufferAllocations & operator = ( BufferAllocations & & other ) = default ; <nl> BufferAllocations ( const BufferAllocations & ) = delete ; <nl> BufferAllocations & operator = ( const BufferAllocations & ) = delete ; <nl> <nl> class BufferAllocations { <nl> se : : DeviceMemoryBase GetDeviceAddress ( <nl> const BufferAllocation : : Slice & buffer_slice ) const ; <nl> <nl> - se : : DeviceMemoryBase GetTempBufferBase ( ) const { return temp_buffer_base_ ; } <nl> - <nl> / / Tears down all buffers allocated by this object that are not in <nl> / / ` live_addresses ` . <nl> - Status TearDown ( const std : : set < se : : DeviceMemoryBase > & live_addresses ) ; <nl> + Status TearDown ( const std : : set < se : : DeviceMemoryBase > & live_addresses , <nl> + const BufferAssignment * buffer_assignment ) ; <nl> + <nl> + std : : string ToString ( ) { <nl> + std : : string out ; <nl> + for ( BufferAllocation : : Index i = 0 ; i < buffers_ . size ( ) ; + + i ) { <nl> + const auto & buf = buffers_ [ i ] ; <nl> + absl : : StrAppendFormat ( & out , " Buffer % d - > % p ( % d B ) " , i , buf . opaque ( ) , <nl> + buf . size ( ) ) ; <nl> + } <nl> + return out ; <nl> + } <nl> <nl> private : <nl> - BufferAllocations ( BufferAllocation : : Index buffer_count , int device_ordinal , <nl> - se : : DeviceMemoryAllocator * memory_allocator , <nl> - const BufferAssignment * buffer_assignment ) <nl> - : buffers_ ( buffer_count ) , <nl> - device_ordinal_ ( device_ordinal ) , <nl> - memory_allocator_ ( memory_allocator ) , <nl> - buffer_assignment_ ( buffer_assignment ) { } <nl> - <nl> - / / Sets the device address of buffer ` buffer_index ` . 
<nl> - void SetBuffer ( BufferAllocation : : Index buffer_index , <nl> - se : : DeviceMemoryBase buffer ) ; <nl> - <nl> / / An array of device pointers that stores the address of each buffer <nl> / / indexed by Index . Each element can point to a temporary buffer , an <nl> / / input buffer , or nullptr if no buffer is needed for that Index . <nl> std : : vector < se : : DeviceMemoryBase > buffers_ ; <nl> - <nl> - / / The base address of the memory block that contains all temporary buffers . <nl> - se : : DeviceMemoryBase temp_buffer_base_ ; <nl> - <nl> int device_ordinal_ ; <nl> se : : DeviceMemoryAllocator * memory_allocator_ ; <nl> - const BufferAssignment * buffer_assignment_ ; <nl> - bool torn_down_ = false ; <nl> } ; <nl> <nl> / / LLVM and PTXAS don ' t deal well with large constants , so we only emit very <nl> mmm a / tensorflow / compiler / xla / service / gpu / gpu_executable . cc <nl> ppp b / tensorflow / compiler / xla / service / gpu / gpu_executable . cc <nl> limitations under the License . <nl> # include " absl / memory / memory . h " <nl> # include " tensorflow / compiler / xla / map_util . h " <nl> # include " tensorflow / compiler / xla / service / gpu / buffer_allocations . h " <nl> + # include " tensorflow / compiler / xla / service / gpu / gpu_constants . h " <nl> # include " tensorflow / compiler / xla / service / gpu / gpu_debug_info_manager . h " <nl> # include " tensorflow / compiler / xla / service / gpu / gpu_executable_run_options . h " <nl> # include " tensorflow / compiler / xla / service / gpu / gpu_types . h " <nl> limitations under the License . <nl> # include " tensorflow / compiler / xla / shape_util . h " <nl> # include " tensorflow / compiler / xla / status_macros . h " <nl> # include " tensorflow / compiler / xla / util . h " <nl> + # include " tensorflow / core / lib / gtl / map_util . h " <nl> + # include " tensorflow / core / platform / errors . h " <nl> # include " tensorflow / core / platform / logging . 
h " <nl> # include " tensorflow / core / profiler / lib / scoped_annotation . h " <nl> # include " tensorflow / core / profiler / lib / traceme . h " <nl> GpuExecutable : : ResolveConstantGlobals ( se : : Stream * stream ) { <nl> return & module_globals_ . emplace ( executor , std : : move ( globals ) ) . first - > second ; <nl> } <nl> <nl> + StatusOr < BufferAllocations > GpuExecutable : : GenerateBufferAllocations ( <nl> + absl : : Span < ExecutionInput const > arguments , <nl> + const GpuExecutable : : BufferAllocToDeviceMemoryMap * globals , <nl> + se : : DeviceMemoryAllocator * const memory_allocator , <nl> + se : : StreamExecutor * executor ) { <nl> + absl : : flat_hash_map < BufferAllocation : : Index , se : : DeviceMemoryBase > <nl> + registered_buffers ; <nl> + tensorflow : : profiler : : TraceMe hlo_module_activity ( <nl> + [ & ] { return std : : string ( " Build buffer allocations " ) ; } , <nl> + tensorflow : : profiler : : TraceMeLevel : : kInfo ) ; <nl> + <nl> + const int64 num_buffers = assignment_ - > Allocations ( ) . size ( ) ; <nl> + std : : vector < se : : DeviceMemoryBase > buffers ( num_buffers ) ; <nl> + for ( BufferAllocation : : Index i = 0 ; i < assignment_ - > Allocations ( ) . size ( ) ; <nl> + + + i ) { <nl> + const BufferAllocation & allocation = assignment_ - > GetAllocation ( i ) ; <nl> + if ( allocation . is_entry_computation_parameter ( ) ) { <nl> + auto param_no = allocation . parameter_number ( ) ; <nl> + se : : DeviceMemoryBase buffer = arguments [ param_no ] <nl> + . Buffer ( allocation . param_shape_index ( ) ) <nl> + . AsDeviceMemoryBase ( ) ; <nl> + <nl> + / / All top - level buffers and sub - buffers must have an explicit , non - null <nl> + / / pointer , except for zero - sized buffers , which may be null . <nl> + if ( buffer . is_null ( ) & & buffer . 
size ( ) > 0 ) { <nl> + return FailedPrecondition ( <nl> + " Cannot run XLA computation because pointer to ( sub - ) buffer at " <nl> + " index % s of parameter % d was null . All pointers to " <nl> + " ( sub - ) buffers must not be null , unless the ( sub - ) buffer has " <nl> + " zero elements . " , <nl> + allocation . param_shape_index ( ) . ToString ( ) , param_no ) ; <nl> + } <nl> + <nl> + InsertOrDie ( & registered_buffers , i , buffer ) ; <nl> + } <nl> + <nl> + if ( allocation . is_constant ( ) ) { <nl> + InsertOrDie ( & registered_buffers , i , FindOrDie ( * globals , i ) ) ; <nl> + } <nl> + } <nl> + <nl> + int device_ordinal = executor - > device_ordinal ( ) ; <nl> + for ( BufferAllocation : : Index i = 0 ; i < num_buffers ; + + i ) { <nl> + const BufferAllocation & allocation = assignment_ - > GetAllocation ( i ) ; <nl> + const int64 expected_alignment = [ & ] { <nl> + if ( allocation . is_entry_computation_parameter ( ) ) { <nl> + return kEntryParameterAlignBytes ; <nl> + } else if ( allocation . is_constant ( ) ) { <nl> + return kConstantBufferAlignBytes ; <nl> + } else { <nl> + return kXlaAllocatedBufferAlignBytes ; <nl> + } <nl> + } ( ) ; <nl> + <nl> + / / If buffer # i ' s address is already registered ( e . g . external arguments or <nl> + / / result buffers ) , use that registered buffer . <nl> + if ( se : : DeviceMemoryBase * address = <nl> + tensorflow : : gtl : : FindOrNull ( registered_buffers , i ) ) { <nl> + if ( reinterpret_cast < uintptr_t > ( address - > opaque ( ) ) % expected_alignment ! = <nl> + 0 ) { <nl> + return InternalError ( <nl> + " Address of registered buffer % d must be a multiple of % x , but " <nl> + " was % p " , <nl> + i , kEntryParameterAlignBytes , address - > opaque ( ) ) ; <nl> + } <nl> + CHECK_LT ( i , buffers . size ( ) ) ; <nl> + buffers [ i ] = * address ; <nl> + continue ; <nl> + } <nl> + <nl> + / / Allocate each allocation that might escape , or is the temp buffer . <nl> + if ( allocation . 
maybe_live_out ( ) | | allocation . IsPreallocatedTempBuffer ( ) ) { <nl> + const int64 buffer_size = allocation . size ( ) ; <nl> + se : : DeviceMemoryBase buffer_address ; <nl> + if ( buffer_size > 0 ) { <nl> + TF_ASSIGN_OR_RETURN ( <nl> + se : : OwningDeviceMemory buffer , <nl> + memory_allocator - > Allocate ( device_ordinal , buffer_size ) ) ; <nl> + if ( reinterpret_cast < uintptr_t > ( buffer - > opaque ( ) ) % <nl> + expected_alignment ! = <nl> + 0 ) { <nl> + return InternalError ( <nl> + " Address returned by memory_allocator - > Allocate must be a " <nl> + " multiple of 0x % x , but was % p " , <nl> + kXlaAllocatedBufferAlignBytes , buffer - > opaque ( ) ) ; <nl> + } <nl> + / / We do manual memory management within BufferAllocations . Be sure not <nl> + / / to do a TF_RETURN_IF_ERROR between this line and the <nl> + / / buffer_allocations . SetBuffer ( buffer_address ) call below ! <nl> + buffer_address = buffer . Release ( ) ; <nl> + } <nl> + <nl> + CHECK_LT ( i , buffers . size ( ) ) ; <nl> + buffers [ i ] = buffer_address ; <nl> + } <nl> + } <nl> + return { { buffers , device_ordinal , memory_allocator } } ; <nl> + } <nl> + <nl> StatusOr < ExecutionOutput > GpuExecutable : : ExecuteAsyncOnStream ( <nl> const ServiceExecutableRunOptions * run_options , <nl> std : : vector < ExecutionInput > arguments , <nl> StatusOr < ExecutionOutput > GpuExecutable : : ExecuteAsyncOnStream ( <nl> return Unimplemented ( " Points - to set of root instruction is ambiguous " ) ; <nl> } <nl> <nl> - BufferAllocations : : Builder buffer_allocations_builder ; <nl> const GpuExecutable : : BufferAllocToDeviceMemoryMap * globals ; <nl> { <nl> tensorflow : : profiler : : TraceMe hlo_module_activity ( <nl> StatusOr < ExecutionOutput > GpuExecutable : : ExecuteAsyncOnStream ( <nl> } <nl> <nl> se : : StreamExecutor * executor = run_options - > stream ( ) - > parent ( ) ; <nl> - <nl> - std : : unique_ptr < BufferAllocations > buffer_allocations ; <nl> - <nl> - { <nl> - tensorflow : : 
profiler : : TraceMe hlo_module_activity ( <nl> - [ & ] { return std : : string ( " Build buffer allocations " ) ; } , <nl> - tensorflow : : profiler : : TraceMeLevel : : kInfo ) ; <nl> - <nl> - for ( BufferAllocation : : Index i = 0 ; i < assignment_ - > Allocations ( ) . size ( ) ; <nl> - + + i ) { <nl> - const BufferAllocation & allocation = assignment_ - > GetAllocation ( i ) ; <nl> - if ( allocation . is_entry_computation_parameter ( ) ) { <nl> - auto param_no = allocation . parameter_number ( ) ; <nl> - se : : DeviceMemoryBase buffer = <nl> - arguments [ param_no ] <nl> - . Buffer ( allocation . param_shape_index ( ) ) <nl> - . AsDeviceMemoryBase ( ) ; <nl> - <nl> - / / All top - level buffers and sub - buffers must have an explicit , non - null <nl> - / / pointer , except for zero - sized buffers , which may be null . <nl> - if ( buffer . is_null ( ) & & buffer . size ( ) > 0 ) { <nl> - return FailedPrecondition ( <nl> - " Cannot run XLA computation because pointer to ( sub - ) buffer at " <nl> - " index % s of parameter % d was null . All pointers to " <nl> - " ( sub - ) buffers must not be null , unless the ( sub - ) buffer has " <nl> - " zero elements . " , <nl> - allocation . param_shape_index ( ) . ToString ( ) , param_no ) ; <nl> - } <nl> - <nl> - buffer_allocations_builder . RegisterBuffer ( i , buffer ) ; <nl> - } <nl> - <nl> - if ( allocation . is_constant ( ) ) { <nl> - buffer_allocations_builder . RegisterBuffer ( i , FindOrDie ( * globals , i ) ) ; <nl> - } <nl> - } <nl> - <nl> - TF_ASSIGN_OR_RETURN ( <nl> - buffer_allocations , <nl> - buffer_allocations_builder . Build ( <nl> - assignment_ . 
get ( ) , executor - > device_ordinal ( ) , memory_allocator ) ) ; <nl> - } <nl> + TF_ASSIGN_OR_RETURN ( BufferAllocations buffer_allocations , <nl> + GenerateBufferAllocations ( arguments , globals , <nl> + memory_allocator , executor ) ) ; <nl> <nl> for ( Thunk * thunk : thunk_schedule_ - > TotalOrder ( ) ) { <nl> TF_RETURN_IF_ERROR ( thunk - > Initialize ( * this , executor ) ) ; <nl> } <nl> - <nl> - TF_RETURN_IF_ERROR ( ExecuteThunks ( run_options , * buffer_allocations , <nl> + VLOG ( 2 ) < < buffer_allocations . ToString ( ) ; <nl> + TF_RETURN_IF_ERROR ( ExecuteThunks ( run_options , buffer_allocations , <nl> block_host_until_done , <nl> hlo_execution_profile ) ) ; <nl> <nl> StatusOr < ExecutionOutput > GpuExecutable : : ExecuteAsyncOnStream ( <nl> / / the respective location in ShapedBuffer . <nl> std : : set < se : : DeviceMemoryBase > buffers_in_result ; <nl> TF_RETURN_IF_ERROR ( shaped_buffer . buffers ( ) . ForEachMutableElementWithStatus ( <nl> - [ & buffer_allocations , & buffers_in_result , this ] ( <nl> - const ShapeIndex & index , se : : DeviceMemoryBase * device_memory ) { <nl> + [ & ] ( const ShapeIndex & index , se : : DeviceMemoryBase * device_memory ) { <nl> const auto & sources = this - > GetRootValueSet ( ) . element ( index ) ; <nl> / / The points - to set is unambiguous so the set should be a <nl> / / singleton . That is , we know exactly which instruction <nl> StatusOr < ExecutionOutput > GpuExecutable : : ExecuteAsyncOnStream ( <nl> src_hlo , sources . values ( ) [ 0 ] - > index ( ) ) ) ; <nl> <nl> se : : DeviceMemoryBase src_base = <nl> - buffer_allocations - > GetDeviceAddress ( slice . index ( ) ) ; <nl> + buffer_allocations . GetDeviceAddress ( slice . index ( ) ) ; <nl> CHECK ( ! src_base . is_null ( ) | | src_base . size ( ) = = 0 ) ; <nl> if ( ! slice . 
allocation ( ) - > is_entry_computation_parameter ( ) ) { <nl> / / If the buffer coming out of the result is from a parameter , it <nl> StatusOr < ExecutionOutput > GpuExecutable : : ExecuteAsyncOnStream ( <nl> buffers_in_result . insert ( src_base ) ; <nl> return Status : : OK ( ) ; <nl> } ) ) ; <nl> - TF_RETURN_IF_ERROR ( buffer_allocations - > TearDown ( buffers_in_result ) ) ; <nl> + TF_RETURN_IF_ERROR ( <nl> + buffer_allocations . TearDown ( buffers_in_result , assignment_ . get ( ) ) ) ; <nl> <nl> std : : vector < se : : OwningDeviceMemory > buffers_to_free ; <nl> for ( auto & argument : arguments ) { <nl> mmm a / tensorflow / compiler / xla / service / gpu / gpu_executable . h <nl> ppp b / tensorflow / compiler / xla / service / gpu / gpu_executable . h <nl> class GpuExecutable : public Executable { <nl> Status CheckCompatibilityWithServiceExecutableRunOptions ( <nl> const ServiceExecutableRunOptions * run_options ) ; <nl> <nl> + StatusOr < BufferAllocations > GenerateBufferAllocations ( <nl> + absl : : Span < ExecutionInput const > arguments , <nl> + const GpuExecutable : : BufferAllocToDeviceMemoryMap * globals , <nl> + se : : DeviceMemoryAllocator * const memory_allocator , <nl> + se : : StreamExecutor * executor ) ; <nl> + <nl> / / The LLVM IR , in string format , of the unoptimized module generated for this <nl> / / GpuExecutable . We save a string instead of an llvm : : Module * because leaving <nl> / / llvm : : Module * in a singleton can cause the heap checker to emit false <nl>
|
[ XLA : GPU ] Simplify BufferAllocations class
|
tensorflow/tensorflow
|
928bf74914441b433bd59c89c075141696282c4a
|
2020-06-03T01:47:58Z
|
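The record above refactors GpuExecutable's buffer setup into a `GenerateBufferAllocations` helper that dispatches on the allocation kind — entry-computation parameters reuse the caller's argument buffers, constants come from the pre-uploaded globals map, and everything else (live-out results, preallocated temps) is freshly allocated with an alignment check. A minimal Python model of that control flow (the dict-based allocation records, the `allocate` callback, and the 64-byte alignment default are illustrative assumptions, not the real XLA types):

```python
def generate_buffer_allocations(allocations, arguments, globals_, allocate,
                                alignment=64):
    """Toy model of GpuExecutable::GenerateBufferAllocations.

    Each allocation is a plain dict and addresses are plain ints; the real
    code works with BufferAllocation / se::DeviceMemoryBase instead.
    """
    buffers = []
    for i, alloc in enumerate(allocations):
        if alloc["kind"] == "parameter":
            # Entry-computation parameters reuse the caller's argument buffer;
            # a null pointer is only legal for zero-sized buffers.
            addr = arguments[alloc["param_no"]]
            if addr == 0 and alloc["size"] > 0:
                raise ValueError(
                    "null pointer for non-empty parameter %d" % alloc["param_no"])
        elif alloc["kind"] == "constant":
            # Constants were uploaded once; look them up by allocation index.
            addr = globals_[i]
        else:
            # Live-out results and temp buffers are allocated fresh, and the
            # allocator must honour the expected alignment.
            addr = allocate(alloc["size"]) if alloc["size"] > 0 else 0
            if addr % alignment != 0:
                raise RuntimeError("allocator returned a misaligned address")
        buffers.append(addr)
    return buffers
```

This mirrors why the commit can drop the builder object: the three cases are decided per allocation index, so a single pass can fill the flat `buffers` vector directly.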
mmm a / aten / src / THCUNN / SpatialGridSamplerBilinear . cu <nl> ppp b / aten / src / THCUNN / SpatialGridSamplerBilinear . cu <nl> <nl> } \ <nl> } while ( 0 ) <nl> <nl> + # undef MIN <nl> + # define MIN ( a , b ) ( ( ( a ) < ( b ) ) ? ( a ) : ( b ) ) <nl> + # undef MAX <nl> + # define MAX ( a , b ) ( ( ( a ) > ( b ) ) ? ( a ) : ( b ) ) <nl> + # define CLIP_COORDINATES ( in , out , clip_limit ) out = MIN ( ( clip_limit - 1 ) , MAX ( in , 0 ) ) <nl> + <nl> + const int MODE_BORDER = 1 ; <nl> + <nl> + <nl> template < typename Dtype > <nl> __launch_bounds__ ( 1024 ) <nl> __global__ void SpatialGridSamplerBilinear_updateOutput_kernel ( <nl> const int nthreads , <nl> THCDeviceTensor < Dtype , 4 > input , <nl> THCDeviceTensor < Dtype , 4 > grid , <nl> - THCDeviceTensor < Dtype , 4 > output ) { <nl> + THCDeviceTensor < Dtype , 4 > output , <nl> + const int padding_mode ) { <nl> <nl> int N = input . getSize ( 0 ) ; <nl> int C = input . getSize ( 1 ) ; <nl> __global__ void SpatialGridSamplerBilinear_updateOutput_kernel ( <nl> Dtype se = ( ix - ix_nw ) * ( iy - iy_nw ) ; <nl> <nl> / / calculate bilinear weighted pixel value and set output pixel <nl> + if ( padding_mode = = MODE_BORDER ) { <nl> + / / clip coordinates to image borders <nl> + CLIP_COORDINATES ( ix_nw , ix_nw , IW ) ; <nl> + CLIP_COORDINATES ( iy_nw , iy_nw , IH ) ; <nl> + CLIP_COORDINATES ( ix_ne , ix_ne , IW ) ; <nl> + CLIP_COORDINATES ( iy_ne , iy_ne , IH ) ; <nl> + CLIP_COORDINATES ( ix_sw , ix_sw , IW ) ; <nl> + CLIP_COORDINATES ( iy_sw , iy_sw , IH ) ; <nl> + CLIP_COORDINATES ( ix_se , ix_se , IW ) ; <nl> + CLIP_COORDINATES ( iy_se , iy_se , IH ) ; <nl> + } <nl> + <nl> Dtype out_val ; <nl> for ( c = 0 ; c < C ; + + c ) { <nl> out_val = ScalarConvert < int , Dtype > : : to ( 0 ) ; <nl> __global__ void SpatialGridSamplerBilinear_updateGradInput_kernel ( <nl> const int nthreads , <nl> THCDeviceTensor < Dtype , 4 > input , THCDeviceTensor < Dtype , 4 > gradInput , <nl> THCDeviceTensor < Dtype , 4 > grid , 
THCDeviceTensor < Dtype , 4 > gradGrid , <nl> - THCDeviceTensor < Dtype , 4 > gradOutput ) { <nl> + THCDeviceTensor < Dtype , 4 > gradOutput , <nl> + const int padding_mode ) { <nl> <nl> int N = input . getSize ( 0 ) ; <nl> int C = input . getSize ( 1 ) ; <nl> __global__ void SpatialGridSamplerBilinear_updateGradInput_kernel ( <nl> Dtype ne_val ; <nl> Dtype sw_val ; <nl> Dtype se_val ; <nl> + <nl> + int ix_nw_cl , iy_nw_cl , ix_ne_cl , iy_ne_cl , ix_sw_cl , iy_sw_cl , ix_se_cl , iy_se_cl ; <nl> + <nl> + if ( padding_mode = = MODE_BORDER ) { <nl> + / / get clipped NE , NW , SE , SW pixel values from ( x , y ) <nl> + CLIP_COORDINATES ( ix_nw , ix_nw_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_nw , iy_nw_cl , IH ) ; <nl> + CLIP_COORDINATES ( ix_ne , ix_ne_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_ne , iy_ne_cl , IH ) ; <nl> + CLIP_COORDINATES ( ix_sw , ix_sw_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_sw , iy_sw_cl , IH ) ; <nl> + CLIP_COORDINATES ( ix_se , ix_se_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_se , iy_se_cl , IH ) ; <nl> + } <nl> + else { <nl> + ix_nw_cl = ix_nw ; <nl> + iy_nw_cl = iy_nw ; <nl> + ix_ne_cl = ix_ne ; <nl> + iy_ne_cl = iy_ne ; <nl> + ix_sw_cl = ix_sw ; <nl> + iy_sw_cl = iy_sw ; <nl> + ix_se_cl = ix_se ; <nl> + iy_se_cl = iy_se ; <nl> + } <nl> + <nl> for ( int c = 0 ; c < C ; + + c ) { <nl> gradout = gradOutput [ n ] [ c ] [ h ] [ w ] ; <nl> <nl> / / calculate and set gradInput <nl> - SAFE_ADD ( gradInput , ix_nw , iy_nw , n , c , IH , IW , nw * gradout ) ; <nl> - SAFE_ADD ( gradInput , ix_ne , iy_ne , n , c , IH , IW , ne * gradout ) ; <nl> - SAFE_ADD ( gradInput , ix_sw , iy_sw , n , c , IH , IW , sw * gradout ) ; <nl> - SAFE_ADD ( gradInput , ix_se , iy_se , n , c , IH , IW , se * gradout ) ; <nl> + SAFE_ADD ( gradInput , ix_nw_cl , iy_nw_cl , n , c , IH , IW , nw * gradout ) ; <nl> + SAFE_ADD ( gradInput , ix_ne_cl , iy_ne_cl , n , c , IH , IW , ne * gradout ) ; <nl> + SAFE_ADD ( gradInput , ix_sw_cl , iy_sw_cl , n , c , IH , IW , sw * gradout ) ; 
<nl> + SAFE_ADD ( gradInput , ix_se_cl , iy_se_cl , n , c , IH , IW , se * gradout ) ; <nl> <nl> / / calculate gradGrid <nl> nw_val = ScalarConvert < int , Dtype > : : to ( 0 ) ; <nl> - if ( WITHIN_BOUNDS ( ix_nw , iy_nw , IH , IW ) ) { <nl> - nw_val = input [ n ] [ c ] [ iy_nw ] [ ix_nw ] ; <nl> + if ( WITHIN_BOUNDS ( ix_nw_cl , iy_nw_cl , IH , IW ) ) { <nl> + nw_val = input [ n ] [ c ] [ iy_nw_cl ] [ ix_nw_cl ] ; <nl> } <nl> ne_val = ScalarConvert < int , Dtype > : : to ( 0 ) ; <nl> - if ( WITHIN_BOUNDS ( ix_ne , iy_ne , IH , IW ) ) { <nl> - ne_val = input [ n ] [ c ] [ iy_ne ] [ ix_ne ] ; <nl> + if ( WITHIN_BOUNDS ( ix_ne_cl , iy_ne_cl , IH , IW ) ) { <nl> + ne_val = input [ n ] [ c ] [ iy_ne_cl ] [ ix_ne_cl ] ; <nl> } <nl> sw_val = ScalarConvert < int , Dtype > : : to ( 0 ) ; <nl> - if ( WITHIN_BOUNDS ( ix_sw , iy_sw , IH , IW ) ) { <nl> - sw_val = input [ n ] [ c ] [ iy_sw ] [ ix_sw ] ; <nl> + if ( WITHIN_BOUNDS ( ix_sw_cl , iy_sw_cl , IH , IW ) ) { <nl> + sw_val = input [ n ] [ c ] [ iy_sw_cl ] [ ix_sw_cl ] ; <nl> } <nl> se_val = ScalarConvert < int , Dtype > : : to ( 0 ) ; <nl> - if ( WITHIN_BOUNDS ( ix_se , iy_se , IH , IW ) ) { <nl> - se_val = input [ n ] [ c ] [ iy_se ] [ ix_se ] ; <nl> + if ( WITHIN_BOUNDS ( ix_se_cl , iy_se_cl , IH , IW ) ) { <nl> + se_val = input [ n ] [ c ] [ iy_se_cl ] [ ix_se_cl ] ; <nl> } <nl> <nl> gix + = ScalarConvert < int , Dtype > : : to ( - 1 ) * ( nw_val * ( iy_se - iy ) * gradout ) ; <nl> __global__ void SpatialGridSamplerBilinear_updateGradInput_kernel ( <nl> } <nl> } <nl> <nl> + # undef MIN <nl> + # undef MAX <nl> + # undef CLIP_COORDINATES <nl> # undef WITHIN_BOUNDS <nl> # undef SAFE_ADD <nl> <nl> mmm a / aten / src / THCUNN / generic / SpatialGridSamplerBilinear . cu <nl> ppp b / aten / src / THCUNN / generic / SpatialGridSamplerBilinear . 
cu <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateOutput ) ( <nl> THCState * state , <nl> THCTensor * input , <nl> THCTensor * grid , <nl> - THCTensor * output ) { <nl> + THCTensor * output , <nl> + int padding_mode ) { <nl> <nl> THCUNN_assertSameGPU ( state , 3 , input , grid , output ) ; <nl> THNN_ ( SpatialGridSamplerBilinear_shapeCheck ) ( state , input , grid , NULL ) ; <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateOutput ) ( <nl> int count = static_cast < int > ( N * H * W ) ; <nl> SpatialGridSamplerBilinear_updateOutput_kernel <nl> < < < GET_BLOCKS ( count ) , CUDA_NUM_THREADS , 0 , THCState_getCurrentStream ( state ) > > > ( <nl> - count , devInput , devGrid , devOutput ) ; <nl> + count , devInput , devGrid , devOutput , padding_mode ) ; <nl> THCudaCheck ( cudaGetLastError ( ) ) ; <nl> } <nl> <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateGradInput ) ( <nl> THCState * state , <nl> THCTensor * input , THCTensor * gradInput , <nl> THCTensor * grid , THCTensor * gradGrid , <nl> - THCTensor * gradOutput ) { <nl> + THCTensor * gradOutput , <nl> + int padding_mode ) { <nl> <nl> THCUNN_assertSameGPU ( state , 5 , input , gradInput , grid , gradGrid , gradOutput ) ; <nl> THNN_ ( SpatialGridSamplerBilinear_shapeCheck ) ( state , input , grid , gradOutput ) ; <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateGradInput ) ( <nl> int count = static_cast < int > ( N * H * W ) ; <nl> SpatialGridSamplerBilinear_updateGradInput_kernel <nl> < < < GET_BLOCKS ( count ) , CUDA_NUM_THREADS , 0 , THCState_getCurrentStream ( state ) > > > ( <nl> - count , devInput , devGradInput , devGrid , devGradGrid , devGradOutput ) ; <nl> + count , devInput , devGradInput , devGrid , devGradGrid , devGradOutput , padding_mode ) ; <nl> THCudaCheck ( cudaGetLastError ( ) ) ; <nl> } <nl> <nl> mmm a / aten / src / THCUNN / generic / THCUNN . h <nl> ppp b / aten / src / THCUNN / generic / THCUNN . 
h <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateOutput ) ( <nl> THCState * state , <nl> THCTensor * input , <nl> THCTensor * grid , <nl> - THCTensor * output ) ; <nl> + THCTensor * output , <nl> + int padding_mode ) ; <nl> <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateGradInput ) ( <nl> THCState * state , <nl> THCTensor * input , THCTensor * gradInput , <nl> THCTensor * grid , THCTensor * gradGrid , <nl> - THCTensor * gradOutput ) ; <nl> + THCTensor * gradOutput , <nl> + int padding_mode ) ; <nl> <nl> TH_API void THNN_ ( RReLU_updateOutput ) ( <nl> THCState * state , <nl> mmm a / aten / src / THNN / generic / SpatialGridSamplerBilinear . c <nl> ppp b / aten / src / THNN / generic / SpatialGridSamplerBilinear . c <nl> <nl> <nl> # undef MIN <nl> # define MIN ( a , b ) ( ( ( a ) < ( b ) ) ? ( a ) : ( b ) ) <nl> + # undef MAX <nl> + # define MAX ( a , b ) ( ( ( a ) > ( b ) ) ? ( a ) : ( b ) ) <nl> + <nl> + # undef MODE_BORDER <nl> + # define MODE_BORDER 1 <nl> <nl> static inline void THNN_ ( SpatialGridSamplerBilinear_shapeCheck ) <nl> ( THTensor * input , THTensor * grid , THTensor * gradOutput ) { <nl> THNN_ARGCHECK ( input - > nDimension = = 4 , 2 , input , <nl> - " 4D input tensor expected but got : % s " ) ; <nl> + " 4D input tensor expected but got : % s " ) ; <nl> THNN_ARGCHECK ( grid - > nDimension = = 4 , 2 , grid , <nl> - " 4D grid tensor expected but got : % s " ) ; <nl> + " 4D grid tensor expected but got : % s " ) ; <nl> <nl> int nbatch = THTensor_ ( size ) ( input , 0 ) ; <nl> int channels = THTensor_ ( size ) ( input , 1 ) ; <nl> static inline void THNN_ ( SpatialGridSamplerBilinear_shapeCheck ) <nl> # define SAFE_GET ( input , x , y , n , c , H , W ) x > = 0 & & x < W & & y > = 0 \ <nl> & & y < H ? 
THTensor_fastGet4d ( input , n , c , y , x ) : 0 <nl> <nl> + # define CLIP_COORDINATES ( in , out , clip_limit ) out = MIN ( ( clip_limit - 1 ) , MAX ( in , 0 ) ) <nl> + <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateOutput ) ( <nl> - THNNState * state , <nl> - THTensor * input , <nl> - THTensor * grid , <nl> - THTensor * output ) { <nl> + THNNState * state , <nl> + THTensor * input , <nl> + THTensor * grid , <nl> + THTensor * output , <nl> + int padding_mode ) { <nl> <nl> THNN_ ( SpatialGridSamplerBilinear_shapeCheck ) ( input , grid , NULL ) ; <nl> int N = THTensor_ ( size ) ( input , 0 ) ; <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateOutput ) ( <nl> int IW = THTensor_ ( size ) ( input , 3 ) ; <nl> int H = THTensor_ ( size ) ( grid , 1 ) ; <nl> int W = THTensor_ ( size ) ( grid , 2 ) ; <nl> - <nl> + <nl> / / resize output to the same shape as input <nl> THTensor_ ( resize4d ) ( output , N , C , H , W ) ; <nl> <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateOutput ) ( <nl> for ( n = 0 ; n < N ; + + n ) { <nl> for ( h = 0 ; h < H ; + + h ) { <nl> for ( w = 0 ; w < W ; + + w ) { <nl> - / / get the corresponding input x , y co - ordinates from grid <nl> - real ix = THTensor_fastGet4d ( grid , n , h , w , 0 ) ; <nl> - real iy = THTensor_fastGet4d ( grid , n , h , w , 1 ) ; <nl> - <nl> - / / normalize ix , iy from [ - 1 , 1 ] to [ 0 , IH - 1 ] & [ 0 , IW - 1 ] <nl> - ix = ( ( ix + 1 ) / 2 ) * ( IW - 1 ) ; <nl> - iy = ( ( iy + 1 ) / 2 ) * ( IH - 1 ) ; <nl> - <nl> - / / get NE , NW , SE , SW pixel values from ( x , y ) <nl> - int ix_nw = floor ( ix ) ; <nl> - int iy_nw = floor ( iy ) ; <nl> - int ix_ne = ix_nw + 1 ; <nl> - int iy_ne = iy_nw ; <nl> - int ix_sw = ix_nw ; <nl> - int iy_sw = iy_nw + 1 ; <nl> - int ix_se = ix_nw + 1 ; <nl> - int iy_se = iy_nw + 1 ; <nl> - <nl> - / / get surfaces to each neighbor : <nl> - real nw = ( ix_se - ix ) * ( iy_se - iy ) ; <nl> - real ne = ( ix - ix_sw ) * ( iy_sw - iy ) ; <nl> - real sw = ( ix_ne - 
ix ) * ( iy - iy_ne ) ; <nl> - real se = ( ix - ix_nw ) * ( iy - iy_nw ) ; <nl> - <nl> - / / calculate bilinear weighted pixel value and set output pixel <nl> - for ( c = 0 ; c < C ; + + c ) { <nl> - / / ( c , iy_nw , ix_nw ) * nw + ( c , iy_ne , ix_ne ) * ne <nl> - / / + ( c , iy_sw , ix_sw ) * sw + ( c , iy_se , ix_se ) * se <nl> - real nw_val = SAFE_GET ( input , ix_nw , iy_nw , n , c , IH , IW ) ; <nl> - real ne_val = SAFE_GET ( input , ix_ne , iy_ne , n , c , IH , IW ) ; <nl> - real sw_val = SAFE_GET ( input , ix_sw , iy_sw , n , c , IH , IW ) ; <nl> - real se_val = SAFE_GET ( input , ix_se , iy_se , n , c , IH , IW ) ; <nl> - real out_val = nw_val * nw + ne_val * ne + sw_val * sw + se_val * se ; <nl> - THTensor_fastSet4d ( output , n , c , h , w , out_val ) ; <nl> - } <nl> + / / get the corresponding input x , y co - ordinates from grid <nl> + real ix = THTensor_fastGet4d ( grid , n , h , w , 0 ) ; <nl> + real iy = THTensor_fastGet4d ( grid , n , h , w , 1 ) ; <nl> + <nl> + / / normalize ix , iy from [ - 1 , 1 ] to [ 0 , IH - 1 ] & [ 0 , IW - 1 ] <nl> + ix = ( ( ix + 1 ) / 2 ) * ( IW - 1 ) ; <nl> + iy = ( ( iy + 1 ) / 2 ) * ( IH - 1 ) ; <nl> + <nl> + / / get NE , NW , SE , SW pixel values from ( x , y ) <nl> + int ix_nw = floor ( ix ) ; <nl> + int iy_nw = floor ( iy ) ; <nl> + int ix_ne = ix_nw + 1 ; <nl> + int iy_ne = iy_nw ; <nl> + int ix_sw = ix_nw ; <nl> + int iy_sw = iy_nw + 1 ; <nl> + int ix_se = ix_nw + 1 ; <nl> + int iy_se = iy_nw + 1 ; <nl> + <nl> + / / get surfaces to each neighbor : <nl> + real nw = ( ix_se - ix ) * ( iy_se - iy ) ; <nl> + real ne = ( ix - ix_sw ) * ( iy_sw - iy ) ; <nl> + real sw = ( ix_ne - ix ) * ( iy - iy_ne ) ; <nl> + real se = ( ix - ix_nw ) * ( iy - iy_nw ) ; <nl> + <nl> + if ( padding_mode = = MODE_BORDER ) { <nl> + / / clip coordinates to image borders <nl> + CLIP_COORDINATES ( ix_nw , ix_nw , IW ) ; <nl> + CLIP_COORDINATES ( iy_nw , iy_nw , IH ) ; <nl> + CLIP_COORDINATES ( ix_ne , ix_ne , IW ) ; <nl> + CLIP_COORDINATES ( 
iy_ne , iy_ne , IH ) ; <nl> + CLIP_COORDINATES ( ix_sw , ix_sw , IW ) ; <nl> + CLIP_COORDINATES ( iy_sw , iy_sw , IH ) ; <nl> + CLIP_COORDINATES ( ix_se , ix_se , IW ) ; <nl> + CLIP_COORDINATES ( iy_se , iy_se , IH ) ; <nl> + } <nl> + <nl> + / / calculate bilinear weighted pixel value and set output pixel <nl> + for ( c = 0 ; c < C ; + + c ) { <nl> + / / ( c , iy_nw , ix_nw ) * nw + ( c , iy_ne , ix_ne ) * ne <nl> + / / + ( c , iy_sw , ix_sw ) * sw + ( c , iy_se , ix_se ) * se <nl> + real nw_val = SAFE_GET ( input , ix_nw , iy_nw , n , c , IH , IW ) ; <nl> + real ne_val = SAFE_GET ( input , ix_ne , iy_ne , n , c , IH , IW ) ; <nl> + real sw_val = SAFE_GET ( input , ix_sw , iy_sw , n , c , IH , IW ) ; <nl> + real se_val = SAFE_GET ( input , ix_se , iy_se , n , c , IH , IW ) ; <nl> + real out_val = nw_val * nw + ne_val * ne + sw_val * sw + se_val * se ; <nl> + THTensor_fastSet4d ( output , n , c , h , w , out_val ) ; <nl> + } <nl> } <nl> } <nl> } <nl> } <nl> <nl> - # define SAFE_ADD ( input , x , y , n , c , H , W , value ) \ <nl> - do { \ <nl> - if ( x > = 0 & & x < W & & y > = 0 & & y < H ) { \ <nl> - real old_value = THTensor_fastGet4d ( input , n , c , y , x ) ; \ <nl> - THTensor_fastSet4d ( input , n , c , y , x , value + old_value ) ; \ <nl> - } \ <nl> + # define SAFE_ADD ( input , x , y , n , c , H , W , value ) \ <nl> + do { \ <nl> + if ( x > = 0 & & x < W & & y > = 0 & & y < H ) { \ <nl> + real old_value = THTensor_fastGet4d ( input , n , c , y , x ) ; \ <nl> + THTensor_fastSet4d ( input , n , c , y , x , value + old_value ) ; \ <nl> + } \ <nl> } while ( 0 ) <nl> <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateGradInput ) ( <nl> - THNNState * state , <nl> - THTensor * input , THTensor * gradInput , <nl> - THTensor * grid , THTensor * gradGrid , <nl> - THTensor * gradOutput ) { <nl> + THNNState * state , <nl> + THTensor * input , THTensor * gradInput , <nl> + THTensor * grid , THTensor * gradGrid , <nl> + THTensor * gradOutput , <nl> + int 
padding_mode ) { <nl> <nl> THNN_ ( SpatialGridSamplerBilinear_shapeCheck ) ( input , grid , gradOutput ) ; <nl> int N = THTensor_ ( size ) ( input , 0 ) ; <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateGradInput ) ( <nl> for ( n = 0 ; n < N ; + + n ) { <nl> for ( h = 0 ; h < H ; + + h ) { <nl> for ( w = 0 ; w < W ; + + w ) { <nl> - / / get the corresponding input x , y co - ordinates from grid <nl> - real ix = THTensor_fastGet4d ( grid , n , h , w , 0 ) ; <nl> - real iy = THTensor_fastGet4d ( grid , n , h , w , 1 ) ; <nl> - <nl> - real gix = 0 ; <nl> - real giy = 0 ; <nl> - <nl> - / / normalize ix , iy from [ - 1 , 1 ] to [ 0 , H - 1 ] & [ 0 , W - 1 ] <nl> - ix = ( ( ix + 1 ) / 2 ) * ( IW - 1 ) ; <nl> - iy = ( ( iy + 1 ) / 2 ) * ( IH - 1 ) ; <nl> - <nl> - / / get NE , NW , SE , SW pixel values from ( x , y ) <nl> - int ix_nw = floor ( ix ) ; <nl> - int iy_nw = floor ( iy ) ; <nl> - int ix_ne = ix_nw + 1 ; <nl> - int iy_ne = iy_nw ; <nl> - int ix_sw = ix_nw ; <nl> - int iy_sw = iy_nw + 1 ; <nl> - int ix_se = ix_nw + 1 ; <nl> - int iy_se = iy_nw + 1 ; <nl> - <nl> - / / get surfaces to each neighbor : <nl> - real nw = ( ix_se - ix ) * ( iy_se - iy ) ; <nl> - real ne = ( ix - ix_sw ) * ( iy_sw - iy ) ; <nl> - real sw = ( ix_ne - ix ) * ( iy - iy_ne ) ; <nl> - real se = ( ix - ix_nw ) * ( iy - iy_nw ) ; <nl> - <nl> - for ( int c = 0 ; c < C ; + + c ) { <nl> - real gradout = THTensor_fastGet4d ( gradOutput , n , c , h , w ) ; <nl> - <nl> - / / calculate and set gradInput <nl> - SAFE_ADD ( gradInput , ix_nw , iy_nw , n , c , IH , IW , nw * gradout ) ; <nl> - SAFE_ADD ( gradInput , ix_ne , iy_ne , n , c , IH , IW , ne * gradout ) ; <nl> - SAFE_ADD ( gradInput , ix_sw , iy_sw , n , c , IH , IW , sw * gradout ) ; <nl> - SAFE_ADD ( gradInput , ix_se , iy_se , n , c , IH , IW , se * gradout ) ; <nl> - <nl> - / / calculate gradGrid <nl> - real nw_val = SAFE_GET ( input , ix_nw , iy_nw , n , c , IH , IW ) ; <nl> - real ne_val = SAFE_GET ( input , ix_ne , iy_ne , n , c 
, IH , IW ) ; <nl> - real sw_val = SAFE_GET ( input , ix_sw , iy_sw , n , c , IH , IW ) ; <nl> - real se_val = SAFE_GET ( input , ix_se , iy_se , n , c , IH , IW ) ; <nl> - <nl> - gix - = nw_val * ( iy_se - iy ) * gradout ; <nl> - gix + = ne_val * ( iy_sw - iy ) * gradout ; <nl> - gix - = sw_val * ( iy - iy_ne ) * gradout ; <nl> - gix + = se_val * ( iy - iy_nw ) * gradout ; <nl> - <nl> - giy - = nw_val * ( ix_se - ix ) * gradout ; <nl> - giy - = ne_val * ( ix - ix_sw ) * gradout ; <nl> - giy + = sw_val * ( ix_ne - ix ) * gradout ; <nl> - giy + = se_val * ( ix - ix_nw ) * gradout ; <nl> - } <nl> - <nl> - / / un - normalize gradGrid values back to [ - 1 , 1 ] constraints <nl> - gix = gix * ( IW - 1 ) / 2 ; <nl> - giy = giy * ( IH - 1 ) / 2 ; <nl> - <nl> - real gix_old = THTensor_fastGet4d ( gradGrid , n , h , w , 0 ) ; <nl> - real giy_old = THTensor_fastGet4d ( gradGrid , n , h , w , 1 ) ; <nl> - <nl> - THTensor_fastSet4d ( gradGrid , n , h , w , 0 , gix_old + gix ) ; <nl> - THTensor_fastSet4d ( gradGrid , n , h , w , 1 , giy_old + giy ) ; <nl> - <nl> + / / get the corresponding input x , y co - ordinates from grid <nl> + real ix = THTensor_fastGet4d ( grid , n , h , w , 0 ) ; <nl> + real iy = THTensor_fastGet4d ( grid , n , h , w , 1 ) ; <nl> + <nl> + real gix = 0 ; <nl> + real giy = 0 ; <nl> + <nl> + / / normalize ix , iy from [ - 1 , 1 ] to [ 0 , H - 1 ] & [ 0 , W - 1 ] <nl> + ix = ( ( ix + 1 ) / 2 ) * ( IW - 1 ) ; <nl> + iy = ( ( iy + 1 ) / 2 ) * ( IH - 1 ) ; <nl> + <nl> + / / get NE , NW , SE , SW pixel values from ( x , y ) <nl> + int ix_nw = floor ( ix ) ; <nl> + int iy_nw = floor ( iy ) ; <nl> + int ix_ne = ix_nw + 1 ; <nl> + int iy_ne = iy_nw ; <nl> + int ix_sw = ix_nw ; <nl> + int iy_sw = iy_nw + 1 ; <nl> + int ix_se = ix_nw + 1 ; <nl> + int iy_se = iy_nw + 1 ; <nl> + <nl> + / / get surfaces to each neighbor : <nl> + real nw = ( ix_se - ix ) * ( iy_se - iy ) ; <nl> + real ne = ( ix - ix_sw ) * ( iy_sw - iy ) ; <nl> + real sw = ( ix_ne - ix ) * ( iy - iy_ne 
) ; <nl> + real se = ( ix - ix_nw ) * ( iy - iy_nw ) ; <nl> + <nl> + int ix_nw_cl , iy_nw_cl , ix_ne_cl , iy_ne_cl , ix_sw_cl , iy_sw_cl , ix_se_cl , iy_se_cl ; <nl> + <nl> + if ( padding_mode = = MODE_BORDER ) { <nl> + / / get clipped NE , NW , SE , SW pixel values from ( x , y ) <nl> + CLIP_COORDINATES ( ix_nw , ix_nw_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_nw , iy_nw_cl , IH ) ; <nl> + CLIP_COORDINATES ( ix_ne , ix_ne_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_ne , iy_ne_cl , IH ) ; <nl> + CLIP_COORDINATES ( ix_sw , ix_sw_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_sw , iy_sw_cl , IH ) ; <nl> + CLIP_COORDINATES ( ix_se , ix_se_cl , IW ) ; <nl> + CLIP_COORDINATES ( iy_se , iy_se_cl , IH ) ; <nl> + } <nl> + else { <nl> + ix_nw_cl = ix_nw ; <nl> + iy_nw_cl = iy_nw ; <nl> + ix_ne_cl = ix_ne ; <nl> + iy_ne_cl = iy_ne ; <nl> + ix_sw_cl = ix_sw ; <nl> + iy_sw_cl = iy_sw ; <nl> + ix_se_cl = ix_se ; <nl> + iy_se_cl = iy_se ; <nl> + } <nl> + <nl> + for ( int c = 0 ; c < C ; + + c ) { <nl> + real gradout = THTensor_fastGet4d ( gradOutput , n , c , h , w ) ; <nl> + <nl> + / / calculate and set gradInput <nl> + SAFE_ADD ( gradInput , ix_nw_cl , iy_nw_cl , n , c , IH , IW , nw * gradout ) ; <nl> + SAFE_ADD ( gradInput , ix_ne_cl , iy_ne_cl , n , c , IH , IW , ne * gradout ) ; <nl> + SAFE_ADD ( gradInput , ix_sw_cl , iy_sw_cl , n , c , IH , IW , sw * gradout ) ; <nl> + SAFE_ADD ( gradInput , ix_se_cl , iy_se_cl , n , c , IH , IW , se * gradout ) ; <nl> + <nl> + / / calculate gradGrid <nl> + real nw_val = SAFE_GET ( input , ix_nw_cl , iy_nw_cl , n , c , IH , IW ) ; <nl> + real ne_val = SAFE_GET ( input , ix_ne_cl , iy_ne_cl , n , c , IH , IW ) ; <nl> + real sw_val = SAFE_GET ( input , ix_sw_cl , iy_sw_cl , n , c , IH , IW ) ; <nl> + real se_val = SAFE_GET ( input , ix_se_cl , iy_se_cl , n , c , IH , IW ) ; <nl> + <nl> + gix - = nw_val * ( iy_se - iy ) * gradout ; <nl> + gix + = ne_val * ( iy_sw - iy ) * gradout ; <nl> + gix - = sw_val * ( iy - iy_ne ) * gradout ; <nl> + gix + = 
se_val * ( iy - iy_nw ) * gradout ; <nl> + <nl> + giy - = nw_val * ( ix_se - ix ) * gradout ; <nl> + giy - = ne_val * ( ix - ix_sw ) * gradout ; <nl> + giy + = sw_val * ( ix_ne - ix ) * gradout ; <nl> + giy + = se_val * ( ix - ix_nw ) * gradout ; <nl> + } <nl> + <nl> + / / un - normalize gradGrid values back to [ - 1 , 1 ] constraints <nl> + gix = gix * ( IW - 1 ) / 2 ; <nl> + giy = giy * ( IH - 1 ) / 2 ; <nl> + <nl> + real gix_old = THTensor_fastGet4d ( gradGrid , n , h , w , 0 ) ; <nl> + real giy_old = THTensor_fastGet4d ( gradGrid , n , h , w , 1 ) ; <nl> + <nl> + THTensor_fastSet4d ( gradGrid , n , h , w , 0 , gix_old + gix ) ; <nl> + THTensor_fastSet4d ( gradGrid , n , h , w , 1 , giy_old + giy ) ; <nl> } <nl> } <nl> } <nl> } <nl> <nl> + <nl> # undef MIN <nl> + # undef MAX <nl> # undef SAFE_GET <nl> + # undef CLIP_COORDINATES <nl> # undef SAFE_ADD <nl> + # undef MODE_BORDER <nl> <nl> # endif <nl> mmm a / aten / src / THNN / generic / THNN . h <nl> ppp b / aten / src / THNN / generic / THNN . 
h <nl> TH_API void THNN_ ( SpatialUpSamplingBilinear_updateGradInput ) ( <nl> int outputWidth ) ; <nl> <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateOutput ) ( <nl> - THNNState * state , <nl> - THTensor * input , <nl> - THTensor * grid , <nl> - THTensor * output ) ; <nl> + THNNState * state , <nl> + THTensor * input , <nl> + THTensor * grid , <nl> + THTensor * output , <nl> + int padding_mode ) ; <nl> <nl> TH_API void THNN_ ( SpatialGridSamplerBilinear_updateGradInput ) ( <nl> - THNNState * state , <nl> - THTensor * input , THTensor * gradInput , <nl> - THTensor * grid , THTensor * gradGrid , <nl> - THTensor * gradOutput ) ; <nl> + THNNState * state , <nl> + THTensor * input , THTensor * gradInput , <nl> + THTensor * grid , THTensor * gradGrid , <nl> + THTensor * gradOutput , <nl> + int padding_mode ) ; <nl> <nl> TH_API void THNN_ ( unfolded_acc ) ( <nl> THTensor * finput , <nl> TH_API void THNN_ ( VolumetricUpSamplingTrilinear_updateOutput ) ( <nl> THNNState * state , <nl> THTensor * input , <nl> THTensor * output , <nl> - int outputDepth , <nl> + int outputDepth , <nl> int outputHeight , <nl> int outputWidth ) ; <nl> TH_API void THNN_ ( VolumetricUpSamplingTrilinear_updateGradInput ) ( <nl> mmm a / test / test_nn . py <nl> ppp b / test / test_nn . py <nl> def test_cosine_similarity ( self ) : <nl> self . assertTrue ( gradcheck ( lambda x , y : F . cosine_similarity ( x , y , dim = - 1 ) , ( input1 , input2 ) ) ) <nl> <nl> def test_grid_sample ( self ) : <nl> - # test known input on CPU <nl> - input = Variable ( torch . arange ( 1 , 11 ) . view ( 1 , 1 , 2 , 5 ) ) <nl> - grid = Variable ( torch . Tensor ( <nl> - [ [ - 1 , - 0 . 5 , 0 , 0 . 2 , 1 ] , <nl> - [ - 1 , - 0 . 333 , 0 , 0 . 5 , 1 ] , <nl> - [ - 1 , - 0 . 5 , 0 , 0 . 3333 , 1 ] , <nl> - [ - 1 , - 0 . 2 , 0 , 0 . 2 , 1 ] ] ) . view ( 1 , 2 , 5 , 2 ) ) <nl> - output = F . grid_sample ( input , grid ) <nl> - groundtruth = torch . Tensor ( <nl> - [ [ 2 . 2500 , 6 . 0000000000 , 5 . 0000 , 4 . 
8340 , 9 . 0000 ] , <nl> - [ 2 . 2500 , 6 . 333250045 , 5 . 0000 , 5 . 1000 , 8 . 4000 ] ] ) . view ( 1 , 1 , 2 , 5 ) <nl> - self . assertEqual ( output . data , groundtruth ) <nl> + def test_cpu_against_cuda ( N , C , H , W , padding_mode ) : <nl> + def test_shape ( N , C , IH , IW , H , W , padding_mode ) : <nl> <nl> - # do gradcheck <nl> - N = random . randint ( 1 , 8 ) <nl> - C = random . randint ( 1 , 8 ) <nl> - H = random . randint ( 1 , 8 ) <nl> - W = random . randint ( 1 , 8 ) <nl> - input = Variable ( torch . randn ( N , C , H , W ) , requires_grad = True ) <nl> - grid = Variable ( torch . randn ( N , H , W , 2 ) , requires_grad = True ) <nl> - self . assertTrue ( gradcheck ( lambda inp , grid : F . grid_sample ( inp , grid ) , ( input , grid ) ) ) <nl> - <nl> - def test_cpu_against_cuda ( N , C , H , W ) : <nl> - def test_shape ( N , C , IH , IW , H , W ) : <nl> input_cpu = Variable ( torch . randn ( C , N , IH , IW ) . transpose ( 0 , 1 ) , requires_grad = True ) <nl> grid_cpu = Variable ( torch . randn ( H , N , W , 2 ) . transpose ( 0 , 1 ) , requires_grad = True ) <nl> - out_cpu = F . grid_sample ( input_cpu , grid_cpu ) <nl> + out_cpu = F . grid_sample ( input_cpu , grid_cpu , padding_mode = padding_mode ) <nl> self . assertTrue ( out_cpu . size ( ) = = torch . Size ( [ N , C , H , W ] ) ) <nl> <nl> input_cuda = Variable ( input_cpu . data . transpose ( 0 , 1 ) . cuda ( ) . transpose ( 0 , 1 ) , requires_grad = True ) <nl> grid_cuda = Variable ( grid_cpu . data . transpose ( 0 , 1 ) . cuda ( ) . transpose ( 0 , 1 ) , requires_grad = True ) <nl> - out_cuda = F . grid_sample ( input_cuda , grid_cuda ) <nl> + out_cuda = F . grid_sample ( input_cuda , grid_cuda , padding_mode = padding_mode ) <nl> self . assertEqual ( out_cpu , out_cuda ) <nl> <nl> gradients = out_cpu . data . new ( out_cpu . size ( ) ) . normal_ ( ) <nl> def test_shape ( N , C , IH , IW , H , W ) : <nl> base_input = torch . randn ( C , IH , IW ) <nl> input_cpu = Variable ( base_input . 
expand ( input_cuda . size ( ) ) , requires_grad = True ) <nl> grid_cpu = Variable ( torch . randn ( N , H , W , 2 ) , requires_grad = True ) <nl> - out_cpu = F . grid_sample ( input_cpu , grid_cpu ) <nl> + out_cpu = F . grid_sample ( input_cpu , grid_cpu , padding_mode = padding_mode ) <nl> <nl> input_cuda = Variable ( base_input . cuda ( ) . expand ( input_cuda . size ( ) ) , requires_grad = True ) <nl> grid_cuda = Variable ( grid_cpu . data . cuda ( ) , requires_grad = True ) <nl> - out_cuda = F . grid_sample ( input_cuda , grid_cuda ) <nl> + out_cuda = F . grid_sample ( input_cuda , grid_cuda , padding_mode = padding_mode ) <nl> self . assertEqual ( out_cpu , out_cuda ) <nl> <nl> # test same size output <nl> - test_shape ( N , C , H , W , H , W ) <nl> + test_shape ( N , C , H , W , H , W , padding_mode ) <nl> <nl> # test larger output <nl> N = random . randint ( 1 , 8 ) <nl> def test_shape ( N , C , IH , IW , H , W ) : <nl> IW = random . randint ( 1 , 8 ) <nl> H = random . randint ( IH + 1 , 12 ) <nl> W = random . randint ( IH + 1 , 12 ) <nl> - test_shape ( N , C , IH , IW , H , W ) <nl> + test_shape ( N , C , IH , IW , H , W , padding_mode ) <nl> <nl> # test smaller output <nl> N = random . randint ( 1 , 8 ) <nl> def test_shape ( N , C , IH , IW , H , W ) : <nl> IW = random . randint ( 1 , 8 ) <nl> H = random . randint ( 1 , IH ) <nl> W = random . randint ( 1 , IW ) <nl> - test_shape ( N , C , IH , IW , H , W ) <nl> + test_shape ( N , C , IH , IW , H , W , padding_mode ) <nl> <nl> - # test CUDNN against CPU <nl> - if TEST_CUDNN : <nl> - test_cpu_against_cuda ( N , C , H , W ) <nl> + # test known input on CPU <nl> + for padding_mode in [ ' zeros ' , ' border ' ] : <nl> + <nl> + input = Variable ( torch . arange ( 1 , 11 ) . view ( 1 , 1 , 2 , 5 ) ) <nl> + grid = Variable ( torch . Tensor ( <nl> + [ [ - 0 . 9 , - 1 . 4 , 0 , 0 . 2 , 1 ] , <nl> + [ - 1 , - 0 . 333 , 0 , 0 . 5 , 1 ] , <nl> + [ - 1 , - 0 . 5 , 0 , 0 . 3333 , 1 ] , <nl> + [ - 1 , - 0 . 2 , 0 , 1 . 
1 , 0 . 5 ] ] ) . view ( 1 , 2 , 5 , 2 ) ) <nl> + output = F . grid_sample ( input , grid , padding_mode = padding_mode ) <nl> + <nl> + if padding_mode = = ' zeros ' : <nl> + groundtruth = torch . Tensor ( <nl> + [ [ 0 . 9600 , 6 . 0000000000 , 5 . 0000 , 4 . 8340 , 9 . 0000 ] , <nl> + [ 2 . 2500 , 6 . 333250045 , 5 . 0000 , 5 . 1000 , 7 . 0000 ] ] ) . view ( 1 , 1 , 2 , 5 ) <nl> + else : <nl> + groundtruth = torch . Tensor ( <nl> + [ [ 1 . 2000 , 6 . 0000000000 , 5 . 0000 , 4 . 8340 , 9 . 0000 ] , <nl> + [ 2 . 2500 , 6 . 333250045 , 5 . 0000 , 5 . 1000 , 8 . 7500 ] ] ) . view ( 1 , 1 , 2 , 5 ) <nl> <nl> - # test CUDA ( without CUDNN ) against CPU <nl> - if TEST_CUDA : <nl> + self . assertEqual ( output . data , groundtruth ) <nl> <nl> - # GridSampler will automatically use CUDNN if it is available <nl> - # so we disable CUDNN temporarily <nl> - original_cudnn_enabled = cudnn . enabled <nl> - cudnn . enabled = False <nl> - test_cpu_against_cuda ( N , C , H , W ) <nl> - cudnn . enabled = original_cudnn_enabled <nl> + # do gradcheck <nl> + N = random . randint ( 1 , 8 ) <nl> + C = random . randint ( 1 , 8 ) <nl> + H = random . randint ( 1 , 8 ) <nl> + W = random . randint ( 1 , 8 ) <nl> + input = Variable ( torch . randn ( N , C , H , W ) , requires_grad = True ) <nl> + grid = Variable ( torch . randn ( N , H , W , 2 ) , requires_grad = True ) <nl> + self . assertTrue ( gradcheck ( <nl> + lambda inp , grid : F . grid_sample ( inp , grid , padding_mode = padding_mode ) , <nl> + ( input , grid ) ) ) <nl> + <nl> + # test CUDA against CPU <nl> + if TEST_CUDA : <nl> + test_cpu_against_cuda ( N , C , H , W , padding_mode ) <nl> <nl> def test_affine_grid ( self ) : <nl> # test known input on CPU <nl> mmm a / torch / nn / _functions / vision . py <nl> ppp b / torch / nn / _functions / vision . py <nl> <nl> from . thnn . auto import function_by_name <nl> import torch . backends . 
cudnn as cudnn <nl> <nl> + MODE_ZEROS = 0 <nl> + MODE_BORDER = 1 <nl> + <nl> <nl> class GridSampler ( Function ) : <nl> <nl> @ staticmethod <nl> - def forward ( ctx , input , grid ) : <nl> + def forward ( ctx , input , grid , padding_mode = ' zeros ' ) : <nl> ctx . save_for_backward ( input , grid ) <nl> + <nl> + if padding_mode = = ' zeros ' : <nl> + ctx . padding_mode = MODE_ZEROS <nl> + elif padding_mode = = ' border ' : <nl> + ctx . padding_mode = MODE_BORDER <nl> + else : <nl> + raise ValueError ( " padding_mode needs to be ' zeros ' or ' border ' , but got { } " <nl> + . format ( padding_mode ) ) <nl> + <nl> grid_sz = grid . size ( ) <nl> - if cudnn . is_acceptable ( input ) : <nl> + if cudnn . is_acceptable ( input ) and padding_mode = = ' zeros ' : <nl> output = input . new ( grid_sz [ 0 ] , input . size ( 1 ) , grid_sz [ 1 ] , grid_sz [ 2 ] ) <nl> grid = grid . contiguous ( ) <nl> if 0 in input . stride ( ) : <nl> def forward ( ctx , input , grid ) : <nl> else : <nl> backend = type2backend [ type ( input ) ] <nl> output = input . new ( grid_sz [ 0 ] , input . size ( 1 ) , grid_sz [ 1 ] , grid_sz [ 2 ] ) <nl> - backend . SpatialGridSamplerBilinear_updateOutput ( backend . library_state , input , grid , output ) <nl> + backend . SpatialGridSamplerBilinear_updateOutput ( <nl> + backend . library_state , input , grid , output , ctx . padding_mode ) <nl> return output <nl> <nl> @ staticmethod <nl> @ once_differentiable <nl> def backward ( ctx , grad_output ) : <nl> input , grid = ctx . saved_tensors <nl> - if cudnn . is_acceptable ( input ) : <nl> + padding_mode = ctx . padding_mode <nl> + <nl> + if cudnn . is_acceptable ( input ) and padding_mode = = ' zeros ' : <nl> grad_input = input . new ( input . size ( ) ) <nl> grad_grid = grid . new ( grid . size ( ) ) <nl> grid = grid . contiguous ( ) <nl> def backward ( ctx , grad_output ) : <nl> grad_grid = grid . new ( grid . size ( ) ) <nl> backend . SpatialGridSamplerBilinear_updateGradInput ( <nl> backend . 
library_state , input , grad_input , <nl> - grid , grad_grid , grad_output ) <nl> - return grad_input , grad_grid <nl> + grid , grad_grid , grad_output , padding_mode ) <nl> + return grad_input , grad_grid , None <nl> <nl> <nl> class AffineGridGenerator ( Function ) : <nl> mmm a / torch / nn / functional . py <nl> ppp b / torch / nn / functional . py <nl> def upsample_bilinear ( input , size = None , scale_factor = None ) : <nl> return upsample ( input , size , scale_factor , mode = ' bilinear ' ) <nl> <nl> <nl> - def grid_sample ( input , grid , mode = ' bilinear ' ) : <nl> + def grid_sample ( input , grid , mode = ' bilinear ' , padding_mode = ' zeros ' ) : <nl> r " " " Given an : attr : ` input ` and a flow - field : attr : ` grid ` , computes the <nl> ` output ` using input pixel locations from the grid . <nl> <nl> def grid_sample ( input , grid , mode = ' bilinear ' ) : <nl> values : x : 1 , y : 1 is the right - bottom pixel of the input <nl> <nl> If : attr : ` grid ` has values outside the range of ` [ - 1 , 1 ] ` , those locations <nl> - are ignored ( i . e . 0 is used as a contribution to the bilinear interpolation ) <nl> + are handled as defined by ` padding_mode ` . Options are ` zeros ` or ` border ` , <nl> + defining those locations to use 0 or image border values as contribution <nl> + to the bilinear interpolation . <nl> <nl> . . Note : : This function is used in building Spatial Transformer Networks <nl> <nl> Args : <nl> input ( Variable ) : input batch of images ( N x C x IH x IW ) <nl> grid ( Variable ) : flow - field of size ( N x OH x OW x 2 ) <nl> + padding_mode ( str ) : padding mode for outside grid values <nl> + ' zeros ' | ' border ' . Default : ' zeros ' <nl> <nl> Returns : <nl> output ( Variable ) : output Tensor <nl> <nl> " " " <nl> batch_size , channels , in_height , in_width = input . size ( ) <nl> - return GridSampler . apply ( input , grid ) <nl> + return GridSampler . 
apply ( input , grid , padding_mode ) <nl> <nl> <nl> def affine_grid ( theta , size ) : <nl>
|
Add border - padding for grid_sampler ( )
|
pytorch/pytorch
|
e33df2b88a31c08567c81f49c45f1eb530cd7ef4
|
2017-11-12T23:46:49Z
|
mmm a / xbmc / addons / Skin . cpp <nl> ppp b / xbmc / addons / Skin . cpp <nl> void CSkinInfo : : SettingOptionsStartupWindowsFiller ( const CSetting * setting , std : <nl> current = list [ 0 ] . second ; <nl> } <nl> <nl> + void CSkinInfo : : ToggleDebug ( ) <nl> + { <nl> + m_debugging = ! m_debugging ; <nl> + } <nl> + <nl> int CSkinInfo : : TranslateString ( const string & setting ) <nl> { <nl> / / run through and see if we have this setting <nl> mmm a / xbmc / addons / Skin . h <nl> ppp b / xbmc / addons / Skin . h <nl> class CSkinInfo : public CAddon <nl> const std : : string & GetCurrentAspect ( ) const { return m_currentAspect ; } <nl> <nl> void LoadIncludes ( ) ; <nl> + void ToggleDebug ( ) ; <nl> const INFO : : CSkinVariableString * CreateSkinVariable ( const std : : string & name , int context ) ; <nl> <nl> static void SettingOptionsSkinColorsFiller ( const CSetting * setting , std : : vector < std : : pair < std : : string , std : : string > > & list , std : : string & current , void * data ) ; <nl> mmm a / xbmc / interfaces / Builtins . cpp <nl> ppp b / xbmc / interfaces / Builtins . cpp <nl> <nl> # include " addons / AddonInstaller . h " <nl> # include " addons / AddonManager . h " <nl> # include " addons / PluginSource . h " <nl> + # include " addons / Skin . h " <nl> # include " interfaces / generic / ScriptInvocationManager . h " <nl> # include " interfaces / AnnouncementManager . h " <nl> # include " network / NetworkServices . h " <nl> const BUILT_IN commands [ ] = { <nl> { " PlayDVD " , false , " Plays the inserted CD or DVD media from the DVD - ROM Drive ! " } , <nl> { " RipCD " , false , " Rip the currently inserted audio CD " } , <nl> { " Skin . ToggleSetting " , true , " Toggles a skin setting on or off " } , <nl> + { " Skin . ToggleDebug " , false , " Toggles skin debug info on or off " } , <nl> { " Skin . SetString " , true , " Prompts and sets skin string " } , <nl> { " Skin . 
SetNumeric " , true , " Prompts and sets numeric input " } , <nl> { " Skin . SetPath " , true , " Prompts and sets a skin path " } , <nl> int CBuiltins : : Execute ( const std : : string & execString ) <nl> CSettings : : Get ( ) . Save ( ) ; <nl> } <nl> } <nl> + else if ( execute = = " skin . toggledebug " ) <nl> + { <nl> + g_SkinInfo - > ToggleDebug ( ) ; <nl> + } <nl> else if ( execute = = " dialog . close " & & params . size ( ) ) <nl> { <nl> bool bForce = false ; <nl>
|
add Skin . ToggleDebug function
|
xbmc/xbmc
|
1e8df463dafb2ccbc8902670e70b22846072afdd
|
2015-07-13T21:58:02Z
|
mmm a / modules / ocl / perf / perf_opticalflow . cpp <nl> ppp b / modules / ocl / perf / perf_opticalflow . cpp <nl> <nl> / / / / / / / / / / / / / PyrLKOpticalFlow / / / / / / / / / / / / / / / / / / / / / / / / <nl> PERFTEST ( PyrLKOpticalFlow ) <nl> { <nl> - std : : string images1 [ ] = { " rubberwhale1 . png " , " basketball1 . png " } ; <nl> - std : : string images2 [ ] = { " rubberwhale2 . png " , " basketball2 . png " } ; <nl> + std : : string images1 [ ] = { " rubberwhale1 . png " , " aloeL . jpg " } ; <nl> + std : : string images2 [ ] = { " rubberwhale2 . png " , " aloeR . jpg " } ; <nl> <nl> for ( size_t i = 0 ; i < sizeof ( images1 ) / sizeof ( std : : string ) ; i + + ) <nl> { <nl> mmm a / modules / ocl / src / hog . cpp <nl> ppp b / modules / ocl / src / hog . cpp <nl> using namespace std ; <nl> <nl> static oclMat gauss_w_lut ; <nl> static bool hog_device_cpu ; <nl> - / * pre - compute gaussian and interp_weight lookup tables if sigma is 4 . 0f * / <nl> - static const float gaussian_interp_lut [ ] = <nl> - { <nl> - / * gaussian lut * / <nl> - 0 . 01831564f , 0 . 02926831f , 0 . 04393693f , 0 . 06196101f , 0 . 08208500f , 0 . 10215643f , <nl> - 0 . 11943297f , 0 . 13117145f , 0 . 13533528f , 0 . 13117145f , 0 . 11943297f , 0 . 10215643f , <nl> - 0 . 08208500f , 0 . 06196101f , 0 . 04393693f , 0 . 02926831f , 0 . 02926831f , 0 . 04677062f , <nl> - 0 . 07021102f , 0 . 09901341f , 0 . 13117145f , 0 . 16324551f , 0 . 19085334f , 0 . 20961139f , <nl> - 0 . 21626517f , 0 . 20961139f , 0 . 19085334f , 0 . 16324551f , 0 . 13117145f , 0 . 09901341f , <nl> - 0 . 07021102f , 0 . 04677062f , 0 . 04393693f , 0 . 07021102f , 0 . 10539922f , 0 . 14863673f , <nl> - 0 . 19691168f , 0 . 24506053f , 0 . 28650481f , 0 . 31466395f , 0 . 32465246f , 0 . 31466395f , <nl> - 0 . 28650481f , 0 . 24506053f , 0 . 19691168f , 0 . 14863673f , 0 . 10539922f , 0 . 07021102f , <nl> - 0 . 06196101f , 0 . 09901341f , 0 . 14863673f , 0 . 20961139f , 0 . 27768996f , 0 . 
34559074f , <nl> - 0 . 40403652f , 0 . 44374731f , 0 . 45783335f , 0 . 44374731f , 0 . 40403652f , 0 . 34559074f , <nl> - 0 . 27768996f , 0 . 20961139f , 0 . 14863673f , 0 . 09901341f , 0 . 08208500f , 0 . 13117145f , <nl> - 0 . 19691168f , 0 . 27768996f , 0 . 36787945f , 0 . 45783335f , 0 . 53526145f , 0 . 58786964f , <nl> - 0 . 60653067f , 0 . 58786964f , 0 . 53526145f , 0 . 45783335f , 0 . 36787945f , 0 . 27768996f , <nl> - 0 . 19691168f , 0 . 13117145f , 0 . 10215643f , 0 . 16324551f , 0 . 24506053f , 0 . 34559074f , <nl> - 0 . 45783335f , 0 . 56978285f , 0 . 66614360f , 0 . 73161560f , 0 . 75483960f , 0 . 73161560f , <nl> - 0 . 66614360f , 0 . 56978285f , 0 . 45783335f , 0 . 34559074f , 0 . 24506053f , 0 . 16324551f , <nl> - 0 . 11943297f , 0 . 19085334f , 0 . 28650481f , 0 . 40403652f , 0 . 53526145f , 0 . 66614360f , <nl> - 0 . 77880079f , 0 . 85534531f , 0 . 88249689f , 0 . 85534531f , 0 . 77880079f , 0 . 66614360f , <nl> - 0 . 53526145f , 0 . 40403652f , 0 . 28650481f , 0 . 19085334f , 0 . 13117145f , 0 . 20961139f , <nl> - 0 . 31466395f , 0 . 44374731f , 0 . 58786964f , 0 . 73161560f , 0 . 85534531f , 0 . 93941307f , <nl> - 0 . 96923321f , 0 . 93941307f , 0 . 85534531f , 0 . 73161560f , 0 . 58786964f , 0 . 44374731f , <nl> - 0 . 31466395f , 0 . 20961139f , 0 . 13533528f , 0 . 21626517f , 0 . 32465246f , 0 . 45783335f , <nl> - 0 . 60653067f , 0 . 75483960f , 0 . 88249689f , 0 . 96923321f , 1 . 00000000f , 0 . 96923321f , <nl> - 0 . 88249689f , 0 . 75483960f , 0 . 60653067f , 0 . 45783335f , 0 . 32465246f , 0 . 21626517f , <nl> - 0 . 13117145f , 0 . 20961139f , 0 . 31466395f , 0 . 44374731f , 0 . 58786964f , 0 . 73161560f , <nl> - 0 . 85534531f , 0 . 93941307f , 0 . 96923321f , 0 . 93941307f , 0 . 85534531f , 0 . 73161560f , <nl> - 0 . 58786964f , 0 . 44374731f , 0 . 31466395f , 0 . 20961139f , 0 . 11943297f , 0 . 19085334f , <nl> - 0 . 28650481f , 0 . 40403652f , 0 . 53526145f , 0 . 66614360f , 0 . 77880079f , 0 . 85534531f , <nl> - 0 . 88249689f , 0 . 
85534531f , 0 . 77880079f , 0 . 66614360f , 0 . 53526145f , 0 . 40403652f , <nl> - 0 . 28650481f , 0 . 19085334f , 0 . 10215643f , 0 . 16324551f , 0 . 24506053f , 0 . 34559074f , <nl> - 0 . 45783335f , 0 . 56978285f , 0 . 66614360f , 0 . 73161560f , 0 . 75483960f , 0 . 73161560f , <nl> - 0 . 66614360f , 0 . 56978285f , 0 . 45783335f , 0 . 34559074f , 0 . 24506053f , 0 . 16324551f , <nl> - 0 . 08208500f , 0 . 13117145f , 0 . 19691168f , 0 . 27768996f , 0 . 36787945f , 0 . 45783335f , <nl> - 0 . 53526145f , 0 . 58786964f , 0 . 60653067f , 0 . 58786964f , 0 . 53526145f , 0 . 45783335f , <nl> - 0 . 36787945f , 0 . 27768996f , 0 . 19691168f , 0 . 13117145f , 0 . 06196101f , 0 . 09901341f , <nl> - 0 . 14863673f , 0 . 20961139f , 0 . 27768996f , 0 . 34559074f , 0 . 40403652f , 0 . 44374731f , <nl> - 0 . 45783335f , 0 . 44374731f , 0 . 40403652f , 0 . 34559074f , 0 . 27768996f , 0 . 20961139f , <nl> - 0 . 14863673f , 0 . 09901341f , 0 . 04393693f , 0 . 07021102f , 0 . 10539922f , 0 . 14863673f , <nl> - 0 . 19691168f , 0 . 24506053f , 0 . 28650481f , 0 . 31466395f , 0 . 32465246f , 0 . 31466395f , <nl> - 0 . 28650481f , 0 . 24506053f , 0 . 19691168f , 0 . 14863673f , 0 . 10539922f , 0 . 07021102f , <nl> - 0 . 02926831f , 0 . 04677062f , 0 . 07021102f , 0 . 09901341f , 0 . 13117145f , 0 . 16324551f , <nl> - 0 . 19085334f , 0 . 20961139f , 0 . 21626517f , 0 . 20961139f , 0 . 19085334f , 0 . 16324551f , <nl> - 0 . 13117145f , 0 . 09901341f , 0 . 07021102f , 0 . 04677062f , <nl> - / * interp_weight lut * / <nl> - 0 . 00390625f , 0 . 01171875f , 0 . 01953125f , 0 . 02734375f , 0 . 03515625f , 0 . 04296875f , <nl> - 0 . 05078125f , 0 . 05859375f , 0 . 05859375f , 0 . 05078125f , 0 . 04296875f , 0 . 03515625f , <nl> - 0 . 02734375f , 0 . 01953125f , 0 . 01171875f , 0 . 00390625f , 0 . 01171875f , 0 . 03515625f , <nl> - 0 . 05859375f , 0 . 08203125f , 0 . 10546875f , 0 . 12890625f , 0 . 15234375f , 0 . 17578125f , <nl> - 0 . 17578125f , 0 . 15234375f , 0 . 12890625f , 0 . 
10546875f , 0 . 08203125f , 0 . 05859375f , <nl> - 0 . 03515625f , 0 . 01171875f , 0 . 01953125f , 0 . 05859375f , 0 . 09765625f , 0 . 13671875f , <nl> - 0 . 17578125f , 0 . 21484375f , 0 . 25390625f , 0 . 29296875f , 0 . 29296875f , 0 . 25390625f , <nl> - 0 . 21484375f , 0 . 17578125f , 0 . 13671875f , 0 . 09765625f , 0 . 05859375f , 0 . 01953125f , <nl> - 0 . 02734375f , 0 . 08203125f , 0 . 13671875f , 0 . 19140625f , 0 . 24609375f , 0 . 30078125f , <nl> - 0 . 35546875f , 0 . 41015625f , 0 . 41015625f , 0 . 35546875f , 0 . 30078125f , 0 . 24609375f , <nl> - 0 . 19140625f , 0 . 13671875f , 0 . 08203125f , 0 . 02734375f , 0 . 03515625f , 0 . 10546875f , <nl> - 0 . 17578125f , 0 . 24609375f , 0 . 31640625f , 0 . 38671875f , 0 . 45703125f , 0 . 52734375f , <nl> - 0 . 52734375f , 0 . 45703125f , 0 . 38671875f , 0 . 31640625f , 0 . 24609375f , 0 . 17578125f , <nl> - 0 . 10546875f , 0 . 03515625f , 0 . 04296875f , 0 . 12890625f , 0 . 21484375f , 0 . 30078125f , <nl> - 0 . 38671875f , 0 . 47265625f , 0 . 55859375f , 0 . 64453125f , 0 . 64453125f , 0 . 55859375f , <nl> - 0 . 47265625f , 0 . 38671875f , 0 . 30078125f , 0 . 21484375f , 0 . 12890625f , 0 . 04296875f , <nl> - 0 . 05078125f , 0 . 15234375f , 0 . 25390625f , 0 . 35546875f , 0 . 45703125f , 0 . 55859375f , <nl> - 0 . 66015625f , 0 . 76171875f , 0 . 76171875f , 0 . 66015625f , 0 . 55859375f , 0 . 45703125f , <nl> - 0 . 35546875f , 0 . 25390625f , 0 . 15234375f , 0 . 05078125f , 0 . 05859375f , 0 . 17578125f , <nl> - 0 . 29296875f , 0 . 41015625f , 0 . 52734375f , 0 . 64453125f , 0 . 76171875f , 0 . 87890625f , <nl> - 0 . 87890625f , 0 . 76171875f , 0 . 64453125f , 0 . 52734375f , 0 . 41015625f , 0 . 29296875f , <nl> - 0 . 17578125f , 0 . 05859375f , 0 . 05859375f , 0 . 17578125f , 0 . 29296875f , 0 . 41015625f , <nl> - 0 . 52734375f , 0 . 64453125f , 0 . 76171875f , 0 . 87890625f , 0 . 87890625f , 0 . 76171875f , <nl> - 0 . 64453125f , 0 . 52734375f , 0 . 41015625f , 0 . 29296875f , 0 . 17578125f , 0 . 
05859375f , <nl> - 0 . 05078125f , 0 . 15234375f , 0 . 25390625f , 0 . 35546875f , 0 . 45703125f , 0 . 55859375f , <nl> - 0 . 66015625f , 0 . 76171875f , 0 . 76171875f , 0 . 66015625f , 0 . 55859375f , 0 . 45703125f , <nl> - 0 . 35546875f , 0 . 25390625f , 0 . 15234375f , 0 . 05078125f , 0 . 04296875f , 0 . 12890625f , <nl> - 0 . 21484375f , 0 . 30078125f , 0 . 38671875f , 0 . 47265625f , 0 . 55859375f , 0 . 64453125f , <nl> - 0 . 64453125f , 0 . 55859375f , 0 . 47265625f , 0 . 38671875f , 0 . 30078125f , 0 . 21484375f , <nl> - 0 . 12890625f , 0 . 04296875f , 0 . 03515625f , 0 . 10546875f , 0 . 17578125f , 0 . 24609375f , <nl> - 0 . 31640625f , 0 . 38671875f , 0 . 45703125f , 0 . 52734375f , 0 . 52734375f , 0 . 45703125f , <nl> - 0 . 38671875f , 0 . 31640625f , 0 . 24609375f , 0 . 17578125f , 0 . 10546875f , 0 . 03515625f , <nl> - 0 . 02734375f , 0 . 08203125f , 0 . 13671875f , 0 . 19140625f , 0 . 24609375f , 0 . 30078125f , <nl> - 0 . 35546875f , 0 . 41015625f , 0 . 41015625f , 0 . 35546875f , 0 . 30078125f , 0 . 24609375f , <nl> - 0 . 19140625f , 0 . 13671875f , 0 . 08203125f , 0 . 02734375f , 0 . 01953125f , 0 . 05859375f , <nl> - 0 . 09765625f , 0 . 13671875f , 0 . 17578125f , 0 . 21484375f , 0 . 25390625f , 0 . 29296875f , <nl> - 0 . 29296875f , 0 . 25390625f , 0 . 21484375f , 0 . 17578125f , 0 . 13671875f , 0 . 09765625f , <nl> - 0 . 05859375f , 0 . 01953125f , 0 . 01171875f , 0 . 03515625f , 0 . 05859375f , 0 . 08203125f , <nl> - 0 . 10546875f , 0 . 12890625f , 0 . 15234375f , 0 . 17578125f , 0 . 17578125f , 0 . 15234375f , <nl> - 0 . 12890625f , 0 . 10546875f , 0 . 08203125f , 0 . 05859375f , 0 . 03515625f , 0 . 01171875f , <nl> - 0 . 00390625f , 0 . 01171875f , 0 . 01953125f , 0 . 02734375f , 0 . 03515625f , 0 . 04296875f , <nl> - 0 . 05078125f , 0 . 05859375f , 0 . 05859375f , 0 . 05078125f , 0 . 04296875f , 0 . 03515625f , <nl> - 0 . 02734375f , 0 . 01953125f , 0 . 01171875f , 0 . 
00390625f <nl> - } ; <nl> <nl> namespace cv <nl> { <nl> namespace cv <nl> int nblocks_win_x , int nblocks_win_y ) ; <nl> <nl> void compute_hists ( int nbins , int block_stride_x , int blovck_stride_y , <nl> - int height , int width , float sigma , const cv : : ocl : : oclMat & grad , <nl> + int height , int width , const cv : : ocl : : oclMat & grad , <nl> const cv : : ocl : : oclMat & qangle , <nl> const cv : : ocl : : oclMat & gauss_w_lut , cv : : ocl : : oclMat & block_hists ) ; <nl> <nl> cv : : ocl : : HOGDescriptor : : HOGDescriptor ( Size win_size_ , Size block_size_ , Size blo <nl> <nl> effect_size = Size ( 0 , 0 ) ; <nl> <nl> - if ( queryDeviceInfo < IS_CPU_DEVICE , bool > ( ) ) <nl> + if ( queryDeviceInfo < IS_CPU_DEVICE , bool > ( ) ) <nl> hog_device_cpu = true ; <nl> else <nl> hog_device_cpu = false ; <nl> void cv : : ocl : : HOGDescriptor : : init_buffer ( const oclMat & img , Size win_stride ) <nl> Size wins_per_img = numPartsWithin ( img . size ( ) , win_size , win_stride ) ; <nl> labels . create ( 1 , wins_per_img . area ( ) , CV_8U ) ; <nl> <nl> - vector < float > v_lut = vector < float > ( gaussian_interp_lut , gaussian_interp_lut + <nl> - sizeof ( gaussian_interp_lut ) / sizeof ( gaussian_interp_lut [ 0 ] ) ) ; <nl> - Mat m_lut ( v_lut ) ; <nl> - gauss_w_lut . upload ( m_lut . reshape ( 1 , 1 ) ) ; <nl> + float sigma = getWinSigma ( ) ; <nl> + float scale = 1 . f / ( 2 . f * sigma * sigma ) ; <nl> + Mat gaussian_lut ( 1 , 512 , CV_32FC1 ) ; <nl> + int idx = 0 ; <nl> + for ( int i = - 8 ; i < 8 ; i + + ) <nl> + for ( int j = - 8 ; j < 8 ; j + + ) <nl> + gaussian_lut . at < float > ( idx + + ) = std : : exp ( - ( j * j + i * i ) * scale ) ; <nl> + for ( int i = - 8 ; i < 8 ; i + + ) <nl> + for ( int j = - 8 ; j < 8 ; j + + ) <nl> + gaussian_lut . at < float > ( idx + + ) = ( 8 . f - fabs ( j + 0 . 5f ) ) * ( 8 . f - fabs ( i + 0 . 5f ) ) / 64 . f ; <nl> + <nl> + gauss_w_lut . 
upload ( gaussian_lut ) ; <nl> } <nl> <nl> void cv : : ocl : : HOGDescriptor : : computeGradient ( const oclMat & img , oclMat & grad , oclMat & qangle ) <nl> void cv : : ocl : : HOGDescriptor : : computeBlockHistograms ( const oclMat & img ) <nl> computeGradient ( img , this - > grad , this - > qangle ) ; <nl> <nl> hog : : compute_hists ( nbins , block_stride . width , block_stride . height , effect_size . height , <nl> - effect_size . width , ( float ) getWinSigma ( ) , grad , qangle , gauss_w_lut , block_hists ) ; <nl> + effect_size . width , grad , qangle , gauss_w_lut , block_hists ) ; <nl> <nl> hog : : normalize_hists ( nbins , block_stride . width , block_stride . height , effect_size . height , <nl> effect_size . width , block_hists , ( float ) threshold_L2hys ) ; <nl> void cv : : ocl : : device : : hog : : set_up_constants ( int nbins , <nl> <nl> void cv : : ocl : : device : : hog : : compute_hists ( int nbins , <nl> int block_stride_x , int block_stride_y , <nl> - int height , int width , float sigma , <nl> + int height , int width , <nl> const cv : : ocl : : oclMat & grad , <nl> const cv : : ocl : : oclMat & qangle , <nl> const cv : : ocl : : oclMat & gauss_w_lut , <nl> void cv : : ocl : : device : : hog : : compute_hists ( int nbins , <nl> { <nl> Context * clCxt = Context : : getContext ( ) ; <nl> vector < pair < size_t , const void * > > args ; <nl> - string kernelName = ( sigma = = 4 . 0f ) ? " compute_hists_lut_kernel " : <nl> - " compute_hists_kernel " ; <nl> + string kernelName = " compute_hists_lut_kernel " ; <nl> <nl> int img_block_width = ( width - CELLS_PER_BLOCK_X * CELL_WIDTH + block_stride_x ) <nl> / block_stride_x ; <nl> void cv : : ocl : : device : : hog : : compute_hists ( int nbins , <nl> int grad_quadstep = grad . step > > 2 ; <nl> int qangle_step = qangle . step ; <nl> <nl> - / / Precompute gaussian spatial window parameter <nl> - float scale = 1 . f / ( 2 . 
f * sigma * sigma ) ; <nl> - <nl> int blocks_in_group = 4 ; <nl> size_t localThreads [ 3 ] = { blocks_in_group * 24 , 2 , 1 } ; <nl> size_t globalThreads [ 3 ] = { <nl> void cv : : ocl : : device : : hog : : compute_hists ( int nbins , <nl> args . push_back ( make_pair ( sizeof ( cl_int ) , ( void * ) & qangle_step ) ) ; <nl> args . push_back ( make_pair ( sizeof ( cl_mem ) , ( void * ) & grad . data ) ) ; <nl> args . push_back ( make_pair ( sizeof ( cl_mem ) , ( void * ) & qangle . data ) ) ; <nl> - if ( kernelName . compare ( " compute_hists_lut_kernel " ) = = 0 ) <nl> - args . push_back ( make_pair ( sizeof ( cl_mem ) , ( void * ) & gauss_w_lut . data ) ) ; <nl> - else <nl> - args . push_back ( make_pair ( sizeof ( cl_float ) , ( void * ) & scale ) ) ; <nl> + args . push_back ( make_pair ( sizeof ( cl_mem ) , ( void * ) & gauss_w_lut . data ) ) ; <nl> args . push_back ( make_pair ( sizeof ( cl_mem ) , ( void * ) & block_hists . data ) ) ; <nl> args . push_back ( make_pair ( smem , ( void * ) NULL ) ) ; <nl> <nl> - openCLExecuteKernel ( clCxt , & objdetect_hog , kernelName , globalThreads , <nl> - localThreads , args , - 1 , - 1 ) ; <nl> + if ( hog_device_cpu ) <nl> + { <nl> + openCLExecuteKernel ( clCxt , & objdetect_hog , kernelName , globalThreads , <nl> + localThreads , args , - 1 , - 1 , " - D CPU " ) ; <nl> + } else <nl> + { <nl> + cl_kernel kernel = openCLGetKernelFromSource ( clCxt , & objdetect_hog , kernelName ) ; <nl> + int wave_size = queryDeviceInfo < WAVEFRONT_SIZE , int > ( kernel ) ; <nl> + char opt [ 32 ] = { 0 } ; <nl> + sprintf ( opt , " - D WAVE_SIZE = % d " , wave_size ) ; <nl> + openCLExecuteKernel ( clCxt , & objdetect_hog , kernelName , globalThreads , <nl> + localThreads , args , - 1 , - 1 , opt ) ; <nl> + } <nl> } <nl> <nl> void cv : : ocl : : device : : hog : : normalize_hists ( int nbins , <nl> mmm a / modules / ocl / src / opencl / objdetect_hog . cl <nl> ppp b / modules / ocl / src / opencl / objdetect_hog . 
cl <nl> <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - <nl> / / Histogram computation <nl> / / 12 threads for a cell , 12x4 threads per block <nl> - / / Use pre - computed gaussian and interp_weight lookup tables if sigma is 4 . 0f <nl> + / / Use pre - computed gaussian and interp_weight lookup tables <nl> __kernel void compute_hists_lut_kernel ( <nl> const int cblock_stride_x , const int cblock_stride_y , <nl> const int cnbins , const int cblock_hist_size , const int img_block_width , <nl> __kernel void compute_hists_lut_kernel ( <nl> } <nl> } <nl> <nl> - / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - <nl> - / / Histogram computation <nl> - / / 12 threads for a cell , 12x4 threads per block <nl> - __kernel void compute_hists_kernel ( <nl> - const int cblock_stride_x , const int cblock_stride_y , <nl> - const int cnbins , const int cblock_hist_size , const int img_block_width , <nl> - const int blocks_in_group , const int blocks_total , <nl> - const int grad_quadstep , const int qangle_step , <nl> - __global const float * grad , __global const uchar * qangle , <nl> - const float scale , __global float * block_hists , __local float * smem ) <nl> - { <nl> - const int lx = get_local_id ( 0 ) ; <nl> - const int lp = lx / 24 ; / * local group id * / <nl> - const int gid = get_group_id ( 0 ) * blocks_in_group + lp ; / * global group id * / <nl> - const int gidY = gid / img_block_width ; <nl> - const int gidX = gid - gidY * img_block_width ; <nl> - <nl> - const int lidX = lx - lp * 24 ; <nl> - const int lidY = get_local_id ( 1 ) ; <nl> - <nl> - const int cell_x = lidX / 12 ; <nl> - const int cell_y = lidY ; <nl> - const int cell_thread_x = lidX - cell_x * 12 ; <nl> - <nl> - __local float * hists = smem + lp * cnbins * ( CELLS_PER_BLOCK_X * <nl> - CELLS_PER_BLOCK_Y * 12 + CELLS_PER_BLOCK_X * CELLS_PER_BLOCK_Y ) ; <nl> - __local float * final_hist = hists + cnbins * <nl> - ( CELLS_PER_BLOCK_X * 
CELLS_PER_BLOCK_Y * 12 ) ; <nl> - <nl> - const int offset_x = gidX * cblock_stride_x + ( cell_x < < 2 ) + cell_thread_x ; <nl> - const int offset_y = gidY * cblock_stride_y + ( cell_y < < 2 ) ; <nl> - <nl> - __global const float * grad_ptr = ( gid < blocks_total ) ? <nl> - grad + offset_y * grad_quadstep + ( offset_x < < 1 ) : grad ; <nl> - __global const uchar * qangle_ptr = ( gid < blocks_total ) ? <nl> - qangle + offset_y * qangle_step + ( offset_x < < 1 ) : qangle ; <nl> - <nl> - __local float * hist = hists + 12 * ( cell_y * CELLS_PER_BLOCK_Y + cell_x ) + <nl> - cell_thread_x ; <nl> - for ( int bin_id = 0 ; bin_id < cnbins ; + + bin_id ) <nl> - hist [ bin_id * 48 ] = 0 . f ; <nl> - <nl> - const int dist_x = - 4 + cell_thread_x - 4 * cell_x ; <nl> - const int dist_center_x = dist_x - 4 * ( 1 - 2 * cell_x ) ; <nl> - <nl> - const int dist_y_begin = - 4 - 4 * lidY ; <nl> - for ( int dist_y = dist_y_begin ; dist_y < dist_y_begin + 12 ; + + dist_y ) <nl> - { <nl> - float2 vote = ( float2 ) ( grad_ptr [ 0 ] , grad_ptr [ 1 ] ) ; <nl> - uchar2 bin = ( uchar2 ) ( qangle_ptr [ 0 ] , qangle_ptr [ 1 ] ) ; <nl> - <nl> - grad_ptr + = grad_quadstep ; <nl> - qangle_ptr + = qangle_step ; <nl> - <nl> - int dist_center_y = dist_y - 4 * ( 1 - 2 * cell_y ) ; <nl> - <nl> - float gaussian = exp ( - ( dist_center_y * dist_center_y + dist_center_x * <nl> - dist_center_x ) * scale ) ; <nl> - float interp_weight = ( 8 . f - fabs ( dist_y + 0 . 5f ) ) * <nl> - ( 8 . f - fabs ( dist_x + 0 . 5f ) ) / 64 . f ; <nl> - <nl> - hist [ bin . x * 48 ] + = gaussian * interp_weight * vote . x ; <nl> - hist [ bin . y * 48 ] + = gaussian * interp_weight * vote . 
y ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - volatile __local float * hist_ = hist ; <nl> - for ( int bin_id = 0 ; bin_id < cnbins ; + + bin_id , hist_ + = 48 ) <nl> - { <nl> - if ( cell_thread_x < 6 ) <nl> - hist_ [ 0 ] + = hist_ [ 6 ] ; <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - if ( cell_thread_x < 3 ) <nl> - hist_ [ 0 ] + = hist_ [ 3 ] ; <nl> - # ifdef CPU <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - # endif <nl> - if ( cell_thread_x = = 0 ) <nl> - final_hist [ ( cell_x * 2 + cell_y ) * cnbins + bin_id ] = <nl> - hist_ [ 0 ] + hist_ [ 1 ] + hist_ [ 2 ] ; <nl> - } <nl> - # ifdef CPU <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - # endif <nl> - int tid = ( cell_y * CELLS_PER_BLOCK_Y + cell_x ) * 12 + cell_thread_x ; <nl> - if ( ( tid < cblock_hist_size ) & & ( gid < blocks_total ) ) <nl> - { <nl> - __global float * block_hist = block_hists + <nl> - ( gidY * img_block_width + gidX ) * cblock_hist_size ; <nl> - block_hist [ tid ] = final_hist [ tid ] ; <nl> - } <nl> - } <nl> - <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - <nl> / / Normalization of histograms via L2Hys_norm <nl> / / optimized for the case of 9 bins <nl> mmm a / modules / ocl / src / opencl / pyrlk . cl <nl> ppp b / modules / ocl / src / opencl / pyrlk . cl <nl> <nl> / / @ Authors <nl> / / Dachuan Zhao , dachuan @ multicorewareinc . com <nl> / / Yao Wang , bitwangyaoyao @ gmail . com <nl> + / / Xiaopeng Fu , fuxiaopeng2222 @ 163 . 
com <nl> / / <nl> / / Redistribution and use in source and binary forms , with or without modification , <nl> / / are permitted provided that the following conditions are met : <nl> <nl> / / # pragma OPENCL EXTENSION cl_amd_printf : enable <nl> <nl> # define BUFFER 64 <nl> + # define BUFFER2 BUFFER > > 1 <nl> # ifndef WAVE_SIZE <nl> # define WAVE_SIZE 1 <nl> # endif <nl> void reduce3 ( float val1 , float val2 , float val3 , __local float * smem1 , __local <nl> smem3 [ tid ] = val3 ; <nl> barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> - if ( tid < 32 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 32 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 32 ] ; <nl> - smem3 [ tid ] + = smem3 [ tid + 32 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 16 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 16 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 16 ] ; <nl> - smem3 [ tid ] + = smem3 [ tid + 16 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 8 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 8 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 8 ] ; <nl> - smem3 [ tid ] + = smem3 [ tid + 8 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 4 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 4 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 4 ] ; <nl> - smem3 [ tid ] + = smem3 [ tid + 4 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 2 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 2 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 2 ] ; <nl> - smem3 [ tid ] + = smem3 [ tid + 2 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 1 ) <nl> + for ( int i = BUFFER2 ; i > 0 ; i > > = 1 ) <nl> { <nl> - smem1 [ BUFFER ] = smem1 [ tid ] + smem1 [ tid + 1 ] ; <nl> - smem2 [ BUFFER ] = smem2 [ tid ] + smem2 [ tid + 1 ] ; <nl> - smem3 [ BUFFER ] = smem3 [ tid ] + smem3 [ tid + 1 ] ; <nl> + if ( tid < i ) <nl> + { <nl> + smem1 [ tid ] + = smem1 [ tid + i ] ; 
<nl> + smem2 [ tid ] + = smem2 [ tid + i ] ; <nl> + smem3 [ tid ] + = smem3 [ tid + i ] ; <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> <nl> void reduce2 ( float val1 , float val2 , volatile __local float * smem1 , volatile __local float * smem2 , int tid ) <nl> void reduce2 ( float val1 , float val2 , volatile __local float * smem1 , volatile __l <nl> smem2 [ tid ] = val2 ; <nl> barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> - if ( tid < 32 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 32 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 32 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 16 ) <nl> + for ( int i = BUFFER2 ; i > 0 ; i > > = 1 ) <nl> { <nl> - smem1 [ tid ] + = smem1 [ tid + 16 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 16 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 8 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 8 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 8 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 4 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 4 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 4 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 2 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 2 ] ; <nl> - smem2 [ tid ] + = smem2 [ tid + 2 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 1 ) <nl> - { <nl> - smem1 [ BUFFER ] = smem1 [ tid ] + smem1 [ tid + 1 ] ; <nl> - smem2 [ BUFFER ] = smem2 [ tid ] + smem2 [ tid + 1 ] ; <nl> + if ( tid < i ) <nl> + { <nl> + smem1 [ tid ] + = smem1 [ tid + i ] ; <nl> + smem2 [ tid ] + = smem2 [ tid + i ] ; <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> <nl> void reduce1 ( float val1 , volatile __local float * smem1 , int tid ) <nl> void reduce1 ( float val1 , volatile __local float * smem1 , int tid ) <nl> smem1 [ tid ] = val1 ; 
<nl> barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> - if ( tid < 32 ) <nl> + for ( int i = BUFFER2 ; i > 0 ; i > > = 1 ) <nl> { <nl> - smem1 [ tid ] + = smem1 [ tid + 32 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 16 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 16 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 8 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 8 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 4 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 4 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 2 ) <nl> - { <nl> - smem1 [ tid ] + = smem1 [ tid + 2 ] ; <nl> - } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - <nl> - if ( tid < 1 ) <nl> - { <nl> - smem1 [ BUFFER ] = smem1 [ tid ] + smem1 [ tid + 1 ] ; <nl> + if ( tid < i ) <nl> + { <nl> + smem1 [ tid ] + = smem1 [ tid + i ] ; <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> # else <nl> - void reduce3 ( float val1 , float val2 , float val3 , <nl> - __local volatile float * smem1 , __local volatile float * smem2 , __local volatile float * smem3 , int tid ) <nl> + void reduce3 ( float val1 , float val2 , float val3 , <nl> + __local volatile float * smem1 , __local volatile float * smem2 , __local volatile float * smem3 , int tid ) <nl> { <nl> smem1 [ tid ] = val1 ; <nl> smem2 [ tid ] = val2 ; <nl> __local volatile float * smem1 , __local volatile float * smem2 , __local volatile f <nl> smem2 [ tid ] + = smem2 [ tid + 32 ] ; <nl> smem3 [ tid ] + = smem3 [ tid + 32 ] ; <nl> # if WAVE_SIZE < 32 <nl> - } barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - if ( tid < 16 ) { <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> + if ( tid < 16 ) <nl> + { <nl> # endif <nl> smem1 [ tid ] + = smem1 [ tid + 16 ] ; <nl> smem2 [ tid ] + = smem2 [ tid + 16 ] ; <nl> smem3 [ tid ] + = smem3 [ tid + 16 ] ; <nl> # if WAVE_SIZE < 16 <nl> - } 
barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - if ( tid < 8 ) { <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> + if ( tid < 8 ) <nl> + { <nl> # endif <nl> smem1 [ tid ] + = smem1 [ tid + 8 ] ; <nl> smem2 [ tid ] + = smem2 [ tid + 8 ] ; <nl> __local volatile float * smem1 , __local volatile float * smem2 , __local volatile f <nl> smem2 [ tid ] + = smem2 [ tid + 1 ] ; <nl> smem3 [ tid ] + = smem3 [ tid + 1 ] ; <nl> } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> <nl> void reduce2 ( float val1 , float val2 , __local volatile float * smem1 , __local volatile float * smem2 , int tid ) <nl> void reduce2 ( float val1 , float val2 , __local volatile float * smem1 , __local vola <nl> smem1 [ tid ] + = smem1 [ tid + 32 ] ; <nl> smem2 [ tid ] + = smem2 [ tid + 32 ] ; <nl> # if WAVE_SIZE < 32 <nl> - } barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - if ( tid < 16 ) { <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> + if ( tid < 16 ) <nl> + { <nl> # endif <nl> smem1 [ tid ] + = smem1 [ tid + 16 ] ; <nl> smem2 [ tid ] + = smem2 [ tid + 16 ] ; <nl> # if WAVE_SIZE < 16 <nl> - } barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - if ( tid < 8 ) { <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> + if ( tid < 8 ) <nl> + { <nl> # endif <nl> smem1 [ tid ] + = smem1 [ tid + 8 ] ; <nl> smem2 [ tid ] + = smem2 [ tid + 8 ] ; <nl> void reduce2 ( float val1 , float val2 , __local volatile float * smem1 , __local vola <nl> smem1 [ tid ] + = smem1 [ tid + 1 ] ; <nl> smem2 [ tid ] + = smem2 [ tid + 1 ] ; <nl> } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> <nl> void reduce1 ( float val1 , __local volatile float * smem1 , int tid ) <nl> void reduce1 ( float val1 , __local volatile float * smem1 , int tid ) <nl> { <nl> smem1 [ tid ] + = smem1 [ tid + 32 ] ; <nl> # if WAVE_SIZE < 32 <nl> - } barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - if ( tid < 16 ) { <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> + if ( tid < 16 ) <nl> + { <nl> # endif <nl> smem1 [ tid ] + = smem1 [ tid + 16 ] ; 
<nl> # if WAVE_SIZE < 16 <nl> - } barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> - if ( tid < 8 ) { <nl> + } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> + if ( tid < 8 ) <nl> + { <nl> # endif <nl> smem1 [ tid ] + = smem1 [ tid + 8 ] ; <nl> smem1 [ tid ] + = smem1 [ tid + 4 ] ; <nl> smem1 [ tid ] + = smem1 [ tid + 2 ] ; <nl> smem1 [ tid ] + = smem1 [ tid + 1 ] ; <nl> } <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> } <nl> # endif <nl> <nl> void reduce1 ( float val1 , __local volatile float * smem1 , int tid ) <nl> __constant sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_LINEAR ; <nl> <nl> void SetPatch ( image2d_t I , float x , float y , <nl> - float * Pch , float * Dx , float * Dy , <nl> - float * A11 , float * A12 , float * A22 ) <nl> + float * Pch , float * Dx , float * Dy , <nl> + float * A11 , float * A12 , float * A22 ) <nl> { <nl> - * Pch = read_imagef ( I , sampler , ( float2 ) ( x , y ) ) . x ; <nl> + * Pch = read_imagef ( I , sampler , ( float2 ) ( x , y ) ) . x ; <nl> <nl> - float dIdx = 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) . x - <nl> - ( 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) . x ) ; <nl> + float dIdx = 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) . x - <nl> + ( 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) . x ) ; <nl> <nl> - float dIdy = 3 . 
0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y + 1 ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) . x - <nl> - ( 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y - 1 ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) . x ) ; <nl> + float dIdy = 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y + 1 ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) . x - <nl> + ( 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y - 1 ) ) . x + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) . x ) ; <nl> <nl> <nl> - * Dx = dIdx ; <nl> - * Dy = dIdy ; <nl> + * Dx = dIdx ; <nl> + * Dy = dIdy ; <nl> <nl> - * A11 + = dIdx * dIdx ; <nl> - * A12 + = dIdx * dIdy ; <nl> - * A22 + = dIdy * dIdy ; <nl> + * A11 + = dIdx * dIdx ; <nl> + * A12 + = dIdx * dIdy ; <nl> + * A22 + = dIdy * dIdy ; <nl> } <nl> <nl> void GetPatch ( image2d_t J , float x , float y , <nl> - float * Pch , float * Dx , float * Dy , <nl> - float * b1 , float * b2 ) <nl> + float * Pch , float * Dx , float * Dy , <nl> + float * b1 , float * b2 ) <nl> { <nl> - float J_val = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) . x ; <nl> - float diff = ( J_val - * Pch ) * 32 . 0f ; <nl> - * b1 + = diff * * Dx ; <nl> - * b2 + = diff * * Dy ; <nl> + float J_val = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) . x ; <nl> + float diff = ( J_val - * Pch ) * 32 . 
0f ; <nl> + * b1 + = diff * * Dx ; <nl> + * b2 + = diff * * Dy ; <nl> } <nl> <nl> void GetError ( image2d_t J , const float x , const float y , const float * Pch , float * errval ) <nl> { <nl> - float diff = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) . x - * Pch ; <nl> - * errval + = fabs ( diff ) ; <nl> + float diff = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) . x - * Pch ; <nl> + * errval + = fabs ( diff ) ; <nl> } <nl> <nl> void SetPatch4 ( image2d_t I , const float x , const float y , <nl> - float4 * Pch , float4 * Dx , float4 * Dy , <nl> - float * A11 , float * A12 , float * A22 ) <nl> + float4 * Pch , float4 * Dx , float4 * Dy , <nl> + float * A11 , float * A12 , float * A22 ) <nl> { <nl> - * Pch = read_imagef ( I , sampler , ( float2 ) ( x , y ) ) ; <nl> + * Pch = read_imagef ( I , sampler , ( float2 ) ( x , y ) ) ; <nl> <nl> - float4 dIdx = 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) - <nl> - ( 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) ) ; <nl> + float4 dIdx = 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) - <nl> + ( 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) ) ; <nl> <nl> - float4 dIdy = 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y + 1 ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) - <nl> - ( 3 . 
0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y - 1 ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) ) ; <nl> + float4 dIdy = 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y + 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y + 1 ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y + 1 ) ) - <nl> + ( 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x - 1 , y - 1 ) ) + 10 . 0f * read_imagef ( I , sampler , ( float2 ) ( x , y - 1 ) ) + 3 . 0f * read_imagef ( I , sampler , ( float2 ) ( x + 1 , y - 1 ) ) ) ; <nl> <nl> <nl> - * Dx = dIdx ; <nl> - * Dy = dIdy ; <nl> - float4 sqIdx = dIdx * dIdx ; <nl> - * A11 + = sqIdx . x + sqIdx . y + sqIdx . z ; <nl> - sqIdx = dIdx * dIdy ; <nl> - * A12 + = sqIdx . x + sqIdx . y + sqIdx . z ; <nl> - sqIdx = dIdy * dIdy ; <nl> - * A22 + = sqIdx . x + sqIdx . y + sqIdx . z ; <nl> + * Dx = dIdx ; <nl> + * Dy = dIdy ; <nl> + float4 sqIdx = dIdx * dIdx ; <nl> + * A11 + = sqIdx . x + sqIdx . y + sqIdx . z ; <nl> + sqIdx = dIdx * dIdy ; <nl> + * A12 + = sqIdx . x + sqIdx . y + sqIdx . z ; <nl> + sqIdx = dIdy * dIdy ; <nl> + * A22 + = sqIdx . x + sqIdx . y + sqIdx . z ; <nl> } <nl> <nl> void GetPatch4 ( image2d_t J , const float x , const float y , <nl> - const float4 * Pch , const float4 * Dx , const float4 * Dy , <nl> - float * b1 , float * b2 ) <nl> + const float4 * Pch , const float4 * Dx , const float4 * Dy , <nl> + float * b1 , float * b2 ) <nl> { <nl> - float4 J_val = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) ; <nl> - float4 diff = ( J_val - * Pch ) * 32 . 0f ; <nl> - float4 xdiff = diff * * Dx ; <nl> - * b1 + = xdiff . x + xdiff . y + xdiff . z ; <nl> - xdiff = diff * * Dy ; <nl> - * b2 + = xdiff . x + xdiff . y + xdiff . z ; <nl> + float4 J_val = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) ; <nl> + float4 diff = ( J_val - * Pch ) * 32 . 
0f ; <nl> + float4 xdiff = diff * * Dx ; <nl> + * b1 + = xdiff . x + xdiff . y + xdiff . z ; <nl> + xdiff = diff * * Dy ; <nl> + * b2 + = xdiff . x + xdiff . y + xdiff . z ; <nl> } <nl> <nl> void GetError4 ( image2d_t J , const float x , const float y , const float4 * Pch , float * errval ) <nl> { <nl> - float4 diff = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) - * Pch ; <nl> - * errval + = fabs ( diff . x ) + fabs ( diff . y ) + fabs ( diff . z ) ; <nl> + float4 diff = read_imagef ( J , sampler , ( float2 ) ( x , y ) ) - * Pch ; <nl> + * errval + = fabs ( diff . x ) + fabs ( diff . y ) + fabs ( diff . z ) ; <nl> } <nl> <nl> # define GRIDSIZE 3 <nl> __kernel void lkSparse_C1_D5 ( image2d_t I , image2d_t J , <nl> - __global const float2 * prevPts , int prevPtsStep , __global float2 * nextPts , int nextPtsStep , __global uchar * status , __global float * err , <nl> - const int level , const int rows , const int cols , int PATCH_X , int PATCH_Y , int cn , int c_winSize_x , int c_winSize_y , int c_iters , char calcErr ) <nl> + __global const float2 * prevPts , int prevPtsStep , __global float2 * nextPts , int nextPtsStep , __global uchar * status , __global float * err , <nl> + const int level , const int rows , const int cols , int PATCH_X , int PATCH_Y , int cn , int c_winSize_x , int c_winSize_y , int c_iters , char calcErr ) <nl> { <nl> - # ifdef CPU <nl> - __local float smem1 [ BUFFER + 1 ] ; <nl> - __local float smem2 [ BUFFER + 1 ] ; <nl> - __local float smem3 [ BUFFER + 1 ] ; <nl> - # else <nl> __local float smem1 [ BUFFER ] ; <nl> __local float smem2 [ BUFFER ] ; <nl> __local float smem3 [ BUFFER ] ; <nl> - # endif <nl> <nl> - unsigned int xid = get_local_id ( 0 ) ; <nl> - unsigned int yid = get_local_id ( 1 ) ; <nl> - unsigned int gid = get_group_id ( 0 ) ; <nl> - unsigned int xsize = get_local_size ( 0 ) ; <nl> - unsigned int ysize = get_local_size ( 1 ) ; <nl> - int xBase , yBase , i , j , k ; <nl> + unsigned int xid = get_local_id ( 0 ) ; <nl> + 
unsigned int yid = get_local_id ( 1 ) ; <nl> + unsigned int gid = get_group_id ( 0 ) ; <nl> + unsigned int xsize = get_local_size ( 0 ) ; <nl> + unsigned int ysize = get_local_size ( 1 ) ; <nl> + int xBase , yBase , i , j , k ; <nl> <nl> - float2 c_halfWin = ( float2 ) ( ( c_winSize_x - 1 ) > > 1 , ( c_winSize_y - 1 ) > > 1 ) ; <nl> + float2 c_halfWin = ( float2 ) ( ( c_winSize_x - 1 ) > > 1 , ( c_winSize_y - 1 ) > > 1 ) ; <nl> <nl> const int tid = mad24 ( yid , xsize , xid ) ; <nl> <nl> __kernel void lkSparse_C1_D5 ( image2d_t I , image2d_t J , <nl> float dIdx_patch [ GRIDSIZE ] [ GRIDSIZE ] ; <nl> float dIdy_patch [ GRIDSIZE ] [ GRIDSIZE ] ; <nl> <nl> - yBase = yid ; <nl> - { <nl> - xBase = xid ; <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 0 ] , & dIdx_patch [ 0 ] [ 0 ] , & dIdy_patch [ 0 ] [ 0 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 1 ] , & dIdx_patch [ 0 ] [ 1 ] , & dIdy_patch [ 0 ] [ 1 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 2 ] , & dIdx_patch [ 0 ] [ 2 ] , & dIdy_patch [ 0 ] [ 2 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - { <nl> - xBase = xid ; <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 0 ] , & dIdx_patch [ 1 ] [ 0 ] , & dIdy_patch [ 1 ] [ 0 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 
5f , <nl> - & I_patch [ 1 ] [ 1 ] , & dIdx_patch [ 1 ] [ 1 ] , & dIdy_patch [ 1 ] [ 1 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 2 ] , & dIdx_patch [ 1 ] [ 2 ] , & dIdy_patch [ 1 ] [ 2 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - if ( yBase < c_winSize_y ) <nl> - { <nl> - xBase = xid ; <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 0 ] , & dIdx_patch [ 2 ] [ 0 ] , & dIdy_patch [ 2 ] [ 0 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 1 ] , & dIdx_patch [ 2 ] [ 1 ] , & dIdy_patch [ 2 ] [ 1 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 2 ] , & dIdx_patch [ 2 ] [ 2 ] , & dIdy_patch [ 2 ] [ 2 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - } <nl> + yBase = yid ; <nl> + { <nl> + xBase = xid ; <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] [ 0 ] , & dIdx_patch [ 0 ] [ 0 ] , & dIdy_patch [ 0 ] [ 0 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] [ 1 ] , & dIdx_patch [ 0 ] [ 1 ] , & dIdy_patch [ 0 ] [ 1 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 
5f , <nl> + & I_patch [ 0 ] [ 2 ] , & dIdx_patch [ 0 ] [ 2 ] , & dIdy_patch [ 0 ] [ 2 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + { <nl> + xBase = xid ; <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 0 ] , & dIdx_patch [ 1 ] [ 0 ] , & dIdy_patch [ 1 ] [ 0 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 1 ] , & dIdx_patch [ 1 ] [ 1 ] , & dIdy_patch [ 1 ] [ 1 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 2 ] , & dIdx_patch [ 1 ] [ 2 ] , & dIdy_patch [ 1 ] [ 2 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + if ( yBase < c_winSize_y ) <nl> + { <nl> + xBase = xid ; <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] [ 0 ] , & dIdx_patch [ 2 ] [ 0 ] , & dIdy_patch [ 2 ] [ 0 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] [ 1 ] , & dIdx_patch [ 2 ] [ 1 ] , & dIdy_patch [ 2 ] [ 1 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + SetPatch ( I , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 
5f , <nl> + & I_patch [ 2 ] [ 2 ] , & dIdx_patch [ 2 ] [ 2 ] , & dIdy_patch [ 2 ] [ 2 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + } <nl> <nl> reduce3 ( A11 , A12 , A22 , smem1 , smem2 , smem3 , tid ) ; <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> - # ifdef CPU <nl> - A11 = smem1 [ BUFFER ] ; <nl> - A12 = smem2 [ BUFFER ] ; <nl> - A22 = smem3 [ BUFFER ] ; <nl> - # else <nl> A11 = smem1 [ 0 ] ; <nl> A12 = smem2 [ 0 ] ; <nl> A22 = smem3 [ 0 ] ; <nl> - # endif <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> float D = A11 * A22 - A12 * A12 ; <nl> <nl> __kernel void lkSparse_C1_D5 ( image2d_t I , image2d_t J , <nl> float b1 = 0 ; <nl> float b2 = 0 ; <nl> <nl> - yBase = yid ; <nl> - { <nl> - xBase = xid ; <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 0 ] , & dIdx_patch [ 0 ] [ 0 ] , & dIdy_patch [ 0 ] [ 0 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 1 ] , & dIdx_patch [ 0 ] [ 1 ] , & dIdy_patch [ 0 ] [ 1 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 2 ] , & dIdx_patch [ 0 ] [ 2 ] , & dIdy_patch [ 0 ] [ 2 ] , <nl> - & b1 , & b2 ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - { <nl> - xBase = xid ; <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 0 ] , & dIdx_patch [ 1 ] [ 0 ] , & dIdy_patch [ 1 ] [ 0 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 1 ] , & dIdx_patch [ 1 ] [ 1 ] , & dIdy_patch [ 1 ] [ 1 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 
5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 2 ] , & dIdx_patch [ 1 ] [ 2 ] , & dIdy_patch [ 1 ] [ 2 ] , <nl> - & b1 , & b2 ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - if ( yBase < c_winSize_y ) <nl> - { <nl> - xBase = xid ; <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 0 ] , & dIdx_patch [ 2 ] [ 0 ] , & dIdy_patch [ 2 ] [ 0 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 1 ] , & dIdx_patch [ 2 ] [ 1 ] , & dIdy_patch [ 2 ] [ 1 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 2 ] , & dIdx_patch [ 2 ] [ 2 ] , & dIdy_patch [ 2 ] [ 2 ] , <nl> - & b1 , & b2 ) ; <nl> - } <nl> + yBase = yid ; <nl> + { <nl> + xBase = xid ; <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] [ 0 ] , & dIdx_patch [ 0 ] [ 0 ] , & dIdy_patch [ 0 ] [ 0 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] [ 1 ] , & dIdx_patch [ 0 ] [ 1 ] , & dIdy_patch [ 0 ] [ 1 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] [ 2 ] , & dIdx_patch [ 0 ] [ 2 ] , & dIdy_patch [ 0 ] [ 2 ] , <nl> + & b1 , & b2 ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + { <nl> + xBase = xid ; <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 0 ] , & dIdx_patch [ 1 ] [ 0 ] , & dIdy_patch [ 1 ] [ 0 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 
5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 1 ] , & dIdx_patch [ 1 ] [ 1 ] , & dIdy_patch [ 1 ] [ 1 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 2 ] , & dIdx_patch [ 1 ] [ 2 ] , & dIdy_patch [ 1 ] [ 2 ] , <nl> + & b1 , & b2 ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + if ( yBase < c_winSize_y ) <nl> + { <nl> + xBase = xid ; <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] [ 0 ] , & dIdx_patch [ 2 ] [ 0 ] , & dIdy_patch [ 2 ] [ 0 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] [ 1 ] , & dIdx_patch [ 2 ] [ 1 ] , & dIdy_patch [ 2 ] [ 1 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetPatch ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] [ 2 ] , & dIdx_patch [ 2 ] [ 2 ] , & dIdy_patch [ 2 ] [ 2 ] , <nl> + & b1 , & b2 ) ; <nl> + } <nl> <nl> reduce2 ( b1 , b2 , smem1 , smem2 , tid ) ; <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> - # ifdef CPU <nl> - b1 = smem1 [ BUFFER ] ; <nl> - b2 = smem2 [ BUFFER ] ; <nl> - # else <nl> b1 = smem1 [ 0 ] ; <nl> b2 = smem2 [ 0 ] ; <nl> - # endif <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> float2 delta ; <nl> delta . x = A12 * b2 - A22 * b1 ; <nl> delta . y = A12 * b1 - A11 * b2 ; <nl> <nl> - prevPt + = delta ; <nl> + prevPt + = delta ; <nl> <nl> if ( fabs ( delta . x ) < THRESHOLD & & fabs ( delta . y ) < THRESHOLD ) <nl> break ; <nl> __kernel void lkSparse_C1_D5 ( image2d_t I , image2d_t J , <nl> D = 0 . 0f ; <nl> if ( calcErr ) <nl> { <nl> - yBase = yid ; <nl> - { <nl> - xBase = xid ; <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 
5f , <nl> - & I_patch [ 0 ] [ 0 ] , & D ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 1 ] , & D ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] [ 2 ] , & D ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - { <nl> - xBase = xid ; <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 0 ] , & D ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 1 ] , & D ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] [ 2 ] , & D ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - if ( yBase < c_winSize_y ) <nl> - { <nl> - xBase = xid ; <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 0 ] , & D ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 1 ] , & D ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] [ 2 ] , & D ) ; <nl> - } <nl> + yBase = yid ; <nl> + { <nl> + xBase = xid ; <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] [ 0 ] , & D ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] [ 1 ] , & D ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 
5f , <nl> + & I_patch [ 0 ] [ 2 ] , & D ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + { <nl> + xBase = xid ; <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 0 ] , & D ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 1 ] , & D ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] [ 2 ] , & D ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + if ( yBase < c_winSize_y ) <nl> + { <nl> + xBase = xid ; <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] [ 0 ] , & D ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] [ 1 ] , & D ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetError ( J , prevPt . x + xBase + 0 . 5f , prevPt . y + yBase + 0 . 
5f , <nl> + & I_patch [ 2 ] [ 2 ] , & D ) ; <nl> + } <nl> <nl> reduce1 ( D , smem1 , tid ) ; <nl> } <nl> <nl> if ( tid = = 0 ) <nl> { <nl> - prevPt + = c_halfWin ; <nl> + prevPt + = c_halfWin ; <nl> <nl> nextPts [ gid ] = prevPt ; <nl> <nl> if ( calcErr ) <nl> - # ifdef CPU <nl> - err [ gid ] = smem1 [ BUFFER ] / ( float ) ( c_winSize_x * c_winSize_y ) ; <nl> - # else <nl> err [ gid ] = smem1 [ 0 ] / ( float ) ( c_winSize_x * c_winSize_y ) ; <nl> - # endif <nl> } <nl> } <nl> <nl> <nl> __kernel void lkSparse_C4_D5 ( image2d_t I , image2d_t J , <nl> - __global const float2 * prevPts , int prevPtsStep , __global float2 * nextPts , int nextPtsStep , __global uchar * status , __global float * err , <nl> - const int level , const int rows , const int cols , int PATCH_X , int PATCH_Y , int cn , int c_winSize_x , int c_winSize_y , int c_iters , char calcErr ) <nl> + __global const float2 * prevPts , int prevPtsStep , __global float2 * nextPts , int nextPtsStep , __global uchar * status , __global float * err , <nl> + const int level , const int rows , const int cols , int PATCH_X , int PATCH_Y , int cn , int c_winSize_x , int c_winSize_y , int c_iters , char calcErr ) <nl> { <nl> - # ifdef CPU <nl> - __local float smem1 [ BUFFER + 1 ] ; <nl> - __local float smem2 [ BUFFER + 1 ] ; <nl> - __local float smem3 [ BUFFER + 1 ] ; <nl> - # else <nl> - __local float smem1 [ BUFFER ] ; <nl> - __local float smem2 [ BUFFER ] ; <nl> - __local float smem3 [ BUFFER ] ; <nl> - # endif <nl> + __local float smem1 [ BUFFER ] ; <nl> + __local float smem2 [ BUFFER ] ; <nl> + __local float smem3 [ BUFFER ] ; <nl> <nl> - unsigned int xid = get_local_id ( 0 ) ; <nl> - unsigned int yid = get_local_id ( 1 ) ; <nl> - unsigned int gid = get_group_id ( 0 ) ; <nl> - unsigned int xsize = get_local_size ( 0 ) ; <nl> - unsigned int ysize = get_local_size ( 1 ) ; <nl> - int xBase , yBase , i , j , k ; <nl> + unsigned int xid = get_local_id ( 0 ) ; <nl> + unsigned int yid = get_local_id ( 1 ) ; <nl> + 
unsigned int gid = get_group_id ( 0 ) ; <nl> + unsigned int xsize = get_local_size ( 0 ) ; <nl> + unsigned int ysize = get_local_size ( 1 ) ; <nl> + int xBase , yBase , i , j , k ; <nl> <nl> - float2 c_halfWin = ( float2 ) ( ( c_winSize_x - 1 ) > > 1 , ( c_winSize_y - 1 ) > > 1 ) ; <nl> + float2 c_halfWin = ( float2 ) ( ( c_winSize_x - 1 ) > > 1 , ( c_winSize_y - 1 ) > > 1 ) ; <nl> <nl> const int tid = mad24 ( yid , xsize , xid ) ; <nl> <nl> __kernel void lkSparse_C4_D5 ( image2d_t I , image2d_t J , <nl> return ; <nl> } <nl> <nl> - nextPt - = c_halfWin ; <nl> + nextPt - = c_halfWin ; <nl> <nl> / / extract the patch from the first image , compute covariation matrix of derivatives <nl> <nl> __kernel void lkSparse_C4_D5 ( image2d_t I , image2d_t J , <nl> float4 I_patch [ 8 ] ; <nl> float4 dIdx_patch [ 8 ] ; <nl> float4 dIdy_patch [ 8 ] ; <nl> - float4 I_add , Dx_add , Dy_add ; <nl> + float4 I_add , Dx_add , Dy_add ; <nl> <nl> - yBase = yid ; <nl> - { <nl> - xBase = xid ; <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] , & dIdx_patch [ 0 ] , & dIdy_patch [ 0 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> + yBase = yid ; <nl> + { <nl> + xBase = xid ; <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] , & dIdx_patch [ 0 ] , & dIdy_patch [ 0 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> <nl> <nl> - xBase + = xsize ; <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] , & dIdx_patch [ 1 ] , & dIdy_patch [ 1 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> + xBase + = xsize ; <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] , & dIdx_patch [ 1 ] , & dIdy_patch [ 1 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 
5f , <nl> - & I_patch [ 2 ] , & dIdx_patch [ 2 ] , & dIdy_patch [ 2 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] , & dIdx_patch [ 2 ] , & dIdy_patch [ 2 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> <nl> - } <nl> - yBase + = ysize ; <nl> - { <nl> - xBase = xid ; <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 3 ] , & dIdx_patch [ 3 ] , & dIdy_patch [ 3 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 4 ] , & dIdx_patch [ 4 ] , & dIdy_patch [ 4 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 5 ] , & dIdx_patch [ 5 ] , & dIdy_patch [ 5 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - if ( yBase < c_winSize_y ) <nl> - { <nl> - xBase = xid ; <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 6 ] , & dIdx_patch [ 6 ] , & dIdy_patch [ 6 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 7 ] , & dIdx_patch [ 7 ] , & dIdy_patch [ 7 ] , <nl> - & A11 , & A12 , & A22 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_add , & Dx_add , & Dy_add , <nl> - & A11 , & A12 , & A22 ) ; <nl> - } <nl> + } <nl> + yBase + = ysize ; <nl> + { <nl> + xBase = xid ; <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 
5f , <nl> + & I_patch [ 3 ] , & dIdx_patch [ 3 ] , & dIdy_patch [ 3 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 4 ] , & dIdx_patch [ 4 ] , & dIdy_patch [ 4 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 5 ] , & dIdx_patch [ 5 ] , & dIdy_patch [ 5 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + if ( yBase < c_winSize_y ) <nl> + { <nl> + xBase = xid ; <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 6 ] , & dIdx_patch [ 6 ] , & dIdy_patch [ 6 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 7 ] , & dIdx_patch [ 7 ] , & dIdy_patch [ 7 ] , <nl> + & A11 , & A12 , & A22 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + SetPatch4 ( I , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_add , & Dx_add , & Dy_add , <nl> + & A11 , & A12 , & A22 ) ; <nl> + } <nl> <nl> reduce3 ( A11 , A12 , A22 , smem1 , smem2 , smem3 , tid ) ; <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> - # ifdef CPU <nl> - A11 = smem1 [ BUFFER ] ; <nl> - A12 = smem2 [ BUFFER ] ; <nl> - A22 = smem3 [ BUFFER ] ; <nl> - # else <nl> A11 = smem1 [ 0 ] ; <nl> A12 = smem2 [ 0 ] ; <nl> A22 = smem3 [ 0 ] ; <nl> - # endif <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> float D = A11 * A22 - A12 * A12 ; <nl> <nl> __kernel void lkSparse_C4_D5 ( image2d_t I , image2d_t J , <nl> A12 / = D ; <nl> A22 / = D ; <nl> <nl> - nextPt = nextPts [ gid ] * 2 . 0f - c_halfWin ; <nl> + nextPt = nextPts [ gid ] * 2 . 
0f - c_halfWin ; <nl> <nl> for ( k = 0 ; k < c_iters ; + + k ) <nl> { <nl> __kernel void lkSparse_C4_D5 ( image2d_t I , image2d_t J , <nl> float b1 = 0 ; <nl> float b2 = 0 ; <nl> <nl> - yBase = yid ; <nl> - { <nl> - xBase = xid ; <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] , & dIdx_patch [ 0 ] , & dIdy_patch [ 0 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] , & dIdx_patch [ 1 ] , & dIdy_patch [ 1 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] , & dIdx_patch [ 2 ] , & dIdy_patch [ 2 ] , <nl> - & b1 , & b2 ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - { <nl> - xBase = xid ; <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 3 ] , & dIdx_patch [ 3 ] , & dIdy_patch [ 3 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 4 ] , & dIdx_patch [ 4 ] , & dIdy_patch [ 4 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 5 ] , & dIdx_patch [ 5 ] , & dIdy_patch [ 5 ] , <nl> - & b1 , & b2 ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - if ( yBase < c_winSize_y ) <nl> - { <nl> - xBase = xid ; <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 6 ] , & dIdx_patch [ 6 ] , & dIdy_patch [ 6 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 
5f , <nl> - & I_patch [ 7 ] , & dIdx_patch [ 7 ] , & dIdy_patch [ 7 ] , <nl> - & b1 , & b2 ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_add , & Dx_add , & Dy_add , <nl> - & b1 , & b2 ) ; <nl> - } <nl> + yBase = yid ; <nl> + { <nl> + xBase = xid ; <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] , & dIdx_patch [ 0 ] , & dIdy_patch [ 0 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] , & dIdx_patch [ 1 ] , & dIdy_patch [ 1 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] , & dIdx_patch [ 2 ] , & dIdy_patch [ 2 ] , <nl> + & b1 , & b2 ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + { <nl> + xBase = xid ; <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 3 ] , & dIdx_patch [ 3 ] , & dIdy_patch [ 3 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 4 ] , & dIdx_patch [ 4 ] , & dIdy_patch [ 4 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 5 ] , & dIdx_patch [ 5 ] , & dIdy_patch [ 5 ] , <nl> + & b1 , & b2 ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + if ( yBase < c_winSize_y ) <nl> + { <nl> + xBase = xid ; <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 
5f , <nl> + & I_patch [ 6 ] , & dIdx_patch [ 6 ] , & dIdy_patch [ 6 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 7 ] , & dIdx_patch [ 7 ] , & dIdy_patch [ 7 ] , <nl> + & b1 , & b2 ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetPatch4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_add , & Dx_add , & Dy_add , <nl> + & b1 , & b2 ) ; <nl> + } <nl> <nl> reduce2 ( b1 , b2 , smem1 , smem2 , tid ) ; <nl> - barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> - # ifdef CPU <nl> - b1 = smem1 [ BUFFER ] ; <nl> - b2 = smem2 [ BUFFER ] ; <nl> - # else <nl> b1 = smem1 [ 0 ] ; <nl> b2 = smem2 [ 0 ] ; <nl> - # endif <nl> + barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl> <nl> float2 delta ; <nl> delta . x = A12 * b2 - A22 * b1 ; <nl> delta . y = A12 * b1 - A11 * b2 ; <nl> <nl> - nextPt + = delta ; <nl> + nextPt + = delta ; <nl> <nl> if ( fabs ( delta . x ) < THRESHOLD & & fabs ( delta . y ) < THRESHOLD ) <nl> break ; <nl> __kernel void lkSparse_C4_D5 ( image2d_t I , image2d_t J , <nl> D = 0 . 0f ; <nl> if ( calcErr ) <nl> { <nl> - yBase = yid ; <nl> - { <nl> - xBase = xid ; <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 0 ] , & D ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 1 ] , & D ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 2 ] , & D ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - { <nl> - xBase = xid ; <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 3 ] , & D ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 
5f , <nl> - & I_patch [ 4 ] , & D ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 5 ] , & D ) ; <nl> - } <nl> - yBase + = ysize ; <nl> - if ( yBase < c_winSize_y ) <nl> - { <nl> - xBase = xid ; <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 6 ] , & D ) ; <nl> - <nl> - <nl> - xBase + = xsize ; <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_patch [ 7 ] , & D ) ; <nl> - <nl> - xBase + = xsize ; <nl> - if ( xBase < c_winSize_x ) <nl> - GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> - & I_add , & D ) ; <nl> - } <nl> + yBase = yid ; <nl> + { <nl> + xBase = xid ; <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 0 ] , & D ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 1 ] , & D ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 2 ] , & D ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + { <nl> + xBase = xid ; <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 3 ] , & D ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 4 ] , & D ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 5 ] , & D ) ; <nl> + } <nl> + yBase + = ysize ; <nl> + if ( yBase < c_winSize_y ) <nl> + { <nl> + xBase = xid ; <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 
5f , <nl> + & I_patch [ 6 ] , & D ) ; <nl> + <nl> + <nl> + xBase + = xsize ; <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_patch [ 7 ] , & D ) ; <nl> + <nl> + xBase + = xsize ; <nl> + if ( xBase < c_winSize_x ) <nl> + GetError4 ( J , nextPt . x + xBase + 0 . 5f , nextPt . y + yBase + 0 . 5f , <nl> + & I_add , & D ) ; <nl> + } <nl> <nl> reduce1 ( D , smem1 , tid ) ; <nl> } <nl> <nl> if ( tid = = 0 ) <nl> { <nl> - nextPt + = c_halfWin ; <nl> + nextPt + = c_halfWin ; <nl> nextPts [ gid ] = nextPt ; <nl> <nl> if ( calcErr ) <nl> - # ifdef CPU <nl> - err [ gid ] = smem1 [ BUFFER ] / ( float ) ( 3 * c_winSize_x * c_winSize_y ) ; <nl> - # else <nl> err [ gid ] = smem1 [ 0 ] / ( float ) ( 3 * c_winSize_x * c_winSize_y ) ; <nl> - # endif <nl> } <nl> } <nl> <nl> __kernel void lkDense_C1_D0 ( image2d_t I , image2d_t J , __global float * u , int uStep , __global float * v , int vStep , __global const float * prevU , int prevUStep , __global const float * prevV , int prevVStep , <nl> - const int rows , const int cols , / * __global float * err , int errStep , int cn , * / int c_winSize_x , int c_winSize_y , int c_iters , char calcErr ) <nl> + const int rows , const int cols , / * __global float * err , int errStep , int cn , * / int c_winSize_x , int c_winSize_y , int c_iters , char calcErr ) <nl> { <nl> - int c_halfWin_x = ( c_winSize_x - 1 ) / 2 ; <nl> - int c_halfWin_y = ( c_winSize_y - 1 ) / 2 ; <nl> + int c_halfWin_x = ( c_winSize_x - 1 ) / 2 ; <nl> + int c_halfWin_y = ( c_winSize_y - 1 ) / 2 ; <nl> <nl> const int patchWidth = get_local_size ( 0 ) + 2 * c_halfWin_x ; <nl> const int patchHeight = get_local_size ( 1 ) + 2 * c_halfWin_y ; <nl> __kernel void lkDense_C1_D0 ( image2d_t I , image2d_t J , __global float * u , int uSt <nl> const int xBase = get_group_id ( 0 ) * get_local_size ( 0 ) ; <nl> const int yBase = get_group_id ( 1 ) * get_local_size ( 1 ) ; <nl> <nl> - sampler_t sampleri = CLK_NORMALIZED_COORDS_FALSE | 
CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST ; <nl> + sampler_t sampleri = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST ; <nl> <nl> for ( int i = get_local_id ( 1 ) ; i < patchHeight ; i + = get_local_size ( 1 ) ) <nl> { <nl> __kernel void lkDense_C1_D0 ( image2d_t I , image2d_t J , __global float * u , int uSt <nl> / / Sharr Deriv <nl> <nl> dIdx_patch [ i * patchWidth + j ] = 3 * read_imagei ( I , sampleri , ( float2 ) ( x + 1 , y - 1 ) ) . x + 10 * read_imagei ( I , sampleri , ( float2 ) ( x + 1 , y ) ) . x + 3 * read_imagei ( I , sampleri , ( float2 ) ( x + 1 , y + 1 ) ) . x - <nl> - ( 3 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y ) ) . x + 3 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y + 1 ) ) . x ) ; <nl> + ( 3 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y ) ) . x + 3 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y + 1 ) ) . x ) ; <nl> <nl> dIdy_patch [ i * patchWidth + j ] = 3 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y + 1 ) ) . x + 10 * read_imagei ( I , sampleri , ( float2 ) ( x , y + 1 ) ) . x + 3 * read_imagei ( I , sampleri , ( float2 ) ( x + 1 , y + 1 ) ) . x - <nl> - ( 3 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 * read_imagei ( I , sampleri , ( float2 ) ( x , y - 1 ) ) . x + 3 * read_imagei ( I , sampleri , ( float2 ) ( x + 1 , y - 1 ) ) . x ) ; <nl> + ( 3 * read_imagei ( I , sampleri , ( float2 ) ( x - 1 , y - 1 ) ) . x + 10 * read_imagei ( I , sampleri , ( float2 ) ( x , y - 1 ) ) . x + 3 * read_imagei ( I , sampleri , ( float2 ) ( x + 1 , y - 1 ) ) . x ) ; <nl> } <nl> } <nl> barrier ( CLK_LOCAL_MEM_FENCE ) ; <nl>
|
Merge pull request from bitwangyaoyao : 2 . 4_fix
|
opencv/opencv
|
bc78e87a61333250874ea6a3fd8546cb80e8e0b5
|
2013-07-30T13:13:27Z
|
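The pyrLK kernel in the record above accumulates a structure tensor (A11, A12, A22) and a mismatch vector (b1, b2) over the patch, then applies `delta.x = A12 * b2 - A22 * b1` and `delta.y = A12 * b1 - A11 * b2` after dividing the tensor entries by the determinant D. A minimal Python sketch of that per-iteration 2x2 solve — scalar math only, with a hypothetical `eps` singularity guard that is not part of the kernel:

```python
def lk_step(A11, A12, A22, b1, b2, eps=1e-12):
    # One Lucas-Kanade update: solve the 2x2 normal equations built from
    # the structure tensor [[A11, A12], [A12, A22]] and mismatch (b1, b2),
    # mirroring the kernel's delta.x = A12*b2 - A22*b1 and
    # delta.y = A12*b1 - A11*b2 after normalizing the tensor by D.
    D = A11 * A22 - A12 * A12
    if abs(D) < eps:
        return None  # singular tensor: the point cannot be tracked
    A11, A12, A22 = A11 / D, A12 / D, A22 / D
    dx = A12 * b2 - A22 * b1
    dy = A12 * b1 - A11 * b2
    return dx, dy
```

For a diagonal tensor G = diag(2, 2) and mismatch (2, 4) this reduces to delta = -G⁻¹·b = (-1, -2), which is the loop's `nextPt += delta` step.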
mmm a / src / compiler / ia32 / code - generator - ia32 . cc <nl> ppp b / src / compiler / ia32 / code - generator - ia32 . cc <nl> CodeGenerator : : CodeGenResult CodeGenerator : : AssembleArchInstruction ( <nl> case kSSEInt32ToFloat32 : <nl> __ cvtsi2ss ( i . OutputDoubleRegister ( ) , i . InputOperand ( 0 ) ) ; <nl> break ; <nl> - case kSSEUint32ToFloat32 : { <nl> - Register scratch0 = i . TempRegister ( 0 ) ; <nl> - Register scratch1 = i . TempRegister ( 1 ) ; <nl> - __ mov ( scratch0 , i . InputOperand ( 0 ) ) ; <nl> - __ Cvtui2ss ( i . OutputDoubleRegister ( ) , scratch0 , scratch1 ) ; <nl> + case kSSEUint32ToFloat32 : <nl> + __ Cvtui2ss ( i . OutputDoubleRegister ( ) , i . InputOperand ( 0 ) , <nl> + i . TempRegister ( 0 ) ) ; <nl> break ; <nl> - } <nl> case kSSEInt32ToFloat64 : <nl> __ cvtsi2sd ( i . OutputDoubleRegister ( ) , i . InputOperand ( 0 ) ) ; <nl> break ; <nl> mmm a / src / compiler / ia32 / instruction - selector - ia32 . cc <nl> ppp b / src / compiler / ia32 / instruction - selector - ia32 . cc <nl> void InstructionSelector : : VisitUint32Mod ( Node * node ) { <nl> <nl> void InstructionSelector : : VisitRoundUint32ToFloat32 ( Node * node ) { <nl> IA32OperandGenerator g ( this ) ; <nl> - InstructionOperand temps [ ] = { g . TempRegister ( ) , g . TempRegister ( ) } ; <nl> + InstructionOperand temps [ ] = { g . TempRegister ( ) } ; <nl> Emit ( kSSEUint32ToFloat32 , g . DefineAsRegister ( node ) , g . Use ( node - > InputAt ( 0 ) ) , <nl> arraysize ( temps ) , temps ) ; <nl> } <nl> mmm a / src / ia32 / macro - assembler - ia32 . cc <nl> ppp b / src / ia32 / macro - assembler - ia32 . 
cc <nl> void TurboAssembler : : Cvtsi2sd ( XMMRegister dst , Operand src ) { <nl> cvtsi2sd ( dst , src ) ; <nl> } <nl> <nl> - void TurboAssembler : : Cvtui2ss ( XMMRegister dst , Register src , Register tmp ) { <nl> - Label msb_set_src ; <nl> + void TurboAssembler : : Cvtui2ss ( XMMRegister dst , Operand src , Register tmp ) { <nl> Label done ; <nl> - test ( src , src ) ; <nl> - j ( sign , & msb_set_src , Label : : kNear ) ; <nl> - cvtsi2ss ( dst , src ) ; <nl> - jmp ( & done , Label : : kNear ) ; <nl> - bind ( & msb_set_src ) ; <nl> - mov ( tmp , src ) ; <nl> - shr ( src , 1 ) ; <nl> - / / Recover the least significant bit to avoid rounding errors . <nl> - and_ ( tmp , Immediate ( 1 ) ) ; <nl> - or_ ( src , tmp ) ; <nl> - cvtsi2ss ( dst , src ) ; <nl> + Register src_reg = src . is_reg_only ( ) ? src . reg ( ) : tmp ; <nl> + if ( src_reg = = tmp ) mov ( tmp , src ) ; <nl> + cvtsi2ss ( dst , src_reg ) ; <nl> + test ( src_reg , src_reg ) ; <nl> + j ( positive , & done , Label : : kNear ) ; <nl> + <nl> + / / Compute { src / 2 | ( src & 1 ) } ( retain the LSB to avoid rounding errors ) . <nl> + if ( src_reg ! = tmp ) mov ( tmp , src_reg ) ; <nl> + shr ( tmp , 1 ) ; <nl> + / / The LSB is shifted into CF . If it is set , set the LSB in { tmp } . <nl> + Label msb_not_set ; <nl> + j ( not_carry , & msb_not_set , Label : : kNear ) ; <nl> + or_ ( tmp , Immediate ( 1 ) ) ; <nl> + bind ( & msb_not_set ) ; <nl> + cvtsi2ss ( dst , tmp ) ; <nl> addss ( dst , dst ) ; <nl> bind ( & done ) ; <nl> } <nl> mmm a / src / ia32 / macro - assembler - ia32 . h <nl> ppp b / src / ia32 / macro - assembler - ia32 . 
h <nl> class TurboAssembler : public Assembler { <nl> void Cvtsi2sd ( XMMRegister dst , Register src ) { Cvtsi2sd ( dst , Operand ( src ) ) ; } <nl> void Cvtsi2sd ( XMMRegister dst , Operand src ) ; <nl> <nl> - void Cvtui2ss ( XMMRegister dst , Register src , Register tmp ) ; <nl> + void Cvtui2ss ( XMMRegister dst , Register src , Register tmp ) { <nl> + Cvtui2ss ( dst , Operand ( src ) , tmp ) ; <nl> + } <nl> + void Cvtui2ss ( XMMRegister dst , Operand src , Register tmp ) ; <nl> void Cvttss2ui ( Register dst , XMMRegister src , XMMRegister tmp ) { <nl> Cvttss2ui ( dst , Operand ( src ) , tmp ) ; <nl> } <nl>
|
[ ia32 ] Avoid overwrite of src register
|
v8/v8
|
3baf75f734d47246c19b80cebeb9cc5791cf96c5
|
2018-04-16T16:31:38Z
|
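The rewritten `Cvtui2ss` in the v8 record relies on a standard trick: SSE's `cvtsi2ss` converts only signed 32-bit values, so when the top bit of the unsigned source is set, the value is halved with its least-significant bit folded back in (a sticky bit that keeps round-to-nearest-even correct), converted, and then doubled with `addss`. A Python sketch of just the arithmetic, with float32 rounding emulated via `struct` (illustrative, not v8 code):

```python
import struct

def to_f32(x):
    # Round a value to IEEE-754 single precision and back.
    return struct.unpack('<f', struct.pack('<f', x))[0]

def cvtui2ss(src):
    # Emulate unsigned 32-bit -> float32 conversion using only a
    # signed conversion, as in the patched TurboAssembler::Cvtui2ss.
    assert 0 <= src < 2**32
    if src < 2**31:
        # MSB clear: the signed conversion already handles it.
        return to_f32(src)
    # MSB set: compute src/2 | (src & 1) -- the retained LSB acts as a
    # sticky bit so that doubling the converted half rounds correctly.
    tmp = (src >> 1) | (src & 1)
    return to_f32(to_f32(tmp) + to_f32(tmp))
```

Without the sticky bit, halving can land exactly on a rounding tie and the doubled result would differ from the correctly rounded conversion by one ulp.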
new file mode 100644 <nl> index 00000000000 . . bb8044685d3 <nl> mmm / dev / null <nl> ppp b / dbms / tests / performance / scalar . xml <nl> <nl> + < test > <nl> + < type > loop < / type > <nl> + <nl> + < stop_conditions > <nl> + < all_of > <nl> + < total_time_ms > 30000 < / total_time_ms > <nl> + < / all_of > <nl> + < any_of > <nl> + < min_time_not_changing_for_ms > 5000 < / min_time_not_changing_for_ms > <nl> + < total_time_ms > 60000 < / total_time_ms > <nl> + < / any_of > <nl> + < / stop_conditions > <nl> + <nl> + < main_metric > <nl> + < min_time / > <nl> + < / main_metric > <nl> + <nl> + < create_query > CREATE TABLE cdp_tags ( tag_id String , mid_seqs AggregateFunction ( groupBitmap , UInt32 ) ) engine = MergeTree ( ) ORDER BY ( tag_id ) SETTINGS index_granularity = 1 < / create_query > <nl> + < create_query > CREATE TABLE cdp_orders ( order_id UInt64 , order_complete_time DateTime , order_total_sales Float32 , mid_seq UInt32 ) engine = MergeTree ( ) PARTITION BY toYYYYMMDD ( order_complete_time ) ORDER BY ( order_complete_time , order_id ) < / create_query > <nl> + <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag1 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE number % 9 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag2 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE number % 8 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag3 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE number % 7 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag4 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE number % 6 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag5 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE 
number % 5 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag6 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE number % 4 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag7 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE number % 3 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_tags ( tag_id , mid_seqs ) SELECT ' tag8 ' , groupBitmapState ( toUInt32 ( number ) ) FROM numbers ( 10000000 ) WHERE number % 2 = 0 < / fill_query > <nl> + < fill_query > INSERT INTO cdp_orders ( order_id , order_complete_time , order_total_sales , mid_seq ) SELECT number , addSeconds ( toDateTime ( ' 2000 - 01 - 01 00 : 00 : 00 ' ) , number ) , number % 1024 , toUInt32 ( number ) FROM numbers ( 10000000 ) < / fill_query > <nl> + <nl> + < query > WITH ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag1 ' ) AS bm1 , ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag2 ' ) AS bm2 , ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag3 ' ) AS bm3 , ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag4 ' ) AS bm4 , ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag5 ' ) AS bm5 , ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag6 ' ) AS bm6 , ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag7 ' ) AS bm7 , ( SELECT mid_seqs FROM cdp_tags WHERE tag_id = ' tag8 ' ) AS bm8 , toDateTime ( ' 2000 - 01 - 01 00 : 00 : 00 ' ) AS ts_begin , addSeconds ( toDateTime ( ' 2000 - 01 - 01 00 : 00 : 00 ' ) , 1e8 ) AS ts_end SELECT multiIf ( bitmapContains ( bm1 , mid_seq ) , 1 , bitmapContains ( bm2 , mid_seq ) , 2 , bitmapContains ( bm3 , mid_seq ) , 3 , bitmapContains ( bm4 , mid_seq ) , 4 , bitmapContains ( bm5 , mid_seq ) , 5 , bitmapContains ( bm6 , mid_seq ) , 6 , bitmapContains ( bm7 , mid_seq ) , 7 , bitmapContains ( bm8 , mid_seq ) , 8 , 0 ) AS tag , count ( ) AS gc , sum ( order_total_sales ) AS total FROM cdp_orders PREWHERE 
order_complete_time > = ts_begin AND order_complete_time & lt ; ts_end GROUP BY tag ORDER BY tag < / query > <nl> + <nl> + < drop_query > DROP TABLE IF EXISTS cdp_tags < / drop_query > <nl> + < drop_query > DROP TABLE IF EXISTS cdp_orders < / drop_query > <nl> + < / test > <nl>
|
add perf test for subqueries with large scalars
|
ClickHouse/ClickHouse
|
dd72f62f179916adf1cecdbcb9a183d0ee92a0f2
|
2019-10-22T15:55:11Z
|
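The benchmark's main query in the ClickHouse record buckets each order into the first tag whose bitmap contains its `mid_seq`, via a chained `multiIf(bitmapContains(...))` that falls through to 0. The control flow can be sketched in Python with plain sets standing in for the roaring bitmaps (names and the small 0–99 domain are illustrative only):

```python
def first_tag(mid, tag_sets):
    # Return the 1-based index of the first tag set containing mid,
    # or 0 when none matches -- mirroring the multiIf(...) fallthrough.
    for i, members in enumerate(tag_sets, start=1):
        if mid in members:
            return i
    return 0

# Build tag sets the way the fill queries do: tag1 holds multiples of 9,
# tag2 multiples of 8, ... down to tag8 holding multiples of 2.
tag_sets = [set(range(0, 100, m)) for m in (9, 8, 7, 6, 5, 4, 3, 2)]
```

The ordering matters: a mid divisible by several moduli is claimed by the earliest tag in the chain, which is why the query groups by this computed `tag` rather than joining the bitmap table.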
mmm a / project / VS2010Express / XBMC . vcxproj <nl> ppp b / project / VS2010Express / XBMC . vcxproj <nl> <nl> < ItemDefinitionGroup Condition = " ' $ ( Configuration ) | $ ( Platform ) ' = = ' Release ( OpenGL ) | Win32 ' " > <nl> < ClCompile > <nl> < AdditionalOptions > / MP % ( AdditionalOptions ) < / AdditionalOptions > <nl> - < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . \ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ ffmpeg ; . . \ . . \ lib \ ffmpeg \ include - xbmc - win32 ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . codec ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> + < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . \ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ win32 \ ffmpeg ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Extras ; . . \ . . 
\ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . codec ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> < PreprocessorDefinitions > NOMINMAX ; _USE_32BIT_TIME_T ; HAS_GL ; __STDC_CONSTANT_MACROS ; XMD_H ; TAGLIB_STATIC ; % ( PreprocessorDefinitions ) < / PreprocessorDefinitions > <nl> < ExceptionHandling > Async < / ExceptionHandling > <nl> < PrecompiledHeader > Use < / PrecompiledHeader > <nl> <nl> < AdditionalDependencies > DInput8 . lib ; DSound . lib ; winmm . lib ; Mpr . lib ; Iphlpapi . lib ; PowrProf . lib ; setupapi . lib ; dwmapi . lib ; yajl . lib ; dxguid . lib ; % ( AdditionalDependencies ) < / AdditionalDependencies > <nl> < OutputFile > $ ( OutDir ) XBMC . exe < / OutputFile > <nl> < IgnoreSpecificDefaultLibraries > libc ; msvcrt ; libci ; msvcprt ; % ( IgnoreSpecificDefaultLibraries ) < / IgnoreSpecificDefaultLibraries > <nl> - < DelayLoadDLLs > dnssd . dll ; dwmapi . dll ; libmicrohttpd - 5 . dll ; ssh . dll ; sqlite3 . dll ; libsamplerate - 0 . dll ; % ( DelayLoadDLLs ) < / DelayLoadDLLs > <nl> + < DelayLoadDLLs > libxslt . dll ; dnssd . dll ; dwmapi . dll ; libmicrohttpd - 5 . dll ; ssh . dll ; sqlite3 . dll ; libsamplerate - 0 . dll ; avcodec - 55 . dll ; avfilter - 4 . dll ; avformat - 55 . dll ; avutil - 52 . dll ; postproc - 52 . dll ; swresample - 0 . dll ; swscale - 2 . dll ; % ( DelayLoadDLLs ) < / DelayLoadDLLs > <nl> < GenerateDebugInformation > true < / GenerateDebugInformation > <nl> < ProgramDatabaseFile > $ ( OutDir ) XBMC . 
pdb < / ProgramDatabaseFile > <nl> < RandomizedBaseAddress > true < / RandomizedBaseAddress > <nl> < DataExecutionPrevention > true < / DataExecutionPrevention > <nl> + < AdditionalLibraryDirectories > . . \ . . \ lib \ win32 \ ffmpeg \ . libs ; % ( AdditionalLibraryDirectories ) < / AdditionalLibraryDirectories > <nl> < / Link > <nl> < Manifest > <nl> < AdditionalManifestFiles > VC90 . CRT . x86 . manifest ; win81 . manifest ; % ( AdditionalManifestFiles ) < / AdditionalManifestFiles > <nl> <nl> < / ItemDefinitionGroup > <nl> < ItemDefinitionGroup Condition = " ' $ ( Configuration ) | $ ( Platform ) ' = = ' Debug Testsuite | Win32 ' " > <nl> < ClCompile > <nl> - < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . \ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ ffmpeg ; . . \ . . \ lib \ ffmpeg \ include - xbmc - win32 ; . . \ . . \ lib \ liblame \ include ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Extras ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ lib \ gtest \ include ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ xbmc \ cores \ AudioEngine \ Utils \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . codec ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> + < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . 
\ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ win32 \ ffmpeg ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Extras ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . codec ; . . \ . . \ lib \ gtest \ include ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> < PreprocessorDefinitions > _CONSOLE ; NOMINMAX ; _USE_32BIT_TIME_T ; HAS_DX ; D3D_DEBUG_INFO ; __STDC_CONSTANT_MACROS ; _SECURE_SCL = 0 ; TAGLIB_STATIC ; % ( PreprocessorDefinitions ) < / PreprocessorDefinitions > <nl> < ExceptionHandling > Async < / ExceptionHandling > <nl> < PrecompiledHeader > Use < / PrecompiledHeader > <nl> <nl> < IgnoreSpecificDefaultLibraries > libc ; msvcrt ; libcmt ; msvcrtd ; msvcprtd ; % ( IgnoreSpecificDefaultLibraries ) < / IgnoreSpecificDefaultLibraries > <nl> < ModuleDefinitionFile > <nl> < / ModuleDefinitionFile > <nl> - < DelayLoadDLLs > libxslt . dll ; dnssd . dll ; dwmapi . dll ; libmicrohttpd - 5 . dll ; ssh . dll ; sqlite3 . dll ; libsamplerate - 0 . dll ; % ( DelayLoadDLLs ) < / DelayLoadDLLs > <nl> + < DelayLoadDLLs > libxslt . dll ; dnssd . dll ; dwmapi . dll ; libmicrohttpd - 5 . dll ; ssh . dll ; sqlite3 . dll ; libsamplerate - 0 . dll ; avcodec - 55 . dll ; avfilter - 4 . dll ; avformat - 55 . dll ; avutil - 52 . 
dll ; postproc - 52 . dll ; swresample - 0 . dll ; swscale - 2 . dll ; % ( DelayLoadDLLs ) < / DelayLoadDLLs > <nl> < ProgramDatabaseFile > $ ( OutDir ) XBMC . pdb < / ProgramDatabaseFile > <nl> < SubSystem > Console < / SubSystem > <nl> < EntryPointSymbol > <nl> < / EntryPointSymbol > <nl> < RandomizedBaseAddress > true < / RandomizedBaseAddress > <nl> < DataExecutionPrevention > true < / DataExecutionPrevention > <nl> + < AdditionalLibraryDirectories > . . \ . . \ lib \ win32 \ ffmpeg \ . libs ; % ( AdditionalLibraryDirectories ) < / AdditionalLibraryDirectories > <nl> < / Link > <nl> < Manifest > <nl> < AdditionalManifestFiles > VC90 . CRT . x86 . manifest ; win81 . manifest ; % ( AdditionalManifestFiles ) < / AdditionalManifestFiles > <nl> <nl> < / ItemDefinitionGroup > <nl> < ItemDefinitionGroup Condition = " ' $ ( Configuration ) | $ ( Platform ) ' = = ' Debug ( OpenGL ) | Win32 ' " > <nl> < ClCompile > <nl> - < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . \ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ ffmpeg ; . . \ . . \ lib \ ffmpeg \ include - xbmc - win32 ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . 
codec ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> + < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . \ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ win32 \ ffmpeg ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Extras ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . codec ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> < PreprocessorDefinitions > NOMINMAX ; _USE_32BIT_TIME_T ; HAS_GL ; __STDC_CONSTANT_MACROS ; _SECURE_SCL = 0 ; TAGLIB_STATIC ; % ( PreprocessorDefinitions ) < / PreprocessorDefinitions > <nl> < ExceptionHandling > Async < / ExceptionHandling > <nl> < PrecompiledHeader > Use < / PrecompiledHeader > <nl> <nl> < IgnoreSpecificDefaultLibraries > libc ; msvcrt ; libcmt ; msvcrtd ; msvcprtd ; % ( IgnoreSpecificDefaultLibraries ) < / IgnoreSpecificDefaultLibraries > <nl> < ModuleDefinitionFile > <nl> < / ModuleDefinitionFile > <nl> - < DelayLoadDLLs > dnssd . dll ; dwmapi . dll ; libmicrohttpd - 5 . dll ; ssh . dll ; sqlite3 . dll ; libsamplerate - 0 . dll ; % ( DelayLoadDLLs ) < / DelayLoadDLLs > <nl> + < DelayLoadDLLs > libxslt . dll ; dnssd . dll ; dwmapi . dll ; libmicrohttpd - 5 . dll ; ssh . dll ; sqlite3 . dll ; libsamplerate - 0 . 
dll ; avcodec - 55 . dll ; avfilter - 4 . dll ; avformat - 55 . dll ; avutil - 52 . dll ; postproc - 52 . dll ; swresample - 0 . dll ; swscale - 2 . dll ; % ( DelayLoadDLLs ) < / DelayLoadDLLs > <nl> < ProgramDatabaseFile > $ ( OutDir ) XBMC . pdb < / ProgramDatabaseFile > <nl> < EntryPointSymbol > <nl> < / EntryPointSymbol > <nl> < RandomizedBaseAddress > true < / RandomizedBaseAddress > <nl> < DataExecutionPrevention > true < / DataExecutionPrevention > <nl> + < AdditionalLibraryDirectories > . . \ . . \ lib \ win32 \ ffmpeg \ . libs ; % ( AdditionalLibraryDirectories ) < / AdditionalLibraryDirectories > <nl> < / Link > <nl> < Manifest > <nl> < AdditionalManifestFiles > VC90 . CRT . x86 . manifest ; win81 . manifest ; % ( AdditionalManifestFiles ) < / AdditionalManifestFiles > <nl> <nl> < / ItemDefinitionGroup > <nl> < ItemDefinitionGroup Condition = " ' $ ( Configuration ) | $ ( Platform ) ' = = ' Template | Win32 ' " > <nl> < ClCompile > <nl> - < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . \ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ ffmpeg ; . . \ . . \ lib \ ffmpeg \ include - xbmc - win32 ; . . \ . . \ lib \ liblame \ include ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . 
codec ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> + < AdditionalIncludeDirectories > . . \ . . \ ; . . \ . . \ xbmc \ ; . . \ . . \ xbmc \ cores \ dvdplayer ; . . \ . . \ xbmc \ win32 ; . . \ . . \ lib ; . . \ . . \ lib \ win32 \ ffmpeg ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaRenderer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaConnect ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Devices \ MediaServer ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Platinum ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Platinum \ Source \ Extras ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ Core ; . . \ . . \ lib \ libUPnP \ Neptune \ Source \ System \ Win32 ; . . \ . . \ lib \ win32 \ pcre ; . . \ . . \ lib \ win32 ; . . \ . . \ xbmc \ cores \ AudioEngine \ ; . . \ . . \ addons \ library . xbmc . gui ; . . \ . . \ addons \ library . xbmc . addon ; . . \ . . \ addons \ library . xbmc . pvr ; . . \ . . \ addons \ library . xbmc . codec ; % ( AdditionalIncludeDirectories ) < / AdditionalIncludeDirectories > <nl> < / ClCompile > <nl> < / ItemDefinitionGroup > <nl> < ItemGroup > <nl>
|
Merge pull request from Montellese / win32_vsproj
|
xbmc/xbmc
|
2bebfb7a18846068d38f0503061ed4ab2a75a92c
|
2014-05-06T22:18:12Z
|
mmm a / tensorflow / tools / pip_package / setup . py <nl> ppp b / tensorflow / tools / pip_package / setup . py <nl> <nl> ' mock > = 2 . 0 . 0 ; python_version < " 3 " ' , <nl> # functools comes with python3 , need to install the backport for python2 <nl> ' functools32 > = 3 . 2 . 3 ; python_version < " 3 " ' , <nl> + ' six > = 1 . 12 . 0 ' , <nl> ] <nl> <nl> if sys . byteorder = = ' little ' : <nl>
|
Adding ` six ` version requirement to setup . py . oss .
|
tensorflow/tensorflow
|
7f125b0092f9c23bbe11f409791d79d6e83d15d1
|
2019-10-16T20:42:23Z
|
mmm a / benchmark / single - source / Codable . swift <nl> ppp b / benchmark / single - source / Codable . swift <nl> public func setup_json ( ) { <nl> JSONTester = CodablePerfTester ( encoder : JSONEncoder ( ) , decoder : JSONDecoder ( ) ) <nl> } <nl> <nl> - # if ! _runtime ( _ObjC ) <nl> - / / If we do not have an objc - runtime , then we do not have a definition for <nl> - / / autoreleasepool . Add in our own fake autoclosure for it that is inline <nl> - / / always . That should be able to be eaten through by the optimizer no problem . <nl> - @ inline ( __always ) <nl> - public func autoreleasepool < Result > ( <nl> - invoking body : ( ) throws - > Result <nl> - ) rethrows - > Result { <nl> - return try body ( ) <nl> - } <nl> - # endif <nl> - <nl> @ inline ( never ) <nl> public func run_JSONPerfEncode ( _ N : Int ) { <nl> autoreleasepool { <nl> mmm a / benchmark / utils / TestsUtils . swift <nl> ppp b / benchmark / utils / TestsUtils . swift <nl> public func CheckResults ( <nl> } <nl> } <nl> <nl> + # if ! _runtime ( _ObjC ) <nl> + / / If we do not have an objc - runtime , then we do not have a definition for <nl> + / / autoreleasepool . Add in our own fake autoclosure for it that is inline <nl> + / / always . That should be able to be eaten through by the optimizer no problem . <nl> + @ inlinable / / FIXME ( inline - always ) <nl> + @ inline ( __always ) <nl> + public func autoreleasepool < Result > ( <nl> + invoking body : ( ) throws - > Result <nl> + ) rethrows - > Result { <nl> + return try body ( ) <nl> + } <nl> + # endif <nl> + <nl> public func False ( ) - > Bool { return false } <nl> <nl> / / / This is a dummy protocol to test the speed of our protocol dispatch . <nl>
|
Merge pull request from palimondo / within - one - stem
|
apple/swift
|
40cce2da3dd06aed18803266e375bc4c8c1a6420
|
2019-02-20T11:35:26Z
|
mmm a / src / misc . h <nl> ppp b / src / misc . h <nl> <nl> # include < QThread > <nl> <nl> # ifndef Q_WS_WIN <nl> - # include < sys / vfs . h > <nl> + # ifdef Q_WS_MAC <nl> + # include < sys / param . h > <nl> + # include < sys / mount . h > <nl> # else <nl> - # include < winbase . h > <nl> + # include < sys / vfs . h > <nl> + # endif <nl> + # else <nl> + # include < winbase . h > <nl> # endif <nl> <nl> # include < libtorrent / torrent_info . hpp > <nl>
|
- Fix compilation on Mac OS
|
qbittorrent/qBittorrent
|
3dd9ebc61d488f945c0a7a4b708546697db51120
|
2009-09-30T18:43:31Z
|
mmm a / src / compiler / mips64 / code - generator - mips64 . cc <nl> ppp b / src / compiler / mips64 / code - generator - mips64 . cc <nl> void CodeGenerator : : AssembleArchBranch ( Instruction * instr , BranchInfo * branch ) { <nl> __ Branch ( tlabel , cc , at , Operand ( zero_reg ) ) ; <nl> } else if ( instr - > arch_opcode ( ) = = kMips64Dadd | | <nl> instr - > arch_opcode ( ) = = kMips64Dsub ) { <nl> - Label done ; <nl> cc = FlagsConditionToConditionOvf ( branch - > condition ) ; <nl> __ dsra32 ( kScratchReg , i . OutputRegister ( ) , 0 ) ; <nl> __ sra ( at , i . OutputRegister ( ) , 31 ) ; <nl> - __ Branch ( & done , NegateCondition ( cc ) , at , Operand ( kScratchReg ) ) ; <nl> - / / If we deoptimize , check if output register is the same as input <nl> - / / registers , if yes input values are overwritten so fix them first . <nl> - if ( instr - > InputAt ( 1 ) - > IsRegister ( ) ) { <nl> - if ( i . InputRegister ( 0 ) . is ( i . OutputRegister ( ) ) & & <nl> - i . InputRegister ( 1 ) . is ( i . OutputRegister ( ) ) ) { <nl> - __ dsra ( i . OutputRegister ( ) , i . OutputRegister ( ) , 1 ) ; <nl> - } <nl> - } <nl> - __ Branch ( tlabel ) ; <nl> - __ bind ( & done ) ; <nl> + __ Branch ( tlabel , cc , at , Operand ( kScratchReg ) ) ; <nl> } else if ( instr - > arch_opcode ( ) = = kMips64DaddOvf ) { <nl> switch ( branch - > condition ) { <nl> case kOverflow : <nl> mmm a / src / mips / macro - assembler - mips . cc <nl> ppp b / src / mips / macro - assembler - mips . cc <nl> void MacroAssembler : : AddBranchOvf ( Register dst , Register left , Register right , <nl> DCHECK ( ! right . is ( scratch ) ) ; <nl> <nl> if ( left . is ( right ) & & dst . is ( left ) ) { <nl> - mov ( scratch , left ) ; / / Preserve left and right . <nl> - addu ( dst , left , right ) ; / / Both are overwritten . <nl> - xor_ ( overflow_dst , dst , scratch ) ; / / Left and right are equal . <nl> - Label done ; / / Restore inputs if overflow . 
<nl> - Branch ( & done , ge , overflow_dst , Operand ( zero_reg ) ) ; <nl> - mov ( left , scratch ) ; / / Original left and right . <nl> - bind ( & done ) ; <nl> - } else if ( dst . is ( left ) ) { <nl> + mov ( overflow_dst , right ) ; <nl> + right = overflow_dst ; <nl> + } <nl> + <nl> + if ( dst . is ( left ) ) { <nl> mov ( scratch , left ) ; / / Preserve left . <nl> addu ( dst , left , right ) ; / / Left is overwritten . <nl> xor_ ( scratch , dst , scratch ) ; / / Original left . <nl>
|
Revert of MIPS : [ turbofan ] Fix addition for deoptimization . ( patchset id : 40001 of https : / / codereview . chromium . org / 2102063002 / )
|
v8/v8
|
29b89b489a20e1402eb17f9806bca16b7bc44c2b
|
2016-07-08T14:57:50Z
|
mmm a / code / search / exponential_search / README . md <nl> ppp b / code / search / exponential_search / README . md <nl> Once we find an index i ( after repeated doubling of i ) , we know that the element <nl> Exponential Binary Search is particularly useful for unbounded searches , where size of array is infinite . Please refer Unbounded Binary Search for an example . <nl> It works better than Binary Search for bounded arrays also when the element to be searched is closer to the first element . <nl> <nl> - # Collaborative effort by [ OpenGenus ] ( https : / / github . com / opengenus ) <nl> + mmm <nl> + <nl> + < p align = " center " > <nl> + A massive collaborative effort by < a href = " https : / / github . com / OpenGenus / cosmos " > OpenGenus Foundation < / a > <nl> + < / p > <nl> + <nl> + mmm <nl>
|
fixed footer
|
OpenGenus/cosmos
|
e41a7364e3334910bf022a534728a8f56ad5b5fe
|
2017-10-15T03:40:54Z
|
mmm a / src / json . hpp <nl> ppp b / src / json . hpp <nl> class basic_json <nl> * / <nl> class serializer <nl> { <nl> + private : <nl> + serializer ( const serializer & ) = delete ; <nl> + serializer & operator = ( const serializer & ) = delete ; <nl> + <nl> public : <nl> / * ! <nl> @ param [ in ] s output stream to serialize to <nl> mmm a / src / json . hpp . re2c <nl> ppp b / src / json . hpp . re2c <nl> class basic_json <nl> * / <nl> class serializer <nl> { <nl> + private : <nl> + serializer ( const serializer & ) = delete ; <nl> + serializer & operator = ( const serializer & ) = delete ; <nl> + <nl> public : <nl> / * ! <nl> @ param [ in ] s output stream to serialize to <nl>
|
Fix - Weffc + + warnings ( GNU 6 . 3 . 1 )
|
nlohmann/json
|
5b809b97374488c628c8cb7e27d614cb19cfb24a
|
2017-03-11T14:05:21Z
|
mmm a / source / common / http / async_client_impl . h <nl> ppp b / source / common / http / async_client_impl . h <nl> class AsyncStreamImpl : public AsyncClient : : Stream , <nl> Utility : : sendLocalReply ( <nl> remote_closed_ , <nl> Utility : : EncodeFunctions { <nl> - nullptr , <nl> + nullptr , nullptr , <nl> [ this , modify_headers ] ( ResponseHeaderMapPtr & & headers , bool end_stream ) - > void { <nl> if ( modify_headers ! = nullptr ) { <nl> modify_headers ( * headers ) ; <nl> mmm a / source / common / http / filter_manager . cc <nl> ppp b / source / common / http / filter_manager . cc <nl> void FilterManager : : sendLocalReplyViaFilterChain ( <nl> Utility : : sendLocalReply ( <nl> state_ . destroyed_ , <nl> Utility : : EncodeFunctions { <nl> + modify_headers , <nl> [ this ] ( ResponseHeaderMap & response_headers , Code & code , std : : string & body , <nl> absl : : string_view & content_type ) - > void { <nl> local_reply_ . rewrite ( request_headers_ . get ( ) , response_headers , stream_info_ , code , body , <nl> content_type ) ; <nl> } , <nl> [ this , modify_headers ] ( ResponseHeaderMapPtr & & headers , bool end_stream ) - > void { <nl> - if ( modify_headers ! = nullptr ) { <nl> - modify_headers ( * headers ) ; <nl> - } <nl> response_headers_ = std : : move ( headers ) ; <nl> / / TODO : Start encoding from the last decoder filter that saw the <nl> / / request instead . <nl> void FilterManager : : sendDirectLocalReply ( <nl> Http : : Utility : : sendLocalReply ( <nl> state_ . destroyed_ , <nl> Utility : : EncodeFunctions { <nl> + modify_headers , <nl> [ & ] ( ResponseHeaderMap & response_headers , Code & code , std : : string & body , <nl> absl : : string_view & content_type ) - > void { <nl> local_reply_ . rewrite ( request_headers_ . get ( ) , response_headers , stream_info_ , code , body , <nl> content_type ) ; <nl> } , <nl> [ & ] ( ResponseHeaderMapPtr & & response_headers , bool end_stream ) - > void { <nl> - if ( modify_headers ! 
= nullptr ) { <nl> - modify_headers ( * response_headers ) ; <nl> - } <nl> - <nl> / / Move the response headers into the FilterManager to make sure they ' re visible to <nl> / / access logs . <nl> response_headers_ = std : : move ( response_headers ) ; <nl> void ActiveStreamFilterBase : : resetStream ( ) { parent_ . filter_manager_callbacks_ . r <nl> uint64_t ActiveStreamFilterBase : : streamId ( ) const { return parent_ . streamId ( ) ; } <nl> <nl> } / / namespace Http <nl> - } / / namespace Envoy <nl> \ No newline at end of file <nl> + } / / namespace Envoy <nl> mmm a / source / common / http / utility . cc <nl> ppp b / source / common / http / utility . cc <nl> void Utility : : sendLocalReply ( const bool & is_reset , StreamDecoderFilterCallbacks & <nl> const LocalReplyData & local_reply_data ) { <nl> sendLocalReply ( <nl> is_reset , <nl> - Utility : : EncodeFunctions { nullptr , <nl> + Utility : : EncodeFunctions { nullptr , nullptr , <nl> [ & ] ( ResponseHeaderMapPtr & & headers , bool end_stream ) - > void { <nl> callbacks . encodeHeaders ( std : : move ( headers ) , end_stream ) ; <nl> } , <nl> void Utility : : sendLocalReply ( const bool & is_reset , const EncodeFunctions & encode <nl> ResponseHeaderMapPtr response_headers { createHeaderMap < ResponseHeaderMapImpl > ( <nl> { { Headers : : get ( ) . Status , std : : to_string ( enumToInt ( response_code ) ) } } ) } ; <nl> <nl> + if ( encode_functions . modify_headers_ ) { <nl> + encode_functions . modify_headers_ ( * response_headers ) ; <nl> + } <nl> if ( encode_functions . rewrite_ ) { <nl> encode_functions . rewrite_ ( * response_headers , response_code , body_text , content_type ) ; <nl> } <nl> void Utility : : sendLocalReply ( const bool & is_reset , const EncodeFunctions & encode <nl> } <nl> response_headers - > setGrpcMessage ( PercentEncoding : : encode ( body_text ) ) ; <nl> } <nl> + / / The ` modify_headers ` function may have added content - length , remove it . 
<nl> + response_headers - > removeContentLength ( ) ; <nl> encode_functions . encode_headers_ ( std : : move ( response_headers ) , true ) ; / / Trailers only response <nl> return ; <nl> } <nl> <nl> if ( ! body_text . empty ( ) ) { <nl> response_headers - > setContentLength ( body_text . size ( ) ) ; <nl> - response_headers - > setReferenceContentType ( content_type ) ; <nl> + / / If the ` rewrite ` function has changed body_text or content - type is not set , set it . <nl> + / / This allows ` modify_headers ` function to set content - type for the body . For example , <nl> + / / router . direct_response is calling sendLocalReply and may need to set content - type for <nl> + / / the body . <nl> + if ( body_text ! = local_reply_data . body_text_ | | response_headers - > ContentType ( ) = = nullptr ) { <nl> + response_headers - > setReferenceContentType ( content_type ) ; <nl> + } <nl> + } else { <nl> + response_headers - > removeContentLength ( ) ; <nl> + response_headers - > removeContentType ( ) ; <nl> } <nl> <nl> if ( local_reply_data . is_head_request_ ) { <nl> void Utility : : sendLocalReply ( const bool & is_reset , const EncodeFunctions & encode <nl> } <nl> <nl> encode_functions . encode_headers_ ( std : : move ( response_headers ) , body_text . empty ( ) ) ; <nl> - / / encode_headers ( ) ) may have changed the referenced is_reset so we need to test it <nl> + / / encode_headers ( ) may have changed the referenced is_reset so we need to test it <nl> if ( ! body_text . empty ( ) & & ! is_reset ) { <nl> Buffer : : OwnedImpl buffer ( body_text ) ; <nl> encode_functions . encode_data_ ( buffer , true ) ; <nl> mmm a / source / common / http / utility . h <nl> ppp b / source / common / http / utility . 
h <nl> bool isWebSocketUpgradeRequest ( const RequestHeaderMap & headers ) ; <nl> Http1Settings parseHttp1Settings ( const envoy : : config : : core : : v3 : : Http1ProtocolOptions & config ) ; <nl> <nl> struct EncodeFunctions { <nl> + / / Function to modify locally generated response headers . <nl> + std : : function < void ( ResponseHeaderMap & headers ) > modify_headers_ ; <nl> / / Function to rewrite locally generated response . <nl> std : : function < void ( ResponseHeaderMap & response_headers , Code & code , std : : string & body , <nl> absl : : string_view & content_type ) > <nl> mmm a / test / extensions / filters / http / ext_authz / ext_authz_integration_test . cc <nl> ppp b / test / extensions / filters / http / ext_authz / ext_authz_integration_test . cc <nl> TEST_P ( ExtAuthzHttpIntegrationTest , DisableCaseSensitiveStringMatcher ) { <nl> EXPECT_EQ ( case_sensitive_header_value_ , header_entry - > value ( ) . getStringView ( ) ) ; <nl> } <nl> <nl> + class ExtAuthzLocalReplyIntegrationTest : public HttpIntegrationTest , <nl> + public TestWithParam < Network : : Address : : IpVersion > { <nl> + public : <nl> + ExtAuthzLocalReplyIntegrationTest ( ) <nl> + : HttpIntegrationTest ( Http : : CodecClient : : Type : : HTTP1 , GetParam ( ) ) { } <nl> + <nl> + void createUpstreams ( ) override { <nl> + HttpIntegrationTest : : createUpstreams ( ) ; <nl> + fake_upstreams_ . emplace_back ( <nl> + new FakeUpstream ( 0 , FakeHttpConnection : : Type : : HTTP1 , version_ , timeSystem ( ) ) ) ; <nl> + fake_upstreams_ [ 0 ] - > set_allow_unexpected_disconnects ( true ) ; <nl> + } <nl> + <nl> + void cleanup ( ) { <nl> + if ( fake_ext_authz_connection_ ! = nullptr ) { <nl> + AssertionResult result = fake_ext_authz_connection_ - > close ( ) ; <nl> + RELEASE_ASSERT ( result , result . message ( ) ) ; <nl> + result = fake_ext_authz_connection_ - > waitForDisconnect ( ) ; <nl> + RELEASE_ASSERT ( result , result . 
message ( ) ) ; <nl> + } <nl> + cleanupUpstreamAndDownstream ( ) ; <nl> + } <nl> + <nl> + FakeHttpConnectionPtr fake_ext_authz_connection_ ; <nl> + } ; <nl> + <nl> + INSTANTIATE_TEST_SUITE_P ( IpVersions , ExtAuthzLocalReplyIntegrationTest , <nl> + ValuesIn ( TestEnvironment : : getIpVersionsForTest ( ) ) , <nl> + TestUtility : : ipTestParamsToString ) ; <nl> + <nl> + / / This integration test uses ext_authz combined with ` local_reply_config ` . <nl> + / / * If ext_authz response status is 401 ; its response headers and body are sent to the client . <nl> + / / * But if ` local_reply_config ` is specified , the response body and its content - length and type <nl> + / / are controlled by the ` local_reply_config ` . <nl> + / / This integration test verifies that content - type and content - length generated <nl> + / / from ` local_reply_config ` are not overridden by ext_authz response . <nl> + TEST_P ( ExtAuthzLocalReplyIntegrationTest , DeniedHeaderTest ) { <nl> + config_helper_ . addConfigModifier ( [ this ] ( envoy : : config : : bootstrap : : v3 : : Bootstrap & bootstrap ) { <nl> + auto * ext_authz_cluster = bootstrap . mutable_static_resources ( ) - > add_clusters ( ) ; <nl> + ext_authz_cluster - > MergeFrom ( bootstrap . static_resources ( ) . clusters ( ) [ 0 ] ) ; <nl> + ext_authz_cluster - > set_name ( " ext_authz " ) ; <nl> + <nl> + envoy : : extensions : : filters : : http : : ext_authz : : v3 : : ExtAuthz proto_config ; <nl> + const std : : string ext_authz_config = R " EOF ( <nl> + http_service : <nl> + server_uri : <nl> + uri : " ext_authz : 9000 " <nl> + cluster : " ext_authz " <nl> + timeout : 0 . 25s <nl> + ) EOF " ; <nl> + TestUtility : : loadFromYaml ( ext_authz_config , proto_config ) ; <nl> + <nl> + envoy : : config : : listener : : v3 : : Filter ext_authz_filter ; <nl> + ext_authz_filter . set_name ( Extensions : : HttpFilters : : HttpFilterNames : : get ( ) . ExtAuthorization ) ; <nl> + ext_authz_filter . 
mutable_typed_config ( ) - > PackFrom ( proto_config ) ; <nl> + config_helper_ . addFilter ( MessageUtil : : getJsonStringFromMessage ( ext_authz_filter ) ) ; <nl> + } ) ; <nl> + <nl> + const std : : string local_reply_yaml = R " EOF ( <nl> + body_format : <nl> + json_format : <nl> + code : " % RESPONSE_CODE % " <nl> + message : " % LOCAL_REPLY_BODY % " <nl> + ) EOF " ; <nl> + envoy : : extensions : : filters : : network : : http_connection_manager : : v3 : : LocalReplyConfig <nl> + local_reply_config ; <nl> + TestUtility : : loadFromYaml ( local_reply_yaml , local_reply_config ) ; <nl> + config_helper_ . setLocalReply ( local_reply_config ) ; <nl> + <nl> + HttpIntegrationTest : : initialize ( ) ; <nl> + <nl> + auto conn = makeClientConnection ( lookupPort ( " http " ) ) ; <nl> + codec_client_ = makeHttpConnection ( std : : move ( conn ) ) ; <nl> + auto response = codec_client_ - > makeHeaderOnlyRequest ( Http : : TestRequestHeaderMapImpl { <nl> + { " : method " , " GET " } , <nl> + { " : path " , " / " } , <nl> + { " : scheme " , " http " } , <nl> + { " : authority " , " host " } , <nl> + } ) ; <nl> + <nl> + AssertionResult result = <nl> + fake_upstreams_ . back ( ) - > waitForHttpConnection ( * dispatcher_ , fake_ext_authz_connection_ ) ; <nl> + RELEASE_ASSERT ( result , result . message ( ) ) ; <nl> + FakeStreamPtr ext_authz_request ; <nl> + result = fake_ext_authz_connection_ - > waitForNewStream ( * dispatcher_ , ext_authz_request ) ; <nl> + RELEASE_ASSERT ( result , result . message ( ) ) ; <nl> + result = ext_authz_request - > waitForEndStream ( * dispatcher_ ) ; <nl> + RELEASE_ASSERT ( result , result . 
message ( ) ) ; <nl> + <nl> + Http : : TestResponseHeaderMapImpl ext_authz_response_headers { <nl> + { " : status " , " 401 " } , <nl> + { " content - type " , " fake - type " } , <nl> + } ; <nl> + ext_authz_request - > encodeHeaders ( ext_authz_response_headers , true ) ; <nl> + <nl> + response - > waitForEndStream ( ) ; <nl> + EXPECT_TRUE ( response - > complete ( ) ) ; <nl> + <nl> + EXPECT_EQ ( " 401 " , response - > headers ( ) . Status ( ) - > value ( ) . getStringView ( ) ) ; <nl> + / / Without fixing the bug , " content - type " and " content - length " are overridden by the ext_authz <nl> + / / responses as its " content - type : fake - type " and " content - length : 0 " . <nl> + EXPECT_EQ ( " application / json " , response - > headers ( ) . ContentType ( ) - > value ( ) . getStringView ( ) ) ; <nl> + EXPECT_EQ ( " 26 " , response - > headers ( ) . ContentLength ( ) - > value ( ) . getStringView ( ) ) ; <nl> + <nl> + const std : : string expected_body = R " ( { <nl> + " code " : 401 , <nl> + " message " : " " <nl> + } ) " ; <nl> + EXPECT_TRUE ( TestUtility : : jsonStringEqual ( response - > body ( ) , expected_body ) ) ; <nl> + <nl> + cleanup ( ) ; <nl> + } <nl> + <nl> } / / namespace Envoy <nl> mmm a / test / integration / fake_upstream . h <nl> ppp b / test / integration / fake_upstream . h <nl> class FakeStream : public Http : : RequestDecoder , <nl> Http : : Utility : : sendLocalReply ( <nl> false , <nl> Http : : Utility : : EncodeFunctions ( <nl> - { nullptr , <nl> + { nullptr , nullptr , <nl> [ & ] ( Http : : ResponseHeaderMapPtr & & headers , bool end_stream ) - > void { <nl> encoder_ . encodeHeaders ( * headers , end_stream ) ; <nl> } , <nl> mmm a / test / integration / integration_test . cc <nl> ppp b / test / integration / integration_test . 
cc <nl> TEST_P ( IntegrationTest , PerWorkerStatsAndBalancing ) { <nl> check_listener_stats ( 0 , 1 ) ; <nl> } <nl> <nl> - TEST_P ( IntegrationTest , RouterDirectResponse ) { <nl> + TEST_P ( IntegrationTest , RouterDirectResponseWithBody ) { <nl> const std : : string body = " Response body " ; <nl> const std : : string file_path = TestEnvironment : : writeStringToFileForTest ( " test_envoy " , body ) ; <nl> static const std : : string domain ( " direct . example . com " ) ; <nl> TEST_P ( IntegrationTest , RouterDirectResponse ) { <nl> header_value_option - > mutable_header ( ) - > set_key ( " content - type " ) ; <nl> header_value_option - > mutable_header ( ) - > set_value ( " text / html " ) ; <nl> header_value_option - > mutable_append ( ) - > set_value ( false ) ; <nl> + / / Add a wrong content - length . <nl> + header_value_option = route_config - > mutable_response_headers_to_add ( ) - > Add ( ) ; <nl> + header_value_option - > mutable_header ( ) - > set_key ( " content - length " ) ; <nl> + header_value_option - > mutable_header ( ) - > set_value ( " 2000 " ) ; <nl> + header_value_option - > mutable_append ( ) - > set_value ( false ) ; <nl> auto * virtual_host = route_config - > add_virtual_hosts ( ) ; <nl> virtual_host - > set_name ( domain ) ; <nl> virtual_host - > add_domains ( domain ) ; <nl> TEST_P ( IntegrationTest , RouterDirectResponse ) { <nl> - > value ( ) <nl> . getStringView ( ) ) ; <nl> EXPECT_EQ ( " text / html " , response - > headers ( ) . getContentTypeValue ( ) ) ; <nl> + / / Verify content - length is correct . <nl> + EXPECT_EQ ( fmt : : format ( " { } " , body . size ( ) ) , response - > headers ( ) . getContentLengthValue ( ) ) ; <nl> EXPECT_EQ ( body , response - > body ( ) ) ; <nl> } <nl> <nl> + TEST_P ( IntegrationTest , RouterDirectResponseEmptyBody ) { <nl> + static const std : : string domain ( " direct . example . 
com " ) ; <nl> + static const std : : string prefix ( " / " ) ; <nl> + static const Http : : Code status ( Http : : Code : : OK ) ; <nl> + config_helper_ . addConfigModifier ( <nl> + [ & ] ( envoy : : extensions : : filters : : network : : http_connection_manager : : v3 : : HttpConnectionManager & <nl> + hcm ) - > void { <nl> + auto * route_config = hcm . mutable_route_config ( ) ; <nl> + auto * header_value_option = route_config - > mutable_response_headers_to_add ( ) - > Add ( ) ; <nl> + header_value_option - > mutable_header ( ) - > set_key ( " x - additional - header " ) ; <nl> + header_value_option - > mutable_header ( ) - > set_value ( " example - value " ) ; <nl> + header_value_option - > mutable_append ( ) - > set_value ( false ) ; <nl> + header_value_option = route_config - > mutable_response_headers_to_add ( ) - > Add ( ) ; <nl> + header_value_option - > mutable_header ( ) - > set_key ( " content - type " ) ; <nl> + header_value_option - > mutable_header ( ) - > set_value ( " text / html " ) ; <nl> + header_value_option - > mutable_append ( ) - > set_value ( false ) ; <nl> + / / Add a wrong content - length . 
<nl> + header_value_option = route_config - > mutable_response_headers_to_add ( ) - > Add ( ) ; <nl> + header_value_option - > mutable_header ( ) - > set_key ( " content - length " ) ; <nl> + header_value_option - > mutable_header ( ) - > set_value ( " 2000 " ) ; <nl> + header_value_option - > mutable_append ( ) - > set_value ( false ) ; <nl> + auto * virtual_host = route_config - > add_virtual_hosts ( ) ; <nl> + virtual_host - > set_name ( domain ) ; <nl> + virtual_host - > add_domains ( domain ) ; <nl> + virtual_host - > add_routes ( ) - > mutable_match ( ) - > set_prefix ( prefix ) ; <nl> + virtual_host - > mutable_routes ( 0 ) - > mutable_direct_response ( ) - > set_status ( <nl> + static_cast < uint32_t > ( status ) ) ; <nl> + } ) ; <nl> + initialize ( ) ; <nl> + <nl> + BufferingStreamDecoderPtr response = IntegrationUtil : : makeSingleRequest ( <nl> + lookupPort ( " http " ) , " GET " , " / " , " " , downstream_protocol_ , version_ , " direct . example . com " ) ; <nl> + ASSERT_TRUE ( response - > complete ( ) ) ; <nl> + EXPECT_EQ ( " 200 " , response - > headers ( ) . getStatusValue ( ) ) ; <nl> + EXPECT_EQ ( " example - value " , response - > headers ( ) <nl> + . get ( Envoy : : Http : : LowerCaseString ( " x - additional - header " ) ) <nl> + - > value ( ) <nl> + . getStringView ( ) ) ; <nl> + / / Content - type header is removed . <nl> + EXPECT_EQ ( nullptr , response - > headers ( ) . ContentType ( ) ) ; <nl> + / / Content - length header is correct . <nl> + EXPECT_EQ ( " 0 " , response - > headers ( ) . getContentLengthValue ( ) ) ; <nl> + } <nl> + <nl> TEST_P ( IntegrationTest , ConnectionClose ) { <nl> config_helper_ . addFilter ( ConfigHelper : : defaultHealthCheckFilter ( ) ) ; <nl> initialize ( ) ; <nl> mmm a / test / mocks / http / mocks . cc <nl> ppp b / test / mocks / http / mocks . 
cc <nl> void MockStreamDecoderFilterCallbacks : : sendLocalReply_ ( <nl> Utility : : sendLocalReply ( <nl> stream_destroyed_ , <nl> Utility : : EncodeFunctions { <nl> - nullptr , <nl> + nullptr , nullptr , <nl> [ this , modify_headers ] ( ResponseHeaderMapPtr & & headers , bool end_stream ) - > void { <nl> if ( modify_headers ! = nullptr ) { <nl> modify_headers ( * headers ) ; <nl>
|
sendLocalReply : call modify_headers before call encode_headers ( )
|
envoyproxy/envoy
|
619f15d7f2ce7bf11cd8f7ce9cb2ac50c1670080
|
2020-08-27T20:57:19Z
|
mmm a / include / swift / SIL / TypeLowering . h <nl> ppp b / include / swift / SIL / TypeLowering . h <nl> class TypeLowering { <nl> constexpr RecursiveProperties ( IsTrivial_t isTrivial , <nl> IsFixedABI_t isFixedABI , <nl> IsAddressOnly_t isAddressOnly , <nl> - IsResilient_t isResilient = IsNotResilient ) <nl> + IsResilient_t isResilient ) <nl> : Flags ( ( isTrivial ? 0U : NonTrivialFlag ) | <nl> - ( isAddressOnly ? AddressOnlyFlag : 0U ) | <nl> ( isFixedABI ? 0U : NonFixedABIFlag ) | <nl> + ( isAddressOnly ? AddressOnlyFlag : 0U ) | <nl> ( isResilient ? ResilientFlag : 0U ) ) { } <nl> <nl> + static constexpr RecursiveProperties forTrivial ( ) { <nl> + return { IsTrivial , IsFixedABI , IsNotAddressOnly , IsNotResilient } ; <nl> + } <nl> + <nl> static constexpr RecursiveProperties forReference ( ) { <nl> return { IsNotTrivial , IsFixedABI , IsNotAddressOnly , IsNotResilient } ; <nl> } <nl> class TypeLowering { <nl> } <nl> <nl> static constexpr RecursiveProperties forResilient ( ) { <nl> - return { IsNotTrivial , IsNotFixedABI , IsAddressOnly , IsResilient } ; <nl> + return { IsTrivial , IsFixedABI , IsNotAddressOnly , IsResilient } ; <nl> } <nl> <nl> void addSubobject ( RecursiveProperties other ) { <nl> mmm a / lib / SIL / TypeLowering . cpp <nl> ppp b / lib / SIL / TypeLowering . cpp <nl> namespace { <nl> / / The subclass should implement : <nl> / / / / Trivial , fixed - layout , and non - address - only . <nl> / / RetTy handleTrivial ( CanType ) ; <nl> + / / RetTy handleTrivial ( CanType . RecursiveProperties properties ) ; <nl> / / / / A reference type . <nl> / / RetTy handleReference ( CanType ) ; <nl> / / / / Non - trivial and address - only . <nl> / / RetTy handleAddressOnly ( CanType , RecursiveProperties properties ) ; <nl> / / and , if it doesn ' t override handleTupleType , <nl> / / / / An aggregate type that ' s non - trivial . 
<nl> - / / RetTy handleNonTrivialAggregate ( CanType , IsFixedABI_t fixed ) ; <nl> + / / RetTy handleNonTrivialAggregate ( CanType , RecursiveProperties properties ) ; <nl> / / <nl> / / Alternatively , it can just implement : <nl> / / RetTy handle ( CanType , RecursiveProperties properties ) ; <nl> <nl> / / / Handle a trivial , fixed - size , loadable type . <nl> - RetTy handleTrivial ( CanType type ) { <nl> - return asImpl ( ) . handle ( type , RecursiveProperties ( ) ) ; <nl> - } <nl> - <nl> - RetTy handleReference ( CanType type ) { <nl> - return asImpl ( ) . handle ( type , RecursiveProperties : : forReference ( ) ) ; <nl> + RetTy handleTrivial ( CanType type , RecursiveProperties properties ) { <nl> + return asImpl ( ) . handle ( type , properties ) ; <nl> } <nl> <nl> RetTy handleAddressOnly ( CanType type , RecursiveProperties properties ) { <nl> namespace { <nl> return asImpl ( ) . handle ( type , properties ) ; <nl> } <nl> <nl> + RetTy handleTrivial ( CanType type ) { <nl> + return asImpl ( ) . handleTrivial ( type , RecursiveProperties : : forTrivial ( ) ) ; <nl> + } <nl> + RetTy handleReference ( CanType type ) { <nl> + return asImpl ( ) . handle ( type , RecursiveProperties : : forReference ( ) ) ; <nl> + } <nl> + <nl> # define IMPL ( TYPE , LOWERING ) \ <nl> RetTy visit # # TYPE # # Type ( Can # # TYPE # # Type type ) { \ <nl> return asImpl ( ) . handle # # LOWERING ( type ) ; \ <nl> namespace { <nl> RetTy visitBuiltinUnsafeValueBufferType ( <nl> CanBuiltinUnsafeValueBufferType type ) { <nl> return asImpl ( ) . handleAddressOnly ( type , { IsNotTrivial , IsFixedABI , <nl> - IsAddressOnly } ) ; <nl> + IsAddressOnly , IsNotResilient } ) ; <nl> } <nl> <nl> RetTy visitAnyFunctionType ( CanAnyFunctionType type ) { <nl> namespace { <nl> RetTy visit # # Name # # StorageType ( Can # # Name # # StorageType type ) { \ <nl> return asImpl ( ) . 
handleAddressOnly ( type , { IsNotTrivial , \ <nl> IsFixedABI , \ <nl> - IsAddressOnly } ) ; \ <nl> + IsAddressOnly , \ <nl> + IsNotResilient } ) ; \ <nl> } <nl> # define ALWAYS_LOADABLE_CHECKED_REF_STORAGE ( Name , . . . ) \ <nl> RetTy visit # # Name # # StorageType ( Can # # Name # # StorageType type ) { \ <nl> namespace { <nl> RetTy visitAddressOnly # # Name # # StorageType ( Can # # Name # # StorageType type ) { \ <nl> return asImpl ( ) . handleAddressOnly ( type , { IsNotTrivial , \ <nl> IsFixedABI , \ <nl> - IsAddressOnly } ) ; \ <nl> + IsAddressOnly , \ <nl> + IsNotResilient } ) ; \ <nl> } \ <nl> RetTy visit # # Name # # StorageType ( Can # # Name # # StorageType type ) { \ <nl> auto referentType = type - > getReferentType ( ) ; \ <nl> namespace { <nl> } <nl> <nl> if ( LayoutInfo - > isAddressOnlyTrivial ( ) ) { <nl> - return asImpl ( ) . handleAddressOnly ( type , <nl> - { IsTrivial , IsNotFixedABI , IsAddressOnly } ) ; <nl> + auto properties = RecursiveProperties : : forTrivial ( ) ; <nl> + properties . setAddressOnly ( ) ; <nl> + return asImpl ( ) . handleAddressOnly ( type , properties ) ; <nl> } <nl> <nl> if ( LayoutInfo - > isRefCounted ( ) ) <nl> namespace { <nl> case ExistentialRepresentation : : Opaque : <nl> return asImpl ( ) . handleAddressOnly ( type , { IsNotTrivial , <nl> IsFixedABI , <nl> - IsAddressOnly } ) ; <nl> + IsAddressOnly , <nl> + IsNotResilient } ) ; <nl> / / Class - constrained and boxed existentials are refcounted . <nl> case ExistentialRepresentation : : Class : <nl> case ExistentialRepresentation : : Boxed : <nl> namespace { <nl> <nl> RetTy visitSILBlockStorageType ( CanSILBlockStorageType type ) { <nl> / / Should not be loaded . <nl> - return asImpl ( ) . handleAddressOnly ( type , { IsNotTrivial , IsFixedABI , <nl> - IsAddressOnly } ) ; <nl> + return asImpl ( ) . 
handleAddressOnly ( type , { IsNotTrivial , <nl> + IsFixedABI , <nl> + IsAddressOnly , <nl> + IsNotResilient } ) ; <nl> } <nl> <nl> RetTy visitSILBoxType ( CanSILBoxType type ) { <nl> namespace { <nl> / / / A class for trivial , fixed - layout , loadable types . <nl> class TrivialTypeLowering final : public LoadableTypeLowering { <nl> public : <nl> - TrivialTypeLowering ( SILType type , ResilienceExpansion forExpansion ) <nl> - : LoadableTypeLowering ( type , { IsTrivial , IsFixedABI , IsNotAddressOnly } , <nl> - IsNotReferenceCounted , forExpansion ) { } <nl> + TrivialTypeLowering ( SILType type , RecursiveProperties properties , <nl> + ResilienceExpansion forExpansion ) <nl> + : LoadableTypeLowering ( type , properties , IsNotReferenceCounted , <nl> + forExpansion ) { <nl> + assert ( properties . isFixedABI ( ) ) ; <nl> + assert ( properties . isTrivial ( ) ) ; <nl> + assert ( ! properties . isAddressOnly ( ) ) ; <nl> + } <nl> <nl> SILValue emitLoadOfCopy ( SILBuilder & B , SILLocation loc , SILValue addr , <nl> IsTake_t isTake ) const override { <nl> namespace { <nl> <nl> class NonTrivialLoadableTypeLowering : public LoadableTypeLowering { <nl> public : <nl> - NonTrivialLoadableTypeLowering ( SILType type , <nl> - IsReferenceCounted_t isRefCounted , <nl> - ResilienceExpansion forExpansion ) <nl> - : NonTrivialLoadableTypeLowering ( type , <nl> - { IsNotTrivial , IsFixedABI , IsNotAddressOnly } , <nl> - isRefCounted , forExpansion ) { } <nl> - <nl> - / / / This constructor is necessary because of opaque - values . 
<nl> NonTrivialLoadableTypeLowering ( SILType type , <nl> RecursiveProperties properties , <nl> IsReferenceCounted_t isRefCounted , <nl> namespace { <nl> const = 0 ; <nl> <nl> public : <nl> - LoadableAggTypeLowering ( CanType type , ResilienceExpansion forExpansion ) <nl> + LoadableAggTypeLowering ( CanType type , RecursiveProperties properties , <nl> + ResilienceExpansion forExpansion ) <nl> : NonTrivialLoadableTypeLowering ( SILType : : getPrimitiveObjectType ( type ) , <nl> - IsNotReferenceCounted , forExpansion ) { <nl> + properties , IsNotReferenceCounted , <nl> + forExpansion ) { <nl> } <nl> <nl> virtual SILValue rebuildAggregate ( SILBuilder & B , SILLocation loc , <nl> namespace { <nl> class LoadableTupleTypeLowering final <nl> : public LoadableAggTypeLowering < LoadableTupleTypeLowering , unsigned > { <nl> public : <nl> - LoadableTupleTypeLowering ( CanType type , ResilienceExpansion forExpansion ) <nl> - : LoadableAggTypeLowering ( type , forExpansion ) { } <nl> + LoadableTupleTypeLowering ( CanType type , RecursiveProperties properties , <nl> + ResilienceExpansion forExpansion ) <nl> + : LoadableAggTypeLowering ( type , properties , forExpansion ) { } <nl> <nl> SILValue emitRValueProject ( SILBuilder & B , SILLocation loc , <nl> SILValue tupleValue , unsigned index , <nl> namespace { <nl> class LoadableStructTypeLowering final <nl> : public LoadableAggTypeLowering < LoadableStructTypeLowering , VarDecl * > { <nl> public : <nl> - LoadableStructTypeLowering ( CanType type , ResilienceExpansion forExpansion ) <nl> - : LoadableAggTypeLowering ( type , forExpansion ) { } <nl> + LoadableStructTypeLowering ( CanType type , RecursiveProperties properties , <nl> + ResilienceExpansion forExpansion ) <nl> + : LoadableAggTypeLowering ( type , properties , forExpansion ) { } <nl> <nl> SILValue emitRValueProject ( SILBuilder & B , SILLocation loc , <nl> SILValue structValue , VarDecl * field , <nl> namespace { <nl> / / / A lowering for loadable but non - trivial enum 
types . <nl> class LoadableEnumTypeLowering final : public NonTrivialLoadableTypeLowering { <nl> public : <nl> - LoadableEnumTypeLowering ( CanType type , ResilienceExpansion forExpansion ) <nl> + LoadableEnumTypeLowering ( CanType type , RecursiveProperties properties , <nl> + ResilienceExpansion forExpansion ) <nl> : NonTrivialLoadableTypeLowering ( SILType : : getPrimitiveObjectType ( type ) , <nl> + properties , <nl> IsNotReferenceCounted , <nl> forExpansion ) { } <nl> <nl> namespace { <nl> AddressOnlyTypeLowering ( SILType type , RecursiveProperties properties , <nl> ResilienceExpansion forExpansion ) <nl> : TypeLowering ( type , properties , IsNotReferenceCounted , <nl> - forExpansion ) <nl> - { } <nl> + forExpansion ) { <nl> + assert ( properties . isAddressOnly ( ) ) ; <nl> + } <nl> <nl> void emitCopyInto ( SILBuilder & B , SILLocation loc , <nl> SILValue src , SILValue dest , IsTake_t isTake , <nl> namespace { <nl> UnsafeValueBufferTypeLowering ( SILType type , <nl> ResilienceExpansion forExpansion ) <nl> : AddressOnlyTypeLowering ( type , <nl> - { IsNotTrivial , IsFixedABI , IsAddressOnly } , <nl> + { IsNotTrivial , IsFixedABI , <nl> + IsAddressOnly , IsNotResilient } , <nl> forExpansion ) { } <nl> <nl> void emitCopyInto ( SILBuilder & B , SILLocation loc , <nl> namespace { <nl> TC ( TC ) , Dependent ( Dependent ) { } <nl> <nl> TypeLowering * handleTrivial ( CanType type ) { <nl> + return handleTrivial ( type , RecursiveProperties : : forTrivial ( ) ) ; <nl> + } <nl> + <nl> + TypeLowering * handleTrivial ( CanType type , <nl> + RecursiveProperties properties ) { <nl> auto silType = SILType : : getPrimitiveObjectType ( type ) ; <nl> - return new ( TC , Dependent ) TrivialTypeLowering ( silType , Expansion ) ; <nl> + return new ( TC , Dependent ) TrivialTypeLowering ( silType , properties , <nl> + Expansion ) ; <nl> } <nl> <nl> TypeLowering * handleReference ( CanType type ) { <nl> namespace { <nl> properties ) ; <nl> } <nl> <nl> + bool handleResilience ( 
CanType type , NominalTypeDecl * D , <nl> + RecursiveProperties & properties ) { <nl> + if ( D - > isResilient ( ) ) { <nl> + / / If the type is resilient and defined in our module , make a note of <nl> + / / that , since our lowering now depends on the resilience expansion . <nl> + bool sameModule = ( D - > getModuleContext ( ) = = M . getSwiftModule ( ) ) ; <nl> + if ( sameModule ) <nl> + properties . addSubobject ( RecursiveProperties : : forResilient ( ) ) ; <nl> + <nl> + / / If the type is in a different module , or if we ' re using a minimal <nl> + / / expansion , the type is address only and completely opaque to us . <nl> + / / <nl> + / / Note : if the type is in a different module , the lowering does <nl> + / / not depend on the resilience expansion , so we do not need to set <nl> + / / the isResilent ( ) flag above . <nl> + if ( ! sameModule | | Expansion = = ResilienceExpansion : : Minimal ) { <nl> + properties . addSubobject ( RecursiveProperties : : forOpaque ( ) ) ; <nl> + return true ; <nl> + } <nl> + } <nl> + <nl> + return false ; <nl> + } <nl> + <nl> TypeLowering * visitAnyStructType ( CanType structType , StructDecl * D ) { <nl> + RecursiveProperties properties ; <nl> <nl> - / / For now , if the type does not have a fixed layout in all resilience <nl> - / / domains , we will treat it as address - only in SIL . <nl> - if ( D - > isResilient ( M . getSwiftModule ( ) , Expansion ) ) <nl> - return handleAddressOnly ( structType , <nl> - RecursiveProperties : : forResilient ( ) ) ; <nl> + if ( handleResilience ( structType , D , properties ) ) <nl> + return handleAddressOnly ( structType , properties ) ; <nl> <nl> / / Classify the type according to its stored properties . 
<nl> - RecursiveProperties properties ; <nl> for ( auto field : D - > getStoredProperties ( ) ) { <nl> auto substFieldType = <nl> structType - > getTypeOfMember ( D - > getModuleContext ( ) , field , nullptr ) ; <nl> namespace { <nl> } <nl> <nl> TypeLowering * visitAnyEnumType ( CanType enumType , EnumDecl * D ) { <nl> - / / For now , if the type does not have a fixed layout in all resilience <nl> - / / domains , we will treat it as address - only in SIL . <nl> - if ( D - > isResilient ( M . getSwiftModule ( ) , Expansion ) ) <nl> - return handleAddressOnly ( enumType , RecursiveProperties : : forResilient ( ) ) ; <nl> + RecursiveProperties properties ; <nl> + <nl> + if ( handleResilience ( enumType , D , properties ) ) <nl> + return handleAddressOnly ( enumType , properties ) ; <nl> <nl> / / If the whole enum is indirect , we lower it as if all payload <nl> / / cases were indirect . This means a fixed - layout indirect enum <nl> / / is always loadable and nontrivial . A resilient indirect enum <nl> / / is still address only , because we don ' t know how many bits <nl> - / / are used for the discriminator . <nl> + / / are used for the discriminator , and new non - indirect cases <nl> + / / may be added resiliently later . <nl> if ( D - > isIndirect ( ) ) { <nl> - return new ( TC , Dependent ) LoadableEnumTypeLowering ( enumType , Expansion ) ; <nl> + properties . setNonTrivial ( ) ; <nl> + return new ( TC , Dependent ) LoadableEnumTypeLowering ( enumType , properties , <nl> + Expansion ) ; <nl> } <nl> <nl> / / Accumulate the properties of all direct payloads . <nl> - RecursiveProperties properties ; <nl> for ( auto elt : D - > getAllElements ( ) ) { <nl> / / No - payload elements do not affect any recursive properties . <nl> if ( ! elt - > hasAssociatedValues ( ) ) <nl> namespace { <nl> } <nl> assert ( props . isFixedABI ( ) ) ; <nl> if ( props . 
isTrivial ( ) ) { <nl> - return handleTrivial ( type ) ; <nl> + return handleTrivial ( type , props ) ; <nl> } <nl> - return new ( TC , Dependent ) LoadableLoweringClass ( type , Expansion ) ; <nl> + return new ( TC , Dependent ) LoadableLoweringClass ( type , props , Expansion ) ; <nl> } <nl> } ; <nl> } / / end anonymous namespace <nl> getTypeLoweringForExpansion ( TypeKey key , <nl> if ( ! lowering - > isResilient ( ) ) { <nl> / / Don ' t try to refine the lowering for other resilience expansions if <nl> / / we don ' t expect to get a different lowering anyway . <nl> + / / <nl> + / / See LowerType : : handleResilience ( ) for the gory details ; we only <nl> + / / set this flag if the type is resilient * and * inside our module . <nl> return lowering ; <nl> } <nl> <nl>
|
SIL : Type lowering propagates recursive properties in all cases
|
apple/swift
|
268f780a64df9602c6937d7a28339284b7ee4eca
|
2019-03-06T07:26:26Z
|
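The SIL type-lowering patch above threads a `RecursiveProperties` value through every handler and merges per-field flags into the aggregate's properties. The flag-merging idea can be sketched in Python — note the class and field names here are illustrative stand-ins, not the Swift compiler's actual API:

```python
from dataclasses import dataclass

@dataclass
class RecursiveProperties:
    """Sticky flags: merging a 'worse' subobject can only degrade the aggregate."""
    trivial: bool = True
    fixed_abi: bool = True
    address_only: bool = False
    resilient: bool = False

    def add_subobject(self, other: "RecursiveProperties") -> None:
        # A single non-trivial / address-only field "infects" the whole aggregate,
        # which is what the patch's handleResilience() relies on.
        self.trivial = self.trivial and other.trivial
        self.fixed_abi = self.fixed_abi and other.fixed_abi
        self.address_only = self.address_only or other.address_only
        self.resilient = self.resilient or other.resilient

# A struct with a trivial field plus an address-only, resilient field:
props = RecursiveProperties()
props.add_subobject(RecursiveProperties())  # trivial field: no change
props.add_subobject(RecursiveProperties(trivial=False, fixed_abi=False,
                                        address_only=True, resilient=True))
print(props.trivial, props.address_only, props.resilient)  # -> False True True
```

This mirrors why the patch asserts `properties.isTrivial()` in `TrivialTypeLowering` and `properties.isAddressOnly()` in `AddressOnlyTypeLowering`: once the flags are propagated uniformly, each lowering class can verify it was handed a consistent set.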
mmm a / lib / Driver / Driver . cpp <nl> ppp b / lib / Driver / Driver . cpp <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> return ; <nl> } <nl> <nl> - ActionList CompileActions ; <nl> + ActionList AllModuleInputs ; <nl> + ActionList AllLinkerInputs ; <nl> + <nl> switch ( OI . CompilerMode ) { <nl> case OutputInfo : : Mode : : StandardCompile : <nl> case OutputInfo : : Mode : : UpdateCode : { <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> Current . reset ( new CompileJobAction ( Current . release ( ) , <nl> types : : TY_LLVM_BC , <nl> previousBuildState ) ) ; <nl> + AllModuleInputs . push_back ( Current . get ( ) ) ; <nl> Current . reset ( new BackendJobAction ( Current . release ( ) , <nl> OI . CompilerOutputType , 0 ) ) ; <nl> - } else <nl> + } else { <nl> Current . reset ( new CompileJobAction ( Current . release ( ) , <nl> OI . CompilerOutputType , <nl> previousBuildState ) ) ; <nl> + AllModuleInputs . push_back ( Current . get ( ) ) ; <nl> + } <nl> break ; <nl> } <nl> case types : : TY_SwiftModuleFile : <nl> case types : : TY_SwiftModuleDocFile : <nl> / / Module inputs are okay if generating a module or linking . <nl> - if ( OI . ShouldGenerateModule ) <nl> + if ( OI . ShouldGenerateModule ) { <nl> + AllModuleInputs . push_back ( Current . get ( ) ) ; <nl> break ; <nl> + } <nl> SWIFT_FALLTHROUGH ; <nl> case types : : TY_AutolinkFile : <nl> case types : : TY_Object : <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> llvm_unreachable ( " these types should never be inferred " ) ; <nl> } <nl> <nl> - CompileActions . push_back ( Current . release ( ) ) ; <nl> + AllLinkerInputs . push_back ( Current . release ( ) ) ; <nl> } <nl> break ; <nl> } <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> if ( HandledHere ) { <nl> / / Create a single CompileJobAction and a single BackendJobAction . <nl> std : : unique_ptr < Action > CA ( new CompileJobAction ( types : : TY_LLVM_BC ) ) ; <nl> + AllModuleInputs . 
push_back ( CA . get ( ) ) ; <nl> + <nl> int InputIndex = 0 ; <nl> for ( const InputPair & Input : Inputs ) { <nl> types : : ID InputType = Input . first ; <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> / / Only the first backend job owns the compilation job ( to prevent <nl> / / multiple de - allocations of the compilation job ) . <nl> BJA - > setOwnsInputs ( InputIndex = = 0 ) ; <nl> - CompileActions . push_back ( BJA ) ; <nl> + AllLinkerInputs . push_back ( BJA ) ; <nl> } <nl> InputIndex + + ; <nl> } <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> / / file . <nl> CA . reset ( new BackendJobAction ( CAReleased , <nl> OI . CompilerOutputType , 0 ) ) ; <nl> - CompileActions . push_back ( CA . release ( ) ) ; <nl> + AllLinkerInputs . push_back ( CA . release ( ) ) ; <nl> } <nl> break ; <nl> } <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> <nl> CA - > addInput ( new InputAction ( * InputArg , InputType ) ) ; <nl> } <nl> - CompileActions . push_back ( CA . release ( ) ) ; <nl> + AllModuleInputs . push_back ( CA . get ( ) ) ; <nl> + AllLinkerInputs . push_back ( CA . release ( ) ) ; <nl> } <nl> break ; <nl> } <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> Mode = REPLJobAction : : Mode : : Integrated ; <nl> } <nl> <nl> - CompileActions . push_back ( new REPLJobAction ( Mode ) ) ; <nl> - break ; <nl> + Actions . push_back ( new REPLJobAction ( Mode ) ) ; <nl> + return ; <nl> } <nl> } <nl> <nl> - if ( CompileActions . empty ( ) ) <nl> + if ( AllLinkerInputs . empty ( ) ) <nl> / / If there are no compile actions , don ' t attempt to set up any downstream <nl> / / actions . <nl> return ; <nl> <nl> std : : unique_ptr < Action > MergeModuleAction ; <nl> if ( OI . ShouldGenerateModule & & <nl> - OI . CompilerMode ! = OutputInfo : : Mode : : SingleCompile ) { <nl> + OI . CompilerMode ! = OutputInfo : : Mode : : SingleCompile & & <nl> + ! AllModuleInputs . 
empty ( ) ) { <nl> / / We ' re performing multiple compilations ; set up a merge module step <nl> / / so we generate a single swiftmodule as output . <nl> - MergeModuleAction . reset ( new MergeModuleJobAction ( CompileActions ) ) ; <nl> + MergeModuleAction . reset ( new MergeModuleJobAction ( AllModuleInputs ) ) ; <nl> + MergeModuleAction - > setOwnsInputs ( false ) ; <nl> } <nl> <nl> if ( OI . shouldLink ( ) ) { <nl> - Action * LinkAction = new LinkJobAction ( CompileActions , OI . LinkAction ) ; <nl> + Action * LinkAction = new LinkJobAction ( AllLinkerInputs , OI . LinkAction ) ; <nl> <nl> if ( TC . getTriple ( ) . getObjectFormat ( ) = = llvm : : Triple : : ELF ) { <nl> / / On ELF platforms there ' s no built in autolinking mechanism , so we <nl> / / pull the info we need from the . o files directly and pass them as an <nl> / / argument input file to the linker . <nl> - Action * AutolinkExtractAction = new AutolinkExtractJobAction ( CompileActions ) ; <nl> + Action * AutolinkExtractAction = <nl> + new AutolinkExtractJobAction ( AllLinkerInputs ) ; <nl> / / Takes the same inputs as the linker , but doesn ' t own them . <nl> AutolinkExtractAction - > setOwnsInputs ( false ) ; <nl> / / And gives its output to the linker . <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> } <nl> <nl> if ( MergeModuleAction ) { <nl> - / / We have a MergeModuleJobAction ; this needs to be an input to the <nl> - / / LinkJobAction . It shares inputs with the LinkAction , so tell it that it <nl> - / / no longer owns its inputs . <nl> - MergeModuleAction - > setOwnsInputs ( false ) ; <nl> if ( OI . DebugInfoKind = = IRGenDebugInfoKind : : Normal ) <nl> LinkAction - > addInput ( MergeModuleAction . release ( ) ) ; <nl> else <nl> void Driver : : buildActions ( const ToolChain & TC , <nl> dSYMAction - > setOwnsInputs ( false ) ; <nl> Actions . push_back ( dSYMAction ) ; <nl> } <nl> - } else if ( MergeModuleAction ) { <nl> - Actions . push_back ( MergeModuleAction . 
release ( ) ) ; <nl> } else { <nl> - Actions = CompileActions ; <nl> + / / The merge module action needs to be first to force the right outputs <nl> + / / for the other actions . However , we can ' t rely on it being the only <nl> + / / action because there may be other actions ( e . g . BackenJobActions ) that <nl> + / / are not merge - module inputs but nonetheless should be run . <nl> + if ( MergeModuleAction ) <nl> + Actions . push_back ( MergeModuleAction . release ( ) ) ; <nl> + Actions . append ( AllLinkerInputs . begin ( ) , AllLinkerInputs . end ( ) ) ; <nl> } <nl> } <nl> <nl> mmm a / lib / Driver / Tools . cpp <nl> ppp b / lib / Driver / Tools . cpp <nl> static void addInputsOfType ( ArgStringList & Arguments , <nl> auto & output = Cmd - > getOutput ( ) . getAnyOutputForType ( InputType ) ; <nl> if ( ! output . empty ( ) ) <nl> Arguments . push_back ( output . c_str ( ) ) ; <nl> - else if ( isa < BackendJobAction > ( Cmd - > getSource ( ) ) & & <nl> - InputType = = types : : TY_SwiftModuleFile ) { <nl> - / / Since BackendJobAction does not generate Swift module files , we look <nl> - / / through BackendJobAction ' s inputs ( CompileJobAction ) to find the Swift <nl> - / / module files . <nl> - assert ( Cmd - > getInputs ( ) . size ( ) = = 1 ) ; <nl> - auto * CompileJob = Cmd - > getInputs ( ) . front ( ) ; <nl> - auto & output = CompileJob - > getOutput ( ) . getAnyOutputForType ( InputType ) ; <nl> - if ( ! output . empty ( ) ) <nl> - Arguments . push_back ( output . c_str ( ) ) ; <nl> - } <nl> } <nl> } <nl> <nl> darwin : : Linker : : constructArgumentList ( const JobAction & JA , <nl> addInputsOfType ( Arguments , InputActions , types : : TY_Object ) ; <nl> <nl> if ( OI . DebugInfoKind = = IRGenDebugInfoKind : : Normal ) { <nl> - Arguments . push_back ( " - add_ast_path " ) ; <nl> <nl> size_t argCount = Arguments . size ( ) ; <nl> if ( OI . 
CompilerMode = = OutputInfo : : Mode : : SingleCompile ) <nl> addInputsOfType ( Arguments , Inputs , types : : TY_SwiftModuleFile ) ; <nl> else <nl> addPrimaryInputsOfType ( Arguments , Inputs , types : : TY_SwiftModuleFile ) ; <nl> - assert ( argCount + 1 = = Arguments . size ( ) & & " no swiftmodule found for - g " ) ; <nl> - ( void ) argCount ; <nl> + <nl> + if ( Arguments . size ( ) > argCount ) { <nl> + assert ( argCount + 1 = = Arguments . size ( ) & & <nl> + " multiple swiftmodules found for - g " ) ; <nl> + Arguments . insert ( Arguments . end ( ) - 1 , " - add_ast_path " ) ; <nl> + } <nl> } <nl> <nl> switch ( cast < LinkJobAction > ( JA ) . getKind ( ) ) { <nl> mmm a / test / Driver / actions . swift <nl> ppp b / test / Driver / actions . swift <nl> <nl> <nl> / / RUN : touch % t / a . o % t / b . o <nl> / / RUN : % swiftc_driver - driver - print - actions % t / a . o % t / b . o - o main 2 > & 1 | FileCheck % s - check - prefix = LINK - ONLY <nl> + / / RUN : % swiftc_driver - driver - print - actions - g % t / a . o % t / b . o - o main 2 > & 1 | FileCheck % s - check - prefix = LINK - ONLY <nl> / / LINK - ONLY : 0 : input , " { { . * } } / a . o " , object <nl> / / LINK - ONLY : 1 : input , " { { . * } } / b . o " , object <nl> / / LINK - ONLY : 2 : link , { 0 , 1 } , image <nl> <nl> / / DEBUG - LINK - ONLY : 1 : input , " { { . * } } / b . o " , object <nl> / / DEBUG - LINK - ONLY : 2 : input , " { { . * } } / a . swiftmodule " , swiftmodule <nl> / / DEBUG - LINK - ONLY : 3 : input , " { { . * } } / b . swiftmodule " , swiftmodule <nl> - / / DEBUG - LINK - ONLY : 4 : merge - module , { 0 , 1 , 2 , 3 } , swiftmodule <nl> + / / DEBUG - LINK - ONLY : 4 : merge - module , { 2 , 3 } , swiftmodule <nl> / / DEBUG - LINK - ONLY : 5 : link , { 0 , 1 , 2 , 3 , 4 } , image <nl> / / DEBUG - LINK - ONLY : 6 : generate - dSYM , { 5 } , dSYM <nl> <nl> mmm a / test / Driver / bindings . swift <nl> ppp b / test / Driver / bindings . 
swift <nl> <nl> / / RUN : echo ' { " " : { " object " : " objroot / bindings . o " } } ' > % t / map . json <nl> / / RUN : % swiftc_driver - driver - print - bindings - output - file - map % t / map . json - whole - module - optimization - target x86_64 - apple - macosx10 . 9 % s % S / Inputs / lib . swift 2 > & 1 | FileCheck % s - check - prefix = MAP - WFO <nl> / / MAP - WFO : # " x86_64 - apple - macosx10 . 9 " - " swift " , inputs : [ " { { . * } } bindings . swift " , " { { . * } } lib . swift " ] , output : { object : " objroot / bindings . o " } <nl> + <nl> + / / RUN : touch % t / a . o % t / b . o <nl> + / / RUN : % swiftc_driver - driver - print - bindings - target x86_64 - apple - macosx10 . 9 % t / a . o % t / b . o - o main 2 > & 1 | FileCheck % s - check - prefix = LINK - ONLY <nl> + / / RUN : % swiftc_driver - driver - print - bindings - target x86_64 - apple - macosx10 . 9 - g % t / a . o % t / b . o - o main 2 > & 1 | FileCheck % s - check - prefix = LINK - ONLY <nl> + / / LINK - ONLY : # " x86_64 - apple - macosx10 . 9 " - " darwin : : Linker " , inputs : [ " { { . * } } / a . o " , " { { . * } } / b . o " ] , output : { image : " main " } <nl> mmm a / test / Driver / embed - bitcode . swift <nl> ppp b / test / Driver / embed - bitcode . 
swift <nl> <nl> / / CHECK - MODULE - DAG : - Xllvm - fake - llvm - option <nl> / / CHECK - MODULE - DAG : - emit - module - path <nl> / / CHECK - MODULE : - frontend <nl> + / / CHECK - MODULE : - emit - module <nl> + / / CHECK - MODULE : - frontend <nl> + / / CHECK - MODULE : - c <nl> / / CHECK - MODULE - NOT : - Xcc <nl> / / CHECK - MODULE - NOT : - DDEBUG <nl> / / CHECK - MODULE - NOT : - fake - llvm - option <nl> / / CHECK - MODULE - NOT : - emit - module - path <nl> - / / CHECK - MODULE : - frontend <nl> - / / CHECK - MODULE : - emit - module <nl> <nl> / / RUN : % target - swiftc_driver - embed - bitcode - force - single - frontend - invocation % s 2 > & 1 - # # # | FileCheck % s - check - prefix = CHECK - SINGLE <nl> / / CHECK - SINGLE : - frontend <nl> <nl> / / CHECK - LIB : - emit - bc <nl> / / CHECK - LIB : - primary - file <nl> / / CHECK - LIB : swift - frontend <nl> - / / CHECK - LIB : - c <nl> - / / CHECK - LIB : - embed - bitcode <nl> - / / CHECK - LIB : - disable - llvm - optzns <nl> - / / CHECK - LIB : swift - frontend <nl> / / CHECK - LIB : - emit - bc <nl> / / CHECK - LIB : - primary - file <nl> / / CHECK - LIB : swift - frontend <nl> + / / CHECK - LIB : - emit - module <nl> + / / CHECK - LIB : swift - frontend <nl> / / CHECK - LIB : - c <nl> / / CHECK - LIB : - embed - bitcode <nl> / / CHECK - LIB : - disable - llvm - optzns <nl> / / CHECK - LIB : swift - frontend <nl> - / / CHECK - LIB : - emit - module <nl> + / / CHECK - LIB : - c <nl> + / / CHECK - LIB : - embed - bitcode <nl> + / / CHECK - LIB : - disable - llvm - optzns <nl> / / CHECK - LIB - NOT : swift - frontend <nl>
|
[ Driver ] Be more explicit about the inputs to the merge - module action .
|
apple/swift
|
208f647bd566470c7101ca57eb11335f20c17b75
|
2015-08-21T02:30:52Z
|
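The driver patch replaces the single `CompileActions` list with separate `AllModuleInputs` and `AllLinkerInputs` lists, so object-file inputs reach the linker without being fed into the merge-module step. A rough Python sketch of that partitioning (the kind strings are made up for illustration and simplify the real driver's cases):

```python
def partition_inputs(inputs):
    """Split driver inputs roughly the way the patch does: source files feed
    both the merge-module step and the linker, swiftmodules feed only
    merge-module, and objects feed only the linker."""
    module_inputs, linker_inputs = [], []
    for name, kind in inputs:
        if kind == "swift":
            # A compile job produces both a partial module and an object.
            module_inputs.append(name)
            linker_inputs.append(name)
        elif kind == "swiftmodule":
            module_inputs.append(name)
        else:  # object files, autolink files, ...
            linker_inputs.append(name)
    return module_inputs, linker_inputs

mods, links = partition_inputs([("a.swift", "swift"),
                                ("b.swiftmodule", "swiftmodule"),
                                ("c.o", "object")])
print(mods)   # -> ['a.swift', 'b.swiftmodule']
print(links)  # -> ['a.swift', 'c.o']
```

With only `.o` inputs, `module_inputs` comes back empty — which matches the patch's new `!AllModuleInputs.empty()` guard and the `LINK-ONLY` test cases, where a `-g` link of two objects no longer tries to create a merge-module action.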
mmm a / lib / Sema / MiscDiagnostics . cpp <nl> ppp b / lib / Sema / MiscDiagnostics . cpp <nl> static void diagSyntacticUseRestrictions ( TypeChecker & TC , const Expr * E , <nl> if ( auto * IOE = dyn_cast < InOutExpr > ( SE - > getBase ( ) ) ) <nl> if ( IOE - > isImplicit ( ) ) <nl> AcceptableInOutExprs . insert ( IOE ) ; <nl> + <nl> + visitIndices ( SE , [ & ] ( unsigned argIndex , Expr * arg ) { <nl> + arg = lookThroughArgument ( arg ) ; <nl> + if ( auto * DRE = dyn_cast < DeclRefExpr > ( arg ) ) <nl> + checkNoEscapeParameterUse ( DRE , SE , OperandKind : : Argument ) ; <nl> + } ) ; <nl> } <nl> <nl> / / Check decl refs in withoutActuallyEscaping blocks . <nl> static void diagSyntacticUseRestrictions ( TypeChecker & TC , const Expr * E , <nl> return { true , E } ; <nl> } <nl> <nl> - static void visitArguments ( ApplyExpr * apply , <nl> - llvm : : function_ref < void ( unsigned , Expr * ) > fn ) { <nl> - auto * arg = apply - > getArg ( ) ; <nl> - <nl> + / / / Visit the argument / s represented by either a ParenExpr or TupleExpr , <nl> + / / / unshuffling if needed . If any other kind of expression , will pass it <nl> + / / / straight back . <nl> + static void argExprVisitArguments ( Expr * arg , <nl> + llvm : : function_ref <nl> + < void ( unsigned , Expr * ) > fn ) { <nl> / / The argument could be shuffled if it includes default arguments , <nl> / / label differences , or other exciting things like that . 
<nl> if ( auto * TSE = dyn_cast < TupleShuffleExpr > ( arg ) ) <nl> static void diagSyntacticUseRestrictions ( TypeChecker & TC , const Expr * E , <nl> } <nl> } <nl> <nl> + static void visitIndices ( SubscriptExpr * subscript , <nl> + llvm : : function_ref < void ( unsigned , Expr * ) > fn ) { <nl> + auto * indexArgs = subscript - > getIndex ( ) ; <nl> + argExprVisitArguments ( indexArgs , fn ) ; <nl> + } <nl> + <nl> + static void visitArguments ( ApplyExpr * apply , <nl> + llvm : : function_ref < void ( unsigned , Expr * ) > fn ) { <nl> + auto * arg = apply - > getArg ( ) ; <nl> + argExprVisitArguments ( arg , fn ) ; <nl> + } <nl> + <nl> static Expr * lookThroughArgument ( Expr * arg ) { <nl> while ( 1 ) { <nl> if ( auto conv = dyn_cast < ImplicitConversionExpr > ( arg ) ) <nl> static void diagSyntacticUseRestrictions ( TypeChecker & TC , const Expr * E , <nl> if ( isa < ParamDecl > ( DRE - > getDecl ( ) ) & & useKind = = OperandKind : : Callee ) <nl> checkNoEscapeParameterCall ( apply ) ; <nl> return ; <nl> + } else if ( isa < SubscriptExpr > ( parent ) <nl> + & & useKind = = OperandKind : : Argument ) { <nl> + return ; <nl> } else if ( isa < MakeTemporarilyEscapableExpr > ( parent ) ) { <nl> return ; <nl> } <nl> mmm a / test / decl / subscript / noescape_accessors . swift <nl> ppp b / test / decl / subscript / noescape_accessors . swift <nl> struct Subscripts { <nl> global = value <nl> } <nl> } <nl> + <nl> + subscript ( nonescapingIndexWithAddressor fn : ( ) - > Void ) - > Int { <nl> + get { <nl> + return 0 <nl> + } <nl> + mutableAddressWithNativeOwner { <nl> + fatalError ( ) <nl> + } <nl> + } <nl> <nl> / / expected - note @ + 1 2 { { implicitly non - escaping } } <nl> subscript ( nonescapingIndex fn : ( ) - > ( ) ) - > Int { <nl> func testSubscripts_value2 ( nonescaping : ( ) - > ( ) , <nl> s [ value2 : 0 ] = nonescaping / / expected - error { { assigning non - escaping parameter } } <nl> } <nl> <nl> - / / FIXME : Allow these uses in subscript expressions ! 
<nl> - / / expected - note @ + 1 2 { { implicitly non - escaping } } <nl> func testSubscripts_nonescapingIndex ( nonescaping : ( ) - > ( ) , <nl> escaping : @ escaping ( ) - > ( ) ) { <nl> var s = Subscripts ( ) <nl> - _ = s [ nonescapingIndex : nonescaping ] / / expected - error { { may only be called } } <nl> + _ = s [ nonescapingIndex : nonescaping ] <nl> _ = s [ nonescapingIndex : escaping ] <nl> - s [ nonescapingIndex : nonescaping ] = 0 / / expected - error { { may only be called } } <nl> + s [ nonescapingIndex : nonescaping ] = 0 <nl> s [ nonescapingIndex : escaping ] = 0 <nl> } <nl> <nl>
|
[ Sema ] Allow non - escaping functions to be passed as subscript arguments
|
apple/swift
|
d7346311209f5061826d3bce889b27029d576293
|
2017-11-02T11:04:53Z
|
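The Sema patch factors the paren/tuple unshuffling out of `visitArguments` into a shared `argExprVisitArguments` helper, so subscript indices take exactly the same path as call arguments. The extraction pattern, reduced to a Python sketch (a plain tuple stands in for a tuple expression, anything else for a single parenthesized argument):

```python
def visit_arg_exprs(arg, fn):
    """Shared helper: visit one parenthesized argument, or each element of
    a tuple argument, calling fn(index, expr) for each."""
    if isinstance(arg, tuple):
        for i, elt in enumerate(arg):
            fn(i, elt)
    else:
        fn(0, arg)

def visit_call_arguments(call_args, fn):
    visit_arg_exprs(call_args, fn)    # calls reuse the helper...

def visit_subscript_indices(index_args, fn):
    visit_arg_exprs(index_args, fn)   # ...and so do subscripts, per the patch

seen = []
visit_subscript_indices(("closure", "x"), lambda i, e: seen.append((i, e)))
print(seen)  # -> [(0, 'closure'), (1, 'x')]
```

Once both entry points share one traversal, the noescape-parameter check added for subscripts in the patch is guaranteed to see the same argument shapes the call-argument check already handled.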
mmm a / lib / Sema / TypeCheckExpr . cpp <nl> ppp b / lib / Sema / TypeCheckExpr . cpp <nl> class SemaExpressionTree : public ExprVisitor < SemaExpressionTree , Expr * > { <nl> <nl> Expr * SemaExpressionTree : : visitUnresolvedDotExpr ( UnresolvedDotExpr * E ) { <nl> Type SubExprTy = E - > getBase ( ) - > getType ( ) ; <nl> - if ( SubExprTy - > is < DependentType > ( ) ) { <nl> - E - > setDependentType ( SubExprTy ) ; <nl> - return E ; <nl> - } <nl> - <nl> + <nl> / / First , check to see if this is a reference to a field in the type or <nl> / / protocol . <nl> <nl> Expr * SemaExpressionTree : : visitUnresolvedDotExpr ( UnresolvedDotExpr * E ) { <nl> / / TODO : Otherwise , do an argument dependent lookup in the namespace of the <nl> / / base type . <nl> <nl> + if ( SubExprTy - > is < DependentType > ( ) ) { <nl> + E - > setDependentType ( SubExprTy ) ; <nl> + return E ; <nl> + } <nl> + <nl> + <nl> TC . diagnose ( E - > getDotLoc ( ) , diag : : no_valid_dot_expression , SubExprTy ) ; <nl> return 0 ; <nl> } <nl>
|
rearrange code , upshot being that " a . b " will resolve if there is only one candidate
|
apple/swift
|
bf91e00f4cc083ffe2fb92df4cbd5f6e862e9db7
|
2011-10-28T00:40:34Z
|
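The `TypeCheckExpr` change moves the dependent-type early-return below the member lookup, so `a.b` can resolve even while the base type is still dependent, provided lookup finds a unique candidate. The control-flow reordering in miniature (hypothetical names, heavily simplified):

```python
def resolve_dot(base_is_dependent, candidates):
    """After the patch: try member lookup first, and only fall back to
    'still dependent' when no unique member was found."""
    if len(candidates) == 1:
        return candidates[0]      # unique member: resolve immediately
    if base_is_dependent:
        return "<dependent>"      # defer until the base type is known
    raise LookupError("no valid dot expression")

print(resolve_dot(True, ["b"]))  # -> b   (resolves despite the dependent base)
print(resolve_dot(True, []))     # -> <dependent>
```

Before the patch the dependent check came first, so the single-candidate case in the first call would have been deferred instead of resolved — exactly the behavior the commit message describes changing.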
mmm a / utils / update - checkout <nl> ppp b / utils / update - checkout <nl> By default , updates your checkouts of Swift , SourceKit , LLDB , and SwiftPM . " " " ) <nl> config , clone_with_ssh , branch , skip_history , args . skip_repository ) <nl> <nl> repo_branch = branch <nl> - for dir_name , _ in config [ " repositories " ] . items ( ) : <nl> + for dir_name in config [ " repositories " ] . keys ( ) : <nl> if dir_name in args . skip_repository : <nl> print ( " mmm Skipping ' " + dir_name + " ' mmm " ) <nl> continue <nl>
|
[ gardening ] Instead of iterating over a dictionary ' s ( k , v ) and just using the key , just iterate over the keys ! NFC .
|
apple/swift
|
9a6e4f6124682d589190817e4047af750055f33b
|
2016-06-19T04:50:55Z
|
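The `update-checkout` gardening change swaps `for dir_name, _ in config["repositories"].items()` for iterating `keys()` directly. The two forms visit the same keys; the latter just avoids unpacking a value that is never used:

```python
config = {"repositories": {"swift": "url1", "lldb": "url2"}}

# Before: iterate (key, value) pairs and discard the value.
before = [name for name, _ in config["repositories"].items()]

# After: iterate the keys directly. (Iterating a dict also yields its keys,
# so `for name in config["repositories"]:` would be equivalent as well.)
after = list(config["repositories"].keys())

print(before == after)  # -> True
```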
mmm a / lib / SILGen / SILGenBuilder . cpp <nl> ppp b / lib / SILGen / SILGenBuilder . cpp <nl> ManagedValue SILGenBuilder : : createBridgeObjectToRef ( SILLocation loc , <nl> return cloner . clone ( result ) ; <nl> } <nl> <nl> + BranchInst * SILGenBuilder : : createBranch ( SILLocation loc , <nl> + SILBasicBlock * targetBlock , <nl> + ArrayRef < ManagedValue > args ) { <nl> + llvm : : SmallVector < SILValue , 8 > newArgs ; <nl> + transform ( args , std : : back_inserter ( newArgs ) , <nl> + [ & ] ( ManagedValue mv ) - > SILValue { return mv . forward ( SGF ) ; } ) ; <nl> + return createBranch ( loc , targetBlock , newArgs ) ; <nl> + } <nl> + <nl> + ReturnInst * SILGenBuilder : : createReturn ( SILLocation loc , <nl> + ManagedValue returnValue ) { <nl> + return createReturn ( loc , returnValue . forward ( SGF ) ) ; <nl> + } <nl> + <nl> / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> / / Switch Enum Builder <nl> / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> mmm a / lib / SILGen / SILGenBuilder . h <nl> ppp b / lib / SILGen / SILGenBuilder . h <nl> class SILGenBuilder : public SILBuilder { <nl> using SILBuilder : : createBridgeObjectToRef ; <nl> ManagedValue createBridgeObjectToRef ( SILLocation loc , ManagedValue mv , <nl> SILType destType ) ; <nl> + <nl> + using SILBuilder : : createBranch ; <nl> + BranchInst * createBranch ( SILLocation Loc , SILBasicBlock * TargetBlock , <nl> + ArrayRef < ManagedValue > Args ) ; <nl> + <nl> + using SILBuilder : : createReturn ; <nl> + ReturnInst * createReturn ( SILLocation Loc , ManagedValue ReturnValue ) ; <nl> } ; <nl> <nl> class SwitchCaseFullExpr ; <nl>
|
Merge pull request from gottesmm / pr - 35c60f9706cc4b7084e9105fa2518a50bcb46d86
|
apple/swift
|
b9393b1f7209cc7c61f6c076c8987530cab6f1b8
|
2017-11-22T08:36:28Z
|
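The `SILGenBuilder` patch adds `createBranch`/`createReturn` overloads that take managed values, forward each one (releasing its cleanup) to get the underlying value, and then delegate to the existing overload. That forward-then-delegate wrapper pattern, sketched in Python (these classes are stand-ins, not SILGen's API):

```python
class Value:
    def __init__(self, name):
        self.name = name

class ManagedValue:
    """Pairs a value with cleanup bookkeeping; forward() disables the
    cleanup and yields the raw value, like ManagedValue::forward(SGF)."""
    def __init__(self, value):
        self.value = value
        self.forwarded = False

    def forward(self):
        self.forwarded = True
        return self.value

class Builder:
    def create_branch(self, target, args):
        # Base overload: takes plain values.
        return ("br", target, [a.name for a in args])

    def create_branch_managed(self, target, managed_args):
        # New overload: forward each managed value, then delegate.
        return self.create_branch(target,
                                  [mv.forward() for mv in managed_args])

b = Builder()
mv = ManagedValue(Value("x"))
print(b.create_branch_managed("bb1", [mv]))  # -> ('br', 'bb1', ['x'])
print(mv.forwarded)                          # -> True
```

The `using SILBuilder::createBranch;` lines in the header serve the C++-specific purpose of keeping the base-class overloads visible alongside the new ones; in Python that issue does not arise, so the sketch simply gives the wrapper a distinct name.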
mmm a / emcc <nl> ppp b / emcc <nl> try : <nl> if not LEAVE_INPUTS_RAW : <nl> link_opts = [ ] if keep_debug else [ ' - strip - debug ' ] <nl> if llvm_opts > 0 : <nl> - if DEBUG : print > > sys . stderr , ' emcc : LLVM - O % d ' % llvm_opts <nl> shared . Building . llvm_opt ( in_temp ( target_basename + ' . bc ' ) , llvm_opts ) <nl> if DEBUG : save_intermediate ( ' opt ' , ' bc ' ) <nl> # Do LTO in a separate pass to work around LLVM bug XXX ( see failure e . g . in cubescript ) <nl> try : <nl> else : <nl> # At minimum remove dead functions etc . , this potentially saves a lot in the size of the generated code ( and the time to compile it ) <nl> link_opts + = shared . Building . get_safe_internalize ( ) + [ ' - globaldce ' ] <nl> - if DEBUG : print > > sys . stderr , ' emcc : LLVM linktime : ' , link_opts <nl> shared . Building . llvm_opt ( in_temp ( target_basename + ' . bc ' ) , link_opts ) <nl> if DEBUG : save_intermediate ( ' linktime ' , ' bc ' ) <nl> <nl> mmm a / tools / shared . py <nl> ppp b / tools / shared . py <nl> def ll_opts ( filename ) : <nl> def llvm_opt ( filename , opts ) : <nl> if type ( opts ) is int : <nl> opts = Building . pick_llvm_opts ( opts ) <nl> + if DEBUG : print > > sys . stderr , ' emcc : LLVM opts : ' , opts <nl> output = Popen ( [ LLVM_OPT , filename ] + opts + [ ' - o = ' + filename + ' . opt . bc ' ] , stdout = PIPE ) . communicate ( ) [ 0 ] <nl> assert os . path . exists ( filename + ' . opt . bc ' ) , ' Failed to run llvm optimizations : ' + output <nl> shutil . move ( filename + ' . opt . bc ' , filename ) <nl>
|
move llvm opt debug messages
|
emscripten-core/emscripten
|
cbf61df3d0542f5ee91e93835cc8e9cc87446035
|
2012-12-25T04:49:27Z
|
mmm a / src / blockencodings . cpp <nl> ppp b / src / blockencodings . cpp <nl> bool PartiallyDownloadedBlock : : IsTxAvailable ( size_t index ) const { <nl> return txn_available [ index ] ? true : false ; <nl> } <nl> <nl> - ReadStatus PartiallyDownloadedBlock : : FillBlock ( CBlock & block , const std : : vector < CTransactionRef > & vtx_missing ) const { <nl> + ReadStatus PartiallyDownloadedBlock : : FillBlock ( CBlock & block , const std : : vector < CTransactionRef > & vtx_missing ) { <nl> assert ( ! header . IsNull ( ) ) ; <nl> + uint256 hash = header . GetHash ( ) ; <nl> block = header ; <nl> block . vtx . resize ( txn_available . size ( ) ) ; <nl> <nl> ReadStatus PartiallyDownloadedBlock : : FillBlock ( CBlock & block , const std : : vector < <nl> return READ_STATUS_INVALID ; <nl> block . vtx [ i ] = vtx_missing [ tx_missing_offset + + ] ; <nl> } else <nl> - block . vtx [ i ] = txn_available [ i ] ; <nl> + block . vtx [ i ] = std : : move ( txn_available [ i ] ) ; <nl> } <nl> + <nl> + / / Make sure we can ' t call FillBlock again . <nl> + header . SetNull ( ) ; <nl> + txn_available . clear ( ) ; <nl> + <nl> if ( vtx_missing . size ( ) ! = tx_missing_offset ) <nl> return READ_STATUS_INVALID ; <nl> <nl> ReadStatus PartiallyDownloadedBlock : : FillBlock ( CBlock & block , const std : : vector < <nl> return READ_STATUS_CHECKBLOCK_FAILED ; <nl> } <nl> <nl> - LogPrint ( " cmpctblock " , " Successfully reconstructed block % s with % lu txn prefilled , % lu txn from mempool and % lu txn requested \ n " , header . GetHash ( ) . ToString ( ) , prefilled_count , mempool_count , vtx_missing . size ( ) ) ; <nl> + LogPrint ( " cmpctblock " , " Successfully reconstructed block % s with % lu txn prefilled , % lu txn from mempool and % lu txn requested \ n " , hash . ToString ( ) , prefilled_count , mempool_count , vtx_missing . size ( ) ) ; <nl> if ( vtx_missing . 
size ( ) < 5 ) { <nl> for ( const auto & tx : vtx_missing ) <nl> - LogPrint ( " cmpctblock " , " Reconstructed block % s required tx % s \ n " , header . GetHash ( ) . ToString ( ) , tx - > GetHash ( ) . ToString ( ) ) ; <nl> + LogPrint ( " cmpctblock " , " Reconstructed block % s required tx % s \ n " , hash . ToString ( ) , tx - > GetHash ( ) . ToString ( ) ) ; <nl> } <nl> <nl> return READ_STATUS_OK ; <nl> mmm a / src / blockencodings . h <nl> ppp b / src / blockencodings . h <nl> class PartiallyDownloadedBlock { <nl> <nl> ReadStatus InitData ( const CBlockHeaderAndShortTxIDs & cmpctblock ) ; <nl> bool IsTxAvailable ( size_t index ) const ; <nl> - ReadStatus FillBlock ( CBlock & block , const std : : vector < CTransactionRef > & vtx_missing ) const ; <nl> + ReadStatus FillBlock ( CBlock & block , const std : : vector < CTransactionRef > & vtx_missing ) ; <nl> } ; <nl> <nl> # endif <nl> mmm a / src / test / blockencodings_tests . cpp <nl> ppp b / src / test / blockencodings_tests . cpp <nl> BOOST_AUTO_TEST_CASE ( SimpleRoundTripTest ) <nl> BOOST_CHECK_EQUAL ( pool . size ( ) , poolSize - 1 ) ; <nl> <nl> CBlock block2 ; <nl> - std : : vector < CTransactionRef > vtx_missing ; <nl> - BOOST_CHECK ( partialBlock . FillBlock ( block2 , vtx_missing ) = = READ_STATUS_INVALID ) ; / / No transactions <nl> + { <nl> + PartiallyDownloadedBlock tmp = partialBlock ; <nl> + BOOST_CHECK ( partialBlock . FillBlock ( block2 , { } ) = = READ_STATUS_INVALID ) ; / / No transactions <nl> + partialBlock = tmp ; <nl> + } <nl> <nl> - vtx_missing . push_back ( block . vtx [ 2 ] ) ; / / Wrong transaction <nl> - partialBlock . FillBlock ( block2 , vtx_missing ) ; / / Current implementation doesn ' t check txn here , but don ' t require that <nl> + / / Wrong transaction <nl> + { <nl> + PartiallyDownloadedBlock tmp = partialBlock ; <nl> + partialBlock . FillBlock ( block2 , { block . 
vtx [ 2 ] } ) ; / / Current implementation doesn ' t check txn here , but don ' t require that <nl> + partialBlock = tmp ; <nl> + } <nl> bool mutated ; <nl> BOOST_CHECK ( block . hashMerkleRoot ! = BlockMerkleRoot ( block2 , & mutated ) ) ; <nl> <nl> - vtx_missing [ 0 ] = block . vtx [ 1 ] ; <nl> CBlock block3 ; <nl> - BOOST_CHECK ( partialBlock . FillBlock ( block3 , vtx_missing ) = = READ_STATUS_OK ) ; <nl> + BOOST_CHECK ( partialBlock . FillBlock ( block3 , { block . vtx [ 1 ] } ) = = READ_STATUS_OK ) ; <nl> BOOST_CHECK_EQUAL ( block . GetHash ( ) . ToString ( ) , block3 . GetHash ( ) . ToString ( ) ) ; <nl> BOOST_CHECK_EQUAL ( block . hashMerkleRoot . ToString ( ) , BlockMerkleRoot ( block3 , & mutated ) . ToString ( ) ) ; <nl> BOOST_CHECK ( ! mutated ) ; <nl> BOOST_AUTO_TEST_CASE ( NonCoinbasePreforwardRTTest ) <nl> BOOST_CHECK_EQUAL ( pool . mapTx . find ( block . vtx [ 2 ] - > GetHash ( ) ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 1 ) ; <nl> <nl> CBlock block2 ; <nl> - std : : vector < CTransactionRef > vtx_missing ; <nl> - BOOST_CHECK ( partialBlock . FillBlock ( block2 , vtx_missing ) = = READ_STATUS_INVALID ) ; / / No transactions <nl> + { <nl> + PartiallyDownloadedBlock tmp = partialBlock ; <nl> + BOOST_CHECK ( partialBlock . FillBlock ( block2 , { } ) = = READ_STATUS_INVALID ) ; / / No transactions <nl> + partialBlock = tmp ; <nl> + } <nl> <nl> - vtx_missing . push_back ( block . vtx [ 1 ] ) ; / / Wrong transaction <nl> - partialBlock . FillBlock ( block2 , vtx_missing ) ; / / Current implementation doesn ' t check txn here , but don ' t require that <nl> + / / Wrong transaction <nl> + { <nl> + PartiallyDownloadedBlock tmp = partialBlock ; <nl> + partialBlock . FillBlock ( block2 , { block . vtx [ 1 ] } ) ; / / Current implementation doesn ' t check txn here , but don ' t require that <nl> + partialBlock = tmp ; <nl> + } <nl> bool mutated ; <nl> BOOST_CHECK ( block . hashMerkleRoot ! 
= BlockMerkleRoot ( block2 , & mutated ) ) ; <nl> <nl> - vtx_missing [ 0 ] = block . vtx [ 0 ] ; <nl> CBlock block3 ; <nl> - BOOST_CHECK ( partialBlock . FillBlock ( block3 , vtx_missing ) = = READ_STATUS_OK ) ; <nl> + PartiallyDownloadedBlock partialBlockCopy = partialBlock ; <nl> + BOOST_CHECK ( partialBlock . FillBlock ( block3 , { block . vtx [ 0 ] } ) = = READ_STATUS_OK ) ; <nl> BOOST_CHECK_EQUAL ( block . GetHash ( ) . ToString ( ) , block3 . GetHash ( ) . ToString ( ) ) ; <nl> BOOST_CHECK_EQUAL ( block . hashMerkleRoot . ToString ( ) , BlockMerkleRoot ( block3 , & mutated ) . ToString ( ) ) ; <nl> BOOST_CHECK ( ! mutated ) ; <nl> BOOST_AUTO_TEST_CASE ( NonCoinbasePreforwardRTTest ) <nl> block . vtx . clear ( ) ; <nl> block2 . vtx . clear ( ) ; <nl> block3 . vtx . clear ( ) ; <nl> - BOOST_CHECK_EQUAL ( pool . mapTx . find ( txhash ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 1 ) ; <nl> + BOOST_CHECK_EQUAL ( pool . mapTx . find ( txhash ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 1 ) ; / / + 1 because of partialBlockCopy . <nl> } <nl> BOOST_CHECK_EQUAL ( pool . mapTx . find ( txhash ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 0 ) ; <nl> } <nl> BOOST_AUTO_TEST_CASE ( SufficientPreforwardRTTest ) <nl> BOOST_CHECK_EQUAL ( pool . mapTx . find ( block . vtx [ 1 ] - > GetHash ( ) ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 1 ) ; <nl> <nl> CBlock block2 ; <nl> - std : : vector < CTransactionRef > vtx_missing ; <nl> - BOOST_CHECK ( partialBlock . FillBlock ( block2 , vtx_missing ) = = READ_STATUS_OK ) ; <nl> + PartiallyDownloadedBlock partialBlockCopy = partialBlock ; <nl> + BOOST_CHECK ( partialBlock . FillBlock ( block2 , { } ) = = READ_STATUS_OK ) ; <nl> BOOST_CHECK_EQUAL ( block . GetHash ( ) . ToString ( ) , block2 . GetHash ( ) . ToString ( ) ) ; <nl> bool mutated ; <nl> BOOST_CHECK_EQUAL ( block . hashMerkleRoot . ToString ( ) , BlockMerkleRoot ( block2 , & mutated ) . 
ToString ( ) ) ; <nl> BOOST_AUTO_TEST_CASE ( SufficientPreforwardRTTest ) <nl> txhash = block . vtx [ 1 ] - > GetHash ( ) ; <nl> block . vtx . clear ( ) ; <nl> block2 . vtx . clear ( ) ; <nl> - BOOST_CHECK_EQUAL ( pool . mapTx . find ( txhash ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 1 ) ; <nl> + BOOST_CHECK_EQUAL ( pool . mapTx . find ( txhash ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 1 ) ; / / + 1 because of partialBlockCopy . <nl> } <nl> BOOST_CHECK_EQUAL ( pool . mapTx . find ( txhash ) - > GetSharedTx ( ) . use_count ( ) , SHARED_TX_OFFSET + 0 ) ; <nl> } <nl>
|
Make FillBlock consume txn_available to avoid shared_ptr copies
|
bitcoin/bitcoin
|
6713f0f142d97b4608c95a3ea03b4b670fceab2b
|
2016-12-22T02:18:28Z
|
mmm a / CMakeLists . txt <nl> ppp b / CMakeLists . txt <nl> include ( SourceGroups ) <nl> add_definitions ( - D_SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS = 1 ) <nl> <nl> include_directories ( $ { Leptonica_INCLUDE_DIRS } ) <nl> + include_directories ( $ { LibArchive_INCLUDE_DIRS } ) <nl> <nl> include_directories ( $ { CMAKE_CURRENT_BINARY_DIR } ) <nl> <nl>
|
cmake : Add missing include directory for LibArchive
|
tesseract-ocr/tesseract
|
25f2af9d1d8a7a1fb66fd71ab8d709f5d95ee4f8
|
2019-07-25T07:13:31Z
|
mmm a / bindings / python / doc / gettingstarted . rst <nl> ppp b / bindings / python / doc / gettingstarted . rst <nl> more common case ) is as follows : <nl> > > > i2 = cntk . input_variable ( ( 1 , 2 ) ) <nl> > > > cntk . squared_error ( i1 , i2 ) . eval ( { i1 : np . asarray ( [ [ [ [ 2 . , 1 . ] ] ] ] , dtype = np . float32 ) , i2 : np . asarray ( [ [ [ [ 4 . , 6 . ] ] ] ] , dtype = np . float32 ) } ) <nl> array ( 29 . 0 , dtype = float32 ) <nl> - <nl> + <nl> In the above example we are first setting up two input variables with shape ` ( 1 , 2 ) ` . We then setup a ` squared_error ` node with those two variables as <nl> inputs . Within the ` eval ( ) ` method we can setup the mapping of the data for those two variables . In this case we pass in two numpy arrays . The squared <nl> error is then of course ` ( 2 - 4 ) * * 2 + ( 1 - 6 ) * * 2 = 29 ` . <nl>
|
fix spacing typo
|
microsoft/CNTK
|
137d15b9e3a8a8de69d07f8aeca689d5af774e94
|
2016-09-26T18:00:08Z
|
mmm a / ReactWindows / ReactNative . Shared / UIManager / NativeViewHierarchyManager . cs <nl> ppp b / ReactWindows / ReactNative . Shared / UIManager / NativeViewHierarchyManager . cs <nl> public void ManageChildren ( int tag , int [ ] indexesToRemove , ViewAtIndex [ ] viewsTo <nl> { <nl> _layoutAnimator . DeleteView ( elementToDestroy , ( ) = > <nl> { <nl> - viewParentManager . RemoveView ( viewToManage , viewToDestroy ) ; <nl> - DropView ( viewToDestroy ) ; <nl> + if ( viewParentManager . TryRemoveView ( viewToManage , viewToDestroy ) ) <nl> + { <nl> + DropView ( viewToDestroy ) ; <nl> + } <nl> } ) ; <nl> } <nl> else <nl> mmm a / ReactWindows / ReactNative . Shared / UIManager / ViewParentManagerExtensions . cs <nl> ppp b / ReactWindows / ReactNative . Shared / UIManager / ViewParentManagerExtensions . cs <nl> namespace ReactNative . UIManager <nl> public static class ViewParentManagerExtensions <nl> { <nl> / / / < summary > <nl> - / / / Remove a view from a < see cref = " IViewParentManager " / > . <nl> + / / / Tries to remove a view from a < see cref = " IViewParentManager " / > . <nl> / / / < / summary > <nl> / / / < param name = " viewManager " > The view manager . < / param > <nl> / / / < param name = " parent " > The parent view . < / param > <nl> / / / < param name = " view " > The view to remove . < / param > <nl> - public static void RemoveView ( this IViewParentManager viewManager , DependencyObject parent , DependencyObject view ) <nl> + / / / < returns > <nl> + / / / < code > true < / code > if a view is removed , < code > false < / code > otherwise . <nl> + / / / < / returns > <nl> + public static bool TryRemoveView ( this IViewParentManager viewManager , DependencyObject parent , DependencyObject view ) <nl> { <nl> for ( var i = 0 ; i < viewManager . GetChildCount ( parent ) ; + + i ) <nl> { <nl> if ( viewManager . GetChildAt ( parent , i ) = = view ) <nl> { <nl> viewManager . 
RemoveChildAt ( parent , i ) ; <nl> - break ; <nl> + return true ; <nl> } <nl> } <nl> + <nl> + return false ; <nl> } <nl> } <nl> } <nl>
|
fix ( LayoutAnimation ) : Ensure view still exists before dropping ( )
|
microsoft/react-native-windows
|
2bc5cc0b09714bb9067a5a913e7576af8f00d292
|
2016-11-21T21:54:28Z
|
mmm a / src / d8 . cc <nl> ppp b / src / d8 . cc <nl> void SourceGroup : : WaitForThread ( ) { <nl> <nl> <nl> bool Shell : : SetOptions ( int argc , char * argv [ ] ) { <nl> - Locker lock ; <nl> - <nl> for ( int i = 0 ; i < argc ; i + + ) { <nl> if ( strcmp ( argv [ i ] , " - - stress - opt " ) = = 0 ) { <nl> options . stress_opt = true ; <nl>
|
bug fix since - - prof did not work
|
v8/v8
|
f2f2efc544e14940f7677fa4e6c071ff176c6f51
|
2011-07-11T12:04:13Z
|
mmm a / dbms / tests / clickhouse - test <nl> ppp b / dbms / tests / clickhouse - test <nl> def main ( args ) : <nl> ( name , ext ) = os . path . splitext ( case ) <nl> report_testcase = et . Element ( " testcase " , attrib = { " name " : name } ) <nl> <nl> - print " { 0 : 70 } " . format ( name + " : " ) , <nl> - sys . stdout . flush ( ) <nl> - <nl> - if not args . zookeeper and ' zookeeper ' in name : <nl> - report_testcase . append ( et . Element ( " skipped " , attrib = { " message " : " no zookeeper " } ) ) <nl> - print ( MSG_SKIPPED + " - no zookeeper " ) <nl> - elif not args . shard and ' shard ' in name : <nl> - report_testcase . append ( et . Element ( " skipped " , attrib = { " message " : " no shard " } ) ) <nl> - print ( MSG_SKIPPED + " - no shard " ) <nl> - else : <nl> - reference_file = os . path . join ( suite_dir , name ) + ' . reference ' <nl> - stdout_file = os . path . join ( suite_dir , name ) + ' . stdout ' <nl> - stderr_file = os . path . join ( suite_dir , name ) + ' . stderr ' <nl> - <nl> - if ext = = ' . sql ' : <nl> - command = " { 0 } - - multiquery < { 1 } > { 2 } 2 > { 3 } " . format ( args . client , case_file , stdout_file , stderr_file ) <nl> + try : <nl> + print " { 0 : 70 } " . format ( name + " : " ) , <nl> + sys . stdout . flush ( ) <nl> + <nl> + print 1 / 0 <nl> + <nl> + if not args . zookeeper and ' zookeeper ' in name : <nl> + report_testcase . append ( et . Element ( " skipped " , attrib = { " message " : " no zookeeper " } ) ) <nl> + print ( MSG_SKIPPED + " - no zookeeper " ) <nl> + elif not args . shard and ' shard ' in name : <nl> + report_testcase . append ( et . Element ( " skipped " , attrib = { " message " : " no shard " } ) ) <nl> + print ( MSG_SKIPPED + " - no shard " ) <nl> else : <nl> - command = " { 0 } > { 1 } 2 > { 2 } " . format ( case_file , stdout_file , stderr_file ) <nl> - <nl> - proc = Popen ( command , shell = True ) <nl> - start_time = datetime . now ( ) <nl> - while ( datetime . now ( ) - start_time ) . 
total_seconds ( ) < args . timeout and proc . poll ( ) is None : <nl> - sleep ( 0 ) <nl> - <nl> - if proc . returncode is None : <nl> - try : <nl> - proc . kill ( ) <nl> - except OSError as e : <nl> - if e . errno ! = ESRCH : <nl> - raise <nl> - <nl> - failure = et . Element ( " failure " , attrib = { " message " : " Timeout " } ) <nl> - report_testcase . append ( failure ) <nl> - <nl> - failures = failures + 1 <nl> - print ( " { 0 } - Timeout ! " . format ( MSG_FAIL ) ) <nl> - else : <nl> - stdout = open ( stdout_file , ' r ' ) . read ( ) if os . path . exists ( stdout_file ) else ' ' <nl> - stdout = unicode ( stdout , errors = ' replace ' , encoding = ' utf - 8 ' ) <nl> - stderr = open ( stderr_file , ' r ' ) . read ( ) if os . path . exists ( stderr_file ) else ' ' <nl> - stderr = unicode ( stderr , errors = ' replace ' , encoding = ' utf - 8 ' ) <nl> - <nl> - if proc . returncode ! = 0 : <nl> - failure = et . Element ( " failure " , attrib = { " message " : " return code { } " . format ( proc . returncode ) } ) <nl> + reference_file = os . path . join ( suite_dir , name ) + ' . reference ' <nl> + stdout_file = os . path . join ( suite_dir , name ) + ' . stdout ' <nl> + stderr_file = os . path . join ( suite_dir , name ) + ' . stderr ' <nl> + <nl> + if ext = = ' . sql ' : <nl> + command = " { 0 } - - multiquery < { 1 } > { 2 } 2 > { 3 } " . format ( args . client , case_file , stdout_file , stderr_file ) <nl> + else : <nl> + command = " { 0 } > { 1 } 2 > { 2 } " . format ( case_file , stdout_file , stderr_file ) <nl> + <nl> + proc = Popen ( command , shell = True ) <nl> + start_time = datetime . now ( ) <nl> + while ( datetime . now ( ) - start_time ) . total_seconds ( ) < args . timeout and proc . poll ( ) is None : <nl> + sleep ( 0 ) <nl> + <nl> + if proc . returncode is None : <nl> + try : <nl> + proc . kill ( ) <nl> + except OSError as e : <nl> + if e . errno ! = ESRCH : <nl> + raise <nl> + <nl> + failure = et . 
Element ( " failure " , attrib = { " message " : " Timeout " } ) <nl> report_testcase . append ( failure ) <nl> - <nl> - stdout_element = et . Element ( " system - out " ) <nl> - stdout_element . text = et . CDATA ( stdout ) <nl> - report_testcase . append ( stdout_element ) <nl> - <nl> + <nl> failures = failures + 1 <nl> - print ( " { 0 } - return code { 1 } " . format ( MSG_FAIL , proc . returncode ) ) <nl> - <nl> - if stderr : <nl> + print ( " { 0 } - Timeout ! " . format ( MSG_FAIL ) ) <nl> + else : <nl> + stdout = open ( stdout_file , ' r ' ) . read ( ) if os . path . exists ( stdout_file ) else ' ' <nl> + stdout = unicode ( stdout , errors = ' replace ' , encoding = ' utf - 8 ' ) <nl> + stderr = open ( stderr_file , ' r ' ) . read ( ) if os . path . exists ( stderr_file ) else ' ' <nl> + stderr = unicode ( stderr , errors = ' replace ' , encoding = ' utf - 8 ' ) <nl> + <nl> + if proc . returncode ! = 0 : <nl> + failure = et . Element ( " failure " , attrib = { " message " : " return code { } " . format ( proc . returncode ) } ) <nl> + report_testcase . append ( failure ) <nl> + <nl> + stdout_element = et . Element ( " system - out " ) <nl> + stdout_element . text = et . CDATA ( stdout ) <nl> + report_testcase . append ( stdout_element ) <nl> + <nl> + failures = failures + 1 <nl> + print ( " { 0 } - return code { 1 } " . format ( MSG_FAIL , proc . returncode ) ) <nl> + <nl> + if stderr : <nl> + stderr_element = et . Element ( " system - err " ) <nl> + stderr_element . text = et . CDATA ( stderr ) <nl> + report_testcase . append ( stderr_element ) <nl> + print ( stderr ) <nl> + <nl> + if args . stop and ( ' Connection refused ' in stderr or ' Attempt to read after eof ' in stderr ) and not ' Received exception from server ' in stderr : <nl> + SERVER_DIED = True <nl> + <nl> + elif stderr : <nl> + failure = et . Element ( " failure " , attrib = { " message " : " having stderror " } ) <nl> + report_testcase . append ( failure ) <nl> + <nl> stderr_element = et . 
Element ( " system - err " ) <nl> stderr_element . text = et . CDATA ( stderr ) <nl> report_testcase . append ( stderr_element ) <nl> - print ( stderr ) <nl> - <nl> - if args . stop and ( ' Connection refused ' in stderr or ' Attempt to read after eof ' in stderr ) and not ' Received exception from server ' in stderr : <nl> - SERVER_DIED = True <nl> - <nl> - elif stderr : <nl> - failure = et . Element ( " failure " , attrib = { " message " : " having stderror " } ) <nl> - report_testcase . append ( failure ) <nl> - <nl> - stderr_element = et . Element ( " system - err " ) <nl> - stderr_element . text = et . CDATA ( stderr ) <nl> - report_testcase . append ( stderr_element ) <nl> - <nl> - failures = failures + 1 <nl> - print ( " { 0 } - having stderror : \ n { 1 } " . format ( MSG_FAIL , stderr . encode ( ' utf - 8 ' ) ) ) <nl> - elif ' Exception ' in stdout : <nl> - failure = et . Element ( " error " , attrib = { " message " : " having exception " } ) <nl> - report_testcase . append ( failure ) <nl> - <nl> - stdout_element = et . Element ( " system - out " ) <nl> - stdout_element . text = et . CDATA ( stdout ) <nl> - report_testcase . append ( stdout_element ) <nl> - <nl> - failures = failures + 1 <nl> - print ( " { 0 } - having exception : \ n { 1 } " . format ( MSG_FAIL , stdout . encode ( ' utf - 8 ' ) ) ) <nl> - elif not os . path . isfile ( reference_file ) : <nl> - skipped = et . Element ( " skipped " , attrib = { " message " : " no reference file " } ) <nl> - report_testcase . append ( skipped ) <nl> - print ( " { 0 } - no reference file " . format ( MSG_UNKNOWN ) ) <nl> - else : <nl> - result_is_different = subprocess . call ( [ ' cmp ' , ' - s ' , reference_file , stdout_file ] , stdout = PIPE ) <nl> - <nl> - if result_is_different : <nl> - ( diff , _ ) = Popen ( [ ' diff ' , ' - - side - by - side ' , reference_file , stdout_file ] , stdout = PIPE ) . 
communicate ( ) <nl> - diff = unicode ( diff , errors = ' replace ' , encoding = ' utf - 8 ' ) <nl> - <nl> - failure = et . Element ( " failure " , attrib = { " message " : " result differs with reference " } ) <nl> + <nl> + failures = failures + 1 <nl> + print ( " { 0 } - having stderror : \ n { 1 } " . format ( MSG_FAIL , stderr . encode ( ' utf - 8 ' ) ) ) <nl> + elif ' Exception ' in stdout : <nl> + failure = et . Element ( " error " , attrib = { " message " : " having exception " } ) <nl> report_testcase . append ( failure ) <nl> - <nl> + <nl> stdout_element = et . Element ( " system - out " ) <nl> - try : <nl> - stdout_element . text = et . CDATA ( diff ) <nl> - except Exception as e : <nl> - stdout_element . text = et . CDATA ( remove_control_characters ( diff ) ) <nl> - <nl> + stdout_element . text = et . CDATA ( stdout ) <nl> report_testcase . append ( stdout_element ) <nl> + <nl> failures = failures + 1 <nl> - print ( " { 0 } - result differs with reference : \ n { 1 } " . format ( MSG_FAIL , diff . encode ( ' utf - 8 ' ) ) ) <nl> + print ( " { 0 } - having exception : \ n { 1 } " . format ( MSG_FAIL , stdout . encode ( ' utf - 8 ' ) ) ) <nl> + elif not os . path . isfile ( reference_file ) : <nl> + skipped = et . Element ( " skipped " , attrib = { " message " : " no reference file " } ) <nl> + report_testcase . append ( skipped ) <nl> + print ( " { 0 } - no reference file " . format ( MSG_UNKNOWN ) ) <nl> else : <nl> - print ( MSG_OK ) <nl> - if os . path . exists ( stdout_file ) : <nl> - os . remove ( stdout_file ) <nl> - if os . path . exists ( stderr_file ) : <nl> - os . remove ( stderr_file ) <nl> - <nl> - dump_report ( args . output , suite , name , report_testcase ) <nl> + result_is_different = subprocess . call ( [ ' cmp ' , ' - s ' , reference_file , stdout_file ] , stdout = PIPE ) <nl> + <nl> + if result_is_different : <nl> + ( diff , _ ) = Popen ( [ ' diff ' , ' - - side - by - side ' , reference_file , stdout_file ] , stdout = PIPE ) . 
communicate ( ) <nl> + diff = unicode ( diff , errors = ' replace ' , encoding = ' utf - 8 ' ) <nl> + <nl> + failure = et . Element ( " failure " , attrib = { " message " : " result differs with reference " } ) <nl> + report_testcase . append ( failure ) <nl> + <nl> + stdout_element = et . Element ( " system - out " ) <nl> + try : <nl> + stdout_element . text = et . CDATA ( diff ) <nl> + except : <nl> + stdout_element . text = et . CDATA ( remove_control_characters ( diff ) ) <nl> + <nl> + report_testcase . append ( stdout_element ) <nl> + failures = failures + 1 <nl> + print ( " { 0 } - result differs with reference : \ n { 1 } " . format ( MSG_FAIL , diff . encode ( ' utf - 8 ' ) ) ) <nl> + else : <nl> + print ( MSG_OK ) <nl> + if os . path . exists ( stdout_file ) : <nl> + os . remove ( stdout_file ) <nl> + if os . path . exists ( stderr_file ) : <nl> + os . remove ( stderr_file ) <nl> + except : <nl> + ( exc_type , exc_value ) = sys . exc_info ( ) [ : 2 ] <nl> + error = et . Element ( " error " , attrib = { " type " : exc_type . __name__ , " message " : str ( exc_value ) } ) <nl> + report_testcase . append ( error ) <nl> + <nl> + failures = failures + 1 <nl> + print ( " { 0 } - Test internal error : { 1 } \ n { 2 } " . format ( MSG_FAIL , exc_type . __name__ , exc_value ) ) <nl> + finally : <nl> + dump_report ( args . output , suite , name , report_testcase ) <nl> <nl> failures_total = failures_total + failures <nl> <nl>
|
handle test errors
|
ClickHouse/ClickHouse
|
e95ab368328ba8bd59f0ceb4525039e35a9cb814
|
2017-09-15T00:11:24Z
|
mmm a / arangod / Aql / OptimizerRules . cpp <nl> ppp b / arangod / Aql / OptimizerRules . cpp <nl> int triagens : : aql : : removeUnnecessaryRemoteScatter ( Optimizer * opt , <nl> <nl> for ( auto n : nodes ) { <nl> / / check if the remote node is preceeded by a scatter node and any number of <nl> - / / calculation and singleton nodes . if yes , remote remote and scatter <nl> + / / calculation and singleton nodes . if yes , remove remote and scatter <nl> <nl> auto const & deps = n - > getDependencies ( ) ; <nl> if ( deps . size ( ) ! = 1 ) { <nl>
|
fixed tyopo in comment
|
arangodb/arangodb
|
d2b86de370245fb999af17d715ba5f88042bf550
|
2014-10-08T09:55:58Z
|
mmm a / src / chain . h <nl> ppp b / src / chain . h <nl> enum BlockStatus : uint32_t { <nl> BLOCK_VALID_MASK = BLOCK_VALID_HEADER | BLOCK_VALID_TREE | BLOCK_VALID_TRANSACTIONS | <nl> BLOCK_VALID_CHAIN | BLOCK_VALID_SCRIPTS , <nl> <nl> - BLOCK_HAVE_DATA = 8 , / / ! full block available in blk * . dat <nl> - BLOCK_HAVE_UNDO = 16 , / / ! undo data available in rev * . dat <nl> + BLOCK_HAVE_DATA = 8 , / / ! < full block available in blk * . dat <nl> + BLOCK_HAVE_UNDO = 16 , / / ! < undo data available in rev * . dat <nl> BLOCK_HAVE_MASK = BLOCK_HAVE_DATA | BLOCK_HAVE_UNDO , <nl> <nl> - BLOCK_FAILED_VALID = 32 , / / ! stage after last reached validness failed <nl> - BLOCK_FAILED_CHILD = 64 , / / ! descends from failed block <nl> + BLOCK_FAILED_VALID = 32 , / / ! < stage after last reached validness failed <nl> + BLOCK_FAILED_CHILD = 64 , / / ! < descends from failed block <nl> BLOCK_FAILED_MASK = BLOCK_FAILED_VALID | BLOCK_FAILED_CHILD , <nl> <nl> - BLOCK_OPT_WITNESS = 128 , / / ! block data in blk * . data was received with a witness - enforcing client <nl> + BLOCK_OPT_WITNESS = 128 , / / ! < block data in blk * . data was received with a witness - enforcing client <nl> } ; <nl> <nl> / * * The block chain is a tree shaped structure starting with the <nl> mmm a / src / consensus / validation . h <nl> ppp b / src / consensus / validation . h <nl> static const unsigned char REJECT_CHECKPOINT = 0x43 ; <nl> class CValidationState { <nl> private : <nl> enum mode_state { <nl> - MODE_VALID , / / ! everything ok <nl> - MODE_INVALID , / / ! network rule violation ( DoS value may be set ) <nl> - MODE_ERROR , / / ! run - time error <nl> + MODE_VALID , / / ! < everything ok <nl> + MODE_INVALID , / / ! < network rule violation ( DoS value may be set ) <nl> + MODE_ERROR , / / ! < run - time error <nl> } mode ; <nl> int nDoS ; <nl> std : : string strRejectReason ; <nl> mmm a / src / rpc / protocol . h <nl> ppp b / src / rpc / protocol . 
h <nl> enum RPCErrorCode <nl> RPC_PARSE_ERROR = - 32700 , <nl> <nl> / / ! General application defined errors <nl> - RPC_MISC_ERROR = - 1 , / / ! std : : exception thrown in command handling <nl> - RPC_FORBIDDEN_BY_SAFE_MODE = - 2 , / / ! Server is in safe mode , and command is not allowed in safe mode <nl> - RPC_TYPE_ERROR = - 3 , / / ! Unexpected type was passed as parameter <nl> - RPC_INVALID_ADDRESS_OR_KEY = - 5 , / / ! Invalid address or key <nl> - RPC_OUT_OF_MEMORY = - 7 , / / ! Ran out of memory during operation <nl> - RPC_INVALID_PARAMETER = - 8 , / / ! Invalid , missing or duplicate parameter <nl> - RPC_DATABASE_ERROR = - 20 , / / ! Database error <nl> - RPC_DESERIALIZATION_ERROR = - 22 , / / ! Error parsing or validating structure in raw format <nl> - RPC_VERIFY_ERROR = - 25 , / / ! General error during transaction or block submission <nl> - RPC_VERIFY_REJECTED = - 26 , / / ! Transaction or block was rejected by network rules <nl> - RPC_VERIFY_ALREADY_IN_CHAIN = - 27 , / / ! Transaction already in chain <nl> - RPC_IN_WARMUP = - 28 , / / ! Client still warming up <nl> + RPC_MISC_ERROR = - 1 , / / ! < std : : exception thrown in command handling <nl> + RPC_FORBIDDEN_BY_SAFE_MODE = - 2 , / / ! < Server is in safe mode , and command is not allowed in safe mode <nl> + RPC_TYPE_ERROR = - 3 , / / ! < Unexpected type was passed as parameter <nl> + RPC_INVALID_ADDRESS_OR_KEY = - 5 , / / ! < Invalid address or key <nl> + RPC_OUT_OF_MEMORY = - 7 , / / ! < Ran out of memory during operation <nl> + RPC_INVALID_PARAMETER = - 8 , / / ! < Invalid , missing or duplicate parameter <nl> + RPC_DATABASE_ERROR = - 20 , / / ! < Database error <nl> + RPC_DESERIALIZATION_ERROR = - 22 , / / ! < Error parsing or validating structure in raw format <nl> + RPC_VERIFY_ERROR = - 25 , / / ! < General error during transaction or block submission <nl> + RPC_VERIFY_REJECTED = - 26 , / / ! < Transaction or block was rejected by network rules <nl> + RPC_VERIFY_ALREADY_IN_CHAIN = - 27 , / / ! 
< Transaction already in chain <nl> + RPC_IN_WARMUP = - 28 , / / ! < Client still warming up <nl> <nl> / / ! Aliases for backward compatibility <nl> RPC_TRANSACTION_ERROR = RPC_VERIFY_ERROR , <nl> enum RPCErrorCode <nl> RPC_TRANSACTION_ALREADY_IN_CHAIN = RPC_VERIFY_ALREADY_IN_CHAIN , <nl> <nl> / / ! P2P client errors <nl> - RPC_CLIENT_NOT_CONNECTED = - 9 , / / ! Bitcoin is not connected <nl> - RPC_CLIENT_IN_INITIAL_DOWNLOAD = - 10 , / / ! Still downloading initial blocks <nl> - RPC_CLIENT_NODE_ALREADY_ADDED = - 23 , / / ! Node is already added <nl> - RPC_CLIENT_NODE_NOT_ADDED = - 24 , / / ! Node has not been added before <nl> - RPC_CLIENT_NODE_NOT_CONNECTED = - 29 , / / ! Node to disconnect not found in connected nodes <nl> - RPC_CLIENT_INVALID_IP_OR_SUBNET = - 30 , / / ! Invalid IP / Subnet <nl> + RPC_CLIENT_NOT_CONNECTED = - 9 , / / ! < Bitcoin is not connected <nl> + RPC_CLIENT_IN_INITIAL_DOWNLOAD = - 10 , / / ! < Still downloading initial blocks <nl> + RPC_CLIENT_NODE_ALREADY_ADDED = - 23 , / / ! < Node is already added <nl> + RPC_CLIENT_NODE_NOT_ADDED = - 24 , / / ! < Node has not been added before <nl> + RPC_CLIENT_NODE_NOT_CONNECTED = - 29 , / / ! < Node to disconnect not found in connected nodes <nl> + RPC_CLIENT_INVALID_IP_OR_SUBNET = - 30 , / / ! < Invalid IP / Subnet <nl> <nl> / / ! Wallet errors <nl> - RPC_WALLET_ERROR = - 4 , / / ! Unspecified problem with wallet ( key not found etc . ) <nl> - RPC_WALLET_INSUFFICIENT_FUNDS = - 6 , / / ! Not enough funds in wallet or account <nl> - RPC_WALLET_INVALID_ACCOUNT_NAME = - 11 , / / ! Invalid account name <nl> - RPC_WALLET_KEYPOOL_RAN_OUT = - 12 , / / ! Keypool ran out , call keypoolrefill first <nl> - RPC_WALLET_UNLOCK_NEEDED = - 13 , / / ! Enter the wallet passphrase with walletpassphrase first <nl> - RPC_WALLET_PASSPHRASE_INCORRECT = - 14 , / / ! The wallet passphrase entered was incorrect <nl> - RPC_WALLET_WRONG_ENC_STATE = - 15 , / / ! 
Command given in wrong wallet encryption state ( encrypting an encrypted wallet etc . ) <nl> - RPC_WALLET_ENCRYPTION_FAILED = - 16 , / / ! Failed to encrypt the wallet <nl> - RPC_WALLET_ALREADY_UNLOCKED = - 17 , / / ! Wallet is already unlocked <nl> + RPC_WALLET_ERROR = - 4 , / / ! < Unspecified problem with wallet ( key not found etc . ) <nl> + RPC_WALLET_INSUFFICIENT_FUNDS = - 6 , / / ! < Not enough funds in wallet or account <nl> + RPC_WALLET_INVALID_ACCOUNT_NAME = - 11 , / / ! < Invalid account name <nl> + RPC_WALLET_KEYPOOL_RAN_OUT = - 12 , / / ! < Keypool ran out , call keypoolrefill first <nl> + RPC_WALLET_UNLOCK_NEEDED = - 13 , / / ! < Enter the wallet passphrase with walletpassphrase first <nl> + RPC_WALLET_PASSPHRASE_INCORRECT = - 14 , / / ! < The wallet passphrase entered was incorrect <nl> + RPC_WALLET_WRONG_ENC_STATE = - 15 , / / ! < Command given in wrong wallet encryption state ( encrypting an encrypted wallet etc . ) <nl> + RPC_WALLET_ENCRYPTION_FAILED = - 16 , / / ! < Failed to encrypt the wallet <nl> + RPC_WALLET_ALREADY_UNLOCKED = - 17 , / / ! < Wallet is already unlocked <nl> } ; <nl> <nl> std : : string JSONRPCRequest ( const std : : string & strMethod , const UniValue & params , const UniValue & id ) ; <nl> mmm a / src / script / sign . h <nl> ppp b / src / script / sign . h <nl> class MutableTransactionSignatureCreator : public TransactionSignatureCreator { <nl> MutableTransactionSignatureCreator ( const CKeyStore * keystoreIn , const CMutableTransaction * txToIn , unsigned int nInIn , const CAmount & amount , int nHashTypeIn ) : TransactionSignatureCreator ( keystoreIn , & tx , nInIn , amount , nHashTypeIn ) , tx ( * txToIn ) { } <nl> } ; <nl> <nl> - / * * A signature creator that just produces 72 - byte empty signatyres . * / <nl> + / * * A signature creator that just produces 72 - byte empty signatures . 
* / <nl> class DummySignatureCreator : public BaseSignatureCreator { <nl> public : <nl> DummySignatureCreator ( const CKeyStore * keystoreIn ) : BaseSignatureCreator ( keystoreIn ) { } <nl> mmm a / src / zmq / zmqpublishnotifier . h <nl> ppp b / src / zmq / zmqpublishnotifier . h <nl> class CBlockIndex ; <nl> class CZMQAbstractPublishNotifier : public CZMQAbstractNotifier <nl> { <nl> private : <nl> - uint32_t nSequence ; / / ! upcounting per message sequence number <nl> + uint32_t nSequence ; / / ! < upcounting per message sequence number <nl> <nl> public : <nl> <nl>
|
[ doc ] Fix typos in comments , doxygen : Fix comment syntax
|
bitcoin/bitcoin
|
fa27c0a2c4545a579bf339e816c3fa785252b7dc
|
2016-08-22T08:51:41Z
|
mmm a / TODO <nl> ppp b / TODO <nl> <nl> - Use tooltips to explain options <nl> - Exit confirmation only if there are active downloads ( display number of downloads ) - SMARTER <nl> - Make use of QNetworkInterface ( could be useful ? ) <nl> - - Display more info in log ( PeX , UPnP , DHT w / ports . . . ) <nl> + - Display more info in log ( UPnP successful ) <nl> - Possibility to disable the trayicon <nl> \ No newline at end of file <nl> mmm a / src / GUI . cpp <nl> ppp b / src / GUI . cpp <nl> GUI : : GUI ( QWidget * parent , QStringList torrentCmdLine ) : QMainWindow ( parent ) { <nl> tabs - > setTabText ( 0 , tr ( " Transfers " ) + " ( 0 ) " ) ; <nl> # ifndef NO_UPNP <nl> connect ( & BTSession , SIGNAL ( noWanServiceDetected ( ) ) , this , SLOT ( displayNoUPnPWanServiceDetected ( ) ) ) ; <nl> + connect ( & BTSession , SIGNAL ( wanServiceDetected ( ) ) , this , SLOT ( displayUPnPWanServiceDetected ( ) ) ) ; <nl> # endif <nl> connect ( & BTSession , SIGNAL ( addedTorrent ( const QString & , torrent_handle & , bool ) ) , this , SLOT ( torrentAdded ( const QString & , torrent_handle & , bool ) ) ) ; <nl> connect ( & BTSession , SIGNAL ( duplicateTorrent ( const QString & ) ) , this , SLOT ( torrentDuplicate ( const QString & ) ) ) ; <nl> void GUI : : readParamsOnSocket ( ) { <nl> void GUI : : displayNoUPnPWanServiceDetected ( ) { <nl> setInfoBar ( tr ( " UPnP : no WAN service detected . . . " ) , " red " ) ; <nl> } <nl> + <nl> + void GUI : : displayUPnPWanServiceDetected ( ) { <nl> + setInfoBar ( tr ( " UPnP : WAN service detected ! " ) , " green " ) ; <nl> + } <nl> + <nl> # endif <nl> <nl> / / Toggle paused state of selected torrent <nl> mmm a / src / GUI . h <nl> ppp b / src / GUI . h <nl> class GUI : public QMainWindow , private Ui : : MainWindow { <nl> void askForTorrentUrl ( ) ; <nl> # ifndef NO_UPNP <nl> void displayNoUPnPWanServiceDetected ( ) ; <nl> + void displayUPnPWanServiceDetected ( ) ; <nl> # endif <nl> <nl> <nl> mmm a / src / UPnP . 
cpp <nl> ppp b / src / UPnP . cpp <nl> bool CUPnPControlPoint : : AddPortMappings ( <nl> qDebug ( " UPnP : % s " , msg . str ( ) . c_str ( ) ) ; <nl> return false ; <nl> } <nl> - <nl> + emit yeswanServiceDetected ( ) ; <nl> int n = upnpPortMapping . size ( ) ; <nl> bool ok = false ; <nl> <nl> mmm a / src / UPnP . h <nl> ppp b / src / UPnP . h <nl> class CUPnPControlPoint : public QObject { <nl> <nl> signals : <nl> void noWanServiceDetected ( ) ; <nl> + void yeswanServiceDetected ( ) ; <nl> <nl> private : <nl> void OnEventReceived ( <nl> mmm a / src / bittorrent . cpp <nl> ppp b / src / bittorrent . cpp <nl> void bittorrent : : enableUPnP ( int port ) { <nl> " qBittorrent " ) ; <nl> m_upnp = new CUPnPControlPoint ( port ) ; <nl> connect ( m_upnp , SIGNAL ( noWanServiceDetected ( ) ) , this , SLOT ( noWanServiceEventHandler ( ) ) ) ; <nl> + connect ( m_upnp , SIGNAL ( yeswanServiceDetected ( ) ) , this , SLOT ( wanServiceEventHandler ( ) ) ) ; <nl> m_upnp - > AddPortMappings ( m_upnpMappings ) ; <nl> } <nl> } <nl> void bittorrent : : noWanServiceEventHandler ( ) { <nl> emit noWanServiceDetected ( ) ; <nl> } <nl> <nl> + void bittorrent : : wanServiceEventHandler ( ) { <nl> + / / Forward this signal <nl> + emit wanServiceDetected ( ) ; <nl> + } <nl> + <nl> / / Set UPnP port ( > = 1000 ) <nl> void bittorrent : : setUPnPPort ( int upnp_port ) { <nl> if ( ! UPnPEnabled ) { <nl> mmm a / src / bittorrent . h <nl> ppp b / src / bittorrent . h <nl> class bittorrent : public QObject { <nl> void saveTrackerFile ( const QString & hash ) ; <nl> # ifndef NO_UPNP <nl> void noWanServiceEventHandler ( ) ; <nl> + void wanServiceEventHandler ( ) ; <nl> # endif <nl> <nl> signals : <nl> class bittorrent : public QObject { <nl> void aboutToDownloadFromUrl ( const QString & url ) ; <nl> # ifndef NO_UPNP <nl> void noWanServiceDetected ( ) ; <nl> + void wanServiceDetected ( ) ; <nl> # endif <nl> } ; <nl> <nl>
|
- Added a log message when a UPnP WAN service is detected
|
qbittorrent/qBittorrent
|
4ca852c2b3e76ce3aa79968e0f5c54090ed2a18b
|
2007-03-29T14:49:01Z
|
mmm a / cocos / platform / CCFileUtils . cpp <nl> ppp b / cocos / platform / CCFileUtils . cpp <nl> bool FileUtils : : createDirectory ( const std : : string & path ) <nl> <nl> / / Create path recursively <nl> subpath = " " ; <nl> - for ( int i = 0 ; i < dirs . size ( ) ; + + i ) { <nl> + for ( int i = 0 ; i < dirs . size ( ) ; + + i ) <nl> + { <nl> subpath + = dirs [ i ] ; <nl> dir = opendir ( subpath . c_str ( ) ) ; <nl> if ( ! dir ) <nl> { <nl> + closedir ( dir ) ; <nl> + <nl> int ret = mkdir ( subpath . c_str ( ) , S_IRWXU | S_IRWXG | S_IRWXO ) ; <nl> if ( ret ! = 0 & & ( errno ! = EEXIST ) ) <nl> { <nl> bool FileUtils : : createDirectory ( const std : : string & path ) <nl> static int unlink_cb ( const char * fpath , const struct stat * sb , int typeflag , struct FTW * ftwbuf ) <nl> { <nl> auto ret = remove ( fpath ) ; <nl> - if ( ret ) { <nl> - log ( " Fail to remove : % s " , fpath ) ; <nl> + if ( ret ) <nl> + { <nl> + log ( " Fail to remove : % s " , fpath ) ; <nl> } <nl> <nl> return ret ; <nl>
|
invoke closedir after invoking opendir
|
cocos2d/cocos2d-x
|
74c80296aa832d1ef78f62bfdc11c83f6aaf08b6
|
2014-12-08T01:58:58Z
|
mmm a / hphp / hack / src / hhbc / string_utils . rs <nl> ppp b / hphp / hack / src / hhbc / string_utils . rs <nl> pub mod integer { <nl> } <nl> } <nl> <nl> + pub mod locals { <nl> + pub fn strip_dollar ( s : & str ) - > & str { <nl> + if s . len ( ) > 0 & & s . as_bytes ( ) [ 0 ] = = b ' $ ' { <nl> + & s [ 1 . . ] <nl> + } else { <nl> + s <nl> + } <nl> + } <nl> + } <nl> + <nl> # [ cfg ( test ) ] <nl> mod string_utils_tests { <nl> use pretty_assertions : : assert_eq ; <nl> mod string_utils_tests { <nl> } <nl> } <nl> } <nl> + <nl> + mod locals { <nl> + use crate : : locals : : * ; <nl> + <nl> + # [ test ] <nl> + fn strip_single_leading_dollar ( ) { <nl> + assert_eq ! ( strip_dollar ( " $ foo " ) , " foo " ) ; <nl> + } <nl> + <nl> + # [ test ] <nl> + fn return_string_if_no_leading_dollar ( ) { <nl> + assert_eq ! ( strip_dollar ( " foo " ) , " foo " ) ; <nl> + } <nl> + <nl> + # [ test ] <nl> + fn empty_string ( ) { <nl> + assert_eq ! ( strip_dollar ( " " ) , " " ) ; <nl> + } <nl> + <nl> + # [ test ] <nl> + fn string_of_single_dollar ( ) { <nl> + assert_eq ! ( strip_dollar ( " $ " ) , " " ) ; <nl> + } <nl> + } <nl> } <nl>
|
Migrate HHbc_string_utils . Locals to Rust
|
facebook/hhvm
|
ce647b1be51328ec4460627e0db5b755b2dd32bf
|
2019-10-31T19:23:20Z
|
mmm a / Telegram / Resources / langs / lang . strings <nl> ppp b / Telegram / Resources / langs / lang . strings <nl> Copyright ( c ) 2014 - 2017 John Preston , https : / / desktop . telegram . org <nl> " lng_action_game_you_scored " = " You scored { count : # | # | # } in { game } " ; <nl> " lng_action_game_score_no_game " = " { from } scored { count : # | # | # } " ; <nl> " lng_action_game_you_scored_no_game " = " You scored { count : # | # | # } " ; <nl> - " lng_action_call_outgoing " = " Outgoing call at { time } " ; <nl> - " lng_action_call_incoming " = " Incoming call at { time } " ; <nl> - " lng_action_call_outgoing_duration " = " Outgoing call ( { duration } ) at { time } " ; <nl> - " lng_action_call_incoming_duration " = " Incoming call ( { duration } ) at { time } " ; <nl> - " lng_action_call_incoming_missed " = " Missed call at { time } " ; <nl> - " lng_action_call_outgoing_missed " = " Cancelled call at { time } " ; <nl> - " lng_action_call_incoming_declined " = " Declined call at { time } " ; <nl> " lng_action_payment_done " = " You have just successfully transferred { amount } to { user } " ; <nl> " lng_action_payment_done_for " = " You have just successfully transferred { amount } to { user } for { invoice } " ; <nl> <nl> Copyright ( c ) 2014 - 2017 John Preston , https : / / desktop . telegram . 
org <nl> " lng_call_box_status_date " = " { date } at { time } " ; <nl> " lng_call_box_status_group " = " ( { count } ) { status } " ; <nl> <nl> + " lng_call_outgoing " = " Outgoing call " ; <nl> + " lng_call_incoming " = " Incoming call " ; <nl> + " lng_call_missed " = " Missed call " ; <nl> + " lng_call_cancelled " = " Cancelled call " ; <nl> + " lng_call_declined " = " Declined call " ; <nl> + " lng_call_duration_info " = " { time } , { duration } " ; <nl> + " lng_call_type_and_duration " = " { type } ( { duration } ) " ; <nl> + <nl> / / Not used <nl> <nl> " lng_topbar_info " = " Info " ; <nl> mmm a / Telegram / SourceFiles / calls / calls_box_controller . cpp <nl> ppp b / Telegram / SourceFiles / calls / calls_box_controller . cpp <nl> Copyright ( c ) 2014 - 2017 John Preston , https : / / desktop . telegram . org <nl> # include " observer_peer . h " <nl> # include " ui / effects / ripple_animation . h " <nl> # include " calls / calls_instance . h " <nl> + # include " history / history_media_types . h " <nl> <nl> namespace Calls { <nl> namespace { <nl> class BoxController : : Row : public PeerListBox : : Row { <nl> return false ; <nl> } <nl> QSize actionSize ( ) const override { <nl> - return QSize ( st : : callReDial . width , st : : callReDial . height ) ; <nl> + return peer ( ) - > isUser ( ) ? QSize ( st : : callReDial . width , st : : callReDial . 
height ) : QSize ( ) ; <nl> } <nl> QMargins actionMargins ( ) const override { <nl> return QMargins ( 0 , 0 , 0 , 0 ) ; <nl> void BoxController : : Row : : refreshStatus ( ) { <nl> BoxController : : Row : : Type BoxController : : Row : : ComputeType ( HistoryItem * item ) { <nl> if ( item - > out ( ) ) { <nl> return Type : : Out ; <nl> - } else if ( auto call = item - > Get < HistoryMessageCallInfo > ( ) ) { <nl> - using Reason = HistoryMessageCallInfo : : Reason ; <nl> - if ( call - > reason = = Reason : : Busy | | call - > reason = = Reason : : Missed ) { <nl> - return Type : : Missed ; <nl> + } else if ( auto media = item - > getMedia ( ) ) { <nl> + if ( media - > type ( ) = = MediaTypeCall ) { <nl> + auto reason = static_cast < HistoryCall * > ( media ) - > reason ( ) ; <nl> + if ( reason = = HistoryCall : : FinishReason : : Busy | | reason = = HistoryCall : : FinishReason : : Missed ) { <nl> + return Type : : Missed ; <nl> + } <nl> } <nl> } <nl> return Type : : In ; <nl> void BoxController : : rowClicked ( PeerListBox : : Row * row ) { <nl> <nl> void BoxController : : rowActionClicked ( PeerListBox : : Row * row ) { <nl> auto user = row - > peer ( ) - > asUser ( ) ; <nl> - Expects ( user ! = nullptr ) ; <nl> + t_assert ( user ! = nullptr ) ; <nl> <nl> Current ( ) . startOutgoingCall ( user ) ; <nl> } <nl> void BoxController : : receivedCalls ( const QVector < MTPMessage > & result ) { <nl> auto peerId = peerFromMessage ( message ) ; <nl> if ( auto peer = App : : peerLoaded ( peerId ) ) { <nl> auto item = App : : histories ( ) . addNewMessage ( message , NewMessageExisting ) ; <nl> - appendRow ( item ) ; <nl> + insertRow ( item , InsertWay : : Append ) ; <nl> } else { <nl> LOG ( ( " API Error : a search results with not loaded peer % 1 " ) . 
arg ( peerId ) ) ; <nl> } <nl> void BoxController : : receivedCalls ( const QVector < MTPMessage > & result ) { <nl> view ( ) - > refreshRows ( ) ; <nl> } <nl> <nl> - bool BoxController : : appendRow ( HistoryItem * item ) { <nl> + bool BoxController : : insertRow ( HistoryItem * item , InsertWay way ) { <nl> if ( auto row = rowForItem ( item ) ) { <nl> - row - > addItem ( item ) ; <nl> - return false ; <nl> + if ( row - > canAddItem ( item ) ) { <nl> + row - > addItem ( item ) ; <nl> + return false ; <nl> + } <nl> } <nl> - view ( ) - > appendRow ( createRow ( item ) ) ; <nl> + ( way = = InsertWay : : Append ) ? view ( ) - > appendRow ( createRow ( item ) ) : view ( ) - > prependRow ( createRow ( item ) ) ; <nl> view ( ) - > reorderRows ( [ ] ( auto & begin , auto & end ) { <nl> std : : sort ( begin , end , [ ] ( auto & a , auto & b ) { <nl> return static_cast < Row & > ( * a ) . maxItemId ( ) > static_cast < Row & > ( * a ) . maxItemId ( ) ; <nl> bool BoxController : : appendRow ( HistoryItem * item ) { <nl> return true ; <nl> } <nl> <nl> - bool BoxController : : prependRow ( HistoryItem * item ) { <nl> - if ( auto row = rowForItem ( item ) ) { <nl> - row - > addItem ( item ) ; <nl> - return false ; <nl> - } <nl> - view ( ) - > prependRow ( createRow ( item ) ) ; <nl> - return true ; <nl> - } <nl> - <nl> BoxController : : Row * BoxController : : rowForItem ( HistoryItem * item ) { <nl> auto v = view ( ) ; <nl> - auto checkForReturn = [ item ] ( Row * row ) { <nl> - return row - > canAddItem ( item ) ? 
row : nullptr ; <nl> - } ; <nl> if ( auto fullRowsCount = v - > fullRowsCount ( ) ) { <nl> auto itemId = item - > id ; <nl> auto lastRow = static_cast < Row * > ( v - > rowAt ( fullRowsCount - 1 ) ) ; <nl> if ( itemId < lastRow - > minItemId ( ) ) { <nl> - return checkForReturn ( lastRow ) ; <nl> + return lastRow ; <nl> } <nl> auto firstRow = static_cast < Row * > ( v - > rowAt ( 0 ) ) ; <nl> if ( itemId > firstRow - > maxItemId ( ) ) { <nl> - return checkForReturn ( firstRow ) ; <nl> + return firstRow ; <nl> } <nl> <nl> / / Binary search . Invariant : <nl> BoxController : : Row * BoxController : : rowForItem ( HistoryItem * item ) { <nl> return possibleResult ; <nl> } <nl> } <nl> - return checkForReturn ( result ) ; <nl> + return result ; <nl> } <nl> return nullptr ; <nl> } <nl> mmm a / Telegram / SourceFiles / calls / calls_box_controller . h <nl> ppp b / Telegram / SourceFiles / calls / calls_box_controller . h <nl> class BoxController : public PeerListBox : : Controller , private base : : Subscriber , <nl> class Row ; <nl> Row * rowForItem ( HistoryItem * item ) ; <nl> <nl> - bool appendRow ( HistoryItem * item ) ; <nl> - bool prependRow ( HistoryItem * item ) ; <nl> + enum class InsertWay { <nl> + Append , <nl> + Prepend , <nl> + } ; <nl> + bool insertRow ( HistoryItem * item , InsertWay way ) ; <nl> std : : unique_ptr < PeerListBox : : Row > createRow ( HistoryItem * item ) const ; <nl> <nl> MsgId _offsetId = 0 ; <nl> mmm a / Telegram / SourceFiles / calls / calls_instance . cpp <nl> ppp b / Telegram / SourceFiles / calls / calls_instance . cpp <nl> Instance : : Instance ( ) = default ; <nl> <nl> void Instance : : startOutgoingCall ( gsl : : not_null < UserData * > user ) { <nl> if ( _currentCall ) { <nl> + _currentCallPanel - > showAndActivate ( ) ; <nl> return ; / / Already in a call . <nl> } <nl> createCall ( user , Call : : Type : : Outgoing ) ; <nl> mmm a / Telegram / SourceFiles / history . cpp <nl> ppp b / Telegram / SourceFiles / history . 
cpp <nl> HistoryItem * History : : createItem ( const MTPMessage & msg , bool applyServiceAction , <nl> <nl> case mtpc_messageService : { <nl> auto & m = msg . c_messageService ( ) ; <nl> - result = HistoryService : : create ( this , m ) ; <nl> + if ( m . vaction . type ( ) = = mtpc_messageActionPhoneCall ) { <nl> + result = HistoryMessage : : create ( this , m ) ; <nl> + } else { <nl> + result = HistoryService : : create ( this , m ) ; <nl> + } <nl> <nl> if ( applyServiceAction ) { <nl> auto & action = m . vaction ; <nl> mmm a / Telegram / SourceFiles / history . h <nl> ppp b / Telegram / SourceFiles / history . h <nl> enum HistoryMediaType { <nl> MediaTypePhoto , <nl> MediaTypeVideo , <nl> MediaTypeContact , <nl> + MediaTypeCall , <nl> MediaTypeFile , <nl> MediaTypeGif , <nl> MediaTypeSticker , <nl> mmm a / Telegram / SourceFiles / history / history . style <nl> ppp b / Telegram / SourceFiles / history / history . style <nl> historyCallArrowMissedIn : icon { { " call_arrow_in " , historyCallArrowMissedInFg } } <nl> historyCallArrowMissedInSelected : icon { { " call_arrow_in " , historyCallArrowMissedInFgSelected } } ; <nl> historyCallArrowOut : icon { { " call_arrow_out " , historyCallArrowOutFg } } ; <nl> historyCallArrowOutSelected : icon { { " call_arrow_out " , historyCallArrowOutFgSelected } } ; <nl> + historyCallWidth : 240px ; <nl> + historyCallHeight : 56px ; <nl> + historyCallInIcon : icon { { " menu_calls " , msgFileInBg } } ; <nl> + historyCallInIconSelected : icon { { " menu_calls " , msgFileInBgSelected } } ; <nl> + historyCallOutIcon : icon { { " menu_calls " , msgFileOutBg } } ; <nl> + historyCallOutIconSelected : icon { { " menu_calls " , msgFileOutBgSelected } } ; <nl> + historyCallIconPosition : point ( 17px , 18px ) ; <nl> + historyCallLeft : 16px ; <nl> + historyCallTop : 9px ; <nl> + historyCallStatusTop : 29px ; <nl> + historyCallStatusSkip : 4px ; <nl> + historyCallArrowPosition : point ( - 1px , 1px ) ; <nl> <nl> msgFileMenuSize : size ( 
36px , 36px ) ; <nl> msgFileSize : 44px ; <nl> mmm a / Telegram / SourceFiles / history / history_inner_widget . cpp <nl> ppp b / Telegram / SourceFiles / history / history_inner_widget . cpp <nl> void HistoryInner : : onDragExec ( ) { <nl> <nl> auto drag = std : : make_unique < QDrag > ( App : : wnd ( ) ) ; <nl> if ( ! urls . isEmpty ( ) ) mimeData - > setUrls ( urls ) ; <nl> - if ( uponSelected & & ! _selected . isEmpty ( ) & & _selected . cbegin ( ) . value ( ) = = FullSelection & & ! Adaptive : : OneColumn ( ) ) { <nl> - mimeData - > setData ( qsl ( " application / x - td - forward - selected " ) , " 1 " ) ; <nl> + if ( uponSelected & & ! Adaptive : : OneColumn ( ) ) { <nl> + auto selectedState = getSelectionState ( ) ; <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . canForwardCount ) { <nl> + mimeData - > setData ( qsl ( " application / x - td - forward - selected " ) , " 1 " ) ; <nl> + } <nl> } <nl> drag - > setMimeData ( mimeData ) ; <nl> drag - > exec ( Qt : : CopyAction ) ; <nl> void HistoryInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> dragActionUpdate ( e - > globalPos ( ) ) ; <nl> } <nl> <nl> - int32 selectedForForward , selectedForDelete ; <nl> - getSelectionState ( selectedForForward , selectedForDelete ) ; <nl> - bool canSendMessages = _widget - > canSendMessages ( _peer ) ; <nl> + auto selectedState = getSelectionState ( ) ; <nl> + auto canSendMessages = _widget - > canSendMessages ( _peer ) ; <nl> <nl> / / - 2 - has full selected items , but not over , - 1 - has selection , but no over , 0 - no selection , 1 - over text , 2 - over full selected items <nl> int32 isUponSelected = 0 , hasSelected = 0 ; ; <nl> void HistoryInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> _menu - > addAction ( lang ( lng_context_copy_post_link ) , _widget , SLOT ( onCopyPostLink ( ) ) ) ; <nl> } <nl> if ( isUponSelected > 1 ) { <nl> - _menu - > addAction ( lang ( 
lng_context_forward_selected ) , _widget , SLOT ( onForwardSelected ( ) ) ) ; <nl> - if ( selectedForDelete = = selectedForForward ) { <nl> + if ( selectedState . count > 0 & & selectedState . canForwardCount = = selectedState . count ) { <nl> + _menu - > addAction ( lang ( lng_context_forward_selected ) , _widget , SLOT ( onForwardSelected ( ) ) ) ; <nl> + } <nl> + if ( selectedState . count > 0 & & selectedState . canDeleteCount = = selectedState . count ) { <nl> _menu - > addAction ( lang ( lng_context_delete_selected ) , base : : lambda_guarded ( this , [ this ] { <nl> _widget - > confirmDeleteSelectedItems ( ) ; <nl> } ) ) ; <nl> void HistoryInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> _menu - > addAction ( lang ( lng_context_clear_selection ) , _widget , SLOT ( onClearSelected ( ) ) ) ; <nl> } else if ( App : : hoveredLinkItem ( ) ) { <nl> if ( isUponSelected ! = - 2 ) { <nl> - if ( dynamic_cast < HistoryMessage * > ( App : : hoveredLinkItem ( ) ) & & App : : hoveredLinkItem ( ) - > id > 0 ) { <nl> + if ( App : : hoveredLinkItem ( ) - > canForward ( ) ) { <nl> _menu - > addAction ( lang ( lng_context_forward_msg ) , _widget , SLOT ( forwardMessage ( ) ) ) - > setEnabled ( true ) ; <nl> } <nl> if ( App : : hoveredLinkItem ( ) - > canDelete ( ) ) { <nl> void HistoryInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> } <nl> } else { / / maybe cursor on some text history item ? <nl> bool canDelete = item & & item - > canDelete ( ) & & ( item - > id > 0 | | ! item - > serviceMsg ( ) ) ; <nl> - bool canForward = item & & ( item - > id > 0 ) & & ! 
item - > serviceMsg ( ) ; <nl> + bool canForward = item & & item - > canForward ( ) ; <nl> <nl> - HistoryMessage * msg = dynamic_cast < HistoryMessage * > ( item ) ; <nl> + auto msg = dynamic_cast < HistoryMessage * > ( item ) ; <nl> if ( isUponSelected > 0 ) { <nl> _menu - > addAction ( lang ( lng_context_copy_selected ) , this , SLOT ( copySelectedText ( ) ) ) - > setEnabled ( true ) ; <nl> if ( item & & item - > id > 0 & & isUponSelected ! = 2 ) { <nl> void HistoryInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> } <nl> } <nl> <nl> - QString linkCopyToClipboardText = _contextMenuLnk ? _contextMenuLnk - > copyToClipboardContextItemText ( ) : QString ( ) ; <nl> + auto linkCopyToClipboardText = _contextMenuLnk ? _contextMenuLnk - > copyToClipboardContextItemText ( ) : QString ( ) ; <nl> if ( ! linkCopyToClipboardText . isEmpty ( ) ) { <nl> _menu - > addAction ( linkCopyToClipboardText , this , SLOT ( copyContextUrl ( ) ) ) - > setEnabled ( true ) ; <nl> } <nl> void HistoryInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> _menu - > addAction ( lang ( lng_context_copy_post_link ) , _widget , SLOT ( onCopyPostLink ( ) ) ) ; <nl> } <nl> if ( isUponSelected > 1 ) { <nl> - _menu - > addAction ( lang ( lng_context_forward_selected ) , _widget , SLOT ( onForwardSelected ( ) ) ) ; <nl> - if ( selectedForDelete = = selectedForForward ) { <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . canForwardCount ) { <nl> + _menu - > addAction ( lang ( lng_context_forward_selected ) , _widget , SLOT ( onForwardSelected ( ) ) ) ; <nl> + } <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . 
canDeleteCount ) { <nl> _menu - > addAction ( lang ( lng_context_delete_selected ) , base : : lambda_guarded ( this , [ this ] { <nl> _widget - > confirmDeleteSelectedItems ( ) ; <nl> } ) ) ; <nl> void HistoryInner : : keyPressEvent ( QKeyEvent * e ) { <nl> setToClipboard ( getSelectedText ( ) , QClipboard : : FindBuffer ) ; <nl> # endif / / Q_OS_MAC <nl> } else if ( e = = QKeySequence : : Delete ) { <nl> - int32 selectedForForward , selectedForDelete ; <nl> - getSelectionState ( selectedForForward , selectedForDelete ) ; <nl> - if ( ! _selected . isEmpty ( ) & & selectedForDelete = = selectedForForward ) { <nl> + auto selectedState = getSelectionState ( ) ; <nl> + if ( selectedState . count > 0 & & selectedState . canDeleteCount = = selectedState . count ) { <nl> _widget - > confirmDeleteSelectedItems ( ) ; <nl> } <nl> } else { <nl> bool HistoryInner : : canCopySelected ( ) const { <nl> } <nl> <nl> bool HistoryInner : : canDeleteSelected ( ) const { <nl> - if ( _selected . isEmpty ( ) | | _selected . cbegin ( ) . value ( ) ! = FullSelection ) return false ; <nl> - int32 selectedForForward , selectedForDelete ; <nl> - getSelectionState ( selectedForForward , selectedForDelete ) ; <nl> - return ( selectedForForward = = selectedForDelete ) ; <nl> + auto selectedState = getSelectionState ( ) ; <nl> + return ( selectedState . count > 0 ) & & ( selectedState . count = = selectedState . canDeleteCount ) ; <nl> } <nl> <nl> - void HistoryInner : : getSelectionState ( int32 & selectedForForward , int32 & selectedForDelete ) const { <nl> - selectedForForward = selectedForDelete = 0 ; <nl> + Window : : TopBarWidget : : SelectedState HistoryInner : : getSelectionState ( ) const { <nl> + auto result = Window : : TopBarWidget : : SelectedState { } ; <nl> for ( auto i = _selected . cbegin ( ) , e = _selected . cend ( ) ; i ! = e ; + + i ) { <nl> if ( i . value ( ) = = FullSelection ) { <nl> + + + result . count ; <nl> if ( i . 
key ( ) - > canDelete ( ) ) { <nl> - + + selectedForDelete ; <nl> + + + result . canDeleteCount ; <nl> } <nl> - + + selectedForForward ; <nl> + if ( i . key ( ) - > canForward ( ) ) { <nl> + + + result . canForwardCount ; <nl> + } <nl> + } else { <nl> + result . textSelected = true ; <nl> } <nl> } <nl> - if ( ! selectedForDelete & & ! selectedForForward & & ! _selected . isEmpty ( ) ) { / / text selection <nl> - selectedForForward = - 1 ; <nl> - } <nl> + return result ; <nl> } <nl> <nl> void HistoryInner : : clearSelectedItems ( bool onlyTextSelection ) { <nl> void HistoryInner : : clearSelectedItems ( bool onlyTextSelection ) { <nl> void HistoryInner : : fillSelectedItems ( SelectedItemSet & sel , bool forDelete ) { <nl> if ( _selected . isEmpty ( ) | | _selected . cbegin ( ) . value ( ) ! = FullSelection ) return ; <nl> <nl> - for ( SelectedItems : : const_iterator i = _selected . cbegin ( ) , e = _selected . cend ( ) ; i ! = e ; + + i ) { <nl> - HistoryItem * item = i . key ( ) ; <nl> + for ( auto i = _selected . cbegin ( ) , e = _selected . cend ( ) ; i ! = e ; + + i ) { <nl> + auto item = i . key ( ) ; <nl> if ( item & & item - > toHistoryMessage ( ) & & item - > id > 0 ) { <nl> if ( item - > history ( ) = = _migrated ) { <nl> sel . insert ( item - > id - ServerMaxMsgId , item ) ; <nl> void HistoryInner : : applyDragSelection ( SelectedItems * toItems ) const { <nl> QString HistoryInner : : tooltipText ( ) const { <nl> if ( _dragCursorState = = HistoryInDateCursorState & & _dragAction = = NoDrag ) { <nl> if ( App : : hoveredItem ( ) ) { <nl> - QString dateText = App : : hoveredItem ( ) - > date . toString ( QLocale : : system ( ) . dateTimeFormat ( QLocale : : LongFormat ) ) ; <nl> + auto dateText = App : : hoveredItem ( ) - > date . toString ( QLocale : : system ( ) . 
dateTimeFormat ( QLocale : : LongFormat ) ) ; <nl> if ( auto edited = App : : hoveredItem ( ) - > Get < HistoryMessageEdited > ( ) ) { <nl> dateText + = ' \ n ' + lng_edited_date ( lt_date , edited - > _editDate . toString ( QLocale : : system ( ) . dateTimeFormat ( QLocale : : LongFormat ) ) ) ; <nl> } <nl> QString HistoryInner : : tooltipText ( ) const { <nl> return forwarded - > _text . originalText ( AllTextSelection , ExpandLinksNone ) ; <nl> } <nl> } <nl> - } else if ( ClickHandlerPtr lnk = ClickHandler : : getActive ( ) ) { <nl> + } else if ( auto lnk = ClickHandler : : getActive ( ) ) { <nl> return lnk - > tooltip ( ) ; <nl> } <nl> return QString ( ) ; <nl> mmm a / Telegram / SourceFiles / history / history_inner_widget . h <nl> ppp b / Telegram / SourceFiles / history / history_inner_widget . h <nl> Copyright ( c ) 2014 - 2017 John Preston , https : / / desktop . telegram . org <nl> <nl> # include " ui / widgets / tooltip . h " <nl> # include " ui / widgets / scroll_area . h " <nl> + # include " window / top_bar_widget . h " <nl> <nl> namespace Window { <nl> class Controller ; <nl> class HistoryInner : public TWidget , public Ui : : AbstractTooltipShower , private b <nl> bool canCopySelected ( ) const ; <nl> bool canDeleteSelected ( ) const ; <nl> <nl> - void getSelectionState ( int32 & selectedForForward , int32 & selectedForDelete ) const ; <nl> + Window : : TopBarWidget : : SelectedState getSelectionState ( ) const ; <nl> void clearSelectedItems ( bool onlyTextSelection = false ) ; <nl> void fillSelectedItems ( SelectedItemSet & sel , bool forDelete = true ) ; <nl> void selectItem ( HistoryItem * item ) ; <nl> mmm a / Telegram / SourceFiles / history / history_item . cpp <nl> ppp b / Telegram / SourceFiles / history / history_item . 
cpp <nl> bool HistoryItem : : canPin ( ) const { <nl> return id > 0 & & _history - > peer - > isMegagroup ( ) & & ( _history - > peer - > asChannel ( ) - > amEditor ( ) | | _history - > peer - > asChannel ( ) - > amCreator ( ) ) & & toHistoryMessage ( ) ; <nl> } <nl> <nl> + bool HistoryItem : : canForward ( ) const { <nl> + if ( id < 0 ) { <nl> + return false ; <nl> + } <nl> + if ( auto message = toHistoryMessage ( ) ) { <nl> + if ( auto media = message - > getMedia ( ) ) { <nl> + if ( media - > type ( ) = = MediaTypeCall ) { <nl> + return false ; <nl> + } <nl> + } <nl> + return true ; <nl> + } <nl> + return false ; <nl> + } <nl> + <nl> bool HistoryItem : : canEdit ( const QDateTime & cur ) const { <nl> auto messageToMyself = ( _history - > peer - > id = = AuthSession : : CurrentUserPeerId ( ) ) ; <nl> auto messageTooOld = messageToMyself ? false : ( date . secsTo ( cur ) > = Global : : EditTimeLimit ( ) ) ; <nl> bool HistoryItem : : canDeleteForEveryone ( const QDateTime & cur ) const { <nl> if ( ! toHistoryMessage ( ) ) { <nl> return false ; <nl> } <nl> + if ( auto media = getMedia ( ) ) { <nl> + if ( media - > type ( ) = = MediaTypeCall ) { <nl> + return false ; <nl> + } <nl> + } <nl> if ( ! out ( ) ) { <nl> if ( auto chat = history ( ) - > peer - > asChat ( ) ) { <nl> if ( ! chat - > amCreator ( ) & & ( ! chat - > amAdmin ( ) | | ! chat - > adminsEnabled ( ) ) ) { <nl> mmm a / Telegram / SourceFiles / history / history_item . h <nl> ppp b / Telegram / SourceFiles / history / history_item . 
h <nl> struct HistoryMessageUnreadBar : public RuntimeComponent < HistoryMessageUnreadBar <nl> <nl> } ; <nl> <nl> - struct HistoryMessageCallInfo : public RuntimeComponent < HistoryMessageCallInfo > { <nl> - enum class Reason { <nl> - None , <nl> - Missed , <nl> - Busy , <nl> - } ; <nl> - Reason reason = Reason : : None ; <nl> - } ; <nl> - <nl> / / HistoryMedia has a special owning smart pointer <nl> / / which regs / unregs this media to the holding HistoryItem <nl> class HistoryMedia ; <nl> class HistoryItem : public HistoryElement , public RuntimeComposer , public ClickH <nl> } <nl> <nl> bool canPin ( ) const ; <nl> + bool canForward ( ) const ; <nl> bool canEdit ( const QDateTime & cur ) const ; <nl> bool canDelete ( ) const ; <nl> bool canDeleteForEveryone ( const QDateTime & cur ) const ; <nl> class HistoryItemInstantiated { <nl> result - > finishCreate ( ) ; <nl> return result ; <nl> } <nl> + <nl> } ; <nl> <nl> ClickHandlerPtr goToMessageClickHandler ( PeerData * peer , MsgId msgId ) ; <nl> mmm a / Telegram / SourceFiles / history / history_media_types . cpp <nl> ppp b / Telegram / SourceFiles / history / history_media_types . cpp <nl> Copyright ( c ) 2014 - 2017 John Preston , https : / / desktop . telegram . org <nl> # include " history / history_location_manager . h " <nl> # include " window / window_controller . h " <nl> # include " styles / style_history . h " <nl> + # include " calls / calls_instance . h " <nl> <nl> namespace { <nl> <nl> void HistoryContact : : updateSentMedia ( const MTPMessageMedia & media ) { <nl> } <nl> } <nl> <nl> + HistoryCall : : HistoryCall ( HistoryItem * parent , const MTPDmessageActionPhoneCall & call ) : HistoryMedia ( parent ) <nl> + , _reason ( GetReason ( call ) ) { <nl> + if ( _parent - > out ( ) ) { <nl> + _text = lang ( _reason = = FinishReason : : Missed ? 
lng_call_cancelled : lng_call_outgoing ) ; <nl> + } else if ( _reason = = FinishReason : : Missed ) { <nl> + _text = lang ( lng_call_missed ) ; <nl> + } else if ( _reason = = FinishReason : : Busy ) { <nl> + _text = lang ( lng_call_declined ) ; <nl> + } else { <nl> + _text = lang ( lng_call_incoming ) ; <nl> + } <nl> + _duration = call . has_duration ( ) ? call . vduration . v : 0 ; <nl> + <nl> + _status = _parent - > date . time ( ) . toString ( cTimeFormat ( ) ) ; <nl> + if ( _duration ) { <nl> + if ( _reason ! = FinishReason : : Missed & & _reason ! = FinishReason : : Busy ) { <nl> + _status = lng_call_duration_info ( lt_time , _status , lt_duration , formatDurationWords ( _duration ) ) ; <nl> + } else { <nl> + _duration = 0 ; <nl> + } <nl> + } <nl> + } <nl> + <nl> + HistoryCall : : FinishReason HistoryCall : : GetReason ( const MTPDmessageActionPhoneCall & call ) { <nl> + if ( call . has_reason ( ) ) { <nl> + switch ( call . vreason . type ( ) ) { <nl> + case mtpc_phoneCallDiscardReasonBusy : return FinishReason : : Busy ; <nl> + case mtpc_phoneCallDiscardReasonDisconnect : return FinishReason : : Disconnected ; <nl> + case mtpc_phoneCallDiscardReasonHangup : return FinishReason : : Hangup ; <nl> + case mtpc_phoneCallDiscardReasonMissed : return FinishReason : : Missed ; <nl> + } <nl> + Unexpected ( " Call reason type . " ) ; <nl> + } <nl> + return FinishReason : : Hangup ; <nl> + } <nl> + <nl> + void HistoryCall : : initDimensions ( ) { <nl> + _maxw = st : : msgFileMinWidth ; <nl> + <nl> + _link = MakeShared < LambdaClickHandler > ( [ peer = _parent - > history ( ) - > peer ] { <nl> + if ( auto user = peer - > asUser ( ) ) { <nl> + Calls : : Current ( ) . startOutgoingCall ( user ) ; <nl> + } <nl> + } ) ; <nl> + <nl> + _maxw = st : : historyCallWidth ; <nl> + _minh = st : : historyCallHeight ; <nl> + if ( ! 
isBubbleTop ( ) ) { <nl> + _minh - = st : : msgFileTopMinus ; <nl> + } <nl> + _height = _minh ; <nl> + } <nl> + <nl> + void HistoryCall : : draw ( Painter & p , const QRect & r , TextSelection selection , TimeMs ms ) const { <nl> + if ( _width < st : : msgPadding . left ( ) + st : : msgPadding . right ( ) + 1 ) return ; <nl> + auto skipx = 0 , skipy = 0 , width = _width , height = _height ; <nl> + <nl> + auto out = _parent - > out ( ) , isPost = _parent - > isPost ( ) , outbg = out & & ! isPost ; <nl> + auto selected = ( selection = = FullSelection ) ; <nl> + <nl> + if ( width > = _maxw ) { <nl> + width = _maxw ; <nl> + } <nl> + <nl> + auto nameleft = 0 , nametop = 0 , nameright = 0 , statustop = 0 ; <nl> + auto topMinus = isBubbleTop ( ) ? 0 : st : : msgFileTopMinus ; <nl> + <nl> + nameleft = st : : historyCallLeft ; <nl> + nametop = st : : historyCallTop - topMinus ; <nl> + nameright = st : : msgFilePadding . left ( ) ; <nl> + statustop = st : : historyCallStatusTop - topMinus ; <nl> + <nl> + auto namewidth = width - nameleft - nameright ; <nl> + <nl> + p . setFont ( st : : semiboldFont ) ; <nl> + p . setPen ( outbg ? ( selected ? st : : historyFileNameOutFgSelected : st : : historyFileNameOutFg ) : ( selected ? st : : historyFileNameInFgSelected : st : : historyFileNameInFg ) ) ; <nl> + p . drawTextLeft ( nameleft , nametop , width , _text ) ; <nl> + <nl> + auto statusleft = nameleft ; <nl> + auto missed = ( _reason = = FinishReason : : Missed | | _reason = = FinishReason : : Busy ) ; <nl> + auto & arrow = outbg ? ( selected ? st : : historyCallArrowOutSelected : st : : historyCallArrowOut ) : missed ? ( selected ? st : : historyCallArrowMissedInSelected : st : : historyCallArrowMissedIn ) : ( selected ? st : : historyCallArrowInSelected : st : : historyCallArrowIn ) ; <nl> + arrow . paint ( p , statusleft + st : : historyCallArrowPosition . x ( ) , statustop + st : : historyCallArrowPosition . y ( ) , width ) ; <nl> + statusleft + = arrow . 
width ( ) + st : : historyCallStatusSkip ; <nl> + <nl> + auto & statusFg = outbg ? ( selected ? st : : mediaOutFgSelected : st : : mediaOutFg ) : ( selected ? st : : mediaInFgSelected : st : : mediaInFg ) ; <nl> + p . setFont ( st : : normalFont ) ; <nl> + p . setPen ( statusFg ) ; <nl> + p . drawTextLeft ( statusleft , statustop , width , _status ) ; <nl> + <nl> + auto & icon = outbg ? ( selected ? st : : historyCallOutIconSelected : st : : historyCallOutIcon ) : ( selected ? st : : historyCallInIconSelected : st : : historyCallInIcon ) ; <nl> + icon . paint ( p , width - st : : historyCallIconPosition . x ( ) - icon . width ( ) , st : : historyCallIconPosition . y ( ) - topMinus , width ) ; <nl> + } <nl> + <nl> + HistoryTextState HistoryCall : : getState ( int x , int y , HistoryStateRequest request ) const { <nl> + HistoryTextState result ; <nl> + if ( x > = 0 & & y > = 0 & & x < _width & & y < _height ) { <nl> + result . link = _link ; <nl> + return result ; <nl> + } <nl> + return result ; <nl> + } <nl> + <nl> + QString HistoryCall : : notificationText ( ) const { <nl> + auto result = _text ; <nl> + if ( _duration > 0 ) { <nl> + result = lng_call_type_and_duration ( lt_type , result , lt_duration , formatDurationWords ( _duration ) ) ; <nl> + } <nl> + return result ; <nl> + } <nl> + <nl> + TextWithEntities HistoryCall : : selectedText ( TextSelection selection ) const { <nl> + if ( selection ! = FullSelection ) { <nl> + return TextWithEntities ( ) ; <nl> + } <nl> + return { qsl ( " [ " ) + _text + qsl ( " ] " ) , EntitiesInText ( ) } ; <nl> + } <nl> + <nl> namespace { <nl> <nl> QString siteNameFromUrl ( const QString & url ) { <nl> mmm a / Telegram / SourceFiles / history / history_media_types . h <nl> ppp b / Telegram / SourceFiles / history / history_media_types . 
h <nl> class HistoryContact : public HistoryMedia { <nl> <nl> } ; <nl> <nl> + class HistoryCall : public HistoryMedia { <nl> + public : <nl> + HistoryCall ( HistoryItem * parent , const MTPDmessageActionPhoneCall & call ) ; <nl> + HistoryMediaType type ( ) const override { <nl> + return MediaTypeCall ; <nl> + } <nl> + std : : unique_ptr < HistoryMedia > clone ( HistoryItem * newParent ) const override { <nl> + Unexpected ( " Clone HistoryCall . " ) ; <nl> + } <nl> + <nl> + void initDimensions ( ) override ; <nl> + <nl> + void draw ( Painter & p , const QRect & r , TextSelection selection , TimeMs ms ) const override ; <nl> + HistoryTextState getState ( int x , int y , HistoryStateRequest request ) const override ; <nl> + <nl> + bool toggleSelectionByHandlerClick ( const ClickHandlerPtr & p ) const override { <nl> + return true ; <nl> + } <nl> + bool dragItemByHandler ( const ClickHandlerPtr & p ) const override { <nl> + return false ; <nl> + } <nl> + <nl> + QString notificationText ( ) const override ; <nl> + TextWithEntities selectedText ( TextSelection selection ) const override ; <nl> + <nl> + bool needsBubble ( ) const override { <nl> + return true ; <nl> + } <nl> + bool customInfoLayout ( ) const override { <nl> + return true ; <nl> + } <nl> + <nl> + enum class FinishReason { <nl> + Missed , <nl> + Busy , <nl> + Disconnected , <nl> + Hangup , <nl> + } ; <nl> + FinishReason reason ( ) const { <nl> + return _reason ; <nl> + } <nl> + <nl> + private : <nl> + static FinishReason GetReason ( const MTPDmessageActionPhoneCall & call ) ; <nl> + <nl> + FinishReason _reason = FinishReason : : Missed ; <nl> + int _duration = 0 ; <nl> + <nl> + QString _text ; <nl> + QString _status ; <nl> + <nl> + ClickHandlerPtr _link ; <nl> + <nl> + } ; <nl> + <nl> class HistoryWebPage : public HistoryMedia { <nl> public : <nl> HistoryWebPage ( HistoryItem * parent , WebPageData * data ) ; <nl> mmm a / Telegram / SourceFiles / history / history_message . 
cpp <nl> ppp b / Telegram / SourceFiles / history / history_message . cpp <nl> HistoryMessage : : HistoryMessage ( History * history , const MTPDmessage & msg ) <nl> setText ( textWithEntities ) ; <nl> } <nl> <nl> + HistoryMessage : : HistoryMessage ( History * history , const MTPDmessageService & msg ) <nl> + : HistoryItem ( history , msg . vid . v , mtpCastFlags ( msg . vflags . v ) , : : date ( msg . vdate ) , msg . has_from_id ( ) ? msg . vfrom_id . v : 0 ) { <nl> + CreateConfig config ; <nl> + <nl> + if ( msg . has_reply_to_msg_id ( ) ) config . replyTo = msg . vreply_to_msg_id . v ; <nl> + <nl> + createComponents ( config ) ; <nl> + <nl> + switch ( msg . vaction . type ( ) ) { <nl> + case mtpc_messageActionPhoneCall : { <nl> + _media = std : : make_unique < HistoryCall > ( this , msg . vaction . c_messageActionPhoneCall ( ) ) ; <nl> + } break ; <nl> + <nl> + default : Unexpected ( " Service message action type in HistoryMessage . " ) ; <nl> + } <nl> + <nl> + setText ( TextWithEntities { } ) ; <nl> + } <nl> + <nl> namespace { <nl> <nl> MTPDmessage : : Flags newForwardedFlags ( PeerData * p , int32 from , HistoryMessage * fwd ) { <nl> void HistoryService : : setMessageByAction ( const MTPmessageAction & action ) { <nl> return result ; <nl> } ; <nl> <nl> - auto preparePhoneCallText = [ this ] ( const MTPDmessageActionPhoneCall & action ) { <nl> - auto result = PreparedText { } ; <nl> - auto timeText = date . toString ( cTimeFormat ( ) ) ; <nl> - auto duration = action . has_duration ( ) ? qMax ( action . vduration . v , 0 ) : 0 ; <nl> - auto durationText = ( [ duration ] ( ) - > QString { <nl> - if ( ! 
duration ) { <nl> - return QString ( ) ; <nl> - } <nl> - if ( duration > = 60 ) { <nl> - auto minutes = duration / 60 ; <nl> - auto seconds = duration % 60 ; <nl> - return lng_duration_minutes_seconds ( lt_count_minutes , minutes , lt_count_seconds , seconds ) ; <nl> - } <nl> - return lng_duration_seconds ( lt_count , duration ) ; <nl> - } ) ( ) ; <nl> - auto info = this - > Get < HistoryMessageCallInfo > ( ) ; <nl> - if ( out ( ) ) { <nl> - if ( info & & info - > reason = = HistoryMessageCallInfo : : Reason : : Missed ) { <nl> - result . text = lng_action_call_outgoing_missed ( lt_time , timeText ) ; <nl> - } else if ( duration ) { <nl> - result . text = lng_action_call_outgoing_duration ( lt_duration , durationText , lt_time , timeText ) ; <nl> - } else { <nl> - result . text = lng_action_call_outgoing ( lt_time , timeText ) ; <nl> - } <nl> - } else { <nl> - if ( info & & info - > reason = = HistoryMessageCallInfo : : Reason : : Missed ) { <nl> - result . text = lng_action_call_incoming_missed ( lt_time , timeText ) ; <nl> - } else if ( info & & info - > reason = = HistoryMessageCallInfo : : Reason : : Busy ) { <nl> - result . text = lng_action_call_incoming_declined ( lt_time , timeText ) ; <nl> - } else if ( duration ) { <nl> - result . text = lng_action_call_incoming_duration ( lt_duration , durationText , lt_time , timeText ) ; <nl> - } else { <nl> - result . text = lng_action_call_incoming ( lt_time , timeText ) ; <nl> - } <nl> - } <nl> - return result ; <nl> - } ; <nl> - <nl> auto messageText = PreparedText { } ; <nl> <nl> switch ( action . type ( ) ) { <nl> void HistoryService : : setMessageByAction ( const MTPmessageAction & action ) { <nl> case mtpc_messageActionChannelMigrateFrom : messageText . 
text = lang ( lng_action_group_migrate ) ; break ; <nl> case mtpc_messageActionPinMessage : messageText = preparePinnedText ( ) ; break ; <nl> case mtpc_messageActionGameScore : messageText = prepareGameScoreText ( ) ; break ; <nl> - case mtpc_messageActionPhoneCall : messageText = preparePhoneCallText ( action . c_messageActionPhoneCall ( ) ) ; break ; <nl> + case mtpc_messageActionPhoneCall : Unexpected ( " PhoneCall type in HistoryService . " ) ; <nl> case mtpc_messageActionPaymentSent : messageText = preparePaymentSentText ( ) ; break ; <nl> default : messageText . text = lang ( lng_message_empty ) ; break ; <nl> } <nl> void HistoryService : : createFromMtp ( const MTPDmessageService & message ) { <nl> auto amount = message . vaction . c_messageActionPaymentSent ( ) . vtotal_amount . v ; <nl> auto currency = qs ( message . vaction . c_messageActionPaymentSent ( ) . vcurrency ) ; <nl> Get < HistoryServicePayment > ( ) - > amount = HistoryInvoice : : fillAmountAndCurrency ( amount , currency ) ; <nl> - } else if ( message . vaction . type ( ) = = mtpc_messageActionPhoneCall ) { <nl> - using Reason = HistoryMessageCallInfo : : Reason ; <nl> - auto & action = message . vaction . c_messageActionPhoneCall ( ) ; <nl> - auto reason = ( [ & action ] { <nl> - if ( action . has_reason ( ) ) { <nl> - switch ( action . vreason . type ( ) ) { <nl> - case mtpc_phoneCallDiscardReasonBusy : return Reason : : Busy ; <nl> - case mtpc_phoneCallDiscardReasonMissed : return Reason : : Missed ; <nl> - } <nl> - } <nl> - return Reason : : None ; <nl> - } ) ( ) ; <nl> - if ( reason ! = Reason : : None ) { <nl> - UpdateComponents ( HistoryMessageCallInfo : : Bit ( ) ) ; <nl> - Get < HistoryMessageCallInfo > ( ) - > reason = reason ; <nl> - } <nl> } <nl> if ( message . has_reply_to_msg_id ( ) ) { <nl> if ( message . vaction . type ( ) = = mtpc_messageActionPinMessage ) { <nl> mmm a / Telegram / SourceFiles / history / history_message . 
h <nl> ppp b / Telegram / SourceFiles / history / history_message . h <nl> class HistoryMessage : public HistoryItem , private HistoryItemInstantiated < Histo <nl> static HistoryMessage * create ( History * history , const MTPDmessage & msg ) { <nl> return _create ( history , msg ) ; <nl> } <nl> + static HistoryMessage * create ( History * history , const MTPDmessageService & msg ) { <nl> + return _create ( history , msg ) ; <nl> + } <nl> static HistoryMessage * create ( History * history , MsgId msgId , MTPDmessage : : Flags flags , QDateTime date , int32 from , HistoryMessage * fwd ) { <nl> return _create ( history , msgId , flags , date , from , fwd ) ; <nl> } <nl> class HistoryMessage : public HistoryItem , private HistoryItemInstantiated < Histo <nl> <nl> private : <nl> HistoryMessage ( History * history , const MTPDmessage & msg ) ; <nl> + HistoryMessage ( History * history , const MTPDmessageService & msg ) ; <nl> HistoryMessage ( History * history , MsgId msgId , MTPDmessage : : Flags flags , QDateTime date , int32 from , HistoryMessage * fwd ) ; / / local forwarded <nl> HistoryMessage ( History * history , MsgId msgId , MTPDmessage : : Flags flags , MsgId replyTo , int32 viaBotId , QDateTime date , int32 from , const TextWithEntities & textWithEntities ) ; / / local message <nl> HistoryMessage ( History * history , MsgId msgId , MTPDmessage : : Flags flags , MsgId replyTo , int32 viaBotId , QDateTime date , int32 from , DocumentData * doc , const QString & caption , const MTPReplyMarkup & markup ) ; / / local document <nl> mmm a / Telegram / SourceFiles / historywidget . cpp <nl> ppp b / Telegram / SourceFiles / historywidget . cpp <nl> void HistoryWidget : : setInnerFocus ( ) { <nl> if ( _scroll - > isHidden ( ) ) { <nl> setFocus ( ) ; <nl> } else if ( _list ) { <nl> - if ( _selCount | | ( _list & & _list - > wasSelectedText ( ) ) | | _recording | | isBotStart ( ) | | isBlocked ( ) | | ! 
_canSendMessages ) { <nl> + if ( _nonEmptySelection | | ( _list & & _list - > wasSelectedText ( ) ) | | _recording | | isBotStart ( ) | | isBlocked ( ) | | ! _canSendMessages ) { <nl> _list - > setFocus ( ) ; <nl> } else { <nl> _field - > setFocus ( ) ; <nl> void HistoryWidget : : showHistory ( const PeerId & peerId , MsgId showAtMsgId , bool re <nl> _titlePeerTextWidth = 0 ; <nl> <nl> noSelectingScroll ( ) ; <nl> - _selCount = 0 ; <nl> - _topBar - > showSelected ( 0 ) ; <nl> + _nonEmptySelection = false ; <nl> + _topBar - > showSelected ( Window : : TopBarWidget : : SelectedState { } ) ; <nl> <nl> App : : hoveredItem ( nullptr ) ; <nl> App : : pressedItem ( nullptr ) ; <nl> void HistoryWidget : : deleteSelectedItems ( bool forEveryone ) { <nl> } <nl> <nl> void HistoryWidget : : onListEscapePressed ( ) { <nl> - if ( _selCount & & _list ) { <nl> + if ( _nonEmptySelection & & _list ) { <nl> onClearSelected ( ) ; <nl> } else { <nl> onCancel ( ) ; <nl> void HistoryWidget : : fillSelectedItems ( SelectedItemSet & sel , bool forDelete ) { <nl> <nl> void HistoryWidget : : updateTopBarSelection ( ) { <nl> if ( ! _list ) { <nl> - _topBar - > showSelected ( 0 ) ; <nl> + _topBar - > showSelected ( Window : : TopBarWidget : : SelectedState { } ) ; <nl> return ; <nl> } <nl> <nl> - int32 selectedForForward , selectedForDelete ; <nl> - _list - > getSelectionState ( selectedForForward , selectedForDelete ) ; <nl> - _selCount = selectedForForward ? selectedForForward : selectedForDelete ; <nl> - _topBar - > showSelected ( _selCount > 0 ? _selCount : 0 , ( selectedForDelete = = selectedForForward ) ) ; <nl> + auto selectedState = _list - > getSelectionState ( ) ; <nl> + _nonEmptySelection = ( selectedState . count > 0 ) | | selectedState . textSelected ; <nl> + _topBar - > showSelected ( selectedState ) ; <nl> updateControlsVisibility ( ) ; <nl> updateListSize ( ) ; <nl> if ( ! Ui : : isLayerShown ( ) & & ! 
App : : passcoded ( ) ) { <nl> - if ( _selCount | | ( _list & & _list - > wasSelectedText ( ) ) | | _recording | | isBotStart ( ) | | isBlocked ( ) | | ! _canSendMessages ) { <nl> + if ( _nonEmptySelection | | ( _list & & _list - > wasSelectedText ( ) ) | | _recording | | isBotStart ( ) | | isBlocked ( ) | | ! _canSendMessages ) { <nl> _list - > setFocus ( ) ; <nl> } else { <nl> _field - > setFocus ( ) ; <nl> mmm a / Telegram / SourceFiles / historywidget . h <nl> ppp b / Telegram / SourceFiles / historywidget . h <nl> private slots : <nl> DragState _attachDrag = DragStateNone ; <nl> object_ptr < DragArea > _attachDragDocument , _attachDragPhoto ; <nl> <nl> - int32 _selCount ; / / < 0 - text selected , focus list , not _field <nl> + bool _nonEmptySelection = false ; <nl> <nl> TaskQueue _fileLoader ; <nl> TextUpdateEvents _textUpdateEvents = ( TextUpdateEvent : : SaveDraft | TextUpdateEvent : : SendTyping ) ; <nl> mmm a / Telegram / SourceFiles / layout . cpp <nl> ppp b / Telegram / SourceFiles / layout . cpp <nl> QString formatDurationText ( qint64 duration ) { <nl> return ( hours ? QString : : number ( hours ) + ' : ' : QString ( ) ) + ( minutes > = 10 ? QString ( ) : QString ( ' 0 ' ) ) + QString : : number ( minutes ) + ' : ' + ( seconds > = 10 ? QString ( ) : QString ( ' 0 ' ) ) + QString : : number ( seconds ) ; <nl> } <nl> <nl> + QString formatDurationWords ( qint64 duration ) { <nl> + if ( duration > 59 ) { <nl> + auto minutes = ( duration / 60 ) ; <nl> + auto seconds = ( duration % 60 ) ; <nl> + return lng_duration_minutes_seconds ( lt_count_minutes , minutes , lt_count_seconds , seconds ) ; <nl> + } <nl> + return lng_duration_seconds ( lt_count , duration ) ; <nl> + } <nl> + <nl> QString formatDurationAndSizeText ( qint64 duration , qint64 size ) { <nl> return lng_duration_and_size ( lt_duration , formatDurationText ( duration ) , lt_size , formatSizeText ( size ) ) ; <nl> } <nl> mmm a / Telegram / SourceFiles / layout . 
h <nl> ppp b / Telegram / SourceFiles / layout . h <nl> static const int32 FileStatusSizeFailed = 0x7FFFFFF2 ; <nl> QString formatSizeText ( qint64 size ) ; <nl> QString formatDownloadText ( qint64 ready , qint64 total ) ; <nl> QString formatDurationText ( qint64 duration ) ; <nl> + QString formatDurationWords ( qint64 duration ) ; <nl> QString formatDurationAndSizeText ( qint64 duration , qint64 size ) ; <nl> QString formatGifAndSizeText ( qint64 size ) ; <nl> QString formatPlayedText ( qint64 played , qint64 duration ) ; <nl> mmm a / Telegram / SourceFiles / overviewwidget . cpp <nl> ppp b / Telegram / SourceFiles / overviewwidget . cpp <nl> void OverviewInner : : onDragExec ( ) { <nl> QList < QUrl > urls ; <nl> bool forwardSelected = false ; <nl> if ( uponSelected ) { <nl> - forwardSelected = ! _selected . isEmpty ( ) & & _selected . cbegin ( ) . value ( ) = = FullSelection & & ! Adaptive : : OneColumn ( ) ; <nl> + if ( ! Adaptive : : OneColumn ( ) ) { <nl> + auto selectedState = getSelectionState ( ) ; <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . canForwardCount ) { <nl> + forwardSelected = true ; <nl> + } <nl> + } <nl> } else if ( pressedHandler ) { <nl> sel = pressedHandler - > dragText ( ) ; <nl> / / if ( ! sel . isEmpty ( ) & & sel . at ( 0 ) ! = ' / ' & & sel . at ( 0 ) ! = ' @ ' & & sel . at ( 0 ) ! 
= ' # ' ) { <nl> void OverviewInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> } <nl> } <nl> <nl> - int32 selectedForForward , selectedForDelete ; <nl> - getSelectionState ( selectedForForward , selectedForDelete ) ; <nl> + auto selectedState = getSelectionState ( ) ; <nl> <nl> / / - 2 - has full selected items , but not over , 0 - no selection , 2 - over full selected items <nl> int32 isUponSelected = 0 , hasSelected = 0 ; <nl> void OverviewInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> } <nl> } <nl> if ( isUponSelected > 1 ) { <nl> - _menu - > addAction ( lang ( lng_context_forward_selected ) , _overview , SLOT ( onForwardSelected ( ) ) ) ; <nl> - if ( selectedForDelete = = selectedForForward ) { <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . canForwardCount ) { <nl> + _menu - > addAction ( lang ( lng_context_forward_selected ) , _overview , SLOT ( onForwardSelected ( ) ) ) ; <nl> + } <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . canDeleteCount ) { <nl> _menu - > addAction ( lang ( lng_context_delete_selected ) , base : : lambda_guarded ( this , [ this ] { <nl> _overview - > confirmDeleteSelectedItems ( ) ; <nl> } ) ) ; <nl> void OverviewInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> _menu - > addAction ( lang ( lng_context_clear_selection ) , _overview , SLOT ( onClearSelected ( ) ) ) ; <nl> } else if ( App : : hoveredLinkItem ( ) ) { <nl> if ( isUponSelected ! 
= - 2 ) { <nl> - if ( App : : hoveredLinkItem ( ) - > toHistoryMessage ( ) ) { <nl> + if ( App : : hoveredLinkItem ( ) - > canForward ( ) ) { <nl> _menu - > addAction ( lang ( lng_context_forward_msg ) , this , SLOT ( forwardMessage ( ) ) ) - > setEnabled ( true ) ; <nl> } <nl> if ( App : : hoveredLinkItem ( ) - > canDelete ( ) ) { <nl> void OverviewInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> } <nl> _menu - > addAction ( lang ( lng_context_to_msg ) , this , SLOT ( goToMessage ( ) ) ) - > setEnabled ( true ) ; <nl> if ( isUponSelected > 1 ) { <nl> - _menu - > addAction ( lang ( lng_context_forward_selected ) , _overview , SLOT ( onForwardSelected ( ) ) ) ; <nl> - if ( selectedForDelete = = selectedForForward ) { <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . canForwardCount ) { <nl> + _menu - > addAction ( lang ( lng_context_forward_selected ) , _overview , SLOT ( onForwardSelected ( ) ) ) ; <nl> + } <nl> + if ( selectedState . count > 0 & & selectedState . count = = selectedState . canDeleteCount ) { <nl> _menu - > addAction ( lang ( lng_context_delete_selected ) , base : : lambda_guarded ( this , [ this ] { <nl> _overview - > confirmDeleteSelectedItems ( ) ; <nl> } ) ) ; <nl> void OverviewInner : : showContextMenu ( QContextMenuEvent * e , bool showFromTouch ) { <nl> _menu - > addAction ( lang ( lng_context_clear_selection ) , _overview , SLOT ( onClearSelected ( ) ) ) ; <nl> } else { <nl> if ( isUponSelected ! 
= - 2 ) { <nl> - if ( App : : mousedItem ( ) - > toHistoryMessage ( ) ) { <nl> + if ( App : : mousedItem ( ) - > canForward ( ) ) { <nl> _menu - > addAction ( lang ( lng_context_forward_msg ) , this , SLOT ( forwardMessage ( ) ) ) - > setEnabled ( true ) ; <nl> } <nl> if ( App : : mousedItem ( ) - > canDelete ( ) ) { <nl> void OverviewInner : : onMenuDestroy ( QObject * obj ) { <nl> } <nl> } <nl> <nl> - void OverviewInner : : getSelectionState ( int32 & selectedForForward , int32 & selectedForDelete ) const { <nl> - selectedForForward = selectedForDelete = 0 ; <nl> - for ( SelectedItems : : const_iterator i = _selected . cbegin ( ) , e = _selected . cend ( ) ; i ! = e ; + + i ) { <nl> + Window : : TopBarWidget : : SelectedState OverviewInner : : getSelectionState ( ) const { <nl> + auto result = Window : : TopBarWidget : : SelectedState { } ; <nl> + for ( auto i = _selected . cbegin ( ) , e = _selected . cend ( ) ; i ! = e ; + + i ) { <nl> if ( i . value ( ) = = FullSelection ) { <nl> - if ( HistoryItem * item = App : : histItemById ( itemChannel ( i . key ( ) ) , itemMsgId ( i . key ( ) ) ) ) { <nl> + if ( auto item = App : : histItemById ( itemChannel ( i . key ( ) ) , itemMsgId ( i . key ( ) ) ) ) { <nl> + + + result . count ; <nl> + if ( item - > canForward ( ) ) { <nl> + + + result . canForwardCount ; <nl> + } <nl> if ( item - > canDelete ( ) ) { <nl> - + + selectedForDelete ; <nl> + + + result . canDeleteCount ; <nl> } <nl> } <nl> - + + selectedForForward ; <nl> } <nl> } <nl> - if ( ! selectedForDelete & & ! selectedForForward & & ! _selected . 
isEmpty ( ) ) { / / text selection <nl> - selectedForForward = - 1 ; <nl> - } <nl> + return result ; <nl> } <nl> <nl> void OverviewInner : : clearSelectedItems ( bool onlyTextSelection ) { <nl> MediaOverviewType OverviewWidget : : type ( ) const { <nl> } <nl> <nl> void OverviewWidget : : switchType ( MediaOverviewType type ) { <nl> - _selCount = 0 ; <nl> - <nl> disconnect ( _scroll , SIGNAL ( scrolled ( ) ) , this , SLOT ( onScroll ( ) ) ) ; <nl> <nl> _inner - > setSelectMode ( false ) ; <nl> void OverviewWidget : : switchType ( MediaOverviewType type ) { <nl> _header = _header . toUpper ( ) ; <nl> <nl> noSelectingScroll ( ) ; <nl> - _topBar - > showSelected ( 0 ) ; <nl> + _topBar - > showSelected ( Window : : TopBarWidget : : SelectedState { } ) ; <nl> updateTopBarSelection ( ) ; <nl> scrollReset ( ) ; <nl> <nl> bool OverviewWidget : : contentOverlapped ( const QRect & globalRect ) { <nl> } <nl> <nl> void OverviewWidget : : updateTopBarSelection ( ) { <nl> - int32 selectedForForward , selectedForDelete ; <nl> - _inner - > getSelectionState ( selectedForForward , selectedForDelete ) ; <nl> - _selCount = selectedForForward ? selectedForForward : selectedForDelete ; <nl> - _inner - > setSelectMode ( _selCount > 0 ) ; <nl> + auto selectedState = _inner - > getSelectionState ( ) ; <nl> + _inner - > setSelectMode ( selectedState . count > 0 ) ; <nl> if ( App : : main ( ) ) { <nl> - _topBar - > showSelected ( _selCount > 0 ? _selCount : 0 , ( selectedForDelete = = selectedForForward ) ) ; <nl> + _topBar - > showSelected ( selectedState ) ; <nl> _topBar - > update ( ) ; <nl> } <nl> if ( App : : wnd ( ) & & ! Ui : : isLayerShown ( ) ) { <nl> mmm a / Telegram / SourceFiles / overviewwidget . h <nl> ppp b / Telegram / SourceFiles / overviewwidget . h <nl> Copyright ( c ) 2014 - 2017 John Preston , https : / / desktop . telegram . org <nl> # pragma once <nl> <nl> # include " window / section_widget . h " <nl> + # include " window / top_bar_widget . 
h " <nl> # include " ui / widgets / tooltip . h " <nl> # include " ui / widgets / scroll_area . h " <nl> <nl> class OverviewInner : public TWidget , public Ui : : AbstractTooltipShower , public R <nl> void changingMsgId ( HistoryItem * row , MsgId newId ) ; <nl> void repaintItem ( const HistoryItem * msg ) ; <nl> <nl> - void getSelectionState ( int32 & selectedForForward , int32 & selectedForDelete ) const ; <nl> + Window : : TopBarWidget : : SelectedState getSelectionState ( ) const ; <nl> void clearSelectedItems ( bool onlyTextSelection = false ) ; <nl> void fillSelectedItems ( SelectedItemSet & sel , bool forDelete = true ) ; <nl> <nl> public slots : <nl> QTimer _scrollTimer ; <nl> int32 _scrollDelta = 0 ; <nl> <nl> - int32 _selCount = 0 ; <nl> - <nl> object_ptr < Ui : : PlainShadow > _topShadow ; <nl> bool _inGrab = false ; <nl> <nl> mmm a / Telegram / SourceFiles / window / top_bar_widget . cpp <nl> ppp b / Telegram / SourceFiles / window / top_bar_widget . cpp <nl> void TopBarWidget : : updateControlsGeometry ( ) { <nl> selectedButtonsTop + = ( height ( ) - _forward - > height ( ) ) / 2 ; <nl> <nl> _forward - > moveToLeft ( buttonsLeft , selectedButtonsTop ) ; <nl> - buttonsLeft + = _forward - > width ( ) + st : : topBarActionSkip ; <nl> + if ( ! _forward - > isHidden ( ) ) { <nl> + buttonsLeft + = _forward - > width ( ) + st : : topBarActionSkip ; <nl> + } <nl> <nl> _delete - > moveToLeft ( buttonsLeft , selectedButtonsTop ) ; <nl> _clearSelection - > moveToRight ( st : : topBarActionSkip , selectedButtonsTop ) ; <nl> void TopBarWidget : : showAll ( ) { <nl> <nl> _clearSelection - > show ( ) ; <nl> _delete - > setVisible ( _canDelete ) ; <nl> - _forward - > show ( ) ; <nl> + _forward - > setVisible ( _canForward ) ; <nl> <nl> _mediaType - > setVisible ( App : : main ( ) ? App : : main ( ) - > showMediaTypeSwitch ( ) : false ) ; <nl> if ( historyPeer & & ! 
overviewPeer ) { <nl> void TopBarWidget : : updateMembersShowArea ( ) { <nl> _membersShowArea - > setGeometry ( App : : main ( ) - > getMembersShowAreaGeometry ( ) ) ; <nl> } <nl> <nl> - void TopBarWidget : : showSelected ( int selectedCount , bool canDelete ) { <nl> - if ( _selectedCount = = selectedCount & & _canDelete = = canDelete ) { <nl> + void TopBarWidget : : showSelected ( SelectedState state ) { <nl> + auto canDelete = ( state . count > 0 & & state . count = = state . canDeleteCount ) ; <nl> + auto canForward = ( state . count > 0 & & state . count = = state . canForwardCount ) ; <nl> + if ( _selectedCount = = state . count & & _canDelete = = canDelete & & _canForward = = canForward ) { <nl> return ; <nl> } <nl> - if ( selectedCount = = 0 ) { <nl> + if ( state . count = = 0 ) { <nl> / / Don ' t change the visible buttons if the selection is cancelled . <nl> canDelete = _canDelete ; <nl> + canForward = _canForward ; <nl> } <nl> <nl> auto wasSelected = ( _selectedCount > 0 ) ; <nl> - _selectedCount = selectedCount ; <nl> + _selectedCount = state . count ; <nl> if ( _selectedCount > 0 ) { <nl> _forward - > setNumbersText ( _selectedCount ) ; <nl> _delete - > setNumbersText ( _selectedCount ) ; <nl> void TopBarWidget : : showSelected ( int selectedCount , bool canDelete ) { <nl> } <nl> } <nl> auto hasSelected = ( _selectedCount > 0 ) ; <nl> - if ( _canDelete ! = canDelete ) { <nl> + if ( _canDelete ! = canDelete | | _canForward ! = canForward ) { <nl> _canDelete = canDelete ; <nl> + _canForward = canForward ; <nl> showAll ( ) ; <nl> } <nl> if ( wasSelected ! = hasSelected ) { <nl> mmm a / Telegram / SourceFiles / window / top_bar_widget . h <nl> ppp b / Telegram / SourceFiles / window / top_bar_widget . 
h <nl> class TopBarWidget : public TWidget , private base : : Subscriber { <nl> public : <nl> TopBarWidget ( QWidget * parent , gsl : : not_null < Window : : Controller * > controller ) ; <nl> <nl> + struct SelectedState { <nl> + bool textSelected = false ; <nl> + int count = 0 ; <nl> + int canDeleteCount = 0 ; <nl> + int canForwardCount = 0 ; <nl> + } ; <nl> + <nl> void showAll ( ) ; <nl> - void showSelected ( int selectedCount , bool canDelete = false ) ; <nl> + void showSelected ( SelectedState state ) ; <nl> void animationFinished ( ) ; <nl> void updateMembersShowArea ( ) ; <nl> <nl> class TopBarWidget : public TWidget , private base : : Subscriber { <nl> PeerData * _searchInPeer = nullptr ; <nl> int _selectedCount = 0 ; <nl> bool _canDelete = false ; <nl> + bool _canForward = false ; <nl> <nl> Animation _selectedShown ; <nl> <nl>
|
Redesign calls service messages .
|
telegramdesktop/tdesktop
|
c4f90983afce2a6247643950246101012e88e0b8
|
2017-05-09T20:46:19Z
|
mmm a / include / swift / Frontend / Frontend . h <nl> ppp b / include / swift / Frontend / Frontend . h <nl> class CompilerInstance { <nl> std : : vector < unsigned > BufferIDs ; <nl> <nl> enum : unsigned { NO_SUCH_BUFFER = ~ 0U } ; <nl> - unsigned MainBufferIndex = NO_SUCH_BUFFER ; <nl> + unsigned MainBufferID = NO_SUCH_BUFFER ; <nl> <nl> void createSILModule ( ) ; <nl> <nl> mmm a / lib / Frontend / Frontend . cpp <nl> ppp b / lib / Frontend / Frontend . cpp <nl> bool swift : : CompilerInstance : : setup ( const CompilerInvocation & Invok ) { <nl> / / Add the memory buffers first , these will be associated with a filename <nl> / / and they can replace the contents of an input filename . <nl> for ( auto Buf : Invocation . getInputBuffers ( ) ) { <nl> - if ( SILMode ) <nl> - MainBufferIndex = BufferIDs . size ( ) ; <nl> + unsigned BufferID = SourceMgr . addNewSourceBuffer ( <nl> + llvm : : MemoryBuffer : : getMemBufferCopy ( Buf - > getBuffer ( ) , <nl> + Buf - > getBufferIdentifier ( ) ) ) ; <nl> <nl> / / CompilerInvocation doesn ' t own the buffers , copy to a new buffer . <nl> - BufferIDs . push_back ( SourceMgr . addNewSourceBuffer ( <nl> - llvm : : MemoryBuffer : : getMemBufferCopy ( Buf - > getBuffer ( ) , <nl> - Buf - > getBufferIdentifier ( ) ) ) ) ; <nl> + BufferIDs . push_back ( BufferID ) ; <nl> + <nl> + if ( SILMode ) <nl> + MainBufferID = BufferID ; <nl> } <nl> <nl> for ( auto & File : Invocation . getInputFilenames ( ) ) { <nl> / / FIXME : Working with filenames is fragile , maybe use the real path <nl> / / or have some kind of FileManager . <nl> - if ( SourceMgr . getIDForBufferIdentifier ( File ) . hasValue ( ) ) <nl> - continue ; / / replaced by a memory buffer . <nl> + using namespace llvm : : sys : : path ; <nl> + { <nl> + Optional < unsigned > ExistingBufferID = <nl> + SourceMgr . getIDForBufferIdentifier ( File ) ; <nl> + if ( ExistingBufferID . hasValue ( ) ) { <nl> + if ( SILMode | | ( MainMode & & filename ( File ) = = " main . 
swift " ) ) <nl> + MainBufferID = ExistingBufferID . getValue ( ) ; <nl> + <nl> + continue ; / / replaced by a memory buffer . <nl> + } <nl> + } <nl> <nl> / / Open the input file . <nl> llvm : : OwningPtr < llvm : : MemoryBuffer > InputFile ; <nl> bool swift : : CompilerInstance : : setup ( const CompilerInvocation & Invok ) { <nl> return true ; <nl> } <nl> <nl> - using namespace llvm : : sys : : path ; <nl> - if ( SILMode | | ( MainMode & & filename ( File ) = = " main . swift " ) ) <nl> - MainBufferIndex = BufferIDs . size ( ) ; <nl> + unsigned BufferID = SourceMgr . addNewSourceBuffer ( InputFile . take ( ) ) ; <nl> <nl> / / Transfer ownership of the MemoryBuffer to the SourceMgr . <nl> - BufferIDs . push_back ( SourceMgr . addNewSourceBuffer ( InputFile . take ( ) ) ) ; <nl> + BufferIDs . push_back ( BufferID ) ; <nl> + <nl> + if ( SILMode | | ( MainMode & & filename ( File ) = = " main . swift " ) ) <nl> + MainBufferID = BufferID ; <nl> } <nl> <nl> - if ( MainMode & & MainBufferIndex = = NO_SUCH_BUFFER & & BufferIDs . size ( ) = = 1 ) <nl> - MainBufferIndex = 0 ; <nl> + if ( MainMode & & MainBufferID = = NO_SUCH_BUFFER & & BufferIDs . size ( ) = = 1 ) <nl> + MainBufferID = BufferIDs . front ( ) ; <nl> <nl> return false ; <nl> } <nl> void CompilerInstance : : performParse ( ) { <nl> <nl> if ( Kind = = SourceFileKind : : SIL ) { <nl> assert ( BufferIDs . size ( ) = = 1 ) ; <nl> - assert ( MainBufferIndex ! = NO_SUCH_BUFFER ) ; <nl> + assert ( MainBufferID ! = NO_SUCH_BUFFER ) ; <nl> createSILModule ( ) ; <nl> } <nl> <nl> void CompilerInstance : : performParse ( ) { <nl> / / a source file , or it may be a SIL file , which requires pumping the parser . <nl> / / We parse it last , though , to make sure that it can use decls from other <nl> / / files in the module . <nl> - if ( MainBufferIndex ! = NO_SUCH_BUFFER ) { <nl> + if ( MainBufferID ! 
= NO_SUCH_BUFFER ) { <nl> assert ( Kind = = SourceFileKind : : Main | | Kind = = SourceFileKind : : SIL ) ; <nl> <nl> - unsigned BufferID = BufferIDs [ MainBufferIndex ] ; <nl> if ( Kind = = SourceFileKind : : Main ) <nl> - SourceMgr . setHashbangBufferID ( BufferID ) ; <nl> + SourceMgr . setHashbangBufferID ( MainBufferID ) ; <nl> <nl> auto * SingleInputFile = <nl> - new ( * Context ) SourceFile ( * MainModule , Kind , BufferID , <nl> + new ( * Context ) SourceFile ( * MainModule , Kind , MainBufferID , <nl> Invocation . getParseStdlib ( ) ) ; <nl> MainModule - > addFile ( * SingleInputFile ) ; <nl> } <nl> void CompilerInstance : : performParse ( ) { <nl> <nl> / / Parse all the library files first . <nl> for ( size_t i = 0 , e = BufferIDs . size ( ) ; i < e ; + + i ) { <nl> - if ( i = = MainBufferIndex ) <nl> - continue ; <nl> auto BufferID = BufferIDs [ i ] ; <nl> + if ( BufferID = = MainBufferID ) <nl> + continue ; <nl> <nl> auto Buffer = SourceMgr . getLLVMSourceMgr ( ) . getMemoryBuffer ( BufferID ) ; <nl> if ( SerializedModuleLoader : : isValidSerializedAST ( * Buffer ) ) { <nl> void CompilerInstance : : performParse ( ) { <nl> return ; <nl> <nl> / / Parse the main file last . <nl> - if ( MainBufferIndex ! = NO_SUCH_BUFFER ) { <nl> + if ( MainBufferID ! = NO_SUCH_BUFFER ) { <nl> SourceFile & MainFile = MainModule - > getMainSourceFile ( Kind ) ; <nl> SILParserState SILContext ( TheSILModule . get ( ) ) ; <nl> <nl>
|
[ frontend ] Switch CompilerInstance : : MainBufferIndex to CompilerInstance : : MainBufferID .
|
apple/swift
|
6eb9a824654bb5c061211639a8788a733724f324
|
2014-01-10T22:39:07Z
|
mmm a / src / rdb_protocol / changefeed . cc <nl> ppp b / src / rdb_protocol / changefeed . cc <nl> void debug_print ( printf_buffer_t * buf , const T & t ) { <nl> buf - > appendf ( " % s " , debug : : print ( t ) . c_str ( ) ) ; <nl> } <nl> <nl> - datum_t vals_to_change ( <nl> - datum_t old_val , <nl> - datum_t new_val , <nl> - bool discard_old_val = false , <nl> - bool discard_new_val = false , <nl> - bool include_offsets = false , <nl> - boost : : optional < size_t > old_offset = boost : : none , <nl> - boost : : optional < size_t > new_offset = boost : : none ) { <nl> - if ( ( discard_old_val | | old_val . get_type ( ) = = datum_t : : R_NULL ) <nl> - & & ( discard_new_val | | new_val . get_type ( ) = = datum_t : : R_NULL ) ) { <nl> - return datum_t ( ) ; <nl> - } else { <nl> - std : : map < datum_string_t , datum_t > ret ; <nl> - if ( ! discard_old_val ) { <nl> - ret [ datum_string_t ( " old_val " ) ] = std : : move ( old_val ) ; <nl> - if ( include_offsets ) { <nl> - ret [ datum_string_t ( " old_offset " ) ] = old_offset <nl> - ? datum_t ( static_cast < double > ( * old_offset ) ) <nl> - : datum_t : : null ( ) ; <nl> - } <nl> - } <nl> - if ( ! discard_new_val ) { <nl> - ret [ datum_string_t ( " new_val " ) ] = std : : move ( new_val ) ; <nl> - if ( include_offsets ) { <nl> - ret [ datum_string_t ( " new_offset " ) ] = new_offset <nl> - ? datum_t ( static_cast < double > ( * new_offset ) ) <nl> - : datum_t : : null ( ) ; <nl> - } <nl> - } <nl> - guarantee ( ret . size ( ) ! = 0 ) ; <nl> - return datum_t ( std : : move ( ret ) ) ; <nl> - } <nl> - } <nl> - <nl> - datum_t change_val_to_change ( <nl> - const change_val_t & change , <nl> - bool discard_old_val = false , <nl> - bool discard_new_val = false ) { <nl> - return vals_to_change ( <nl> - change . old_val ? change . old_val - > val : datum_t : : null ( ) , <nl> - change . new_val ? change . 
new_val - > val : datum_t : : null ( ) , <nl> - discard_old_val , <nl> - discard_new_val ) ; <nl> - } <nl> - <nl> enum class pop_type_t { RANGE , POINT } ; <nl> class maybe_squashing_queue_t { <nl> public : <nl> class stream_t : public eager_datum_stream_t { <nl> } <nl> } ; <nl> <nl> + enum class change_type_t { <nl> + ADD = 0 , <nl> + REMOVE = 1 , <nl> + CHANGE = 2 , <nl> + INITIAL = 3 , <nl> + UNINITIAL = 4 , <nl> + STATE = 5 <nl> + } ; <nl> + <nl> / / Uses the home thread of the subscriber , not the client . <nl> class feed_t ; <nl> class subscription_t : public home_thread_mixin_t { <nl> class subscription_t : public home_thread_mixin_t { <nl> subscription_t ( feed_t * feed , <nl> configured_limits_t limits , <nl> const datum_t & squash , <nl> - bool include_states ) ; <nl> + bool include_states , <nl> + bool include_types ) ; <nl> void maybe_signal_cond ( ) THROWS_NOTHING ; <nl> void maybe_signal_queue_nearly_full_cond ( ) THROWS_NOTHING ; <nl> void destructor_cleanup ( std : : function < void ( ) > del_sub ) THROWS_NOTHING ; <nl> <nl> + datum_t maybe_add_type ( datum_t & & datum , change_type_t type ) ; <nl> / / If an error occurs , we ' re detached and ` exc ` is set to an exception to rethrow . <nl> std : : exception_ptr exc ; <nl> / / If we exceed the array size limit , elements are evicted from ` els ` and <nl> class subscription_t : public home_thread_mixin_t { <nl> const configured_limits_t limits ; <nl> const bool squash ; / / Whether or not to squash changes . <nl> const bool include_states ; / / Whether or not to include notes about the state . <nl> + const bool include_types ; / / Whether or not to include a type field in items . <nl> / / Whether we ' re in the middle of one logical batch ( only matters for squashing ) . 
<nl> bool mid_batch ; <nl> private : <nl> class subscription_t : public home_thread_mixin_t { <nl> DISABLE_COPYING ( subscription_t ) ; <nl> } ; <nl> <nl> + datum_string_t type_to_string ( change_type_t type ) { <nl> + datum_string_t type_string ; <nl> + switch ( type ) { <nl> + case change_type_t : : ADD : <nl> + type_string = datum_string_t ( " add " ) ; <nl> + break ; <nl> + case change_type_t : : REMOVE : <nl> + type_string = datum_string_t ( " remove " ) ; <nl> + break ; <nl> + case change_type_t : : CHANGE : <nl> + type_string = datum_string_t ( " change " ) ; <nl> + break ; <nl> + case change_type_t : : INITIAL : <nl> + type_string = datum_string_t ( " initial " ) ; <nl> + break ; <nl> + case change_type_t : : UNINITIAL : <nl> + type_string = datum_string_t ( " uninitial " ) ; <nl> + break ; <nl> + case change_type_t : : STATE : <nl> + type_string = datum_string_t ( " state " ) ; <nl> + break ; <nl> + default : <nl> + unreachable ( ) ; <nl> + } <nl> + <nl> + return type_string ; <nl> + } <nl> + datum_t add_type ( datum_t & & datum , change_type_t type ) { <nl> + datum_string_t type_string = type_to_string ( type ) ; <nl> + return datum . merge ( <nl> + datum_t { <nl> + std : : map < datum_string_t , datum_t > { <nl> + std : : pair < datum_string_t , datum_t > { <nl> + datum_string_t ( " type " ) , <nl> + datum_t ( type_string ) } } } ) ; <nl> + } <nl> + <nl> + datum_t subscription_t : : maybe_add_type ( datum_t & & datum , change_type_t type ) { <nl> + if ( ! 
include_types ) { <nl> + return std : : move ( datum ) ; <nl> + } <nl> + return add_type ( std : : move ( datum ) , type ) ; <nl> + } <nl> + <nl> + <nl> + datum_t vals_to_change ( <nl> + datum_t old_val , <nl> + datum_t new_val , <nl> + bool discard_old_val = false , <nl> + bool discard_new_val = false , <nl> + bool include_type = false , <nl> + bool include_offsets = false , <nl> + boost : : optional < size_t > old_offset = boost : : none , <nl> + boost : : optional < size_t > new_offset = boost : : none ) { <nl> + change_type_t change_type ; <nl> + <nl> + if ( discard_old_val & & ! discard_new_val ) { <nl> + change_type = change_type_t : : INITIAL ; <nl> + old_val = datum_t : : null ( ) ; <nl> + } else if ( ! discard_old_val & & discard_new_val ) { <nl> + change_type = change_type_t : : UNINITIAL ; <nl> + new_val = datum_t : : null ( ) ; <nl> + } else if ( ! discard_old_val <nl> + & & old_val . get_type ( ) = = datum_t : : R_NULL ) { <nl> + change_type = change_type_t : : ADD ; <nl> + } else if ( ! discard_new_val <nl> + & & new_val . get_type ( ) = = datum_t : : R_NULL ) { <nl> + change_type = change_type_t : : REMOVE ; <nl> + } else { <nl> + / / Either it ' s a change , or we ' re about to return . <nl> + change_type = change_type_t : : CHANGE ; <nl> + } <nl> + / / Status type is handled where statuses are generated . <nl> + <nl> + if ( ( discard_old_val | | old_val . get_type ( ) = = datum_t : : R_NULL ) <nl> + & & ( discard_new_val | | new_val . get_type ( ) = = datum_t : : R_NULL ) ) { <nl> + return datum_t ( ) ; <nl> + } else { <nl> + std : : map < datum_string_t , datum_t > ret ; <nl> + if ( ! discard_old_val ) { <nl> + ret [ datum_string_t ( " old_val " ) ] = std : : move ( old_val ) ; <nl> + if ( include_offsets ) { <nl> + ret [ datum_string_t ( " old_offset " ) ] = old_offset <nl> + ? datum_t ( static_cast < double > ( * old_offset ) ) <nl> + : datum_t : : null ( ) ; <nl> + } <nl> + } <nl> + if ( ! 
discard_new_val ) { <nl> + ret [ datum_string_t ( " new_val " ) ] = std : : move ( new_val ) ; <nl> + if ( include_offsets ) { <nl> + ret [ datum_string_t ( " new_offset " ) ] = new_offset <nl> + ? datum_t ( static_cast < double > ( * new_offset ) ) <nl> + : datum_t : : null ( ) ; <nl> + } <nl> + } <nl> + guarantee ( ret . size ( ) ! = 0 ) ; <nl> + <nl> + if ( include_type ) { <nl> + ret [ datum_string_t ( " type " ) ] = <nl> + datum_t ( <nl> + type_to_string ( change_type ) ) ; <nl> + } <nl> + datum_t ret_datum = datum_t ( std : : move ( ret ) ) ; <nl> + return ret_datum ; <nl> + } <nl> + } <nl> + <nl> + datum_t change_val_to_change ( <nl> + const change_val_t & change , <nl> + bool discard_old_val = false , <nl> + bool discard_new_val = false , <nl> + bool include_type = false ) { <nl> + datum_t res = vals_to_change ( <nl> + change . old_val ? change . old_val - > val : datum_t : : null ( ) , <nl> + change . new_val ? change . new_val - > val : datum_t : : null ( ) , <nl> + discard_old_val , <nl> + discard_new_val , <nl> + include_type ) ; <nl> + return res ; <nl> + } <nl> + <nl> enum class init_squashing_queue_t { NO , YES } ; <nl> class flat_sub_t : public subscription_t { <nl> public : <nl> class empty_sub_t : public flat_sub_t { <nl> empty_sub_t ( feed_t * feed , <nl> configured_limits_t limits , <nl> const datum_t & squash , <nl> - bool include_states ) <nl> + bool include_states , <nl> + bool include_types ) <nl> / / There will never be any changes , safe to start squashing right away . <nl> : flat_sub_t ( init_squashing_queue_t : : YES , <nl> - feed , std : : move ( limits ) , squash , include_states ) , <nl> + feed , <nl> + std : : move ( limits ) , <nl> + squash , <nl> + include_states , <nl> + include_types ) , <nl> state ( state_t : : INITIALIZING ) , <nl> sent_state ( state_t : : NONE ) , <nl> include_initial ( false ) { <nl> class empty_sub_t : public flat_sub_t { <nl> if ( state ! 
= sent_state & & include_states ) { <nl> sent_state = state ; <nl> state = state_t : : READY ; <nl> - return state_datum ( sent_state ) ; <nl> + return maybe_add_type ( <nl> + state_datum ( sent_state ) , <nl> + change_type_t : : STATE ) ; <nl> } <nl> r_sanity_fail ( ) ; <nl> } <nl> class point_sub_t : public flat_sub_t { <nl> configured_limits_t limits , <nl> const datum_t & squash , <nl> bool include_states , <nl> + bool include_types , <nl> datum_t _pkey ) <nl> / / For point changefeeds we start squashing right away . <nl> : flat_sub_t ( init_squashing_queue_t : : YES , <nl> - feed , std : : move ( limits ) , squash , include_states ) , <nl> + feed , <nl> + std : : move ( limits ) , <nl> + squash , <nl> + include_states , <nl> + include_types ) , <nl> pkey ( std : : move ( _pkey ) ) , <nl> stamp ( 0 ) , <nl> started ( false ) , <nl> class point_sub_t : public flat_sub_t { <nl> datum_t pop_el ( ) final { <nl> if ( state ! = sent_state & & include_states ) { <nl> sent_state = state ; <nl> - return state_datum ( state ) ; <nl> + return maybe_add_type ( state_datum ( state ) , <nl> + change_type_t : : STATE ) ; <nl> } <nl> datum_t ret ; <nl> if ( state ! = state_t : : READY & & include_initial ) { <nl> class point_sub_t : public flat_sub_t { <nl> / / like ` { new_val : null } ` . 
<nl> ret = datum_t ( <nl> std : : map < datum_string_t , datum_t > { { <nl> - datum_string_t ( " new_val " ) , datum_t : : null ( ) } } ) ; <nl> + datum_string_t ( " new_val " ) , <nl> + datum_t : : null ( ) } } ) ; <nl> } <nl> + ret = maybe_add_type ( std : : move ( ret ) , change_type_t : : INITIAL ) ; <nl> } else { <nl> - ret = change_val_to_change ( pop_change_val ( ) ) ; <nl> + ret = change_val_to_change ( pop_change_val ( ) , <nl> + false , <nl> + false , <nl> + include_types ) ; <nl> } <nl> initial_val = boost : : none ; <nl> state = state_t : : READY ; <nl> class range_sub_t : public flat_sub_t { <nl> configured_limits_t limits , <nl> const datum_t & squash , <nl> bool include_states , <nl> + bool include_types , <nl> env_t * outer_env , <nl> keyspec_t : : range_t _spec ) <nl> / / We don ' t turn on squashing until later for range subs . ( We need to <nl> / / wait until we ' ve purged and all the initial values are reconciled . ) <nl> : flat_sub_t ( init_squashing_queue_t : : NO , <nl> - feed , std : : move ( limits ) , squash , include_states ) , <nl> + feed , <nl> + std : : move ( limits ) , <nl> + squash , <nl> + include_states , <nl> + include_types ) , <nl> spec ( std : : move ( _spec ) ) , <nl> state ( state_t : : READY ) , <nl> sent_state ( state_t : : NONE ) , <nl> class range_sub_t : public flat_sub_t { <nl> if ( artificial_include_initial & & artificial_initial_vals . size ( ) = = 0 ) { <nl> state = state_t : : READY ; <nl> } <nl> - return state_datum ( sent_state ) ; <nl> + return maybe_add_type ( state_datum ( sent_state ) , <nl> + change_type_t : : STATE ) ; <nl> } <nl> if ( artificial_initial_vals . size ( ) ! = 0 ) { <nl> datum_t d = artificial_initial_vals . back ( ) ; <nl> class range_sub_t : public flat_sub_t { <nl> if ( artificial_initial_vals . 
size ( ) = = 0 ) { <nl> state = state_t : : READY ; <nl> } <nl> - return vals_to_change ( datum_t ( ) , d , true ) ; <nl> + return maybe_add_type ( <nl> + vals_to_change ( datum_t ( ) , d , true ) , <nl> + change_type_t : : INITIAL ) ; <nl> } <nl> - return change_val_to_change ( pop_change_val ( ) ) ; <nl> + return change_val_to_change ( pop_change_val ( ) , <nl> + false , <nl> + false , <nl> + include_types ) ; <nl> } <nl> bool has_el ( ) final { <nl> return ( include_states & & state ! = sent_state ) <nl> class limit_sub_t : public subscription_t { <nl> const datum_t & squash , <nl> bool _include_offsets , <nl> bool include_states , <nl> + bool include_types , <nl> keyspec_t : : limit_t _spec ) <nl> - : subscription_t ( feed , limits , squash , include_states ) , <nl> + : subscription_t ( feed , <nl> + limits , <nl> + squash , <nl> + include_states , <nl> + include_types ) , <nl> uuid ( generate_uuid ( ) ) , <nl> need_init ( - 1 ) , <nl> got_init ( 0 ) , <nl> class limit_sub_t : public subscription_t { <nl> if ( need_init = = got_init ) { <nl> ASSERT_NO_CORO_WAITING ; <nl> if ( include_initial ) { <nl> - if ( include_states ) els . push_back ( initializing_datum ( ) ) ; <nl> + if ( include_states ) els . push_back ( maybe_add_type ( initializing_datum ( ) , <nl> + change_type_t : : STATE ) ) ; <nl> size_t i = 0 ; <nl> for ( auto it = active_data . rbegin ( ) ; it ! = active_data . rend ( ) ; + + it ) { <nl> std : : map < datum_string_t , datum_t > m ; <nl> class limit_sub_t : public subscription_t { <nl> m [ datum_string_t ( " new_offset " ) ] = <nl> datum_t ( static_cast < double > ( i + + ) ) ; <nl> } <nl> - els . push_back ( datum_t ( std : : move ( m ) ) ) ; <nl> + els . push_back ( maybe_add_type ( datum_t ( std : : move ( m ) ) , <nl> + change_type_t : : INITIAL ) ) ; <nl> } <nl> } <nl> - if ( include_states ) els . push_back ( ready_datum ( ) ) ; <nl> + if ( include_states ) els . 
push_back ( maybe_add_type ( ready_datum ( ) , <nl> + change_type_t : : STATE ) ) ; <nl> <nl> if ( ! squash ) { <nl> decltype ( queued_changes ) changes ; <nl> class limit_sub_t : public subscription_t { <nl> if ( lc . old_d . has ( ) & & lc . new_d . has ( ) ) { <nl> rassert ( lc . old_d ! = lc . new_d | | lc . old_offset ! = lc . new_offset ) ; <nl> } <nl> + <nl> datum_t el = vals_to_change ( <nl> lc . old_d . has ( ) ? std : : move ( lc . old_d ) : datum_t : : null ( ) , <nl> lc . new_d . has ( ) ? std : : move ( lc . new_d ) : datum_t : : null ( ) , <nl> false , <nl> false , <nl> + include_types , <nl> include_offsets , <nl> std : : move ( lc . old_offset ) , <nl> std : : move ( lc . new_offset ) ) ; <nl> class splice_stream_t : public stream_t < range_sub_t > { <nl> cv . old_val - > tag_num , cv . source_stamp , * cv . old_val ) , <nl> cv . new_val & & discard ( <nl> cv . pkey , <nl> - cv . new_val - > tag_num , cv . source_stamp , * cv . new_val ) ) ; <nl> + cv . new_val - > tag_num , cv . source_stamp , * cv . new_val ) , <nl> + sub - > include_types ) ; <nl> if ( el . has ( ) ) { <nl> batcher . note_el ( el ) ; <nl> ret . push_back ( std : : move ( el ) ) ; <nl> class splice_stream_t : public stream_t < range_sub_t > { <nl> remove_outdated_ranges ( ) ; <nl> } else { <nl> if ( sub - > include_states ) { <nl> - ret . push_back ( state_datum ( state_t : : INITIALIZING ) ) ; <nl> + ret . push_back ( sub - > maybe_add_type ( <nl> + state_datum ( state_t : : INITIALIZING ) , <nl> + change_type_t : : STATE ) ) ; <nl> } <nl> } <nl> if ( ! src - > is_exhausted ( ) & & ! batcher . should_send_batch ( ) ) { <nl> class splice_stream_t : public stream_t < range_sub_t > { <nl> for ( auto & & datum : batch ) { <nl> datum_t cv = vals_to_change ( datum_t ( ) , std : : move ( datum ) , true ) ; <nl> if ( cv . has ( ) ) { <nl> - ret . push_back ( std : : move ( cv ) ) ; <nl> + ret . 
push_back ( <nl> + sub - > maybe_add_type ( cv , change_type_t : : INITIAL ) ) ; <nl> } <nl> } <nl> } <nl> subscription_t : : subscription_t ( <nl> feed_t * _feed , <nl> configured_limits_t _limits , <nl> const datum_t & _squash , <nl> - bool _include_states ) <nl> + bool _include_states , <nl> + bool _include_types ) <nl> : skipped ( 0 ) , <nl> feed ( _feed ) , <nl> limits ( std : : move ( _limits ) ) , <nl> squash ( _squash . as_bool ( ) ) , <nl> include_states ( _include_states ) , <nl> + include_types ( _include_types ) , <nl> mid_batch ( false ) , <nl> min_interval ( _squash . get_type ( ) = = datum_t : : R_NUM ? _squash . as_num ( ) : 0 . 0 ) , <nl> cond ( NULL ) , <nl> scoped_ptr_t < subscription_t > new_sub ( <nl> rcheck_datum ( ! ss - > include_offsets , base_exc_t : : LOGIC , <nl> " Cannot include offsets for range subs . " ) ; <nl> return new range_sub_t ( <nl> - feed , ss - > limits , ss - > squash , ss - > include_states , env , range ) ; <nl> + feed , <nl> + ss - > limits , <nl> + ss - > squash , <nl> + ss - > include_states , <nl> + ss - > include_types , <nl> + env , <nl> + range ) ; <nl> } <nl> subscription_t * operator ( ) ( const keyspec_t : : empty_t & ) const { <nl> rcheck_datum ( ! ss - > include_offsets , base_exc_t : : LOGIC , <nl> " Cannot include offsets for empty subs . 
" ) ; <nl> return new empty_sub_t ( <nl> - feed , ss - > limits , ss - > squash , ss - > include_states ) ; <nl> + feed , <nl> + ss - > limits , <nl> + ss - > squash , <nl> + ss - > include_states , <nl> + ss - > include_types ) ; <nl> } <nl> subscription_t * operator ( ) ( const keyspec_t : : limit_t & limit ) const { <nl> return new limit_sub_t ( <nl> - feed , ss - > limits , ss - > squash , ss - > include_offsets , <nl> - ss - > include_states , limit ) ; <nl> + feed , <nl> + ss - > limits , <nl> + ss - > squash , <nl> + ss - > include_offsets , <nl> + ss - > include_states , <nl> + ss - > include_types , <nl> + limit ) ; <nl> } <nl> subscription_t * operator ( ) ( const keyspec_t : : point_t & point ) const { <nl> rcheck_datum ( ! ss - > include_offsets , base_exc_t : : LOGIC , <nl> " Cannot include offsets for point subs . " ) ; <nl> return new point_sub_t ( <nl> - feed , ss - > limits , ss - > squash , ss - > include_states , point . key ) ; <nl> + feed , <nl> + ss - > limits , <nl> + ss - > squash , <nl> + ss - > include_states , <nl> + ss - > include_types , <nl> + point . key ) ; <nl> } <nl> env_t * env ; <nl> feed_t * feed ; <nl> streamspec_t : : streamspec_t ( counted_t < datum_stream_t > _maybe_src , <nl> std : : string _table_name , <nl> bool _include_offsets , <nl> bool _include_states , <nl> + bool _include_types , <nl> configured_limits_t _limits , <nl> datum_t _squash , <nl> keyspec_t : : spec_t _spec ) : <nl> streamspec_t : : streamspec_t ( counted_t < datum_stream_t > _maybe_src , <nl> table_name ( std : : move ( _table_name ) ) , <nl> include_offsets ( std : : move ( _include_offsets ) ) , <nl> include_states ( std : : move ( _include_states ) ) , <nl> + include_types ( std : : move ( _include_types ) ) , <nl> limits ( std : : move ( _limits ) ) , <nl> squash ( std : : move ( _squash ) ) , <nl> spec ( std : : move ( _spec ) ) { } <nl> mmm a / src / rdb_protocol / changefeed . hpp <nl> ppp b / src / rdb_protocol / changefeed . 
hpp <nl> struct streamspec_t { <nl> std : : string table_name ; <nl> bool include_offsets ; <nl> bool include_states ; <nl> + bool include_types ; <nl> configured_limits_t limits ; <nl> datum_t squash ; <nl> keyspec_t : : spec_t spec ; <nl> struct streamspec_t { <nl> std : : string _table_name , <nl> bool _include_offsets , <nl> bool _include_states , <nl> + bool _include_types , <nl> configured_limits_t _limits , <nl> datum_t _squash , <nl> keyspec_t : : spec_t _spec ) ; <nl> mmm a / src / rdb_protocol / optargs . cc <nl> ppp b / src / rdb_protocol / optargs . cc <nl> static const std : : set < std : : string > acceptable_optargs ( { <nl> " include_initial " , <nl> " include_offsets " , <nl> " include_states " , <nl> + " include_types " , <nl> " index " , <nl> " interleave " , <nl> " ordered " , <nl> mmm a / src / rdb_protocol / terms / seq . cc <nl> ppp b / src / rdb_protocol / terms / seq . cc <nl> class changes_term_t : public op_term_t { <nl> " changefeed_queue_size " , <nl> " include_initial " , <nl> " include_offsets " , <nl> - " include_states " } ) ) { } <nl> + " include_states " , <nl> + " include_types " } ) ) { } <nl> private : <nl> virtual scoped_ptr_t < val_t > eval_impl ( <nl> scope_env_t * env , args_t * args , eval_flags_t ) const { <nl> class changes_term_t : public op_term_t { <nl> include_states = v - > as_bool ( ) ; <nl> } <nl> <nl> + bool include_types = false ; <nl> + if ( scoped_ptr_t < val_t > v = args - > optarg ( env , " include_types " ) ) { <nl> + include_types = v - > as_bool ( ) ; <nl> + } <nl> + <nl> bool include_initial = false ; <nl> if ( scoped_ptr_t < val_t > v = args - > optarg ( env , " include_initial " ) ) { <nl> include_initial = v - > as_bool ( ) ; <nl> class changes_term_t : public op_term_t { <nl> changespec . keyspec . table_name , <nl> include_offsets , <nl> include_states , <nl> + include_types , <nl> limits , <nl> squash , <nl> std : : move ( changespec . keyspec . 
spec ) ) , <nl> class changes_term_t : public op_term_t { <nl> sel - > get_tbl ( ) - > display_name ( ) , <nl> include_offsets , <nl> include_states , <nl> + include_types , <nl> limits , <nl> squash , <nl> sel - > get_spec ( ) ) , <nl> mmm a / src / unittest / rdb_protocol . cc <nl> ppp b / src / unittest / rdb_protocol . cc <nl> TPTEST ( RDBProtocol , ArtificialChangefeeds ) { <nl> " test " , <nl> false , <nl> false , <nl> + false , <nl> ql : : configured_limits_t ( ) , <nl> ql : : datum_t : : boolean ( false ) , <nl> keyspec_t : : point_t { ql : : datum_t ( 0 . 0 ) } ) , <nl> TPTEST ( RDBProtocol , ArtificialChangefeeds ) { <nl> " test " , <nl> false , <nl> false , <nl> + false , <nl> ql : : configured_limits_t ( ) , <nl> ql : : datum_t : : boolean ( false ) , <nl> keyspec_t : : point_t { ql : : datum_t ( 10 . 0 ) } ) , <nl> TPTEST ( RDBProtocol , ArtificialChangefeeds ) { <nl> " test " , <nl> false , <nl> false , <nl> + false , <nl> ql : : configured_limits_t ( ) , <nl> ql : : datum_t : : boolean ( false ) , <nl> keyspec_t : : range_t { <nl> new file mode 100644 <nl> index 00000000000 . . a2c105bd637 <nl> mmm / dev / null <nl> ppp b / test / rql_test / changefeeds / types . yaml <nl> <nl> + desc : Test that types in a changefeed work as expected <nl> + table_variable_name : tbl <nl> + tests : <nl> + <nl> + - py : tbl . index_create ( ' num ' ) <nl> + - py : tbl . wait ( ) <nl> + <nl> + # Test all types on whole document changefeed <nl> + - py : tbl . insert ( { ' id ' : 1 } ) <nl> + ot : partial ( { ' inserted ' : 1 } ) <nl> + <nl> + - py : a = tbl . changes ( include_initial = True , include_states = True , include_types = True ) <nl> + - py : fetch ( a ) <nl> + ot : [ { ' state ' : ' initializing ' , ' type ' : ' state ' } , { ' new_val ' : { ' id ' : 1 } , ' type ' : ' initial ' } , { ' state ' : ' ready ' , ' type ' : ' state ' } ] <nl> + - py : tbl . 
insert ( { ' id ' : 2 } ) <nl> + - py : fetch ( a ) <nl> + ot : partial ( [ { ' type ' : ' add ' } ] ) <nl> + - py : tbl . delete ( ) <nl> + - py : fetch ( a ) <nl> + ot : partial ( [ { ' type ' : ' remove ' } ] ) <nl> + <nl> + - py : tbl . insert ( { ' id ' : 2 , ' num ' : 5 } ) <nl> + ot : partial ( { ' inserted ' : 1 } ) <nl> + - py : b = tbl . between ( 1 , 10 , index = " num " ) . changes ( include_initial = True , include_types = True ) <nl> + - py : tbl . get ( 2 ) . update ( { ' num ' : 666 } ) <nl> + - py : fetch ( b ) <nl> + ot : partial ( [ { ' type ' : ' initial ' } , { ' type ' : ' remove ' } ] ) <nl> + <nl> + # Test all types on row changefeed <nl> + <nl> + - py : tbl . delete ( ) <nl> + - py : tbl . insert ( { ' id ' : 1 , ' num ' : 1 } ) <nl> + ot : partial ( { ' inserted ' : 1 } ) <nl> + <nl> + - py : c = tbl . pluck ( ' num ' ) . changes ( include_initial = True , include_states = True , include_types = True ) <nl> + - py : fetch ( c ) <nl> + ot : [ { ' state ' : ' initializing ' , ' type ' : ' state ' } , { ' new_val ' : { ' num ' : 1 } , ' type ' : ' initial ' } , { ' state ' : ' ready ' , ' type ' : ' state ' } ] <nl> + - py : tbl . insert ( { ' id ' : 2 } ) <nl> + - py : fetch ( c ) <nl> + ot : partial ( [ { ' type ' : ' add ' } ] ) <nl> + - py : tbl . delete ( ) <nl> + - py : fetch ( c ) <nl> + ot : partial ( [ { ' type ' : ' remove ' } ] ) <nl> + <nl> + - py : tbl . insert ( { ' id ' : 2 , ' num ' : 5 } ) <nl> + ot : partial ( { ' inserted ' : 1 } ) <nl> + <nl> + # Test for point changefeed <nl> + - py : tbl . delete ( ) <nl> + - py : tbl . insert ( { " id " : 1 , " num " : 2 } ) <nl> + ot : partial ( { ' inserted ' : 1 } ) <nl> + - py : d = tbl . get ( 1 ) . changes ( include_types = True , include_initial = True ) <nl> + - py : fetch ( d ) <nl> + ot : partial ( [ { ' type ' : ' initial ' } ] ) <nl> + - py : tbl . get ( 1 ) . 
update ( { " num " : 42 } ) <nl> + ot : partial ( { ' replaced ' : 1 } ) <nl> + - py : fetch ( d ) <nl> + ot : partial ( [ { ' type ' : ' change ' } ] ) <nl> + - py : tbl . get ( 1 ) . delete ( ) <nl> + ot : partial ( { ' deleted ' : 1 } ) <nl> + - py : fetch ( d ) <nl> + ot : partial ( [ { ' type ' : ' remove ' } ] ) <nl> + <nl> + # Test for limit changefeed <nl> + - py : tbl . delete ( ) <nl> + - py : tbl . insert ( { " id " : 1 , " num " : 5 } ) <nl> + ot : partial ( { ' inserted ' : 1 } ) <nl> + - py : e = tbl . order_by ( index = " num " ) . limit ( 1 ) . changes ( include_types = True , include_initial = True ) <nl> + - py : fetch ( e , timeout = 3 ) <nl> + ot : [ { ' new_val ' : { ' id ' : 1 , ' num ' : 5 } , ' type ' : ' initial ' } ] <nl> + - py : tbl . insert ( { " id " : 2 , " num " : 1 } ) <nl> + ot : partial ( { ' inserted ' : 1 } ) <nl> + - py : fetch ( e , timeout = 3 ) <nl> + ot : [ { ' new_val ' : { ' id ' : 2 , ' num ' : 1 } , ' old_val ' : { ' id ' : 1 , ' num ' : 5 } , ' type ' : ' change ' } ] <nl> + - py : tbl . get ( 1 ) . delete ( ) <nl> + - py : tbl . get ( 2 ) . delete ( ) <nl> + ot : partial ( { ' deleted ' : 1 } ) <nl> + - py : fetch ( e , timeout = 3 ) <nl> + ot : [ { ' new_val ' : None , ' old_val ' : { ' id ' : 2 , ' num ' : 1 } , ' type ' : ' remove ' } ] <nl> + - py : f = tbl . get ( 12345 ) . changes ( include_initial = True , include_types = True ) <nl> + - py : fetch ( f ) <nl> + ot : [ { ' new_val ' : None , ' type ' : ' initial ' } ] <nl>
|
Changefeed types for 5188
|
rethinkdb/rethinkdb
|
5b38c3cfdfb11a9b2d6c2801f0b08f8b007285cc
|
2016-03-25T20:37:52Z
|
mmm a / src / proto / grpc / testing / BUILD <nl> ppp b / src / proto / grpc / testing / BUILD <nl> grpc_package ( <nl> exports_files ( [ <nl> " echo . proto " , <nl> " echo_messages . proto " , <nl> - " test . proto " , <nl> " empty . proto " , <nl> " messages . proto " , <nl> + " simple_messages . proto " , <nl> + " test . proto " , <nl> ] ) <nl> <nl> grpc_proto_library ( <nl>
|
Second file to match with internal changes
|
grpc/grpc
|
011078c009fe0847a8ce9b456223ded96b1d8319
|
2020-01-28T22:20:04Z
|
mmm a / hphp / hack / src / client / clientLsp . ml <nl> ppp b / hphp / hack / src / client / clientLsp . ml <nl> let get_document_contents <nl> string option = <nl> match SMap . find_opt uri editor_open_files with <nl> | Some document - > Some document . TextDocumentItem . text <nl> - | None - > <nl> - let rawpath = String_utils . lstrip uri " file : / / " in <nl> - ( try <nl> - let contents = Disk . cat rawpath in <nl> - Some contents <nl> - with _ - > None ) <nl> + | None - > None <nl> <nl> let get_document_location <nl> ( editor_open_files : Lsp . TextDocumentItem . t SMap . t ) <nl> mmm a / hphp / hack / src / client / ide_service / clientIdeDaemon . ml <nl> ppp b / hphp / hack / src / client / ide_service / clientIdeDaemon . ml <nl> let handle_message : <nl> ^ " should have waited for the IDE services to become ready before " <nl> ^ " sending file - change notifications . " ) ) <nl> | ( Initialized initialized_state , File_changed path ) - > <nl> - let changed_files_to_process = <nl> - Path . Set . add initialized_state . changed_files_to_process path <nl> - in <nl> - let peak_changed_files_queue_size = <nl> - initialized_state . peak_changed_files_queue_size + 1 <nl> - in <nl> - let ctx = <nl> - Provider_context . empty <nl> - ~ tcopt : initialized_state . server_env . ServerEnv . tcopt <nl> - in <nl> - let state = <nl> - Initialized <nl> - { <nl> - initialized_state with <nl> - changed_files_to_process ; <nl> - ctx ; <nl> - peak_changed_files_queue_size ; <nl> - } <nl> - in <nl> - Lwt . return ( state , Handle_message_result . Notification ) <nl> + ( * Only invalidate when a hack file changes * ) <nl> + if FindUtils . file_filter ( Path . to_string path ) then <nl> + let changed_files_to_process = <nl> + Path . Set . add initialized_state . changed_files_to_process path <nl> + in <nl> + let peak_changed_files_queue_size = <nl> + initialized_state . peak_changed_files_queue_size + 1 <nl> + in <nl> + let ctx = <nl> + Provider_context . 
empty <nl> + ~ tcopt : initialized_state . server_env . ServerEnv . tcopt <nl> + in <nl> + let state = <nl> + Initialized <nl> + { <nl> + initialized_state with <nl> + changed_files_to_process ; <nl> + ctx ; <nl> + peak_changed_files_queue_size ; <nl> + } <nl> + in <nl> + Lwt . return ( state , Handle_message_result . Notification ) <nl> + else <nl> + Lwt . return ( state , Handle_message_result . Notification ) <nl> | ( Initializing , Initialize_from_saved_state param ) - > <nl> let % lwt result = initialize param in <nl> begin <nl>
|
Fix IDE speed on IViewerContext
|
facebook/hhvm
|
b25d3dc3ddbe564851d49a697bc9c62304c6cde8
|
2019-11-24T07:22:49Z
|
mmm a / BUILD <nl> ppp b / BUILD <nl> grpc_cc_library ( <nl> " src / core / lib / iomgr / wakeup_fd_posix . cc " , <nl> " src / core / lib / json / json . cc " , <nl> " src / core / lib / json / json_reader . cc " , <nl> - " src / core / lib / json / json_string . cc " , <nl> " src / core / lib / json / json_writer . cc " , <nl> " src / core / lib / slice / b64 . cc " , <nl> " src / core / lib / slice / percent_encoding . cc " , <nl> grpc_cc_library ( <nl> " src / core / lib / iomgr / wakeup_fd_pipe . h " , <nl> " src / core / lib / iomgr / wakeup_fd_posix . h " , <nl> " src / core / lib / json / json . h " , <nl> - " src / core / lib / json / json_common . h " , <nl> - " src / core / lib / json / json_reader . h " , <nl> - " src / core / lib / json / json_writer . h " , <nl> " src / core / lib / slice / b64 . h " , <nl> " src / core / lib / slice / percent_encoding . h " , <nl> " src / core / lib / slice / slice_hash_table . h " , <nl> mmm a / BUILD . gn <nl> ppp b / BUILD . gn <nl> config ( " grpc_config " ) { <nl> " src / core / lib / iomgr / wakeup_fd_posix . h " , <nl> " src / core / lib / json / json . cc " , <nl> " src / core / lib / json / json . h " , <nl> - " src / core / lib / json / json_common . h " , <nl> " src / core / lib / json / json_reader . cc " , <nl> - " src / core / lib / json / json_reader . h " , <nl> - " src / core / lib / json / json_string . cc " , <nl> " src / core / lib / json / json_writer . cc " , <nl> - " src / core / lib / json / json_writer . h " , <nl> " src / core / lib / security / context / security_context . cc " , <nl> " src / core / lib / security / context / security_context . h " , <nl> " src / core / lib / security / credentials / alts / alts_credentials . cc " , <nl> config ( " grpc_config " ) { <nl> " src / core / lib / iomgr / wakeup_fd_posix . h " , <nl> " src / core / lib / json / json . cc " , <nl> " src / core / lib / json / json . h " , <nl> - " src / core / lib / json / json_common . 
h " , <nl> " src / core / lib / json / json_reader . cc " , <nl> - " src / core / lib / json / json_reader . h " , <nl> - " src / core / lib / json / json_string . cc " , <nl> " src / core / lib / json / json_writer . cc " , <nl> - " src / core / lib / json / json_writer . h " , <nl> " src / core / lib / profiling / timers . h " , <nl> " src / core / lib / slice / b64 . cc " , <nl> " src / core / lib / slice / b64 . h " , <nl> mmm a / CMakeLists . txt <nl> ppp b / CMakeLists . txt <nl> if ( gRPC_BUILD_TESTS ) <nl> add_dependencies ( buildtests_c init_test ) <nl> add_dependencies ( buildtests_c inproc_callback_test ) <nl> add_dependencies ( buildtests_c invalid_call_argument_test ) <nl> - add_dependencies ( buildtests_c json_rewrite ) <nl> - add_dependencies ( buildtests_c json_rewrite_test ) <nl> - add_dependencies ( buildtests_c json_stream_error_test ) <nl> add_dependencies ( buildtests_c json_test ) <nl> add_dependencies ( buildtests_c lame_client_test ) <nl> add_dependencies ( buildtests_c load_file_test ) <nl> add_library ( grpc <nl> src / core / lib / iomgr / wakeup_fd_posix . cc <nl> src / core / lib / json / json . cc <nl> src / core / lib / json / json_reader . cc <nl> - src / core / lib / json / json_string . cc <nl> src / core / lib / json / json_writer . cc <nl> src / core / lib / slice / b64 . cc <nl> src / core / lib / slice / percent_encoding . cc <nl> add_library ( grpc_cronet <nl> src / core / lib / iomgr / wakeup_fd_posix . cc <nl> src / core / lib / json / json . cc <nl> src / core / lib / json / json_reader . cc <nl> - src / core / lib / json / json_string . cc <nl> src / core / lib / json / json_writer . cc <nl> src / core / lib / slice / b64 . cc <nl> src / core / lib / slice / percent_encoding . cc <nl> add_library ( grpc_test_util <nl> src / core / lib / iomgr / wakeup_fd_posix . cc <nl> src / core / lib / json / json . cc <nl> src / core / lib / json / json_reader . cc <nl> - src / core / lib / json / json_string . 
cc <nl> src / core / lib / json / json_writer . cc <nl> src / core / lib / slice / b64 . cc <nl> src / core / lib / slice / percent_encoding . cc <nl> add_library ( grpc_test_util_unsecure <nl> src / core / lib / iomgr / wakeup_fd_posix . cc <nl> src / core / lib / json / json . cc <nl> src / core / lib / json / json_reader . cc <nl> - src / core / lib / json / json_string . cc <nl> src / core / lib / json / json_writer . cc <nl> src / core / lib / slice / b64 . cc <nl> src / core / lib / slice / percent_encoding . cc <nl> add_library ( grpc_unsecure <nl> src / core / lib / iomgr / wakeup_fd_posix . cc <nl> src / core / lib / json / json . cc <nl> src / core / lib / json / json_reader . cc <nl> - src / core / lib / json / json_string . cc <nl> src / core / lib / json / json_writer . cc <nl> src / core / lib / slice / b64 . cc <nl> src / core / lib / slice / percent_encoding . cc <nl> add_library ( grpc + + <nl> src / core / lib / iomgr / wakeup_fd_posix . cc <nl> src / core / lib / json / json . cc <nl> src / core / lib / json / json_reader . cc <nl> - src / core / lib / json / json_string . cc <nl> src / core / lib / json / json_writer . cc <nl> src / core / lib / slice / b64 . cc <nl> src / core / lib / slice / percent_encoding . cc <nl> add_library ( grpc + + _unsecure <nl> src / core / lib / iomgr / wakeup_fd_posix . cc <nl> src / core / lib / json / json . cc <nl> src / core / lib / json / json_reader . cc <nl> - src / core / lib / json / json_string . cc <nl> src / core / lib / json / json_writer . cc <nl> src / core / lib / slice / b64 . cc <nl> src / core / lib / slice / percent_encoding . cc <nl> target_link_libraries ( invalid_call_argument_test <nl> ) <nl> <nl> <nl> - endif ( ) <nl> - if ( gRPC_BUILD_TESTS ) <nl> - <nl> - add_executable ( json_rewrite <nl> - test / core / json / json_rewrite . 
cc <nl> - ) <nl> - <nl> - target_include_directories ( json_rewrite <nl> - PRIVATE <nl> - $ { CMAKE_CURRENT_SOURCE_DIR } <nl> - $ { CMAKE_CURRENT_SOURCE_DIR } / include <nl> - $ { _gRPC_ADDRESS_SORTING_INCLUDE_DIR } <nl> - $ { _gRPC_SSL_INCLUDE_DIR } <nl> - $ { _gRPC_UPB_GENERATED_DIR } <nl> - $ { _gRPC_UPB_GRPC_GENERATED_DIR } <nl> - $ { _gRPC_UPB_INCLUDE_DIR } <nl> - $ { _gRPC_ZLIB_INCLUDE_DIR } <nl> - ) <nl> - <nl> - target_link_libraries ( json_rewrite <nl> - $ { _gRPC_ALLTARGETS_LIBRARIES } <nl> - grpc_test_util <nl> - grpc <nl> - gpr <nl> - ) <nl> - <nl> - <nl> - endif ( ) <nl> - if ( gRPC_BUILD_TESTS ) <nl> - <nl> - add_executable ( json_rewrite_test <nl> - test / core / json / json_rewrite_test . cc <nl> - ) <nl> - <nl> - target_include_directories ( json_rewrite_test <nl> - PRIVATE <nl> - $ { CMAKE_CURRENT_SOURCE_DIR } <nl> - $ { CMAKE_CURRENT_SOURCE_DIR } / include <nl> - $ { _gRPC_ADDRESS_SORTING_INCLUDE_DIR } <nl> - $ { _gRPC_SSL_INCLUDE_DIR } <nl> - $ { _gRPC_UPB_GENERATED_DIR } <nl> - $ { _gRPC_UPB_GRPC_GENERATED_DIR } <nl> - $ { _gRPC_UPB_INCLUDE_DIR } <nl> - $ { _gRPC_ZLIB_INCLUDE_DIR } <nl> - ) <nl> - <nl> - target_link_libraries ( json_rewrite_test <nl> - $ { _gRPC_ALLTARGETS_LIBRARIES } <nl> - grpc_test_util <nl> - grpc <nl> - gpr <nl> - ) <nl> - <nl> - <nl> - endif ( ) <nl> - if ( gRPC_BUILD_TESTS ) <nl> - <nl> - add_executable ( json_stream_error_test <nl> - test / core / json / json_stream_error_test . 
cc <nl> - ) <nl> - <nl> - target_include_directories ( json_stream_error_test <nl> - PRIVATE <nl> - $ { CMAKE_CURRENT_SOURCE_DIR } <nl> - $ { CMAKE_CURRENT_SOURCE_DIR } / include <nl> - $ { _gRPC_ADDRESS_SORTING_INCLUDE_DIR } <nl> - $ { _gRPC_SSL_INCLUDE_DIR } <nl> - $ { _gRPC_UPB_GENERATED_DIR } <nl> - $ { _gRPC_UPB_GRPC_GENERATED_DIR } <nl> - $ { _gRPC_UPB_INCLUDE_DIR } <nl> - $ { _gRPC_ZLIB_INCLUDE_DIR } <nl> - ) <nl> - <nl> - target_link_libraries ( json_stream_error_test <nl> - $ { _gRPC_ALLTARGETS_LIBRARIES } <nl> - grpc_test_util <nl> - grpc <nl> - gpr <nl> - ) <nl> - <nl> - <nl> endif ( ) <nl> if ( gRPC_BUILD_TESTS ) <nl> <nl> mmm a / Makefile <nl> ppp b / Makefile <nl> init_test : $ ( BINDIR ) / $ ( CONFIG ) / init_test <nl> inproc_callback_test : $ ( BINDIR ) / $ ( CONFIG ) / inproc_callback_test <nl> invalid_call_argument_test : $ ( BINDIR ) / $ ( CONFIG ) / invalid_call_argument_test <nl> json_fuzzer_test : $ ( BINDIR ) / $ ( CONFIG ) / json_fuzzer_test <nl> - json_rewrite : $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite <nl> - json_rewrite_test : $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite_test <nl> - json_stream_error_test : $ ( BINDIR ) / $ ( CONFIG ) / json_stream_error_test <nl> json_test : $ ( BINDIR ) / $ ( CONFIG ) / json_test <nl> lame_client_test : $ ( BINDIR ) / $ ( CONFIG ) / lame_client_test <nl> load_file_test : $ ( BINDIR ) / $ ( CONFIG ) / load_file_test <nl> buildtests_c : privatelibs_c \ <nl> $ ( BINDIR ) / $ ( CONFIG ) / init_test \ <nl> $ ( BINDIR ) / $ ( CONFIG ) / inproc_callback_test \ <nl> $ ( BINDIR ) / $ ( CONFIG ) / invalid_call_argument_test \ <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite \ <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite_test \ <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_stream_error_test \ <nl> $ ( BINDIR ) / $ ( CONFIG ) / json_test \ <nl> $ ( BINDIR ) / $ ( CONFIG ) / lame_client_test \ <nl> $ ( BINDIR ) / $ ( CONFIG ) / load_file_test \ <nl> test_c : buildtests_c <nl> $ ( Q ) $ ( BINDIR ) / $ ( CONFIG ) / 
inproc_callback_test | | ( echo test inproc_callback_test failed ; exit 1 ) <nl> $ ( E ) " [ RUN ] Testing invalid_call_argument_test " <nl> $ ( Q ) $ ( BINDIR ) / $ ( CONFIG ) / invalid_call_argument_test | | ( echo test invalid_call_argument_test failed ; exit 1 ) <nl> - $ ( E ) " [ RUN ] Testing json_rewrite_test " <nl> - $ ( Q ) $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite_test | | ( echo test json_rewrite_test failed ; exit 1 ) <nl> - $ ( E ) " [ RUN ] Testing json_stream_error_test " <nl> - $ ( Q ) $ ( BINDIR ) / $ ( CONFIG ) / json_stream_error_test | | ( echo test json_stream_error_test failed ; exit 1 ) <nl> $ ( E ) " [ RUN ] Testing json_test " <nl> $ ( Q ) $ ( BINDIR ) / $ ( CONFIG ) / json_test | | ( echo test json_test failed ; exit 1 ) <nl> $ ( E ) " [ RUN ] Testing lame_client_test " <nl> LIBGRPC_SRC = \ <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / percent_encoding . cc \ <nl> LIBGRPC_CRONET_SRC = \ <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / percent_encoding . cc \ <nl> LIBGRPC_TEST_UTIL_SRC = \ <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / percent_encoding . 
cc \ <nl> LIBGRPC_TEST_UTIL_UNSECURE_SRC = \ <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / percent_encoding . cc \ <nl> LIBGRPC_UNSECURE_SRC = \ <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / percent_encoding . cc \ <nl> LIBGRPC + + _SRC = \ <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / percent_encoding . cc \ <nl> LIBGRPC + + _UNSECURE_SRC = \ <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / percent_encoding . cc \ <nl> endif <nl> endif <nl> <nl> <nl> - JSON_REWRITE_SRC = \ <nl> - test / core / json / json_rewrite . cc \ <nl> - <nl> - JSON_REWRITE_OBJS = $ ( addprefix $ ( OBJDIR ) / $ ( CONFIG ) / , $ ( addsuffix . o , $ ( basename $ ( JSON_REWRITE_SRC ) ) ) ) <nl> - ifeq ( $ ( NO_SECURE ) , true ) <nl> - <nl> - # You can ' t build secure targets if you don ' t have OpenSSL . 
<nl> - <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite : openssl_dep_error <nl> - <nl> - else <nl> - <nl> - <nl> - <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite : $ ( JSON_REWRITE_OBJS ) $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . a <nl> - $ ( E ) " [ LD ] Linking $ @ " <nl> - $ ( Q ) mkdir - p ` dirname $ @ ` <nl> - $ ( Q ) $ ( LDXX ) $ ( LDFLAGS ) $ ( JSON_REWRITE_OBJS ) $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . a $ ( LDLIBS ) $ ( LDLIBS_SECURE ) - o $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite <nl> - <nl> - endif <nl> - <nl> - $ ( OBJDIR ) / $ ( CONFIG ) / test / core / json / json_rewrite . o : $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . a <nl> - <nl> - deps_json_rewrite : $ ( JSON_REWRITE_OBJS : . o = . dep ) <nl> - <nl> - ifneq ( $ ( NO_SECURE ) , true ) <nl> - ifneq ( $ ( NO_DEPS ) , true ) <nl> - - include $ ( JSON_REWRITE_OBJS : . o = . dep ) <nl> - endif <nl> - endif <nl> - <nl> - <nl> - JSON_REWRITE_TEST_SRC = \ <nl> - test / core / json / json_rewrite_test . cc \ <nl> - <nl> - JSON_REWRITE_TEST_OBJS = $ ( addprefix $ ( OBJDIR ) / $ ( CONFIG ) / , $ ( addsuffix . o , $ ( basename $ ( JSON_REWRITE_TEST_SRC ) ) ) ) <nl> - ifeq ( $ ( NO_SECURE ) , true ) <nl> - <nl> - # You can ' t build secure targets if you don ' t have OpenSSL . <nl> - <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite_test : openssl_dep_error <nl> - <nl> - else <nl> - <nl> - <nl> - <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite_test : $ ( JSON_REWRITE_TEST_OBJS ) $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . 
a <nl> - $ ( E ) " [ LD ] Linking $ @ " <nl> - $ ( Q ) mkdir - p ` dirname $ @ ` <nl> - $ ( Q ) $ ( LDXX ) $ ( LDFLAGS ) $ ( JSON_REWRITE_TEST_OBJS ) $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . a $ ( LDLIBS ) $ ( LDLIBS_SECURE ) - o $ ( BINDIR ) / $ ( CONFIG ) / json_rewrite_test <nl> - <nl> - endif <nl> - <nl> - $ ( OBJDIR ) / $ ( CONFIG ) / test / core / json / json_rewrite_test . o : $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . a <nl> - <nl> - deps_json_rewrite_test : $ ( JSON_REWRITE_TEST_OBJS : . o = . dep ) <nl> - <nl> - ifneq ( $ ( NO_SECURE ) , true ) <nl> - ifneq ( $ ( NO_DEPS ) , true ) <nl> - - include $ ( JSON_REWRITE_TEST_OBJS : . o = . dep ) <nl> - endif <nl> - endif <nl> - <nl> - <nl> - JSON_STREAM_ERROR_TEST_SRC = \ <nl> - test / core / json / json_stream_error_test . cc \ <nl> - <nl> - JSON_STREAM_ERROR_TEST_OBJS = $ ( addprefix $ ( OBJDIR ) / $ ( CONFIG ) / , $ ( addsuffix . o , $ ( basename $ ( JSON_STREAM_ERROR_TEST_SRC ) ) ) ) <nl> - ifeq ( $ ( NO_SECURE ) , true ) <nl> - <nl> - # You can ' t build secure targets if you don ' t have OpenSSL . <nl> - <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_stream_error_test : openssl_dep_error <nl> - <nl> - else <nl> - <nl> - <nl> - <nl> - $ ( BINDIR ) / $ ( CONFIG ) / json_stream_error_test : $ ( JSON_STREAM_ERROR_TEST_OBJS ) $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . a <nl> - $ ( E ) " [ LD ] Linking $ @ " <nl> - $ ( Q ) mkdir - p ` dirname $ @ ` <nl> - $ ( Q ) $ ( LDXX ) $ ( LDFLAGS ) $ ( JSON_STREAM_ERROR_TEST_OBJS ) $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . 
a $ ( LDLIBS ) $ ( LDLIBS_SECURE ) - o $ ( BINDIR ) / $ ( CONFIG ) / json_stream_error_test <nl> - <nl> - endif <nl> - <nl> - $ ( OBJDIR ) / $ ( CONFIG ) / test / core / json / json_stream_error_test . o : $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc_test_util . a $ ( LIBDIR ) / $ ( CONFIG ) / libgrpc . a $ ( LIBDIR ) / $ ( CONFIG ) / libgpr . a <nl> - <nl> - deps_json_stream_error_test : $ ( JSON_STREAM_ERROR_TEST_OBJS : . o = . dep ) <nl> - <nl> - ifneq ( $ ( NO_SECURE ) , true ) <nl> - ifneq ( $ ( NO_DEPS ) , true ) <nl> - - include $ ( JSON_STREAM_ERROR_TEST_OBJS : . o = . dep ) <nl> - endif <nl> - endif <nl> - <nl> - <nl> JSON_TEST_SRC = \ <nl> test / core / json / json_test . cc \ <nl> <nl> mmm a / build . yaml <nl> ppp b / build . yaml <nl> filegroups : <nl> - src / core / lib / iomgr / wakeup_fd_posix . cc <nl> - src / core / lib / json / json . cc <nl> - src / core / lib / json / json_reader . cc <nl> - - src / core / lib / json / json_string . cc <nl> - src / core / lib / json / json_writer . cc <nl> - src / core / lib / slice / b64 . cc <nl> - src / core / lib / slice / percent_encoding . cc <nl> filegroups : <nl> - src / core / lib / iomgr / wakeup_fd_pipe . h <nl> - src / core / lib / iomgr / wakeup_fd_posix . h <nl> - src / core / lib / json / json . h <nl> - - src / core / lib / json / json_common . h <nl> - - src / core / lib / json / json_reader . h <nl> - - src / core / lib / json / json_writer . h <nl> - src / core / lib / slice / b64 . h <nl> - src / core / lib / slice / percent_encoding . h <nl> - src / core / lib / slice / slice_hash_table . h <nl> targets : <nl> corpus_dirs : <nl> - test / core / json / corpus <nl> maxlen : 512 <nl> - - name : json_rewrite <nl> - build : test <nl> - run : false <nl> - language : c <nl> - src : <nl> - - test / core / json / json_rewrite . 
cc <nl> - deps : <nl> - - grpc_test_util <nl> - - grpc <nl> - - gpr <nl> - uses_polling : false <nl> - - name : json_rewrite_test <nl> - build : test <nl> - language : c <nl> - src : <nl> - - test / core / json / json_rewrite_test . cc <nl> - deps : <nl> - - grpc_test_util <nl> - - grpc <nl> - - gpr <nl> - uses_polling : false <nl> - - name : json_stream_error_test <nl> - build : test <nl> - language : c <nl> - src : <nl> - - test / core / json / json_stream_error_test . cc <nl> - deps : <nl> - - grpc_test_util <nl> - - grpc <nl> - - gpr <nl> - uses_polling : false <nl> - name : json_test <nl> build : test <nl> language : c <nl> mmm a / config . m4 <nl> ppp b / config . m4 <nl> if test " $ PHP_GRPC " ! = " no " ; then <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> src / core / lib / profiling / basic_timers . cc \ <nl> src / core / lib / profiling / stap_timers . cc \ <nl> mmm a / config . w32 <nl> ppp b / config . w32 <nl> if ( PHP_GRPC ! = " no " ) { <nl> " src \ \ core \ \ lib \ \ iomgr \ \ wakeup_fd_posix . cc " + <nl> " src \ \ core \ \ lib \ \ json \ \ json . cc " + <nl> " src \ \ core \ \ lib \ \ json \ \ json_reader . cc " + <nl> - " src \ \ core \ \ lib \ \ json \ \ json_string . cc " + <nl> " src \ \ core \ \ lib \ \ json \ \ json_writer . cc " + <nl> " src \ \ core \ \ lib \ \ slice \ \ b64 . cc " + <nl> " src \ \ core \ \ lib \ \ slice \ \ percent_encoding . cc " + <nl> mmm a / gRPC - C + + . podspec <nl> ppp b / gRPC - C + + . podspec <nl> Pod : : Spec . new do | s | <nl> ' src / core / lib / iomgr / wakeup_fd_pipe . h ' , <nl> ' src / core / lib / iomgr / wakeup_fd_posix . h ' , <nl> ' src / core / lib / json / json . h ' , <nl> - ' src / core / lib / json / json_common . h ' , <nl> - ' src / core / lib / json / json_reader . 
h ' , <nl> - ' src / core / lib / json / json_writer . h ' , <nl> ' src / core / lib / profiling / timers . h ' , <nl> ' src / core / lib / security / context / security_context . h ' , <nl> ' src / core / lib / security / credentials / alts / alts_credentials . h ' , <nl> Pod : : Spec . new do | s | <nl> ' src / core / lib / iomgr / wakeup_fd_pipe . h ' , <nl> ' src / core / lib / iomgr / wakeup_fd_posix . h ' , <nl> ' src / core / lib / json / json . h ' , <nl> - ' src / core / lib / json / json_common . h ' , <nl> - ' src / core / lib / json / json_reader . h ' , <nl> - ' src / core / lib / json / json_writer . h ' , <nl> ' src / core / lib / profiling / timers . h ' , <nl> ' src / core / lib / slice / b64 . h ' , <nl> ' src / core / lib / slice / percent_encoding . h ' , <nl> Pod : : Spec . new do | s | <nl> ' src / core / lib / iomgr / wakeup_fd_pipe . h ' , <nl> ' src / core / lib / iomgr / wakeup_fd_posix . h ' , <nl> ' src / core / lib / json / json . h ' , <nl> - ' src / core / lib / json / json_common . h ' , <nl> - ' src / core / lib / json / json_reader . h ' , <nl> - ' src / core / lib / json / json_writer . h ' , <nl> ' src / core / lib / profiling / timers . h ' , <nl> ' src / core / lib / security / context / security_context . h ' , <nl> ' src / core / lib / security / credentials / alts / alts_credentials . h ' , <nl> mmm a / gRPC - Core . podspec <nl> ppp b / gRPC - Core . podspec <nl> Pod : : Spec . new do | s | <nl> ' src / core / lib / iomgr / wakeup_fd_posix . h ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json . h ' , <nl> - ' src / core / lib / json / json_common . h ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_reader . h ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> - ' src / core / lib / json / json_writer . h ' , <nl> ' src / core / lib / profiling / basic_timers . 
cc ' , <nl> ' src / core / lib / profiling / stap_timers . cc ' , <nl> ' src / core / lib / profiling / timers . h ' , <nl> Pod : : Spec . new do | s | <nl> ' src / core / lib / iomgr / wakeup_fd_pipe . h ' , <nl> ' src / core / lib / iomgr / wakeup_fd_posix . h ' , <nl> ' src / core / lib / json / json . h ' , <nl> - ' src / core / lib / json / json_common . h ' , <nl> - ' src / core / lib / json / json_reader . h ' , <nl> - ' src / core / lib / json / json_writer . h ' , <nl> ' src / core / lib / profiling / timers . h ' , <nl> ' src / core / lib / security / context / security_context . h ' , <nl> ' src / core / lib / security / credentials / alts / alts_credentials . h ' , <nl> mmm a / grpc . gemspec <nl> ppp b / grpc . gemspec <nl> Gem : : Specification . new do | s | <nl> s . files + = % w ( src / core / lib / iomgr / wakeup_fd_posix . h ) <nl> s . files + = % w ( src / core / lib / json / json . cc ) <nl> s . files + = % w ( src / core / lib / json / json . h ) <nl> - s . files + = % w ( src / core / lib / json / json_common . h ) <nl> s . files + = % w ( src / core / lib / json / json_reader . cc ) <nl> - s . files + = % w ( src / core / lib / json / json_reader . h ) <nl> - s . files + = % w ( src / core / lib / json / json_string . cc ) <nl> s . files + = % w ( src / core / lib / json / json_writer . cc ) <nl> - s . files + = % w ( src / core / lib / json / json_writer . h ) <nl> s . files + = % w ( src / core / lib / profiling / basic_timers . cc ) <nl> s . files + = % w ( src / core / lib / profiling / stap_timers . cc ) <nl> s . files + = % w ( src / core / lib / profiling / timers . h ) <nl> mmm a / grpc . gyp <nl> ppp b / grpc . gyp <nl> <nl> ' src / core / lib / iomgr / wakeup_fd_posix . cc ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> ' src / core / lib / slice / b64 . 
cc ' , <nl> ' src / core / lib / slice / percent_encoding . cc ' , <nl> <nl> ' src / core / lib / iomgr / wakeup_fd_posix . cc ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> ' src / core / lib / slice / b64 . cc ' , <nl> ' src / core / lib / slice / percent_encoding . cc ' , <nl> <nl> ' src / core / lib / iomgr / wakeup_fd_posix . cc ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> ' src / core / lib / slice / b64 . cc ' , <nl> ' src / core / lib / slice / percent_encoding . cc ' , <nl> <nl> ' src / core / lib / iomgr / wakeup_fd_posix . cc ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> ' src / core / lib / slice / b64 . cc ' , <nl> ' src / core / lib / slice / percent_encoding . cc ' , <nl> <nl> ' src / core / lib / iomgr / wakeup_fd_posix . cc ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> ' src / core / lib / slice / b64 . cc ' , <nl> ' src / core / lib / slice / percent_encoding . cc ' , <nl> <nl> ' src / core / lib / iomgr / wakeup_fd_posix . cc ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> ' src / core / lib / slice / b64 . cc ' , <nl> ' src / core / lib / slice / percent_encoding . cc ' , <nl> mmm a / package . 
xml <nl> ppp b / package . xml <nl> <nl> < file baseinstalldir = " / " name = " src / core / lib / iomgr / wakeup_fd_posix . h " role = " src " / > <nl> < file baseinstalldir = " / " name = " src / core / lib / json / json . cc " role = " src " / > <nl> < file baseinstalldir = " / " name = " src / core / lib / json / json . h " role = " src " / > <nl> - < file baseinstalldir = " / " name = " src / core / lib / json / json_common . h " role = " src " / > <nl> < file baseinstalldir = " / " name = " src / core / lib / json / json_reader . cc " role = " src " / > <nl> - < file baseinstalldir = " / " name = " src / core / lib / json / json_reader . h " role = " src " / > <nl> - < file baseinstalldir = " / " name = " src / core / lib / json / json_string . cc " role = " src " / > <nl> < file baseinstalldir = " / " name = " src / core / lib / json / json_writer . cc " role = " src " / > <nl> - < file baseinstalldir = " / " name = " src / core / lib / json / json_writer . h " role = " src " / > <nl> < file baseinstalldir = " / " name = " src / core / lib / profiling / basic_timers . cc " role = " src " / > <nl> < file baseinstalldir = " / " name = " src / core / lib / profiling / stap_timers . cc " role = " src " / > <nl> < file baseinstalldir = " / " name = " src / core / lib / profiling / timers . h " role = " src " / > <nl> mmm a / src / core / lib / json / json . h <nl> ppp b / src / core / lib / json / json . h <nl> <nl> # include < stdbool . h > <nl> # include < stdlib . h > <nl> <nl> - # include " src / core / lib / json / json_common . h " <nl> + / * The various json types . * / <nl> + typedef enum { <nl> + GRPC_JSON_OBJECT , <nl> + GRPC_JSON_ARRAY , <nl> + GRPC_JSON_STRING , <nl> + GRPC_JSON_NUMBER , <nl> + GRPC_JSON_TRUE , <nl> + GRPC_JSON_FALSE , <nl> + GRPC_JSON_NULL , <nl> + GRPC_JSON_TOP_LEVEL <nl> + } grpc_json_type ; <nl> <nl> / * A tree - like structure to hold json values . The key and value pointers <nl> * are not owned by it . 
<nl> deleted file mode 100644 <nl> index bfa2ba32bea . . 00000000000 <nl> mmm a / src / core / lib / json / json_common . h <nl> ppp / dev / null <nl> <nl> - / * <nl> - * <nl> - * Copyright 2015 gRPC authors . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * <nl> - * / <nl> - <nl> - # ifndef GRPC_CORE_LIB_JSON_JSON_COMMON_H <nl> - # define GRPC_CORE_LIB_JSON_JSON_COMMON_H <nl> - <nl> - / * The various json types . * / <nl> - typedef enum { <nl> - GRPC_JSON_OBJECT , <nl> - GRPC_JSON_ARRAY , <nl> - GRPC_JSON_STRING , <nl> - GRPC_JSON_NUMBER , <nl> - GRPC_JSON_TRUE , <nl> - GRPC_JSON_FALSE , <nl> - GRPC_JSON_NULL , <nl> - GRPC_JSON_TOP_LEVEL <nl> - } grpc_json_type ; <nl> - <nl> - # endif / * GRPC_CORE_LIB_JSON_JSON_COMMON_H * / <nl> mmm a / src / core / lib / json / json_reader . cc <nl> ppp b / src / core / lib / json / json_reader . cc <nl> <nl> <nl> # include < grpc / support / log . h > <nl> <nl> - # include " src / core / lib / json / json_reader . h " <nl> + # include " src / core / lib / json / json . 
h " <nl> + <nl> + typedef enum { <nl> + GRPC_JSON_STATE_OBJECT_KEY_BEGIN , <nl> + GRPC_JSON_STATE_OBJECT_KEY_STRING , <nl> + GRPC_JSON_STATE_OBJECT_KEY_END , <nl> + GRPC_JSON_STATE_VALUE_BEGIN , <nl> + GRPC_JSON_STATE_VALUE_STRING , <nl> + GRPC_JSON_STATE_STRING_ESCAPE , <nl> + GRPC_JSON_STATE_STRING_ESCAPE_U1 , <nl> + GRPC_JSON_STATE_STRING_ESCAPE_U2 , <nl> + GRPC_JSON_STATE_STRING_ESCAPE_U3 , <nl> + GRPC_JSON_STATE_STRING_ESCAPE_U4 , <nl> + GRPC_JSON_STATE_VALUE_NUMBER , <nl> + GRPC_JSON_STATE_VALUE_NUMBER_WITH_DECIMAL , <nl> + GRPC_JSON_STATE_VALUE_NUMBER_ZERO , <nl> + GRPC_JSON_STATE_VALUE_NUMBER_DOT , <nl> + GRPC_JSON_STATE_VALUE_NUMBER_E , <nl> + GRPC_JSON_STATE_VALUE_NUMBER_EPM , <nl> + GRPC_JSON_STATE_VALUE_TRUE_R , <nl> + GRPC_JSON_STATE_VALUE_TRUE_U , <nl> + GRPC_JSON_STATE_VALUE_TRUE_E , <nl> + GRPC_JSON_STATE_VALUE_FALSE_A , <nl> + GRPC_JSON_STATE_VALUE_FALSE_L , <nl> + GRPC_JSON_STATE_VALUE_FALSE_S , <nl> + GRPC_JSON_STATE_VALUE_FALSE_E , <nl> + GRPC_JSON_STATE_VALUE_NULL_U , <nl> + GRPC_JSON_STATE_VALUE_NULL_L1 , <nl> + GRPC_JSON_STATE_VALUE_NULL_L2 , <nl> + GRPC_JSON_STATE_VALUE_END , <nl> + GRPC_JSON_STATE_END <nl> + } grpc_json_reader_state ; <nl> + <nl> + enum { <nl> + / * The first non - unicode value is 0x110000 . But let ' s pick <nl> + * a value high enough to start our error codes from . These <nl> + * values are safe to return from the read_char function . <nl> + * / <nl> + GRPC_JSON_READ_CHAR_EOF = 0x7ffffff0 , <nl> + GRPC_JSON_READ_CHAR_EAGAIN , <nl> + GRPC_JSON_READ_CHAR_ERROR <nl> + } ; <nl> + <nl> + typedef struct grpc_json_reader { <nl> + / * That structure is fully private , and initialized by grpc_json_reader_init . <nl> + * The definition is public so you can put it on your stack . 
<nl> + * / <nl> + <nl> + int depth ; <nl> + int in_object ; <nl> + int in_array ; <nl> + int escaped_string_was_key ; <nl> + int container_just_begun ; <nl> + uint16_t unicode_char , unicode_high_surrogate ; <nl> + grpc_json_reader_state state ; <nl> + <nl> + grpc_json * top ; <nl> + grpc_json * current_container ; <nl> + grpc_json * current_value ; <nl> + uint8_t * input ; <nl> + uint8_t * key ; <nl> + uint8_t * string ; <nl> + uint8_t * string_ptr ; <nl> + size_t remaining_input ; <nl> + } grpc_json_reader ; <nl> + <nl> + / * The return type of the parser . * / <nl> + typedef enum { <nl> + GRPC_JSON_DONE , / * The parser finished successfully . * / <nl> + GRPC_JSON_EAGAIN , / * The parser yields to get more data . * / <nl> + GRPC_JSON_READ_ERROR , / * The parser passes through a read error . * / <nl> + GRPC_JSON_PARSE_ERROR , / * The parser found an error in the json stream . * / <nl> + GRPC_JSON_INTERNAL_ERROR / * The parser got an internal error . * / <nl> + } grpc_json_reader_status ; <nl> <nl> static void json_reader_string_clear ( grpc_json_reader * reader ) { <nl> - reader - > vtable - > string_clear ( reader - > userdata ) ; <nl> + if ( reader - > string ) { <nl> + GPR_ASSERT ( reader - > string_ptr < reader - > input ) ; <nl> + * reader - > string_ptr + + = 0 ; <nl> + } <nl> + reader - > string = reader - > string_ptr ; <nl> } <nl> <nl> static void json_reader_string_add_char ( grpc_json_reader * reader , uint32_t c ) { <nl> - reader - > vtable - > string_add_char ( reader - > userdata , c ) ; <nl> + GPR_ASSERT ( reader - > string_ptr < reader - > input ) ; <nl> + GPR_ASSERT ( c < = 0xff ) ; <nl> + * reader - > string_ptr + + = static_cast < uint8_t > ( c ) ; <nl> } <nl> <nl> - static void json_reader_string_add_utf32 ( grpc_json_reader * reader , <nl> - uint32_t utf32 ) { <nl> - reader - > vtable - > string_add_utf32 ( reader - > userdata , utf32 ) ; <nl> + static void json_reader_string_add_utf32 ( grpc_json_reader * reader , uint32_t c ) { <nl> + if ( 
c < = 0x7f ) { <nl> + json_reader_string_add_char ( reader , c ) ; <nl> + } else if ( c < = 0x7ff ) { <nl> + uint32_t b1 = 0xc0 | ( ( c > > 6 ) & 0x1f ) ; <nl> + uint32_t b2 = 0x80 | ( c & 0x3f ) ; <nl> + json_reader_string_add_char ( reader , b1 ) ; <nl> + json_reader_string_add_char ( reader , b2 ) ; <nl> + } else if ( c < = 0xffff ) { <nl> + uint32_t b1 = 0xe0 | ( ( c > > 12 ) & 0x0f ) ; <nl> + uint32_t b2 = 0x80 | ( ( c > > 6 ) & 0x3f ) ; <nl> + uint32_t b3 = 0x80 | ( c & 0x3f ) ; <nl> + json_reader_string_add_char ( reader , b1 ) ; <nl> + json_reader_string_add_char ( reader , b2 ) ; <nl> + json_reader_string_add_char ( reader , b3 ) ; <nl> + } else if ( c < = 0x1fffff ) { <nl> + uint32_t b1 = 0xf0 | ( ( c > > 18 ) & 0x07 ) ; <nl> + uint32_t b2 = 0x80 | ( ( c > > 12 ) & 0x3f ) ; <nl> + uint32_t b3 = 0x80 | ( ( c > > 6 ) & 0x3f ) ; <nl> + uint32_t b4 = 0x80 | ( c & 0x3f ) ; <nl> + json_reader_string_add_char ( reader , b1 ) ; <nl> + json_reader_string_add_char ( reader , b2 ) ; <nl> + json_reader_string_add_char ( reader , b3 ) ; <nl> + json_reader_string_add_char ( reader , b4 ) ; <nl> + } <nl> } <nl> <nl> static uint32_t grpc_json_reader_read_char ( grpc_json_reader * reader ) { <nl> - return reader - > vtable - > read_char ( reader - > userdata ) ; <nl> + if ( reader - > remaining_input = = 0 ) return GRPC_JSON_READ_CHAR_EOF ; <nl> + uint32_t r = * reader - > input + + ; <nl> + reader - > remaining_input - - ; <nl> + if ( r = = 0 ) { <nl> + reader - > remaining_input = 0 ; <nl> + return GRPC_JSON_READ_CHAR_EOF ; <nl> + } <nl> + return r ; <nl> + } <nl> + <nl> + / * Helper function to create a new grpc_json object and link it into <nl> + * our tree - in - progress inside our opaque structure . 
<nl> + * / <nl> + static grpc_json * json_create_and_link ( grpc_json_reader * reader , <nl> + grpc_json_type type ) { <nl> + grpc_json * json = grpc_json_create ( type ) ; <nl> + json - > parent = reader - > current_container ; <nl> + json - > prev = reader - > current_value ; <nl> + reader - > current_value = json ; <nl> + if ( json - > prev ) { <nl> + json - > prev - > next = json ; <nl> + } <nl> + if ( json - > parent ) { <nl> + if ( ! json - > parent - > child ) { <nl> + json - > parent - > child = json ; <nl> + } <nl> + if ( json - > parent - > type = = GRPC_JSON_OBJECT ) { <nl> + json - > key = reinterpret_cast < char * > ( reader - > key ) ; <nl> + } <nl> + } <nl> + if ( ! reader - > top ) { <nl> + reader - > top = json ; <nl> + } <nl> + return json ; <nl> } <nl> <nl> static void json_reader_container_begins ( grpc_json_reader * reader , <nl> grpc_json_type type ) { <nl> - reader - > vtable - > container_begins ( reader - > userdata , type ) ; <nl> + GPR_ASSERT ( type = = GRPC_JSON_ARRAY | | type = = GRPC_JSON_OBJECT ) ; <nl> + grpc_json * container = json_create_and_link ( reader , type ) ; <nl> + reader - > current_container = container ; <nl> + reader - > current_value = nullptr ; <nl> } <nl> <nl> static grpc_json_type grpc_json_reader_container_ends ( <nl> grpc_json_reader * reader ) { <nl> - return reader - > vtable - > container_ends ( reader - > userdata ) ; <nl> + grpc_json_type container_type = GRPC_JSON_TOP_LEVEL ; <nl> + GPR_ASSERT ( reader - > current_container ) ; <nl> + reader - > current_value = reader - > current_container ; <nl> + reader - > current_container = reader - > current_container - > parent ; <nl> + if ( reader - > current_container ) { <nl> + container_type = reader - > current_container - > type ; <nl> + } <nl> + return container_type ; <nl> } <nl> <nl> static void json_reader_set_key ( grpc_json_reader * reader ) { <nl> - reader - > vtable - > set_key ( reader - > userdata ) ; <nl> + reader - > key = reader - > string ; <nl> } 
<nl> <nl> static void json_reader_set_string ( grpc_json_reader * reader ) { <nl> - reader - > vtable - > set_string ( reader - > userdata ) ; <nl> + grpc_json * json = json_create_and_link ( reader , GRPC_JSON_STRING ) ; <nl> + json - > value = reinterpret_cast < char * > ( reader - > string ) ; <nl> } <nl> <nl> static int json_reader_set_number ( grpc_json_reader * reader ) { <nl> - return reader - > vtable - > set_number ( reader - > userdata ) ; <nl> + grpc_json * json = json_create_and_link ( reader , GRPC_JSON_NUMBER ) ; <nl> + json - > value = reinterpret_cast < char * > ( reader - > string ) ; <nl> + return 1 ; <nl> } <nl> <nl> static void json_reader_set_true ( grpc_json_reader * reader ) { <nl> - reader - > vtable - > set_true ( reader - > userdata ) ; <nl> + json_create_and_link ( reader , GRPC_JSON_TRUE ) ; <nl> } <nl> <nl> static void json_reader_set_false ( grpc_json_reader * reader ) { <nl> - reader - > vtable - > set_false ( reader - > userdata ) ; <nl> + json_create_and_link ( reader , GRPC_JSON_FALSE ) ; <nl> } <nl> <nl> static void json_reader_set_null ( grpc_json_reader * reader ) { <nl> - reader - > vtable - > set_null ( reader - > userdata ) ; <nl> - } <nl> - <nl> - / * Call this function to initialize the reader structure . 
* / <nl> - void grpc_json_reader_init ( grpc_json_reader * reader , <nl> - grpc_json_reader_vtable * vtable , void * userdata ) { <nl> - memset ( reader , 0 , sizeof ( * reader ) ) ; <nl> - reader - > vtable = vtable ; <nl> - reader - > userdata = userdata ; <nl> - json_reader_string_clear ( reader ) ; <nl> - reader - > state = GRPC_JSON_STATE_VALUE_BEGIN ; <nl> + json_create_and_link ( reader , GRPC_JSON_NULL ) ; <nl> } <nl> <nl> - int grpc_json_reader_is_complete ( grpc_json_reader * reader ) { <nl> + static int json_reader_is_complete ( grpc_json_reader * reader ) { <nl> return ( ( reader - > depth = = 0 ) & & <nl> ( ( reader - > state = = GRPC_JSON_STATE_END ) | | <nl> ( reader - > state = = GRPC_JSON_STATE_VALUE_END ) ) ) ; <nl> } <nl> <nl> - grpc_json_reader_status grpc_json_reader_run ( grpc_json_reader * reader ) { <nl> + / * Call this function to start parsing the input . It will return the following : <nl> + * . GRPC_JSON_DONE if the input got eof , and the parsing finished <nl> + * successfully . <nl> + * . GRPC_JSON_EAGAIN if the read_char function returned again . Call the <nl> + * parser again as needed . It is okay to call the parser in polling mode , <nl> + * although a bit dull . <nl> + * . GRPC_JSON_READ_ERROR if the read_char function returned an error . The <nl> + * state isn ' t broken however , and the function can be called again if the <nl> + * error has been corrected . But please use the EAGAIN feature instead for <nl> + * consistency . <nl> + * . GRPC_JSON_PARSE_ERROR if the input was somehow invalid . <nl> + * . GRPC_JSON_INTERNAL_ERROR if the parser somehow ended into an invalid <nl> + * internal state . 
<nl> + * / <nl> + static grpc_json_reader_status grpc_json_reader_run ( grpc_json_reader * reader ) { <nl> uint32_t c , success ; <nl> <nl> / * This state - machine is a strict implementation of ECMA - 404 * / <nl> grpc_json_reader_status grpc_json_reader_run ( grpc_json_reader * reader ) { <nl> return GRPC_JSON_EAGAIN ; <nl> <nl> case GRPC_JSON_READ_CHAR_EOF : <nl> - if ( grpc_json_reader_is_complete ( reader ) ) { <nl> + if ( json_reader_is_complete ( reader ) ) { <nl> return GRPC_JSON_DONE ; <nl> } else { <nl> return GRPC_JSON_PARSE_ERROR ; <nl> grpc_json_reader_status grpc_json_reader_run ( grpc_json_reader * reader ) { <nl> <nl> GPR_UNREACHABLE_CODE ( return GRPC_JSON_INTERNAL_ERROR ) ; <nl> } <nl> + <nl> + / * And finally , let ' s define our public API . * / <nl> + grpc_json * grpc_json_parse_string_with_len ( char * input , size_t size ) { <nl> + if ( input = = nullptr ) return nullptr ; <nl> + / / Initialize reader . <nl> + grpc_json_reader reader ; <nl> + memset ( & reader , 0 , sizeof ( reader ) ) ; <nl> + reader . string_ptr = reader . input = reinterpret_cast < uint8_t * > ( input ) ; <nl> + reader . remaining_input = size ; <nl> + json_reader_string_clear ( & reader ) ; <nl> + reader . state = GRPC_JSON_STATE_VALUE_BEGIN ; <nl> + / / Perform read . <nl> + grpc_json_reader_status status = grpc_json_reader_run ( & reader ) ; <nl> + / / Process results . <nl> + grpc_json * json = reader . top ; <nl> + if ( ( status ! = GRPC_JSON_DONE ) & & json ! = nullptr ) { <nl> + grpc_json_destroy ( json ) ; <nl> + json = nullptr ; <nl> + } <nl> + return json ; <nl> + } <nl> + <nl> + # define UNBOUND_JSON_STRING_LENGTH 0x7fffffff <nl> + <nl> + grpc_json * grpc_json_parse_string ( char * input ) { <nl> + return grpc_json_parse_string_with_len ( input , UNBOUND_JSON_STRING_LENGTH ) ; <nl> + } <nl> deleted file mode 100644 <nl> index 78f7ad9f3a8 . . 00000000000 <nl> mmm a / src / core / lib / json / json_reader . 
h <nl> ppp / dev / null <nl> <nl> - / * <nl> - * <nl> - * Copyright 2015 gRPC authors . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * <nl> - * / <nl> - <nl> - # ifndef GRPC_CORE_LIB_JSON_JSON_READER_H <nl> - # define GRPC_CORE_LIB_JSON_JSON_READER_H <nl> - <nl> - # include < grpc / support / port_platform . h > <nl> - <nl> - # include " src / core / lib / json / json_common . h " <nl> - <nl> - typedef enum { <nl> - GRPC_JSON_STATE_OBJECT_KEY_BEGIN , <nl> - GRPC_JSON_STATE_OBJECT_KEY_STRING , <nl> - GRPC_JSON_STATE_OBJECT_KEY_END , <nl> - GRPC_JSON_STATE_VALUE_BEGIN , <nl> - GRPC_JSON_STATE_VALUE_STRING , <nl> - GRPC_JSON_STATE_STRING_ESCAPE , <nl> - GRPC_JSON_STATE_STRING_ESCAPE_U1 , <nl> - GRPC_JSON_STATE_STRING_ESCAPE_U2 , <nl> - GRPC_JSON_STATE_STRING_ESCAPE_U3 , <nl> - GRPC_JSON_STATE_STRING_ESCAPE_U4 , <nl> - GRPC_JSON_STATE_VALUE_NUMBER , <nl> - GRPC_JSON_STATE_VALUE_NUMBER_WITH_DECIMAL , <nl> - GRPC_JSON_STATE_VALUE_NUMBER_ZERO , <nl> - GRPC_JSON_STATE_VALUE_NUMBER_DOT , <nl> - GRPC_JSON_STATE_VALUE_NUMBER_E , <nl> - GRPC_JSON_STATE_VALUE_NUMBER_EPM , <nl> - GRPC_JSON_STATE_VALUE_TRUE_R , <nl> - GRPC_JSON_STATE_VALUE_TRUE_U , <nl> - GRPC_JSON_STATE_VALUE_TRUE_E , <nl> - GRPC_JSON_STATE_VALUE_FALSE_A , <nl> - GRPC_JSON_STATE_VALUE_FALSE_L , <nl> - GRPC_JSON_STATE_VALUE_FALSE_S , <nl> - GRPC_JSON_STATE_VALUE_FALSE_E , <nl> - 
GRPC_JSON_STATE_VALUE_NULL_U , <nl> - GRPC_JSON_STATE_VALUE_NULL_L1 , <nl> - GRPC_JSON_STATE_VALUE_NULL_L2 , <nl> - GRPC_JSON_STATE_VALUE_END , <nl> - GRPC_JSON_STATE_END <nl> - } grpc_json_reader_state ; <nl> - <nl> - enum { <nl> - / * The first non - unicode value is 0x110000 . But let ' s pick <nl> - * a value high enough to start our error codes from . These <nl> - * values are safe to return from the read_char function . <nl> - * / <nl> - GRPC_JSON_READ_CHAR_EOF = 0x7ffffff0 , <nl> - GRPC_JSON_READ_CHAR_EAGAIN , <nl> - GRPC_JSON_READ_CHAR_ERROR <nl> - } ; <nl> - <nl> - struct grpc_json_reader ; <nl> - <nl> - typedef struct grpc_json_reader_vtable { <nl> - / * Clears your internal string scratchpad . * / <nl> - void ( * string_clear ) ( void * userdata ) ; <nl> - / * Adds a char to the string scratchpad . * / <nl> - void ( * string_add_char ) ( void * userdata , uint32_t c ) ; <nl> - / * Adds a utf32 char to the string scratchpad . * / <nl> - void ( * string_add_utf32 ) ( void * userdata , uint32_t c ) ; <nl> - / * Reads a character from your input . May be utf - 8 , 16 or 32 . * / <nl> - uint32_t ( * read_char ) ( void * userdata ) ; <nl> - / * Starts a container of type GRPC_JSON_ARRAY or GRPC_JSON_OBJECT . * / <nl> - void ( * container_begins ) ( void * userdata , grpc_json_type type ) ; <nl> - / * Ends the current container . Must return the type of its parent . * / <nl> - grpc_json_type ( * container_ends ) ( void * userdata ) ; <nl> - / * Your internal string scratchpad is an object ' s key . * / <nl> - void ( * set_key ) ( void * userdata ) ; <nl> - / * Your internal string scratchpad is a string value . * / <nl> - void ( * set_string ) ( void * userdata ) ; <nl> - / * Your internal string scratchpad is a numerical value . Return 1 if valid . * / <nl> - int ( * set_number ) ( void * userdata ) ; <nl> - / * Sets the values true , false or null . 
* / <nl> - void ( * set_true ) ( void * userdata ) ; <nl> - void ( * set_false ) ( void * userdata ) ; <nl> - void ( * set_null ) ( void * userdata ) ; <nl> - } grpc_json_reader_vtable ; <nl> - <nl> - typedef struct grpc_json_reader { <nl> - / * That structure is fully private , and initialized by grpc_json_reader_init . <nl> - * The definition is public so you can put it on your stack . <nl> - * / <nl> - <nl> - void * userdata ; <nl> - grpc_json_reader_vtable * vtable ; <nl> - int depth ; <nl> - int in_object ; <nl> - int in_array ; <nl> - int escaped_string_was_key ; <nl> - int container_just_begun ; <nl> - uint16_t unicode_char , unicode_high_surrogate ; <nl> - grpc_json_reader_state state ; <nl> - } grpc_json_reader ; <nl> - <nl> - / * The return type of the parser . * / <nl> - typedef enum { <nl> - GRPC_JSON_DONE , / * The parser finished successfully . * / <nl> - GRPC_JSON_EAGAIN , / * The parser yields to get more data . * / <nl> - GRPC_JSON_READ_ERROR , / * The parser passes through a read error . * / <nl> - GRPC_JSON_PARSE_ERROR , / * The parser found an error in the json stream . * / <nl> - GRPC_JSON_INTERNAL_ERROR / * The parser got an internal error . * / <nl> - } grpc_json_reader_status ; <nl> - <nl> - / * Call this function to start parsing the input . It will return the following : <nl> - * . GRPC_JSON_DONE if the input got eof , and the parsing finished <nl> - * successfully . <nl> - * . GRPC_JSON_EAGAIN if the read_char function returned again . Call the <nl> - * parser again as needed . It is okay to call the parser in polling mode , <nl> - * although a bit dull . <nl> - * . GRPC_JSON_READ_ERROR if the read_char function returned an error . The <nl> - * state isn ' t broken however , and the function can be called again if the <nl> - * error has been corrected . But please use the EAGAIN feature instead for <nl> - * consistency . <nl> - * . GRPC_JSON_PARSE_ERROR if the input was somehow invalid . <nl> - * . 
GRPC_JSON_INTERNAL_ERROR if the parser somehow ended into an invalid <nl> - * internal state . <nl> - * / <nl> - grpc_json_reader_status grpc_json_reader_run ( grpc_json_reader * reader ) ; <nl> - <nl> - / * Call this function to initialize the reader structure . * / <nl> - void grpc_json_reader_init ( grpc_json_reader * reader , <nl> - grpc_json_reader_vtable * vtable , void * userdata ) ; <nl> - <nl> - / * You may call this from the read_char callback if you don ' t know where is the <nl> - * end of your input stream , and you ' d like the json reader to hint you that it <nl> - * has completed reading its input , so you can return an EOF to it . Note that <nl> - * there might still be trailing whitespaces after that point . <nl> - * / <nl> - int grpc_json_reader_is_complete ( grpc_json_reader * reader ) ; <nl> - <nl> - # endif / * GRPC_CORE_LIB_JSON_JSON_READER_H * / <nl> deleted file mode 100644 <nl> index a1f1a6a84e5 . . 00000000000 <nl> mmm a / src / core / lib / json / json_string . cc <nl> ppp / dev / null <nl> <nl> - / * <nl> - * <nl> - * Copyright 2015 gRPC authors . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * <nl> - * / <nl> - <nl> - # include < grpc / support / port_platform . h > <nl> - <nl> - # include < stdlib . h > <nl> - # include < string . h > <nl> - <nl> - # include < grpc / support / alloc . 
h > <nl> - # include < grpc / support / log . h > <nl> - <nl> - # include " src / core / lib / json / json . h " <nl> - # include " src / core / lib / json / json_reader . h " <nl> - # include " src / core / lib / json / json_writer . h " <nl> - <nl> - / * The json reader will construct a bunch of grpc_json objects and <nl> - * link them all up together in a tree - like structure that will represent <nl> - * the json data in memory . <nl> - * <nl> - * It also uses its own input as a scratchpad to store all of the decoded , <nl> - * unescaped strings . So we need to keep track of all these pointers in <nl> - * that opaque structure the reader will carry for us . <nl> - * <nl> - * Note that this works because the act of parsing json always reduces its <nl> - * input size , and never expands it . <nl> - * / <nl> - typedef struct { <nl> - grpc_json * top ; <nl> - grpc_json * current_container ; <nl> - grpc_json * current_value ; <nl> - uint8_t * input ; <nl> - uint8_t * key ; <nl> - uint8_t * string ; <nl> - uint8_t * string_ptr ; <nl> - size_t remaining_input ; <nl> - } json_reader_userdata ; <nl> - <nl> - / * This json writer will put everything in a big string . <nl> - * The point is that we allocate that string in chunks of 256 bytes . <nl> - * / <nl> - typedef struct { <nl> - char * output ; <nl> - size_t free_space ; <nl> - size_t string_len ; <nl> - size_t allocated ; <nl> - } json_writer_userdata ; <nl> - <nl> - / * This function checks if there ' s enough space left in the output buffer , <nl> - * and will enlarge it if necessary . We ' re only allocating chunks of 256 <nl> - * bytes at a time ( or multiples thereof ) . <nl> - * / <nl> - static void json_writer_output_check ( void * userdata , size_t needed ) { <nl> - json_writer_userdata * state = static_cast < json_writer_userdata * > ( userdata ) ; <nl> - if ( state - > free_space > = needed ) return ; <nl> - needed - = state - > free_space ; <nl> - / * Round up by 256 bytes . 
* / <nl> - needed = ( needed + 0xff ) & ~ 0xffU ; <nl> - state - > output = <nl> - static_cast < char * > ( gpr_realloc ( state - > output , state - > allocated + needed ) ) ; <nl> - state - > free_space + = needed ; <nl> - state - > allocated + = needed ; <nl> - } <nl> - <nl> - / * These are needed by the writer ' s implementation . * / <nl> - static void json_writer_output_char ( void * userdata , char c ) { <nl> - json_writer_userdata * state = static_cast < json_writer_userdata * > ( userdata ) ; <nl> - json_writer_output_check ( userdata , 1 ) ; <nl> - state - > output [ state - > string_len + + ] = c ; <nl> - state - > free_space - - ; <nl> - } <nl> - <nl> - static void json_writer_output_string_with_len ( void * userdata , const char * str , <nl> - size_t len ) { <nl> - json_writer_userdata * state = static_cast < json_writer_userdata * > ( userdata ) ; <nl> - json_writer_output_check ( userdata , len ) ; <nl> - memcpy ( state - > output + state - > string_len , str , len ) ; <nl> - state - > string_len + = len ; <nl> - state - > free_space - = len ; <nl> - } <nl> - <nl> - static void json_writer_output_string ( void * userdata , const char * str ) { <nl> - size_t len = strlen ( str ) ; <nl> - json_writer_output_string_with_len ( userdata , str , len ) ; <nl> - } <nl> - <nl> - / * The reader asks us to clear our scratchpad . In our case , we ' ll simply mark <nl> - * the end of the current string , and advance our output pointer . 
<nl> - * / <nl> - static void json_reader_string_clear ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - if ( state - > string ) { <nl> - GPR_ASSERT ( state - > string_ptr < state - > input ) ; <nl> - * state - > string_ptr + + = 0 ; <nl> - } <nl> - state - > string = state - > string_ptr ; <nl> - } <nl> - <nl> - static void json_reader_string_add_char ( void * userdata , uint32_t c ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - GPR_ASSERT ( state - > string_ptr < state - > input ) ; <nl> - GPR_ASSERT ( c < = 0xff ) ; <nl> - * state - > string_ptr + + = static_cast < uint8_t > ( c ) ; <nl> - } <nl> - <nl> - / * We are converting a UTF - 32 character into UTF - 8 here , <nl> - * as described by RFC3629 . <nl> - * / <nl> - static void json_reader_string_add_utf32 ( void * userdata , uint32_t c ) { <nl> - if ( c < = 0x7f ) { <nl> - json_reader_string_add_char ( userdata , c ) ; <nl> - } else if ( c < = 0x7ff ) { <nl> - uint32_t b1 = 0xc0 | ( ( c > > 6 ) & 0x1f ) ; <nl> - uint32_t b2 = 0x80 | ( c & 0x3f ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - } else if ( c < = 0xffff ) { <nl> - uint32_t b1 = 0xe0 | ( ( c > > 12 ) & 0x0f ) ; <nl> - uint32_t b2 = 0x80 | ( ( c > > 6 ) & 0x3f ) ; <nl> - uint32_t b3 = 0x80 | ( c & 0x3f ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - json_reader_string_add_char ( userdata , b3 ) ; <nl> - } else if ( c < = 0x1fffff ) { <nl> - uint32_t b1 = 0xf0 | ( ( c > > 18 ) & 0x07 ) ; <nl> - uint32_t b2 = 0x80 | ( ( c > > 12 ) & 0x3f ) ; <nl> - uint32_t b3 = 0x80 | ( ( c > > 6 ) & 0x3f ) ; <nl> - uint32_t b4 = 0x80 | ( c & 0x3f ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - json_reader_string_add_char ( 
userdata , b3 ) ; <nl> - json_reader_string_add_char ( userdata , b4 ) ; <nl> - } <nl> - } <nl> - <nl> - / * We consider that the input may be a zero - terminated string . So we <nl> - * can end up hitting eof before the end of the alleged string length . <nl> - * / <nl> - static uint32_t json_reader_read_char ( void * userdata ) { <nl> - uint32_t r ; <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - if ( state - > remaining_input = = 0 ) return GRPC_JSON_READ_CHAR_EOF ; <nl> - <nl> - r = * state - > input + + ; <nl> - state - > remaining_input - - ; <nl> - <nl> - if ( r = = 0 ) { <nl> - state - > remaining_input = 0 ; <nl> - return GRPC_JSON_READ_CHAR_EOF ; <nl> - } <nl> - <nl> - return r ; <nl> - } <nl> - <nl> - / * Helper function to create a new grpc_json object and link it into <nl> - * our tree - in - progress inside our opaque structure . <nl> - * / <nl> - static grpc_json * json_create_and_link ( void * userdata , grpc_json_type type ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - grpc_json * json = grpc_json_create ( type ) ; <nl> - <nl> - json - > parent = state - > current_container ; <nl> - json - > prev = state - > current_value ; <nl> - state - > current_value = json ; <nl> - <nl> - if ( json - > prev ) { <nl> - json - > prev - > next = json ; <nl> - } <nl> - if ( json - > parent ) { <nl> - if ( ! json - > parent - > child ) { <nl> - json - > parent - > child = json ; <nl> - } <nl> - if ( json - > parent - > type = = GRPC_JSON_OBJECT ) { <nl> - json - > key = reinterpret_cast < char * > ( state - > key ) ; <nl> - } <nl> - } <nl> - if ( ! 
state - > top ) { <nl> - state - > top = json ; <nl> - } <nl> - <nl> - return json ; <nl> - } <nl> - <nl> - static void json_reader_container_begins ( void * userdata , grpc_json_type type ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - grpc_json * container ; <nl> - <nl> - GPR_ASSERT ( type = = GRPC_JSON_ARRAY | | type = = GRPC_JSON_OBJECT ) ; <nl> - <nl> - container = json_create_and_link ( userdata , type ) ; <nl> - state - > current_container = container ; <nl> - state - > current_value = nullptr ; <nl> - } <nl> - <nl> - / * It ' s important to remember that the reader is mostly stateless , so it <nl> - * isn ' t trying to remember what the container was prior the one that just <nl> - * ends . Since we ' re keeping track of these for our own purpose , we are <nl> - * able to return that information back , which is useful for it to validate <nl> - * the input json stream . <nl> - * <nl> - * Also note that if we ' re at the top of the tree , and the last container <nl> - * ends , we have to return GRPC_JSON_TOP_LEVEL . <nl> - * / <nl> - static grpc_json_type json_reader_container_ends ( void * userdata ) { <nl> - grpc_json_type container_type = GRPC_JSON_TOP_LEVEL ; <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - GPR_ASSERT ( state - > current_container ) ; <nl> - <nl> - state - > current_value = state - > current_container ; <nl> - state - > current_container = state - > current_container - > parent ; <nl> - <nl> - if ( state - > current_container ) { <nl> - container_type = state - > current_container - > type ; <nl> - } <nl> - <nl> - return container_type ; <nl> - } <nl> - <nl> - / * The next 3 functions basically are the reader asking us to use our string <nl> - * scratchpad for one of these 3 purposes . <nl> - * <nl> - * Note that in the set_number case , we ' re not going to try interpreting it . 
<nl> - * We ' ll keep it as a string , and leave it to the caller to evaluate it . <nl> - * / <nl> - static void json_reader_set_key ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - state - > key = state - > string ; <nl> - } <nl> - <nl> - static void json_reader_set_string ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - grpc_json * json = json_create_and_link ( userdata , GRPC_JSON_STRING ) ; <nl> - json - > value = reinterpret_cast < char * > ( state - > string ) ; <nl> - } <nl> - <nl> - static int json_reader_set_number ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - grpc_json * json = json_create_and_link ( userdata , GRPC_JSON_NUMBER ) ; <nl> - json - > value = reinterpret_cast < char * > ( state - > string ) ; <nl> - return 1 ; <nl> - } <nl> - <nl> - / * The object types true , false and null are self - sufficient , and don ' t need <nl> - * any more information beside their type . 
<nl> - * / <nl> - static void json_reader_set_true ( void * userdata ) { <nl> - json_create_and_link ( userdata , GRPC_JSON_TRUE ) ; <nl> - } <nl> - <nl> - static void json_reader_set_false ( void * userdata ) { <nl> - json_create_and_link ( userdata , GRPC_JSON_FALSE ) ; <nl> - } <nl> - <nl> - static void json_reader_set_null ( void * userdata ) { <nl> - json_create_and_link ( userdata , GRPC_JSON_NULL ) ; <nl> - } <nl> - <nl> - static grpc_json_reader_vtable reader_vtable = { <nl> - json_reader_string_clear , json_reader_string_add_char , <nl> - json_reader_string_add_utf32 , json_reader_read_char , <nl> - json_reader_container_begins , json_reader_container_ends , <nl> - json_reader_set_key , json_reader_set_string , <nl> - json_reader_set_number , json_reader_set_true , <nl> - json_reader_set_false , json_reader_set_null } ; <nl> - <nl> - / * And finally , let ' s define our public API . * / <nl> - grpc_json * grpc_json_parse_string_with_len ( char * input , size_t size ) { <nl> - grpc_json_reader reader ; <nl> - json_reader_userdata state ; <nl> - grpc_json * json = nullptr ; <nl> - grpc_json_reader_status status ; <nl> - <nl> - if ( ! input ) return nullptr ; <nl> - <nl> - state . top = state . current_container = state . current_value = nullptr ; <nl> - state . string = state . key = nullptr ; <nl> - state . string_ptr = state . input = reinterpret_cast < uint8_t * > ( input ) ; <nl> - state . remaining_input = size ; <nl> - grpc_json_reader_init ( & reader , & reader_vtable , & state ) ; <nl> - <nl> - status = grpc_json_reader_run ( & reader ) ; <nl> - json = state . top ; <nl> - <nl> - if ( ( status ! 
= GRPC_JSON_DONE ) & & json ) { <nl> - grpc_json_destroy ( json ) ; <nl> - json = nullptr ; <nl> - } <nl> - <nl> - return json ; <nl> - } <nl> - <nl> - # define UNBOUND_JSON_STRING_LENGTH 0x7fffffff <nl> - <nl> - grpc_json * grpc_json_parse_string ( char * input ) { <nl> - return grpc_json_parse_string_with_len ( input , UNBOUND_JSON_STRING_LENGTH ) ; <nl> - } <nl> - <nl> - static void json_dump_recursive ( grpc_json_writer * writer , const grpc_json * json , <nl> - int in_object ) { <nl> - while ( json ) { <nl> - if ( in_object ) grpc_json_writer_object_key ( writer , json - > key ) ; <nl> - <nl> - switch ( json - > type ) { <nl> - case GRPC_JSON_OBJECT : <nl> - case GRPC_JSON_ARRAY : <nl> - grpc_json_writer_container_begins ( writer , json - > type ) ; <nl> - if ( json - > child ) <nl> - json_dump_recursive ( writer , json - > child , <nl> - json - > type = = GRPC_JSON_OBJECT ) ; <nl> - grpc_json_writer_container_ends ( writer , json - > type ) ; <nl> - break ; <nl> - case GRPC_JSON_STRING : <nl> - grpc_json_writer_value_string ( writer , json - > value ) ; <nl> - break ; <nl> - case GRPC_JSON_NUMBER : <nl> - grpc_json_writer_value_raw ( writer , json - > value ) ; <nl> - break ; <nl> - case GRPC_JSON_TRUE : <nl> - grpc_json_writer_value_raw_with_len ( writer , " true " , 4 ) ; <nl> - break ; <nl> - case GRPC_JSON_FALSE : <nl> - grpc_json_writer_value_raw_with_len ( writer , " false " , 5 ) ; <nl> - break ; <nl> - case GRPC_JSON_NULL : <nl> - grpc_json_writer_value_raw_with_len ( writer , " null " , 4 ) ; <nl> - break ; <nl> - default : <nl> - GPR_UNREACHABLE_CODE ( abort ( ) ) ; <nl> - } <nl> - json = json - > next ; <nl> - } <nl> - } <nl> - <nl> - static grpc_json_writer_vtable writer_vtable = { <nl> - json_writer_output_char , json_writer_output_string , <nl> - json_writer_output_string_with_len } ; <nl> - <nl> - char * grpc_json_dump_to_string ( const grpc_json * json , int indent ) { <nl> - grpc_json_writer writer ; <nl> - json_writer_userdata state ; <nl> - 
<nl> - state . output = nullptr ; <nl> - state . free_space = state . string_len = state . allocated = 0 ; <nl> - grpc_json_writer_init ( & writer , indent , & writer_vtable , & state ) ; <nl> - <nl> - json_dump_recursive ( & writer , json , 0 ) ; <nl> - <nl> - json_writer_output_char ( & state , 0 ) ; <nl> - <nl> - return state . output ; <nl> - } <nl> mmm a / src / core / lib / json / json_writer . cc <nl> ppp b / src / core / lib / json / json_writer . cc <nl> <nl> <nl> # include < grpc / support / port_platform . h > <nl> <nl> + # include < stdlib . h > <nl> # include < string . h > <nl> <nl> - # include " src / core / lib / json / json_writer . h " <nl> + # include < grpc / support / alloc . h > <nl> + # include < grpc / support / log . h > <nl> <nl> - static void json_writer_output_char ( grpc_json_writer * writer , char c ) { <nl> - writer - > vtable - > output_char ( writer - > userdata , c ) ; <nl> + # include " src / core / lib / json / json . h " <nl> + <nl> + / * The idea of the writer is basically symmetrical of the reader . While the <nl> + * reader emits various calls to your code , the writer takes basically the <nl> + * same calls and emit json out of it . It doesn ' t try to make any check on <nl> + * the order of the calls you do on it . Meaning you can theorically force <nl> + * it to generate invalid json . <nl> + * <nl> + * Also , unlike the reader , the writer expects UTF - 8 encoded input strings . <nl> + * These strings will be UTF - 8 validated , and any invalid character will <nl> + * cut the conversion short , before any invalid UTF - 8 sequence , thus forming <nl> + * a valid UTF - 8 string overall . 
<nl> + * / <nl> + <nl> + typedef struct grpc_json_writer { <nl> + int indent ; <nl> + int depth ; <nl> + int container_empty ; <nl> + int got_key ; <nl> + char * output ; <nl> + size_t free_space ; <nl> + size_t string_len ; <nl> + size_t allocated ; <nl> + } grpc_json_writer ; <nl> + <nl> + / * This function checks if there ' s enough space left in the output buffer , <nl> + * and will enlarge it if necessary . We ' re only allocating chunks of 256 <nl> + * bytes at a time ( or multiples thereof ) . <nl> + * / <nl> + static void json_writer_output_check ( grpc_json_writer * writer , size_t needed ) { <nl> + if ( writer - > free_space > = needed ) return ; <nl> + needed - = writer - > free_space ; <nl> + / * Round up by 256 bytes . * / <nl> + needed = ( needed + 0xff ) & ~ 0xffU ; <nl> + writer - > output = static_cast < char * > ( <nl> + gpr_realloc ( writer - > output , writer - > allocated + needed ) ) ; <nl> + writer - > free_space + = needed ; <nl> + writer - > allocated + = needed ; <nl> } <nl> <nl> - static void json_writer_output_string ( grpc_json_writer * writer , <nl> - const char * str ) { <nl> - writer - > vtable - > output_string ( writer - > userdata , str ) ; <nl> + static void json_writer_output_char ( grpc_json_writer * writer , char c ) { <nl> + json_writer_output_check ( writer , 1 ) ; <nl> + writer - > output [ writer - > string_len + + ] = c ; <nl> + writer - > free_space - - ; <nl> } <nl> <nl> static void json_writer_output_string_with_len ( grpc_json_writer * writer , <nl> const char * str , size_t len ) { <nl> - writer - > vtable - > output_string_with_len ( writer - > userdata , str , len ) ; <nl> + json_writer_output_check ( writer , len ) ; <nl> + memcpy ( writer - > output + writer - > string_len , str , len ) ; <nl> + writer - > string_len + = len ; <nl> + writer - > free_space - = len ; <nl> } <nl> <nl> - void grpc_json_writer_init ( grpc_json_writer * writer , int indent , <nl> - grpc_json_writer_vtable * vtable , void * userdata ) { 
<nl> - memset ( writer , 0 , sizeof ( * writer ) ) ; <nl> - writer - > container_empty = 1 ; <nl> - writer - > indent = indent ; <nl> - writer - > vtable = vtable ; <nl> - writer - > userdata = userdata ; <nl> + static void json_writer_output_string ( grpc_json_writer * writer , <nl> + const char * str ) { <nl> + size_t len = strlen ( str ) ; <nl> + json_writer_output_string_with_len ( writer , str , len ) ; <nl> } <nl> <nl> static void json_writer_output_indent ( grpc_json_writer * writer ) { <nl> static void json_writer_escape_string ( grpc_json_writer * writer , <nl> json_writer_output_char ( writer , ' " ' ) ; <nl> } <nl> <nl> - void grpc_json_writer_container_begins ( grpc_json_writer * writer , <nl> - grpc_json_type type ) { <nl> + static void grpc_json_writer_container_begins ( grpc_json_writer * writer , <nl> + grpc_json_type type ) { <nl> if ( ! writer - > got_key ) json_writer_value_end ( writer ) ; <nl> json_writer_output_indent ( writer ) ; <nl> json_writer_output_char ( writer , type = = GRPC_JSON_OBJECT ? ' { ' : ' [ ' ) ; <nl> void grpc_json_writer_container_begins ( grpc_json_writer * writer , <nl> writer - > depth + + ; <nl> } <nl> <nl> - void grpc_json_writer_container_ends ( grpc_json_writer * writer , <nl> - grpc_json_type type ) { <nl> + static void grpc_json_writer_container_ends ( grpc_json_writer * writer , <nl> + grpc_json_type type ) { <nl> if ( writer - > indent & & ! 
writer - > container_empty ) <nl> json_writer_output_char ( writer , ' \ n ' ) ; <nl> writer - > depth - - ; <nl> void grpc_json_writer_container_ends ( grpc_json_writer * writer , <nl> writer - > got_key = 0 ; <nl> } <nl> <nl> - void grpc_json_writer_object_key ( grpc_json_writer * writer , const char * string ) { <nl> + static void grpc_json_writer_object_key ( grpc_json_writer * writer , <nl> + const char * string ) { <nl> json_writer_value_end ( writer ) ; <nl> json_writer_output_indent ( writer ) ; <nl> json_writer_escape_string ( writer , string ) ; <nl> void grpc_json_writer_object_key ( grpc_json_writer * writer , const char * string ) { <nl> writer - > got_key = 1 ; <nl> } <nl> <nl> - void grpc_json_writer_value_raw ( grpc_json_writer * writer , const char * string ) { <nl> + static void grpc_json_writer_value_raw ( grpc_json_writer * writer , <nl> + const char * string ) { <nl> if ( ! writer - > got_key ) json_writer_value_end ( writer ) ; <nl> json_writer_output_indent ( writer ) ; <nl> json_writer_output_string ( writer , string ) ; <nl> writer - > got_key = 0 ; <nl> } <nl> <nl> - void grpc_json_writer_value_raw_with_len ( grpc_json_writer * writer , <nl> - const char * string , size_t len ) { <nl> + static void grpc_json_writer_value_raw_with_len ( grpc_json_writer * writer , <nl> + const char * string , <nl> + size_t len ) { <nl> if ( ! writer - > got_key ) json_writer_value_end ( writer ) ; <nl> json_writer_output_indent ( writer ) ; <nl> json_writer_output_string_with_len ( writer , string , len ) ; <nl> writer - > got_key = 0 ; <nl> } <nl> <nl> - void grpc_json_writer_value_string ( grpc_json_writer * writer , <nl> - const char * string ) { <nl> + static void grpc_json_writer_value_string ( grpc_json_writer * writer , <nl> + const char * string ) { <nl> if ( ! 
writer - > got_key ) json_writer_value_end ( writer ) ; <nl> json_writer_output_indent ( writer ) ; <nl> json_writer_escape_string ( writer , string ) ; <nl> writer - > got_key = 0 ; <nl> } <nl> + <nl> + static void json_dump_recursive ( grpc_json_writer * writer , const grpc_json * json , <nl> + int in_object ) { <nl> + while ( json ) { <nl> + if ( in_object ) grpc_json_writer_object_key ( writer , json - > key ) ; <nl> + switch ( json - > type ) { <nl> + case GRPC_JSON_OBJECT : <nl> + case GRPC_JSON_ARRAY : <nl> + grpc_json_writer_container_begins ( writer , json - > type ) ; <nl> + if ( json - > child ) <nl> + json_dump_recursive ( writer , json - > child , <nl> + json - > type = = GRPC_JSON_OBJECT ) ; <nl> + grpc_json_writer_container_ends ( writer , json - > type ) ; <nl> + break ; <nl> + case GRPC_JSON_STRING : <nl> + grpc_json_writer_value_string ( writer , json - > value ) ; <nl> + break ; <nl> + case GRPC_JSON_NUMBER : <nl> + grpc_json_writer_value_raw ( writer , json - > value ) ; <nl> + break ; <nl> + case GRPC_JSON_TRUE : <nl> + grpc_json_writer_value_raw_with_len ( writer , " true " , 4 ) ; <nl> + break ; <nl> + case GRPC_JSON_FALSE : <nl> + grpc_json_writer_value_raw_with_len ( writer , " false " , 5 ) ; <nl> + break ; <nl> + case GRPC_JSON_NULL : <nl> + grpc_json_writer_value_raw_with_len ( writer , " null " , 4 ) ; <nl> + break ; <nl> + default : <nl> + GPR_UNREACHABLE_CODE ( abort ( ) ) ; <nl> + } <nl> + json = json - > next ; <nl> + } <nl> + } <nl> + <nl> + char * grpc_json_dump_to_string ( const grpc_json * json , int indent ) { <nl> + grpc_json_writer writer ; <nl> + memset ( & writer , 0 , sizeof ( writer ) ) ; <nl> + writer . container_empty = 1 ; <nl> + writer . indent = indent ; <nl> + json_dump_recursive ( & writer , json , 0 ) ; <nl> + json_writer_output_char ( & writer , 0 ) ; <nl> + return writer . output ; <nl> + } <nl> deleted file mode 100644 <nl> index ba0bedde7fa . . 00000000000 <nl> mmm a / src / core / lib / json / json_writer . 
h <nl> ppp / dev / null <nl> <nl> - / * <nl> - * <nl> - * Copyright 2015 gRPC authors . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * <nl> - * / <nl> - <nl> - / * The idea of the writer is basically symmetrical of the reader . While the <nl> - * reader emits various calls to your code , the writer takes basically the <nl> - * same calls and emit json out of it . It doesn ' t try to make any check on <nl> - * the order of the calls you do on it . Meaning you can theorically force <nl> - * it to generate invalid json . <nl> - * <nl> - * Also , unlike the reader , the writer expects UTF - 8 encoded input strings . <nl> - * These strings will be UTF - 8 validated , and any invalid character will <nl> - * cut the conversion short , before any invalid UTF - 8 sequence , thus forming <nl> - * a valid UTF - 8 string overall . <nl> - * / <nl> - <nl> - # ifndef GRPC_CORE_LIB_JSON_JSON_WRITER_H <nl> - # define GRPC_CORE_LIB_JSON_JSON_WRITER_H <nl> - <nl> - # include < grpc / support / port_platform . h > <nl> - <nl> - # include < stdlib . h > <nl> - <nl> - # include " src / core / lib / json / json_common . h " <nl> - <nl> - typedef struct grpc_json_writer_vtable { <nl> - / * Adds a character to the output stream . * / <nl> - void ( * output_char ) ( void * userdata , char ) ; <nl> - / * Adds a zero - terminated string to the output stream . 
* / <nl> - void ( * output_string ) ( void * userdata , const char * str ) ; <nl> - / * Adds a fixed - length string to the output stream . * / <nl> - void ( * output_string_with_len ) ( void * userdata , const char * str , size_t len ) ; <nl> - <nl> - } grpc_json_writer_vtable ; <nl> - <nl> - typedef struct grpc_json_writer { <nl> - void * userdata ; <nl> - grpc_json_writer_vtable * vtable ; <nl> - int indent ; <nl> - int depth ; <nl> - int container_empty ; <nl> - int got_key ; <nl> - } grpc_json_writer ; <nl> - <nl> - / * Call this to initialize your writer structure . The indent parameter is <nl> - * specifying the number of spaces to use for indenting the output . If you <nl> - * use indent = 0 , then the output will not have any newlines either , thus <nl> - * emitting a condensed json output . <nl> - * / <nl> - void grpc_json_writer_init ( grpc_json_writer * writer , int indent , <nl> - grpc_json_writer_vtable * vtable , void * userdata ) ; <nl> - <nl> - / * Signals the beginning of a container . * / <nl> - void grpc_json_writer_container_begins ( grpc_json_writer * writer , <nl> - grpc_json_type type ) ; <nl> - / * Signals the end of a container . * / <nl> - void grpc_json_writer_container_ends ( grpc_json_writer * writer , <nl> - grpc_json_type type ) ; <nl> - / * Writes down an object key for the next value . * / <nl> - void grpc_json_writer_object_key ( grpc_json_writer * writer , const char * string ) ; <nl> - / * Sets a raw value . Useful for numbers . * / <nl> - void grpc_json_writer_value_raw ( grpc_json_writer * writer , const char * string ) ; <nl> - / * Sets a raw value with its length . Useful for values like true or false . * / <nl> - void grpc_json_writer_value_raw_with_len ( grpc_json_writer * writer , <nl> - const char * string , size_t len ) ; <nl> - / * Sets a string value . It ' ll be escaped , and utf - 8 validated . 
* / <nl> - void grpc_json_writer_value_string ( grpc_json_writer * writer , <nl> - const char * string ) ; <nl> - <nl> - # endif / * GRPC_CORE_LIB_JSON_JSON_WRITER_H * / <nl> mmm a / src / python / grpcio / grpc_core_dependencies . py <nl> ppp b / src / python / grpcio / grpc_core_dependencies . py <nl> <nl> ' src / core / lib / iomgr / wakeup_fd_posix . cc ' , <nl> ' src / core / lib / json / json . cc ' , <nl> ' src / core / lib / json / json_reader . cc ' , <nl> - ' src / core / lib / json / json_string . cc ' , <nl> ' src / core / lib / json / json_writer . cc ' , <nl> ' src / core / lib / profiling / basic_timers . cc ' , <nl> ' src / core / lib / profiling / stap_timers . cc ' , <nl> mmm a / test / core / json / BUILD <nl> ppp b / test / core / json / BUILD <nl> grpc_fuzzer ( <nl> ] , <nl> ) <nl> <nl> - grpc_cc_binary ( <nl> - name = " json_rewrite " , <nl> - testonly = 1 , <nl> - srcs = [ " json_rewrite . cc " ] , <nl> - language = " C + + " , <nl> - deps = [ <nl> - " / / : gpr " , <nl> - " / / : grpc " , <nl> - " / / test / core / util : grpc_test_util " , <nl> - ] , <nl> - ) <nl> - <nl> - grpc_cc_test ( <nl> - name = " json_rewrite_test " , <nl> - srcs = [ " json_rewrite_test . cc " ] , <nl> - data = [ <nl> - " rewrite_test_input . json " , <nl> - " rewrite_test_output_condensed . json " , <nl> - " rewrite_test_output_indented . json " , <nl> - " : json_stream_error_test " , <nl> - ] , <nl> - language = " C + + " , <nl> - uses_polling = False , <nl> - deps = [ <nl> - " / / : gpr " , <nl> - " / / : grpc " , <nl> - " / / test / core / util : grpc_test_util " , <nl> - ] , <nl> - ) <nl> - <nl> - grpc_cc_test ( <nl> - name = " json_stream_error_test " , <nl> - srcs = [ " json_stream_error_test . 
cc " ] , <nl> - language = " C + + " , <nl> - uses_polling = False , <nl> - deps = [ <nl> - " / / : gpr " , <nl> - " / / : grpc " , <nl> - " / / test / core / util : grpc_test_util " , <nl> - ] , <nl> - ) <nl> - <nl> grpc_cc_test ( <nl> name = " json_test " , <nl> srcs = [ " json_test . cc " ] , <nl> deleted file mode 100644 <nl> index da2f50ec59f . . 00000000000 <nl> mmm a / test / core / json / json_rewrite . cc <nl> ppp / dev / null <nl> <nl> - / * <nl> - * <nl> - * Copyright 2015 gRPC authors . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * <nl> - * / <nl> - <nl> - # include < stdio . h > <nl> - # include < stdlib . h > <nl> - <nl> - # include < grpc / support / alloc . h > <nl> - # include < grpc / support / log . h > <nl> - <nl> - # include " src / core / lib / json / json_reader . h " <nl> - # include " src / core / lib / json / json_writer . h " <nl> - # include " test / core / util / cmdline . 
h " <nl> - <nl> - typedef struct json_writer_userdata { <nl> - FILE * out ; <nl> - } json_writer_userdata ; <nl> - <nl> - typedef struct stacked_container { <nl> - grpc_json_type type ; <nl> - struct stacked_container * next ; <nl> - } stacked_container ; <nl> - <nl> - typedef struct json_reader_userdata { <nl> - FILE * in ; <nl> - grpc_json_writer * writer ; <nl> - char * scratchpad ; <nl> - char * ptr ; <nl> - size_t free_space ; <nl> - size_t allocated ; <nl> - size_t string_len ; <nl> - stacked_container * top ; <nl> - } json_reader_userdata ; <nl> - <nl> - static void json_writer_output_char ( void * userdata , char c ) { <nl> - json_writer_userdata * state = static_cast < json_writer_userdata * > ( userdata ) ; <nl> - fputc ( c , state - > out ) ; <nl> - } <nl> - <nl> - static void json_writer_output_string ( void * userdata , const char * str ) { <nl> - json_writer_userdata * state = static_cast < json_writer_userdata * > ( userdata ) ; <nl> - fputs ( str , state - > out ) ; <nl> - } <nl> - <nl> - static void json_writer_output_string_with_len ( void * userdata , const char * str , <nl> - size_t len ) { <nl> - json_writer_userdata * state = static_cast < json_writer_userdata * > ( userdata ) ; <nl> - fwrite ( str , len , 1 , state - > out ) ; <nl> - } <nl> - <nl> - grpc_json_writer_vtable writer_vtable = { json_writer_output_char , <nl> - json_writer_output_string , <nl> - json_writer_output_string_with_len } ; <nl> - <nl> - static void check_string ( json_reader_userdata * state , size_t needed ) { <nl> - if ( state - > free_space > = needed ) return ; <nl> - needed - = state - > free_space ; <nl> - needed = ( needed + 0xffu ) & ~ 0xffu ; <nl> - state - > scratchpad = static_cast < char * > ( <nl> - gpr_realloc ( state - > scratchpad , state - > allocated + needed ) ) ; <nl> - state - > free_space + = needed ; <nl> - state - > allocated + = needed ; <nl> - } <nl> - <nl> - static void json_reader_string_clear ( void * userdata ) { <nl> - json_reader_userdata 
* state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - state - > free_space = state - > allocated ; <nl> - state - > string_len = 0 ; <nl> - } <nl> - <nl> - static void json_reader_string_add_char ( void * userdata , uint32_t c ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - check_string ( state , 1 ) ; <nl> - GPR_ASSERT ( c < 256 ) ; <nl> - state - > scratchpad [ state - > string_len + + ] = static_cast < char > ( c ) ; <nl> - } <nl> - <nl> - static void json_reader_string_add_utf32 ( void * userdata , uint32_t c ) { <nl> - if ( c < = 0x7f ) { <nl> - json_reader_string_add_char ( userdata , c ) ; <nl> - } else if ( c < = 0x7ff ) { <nl> - uint32_t b1 = 0xc0u | ( ( c > > 6u ) & 0x1fu ) ; <nl> - uint32_t b2 = 0x80u | ( c & 0x3fu ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - } else if ( c < = 0xffffu ) { <nl> - uint32_t b1 = 0xe0u | ( ( c > > 12u ) & 0x0fu ) ; <nl> - uint32_t b2 = 0x80u | ( ( c > > 6u ) & 0x3fu ) ; <nl> - uint32_t b3 = 0x80u | ( c & 0x3fu ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - json_reader_string_add_char ( userdata , b3 ) ; <nl> - } else if ( c < = 0x1fffffu ) { <nl> - uint32_t b1 = 0xf0u | ( ( c > > 18u ) & 0x07u ) ; <nl> - uint32_t b2 = 0x80u | ( ( c > > 12u ) & 0x3fu ) ; <nl> - uint32_t b3 = 0x80u | ( ( c > > 6u ) & 0x3fu ) ; <nl> - uint32_t b4 = 0x80u | ( c & 0x3fu ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - json_reader_string_add_char ( userdata , b3 ) ; <nl> - json_reader_string_add_char ( userdata , b4 ) ; <nl> - } <nl> - } <nl> - <nl> - static uint32_t json_reader_read_char ( void * userdata ) { <nl> - int r ; <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - r = fgetc ( state - > in ) ; 
<nl> - if ( r = = EOF ) r = GRPC_JSON_READ_CHAR_EOF ; <nl> - return static_cast < uint32_t > ( r ) ; <nl> - } <nl> - <nl> - static void json_reader_container_begins ( void * userdata , grpc_json_type type ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - stacked_container * container = <nl> - static_cast < stacked_container * > ( gpr_malloc ( sizeof ( stacked_container ) ) ) ; <nl> - <nl> - container - > type = type ; <nl> - container - > next = state - > top ; <nl> - state - > top = container ; <nl> - <nl> - grpc_json_writer_container_begins ( state - > writer , type ) ; <nl> - } <nl> - <nl> - static grpc_json_type json_reader_container_ends ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - stacked_container * container = state - > top ; <nl> - <nl> - grpc_json_writer_container_ends ( state - > writer , container - > type ) ; <nl> - state - > top = container - > next ; <nl> - gpr_free ( container ) ; <nl> - return state - > top ? 
state - > top - > type : GRPC_JSON_TOP_LEVEL ; <nl> - } <nl> - <nl> - static void json_reader_set_key ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - json_reader_string_add_char ( userdata , 0 ) ; <nl> - <nl> - grpc_json_writer_object_key ( state - > writer , state - > scratchpad ) ; <nl> - } <nl> - <nl> - static void json_reader_set_string ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - json_reader_string_add_char ( userdata , 0 ) ; <nl> - <nl> - grpc_json_writer_value_string ( state - > writer , state - > scratchpad ) ; <nl> - } <nl> - <nl> - static int json_reader_set_number ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , state - > scratchpad , <nl> - state - > string_len ) ; <nl> - <nl> - return 1 ; <nl> - } <nl> - <nl> - static void json_reader_set_true ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , " true " , 4 ) ; <nl> - } <nl> - <nl> - static void json_reader_set_false ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , " false " , 5 ) ; <nl> - } <nl> - <nl> - static void json_reader_set_null ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , " null " , 4 ) ; <nl> - } <nl> - <nl> - static grpc_json_reader_vtable reader_vtable = { <nl> - json_reader_string_clear , json_reader_string_add_char , <nl> - json_reader_string_add_utf32 , json_reader_read_char , <nl> - 
json_reader_container_begins , json_reader_container_ends , <nl> - json_reader_set_key , json_reader_set_string , <nl> - json_reader_set_number , json_reader_set_true , <nl> - json_reader_set_false , json_reader_set_null } ; <nl> - <nl> - int rewrite ( FILE * in , FILE * out , int indent ) { <nl> - grpc_json_writer writer ; <nl> - grpc_json_reader reader ; <nl> - grpc_json_reader_status status ; <nl> - json_writer_userdata writer_user ; <nl> - json_reader_userdata reader_user ; <nl> - <nl> - reader_user . writer = & writer ; <nl> - reader_user . in = in ; <nl> - reader_user . top = nullptr ; <nl> - reader_user . scratchpad = nullptr ; <nl> - reader_user . string_len = 0 ; <nl> - reader_user . free_space = 0 ; <nl> - reader_user . allocated = 0 ; <nl> - <nl> - writer_user . out = out ; <nl> - <nl> - grpc_json_writer_init ( & writer , indent , & writer_vtable , & writer_user ) ; <nl> - grpc_json_reader_init ( & reader , & reader_vtable , & reader_user ) ; <nl> - <nl> - status = grpc_json_reader_run ( & reader ) ; <nl> - <nl> - free ( reader_user . scratchpad ) ; <nl> - while ( reader_user . top ) { <nl> - stacked_container * container = reader_user . top ; <nl> - reader_user . top = container - > next ; <nl> - free ( container ) ; <nl> - } <nl> - <nl> - return status = = GRPC_JSON_DONE ; <nl> - } <nl> - <nl> - int main ( int argc , char * * argv ) { <nl> - int indent = 2 ; <nl> - gpr_cmdline * cl ; <nl> - <nl> - cl = gpr_cmdline_create ( nullptr ) ; <nl> - gpr_cmdline_add_int ( cl , " indent " , nullptr , & indent ) ; <nl> - gpr_cmdline_parse ( cl , argc , argv ) ; <nl> - gpr_cmdline_destroy ( cl ) ; <nl> - <nl> - return rewrite ( stdin , stdout , indent ) ? 0 : 1 ; <nl> - } <nl> deleted file mode 100644 <nl> index b7e89cdb1a3 . . 00000000000 <nl> mmm a / test / core / json / json_rewrite_test . cc <nl> ppp / dev / null <nl> <nl> - / * <nl> - * <nl> - * Copyright 2015 gRPC authors . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 
0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * <nl> - * / <nl> - <nl> - # include < stdio . h > <nl> - # include < stdlib . h > <nl> - <nl> - # include < grpc / support / alloc . h > <nl> - # include < grpc / support / log . h > <nl> - # include " test / core / util / test_config . h " <nl> - <nl> - # include " src / core / lib / gpr / useful . h " <nl> - # include " src / core / lib / json / json_reader . h " <nl> - # include " src / core / lib / json / json_writer . 
h " <nl> - <nl> - typedef struct json_writer_userdata { <nl> - FILE * cmp ; <nl> - } json_writer_userdata ; <nl> - <nl> - typedef struct stacked_container { <nl> - grpc_json_type type ; <nl> - struct stacked_container * next ; <nl> - } stacked_container ; <nl> - <nl> - typedef struct json_reader_userdata { <nl> - FILE * in ; <nl> - grpc_json_writer * writer ; <nl> - char * scratchpad ; <nl> - char * ptr ; <nl> - size_t free_space ; <nl> - size_t allocated ; <nl> - size_t string_len ; <nl> - stacked_container * top ; <nl> - int did_eagain ; <nl> - } json_reader_userdata ; <nl> - <nl> - static void json_writer_output_char ( void * userdata , char c ) { <nl> - json_writer_userdata * state = static_cast < json_writer_userdata * > ( userdata ) ; <nl> - int cmp = fgetc ( state - > cmp ) ; <nl> - <nl> - / * treat CRLF as LF * / <nl> - if ( cmp = = ' \ r ' & & c = = ' \ n ' ) { <nl> - cmp = fgetc ( state - > cmp ) ; <nl> - } <nl> - GPR_ASSERT ( cmp = = c ) ; <nl> - } <nl> - <nl> - static void json_writer_output_string ( void * userdata , const char * str ) { <nl> - while ( * str ) { <nl> - json_writer_output_char ( userdata , * str + + ) ; <nl> - } <nl> - } <nl> - <nl> - static void json_writer_output_string_with_len ( void * userdata , const char * str , <nl> - size_t len ) { <nl> - size_t i ; <nl> - for ( i = 0 ; i < len ; i + + ) { <nl> - json_writer_output_char ( userdata , str [ i ] ) ; <nl> - } <nl> - } <nl> - <nl> - grpc_json_writer_vtable writer_vtable = { json_writer_output_char , <nl> - json_writer_output_string , <nl> - json_writer_output_string_with_len } ; <nl> - <nl> - static void check_string ( json_reader_userdata * state , size_t needed ) { <nl> - if ( state - > free_space > = needed ) return ; <nl> - needed - = state - > free_space ; <nl> - needed = ( needed + 0xffu ) & ~ 0xffu ; <nl> - state - > scratchpad = static_cast < char * > ( <nl> - gpr_realloc ( state - > scratchpad , state - > allocated + needed ) ) ; <nl> - state - > free_space + = needed ; 
<nl> - state - > allocated + = needed ; <nl> - } <nl> - <nl> - static void json_reader_string_clear ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - state - > free_space = state - > allocated ; <nl> - state - > string_len = 0 ; <nl> - } <nl> - <nl> - static void json_reader_string_add_char ( void * userdata , uint32_t c ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - check_string ( state , 1 ) ; <nl> - GPR_ASSERT ( c < = 256 ) ; <nl> - state - > scratchpad [ state - > string_len + + ] = static_cast < char > ( c ) ; <nl> - } <nl> - <nl> - static void json_reader_string_add_utf32 ( void * userdata , uint32_t c ) { <nl> - if ( c < = 0x7f ) { <nl> - json_reader_string_add_char ( userdata , c ) ; <nl> - } else if ( c < = 0x7ffu ) { <nl> - uint32_t b1 = 0xc0u | ( ( c > > 6u ) & 0x1fu ) ; <nl> - uint32_t b2 = 0x80u | ( c & 0x3fu ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - } else if ( c < = 0xffffu ) { <nl> - uint32_t b1 = 0xe0u | ( ( c > > 12u ) & 0x0fu ) ; <nl> - uint32_t b2 = 0x80u | ( ( c > > 6u ) & 0x3fu ) ; <nl> - uint32_t b3 = 0x80u | ( c & 0x3fu ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - json_reader_string_add_char ( userdata , b3 ) ; <nl> - } else if ( c < = 0x1fffffu ) { <nl> - uint32_t b1 = 0xf0u | ( ( c > > 18u ) & 0x07u ) ; <nl> - uint32_t b2 = 0x80u | ( ( c > > 12u ) & 0x3fu ) ; <nl> - uint32_t b3 = 0x80u | ( ( c > > 6u ) & 0x3fu ) ; <nl> - uint32_t b4 = 0x80u | ( c & 0x3fu ) ; <nl> - json_reader_string_add_char ( userdata , b1 ) ; <nl> - json_reader_string_add_char ( userdata , b2 ) ; <nl> - json_reader_string_add_char ( userdata , b3 ) ; <nl> - json_reader_string_add_char ( userdata , b4 ) ; <nl> - } <nl> - } <nl> - <nl> - static uint32_t json_reader_read_char ( void * userdata ) 
{ <nl> - int r ; <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - if ( ! state - > did_eagain ) { <nl> - state - > did_eagain = 1 ; <nl> - return GRPC_JSON_READ_CHAR_EAGAIN ; <nl> - } <nl> - <nl> - state - > did_eagain = 0 ; <nl> - <nl> - r = fgetc ( state - > in ) ; <nl> - if ( r = = EOF ) r = GRPC_JSON_READ_CHAR_EOF ; <nl> - return static_cast < uint32_t > ( r ) ; <nl> - } <nl> - <nl> - static void json_reader_container_begins ( void * userdata , grpc_json_type type ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - stacked_container * container = <nl> - static_cast < stacked_container * > ( gpr_malloc ( sizeof ( stacked_container ) ) ) ; <nl> - <nl> - container - > type = type ; <nl> - container - > next = state - > top ; <nl> - state - > top = container ; <nl> - <nl> - grpc_json_writer_container_begins ( state - > writer , type ) ; <nl> - } <nl> - <nl> - static grpc_json_type json_reader_container_ends ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - stacked_container * container = state - > top ; <nl> - <nl> - grpc_json_writer_container_ends ( state - > writer , container - > type ) ; <nl> - state - > top = container - > next ; <nl> - gpr_free ( container ) ; <nl> - return state - > top ? 
state - > top - > type : GRPC_JSON_TOP_LEVEL ; <nl> - } <nl> - <nl> - static void json_reader_set_key ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - json_reader_string_add_char ( userdata , 0 ) ; <nl> - <nl> - grpc_json_writer_object_key ( state - > writer , state - > scratchpad ) ; <nl> - } <nl> - <nl> - static void json_reader_set_string ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - json_reader_string_add_char ( userdata , 0 ) ; <nl> - <nl> - grpc_json_writer_value_string ( state - > writer , state - > scratchpad ) ; <nl> - } <nl> - <nl> - static int json_reader_set_number ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , state - > scratchpad , <nl> - state - > string_len ) ; <nl> - <nl> - return 1 ; <nl> - } <nl> - <nl> - static void json_reader_set_true ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , " true " , 4 ) ; <nl> - } <nl> - <nl> - static void json_reader_set_false ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , " false " , 5 ) ; <nl> - } <nl> - <nl> - static void json_reader_set_null ( void * userdata ) { <nl> - json_reader_userdata * state = static_cast < json_reader_userdata * > ( userdata ) ; <nl> - <nl> - grpc_json_writer_value_raw_with_len ( state - > writer , " null " , 4 ) ; <nl> - } <nl> - <nl> - static grpc_json_reader_vtable reader_vtable = { <nl> - json_reader_string_clear , json_reader_string_add_char , <nl> - json_reader_string_add_utf32 , json_reader_read_char , <nl> - 
json_reader_container_begins , json_reader_container_ends , <nl> - json_reader_set_key , json_reader_set_string , <nl> - json_reader_set_number , json_reader_set_true , <nl> - json_reader_set_false , json_reader_set_null } ; <nl> - <nl> - int rewrite_and_compare ( FILE * in , FILE * cmp , int indent ) { <nl> - grpc_json_writer writer ; <nl> - grpc_json_reader reader ; <nl> - grpc_json_reader_status status ; <nl> - json_writer_userdata writer_user ; <nl> - json_reader_userdata reader_user ; <nl> - <nl> - GPR_ASSERT ( in ) ; <nl> - GPR_ASSERT ( cmp ) ; <nl> - <nl> - reader_user . writer = & writer ; <nl> - reader_user . in = in ; <nl> - reader_user . top = nullptr ; <nl> - reader_user . scratchpad = nullptr ; <nl> - reader_user . string_len = 0 ; <nl> - reader_user . free_space = 0 ; <nl> - reader_user . allocated = 0 ; <nl> - reader_user . did_eagain = 0 ; <nl> - <nl> - writer_user . cmp = cmp ; <nl> - <nl> - grpc_json_writer_init ( & writer , indent , & writer_vtable , & writer_user ) ; <nl> - grpc_json_reader_init ( & reader , & reader_vtable , & reader_user ) ; <nl> - <nl> - do { <nl> - status = grpc_json_reader_run ( & reader ) ; <nl> - } while ( status = = GRPC_JSON_EAGAIN ) ; <nl> - <nl> - free ( reader_user . scratchpad ) ; <nl> - while ( reader_user . top ) { <nl> - stacked_container * container = reader_user . top ; <nl> - reader_user . top = container - > next ; <nl> - free ( container ) ; <nl> - } <nl> - <nl> - return status = = GRPC_JSON_DONE ; <nl> - } <nl> - <nl> - typedef struct test_file { <nl> - const char * input ; <nl> - const char * cmp ; <nl> - int indent ; <nl> - } test_file ; <nl> - <nl> - static test_file test_files [ ] = { <nl> - { " test / core / json / rewrite_test_input . json " , <nl> - " test / core / json / rewrite_test_output_condensed . json " , 0 } , <nl> - { " test / core / json / rewrite_test_input . json " , <nl> - " test / core / json / rewrite_test_output_indented . 
json " , 2 } , <nl> - { " test / core / json / rewrite_test_output_indented . json " , <nl> - " test / core / json / rewrite_test_output_condensed . json " , 0 } , <nl> - { " test / core / json / rewrite_test_output_condensed . json " , <nl> - " test / core / json / rewrite_test_output_indented . json " , 2 } , <nl> - } ; <nl> - <nl> - void test_rewrites ( ) { <nl> - unsigned i ; <nl> - <nl> - for ( i = 0 ; i < GPR_ARRAY_SIZE ( test_files ) ; i + + ) { <nl> - test_file * test = test_files + i ; <nl> - FILE * input = fopen ( test - > input , " rb " ) ; <nl> - FILE * cmp = fopen ( test - > cmp , " rb " ) ; <nl> - int status ; <nl> - gpr_log ( GPR_INFO , " Testing file % s against % s using indent = % i " , test - > input , <nl> - test - > cmp , test - > indent ) ; <nl> - status = rewrite_and_compare ( input , cmp , test - > indent ) ; <nl> - GPR_ASSERT ( status ) ; <nl> - fclose ( input ) ; <nl> - fclose ( cmp ) ; <nl> - } <nl> - } <nl> - <nl> - int main ( int argc , char * * argv ) { <nl> - grpc : : testing : : TestEnvironment env ( argc , argv ) ; <nl> - test_rewrites ( ) ; <nl> - gpr_log ( GPR_INFO , " json_rewrite_test success " ) ; <nl> - return 0 ; <nl> - } <nl> deleted file mode 100644 <nl> index f31e9c95d86 . . 00000000000 <nl> mmm a / test / core / json / json_stream_error_test . cc <nl> ppp / dev / null <nl> <nl> - / * <nl> - * <nl> - * Copyright 2015 gRPC authors . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . 
<nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * <nl> - * / <nl> - <nl> - # include < stdio . h > <nl> - # include < stdlib . h > <nl> - <nl> - # include < grpc / support / alloc . h > <nl> - # include < grpc / support / log . h > <nl> - # include " test / core / util / test_config . h " <nl> - <nl> - # include " src / core / lib / json / json_reader . h " <nl> - # include " src / core / lib / json / json_writer . h " <nl> - <nl> - static int g_string_clear_once = 0 ; <nl> - <nl> - static void string_clear ( void * / * userdata * / ) { <nl> - GPR_ASSERT ( ! g_string_clear_once ) ; <nl> - g_string_clear_once = 1 ; <nl> - } <nl> - <nl> - static uint32_t read_char ( void * / * userdata * / ) { <nl> - return GRPC_JSON_READ_CHAR_ERROR ; <nl> - } <nl> - <nl> - static grpc_json_reader_vtable reader_vtable = { <nl> - string_clear , nullptr , nullptr , read_char , nullptr , nullptr , <nl> - nullptr , nullptr , nullptr , nullptr , nullptr , nullptr } ; <nl> - <nl> - static void read_error ( ) { <nl> - grpc_json_reader reader ; <nl> - grpc_json_reader_status status ; <nl> - grpc_json_reader_init ( & reader , & reader_vtable , nullptr ) ; <nl> - <nl> - status = grpc_json_reader_run ( & reader ) ; <nl> - GPR_ASSERT ( status = = GRPC_JSON_READ_ERROR ) ; <nl> - } <nl> - <nl> - int main ( int argc , char * * argv ) { <nl> - grpc : : testing : : TestEnvironment env ( argc , argv ) ; <nl> - read_error ( ) ; <nl> - gpr_log ( GPR_INFO , " json_stream_error success " ) ; <nl> - return 0 ; <nl> - } <nl> deleted file mode 100644 <nl> index 568891474f7 . . 00000000000 <nl> mmm a / test / core / json / rewrite_test_input . 
json <nl> ppp / dev / null <nl> <nl> - { <nl> - " unicode , escape and empty test " : { " a \ tb " : " \ u00eb " , " empty " : [ { } , [ ] , { } ] } , <nl> - " some more unicode tests " : { <nl> - " typical utf - 8 input ( plane 0 ) " : " ßâñć ⇒ " , <nl> - " atypical utf - 8 input ( plane 1 ) " : " 𝄞 " <nl> - } , <nl> - <nl> - " whitespace test " : { " trying " : <nl> - " to " <nl> - , <nl> - <nl> - " break " <nl> - : <nl> - " the " , <nl> - " parser " : " a bit " } , <nl> - <nl> - " # " : " All these examples are from http : / / json . org / example " , <nl> - " test1 " : <nl> - { <nl> - " glossary " : { <nl> - " title " : " example glossary " , <nl> - " GlossDiv " : { <nl> - " title " : " S " , <nl> - " GlossList " : { <nl> - " GlossEntry " : { <nl> - " ID " : " SGML " , <nl> - " SortAs " : " SGML " , <nl> - " GlossTerm " : " Standard Generalized Markup Language " , <nl> - " Acronym " : " SGML " , <nl> - " Abbrev " : " ISO 8879 : 1986 " , <nl> - " GlossDef " : { <nl> - " para " : " A meta - markup language , used to create markup languages such as DocBook . " , <nl> - " GlossSeeAlso " : [ " GML " , " XML " ] <nl> - } , <nl> - " GlossSee " : " markup " <nl> - } <nl> - } <nl> - } <nl> - } <nl> - } , <nl> - <nl> - " test2 " : <nl> - { " menu " : { <nl> - " id " : " file " , <nl> - " value " : " File " , <nl> - " popup " : { <nl> - " menuitem " : [ <nl> - { " value " : " New " , " onclick " : " CreateNewDoc ( ) " } , <nl> - { " value " : " Open " , " onclick " : " OpenDoc ( ) " } , <nl> - { " value " : " Close " , " onclick " : " CloseDoc ( ) " } <nl> - ] <nl> - } <nl> - } } , <nl> - <nl> - " test3 " : <nl> - { " widget " : { <nl> - " debug " : " on " , <nl> - " window " : { <nl> - " title " : " Sample Konfabulator Widget " , <nl> - " name " : " main_window " , <nl> - " width " : 500 , <nl> - " height " : 500 <nl> - } , <nl> - " image " : { <nl> - " src " : " Images / Sun . 
png " , <nl> - " name " : " sun1 " , <nl> - " hOffset " : 250 , <nl> - " vOffset " : 250 , <nl> - " alignment " : " center " <nl> - } , <nl> - " text " : { <nl> - " data " : " Click Here " , <nl> - " size " : 36 , <nl> - " style " : " bold " , <nl> - " name " : " text1 " , <nl> - " hOffset " : 250 , <nl> - " vOffset " : 100 , <nl> - " alignment " : " center " , <nl> - " onMouseUp " : " sun1 . opacity = ( sun1 . opacity / 100 ) * 90 ; " <nl> - } <nl> - } } , <nl> - <nl> - " test4 " : <nl> - { " web - app " : { <nl> - " servlet " : [ <nl> - { <nl> - " servlet - name " : " cofaxCDS " , <nl> - " servlet - class " : " org . cofax . cds . CDSServlet " , <nl> - " init - param " : { <nl> - " configGlossary : installationAt " : " Philadelphia , PA " , <nl> - " configGlossary : adminEmail " : " ksm @ pobox . com " , <nl> - " configGlossary : poweredBy " : " Cofax " , <nl> - " configGlossary : poweredByIcon " : " / images / cofax . gif " , <nl> - " configGlossary : staticPath " : " / content / static " , <nl> - " templateProcessorClass " : " org . cofax . WysiwygTemplate " , <nl> - " templateLoaderClass " : " org . cofax . FilesTemplateLoader " , <nl> - " templatePath " : " templates " , <nl> - " templateOverridePath " : " " , <nl> - " defaultListTemplate " : " listTemplate . htm " , <nl> - " defaultFileTemplate " : " articleTemplate . htm " , <nl> - " useJSP " : false , <nl> - " jspListTemplate " : " listTemplate . jsp " , <nl> - " jspFileTemplate " : " articleTemplate . jsp " , <nl> - " cachePackageTagsTrack " : 200 , <nl> - " cachePackageTagsStore " : 200 , <nl> - " cachePackageTagsRefresh " : 60 , <nl> - " cacheTemplatesTrack " : 100 , <nl> - " cacheTemplatesStore " : 50 , <nl> - " cacheTemplatesRefresh " : 15 , <nl> - " cachePagesTrack " : 200 , <nl> - " cachePagesStore " : 100 , <nl> - " cachePagesRefresh " : 10 , <nl> - " cachePagesDirtyRead " : 10 , <nl> - " searchEngineListTemplate " : " forSearchEnginesList . 
htm " , <nl> - " searchEngineFileTemplate " : " forSearchEngines . htm " , <nl> - " searchEngineRobotsDb " : " WEB - INF / robots . db " , <nl> - " useDataStore " : true , <nl> - " dataStoreClass " : " org . cofax . SqlDataStore " , <nl> - " redirectionClass " : " org . cofax . SqlRedirection " , <nl> - " dataStoreName " : " cofax " , <nl> - " dataStoreDriver " : " com . microsoft . jdbc . sqlserver . SQLServerDriver " , <nl> - " dataStoreUrl " : " jdbc : microsoft : sqlserver : / / LOCALHOST : 1433 ; DatabaseName = goon " , <nl> - " dataStoreUser " : " sa " , <nl> - " dataStorePassword " : " dataStoreTestQuery " , <nl> - " dataStoreTestQuery " : " SET NOCOUNT ON ; select test = ' test ' ; " , <nl> - " dataStoreLogFile " : " / usr / local / tomcat / logs / datastore . log " , <nl> - " dataStoreInitConns " : 10 , <nl> - " dataStoreMaxConns " : 100 , <nl> - " dataStoreConnUsageLimit " : 100 , <nl> - " dataStoreLogLevel " : " debug " , <nl> - " maxUrlLength " : 500 } } , <nl> - { <nl> - " servlet - name " : " cofaxEmail " , <nl> - " servlet - class " : " org . cofax . cds . EmailServlet " , <nl> - " init - param " : { <nl> - " mailHost " : " mail1 " , <nl> - " mailHostOverride " : " mail2 " } } , <nl> - { <nl> - " servlet - name " : " cofaxAdmin " , <nl> - " servlet - class " : " org . cofax . cds . AdminServlet " } , <nl> - <nl> - { <nl> - " servlet - name " : " fileServlet " , <nl> - " servlet - class " : " org . cofax . cds . FileServlet " } , <nl> - { <nl> - " servlet - name " : " cofaxTools " , <nl> - " servlet - class " : " org . cofax . cms . CofaxToolsServlet " , <nl> - " init - param " : { <nl> - " templatePath " : " toolstemplates / " , <nl> - " log " : 1 , <nl> - " logLocation " : " / usr / local / tomcat / logs / CofaxTools . log " , <nl> - " logMaxSize " : " " , <nl> - " dataLog " : 1 , <nl> - " dataLogLocation " : " / usr / local / tomcat / logs / dataLog . 
log " , <nl> - " dataLogMaxSize " : " " , <nl> - " removePageCache " : " / content / admin / remove ? cache = pages & id = " , <nl> - " removeTemplateCache " : " / content / admin / remove ? cache = templates & id = " , <nl> - " fileTransferFolder " : " / usr / local / tomcat / webapps / content / fileTransferFolder " , <nl> - " lookInContext " : 1 , <nl> - " adminGroupID " : 4 , <nl> - " betaServer " : true } } ] , <nl> - " servlet - mapping " : { <nl> - " cofaxCDS " : " / " , <nl> - " cofaxEmail " : " / cofaxutil / aemail / * " , <nl> - " cofaxAdmin " : " / admin / * " , <nl> - " fileServlet " : " / static / * " , <nl> - " cofaxTools " : " / tools / * " } , <nl> - <nl> - " taglib " : { <nl> - " taglib - uri " : " cofax . tld " , <nl> - " taglib - location " : " / WEB - INF / tlds / cofax . tld " } } } , <nl> - <nl> - " test5 " : <nl> - { " menu " : { <nl> - " header " : " SVG Viewer " , <nl> - " items " : [ <nl> - { " id " : " Open " } , <nl> - { " id " : " OpenNew " , " label " : " Open New " } , <nl> - null , <nl> - { " id " : " ZoomIn " , " label " : " Zoom In " } , <nl> - { " id " : " ZoomOut " , " label " : " Zoom Out " } , <nl> - { " id " : " OriginalView " , " label " : " Original View " } , <nl> - null , <nl> - { " id " : " Quality " } , <nl> - { " id " : " Pause " } , <nl> - { " id " : " Mute " } , <nl> - null , <nl> - { " id " : " Find " , " label " : " Find . . . " } , <nl> - { " id " : " FindAgain " , " label " : " Find Again " } , <nl> - { " id " : " Copy " } , <nl> - { " id " : " CopyAgain " , " label " : " Copy Again " } , <nl> - { " id " : " CopySVG " , " label " : " Copy SVG " } , <nl> - { " id " : " ViewSVG " , " label " : " View SVG " } , <nl> - { " id " : " ViewSource " , " label " : " View Source " } , <nl> - { " id " : " SaveAs " , " label " : " Save As " } , <nl> - null , <nl> - { " id " : " Help " } , <nl> - { " id " : " About " , " label " : " About Adobe CVG Viewer . . . 
" } <nl> - ] <nl> - } } <nl> - <nl> - <nl> - } <nl> deleted file mode 100644 <nl> index 3adbbd99d66 . . 00000000000 <nl> mmm a / test / core / json / rewrite_test_output_condensed . json <nl> ppp / dev / null <nl> @ @ - 1 + 0 , 0 @ @ <nl> - { " unicode , escape and empty test " : { " a \ tb " : " \ u00eb " , " empty " : [ { } , [ ] , { } ] } , " some more unicode tests " : { " typical utf - 8 input ( plane 0 ) " : " \ u00df \ u00e2 \ u00f1 \ u0107 \ u21d2 " , " atypical utf - 8 input ( plane 1 ) " : " \ ud834 \ udd1e " } , " whitespace test " : { " trying " : " to " , " break " : " the " , " parser " : " a bit " } , " # " : " All these examples are from http : / / json . org / example " , " test1 " : { " glossary " : { " title " : " example glossary " , " GlossDiv " : { " title " : " S " , " GlossList " : { " GlossEntry " : { " ID " : " SGML " , " SortAs " : " SGML " , " GlossTerm " : " Standard Generalized Markup Language " , " Acronym " : " SGML " , " Abbrev " : " ISO 8879 : 1986 " , " GlossDef " : { " para " : " A meta - markup language , used to create markup languages such as DocBook . " , " GlossSeeAlso " : [ " GML " , " XML " ] } , " GlossSee " : " markup " } } } } } , " test2 " : { " menu " : { " id " : " file " , " value " : " File " , " popup " : { " menuitem " : [ { " value " : " New " , " onclick " : " CreateNewDoc ( ) " } , { " value " : " Open " , " onclick " : " OpenDoc ( ) " } , { " value " : " Close " , " onclick " : " CloseDoc ( ) " } ] } } } , " test3 " : { " widget " : { " debug " : " on " , " window " : { " title " : " Sample Konfabulator Widget " , " name " : " main_window " , " width " : 500 , " height " : 500 } , " image " : { " src " : " Images / Sun . 
png " , " name " : " sun1 " , " hOffset " : 250 , " vOffset " : 250 , " alignment " : " center " } , " text " : { " data " : " Click Here " , " size " : 36 , " style " : " bold " , " name " : " text1 " , " hOffset " : 250 , " vOffset " : 100 , " alignment " : " center " , " onMouseUp " : " sun1 . opacity = ( sun1 . opacity / 100 ) * 90 ; " } } } , " test4 " : { " web - app " : { " servlet " : [ { " servlet - name " : " cofaxCDS " , " servlet - class " : " org . cofax . cds . CDSServlet " , " init - param " : { " configGlossary : installationAt " : " Philadelphia , PA " , " configGlossary : adminEmail " : " ksm @ pobox . com " , " configGlossary : poweredBy " : " Cofax " , " configGlossary : poweredByIcon " : " / images / cofax . gif " , " configGlossary : staticPath " : " / content / static " , " templateProcessorClass " : " org . cofax . WysiwygTemplate " , " templateLoaderClass " : " org . cofax . FilesTemplateLoader " , " templatePath " : " templates " , " templateOverridePath " : " " , " defaultListTemplate " : " listTemplate . htm " , " defaultFileTemplate " : " articleTemplate . htm " , " useJSP " : false , " jspListTemplate " : " listTemplate . jsp " , " jspFileTemplate " : " articleTemplate . jsp " , " cachePackageTagsTrack " : 200 , " cachePackageTagsStore " : 200 , " cachePackageTagsRefresh " : 60 , " cacheTemplatesTrack " : 100 , " cacheTemplatesStore " : 50 , " cacheTemplatesRefresh " : 15 , " cachePagesTrack " : 200 , " cachePagesStore " : 100 , " cachePagesRefresh " : 10 , " cachePagesDirtyRead " : 10 , " searchEngineListTemplate " : " forSearchEnginesList . htm " , " searchEngineFileTemplate " : " forSearchEngines . htm " , " searchEngineRobotsDb " : " WEB - INF / robots . db " , " useDataStore " : true , " dataStoreClass " : " org . cofax . SqlDataStore " , " redirectionClass " : " org . cofax . SqlRedirection " , " dataStoreName " : " cofax " , " dataStoreDriver " : " com . microsoft . jdbc . sqlserver . 
SQLServerDriver " , " dataStoreUrl " : " jdbc : microsoft : sqlserver : / / LOCALHOST : 1433 ; DatabaseName = goon " , " dataStoreUser " : " sa " , " dataStorePassword " : " dataStoreTestQuery " , " dataStoreTestQuery " : " SET NOCOUNT ON ; select test = ' test ' ; " , " dataStoreLogFile " : " / usr / local / tomcat / logs / datastore . log " , " dataStoreInitConns " : 10 , " dataStoreMaxConns " : 100 , " dataStoreConnUsageLimit " : 100 , " dataStoreLogLevel " : " debug " , " maxUrlLength " : 500 } } , { " servlet - name " : " cofaxEmail " , " servlet - class " : " org . cofax . cds . EmailServlet " , " init - param " : { " mailHost " : " mail1 " , " mailHostOverride " : " mail2 " } } , { " servlet - name " : " cofaxAdmin " , " servlet - class " : " org . cofax . cds . AdminServlet " } , { " servlet - name " : " fileServlet " , " servlet - class " : " org . cofax . cds . FileServlet " } , { " servlet - name " : " cofaxTools " , " servlet - class " : " org . cofax . cms . CofaxToolsServlet " , " init - param " : { " templatePath " : " toolstemplates / " , " log " : 1 , " logLocation " : " / usr / local / tomcat / logs / CofaxTools . log " , " logMaxSize " : " " , " dataLog " : 1 , " dataLogLocation " : " / usr / local / tomcat / logs / dataLog . log " , " dataLogMaxSize " : " " , " removePageCache " : " / content / admin / remove ? cache = pages & id = " , " removeTemplateCache " : " / content / admin / remove ? cache = templates & id = " , " fileTransferFolder " : " / usr / local / tomcat / webapps / content / fileTransferFolder " , " lookInContext " : 1 , " adminGroupID " : 4 , " betaServer " : true } } ] , " servlet - mapping " : { " cofaxCDS " : " / " , " cofaxEmail " : " / cofaxutil / aemail / * " , " cofaxAdmin " : " / admin / * " , " fileServlet " : " / static / * " , " cofaxTools " : " / tools / * " } , " taglib " : { " taglib - uri " : " cofax . tld " , " taglib - location " : " / WEB - INF / tlds / cofax . 
tld " } } } , " test5 " : { " menu " : { " header " : " SVG Viewer " , " items " : [ { " id " : " Open " } , { " id " : " OpenNew " , " label " : " Open New " } , null , { " id " : " ZoomIn " , " label " : " Zoom In " } , { " id " : " ZoomOut " , " label " : " Zoom Out " } , { " id " : " OriginalView " , " label " : " Original View " } , null , { " id " : " Quality " } , { " id " : " Pause " } , { " id " : " Mute " } , null , { " id " : " Find " , " label " : " Find . . . " } , { " id " : " FindAgain " , " label " : " Find Again " } , { " id " : " Copy " } , { " id " : " CopyAgain " , " label " : " Copy Again " } , { " id " : " CopySVG " , " label " : " Copy SVG " } , { " id " : " ViewSVG " , " label " : " View SVG " } , { " id " : " ViewSource " , " label " : " View Source " } , { " id " : " SaveAs " , " label " : " Save As " } , null , { " id " : " Help " } , { " id " : " About " , " label " : " About Adobe CVG Viewer . . . " } ] } } } <nl> \ No newline at end of file <nl> deleted file mode 100644 <nl> index 7ac9f49f229 . . 00000000000 <nl> mmm a / test / core / json / rewrite_test_output_indented . json <nl> ppp / dev / null <nl> <nl> - { <nl> - " unicode , escape and empty test " : { <nl> - " a \ tb " : " \ u00eb " , <nl> - " empty " : [ <nl> - { } , <nl> - [ ] , <nl> - { } <nl> - ] <nl> - } , <nl> - " some more unicode tests " : { <nl> - " typical utf - 8 input ( plane 0 ) " : " \ u00df \ u00e2 \ u00f1 \ u0107 \ u21d2 " , <nl> - " atypical utf - 8 input ( plane 1 ) " : " \ ud834 \ udd1e " <nl> - } , <nl> - " whitespace test " : { <nl> - " trying " : " to " , <nl> - " break " : " the " , <nl> - " parser " : " a bit " <nl> - } , <nl> - " # " : " All these examples are from http : / / json . 
org / example " , <nl> - " test1 " : { <nl> - " glossary " : { <nl> - " title " : " example glossary " , <nl> - " GlossDiv " : { <nl> - " title " : " S " , <nl> - " GlossList " : { <nl> - " GlossEntry " : { <nl> - " ID " : " SGML " , <nl> - " SortAs " : " SGML " , <nl> - " GlossTerm " : " Standard Generalized Markup Language " , <nl> - " Acronym " : " SGML " , <nl> - " Abbrev " : " ISO 8879 : 1986 " , <nl> - " GlossDef " : { <nl> - " para " : " A meta - markup language , used to create markup languages such as DocBook . " , <nl> - " GlossSeeAlso " : [ <nl> - " GML " , <nl> - " XML " <nl> - ] <nl> - } , <nl> - " GlossSee " : " markup " <nl> - } <nl> - } <nl> - } <nl> - } <nl> - } , <nl> - " test2 " : { <nl> - " menu " : { <nl> - " id " : " file " , <nl> - " value " : " File " , <nl> - " popup " : { <nl> - " menuitem " : [ <nl> - { <nl> - " value " : " New " , <nl> - " onclick " : " CreateNewDoc ( ) " <nl> - } , <nl> - { <nl> - " value " : " Open " , <nl> - " onclick " : " OpenDoc ( ) " <nl> - } , <nl> - { <nl> - " value " : " Close " , <nl> - " onclick " : " CloseDoc ( ) " <nl> - } <nl> - ] <nl> - } <nl> - } <nl> - } , <nl> - " test3 " : { <nl> - " widget " : { <nl> - " debug " : " on " , <nl> - " window " : { <nl> - " title " : " Sample Konfabulator Widget " , <nl> - " name " : " main_window " , <nl> - " width " : 500 , <nl> - " height " : 500 <nl> - } , <nl> - " image " : { <nl> - " src " : " Images / Sun . png " , <nl> - " name " : " sun1 " , <nl> - " hOffset " : 250 , <nl> - " vOffset " : 250 , <nl> - " alignment " : " center " <nl> - } , <nl> - " text " : { <nl> - " data " : " Click Here " , <nl> - " size " : 36 , <nl> - " style " : " bold " , <nl> - " name " : " text1 " , <nl> - " hOffset " : 250 , <nl> - " vOffset " : 100 , <nl> - " alignment " : " center " , <nl> - " onMouseUp " : " sun1 . opacity = ( sun1 . 
opacity / 100 ) * 90 ; " <nl> - } <nl> - } <nl> - } , <nl> - " test4 " : { <nl> - " web - app " : { <nl> - " servlet " : [ <nl> - { <nl> - " servlet - name " : " cofaxCDS " , <nl> - " servlet - class " : " org . cofax . cds . CDSServlet " , <nl> - " init - param " : { <nl> - " configGlossary : installationAt " : " Philadelphia , PA " , <nl> - " configGlossary : adminEmail " : " ksm @ pobox . com " , <nl> - " configGlossary : poweredBy " : " Cofax " , <nl> - " configGlossary : poweredByIcon " : " / images / cofax . gif " , <nl> - " configGlossary : staticPath " : " / content / static " , <nl> - " templateProcessorClass " : " org . cofax . WysiwygTemplate " , <nl> - " templateLoaderClass " : " org . cofax . FilesTemplateLoader " , <nl> - " templatePath " : " templates " , <nl> - " templateOverridePath " : " " , <nl> - " defaultListTemplate " : " listTemplate . htm " , <nl> - " defaultFileTemplate " : " articleTemplate . htm " , <nl> - " useJSP " : false , <nl> - " jspListTemplate " : " listTemplate . jsp " , <nl> - " jspFileTemplate " : " articleTemplate . jsp " , <nl> - " cachePackageTagsTrack " : 200 , <nl> - " cachePackageTagsStore " : 200 , <nl> - " cachePackageTagsRefresh " : 60 , <nl> - " cacheTemplatesTrack " : 100 , <nl> - " cacheTemplatesStore " : 50 , <nl> - " cacheTemplatesRefresh " : 15 , <nl> - " cachePagesTrack " : 200 , <nl> - " cachePagesStore " : 100 , <nl> - " cachePagesRefresh " : 10 , <nl> - " cachePagesDirtyRead " : 10 , <nl> - " searchEngineListTemplate " : " forSearchEnginesList . htm " , <nl> - " searchEngineFileTemplate " : " forSearchEngines . htm " , <nl> - " searchEngineRobotsDb " : " WEB - INF / robots . db " , <nl> - " useDataStore " : true , <nl> - " dataStoreClass " : " org . cofax . SqlDataStore " , <nl> - " redirectionClass " : " org . cofax . SqlRedirection " , <nl> - " dataStoreName " : " cofax " , <nl> - " dataStoreDriver " : " com . microsoft . jdbc . sqlserver . 
SQLServerDriver " , <nl> - " dataStoreUrl " : " jdbc : microsoft : sqlserver : / / LOCALHOST : 1433 ; DatabaseName = goon " , <nl> - " dataStoreUser " : " sa " , <nl> - " dataStorePassword " : " dataStoreTestQuery " , <nl> - " dataStoreTestQuery " : " SET NOCOUNT ON ; select test = ' test ' ; " , <nl> - " dataStoreLogFile " : " / usr / local / tomcat / logs / datastore . log " , <nl> - " dataStoreInitConns " : 10 , <nl> - " dataStoreMaxConns " : 100 , <nl> - " dataStoreConnUsageLimit " : 100 , <nl> - " dataStoreLogLevel " : " debug " , <nl> - " maxUrlLength " : 500 <nl> - } <nl> - } , <nl> - { <nl> - " servlet - name " : " cofaxEmail " , <nl> - " servlet - class " : " org . cofax . cds . EmailServlet " , <nl> - " init - param " : { <nl> - " mailHost " : " mail1 " , <nl> - " mailHostOverride " : " mail2 " <nl> - } <nl> - } , <nl> - { <nl> - " servlet - name " : " cofaxAdmin " , <nl> - " servlet - class " : " org . cofax . cds . AdminServlet " <nl> - } , <nl> - { <nl> - " servlet - name " : " fileServlet " , <nl> - " servlet - class " : " org . cofax . cds . FileServlet " <nl> - } , <nl> - { <nl> - " servlet - name " : " cofaxTools " , <nl> - " servlet - class " : " org . cofax . cms . CofaxToolsServlet " , <nl> - " init - param " : { <nl> - " templatePath " : " toolstemplates / " , <nl> - " log " : 1 , <nl> - " logLocation " : " / usr / local / tomcat / logs / CofaxTools . log " , <nl> - " logMaxSize " : " " , <nl> - " dataLog " : 1 , <nl> - " dataLogLocation " : " / usr / local / tomcat / logs / dataLog . log " , <nl> - " dataLogMaxSize " : " " , <nl> - " removePageCache " : " / content / admin / remove ? cache = pages & id = " , <nl> - " removeTemplateCache " : " / content / admin / remove ? 
cache = templates & id = " , <nl> - " fileTransferFolder " : " / usr / local / tomcat / webapps / content / fileTransferFolder " , <nl> - " lookInContext " : 1 , <nl> - " adminGroupID " : 4 , <nl> - " betaServer " : true <nl> - } <nl> - } <nl> - ] , <nl> - " servlet - mapping " : { <nl> - " cofaxCDS " : " / " , <nl> - " cofaxEmail " : " / cofaxutil / aemail / * " , <nl> - " cofaxAdmin " : " / admin / * " , <nl> - " fileServlet " : " / static / * " , <nl> - " cofaxTools " : " / tools / * " <nl> - } , <nl> - " taglib " : { <nl> - " taglib - uri " : " cofax . tld " , <nl> - " taglib - location " : " / WEB - INF / tlds / cofax . tld " <nl> - } <nl> - } <nl> - } , <nl> - " test5 " : { <nl> - " menu " : { <nl> - " header " : " SVG Viewer " , <nl> - " items " : [ <nl> - { <nl> - " id " : " Open " <nl> - } , <nl> - { <nl> - " id " : " OpenNew " , <nl> - " label " : " Open New " <nl> - } , <nl> - null , <nl> - { <nl> - " id " : " ZoomIn " , <nl> - " label " : " Zoom In " <nl> - } , <nl> - { <nl> - " id " : " ZoomOut " , <nl> - " label " : " Zoom Out " <nl> - } , <nl> - { <nl> - " id " : " OriginalView " , <nl> - " label " : " Original View " <nl> - } , <nl> - null , <nl> - { <nl> - " id " : " Quality " <nl> - } , <nl> - { <nl> - " id " : " Pause " <nl> - } , <nl> - { <nl> - " id " : " Mute " <nl> - } , <nl> - null , <nl> - { <nl> - " id " : " Find " , <nl> - " label " : " Find . . . 
" <nl> - } , <nl> - { <nl> - " id " : " FindAgain " , <nl> - " label " : " Find Again " <nl> - } , <nl> - { <nl> - " id " : " Copy " <nl> - } , <nl> - { <nl> - " id " : " CopyAgain " , <nl> - " label " : " Copy Again " <nl> - } , <nl> - { <nl> - " id " : " CopySVG " , <nl> - " label " : " Copy SVG " <nl> - } , <nl> - { <nl> - " id " : " ViewSVG " , <nl> - " label " : " View SVG " <nl> - } , <nl> - { <nl> - " id " : " ViewSource " , <nl> - " label " : " View Source " <nl> - } , <nl> - { <nl> - " id " : " SaveAs " , <nl> - " label " : " Save As " <nl> - } , <nl> - null , <nl> - { <nl> - " id " : " Help " <nl> - } , <nl> - { <nl> - " id " : " About " , <nl> - " label " : " About Adobe CVG Viewer . . . " <nl> - } <nl> - ] <nl> - } <nl> - } <nl> - } <nl> \ No newline at end of file <nl> mmm a / tools / doxygen / Doxyfile . c + + . internal <nl> ppp b / tools / doxygen / Doxyfile . c + + . internal <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / iomgr / wakeup_fd_posix . h \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json . h \ <nl> - src / core / lib / json / json_common . h \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_reader . h \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> - src / core / lib / json / json_writer . h \ <nl> src / core / lib / profiling / timers . h \ <nl> src / core / lib / slice / b64 . cc \ <nl> src / core / lib / slice / b64 . h \ <nl> mmm a / tools / doxygen / Doxyfile . core . internal <nl> ppp b / tools / doxygen / Doxyfile . core . internal <nl> src / core / lib / iomgr / wakeup_fd_posix . cc \ <nl> src / core / lib / iomgr / wakeup_fd_posix . h \ <nl> src / core / lib / json / json . cc \ <nl> src / core / lib / json / json . h \ <nl> - src / core / lib / json / json_common . h \ <nl> src / core / lib / json / json_reader . cc \ <nl> - src / core / lib / json / json_reader . 
h \ <nl> - src / core / lib / json / json_string . cc \ <nl> src / core / lib / json / json_writer . cc \ <nl> - src / core / lib / json / json_writer . h \ <nl> src / core / lib / profiling / basic_timers . cc \ <nl> src / core / lib / profiling / stap_timers . cc \ <nl> src / core / lib / profiling / timers . h \ <nl> mmm a / tools / run_tests / generated / tests . json <nl> ppp b / tools / run_tests / generated / tests . json <nl> <nl> ] , <nl> " uses_polling " : true <nl> } , <nl> - { <nl> - " args " : [ ] , <nl> - " benchmark " : false , <nl> - " ci_platforms " : [ <nl> - " linux " , <nl> - " mac " , <nl> - " posix " , <nl> - " windows " <nl> - ] , <nl> - " cpu_cost " : 1 . 0 , <nl> - " exclude_configs " : [ ] , <nl> - " exclude_iomgrs " : [ ] , <nl> - " flaky " : false , <nl> - " gtest " : false , <nl> - " language " : " c " , <nl> - " name " : " json_rewrite_test " , <nl> - " platforms " : [ <nl> - " linux " , <nl> - " mac " , <nl> - " posix " , <nl> - " windows " <nl> - ] , <nl> - " uses_polling " : false <nl> - } , <nl> - { <nl> - " args " : [ ] , <nl> - " benchmark " : false , <nl> - " ci_platforms " : [ <nl> - " linux " , <nl> - " mac " , <nl> - " posix " , <nl> - " windows " <nl> - ] , <nl> - " cpu_cost " : 1 . 0 , <nl> - " exclude_configs " : [ ] , <nl> - " exclude_iomgrs " : [ ] , <nl> - " flaky " : false , <nl> - " gtest " : false , <nl> - " language " : " c " , <nl> - " name " : " json_stream_error_test " , <nl> - " platforms " : [ <nl> - " linux " , <nl> - " mac " , <nl> - " posix " , <nl> - " windows " <nl> - ] , <nl> - " uses_polling " : false <nl> - } , <nl> { <nl> " args " : [ ] , <nl> " benchmark " : false , <nl>
|
Merge pull request from markdroth / json_remove_vtable
|
grpc/grpc
|
1b5da2b9cee9bad692af657277f9fa9d4055347a
|
2019-12-04T22:17:47Z
|
mmm a / cocos / 2d / CCLayer . h <nl> ppp b / cocos / 2d / CCLayer . h <nl> class CC_DLL Layer : public Node <nl> Only the touches of this node will be affected . This " method " is not propagated to it ' s children . <nl> @ since v0 . 8 . 1 <nl> * / <nl> - CC_DEPRECATED_ATTRIBUTE virtual bool isTouchEnabled ( ) const ; <nl> - CC_DEPRECATED_ATTRIBUTE virtual void setTouchEnabled ( bool value ) ; <nl> + CC_DEPRECATED_ATTRIBUTE bool isTouchEnabled ( ) const ; <nl> + CC_DEPRECATED_ATTRIBUTE void setTouchEnabled ( bool value ) ; <nl> <nl> CC_DEPRECATED_ATTRIBUTE virtual void setTouchMode ( Touch : : DispatchMode mode ) ; <nl> CC_DEPRECATED_ATTRIBUTE virtual Touch : : DispatchMode getTouchMode ( ) const ; <nl> mmm a / extensions / GUI / CCControlExtension / CCControl . cpp <nl> ppp b / extensions / GUI / CCControlExtension / CCControl . cpp <nl> bool Control : : init ( ) <nl> { <nl> if ( Layer : : init ( ) ) <nl> { <nl> - / / this - > setTouchEnabled ( true ) ; <nl> - / / _isTouchEnabled = true ; <nl> / / Initialise instance variables <nl> _state = Control : : State : : NORMAL ; <nl> setEnabled ( true ) ; <nl> mmm a / extensions / GUI / CCScrollView / CCScrollView . cpp <nl> ppp b / extensions / GUI / CCScrollView / CCScrollView . cpp <nl> void ScrollView : : resume ( Object * sender ) <nl> _container - > resume ( ) ; <nl> } <nl> <nl> + bool ScrollView : : isTouchEnabled ( ) const <nl> + { <nl> + return _touchListener ! = nullptr ; <nl> + } <nl> + <nl> void ScrollView : : setTouchEnabled ( bool enabled ) <nl> { <nl> _eventDispatcher - > removeEventListener ( _touchListener ) ; <nl> - <nl> + _touchListener = nullptr ; <nl> + <nl> if ( enabled ) <nl> { <nl> _touchListener = EventListenerTouchOneByOne : : create ( ) ; <nl> mmm a / extensions / GUI / CCScrollView / CCScrollView . h <nl> ppp b / extensions / GUI / CCScrollView / CCScrollView . 
h <nl> class ScrollView : public Layer <nl> void resume ( Object * sender ) ; <nl> <nl> void setTouchEnabled ( bool enabled ) ; <nl> + bool isTouchEnabled ( ) const ; <nl> bool isDragging ( ) const { return _dragging ; } <nl> bool isTouchMoved ( ) const { return _touchMoved ; } <nl> bool isBounceable ( ) const { return _bounceable ; } <nl>
|
Prevents warning of invoking ScrollView : : setTouchEnabled . Layer : : setTouchEnabled should not be a virtual function .
|
cocos2d/cocos2d-x
|
59c2647bd6ad240385d98887999af98a980bf668
|
2013-11-20T03:35:04Z
|
mmm a / CMakeLists . txt <nl> ppp b / CMakeLists . txt <nl> endif ( ) <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> <nl> if ( MSVC ) <nl> - if ( CMAKE_BUILD_TYPE STREQUAL " RelWithDebInfo " ) <nl> - message ( " clobbering RelWithDebInfo " ) <nl> - set ( CMAKE_C_FLAGS " $ { CMAKE_C_FLAGS } - DNDEBUG " ) <nl> - set ( CMAKE_CXX_FLAGS " $ { CMAKE_CXX_FLAGS } - DNDEBUG " ) <nl> - set ( CMAKE_C_FLAGS_RELWITHDEBINFO " $ { CMAKE_C_FLAGS_RELWITHDEBINFO } - DNDEBUG " ) <nl> - set ( CMAKE_CXX_FLAGS_RELWITHDEBINFO " $ { CMAKE_CXX_FLAGS_RELWITHDEBINFO } - DNDEBUG " ) <nl> - endif ( ) <nl> - <nl> add_definitions ( " - D_CRT_SECURE_NO_WARNINGS = 1 " ) <nl> add_definitions ( " - DFD_SETSIZE = 2048 " ) <nl> add_definitions ( " - DUSE_REGEX_STATIC = 1 " ) <nl>
|
cmake test end
|
arangodb/arangodb
|
cf7ea230243860ab11287424df22b2c6c71bb623
|
2015-09-04T13:48:00Z
|
mmm a / vendor / node <nl> ppp b / vendor / node <nl> @ @ - 1 + 1 @ @ <nl> - Subproject commit f76f1b16bddf025e1f26b26e483cc2dab1eb9dd9 <nl> + Subproject commit d6a6f2a8912501433ee8ba0d9fd53c065a11973e <nl>
|
Update vendor / node ref
|
electron/electron
|
a9eb267f924a59bd0f71591eceabcfb039cccccb
|
2018-06-19T01:49:44Z
|
mmm a / cmake / find / ldap . cmake <nl> ppp b / cmake / find / ldap . cmake <nl> if ( ENABLE_LDAP ) <nl> endif ( ) <nl> <nl> if ( NOT OPENLDAP_FOUND AND NOT MISSING_INTERNAL_LDAP_LIBRARY ) <nl> + string ( TOLOWER " $ { CMAKE_SYSTEM_NAME } " _system_name ) <nl> string ( TOLOWER " $ { CMAKE_SYSTEM_PROCESSOR } " _system_processor ) <nl> + <nl> if ( <nl> " $ { _system_processor } " STREQUAL " amd64 " OR <nl> " $ { _system_processor } " STREQUAL " x64 " <nl> if ( ENABLE_LDAP ) <nl> endif ( ) <nl> <nl> if ( <nl> - ( " $ { CMAKE_SYSTEM_NAME } " STREQUAL " Linux " AND " $ { _system_processor } " STREQUAL " x86_64 " ) OR <nl> - ( " $ { CMAKE_SYSTEM_NAME } " STREQUAL " FreeBSD " AND " $ { _system_processor } " STREQUAL " x86_64 " ) OR <nl> - ( " $ { CMAKE_SYSTEM_NAME } " STREQUAL " Darwin " AND " $ { _system_processor } " STREQUAL " x86_64 " ) <nl> + ( " $ { _system_name } " STREQUAL " linux " AND " $ { _system_processor } " STREQUAL " x86_64 " ) OR <nl> + ( " $ { _system_name } " STREQUAL " freebsd " AND " $ { _system_processor } " STREQUAL " x86_64 " ) OR <nl> + ( " $ { _system_name } " STREQUAL " darwin " AND " $ { _system_processor } " STREQUAL " x86_64 " ) <nl> ) <nl> set ( _ldap_supported_platform TRUE ) <nl> endif ( ) <nl> mmm a / contrib / openldap - cmake / CMakeLists . txt <nl> ppp b / contrib / openldap - cmake / CMakeLists . 
txt <nl> macro ( mkversion _lib_name ) <nl> ) <nl> endmacro ( ) <nl> <nl> + string ( TOLOWER " $ { CMAKE_SYSTEM_NAME } " _system_name ) <nl> string ( TOLOWER " $ { CMAKE_SYSTEM_PROCESSOR } " _system_processor ) <nl> + <nl> if ( <nl> " $ { _system_processor } " STREQUAL " amd64 " OR <nl> " $ { _system_processor } " STREQUAL " x64 " <nl> if ( <nl> set ( _system_processor " x86_64 " ) <nl> endif ( ) <nl> <nl> - set ( _extra_build_dir " $ { CMAKE_CURRENT_SOURCE_DIR } / $ { CMAKE_SYSTEM_NAME } _ $ { _system_processor } " ) <nl> + set ( _extra_build_dir " $ { CMAKE_CURRENT_SOURCE_DIR } / $ { _system_name } _ $ { _system_processor } " ) <nl> <nl> set ( _lber_srcs <nl> $ { OPENLDAP_SOURCE_DIR } / libraries / liblber / assert . c <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Darwin_x86_64 / include / lber_types . h <nl> rename to contrib / openldap - cmake / darwin_x86_64 / include / lber_types . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Darwin_x86_64 / include / ldap_config . h <nl> rename to contrib / openldap - cmake / darwin_x86_64 / include / ldap_config . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Darwin_x86_64 / include / ldap_features . h <nl> rename to contrib / openldap - cmake / darwin_x86_64 / include / ldap_features . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Darwin_x86_64 / include / portable . h <nl> rename to contrib / openldap - cmake / darwin_x86_64 / include / portable . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / FreeBSD_x86_64 / include / lber_types . h <nl> rename to contrib / openldap - cmake / freebsd_x86_64 / include / lber_types . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / FreeBSD_x86_64 / include / ldap_config . h <nl> rename to contrib / openldap - cmake / freebsd_x86_64 / include / ldap_config . 
h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / FreeBSD_x86_64 / include / ldap_features . h <nl> rename to contrib / openldap - cmake / freebsd_x86_64 / include / ldap_features . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / FreeBSD_x86_64 / include / portable . h <nl> rename to contrib / openldap - cmake / freebsd_x86_64 / include / portable . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Linux_x86_64 / include / lber_types . h <nl> rename to contrib / openldap - cmake / linux_x86_64 / include / lber_types . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Linux_x86_64 / include / ldap_config . h <nl> rename to contrib / openldap - cmake / linux_x86_64 / include / ldap_config . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Linux_x86_64 / include / ldap_features . h <nl> rename to contrib / openldap - cmake / linux_x86_64 / include / ldap_features . h <nl> similarity index 100 % <nl> rename from contrib / openldap - cmake / Linux_x86_64 / include / portable . h <nl> rename to contrib / openldap - cmake / linux_x86_64 / include / portable . h <nl>
|
Normalize CMAKE_SYSTEM_NAME and platform dependent folder names
|
ClickHouse/ClickHouse
|
b68664d33255f82bf9f9b48ed8012a0a3d953ecf
|
2020-05-14T20:40:27Z
|
mmm a / src / io / page_fmatrix - inl . hpp <nl> ppp b / src / io / page_fmatrix - inl . hpp <nl> class CSCMatrixManager { <nl> " invalid column buffer format " ) ; <nl> p_page - > col_data . push_back ( ColBatch : : Inst ( p_data , len ) ) ; <nl> p_page - > col_index . push_back ( cidx ) ; <nl> + return true ; <nl> } <nl> / / the following are in memory auxiliary data structure <nl> / * ! \ brief top of reader position * / <nl> class ThreadColPageIterator : public utils : : IIterator < ColBatch > { <nl> float page_ratio , bool silent ) { <nl> itr_ . SetParam ( " buffer_size " , " 2 " ) ; <nl> itr_ . get_factory ( ) . Setup ( fi , page_ratio ) ; <nl> + itr_ . Init ( ) ; <nl> if ( ! silent ) { <nl> utils : : Printf ( " ThreadColPageIterator : finish initialzing , % u columns \ n " , <nl> static_cast < unsigned > ( col_ptr ( ) . size ( ) - 1 ) ) ; <nl> class FMatrixPage : public IFMatrix { <nl> } <nl> virtual void InitColAccess ( float pkeep = 1 . 0f ) { <nl> if ( this - > HaveColAccess ( ) ) return ; <nl> - this - > InitColData ( pkeep , fname_cbuffer_ . c_str ( ) , <nl> - 64 < < 20 , 5 ) ; <nl> + if ( ! this - > LoadColData ( ) ) { <nl> + this - > InitColData ( pkeep , fname_cbuffer_ . c_str ( ) , <nl> + 64 < < 20 , 5 ) ; <nl> + utils : : Check ( this - > LoadColData ( ) , " fail to read in column data " ) ; <nl> + } <nl> } <nl> / * ! <nl> * \ brief get the row iterator associated with FMatrix <nl> mmm a / src / utils / matrix_csr . h <nl> ppp b / src / utils / matrix_csr . h <nl> <nl> * \ author Tianqi Chen <nl> * / <nl> # include < vector > <nl> + # include < utility > <nl> # include < algorithm > <nl> # include " . / io . h " <nl> # include " . / utils . h " <nl> struct SparseCSRFileBuilder { <nl> for ( size_t i = 1 ; i < rptr . 
size ( ) ; i + + ) { <nl> nelem + = rptr [ i ] ; <nl> rptr [ i ] = nelem ; <nl> - } <nl> + } <nl> begin_data = static_cast < SizeType > ( fo - > Tell ( ) ) + sizeof ( SizeType ) ; <nl> SizeType begin_meta = begin_data + nelem * sizeof ( IndexType ) ; <nl> fo - > Write ( & begin_meta , sizeof ( begin_meta ) ) ; <nl> struct SparseCSRFileBuilder { <nl> buffer_rptr . resize ( rptr . size ( ) ) ; <nl> buffer_temp . reserve ( buffer_size ) ; <nl> buffer_data . resize ( buffer_size ) ; <nl> - saved_offset . clear ( ) ; <nl> - saved_offset . resize ( rptr . size ( ) - 1 , 0 ) ; <nl> + saved_offset = rptr ; <nl> + saved_offset . resize ( rptr . size ( ) - 1 ) ; <nl> this - > ClearBuffer ( ) ; <nl> } <nl> / * ! \ brief step 4 : push element into buffer * / <nl> struct SparseCSRFileBuilder { <nl> this - > WriteBuffer ( ) ; <nl> this - > ClearBuffer ( ) ; <nl> } <nl> - buffer_temp . push_back ( std : : make_pair ( row_id , col_id ) ) ; <nl> + buffer_rptr [ row_id + 1 ] + = 1 ; <nl> + buffer_temp . push_back ( std : : make_pair ( row_id , col_id ) ) ; <nl> } <nl> / * ! \ brief finalize the construction * / <nl> inline void Finalize ( void ) { <nl> struct SparseCSRFileBuilder { <nl> inline void SortRows ( Comparator comp , size_t step ) { <nl> for ( size_t i = 0 ; i < rptr . size ( ) - 1 ; i + = step ) { <nl> bst_omp_uint begin = static_cast < bst_omp_uint > ( i ) ; <nl> - bst_omp_uint end = static_cast < bst_omp_uint > ( std : : min ( rptr . size ( ) , i + step ) ) ; <nl> + bst_omp_uint end = static_cast < bst_omp_uint > ( std : : min ( rptr . size ( ) - 1 , i + step ) ) ; <nl> if ( rptr [ end ] ! = rptr [ begin ] ) { <nl> fo - > Seek ( begin_data + rptr [ begin ] * sizeof ( IndexType ) ) ; <nl> buffer_data . 
resize ( rptr [ end ] - rptr [ begin ] ) ; <nl> fo - > Read ( BeginPtr ( buffer_data ) , ( rptr [ end ] - rptr [ begin ] ) * sizeof ( IndexType ) ) ; <nl> / / do parallel sorting <nl> # pragma omp parallel for schedule ( static ) <nl> - for ( bst_omp_uint j = begin ; j < end ; + + j ) { <nl> + for ( bst_omp_uint j = begin ; j < end ; + + j ) { <nl> std : : sort ( & buffer_data [ 0 ] + rptr [ j ] - rptr [ begin ] , <nl> & buffer_data [ 0 ] + rptr [ j + 1 ] - rptr [ begin ] , <nl> comp ) ; <nl> struct SparseCSRFileBuilder { <nl> fo - > Write ( BeginPtr ( buffer_data ) , ( rptr [ end ] - rptr [ begin ] ) * sizeof ( IndexType ) ) ; <nl> } <nl> } <nl> + printf ( " CSV : : begin_dat = % lu \ n " , begin_data ) ; <nl> } <nl> protected : <nl> inline void WriteBuffer ( void ) { <nl> struct SparseCSRFileBuilder { <nl> buffer_data [ rp + + ] = buffer_temp [ i ] . second ; <nl> } <nl> / / write out <nl> - for ( size_t i = 0 ; i < buffer_rptr . size ( ) ; + + i ) { <nl> + for ( size_t i = 0 ; i < buffer_rptr . size ( ) - 1 ; + + i ) { <nl> size_t nelem = buffer_rptr [ i + 1 ] - buffer_rptr [ i ] ; <nl> if ( nelem ! = 0 ) { <nl> - utils : : Assert ( saved_offset [ i ] < rptr [ i + 1 ] , " data exceed bound " ) ; <nl> - fo - > Seek ( ( rptr [ i ] + saved_offset [ i ] ) * sizeof ( IndexType ) + begin_data ) ; <nl> + utils : : Assert ( saved_offset [ i ] + nelem < = rptr [ i + 1 ] , " data exceed bound " ) ; <nl> + fo - > Seek ( saved_offset [ i ] * sizeof ( IndexType ) + begin_data ) ; <nl> fo - > Write ( & buffer_data [ 0 ] + buffer_rptr [ i ] , nelem * sizeof ( IndexType ) ) ; <nl> saved_offset [ i ] + = nelem ; <nl> } <nl>
|
still buggy
|
dmlc/xgboost
|
226d26d40c7a7c44e607dab2a7ae476b3e15fd58
|
2014-09-03T00:18:17Z
|
mmm a / src / library_sdl . js <nl> ppp b / src / library_sdl . js <nl> <nl> / / To use emscripten ' s SDL library here , you need to define <nl> / / Module . canvas and at least one of Module . ctx2D , Module . ctxGL . <nl> + / / <nl> + / / More specifically , our SDL implementation will look for <nl> + / / Module . canvas and Module . ctx2D . You should fill them using <nl> + / / something like <nl> + / / <nl> + / / function onLoad ( ) { <nl> + / / / / Pass canvas and context to the generated code <nl> + / / Module . canvas = document . getElementById ( ' canvas ' ) ; <nl> + / / Module . ctx2D = Module . canvas . getContext ( ' 2d ' ) ; <nl> + / / } <nl> + / / <nl> + / / Note that this must be called during onload , since you will <nl> + / / only be able to access the canvas element in the page after <nl> + / / it loads . You will likely also want to disable running by <nl> + / / default , with something like <nl> + / / <nl> + / / var Module = { <nl> + / / noInitialRun : true <nl> + / / } ; <nl> + / / <nl> + / / which is defined BEFORE you load the compiled code . Here <nl> + / / is a full example : <nl> + <nl> + / * <nl> + < html > <nl> + < head > <nl> + < title > Demo < / title > <nl> + < script type = ' text / javascript ' > <nl> + var Module = { <nl> + noInitialRun : true <nl> + } ; <nl> + <nl> + / / implement print <nl> + var print = function ( text ) { <nl> + var element = document . getElementById ( ' output ' ) <nl> + element . innerHTML = text . replace ( ' \ n ' , ' < br > ' , ' g ' ) + element . innerHTML ; <nl> + } <nl> + < / script > <nl> + < script src = ' doom . ccsimple . js ' type = ' text / javascript ' > < / script > <nl> + < script type = ' text / javascript ' > <nl> + function onLoad ( ) { <nl> + / / Pass canvas and context to the generated code , and do the actual run ( ) here <nl> + Module . canvas = document . getElementById ( ' canvas ' ) ; <nl> + Module . ctx2D = Module . canvas . getContext ( ' 2d ' ) ; <nl> + if ( ! Module . 
ctx2D ) { <nl> + alert ( ' Canvas not available : ( ' ) ; <nl> + return ; <nl> + } <nl> + Module . run ( ) ; <nl> + } <nl> + < / script > <nl> + < body onload = ' onLoad ( ) ' style = ' background - color : black ; color : white ' > <nl> + < center > <nl> + < canvas id = ' canvas ' width = ' 320 ' height = ' 200 ' > < / canvas > <nl> + < / center > <nl> + < div id = ' output ' > < / div > <nl> + < / body > <nl> + < / html > <nl> + * / <nl> <nl> mergeInto ( Library , { <nl> $ SDL__deps : [ ' $ Browser ' ] , <nl>
|
sdl docs
|
emscripten-core/emscripten
|
fe80ac33c9301a013f9a4679ded406af8ca2a760
|
2011-06-23T18:29:21Z
|
mmm a / emscripten . py <nl> ppp b / emscripten . py <nl> def load_from_cache ( chunk ) : <nl> # sent data <nl> basics = [ ' buffer ' , ' Int8Array ' , ' Int16Array ' , ' Int32Array ' , ' Uint8Array ' , ' Uint16Array ' , ' Uint32Array ' , ' Float32Array ' , ' Float64Array ' ] <nl> sending = ' { ' + ' , ' . join ( [ s + ' : ' + s for s in basics + global_vars + global_funcs ] ) + ' } ' <nl> + # received <nl> + receiving = ' ; \ n ' . join ( [ ' var ' + s + ' = Module [ " ' + s + ' " ] = asm . ' + s for s in exported_implemented_functions ] ) <nl> # finalize <nl> funcs_js = ' ' ' <nl> var asm = ( function ( env , buffer ) { <nl> def load_from_cache ( chunk ) : <nl> <nl> return % s ; <nl> } ) ( % s , buffer ) ; <nl> - for ( var _export in asm ) Module [ _export ] = asm [ _export ] ; <nl> - ' ' ' % ( exports , sending ) <nl> + % s ; <nl> + ' ' ' % ( exports , sending , receiving ) <nl> <nl> outputs = None <nl> if DEBUG : print > > sys . stderr , ' emscript : phase 2b took % s seconds ' % ( time . time ( ) - t ) <nl>
|
export asm . js exports to outside scope
|
emscripten-core/emscripten
|
709d9bf788b78992ab6aa7676e6a9f59ac696efa
|
2012-12-07T22:23:19Z
|
mmm a / modules / java / generator / gen_java . py <nl> ppp b / modules / java / generator / gen_java . py <nl> def getLibVersion ( version_hpp_path ) : <nl> epoch = re . search ( " ^ W * # \ W * define \ W + CV_VERSION_EPOCH \ W + ( \ d + ) \ W * $ " , version_file , re . MULTILINE ) . group ( 1 ) <nl> major = re . search ( " ^ W * # \ W * define \ W + CV_VERSION_MAJOR \ W + ( \ d + ) \ W * $ " , version_file , re . MULTILINE ) . group ( 1 ) <nl> minor = re . search ( " ^ W * # \ W * define \ W + CV_VERSION_MINOR \ W + ( \ d + ) \ W * $ " , version_file , re . MULTILINE ) . group ( 1 ) <nl> - patch = re . search ( " ^ W * # \ W * define \ W + CV_VERSION_REVISION \ W + ( \ d + ) \ W * $ " , version_file , re . MULTILINE ) . group ( 1 ) <nl> - return ( epoch , major , minor , patch ) <nl> + revision = re . search ( " ^ W * # \ W * define \ W + CV_VERSION_REVISION \ W + ( \ d + ) \ W * $ " , version_file , re . MULTILINE ) . group ( 1 ) <nl> + return ( epoch , major , minor , revision ) <nl> <nl> class ConstInfo ( object ) : <nl> def __init__ ( self , cname , name , val , addedManually = False ) : <nl> def add_class_code_stream ( self , class_name , cls_base = ' ' ) : <nl> " " " % { ' m ' : self . module , ' jc ' : jname } ) <nl> <nl> if class_name = = ' Core ' : <nl> - ( epoch , major , minor , patch ) = getLibVersion ( <nl> + ( epoch , major , minor , revision ) = getLibVersion ( <nl> ( os . path . dirname ( __file__ ) or ' . ' ) + ' / . . / . . / core / include / opencv2 / core / version . hpp ' ) <nl> - version_str = ' . ' . join ( ( epoch , major , minor , patch ) ) <nl> + version_str = ' . ' . join ( ( epoch , major , minor , revision ) ) <nl> version_suffix = ' ' . join ( ( epoch , major , minor ) ) <nl> - # if version_suffix . endswith ( ' 0 ' ) : <nl> - # version_suffix = version_suffix [ 0 : - 1 ] <nl> self . classes [ class_name ] . imports . add ( " java . lang . String " ) <nl> self . java_code [ class_name ] [ " j_code " ] . 
write ( " " " <nl> public static final String VERSION = " % ( v ) s " , NATIVE_LIBRARY_NAME = " opencv_java % ( vs ) s " ; <nl> - public static final int VERSION_EPOCH = % ( ep ) s , VERSION_MAJOR = % ( ma ) s , VERSION_MINOR = % ( mi ) s , VERSION_PATCH = % ( pa ) s ; <nl> - " " " % { ' v ' : version_str , ' vs ' : version_suffix , ' ep ' : epoch , ' ma ' : major , ' mi ' : minor , ' pa ' : patch } ) <nl> + public static final int VERSION_EPOCH = % ( ep ) s , VERSION_MAJOR = % ( ma ) s , VERSION_MINOR = % ( mi ) s , VERSION_REVISION = % ( re ) s ; <nl> + " " " % { ' v ' : version_str , ' vs ' : version_suffix , ' ep ' : epoch , ' ma ' : major , ' mi ' : minor , ' re ' : revision } ) <nl> <nl> <nl> def add_class ( self , decl ) : <nl>
|
patch - > revision
|
opencv/opencv
|
d18b2c2502cdb182225bce80350b4a149d60768a
|
2013-02-28T13:19:52Z
|
mmm a / fdbserver / storageserver . actor . cpp <nl> ppp b / fdbserver / storageserver . actor . cpp <nl> ACTOR Future < GetKeyValuesReply > readRange ( StorageServer * data , Version version , <nl> state StorageServer : : VersionedData : : iterator vCurrent = view . end ( ) ; <nl> state KeyRef readBegin ; <nl> state KeyRef readEnd ; <nl> + state Key readBeginTemp ; <nl> state int vCount = 0 ; <nl> <nl> / / for caching the storage queue results during the first PTree traversal <nl> state VectorRef < KeyValueRef > resultCache ; <nl> + <nl> + <nl> / / for remembering the position in the resultCache <nl> state int pos = 0 ; <nl> <nl> ACTOR Future < GetKeyValuesReply > readRange ( StorageServer * data , Version version , <nl> / / Read the data on disk up to vCurrent ( or the end of the range ) <nl> readEnd = vCurrent ? std : : min ( vCurrent . key ( ) , range . end ) : range . end ; <nl> Standalone < RangeResultRef > atStorageVersion = wait ( <nl> - data - > storage . readRange ( KeyRangeRef ( readBegin , readEnd ) , limit , * pLimitBytes ) ) ; <nl> + data - > storage . readRange ( KeyRangeRef ( readBegin , readEnd ) , limit , * pLimitBytes ) ) ; <nl> <nl> ASSERT ( atStorageVersion . size ( ) < = limit ) ; <nl> if ( data - > storageVersion ( ) > version ) throw transaction_too_old ( ) ; <nl> ACTOR Future < GetKeyValuesReply > readRange ( StorageServer * data , Version version , <nl> / / If we hit our limits reading from disk but then combining with MVCC gave us back more room <nl> if ( atStorageVersion . more ) { / / if there might be more data , begin reading right after what we already found to find out <nl> ASSERT ( result . data . end ( ) [ - 1 ] . key = = atStorageVersion . end ( ) [ - 1 ] . key ) ; <nl> - readBegin = keyAfter ( result . data . end ( ) [ - 1 ] . key ) ; <nl> + readBegin = readBeginTemp = keyAfter ( result . data . end ( ) [ - 1 ] . key ) ; <nl> } else if ( vCurrent & & vCurrent - > isClearTo ( ) ) { / / if vCurrent is a clear , skip it . 
<nl> ASSERT ( vCurrent - > getEndKey ( ) > readBegin ) ; <nl> readBegin = vCurrent - > getEndKey ( ) ; / / next disk read should start at the end of the clear <nl> ACTOR Future < GetKeyValuesReply > readRange ( StorageServer * data , Version version , <nl> <nl> / / all but the last item are less than * pLimitBytes <nl> ASSERT ( result . data . size ( ) = = 0 | | * pLimitBytes + result . data . end ( ) [ - 1 ] . expectedSize ( ) + sizeof ( KeyValueRef ) > 0 ) ; <nl> - <nl> result . more = limit = = 0 | | * pLimitBytes < = 0 ; / / FIXME : Does this have to be exact ? <nl> result . version = version ; <nl> return result ; <nl>
|
Fixed a corruption bug .
|
apple/foundationdb
|
4fa04c5891001111c8888372f61100ec2adb28b5
|
2020-05-02T16:22:15Z
|
mmm a / dbms / include / DB / Core / ErrorCodes . h <nl> ppp b / dbms / include / DB / Core / ErrorCodes . h <nl> namespace ErrorCodes <nl> NEGATIVE_REFCOUNT , <nl> CHUNK_NOT_FOUND , <nl> DUPLICATE_CHUNK_NAME , <nl> + NO_SUCH_TABLE , <nl> <nl> POCO_EXCEPTION = 1000 , <nl> STD_EXCEPTION , <nl> new file mode 100644 <nl> index 00000000000 . . 5add7ca7916 <nl> mmm / dev / null <nl> ppp b / dbms / include / DB / Storages / StorageChunkRef . h <nl> <nl> + # pragma once <nl> + <nl> + # include < DB / Storages / StorageChunks . h > <nl> + <nl> + <nl> + namespace DB <nl> + { <nl> + <nl> + / * * Ссылка на кусок данных в таблице типа Chunks . <nl> + * Запись не поддерживается . <nl> + * / <nl> + class StorageChunkRef : public IStorage <nl> + { <nl> + public : <nl> + static StoragePtr create ( const std : : string & name_ , NamesAndTypesListPtr columns_ , Context & context_ , const std : : string & source_database_name_ , const std : : string & source_table_name_ , bool attach ) ; <nl> + <nl> + std : : string getName ( ) const { return " ChunkRef " ; } <nl> + std : : string getTableName ( ) const { return name ; } <nl> + <nl> + const NamesAndTypesList & getColumnsList ( ) const { return * columns ; } <nl> + <nl> + BlockInputStreams read ( <nl> + const Names & column_names , <nl> + ASTPtr query , <nl> + const Settings & settings , <nl> + QueryProcessingStage : : Enum & processed_stage , <nl> + size_t max_block_size = DEFAULT_BLOCK_SIZE , <nl> + unsigned threads = 1 ) ; <nl> + <nl> + void dropImpl ( ) ; <nl> + <nl> + private : <nl> + String name ; <nl> + NamesAndTypesListPtr columns ; <nl> + String source_database_name ; <nl> + String source_table_name ; <nl> + Context context ; <nl> + <nl> + StorageChunkRef ( const std : : string & name_ , NamesAndTypesListPtr columns_ , Context & context_ , const std : : string & source_database_name_ , const std : : string & source_table_name_ , bool attach ) ; <nl> + <nl> + StorageChunks * getSource ( ) ; <nl> + } ; <nl> + <nl> + } <nl> new 
file mode 100644 <nl> index 00000000000 . . 7a9d4b4f359 <nl> mmm / dev / null <nl> ppp b / dbms / src / Storages / StorageChunkRef . cpp <nl> <nl> + # include < DB / Storages / StorageChunkRef . h > <nl> + <nl> + <nl> + namespace DB <nl> + { <nl> + <nl> + StoragePtr StorageChunkRef : : create ( const std : : string & name_ , NamesAndTypesListPtr columns_ , Context & context_ , const std : : string & source_database_name_ , const std : : string & source_table_name_ , bool attach ) <nl> + { <nl> + return ( new StorageChunkRef ( name_ , columns_ , context_ , source_database_name_ , source_table_name_ , attach ) ) - > thisPtr ( ) ; <nl> + } <nl> + <nl> + BlockInputStreams StorageChunkRef : : read ( <nl> + const Names & column_names , <nl> + ASTPtr query , <nl> + const Settings & settings , <nl> + QueryProcessingStage : : Enum & processed_stage , <nl> + size_t max_block_size , <nl> + unsigned threads ) <nl> + { <nl> + StorageChunks * chunks = getSource ( ) ; <nl> + if ( chunks = = NULL ) <nl> + throw Exception ( " Referenced table " + source_table_name + " in database " + source_database_name + " doesn ' t exist " , ErrorCodes : : NO_SUCH_TABLE ) ; <nl> + return chunks - > readFromChunk ( name , column_names , query , settings , processed_stage , max_block_size , threads ) ; <nl> + } <nl> + <nl> + void StorageChunkRef : : dropImpl ( ) <nl> + { <nl> + StorageChunks * chunks = getSource ( ) ; <nl> + if ( chunks = = NULL ) <nl> + LOG_ERROR ( & Logger : : get ( " StorageChunkRef " ) , " Referenced table " + source_table_name + " in database " + source_database_name + " doesn ' t exist " ) ; <nl> + chunks - > removeReference ( ) ; <nl> + } <nl> + <nl> + StorageChunkRef : : StorageChunkRef ( const std : : string & name_ , NamesAndTypesListPtr columns_ , Context & context_ , const std : : string & source_database_name_ , const std : : string & source_table_name_ , bool attach ) <nl> + : name ( name_ ) , columns ( columns_ ) , context ( context_ ) , source_database_name ( 
source_database_name_ ) , source_table_name ( source_table_name_ ) <nl> + { <nl> + if ( ! attach ) <nl> + { <nl> + StorageChunks * chunks = getSource ( ) ; <nl> + if ( chunks = = NULL ) <nl> + throw Exception ( " Referenced table " + source_table_name + " in database " + source_database_name + " doesn ' t exist " , ErrorCodes : : NO_SUCH_TABLE ) ; <nl> + chunks - > addReference ( ) ; <nl> + } <nl> + } <nl> + <nl> + StorageChunks * StorageChunkRef : : getSource ( ) <nl> + { <nl> + StoragePtr table_ptr = context . getTable ( source_database_name , source_table_name ) ; <nl> + StorageChunks * chunks = dynamic_cast < StorageChunks * > ( & * table_ptr ) ; <nl> + return chunks ; <nl> + } <nl> + <nl> + } <nl> mmm a / dbms / src / Storages / StorageChunks . cpp <nl> ppp b / dbms / src / Storages / StorageChunks . cpp <nl> void StorageChunks : : removeReference ( ) <nl> { <nl> Int64 c = reference_counter . add ( - 1 , false ) ; <nl> if ( c < 0 ) <nl> - throw Exception ( " Negative refcount on table " + getName ( ) , ErrorCodes : : NEGATIVE_REFCOUNT ) ; <nl> + throw Exception ( " Negative refcount on table " + name , ErrorCodes : : NEGATIVE_REFCOUNT ) ; <nl> if ( c = = 0 ) <nl> dropThis ( ) ; <nl> } <nl> BlockInputStreams StorageChunks : : readFromChunk ( <nl> Poco : : ScopedReadRWLock lock ( rwlock ) ; <nl> <nl> if ( ! chunk_indices . count ( chunk_name ) ) <nl> - throw Exception ( " No chunk " + chunk_name + " in table " + getName ( ) , ErrorCodes : : CHUNK_NOT_FOUND ) ; <nl> + throw Exception ( " No chunk " + chunk_name + " in table " + name , ErrorCodes : : CHUNK_NOT_FOUND ) ; <nl> size_t index = chunk_indices [ chunk_name ] ; <nl> mark1 = marks [ index ] ; <nl> mark2 = index + 1 = = marks . size ( ) ? marksCount ( ) : marks [ index + 1 ] ; <nl> BlockOutputStreamPtr StorageChunks : : writeToNewChunk ( <nl> Poco : : ScopedWriteRWLock lock ( rwlock ) ; <nl> <nl> if ( chunk_indices . 
count ( chunk_name ) ) <nl> - throw Exception ( " Duplicate chunk name in table " + getName ( ) , ErrorCodes : : DUPLICATE_CHUNK_NAME ) ; <nl> + throw Exception ( " Duplicate chunk name in table " + name , ErrorCodes : : DUPLICATE_CHUNK_NAME ) ; <nl> <nl> size_t mark = marksCount ( ) ; <nl> chunk_indices [ chunk_name ] = marks . size ( ) ; <nl> mmm a / dbms / src / Storages / StorageFactory . cpp <nl> ppp b / dbms / src / Storages / StorageFactory . cpp <nl> <nl> # include < DB / Storages / StorageSystemOne . h > <nl> # include < DB / Storages / StorageFactory . h > <nl> # include < DB / Storages / StorageChunks . h > <nl> + # include < DB / Storages / StorageChunkRef . h > <nl> <nl> <nl> namespace DB <nl> StoragePtr StorageFactory : : get ( <nl> { <nl> return StorageChunks : : create ( data_path , table_name , database_name , columns , context ) ; <nl> } <nl> + else if ( name = = " ChunkRef " ) <nl> + { <nl> + ASTs & args_func = dynamic_cast < ASTFunction & > ( * dynamic_cast < ASTCreateQuery & > ( * query ) . storage ) . children ; <nl> + <nl> + if ( args_func . size ( ) ! = 1 ) <nl> + throw Exception ( " Storage ChunkRef requires exactly 2 parameters " <nl> + " - names of source database and source table . " , <nl> + ErrorCodes : : NUMBER_OF_ARGUMENTS_DOESNT_MATCH ) ; <nl> + <nl> + ASTs & args = dynamic_cast < ASTExpressionList & > ( * args_func . at ( 0 ) ) . children ; <nl> + <nl> + if ( args . size ( ) ! = 2 ) <nl> + throw Exception ( " Storage ChunkRef requires exactly 2 parameters " <nl> + " - names of source database and source table . " , <nl> + ErrorCodes : : NUMBER_OF_ARGUMENTS_DOESNT_MATCH ) ; <nl> + <nl> + String source_database = dynamic_cast < ASTIdentifier & > ( * args [ 0 ] ) . name ; <nl> + String source_table = dynamic_cast < ASTIdentifier & > ( * args [ 1 ] ) . 
name ; <nl> + <nl> + return StorageChunkRef : : create ( table_name , columns , context , source_database , source_table , attach ) ; <nl> + } <nl> else if ( name = = " TinyLog " ) <nl> { <nl> return StorageTinyLog : : create ( data_path , table_name , columns , attach ) ; <nl>
|
clickhouse : added ChunkRef storage [ # CONV - 6705 ] .
|
ClickHouse/ClickHouse
|
746d879e4422de3b203788a9d4cb9ddd1ff5f120
|
2013-02-07T13:40:59Z
|
mmm a / test / distributed / test_distributed . py <nl> ppp b / test / distributed / test_distributed . py <nl> def test_all_gather_coalesced_with_empty ( self ) : <nl> self . _barrier ( ) <nl> <nl> # AllToAll <nl> - def _test_all_to_all_single_equal_split_helper ( self , group , group_id , rank ) : <nl> + def _test_all_to_all_single_equal_split_helper ( <nl> + self , <nl> + group , <nl> + group_id , <nl> + rank , <nl> + cuda = False , <nl> + rank_to_GPU = None , <nl> + ) : <nl> if group_id is not None : <nl> size = len ( group ) <nl> in_tensor = torch . ones ( [ size , size ] ) * rank <nl> expected_tensor = torch . cat ( [ torch . ones ( [ 1 , size ] ) * i for i in group ] ) <nl> out_tensor = torch . ones ( [ size , size ] ) * - 1 <nl> + if cuda : <nl> + in_tensor = in_tensor . cuda ( rank_to_GPU [ rank ] [ 0 ] ) <nl> + expected_tensor = expected_tensor . cuda ( rank_to_GPU [ rank ] [ 0 ] ) <nl> + out_tensor = out_tensor . cuda ( rank_to_GPU [ rank ] [ 0 ] ) <nl> dist . all_to_all_single ( out_tensor , in_tensor , group = group_id ) <nl> self . assertEqual ( out_tensor , expected_tensor ) <nl> self . _barrier ( ) <nl> <nl> - def _test_all_to_all_single_unequal_split_helper ( self , group , group_id , rank ) : <nl> + def _test_all_to_all_single_unequal_split_helper ( <nl> + self , <nl> + group , <nl> + group_id , <nl> + rank , <nl> + cuda = False , <nl> + rank_to_GPU = None , <nl> + ) : <nl> if group_id is not None : <nl> size = len ( group ) <nl> in_splits = [ i + 1 for i in group ] <nl> def _test_all_to_all_single_unequal_split_helper ( self , group , group_id , rank ) : <nl> in_tensor = torch . ones ( [ sum ( in_splits ) , size ] ) * rank <nl> out_tensor = torch . ones ( [ ( rank + 1 ) * size , size ] ) <nl> expected_tensor = torch . cat ( [ torch . ones ( [ rank + 1 , size ] ) * i for i in group ] ) <nl> + if cuda : <nl> + in_tensor = in_tensor . cuda ( rank_to_GPU [ rank ] [ 0 ] ) <nl> + expected_tensor = expected_tensor . 
cuda ( rank_to_GPU [ rank ] [ 0 ] ) <nl> + out_tensor = out_tensor . cuda ( rank_to_GPU [ rank ] [ 0 ] ) <nl> dist . all_to_all_single ( <nl> out_tensor , in_tensor , out_splits , in_splits , group = group_id ) <nl> self . assertEqual ( out_tensor , expected_tensor ) <nl> def _test_all_to_all_helper ( self , group , group_id , rank ) : <nl> self . assertEqual ( t1 , t2 ) <nl> self . _barrier ( ) <nl> <nl> - @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all_single " ) <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " mpi " , " Only MPI supports CPU all_to_all_single " <nl> + ) <nl> def test_all_to_all_single_equal_split ( self ) : <nl> group , group_id , rank = self . _init_global_test ( ) <nl> self . _test_all_to_all_single_equal_split_helper ( group , group_id , rank ) <nl> <nl> - @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all_single " ) <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " nccl " , " Only Nccl supports CUDA all_to_all_single " <nl> + ) <nl> + @ skip_if_no_gpu <nl> + @ skip_if_rocm <nl> + def test_all_to_all_single_equal_split_cuda ( self ) : <nl> + group , group_id , rank = self . _init_global_test ( ) <nl> + rank_to_GPU = self . _init_multigpu_helper ( ) <nl> + self . _test_all_to_all_single_equal_split_helper ( <nl> + group , <nl> + group_id , <nl> + rank , <nl> + True , <nl> + rank_to_GPU , <nl> + ) <nl> + <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " mpi " , " Only MPI supports CPU all_to_all_single " <nl> + ) <nl> def test_all_to_all_single_unequal_split ( self ) : <nl> group , group_id , rank = self . _init_global_test ( ) <nl> self . _test_all_to_all_single_unequal_split_helper ( group , group_id , rank ) <nl> <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " nccl " , " Only Nccl supports CUDA all_to_all_single " <nl> + ) <nl> + @ skip_if_no_gpu <nl> + @ skip_if_rocm <nl> + def test_all_to_all_single_unequal_split_cuda ( self ) : <nl> + group , group_id , rank = self . 
_init_global_test ( ) <nl> + rank_to_GPU = self . _init_multigpu_helper ( ) <nl> + self . _test_all_to_all_single_unequal_split_helper ( <nl> + group , <nl> + group_id , <nl> + rank , <nl> + True , <nl> + rank_to_GPU , <nl> + ) <nl> + <nl> @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all " ) <nl> def test_all_to_all ( self ) : <nl> group , group_id , rank = self . _init_global_test ( ) <nl> self . _test_all_to_all_helper ( group , group_id , rank ) <nl> <nl> - @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all_single " ) <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " mpi " , " Only MPI supports CPU all_to_all_single " <nl> + ) <nl> @ skip_if_small_worldsize <nl> def test_all_to_all_single_equal_split_group ( self ) : <nl> group , group_id , rank = self . _init_group_test ( ) <nl> self . _test_all_to_all_single_equal_split_helper ( group , group_id , rank ) <nl> <nl> - @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all_single " ) <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " nccl " , " Only Nccl supports CUDA all_to_all_single " <nl> + ) <nl> + @ skip_if_no_gpu <nl> + @ skip_if_rocm <nl> + @ skip_if_small_worldsize <nl> + def test_all_to_all_single_equal_split_group_cuda ( self ) : <nl> + group , group_id , rank = self . _init_group_test ( ) <nl> + rank_to_GPU = self . _init_multigpu_helper ( ) <nl> + self . _test_all_to_all_single_equal_split_helper ( <nl> + group , <nl> + group_id , <nl> + rank , <nl> + True , <nl> + rank_to_GPU , <nl> + ) <nl> + <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " mpi " , " Only MPI supports CPU all_to_all_single " <nl> + ) <nl> @ skip_if_small_worldsize <nl> def test_all_to_all_single_unequal_split_group ( self ) : <nl> group , group_id , rank = self . _init_group_test ( ) <nl> self . _test_all_to_all_single_unequal_split_helper ( group , group_id , rank ) <nl> <nl> + @ unittest . skipIf ( <nl> + BACKEND ! 
= " nccl " , " Only Nccl supports CUDA all_to_all_single " <nl> + ) <nl> + @ skip_if_no_gpu <nl> + @ skip_if_rocm <nl> + @ skip_if_small_worldsize <nl> + def test_all_to_all_single_unequal_split_group_cuda ( self ) : <nl> + group , group_id , rank = self . _init_global_test ( ) <nl> + rank_to_GPU = self . _init_multigpu_helper ( ) <nl> + self . _test_all_to_all_single_unequal_split_helper ( <nl> + group , <nl> + group_id , <nl> + rank , <nl> + True , <nl> + rank_to_GPU , <nl> + ) <nl> + <nl> @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all " ) <nl> @ skip_if_small_worldsize <nl> def test_all_to_all_group ( self ) : <nl> group , group_id , rank = self . _init_group_test ( ) <nl> self . _test_all_to_all_helper ( group , group_id , rank ) <nl> <nl> - @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all_single " ) <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " mpi " , " Only MPI supports CPU all_to_all_single " <nl> + ) <nl> def test_all_to_all_single_equal_split_full_group ( self ) : <nl> group , group_id , rank = self . _init_full_group_test ( ) <nl> self . _test_all_to_all_single_equal_split_helper ( group , group_id , rank ) <nl> <nl> - @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all_single " ) <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " nccl " , " Only Nccl supports CUDA all_to_all_single " <nl> + ) <nl> + @ skip_if_no_gpu <nl> + @ skip_if_rocm <nl> + def test_all_to_all_single_equal_split_full_group_cuda ( self ) : <nl> + group , group_id , rank = self . _init_full_group_test ( ) <nl> + rank_to_GPU = self . _init_multigpu_helper ( ) <nl> + self . _test_all_to_all_single_equal_split_helper ( <nl> + group , <nl> + group_id , <nl> + rank , <nl> + True , <nl> + rank_to_GPU , <nl> + ) <nl> + <nl> + @ unittest . skipIf ( <nl> + BACKEND ! 
= " mpi " , " Only MPI supports CPU all_to_all_single " <nl> + ) <nl> def test_all_to_all_single_unequal_split_full_group ( self ) : <nl> group , group_id , rank = self . _init_full_group_test ( ) <nl> self . _test_all_to_all_single_unequal_split_helper ( group , group_id , rank ) <nl> <nl> + @ unittest . skipIf ( <nl> + BACKEND ! = " nccl " , " Only Nccl supports CUDA all_to_all_single " <nl> + ) <nl> + @ skip_if_no_gpu <nl> + @ skip_if_rocm <nl> + def test_all_to_all_single_unequal_split_full_group_cuda ( self ) : <nl> + group , group_id , rank = self . _init_full_group_test ( ) <nl> + rank_to_GPU = self . _init_multigpu_helper ( ) <nl> + self . _test_all_to_all_single_unequal_split_helper ( <nl> + group , <nl> + group_id , <nl> + rank , <nl> + True , <nl> + rank_to_GPU , <nl> + ) <nl> + <nl> @ unittest . skipIf ( BACKEND ! = " mpi " , " Only MPI supports all_to_all " ) <nl> def test_all_to_all_full_group ( self ) : <nl> group , group_id , rank = self . _init_full_group_test ( ) <nl> mmm a / torch / lib / c10d / NCCLUtils . hpp <nl> ppp b / torch / lib / c10d / NCCLUtils . hpp <nl> <nl> # define ENABLE_NCCL_ERROR_CHECKING <nl> # endif <nl> <nl> + / / P2P is enabled only for NCCL versions 2 . 7 + since ncclSend ( ) <nl> + / / and ncclRecv ( ) are not supported in earlier versions . <nl> + # if defined ( NCCL_MAJOR ) & & ( NCCL_MAJOR = = 2 ) & & defined ( NCCL_MINOR ) & & \ <nl> + ( NCCL_MINOR > = 7 ) <nl> + # define ENABLE_NCCL_P2P_SUPPORT <nl> + # elif defined ( NCCL_MAJOR ) & & ( NCCL_MAJOR > = 3 ) <nl> + # define ENABLE_NCCL_P2P_SUPPORT <nl> + # endif <nl> + <nl> / / Macro to throw on a non - successful NCCL return value . <nl> # define C10D_NCCL_CHECK ( cmd ) \ <nl> do { \ <nl> mmm a / torch / lib / c10d / ProcessGroup . cpp <nl> ppp b / torch / lib / c10d / ProcessGroup . 
cpp <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroup : : allgather_coalesced ( <nl> " no support for allgather_coalesced in this process group " ) ; <nl> } <nl> <nl> + void ProcessGroup : : checkSplitSizes ( <nl> + const std : : vector < int64_t > & split_sizes , <nl> + const at : : Tensor & tensor , <nl> + int group_size ) { <nl> + if ( split_sizes . size ( ) = = 0 ) { <nl> + TORCH_CHECK ( <nl> + tensor . size ( 0 ) % group_size = = 0 , <nl> + " Tensor ' s dim 0 does not divide equally across group size " ) ; <nl> + } else { <nl> + TORCH_CHECK ( <nl> + split_sizes . size ( ) = = group_size , <nl> + " Number of tensor splits not equal to group size " ) ; <nl> + int sum = std : : accumulate ( split_sizes . begin ( ) , split_sizes . end ( ) , 0 ) ; <nl> + TORCH_CHECK ( <nl> + sum = = tensor . size ( 0 ) , " Split sizes doesn ' t match total dim 0 size " ) ; <nl> + } <nl> + } <nl> + <nl> + int64_t ProcessGroup : : computeLengthsAndOffsets ( <nl> + const std : : vector < int64_t > & split_sizes , <nl> + const at : : Tensor & tensor , <nl> + std : : vector < int > * lengths , <nl> + std : : vector < int > * offsets ) { <nl> + int64_t group_size = lengths - > size ( ) ; <nl> + bool equal_splits = false ; <nl> + int64_t dim0_size = tensor . size ( 0 ) ; <nl> + int64_t row_size = ( dim0_size ? tensor . numel ( ) / dim0_size : 1 ) ; <nl> + int64_t split_size = 0 ; <nl> + int64_t offset = 0 ; <nl> + <nl> + if ( split_sizes . size ( ) = = 0 ) { <nl> + equal_splits = true ; <nl> + split_size = tensor . size ( 0 ) / group_size ; <nl> + } <nl> + for ( int i = 0 ; i < group_size ; i + + ) { <nl> + int64_t length = row_size * ( equal_splits ? 
split_size : split_sizes [ i ] ) ; <nl> + TORCH_INTERNAL_ASSERT ( <nl> + length < = std : : numeric_limits < int > : : max ( ) & & <nl> + offset < = std : : numeric_limits < int > : : max ( ) , <nl> + " Length or offset larger than INT_MAX not supported " ) ; <nl> + ( * lengths ) [ i ] = length ; <nl> + ( * offsets ) [ i ] = offset ; <nl> + offset + = length ; <nl> + } <nl> + return offset ; <nl> + } <nl> + <nl> + int64_t ProcessGroup : : computeLengthsAndOffsets ( <nl> + const std : : vector < at : : Tensor > & tensors , <nl> + std : : vector < int > * lengths , <nl> + std : : vector < int > * offsets ) { <nl> + int64_t group_size = lengths - > size ( ) ; <nl> + int64_t offset = 0 ; <nl> + for ( int i = 0 ; i < group_size ; i + + ) { <nl> + int64_t length = tensors [ i ] . numel ( ) ; <nl> + TORCH_INTERNAL_ASSERT ( <nl> + length < = std : : numeric_limits < int > : : max ( ) & & <nl> + offset < = std : : numeric_limits < int > : : max ( ) , <nl> + " Length or offset larger than INT_MAX not supported " ) ; <nl> + ( * lengths ) [ i ] = length ; <nl> + ( * offsets ) [ i ] = offset ; <nl> + offset + = length ; <nl> + } <nl> + return offset ; <nl> + } <nl> + <nl> } / / namespace c10d <nl> mmm a / torch / lib / c10d / ProcessGroup . hpp <nl> ppp b / torch / lib / c10d / ProcessGroup . 
hpp <nl> class ProcessGroup { <nl> const BarrierOptions & opts = BarrierOptions ( ) ) = 0 ; <nl> <nl> protected : <nl> + void checkSplitSizes ( <nl> + const std : : vector < int64_t > & split_sizes , <nl> + const at : : Tensor & tensor , <nl> + int group_size ) ; <nl> + <nl> + int64_t computeLengthsAndOffsets ( <nl> + const std : : vector < int64_t > & split_sizes , <nl> + const at : : Tensor & tensor , <nl> + std : : vector < int > * lengths , <nl> + std : : vector < int > * offsets ) ; <nl> + <nl> + int64_t computeLengthsAndOffsets ( <nl> + const std : : vector < at : : Tensor > & tensors , <nl> + std : : vector < int > * lengths , <nl> + std : : vector < int > * offsets ) ; <nl> + <nl> const int rank_ ; <nl> const int size_ ; <nl> } ; <nl> mmm a / torch / lib / c10d / ProcessGroupMPI . cpp <nl> ppp b / torch / lib / c10d / ProcessGroupMPI . cpp <nl> void checkSameSizeAndType ( <nl> } <nl> } <nl> <nl> - void checkSplitSizes ( <nl> - const std : : vector < int64_t > & split_sizes , <nl> - const at : : Tensor & tensor , <nl> - int group_size ) { <nl> - if ( split_sizes . size ( ) = = 0 ) { <nl> - TORCH_CHECK ( <nl> - tensor . size ( 0 ) % group_size = = 0 , <nl> - " Tensor ' s dim 0 does not divide equally across group size " ) ; <nl> - } else { <nl> - TORCH_CHECK ( <nl> - split_sizes . size ( ) = = group_size , <nl> - " Number of tensor splits not equal to group size " ) ; <nl> - int sum = std : : accumulate ( split_sizes . begin ( ) , split_sizes . end ( ) , 0 ) ; <nl> - TORCH_CHECK ( <nl> - sum = = tensor . size ( 0 ) , " Split sizes doesn ' t match total dim 0 size " ) ; <nl> - } <nl> - } <nl> - <nl> - int64_t computeLengthsAndOffsets ( <nl> - const std : : vector < int64_t > & split_sizes , <nl> - const at : : Tensor & tensor , <nl> - std : : vector < int > * lengths , <nl> - std : : vector < int > * offsets ) { <nl> - int64_t group_size = lengths - > size ( ) ; <nl> - bool equal_splits = false ; <nl> - int64_t dim0_size = tensor . 
size ( 0 ) ; <nl> - int64_t row_size = ( dim0_size ? tensor . numel ( ) / dim0_size : 1 ) ; <nl> - int64_t split_size = 0 ; <nl> - int64_t offset = 0 ; <nl> - <nl> - if ( split_sizes . size ( ) = = 0 ) { <nl> - equal_splits = true ; <nl> - split_size = tensor . size ( 0 ) / group_size ; <nl> - } <nl> - for ( int i = 0 ; i < group_size ; i + + ) { <nl> - int64_t length = row_size * ( equal_splits ? split_size : split_sizes [ i ] ) ; <nl> - TORCH_INTERNAL_ASSERT ( <nl> - length < = std : : numeric_limits < int > : : max ( ) & & <nl> - offset < = std : : numeric_limits < int > : : max ( ) , <nl> - " Length or offset larger than INT_MAX not supported " ) ; <nl> - ( * lengths ) [ i ] = length ; <nl> - ( * offsets ) [ i ] = offset ; <nl> - offset + = length ; <nl> - } <nl> - return offset ; <nl> - } <nl> - <nl> - int64_t computeLengthsAndOffsets ( <nl> - const std : : vector < at : : Tensor > & tensors , <nl> - std : : vector < int > * lengths , <nl> - std : : vector < int > * offsets ) { <nl> - int64_t group_size = lengths - > size ( ) ; <nl> - int64_t offset = 0 ; <nl> - for ( int i = 0 ; i < group_size ; i + + ) { <nl> - int64_t length = tensors [ i ] . 
numel ( ) ; <nl> - TORCH_INTERNAL_ASSERT ( <nl> - length < = std : : numeric_limits < int > : : max ( ) & & <nl> - offset < = std : : numeric_limits < int > : : max ( ) , <nl> - " Length or offset larger than INT_MAX not supported " ) ; <nl> - ( * lengths ) [ i ] = length ; <nl> - ( * offsets ) [ i ] = offset ; <nl> - offset + = length ; <nl> - } <nl> - return offset ; <nl> - } <nl> - <nl> } / / namespace <nl> <nl> ProcessGroupMPI : : AsyncWork : : AsyncWork ( at : : Tensor tensor , MPI_Request request ) <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroupMPI : : alltoall_base ( <nl> return enqueue ( std : : move ( entry ) ) ; <nl> } else { <nl> / / Need alltoallv <nl> - checkSplitSizes ( inputSplitSizes , inputTensor , size_ ) ; <nl> - checkSplitSizes ( outputSplitSizes , outputTensor , size_ ) ; <nl> + ProcessGroup : : checkSplitSizes ( inputSplitSizes , inputTensor , size_ ) ; <nl> + ProcessGroup : : checkSplitSizes ( outputSplitSizes , outputTensor , size_ ) ; <nl> std : : function < void ( std : : unique_ptr < WorkEntry > & ) > runFunc = <nl> [ opts , this , inputSplitSizes , outputSplitSizes ] ( <nl> std : : unique_ptr < WorkEntry > & entry ) { <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroupMPI : : alltoall_base ( <nl> std : : vector < int > recv_lengths ( size_ ) ; <nl> std : : vector < int > send_offsets ( size_ ) ; <nl> std : : vector < int > recv_offsets ( size_ ) ; <nl> - computeLengthsAndOffsets ( <nl> + ProcessGroup : : computeLengthsAndOffsets ( <nl> inputSplitSizes , srcdata , & send_lengths , & send_offsets ) ; <nl> - computeLengthsAndOffsets ( <nl> + ProcessGroup : : computeLengthsAndOffsets ( <nl> outputSplitSizes , dstdata , & recv_lengths , & recv_offsets ) ; <nl> c10 : : DeviceGuard guard ( srcdata . 
device ( ) ) ; <nl> std : : unique_lock < std : : mutex > globalLock ( pgGlobalMutex_ ) ; <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroupMPI : : alltoall ( <nl> auto srcdata = entry - > src ; <nl> auto dstdata = entry - > dst ; <nl> int64_t src_len = <nl> - computeLengthsAndOffsets ( srcdata , & send_lengths , & send_offsets ) ; <nl> + ProcessGroup : : computeLengthsAndOffsets ( srcdata , & send_lengths , & send_offsets ) ; <nl> int64_t dst_len = <nl> - computeLengthsAndOffsets ( dstdata , & recv_lengths , & recv_offsets ) ; <nl> + ProcessGroup : : computeLengthsAndOffsets ( dstdata , & recv_lengths , & recv_offsets ) ; <nl> std : : vector < int64_t > send_lengthsL ( <nl> send_lengths . begin ( ) , send_lengths . end ( ) ) ; <nl> std : : vector < int64_t > recv_lengthsL ( <nl> mmm a / torch / lib / c10d / ProcessGroupNCCL . cpp <nl> ppp b / torch / lib / c10d / ProcessGroupNCCL . cpp <nl> std : : string getNcclAbortedCommStoreKey ( const std : : string ncclIdStr ) { <nl> return std : : string ( kNCCLAbortedCommStoreKey ) + " : " + ncclIdStr ; <nl> } <nl> <nl> + # ifdef ENABLE_NCCL_P2P_SUPPORT <nl> + ncclResult_t ncclAlltoall ( <nl> + void * sendbuff , <nl> + void * recvbuff , <nl> + size_t count , <nl> + size_t size , <nl> + ncclDataType_t type , <nl> + ncclComm_t comm , <nl> + cudaStream_t stream ) { <nl> + int numRanks ; <nl> + size_t rank_diff = count * size ; <nl> + C10D_NCCL_CHECK ( ncclCommCount ( comm , & numRanks ) ) ; <nl> + C10D_NCCL_CHECK ( ncclGroupStart ( ) ) ; <nl> + for ( int r = 0 ; r < numRanks ; r + + ) { <nl> + C10D_NCCL_CHECK ( ncclSend ( <nl> + ( ( char * ) sendbuff ) + r * rank_diff , count , type , r , comm , stream ) ) ; <nl> + C10D_NCCL_CHECK ( ncclRecv ( <nl> + ( ( char * ) recvbuff ) + r * rank_diff , count , type , r , comm , stream ) ) ; <nl> + } <nl> + C10D_NCCL_CHECK ( ncclGroupEnd ( ) ) ; <nl> + return ncclSuccess ; <nl> + } <nl> + <nl> + ncclResult_t ncclAlltoallv ( <nl> + void * sendbuff , <nl> + const int * 
sendcounts , <nl> + const int * senddispls , <nl> + void * recvbuff , <nl> + const int * recvcounts , <nl> + const int * recvdispls , <nl> + size_t size , <nl> + ncclDataType_t type , <nl> + ncclComm_t comm , <nl> + cudaStream_t stream ) { <nl> + int numRanks ; <nl> + C10D_NCCL_CHECK ( ncclCommCount ( comm , & numRanks ) ) ; <nl> + C10D_NCCL_CHECK ( ncclGroupStart ( ) ) ; <nl> + for ( int r = 0 ; r < numRanks ; r + + ) { <nl> + C10D_NCCL_CHECK ( ncclSend ( <nl> + ( ( char * ) sendbuff ) + senddispls [ r ] * size , <nl> + sendcounts [ r ] , <nl> + type , <nl> + r , <nl> + comm , <nl> + stream ) ) ; <nl> + C10D_NCCL_CHECK ( ncclRecv ( <nl> + ( ( char * ) recvbuff ) + recvdispls [ r ] * size , <nl> + recvcounts [ r ] , <nl> + type , <nl> + r , <nl> + comm , <nl> + stream ) ) ; <nl> + } <nl> + C10D_NCCL_CHECK ( ncclGroupEnd ( ) ) ; <nl> + return ncclSuccess ; <nl> + } <nl> + # endif <nl> + <nl> } / / namespace <nl> <nl> const int64_t ProcessGroupNCCL : : kWatchdogThreadSleepMillis = 10000 ; <nl> std : : vector < std : : shared_ptr < NCCLComm > > & ProcessGroupNCCL : : getNCCLComm ( <nl> <nl> namespace { <nl> <nl> + / / Check validity of tensor <nl> + void check_gpu_single_tensor ( const at : : Tensor & tensor ) { <nl> + if ( ! tensor . is_cuda ( ) | | tensor . is_sparse ( ) ) { <nl> + throw std : : runtime_error ( " Tensors must be CUDA and dense " ) ; <nl> + } <nl> + if ( ! tensor . is_contiguous ( ) ) { <nl> + throw std : : runtime_error ( " Tensors must be contiguous " ) ; <nl> + } <nl> + } <nl> + <nl> / / Check that all ` tensors ' have the same type and shape and are distributed <nl> / / across distinct GPUs . <nl> void check_gpu_tensors ( const std : : vector < at : : Tensor > & tensors ) { <nl> void check_gpu_tensors ( const std : : vector < at : : Tensor > & tensors ) { <nl> usedDevices . reserve ( tensors . size ( ) ) ; <nl> <nl> for ( const auto & t : tensors ) { <nl> - if ( ! t . is_cuda ( ) | | t . 
is_sparse ( ) ) { <nl> - throw std : : runtime_error ( " Tensors must be CUDA and dense " ) ; <nl> - } <nl> + check_gpu_single_tensor ( t ) ; <nl> if ( t . scalar_type ( ) ! = first . scalar_type ( ) ) { <nl> throw std : : runtime_error ( " Tensors must have identical type " ) ; <nl> } <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroupNCCL : : barrier ( <nl> return work ; <nl> } <nl> <nl> + std : : shared_ptr < ProcessGroup : : Work > ProcessGroupNCCL : : alltoall_base ( <nl> + at : : Tensor & outputTensor , <nl> + at : : Tensor & inputTensor , <nl> + std : : vector < int64_t > & outputSplitSizes , <nl> + std : : vector < int64_t > & inputSplitSizes , <nl> + const AllToAllOptions & / * unused * / ) { <nl> + # ifdef ENABLE_NCCL_P2P_SUPPORT <nl> + check_gpu_single_tensor ( outputTensor ) ; <nl> + check_gpu_single_tensor ( inputTensor ) ; <nl> + if ( outputSplitSizes . size ( ) = = 0 & & inputSplitSizes . size ( ) = = 0 ) { <nl> + std : : vector < at : : Tensor > inputTensors = { inputTensor } ; <nl> + std : : vector < at : : Tensor > outputTensors = { outputTensor } ; <nl> + return collective ( <nl> + inputTensors , <nl> + outputTensors , <nl> + [ & ] ( at : : Tensor & input , <nl> + at : : Tensor & output , <nl> + ncclComm_t comm , <nl> + at : : cuda : : CUDAStream & stream ) { <nl> + return ncclAlltoall ( <nl> + input . data_ptr ( ) , <nl> + output . data_ptr ( ) , <nl> + input . numel ( ) / size_ , <nl> + input . element_size ( ) , <nl> + getNcclDataType ( input . scalar_type ( ) ) , <nl> + comm , <nl> + stream . 
stream ( ) ) ; <nl> + } ) ; <nl> + } else { <nl> + ProcessGroup : : checkSplitSizes ( inputSplitSizes , inputTensor , size_ ) ; <nl> + ProcessGroup : : checkSplitSizes ( outputSplitSizes , outputTensor , size_ ) ; <nl> + std : : vector < at : : Tensor > inputTensors = { inputTensor } ; <nl> + std : : vector < at : : Tensor > outputTensors = { outputTensor } ; <nl> + return collective ( <nl> + inputTensors , <nl> + outputTensors , <nl> + [ & ] ( at : : Tensor & input , <nl> + at : : Tensor & output , <nl> + ncclComm_t comm , <nl> + at : : cuda : : CUDAStream & stream ) { <nl> + std : : vector < int > send_lengths ( size_ ) ; <nl> + std : : vector < int > recv_lengths ( size_ ) ; <nl> + std : : vector < int > send_offsets ( size_ ) ; <nl> + std : : vector < int > recv_offsets ( size_ ) ; <nl> + ProcessGroup : : computeLengthsAndOffsets ( <nl> + inputSplitSizes , input , & send_lengths , & send_offsets ) ; <nl> + ProcessGroup : : computeLengthsAndOffsets ( <nl> + outputSplitSizes , output , & recv_lengths , & recv_offsets ) ; <nl> + return ncclAlltoallv ( <nl> + input . data_ptr ( ) , <nl> + send_lengths . data ( ) , <nl> + send_offsets . data ( ) , <nl> + output . data_ptr ( ) , <nl> + recv_lengths . data ( ) , <nl> + recv_offsets . data ( ) , <nl> + input . element_size ( ) , <nl> + getNcclDataType ( input . scalar_type ( ) ) , <nl> + comm , <nl> + stream . stream ( ) ) ; <nl> + } ) ; <nl> + } <nl> + # else <nl> + throw std : : runtime_error ( <nl> + " ProcessGroupNCCL only supports alltoall * for NCCL lib version > = 2 . 7 . 
0 " ) ; <nl> + # endif <nl> + } <nl> + <nl> + std : : shared_ptr < ProcessGroup : : Work > ProcessGroupNCCL : : alltoall ( <nl> + std : : vector < at : : Tensor > & / * unused * / , <nl> + std : : vector < at : : Tensor > & / * unused * / , <nl> + const AllToAllOptions & / * unused * / ) { <nl> + throw std : : runtime_error ( " ProcessGroupNCCL does not support alltoall " ) ; <nl> + } <nl> + <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroupNCCL : : gather ( <nl> std : : vector < std : : vector < at : : Tensor > > & / * unused * / , <nl> std : : vector < at : : Tensor > & / * unused * / , <nl> mmm a / torch / lib / c10d / ProcessGroupNCCL . hpp <nl> ppp b / torch / lib / c10d / ProcessGroupNCCL . hpp <nl> class ProcessGroupNCCL : public ProcessGroup { <nl> std : : shared_ptr < ProcessGroup : : Work > barrier ( <nl> const BarrierOptions & opts = BarrierOptions ( ) ) override ; <nl> <nl> + std : : shared_ptr < ProcessGroup : : Work > alltoall_base ( <nl> + at : : Tensor & outputTensor , <nl> + at : : Tensor & inputTensor , <nl> + std : : vector < int64_t > & outputSplitSizes , <nl> + std : : vector < int64_t > & inputSplitSizes , <nl> + const AllToAllOptions & opts = AllToAllOptions ( ) ) override ; <nl> + <nl> + std : : shared_ptr < ProcessGroup : : Work > alltoall ( <nl> + std : : vector < at : : Tensor > & outputTensors , <nl> + std : : vector < at : : Tensor > & inputTensors , <nl> + const AllToAllOptions & opts = AllToAllOptions ( ) ) override ; <nl> + <nl> / / Unsupported Ops <nl> std : : shared_ptr < ProcessGroup : : Work > gather ( <nl> std : : vector < std : : vector < at : : Tensor > > & outputTensors , <nl> mmm a / torch / lib / c10d / ProcessGroupRoundRobin . cpp <nl> ppp b / torch / lib / c10d / ProcessGroupRoundRobin . 
cpp <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroupRoundRobin : : reduce_scatter ( <nl> return next ( ) - > reduce_scatter ( outputs , inputs , opts ) ; <nl> } ; <nl> <nl> + std : : shared_ptr < ProcessGroup : : Work > ProcessGroupRoundRobin : : alltoall_base ( <nl> + at : : Tensor & outputTensor , <nl> + at : : Tensor & inputTensor , <nl> + std : : vector < int64_t > & outputSplitSizes , <nl> + std : : vector < int64_t > & inputSplitSizes , <nl> + const AllToAllOptions & opts ) { <nl> + return next ( ) - > alltoall_base ( <nl> + outputTensor , inputTensor , outputSplitSizes , inputSplitSizes , opts ) ; <nl> + } ; <nl> + <nl> std : : shared_ptr < ProcessGroup : : Work > ProcessGroupRoundRobin : : send ( <nl> std : : vector < at : : Tensor > & / * unused * / , <nl> int / * unused * / , <nl> mmm a / torch / lib / c10d / ProcessGroupRoundRobin . hpp <nl> ppp b / torch / lib / c10d / ProcessGroupRoundRobin . hpp <nl> class ProcessGroupRoundRobin final : public ProcessGroup { <nl> std : : vector < std : : vector < at : : Tensor > > & inputs , <nl> const ReduceScatterOptions & opts = ReduceScatterOptions ( ) ) override ; <nl> <nl> + std : : shared_ptr < ProcessGroup : : Work > alltoall_base ( <nl> + at : : Tensor & outputTensor , <nl> + at : : Tensor & inputTensor , <nl> + std : : vector < int64_t > & outputSplitSizes , <nl> + std : : vector < int64_t > & inputSplitSizes , <nl> + const AllToAllOptions & opts = AllToAllOptions ( ) ) override ; <nl> + <nl> std : : shared_ptr < ProcessGroup : : Work > send ( <nl> std : : vector < at : : Tensor > & tensors , <nl> int dstRank , <nl>
|
Add NCCL Alltoall to PT NCCL process group ( )
|
pytorch/pytorch
|
b87f0e5085d30e1a1849436e6b9065edbe94e43d
|
2020-07-22T17:55:51Z
|
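The pytorch commit above hoists `checkSplitSizes` / `computeLengthsAndOffsets` from `ProcessGroupMPI` into the `ProcessGroup` base class so the new NCCL alltoall path can reuse them. A minimal Python sketch of that helper's logic follows; the function name and the flattened `(dim0_size, row_size)` arguments are illustrative, not the real C++ signature, which takes an `at::Tensor`:

```python
def compute_lengths_and_offsets(split_sizes, dim0_size, row_size, group_size):
    """Sketch of ProcessGroup::computeLengthsAndOffsets.

    An empty split_sizes list means dim 0 is divided equally across the
    group; otherwise split_sizes[i] rows go to rank i. Returns per-rank
    element counts, per-rank starting offsets, and the total length.
    """
    equal_splits = len(split_sizes) == 0
    split_size = dim0_size // group_size if equal_splits else 0
    lengths, offsets, offset = [], [], 0
    for i in range(group_size):
        # each split covers some number of rows, row_size elements per row
        length = row_size * (split_size if equal_splits else split_sizes[i])
        lengths.append(length)
        offsets.append(offset)
        offset += length
    return lengths, offsets, offset
```

For example, an 8-row tensor with 3 elements per row split equally over 4 ranks yields lengths `[6, 6, 6, 6]` and offsets `[0, 6, 12, 18]`; these arrays feed directly into the `ncclAlltoallv` send/recv loop in the diff.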
mmm a / src / types . h <nl> ppp b / src / types . h <nl> class TypeImpl : public Config : : Base { <nl> <nl> template < class Config > <nl> class TypeImpl < Config > : : BitsetType : public TypeImpl < Config > { <nl> - protected : <nl> + public : / / protected : <nl> friend class TypeImpl < Config > ; <nl> <nl> enum { <nl> class TypeImpl < Config > : : BitsetType : public TypeImpl < Config > { <nl> <nl> bitset Bitset ( ) { return Config : : as_bitset ( this ) ; } <nl> <nl> - static TypeImpl * New ( bitset bits ) { <nl> - return static_cast < BitsetType * > ( Config : : from_bitset ( bits ) ) ; <nl> - } <nl> + static TypeImpl * New ( bitset bits ) { return Config : : from_bitset ( bits ) ; } <nl> static TypeHandle New ( bitset bits , Region * region ) { <nl> return Config : : from_bitset ( bits , region ) ; <nl> } <nl> mmm a / test / cctest / test - types . cc <nl> ppp b / test / cctest / test - types . cc <nl> struct Tests : Rep { <nl> CHECK ( this - > IsBitset ( T . Any ) ) ; <nl> <nl> CHECK ( bitset ( 0 ) = = this - > AsBitset ( T . None ) ) ; <nl> - printf ( " [ BitSet ] % p ( % p ) = = % p ( % p ) \ n " , <nl> + printf ( " [ BitSet ] value = % p enum = % p bitset = % p any = % p this = % p any = % p \ n " , <nl> reinterpret_cast < void * > ( bitset ( 0xfffffffeu ) ) , <nl> + reinterpret_cast < void * > ( bitset ( HeapType : : BitsetType : : kAny ) ) , <nl> + reinterpret_cast < void * > ( <nl> + HeapTypeConfig : : from_bitset ( HeapType : : BitsetType : : kAny ) ) , <nl> reinterpret_cast < void * > ( HeapType : : Any ( ) ) , <nl> reinterpret_cast < void * > ( this - > AsBitset ( T . Any ) ) , <nl> reinterpret_cast < void * > ( * T . Any ) ) ; <nl>
|
Moar prints
|
v8/v8
|
06e826493a07e7289fe654db1523097b0a69ab40
|
2014-09-15T11:19:20Z
|
mmm a / Editor / Styles / stylesheet . qss <nl> ppp b / Editor / Styles / stylesheet . qss <nl> QLineEdit { <nl> } <nl> <nl> QLineEdit : focus { <nl> - color : rgb ( 97 , 172 , 236 ) ; <nl> + border : 1px solid rgb ( 97 , 172 , 236 ) outset ; <nl> } <nl> <nl> QSizeGrip { <nl> QThumbnailsView > QToolButton { <nl> icon - size : 17px ; <nl> } <nl> <nl> - QAbstractItemView : : item : : text { <nl> - selection - background : green ; <nl> + CAssetBrowser CBreadcrumbsBar <nl> + { <nl> + max - height : 22px ; <nl> + } <nl> + <nl> + CAssetBrowser CBreadcrumbsBar QToolButton <nl> + { <nl> + margin - top : 2px ; <nl> + } <nl> + <nl> + CAssetBrowser CBreadcrumbsBar QLineEdit <nl> + { <nl> + min - height : 20px ; <nl> + } <nl> + <nl> + CAssetBrowser CBreadcrumbsBar QLineEdit [ error = true ] <nl> + { <nl> + border : 1px solid rgb ( 193 , 70 , 70 ) outset ; <nl> + } <nl> + <nl> + QWidget # HoverWidget <nl> + { <nl> + margin - top : 2px ; <nl> + margin - left : 1px ; <nl> + margin - right : 3px ; <nl> + min - height : 18px ; <nl> } <nl> + <nl> + QWidget # HoverWidget : hover <nl> + { <nl> + background - color : rgb ( 86 , 86 , 86 ) ; <nl> + border - radius : 3px ; <nl> + } <nl> \ No newline at end of file <nl>
|
! XF ( DEV - 4121 ) ( Asset Browser ) Asset path in Asset Browser is now modifiable via text editing
|
CRYTEK/CRYENGINE
|
08d740c45d5f4101df76fd9a8a6f661e697c245f
|
2017-08-21T08:48:23Z
|
mmm a / src / core / file_sys / card_image . cpp <nl> ppp b / src / core / file_sys / card_image . cpp <nl> XCI : : XCI ( VirtualFile file_ ) : file ( std : : move ( file_ ) ) , partitions ( 0x4 ) { <nl> const auto secure_ncas = secure_partition - > GetNCAsCollapsed ( ) ; <nl> std : : copy ( secure_ncas . begin ( ) , secure_ncas . end ( ) , std : : back_inserter ( ncas ) ) ; <nl> <nl> - program_nca_status = Loader : : ResultStatus : : ErrorXCIMissingProgramNCA ; <nl> program = <nl> secure_partition - > GetNCA ( secure_partition - > GetProgramTitleID ( ) , ContentRecordType : : Program ) ; <nl> - if ( program ! = nullptr ) <nl> - program_nca_status = program - > GetStatus ( ) ; <nl> + program_nca_status = secure_partition - > GetProgramStatus ( secure_partition - > GetProgramTitleID ( ) ) ; <nl> + if ( program_nca_status = = Loader : : ResultStatus : : ErrorNSPMissingProgramNCA ) <nl> + program_nca_status = Loader : : ResultStatus : : ErrorXCIMissingProgramNCA ; <nl> <nl> auto result = AddNCAFromPartition ( XCIPartition : : Update ) ; <nl> if ( result ! = Loader : : ResultStatus : : Success ) { <nl> mmm a / src / core / file_sys / content_archive . cpp <nl> ppp b / src / core / file_sys / content_archive . cpp <nl> NCA : : NCA ( VirtualFile file_ , VirtualFile bktr_base_romfs_ , u64 bktr_base_ivfc_off <nl> dirs . push_back ( std : : move ( npfs ) ) ; <nl> if ( IsDirectoryExeFS ( dirs . back ( ) ) ) <nl> exefs = dirs . back ( ) ; <nl> + } else { <nl> + if ( has_rights_id ) <nl> + status = Loader : : ResultStatus : : ErrorIncorrectTitlekeyOrTitlekek ; <nl> + else <nl> + status = Loader : : ResultStatus : : ErrorIncorrectKeyAreaKey ; <nl> + return ; <nl> } <nl> } else { <nl> if ( status ! = Loader : : ResultStatus : : Success ) <nl> NCAContentType NCA : : GetType ( ) const { <nl> u64 NCA : : GetTitleId ( ) const { <nl> if ( is_update | | status = = Loader : : ResultStatus : : ErrorMissingBKTRBaseRomFS ) <nl> return header . title_id | 0x800 ; <nl> - if ( status ! 
= Loader : : ResultStatus : : Success ) <nl> - return { } ; <nl> return header . title_id ; <nl> } <nl> <nl> mmm a / src / core / file_sys / submission_package . cpp <nl> ppp b / src / core / file_sys / submission_package . cpp <nl> NSP : : NSP ( VirtualFile file_ ) <nl> for ( const auto & outer_file : files ) { <nl> if ( outer_file - > GetName ( ) . substr ( outer_file - > GetName ( ) . size ( ) - 9 ) = = " . cnmt . nca " ) { <nl> const auto nca = std : : make_shared < NCA > ( outer_file ) ; <nl> - if ( nca - > GetStatus ( ) ! = Loader : : ResultStatus : : Success ) <nl> + if ( nca - > GetStatus ( ) ! = Loader : : ResultStatus : : Success ) { <nl> + program_status [ nca - > GetTitleId ( ) ] = nca - > GetStatus ( ) ; <nl> continue ; <nl> + } <nl> + <nl> const auto section0 = nca - > GetSubdirectories ( ) [ 0 ] ; <nl> <nl> for ( const auto & inner_file : section0 - > GetFiles ( ) ) { <nl>
|
nsp : Fix error masking issue with XCI files
|
yuzu-emu/yuzu
|
92e26df00f12de2e084ceb84d17ca79c5323a315
|
2018-09-04T20:24:24Z
|
mmm a / modules / perception / onboard / component / trafficlights_perception_component . cc <nl> ppp b / modules / perception / onboard / component / trafficlights_perception_component . cc <nl> bool TrafficLightsPerceptionComponent : : Init ( ) { <nl> return false ; <nl> } <nl> <nl> - if ( InitCameraListeners ( ) ! = cyber : : SUCC ) { <nl> - AERROR < < " TrafficLightsPerceptionComponent InitCameraListeners failed . " ; <nl> + if ( InitCameraFrame ( ) ! = cyber : : SUCC ) { <nl> + AERROR < < " TrafficLightsPerceptionComponent InitCameraFrame failed . " ; <nl> return false ; <nl> } <nl> <nl> - if ( InitCameraFrame ( ) ! = cyber : : SUCC ) { <nl> - AERROR < < " TrafficLightsPerceptionComponent InitCameraFrame failed . " ; <nl> + if ( InitCameraListeners ( ) ! = cyber : : SUCC ) { <nl> + AERROR < < " TrafficLightsPerceptionComponent InitCameraListeners failed . " ; <nl> return false ; <nl> } <nl> <nl> mmm a / modules / perception / onboard / proto / trafficlights_perception_component . proto <nl> ppp b / modules / perception / onboard / proto / trafficlights_perception_component . proto <nl> message TrafficLight { <nl> optional string tl_tf2_frame_id = 1 [ default = " world " ] ; <nl> optional string tl_tf2_child_frame_id = 2 [ default = " perception_localization_100hz " ] ; <nl> optional double tf2_timeout_second = 3 [ default = 0 . 
01 ] ; <nl> - optional string camera_names = 4 [ default = " onsemi_traffic , onsemi_narrow , onsemi_obstacle , onsemi_wide " ] ; <nl> - optional string camera_channel_names = 5 [ default = " / sensor / camera / traffic / image_long , / sensor / camera / obstacle / image_narrow , / sensor / camera / traffic / image_short , / sensor / camera / obstacle / image_wide " ] ; <nl> + optional string camera_names = 4 [ default = " front_6mm , front_12mm " ] ; <nl> + optional string camera_channel_names = 5 [ default = " / apollo / sensor / camera / front_6mm , / apollop / sensor / camera / front_12mm " ] ; <nl> optional double tl_image_timestamp_offset = 6 [ default = 0 . 0 ] ; <nl> optional int32 max_process_image_fps = 7 [ default = 8 ] ; <nl> optional double query_tf_interval_seconds = 8 [ default = 0 . 3 ] ; <nl> message TrafficLight { <nl> optional string camera_traffic_light_perception_conf_dir = 12 [ default = " conf / perception / camera " ] ; <nl> optional string camera_traffic_light_perception_conf_file = 13 [ default = " trafficlight . pt " ] ; <nl> optional int32 default_image_border_size = 14 [ default = 100 ] ; <nl> - optional string traffic_light_output_channel_name = 15 [ default = " / perception / traffic_light_status " ] ; <nl> - optional string simulation_channel_name = 16 [ default = " / perception / traffic_light_simulation " ] ; <nl> + optional string traffic_light_output_channel_name = 15 [ default = " / apollo / perception / traffic_light " ] ; <nl> + optional string simulation_channel_name = 16 [ default = " / apollo / perception / traffic_light_simulation " ] ; <nl> } <nl>
|
Perception : Fixed init failure of traffic light detection
|
ApolloAuto/apollo
|
1b514989bf9a4ff686239cec8194e524243a5b2a
|
2018-12-13T23:18:27Z
|
mmm a / depends / README . md <nl> ppp b / depends / README . md <nl> Common ` host - platform - triplets ` for cross compilation are : <nl> <nl> No other options are needed , the paths are automatically configured . <nl> <nl> - Install the required dependencies : Ubuntu & Debian <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm <nl> + # # # Install the required dependencies : Ubuntu & Debian <nl> <nl> - For macOS cross compilation : <nl> + # # # # For macOS cross compilation <nl> <nl> sudo apt - get install curl librsvg2 - bin libtiff - tools bsdmainutils cmake imagemagick libcap - dev libz - dev libbz2 - dev python - setuptools <nl> <nl> - For Win32 / Win64 cross compilation : <nl> + # # # # For Win32 / Win64 cross compilation <nl> <nl> - see [ build - windows . md ] ( . . / doc / build - windows . md # cross - compilation - for - ubuntu - and - windows - subsystem - for - linux ) <nl> <nl> - For linux ( including i386 , ARM ) cross compilation : <nl> + # # # # For linux ( including i386 , ARM ) cross compilation <nl> <nl> - sudo apt - get install curl g + + - aarch64 - linux - gnu g + + - 4 . 8 - aarch64 - linux - gnu gcc - 4 . 8 - aarch64 - linux - gnu binutils - aarch64 - linux - gnu g + + - arm - linux - gnueabihf g + + - 4 . 8 - arm - linux - gnueabihf gcc - 4 . 8 - arm - linux - gnueabihf binutils - arm - linux - gnueabihf g + + - 4 . 8 - multilib gcc - 4 . 
8 - multilib binutils - gold bsdmainutils <nl> + Common linux dependencies : <nl> + <nl> + sudo apt - get install make automake cmake curl g + + - multilib libtool binutils - gold bsdmainutils pkg - config python3 <nl> + <nl> + For linux ARM cross compilation : <nl> + <nl> + sudo apt - get install g + + - arm - linux - gnueabihf binutils - arm - linux - gnueabihf <nl> + <nl> + For linux AARCH64 cross compilation : <nl> + <nl> + sudo apt - get install g + + - aarch64 - linux - gnu binutils - aarch64 - linux - gnu <nl> <nl> For linux RISC - V 64 - bit cross compilation ( there are no packages for 32 - bit ) : <nl> <nl> - sudo apt - get install curl g + + - riscv64 - linux - gnu binutils - riscv64 - linux - gnu <nl> + sudo apt - get install g + + - riscv64 - linux - gnu binutils - riscv64 - linux - gnu <nl> <nl> RISC - V known issue : gcc - 7 . 3 . 0 and gcc - 7 . 3 . 1 result in a broken ` test_bitcoin ` executable ( see https : / / github . com / bitcoin / bitcoin / pull / 13543 ) , <nl> this is apparently fixed in gcc - 8 . 1 . 0 . <nl> <nl> - Dependency Options : <nl> + # # # Dependency Options <nl> The following can be set when running make : make FOO = bar <nl> <nl> SOURCES_PATH : downloaded sources will be placed here <nl> The following can be set when running make : make FOO = bar <nl> If some packages are not built , for example ` make NO_WALLET = 1 ` , the appropriate <nl> options will be passed to bitcoin ' s configure . In this case , ` - - disable - wallet ` . <nl> <nl> - Additional targets : <nl> + # # # Additional targets <nl> <nl> download : run ' make download ' to fetch all sources without building them <nl> download - osx : run ' make download - osx ' to fetch all sources needed for macOS builds <nl> mmm a / doc / build - unix . md <nl> ppp b / doc / build - unix . 
md <nl> tuned to conserve memory with additional CXXFLAGS : <nl> <nl> Build requirements : <nl> <nl> - sudo apt - get install build - essential libtool autotools - dev automake pkg - config libssl - dev libevent - dev bsdmainutils python3 libboost - system - dev libboost - filesystem - dev libboost - chrono - dev libboost - test - dev libboost - thread - dev <nl> + sudo apt - get install build - essential libtool autotools - dev automake pkg - config bsdmainutils python3 <nl> + <nl> + Now , you can either build from self - compiled [ depends ] ( / depends / README . md ) or install the required dependencies : <nl> + <nl> + sudo apt - get libssl - dev libevent - dev libboost - system - dev libboost - filesystem - dev libboost - chrono - dev libboost - test - dev libboost - thread - dev <nl> <nl> BerkeleyDB is required for the wallet . <nl> <nl> ZMQ dependencies ( provides ZMQ API ) : <nl> <nl> sudo apt - get install libzmq3 - dev <nl> <nl> - # # # # Dependencies for the GUI <nl> + GUI dependencies : <nl> <nl> If you want to build bitcoin - qt , make sure that the required packages for Qt development <nl> are installed . Qt 5 is necessary to build the GUI . <nl>
|
Merge : doc : Split depends installation instructions per arch
|
bitcoin/bitcoin
|
69a29b5a8ecf4e067e0a2479e72192109bd3918a
|
2018-10-04T03:58:08Z
|
mmm a / include / swift / AST / ArchetypeBuilder . h <nl> ppp b / include / swift / AST / ArchetypeBuilder . h <nl> class ArchetypeBuilder : : PotentialArchetype { <nl> / / / that were unresolved ( at least at some point ) . <nl> llvm : : TinyPtrVector < ComponentIdentTypeRepr * > UnresolvedReferences ; <nl> <nl> + / / / The equivalence class of this potential archetype . <nl> + llvm : : TinyPtrVector < PotentialArchetype * > EquivalenceClass ; <nl> + <nl> / / / \ brief Construct a new potential archetype for an unresolved <nl> / / / associated type . <nl> PotentialArchetype ( PotentialArchetype * Parent , Identifier Name ) <nl> class ArchetypeBuilder : : PotentialArchetype { <nl> IsRecursive ( false ) , Invalid ( false ) <nl> { <nl> assert ( Parent ! = nullptr & & " Not an associated type ? " ) ; <nl> + EquivalenceClass . push_back ( this ) ; <nl> } <nl> <nl> / / / \ brief Construct a new potential archetype for an associated type . <nl> class ArchetypeBuilder : : PotentialArchetype { <nl> Representative ( this ) , IsRecursive ( false ) , Invalid ( false ) <nl> { <nl> assert ( Parent ! = nullptr & & " Not an associated type ? " ) ; <nl> + EquivalenceClass . push_back ( this ) ; <nl> } <nl> <nl> / / / \ brief Construct a new potential archetype for a generic parameter . <nl> class ArchetypeBuilder : : PotentialArchetype { <nl> Identifier Name ) <nl> : ParentOrParam ( GenericParam ) , RootProtocol ( RootProtocol ) , <nl> NameOrAssociatedType ( Name ) , Representative ( this ) , IsRecursive ( false ) , <nl> - Invalid ( false ) { } <nl> + Invalid ( false ) { <nl> + EquivalenceClass . push_back ( this ) ; <nl> + } <nl> <nl> / / / \ brief Recursively build the full name . <nl> void buildFullName ( bool forDebug , SmallVectorImpl < char > & result ) const ; <nl> class ArchetypeBuilder : : PotentialArchetype { <nl> / / / path compression on the way . 
<nl> PotentialArchetype * getRepresentative ( ) ; <nl> <nl> + / / / Retrieve the equivalence class containing this potential archetype . <nl> + ArrayRef < PotentialArchetype * > getEquivalenceClass ( ) { <nl> + return getRepresentative ( ) - > EquivalenceClass ; <nl> + } <nl> + <nl> / / / Retrieve the source of the same - type constraint that applies to this <nl> / / / potential archetype . <nl> const RequirementSource & getSameTypeSource ( ) const { <nl> mmm a / lib / AST / ArchetypeBuilder . cpp <nl> ppp b / lib / AST / ArchetypeBuilder . cpp <nl> bool ArchetypeBuilder : : PotentialArchetype : : addConformance ( <nl> / / Otherwise , create a new potential archetype for this associated type <nl> / / and make it equivalent to the first potential archetype we encountered . <nl> auto otherPA = new PotentialArchetype ( this , assocType ) ; <nl> - otherPA - > Representative = known - > second . front ( ) ; <nl> + auto frontRep = known - > second . front ( ) - > getRepresentative ( ) ; <nl> + otherPA - > Representative = frontRep ; <nl> + frontRep - > EquivalenceClass . push_back ( otherPA ) ; <nl> otherPA - > SameTypeSource = RequirementSource ( RequirementSource : : Inferred , <nl> source . getLoc ( ) ) ; <nl> known - > second . push_back ( otherPA ) ; <nl> auto ArchetypeBuilder : : PotentialArchetype : : getNestedType ( <nl> / / If we have resolved this nested type to more than one associated <nl> / / type , create same - type constraints between them . <nl> if ( ! nested . empty ( ) ) { <nl> - pa - > Representative = nested . front ( ) ; <nl> + pa - > Representative = nested . front ( ) - > getRepresentative ( ) ; <nl> + pa - > Representative - > EquivalenceClass . push_back ( pa ) ; <nl> pa - > SameTypeSource = RequirementSource ( RequirementSource : : Inferred , <nl> SourceLoc ( ) ) ; <nl> } <nl> bool ArchetypeBuilder : : addSameTypeRequirementBetweenArchetypes ( <nl> / / Make T1 the representative of T2 , merging the equivalence classes . 
<nl> T2 - > Representative = T1 ; <nl> T2 - > SameTypeSource = Source ; <nl> + for ( auto equiv : T2 - > EquivalenceClass ) <nl> + T1 - > EquivalenceClass . push_back ( equiv ) ; <nl> <nl> / / Add unresolved references . <nl> T1 - > UnresolvedReferences . insert ( T1 - > UnresolvedReferences . end ( ) , <nl>
|
Start tracking the complete equivalence class on each PotentialArchetype .
|
apple/swift
|
ec8a602fa295f6a9b590cb69376e3e24a87d74db
|
2015-07-02T20:57:34Z
|
mmm a / src / base / platform / semaphore . h <nl> ppp b / src / base / platform / semaphore . h <nl> class Semaphore final { <nl> / / Increments the semaphore counter . <nl> void Signal ( ) ; <nl> <nl> - / / Suspends the calling thread until the semaphore counter is non zero <nl> - / / and then decrements the semaphore counter . <nl> + / / Decrements the semaphore counter if it is positive , or blocks until it <nl> + / / becomes positive and then decrements the counter . <nl> void Wait ( ) ; <nl> <nl> - / / Suspends the calling thread until the counter is non zero or the timeout <nl> - / / time has passed . If timeout happens the return value is false and the <nl> - / / counter is unchanged . Otherwise the semaphore counter is decremented and <nl> - / / true is returned . <nl> + / / Like Wait ( ) but returns after rel_time time has passed . If the timeout <nl> + / / happens the return value is false and the counter is unchanged . Otherwise <nl> + / / the semaphore counter is decremented and true is returned . <nl> bool WaitFor ( const TimeDelta & rel_time ) WARN_UNUSED_RESULT ; <nl> <nl> # if V8_OS_MACOSX <nl> mmm a / src / libplatform / task - queue . cc <nl> ppp b / src / libplatform / task - queue . cc <nl> <nl> # include " src / libplatform / task - queue . h " <nl> <nl> # include " src / base / logging . h " <nl> + # include " src / base / platform / platform . h " <nl> + # include " src / base / platform / time . h " <nl> <nl> namespace v8 { <nl> namespace platform { <nl> void TaskQueue : : Terminate ( ) { <nl> process_queue_semaphore_ . Signal ( ) ; <nl> } <nl> <nl> + void TaskQueue : : BlockUntilQueueEmptyForTesting ( ) { <nl> + for ( ; ; ) { <nl> + { <nl> + base : : LockGuard < base : : Mutex > guard ( & lock_ ) ; <nl> + if ( task_queue_ . 
empty ( ) ) return ; <nl> + } <nl> + base : : OS : : Sleep ( base : : TimeDelta : : FromMilliseconds ( 5 ) ) ; <nl> + } <nl> + } <nl> + <nl> } / / namespace platform <nl> } / / namespace v8 <nl> mmm a / src / libplatform / task - queue . h <nl> ppp b / src / libplatform / task - queue . h <nl> <nl> # include " src / base / macros . h " <nl> # include " src / base / platform / mutex . h " <nl> # include " src / base / platform / semaphore . h " <nl> + # include " testing / gtest / include / gtest / gtest_prod . h " <nl> <nl> namespace v8 { <nl> <nl> class TaskQueue { <nl> void Terminate ( ) ; <nl> <nl> private : <nl> + FRIEND_TEST ( WorkerThreadTest , PostSingleTask ) ; <nl> + <nl> + void BlockUntilQueueEmptyForTesting ( ) ; <nl> + <nl> base : : Semaphore process_queue_semaphore_ ; <nl> base : : Mutex lock_ ; <nl> std : : queue < Task * > task_queue_ ; <nl> mmm a / test / unittests / libplatform / worker - thread - unittest . cc <nl> ppp b / test / unittests / libplatform / worker - thread - unittest . cc <nl> TEST ( WorkerThreadTest , Basic ) { <nl> queue . Terminate ( ) ; <nl> } <nl> <nl> + TEST ( WorkerThreadTest , PostSingleTask ) { <nl> + TaskQueue queue ; <nl> + WorkerThread thread1 ( & queue ) ; <nl> + WorkerThread thread2 ( & queue ) ; <nl> + <nl> + InSequence s ; <nl> + StrictMock < MockTask > * task = new StrictMock < MockTask > ; <nl> + EXPECT_CALL ( * task , Run ( ) ) ; <nl> + EXPECT_CALL ( * task , Die ( ) ) ; <nl> + queue . Append ( task ) ; <nl> + <nl> + / / The next call should not time out . <nl> + queue . BlockUntilQueueEmptyForTesting ( ) ; <nl> + queue . Terminate ( ) ; <nl> + } <nl> + <nl> } / / namespace platform <nl> } / / namespace v8 <nl>
|
Add test for posting a single task to the worker pool
|
v8/v8
|
f5b86867667aa05d7cf8b6beec7ebc11abab9764
|
2016-08-23T11:56:57Z
|
mmm a / lib / ffmpeg / libavcodec / mpegvideo . c <nl> ppp b / lib / ffmpeg / libavcodec / mpegvideo . c <nl> int MPV_frame_start ( MpegEncContext * s , AVCodecContext * avctx ) <nl> / * Allocate a dummy frame * / <nl> i = ff_find_unused_picture ( s , 0 ) ; <nl> s - > next_picture_ptr = & s - > picture [ i ] ; <nl> - s - > last_picture_ptr - > key_frame = 0 ; <nl> + s - > next_picture_ptr - > key_frame = 0 ; <nl> if ( ff_alloc_picture ( s , s - > next_picture_ptr , 0 ) < 0 ) <nl> return - 1 ; <nl> } <nl> mmm a / lib / ffmpeg / patches / 0048 - Dont - mark - genereted - dummy - frame - as - keyframe . patch <nl> ppp b / lib / ffmpeg / patches / 0048 - Dont - mark - genereted - dummy - frame - as - keyframe . patch <nl> / * Allocate a dummy frame * / <nl> i = ff_find_unused_picture ( s , 0 ) ; <nl> s - > next_picture_ptr = & s - > picture [ i ] ; <nl> - + s - > last_picture_ptr - > key_frame = 0 ; <nl> + + s - > next_picture_ptr - > key_frame = 0 ; <nl> if ( ff_alloc_picture ( s , s - > next_picture_ptr , 0 ) < 0 ) <nl> return - 1 ; <nl> } <nl>
|
fixed : typo in lavc dummy frame fix
|
xbmc/xbmc
|
c1aa0af587b778f18084d33cfbfa74abf7d9cc20
|
2011-07-11T19:16:50Z
|
mmm a / tensorflow / lite / kernels / conv . cc <nl> ppp b / tensorflow / lite / kernels / conv . cc <nl> bool IsIm2ColRequired ( TfLiteTensor * input , TfLiteConvParams * params , <nl> const bool need_im2col = need_dilated_im2col | | need_non_dilated_im2col ; <nl> <nl> / / Return early as basic requirement is not met <nl> - if ( ! need_im2col ) return need_im2col ; <nl> + if ( ! need_im2col ) return false ; <nl> <nl> / / Special case for Hybrid , as it supports only non - dilated im2col currently <nl> const bool is_hybrid_non_dilated = is_hybrid & & need_non_dilated_im2col ; <nl>
|
[ 5 ] Review comments handled
|
tensorflow/tensorflow
|
cb29312583aa321d5f893106235712459541bd62
|
2020-01-07T04:14:15Z
|