diff | msg | repo | sha | time
---|---|---|---|---|
--- a/utils/kafka/consume.py
+++ b/utils/kafka/consume.py
@@ ... @@ def main():
     pprint(client.poll(10000))
     client.unsubscribe()
     client.close()
+    return 0
 
 
 if __name__ == "__main__":
|
[utils/kafka] provide some exit code from main
|
ClickHouse/ClickHouse
|
4765b7bee5b04324dca1d157b22b701dddf12663
|
2019-11-01T16:08:29Z
|
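A note on the change above: `return 0` from `main()` only becomes the process exit code if the call site propagates it. A minimal sketch of that wiring, assuming a hypothetical `main` (a stand-in, not the ClickHouse consume script itself):

```python
import sys

def main():
    # Stand-in for the real work (poll, unsubscribe, close in the script above).
    return 0  # exit code handed back to the caller

# The commit pairs `return 0` with a call site like this one
# (guarded by a flag here so importing this sketch has no side effects):
if __name__ == "__main__" and "--run" in sys.argv:
    # sys.exit(main()) turns the return value into the process exit code;
    # a bare main() call would discard it and always exit 0.
    sys.exit(main())
```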
new file mode 100644
index 00000000000..2c0f3825fc7
--- /dev/null
+++ b/templates/tools/dockerfile/bazel.include
@@ ... @@
+#========================
+# Bazel installation
+RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" > /etc/apt/sources.list.d/bazel.list
+RUN curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
+RUN apt-get -y update && apt-get -y install bazel=0.13.1 && apt-get clean
new file mode 100644
index 00000000000..8ef2f02e715
--- /dev/null
+++ b/templates/tools/dockerfile/test/bazel/Dockerfile.template
@@ ... @@
+%YAML 1.2
+--- |
+  # Copyright 2015 gRPC authors.
+  #
+  # Licensed under the Apache License, Version 2.0 (the "License");
+  # you may not use this file except in compliance with the License.
+  # You may obtain a copy of the License at
+  #
+  #     http://www.apache.org/licenses/LICENSE-2.0
+  #
+  # Unless required by applicable law or agreed to in writing, software
+  # distributed under the License is distributed on an "AS IS" BASIS,
+  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  # See the License for the specific language governing permissions and
+  # limitations under the License.
+
+  FROM gcr.io/oss-fuzz-base/base-builder
+
+  # Install basic packages and Bazel dependencies.
+  RUN apt-get update && apt-get install -y software-properties-common python-software-properties
+  RUN add-apt-repository ppa:webupd8team/java
+  RUN apt-get update && apt-get -y install ${'\\'}
+    autoconf ${'\\'}
+    build-essential ${'\\'}
+    curl ${'\\'}
+    libtool ${'\\'}
+    make ${'\\'}
+    openjdk-8-jdk ${'\\'}
+    vim
+
+  <%include file="../../bazel.include"/>
+
+  RUN mkdir -p /var/local/jenkins
+
+  # Define the default command.
+  CMD ["bash"]
--- a/templates/tools/dockerfile/test/sanity/Dockerfile.template
+++ b/templates/tools/dockerfile/test/sanity/Dockerfile.template
@@ ... @@
   RUN echo "deb http://http.debian.net/debian jessie-backports main" >> /etc/apt/sources.list
   RUN apt-get update
   RUN apt-get install -y -t jessie-backports openjdk-8-jdk
-
-  #========================
-  # Bazel installation
-  RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" > /etc/apt/sources.list.d/bazel.list
-  RUN curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
-  RUN apt-get -y update
-  RUN apt-get -y install bazel
-
-  # Pin Bazel to 0.9.0
-  # Installing Bazel via apt-get first is required before installing 0.9.0 to
-  # allow gRPC to build without errors. See https://github.com/grpc/grpc/issues/10553
-  RUN curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/0.9.0/bazel-0.9.0-installer-linux-x86_64.sh
-  RUN chmod +x ./bazel-0.9.0-installer-linux-x86_64.sh
-  RUN ./bazel-0.9.0-installer-linux-x86_64.sh
 
+  <%include file="../../bazel.include"/>
   <%include file="../../clang5.include"/>
   <%include file="../../run_tests_addons.include"/>
 
--- a/tools/distrib/python/make_grpcio_tools.py
+++ b/tools/distrib/python/make_grpcio_tools.py
@@ ... @@ def protobuf_submodule_commit_hash():
 
 
 def bazel_query(query):
+  print('Running "bazel query %s"' % query)
   output = subprocess.check_output([BAZEL_DEPS, query])
   return output.splitlines()
 
@@ ... @@ def main():
   shutil.copyfile(source_file, target_file)
 
   try:
+    print('Invoking "bazel query" to gather the protobuf dependencies.')
     protoc_lib_deps_content = get_deps()
   except Exception as error:
     # We allow this script to succeed even if we couldn't get the dependencies,
@@ ... @@ def main():
   # If we successfully got the dependencies, truncate and rewrite the deps file.
   with open(GRPC_PYTHON_PROTOC_LIB_DEPS, 'w') as deps_file:
     deps_file.write(protoc_lib_deps_content)
+  print('File "%s" updated.' % GRPC_PYTHON_PROTOC_LIB_DEPS)
+  print('Done.')
 
 
 if __name__ == '__main__':
--- a/tools/dockerfile/test/bazel/Dockerfile
+++ b/tools/dockerfile/test/bazel/Dockerfile
@@ ... @@ RUN apt-get update && apt-get -y install \
 # Bazel installation
 RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" > /etc/apt/sources.list.d/bazel.list
 RUN curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
-RUN apt-get -y update
-RUN apt-get -y install bazel
+RUN apt-get -y update && apt-get -y install bazel=0.13.1 && apt-get clean
 
-# Pin Bazel to 0.9.0
-# Installing Bazel via apt-get first is required before installing 0.9.0 to
-# allow gRPC to build without errors. See https://github.com/grpc/grpc/issues/10553
-RUN curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/0.9.0/bazel-0.9.0-installer-linux-x86_64.sh
-RUN chmod +x ./bazel-0.9.0-installer-linux-x86_64.sh
-RUN ./bazel-0.9.0-installer-linux-x86_64.sh
 
 RUN mkdir -p /var/local/jenkins
 
--- a/tools/dockerfile/test/sanity/Dockerfile
+++ b/tools/dockerfile/test/sanity/Dockerfile
@@ ... @@ RUN apt-get install -y -t jessie-backports openjdk-8-jdk
 # Bazel installation
 RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" > /etc/apt/sources.list.d/bazel.list
 RUN curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
-RUN apt-get -y update
-RUN apt-get -y install bazel
-
-# Pin Bazel to 0.9.0
-# Installing Bazel via apt-get first is required before installing 0.9.0 to
-# allow gRPC to build without errors. See https://github.com/grpc/grpc/issues/10553
-RUN curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/0.9.0/bazel-0.9.0-installer-linux-x86_64.sh
-RUN chmod +x ./bazel-0.9.0-installer-linux-x86_64.sh
-RUN ./bazel-0.9.0-installer-linux-x86_64.sh
+RUN apt-get -y update && apt-get -y install bazel=0.13.1 && apt-get clean
 
 RUN apt-get update && apt-get -y install wget xz-utils
 RUN wget http://releases.llvm.org/5.0.0/clang+llvm-5.0.0-linux-x86_64-ubuntu14.04.tar.xz
|
Merge pull request from jtattermusch/fixing_grpcio_tools
|
grpc/grpc
|
515908ba3196d057a8465a146f667b9e6c2ad9f2
|
2018-05-23T22:34:29Z
|
--- a/torch/utils/cpp_extension.py
+++ b/torch/utils/cpp_extension.py
@@ ... @@ def _find_cuda_home():
                               !! WARNING !!
 '''
 CUDA_HOME = _find_cuda_home() if torch.cuda.is_available() else None
+# PyTorch releases have the version pattern major.minor.patch, whereas when
+# PyTorch is built from source, we append the git commit hash, which gives
+# it the below pattern.
+BUILT_FROM_SOURCE_VERSION_PATTERN = re.compile(r'\d+\.\d+\.\d+\w+\+\w+')
 
 
 def check_compiler_abi_compatibility(compiler):
@@ ... @@ def check_compiler_abi_compatibility(compiler):
         False if the compiler is (likely) ABI-incompatible with PyTorch,
         else True.
     '''
+    if BUILT_FROM_SOURCE_VERSION_PATTERN.match(torch.version.__version__):
+        return True
     try:
         check_cmd = '{}' if sys.platform == 'win32' else '{} --version'
         info = subprocess.check_output(
|
Dont emit warning for ABI incompatibility when PyTorch was built from source ()
|
pytorch/pytorch
|
cf9b80720db2aa2e42e77495bae19bd0d48d4c4e
|
2018-05-19T19:25:52Z
|
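The version pattern added in the diff above can be exercised on its own; this standalone sketch reuses the same regex (the `built_from_source` helper is illustrative, not part of PyTorch):

```python
import re

# Same pattern as the diff: release versions look like "0.4.0", while
# source builds have a git hash appended, e.g. "0.5.0a0+ab23c59".
BUILT_FROM_SOURCE_VERSION_PATTERN = re.compile(r'\d+\.\d+\.\d+\w+\+\w+')

def built_from_source(version):
    # match() anchors at the start of the string, mirroring the check
    # added to check_compiler_abi_compatibility().
    return BUILT_FROM_SOURCE_VERSION_PATTERN.match(version) is not None

print(built_from_source("0.5.0a0+ab23c59"))  # True: source build
print(built_from_source("0.4.0"))            # False: release version
```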
--- a/src/mongo/db/repl/rs_rollback.cpp
+++ b/src/mongo/db/repl/rs_rollback.cpp
@@ ... @@ Status rollback_internal::updateFixUpInfoFromLocalOplogEntry(FixUpInfo& fixUpInf
         return Status::OK();
 
     if (ourObj.objsize() > 512 * 1024 * 1024)
-        throw RSFatalException("rollback too large");
+        throw RSFatalException(str::stream() << "Rollback too large, oplog size: "
+                                             << ourObj.objsize());
 
     DocID doc;
     doc.ownedObj = ourObj.getOwned();
     doc.ns = doc.ownedObj.getStringField("ns");
     if (*doc.ns == '\0') {
-        throw RSFatalException(str::stream() << "local op on rollback has no ns: "
+        throw RSFatalException(str::stream() << "Local op on rollback has no ns: "
                                              << redact(doc.ownedObj));
     }
 
     BSONObj obj = doc.ownedObj.getObjectField(*op == 'u' ? "o2" : "o");
     if (obj.isEmpty()) {
-        throw RSFatalException(str::stream() << "local op on rollback has no object field: "
+        throw RSFatalException(str::stream() << "Local op on rollback has no object field: "
                                              << redact(doc.ownedObj));
     }
 
@@ ... @@ Status rollback_internal::updateFixUpInfoFromLocalOplogEntry(FixUpInfo& fixUpInf
         string cmdname = first.fieldName();
         Command* cmd = Command::findCommand(cmdname.c_str());
         if (cmd == NULL) {
-            severe() << "rollback no such command " << first.fieldName();
+            severe() << "Rollback no such command " << first.fieldName();
             return Status(ErrorCodes::UnrecoverableRollbackError,
-                          str::stream() << "rollback no such command " << first.fieldName(),
+                          str::stream() << "Rollback no such command " << first.fieldName(),
                           18751);
         }
         if (cmdname == "create") {
@@ ... @@ Status rollback_internal::updateFixUpInfoFromLocalOplogEntry(FixUpInfo& fixUpInf
         } else if (cmdname == "dropIndexes" || cmdname == "deleteIndexes") {
             // TODO: this is bad. we simply full resync the collection here,
             // which could be very slow.
-            warning() << "rollback of dropIndexes is slow in this version of "
-                      << "mongod";
+            warning() << "Rollback of dropIndexes is slow in this version of "
+                      << "mongod.";
             string ns = nss.db().toString() + '.' + first.valuestr();
             fixUpInfo.collectionsToResyncData.insert(ns);
             return Status::OK();
         } else if (cmdname == "renameCollection") {
             // TODO: slow.
-            warning() << "rollback of renameCollection is slow in this version of "
-                      << "mongod";
+            warning() << "Rollback of renameCollection is slow in this version of "
+                      << "mongod.";
             string from = first.valuestr();
             string to = obj["to"].String();
             fixUpInfo.collectionsToResyncData.insert(from);
             fixUpInfo.collectionsToResyncData.insert(to);
             return Status::OK();
         } else if (cmdname == "dropDatabase") {
-            severe() << "rollback : can't rollback drop database full resync "
-                     << "will be required";
+            string message =
+                "rollback : can't rollback drop database full resync will be required.";
+            severe() << message;
             log() << obj.toString();
-            throw RSFatalException();
+            throw RSFatalException(message);
         } else if (cmdname == "collMod") {
             const auto ns = NamespaceString(cmd->parseNs(nss.db().toString(), obj));
             for (auto field : obj) {
@@ ... @@ Status rollback_internal::updateFixUpInfoFromLocalOplogEntry(FixUpInfo& fixUpInf
                     fixUpInfo.collectionsToResyncMetadata.insert(ns.ns());
                     continue;
                 }
-
-                severe() << "cannot rollback a collMod command: " << redact(obj);
-                throw RSFatalException();
+                string message = "cannot rollback a collMod command: ";
+                severe() << message << redact(obj);
+                throw RSFatalException(message);
             }
             return Status::OK();
         } else if (cmdname == "applyOps") {
@@ ... @@ Status rollback_internal::updateFixUpInfoFromLocalOplogEntry(FixUpInfo& fixUpInf
             }
             return Status::OK();
         } else {
-            severe() << "can't rollback this command yet: " << redact(obj);
-            log() << "cmdname = " << cmdname;
-            throw RSFatalException();
+            std::string message = str::stream() << "can't rollback this command yet: ";
+            severe() << message << redact(obj);
+            log() << "cmdname = " << cmdname;
+            throw RSFatalException(str::stream() << message << " cmdname = " << cmdname);
         }
     }
 
     NamespaceString nss(doc.ns);
     if (nss.isSystemDotIndexes()) {
         if (*op != 'i') {
-            severe() << "Unexpected operation type '" << *op << "' on system.indexes operation, "
-                     << "document: " << redact(doc.ownedObj);
-            throw RSFatalException();
+            std::string message = str::stream() << "Unexpected operation type '" << *op
+                                                << "' on system.indexes operation, "
+                                                << "document: ";
+            severe() << message << redact(doc.ownedObj);
+            throw RSFatalException(message);
         }
         string objNs;
         auto status = bsonExtractStringField(obj, "ns", &objNs);
         if (!status.isOK()) {
             severe() << "Missing collection namespace in system.indexes operation, document: "
                      << redact(doc.ownedObj);
-            throw RSFatalException();
+            throw RSFatalException("Missing collection namespace in system.indexes operation.");
         }
         NamespaceString objNss(objNs);
         if (!objNss.isValid()) {
             severe() << "Invalid collection namespace in system.indexes operation, document: "
                      << redact(doc.ownedObj);
-            throw RSFatalException();
+            throw RSFatalException(
+                str::stream()
+                << "Invalid collection namespace in system.indexes operation, namespace: "
+                << doc.ns);
         }
         string indexName;
         status = bsonExtractStringField(obj, "name", &indexName);
         if (!status.isOK()) {
             severe() << "Missing index name in system.indexes operation, document: "
                      << redact(doc.ownedObj);
-            throw RSFatalException();
+            throw RSFatalException("Missing index name in system.indexes operation.");
         }
         using ValueType = multimap<string, string>::value_type;
         ValueType pairToInsert = std::make_pair(objNs, indexName);
@@ ... @@ Status rollback_internal::updateFixUpInfoFromLocalOplogEntry(FixUpInfo& fixUpInf
 
     doc._id = obj["_id"];
     if (doc._id.eoo()) {
-        severe() << "cannot rollback op with no _id. ns: " << doc.ns
-                 << ", document: " << redact(doc.ownedObj);
-        throw RSFatalException();
+        std::string message = str::stream() << "cannot rollback op with no _id. ns: " << doc.ns;
+        severe() << message << ", document: " << redact(doc.ownedObj);
+        throw RSFatalException(message);
     }
 
     fixUpInfo.docsToRefetch.insert(doc);
@@ ... @@ void syncFixUp(OperationContext* opCtx,
             BSONObj good = rollbackSource.findOne(NamespaceString(doc.ns), doc._id.wrap());
             totalSize += good.objsize();
             if (totalSize >= 300 * 1024 * 1024) {
-                throw RSFatalException("replSet too much data to roll back");
+                throw RSFatalException("replSet too much data to roll back.");
             }
 
             // Note good might be empty, indicating we should delete it.
@@ ... @@ void syncFixUp(OperationContext* opCtx,
             if (ex.getCode() == ErrorCodes::CommandNotSupportedOnView)
                 continue;
 
-            log() << "rollback couldn't re-get from ns: " << doc.ns << " _id: " << redact(doc._id)
+            log() << "Rollback couldn't re-get from ns: " << doc.ns << " _id: " << redact(doc._id)
                   << ' ' << numFetched << '/' << fixUpInfo.docsToRefetch.size() << ": "
                   << redact(ex);
             throw;
@@ ... @@ void syncFixUp(OperationContext* opCtx,
             while (PlanExecutor::ADVANCED == (execState = exec->getNext(&curObj, NULL))) {
                 auto status = removeSaver.goingToDelete(curObj);
                 if (!status.isOK()) {
-                    severe() << "rolling back createCollection on " << *it
-                             << " failed to write document to remove saver file: " << status;
-                    throw RSFatalException();
+                    severe() << "Rolling back createCollection on " << *it
+                             << " failed to write document to remove saver file: "
+                             << redact(status);
+                    throw RSFatalException(
+                        "Rolling back createCollection. Failed to write document to remove saver "
+                        "file.");
                 }
             }
             if (execState != PlanExecutor::IS_EOF) {
                 if (execState == PlanExecutor::FAILURE &&
                     WorkingSetCommon::isValidStatusMemberObject(curObj)) {
                     Status errorStatus = WorkingSetCommon::getMemberObjectStatus(curObj);
-                    severe() << "rolling back createCollection on " << *it << " failed with "
-                             << errorStatus << ". A full resync is necessary.";
+                    severe() << "Rolling back createCollection on " << *it << " failed with "
+                             << redact(errorStatus) << ". A full resync is necessary.";
+                    throw RSFatalException(
+                        "Rolling back createCollection failed. A full resync is necessary.");
                 } else {
-                    severe() << "rolling back createCollection on " << *it
+                    severe() << "Rolling back createCollection on " << *it
                              << " failed. A full resync is necessary.";
+                    throw RSFatalException(
+                        "Rolling back createCollection failed. A full resync is necessary.");
                 }
-
-                throw RSFatalException();
             }
 
             WriteUnitOfWork wunit(opCtx);
@@ ... @@ void syncFixUp(OperationContext* opCtx,
             indexCatalog->findIndexByName(opCtx, indexName, includeUnfinishedIndexes);
         if (!indexDescriptor) {
             warning() << "rollback failed to drop index " << indexName << " in " << nss.toString()
-                      << ": index not found";
+                      << ": index not found.";
             continue;
         }
         WriteUnitOfWork wunit(opCtx);
         auto status = indexCatalog->dropIndex(opCtx, indexDescriptor);
         if (!status.isOK()) {
-            severe() << "rollback failed to drop index " << indexName << " in " << nss.toString()
-                     << ": " << status;
-            throw RSFatalException();
+            severe() << "rollback failed to drop index " << indexName << " in " << nss.toString();
+            throw RSFatalException(str::stream() << "Rollback failed to drop index " << indexName
+                                                 << " in "
+                                                 << nss.toString());
         }
         wunit.commit();
     }
@@ ... @@ void syncFixUp(OperationContext* opCtx,
         if (now - lastProgressUpdate > progressUpdateGap) {
             log() << deletes << " delete and " << updates
                   << " update operations processed out of " << goodVersions.size()
-                  << " total operations";
+                  << " total operations.";
             lastProgressUpdate = now;
         }
         const DocID& doc = idAndDoc.first;
@@ ... @@ void syncFixUp(OperationContext* opCtx,
                     if (found) {
                         auto status = removeSaver->goingToDelete(obj);
                         if (!status.isOK()) {
-                            severe() << "rollback cannot write document in namespace " << doc.ns
+                            severe() << "Rollback cannot write document in namespace " << doc.ns
                                      << " to archive file: " << redact(status);
-                            throw RSFatalException();
+                            throw RSFatalException(str::stream()
+                                                   << "Rollback cannot write document in namespace "
+                                                   << doc.ns
+                                                   << " to archive file.");
                         }
                     } else {
-                        error() << "rollback cannot find object: " << pattern << " in namespace "
+                        error() << "Rollback cannot find object: " << pattern << " in namespace "
                                 << doc.ns;
                     }
                 }
@@ ... @@ void syncFixUp(OperationContext* opCtx,
                     const auto findOneStart = clock->now();
                     RecordId loc = Helpers::findOne(opCtx, collection, pattern, false);
                     if (clock->now() - findOneStart > Milliseconds(200))
-                        warning() << "roll back slow no _id index for " << doc.ns
+                        warning() << "Roll back slow no _id index for " << doc.ns
                                   << " perhaps?";
                     // would be faster but requires index:
                     // RecordId loc = Helpers::findById(nsd, pattern);
@@ ... @@ void syncFixUp(OperationContext* opCtx,
                         // Replicated capped collections have many ways to become
                         // inconsistent. We rely on age-out to make these problems go away
                         // eventually.
-                        warning() << "ignoring failure to roll back change to capped "
+                        warning() << "Ignoring failure to roll back change to capped "
                                   << "collection " << doc.ns << " with _id "
                                   << redact(idAndDoc.first._id.toString(
                                          /* includeFieldName */ false))
@@ ... @@ void syncFixUp(OperationContext* opCtx,
                     update(opCtx, ctx.db(), request);
                 }
             } catch (const DBException& e) {
-                log() << "exception in rollback ns: " << doc.ns << ' ' << pattern.toString() << ' '
+                log() << "Exception in rollback ns: " << doc.ns << ' ' << pattern.toString() << ' '
                       << redact(e) << " ndeletes: " << deletes;
                 throw;
             }
@@ ... @@ void syncFixUp(OperationContext* opCtx,
 
     Status status = getGlobalAuthorizationManager()->initialize(opCtx);
     if (!status.isOK()) {
-        severe() << "Failed to reinitialize auth data after rollback: " << status;
+        severe() << "Failed to reinitialize auth data after rollback: " << redact(status);
         fassertFailedNoTrace(40366);
     }
 
@@ ... @@ void rollback(OperationContext* opCtx,
         invariant(ex.getCode() != ErrorCodes::UnrecoverableRollbackError);
 
         warning() << "rollback cannot complete at this time (retrying later): " << redact(ex)
-                  << " appliedThrough = " << replCoord->getMyLastAppliedOpTime() << " minvalid = "
+                  << " appliedThrough = " << replCoord->getMyLastAppliedOpTime() << " minvalid = "
                   << replicationProcess->getConsistencyMarkers()->getMinValid(opCtx);
 
         // Sleep a bit to allow upstream node to coalesce, if that was the cause of the failure. If
|
SERVER - 27412 : Updates the error messages for RSFatalExceptions in rs_rollback to be more descriptive
|
mongodb/mongo
|
bce4f6142f139ccff91e32ce445086b1c646d9cf
|
2017-06-13T14:48:48Z
|
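The mongod diff applies one pattern throughout: build the message once, log it via `severe()`, then throw `RSFatalException` carrying the same text instead of an empty exception. A language-neutral sketch of that build-once, log-and-raise pattern in Python (the names here are illustrative stand-ins, not MongoDB APIs):

```python
def log_and_raise(doc_ns, log):
    # Build the descriptive message once...
    message = "cannot rollback op with no _id. ns: %s" % doc_ns
    log.append(message)          # ...log it (stand-in for severe() << message)...
    raise RuntimeError(message)  # ...and raise with the same text
                                 # (stand-in for throw RSFatalException(message)).

entries = []
try:
    log_and_raise("test.coll", entries)
except RuntimeError as exc:
    # The exception text matches the log entry exactly.
    assert str(exc) == entries[0]
```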
--- a/src/codegen/code-stub-assembler.cc
+++ b/src/codegen/code-stub-assembler.cc
@@ ... @@ TNode<IntPtrT> CodeStubAssembler::IntPtrRoundUpToPowerOfTwo32(
   return Signed(IntPtrAdd(value, IntPtrConstant(1)));
 }
 
-Node* CodeStubAssembler::MatchesParameterMode(Node* value, ParameterMode mode) {
-  if (mode == SMI_PARAMETERS) {
-    return TaggedIsSmi(value);
-  } else {
-    return Int32Constant(1);
-  }
-}
-
 TNode<BoolT> CodeStubAssembler::WordIsPowerOfTwo(SloppyTNode<IntPtrT> value) {
   intptr_t constant;
   if (ToIntPtrConstant(value, &constant)) {
@@ ... @@ void CodeStubAssembler::StoreObjectFieldRoot(TNode<HeapObject> object,
   }
 }
 
+template <typename TIndex>
 void CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement(
-    TNode<UnionT<FixedArray, PropertyArray>> object, Node* index_node,
-    TNode<Object> value, WriteBarrierMode barrier_mode, int additional_offset,
-    ParameterMode parameter_mode) {
-  CSA_SLOW_ASSERT(
-      this, Word32Or(IsFixedArraySubclass(object), IsPropertyArray(object)));
-  CSA_SLOW_ASSERT(this, MatchesParameterMode(index_node, parameter_mode));
+    TNode<UnionT<FixedArray, PropertyArray>> object, TNode<TIndex> index_node,
+    TNode<Object> value, WriteBarrierMode barrier_mode, int additional_offset) {
+  // TODO(v8:9708): Do we want to keep both IntPtrT and UintPtrT variants?
+  static_assert(std::is_same<TIndex, Smi>::value ||
+                    std::is_same<TIndex, UintPtrT>::value ||
+                    std::is_same<TIndex, IntPtrT>::value,
+                "Only Smi, UintPtrT or IntPtrT index is allowed");
   DCHECK(barrier_mode == SKIP_WRITE_BARRIER ||
          barrier_mode == UNSAFE_SKIP_WRITE_BARRIER ||
          barrier_mode == UPDATE_WRITE_BARRIER ||
@@ ... @@ void CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement(
                 static_cast<int>(PropertyArray::kHeaderSize));
   int header_size =
       FixedArray::kHeaderSize + additional_offset - kHeapObjectTag;
-  TNode<IntPtrT> offset = ElementOffsetFromIndex(index_node, HOLEY_ELEMENTS,
-                                                 parameter_mode, header_size);
+  TNode<IntPtrT> offset =
+      ElementOffsetFromIndex(index_node, HOLEY_ELEMENTS, header_size);
   STATIC_ASSERT(static_cast<int>(FixedArrayBase::kLengthOffset) ==
                 static_cast<int>(WeakFixedArray::kLengthOffset));
   STATIC_ASSERT(static_cast<int>(FixedArrayBase::kLengthOffset) ==
@@ ... @@ void CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement(
   }
 }
 
+template V8_EXPORT_PRIVATE void
+CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement<Smi>(
+    TNode<UnionT<FixedArray, PropertyArray>>, TNode<Smi>, TNode<Object>,
+    WriteBarrierMode, int);
+
+template V8_EXPORT_PRIVATE void
+CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement<IntPtrT>(
+    TNode<UnionT<FixedArray, PropertyArray>>, TNode<IntPtrT>, TNode<Object>,
+    WriteBarrierMode, int);
+
+template V8_EXPORT_PRIVATE void
+CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement<UintPtrT>(
+    TNode<UnionT<FixedArray, PropertyArray>>, TNode<UintPtrT>, TNode<Object>,
+    WriteBarrierMode, int);
+
 template <typename TIndex>
 void CodeStubAssembler::StoreFixedDoubleArrayElement(
     TNode<FixedDoubleArray> object, TNode<TIndex> index, TNode<Float64T> value,
@@ ... @@ TNode<Oddball> CodeStubAssembler::OrdinaryHasInstance(
   return var_result.value();
 }
 
-TNode<IntPtrT> CodeStubAssembler::ElementOffsetFromIndex(Node* index_node,
-                                                         ElementsKind kind,
-                                                         ParameterMode mode,
-                                                         int base_size) {
-  CSA_SLOW_ASSERT(this, MatchesParameterMode(index_node, mode));
-  if (mode == SMI_PARAMETERS) {
-    return ElementOffsetFromIndex(ReinterpretCast<Smi>(index_node), kind,
-                                  base_size);
-  } else {
-    DCHECK(mode == INTPTR_PARAMETERS);
-    return ElementOffsetFromIndex(ReinterpretCast<IntPtrT>(index_node), kind,
-                                  base_size);
-  }
-}
-
 template <typename TIndex>
 TNode<IntPtrT> CodeStubAssembler::ElementOffsetFromIndex(
     TNode<TIndex> index_node, ElementsKind kind, int base_size) {
--- a/src/codegen/code-stub-assembler.h
+++ b/src/codegen/code-stub-assembler.h
@@ ... @@ class V8_EXPORT_PRIVATE CodeStubAssembler
 
   using AllocationFlags = base::Flags<AllocationFlag>;
 
-  enum ParameterMode { SMI_PARAMETERS, INTPTR_PARAMETERS };
-
-  // On 32-bit platforms, there is a slight performance advantage to doing all
-  // of the array offset/index arithmetic with SMIs, since it's possible
-  // to save a few tag/untag operations without paying an extra expense when
-  // calculating array offset (the smi math can be folded away) and there are
-  // fewer live ranges. Thus only convert indices to untagged value on 64-bit
-  // platforms.
-  ParameterMode OptimalParameterMode() const {
-#if defined(BINT_IS_SMI)
-    return SMI_PARAMETERS;
-#elif defined(BINT_IS_INTPTR)
-    return INTPTR_PARAMETERS;
-#else
-#error Unknown BInt type.
-#endif
-  }
-
   TNode<IntPtrT> ParameterToIntPtr(TNode<Smi> value) { return SmiUntag(value); }
   TNode<IntPtrT> ParameterToIntPtr(TNode<IntPtrT> value) { return value; }
   TNode<IntPtrT> ParameterToIntPtr(TNode<UintPtrT> value) {
@@ ... @@ class V8_EXPORT_PRIVATE CodeStubAssembler
     return CAST(heap_object);
   }
 
-  Node* MatchesParameterMode(Node* value, ParameterMode mode);
-
 #define PARAMETER_BINOP(OpName, IntPtrOpName, SmiOpName) \
   TNode<Smi> OpName(TNode<Smi> a, TNode<Smi> b) { return SmiOpName(a, b); } \
   TNode<IntPtrT> OpName(TNode<IntPtrT> a, TNode<IntPtrT> b) { \
@@ ... @@ class V8_EXPORT_PRIVATE CodeStubAssembler
                       std::is_same<TIndex, UintPtrT>::value ||
                       std::is_same<TIndex, IntPtrT>::value,
                   "Only Smi, UintPtrT or IntPtrT index is allowed");
-    const ParameterMode mode =
-        std::is_same<TIndex, Smi>::value ? SMI_PARAMETERS : INTPTR_PARAMETERS;
     if (NeedsBoundsCheck(check_bounds)) {
       FixedArrayBoundsCheck(array, index, additional_offset);
     }
     StoreFixedArrayOrPropertyArrayElement(array, index, value, barrier_mode,
-                                          additional_offset, mode);
+                                          additional_offset);
   }
   // This doesn't emit a bounds-check. As part of the security-performance
   // tradeoff, only use it if it is performance critical.
@@ ... @@ class V8_EXPORT_PRIVATE CodeStubAssembler
 
   void StorePropertyArrayElement(TNode<PropertyArray> array,
                                  TNode<IntPtrT> index, TNode<Object> value) {
-    StoreFixedArrayOrPropertyArrayElement(
-        array, index, value, UPDATE_WRITE_BARRIER, 0, INTPTR_PARAMETERS);
+    StoreFixedArrayOrPropertyArrayElement(array, index, value,
+                                          UPDATE_WRITE_BARRIER);
   }
 
   void StoreFixedArrayElement(
@@ ... @@ class V8_EXPORT_PRIVATE CodeStubAssembler
   template <typename TIndex>
   TNode<IntPtrT> ElementOffsetFromIndex(TNode<TIndex> index, ElementsKind kind,
                                         int base_size = 0);
-  // TODO(v8:9708): remove once all uses are ported.
-  TNode<IntPtrT> ElementOffsetFromIndex(Node* index, ElementsKind kind,
-                                        ParameterMode mode, int base_size = 0);
 
   // Check that a field offset is within the bounds of the an object.
   TNode<BoolT> IsOffsetInBounds(SloppyTNode<IntPtrT> offset,
@@ ... @@ class V8_EXPORT_PRIVATE CodeStubAssembler
     return CodeAssembler::LoadRoot(root_index);
   }
 
+  template <typename TIndex>
   void StoreFixedArrayOrPropertyArrayElement(
-      TNode<UnionT<FixedArray, PropertyArray>> array, Node* index,
+      TNode<UnionT<FixedArray, PropertyArray>> array, TNode<TIndex> index,
       TNode<Object> value, WriteBarrierMode barrier_mode = UPDATE_WRITE_BARRIER,
-      int additional_offset = 0,
-      ParameterMode parameter_mode = INTPTR_PARAMETERS);
+      int additional_offset = 0);
 };
 
 class V8_EXPORT_PRIVATE CodeStubArguments {
|
[ csa ] [ cleanup ] Remove ParameterMode from the codebase
|
v8/v8
|
f2851de4b6e9ef28fe340d6095363c1adb6ba77d
|
2020-08-27T13:44:04Z
|
mmm a / tests / test_browser . py <nl> ppp b / tests / test_browser . py <nl> def test_split_memory_large_file ( self ) : <nl> self . btest ( ' split_memory_large_file . cpp ' , expected = ' 1 ' , args = [ ' - s ' , ' SPLIT_MEMORY = ' + str ( size ) , ' - s ' , ' TOTAL_MEMORY = 100000000 ' , ' - s ' , ' TOTAL_STACK = 10240 ' , ' - - preload - file ' , ' huge . dat ' ] , timeout = 60 ) <nl> <nl> def test_binaryen ( self ) : <nl> - try : <nl> - BINARYEN_ROOT <nl> - except : <nl> - return self . skip ( ' no binaryen set up ' ) <nl> self . btest ( ' browser_test_hello_world . c ' , expected = ' 0 ' , args = [ ' - s ' , ' BINARYEN = 1 ' , ' - s ' , ' BINARYEN_METHOD = " interpret - binary " ' ] ) <nl> <nl> mmm a / tests / test_other . py <nl> ppp b / tests / test_other . py <nl> def test ( contents ) : <nl> check_execute ( [ PYTHON , EMCC , ' src . cpp ' , ' - O2 ' ] ) # optimized , so no assertions <nl> self . assertNotContained ( WARNING , open ( ' a . out . js ' ) . read ( ) ) <nl> <nl> + def test_binaryen ( self ) : <nl> + check_execute ( [ PYTHON , EMCC , path_from_root ( ' tests ' , ' hello_world . cpp ' ) , ' - s ' , ' BINARYEN = 1 ' , ' - s ' , ' BINARYEN_METHOD = " interpret - binary " ' ] ) <nl> + self . assertContained ( ' hello , world ! ' , run_js ( ' a . out . js ' ) ) <nl> + <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # Function eliminator tests <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl>
|
add binaryen tests in other and browser
|
emscripten-core/emscripten
|
943b2ba93935e211539fb1f5d2b384cc4fe03e5e
|
2016-03-26T22:46:58Z
|
mmm a / platform / javascript / http_client_javascript . cpp <nl> ppp b / platform / javascript / http_client_javascript . cpp <nl> Error HTTPClient : : poll ( ) { <nl> case STATUS_CONNECTION_ERROR : <nl> return ERR_CONNECTION_ERROR ; <nl> <nl> - case STATUS_REQUESTING : <nl> + case STATUS_REQUESTING : { <nl> <nl> # ifdef DEBUG_ENABLED <nl> if ( ! has_polled ) { <nl> Error HTTPClient : : poll ( ) { <nl> godot_xhr_get_response ( xhr_id , write . ptr ( ) , polled_response . size ( ) ) ; <nl> write = PoolByteArray : : Write ( ) ; <nl> break ; <nl> + } <nl> + <nl> + default : <nl> + ERR_FAIL_V ( ERR_BUG ) ; <nl> } <nl> return OK ; <nl> } <nl> mmm a / platform / javascript / javascript_eval . cpp <nl> ppp b / platform / javascript / javascript_eval . cpp <nl> Variant JavaScript : : eval ( const String & p_code , bool p_use_global_exec_context ) { <nl> case Variant : : POOL_BYTE_ARRAY : <nl> arr_write = PoolByteArray : : Write ( ) ; <nl> return arr ; <nl> + default : <nl> + return Variant ( ) ; <nl> } <nl> - return Variant ( ) ; <nl> } <nl> <nl> # endif / / JAVASCRIPT_EVAL_ENABLED <nl> mmm a / platform / javascript / os_javascript . cpp <nl> ppp b / platform / javascript / os_javascript . cpp <nl> void OS_JavaScript : : set_window_size ( const Size2 p_size ) { <nl> emscripten_exit_soft_fullscreen ( ) ; <nl> window_maximized = false ; <nl> } <nl> - emscripten_set_canvas_size ( p_size . x , p_size . y ) ; <nl> + emscripten_set_canvas_element_size ( NULL , p_size . x , p_size . 
y ) ; <nl> } <nl> } <nl> <nl> Size2 OS_JavaScript : : get_window_size ( ) const { <nl> <nl> - int canvas [ 3 ] ; <nl> - emscripten_get_canvas_size ( canvas , canvas + 1 , canvas + 2 ) ; <nl> + int canvas [ 2 ] ; <nl> + emscripten_get_canvas_element_size ( NULL , canvas , canvas + 1 ) ; <nl> return Size2 ( canvas [ 0 ] , canvas [ 1 ] ) ; <nl> } <nl> <nl> static void set_css_cursor ( const char * p_cursor ) { <nl> / * clang - format on * / <nl> } <nl> <nl> - static const char * get_css_cursor ( ) { <nl> + static bool is_css_cursor_hidden ( ) { <nl> <nl> - char cursor [ 16 ] ; <nl> / * clang - format off * / <nl> - EM_ASM_INT ( { <nl> - stringToUTF8 ( Module . canvas . style . cursor ? Module . canvas . style . cursor : ' auto ' , $ 0 , 16 ) ; <nl> - } , cursor ) ; <nl> + return EM_ASM_INT ( { <nl> + return Module . canvas . style . cursor = = = ' none ' ; <nl> + } ) ; <nl> / * clang - format on * / <nl> - return cursor ; <nl> } <nl> <nl> void OS_JavaScript : : set_cursor_shape ( CursorShape p_shape ) { <nl> void OS_JavaScript : : set_mouse_mode ( OS : : MouseMode p_mode ) { <nl> <nl> OS : : MouseMode OS_JavaScript : : get_mouse_mode ( ) const { <nl> <nl> - if ( String : : utf8 ( get_css_cursor ( ) ) = = " none " ) <nl> + if ( is_css_cursor_hidden ( ) ) <nl> return MOUSE_MODE_HIDDEN ; <nl> <nl> EmscriptenPointerlockChangeEvent ev ; <nl> bool OS_JavaScript : : main_loop_iterate ( ) { <nl> strategy . canvasResizedCallback = NULL ; <nl> emscripten_enter_soft_fullscreen ( NULL , & strategy ) ; <nl> } else { <nl> - emscripten_set_canvas_size ( windowed_size . width , windowed_size . height ) ; <nl> + emscripten_set_canvas_element_size ( NULL , windowed_size . width , windowed_size . height ) ; <nl> } <nl> just_exited_fullscreen = false ; <nl> } <nl> <nl> - int canvas [ 3 ] ; <nl> - emscripten_get_canvas_size ( canvas , canvas + 1 , canvas + 2 ) ; <nl> + int canvas [ 2 ] ; <nl> + emscripten_get_canvas_element_size ( NULL , canvas , canvas + 1 ) ; <nl> video_mode . 
width = canvas [ 0 ] ; <nl> video_mode . height = canvas [ 1 ] ; <nl> if ( ! window_maximized & & ! video_mode . fullscreen & & ! just_exited_fullscreen & & ! entering_fullscreen ) { <nl>
|
Merge pull request from eska014 / html5 - wall
|
godotengine/godot
|
214d9bd17e203033551141b749727db4adf43bec
|
2018-10-02T06:39:29Z
|
mmm a / src / flags / flag - definitions . h <nl> ppp b / src / flags / flag - definitions . h <nl> DEFINE_INT ( trace_wasm_ast_start , 0 , <nl> DEFINE_INT ( trace_wasm_ast_end , 0 , " end function for wasm AST trace ( exclusive ) " ) <nl> DEFINE_BOOL ( liftoff , true , <nl> " enable Liftoff , the baseline compiler for WebAssembly " ) <nl> + DEFINE_BOOL ( liftoff_extern_ref , false , <nl> + " enable support for externref in Liftoff " ) <nl> / / We can ' t tier up ( from Liftoff to TurboFan ) in single - threaded mode , hence <nl> / / disable Liftoff in that configuration for now . The alternative is disabling <nl> / / TurboFan , which would reduce peak performance considerably . <nl> mmm a / src / wasm / baseline / arm / liftoff - assembler - arm . h <nl> ppp b / src / wasm / baseline / arm / liftoff - assembler - arm . h <nl> inline void Store ( LiftoffAssembler * assm , LiftoffRegister src , MemOperand dst , <nl> # endif <nl> switch ( type . kind ( ) ) { <nl> case ValueType : : kI32 : <nl> + case ValueType : : kOptRef : <nl> + case ValueType : : kRef : <nl> assm - > str ( src . gp ( ) , dst ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> inline void Load ( LiftoffAssembler * assm , LiftoffRegister dst , MemOperand src , <nl> ValueType type ) { <nl> switch ( type . kind ( ) ) { <nl> case ValueType : : kI32 : <nl> + case ValueType : : kOptRef : <nl> + case ValueType : : kRef : <nl> assm - > ldr ( dst . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> int LiftoffAssembler : : SlotSizeForType ( ValueType type ) { <nl> } <nl> <nl> bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { <nl> - switch ( type . kind ( ) ) { <nl> - case ValueType : : kS128 : <nl> - return true ; <nl> - default : <nl> - / / No alignment because all other types are kStackSlotSize . <nl> - return false ; <nl> - } <nl> + return ( type . kind ( ) = = ValueType : : kS128 | | type . 
is_reference_type ( ) ) ; <nl> } <nl> <nl> void LiftoffAssembler : : LoadConstant ( LiftoffRegister reg , WasmValue value , <nl> void LiftoffAssembler : : MoveStackValue ( uint32_t dst_offset , uint32_t src_offset , <nl> <nl> void LiftoffAssembler : : Move ( Register dst , Register src , ValueType type ) { <nl> DCHECK_NE ( dst , src ) ; <nl> - DCHECK_EQ ( type , kWasmI32 ) ; <nl> + DCHECK ( type = = kWasmI32 | | type . is_reference_type ( ) ) ; <nl> TurboAssembler : : Move ( dst , src ) ; <nl> } <nl> <nl> mmm a / src / wasm / baseline / arm64 / liftoff - assembler - arm64 . h <nl> ppp b / src / wasm / baseline / arm64 / liftoff - assembler - arm64 . h <nl> inline CPURegister GetRegFromType ( const LiftoffRegister & reg , ValueType type ) { <nl> case ValueType : : kI32 : <nl> return reg . gp ( ) . W ( ) ; <nl> case ValueType : : kI64 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : <nl> return reg . gp ( ) . X ( ) ; <nl> case ValueType : : kF32 : <nl> return reg . fp ( ) . S ( ) ; <nl> int LiftoffAssembler : : SlotSizeForType ( ValueType type ) { <nl> } <nl> <nl> bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { <nl> - switch ( type . kind ( ) ) { <nl> - case ValueType : : kS128 : <nl> - return true ; <nl> - default : <nl> - / / No alignment because all other types are kStackSlotSize . <nl> - return false ; <nl> - } <nl> + return type . kind ( ) = = ValueType : : kS128 | | type . is_reference_type ( ) ; <nl> } <nl> <nl> void LiftoffAssembler : : LoadConstant ( LiftoffRegister reg , WasmValue value , <nl> void LiftoffAssembler : : Move ( Register dst , Register src , ValueType type ) { <nl> if ( type = = kWasmI32 ) { <nl> Mov ( dst . W ( ) , src . W ( ) ) ; <nl> } else { <nl> - DCHECK_EQ ( kWasmI64 , type ) ; <nl> + DCHECK ( kWasmI64 = = type | | type . is_reference_type ( ) ) ; <nl> Mov ( dst . X ( ) , src . X ( ) ) ; <nl> } <nl> } <nl> mmm a / src / wasm / baseline / ia32 / liftoff - assembler - ia32 . 
h <nl> ppp b / src / wasm / baseline / ia32 / liftoff - assembler - ia32 . h <nl> inline void Load ( LiftoffAssembler * assm , LiftoffRegister dst , Register base , <nl> Operand src ( base , offset ) ; <nl> switch ( type . kind ( ) ) { <nl> case ValueType : : kI32 : <nl> + case ValueType : : kOptRef : <nl> + case ValueType : : kRef : <nl> assm - > mov ( dst . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> int LiftoffAssembler : : SlotSizeForType ( ValueType type ) { <nl> return type . element_size_bytes ( ) ; <nl> } <nl> <nl> - bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { return false ; } <nl> + bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { <nl> + return type . is_reference_type ( ) ; <nl> + } <nl> <nl> void LiftoffAssembler : : LoadConstant ( LiftoffRegister reg , WasmValue value , <nl> RelocInfo : : Mode rmode ) { <nl> void LiftoffAssembler : : MoveStackValue ( uint32_t dst_offset , uint32_t src_offset , <nl> <nl> void LiftoffAssembler : : Move ( Register dst , Register src , ValueType type ) { <nl> DCHECK_NE ( dst , src ) ; <nl> - DCHECK_EQ ( kWasmI32 , type ) ; <nl> + DCHECK ( kWasmI32 = = type | | type . is_reference_type ( ) ) ; <nl> mov ( dst , src ) ; <nl> } <nl> <nl> void LiftoffAssembler : : Spill ( int offset , LiftoffRegister reg , ValueType type ) { <nl> Operand dst = liftoff : : GetStackSlot ( offset ) ; <nl> switch ( type . kind ( ) ) { <nl> case ValueType : : kI32 : <nl> + case ValueType : : kOptRef : <nl> + case ValueType : : kRef : <nl> mov ( dst , reg . gp ( ) ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> mmm a / src / wasm / baseline / liftoff - assembler . cc <nl> ppp b / src / wasm / baseline / liftoff - assembler . cc <nl> void LiftoffAssembler : : CacheState : : Split ( const CacheState & source ) { <nl> * this = source ; <nl> } <nl> <nl> + void LiftoffAssembler : : CacheState : : DefineSafepoint ( Safepoint & safepoint ) { <nl> + for ( auto slot : stack_state ) { <nl> + DCHECK ( ! 
slot . is_reg ( ) ) ; <nl> + <nl> + if ( slot . type ( ) . is_reference_type ( ) ) { <nl> + / / index = 0 is for the stack slot at ' fp - kSystemPointerSize ' , the <nl> + / / location of the current stack slot is ' fp - slot . offset ( ) ' . <nl> + / / The index we need is therefore ' ( fp - kSystemPointerSize ) - ( fp - <nl> + / / slot . offset ( ) ) ' = ' slot . offset ( ) - kSystemPointerSize ' . <nl> + auto index = ( slot . offset ( ) - kSystemPointerSize ) / kSystemPointerSize ; <nl> + safepoint . DefinePointerSlot ( index ) ; <nl> + } <nl> + } <nl> + } <nl> + <nl> namespace { <nl> <nl> constexpr AssemblerOptions DefaultLiftoffOptions ( ) { <nl> mmm a / src / wasm / baseline / liftoff - assembler . h <nl> ppp b / src / wasm / baseline / liftoff - assembler . h <nl> class LiftoffAssembler : public TurboAssembler { <nl> / / Disallow copy construction . <nl> CacheState ( const CacheState & ) = delete ; <nl> <nl> + void DefineSafepoint ( Safepoint & safepoint ) ; <nl> + <nl> base : : SmallVector < VarState , 8 > stack_state ; <nl> LiftoffRegList used_registers ; <nl> uint32_t register_use_count [ kAfterMaxLiftoffRegCode ] = { 0 } ; <nl> class LiftoffAssembler : public TurboAssembler { <nl> uint32_t num_locals ( ) const { return num_locals_ ; } <nl> void set_num_locals ( uint32_t num_locals ) ; <nl> <nl> - int GetTotalFrameSlotCount ( ) const { <nl> - / / TODO ( zhin ) : Temporary for migration from index to offset . <nl> - return ( ( max_used_spill_offset_ + kStackSlotSize - 1 ) / kStackSlotSize ) ; <nl> + int GetTotalFrameSlotCountForGC ( ) const { <nl> + / / The GC does not care about the actual number of spill slots , just about <nl> + / / the number of references that could be there in the spilling area . Note <nl> + / / that the offset of the first spill slot is kSystemPointerSize and not <nl> + / / ' 0 ' . Therefore we don ' t have to add ' + 1 ' here . 
<nl> + return max_used_spill_offset_ / kSystemPointerSize ; <nl> } <nl> <nl> int GetTotalFrameSize ( ) const { return max_used_spill_offset_ ; } <nl> mmm a / src / wasm / baseline / liftoff - compiler . cc <nl> ppp b / src / wasm / baseline / liftoff - compiler . cc <nl> compiler : : CallDescriptor * GetLoweredCallDescriptor ( <nl> : call_desc ; <nl> } <nl> <nl> - constexpr ValueType kSupportedTypesArr [ ] = { kWasmI32 , kWasmI64 , kWasmF32 , <nl> - kWasmF64 , kWasmS128 } ; <nl> + constexpr ValueType kSupportedTypesArr [ ] = { <nl> + kWasmI32 , kWasmI64 , kWasmF32 , kWasmF64 , <nl> + kWasmS128 , kWasmExternRef , kWasmFuncRef } ; <nl> constexpr Vector < const ValueType > kSupportedTypes = <nl> ArrayVector ( kSupportedTypesArr ) ; <nl> <nl> + constexpr ValueType kSupportedTypesWithoutRefsArr [ ] = { <nl> + kWasmI32 , kWasmI64 , kWasmF32 , kWasmF64 , kWasmS128 } ; <nl> + constexpr Vector < const ValueType > kSupportedTypesWithoutRefs = <nl> + ArrayVector ( kSupportedTypesWithoutRefsArr ) ; <nl> + <nl> constexpr Condition GetCompareCondition ( WasmOpcode opcode ) { <nl> switch ( opcode ) { <nl> case kExprI32Eq : <nl> class LiftoffCompiler { <nl> Vector < const uint8_t > : : cast ( VectorOf ( protected_instructions_ ) ) ) ; <nl> } <nl> <nl> - uint32_t GetTotalFrameSlotCount ( ) const { <nl> - return __ GetTotalFrameSlotCount ( ) ; <nl> + uint32_t GetTotalFrameSlotCountForGC ( ) const { <nl> + return __ GetTotalFrameSlotCountForGC ( ) ; <nl> } <nl> <nl> void unsupported ( FullDecoder * decoder , LiftoffBailoutReason reason , <nl> class LiftoffCompiler { <nl> <nl> void TierUpFunction ( FullDecoder * decoder ) { <nl> __ CallRuntimeStub ( WasmCode : : kWasmTriggerTierUp ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> + DefineSafepoint ( ) ; <nl> } <nl> <nl> void TraceFunctionEntry ( FullDecoder * decoder ) { <nl> class LiftoffCompiler { <nl> source_position_table_builder_ . 
AddPosition ( <nl> __ pc_offset ( ) , SourcePosition ( decoder - > position ( ) ) , false ) ; <nl> __ CallRuntimeStub ( WasmCode : : kWasmTraceEnter ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> + DefineSafepoint ( ) ; <nl> } <nl> <nl> void StartFunctionBody ( FullDecoder * decoder , Control * block ) { <nl> for ( uint32_t i = 0 ; i < __ num_locals ( ) ; + + i ) { <nl> - if ( ! CheckSupportedType ( decoder , kSupportedTypes , __ local_type ( i ) , <nl> - " param " ) ) <nl> + if ( ! CheckSupportedType ( decoder , <nl> + FLAG_liftoff_extern_ref <nl> + ? kSupportedTypes <nl> + : kSupportedTypesWithoutRefs , <nl> + __ local_type ( i ) , " param " ) ) <nl> return ; <nl> } <nl> <nl> class LiftoffCompiler { <nl> } <nl> } <nl> <nl> + if ( FLAG_liftoff_extern_ref ) { <nl> + / / Initialize all reference type locals with ref . null . <nl> + for ( uint32_t param_idx = num_params ; param_idx < __ num_locals ( ) ; <nl> + + + param_idx ) { <nl> + ValueType type = decoder - > local_type ( param_idx ) ; <nl> + if ( type . is_reference_type ( ) ) { <nl> + Register isolate_root = __ GetUnusedRegister ( kGpReg , { } ) . gp ( ) ; <nl> + / / We can re - use the isolate_root register as result register . <nl> + Register result = isolate_root ; <nl> + <nl> + LOAD_INSTANCE_FIELD ( isolate_root , IsolateRoot , kSystemPointerSize ) ; <nl> + __ LoadTaggedPointer ( <nl> + result , isolate_root , no_reg , <nl> + IsolateData : : root_slot_offset ( RootIndex : : kNullValue ) , { } ) ; <nl> + __ Spill ( __ cache_state ( ) - > stack_state . back ( ) . offset ( ) , <nl> + LiftoffRegister ( result ) , type ) ; <nl> + } <nl> + } <nl> + } <nl> DCHECK_EQ ( __ num_locals ( ) , __ cache_state ( ) - > stack_height ( ) ) ; <nl> <nl> if ( V8_UNLIKELY ( debug_sidetable_builder_ ) ) { <nl> class LiftoffCompiler { <nl> source_position_table_builder_ . 
AddPosition ( <nl> __ pc_offset ( ) , SourcePosition ( ool - > position ) , true ) ; <nl> __ CallRuntimeStub ( ool - > stub ) ; <nl> + / / TODO ( ahaas ) : Define a proper safepoint here . <nl> + safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> DCHECK_EQ ( ! debug_sidetable_builder_ , ! ool - > debug_sidetable_entry_builder ) ; <nl> if ( V8_UNLIKELY ( ool - > debug_sidetable_entry_builder ) ) { <nl> ool - > debug_sidetable_entry_builder - > set_pc_offset ( __ pc_offset ( ) ) ; <nl> } <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> DCHECK_EQ ( ool - > continuation . get ( ) - > is_bound ( ) , is_stack_check ) ; <nl> if ( ! ool - > regs_to_save . is_empty ( ) ) __ PopRegisters ( ool - > regs_to_save ) ; <nl> if ( is_stack_check ) { <nl> class LiftoffCompiler { <nl> __ PatchPrepareStackFrame ( pc_offset_stack_frame_construction_ , <nl> __ GetTotalFrameSize ( ) ) ; <nl> __ FinishCode ( ) ; <nl> - safepoint_table_builder_ . Emit ( & asm_ , __ GetTotalFrameSlotCount ( ) ) ; <nl> + safepoint_table_builder_ . Emit ( & asm_ , __ GetTotalFrameSlotCountForGC ( ) ) ; <nl> __ MaybeEmitOutOfLineConstantPool ( ) ; <nl> / / The previous calls may have also generated a bailout . <nl> DidAssemblerBailout ( decoder ) ; <nl> class LiftoffCompiler { <nl> source_position_table_builder_ . AddPosition ( <nl> __ pc_offset ( ) , SourcePosition ( decoder - > position ( ) ) , true ) ; <nl> __ CallRuntimeStub ( WasmCode : : kWasmDebugBreak ) ; <nl> - RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kAllowRegisters ) ; <nl> + / / TODO ( ahaas ) : Define a proper safepoint here . <nl> safepoint_table_builder_ . 
DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> + RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kAllowRegisters ) ; <nl> } <nl> <nl> void Block ( FullDecoder * decoder , Control * block ) { } <nl> class LiftoffCompiler { <nl> } <nl> <nl> void RefNull ( FullDecoder * decoder , ValueType type , Value * ) { <nl> + if ( ! FLAG_liftoff_extern_ref ) { <nl> + unsupported ( decoder , kRefTypes , " ref_null " ) ; <nl> + return ; <nl> + } <nl> Register isolate_root = __ GetUnusedRegister ( kGpReg , { } ) . gp ( ) ; <nl> / / We can re - use the isolate_root register as result register . <nl> Register result = isolate_root ; <nl> class LiftoffCompiler { <nl> source_position_table_builder_ . AddPosition ( <nl> __ pc_offset ( ) , SourcePosition ( decoder - > position ( ) ) , false ) ; <nl> __ CallRuntimeStub ( WasmCode : : kWasmTraceExit ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> + DefineSafepoint ( ) ; <nl> <nl> __ DeallocateStackSlot ( sizeof ( int64_t ) ) ; <nl> } <nl> class LiftoffCompiler { <nl> void GlobalGet ( FullDecoder * decoder , Value * result , <nl> const GlobalIndexImmediate < validate > & imm ) { <nl> const auto * global = & env_ - > module - > globals [ imm . index ] ; <nl> - if ( ! CheckSupportedType ( decoder , kSupportedTypes , global - > type , " global " ) ) <nl> + if ( ! CheckSupportedType ( decoder , kSupportedTypesWithoutRefs , global - > type , <nl> + " global " ) ) { <nl> return ; <nl> + } <nl> LiftoffRegList pinned ; <nl> uint32_t offset = 0 ; <nl> Register addr = GetGlobalBaseAndOffset ( global , & pinned , & offset ) ; <nl> class LiftoffCompiler { <nl> void GlobalSet ( FullDecoder * decoder , const Value & value , <nl> const GlobalIndexImmediate < validate > & imm ) { <nl> auto * global = & env_ - > module - > globals [ imm . index ] ; <nl> - if ( ! CheckSupportedType ( decoder , kSupportedTypes , global - > type , " global " ) ) <nl> + if ( ! 
CheckSupportedType ( decoder , kSupportedTypesWithoutRefs , global - > type , <nl> + " global " ) ) <nl> return ; <nl> LiftoffRegList pinned ; <nl> uint32_t offset = 0 ; <nl> class LiftoffCompiler { <nl> source_position_table_builder_ . AddPosition ( __ pc_offset ( ) , <nl> SourcePosition ( position ) , false ) ; <nl> __ CallRuntimeStub ( WasmCode : : kWasmTraceMemory ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> + DefineSafepoint ( ) ; <nl> <nl> __ DeallocateStackSlot ( sizeof ( MemoryTracingInfo ) ) ; <nl> } <nl> class LiftoffCompiler { <nl> if ( input . gp ( ) ! = param_reg ) __ Move ( param_reg , input . gp ( ) , kWasmI32 ) ; <nl> <nl> __ CallRuntimeStub ( WasmCode : : kWasmMemoryGrow ) ; <nl> + DefineSafepoint ( ) ; <nl> RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kDidSpill ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> <nl> if ( kReturnRegister0 ! = result . gp ( ) ) { <nl> __ Move ( result . gp ( ) , kReturnRegister0 , kWasmI32 ) ; <nl> class LiftoffCompiler { <nl> __ PrepareBuiltinCall ( & sig , call_descriptor , <nl> { index , expected_value , timeout } ) ; <nl> __ CallRuntimeStub ( target ) ; <nl> - <nl> + DefineSafepoint ( ) ; <nl> / / Pop parameters from the value stack . <nl> __ cache_state ( ) - > stack_state . pop_back ( 3 ) ; <nl> <nl> RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kDidSpill ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> <nl> __ PushRegister ( kWasmI32 , LiftoffRegister ( kReturnRegister0 ) ) ; <nl> } <nl> class LiftoffCompiler { <nl> { descriptor . GetRegisterParameter ( 1 ) , count , kWasmI32 } } ) ; <nl> <nl> __ CallRuntimeStub ( WasmCode : : kWasmAtomicNotify ) ; <nl> + DefineSafepoint ( ) ; <nl> RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kDidSpill ) ; <nl> - safepoint_table_builder_ . 
DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> <nl> __ PushRegister ( kWasmI32 , LiftoffRegister ( kReturnRegister0 ) ) ; <nl> } <nl> class LiftoffCompiler { <nl> __ PrepareBuiltinCall ( & sig , call_descriptor , <nl> { dst , src , size , table_index , segment_index } ) ; <nl> __ CallRuntimeStub ( target ) ; <nl> + DefineSafepoint ( ) ; <nl> <nl> / / Pop parameters from the value stack . <nl> __ cache_state ( ) - > stack_state . pop_back ( 3 ) ; <nl> <nl> RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kDidSpill ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> } <nl> <nl> void ElemDrop ( FullDecoder * decoder , const ElemDropImmediate < validate > & imm ) { <nl> class LiftoffCompiler { <nl> __ PrepareBuiltinCall ( & sig , call_descriptor , <nl> { dst , src , size , table_dst_index , table_src_index } ) ; <nl> __ CallRuntimeStub ( target ) ; <nl> + DefineSafepoint ( ) ; <nl> <nl> / / Pop parameters from the value stack . <nl> __ cache_state ( ) - > stack_state . pop_back ( 3 ) ; <nl> <nl> RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kDidSpill ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> } <nl> <nl> void TableGrow ( FullDecoder * decoder , const TableIndexImmediate < validate > & imm , <nl> class LiftoffCompiler { <nl> const CallFunctionImmediate < validate > & imm , <nl> const Value args [ ] , Value returns [ ] , CallKind call_kind ) { <nl> for ( ValueType ret : imm . sig - > returns ( ) ) { <nl> - if ( ! CheckSupportedType ( decoder , kSupportedTypes , ret , " return " ) ) { <nl> + if ( ! CheckSupportedType ( decoder , <nl> + FLAG_liftoff_extern_ref <nl> + ? kSupportedTypes <nl> + : kSupportedTypesWithoutRefs , <nl> + ret , " return " ) ) { <nl> / / TODO ( 7581 ) : Remove this once reference - types are full supported . <nl> if ( ! ret . 
is_reference_type ( ) ) { <nl> return ; <nl> class LiftoffCompiler { <nl> } <nl> } <nl> <nl> + DefineSafepoint ( ) ; <nl> RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kDidSpill ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> <nl> __ FinishCall ( imm . sig , call_descriptor ) ; <nl> } <nl> class LiftoffCompiler { <nl> return unsupported ( decoder , kRefTypes , " table index ! = 0 " ) ; <nl> } <nl> for ( ValueType ret : imm . sig - > returns ( ) ) { <nl> - if ( ! CheckSupportedType ( decoder , kSupportedTypes , ret , " return " ) ) { <nl> + if ( ! CheckSupportedType ( decoder , <nl> + FLAG_liftoff_extern_ref <nl> + ? kSupportedTypes <nl> + : kSupportedTypesWithoutRefs , <nl> + ret , " return " ) ) { <nl> return ; <nl> } <nl> } <nl> class LiftoffCompiler { <nl> __ CallIndirect ( imm . sig , call_descriptor , target ) ; <nl> } <nl> <nl> + DefineSafepoint ( ) ; <nl> RegisterDebugSideTableEntry ( DebugSideTableBuilder : : kDidSpill ) ; <nl> - safepoint_table_builder_ . DefineSafepoint ( & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> <nl> __ FinishCall ( imm . sig , call_descriptor ) ; <nl> } <nl> class LiftoffCompiler { <nl> os < < " \ n " ; <nl> } <nl> <nl> + void DefineSafepoint ( ) { <nl> + Safepoint safepoint = safepoint_table_builder_ . DefineSafepoint ( <nl> + & asm_ , Safepoint : : kNoLazyDeopt ) ; <nl> + __ cache_state ( ) - > DefineSafepoint ( safepoint ) ; <nl> + } <nl> + <nl> DISALLOW_IMPLICIT_CONSTRUCTORS ( LiftoffCompiler ) ; <nl> } ; <nl> <nl> WasmCompilationResult ExecuteLiftoffCompilation ( <nl> result . instr_buffer = instruction_buffer - > ReleaseBuffer ( ) ; <nl> result . source_positions = compiler - > GetSourcePositionTable ( ) ; <nl> result . protected_instructions_data = compiler - > GetProtectedInstructionsData ( ) ; <nl> - result . frame_slot_count = compiler - > GetTotalFrameSlotCount ( ) ; <nl> + result . 
frame_slot_count = compiler - > GetTotalFrameSlotCountForGC ( ) + <nl> + StandardFrameConstants : : kFixedSlotCountAboveFp ; <nl> result . tagged_parameter_slots = call_descriptor - > GetTaggedParameterSlots ( ) ; <nl> result . func_index = func_index ; <nl> result . result_tier = ExecutionTier : : kLiftoff ; <nl> mmm a / src / wasm / baseline / mips / liftoff - assembler - mips . h <nl> ppp b / src / wasm / baseline / mips / liftoff - assembler - mips . h <nl> inline void Load ( LiftoffAssembler * assm , LiftoffRegister dst , Register base , <nl> MemOperand src ( base , offset ) ; <nl> switch ( type . kind ( ) ) { <nl> case ValueType : : kI32 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : <nl> assm - > lw ( dst . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> int LiftoffAssembler : : SlotSizeForType ( ValueType type ) { <nl> } <nl> <nl> bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { <nl> - switch ( type . kind ( ) ) { <nl> - case ValueType : : kS128 : <nl> - return true ; <nl> - default : <nl> - / / No alignment because all other types are kStackSlotSize . <nl> - return false ; <nl> - } <nl> + return type . kind ( ) = = ValueType : : kS128 | | type . is_reference_type ( ) ; <nl> } <nl> <nl> void LiftoffAssembler : : LoadConstant ( LiftoffRegister reg , WasmValue value , <nl> void LiftoffAssembler : : Spill ( int offset , LiftoffRegister reg , ValueType type ) { <nl> MemOperand dst = liftoff : : GetStackSlot ( offset ) ; <nl> switch ( type . kind ( ) ) { <nl> case ValueType : : kI32 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : <nl> sw ( reg . gp ( ) , dst ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> void LiftoffAssembler : : Fill ( LiftoffRegister reg , int offset , ValueType type ) { <nl> MemOperand src = liftoff : : GetStackSlot ( offset ) ; <nl> switch ( type . 
kind ( ) ) { <nl> case ValueType : : kI32 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : <nl> lw ( reg . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> mmm a / src / wasm / baseline / mips64 / liftoff - assembler - mips64 . h <nl> ppp b / src / wasm / baseline / mips64 / liftoff - assembler - mips64 . h <nl> inline void Load ( LiftoffAssembler * assm , LiftoffRegister dst , MemOperand src , <nl> assm - > lw ( dst . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : <nl> assm - > ld ( dst . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kF32 : <nl> int LiftoffAssembler : : SlotSizeForType ( ValueType type ) { <nl> } <nl> <nl> bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { <nl> - switch ( type . kind ( ) ) { <nl> - case ValueType : : kS128 : <nl> - return true ; <nl> - default : <nl> - / / No alignment because all other types are kStackSlotSize . <nl> - return false ; <nl> - } <nl> + return type . kind ( ) = = ValueType : : kS128 | | type . is_reference_type ( ) ; <nl> } <nl> <nl> void LiftoffAssembler : : LoadConstant ( LiftoffRegister reg , WasmValue value , <nl> void LiftoffAssembler : : Spill ( int offset , LiftoffRegister reg , ValueType type ) { <nl> Sw ( reg . gp ( ) , dst ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : <nl> Sd ( reg . gp ( ) , dst ) ; <nl> break ; <nl> case ValueType : : kF32 : <nl> void LiftoffAssembler : : Spill ( int offset , WasmValue value ) { <nl> sw ( tmp . gp ( ) , dst ) ; <nl> break ; <nl> } <nl> - case ValueType : : kI64 : { <nl> + case ValueType : : kI64 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : { <nl> LiftoffRegister tmp = GetUnusedRegister ( kGpReg , { } ) ; <nl> TurboAssembler : : li ( tmp . gp ( ) , value . to_i64 ( ) ) ; <nl> sd ( tmp . 
gp ( ) , dst ) ; <nl> void LiftoffAssembler : : Fill ( LiftoffRegister reg , int offset , ValueType type ) { <nl> Lw ( reg . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> + case ValueType : : kRef : <nl> + case ValueType : : kOptRef : <nl> Ld ( reg . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kF32 : <nl> mmm a / src / wasm / baseline / x64 / liftoff - assembler - x64 . h <nl> ppp b / src / wasm / baseline / x64 / liftoff - assembler - x64 . h <nl> inline void Load ( LiftoffAssembler * assm , LiftoffRegister dst , Operand src , <nl> assm - > movl ( dst . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> + case ValueType : : kOptRef : <nl> + case ValueType : : kRef : <nl> assm - > movq ( dst . gp ( ) , src ) ; <nl> break ; <nl> case ValueType : : kF32 : <nl> int LiftoffAssembler : : SlotSizeForType ( ValueType type ) { <nl> return type . element_size_bytes ( ) ; <nl> } <nl> <nl> - bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { return false ; } <nl> + bool LiftoffAssembler : : NeedsAlignment ( ValueType type ) { <nl> + return type . is_reference_type ( ) ; <nl> + } <nl> <nl> void LiftoffAssembler : : LoadConstant ( LiftoffRegister reg , WasmValue value , <nl> RelocInfo : : Mode rmode ) { <nl> void LiftoffAssembler : : Move ( Register dst , Register src , ValueType type ) { <nl> if ( type = = kWasmI32 ) { <nl> movl ( dst , src ) ; <nl> } else { <nl> - DCHECK_EQ ( kWasmI64 , type ) ; <nl> + DCHECK ( kWasmI64 = = type | | type . is_reference_type ( ) ) ; <nl> movq ( dst , src ) ; <nl> } <nl> } <nl> void LiftoffAssembler : : Spill ( int offset , LiftoffRegister reg , ValueType type ) { <nl> movl ( dst , reg . gp ( ) ) ; <nl> break ; <nl> case ValueType : : kI64 : <nl> + case ValueType : : kOptRef : <nl> + case ValueType : : kRef : <nl> movq ( dst , reg . gp ( ) ) ; <nl> break ; <nl> case ValueType : : kF32 : <nl> new file mode 100644 <nl> index 00000000000 . . 
bf10030837d <nl> mmm / dev / null <nl> ppp b / test / mjsunit / wasm / externref - liftoff . js <nl> <nl> + / / Copyright 2020 the V8 project authors . All rights reserved . <nl> + / / Use of this source code is governed by a BSD - style license that can be <nl> + / / found in the LICENSE file . <nl> + <nl> + / / Flags : - - expose - wasm - - experimental - wasm - reftypes - - expose - gc - - liftoff <nl> + / / Flags : - - no - wasm - tier - up - - liftoff - extern - ref <nl> + <nl> + load ( " test / mjsunit / wasm / externref . js " ) ; <nl>
|
[wasm][liftoff] Emit safepoints for externref values on the stack
|
v8/v8
|
10348e8eb681bf9ac75db935591f32c4e0bca1d4
|
2020-09-07T20:26:23Z
|
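The Liftoff diff above extends the per-kind load/spill/fill switches so the reference kinds (kRef/kOptRef) share the pointer-sized path with kI64, and makes NeedsAlignment return true for reference types so the GC can scan those stack slots. A minimal Python sketch of that dispatch, with illustrative kind names that stand in for V8's ValueType enum:

```python
# Illustrative model of Liftoff's per-kind dispatch after the change:
# reference kinds (ref / optref) take the same pointer-sized path as i64,
# and slots holding references (or SIMD values) require alignment.
POINTER_SIZED_KINDS = {"i64", "ref", "optref"}  # hypothetical kind names

def slot_load_width(kind):
    """Return the load width in bytes for a value kind on a 64-bit target."""
    if kind in ("i32", "f32"):
        return 4  # lw / lwc1-style 32-bit load
    if kind in POINTER_SIZED_KINDS or kind == "f64":
        return 8  # ld / movq-style 64-bit load
    if kind == "s128":
        return 16  # SIMD vector load
    raise ValueError(f"unsupported kind: {kind}")

def needs_alignment(kind):
    """Mirror of NeedsAlignment: SIMD and reference types require it."""
    return kind == "s128" or kind in {"ref", "optref"}
```

The point of folding kRef/kOptRef into the existing kI64 cases rather than adding new load instructions is that on these 64-bit targets a reference is just a tagged pointer of machine-word size; only the alignment and safepoint bookkeeping differ.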
mmm a / system / lib / split_malloc . cpp <nl> ppp b / system / lib / split_malloc . cpp <nl> static size_t split_memory = 0 ; <nl> static size_t num_spaces = 0 ; <nl> static bool allocated [ MAX_SPACES ] ; / / whether storage is allocated for this chunk , both an ArrayBuffer in JS and an mspace here <nl> static mspace spaces [ MAX_SPACES ] ; / / 0 is for the stack , static , etc - not used by malloc # TODO : make a small space in there ? <nl> + static size_t counts [ MAX_SPACES ] ; / / how many allocations are in the space <nl> <nl> static void init ( ) { <nl> total_memory = EM_ASM_INT_V ( { return TOTAL_MEMORY ; } ) ; <nl> static void init ( ) { <nl> if ( num_spaces > = MAX_SPACES ) abort ( ) ; <nl> allocated [ 0 ] = true ; / / but never used from here <nl> spaces [ 0 ] = 0 ; / / never used <nl> + counts [ 0 ] = 0 ; / / never used <nl> for ( int i = 1 ; i < num_spaces ; i + + ) { <nl> allocated [ i ] = false ; <nl> spaces [ i ] = 0 ; <nl> + counts [ i ] = 0 ; <nl> } <nl> initialized = true ; <nl> } <nl> <nl> static void allocate_space ( int i ) { <nl> assert ( ! 
allocated [ i ] ) ; <nl> + assert ( counts [ i ] = = 0 ) ; <nl> allocated [ i ] = true ; <nl> EM_ASM_ ( { allocateSplitChunk ( $ 0 ) } , i ) ; <nl> spaces [ i ] = create_mspace_with_base ( ( void * ) ( split_memory * i ) , split_memory , 0 ) ; <nl> static void allocate_space ( int i ) { <nl> <nl> static void free_space ( int i ) { <nl> assert ( allocated [ i ] ) ; <nl> + assert ( counts [ i ] = = 0 ) ; <nl> allocated [ i ] = false ; <nl> destroy_mspace ( ( void * ) ( split_memory * i ) ) ; <nl> EM_ASM_ ( { freeSplitChunk ( $ 0 ) } , i ) ; <nl> static void free_space ( int i ) { <nl> static mspace get_space ( void * ptr ) { / / for a valid pointer , so the space must already exist <nl> int index = space_index ( ptr ) ; <nl> assert ( allocated [ index ] ) ; <nl> + assert ( counts [ index ] > 0 ) ; <nl> return spaces [ index ] ; <nl> } <nl> <nl> void * malloc ( size_t size ) { <nl> while ( 1 ) { / / simple round - robin , while keeping to use the same one as long as it keeps succeeding <nl> if ( ! allocated [ next ] ) allocate_space ( next ) ; <nl> void * ret = mspace_malloc ( spaces [ next ] , size ) ; <nl> - if ( ret ) return ret ; <nl> + if ( ret ) { <nl> + counts [ next ] + + ; <nl> + return ret ; <nl> + } <nl> next + + ; <nl> if ( next = = num_spaces ) next = 1 ; <nl> if ( next = = start ) break ; <nl> void * malloc ( size_t size ) { <nl> <nl> void free ( void * ptr ) { <nl> if ( ptr = = 0 ) return ; <nl> + int index = space_index ( ptr ) ; <nl> + assert ( counts [ index ] > 0 ) ; <nl> mspace_free ( get_space ( ptr ) , ptr ) ; <nl> + counts [ index ] - - ; <nl> + if ( counts [ index ] = = 0 ) { <nl> + free_space ( index ) ; <nl> + } <nl> } <nl> <nl> void * realloc ( void * ptr , size_t newsize ) { <nl> void * memalign ( size_t alignment , size_t size ) { <nl> while ( 1 ) { / / simple round - robin , while keeping to use the same one as long as it keeps succeeding <nl> if ( ! 
allocated [ next ] ) allocate_space ( next ) ; <nl> void * ret = mspace_memalign ( spaces [ next ] , alignment , size ) ; <nl> - if ( ret ) return ret ; <nl> + if ( ret ) { <nl> + counts [ next ] + + ; <nl> + return ret ; <nl> + } <nl> next + + ; <nl> if ( next = = num_spaces ) next = 1 ; <nl> if ( next = = start ) break ; <nl>
|
free split memory chunks when possible
|
emscripten-core/emscripten
|
aa427fdaac242e4f978cd83a146c31bb6bcaf639
|
2015-09-17T21:31:39Z
|
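The split_malloc commit above adds a per-space allocation count so that a memory chunk can be unmapped as soon as its last allocation is freed. The bookkeeping can be sketched as a toy model (class and method names are illustrative, not Emscripten's actual API):

```python
# Toy model of the counted split-malloc spaces: each space tracks how many
# live allocations it holds, and its backing storage is released as soon as
# the count drops back to zero.
class SplitHeap:
    def __init__(self, num_spaces):
        self.allocated = [False] * num_spaces  # is backing storage mapped?
        self.counts = [0] * num_spaces         # live allocations per space
        # space 0 is reserved for the stack/statics and never used by malloc

    def malloc(self, space_index):
        if not self.allocated[space_index]:
            assert self.counts[space_index] == 0
            self.allocated[space_index] = True  # allocate_space()
        self.counts[space_index] += 1
        return space_index  # stand-in for a real pointer

    def free(self, space_index):
        assert self.counts[space_index] > 0
        self.counts[space_index] -= 1
        if self.counts[space_index] == 0:
            self.allocated[space_index] = False  # free_space()
```

The asserts mirror the ones the patch adds in `allocate_space`, `free_space`, and `get_space`: a space must be empty when it is mapped or unmapped, and non-empty whenever a pointer into it is being resolved.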
mmm a / tools / internal_ci / linux / grpc_bazel_rbe_incompatible_changes . sh <nl> ppp b / tools / internal_ci / linux / grpc_bazel_rbe_incompatible_changes . sh <nl> <nl> <nl> set - ex <nl> <nl> - # TODO ( jtattermusch ) : use the latest version of bazel <nl> + # Use bazelisk to download the right bazel version <nl> + wget https : / / github . com / bazelbuild / bazelisk / releases / download / v0 . 0 . 7 / bazelisk - linux - amd64 <nl> + chmod u + x bazelisk - linux - amd64 <nl> <nl> - # Use - - all_incompatible_changes to give an early warning about future <nl> - # bazel incompatibilities . <nl> - EXTRA_FLAGS = " - - config = opt - - cache_test_results = no - - all_incompatible_changes " <nl> + # We want bazelisk to run the latest stable version <nl> + export USE_BAZEL_VERSION = latest <nl> + # Use bazelisk instead of our usual / / tools / bazel wrapper <nl> + mv bazelisk - linux - amd64 github / grpc / tools / bazel <nl> + <nl> + EXTRA_FLAGS = " - - config = opt - - cache_test_results = no " <nl> github / grpc / tools / internal_ci / linux / grpc_bazel_on_foundry_base . sh " $ { EXTRA_FLAGS } " <nl>
|
use bazelisk to grab the latest bazel version
|
grpc/grpc
|
bf4c7f45ef9dc63306bbd6ef9461ed0646a0b843
|
2019-06-06T13:03:20Z
|
mmm a / tensorflow / stream_executor / cuda / cuda_dnn . cc <nl> ppp b / tensorflow / stream_executor / cuda / cuda_dnn . cc <nl> port : : StatusOr < dnn : : AlgorithmDesc > GetCudnnConvolutionForwardAlgorithm ( <nl> return * algo_desc ; <nl> } <nl> <nl> + if ( ! absl : : StrContains ( scratch_or . status ( ) . ToString ( ) , <nl> + " CUDNN_STATUS_ALLOC_FAILED " ) ) { <nl> + return port : : Status ( port : : error : : INVALID_ARGUMENT , <nl> + absl : : StrCat ( " cuDNN returned unexpected error : " , <nl> + scratch_or . status ( ) . ToString ( ) ) ) ; <nl> + } <nl> + <nl> algo_desc = algorithm_config . algorithm_no_scratch ( ) ; <nl> <nl> / / Failed to allocate workspace for the first algorithm , fall back to the <nl>
|
Return real cuDNN error on allocation failure, do not assume all errors are OOMs
|
tensorflow/tensorflow
|
b2d5c920abe36697efbdc765b0499e73b72fdbd8
|
2019-07-30T15:03:58Z
|
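The cuDNN change above narrows the fallback path: only an allocation failure (`CUDNN_STATUS_ALLOC_FAILED`) justifies retrying with the no-scratch algorithm, while any other error is surfaced to the caller instead of being silently swallowed. The control flow can be sketched like this (function names and the tuple-returning callables are illustrative):

```python
# Sketch of the fallback policy the patch introduces. Each callable returns
# (ok, status_string), standing in for running a cuDNN convolution algorithm.
def pick_forward_algorithm(run_with_scratch, run_without_scratch):
    ok, status = run_with_scratch()
    if ok:
        return "with_scratch"
    # Only a workspace allocation failure warrants the no-scratch fallback;
    # anything else is a real error and must be reported, not masked.
    if "CUDNN_STATUS_ALLOC_FAILED" not in status:
        raise RuntimeError(f"cuDNN returned unexpected error: {status}")
    ok, status = run_without_scratch()
    if not ok:
        raise RuntimeError(f"fallback also failed: {status}")
    return "no_scratch"
```

Before the patch, every failure took the fallback branch, so genuine configuration errors (bad parameters, unsupported shapes) were misreported as out-of-memory conditions.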
mmm a / hphp / hack / src / decl / direct_decl_smart_constructors . rs <nl> ppp b / hphp / hack / src / decl / direct_decl_smart_constructors . rs <nl> pub struct PropertyDecl { <nl> modifiers : Node_ , <nl> hint : Node_ , <nl> name : Node_ , <nl> + is_initialized : bool , <nl> } <nl> <nl> # [ derive ( Clone , Debug ) ] <nl> pub enum Node_ { <nl> Question ( Pos ) , / / This needs a pos since it shows up in nullable types . <nl> This ( Pos ) , / / This needs a pos since it shows up in Taccess . <nl> ColonColon ( Pos ) , / / This needs a pos since it shows up in Taccess . <nl> + Initializer , / / We don ' t care what we initialize values to , only <nl> + / / whether or not they ' re initialized , so we make a fake <nl> + / / node for them . <nl> <nl> / / Box the insides of the vector so we don ' t need to reallocate them when <nl> / / we pull them out of the TypeConstraint variant . <nl> impl < ' a > FlattenSmartConstructors < ' a , State < ' a > > for DirectDeclSmartConstructors <nl> arg0 <nl> } <nl> <nl> - fn make_simple_initializer ( & mut self , _arg0 : Self : : R , arg1 : Self : : R ) - > Self : : R { <nl> - arg1 <nl> + fn make_simple_initializer ( & mut self , _arg0 : Self : : R , _arg1 : Self : : R ) - > Self : : R { <nl> + Ok ( Node_ : : Initializer ) <nl> } <nl> <nl> fn make_list_item ( & mut self , item : Self : : R , sep : Self : : R ) - > Self : : R { <nl> impl < ' a > FlattenSmartConstructors < ' a , State < ' a > > for DirectDeclSmartConstructors <nl> _arg9 : Self : : R , <nl> body : Self : : R , <nl> ) - > Self : : R { <nl> - fn read_member_attributes < ' a > ( attributes : impl Iterator < Item = & ' a Node_ > ) - > bool { <nl> + # [ derive ( Clone , Copy , Debug , Eq , Hash , PartialEq ) ] <nl> + enum MemberAttribute { <nl> + LateInit , <nl> + Const , <nl> + } <nl> + fn read_member_attributes < ' a > ( <nl> + attributes : impl Iterator < Item = & ' a Node_ > , <nl> + ) - > HashSet < MemberAttribute > { <nl> + let mut ret = HashSet : : new ( ) ; <nl> 
for attribute in attributes { <nl> if let Node_ : : Name ( name , _ ) = attribute { <nl> - if name = = " __LateInit " { <nl> - return true ; <nl> + match name . as_ref ( ) { <nl> + " __LateInit " = > { <nl> + ret . insert ( MemberAttribute : : LateInit ) ; <nl> + } <nl> + " __Const " = > { <nl> + ret . insert ( MemberAttribute : : Const ) ; <nl> + } <nl> + _ = > ( ) , <nl> } <nl> } <nl> } <nl> - return false ; <nl> + ret <nl> } <nl> <nl> fn read_member_modifiers < ' a > ( <nl> impl < ' a > FlattenSmartConstructors < ' a , State < ' a > > for DirectDeclSmartConstructors <nl> _ = > { } <nl> } , <nl> Node_ : : Property ( decl ) = > { <nl> - let is_late_init = read_member_attributes ( decl . attrs . iter ( ) ) ; <nl> + let attributes = read_member_attributes ( decl . attrs . iter ( ) ) ; <nl> let ( is_static , visibility ) = <nl> read_member_modifiers ( decl . modifiers . iter ( ) ) ; <nl> let ( name , pos ) = get_name ( " " , & decl . name ) ? ; <nl> impl < ' a > FlattenSmartConstructors < ' a , State < ' a > > for DirectDeclSmartConstructors <nl> strip_dollar_prefix ( Cow : : Owned ( name ) ) . into_owned ( ) <nl> } ; <nl> let ty = self . node_to_ty ( & decl . hint , & type_variables ) ? ; <nl> + let is_const = attributes . contains ( & MemberAttribute : : Const ) ; <nl> let prop = shallow_decl_defs : : ShallowProp { <nl> - const_ : false , <nl> + const_ : is_const , <nl> xhp_attr : None , <nl> - lateinit : is_late_init , <nl> + lateinit : attributes . contains ( & MemberAttribute : : LateInit ) , <nl> lsb : false , <nl> name : Id ( pos , name ) , <nl> - needs_init : true , <nl> + needs_init : ! decl . 
is_initialized , <nl> type_ : Some ( ty ) , <nl> abstract_ : false , <nl> visibility , <nl> impl < ' a > FlattenSmartConstructors < ' a , State < ' a > > for DirectDeclSmartConstructors <nl> attrs : Self : : R , <nl> modifiers : Self : : R , <nl> hint : Self : : R , <nl> - name : Self : : R , <nl> + declarator : Self : : R , <nl> _arg4 : Self : : R , <nl> ) - > Self : : R { <nl> - / / Sometimes the name is a single element list . <nl> - let name = match name ? { <nl> + / / Sometimes the declarator is a single element list . <nl> + let declarator = match declarator ? { <nl> Node_ : : List ( nodes ) = > nodes <nl> . first ( ) <nl> - . ok_or ( " Expected a name , but was given an empty list . " . to_owned ( ) ) ? <nl> + . ok_or ( " Expected a declarator , but was given an empty list . " . to_owned ( ) ) ? <nl> . clone ( ) , <nl> - name = > name , <nl> + declarator = > declarator , <nl> } ; <nl> - Ok ( Node_ : : Property ( Box : : new ( PropertyDecl { <nl> - attrs : attrs ? , <nl> - modifiers : modifiers ? , <nl> - hint : hint ? , <nl> - name , <nl> - } ) ) ) <nl> + match declarator { <nl> + Node_ : : ListItem ( innards ) = > { <nl> + let ( name , initializer ) = * innards ; <nl> + let name = match name { <nl> + Node_ : : List ( nodes ) = > nodes <nl> + . first ( ) <nl> + . ok_or ( " Expected a name , but was given an empty list . " . to_owned ( ) ) ? <nl> + . clone ( ) , <nl> + name = > name , <nl> + } ; <nl> + Ok ( Node_ : : Property ( Box : : new ( PropertyDecl { <nl> + attrs : attrs ? , <nl> + modifiers : modifiers ? , <nl> + hint : hint ? , <nl> + name , <nl> + is_initialized : match initializer { <nl> + Node_ : : Initializer = > true , <nl> + _ = > false , <nl> + } , <nl> + } ) ) ) <nl> + } <nl> + n = > Err ( format ! ( " Expected a ListItem , but was { : ? 
} " , n ) ) , <nl> + } <nl> } <nl> <nl> - fn make_property_declarator ( & mut self , name : Self : : R , _arg1 : Self : : R ) - > Self : : R { <nl> - name <nl> + fn make_property_declarator ( & mut self , name : Self : : R , initializer : Self : : R ) - > Self : : R { <nl> + Ok ( Node_ : : ListItem ( Box : : new ( ( name ? , initializer ? ) ) ) ) <nl> } <nl> <nl> fn make_methodish_declaration ( <nl> new file mode 100644 <nl> index 00000000000 . . 81257fee472 <nl> mmm / dev / null <nl> ppp b / hphp / hack / test / decl / classes_const_attribute . php <nl> <nl> + < ? hh / / strict <nl> + / / Copyright 2004 - present Facebook . All Rights Reserved . <nl> + <nl> + abstract class A { <nl> + < < __Const > > public arraykey $ p ; <nl> + } <nl> + <nl> + class B extends A { <nl> + < < __Const > > public int $ p = 1 ; <nl> + } <nl> new file mode 100644 <nl> index 00000000000 . . 7db4e04e8d1 <nl> mmm / dev / null <nl> ppp b / hphp / hack / test / decl / classes_const_attribute . php . exp <nl> <nl> + Parsed decls : <nl> + <nl> + { Direct_decl_parser . classes = <nl> + { " A " - > <nl> + { Shallow_decl_defs . sc_mode = Mstrict ; sc_final = false ; <nl> + sc_is_xhp = false ; sc_has_xhp_keyword = false ; sc_kind = Cabstract ; <nl> + sc_name = ( [ 4 : 16 - 17 ] , " \ \ A " ) ; sc_tparams = [ ] ; <nl> + sc_where_constraints = [ ] ; sc_extends = [ ] ; sc_uses = [ ] ; <nl> + sc_method_redeclarations = [ ] ; sc_xhp_attr_uses = [ ] ; <nl> + sc_req_extends = [ ] ; sc_req_implements = [ ] ; sc_implements = [ ] ; <nl> + sc_consts = [ ] ; sc_typeconsts = [ ] ; sc_pu_enums = [ ] ; <nl> + sc_props = <nl> + [ { Shallow_decl_defs . sp_const = true ; sp_xhp_attr = None ; <nl> + sp_lateinit = false ; sp_lsb = false ; sp_name = ( [ 5 : 31 - 33 ] , " p " ) ; <nl> + sp_needs_init = true ; <nl> + sp_type = <nl> + ( Some ( Rhint ( root | classes_const_attribute . 
php line 5 , characters 22 - 29 ) , <nl> + ( Tprim Tarraykey ) ) ) ; <nl> + sp_abstract = false ; sp_visibility = Public ; sp_fixme_codes = { } } <nl> + ] ; <nl> + sc_sprops = [ ] ; sc_constructor = None ; sc_static_methods = [ ] ; <nl> + sc_methods = [ ] ; sc_user_attributes = [ ] ; sc_enum_type = None ; <nl> + sc_decl_errors = < opaque > } ; <nl> + " B " - > <nl> + { Shallow_decl_defs . sc_mode = Mstrict ; sc_final = false ; <nl> + sc_is_xhp = false ; sc_has_xhp_keyword = false ; sc_kind = Cnormal ; <nl> + sc_name = ( [ 8 : 7 - 8 ] , " \ \ B " ) ; sc_tparams = [ ] ; sc_where_constraints = [ ] ; <nl> + sc_extends = <nl> + [ ( Rhint ( root | classes_const_attribute . php line 8 , characters 17 - 17 ) , <nl> + ( Tapply ( ( [ 8 : 17 - 18 ] , " \ \ A " ) , [ ] ) ) ) ] ; <nl> + sc_uses = [ ] ; sc_method_redeclarations = [ ] ; sc_xhp_attr_uses = [ ] ; <nl> + sc_req_extends = [ ] ; sc_req_implements = [ ] ; sc_implements = [ ] ; <nl> + sc_consts = [ ] ; sc_typeconsts = [ ] ; sc_pu_enums = [ ] ; <nl> + sc_props = <nl> + [ { Shallow_decl_defs . sp_const = true ; sp_xhp_attr = None ; <nl> + sp_lateinit = false ; sp_lsb = false ; sp_name = ( [ 9 : 26 - 28 ] , " p " ) ; <nl> + sp_needs_init = false ; <nl> + sp_type = <nl> + ( Some ( Rhint ( root | classes_const_attribute . php line 9 , characters 22 - 24 ) , <nl> + ( Tprim Tint ) ) ) ; <nl> + sp_abstract = false ; sp_visibility = Public ; sp_fixme_codes = { } } <nl> + ] ; <nl> + sc_sprops = [ ] ; sc_constructor = None ; sc_static_methods = [ ] ; <nl> + sc_methods = [ ] ; sc_user_attributes = [ ] ; sc_enum_type = None ; <nl> + sc_decl_errors = < opaque > } } ; <nl> + funs = { } ; typedefs = { } ; consts = { } } <nl> + <nl> + They matched ! <nl> new file mode 100644 <nl> index 00000000000 . . bd7c220af7e <nl> mmm / dev / null <nl> ppp b / hphp / hack / test / decl / classes_const_attribute . php . typecheck . exp <nl> <nl> + File " classes_const_attribute . 
php " , line 4 , characters 16 - 16 : <nl> + Cannot use experimental feature : The __Const attribute is not supported . ( Other [ 0000 ] ) <nl> + File " classes_const_attribute . php " , line 8 , characters 7 - 7 : <nl> + Cannot use experimental feature : The __Const attribute is not supported . ( Other [ 0000 ] ) <nl>
|
Add support for the __Const attribute.
|
facebook/hhvm
|
6200e621eb26dbee5dc6d5ec6ca574998881ae8b
|
2020-03-30T18:50:05Z
|
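The Rust change above generalizes `read_member_attributes` from a single `__LateInit` boolean into a set of recognized attributes, so the property decl can test for `__LateInit` and `__Const` independently. The same pattern in a short Python sketch (the attribute table is illustrative):

```python
# Collect every recognized member attribute into a set, instead of returning
# a boolean for one hard-coded attribute. Unknown attributes are ignored.
KNOWN_ATTRIBUTES = {"__LateInit": "LateInit", "__Const": "Const"}

def read_member_attributes(attribute_names):
    found = set()
    for name in attribute_names:
        tag = KNOWN_ATTRIBUTES.get(name)
        if tag is not None:
            found.add(tag)
    return found
```

Returning a set keeps the scan a single pass while letting call sites add new attributes (as this commit does for `__Const`) without another traversal or another boolean out-parameter.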
mmm a / hphp / submodules / folly <nl> ppp b / hphp / submodules / folly <nl> @ @ - 1 + 1 @ @ <nl> - Subproject commit bac28da15f8144f5d43773fb892a701e3f058ad4 <nl> + Subproject commit 5e8ce83b6808c98f311f088a8d1a085ffec1a672 <nl> mmm a / hphp / third_party / folly / CMakeLists . txt <nl> ppp b / hphp / third_party / folly / CMakeLists . txt <nl> list ( REMOVE_ITEM files <nl> $ { FOLLY_DIR } / experimental / io / HugePageUtil . cpp <nl> $ { FOLLY_DIR } / experimental / io / IOBufQueue . cpp <nl> $ { FOLLY_DIR } / experimental / symbolizer / StackTrace . cpp <nl> + $ { FOLLY_DIR } / experimental / symbolizer / ElfCache . cpp <nl> $ { FOLLY_DIR } / experimental / symbolizer / ElfUtil . cpp <nl> $ { FOLLY_DIR } / experimental / symbolizer / SignalHandler . cpp <nl> ) <nl>
|
Update folly again to fix OSX compilation
|
facebook/hhvm
|
9e475d61464f9fef0ec508283ef8e9b7280ecbd9
|
2014-02-07T18:32:08Z
|
mmm a / caffe2 / python / predictor / predictor_exporter . py <nl> ppp b / caffe2 / python / predictor / predictor_exporter . py <nl> def get_predictor_exporter_helper ( submodelNetName ) : <nl> class PredictorExportMeta ( collections . namedtuple ( <nl> ' PredictorExportMeta ' , <nl> ' predict_net , parameters , inputs , outputs , shapes , name , \ <nl> - extra_init_net , net_type , num_workers , trainer_prefix ' ) ) : <nl> + extra_init_net , global_init_net , net_type , num_workers , trainer_prefix ' ) ) : <nl> " " " <nl> Metadata to be used for serializaing a net . <nl> <nl> class PredictorExportMeta ( collections . namedtuple ( <nl> num_workers specifies for net type ' dag ' how many threads should run ops <nl> <nl> trainer_prefix specifies the type of trainer . <nl> + <nl> + extra_init_net gets appended to pred_init_net , useful for thread local init <nl> + <nl> + global_init_net gets appended to global_init_net , useful for global init <nl> + on a shared across threads parameter workspace <nl> + ( in a case of multi - threaded inference ) <nl> + <nl> " " " <nl> def __new__ ( <nl> cls , <nl> def __new__ ( <nl> shapes = None , <nl> name = " " , <nl> extra_init_net = None , <nl> + global_init_net = None , <nl> net_type = None , <nl> num_workers = None , <nl> trainer_prefix = None , <nl> def __new__ ( <nl> assert isinstance ( predict_net , ( caffe2_pb2 . NetDef , caffe2_pb2 . PlanDef ) ) <nl> return super ( PredictorExportMeta , cls ) . __new__ ( <nl> cls , predict_net , parameters , inputs , outputs , shapes , name , <nl> - extra_init_net , net_type , num_workers , trainer_prefix ) <nl> + extra_init_net , global_init_net , net_type , num_workers , trainer_prefix ) <nl> <nl> def inputs_name ( self ) : <nl> return utils . get_comp_name ( predictor_constants . INPUTS_BLOB_TYPE , <nl> def _global_init_net ( predictor_export_meta ) : <nl> net . Proto ( ) . external_input . extend ( [ predictor_constants . PREDICTOR_DBREADER ] ) <nl> net . Proto ( ) . 
external_output . extend ( predictor_export_meta . parameters ) <nl> <nl> + if predictor_export_meta . global_init_net : <nl> + net . AppendNet ( predictor_export_meta . global_init_net ) <nl> + <nl> # Add the model_id in the predict_net to the global_init_net <nl> utils . AddModelIdArg ( predictor_export_meta , net . Proto ( ) ) <nl> return net . Proto ( ) <nl> mmm a / caffe2 / python / predictor / predictor_exporter_test . py <nl> ppp b / caffe2 / python / predictor / predictor_exporter_test . py <nl> def test_meta_net_def_net_runs ( self ) : <nl> <nl> extra_init_net = core . Net ( ' extra_init ' ) <nl> extra_init_net . ConstantFill ( ' data ' , ' data ' , value = 1 . 0 ) <nl> + <nl> + global_init_net = core . Net ( ' global_init ' ) <nl> + global_init_net . ConstantFill ( <nl> + [ ] , <nl> + ' global_init_blob ' , <nl> + value = 1 . 0 , <nl> + shape = [ 1 , 5 ] , <nl> + dtype = core . DataType . FLOAT <nl> + ) <nl> pem = pe . PredictorExportMeta ( <nl> predict_net = self . predictor_export_meta . predict_net , <nl> parameters = self . predictor_export_meta . parameters , <nl> def test_meta_net_def_net_runs ( self ) : <nl> outputs = self . predictor_export_meta . outputs , <nl> shapes = self . predictor_export_meta . shapes , <nl> extra_init_net = extra_init_net , <nl> + global_init_net = global_init_net , <nl> net_type = ' dag ' , <nl> ) <nl> <nl> def test_meta_net_def_net_runs ( self ) : <nl> np . testing . assert_array_equal ( <nl> workspace . FetchBlob ( " y " ) , np . zeros ( shape = ( 1 , 10 ) ) ) <nl> <nl> + self . assertTrue ( " global_init_blob " not in workspace . Blobs ( ) ) <nl> # Load parameters from DB <nl> global_init_net = pred_utils . GetNet ( meta_net_def , <nl> pc . GLOBAL_INIT_NET_TYPE ) <nl> workspace . RunNetOnce ( global_init_net ) <nl> <nl> + # make sure the extra global_init_net is running <nl> + self . assertTrue ( workspace . HasBlob ( ' global_init_blob ' ) ) <nl> + np . testing . assert_array_equal ( <nl> + workspace . 
FetchBlob ( " global_init_blob " ) , np . ones ( shape = ( 1 , 5 ) ) ) <nl> + <nl> # Run the net with a reshaped input and verify we are <nl> # producing good numbers ( with our custom implementation ) <nl> workspace . FeedBlob ( " data " , np . random . randn ( 2 , 5 ) . astype ( np . float32 ) ) <nl>
|
add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta()
|
pytorch/pytorch
|
f3cf6ed789683db0f2e1a5da9926b82f935d9d3b
|
2019-03-22T07:19:59Z
|
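The Caffe2 diff above grows the `PredictorExportMeta` namedtuple by one optional field (`global_init_net`) while keeping old call sites working, by giving the new field a default in `__new__` and appending it to the global init net only when present. A reduced sketch of that pattern, with plain strings standing in for NetDef protos:

```python
from collections import namedtuple

# Extend a namedtuple-based metadata record with an optional field without
# breaking existing positional/keyword callers: default it in __new__.
_FIELDS = "predict_net parameters extra_init_net global_init_net"

class PredictorExportMeta(namedtuple("PredictorExportMeta", _FIELDS)):
    def __new__(cls, predict_net, parameters=None,
                extra_init_net=None, global_init_net=None):
        return super(PredictorExportMeta, cls).__new__(
            cls, predict_net, parameters, extra_init_net, global_init_net)

def build_global_init(meta, base_ops):
    """Append the optional global_init_net ops to the base global init net,
    mirroring the `if predictor_export_meta.global_init_net:` branch above."""
    ops = list(base_ops)
    if meta.global_init_net:
        ops.extend(meta.global_init_net)
    return ops
```

The distinction the commit message draws is worth keeping in mind: `extra_init_net` runs per thread-local workspace, while `global_init_net` runs once against the parameter workspace shared across threads.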
mmm a / src / Processors / Sources / RemoteSource . cpp <nl> ppp b / src / Processors / Sources / RemoteSource . cpp <nl> Chunk RemoteExtremesSource : : generate ( ) <nl> if ( auto block = query_executor - > getExtremes ( ) ) <nl> { <nl> UInt64 num_rows = block . rows ( ) ; <nl> - std : : cerr < < " Got extrees " < < num_rows < < " rows " < < std : : endl ; <nl> return Chunk ( block . getColumns ( ) , num_rows ) ; <nl> } <nl> <nl>
|
Remove debug output.
|
ClickHouse/ClickHouse
|
31ad5d7e5d224ba1df8e33f6a14a93e1100e70b8
|
2020-06-04T20:42:03Z
|
mmm a / tensorflow / contrib / tensor_forest / python / tensor_forest . py <nl> ppp b / tensorflow / contrib / tensor_forest / python / tensor_forest . py <nl> def training_graph ( self , input_data , input_labels , data_spec = None , <nl> epoch = ( [ 0 ] if epoch is None else epoch ) , <nl> * * tree_kwargs ) ) <nl> <nl> - return control_flow_ops . group ( * tree_graphs ) <nl> + return control_flow_ops . group ( * tree_graphs , name = ' train ' ) <nl> <nl> def inference_graph ( self , input_data , data_spec = None ) : <nl> " " " Constructs a TF graph for evaluating a random forest . <nl> def inference_graph ( self , input_data , data_spec = None ) : <nl> data_spec ) ) <nl> with ops . device ( self . device_assigner . get_device ( 0 ) ) : <nl> all_predict = array_ops . pack ( probabilities ) <nl> - return math_ops . reduce_sum ( all_predict , 0 ) / self . params . num_trees <nl> + return math_ops . div ( <nl> + math_ops . reduce_sum ( all_predict , 0 ) , self . params . num_trees , <nl> + name = ' probabilities ' ) <nl> <nl> def average_size ( self ) : <nl> " " " Constructs a TF graph for evaluating the average size of a forest . <nl>
|
Give names to training and inference ops in tensor_forest, which helps with integration with frameworks that identify them by name.
|
tensorflow/tensorflow
|
4ddfb7812e7a1b22b16baa1ab6f1c319e1289bc5
|
2016-07-14T02:18:10Z
|
mmm a / xbmc / utils / GUIInfoManager . cpp <nl> ppp b / xbmc / utils / GUIInfoManager . cpp <nl> CStdString CGUIInfoManager : : GetLabel ( int info , int contextWindow ) <nl> if ( info > = SLIDE_INFO_START & & info < = SLIDE_INFO_END ) <nl> return GetPictureLabel ( info ) ; <nl> <nl> - if ( info > = LISTITEM_PROPERTY_START + MUSICPLAYER_PROPERTY_OFFSET ) <nl> + if ( info > = LISTITEM_PROPERTY_START + MUSICPLAYER_PROPERTY_OFFSET & & <nl> + info - LISTITEM_PROPERTY_START + MUSICPLAYER_PROPERTY_OFFSET < ( int ) m_listitemProperties . size ( ) ) <nl> { / / grab the property <nl> if ( ! m_currentFile ) <nl> return " " ; <nl>
|
fixed: possible crash in certain skins. Thanks vdrfan
|
xbmc/xbmc
|
e74a5b93900c8779b9a88c105b3571d9ef5b33fc
|
2010-01-12T02:46:51Z
|
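The XBMC crash fix above adds a bounds check before indexing the per-item property list: a property id derived from `info` could previously exceed `m_listitemProperties.size()`. The intended guard in miniature (the constants here are hypothetical values, not XBMC's real ones):

```python
# Validate the computed index against the container size before indexing,
# instead of trusting that every info id in the property range is in bounds.
LISTITEM_PROPERTY_START = 35000    # hypothetical constant
MUSICPLAYER_PROPERTY_OFFSET = 800  # hypothetical constant

def get_listitem_property(info, properties):
    base = LISTITEM_PROPERTY_START + MUSICPLAYER_PROPERTY_OFFSET
    index = info - base
    if 0 <= index < len(properties):  # guard prevents out-of-range access
        return properties[index]
    return ""                         # out of range: empty label, no crash
```

A skin that requests a property id past the end of the list now gets an empty label rather than reading past the vector, which is the failure mode the commit message describes.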
mmm a / tensorflow / compiler / mlir / lite / tf_tfl_passes . cc <nl> ppp b / tensorflow / compiler / mlir / lite / tf_tfl_passes . cc <nl> void AddTFToTFLConversionPasses ( const mlir : : TFL : : PassConfig & pass_config , <nl> / / control flow ops ( IfOp , CaseOp ) . <nl> pass_manager - > addPass ( mlir : : createInlinerPass ( ) ) ; <nl> <nl> + / / This pass removes the asset file dependencies in hash table use cases . <nl> + pass_manager - > addPass ( mlir : : TF : : CreateInitTextFileToImportPass ( ) ) ; <nl> + <nl> pass_manager - > addPass ( <nl> mlir : : TFL : : CreateLegalizeTFPass ( pass_config . runtime_verification ) ) ; <nl> pass_manager - > addPass ( mlir : : TFL : : CreateOptimizePass ( ) ) ; <nl> mmm a / tensorflow / compiler / mlir / tensorflow / BUILD <nl> ppp b / tensorflow / compiler / mlir / tensorflow / BUILD <nl> cc_library ( <nl> " transforms / generated_optimize . inc " , <nl> " transforms / gpu_fusion . cc " , <nl> " transforms / graph_pruning . cc " , <nl> + " transforms / init_text_file_to_import . cc " , <nl> " transforms / launch_to_device_attribute . cc " , <nl> " transforms / layout_optimization . cc " , <nl> " transforms / mark_ops_for_outside_compilation . cc " , <nl> cc_library ( <nl> cc_library ( <nl> name = " tensorflow_test_passes " , <nl> srcs = [ <nl> + " transforms / init_text_file_to_import_test_pass . cc " , <nl> " transforms / lift_variables_test_pass . cc " , <nl> " transforms / lower_tf_pass . cc " , <nl> ] , <nl> cc_library ( <nl> " / / tensorflow / core / platform : errors " , <nl> " / / tensorflow / core / platform : status " , <nl> " / / tensorflow / core / platform : threadpool_options " , <nl> + " @ llvm - project / / llvm : Support " , <nl> " @ llvm - project / / mlir : IR " , <nl> " @ llvm - project / / mlir : Pass " , <nl> + " @ llvm - project / / mlir : StandardOps " , <nl> " @ llvm - project / / mlir : Support " , <nl> ] , <nl> alwayslink = 1 , <nl> new file mode 100644 <nl> index 0000000000000 . . 
6a9581b0e4406 <nl> mmm / dev / null <nl> ppp b / tensorflow / compiler / mlir / tensorflow / tests / init_text_file_to_import . mlir <nl> <nl> + / / RUN : tf - opt - tf - init - text - file - to - import - test % s | FileCheck % s <nl> + <nl> + / / Tests that the tf . InitializeTableFromTextFileV2 op are inlined . <nl> + <nl> + func @ init_all_tables ( ) { <nl> + % cst = constant dense < " % FILE_PLACEHOLDER " > : tensor < ! tf . string > <nl> + % 0 = " tf . HashTableV2 " ( ) { container = " " , device = " " , key_dtype = ! tf . string , shared_name = " hash_table_ / tmp / vocab . txt_ - 2_ - 1 " , use_node_name_sharing = false , value_dtype = i64 } : ( ) - > tensor < ! tf . resource > <nl> + " tf . InitializeTableFromTextFileV2 " ( % 0 , % cst ) { delimiter = " " , device = " " , key_index = - 2 : i64 , value_index = - 1 : i64 , vocab_size = - 1 : i64 } : ( tensor < ! tf . resource > , tensor < ! tf . string > ) - > ( ) <nl> + return <nl> + / / CHECK : [ [ CST : % . * ] ] = constant dense < [ " apple " , " banana " , " grape " ] > : tensor < 3x ! tf . string > <nl> + / / CHECK : [ [ CST_0 : % . * ] ] = constant dense < [ 0 , 1 , 2 ] > : tensor < 3xi64 > <nl> + / / CHECK : [ [ VAL : % . * ] ] = " tf . HashTableV2 " ( ) <nl> + / / CHECK : " tf . LookupTableImportV2 " ( [ [ VAL ] ] , [ [ CST ] ] , [ [ CST_0 ] ] ) <nl> + } <nl> new file mode 100644 <nl> index 0000000000000 . . 05afe1cc27fc6 <nl> mmm / dev / null <nl> ppp b / tensorflow / compiler / mlir / tensorflow / tests / init_text_file_to_import_invalid . mlir <nl> <nl> + / / RUN : tf - opt - split - input - file - verify - diagnostics - tf - init - text - file - to - import % s | FileCheck % s <nl> + <nl> + / / Tests that the given vocabulary file does not exist . <nl> + <nl> + func @ init_all_tables ( ) { <nl> + % cst = constant dense < " vocab_file_does_not_exist . txt " > : tensor < ! tf . string > <nl> + % 0 = " tf . HashTableV2 " ( ) { container = " " , device = " " , key_dtype = ! tf . 
string , shared_name = " hash_table_ / tmp / vocab . txt_ - 2_ - 1 " , use_node_name_sharing = false , value_dtype = i64 } : ( ) - > tensor < ! tf . resource > <nl> + / / expected - error @ + 1 { { ' tf . InitializeTableFromTextFileV2 ' op failed to open vocabulary file ( vocab_file_does_not_exist . txt ) : cannot open input file ' vocab_file_does_not_exist . txt ' : No such file or directory } } <nl> + " tf . InitializeTableFromTextFileV2 " ( % 0 , % cst ) { delimiter = " " , device = " " , key_index = - 2 : i64 , value_index = - 1 : i64 , vocab_size = - 1 : i64 } : ( tensor < ! tf . resource > , tensor < ! tf . string > ) - > ( ) <nl> + return <nl> + } <nl> + <nl> + / / mmm - - <nl> + <nl> + / / Tests that the tf . InitializeTableFromTextFileV2 op is not converted since <nl> + / / unsupported key_index , - 1 . <nl> + <nl> + func @ init_all_tables ( ) { <nl> + % cst = constant dense < " vocab_file_does_not_exist . txt " > : tensor < ! tf . string > <nl> + % 0 = " tf . HashTableV2 " ( ) { container = " " , device = " " , key_dtype = ! tf . string , shared_name = " hash_table_ / tmp / vocab . txt_ - 2_ - 1 " , use_node_name_sharing = false , value_dtype = i64 } : ( ) - > tensor < ! tf . resource > <nl> + " tf . InitializeTableFromTextFileV2 " ( % 0 , % cst ) { delimiter = " " , device = " " , key_index = - 1 : i64 , value_index = - 1 : i64 , vocab_size = - 1 : i64 } : ( tensor < ! tf . resource > , tensor < ! tf . string > ) - > ( ) <nl> + return <nl> + / / CHECK : [ [ VAL : % . * ] ] = " tf . HashTableV2 " ( ) <nl> + / / CHECK : tf . InitializeTableFromTextFileV2 " <nl> + } <nl> + <nl> + / / mmm - - <nl> + <nl> + / / Tests that the tf . InitializeTableFromTextFileV2 op is not converted since <nl> + / / unsupported value_index , 0 . <nl> + <nl> + func @ init_all_tables ( ) { <nl> + % cst = constant dense < " vocab_file_does_not_exist . txt " > : tensor < ! tf . string > <nl> + % 0 = " tf . HashTableV2 " ( ) { container = " " , device = " " , key_dtype = ! tf . 
string , shared_name = " hash_table_ / tmp / vocab . txt_ - 2_ - 1 " , use_node_name_sharing = false , value_dtype = i64 } : ( ) - > tensor < ! tf . resource > <nl> + " tf . InitializeTableFromTextFileV2 " ( % 0 , % cst ) { delimiter = " " , device = " " , key_index = - 2 : i64 , value_index = 0 : i64 , vocab_size = - 1 : i64 } : ( tensor < ! tf . resource > , tensor < ! tf . string > ) - > ( ) <nl> + return <nl> + / / CHECK : [ [ VAL : % . * ] ] = " tf . HashTableV2 " ( ) <nl> + / / CHECK : tf . InitializeTableFromTextFileV2 " <nl> + } <nl> + <nl> + / / mmm - - <nl> + <nl> + / / Tests that the tf . InitializeTableFromTextFileV2 op is not converted since <nl> + / / unsupported vocab_size , 1 . <nl> + <nl> + func @ init_all_tables ( ) { <nl> + % cst = constant dense < " vocab_file_does_not_exist . txt " > : tensor < ! tf . string > <nl> + % 0 = " tf . HashTableV2 " ( ) { container = " " , device = " " , key_dtype = ! tf . string , shared_name = " hash_table_ / tmp / vocab . txt_ - 2_ - 1 " , use_node_name_sharing = false , value_dtype = i64 } : ( ) - > tensor < ! tf . resource > <nl> + " tf . InitializeTableFromTextFileV2 " ( % 0 , % cst ) { delimiter = " " , device = " " , key_index = - 2 : i64 , value_index = - 1 : i64 , vocab_size = 1 : i64 } : ( tensor < ! tf . resource > , tensor < ! tf . string > ) - > ( ) <nl> + return <nl> + / / CHECK : [ [ VAL : % . * ] ] = " tf . HashTableV2 " ( ) <nl> + / / CHECK : tf . InitializeTableFromTextFileV2 " <nl> + } <nl> new file mode 100644 <nl> index 0000000000000 . . 615ca26012e41 <nl> mmm / dev / null <nl> ppp b / tensorflow / compiler / mlir / tensorflow / transforms / init_text_file_to_import . cc <nl> <nl> + / * Copyright 2020 The TensorFlow Authors . All Rights Reserved . <nl> + <nl> + Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> + you may not use this file except in compliance with the License . <nl> + You may obtain a copy of the License at <nl> + <nl> + http : / / www . apache . 
org / licenses / LICENSE - 2 . 0 <nl> + <nl> + Unless required by applicable law or agreed to in writing , software <nl> + distributed under the License is distributed on an " AS IS " BASIS , <nl> + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> + See the License for the specific language governing permissions and <nl> + limitations under the License . <nl> + = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = * / <nl> + <nl> + # include < numeric > <nl> + <nl> + # include " llvm / Support / Casting . h " <nl> + # include " mlir / Dialect / StandardOps / IR / Ops . h " / / from @ llvm - project <nl> + # include " mlir / IR / Attributes . h " / / from @ llvm - project <nl> + # include " mlir / IR / OperationSupport . h " / / from @ llvm - project <nl> + # include " mlir / IR / PatternMatch . h " / / from @ llvm - project <nl> + # include " mlir / Pass / Pass . h " / / from @ llvm - project <nl> + # include " mlir / Support / FileUtilities . h " / / from @ llvm - project <nl> + # include " tensorflow / compiler / mlir / tensorflow / ir / tf_ops . h " <nl> + <nl> + namespace mlir { <nl> + namespace TF { <nl> + namespace { <nl> + <nl> + static constexpr int kTextFileIndex_WholeLine = - 2 ; <nl> + static constexpr int kTextFileIndex_LineNumber = - 1 ; <nl> + <nl> + / / InitTextFileToImportPass converts InitializeTableFromTextFileV2Op to the <nl> + / / corresponding LookupTableImportV2Op if possible . 
<nl> + class InitTextFileToImportPass <nl> + : public mlir : : PassWrapper < InitTextFileToImportPass , FunctionPass > { <nl> + public : <nl> + explicit InitTextFileToImportPass ( ) { } <nl> + <nl> + private : <nl> + void runOnFunction ( ) override ; <nl> + } ; <nl> + <nl> + class ConvertInitializeTableFromTextFileV2 <nl> + : public OpRewritePattern < InitializeTableFromTextFileV2Op > { <nl> + public : <nl> + using OpRewritePattern : : OpRewritePattern ; <nl> + <nl> + LogicalResult matchAndRewrite ( InitializeTableFromTextFileV2Op op , <nl> + PatternRewriter & rewriter ) const override { <nl> + / / Now , this pattern matching only supports the following case , which is <nl> + / / commonly used among inference use cases : <nl> + / / <nl> + / / tf . lookup . TextFileInitializer ( <nl> + / / " test . txt " , tf . string , tf . lookup . TextFileIndex . WHOLE_LINE , <nl> + / / tf . int64 , tf . lookup . TextFileIndex . LINE_NUMBER , delimiter = " " ) <nl> + / / <nl> + / / In the above case , the delimiter will be not used since the key is just a <nl> + / / whole line and value is a line number . <nl> + if ( op . key_index ( ) ! = kTextFileIndex_WholeLine | | <nl> + op . value_index ( ) ! = kTextFileIndex_LineNumber | | <nl> + op . vocab_size ( ) ! = - 1 ) { <nl> + return failure ( ) ; <nl> + } <nl> + <nl> + / / Try to find filename from constant op . <nl> + DenseStringElementsAttr filename_attr ; <nl> + if ( ! matchPattern ( op . filename ( ) . getDefiningOp ( ) , <nl> + m_Constant ( & filename_attr ) ) ) { <nl> + return failure ( ) ; <nl> + } <nl> + StringRef filename = filename_attr . getRawStringData ( ) [ 0 ] ; <nl> + <nl> + / / Read the content of the file . <nl> + std : : string error_message ; <nl> + auto file = openInputFile ( filename , & error_message ) ; <nl> + if ( ! file ) { <nl> + return op . emitOpError ( " failed to open vocabulary file " ) <nl> + < < " ( " < < filename . 
str ( ) < < " ) : " < < error_message ; <nl> + } <nl> + <nl> + / / Splits into lines . <nl> + SmallVector < StringRef , 8 > lines ; <nl> + file - > getBuffer ( ) . split ( lines , " \ n " , - 1 , false ) ; <nl> + <nl> + / / Map each line to line number , starting from zero . <nl> + SmallVector < int64_t , 8 > line_nums ; <nl> + line_nums . resize ( lines . size ( ) ) ; <nl> + std : : iota ( line_nums . begin ( ) , line_nums . end ( ) , 0 ) ; <nl> + <nl> + / / Create constant ops for keys an values . <nl> + Value key_constant_tensor = rewriter . create < ConstantOp > ( <nl> + op . getLoc ( ) , <nl> + DenseStringElementsAttr : : get ( <nl> + RankedTensorType : : get ( static_cast < int64_t > ( lines . size ( ) ) , <nl> + StringType : : get ( rewriter . getContext ( ) ) ) , <nl> + lines ) ) ; <nl> + <nl> + Value value_constant_tensor = rewriter . create < ConstantOp > ( <nl> + op . getLoc ( ) , rewriter . getI64TensorAttr ( line_nums ) ) ; <nl> + <nl> + / / Replace the given op with LookupTableImportV2Op . <nl> + rewriter . create < LookupTableImportV2Op > ( op . getLoc ( ) , op . table_handle ( ) , <nl> + key_constant_tensor , <nl> + value_constant_tensor ) ; <nl> + rewriter . eraseOp ( op ) ; <nl> + return success ( ) ; <nl> + } <nl> + } ; <nl> + <nl> + void InitTextFileToImportPass : : runOnFunction ( ) { <nl> + OwningRewritePatternList patterns ; <nl> + MLIRContext * context = & getContext ( ) ; <nl> + FuncOp func = getFunction ( ) ; <nl> + <nl> + patterns . insert < ConvertInitializeTableFromTextFileV2 > ( context ) ; <nl> + applyPatternsAndFoldGreedily ( func , patterns ) ; <nl> + } <nl> + <nl> + } / / namespace <nl> + <nl> + / / Replace InitializeTableFromTextFileV2Ops with LookupTableImportV2Ops . 
<nl> + std : : unique_ptr < OperationPass < FuncOp > > CreateInitTextFileToImportPass ( ) { <nl> + return std : : make_unique < InitTextFileToImportPass > ( ) ; <nl> + } <nl> + <nl> + static PassRegistration < InitTextFileToImportPass > pass ( <nl> + " tf - init - text - file - to - import " , <nl> + " convert InitializeTableFromTextFileV2 ops to LookupTableImportV2Op to " <nl> + " remove the dependency on asset files " ) ; <nl> + <nl> + } / / namespace TF <nl> + } / / namespace mlir <nl> new file mode 100644 <nl> index 0000000000000 . . 96a04fa6eeb43 <nl> mmm / dev / null <nl> ppp b / tensorflow / compiler / mlir / tensorflow / transforms / init_text_file_to_import_test_pass . cc <nl> <nl> + / * Copyright 2020 The TensorFlow Authors . All Rights Reserved . <nl> + <nl> + Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> + you may not use this file except in compliance with the License . <nl> + You may obtain a copy of the License at <nl> + <nl> + http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> + <nl> + Unless required by applicable law or agreed to in writing , software <nl> + distributed under the License is distributed on an " AS IS " BASIS , <nl> + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> + See the License for the specific language governing permissions and <nl> + limitations under the License . <nl> + = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = * / <nl> + <nl> + # include " llvm / Support / Casting . h " <nl> + # include " llvm / Support / FileSystem . h " <nl> + # include " llvm / Support / ToolOutputFile . h " <nl> + # include " mlir / Dialect / StandardOps / IR / Ops . h " / / from @ llvm - project <nl> + # include " mlir / IR / Attributes . h " / / from @ llvm - project <nl> + # include " mlir / IR / OperationSupport . 
h " / / from @ llvm - project <nl> + # include " mlir / IR / PatternMatch . h " / / from @ llvm - project <nl> + # include " mlir / Pass / Pass . h " / / from @ llvm - project <nl> + # include " mlir / Pass / PassManager . h " / / from @ llvm - project <nl> + # include " mlir / Support / FileUtilities . h " / / from @ llvm - project <nl> + # include " tensorflow / compiler / mlir / tensorflow / ir / tf_ops . h " <nl> + # include " tensorflow / compiler / mlir / tensorflow / transforms / passes . h " <nl> + <nl> + namespace mlir { <nl> + namespace TF { <nl> + namespace { <nl> + <nl> + / / InitTextFileToImportTestPass generates a temporary file and run the <nl> + / / InitTextFileToImportPass for testing purpose . <nl> + class InitTextFileToImportTestPass <nl> + : public mlir : : PassWrapper < InitTextFileToImportTestPass , <nl> + OperationPass < ModuleOp > > { <nl> + public : <nl> + explicit InitTextFileToImportTestPass ( ) { } <nl> + <nl> + private : <nl> + void runOnOperation ( ) override ; <nl> + } ; <nl> + <nl> + void InitTextFileToImportTestPass : : runOnOperation ( ) { <nl> + ModuleOp module = getOperation ( ) ; <nl> + <nl> + / / Create a temporary vocab file . <nl> + int fd ; <nl> + SmallString < 256 > filename ; <nl> + std : : error_code error_code = <nl> + llvm : : sys : : fs : : createTemporaryFile ( " text " , " vocab " , fd , filename ) ; <nl> + if ( error_code ) return signalPassFailure ( ) ; <nl> + <nl> + llvm : : ToolOutputFile temp_file ( filename , fd ) ; <nl> + const char * dictionary_in_lines = <nl> + " apple \ n " <nl> + " banana \ n " <nl> + " grape " ; <nl> + temp_file . os ( ) < < dictionary_in_lines ; <nl> + temp_file . os ( ) . flush ( ) ; <nl> + <nl> + / / Replace filename constant ops to use the temporary file . <nl> + MLIRContext * context = & getContext ( ) ; <nl> + <nl> + for ( FuncOp func : module . getOps < FuncOp > ( ) ) { <nl> + llvm : : SmallVector < ConstantOp , 4 > constant_ops ( func . 
getOps < ConstantOp > ( ) ) ; <nl> + for ( auto op : constant_ops ) { <nl> + ShapedType shaped_type = <nl> + RankedTensorType : : get ( { 1 } , StringType : : get ( context ) ) ; <nl> + <nl> + DenseStringElementsAttr attr ; <nl> + if ( ! matchPattern ( op . getOperation ( ) , m_Constant ( & attr ) ) ) { <nl> + continue ; <nl> + } <nl> + <nl> + ArrayRef < StringRef > values = attr . getRawStringData ( ) ; <nl> + if ( values . size ( ) ! = 1 | | values [ 0 ] ! = " % FILE_PLACEHOLDER " ) { <nl> + continue ; <nl> + } <nl> + <nl> + op . valueAttr ( DenseStringElementsAttr : : get ( shaped_type , { filename } ) ) ; <nl> + } <nl> + } <nl> + <nl> + / / Run the lowering pass . <nl> + PassManager pm ( context ) ; <nl> + pm . addPass ( CreateInitTextFileToImportPass ( ) ) ; <nl> + if ( failed ( pm . run ( module ) ) ) return signalPassFailure ( ) ; <nl> + } <nl> + <nl> + } / / namespace <nl> + <nl> + static PassRegistration < InitTextFileToImportTestPass > pass ( <nl> + " tf - init - text - file - to - import - test " , <nl> + " generate a temporary file and invoke InitTextFileToImportPass " ) ; <nl> + <nl> + } / / namespace TF <nl> + } / / namespace mlir <nl> mmm a / tensorflow / compiler / mlir / tensorflow / transforms / passes . h <nl> ppp b / tensorflow / compiler / mlir / tensorflow / transforms / passes . h <nl> std : : unique_ptr < OperationPass < FuncOp > > CreateFusedKernelMatcherPass ( ) ; <nl> <nl> / / Creates function pass to select device index / fold tf . DeviceIndex . <nl> std : : unique_ptr < OperationPass < FuncOp > > CreateDeviceIndexSelectorPass ( ) ; <nl> + <nl> + / / Creates function pass to replace InitializeTableFromTextFileV2Ops with <nl> + / / LookupTableImportV2Op ops . <nl> + std : : unique_ptr < OperationPass < FuncOp > > CreateInitTextFileToImportPass ( ) ; <nl> } / / namespace TF <nl> <nl> namespace tf_executor { <nl>
|
Add a new pass that converts tf . InitializeTableFromTextFileV2 op to
|
tensorflow/tensorflow
|
8aac615a87988ba920f13fb9352d67d4a3558276
|
2020-07-23T23:26:05Z
|
mmm a / Documentation / Books / AQL / book . json <nl> ppp b / Documentation / Books / AQL / book . json <nl> <nl> " author " : " ArangoDB GmbH " , <nl> " description " : " Official AQL manual for ArangoDB - the native multi - model NoSQL database " , <nl> " language " : " en " , <nl> - " plugins " : [ " - search " , " - lunr " , " - sharing " , " toggle - chapters " , " addcssjs " , " anchorjs " , " sitemap - general " , " ga " , " callouts @ git + https : / / github . com / Simran - B / gitbook - plugin - callouts . git " , " edit - link " , " localized - footer " ] , <nl> + " plugins " : [ <nl> + " - search " , <nl> + " - lunr " , <nl> + " - sharing " , <nl> + " toggle - chapters " , <nl> + " addcssjs " , <nl> + " anchorjs " , <nl> + " sitemap - general @ git + https : / / github . com / Simran - B / gitbook - plugin - sitemap - general . git " , <nl> + " ga " , <nl> + " callouts @ git + https : / / github . com / Simran - B / gitbook - plugin - callouts . git " , <nl> + " edit - link " , <nl> + " localized - footer " <nl> + ] , <nl> " pdf " : { <nl> " fontSize " : 12 , <nl> " toc " : true , <nl> <nl> " css " : [ " styles / header . css " ] <nl> } , <nl> " sitemap - general " : { <nl> - " prefix " : " https : / / docs . arangodb . com / devel / AQL / " <nl> + " prefix " : " https : / / docs . arangodb . com / devel / AQL / " , <nl> + " changefreq " : " daily " , <nl> + " priority " : 0 . 3 <nl> } , <nl> " ga " : { <nl> " token " : " UA - 81053435 - 2 " <nl> mmm a / Documentation / Books / HTTP / book . json <nl> ppp b / Documentation / Books / HTTP / book . 
json <nl> <nl> " author " : " ArangoDB GmbH " , <nl> " description " : " Official HTTP API manual for ArangoDB - the native multi - model NoSQL database " , <nl> " language " : " en " , <nl> - " plugins " : [ " - search " , " - lunr " , " - sharing " , " toggle - chapters " , " addcssjs " , " anchorjs " , " sitemap - general " , " ga " , " edit - link " , " localized - footer " ] , <nl> + " plugins " : [ <nl> + " - search " , <nl> + " - lunr " , <nl> + " - sharing " , <nl> + " toggle - chapters " , <nl> + " addcssjs " , <nl> + " anchorjs " , <nl> + " sitemap - general @ git + https : / / github . com / Simran - B / gitbook - plugin - sitemap - general . git " , <nl> + " ga " , <nl> + " callouts @ git + https : / / github . com / Simran - B / gitbook - plugin - callouts . git " , <nl> + " edit - link " , <nl> + " localized - footer " <nl> + ] , <nl> " pdf " : { <nl> " fontSize " : 12 , <nl> " toc " : true , <nl> <nl> " css " : [ " styles / header . css " ] <nl> } , <nl> " sitemap - general " : { <nl> - " prefix " : " https : / / docs . arangodb . com / devel / HTTP / " <nl> + " prefix " : " https : / / docs . arangodb . com / devel / HTTP / " , <nl> + " changefreq " : " daily " , <nl> + " priority " : 0 . 3 <nl> } , <nl> " ga " : { <nl> " token " : " UA - 81053435 - 2 " <nl> mmm a / Documentation / Books / Manual / book . json <nl> ppp b / Documentation / Books / Manual / book . json <nl> <nl> " author " : " ArangoDB GmbH " , <nl> " description " : " Official manual for ArangoDB - the native multi - model NoSQL database " , <nl> " language " : " en " , <nl> - " plugins " : [ " - search " , " - lunr " , " - sharing " , " toggle - chapters " , " addcssjs " , " anchorjs " , " sitemap - general " , " ga " , " callouts @ git + https : / / github . com / Simran - B / gitbook - plugin - callouts . 
git " , " edit - link " , " localized - footer " ] , <nl> + " plugins " : [ <nl> + " - search " , <nl> + " - lunr " , <nl> + " - sharing " , <nl> + " toggle - chapters " , <nl> + " addcssjs " , <nl> + " anchorjs " , <nl> + " sitemap - general @ git + https : / / github . com / Simran - B / gitbook - plugin - sitemap - general . git " , <nl> + " ga " , <nl> + " callouts @ git + https : / / github . com / Simran - B / gitbook - plugin - callouts . git " , <nl> + " edit - link " , <nl> + " localized - footer " <nl> + ] , <nl> " pdf " : { <nl> " fontSize " : 12 , <nl> " toc " : true , <nl> <nl> " css " : [ " styles / header . css " ] <nl> } , <nl> " sitemap - general " : { <nl> - " prefix " : " https : / / docs . arangodb . com / devel / Manual / " <nl> + " prefix " : " https : / / docs . arangodb . com / devel / Manual / " , <nl> + " changefreq " : " daily " , <nl> + " priority " : 0 . 3 <nl> } , <nl> " ga " : { <nl> " token " : " UA - 81053435 - 2 " <nl>
|
Docs : Reformat gitbook plugin lists , add priority to sitemaps
|
arangodb/arangodb
|
bbbd4c365ce5ffa4ba932e8f7c1fffe51b646f7e
|
2017-06-28T12:35:00Z
|
mmm a / arangod / Cluster / AgencyComm . cpp <nl> ppp b / arangod / Cluster / AgencyComm . cpp <nl> std : : string AgencyComm : : getVersion ( ) { <nl> return " " ; <nl> } <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief update a version number in the agency <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + bool AgencyComm : : increaseVersion ( std : : string const & key ) { <nl> + / / fetch existing version number <nl> + AgencyCommResult result = getValues ( key , false ) ; <nl> + <nl> + if ( ! result . successful ( ) ) { <nl> + if ( result . httpCode ( ) ! = ( int ) triagens : : rest : : HttpResponse : : NOT_FOUND ) { <nl> + return false ; <nl> + } <nl> + <nl> + / / no version key found , now set it <nl> + TRI_json_t * json = triagens : : basics : : JsonHelper : : uint64String ( TRI_UNKNOWN_MEM_ZONE , 1 ) ; <nl> + <nl> + if ( json = = 0 ) { <nl> + return false ; <nl> + } <nl> + <nl> + result = casValue ( key , <nl> + json , <nl> + false , <nl> + 0 . 0 , <nl> + 0 . 0 ) ; <nl> + <nl> + TRI_FreeJson ( TRI_UNKNOWN_MEM_ZONE , json ) ; <nl> + <nl> + return result . successful ( ) ; <nl> + } <nl> + <nl> + / / found a version <nl> + result . parse ( " " , false ) ; <nl> + std : : map < std : : string , AgencyCommResultEntry > : : const_iterator it = result . _values . begin ( ) ; <nl> + <nl> + if ( it = = result . _values . end ( ) ) { <nl> + return false ; <nl> + } <nl> + <nl> + uint64_t version = triagens : : basics : : JsonHelper : : stringUInt64 ( ( * it ) . second . 
_json ) ; <nl> + <nl> + / / version key found , now update it <nl> + TRI_json_t * oldJson = triagens : : basics : : JsonHelper : : uint64String ( TRI_UNKNOWN_MEM_ZONE , version ) ; <nl> + <nl> + if ( oldJson = = 0 ) { <nl> + return false ; <nl> + } <nl> + <nl> + TRI_json_t * newJson = triagens : : basics : : JsonHelper : : uint64String ( TRI_UNKNOWN_MEM_ZONE , version + 1 ) ; <nl> + <nl> + if ( newJson = = 0 ) { <nl> + TRI_FreeJson ( TRI_UNKNOWN_MEM_ZONE , oldJson ) ; <nl> + return false ; <nl> + } <nl> + <nl> + result = casValue ( key , <nl> + oldJson , <nl> + newJson , <nl> + 0 . 0 , <nl> + 0 . 0 ) ; <nl> + <nl> + TRI_FreeJson ( TRI_UNKNOWN_MEM_ZONE , newJson ) ; <nl> + TRI_FreeJson ( TRI_UNKNOWN_MEM_ZONE , oldJson ) ; <nl> + <nl> + return result . successful ( ) ; <nl> + } <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief creates a directory in the backend <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / arangod / Cluster / AgencyComm . h <nl> ppp b / arangod / Cluster / AgencyComm . 
h <nl> namespace triagens { <nl> <nl> std : : string getVersion ( ) ; <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief update a version number in the agency <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + bool increaseVersion ( std : : string const & ) ; <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief creates a directory in the backend <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / arangod / Cluster / HeartbeatThread . cpp <nl> ppp b / arangod / Cluster / HeartbeatThread . cpp <nl> void HeartbeatThread : : run ( ) { <nl> uint64_t lastCommandIndex = getLastCommandIndex ( ) ; <nl> bool valueFound = false ; <nl> <nl> + const bool isCoordinator = ServerState : : instance ( ) - > isCoordinator ( ) ; <nl> <nl> while ( ! _stop ) { <nl> LOG_TRACE ( " sending heartbeat to agency " ) ; <nl> void HeartbeatThread : : run ( ) { <nl> break ; <nl> } <nl> <nl> - { <nl> + if ( ! isCoordinator ) { <nl> / / get the current version of the Plan <nl> AgencyCommResult result = _agency . getValues ( " Plan / Version " , false ) ; <nl> <nl> mmm a / arangod / Cluster / v8 - cluster . cpp <nl> ppp b / arangod / Cluster / v8 - cluster . cpp <nl> static v8 : : Handle < v8 : : Value > JS_IsEnabledAgency ( v8 : : Arguments const & argv ) { <nl> return scope . Close ( v8 : : Boolean : : New ( ! prefix . 
empty ( ) ) ) ; <nl> } <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief increase the version number <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + static v8 : : Handle < v8 : : Value > JS_IncreaseVersionAgency ( v8 : : Arguments const & argv ) { <nl> + v8 : : HandleScope scope ; <nl> + <nl> + if ( argv . Length ( ) ! = 1 ) { <nl> + TRI_V8_EXCEPTION_USAGE ( scope , " increaseVersion ( < key > ) " ) ; <nl> + } <nl> + <nl> + const std : : string key = TRI_ObjectToString ( argv [ 0 ] ) ; <nl> + <nl> + AgencyComm comm ; <nl> + if ( ! comm . increaseVersion ( key ) ) { <nl> + TRI_V8_EXCEPTION_MESSAGE ( scope , TRI_ERROR_INTERNAL , " unable to increase version " ) ; <nl> + } <nl> + <nl> + return scope . Close ( v8 : : Boolean : : New ( true ) ) ; <nl> + } <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief gets a value from the agency <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> void TRI_InitV8Cluster ( v8 : : Handle < v8 : : Context > context ) { <nl> TRI_AddMethodVocbase ( rt , " createDirectory " , JS_CreateDirectoryAgency ) ; <nl> TRI_AddMethodVocbase ( rt , " get " , JS_GetAgency ) ; <nl> TRI_AddMethodVocbase ( rt , " isEnabled " , JS_IsEnabledAgency ) ; <nl> + TRI_AddMethodVocbase ( rt , " increaseVersion " , JS_IncreaseVersionAgency ) ; <nl> TRI_AddMethodVocbase ( rt , " list " , JS_ListAgency ) ; <nl> TRI_AddMethodVocbase ( rt , " lockRead " , JS_LockReadAgency ) ; <nl> TRI_AddMethodVocbase ( rt , " lockWrite " , JS_LockWriteAgency 
) ; <nl> mmm a / arangom <nl> ppp b / arangom <nl> if [ " $ 1 " = = " init " ] ; then <nl> set Target / Lock " \ " UNLOCKED \ " " <nl> set Target / DBServers <nl> set Target / Coordinators <nl> - set Target / Databases / @ Usystem " { } " <nl> + set Target / Databases / @ Usystem " { \ " name \ " : \ " _system \ " } " <nl> set Target / Collections / @ Usystem <nl> <nl> set Plan / Version " \ " 1 \ " " <nl> set Plan / Lock " \ " UNLOCKED \ " " <nl> set Plan / DBServers <nl> set Plan / Coordinators <nl> - set Plan / Databases / @ Usystem " { } " <nl> + set Plan / Databases / @ Usystem " { \ " name \ " : \ " _system \ " } " <nl> set Plan / Collections / @ Usystem <nl> <nl> set Current / Version " \ " 1 \ " " <nl> mmm a / js / server / modules / org / arangodb / cluster . js <nl> ppp b / js / server / modules / org / arangodb / cluster . js <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> var console = require ( " console " ) ; <nl> + var db = require ( " org / arangodb " ) . db ; <nl> + <nl> + function getByPrefix ( values , prefix ) { <nl> + var result = { } ; <nl> + var a ; <nl> + var n = prefix . length ; <nl> + <nl> + for ( a in values ) { <nl> + if ( values . hasOwnProperty ( a ) ) { <nl> + if ( a . substr ( 0 , n ) = = = prefix ) { <nl> + result [ a . substr ( n ) ] = values [ a ] ; <nl> + } <nl> + } <nl> + } <nl> + return result ; <nl> + } <nl> + <nl> + function writeLocked ( lockInfo , cb , args ) { <nl> + var timeout = lockInfo . timeout ; <nl> + if ( timeout = = = undefined ) { <nl> + timeout = 5 ; <nl> + } <nl> + <nl> + var ttl = lockInfo . ttl ; <nl> + if ( ttl = = = undefined ) { <nl> + ttl = 10 ; <nl> + } <nl> + <nl> + ArangoAgency . lockWrite ( lockInfo . part , ttl , timeout ) ; <nl> + <nl> + try { <nl> + cb . apply ( this , args ) ; <nl> + ArangoAgency . increaseVersion ( lockInfo . 
part + " / Version " ) ; <nl> + ArangoAgency . unlockWrite ( lockInfo . part , timeout ) ; <nl> + } <nl> + catch ( err ) { <nl> + ArangoAgency . unlockWrite ( lockInfo . part , timeout ) ; <nl> + throw err ; <nl> + } <nl> + } <nl> + <nl> + function handleDatabaseChanges ( plan , current ) { <nl> + var plannedDatabases = getByPrefix ( plan , " Plan / Databases / " ) ; <nl> + / / var currentDatabases = getByPrefix ( current , " Current / Databases / " ) ; <nl> + var localDatabases = db . _listDatabases ( ) ; <nl> + <nl> + var createDatabase = function ( payload ) { <nl> + ArangoAgency . set ( " Current / Databases / " + payload . name + " / " + ArangoServerState . id ( ) , payload ) ; <nl> + } ; <nl> + <nl> + var dropDatabase = function ( payload ) { <nl> + try { <nl> + ArangoAgency . remove ( " Current / Databases / " + payload . name + " / " + ArangoServerState . id ( ) ) ; <nl> + } <nl> + catch ( err ) { <nl> + } <nl> + } ; <nl> + <nl> + var name ; <nl> + <nl> + / / check which databases need to be created locally <nl> + for ( name in plannedDatabases ) { <nl> + if ( plannedDatabases . hasOwnProperty ( name ) ) { <nl> + if ( localDatabases . indexOf ( name ) = = = - 1 ) { <nl> + / / must create database <nl> + <nl> + var payload = plannedDatabases [ name ] ; <nl> + / / TODO : handle options and user information <nl> + <nl> + <nl> + console . info ( " creating local database ' % s ' " , payload . name ) ; <nl> + db . _createDatabase ( payload . name ) ; <nl> + <nl> + writeLocked ( { part : " Current " } , createDatabase , [ payload ] ) ; <nl> + } <nl> + } <nl> + } <nl> + <nl> + / / check which databases need to be deleted locally <nl> + localDatabases . forEach ( function ( name ) { <nl> + if ( ! plannedDatabases . hasOwnProperty ( name ) ) { <nl> + / / must drop database <nl> + <nl> + console . info ( " dropping local database ' % s ' " , name ) ; <nl> + db . 
_dropDatabase ( name ) ; <nl> + <nl> + writeLocked ( { part : " Current " } , dropDatabase , [ { name : name } ] ) ; <nl> + } <nl> + } ) ; <nl> + } <nl> + <nl> + function handleChanges ( plan , current ) { <nl> + handleDatabaseChanges ( plan , current ) ; <nl> + } <nl> + <nl> <nl> var isCluster = function ( ) { <nl> return ( typeof ArangoServerState ! = = " undefined " & & <nl> var handlePlanChange = function ( ) { <nl> } <nl> <nl> try { <nl> - console . info ( " % s " , " plan change handling successful " ) ; <nl> + var plan = ArangoAgency . get ( " Plan " , true ) ; <nl> + var current = ArangoAgency . get ( " Current " , true ) ; <nl> + <nl> + handleChanges ( plan , current ) ; <nl> + console . info ( " plan change handling successful " ) ; <nl> } <nl> catch ( err ) { <nl> - console . error ( " % s " , " plan change handling failed " ) ; <nl> + console . error ( " plan change handling failed " ) ; <nl> } <nl> } ; <nl> <nl>
|
handle plan change for create / drop database
|
arangodb/arangodb
|
8010963a94dd72ecaed26196bf3c7bf1c5edd85a
|
2014-01-16T16:03:10Z
|
mmm a / xbmc / WinSystemX11 . cpp <nl> ppp b / xbmc / WinSystemX11 . cpp <nl> void CWinSystemX11 : : UpdateResolutions ( ) <nl> res . strId = mode . id ; <nl> res . iSubtitles = ( int ) ( 0 . 95 * mode . h ) ; <nl> res . fRefreshRate = mode . hz ; <nl> + res . bFullScreen = true ; <nl> <nl> if ( ( float ) mode . w / ( float ) mode . h > = 1 . 59 ) <nl> res . dwFlags = D3DPRESENTFLAG_WIDESCREEN ; <nl>
|
fixed : resolutions coming out of xrandr are always fullscreen
|
xbmc/xbmc
|
802d40792c15e86b48945720a24cb80af4d5ce13
|
2010-09-19T17:55:56Z
|
mmm a / CMakeLists . txt <nl> ppp b / CMakeLists . txt <nl> cmake_minimum_required ( VERSION 3 . 4 . 3 ) <nl> list ( APPEND CMAKE_MODULE_PATH <nl> " $ { CMAKE_CURRENT_SOURCE_DIR } / cmake / modules " ) <nl> <nl> + # Make a job pool for things that can ' t yet be distributed <nl> + cmake_host_system_information ( <nl> + RESULT localhost_logical_cores QUERY NUMBER_OF_LOGICAL_CORES ) <nl> + set_property ( GLOBAL PROPERTY JOB_POOLS local_jobs = $ { localhost_logical_cores } ) <nl> + # Put linking in that category <nl> + set_property ( GLOBAL PROPERTY JOB_POOL_LINK local_jobs ) <nl> + <nl> # First include general CMake utilities . <nl> include ( SwiftUtils ) <nl> <nl>
|
Use job pools to limit link parallelism
|
apple/swift
|
46497364b67feb80c53b618df4787a8aa6d28c20
|
2016-11-21T22:37:54Z
|
mmm a / src / filterParserThread . h <nl> ppp b / src / filterParserThread . h <nl> class FilterParserThread : public QThread { <nl> int getlineInStream ( QDataStream & stream , string & name , char delim ) { <nl> char c ; <nl> int total_read = 0 ; <nl> + int read ; <nl> do { <nl> - int read = stream . readRawData ( & c , 1 ) ; <nl> + read = stream . readRawData ( & c , 1 ) ; <nl> total_read + = read ; <nl> if ( read > 0 ) { <nl> if ( c ! = delim ) { <nl>
|
- Another patch by Attila to fix mingw32 compilation
|
qbittorrent/qBittorrent
|
5133931302d1e52a29fab19dee15b439cc259af8
|
2008-12-12T09:20:47Z
|
mmm a / README . rst <nl> ppp b / README . rst <nl> <nl> format <nl> = = = = = = <nl> <nl> - . . highlight : : c + + <nl> - <nl> Format is an open - source C + + library that provides <nl> string formatting functionality similar to ` str . format <nl> < http : / / docs . python . org / 2 / library / stdtypes . html # str . format > ` __ <nl> Features <nl> Examples <nl> mmmmmm - - <nl> <nl> - This prints " Hello , world ! " to stdout : <nl> + This prints ` ` Hello , world ! ` ` to stdout : <nl> <nl> . . code - block : : c + + <nl> <nl> fmt : : Print ( " Hello , { 0 } ! " ) < < " world " ; <nl> <nl> - Arguments are accessed by position and arguments ' indices can be repeated : : <nl> + Arguments are accessed by position and arguments ' indices can be repeated : <nl> + <nl> + . . code - block : : c + + <nl> <nl> std : : string s = str ( fmt : : Format ( " { 0 } { 1 } { 0 } " ) < < " abra " < < " cad " ) ; <nl> / / s = = " abracadabra " <nl> <nl> An object of any user - defined type for which there is an overloaded <nl> - ` ` std : : ostream ` ` insertion operator ( ` ` operator < < ` ` ) can be formatted : : <nl> + ` ` std : : ostream ` ` insertion operator ( ` ` operator < < ` ` ) can be formatted : <nl> + <nl> + . . code - block : : c + + <nl> <nl> class Date { <nl> int year_ , month_ , day_ ; <nl> An object of any user - defined type for which there is an overloaded <nl> <nl> You can use ` ` fmt : : ActiveFormatter ` ` to create your own functions <nl> similar to ` ` fmt : : Format ` ` and ` ` fmt : : Print ` ` with an arbitrary action <nl> - performed when formatting is complete : : <nl> + performed when formatting is complete : <nl> + <nl> + . . code - block : : c + + <nl> <nl> struct PrintError { <nl> void operator ( ) ( const fmt : : Formatter & f ) const { <nl> platforms . 
<nl> IOStreams <nl> ~ ~ ~ ~ ~ ~ ~ ~ ~ <nl> <nl> - The main issue with IOStreams is best illustrated with an example : : <nl> + The main issue with IOStreams is best illustrated with an example : <nl> + <nl> + . . code - block : : c + + <nl> <nl> std : : cout < < std : : setprecision ( 2 ) < < std : : fixed < < 1 . 23456 < < " \ n " ; <nl> <nl> - which is a lot of typing compared to printf : : <nl> + which is a lot of typing compared to printf : <nl> + <nl> + . . code - block : : c + + <nl> <nl> printf ( " % . 2f \ n " , 1 . 23456 ) ; <nl> <nl>
|
Use syntax highlighting for all examples .
|
fmtlib/fmt
|
a8a536bde544efe546480de909b2068d0fa7a81b
|
2012-12-13T16:12:09Z
|
mmm a / configure . ac <nl> ppp b / configure . ac <nl> else <nl> fi <nl> fi <nl> <nl> + save_CXXFLAGS = " $ { CXXFLAGS } " <nl> + CXXFLAGS = " $ { CXXFLAGS } $ { CRYPTO_CFLAGS } $ { SSL_CFLAGS } " <nl> + AC_CHECK_DECLS ( [ EVP_MD_CTX_new ] , , , [ AC_INCLUDES_DEFAULT <nl> + # include < openssl / x509_vfy . h > <nl> + ] ) <nl> + CXXFLAGS = " $ { save_CXXFLAGS } " <nl> + <nl> dnl univalue check <nl> <nl> need_bundled_univalue = yes <nl> mmm a / src / qt / paymentrequestplus . cpp <nl> ppp b / src / qt / paymentrequestplus . cpp <nl> bool PaymentRequestPlus : : getMerchant ( X509_STORE * certStore , QString & merchant ) c <nl> std : : string data_to_verify ; / / Everything but the signature <nl> rcopy . SerializeToString ( & data_to_verify ) ; <nl> <nl> - # if OPENSSL_VERSION_NUMBER > = 0x10100000L <nl> + # if HAVE_DECL_EVP_MD_CTX_NEW <nl> EVP_MD_CTX * ctx = EVP_MD_CTX_new ( ) ; <nl> if ( ! ctx ) throw SSLVerifyError ( " Error allocating OpenSSL context . " ) ; <nl> # else <nl> bool PaymentRequestPlus : : getMerchant ( X509_STORE * certStore , QString & merchant ) c <nl> ! EVP_VerifyFinal ( ctx , ( const unsigned char * ) paymentRequest . signature ( ) . data ( ) , ( unsigned int ) paymentRequest . signature ( ) . size ( ) , pubkey ) ) { <nl> throw SSLVerifyError ( " Bad signature , invalid payment request . " ) ; <nl> } <nl> - # if OPENSSL_VERSION_NUMBER > = 0x10100000L <nl> + # if HAVE_DECL_EVP_MD_CTX_NEW <nl> EVP_MD_CTX_free ( ctx ) ; <nl> # endif <nl> <nl>
|
Merge : Let autoconf detect presence of EVP_MD_CTX_new
|
bitcoin/bitcoin
|
70145064153aae87245c35e009282e5198e3f60f
|
2017-01-05T09:28:47Z
|
mmm a / atom / browser / native_window_mac . mm <nl> ppp b / atom / browser / native_window_mac . mm <nl> <nl> <nl> } / / namespace <nl> <nl> - / / This view encapsuate Quit , Minimize and Full Screen buttons . It is being <nl> - / / used for frameless window . <nl> - @ interface SemaphoreView : NSView <nl> + / / This view encapsuates the Quit , Minimize and Full Screen buttons . It is being <nl> + / / used for frameless windows . <nl> + @ interface SemaphoreView : NSView { <nl> + @ private <nl> + BOOL mouse_inside_ ; <nl> + } <nl> @ end <nl> <nl> @ implementation SemaphoreView <nl> <nl> - BOOL mouseInside = NO ; <nl> - <nl> - ( id ) initWithFrame : ( NSRect ) frame { <nl> self = [ super initWithFrame : frame ] ; <nl> <nl> if ( self ) { <nl> + mouse_inside_ = NO ; <nl> + <nl> / / create buttons <nl> NSButton * closeButton = [ NSWindow standardWindowButton : NSWindowCloseButton <nl> forStyleMask : NSTitledWindowMask ] ; <nl> - ( id ) initWithFrame : ( NSRect ) frame { <nl> } <nl> <nl> - ( BOOL ) _mouseInGroup : ( NSButton * ) button { <nl> - return mouseInside ; <nl> + return mouse_inside_ ; <nl> } <nl> <nl> - ( void ) updateTrackingAreas { <nl> - ( void ) updateTrackingAreas { <nl> <nl> - ( void ) mouseEntered : ( NSEvent * ) event { <nl> [ super mouseEntered : event ] ; <nl> - mouseInside = YES ; <nl> + mouse_inside_ = YES ; <nl> [ self setNeedsDisplayForButtons ] ; <nl> } <nl> <nl> - ( void ) mouseExited : ( NSEvent * ) event { <nl> [ super mouseExited : event ] ; <nl> - mouseInside = NO ; <nl> + mouse_inside_ = NO ; <nl> [ self setNeedsDisplayForButtons ] ; <nl> } <nl> <nl> - ( void ) setNeedsDisplayForButtons { <nl> for ( NSView * subview in self . subviews ) { <nl> - [ subview setHidden : ! mouseInside ] ; <nl> + [ subview setHidden : ! mouse_inside_ ] ; <nl> [ subview setNeedsDisplay : YES ] ; <nl> } <nl> } <nl>
|
Declare mouse inside variable in interface
|
electron/electron
|
37ba1b0a6be5d8607b4b0693f0319a4654d4f8ee
|
2017-06-05T19:55:39Z
|
mmm a / folly / Makefile . am <nl> ppp b / folly / Makefile . am <nl> nobase_follyinclude_HEADERS = \ <nl> Format . h \ <nl> Format - inl . h \ <nl> futures / Barrier . h \ <nl> - futures / Deprecated . h \ <nl> futures / ThreadedExecutor . h \ <nl> futures / DrivableExecutor . h \ <nl> futures / Future - pre . h \ <nl> mmm a / folly / Portability . h <nl> ppp b / folly / Portability . h <nl> struct MaxAlign { char c ; } __attribute__ ( ( __aligned__ ) ) ; <nl> # elif defined ( _MSC_VER ) <nl> # define FOLLY_DEPRECATED ( msg ) __declspec ( deprecated ( msg ) ) <nl> # else <nl> - # define FOLLY_DEPRECATED <nl> + # define FOLLY_DEPRECATED ( msg ) <nl> # endif <nl> <nl> / / noreturn <nl> deleted file mode 100644 <nl> index 1e02f8cac5d . . 00000000000 <nl> mmm a / folly / futures / Deprecated . h <nl> ppp / dev / null <nl> <nl> - / * <nl> - * Copyright 2015 Facebook , Inc . <nl> - * <nl> - * Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - * you may not use this file except in compliance with the License . <nl> - * You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * / <nl> - <nl> - # pragma once <nl> - # define DEPRECATED __attribute__ ( ( __deprecated__ ) ) <nl> mmm a / folly / futures / Future . h <nl> ppp b / folly / futures / Future . h <nl> <nl> # include < vector > <nl> <nl> # include < folly / Optional . h > <nl> + # include < folly / Portability . h > <nl> # include < folly / MoveWrapper . h > <nl> - # include < folly / futures / Deprecated . h > <nl> # include < folly / futures / DrivableExecutor . 
h > <nl> # include < folly / futures / Promise . h > <nl> # include < folly / futures / Try . h > <nl> class Future { <nl> / / / by then ) , and it is active ( active by default ) . <nl> / / / <nl> / / / Inactive Futures will activate upon destruction . <nl> - DEPRECATED Future < T > & activate ( ) & { <nl> + FOLLY_DEPRECATED ( " do not use " ) Future < T > & activate ( ) & { <nl> core_ - > activate ( ) ; <nl> return * this ; <nl> } <nl> - DEPRECATED Future < T > & deactivate ( ) & { <nl> + FOLLY_DEPRECATED ( " do not use " ) Future < T > & deactivate ( ) & { <nl> core_ - > deactivate ( ) ; <nl> return * this ; <nl> } <nl> - DEPRECATED Future < T > activate ( ) & & { <nl> + FOLLY_DEPRECATED ( " do not use " ) Future < T > activate ( ) & & { <nl> core_ - > activate ( ) ; <nl> return std : : move ( * this ) ; <nl> } <nl> - DEPRECATED Future < T > deactivate ( ) & & { <nl> + FOLLY_DEPRECATED ( " do not use " ) Future < T > deactivate ( ) & & { <nl> core_ - > deactivate ( ) ; <nl> return std : : move ( * this ) ; <nl> } <nl> mmm a / folly / futures / Promise . h <nl> ppp b / folly / futures / Promise . h <nl> <nl> <nl> # pragma once <nl> <nl> - # include < folly / futures / Deprecated . h > <nl> + # include < folly / Portability . h > <nl> # include < folly / futures / Try . h > <nl> # include < functional > <nl> <nl> class Promise { <nl> p . setException ( std : : current_exception ( ) ) ; <nl> } <nl> * / <nl> - DEPRECATED void setException ( std : : exception_ptr const & ) ; <nl> + FOLLY_DEPRECATED ( " use setException ( exception_wrapper ) " ) <nl> + void setException ( std : : exception_ptr const & ) ; <nl> <nl> / * * Fulfill the Promise with an exception type E , which can be passed to <nl> std : : make_exception_ptr ( ) . Useful for originating exceptions . If you <nl> mmm a / folly / futures / SharedPromise . h <nl> ppp b / folly / futures / SharedPromise . h <nl> <nl> # pragma once <nl> <nl> # include < folly / futures / Promise . 
h > <nl> + # include < folly / Portability . h > <nl> <nl> namespace folly { <nl> <nl> class SharedPromise { <nl> p . setException ( std : : current_exception ( ) ) ; <nl> } <nl> * / <nl> - DEPRECATED void setException ( std : : exception_ptr const & ) ; <nl> + FOLLY_DEPRECATED ( " use setException ( exception_wrapper ) " ) <nl> + void setException ( std : : exception_ptr const & ) ; <nl> <nl> / * * Fulfill the SharedPromise with an exception type E , which can be passed to <nl> std : : make_exception_ptr ( ) . Useful for originating exceptions . If you <nl> mmm a / folly / futures / Try . h <nl> ppp b / folly / futures / Try . h <nl> <nl> # include < folly / ExceptionWrapper . h > <nl> # include < folly / Likely . h > <nl> # include < folly / Memory . h > <nl> - # include < folly / futures / Deprecated . h > <nl> + # include < folly / Portability . h > <nl> # include < folly / futures / FutureException . h > <nl> # include < folly / futures / Unit . h > <nl> <nl> class Try { <nl> * <nl> * @ param ep The exception_pointer . Will be rethrown . <nl> * / <nl> - DEPRECATED explicit Try ( std : : exception_ptr ep ) <nl> + FOLLY_DEPRECATED ( " use Try ( exception_wrapper ) " ) <nl> + explicit Try ( std : : exception_ptr ep ) <nl> : contains_ ( Contains : : EXCEPTION ) { <nl> try { <nl> std : : rethrow_exception ( ep ) ; <nl> class Try < void > { <nl> * <nl> * @ param ep The exception_pointer . Will be rethrown . <nl> * / <nl> - DEPRECATED explicit Try ( std : : exception_ptr ep ) : hasValue_ ( false ) { <nl> + FOLLY_DEPRECATED ( " use Try ( exception_wrapper ) " ) <nl> + explicit Try ( std : : exception_ptr ep ) : hasValue_ ( false ) { <nl> try { <nl> std : : rethrow_exception ( ep ) ; <nl> } catch ( const std : : exception & e ) { <nl> mmm a / folly / futures / helpers . h <nl> ppp b / folly / futures / helpers . h <nl> <nl> # pragma once <nl> <nl> # include < folly / futures / Future . h > <nl> + # include < folly / Portability . 
h > <nl> <nl> namespace folly { <nl> <nl> auto makeFutureWith ( F & & func ) <nl> / / / <nl> / / / auto f = makeFuture < string > ( std : : current_exception ( ) ) ; <nl> template < class T > <nl> - DEPRECATED Future < T > makeFuture ( std : : exception_ptr const & e ) ; <nl> + FOLLY_DEPRECATED ( " use makeFuture ( exception_wrapper ) " ) <nl> + Future < T > makeFuture ( std : : exception_ptr const & e ) ; <nl> <nl> / / / Make a failed Future from an exception_wrapper . <nl> template < class T > <nl>
|
Add MSVC support to futures / Deprecated . h
|
facebook/folly
|
cbaaa2db9b6b5e4873af7ce67d2a1631c012f08e
|
2015-08-03T20:22:13Z
|
mmm a / src / input_common / sdl / sdl_impl . cpp <nl> ppp b / src / input_common / sdl / sdl_impl . cpp <nl> SDLState : : SDLState ( ) { <nl> return ; <nl> } <nl> if ( SDL_SetHint ( SDL_HINT_JOYSTICK_ALLOW_BACKGROUND_EVENTS , " 1 " ) = = SDL_FALSE ) { <nl> - LOG_ERROR ( Input , " Failed to set Hint for background events " , SDL_GetError ( ) ) ; <nl> + LOG_ERROR ( Input , " Failed to set hint for background events with : { } " , SDL_GetError ( ) ) ; <nl> } <nl> <nl> SDL_AddEventWatch ( & SDLEventWatcher , this ) ; <nl>
|
input_common / sdl / sdl_impl : Correct logging string in SDLState constructor
|
yuzu-emu/yuzu
|
5ccf2a7b822a28364e7e859013342e1b0c9c33d3
|
2019-06-03T20:56:47Z
|
mmm a / tensorflow / python / ops / image_ops_impl . py <nl> ppp b / tensorflow / python / ops / image_ops_impl . py <nl> def random_flip_up_down ( image , seed = None ) : <nl> A tensor of the same type and shape as ` image ` . <nl> Raises : <nl> ValueError : if the shape of ` image ` not supported . <nl> + <nl> + Usage Example : <nl> + ` ` ` python <nl> + > > import tensorflow as tf <nl> + > > x = tf . random . normal ( shape = ( 256 , 256 , 3 ) ) <nl> + > > tf . image . random_flip_up_down ( x ) <nl> + ` ` ` <nl> " " " <nl> return _random_flip ( image , 0 , seed , ' random_flip_up_down ' ) <nl> <nl> def random_flip_left_right ( image , seed = None ) : <nl> <nl> Raises : <nl> ValueError : if the shape of ` image ` not supported . <nl> + <nl> + Usage Example : <nl> + ` ` ` python <nl> + > > import tensorflow as tf <nl> + > > x = tf . random . normal ( shape = ( 256 , 256 , 3 ) ) <nl> + > > tf . image . random_flip_left_right ( x ) <nl> + ` ` ` <nl> " " " <nl> return _random_flip ( image , 1 , seed , ' random_flip_left_right ' ) <nl> <nl> def flip_left_right ( image ) : <nl> <nl> Raises : <nl> ValueError : if the shape of ` image ` not supported . <nl> + <nl> + Usage Example : <nl> + ` ` ` python <nl> + > > import tensorflow as tf <nl> + > > x = tf . random . normal ( shape = ( 256 , 256 , 3 ) ) <nl> + > > tf . image . flip_left_right ( x ) <nl> + ` ` ` <nl> " " " <nl> return _flip ( image , 1 , ' flip_left_right ' ) <nl> <nl> def flip_up_down ( image ) : <nl> <nl> Raises : <nl> ValueError : if the shape of ` image ` not supported . <nl> + <nl> + Usage Example : <nl> + ` ` ` python <nl> + > > import tensorflow as tf <nl> + > > x = tf . random . normal ( shape = ( 256 , 256 , 3 ) ) <nl> + > > tf . image . flip_up_down ( x ) <nl> + ` ` ` <nl> " " " <nl> return _flip ( image , 0 , ' flip_up_down ' ) <nl> <nl>
|
Added usage examples to some APIs
|
tensorflow/tensorflow
|
52392ac73c70c358032ae45e0dbeeba929f871cb
|
2019-12-24T17:04:28Z
|
mmm a / include / grpc / impl / codegen / grpc_types . h <nl> ppp b / include / grpc / impl / codegen / grpc_types . h <nl> typedef struct { <nl> # define GRPC_ARG_HTTP2_MAX_PING_STRIKES " grpc . http2 . max_ping_strikes " <nl> / * * Minimum allowed time between two pings without sending any data frame . Int <nl> valued , seconds * / <nl> - # define GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_S \ <nl> - " grpc . http2 . min_ping_interval_without_data " <nl> + # define GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_MS \ <nl> + " grpc . http2 . min_ping_interval_without_data_ms " <nl> / * * How much data are we willing to queue up per stream if <nl> GRPC_WRITE_BUFFER_HINT is set ? This is an upper bound * / <nl> # define GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE " grpc . http2 . write_buffer_size " <nl> / * * After a duration of this time the client pings the server to see if the <nl> transport is still alive . Int valued , seconds . * / <nl> - # define GRPC_ARG_CLIENT_KEEPALIVE_TIME_S " grpc . client_keepalive_time " <nl> + # define GRPC_ARG_CLIENT_KEEPALIVE_TIME_MS " grpc . client_keepalive_time_ms " <nl> / * * After waiting for a duration of this time , if the client does not receive <nl> the ping ack , it will close the transport . Int valued , seconds . * / <nl> - # define GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_S " grpc . client_keepalive_timeout " <nl> + # define GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_MS " grpc . client_keepalive_timeout_ms " <nl> / * * Is it permissible to send keepalive pings without any outstanding streams . <nl> Int valued , 0 ( false ) / 1 ( true ) . * / <nl> # define GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS \ <nl> mmm a / src / core / ext / transport / chttp2 / transport / chttp2_transport . c <nl> ppp b / src / core / ext / transport / chttp2 / transport / chttp2_transport . 
c <nl> <nl> # define MAX_WRITE_BUFFER_SIZE ( 64 * 1024 * 1024 ) <nl> # define DEFAULT_MAX_HEADER_LIST_SIZE ( 16 * 1024 ) <nl> <nl> - # define DEFAULT_CLIENT_KEEPALIVE_TIME_S INT_MAX <nl> - # define DEFAULT_CLIENT_KEEPALIVE_TIMEOUT_S 20 <nl> + # define DEFAULT_CLIENT_KEEPALIVE_TIME_MS INT_MAX <nl> + # define DEFAULT_CLIENT_KEEPALIVE_TIMEOUT_MS 20000 <nl> # define DEFAULT_KEEPALIVE_PERMIT_WITHOUT_CALLS false <nl> <nl> - static int g_default_client_keepalive_time_s = DEFAULT_CLIENT_KEEPALIVE_TIME_S ; <nl> - static int g_default_client_keepalive_timeout_s = <nl> - DEFAULT_CLIENT_KEEPALIVE_TIMEOUT_S ; <nl> + static int g_default_client_keepalive_time_ms = <nl> + DEFAULT_CLIENT_KEEPALIVE_TIME_MS ; <nl> + static int g_default_client_keepalive_timeout_ms = <nl> + DEFAULT_CLIENT_KEEPALIVE_TIMEOUT_MS ; <nl> static bool g_default_keepalive_permit_without_calls = <nl> DEFAULT_KEEPALIVE_PERMIT_WITHOUT_CALLS ; <nl> <nl> static void retry_initiate_ping_locked ( grpc_exec_ctx * exec_ctx , void * tp , <nl> # define DEFAULT_MIN_TIME_BETWEEN_PINGS_MS 0 <nl> # define DEFAULT_MAX_PINGS_BETWEEN_DATA 3 <nl> # define DEFAULT_MAX_PING_STRIKES 2 <nl> - # define DEFAULT_MIN_PING_INTERVAL_WITHOUT_DATA_S 300 <nl> + # define DEFAULT_MIN_PING_INTERVAL_WITHOUT_DATA_MS 300000 / * 5 minutes * / <nl> <nl> / * * keepalive - relevant functions * / <nl> static void init_keepalive_ping_locked ( grpc_exec_ctx * exec_ctx , void * arg , <nl> static void init_transport ( grpc_exec_ctx * exec_ctx , grpc_chttp2_transport * t , <nl> . min_time_between_pings = <nl> gpr_time_from_millis ( DEFAULT_MIN_TIME_BETWEEN_PINGS_MS , GPR_TIMESPAN ) , <nl> . max_ping_strikes = DEFAULT_MAX_PING_STRIKES , <nl> - . min_ping_interval_without_data = gpr_time_from_seconds ( <nl> - DEFAULT_MIN_PING_INTERVAL_WITHOUT_DATA_S , GPR_TIMESPAN ) , <nl> + . 
min_ping_interval_without_data = gpr_time_from_millis ( <nl> + DEFAULT_MIN_PING_INTERVAL_WITHOUT_DATA_MS , GPR_TIMESPAN ) , <nl> } ; <nl> <nl> / * client - side keepalive setting * / <nl> t - > keepalive_time = <nl> - g_default_client_keepalive_time_s = = INT_MAX <nl> + g_default_client_keepalive_time_ms = = INT_MAX <nl> ? gpr_inf_future ( GPR_TIMESPAN ) <nl> - : gpr_time_from_seconds ( g_default_client_keepalive_time_s , <nl> - GPR_TIMESPAN ) ; <nl> + : gpr_time_from_millis ( g_default_client_keepalive_time_ms , <nl> + GPR_TIMESPAN ) ; <nl> t - > keepalive_timeout = <nl> - g_default_client_keepalive_timeout_s = = INT_MAX <nl> + g_default_client_keepalive_timeout_ms = = INT_MAX <nl> ? gpr_inf_future ( GPR_TIMESPAN ) <nl> - : gpr_time_from_seconds ( g_default_client_keepalive_timeout_s , <nl> - GPR_TIMESPAN ) ; <nl> + : gpr_time_from_millis ( g_default_client_keepalive_timeout_ms , <nl> + GPR_TIMESPAN ) ; <nl> t - > keepalive_permit_without_calls = g_default_keepalive_permit_without_calls ; <nl> <nl> if ( channel_args ) { <nl> static void init_transport ( grpc_exec_ctx * exec_ctx , grpc_chttp2_transport * t , <nl> ( grpc_integer_options ) { DEFAULT_MIN_TIME_BETWEEN_PINGS_MS , 0 , <nl> INT_MAX } ) , <nl> GPR_TIMESPAN ) ; <nl> - } else if ( 0 = = strcmp ( channel_args - > args [ i ] . key , <nl> - GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_S ) ) { <nl> - t - > ping_policy . min_ping_interval_without_data = gpr_time_from_seconds ( <nl> + } else if ( 0 = = <nl> + strcmp ( channel_args - > args [ i ] . key , <nl> + GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_MS ) ) { <nl> + t - > ping_policy . 
min_ping_interval_without_data = gpr_time_from_millis ( <nl> grpc_channel_arg_get_integer ( <nl> & channel_args - > args [ i ] , <nl> - ( grpc_integer_options ) { DEFAULT_MIN_PING_INTERVAL_WITHOUT_DATA_S , <nl> - 0 , INT_MAX } ) , <nl> + ( grpc_integer_options ) { <nl> + DEFAULT_MIN_PING_INTERVAL_WITHOUT_DATA_MS , 0 , INT_MAX } ) , <nl> GPR_TIMESPAN ) ; <nl> } else if ( 0 = = strcmp ( channel_args - > args [ i ] . key , <nl> GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE ) ) { <nl> static void init_transport ( grpc_exec_ctx * exec_ctx , grpc_chttp2_transport * t , <nl> t - > enable_bdp_probe = grpc_channel_arg_get_integer ( <nl> & channel_args - > args [ i ] , ( grpc_integer_options ) { 1 , 0 , 1 } ) ; <nl> } else if ( 0 = = strcmp ( channel_args - > args [ i ] . key , <nl> - GRPC_ARG_CLIENT_KEEPALIVE_TIME_S ) ) { <nl> + GRPC_ARG_CLIENT_KEEPALIVE_TIME_MS ) ) { <nl> const int value = grpc_channel_arg_get_integer ( <nl> & channel_args - > args [ i ] , <nl> - ( grpc_integer_options ) { g_default_client_keepalive_time_s , 1 , <nl> + ( grpc_integer_options ) { g_default_client_keepalive_time_ms , 1 , <nl> INT_MAX } ) ; <nl> t - > keepalive_time = value = = INT_MAX <nl> ? gpr_inf_future ( GPR_TIMESPAN ) <nl> - : gpr_time_from_seconds ( value , GPR_TIMESPAN ) ; <nl> + : gpr_time_from_millis ( value , GPR_TIMESPAN ) ; <nl> } else if ( 0 = = strcmp ( channel_args - > args [ i ] . key , <nl> - GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_S ) ) { <nl> + GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_MS ) ) { <nl> const int value = grpc_channel_arg_get_integer ( <nl> & channel_args - > args [ i ] , <nl> - ( grpc_integer_options ) { g_default_client_keepalive_timeout_s , 0 , <nl> + ( grpc_integer_options ) { g_default_client_keepalive_timeout_ms , 0 , <nl> INT_MAX } ) ; <nl> t - > keepalive_timeout = value = = INT_MAX <nl> ? 
gpr_inf_future ( GPR_TIMESPAN ) <nl> - : gpr_time_from_seconds ( value , GPR_TIMESPAN ) ; <nl> + : gpr_time_from_millis ( value , GPR_TIMESPAN ) ; <nl> } else if ( 0 = = strcmp ( channel_args - > args [ i ] . key , <nl> GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS ) ) { <nl> t - > keepalive_permit_without_calls = <nl> void grpc_chttp2_config_default_keepalive_args ( grpc_channel_args * args ) { <nl> size_t i ; <nl> if ( args ) { <nl> for ( i = 0 ; i < args - > num_args ; i + + ) { <nl> - if ( 0 = = strcmp ( args - > args [ i ] . key , GRPC_ARG_CLIENT_KEEPALIVE_TIME_S ) ) { <nl> - g_default_client_keepalive_time_s = grpc_channel_arg_get_integer ( <nl> - & args - > args [ i ] , ( grpc_integer_options ) { <nl> - g_default_client_keepalive_time_s , 1 , INT_MAX } ) ; <nl> + if ( 0 = = strcmp ( args - > args [ i ] . key , GRPC_ARG_CLIENT_KEEPALIVE_TIME_MS ) ) { <nl> + g_default_client_keepalive_time_ms = grpc_channel_arg_get_integer ( <nl> + & args - > args [ i ] , <nl> + ( grpc_integer_options ) { g_default_client_keepalive_time_ms , 1 , <nl> + INT_MAX } ) ; <nl> } else if ( 0 = = strcmp ( args - > args [ i ] . key , <nl> - GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_S ) ) { <nl> - g_default_client_keepalive_timeout_s = grpc_channel_arg_get_integer ( <nl> + GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_MS ) ) { <nl> + g_default_client_keepalive_timeout_ms = grpc_channel_arg_get_integer ( <nl> & args - > args [ i ] , <nl> - ( grpc_integer_options ) { g_default_client_keepalive_timeout_s , 0 , <nl> + ( grpc_integer_options ) { g_default_client_keepalive_timeout_ms , 0 , <nl> INT_MAX } ) ; <nl> } else if ( 0 = = strcmp ( args - > args [ i ] . key , <nl> GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS ) ) { <nl> mmm a / test / core / end2end / tests / bad_ping . c <nl> ppp b / test / core / end2end / tests / bad_ping . c <nl> static void test_bad_ping ( grpc_end2end_test_config config ) { <nl> . value . integer = 0 } } ; <nl> grpc_arg server_a [ ] = { <nl> { . type = GRPC_ARG_INTEGER , <nl> - . 
key = GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_S , <nl> - . value . integer = 300 } , <nl> + . key = GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_MS , <nl> + . value . integer = 300000 / * 5 minutes * / } , <nl> { . type = GRPC_ARG_INTEGER , <nl> . key = GRPC_ARG_HTTP2_MAX_PING_STRIKES , <nl> . value . integer = MAX_PING_STRIKES } } ; <nl> mmm a / test / core / end2end / tests / keepalive_timeout . c <nl> ppp b / test / core / end2end / tests / keepalive_timeout . c <nl> static void test_keepalive_timeout ( grpc_end2end_test_config config ) { <nl> gpr_timespec deadline = five_seconds_time ( ) ; <nl> <nl> grpc_arg keepalive_args [ ] = { { . type = GRPC_ARG_INTEGER , <nl> - . key = GRPC_ARG_CLIENT_KEEPALIVE_TIME_S , <nl> - . value . integer = 2 } , <nl> + . key = GRPC_ARG_CLIENT_KEEPALIVE_TIME_MS , <nl> + . value . integer = 1500 } , <nl> { . type = GRPC_ARG_INTEGER , <nl> - . key = GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_S , <nl> + . key = GRPC_ARG_CLIENT_KEEPALIVE_TIMEOUT_MS , <nl> . value . integer = 0 } , <nl> { . type = GRPC_ARG_INTEGER , <nl> . key = GRPC_ARG_HTTP2_BDP_PROBE , <nl> mmm a / test / core / end2end / tests / ping . c <nl> ppp b / test / core / end2end / tests / ping . c <nl> static void test_ping ( grpc_end2end_test_config config , <nl> . value . integer = 20 } } ; <nl> grpc_arg server_a [ ] = { <nl> { . type = GRPC_ARG_INTEGER , <nl> - . key = GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_S , <nl> + . key = GRPC_ARG_HTTP2_MIN_PING_INTERVAL_WITHOUT_DATA_MS , <nl> . value . integer = 0 } , <nl> { . type = GRPC_ARG_INTEGER , <nl> - . key = GRPC_ARG_HTTP2_KEEPALIVE_PERMIT_WITHOUT_CALLS , <nl> + . key = GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS , <nl> . value . integer = 1 } } ; <nl> grpc_channel_args client_args = { . num_args = GPR_ARRAY_SIZE ( client_a ) , <nl> . args = client_a } ; <nl>
|
Change time unit to ms
|
grpc/grpc
|
c18d4b39c808d0507f0654fdf7ecad70a9a54945
|
2017-04-04T07:03:28Z
|
mmm a / torch / csrc / distributed / rpc / rref_context . cpp <nl> ppp b / torch / csrc / distributed / rpc / rref_context . cpp <nl> c10 : : intrusive_ptr < RRef > RRefContext : : getOrCreateRRef ( <nl> auto & rrefId = rrefForkData . rrefId_ ; <nl> auto & forkId = rrefForkData . forkId_ ; <nl> if ( ownerId = = getWorkerId ( ) ) { <nl> + / / We have found the rref through the rrefId <nl> auto ownerRRef = getOwnerRRef ( rrefId ) ; <nl> - TORCH_INTERNAL_ASSERT ( ownerRRef - > type ( ) = = type ) ; <nl> + / / Now double check if the two types are matched <nl> + / / <nl> + / / Why we are special casing the check for tensor type here ? <nl> + / / this is because tensor types might get specialized on tensors when <nl> + / / we pass inputs to the function , i . e . TensorType can filled with <nl> + / / specific shape info , requires_grad info , etc . so the OwerRRef we <nl> + / / found might already have those infos , but the ` type ` we passed in <nl> + / / here is a plain TensorType , they are not equal relationship : <nl> + / / specialized TensorType < : plain TensorType <nl> + / / <nl> + / / In RPC we don ' t care the difference as we ser / de with just the <nl> + / / plain TensorType . This is not a issue for UserRRef creation either , <nl> + / / since Tensor can only get specialized with a previous run of local <nl> + / / JIT function , and we shouldn ' t preserve the specialized SubTensorType <nl> + / / information on other workers because it ' s only information only . <nl> + if ( type = = TensorType : : get ( ) ) { <nl> + TORCH_INTERNAL_ASSERT ( ownerRRef - > type ( ) - > isSubtypeOf ( TensorType : : get ( ) ) ) ; <nl> + } else { <nl> + TORCH_INTERNAL_ASSERT ( ownerRRef - > type ( ) = = type ) ; <nl> + } <nl> return ownerRRef ; <nl> } else { <nl> return createUserRRef ( ownerId , rrefId , forkId , type ) ; <nl>
|
[ rpc ] special case tensor type check when getting RRef ( )
|
pytorch/pytorch
|
4dad00b64b396ef81f16bdb896175688fc629f4d
|
2020-02-27T02:44:40Z
|
mmm a / include / swift / AST / ASTWalker . h <nl> ppp b / include / swift / AST / ASTWalker . h <nl> class ModuleDecl ; <nl> class Stmt ; <nl> class Pattern ; <nl> class TypeRepr ; <nl> - class TypeLoc ; <nl> class ParameterList ; <nl> enum class AccessKind : unsigned char ; <nl> <nl> class ASTWalker { <nl> / / / returns failure . <nl> virtual bool walkToDeclPost ( Decl * D ) { return true ; } <nl> <nl> - / / / This method is called when first visiting a TypeLoc , before <nl> - / / / walking into its TypeRepr children . If it returns false , the subtree is <nl> - / / / skipped . <nl> - / / / <nl> - / / / \ param TL The TypeLoc to check . <nl> - virtual bool walkToTypeLocPre ( TypeLoc & TL ) { return true ; } <nl> - <nl> - / / / This method is called after visiting the children of a TypeLoc . <nl> - / / / If it returns false , the remaining traversal is terminated and returns <nl> - / / / failure . <nl> - virtual bool walkToTypeLocPost ( TypeLoc & TL ) { return true ; } <nl> - <nl> - <nl> / / / This method is called when first visiting a TypeRepr , before <nl> / / / walking into its children . If it returns false , the subtree is skipped . <nl> / / / <nl> mmm a / include / swift / AST / Expr . h <nl> ppp b / include / swift / AST / Expr . h <nl> class KeyPathExpr : public Expr { <nl> OptionalWrap , <nl> Identity , <nl> TupleElement , <nl> - DictionaryKey , <nl> } ; <nl> <nl> private : <nl> class KeyPathExpr : public Expr { <nl> propertyType , <nl> loc ) ; <nl> } <nl> - <nl> - / / / Create a component for a dictionary key ( # keyPath only ) . <nl> - static Component forDictionaryKey ( DeclNameRef UnresolvedName , <nl> - Type valueType , <nl> - SourceLoc loc ) { <nl> - return Component ( nullptr , UnresolvedName , nullptr , { } , { } , <nl> - Kind : : DictionaryKey , <nl> - valueType , <nl> - loc ) ; <nl> - } <nl> <nl> / / / Create a component for a subscript . 
<nl> static Component forSubscript ( ASTContext & ctx , <nl> class KeyPathExpr : public Expr { <nl> case Kind : : Property : <nl> case Kind : : Identity : <nl> case Kind : : TupleElement : <nl> - case Kind : : DictionaryKey : <nl> return true ; <nl> <nl> case Kind : : UnresolvedSubscript : <nl> class KeyPathExpr : public Expr { <nl> case Kind : : Property : <nl> case Kind : : Identity : <nl> case Kind : : TupleElement : <nl> - case Kind : : DictionaryKey : <nl> return nullptr ; <nl> } <nl> llvm_unreachable ( " unhandled kind " ) ; <nl> class KeyPathExpr : public Expr { <nl> case Kind : : Property : <nl> case Kind : : Identity : <nl> case Kind : : TupleElement : <nl> - case Kind : : DictionaryKey : <nl> llvm_unreachable ( " no subscript labels for this kind " ) ; <nl> } <nl> llvm_unreachable ( " unhandled kind " ) ; <nl> class KeyPathExpr : public Expr { <nl> case Kind : : Property : <nl> case Kind : : Identity : <nl> case Kind : : TupleElement : <nl> - case Kind : : DictionaryKey : <nl> return { } ; <nl> } <nl> llvm_unreachable ( " unhandled kind " ) ; <nl> class KeyPathExpr : public Expr { <nl> DeclNameRef getUnresolvedDeclName ( ) const { <nl> switch ( getKind ( ) ) { <nl> case Kind : : UnresolvedProperty : <nl> - case Kind : : DictionaryKey : <nl> return Decl . UnresolvedName ; <nl> <nl> case Kind : : Invalid : <nl> class KeyPathExpr : public Expr { <nl> case Kind : : OptionalForce : <nl> case Kind : : Identity : <nl> case Kind : : TupleElement : <nl> - case Kind : : DictionaryKey : <nl> llvm_unreachable ( " no decl ref for this kind " ) ; <nl> } <nl> llvm_unreachable ( " unhandled kind " ) ; <nl> class KeyPathExpr : public Expr { <nl> case Kind : : Identity : <nl> case Kind : : Property : <nl> case Kind : : Subscript : <nl> - case Kind : : DictionaryKey : <nl> llvm_unreachable ( " no field number for this kind " ) ; <nl> } <nl> llvm_unreachable ( " unhandled kind " ) ; <nl> mmm a / include / swift / AST / Pattern . 
h <nl> ppp b / include / swift / AST / Pattern . h <nl> namespace swift { <nl> class Expr ; <nl> enum class CheckedCastKind : unsigned ; <nl> class TypeExpr ; <nl> - class TypeLoc ; <nl> <nl> / / / PatternKind - The classification of different kinds of <nl> / / / value - matching pattern . <nl> class TypedPattern : public Pattern { <nl> <nl> TypeRepr * getTypeRepr ( ) const { return PatTypeRepr ; } <nl> <nl> - TypeLoc getTypeLoc ( ) const ; <nl> SourceLoc getLoc ( ) const ; <nl> SourceRange getSourceRange ( ) const ; <nl> <nl> mmm a / include / swift / IDE / Utils . h <nl> ppp b / include / swift / IDE / Utils . h <nl> class NameMatcher : public ASTWalker { <nl> bool walkToDeclPost ( Decl * D ) override ; <nl> std : : pair < bool , Stmt * > walkToStmtPre ( Stmt * S ) override ; <nl> Stmt * walkToStmtPost ( Stmt * S ) override ; <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override ; <nl> - bool walkToTypeLocPost ( TypeLoc & TL ) override ; <nl> bool walkToTypeReprPre ( TypeRepr * T ) override ; <nl> bool walkToTypeReprPost ( TypeRepr * T ) override ; <nl> std : : pair < bool , Pattern * > walkToPatternPre ( Pattern * P ) override ; <nl> mmm a / lib / AST / ASTDumper . cpp <nl> ppp b / lib / AST / ASTDumper . cpp <nl> class PrintExpr : public ExprVisitor < PrintExpr > { <nl> PrintWithColorRAII ( OS , DiscriminatorColor ) <nl> < < " # " < < component . getTupleIndex ( ) ; <nl> break ; <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> - PrintWithColorRAII ( OS , ASTNodeColor ) < < " dict_key " ; <nl> - PrintWithColorRAII ( OS , IdentifierColor ) <nl> - < < " key = ' " < < component . getUnresolvedDeclName ( ) < < " ' " ; <nl> - break ; <nl> } <nl> PrintWithColorRAII ( OS , TypeColor ) <nl> < < " type = ' " < < GetTypeOfKeyPathComponent ( E , i ) < < " ' " ; <nl> mmm a / lib / AST / ASTPrinter . cpp <nl> ppp b / lib / AST / ASTPrinter . 
cpp <nl> void PrintAST : : printTypedPattern ( const TypedPattern * TP ) { <nl> if ( auto decl = named - > getDecl ( ) ) <nl> isIUO = decl - > isImplicitlyUnwrappedOptional ( ) ; <nl> <nl> - printTypeLocForImplicitlyUnwrappedOptional ( TP - > getTypeLoc ( ) , isIUO ) ; <nl> + const auto TyLoc = TypeLoc ( TP - > getTypeRepr ( ) , <nl> + TP - > hasType ( ) ? TP - > getType ( ) : Type ( ) ) ; <nl> + printTypeLocForImplicitlyUnwrappedOptional ( TyLoc , isIUO ) ; <nl> } <nl> <nl> / / / Determines if we are required to print the name of a property declaration , <nl> mmm a / lib / AST / ASTScopeCreation . cpp <nl> ppp b / lib / AST / ASTScopeCreation . cpp <nl> void ScopeCreator : : forEachClosureIn ( <nl> return { false , P } ; <nl> } <nl> bool walkToDeclPre ( Decl * D ) override { return false ; } <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> bool walkToParameterListPre ( ParameterList * PL ) override { return false ; } <nl> } ; <nl> mmm a / lib / AST / ASTWalker . cpp <nl> ppp b / lib / AST / ASTWalker . cpp <nl> class Traversal : public ASTVisitor < Traversal , Expr * , Stmt * , <nl> if ( doIt ( typeRepr ) ) <nl> return true ; <nl> for ( auto & Inherit : ED - > getInherited ( ) ) { <nl> - if ( doIt ( Inherit ) ) <nl> - return true ; <nl> + if ( auto * const TyR = Inherit . getTypeRepr ( ) ) <nl> + if ( doIt ( TyR ) ) <nl> + return true ; <nl> } <nl> if ( visitTrailingRequirements ( ED ) ) <nl> return true ; <nl> class Traversal : public ASTVisitor < Traversal , Expr * , Stmt * , <nl> } <nl> <nl> bool visitAbstractTypeParamDecl ( AbstractTypeParamDecl * TPD ) { <nl> - for ( auto Inherit : TPD - > getInherited ( ) ) { <nl> - if ( doIt ( Inherit ) ) <nl> - return true ; <nl> + for ( const auto & Inherit : TPD - > getInherited ( ) ) { <nl> + if ( auto * const TyR = Inherit . 
getTypeRepr ( ) ) <nl> + if ( doIt ( TyR ) ) <nl> + return true ; <nl> } <nl> <nl> if ( const auto ATD = dyn_cast < AssociatedTypeDecl > ( TPD ) ) { <nl> class Traversal : public ASTVisitor < Traversal , Expr * , Stmt * , <nl> <nl> bool WalkGenerics = visitGenericParamListIfNeeded ( NTD ) ; <nl> <nl> - for ( auto & Inherit : NTD - > getInherited ( ) ) { <nl> - if ( doIt ( Inherit ) ) <nl> - return true ; <nl> + for ( const auto & Inherit : NTD - > getInherited ( ) ) { <nl> + if ( auto * const TyR = Inherit . getTypeRepr ( ) ) <nl> + if ( doIt ( Inherit . getTypeRepr ( ) ) ) <nl> + return true ; <nl> } <nl> <nl> / / Visit requirements <nl> class Traversal : public ASTVisitor < Traversal , Expr * , Stmt * , <nl> bool WalkGenerics = visitGenericParamListIfNeeded ( SD ) ; <nl> <nl> visit ( SD - > getIndices ( ) ) ; <nl> - if ( doIt ( SD - > getElementTypeLoc ( ) ) ) <nl> - return true ; <nl> + if ( auto * const TyR = SD - > getElementTypeLoc ( ) . getTypeRepr ( ) ) <nl> + if ( doIt ( TyR ) ) <nl> + return true ; <nl> <nl> / / Visit trailing requirements <nl> if ( WalkGenerics & & visitTrailingRequirements ( SD ) ) <nl> class Traversal : public ASTVisitor < Traversal , Expr * , Stmt * , <nl> visit ( PD ) ; <nl> visit ( AFD - > getParameters ( ) ) ; <nl> <nl> - if ( auto * FD = dyn_cast < FuncDecl > ( AFD ) ) <nl> + if ( auto * FD = dyn_cast < FuncDecl > ( AFD ) ) { <nl> if ( ! isa < AccessorDecl > ( FD ) ) <nl> - if ( doIt ( FD - > getBodyResultTypeLoc ( ) ) ) <nl> - return true ; <nl> + if ( auto * const TyR = FD - > getBodyResultTypeLoc ( ) . 
getTypeRepr ( ) ) <nl> + if ( doIt ( TyR ) ) <nl> + return true ; <nl> + } <nl> <nl> / / Visit trailing requirements <nl> if ( WalkGenerics & & visitTrailingRequirements ( AFD ) ) <nl> class Traversal : public ASTVisitor < Traversal , Expr * , Stmt * , <nl> case KeyPathExpr : : Component : : Kind : : Invalid : <nl> case KeyPathExpr : : Component : : Kind : : Identity : <nl> case KeyPathExpr : : Component : : Kind : : TupleElement : <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> / / No subexpr to visit . <nl> break ; <nl> } <nl> class Traversal : public ASTVisitor < Traversal , Expr * , Stmt * , <nl> return false ; <nl> } <nl> <nl> - bool doIt ( TypeLoc & TL ) { <nl> - if ( ! Walker . walkToTypeLocPre ( TL ) ) <nl> - return false ; <nl> - <nl> - / / No " visit " since TypeLocs are not a class hierarchy . Clients can do what <nl> - / / they want in walkToTypeLocPre . <nl> - <nl> - if ( auto typerepr = TL . getTypeRepr ( ) ) <nl> - if ( doIt ( typerepr ) ) <nl> - return true ; <nl> - <nl> - / / If we didn ' t bail out , do post - order visitation . <nl> - return ! Walker . walkToTypeLocPost ( TL ) ; <nl> - } <nl> - <nl> / / / Returns true on failure . <nl> bool doIt ( TypeRepr * T ) { <nl> / / Do the pre - order visitation . If it returns false , we just <nl> mmm a / lib / AST / Expr . cpp <nl> ppp b / lib / AST / Expr . 
cpp <nl> forEachImmediateChildExpr ( llvm : : function_ref < Expr * ( Expr * ) > callback ) { <nl> } <nl> bool walkToDeclPre ( Decl * D ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> } ; <nl> <nl> this - > walk ( ChildWalker ( callback , this ) ) ; <nl> void Expr : : forEachChildExpr ( llvm : : function_ref < Expr * ( Expr * ) > callback ) { <nl> } <nl> bool walkToDeclPre ( Decl * D ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> } ; <nl> <nl> this - > walk ( ChildWalker ( callback ) ) ; <nl> void KeyPathExpr : : Component : : setSubscriptIndexHashableConformances ( <nl> case Kind : : Property : <nl> case Kind : : Identity : <nl> case Kind : : TupleElement : <nl> - case Kind : : DictionaryKey : <nl> llvm_unreachable ( " no hashable conformances for this kind " ) ; <nl> } <nl> } <nl> mmm a / lib / AST / Pattern . cpp <nl> ppp b / lib / AST / Pattern . cpp <nl> namespace { <nl> std : : pair < bool , Stmt * > walkToStmtPre ( Stmt * S ) override { <nl> return { false , S } ; <nl> } <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> bool walkToParameterListPre ( ParameterList * PL ) override { return false ; } <nl> bool walkToDeclPre ( Decl * D ) override { return false ; } <nl> TypedPattern : : TypedPattern ( Pattern * pattern , TypeRepr * tr ) <nl> Bits . TypedPattern . IsPropagatedType = false ; <nl> } <nl> <nl> - TypeLoc TypedPattern : : getTypeLoc ( ) const { <nl> - TypeLoc loc = TypeLoc ( PatTypeRepr ) ; <nl> - <nl> - if ( hasType ( ) ) <nl> - loc . 
setType ( getType ( ) ) ; <nl> - <nl> - return loc ; <nl> - } <nl> - <nl> SourceLoc TypedPattern : : getLoc ( ) const { <nl> if ( SubPattern - > isImplicit ( ) & & PatTypeRepr ) <nl> return PatTypeRepr - > getSourceRange ( ) . Start ; <nl> mmm a / lib / Frontend / ModuleInterfaceLoader . cpp <nl> ppp b / lib / Frontend / ModuleInterfaceLoader . cpp <nl> void InterfaceSubContextDelegateImpl : : inheritOptionsForBuildingInterface ( <nl> . EffectiveLanguageVersion . asAPINotesVersionString ( ) ) ) ; <nl> <nl> genericSubInvocation . setImportSearchPaths ( SearchPathOpts . ImportSearchPaths ) ; <nl> - llvm : : for_each ( SearchPathOpts . ImportSearchPaths , <nl> - [ & ] ( const std : : string & path ) { <nl> - GenericArgs . push_back ( " - I " ) ; <nl> - GenericArgs . push_back ( path ) ; <nl> - } ) ; <nl> genericSubInvocation . setFrameworkSearchPaths ( SearchPathOpts . FrameworkSearchPaths ) ; <nl> - llvm : : for_each ( SearchPathOpts . FrameworkSearchPaths , <nl> - [ & ] ( const SearchPathOptions : : FrameworkSearchPath & path ) { <nl> - GenericArgs . push_back ( path . IsSystem ? " - Fsystem " : " - F " ) ; <nl> - GenericArgs . push_back ( path . Path ) ; <nl> - } ) ; <nl> if ( ! SearchPathOpts . SDKPath . empty ( ) ) { <nl> genericSubInvocation . setSDKPath ( SearchPathOpts . SDKPath ) ; <nl> - GenericArgs . push_back ( " - sdk " ) ; <nl> - GenericArgs . push_back ( SearchPathOpts . SDKPath ) ; <nl> } <nl> <nl> genericSubInvocation . setInputKind ( InputFileKind : : SwiftModuleInterface ) ; <nl> if ( ! SearchPathOpts . RuntimeResourcePath . empty ( ) ) { <nl> genericSubInvocation . setRuntimeResourcePath ( SearchPathOpts . RuntimeResourcePath ) ; <nl> - GenericArgs . push_back ( " - resource - dir " ) ; <nl> - GenericArgs . push_back ( SearchPathOpts . RuntimeResourcePath ) ; <nl> } <nl> <nl> / / Inhibit warnings from the genericSubInvocation since we are assuming the user <nl> InterfaceSubContextDelegateImpl : : InterfaceSubContextDelegateImpl ( <nl> SubFEOpts . 
RequestedAction = FrontendOptions : : ActionType : : EmitModuleOnly ; <nl> if ( ! moduleCachePath . empty ( ) ) { <nl> genericSubInvocation . setClangModuleCachePath ( moduleCachePath ) ; <nl> - GenericArgs . push_back ( " - module - cache - path " ) ; <nl> - GenericArgs . push_back ( moduleCachePath ) ; <nl> } <nl> if ( ! prebuiltCachePath . empty ( ) ) { <nl> genericSubInvocation . getFrontendOptions ( ) . PrebuiltModuleCachePath = <nl> - prebuiltCachePath . str ( ) ; <nl> - GenericArgs . push_back ( " - prebuilt - module - cache - path " ) ; <nl> - GenericArgs . push_back ( prebuiltCachePath ) ; <nl> + prebuiltCachePath . str ( ) ; <nl> } <nl> if ( trackSystemDependencies ) { <nl> genericSubInvocation . getFrontendOptions ( ) . IntermoduleDependencyTracking = <nl> InterfaceSubContextDelegateImpl : : InterfaceSubContextDelegateImpl ( <nl> } <nl> genericSubInvocation . getSearchPathOptions ( ) . ExplicitSwiftModules = <nl> searchPathOpts . ExplicitSwiftModules ; <nl> - / / Dependencies scanner shouldn ' t know any explict Swift modules to use . <nl> - / / Adding these argumnets may not be necessary . <nl> - / / FIXME : remove it ? <nl> - for ( auto EM : searchPathOpts . ExplicitSwiftModules ) { <nl> - GenericArgs . push_back ( " - swift - module - file " ) ; <nl> - GenericArgs . push_back ( ArgSaver . save ( EM ) ) ; <nl> - } <nl> / / Pass down - explicit - swift - module - map - file <nl> / / FIXME : we shouldn ' t need this . Remove it ? <nl> StringRef explictSwiftModuleMap = searchPathOpts . ExplicitSwiftModuleMap ; <nl> genericSubInvocation . getSearchPathOptions ( ) . ExplicitSwiftModuleMap = <nl> explictSwiftModuleMap . str ( ) ; <nl> - if ( ! explictSwiftModuleMap . empty ( ) ) { <nl> - GenericArgs . push_back ( " - explicit - swift - module - map - file " ) ; <nl> - GenericArgs . 
push_back ( explictSwiftModuleMap ) ; <nl> - } <nl> if ( clangImporter ) { <nl> / / We need to add these extra clang flags because explict module building <nl> / / related flags are all there : - fno - implicit - modules , - fmodule - map - file = , <nl> mmm a / lib / IDE / ExprContextAnalysis . cpp <nl> ppp b / lib / IDE / ExprContextAnalysis . cpp <nl> using namespace ide ; <nl> / / typeCheckContextAt ( DeclContext , SourceLoc ) <nl> / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> <nl> - namespace { <nl> - void typeCheckContextImpl ( DeclContext * DC , SourceLoc Loc ) { <nl> - / / Nothing to type check in module context . <nl> - if ( DC - > isModuleScopeContext ( ) ) <nl> - return ; <nl> + void swift : : ide : : typeCheckContextAt ( DeclContext * DC , SourceLoc Loc ) { <nl> + while ( isa < AbstractClosureExpr > ( DC ) ) <nl> + DC = DC - > getParent ( ) ; <nl> <nl> / / Make sure the extension has been bound , in case it is in an inactive # if <nl> / / or something weird like that . 
<nl> void typeCheckContextImpl ( DeclContext * DC , SourceLoc Loc ) { <nl> switch ( DC - > getContextKind ( ) ) { <nl> case DeclContextKind : : AbstractClosureExpr : <nl> case DeclContextKind : : Module : <nl> + case DeclContextKind : : FileUnit : <nl> case DeclContextKind : : SerializedLocal : <nl> case DeclContextKind : : EnumElementDecl : <nl> case DeclContextKind : : GenericTypeDecl : <nl> void typeCheckContextImpl ( DeclContext * DC , SourceLoc Loc ) { <nl> } <nl> break ; <nl> } <nl> - <nl> - case DeclContextKind : : FileUnit : <nl> - llvm_unreachable ( " module scope context handled above " ) ; <nl> } <nl> } <nl> - } / / anonymous namespace <nl> - <nl> - void swift : : ide : : typeCheckContextAt ( DeclContext * DC , SourceLoc Loc ) { <nl> - while ( isa < AbstractClosureExpr > ( DC ) ) <nl> - DC = DC - > getParent ( ) ; <nl> - <nl> - typeCheckContextImpl ( DC , Loc ) ; <nl> - } <nl> <nl> / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> / / findParsedExpr ( DeclContext , Expr ) <nl> class ExprFinder : public ASTWalker { <nl> return { isInterstingRange ( S ) , S } ; <nl> } <nl> <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> } ; <nl> } / / anonymous namespace <nl> mmm a / lib / IDE / SourceEntityWalker . cpp <nl> ppp b / lib / IDE / SourceEntityWalker . cpp <nl> std : : pair < bool , Expr * > SemaAnnotator : : walkToExprPre ( Expr * E ) { <nl> case KeyPathExpr : : Component : : Kind : : OptionalWrap : <nl> case KeyPathExpr : : Component : : Kind : : OptionalForce : <nl> case KeyPathExpr : : Component : : Kind : : Identity : <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> break ; <nl> } <nl> } <nl> mmm a / lib / IDE / SwiftSourceDocInfo . cpp <nl> ppp b / lib / IDE / SwiftSourceDocInfo . 
cpp <nl> Expr * NameMatcher : : walkToExprPost ( Expr * E ) { <nl> return E ; <nl> } <nl> <nl> - bool NameMatcher : : walkToTypeLocPre ( TypeLoc & TL ) { <nl> - if ( isDone ( ) | | shouldSkip ( TL . getSourceRange ( ) ) ) <nl> - return false ; <nl> - return true ; <nl> - } <nl> - <nl> - bool NameMatcher : : walkToTypeLocPost ( TypeLoc & TL ) { <nl> - return ! isDone ( ) ; <nl> - } <nl> - <nl> bool NameMatcher : : walkToTypeReprPre ( TypeRepr * T ) { <nl> if ( isDone ( ) | | shouldSkip ( T - > getSourceRange ( ) ) ) <nl> return false ; <nl> mmm a / lib / Parse / ParseStmt . cpp <nl> ppp b / lib / Parse / ParseStmt . cpp <nl> struct FallthroughFinder : ASTWalker { <nl> } <nl> <nl> bool walkToDeclPre ( Decl * d ) override { return false ; } <nl> - bool walkToTypeLocPre ( TypeLoc & tl ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * t ) override { return false ; } <nl> <nl> static FallthroughStmt * findFallthrough ( Stmt * s ) { <nl> mmm a / lib / SILGen / SILGenExpr . cpp <nl> ppp b / lib / SILGen / SILGenExpr . cpp <nl> RValue RValueEmitter : : visitKeyPathExpr ( KeyPathExpr * E , SGFContext C ) { <nl> case KeyPathExpr : : Component : : Kind : : UnresolvedProperty : <nl> case KeyPathExpr : : Component : : Kind : : UnresolvedSubscript : <nl> llvm_unreachable ( " not resolved " ) ; <nl> - break ; <nl> - <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> - llvm_unreachable ( " DictionaryKey only valid in # keyPath " ) ; <nl> - break ; <nl> } <nl> } <nl> <nl> mmm a / lib / SILOptimizer / FunctionSignatureTransforms / ExistentialSpecializer . cpp <nl> ppp b / lib / SILOptimizer / FunctionSignatureTransforms / ExistentialSpecializer . cpp <nl> class ExistentialSpecializer : public SILFunctionTransform { <nl> return ; <nl> } <nl> <nl> - / / FIXME : This pass should be able to support ownership . <nl> - if ( F - > hasOwnership ( ) ) <nl> - return ; <nl> - <nl> / / / Get CallerAnalysis information handy . 
<nl> CA = PM - > getAnalysis < CallerAnalysis > ( ) ; <nl> <nl> bool ExistentialSpecializer : : canSpecializeCalleeFunction ( FullApplySite & Apply ) { <nl> if ( ! Callee - > isDefinition ( ) ) <nl> return false ; <nl> <nl> - / / If the callee has ownership enabled , bail . <nl> - / / <nl> - / / FIXME : We should be able to handle callees that have ownership , but the <nl> - / / pass has not been updated yet . <nl> - if ( Callee - > hasOwnership ( ) ) <nl> - return false ; <nl> - <nl> / / Ignore generic functions . Generic functions should be fully specialized <nl> / / before attempting to introduce new generic parameters for existential <nl> / / arguments . Otherwise , there ' s no guarantee that the generic specializer <nl> mmm a / lib / SILOptimizer / FunctionSignatureTransforms / ExistentialTransform . cpp <nl> ppp b / lib / SILOptimizer / FunctionSignatureTransforms / ExistentialTransform . cpp <nl> void ExistentialSpecializerCloner : : cloneAndPopulateFunction ( ) { <nl> SILBuilderWithScope Builder ( exitBB - > getTerminator ( ) ) ; <nl> / / A return location can ' t be used for a non - return instruction . <nl> auto loc = RegularLocation : : getAutoGeneratedLocation ( ) ; <nl> - for ( SILValue cleanupVal : CleanupValues ) <nl> - Builder . createDestroyAddr ( loc , cleanupVal ) ; <nl> + for ( SILValue cleanupVal : CleanupValues ) { <nl> + if ( cleanupVal . getOwnershipKind ( ) = = ValueOwnershipKind : : Guaranteed ) { <nl> + Builder . emitEndBorrowOperation ( loc , cleanupVal ) ; <nl> + } else { <nl> + Builder . emitDestroyOperation ( loc , cleanupVal ) ; <nl> + } <nl> + } <nl> <nl> for ( auto * ASI : llvm : : reverse ( AllocStackInsts ) ) <nl> Builder . createDeallocStack ( loc , ASI ) ; <nl> void ExistentialSpecializerCloner : : cloneArguments ( <nl> NewF . getLoweredType ( NewF . mapTypeIntoContext ( GenericParam ) ) ; <nl> GenericSILType = GenericSILType . getCategoryType ( <nl> ArgDesc . Arg - > getType ( ) . 
getCategory ( ) ) ; <nl> - auto * NewArg = <nl> - ClonedEntryBB - > createFunctionArgument ( GenericSILType , ArgDesc . Decl ) ; <nl> - NewArg - > setOwnershipKind ( ValueOwnershipKind ( <nl> - NewF , GenericSILType , ArgDesc . Arg - > getArgumentConvention ( ) ) ) ; <nl> + auto * NewArg = ClonedEntryBB - > createFunctionArgument ( <nl> + GenericSILType , ArgDesc . Decl , <nl> + ValueOwnershipKind ( NewF , GenericSILType , <nl> + ArgDesc . Arg - > getArgumentConvention ( ) ) ) ; <nl> / / Determine the Conformances . <nl> SILType ExistentialType = ArgDesc . Arg - > getType ( ) . getObjectType ( ) ; <nl> CanType OpenedType = NewArg - > getType ( ) . getASTType ( ) ; <nl> void ExistentialSpecializerCloner : : cloneArguments ( <nl> } <nl> case ExistentialRepresentation : : Class : { <nl> SILValue NewArgValue = NewArg ; <nl> + bool origConsumed = EAD . isConsumed ; <nl> + <nl> + / / Load our object if needed and if our original value was not consumed , <nl> + / / make a copy in ossa . Do not perturb code - gen in non - ossa code though . <nl> if ( ! NewArg - > getType ( ) . isObject ( ) ) { <nl> - NewArgValue = NewFBuilder . createLoad ( InsertLoc , NewArg , <nl> - LoadOwnershipQualifier : : Unqualified ) ; <nl> + auto qual = LoadOwnershipQualifier : : Take ; <nl> + if ( NewFBuilder . hasOwnership ( ) & & ! origConsumed ) { <nl> + qual = LoadOwnershipQualifier : : Copy ; <nl> + } <nl> + NewArgValue = <nl> + NewFBuilder . emitLoadValueOperation ( InsertLoc , NewArg , qual ) ; <nl> + } else { <nl> + if ( NewFBuilder . hasOwnership ( ) & & ! origConsumed ) { <nl> + NewArgValue = NewFBuilder . emitCopyValueOperation ( InsertLoc , NewArg ) ; <nl> + } <nl> } <nl> - <nl> - / / FIXME_ownership : init_existential_ref always takes ownership of the <nl> - / / incoming reference . If the argument convention is borrowed <nl> - / / ( ! isConsumed ) , then we should create a copy_value here and add this new <nl> - / / existential to the CleanupValues vector . 
<nl> <nl> / / / Simple case : Create an init_existential . <nl> / / / % 5 = init_existential_ref % 0 : $ T : $ T , $ P <nl> void ExistentialSpecializerCloner : : cloneArguments ( <nl> InsertLoc , ArgDesc . Arg - > getType ( ) . getObjectType ( ) , <nl> NewArg - > getType ( ) . getASTType ( ) , <nl> NewArgValue , Conformances ) ; <nl> - <nl> + <nl> + / / If we don ' t have an object and we are in ossa , the store will consume <nl> + / / the InitRef . <nl> if ( ! NewArg - > getType ( ) . isObject ( ) ) { <nl> auto alloc = NewFBuilder . createAllocStack ( InsertLoc , <nl> InitRef - > getType ( ) ) ; <nl> - NewFBuilder . createStore ( InsertLoc , InitRef , alloc , <nl> - StoreOwnershipQualifier : : Unqualified ) ; <nl> + NewFBuilder . emitStoreValueOperation ( InsertLoc , InitRef , alloc , <nl> + StoreOwnershipQualifier : : Init ) ; <nl> InitRef = alloc ; <nl> AllocStackInsts . push_back ( alloc ) ; <nl> + } else { <nl> + / / Otherwise in ossa , we need to add init existential ref as something <nl> + / / to be cleaned up . In non - ossa , we do not insert the copies , so we do <nl> + / / not need to do it then . <nl> + / / <nl> + / / TODO : This would be simpler if we had managed value / cleanup scopes . <nl> + if ( NewFBuilder . hasOwnership ( ) & & ! origConsumed ) { <nl> + CleanupValues . push_back ( InitRef ) ; <nl> + } <nl> } <nl> <nl> entryArgs . push_back ( InitRef ) ; <nl> break ; <nl> } <nl> + <nl> default : { <nl> llvm_unreachable ( " Unhandled existential type in ExistentialTransform ! " ) ; <nl> break ; <nl> void ExistentialTransform : : populateThunkBody ( ) { <nl> SILValue archetypeValue ; <nl> auto ExistentialRepr = <nl> ArgDesc . Arg - > getType ( ) . getPreferredExistentialRepresentation ( ) ; <nl> + bool OriginallyConsumed = ETAD . isConsumed ; <nl> switch ( ExistentialRepr ) { <nl> case ExistentialRepresentation : : Opaque : { <nl> archetypeValue = Builder . createOpenExistentialAddr ( <nl> Loc , OrigOperand , OpenedSILType , it - > second . 
AccessType ) ; <nl> SILValue calleeArg = archetypeValue ; <nl> - if ( ETAD . isConsumed ) { <nl> + if ( OriginallyConsumed ) { <nl> / / open_existential_addr projects a borrowed address into the <nl> / / existential box . Since the callee consumes the generic value , we <nl> / / must pass in a copy . <nl> void ExistentialTransform : : populateThunkBody ( ) { <nl> / / If the operand is not object type , we need an explicit load . <nl> SILValue OrigValue = OrigOperand ; <nl> if ( ! OrigOperand - > getType ( ) . isObject ( ) ) { <nl> - OrigValue = Builder . createLoad ( Loc , OrigValue , <nl> - LoadOwnershipQualifier : : Unqualified ) ; <nl> + auto qual = LoadOwnershipQualifier : : Take ; <nl> + if ( Builder . hasOwnership ( ) & & ! OriginallyConsumed ) { <nl> + qual = LoadOwnershipQualifier : : Copy ; <nl> + } <nl> + OrigValue = Builder . emitLoadValueOperation ( Loc , OrigValue , qual ) ; <nl> + } else { <nl> + if ( Builder . hasOwnership ( ) & & ! OriginallyConsumed ) { <nl> + OrigValue = Builder . emitCopyValueOperation ( Loc , OrigValue ) ; <nl> + } <nl> } <nl> + <nl> / / OpenExistentialRef forwards ownership , so it does the right thing <nl> / / regardless of whether the argument is borrowed or consumed . <nl> archetypeValue = <nl> Builder . createOpenExistentialRef ( Loc , OrigValue , OpenedSILType ) ; <nl> + <nl> + / / If we don ' t have an object and we are in ossa , the store will consume <nl> + / / the open_existential_ref . <nl> if ( ! OrigOperand - > getType ( ) . isObject ( ) ) { <nl> SILValue ASI = Builder . createAllocStack ( Loc , OpenedSILType ) ; <nl> - Builder . createStore ( Loc , archetypeValue , ASI , <nl> - StoreOwnershipQualifier : : Unqualified ) ; <nl> + Builder . emitStoreValueOperation ( Loc , archetypeValue , ASI , <nl> + StoreOwnershipQualifier : : Init ) ; <nl> Temps . 
push_back ( { ASI , SILValue ( ) } ) ; <nl> archetypeValue = ASI ; <nl> + } else { <nl> + / / Otherwise in ossa , we need to add open_existential_ref as something <nl> + / / to be cleaned up . In non - ossa , we do not insert the copies , so we <nl> + / / do not need to do it then . <nl> + / / <nl> + / / TODO : This would be simpler if we had managed value / cleanup scopes . <nl> + if ( Builder . hasOwnership ( ) & & ! OriginallyConsumed ) { <nl> + Temps . push_back ( { SILValue ( ) , archetypeValue } ) ; <nl> + } <nl> + } <nl> ApplyArgs . push_back ( archetypeValue ) ; <nl> break ; <nl> void ExistentialTransform : : populateThunkBody ( ) { <nl> / / copy_addr % valAdr to % temp / / < = = Temp CopyAddr <nl> / / apply ( % temp ) / / < = = Temp is consumed by the apply <nl> / / <nl> - / / Destroy the original argument and deallocation the temporary : <nl> + / / Destroy the original argument and deallocate the temporary . If we have <nl> + / / an address this becomes : <nl> / / destroy_addr % consumedExistential : $ * Protocol <nl> / / dealloc_stack % temp : $ * T <nl> + / / <nl> + / / Otherwise , if we had an object , we just emit a destroy_value . <nl> if ( Temp . DestroyValue ) <nl> - Builder . createDestroyAddr ( cleanupLoc , Temp . DestroyValue ) ; <nl> + Builder . emitDestroyOperation ( cleanupLoc , Temp . DestroyValue ) ; <nl> if ( Temp . DeallocStackEntry ) <nl> Builder . createDeallocStack ( cleanupLoc , Temp . DeallocStackEntry ) ; <nl> } <nl> mmm a / lib / Sema / CSApply . cpp <nl> ppp b / lib / Sema / CSApply . cpp <nl> static bool buildObjCKeyPathString ( KeyPathExpr * E , <nl> / / Don ' t bother building the key path string if the key path didn ' t even <nl> / / resolve . <nl> return false ; <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> - llvm_unreachable ( " DictionaryKey only valid in # keyPath expressions .
" ) ; <nl> - return false ; <nl> } <nl> } <nl> <nl> namespace { <nl> case KeyPathExpr : : Component : : Kind : : OptionalWrap : <nl> case KeyPathExpr : : Component : : Kind : : TupleElement : <nl> llvm_unreachable ( " already resolved " ) ; <nl> - break ; <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> - llvm_unreachable ( " DictionaryKey only valid in # keyPath " ) ; <nl> - break ; <nl> } <nl> <nl> / / Update " componentTy " with the result type of the last component . <nl> namespace { <nl> componentType = solution . simplifyType ( cs . getType ( kp , i ) ) ; <nl> assert ( ! componentType - > hasTypeVariable ( ) & & <nl> " Should not write type variable into key - path component " ) ; <nl> - kp - > getMutableComponents ( ) [ i ] . setComponentType ( componentType ) ; <nl> } <nl> + <nl> + kp - > getMutableComponents ( ) [ i ] . setComponentType ( componentType ) ; <nl> } <nl> } <nl> <nl> mmm a / lib / Sema / CSDiagnostics . cpp <nl> ppp b / lib / Sema / CSDiagnostics . cpp <nl> bool ContextualFailure : : diagnoseAsError ( ) { <nl> return true ; <nl> } <nl> <nl> + case ConstraintLocator : : FunctionBuilderBodyResult : { <nl> + diagnostic = * getDiagnosticFor ( CTP_Initialization , toType ) ; <nl> + break ; <nl> + } <nl> + <nl> default : <nl> return false ; <nl> } <nl> mmm a / lib / Sema / CSGen . cpp <nl> ppp b / lib / Sema / CSGen . cpp <nl> namespace { <nl> } <nl> <nl> / / / Ignore types . <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> + bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> } ; <nl> <nl> / / / Given a collection of " linked " expressions , analyzes them for <nl> namespace { <nl> } <nl> <nl> / / / Ignore types . 
<nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> + bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> } ; <nl> <nl> / / / For a given expression , given information that is global to the <nl> namespace { <nl> } <nl> case KeyPathExpr : : Component : : Kind : : Identity : <nl> continue ; <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> - llvm_unreachable ( " DictionaryKey only valid in # keyPath " ) ; <nl> - break ; <nl> } <nl> <nl> / / By now , ` base ` is the result type of this component . Set it in the <nl> mmm a / lib / Sema / CSSimplify . cpp <nl> ppp b / lib / Sema / CSSimplify . cpp <nl> bool ConstraintSystem : : repairFailures ( <nl> getConstraintLocator ( anchor , path ) ) ; <nl> } <nl> <nl> + case ConstraintLocator : : FunctionBuilderBodyResult : { <nl> + conversionsOrFixes . push_back ( ContextualMismatch : : create ( <nl> + * this , lhs , rhs , getConstraintLocator ( locator ) ) ) ; <nl> + break ; <nl> + } <nl> + <nl> default : <nl> break ; <nl> } <nl> ConstraintSystem : : simplifyKeyPathConstraint ( <nl> case KeyPathExpr : : Component : : Kind : : TupleElement : <nl> llvm_unreachable ( " not implemented " ) ; <nl> break ; <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> - llvm_unreachable ( " DictionaryKey only valid in # keyPath " ) ; <nl> - break ; <nl> } <nl> } <nl> <nl> mmm a / lib / Sema / ConstraintSystem . cpp <nl> ppp b / lib / Sema / ConstraintSystem . cpp <nl> ConstraintLocator * ConstraintSystem : : getCalleeLocator ( <nl> case ComponentKind : : OptionalChain : <nl> case ComponentKind : : OptionalWrap : <nl> case ComponentKind : : Identity : <nl> - case ComponentKind : : DictionaryKey : <nl> / / These components don ' t have any callee associated , so just continue . 
<nl> break ; <nl> } <nl> void constraints : : simplifyLocator ( ASTNode & anchor , <nl> break ; <nl> } <nl> <nl> + case ConstraintLocator : : FunctionBuilderBodyResult : { <nl> + path = path . slice ( 1 ) ; <nl> + break ; <nl> + } <nl> + <nl> default : <nl> / / FIXME : Lots of other cases to handle . <nl> break ; <nl> SolutionApplicationTarget SolutionApplicationTarget : : forInitialization ( <nl> if ( auto * typedPattern = dyn_cast < TypedPattern > ( pattern ) ) { <nl> const Pattern * inner = typedPattern - > getSemanticsProvidingPattern ( ) ; <nl> if ( isa < NamedPattern > ( inner ) | | isa < AnyPattern > ( inner ) ) { <nl> - contextualType = typedPattern - > getTypeLoc ( ) ; <nl> - if ( ! contextualType . getType ( ) ) <nl> + contextualType = TypeLoc ( typedPattern - > getTypeRepr ( ) ) ; <nl> + if ( typedPattern - > hasType ( ) ) <nl> + contextualType . setType ( typedPattern - > getType ( ) ) ; <nl> + else <nl> contextualType . setType ( patternType ) ; <nl> } <nl> } <nl> mmm a / lib / Sema / TypeCheckAccess . cpp <nl> ppp b / lib / Sema / TypeCheckAccess . cpp <nl> class AccessControlChecker : public AccessControlCheckerBase , <nl> if ( ! anyVar ) <nl> return ; <nl> <nl> - checkTypeAccess ( TP - > getTypeLoc ( ) , anyVar , / * mayBeInferred * / true , <nl> + checkTypeAccess ( TP - > hasType ( ) ? TP - > getType ( ) : Type ( ) , <nl> + TP - > getTypeRepr ( ) , anyVar , / * mayBeInferred * / true , <nl> [ & ] ( AccessScope typeAccessScope , <nl> const TypeRepr * complainRepr , <nl> DowngradeToWarning downgradeToWarning ) { <nl> class UsableFromInlineChecker : public AccessControlCheckerBase , <nl> return ; <nl> <nl> checkTypeAccess ( <nl> - TP - > getTypeLoc ( ) , <nl> + TP - > hasType ( ) ? TP - > getType ( ) : Type ( ) , <nl> + TP - > getTypeRepr ( ) , <nl> fixedLayoutStructContext ? 
fixedLayoutStructContext : anyVar , <nl> / * mayBeInferred * / true , <nl> [ & ] ( AccessScope typeAccessScope , const TypeRepr * complainRepr , <nl> class ExportabilityChecker : public DeclVisitor < ExportabilityChecker > { <nl> if ( shouldSkipChecking ( anyVar ) ) <nl> return ; <nl> <nl> - checkType ( TP - > getTypeLoc ( ) , anyVar , getDiagnoser ( anyVar ) ) ; <nl> + checkType ( TP - > hasType ( ) ? TP - > getType ( ) : Type ( ) , <nl> + TP - > getTypeRepr ( ) , anyVar , getDiagnoser ( anyVar ) ) ; <nl> <nl> / / Check the property wrapper types . <nl> for ( auto attr : anyVar - > getAttachedPropertyWrappers ( ) ) <nl> mmm a / lib / Sema / TypeCheckAvailability . cpp <nl> ppp b / lib / Sema / TypeCheckAvailability . cpp <nl> class AvailabilityWalker : public ASTWalker { <nl> case KeyPathExpr : : Component : : Kind : : OptionalWrap : <nl> case KeyPathExpr : : Component : : Kind : : OptionalForce : <nl> case KeyPathExpr : : Component : : Kind : : Identity : <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> break ; <nl> } <nl> } <nl> mmm a / lib / Sema / TypeCheckCodeCompletion . cpp <nl> ppp b / lib / Sema / TypeCheckCodeCompletion . cpp <nl> static Optional < Type > getTypeOfCompletionContextExpr ( <nl> <nl> case CompletionTypeCheckKind : : KeyPath : <nl> referencedDecl = nullptr ; <nl> - if ( auto keyPath = dyn_cast < KeyPathExpr > ( parsedExpr ) ) { <nl> - auto components = keyPath - > getComponents ( ) ; <nl> - if ( ! components . empty ( ) ) { <nl> - auto & last = components . back ( ) ; <nl> - if ( last . isResolved ( ) ) { <nl> - if ( last . getKind ( ) = = KeyPathExpr : : Component : : Kind : : Property ) <nl> - referencedDecl = last . getDeclRef ( ) ; <nl> - Type lookupTy = last . getComponentType ( ) ; <nl> - ASTContext & Ctx = DC - > getASTContext ( ) ; <nl> - if ( auto bridgedClass = Ctx . 
getBridgedToObjC ( DC , lookupTy ) ) <nl> - return bridgedClass ; <nl> - return lookupTy ; <nl> - } <nl> - } <nl> - } <nl> + if ( auto keyPath = dyn_cast < KeyPathExpr > ( parsedExpr ) ) <nl> + return TypeChecker : : checkObjCKeyPathExpr ( DC , keyPath , <nl> + / * requireResultType = * / true ) ; <nl> <nl> return None ; <nl> } <nl> mmm a / lib / Sema / TypeCheckConstraints . cpp <nl> ppp b / lib / Sema / TypeCheckConstraints . cpp <nl> class FunctionSyntacticDiagnosticWalker : public ASTWalker { <nl> return { false , pattern } ; <nl> } <nl> <nl> - bool walkToTypeLocPre ( TypeLoc & typeLoc ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * typeRepr ) override { return false ; } <nl> bool walkToParameterListPre ( ParameterList * params ) override { return false ; } <nl> } ; <nl> void swift : : forEachExprInConstraintSystem ( <nl> } <nl> bool walkToDeclPre ( Decl * D ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> } ; <nl> <nl> expr - > walk ( ChildWalker ( callback ) ) ; <nl> mmm a / lib / Sema / TypeCheckExprObjC . cpp <nl> ppp b / lib / Sema / TypeCheckExprObjC . cpp <nl> Optional < Type > TypeChecker : : checkObjCKeyPathExpr ( DeclContext * dc , <nl> case KeyPathExpr : : Component : : Kind : : OptionalWrap : <nl> case KeyPathExpr : : Component : : Kind : : Property : <nl> case KeyPathExpr : : Component : : Kind : : Subscript : <nl> - case KeyPathExpr : : Component : : Kind : : DictionaryKey : <nl> llvm_unreachable ( " already resolved ! " ) ; <nl> } <nl> <nl> Optional < Type > TypeChecker : : checkObjCKeyPathExpr ( DeclContext * dc , <nl> / / From here , we ' re resolving a property . Use the current type . 
<nl> updateState ( / * isProperty = * / true , currentType ) ; <nl> <nl> - auto resolved = KeyPathExpr : : Component : : <nl> - forDictionaryKey ( componentName , currentType , componentNameLoc ) ; <nl> - resolvedComponents . push_back ( resolved ) ; <nl> continue ; <nl> } <nl> <nl> Optional < Type > TypeChecker : : checkObjCKeyPathExpr ( DeclContext * dc , <nl> if ( auto var = dyn_cast < VarDecl > ( found ) ) { <nl> / / Resolve this component to the variable we found . <nl> auto varRef = ConcreteDeclRef ( var ) ; <nl> - Type varTy = var - > getInterfaceType ( ) ; <nl> - <nl> - / / Updates currentType <nl> - updateState ( / * isProperty = * / true , varTy ) ; <nl> - <nl> - auto resolved = KeyPathExpr : : Component : : forProperty ( varRef , currentType , <nl> - componentNameLoc ) ; <nl> + auto resolved = <nl> + KeyPathExpr : : Component : : forProperty ( varRef , Type ( ) , componentNameLoc ) ; <nl> resolvedComponents . push_back ( resolved ) ; <nl> + updateState ( / * isProperty = * / true , var - > getInterfaceType ( ) ) ; <nl> <nl> / / Check that the property is @ objc . <nl> if ( ! var - > isObjC ( ) ) { <nl> Optional < Type > TypeChecker : : checkObjCKeyPathExpr ( DeclContext * dc , <nl> break ; <nl> } <nl> <nl> - / / Updates currentType based on newType . <nl> updateState ( / * isProperty = * / false , newType ) ; <nl> - <nl> - / / Resolve this component to the type we found . <nl> - auto typeRef = ConcreteDeclRef ( type ) ; <nl> - auto resolved = KeyPathExpr : : Component : : forProperty ( typeRef , currentType , <nl> - componentNameLoc ) ; <nl> - resolvedComponents . push_back ( resolved ) ; <nl> - <nl> continue ; <nl> } <nl> <nl> mmm a / lib / Serialization / SerializeDoc . cpp <nl> ppp b / lib / Serialization / SerializeDoc . 
cpp <nl> static void writeDeclCommentTable ( <nl> return { false , E } ; <nl> } <nl> <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> bool walkToParameterListPre ( ParameterList * PL ) override { return false ; } <nl> } ; <nl> struct BasicDeclLocsTableWriter : public ASTWalker { <nl> <nl> std : : pair < bool , Stmt * > walkToStmtPre ( Stmt * S ) override { return { false , S } ; } <nl> std : : pair < bool , Expr * > walkToExprPre ( Expr * E ) override { return { false , E } ; } <nl> - bool walkToTypeLocPre ( TypeLoc & TL ) override { return false ; } <nl> bool walkToTypeReprPre ( TypeRepr * T ) override { return false ; } <nl> bool walkToParameterListPre ( ParameterList * PL ) override { return false ; } <nl> <nl> mmm a / test / Constraints / function_builder_diags . swift <nl> ppp b / test / Constraints / function_builder_diags . swift <nl> struct MyView { <nl> case . / / expected - error { { expected ' : ' after ' case ' } } <nl> } / / expected - error { { expected identifier after ' . ' expression } } <nl> } <nl> + <nl> + @ TupleBuilder var invalidConversion : Int { / / expected - error { { cannot convert value of type ' String ' to specified type ' Int ' } } <nl> + " " <nl> + } <nl> } <nl> mmm a / test / IDE / complete_pound_keypath . swift <nl> ppp b / test / IDE / complete_pound_keypath . 
swift <nl> <nl> <nl> / / RUN : % target - swift - ide - test ( mock - sdk : % clang - importer - sdk ) - code - completion - source - filename % s - code - completion - token = IN_KEYPATH_2 | % FileCheck - check - prefix = CHECK - IN_KEYPATH % s <nl> <nl> - / / RUN : % target - swift - ide - test ( mock - sdk : % clang - importer - sdk ) - code - completion - source - filename % s - code - completion - token = IN_KEYPATH_3 | % FileCheck - check - prefix = CHECK - IN_KEYPATH_BRIDGED_STRING % s <nl> - <nl> - / / RUN : % target - swift - ide - test ( mock - sdk : % clang - importer - sdk ) - code - completion - source - filename % s - code - completion - token = IN_KEYPATH_4 | % FileCheck - check - prefix = CHECK - IN_KEYPATH_BRIDGED_STRING % s <nl> - <nl> - / / RUN : % target - swift - ide - test ( mock - sdk : % clang - importer - sdk ) - code - completion - source - filename % s - code - completion - token = IN_KEYPATH_5 | % FileCheck - check - prefixes = CHECK - IN_KEYPATH , CHECK - IN_KEYPATH_OPT % s <nl> - <nl> - / / RUN : % target - swift - ide - test ( mock - sdk : % clang - importer - sdk ) - code - completion - source - filename % s - code - completion - token = IN_KEYPATH_6 | % FileCheck - check - prefixes = CHECK - IN_KEYPATH , CHECK - IN_KEYPATH_OPT % s <nl> - <nl> - / / RUN : % target - swift - ide - test ( mock - sdk : % clang - importer - sdk ) - code - completion - source - filename % s - code - completion - token = IN_KEYPATH_7 | % FileCheck - check - prefixes = CHECK - IN_KEYPATH_BRIDGED_STRING % s <nl> - <nl> <nl> / / REQUIRES : objc_interop <nl> <nl> func selectorArg1 ( obj : NSObject ) { <nl> acceptKeyPath ( # ^ KEYPATH_ARG ^ # <nl> } <nl> <nl> - @ objcMembers class ObjCClass : NSObject { <nl> + class ObjCClass : NSObject { <nl> var prop1 : String = " " <nl> var prop2 : ObjCClass ? <nl> - var prop3 : [ ObjCClass ] ? 
= [ ] <nl> - var prop4 : [ String : String ] = [ : ] <nl> <nl> func completeInKeyPath1 ( ) { <nl> _ = # keyPath ( # ^ IN_KEYPATH_1 ^ # <nl> func completeInKeyPath2 ( ) { <nl> _ = # keyPath ( ObjCClass . # ^ IN_KEYPATH_2 ^ # <nl> } <nl> <nl> - func completeInKeyPath3 ( ) { <nl> - _ = # keyPath ( ObjCClass . prop1 . # ^ IN_KEYPATH_3 ^ # <nl> - } <nl> - func completeInKeyPath3 ( ) { <nl> - _ = # keyPath ( String . # ^ IN_KEYPATH_4 ^ # <nl> - } <nl> - <nl> - func completeInKeyPath4 ( ) { <nl> - _ = # keyPath ( ObjCClass . prop2 . # ^ IN_KEYPATH_5 ^ # <nl> - } <nl> - <nl> - func completeInKeyPath5 ( ) { <nl> - _ = # keyPath ( ObjCClass . prop3 . # ^ IN_KEYPATH_6 ^ # <nl> - } <nl> - <nl> - func completeInKeyPath6 ( ) { <nl> - _ = # keyPath ( ObjCClass . prop4 . anythingHere . # ^ IN_KEYPATH_7 ^ # <nl> - } <nl> - <nl> / / CHECK - AFTER_POUND - NOT : keyPath <nl> <nl> / / CHECK - KEYPATH_ARG : Keyword / None / TypeRelation [ Identical ] : # keyPath ( { # @ objc property sequence # } ) [ # String # ] ; name = # keyPath ( @ objc property sequence ) <nl> <nl> / / CHECK - IN_KEYPATH : Decl [ InstanceVar ] / CurrNominal : prop1 [ # String # ] ; name = prop1 <nl> / / CHECK - IN_KEYPATH : Decl [ InstanceVar ] / CurrNominal : prop2 [ # ObjCClass ? # ] ; name = prop2 <nl> - / / CHECK - IN_KEYPATH : Decl [ InstanceVar ] / CurrNominal : prop3 [ # [ ObjCClass ] ? # ] ; name = prop3 <nl> / / CHECK - IN_KEYPATH : Decl [ InstanceVar ] / Super : hashValue [ # Int # ] ; name = hashValue <nl> <nl> - / / Make sure we unwrap optionals ( members of Optional itself are invalid in this context ) <nl> - / / <nl> - / / CHECK - IN_KEYPATH_OPT - NOT : name = map <nl> - <nl> - / / Make sure we handle bridged types ( i . e . 
show NSString members rather than String members ) <nl> - / / <nl> - / / CHECK - IN_KEYPATH_BRIDGED_STRING : Decl [ InstanceVar ] / CurrNominal / IsSystem : urlsInText [ # [ URL ] # ] ; name = urlsInText <nl> - / / CHECK - IN_KEYPATH_BRIDGED_STRING : Decl [ InstanceVar ] / CurrNominal / IsSystem : uppercased [ # String ! # ] ; name = uppercased <nl> - / / CHECK - IN_KEYPATH_BRIDGED_STRING - NOT : name = count <nl> - <nl> <nl> mmm a / test / Index / index_keypaths . swift <nl> ppp b / test / Index / index_keypaths . swift <nl> <nl> - / / RUN : % target - swift - ide - test ( mock - sdk : % clang - importer - sdk ) - print - indexed - symbols - source - filename % s | % FileCheck % s <nl> + / / RUN : % target - swift - ide - test - print - indexed - symbols - source - filename % s | % FileCheck % s <nl> / / REQUIRES : objc_interop <nl> <nl> - import Foundation <nl> - <nl> struct MyStruct { <nl> struct Inner { <nl> let myProp = 1 <nl> } <nl> } <nl> <nl> + class MyClass { <nl> + class Inner { <nl> + @ objc var myProp = 1 <nl> + } <nl> + } <nl> + <nl> let a = \ MyStruct . Inner . myProp <nl> / / CHECK : [ [ @ LINE - 1 ] ] : 25 | { { . * } } | myProp <nl> / / CHECK : [ [ @ LINE - 2 ] ] : 10 | { { . * } } | MyStruct <nl> / / CHECK : [ [ @ LINE - 3 ] ] : 19 | { { . * } } | Inner <nl> let b : KeyPath < MyStruct . Inner , Int > = \ . myProp <nl> / / CHECK : [ [ @ LINE - 1 ] ] : 41 | { { . * } } | myProp <nl> - <nl> - @ objc class MyClass : NSObject { <nl> - @ objc class Inner : NSObject { <nl> - @ objc var myProp = 1 <nl> - @ objc var otherProp : [ String : MyClass . Inner ] = [ : ] <nl> - func method ( ) { <nl> - let c : String = # keyPath ( myProp ) <nl> - / / CHECK : [ [ @ LINE - 1 ] ] : 32 | { { . * } } | myProp <nl> - } <nl> - } <nl> - } <nl> - <nl> - let d : String = # keyPath ( MyClass . Inner . myProp ) <nl> - / / CHECK : [ [ @ LINE - 1 ] ] : 26 | { { . * } } | MyClass <nl> - / / CHECK : [ [ @ LINE - 2 ] ] : 34 | { { . 
* } } | Inner <nl> - / / CHECK : [ [ @ LINE - 3 ] ] : 40 | { { . * } } | myProp <nl> - <nl> - let e = \ MyClass . Inner . myProp <nl> + let c = \ MyClass . Inner . myProp <nl> / / CHECK : [ [ @ LINE - 1 ] ] : 24 | { { . * } } | myProp <nl> / / CHECK : [ [ @ LINE - 2 ] ] : 10 | { { . * } } | MyClass <nl> / / CHECK : [ [ @ LINE - 3 ] ] : 18 | { { . * } } | Inner <nl> - <nl> - let f : KeyPath < MyClass . Inner , Int > = \ . myProp <nl> + let d : KeyPath < MyClass . Inner , Int > = \ . myProp <nl> / / CHECK : [ [ @ LINE - 1 ] ] : 40 | { { . * } } | myProp <nl> - <nl> - let g : String = # keyPath ( MyClass . Inner . otherProp . someDictKey . myProp ) <nl> - / / CHECK : [ [ @ LINE - 1 ] ] : 26 | { { . * } } | MyClass <nl> - / / CHECK : [ [ @ LINE - 2 ] ] : 34 | { { . * } } | Inner <nl> - / / CHECK : [ [ @ LINE - 3 ] ] : 40 | { { . * } } | otherProp <nl> - / / CHECK : [ [ @ LINE - 4 ] ] : 62 | { { . * } } | myProp <nl> new file mode 100644 <nl> index 000000000000 . . 68267ed3dbca <nl> mmm / dev / null <nl> ppp b / test / SILOptimizer / existential_transform_extras_ossa . 
sil <nl> <nl> + / / RUN : % target - sil - opt - enable - sil - verify - all % s - enable - existential - specializer - existential - specializer | % FileCheck % s <nl> + <nl> + / / Additional tests for existential_specializer <nl> + <nl> + import Builtin <nl> + import Swift <nl> + import SwiftShims <nl> + <nl> + internal protocol P { <nl> + func foo ( ) - > Int32 <nl> + } <nl> + <nl> + internal class Klass1 : P { <nl> + @ inline ( never ) func foo ( ) - > Int32 <nl> + init ( ) <nl> + } <nl> + <nl> + internal class Klass2 : P { <nl> + @ inline ( never ) func foo ( ) - > Int32 <nl> + init ( ) <nl> + } <nl> + <nl> + @ inline ( never ) internal func wrap_foo_ncp ( a : inout P , b : inout P ) - > Int32 <nl> + <nl> + @ inline ( never ) func ncp ( ) <nl> + <nl> + sil hidden [ ossa ] [ noinline ] @ $ s7dealloc3ncpyyF : $ @ convention ( thin ) ( ) - > Int32 { <nl> + bb0 : <nl> + % 0 = alloc_stack $ P , var , name " magic2 " <nl> + % 1 = alloc_ref $ Klass1 <nl> + % 4 = init_existential_addr % 0 : $ * P , $ Klass1 <nl> + store % 1 to [ init ] % 4 : $ * Klass1 <nl> + % 6 = alloc_stack $ P , var , name " magic3 " <nl> + % 7 = alloc_ref $ Klass1 <nl> + % 10 = init_existential_addr % 6 : $ * P , $ Klass1 <nl> + store % 7 to [ init ] % 10 : $ * Klass1 <nl> + % 12 = function_ref @ $ s7dealloc12wrap_foo_ncp1a1bSiAA1P_pz_AaE_pztF : $ @ convention ( thin ) ( @ in P , @ in P ) - > Int32 <nl> + % 13 = apply % 12 ( % 0 , % 6 ) : $ @ convention ( thin ) ( @ in P , @ in P ) - > Int32 <nl> + debug_value % 13 : $ Int32 , let , name " x " <nl> + % 14 = alloc_stack $ P , var , name " magic4 " <nl> + % 15 = alloc_ref $ Klass1 <nl> + % 16 = init_existential_addr % 14 : $ * P , $ Klass1 <nl> + store % 15 to [ init ] % 16 : $ * Klass1 <nl> + % 17 = function_ref @ $ s7dealloc20wrap_foo_ncp_another1aSiAA1P_pz_tF : $ @ convention ( thin ) ( @ inout P ) - > Int32 <nl> + % 18 = apply % 17 ( % 14 ) : $ @ convention ( thin ) ( @ inout P ) - > Int32 <nl> + % 24 = struct_extract % 13 : $ Int32 , # Int32 . 
_value <nl> + % 25 = struct_extract % 18 : $ Int32 , # Int32 . _value <nl> + % 26 = integer_literal $ Builtin . Int1 , - 1 <nl> + % 27 = builtin " sadd_with_overflow_Int32 " ( % 24 : $ Builtin . Int32 , % 25 : $ Builtin . Int32 , % 26 : $ Builtin . Int1 ) : $ ( Builtin . Int32 , Builtin . Int1 ) <nl> + % 28 = tuple_extract % 27 : $ ( Builtin . Int32 , Builtin . Int1 ) , 0 <nl> + % 29 = tuple_extract % 27 : $ ( Builtin . Int32 , Builtin . Int1 ) , 1 <nl> + cond_fail % 29 : $ Builtin . Int1 <nl> + % 31 = struct $ Int32 ( % 28 : $ Builtin . Int32 ) <nl> + destroy_addr % 14 : $ * P <nl> + dealloc_stack % 14 : $ * P <nl> + dealloc_stack % 6 : $ * P <nl> + dealloc_stack % 0 : $ * P <nl> + return % 31 : $ Int32 <nl> + } <nl> + <nl> + / / CHECK - LABEL : sil public_external [ serialized ] [ ossa ] @ $ s7dealloc20wrap_foo_ncp_another1aSiAA1P_pz_tF : $ @ convention ( thin ) ( @ inout P ) - > Int32 { <nl> + / / CHECK : bb0 ( % 0 : $ * P ) : <nl> + / / CHECK : debug_value_addr <nl> + / / CHECK : alloc_stack <nl> + / / CHECK : copy_addr <nl> + / / CHECK : open_existential_addr <nl> + / / CHECK : witness_method <nl> + / / CHECK : apply <nl> + / / CHECK : destroy_addr <nl> + / / CHECK : dealloc_stack <nl> + / / CHECK : return <nl> + / / CHECK - LABEL : } / / end sil function ' $ s7dealloc20wrap_foo_ncp_another1aSiAA1P_pz_tF ' <nl> + sil public_external [ ossa ] [ serialized ] @ $ s7dealloc20wrap_foo_ncp_another1aSiAA1P_pz_tF : $ @ convention ( thin ) ( @ inout P ) - > Int32 { <nl> + bb0 ( % 0 : $ * P ) : <nl> + debug_value_addr % 0 : $ * P , var , name " a " , argno 1 <nl> + % 2 = alloc_stack $ P <nl> + copy_addr % 0 to [ initialization ] % 2 : $ * P <nl> + % 4 = open_existential_addr immutable_access % 2 : $ * P to $ * @ opened ( " EE9F89E4 - ECF4 - 11E8 - 8DDF - D0817AD4059B " ) P <nl> + % 5 = witness_method $ @ opened ( " EE9F89E4 - ECF4 - 11E8 - 8DDF - D0817AD4059B " ) P , # P . 
foo : < Self where Self : P > ( Self ) - > ( ) - > Int32 , % 4 : $ * @ opened ( " EE9F89E4 - ECF4 - 11E8 - 8DDF - D0817AD4059B " ) P : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + % 6 = apply % 5 < @ opened ( " EE9F89E4 - ECF4 - 11E8 - 8DDF - D0817AD4059B " ) P > ( % 4 ) : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + destroy_addr % 2 : $ * P <nl> + dealloc_stack % 2 : $ * P <nl> + return % 6 : $ Int32 <nl> + } / / end sil function ' $ s7dealloc20wrap_foo_ncp_another1aSiAA1P_pz_tF ' <nl> + <nl> + sil shared [ ossa ] [ noinline ] @ $ s7dealloc6Klass1C3fooSiyFTf4d_n : $ @ convention ( thin ) ( ) - > Int32 { <nl> + bb0 : <nl> + % 0 = integer_literal $ Builtin . Int32 , 10 <nl> + % 1 = struct $ Int32 ( % 0 : $ Builtin . Int32 ) <nl> + return % 1 : $ Int32 <nl> + } <nl> + <nl> + sil_global hidden [ let ] @ $ global_var : $ P <nl> + <nl> + / / CHECK - LABEL : sil hidden [ noinline ] [ ossa ] @ $ helper : $ @ convention ( thin ) ( @ in P ) - > Int32 { <nl> + / / CHECK : bb0 ( % 0 : $ * P ) : <nl> + / / CHECK : debug_value_addr <nl> + / / CHECK : alloc_stack <nl> + / / CHECK : copy_addr <nl> + / / CHECK : destroy_addr <nl> + / / CHECK : open_existential_addr <nl> + / / CHECK : witness_method <nl> + / / CHECK : apply <nl> + / / CHECK : dealloc_stack <nl> + / / CHECK : return <nl> + / / CHECK - LABEL : } / / end sil function ' $ helper ' <nl> + sil hidden [ ossa ] [ noinline ] @ $ helper : $ @ convention ( thin ) ( @ in P ) - > Int32 { <nl> + bb0 ( % 0 : $ * P ) : <nl> + debug_value_addr % 0 : $ * P , var , name " a " , argno 1 <nl> + % 4 = alloc_stack $ P <nl> + copy_addr % 0 to [ initialization ] % 4 : $ * P <nl> + destroy_addr % 0 : $ * P <nl> + % 6 = open_existential_addr immutable_access % 4 : $ * P to $ * @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - D0817AD4059B " ) P <nl> + % 7 = witness_method $ @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - 
D0817AD4059B " ) P , # P . foo : < Self where Self : P > ( Self ) - > ( ) - > Int32 , % 6 : $ * @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - D0817AD4059B " ) P : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + % 8 = apply % 7 < @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - D0817AD4059B " ) P > ( % 6 ) : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + dealloc_stack % 4 : $ * P <nl> + return % 8 : $ Int32 <nl> + } <nl> + <nl> + sil [ ossa ] @ global_addr_init : $ @ convention ( thin ) ( Builtin . Int1 ) - > Int32 { <nl> + bb0 ( % 0 : $ Builtin . Int1 ) : <nl> + alloc_global @ $ global_var <nl> + % 1 = global_addr @ $ global_var : $ * P <nl> + cond_br % 0 , bb1 , bb2 <nl> + <nl> + bb1 : <nl> + % 2 = init_existential_addr % 1 : $ * P , $ Klass1 <nl> + % 3 = alloc_ref $ Klass1 <nl> + store % 3 to [ init ] % 2 : $ * Klass1 <nl> + br bb3 <nl> + <nl> + bb2 : <nl> + % 5 = init_existential_addr % 1 : $ * P , $ Klass2 <nl> + % 6 = alloc_ref $ Klass2 <nl> + store % 6 to [ init ] % 5 : $ * Klass2 <nl> + br bb3 <nl> + <nl> + bb3 : <nl> + % 12 = function_ref @ $ helper : $ @ convention ( thin ) ( @ in P ) - > Int32 <nl> + % 13 = apply % 12 ( % 1 ) : $ @ convention ( thin ) ( @ in P ) - > Int32 <nl> + return % 13 : $ Int32 <nl> + } <nl> + <nl> + / / CHECK - LABEL : sil shared [ noinline ] [ ossa ] @ $ s7dealloc12wrap_foo_ncp1a1bSiAA1P_pz_AaE_pztFTf4ee_n : $ @ convention ( thin ) < τ_0_0 , τ_0_1 where τ_0_0 : P , τ_0_1 : P > ( @ in τ_0_0 , @ in τ_0_1 ) - > Int32 { <nl> + / / CHECK : bb0 ( % 0 : $ * τ_0_0 , % 1 : $ * τ_0_1 ) : <nl> + / / CHECK : alloc_stack <nl> + / / CHECK : init_existential_addr <nl> + / / CHECK : copy_addr <nl> + / / CHECK : alloc_stack <nl> + / / CHECK : init_existential_addr <nl> + / / CHECK : copy_addr <nl> + / / CHECK : alloc_stack <nl> + / / CHECK : copy_addr <nl> + / / CHECK : destroy_addr <nl> + / / CHECK : open_existential_addr <nl> + / / 
CHECK : witness_method <nl> + / / CHECK : apply <nl> + / / CHECK : alloc_stack <nl> + / / CHECK : copy_addr <nl> + / / CHECK : destroy_addr <nl> + / / CHECK : open_existential_addr <nl> + / / CHECK : witness_method <nl> + / / CHECK : apply <nl> + / / CHECK : dealloc_stack <nl> + / / CHECK : dealloc_stack <nl> + / / CHECK : dealloc_stack <nl> + / / CHECK : dealloc_stack <nl> + / / CHECK : return <nl> + / / CHECK - LABEL : } / / end sil function ' $ s7dealloc12wrap_foo_ncp1a1bSiAA1P_pz_AaE_pztFTf4ee_n ' <nl> + sil hidden [ ossa ] [ noinline ] @ $ s7dealloc12wrap_foo_ncp1a1bSiAA1P_pz_AaE_pztF : $ @ convention ( thin ) ( @ in P , @ in P ) - > Int32 { <nl> + bb0 ( % 0 : $ * P , % 1 : $ * P ) : <nl> + debug_value_addr % 0 : $ * P , var , name " a " , argno 1 <nl> + debug_value_addr % 1 : $ * P , var , name " b " , argno 2 <nl> + % 4 = alloc_stack $ P <nl> + copy_addr % 0 to [ initialization ] % 4 : $ * P <nl> + destroy_addr % 0 : $ * P <nl> + % 6 = open_existential_addr immutable_access % 4 : $ * P to $ * @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - D0817AD4059B " ) P <nl> + % 7 = witness_method $ @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - D0817AD4059B " ) P , # P . foo : < Self where Self : P > ( Self ) - > ( ) - > Int32 , % 6 : $ * @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - D0817AD4059B " ) P : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + % 8 = apply % 7 < @ opened ( " 3CB58EC4 - ECED - 11E8 - 9798 - D0817AD4059B " ) P > ( % 6 ) : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + % 9 = alloc_stack $ P <nl> + copy_addr % 1 to [ initialization ] % 9 : $ * P <nl> + destroy_addr % 1 : $ * P <nl> + % 11 = open_existential_addr immutable_access % 9 : $ * P to $ * @ opened ( " 3CB58FAA - ECED - 11E8 - 9798 - D0817AD4059B " ) P <nl> + % 12 = witness_method $ @ opened ( " 3CB58FAA - ECED - 11E8 - 9798 - D0817AD4059B " ) P , # P . 
foo : < Self where Self : P > ( Self ) - > ( ) - > Int32 , % 11 : $ * @ opened ( " 3CB58FAA - ECED - 11E8 - 9798 - D0817AD4059B " ) P : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + % 13 = apply % 12 < @ opened ( " 3CB58FAA - ECED - 11E8 - 9798 - D0817AD4059B " ) P > ( % 11 ) : $ @ convention ( witness_method : P ) < τ_0_0 where τ_0_0 : P > ( @ in_guaranteed τ_0_0 ) - > Int32 <nl> + % 14 = struct_extract % 8 : $ Int32 , # Int32 . _value <nl> + % 15 = struct_extract % 13 : $ Int32 , # Int32 . _value <nl> + % 16 = integer_literal $ Builtin . Int1 , - 1 <nl> + % 17 = builtin " sadd_with_overflow_Int32 " ( % 14 : $ Builtin . Int32 , % 15 : $ Builtin . Int32 , % 16 : $ Builtin . Int1 ) : $ ( Builtin . Int32 , Builtin . Int1 ) <nl> + % 18 = tuple_extract % 17 : $ ( Builtin . Int32 , Builtin . Int1 ) , 0 <nl> + % 19 = tuple_extract % 17 : $ ( Builtin . Int32 , Builtin . Int1 ) , 1 <nl> + cond_fail % 19 : $ Builtin . Int1 <nl> + % 21 = struct $ Int32 ( % 18 : $ Builtin . Int32 ) <nl> + dealloc_stack % 9 : $ * P <nl> + dealloc_stack % 4 : $ * P <nl> + return % 21 : $ Int32 <nl> + } <nl> + <nl> + sil_witness_table hidden Klass1 : P module dealloc { <nl> + method # P . foo : < Self where Self : P > ( Self ) - > ( ) - > Int32 : nil <nl> + } <nl> + <nl> + sil_witness_table hidden Klass2 : P module dealloc { <nl> + method # P . foo : < Self where Self : P > ( Self ) - > ( ) - > Int32 : nil <nl> + } <nl> + <nl> + / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + / / Test composite conformances with superclass constraints where one of <nl> + / / the protocol constraints is satisfied by the superclass constraint . <nl> + / / <nl> + / / < rdar : / / problem / 57025861 > " Assertion failed : ( conformances . 
size ( ) <nl> + / / = = numConformanceRequirements ) " in ExistentialSpecializer on <nl> + protocol Plotable { } <nl> + <nl> + class PlotLayer : Plotable { <nl> + init ( ) <nl> + } <nl> + <nl> + protocol InView { } <nl> + <nl> + class PlotLayerInView : PlotLayer & InView { <nl> + override init ( ) <nl> + } <nl> + <nl> + class PlotView { <nl> + @ _hasStorage @ _hasInitialValue var layers : Container < PlotLayer & Plotable & InView > { get set } <nl> + public func resolveLayers ( ) <nl> + init ( ) <nl> + } <nl> + <nl> + struct Container < T > { <nl> + @ _hasStorage @ _hasInitialValue var val : T { get set } <nl> + } <nl> + <nl> + / / Check that the init_existential instruction was created with all <nl> + / / three requirements ( the generic parameter only has two <nl> + / / requirements ) . Relies on assertions during specialization and on <nl> + / / the SILVerifier to catch other inconsistencies . <nl> + / / <nl> + / / CHECK - LABEL : sil shared [ ossa ] @ $ s40testExistentialSpecializeCompositeHelperTf4en_n : $ @ convention ( thin ) < τ_0_0 where τ_0_0 : PlotLayer , τ_0_0 : InView > ( @ owned τ_0_0 , @ inout Container < PlotLayer & InView & Plotable > ) - > ( ) { <nl> + / / CHECK : bb0 ( % 0 : @ owned $ τ_0_0 , % 1 : $ * Container < PlotLayer & InView & Plotable > ) : <nl> + / / CHECK : init_existential_ref % 0 : $ τ_0_0 : $ τ_0_0 , $ PlotLayer & InView & Plotable <nl> + / / CHECK - LABEL : } / / end sil function ' $ s40testExistentialSpecializeCompositeHelperTf4en_n ' <nl> + sil shared [ ossa ] @ testExistentialSpecializeCompositeHelper : $ @ convention ( method ) ( @ owned PlotLayer & Plotable & InView , @ inout Container < PlotLayer & Plotable & InView > ) - > ( ) { <nl> + bb0 ( % 0 : @ owned $ PlotLayer & Plotable & InView , % 1 : $ * Container < PlotLayer & Plotable & InView > ) : <nl> + % adr = struct_element_addr % 1 : $ * Container < PlotLayer & Plotable & InView > , # Container . 
val <nl> + store % 0 to [ assign ] % adr : $ * PlotLayer & Plotable & InView <nl> + % v = tuple ( ) <nl> + return % v : $ ( ) <nl> + } <nl> + <nl> + sil [ ossa ] @ testExistentialSpecializeComposite : $ @ convention ( method ) ( @ guaranteed PlotView ) - > ( ) { <nl> + bb0 ( % 0 : @ guaranteed $ PlotView ) : <nl> + % ref = alloc_ref $ PlotLayerInView <nl> + % ref1 = copy_value % ref : $ PlotLayerInView <nl> + % exis = init_existential_ref % ref1 : $ PlotLayerInView : $ PlotLayerInView , $ PlotLayer & Plotable & InView <nl> + % array = ref_element_addr % 0 : $ PlotView , # PlotView . layers <nl> + % f = function_ref @ testExistentialSpecializeCompositeHelper : $ @ convention ( method ) ( @ owned PlotLayer & Plotable & InView , @ inout Container < PlotLayer & Plotable & InView > ) - > ( ) <nl> + % call = apply % f ( % exis , % array ) : $ @ convention ( method ) ( @ owned PlotLayer & Plotable & InView , @ inout Container < PlotLayer & Plotable & InView > ) - > ( ) <nl> + destroy_value % ref : $ PlotLayerInView <nl> + % v = tuple ( ) <nl> + return % v : $ ( ) <nl> + } <nl> mmm a / test / ScanDependencies / module_deps . swift <nl> ppp b / test / ScanDependencies / module_deps . swift <nl> <nl> / / Check the contents of the JSON output <nl> / / RUN : % FileCheck % s < % t / deps . json <nl> <nl> + / / Check the contents of the JSON output <nl> + / / RUN : % FileCheck % s - check - prefix CHECK - NO - SEARCH - PATHS < % t / deps . json <nl> + <nl> / / Check the make - style dependencies file <nl> / / RUN : % FileCheck % s - check - prefix CHECK - MAKE - DEPS < % t / deps . d <nl> <nl> import SubE <nl> / / CHECK : " commandLine " : [ <nl> / / CHECK : " - compile - module - from - interface " <nl> / / CHECK : " - target " <nl> - / / CHECK : " - sdk " <nl> / / CHECK : " - module - name " <nl> / / CHECK : " G " <nl> / / CHECK : " - swift - version " <nl> import SubE <nl> / / / mmmmmm - - Clang module SwiftShims <nl> / / CHECK - LABEL : " modulePath " : " SwiftShims . 
pcm " , <nl> <nl> + / / CHECK - NO - SEARCH - PATHS - NOT : " - I " <nl> + / / CHECK - NO - SEARCH - PATHS - NOT : " - sdk " <nl> + / / CHECK - NO - SEARCH - PATHS - NOT : " - F " <nl> + / / CHECK - NO - SEARCH - PATHS - NOT : " - prebuilt - module - cache - path " <nl> <nl> / / Check make - style dependencies <nl> / / CHECK - MAKE - DEPS : module_deps . swift <nl> mmm a / test / expr / primary / keypath / keypath - objc . swift <nl> ppp b / test / expr / primary / keypath / keypath - objc . swift <nl> func testKeyPath ( a : A , b : B ) { <nl> let _ : String = # keyPath ( A . propString ) <nl> <nl> / / Property of String property ( which looks on NSString ) <nl> - let _ : String = # keyPath ( A . propString . URLsInText ) / / expected - error { { ' URLsInText ' has been renamed to ' urlsInText ' } } <nl> + let _ : String = # keyPath ( A . propString . URLsInText ) <nl> <nl> / / String property with a suffix <nl> let _ : String = # keyPath ( A . propString ) . description <nl> func testKeyPath ( a : A , b : B ) { <nl> <nl> / / Array property ( make sure we look at the array element ) . <nl> let _ : String = # keyPath ( A . propArray ) <nl> - let _ : String = # keyPath ( A . propArray . URLsInText ) / / expected - error { { ' URLsInText ' has been renamed to ' urlsInText ' } } <nl> + let _ : String = # keyPath ( A . propArray . URLsInText ) <nl> <nl> / / Dictionary property ( make sure we look at the value type ) . <nl> let _ : String = # keyPath ( A . propDict . anyKeyName ) <nl> func testKeyPath ( a : A , b : B ) { <nl> <nl> / / Set property ( make sure we look at the set element ) . <nl> let _ : String = # keyPath ( A . propSet ) <nl> - let _ : String = # keyPath ( A . propSet . URLsInText ) / / expected - error { { ' URLsInText ' has been renamed to ' urlsInText ' } } <nl> + let _ : String = # keyPath ( A . propSet . URLsInText ) <nl> <nl> / / AnyObject property <nl> - let _ : String = # keyPath ( A . propAnyObject . 
URLsInText ) / / expected - error { { ' URLsInText ' has been renamed to ' urlsInText ' } } <nl> + let _ : String = # keyPath ( A . propAnyObject . URLsInText ) <nl> let _ : String = # keyPath ( A . propAnyObject . propA ) <nl> let _ : String = # keyPath ( A . propAnyObject . propB ) <nl> let _ : String = # keyPath ( A . propAnyObject . description ) <nl> <nl> / / NSString property <nl> - let _ : String = # keyPath ( A . propNSString . URLsInText ) / / expected - error { { ' URLsInText ' has been renamed to ' urlsInText ' } } <nl> + let _ : String = # keyPath ( A . propNSString . URLsInText ) <nl> <nl> / / NSArray property ( AnyObject array element ) . <nl> let _ : String = # keyPath ( A . propNSArray ) <nl> - let _ : String = # keyPath ( A . propNSArray . URLsInText ) / / expected - error { { ' URLsInText ' has been renamed to ' urlsInText ' } } <nl> + let _ : String = # keyPath ( A . propNSArray . URLsInText ) <nl> <nl> / / NSDictionary property ( AnyObject value type ) . <nl> let _ : String = # keyPath ( A . propNSDict . anyKeyName ) <nl> func testKeyPath ( a : A , b : B ) { <nl> <nl> / / NSSet property ( AnyObject set element ) . <nl> let _ : String = # keyPath ( A . propNSSet ) <nl> - let _ : String = # keyPath ( A . propNSSet . URLsInText ) / / expected - error { { ' URLsInText ' has been renamed to ' urlsInText ' } } <nl> + let _ : String = # keyPath ( A . propNSSet . URLsInText ) <nl> <nl> / / Property with keyword name . <nl> let _ : String = # keyPath ( A . repeat ) <nl> mmm a / test / lit . cfg <nl> ppp b / test / lit . cfg <nl> if ( run_os = = ' maccatalyst ' ) : <nl> target_os_abi = ' macosx ' <nl> target_os_is_maccatalyst = " TRUE " <nl> config . available_features . add ( " OS = ios " ) <nl> + # macOS on ASi uses the stable ABI <nl> + if run_os in ( ' macosx ' , ) and run_cpu in ( ' arm64 ' , ) : <nl> + target_mandates_stable_abi = " TRUE " <nl> + config . available_features . 
add ( ' swift_only_stable_abi ' ) <nl> if ( run_os in [ ' linux - gnu ' , ' linux - gnueabihf ' , ' freebsd ' , ' openbsd ' , ' windows - cygnus ' , ' windows - gnu ' , ' windows - msvc ' , ' linux - android ' , ' linux - androideabi ' ] ) : <nl> target_mandates_stable_abi = " TRUE " <nl> config . available_features . add ( ' swift_only_stable_abi ' ) <nl>
|
Merge pull request from nathawes / merge - master - into - master - next - 2
|
apple/swift
|
a79df896f07191a46f64e858890bc184768a8610
|
2020-08-05T16:59:20Z
|
mmm a / include / v8 - profiler . h <nl> ppp b / include / v8 - profiler . h <nl> class V8_EXPORT HeapGraphNode { <nl> SnapshotObjectId GetId ( ) const ; <nl> <nl> / * * Returns node ' s own size , in bytes . * / <nl> - int GetSelfSize ( ) const ; <nl> + V8_DEPRECATED ( " Use GetShallowSize instead " , <nl> + int GetSelfSize ( ) const ) ; <nl> + <nl> + / * * Returns node ' s own size , in bytes . * / <nl> + size_t GetShallowSize ( ) const ; <nl> <nl> / * * Returns child nodes count of the node . * / <nl> int GetChildrenCount ( ) const ; <nl> mmm a / src / api . cc <nl> ppp b / src / api . cc <nl> SnapshotObjectId HeapGraphNode : : GetId ( ) const { <nl> <nl> <nl> int HeapGraphNode : : GetSelfSize ( ) const { <nl> + size_t size = ToInternal ( this ) - > self_size ( ) ; <nl> + CHECK ( size < = static_cast < size_t > ( internal : : kMaxInt ) ) ; <nl> + return static_cast < int > ( size ) ; <nl> + } <nl> + <nl> + <nl> + size_t HeapGraphNode : : GetShallowSize ( ) const { <nl> return ToInternal ( this ) - > self_size ( ) ; <nl> } <nl> <nl> mmm a / src / heap - snapshot - generator . cc <nl> ppp b / src / heap - snapshot - generator . cc <nl> HeapEntry : : HeapEntry ( HeapSnapshot * snapshot , <nl> Type type , <nl> const char * name , <nl> SnapshotObjectId id , <nl> - int self_size ) <nl> + size_t self_size ) <nl> : type_ ( type ) , <nl> children_count_ ( 0 ) , <nl> children_index_ ( - 1 ) , <nl> void HeapEntry : : SetIndexedReference ( HeapGraphEdge : : Type type , <nl> void HeapEntry : : Print ( <nl> const char * prefix , const char * edge_name , int max_depth , int indent ) { <nl> STATIC_CHECK ( sizeof ( unsigned ) = = sizeof ( id ( ) ) ) ; <nl> - OS : : Print ( " % 6d @ % 6u % * c % s % s : " , <nl> + OS : : Print ( " % 6 " V8PRIuPTR " @ % 6u % * c % s % s : " , <nl> self_size ( ) , id ( ) , indent , ' ' , prefix , edge_name ) ; <nl> if ( type ( ) ! = kString ) { <nl> OS : : Print ( " % s % . 
40s \ n " , TypeAsString ( ) , name_ ) ; <nl> template < > struct SnapshotSizeConstants < 4 > { <nl> <nl> template < > struct SnapshotSizeConstants < 8 > { <nl> static const int kExpectedHeapGraphEdgeSize = 24 ; <nl> - static const int kExpectedHeapEntrySize = 32 ; <nl> + static const int kExpectedHeapEntrySize = 40 ; <nl> } ; <nl> <nl> } / / namespace <nl> HeapEntry * HeapSnapshot : : AddGcSubrootEntry ( int tag ) { <nl> HeapEntry * HeapSnapshot : : AddEntry ( HeapEntry : : Type type , <nl> const char * name , <nl> SnapshotObjectId id , <nl> - int size ) { <nl> + size_t size ) { <nl> HeapEntry entry ( this , type , name , id , size ) ; <nl> entries_ . Add ( entry ) ; <nl> return & entries_ . last ( ) ; <nl> HeapEntry * V8HeapExplorer : : AddEntry ( HeapObject * object , <nl> HeapEntry * V8HeapExplorer : : AddEntry ( Address address , <nl> HeapEntry : : Type type , <nl> const char * name , <nl> - int size ) { <nl> + size_t size ) { <nl> SnapshotObjectId object_id = heap_object_map_ - > FindOrAddEntry ( address , size ) ; <nl> return snapshot_ - > AddEntry ( type , name , object_id , size ) ; <nl> } <nl> void V8HeapExplorer : : ExtractAllocationSiteReferences ( int entry , <nl> <nl> class JSArrayBufferDataEntryAllocator : public HeapEntriesAllocator { <nl> public : <nl> - JSArrayBufferDataEntryAllocator ( int size , V8HeapExplorer * explorer ) <nl> + JSArrayBufferDataEntryAllocator ( size_t size , V8HeapExplorer * explorer ) <nl> : size_ ( size ) <nl> , explorer_ ( explorer ) { <nl> } <nl> class JSArrayBufferDataEntryAllocator : public HeapEntriesAllocator { <nl> HeapEntry : : kNative , " system / JSArrayBufferData " , size_ ) ; <nl> } <nl> private : <nl> - int size_ ; <nl> + size_t size_ ; <nl> V8HeapExplorer * explorer_ ; <nl> } ; <nl> <nl> void V8HeapExplorer : : ExtractJSArrayBufferReferences ( <nl> if ( ! 
buffer - > backing_store ( ) ) <nl> return ; <nl> size_t data_size = NumberToSize ( heap_ - > isolate ( ) , buffer - > byte_length ( ) ) ; <nl> - CHECK ( data_size < = static_cast < size_t > ( kMaxInt ) ) ; <nl> - JSArrayBufferDataEntryAllocator allocator ( static_cast < int > ( data_size ) , this ) ; <nl> + JSArrayBufferDataEntryAllocator allocator ( data_size , this ) ; <nl> HeapEntry * data_entry = <nl> filler_ - > FindOrAddEntry ( buffer - > backing_store ( ) , & allocator ) ; <nl> filler_ - > SetNamedReference ( HeapGraphEdge : : kInternal , <nl> int HeapSnapshotJSONSerializer : : GetStringId ( const char * s ) { <nl> } <nl> <nl> <nl> - static int utoa ( unsigned value , const Vector < char > & buffer , int buffer_pos ) { <nl> + namespace { <nl> + <nl> + template < size_t size > struct ToUnsigned ; <nl> + <nl> + template < > struct ToUnsigned < 4 > { <nl> + typedef uint32_t Type ; <nl> + } ; <nl> + <nl> + template < > struct ToUnsigned < 8 > { <nl> + typedef uint64_t Type ; <nl> + } ; <nl> + <nl> + } / / namespace <nl> + <nl> + <nl> + template < typename T > <nl> + static int utoa_impl ( T value , const Vector < char > & buffer , int buffer_pos ) { <nl> + STATIC_CHECK ( static_cast < T > ( - 1 ) > 0 ) ; / / Check that T is unsigned <nl> int number_of_digits = 0 ; <nl> - unsigned t = value ; <nl> + T t = value ; <nl> do { <nl> + + number_of_digits ; <nl> } while ( t / = 10 ) ; <nl> static int utoa ( unsigned value , const Vector < char > & buffer , int buffer_pos ) { <nl> buffer_pos + = number_of_digits ; <nl> int result = buffer_pos ; <nl> do { <nl> - int last_digit = value % 10 ; <nl> + int last_digit = static_cast < int > ( value % 10 ) ; <nl> buffer [ - - buffer_pos ] = ' 0 ' + last_digit ; <nl> value / = 10 ; <nl> } while ( value ) ; <nl> static int utoa ( unsigned value , const Vector < char > & buffer , int buffer_pos ) { <nl> } <nl> <nl> <nl> + template < typename T > <nl> + static int utoa ( T value , const Vector < char > & buffer , int buffer_pos ) { 
<nl> + typename ToUnsigned < sizeof ( value ) > : : Type unsigned_value = value ; <nl> + STATIC_CHECK ( sizeof ( value ) = = sizeof ( unsigned_value ) ) ; <nl> + return utoa_impl ( unsigned_value , buffer , buffer_pos ) ; <nl> + } <nl> + <nl> + <nl> void HeapSnapshotJSONSerializer : : SerializeEdge ( HeapGraphEdge * edge , <nl> bool first_edge ) { <nl> / / The buffer needs space for 3 unsigned ints , 3 commas , \ n and \ 0 <nl> void HeapSnapshotJSONSerializer : : SerializeEdges ( ) { <nl> <nl> <nl> void HeapSnapshotJSONSerializer : : SerializeNode ( HeapEntry * entry ) { <nl> - / / The buffer needs space for 5 unsigned ints , 5 commas , \ n and \ 0 <nl> + / / The buffer needs space for 4 unsigned ints , 1 size_t , 5 commas , \ n and \ 0 <nl> static const int kBufferSize = <nl> - 5 * MaxDecimalDigitsIn < sizeof ( unsigned ) > : : kUnsigned / / NOLINT <nl> + 4 * MaxDecimalDigitsIn < sizeof ( unsigned ) > : : kUnsigned / / NOLINT <nl> + + MaxDecimalDigitsIn < sizeof ( size_t ) > : : kUnsigned / / NOLINT <nl> + 5 + 1 + 1 ; <nl> EmbeddedVector < char , kBufferSize > buffer ; <nl> int buffer_pos = 0 ; <nl> mmm a / src / heap - snapshot - generator . h <nl> ppp b / src / heap - snapshot - generator . 
h <nl> class HeapEntry BASE_EMBEDDED { <nl> Type type , <nl> const char * name , <nl> SnapshotObjectId id , <nl> - int self_size ) ; <nl> + size_t self_size ) ; <nl> <nl> HeapSnapshot * snapshot ( ) { return snapshot_ ; } <nl> Type type ( ) { return static_cast < Type > ( type_ ) ; } <nl> const char * name ( ) { return name_ ; } <nl> void set_name ( const char * name ) { name_ = name ; } <nl> inline SnapshotObjectId id ( ) { return id_ ; } <nl> - int self_size ( ) { return self_size_ ; } <nl> + size_t self_size ( ) { return self_size_ ; } <nl> INLINE ( int index ( ) const ) ; <nl> int children_count ( ) const { return children_count_ ; } <nl> INLINE ( int set_children_index ( int index ) ) ; <nl> class HeapEntry BASE_EMBEDDED { <nl> unsigned type_ : 4 ; <nl> int children_count_ : 28 ; <nl> int children_index_ ; <nl> - int self_size_ ; <nl> + size_t self_size_ ; <nl> SnapshotObjectId id_ ; <nl> HeapSnapshot * snapshot_ ; <nl> const char * name_ ; <nl> class HeapSnapshot { <nl> HeapEntry * AddEntry ( HeapEntry : : Type type , <nl> const char * name , <nl> SnapshotObjectId id , <nl> - int size ) ; <nl> + size_t size ) ; <nl> HeapEntry * AddRootEntry ( ) ; <nl> HeapEntry * AddGcRootsEntry ( ) ; <nl> HeapEntry * AddGcSubrootEntry ( int tag ) ; <nl> class V8HeapExplorer : public HeapEntriesAllocator { <nl> HeapEntry * AddEntry ( Address address , <nl> HeapEntry : : Type type , <nl> const char * name , <nl> - int size ) ; <nl> + size_t size ) ; <nl> <nl> static String * GetConstructorName ( JSObject * object ) ; <nl> <nl> mmm a / test / cctest / test - heap - profiler . cc <nl> ppp b / test / cctest / test - heap - profiler . cc <nl> TEST ( HeapSnapshotObjectSizes ) { <nl> CHECK_NE ( NULL , x2 ) ; <nl> <nl> / / Test sizes . 
<nl> - CHECK_NE ( 0 , x - > GetSelfSize ( ) ) ; <nl> - CHECK_NE ( 0 , x1 - > GetSelfSize ( ) ) ; <nl> - CHECK_NE ( 0 , x2 - > GetSelfSize ( ) ) ; <nl> + CHECK_NE ( 0 , static_cast < int > ( x - > GetShallowSize ( ) ) ) ; <nl> + CHECK_NE ( 0 , static_cast < int > ( x1 - > GetShallowSize ( ) ) ) ; <nl> + CHECK_NE ( 0 , static_cast < int > ( x2 - > GetShallowSize ( ) ) ) ; <nl> } <nl> <nl> <nl> TEST ( AllocationSitesAreVisible ) { <nl> " elements " ) ; <nl> CHECK_NE ( NULL , elements ) ; <nl> CHECK_EQ ( v8 : : HeapGraphNode : : kArray , elements - > GetType ( ) ) ; <nl> - CHECK_EQ ( v8 : : internal : : FixedArray : : SizeFor ( 3 ) , elements - > GetSelfSize ( ) ) ; <nl> + CHECK_EQ ( v8 : : internal : : FixedArray : : SizeFor ( 3 ) , <nl> + static_cast < int > ( elements - > GetShallowSize ( ) ) ) ; <nl> <nl> v8 : : Handle < v8 : : Value > array_val = <nl> heap_profiler - > FindObjectById ( transition_info - > GetId ( ) ) ; <nl> TEST ( ArrayBufferAndArrayBufferView ) { <nl> const v8 : : HeapGraphNode * backing_store = <nl> GetProperty ( arr1_buffer , v8 : : HeapGraphEdge : : kInternal , " backing_store " ) ; <nl> CHECK_NE ( NULL , backing_store ) ; <nl> - CHECK_EQ ( 400 , backing_store - > GetSelfSize ( ) ) ; <nl> + CHECK_EQ ( 400 , static_cast < int > ( backing_store - > GetShallowSize ( ) ) ) ; <nl> } <nl> <nl> <nl>
|
Allow self_size to be larger than 2GB in heap snapshots .
|
v8/v8
|
1bace575f03b451734f2e536c2b3b04a85bb75b5
|
2014-02-18T13:22:07Z
|
mmm a / arangod / Aql / BindParameters . h <nl> ppp b / arangod / Aql / BindParameters . h <nl> namespace triagens { <nl> / / / @ brief create the parameters <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - BindParameters ( TRI_json_t * ) ; <nl> + explicit BindParameters ( TRI_json_t * ) ; <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief destroy the parameters <nl> mmm a / arangod / Aql / Query . cpp <nl> ppp b / arangod / Aql / Query . cpp <nl> QueryResult Query : : execute ( QueryRegistry * registry ) { <nl> return res ; <nl> } <nl> <nl> - if ( useQueryCache & & ( _isModificationQuery | | ! _ast - > root ( ) - > isDeterministic ( ) ) ) { <nl> + if ( useQueryCache & & ( _isModificationQuery | | ! _ast - > root ( ) - > isDeterministic ( ) | | ! _warnings . empty ( ) ) ) { <nl> useQueryCache = false ; <nl> } <nl> <nl> QueryResult Query : : execute ( QueryRegistry * registry ) { <nl> value = nullptr ; <nl> } <nl> <nl> - / / finally store the generated result in the query cache <nl> - QueryCache : : instance ( ) - > store ( <nl> - _vocbase , <nl> - queryStringHash , <nl> - _queryString , <nl> - _queryLength , <nl> - TRI_CopyJson ( TRI_UNKNOWN_MEM_ZONE , jsonResult . json ( ) ) , <nl> - collections ( ) - > collectionNames ( ) <nl> - ) ; <nl> + if ( _warnings . empty ( ) ) { <nl> + / / finally store the generated result in the query cache <nl> + QueryCache : : instance ( ) - > store ( <nl> + _vocbase , <nl> + queryStringHash , <nl> + _queryString , <nl> + _queryLength , <nl> + TRI_CopyJson ( TRI_UNKNOWN_MEM_ZONE , jsonResult . 
json ( ) ) , <nl> + collections ( ) - > collectionNames ( ) <nl> + ) ; <nl> + } <nl> } <nl> else { <nl> / / iterate over result and return it <nl> QueryResultV8 Query : : executeV8 ( v8 : : Isolate * isolate , <nl> return res ; <nl> } <nl> <nl> - if ( useQueryCache & & ( _isModificationQuery | | ! _ast - > root ( ) - > isDeterministic ( ) ) ) { <nl> + if ( useQueryCache & & ( _isModificationQuery | | ! _ast - > root ( ) - > isDeterministic ( ) | | ! _warnings . empty ( ) ) ) { <nl> useQueryCache = false ; <nl> } <nl> <nl> QueryResultV8 Query : : executeV8 ( v8 : : Isolate * isolate , <nl> value = nullptr ; <nl> } <nl> <nl> - / / finally store the generated result in the query cache <nl> - QueryCache : : instance ( ) - > store ( <nl> - _vocbase , <nl> - queryStringHash , <nl> - _queryString , <nl> - _queryLength , <nl> - cacheResult . get ( ) , <nl> - collections ( ) - > collectionNames ( ) <nl> - ) ; <nl> - cacheResult . release ( ) ; <nl> + if ( _warnings . empty ( ) ) { <nl> + / / finally store the generated result in the query cache <nl> + QueryCache : : instance ( ) - > store ( <nl> + _vocbase , <nl> + queryStringHash , <nl> + _queryString , <nl> + _queryLength , <nl> + cacheResult . get ( ) , <nl> + collections ( ) - > collectionNames ( ) <nl> + ) ; <nl> + cacheResult . release ( ) ; <nl> + } <nl> } <nl> else { <nl> / / iterate over result and return it <nl> mmm a / arangod / VocBase / document - collection . cpp <nl> ppp b / arangod / VocBase / document - collection . cpp <nl> <nl> <nl> # include " document - collection . h " <nl> <nl> + # include " Aql / QueryCache . h " <nl> # include " Basics / Barrier . h " <nl> # include " Basics / conversions . h " <nl> # include " Basics / Exceptions . 
h " <nl> bool TRI_DropIndexDocumentCollection ( TRI_document_collection_t * document , <nl> <nl> TRI_WRITE_LOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION ( document ) ; <nl> <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( vocbase , document - > _info . _name ) ; <nl> + <nl> triagens : : arango : : Index * found = document - > removeIndex ( iid ) ; <nl> <nl> TRI_WRITE_UNLOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION ( document ) ; <nl> triagens : : arango : : Index * TRI_EnsureCapConstraintDocumentCollection ( TRI_document <nl> <nl> if ( idx ! = nullptr ) { <nl> if ( created ) { <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( document - > _vocbase , document - > _info . _name ) ; <nl> int res = TRI_SaveIndex ( document , idx , true ) ; <nl> <nl> if ( res ! = TRI_ERROR_NO_ERROR ) { <nl> triagens : : arango : : Index * TRI_EnsureGeoIndex1DocumentCollection ( TRI_document_col <nl> <nl> if ( idx ! = nullptr ) { <nl> if ( created ) { <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( document - > _vocbase , document - > _info . _name ) ; <nl> int res = TRI_SaveIndex ( document , idx , true ) ; <nl> <nl> if ( res ! = TRI_ERROR_NO_ERROR ) { <nl> triagens : : arango : : Index * TRI_EnsureGeoIndex2DocumentCollection ( TRI_document_col <nl> <nl> if ( idx ! = nullptr ) { <nl> if ( created ) { <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( document - > _vocbase , document - > _info . _name ) ; <nl> int res = TRI_SaveIndex ( document , idx , true ) ; <nl> <nl> if ( res ! = TRI_ERROR_NO_ERROR ) { <nl> triagens : : arango : : Index * TRI_EnsureHashIndexDocumentCollection ( TRI_document_col <nl> <nl> TRI_WRITE_LOCK_DOCUMENTS_INDEXES_PRIMARY_COLLECTION ( document ) ; <nl> <nl> - / / given the list of attributes ( as strings ) <nl> auto idx = CreateHashIndexDocumentCollection ( document , attributes , iid , sparse , unique , created ) ; <nl> <nl> if ( idx ! 
= nullptr ) { <nl> if ( created ) { <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( document - > _vocbase , document - > _info . _name ) ; <nl> int res = TRI_SaveIndex ( document , idx , true ) ; <nl> <nl> if ( res ! = TRI_ERROR_NO_ERROR ) { <nl> triagens : : arango : : Index * TRI_EnsureSkiplistIndexDocumentCollection ( TRI_document <nl> <nl> if ( idx ! = nullptr ) { <nl> if ( created ) { <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( document - > _vocbase , document - > _info . _name ) ; <nl> int res = TRI_SaveIndex ( document , idx , true ) ; <nl> <nl> if ( res ! = TRI_ERROR_NO_ERROR ) { <nl> triagens : : arango : : Index * TRI_EnsureFulltextIndexDocumentCollection ( TRI_document <nl> <nl> if ( idx ! = nullptr ) { <nl> if ( created ) { <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( document - > _vocbase , document - > _info . _name ) ; <nl> int res = TRI_SaveIndex ( document , idx , true ) ; <nl> <nl> if ( res ! = TRI_ERROR_NO_ERROR ) { <nl> mmm a / arangod / VocBase / vocbase . cpp <nl> ppp b / arangod / VocBase / vocbase . cpp <nl> static int RenameCollection ( TRI_vocbase_t * vocbase , <nl> / / i . e . db . 
< old - collection - name > <nl> collection - > _internalVersion + + ; <nl> <nl> - / / lock query cache and invalidate all entries for the two collections <nl> - auto & cacheLock = triagens : : aql : : QueryCache : : instance ( ) - > getLock ( ) ; <nl> - WRITE_LOCKER ( cacheLock ) ; <nl> - triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( cacheLock , vocbase , std : : vector < char const * > { oldName , newName } ) ; <nl> + / / invalidate all entries for the two collections <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( vocbase , std : : vector < char const * > { oldName , newName } ) ; <nl> <nl> TRI_WRITE_UNLOCK_STATUS_VOCBASE_COL ( collection ) ; <nl> <nl> int TRI_DropCollectionVocBase ( TRI_vocbase_t * vocbase , <nl> return TRI_set_errno ( TRI_ERROR_FORBIDDEN ) ; <nl> } <nl> <nl> - / / lock query cache and invalidate all entries for this collection <nl> - auto & cacheLock = triagens : : aql : : QueryCache : : instance ( ) - > getLock ( ) ; <nl> - WRITE_LOCKER ( cacheLock ) ; <nl> - triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( cacheLock , vocbase , collection - > _name ) ; <nl> - <nl> TRI_ReadLockReadWriteLock ( & vocbase - > _inventoryLock ) ; <nl> <nl> TRI_EVENTUAL_WRITE_LOCK_STATUS_VOCBASE_COL ( collection ) ; <nl> <nl> + triagens : : aql : : QueryCache : : instance ( ) - > invalidate ( vocbase , collection - > _name ) ; <nl> + <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / collection already deleted <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> mmm a / js / server / tests / aql - bind . js <nl> ppp b / js / server / tests / aql - bind . 
js <nl> function ahuacatlBindTestSuite ( ) { <nl> } , <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / / / @ brief test a list bind variable <nl> + / / / @ brief test an array bind variable <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - testBindList1 : function ( ) { <nl> + testBindArray1 : function ( ) { <nl> var expected = [ " " ] ; <nl> var actual = getQueryResults ( " FOR u IN @ list FILTER u = = @ value RETURN u " , { " list " : [ " the quick fox " , true , false , - 5 , 0 , 1 , null , " " , [ ] , { } ] , " value " : [ ] } ) ; <nl> <nl> function ahuacatlBindTestSuite ( ) { <nl> } , <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / / / @ brief test a list bind variable <nl> + / / / @ brief test an array bind variable <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - testBindList2 : function ( ) { <nl> + testBindArray2 : function ( ) { <nl> var expected = [ true , false , 1 , null , [ ] ] ; <nl> var actual = getQueryResults ( " FOR u IN @ list FILTER u IN @ value RETURN u " , { " list " : [ " the quick fox " , true , false , - 5 , 0 , 1 , null , " " , [ ] , { } ] , " value " : [ true , false , 1 , null , [ ] ] } ) ; <nl> <nl> function ahuacatlBindTestSuite ( ) { <nl> } , <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / / / @ brief test an array bind variable <nl> + / / / @ brief test an object bind variable <nl> / / / / / / / / 
/ / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - testBindArray1 : function ( ) { <nl> + testBindObject1 : function ( ) { <nl> var expected = [ { } ] ; <nl> var actual = getQueryResults ( " FOR u IN @ list FILTER u = = @ value RETURN u " , { " list " : [ " the quick fox " , true , false , - 5 , 0 , 1 , null , " " , [ ] , { } ] , " value " : { } } ) ; <nl> <nl> function ahuacatlBindTestSuite ( ) { <nl> } , <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / / / @ brief test an array bind variable <nl> + / / / @ brief test an object bind variable <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - testBindArray2 : function ( ) { <nl> + testBindObject2 : function ( ) { <nl> var expected = [ { " brown " : true , " fox " : true , " quick " : true } ] ; <nl> var list = [ { " fox " : false , " brown " : false , " quick " : false } , <nl> { " fox " : true , " brown " : false , " quick " : false } , <nl> mmm a / js / server / tests / aql - optimizer - rule - use - index - for - sort . js <nl> ppp b / js / server / tests / aql - optimizer - rule - use - index - for - sort . js <nl> var removeAlwaysOnClusterRules = helper . 
removeAlwaysOnClusterRules ; <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> function optimizerRuleTestSuite ( ) { <nl> + var cacheProperties ; <nl> var ruleName = " use - index - for - sort " ; <nl> var secondRuleName = " use - index - range " ; <nl> var removeCalculationNodes = " remove - unnecessary - calculations - 2 " ; <nl> function optimizerRuleTestSuite ( ) { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> setUp : function ( ) { <nl> + / / turn off caching first <nl> + cacheProperties = AQL_QUERY_CACHE_PROPERTIES ( ) ; <nl> + / / AQL_QUERY_CACHE_PROPERTIES ( { mode : " off " } ) ; <nl> + / / AQL_QUERY_CACHE_INVALIDATE ( ) ; <nl> + <nl> var loopto = 10 ; <nl> <nl> internal . db . _drop ( colName ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> internal . db . _drop ( colName ) ; <nl> internal . db . _drop ( colNameOther ) ; <nl> skiplist = null ; <nl> - } , <nl> <nl> + / / restore previous state <nl> + / / AQL_QUERY_CACHE_PROPERTIES ( cacheProperties ) ; <nl> + / / AQL_QUERY_CACHE_INVALIDATE ( ) ; <nl> + } , <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief test that rule has no effect <nl> function optimizerRuleTestSuite ( ) { <nl> removeAlwaysOnClusterRules ( result . plan . rules ) , query ) ; <nl> QResults [ 0 ] = AQL_EXECUTE ( query , { } , paramNone ) . json ; <nl> QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort ) . json ; <nl> - <nl> + <nl> assertTrue ( isEqual ( QResults [ 0 ] , QResults [ 1 ] ) , " result " + i + " is equal ? 
" ) ; <nl> <nl> allresults = getQueryMultiplePlansAndExecutions ( query , { } ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief this sort is replaceable by an index . <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - testSortIndexable : function ( ) { <nl> <nl> + testSortIndexable : function ( ) { <nl> var query = " FOR v IN " + colName + " SORT v . a RETURN [ v . a , v . b ] " ; <nl> <nl> var XPresult ; <nl> function optimizerRuleTestSuite ( ) { <nl> <nl> / / - > use - index - for - sort alone . <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexFromSort ) ; <nl> - QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort ) . json ; <nl> + QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort ) . json . sort ( sortArray ) ; <nl> / / our rule should have been applied . <nl> assertEqual ( [ ruleName ] , removeAlwaysOnClusterRules ( XPresult . plan . rules ) ) ; <nl> / / The sortnode and its calculation node should have been removed . <nl> function optimizerRuleTestSuite ( ) { <nl> <nl> / / - > combined use - index - for - sort and remove - unnecessary - calculations - 2 <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexFromSort_RemoveCalculations ) ; <nl> - QResults [ 2 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort_RemoveCalculations ) . json ; <nl> + QResults [ 2 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort_RemoveCalculations ) . json . sort ( sortArray ) ; <nl> / / our rule should have been applied . <nl> assertEqual ( [ ruleName , removeCalculationNodes ] . sort ( ) , removeAlwaysOnClusterRules ( XPresult . plan . rules ) . sort ( ) ) ; <nl> / / The sortnode and its calculation node should have been removed . 
<nl> function optimizerRuleTestSuite ( ) { <nl> hasIndexRangeNode_WithRanges ( XPresult , false ) ; <nl> <nl> for ( i = 1 ; i < 3 ; i + + ) { <nl> - assertTrue ( isEqual ( QResults [ 0 ] , QResults [ i ] ) , " Result " + i + " is Equal ? " ) ; <nl> + assertTrue ( isEqual ( QResults [ 0 ] , QResults [ i ] ) , " Result " + i + " is equal ? " ) ; <nl> } <nl> var allresults = getQueryMultiplePlansAndExecutions ( query , { } ) ; <nl> for ( j = 1 ; j < allresults . results . length ; j + + ) { <nl> function optimizerRuleTestSuite ( ) { <nl> QResults [ 0 ] = AQL_EXECUTE ( query , { } , paramNone ) . json . sort ( sortArray ) ; <nl> <nl> / / - > use - index - for - sort alone . <nl> - QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort ) . json ; <nl> + QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort ) . json . sort ( sortArray ) ; <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexFromSort ) ; <nl> / / our rule should be there . <nl> assertEqual ( [ ruleName ] , removeAlwaysOnClusterRules ( XPresult . plan . rules ) ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> hasIndexRangeNode_WithRanges ( XPresult , false ) ; <nl> <nl> / / - > combined use - index - for - sort and use - index - range <nl> - QResults [ 2 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort_IndexRange ) . json ; <nl> + QResults [ 2 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort_IndexRange ) . json . sort ( sortArray ) ; <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexFromSort_IndexRange ) ; <nl> assertEqual ( [ secondRuleName , ruleName ] . sort ( ) , removeAlwaysOnClusterRules ( XPresult . plan . rules ) . sort ( ) ) ; <nl> / / The sortnode should be gone , its calculation node should not have been removed yet . <nl> function optimizerRuleTestSuite ( ) { <nl> hasIndexRangeNode_WithRanges ( XPresult , true ) ; <nl> <nl> / / - > use - index - range alone . <nl> - QResults [ 3 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . 
json ; <nl> + QResults [ 3 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json . sort ( sortArray ) ; <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexRange ) ; <nl> assertEqual ( [ secondRuleName ] , removeAlwaysOnClusterRules ( XPresult . plan . rules ) ) ; <nl> / / the sortnode and its calculation node should be there . <nl> function optimizerRuleTestSuite ( ) { <nl> hasIndexRangeNode_WithRanges ( XPresult , true ) ; <nl> <nl> / / - > combined use - index - for - sort , remove - unnecessary - calculations - 2 and use - index - range <nl> - QResults [ 4 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort_IndexRange_RemoveCalculations ) . json ; <nl> + QResults [ 4 ] = AQL_EXECUTE ( query , { } , paramIndexFromSort_IndexRange_RemoveCalculations ) . json . sort ( sortArray ) ; <nl> <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexFromSort_IndexRange_RemoveCalculations ) ; <nl> assertEqual ( [ secondRuleName , removeCalculationNodes , ruleName ] . sort ( ) , removeAlwaysOnClusterRules ( XPresult . plan . rules ) . sort ( ) ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> hasIndexRangeNode_WithRanges ( XPresult , true ) ; <nl> <nl> for ( i = 1 ; i < 5 ; i + + ) { <nl> - assertTrue ( isEqual ( QResults [ 0 ] , QResults [ i ] ) , " Result " + i + " is Equal ? " ) ; <nl> + assertTrue ( isEqual ( QResults [ 0 ] , QResults [ i ] ) , " Result " + i + " is equal ? " ) ; <nl> } <nl> var allresults = getQueryMultiplePlansAndExecutions ( query , { } ) ; <nl> for ( j = 1 ; j < allresults . results . length ; j + + ) { <nl> function optimizerRuleTestSuite ( ) { <nl> assertEqual ( first . lowConst . bound , first . highConst . bound , " bounds equality " ) ; <nl> <nl> for ( i = 1 ; i < 2 ; i + + ) { <nl> - assertTrue ( isEqual ( QResults [ 0 ] . sort ( sortArray ) , QResults [ i ] ) , " Result " + i + " is Equal ? " ) ; <nl> + assertTrue ( isEqual ( QResults [ 0 ] . sort ( sortArray ) , QResults [ i ] . sort ( sortArray ) ) , " Result " + i + " is Equal ? 
" ) ; <nl> } <nl> var allresults = getQueryMultiplePlansAndExecutions ( query , { } ) ; <nl> for ( j = 1 ; j < allresults . results . length ; j + + ) { <nl> function optimizerRuleTestSuite ( ) { <nl> QResults [ 0 ] = AQL_EXECUTE ( query , { } , paramNone ) . json . sort ( sortArray ) ; <nl> <nl> / / - > use - index - range alone . <nl> - QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json ; <nl> + QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json . sort ( sortArray ) ; <nl> <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexRange ) ; <nl> assertEqual ( [ secondRuleName ] , removeAlwaysOnClusterRules ( XPresult . plan . rules ) ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> assertEqual ( first . highs . length , 0 , " no variable high bound " ) ; <nl> assertEqual ( first . highConst . bound , 5 , " proper value was set " ) ; <nl> <nl> - assertTrue ( isEqual ( QResults [ 0 ] , QResults [ 1 ] ) , " Results are Equal ? " ) ; <nl> + assertTrue ( isEqual ( QResults [ 0 ] , QResults [ 1 ] ) , " Results are equal ? " ) ; <nl> <nl> var allresults = getQueryMultiplePlansAndExecutions ( query , { } ) ; <nl> for ( j = 1 ; j < allresults . results . length ; j + + ) { <nl> function optimizerRuleTestSuite ( ) { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief test in detail that an index range can be used for a greater than filter . <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> testRangeGreaterThan : function ( ) { <nl> var query = " FOR v IN " + colName + " FILTER v . a > 5 RETURN [ v . a , v . b ] " ; <nl> var XPresult ; <nl> function optimizerRuleTestSuite ( ) { <nl> QResults [ 0 ] = AQL_EXECUTE ( query , { } , paramNone ) . json . 
sort ( sortArray ) ; <nl> <nl> / / - > use - index - range alone . <nl> - QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json ; <nl> + QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json . sort ( sortArray ) ; <nl> <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexRange ) ; <nl> assertEqual ( [ secondRuleName ] , removeAlwaysOnClusterRules ( XPresult . plan . rules ) ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> QResults [ 0 ] = AQL_EXECUTE ( query , { } , paramNone ) . json . sort ( sortArray ) ; <nl> <nl> / / - > use - index - range alone . <nl> - QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json ; <nl> + QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json . sort ( sortArray ) ; <nl> <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexRange ) ; <nl> assertEqual ( [ secondRuleName ] , removeAlwaysOnClusterRules ( XPresult . plan . rules ) ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> / / / @ brief test in detail that an index range can be used for an or combined <nl> / / / greater than + less than filter spanning a range . <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> testRangeBandstop : function ( ) { <nl> var query = " FOR v IN " + colName + " FILTER v . a < 5 | | v . a > 10 RETURN [ v . a , v . b ] " ; <nl> <nl> function optimizerRuleTestSuite ( ) { <nl> QResults [ 0 ] = AQL_EXECUTE ( query , { } , paramNone ) . json . sort ( sortArray ) ; <nl> <nl> / / - > use - index - range alone . <nl> - QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json ; <nl> + QResults [ 1 ] = AQL_EXECUTE ( query , { } , paramIndexRange ) . json . sort ( sortArray ) ; <nl> <nl> XPresult = AQL_EXPLAIN ( query , { } , paramIndexRange ) ; <nl> assertEqual ( [ secondRuleName ] , removeAlwaysOnClusterRules ( XPresult . plan . 
rules ) ) ; <nl> function optimizerRuleTestSuite ( ) { <nl> assertEqual ( first . lowConst . bound , 10 , " proper value was set " ) ; <nl> assertEqual ( first . lowConst . include , false , " proper include " ) ; <nl> <nl> - assertTrue ( isEqual ( QResults [ 0 ] , QResults [ 1 ] ) , " Results are Equal ? " ) ; <nl> + assertTrue ( isEqual ( QResults [ 0 ] , QResults [ 1 ] ) , " Results are equal ? " ) ; <nl> } , <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / js / server / tests / aql - query - cache . js <nl> ppp b / js / server / tests / aql - query - cache . js <nl> <nl> <nl> var jsunity = require ( " jsunity " ) ; <nl> var db = require ( " org / arangodb " ) . db ; <nl> + var internal = require ( " internal " ) ; <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief test suite <nl> function ahuacatlQueryCacheTestSuite ( ) { <nl> assertEqual ( " demand " , result . mode ) ; <nl> } , <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test rename collection <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testRenameCollection1 : function ( ) { <nl> + if ( require ( " org / arangodb / cluster " ) . isCluster ( ) ) { <nl> + / / renaming collections not supported in cluster <nl> + return ; <nl> + } <nl> + <nl> + var query = " FOR doc IN @ @ collection SORT doc . value RETURN doc . value " ; <nl> + var result , i ; <nl> + <nl> + for ( i = 1 ; i < = 5 ; + + i ) { <nl> + c1 . save ( { value : i } ) ; <nl> + } <nl> + c2 . 
drop ( ) ; <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + c1 . rename ( " UnitTestsAhuacatlQueryCache2 " ) ; <nl> + <nl> + try { <nl> + AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache1 " } ) ; <nl> + fail ( ) ; <nl> + } <nl> + catch ( err ) { <nl> + assertEqual ( internal . errors . ERROR_ARANGO_COLLECTION_NOT_FOUND . code , err . errorNum ) ; <nl> + } <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache2 " } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache2 " } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + } , <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test rename collection <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testRenameCollection2 : function ( ) { <nl> + if ( require ( " org / arangodb / cluster " ) . isCluster ( ) ) { <nl> + / / renaming collections not supported in cluster <nl> + return ; <nl> + } <nl> + <nl> + var query = " FOR doc IN @ @ collection SORT doc . value RETURN doc . value " ; <nl> + var result , i ; <nl> + <nl> + for ( i = 1 ; i < = 5 ; + + i ) { <nl> + c1 . 
save ( { value : i } ) ; <nl> + } <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache2 " } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache2 " } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ ] , result . json ) ; <nl> + <nl> + c2 . drop ( ) ; <nl> + c1 . rename ( " UnitTestsAhuacatlQueryCache2 " ) ; <nl> + <nl> + try { <nl> + AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache1 " } ) ; <nl> + fail ( ) ; <nl> + } <nl> + catch ( err ) { <nl> + assertEqual ( internal . errors . ERROR_ARANGO_COLLECTION_NOT_FOUND . code , err . errorNum ) ; <nl> + } <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache2 " } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache2 " } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . 
json ) ; <nl> + } , <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test drop collection <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testDropCollection : function ( ) { <nl> + var query = " FOR doc IN @ @ collection SORT doc . value RETURN doc . value " ; <nl> + var result , i ; <nl> + <nl> + for ( i = 1 ; i < = 5 ; + + i ) { <nl> + c1 . save ( { value : i } ) ; <nl> + } <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + c1 . drop ( ) ; <nl> + <nl> + try { <nl> + AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache1 " } ) ; <nl> + fail ( ) ; <nl> + } <nl> + catch ( err ) { <nl> + assertEqual ( internal . errors . ERROR_ARANGO_COLLECTION_NOT_FOUND . code , err . errorNum ) ; <nl> + } <nl> + } , <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test drop and recreation of collection <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testDropAndRecreateCollection : function ( ) { <nl> + var query = " FOR doc IN @ @ collection SORT doc . value RETURN doc . 
value " ; <nl> + var result , i ; <nl> + <nl> + for ( i = 1 ; i < = 5 ; + + i ) { <nl> + c1 . save ( { value : i } ) ; <nl> + } <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + c1 . drop ( ) ; <nl> + <nl> + try { <nl> + AQL_EXECUTE ( query , { " @ collection " : " UnitTestsAhuacatlQueryCache1 " } ) ; <nl> + fail ( ) ; <nl> + } <nl> + catch ( err ) { <nl> + assertEqual ( internal . errors . ERROR_ARANGO_COLLECTION_NOT_FOUND . code , err . errorNum ) ; <nl> + } <nl> + <nl> + / / re - create collection with same name <nl> + c1 = db . _create ( " UnitTestsAhuacatlQueryCache1 " ) ; <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ ] , result . json ) ; <nl> + } , <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test adding indexes <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testAddIndexCapConstraint : function ( ) { <nl> + var query = " FOR doc IN @ @ collection SORT doc . value RETURN doc . value " ; <nl> + var result , i ; <nl> + <nl> + for ( i = 1 ; i < = 5 ; + + i ) { <nl> + c1 . 
save ( { value : i } ) ; <nl> + } <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + c1 . ensureCapConstraint ( 3 ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 3 , 4 , 5 ] , result . json ) ; <nl> + } , <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test dropping indexes <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testDropIndexCapConstraint : function ( ) { <nl> + var query = " FOR doc IN @ @ collection SORT doc . value RETURN doc . value " ; <nl> + var result , i ; <nl> + <nl> + c1 . ensureCapConstraint ( 3 ) ; <nl> + for ( i = 1 ; i < = 5 ; + + i ) { <nl> + c1 . save ( { value : i } ) ; <nl> + } <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 3 , 4 , 5 ] , result . 
json ) ; <nl> + <nl> + var indexes = c1 . getIndexes ( ) ; <nl> + assertEqual ( 2 , indexes . length ) ; <nl> + assertEqual ( " cap " , indexes [ 1 ] . type ) ; <nl> + assertTrue ( c1 . dropIndex ( indexes [ 1 ] . id ) ) ; <nl> + <nl> + indexes = c1 . getIndexes ( ) ; <nl> + assertEqual ( 1 , indexes . length ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 3 , 4 , 5 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 3 , 4 , 5 ] , result . json ) ; <nl> + } , <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief test non - deterministic queries <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> function ahuacatlQueryCacheTestSuite ( ) { <nl> } ) ; <nl> } , <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test same query with different bind parameters <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testDifferentBindOrders : function ( ) { <nl> + var query = " FOR doc IN @ @ collection SORT doc . value LIMIT @ offset , @ count RETURN doc . value " ; <nl> + var result , i ; <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + for ( i = 1 ; i < = 10 ; + + i ) { <nl> + c1 . save ( { value : i } ) ; <nl> + } <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . 
name ( ) , offset : 0 , count : 1 } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 ] , result . json ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) , offset : 0 , count : 1 } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 ] , result . json ) ; <nl> + <nl> + / / same bind parameter values , but in exchanged order <nl> + result = AQL_EXECUTE ( query , { " @ collection " : c1 . name ( ) , offset : 1 , count : 0 } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ ] , result . json ) ; <nl> + } , <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief test same query with different bind parameters <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + testDifferentBindOrdersArray : function ( ) { <nl> + var query = " RETURN @ values " ; <nl> + var result , i ; <nl> + <nl> + AQL_QUERY_CACHE_PROPERTIES ( { mode : " on " } ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { values : [ 1 , 2 , 3 , 4 , 5 ] } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json [ 0 ] ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { values : [ 1 , 2 , 3 , 4 , 5 ] } ) ; <nl> + assertTrue ( result . cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 4 , 5 ] , result . json [ 0 ] ) ; <nl> + <nl> + / / same bind parameter values , but in exchanged order <nl> + result = AQL_EXECUTE ( query , { values : [ 5 , 4 , 3 , 2 , 1 ] } ) ; <nl> + assertFalse ( result . cached ) ; <nl> + assertEqual ( [ 5 , 4 , 3 , 2 , 1 ] , result . json [ 0 ] ) ; <nl> + <nl> + result = AQL_EXECUTE ( query , { values : [ 1 , 2 , 3 , 5 , 4 ] } ) ; <nl> + assertFalse ( result . 
cached ) ; <nl> + assertEqual ( [ 1 , 2 , 3 , 5 , 4 ] , result . json [ 0 ] ) ; <nl> + } , <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief test same query with different bind parameters <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / lib / Basics / json - utilities . cpp <nl> ppp b / lib / Basics / json - utilities . cpp <nl> static uint64_t FastHashJsonRecursive ( uint64_t hash , <nl> case TRI_JSON_OBJECT : { <nl> hash = fasthash64 ( static_cast < const void * > ( " object " ) , 6 , hash ) ; <nl> size_t const n = TRI_LengthVector ( & object - > _value . _objects ) ; <nl> - uint64_t tmphash = hash ; <nl> for ( size_t i = 0 ; i < n ; i + = 2 ) { <nl> auto subjson = static_cast < TRI_json_t const * > ( TRI_AddressVector ( & object - > _value . _objects , i ) ) ; <nl> TRI_ASSERT ( TRI_IsStringJson ( subjson ) ) ; <nl> - tmphash ^ = FastHashJsonRecursive ( hash , subjson ) ; <nl> + hash = FastHashJsonRecursive ( hash , subjson ) ; <nl> subjson = static_cast < TRI_json_t const * > ( TRI_AddressVector ( & object - > _value . _objects , i + 1 ) ) ; <nl> - tmphash ^ = FastHashJsonRecursive ( hash , subjson ) ; <nl> + hash = FastHashJsonRecursive ( hash , subjson ) ; <nl> } <nl> - return tmphash ; <nl> + return hash ; <nl> } <nl> <nl> case TRI_JSON_ARRAY : { <nl>
|
fixed invalidation with index creation etc . , adjusted tests
|
arangodb/arangodb
|
8372c3399343b11eda0ae8e0512e4507bcb23f3f
|
2015-06-25T21:40:31Z
|
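The `json - utilities . cpp` hunk at the end of the diff above replaces XOR-combining of per-member hashes with sequential chaining when hashing JSON objects. A minimal Python sketch (hypothetical helper names, not the actual fasthash implementation) of why the old XOR scheme is unsafe — duplicated key/value pairs cancel each other out under XOR, so distinct objects collide, which matters for a query results cache keyed by such hashes:

```python
def xor_combine(members, seed=12345):
    # Old behavior: XOR each member's hash into an accumulator.
    # XOR is order-insensitive and self-cancelling.
    acc = seed
    for m in members:
        acc ^= hash((seed, m))
    return acc

def chain_combine(members, seed=12345):
    # New behavior: feed the running hash back in as the seed for
    # the next member, so every member position affects the result.
    acc = seed
    for m in members:
        acc = hash((acc, m))
    return acc

# Flattened key/value members of two different objects: the repeated
# ("k1", 1) pair cancels itself under XOR, producing a collision.
a = ["k1", 1, "k1", 1, "k2", 2]
b = ["k2", 2]
assert xor_combine(a) == xor_combine(b)      # collision with the old scheme
assert chain_combine(a) != chain_combine(b)  # chained hashing distinguishes them
```

The same chaining idea is what the patched `FastHashJsonRecursive` does by passing the updated `hash` into each recursive call instead of XOR-ing sub-hashes into a temporary.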
mmm a / core / ustring . cpp <nl> ppp b / core / ustring . cpp <nl> Vector < String > String : : split_spaces ( ) const { <nl> return ret ; <nl> } <nl> <nl> - Vector < String > String : : split ( const String & p_splitter , bool p_allow_empty ) const { <nl> + Vector < String > String : : split ( const String & p_splitter , bool p_allow_empty , int p_maxsplit ) const { <nl> <nl> Vector < String > ret ; <nl> int from = 0 ; <nl> Vector < String > String : : split ( const String & p_splitter , bool p_allow_empty ) const <nl> int end = find ( p_splitter , from ) ; <nl> if ( end < 0 ) <nl> end = len ; <nl> - if ( p_allow_empty | | ( end > from ) ) <nl> - ret . push_back ( substr ( from , end - from ) ) ; <nl> + if ( p_allow_empty | | ( end > from ) ) { <nl> + if ( p_maxsplit < = 0 ) <nl> + ret . push_back ( substr ( from , end - from ) ) ; <nl> + else if ( p_maxsplit > 0 ) { <nl> + <nl> + / / Put rest of the string and leave cycle . <nl> + if ( p_maxsplit = = ret . size ( ) ) { <nl> + ret . push_back ( substr ( from , len ) ) ; <nl> + break ; <nl> + } <nl> + <nl> + / / Otherwise , push items until positive limit is reached . <nl> + ret . push_back ( substr ( from , end - from ) ) ; <nl> + } <nl> + } <nl> <nl> if ( end = = len ) <nl> break ; <nl> mmm a / core / ustring . h <nl> ppp b / core / ustring . 
h <nl> class String : public Vector < CharType > { <nl> String get_slice ( String p_splitter , int p_slice ) const ; <nl> String get_slicec ( CharType p_splitter , int p_slice ) const ; <nl> <nl> - Vector < String > split ( const String & p_splitter , bool p_allow_empty = true ) const ; <nl> + Vector < String > split ( const String & p_splitter , bool p_allow_empty = true , int p_maxsplit = 0 ) const ; <nl> Vector < String > split_spaces ( ) const ; <nl> Vector < float > split_floats ( const String & p_splitter , bool p_allow_empty = true ) const ; <nl> Vector < float > split_floats_mk ( const Vector < String > & p_splitters , bool p_allow_empty = true ) const ; <nl> mmm a / core / variant_call . cpp <nl> ppp b / core / variant_call . cpp <nl> struct _VariantCall { <nl> VCALL_LOCALMEM2R ( String , replacen ) ; <nl> VCALL_LOCALMEM2R ( String , insert ) ; <nl> VCALL_LOCALMEM0R ( String , capitalize ) ; <nl> - VCALL_LOCALMEM2R ( String , split ) ; <nl> + VCALL_LOCALMEM3R ( String , split ) ; <nl> VCALL_LOCALMEM2R ( String , split_floats ) ; <nl> VCALL_LOCALMEM0R ( String , to_upper ) ; <nl> VCALL_LOCALMEM0R ( String , to_lower ) ; <nl> void register_variant_methods ( ) { <nl> ADDFUNC2R ( STRING , STRING , String , replacen , STRING , " what " , STRING , " forwhat " , varray ( ) ) ; <nl> ADDFUNC2R ( STRING , STRING , String , insert , INT , " position " , STRING , " what " , varray ( ) ) ; <nl> ADDFUNC0R ( STRING , STRING , String , capitalize , varray ( ) ) ; <nl> - ADDFUNC2R ( STRING , POOL_STRING_ARRAY , String , split , STRING , " divisor " , BOOL , " allow_empty " , varray ( true ) ) ; <nl> + ADDFUNC3R ( STRING , POOL_STRING_ARRAY , String , split , STRING , " divisor " , BOOL , " allow_empty " , INT , " maxsplit " , varray ( true , 0 ) ) ; <nl> ADDFUNC2R ( STRING , POOL_REAL_ARRAY , String , split_floats , STRING , " divisor " , BOOL , " allow_empty " , varray ( true ) ) ; <nl> <nl> ADDFUNC0R ( STRING , STRING , String , to_upper , varray ( ) ) ; <nl> mmm a / doc 
/ classes / String . xml <nl> ppp b / doc / classes / String . xml <nl> <nl> < / argument > <nl> < argument index = " 1 " name = " allow_empty " type = " bool " default = " True " > <nl> < / argument > <nl> + < argument index = " 2 " name = " maxsplit " type = " int " default = " 0 " > <nl> + < / argument > <nl> < description > <nl> Splits the string by a divisor string and returns an array of the substrings . Example " One , Two , Three " will return [ " One " , " Two " , " Three " ] if split by " , " . <nl> + If [ code ] maxsplit [ / code ] is given , at most maxsplit number of splits occur , and the remainder of the string is returned as the final element of the list ( thus , the list will have at most maxsplit + 1 elements ) <nl> < / description > <nl> < / method > <nl> < method name = " split_floats " > <nl>
|
Added third argument for String . split ( ) function ( see issue )
|
godotengine/godot
|
5302fd125b36d453615483f6ced4e40e973c499a
|
2017-12-15T18:51:13Z
|
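The record above adds a `p_maxsplit` argument to Godot's `String::split`: when positive, at most `maxsplit` splits occur and the remainder of the string becomes the final element. A small Python sketch of the same loop structure (the function name is illustrative, not part of either API):

```python
def split_with_maxsplit(s, sep, allow_empty=True, maxsplit=0):
    # Mirrors the patched String::split: scan for the separator,
    # and once maxsplit parts have been collected, push the rest
    # of the string as one final element and stop.
    parts = []
    start = 0
    while True:
        end = s.find(sep, start)
        if end < 0:
            end = len(s)
        if allow_empty or end > start:
            if 0 < maxsplit == len(parts):
                parts.append(s[start:])   # remainder, uncut
                break
            parts.append(s[start:end])
        if end == len(s):
            break
        start = end + len(sep)
    return parts

assert split_with_maxsplit("One,Two,Three", ",") == ["One", "Two", "Three"]
assert split_with_maxsplit("One,Two,Three", ",", maxsplit=1) == ["One", "Two,Three"]
```

As the patched `String . xml` doc states, the result has at most `maxsplit + 1` elements when `maxsplit` is positive; `maxsplit = 0` (the default) keeps the old unbounded behavior.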
deleted file mode 100644 <nl> index f6d24afff5d49 . . 0000000000000 <nl> mmm a / avro . BUILD <nl> ppp / dev / null <nl> <nl> - package ( default_visibility = [ " / / visibility : public " ] ) <nl> - <nl> - licenses ( [ " notice " ] ) # Apache 2 . 0 <nl> - <nl> - cc_library ( <nl> - name = " avrocpp " , <nl> - srcs = glob ( <nl> - [ <nl> - " impl / * * / * . cc " , <nl> - " impl / * * / * . hh " , <nl> - ] , <nl> - exclude = [ <nl> - " impl / avrogencpp . cc " , <nl> - ] , <nl> - ) , <nl> - hdrs = glob ( [ " api / * * / * . hh " ] ) , <nl> - includes = [ " api " ] , <nl> - deps = [ <nl> - " @ boost_archive / / : boost " , <nl> - " @ boost_archive / / : filesystem " , <nl> - " @ boost_archive / / : iostreams " , <nl> - " @ boost_archive / / : system " , <nl> - ] , <nl> - ) <nl> - <nl> - cc_binary ( <nl> - name = " avrogencpp " , <nl> - srcs = [ " impl / avrogencpp . cc " ] , <nl> - deps = [ <nl> - " : avrocpp " , <nl> - " @ boost_archive / / : program_options " , <nl> - ] , <nl> - ) <nl> deleted file mode 100644 <nl> index c10d9eba476da . . 0000000000000 <nl> mmm a / boost . BUILD <nl> ppp / dev / null <nl> <nl> - # Description : <nl> - # The Boost library collection ( http : / / www . boost . org ) <nl> - # <nl> - # Most Boost libraries are header - only , in which case you only need to depend <nl> - # on : boost . If you need one of the libraries that has a separately - compiled <nl> - # implementation , depend on the appropriate libs rule . <nl> - <nl> - # This is only needed for Avro . <nl> - package ( default_visibility = [ " @ avro_archive / / : __subpackages__ " ] ) <nl> - <nl> - licenses ( [ " notice " ] ) # Boost software license <nl> - <nl> - cc_library ( <nl> - name = " boost " , <nl> - hdrs = glob ( [ <nl> - " boost / * * / * . hpp " , <nl> - " boost / * * / * . h " , <nl> - " boost / * * / * . ipp " , <nl> - ] ) , <nl> - includes = [ " . 
" ] , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " filesystem " , <nl> - srcs = glob ( [ " libs / filesystem / src / * . cpp " ] ) , <nl> - deps = [ <nl> - " : boost " , <nl> - " : system " , <nl> - ] , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " iostreams " , <nl> - srcs = glob ( [ " libs / iostreams / src / * . cpp " ] ) , <nl> - deps = [ <nl> - " : boost " , <nl> - " @ bzip2_archive / / : bz2lib " , <nl> - " @ zlib_archive / / : zlib " , <nl> - ] , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " program_options " , <nl> - srcs = glob ( [ " libs / program_options / src / * . cpp " ] ) , <nl> - deps = [ " : boost " ] , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " system " , <nl> - srcs = glob ( [ " libs / system / src / * . cpp " ] ) , <nl> - deps = [ " : boost " ] , <nl> - ) <nl> deleted file mode 100644 <nl> index f5385964187ef . . 0000000000000 <nl> mmm a / boringssl . BUILD <nl> ppp / dev / null <nl> <nl> - package ( default_visibility = [ " / / tensorflow : __subpackages__ " ] ) <nl> - <nl> - licenses ( [ " restricted " ] ) # OpenSSL license , partly BSD - like <nl> - <nl> - # See https : / / boringssl . googlesource . com / boringssl / + / master / INCORPORATING . md <nl> - # on how to re - generate the list of source files . <nl> - <nl> - ssl_headers = [ <nl> - ] <nl> - <nl> - ssl_internal_headers = [ <nl> - " ssl / internal . h " , <nl> - " ssl / test / async_bio . h " , <nl> - " ssl / test / packeted_bio . h " , <nl> - " ssl / test / scoped_types . h " , <nl> - " ssl / test / test_config . h " , <nl> - ] <nl> - <nl> - ssl_sources = [ <nl> - " ssl / custom_extensions . c " , <nl> - " ssl / d1_both . c " , <nl> - " ssl / d1_clnt . c " , <nl> - " ssl / d1_lib . c " , <nl> - " ssl / d1_meth . c " , <nl> - " ssl / d1_pkt . c " , <nl> - " ssl / d1_srtp . c " , <nl> - " ssl / d1_srvr . c " , <nl> - " ssl / dtls_record . c " , <nl> - " ssl / pqueue / pqueue . c " , <nl> - " ssl / s3_both . c " , <nl> - " ssl / s3_clnt . 
c " , <nl> - " ssl / s3_enc . c " , <nl> - " ssl / s3_lib . c " , <nl> - " ssl / s3_meth . c " , <nl> - " ssl / s3_pkt . c " , <nl> - " ssl / s3_srvr . c " , <nl> - " ssl / ssl_aead_ctx . c " , <nl> - " ssl / ssl_asn1 . c " , <nl> - " ssl / ssl_buffer . c " , <nl> - " ssl / ssl_cert . c " , <nl> - " ssl / ssl_cipher . c " , <nl> - " ssl / ssl_ecdh . c " , <nl> - " ssl / ssl_file . c " , <nl> - " ssl / ssl_lib . c " , <nl> - " ssl / ssl_rsa . c " , <nl> - " ssl / ssl_session . c " , <nl> - " ssl / ssl_stat . c " , <nl> - " ssl / t1_enc . c " , <nl> - " ssl / t1_lib . c " , <nl> - " ssl / tls_record . c " , <nl> - ] <nl> - <nl> - crypto_headers = [ <nl> - " include / openssl / aead . h " , <nl> - " include / openssl / aes . h " , <nl> - " include / openssl / arm_arch . h " , <nl> - " include / openssl / asn1 . h " , <nl> - " include / openssl / asn1_mac . h " , <nl> - " include / openssl / asn1t . h " , <nl> - " include / openssl / base . h " , <nl> - " include / openssl / base64 . h " , <nl> - " include / openssl / bio . h " , <nl> - " include / openssl / blowfish . h " , <nl> - " include / openssl / bn . h " , <nl> - " include / openssl / buf . h " , <nl> - " include / openssl / buffer . h " , <nl> - " include / openssl / bytestring . h " , <nl> - " include / openssl / cast . h " , <nl> - " include / openssl / chacha . h " , <nl> - " include / openssl / cipher . h " , <nl> - " include / openssl / cmac . h " , <nl> - " include / openssl / conf . h " , <nl> - " include / openssl / cpu . h " , <nl> - " include / openssl / crypto . h " , <nl> - " include / openssl / curve25519 . h " , <nl> - " include / openssl / des . h " , <nl> - " include / openssl / dh . h " , <nl> - " include / openssl / digest . h " , <nl> - " include / openssl / dsa . h " , <nl> - " include / openssl / ec . h " , <nl> - " include / openssl / ec_key . h " , <nl> - " include / openssl / ecdh . h " , <nl> - " include / openssl / ecdsa . h " , <nl> - " include / openssl / engine . 
h " , <nl> - " include / openssl / err . h " , <nl> - " include / openssl / evp . h " , <nl> - " include / openssl / ex_data . h " , <nl> - " include / openssl / hkdf . h " , <nl> - " include / openssl / hmac . h " , <nl> - " include / openssl / lhash . h " , <nl> - " include / openssl / lhash_macros . h " , <nl> - " include / openssl / md4 . h " , <nl> - " include / openssl / md5 . h " , <nl> - " include / openssl / mem . h " , <nl> - " include / openssl / newhope . h " , <nl> - " include / openssl / nid . h " , <nl> - " include / openssl / obj . h " , <nl> - " include / openssl / obj_mac . h " , <nl> - " include / openssl / objects . h " , <nl> - " include / openssl / opensslconf . h " , <nl> - " include / openssl / opensslv . h " , <nl> - " include / openssl / ossl_typ . h " , <nl> - " include / openssl / pem . h " , <nl> - " include / openssl / pkcs12 . h " , <nl> - " include / openssl / pkcs7 . h " , <nl> - " include / openssl / pkcs8 . h " , <nl> - " include / openssl / poly1305 . h " , <nl> - " include / openssl / pqueue . h " , <nl> - " include / openssl / rand . h " , <nl> - " include / openssl / rc4 . h " , <nl> - " include / openssl / ripemd . h " , <nl> - " include / openssl / rsa . h " , <nl> - " include / openssl / safestack . h " , <nl> - " include / openssl / sha . h " , <nl> - " include / openssl / srtp . h " , <nl> - " include / openssl / stack . h " , <nl> - " include / openssl / stack_macros . h " , <nl> - " include / openssl / thread . h " , <nl> - " include / openssl / time_support . h " , <nl> - " include / openssl / type_check . h " , <nl> - " include / openssl / x509 . h " , <nl> - " include / openssl / x509_vfy . h " , <nl> - " include / openssl / x509v3 . h " , <nl> - ] <nl> - <nl> - crypto_internal_headers = [ <nl> - " crypto / aes / internal . h " , <nl> - " crypto / asn1 / asn1_locl . h " , <nl> - " crypto / bio / internal . h " , <nl> - " crypto / bn / internal . h " , <nl> - " crypto / bn / rsaz_exp . 
h " , <nl> - " crypto / bytestring / internal . h " , <nl> - " crypto / cipher / internal . h " , <nl> - " crypto / conf / conf_def . h " , <nl> - " crypto / conf / internal . h " , <nl> - " crypto / curve25519 / internal . h " , <nl> - " crypto / des / internal . h " , <nl> - " crypto / dh / internal . h " , <nl> - " crypto / digest / internal . h " , <nl> - " crypto / digest / md32_common . h " , <nl> - " crypto / ec / internal . h " , <nl> - " crypto / ec / p256 - x86_64 - table . h " , <nl> - " crypto / evp / internal . h " , <nl> - " crypto / internal . h " , <nl> - " crypto / modes / internal . h " , <nl> - " crypto / newhope / internal . h " , <nl> - " crypto / obj / obj_dat . h " , <nl> - " crypto / obj / obj_xref . h " , <nl> - " crypto / pkcs8 / internal . h " , <nl> - " crypto / poly1305 / internal . h " , <nl> - " crypto / rand / internal . h " , <nl> - " crypto / rsa / internal . h " , <nl> - " crypto / test / scoped_types . h " , <nl> - " crypto / test / test_util . h " , <nl> - " crypto / x509 / charmap . h " , <nl> - " crypto / x509 / internal . h " , <nl> - " crypto / x509 / vpm_int . h " , <nl> - " crypto / x509v3 / ext_dat . h " , <nl> - " crypto / x509v3 / pcy_int . h " , <nl> - ] <nl> - <nl> - crypto_sources = [ <nl> - " : err_data_c " , <nl> - " crypto / aes / aes . c " , <nl> - " crypto / aes / mode_wrappers . c " , <nl> - " crypto / asn1 / a_bitstr . c " , <nl> - " crypto / asn1 / a_bool . c " , <nl> - " crypto / asn1 / a_bytes . c " , <nl> - " crypto / asn1 / a_d2i_fp . c " , <nl> - " crypto / asn1 / a_dup . c " , <nl> - " crypto / asn1 / a_enum . c " , <nl> - " crypto / asn1 / a_gentm . c " , <nl> - " crypto / asn1 / a_i2d_fp . c " , <nl> - " crypto / asn1 / a_int . c " , <nl> - " crypto / asn1 / a_mbstr . c " , <nl> - " crypto / asn1 / a_object . c " , <nl> - " crypto / asn1 / a_octet . c " , <nl> - " crypto / asn1 / a_print . c " , <nl> - " crypto / asn1 / a_strnid . c " , <nl> - " crypto / asn1 / a_time . 
c " , <nl> - " crypto / asn1 / a_type . c " , <nl> - " crypto / asn1 / a_utctm . c " , <nl> - " crypto / asn1 / a_utf8 . c " , <nl> - " crypto / asn1 / asn1_lib . c " , <nl> - " crypto / asn1 / asn1_par . c " , <nl> - " crypto / asn1 / asn_pack . c " , <nl> - " crypto / asn1 / bio_asn1 . c " , <nl> - " crypto / asn1 / bio_ndef . c " , <nl> - " crypto / asn1 / f_enum . c " , <nl> - " crypto / asn1 / f_int . c " , <nl> - " crypto / asn1 / f_string . c " , <nl> - " crypto / asn1 / t_bitst . c " , <nl> - " crypto / asn1 / tasn_dec . c " , <nl> - " crypto / asn1 / tasn_enc . c " , <nl> - " crypto / asn1 / tasn_fre . c " , <nl> - " crypto / asn1 / tasn_new . c " , <nl> - " crypto / asn1 / tasn_prn . c " , <nl> - " crypto / asn1 / tasn_typ . c " , <nl> - " crypto / asn1 / tasn_utl . c " , <nl> - " crypto / asn1 / x_bignum . c " , <nl> - " crypto / asn1 / x_long . c " , <nl> - " crypto / base64 / base64 . c " , <nl> - " crypto / bio / bio . c " , <nl> - " crypto / bio / bio_mem . c " , <nl> - " crypto / bio / buffer . c " , <nl> - " crypto / bio / connect . c " , <nl> - " crypto / bio / fd . c " , <nl> - " crypto / bio / file . c " , <nl> - " crypto / bio / hexdump . c " , <nl> - " crypto / bio / pair . c " , <nl> - " crypto / bio / printf . c " , <nl> - " crypto / bio / socket . c " , <nl> - " crypto / bio / socket_helper . c " , <nl> - " crypto / bn / add . c " , <nl> - " crypto / bn / asm / x86_64 - gcc . c " , <nl> - " crypto / bn / bn . c " , <nl> - " crypto / bn / bn_asn1 . c " , <nl> - " crypto / bn / cmp . c " , <nl> - " crypto / bn / convert . c " , <nl> - " crypto / bn / ctx . c " , <nl> - " crypto / bn / div . c " , <nl> - " crypto / bn / exponentiation . c " , <nl> - " crypto / bn / gcd . c " , <nl> - " crypto / bn / generic . c " , <nl> - " crypto / bn / kronecker . c " , <nl> - " crypto / bn / montgomery . c " , <nl> - " crypto / bn / mul . c " , <nl> - " crypto / bn / prime . c " , <nl> - " crypto / bn / random . c " , <nl> - " crypto / bn / rsaz_exp . 
c " , <nl> - " crypto / bn / shift . c " , <nl> - " crypto / bn / sqrt . c " , <nl> - " crypto / buf / buf . c " , <nl> - " crypto / bytestring / asn1_compat . c " , <nl> - " crypto / bytestring / ber . c " , <nl> - " crypto / bytestring / cbb . c " , <nl> - " crypto / bytestring / cbs . c " , <nl> - " crypto / chacha / chacha . c " , <nl> - " crypto / cipher / aead . c " , <nl> - " crypto / cipher / cipher . c " , <nl> - " crypto / cipher / derive_key . c " , <nl> - " crypto / cipher / e_aes . c " , <nl> - " crypto / cipher / e_chacha20poly1305 . c " , <nl> - " crypto / cipher / e_des . c " , <nl> - " crypto / cipher / e_null . c " , <nl> - " crypto / cipher / e_rc2 . c " , <nl> - " crypto / cipher / e_rc4 . c " , <nl> - " crypto / cipher / e_ssl3 . c " , <nl> - " crypto / cipher / e_tls . c " , <nl> - " crypto / cipher / tls_cbc . c " , <nl> - " crypto / cmac / cmac . c " , <nl> - " crypto / conf / conf . c " , <nl> - " crypto / cpu - aarch64 - linux . c " , <nl> - " crypto / cpu - arm - linux . c " , <nl> - " crypto / cpu - arm . c " , <nl> - " crypto / cpu - intel . c " , <nl> - " crypto / crypto . c " , <nl> - " crypto / curve25519 / curve25519 . c " , <nl> - " crypto / curve25519 / spake25519 . c " , <nl> - " crypto / curve25519 / x25519 - x86_64 . c " , <nl> - " crypto / des / des . c " , <nl> - " crypto / dh / check . c " , <nl> - " crypto / dh / dh . c " , <nl> - " crypto / dh / dh_asn1 . c " , <nl> - " crypto / dh / params . c " , <nl> - " crypto / digest / digest . c " , <nl> - " crypto / digest / digests . c " , <nl> - " crypto / dsa / dsa . c " , <nl> - " crypto / dsa / dsa_asn1 . c " , <nl> - " crypto / ec / ec . c " , <nl> - " crypto / ec / ec_asn1 . c " , <nl> - " crypto / ec / ec_key . c " , <nl> - " crypto / ec / ec_montgomery . c " , <nl> - " crypto / ec / oct . c " , <nl> - " crypto / ec / p224 - 64 . c " , <nl> - " crypto / ec / p256 - 64 . c " , <nl> - " crypto / ec / p256 - x86_64 . c " , <nl> - " crypto / ec / simple . 
c " , <nl> - " crypto / ec / util - 64 . c " , <nl> - " crypto / ec / wnaf . c " , <nl> - " crypto / ecdh / ecdh . c " , <nl> - " crypto / ecdsa / ecdsa . c " , <nl> - " crypto / ecdsa / ecdsa_asn1 . c " , <nl> - " crypto / engine / engine . c " , <nl> - " crypto / err / err . c " , <nl> - " crypto / evp / digestsign . c " , <nl> - " crypto / evp / evp . c " , <nl> - " crypto / evp / evp_asn1 . c " , <nl> - " crypto / evp / evp_ctx . c " , <nl> - " crypto / evp / p_dsa_asn1 . c " , <nl> - " crypto / evp / p_ec . c " , <nl> - " crypto / evp / p_ec_asn1 . c " , <nl> - " crypto / evp / p_rsa . c " , <nl> - " crypto / evp / p_rsa_asn1 . c " , <nl> - " crypto / evp / pbkdf . c " , <nl> - " crypto / evp / print . c " , <nl> - " crypto / evp / sign . c " , <nl> - " crypto / ex_data . c " , <nl> - " crypto / hkdf / hkdf . c " , <nl> - " crypto / hmac / hmac . c " , <nl> - " crypto / lhash / lhash . c " , <nl> - " crypto / md4 / md4 . c " , <nl> - " crypto / md5 / md5 . c " , <nl> - " crypto / mem . c " , <nl> - " crypto / modes / cbc . c " , <nl> - " crypto / modes / cfb . c " , <nl> - " crypto / modes / ctr . c " , <nl> - " crypto / modes / gcm . c " , <nl> - " crypto / modes / ofb . c " , <nl> - " crypto / newhope / error_correction . c " , <nl> - " crypto / newhope / newhope . c " , <nl> - " crypto / newhope / ntt . c " , <nl> - " crypto / newhope / poly . c " , <nl> - " crypto / newhope / precomp . c " , <nl> - " crypto / newhope / reduce . c " , <nl> - " crypto / obj / obj . c " , <nl> - " crypto / obj / obj_xref . c " , <nl> - " crypto / pem / pem_all . c " , <nl> - " crypto / pem / pem_info . c " , <nl> - " crypto / pem / pem_lib . c " , <nl> - " crypto / pem / pem_oth . c " , <nl> - " crypto / pem / pem_pk8 . c " , <nl> - " crypto / pem / pem_pkey . c " , <nl> - " crypto / pem / pem_x509 . c " , <nl> - " crypto / pem / pem_xaux . c " , <nl> - " crypto / pkcs8 / p5_pbe . c " , <nl> - " crypto / pkcs8 / p5_pbev2 . c " , <nl> - " crypto / pkcs8 / p8_pkey . 
c " , <nl> - " crypto / pkcs8 / pkcs8 . c " , <nl> - " crypto / poly1305 / poly1305 . c " , <nl> - " crypto / poly1305 / poly1305_arm . c " , <nl> - " crypto / poly1305 / poly1305_vec . c " , <nl> - " crypto / rand / deterministic . c " , <nl> - " crypto / rand / rand . c " , <nl> - " crypto / rand / urandom . c " , <nl> - " crypto / rand / windows . c " , <nl> - " crypto / rc4 / rc4 . c " , <nl> - " crypto / refcount_c11 . c " , <nl> - " crypto / refcount_lock . c " , <nl> - " crypto / rsa / blinding . c " , <nl> - " crypto / rsa / padding . c " , <nl> - " crypto / rsa / rsa . c " , <nl> - " crypto / rsa / rsa_asn1 . c " , <nl> - " crypto / rsa / rsa_impl . c " , <nl> - " crypto / sha / sha1 . c " , <nl> - " crypto / sha / sha256 . c " , <nl> - " crypto / sha / sha512 . c " , <nl> - " crypto / stack / stack . c " , <nl> - " crypto / thread . c " , <nl> - " crypto / thread_none . c " , <nl> - " crypto / thread_pthread . c " , <nl> - " crypto / thread_win . c " , <nl> - " crypto / time_support . c " , <nl> - " crypto / x509 / a_digest . c " , <nl> - " crypto / x509 / a_sign . c " , <nl> - " crypto / x509 / a_strex . c " , <nl> - " crypto / x509 / a_verify . c " , <nl> - " crypto / x509 / algorithm . c " , <nl> - " crypto / x509 / asn1_gen . c " , <nl> - " crypto / x509 / by_dir . c " , <nl> - " crypto / x509 / by_file . c " , <nl> - " crypto / x509 / i2d_pr . c " , <nl> - " crypto / x509 / pkcs7 . c " , <nl> - " crypto / x509 / rsa_pss . c " , <nl> - " crypto / x509 / t_crl . c " , <nl> - " crypto / x509 / t_req . c " , <nl> - " crypto / x509 / t_x509 . c " , <nl> - " crypto / x509 / t_x509a . c " , <nl> - " crypto / x509 / x509 . c " , <nl> - " crypto / x509 / x509_att . c " , <nl> - " crypto / x509 / x509_cmp . c " , <nl> - " crypto / x509 / x509_d2 . c " , <nl> - " crypto / x509 / x509_def . c " , <nl> - " crypto / x509 / x509_ext . c " , <nl> - " crypto / x509 / x509_lu . c " , <nl> - " crypto / x509 / x509_obj . c " , <nl> - " crypto / x509 / x509_r2x . 
c " , <nl> - " crypto / x509 / x509_req . c " , <nl> - " crypto / x509 / x509_set . c " , <nl> - " crypto / x509 / x509_trs . c " , <nl> - " crypto / x509 / x509_txt . c " , <nl> - " crypto / x509 / x509_v3 . c " , <nl> - " crypto / x509 / x509_vfy . c " , <nl> - " crypto / x509 / x509_vpm . c " , <nl> - " crypto / x509 / x509cset . c " , <nl> - " crypto / x509 / x509name . c " , <nl> - " crypto / x509 / x509rset . c " , <nl> - " crypto / x509 / x509spki . c " , <nl> - " crypto / x509 / x509type . c " , <nl> - " crypto / x509 / x_algor . c " , <nl> - " crypto / x509 / x_all . c " , <nl> - " crypto / x509 / x_attrib . c " , <nl> - " crypto / x509 / x_crl . c " , <nl> - " crypto / x509 / x_exten . c " , <nl> - " crypto / x509 / x_info . c " , <nl> - " crypto / x509 / x_name . c " , <nl> - " crypto / x509 / x_pkey . c " , <nl> - " crypto / x509 / x_pubkey . c " , <nl> - " crypto / x509 / x_req . c " , <nl> - " crypto / x509 / x_sig . c " , <nl> - " crypto / x509 / x_spki . c " , <nl> - " crypto / x509 / x_val . c " , <nl> - " crypto / x509 / x_x509 . c " , <nl> - " crypto / x509 / x_x509a . c " , <nl> - " crypto / x509v3 / pcy_cache . c " , <nl> - " crypto / x509v3 / pcy_data . c " , <nl> - " crypto / x509v3 / pcy_lib . c " , <nl> - " crypto / x509v3 / pcy_map . c " , <nl> - " crypto / x509v3 / pcy_node . c " , <nl> - " crypto / x509v3 / pcy_tree . c " , <nl> - " crypto / x509v3 / v3_akey . c " , <nl> - " crypto / x509v3 / v3_akeya . c " , <nl> - " crypto / x509v3 / v3_alt . c " , <nl> - " crypto / x509v3 / v3_bcons . c " , <nl> - " crypto / x509v3 / v3_bitst . c " , <nl> - " crypto / x509v3 / v3_conf . c " , <nl> - " crypto / x509v3 / v3_cpols . c " , <nl> - " crypto / x509v3 / v3_crld . c " , <nl> - " crypto / x509v3 / v3_enum . c " , <nl> - " crypto / x509v3 / v3_extku . c " , <nl> - " crypto / x509v3 / v3_genn . c " , <nl> - " crypto / x509v3 / v3_ia5 . c " , <nl> - " crypto / x509v3 / v3_info . c " , <nl> - " crypto / x509v3 / v3_int . 
c " , <nl> - " crypto / x509v3 / v3_lib . c " , <nl> - " crypto / x509v3 / v3_ncons . c " , <nl> - " crypto / x509v3 / v3_pci . c " , <nl> - " crypto / x509v3 / v3_pcia . c " , <nl> - " crypto / x509v3 / v3_pcons . c " , <nl> - " crypto / x509v3 / v3_pku . c " , <nl> - " crypto / x509v3 / v3_pmaps . c " , <nl> - " crypto / x509v3 / v3_prn . c " , <nl> - " crypto / x509v3 / v3_purp . c " , <nl> - " crypto / x509v3 / v3_skey . c " , <nl> - " crypto / x509v3 / v3_sxnet . c " , <nl> - " crypto / x509v3 / v3_utl . c " , <nl> - ] <nl> - <nl> - # A trick to take the generated err_data . c from another package . <nl> - genrule ( <nl> - name = " err_data_c " , <nl> - srcs = [ " / / external : boringssl_err_data_c " ] , <nl> - outs = [ " err_data . c " ] , <nl> - cmd = " cp $ < $ @ " , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " crypto " , <nl> - srcs = crypto_internal_headers + crypto_sources , <nl> - hdrs = crypto_headers , <nl> - # To avoid linking platform - specific ASM files . <nl> - defines = [ " OPENSSL_NO_ASM " ] , <nl> - includes = [ " include " ] , <nl> - visibility = [ " / / visibility : public " ] , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " ssl " , <nl> - srcs = ssl_internal_headers + ssl_sources , <nl> - hdrs = ssl_headers , <nl> - includes = [ " src / include " ] , <nl> - visibility = [ " / / visibility : public " ] , <nl> - deps = [ <nl> - " : crypto " , <nl> - ] , <nl> - ) <nl> deleted file mode 100644 <nl> index 8865054d70c6a . . 0000000000000 <nl> mmm a / bzip2 . BUILD <nl> ppp / dev / null <nl> <nl> - package ( default_visibility = [ " / / visibility : public " ] ) <nl> - <nl> - licenses ( [ " notice " ] ) # BSD derivative <nl> - <nl> - cc_library ( <nl> - name = " bz2lib " , <nl> - srcs = [ <nl> - # These are in the same order as their corresponding . o files are in <nl> - # OBJS in Makefile ( rather than lexicographic order ) for easy <nl> - # comparison ( that they are identical . ) <nl> - " blocksort . c " , <nl> - " huffman . 
c " , <nl> - " crctable . c " , <nl> - " randtable . c " , <nl> - " compress . c " , <nl> - " decompress . c " , <nl> - " bzlib . c " , <nl> - " bzlib_private . h " , <nl> - ] , <nl> - hdrs = [ " bzlib . h " ] , <nl> - includes = [ " . " ] , <nl> - ) <nl> - <nl> - cc_binary ( <nl> - name = " bzip2 " , <nl> - srcs = [ " bzip2 . c " ] , <nl> - deps = [ " : bz2lib " ] , <nl> - ) <nl> deleted file mode 100644 <nl> index f25208c41672d . . 0000000000000 <nl> mmm a / tensorflow / third_party / hadoop / BUILD <nl> ppp / dev / null <nl> <nl> - package ( default_visibility = [ " / / visibility : public " ] ) <nl> - <nl> - licenses ( [ " notice " ] ) # Apache 2 . 0 <nl> - <nl> - filegroup ( <nl> - name = " all_files " , <nl> - srcs = glob ( <nl> - [ " * * / * " ] , <nl> - exclude = [ <nl> - " * * / METADATA " , <nl> - " * * / OWNERS " , <nl> - ] , <nl> - ) , <nl> - visibility = [ " / / tensorflow : __subpackages__ " ] , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " hdfs " , <nl> - hdrs = [ " hdfs . h " ] , <nl> - ) <nl> deleted file mode 100644 <nl> index 560d8bba0e0e7 . . 0000000000000 <nl> mmm a / tensorflow / third_party / hadoop / hdfs . h <nl> ppp / dev / null <nl> <nl> - / * * <nl> - * Licensed to the Apache Software Foundation ( ASF ) under one <nl> - * or more contributor license agreements . See the NOTICE file <nl> - * distributed with this work for additional information <nl> - * regarding copyright ownership . The ASF licenses this file <nl> - * to you under the Apache License , Version 2 . 0 ( the <nl> - * " License " ) ; you may not use this file except in compliance <nl> - * with the License . You may obtain a copy of the License at <nl> - * <nl> - * http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - * <nl> - * Unless required by applicable law or agreed to in writing , software <nl> - * distributed under the License is distributed on an " AS IS " BASIS , <nl> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . 
<nl> - * See the License for the specific language governing permissions and <nl> - * limitations under the License . <nl> - * / <nl> - <nl> - # ifndef LIBHDFS_HDFS_H <nl> - # define LIBHDFS_HDFS_H <nl> - <nl> - # include < errno . h > / * for EINTERNAL , etc . * / <nl> - # include < fcntl . h > / * for O_RDONLY , O_WRONLY * / <nl> - # include < stdint . h > / * for uint64_t , etc . * / <nl> - # include < time . h > / * for time_t * / <nl> - <nl> - / * <nl> - * Support export of DLL symbols during libhdfs build , and import of DLL symbols <nl> - * during client application build . A client application may optionally define <nl> - * symbol LIBHDFS_DLL_IMPORT in its build . This is not strictly required , but <nl> - * the compiler can produce more efficient code with it . <nl> - * / <nl> - # ifdef WIN32 <nl> - # ifdef LIBHDFS_DLL_EXPORT <nl> - # define LIBHDFS_EXTERNAL __declspec ( dllexport ) <nl> - # elif LIBHDFS_DLL_IMPORT <nl> - # define LIBHDFS_EXTERNAL __declspec ( dllimport ) <nl> - # else <nl> - # define LIBHDFS_EXTERNAL <nl> - # endif <nl> - # else <nl> - # ifdef LIBHDFS_DLL_EXPORT <nl> - # define LIBHDFS_EXTERNAL __attribute__ ( ( visibility ( " default " ) ) ) <nl> - # elif LIBHDFS_DLL_IMPORT <nl> - # define LIBHDFS_EXTERNAL __attribute__ ( ( visibility ( " default " ) ) ) <nl> - # else <nl> - # define LIBHDFS_EXTERNAL <nl> - # endif <nl> - # endif <nl> - <nl> - # ifndef O_RDONLY <nl> - # define O_RDONLY 1 <nl> - # endif <nl> - <nl> - # ifndef O_WRONLY <nl> - # define O_WRONLY 2 <nl> - # endif <nl> - <nl> - # ifndef EINTERNAL <nl> - # define EINTERNAL 255 <nl> - # endif <nl> - <nl> - # define ELASTIC_BYTE_BUFFER_POOL_CLASS \ <nl> - " org / apache / hadoop / io / ElasticByteBufferPool " <nl> - <nl> - / * * All APIs set errno to meaningful values * / <nl> - <nl> - # ifdef __cplusplus <nl> - extern " C " { <nl> - # endif <nl> - / * * <nl> - * Some utility decls used in libhdfs . 
<nl> - * / <nl> - struct hdfsBuilder ; <nl> - typedef int32_t tSize ; / / / size of data for read / write io ops <nl> - typedef time_t tTime ; / / / time type in seconds <nl> - typedef int64_t tOffset ; / / / offset within the file <nl> - typedef uint16_t tPort ; / / / port <nl> - typedef enum tObjectKind { <nl> - kObjectKindFile = ' F ' , <nl> - kObjectKindDirectory = ' D ' , <nl> - } tObjectKind ; <nl> - <nl> - / * * <nl> - * The C reflection of org . apache . org . hadoop . FileSystem . <nl> - * / <nl> - struct hdfs_internal ; <nl> - typedef struct hdfs_internal * hdfsFS ; <nl> - <nl> - struct hdfsFile_internal ; <nl> - typedef struct hdfsFile_internal * hdfsFile ; <nl> - <nl> - struct hadoopRzOptions ; <nl> - <nl> - struct hadoopRzBuffer ; <nl> - <nl> - / * * <nl> - * Determine if a file is open for read . <nl> - * <nl> - * @ param file The HDFS file <nl> - * @ return 1 if the file is open for read ; 0 otherwise <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsFileIsOpenForRead ( hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * Determine if a file is open for write . <nl> - * <nl> - * @ param file The HDFS file <nl> - * @ return 1 if the file is open for write ; 0 otherwise <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsFileIsOpenForWrite ( hdfsFile file ) ; <nl> - <nl> - struct hdfsReadStatistics { <nl> - uint64_t totalBytesRead ; <nl> - uint64_t totalLocalBytesRead ; <nl> - uint64_t totalShortCircuitBytesRead ; <nl> - uint64_t totalZeroCopyBytesRead ; <nl> - } ; <nl> - <nl> - / * * <nl> - * Get read statistics about a file . This is only applicable to files <nl> - * opened for reading . <nl> - * <nl> - * @ param file The HDFS file <nl> - * @ param stats ( out parameter ) on a successful return , the read <nl> - * statistics . Unchanged otherwise . You must free the <nl> - * returned statistics with hdfsFileFreeReadStatistics . <nl> - * @ return 0 if the statistics were successfully returned , <nl> - * - 1 otherwise . 
On a failure , please check errno against <nl> - * ENOTSUP . webhdfs , LocalFilesystem , and so forth may <nl> - * not support read statistics . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsFileGetReadStatistics ( hdfsFile file , struct hdfsReadStatistics * * stats ) ; <nl> - <nl> - / * * <nl> - * @ param stats HDFS read statistics for a file . <nl> - * <nl> - * @ return the number of remote bytes read . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int64_t hdfsReadStatisticsGetRemoteBytesRead ( <nl> - const struct hdfsReadStatistics * stats ) ; <nl> - <nl> - / * * <nl> - * Clear the read statistics for a file . <nl> - * <nl> - * @ param file The file to clear the read statistics of . <nl> - * <nl> - * @ return 0 on success ; the error code otherwise . <nl> - * EINVAL : the file is not open for reading . <nl> - * ENOTSUP : the file does not support clearing the read <nl> - * statistics . <nl> - * Errno will also be set to this code on failure . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsFileClearReadStatistics ( hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * Free some HDFS read statistics . <nl> - * <nl> - * @ param stats The HDFS read statistics to free . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsFileFreeReadStatistics ( struct hdfsReadStatistics * stats ) ; <nl> - <nl> - / * * <nl> - * hdfsConnectAsUser - Connect to a hdfs file system as a specific user <nl> - * Connect to the hdfs . <nl> - * @ param nn The NameNode . See hdfsBuilderSetNameNode for details . <nl> - * @ param port The port on which the server is listening . <nl> - * @ param user the user name ( this is hadoop domain user ) . Or NULL is equivelant <nl> - * to hhdfsConnect ( host , port ) <nl> - * @ return Returns a handle to the filesystem or NULL on error . <nl> - * @ deprecated Use hdfsBuilderConnect instead . 
<nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFS hdfsConnectAsUser ( const char * nn , tPort port , const char * user ) ; <nl> - <nl> - / * * <nl> - * hdfsConnect - Connect to a hdfs file system . <nl> - * Connect to the hdfs . <nl> - * @ param nn The NameNode . See hdfsBuilderSetNameNode for details . <nl> - * @ param port The port on which the server is listening . <nl> - * @ return Returns a handle to the filesystem or NULL on error . <nl> - * @ deprecated Use hdfsBuilderConnect instead . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFS hdfsConnect ( const char * nn , tPort port ) ; <nl> - <nl> - / * * <nl> - * hdfsConnect - Connect to an hdfs file system . <nl> - * <nl> - * Forces a new instance to be created <nl> - * <nl> - * @ param nn The NameNode . See hdfsBuilderSetNameNode for details . <nl> - * @ param port The port on which the server is listening . <nl> - * @ param user The user name to use when connecting <nl> - * @ return Returns a handle to the filesystem or NULL on error . <nl> - * @ deprecated Use hdfsBuilderConnect instead . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFS hdfsConnectAsUserNewInstance ( const char * nn , tPort port , <nl> - const char * user ) ; <nl> - <nl> - / * * <nl> - * hdfsConnect - Connect to an hdfs file system . <nl> - * <nl> - * Forces a new instance to be created <nl> - * <nl> - * @ param nn The NameNode . See hdfsBuilderSetNameNode for details . <nl> - * @ param port The port on which the server is listening . <nl> - * @ return Returns a handle to the filesystem or NULL on error . <nl> - * @ deprecated Use hdfsBuilderConnect instead . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFS hdfsConnectNewInstance ( const char * nn , tPort port ) ; <nl> - <nl> - / * * <nl> - * Connect to HDFS using the parameters defined by the builder . <nl> - * <nl> - * The HDFS builder will be freed , whether or not the connection was <nl> - * successful . 
<nl> - * <nl> - * Every successful call to hdfsBuilderConnect should be matched with a call <nl> - * to hdfsDisconnect , when the hdfsFS is no longer needed . <nl> - * <nl> - * @ param bld The HDFS builder <nl> - * @ return Returns a handle to the filesystem , or NULL on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFS hdfsBuilderConnect ( struct hdfsBuilder * bld ) ; <nl> - <nl> - / * * <nl> - * Create an HDFS builder . <nl> - * <nl> - * @ return The HDFS builder , or NULL on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - struct hdfsBuilder * hdfsNewBuilder ( void ) ; <nl> - <nl> - / * * <nl> - * Force the builder to always create a new instance of the FileSystem , <nl> - * rather than possibly finding one in the cache . <nl> - * <nl> - * @ param bld The HDFS builder <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsBuilderSetForceNewInstance ( struct hdfsBuilder * bld ) ; <nl> - <nl> - / * * <nl> - * Set the HDFS NameNode to connect to . <nl> - * <nl> - * @ param bld The HDFS builder <nl> - * @ param nn The NameNode to use . <nl> - * <nl> - * If the string given is ' default ' , the default NameNode <nl> - * configuration will be used ( from the XML configuration files ) <nl> - * <nl> - * If NULL is given , a LocalFileSystem will be created . <nl> - * <nl> - * If the string starts with a protocol type such as file : / / or <nl> - * hdfs : / / , this protocol type will be used . If not , the <nl> - * hdfs : / / protocol type will be used . <nl> - * <nl> - * You may specify a NameNode port in the usual way by <nl> - * passing a string of the format hdfs : / / < hostname > : < port > . <nl> - * Alternately , you may set the port with <nl> - * hdfsBuilderSetNameNodePort . However , you must not pass the <nl> - * port in two different ways . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsBuilderSetNameNode ( struct hdfsBuilder * bld , const char * nn ) ; <nl> - <nl> - / * * <nl> - * Set the port of the HDFS NameNode to connect to . 
<nl> - * <nl> - * @ param bld The HDFS builder <nl> - * @ param port The port . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsBuilderSetNameNodePort ( struct hdfsBuilder * bld , tPort port ) ; <nl> - <nl> - / * * <nl> - * Set the username to use when connecting to the HDFS cluster . <nl> - * <nl> - * @ param bld The HDFS builder <nl> - * @ param userName The user name . The string will be shallow - copied . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsBuilderSetUserName ( struct hdfsBuilder * bld , const char * userName ) ; <nl> - <nl> - / * * <nl> - * Set the path to the Kerberos ticket cache to use when connecting to <nl> - * the HDFS cluster . <nl> - * <nl> - * @ param bld The HDFS builder <nl> - * @ param kerbTicketCachePath The Kerberos ticket cache path . The string <nl> - * will be shallow - copied . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsBuilderSetKerbTicketCachePath ( struct hdfsBuilder * bld , <nl> - const char * kerbTicketCachePath ) ; <nl> - <nl> - / * * <nl> - * Free an HDFS builder . <nl> - * <nl> - * It is normally not necessary to call this function since <nl> - * hdfsBuilderConnect frees the builder . <nl> - * <nl> - * @ param bld The HDFS builder <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsFreeBuilder ( struct hdfsBuilder * bld ) ; <nl> - <nl> - / * * <nl> - * Set a configuration string for an HdfsBuilder . <nl> - * <nl> - * @ param key The key to set . <nl> - * @ param val The value , or NULL to set no value . <nl> - * This will be shallow - copied . You are responsible for <nl> - * ensuring that it remains valid until the builder is <nl> - * freed . <nl> - * <nl> - * @ return 0 on success ; nonzero error code otherwise . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsBuilderConfSetStr ( struct hdfsBuilder * bld , const char * key , <nl> - const char * val ) ; <nl> - <nl> - / * * <nl> - * Get a configuration string . <nl> - * <nl> - * @ param key The key to find <nl> - * @ param val ( out param ) The value . 
This will be set to NULL if the <nl> - * key isn ' t found . You must free this string with <nl> - * hdfsConfStrFree . <nl> - * <nl> - * @ return 0 on success ; nonzero error code otherwise . <nl> - * Failure to find the key is not an error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsConfGetStr ( const char * key , char * * val ) ; <nl> - <nl> - / * * <nl> - * Get a configuration integer . <nl> - * <nl> - * @ param key The key to find <nl> - * @ param val ( out param ) The value . This will NOT be changed if the <nl> - * key isn ' t found . <nl> - * <nl> - * @ return 0 on success ; nonzero error code otherwise . <nl> - * Failure to find the key is not an error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsConfGetInt ( const char * key , int32_t * val ) ; <nl> - <nl> - / * * <nl> - * Free a configuration string found with hdfsConfGetStr . <nl> - * <nl> - * @ param val A configuration string obtained from hdfsConfGetStr <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsConfStrFree ( char * val ) ; <nl> - <nl> - / * * <nl> - * hdfsDisconnect - Disconnect from the hdfs file system . <nl> - * Disconnect from hdfs . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * Even if there is an error , the resources associated with the <nl> - * hdfsFS will be freed . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsDisconnect ( hdfsFS fs ) ; <nl> - <nl> - / * * <nl> - * hdfsOpenFile - Open a hdfs file in given mode . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The full path to the file . <nl> - * @ param flags - an | of bits / fcntl . h file flags - supported flags are <nl> - * O_RDONLY , O_WRONLY ( meaning create or overwrite i . e . , implies O_TRUNCAT ) , <nl> - * O_WRONLY | O_APPEND . Other flags are generally ignored other than ( O_RDWR | | <nl> - * ( O_EXCL & O_CREAT ) ) which return NULL and set errno equal ENOTSUP . 
<nl> - * @ param bufferSize Size of buffer for read / write - pass 0 if you want <nl> - * to use the default configured values . <nl> - * @ param replication Block replication - pass 0 if you want to use <nl> - * the default configured values . <nl> - * @ param blocksize Size of block - pass 0 if you want to use the <nl> - * default configured values . <nl> - * @ return Returns the handle to the open file or NULL on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFile hdfsOpenFile ( hdfsFS fs , const char * path , int flags , int bufferSize , <nl> - short replication , tSize blocksize ) ; <nl> - <nl> - / * * <nl> - * hdfsTruncateFile - Truncate a hdfs file to given lenght . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The full path to the file . <nl> - * @ param newlength The size the file is to be truncated to <nl> - * @ return 1 if the file has been truncated to the desired newlength <nl> - * and is immediately available to be reused for write operations <nl> - * such as append . <nl> - * 0 if a background process of adjusting the length of the last <nl> - * block has been started , and clients should wait for it to <nl> - * complete before proceeding with further file updates . <nl> - * - 1 on error . <nl> - * / <nl> - int hdfsTruncateFile ( hdfsFS fs , const char * path , tOffset newlength ) ; <nl> - <nl> - / * * <nl> - * hdfsUnbufferFile - Reduce the buffering done on a file . <nl> - * <nl> - * @ param file The file to unbuffer . <nl> - * @ return 0 on success <nl> - * ENOTSUP if the file does not support unbuffering <nl> - * Errno will also be set to this value . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsUnbufferFile ( hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * hdfsCloseFile - Close an open file . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * On error , errno will be set appropriately . 
<nl> - * If the hdfs file was valid , the memory associated with it will <nl> - * be freed at the end of this call , even if there was an I / O <nl> - * error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsCloseFile ( hdfsFS fs , hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * hdfsExists - Checks if a given path exsits on the filesystem <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path to look for <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsExists ( hdfsFS fs , const char * path ) ; <nl> - <nl> - / * * <nl> - * hdfsSeek - Seek to given offset in file . <nl> - * This works only for files opened in read - only mode . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ param desiredPos Offset into the file to seek into . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsSeek ( hdfsFS fs , hdfsFile file , tOffset desiredPos ) ; <nl> - <nl> - / * * <nl> - * hdfsTell - Get the current offset in the file , in bytes . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ return Current offset , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tOffset hdfsTell ( hdfsFS fs , hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * hdfsRead - Read data from an open file . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ param buffer The buffer to copy read bytes into . <nl> - * @ param length The length of the buffer . <nl> - * @ return On success , a positive number indicating how many bytes <nl> - * were read . <nl> - * On end - of - file , 0 . <nl> - * On error , - 1 . Errno will be set to the error code . 
<nl> - * Just like the POSIX read function , hdfsRead will return - 1 <nl> - * and set errno to EINTR if data is temporarily unavailable , <nl> - * but we are not yet at the end of the file . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tSize hdfsRead ( hdfsFS fs , hdfsFile file , void * buffer , tSize length ) ; <nl> - <nl> - / * * <nl> - * hdfsPread - Positional read of data from an open file . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ param position Position from which to read <nl> - * @ param buffer The buffer to copy read bytes into . <nl> - * @ param length The length of the buffer . <nl> - * @ return See hdfsRead <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tSize hdfsPread ( hdfsFS fs , hdfsFile file , tOffset position , void * buffer , <nl> - tSize length ) ; <nl> - <nl> - / * * <nl> - * hdfsWrite - Write data into an open file . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ param buffer The data . <nl> - * @ param length The no . of bytes to write . <nl> - * @ return Returns the number of bytes written , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tSize hdfsWrite ( hdfsFS fs , hdfsFile file , const void * buffer , tSize length ) ; <nl> - <nl> - / * * <nl> - * hdfsWrite - Flush the data . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsFlush ( hdfsFS fs , hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * hdfsHFlush - Flush out the data in client ' s user buffer . After the <nl> - * return of this call , new readers will see the data . 
<nl> - * @ param fs configured filesystem handle <nl> - * @ param file file handle <nl> - * @ return 0 on success , - 1 on error and sets errno <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsHFlush ( hdfsFS fs , hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * hdfsHSync - Similar to posix fsync , Flush out the data in client ' s <nl> - * user buffer . all the way to the disk device ( but the disk may have <nl> - * it in its cache ) . <nl> - * @ param fs configured filesystem handle <nl> - * @ param file file handle <nl> - * @ return 0 on success , - 1 on error and sets errno <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsHSync ( hdfsFS fs , hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * hdfsAvailable - Number of bytes that can be read from this <nl> - * input stream without blocking . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param file The file handle . <nl> - * @ return Returns available bytes ; - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsAvailable ( hdfsFS fs , hdfsFile file ) ; <nl> - <nl> - / * * <nl> - * hdfsCopy - Copy file from one filesystem to another . <nl> - * @ param srcFS The handle to source filesystem . <nl> - * @ param src The path of source file . <nl> - * @ param dstFS The handle to destination filesystem . <nl> - * @ param dst The path of destination file . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsCopy ( hdfsFS srcFS , const char * src , hdfsFS dstFS , const char * dst ) ; <nl> - <nl> - / * * <nl> - * hdfsMove - Move file from one filesystem to another . <nl> - * @ param srcFS The handle to source filesystem . <nl> - * @ param src The path of source file . <nl> - * @ param dstFS The handle to destination filesystem . <nl> - * @ param dst The path of destination file . <nl> - * @ return Returns 0 on success , - 1 on error . 
<nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsMove ( hdfsFS srcFS , const char * src , hdfsFS dstFS , const char * dst ) ; <nl> - <nl> - / * * <nl> - * hdfsDelete - Delete file . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path of the file . <nl> - * @ param recursive if path is a directory and set to <nl> - * non - zero , the directory is deleted else throws an exception . In <nl> - * case of a file the recursive argument is irrelevant . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsDelete ( hdfsFS fs , const char * path , int recursive ) ; <nl> - <nl> - / * * <nl> - * hdfsRename - Rename file . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param oldPath The path of the source file . <nl> - * @ param newPath The path of the destination file . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsRename ( hdfsFS fs , const char * oldPath , const char * newPath ) ; <nl> - <nl> - / * * <nl> - * hdfsGetWorkingDirectory - Get the current working directory for <nl> - * the given filesystem . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param buffer The user - buffer to copy path of cwd into . <nl> - * @ param bufferSize The length of user - buffer . <nl> - * @ return Returns buffer , NULL on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - char * hdfsGetWorkingDirectory ( hdfsFS fs , char * buffer , size_t bufferSize ) ; <nl> - <nl> - / * * <nl> - * hdfsSetWorkingDirectory - Set the working directory . All relative <nl> - * paths will be resolved relative to it . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path of the new ' cwd ' . <nl> - * @ return Returns 0 on success , - 1 on error . 
<nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsSetWorkingDirectory ( hdfsFS fs , const char * path ) ; <nl> - <nl> - / * * <nl> - * hdfsCreateDirectory - Make the given file and all non - existent <nl> - * parents into directories . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path of the directory . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsCreateDirectory ( hdfsFS fs , const char * path ) ; <nl> - <nl> - / * * <nl> - * hdfsSetReplication - Set the replication of the specified <nl> - * file to the supplied value <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path of the file . <nl> - * @ return Returns 0 on success , - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsSetReplication ( hdfsFS fs , const char * path , int16_t replication ) ; <nl> - <nl> - / * * <nl> - * hdfsFileInfo - Information about a file / directory . <nl> - * / <nl> - typedef struct { <nl> - tObjectKind mKind ; / * file or directory * / <nl> - char * mName ; / * the name of the file * / <nl> - tTime mLastMod ; / * the last modification time for the file in seconds * / <nl> - tOffset mSize ; / * the size of the file in bytes * / <nl> - short mReplication ; / * the count of replicas * / <nl> - tOffset mBlockSize ; / * the block size for the file * / <nl> - char * mOwner ; / * the owner of the file * / <nl> - char * mGroup ; / * the group associated with the file * / <nl> - short mPermissions ; / * the permissions associated with the file * / <nl> - tTime mLastAccess ; / * the last access time for the file in seconds * / <nl> - } hdfsFileInfo ; <nl> - <nl> - / * * <nl> - * hdfsListDirectory - Get list of files / directories for a given <nl> - * directory - path . hdfsFreeFileInfo should be called to deallocate memory . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path of the directory . 
<nl> - * @ param numEntries Set to the number of files / directories in path . <nl> - * @ return Returns a dynamically - allocated array of hdfsFileInfo <nl> - * objects ; NULL on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFileInfo * hdfsListDirectory ( hdfsFS fs , const char * path , int * numEntries ) ; <nl> - <nl> - / * * <nl> - * hdfsGetPathInfo - Get information about a path as a ( dynamically <nl> - * allocated ) single hdfsFileInfo struct . hdfsFreeFileInfo should be <nl> - * called when the pointer is no longer needed . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path of the file . <nl> - * @ return Returns a dynamically - allocated hdfsFileInfo object ; <nl> - * NULL on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - hdfsFileInfo * hdfsGetPathInfo ( hdfsFS fs , const char * path ) ; <nl> - <nl> - / * * <nl> - * hdfsFreeFileInfo - Free up the hdfsFileInfo array ( including fields ) <nl> - * @ param hdfsFileInfo The array of dynamically - allocated hdfsFileInfo <nl> - * objects . <nl> - * @ param numEntries The size of the array . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsFreeFileInfo ( hdfsFileInfo * hdfsFileInfo , int numEntries ) ; <nl> - <nl> - / * * <nl> - * hdfsFileIsEncrypted : determine if a file is encrypted based on its <nl> - * hdfsFileInfo . <nl> - * @ return - 1 if there was an error ( errno will be set ) , 0 if the file is <nl> - * not encrypted , 1 if the file is encrypted . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsFileIsEncrypted ( hdfsFileInfo * hdfsFileInfo ) ; <nl> - <nl> - / * * <nl> - * hdfsGetHosts - Get hostnames where a particular block ( determined by <nl> - * pos & blocksize ) of a file is stored . The last element in the array <nl> - * is NULL . Due to replication , a single block could be present on <nl> - * multiple hosts . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The path of the file . 
<nl> - * @ param start The start of the block . <nl> - * @ param length The length of the block . <nl> - * @ return Returns a dynamically - allocated 2 - d array of blocks - hosts ; <nl> - * NULL on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - char * * * hdfsGetHosts ( hdfsFS fs , const char * path , tOffset start , <nl> - tOffset length ) ; <nl> - <nl> - / * * <nl> - * hdfsFreeHosts - Free up the structure returned by hdfsGetHosts <nl> - * @ param hdfsFileInfo The array of dynamically - allocated hdfsFileInfo <nl> - * objects . <nl> - * @ param numEntries The size of the array . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hdfsFreeHosts ( char * * * blockHosts ) ; <nl> - <nl> - / * * <nl> - * hdfsGetDefaultBlockSize - Get the default blocksize . <nl> - * <nl> - * @ param fs The configured filesystem handle . <nl> - * @ deprecated Use hdfsGetDefaultBlockSizeAtPath instead . <nl> - * <nl> - * @ return Returns the default blocksize , or - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tOffset hdfsGetDefaultBlockSize ( hdfsFS fs ) ; <nl> - <nl> - / * * <nl> - * hdfsGetDefaultBlockSizeAtPath - Get the default blocksize at the <nl> - * filesystem indicated by a given path . <nl> - * <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path The given path will be used to locate the actual <nl> - * filesystem . The full path does not have to exist . <nl> - * <nl> - * @ return Returns the default blocksize , or - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tOffset hdfsGetDefaultBlockSizeAtPath ( hdfsFS fs , const char * path ) ; <nl> - <nl> - / * * <nl> - * hdfsGetCapacity - Return the raw capacity of the filesystem . <nl> - * @ param fs The configured filesystem handle . <nl> - * @ return Returns the raw - capacity ; - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tOffset hdfsGetCapacity ( hdfsFS fs ) ; <nl> - <nl> - / * * <nl> - * hdfsGetUsed - Return the total raw size of all files in the filesystem . 
<nl> - * @ param fs The configured filesystem handle . <nl> - * @ return Returns the total - size ; - 1 on error . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - tOffset hdfsGetUsed ( hdfsFS fs ) ; <nl> - <nl> - / * * <nl> - * Change the user and / or group of a file or directory . <nl> - * <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path the path to the file or directory <nl> - * @ param owner User string . Set to NULL for ' no change ' <nl> - * @ param group Group string . Set to NULL for ' no change ' <nl> - * @ return 0 on success else - 1 <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsChown ( hdfsFS fs , const char * path , const char * owner , <nl> - const char * group ) ; <nl> - <nl> - / * * <nl> - * hdfsChmod <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path the path to the file or directory <nl> - * @ param mode the bitmask to set it to <nl> - * @ return 0 on success else - 1 <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsChmod ( hdfsFS fs , const char * path , short mode ) ; <nl> - <nl> - / * * <nl> - * hdfsUtime <nl> - * @ param fs The configured filesystem handle . <nl> - * @ param path the path to the file or directory <nl> - * @ param mtime new modification time or - 1 for no change <nl> - * @ param atime new access time or - 1 for no change <nl> - * @ return 0 on success else - 1 <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hdfsUtime ( hdfsFS fs , const char * path , tTime mtime , tTime atime ) ; <nl> - <nl> - / * * <nl> - * Allocate a zero - copy options structure . <nl> - * <nl> - * You must free all options structures allocated with this function using <nl> - * hadoopRzOptionsFree . <nl> - * <nl> - * @ return A zero - copy options structure , or NULL if one could <nl> - * not be allocated . If NULL is returned , errno will <nl> - * contain the error number . 
<nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - struct hadoopRzOptions * hadoopRzOptionsAlloc ( void ) ; <nl> - <nl> - / * * <nl> - * Determine whether we should skip checksums in read0 . <nl> - * <nl> - * @ param opts The options structure . <nl> - * @ param skip Nonzero to skip checksums sometimes ; zero to always <nl> - * check them . <nl> - * <nl> - * @ return 0 on success ; - 1 plus errno on failure . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hadoopRzOptionsSetSkipChecksum ( struct hadoopRzOptions * opts , int skip ) ; <nl> - <nl> - / * * <nl> - * Set the ByteBufferPool to use with read0 . <nl> - * <nl> - * @ param opts The options structure . <nl> - * @ param className If this is NULL , we will not use any <nl> - * ByteBufferPool . If this is non - NULL , it will be <nl> - * treated as the name of the pool class to use . <nl> - * For example , you can use <nl> - * ELASTIC_BYTE_BUFFER_POOL_CLASS . <nl> - * <nl> - * @ return 0 if the ByteBufferPool class was found and <nl> - * instantiated ; <nl> - * - 1 plus errno otherwise . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int hadoopRzOptionsSetByteBufferPool ( struct hadoopRzOptions * opts , <nl> - const char * className ) ; <nl> - <nl> - / * * <nl> - * Free a hadoopRzOptionsFree structure . <nl> - * <nl> - * @ param opts The options structure to free . <nl> - * Any associated ByteBufferPool will also be freed . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hadoopRzOptionsFree ( struct hadoopRzOptions * opts ) ; <nl> - <nl> - / * * <nl> - * Perform a byte buffer read . <nl> - * If possible , this will be a zero - copy ( mmap ) read . <nl> - * <nl> - * @ param file The file to read from . <nl> - * @ param opts An options structure created by hadoopRzOptionsAlloc . <nl> - * @ param maxLength The maximum length to read . We may read fewer bytes <nl> - * than this length . <nl> - * <nl> - * @ return On success , we will return a new hadoopRzBuffer . 
<nl> - * This buffer will continue to be valid and readable <nl> - * until it is released by readZeroBufferFree . Failure to <nl> - * release a buffer will lead to a memory leak . <nl> - * You can access the data within the hadoopRzBuffer with <nl> - * hadoopRzBufferGet . If you have reached EOF , the data <nl> - * within the hadoopRzBuffer will be NULL . You must still <nl> - * free hadoopRzBuffer instances containing NULL . <nl> - * <nl> - * On failure , we will return NULL plus an errno code . <nl> - * errno = EOPNOTSUPP indicates that we could not do a <nl> - * zero - copy read , and there was no ByteBufferPool <nl> - * supplied . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - struct hadoopRzBuffer * hadoopReadZero ( hdfsFile file , <nl> - struct hadoopRzOptions * opts , <nl> - int32_t maxLength ) ; <nl> - <nl> - / * * <nl> - * Determine the length of the buffer returned from readZero . <nl> - * <nl> - * @ param buffer a buffer returned from readZero . <nl> - * @ return the length of the buffer . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - int32_t hadoopRzBufferLength ( const struct hadoopRzBuffer * buffer ) ; <nl> - <nl> - / * * <nl> - * Get a pointer to the raw buffer returned from readZero . <nl> - * <nl> - * To find out how many bytes this buffer contains , call <nl> - * hadoopRzBufferLength . <nl> - * <nl> - * @ param buffer a buffer returned from readZero . <nl> - * @ return a pointer to the start of the buffer . This will be <nl> - * NULL when end - of - file has been reached . <nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - const void * hadoopRzBufferGet ( const struct hadoopRzBuffer * buffer ) ; <nl> - <nl> - / * * <nl> - * Release a buffer obtained through readZero . <nl> - * <nl> - * @ param file The hdfs stream that created this buffer . This must be <nl> - * the same stream you called hadoopReadZero on . <nl> - * @ param buffer The buffer to release . 
<nl> - * / <nl> - LIBHDFS_EXTERNAL <nl> - void hadoopRzBufferFree ( hdfsFile file , struct hadoopRzBuffer * buffer ) ; <nl> - <nl> - # ifdef __cplusplus <nl> - } <nl> - # endif <nl> - <nl> - # undef LIBHDFS_EXTERNAL <nl> - # endif / * LIBHDFS_HDFS_H * / <nl> - <nl> - / * * <nl> - * vim : ts = 4 : sw = 4 : et <nl> - * / <nl> deleted file mode 100644 <nl> index f631b6df06d13 . . 0000000000000 <nl> mmm a / third_party / avro / BUILD <nl> ppp / dev / null <nl> <nl> - package ( default_visibility = [ " / / visibility : public " ] ) <nl> - <nl> - licenses ( [ " notice " ] ) # Apache 2 . 0 <nl> deleted file mode 100644 <nl> index 9140409876927 . . 0000000000000 <nl> mmm a / third_party / avro / build_defs . bzl <nl> ppp / dev / null <nl> <nl> - " " " Build extension for generating C + + header file from an Avro schema . <nl> - <nl> - Example usage : <nl> - <nl> - load ( " / / third_party / avro : build_defs . bzl " , " avro_gen_cpp " ) <nl> - <nl> - avro_gen_cpp ( <nl> - name = " myrule " , <nl> - srcs = [ " myschema . json " ] , <nl> - outs = [ " myschema . h " ] , <nl> - namespace = " mynamespace " , <nl> - ) <nl> - " " " <nl> - <nl> - def avro_gen_cpp ( name , srcs , outs , namespace , visibility = None ) : <nl> - native . genrule ( <nl> - name = name , <nl> - srcs = srcs , <nl> - outs = outs , <nl> - cmd = ( " $ ( location @ avro_archive / / : avrogencpp ) " + <nl> - " - - include - prefix external / avro_archive / avro - cpp - 1 . 8 . 0 / api " + <nl> - " - - namespace " + namespace + <nl> - " - - no - union - typedef " + <nl> - " - - input $ ( SRCS ) " + <nl> - " - - output $ @ " ) , <nl> - tools = [ " @ avro_archive / / : avrogencpp " ] , <nl> - visibility = visibility , <nl> - ) <nl>
|
Remove unused files .
|
tensorflow/tensorflow
|
47de5f9beed2e73c4762faedcad210631427854e
|
2016-10-07T23:24:08Z
|
mmm a / example / fcn - xs / image_segmentaion . py <nl> ppp b / example / fcn - xs / image_segmentaion . py <nl> def main ( ) : <nl> model_prefix = " FCN8s_VGG16 " <nl> epoch = 19 <nl> <nl> - # By default , MXNet will run on the CPU . Uncomment the line below to execute on the GPU <nl> - # ctx = mx . gpu ( ) <nl> + # By default , MXNet will run on the CPU . Change to ctx = mx . gpu ( ) to run on GPU . <nl> + ctx = mx . cpu ( ) <nl> <nl> fcnxs , fcnxs_args , fcnxs_auxs = mx . model . load_checkpoint ( model_prefix , epoch ) <nl> fcnxs_args [ " data " ] = mx . nd . array ( get_data ( args . input ) , ctx ) <nl> mmm a / example / ssd / dataset / pycocotools / coco . py <nl> ppp b / example / ssd / dataset / pycocotools / coco . py <nl> def showAnns ( self , anns ) : <nl> color . append ( c ) <nl> else : <nl> # mask <nl> - t = self . imgs [ ann [ ' image_id ' ] ] <nl> - if type ( ann [ ' segmentation ' ] [ ' counts ' ] ) = = list : <nl> - # rle = maskUtils . frPyObjects ( [ ann [ ' segmentation ' ] ] , t [ ' height ' ] , t [ ' width ' ] ) <nl> - raise NotImplementedError ( " maskUtils disabled ! " ) <nl> - else : <nl> - rle = [ ann [ ' segmentation ' ] ] <nl> - # m = maskUtils . decode ( rle ) <nl> raise NotImplementedError ( " maskUtils disabled ! " ) <nl> - img = np . ones ( ( m . shape [ 0 ] , m . shape [ 1 ] , 3 ) ) <nl> - if ann [ ' iscrowd ' ] = = 1 : <nl> - color_mask = np . array ( [ 2 . 0 , 166 . 0 , 101 . 0 ] ) / 255 <nl> - if ann [ ' iscrowd ' ] = = 0 : <nl> - color_mask = np . random . random ( ( 1 , 3 ) ) . tolist ( ) [ 0 ] <nl> - for i in range ( 3 ) : <nl> - img [ : , : , i ] = color_mask [ i ] <nl> - ax . imshow ( np . dstack ( ( img , m * 0 . 5 ) ) ) <nl> if ' keypoints ' in ann and type ( ann [ ' keypoints ' ] ) = = list : <nl> # turn skeleton into zero - based index <nl> sks = np . array ( self . 
loadCats ( ann [ ' category_id ' ] ) [ 0 ] [ ' skeleton ' ] ) - 1 <nl> def annToMask ( self , ann ) : <nl> : return : binary mask ( numpy 2D array ) <nl> " " " <nl> rle = self . annToRLE ( ann ) <nl> - # m = maskUtils . decode ( rle ) <nl> raise NotImplementedError ( " maskUtils disabled ! " ) <nl> - return m <nl> mmm a / example / ssd / symbol / common . py <nl> ppp b / example / ssd / symbol / common . py <nl> def multibox_layer ( from_layers , num_classes , sizes = [ . 2 , . 95 ] , <nl> assert sizes [ 0 ] > 0 and sizes [ 0 ] < 1 <nl> assert sizes [ 1 ] > 0 and sizes [ 1 ] < 1 and sizes [ 1 ] > sizes [ 0 ] <nl> tmp = np . linspace ( sizes [ 0 ] , sizes [ 1 ] , num = ( len ( from_layers ) - 1 ) ) <nl> + # Ref for start_offset value : <nl> + # https : / / arxiv . org / abs / 1512 . 02325 <nl> + start_offset = 0 . 1 <nl> min_sizes = [ start_offset ] + tmp . tolist ( ) <nl> max_sizes = tmp . tolist ( ) + [ tmp [ - 1 ] + start_offset ] <nl> sizes = zip ( min_sizes , max_sizes ) <nl>
|
[ MXNET - 696 ] Fix undefined variable errors ( )
|
apache/incubator-mxnet
|
b9673a9a4f30d74f51405c7482c64890eb203769
|
2018-08-09T00:24:46Z
|
mmm a / site / source / docs / porting / connecting_cpp_and_javascript / Interacting - with - code . rst <nl> ppp b / site / source / docs / porting / connecting_cpp_and_javascript / Interacting - with - code . rst <nl> Implement a C API in JavaScript <nl> = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> <nl> It is possible to implement a C API in JavaScript ! This is the approach <nl> - that was used to write Emscripten ' s implementations of : term : ` SDL ` and <nl> - * libc * . <nl> + used in many of Emscripten ' s libraries , like SDL1 and OpenGL . <nl> <nl> You can use it to write your own APIs to call from C / C + + . To do this <nl> you define the interface , decorating with ` ` extern ` ` to mark the methods <nl> default ) . When compiling the C code , the compiler looks in the JavaScript <nl> libraries for relevant external symbols . <nl> <nl> By default , the implementation is added to * * library . js * * ( and this is <nl> - where you ' ll find the Emscripten implementation of * libc * ) . You can put <nl> + where you ' ll find parts of Emscripten ' s * libc * ) . You can put <nl> the JavaScript implementation in your own library file and add it using <nl> the : ref : ` emcc option < emcc - js - library > ` ` ` - - js - library ` ` . See <nl> ` test_js_libraries ` _ in * * tests / test_other . py * * for a complete working <nl> key - value pairs are special . Interior code inside a function can <nl> have arbitrary JS , of course ) . <nl> <nl> To avoid this limitation of JS libraries , you can put code in another file using <nl> - the ` ` - - pre - js ` ` OR ` ` - - post - js ` ` options , which allow arbitary normal <nl> + the ` ` - - pre - js ` ` or ` ` - - post - js ` ` options , which allow arbitary normal <nl> JS , and it is included and optimized with the rest of the output . That is <nl> the recommended approach for most cases . Another option is another ` ` < script > ` ` tag . <nl> <nl>
|
more doc clarifications
|
emscripten-core/emscripten
|
b0982efe1fb0cc3d30f8c5e554c7b7152042dd7e
|
2016-02-09T02:25:37Z
|
mmm a / tensorflow / lite / toco / graph_transformations / quantize . cc <nl> ppp b / tensorflow / lite / toco / graph_transformations / quantize . cc <nl> const MinMax & GetOrComputeMinMax ( Model * model , const string & array_name ) { <nl> / / We always want [ min , max ] to contain 0 . <nl> float min = 0 . f ; <nl> float max = 0 . f ; <nl> - for ( auto val : data ) { <nl> + for ( const auto & val : data ) { <nl> min = std : : min ( min , val ) ; <nl> max = std : : max ( max , val ) ; <nl> } <nl> const MinMax & GetOrComputeMinMax ( Model * model , const string & array_name ) { <nl> / / weights arrays for which fake - quantization would make sense , rather <nl> / / they tend to be hardcoded arrays of zeros or ones used in some graphs . <nl> bool is_quantization_trivially_exact = true ; <nl> - for ( auto val : data ) { <nl> + for ( const auto & val : data ) { <nl> is_quantization_trivially_exact & = ( val = = min | | val = = max ) ; <nl> } <nl> if ( ! is_quantization_trivially_exact ) { <nl> mmm a / tensorflow / lite / toco / graph_transformations / remove_trivial_binary . cc <nl> ppp b / tensorflow / lite / toco / graph_transformations / remove_trivial_binary . cc <nl> namespace { <nl> template < typename Scalar > <nl> bool AreAllBufferElementsEqualTo ( const std : : vector < Scalar > & buffer_data , <nl> Scalar value ) { <nl> - for ( auto x : buffer_data ) { <nl> + for ( const auto & x : buffer_data ) { <nl> if ( x ! = value ) { <nl> return false ; <nl> } <nl> mmm a / tensorflow / lite / toco / graph_transformations / resolve_constant_unary . cc <nl> ppp b / tensorflow / lite / toco / graph_transformations / resolve_constant_unary . cc <nl> void ReduceGeneric ( bool keep_dims , const std : : vector < int > & axes , <nl> / / Reduction mask will be elementwise multiplied against the input <nl> / / indices to figure out the output index for the element . <nl> std : : vector < int > reduction_mask ( input_shape . 
dimensions_count ( ) , 1 ) ; <nl> - for ( int axis : axes ) { <nl> + for ( const auto & axis : axes ) { <nl> CHECK_GE ( axis , 0 ) ; <nl> CHECK_LT ( axis , input_shape . dimensions_count ( ) ) ; <nl> reduction_mask [ axis ] = 0 ; <nl> mmm a / tensorflow / lite / toco / tooling_util . cc <nl> ppp b / tensorflow / lite / toco / tooling_util . cc <nl> void FixOperatorOrdering ( Model * model ) { <nl> std : : unordered_map < string , string > reason_why_leftover ; <nl> while ( true ) { <nl> bool inserted_something = false ; <nl> - for ( auto i : remaining ) { <nl> + for ( const auto & i : remaining ) { <nl> bool can_insert = true ; <nl> auto & op = old_operators [ i ] ; <nl> CHECK ( op ) ; <nl> void FixOperatorOrdering ( Model * model ) { <nl> } <nl> bad_inputs_already_traced . insert ( bad_input ) ; <nl> bad_op = nullptr ; <nl> - for ( auto i : remaining ) { <nl> + for ( const auto & i : remaining ) { <nl> const Operator * op = old_operators [ i ] . get ( ) ; <nl> for ( const string & output : op - > outputs ) { <nl> if ( bad_input = = output ) { <nl> void ResolveModelFlags ( const ModelFlags & model_flags , Model * model ) { <nl> if ( input_array_proto . has_shape ( ) ) { <nl> auto & input_array_dims = * input_array . mutable_shape ( ) - > mutable_dims ( ) ; <nl> CheckValidShapeDimensions ( input_array_proto . shape ( ) . dims ( ) ) ; <nl> - for ( auto dim : input_array_proto . shape ( ) . dims ( ) ) { <nl> + for ( const auto & dim : input_array_proto . shape ( ) . dims ( ) ) { <nl> input_array_dims . push_back ( dim ) ; <nl> } <nl> } <nl>
|
Merge pull request from amitsrivastava78 : error_2
|
tensorflow/tensorflow
|
dc9c03227c77a5f108c95f6111a832406fbfbf9f
|
2019-03-12T00:41:41Z
|
mmm a / CMakeLists . txt <nl> ppp b / CMakeLists . txt <nl> cmake_minimum_required ( VERSION 2 . 6 ) <nl> <nl> set ( CMAKE_MODULE_PATH $ { CMAKE_MODULE_PATH } " $ { ClickHouse_SOURCE_DIR } / cmake / Modules / " ) <nl> <nl> - <nl> if ( CMAKE_CXX_COMPILER_ID STREQUAL " GNU " ) <nl> # Require at least gcc 5 <nl> if ( CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5 AND NOT CMAKE_VERSION VERSION_LESS 2 . 8 . 9 ) <nl> message ( STATUS " CMAKE_BUILD_TYPE : " $ { CMAKE_BUILD_TYPE } ) <nl> # TSan is not supported due to false positive errors in libstdc + + and necessity to rebuild libstdc + + with TSan <nl> set ( CMAKE_CONFIGURATION_TYPES " RelWithDebInfo ; Debug ; Release ; MinSizeRel ; ASan ; UBSan " CACHE STRING " " FORCE ) <nl> <nl> - <nl> if ( CMAKE_SYSTEM_PROCESSOR MATCHES " ^ ( aarch64 . * | AARCH64 . * ) " ) <nl> set ( AARCH64 1 ) <nl> endif ( ) <nl> if ( USE_STATIC_LIBRARIES ) <nl> list ( REVERSE CMAKE_FIND_LIBRARY_SUFFIXES ) <nl> endif ( ) <nl> <nl> - <nl> - option ( UNBUNDLED " Try find all libraries in system ( if fail - use bundled from contrib / ) " OFF ) <nl> - if ( UNBUNDLED ) <nl> - set ( NOT_UNBUNDLED 0 ) <nl> - else ( ) <nl> - set ( NOT_UNBUNDLED 1 ) <nl> - endif ( ) <nl> - <nl> - message ( STATUS " Building for : $ { CMAKE_SYSTEM } $ { CMAKE_SYSTEM_PROCESSOR } $ { CMAKE_LIBRARY_ARCHITECTURE } ; USE_STATIC_LIBRARIES = $ { USE_STATIC_LIBRARIES } UNBUNDLED = $ { UNBUNDLED } " ) <nl> - <nl> - option ( USE_INTERNAL_BOOST_LIBRARY " Set to FALSE to use system boost library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> - <nl> - if ( USE_INTERNAL_BOOST_LIBRARY ) <nl> - add_definitions ( - DBOOST_SYSTEM_NO_DEPRECATED ) <nl> - endif ( ) <nl> - <nl> - option ( USE_INTERNAL_POCO_LIBRARY " Set to FALSE to use system poco library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> - <nl> - if ( CMAKE_SYSTEM MATCHES " FreeBSD " ) <nl> - set ( NOT_FREEBSD 0 ) <nl> - option ( USE_INTERNAL_GPERFTOOLS_LIBRARY " Set to FALSE to use system gperftools ( tcmalloc ) library instead of 
bundled " $ { NOT_FREEBSD } ) <nl> - else ( ) <nl> - set ( NOT_FREEBSD 1 ) <nl> - option ( USE_INTERNAL_GPERFTOOLS_LIBRARY " Set to FALSE to use system gperftools ( tcmalloc ) library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> - endif ( ) <nl> - <nl> - <nl> - option ( USE_INTERNAL_ZLIB_LIBRARY " Set to FALSE to use system zlib library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> - <nl> - option ( ENABLE_LIBTCMALLOC " Set to TRUE to enable libtcmalloc . " ON ) <nl> - <nl> - option ( DEBUG_LIBTCMALLOC " Set to TRUE to use debug version of libtcmalloc . " OFF ) <nl> - <nl> option ( GLIBC_COMPATIBILITY " Set to TRUE to enable compatibility with older glibc libraries . Note that it is not compatible with ASan . " OFF ) <nl> <nl> if ( GLIBC_COMPATIBILITY ) <nl> if ( ARCHNATIVE ) <nl> set ( COMPILER_FLAGS " $ { COMPILER_FLAGS } - march = native " ) <nl> endif ( ) <nl> <nl> - <nl> set ( CMAKE_BUILD_COLOR_MAKEFILE ON ) <nl> set ( CMAKE_CXX_FLAGS " $ { CMAKE_CXX_FLAGS } $ { COMPILER_FLAGS } - std = gnu + + 1y $ { PLATFORM_EXTRA_CXX_FLAG } - fno - omit - frame - pointer $ { COMMON_WARNING_FLAGS } $ { CXX_WARNING_FLAGS } $ { GLIBC_COMPATIBILITY_COMPILE_FLAGS } " ) <nl> # set ( CMAKE_CXX_FLAGS_RELEASE " $ { CMAKE_CXX_FLAGS_RELEASE } " ) <nl> else ( ) <nl> set ( CLICKHOUSE_ETC_DIR $ { CMAKE_INSTALL_PREFIX } / etc ) <nl> endif ( ) <nl> <nl> + <nl> + option ( UNBUNDLED " Try find all libraries in system ( if fail - use bundled from contrib / ) " OFF ) <nl> + if ( UNBUNDLED ) <nl> + set ( NOT_UNBUNDLED 0 ) <nl> + else ( ) <nl> + set ( NOT_UNBUNDLED 1 ) <nl> + endif ( ) <nl> + <nl> + message ( STATUS " Building for : $ { CMAKE_SYSTEM } $ { CMAKE_SYSTEM_PROCESSOR } $ { CMAKE_LIBRARY_ARCHITECTURE } ; USE_STATIC_LIBRARIES = $ { USE_STATIC_LIBRARIES } UNBUNDLED = $ { UNBUNDLED } " ) <nl> + <nl> include ( cmake / find_openssl . cmake ) <nl> include ( cmake / find_icu4c . cmake ) <nl> include ( cmake / find_boost . cmake ) <nl> include ( cmake / find_zlib . 
cmake ) <nl> + include ( cmake / find_zstd . cmake ) <nl> include ( cmake / find_poco . cmake ) <nl> + include ( cmake / find_lz4 . cmake ) <nl> + include ( cmake / find_sparsehash . cmake ) <nl> include ( cmake / find_libtool . cmake ) <nl> include ( cmake / find_rt . cmake ) <nl> include ( cmake / find_readline_edit . cmake ) <nl> + include ( cmake / find_zookeeper . cmake ) <nl> + include ( cmake / find_double - conversion . cmake ) <nl> + include ( cmake / find_re2 . cmake ) <nl> include ( cmake / find_gperftools . cmake ) <nl> include ( cmake / find_jemalloc . cmake ) <nl> <nl> mmm a / cmake / dbms_include . cmake <nl> ppp b / cmake / dbms_include . cmake <nl> include_directories ( $ { ClickHouse_BINARY_DIR } / libs / libcommon / include ) <nl> include_directories ( $ { ClickHouse_SOURCE_DIR } / libs / libpocoext / include ) <nl> include_directories ( $ { ClickHouse_SOURCE_DIR } / libs / libzkutil / include ) <nl> include_directories ( $ { ClickHouse_SOURCE_DIR } / libs / libmysqlxx / include ) <nl> - include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libzookeeper / include ) <nl> include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libcityhash / include ) <nl> - include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libdouble - conversion ) <nl> mmm a / cmake / find_boost . cmake <nl> ppp b / cmake / find_boost . 
cmake <nl> <nl> + option ( USE_INTERNAL_BOOST_LIBRARY " Set to FALSE to use system boost library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + <nl> if ( NOT USE_INTERNAL_BOOST_LIBRARY ) <nl> set ( Boost_USE_STATIC_LIBS $ { USE_STATIC_LIBRARIES } ) <nl> set ( BOOST_ROOT " / usr / local " ) <nl> if ( NOT USE_INTERNAL_BOOST_LIBRARY ) <nl> endif ( ) <nl> <nl> if ( NOT Boost_SYSTEM_LIBRARY ) <nl> + add_definitions ( - DBOOST_SYSTEM_NO_DEPRECATED ) <nl> set ( USE_INTERNAL_BOOST_LIBRARY 1 ) <nl> set ( Boost_PROGRAM_OPTIONS_LIBRARY boost_program_options_internal ) <nl> set ( Boost_SYSTEM_LIBRARY boost_system_internal ) <nl> if ( NOT Boost_SYSTEM_LIBRARY ) <nl> include_directories ( BEFORE $ { Boost_INCLUDE_DIRS } ) <nl> endif ( ) <nl> <nl> - message ( STATUS " Using Boost : $ { Boost_INCLUDE_DIRS } : $ { Boost_PROGRAM_OPTIONS_LIBRARY } , $ { Boost_SYSTEM_LIBRARY } , $ { Boost_FILESYSTEM_LIBRARY } " ) <nl> + message ( STATUS " Using Boost : $ { Boost_INCLUDE_DIRS } : $ { Boost_PROGRAM_OPTIONS_LIBRARY } , $ { Boost_SYSTEM_LIBRARY } , $ { Boost_FILESYSTEM_LIBRARY } " ) <nl> new file mode 100644 <nl> index 00000000000 . . d23a78873dc <nl> mmm / dev / null <nl> ppp b / cmake / find_double - conversion . cmake <nl> <nl> + option ( USE_INTERNAL_DOUBLE_CONVERSION_LIBRARY " Set to FALSE to use system double - conversion library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + <nl> + if ( NOT USE_INTERNAL_DOUBLE_CONVERSION_LIBRARY ) <nl> + find_library ( DOUBLE_CONVERSION_LIBRARY double - conversion ) <nl> + find_path ( DOUBLE_CONVERSION_INCLUDE_DIR NAMES double - conversion / double - conversion . 
h PATHS $ { DOUBLE_CONVERSION_INCLUDE_PATHS } ) <nl> + endif ( ) <nl> + <nl> + if ( DOUBLE_CONVERSION_LIBRARY AND DOUBLE_CONVERSION_INCLUDE_DIR ) <nl> + include_directories ( $ { DOUBLE_CONVERSION_INCLUDE_DIR } ) <nl> + else ( ) <nl> + set ( USE_INTERNAL_DOUBLE_CONVERSION_LIBRARY 1 ) <nl> + set ( DOUBLE_CONVERSION_INCLUDE_DIR " $ { ClickHouse_SOURCE_DIR } / contrib / libdouble - conversion " ) <nl> + include_directories ( BEFORE $ { DOUBLE_CONVERSION_INCLUDE_DIR } ) <nl> + set ( DOUBLE_CONVERSION_LIBRARY double - conversion ) <nl> + endif ( ) <nl> + <nl> + message ( STATUS " Using double - conversion : $ { DOUBLE_CONVERSION_INCLUDE_DIR } : $ { DOUBLE_CONVERSION_LIBRARY } " ) <nl> mmm a / cmake / find_gperftools . cmake <nl> ppp b / cmake / find_gperftools . cmake <nl> <nl> + if ( CMAKE_SYSTEM MATCHES " FreeBSD " ) <nl> + option ( USE_INTERNAL_GPERFTOOLS_LIBRARY " Set to FALSE to use system gperftools ( tcmalloc ) library instead of bundled " OFF ) <nl> + else ( ) <nl> + option ( USE_INTERNAL_GPERFTOOLS_LIBRARY " Set to FALSE to use system gperftools ( tcmalloc ) library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + endif ( ) <nl> + option ( ENABLE_LIBTCMALLOC " Set to TRUE to enable libtcmalloc " ON ) <nl> + option ( DEBUG_LIBTCMALLOC " Set to TRUE to use debug version of libtcmalloc " OFF ) <nl> + <nl> if ( ENABLE_LIBTCMALLOC ) <nl> # contrib / libtcmalloc doesnt build debug version , try find in system <nl> if ( DEBUG_LIBTCMALLOC OR NOT USE_INTERNAL_GPERFTOOLS_LIBRARY ) <nl> mmm a / cmake / find_icu4c . cmake <nl> ppp b / cmake / find_icu4c . cmake <nl> find_library ( ICUDATA icudata PATHS $ { ICU_PATHS } ) <nl> set ( ICU_LIBS $ { ICUI18N } $ { ICUUC } $ { ICUDATA } ) <nl> <nl> find_path ( ICU_INCLUDE_DIR NAMES unicode / unistr . 
h PATHS $ { ICU_INCLUDE_PATHS } ) <nl> - message ( STATUS " Using icu : $ { ICU_INCLUDE_DIR } : $ { ICU_LIBS } " ) <nl> + message ( STATUS " Using icu : $ { ICU_INCLUDE_DIR } : $ { ICU_LIBS } " ) <nl> include_directories ( $ { ICU_INCLUDE_DIR } ) <nl> mmm a / cmake / find_jemalloc . cmake <nl> ppp b / cmake / find_jemalloc . cmake <nl> <nl> + option ( ENABLE_JEMALLOC " Set to TRUE to use jemalloc instead of tcmalloc " OFF ) <nl> + <nl> if ( ENABLE_JEMALLOC ) <nl> find_package ( JeMalloc ) <nl> <nl> mmm a / cmake / find_libtool . cmake <nl> ppp b / cmake / find_libtool . cmake <nl> <nl> set ( LTDL_PATHS " / usr / local / opt / libtool / lib " ) <nl> find_library ( LTDL_LIB ltdl PATHSS $ { LTDL_PATHS } ) <nl> - message ( STATUS " Using ltdl : $ { LTDL_LIB } " ) <nl> + message ( STATUS " Using ltdl : $ { LTDL_LIB } " ) <nl> new file mode 100644 <nl> index 00000000000 . . 2d767bd6480 <nl> mmm / dev / null <nl> ppp b / cmake / find_lz4 . cmake <nl> <nl> + option ( USE_INTERNAL_LZ4_LIBRARY " Set to FALSE to use system lz4 library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + <nl> + if ( NOT USE_INTERNAL_LZ4_LIBRARY ) <nl> + find_library ( LZ4_LIBRARY lz4 ) <nl> + find_path ( LZ4_INCLUDE_DIR NAMES lz4 . h PATHS $ { LZ4_INCLUDE_PATHS } ) <nl> + endif ( ) <nl> + <nl> + if ( LZ4_LIBRARY AND LZ4_INCLUDE_DIR ) <nl> + include_directories ( $ { LZ4_INCLUDE_DIR } ) <nl> + else ( ) <nl> + set ( USE_INTERNAL_LZ4_LIBRARY 1 ) <nl> + set ( LZ4_INCLUDE_DIR " $ { ClickHouse_SOURCE_DIR } / contrib / liblz4 / include / lz4 " ) <nl> + include_directories ( BEFORE $ { LZ4_INCLUDE_DIR } ) <nl> + set ( LZ4_LIBRARY lz4 ) <nl> + endif ( ) <nl> + <nl> + message ( STATUS " Using lz4 : $ { LZ4_INCLUDE_DIR } : $ { LZ4_LIBRARY } " ) <nl> mmm a / cmake / find_openssl . cmake <nl> ppp b / cmake / find_openssl . 
cmake <nl> if ( NOT OPENSSL_FOUND ) <nl> set ( OPENSSL_LIBRARIES $ { OPENSSL_SSL_LIBRARY } $ { OPENSSL_CRYPTO_LIBRARY } ) <nl> endif ( ) <nl> <nl> - message ( STATUS " Using openssl : $ { OPENSSL_INCLUDE_DIR } : $ { OPENSSL_LIBRARIES } " ) <nl> + message ( STATUS " Using openssl : $ { OPENSSL_INCLUDE_DIR } : $ { OPENSSL_LIBRARIES } " ) <nl> include_directories ( $ { OPENSSL_INCLUDE_DIR } ) <nl> mmm a / cmake / find_poco . cmake <nl> ppp b / cmake / find_poco . cmake <nl> <nl> + option ( USE_INTERNAL_POCO_LIBRARY " Set to FALSE to use system poco library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> <nl> if ( NOT USE_INTERNAL_POCO_LIBRARY ) <nl> find_package ( Poco COMPONENTS Net NetSSL XML Data Crypto DataODBC MongoDB ) <nl> new file mode 100644 <nl> index 00000000000 . . 0462a5f140e <nl> mmm / dev / null <nl> ppp b / cmake / find_re2 . cmake <nl> <nl> + # TODO : option ( USE_INTERNAL_RE2_LIBRARY " Set to FALSE to use system re2 library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + set ( USE_INTERNAL_RE2_LIBRARY ON ) <nl> + <nl> + if ( NOT USE_INTERNAL_RE2_LIBRARY ) <nl> + # TODO ! re2_st <nl> + find_library ( RE2_LIBRARY re2 ) <nl> + find_path ( RE2_INCLUDE_DIR NAMES re2 / re2 . h PATHS $ { RE2_INCLUDE_PATHS } ) <nl> + endif ( ) <nl> + <nl> + if ( RE2_LIBRARY AND RE2_INCLUDE_DIR ) <nl> + include_directories ( $ { RE2_INCLUDE_DIR } ) <nl> + else ( ) <nl> + set ( USE_INTERNAL_RE2_LIBRARY 1 ) <nl> + set ( RE2_INCLUDE_DIR " $ { ClickHouse_SOURCE_DIR } / contrib / libre2 " ) <nl> + set ( RE2_ST_INCLUDE_DIR " $ { ClickHouse_BINARY_DIR } / contrib / libre2 " ) <nl> + include_directories ( BEFORE $ { RE2_INCLUDE_DIR } ) <nl> + include_directories ( BEFORE $ { RE2_ST_INCLUDE_DIR } ) <nl> + set ( RE2_LIBRARY re2 ) <nl> + set ( RE2_ST_LIBRARY re2_st ) <nl> + endif ( ) <nl> + <nl> + message ( STATUS " Using re2 : $ { RE2_INCLUDE_DIR } : $ { RE2_LIBRARY } ; $ { RE2_ST_INCLUDE_DIR } : $ { RE2_ST_LIBRARY } " ) <nl> mmm a / cmake / find_readline_edit . 
cmake <nl> ppp b / cmake / find_readline_edit . cmake <nl> if ( READLINE_LIB AND TERMCAP_LIB ) <nl> message ( STATUS " Using line editing libraries ( readline ) : $ { READLINE_INCLUDE_DIR } : $ { LINE_EDITING_LIBS } " ) <nl> elseif ( EDIT_LIB ) <nl> find_library ( CURSES_LIB NAMES curses ) <nl> - set ( USE_LIBEDIT 1 ) <nl> + set ( USE_LIBEDIT 1 ) <nl> find_path ( READLINE_INCLUDE_DIR NAMES editline / readline . h PATHS $ { READLINE_INCLUDE_PATHS } ) <nl> set ( LINE_EDITING_LIBS $ { EDIT_LIB } $ { CURSES_LIB } $ { TERMCAP_LIB } ) <nl> message ( STATUS " Using line editing libraries ( edit ) : $ { READLINE_INCLUDE_DIR } : $ { LINE_EDITING_LIBS } " ) <nl> endif ( ) <nl> include ( CheckCXXSourceRuns ) <nl> <nl> set ( CMAKE_REQUIRED_LIBRARIES $ { CMAKE_REQUIRED_LIBRARIES } $ { LINE_EDITING_LIBS } ) <nl> - check_cxx_source_runs ( " <nl> + check_cxx_source_runs ( " <nl> # include < stdio . h > <nl> # include < readline / readline . h > <nl> # include < readline / history . h > <nl> new file mode 100644 <nl> index 00000000000 . . d96179eb049 <nl> mmm / dev / null <nl> ppp b / cmake / find_sparsehash . cmake <nl> <nl> + option ( USE_INTERNAL_SPARCEHASH_LIBRARY " Set to FALSE to use system sparsehash library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + <nl> + if ( NOT USE_INTERNAL_SPARCEHASH_LIBRARY ) <nl> + find_path ( SPARCEHASH_INCLUDE_DIR NAMES sparsehash / sparse_hash_map PATHS $ { SPARCEHASH_INCLUDE_PATHS } ) <nl> + endif ( ) <nl> + <nl> + if ( SPARCEHASH_INCLUDE_DIR ) <nl> + include_directories ( $ { SPARCEHASH_INCLUDE_DIR } ) <nl> + else ( ) <nl> + set ( USE_INTERNAL_SPARCEHASH_LIBRARY 1 ) <nl> + set ( SPARCEHASH_INCLUDE_DIR " $ { ClickHouse_SOURCE_DIR } / contrib / libsparsehash " ) <nl> + include_directories ( BEFORE $ { SPARCEHASH_INCLUDE_DIR } ) <nl> + endif ( ) <nl> + <nl> + message ( STATUS " Using sparsehash : $ { SPARCEHASH_INCLUDE_DIR } " ) <nl> mmm a / cmake / find_zlib . cmake <nl> ppp b / cmake / find_zlib . 
cmake <nl> <nl> + option ( USE_INTERNAL_ZLIB_LIBRARY " Set to FALSE to use system zlib library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + <nl> if ( NOT USE_INTERNAL_ZLIB_LIBRARY ) <nl> find_package ( ZLIB ) <nl> if ( ZLIB_FOUND ) <nl> new file mode 100644 <nl> index 00000000000 . . cbc4dd3f80f <nl> mmm / dev / null <nl> ppp b / cmake / find_zookeeper . cmake <nl> <nl> + option ( USE_INTERNAL_ZOOKEEPER_LIBRARY " Set to FALSE to use system zookeeper library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + <nl> + if ( NOT USE_INTERNAL_ZOOKEEPER_LIBRARY ) <nl> + find_library ( ZOOKEEPER_LIBRARY zookeeper_mt ) <nl> + find_path ( ZOOKEEPER_INCLUDE_DIR NAMES zookeeper / zookeeper . h PATHS $ { ZOOKEEPER_INCLUDE_PATHS } ) <nl> + endif ( ) <nl> + <nl> + if ( ZOOKEEPER_LIBRARY AND ZOOKEEPER_INCLUDE_DIR ) <nl> + include_directories ( $ { ZOOKEEPER_INCLUDE_DIR } ) <nl> + else ( ) <nl> + set ( USE_INTERNAL_ZOOKEEPER_LIBRARY 1 ) <nl> + set ( ZOOKEEPER_INCLUDE_DIR " $ { ClickHouse_SOURCE_DIR } / contrib / libzookeeper / include / zookeeper " ) <nl> + include_directories ( BEFORE $ { ZOOKEEPER_INCLUDE_DIR } ) <nl> + set ( ZOOKEEPER_LIBRARY zookeeper_mt ) <nl> + endif ( ) <nl> + <nl> + message ( STATUS " Using zookeeper : $ { ZOOKEEPER_INCLUDE_DIR } : $ { ZOOKEEPER_LIBRARY } " ) <nl> new file mode 100644 <nl> index 00000000000 . . 11d85cde4c3 <nl> mmm / dev / null <nl> ppp b / cmake / find_zstd . cmake <nl> <nl> + option ( USE_INTERNAL_ZSTD_LIBRARY " Set to FALSE to use system zstd library instead of bundled " $ { NOT_UNBUNDLED } ) <nl> + <nl> + if ( NOT USE_INTERNAL_ZSTD_LIBRARY ) <nl> + find_library ( ZSTD_LIBRARY zstd ) <nl> + find_path ( ZSTD_INCLUDE_DIR NAMES zstd . 
h PATHS $ { ZSTD_INCLUDE_PATHS } ) <nl> + endif ( ) <nl> + <nl> + if ( ZSTD_LIBRARY AND ZSTD_INCLUDE_DIR ) <nl> + include_directories ( $ { ZSTD_INCLUDE_DIR } ) <nl> + else ( ) <nl> + set ( USE_INTERNAL_ZSTD_LIBRARY 1 ) <nl> + set ( ZSTD_INCLUDE_DIR " $ { ClickHouse_SOURCE_DIR } / contrib / libzstd / include / zstd " ) <nl> + include_directories ( BEFORE $ { ZSTD_INCLUDE_DIR } ) <nl> + set ( ZSTD_LIBRARY zstd ) <nl> + endif ( ) <nl> + <nl> + message ( STATUS " Using zstd : $ { ZSTD_INCLUDE_DIR } : $ { ZSTD_LIBRARY } " ) <nl> mmm a / contrib / CMakeLists . txt <nl> ppp b / contrib / CMakeLists . txt <nl> if ( USE_INTERNAL_POCO_LIBRARY ) <nl> add_subdirectory ( libpoco ) <nl> endif ( ) <nl> <nl> - add_subdirectory ( liblz4 ) <nl> - add_subdirectory ( libzstd ) <nl> - add_subdirectory ( libre2 ) <nl> - add_subdirectory ( libdouble - conversion ) <nl> - add_subdirectory ( libzookeeper ) <nl> + if ( USE_INTERNAL_LZ4_LIBRARY ) <nl> + add_subdirectory ( liblz4 ) <nl> + endif ( ) <nl> + <nl> + if ( USE_INTERNAL_ZSTD_LIBRARY ) <nl> + add_subdirectory ( libzstd ) <nl> + endif ( ) <nl> + <nl> + if ( USE_INTERNAL_RE2_LIBRARY ) <nl> + add_subdirectory ( libre2 ) <nl> + endif ( ) <nl> + <nl> + if ( USE_INTERNAL_DOUBLE_CONVERSION_LIBRARY ) <nl> + add_subdirectory ( libdouble - conversion ) <nl> + endif ( ) <nl> + <nl> + if ( USE_INTERNAL_ZOOKEEPER_LIBRARY ) <nl> + add_subdirectory ( libzookeeper ) <nl> + endif ( ) <nl> + <nl> add_subdirectory ( libcityhash ) <nl> add_subdirectory ( libfarmhash ) <nl> add_subdirectory ( libmetrohash ) <nl> + <nl> if ( USE_INTERNAL_ZLIB_LIBRARY ) <nl> add_subdirectory ( libzlib - ng ) <nl> endif ( ) <nl> mmm a / contrib / libzookeeper / CMakeLists . txt <nl> ppp b / contrib / libzookeeper / CMakeLists . txt <nl> endif ( ) <nl> <nl> include_directories ( include / zookeeper src ) <nl> <nl> - add_library ( zookeeper <nl> + add_library ( zookeeper_mt <nl> src / zookeeper . c <nl> src / zookeeper . jute . c <nl> src / zk_hashtable . 
c <nl> mmm a / contrib / libzstd / CMakeLists . txt <nl> ppp b / contrib / libzstd / CMakeLists . txt <nl> endfunction ( ) <nl> <nl> # Define library directory , where sources and header files are located <nl> SET ( LIBRARY_DIR include / zstd ) <nl> - INCLUDE_DIRECTORIES ( $ { LIBRARY_DIR } $ { LIBRARY_DIR } / common ) <nl> + INCLUDE_DIRECTORIES ( BEFORE $ { LIBRARY_DIR } $ { LIBRARY_DIR } / common ) <nl> <nl> # Read file content <nl> FILE ( READ $ { LIBRARY_DIR } / zstd . h HEADER_CONTENT ) <nl> SET ( ZSTD_LEGACY_SUPPORT true ) <nl> <nl> IF ( ZSTD_LEGACY_SUPPORT ) <nl> SET ( LIBRARY_LEGACY_DIR $ { LIBRARY_DIR } / legacy ) <nl> - INCLUDE_DIRECTORIES ( $ { LIBRARY_LEGACY_DIR } ) <nl> + INCLUDE_DIRECTORIES ( BEFORE $ { LIBRARY_LEGACY_DIR } ) <nl> ADD_DEFINITIONS ( - D ZSTD_LEGACY_SUPPORT = 1 ) <nl> <nl> SET ( Sources $ { Sources } <nl> mmm a / dbms / CMakeLists . txt <nl> ppp b / dbms / CMakeLists . txt <nl> <nl> include ( $ { ClickHouse_SOURCE_DIR } / cmake / dbms_include . cmake ) <nl> <nl> - include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / liblz4 / include / ) <nl> include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libdivide ) <nl> include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libcpuid / include / ) <nl> - include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libzstd / include / ) <nl> include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libfarmhash ) <nl> include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libmetrohash / src ) <nl> - include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libsparsehash ) <nl> - include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libre2 / ) <nl> - include_directories ( BEFORE $ { ClickHouse_BINARY_DIR } / contrib / libre2 / ) <nl> include_directories ( $ { ClickHouse_SOURCE_DIR } / libs / libdaemon / include / ) <nl> include_directories ( $ { ODBC_INCLUDE_DIRECTORIES } ) <nl> <nl> if ( NOT 
NO_WERROR ) <nl> set ( CMAKE_C_FLAGS " $ { CMAKE_C_FLAGS } - Werror " ) <nl> endif ( ) <nl> <nl> + include ( $ { ClickHouse_SOURCE_DIR } / cmake / find_iconv . cmake ) <nl> + <nl> + find_package ( Threads ) <nl> + <nl> add_subdirectory ( src ) <nl> <nl> add_library ( string_utils <nl> if ( NOT CMAKE_BUILD_TYPE STREQUAL " Debug " ) <nl> PROPERTIES COMPILE_FLAGS - g0 ) <nl> endif ( ) <nl> <nl> - <nl> if ( NOT AARCH64 ) <nl> set ( LINK_LIBRARIES_ONLY_ON_X86_64 cpuid ) <nl> endif ( ) <nl> else ( ) <nl> set ( PLATFORM_LIBS " " ) <nl> endif ( ) <nl> <nl> - include ( $ { ClickHouse_SOURCE_DIR } / cmake / find_iconv . cmake ) <nl> - <nl> - find_package ( Threads ) <nl> - <nl> target_link_libraries ( dbms <nl> common <nl> zkutil <nl> mysqlxx <nl> cityhash farmhash metrohash <nl> - lz4 zstd <nl> + $ { LZ4_LIBRARY } <nl> + $ { ZSTD_LIBRARY } <nl> string_utils <nl> - double - conversion <nl> + $ { DOUBLE_CONVERSION_LIBRARY } <nl> $ { ZLIB_LIBRARIES } <nl> $ { LINK_LIBRARIES_ONLY_ON_X86_64 } <nl> - re2 re2_st <nl> + $ { RE2_LIBRARY } <nl> + $ { RE2_ST_LIBRARY } <nl> $ { OPENSSL_CRYPTO_LIBRARY } <nl> $ { Boost_SYSTEM_LIBRARY } <nl> $ { Poco_Data_LIBRARY } <nl> mmm a / dbms / src / IO / CompressedReadBufferBase . cpp <nl> ppp b / dbms / src / IO / CompressedReadBufferBase . cpp <nl> <nl> # include < quicklz / quicklz_level1 . h > <nl> # endif <nl> <nl> - # include < lz4 / lz4 . h > <nl> - # include < zstd / zstd . h > <nl> + # include < lz4 . h > <nl> + # include < zstd . h > <nl> <nl> # include < DB / Common / PODArray . h > <nl> # include < DB / Common / ProfileEvents . h > <nl> mmm a / dbms / src / IO / CompressedWriteBuffer . cpp <nl> ppp b / dbms / src / IO / CompressedWriteBuffer . cpp <nl> <nl> # include < quicklz / quicklz_level1 . h > <nl> # endif <nl> <nl> - # include < lz4 / lz4 . h > <nl> - # include < lz4 / lz4hc . h > <nl> - # include < zstd / zstd . h > <nl> + # include < lz4 . h > <nl> + # include < lz4hc . h > <nl> + # include < zstd . 
h > <nl> <nl> # include < DB / Common / unaligned . h > <nl> <nl> mmm a / dbms / tests / clickhouse - test <nl> ppp b / dbms / tests / clickhouse - test <nl> def main ( args ) : <nl> report_testcase . append ( skipped ) <nl> print ( " { 0 } - no reference file " . format ( MSG_UNKNOWN ) ) <nl> else : <nl> - result_is_different = subprocess . call ( [ ' cmp ' , ' - - quiet ' , reference_file , stdout_file ] , stdout = PIPE ) <nl> + result_is_different = subprocess . call ( [ ' cmp ' , ' - s ' , reference_file , stdout_file ] , stdout = PIPE ) <nl> <nl> if result_is_different : <nl> ( diff , _ ) = Popen ( [ ' diff ' , ' - - side - by - side ' , reference_file , stdout_file ] , stdout = PIPE ) . communicate ( ) <nl> mmm a / doc / build_debian . sh <nl> ppp b / doc / build_debian . sh <nl> <nl> <nl> # install compiler and libs <nl> sudo apt install - y git bash cmake gcc - 6 g + + - 6 libicu - dev libreadline - dev libmysqlclient - dev unixodbc - dev libltdl - dev libssl - dev <nl> + # for - DUNBUNDLED = 1 mode : <nl> + # sudo apt install - y libboost - dev zlib1g - dev liblz4 - dev libdouble - conversion - dev libzstd - dev libre2 - dev libzookeeper - mt - dev libsparsehash - dev # libpoco - dev <nl> <nl> # install testing only stuff if you want : <nl> sudo apt install - y python python - lxml python - termcolor curl perl <nl> mmm a / doc / build_freebsd . sh <nl> ppp b / doc / build_freebsd . sh <nl> mkdir - p ClickHouse / build <nl> cd ClickHouse / build <nl> cmake . . - DUSE_INTERNAL_GPERFTOOLS_LIBRARY = 0 <nl> # WIP : variant with libs from ports : <nl> - # sudo pkg install boost - libs <nl> + # sudo pkg install boost - libs libzookeeper libdouble - conversion zstd liblz4 sparsehash <nl> # Check UNIXODBC option : <nl> # make - C / usr / ports / devel / poco config reinstall <nl> - # cmake . . - DUSE_INTERNAL_BOOST_LIBRARY = 0 - DUSE_INTERNAL_POCO_LIBRARY = 0 - DUSE_INTERNAL_GPERFTOOLS_LIBRARY = 0 <nl> + # cmake . . 
- DUNBUNDLED = 1 - DUSE_STATIC_LIBRARIES = 0 - DNO_WERROR = 1 <nl> <nl> make - C dbms / src / Server - j $ ( nproc | | sysctl - n hw . ncpu | | echo 2 ) <nl> cd . . / . . <nl> mmm a / libs / libzkutil / CMakeLists . txt <nl> ppp b / libs / libzkutil / CMakeLists . txt <nl> add_library ( zkutil <nl> <nl> find_package ( Threads ) <nl> <nl> - target_link_libraries ( zkutil zookeeper $ { Poco_Foundation_LIBRARY } $ { CMAKE_THREAD_LIBS_INIT } string_utils ) <nl> + target_link_libraries ( zkutil $ { ZOOKEEPER_LIBRARY } $ { Poco_Foundation_LIBRARY } $ { CMAKE_THREAD_LIBS_INIT } string_utils ) <nl> <nl> add_subdirectory ( src ) <nl> mmm a / utils / compressor / CMakeLists . txt <nl> ppp b / utils / compressor / CMakeLists . txt <nl> <nl> <nl> - include_directories ( BEFORE $ { ClickHouse_SOURCE_DIR } / contrib / libzstd / include ) <nl> - <nl> add_executable ( clickhouse - compressor main . cpp ) <nl> target_link_libraries ( clickhouse - compressor dbms $ { Boost_PROGRAM_OPTIONS_LIBRARY } ) <nl> <nl> install ( TARGETS clickhouse - compressor RUNTIME DESTINATION bin COMPONENT clickhouse - compressor ) <nl> <nl> add_executable ( zstd_test zstd_test . cpp ) <nl> - target_link_libraries ( zstd_test zstd ) <nl> + target_link_libraries ( zstd_test $ { ZSTD_LIBRARY } ) <nl> mmm a / utils / compressor / zstd_test . cpp <nl> ppp b / utils / compressor / zstd_test . cpp <nl> <nl> # include < unistd . h > <nl> - # include < zstd / zstd . h > <nl> + # include < zstd . h > <nl> # include < vector > <nl> # include < stdexcept > <nl> <nl>
|
Allow build with external double-conversion lz4 zstd re2 zookeeper ()
|
ClickHouse/ClickHouse
|
0e91c470660193b032356680c91f277bce90fc5b
|
2017-02-28T23:49:04Z
|
mmm a / modules / websocket / SCsub <nl> ppp b / modules / websocket / SCsub <nl> thirdparty_sources = [ <nl> " handshake . c " , <nl> " header . c " , <nl> " libwebsockets . c " , <nl> - " minilex . c " , <nl> " output . c " , <nl> " pollfd . c " , <nl> " service . c " , <nl> mmm a / thirdparty / README . md <nl> ppp b / thirdparty / README . md <nl> changes are marked with ` / / - - GODOT - - ` comments . <nl> - License : LGPLv2 . 1 + static linking exception <nl> <nl> File extracted from upstream source : <nl> - - Everything in ` lib / ` except ` http2 / ` , ` event - libs / ` . <nl> + - Everything in ` lib / ` except ` minilex . c ` , ` http2 / ` , ` event - libs / ` . <nl> - From ` misc / ` exclude ` lws - genhash . c ` , ` lws - ring . c ` , ` romfs . { c , h } ` , ` smtp . c ` . <nl> - From ` plat / ` exclude ` lws - plat - { esp * , optee } . c ` . <nl> - From ` server / ` exclude ` access - log . c ` , ` cgi . c ` , ` daemonize . c ` , ` lws - spa . c ` , <nl> deleted file mode 100644 <nl> index 3cb1e33696f . . 00000000000 <nl> mmm a / thirdparty / lws / minilex . c <nl> ppp / dev / null <nl> <nl> - / * <nl> - * minilex . c <nl> - * <nl> - * High efficiency lexical state parser <nl> - * <nl> - * Copyright ( C ) 2011 - 2014 Andy Green < andy @ warmcat . com > <nl> - * <nl> - * Licensed under LGPL2 <nl> - * <nl> - * Usage : gcc minilex . c - o minilex & & . / minilex > lextable . h <nl> - * <nl> - * Run it twice to test parsing on the generated table on stderr <nl> - * / <nl> - <nl> - # include < stdio . h > <nl> - # include < stdlib . h > <nl> - # include < string . h > <nl> - <nl> - # include " lextable - strings . 
h " <nl> - <nl> - / * <nl> - * b7 = 0 = 1 - byte seq <nl> - * 0x08 = fail <nl> - * 2 - byte seq <nl> - * 0x00 - 0x07 , then terminal as given in 2nd byte <nl> - 3 - byte seq <nl> - * no match : go fwd 3 byte , match : jump fwd by amt in + 1 / + 2 bytes <nl> - * = 1 = 1 - byte seq <nl> - * no match : die , match go fwd 1 byte <nl> - * / <nl> - <nl> - unsigned char lextable [ ] = { <nl> - # include " lextable . h " <nl> - } ; <nl> - <nl> - # define PARALLEL 30 <nl> - <nl> - struct state { <nl> - char c [ PARALLEL ] ; <nl> - int state [ PARALLEL ] ; <nl> - int count ; <nl> - int bytepos ; <nl> - <nl> - int real_pos ; <nl> - } ; <nl> - <nl> - struct state state [ 1000 ] ; <nl> - int next = 1 ; <nl> - <nl> - # define FAIL_CHAR 0x08 <nl> - <nl> - int lextable_decode ( int pos , char c ) <nl> - { <nl> - while ( 1 ) { <nl> - if ( lextable [ pos ] & ( 1 < < 7 ) ) { / * 1 - byte , fail on mismatch * / <nl> - if ( ( lextable [ pos ] & 0x7f ) ! = c ) <nl> - return - 1 ; <nl> - / * fall thru * / <nl> - pos + + ; <nl> - if ( lextable [ pos ] = = FAIL_CHAR ) <nl> - return - 1 ; <nl> - return pos ; <nl> - } else { / * b7 = 0 , end or 3 - byte * / <nl> - if ( lextable [ pos ] < FAIL_CHAR ) / * terminal marker * / <nl> - return pos ; <nl> - <nl> - if ( lextable [ pos ] = = c ) / * goto * / <nl> - return pos + ( lextable [ pos + 1 ] ) + <nl> - ( lextable [ pos + 2 ] < < 8 ) ; <nl> - / * fall thru goto * / <nl> - pos + = 3 ; <nl> - / * continue * / <nl> - } <nl> - } <nl> - } <nl> - <nl> - int main ( void ) <nl> - { <nl> - int n = 0 ; <nl> - int m = 0 ; <nl> - int prev ; <nl> - char c ; <nl> - int walk ; <nl> - int saw ; <nl> - int y ; <nl> - int j ; <nl> - int pos = 0 ; <nl> - <nl> - while ( n < sizeof ( set ) / sizeof ( set [ 0 ] ) ) { <nl> - <nl> - m = 0 ; <nl> - walk = 0 ; <nl> - prev = 0 ; <nl> - <nl> - if ( set [ n ] [ 0 ] = = ' \ 0 ' ) { <nl> - n + + ; <nl> - continue ; <nl> - } <nl> - <nl> - while ( set [ n ] [ m ] ) { <nl> - <nl> - saw = 0 ; <nl> - for ( y = 0 ; y < state [ 
walk ] . count ; y + + ) <nl> - if ( state [ walk ] . c [ y ] = = set [ n ] [ m ] ) { <nl> - / * exists - - go forward * / <nl> - walk = state [ walk ] . state [ y ] ; <nl> - saw = 1 ; <nl> - break ; <nl> - } <nl> - <nl> - if ( saw ) <nl> - goto again ; <nl> - <nl> - / * something we didn ' t see before * / <nl> - <nl> - state [ walk ] . c [ state [ walk ] . count ] = set [ n ] [ m ] ; <nl> - <nl> - state [ walk ] . state [ state [ walk ] . count ] = next ; <nl> - state [ walk ] . count + + ; <nl> - walk = next + + ; <nl> - again : <nl> - m + + ; <nl> - } <nl> - <nl> - state [ walk ] . c [ 0 ] = n + + ; <nl> - state [ walk ] . state [ 0 ] = 0 ; / * terminal marker * / <nl> - state [ walk ] . count = 1 ; <nl> - } <nl> - <nl> - walk = 0 ; <nl> - for ( n = 0 ; n < next ; n + + ) { <nl> - state [ n ] . bytepos = walk ; <nl> - walk + = ( 2 * state [ n ] . count ) ; <nl> - } <nl> - <nl> - / * compute everyone ' s position first * / <nl> - <nl> - pos = 0 ; <nl> - walk = 0 ; <nl> - for ( n = 0 ; n < next ; n + + ) { <nl> - <nl> - state [ n ] . real_pos = pos ; <nl> - <nl> - for ( m = 0 ; m < state [ n ] . count ; m + + ) { <nl> - <nl> - if ( state [ n ] . state [ m ] = = 0 ) <nl> - pos + = 2 ; / * terminal marker * / <nl> - else { / * c is a character * / <nl> - if ( ( state [ state [ n ] . state [ m ] ] . bytepos - <nl> - walk ) = = 2 ) <nl> - pos + + ; <nl> - else { <nl> - pos + = 3 ; <nl> - if ( m = = state [ n ] . count - 1 ) <nl> - pos + + ; / * fail * / <nl> - } <nl> - } <nl> - walk + = 2 ; <nl> - } <nl> - } <nl> - <nl> - walk = 0 ; <nl> - pos = 0 ; <nl> - for ( n = 0 ; n < next ; n + + ) { <nl> - for ( m = 0 ; m < state [ n ] . count ; m + + ) { <nl> - <nl> - if ( ! m ) <nl> - fprintf ( stdout , " / * pos % 04x : % 3d * / " , <nl> - state [ n ] . real_pos , n ) ; <nl> - else <nl> - fprintf ( stdout , " " ) ; <nl> - <nl> - y = state [ n ] . c [ m ] ; <nl> - saw = state [ n ] . 
state [ m ] ; <nl> - <nl> - if ( saw = = 0 ) { / / c is a terminal then <nl> - <nl> - if ( y > 0x7ff ) { <nl> - fprintf ( stderr , " terminal too big \ n " ) ; <nl> - return 2 ; <nl> - } <nl> - <nl> - fprintf ( stdout , " 0x % 02X , 0x % 02X " <nl> - " " <nl> - " / * - terminal marker % 2d - * / , \ n " , <nl> - y > > 8 , y & 0xff , y & 0x7f ) ; <nl> - pos + = 2 ; <nl> - walk + = 2 ; <nl> - continue ; <nl> - } <nl> - <nl> - / * c is a character * / <nl> - <nl> - prev = y & 0x7f ; <nl> - if ( prev < 32 | | prev > 126 ) <nl> - prev = ' . ' ; <nl> - <nl> - <nl> - if ( ( state [ saw ] . bytepos - walk ) = = 2 ) { <nl> - fprintf ( stdout , " 0x % 02X / * ' % c ' - > * / , \ n " , <nl> - y | 0x80 , prev ) ; <nl> - pos + + ; <nl> - walk + = 2 ; <nl> - continue ; <nl> - } <nl> - <nl> - j = state [ saw ] . real_pos - pos ; <nl> - <nl> - if ( j > 0xffff ) { <nl> - fprintf ( stderr , <nl> - " Jump > 64K bytes ahead ( % d to % d ) \ n " , <nl> - state [ n ] . real_pos , state [ saw ] . real_pos ) ; <nl> - return 1 ; <nl> - } <nl> - fprintf ( stdout , " 0x % 02X / * ' % c ' * / , 0x % 02X , 0x % 02X " <nl> - " / * ( to 0x % 04X state % 3d ) * / , \ n " , <nl> - y , prev , <nl> - j & 0xff , j > > 8 , <nl> - state [ saw ] . real_pos , saw ) ; <nl> - pos + = 3 ; <nl> - <nl> - if ( m = = state [ n ] . 
count - 1 ) { <nl> - fprintf ( stdout , <nl> - " 0x % 02X , / * fail * / \ n " , <nl> - FAIL_CHAR ) ; <nl> - pos + + ; / * fail * / <nl> - } <nl> - <nl> - walk + = 2 ; <nl> - } <nl> - } <nl> - <nl> - fprintf ( stdout , " / * total size % d bytes * / \ n " , pos ) ; <nl> - <nl> - / * <nl> - * Try to parse every legal input string <nl> - * / <nl> - <nl> - for ( n = 0 ; n < sizeof ( set ) / sizeof ( set [ 0 ] ) ; n + + ) { <nl> - walk = 0 ; <nl> - m = 0 ; <nl> - y = - 1 ; <nl> - <nl> - if ( set [ n ] [ 0 ] = = ' \ 0 ' ) <nl> - continue ; <nl> - <nl> - fprintf ( stderr , " trying ' % s ' \ n " , set [ n ] ) ; <nl> - <nl> - while ( set [ n ] [ m ] ) { <nl> - walk = lextable_decode ( walk , set [ n ] [ m ] ) ; <nl> - if ( walk < 0 ) { <nl> - fprintf ( stderr , " failed \ n " ) ; <nl> - return 3 ; <nl> - } <nl> - <nl> - if ( lextable [ walk ] < FAIL_CHAR ) { <nl> - y = ( lextable [ walk ] < < 8 ) + lextable [ walk + 1 ] ; <nl> - break ; <nl> - } <nl> - m + + ; <nl> - } <nl> - <nl> - if ( y ! = n ) { <nl> - fprintf ( stderr , " decode failed % d \ n " , y ) ; <nl> - return 4 ; <nl> - } <nl> - } <nl> - <nl> - fprintf ( stderr , " All decode OK \ n " ) ; <nl> - <nl> - return 0 ; <nl> - } <nl>
|
Merge pull request from Faless/lws_uwp_fixes
|
godotengine/godot
|
39757c34da290aab1b97f7976a3daf41575c735c
|
2018-03-01T16:25:39Z
|
mmm a / tools / editor / plugins / path_2d_editor_plugin . cpp <nl> ppp b / tools / editor / plugins / path_2d_editor_plugin . cpp <nl> bool Path2DEditor : : forward_input_event ( const InputEvent & p_event ) { <nl> <nl> Ref < Curve2D > curve = node - > get_curve ( ) ; <nl> <nl> - Vector2 new_pos = moving_from + xform . basis_xform ( gpoint - moving_screen_from ) ; <nl> + Vector2 new_pos = moving_from + xform . affine_inverse ( ) . basis_xform ( gpoint - moving_screen_from ) ; <nl> switch ( action ) { <nl> <nl> case ACTION_MOVING_POINT : { <nl> bool Path2DEditor : : forward_input_event ( const InputEvent & p_event ) { <nl> <nl> Ref < Curve2D > curve = node - > get_curve ( ) ; <nl> <nl> - Vector2 new_pos = moving_from + xform . basis_xform ( gpoint - moving_screen_from ) ; <nl> + Vector2 new_pos = moving_from + xform . affine_inverse ( ) . basis_xform ( gpoint - moving_screen_from ) ; <nl> <nl> switch ( action ) { <nl> <nl>
|
Merge pull request from sketchyfun/master
|
godotengine/godot
|
336d9ce5d7c104c7828d620295ff977203b57c2d
|
2015-01-11T12:30:52Z
|
new file mode 100644 <nl> index 000000000000 . . 159135d06b52 <nl> mmm / dev / null <nl> ppp b / validation - test / Sema / Inputs / rdar36801676 . swift <nl> <nl> + import Cocoa <nl> + extension Notification . Name { } <nl> new file mode 100644 <nl> index 000000000000 . . d7221f8ec988 <nl> mmm / dev / null <nl> ppp b / validation - test / Sema / Inputs / rdar36801676_empty . swift <nl> @ @ - 0 , 0 + 1 @ @ <nl> + / / Empty swift source file . <nl> new file mode 100644 <nl> index 000000000000 . . 46f31df18999 <nl> mmm / dev / null <nl> ppp b / validation - test / Sema / wmo_verify_loaded . swift <nl> <nl> + / / RUN : rm - rf % t <nl> + / / RUN : mkdir - p % t <nl> + / / RUN : % target - swift - frontend - swift - version 4 - emit - module - o % t / rdar36801676 . swiftmodule % S / Inputs / rdar36801676 . swift <nl> + / / RUN : % target - swift - frontend - swift - version 4 - emit - silgen - enable - objc - interop - I % t - emit - silgen % S / Inputs / rdar36801676_empty . swift % s | % FileCheck % s <nl> + / / REQUIRES : OS = macosx <nl> + <nl> + / / If AST loaded module verification is run after type checking the empty source <nl> + / / file ( rdar36801676_empty . swift ) , but before type checking this source file , <nl> + / / then the importer caches the declaration for PasteboardType ' s constructor <nl> + / / without synthesizing it ' s body . Eventually , the SILVerifier will raise a linkage error : <nl> + / / SIL verification failed : external declarations of SILFunctions with shared <nl> + / / visibility is not allowed : SingleFunction | | <nl> + / / ! hasSharedVisibility ( RefF - > getLinkage ( ) ) | | RefF - > hasForeignBody ( ) <nl> + <nl> + import Cocoa <nl> + import rdar36801676 <nl> + <nl> + let objCSynthesizedEnum = NSPasteboard . PasteboardType ( rawValue : " MyPboardType " ) <nl> + extension Notification . Name { } <nl> + <nl> + / / NSPasteboardType . init ( rawValue : ) <nl> + / / - just make sure it has a body . 
<nl> + / / CHECK - LABEL : sil shared [ transparent ] [ serializable ] @ $ SSo16NSPasteboardTypea8rawValueABSS_tcfC : $ @ convention ( method ) ( @ owned String , @ thin NSPasteboard . PasteboardType . Type ) - > @ owned NSPasteboard . PasteboardType { <nl> + / / CHECK : bb0 ( % 0 : $ String , % 1 : $ @ thin NSPasteboard . PasteboardType . Type ) : <nl> + / / CHECK : return % { { . * } } : $ NSPasteboard . PasteboardType <nl> + / / CHECK - LABEL : } / / end sil function ' $ SSo16NSPasteboardTypea8rawValueABSS_tcfC ' <nl>
|
Merge remote-tracking branch 'origin/master' into master-next
|
apple/swift
|
1f8e12f67edc7da273cd39d2c4af388c461edcfc
|
2018-01-31T01:29:16Z
|
mmm a / tests / js / server / resilience / move / moving - shards - cluster . js <nl> ppp b / tests / js / server / resilience / move / moving - shards - cluster . js <nl> function getDBServers ( ) { <nl> return servers ; <nl> } <nl> <nl> - const servers = getDBServers ( ) ; <nl> + var servers = getDBServers ( ) ; <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief test suite <nl>
|
Fix jslint
|
arangodb/arangodb
|
554ca3874e250286f2a45bdb5ecfcf05eaea3279
|
2019-09-12T19:25:15Z
|
mmm a / guilib / GUIEditControl . cpp <nl> ppp b / guilib / GUIEditControl . cpp <nl> <nl> # include " CocoaInterface . h " <nl> # endif <nl> <nl> + const char * CGUIEditControl : : smsLetters [ 10 ] = { " ! @ # $ % ^ & * ( ) [ ] { } < > / \ \ | 0 " , " . , ; : \ ' \ " - + _ = ? ` ~ 1 " , " abc2 " , " def3 " , " ghi4 " , " jkl5 " , " mno6 " , " pqrs7 " , " tuv8 " , " wxyz9 " } ; <nl> + const unsigned int CGUIEditControl : : smsDelay = 1000 ; <nl> + <nl> using namespace std ; <nl> <nl> # ifdef WIN32 <nl> CGUIEditControl : : CGUIEditControl ( int parentID , int controlID , float posX , float <nl> m_cursorBlink = 0 ; <nl> m_inputHeading = 0 ; <nl> m_inputType = INPUT_TYPE_TEXT ; <nl> + m_smsLastKey = 0 ; <nl> + m_smsKeyIndex = 0 ; <nl> SetLabel ( text ) ; <nl> } <nl> <nl> CGUIEditControl : : CGUIEditControl ( const CGUIButtonControl & button ) <nl> m_textWidth = GetWidth ( ) ; <nl> m_cursorPos = 0 ; <nl> m_cursorBlink = 0 ; <nl> + m_smsLastKey = 0 ; <nl> + m_smsKeyIndex = 0 ; <nl> } <nl> <nl> CGUIEditControl : : ~ CGUIEditControl ( void ) <nl> bool CGUIEditControl : : OnMessage ( CGUIMessage & message ) <nl> message . SetLabel ( GetLabel2 ( ) ) ; <nl> return true ; <nl> } <nl> + else if ( message . GetMessage ( ) = = GUI_MSG_SETFOCUS | | <nl> + message . GetMessage ( ) = = GUI_MSG_LOSTFOCUS ) <nl> + { <nl> + m_smsTimer . Stop ( ) ; <nl> + } <nl> return CGUIButtonControl : : OnMessage ( message ) ; <nl> } <nl> <nl> bool CGUIEditControl : : OnAction ( const CAction & action ) <nl> if ( m_cursorPos ) <nl> { <nl> m_text2 . erase ( - - m_cursorPos , 1 ) ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( ) ; <nl> } <nl> return true ; <nl> } <nl> bool CGUIEditControl : : OnAction ( const CAction & action ) <nl> if ( m_cursorPos > 0 ) <nl> { <nl> m_cursorPos - - ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( false ) ; <nl> return true ; <nl> } <nl> } <nl> bool CGUIEditControl : : OnAction ( const CAction & action ) <nl> if ( ( unsigned int ) m_cursorPos < m_text2 . 
size ( ) ) <nl> { <nl> m_cursorPos + + ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( false ) ; <nl> return true ; <nl> } <nl> } <nl> else if ( action . id = = ACTION_PASTE ) <nl> { <nl> - # ifdef __APPLE__ <nl> - const char * szStr = Cocoa_Paste ( ) ; <nl> - if ( szStr ) <nl> - { <nl> - m_text2 + = szStr ; <nl> - m_cursorPos + = strlen ( szStr ) ; <nl> - OnTextChanged ( ) ; <nl> - } <nl> - # elif defined _WIN32 <nl> - HGLOBAL hglb ; <nl> - LPTSTR lptstr ; <nl> - if ( OpenClipboard ( g_hWnd ) ) <nl> - { <nl> - hglb = GetClipboardData ( CF_TEXT ) ; <nl> - if ( hglb ! = NULL ) <nl> - { <nl> - lptstr = ( LPTSTR ) GlobalLock ( hglb ) ; <nl> - if ( lptstr ! = NULL ) <nl> - { <nl> - m_text2 = ( char * ) lptstr ; <nl> - GlobalUnlock ( hglb ) ; <nl> - } <nl> - } <nl> - CloseClipboard ( ) ; <nl> - OnTextChanged ( ) ; <nl> - } <nl> - # endif <nl> + OnPasteClipboard ( ) ; <nl> } <nl> else if ( action . id > = KEY_VKEY & & action . id < KEY_ASCII ) <nl> { <nl> bool CGUIEditControl : : OnAction ( const CAction & action ) <nl> if ( b = = 0x25 & & m_cursorPos > 0 ) <nl> { / / left <nl> m_cursorPos - - ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( false ) ; <nl> return true ; <nl> } <nl> if ( b = = 0x27 & & m_cursorPos < m_text2 . length ( ) ) <nl> { / / right <nl> m_cursorPos + + ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( false ) ; <nl> return true ; <nl> } <nl> if ( b = = 0x2e ) <nl> bool CGUIEditControl : : OnAction ( const CAction & action ) <nl> if ( m_cursorPos < m_text2 . length ( ) ) <nl> { / / delete <nl> m_text2 . erase ( m_cursorPos , 1 ) ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( ) ; <nl> return true ; <nl> } <nl> } <nl> bool CGUIEditControl : : OnAction ( const CAction & action ) <nl> if ( m_cursorPos > 0 ) <nl> { / / backspace <nl> m_text2 . 
erase ( - - m_cursorPos , 1 ) ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( ) ; <nl> } <nl> return true ; <nl> } <nl> bool CGUIEditControl : : OnAction ( const CAction & action ) <nl> if ( m_cursorPos ) <nl> { <nl> m_text2 . erase ( - - m_cursorPos , 1 ) ; <nl> - OnTextChanged ( ) ; <nl> } <nl> break ; <nl> } <nl> default : <nl> { <nl> - m_text2 . insert ( m_text2 . begin ( ) + m_cursorPos , ( WCHAR ) action . unicode ) ; <nl> - m_cursorPos + + ; <nl> - OnTextChanged ( ) ; <nl> + m_text2 . insert ( m_text2 . begin ( ) + m_cursorPos + + , ( WCHAR ) action . unicode ) ; <nl> break ; <nl> } <nl> } <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( ) ; <nl> return true ; <nl> } <nl> else if ( action . id > = REMOTE_2 & & action . id < = REMOTE_9 ) <nl> { / / input from the remote <nl> if ( m_inputType = = INPUT_TYPE_FILTER ) <nl> { / / filtering - use single number presses <nl> - m_text2 . insert ( m_text2 . begin ( ) + m_cursorPos , L ' 0 ' + ( action . id - REMOTE_0 ) ) ; <nl> - m_cursorPos + + ; <nl> - OnTextChanged ( ) ; <nl> + m_text2 . insert ( m_text2 . begin ( ) + m_cursorPos + + , L ' 0 ' + ( action . id - REMOTE_0 ) ) ; <nl> + UpdateText ( ) ; <nl> return true ; <nl> } <nl> + else <nl> + OnSMSCharacter ( action . id - REMOTE_0 ) ; <nl> } <nl> return CGUIButtonControl : : OnAction ( action ) ; <nl> } <nl> void CGUIEditControl : : OnClick ( ) <nl> textChanged = CGUIDialogNumeric : : ShowAndGetIPAddress ( utf8 , heading ) ; <nl> break ; <nl> case INPUT_TYPE_SEARCH : <nl> - CGUIDialogKeyboard : : ShowAndGetFilter ( utf8 , true ) ; <nl> + textChanged = CGUIDialogKeyboard : : ShowAndGetFilter ( utf8 , true ) ; <nl> break ; <nl> case INPUT_TYPE_FILTER : <nl> - CGUIDialogKeyboard : : ShowAndGetFilter ( utf8 , false ) ; <nl> + textChanged = CGUIDialogKeyboard : : ShowAndGetFilter ( utf8 , false ) ; <nl> break ; <nl> case INPUT_TYPE_TEXT : <nl> default : <nl> void CGUIEditControl : : OnClick ( ) <nl> { <nl> g_charsetConverter . 
utf8ToW ( utf8 , m_text2 ) ; <nl> m_cursorPos = m_text2 . size ( ) ; <nl> - OnTextChanged ( ) ; <nl> + UpdateText ( ) ; <nl> m_cursorPos = m_text2 . size ( ) ; <nl> } <nl> } <nl> <nl> + void CGUIEditControl : : UpdateText ( bool sendUpdate ) <nl> + { <nl> + m_smsTimer . Stop ( ) ; <nl> + if ( sendUpdate ) <nl> + { <nl> + SEND_CLICK_MESSAGE ( GetID ( ) , GetParentID ( ) , 0 ) ; <nl> + <nl> + vector < CGUIActionDescriptor > textChangeActions = m_textChangeActions ; <nl> + for ( unsigned int i = 0 ; i < textChangeActions . size ( ) ; i + + ) <nl> + { <nl> + CGUIMessage message ( GUI_MSG_EXECUTE , GetID ( ) , GetParentID ( ) ) ; <nl> + message . SetAction ( textChangeActions [ i ] ) ; <nl> + g_windowManager . SendMessage ( message ) ; <nl> + } <nl> + } <nl> + SetInvalid ( ) ; <nl> + } <nl> + <nl> void CGUIEditControl : : SetInputType ( CGUIEditControl : : INPUT_TYPE type , int heading ) <nl> { <nl> m_inputType = type ; <nl> void CGUIEditControl : : RecalcLabelPosition ( ) <nl> <nl> void CGUIEditControl : : RenderText ( ) <nl> { <nl> + if ( m_smsTimer . GetElapsedMilliseconds ( ) > smsDelay ) <nl> + UpdateText ( ) ; <nl> + <nl> if ( m_bInvalidated ) <nl> RecalcLabelPosition ( ) ; <nl> <nl> void CGUIEditControl : : ValidateCursor ( ) <nl> m_cursorPos = m_text2 . size ( ) ; <nl> } <nl> <nl> - void CGUIEditControl : : OnTextChanged ( ) <nl> - { <nl> - SEND_CLICK_MESSAGE ( GetID ( ) , GetParentID ( ) , 0 ) ; <nl> - <nl> - vector < CGUIActionDescriptor > textChangeActions = m_textChangeActions ; <nl> - for ( unsigned int i = 0 ; i < textChangeActions . size ( ) ; i + + ) <nl> - { <nl> - CGUIMessage message ( GUI_MSG_EXECUTE , GetID ( ) , GetParentID ( ) ) ; <nl> - message . SetAction ( textChangeActions [ i ] ) ; <nl> - g_windowManager . SendMessage ( message ) ; <nl> - } <nl> - <nl> - SetInvalid ( ) ; <nl> - } <nl> - <nl> void CGUIEditControl : : SetLabel ( const std : : string & text ) <nl> { <nl> m_textLayout . 
Update ( text ) ; <nl> void CGUIEditControl : : SetCursorPosition ( unsigned int iPosition ) <nl> { <nl> m_cursorPos = iPosition ; <nl> } <nl> + <nl> + void CGUIEditControl : : OnSMSCharacter ( unsigned int key ) <nl> + { <nl> + assert ( key < 10 ) ; <nl> + bool sendUpdate = false ; <nl> + if ( m_smsTimer . IsRunning ( ) ) <nl> + { <nl> + / / we ' re already entering an SMS character <nl> + if ( key ! = m_smsLastKey | | m_smsTimer . GetElapsedMilliseconds ( ) > smsDelay ) <nl> + { / / a different key was clicked than last time , or we have timed out <nl> + m_smsLastKey = key ; <nl> + m_smsKeyIndex = 0 ; <nl> + sendUpdate = true ; <nl> + } <nl> + else <nl> + { / / same key as last time within the appropriate time period <nl> + m_smsKeyIndex + + ; <nl> + if ( m_cursorPos ) <nl> + m_text2 . erase ( - - m_cursorPos , 1 ) ; <nl> + } <nl> + } <nl> + else <nl> + { / / key is pressed for the first time <nl> + m_smsLastKey = key ; <nl> + m_smsKeyIndex = 0 ; <nl> + } <nl> + <nl> + m_smsKeyIndex = m_smsKeyIndex % strlen ( smsLetters [ key ] ) ; <nl> + <nl> + m_text2 . insert ( m_text2 . begin ( ) + m_cursorPos + + , smsLetters [ key ] [ m_smsKeyIndex ] ) ; <nl> + UpdateText ( sendUpdate ) ; <nl> + m_smsTimer . StartZero ( ) ; <nl> + } <nl> + <nl> + void CGUIEditControl : : OnPasteClipboard ( ) <nl> + { <nl> + # ifdef __APPLE__ <nl> + const char * szStr = Cocoa_Paste ( ) ; <nl> + if ( szStr ) <nl> + { <nl> + m_text2 + = szStr ; <nl> + m_cursorPos + = strlen ( szStr ) ; <nl> + UpdateText ( ) ; <nl> + } <nl> + # elif defined _WIN32 <nl> + HGLOBAL hglb ; <nl> + LPTSTR lptstr ; <nl> + if ( OpenClipboard ( g_hWnd ) ) <nl> + { <nl> + hglb = GetClipboardData ( CF_TEXT ) ; <nl> + if ( hglb ! = NULL ) <nl> + { <nl> + lptstr = ( LPTSTR ) GlobalLock ( hglb ) ; <nl> + if ( lptstr ! 
= NULL ) <nl> + { <nl> + m_text2 = ( char * ) lptstr ; <nl> + GlobalUnlock ( hglb ) ; <nl> + } <nl> + } <nl> + CloseClipboard ( ) ; <nl> + UpdateText ( ) ; <nl> + } <nl> + # endif <nl> + } <nl> mmm a / guilib / GUIEditControl . h <nl> ppp b / guilib / GUIEditControl . h <nl> <nl> * / <nl> <nl> # include " GUIButtonControl . h " <nl> + # include " utils / Stopwatch . h " <nl> <nl> / * ! <nl> \ ingroup controls <nl> class CGUIEditControl : public CGUIButtonControl <nl> CStdStringW GetDisplayedText ( ) const ; <nl> void RecalcLabelPosition ( ) ; <nl> void ValidateCursor ( ) ; <nl> - void OnTextChanged ( ) ; <nl> + void UpdateText ( bool sendUpdate = true ) ; <nl> + void OnPasteClipboard ( ) ; <nl> + void OnSMSCharacter ( unsigned int key ) ; <nl> <nl> CStdStringW m_text2 ; <nl> CStdString m_text ; <nl> class CGUIEditControl : public CGUIButtonControl <nl> INPUT_TYPE m_inputType ; <nl> <nl> std : : vector < CGUIActionDescriptor > m_textChangeActions ; <nl> + <nl> + <nl> + unsigned int m_smsKeyIndex ; <nl> + unsigned int m_smsLastKey ; <nl> + CStopWatch m_smsTimer ; <nl> + <nl> + static const char * smsLetters [ 10 ] ; <nl> + static const unsigned int smsDelay ; <nl> } ; <nl> # endif <nl>
|
fixed : Edit controls didn ' t support SMS input . Ticket
|
xbmc/xbmc
|
d45e35ea92664ac224243eeee321910d3194e2cd
|
2009-10-11T02:52:56Z
|
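The diff in the row above adds multi-tap ("SMS") text entry to `CGUIEditControl`: pressing the same remote key within a delay cycles through that key's letters, replacing the last character, while a different key or a timeout commits the letter. A minimal standalone sketch of that logic (class name, letter table, and the timeout value are assumptions, not taken from the XBMC source):

```python
# Hypothetical sketch of multi-tap (SMS) text entry, mirroring the
# OnSMSCharacter() logic in the commit above. Letter table and timeout
# are illustrative assumptions.

SMS_LETTERS = [" ", ".,!?", "abc", "def", "ghi",
               "jkl", "mno", "pqrs", "tuv", "wxyz"]
SMS_DELAY = 1.0  # assumed timeout (seconds) between presses of the same key


class MultiTapEntry:
    def __init__(self):
        self.text = []        # characters entered so far
        self.last_key = None  # last numeric key pressed
        self.key_index = 0    # position within that key's letter cycle
        self.last_time = None # timestamp of the previous press

    def press(self, key, now):
        """Handle numeric key `key` (0-9) pressed at time `now` (seconds)."""
        letters = SMS_LETTERS[key]
        if (self.last_key == key and self.last_time is not None
                and now - self.last_time <= SMS_DELAY):
            # Same key within the delay: cycle to the next letter,
            # replacing the character we just inserted.
            self.key_index = (self.key_index + 1) % len(letters)
            self.text.pop()
        else:
            # Different key, or timed out: start a new letter cycle.
            self.key_index = 0
        self.last_key = key
        self.last_time = now
        self.text.append(letters[self.key_index])
        return "".join(self.text)
```

For example, pressing 2 twice quickly yields "b" (the second press replaces "a"), while pressing 2, waiting past the delay, and pressing 2 again yields "aa".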
mmm a / lib / Parse / ParseType . cpp <nl> ppp b / lib / Parse / ParseType . cpp <nl> bool Parser : : isOptionalToken ( const Token & T ) const { <nl> return true ; <nl> <nl> / / A postfix or bound infix operator token that begins with ' ? ' can be <nl> - / / optional too . We ' ll munch off the ' ? ' . <nl> - if ( ( T . is ( tok : : oper_postfix ) | | T . is ( tok : : oper_binary_unspaced ) | | <nl> - T . is ( tok : : oper_binary_spaced ) ) & & <nl> + / / optional too . We ' ll munch off the ' ? ' , so long as it is left - bound with <nl> + / / the type ( i . e . , parsed as a postfix or unspaced binary operator ) . <nl> + if ( ( T . is ( tok : : oper_postfix ) | | T . is ( tok : : oper_binary_unspaced ) ) & & <nl> T . getText ( ) . startswith ( " ? " ) ) <nl> return true ; <nl> return false ; <nl> bool Parser : : isImplicitlyUnwrappedOptionalToken ( const Token & T ) const { <nl> if ( T . is ( tok : : exclaim_postfix ) | | T . is ( tok : : sil_exclamation ) ) <nl> return true ; <nl> / / A postfix or bound infix operator token that begins with ' ! ' can be <nl> - / / implicitly unwrapped optional too . We ' ll munch off the ' ! ' . <nl> - if ( ( T . is ( tok : : oper_postfix ) | | T . is ( tok : : oper_binary_unspaced ) | | <nl> - T . is ( tok : : oper_binary_spaced ) ) & & <nl> + / / implicitly unwrapped optional too . We ' ll munch off the ' ! ' , so long as it <nl> + / / is left - bound with the type ( i . e . , parsed as a postfix or unspaced binary <nl> + / / operator ) . <nl> + if ( ( T . is ( tok : : oper_postfix ) | | T . is ( tok : : oper_binary_unspaced ) ) & & <nl> T . getText ( ) . startswith ( " ! " ) ) <nl> return true ; <nl> return false ; <nl> mmm a / test / Parse / recovery . swift <nl> ppp b / test / Parse / recovery . 
swift <nl> Base = 1 as Base = 1 / / expected - error { { cannot assign to the result of this expres <nl> / / < rdar : / / problem / 18634543 > Parser hangs at swift : : Parser : : parseType <nl> public enum TestA { <nl> public static func convertFromExtenndition ( <nl> - / / expected - error @ + 3 { { expected parameter type following ' : ' } } <nl> - / / expected - error @ + 2 { { expected ' , ' separator } } <nl> + / / expected - error @ + 3 2 { { expected parameter type following ' : ' } } <nl> + / / expected - error @ + 2 3 { { expected ' , ' separator } } <nl> / / expected - error @ + 1 { { use of undeclared type ' s ' } } <nl> s . _core . count ! = 0 , " Can ' t form a Character from an empty String " ) <nl> } <nl> <nl> public enum TestB { <nl> public static func convertFromExtenndition ( <nl> - / / expected - error @ + 3 { { expected parameter type following ' : ' } } <nl> - / / expected - error @ + 2 { { expected ' , ' separator } } <nl> + / / expected - error @ + 3 2 { { expected parameter type following ' : ' } } <nl> + / / expected - error @ + 2 3 { { expected ' , ' separator } } <nl> / / expected - error @ + 1 { { use of undeclared type ' s ' } } <nl> s . _core . count ? = 0 , " Can ' t form a Character from an empty String " ) <nl> } <nl> public enum TestB { <nl> / / < rdar : / / problem / 18634543 > Infinite loop and unbounded memory consumption in parser <nl> class bar { } <nl> var baz : bar <nl> - func foo1 ( bar ! = baz ) { } <nl> + func foo1 ( bar ! = baz ) { } <nl> func foo2 ( bar ! = baz ) { } <nl> <nl> <nl> func foo2 ( bar ! = baz ) { } <nl> / / < rdar : / / problem / 18662272 > Infinite loop and unbounded memory consumption in parser <nl> class Baz { } <nl> class Bar < T > { } <nl> - func f1 ( a : Bar < Baz ! > ) { } <nl> - func f2 ( a : Bar < Baz / * some comment * / ! > ) { } <nl> + func f1 ( a : Bar < Baz ! > ) { } <nl> + func f2 ( a : Bar < Baz / * some comment * / ! 
> ) { } <nl> <nl> <nl> / / rdar : / / 19605567 <nl> mmm a / test / expr / expressions . swift <nl> ppp b / test / expr / expressions . swift <nl> func testOptionalChaining ( a : Int ? , b : Int ! , c : Int ? ? ) { <nl> <nl> <nl> / / < rdar : / / problem / 19657458 > Nil Coalescing operator ( ? ? ) should have a higher precedence <nl> - <nl> func testNilCoalescePrecedence ( cond : Bool , a : Int ? , r : Range < Int > ? ) { <nl> / / ? ? should have higher precedence than logical operators like | | and comparisons . <nl> if cond | | ( a ? ? 42 > 0 ) { } / / Ok . <nl> func testNilCoalescePrecedence ( cond : Bool , a : Int ? , r : Range < Int > ? ) { <nl> let r3 = r ? ? 0 . . . 42 / / parses as the first one , not the second . <nl> } <nl> <nl> + / / < rdar : / / problem / 19772570 > Parsing of as and ? ? regressed <nl> + func testOptionalTypeParsing ( a : AnyObject ) - > String { <nl> + return a as ? String ? ? " default name string here " <nl> + } <nl> + <nl> + <nl>
|
fix < rdar : / / problem / 19772570 > Parsing of as and ? ? regressed
|
apple/swift
|
db421806e2506500b5072640f1d44a122ab2bd91
|
2015-02-12T05:54:05Z
|
mmm a / lib / AST / NameLookup . cpp <nl> ppp b / lib / AST / NameLookup . cpp <nl> bool Module : : lookupQualified ( Type type , <nl> <nl> / / Look for module references . <nl> if ( auto moduleTy = type - > getAs < ModuleType > ( ) ) { <nl> - moduleTy - > getModule ( ) - > lookupValue ( Module : : AccessPathTy ( ) , name , <nl> - NLKind : : QualifiedLookup , decls ) ; <nl> + Module * module = moduleTy - > getModule ( ) ; <nl> + module - > lookupValue ( Module : : AccessPathTy ( ) , name , <nl> + NLKind : : QualifiedLookup , decls ) ; <nl> + <nl> + / / Prefer decls from the module itself , rather than imported modules . <nl> + if ( ! decls . empty ( ) ) <nl> + return true ; <nl> + <nl> + / / Track whether we ' ve already searched the Clang modules . <nl> + / / FIXME : This is a weird hack . We either need to filter within the <nl> + / / Clang module importer , or we need to change how this works . <nl> + bool searchedClangModule = <nl> + module - > getContextKind ( ) = = DeclContextKind : : ClangModule ; <nl> + <nl> + module - > forAllVisibleModules ( Nothing , <nl> + makeStackLambda ( <nl> + [ & ] ( const Module : : ImportedModule & ImpEntry ) { <nl> + / / FIXME : Only searching Clang modules once . <nl> + if ( ImpEntry . first . empty ( ) & & <nl> + ImpEntry . second - > getContextKind ( ) = = DeclContextKind : : ClangModule ) { <nl> + if ( searchedClangModule ) <nl> + return ; <nl> + <nl> + searchedClangModule = true ; <nl> + } <nl> + <nl> + / / FIXME : Is the re - exported lookup really unqualified ? We do want it <nl> + / / to ignore the Builtin module , but no one should be re - exporting that . <nl> + ImpEntry . second - > lookupValue ( ImpEntry . first , name , <nl> + NLKind : : UnqualifiedLookup , decls ) ; <nl> + } <nl> + ) ) ; <nl> return ! decls . empty ( ) ; <nl> } <nl> <nl> mmm a / lib / Sema / TypeCheckPattern . cpp <nl> ppp b / lib / Sema / TypeCheckPattern . 
cpp <nl> class ResolveTypeReference : public ASTVisitor < ResolveTypeReference , <nl> lookup = TC . lookupMemberType ( td - > getDeclaredType ( ) , ude - > getName ( ) ) ; <nl> } else if ( Module * m = curScope . dyn_cast < Module * > ( ) ) { <nl> / / Look into the module . <nl> - lookup = TC . lookupMemberType ( m , ude - > getName ( ) ) ; <nl> + lookup = TC . lookupMemberType ( ModuleType : : get ( m ) , ude - > getName ( ) ) ; <nl> } else <nl> llvm_unreachable ( " invalid curType " ) ; <nl> <nl> mmm a / lib / Sema / TypeCheckType . cpp <nl> ppp b / lib / Sema / TypeCheckType . cpp <nl> static void diagnoseUnboundGenericType ( TypeChecker & tc , Type ty , SourceLoc loc ) { <nl> unbound - > getDecl ( ) - > getName ( ) ) ; <nl> } <nl> <nl> - / / / \ brief Find a type member as a qualified member of a module . <nl> - LookupTypeResult TypeChecker : : lookupMemberType ( Module * module , Identifier name ) { <nl> - LookupTypeResult result ; <nl> - SmallVector < ValueDecl * , 4 > decls ; <nl> - / / FIXME : The use of AccessPathTy ( ) is weird here . <nl> - module - > lookupValue ( Module : : AccessPathTy ( ) , name , <nl> - NLKind : : QualifiedLookup , decls ) ; <nl> - <nl> - for ( auto decl : decls ) { <nl> - / / Only consider type declarations . <nl> - auto typeDecl = dyn_cast < TypeDecl > ( decl ) ; <nl> - if ( ! typeDecl ) <nl> - continue ; <nl> - <nl> - auto type = typeDecl - > getDeclaredType ( ) ; <nl> - result . addResult ( { typeDecl , type } ) ; <nl> - } <nl> - return result ; <nl> - } <nl> - <nl> / / / \ brief Returns a valid type or ErrorType in case of an error . <nl> static Type resolveTypeDecl ( TypeChecker & TC , TypeDecl * typeDecl , SourceLoc loc , <nl> DeclContext * dc , <nl> resolveIdentTypeComponent ( TypeChecker & TC , <nl> <nl> / / Lookup into a module . <nl> auto module = parent . get < Module * > ( ) ; <nl> - LookupTypeResult foundModuleTypes = TC . lookupMemberType ( module , <nl> - comp . 
getIdentifier ( ) ) ; <nl> + LookupTypeResult foundModuleTypes = <nl> + TC . lookupMemberType ( ModuleType : : get ( module ) , comp . getIdentifier ( ) ) ; <nl> <nl> / / If we didn ' t find a type , complain . <nl> if ( ! foundModuleTypes ) { <nl> mmm a / lib / Sema / TypeChecker . h <nl> ppp b / lib / Sema / TypeChecker . h <nl> class TypeChecker : public ASTMutationListener { <nl> / / / <nl> / / / \ returns The result of name lookup . <nl> LookupTypeResult lookupMemberType ( Type type , Identifier name ) ; <nl> - <nl> - / / / \ brief Look up a member type within the given module . <nl> - / / / <nl> - / / / This looks for members types directly within the module . It finds types <nl> - / / / that would be found by qualified reference ( such as swift . Int ) and not <nl> - / / / types that are imports . <nl> - / / / <nl> - / / / \ param module The module in which to look for a member type . <nl> - / / / <nl> - / / / \ param name The name of the type to look for . <nl> - / / / <nl> - / / / \ returns The result of name lookup . <nl> - LookupTypeResult lookupMemberType ( Module * module , Identifier name ) ; <nl> <nl> / / / \ brief Look up the constructors of the given type . <nl> / / / <nl> new file mode 100644 <nl> index 000000000000 . . 26f464d4cacf <nl> mmm / dev / null <nl> ppp b / test / NameBinding / Inputs / abcde . swift <nl> <nl> + struct A { } <nl> + struct B { } <nl> + struct C { } <nl> + struct D { } <nl> + struct E { } <nl> new file mode 100644 <nl> index 000000000000 . . ed8103e53c6c <nl> mmm / dev / null <nl> ppp b / test / NameBinding / Inputs / aeiou . swift <nl> <nl> + struct A { } <nl> + struct E { } <nl> + struct I { } <nl> + struct O { } <nl> + struct U { } <nl> new file mode 100644 <nl> index 000000000000 . . f2d0ed7d99bb <nl> mmm / dev / null <nl> ppp b / test / NameBinding / Inputs / letters . 
swift <nl> <nl> + import abcde <nl> + import aeiou <nl> + <nl> + struct C { <nl> + var b : B <nl> + } <nl> new file mode 100644 <nl> index 000000000000 . . e01dca045f8b <nl> mmm / dev / null <nl> ppp b / test / NameBinding / import - resolution . swift <nl> <nl> + / / RUN : rm - rf % t <nl> + / / RUN : mkdir - p % t <nl> + / / RUN : % swift - emit - module - o % t % S / Inputs / abcde . swift <nl> + / / RUN : % swift - emit - module - o % t % S / Inputs / aeiou . swift <nl> + / / RUN : % swift - emit - module - o % t - I = % t % S / Inputs / letters . swift <nl> + / / RUN : % swift - parse % s - I = % t - sdk = - verify <nl> + <nl> + import letters <nl> + import abcde <nl> + <nl> + var qA : letters . A / / expected - error { { ambiguous type name ' A ' in module ' letters ' } } <nl> + var qB : letters . B <nl> + var qC : letters . C <nl> + <nl> + letters . abcde . A / / expected - error { { ' module < letters > ' does not have a member named ' abcde ' } } <nl> + letters . aeiou . A / / expected - error { { ' module < letters > ' does not have a member named ' aeiou ' } } <nl> + <nl> + var uA : A / / expected - error { { ' A ' is ambiguous for type look up in this context } } <nl> + var uB : B <nl> + var uC : C / / expected - error { { ' C ' is ambiguous for type look up in this context } } <nl> + <nl> + var qA1 : abcde . A / / okay <nl> + var qA2 : aeiou . A / / okay <nl>
|
Handle name resolution for qualified access into a module .
|
apple/swift
|
2479f8087bd9bd4aa82771967bf51d085ea602f2
|
2013-08-02T21:01:03Z
|
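The name-lookup change in the row above makes qualified lookup into a module prefer the module's own declarations over ones merely visible through its imports, falling back to imports (possibly ambiguously) only when the module itself declares nothing by that name — exactly the behavior its `import-resolution.swift` test exercises. A minimal sketch of that precedence rule (all class and method names here are illustrative, not Swift compiler API):

```python
# Sketch of the qualified-lookup precedence from the commit above:
# `Module.Name` resolves to the module's own declaration when one exists;
# otherwise all matching declarations from its imports are collected,
# which may leave the reference ambiguous.

class Module:
    def __init__(self, name, decls=None, imports=None):
        self.name = name
        self.decls = dict(decls or {})      # declared name -> declaration
        self.imports = list(imports or [])  # directly imported modules

    def lookup_qualified(self, name):
        # Prefer decls from the module itself, rather than imported modules.
        if name in self.decls:
            return [self.decls[name]]
        results = []
        for imported in self.imports:
            if name in imported.decls:
                results.append(imported.decls[name])
        return results  # len > 1 means the qualified name is ambiguous
```

Mirroring the test case: if `letters` imports `abcde` (declaring A, B) and `aeiou` (declaring A) and itself declares C, then `letters.C` finds the local C, `letters.B` finds the imported one, and `letters.A` is ambiguous.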
mmm a / . jenkins / caffe2 / bench . sh <nl> ppp b / . jenkins / caffe2 / bench . sh <nl> <nl> <nl> source " $ ( dirname " $ { BASH_SOURCE [ 0 ] } " ) / common . sh " <nl> <nl> - # Anywhere except $ ROOT_DIR should work <nl> + # Anywhere except $ ROOT_DIR should work . This is so the python import doesn ' t <nl> + # get confused by any ' caffe2 ' directory in cwd <nl> cd " $ INSTALL_PREFIX " <nl> <nl> if [ [ $ BUILD_ENVIRONMENT = = * - cuda * ] ] ; then <nl> else <nl> num_gpus = 0 <nl> fi <nl> <nl> - caffe2_pypath = " $ ( python - c ' import os ; import caffe2 ; print ( os . path . dirname ( os . path . realpath ( caffe2 . __file__ ) ) ) ' ) " <nl> + caffe2_pypath = " $ ( cd / usr & & python - c ' import os ; import caffe2 ; print ( os . path . dirname ( os . path . realpath ( caffe2 . __file__ ) ) ) ' ) " <nl> cmd = " $ PYTHON $ caffe2_pypath / python / examples / resnet50_trainer . py - - train_data null - - batch_size 64 - - epoch_size 6400 - - num_epochs 2 " <nl> if ( ( $ num_gpus = = 0 ) ) ; then <nl> cmd = " $ cmd - - use_cpu " <nl> mmm a / . jenkins / caffe2 / build . sh <nl> ppp b / . jenkins / caffe2 / build . sh <nl> if [ [ " $ BUILD_ENVIRONMENT " = = * cmake * ] ] ; then <nl> # This is to save test binaries for testing <nl> mv " $ INSTALL_PREFIX / test / " " $ INSTALL_PREFIX / cpp_test / " <nl> <nl> - ls $ INSTALL_PREFIX <nl> + ls - lah $ INSTALL_PREFIX <nl> <nl> else <nl> # Python build . Uses setup . py to install into site - packages <nl> else <nl> build_args + = ( " USE_FBGEMM = OFF " ) <nl> build_args + = ( " USE_MKLDNN = OFF " ) <nl> build_args + = ( " USE_DISTRIBUTED = ON " ) <nl> + for build_arg in " $ { build_args [ @ ] } " ; do <nl> + export $ build_arg <nl> + done <nl> <nl> # sccache will be stuck if all cores are used for compiling <nl> # see https : / / github . 
com / pytorch / pytorch / pull / 7361 <nl> else <nl> export MAX_JOBS = ` expr $ ( nproc ) - 1 ` <nl> fi <nl> <nl> - for build_arg in " $ { build_args [ @ ] } " ; do <nl> - export $ build_arg <nl> - done <nl> $ PYTHON setup . py install - - user <nl> <nl> - # This is to save test binaries for testing . Copying caffe2 / test to <nl> - # INSTALL_PREFIX , which is / usr / local / caffe2 / , enables these setup . py builds <nl> - # to share cpp - tests test - code with the cmake - only build above . In test . sh <nl> - # the cpp tests are run in install_prefix <nl> - cp - r torch / lib / tmp_install $ INSTALL_PREFIX <nl> - mkdir - p " $ INSTALL_PREFIX / cpp_test / " <nl> - cp - r caffe2 / test / * " $ INSTALL_PREFIX / cpp_test / " <nl> - <nl> - ls $ INSTALL_PREFIX <nl> - <nl> report_compile_cache_stats <nl> fi <nl> <nl> mmm a / . jenkins / caffe2 / common . sh <nl> ppp b / . jenkins / caffe2 / common . sh <nl> fi <nl> # builds . In + python builds the cpp tests are copied to / usr / local / caffe2 so <nl> # that the test code in . jenkins / test . sh is the same <nl> INSTALL_PREFIX = " / usr / local / caffe2 " <nl> + <nl> + mkdir - p " $ gtest_reports_dir " | | true <nl> + mkdir - p " $ pytest_reports_dir " | | true <nl> + mkdir - p " $ INSTALL_PREFIX " | | true <nl> mmm a / . jenkins / caffe2 / test . sh <nl> ppp b / . jenkins / caffe2 / test . 
sh <nl> if [ [ " $ { BUILD_ENVIRONMENT } " = = * - android * ] ] ; then <nl> exit 0 <nl> fi <nl> <nl> - rm - rf " $ TEST_DIR " & & mkdir - p " $ TEST_DIR " <nl> - <nl> - cd " $ { WORKSPACE } " <nl> + # Find where cpp tests and Caffe2 itself are installed <nl> + if [ [ " $ BUILD_ENVIRONMENT " = = * cmake * ] ] ; then <nl> + # For cmake only build we install everything into / usr / local <nl> + cpp_test_dir = " $ INSTALL_PREFIX / cpp_test " <nl> + ld_library_path = " $ INSTALL_PREFIX / lib " <nl> + else <nl> + # For Python builds we install into python <nl> + # cd to / usr first so the python import doesn ' t get confused by any ' caffe2 ' <nl> + # directory in cwd <nl> + python_installation = " $ ( dirname $ ( dirname $ ( cd / usr & & python - c ' import os ; import caffe2 ; print ( os . path . realpath ( caffe2 . __file__ ) ) ' ) ) ) " <nl> + caffe2_pypath = " $ python_installation / caffe2 " <nl> + cpp_test_dir = " $ python_installation / caffe2 / cpp_test " <nl> + ld_library_path = " $ python_installation / torch / lib " <nl> + fi <nl> <nl> - # # # # # # # # # # # # # <nl> + # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # C + + tests # <nl> - # # # # # # # # # # # # # <nl> - <nl> + # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> echo " Running C + + tests . . 
" <nl> - export LD_LIBRARY_PATH = " $ { LD_LIBRARY_PATH } : $ { INSTALL_PREFIX } / lib " <nl> - mkdir - p " $ gtest_reports_dir " <nl> - for test in $ ( find " $ { INSTALL_PREFIX } / cpp_test " - executable - type f ) ; do <nl> + for test in $ ( find " $ cpp_test_dir " - executable - type f ) ; do <nl> case " $ test " in <nl> # skip tests we know are hanging or bad <nl> * / mkl_utils_test | * / aten / integer_divider_test ) <nl> for test in $ ( find " $ { INSTALL_PREFIX } / cpp_test " - executable - type f ) ; do <nl> if [ [ " $ BUILD_ENVIRONMENT " = = * rocm * ] ] ; then <nl> continue <nl> else <nl> - " $ test " <nl> + LD_LIBRARY_PATH = " $ ld_library_path " " $ test " <nl> fi <nl> ; ; <nl> * ) <nl> for test in $ ( find " $ { INSTALL_PREFIX } / cpp_test " - executable - type f ) ; do <nl> # output than it is to have XML output for Jenkins . <nl> # Note : in the future , if we want to use xml test reporter once we switch <nl> # to all gtest , one can simply do : <nl> - " $ test " - - gtest_output = xml : " $ gtest_reports_dir / $ ( basename $ test ) . xml " <nl> + LD_LIBRARY_PATH = " $ ld_library_path " \ <nl> + " $ test " - - gtest_output = xml : " $ gtest_reports_dir / $ ( basename $ test ) . xml " <nl> ; ; <nl> esac <nl> done <nl> if [ [ " $ BUILD_ENVIRONMENT " = = * cmake * ] ] ; then <nl> exit 0 <nl> fi <nl> <nl> - # Anywhere except $ ROOT_DIR should work <nl> - cd " $ INSTALL_PREFIX " <nl> - caffe2_pypath = " $ ( python - c ' import os ; import caffe2 ; print ( os . path . dirname ( os . path . realpath ( caffe2 . __file__ ) ) ) ' ) " <nl> - <nl> if [ [ " $ BUILD_ENVIRONMENT " = = * ubuntu14 . 04 * ] ] ; then <nl> # Hotfix , use hypothesis 3 . 44 . 6 on Ubuntu 14 . 04 <nl> # See comments on <nl> else <nl> pip install - - user - - no - cache - dir hypothesis = = 3 . 59 . 
0 <nl> fi <nl> <nl> - mkdir - p " $ pytest_reports_dir " <nl> - <nl> # Collect additional tests to run ( outside caffe2 / python ) <nl> EXTRA_TESTS = ( ) <nl> <nl> pip install - - user pytest - sugar <nl> # # # # # # # # # # # # # # # # # # # # # <nl> # torchvision tests # <nl> # # # # # # # # # # # # # # # # # # # # # <nl> - <nl> if [ [ " $ BUILD_ENVIRONMENT " = = * onnx * ] ] ; then <nl> pip install - - user torchvision <nl> " $ ROOT_DIR / scripts / onnx / test . sh " <nl>
|
Fixing missing cpp tests for Caffe2 setup . py builds ( )
|
pytorch/pytorch
|
c448f85e1f27385e033f2a9c672be9b9d61202d5
|
2019-01-15T21:09:12Z
|
mmm a / include / grpc + + / impl / codegen / thrift_serializer . h <nl> ppp b / include / grpc + + / impl / codegen / thrift_serializer . h <nl> <nl> * <nl> * / <nl> <nl> - # ifndef GRPCXX_IMPL_CODEGEN_THRIFT_SERIALIZER_H <nl> - # define GRPCXX_IMPL_CODEGEN_THRIFT_SERIALIZER_H <nl> + # ifndef GRPCXX_IMPL_CODEGEN_THRIFT_SERIALIZER_H <nl> + # define GRPCXX_IMPL_CODEGEN_THRIFT_SERIALIZER_H <nl> <nl> - # include < memory > <nl> - # include < string > <nl> - # include < stdexcept > <nl> # include < grpc / impl / codegen / byte_buffer . h > <nl> # include < grpc / impl / codegen / byte_buffer_reader . h > <nl> # include < grpc / impl / codegen / slice . h > <nl> <nl> # include < thrift / protocol / TProtocolException . h > <nl> # include < thrift / transport / TBufferTransports . h > <nl> # include < thrift / transport / TTransportUtils . h > <nl> + # include < boost / make_shared . hpp > <nl> + # include < memory > <nl> + # include < stdexcept > <nl> + # include < string > <nl> <nl> namespace apache { <nl> namespace thrift { <nl> using apache : : thrift : : transport : : TMemoryBuffer ; <nl> using apache : : thrift : : transport : : TBufferBase ; <nl> using apache : : thrift : : transport : : TTransport ; <nl> <nl> - template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> - public : <nl> + template < typename Dummy , typename Protocol > <nl> + class ThriftSerializer { <nl> + public : <nl> ThriftSerializer ( ) <nl> - : prepared_ ( false ) <nl> - , last_deserialized_ ( false ) <nl> - , serialize_version_ ( false ) { } <nl> + : prepared_ ( false ) , <nl> + last_deserialized_ ( false ) , <nl> + serialize_version_ ( false ) { } <nl> <nl> virtual ~ ThriftSerializer ( ) { } <nl> <nl> / / Serialize the passed type into the internal buffer <nl> / / and returns a pointer to internal buffer and its size <nl> - template < typename T > void Serialize ( const T & fields , const uint8_t * * serializedBuffer , <nl> - size_t * serializedLen ) { <nl> - / / prepare 
or reset buffer <nl> + template < typename T > <nl> + void Serialize ( const T & fields , const uint8_t * * serialized_buffer , <nl> + size_t * serialized_len ) { <nl> + / / prepare or reset buffer <nl> if ( ! prepared_ | | last_deserialized_ ) { <nl> prepare ( ) ; <nl> } else { <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> protocol_ - > writeMessageBegin ( " " , TMessageType ( 0 ) , 0 ) ; <nl> } <nl> <nl> - / / serilaize fields into buffer <nl> + / / serialize fields into buffer <nl> fields . write ( protocol_ . get ( ) ) ; <nl> <nl> / / write the end of message <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> protocol_ - > writeMessageEnd ( ) ; <nl> } <nl> <nl> - uint8_t * byteBuffer ; <nl> - uint32_t byteBufferSize ; <nl> - buffer_ - > getBuffer ( & byteBuffer , & byteBufferSize ) ; <nl> - * serializedBuffer = byteBuffer ; <nl> - * serializedLen = byteBufferSize ; <nl> + uint8_t * byte_buffer ; <nl> + uint32_t byte_buffer_size ; <nl> + buffer_ - > getBuffer ( & byte_buffer , & byte_buffer_size ) ; <nl> + * serialized_buffer = byte_buffer ; <nl> + * serialized_len = byte_buffer_size ; <nl> } <nl> <nl> / / Serialize the passed type into the byte buffer <nl> - template < typename T > void Serialize ( const T & fields , grpc_byte_buffer * * bp ) { <nl> + template < typename T > <nl> + void Serialize ( const T & fields , grpc_byte_buffer * * bp ) { <nl> + const uint8_t * byte_buffer ; <nl> + size_t byte_buffer_size ; <nl> <nl> - const uint8_t * byteBuffer ; <nl> - size_t byteBufferSize ; <nl> + Serialize ( fields , & byte_buffer , & byte_buffer_size ) ; <nl> <nl> - Serialize ( fields , & byteBuffer , & byteBufferSize ) ; <nl> - <nl> - gpr_slice slice = gpr_slice_from_copied_buffer ( ( char * ) byteBuffer , byteBufferSize ) ; <nl> + gpr_slice slice = <nl> + gpr_slice_from_copied_buffer ( ( char * ) byte_buffer , byte_buffer_size ) ; <nl> <nl> * bp = grpc_raw_byte_buffer_create ( & slice , 1 ) ; 
<nl> <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> <nl> / / Deserialize the passed char array into the passed type , returns the number <nl> / / of bytes that have been consumed from the passed string . <nl> - template < typename T > uint32_t Deserialize ( const uint8_t * serializedBuffer , size_t length , <nl> - T * fields ) { <nl> + template < typename T > <nl> + uint32_t Deserialize ( const uint8_t * serialized_buffer , size_t length , <nl> + T * fields ) { <nl> / / prepare buffer if necessary <nl> if ( ! prepared_ ) { <nl> prepare ( ) ; <nl> } <nl> last_deserialized_ = true ; <nl> <nl> - / / reset buffer transport <nl> - buffer_ - > resetBuffer ( ( uint8_t * ) serializedBuffer , length ) ; <nl> + / / reset buffer transport <nl> + buffer_ - > resetBuffer ( ( uint8_t * ) serialized_buffer , length ) ; <nl> <nl> / / read the protocol version if necessary <nl> if ( serialize_version_ ) { <nl> std : : string name = " " ; <nl> - TMessageType mt = ( TMessageType ) 0 ; <nl> + TMessageType mt = ( TMessageType ) 0 ; <nl> int32_t seq_id = 0 ; <nl> protocol_ - > readMessageBegin ( name , mt , seq_id ) ; <nl> } <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> return len ; <nl> } <nl> <nl> - <nl> / / Deserialize the passed byte buffer to passed type , returns the number <nl> / / of bytes consumed from byte buffer <nl> - template < typename T > uint32_t Deserialize ( grpc_byte_buffer * buffer , T * msg ) { <nl> - <nl> + template < typename T > <nl> + uint32_t Deserialize ( grpc_byte_buffer * buffer , T * msg ) { <nl> grpc_byte_buffer_reader reader ; <nl> grpc_byte_buffer_reader_init ( & reader , buffer ) ; <nl> <nl> gpr_slice slice = grpc_byte_buffer_reader_readall ( & reader ) ; <nl> <nl> - uint32_t len = Deserialize ( GPR_SLICE_START_PTR ( slice ) , GPR_SLICE_LENGTH ( slice ) , msg ) ; <nl> + uint32_t len = <nl> + Deserialize ( GPR_SLICE_START_PTR ( slice ) , GPR_SLICE_LENGTH ( slice ) , msg ) ; <nl> 
<nl> gpr_slice_unref ( slice ) ; <nl> <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> } <nl> <nl> / / set serialization version flag <nl> - void SetSerializeVersion ( bool value ) { <nl> - serialize_version_ = value ; <nl> - } <nl> + void SetSerializeVersion ( bool value ) { serialize_version_ = value ; } <nl> <nl> / / Set the container size limit to deserialize <nl> / / This function should be called after buffer_ is initialized <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> protocol_ - > setStringSizeLimit ( string_limit ) ; <nl> } <nl> <nl> - private : <nl> + private : <nl> bool prepared_ ; <nl> bool last_deserialized_ ; <nl> boost : : shared_ptr < TMemoryBuffer > buffer_ ; <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> bool serialize_version_ ; <nl> <nl> void prepare ( ) { <nl> + <nl> buffer_ . reset ( new TMemoryBuffer ( ) ) ; <nl> <nl> / / create a protocol for the memory buffer transport <nl> template < typename Dummy , typename Protocol > class ThriftSerializer { <nl> prepared_ = true ; <nl> } <nl> <nl> - } ; / / ThriftSerializer <nl> + } ; / / ThriftSerializer <nl> <nl> - typedef ThriftSerializer < void , TBinaryProtocolT < TBufferBase , TNetworkBigEndian > > ThriftSerializerBinary ; <nl> - typedef ThriftSerializer < void , TCompactProtocolT < TBufferBase > > ThriftSerializerCompact ; <nl> + typedef ThriftSerializer < void , TBinaryProtocolT < TBufferBase , TNetworkBigEndian > > <nl> + ThriftSerializerBinary ; <nl> + typedef ThriftSerializer < void , TCompactProtocolT < TBufferBase > > <nl> + ThriftSerializerCompact ; <nl> <nl> - } / / namespace util <nl> - } / / namespace thrift <nl> - } / / namespace apache <nl> + } / / namespace util <nl> + } / / namespace thrift <nl> + } / / namespace apache <nl> <nl> # endif <nl> \ No newline at end of file <nl> mmm a / include / grpc + + / impl / codegen / thrift_utils . 
h <nl> ppp b / include / grpc + + / impl / codegen / thrift_utils . h <nl> <nl> # ifndef GRPCXX_IMPL_CODEGEN_THRIFT_UTILS_H <nl> # define GRPCXX_IMPL_CODEGEN_THRIFT_UTILS_H <nl> <nl> - # include < grpc / impl / codegen / byte_buffer . h > <nl> - # include < grpc / impl / codegen / byte_buffer_reader . h > <nl> - # include < grpc / impl / codegen / slice . h > <nl> - # include < grpc / impl / codegen / slice_buffer . h > <nl> # include < grpc + + / impl / codegen / config . h > <nl> # include < grpc + + / impl / codegen / core_codegen_interface . h > <nl> # include < grpc + + / impl / codegen / serialization_traits . h > <nl> # include < grpc + + / impl / codegen / status . h > <nl> # include < grpc + + / impl / codegen / status_code_enum . h > <nl> # include < grpc + + / impl / codegen / thrift_serializer . h > <nl> + # include < grpc / impl / codegen / byte_buffer . h > <nl> + # include < grpc / impl / codegen / byte_buffer_reader . h > <nl> + # include < grpc / impl / codegen / slice . h > <nl> + # include < grpc / impl / codegen / slice_buffer . h > <nl> # include < cstdint > <nl> # include < cstdlib > <nl> <nl> namespace grpc { <nl> using apache : : thrift : : util : : ThriftSerializerCompact ; <nl> <nl> template < class T > <nl> - class SerializationTraits < T , typename std : : enable_if < std : : is_base_of < apache : : thrift : : TBase , T > : : value > : : type > { <nl> + class SerializationTraits < T , typename std : : enable_if < std : : is_base_of < <nl> + apache : : thrift : : TBase , T > : : value > : : type > { <nl> public : <nl> - <nl> - static Status Serialize ( const T & msg , <nl> - grpc_byte_buffer * * bp , bool * own_buffer ) { <nl> - <nl> + static Status Serialize ( const T & msg , grpc_byte_buffer * * bp , <nl> + bool * own_buffer ) { <nl> * own_buffer = true ; <nl> <nl> ThriftSerializerCompact serializer ; <nl> - <nl> serializer . 
Serialize ( msg , bp ) ; <nl> <nl> return Status ( StatusCode : : OK , " ok " ) ; <nl> } <nl> <nl> - static Status Deserialize ( grpc_byte_buffer * buffer , <nl> - T * msg , <nl> + static Status Deserialize ( grpc_byte_buffer * buffer , T * msg , <nl> int max_message_size ) { <nl> if ( ! buffer ) { <nl> return Status ( StatusCode : : INTERNAL , " No payload " ) ; <nl> mmm a / tools / grift / Dockerfile <nl> ppp b / tools / grift / Dockerfile <nl> RUN apt - get update & & \ <nl> curl make automake libtool <nl> <nl> # Configure git <nl> - RUN git config - - global user . name " " & & \ <nl> - git config - - global user . email " " <nl> + RUN git config - - global user . name " Jenkins " & & \ <nl> + git config - - global user . email " jenkins @ grpc " <nl> <nl> RUN git clone https : / / github . com / grpc / grpc <nl> <nl> mmm a / tools / grift / README . md <nl> ppp b / tools / grift / README . md <nl> Copyright 2016 Google Inc . <nl> <nl> # Documentation <nl> <nl> - grift is integration of [ Apache Thrift ] ( https : / / github . com / apache / thrift . git ) Serializer with GRPC . <nl> + grift is integration of [ Apache Thrift ] ( https : / / github . com / apache / thrift . git ) Serializer with gRPC . <nl> <nl> This integration allows you to use grpc to send thrift messages in C + + and java . <nl> <nl> mmm a / tools / grift / grpc_plugins_generator . patch <nl> ppp b / tools / grift / grpc_plugins_generator . patch <nl> 2 . 8 . 0 . rc3 . 226 . g39d4020 <nl> <nl> <nl> - From c8577ad5513543c57a81ad1bf4927cc8a78baa03 Mon Sep 17 00 : 00 : 00 2001 <nl> + From e724d3abf096278615085bd58217321e32b43fd8 Mon Sep 17 00 : 00 : 00 2001 <nl> From : chedeti < chedeti @ google . com > <nl> Date : Sun , 31 Jul 2016 16 : 16 : 40 - 0700 <nl> Subject : [ PATCH 2 / 3 ] grpc cpp plugins generator with example <nl> Subject : [ PATCH 2 / 3 ] grpc cpp plugins generator with example <nl> tutorial / cpp / CMakeLists . txt | 53 mmm <nl> tutorial / cpp / CppClient . 
cpp | 80 mmm - - <nl> tutorial / cpp / CppServer . cpp | 181 mmmmmmmmm - <nl> - tutorial / cpp / GrpcClient . cpp | 94 pppppp <nl> - tutorial / cpp / GrpcServer . cpp | 87 ppp + + <nl> + tutorial / cpp / GriftClient . cpp | 93 pppppp <nl> + tutorial / cpp / GriftServer . cpp | 93 pppppp <nl> tutorial / cpp / Makefile . am | 66 + + - - <nl> tutorial / cpp / test . thrift | 13 + <nl> - 8 files changed , 636 insertions ( + ) , 416 deletions ( - ) <nl> + 8 files changed , 641 insertions ( + ) , 416 deletions ( - ) <nl> delete mode 100644 tutorial / cpp / CMakeLists . txt <nl> delete mode 100644 tutorial / cpp / CppClient . cpp <nl> delete mode 100644 tutorial / cpp / CppServer . cpp <nl> - create mode 100644 tutorial / cpp / GrpcClient . cpp <nl> - create mode 100644 tutorial / cpp / GrpcServer . cpp <nl> + create mode 100644 tutorial / cpp / GriftClient . cpp <nl> + create mode 100644 tutorial / cpp / GriftServer . cpp <nl> create mode 100644 tutorial / cpp / test . thrift <nl> <nl> index eafffa9 . . 0000000 <nl> - cout < < " Done . " < < endl ; <nl> - return 0 ; <nl> - } <nl> - + new file mode 100644 <nl> - index 0000000 . . ab1fe77 <nl> + index 0000000 . . 647a683 <nl> mmm / dev / null <nl> - ppp b / tutorial / cpp / GrpcClient . cpp <nl> - <nl> ppp + b / tutorial / cpp / GriftClient . cpp <nl> + <nl> + / * <nl> + * <nl> + * Copyright 2016 , Google Inc . <nl> index 0000000 . . ab1fe77 <nl> + using grpc : : ClientContext ; <nl> + using grpc : : Status ; <nl> + using test : : Greeter ; <nl> - + using namespace test ; <nl> + <nl> + class GreeterClient { <nl> + public : <nl> index 0000000 . . ab1fe77 <nl> + <nl> + return 0 ; <nl> + } <nl> - + new file mode 100644 <nl> - index 0000000 . . f63db57 <nl> + index 0000000 . . 7c01606 <nl> mmm / dev / null <nl> - ppp b / tutorial / cpp / GrpcServer . cpp <nl> - <nl> ppp + b / tutorial / cpp / GriftServer . cpp <nl> + <nl> + / * <nl> + * <nl> + * Copyright 2016 , Google Inc . <nl> index 0000000 . . 
f63db57 <nl> + using grpc : : Status ; <nl> + using test : : Greeter ; <nl> + <nl> - + using namespace grpc ; <nl> - + using namespace test ; <nl> - + <nl> + / / Logic and data behind the server ' s behavior . <nl> + class GreeterServiceImpl final : public Greeter : : Service { <nl> + + public : <nl> + + ~ GreeterServiceImpl ( ) { <nl> + + / / shutdown server <nl> + + server - > Shutdown ( ) ; <nl> + + } <nl> + + <nl> + Status SayHello ( ServerContext * context , const Greeter : : SayHelloReq * request , <nl> + Greeter : : SayHelloResp * reply ) override { <nl> + std : : string prefix ( " Hello " ) ; <nl> index 0000000 . . f63db57 <nl> + <nl> + return Status : : OK ; <nl> + } <nl> - + } ; <nl> + <nl> - + void RunServer ( ) { <nl> - + std : : string server_address ( " 0 . 0 . 0 . 0 : 50051 " ) ; <nl> - + GreeterServiceImpl service ; <nl> + + void RunServer ( ) { <nl> + + std : : string server_address ( " 0 . 0 . 0 . 0 : 50051 " ) ; <nl> + + <nl> + + ServerBuilder builder ; <nl> + + / / Listen on the given address without any authentication mechanism . <nl> + + builder . AddListeningPort ( server_address , grpc : : InsecureServerCredentials ( ) ) ; <nl> + + / / Register " service " as the instance through which we ' ll communicate with <nl> + + / / clients . In this case it corresponds to an * synchronous * service . <nl> + + builder . RegisterService ( this ) ; <nl> + + / / Finally assemble the server . <nl> + + server = builder . BuildAndStart ( ) ; <nl> + + std : : cout < < " Server listening on " < < server_address < < std : : endl ; <nl> + + <nl> + + / / Wait for the server to shutdown . Note that some other thread must be <nl> + + / / responsible for shutting down the server for this call to ever return . <nl> + + server - > Wait ( ) ; <nl> + + } <nl> + <nl> - + ServerBuilder builder ; <nl> - + / / Listen on the given address without any authentication mechanism . <nl> - + builder . 
AddListeningPort ( server_address , grpc : : InsecureServerCredentials ( ) ) ; <nl> - + / / Register " service " as the instance through which we ' ll communicate with <nl> - + / / clients . In this case it corresponds to an * synchronous * service . <nl> - + builder . RegisterService ( & service ) ; <nl> - + / / Finally assemble the server . <nl> - + std : : unique_ptr < Server > server ( builder . BuildAndStart ( ) ) ; <nl> - + std : : cout < < " Server listening on " < < server_address < < std : : endl ; <nl> - + <nl> - + / / Wait for the server to shutdown . Note that some other thread must be <nl> - + / / responsible for shutting down the server for this call to ever return . <nl> - + server - > Wait ( ) ; <nl> - + } <nl> + + private : <nl> + + std : : unique_ptr < Server > server ; <nl> + + } ; <nl> + <nl> + int main ( ) { <nl> - + RunServer ( ) ; <nl> + + GreeterServiceImpl service ; <nl> + + service . RunServer ( ) ; <nl> + <nl> + return 0 ; <nl> + } <nl> - + mmm a / tutorial / cpp / Makefile . am <nl> ppp b / tutorial / cpp / Makefile . am <nl> <nl> - TutorialServer_SOURCES = \ <nl> - CppServer . cpp <nl> + TestServer_SOURCES = \ <nl> - + GrpcServer . cpp <nl> + + GriftServer . cpp <nl> <nl> - TutorialServer_LDADD = \ <nl> - libtutorialgencpp . la \ <nl> - TutorialClient_SOURCES = \ <nl> - CppClient . cpp <nl> + TestClient_SOURCES = \ <nl> - + GrpcClient . cpp <nl> + + GriftClient . cpp <nl> <nl> - TutorialClient_LDADD = \ <nl> - libtutorialgencpp . la \ <nl> CMakeLists . txt \ <nl> - CppClient . cpp \ <nl> - CppServer . cpp <nl> - + GrpcClient . cpp \ <nl> - + GrpcServer . cpp <nl> + + GriftClient . cpp \ <nl> + + GriftServer . cpp <nl> new file mode 100644 <nl> index 0000000 . . de3c9a4 <nl> index 0000000 . . de3c9a4 <nl> 2 . 8 . 0 . rc3 . 226 . 
g39d4020 <nl> <nl> <nl> - From 096042c132126536870eea118127cf1e608969bc Mon Sep 17 00 : 00 : 00 2001 <nl> + From f991f33dd6461eae197b6ad0e7088b571f2a7b22 Mon Sep 17 00 : 00 : 00 2001 <nl> From : chedeti < chedeti @ google . com > <nl> Date : Sun , 31 Jul 2016 16 : 23 : 53 - 0700 <nl> Subject : [ PATCH 3 / 3 ] grpc java plugins generator <nl>
|
rename class variables to snake_case
|
grpc/grpc
|
82afcaa009e85cba16422b94a23387a3b7a0bd06
|
2016-08-03T23:38:05Z
|
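The commit above reformats gRPC's Thrift serializer and, per its message, renames class variables to snake_case (with the C++ style guide's trailing underscore for members, e.g. `serialize_version_`). As a hedged illustration only — not code from the commit — a minimal converter for that naming convention might look like:

```python
import re

def to_snake_case(name):
    """Convert a camelCase or PascalCase identifier to snake_case."""
    # Split before a capitalized word preceded by any character,
    # then before any remaining lower/upper boundary.
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return s.lower()

print(to_snake_case("serializeVersion"))   # serialize_version
print(to_snake_case("lastDeserialized"))   # last_deserialized
```

Appending the trailing underscore for data members (`serialize_version_`) is a separate, purely mechanical step on top of this.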
mmm a / Telegram / SourceFiles / history / history_inner_widget . cpp <nl> ppp b / Telegram / SourceFiles / history / history_inner_widget . cpp <nl> QString HistoryInner : : tooltipText ( ) const { <nl> } <nl> } <nl> if ( const auto msgsigned = view - > data ( ) - > Get < HistoryMessageSigned > ( ) ) { <nl> - if ( msgsigned - > isElided ) { <nl> + if ( msgsigned - > isElided & & ! msgsigned - > isAnonymousRank ) { <nl> dateText + = ' \ n ' + tr : : lng_signed_author ( tr : : now , lt_user , msgsigned - > author ) ; <nl> } <nl> } <nl> mmm a / Telegram / SourceFiles / history / history_item . cpp <nl> ppp b / Telegram / SourceFiles / history / history_item . cpp <nl> QString HistoryItem : : authorOriginal ( ) const { <nl> if ( const auto forwarded = Get < HistoryMessageForwarded > ( ) ) { <nl> return forwarded - > originalAuthor ; <nl> } else if ( const auto msgsigned = Get < HistoryMessageSigned > ( ) ) { <nl> - return msgsigned - > author ; <nl> + if ( ! msgsigned - > isAnonymousRank ) { <nl> + return msgsigned - > author ; <nl> + } <nl> } <nl> return QString ( ) ; <nl> } <nl> mmm a / Telegram / SourceFiles / history / history_item_components . cpp <nl> ppp b / Telegram / SourceFiles / history / history_item_components . cpp <nl> void HistoryMessageVia : : resize ( int32 availw ) const { <nl> } <nl> <nl> void HistoryMessageSigned : : refresh ( const QString & date ) { <nl> + Expects ( ! isAnonymousRank ) ; <nl> + <nl> auto name = author ; <nl> const auto time = qsl ( " , " ) + date ; <nl> const auto timew = st : : msgDateFont - > width ( time ) ; <nl> mmm a / Telegram / SourceFiles / history / history_item_components . h <nl> ppp b / Telegram / SourceFiles / history / history_item_components . 
h <nl> struct HistoryMessageSigned : public RuntimeComponent < HistoryMessageSigned , Hist <nl> void refresh ( const QString & date ) ; <nl> int maxWidth ( ) const ; <nl> <nl> - bool isElided = false ; <nl> QString author ; <nl> Ui : : Text : : String signature ; <nl> + bool isElided = false ; <nl> + bool isAnonymousRank = false ; <nl> } ; <nl> <nl> struct HistoryMessageEdited : public RuntimeComponent < HistoryMessageEdited , HistoryItem > { <nl> mmm a / Telegram / SourceFiles / history / history_message . cpp <nl> ppp b / Telegram / SourceFiles / history / history_message . cpp <nl> void HistoryMessage : : createComponents ( const CreateConfig & config ) { <nl> } <nl> if ( const auto msgsigned = Get < HistoryMessageSigned > ( ) ) { <nl> msgsigned - > author = config . author ; <nl> + msgsigned - > isAnonymousRank = author ( ) - > isMegagroup ( ) ; <nl> } <nl> setupForwardedComponent ( config ) ; <nl> if ( const auto markup = Get < HistoryMessageReplyMarkup > ( ) ) { <nl> mmm a / Telegram / SourceFiles / history / view / history_view_message . cpp <nl> ppp b / Telegram / SourceFiles / history / view / history_view_message . cpp <nl> void Message : : refreshRightBadge ( ) { <nl> return ( delegate ( ) - > elementContext ( ) = = Context : : Replies ) <nl> ? QString ( ) <nl> : tr : : lng_channel_badge ( tr : : now ) ; <nl> + } else if ( data ( ) - > author ( ) - > isMegagroup ( ) ) { <nl> + if ( const auto msgsigned = data ( ) - > Get < HistoryMessageSigned > ( ) ) { <nl> + Assert ( msgsigned - > isAnonymousRank ) ; <nl> + return msgsigned - > author ; <nl> + } <nl> } <nl> const auto channel = data ( ) - > history ( ) - > peer - > asMegagroup ( ) ; <nl> const auto user = data ( ) - > author ( ) - > asUser ( ) ; <nl> void Message : : drawInfo ( <nl> } <nl> dateX + = timeLeft ( ) ; <nl> <nl> - if ( const auto msgsigned = item - > Get < HistoryMessageSigned > ( ) ) { <nl> + if ( const auto msgsigned = item - > Get < HistoryMessageSigned > ( ) <nl> + ; msgsigned & & ! 
msgsigned - > isAnonymousRank ) { <nl> msgsigned - > signature . drawElided ( p , dateX , dateY , item - > _timeWidth ) ; <nl> } else if ( const auto edited = displayedEditBadge ( ) ) { <nl> edited - > text . drawElided ( p , dateX , dateY , item - > _timeWidth ) ; <nl> void Message : : refreshEditedBadge ( ) { <nl> edited - > refresh ( dateText , editDate ! = 0 ) ; <nl> } <nl> if ( const auto msgsigned = item - > Get < HistoryMessageSigned > ( ) ) { <nl> - const auto text = ( ! edited | | ! editDate ) <nl> - ? dateText <nl> - : edited - > text . toString ( ) ; <nl> - msgsigned - > refresh ( text ) ; <nl> + if ( ! msgsigned - > isAnonymousRank ) { <nl> + const auto text = ( ! edited | | ! editDate ) <nl> + ? dateText <nl> + : edited - > text . toString ( ) ; <nl> + msgsigned - > refresh ( text ) ; <nl> + } <nl> } <nl> initTime ( ) ; <nl> } <nl> <nl> void Message : : initTime ( ) { <nl> const auto item = message ( ) ; <nl> - if ( const auto msgsigned = item - > Get < HistoryMessageSigned > ( ) ) { <nl> + if ( const auto msgsigned = item - > Get < HistoryMessageSigned > ( ) <nl> + ; msgsigned & & ! msgsigned - > isAnonymousRank ) { <nl> item - > _timeWidth = msgsigned - > maxWidth ( ) ; <nl> } else if ( const auto edited = displayedEditBadge ( ) ) { <nl> item - > _timeWidth = edited - > maxWidth ( ) ; <nl> mmm a / Telegram / SourceFiles / history / view / media / history_view_contact . cpp <nl> ppp b / Telegram / SourceFiles / history / view / media / history_view_contact . cpp <nl> QSize Contact : : countOptimalSize ( ) { <nl> auto minHeight = 0 ; <nl> if ( _userId ) { <nl> minHeight = st : : msgFileThumbPadding . top ( ) + st : : msgFileThumbSize + st : : msgFileThumbPadding . bottom ( ) ; <nl> - if ( item - > Has < HistoryMessageSigned > ( ) <nl> + const auto msgsigned = item - > Get < HistoryMessageSigned > ( ) ; <nl> + if ( ( msgsigned & & ! 
msgsigned - > isAnonymousRank ) <nl> | | item - > Has < HistoryMessageViews > ( ) ) { <nl> minHeight + = st : : msgDateFont - > height - st : : msgDateDelta . y ( ) ; <nl> } <nl> mmm a / Telegram / SourceFiles / history / view / media / history_view_document . cpp <nl> ppp b / Telegram / SourceFiles / history / view / media / history_view_document . cpp <nl> QSize Document : : countOptimalSize ( ) { <nl> } else { <nl> minHeight = st : : msgFilePadding . top ( ) + st : : msgFileSize + st : : msgFilePadding . bottom ( ) ; <nl> } <nl> - if ( ! captioned & & ( item - > Has < HistoryMessageSigned > ( ) <nl> + const auto msgsigned = item - > Get < HistoryMessageSigned > ( ) ; <nl> + if ( ! captioned & & ( ( msgsigned & & ! msgsigned - > isAnonymousRank ) <nl> | | item - > Has < HistoryMessageViews > ( ) <nl> | | _parent - > displayEditedBadge ( ) ) ) { <nl> minHeight + = st : : msgDateFont - > height - st : : msgDateDelta . y ( ) ; <nl>
|
Show admin rank for anonymous posts .
|
telegramdesktop/tdesktop
|
bd1a46252d2adf514526c35711b46487eab695f9
|
2020-10-01T09:57:03Z
|
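The tdesktop commit above repeatedly applies one guard: a `HistoryMessageSigned` component is rendered as an author signature only when it is not an anonymous admin rank (`msgsigned->isElided && !msgsigned->isAnonymousRank`); ranks are shown as a right-side badge instead. A small Python sketch of that tooltip guard, with field names adapted from the C++ (not an actual tdesktop API):

```python
def tooltip_author(msgsigned):
    """Mirror of the patch's guard: append the author line to the tooltip
    only for an elided, real signature -- never for an anonymous admin
    rank, which the patch renders as a right-side badge instead."""
    if msgsigned is None:
        return None
    if msgsigned["is_elided"] and not msgsigned["is_anonymous_rank"]:
        return "Signed by " + msgsigned["author"]
    return None

print(tooltip_author({"author": "Alice", "is_elided": True, "is_anonymous_rank": False}))
# Signed by Alice
print(tooltip_author({"author": "Owner", "is_elided": True, "is_anonymous_rank": True}))
# None
```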
mmm a / hphp / runtime / ext / ext_phar . cpp <nl> ppp b / hphp / runtime / ext / ext_phar . cpp <nl> static const StaticString <nl> s_mtime ( " mtime " ) , <nl> s_atime ( " atime " ) , <nl> s_ctime ( " ctime " ) , <nl> + s_mode ( " mode " ) , <nl> s_opendir ( " opendir " ) ; <nl> <nl> static class PharStreamWrapper : public Stream : : Wrapper { <nl> static class PharStreamWrapper : public Stream : : Wrapper { <nl> buf - > st_atime = stat [ s_atime ] . asInt64Val ( ) ; <nl> buf - > st_mtime = stat [ s_mtime ] . asInt64Val ( ) ; <nl> buf - > st_ctime = stat [ s_ctime ] . asInt64Val ( ) ; <nl> + buf - > st_mode = stat [ s_mode ] . asInt64Val ( ) ; <nl> return 0 ; <nl> } <nl> <nl> mmm a / hphp / runtime / ext / ext_posix . cpp <nl> ppp b / hphp / runtime / ext / ext_posix . cpp <nl> namespace HPHP { <nl> IMPLEMENT_DEFAULT_EXTENSION ( posix ) ; <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> + const int64_t k_POSIX_S_IFMT = S_IFMT ; <nl> + const int64_t k_POSIX_S_IFSOCK = S_IFSOCK ; <nl> + const int64_t k_POSIX_S_IFLNK = S_IFLNK ; <nl> + const int64_t k_POSIX_S_IFREG = S_IFREG ; <nl> + const int64_t k_POSIX_S_IFBLK = S_IFBLK ; <nl> + const int64_t k_POSIX_S_IFDIR = S_IFDIR ; <nl> + const int64_t k_POSIX_S_IFCHR = S_IFCHR ; <nl> + const int64_t k_POSIX_S_IFIFO = S_IFIFO ; <nl> + const int64_t k_POSIX_S_ISUID = S_ISUID ; <nl> + const int64_t k_POSIX_S_ISGID = S_ISGID ; <nl> + const int64_t k_POSIX_S_ISVTX = S_ISVTX ; <nl> + const int64_t k_POSIX_S_IRWXU = S_IRWXU ; <nl> + const int64_t k_POSIX_S_IRUSR = S_IRUSR ; <nl> + const int64_t k_POSIX_S_IWUSR = S_IWUSR ; <nl> + const int64_t k_POSIX_S_IXUSR = S_IXUSR ; <nl> + const int64_t k_POSIX_S_IRWXG = S_IRWXG ; <nl> + const int64_t k_POSIX_S_IRGRP = S_IRGRP ; <nl> + const int64_t k_POSIX_S_IWGRP = S_IWGRP ; <nl> + const int64_t k_POSIX_S_IXGRP = S_IXGRP ; <nl> + const int64_t k_POSIX_S_IRWXO = S_IRWXO 
; <nl> + const int64_t k_POSIX_S_IROTH = S_IROTH ; <nl> + const int64_t k_POSIX_S_IWOTH = S_IWOTH ; <nl> + const int64_t k_POSIX_S_IXOTH = S_IXOTH ; <nl> + const int64_t k_POSIX_F_OK = F_OK ; <nl> + const int64_t k_POSIX_X_OK = X_OK ; <nl> + const int64_t k_POSIX_W_OK = W_OK ; <nl> + const int64_t k_POSIX_R_OK = R_OK ; <nl> + <nl> bool f_posix_access ( CStrRef file , int mode / * = 0 * / ) { <nl> String path = File : : TranslatePath ( file ) ; <nl> if ( path . empty ( ) ) { <nl> mmm a / hphp / runtime / ext / ext_posix . h <nl> ppp b / hphp / runtime / ext / ext_posix . h <nl> <nl> namespace HPHP { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> + extern const int64_t k_POSIX_S_IFMT ; <nl> + extern const int64_t k_POSIX_S_IFSOCK ; <nl> + extern const int64_t k_POSIX_S_IFLNK ; <nl> + extern const int64_t k_POSIX_S_IFREG ; <nl> + extern const int64_t k_POSIX_S_IFBLK ; <nl> + extern const int64_t k_POSIX_S_IFDIR ; <nl> + extern const int64_t k_POSIX_S_IFCHR ; <nl> + extern const int64_t k_POSIX_S_IFIFO ; <nl> + extern const int64_t k_POSIX_S_ISUID ; <nl> + extern const int64_t k_POSIX_S_ISGID ; <nl> + extern const int64_t k_POSIX_S_ISVTX ; <nl> + extern const int64_t k_POSIX_S_IRWXU ; <nl> + extern const int64_t k_POSIX_S_IRUSR ; <nl> + extern const int64_t k_POSIX_S_IWUSR ; <nl> + extern const int64_t k_POSIX_S_IXUSR ; <nl> + extern const int64_t k_POSIX_S_IRWXG ; <nl> + extern const int64_t k_POSIX_S_IRGRP ; <nl> + extern const int64_t k_POSIX_S_IWGRP ; <nl> + extern const int64_t k_POSIX_S_IXGRP ; <nl> + extern const int64_t k_POSIX_S_IRWXO ; <nl> + extern const int64_t k_POSIX_S_IROTH ; <nl> + extern const int64_t k_POSIX_S_IWOTH ; <nl> + extern const int64_t k_POSIX_S_IXOTH ; <nl> + extern const int64_t k_POSIX_F_OK ; <nl> + extern const int64_t k_POSIX_X_OK ; <nl> + extern const int64_t k_POSIX_W_OK ; <nl> + extern const int64_t 
k_POSIX_R_OK ; <nl> + <nl> bool f_posix_access ( CStrRef file , int mode = 0 ) ; <nl> <nl> String f_posix_ctermid ( ) ; <nl> mmm a / hphp / system / idl / constants . idl . json <nl> ppp b / hphp / system / idl / constants . idl . json <nl> <nl> " name " : " PNG_NO_FILTER " , <nl> " value " : 0 <nl> } , <nl> - { <nl> - " name " : " POSIX_F_OK " , <nl> - " value " : 0 <nl> - } , <nl> - { <nl> - " name " : " POSIX_R_OK " , <nl> - " value " : 4 <nl> - } , <nl> - { <nl> - " name " : " POSIX_S_IFBLK " , <nl> - " value " : 24576 <nl> - } , <nl> - { <nl> - " name " : " POSIX_S_IFCHR " , <nl> - " value " : 8192 <nl> - } , <nl> - { <nl> - " name " : " POSIX_S_IFIFO " , <nl> - " value " : 4096 <nl> - } , <nl> - { <nl> - " name " : " POSIX_S_IFREG " , <nl> - " value " : 32768 <nl> - } , <nl> - { <nl> - " name " : " POSIX_S_IFSOCK " , <nl> - " value " : 49152 <nl> - } , <nl> - { <nl> - " name " : " POSIX_W_OK " , <nl> - " value " : 2 <nl> - } , <nl> - { <nl> - " name " : " POSIX_X_OK " , <nl> - " value " : 1 <nl> - } , <nl> { <nl> " name " : " PREG_BACKTRACK_LIMIT_ERROR " , <nl> " value " : 2 <nl> mmm a / hphp / system / idl / posix . idl . json <nl> ppp b / hphp / system / idl / posix . idl . 
json <nl> <nl> { <nl> " preamble " : " " , <nl> " consts " : [ <nl> + { <nl> + " name " : " POSIX_S_IFMT " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IFSOCK " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IFLNK " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IFREG " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IFBLK " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IFDIR " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IFCHR " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IFIFO " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_ISUID " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_ISGID " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_ISVTX " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IRWXU " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IRUSR " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IWUSR " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IXUSR " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IRWXG " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IRGRP " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IWGRP " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IXGRP " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IRWXO " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IROTH " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IWOTH " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_S_IXOTH " 
, <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_F_OK " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_X_OK " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_W_OK " , <nl> + " type " : " Int64 " <nl> + } , <nl> + { <nl> + " name " : " POSIX_R_OK " , <nl> + " type " : " Int64 " <nl> + } <nl> ] , <nl> " funcs " : [ <nl> { <nl> <nl> ] , <nl> " classes " : [ <nl> ] <nl> - } <nl> \ No newline at end of file <nl> + } <nl> mmm a / hphp / system / php / phar / Phar . php <nl> ppp b / hphp / system / php / phar / Phar . php <nl> private static function resolveDotDots ( $ pieces ) { <nl> private static function stat ( $ full_filename ) { <nl> list ( $ phar , $ filename ) = self : : getPharAndFile ( $ full_filename ) ; <nl> if ( ! isset ( $ phar - > fileInfo [ $ filename ] ) ) { <nl> - return false ; <nl> + $ dir = self : : opendir ( $ full_filename ) ; <nl> + if ( ! $ dir ) { <nl> + return false ; <nl> + } <nl> + <nl> + return array ( <nl> + ' size ' = > 0 , <nl> + ' atime ' = > 0 , <nl> + ' mtime ' = > 0 , <nl> + ' ctime ' = > 0 , <nl> + ' mode ' = > POSIX_S_IFDIR , <nl> + ) ; <nl> } <nl> + <nl> $ info = $ phar - > fileInfo [ $ filename ] ; <nl> return array ( <nl> ' size ' = > $ info [ 0 ] , <nl> ' atime ' = > $ info [ 1 ] , <nl> ' mtime ' = > $ info [ 1 ] , <nl> ' ctime ' = > $ info [ 1 ] , <nl> + ' mode ' = > POSIX_S_IFREG , <nl> ) ; <nl> } <nl> <nl>
|
expose more POSIX constants
|
facebook/hhvm
|
51e938d203ac4579d81007c1d995bb1ff001e855
|
2013-09-11T18:39:51Z
|
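The HHVM commit above replaces hard-coded POSIX mode constants with the platform's `S_IF*` macros and adds a `mode` field to phar's `stat()` result. Python's `stat` module exposes the same constants, so the file-type masking the commit enables can be demonstrated directly:

```python
import stat

# These numeric values match the ones removed from constants.idl.json above:
assert stat.S_IFREG == 32768
assert stat.S_IFBLK == 24576
assert stat.S_IFSOCK == 49152

def file_type(st_mode):
    """Classify an st_mode value the way a consumer of phar's stat()
    would: mask out the type bits with S_IFMT, then compare."""
    kind = stat.S_IFMT(st_mode)
    return {stat.S_IFDIR: "dir", stat.S_IFREG: "file"}.get(kind, "other")

print(file_type(stat.S_IFDIR))           # dir
print(file_type(stat.S_IFREG | 0o644))   # file
```

This is exactly why the PHP side of the commit returns `POSIX_S_IFDIR` for directories and `POSIX_S_IFREG` for regular entries: callers distinguish them by masking with `S_IFMT`.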
mmm a / vnpy / app / spread_trading / template . py <nl> ppp b / vnpy / app / spread_trading / template . py <nl> def send_email ( self , msg : str ) : <nl> Send email to default receiver . <nl> " " " <nl> if self . inited : <nl> - self . strategy_engine . send_strategy_email ( msg , self ) <nl> + self . strategy_engine . send_email ( msg , self ) <nl> <nl> def load_bar ( <nl> self , <nl>
|
Revert " Merge pull request from noranhe / dev "
|
vnpy/vnpy
|
39cd5d0f80555de24489ff6c27971f7c171328cc
|
2020-09-27T00:31:04Z
|
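The vnpy revert above restores the template's call to the engine method `send_email` (rather than `send_strategy_email`). The delegation pattern involved is simple; a minimal sketch with stand-in classes (not vnpy's real API surface) looks like:

```python
class StrategyEngine:
    """Stand-in for vnpy's spread-trading engine; just records the call."""
    def send_email(self, msg, strategy=None):
        self.last_sent = (msg, strategy)

class SpreadStrategyTemplate:
    """Minimal sketch of the template's delegation, assuming the engine
    exposes `send_email` -- the name the revert above restores."""
    def __init__(self, engine):
        self.strategy_engine = engine
        self.inited = True

    def send_email(self, msg):
        # Only forward once the strategy has been initialized.
        if self.inited:
            self.strategy_engine.send_email(msg, self)

engine = StrategyEngine()
SpreadStrategyTemplate(engine).send_email("spread alert")
print(engine.last_sent[0])  # spread alert
```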
mmm a / tensorflow / workspace . bzl <nl> ppp b / tensorflow / workspace . bzl <nl> def tf_workspace ( path_prefix = " " , tf_repo_name = " " ) : <nl> temp_workaround_http_archive ( <nl> name = " llvm " , <nl> urls = [ <nl> - " http : / / bazel - mirror . storage . googleapis . com / github . com / llvm - mirror / llvm / archive / 53a96f264ef1148873c2c08bededc4c04a17078c . tar . gz " , <nl> - " https : / / github . com / llvm - mirror / llvm / archive / 53a96f264ef1148873c2c08bededc4c04a17078c . tar . gz " , <nl> + " http : / / bazel - mirror . storage . googleapis . com / github . com / llvm - mirror / llvm / archive / 5d2b26453d4bca5a13b69b0130e4369d1fcd393d . tar . gz " , <nl> + " https : / / github . com / llvm - mirror / llvm / archive / 5d2b26453d4bca5a13b69b0130e4369d1fcd393d . tar . gz " , <nl> ] , <nl> - sha256 = " 0ffea06dc2f6565dfc1ae2d9fef7f2c55ee561c9343fc7c1d6306cd8cfbe76b0 " , <nl> - strip_prefix = " llvm - 53a96f264ef1148873c2c08bededc4c04a17078c " , <nl> + sha256 = " 3cecf39bf4b3854629d610bb321bb57e0e46bda9110bd51c3bae5a4171c82bab " , <nl> + strip_prefix = " llvm - 5d2b26453d4bca5a13b69b0130e4369d1fcd393d " , <nl> build_file = str ( Label ( " / / third_party / llvm : llvm . BUILD " ) ) , <nl> repository = tf_repo_name , <nl> ) <nl>
|
Update LLVM version to upstream revision r298633 .
|
tensorflow/tensorflow
|
494fe43926f8928bb3b10c1729abea9be5113f21
|
2017-03-24T00:45:52Z
|
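The TensorFlow commit above bumps the pinned LLVM revision in `workspace.bzl`, which means updating three fields in lockstep: the archive URLs, the `strip_prefix`, and the `sha256` that Bazel verifies after download. The digest itself is an ordinary SHA-256 of the tarball bytes, which can be reproduced like this (the empty-input digest below is just a well-known stand-in; a real update would hash the downloaded `.tar.gz`):

```python
import hashlib

def archive_sha256(data):
    """Compute the hex digest Bazel compares against the `sha256`
    attribute of an http_archive rule when fetching a tarball."""
    return hashlib.sha256(data).hexdigest()

print(archive_sha256(b""))
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

Pinning the digest makes the fetch reproducible and tamper-evident: if the mirror serves different bytes than the ones hashed when the revision was bumped, the build fails instead of silently using them.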
mmm a / Code / CryEngine / CryCommon / CryEntitySystem / IEntityClass . h <nl> ppp b / Code / CryEngine / CryCommon / CryEntitySystem / IEntityClass . h <nl> enum EEntityClassFlags <nl> ECLF_DO_NOT_SPAWN_AS_STATIC = BIT ( 3 ) , / / ! < If set the entity of this class stored as part of the level won ' t be assigned a static id on creation . <nl> ECLF_MODIFY_EXISTING = BIT ( 4 ) , / / ! < If set modify an existing class with the same name . <nl> ECLF_SEND_SCRIPT_EVENTS_FROM_FLOWGRAPH = BIT ( 5 ) , / / ! < If set send script events to entity from Flowgraph . <nl> - ECLF_ENTITY_ARCHETYPE = BIT ( 6 ) / / ! < If set this indicate the entity class is actually entity archetype . <nl> + ECLF_ENTITY_ARCHETYPE = BIT ( 6 ) , / / ! < If set this indicate the entity class is actually entity archetype . <nl> + ECLF_CREATE_PER_CLIENT = BIT ( 7 ) / / ! < If set , an instance of this class will be created for each connecting client <nl> } ; <nl> <nl> struct IEntityClassRegistryListener ; <nl> mmm a / Code / CryEngine / CryCommon / CrySchematyc / Reflection / DefaultTypeReflection . inl <nl> ppp b / Code / CryEngine / CryCommon / CrySchematyc / Reflection / DefaultTypeReflection . inl <nl> inline void ExplicitEntityIdToString ( IString & output , const ExplicitEntityId & in <nl> inline void ReflectType ( CTypeDesc < ExplicitEntityId > & desc ) <nl> { <nl> desc . SetGUID ( " 00782e22 - 3188 - 4538 - b4f2 - 8749b8a9dc48 " _cry_guid ) ; <nl> - desc . SetLabel ( " EntityId " ) ; <nl> - desc . SetDescription ( " Entity Identifier - Uniquely identifies an entity over the current session on a machine , is not unique over the network . " ) ; <nl> + desc . SetLabel ( " Entity " ) ; <nl> + desc . SetDescription ( " An entity instance present in the current scene " ) ; <nl> desc . SetDefaultValue ( ExplicitEntityId : : Invalid ) ; <nl> desc . SetToStringOperator < & ExplicitEntityIdToString > ( ) ; <nl> } <nl> mmm a / Code / CryEngine / CryEntitySystem / Entity . 
cpp <nl> ppp b / Code / CryEngine / CryEntitySystem / Entity . cpp <nl> void CEntity : : CreateSchematycObject ( const SEntitySpawnParams & spawnParams ) <nl> { <nl> if ( m_simulationMode ! = EEntitySimulationMode : : Idle ) <nl> { <nl> - m_pSchematycObject - > SetSimulationMode ( m_simulationMode , Schematyc : : EObjectSimulationUpdatePolicy : : OnChangeOnly , false ) ; <nl> + m_pSchematycObject - > SetSimulationMode ( m_simulationMode , Schematyc : : EObjectSimulationUpdatePolicy : : OnChangeOnly , m_simulationMode = = EEntitySimulationMode : : Game ) ; <nl> } <nl> } <nl> } <nl> } <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + void CEntity : : SetSimulationMode ( EEntitySimulationMode mode ) <nl> + { <nl> + m_simulationMode = mode ; <nl> + m_pSchematycObject - > SetSimulationMode ( m_simulationMode , Schematyc : : EObjectSimulationUpdatePolicy : : OnChangeOnly , m_simulationMode = = EEntitySimulationMode : : Game ) ; <nl> + } <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> Matrix34 CEntity : : GetParentAttachPointWorldTM ( ) const <nl> { <nl> mmm a / Code / CryEngine / CryEntitySystem / Entity . h <nl> ppp b / Code / CryEngine / CryEntitySystem / Entity . h <nl> class CEntity : public IEntity <nl> virtual EEntitySimulationMode GetSimulationMode ( ) const final { return m_simulationMode ; } ; <nl> / / ~ IEntity <nl> <nl> + void SetSimulationMode ( EEntitySimulationMode mode ) ; <nl> + <nl> void ShutDownComponent ( IEntityComponent * pComponent ) ; <nl> <nl> CEntityComponentsVector & GetComponentsVector ( ) { return m_components ; } ; <nl> mmm a / Code / CryEngine / CryEntitySystem / EntityClassRegistry . cpp <nl> ppp b / Code / CryEngine / CryEntitySystem / EntityClassRegistry . cpp <nl> <nl> # include " EntityClassRegistry . 
h " <nl> # include " EntityClass . h " <nl> # include " EntityScript . h " <nl> + # include " EntitySystem . h " <nl> + # include " Entity . h " <nl> # include < CrySystem / File / CryFile . h > <nl> # include < CrySchematyc / CoreAPI . h > <nl> + # include < CryGame / IGameFramework . h > <nl> <nl> struct SSchematycEntityClassProperties <nl> { <nl> - SSchematycEntityClassProperties ( ) <nl> - : icon ( " % EDITOR % / objecticons / schematyc . bmp " ) <nl> - , bHideInEditor ( false ) <nl> - , bTriggerAreas ( true ) <nl> - { } <nl> + SSchematycEntityClassProperties ( ) = default ; <nl> <nl> void Serialize ( Serialization : : IArchive & archive ) <nl> { <nl> struct SSchematycEntityClassProperties <nl> archive . doc ( " Hide entity class in editor " ) ; <nl> archive ( bTriggerAreas , " bTriggerAreas " , " Trigger Areas " ) ; <nl> archive . doc ( " Entity can enter and trigger areas " ) ; <nl> + archive ( bCreatePerClient , " bCreatePerClient " , " Create per Client " ) ; <nl> + archive . doc ( " Automatically spawns an instance of this class with each client that connects to the server " ) ; <nl> } <nl> <nl> static void ReflectType ( Schematyc : : CTypeDesc < SSchematycEntityClassProperties > & desc ) <nl> struct SSchematycEntityClassProperties <nl> } <nl> <nl> / / class properties members <nl> - string icon ; <nl> - bool bHideInEditor ; <nl> - bool bTriggerAreas ; <nl> + string icon = " % EDITOR % / objecticons / schematyc . 
bmp " ; <nl> + bool bHideInEditor = false ; <nl> + bool bTriggerAreas = true ; <nl> + bool bCreatePerClient = false ; <nl> } ; <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> CEntityClassRegistry : : CEntityClassRegistry ( ) <nl> , m_listeners ( 2 ) <nl> { <nl> m_pSystem = GetISystem ( ) ; <nl> + <nl> + gEnv - > pGameFramework - > AddNetworkedClientListener ( * this ) ; <nl> } <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> CEntityClassRegistry : : ~ CEntityClassRegistry ( ) <nl> { <nl> + gEnv - > pGameFramework - > RemoveNetworkedClientListener ( * this ) ; <nl> } <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> class CSchematycEntityClassPreviewer : public Schematyc : : IObjectPreviewer <nl> <nl> virtual Schematyc : : ObjectId CreateObject ( const CryGUID & classGUID ) const override <nl> { <nl> - IEntityClass * pEntityClass = gEnv - > pEntitySystem - > GetClassRegistry ( ) - > FindClassByGUID ( classGUID ) ; <nl> + IEntityClass * pEntityClass = g_pIEntitySystem - > GetClassRegistry ( ) - > FindClassByGUID ( classGUID ) ; <nl> if ( pEntityClass ) <nl> { <nl> / / Spawn entity for preview <nl> class CSchematycEntityClassPreviewer : public Schematyc : : IObjectPreviewer <nl> params . pClass = pEntityClass ; <nl> params . sName = " Schematyc Preview Entity " ; <nl> params . 
nFlagsExtended | = ENTITY_FLAG_EXTENDED_PREVIEW ; <nl> - IEntity * pEntity = gEnv - > pEntitySystem - > SpawnEntity ( params ) ; <nl> + IEntity * pEntity = g_pIEntitySystem - > SpawnEntity ( params ) ; <nl> if ( pEntity & & pEntity - > GetSchematycObject ( ) ) <nl> { <nl> m_objectId = pEntity - > GetSchematycObject ( ) - > GetId ( ) ; <nl> class CSchematycEntityClassPreviewer : public Schematyc : : IObjectPreviewer <nl> return ; <nl> if ( pObject - > GetEntity ( ) ) <nl> { <nl> - gEnv - > pEntitySystem - > RemoveEntity ( pObject - > GetEntity ( ) - > GetId ( ) ) ; <nl> + g_pIEntitySystem - > RemoveEntity ( pObject - > GetEntity ( ) - > GetId ( ) ) ; <nl> } <nl> m_objectId = Schematyc : : ObjectId : : Invalid ; <nl> } ; <nl> void CEntityClassRegistry : : OnSchematycClassCompilation ( const Schematyc : : IRuntime <nl> <nl> bool bModifyExisting = false ; <nl> <nl> - IEntityClass * pEntityClass = gEnv - > pEntitySystem - > GetClassRegistry ( ) - > FindClass ( className ) ; <nl> + IEntityClass * pEntityClass = g_pIEntitySystem - > GetClassRegistry ( ) - > FindClass ( className ) ; <nl> if ( pEntityClass ) <nl> { <nl> if ( pEntityClass - > GetGUID ( ) ! = runtimeClass . GetGUID ( ) ) <nl> void CEntityClassRegistry : : OnSchematycClassCompilation ( const Schematyc : : IRuntime <nl> entityClassDesc . flags | = ECLF_INVISIBLE ; <nl> } <nl> <nl> + if ( classProperties . bCreatePerClient ) <nl> + { <nl> + entityClassDesc . flags | = ECLF_CREATE_PER_CLIENT ; <nl> + } <nl> + <nl> entityClassDesc . editorClassInfo . sCategory = " Schematyc " ; <nl> entityClassDesc . editorClassInfo . sIcon = icon . c_str ( ) ; <nl> - gEnv - > pEntitySystem - > GetClassRegistry ( ) - > RegisterStdClass ( entityClassDesc ) ; <nl> + g_pIEntitySystem - > GetClassRegistry ( ) - > RegisterStdClass ( entityClassDesc ) ; <nl> } <nl> } <nl> <nl> void CEntityClassRegistry : : UnregisterSchematycEntityClass ( ) <nl> gEnv - > pSchematyc - > GetEnvRegistry ( ) . 
DeregisterPackage ( EntityPackageGUID ) ; <nl> } <nl> } <nl> + <nl> + bool CEntityClassRegistry : : OnClientConnectionReceived ( int channelId , bool bIsReset ) <nl> + { <nl> + for ( const std : : pair < string , IEntityClass * > & classPair : m_mapClassName ) <nl> + { <nl> + if ( ( classPair . second - > GetFlags ( ) & ECLF_CREATE_PER_CLIENT ) ! = 0 ) <nl> + { <nl> + / / Connection received from a client , create a player entity and component <nl> + SEntitySpawnParams spawnParams ; <nl> + spawnParams . pClass = classPair . second ; <nl> + spawnParams . sName = " Client " ; <nl> + spawnParams . nFlags | = ENTITY_FLAG_NEVER_NETWORK_STATIC ; <nl> + <nl> + / / Set local player details <nl> + if ( channelId = = 1 & & ! gEnv - > IsDedicated ( ) & & g_pIEntitySystem - > GetEntityFromID ( LOCAL_PLAYER_ENTITY_ID ) = = nullptr ) <nl> + { <nl> + spawnParams . id = LOCAL_PLAYER_ENTITY_ID ; <nl> + spawnParams . nFlags | = ENTITY_FLAG_LOCAL_PLAYER ; <nl> + } <nl> + <nl> + if ( CEntity * pClientEntity = static_cast < CEntity * > ( g_pIEntitySystem - > SpawnEntity ( spawnParams ) ) ) <nl> + { <nl> + / / Set the local player entity channel id , and bind it to the network so that it can support Multiplayer contexts <nl> + pClientEntity - > GetNetEntity ( ) - > SetChannelId ( channelId ) ; <nl> + pClientEntity - > GetNetEntity ( ) - > BindToNetwork ( ) ; <nl> + <nl> + / / channelId starts at 1 , we want an index <nl> + uint32 clientIndex = channelId - 1 ; <nl> + <nl> + if ( m_channelEntityInstances . size ( ) < = clientIndex ) <nl> + { <nl> + m_channelEntityInstances . resize ( clientIndex + 1 ) ; <nl> + } <nl> + <nl> + / / Push the entity into our map , with the channel id as the key <nl> + m_channelEntityInstances [ clientIndex ] . 
push_back ( pClientEntity - > GetId ( ) ) ; <nl> + } <nl> + } <nl> + } <nl> + <nl> + return true ; <nl> + } <nl> + <nl> + bool CEntityClassRegistry : : OnClientReadyForGameplay ( int channelId , bool bIsReset ) <nl> + { <nl> + / / channelId starts at 1 , we want an index <nl> + uint32 clientIndex = channelId - 1 ; <nl> + <nl> + if ( m_channelEntityInstances . size ( ) > clientIndex ) <nl> + { <nl> + for ( EntityId entityId : m_channelEntityInstances [ clientIndex ] ) <nl> + { <nl> + if ( CEntity * pClientEntity = g_pIEntitySystem - > GetEntityFromID ( entityId ) ) <nl> + { <nl> + pClientEntity - > SetSimulationMode ( EEntitySimulationMode : : Game ) ; <nl> + } <nl> + } <nl> + <nl> + } <nl> + <nl> + return true ; <nl> + } <nl> + <nl> + void CEntityClassRegistry : : OnClientDisconnected ( int channelId , EDisconnectionCause cause , const char * description , bool bKeepClient ) <nl> + { <nl> + / / channelId starts at 1 , we want an index <nl> + uint32 clientIndex = channelId - 1 ; <nl> + <nl> + if ( m_channelEntityInstances . size ( ) < = ( clientIndex + 1 ) ) <nl> + { <nl> + for ( EntityId entityId : m_channelEntityInstances [ clientIndex ] ) <nl> + { <nl> + g_pIEntitySystem - > RemoveEntity ( entityId ) ; <nl> + } <nl> + <nl> + m_channelEntityInstances [ clientIndex ] . clear ( ) ; <nl> + } <nl> + } <nl> \ No newline at end of file <nl> mmm a / Code / CryEngine / CryEntitySystem / EntityClassRegistry . h <nl> ppp b / Code / CryEngine / CryEntitySystem / EntityClassRegistry . h <nl> <nl> # include < CryEntitySystem / IEntityClass . h > <nl> # include < CrySchematyc / Utils / ScopedConnection . h > <nl> <nl> + # include < CryNetwork / INetwork . h > <nl> + <nl> namespace Schematyc <nl> { <nl> struct IRuntimeClass ; <nl> namespace Schematyc <nl> / / Description : <nl> / / Standard implementation of the IEntityClassRegistry interface . 
<nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - class CEntityClassRegistry final : public IEntityClassRegistry <nl> + class CEntityClassRegistry final <nl> + : public IEntityClassRegistry <nl> + , public INetworkedClientListener <nl> { <nl> public : <nl> CEntityClassRegistry ( ) ; <nl> class CEntityClassRegistry final : public IEntityClassRegistry <nl> } <nl> / / ~ IEntityClassRegistry <nl> <nl> + / / INetworkedClientListener <nl> + virtual void OnLocalClientDisconnected ( EDisconnectionCause cause , const char * description ) override { } <nl> + virtual bool OnClientConnectionReceived ( int channelId , bool bIsReset ) override ; <nl> + virtual bool OnClientReadyForGameplay ( int channelId , bool bIsReset ) override ; <nl> + virtual void OnClientDisconnected ( int channelId , EDisconnectionCause cause , const char * description , bool bKeepClient ) override ; <nl> + virtual bool OnClientTimingOut ( int channelId , EDisconnectionCause cause , const char * description ) override { return true ; } <nl> + / / ~ INetworkedClientListener <nl> + <nl> private : <nl> void LoadArchetypeDescription ( const XmlNodeRef & root ) ; <nl> void LoadClassDescription ( const XmlNodeRef & root , bool bOnlyNewClasses ) ; <nl> class CEntityClassRegistry final : public IEntityClassRegistry <nl> typedef std : : map < string , IEntityClass * > ClassNameMap ; <nl> ClassNameMap m_mapClassName ; <nl> <nl> + std : : vector < std : : vector < EntityId > > m_channelEntityInstances ; <nl> + <nl> std : : map < CryGUID , IEntityClass * > m_mapClassGUIDs ; <nl> <nl> IEntityClass * m_pDefaultClass ; <nl> mmm a / Code / CryEngine / CryEntitySystem / Schematyc / EntitySchematycUtilFunctions . h <nl> ppp b / Code / CryEngine / CryEntitySystem / Schematyc / EntitySchematycUtilFunctions . h <nl> <nl> / / Copyright 2001 - 2016 Crytek GmbH / Crytek Group . All rights reserved . 
<nl> <nl> - # include < CryEntitySystem / IEntity . h > <nl> - # include < CryEntitySystem / IEntitySystem . h > <nl> + # include " Entity . h " <nl> + # include " EntitySystem . h " <nl> # include < CrySerialization / Forward . h > <nl> # include < CryMath / Cry_Camera . h > <nl> <nl> namespace Entity <nl> { <nl> CSharedString GetEntityName ( ExplicitEntityId entityId ) <nl> { <nl> - const IEntity * pEntity = gEnv - > pEntitySystem - > GetEntity ( static_cast < EntityId > ( entityId ) ) ; <nl> + const CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ; <nl> return pEntity ? CSharedString ( pEntity - > GetName ( ) ) : CSharedString ( ) ; <nl> } <nl> <nl> ObjectId GetEntityObjectId ( ExplicitEntityId entityId ) <nl> { <nl> - const IEntity * pEntity = gEnv - > pEntitySystem - > GetEntity ( static_cast < EntityId > ( entityId ) ) ; <nl> + const CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ; <nl> return ( pEntity & & pEntity - > GetSchematycObject ( ) ) ? pEntity - > GetSchematycObject ( ) - > GetId ( ) : ObjectId : : Invalid ; <nl> } <nl> <nl> - CTransform GetEntityTransform ( ExplicitEntityId entityId ) <nl> + ExplicitEntityId FindEntityByName ( CSharedString name ) <nl> { <nl> - const IEntity * pEntity = gEnv - > pEntitySystem - > GetEntity ( static_cast < EntityId > ( entityId ) ) ; <nl> - return pEntity ? CTransform ( pEntity - > GetWorldTM ( ) ) : CTransform ( ) ; <nl> + const CEntity * pEntity = static_cast < CEntity * > ( g_pIEntitySystem - > FindEntityByName ( name . c_str ( ) ) ) ; <nl> + return ExplicitEntityId ( pEntity ? pEntity - > GetId ( ) : INVALID_ENTITYID ) ; <nl> } <nl> <nl> - CRotation GetEntityRotation ( ExplicitEntityId entityId ) <nl> + void Hide ( ExplicitEntityId entityId , bool bHide ) <nl> { <nl> - const IEntity * pEntity = gEnv - > pEntitySystem - > GetEntity ( static_cast < EntityId > ( entityId ) ) ; <nl> - return pEntity ? 
CRotation ( pEntity - > GetWorldRotation ( ) ) : CRotation ( ) ; <nl> + if ( CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + pEntity - > Hide ( bHide ) ; <nl> + } <nl> + } <nl> + <nl> + bool IsHidden ( ExplicitEntityId entityId ) <nl> + { <nl> + if ( const CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + return pEntity - > IsHidden ( ) ; <nl> + } <nl> + <nl> + return true ; <nl> + } <nl> + <nl> + void SetTransform ( ExplicitEntityId entityId , const CryTransform : : CTransform & transform ) <nl> + { <nl> + if ( CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + pEntity - > SetWorldTM ( transform . ToMatrix34 ( ) ) ; <nl> + } <nl> } <nl> <nl> - Vec3 GetEntityPosition ( ExplicitEntityId entityId ) <nl> + CryTransform : : CTransform GetTransform ( ExplicitEntityId entityId ) <nl> { <nl> - const IEntity * pEntity = gEnv - > pEntitySystem - > GetEntity ( static_cast < EntityId > ( entityId ) ) ; <nl> - return pEntity ? pEntity - > GetWorldPos ( ) : ZERO ; <nl> + if ( const CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + return CryTransform : : CTransform ( pEntity - > GetWorldTM ( ) ) ; <nl> + } <nl> + <nl> + return CryTransform : : CTransform ( ) ; <nl> + } <nl> + <nl> + void SetRotation ( ExplicitEntityId entityId , const CryTransform : : CRotation & rotation ) <nl> + { <nl> + if ( CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + Matrix34 transform = pEntity - > GetWorldTM ( ) ; <nl> + transform . SetRotation33 ( rotation . 
ToMatrix33 ( ) ) ; <nl> + pEntity - > SetWorldTM ( transform ) ; <nl> + } <nl> + } <nl> + <nl> + CryTransform : : CRotation GetRotation ( ExplicitEntityId entityId ) <nl> + { <nl> + if ( const CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + return CryTransform : : CRotation ( pEntity - > GetWorldRotation ( ) ) ; <nl> + } <nl> + <nl> + return CryTransform : : CRotation ( ) ; <nl> + } <nl> + <nl> + void SetPosition ( ExplicitEntityId entityId , const Vec3 & position ) <nl> + { <nl> + if ( CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + Matrix34 transform = pEntity - > GetWorldTM ( ) ; <nl> + transform . SetTranslation ( position ) ; <nl> + pEntity - > SetWorldTM ( transform ) ; <nl> + } <nl> + } <nl> + <nl> + Vec3 GetPosition ( ExplicitEntityId entityId ) <nl> + { <nl> + if ( const CEntity * pEntity = g_pIEntitySystem - > GetEntityFromID ( static_cast < EntityId > ( entityId ) ) ) <nl> + { <nl> + return pEntity - > GetWorldPos ( ) ; <nl> + } <nl> + <nl> + return ZERO ; <nl> } <nl> <nl> static void RegisterUtilFunctions ( IEnvRegistrar & registrar ) <nl> { <nl> - CEnvRegistrationScope scope = registrar . Scope ( IEntity : : GetEntityScopeGUID ( ) ) ; <nl> + CEnvRegistrationScope scope = registrar . Scope ( GetTypeDesc < ExplicitEntityId > ( ) . GetGUID ( ) ) ; <nl> { <nl> auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & GetEntityName , " 955ca6c4 - 5b0a - 4150 - aaba - 79e2939e85f7 " _cry_guid , " GetEntityName " ) ; <nl> pFunction - > SetDescription ( " Get name of entity " ) ; <nl> static void RegisterUtilFunctions ( IEnvRegistrar & registrar ) <nl> scope . 
Register ( pFunction ) ; <nl> } <nl> { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & GetEntityTransform , " 367020b2 - c931 - 4e45 - bcd1 - b99675c49800 " _cry_guid , " GetEntityTransform " ) ; <nl> - pFunction - > SetDescription ( " Get transform of entity " ) ; <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & FindEntityByName , " { 823518D1 - 6FE9 - 49DA - A522 - C1483385B70D } " _cry_guid , " FindEntityByName " ) ; <nl> + pFunction - > SetDescription ( " Finds an entity by name " ) ; <nl> + pFunction - > BindInput ( 1 , ' name ' , " Name " ) ; <nl> + pFunction - > BindOutput ( 0 , ' ent ' , " EntityId " ) ; <nl> + scope . Register ( pFunction ) ; <nl> + } <nl> + { <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & Hide , " abc4938d - a631 - 4a36 - 9f10 - 22cf6dc9dabd " _cry_guid , " Hide " ) ; <nl> + pFunction - > SetDescription ( " Show / hide entity " ) ; <nl> + pFunction - > SetFlags ( EEnvFunctionFlags : : Construction ) ; <nl> pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> - pFunction - > BindOutput ( 0 , ' trfm ' , " Transform " ) ; <nl> + pFunction - > BindInput ( 2 , ' vis ' , " Hide " ) ; <nl> scope . Register ( pFunction ) ; <nl> } <nl> { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & GetEntityPosition , " 10657d53 - ef08 - 4a8f - 9d74 - 344fbf19f370 " _cry_guid , " GetEntityPosition " ) ; <nl> - pFunction - > SetDescription ( " Get world position of entity " ) ; <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & IsHidden , " 5aa5e8f0 - b4f4 - 491d - 8074 - d8b129500d09 " _cry_guid , " IsHidden " ) ; <nl> + pFunction - > SetDescription ( " Is entity hidden ? " ) ; <nl> + pFunction - > SetFlags ( EEnvFunctionFlags : : Construction ) ; <nl> pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> - pFunction - > BindOutput ( 0 , ' pos ' , " Position " ) ; <nl> + pFunction - > BindOutput ( 0 , ' vis ' , " Visible " ) ; <nl> + scope . 
Register ( pFunction ) ; <nl> + } <nl> + <nl> + { <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & SetTransform , " FA08CEA0 - A0C5 - 4340 - 9F8A - E38D74488BAF " _cry_guid , " SetTransform " ) ; <nl> + pFunction - > SetDescription ( " Set Entity Transformation " ) ; <nl> + pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> + pFunction - > BindInput ( 2 , ' tr ' , " transform " ) ; <nl> + scope . Register ( pFunction ) ; <nl> + } <nl> + <nl> + { <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & GetTransform , " 8A99E1BA - A5CD - 4DE8 - A19F - D07DF5D3B245 " _cry_guid , " GetTransform " ) ; <nl> + pFunction - > SetDescription ( " Get Entity Transformation " ) ; <nl> + pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> + pFunction - > BindOutput ( 0 , ' tr ' , " transform " ) ; <nl> scope . Register ( pFunction ) ; <nl> } <nl> + <nl> { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & GetEntityPosition , " f2657f9a - 1229 - 41a1 - 87e7 - bd9d29c40470 " _cry_guid , " GetEntityRotation " ) ; <nl> - pFunction - > SetDescription ( " Get rotation of entity in world space " ) ; <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & SetRotation , " { 53FDFFFB - A216 - 4001 - BA26 - 9E81A7D2160D } " _cry_guid , " SetRotation " ) ; <nl> + pFunction - > SetDescription ( " Set Entity Rotation " ) ; <nl> pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> - pFunction - > BindOutput ( 0 , ' rot ' , " Rotation " ) ; <nl> + pFunction - > BindInput ( 2 , ' rot ' , " rotation " ) ; <nl> + scope . Register ( pFunction ) ; <nl> + } <nl> + <nl> + { <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & GetRotation , " { B03F7198 - 583E - 4C9C - BDC7 - 92D904920D2C } " _cry_guid , " GetRotation " ) ; <nl> + pFunction - > SetDescription ( " Get Entity Rotation " ) ; <nl> + pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> + pFunction - > BindOutput ( 0 , ' rot ' , " rotation " ) ; <nl> + scope . 
Register ( pFunction ) ; <nl> + } <nl> + <nl> + { <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & SetPosition , " { 1FD3B799 - D029 - 480A - 8D7B - F4EDFFF3D5F9 } " _cry_guid , " SetPosition " ) ; <nl> + pFunction - > SetDescription ( " Set Entity Position " ) ; <nl> + pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> + pFunction - > BindInput ( 2 , ' pos ' , " Position " ) ; <nl> + scope . Register ( pFunction ) ; <nl> + } <nl> + <nl> + { <nl> + auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & GetPosition , " { 29C87754 - 67BF - 401A - 9E8E - F9DCC178031E } " _cry_guid , " GetPosition " ) ; <nl> + pFunction - > SetDescription ( " Get Entity Position " ) ; <nl> + pFunction - > BindInput ( 1 , ' ent ' , " EntityId " ) ; <nl> + pFunction - > BindOutput ( 0 , ' pos ' , " Position " ) ; <nl> scope . Register ( pFunction ) ; <nl> } <nl> } <nl> mmm a / Code / CryEngine / CryEntitySystem / Schematyc / EntityUtilsComponent . cpp <nl> ppp b / Code / CryEngine / CryEntitySystem / Schematyc / EntityUtilsComponent . cpp <nl> <nl> <nl> namespace Schematyc <nl> { <nl> - ExplicitEntityId CEntityUtilsComponent : : GetEntityId ( ) const <nl> - { <nl> - return ExplicitEntityId ( m_pEntity - > GetId ( ) ) ; <nl> - } <nl> - <nl> - void CEntityUtilsComponent : : SetTransform ( const CryTransform : : CTransform & transform ) <nl> - { <nl> - m_pEntity - > SetWorldTM ( transform . ToMatrix34 ( ) ) ; <nl> - } <nl> - <nl> - CryTransform : : CTransform CEntityUtilsComponent : : GetTransform ( ) <nl> - { <nl> - return CryTransform : : CTransform ( m_pEntity - > GetWorldTM ( ) ) ; <nl> - } <nl> - <nl> - void CEntityUtilsComponent : : SetRotation ( const CryTransform : : CRotation & rotation ) <nl> - { <nl> - Matrix34 transform = m_pEntity - > GetWorldTM ( ) ; <nl> - transform . SetRotation33 ( rotation . 
ToMatrix33 ( ) ) ; <nl> - m_pEntity - > SetWorldTM ( transform ) ; <nl> - } <nl> - <nl> - CryTransform : : CRotation CEntityUtilsComponent : : GetRotation ( ) <nl> - { <nl> - return CryTransform : : CRotation ( m_pEntity - > GetWorldRotation ( ) ) ; <nl> - } <nl> - <nl> - void CEntityUtilsComponent : : SetVisible ( bool bVisible ) <nl> - { <nl> - m_pEntity - > Invisible ( ! bVisible ) ; <nl> - } <nl> - <nl> - bool CEntityUtilsComponent : : IsVisible ( ) const <nl> - { <nl> - return ! m_pEntity - > IsInvisible ( ) ; <nl> - } <nl> - <nl> void CEntityUtilsComponent : : ReflectType ( CTypeDesc < CEntityUtilsComponent > & desc ) <nl> { <nl> desc . SetGUID ( " e88093df - 904f - 4c52 - af38 - 911e26777cdc " _cry_guid ) ; <nl> namespace Schematyc <nl> CEnvRegistrationScope scope = registrar . Scope ( IEntity : : GetEntityScopeGUID ( ) ) ; <nl> { <nl> CEnvRegistrationScope componentScope = scope . Register ( SCHEMATYC_MAKE_ENV_COMPONENT ( CEntityUtilsComponent ) ) ; <nl> - / / Functions <nl> - { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & CEntityUtilsComponent : : GetEntityId , " c01d8df5 - 058f - 406f - 8c4c - 8426e856f294 " _cry_guid , " GetEntityId " ) ; <nl> - pFunction - > SetDescription ( " Get Entity Id " ) ; <nl> - pFunction - > BindOutput ( 0 , ' id ' , " EntityId " ) ; <nl> - componentScope . Register ( pFunction ) ; <nl> - } <nl> - <nl> - { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & CEntityUtilsComponent : : SetTransform , " FA08CEA0 - A0C5 - 4340 - 9F8A - E38D74488BAF " _cry_guid , " SetTransform " ) ; <nl> - pFunction - > SetDescription ( " Set Entity Transformation " ) ; <nl> - pFunction - > BindInput ( 1 , ' tr ' , " transform " ) ; <nl> - componentScope . 
Register ( pFunction ) ; <nl> - } <nl> - <nl> - { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & CEntityUtilsComponent : : GetTransform , " 8A99E1BA - A5CD - 4DE8 - A19F - D07DF5D3B245 " _cry_guid , " GetTransform " ) ; <nl> - pFunction - > SetDescription ( " Get Entity Transformation " ) ; <nl> - pFunction - > BindOutput ( 0 , ' tr ' , " transform " ) ; <nl> - componentScope . Register ( pFunction ) ; <nl> - } <nl> - <nl> - { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & CEntityUtilsComponent : : SetRotation , " { 53FDFFFB - A216 - 4001 - BA26 - 9E81A7D2160D } " _cry_guid , " SetRotation " ) ; <nl> - pFunction - > SetDescription ( " Set Entity Rotation " ) ; <nl> - pFunction - > BindInput ( 1 , ' rot ' , " rotation " ) ; <nl> - componentScope . Register ( pFunction ) ; <nl> - } <nl> - <nl> - { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & CEntityUtilsComponent : : GetRotation , " { B03F7198 - 583E - 4C9C - BDC7 - 92D904920D2C } " _cry_guid , " GetRotation " ) ; <nl> - pFunction - > SetDescription ( " Get Entity Rotation " ) ; <nl> - pFunction - > BindOutput ( 0 , ' rot ' , " rotation " ) ; <nl> - componentScope . Register ( pFunction ) ; <nl> - } <nl> - <nl> - { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & CEntityUtilsComponent : : SetVisible , " abc4938d - a631 - 4a36 - 9f10 - 22cf6dc9dabd " _cry_guid , " SetVisible " ) ; <nl> - pFunction - > SetDescription ( " Show / hide geometry " ) ; <nl> - pFunction - > SetFlags ( EEnvFunctionFlags : : Construction ) ; <nl> - pFunction - > BindInput ( 1 , ' vis ' , " Visible " ) ; <nl> - componentScope . Register ( pFunction ) ; <nl> - } <nl> - { <nl> - auto pFunction = SCHEMATYC_MAKE_ENV_FUNCTION ( & CEntityUtilsComponent : : IsVisible , " 5aa5e8f0 - b4f4 - 491d - 8074 - d8b129500d09 " _cry_guid , " IsVisible " ) ; <nl> - pFunction - > SetDescription ( " Is geometry visible ? 
" ) ; <nl> - pFunction - > SetFlags ( EEnvFunctionFlags : : Construction ) ; <nl> - pFunction - > BindOutput ( 0 , ' vis ' , " Visible " ) ; <nl> - componentScope . Register ( pFunction ) ; <nl> - } <nl> } <nl> } <nl> <nl> mmm a / Code / CryEngine / CrySchematyc / Core / Impl / Script / Graph / Nodes / ScriptGraphGetEntityIdNode . cpp <nl> ppp b / Code / CryEngine / CrySchematyc / Core / Impl / Script / Graph / Nodes / ScriptGraphGetEntityIdNode . cpp <nl> void CScriptGraphGetEntityIdNode : : CreateLayout ( CScriptGraphNodeLayout & layout ) <nl> layout . SetName ( " GetEntityId " ) ; <nl> layout . SetStyleId ( " Core : : Data " ) ; <nl> <nl> - layout . AddOutputWithData ( " EntityId " , GetTypeDesc < ExplicitEntityId > ( ) . GetGUID ( ) , { EScriptGraphPortFlags : : Data , EScriptGraphPortFlags : : MultiLink , EScriptGraphPortFlags : : Pull } , ObjectId ( ) ) ; <nl> + layout . AddOutputWithData ( " Entity " , GetTypeDesc < ExplicitEntityId > ( ) . GetGUID ( ) , { EScriptGraphPortFlags : : Data , EScriptGraphPortFlags : : MultiLink , EScriptGraphPortFlags : : Pull } , ExplicitEntityId ( INVALID_ENTITYID ) ) ; <nl> } <nl> <nl> void CScriptGraphGetEntityIdNode : : Compile ( SCompilerContext & context , IGraphNodeCompiler & compiler ) const <nl> void CScriptGraphGetEntityIdNode : : Register ( CScriptGraphNodeFactory & factory ) <nl> <nl> virtual const char * GetBehavior ( ) const override <nl> { <nl> - return " GetEntityId " ; <nl> + return " GetEntity " ; <nl> } <nl> <nl> virtual const char * GetSubject ( ) const override <nl> void CScriptGraphGetEntityIdNode : : Register ( CScriptGraphNodeFactory & factory ) <nl> <nl> virtual const char * GetDescription ( ) const override <nl> { <nl> - return " Get id of this Entity " ; <nl> + return " Gets the Entity we are attached to " ; <nl> } <nl> <nl> virtual const char * GetStyleId ( ) const override <nl>
|
! R ( Schematyc ) Move Entity functions from Entity base component to new Entity data type ( renamed from EntityId )
|
CRYTEK/CRYENGINE
|
d56477a7c96ae692481d0605d5da0732954220c1
|
2017-08-18T14:48:06Z
|
mmm a / docs / docs / imaging . xml <nl> ppp b / docs / docs / imaging . xml <nl> <nl> array2d < / a > objects that contain various kinds of pixels . <nl> < / p > <nl> <nl> + < p > <nl> + < h2 > Pixel Types < / h2 > <nl> + Most image handling routines in dlib will accept images containing any pixel type . <nl> + This is made possible by defining a traits class , < a href = " # pixel_traits " > pixel_traits < / a > , for <nl> + each possible pixel type . This traits class enables image processing routines to determine <nl> + how to handle each kind of pixel and therefore only pixels which have a pixel_traits definition <nl> + may be used . The following list defines all the pixel types which come with pixel_traits definitions . <nl> + < ul > <nl> + < li > < b > RGB < / b > <nl> + < ul > There are two RGB pixel types in dlib , < a href = " # rgb_pixel " > rgb_pixel < / a > and < a href = " # bgr_pixel " > bgr_pixel < / a > . <nl> + Each defines a 24bit RGB pixel type . The bgr_pixel is identical to rgb_pixel except that it lays <nl> + the color channels down in memory in BGR order rather than RGB order and is therefore useful <nl> + for interfacing with other image processing tools which expect this format ( e . g . < a href = " # cv_image " > OpenCV < / a > ) . < / ul > <nl> + < / li > <nl> + < li > < b > RGB Alpha < / b > <nl> + < ul > The < a href = " # rgb_alpha_pixel " > rgb_alpha_pixel < / a > is an 8bit per channel RGB pixel with an 8bit alpha channel . < / ul > <nl> + < / li > <nl> + < li > < b > HSI < / b > <nl> + < ul > The < a href = " # hsi_pixel " > hsi_pixel < / a > is a 24bit pixel which represents a point in the Hue Saturation Intensity <nl> + ( HSI ) color space . < / ul > <nl> + < / li > <nl> + < li > < b > Grayscale < / b > <nl> + < ul > Any built in scalar type may be used as a grayscale pixel type . For example , unsigned char , int , double , etc . < / ul > <nl> + < / li > <nl> + < / ul > <nl> + <nl> + <nl> + < / p > <nl> < / body > <nl> <nl> < !
- - * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * - - > <nl> <nl> < description > <nl> get_pixel_intensity ( ) is a templated function that <nl> returns the grayscale intensity of a pixel . If the pixel isn ' t a grayscale <nl> - pixel then it converts the pixel to the HSI color space and returns the <nl> - obtained intensity value . <nl> + pixel then it converts the pixel to grayscale and returns that value . <nl> < / description > <nl> <nl> < / component > <nl> <nl> < description > <nl> This global function writes an image out to an ostream as a dlib DNG file ( a lossless <nl> compressed image format ) . <nl> + < p > <nl> + This routine can save images containing any type of pixel . However , the DNG format <nl> + can natively store only the following pixel types : < b > rgb_pixel < / b > , < b > hsi_pixel < / b > , <nl> + < b > rgb_alpha_pixel < / b > , < b > uint8 < / b > , and < b > uint16 < / b > . All other pixel <nl> + types will be converted into one of these types as appropriate before being <nl> + saved to disk . <nl> + < / p > <nl> + <nl> < / description > <nl> <nl> < / component > <nl> <nl> if you use CMake and dlib ' s default CMakeLists . txt file then it will get setup <nl> automatically . <nl> < / p > <nl> + < p > <nl> + This routine can save images containing any type of pixel . However , save_png ( ) can <nl> + only natively store the following pixel types : < b > rgb_pixel < / b > , <nl> + < b > rgb_alpha_pixel < / b > , < b > uint8 < / b > , and < b > uint16 < / b > . All other pixel <nl> + types will be converted into one of these types as appropriate before being <nl> + saved to disk . <nl> + < / p > <nl> < / description > <nl> <nl> < / component > <nl> <nl> < file > dlib / image_io . h < / file > <nl> < spec_file link = " true " > dlib / image_saver / image_saver_abstract . 
h < / spec_file > <nl> < description > <nl> - This global function writes an image out to an ostream as a MS Windows BMP file . <nl> + This global function writes an image out to an ostream as a MS Windows BMP file . <nl> + <nl> + < p > <nl> + This routine can save images containing any type of pixel . However , it will <nl> + convert all color pixels into < b > rgb_pixel < / b > and grayscale pixels into <nl> + < b > uint8 < / b > type before saving to disk . <nl> + < / p > <nl> + <nl> < / description > <nl> <nl> < / component > <nl>
|
Updated the docs to discuss pixel formats and what happens during the
|
davisking/dlib
|
8b55807fe6a9043592cc793ef60e065b82c554bd
|
2011-05-29T22:52:16Z
|
mmm a / tensorflow / python / keras / utils / composite_tensor_support_test . py <nl> ppp b / tensorflow / python / keras / utils / composite_tensor_support_test . py <nl> class ToSparse ( Layer ) : <nl> " " " Create a sparse tensor based on a given dense tensor . " " " <nl> <nl> def call ( self , inputs ) : <nl> - indices = array_ops . where ( math_ops . not_equal ( inputs , 0 ) ) <nl> + indices = array_ops . where_v2 ( math_ops . not_equal ( inputs , 0 ) ) <nl> values = array_ops . gather_nd ( inputs , indices ) <nl> shape = array_ops . shape ( inputs , out_type = dtypes . int64 ) <nl> return sparse_tensor . SparseTensor ( indices , values , dense_shape = shape ) <nl>
|
Removed the deprecated API from the File .
|
tensorflow/tensorflow
|
047d4bf2ff58417167a22893a8787a54ca861ad1
|
2019-07-17T09:36:57Z
|
mmm a / src / globals . h <nl> ppp b / src / globals . h <nl> namespace internal { <nl> <nl> / / Determine whether double field unboxing feature is enabled . <nl> # if V8_TARGET_ARCH_64_BIT <nl> - # define V8_DOUBLE_FIELDS_UNBOXING 0 <nl> + # define V8_DOUBLE_FIELDS_UNBOXING 1 <nl> # else <nl> # define V8_DOUBLE_FIELDS_UNBOXING 0 <nl> # endif <nl>
|
Revert of Temporarily disable double fields unboxing . ( patchset id : 1 of https : / / codereview . chromium . org / 928733003 / )
|
v8/v8
|
0d4ff29a607629ca25bbf7d00684c61269e11c0f
|
2015-02-26T12:26:59Z
|
mmm a / DEVELOPER . md <nl> ppp b / DEVELOPER . md <nl> Below is a list of additional documentation to aid the development process : <nl> <nl> - [ Envoy filter example project ( how to consume and extend Envoy as a submodule ) ] ( https : / / github . com / envoyproxy / envoy - filter - example ) <nl> <nl> + - [ Performance testing Envoy with ` tcmalloc ` / ` pprof ` ] ( https : / / github . com / envoyproxy / envoy / tree / bazel / PPROF . md ) <nl> + <nl> And some documents on components of Envoy architecture : <nl> <nl> - [ Envoy flow control ] ( https : / / github . com / envoyproxy / envoy / blob / master / source / docs / flow_control . md ) <nl> new file mode 100644 <nl> index 00000000000 . . 09e7982f2d8 <nl> mmm / dev / null <nl> ppp b / bazel / PPROF . md <nl> <nl> + # Memory consumption testing with ` pprof ` <nl> + <nl> + To use ` pprof ` to analyze performance and memory consumption in Envoy , you can <nl> + use the built - in statically linked profiler , or dynamically link it in to a <nl> + specific place yourself . <nl> + <nl> + # Linking <nl> + <nl> + # # Static Linking <nl> + <nl> + Static linking is already available ( because of a ` HeapProfilerDump ( ) ` call <nl> + inside <nl> + [ ` Envoy : : Profiler : : Heap : : forceLink ( ) ` ] ( https : / / github . com / envoyproxy / envoy / blob / master / source / common / profiler / profiler . cc # L21 - L26 ) ) . <nl> + <nl> + # # # Compiling a statically - linked Envoy <nl> + <nl> + Build the static binary using bazel : <nl> + <nl> + $ bazel build / / source / exe : envoy - static <nl> + <nl> + # # # Running a statically - linked Envoy with ` pprof ` <nl> + <nl> + And run the binary with a ` HEAPPROFILE ` environment variable , like so : <nl> + <nl> + $ HEAPPROFILE = / tmp / mybin . hprof bazel - bin / source / exe / envoy - static < args > <nl> + <nl> + ` HEAPPROFILE ` sets a location for the profiler output . 
A statically - linked <nl> + binary must be run with this environment variable ; a dynamically - linked binary <nl> + will populate the working directory by default . ( See * Methodology * . ) <nl> + <nl> + # # Dynamic Linking <nl> + <nl> + # # # Adding ` tcmalloc_dep ` to Envoy <nl> + <nl> + A statically - linked Envoy will profile everything . In a dynamically - linked <nl> + Envoy , you must add the HeapProfiler instructions yourself . <nl> + ` HeapProfilerStart ( ) ` will start recording allocations , ` HeapProfilerStop ( ) ` <nl> + will stop recording , and ` HeapProfilerDump ( ) ` will dump an output to the <nl> + specified directory . ( See [ Gperftools Heap <nl> + Profiler ] ( https : / / gperftools . github . io / gperftools / heapprofile . html ) . ) <nl> + <nl> + To add a ` HeapProfiler ` breakpoint yourself , add ` tcmalloc ` as a <nl> + dependency under the ` envoy_cc_library ` rule : <nl> + <nl> + ` source / exe / BUILD ` <nl> + <nl> + ` ` ` c + + <nl> + envoy_cc_library ( <nl> + name = " envoy_common_lib " , <nl> + + tcmalloc_dep = 1 , <nl> + deps = [ <nl> + . . . <nl> + ) <nl> + ` ` ` <nl> + <nl> + It is then necessary to add ` HeapProfilerStart ( ) ` and ` HeapProfilerDump ( ) ` <nl> + breakpoints somewhere in Envoy . One place to start profiling is at the <nl> + instantiation of ` MainCommonBase : : MainCommonBase ` : <nl> + <nl> + ` source / exe / main_common . cc ` <nl> + <nl> + ` ` ` c + + <nl> + / / includes <nl> + # include " gperftools / heap - profiler . h " <nl> + . . . <nl> + MainCommonBase : : MainCommonBase ( . . . ) : . . . { <nl> + + HeapProfilerStart ( " main_common_base " ) ; / / first line <nl> + . . . <nl> + } <nl> + ` ` ` <nl> + <nl> + ` source / server / server . cc ` <nl> + <nl> + ` ` ` c + + <nl> + / / includes <nl> + # include " gperftools / heap - profiler . h " <nl> + . . . <nl> + void InstanceImpl : : Initialize ( . . . ) : . . . { <nl> + . . . 
<nl> + + HeapProfilerDump ( " main_common_base " ) ; / / last line <nl> + } <nl> + ` ` ` <nl> + <nl> + Once these changes have been made in your working directory , it might make sense to <nl> + save the diff as a patch ( ` git diff > file ` ) , which can then be quickly <nl> + applied / unapplied for testing and committing . ( ` git apply ` , ` git apply - R ` ) <nl> + <nl> + Build the binary using bazel , and run the binary without any environment variables : <nl> + <nl> + $ bazel build / / source / exe : envoy <nl> + $ bazel - bin / source / exe / envoy < args > <nl> + <nl> + This will dump your profiler output to the working directory . <nl> + <nl> + # Methodology <nl> + <nl> + For consistent testing , it makes sense to run Envoy for a constant amount of <nl> + time across trials : <nl> + <nl> + $ timeout < num_seconds > bazel - bin / source / exe / envoy < options > <nl> + <nl> + Envoy will print to stdout something like : <nl> + <nl> + Starting tracking the heap <nl> + <nl> + And then a series of stdout lines like : <nl> + <nl> + Dumping heap profile to < heap file 0001 > ( 100 MB currently in use ) <nl> + Dumping heap profile to < heap file 0002 > ( 200 MB currently in use ) <nl> + . . . <nl> + <nl> + This will generate a series of files ; if you statically - linked , these are <nl> + wherever ` HEAPPROFILE ` points to . Otherwise , they are in the current directory <nl> + by default . They ' ll be named something like ` main_common_base . 0001 . heap ` , <nl> + ` main_common_base . 0002 . heap ` , etc . <nl> + <nl> + * NB : * There is no reason this needs to be titled ` main_common_base ` . Whatever <nl> + flag you supply ` HeapProfilerStart ` / ` HeapProfilerDump ` will become the <nl> + filename . Multiple sections of code could be profiled simultaneously by setting <nl> + multiple ` HeapProfilerStart ( ) ` / ` HeapProfilerStop ( ) ` breakpoints with unique <nl> + identifiers . 
<nl> + <nl> + # Analyzing with ` pprof ` <nl> + <nl> + [ pprof ] ( https : / / github . com / google / pprof ) can read these heap files in a <nl> + number of ways . Most convenient for first - order inspection might be ` pprof - top ` <nl> + or ` pprof - text ` : <nl> + <nl> + $ pprof - text bazel - bin / source / exe / envoy main_common_base * | head - n5 <nl> + File : envoy <nl> + Build ID : . . . <nl> + Type : inuse_space <nl> + Showing nodes accounting for 6402800 . 62kB , 98 . 59 % of 6494044 . 58kB total <nl> + Dropped . . . nodes ( cum < = . . . kB ) <nl> + <nl> + More complex flame / graph charts can be generated and viewed in a browser , which <nl> + is often more helpful than text - based output : <nl> + <nl> + $ pprof - http = localhost : 9999 bazel - bin / source / exe / envoy main_common_base * <nl>
|
docs : added developer docs for pprof / tcmalloc testing ( )
|
envoyproxy/envoy
|
9058d469c078b9d338a76e71b64614f6c9eb2c67
|
2018-08-20T19:15:57Z
|
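The `pprof -text` summary header quoted in the Envoy record above can be post-processed to compare memory use across trials. A minimal sketch — the helper below is hypothetical (not part of Envoy or gperftools) and assumes the `Showing nodes accounting for XkB, P% of YkB total` header format shown in the record:

```python
import re

def parse_inuse_kb(pprof_text_header):
    """Extract (accounted_kb, total_kb) from a `pprof -text` summary line.

    Assumes the `Showing nodes accounting for XkB, P% of YkB total` header
    format quoted in the record above; returns None if the line differs.
    """
    m = re.search(
        r"accounting for\s+([\d.]+)kB,\s+[\d.]+%\s+of\s+([\d.]+)kB total",
        pprof_text_header,
    )
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))

# The header quoted in the document, with tokenizer spacing normalized:
header = "Showing nodes accounting for 6402800.62kB, 98.59% of 6494044.58kB total"
print(parse_inuse_kb(header))  # -> (6402800.62, 6494044.58)
```

Comparing these pairs across `timeout`-bounded runs (per the Methodology section) gives a rough trend line without opening the browser UI.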
mmm a / src / ast / scopes . cc <nl> ppp b / src / ast / scopes . cc <nl> Scope : : Scope ( Zone * zone , Scope * inner_scope , <nl> zone_ ( zone ) { <nl> SetDefaults ( CATCH_SCOPE , NULL , Handle < ScopeInfo > : : null ( ) ) ; <nl> AddInnerScope ( inner_scope ) ; <nl> - + + num_var_or_const_ ; <nl> + + + num_var_ ; <nl> num_heap_slots_ = Context : : MIN_CONTEXT_SLOTS ; <nl> Variable * variable = variables_ . Declare ( this , <nl> catch_variable_name , <nl> void Scope : : SetDefaults ( ScopeType scope_type , Scope * outer_scope , <nl> force_eager_compilation_ = false ; <nl> force_context_allocation_ = ( outer_scope ! = NULL & & ! is_function_scope ( ) ) <nl> ? outer_scope - > has_forced_context_allocation ( ) : false ; <nl> - num_var_or_const_ = 0 ; <nl> + num_var_ = 0 ; <nl> num_stack_slots_ = 0 ; <nl> num_heap_slots_ = 0 ; <nl> num_global_slots_ = 0 ; <nl> Scope * Scope : : FinalizeBlockScope ( ) { <nl> DCHECK ( temps_ . is_empty ( ) ) ; <nl> DCHECK ( params_ . is_empty ( ) ) ; <nl> <nl> - if ( num_var_or_const ( ) > 0 | | <nl> - ( is_declaration_scope ( ) & & calls_sloppy_eval ( ) ) ) { <nl> + if ( num_var ( ) > 0 | | ( is_declaration_scope ( ) & & calls_sloppy_eval ( ) ) ) { <nl> return this ; <nl> } <nl> <nl> Variable * Scope : : DeclareLocal ( const AstRawString * name , VariableMode mode , <nl> / / introduced during variable allocation , and TEMPORARY variables are <nl> / / allocated via NewTemporary ( ) . <nl> DCHECK ( IsDeclaredVariableMode ( mode ) ) ; <nl> - + + num_var_or_const_ ; <nl> + + + num_var_ ; <nl> return variables_ . 
Declare ( this , name , mode , kind , init_flag , <nl> maybe_assigned_flag ) ; <nl> } <nl> Declaration * Scope : : CheckConflictingVarDeclarations ( ) { <nl> return NULL ; <nl> } <nl> <nl> + Declaration * Scope : : CheckLexDeclarationsConflictingWith ( <nl> + ZoneList < const AstRawString * > * names ) { <nl> + int length = names - > length ( ) ; <nl> + for ( int i = 0 ; i < length ; + + i ) { <nl> + Variable * var = LookupLocal ( names - > at ( i ) ) ; <nl> + if ( var ! = nullptr & & IsLexicalVariableMode ( var - > mode ( ) ) ) { <nl> + / / Conflict ; find and return its declaration . <nl> + const AstRawString * name = names - > at ( i ) ; <nl> + int decls_length = decls_ . length ( ) ; <nl> + for ( int j = 0 ; j < decls_length ; + + j ) { <nl> + if ( decls_ [ i ] - > proxy ( ) - > raw_name ( ) = = name ) { <nl> + return decls_ [ i ] ; <nl> + } <nl> + } <nl> + DCHECK ( false ) ; <nl> + } <nl> + } <nl> + return nullptr ; <nl> + } <nl> <nl> class VarAndOrder { <nl> public : <nl> mmm a / src / ast / scopes . h <nl> ppp b / src / ast / scopes . h <nl> class Scope : public ZoneObject { <nl> / / scope over a let binding of the same name . <nl> Declaration * CheckConflictingVarDeclarations ( ) ; <nl> <nl> + / / Check if the scope has a conflicting lexical declaration that has a name in <nl> + / / the given list . This is used to catch patterns like <nl> + / / ` try { } catch ( e ) { let e ; } ` , <nl> + / / which is an error even though the two ' e ' s are declared in different <nl> + / / scopes . <nl> + Declaration * CheckLexDeclarationsConflictingWith ( <nl> + ZoneList < const AstRawString * > * names ) ; <nl> + <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm <nl> / / Scope - specific info . <nl> <nl> class Scope : public ZoneObject { <nl> / / The ModuleDescriptor for this scope ; only for module scopes . 
<nl> ModuleDescriptor * module ( ) const { return module_descriptor_ ; } <nl> <nl> + AstRawString * catch_variable_name ( ) const { <nl> + DCHECK ( is_catch_scope ( ) ) ; <nl> + DCHECK ( num_var ( ) = = 1 ) ; <nl> + return static_cast < AstRawString * > ( variables_ . Start ( ) - > key ) ; <nl> + } <nl> + <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm <nl> / / Variable allocation . <nl> <nl> class Scope : public ZoneObject { <nl> ZoneList < Variable * > * context_locals , <nl> ZoneList < Variable * > * context_globals ) ; <nl> <nl> - / / Current number of var or const locals . <nl> - int num_var_or_const ( ) { return num_var_or_const_ ; } <nl> + / / Current number of var locals . <nl> + int num_var ( ) const { return num_var_ ; } <nl> <nl> / / Result of variable allocation . <nl> int num_stack_slots ( ) const { return num_stack_slots_ ; } <nl> class Scope : public ZoneObject { <nl> bool is_declaration_scope_ ; <nl> <nl> / / Computed as variables are declared . <nl> - int num_var_or_const_ ; <nl> + int num_var_ ; <nl> <nl> / / Computed via AllocateVariables ; function , block and catch scopes only . <nl> int num_stack_slots_ ; <nl> mmm a / src / parsing / parser . cc <nl> ppp b / src / parsing / parser . cc <nl> Statement * Parser : : ParseStatementAsUnlabelled ( <nl> <nl> VariableProxy * Parser : : NewUnresolved ( const AstRawString * name , <nl> VariableMode mode ) { <nl> - / / If we are inside a function , a declaration of a var / const variable is a <nl> + / / If we are inside a function , a declaration of a ' var ' variable is a <nl> / / truly local variable , and the scope of the variable is always the function <nl> / / scope . <nl> - / / Let / const variables in harmony mode are always added to the immediately <nl> - / / enclosing scope . <nl> + / / Let / const variables are always added to the immediately enclosing scope . <nl> Scope * scope = <nl> IsLexicalVariableMode ( mode ) ? 
scope_ : scope_ - > DeclarationScope ( ) ; <nl> return scope - > NewUnresolved ( factory ( ) , name , Variable : : NORMAL , <nl> Block * Parser : : ParseVariableStatement ( VariableDeclarationContext var_context , <nl> / / VariableStatement : : <nl> / / VariableDeclarations ' ; ' <nl> <nl> - / / The scope of a var / const declared variable anywhere inside a function <nl> + / / The scope of a var declared variable anywhere inside a function <nl> / / is the entire function ( ECMA - 262 , 3rd , 10 . 1 . 3 , and 12 . 2 ) . Thus we can <nl> - / / transform a source - level var / const declaration into a ( Function ) <nl> - / / Scope declaration , and rewrite the source - level initialization into an <nl> - / / assignment statement . We use a block to collect multiple assignments . <nl> + / / transform a source - level var declaration into a ( Function ) Scope <nl> + / / declaration , and rewrite the source - level initialization into an assignment <nl> + / / statement . We use a block to collect multiple assignments . <nl> / / <nl> / / We mark the block as initializer block because we don ' t want the <nl> / / rewriter to add a ' . result ' assignment to such a block ( to get compliant <nl> TryStatement * Parser : : ParseTryStatement ( bool * ok ) { <nl> <nl> Expect ( Token : : RPAREN , CHECK_OK ) ; <nl> <nl> + ZoneList < const AstRawString * > bound_names ( 1 , zone ( ) ) ; <nl> + <nl> if ( ! is_simple ) { <nl> DeclarationDescriptor descriptor ; <nl> descriptor . 
declaration_kind = DeclarationDescriptor : : NORMAL ; <nl> TryStatement * Parser : : ParseTryStatement ( bool * ok ) { <nl> Block * init_block = <nl> factory ( ) - > NewBlock ( nullptr , 8 , true , kNoSourcePosition ) ; <nl> PatternRewriter : : DeclareAndInitializeVariables ( <nl> - init_block , & descriptor , & decl , nullptr , CHECK_OK ) ; <nl> + init_block , & descriptor , & decl , & bound_names , CHECK_OK ) ; <nl> catch_block - > statements ( ) - > Add ( init_block , zone ( ) ) ; <nl> + } else { <nl> + bound_names . Add ( name , zone ( ) ) ; <nl> } <nl> <nl> - / / TODO ( adamk ) : This should call ParseBlock in order to properly <nl> - / / add an additional block scope for the catch body . <nl> - Expect ( Token : : LBRACE , CHECK_OK ) ; <nl> - while ( peek ( ) ! = Token : : RBRACE ) { <nl> - Statement * stat = ParseStatementListItem ( CHECK_OK ) ; <nl> - if ( stat & & ! stat - > IsEmpty ( ) ) { <nl> - catch_block - > statements ( ) - > Add ( stat , zone ( ) ) ; <nl> + Block * inner_block = ParseBlock ( nullptr , CHECK_OK ) ; <nl> + catch_block - > statements ( ) - > Add ( inner_block , zone ( ) ) ; <nl> + <nl> + / / Check for ` catch ( e ) { let e ; } ` and similar errors . <nl> + Scope * inner_block_scope = inner_block - > scope ( ) ; <nl> + if ( inner_block_scope ! = nullptr ) { <nl> + Declaration * decl = <nl> + inner_block_scope - > CheckLexDeclarationsConflictingWith ( <nl> + & bound_names ) ; <nl> + if ( decl ! = nullptr ) { <nl> + const AstRawString * name = decl - > proxy ( ) - > raw_name ( ) ; <nl> + int position = decl - > proxy ( ) - > position ( ) ; <nl> + Scanner : : Location location = <nl> + position = = kNoSourcePosition <nl> + ? 
Scanner : : Location : : invalid ( ) <nl> + : Scanner : : Location ( position , position + 1 ) ; <nl> + ParserTraits : : ReportMessageAt ( <nl> + location , MessageTemplate : : kVarRedeclaration , name ) ; <nl> + * ok = false ; <nl> + return nullptr ; <nl> } <nl> } <nl> - Consume ( Token : : RBRACE ) ; <nl> } <nl> block_scope - > set_end_position ( scanner ( ) - > location ( ) . end_pos ) ; <nl> block_scope = block_scope - > FinalizeBlockScope ( ) ; <nl> Statement * Parser : : ParseForStatement ( ZoneList < const AstRawString * > * labels , <nl> bool * ok ) { <nl> int stmt_pos = peek_position ( ) ; <nl> Statement * init = NULL ; <nl> - ZoneList < const AstRawString * > lexical_bindings ( 1 , zone ( ) ) ; <nl> + ZoneList < const AstRawString * > bound_names ( 1 , zone ( ) ) ; <nl> + bool bound_names_are_lexical = false ; <nl> <nl> / / Create an in - between scope for let - bound iteration variables . <nl> Scope * for_scope = NewScope ( scope_ , BLOCK_SCOPE ) ; <nl> Statement * Parser : : ParseForStatement ( ZoneList < const AstRawString * > * labels , <nl> } <nl> <nl> Block * init_block = nullptr ; <nl> + bound_names_are_lexical = <nl> + IsLexicalVariableMode ( parsing_result . descriptor . mode ) ; <nl> <nl> - / / special case for legacy for ( var / const x = . . . . in ) <nl> - if ( ! IsLexicalVariableMode ( parsing_result . descriptor . mode ) & & <nl> - decl . pattern - > IsVariableProxy ( ) & & decl . initializer ! = nullptr ) { <nl> + / / special case for legacy for ( var . . . = . . . in . . . ) <nl> + if ( ! bound_names_are_lexical & & decl . pattern - > IsVariableProxy ( ) & & <nl> + decl . initializer ! = nullptr ) { <nl> DCHECK ( ! allow_harmony_for_in ( ) ) ; <nl> + + use_counts_ [ v8 : : Isolate : : kForInInitializer ] ; <nl> const AstRawString * name = <nl> Statement * Parser : : ParseForStatement ( ZoneList < const AstRawString * > * labels , <nl> descriptor . initialization_pos = kNoSourcePosition ; <nl> decl . 
initializer = factory ( ) - > NewVariableProxy ( temp ) ; <nl> <nl> + bool is_for_var_of = <nl> + mode = = ForEachStatement : : ITERATE & & <nl> + parsing_result . descriptor . mode = = VariableMode : : VAR ; <nl> + <nl> PatternRewriter : : DeclareAndInitializeVariables ( <nl> each_initialization_block , & descriptor , & decl , <nl> - IsLexicalVariableMode ( descriptor . mode ) ? & lexical_bindings <nl> - : nullptr , <nl> + bound_names_are_lexical | | is_for_var_of ? & bound_names <nl> + : nullptr , <nl> CHECK_OK ) ; <nl> + <nl> + / / Annex B . 3 . 5 prohibits the form <nl> + / / ` try { } catch ( e ) { for ( var e of { } ) ; } ` <nl> + / / So if we are parsing a statement like ` for ( var . . . of . . . ) ` <nl> + / / we need to walk up the scope chain and look for catch scopes <nl> + / / which have a simple binding , then compare their binding against <nl> + / / all of the names declared in the init of the for - of we ' re <nl> + / / parsing . <nl> + if ( is_for_var_of ) { <nl> + Scope * catch_scope = scope_ ; <nl> + while ( catch_scope ! = nullptr & & <nl> + ! catch_scope - > is_declaration_scope ( ) ) { <nl> + if ( catch_scope - > is_catch_scope ( ) ) { <nl> + auto name = catch_scope - > catch_variable_name ( ) ; <nl> + if ( name ! = <nl> + ast_value_factory ( ) <nl> + - > dot_catch_string ( ) ) { / / i . e . is a simple binding <nl> + if ( bound_names . Contains ( name ) ) { <nl> + ParserTraits : : ReportMessageAt ( <nl> + parsing_result . bindings_loc , <nl> + MessageTemplate : : kVarRedeclaration , name ) ; <nl> + * ok = false ; <nl> + return nullptr ; <nl> + } <nl> + } <nl> + } <nl> + catch_scope = catch_scope - > outer_scope ( ) ; <nl> + } <nl> + } <nl> } <nl> <nl> body_block - > statements ( ) - > Add ( each_initialization_block , zone ( ) ) ; <nl> Statement * Parser : : ParseForStatement ( ZoneList < const AstRawString * > * labels , <nl> body_block - > set_scope ( body_scope ) ; <nl> <nl> / / Create a TDZ for any lexically - bound names . 
<nl> - if ( IsLexicalVariableMode ( parsing_result . descriptor . mode ) ) { <nl> + if ( bound_names_are_lexical ) { <nl> DCHECK_NULL ( init_block ) ; <nl> <nl> init_block = <nl> factory ( ) - > NewBlock ( nullptr , 1 , false , kNoSourcePosition ) ; <nl> <nl> - for ( int i = 0 ; i < lexical_bindings . length ( ) ; + + i ) { <nl> + for ( int i = 0 ; i < bound_names . length ( ) ; + + i ) { <nl> / / TODO ( adamk ) : This needs to be some sort of special <nl> / / INTERNAL variable that ' s invisible to the debugger <nl> / / but visible to everything else . <nl> - VariableProxy * tdz_proxy = <nl> - NewUnresolved ( lexical_bindings [ i ] , LET ) ; <nl> + VariableProxy * tdz_proxy = NewUnresolved ( bound_names [ i ] , LET ) ; <nl> Declaration * tdz_decl = factory ( ) - > NewVariableDeclaration ( <nl> tdz_proxy , LET , scope_ , kNoSourcePosition ) ; <nl> Variable * tdz_var = Declare ( <nl> Statement * Parser : : ParseForStatement ( ZoneList < const AstRawString * > * labels , <nl> return final_loop ; <nl> } <nl> } else { <nl> + bound_names_are_lexical = <nl> + IsLexicalVariableMode ( parsing_result . descriptor . mode ) ; <nl> init = parsing_result . BuildInitializationBlock ( <nl> - IsLexicalVariableMode ( parsing_result . descriptor . mode ) <nl> - ? & lexical_bindings <nl> - : nullptr , <nl> - CHECK_OK ) ; <nl> + bound_names_are_lexical ? & bound_names : nullptr , CHECK_OK ) ; <nl> } <nl> } else { <nl> int lhs_beg_pos = peek_position ( ) ; <nl> Statement * Parser : : ParseForStatement ( ZoneList < const AstRawString * > * labels , <nl> / / If there are let bindings , then condition and the next statement of the <nl> / / for loop must be parsed in a new scope . <nl> Scope * inner_scope = scope_ ; <nl> - if ( lexical_bindings . length ( ) > 0 ) { <nl> + if ( bound_names_are_lexical & & bound_names . length ( ) > 0 ) { <nl> inner_scope = NewScope ( for_scope , BLOCK_SCOPE ) ; <nl> inner_scope - > set_start_position ( scanner ( ) - > location ( ) . 
beg_pos ) ; <nl> } <nl> Statement * Parser : : ParseForStatement ( ZoneList < const AstRawString * > * labels , <nl> } <nl> <nl> Statement * result = NULL ; <nl> - if ( lexical_bindings . length ( ) > 0 ) { <nl> + if ( bound_names_are_lexical & & bound_names . length ( ) > 0 ) { <nl> BlockState block_state ( & scope_ , for_scope ) ; <nl> result = DesugarLexicalBindingsInForStatement ( <nl> - inner_scope , parsing_result . descriptor . mode , & lexical_bindings , loop , <nl> - init , cond , next , body , CHECK_OK ) ; <nl> + inner_scope , parsing_result . descriptor . mode , & bound_names , loop , init , <nl> + cond , next , body , CHECK_OK ) ; <nl> for_scope - > set_end_position ( scanner ( ) - > location ( ) . end_pos ) ; <nl> } else { <nl> for_scope - > set_end_position ( scanner ( ) - > location ( ) . end_pos ) ; <nl> mmm a / src / parsing / pattern - rewriter . cc <nl> ppp b / src / parsing / pattern - rewriter . cc <nl> void Parser : : PatternRewriter : : VisitVariableProxy ( VariableProxy * pattern ) { <nl> Scope * declaration_scope = IsLexicalVariableMode ( descriptor_ - > mode ) <nl> ? descriptor_ - > scope <nl> : descriptor_ - > scope - > DeclarationScope ( ) ; <nl> - if ( declaration_scope - > num_var_or_const ( ) > kMaxNumFunctionLocals ) { <nl> + if ( declaration_scope - > num_var ( ) > kMaxNumFunctionLocals ) { <nl> parser_ - > ReportMessage ( MessageTemplate : : kTooManyVariables ) ; <nl> * ok_ = false ; <nl> return ; <nl> mmm a / test / mjsunit / es6 / block - conflicts - sloppy . js <nl> ppp b / test / mjsunit / es6 / block - conflicts - sloppy . js <nl> for ( var v = 0 ; v < varbinds . length ; + + v ) { <nl> TestNoConflict ( ' ( function ( x ) { ' + varbinds [ v ] + ' } ) ( ) ; ' ) ; <nl> } <nl> <nl> - / / Test conflicting catch / function bindings . <nl> - TestNoConflict ( ' try { } catch ( x ) { ' + funbind + ' } ' ) ; <nl> - <nl> / / Test conflicting parameter / function bindings . 
<nl> TestNoConflict ( ' ( function ( x ) { ' + funbind + ' } ) ( ) ; ' ) ; <nl> mmm a / test / mjsunit / es6 / block - conflicts . js <nl> ppp b / test / mjsunit / es6 / block - conflicts . js <nl> for ( var v = 0 ; v < varbinds . length ; + + v ) { <nl> TestNoConflict ( ' ( function ( x ) { ' + varbinds [ v ] + ' } ) ( ) ; ' ) ; <nl> } <nl> <nl> - / / Test conflicting catch / function bindings . <nl> - TestNoConflict ( ' try { } catch ( x ) { ' + funbind + ' } ' ) ; <nl> - <nl> / / Test conflicting parameter / function bindings . <nl> TestNoConflict ( ' ( function ( x ) { ' + funbind + ' } ) ( ) ; ' ) ; <nl> mmm a / test / mjsunit / es6 / block - sloppy - function . js <nl> ppp b / test / mjsunit / es6 / block - sloppy - function . js <nl> <nl> assertEquals ( 4 , f ( ) ) ; <nl> } ) ( ) ; <nl> <nl> + / / B . 3 . 5 interacts with B . 3 . 3 to allow this . <nl> + ( function hoistingThroughSimpleCatch ( ) { <nl> + assertEquals ( undefined , f ) ; <nl> + <nl> + try { <nl> + throw 0 ; <nl> + } catch ( f ) { <nl> + { <nl> + assertEquals ( 4 , f ( ) ) ; <nl> + <nl> + function f ( ) { <nl> + return 4 ; <nl> + } <nl> + <nl> + assertEquals ( 4 , f ( ) ) ; <nl> + } <nl> + } <nl> + <nl> + assertEquals ( 4 , f ( ) ) ; <nl> + } ) ( ) ; <nl> + <nl> + ( function noHoistingThroughComplexCatch ( ) { <nl> + try { <nl> + throw 0 ; <nl> + } catch ( { f } ) { <nl> + { <nl> + assertEquals ( 4 , f ( ) ) ; <nl> + <nl> + function f ( ) { <nl> + return 4 ; <nl> + } <nl> + <nl> + assertEquals ( 4 , f ( ) ) ; <nl> + } <nl> + } <nl> + <nl> + assertThrows ( ( ) = > f , ReferenceError ) ; <nl> + } ) ( ) ; <nl> + <nl> / / Test that hoisting from blocks does happen in global scope <nl> function globalHoisted ( ) { return 0 ; } <nl> { <nl> new file mode 100644 <nl> index 00000000000 . . 0d6ce061f8c <nl> mmm / dev / null <nl> ppp b / test / mjsunit / es6 / catch - parameter - redeclaration . js <nl> <nl> + / / Copyright 2016 the V8 project authors . All rights reserved . 
<nl> + / / Use of this source code is governed by a BSD - style license that can be <nl> + / / found in the LICENSE file . <nl> + <nl> + function checkIsRedeclarationError ( code ) { <nl> + try { <nl> + eval ( ` <nl> + checkIsRedeclarationError : { <nl> + break checkIsRedeclarationError ; <nl> + $ { code } <nl> + } <nl> + ` ) ; <nl> + assertUnreachable ( ) ; <nl> + } catch ( e ) { <nl> + assertInstanceof ( e , SyntaxError ) ; <nl> + assertTrue ( e . toString ( ) . indexOf ( " has already been declared " ) > = 0 ) ; <nl> + } <nl> + } <nl> + <nl> + function checkIsNotRedeclarationError ( code ) { <nl> + assertDoesNotThrow ( ( ) = > eval ( ` <nl> + checkIsNotRedeclarationError_label : { <nl> + break checkIsNotRedeclarationError_label ; <nl> + $ { code } <nl> + } <nl> + ` ) ) ; <nl> + } <nl> + <nl> + <nl> + let lexical_e = [ <nl> + ' let e ' , <nl> + ' let { e } = 0 ' , <nl> + ' let [ e ] = 0 ' , <nl> + ' let { f : e } = 0 ' , <nl> + ' let [ [ [ ] , e ] ] = 0 ' , <nl> + ' const e = 0 ' , <nl> + ' const { e } = 0 ' , <nl> + ' const [ e ] = 0 ' , <nl> + ' const { f : e } = 0 ' , <nl> + ' const [ [ [ ] , e ] ] = 0 ' , <nl> + ' function e ( ) { } ' , <nl> + ' function * e ( ) { } ' , <nl> + ] ; <nl> + <nl> + let not_lexical_e = [ <nl> + ' var e ' , <nl> + ' var { e } = 0 ' , <nl> + ' let { } = 0 ' , <nl> + ' let { e : f } = 0 ' , <nl> + ' { function e ( ) { } } ' <nl> + ] ; <nl> + <nl> + / / Check that lexical declarations cannot override a simple catch parameter <nl> + for ( let declaration of lexical_e ) { <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + $ { declaration } <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that lexical declarations cannot override a complex catch parameter <nl> + for ( let declaration of lexical_e ) { <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( { e } ) { <nl> + $ { declaration } <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that non - 
lexical declarations can override a simple catch parameter <nl> + for ( let declaration of not_lexical_e ) { <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + $ { declaration } <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that the above error does not occur if a declaration scope is between <nl> + / / the catch and the loop . <nl> + for ( let declaration of lexical_e ) { <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + ( ( ) = > { $ { declaration } } ) ( ) ; <nl> + } <nl> + ` ) ; <nl> + <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + ( function ( ) { $ { declaration } } ) ( ) ; <nl> + } <nl> + ` ) ; <nl> + } <nl> new file mode 100644 <nl> index 00000000000 . . 5ea24ac7dca <nl> mmm / dev / null <nl> ppp b / test / mjsunit / es6 / for - each - in - catch . js <nl> <nl> + / / Copyright 2016 the V8 project authors . All rights reserved . <nl> + / / Use of this source code is governed by a BSD - style license that can be <nl> + / / found in the LICENSE file . <nl> + <nl> + function checkIsRedeclarationError ( code ) { <nl> + try { <nl> + eval ( ` <nl> + checkIsRedeclarationError : { <nl> + break checkIsRedeclarationError ; <nl> + $ { code } <nl> + } <nl> + ` ) ; <nl> + assertUnreachable ( ) ; <nl> + } catch ( e ) { <nl> + assertInstanceof ( e , SyntaxError ) ; <nl> + assertTrue ( e . toString ( ) . 
indexOf ( " has already been declared " ) > = 0 ) ; <nl> + } <nl> + } <nl> + <nl> + function checkIsNotRedeclarationError ( code ) { <nl> + assertDoesNotThrow ( ( ) = > eval ( ` <nl> + checkIsNotRedeclarationError_label : { <nl> + break checkIsNotRedeclarationError_label ; <nl> + $ { code } <nl> + } <nl> + ` ) ) ; <nl> + } <nl> + <nl> + <nl> + let var_e = [ <nl> + ' var e ' , <nl> + ' var { e } ' , <nl> + ' var [ e ] ' , <nl> + ' var { f : e } ' , <nl> + ' var [ [ [ ] , e ] ] ' <nl> + ] ; <nl> + <nl> + let not_var_e = [ <nl> + ' var f ' , <nl> + ' var { } ' , <nl> + ' var { e : f } ' , <nl> + ' e ' , <nl> + ' { e } ' , <nl> + ' let e ' , <nl> + ' const e ' , <nl> + ' let { e } ' , <nl> + ' const { e } ' , <nl> + ' let [ e ] ' , <nl> + ' const [ e ] ' , <nl> + ' let { f : e } ' , <nl> + ' const { f : e } ' <nl> + ] ; <nl> + <nl> + / / Check that ` for ( var . . . of . . . ) ` cannot redeclare a simple catch variable <nl> + / / but ` for ( var . . . in . . . ) ` can . <nl> + for ( let binding of var_e ) { <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + for ( $ { binding } of [ ] ) ; <nl> + } <nl> + ` ) ; <nl> + <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + for ( $ { binding } in [ ] ) ; <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that the above error occurs even for nested catches . 
<nl> + for ( let binding of var_e ) { <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + try { <nl> + throw 1 ; <nl> + } catch ( f ) { <nl> + try { <nl> + throw 2 ; <nl> + } catch ( { } ) { <nl> + for ( $ { binding } of [ ] ) ; <nl> + } <nl> + } <nl> + } <nl> + ` ) ; <nl> + <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + try { <nl> + throw 1 ; <nl> + } catch ( f ) { <nl> + try { <nl> + throw 2 ; <nl> + } catch ( { } ) { <nl> + for ( $ { binding } in [ ] ) ; <nl> + } <nl> + } <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that the above error does not occur if a declaration scope is between <nl> + / / the catch and the loop . <nl> + for ( let binding of var_e ) { <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + ( ( ) = > { for ( $ { binding } of [ ] ) ; } ) ( ) ; <nl> + } <nl> + ` ) ; <nl> + <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + ( function ( ) { for ( $ { binding } of [ ] ) ; } ) ( ) ; <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that there is no error when not declaring a var named e . 
<nl> + for ( let binding of not_var_e ) { <nl> + checkIsNotRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( e ) { <nl> + for ( $ { binding } of [ ] ) ; <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that there is an error for both for - in and for - of when redeclaring <nl> + / / a non - simple catch parameter <nl> + for ( let binding of var_e ) { <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( { e } ) { <nl> + for ( $ { binding } of [ ] ) ; <nl> + } <nl> + ` ) ; <nl> + <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( { e } ) { <nl> + for ( $ { binding } in [ ] ) ; <nl> + } <nl> + ` ) ; <nl> + } <nl> + <nl> + / / Check that the above error occurs even for nested catches . <nl> + for ( let binding of var_e ) { <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( { e } ) { <nl> + try { <nl> + throw 1 ; <nl> + } catch ( f ) { <nl> + try { <nl> + throw 2 ; <nl> + } catch ( { } ) { <nl> + for ( $ { binding } of [ ] ) ; <nl> + } <nl> + } <nl> + } <nl> + ` ) ; <nl> + <nl> + checkIsRedeclarationError ( ` <nl> + try { <nl> + throw 0 ; <nl> + } catch ( { e } ) { <nl> + try { <nl> + throw 1 ; <nl> + } catch ( f ) { <nl> + try { <nl> + throw 2 ; <nl> + } catch ( { } ) { <nl> + for ( $ { binding } in [ ] ) ; <nl> + } <nl> + } <nl> + } <nl> + ` ) ; <nl> + } <nl> mmm a / test / test262 / test262 . status <nl> ppp b / test / test262 / test262 . status <nl> <nl> <nl> # https : / / bugs . chromium . org / p / v8 / issues / detail ? id = 4231 <nl> ' language / eval - code / direct / var - env - lower - lex - catch - non - strict ' : [ FAIL ] , <nl> - ' language / statements / try / early - catch - lex ' : [ FAIL ] , <nl> - ' language / statements / try / early - catch - var ' : [ FAIL ] , <nl> <nl> # https : / / bugs . chromium . org / p / v8 / issues / detail ? 
id = 4951 <nl> ' language / expressions / assignment / dstr - array - elem - iter - rtrn - close ' : [ FAIL ] , <nl> <nl> # https : / / bugs . chromium . org / p / v8 / issues / detail ? id = 1569 <nl> ' language / module - code / * ' : [ SKIP ] , <nl> <nl> - # https : / / bugs . chromium . org / p / v8 / issues / detail ? id = 5112 <nl> - ' language / statements / try / scope - catch - block - lex - open ' : [ FAIL ] , <nl> - <nl> # https : / / bugs . chromium . org / p / v8 / issues / detail ? id = 5012 <nl> ' intl402 / Intl / getCanonicalLocales / * ' : [ FAIL ] , <nl> <nl>
|
Add errors for declarations which conflict with catch parameters .
|
v8/v8
|
2907c726b2bb5cf20b2bec639ca9e6a521585406
|
2016-07-01T00:01:31Z
|
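The rule the V8 record above implements — a lexical declaration directly in a catch block may not shadow the catch parameter, while `var` (and declarations in nested function scopes) may — can be modeled outside the parser. A toy Python sketch with hypothetical names; V8's actual check walks `Scope` objects via `CheckLexDeclarationsConflictingWith`, not flat lists:

```python
def catch_redeclaration_error(catch_params, body_decls):
    """Return the first conflicting name, or None.

    catch_params: names bound by the catch clause, e.g. ["e"].
    body_decls:   (name, kind) pairs declared directly in the catch body,
                  where kind is "let", "const", "function", or "var".
    Mirrors the record's rule: lexical kinds conflict with a catch
    parameter of the same name; `var` does not (Annex B.3.5 semantics).
    """
    lexical_kinds = {"let", "const", "function"}
    for name, kind in body_decls:
        if kind in lexical_kinds and name in catch_params:
            return name
    return None

# `try {} catch (e) { let e; }` is a SyntaxError; `var e` is permitted.
print(catch_redeclaration_error(["e"], [("e", "let")]))  # -> e
print(catch_redeclaration_error(["e"], [("e", "var")]))  # -> None
```

As in the record's test files, a declaration behind another declaration scope (`catch (e) { (() => { let e; })(); }`) would simply not appear in `body_decls`, so no conflict is reported.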
--- a/js/common/bootstrap/errors.js
+++ b/js/common/bootstrap/errors.js
@@ … @@
   "ERROR_CANNOT_CREATE_TEMP_FILE": { "code": 20, "message": "cannot create temporary file" },
   "ERROR_REQUEST_CANCELED": { "code": 21, "message": "canceled request" },
   "ERROR_DEBUG": { "code": 22, "message": "intentional debug error" },
-  "ERROR_NOT_YET_IMPLEMENTED": { "code": 23, "message": "not yet implemented" },
   "ERROR_IP_ADDRESS_INVALID": { "code": 25, "message": "IP address is invalid" },
   "ERROR_FILE_EXISTS": { "code": 27, "message": "file exists" },
   "ERROR_LOCKED": { "code": 28, "message": "locked" },
--- a/lib/Basics/Exceptions.cpp
+++ b/lib/Basics/Exceptions.cpp
@@ … @@ char const* Exception::what() const throw() { return _errorMessage.c_str(); }
 void Exception::appendLocation() {
   if (_code == TRI_ERROR_INTERNAL) {
     _errorMessage += std::string(" (exception location: ") + _file + ":" + std::to_string(_line) + "). Please report this error to arangodb.com";
-  } else if (_code == TRI_ERROR_OUT_OF_MEMORY ||
-             _code == TRI_ERROR_NOT_YET_IMPLEMENTED) {
+  } else if (_code == TRI_ERROR_OUT_OF_MEMORY) {
     _errorMessage += std::string(" (exception location: ") + _file + ":" + std::to_string(_line) + ")";
   }

--- a/lib/Basics/Exceptions.h
+++ b/lib/Basics/Exceptions.h
@@ … @@
 #define THROW_ARANGO_EXCEPTION_MESSAGE(code, message) \
   throw arangodb::basics::Exception(code, message, __FILE__, __LINE__)

-/// @brief throws an arango exception with an error code "not yet implemented"
-#define THROW_ARANGO_NOT_YET_IMPLEMENTED() \
-  throw arangodb::basics::Exception(TRI_ERROR_NOT_YET_IMPLEMENTED, std::string(TRI_errno_string(TRI_ERROR_NOT_YET_IMPLEMENTED)) + " - function " + __func__, __FILE__, __LINE__)
-
 namespace arangodb {
 namespace basics {

--- a/lib/Basics/errors.dat
+++ b/lib/Basics/errors.dat
@@ … @@ ERROR_CANNOT_CREATE_DIRECTORY,19,"cannot create directory","Will be raised when
 ERROR_CANNOT_CREATE_TEMP_FILE,20,"cannot create temporary file","Will be raised when an attempt to create a temporary file fails."
 ERROR_REQUEST_CANCELED,21,"canceled request","Will be raised when a request is canceled by the user."
 ERROR_DEBUG,22,"intentional debug error","Will be raised intentionally during debugging."
-ERROR_NOT_YET_IMPLEMENTED,23,"not yet implemented","Will be raised when hitting an unimplemented feature that will be implemented soon."
 ERROR_IP_ADDRESS_INVALID,25,"IP address is invalid","Will be raised when the structure of an IP address is invalid."
 ERROR_FILE_EXISTS,27,"file exists","Will be raised when a file already exists."
 ERROR_LOCKED,28,"locked","Will be raised when a resource or an operation is locked."
--- a/lib/Basics/voc-errors.cpp
+++ b/lib/Basics/voc-errors.cpp
@@ … @@ void TRI_InitializeErrorMessages() {
   REG_ERROR(ERROR_CANNOT_CREATE_TEMP_FILE, "cannot create temporary file");
   REG_ERROR(ERROR_REQUEST_CANCELED, "canceled request");
   REG_ERROR(ERROR_DEBUG, "intentional debug error");
-  REG_ERROR(ERROR_NOT_YET_IMPLEMENTED, "not yet implemented");
   REG_ERROR(ERROR_IP_ADDRESS_INVALID, "IP address is invalid");
   REG_ERROR(ERROR_FILE_EXISTS, "file exists");
   REG_ERROR(ERROR_LOCKED, "locked");
--- a/lib/Basics/voc-errors.h
+++ b/lib/Basics/voc-errors.h
@@ … @@
 ///   Will be raised when a request is canceled by the user.
 /// - 22: @LIT{intentional debug error}
 ///   Will be raised intentionally during debugging.
-/// - 23: @LIT{not yet implemented}
-///   Will be raised when hitting an unimplemented feature that will be
-///   implemented soon.
 /// - 25: @LIT{IP address is invalid}
 ///   Will be raised when the structure of an IP address is invalid.
 /// - 27: @LIT{file exists}
@@ … @@ void TRI_InitializeErrorMessages();

 #define TRI_ERROR_DEBUG                                                   (22)

-////////////////////////////////////////////////////////////////////////////////
-/// @brief 23: ERROR_NOT_YET_IMPLEMENTED
-///
-/// not yet implemented
-///
-/// Will be raised when hitting an unimplemented feature that will be
-/// implemented soon.
-////////////////////////////////////////////////////////////////////////////////
-
-#define TRI_ERROR_NOT_YET_IMPLEMENTED                                     (23)
-
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief 25: ERROR_IP_ADDRESS_INVALID
 ///
--- a/lib/Rest/GeneralResponse.cpp
+++ b/lib/Rest/GeneralResponse.cpp
@@ … @@ rest::ResponseCode GeneralResponse::responseCode(int code) {

     case TRI_ERROR_CLUSTER_UNSUPPORTED:
     case TRI_ERROR_NOT_IMPLEMENTED:
-    case TRI_ERROR_NOT_YET_IMPLEMENTED:
       return ResponseCode::NOT_IMPLEMENTED;

     default:
|
remove unused error code
|
arangodb/arangodb
|
bbdb19179745a1c074cabf3f3a91e05fd8d3f3bb
|
2017-05-22T11:53:10Z
|
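The `GeneralResponse.cpp` hunk in the commit above removes one case label from the internal-error-to-HTTP-status switch. A minimal sketch of the resulting mapping, in plain Python rather than ArangoDB's C++; the `500` fallback stands in for the rest of the real switch and is an assumption for illustration only:

```python
# Illustrative sketch (not ArangoDB source): after TRI_ERROR_NOT_YET_IMPLEMENTED
# is deleted, only the two remaining codes still map to HTTP 501.
NOT_IMPLEMENTED_CODES = {
    "TRI_ERROR_CLUSTER_UNSUPPORTED",
    "TRI_ERROR_NOT_IMPLEMENTED",
}

def response_code(error_name):
    """Map an internal error name to an HTTP status code."""
    if error_name in NOT_IMPLEMENTED_CODES:
        return 501  # Not Implemented
    # Placeholder default branch; the real switch handles many more codes.
    return 500
```

The removed code therefore falls through to the default branch instead of returning 501, which is consistent with the macro and error-table entries being deleted in the same commit.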
--- a/docs/docs/algorithms.xml
+++ b/docs/docs/algorithms.xml
@@ … @@
         <item>vector</item>
         <item>point</item>
         <item>rotate_point</item>
+        <item>point_rotator</item>
         <item>centered_rect</item>
         <item>translate_rect</item>
         <item>resize_rect</item>
@@ … @@

    </component>

+   <!-- ************************************************************************* -->
+
+   <component>
+      <name>point_rotator</name>
+      <file>dlib/geometry.h</file>
+      <spec_file link="true">dlib/geometry/vector_abstract.h</spec_file>
+      <description>
+         This is an object that rotates a 2D <a href="#vector">vector</a> or
+         <a href="#point">point</a> object about the origin.
+      </description>
+
+   </component>
+
    <!-- ************************************************************************* -->

    <component>
--- a/docs/docs/term_index.xml
+++ b/docs/docs/term_index.xml
@@ … @@
    <term link="algorithms.html#vector" name="vector"/>
    <term link="algorithms.html#point" name="point"/>
    <term link="algorithms.html#rotate_point" name="rotate_point"/>
+   <term link="algorithms.html#point_rotator" name="point_rotator"/>
    <term link="algorithms.html#running_stats" name="running_stats"/>
|
Updated the docs
|
davisking/dlib
|
f42cff91b40c23cdab8e5b9e8f6f98d3fa76da8a
|
2009-04-04T16:17:39Z
|
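The `point_rotator` documented above rotates 2D points/vectors. The underlying operation can be sketched as a standalone helper; this is the standard 2D rotation formula, not dlib's C++ implementation, and the counterclockwise sign convention is an assumption here:

```python
import math

def rotate_point(center, p, angle):
    """Rotate p about center by `angle` radians (counterclockwise for
    positive angles), the operation rotate_point/point_rotator expose."""
    s, c = math.sin(angle), math.cos(angle)
    dx, dy = p[0] - center[0], p[1] - center[1]
    # Apply the 2x2 rotation matrix to the offset, then translate back.
    return (center[0] + c * dx - s * dy,
            center[1] + s * dx + c * dy)
```

A `point_rotator`-style object would simply precompute `s` and `c` once and reuse them across many points.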
--- a/tests/cpp-tests/Classes/ShaderTest/ShaderTest.cpp
+++ b/tests/cpp-tests/Classes/ShaderTest/ShaderTest.cpp
@@ … @@ class SpriteBlur : public Sprite
 {
 public:
     ~SpriteBlur();
-    void setBlurSize(float f);
     bool initWithTexture(Texture2D* texture, const Rect& rect);
     void initGLProgram();

     static SpriteBlur* create(const char* pszFileName);
+    void setBlurRadius(float radius);
+    void setBlurSampleNum(float num);

 protected:
-
-    int _blurRadius;
-    Vec2 _pixelSize;
-
-    int _samplingRadius;
-    // gaussian = cons * exp((dx*dx + dy*dy) * scale);
-    float _scale;
-    float _cons;
-    float _weightSum;
+    float _blurRadius;
+    float _blurSampleNum;
 };

 SpriteBlur::~SpriteBlur()
@@ … @@ bool SpriteBlur::initWithTexture(Texture2D* texture, const Rect& rect)
     _eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
 #endif

-    auto s = getTexture()->getContentSizeInPixels();
-
-    _pixelSize = Vec2(1/s.width, 1/s.height);
-
-    _samplingRadius = 0;
-    this->initGLProgram();
-
-    getGLProgramState()->setUniformVec2("onePixelSize", _pixelSize);
+    initGLProgram();

     return true;
 }
@@ … @@ void SpriteBlur::initGLProgram()

     auto glProgramState = GLProgramState::getOrCreateWithGLProgram(program);
     setGLProgramState(glProgramState);
+
+    auto size = getTexture()->getContentSizeInPixels();
+    getGLProgramState()->setUniformVec2("resolution", size);
+    getGLProgramState()->setUniformFloat("blurRadius", _blurRadius);
+    getGLProgramState()->setUniformFloat("sampleNum", 7.0f);
 }

-void SpriteBlur::setBlurSize(float f)
+void SpriteBlur::setBlurRadius(float radius)
 {
-    if (_blurRadius == (int)f)
-        return;
-    _blurRadius = (int)f;
-
-    _samplingRadius = _blurRadius;
-    if (_samplingRadius > 10)
-    {
-        _samplingRadius = 10;
-    }
-    if (_blurRadius > 0)
-    {
-        float sigma = _blurRadius / 2.0f;
-        _scale = -0.5f / (sigma * sigma);
-        _cons = -1.0f * _scale / 3.141592f;
-        _weightSum = -_cons;
-
-        float weight;
-        int squareX;
-        for (int dx = 0; dx <= _samplingRadius; ++dx)
-        {
-            squareX = dx * dx;
-            weight = _cons * exp(squareX * _scale);
-            _weightSum += 2.0 * weight;
-            for (int dy = 1; dy <= _samplingRadius; ++dy)
-            {
-                weight = _cons * exp((squareX + dy * dy) * _scale);
-                _weightSum += 4.0 * weight;
-            }
-        }
-    }
-    log("_blurRadius: %d", _blurRadius);
+    _blurRadius = radius;
+    getGLProgramState()->setUniformFloat("blurRadius", _blurRadius);
+}

-    getGLProgramState()->setUniformVec4("gaussianCoefficient", Vec4(_samplingRadius, _scale, _cons, _weightSum));
+void SpriteBlur::setBlurSampleNum(float num)
+{
+    _blurSampleNum = num;
+    getGLProgramState()->setUniformFloat("sampleNum", _blurSampleNum);
 }

 // ShaderBlur
@@ … @@ std::string ShaderBlur::subtitle() const
     return "Gaussian blur";
 }

-ControlSlider* ShaderBlur::createSliderCtl()
+void ShaderBlur::createSliderCtls()
 {
     auto screenSize = Director::getInstance()->getWinSize();

-    ControlSlider* slider = ControlSlider::create("extensions/sliderTrack.png", "extensions/sliderProgress.png", "extensions/sliderThumb.png");
-    slider->setAnchorPoint(Vec2(0.5f, 1.0f));
-    slider->setMinimumValue(0.0f); // Sets the min value of range
-    slider->setMaximumValue(25.0f); // Sets the max value of range
+    {
+        ControlSlider* slider = ControlSlider::create("extensions/sliderTrack.png", "extensions/sliderProgress.png", "extensions/sliderThumb.png");
+        slider->setAnchorPoint(Vec2(0.5f, 1.0f));
+        slider->setMinimumValue(0.0f);
+        slider->setMaximumValue(25.0f);
+        slider->setScale(0.6f);
+        slider->setPosition(Vec2(screenSize.width / 4.0f, screenSize.height / 3.0f));
+        slider->addTargetWithActionForControlEvents(this, cccontrol_selector(ShaderBlur::onRadiusChanged), Control::EventType::VALUE_CHANGED);
+        slider->setValue(2.0f);
+        addChild(slider);
+        _sliderRadiusCtl = slider;
+
+        auto label = Label::createWithTTF("Blur Radius", "fonts/arial.ttf", 12.0f);
+        addChild(label);
+        label->setPosition(Vec2(screenSize.width / 4.0f, screenSize.height / 3.0f - 24.0f));
+    }

-    slider->setPosition(Vec2(screenSize.width / 2.0f, screenSize.height / 3.0f));
-
-    // When the value of the slider will change, the given selector will be call
-    slider->addTargetWithActionForControlEvents(this, cccontrol_selector(ShaderBlur::sliderAction), Control::EventType::VALUE_CHANGED);
-    slider->setValue(2.0f);
-
-    return slider;
+    {
+        ControlSlider* slider = ControlSlider::create("extensions/sliderTrack.png", "extensions/sliderProgress.png", "extensions/sliderThumb.png");
+        slider->setAnchorPoint(Vec2(0.5f, 1.0f));
+        slider->setMinimumValue(0.0f);
+        slider->setMaximumValue(11.0f);
+        slider->setScale(0.6f);
+        slider->setPosition(Vec2(screenSize.width * 3 / 4.0f, screenSize.height / 3.0f));
+        slider->addTargetWithActionForControlEvents(this, cccontrol_selector(ShaderBlur::onSampleNumChanged), Control::EventType::VALUE_CHANGED);
+        slider->setValue(7.0f);
+        addChild(slider);
+        _sliderNumCtrl = slider;
+
+        auto label = Label::createWithTTF("Blur Sample Num", "fonts/arial.ttf", 12.0f);
+        addChild(label);
+        label->setPosition(Vec2(screenSize.width * 3 / 4.0f, screenSize.height / 3.0f - 24.0f));
+    }

 }

@@ … @@ bool ShaderBlur::init()
     if (ShaderTestDemo::init())
     {
         _blurSprite = SpriteBlur::create("Images/grossini.png");
-
         auto sprite = Sprite::create("Images/grossini.png");
-
         auto s = Director::getInstance()->getWinSize();
         _blurSprite->setPosition(Vec2(s.width/3, s.height/2));
         sprite->setPosition(Vec2(2*s.width/3, s.height/2));
@@ … @@ bool ShaderBlur::init()
         addChild(_blurSprite);
         addChild(sprite);

-        _sliderCtl = createSliderCtl();
+        createSliderCtls();

-        addChild(_sliderCtl);
         return true;
     }

     return false;
 }

-void ShaderBlur::sliderAction(Ref* sender, Control::EventType controlEvent)
+void ShaderBlur::onRadiusChanged(Ref* sender, Control::EventType)
+{
+    ControlSlider* slider = (ControlSlider*)sender;
+    _blurSprite->setBlurRadius(slider->getValue());
+}
+
+void ShaderBlur::onSampleNumChanged(Ref* sender, Control::EventType)
 {
     ControlSlider* slider = (ControlSlider*)sender;
-    _blurSprite->setBlurSize(slider->getValue());
+    _blurSprite->setBlurSampleNum(slider->getValue());
 }

 // ShaderRetroEffect
--- a/tests/cpp-tests/Classes/ShaderTest/ShaderTest.h
+++ b/tests/cpp-tests/Classes/ShaderTest/ShaderTest.h
@@ … @@ class ShaderBlur : public ShaderTestDemo
     virtual std::string title() const override;
     virtual std::string subtitle() const override;
     virtual bool init();
-    ControlSlider* createSliderCtl();
-    void sliderAction(Ref* sender, Control::EventType controlEvent);
+    void createSliderCtls();
+    void onRadiusChanged(Ref* sender, Control::EventType controlEvent);
+    void onSampleNumChanged(Ref* sender, Control::EventType controlEvent);
+
 protected:
     SpriteBlur* _blurSprite;
-    ControlSlider* _sliderCtl;
+    ControlSlider* _sliderRadiusCtl;
+    ControlSlider* _sliderNumCtrl;
 };

 class ShaderRetroEffect : public ShaderTestDemo
--- a/tests/cpp-tests/Classes/ShaderTest/ShaderTest2.cpp
+++ b/tests/cpp-tests/Classes/ShaderTest/ShaderTest2.cpp
@@ … @@ class EffectBlur : public Effect
 {
 public:
     CREATE_FUNC(EffectBlur);
-
     virtual void setTarget(EffectSprite* sprite) override;
-
-    void setGaussian(float value);
-    void setCustomUniforms();
-    void setBlurSize(float f);
+    void setBlurRadius(float radius);
+    void setBlurSampleNum(float num);

 protected:
-    bool init(float blurSize = 3.0);
-
-    int _blurRadius;
-    Vec2 _pixelSize;
-
-    int _samplingRadius;
-    float _scale;
-    float _cons;
-    float _weightSum;
+    bool init(float blurRadius = 10.0f, float sampleNum = 5.0f);
+
+    float _blurRadius;
+    float _blurSampleNum;
 };

 void EffectBlur::setTarget(EffectSprite* sprite)
 {
-    Size s = sprite->getTexture()->getContentSizeInPixels();
-    _pixelSize = Vec2(1/s.width, 1/s.height);
-    _glprogramstate->setUniformVec2("onePixelSize", _pixelSize);
+    Size size = sprite->getTexture()->getContentSizeInPixels();
+    _glprogramstate->setUniformVec2("resolution", size);
+    _glprogramstate->setUniformFloat("blurRadius", _blurRadius);
+    _glprogramstate->setUniformFloat("sampleNum", _blurSampleNum);
 }

-bool EffectBlur::init(float blurSize)
+bool EffectBlur::init(float blurRadius, float sampleNum)
 {
     initGLProgramState("Shaders/example_Blur.fsh");
-    auto s = Size(100, 100);
-
-    _blurRadius = 0;
-    _pixelSize = Vec2(1/s.width, 1/s.height);
-    _samplingRadius = 0;
-
-    setBlurSize(blurSize);
-
-    _glprogramstate->setUniformVec2("onePixelSize", _pixelSize);
-    _glprogramstate->setUniformVec4("gaussianCoefficient", Vec4(_samplingRadius, _scale, _cons, _weightSum));
+    _blurRadius = blurRadius;
+    _blurSampleNum = sampleNum;
+
     return true;
 }

-void EffectBlur::setBlurSize(float f)
+void EffectBlur::setBlurRadius(float radius)
 {
-    if (_blurRadius == (int)f)
-        return;
-    _blurRadius = (int)f;
+    _blurRadius = radius;
+}

-    _samplingRadius = _blurRadius;
-    if (_samplingRadius > 10)
-    {
-        _samplingRadius = 10;
-    }
-    if (_blurRadius > 0)
-    {
-        float sigma = _blurRadius / 2.0f;
-        _scale = -0.5f / (sigma * sigma);
-        _cons = -1.0f * _scale / 3.141592f;
-        _weightSum = -_cons;
-
-        float weight;
-        int squareX;
-        for (int dx = 0; dx <= _samplingRadius; ++dx)
-        {
-            squareX = dx * dx;
-            weight = _cons * exp(squareX * _scale);
-            _weightSum += 2.0 * weight;
-            for (int dy = 1; dy <= _samplingRadius; ++dy)
-            {
-                weight = _cons * exp((squareX + dy * dy) * _scale);
-                _weightSum += 4.0 * weight;
-            }
-        }
-    }
+void EffectBlur::setBlurSampleNum(float num)
+{
+    _blurSampleNum = num;
 }

 // Outline
--- a/tests/cpp-tests/Resources/Shaders/example_Blur.fsh
+++ b/tests/cpp-tests/Resources/Shaders/example_Blur.fsh
@@ … @@
-// Shader taken from: http://webglsamples.googlecode.com/hg/electricflower/electricflower.html
-
 #ifdef GL_ES
 precision mediump float;
 #endif
@@ … @@ precision mediump float;
 varying vec4 v_fragmentColor;
 varying vec2 v_texCoord;

-uniform vec4 gaussianCoefficient;
-uniform vec2 onePixelSize;
+uniform vec2 resolution;
+uniform float blurRadius;
+uniform float sampleNum;
+
+vec3 blur(vec2);

-void main() {
-    if (gaussianCoefficient.x > 0.0) {
-        vec4 sum = vec4(0.0);
-        vec2 offset;
-        float weight;
-        float squareX;
-
-        for (float dx = 0.0; dx <= gaussianCoefficient.x; dx += 1.0) {
-            squareX = dx * dx;
-            weight = gaussianCoefficient.z * exp(squareX * gaussianCoefficient.y);
-
-            offset.x = -dx * onePixelSize.x;
-            offset.y = 0.0;
-            sum += texture2D(CC_Texture0, v_texCoord + offset) * weight;
-
-            offset.x = dx * onePixelSize.x;
-            sum += texture2D(CC_Texture0, v_texCoord + offset) * weight;
-
-            for (float dy = 1.0; dy <= gaussianCoefficient.x; dy += 1.0) {
-                weight = gaussianCoefficient.z * exp((squareX + dy * dy) * gaussianCoefficient.y);
-
-                offset.x = -dx * onePixelSize.x;
-                offset.y = -dy * onePixelSize.y;
-                sum += texture2D(CC_Texture0, v_texCoord + offset) * weight;
-
-                offset.y = dy * onePixelSize.y;
-                sum += texture2D(CC_Texture0, v_texCoord + offset) * weight;
-
-                offset.x = dx * onePixelSize.x;
-                sum += texture2D(CC_Texture0, v_texCoord + offset) * weight;
-
-                offset.y = -dy * onePixelSize.y;
-                sum += texture2D(CC_Texture0, v_texCoord + offset) * weight;
-            }
-        }
-        sum -= texture2D(CC_Texture0, v_texCoord) * gaussianCoefficient.z;
-        sum /= gaussianCoefficient.w;
-        gl_FragColor = sum * v_fragmentColor;
-    }
-    else {
-        gl_FragColor = texture2D(CC_Texture0, v_texCoord) * v_fragmentColor;
-    }
+void main(void)
+{
+    vec3 col = blur(v_texCoord);
+    gl_FragColor = vec4(col, 1.0) * v_fragmentColor;
 }

+vec3 blur(vec2 p)
+{
+    if (blurRadius > 0.0 && sampleNum > 1.0)
+    {
+        vec3 col = vec3(0);
+        vec2 unit = 1.0 / resolution.xy;
+
+        float r = blurRadius;
+        float sampleStep = r / sampleNum;
+
+        float count = 0.0;
+
+        for (float x = -r; x < r; x += sampleStep)
+        {
+            for (float y = -r; y < r; y += sampleStep)
+            {
+                float weight = (r - abs(x)) * (r - abs(y));
+                col += texture2D(CC_Texture0, p + vec2(x * unit.x, y * unit.y)).rgb * weight;
+                count += weight;
+            }
+        }
+
+        return col / count;
+    }
+
+    return texture2D(CC_Texture0, p).rgb;
+}
|
Merge pull request from visiblelight/new_blur_shader
|
cocos2d/cocos2d-x
|
cf7c603d79f48112beec0222b9e6eb7d2d096861
|
2014-06-12T10:20:38Z
|
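The rewritten `example_Blur.fsh` in the commit above replaces the Gaussian kernel with a separable tent kernel: each tap at offset `(x, y)` gets weight `(r - |x|) * (r - |y|)` and the sum is divided by the accumulated weight. A plain-Python mirror of that weighting (an illustrative sketch, with `sample_num` taken as an integer here although the shader uniform is a float):

```python
def blur_weights(radius, sample_num):
    """Tap weights of the new shader loop: offsets every radius/sample_num
    texels over [-r, r), tent weight (r - |x|) * (r - |y|) per tap,
    normalized to sum to 1 (the shader divides by `count` instead)."""
    step = radius / sample_num
    offsets = [-radius + i * step for i in range(2 * sample_num)]
    raw = [[(radius - abs(x)) * (radius - abs(y)) for x in offsets]
           for y in offsets]
    total = sum(sum(row) for row in raw)
    return [[w / total for w in row] for row in raw]
```

Because the weight falls off linearly with |x| and |y|, the center tap dominates and the result approximates a Gaussian at a fraction of the per-pixel cost of the old double exp() loop.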
--- a/README.rst
+++ b/README.rst
@@ … @@ Features
 * Two APIs: faster concatenation-based write API and slower (but still
   very fast) replacement-based format API with positional arguments for
   localization.
-* Write API similar to the one used by IOStreams but much faster and more
-  consistent.
+* Write API similar to the one used by IOStreams but stateless allowing
+  faster implementation.
 * Format API with `format string syntax
   <http://cppformat.readthedocs.org/en/latest/syntax.html>`_
   similar to the one used by `str.format
@@ … @@ Features
 * Clean warning-free codebase even on high warning levels
   (-Wall -Wextra -pedantic).
 * Support for wide strings.
+* Optional header-only configuration enabled with ``FMT_HEADER_ONLY``.

 See the `documentation <http://cppformat.readthedocs.org/en/stable/>`_ for more details.
|
Update README.rst
|
fmtlib/fmt
|
65cd4835cd467f9cd4e520133ab4def3f6aa4a24
|
2015-02-19T16:26:03Z
|
--- a/Code/Sandbox/EditorQt/Objects/RopeObject.cpp
+++ b/Code/Sandbox/EditorQt/Objects/RopeObject.cpp
@@ … @@ void CRopeObject::UpdateGameArea()

 		UpdateRopeLinks();
 	}
+	if (m_pEntity && m_physicsState)
+		m_pEntity->SetPhysicsState(m_physicsState);
 }

 	UpdateAudioData();
@@ … @@ void CRopeObject::OnEvent(ObjectEvent event)
 		m_bAreaModified = true;
 		UpdateGameArea();
 		break;
+	case EVENT_PHYSICS_GETSTATE: case EVENT_PHYSICS_RESETSTATE: case EVENT_PHYSICS_APPLYSTATE:
+		CEntityObject::OnEvent(event);
+		break;
 	}
 }

--- a/Code/Sandbox/Plugins/EditorCommon/AssetSystem/Browser/AssetBrowser.cpp
+++ b/Code/Sandbox/Plugins/EditorCommon/AssetSystem/Browser/AssetBrowser.cpp
@@ … @@ class CDraggingIntoRootOf : public TView
 			{
 			}
 		}
 	}
+
+	virtual void mouseReleaseEvent(QMouseEvent* event) override
+	{
+		// Qt documentation says that it is possible for the user to deselect
+		// the selected item with QAbstractItemView::SingleSelection.
+		// but it does not work this way.
+		// Remove the following workaround when the Qt bug fixed: https://bugreports.qt.io/browse/QTBUG-75898
+		auto temp = TView::selectionMode();
+		TView::setSelectionMode(QAbstractItemView::ExtendedSelection);
+		TView::mouseReleaseEvent(event);
+		TView::setSelectionMode(temp);
+	}
+
 private:
 	QString m_root;
 };
@@ … @@ void CAssetBrowser::InitActions()
 	m_pActionSave = RegisterAction("general.save", &CAssetBrowser::OnSave);

 	m_pActionShowInFileExplorer = RegisterAction("path_utils.show_in_file_explorer", &CAssetBrowser::OnShowInFileExplorer);
-	RegisterAction("asset.generate_thumbnails", &CAssetBrowser::OnGenerateThumbmails);
+	m_pActionGenerateThumbnails = RegisterAction("asset.generate_thumbnails", &CAssetBrowser::OnGenerateThumbmails);
 	m_pActionSave = RegisterAction("asset.save_all", &CAssetBrowser::OnSaveAll);
 	m_pActionShowDetails = RegisterAction("asset.view_details", &CAssetBrowser::OnDetailsView);
 	m_pActionShowThumbnails = RegisterAction("asset.view_thumbnails", &CAssetBrowser::OnThumbnailsView);
@@ … @@ void CAssetBrowser::InitMenus()
 	CAbstractMenu* const pMenuFile = GetMenu(CEditor::MenuItems::FileMenu);
 	pMenuFile->signalAboutToShow.Connect([pMenuFile, this]()
 	{
-		pMenuFile->Clear();
 		auto folderSelection = m_pFoldersView->GetSelectedFolders();
-		const QString folder = (folderSelection.size() == 1 && !CAssetFoldersModel::GetInstance()->IsReadOnlyFolder(folderSelection[0]))
-		                       ? folderSelection[0]
-		                       : QString();
-
+		pMenuFile->Clear();
 		pMenuFile->AddCommandAction(GetAction("general.new_folder"));
 		CAbstractMenu* subMenu = pMenuFile->CreateMenu(tr("New Asset"));
-		FillCreateAssetMenu(subMenu, folder);
-
-		const bool bEnableImport = !folder.isNull();
+		FillCreateAssetMenu(subMenu, folderSelection.size() == 1 && !CAssetFoldersModel::GetInstance()->IsReadOnlyFolder(folderSelection[0]));

 		int section = pMenuFile->GetNextEmptySection();
 		pMenuFile->AddCommandAction(GetAction("general.import"), section);
@@ … @@ void CAssetBrowser::InitMenus()
 		m_pActionSave->setEnabled(isModified);

 		section = pMenuFile->GetNextEmptySection();
-		pMenuFile->AddCommandAction(GetAction("asset.generate_thumbnails"), section);
+		pMenuFile->AddCommandAction(m_pActionGenerateThumbnails, section);

 		pMenuFile->AddCommandAction(m_pActionGenerateRepairMetaData, section);
 		m_pActionGenerateRepairMetaData->setEnabled(!CAssetManager::GetInstance()->IsScanning());
@@ … @@ void CAssetBrowser::SelectAsset(const CAsset& asset) const
 }

 // TODO: Only add menu entries for asset types that support creating new assets, i.e., implement CAssetType::Create().
-void CAssetBrowser::FillCreateAssetMenu(CAbstractMenu* menu, const QString& folder)
+void CAssetBrowser::FillCreateAssetMenu(CAbstractMenu* menu, bool enable)
 {
 	for (CAssetType* pAssetType : CAssetManager::GetInstance()->GetAssetTypes())
 	{
@@ … @@ void CAssetBrowser::FillCreateAssetMenu(CAbstractMenu* menu, bool enable)
 			continue;
 		}

-		const bool bEnableAction = !folder.isNull();
-
 		QAction* const pAction = menu->CreateAction(pAssetType->GetUiTypeName());
 		connect(pAction, &QAction::triggered, [this, pAssetType]() { BeginCreateAsset(*pAssetType, nullptr); });
-		pAction->setEnabled(bEnableAction);
+		pAction->setEnabled(enable);
 	}
 }
@@ … @@ void CAssetBrowser::UpdateSelectionDependantActions()
 	std::vector<string> folders;
 	GetSelection(assets, folders);

+	if (assets.empty() && folders.empty())
+	{
+		folders = GetSelectedFolders();
+	}
+
 	const bool hasAssetsSelected = !assets.empty();
-	const bool hasSelection = hasAssetsSelected || !folders.empty();
+	const bool hasWritableFolderSelected = folders.size() == 1 && !CAssetFoldersModel::GetInstance()->IsReadOnlyFolder(QtUtil::ToQString(folders[0]));

 	m_pActionManageWorkFiles->setEnabled(hasAssetsSelected);
 	m_pActionDelete->setEnabled(hasAssetsSelected);
@@ … @@ void CAssetBrowser::UpdateSelectionDependantActions()
 	m_pActionDuplicate->setEnabled(hasAssetsSelected);
 	m_pActionSave->setEnabled(hasAssetsSelected);
 	m_pActionReimport->setEnabled(hasAssetsSelected);
+
+	GetAction("general.new_folder")->setEnabled(hasWritableFolderSelected);
+	GetAction("general.import")->setEnabled(hasWritableFolderSelected);
+	m_pActionShowInFileExplorer->setEnabled(hasWritableFolderSelected);
+	m_pActionGenerateThumbnails->setEnabled(hasWritableFolderSelected);
+
+	UpdatePasteActionState();
 }

 void CAssetBrowser::UpdatePasteActionState()
@@ … @@ void CAssetBrowser::CreateContextMenu(bool isFolderView /*= false*/)
 		}
 		BuildContextMenuForFolders(folders, abstractMenu);
 	}
-	else if (assets.empty() && folders.empty() && !IsRecursiveView()) // nothing selected in recursive view
+	else if (assets.empty() && folders.empty())
 	{
 		BuildContextMenuForEmptiness(abstractMenu);
 	}
@@ … @@ void CAssetBrowser::CreateContextMenu(bool isFolderView /*= false*/)

 void CAssetBrowser::BuildContextMenuForEmptiness(CAbstractMenu& abstractMenu)
 {
-	std::vector<string> selectedFolders = GetSelectedFolders();
-	CAssetFoldersModel* pModel = CAssetFoldersModel::GetInstance();
+	const std::vector<string> selectedFolders = GetSelectedFolders();
+	CAssetFoldersModel* const pModel = CAssetFoldersModel::GetInstance();

 	int foldersSection = abstractMenu.GetNextEmptySection();
 	abstractMenu.SetSectionName(foldersSection, "Folders");

 	auto folder = QtUtil::ToQString(selectedFolders[0]);
-	if (selectedFolders.size() == 1 && !pModel->IsReadOnlyFolder(folder))
-	{
-		abstractMenu.AddCommandAction(GetAction("general.new_folder"));
+	abstractMenu.AddCommandAction(GetAction("general.new_folder"));

-		CAbstractMenu* const pCreateAssetMenu = abstractMenu.CreateMenu(tr("New Asset"));
-		FillCreateAssetMenu(pCreateAssetMenu, folder);
+	CAbstractMenu* const pCreateAssetMenu = abstractMenu.CreateMenu(tr("New Asset"));
+	FillCreateAssetMenu(pCreateAssetMenu, selectedFolders.size() == 1 && !pModel->IsReadOnlyFolder(folder));

-		abstractMenu.AddCommandAction(GetAction("general.paste"), foldersSection);
-		UpdatePasteActionState();
+	abstractMenu.AddCommandAction(GetAction("general.paste"), foldersSection);
+	abstractMenu.AddCommandAction(GetAction("general.import"), foldersSection);
+	abstractMenu.AddCommandAction(m_pActionShowInFileExplorer, foldersSection);
+	abstractMenu.AddCommandAction(m_pActionGenerateThumbnails, foldersSection);

-		abstractMenu.AddCommandAction(GetAction("general.import"), foldersSection);
-		abstractMenu.AddCommandAction(m_pActionShowInFileExplorer, foldersSection);
-		abstractMenu.AddCommandAction(GetAction("asset.generate_thumbnails"), foldersSection);
-	}
+	int section = abstractMenu.GetNextEmptySection();
+	abstractMenu.AddCommandAction(m_pActionRecursiveView, section);
+	abstractMenu.AddCommandAction(m_pActionShowDetails, section);
+	abstractMenu.AddCommandAction(m_pActionShowThumbnails, section);
+	abstractMenu.AddCommandAction(m_pActionShowSplitHorizontally, section);
+	abstractMenu.AddCommandAction(m_pActionShowSplitVertically, section);
+
+	section = abstractMenu.GetNextEmptySection();
+	abstractMenu.AddCommandAction(m_pActionShowFoldersView, section);
+
+	UpdateSelectionDependantActions();

 	NotifyContextMenuCreation(abstractMenu, {}, selectedFolders);
 }
@@ … @@ void CAssetBrowser::BuildContextMenuForFolders(const std::vector<string>& folder
 	}

 	abstractMenu.AddCommandAction(m_pActionShowInFileExplorer);
-	abstractMenu.AddCommandAction(GetAction("asset.generate_thumbnails"));
+	abstractMenu.AddCommandAction(m_pActionGenerateThumbnails);

 	NotifyContextMenuCreation(abstractMenu, {}, folders);
 }
--- a/Code/Sandbox/Plugins/EditorCommon/AssetSystem/Browser/AssetBrowser.h
+++ b/Code/Sandbox/Plugins/EditorCommon/AssetSystem/Browser/AssetBrowser.h
@@ … @@ class EDITOR_COMMON_API CAssetBrowser : public CDockableEditor, public IAssetBro
 	void WaitUntilAssetsAreReady();
 	QWidget* CreateAssetsViewSelector();

-	void FillCreateAssetMenu(CAbstractMenu* menu, const QString& folder);
+	void FillCreateAssetMenu(CAbstractMenu* menu, bool enable);

 	void BeginCreateAsset(const CAssetType& type, const CAssetType::SCreateParams* pCreateParams);
 	void EndCreateAsset();
@@ … @@ class EDITOR_COMMON_API CAssetBrowser : public CDockableEditor, public IAssetBro
 	// ui components
 	CAssetFoldersView* m_pFoldersView = nullptr;
 	CBreadcrumbsBar* m_pBreadcrumbs = nullptr;
-	QCommandAction* m_pActionRecursiveView = nullptr;
-	QCommandAction* m_pActionShowFoldersView = nullptr;
-	QCommandAction* m_pActionManageWorkFiles = nullptr;
-	QCommandAction* m_pActionShowInFileExplorer = nullptr;
+	QCommandAction* m_pActionCopy = nullptr;
 	QCommandAction* m_pActionCopyName = nullptr;
 	QCommandAction* m_pActionCopyPath = nullptr;
-	QCommandAction* m_pActionShowThumbnails = nullptr;
-	QCommandAction* m_pActionShowDetails = nullptr;
-	QCommandAction* m_pActionShowSplitHorizontally = nullptr;
-	QCommandAction* m_pActionShowSplitVertically = nullptr;
 	QCommandAction* m_pActionDelete = nullptr;
-	QCommandAction* m_pActionRename = nullptr;
-	QCommandAction* m_pActionCopy = nullptr;
-	QCommandAction* m_pActionDuplicate = nullptr;
-	QCommandAction* m_pActionSave = nullptr;
-	QCommandAction* m_pActionPaste = nullptr;
-	QCommandAction* m_pActionReimport = nullptr;
 	QCommandAction* m_pActionDiscardChanges = nullptr;
+	QCommandAction* m_pActionDuplicate = nullptr;
 	QCommandAction* m_pActionGenerateRepairMetaData = nullptr;
+	QCommandAction* m_pActionGenerateThumbnails = nullptr;
 	QCommandAction*
m_pActionHideIrrelevantFolders = nullptr ; <nl> + QCommandAction * m_pActionManageWorkFiles = nullptr ; <nl> + QCommandAction * m_pActionPaste = nullptr ; <nl> + QCommandAction * m_pActionRecursiveView = nullptr ; <nl> + QCommandAction * m_pActionReimport = nullptr ; <nl> + QCommandAction * m_pActionRename = nullptr ; <nl> + QCommandAction * m_pActionSave = nullptr ; <nl> + QCommandAction * m_pActionShowDetails = nullptr ; <nl> + QCommandAction * m_pActionShowFoldersView = nullptr ; <nl> + QCommandAction * m_pActionShowInFileExplorer = nullptr ; <nl> + QCommandAction * m_pActionShowSplitHorizontally = nullptr ; <nl> + QCommandAction * m_pActionShowSplitVertically = nullptr ; <nl> + QCommandAction * m_pActionShowThumbnails = nullptr ; <nl> # if ASSET_BROWSER_USE_PREVIEW_WIDGET <nl> QCommandAction * m_pActionShowPreview = nullptr ; <nl> # endif <nl>
|
! I integrate from / / ce / main . . .
|
CRYTEK/CRYENGINE
|
f4de2709979eacdbe45d5f9453c86357ca26dffc
|
2019-05-23T11:02:06Z
|
mmm a / modules / videoio / include / opencv2 / videoio . hpp <nl> ppp b / modules / videoio / include / opencv2 / videoio . hpp <nl> enum VideoCaptureAPIs { <nl> } ; <nl> <nl> / * * @ brief % VideoCapture generic properties identifier . <nl> + <nl> + Reading / writing properties involves many layers . Some unexpected result might happens along this chain . <nl> + Effective behaviour depends from device hardware , driver and API Backend . <nl> @ sa videoio_flags_others , VideoCapture : : get ( ) , VideoCapture : : set ( ) <nl> * / <nl> enum VideoCaptureProperties { <nl> enum VideoCaptureProperties { <nl> CAP_PROP_FRAME_COUNT = 7 , / / ! < Number of frames in the video file . <nl> CAP_PROP_FORMAT = 8 , / / ! < Format of the % Mat objects returned by VideoCapture : : retrieve ( ) . <nl> CAP_PROP_MODE = 9 , / / ! < Backend - specific value indicating the current capture mode . <nl> - CAP_PROP_BRIGHTNESS = 10 , / / ! < Brightness of the image ( only for cameras ) . <nl> + CAP_PROP_BRIGHTNESS = 10 , / / ! < Brightness of the image ( only for those cameras that support ) . <nl> CAP_PROP_CONTRAST = 11 , / / ! < Contrast of the image ( only for cameras ) . <nl> CAP_PROP_SATURATION = 12 , / / ! < Saturation of the image ( only for cameras ) . <nl> CAP_PROP_HUE = 13 , / / ! < Hue of the image ( only for cameras ) . <nl> - CAP_PROP_GAIN = 14 , / / ! < Gain of the image ( only for cameras ) . <nl> - CAP_PROP_EXPOSURE = 15 , / / ! < Exposure ( only for cameras ) . <nl> + CAP_PROP_GAIN = 14 , / / ! < Gain of the image ( only for those cameras that support ) . <nl> + CAP_PROP_EXPOSURE = 15 , / / ! < Exposure ( only for those cameras that support ) . <nl> CAP_PROP_CONVERT_RGB = 16 , / / ! < Boolean flags indicating whether images should be converted to RGB . <nl> CAP_PROP_WHITE_BALANCE_BLUE_U = 17 , / / ! < Currently unsupported . <nl> CAP_PROP_RECTIFICATION = 18 , / / ! < Rectification flag for stereo cameras ( note : only supported by DC1394 v 2 . x backend currently ) . 
<nl> enum VideoCaptureProperties { <nl> CAP_PROP_TILT = 34 , <nl> CAP_PROP_ROLL = 35 , <nl> CAP_PROP_IRIS = 36 , <nl> - CAP_PROP_SETTINGS = 37 , / / ! Pop up video / camera filter dialog ( note : only supported by DSHOW backend currently . Property value is ignored ) <nl> + CAP_PROP_SETTINGS = 37 , / / ! < Pop up video / camera filter dialog ( note : only supported by DSHOW backend currently . The property value is ignored ) <nl> CAP_PROP_BUFFERSIZE = 38 , <nl> CAP_PROP_AUTOFOCUS = 39 <nl> } ; <nl> class CV_EXPORTS_W VideoCapture <nl> <nl> @ overload <nl> <nl> - This is an overloaded member function , provided for convenience . It differs from the above function only in what argument ( s ) it accepts . <nl> Parameters are similar as the constructor VideoCapture ( int index ) , except it takes an additional argument apiPreference . <nl> - @ return open ( cameraNum + apiPreference ) . <nl> + Definitely , is same as open ( int index ) where ` index = cameraNum + apiPreference ` <nl> + @ return ` true ` if the camera has been successfully opened . <nl> * / <nl> CV_WRAP bool open ( int cameraNum , int apiPreference ) ; <nl> <nl>
|
Fix misplaced description CAP_PROP_SETTINGS and other minor changes in videoio doc
|
opencv/opencv
|
eb768514c3c5e885f7bbe63e065f32b9a863b8d5
|
2017-01-26T16:10:32Z
|
mmm a / lib / ProgramOptions / program - options . cpp <nl> ppp b / lib / ProgramOptions / program - options . cpp <nl> static char * FillVariables ( char const * value ) { <nl> static struct option * InitOptionStructure ( struct option * option , <nl> char const * name , int hasArg , <nl> int * flag , int val ) { <nl> - option - > name = name ; <nl> + option - > name = const_cast < char * > ( name ) ; <nl> option - > has_arg = hasArg ; <nl> option - > flag = flag ; <nl> option - > val = 256 + val ; <nl>
|
fix , solaris : invalid conversion from ' const char * ' to ' char * '
|
arangodb/arangodb
|
63ef328ad66330a1d44b7d74e01cde3c8a35946a
|
2016-01-26T07:29:04Z
|
mmm a / folly / futures / Future - inl . h <nl> ppp b / folly / futures / Future - inl . h <nl> Future < std : : pair < <nl> size_t , <nl> Try < typename std : : iterator_traits < InputIterator > : : value_type : : value_type > > > <nl> collectAny ( InputIterator first , InputIterator last ) { <nl> + return collectAnySemiFuture ( first , last ) . via ( & InlineExecutor : : instance ( ) ) ; <nl> + } <nl> + <nl> + template < class InputIterator > <nl> + SemiFuture < std : : pair < <nl> + size_t , <nl> + Try < typename std : : iterator_traits < InputIterator > : : value_type : : value_type > > > <nl> + collectAnySemiFuture ( InputIterator first , InputIterator last ) { <nl> using F = typename std : : iterator_traits < InputIterator > : : value_type ; <nl> using T = typename F : : value_type ; <nl> <nl> collectAny ( InputIterator first , InputIterator last ) { <nl> std : : atomic < bool > done { false } ; <nl> } ; <nl> <nl> + std : : vector < folly : : Executor : : KeepAlive < futures : : detail : : DeferredExecutor > > <nl> + executors ; <nl> + futures : : detail : : stealDeferredExecutors ( executors , first , last ) ; <nl> + <nl> auto ctx = std : : make_shared < Context > ( ) ; <nl> for ( size_t i = 0 ; first ! = last ; + + first , + + i ) { <nl> first - > setCallback_ ( [ i , ctx ] ( Try < T > & & t ) { <nl> collectAny ( InputIterator first , InputIterator last ) { <nl> } <nl> } ) ; <nl> } <nl> - return ctx - > p . getSemiFuture ( ) . via ( & InlineExecutor : : instance ( ) ) ; <nl> + auto future = ctx - > p . getSemiFuture ( ) ; <nl> + if ( ! executors . empty ( ) ) { <nl> + future = std : : move ( future ) . defer ( <nl> + [ ] ( Try < typename decltype ( future ) : : value_type > & & t ) { <nl> + return std : : move ( t ) . 
value ( ) ; <nl> + } ) ; <nl> + auto deferredExecutor = futures : : detail : : getDeferredExecutor ( future ) ; <nl> + deferredExecutor - > setNestedExecutors ( std : : move ( executors ) ) ; <nl> + } <nl> + return future ; <nl> } <nl> <nl> / / collectAnyWithoutException ( iterator ) <nl> mmm a / folly / futures / helpers . h <nl> ppp b / folly / futures / helpers . h <nl> Future < std : : pair < <nl> size_t , <nl> Try < typename std : : iterator_traits < InputIterator > : : value_type : : value_type > > > <nl> collectAny ( InputIterator first , InputIterator last ) ; <nl> + template < class InputIterator > <nl> + SemiFuture < std : : pair < <nl> + size_t , <nl> + Try < typename std : : iterator_traits < InputIterator > : : value_type : : value_type > > > <nl> + collectAnySemiFuture ( InputIterator first , InputIterator last ) ; <nl> <nl> / / / Sugar for the most common case <nl> template < class Collection > <nl>
|
add collectAnySemiFuture
|
facebook/folly
|
0a2bbe62ddd1d489508ee2b86ea2946b5c7577e4
|
2019-04-30T01:01:21Z
|
mmm a / src / csharp / Grpc . Auth / GrpcCredentials . cs <nl> ppp b / src / csharp / Grpc . Auth / GrpcCredentials . cs <nl> <nl> namespace Grpc . Auth <nl> { <nl> / / / < summary > <nl> - / / / Factory methods to create instances of < see cref = " Credentials " / > class . <nl> + / / / Factory methods to create instances of < see cref = " ChannelCredentials " / > and < see cref = " CallCredentials " / > classes . <nl> / / / < / summary > <nl> public static class GrpcCredentials <nl> { <nl> public static MetadataCredentials Create ( ITokenAccess credential ) <nl> } <nl> <nl> / / / < summary > <nl> - / / / Convenience method to create a < see cref = " CompositeCredentials " / > instance from <nl> + / / / Convenience method to create a < see cref = " ChannelCredentials " / > instance from <nl> / / / < c > ITokenAccess < / c > credential and < c > SslCredentials < / c > instance . <nl> / / / < / summary > <nl> / / / < param name = " credential " > The credential to use to obtain access tokens . < / param > <nl> / / / < param name = " sslCredentials " > The < c > SslCredentials < / c > instance . < / param > <nl> - / / / < returns > The composite credential for access token based auth over a secure channel . < / returns > <nl> - public static CompositeCredentials Create ( ITokenAccess credential , SslCredentials sslCredentials ) <nl> + / / / < returns > The channel credentials for access token based auth over a secure channel . < / returns > <nl> + public static ChannelCredentials Create ( ITokenAccess credential , SslCredentials sslCredentials ) <nl> { <nl> - return CompositeCredentials . Create ( Create ( credential ) , sslCredentials ) ; <nl> + return ChannelCredentials . Create ( sslCredentials , Create ( credential ) ) ; <nl> } <nl> <nl> / / / < summary > <nl> similarity index 52 % <nl> rename from src / csharp / Grpc . Core . Tests / CredentialsTest . cs <nl> rename to src / csharp / Grpc . Core . Tests / CallCredentialsTest . 
cs <nl> mmm a / src / csharp / Grpc . Core . Tests / CredentialsTest . cs <nl> ppp b / src / csharp / Grpc . Core . Tests / CallCredentialsTest . cs <nl> <nl> <nl> namespace Grpc . Core . Tests <nl> { <nl> - public class CredentialsTest <nl> + public class CallCredentialsTest <nl> { <nl> [ Test ] <nl> - public void InsecureCredentials_IsNonComposable ( ) <nl> + public void CallCredentials_ComposeAtLeastTwo ( ) <nl> { <nl> - Assert . IsFalse ( Credentials . Insecure . IsComposable ) ; <nl> + Assert . Throws ( typeof ( ArgumentException ) , ( ) = > CallCredentials . Compose ( new FakeCallCredentials ( ) ) ) ; <nl> } <nl> <nl> [ Test ] <nl> - public void CompositeCredentials_Create ( ) <nl> + public void CallCredentials_ToNativeCredentials ( ) <nl> { <nl> - new CompositeCredentials ( new FakeCredentials ( true ) , new FakeCredentials ( true ) , new FakeCredentials ( true ) ) ; <nl> - } <nl> - <nl> - [ Test ] <nl> - public void CompositeCredentials_ComposeAtLeastTwo ( ) <nl> - { <nl> - Assert . Throws ( typeof ( ArgumentException ) , ( ) = > new CompositeCredentials ( new FakeCredentials ( true ) ) ) ; <nl> - } <nl> - <nl> - [ Test ] <nl> - public void CompositeCredentials_ForbidsNonComposable ( ) <nl> - { <nl> - Assert . Throws ( typeof ( ArgumentException ) , ( ) = > new CompositeCredentials ( new FakeCredentials ( true ) , new FakeCredentials ( false ) ) ) ; <nl> - } <nl> - <nl> - [ Test ] <nl> - public void CompositeCredentials_ToNativeCredentials ( ) <nl> - { <nl> - var composite = new CompositeCredentials ( new MetadataCredentials ( async ( uri , m ) = > { await Task . Delay ( 1 ) ; } ) , new SslCredentials ( ) ) ; <nl> + var composite = CallCredentials . Compose ( <nl> + new MetadataCredentials ( async ( uri , m ) = > { await Task . Delay ( 1 ) ; } ) , <nl> + new MetadataCredentials ( async ( uri , m ) = > { await Task . Delay ( 2 ) ; } ) ) ; <nl> using ( var nativeComposite = composite . 
ToNativeCredentials ( ) ) <nl> { <nl> } <nl> } <nl> - <nl> - [ Test ] <nl> - public void CompositeCredentials_OnlyOneConnectorCredentialAllowed ( ) <nl> - { <nl> - var composite = new CompositeCredentials ( new SslCredentials ( ) , new SslCredentials ( ) ) ; <nl> - / / it makes no sense to compose multiple ssl credentials . <nl> - Assert . Throws ( typeof ( ArgumentException ) , ( ) = > composite . ToNativeCredentials ( ) ) ; <nl> - } <nl> - <nl> - private class FakeCredentials : Credentials <nl> - { <nl> - readonly bool composable ; <nl> - <nl> - public FakeCredentials ( bool composable ) <nl> - { <nl> - this . composable = composable ; <nl> - } <nl> - <nl> - internal override bool IsComposable <nl> - { <nl> - get { return composable ; } <nl> - } <nl> - <nl> - internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> - { <nl> - return null ; <nl> - } <nl> - } <nl> } <nl> } <nl> new file mode 100644 <nl> index 00000000000 . . 489bf385756 <nl> mmm / dev / null <nl> ppp b / src / csharp / Grpc . Core . Tests / ChannelCredentialsTest . cs <nl> <nl> + # region Copyright notice and license <nl> + <nl> + / / Copyright 2015 , Google Inc . <nl> + / / All rights reserved . <nl> + / / <nl> + / / Redistribution and use in source and binary forms , with or without <nl> + / / modification , are permitted provided that the following conditions are <nl> + / / met : <nl> + / / <nl> + / / * Redistributions of source code must retain the above copyright <nl> + / / notice , this list of conditions and the following disclaimer . <nl> + / / * Redistributions in binary form must reproduce the above <nl> + / / copyright notice , this list of conditions and the following disclaimer <nl> + / / in the documentation and / or other materials provided with the <nl> + / / distribution . <nl> + / / * Neither the name of Google Inc . 
nor the names of its <nl> + / / contributors may be used to endorse or promote products derived from <nl> + / / this software without specific prior written permission . <nl> + / / <nl> + / / THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS <nl> + / / " AS IS " AND ANY EXPRESS OR IMPLIED WARRANTIES , INCLUDING , BUT NOT <nl> + / / LIMITED TO , THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR <nl> + / / A PARTICULAR PURPOSE ARE DISCLAIMED . IN NO EVENT SHALL THE COPYRIGHT <nl> + / / OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT , INDIRECT , INCIDENTAL , <nl> + / / SPECIAL , EXEMPLARY , OR CONSEQUENTIAL DAMAGES ( INCLUDING , BUT NOT <nl> + / / LIMITED TO , PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES ; LOSS OF USE , <nl> + / / DATA , OR PROFITS ; OR BUSINESS INTERRUPTION ) HOWEVER CAUSED AND ON ANY <nl> + / / THEORY OF LIABILITY , WHETHER IN CONTRACT , STRICT LIABILITY , OR TORT <nl> + / / ( INCLUDING NEGLIGENCE OR OTHERWISE ) ARISING IN ANY WAY OUT OF THE USE <nl> + / / OF THIS SOFTWARE , EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE . <nl> + <nl> + # endregion <nl> + <nl> + using System ; <nl> + using System . Diagnostics ; <nl> + using System . Runtime . InteropServices ; <nl> + using System . Threading ; <nl> + using System . Threading . Tasks ; <nl> + using Grpc . Core ; <nl> + using Grpc . Core . Internal ; <nl> + using Grpc . Core . Utils ; <nl> + using NUnit . Framework ; <nl> + <nl> + namespace Grpc . Core . Tests <nl> + { <nl> + public class ChannelCredentialsTest <nl> + { <nl> + [ Test ] <nl> + public void InsecureCredentials_IsNonComposable ( ) <nl> + { <nl> + Assert . IsFalse ( ChannelCredentials . Insecure . IsComposable ) ; <nl> + } <nl> + <nl> + [ Test ] <nl> + public void ChannelCredentials_CreateComposite ( ) <nl> + { <nl> + var composite = ChannelCredentials . Create ( new FakeChannelCredentials ( true ) , new FakeCallCredentials ( ) ) ; <nl> + Assert . IsFalse ( composite . IsComposable ) ; <nl> + <nl> + Assert . 
Throws ( typeof ( ArgumentNullException ) , ( ) = > ChannelCredentials . Create ( null , new FakeCallCredentials ( ) ) ) ; <nl> + Assert . Throws ( typeof ( ArgumentNullException ) , ( ) = > ChannelCredentials . Create ( new FakeChannelCredentials ( true ) , null ) ) ; <nl> + <nl> + / / forbid composing non - composable <nl> + Assert . Throws ( typeof ( ArgumentException ) , ( ) = > ChannelCredentials . Create ( new FakeChannelCredentials ( false ) , new FakeCallCredentials ( ) ) ) ; <nl> + } <nl> + <nl> + [ Test ] <nl> + public void ChannelCredentials_CreateWrapped ( ) <nl> + { <nl> + ChannelCredentials . Create ( new FakeCallCredentials ( ) ) ; <nl> + } <nl> + } <nl> + } <nl> mmm a / src / csharp / Grpc . Core . Tests / ChannelTest . cs <nl> ppp b / src / csharp / Grpc . Core . Tests / ChannelTest . cs <nl> public class ChannelTest <nl> [ Test ] <nl> public void Constructor_RejectsInvalidParams ( ) <nl> { <nl> - Assert . Throws ( typeof ( ArgumentNullException ) , ( ) = > new Channel ( null , Credentials . Insecure ) ) ; <nl> + Assert . Throws ( typeof ( ArgumentNullException ) , ( ) = > new Channel ( null , ChannelCredentials . Insecure ) ) ; <nl> } <nl> <nl> [ Test ] <nl> public void State_IdleAfterCreation ( ) <nl> { <nl> - var channel = new Channel ( " localhost " , Credentials . Insecure ) ; <nl> + var channel = new Channel ( " localhost " , ChannelCredentials . Insecure ) ; <nl> Assert . AreEqual ( ChannelState . Idle , channel . State ) ; <nl> channel . ShutdownAsync ( ) . Wait ( ) ; <nl> } <nl> public void State_IdleAfterCreation ( ) <nl> [ Test ] <nl> public void WaitForStateChangedAsync_InvalidArgument ( ) <nl> { <nl> - var channel = new Channel ( " localhost " , Credentials . Insecure ) ; <nl> + var channel = new Channel ( " localhost " , ChannelCredentials . Insecure ) ; <nl> Assert . Throws ( typeof ( ArgumentException ) , ( ) = > channel . WaitForStateChangedAsync ( ChannelState . FatalFailure ) ) ; <nl> channel . ShutdownAsync ( ) . 
Wait ( ) ; <nl> } <nl> public void WaitForStateChangedAsync_InvalidArgument ( ) <nl> [ Test ] <nl> public void ResolvedTarget ( ) <nl> { <nl> - var channel = new Channel ( " 127 . 0 . 0 . 1 " , Credentials . Insecure ) ; <nl> + var channel = new Channel ( " 127 . 0 . 0 . 1 " , ChannelCredentials . Insecure ) ; <nl> Assert . IsTrue ( channel . ResolvedTarget . Contains ( " 127 . 0 . 0 . 1 " ) ) ; <nl> channel . ShutdownAsync ( ) . Wait ( ) ; <nl> } <nl> public void ResolvedTarget ( ) <nl> [ Test ] <nl> public void Shutdown_AllowedOnlyOnce ( ) <nl> { <nl> - var channel = new Channel ( " localhost " , Credentials . Insecure ) ; <nl> + var channel = new Channel ( " localhost " , ChannelCredentials . Insecure ) ; <nl> channel . ShutdownAsync ( ) . Wait ( ) ; <nl> Assert . Throws ( typeof ( InvalidOperationException ) , ( ) = > channel . ShutdownAsync ( ) . GetAwaiter ( ) . GetResult ( ) ) ; <nl> } <nl> new file mode 100644 <nl> index 00000000000 . . 87d55cd276a <nl> mmm / dev / null <nl> ppp b / src / csharp / Grpc . Core . Tests / FakeCredentials . cs <nl> <nl> + # region Copyright notice and license <nl> + <nl> + / / Copyright 2015 , Google Inc . <nl> + / / All rights reserved . <nl> + / / <nl> + / / Redistribution and use in source and binary forms , with or without <nl> + / / modification , are permitted provided that the following conditions are <nl> + / / met : <nl> + / / <nl> + / / * Redistributions of source code must retain the above copyright <nl> + / / notice , this list of conditions and the following disclaimer . <nl> + / / * Redistributions in binary form must reproduce the above <nl> + / / copyright notice , this list of conditions and the following disclaimer <nl> + / / in the documentation and / or other materials provided with the <nl> + / / distribution . <nl> + / / * Neither the name of Google Inc . 
nor the names of its <nl> + / / contributors may be used to endorse or promote products derived from <nl> + / / this software without specific prior written permission . <nl> + / / <nl> + / / THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS <nl> + / / " AS IS " AND ANY EXPRESS OR IMPLIED WARRANTIES , INCLUDING , BUT NOT <nl> + / / LIMITED TO , THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR <nl> + / / A PARTICULAR PURPOSE ARE DISCLAIMED . IN NO EVENT SHALL THE COPYRIGHT <nl> + / / OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT , INDIRECT , INCIDENTAL , <nl> + / / SPECIAL , EXEMPLARY , OR CONSEQUENTIAL DAMAGES ( INCLUDING , BUT NOT <nl> + / / LIMITED TO , PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES ; LOSS OF USE , <nl> + / / DATA , OR PROFITS ; OR BUSINESS INTERRUPTION ) HOWEVER CAUSED AND ON ANY <nl> + / / THEORY OF LIABILITY , WHETHER IN CONTRACT , STRICT LIABILITY , OR TORT <nl> + / / ( INCLUDING NEGLIGENCE OR OTHERWISE ) ARISING IN ANY WAY OUT OF THE USE <nl> + / / OF THIS SOFTWARE , EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE . <nl> + <nl> + # endregion <nl> + <nl> + using System ; <nl> + using System . Diagnostics ; <nl> + using System . Runtime . InteropServices ; <nl> + using System . Threading ; <nl> + using System . Threading . Tasks ; <nl> + using Grpc . Core ; <nl> + using Grpc . Core . Internal ; <nl> + using Grpc . Core . Utils ; <nl> + using NUnit . Framework ; <nl> + <nl> + namespace Grpc . Core . Tests <nl> + { <nl> + internal class FakeChannelCredentials : ChannelCredentials <nl> + { <nl> + readonly bool composable ; <nl> + <nl> + public FakeChannelCredentials ( bool composable ) <nl> + { <nl> + this . 
composable = composable ; <nl> + } <nl> + <nl> + internal override bool IsComposable <nl> + { <nl> + get { return composable ; } <nl> + } <nl> + <nl> + internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> + { <nl> + return null ; <nl> + } <nl> + } <nl> + <nl> + internal class FakeCallCredentials : CallCredentials <nl> + { <nl> + internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> + { <nl> + return null ; <nl> + } <nl> + } <nl> + } <nl> mmm a / src / csharp / Grpc . Core . Tests / Grpc . Core . Tests . csproj <nl> ppp b / src / csharp / Grpc . Core . Tests / Grpc . Core . Tests . csproj <nl> <nl> < Compile Include = " . . \ Grpc . Core \ Version . cs " > <nl> < Link > Version . cs < / Link > <nl> < / Compile > <nl> + < Compile Include = " CallCredentialsTest . cs " / > <nl> + < Compile Include = " FakeCredentials . cs " / > <nl> < Compile Include = " MarshallingErrorsTest . cs " / > <nl> - < Compile Include = " CredentialsTest . cs " / > <nl> + < Compile Include = " ChannelCredentialsTest . cs " / > <nl> < Compile Include = " ShutdownTest . cs " / > <nl> < Compile Include = " Internal \ AsyncCallTest . cs " / > <nl> < Compile Include = " Properties \ AssemblyInfo . cs " / > <nl> mmm a / src / csharp / Grpc . Core . Tests / Internal / AsyncCallTest . cs <nl> ppp b / src / csharp / Grpc . Core . Tests / Internal / AsyncCallTest . cs <nl> public class AsyncCallTest <nl> [ SetUp ] <nl> public void Init ( ) <nl> { <nl> - channel = new Channel ( " localhost " , Credentials . Insecure ) ; <nl> + channel = new Channel ( " localhost " , ChannelCredentials . Insecure ) ; <nl> <nl> fakeCall = new FakeNativeCall ( ) ; <nl> <nl> mmm a / src / csharp / Grpc . Core . Tests / MockServiceHelper . cs <nl> ppp b / src / csharp / Grpc . Core . Tests / MockServiceHelper . cs <nl> public Channel GetChannel ( ) <nl> { <nl> if ( channel = = null ) <nl> { <nl> - channel = new Channel ( Host , GetServer ( ) . Ports . Single ( ) . BoundPort , Credentials . 
Insecure ) ; <nl> + channel = new Channel ( Host , GetServer ( ) . Ports . Single ( ) . BoundPort , ChannelCredentials . Insecure ) ; <nl> } <nl> return channel ; <nl> } <nl> new file mode 100644 <nl> index 00000000000 . . 809c9f412d0 <nl> mmm / dev / null <nl> ppp b / src / csharp / Grpc . Core / CallCredentials . cs <nl> <nl> + # region Copyright notice and license <nl> + <nl> + / / Copyright 2015 , Google Inc . <nl> + / / All rights reserved . <nl> + / / <nl> + / / Redistribution and use in source and binary forms , with or without <nl> + / / modification , are permitted provided that the following conditions are <nl> + / / met : <nl> + / / <nl> + / / * Redistributions of source code must retain the above copyright <nl> + / / notice , this list of conditions and the following disclaimer . <nl> + / / * Redistributions in binary form must reproduce the above <nl> + / / copyright notice , this list of conditions and the following disclaimer <nl> + / / in the documentation and / or other materials provided with the <nl> + / / distribution . <nl> + / / * Neither the name of Google Inc . nor the names of its <nl> + / / contributors may be used to endorse or promote products derived from <nl> + / / this software without specific prior written permission . <nl> + / / <nl> + / / THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS <nl> + / / " AS IS " AND ANY EXPRESS OR IMPLIED WARRANTIES , INCLUDING , BUT NOT <nl> + / / LIMITED TO , THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR <nl> + / / A PARTICULAR PURPOSE ARE DISCLAIMED . 
IN NO EVENT SHALL THE COPYRIGHT <nl> + / / OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT , INDIRECT , INCIDENTAL , <nl> + / / SPECIAL , EXEMPLARY , OR CONSEQUENTIAL DAMAGES ( INCLUDING , BUT NOT <nl> + / / LIMITED TO , PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES ; LOSS OF USE , <nl> + / / DATA , OR PROFITS ; OR BUSINESS INTERRUPTION ) HOWEVER CAUSED AND ON ANY <nl> + / / THEORY OF LIABILITY , WHETHER IN CONTRACT , STRICT LIABILITY , OR TORT <nl> + / / ( INCLUDING NEGLIGENCE OR OTHERWISE ) ARISING IN ANY WAY OUT OF THE USE <nl> + / / OF THIS SOFTWARE , EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE . <nl> + <nl> + # endregion <nl> + <nl> + using System ; <nl> + using System . Collections . Generic ; <nl> + using System . Threading . Tasks ; <nl> + <nl> + using Grpc . Core . Internal ; <nl> + using Grpc . Core . Utils ; <nl> + <nl> + namespace Grpc . Core <nl> + { <nl> + / / / < summary > <nl> + / / / Client - side call credentials . Provide authorization with per - call granularity . <nl> + / / / < / summary > <nl> + public abstract class CallCredentials <nl> + { <nl> + / / / < summary > <nl> + / / / Composes multiple multiple < c > CallCredentials < / c > objects into <nl> + / / / a single < c > CallCredentials < / c > object . <nl> + / / / < / summary > <nl> + / / / < param name = " credentials " > credentials to compose < / param > <nl> + / / / < returns > The new < c > CompositeCallCredentials < / c > < / returns > <nl> + public static CallCredentials Compose ( params CallCredentials [ ] credentials ) <nl> + { <nl> + return new CompositeCallCredentials ( credentials ) ; <nl> + } <nl> + <nl> + / / / < summary > <nl> + / / / Creates native object for the credentials . <nl> + / / / < / summary > <nl> + / / / < returns > The native credentials . 
< / returns > <nl> + internal abstract CredentialsSafeHandle ToNativeCredentials ( ) ; <nl> + } <nl> + <nl> + / / / < summary > <nl> + / / / Asynchronous authentication interceptor for < see cref = " MetadataCredentials " / > . <nl> + / / / < / summary > <nl> + / / / < param name = " authUri " > URL of a service to which current remote call needs to authenticate < / param > <nl> + / / / < param name = " metadata " > Metadata to populate with entries that will be added to outgoing call ' s headers . < / param > <nl> + / / / < returns > < / returns > <nl> + public delegate Task AsyncAuthInterceptor ( string authUri , Metadata metadata ) ; <nl> + <nl> + / / / < summary > <nl> + / / / Client - side credentials that delegate metadata based auth to an interceptor . <nl> + / / / The interceptor is automatically invoked for each remote call that uses < c > MetadataCredentials . < / c > <nl> + / / / < / summary > <nl> + public class MetadataCredentials : CallCredentials <nl> + { <nl> + readonly AsyncAuthInterceptor interceptor ; <nl> + <nl> + / / / < summary > <nl> + / / / Initializes a new instance of < c > MetadataCredentials < / c > class . <nl> + / / / < / summary > <nl> + / / / < param name = " interceptor " > authentication interceptor < / param > <nl> + public MetadataCredentials ( AsyncAuthInterceptor interceptor ) <nl> + { <nl> + this . interceptor = interceptor ; <nl> + } <nl> + <nl> + internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> + { <nl> + NativeMetadataCredentialsPlugin plugin = new NativeMetadataCredentialsPlugin ( interceptor ) ; <nl> + return plugin . Credentials ; <nl> + } <nl> + } <nl> + <nl> + / / / < summary > <nl> + / / / Credentials that allow composing multiple credentials objects into one < see cref = " CallCredentials " / > object . 
<nl> + / / / < / summary > <nl> + internal sealed class CompositeCallCredentials : CallCredentials <nl> + { <nl> + readonly List < CallCredentials > credentials ; <nl> + <nl> + / / / < summary > <nl> + / / / Initializes a new instance of < c > CompositeCallCredentials < / c > class . <nl> + / / / The resulting credentials object will be composite of all the credentials specified as parameters . <nl> + / / / < / summary > <nl> + / / / < param name = " credentials " > credentials to compose < / param > <nl> + public CompositeCallCredentials ( params CallCredentials [ ] credentials ) <nl> + { <nl> + Preconditions . CheckArgument ( credentials . Length > = 2 , " Composite credentials object can only be created from 2 or more credentials . " ) ; <nl> + this . credentials = new List < CallCredentials > ( credentials ) ; <nl> + } <nl> + <nl> + internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> + { <nl> + return ToNativeRecursive ( 0 ) ; <nl> + } <nl> + <nl> + / / Recursive descent makes managing lifetime of intermediate CredentialSafeHandle instances easier . <nl> + / / In practice , we won ' t usually see composites from more than two credentials anyway . <nl> + private CredentialsSafeHandle ToNativeRecursive ( int startIndex ) <nl> + { <nl> + if ( startIndex = = credentials . Count - 1 ) <nl> + { <nl> + return credentials [ startIndex ] . ToNativeCredentials ( ) ; <nl> + } <nl> + <nl> + using ( var cred1 = credentials [ startIndex ] . ToNativeCredentials ( ) ) <nl> + using ( var cred2 = ToNativeRecursive ( startIndex + 1 ) ) <nl> + { <nl> + var nativeComposite = CredentialsSafeHandle . CreateComposite ( cred1 , cred2 ) ; <nl> + if ( nativeComposite . IsInvalid ) <nl> + { <nl> + throw new ArgumentException ( " Error creating native composite credentials . Likely , this is because you are trying to compose incompatible credentials . " ) ; <nl> + } <nl> + return nativeComposite ; <nl> + } <nl> + } <nl> + } <nl> + } <nl> mmm a / src / csharp / Grpc . 
Core / CallOptions . cs <nl> ppp b / src / csharp / Grpc . Core / CallOptions . cs <nl> public struct CallOptions <nl> CancellationToken cancellationToken ; <nl> WriteOptions writeOptions ; <nl> ContextPropagationToken propagationToken ; <nl> - Credentials credentials ; <nl> + CallCredentials credentials ; <nl> <nl> / / / < summary > <nl> / / / Creates a new instance of < c > CallOptions < / c > struct . <nl> public struct CallOptions <nl> / / / < param name = " propagationToken " > Context propagation token obtained from < see cref = " ServerCallContext " / > . < / param > <nl> / / / < param name = " credentials " > Credentials to use for this call . < / param > <nl> public CallOptions ( Metadata headers = null , DateTime ? deadline = null , CancellationToken cancellationToken = default ( CancellationToken ) , <nl> - WriteOptions writeOptions = null , ContextPropagationToken propagationToken = null , Credentials credentials = null ) <nl> + WriteOptions writeOptions = null , ContextPropagationToken propagationToken = null , CallCredentials credentials = null ) <nl> { <nl> this . headers = headers ; <nl> this . deadline = deadline ; <nl> public ContextPropagationToken PropagationToken <nl> / / / < summary > <nl> / / / Credentials to use for this call . <nl> / / / < / summary > <nl> - public Credentials Credentials <nl> + public CallCredentials Credentials <nl> { <nl> get <nl> { <nl> mmm a / src / csharp / Grpc . Core / Channel . cs <nl> ppp b / src / csharp / Grpc . Core / Channel . cs <nl> public class Channel <nl> / / / < param name = " target " > Target of the channel . < / param > <nl> / / / < param name = " credentials " > Credentials to secure the channel . < / param > <nl> / / / < param name = " options " > Channel options . 
< / param > <nl> - public Channel ( string target , Credentials credentials , IEnumerable < ChannelOption > options = null ) <nl> + public Channel ( string target , ChannelCredentials credentials , IEnumerable < ChannelOption > options = null ) <nl> { <nl> this . target = Preconditions . CheckNotNull ( target , " target " ) ; <nl> this . environment = GrpcEnvironment . AddRef ( ) ; <nl> public Channel ( string target , Credentials credentials , IEnumerable < ChannelOption <nl> / / / < param name = " port " > The port . < / param > <nl> / / / < param name = " credentials " > Credentials to secure the channel . < / param > <nl> / / / < param name = " options " > Channel options . < / param > <nl> - public Channel ( string host , int port , Credentials credentials , IEnumerable < ChannelOption > options = null ) : <nl> + public Channel ( string host , int port , ChannelCredentials credentials , IEnumerable < ChannelOption > options = null ) : <nl> this ( string . Format ( " { 0 } : { 1 } " , host , port ) , credentials , options ) <nl> { <nl> } <nl> similarity index 61 % <nl> rename from src / csharp / Grpc . Core / Credentials . cs <nl> rename to src / csharp / Grpc . Core / ChannelCredentials . cs <nl> mmm a / src / csharp / Grpc . Core / Credentials . cs <nl> ppp b / src / csharp / Grpc . Core / ChannelCredentials . cs <nl> <nl> namespace Grpc . Core <nl> { <nl> / / / < summary > <nl> - / / / Client - side credentials . Used for creation of a secure channel . <nl> + / / / Client - side channel credentials . Used for creation of a secure channel . 
<nl> / / / < / summary > <nl> - public abstract class Credentials <nl> + public abstract class ChannelCredentials <nl> { <nl> - static readonly Credentials InsecureInstance = new InsecureCredentialsImpl ( ) ; <nl> + static readonly ChannelCredentials InsecureInstance = new InsecureCredentialsImpl ( ) ; <nl> <nl> / / / < summary > <nl> - / / / Returns instance of credential that provides no security and <nl> + / / / Returns instance of credentials that provides no security and <nl> / / / will result in creating an unsecure channel with no encryption whatsoever . <nl> / / / < / summary > <nl> - public static Credentials Insecure <nl> + public static ChannelCredentials Insecure <nl> { <nl> get <nl> { <nl> public static Credentials Insecure <nl> } <nl> } <nl> <nl> + / / / < summary > <nl> + / / / Creates a new instance of < c > ChannelCredentials < / c > class by composing <nl> + / / / given channel credentials with call credentials . <nl> + / / / < / summary > <nl> + / / / < param name = " channelCredentials " > Channel credentials . < / param > <nl> + / / / < param name = " callCredentials " > Call credentials . < / param > <nl> + / / / < returns > The new composite < c > ChannelCredentials < / c > < / returns > <nl> + public static ChannelCredentials Create ( ChannelCredentials channelCredentials , CallCredentials callCredentials ) <nl> + { <nl> + return new CompositeChannelCredentials ( channelCredentials , callCredentials ) ; <nl> + } <nl> + <nl> + / / / < summary > <nl> + / / / Creates a new instance of < c > ChannelCredentials < / c > by wrapping <nl> + / / / an instance of < c > CallCredentials < / c > . <nl> + / / / < / summary > <nl> + / / / < param name = " callCredentials " > Call credentials . < / param > <nl> + / / / < returns > The < c > ChannelCredentials < / c > wrapping given call credentials . 
< / returns > <nl> + public static ChannelCredentials Create ( CallCredentials callCredentials ) <nl> + { <nl> + return new WrappedCallCredentials ( callCredentials ) ; <nl> + } <nl> + <nl> / / / < summary > <nl> / / / Creates native object for the credentials . May return null if insecure channel <nl> / / / should be created . <nl> public static Credentials Insecure <nl> / / / < / summary > <nl> internal virtual bool IsComposable <nl> { <nl> - get { return true ; } <nl> + get { return false ; } <nl> } <nl> <nl> - private sealed class InsecureCredentialsImpl : Credentials <nl> + private sealed class InsecureCredentialsImpl : ChannelCredentials <nl> { <nl> internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> { <nl> return null ; <nl> } <nl> - <nl> - / / Composing insecure credentials makes no sense . <nl> - internal override bool IsComposable <nl> - { <nl> - get { return false ; } <nl> - } <nl> } <nl> } <nl> <nl> / / / < summary > <nl> / / / Client - side SSL credentials . <nl> / / / < / summary > <nl> - public sealed class SslCredentials : Credentials <nl> + public sealed class SslCredentials : ChannelCredentials <nl> { <nl> readonly string rootCertificates ; <nl> readonly KeyCertificatePair keyCertificatePair ; <nl> public KeyCertificatePair KeyCertificatePair <nl> } <nl> } <nl> <nl> + / / Composing composite makes no sense . <nl> + internal override bool IsComposable <nl> + { <nl> + get { return true ; } <nl> + } <nl> + <nl> internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> { <nl> return CredentialsSafeHandle . CreateSslCredentials ( rootCertificates , keyCertificatePair ) ; <nl> internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> } <nl> <nl> / / / < summary > <nl> - / / / Asynchronous authentication interceptor for < see cref = " MetadataCredentials " / > . 
<nl> + / / / Credentials that allow composing one < see cref = " ChannelCredentials " / > object and <nl> + / / / one or more < see cref = " CallCredentials " / > objects into a single < see cref = " ChannelCredentials " / > . <nl> / / / < / summary > <nl> - / / / < param name = " authUri " > URL of a service to which current remote call needs to authenticate < / param > <nl> - / / / < param name = " metadata " > Metadata to populate with entries that will be added to outgoing call ' s headers . < / param > <nl> - / / / < returns > < / returns > <nl> - public delegate Task AsyncAuthInterceptor ( string authUri , Metadata metadata ) ; <nl> - <nl> - / / / < summary > <nl> - / / / Client - side credentials that delegate metadata based auth to an interceptor . <nl> - / / / The interceptor is automatically invoked for each remote call that uses < c > MetadataCredentials . < / c > <nl> - / / / < / summary > <nl> - public partial class MetadataCredentials : Credentials <nl> + internal sealed class CompositeChannelCredentials : ChannelCredentials <nl> { <nl> - readonly AsyncAuthInterceptor interceptor ; <nl> + readonly ChannelCredentials channelCredentials ; <nl> + readonly CallCredentials callCredentials ; <nl> <nl> / / / < summary > <nl> - / / / Initializes a new instance of < c > MetadataCredentials < / c > class . <nl> + / / / Initializes a new instance of < c > CompositeChannelCredentials < / c > class . <nl> + / / / The resulting credentials object will be composite of all the credentials specified as parameters . 
<nl> / / / < / summary > <nl> - / / / < param name = " interceptor " > authentication interceptor < / param > <nl> - public MetadataCredentials ( AsyncAuthInterceptor interceptor ) <nl> + / / / < param name = " channelCredentials " > channelCredentials to compose < / param > <nl> + / / / < param name = " callCredentials " > channelCredentials to compose < / param > <nl> + public CompositeChannelCredentials ( ChannelCredentials channelCredentials , CallCredentials callCredentials ) <nl> { <nl> - this . interceptor = interceptor ; <nl> + this . channelCredentials = Preconditions . CheckNotNull ( channelCredentials ) ; <nl> + this . callCredentials = Preconditions . CheckNotNull ( callCredentials ) ; <nl> + Preconditions . CheckArgument ( channelCredentials . IsComposable , " Supplied channel credentials do not allow composition . " ) ; <nl> } <nl> <nl> internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> { <nl> - NativeMetadataCredentialsPlugin plugin = new NativeMetadataCredentialsPlugin ( interceptor ) ; <nl> - return plugin . Credentials ; <nl> + using ( var cred1 = channelCredentials . ToNativeCredentials ( ) ) <nl> + using ( var cred2 = callCredentials . ToNativeCredentials ( ) ) <nl> + { <nl> + var nativeComposite = CredentialsSafeHandle . CreateComposite ( cred1 , cred2 ) ; <nl> + if ( nativeComposite . IsInvalid ) <nl> + { <nl> + throw new ArgumentException ( " Error creating native composite credentials . Likely , this is because you are trying to compose incompatible credentials . " ) ; <nl> + } <nl> + return nativeComposite ; <nl> + } <nl> } <nl> } <nl> <nl> / / / < summary > <nl> - / / / Credentials that allow composing multiple credentials objects into one < see cref = " Credentials " / > object . <nl> + / / / Credentials wrapping < see cref = " CallCredentials " / > as < see cref = " ChannelCredentials " / > . 
<nl> / / / < / summary > <nl> - public sealed class CompositeCredentials : Credentials <nl> + internal sealed class WrappedCallCredentials : ChannelCredentials <nl> { <nl> - readonly List < Credentials > credentials ; <nl> + readonly CallCredentials callCredentials ; <nl> <nl> / / / < summary > <nl> - / / / Initializes a new instance of < c > CompositeCredentials < / c > class . <nl> - / / / The resulting credentials object will be composite of all the credentials specified as parameters . <nl> + / / / Wraps instance of < c > CallCredentials < / c > as < c > ChannelCredentials < / c > . <nl> / / / < / summary > <nl> - / / / < param name = " credentials " > credentials to compose < / param > <nl> - public CompositeCredentials ( params Credentials [ ] credentials ) <nl> + / / / < param name = " callCredentials " > credentials to wrap < / param > <nl> + public WrappedCallCredentials ( CallCredentials callCredentials ) <nl> { <nl> - Preconditions . CheckArgument ( credentials . Length > = 2 , " Composite credentials object can only be created from 2 or more credentials . " ) ; <nl> - foreach ( var cred in credentials ) <nl> - { <nl> - Preconditions . CheckArgument ( cred . IsComposable , " Cannot create composite credentials : one or more credential objects do not allow composition . " ) ; <nl> - } <nl> - this . credentials = new List < Credentials > ( credentials ) ; <nl> - } <nl> - <nl> - / / / < summary > <nl> - / / / Creates a new instance of < c > CompositeCredentials < / c > class by composing <nl> - / / / multiple < c > Credentials < / c > objects . <nl> - / / / < / summary > <nl> - / / / < param name = " credentials " > credentials to compose < / param > <nl> - / / / < returns > The new < c > CompositeCredentials < / c > < / returns > <nl> - public static CompositeCredentials Create ( params Credentials [ ] credentials ) <nl> - { <nl> - return new CompositeCredentials ( credentials ) ; <nl> + this . callCredentials = Preconditions . 
CheckNotNull ( callCredentials ) ; <nl> } <nl> <nl> internal override CredentialsSafeHandle ToNativeCredentials ( ) <nl> { <nl> - return ToNativeRecursive ( 0 ) ; <nl> - } <nl> - <nl> - / / Recursive descent makes managing lifetime of intermediate CredentialSafeHandle instances easier . <nl> - / / In practice , we won ' t usually see composites from more than two credentials anyway . <nl> - private CredentialsSafeHandle ToNativeRecursive ( int startIndex ) <nl> - { <nl> - if ( startIndex = = credentials . Count - 1 ) <nl> - { <nl> - return credentials [ startIndex ] . ToNativeCredentials ( ) ; <nl> - } <nl> - <nl> - using ( var cred1 = credentials [ startIndex ] . ToNativeCredentials ( ) ) <nl> - using ( var cred2 = ToNativeRecursive ( startIndex + 1 ) ) <nl> - { <nl> - var nativeComposite = CredentialsSafeHandle . CreateComposite ( cred1 , cred2 ) ; <nl> - if ( nativeComposite . IsInvalid ) <nl> - { <nl> - throw new ArgumentException ( " Error creating native composite credentials . Likely , this is because you are trying to compose incompatible credentials . " ) ; <nl> - } <nl> - return nativeComposite ; <nl> - } <nl> + return callCredentials . ToNativeCredentials ( ) ; <nl> } <nl> } <nl> } <nl> mmm a / src / csharp / Grpc . Core / Grpc . Core . csproj <nl> ppp b / src / csharp / Grpc . Core / Grpc . Core . csproj <nl> <nl> < ItemGroup > <nl> < Compile Include = " AsyncDuplexStreamingCall . cs " / > <nl> < Compile Include = " AsyncServerStreamingCall . cs " / > <nl> + < Compile Include = " CallCredentials . cs " / > <nl> < Compile Include = " IClientStreamWriter . cs " / > <nl> < Compile Include = " Internal \ NativeMetadataCredentialsPlugin . cs " / > <nl> < Compile Include = " Internal \ INativeCall . cs " / > <nl> <nl> < Compile Include = " Utils \ AsyncStreamExtensions . cs " / > <nl> < Compile Include = " Utils \ BenchmarkUtil . cs " / > <nl> < Compile Include = " Internal \ CredentialsSafeHandle . cs " / > <nl> - < Compile Include = " Credentials . 
cs " / > <nl> + < Compile Include = " ChannelCredentials . cs " / > <nl> < Compile Include = " Internal \ ChannelArgsSafeHandle . cs " / > <nl> < Compile Include = " Internal \ AsyncCompletion . cs " / > <nl> < Compile Include = " Internal \ AsyncCallBase . cs " / > <nl> mmm a / src / csharp / Grpc . Examples . MathClient / MathClient . cs <nl> ppp b / src / csharp / Grpc . Examples . MathClient / MathClient . cs <nl> class MathClient <nl> { <nl> public static void Main ( string [ ] args ) <nl> { <nl> - var channel = new Channel ( " 127 . 0 . 0 . 1 " , 23456 , Credentials . Insecure ) ; <nl> + var channel = new Channel ( " 127 . 0 . 0 . 1 " , 23456 , ChannelCredentials . Insecure ) ; <nl> Math . IMathClient client = new Math . MathClient ( channel ) ; <nl> MathExamples . DivExample ( client ) ; <nl> <nl> mmm a / src / csharp / Grpc . Examples . Tests / MathClientServerTests . cs <nl> ppp b / src / csharp / Grpc . Examples . Tests / MathClientServerTests . cs <nl> public void Init ( ) <nl> Ports = { { Host , ServerPort . PickUnused , ServerCredentials . Insecure } } <nl> } ; <nl> server . Start ( ) ; <nl> - channel = new Channel ( Host , server . Ports . Single ( ) . BoundPort , Credentials . Insecure ) ; <nl> + channel = new Channel ( Host , server . Ports . Single ( ) . BoundPort , ChannelCredentials . Insecure ) ; <nl> client = Math . NewClient ( channel ) ; <nl> } <nl> <nl> mmm a / src / csharp / Grpc . HealthCheck . Tests / HealthClientServerTest . cs <nl> ppp b / src / csharp / Grpc . HealthCheck . Tests / HealthClientServerTest . cs <nl> public void Init ( ) <nl> Ports = { { Host , ServerPort . PickUnused , ServerCredentials . Insecure } } <nl> } ; <nl> server . Start ( ) ; <nl> - channel = new Channel ( Host , server . Ports . Single ( ) . BoundPort , Credentials . Insecure ) ; <nl> + channel = new Channel ( Host , server . Ports . Single ( ) . BoundPort , ChannelCredentials . Insecure ) ; <nl> <nl> client = Grpc . Health . V1Alpha . Health . 
NewClient ( channel ) ; <nl> } <nl> mmm a / src / csharp / Grpc . IntegrationTesting / InteropClient . cs <nl> ppp b / src / csharp / Grpc . IntegrationTesting / InteropClient . cs <nl> private async Task Run ( ) <nl> await channel . ShutdownAsync ( ) ; <nl> } <nl> <nl> - private async Task < Credentials > CreateCredentialsAsync ( ) <nl> + private async Task < ChannelCredentials > CreateCredentialsAsync ( ) <nl> { <nl> - var credentials = options . UseTls ? TestCredentials . CreateTestClientCredentials ( options . UseTestCa ) : Credentials . Insecure ; <nl> + var credentials = options . UseTls ? TestCredentials . CreateTestClientCredentials ( options . UseTestCa ) : ChannelCredentials . Insecure ; <nl> <nl> if ( options . TestCase = = " jwt_token_creds " ) <nl> { <nl> var googleCredential = await GoogleCredential . GetApplicationDefaultAsync ( ) ; <nl> Assert . IsTrue ( googleCredential . IsCreateScopedRequired ) ; <nl> - credentials = CompositeCredentials . Create ( googleCredential . ToGrpcCredentials ( ) , credentials ) ; <nl> + credentials = ChannelCredentials . Create ( credentials , googleCredential . ToGrpcCredentials ( ) ) ; <nl> } <nl> <nl> if ( options . TestCase = = " compute_engine_creds " ) <nl> { <nl> var googleCredential = await GoogleCredential . GetApplicationDefaultAsync ( ) ; <nl> Assert . IsFalse ( googleCredential . IsCreateScopedRequired ) ; <nl> - credentials = CompositeCredentials . Create ( googleCredential . ToGrpcCredentials ( ) , credentials ) ; <nl> + credentials = ChannelCredentials . Create ( credentials , googleCredential . ToGrpcCredentials ( ) ) ; <nl> } <nl> return credentials ; <nl> } <nl> mmm a / src / csharp / Grpc . IntegrationTesting / MetadataCredentialsTest . cs <nl> ppp b / src / csharp / Grpc . IntegrationTesting / MetadataCredentialsTest . cs <nl> public void Init ( ) <nl> metadata . Add ( " authorization " , " SECRET_TOKEN " ) ; <nl> } ) ; <nl> <nl> - var clientCredentials = CompositeCredentials . 
Create ( <nl> + var clientCredentials = ChannelCredentials . Create ( <nl> new SslCredentials ( File . ReadAllText ( TestCredentials . ClientCertAuthorityPath ) ) , <nl> new MetadataCredentials ( asyncAuthInterceptor ) ) ; <nl> channel = new Channel ( Host , server . Ports . Single ( ) . BoundPort , clientCredentials , options ) ; <nl>
|
introduce the new split - type credentials api
|
grpc/grpc
|
5bd7005833b60d9db31860049458b122fa496599
|
2015-10-07T00:57:45Z
|
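The `CompositeCallCredentials.ToNativeRecursive` method in the C# diff above folds a list of N credentials into nested two-way composites so each intermediate native handle can be released as soon as it has been combined. The sketch below reproduces that recursion in Python; `Handle` and `compose` are illustrative stand-ins, not gRPC APIs.

```python
# Toy sketch of the pairwise-composition recursion from
# CompositeCallCredentials.ToNativeRecursive (hypothetical helpers, not gRPC).

class Handle:
    """Stand-in for a native credentials handle."""
    def __init__(self, name):
        self.name = name
        self.closed = False

    def close(self):
        self.closed = True

def compose(a, b):
    # Stand-in for CredentialsSafeHandle.CreateComposite(cred1, cred2).
    return Handle("composite(%s,%s)" % (a.name, b.name))

def to_native_recursive(creds, start=0):
    """Fold creds[start:] into nested pairwise composites."""
    if start == len(creds) - 1:
        return Handle(creds[start])
    first = Handle(creds[start])
    rest = to_native_recursive(creds, start + 1)
    try:
        return compose(first, rest)
    finally:
        # Mirrors the C# `using` blocks: intermediate handles are released
        # once the composite that wraps them has been created.
        first.close()
        rest.close()

result = to_native_recursive(["ssl", "oauth", "metadata"])
print(result.name)  # → composite(ssl,composite(oauth,metadata))
```

As the diff's comment notes, composites of more than two credentials are rare in practice, which is why the simple recursive descent is acceptable there.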
mmm a / src / python / grpcio_testing / grpc_testing / __init__ . py <nl> ppp b / src / python / grpcio_testing / grpc_testing / __init__ . py <nl> <nl> " " " Objects for use in testing gRPC Python - using application code . " " " <nl> <nl> import abc <nl> + import six <nl> <nl> from google . protobuf import descriptor <nl> - import six <nl> <nl> import grpc <nl> <nl>
|
Fix import order to satisfy pylint
|
grpc/grpc
|
a941ec6d745f995e246daedc3514721152dae9fa
|
2018-06-08T06:38:34Z
|
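The commit above reorders imports so the standard-library group comes before third-party imports, the layout pylint's import-order checks enforce. A minimal sketch of that ordering property, assuming a toy stdlib whitelist rather than pylint's real module classification:

```python
# Hypothetical checker for the property pylint's wrong-import-order check
# enforces: all standard-library imports appear before third-party ones.
STDLIB = {"abc", "os", "sys"}  # illustrative subset, not pylint's real list

def groups_ordered(modules):
    """Return True if no stdlib import follows a non-stdlib import."""
    seen_third_party = False
    for mod in modules:
        if mod in STDLIB:
            if seen_third_party:
                return False  # stdlib import after a third-party one
        else:
            seen_third_party = True
    return True

print(groups_ordered(["abc", "six", "google.protobuf"]))  # → True
print(groups_ordered(["six", "abc"]))                     # → False
```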
mmm a / aten / src / ATen / core / ivalue . cpp <nl> ppp b / aten / src / ATen / core / ivalue . cpp <nl> std : : ostream & IValue : : repr ( <nl> return out < < enum_holder - > qualifiedClassName ( ) < < " . " < < <nl> enum_holder - > name ( ) ; <nl> } <nl> + case IValue : : Tag : : Object : { <nl> + TORCH_INTERNAL_ASSERT ( false , " repr ( ) not defined on : " , v . tagKind ( ) , " . Perhaps you ' ve frozen a module with custom classes ? " ) ; <nl> + } <nl> default : <nl> TORCH_INTERNAL_ASSERT ( false , " repr ( ) not defined on : " , v . tagKind ( ) ) ; <nl> } <nl> mmm a / test / cpp / jit / test_custom_class . cpp <nl> ppp b / test / cpp / jit / test_custom_class . cpp <nl> <nl> # include < gtest / gtest . h > <nl> <nl> # include < test / cpp / jit / test_custom_class_registrations . h > <nl> + # include < torch / csrc / jit / passes / freeze_module . h > <nl> # include < torch / custom_class . h > <nl> # include < torch / script . h > <nl> <nl> TEST ( CustomClassTest , TestDocString ) { <nl> method_doc_string ) ; <nl> } <nl> <nl> + TEST ( CustomClassTest , Serialization ) { <nl> + script : : Module m ( " m " ) ; <nl> + <nl> + / / test make_custom_class API <nl> + auto custom_class_obj = make_custom_class < MyStackClass < std : : string > > ( <nl> + std : : vector < std : : string > { " foo " , " bar " } ) ; <nl> + m . register_attribute ( <nl> + " s " , <nl> + custom_class_obj . type ( ) , <nl> + custom_class_obj , <nl> + / * is_parameter = * / false ) ; <nl> + m . define ( R " ( <nl> + def forward ( self ) : <nl> + return self . s . return_a_tuple ( ) <nl> + ) " ) ; <nl> + <nl> + auto test_with_obj = [ ] ( script : : Module & mod ) { <nl> + auto res = mod . run_method ( " forward " ) ; <nl> + auto tup = res . toTuple ( ) ; <nl> + AT_ASSERT ( tup - > elements ( ) . size ( ) = = 2 ) ; <nl> + auto i = tup - > elements ( ) [ 1 ] . toInt ( ) ; <nl> + AT_ASSERT ( i = = 123 ) ; <nl> + } ; <nl> + <nl> + auto frozen_m = torch : : jit : : freeze_module ( m . 
clone ( ) ) ; <nl> + <nl> + test_with_obj ( m ) ; <nl> + test_with_obj ( frozen_m ) ; <nl> + <nl> + std : : ostringstream oss ; <nl> + m . save ( oss ) ; <nl> + std : : istringstream iss ( oss . str ( ) ) ; <nl> + caffe2 : : serialize : : IStreamAdapter adapter { & iss } ; <nl> + auto loaded_module = torch : : jit : : load ( iss , torch : : kCPU ) ; <nl> + <nl> + std : : ostringstream oss_frozen ; <nl> + frozen_m . save ( oss_frozen ) ; <nl> + std : : istringstream iss_frozen ( oss_frozen . str ( ) ) ; <nl> + caffe2 : : serialize : : IStreamAdapter adapter_frozen { & iss_frozen } ; <nl> + auto loaded_frozen_module = torch : : jit : : load ( iss_frozen , torch : : kCPU ) ; <nl> + } <nl> + <nl> } / / namespace jit <nl> } / / namespace torch <nl> mmm a / test / quantization / test_quantize_jit . py <nl> ppp b / test / quantization / test_quantize_jit . py <nl> def forward ( self , x ) : <nl> num_quantize_per_tensor = 1 # for output <nl> for num_quant , num_op in num_op_by_num_quant . items ( ) : <nl> num_quantize_per_tensor + = num_op * num_quant <nl> + num_quantize_per_tensor - = 4 # constant propagation removes some prepacks <nl> FileCheck ( ) . check_count ( " aten : : quantize_per_tensor ( " , num_quantize_per_tensor , exactly = True ) \ <nl> . run ( m1 . graph ) <nl> <nl> def forward ( self , x ) : <nl> m = torch . jit . script ( M ( ) ) <nl> m = quantize_dynamic_jit ( m , { ' ' : float16_dynamic_qconfig } ) <nl> <nl> - FileCheck ( ) . check ( " quantized : : linear_prepack_fp16 " ) \ <nl> - . check_next ( " quantized : : linear_dynamic_fp16 " ) \ <nl> + FileCheck ( ) . check ( " quantized : : linear_dynamic_fp16 " ) \ <nl> . check_not ( " aten : : linear " ) \ <nl> . check_not ( " aten : : dequantize " ) \ <nl> . check_not ( " aten : : quantize " ) \ <nl> def forward ( self , indices1 , offsets1 , indices2 , offsets2 ) : <nl> m = prepare_jit ( m , { ' embedding1 ' : int4_qconfig , ' embedding2 ' : int8_qconfig } ) <nl> m = convert_jit ( m ) <nl> FileCheck ( ) . 
check ( " quantized : : embedding_bag_4bit_rowwise_offsets " ) \ <nl> - . check_next ( " quantized : : embedding_bag_byte_rowwise_offsets " ) \ <nl> + . check ( " quantized : : embedding_bag_byte_rowwise_offsets " ) \ <nl> . run ( m . graph ) <nl> m ( * dummy_inputs ) <nl> <nl> mmm a / test / test_mobile_optimizer . py <nl> ppp b / test / test_mobile_optimizer . py <nl> def _quant_script_and_optimize ( model ) : <nl> <nl> m , m_optim = _quant_script_and_optimize ( Standalone ( ) ) <nl> FileCheck ( ) . check_not ( " Conv2d = prim : : GetAttr [ name = \ " conv1 \ " ] " ) \ <nl> - . check_count ( " _jit_pass_hoist_conv_packed_params " , 2 , exactly = True ) \ <nl> + . check_count ( " __torch__ . torch . classes . quantized . Conv2dPackedParamsBase = prim : : Constant " , 2 , exactly = True ) \ <nl> . run ( m_optim . graph ) <nl> self . assertFalse ( hasattr ( m_optim , " conv1 " ) ) <nl> self . assertFalse ( hasattr ( m_optim , " conv2 " ) ) <nl> def _quant_script_and_optimize ( model ) : <nl> <nl> m , m_optim = _quant_script_and_optimize ( Parent ( ) ) <nl> FileCheck ( ) . check_not ( " Conv2d = prim : : GetAttr [ name = \ " conv1 \ " ] " ) \ <nl> - . check_count ( " _jit_pass_hoist_conv_packed_params " , 2 , exactly = True ) \ <nl> + . check_count ( " __torch__ . torch . classes . quantized . Conv2dPackedParamsBase = prim : : Constant " , 2 , exactly = True ) \ <nl> . run ( m_optim . graph ) <nl> self . assertFalse ( hasattr ( m_optim , " conv1 " ) ) <nl> self . assertFalse ( hasattr ( m_optim , " child " ) ) <nl> mmm a / torch / _C / __init__ . pyi . in <nl> ppp b / torch / _C / __init__ . pyi . in <nl> def _jit_pass_vulkan_optimize_for_mobile ( module : ' torch . jit . ScriptModule ' , <nl> def _jit_pass_metal_optimize_for_mobile ( module : ' torch . jit . ScriptModule ' , <nl> preserved_methods : List [ AnyStr ] ) - > ' torch . jit . ScriptModule ' : . . . <nl> def _jit_pass_inline ( Graph ) - > None : . . . 
<nl> + def _jit_pass_constant_propagation ( Graph ) - > None : . . . <nl> def _jit_get_schemas_for_operator ( name : str ) - > List [ FunctionSchema ] : . . . <nl> def _jit_can_fuse_on_cpu ( ) - > _bool : . . . <nl> def _jit_can_fuse_on_gpu ( ) - > _bool : . . . <nl> mmm a / torch / csrc / jit / ir / constants . cpp <nl> ppp b / torch / csrc / jit / ir / constants . cpp <nl> c10 : : optional < Value * > tryInsertConstant ( <nl> n - > destroy ( ) ; <nl> return c10 : : nullopt ; <nl> } ; <nl> - } else if ( val . isGenericDict ( ) & & insertableIValue ( val ) ) { <nl> - n - > ival_ ( attr : : value , val ) ; <nl> - n - > output ( ) - > setType ( val . type ( ) ) ; <nl> - } else if ( val . isEnum ( ) ) { <nl> + } else if ( <nl> + ( val . isGenericDict ( ) & & insertableIValue ( val ) ) | | ( val . isEnum ( ) ) | | <nl> + ( val . isObject ( ) & & ! val . toObjectRef ( ) . type ( ) - > is_module ( ) ) ) { <nl> n - > ival_ ( attr : : value , val ) ; <nl> n - > output ( ) - > setType ( val . type ( ) ) ; <nl> } else { <nl> c10 : : optional < IValue > toIValue ( const Value * v ) { <nl> } else if ( type - > cast < EnumType > ( ) ) { <nl> const auto & enum_val = node - > ival ( attr : : value ) ; <nl> return enum_val ; <nl> + } else if ( type - > cast < ClassType > ( ) & & ! type - > is_module ( ) ) { <nl> + const auto & class_val = node - > ival ( attr : : value ) ; <nl> + return class_val ; <nl> } else { <nl> std : : stringstream ss ; <nl> ss < < " constant literal not supported for : " < < type - > str ( ) ; <nl> mmm a / torch / csrc / jit / ir / ir . cpp <nl> ppp b / torch / csrc / jit / ir / ir . cpp <nl> static void printAttribute ( std : : ostream & out , const IValue & ival ) { <nl> } else if ( input . isTensorList ( ) ) { <nl> ss < < " [ < Tensors > ] " ; <nl> return true ; <nl> + } else if ( input . isObject ( ) & & ! input . type ( ) - > is_module ( ) ) { <nl> + ss < < " object ( " < < & input . 
toObjectRef ( ) < < " ) " ; <nl> + return true ; <nl> } <nl> return false ; <nl> } ; <nl> mmm a / torch / csrc / jit / ir / node_hashing . cpp <nl> ppp b / torch / csrc / jit / ir / node_hashing . cpp <nl> bool ivaluesEqual ( const IValue & a1 , const IValue & a2 ) { <nl> if ( a1 . isEnum ( ) ) { <nl> return a1 . toEnumHolder ( ) = = a2 . toEnumHolder ( ) ; <nl> } <nl> + if ( a1 . isObject ( ) ) { <nl> + return & a1 . toObjectRef ( ) = = & a2 . toObjectRef ( ) ; <nl> + } <nl> TORCH_INTERNAL_ASSERT ( false ) ; <nl> } <nl> <nl> mmm a / torch / csrc / jit / passes / constant_propagation . cpp <nl> ppp b / torch / csrc / jit / passes / constant_propagation . cpp <nl> <nl> namespace torch { <nl> namespace jit { <nl> <nl> - c10 : : optional < std : : vector < IValue > > runNodeIfInputsAreConstant ( const Node * n ) { <nl> + c10 : : optional < std : : vector < IValue > > runNodeIfInputsAreConstant ( <nl> + const Node * n , <nl> + bool ignore_custom_classes ) { <nl> Stack stack ; <nl> for ( auto input : n - > inputs ( ) ) { <nl> if ( auto ival = toIValue ( input ) ) { <nl> c10 : : optional < std : : vector < IValue > > runNodeIfInputsAreConstant ( const Node * n ) { <nl> return c10 : : nullopt ; <nl> } <nl> } <nl> + <nl> switch ( n - > kind ( ) ) { <nl> case prim : : ListUnpack : { <nl> if ( stack . back ( ) . toList ( ) . size ( ) ! = n - > outputs ( ) . size ( ) ) { <nl> c10 : : optional < std : : vector < IValue > > runNodeIfInputsAreConstant ( const Node * n ) { <nl> return c10 : : nullopt ; <nl> } <nl> } <nl> + / / Weak form of const propagation <nl> + if ( ignore_custom_classes ) { <nl> + if ( v . 
isCustomClass ( ) ) { <nl> + return c10 : : nullopt ; <nl> + } <nl> + } <nl> } <nl> return stack ; <nl> } <nl> std : : unordered_set < Symbol > skip_list = { <nl> struct ConstantPropagator { <nl> / / Runs constant propagation with an aliasing db and checks if inputs or <nl> / / outputs might be mutated in the graph <nl> - static ConstantPropagator WithAliasDb ( std : : shared_ptr < Graph > graph ) { <nl> - return ConstantPropagator ( graph , true ) ; <nl> + static ConstantPropagator WithAliasDb ( <nl> + std : : shared_ptr < Graph > graph , <nl> + bool ignore_custom_classes ) { <nl> + return ConstantPropagator ( std : : move ( graph ) , true , ignore_custom_classes ) ; <nl> } <nl> <nl> / / Runs constant propagation only on ops that clearly do not have aliased <nl> / / inputs or outputs without computing aliasing information <nl> static ConstantPropagator NoAliasDb ( std : : shared_ptr < Graph > graph ) { <nl> - return ConstantPropagator ( graph , false ) ; <nl> + return ConstantPropagator ( std : : move ( graph ) , false , false ) ; <nl> } <nl> <nl> void run ( ) { <nl> struct ConstantPropagator { <nl> } <nl> <nl> private : <nl> - ConstantPropagator ( std : : shared_ptr < Graph > graph , bool aliasing_types ) <nl> + ConstantPropagator ( <nl> + std : : shared_ptr < Graph > graph , <nl> + bool aliasing_types , <nl> + bool ignore_custom_classes ) <nl> : graph_ ( std : : move ( graph ) ) { <nl> if ( aliasing_types ) { <nl> aliasDb_ = torch : : make_unique < AliasDb > ( graph_ ) ; <nl> } else { <nl> aliasDb_ = nullptr ; <nl> } <nl> + ignore_custom_classes_ = ignore_custom_classes ; <nl> } <nl> <nl> void propagateNode ( Node * n ) { <nl> std : : vector < IValue > outputs ; <nl> - if ( auto outputs_opt = runNodeIfInputsAreConstant ( n ) ) { <nl> + if ( auto outputs_opt = <nl> + runNodeIfInputsAreConstant ( n , ignore_custom_classes_ ) ) { <nl> outputs = std : : move ( outputs_opt . 
value ( ) ) ; <nl> } else { <nl> / / The op failed to run , so we cannot continue constant - prop for it . <nl> struct ConstantPropagator { <nl> <nl> std : : shared_ptr < Graph > graph_ ; <nl> std : : unique_ptr < AliasDb > aliasDb_ ; <nl> + bool ignore_custom_classes_ ; <nl> } ; <nl> } / / anonymous namespace <nl> <nl> - void ConstantPropagation ( std : : shared_ptr < Graph > & graph ) { <nl> - ConstantPropagator cp = ConstantPropagator : : WithAliasDb ( graph ) ; <nl> + void ConstantPropagation ( <nl> + std : : shared_ptr < Graph > & graph , <nl> + bool ignore_custom_classes ) { <nl> + ConstantPropagator cp = <nl> + ConstantPropagator : : WithAliasDb ( graph , ignore_custom_classes ) ; <nl> cp . run ( ) ; <nl> EliminateDeadCode ( graph ) ; <nl> GRAPH_DUMP ( " After ConstantPropagation : " , graph ) ; <nl> mmm a / torch / csrc / jit / passes / constant_propagation . h <nl> ppp b / torch / csrc / jit / passes / constant_propagation . h <nl> <nl> namespace torch { <nl> namespace jit { <nl> <nl> - TORCH_API void ConstantPropagation ( std : : shared_ptr < Graph > & graph ) ; <nl> + / / Runs constant propagation on all objects unless ignore_custom_classes is <nl> + / / specified as true , in which case user defined classes are skipped . This is <nl> + / / useful to prevent early fusion of packing operations , which end up lowering <nl> + / / away information about their constructors ( e . g . packed : : linear_clamp_prepack <nl> + / / and prepacked : : conv2d_clamp_prepack ) <nl> + TORCH_API void ConstantPropagation ( <nl> + std : : shared_ptr < Graph > & graph , <nl> + bool ignore_custom_classes = false ) ; <nl> <nl> / / runs constant propagation only on ops that have non - aliasing inputs & outputs <nl> TORCH_API void ConstantPropagationImmutableTypes ( std : : shared_ptr < Graph > & graph ) ; <nl> <nl> / / Runs the node if its inputs are constants . 
Callers of this function must <nl> / / make their own determination if constant prop is appropriate - for example <nl> - / / non - deterministic ops or ops with side effects <nl> - TORCH_API c10 : : optional < Stack > runNodeIfInputsAreConstant ( const Node * node ) ; <nl> + / / non - deterministic ops or ops with side effects . If ignore_custom_classes is <nl> + / / specified , nodes that output user defined classes are not run . <nl> + TORCH_API c10 : : optional < Stack > runNodeIfInputsAreConstant ( <nl> + const Node * node , <nl> + bool ignore_custom_classes = false ) ; <nl> <nl> } / / namespace jit <nl> } / / namespace torch <nl> mmm a / torch / csrc / jit / passes / freeze_module . cpp <nl> ppp b / torch / csrc / jit / passes / freeze_module . cpp <nl> class AttributePropagator { <nl> ClearProfilingInformation ( subgraph ) ; <nl> } ; <nl> auto applyOptimizations = [ ] ( std : : shared_ptr < Graph > & subgraph ) { <nl> - runOptimization ( subgraph , / * unroll ? * / false ) ; <nl> + runOptimization ( <nl> + subgraph , / * unroll ? * / false , / * const_prop_user_classes ? * / false ) ; <nl> } ; <nl> <nl> for ( auto function : preservedMethods_ ) { <nl> class AttributePropagator { <nl> val = overrideGradient ( val ) ; <nl> } <nl> attr = std : : move ( dict ) ; <nl> + } else if ( attr . isObject ( ) & & ! attr . toObjectRef ( ) . type ( ) - > is_module ( ) ) { <nl> + auto obj_type = attr . type ( ) - > expect < ClassType > ( ) ; <nl> + auto obj_value = std : : move ( attr ) . toObject ( ) ; <nl> + auto sub_attributes = obj_type - > getAttributes ( ) ; <nl> + for ( const auto & sub_attr : sub_attributes ) { <nl> + auto sub_attr_val = obj_value - > getAttr ( sub_attr . getName ( ) ) ; <nl> + sub_attr_val = overrideGradient ( sub_attr_val ) ; <nl> + } <nl> + return obj_value ; <nl> } <nl> <nl> return attr ; <nl> mmm a / torch / csrc / jit / passes / xnnpack_rewrite . cpp <nl> ppp b / torch / csrc / jit / passes / xnnpack_rewrite . 
cpp <nl> <nl> # include < torch / csrc / jit / ir / ir . h > <nl> # include < torch / csrc / jit / ir / subgraph_matcher . h > <nl> # include < torch / csrc / jit / passes / constant_pooling . h > <nl> + # include < torch / csrc / jit / passes / constant_propagation . h > <nl> # include < torch / csrc / jit / passes / fold_conv_bn . h > <nl> # include < torch / csrc / jit / passes / freeze_module . h > <nl> # include < torch / csrc / jit / passes / fuse_linear . h > <nl> void fusePrePackedLinearConvWithClamp ( script : : Module & module ) { <nl> auto graph = module . get_method ( " forward " ) . graph ( ) ; <nl> fuseReluWithPackedOps ( graph ) ; <nl> fuseHardtanhWithPackedOps ( graph ) ; <nl> + <nl> + / / Ignore user defined classes for later passes <nl> + ConstantPropagation ( graph , true ) ; <nl> } <nl> <nl> void FoldPrePackingOps ( script : : Module & m ) { <nl> void FoldPrePackingOps ( script : : Module & m ) { <nl> " prepacked : : conv2d_transpose_clamp_prepack " ) ) ; <nl> } ; <nl> PrePackingOpsFolder ( m , filter_fn , " prepack_folding " ) ; <nl> + auto graph = m . get_method ( " forward " ) . graph ( ) ; <nl> + / / Folding requires a const propagation through user defined classes <nl> + ConstantPropagation ( graph , false ) ; <nl> } <nl> <nl> script : : Module optimizeForMobile ( <nl> mmm a / torch / csrc / jit / python / init . cpp <nl> ppp b / torch / csrc / jit / python / init . cpp <nl> void initJITBindings ( PyObject * module ) { <nl> } ) <nl> . def ( <nl> " _jit_pass_constant_propagation " , <nl> - [ ] ( std : : shared_ptr < Graph > & g ) { return ConstantPropagation ( g ) ; } ) <nl> + [ ] ( std : : shared_ptr < Graph > & g ) { return ConstantPropagation ( g ) ; } , <nl> + py : : arg ( " graph " ) ) <nl> . def ( " _jit_pass_erase_shape_information " , EraseShapeInformation ) <nl> . def ( <nl> " _jit_pass_create_autodiff_subgraphs " , <nl> mmm a / torch / csrc / jit / runtime / graph_executor . 
cpp <nl> ppp b / torch / csrc / jit / runtime / graph_executor . cpp <nl> void runNondiffOptimization ( <nl> " After customPostPassses ( end of runNondiffOptimization ) \ n " , * graph ) ; <nl> } <nl> <nl> - void runOptimization ( std : : shared_ptr < Graph > & graph , bool unroll ) { <nl> + void runOptimization ( <nl> + std : : shared_ptr < Graph > & graph , <nl> + bool unroll , <nl> + bool const_prop_user_classes ) { <nl> / / Basic graph preprocessing to eliminate noise . <nl> GRAPH_DEBUG ( <nl> " Before EliminateDeadCode ( beginning of runOptimization ) \ n " , * graph ) ; <nl> void runOptimization ( std : : shared_ptr < Graph > & graph , bool unroll ) { <nl> <nl> PeepholeOptimize ( graph ) ; <nl> GRAPH_DEBUG ( " After PeepholeOptimize , before ConstantPropagation \ n " , * graph ) ; <nl> - ConstantPropagation ( graph ) ; <nl> + <nl> + if ( const_prop_user_classes ) { <nl> + ConstantPropagation ( graph ) ; <nl> + } else { <nl> + ConstantPropagation ( graph , true ) ; <nl> + } <nl> GRAPH_DEBUG ( " After ConstantPropagation , before ConstantPooling \ n " , * graph ) ; <nl> + <nl> ConstantPooling ( graph ) ; <nl> GRAPH_DEBUG ( " After ConstantPooling \ n " , * graph ) ; <nl> <nl> mmm a / torch / csrc / jit / runtime / graph_executor_impl . h <nl> ppp b / torch / csrc / jit / runtime / graph_executor_impl . h <nl> namespace jit { <nl> <nl> void packGradient ( const Gradient & gradient , Node * dnode ) ; <nl> bool needsGradient ( const std : : shared_ptr < const Graph > & graph ) ; <nl> - void runOptimization ( std : : shared_ptr < Graph > & graph , bool unroll = true ) ; <nl> + void runOptimization ( <nl> + std : : shared_ptr < Graph > & graph , <nl> + bool unroll = true , <nl> + bool const_prop_user_classes = true ) ; <nl> void runNondiffOptimization ( <nl> std : : shared_ptr < Graph > & graph , <nl> bool strict_fuser_check = false ) ; <nl> mmm a / torch / csrc / jit / serialization / python_print . 
cpp <nl> ppp b / torch / csrc / jit / serialization / python_print . cpp <nl> struct PythonPrintImpl { <nl> <nl> void printConstant ( TaggedStringStream & stmt , const IValue & v ) { <nl> const auto customFormatter = [ & ] ( std : : ostream & ss , const IValue & v ) { <nl> - if ( v . isTensor ( ) | | containsNonASCIIString ( v ) ) { <nl> + if ( v . isTensor ( ) | | containsNonASCIIString ( v ) | | v . isObject ( ) ) { <nl> + TORCH_INTERNAL_ASSERT ( ! v . type ( ) - > is_module ( ) ) ; <nl> ss < < " CONSTANTS . c " < < getOrAddConstant ( v ) ; <nl> return true ; <nl> } <nl> mmm a / torch / quantization / quantize_jit . py <nl> ppp b / torch / quantization / quantize_jit . py <nl> def _convert_jit ( model , inplace = False , debug = False , quant_type = QuantType . STATIC , <nl> model . _reconstruct ( model_c ) <nl> else : <nl> model = wrap_cpp_module ( model_c ) <nl> + torch . _C . _jit_pass_constant_propagation ( model . graph ) <nl> return model <nl> <nl> def convert_jit ( model , inplace = False , debug = False , preserved_attrs = None ) : <nl> def _quantize_jit ( model , qconfig_dict , run_fn = None , run_args = None , inplace = False <nl> run_fn ( model , * run_args ) <nl> model = convert_jit ( model , True , debug ) <nl> <nl> + torch . _C . _jit_pass_constant_propagation ( model . graph ) <nl> return model <nl> <nl> def quantize_jit ( model , qconfig_dict , run_fn , run_args , inplace = False , debug = False ) : <nl>
|
[ TorchScript ] Support user defined classes as constants ( )
|
pytorch/pytorch
|
43a9d6fb6e7e68d6fd08cc5d2b69d4299a4b9851
|
2020-11-17T04:52:02Z
|
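The PyTorch row above threads an `ignore_custom_classes` flag through constant propagation so that nodes producing user-defined classes (e.g. prepacked op outputs) are not folded early. A toy Python sketch of that gating logic — not the real pass, and all names here are illustrative:

```python
# Fold a node only when every input is constant AND, if the flag is set,
# the node does not produce a user-defined class (so its constructor,
# e.g. a prepack op, survives for later passes to rewrite).
def run_node_if_inputs_are_constant(node, ignore_custom_classes=False):
    if not all(inp["is_constant"] for inp in node["inputs"]):
        return None
    if ignore_custom_classes and node["output_type"] == "custom_class":
        return None  # skip folding; keep the class construction visible
    return node["op"](*(inp["value"] for inp in node["inputs"]))

add = {"op": lambda a, b: a + b,
       "inputs": [{"is_constant": True, "value": 2},
                  {"is_constant": True, "value": 3}],
       "output_type": "int"}
prepack = {"op": lambda w: ("packed", w),
           "inputs": [{"is_constant": True, "value": [1.0]}],
           "output_type": "custom_class"}

print(run_node_if_inputs_are_constant(add, True))      # 5
print(run_node_if_inputs_are_constant(prepack, True))  # None (skipped)
```

With the flag off, the `prepack` node folds like any other constant node, which is exactly what the commit's second `ConstantPropagation(graph, false)` call in `FoldPrePackingOps` relies on.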
mmm a / src / main . cpp <nl> ppp b / src / main . cpp <nl> bool static ProcessMessage ( CNode * pfrom , string strCommand , CDataStream & vRecv , <nl> if ( pfrom - > nVersion > = NO_BLOOM_VERSION ) { <nl> Misbehaving ( pfrom - > GetId ( ) , 100 ) ; <nl> return false ; <nl> + } else if ( GetBoolArg ( " - enforcenodebloom " , false ) ) { <nl> + pfrom - > fDisconnect = true ; <nl> + return false ; <nl> } <nl> - / / TODO : Enable this after reasonable network upgrade <nl> - / / else { <nl> - / / pfrom - > fDisconnect = true ; <nl> - / / return false ; <nl> - / / } <nl> } <nl> <nl> <nl>
|
Add enforcenodebloom option .
|
bitcoin/bitcoin
|
0f4dc53fd6a19a763922b4c3888ce6542c594e01
|
2015-11-24T10:08:00Z
|
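The Bitcoin row above replaces a commented-out TODO with an opt-in `-enforcenodebloom` flag: new-protocol peers sending bloom messages without the service bit are penalised, while old-protocol peers are disconnected only if the operator enabled the flag. A hedged sketch of that branch — constants and field names are illustrative, not Bitcoin Core's actual API:

```python
NO_BLOOM_VERSION = 70011  # illustrative protocol-version threshold

def handle_bloom_message(peer, enforcenodebloom):
    """Return False if the message is rejected."""
    if peer["version"] >= NO_BLOOM_VERSION:
        peer["misbehavior"] = 100  # new peers should know better
        return False
    if enforcenodebloom:
        peer["disconnect"] = True  # opt-in strictness for old peers
        return False
    return True  # default: tolerate old peers, as before the flag

old_peer = {"version": 70001, "misbehavior": 0, "disconnect": False}
print(handle_bloom_message(dict(old_peer), enforcenodebloom=False))  # True
```

The design point of the commit is that the stricter behaviour is gated behind a config option instead of being switched on unconditionally for the whole network.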
mmm a / README . md <nl> ppp b / README . md <nl> for android : http : / / demo : 80 / forward / live / livestream_sd . html <nl> < pre > <nl> killall - 1 srs <nl> < / pre > <nl> - or use specified signal name to reload : < br / > <nl> + or use specified signal to reload : < br / > <nl> < pre > <nl> killall - s SIGHUP srs <nl> < / pre > <nl> mmm a / trunk / auto / build_ffmpeg . sh <nl> ppp b / trunk / auto / build_ffmpeg . sh <nl> else <nl> - - enable - postproc - - enable - bzlib - - enable - zlib - - enable - parsers \ <nl> - - enable - libfreetype \ <nl> - - enable - libx264 - - enable - libmp3lame - - enable - libaacplus \ <nl> - - - enable - pthreads - - extra - libs = - lpthread - - enable - encoders - - enable - decoders - - enable - avfilter - - enable - muxers - - enable - demuxers & & <nl> + - - enable - pthreads - - extra - libs = - lpthread \ <nl> + - - enable - encoders - - enable - decoders - - enable - avfilter - - enable - muxers - - enable - demuxers & & <nl> make & & make install <nl> - ret = $ ? ; if [ [ 0 - ne $ { ret } ] ] ; then echo " build x264 failed " ; exit 1 ; fi <nl> + ret = $ ? ; if [ [ 0 - ne $ { ret } ] ] ; then echo " build ffmpeg failed " ; exit 1 ; fi <nl> fi <nl> old mode 100755 <nl> new mode 100644 <nl>
|
fix the script error
|
ossrs/srs
|
c7ec6f511c2a80fd78847adce624d188ec048f62
|
2013-12-15T08:53:37Z
|
mmm a / test / test_namedtensor . py <nl> ppp b / test / test_namedtensor . py <nl> class TestNamedTensor ( TestCase ) : <nl> def test_trivial ( self ) : <nl> pass <nl> <nl> - # TODO ( rzou ) : Some form of this check should be added to self . assertEqual . <nl> - # Right now I don ' t know what it should look like . <nl> - def assertTensorDataAndNamesEqual ( self , x , y ) : <nl> - self . assertEqual ( x . names , y . names ) <nl> - unnamed_x = x . set_names ( None ) <nl> - unnamed_y = y . set_names ( None ) <nl> - self . assertEqual ( unnamed_x , unnamed_y ) <nl> - <nl> def _test_factory ( self , factory , device ) : <nl> x = factory ( [ ] , device = device ) <nl> self . assertEqual ( x . names , ( ) ) <nl>
|
Revert D16667816 : Improve test_namedtensor . py with named tensor equality check
|
pytorch/pytorch
|
71352fbd9aa159028e5bb0444c5786fe887e30a9
|
2019-08-09T17:55:14Z
|
mmm a / src / mips / full - codegen - mips . cc <nl> ppp b / src / mips / full - codegen - mips . cc <nl> void FullCodeGenerator : : VisitForInStatement ( ForInStatement * stmt ) { <nl> / / modification check . Otherwise , we got a fixed array , and we have <nl> / / to do a slow check . <nl> Label fixed_array ; <nl> - __ mov ( a2 , v0 ) ; <nl> - __ lw ( a1 , FieldMemOperand ( a2 , HeapObject : : kMapOffset ) ) ; <nl> + __ lw ( a2 , FieldMemOperand ( v0 , HeapObject : : kMapOffset ) ) ; <nl> __ LoadRoot ( at , Heap : : kMetaMapRootIndex ) ; <nl> - __ Branch ( & fixed_array , ne , a1 , Operand ( at ) ) ; <nl> + __ Branch ( & fixed_array , ne , a2 , Operand ( at ) ) ; <nl> <nl> / / We got a map in register v0 . Get the enumeration cache from it . <nl> + Label no_descriptors ; <nl> __ bind ( & use_cache ) ; <nl> - __ LoadInstanceDescriptors ( v0 , a1 , a2 ) ; <nl> - __ lw ( a1 , FieldMemOperand ( a1 , DescriptorArray : : kEnumCacheOffset ) ) ; <nl> - __ lw ( a2 , FieldMemOperand ( a1 , DescriptorArray : : kEnumCacheBridgeCacheOffset ) ) ; <nl> + <nl> + __ EnumLength ( a1 , v0 ) ; <nl> + __ Branch ( & no_descriptors , eq , a1 , Operand ( Smi : : FromInt ( 0 ) ) ) ; <nl> + <nl> + __ LoadInstanceDescriptors ( v0 , a2 , t0 ) ; <nl> + __ lw ( a2 , FieldMemOperand ( a2 , DescriptorArray : : kEnumCacheOffset ) ) ; <nl> + __ lw ( a2 , FieldMemOperand ( a2 , DescriptorArray : : kEnumCacheBridgeCacheOffset ) ) ; <nl> <nl> / / Set up the four remaining stack slots . <nl> __ push ( v0 ) ; / / Map . <nl> - __ lw ( a1 , FieldMemOperand ( a2 , FixedArray : : kLengthOffset ) ) ; <nl> __ li ( a0 , Operand ( Smi : : FromInt ( 0 ) ) ) ; <nl> / / Push enumeration cache , enumeration cache length ( as smi ) and zero . <nl> __ Push ( a2 , a1 , a0 ) ; <nl> __ jmp ( & loop ) ; <nl> <nl> + __ bind ( & no_descriptors ) ; <nl> + __ Drop ( 1 ) ; <nl> + __ jmp ( & exit ) ; <nl> + <nl> / / We got a fixed array in register v0 . Iterate through that . 
<nl> Label non_proxy ; <nl> __ bind ( & fixed_array ) ; <nl> mmm a / src / mips / lithium - codegen - mips . cc <nl> ppp b / src / mips / lithium - codegen - mips . cc <nl> void LCodeGen : : DoFixedArrayBaseLength ( LFixedArrayBaseLength * instr ) { <nl> } <nl> <nl> <nl> + void LCodeGen : : DoMapEnumLength ( LMapEnumLength * instr ) { <nl> + Register result = ToRegister ( instr - > result ( ) ) ; <nl> + Register map = ToRegister ( instr - > InputAt ( 0 ) ) ; <nl> + __ EnumLength ( result , map ) ; <nl> + } <nl> + <nl> + <nl> void LCodeGen : : DoElementsKind ( LElementsKind * instr ) { <nl> Register result = ToRegister ( instr - > result ( ) ) ; <nl> Register input = ToRegister ( instr - > InputAt ( 0 ) ) ; <nl> void LCodeGen : : DoForInCacheArray ( LForInCacheArray * instr ) { <nl> Register map = ToRegister ( instr - > map ( ) ) ; <nl> Register result = ToRegister ( instr - > result ( ) ) ; <nl> Register scratch = ToRegister ( instr - > scratch ( ) ) ; <nl> + Label load_cache , done ; <nl> + __ EnumLength ( result , map ) ; <nl> + __ Branch ( & load_cache , ne , result , Operand ( Smi : : FromInt ( 0 ) ) ) ; <nl> + __ li ( result , Operand ( isolate ( ) - > factory ( ) - > empty_fixed_array ( ) ) ) ; <nl> + __ jmp ( & done ) ; <nl> + <nl> + __ bind ( & load_cache ) ; <nl> __ LoadInstanceDescriptors ( map , result , scratch ) ; <nl> __ lw ( result , <nl> FieldMemOperand ( result , DescriptorArray : : kEnumCacheOffset ) ) ; <nl> __ lw ( result , <nl> FieldMemOperand ( result , FixedArray : : SizeFor ( instr - > idx ( ) ) ) ) ; <nl> DeoptimizeIf ( eq , instr - > environment ( ) , result , Operand ( zero_reg ) ) ; <nl> + <nl> + __ bind ( & done ) ; <nl> } <nl> <nl> <nl> mmm a / src / mips / lithium - mips . cc <nl> ppp b / src / mips / lithium - mips . 
cc <nl> LInstruction * LChunkBuilder : : DoFixedArrayBaseLength ( <nl> } <nl> <nl> <nl> + LInstruction * LChunkBuilder : : DoMapEnumLength ( HMapEnumLength * instr ) { <nl> + LOperand * map = UseRegisterAtStart ( instr - > value ( ) ) ; <nl> + return DefineAsRegister ( new ( zone ( ) ) LMapEnumLength ( map ) ) ; <nl> + } <nl> + <nl> + <nl> LInstruction * LChunkBuilder : : DoElementsKind ( HElementsKind * instr ) { <nl> LOperand * object = UseRegisterAtStart ( instr - > value ( ) ) ; <nl> return DefineAsRegister ( new ( zone ( ) ) LElementsKind ( object ) ) ; <nl> mmm a / src / mips / lithium - mips . h <nl> ppp b / src / mips / lithium - mips . h <nl> class LCodeGen ; <nl> V ( LoadNamedField ) \ <nl> V ( LoadNamedFieldPolymorphic ) \ <nl> V ( LoadNamedGeneric ) \ <nl> + V ( MapEnumLength ) \ <nl> V ( MathMinMax ) \ <nl> V ( ModI ) \ <nl> V ( MulI ) \ <nl> class LFixedArrayBaseLength : public LTemplateInstruction < 1 , 1 , 0 > { <nl> } ; <nl> <nl> <nl> + class LMapEnumLength : public LTemplateInstruction < 1 , 1 , 0 > { <nl> + public : <nl> + explicit LMapEnumLength ( LOperand * value ) { <nl> + inputs_ [ 0 ] = value ; <nl> + } <nl> + <nl> + DECLARE_CONCRETE_INSTRUCTION ( MapEnumLength , " map - enum - length " ) <nl> + } ; <nl> + <nl> + <nl> class LElementsKind : public LTemplateInstruction < 1 , 1 , 0 > { <nl> public : <nl> explicit LElementsKind ( LOperand * value ) { <nl> mmm a / src / mips / macro - assembler - mips . cc <nl> ppp b / src / mips / macro - assembler - mips . 
cc <nl> void MacroAssembler : : LoadInstanceDescriptors ( Register map , <nl> } <nl> <nl> <nl> + void MacroAssembler : : EnumLength ( Register dst , Register map ) { <nl> + STATIC_ASSERT ( Map : : EnumLengthBits : : kShift = = 0 ) ; <nl> + lw ( dst , FieldMemOperand ( map , Map : : kBitField3Offset ) ) ; <nl> + And ( dst , dst , Operand ( Smi : : FromInt ( Map : : EnumLengthBits : : kMask ) ) ) ; <nl> + } <nl> + <nl> + <nl> void MacroAssembler : : CheckEnumCache ( Register null_value , Label * call_runtime ) { <nl> - Label next ; <nl> - / / Preload a couple of values used in the loop . <nl> Register empty_fixed_array_value = t2 ; <nl> LoadRoot ( empty_fixed_array_value , Heap : : kEmptyFixedArrayRootIndex ) ; <nl> - mov ( a1 , a0 ) ; <nl> - bind ( & next ) ; <nl> + Label next , start ; <nl> + mov ( a2 , a0 ) ; <nl> <nl> - / / Check that there are no elements . Register a1 contains the <nl> - / / current JS object we ' ve reached through the prototype chain . <nl> - lw ( a2 , FieldMemOperand ( a1 , JSObject : : kElementsOffset ) ) ; <nl> - Branch ( call_runtime , ne , a2 , Operand ( empty_fixed_array_value ) ) ; <nl> - <nl> - / / Check that instance descriptors are not empty so that we can <nl> - / / check for an enum cache . Leave the map in a2 for the subsequent <nl> - / / prototype load . <nl> - lw ( a2 , FieldMemOperand ( a1 , HeapObject : : kMapOffset ) ) ; <nl> - lw ( a3 , FieldMemOperand ( a2 , Map : : kTransitionsOrBackPointerOffset ) ) ; <nl> + / / Check if the enum length field is properly initialized , indicating that <nl> + / / there is an enum cache . 
<nl> + lw ( a1 , FieldMemOperand ( a2 , HeapObject : : kMapOffset ) ) ; <nl> <nl> - CheckMap ( a3 , <nl> - t3 , <nl> - isolate ( ) - > factory ( ) - > fixed_array_map ( ) , <nl> - call_runtime , <nl> - DONT_DO_SMI_CHECK ) ; <nl> + EnumLength ( a3 , a1 ) ; <nl> + Branch ( call_runtime , eq , a3 , Operand ( Smi : : FromInt ( Map : : kInvalidEnumCache ) ) ) ; <nl> <nl> - LoadRoot ( t3 , Heap : : kEmptyDescriptorArrayRootIndex ) ; <nl> - lw ( a3 , FieldMemOperand ( a3 , TransitionArray : : kDescriptorsOffset ) ) ; <nl> - Branch ( call_runtime , eq , a3 , Operand ( t3 ) ) ; <nl> + jmp ( & start ) ; <nl> <nl> - / / Check that there is an enum cache in the non - empty instance <nl> - / / descriptors ( a3 ) . This is the case if the next enumeration <nl> - / / index field does not contain a smi . <nl> - lw ( a3 , FieldMemOperand ( a3 , DescriptorArray : : kEnumCacheOffset ) ) ; <nl> - JumpIfSmi ( a3 , call_runtime ) ; <nl> + bind ( & next ) ; <nl> + lw ( a1 , FieldMemOperand ( a2 , HeapObject : : kMapOffset ) ) ; <nl> <nl> / / For all objects but the receiver , check that the cache is empty . <nl> - Label check_prototype ; <nl> - Branch ( & check_prototype , eq , a1 , Operand ( a0 ) ) ; <nl> - lw ( a3 , FieldMemOperand ( a3 , DescriptorArray : : kEnumCacheBridgeCacheOffset ) ) ; <nl> - Branch ( call_runtime , ne , a3 , Operand ( empty_fixed_array_value ) ) ; <nl> - <nl> - / / Load the prototype from the map and loop if non - null . <nl> - bind ( & check_prototype ) ; <nl> - lw ( a1 , FieldMemOperand ( a2 , Map : : kPrototypeOffset ) ) ; <nl> - Branch ( & next , ne , a1 , Operand ( null_value ) ) ; <nl> + EnumLength ( a3 , a1 ) ; <nl> + Branch ( call_runtime , ne , a3 , Operand ( Smi : : FromInt ( 0 ) ) ) ; <nl> + <nl> + bind ( & start ) ; <nl> + <nl> + / / Check that there are no elements . Register r2 contains the current JS <nl> + / / object we ' ve reached through the prototype chain . 
<nl> + lw ( a2 , FieldMemOperand ( a2 , JSObject : : kElementsOffset ) ) ; <nl> + Branch ( call_runtime , ne , a2 , Operand ( empty_fixed_array_value ) ) ; <nl> + <nl> + lw ( a2 , FieldMemOperand ( a1 , Map : : kPrototypeOffset ) ) ; <nl> + Branch ( & next , ne , a2 , Operand ( null_value ) ) ; <nl> } <nl> <nl> <nl> mmm a / src / mips / macro - assembler - mips . h <nl> ppp b / src / mips / macro - assembler - mips . h <nl> class MacroAssembler : public Assembler { <nl> void LoadInstanceDescriptors ( Register map , <nl> Register descriptors , <nl> Register scratch ) ; <nl> + void EnumLength ( Register dst , Register map ) ; <nl> <nl> <nl> / / Activation support . <nl>
|
MIPS : Use a special EnumLength field to indicate number of valid enum cache values .
|
v8/v8
|
2b91f23b58f12c85e3917aa019fe8aa7508697af
|
2012-08-31T09:50:27Z
|
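The V8 row above makes the MIPS port read an enum-cache length directly from the low bits of the map's `bit_field3` (`EnumLength`), with an all-ones sentinel meaning "cache not initialised". A minimal Python sketch of that bitfield read — the field width and sentinel value are assumptions for illustration, not V8's exact constants:

```python
ENUM_LENGTH_SHIFT = 0            # matches the STATIC_ASSERT in the diff
ENUM_LENGTH_MASK = (1 << 11) - 1  # illustrative field width
INVALID_ENUM_CACHE = ENUM_LENGTH_MASK  # all-ones sentinel

def enum_length(bit_field3):
    # EnumLength is just a shift-and-mask on the map's bitfield word.
    return (bit_field3 >> ENUM_LENGTH_SHIFT) & ENUM_LENGTH_MASK

def has_enum_cache(bit_field3):
    # Mirrors the diff's branch-to-runtime on kInvalidEnumCache.
    return enum_length(bit_field3) != INVALID_ENUM_CACHE

print(enum_length(0x1805))   # 5: only the low 11 bits are the length
print(has_enum_cache(0x7FF))  # False: sentinel means no cache yet
```

Replacing the old pointer-chasing walk of the descriptor array with this single masked load is what lets both the full codegen and Lithium paths short-circuit the zero-descriptors case.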
mmm a / src / execution . cc <nl> ppp b / src / execution . cc <nl> void StackGuard : : set_interrupt_limits ( const ExecutionAccess & lock ) { <nl> isolate_ - > heap ( ) - > SetStackLimits ( ) ; <nl> } <nl> <nl> - <nl> void StackGuard : : reset_limits ( const ExecutionAccess & lock ) { <nl> DCHECK_NOT_NULL ( isolate_ ) ; <nl> thread_local_ . set_jslimit ( thread_local_ . real_jslimit_ ) ; <nl> void StackGuard : : reset_limits ( const ExecutionAccess & lock ) { <nl> isolate_ - > heap ( ) - > SetStackLimits ( ) ; <nl> } <nl> <nl> - <nl> - static void PrintDeserializedCodeInfo ( Handle < JSFunction > function ) { <nl> - if ( function - > code ( ) = = function - > shared ( ) - > GetCode ( ) & & <nl> - function - > shared ( ) - > deserialized ( ) ) { <nl> - PrintF ( " [ Running deserialized script " ) ; <nl> - Object script = function - > shared ( ) - > script ( ) ; <nl> - if ( script - > IsScript ( ) ) { <nl> - Object name = Script : : cast ( script ) - > name ( ) ; <nl> - if ( name - > IsString ( ) ) { <nl> - PrintF ( " : % s " , String : : cast ( name ) - > ToCString ( ) . get ( ) ) ; <nl> - } <nl> - } <nl> - PrintF ( " ] \ n " ) ; <nl> - } <nl> - } <nl> - <nl> - <nl> namespace { <nl> <nl> Handle < Object > NormalizeReceiver ( Isolate * isolate , Handle < Object > receiver ) { <nl> V8_WARN_UNUSED_RESULT MaybeHandle < Object > Invoke ( Isolate * isolate , <nl> Address func = params . target - > ptr ( ) ; <nl> Address recv = params . receiver - > ptr ( ) ; <nl> Address * * argv = reinterpret_cast < Address * * > ( params . argv ) ; <nl> - if ( FLAG_profile_deserialization & & params . target - > IsJSFunction ( ) ) { <nl> - PrintDeserializedCodeInfo ( Handle < JSFunction > : : cast ( params . target ) ) ; <nl> - } <nl> RuntimeCallTimerScope timer ( isolate , RuntimeCallCounterId : : kJS_Execution ) ; <nl> value = Object ( stub_entry . Call ( isolate - > isolate_data ( ) - > isolate_root ( ) , <nl> orig_func , func , recv , params . 
argc , argv ) ) ; <nl> mmm a / src / objects / shared - function - info - inl . h <nl> ppp b / src / objects / shared - function - info - inl . h <nl> BIT_FIELD_ACCESSORS ( SharedFunctionInfo , flags , name_should_print_as_anonymous , <nl> SharedFunctionInfo : : NameShouldPrintAsAnonymousBit ) <nl> BIT_FIELD_ACCESSORS ( SharedFunctionInfo , flags , is_anonymous_expression , <nl> SharedFunctionInfo : : IsAnonymousExpressionBit ) <nl> - BIT_FIELD_ACCESSORS ( SharedFunctionInfo , flags , deserialized , <nl> - SharedFunctionInfo : : IsDeserializedBit ) <nl> BIT_FIELD_ACCESSORS ( SharedFunctionInfo , flags , has_reported_binary_coverage , <nl> SharedFunctionInfo : : HasReportedBinaryCoverageBit ) <nl> <nl> mmm a / src / objects / shared - function - info . h <nl> ppp b / src / objects / shared - function - info . h <nl> class SharedFunctionInfo : public HeapObject { <nl> / / which does not change this flag ) . <nl> DECL_BOOLEAN_ACCESSORS ( is_anonymous_expression ) <nl> <nl> - / / Indicates that the the shared function info is deserialized from cache . <nl> - DECL_BOOLEAN_ACCESSORS ( deserialized ) <nl> - <nl> / / Indicates that the function has been reported for binary code coverage . <nl> DECL_BOOLEAN_ACCESSORS ( has_reported_binary_coverage ) <nl> <nl> class SharedFunctionInfo : public HeapObject { <nl> V ( ConstructAsBuiltinBit , bool , 1 , _ ) \ <nl> V ( IsAnonymousExpressionBit , bool , 1 , _ ) \ <nl> V ( NameShouldPrintAsAnonymousBit , bool , 1 , _ ) \ <nl> - V ( IsDeserializedBit , bool , 1 , _ ) \ <nl> V ( HasReportedBinaryCoverageBit , bool , 1 , _ ) \ <nl> V ( IsNamedExpressionBit , bool , 1 , _ ) \ <nl> V ( IsTopLevelBit , bool , 1 , _ ) <nl> mmm a / src / snapshot / code - serializer . cc <nl> ppp b / src / snapshot / code - serializer . cc <nl> void CodeSerializer : : SerializeObject ( HeapObject obj , HowToCode how_to_code , <nl> } <nl> DCHECK ( ! sfi - > HasDebugInfo ( ) ) ; <nl> <nl> - / / Mark SFI to indicate whether the code is cached . 
<nl> - bool was_deserialized = sfi - > deserialized ( ) ; <nl> - sfi - > set_deserialized ( sfi - > is_compiled ( ) ) ; <nl> SerializeGeneric ( obj , how_to_code , where_to_point ) ; <nl> - sfi - > set_deserialized ( was_deserialized ) ; <nl> <nl> / / Restore debug info <nl> if ( ! debug_info . is_null ( ) ) { <nl> mmm a / test / cctest / test - serialize . cc <nl> ppp b / test / cctest / test - serialize . cc <nl> v8 : : ScriptCompiler : : CachedData * CompileRunAndProduceCache ( <nl> return cache ; <nl> } <nl> <nl> - void CheckDeserializedFlag ( v8 : : Local < v8 : : UnboundScript > script ) { <nl> - i : : Handle < i : : SharedFunctionInfo > sfi = v8 : : Utils : : OpenHandle ( * script ) ; <nl> - i : : SharedFunctionInfo : : ScriptIterator iterator ( sfi - > GetIsolate ( ) , <nl> - Script : : cast ( sfi - > script ( ) ) ) ; <nl> - for ( SharedFunctionInfo next = iterator . Next ( ) ; ! next . is_null ( ) ; <nl> - next = iterator . Next ( ) ) { <nl> - CHECK_EQ ( next - > is_compiled ( ) , next - > deserialized ( ) ) ; <nl> - } <nl> - } <nl> - <nl> TEST ( CodeSerializerIsolates ) { <nl> const char * source = " function f ( ) { return ' abc ' ; } ; f ( ) + ' def ' " ; <nl> v8 : : ScriptCompiler : : CachedData * cache = CompileRunAndProduceCache ( source ) ; <nl> TEST ( CodeSerializerIsolates ) { <nl> . ToLocalChecked ( ) ; <nl> } <nl> CHECK ( ! cache - > rejected ) ; <nl> - CheckDeserializedFlag ( script ) ; <nl> v8 : : Local < v8 : : Value > result = script - > BindToCurrentContext ( ) <nl> - > Run ( isolate2 - > GetCurrentContext ( ) ) <nl> . ToLocalChecked ( ) ; <nl> TEST ( CodeSerializerIsolatesEager ) { <nl> . ToLocalChecked ( ) ; <nl> } <nl> CHECK ( ! cache - > rejected ) ; <nl> - CheckDeserializedFlag ( script ) ; <nl> v8 : : Local < v8 : : Value > result = script - > BindToCurrentContext ( ) <nl> - > Run ( isolate2 - > GetCurrentContext ( ) ) <nl> . ToLocalChecked ( ) ; <nl> TEST ( CodeSerializerAfterExecute ) { <nl> . 
ToLocalChecked ( ) ; <nl> } <nl> CHECK ( ! cache - > rejected ) ; <nl> - CheckDeserializedFlag ( script ) ; <nl> <nl> Handle < SharedFunctionInfo > sfi = v8 : : Utils : : OpenHandle ( * script ) ; <nl> CHECK ( sfi - > HasBytecodeArray ( ) ) ; <nl> TEST ( CodeSerializerWithHarmonyScoping ) { <nl> isolate2 , & source , v8 : : ScriptCompiler : : kConsumeCodeCache ) <nl> . ToLocalChecked ( ) ; <nl> } <nl> - CheckDeserializedFlag ( script ) ; <nl> v8 : : Local < v8 : : Value > result = script - > BindToCurrentContext ( ) <nl> - > Run ( isolate2 - > GetCurrentContext ( ) ) <nl> . ToLocalChecked ( ) ; <nl>
|
[ SFI ] Free up unused IsDeserializedBit from SFI : : flags .
|
v8/v8
|
2619f59c264a962418d9edc89f82ec22fed1da6a
|
2019-01-18T12:32:03Z
|
mmm a / modules / imgproc / src / imgwarp . cpp <nl> ppp b / modules / imgproc / src / imgwarp . cpp <nl> cv2DRotationMatrix ( CvPoint2D32f center , double angle , <nl> double scale , CvMat * matrix ) <nl> { <nl> cv : : Mat M0 = cv : : cvarrToMat ( matrix ) , M = cv : : getRotationMatrix2D ( center , angle , scale ) ; <nl> - CV_Assert ( M . size ( ) = = M . size ( ) ) ; <nl> + CV_Assert ( M . size ( ) = = M0 . size ( ) ) ; <nl> M . convertTo ( M0 , M0 . type ( ) ) ; <nl> return matrix ; <nl> } <nl> cvGetPerspectiveTransform ( const CvPoint2D32f * src , <nl> { <nl> cv : : Mat M0 = cv : : cvarrToMat ( matrix ) , <nl> M = cv : : getPerspectiveTransform ( ( const cv : : Point2f * ) src , ( const cv : : Point2f * ) dst ) ; <nl> - CV_Assert ( M . size ( ) = = M . size ( ) ) ; <nl> + CV_Assert ( M . size ( ) = = M0 . size ( ) ) ; <nl> M . convertTo ( M0 , M0 . type ( ) ) ; <nl> return matrix ; <nl> } <nl>
|
fixed misprint in imgwarp . cpp
|
opencv/opencv
|
5a4fa4607b417bd152f168eaa425da60600f4d3c
|
2013-04-01T07:26:49Z
|
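The OpenCV row above fixes a classic tautological assertion: `CV_Assert(M.size() == M.size())` compares a value with itself, so it can never fire, and the intended `M0` size check was a no-op. A tiny Python sketch of the bug class, with hypothetical names:

```python
def buggy_check(m, m0):
    # The typo this commit fixes: self-comparison is always True,
    # so a shape mismatch in m0 slips through silently.
    return m["size"] == m["size"]

def convert_checked(m, m0):
    # Fixed form: compare source against destination.
    assert m["size"] == m0["size"], "shape mismatch"
    m0["data"] = list(m["data"])
    return m0

print(buggy_check({"size": (2, 3)}, {"size": (9, 9)}))  # True — missed!
```

Linters that flag identical left/right operands in comparisons catch exactly this pattern, which is likely why both call sites in the diff had the same copy-pasted mistake.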
mmm a / hphp / hhvm / thread_locals . txt <nl> ppp b / hphp / hhvm / thread_locals . txt <nl> HPHP : : s_cachedHash <nl> HPHP : : jit : : irgen : : IRBuilder : : optimizeInst ( HPHP : : jit : : IRInstruction * , HPHP : : jit : : irgen : : IRBuilder : : CloneFlag , HPHP : : jit : : Block * ) : : instNest <nl> HPHP : : Trace : : dumpEntry ( HPHP : : Trace : : RingBufferEntry const * ) : : indentDepth <nl> HPHP : : high_arena_tcache # Only needed in lowptr builds . <nl> + HPHP : : s_bigint_data # hphp / zend / is also used by Hack Native <nl> + HPHP : : s_bigint_data_guard # hphp / zend / is also used by Hack Native <nl> mmm a / hphp / runtime / base / ini - setting . cpp <nl> ppp b / hphp / runtime / base / ini - setting . cpp <nl> <nl> # include " hphp / runtime / base / execution - context . h " <nl> # include " hphp / runtime / base / req - optional . h " <nl> # include " hphp / runtime / base / runtime - option . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> <nl> # include " hphp / runtime / base / ini - parser / zend - ini . h " <nl> <nl> <nl> # include " hphp / util / portability . h " <nl> # include " hphp / util / logger . h " <nl> <nl> + # include " hphp / zend / zend - strtod . h " <nl> + <nl> # ifndef _MSC_VER <nl> # include < glob . h > <nl> # endif <nl> mmm a / hphp / runtime / base / program - functions . cpp <nl> ppp b / hphp / runtime / base / program - functions . cpp <nl> <nl> # include " hphp / runtime / base / unit - cache . h " <nl> # include " hphp / runtime / base / variable - serializer . h " <nl> # include " hphp / runtime / base / zend - math . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> # include " hphp / runtime / debugger / debugger . h " <nl> # include " hphp / runtime / debugger / debugger_client . h " <nl> # include " hphp / runtime / debugger / debugger_hook_handler . h " <nl> <nl> # include " hphp / util / type - scan . h " <nl> <nl> # include " hphp / zend / zend - string . 
h " <nl> + # include " hphp / zend / zend - strtod . h " <nl> <nl> # include < folly / CPortability . h > <nl> # include < folly / Portability . h > <nl> mmm a / hphp / runtime / base / string - data . cpp <nl> ppp b / hphp / runtime / base / string - data . cpp <nl> <nl> # include " hphp / runtime / base / zend - functions . h " <nl> # include " hphp / runtime / base / zend - printf . h " <nl> # include " hphp / runtime / base / zend - string . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> # include " hphp / runtime / ext / apc / ext_apc . h " <nl> <nl> + # include " hphp / zend / zend - strtod . h " <nl> + <nl> namespace HPHP { <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / hphp / runtime / base / variable - unserializer . cpp <nl> ppp b / hphp / runtime / base / variable - unserializer . cpp <nl> <nl> # include " hphp / runtime / base / struct - log - util . h " <nl> # include " hphp / runtime / base / request - info . h " <nl> # include " hphp / runtime / base / variable - serializer . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> <nl> # include " hphp / runtime / ext / collections / ext_collections - map . h " <nl> # include " hphp / runtime / ext / collections / ext_collections - pair . h " <nl> <nl> <nl> # include " hphp / runtime / vm / jit / perf - counters . h " <nl> <nl> + # include " hphp / zend / zend - strtod . h " <nl> + <nl> namespace HPHP { <nl> <nl> namespace { <nl> mmm a / hphp / runtime / base / zend - collator . cpp <nl> ppp b / hphp / runtime / base / zend - collator . cpp <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> * / <nl> # include " hphp / runtime / base / zend - collator . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> # include " hphp / runtime / base / intl - convert . 
h " <nl> # include " hphp / runtime / base / tv - type . h " <nl> # include " hphp / runtime / base / builtin - functions . h " <nl> # include " hphp / runtime / base / runtime - error . h " <nl> # include " hphp / runtime / base / array - iterator . h " <nl> # include " hphp / runtime / base / comparisons . h " <nl> + # include " hphp / zend / zend - strtod . h " <nl> <nl> namespace HPHP { <nl> <nl> mmm a / hphp / runtime / base / zend - functions . cpp <nl> ppp b / hphp / runtime / base / zend - functions . cpp <nl> <nl> # include " hphp / runtime / base / zend - functions . h " <nl> <nl> # include " hphp / runtime / base / runtime - option . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> # include " hphp / util / fast_strtoll_base10 . h " <nl> + # include " hphp / zend / zend - strtod . h " <nl> <nl> namespace HPHP { <nl> <nl> mmm a / hphp / runtime / base / zend - printf . cpp <nl> ppp b / hphp / runtime / base / zend - printf . cpp <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> * / <nl> <nl> + / / NOTE : See also " hphp / zend / zend - printf . * " . <nl> + <nl> # include " hphp / runtime / base / zend - printf . h " <nl> <nl> # include < cmath > <nl> <nl> # include " hphp / runtime / base / execution - context . h " <nl> # include " hphp / runtime / base / string - buffer . h " <nl> # include " hphp / runtime / base / zend - string . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> + # include " hphp / zend / zend - strtod . 
h " <nl> <nl> namespace HPHP { <nl> <nl> namespace HPHP { <nl> # define ALIGN_RIGHT 1 <nl> # define ADJ_WIDTH 1 <nl> # define ADJ_PRECISION 2 <nl> - # define NUM_BUF_SIZE 500 <nl> - # define NDIG 320 <nl> - # define FLOAT_DIGITS 6 <nl> # define FLOAT_PRECISION 6 <nl> # define MAX_FLOAT_DIGITS 38 <nl> # define MAX_FLOAT_PRECISION 40 <nl> - # define EXPONENT_LENGTH 10 <nl> <nl> static char hexchars [ ] = " 0123456789abcdef " ; <nl> static char HEXCHARS [ ] = " 0123456789ABCDEF " ; <nl> <nl> - typedef enum { <nl> - LM_STD = 0 , <nl> - LM_INTMAX_T , <nl> - LM_PTRDIFF_T , <nl> - LM_LONG_LONG , <nl> - LM_SIZE_T , <nl> - LM_LONG , <nl> - LM_LONG_DOUBLE <nl> - } length_modifier_e ; <nl> - <nl> - typedef enum { <nl> - NO = 0 , YES = 1 <nl> - } boolean_e ; <nl> - <nl> - # define NUM ( c ) ( c - ' 0 ' ) <nl> - <nl> - # define STR_TO_DEC ( str , num ) do { \ <nl> - num = NUM ( * str + + ) ; \ <nl> - while ( isdigit ( ( int ) * str ) ) { \ <nl> - num * = 10 ; \ <nl> - num + = NUM ( * str + + ) ; \ <nl> - if ( num > = INT_MAX / 10 ) { \ <nl> - while ( isdigit ( ( int ) * str + + ) ) \ <nl> - continue ; \ <nl> - break ; \ <nl> - } \ <nl> - } \ <nl> - } while ( 0 ) <nl> - <nl> - / * <nl> - * This macro does zero padding so that the precision <nl> - * requirement is satisfied . The padding is done by <nl> - * adding ' 0 ' s to the left of the string that is going <nl> - * to be printed . 
<nl> - * / <nl> - # define FIX_PRECISION ( adjust , precision , s , s_len ) do { \ <nl> - if ( adjust ) \ <nl> - while ( s_len < precision ) { \ <nl> - * - - s = ' 0 ' ; \ <nl> - s_len + + ; \ <nl> - } \ <nl> - } while ( 0 ) <nl> - <nl> - typedef int64_t wide_int ; <nl> - typedef uint64_t u_wide_int ; <nl> - <nl> - # define FALSE 0 <nl> - # define TRUE 1 <nl> - # define NUL ' \ 0 ' <nl> - # define INT_NULL ( ( int * ) 0 ) <nl> - <nl> - static const char * s_null = " ( null ) " ; <nl> - # define S_NULL_LEN 6 <nl> - <nl> - # define FLOAT_DIGITS 6 <nl> - # define EXPONENT_LENGTH 10 <nl> - <nl> # define HAVE_LOCALE_H 1 <nl> <nl> # ifdef HAVE_LOCALE_H <nl> namespace HPHP { <nl> # endif <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / * <nl> - * Copyright ( c ) 2002 , 2006 Todd C . Miller < Todd . Miller @ courtesan . com > <nl> - * <nl> - * Permission to use , copy , modify , and distribute this software for any <nl> - * purpose with or without fee is hereby granted , provided that the above <nl> - * copyright notice and this permission notice appear in all copies . <nl> - * <nl> - * THE SOFTWARE IS PROVIDED " AS IS " AND THE AUTHOR DISCLAIMS ALL WARRANTIES <nl> - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF <nl> - * MERCHANTABILITY AND FITNESS . IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR <nl> - * ANY SPECIAL , DIRECT , INDIRECT , OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES <nl> - * WHATSOEVER RESULTING FROM LOSS OF USE , DATA OR PROFITS , WHETHER IN AN <nl> - * ACTION OF CONTRACT , NEGLIGENCE OR OTHER TORTIOUS ACTION , ARISING OUT OF <nl> - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE . 
<nl> - * <nl> - * Sponsored in part by the Defense Advanced Research Projects <nl> - * Agency ( DARPA ) and Air Force Research Laboratory , Air Force <nl> - * Materiel Command , USAF , under agreement number F39502 - 99 - 1 - 0512 . <nl> - * / <nl> - <nl> - static char * __cvt ( double value , int ndigit , int * decpt , int * sign , <nl> - int fmode , int pad ) { <nl> - register char * s = nullptr ; <nl> - char * p , * rve , c ; <nl> - size_t siz ; <nl> - <nl> - if ( ndigit < 0 ) { <nl> - siz = - ndigit + 1 ; <nl> - } else { <nl> - siz = ndigit + 1 ; <nl> - } <nl> - <nl> - / * __dtoa ( ) doesn ' t allocate space for 0 so we do it by hand * / <nl> - if ( value = = 0 . 0 ) { <nl> - * decpt = 1 - fmode ; / * 1 for ' e ' , 0 for ' f ' * / <nl> - * sign = 0 ; <nl> - if ( ( rve = s = ( char * ) req : : malloc_noptrs ( ndigit ? siz : 2 ) ) = = nullptr ) { <nl> - return ( nullptr ) ; <nl> - } <nl> - * rve + + = ' 0 ' ; <nl> - * rve = ' \ 0 ' ; <nl> - if ( ! ndigit ) { <nl> - return ( s ) ; <nl> - } <nl> - } else { <nl> - p = zend_dtoa ( value , fmode + 2 , ndigit , decpt , sign , & rve ) ; <nl> - if ( * decpt = = 9999 ) { <nl> - / * Infinity or Nan , convert to inf or nan like printf * / <nl> - * decpt = 0 ; <nl> - c = * p ; <nl> - zend_freedtoa ( p ) ; <nl> - return strdup ( c = = ' I ' ? 
" INF " : " NAN " ) ; <nl> - } <nl> - / * Make a local copy and adjust rve to be in terms of s * / <nl> - if ( pad & & fmode ) { <nl> - siz + = * decpt ; <nl> - } <nl> - if ( ( s = ( char * ) req : : malloc_noptrs ( siz + 1 ) ) = = nullptr ) { <nl> - zend_freedtoa ( p ) ; <nl> - return ( nullptr ) ; <nl> - } <nl> - ( void ) string_copy ( s , p , siz ) ; <nl> - rve = s + ( rve - p ) ; <nl> - zend_freedtoa ( p ) ; <nl> - } <nl> - <nl> - / * Add trailing zeros * / <nl> - if ( pad ) { <nl> - siz - = rve - s ; <nl> - while ( - - siz ) { <nl> - * rve + + = ' 0 ' ; <nl> - } <nl> - * rve = ' \ 0 ' ; <nl> - } <nl> - <nl> - return ( s ) ; <nl> - } <nl> - <nl> - static inline char * php_ecvt ( double value , int ndigit , int * decpt , int * sign ) { <nl> - return ( __cvt ( value , ndigit , decpt , sign , 0 , 1 ) ) ; <nl> - } <nl> - <nl> - static inline char * php_fcvt ( double value , int ndigit , int * decpt , int * sign ) { <nl> - return ( __cvt ( value , ndigit , decpt , sign , 1 , 1 ) ) ; <nl> - } <nl> - <nl> - static char * php_gcvt ( double value , int ndigit , char dec_point , <nl> - char exponent , char * buf ) { <nl> - char * digits , * dst , * src ; <nl> - int i , decpt , sign ; <nl> - <nl> - digits = zend_dtoa ( value , 2 , ndigit , & decpt , & sign , nullptr ) ; <nl> - if ( decpt = = 9999 ) { <nl> - / * <nl> - * Infinity or NaN , convert to inf or nan with sign . <nl> - * We assume the buffer is at least ndigit long . <nl> - * / <nl> - snprintf ( buf , ndigit + 1 , " % s % s " , ( sign & & * digits = = ' I ' ) ? " - " : " " , <nl> - * digits = = ' I ' ? " INF " : " NAN " ) ; <nl> - zend_freedtoa ( digits ) ; <nl> - return ( buf ) ; <nl> - } <nl> - <nl> - dst = buf ; <nl> - if ( sign ) { <nl> - * dst + + = ' - ' ; <nl> - } <nl> - <nl> - if ( ( decpt > = 0 & & decpt > ndigit ) | | decpt < - 3 ) { / * use E - style * / <nl> - / * exponential format ( e . g . 1 . 
2345e + 13 ) * / <nl> - if ( - - decpt < 0 ) { <nl> - sign = 1 ; <nl> - decpt = - decpt ; <nl> - } else { <nl> - sign = 0 ; <nl> - } <nl> - src = digits ; <nl> - * dst + + = * src + + ; <nl> - * dst + + = dec_point ; <nl> - if ( * src = = ' \ 0 ' ) { <nl> - * dst + + = ' 0 ' ; <nl> - } else { <nl> - do { <nl> - * dst + + = * src + + ; <nl> - } while ( * src ! = ' \ 0 ' ) ; <nl> - } <nl> - * dst + + = exponent ; <nl> - if ( sign ) { <nl> - * dst + + = ' - ' ; <nl> - } else { <nl> - * dst + + = ' + ' ; <nl> - } <nl> - if ( decpt < 10 ) { <nl> - * dst + + = ' 0 ' + decpt ; <nl> - * dst = ' \ 0 ' ; <nl> - } else { <nl> - / * XXX - optimize * / <nl> - for ( sign = decpt , i = 0 ; ( sign / = 10 ) ! = 0 ; i + + ) <nl> - continue ; <nl> - dst [ i + 1 ] = ' \ 0 ' ; <nl> - while ( decpt ! = 0 ) { <nl> - dst [ i - - ] = ' 0 ' + decpt % 10 ; <nl> - decpt / = 10 ; <nl> - } <nl> - } <nl> - } else if ( decpt < 0 ) { <nl> - / * standard format 0 . * / <nl> - * dst + + = ' 0 ' ; / * zero before decimal point * / <nl> - * dst + + = dec_point ; <nl> - do { <nl> - * dst + + = ' 0 ' ; <nl> - } while ( + + decpt < 0 ) ; <nl> - src = digits ; <nl> - while ( * src ! = ' \ 0 ' ) { <nl> - * dst + + = * src + + ; <nl> - } <nl> - * dst = ' \ 0 ' ; <nl> - } else { <nl> - / * standard format * / <nl> - for ( i = 0 , src = digits ; i < decpt ; i + + ) { <nl> - if ( * src ! = ' \ 0 ' ) { <nl> - * dst + + = * src + + ; <nl> - } else { <nl> - * dst + + = ' 0 ' ; <nl> - } <nl> - } <nl> - if ( * src ! = ' \ 0 ' ) { <nl> - if ( src = = digits ) { <nl> - * dst + + = ' 0 ' ; / * zero before decimal point * / <nl> - } <nl> - * dst + + = dec_point ; <nl> - for ( i = decpt ; digits [ i ] ! 
= ' \ 0 ' ; i + + ) { <nl> - * dst + + = digits [ i ] ; <nl> - } <nl> - } <nl> - * dst = ' \ 0 ' ; <nl> - } <nl> - zend_freedtoa ( digits ) ; <nl> - return ( buf ) ; <nl> - } <nl> - <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / / Apache license <nl> - <nl> - / * = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> - * Copyright ( c ) 1995 - 1998 The Apache Group . All rights reserved . <nl> - * <nl> - * Redistribution and use in source and binary forms , with or without <nl> - * modification , are permitted provided that the following conditions <nl> - * are met : <nl> - * <nl> - * 1 . Redistributions of source code must retain the above copyright <nl> - * notice , this list of conditions and the following disclaimer . <nl> - * <nl> - * 2 . Redistributions in binary form must reproduce the above copyright <nl> - * notice , this list of conditions and the following disclaimer in <nl> - * the documentation and / or other materials provided with the <nl> - * distribution . <nl> - * <nl> - * 3 . All advertising materials mentioning features or use of this <nl> - * software must display the following acknowledgment : <nl> - * " This product includes software developed by the Apache Group <nl> - * for use in the Apache HTTP server project ( http : / / www . apache . org / ) . " <nl> - * <nl> - * 4 . The names " Apache Server " and " Apache Group " must not be used to <nl> - * endorse or promote products derived from this software without <nl> - * prior written permission . <nl> - * <nl> - * 5 . Redistributions of any form whatsoever must retain the following <nl> - * acknowledgment : <nl> - * " This product includes software developed by the Apache Group <nl> - * for use in the Apache HTTP server project ( http : / / www . apache . org / ) . 
" <nl> - * <nl> - * THIS SOFTWARE IS PROVIDED BY THE APACHE GROUP ` ` AS IS ' ' AND ANY <nl> - * EXPRESSED OR IMPLIED WARRANTIES , INCLUDING , BUT NOT LIMITED TO , THE <nl> - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR <nl> - * PURPOSE ARE DISCLAIMED . IN NO EVENT SHALL THE APACHE GROUP OR <nl> - * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT , INDIRECT , INCIDENTAL , <nl> - * SPECIAL , EXEMPLARY , OR CONSEQUENTIAL DAMAGES ( INCLUDING , BUT <nl> - * NOT LIMITED TO , PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES ; <nl> - * LOSS OF USE , DATA , OR PROFITS ; OR BUSINESS INTERRUPTION ) <nl> - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY , WHETHER IN CONTRACT , <nl> - * STRICT LIABILITY , OR TORT ( INCLUDING NEGLIGENCE OR OTHERWISE ) <nl> - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE , EVEN IF ADVISED <nl> - * OF THE POSSIBILITY OF SUCH DAMAGE . <nl> - * = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> - * <nl> - * This software consists of voluntary contributions made by many <nl> - * individuals on behalf of the Apache Group and was originally based <nl> - * on public domain software written at the National Center for <nl> - * Supercomputing Applications , University of Illinois , Urbana - Champaign . <nl> - * For more information on the Apache Group and the Apache HTTP server <nl> - * project , please see < http : / / www . apache . org / > . <nl> - * <nl> - * This code is based on , and used with the permission of , the <nl> - * SIO stdio - replacement strx_ * functions by Panos Tsirigotis <nl> - * < panos @ alumni . cs . colorado . edu > for xinetd . <nl> - * / <nl> - <nl> - / * <nl> - * Convert num to a base X number where X is a power of 2 . nbits determines X . 
<nl> - * For example , if nbits is 3 , we do base 8 conversion <nl> - * Return value : <nl> - * a pointer to a string containing the number <nl> - * <nl> - * The caller provides a buffer for the string : that is the buf_end argument <nl> - * which is a pointer to the END of the buffer + 1 ( i . e . if the buffer <nl> - * is declared as buf [ 100 ] , buf_end should be & buf [ 100 ] ) <nl> - * / <nl> - char * ap_php_conv_p2 ( register uint64_t num , register int nbits , <nl> - char format , char * buf_end , register int * len ) <nl> - { <nl> - register int mask = ( 1 < < nbits ) - 1 ; <nl> - register char * p = buf_end ; <nl> - static char low_digits [ ] = " 0123456789abcdef " ; <nl> - static char upper_digits [ ] = " 0123456789ABCDEF " ; <nl> - register char * digits = ( format = = ' X ' ) ? upper_digits : low_digits ; <nl> - <nl> - do { <nl> - * - - p = digits [ num & mask ] ; <nl> - num > > = nbits ; <nl> - } <nl> - while ( num ) ; <nl> - <nl> - * len = buf_end - p ; <nl> - return ( p ) ; <nl> - } <nl> - <nl> - / * <nl> - * Convert num to its decimal format . <nl> - * Return value : <nl> - * - a pointer to a string containing the number ( no sign ) <nl> - * - len contains the length of the string <nl> - * - is_negative is set to TRUE or FALSE depending on the sign <nl> - * of the number ( always set to FALSE if is_unsigned is TRUE ) <nl> - * <nl> - * The caller provides a buffer for the string : that is the buf_end argument <nl> - * which is a pointer to the END of the buffer + 1 ( i . e . 
if the buffer <nl> - * is declared as buf [ 100 ] , buf_end should be & buf [ 100 ] ) <nl> - * / <nl> - char * ap_php_conv_10 ( register int64_t num , register bool is_unsigned , <nl> - register int * is_negative , char * buf_end , <nl> - register int * len ) { <nl> - register char * p = buf_end ; <nl> - register uint64_t magnitude ; <nl> - <nl> - if ( is_unsigned ) { <nl> - magnitude = ( uint64_t ) num ; <nl> - * is_negative = 0 ; <nl> - } else { <nl> - * is_negative = ( num < 0 ) ; <nl> - <nl> - / * <nl> - * On a 2 ' s complement machine , negating the most negative integer <nl> - * results in a number that cannot be represented as a signed integer . <nl> - * Here is what we do to obtain the number ' s magnitude : <nl> - * a . add 1 to the number <nl> - * b . negate it ( becomes positive ) <nl> - * c . convert it to unsigned <nl> - * d . add 1 <nl> - * / <nl> - if ( * is_negative ) { <nl> - int64_t t = num + 1 ; <nl> - magnitude = ( ( uint64_t ) - t ) + 1 ; <nl> - } else { <nl> - magnitude = ( uint64_t ) num ; <nl> - } <nl> - } <nl> - <nl> - / * <nl> - * We use a do - while loop so that we write at least 1 digit <nl> - * / <nl> - do { <nl> - register uint64_t new_magnitude = magnitude / 10 ; <nl> - <nl> - * - - p = ( char ) ( magnitude - new_magnitude * 10 + ' 0 ' ) ; <nl> - magnitude = new_magnitude ; <nl> - } <nl> - while ( magnitude ) ; <nl> - <nl> - * len = buf_end - p ; <nl> - return ( p ) ; <nl> - } <nl> - <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / * <nl> - * Convert a floating point number to a string formats ' f ' , ' e ' or ' E ' . <nl> - * The result is placed in buf , and len denotes the length of the string <nl> - * The sign is returned in the is_negative argument ( and is not placed <nl> - * in buf ) . 
<nl> - * / <nl> - char * php_conv_fp ( register char format , register double num , <nl> - bool add_dp , int precision , char dec_point , <nl> - int * is_negative , char * buf , int * len ) { <nl> - register char * s = buf ; <nl> - register char * p , * p_orig ; <nl> - int decimal_point ; <nl> - <nl> - if ( precision > = NDIG - 1 ) { <nl> - precision = NDIG - 2 ; <nl> - } <nl> - <nl> - if ( format = = ' F ' ) { <nl> - p_orig = p = php_fcvt ( num , precision , & decimal_point , is_negative ) ; <nl> - } else { / / either e or E format <nl> - p_orig = p = php_ecvt ( num , precision + 1 , & decimal_point , is_negative ) ; <nl> - } <nl> - <nl> - / / Check for Infinity and NaN <nl> - if ( isalpha ( ( int ) * p ) ) { <nl> - * len = strlen ( p ) ; <nl> - memcpy ( buf , p , * len + 1 ) ; <nl> - * is_negative = 0 ; <nl> - req : : free ( p_orig ) ; <nl> - return ( buf ) ; <nl> - } <nl> - if ( format = = ' F ' ) { <nl> - if ( decimal_point < = 0 ) { <nl> - if ( num ! = 0 | | precision > 0 ) { <nl> - * s + + = ' 0 ' ; <nl> - if ( precision > 0 ) { <nl> - * s + + = dec_point ; <nl> - while ( decimal_point + + < 0 ) { <nl> - * s + + = ' 0 ' ; <nl> - } <nl> - } else if ( add_dp ) { <nl> - * s + + = dec_point ; <nl> - } <nl> - } <nl> - } else { <nl> - int addz = decimal_point > = NDIG ? decimal_point - NDIG + 1 : 0 ; <nl> - decimal_point - = addz ; <nl> - while ( decimal_point - - > 0 ) { <nl> - * s + + = * p + + ; <nl> - } <nl> - while ( addz - - > 0 ) { <nl> - * s + + = ' 0 ' ; <nl> - } <nl> - if ( precision > 0 | | add_dp ) { <nl> - * s + + = dec_point ; <nl> - } <nl> - } <nl> - } else { <nl> - * s + + = * p + + ; <nl> - if ( precision > 0 | | add_dp ) { <nl> - * s + + = ' . ' ; <nl> - } <nl> - } <nl> - <nl> - / / copy the rest of p , the NUL is NOT copied <nl> - while ( * p ) { <nl> - * s + + = * p + + ; <nl> - } <nl> - <nl> - if ( format ! 
= ' F ' ) { <nl> - char temp [ EXPONENT_LENGTH ] ; / / for exponent conversion <nl> - int t_len ; <nl> - int exponent_is_negative ; <nl> - <nl> - * s + + = format ; / / either e or E <nl> - decimal_point - - ; <nl> - if ( decimal_point ! = 0 ) { <nl> - p = ap_php_conv_10 ( ( int64_t ) decimal_point , false , <nl> - & exponent_is_negative , & temp [ EXPONENT_LENGTH ] , <nl> - & t_len ) ; <nl> - * s + + = exponent_is_negative ? ' - ' : ' + ' ; <nl> - <nl> - / / Make sure the exponent has at least 2 digits <nl> - while ( t_len - - ) { <nl> - * s + + = * p + + ; <nl> - } <nl> - } else { <nl> - * s + + = ' + ' ; <nl> - * s + + = ' 0 ' ; <nl> - } <nl> - } <nl> - * len = s - buf ; <nl> - req : : free ( p_orig ) ; <nl> - return ( buf ) ; <nl> - } <nl> - <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - <nl> - inline static void appendchar ( char * * buffer , int * pos , int * size , char add ) { <nl> - if ( ( * pos + 1 ) > = * size ) { <nl> - * size < < = 1 ; <nl> - * buffer = ( char * ) realloc ( * buffer , * size ) ; <nl> - } <nl> - ( * buffer ) [ ( * pos ) + + ] = add ; <nl> - } <nl> - <nl> - inline static void appendsimplestring ( char * * buffer , int * pos , int * size , <nl> - const char * add , int len ) { <nl> - int req_size = * pos + len ; <nl> - <nl> - if ( req_size > * size ) { <nl> - while ( req_size > * size ) { <nl> - * size < < = 1 ; <nl> - } <nl> - * buffer = ( char * ) realloc ( * buffer , * size ) ; <nl> - } <nl> - memcpy ( & ( * buffer ) [ * pos ] , add , len ) ; <nl> - * pos + = len ; <nl> - } <nl> <nl> inline static void appendstring ( StringBuffer * buffer , const char * add , <nl> int min_width , int max_width , char padding , <nl> String string_printf ( const char * format , int len , const Array & args ) { <nl> return result . 
detach ( ) ; <nl> } <nl> <nl> - / * <nl> - * Do format conversion placing the output in buffer <nl> - * / <nl> - static int xbuf_format_converter ( char * * outbuf , const char * fmt , va_list ap ) <nl> - { <nl> - register char * s = nullptr ; <nl> - char * q ; <nl> - int s_len ; <nl> - <nl> - register int min_width = 0 ; <nl> - int precision = 0 ; <nl> - enum { <nl> - LEFT , RIGHT <nl> - } adjust ; <nl> - char pad_char ; <nl> - char prefix_char ; <nl> - <nl> - double fp_num ; <nl> - wide_int i_num = ( wide_int ) 0 ; <nl> - u_wide_int ui_num ; <nl> - <nl> - char num_buf [ NUM_BUF_SIZE ] ; <nl> - char char_buf [ 2 ] ; / * for printing % % and % < unknown > * / <nl> - <nl> - # ifdef HAVE_LOCALE_H <nl> - struct lconv * lconv = nullptr ; <nl> - # endif <nl> - <nl> - / * <nl> - * Flag variables <nl> - * / <nl> - length_modifier_e modifier ; <nl> - boolean_e alternate_form ; <nl> - boolean_e print_sign ; <nl> - boolean_e print_blank ; <nl> - boolean_e adjust_precision ; <nl> - boolean_e adjust_width ; <nl> - int is_negative ; <nl> - <nl> - int size = 240 ; <nl> - char * result = ( char * ) malloc ( size ) ; <nl> - int outpos = 0 ; <nl> - <nl> - while ( * fmt ) { <nl> - if ( * fmt ! = ' % ' ) { <nl> - appendchar ( & result , & outpos , & size , * fmt ) ; <nl> - } else { <nl> - / * <nl> - * Default variable settings <nl> - * / <nl> - adjust = RIGHT ; <nl> - alternate_form = print_sign = print_blank = NO ; <nl> - pad_char = ' ' ; <nl> - prefix_char = NUL ; <nl> - <nl> - fmt + + ; <nl> - <nl> - / * <nl> - * Try to avoid checking for flags , width or precision <nl> - * / <nl> - if ( isascii ( ( int ) * fmt ) & & ! 
islower ( ( int ) * fmt ) ) { <nl> - / * <nl> - * Recognize flags : - , # , BLANK , + <nl> - * / <nl> - for ( ; ; fmt + + ) { <nl> - if ( * fmt = = ' - ' ) <nl> - adjust = LEFT ; <nl> - else if ( * fmt = = ' + ' ) <nl> - print_sign = YES ; <nl> - else if ( * fmt = = ' # ' ) <nl> - alternate_form = YES ; <nl> - else if ( * fmt = = ' ' ) <nl> - print_blank = YES ; <nl> - else if ( * fmt = = ' 0 ' ) <nl> - pad_char = ' 0 ' ; <nl> - else <nl> - break ; <nl> - } <nl> - <nl> - / * <nl> - * Check if a width was specified <nl> - * / <nl> - if ( isdigit ( ( int ) * fmt ) ) { <nl> - STR_TO_DEC ( fmt , min_width ) ; <nl> - adjust_width = YES ; <nl> - } else if ( * fmt = = ' * ' ) { <nl> - min_width = va_arg ( ap , int ) ; <nl> - fmt + + ; <nl> - adjust_width = YES ; <nl> - if ( min_width < 0 ) { <nl> - adjust = LEFT ; <nl> - min_width = - min_width ; <nl> - } <nl> - } else <nl> - adjust_width = NO ; <nl> - <nl> - / * <nl> - * Check if a precision was specified <nl> - * <nl> - * XXX : an unreasonable amount of precision may be specified <nl> - * resulting in overflow of num_buf . Currently we <nl> - * ignore this possibility . <nl> - * / <nl> - if ( * fmt = = ' . 
' ) { <nl> - adjust_precision = YES ; <nl> - fmt + + ; <nl> - if ( isdigit ( ( int ) * fmt ) ) { <nl> - STR_TO_DEC ( fmt , precision ) ; <nl> - } else if ( * fmt = = ' * ' ) { <nl> - precision = va_arg ( ap , int ) ; <nl> - fmt + + ; <nl> - if ( precision < 0 ) <nl> - precision = 0 ; <nl> - } else <nl> - precision = 0 ; <nl> - } else <nl> - adjust_precision = NO ; <nl> - } else <nl> - adjust_precision = adjust_width = NO ; <nl> - <nl> - / * <nl> - * Modifier check <nl> - * / <nl> - switch ( * fmt ) { <nl> - case ' L ' : <nl> - fmt + + ; <nl> - modifier = LM_LONG_DOUBLE ; <nl> - break ; <nl> - case ' I ' : <nl> - fmt + + ; <nl> - # if SIZEOF_LONG_LONG <nl> - if ( * fmt = = ' 6 ' & & * ( fmt + 1 ) = = ' 4 ' ) { <nl> - fmt + = 2 ; <nl> - modifier = LM_LONG_LONG ; <nl> - } else <nl> - # endif <nl> - if ( * fmt = = ' 3 ' & & * ( fmt + 1 ) = = ' 2 ' ) { <nl> - fmt + = 2 ; <nl> - modifier = LM_LONG ; <nl> - } else { <nl> - # ifdef _WIN64 <nl> - modifier = LM_LONG_LONG ; <nl> - # else <nl> - modifier = LM_LONG ; <nl> - # endif <nl> - } <nl> - break ; <nl> - case ' l ' : <nl> - fmt + + ; <nl> - # if SIZEOF_LONG_LONG <nl> - if ( * fmt = = ' l ' ) { <nl> - fmt + + ; <nl> - modifier = LM_LONG_LONG ; <nl> - } else <nl> - # endif <nl> - modifier = LM_LONG ; <nl> - break ; <nl> - case ' z ' : <nl> - fmt + + ; <nl> - modifier = LM_SIZE_T ; <nl> - break ; <nl> - case ' j ' : <nl> - fmt + + ; <nl> - # if SIZEOF_INTMAX_T <nl> - modifier = LM_INTMAX_T ; <nl> - # else <nl> - modifier = LM_SIZE_T ; <nl> - # endif <nl> - break ; <nl> - case ' t ' : <nl> - fmt + + ; <nl> - # if SIZEOF_PTRDIFF_T <nl> - modifier = LM_PTRDIFF_T ; <nl> - # else <nl> - modifier = LM_SIZE_T ; <nl> - # endif <nl> - break ; <nl> - case ' h ' : <nl> - fmt + + ; <nl> - if ( * fmt = = ' h ' ) { <nl> - fmt + + ; <nl> - } <nl> - / * these are promoted to int , so no break * / <nl> - default : <nl> - modifier = LM_STD ; <nl> - break ; <nl> - } <nl> - <nl> - / * <nl> - * Argument extraction and printing . 
<nl> - * First we determine the argument type . <nl> - * Then , we convert the argument to a string . <nl> - * On exit from the switch , s points to the string that <nl> - * must be printed , s_len has the length of the string <nl> - * The precision requirements , if any , are reflected in s_len . <nl> - * <nl> - * NOTE : pad_char may be set to ' 0 ' because of the 0 flag . <nl> - * It is reset to ' ' by non - numeric formats <nl> - * / <nl> - switch ( * fmt ) { <nl> - case ' u ' : <nl> - switch ( modifier ) { <nl> - default : <nl> - i_num = ( wide_int ) va_arg ( ap , unsigned int ) ; <nl> - break ; <nl> - case LM_LONG_DOUBLE : <nl> - goto fmt_error ; <nl> - case LM_LONG : <nl> - i_num = ( wide_int ) va_arg ( ap , unsigned long int ) ; <nl> - break ; <nl> - case LM_SIZE_T : <nl> - i_num = ( wide_int ) va_arg ( ap , size_t ) ; <nl> - break ; <nl> - # if SIZEOF_LONG_LONG <nl> - case LM_LONG_LONG : <nl> - i_num = ( wide_int ) va_arg ( ap , u_wide_int ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_INTMAX_T <nl> - case LM_INTMAX_T : <nl> - i_num = ( wide_int ) va_arg ( ap , uintmax_t ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_PTRDIFF_T <nl> - case LM_PTRDIFF_T : <nl> - i_num = ( wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> - break ; <nl> - # endif <nl> - } <nl> - / * <nl> - * The rest also applies to other integer formats , so fall <nl> - * into that case . <nl> - * / <nl> - case ' d ' : <nl> - case ' i ' : <nl> - / * <nl> - * Get the arg if we haven ' t already . <nl> - * / <nl> - if ( ( * fmt ) ! 
= ' u ' ) { <nl> - switch ( modifier ) { <nl> - default : <nl> - i_num = ( wide_int ) va_arg ( ap , int ) ; <nl> - break ; <nl> - case LM_LONG_DOUBLE : <nl> - goto fmt_error ; <nl> - case LM_LONG : <nl> - i_num = ( wide_int ) va_arg ( ap , long int ) ; <nl> - break ; <nl> - case LM_SIZE_T : <nl> - # if SIZEOF_SSIZE_T <nl> - i_num = ( wide_int ) va_arg ( ap , ssize_t ) ; <nl> - # else <nl> - i_num = ( wide_int ) va_arg ( ap , size_t ) ; <nl> - # endif <nl> - break ; <nl> - # if SIZEOF_LONG_LONG <nl> - case LM_LONG_LONG : <nl> - i_num = ( wide_int ) va_arg ( ap , wide_int ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_INTMAX_T <nl> - case LM_INTMAX_T : <nl> - i_num = ( wide_int ) va_arg ( ap , intmax_t ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_PTRDIFF_T <nl> - case LM_PTRDIFF_T : <nl> - i_num = ( wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> - break ; <nl> - # endif <nl> - } <nl> - } <nl> - s = ap_php_conv_10 ( i_num , ( * fmt ) = = ' u ' , & is_negative , <nl> - & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> - FIX_PRECISION ( adjust_precision , precision , s , s_len ) ; <nl> - <nl> - if ( * fmt ! 
= ' u ' ) { <nl> - if ( is_negative ) <nl> - prefix_char = ' - ' ; <nl> - else if ( print_sign ) <nl> - prefix_char = ' + ' ; <nl> - else if ( print_blank ) <nl> - prefix_char = ' ' ; <nl> - } <nl> - break ; <nl> - <nl> - <nl> - case ' o ' : <nl> - switch ( modifier ) { <nl> - default : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , unsigned int ) ; <nl> - break ; <nl> - case LM_LONG_DOUBLE : <nl> - goto fmt_error ; <nl> - case LM_LONG : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , unsigned long int ) ; <nl> - break ; <nl> - case LM_SIZE_T : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , size_t ) ; <nl> - break ; <nl> - # if SIZEOF_LONG_LONG <nl> - case LM_LONG_LONG : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , u_wide_int ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_INTMAX_T <nl> - case LM_INTMAX_T : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , uintmax_t ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_PTRDIFF_T <nl> - case LM_PTRDIFF_T : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> - break ; <nl> - # endif <nl> - } <nl> - s = ap_php_conv_p2 ( ui_num , 3 , * fmt , <nl> - & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> - FIX_PRECISION ( adjust_precision , precision , s , s_len ) ; <nl> - if ( alternate_form & & * s ! 
= ' 0 ' ) { <nl> - * - - s = ' 0 ' ; <nl> - s_len + + ; <nl> - } <nl> - break ; <nl> - <nl> - <nl> - case ' x ' : <nl> - case ' X ' : <nl> - switch ( modifier ) { <nl> - default : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , unsigned int ) ; <nl> - break ; <nl> - case LM_LONG_DOUBLE : <nl> - goto fmt_error ; <nl> - case LM_LONG : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , unsigned long int ) ; <nl> - break ; <nl> - case LM_SIZE_T : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , size_t ) ; <nl> - break ; <nl> - # if SIZEOF_LONG_LONG <nl> - case LM_LONG_LONG : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , u_wide_int ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_INTMAX_T <nl> - case LM_INTMAX_T : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , uintmax_t ) ; <nl> - break ; <nl> - # endif <nl> - # if SIZEOF_PTRDIFF_T <nl> - case LM_PTRDIFF_T : <nl> - ui_num = ( u_wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> - break ; <nl> - # endif <nl> - } <nl> - s = ap_php_conv_p2 ( ui_num , 4 , * fmt , <nl> - & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> - FIX_PRECISION ( adjust_precision , precision , s , s_len ) ; <nl> - if ( alternate_form & & i_num ! = 0 ) { <nl> - * - - s = * fmt ; / * ' x ' or ' X ' * / <nl> - * - - s = ' 0 ' ; <nl> - s_len + = 2 ; <nl> - } <nl> - break ; <nl> - <nl> - <nl> - case ' s ' : <nl> - case ' v ' : <nl> - s = va_arg ( ap , char * ) ; <nl> - if ( s ! 
= nullptr ) { <nl> - s_len = strlen ( s ) ; <nl> - if ( adjust_precision & & precision < s_len ) <nl> - s_len = precision ; <nl> - } else { <nl> - s = const_cast < char * > ( s_null ) ; <nl> - s_len = S_NULL_LEN ; <nl> - } <nl> - pad_char = ' ' ; <nl> - break ; <nl> - <nl> - <nl> - case ' f ' : <nl> - case ' F ' : <nl> - case ' e ' : <nl> - case ' E ' : <nl> - switch ( modifier ) { <nl> - case LM_LONG_DOUBLE : <nl> - fp_num = ( double ) va_arg ( ap , long double ) ; <nl> - break ; <nl> - case LM_STD : <nl> - fp_num = va_arg ( ap , double ) ; <nl> - break ; <nl> - default : <nl> - goto fmt_error ; <nl> - } <nl> - <nl> - if ( std : : isnan ( fp_num ) ) { <nl> - s = const_cast < char * > ( " nan " ) ; <nl> - s_len = 3 ; <nl> - } else if ( std : : isinf ( fp_num ) ) { <nl> - s = const_cast < char * > ( " inf " ) ; <nl> - s_len = 3 ; <nl> - } else { <nl> - # ifdef HAVE_LOCALE_H <nl> - if ( ! lconv ) { <nl> - lconv = localeconv ( ) ; <nl> - } <nl> - # endif <nl> - s = php_conv_fp ( ( * fmt = = ' f ' ) ? ' F ' : * fmt , fp_num , alternate_form , <nl> - ( adjust_precision = = NO ) ? FLOAT_DIGITS : precision , <nl> - ( * fmt = = ' f ' ) ? LCONV_DECIMAL_POINT : ' . 
' , <nl> - & is_negative , & num_buf [ 1 ] , & s_len ) ; <nl> - if ( is_negative ) <nl> - prefix_char = ' - ' ; <nl> - else if ( print_sign ) <nl> - prefix_char = ' + ' ; <nl> - else if ( print_blank ) <nl> - prefix_char = ' ' ; <nl> - } <nl> - break ; <nl> - <nl> - <nl> - case ' g ' : <nl> - case ' k ' : <nl> - case ' G ' : <nl> - case ' H ' : <nl> - switch ( modifier ) { <nl> - case LM_LONG_DOUBLE : <nl> - fp_num = ( double ) va_arg ( ap , long double ) ; <nl> - break ; <nl> - case LM_STD : <nl> - fp_num = va_arg ( ap , double ) ; <nl> - break ; <nl> - default : <nl> - goto fmt_error ; <nl> - } <nl> - <nl> - if ( std : : isnan ( fp_num ) ) { <nl> - s = const_cast < char * > ( " NAN " ) ; <nl> - s_len = 3 ; <nl> - break ; <nl> - } else if ( std : : isinf ( fp_num ) ) { <nl> - if ( fp_num > 0 ) { <nl> - s = const_cast < char * > ( " INF " ) ; <nl> - s_len = 3 ; <nl> - } else { <nl> - s = const_cast < char * > ( " - INF " ) ; <nl> - s_len = 4 ; <nl> - } <nl> - break ; <nl> - } <nl> - <nl> - if ( adjust_precision = = NO ) <nl> - precision = FLOAT_DIGITS ; <nl> - else if ( precision = = 0 ) <nl> - precision = 1 ; <nl> - / * <nl> - * * We use & num_buf [ 1 ] , so that we have room for the sign <nl> - * / <nl> - # ifdef HAVE_LOCALE_H <nl> - if ( ! lconv ) { <nl> - lconv = localeconv ( ) ; <nl> - } <nl> - # endif <nl> - s = php_gcvt ( fp_num , precision , <nl> - ( * fmt = = ' H ' | | * fmt = = ' k ' ) ? ' . ' : LCONV_DECIMAL_POINT , <nl> - ( * fmt = = ' G ' | | * fmt = = ' H ' ) ? ' E ' : ' e ' , & num_buf [ 1 ] ) ; <nl> - if ( * s = = ' - ' ) <nl> - prefix_char = * s + + ; <nl> - else if ( print_sign ) <nl> - prefix_char = ' + ' ; <nl> - else if ( print_blank ) <nl> - prefix_char = ' ' ; <nl> - <nl> - s_len = strlen ( s ) ; <nl> - <nl> - if ( alternate_form & & ( q = strchr ( s , ' . ' ) ) = = nullptr ) <nl> - s [ s_len + + ] = ' . 
' ; <nl> - break ; <nl> - <nl> - <nl> - case ' c ' : <nl> - char_buf [ 0 ] = ( char ) ( va_arg ( ap , int ) ) ; <nl> - s = & char_buf [ 0 ] ; <nl> - s_len = 1 ; <nl> - pad_char = ' ' ; <nl> - break ; <nl> - <nl> - <nl> - case ' % ' : <nl> - char_buf [ 0 ] = ' % ' ; <nl> - s = & char_buf [ 0 ] ; <nl> - s_len = 1 ; <nl> - pad_char = ' ' ; <nl> - break ; <nl> - <nl> - <nl> - case ' n ' : <nl> - * ( va_arg ( ap , int * ) ) = outpos ; <nl> - goto skip_output ; <nl> - <nl> - / * <nl> - * Always extract the argument as a " char * " pointer . We <nl> - * should be using " void * " but there are still machines <nl> - * that don ' t understand it . <nl> - * If the pointer size is equal to the size of an unsigned <nl> - * integer we convert the pointer to a hex number , otherwise <nl> - * we print " % p " to indicate that we don ' t handle " % p " . <nl> - * / <nl> - case ' p ' : <nl> - if ( sizeof ( char * ) < = sizeof ( u_wide_int ) ) { <nl> - ui_num = ( u_wide_int ) ( ( size_t ) va_arg ( ap , char * ) ) ; <nl> - s = ap_php_conv_p2 ( ui_num , 4 , ' x ' , <nl> - & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> - if ( ui_num ! = 0 ) { <nl> - * - - s = ' x ' ; <nl> - * - - s = ' 0 ' ; <nl> - s_len + = 2 ; <nl> - } <nl> - } else { <nl> - s = const_cast < char * > ( " % p " ) ; <nl> - s_len = 2 ; <nl> - } <nl> - pad_char = ' ' ; <nl> - break ; <nl> - <nl> - <nl> - case NUL : <nl> - / * <nl> - * The last character of the format string was % . <nl> - * We ignore it . <nl> - * / <nl> - continue ; <nl> - <nl> - <nl> - fmt_error : <nl> - throw Exception ( " Illegal length modifier specified ' % c ' " , * fmt ) ; <nl> - <nl> - / * <nl> - * The default case is for unrecognized % ' s . <nl> - * We print % < char > to help the user identify what <nl> - * option is not understood . <nl> - * This is also useful in case the user wants to pass <nl> - * the output of format_converter to another function <nl> - * that understands some other % < char > ( like syslog ) . 
<nl> - * Note that we can ' t point s inside fmt because the <nl> - * unknown < char > could be preceded by width etc . <nl> - * / <nl> - default : <nl> - char_buf [ 0 ] = ' % ' ; <nl> - char_buf [ 1 ] = * fmt ; <nl> - s = char_buf ; <nl> - s_len = 2 ; <nl> - pad_char = ' ' ; <nl> - break ; <nl> - } <nl> - <nl> - if ( prefix_char ! = NUL ) { <nl> - * - - s = prefix_char ; <nl> - s_len + + ; <nl> - } <nl> - if ( adjust_width & & adjust = = RIGHT & & min_width > s_len ) { <nl> - if ( pad_char = = ' 0 ' & & prefix_char ! = NUL ) { <nl> - appendchar ( & result , & outpos , & size , * s ) ; <nl> - s + + ; <nl> - s_len - - ; <nl> - min_width - - ; <nl> - } <nl> - for ( int i = 0 ; i < min_width - s_len ; i + + ) { <nl> - appendchar ( & result , & outpos , & size , pad_char ) ; <nl> - } <nl> - } <nl> - / * <nl> - * Print the ( for now ) non - null terminated string s . <nl> - * / <nl> - appendsimplestring ( & result , & outpos , & size , s , s_len ) ; <nl> - <nl> - if ( adjust_width & & adjust = = LEFT & & min_width > s_len ) { <nl> - for ( int i = 0 ; i < min_width - s_len ; i + + ) { <nl> - appendchar ( & result , & outpos , & size , pad_char ) ; <nl> - } <nl> - } <nl> - } <nl> - skip_output : <nl> - fmt + + ; <nl> - } <nl> - / * <nl> - * Add the terminating null here since it wasn ' t added incrementally above <nl> - * once the whole string has been composed . <nl> - * / <nl> - result [ outpos ] = NUL ; <nl> - * outbuf = result ; <nl> - return outpos ; <nl> - } <nl> - <nl> - / * <nl> - * This is the general purpose conversion function . <nl> - * / <nl> - int vspprintf ( char * * pbuf , size_t / * max_len * / , const char * format , . . . 
) { <nl> - int len ; <nl> - va_list ap ; <nl> - va_start ( ap , format ) ; <nl> - len = xbuf_format_converter ( pbuf , format , ap ) ; <nl> - va_end ( ap ) ; <nl> - return len ; <nl> - } <nl> - <nl> - / * <nl> - * Same as vspprintf but taking an va_list <nl> - * / <nl> - int vspprintf_ap ( char * * pbuf , size_t / * max_len * / , const char * format , <nl> - va_list ap ) { <nl> - int len ; <nl> - len = xbuf_format_converter ( pbuf , format , ap ) ; <nl> - return len ; <nl> - } <nl> - <nl> - int spprintf ( char * * pbuf , size_t max_len , const char * format , . . . ) <nl> - { <nl> - int cc ; <nl> - va_list ap ; <nl> - <nl> - va_start ( ap , format ) ; <nl> - cc = vspprintf ( pbuf , max_len , format , ap ) ; <nl> - va_end ( ap ) ; <nl> - return ( cc ) ; <nl> - } <nl> - <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> } <nl> mmm a / hphp / runtime / base / zend - printf . h <nl> ppp b / hphp / runtime / base / zend - printf . h <nl> <nl> # ifndef incl_HPHP_ZEND_PRINTF_H_ <nl> # define incl_HPHP_ZEND_PRINTF_H_ <nl> <nl> - # include < sys / types . h > <nl> - # include < stdarg . h > <nl> + / / NOTE : See also " hphp / zend / zend - printf . * " . <nl> + <nl> + # include " hphp / zend / zend - printf . h " <nl> <nl> namespace HPHP { <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> struct String ; <nl> struct Array ; <nl> struct Array ; <nl> * / <nl> String string_printf ( const char * format , int len , const Array & args ) ; <nl> <nl> - / / XXX : vspprintf and spprintf have slightly different semantics and flags than <nl> - / / C99 printf ( because PHP ) so we can ' t annotate them with ATTRIBUTE_PRINTF <nl> - <nl> - int vspprintf ( char * * pbuf , size_t max_len , const char * format , . . . 
) ; <nl> - int vspprintf_ap ( char * * pbuf , size_t max_len , const char * format , va_list ap ) ; <nl> - int spprintf ( char * * pbuf , size_t max_len , const char * format , . . . ) ; <nl> - <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> } <nl> <nl> # endif / / incl_HPHP_ZEND_PRINTF_H_ <nl> mmm a / hphp / runtime / base / zend - string . cpp <nl> ppp b / hphp / runtime / base / zend - string . cpp <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> * / <nl> <nl> + / / NOTE : See also " hphp / zend / zend - string . * " . <nl> + <nl> # include " hphp / runtime / base / zend - string . h " <nl> # include " hphp / runtime / base / zend - printf . h " <nl> # include " hphp / runtime / base / zend - math . h " <nl> void string_charmask ( const char * sinput , int len , char * mask ) { <nl> } <nl> } <nl> <nl> - int string_copy ( char * dst , const char * src , int siz ) { <nl> - register char * d = dst ; <nl> - register const char * s = src ; <nl> - register size_t n = siz ; <nl> - <nl> - / * Copy as many bytes as will fit * / <nl> - if ( n ! = 0 & & - - n ! = 0 ) { <nl> - do { <nl> - if ( ( * d + + = * s + + ) = = 0 ) <nl> - break ; <nl> - } while ( - - n ! = 0 ) ; <nl> - } <nl> - <nl> - / * Not enough room in dst , add NUL and traverse rest of src * / <nl> - if ( n = = 0 ) { <nl> - if ( siz ! 
= 0 ) <nl> - * d = ' \ 0 ' ; / * NUL - terminate dst * / <nl> - while ( * s + + ) <nl> - ; <nl> - } <nl> - <nl> - return ( s - src - 1 ) ; / * count does not include NUL * / <nl> - } <nl> - <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / / comparisons <nl> - <nl> - int string_ncmp ( const char * s1 , const char * s2 , int len ) { <nl> - for ( int i = 0 ; i < len ; i + + ) { <nl> - char c1 = s1 [ i ] ; <nl> - char c2 = s2 [ i ] ; <nl> - if ( c1 > c2 ) return 1 ; <nl> - if ( c1 < c2 ) return - 1 ; <nl> - } <nl> - return 0 ; <nl> - } <nl> - <nl> - static int compare_right ( char const * * a , char const * aend , <nl> - char const * * b , char const * bend ) { <nl> - int bias = 0 ; <nl> - <nl> - / * The longest run of digits wins . That aside , the greatest <nl> - value wins , but we can ' t know that it will until we ' ve scanned <nl> - both numbers to know that they have the same magnitude , so we <nl> - remember it in BIAS . * / <nl> - for ( ; ; ( * a ) + + , ( * b ) + + ) { <nl> - if ( ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) & & <nl> - ( * b = = bend | | ! isdigit ( ( int ) ( unsigned char ) * * b ) ) ) <nl> - return bias ; <nl> - else if ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) <nl> - return - 1 ; <nl> - else if ( * b = = bend | | ! isdigit ( ( int ) ( unsigned char ) * * b ) ) <nl> - return + 1 ; <nl> - else if ( * * a < * * b ) { <nl> - if ( ! bias ) <nl> - bias = - 1 ; <nl> - } else if ( * * a > * * b ) { <nl> - if ( ! bias ) <nl> - bias = + 1 ; <nl> - } <nl> - } <nl> - <nl> - return 0 ; <nl> - } <nl> - <nl> - static int compare_left ( char const * * a , char const * aend , <nl> - char const * * b , char const * bend ) { <nl> - / * Compare two left - aligned numbers : the first to have a <nl> - different value wins . 
* / <nl> - for ( ; ; ( * a ) + + , ( * b ) + + ) { <nl> - if ( ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) & & <nl> - ( * b = = bend | | ! isdigit ( ( int ) ( unsigned char ) * * b ) ) ) <nl> - return 0 ; <nl> - else if ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) <nl> - return - 1 ; <nl> - else if ( * b = = bend | | ! isdigit ( ( int ) ( unsigned char ) * * b ) ) <nl> - return + 1 ; <nl> - else if ( * * a < * * b ) <nl> - return - 1 ; <nl> - else if ( * * a > * * b ) <nl> - return + 1 ; <nl> - } <nl> - <nl> - return 0 ; <nl> - } <nl> - <nl> - int string_natural_cmp ( char const * a , size_t a_len , <nl> - char const * b , size_t b_len , int fold_case ) { <nl> - char ca , cb ; <nl> - char const * ap , * bp ; <nl> - char const * aend = a + a_len , * bend = b + b_len ; <nl> - int fractional , result ; <nl> - <nl> - if ( a_len = = 0 | | b_len = = 0 ) <nl> - return a_len - b_len ; <nl> - <nl> - ap = a ; <nl> - bp = b ; <nl> - while ( 1 ) { <nl> - ca = * ap ; cb = * bp ; <nl> - <nl> - / * skip over leading spaces or zeros * / <nl> - while ( isspace ( ( int ) ( unsigned char ) ca ) ) <nl> - ca = * + + ap ; <nl> - <nl> - while ( isspace ( ( int ) ( unsigned char ) cb ) ) <nl> - cb = * + + bp ; <nl> - <nl> - / * process run of digits * / <nl> - if ( isdigit ( ( int ) ( unsigned char ) ca ) & & isdigit ( ( int ) ( unsigned char ) cb ) ) { <nl> - fractional = ( ca = = ' 0 ' | | cb = = ' 0 ' ) ; <nl> - <nl> - if ( fractional ) <nl> - result = compare_left ( & ap , aend , & bp , bend ) ; <nl> - else <nl> - result = compare_right ( & ap , aend , & bp , bend ) ; <nl> - <nl> - if ( result ! = 0 ) <nl> - return result ; <nl> - else if ( ap = = aend & & bp = = bend ) <nl> - / * End of the strings . Let caller sort them out . * / <nl> - return 0 ; <nl> - else { <nl> - / * Keep on comparing from the current point . 
* / <nl> - ca = * ap ; cb = * bp ; <nl> - } <nl> - } <nl> - <nl> - if ( fold_case ) { <nl> - ca = toupper ( ( int ) ( unsigned char ) ca ) ; <nl> - cb = toupper ( ( int ) ( unsigned char ) cb ) ; <nl> - } <nl> - <nl> - if ( ca < cb ) <nl> - return - 1 ; <nl> - else if ( ca > cb ) <nl> - return + 1 ; <nl> - <nl> - + + ap ; + + bp ; <nl> - if ( ap > = aend & & bp > = bend ) <nl> - / * The strings compare the same . Perhaps the caller <nl> - will want to call strcmp to break the tie . * / <nl> - return 0 ; <nl> - else if ( ap > = aend ) <nl> - return - 1 ; <nl> - else if ( bp > = bend ) <nl> - return 1 ; <nl> - } <nl> - } <nl> - <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> void string_to_case ( String & s , int ( * tocase ) ( int ) ) { <nl> mmm a / hphp / runtime / base / zend - string . h <nl> ppp b / hphp / runtime / base / zend - string . h <nl> <nl> # ifndef incl_HPHP_ZEND_STRING_H_ <nl> # define incl_HPHP_ZEND_STRING_H_ <nl> <nl> + / / NOTE : See also " hphp / zend / zend - string . * " . <nl> + <nl> # include " hphp / zend / zend - string . h " <nl> # include " hphp / runtime / base / type - string . h " <nl> <nl> namespace HPHP { <nl> * NULL terminated , regardless of whether it ' s a binary string . <nl> * / <nl> <nl> - / * <nl> - * Copy src to string dst of size siz . At most siz - 1 characters <nl> - * will be copied . Always NUL terminates ( unless siz = = 0 ) . <nl> - * Returns strlen ( src ) ; if retval > = siz , truncation occurred . <nl> - * / <nl> - int string_copy ( char * dst , const char * src , int siz ) ; <nl> - <nl> - / * * <nl> - * Compare two binary strings . <nl> - * / <nl> - inline int string_strcmp ( const char * s1 , int len1 , const char * s2 , int len2 ) { <nl> - int minlen = len1 < len2 ? len1 : len2 ; <nl> - int retval ; <nl> - <nl> - retval = memcmp ( s1 , s2 , minlen ) ; <nl> - if ( ! 
retval ) { <nl> - return ( len1 - len2 ) ; <nl> - } <nl> - <nl> - return ( retval > 0 ) - ( retval < 0 ) ; <nl> - } <nl> - / * * <nl> - * Compare two binary strings of the first n bytes . <nl> - * / <nl> - inline int string_strncmp ( const char * s1 , int len1 , const char * s2 , int len2 , <nl> - int len ) { <nl> - int minlen = len1 < len2 ? len1 : len2 ; <nl> - int retval ; <nl> - <nl> - if ( len < minlen ) { <nl> - if ( UNLIKELY ( len < 0 ) ) len = 0 ; <nl> - minlen = len ; <nl> - } <nl> - retval = memcmp ( s1 , s2 , minlen ) ; <nl> - if ( ! retval ) { <nl> - return ( len < len1 ? len : len1 ) - ( len < len2 ? len : len2 ) ; <nl> - } else { <nl> - return retval ; <nl> - } <nl> - } <nl> - / * * <nl> - * Compare two binary strings of the first n bytes , ignore case . <nl> - * / <nl> - inline int string_strncasecmp ( const char * s1 , int len1 , <nl> - const char * s2 , int len2 , int len ) { <nl> - int minlen = len1 < len2 ? len1 : len2 ; <nl> - int c1 , c2 ; <nl> - <nl> - if ( len < minlen ) { <nl> - if ( UNLIKELY ( len < 0 ) ) len = 0 ; <nl> - minlen = len ; <nl> - } <nl> - while ( minlen - - ) { <nl> - c1 = tolower ( ( int ) * ( unsigned char * ) s1 + + ) ; <nl> - c2 = tolower ( ( int ) * ( unsigned char * ) s2 + + ) ; <nl> - if ( c1 ! = c2 ) { <nl> - return c1 - c2 ; <nl> - } <nl> - } <nl> - return ( len < len1 ? len : len1 ) - ( len < len2 ? len : len2 ) ; <nl> - } <nl> - <nl> - / * * <nl> - * Compare strings . <nl> - * / <nl> - int string_ncmp ( const char * s1 , const char * s2 , int len ) ; <nl> - int string_natural_cmp ( char const * a , size_t a_len , <nl> - char const * b , size_t b_len , int fold_case ) ; <nl> - <nl> / * * <nl> * Changing string ' s cases in place . Return ' s length is always the same <nl> * as " len " . <nl> mmm a / hphp / runtime / ext / json / JSON_parser . cpp <nl> ppp b / hphp / runtime / ext / json / JSON_parser . cpp <nl> SOFTWARE . <nl> # include " hphp / runtime / base / tv - refcount . 
h " <nl> # include " hphp / runtime / base / init - fini - node . h " <nl> # include " hphp / runtime / base / utf8 - decode . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> # include " hphp / runtime / ext / json / ext_json . h " <nl> # include " hphp / runtime / ext / collections / ext_collections - map . h " <nl> # include " hphp / runtime / ext / collections / ext_collections - vector . h " <nl> # include " hphp / system / systemlib . h " <nl> # include " hphp / util / fast_strtoll_base10 . h " <nl> + # include " hphp / zend / zend - strtod . h " <nl> <nl> # define MAX_LENGTH_OF_LONG 20 <nl> static const char long_min_digits [ ] = " 9223372036854775808 " ; <nl> mmm a / hphp / runtime / vm / extern - compiler . cpp <nl> ppp b / hphp / runtime / vm / extern - compiler . cpp <nl> <nl> # include < folly / FileUtil . h > <nl> <nl> # include " hphp / runtime / base / ini - setting . h " <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> # include " hphp / runtime / server / source - root - info . h " <nl> # include " hphp / runtime / vm / native . h " <nl> # include " hphp / runtime / vm / repo . h " <nl> <nl> # include " hphp / util / sha1 . h " <nl> # include " hphp / util / struct - log . h " <nl> # include " hphp / util / timer . h " <nl> + # include " hphp / zend / zend - strtod . h " <nl> <nl> # include < iostream > <nl> <nl> new file mode 100644 <nl> index 00000000000 . . 21ae8753f56 <nl> mmm / dev / null <nl> ppp b / hphp / zend / zend - printf . cpp <nl> <nl> + / * <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + | HipHop for PHP | <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + | Copyright ( c ) 2010 - present Facebook , Inc . ( http : / / www . facebook . com ) | <nl> + | Copyright ( c ) 1998 - 2010 Zend Technologies Ltd . ( http : / / www . zend . 
com ) | <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + | This source file is subject to version 2 . 00 of the Zend license , | <nl> + | that is bundled with this package in the file LICENSE , and is | <nl> + | available through the world - wide - web at the following url : | <nl> + | http : / / www . zend . com / license / 2_00 . txt . | <nl> + | If you did not receive a copy of the Zend license and are unable to | <nl> + | obtain it through the world - wide - web , please send a note to | <nl> + | license @ zend . com so we can mail you a copy immediately . | <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + * / <nl> + <nl> + # include " hphp / zend / zend - printf . h " <nl> + <nl> + # include " hphp / util / exception . h " <nl> + # include " hphp / zend / zend - string . h " <nl> + # include " hphp / zend / zend - strtod . h " <nl> + <nl> + # include < cmath > <nl> + # include < ctype . h > <nl> + # include < limits . h > <nl> + # include < stdint . h > <nl> + # include < stdio . h > <nl> + # include < string . h > <nl> + <nl> + namespace HPHP { <nl> + <nl> + / * These definitions are copied from the Zend formatted output conversion <nl> + files so that we only need to make minimal changes to the Zend formatted <nl> + output conversion functions that are incorporated here . 
<nl> + * / <nl> + # define NDIG 320 <nl> + <nl> + typedef enum { <nl> + LM_STD = 0 , <nl> + LM_INTMAX_T , <nl> + LM_PTRDIFF_T , <nl> + LM_LONG_LONG , <nl> + LM_SIZE_T , <nl> + LM_LONG , <nl> + LM_LONG_DOUBLE <nl> + } length_modifier_e ; <nl> + <nl> + typedef enum { <nl> + NO = 0 , YES = 1 <nl> + } boolean_e ; <nl> + <nl> + # define NUM ( c ) ( c - ' 0 ' ) <nl> + <nl> + # define STR_TO_DEC ( str , num ) do { \ <nl> + num = NUM ( * str + + ) ; \ <nl> + while ( isdigit ( ( int ) * str ) ) { \ <nl> + num * = 10 ; \ <nl> + num + = NUM ( * str + + ) ; \ <nl> + if ( num > = INT_MAX / 10 ) { \ <nl> + while ( isdigit ( ( int ) * str + + ) ) \ <nl> + continue ; \ <nl> + break ; \ <nl> + } \ <nl> + } \ <nl> + } while ( 0 ) <nl> + <nl> + / * <nl> + * This macro does zero padding so that the precision <nl> + * requirement is satisfied . The padding is done by <nl> + * adding ' 0 ' s to the left of the string that is going <nl> + * to be printed . <nl> + * / <nl> + # define FIX_PRECISION ( adjust , precision , s , s_len ) do { \ <nl> + if ( adjust ) \ <nl> + while ( s_len < precision ) { \ <nl> + * - - s = ' 0 ' ; \ <nl> + s_len + + ; \ <nl> + } \ <nl> + } while ( 0 ) <nl> + <nl> + typedef int64_t wide_int ; <nl> + typedef uint64_t u_wide_int ; <nl> + <nl> + # define FALSE 0 <nl> + # define TRUE 1 <nl> + # define NUL ' \ 0 ' <nl> + # define INT_NULL ( ( int * ) 0 ) <nl> + <nl> + static const char * s_null = " ( null ) " ; <nl> + # define S_NULL_LEN 6 <nl> + <nl> + # define FLOAT_DIGITS 6 <nl> + # define EXPONENT_LENGTH 10 <nl> + <nl> + # define HAVE_LOCALE_H 1 <nl> + <nl> + # ifdef HAVE_LOCALE_H <nl> + } / / namespace HPHP <nl> + <nl> + # include < locale . h > <nl> + <nl> + namespace HPHP { <nl> + # define LCONV_DECIMAL_POINT ( * lconv - > decimal_point ) <nl> + # else <nl> + # define LCONV_DECIMAL_POINT ' . 
' <nl> + # endif <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / * <nl> + * Copyright ( c ) 2002 , 2006 Todd C . Miller < Todd . Miller @ courtesan . com > <nl> + * <nl> + * Permission to use , copy , modify , and distribute this software for any <nl> + * purpose with or without fee is hereby granted , provided that the above <nl> + * copyright notice and this permission notice appear in all copies . <nl> + * <nl> + * THE SOFTWARE IS PROVIDED " AS IS " AND THE AUTHOR DISCLAIMS ALL WARRANTIES <nl> + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF <nl> + * MERCHANTABILITY AND FITNESS . IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR <nl> + * ANY SPECIAL , DIRECT , INDIRECT , OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES <nl> + * WHATSOEVER RESULTING FROM LOSS OF USE , DATA OR PROFITS , WHETHER IN AN <nl> + * ACTION OF CONTRACT , NEGLIGENCE OR OTHER TORTIOUS ACTION , ARISING OUT OF <nl> + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE . <nl> + * <nl> + * Sponsored in part by the Defense Advanced Research Projects <nl> + * Agency ( DARPA ) and Air Force Research Laboratory , Air Force <nl> + * Materiel Command , USAF , under agreement number F39502 - 99 - 1 - 0512 . <nl> + * / <nl> + <nl> + static char * __cvt ( double value , int ndigit , int * decpt , int * sign , <nl> + int fmode , int pad ) { <nl> + register char * s = nullptr ; <nl> + char * p , * rve , c ; <nl> + size_t siz ; <nl> + <nl> + if ( ndigit < 0 ) { <nl> + siz = - ndigit + 1 ; <nl> + } else { <nl> + siz = ndigit + 1 ; <nl> + } <nl> + <nl> + / * __dtoa ( ) doesn ' t allocate space for 0 so we do it by hand * / <nl> + if ( value = = 0 . 0 ) { <nl> + * decpt = 1 - fmode ; / * 1 for ' e ' , 0 for ' f ' * / <nl> + * sign = 0 ; <nl> + if ( ( rve = s = ( char * ) malloc ( ndigit ? 
siz : 2 ) ) = = nullptr ) { <nl> + return ( nullptr ) ; <nl> + } <nl> + * rve + + = ' 0 ' ; <nl> + * rve = ' \ 0 ' ; <nl> + if ( ! ndigit ) { <nl> + return ( s ) ; <nl> + } <nl> + } else { <nl> + p = zend_dtoa ( value , fmode + 2 , ndigit , decpt , sign , & rve ) ; <nl> + if ( * decpt = = 9999 ) { <nl> + / * Infinity or Nan , convert to inf or nan like printf * / <nl> + * decpt = 0 ; <nl> + c = * p ; <nl> + zend_freedtoa ( p ) ; <nl> + return strdup ( c = = ' I ' ? " INF " : " NAN " ) ; <nl> + } <nl> + / * Make a local copy and adjust rve to be in terms of s * / <nl> + if ( pad & & fmode ) { <nl> + siz + = * decpt ; <nl> + } <nl> + if ( ( s = ( char * ) malloc ( siz + 1 ) ) = = nullptr ) { <nl> + zend_freedtoa ( p ) ; <nl> + return ( nullptr ) ; <nl> + } <nl> + ( void ) string_copy ( s , p , siz ) ; <nl> + rve = s + ( rve - p ) ; <nl> + zend_freedtoa ( p ) ; <nl> + } <nl> + <nl> + / * Add trailing zeros * / <nl> + if ( pad ) { <nl> + siz - = rve - s ; <nl> + while ( - - siz ) { <nl> + * rve + + = ' 0 ' ; <nl> + } <nl> + * rve = ' \ 0 ' ; <nl> + } <nl> + <nl> + return ( s ) ; <nl> + } <nl> + <nl> + static inline char * php_ecvt ( double value , int ndigit , int * decpt , int * sign ) { <nl> + return ( __cvt ( value , ndigit , decpt , sign , 0 , 1 ) ) ; <nl> + } <nl> + <nl> + static inline char * php_fcvt ( double value , int ndigit , int * decpt , int * sign ) { <nl> + return ( __cvt ( value , ndigit , decpt , sign , 1 , 1 ) ) ; <nl> + } <nl> + <nl> + char * php_gcvt ( double value , int ndigit , char dec_point , <nl> + char exponent , char * buf ) { <nl> + char * digits , * dst , * src ; <nl> + int i , decpt , sign ; <nl> + <nl> + digits = zend_dtoa ( value , 2 , ndigit , & decpt , & sign , nullptr ) ; <nl> + if ( decpt = = 9999 ) { <nl> + / * <nl> + * Infinity or NaN , convert to inf or nan with sign . <nl> + * We assume the buffer is at least ndigit long . <nl> + * / <nl> + snprintf ( buf , ndigit + 1 , " % s % s " , ( sign & & * digits = = ' I ' ) ? 
" - " : " " , <nl> + * digits = = ' I ' ? " INF " : " NAN " ) ; <nl> + zend_freedtoa ( digits ) ; <nl> + return ( buf ) ; <nl> + } <nl> + <nl> + dst = buf ; <nl> + if ( sign ) { <nl> + * dst + + = ' - ' ; <nl> + } <nl> + <nl> + if ( ( decpt > = 0 & & decpt > ndigit ) | | decpt < - 3 ) { / * use E - style * / <nl> + / * exponential format ( e . g . 1 . 2345e + 13 ) * / <nl> + if ( - - decpt < 0 ) { <nl> + sign = 1 ; <nl> + decpt = - decpt ; <nl> + } else { <nl> + sign = 0 ; <nl> + } <nl> + src = digits ; <nl> + * dst + + = * src + + ; <nl> + * dst + + = dec_point ; <nl> + if ( * src = = ' \ 0 ' ) { <nl> + * dst + + = ' 0 ' ; <nl> + } else { <nl> + do { <nl> + * dst + + = * src + + ; <nl> + } while ( * src ! = ' \ 0 ' ) ; <nl> + } <nl> + * dst + + = exponent ; <nl> + if ( sign ) { <nl> + * dst + + = ' - ' ; <nl> + } else { <nl> + * dst + + = ' + ' ; <nl> + } <nl> + if ( decpt < 10 ) { <nl> + * dst + + = ' 0 ' + decpt ; <nl> + * dst = ' \ 0 ' ; <nl> + } else { <nl> + / * XXX - optimize * / <nl> + for ( sign = decpt , i = 0 ; ( sign / = 10 ) ! = 0 ; i + + ) <nl> + continue ; <nl> + dst [ i + 1 ] = ' \ 0 ' ; <nl> + while ( decpt ! = 0 ) { <nl> + dst [ i - - ] = ' 0 ' + decpt % 10 ; <nl> + decpt / = 10 ; <nl> + } <nl> + } <nl> + } else if ( decpt < 0 ) { <nl> + / * standard format 0 . * / <nl> + * dst + + = ' 0 ' ; / * zero before decimal point * / <nl> + * dst + + = dec_point ; <nl> + do { <nl> + * dst + + = ' 0 ' ; <nl> + } while ( + + decpt < 0 ) ; <nl> + src = digits ; <nl> + while ( * src ! = ' \ 0 ' ) { <nl> + * dst + + = * src + + ; <nl> + } <nl> + * dst = ' \ 0 ' ; <nl> + } else { <nl> + / * standard format * / <nl> + for ( i = 0 , src = digits ; i < decpt ; i + + ) { <nl> + if ( * src ! = ' \ 0 ' ) { <nl> + * dst + + = * src + + ; <nl> + } else { <nl> + * dst + + = ' 0 ' ; <nl> + } <nl> + } <nl> + if ( * src ! 
= ' \ 0 ' ) { <nl> + if ( src = = digits ) { <nl> + * dst + + = ' 0 ' ; / * zero before decimal point * / <nl> + } <nl> + * dst + + = dec_point ; <nl> + for ( i = decpt ; digits [ i ] ! = ' \ 0 ' ; i + + ) { <nl> + * dst + + = digits [ i ] ; <nl> + } <nl> + } <nl> + * dst = ' \ 0 ' ; <nl> + } <nl> + zend_freedtoa ( digits ) ; <nl> + return ( buf ) ; <nl> + } <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / Apache license <nl> + <nl> + / * = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + * Copyright ( c ) 1995 - 1998 The Apache Group . All rights reserved . <nl> + * <nl> + * Redistribution and use in source and binary forms , with or without <nl> + * modification , are permitted provided that the following conditions <nl> + * are met : <nl> + * <nl> + * 1 . Redistributions of source code must retain the above copyright <nl> + * notice , this list of conditions and the following disclaimer . <nl> + * <nl> + * 2 . Redistributions in binary form must reproduce the above copyright <nl> + * notice , this list of conditions and the following disclaimer in <nl> + * the documentation and / or other materials provided with the <nl> + * distribution . <nl> + * <nl> + * 3 . All advertising materials mentioning features or use of this <nl> + * software must display the following acknowledgment : <nl> + * " This product includes software developed by the Apache Group <nl> + * for use in the Apache HTTP server project ( http : / / www . apache . org / ) . " <nl> + * <nl> + * 4 . The names " Apache Server " and " Apache Group " must not be used to <nl> + * endorse or promote products derived from this software without <nl> + * prior written permission . <nl> + * <nl> + * 5 . 
Redistributions of any form whatsoever must retain the following <nl> + * acknowledgment : <nl> + * " This product includes software developed by the Apache Group <nl> + * for use in the Apache HTTP server project ( http : / / www . apache . org / ) . " <nl> + * <nl> + * THIS SOFTWARE IS PROVIDED BY THE APACHE GROUP ` ` AS IS ' ' AND ANY <nl> + * EXPRESSED OR IMPLIED WARRANTIES , INCLUDING , BUT NOT LIMITED TO , THE <nl> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR <nl> + * PURPOSE ARE DISCLAIMED . IN NO EVENT SHALL THE APACHE GROUP OR <nl> + * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT , INDIRECT , INCIDENTAL , <nl> + * SPECIAL , EXEMPLARY , OR CONSEQUENTIAL DAMAGES ( INCLUDING , BUT <nl> + * NOT LIMITED TO , PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES ; <nl> + * LOSS OF USE , DATA , OR PROFITS ; OR BUSINESS INTERRUPTION ) <nl> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY , WHETHER IN CONTRACT , <nl> + * STRICT LIABILITY , OR TORT ( INCLUDING NEGLIGENCE OR OTHERWISE ) <nl> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE , EVEN IF ADVISED <nl> + * OF THE POSSIBILITY OF SUCH DAMAGE . <nl> + * = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + * <nl> + * This software consists of voluntary contributions made by many <nl> + * individuals on behalf of the Apache Group and was originally based <nl> + * on public domain software written at the National Center for <nl> + * Supercomputing Applications , University of Illinois , Urbana - Champaign . <nl> + * For more information on the Apache Group and the Apache HTTP server <nl> + * project , please see < http : / / www . apache . org / > . <nl> + * <nl> + * This code is based on , and used with the permission of , the <nl> + * SIO stdio - replacement strx_ * functions by Panos Tsirigotis <nl> + * < panos @ alumni . cs . colorado . edu > for xinetd . 
<nl> + * / <nl> + <nl> + / * <nl> + * Convert num to a base X number where X is a power of 2 . nbits determines X . <nl> + * For example , if nbits is 3 , we do base 8 conversion <nl> + * Return value : <nl> + * a pointer to a string containing the number <nl> + * <nl> + * The caller provides a buffer for the string : that is the buf_end argument <nl> + * which is a pointer to the END of the buffer + 1 ( i . e . if the buffer <nl> + * is declared as buf [ 100 ] , buf_end should be & buf [ 100 ] ) <nl> + * / <nl> + char * ap_php_conv_p2 ( register uint64_t num , register int nbits , <nl> + char format , char * buf_end , register int * len ) <nl> + { <nl> + register int mask = ( 1 < < nbits ) - 1 ; <nl> + register char * p = buf_end ; <nl> + static char low_digits [ ] = " 0123456789abcdef " ; <nl> + static char upper_digits [ ] = " 0123456789ABCDEF " ; <nl> + register char * digits = ( format = = ' X ' ) ? upper_digits : low_digits ; <nl> + <nl> + do { <nl> + * - - p = digits [ num & mask ] ; <nl> + num > > = nbits ; <nl> + } <nl> + while ( num ) ; <nl> + <nl> + * len = buf_end - p ; <nl> + return ( p ) ; <nl> + } <nl> + <nl> + / * <nl> + * Convert num to its decimal format . <nl> + * Return value : <nl> + * - a pointer to a string containing the number ( no sign ) <nl> + * - len contains the length of the string <nl> + * - is_negative is set to TRUE or FALSE depending on the sign <nl> + * of the number ( always set to FALSE if is_unsigned is TRUE ) <nl> + * <nl> + * The caller provides a buffer for the string : that is the buf_end argument <nl> + * which is a pointer to the END of the buffer + 1 ( i . e . 
if the buffer <nl> + * is declared as buf [ 100 ] , buf_end should be & buf [ 100 ] ) <nl> + * / <nl> + char * ap_php_conv_10 ( register int64_t num , register bool is_unsigned , <nl> + register int * is_negative , char * buf_end , <nl> + register int * len ) { <nl> + register char * p = buf_end ; <nl> + register uint64_t magnitude ; <nl> + <nl> + if ( is_unsigned ) { <nl> + magnitude = ( uint64_t ) num ; <nl> + * is_negative = 0 ; <nl> + } else { <nl> + * is_negative = ( num < 0 ) ; <nl> + <nl> + / * <nl> + * On a 2 ' s complement machine , negating the most negative integer <nl> + * results in a number that cannot be represented as a signed integer . <nl> + * Here is what we do to obtain the number ' s magnitude : <nl> + * a . add 1 to the number <nl> + * b . negate it ( becomes positive ) <nl> + * c . convert it to unsigned <nl> + * d . add 1 <nl> + * / <nl> + if ( * is_negative ) { <nl> + int64_t t = num + 1 ; <nl> + magnitude = ( ( uint64_t ) - t ) + 1 ; <nl> + } else { <nl> + magnitude = ( uint64_t ) num ; <nl> + } <nl> + } <nl> + <nl> + / * <nl> + * We use a do - while loop so that we write at least 1 digit <nl> + * / <nl> + do { <nl> + register uint64_t new_magnitude = magnitude / 10 ; <nl> + <nl> + * - - p = ( char ) ( magnitude - new_magnitude * 10 + ' 0 ' ) ; <nl> + magnitude = new_magnitude ; <nl> + } <nl> + while ( magnitude ) ; <nl> + <nl> + * len = buf_end - p ; <nl> + return ( p ) ; <nl> + } <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / * <nl> + * Convert a floating point number to a string formats ' f ' , ' e ' or ' E ' . <nl> + * The result is placed in buf , and len denotes the length of the string <nl> + * The sign is returned in the is_negative argument ( and is not placed <nl> + * in buf ) . 
<nl> + * / <nl> + char * php_conv_fp ( register char format , register double num , <nl> + bool add_dp , int precision , char dec_point , <nl> + int * is_negative , char * buf , int * len ) { <nl> + register char * s = buf ; <nl> + register char * p , * p_orig ; <nl> + int decimal_point ; <nl> + <nl> + if ( precision > = NDIG - 1 ) { <nl> + precision = NDIG - 2 ; <nl> + } <nl> + <nl> + if ( format = = ' F ' ) { <nl> + p_orig = p = php_fcvt ( num , precision , & decimal_point , is_negative ) ; <nl> + } else { / / either e or E format <nl> + p_orig = p = php_ecvt ( num , precision + 1 , & decimal_point , is_negative ) ; <nl> + } <nl> + <nl> + / / Check for Infinity and NaN <nl> + if ( isalpha ( ( int ) * p ) ) { <nl> + * len = strlen ( p ) ; <nl> + memcpy ( buf , p , * len + 1 ) ; <nl> + * is_negative = 0 ; <nl> + free ( p_orig ) ; <nl> + return ( buf ) ; <nl> + } <nl> + if ( format = = ' F ' ) { <nl> + if ( decimal_point < = 0 ) { <nl> + if ( num ! = 0 | | precision > 0 ) { <nl> + * s + + = ' 0 ' ; <nl> + if ( precision > 0 ) { <nl> + * s + + = dec_point ; <nl> + while ( decimal_point + + < 0 ) { <nl> + * s + + = ' 0 ' ; <nl> + } <nl> + } else if ( add_dp ) { <nl> + * s + + = dec_point ; <nl> + } <nl> + } <nl> + } else { <nl> + int addz = decimal_point > = NDIG ? decimal_point - NDIG + 1 : 0 ; <nl> + decimal_point - = addz ; <nl> + while ( decimal_point - - > 0 ) { <nl> + * s + + = * p + + ; <nl> + } <nl> + while ( addz - - > 0 ) { <nl> + * s + + = ' 0 ' ; <nl> + } <nl> + if ( precision > 0 | | add_dp ) { <nl> + * s + + = dec_point ; <nl> + } <nl> + } <nl> + } else { <nl> + * s + + = * p + + ; <nl> + if ( precision > 0 | | add_dp ) { <nl> + * s + + = ' . ' ; <nl> + } <nl> + } <nl> + <nl> + / / copy the rest of p , the NUL is NOT copied <nl> + while ( * p ) { <nl> + * s + + = * p + + ; <nl> + } <nl> + <nl> + if ( format ! 
= ' F ' ) { <nl> + char temp [ EXPONENT_LENGTH ] ; / / for exponent conversion <nl> + int t_len ; <nl> + int exponent_is_negative ; <nl> + <nl> + * s + + = format ; / / either e or E <nl> + decimal_point - - ; <nl> + if ( decimal_point ! = 0 ) { <nl> + p = ap_php_conv_10 ( ( int64_t ) decimal_point , false , <nl> + & exponent_is_negative , & temp [ EXPONENT_LENGTH ] , <nl> + & t_len ) ; <nl> + * s + + = exponent_is_negative ? ' - ' : ' + ' ; <nl> + <nl> + / / Make sure the exponent has at least 2 digits <nl> + while ( t_len - - ) { <nl> + * s + + = * p + + ; <nl> + } <nl> + } else { <nl> + * s + + = ' + ' ; <nl> + * s + + = ' 0 ' ; <nl> + } <nl> + } <nl> + * len = s - buf ; <nl> + free ( p_orig ) ; <nl> + return ( buf ) ; <nl> + } <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + inline static void appendchar ( char * * buffer , int * pos , int * size , char add ) { <nl> + if ( ( * pos + 1 ) > = * size ) { <nl> + * size < < = 1 ; <nl> + * buffer = ( char * ) realloc ( * buffer , * size ) ; <nl> + } <nl> + ( * buffer ) [ ( * pos ) + + ] = add ; <nl> + } <nl> + <nl> + inline static void appendsimplestring ( char * * buffer , int * pos , int * size , <nl> + const char * add , int len ) { <nl> + int req_size = * pos + len ; <nl> + <nl> + if ( req_size > * size ) { <nl> + while ( req_size > * size ) { <nl> + * size < < = 1 ; <nl> + } <nl> + * buffer = ( char * ) realloc ( * buffer , * size ) ; <nl> + } <nl> + memcpy ( & ( * buffer ) [ * pos ] , add , len ) ; <nl> + * pos + = len ; <nl> + } <nl> + <nl> + / * <nl> + * Do format conversion placing the output in buffer <nl> + * / <nl> + static int xbuf_format_converter ( char * * outbuf , const char * fmt , va_list ap ) <nl> + { <nl> + register char * s = nullptr ; <nl> + char * q ; <nl> + int s_len ; <nl> + <nl> + register int min_width = 0 ; <nl> + int precision = 0 ; <nl> + enum { <nl> + 
LEFT , RIGHT <nl> + } adjust ; <nl> + char pad_char ; <nl> + char prefix_char ; <nl> + <nl> + double fp_num ; <nl> + wide_int i_num = ( wide_int ) 0 ; <nl> + u_wide_int ui_num ; <nl> + <nl> + char num_buf [ NUM_BUF_SIZE ] ; <nl> + char char_buf [ 2 ] ; / * for printing % % and % < unknown > * / <nl> + <nl> + # ifdef HAVE_LOCALE_H <nl> + struct lconv * lconv = nullptr ; <nl> + # endif <nl> + <nl> + / * <nl> + * Flag variables <nl> + * / <nl> + length_modifier_e modifier ; <nl> + boolean_e alternate_form ; <nl> + boolean_e print_sign ; <nl> + boolean_e print_blank ; <nl> + boolean_e adjust_precision ; <nl> + boolean_e adjust_width ; <nl> + int is_negative ; <nl> + <nl> + int size = 240 ; <nl> + char * result = ( char * ) malloc ( size ) ; <nl> + int outpos = 0 ; <nl> + <nl> + while ( * fmt ) { <nl> + if ( * fmt ! = ' % ' ) { <nl> + appendchar ( & result , & outpos , & size , * fmt ) ; <nl> + } else { <nl> + / * <nl> + * Default variable settings <nl> + * / <nl> + adjust = RIGHT ; <nl> + alternate_form = print_sign = print_blank = NO ; <nl> + pad_char = ' ' ; <nl> + prefix_char = NUL ; <nl> + <nl> + fmt + + ; <nl> + <nl> + / * <nl> + * Try to avoid checking for flags , width or precision <nl> + * / <nl> + if ( isascii ( ( int ) * fmt ) & & ! 
islower ( ( int ) * fmt ) ) { <nl> + / * <nl> + * Recognize flags : - , # , BLANK , + <nl> + * / <nl> + for ( ; ; fmt + + ) { <nl> + if ( * fmt = = ' - ' ) <nl> + adjust = LEFT ; <nl> + else if ( * fmt = = ' + ' ) <nl> + print_sign = YES ; <nl> + else if ( * fmt = = ' # ' ) <nl> + alternate_form = YES ; <nl> + else if ( * fmt = = ' ' ) <nl> + print_blank = YES ; <nl> + else if ( * fmt = = ' 0 ' ) <nl> + pad_char = ' 0 ' ; <nl> + else <nl> + break ; <nl> + } <nl> + <nl> + / * <nl> + * Check if a width was specified <nl> + * / <nl> + if ( isdigit ( ( int ) * fmt ) ) { <nl> + STR_TO_DEC ( fmt , min_width ) ; <nl> + adjust_width = YES ; <nl> + } else if ( * fmt = = ' * ' ) { <nl> + min_width = va_arg ( ap , int ) ; <nl> + fmt + + ; <nl> + adjust_width = YES ; <nl> + if ( min_width < 0 ) { <nl> + adjust = LEFT ; <nl> + min_width = - min_width ; <nl> + } <nl> + } else <nl> + adjust_width = NO ; <nl> + <nl> + / * <nl> + * Check if a precision was specified <nl> + * <nl> + * XXX : an unreasonable amount of precision may be specified <nl> + * resulting in overflow of num_buf . Currently we <nl> + * ignore this possibility . <nl> + * / <nl> + if ( * fmt = = ' . 
' ) { <nl> + adjust_precision = YES ; <nl> + fmt + + ; <nl> + if ( isdigit ( ( int ) * fmt ) ) { <nl> + STR_TO_DEC ( fmt , precision ) ; <nl> + } else if ( * fmt = = ' * ' ) { <nl> + precision = va_arg ( ap , int ) ; <nl> + fmt + + ; <nl> + if ( precision < 0 ) <nl> + precision = 0 ; <nl> + } else <nl> + precision = 0 ; <nl> + } else <nl> + adjust_precision = NO ; <nl> + } else <nl> + adjust_precision = adjust_width = NO ; <nl> + <nl> + / * <nl> + * Modifier check <nl> + * / <nl> + switch ( * fmt ) { <nl> + case ' L ' : <nl> + fmt + + ; <nl> + modifier = LM_LONG_DOUBLE ; <nl> + break ; <nl> + case ' I ' : <nl> + fmt + + ; <nl> + # if SIZEOF_LONG_LONG <nl> + if ( * fmt = = ' 6 ' & & * ( fmt + 1 ) = = ' 4 ' ) { <nl> + fmt + = 2 ; <nl> + modifier = LM_LONG_LONG ; <nl> + } else <nl> + # endif <nl> + if ( * fmt = = ' 3 ' & & * ( fmt + 1 ) = = ' 2 ' ) { <nl> + fmt + = 2 ; <nl> + modifier = LM_LONG ; <nl> + } else { <nl> + # ifdef _WIN64 <nl> + modifier = LM_LONG_LONG ; <nl> + # else <nl> + modifier = LM_LONG ; <nl> + # endif <nl> + } <nl> + break ; <nl> + case ' l ' : <nl> + fmt + + ; <nl> + # if SIZEOF_LONG_LONG <nl> + if ( * fmt = = ' l ' ) { <nl> + fmt + + ; <nl> + modifier = LM_LONG_LONG ; <nl> + } else <nl> + # endif <nl> + modifier = LM_LONG ; <nl> + break ; <nl> + case ' z ' : <nl> + fmt + + ; <nl> + modifier = LM_SIZE_T ; <nl> + break ; <nl> + case ' j ' : <nl> + fmt + + ; <nl> + # if SIZEOF_INTMAX_T <nl> + modifier = LM_INTMAX_T ; <nl> + # else <nl> + modifier = LM_SIZE_T ; <nl> + # endif <nl> + break ; <nl> + case ' t ' : <nl> + fmt + + ; <nl> + # if SIZEOF_PTRDIFF_T <nl> + modifier = LM_PTRDIFF_T ; <nl> + # else <nl> + modifier = LM_SIZE_T ; <nl> + # endif <nl> + break ; <nl> + case ' h ' : <nl> + fmt + + ; <nl> + if ( * fmt = = ' h ' ) { <nl> + fmt + + ; <nl> + } <nl> + / * these are promoted to int , so no break * / <nl> + default : <nl> + modifier = LM_STD ; <nl> + break ; <nl> + } <nl> + <nl> + / * <nl> + * Argument extraction and printing . 
<nl> + * First we determine the argument type . <nl> + * Then , we convert the argument to a string . <nl> + * On exit from the switch , s points to the string that <nl> + * must be printed , s_len has the length of the string <nl> + * The precision requirements , if any , are reflected in s_len . <nl> + * <nl> + * NOTE : pad_char may be set to ' 0 ' because of the 0 flag . <nl> + * It is reset to ' ' by non - numeric formats <nl> + * / <nl> + switch ( * fmt ) { <nl> + case ' u ' : <nl> + switch ( modifier ) { <nl> + default : <nl> + i_num = ( wide_int ) va_arg ( ap , unsigned int ) ; <nl> + break ; <nl> + case LM_LONG_DOUBLE : <nl> + goto fmt_error ; <nl> + case LM_LONG : <nl> + i_num = ( wide_int ) va_arg ( ap , unsigned long int ) ; <nl> + break ; <nl> + case LM_SIZE_T : <nl> + i_num = ( wide_int ) va_arg ( ap , size_t ) ; <nl> + break ; <nl> + # if SIZEOF_LONG_LONG <nl> + case LM_LONG_LONG : <nl> + i_num = ( wide_int ) va_arg ( ap , u_wide_int ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_INTMAX_T <nl> + case LM_INTMAX_T : <nl> + i_num = ( wide_int ) va_arg ( ap , uintmax_t ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_PTRDIFF_T <nl> + case LM_PTRDIFF_T : <nl> + i_num = ( wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> + break ; <nl> + # endif <nl> + } <nl> + / * <nl> + * The rest also applies to other integer formats , so fall <nl> + * into that case . <nl> + * / <nl> + case ' d ' : <nl> + case ' i ' : <nl> + / * <nl> + * Get the arg if we haven ' t already . <nl> + * / <nl> + if ( ( * fmt ) ! 
= ' u ' ) { <nl> + switch ( modifier ) { <nl> + default : <nl> + i_num = ( wide_int ) va_arg ( ap , int ) ; <nl> + break ; <nl> + case LM_LONG_DOUBLE : <nl> + goto fmt_error ; <nl> + case LM_LONG : <nl> + i_num = ( wide_int ) va_arg ( ap , long int ) ; <nl> + break ; <nl> + case LM_SIZE_T : <nl> + # if SIZEOF_SSIZE_T <nl> + i_num = ( wide_int ) va_arg ( ap , ssize_t ) ; <nl> + # else <nl> + i_num = ( wide_int ) va_arg ( ap , size_t ) ; <nl> + # endif <nl> + break ; <nl> + # if SIZEOF_LONG_LONG <nl> + case LM_LONG_LONG : <nl> + i_num = ( wide_int ) va_arg ( ap , wide_int ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_INTMAX_T <nl> + case LM_INTMAX_T : <nl> + i_num = ( wide_int ) va_arg ( ap , intmax_t ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_PTRDIFF_T <nl> + case LM_PTRDIFF_T : <nl> + i_num = ( wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> + break ; <nl> + # endif <nl> + } <nl> + } <nl> + s = ap_php_conv_10 ( i_num , ( * fmt ) = = ' u ' , & is_negative , <nl> + & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> + FIX_PRECISION ( adjust_precision , precision , s , s_len ) ; <nl> + <nl> + if ( * fmt ! 
= ' u ' ) { <nl> + if ( is_negative ) <nl> + prefix_char = ' - ' ; <nl> + else if ( print_sign ) <nl> + prefix_char = ' + ' ; <nl> + else if ( print_blank ) <nl> + prefix_char = ' ' ; <nl> + } <nl> + break ; <nl> + <nl> + <nl> + case ' o ' : <nl> + switch ( modifier ) { <nl> + default : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , unsigned int ) ; <nl> + break ; <nl> + case LM_LONG_DOUBLE : <nl> + goto fmt_error ; <nl> + case LM_LONG : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , unsigned long int ) ; <nl> + break ; <nl> + case LM_SIZE_T : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , size_t ) ; <nl> + break ; <nl> + # if SIZEOF_LONG_LONG <nl> + case LM_LONG_LONG : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , u_wide_int ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_INTMAX_T <nl> + case LM_INTMAX_T : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , uintmax_t ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_PTRDIFF_T <nl> + case LM_PTRDIFF_T : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> + break ; <nl> + # endif <nl> + } <nl> + s = ap_php_conv_p2 ( ui_num , 3 , * fmt , <nl> + & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> + FIX_PRECISION ( adjust_precision , precision , s , s_len ) ; <nl> + if ( alternate_form & & * s ! 
= ' 0 ' ) { <nl> + * - - s = ' 0 ' ; <nl> + s_len + + ; <nl> + } <nl> + break ; <nl> + <nl> + <nl> + case ' x ' : <nl> + case ' X ' : <nl> + switch ( modifier ) { <nl> + default : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , unsigned int ) ; <nl> + break ; <nl> + case LM_LONG_DOUBLE : <nl> + goto fmt_error ; <nl> + case LM_LONG : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , unsigned long int ) ; <nl> + break ; <nl> + case LM_SIZE_T : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , size_t ) ; <nl> + break ; <nl> + # if SIZEOF_LONG_LONG <nl> + case LM_LONG_LONG : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , u_wide_int ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_INTMAX_T <nl> + case LM_INTMAX_T : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , uintmax_t ) ; <nl> + break ; <nl> + # endif <nl> + # if SIZEOF_PTRDIFF_T <nl> + case LM_PTRDIFF_T : <nl> + ui_num = ( u_wide_int ) va_arg ( ap , ptrdiff_t ) ; <nl> + break ; <nl> + # endif <nl> + } <nl> + s = ap_php_conv_p2 ( ui_num , 4 , * fmt , <nl> + & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> + FIX_PRECISION ( adjust_precision , precision , s , s_len ) ; <nl> + if ( alternate_form & & i_num ! = 0 ) { <nl> + * - - s = * fmt ; / * ' x ' or ' X ' * / <nl> + * - - s = ' 0 ' ; <nl> + s_len + = 2 ; <nl> + } <nl> + break ; <nl> + <nl> + <nl> + case ' s ' : <nl> + case ' v ' : <nl> + s = va_arg ( ap , char * ) ; <nl> + if ( s ! 
= nullptr ) { <nl> + s_len = strlen ( s ) ; <nl> + if ( adjust_precision & & precision < s_len ) <nl> + s_len = precision ; <nl> + } else { <nl> + s = const_cast < char * > ( s_null ) ; <nl> + s_len = S_NULL_LEN ; <nl> + } <nl> + pad_char = ' ' ; <nl> + break ; <nl> + <nl> + <nl> + case ' f ' : <nl> + case ' F ' : <nl> + case ' e ' : <nl> + case ' E ' : <nl> + switch ( modifier ) { <nl> + case LM_LONG_DOUBLE : <nl> + fp_num = ( double ) va_arg ( ap , long double ) ; <nl> + break ; <nl> + case LM_STD : <nl> + fp_num = va_arg ( ap , double ) ; <nl> + break ; <nl> + default : <nl> + goto fmt_error ; <nl> + } <nl> + <nl> + if ( std : : isnan ( fp_num ) ) { <nl> + s = const_cast < char * > ( " nan " ) ; <nl> + s_len = 3 ; <nl> + } else if ( std : : isinf ( fp_num ) ) { <nl> + s = const_cast < char * > ( " inf " ) ; <nl> + s_len = 3 ; <nl> + } else { <nl> + # ifdef HAVE_LOCALE_H <nl> + if ( ! lconv ) { <nl> + lconv = localeconv ( ) ; <nl> + } <nl> + # endif <nl> + s = php_conv_fp ( ( * fmt = = ' f ' ) ? ' F ' : * fmt , fp_num , alternate_form , <nl> + ( adjust_precision = = NO ) ? FLOAT_DIGITS : precision , <nl> + ( * fmt = = ' f ' ) ? LCONV_DECIMAL_POINT : ' . 
' , <nl> + & is_negative , & num_buf [ 1 ] , & s_len ) ; <nl> + if ( is_negative ) <nl> + prefix_char = ' - ' ; <nl> + else if ( print_sign ) <nl> + prefix_char = ' + ' ; <nl> + else if ( print_blank ) <nl> + prefix_char = ' ' ; <nl> + } <nl> + break ; <nl> + <nl> + <nl> + case ' g ' : <nl> + case ' k ' : <nl> + case ' G ' : <nl> + case ' H ' : <nl> + switch ( modifier ) { <nl> + case LM_LONG_DOUBLE : <nl> + fp_num = ( double ) va_arg ( ap , long double ) ; <nl> + break ; <nl> + case LM_STD : <nl> + fp_num = va_arg ( ap , double ) ; <nl> + break ; <nl> + default : <nl> + goto fmt_error ; <nl> + } <nl> + <nl> + if ( std : : isnan ( fp_num ) ) { <nl> + s = const_cast < char * > ( " NAN " ) ; <nl> + s_len = 3 ; <nl> + break ; <nl> + } else if ( std : : isinf ( fp_num ) ) { <nl> + if ( fp_num > 0 ) { <nl> + s = const_cast < char * > ( " INF " ) ; <nl> + s_len = 3 ; <nl> + } else { <nl> + s = const_cast < char * > ( " - INF " ) ; <nl> + s_len = 4 ; <nl> + } <nl> + break ; <nl> + } <nl> + <nl> + if ( adjust_precision = = NO ) <nl> + precision = FLOAT_DIGITS ; <nl> + else if ( precision = = 0 ) <nl> + precision = 1 ; <nl> + / * <nl> + * * We use & num_buf [ 1 ] , so that we have room for the sign <nl> + * / <nl> + # ifdef HAVE_LOCALE_H <nl> + if ( ! lconv ) { <nl> + lconv = localeconv ( ) ; <nl> + } <nl> + # endif <nl> + s = php_gcvt ( fp_num , precision , <nl> + ( * fmt = = ' H ' | | * fmt = = ' k ' ) ? ' . ' : LCONV_DECIMAL_POINT , <nl> + ( * fmt = = ' G ' | | * fmt = = ' H ' ) ? ' E ' : ' e ' , & num_buf [ 1 ] ) ; <nl> + if ( * s = = ' - ' ) <nl> + prefix_char = * s + + ; <nl> + else if ( print_sign ) <nl> + prefix_char = ' + ' ; <nl> + else if ( print_blank ) <nl> + prefix_char = ' ' ; <nl> + <nl> + s_len = strlen ( s ) ; <nl> + <nl> + if ( alternate_form & & ( q = strchr ( s , ' . ' ) ) = = nullptr ) <nl> + s [ s_len + + ] = ' . 
' ; <nl> + break ; <nl> + <nl> + <nl> + case ' c ' : <nl> + char_buf [ 0 ] = ( char ) ( va_arg ( ap , int ) ) ; <nl> + s = & char_buf [ 0 ] ; <nl> + s_len = 1 ; <nl> + pad_char = ' ' ; <nl> + break ; <nl> + <nl> + <nl> + case ' % ' : <nl> + char_buf [ 0 ] = ' % ' ; <nl> + s = & char_buf [ 0 ] ; <nl> + s_len = 1 ; <nl> + pad_char = ' ' ; <nl> + break ; <nl> + <nl> + <nl> + case ' n ' : <nl> + * ( va_arg ( ap , int * ) ) = outpos ; <nl> + goto skip_output ; <nl> + <nl> + / * <nl> + * Always extract the argument as a " char * " pointer . We <nl> + * should be using " void * " but there are still machines <nl> + * that don ' t understand it . <nl> + * If the pointer size is equal to the size of an unsigned <nl> + * integer we convert the pointer to a hex number , otherwise <nl> + * we print " % p " to indicate that we don ' t handle " % p " . <nl> + * / <nl> + case ' p ' : <nl> + if ( sizeof ( char * ) < = sizeof ( u_wide_int ) ) { <nl> + ui_num = ( u_wide_int ) ( ( size_t ) va_arg ( ap , char * ) ) ; <nl> + s = ap_php_conv_p2 ( ui_num , 4 , ' x ' , <nl> + & num_buf [ NUM_BUF_SIZE ] , & s_len ) ; <nl> + if ( ui_num ! = 0 ) { <nl> + * - - s = ' x ' ; <nl> + * - - s = ' 0 ' ; <nl> + s_len + = 2 ; <nl> + } <nl> + } else { <nl> + s = const_cast < char * > ( " % p " ) ; <nl> + s_len = 2 ; <nl> + } <nl> + pad_char = ' ' ; <nl> + break ; <nl> + <nl> + <nl> + case NUL : <nl> + / * <nl> + * The last character of the format string was % . <nl> + * We ignore it . <nl> + * / <nl> + continue ; <nl> + <nl> + <nl> + fmt_error : <nl> + throw Exception ( " Illegal length modifier specified ' % c ' " , * fmt ) ; <nl> + <nl> + / * <nl> + * The default case is for unrecognized % ' s . <nl> + * We print % < char > to help the user identify what <nl> + * option is not understood . <nl> + * This is also useful in case the user wants to pass <nl> + * the output of format_converter to another function <nl> + * that understands some other % < char > ( like syslog ) . 
<nl> + * Note that we can ' t point s inside fmt because the <nl> + * unknown < char > could be preceded by width etc . <nl> + * / <nl> + default : <nl> + char_buf [ 0 ] = ' % ' ; <nl> + char_buf [ 1 ] = * fmt ; <nl> + s = char_buf ; <nl> + s_len = 2 ; <nl> + pad_char = ' ' ; <nl> + break ; <nl> + } <nl> + <nl> + if ( prefix_char ! = NUL ) { <nl> + * - - s = prefix_char ; <nl> + s_len + + ; <nl> + } <nl> + if ( adjust_width & & adjust = = RIGHT & & min_width > s_len ) { <nl> + if ( pad_char = = ' 0 ' & & prefix_char ! = NUL ) { <nl> + appendchar ( & result , & outpos , & size , * s ) ; <nl> + s + + ; <nl> + s_len - - ; <nl> + min_width - - ; <nl> + } <nl> + for ( int i = 0 ; i < min_width - s_len ; i + + ) { <nl> + appendchar ( & result , & outpos , & size , pad_char ) ; <nl> + } <nl> + } <nl> + / * <nl> + * Print the ( for now ) non - null terminated string s . <nl> + * / <nl> + appendsimplestring ( & result , & outpos , & size , s , s_len ) ; <nl> + <nl> + if ( adjust_width & & adjust = = LEFT & & min_width > s_len ) { <nl> + for ( int i = 0 ; i < min_width - s_len ; i + + ) { <nl> + appendchar ( & result , & outpos , & size , pad_char ) ; <nl> + } <nl> + } <nl> + } <nl> + skip_output : <nl> + fmt + + ; <nl> + } <nl> + / * <nl> + * Add the terminating null here since it wasn ' t added incrementally above <nl> + * once the whole string has been composed . <nl> + * / <nl> + result [ outpos ] = NUL ; <nl> + * outbuf = result ; <nl> + return outpos ; <nl> + } <nl> + <nl> + / * <nl> + * This is the general purpose conversion function . <nl> + * / <nl> + int vspprintf ( char * * pbuf , size_t / * max_len * / , const char * format , . . . 
) { <nl> + int len ; <nl> + va_list ap ; <nl> + va_start ( ap , format ) ; <nl> + len = xbuf_format_converter ( pbuf , format , ap ) ; <nl> + va_end ( ap ) ; <nl> + return len ; <nl> + } <nl> + <nl> + / * <nl> + * Same as vspprintf but taking an va_list <nl> + * / <nl> + int vspprintf_ap ( char * * pbuf , size_t / * max_len * / , const char * format , <nl> + va_list ap ) { <nl> + int len ; <nl> + len = xbuf_format_converter ( pbuf , format , ap ) ; <nl> + return len ; <nl> + } <nl> + <nl> + int spprintf ( char * * pbuf , size_t max_len , const char * format , . . . ) <nl> + { <nl> + int cc ; <nl> + va_list ap ; <nl> + <nl> + va_start ( ap , format ) ; <nl> + cc = vspprintf ( pbuf , max_len , format , ap ) ; <nl> + va_end ( ap ) ; <nl> + return ( cc ) ; <nl> + } <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + } <nl> new file mode 100644 <nl> index 00000000000 . . 51e89ea2042 <nl> mmm / dev / null <nl> ppp b / hphp / zend / zend - printf . h <nl> <nl> + / * <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + | HipHop for PHP | <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + | Copyright ( c ) 2010 - present Facebook , Inc . ( http : / / www . facebook . com ) | <nl> + | Copyright ( c ) 1998 - 2010 Zend Technologies Ltd . ( http : / / www . zend . com ) | <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + | This source file is subject to version 2 . 00 of the Zend license , | <nl> + | that is bundled with this package in the file LICENSE , and is | <nl> + | available through the world - wide - web at the following url : | <nl> + | http : / / www . zend . com / license / 2_00 . txt . 
| <nl> + | If you did not receive a copy of the Zend license and are unable to | <nl> + | obtain it through the world - wide - web , please send a note to | <nl> + | license @ zend . com so we can mail you a copy immediately . | <nl> + + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> + * / <nl> + <nl> + # ifndef incl_HPHP_ZEND_ZEND_PRINTF_H_ <nl> + # define incl_HPHP_ZEND_ZEND_PRINTF_H_ <nl> + <nl> + # include < sys / types . h > <nl> + # include < stdarg . h > <nl> + <nl> + / / The " php_gcvt " and " php_conv_fp " functions assume that their " buf " argument <nl> + / / point to at least this many ( minus 1 for the latter ) bytes of memory . <nl> + # define NUM_BUF_SIZE 500 <nl> + <nl> + namespace HPHP { <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + char * php_gcvt ( double value , int ndigit , char dec_point , char exponent , <nl> + char * buf ) ; <nl> + char * php_conv_fp ( char format , double num , bool add_dp , int precision , <nl> + char dec_point , int * is_negative , char * buf , int * len ) ; <nl> + <nl> + / / XXX : vspprintf and spprintf have slightly different semantics and flags than <nl> + / / C99 printf ( because PHP ) so we can ' t annotate them with ATTRIBUTE_PRINTF <nl> + <nl> + int vspprintf ( char * * pbuf , size_t max_len , const char * format , . . . ) ; <nl> + int vspprintf_ap ( char * * pbuf , size_t max_len , const char * format , va_list ap ) ; <nl> + int spprintf ( char * * pbuf , size_t max_len , const char * format , . . . ) ; <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + } <nl> + <nl> + # endif / / incl_HPHP_ZEND_ZEND_PRINTF_H_ <nl> mmm a / hphp / zend / zend - string . cpp <nl> ppp b / hphp / zend / zend - string . 
cpp <nl> <nl> <nl> namespace HPHP { <nl> <nl> + int string_copy ( char * dst , const char * src , int siz ) { <nl> + register char * d = dst ; <nl> + register const char * s = src ; <nl> + register size_t n = siz ; <nl> + <nl> + / * Copy as many bytes as will fit * / <nl> + if ( n ! = 0 & & - - n ! = 0 ) { <nl> + do { <nl> + if ( ( * d + + = * s + + ) = = 0 ) <nl> + break ; <nl> + } while ( - - n ! = 0 ) ; <nl> + } <nl> + <nl> + / * Not enough room in dst , add NUL and traverse rest of src * / <nl> + if ( n = = 0 ) { <nl> + if ( siz ! = 0 ) <nl> + * d = ' \ 0 ' ; / * NUL - terminate dst * / <nl> + while ( * s + + ) <nl> + ; <nl> + } <nl> + <nl> + return ( s - src - 1 ) ; / * count does not include NUL * / <nl> + } <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / comparisons <nl> + <nl> + int string_ncmp ( const char * s1 , const char * s2 , int len ) { <nl> + for ( int i = 0 ; i < len ; i + + ) { <nl> + char c1 = s1 [ i ] ; <nl> + char c2 = s2 [ i ] ; <nl> + if ( c1 > c2 ) return 1 ; <nl> + if ( c1 < c2 ) return - 1 ; <nl> + } <nl> + return 0 ; <nl> + } <nl> + <nl> + static int compare_right ( char const * * a , char const * aend , <nl> + char const * * b , char const * bend ) { <nl> + int bias = 0 ; <nl> + <nl> + / * The longest run of digits wins . That aside , the greatest <nl> + value wins , but we can ' t know that it will until we ' ve scanned <nl> + both numbers to know that they have the same magnitude , so we <nl> + remember it in BIAS . * / <nl> + for ( ; ; ( * a ) + + , ( * b ) + + ) { <nl> + if ( ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) & & <nl> + ( * b = = bend | | ! isdigit ( ( int ) ( unsigned char ) * * b ) ) ) <nl> + return bias ; <nl> + else if ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) <nl> + return - 1 ; <nl> + else if ( * b = = bend | | ! 
isdigit ( ( int ) ( unsigned char ) * * b ) ) <nl> + return + 1 ; <nl> + else if ( * * a < * * b ) { <nl> + if ( ! bias ) <nl> + bias = - 1 ; <nl> + } else if ( * * a > * * b ) { <nl> + if ( ! bias ) <nl> + bias = + 1 ; <nl> + } <nl> + } <nl> + <nl> + return 0 ; <nl> + } <nl> + <nl> + static int compare_left ( char const * * a , char const * aend , <nl> + char const * * b , char const * bend ) { <nl> + / * Compare two left - aligned numbers : the first to have a <nl> + different value wins . * / <nl> + for ( ; ; ( * a ) + + , ( * b ) + + ) { <nl> + if ( ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) & & <nl> + ( * b = = bend | | ! isdigit ( ( int ) ( unsigned char ) * * b ) ) ) <nl> + return 0 ; <nl> + else if ( * a = = aend | | ! isdigit ( ( int ) ( unsigned char ) * * a ) ) <nl> + return - 1 ; <nl> + else if ( * b = = bend | | ! isdigit ( ( int ) ( unsigned char ) * * b ) ) <nl> + return + 1 ; <nl> + else if ( * * a < * * b ) <nl> + return - 1 ; <nl> + else if ( * * a > * * b ) <nl> + return + 1 ; <nl> + } <nl> + <nl> + return 0 ; <nl> + } <nl> + <nl> + int string_natural_cmp ( char const * a , size_t a_len , <nl> + char const * b , size_t b_len , int fold_case ) { <nl> + char ca , cb ; <nl> + char const * ap , * bp ; <nl> + char const * aend = a + a_len , * bend = b + b_len ; <nl> + int fractional , result ; <nl> + <nl> + if ( a_len = = 0 | | b_len = = 0 ) <nl> + return a_len - b_len ; <nl> + <nl> + ap = a ; <nl> + bp = b ; <nl> + while ( 1 ) { <nl> + ca = * ap ; cb = * bp ; <nl> + <nl> + / * skip over leading spaces or zeros * / <nl> + while ( isspace ( ( int ) ( unsigned char ) ca ) ) <nl> + ca = * + + ap ; <nl> + <nl> + while ( isspace ( ( int ) ( unsigned char ) cb ) ) <nl> + cb = * + + bp ; <nl> + <nl> + / * process run of digits * / <nl> + if ( isdigit ( ( int ) ( unsigned char ) ca ) & & isdigit ( ( int ) ( unsigned char ) cb ) ) { <nl> + fractional = ( ca = = ' 0 ' | | cb = = ' 0 ' ) ; <nl> + <nl> + if ( fractional ) <nl> + result = 
compare_left ( & ap , aend , & bp , bend ) ; <nl> + else <nl> + result = compare_right ( & ap , aend , & bp , bend ) ; <nl> + <nl> + if ( result ! = 0 ) <nl> + return result ; <nl> + else if ( ap = = aend & & bp = = bend ) <nl> + / * End of the strings . Let caller sort them out . * / <nl> + return 0 ; <nl> + else { <nl> + / * Keep on comparing from the current point . * / <nl> + ca = * ap ; cb = * bp ; <nl> + } <nl> + } <nl> + <nl> + if ( fold_case ) { <nl> + ca = toupper ( ( int ) ( unsigned char ) ca ) ; <nl> + cb = toupper ( ( int ) ( unsigned char ) cb ) ; <nl> + } <nl> + <nl> + if ( ca < cb ) <nl> + return - 1 ; <nl> + else if ( ca > cb ) <nl> + return + 1 ; <nl> + <nl> + + + ap ; + + bp ; <nl> + if ( ap > = aend & & bp > = bend ) <nl> + / * The strings compare the same . Perhaps the caller <nl> + will want to call strcmp to break the tie . * / <nl> + return 0 ; <nl> + else if ( ap > = aend ) <nl> + return - 1 ; <nl> + else if ( bp > = bend ) <nl> + return 1 ; <nl> + } <nl> + } <nl> + <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> void string_translate ( char * str , int len , const char * str_from , <nl> mmm a / hphp / zend / zend - string . h <nl> ppp b / hphp / zend / zend - string . h <nl> <nl> | license @ zend . com so we can mail you a copy immediately . | <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> * / <nl> - # ifndef incl_HPHP_UTIL_ZEND_STRING_H_ <nl> - # define incl_HPHP_UTIL_ZEND_STRING_H_ <nl> + # ifndef incl_HPHP_ZEND_ZEND_STRING_H_ <nl> + # define incl_HPHP_ZEND_ZEND_STRING_H_ <nl> <nl> # include < cstdint > <nl> # include < cstdlib > <nl> <nl> namespace HPHP { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> + / * * <nl> + * Low - level string functions PHP uses . <nl> + * <nl> + * 1 . 
If a function returns a char * , it has malloc - ed a new string and it ' s <nl> + * caller ' s responsibility to free it . <nl> + * <nl> + * 2 . If a function takes " int & len " right after the 1st string parameter , it <nl> + * is input string ' s length , and in return , it ' s return string ' s length . <nl> + * <nl> + * 3 . All functions work with binary strings and all returned strings are <nl> + * NULL terminated , regardless of whether it ' s a binary string . <nl> + * / <nl> + <nl> + / * <nl> + * Copy src to string dst of size siz . At most siz - 1 characters <nl> + * will be copied . Always NUL terminates ( unless siz = = 0 ) . <nl> + * Returns strlen ( src ) ; if retval > = siz , truncation occurred . <nl> + * / <nl> + int string_copy ( char * dst , const char * src , int siz ) ; <nl> + <nl> + / * * <nl> + * Compare two binary strings . <nl> + * / <nl> + inline int string_strcmp ( const char * s1 , int len1 , const char * s2 , int len2 ) { <nl> + int minlen = len1 < len2 ? len1 : len2 ; <nl> + int retval ; <nl> + <nl> + retval = memcmp ( s1 , s2 , minlen ) ; <nl> + if ( ! retval ) { <nl> + return ( len1 - len2 ) ; <nl> + } <nl> + <nl> + return ( retval > 0 ) - ( retval < 0 ) ; <nl> + } <nl> + / * * <nl> + * Compare two binary strings of the first n bytes . <nl> + * / <nl> + inline int string_strncmp ( const char * s1 , int len1 , const char * s2 , int len2 , <nl> + int len ) { <nl> + int minlen = len1 < len2 ? len1 : len2 ; <nl> + int retval ; <nl> + <nl> + if ( len < minlen ) { <nl> + if ( UNLIKELY ( len < 0 ) ) len = 0 ; <nl> + minlen = len ; <nl> + } <nl> + retval = memcmp ( s1 , s2 , minlen ) ; <nl> + if ( ! retval ) { <nl> + return ( len < len1 ? len : len1 ) - ( len < len2 ? len : len2 ) ; <nl> + } else { <nl> + return retval ; <nl> + } <nl> + } <nl> + / * * <nl> + * Compare two binary strings of the first n bytes , ignore case . 
<nl> + * / <nl> + inline int string_strncasecmp ( const char * s1 , int len1 , <nl> + const char * s2 , int len2 , int len ) { <nl> + int minlen = len1 < len2 ? len1 : len2 ; <nl> + int c1 , c2 ; <nl> + <nl> + if ( len < minlen ) { <nl> + if ( UNLIKELY ( len < 0 ) ) len = 0 ; <nl> + minlen = len ; <nl> + } <nl> + while ( minlen - - ) { <nl> + c1 = tolower ( ( int ) * ( unsigned char * ) s1 + + ) ; <nl> + c2 = tolower ( ( int ) * ( unsigned char * ) s2 + + ) ; <nl> + if ( c1 ! = c2 ) { <nl> + return c1 - c2 ; <nl> + } <nl> + } <nl> + return ( len < len1 ? len : len1 ) - ( len < len2 ? len : len2 ) ; <nl> + } <nl> + <nl> + / * * <nl> + * Compare strings . <nl> + * / <nl> + int string_ncmp ( const char * s1 , const char * s2 , int len ) ; <nl> + int string_natural_cmp ( char const * a , size_t a_len , <nl> + char const * b , size_t b_len , int fold_case ) ; <nl> + <nl> + <nl> / * * <nl> * Duplicate a binary string . Note that NULL termination is needed even for <nl> * a binary string , because String class only wraps such a " safe " one that can <nl> similarity index 98 % <nl> rename from hphp / runtime / base / zend - strtod . cpp <nl> rename to hphp / zend / zend - strtod . cpp <nl> mmm a / hphp / runtime / base / zend - strtod . cpp <nl> ppp b / hphp / zend / zend - strtod . cpp <nl> <nl> * directly - - and assumed always to succeed . <nl> * / <nl> <nl> - # include " hphp / runtime / base / zend - strtod . h " <nl> - # include " hphp / runtime / base / exceptions . h " <nl> - # include " hphp / runtime / base / rds - local . h " <nl> + # include " hphp / zend / zend - strtod . h " <nl> + <nl> + # include " hphp / util / assertions . h " <nl> + <nl> + # include < new > <nl> + # include < stdint . h > <nl> + # include < stdlib . h > <nl> + # include < string . 
h > <nl> <nl> namespace HPHP { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> extern double rnd_prod ( double , double ) , rnd_quot ( double , double ) ; <nl> <nl> # define Kmax 15 <nl> <nl> + namespace { <nl> + <nl> struct Bigint { <nl> struct Bigint * next ; <nl> int k , maxwds , sign , wds ; <nl> struct Bigint { <nl> <nl> typedef struct Bigint Bigint ; <nl> <nl> + } / / namespace <nl> + <nl> void destroy_freelist ( Bigint * * freelist ) ; <nl> <nl> + namespace { <nl> + <nl> struct BigintData { <nl> BigintData ( ) : p5s ( nullptr ) { <nl> freelist = ( Bigint * * ) calloc ( Kmax + 1 , sizeof ( Bigint * ) ) ; <nl> struct BigintData { <nl> Bigint * * freelist ; <nl> Bigint * p5s ; <nl> } ; <nl> - static RDS_LOCAL_NO_CHECK ( BigintData , s_bigint_data ) ; <nl> <nl> + } / / namespace <nl> + <nl> + static __thread BigintData * s_bigint_data ; <nl> + <nl> + namespace { <nl> + <nl> + struct BigintDataGuard { <nl> + ~ BigintDataGuard ( ) { <nl> + delete s_bigint_data ; <nl> + s_bigint_data = nullptr ; <nl> + } <nl> + } ; <nl> + <nl> + } / / namespace <nl> + <nl> + static thread_local BigintDataGuard s_bigint_data_guard ; <nl> + <nl> + / / NOTE : If this has not been called , various functions in this file , <nl> + / / and in other files ( e . g . " zend - printf . cpp " ) , will crash when called . <nl> void zend_get_bigint_data ( ) { <nl> - s_bigint_data . 
getCheck ( ) ; <nl> + if ( s_bigint_data = = nullptr ) { <nl> + s_bigint_data = new BigintData ; <nl> + } <nl> } <nl> <nl> static Bigint * Balloc ( int k ) <nl> static Bigint * Balloc ( int k ) <nl> int x ; <nl> Bigint * rv ; <nl> <nl> - if ( k > Kmax ) { <nl> - raise_fatal_error ( " Balloc ( ) allocation exceeds list boundary " ) ; <nl> - } <nl> + assertx ( k < = Kmax ) ; <nl> <nl> Bigint * * & freelist = s_bigint_data - > freelist ; <nl> if ( ( rv = freelist [ k ] ) ) { <nl> static Bigint * Balloc ( int k ) <nl> } else { <nl> x = 1 < < k ; <nl> rv = ( Bigint * ) MALLOC ( sizeof ( Bigint ) + ( x - 1 ) * sizeof ( Long ) ) ; <nl> - if ( ! rv ) { <nl> - raise_fatal_error ( " Balloc ( ) failed to allocate memory " ) ; <nl> - } <nl> + assertx ( rv ! = nullptr ) ; <nl> rv - > k = k ; <nl> rv - > maxwds = x ; <nl> } <nl> double zend_strtod ( CONST char * s00 , const char * * se ) <nl> * se = ( char * ) s ; <nl> result = sign ? - value ( rv ) : value ( rv ) ; <nl> <nl> - if ( s_bigint_data . isNull ( ) ) { <nl> + if ( s_bigint_data = = nullptr ) { <nl> return result ; <nl> } <nl> + <nl> Bigint * & p5s = s_bigint_data - > p5s ; <nl> while ( p5s ) { <nl> tmp = p5s ; <nl> similarity index 86 % <nl> rename from hphp / runtime / base / zend - strtod . h <nl> rename to hphp / zend / zend - strtod . h <nl> mmm a / hphp / runtime / base / zend - strtod . h <nl> ppp b / hphp / zend / zend - strtod . 
h <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - + <nl> * / <nl> <nl> - # ifndef incl_HPHP_ZEND_STRTOD_H_ <nl> - # define incl_HPHP_ZEND_STRTOD_H_ <nl> + # ifndef incl_HPHP_ZEND_ZEND_STRTOD_H_ <nl> + # define incl_HPHP_ZEND_ZEND_STRTOD_H_ <nl> <nl> namespace HPHP { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> void zend_get_bigint_data ( ) ; <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> } <nl> <nl> - / / These are actually implemented in EZC / php - src / Zend / zend_strtod . cpp <nl> - extern " C " int zend_startup_strtod ( void ) ; <nl> - extern " C " int zend_shutdown_strtod ( void ) ; <nl> - <nl> - # endif / / incl_HPHP_ZEND_STRTOD_H_ <nl> + # endif / / incl_HPHP_ZEND_ZEND_STRTOD_H_ <nl>
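The comparison helpers in this diff (`string_strcmp`, `string_strncmp`) all follow the same pattern: `memcmp` over the common prefix, then break ties by length, so embedded NUL bytes compare like any other byte. A minimal sketch of that pattern (the name `bin_strcmp` is ours, not HHVM's):

```cpp
#include <cassert>
#include <cstring>

// Binary-safe compare: memcmp over the shorter length, then fall back to
// the length difference, mirroring string_strcmp above.
int bin_strcmp(const char* s1, int len1, const char* s2, int len2) {
  int minlen = len1 < len2 ? len1 : len2;
  int r = std::memcmp(s1, s2, minlen);
  if (r == 0) return len1 - len2;  // equal prefix: shorter string sorts first
  return (r > 0) - (r < 0);        // normalize memcmp result to -1 / 0 / 1
}
```

Because the lengths are explicit, strings containing `'\0'` are handled correctly, e.g. `bin_strcmp("a\0b", 3, "a\0c", 3)` is negative.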
|
Move parts of " zend - string / strtod / printf . * " into " hphp / zend / " .
|
facebook/hhvm
|
ee70a257d199b262f7a88beefb9450e6dd961660
|
2019-05-14T21:02:27Z
|
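The strtod diff above swaps an RDS-local for a plain thread-local pointer plus a `thread_local` guard object whose destructor reclaims the lazily created state at thread exit (the `s_bigint_data` / `BigintDataGuard` pair). A hedged sketch of that idiom with illustrative names — note the commit itself uses `__thread` for the raw pointer, which forbids destructors, and `thread_local` only for the guard:

```cpp
#include <cassert>

struct State { int uses = 0; };

// Lazily created per-thread state, like s_bigint_data in the diff.
thread_local State* s_state = nullptr;

// The guard's destructor runs at thread exit and frees the state,
// mirroring BigintDataGuard.
struct StateGuard {
  ~StateGuard() { delete s_state; s_state = nullptr; }
};
thread_local StateGuard s_guard;

// Lazy accessor: allocate on first use, as zend_get_bigint_data does.
State& get_state() {
  if (s_state == nullptr) s_state = new State;
  return *s_state;
}
```

Callers that run before the accessor (as the diff's NOTE warns for `zend_get_bigint_data`) see a null pointer, which is why the shutdown path in `zend_strtod` checks `s_bigint_data == nullptr` before touching `p5s`.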
mmm a / include / v8 - profiler . h <nl> ppp b / include / v8 - profiler . h <nl> class V8_EXPORT HeapSnapshot { <nl> kJSON = 0 / / See format description near ' Serialize ' method . <nl> } ; <nl> <nl> - / * * Returns heap snapshot UID ( assigned by the profiler . ) * / <nl> - unsigned GetUid ( ) const ; <nl> - <nl> - / * * Returns heap snapshot title . * / <nl> - Handle < String > GetTitle ( ) const ; <nl> - <nl> / * * Returns the root node of the heap graph . * / <nl> const HeapGraphNode * GetRoot ( ) const ; <nl> <nl> class V8_EXPORT HeapSnapshot { <nl> * Nodes reference strings , other nodes , and edges by their indexes <nl> * in corresponding arrays . <nl> * / <nl> - void Serialize ( OutputStream * stream , SerializationFormat format ) const ; <nl> + void Serialize ( OutputStream * stream , <nl> + SerializationFormat format = kJSON ) const ; <nl> } ; <nl> <nl> <nl> class V8_EXPORT HeapProfiler { <nl> } ; <nl> <nl> / * * <nl> - * Takes a heap snapshot and returns it . Title may be an empty string . <nl> + * Takes a heap snapshot and returns it . Title parameter is deprecated and <nl> + * should be an empty string . <nl> + * TODO : deprecate this method . <nl> * / <nl> const HeapSnapshot * TakeHeapSnapshot ( <nl> Handle < String > title , <nl> ActivityControl * control = NULL , <nl> ObjectNameResolver * global_object_name_resolver = NULL ) ; <nl> <nl> + const HeapSnapshot * TakeHeapSnapshot ( <nl> + ActivityControl * control = NULL , <nl> + ObjectNameResolver * global_object_name_resolver = NULL ) ; <nl> + <nl> / * * <nl> * Starts tracking of heap objects population statistics . After calling <nl> * this method , all heap objects relocations done by the garbage collector <nl> mmm a / src / api . cc <nl> ppp b / src / api . 
cc <nl> void HeapSnapshot : : Delete ( ) { <nl> } <nl> <nl> <nl> - unsigned HeapSnapshot : : GetUid ( ) const { <nl> - return ToInternal ( this ) - > uid ( ) ; <nl> - } <nl> - <nl> - <nl> - Handle < String > HeapSnapshot : : GetTitle ( ) const { <nl> - i : : Isolate * isolate = i : : Isolate : : Current ( ) ; <nl> - return ToApiHandle < String > ( <nl> - isolate - > factory ( ) - > InternalizeUtf8String ( ToInternal ( this ) - > title ( ) ) ) ; <nl> - } <nl> - <nl> - <nl> const HeapGraphNode * HeapSnapshot : : GetRoot ( ) const { <nl> return reinterpret_cast < const HeapGraphNode * > ( ToInternal ( this ) - > root ( ) ) ; <nl> } <nl> const HeapSnapshot * HeapProfiler : : TakeHeapSnapshot ( <nl> Handle < String > title , <nl> ActivityControl * control , <nl> ObjectNameResolver * resolver ) { <nl> + return TakeHeapSnapshot ( control , resolver ) ; <nl> + } <nl> + <nl> + <nl> + const HeapSnapshot * HeapProfiler : : TakeHeapSnapshot ( <nl> + ActivityControl * control , ObjectNameResolver * resolver ) { <nl> return reinterpret_cast < const HeapSnapshot * > ( <nl> - reinterpret_cast < i : : HeapProfiler * > ( this ) - > TakeSnapshot ( <nl> - * Utils : : OpenHandle ( * title ) , control , resolver ) ) ; <nl> + reinterpret_cast < i : : HeapProfiler * > ( this ) <nl> + - > TakeSnapshot ( control , resolver ) ) ; <nl> } <nl> <nl> <nl> mmm a / src / heap - profiler . cc <nl> ppp b / src / heap - profiler . 
cc <nl> namespace internal { <nl> HeapProfiler : : HeapProfiler ( Heap * heap ) <nl> : ids_ ( new HeapObjectsMap ( heap ) ) , <nl> names_ ( new StringsStorage ( heap ) ) , <nl> - next_snapshot_uid_ ( 1 ) , <nl> is_tracking_object_moves_ ( false ) { <nl> } <nl> <nl> v8 : : RetainedObjectInfo * HeapProfiler : : ExecuteWrapperClassCallback ( <nl> <nl> <nl> HeapSnapshot * HeapProfiler : : TakeSnapshot ( <nl> - const char * name , <nl> v8 : : ActivityControl * control , <nl> v8 : : HeapProfiler : : ObjectNameResolver * resolver ) { <nl> - HeapSnapshot * result = new HeapSnapshot ( this , name , next_snapshot_uid_ + + ) ; <nl> + HeapSnapshot * result = new HeapSnapshot ( this ) ; <nl> { <nl> HeapSnapshotGenerator generator ( result , control , resolver , heap ( ) ) ; <nl> if ( ! generator . GenerateSnapshot ( ) ) { <nl> HeapSnapshot * HeapProfiler : : TakeSnapshot ( <nl> } <nl> <nl> <nl> - HeapSnapshot * HeapProfiler : : TakeSnapshot ( <nl> - String * name , <nl> - v8 : : ActivityControl * control , <nl> - v8 : : HeapProfiler : : ObjectNameResolver * resolver ) { <nl> - return TakeSnapshot ( names_ - > GetName ( name ) , control , resolver ) ; <nl> - } <nl> - <nl> - <nl> void HeapProfiler : : StartHeapObjectsTracking ( bool track_allocations ) { <nl> ids_ - > UpdateHeapObjectsMap ( ) ; <nl> is_tracking_object_moves_ = true ; <nl> mmm a / src / heap - profiler . h <nl> ppp b / src / heap - profiler . 
h <nl> class HeapProfiler { <nl> size_t GetMemorySizeUsedByProfiler ( ) ; <nl> <nl> HeapSnapshot * TakeSnapshot ( <nl> - const char * name , <nl> - v8 : : ActivityControl * control , <nl> - v8 : : HeapProfiler : : ObjectNameResolver * resolver ) ; <nl> - HeapSnapshot * TakeSnapshot ( <nl> - String * name , <nl> v8 : : ActivityControl * control , <nl> v8 : : HeapProfiler : : ObjectNameResolver * resolver ) ; <nl> <nl> class HeapProfiler { <nl> SmartPointer < HeapObjectsMap > ids_ ; <nl> List < HeapSnapshot * > snapshots_ ; <nl> SmartPointer < StringsStorage > names_ ; <nl> - unsigned next_snapshot_uid_ ; <nl> List < v8 : : HeapProfiler : : WrapperInfoCallback > wrapper_callbacks_ ; <nl> SmartPointer < AllocationTracker > allocation_tracker_ ; <nl> bool is_tracking_object_moves_ ; <nl> mmm a / src / heap - snapshot - generator . cc <nl> ppp b / src / heap - snapshot - generator . cc <nl> template < > struct SnapshotSizeConstants < 8 > { <nl> } / / namespace <nl> <nl> <nl> - HeapSnapshot : : HeapSnapshot ( HeapProfiler * profiler , <nl> - const char * title , <nl> - unsigned uid ) <nl> + HeapSnapshot : : HeapSnapshot ( HeapProfiler * profiler ) <nl> : profiler_ ( profiler ) , <nl> - title_ ( title ) , <nl> - uid_ ( uid ) , <nl> root_index_ ( HeapEntry : : kNoEntry ) , <nl> gc_roots_index_ ( HeapEntry : : kNoEntry ) , <nl> max_snapshot_js_object_id_ ( 0 ) { <nl> void HeapSnapshotJSONSerializer : : SerializeNodes ( ) { <nl> <nl> <nl> void HeapSnapshotJSONSerializer : : SerializeSnapshot ( ) { <nl> - writer_ - > AddString ( " \ " title \ " : \ " " ) ; <nl> - writer_ - > AddString ( snapshot_ - > title ( ) ) ; <nl> - writer_ - > AddString ( " \ " " ) ; <nl> - writer_ - > AddString ( " , \ " uid \ " : " ) ; <nl> - writer_ - > AddNumber ( snapshot_ - > uid ( ) ) ; <nl> - writer_ - > AddString ( " , \ " meta \ " : " ) ; <nl> + writer_ - > AddString ( " \ " meta \ " : " ) ; <nl> / / The object describing node serialization layout . 
<nl> / / We use a set of macros to improve readability . <nl> # define JSON_A ( s ) " [ " s " ] " <nl> mmm a / src / heap - snapshot - generator . h <nl> ppp b / src / heap - snapshot - generator . h <nl> class HeapEntry BASE_EMBEDDED { <nl> / / HeapSnapshotGenerator fills in a HeapSnapshot . <nl> class HeapSnapshot { <nl> public : <nl> - HeapSnapshot ( HeapProfiler * profiler , <nl> - const char * title , <nl> - unsigned uid ) ; <nl> + explicit HeapSnapshot ( HeapProfiler * profiler ) ; <nl> void Delete ( ) ; <nl> <nl> HeapProfiler * profiler ( ) { return profiler_ ; } <nl> - const char * title ( ) { return title_ ; } <nl> - unsigned uid ( ) { return uid_ ; } <nl> size_t RawSnapshotSize ( ) const ; <nl> HeapEntry * root ( ) { return & entries_ [ root_index_ ] ; } <nl> HeapEntry * gc_roots ( ) { return & entries_ [ gc_roots_index_ ] ; } <nl> class HeapSnapshot { <nl> HeapEntry * AddGcSubrootEntry ( int tag , SnapshotObjectId id ) ; <nl> <nl> HeapProfiler * profiler_ ; <nl> - const char * title_ ; <nl> - unsigned uid_ ; <nl> int root_index_ ; <nl> int gc_roots_index_ ; <nl> int gc_subroot_indexes_ [ VisitorSynchronization : : kNumberOfSyncTags ] ; <nl> mmm a / test / cctest / test - heap - profiler . cc <nl> ppp b / test / cctest / test - heap - profiler . cc <nl> TEST ( HeapSnapshot ) { <nl> " var a2 = new A2 ( ) ; \ n " <nl> " var b2_1 = new B2 ( a2 ) , b2_2 = new B2 ( a2 ) ; \ n " <nl> " var c2 = new C2 ( a2 ) ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot_env2 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " env2 " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot_env2 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot_env2 ) ) ; <nl> const v8 : : HeapGraphNode * global_env2 = GetGlobalObject ( snapshot_env2 ) ; <nl> <nl> TEST ( HeapSnapshotObjectSizes ) { <nl> " x = new X ( new X ( ) , new X ( ) ) ; \ n " <nl> " dummy = new X ( ) ; \ n " <nl> " ( function ( ) { x . a . a = x . 
b ; } ) ( ) ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " sizes " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * x = <nl> TEST ( BoundFunctionInSnapshot ) { <nl> " function myFunction ( a , b ) { this . a = a ; this . b = b ; } \ n " <nl> " function AAAAA ( ) { } \ n " <nl> " boundFunction = myFunction . bind ( new AAAAA ( ) , 20 , new Number ( 12 ) ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " sizes " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * f = <nl> TEST ( HeapSnapshotEntryChildren ) { <nl> CompileRun ( <nl> " function A ( ) { } \ n " <nl> " a = new A ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " children " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> for ( int i = 0 , count = global - > GetChildrenCount ( ) ; i < count ; + + i ) { <nl> TEST ( HeapSnapshotCodeObjects ) { <nl> " function compiled ( x ) { return x + 1 ; } \ n " <nl> " var anonymous = ( function ( ) { return function ( ) { return 0 ; } } ) ( ) ; \ n " <nl> " compiled ( 1 ) " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " code " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> const 
v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> TEST ( HeapSnapshotHeapNumbers ) { <nl> CompileRun ( <nl> " a = 1 ; / / a is Smi \ n " <nl> " b = 2 . 5 ; / / b is HeapNumber " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " numbers " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> CHECK ( ! GetProperty ( global , v8 : : HeapGraphEdge : : kProperty , " a " ) ) ; <nl> TEST ( HeapSnapshotSlicedString ) { <nl> " 123456789 . 123456789 . 123456789 . 123456789 . 123456789 . " <nl> " 123456789 . 123456789 . 123456789 . 123456789 . 123456789 . \ " ; " <nl> " child_string = parent_string . slice ( 100 ) ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " strings " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * parent_string = <nl> TEST ( HeapSnapshotConsString ) { <nl> global - > SetInternalField ( 0 , v8 : : ToApiHandle < v8 : : String > ( cons_string ) ) ; <nl> <nl> v8 : : HeapProfiler * heap_profiler = isolate - > GetHeapProfiler ( ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " cons_strings " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global_node = GetGlobalObject ( snapshot ) ; <nl> <nl> TEST ( HeapSnapshotSymbol ) { <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> <nl> CompileRun ( " a = Symbol ( ' mySymbol ' ) ; \ n " ) ; <nl> - const v8 : 
: HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " Symbol " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * a = <nl> TEST ( HeapSnapshotWeakCollection ) { <nl> " k = { } ; v = { } ; s = ' str ' ; \ n " <nl> " ws = new WeakSet ( ) ; ws . add ( k ) ; ws . add ( v ) ; ws [ s ] = s ; \ n " <nl> " wm = new WeakMap ( ) ; wm . set ( k , v ) ; wm [ s ] = s ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " WeakCollections " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * k = <nl> TEST ( HeapSnapshotCollection ) { <nl> " k = { } ; v = { } ; s = ' str ' ; \ n " <nl> " set = new Set ( ) ; set . add ( k ) ; set . add ( v ) ; set [ s ] = s ; \ n " <nl> " map = new Map ( ) ; map . 
set ( k , v ) ; map [ s ] = s ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " Collections " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * k = <nl> TEST ( HeapSnapshotInternalReferences ) { <nl> global - > SetInternalField ( 0 , v8_num ( 17 ) ) ; <nl> global - > SetInternalField ( 1 , obj ) ; <nl> v8 : : HeapProfiler * heap_profiler = isolate - > GetHeapProfiler ( ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " internals " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global_node = GetGlobalObject ( snapshot ) ; <nl> / / The first reference will not present , because it ' s a Smi . 
<nl> TEST ( HeapSnapshotAddressReuse ) { <nl> " var a = [ ] ; \ n " <nl> " for ( var i = 0 ; i < 10000 ; + + i ) \ n " <nl> " a [ i ] = new A ( ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot1 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot1 " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot1 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot1 ) ) ; <nl> v8 : : SnapshotObjectId maxId1 = snapshot1 - > GetMaxSnapshotJSObjectId ( ) ; <nl> <nl> TEST ( HeapSnapshotAddressReuse ) { <nl> " a [ i ] = new A ( ) ; \ n " ) ; <nl> CcTest : : heap ( ) - > CollectAllGarbage ( i : : Heap : : kNoGCFlags ) ; <nl> <nl> - const v8 : : HeapSnapshot * snapshot2 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot2 " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot2 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot2 ) ) ; <nl> const v8 : : HeapGraphNode * global2 = GetGlobalObject ( snapshot2 ) ; <nl> <nl> TEST ( HeapEntryIdsAndArrayShift ) { <nl> " var a = new Array ( ) ; \ n " <nl> " for ( var i = 0 ; i < 10 ; + + i ) \ n " <nl> " a . push ( new AnObject ( ) ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot1 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " s1 " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot1 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot1 ) ) ; <nl> <nl> CompileRun ( <nl> TEST ( HeapEntryIdsAndArrayShift ) { <nl> <nl> CcTest : : heap ( ) - > CollectAllGarbage ( i : : Heap : : kNoGCFlags ) ; <nl> <nl> - const v8 : : HeapSnapshot * snapshot2 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " s2 " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot2 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot2 ) ) ; <nl> <nl> const v8 : : HeapGraphNode * global1 = GetGlobalObject ( snapshot1 ) ; <nl> TEST ( HeapEntryIdsAndGC ) { <nl> " function B ( x ) { this . 
x = x ; } \ n " <nl> " var a = new A ( ) ; \ n " <nl> " var b = new B ( a ) ; " ) ; <nl> - v8 : : Local < v8 : : String > s1_str = v8_str ( " s1 " ) ; <nl> - v8 : : Local < v8 : : String > s2_str = v8_str ( " s2 " ) ; <nl> - const v8 : : HeapSnapshot * snapshot1 = <nl> - heap_profiler - > TakeHeapSnapshot ( s1_str ) ; <nl> + const v8 : : HeapSnapshot * snapshot1 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot1 ) ) ; <nl> <nl> CcTest : : heap ( ) - > CollectAllGarbage ( i : : Heap : : kNoGCFlags ) ; <nl> <nl> - const v8 : : HeapSnapshot * snapshot2 = <nl> - heap_profiler - > TakeHeapSnapshot ( s2_str ) ; <nl> + const v8 : : HeapSnapshot * snapshot2 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot2 ) ) ; <nl> <nl> CHECK_GT ( snapshot1 - > GetMaxSnapshotJSObjectId ( ) , 7000u ) ; <nl> TEST ( HeapSnapshotRootPreservedAfterSorting ) { <nl> LocalContext env ; <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " s " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * root1 = snapshot - > GetRoot ( ) ; <nl> const_cast < i : : HeapSnapshot * > ( reinterpret_cast < const i : : HeapSnapshot * > ( <nl> TEST ( HeapSnapshotJSONSerialization ) { <nl> " function B ( x ) { this . 
x = x ; } \ n " <nl> " var a = new A ( " STRING_LITERAL_FOR_TEST " ) ; \ n " <nl> " var b = new B ( a ) ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " json " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> TestJSONStream stream ; <nl> TEST ( HeapSnapshotJSONSerializationAborting ) { <nl> LocalContext env ; <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " abort " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> TestJSONStream stream ( 5 ) ; <nl> snapshot - > Serialize ( & stream , v8 : : HeapSnapshot : : kJSON ) ; <nl> TEST ( HeapSnapshotGetNodeById ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " id " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * root = snapshot - > GetRoot ( ) ; <nl> CheckChildrenIds ( snapshot , root , 0 , 3 ) ; <nl> TEST ( HeapSnapshotGetSnapshotObjectId ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " globalObject = { } ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " get_snapshot_object_id " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > 
TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * global_object = <nl> TEST ( HeapSnapshotUnknownSnapshotObjectId ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " globalObject = { } ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " unknown_object_id " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * node = <nl> snapshot - > GetNodeById ( v8 : : HeapProfiler : : kUnknownObjectId ) ; <nl> TEST ( TakeHeapSnapshotAborting ) { <nl> const int snapshots_count = heap_profiler - > GetSnapshotCount ( ) ; <nl> TestActivityControl aborting_control ( 1 ) ; <nl> const v8 : : HeapSnapshot * no_snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " abort " ) , <nl> - & aborting_control ) ; <nl> + heap_profiler - > TakeHeapSnapshot ( & aborting_control ) ; <nl> CHECK ( ! no_snapshot ) ; <nl> CHECK_EQ ( snapshots_count , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> CHECK_GT ( aborting_control . total ( ) , aborting_control . done ( ) ) ; <nl> <nl> TestActivityControl control ( - 1 ) ; / / Don ' t abort . <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " full " ) , <nl> - & control ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( & control ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> CHECK ( snapshot ) ; <nl> TEST ( HeapSnapshotRetainedObjectInfo ) { <nl> v8 : : Persistent < v8 : : String > p_CCC ( isolate , v8_str ( " CCC " ) ) ; <nl> p_CCC . SetWrapperClassId ( 2 ) ; <nl> CHECK_EQ ( 0 , TestRetainedObjectInfo : : instances . 
length ( ) ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " retained " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> CHECK_EQ ( 3 , TestRetainedObjectInfo : : instances . length ( ) ) ; <nl> TEST ( HeapSnapshotImplicitReferences ) { <nl> GraphWithImplicitRefs graph ( & env ) ; <nl> v8 : : V8 : : AddGCPrologueCallback ( & GraphWithImplicitRefs : : gcPrologue ) ; <nl> <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " implicit_refs " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> const v8 : : HeapGraphNode * global_object = GetGlobalObject ( snapshot ) ; <nl> TEST ( DeleteAllHeapSnapshots ) { <nl> CHECK_EQ ( 0 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> heap_profiler - > DeleteAllHeapSnapshots ( ) ; <nl> CHECK_EQ ( 0 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - CHECK ( heap_profiler - > TakeHeapSnapshot ( v8_str ( " 1 " ) ) ) ; <nl> + CHECK ( heap_profiler - > TakeHeapSnapshot ( ) ) ; <nl> CHECK_EQ ( 1 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> heap_profiler - > DeleteAllHeapSnapshots ( ) ; <nl> CHECK_EQ ( 0 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - CHECK ( heap_profiler - > TakeHeapSnapshot ( v8_str ( " 1 " ) ) ) ; <nl> - CHECK ( heap_profiler - > TakeHeapSnapshot ( v8_str ( " 2 " ) ) ) ; <nl> + CHECK ( heap_profiler - > TakeHeapSnapshot ( ) ) ; <nl> + CHECK ( heap_profiler - > TakeHeapSnapshot ( ) ) ; <nl> CHECK_EQ ( 2 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> heap_profiler - > DeleteAllHeapSnapshots ( ) ; <nl> CHECK_EQ ( 0 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> } <nl> <nl> <nl> - static const v8 : : HeapSnapshot * FindHeapSnapshot ( v8 : : HeapProfiler * profiler , <nl> - unsigned uid ) { <nl> + static bool 
FindHeapSnapshot ( v8 : : HeapProfiler * profiler , <nl> + const v8 : : HeapSnapshot * snapshot ) { <nl> int length = profiler - > GetSnapshotCount ( ) ; <nl> for ( int i = 0 ; i < length ; i + + ) { <nl> - const v8 : : HeapSnapshot * snapshot = profiler - > GetHeapSnapshot ( i ) ; <nl> - if ( snapshot - > GetUid ( ) = = uid ) { <nl> - return snapshot ; <nl> - } <nl> + if ( snapshot = = profiler - > GetHeapSnapshot ( i ) ) return true ; <nl> } <nl> - return NULL ; <nl> + return false ; <nl> } <nl> <nl> <nl> TEST ( DeleteHeapSnapshot ) { <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> <nl> CHECK_EQ ( 0 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - const v8 : : HeapSnapshot * s1 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " 1 " ) ) ; <nl> + const v8 : : HeapSnapshot * s1 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> <nl> CHECK ( s1 ) ; <nl> CHECK_EQ ( 1 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - unsigned uid1 = s1 - > GetUid ( ) ; <nl> - CHECK_EQ ( s1 , FindHeapSnapshot ( heap_profiler , uid1 ) ) ; <nl> + CHECK ( FindHeapSnapshot ( heap_profiler , s1 ) ) ; <nl> const_cast < v8 : : HeapSnapshot * > ( s1 ) - > Delete ( ) ; <nl> CHECK_EQ ( 0 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - CHECK ( ! FindHeapSnapshot ( heap_profiler , uid1 ) ) ; <nl> + CHECK ( ! 
FindHeapSnapshot ( heap_profiler , s1 ) ) ; <nl> <nl> - const v8 : : HeapSnapshot * s2 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " 2 " ) ) ; <nl> + const v8 : : HeapSnapshot * s2 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( s2 ) ; <nl> CHECK_EQ ( 1 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - unsigned uid2 = s2 - > GetUid ( ) ; <nl> - CHECK_NE ( static_cast < int > ( uid1 ) , static_cast < int > ( uid2 ) ) ; <nl> - CHECK_EQ ( s2 , FindHeapSnapshot ( heap_profiler , uid2 ) ) ; <nl> - const v8 : : HeapSnapshot * s3 = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " 3 " ) ) ; <nl> + CHECK ( FindHeapSnapshot ( heap_profiler , s2 ) ) ; <nl> + const v8 : : HeapSnapshot * s3 = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( s3 ) ; <nl> CHECK_EQ ( 2 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - unsigned uid3 = s3 - > GetUid ( ) ; <nl> - CHECK_NE ( static_cast < int > ( uid1 ) , static_cast < int > ( uid3 ) ) ; <nl> - CHECK_EQ ( s3 , FindHeapSnapshot ( heap_profiler , uid3 ) ) ; <nl> + CHECK_NE ( s2 , s3 ) ; <nl> + CHECK ( FindHeapSnapshot ( heap_profiler , s3 ) ) ; <nl> const_cast < v8 : : HeapSnapshot * > ( s2 ) - > Delete ( ) ; <nl> CHECK_EQ ( 1 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - CHECK ( ! FindHeapSnapshot ( heap_profiler , uid2 ) ) ; <nl> - CHECK_EQ ( s3 , FindHeapSnapshot ( heap_profiler , uid3 ) ) ; <nl> + CHECK ( ! FindHeapSnapshot ( heap_profiler , s2 ) ) ; <nl> + CHECK ( FindHeapSnapshot ( heap_profiler , s3 ) ) ; <nl> const_cast < v8 : : HeapSnapshot * > ( s3 ) - > Delete ( ) ; <nl> CHECK_EQ ( 0 , heap_profiler - > GetSnapshotCount ( ) ) ; <nl> - CHECK ( ! FindHeapSnapshot ( heap_profiler , uid3 ) ) ; <nl> + CHECK ( ! 
FindHeapSnapshot ( heap_profiler , s3 ) ) ; <nl> } <nl> <nl> <nl> TEST ( GlobalObjectName ) { <nl> <nl> NameResolver name_resolver ; <nl> const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " document " ) , <nl> - NULL , <nl> - & name_resolver ) ; <nl> + heap_profiler - > TakeHeapSnapshot ( NULL , & name_resolver ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> CHECK ( global ) ; <nl> TEST ( GlobalObjectFields ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " obj = { } ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * builtins = <nl> TEST ( NoHandleLeaks ) { <nl> <nl> CompileRun ( " document = { URL : \ " abcdefgh \ " } ; " ) ; <nl> <nl> - v8 : : Handle < v8 : : String > name ( v8_str ( " leakz " ) ) ; <nl> i : : Isolate * isolate = CcTest : : i_isolate ( ) ; <nl> int count_before = i : : HandleScope : : NumberOfHandles ( isolate ) ; <nl> - heap_profiler - > TakeHeapSnapshot ( name ) ; <nl> + heap_profiler - > TakeHeapSnapshot ( ) ; <nl> int count_after = i : : HandleScope : : NumberOfHandles ( isolate ) ; <nl> CHECK_EQ ( count_before , count_after ) ; <nl> } <nl> TEST ( NodesIteration ) { <nl> LocalContext env ; <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " iteration " ) ) ; <nl> + const v8 : : HeapSnapshot * 
snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> CHECK ( global ) ; <nl> TEST ( GetHeapValueForNode ) { <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> <nl> CompileRun ( " a = { s_prop : \ ' value \ ' , n_prop : \ ' value2 \ ' } ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " value " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> CHECK ( heap_profiler - > FindObjectById ( global - > GetId ( ) ) - > IsObject ( ) ) ; <nl> TEST ( GetHeapValueForDeletedObject ) { <nl> / / property of the " a " object . Also , the " p " object can ' t be an empty one <nl> / / because the empty object is static and isn ' t actually deleted . <nl> CompileRun ( " a = { p : { r : { } } } ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * obj = GetProperty ( <nl> TEST ( FastCaseAccessors ) { <nl> " obj1 . __defineSetter__ ( ' propWithSetter ' , function Z ( value ) { \ n " <nl> " return this . 
value_ = value ; \ n " <nl> " } ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " fastCaseAccessors " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> TEST ( FastCaseRedefinedAccessors ) { <nl> v8 : : Utils : : OpenHandle ( * js_global - > Get ( v8_str ( " obj1 " ) ) . As < v8 : : Object > ( ) ) ; <nl> USE ( js_obj1 ) ; <nl> <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " fastCaseAccessors " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> CHECK ( global ) ; <nl> TEST ( SlowCaseAccessors ) { <nl> " obj1 . __defineSetter__ ( ' propWithSetter ' , function Z ( value ) { \ n " <nl> " return this . value_ = value ; \ n " <nl> " } ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " slowCaseAccessors " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> TEST ( HiddenPropertiesFastCase ) { <nl> CompileRun ( <nl> " function C ( x ) { this . a = this ; this . 
b = x ; } \ n " <nl> " c = new C ( 2012 ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " HiddenPropertiesFastCase1 " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * c = <nl> TEST ( HiddenPropertiesFastCase ) { <nl> CHECK ( ! cHandle . IsEmpty ( ) & & cHandle - > IsObject ( ) ) ; <nl> cHandle - > ToObject ( isolate ) - > SetHiddenValue ( v8_str ( " key " ) , v8_str ( " val " ) ) ; <nl> <nl> - snapshot = heap_profiler - > TakeHeapSnapshot ( <nl> - v8_str ( " HiddenPropertiesFastCase2 " ) ) ; <nl> + snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> global = GetGlobalObject ( snapshot ) ; <nl> c = GetProperty ( global , v8 : : HeapGraphEdge : : kProperty , " c " ) ; <nl> TEST ( AccessorInfo ) { <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> <nl> CompileRun ( " function foo ( x ) { } \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " AccessorInfoTest " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * foo = <nl> bool HasWeakEdge ( const v8 : : HeapGraphNode * node ) { <nl> bool HasWeakGlobalHandle ( ) { <nl> v8 : : Isolate * isolate = CcTest : : isolate ( ) ; <nl> v8 : : HeapProfiler * heap_profiler = isolate - > GetHeapProfiler ( ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " weaks " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( 
ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * gc_roots = GetNode ( <nl> snapshot - > GetRoot ( ) , v8 : : HeapGraphNode : : kSynthetic , " ( GC roots ) " ) ; <nl> TEST ( SfiAndJsFunctionWeakRefs ) { <nl> <nl> CompileRun ( <nl> " fun = ( function ( x ) { return function ( ) { return x + 1 ; } } ) ( 1 ) ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " fun " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> CHECK ( global ) ; <nl> TEST ( NoDebugObjectInSnapshot ) { <nl> <nl> CHECK ( CcTest : : i_isolate ( ) - > debug ( ) - > Load ( ) ) ; <nl> CompileRun ( " foo = { } ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * root = snapshot - > GetRoot ( ) ; <nl> int globals_count = 0 ; <nl> TEST ( AllStrongGcRootsHaveNames ) { <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> <nl> CompileRun ( " foo = { } ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * gc_roots = GetNode ( <nl> snapshot - > GetRoot ( ) , v8 : : HeapGraphNode : : kSynthetic , " ( GC roots ) " ) ; <nl> TEST ( NoRefsToNonEssentialEntries ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " global_object = { } ; \ n " ) 
; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * global_object = <nl> TEST ( MapHasDescriptorsAndTransitions ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " obj = { a : 10 } ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * global_object = <nl> TEST ( ManyLocalsInSharedContext ) { <nl> " result . push ( ' return f_ ' + ( n - 1 ) + ' ; ' ) ; " <nl> " result . push ( ' } ) ( ) ' ) ; " <nl> " var ok = eval ( result . 
join ( ' \ \ n ' ) ) ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> TEST ( AllocationSitesAreVisible ) { <nl> CompileRun ( <nl> " fun = function ( ) { var a = [ 3 , 2 , 1 ] ; return a ; } \ n " <nl> " fun ( ) ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> TEST ( JSFunctionHasCodeLink ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " function foo ( x , y ) { return x + y ; } \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * foo_func = <nl> TEST ( CheckCodeNames ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " var a = 1 . 
1 ; " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " CheckCodeNames " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> <nl> const char * stub_path [ ] = { <nl> TEST ( ArrayBufferAndArrayBufferView ) { <nl> v8 : : HandleScope scope ( env - > GetIsolate ( ) ) ; <nl> v8 : : HeapProfiler * heap_profiler = env - > GetIsolate ( ) - > GetHeapProfiler ( ) ; <nl> CompileRun ( " arr1 = new Uint32Array ( 100 ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * arr1_obj = <nl> TEST ( ArrayBufferSharedBackingStore ) { <nl> v8 : : Handle < v8 : : Value > result = CompileRun ( " ab2 . 
byteLength " ) ; <nl> CHECK_EQ ( 1024 , result - > Int32Value ( ) ) ; <nl> <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * ab1_node = <nl> TEST ( BoxObject ) { <nl> global - > Set ( 0 , v8 : : ToApiHandle < v8 : : Object > ( box ) ) ; <nl> <nl> v8 : : HeapProfiler * heap_profiler = isolate - > GetHeapProfiler ( ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global_node = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * box_node = <nl> TEST ( WeakContainers ) { <nl> " foo ( obj ) ; \ n " <nl> " % OptimizeFunctionOnNextCall ( foo ) ; \ n " <nl> " foo ( obj ) ; \ n " ) ; <nl> - const v8 : : HeapSnapshot * snapshot = <nl> - heap_profiler - > TakeHeapSnapshot ( v8_str ( " snapshot " ) ) ; <nl> + const v8 : : HeapSnapshot * snapshot = heap_profiler - > TakeHeapSnapshot ( ) ; <nl> CHECK ( ValidateSnapshot ( snapshot ) ) ; <nl> const v8 : : HeapGraphNode * global = GetGlobalObject ( snapshot ) ; <nl> const v8 : : HeapGraphNode * obj = <nl>
|
Remove uid and title from HeapSnapshot
|
v8/v8
|
619d4535cced3be482ea7f0a5ac1a31573e01119
|
2015-03-10T15:14:07Z
|
mmm a / script / create - dist . py <nl> ppp b / script / create - dist . py <nl> def copy_license ( ) : <nl> <nl> <nl> def strip_binaries ( ) : <nl> + if get_target_arch ( ) = = ' arm ' : <nl> + strip = ' arm - linux - gnueabihf - strip ' <nl> + else : <nl> + strip = ' strip ' <nl> for binary in TARGET_BINARIES [ PLATFORM ] : <nl> if binary . endswith ( ' . so ' ) or ' . ' not in binary : <nl> - execute ( [ ' strip ' , os . path . join ( DIST_DIR , binary ) ] ) <nl> + execute ( [ strip , os . path . join ( DIST_DIR , binary ) ] ) <nl> <nl> <nl> def copy_system_libraries ( ) : <nl>
|
Call correct strip for arm target
|
electron/electron
|
8110f2f2213dac5be74efc46283604a1c16348eb
|
2015-07-02T07:19:39Z
|
mmm a / tensorflow / compiler / jit / BUILD <nl> ppp b / tensorflow / compiler / jit / BUILD <nl> cc_library ( <nl> alwayslink = 1 , <nl> ) <nl> <nl> - cc_library ( <nl> - name = " register_xla_cpu_jit " , <nl> - srcs = [ " register_xla_cpu_jit . cc " ] , <nl> - visibility = [ " / / visibility : public " ] , <nl> - deps = [ <nl> - " / / tensorflow / core : tf_xla_stub " , <nl> - ] , <nl> - alwayslink = 1 , <nl> - ) <nl> - <nl> cc_library ( <nl> name = " xla_cpu_jit " , <nl> visibility = [ " / / visibility : public " ] , <nl> deps = [ <nl> " : jit_compilation_passes " , <nl> - " : register_xla_cpu_jit " , <nl> " / / tensorflow / compiler / jit / kernels : xla_ops " , <nl> " / / tensorflow / compiler / tf2xla / kernels : xla_dummy_ops " , <nl> " / / tensorflow / compiler / tf2xla / kernels : xla_ops " , <nl> " / / tensorflow / compiler / xla / service : cpu_plugin " , <nl> - " / / tensorflow / core : tf_xla_stub " , <nl> - ] , <nl> - alwayslink = 1 , <nl> - ) <nl> - <nl> - cc_library ( <nl> - name = " register_xla_gpu_jit " , <nl> - srcs = [ " register_xla_gpu_jit . 
cc " ] , <nl> - visibility = [ " / / visibility : public " ] , <nl> - deps = [ <nl> - " / / tensorflow / core : tf_xla_stub " , <nl> ] , <nl> alwayslink = 1 , <nl> ) <nl> cc_library ( <nl> visibility = [ " / / visibility : public " ] , <nl> deps = if_cuda ( [ <nl> " : jit_compilation_passes " , <nl> - " : register_xla_gpu_jit " , <nl> - " / / tensorflow / core : tf_xla_stub " , <nl> " / / tensorflow / compiler / jit / kernels : xla_ops " , <nl> " / / tensorflow / compiler / tf2xla / kernels : xla_ops " , <nl> " / / tensorflow / compiler / tf2xla / kernels : xla_dummy_ops " , <nl> cc_library ( <nl> deps = [ <nl> " : flags " , <nl> " : jit_compilation_passes " , <nl> - " : register_xla_cpu_jit " , <nl> " : xla_device " , <nl> " / / tensorflow / compiler / jit / kernels : xla_ops " , <nl> " / / tensorflow / compiler / tf2xla : xla_compiler " , <nl> cc_library ( <nl> visibility = [ " : friends " ] , <nl> deps = [ <nl> " : jit_compilation_passes " , <nl> - " : register_xla_gpu_jit " , <nl> " : xla_device " , <nl> " / / tensorflow / compiler / jit / kernels : xla_ops " , <nl> " / / tensorflow / compiler / tf2xla : xla_compiler " , <nl> deleted file mode 100644 <nl> index 9cbef63127134 . . 0000000000000 <nl> mmm a / tensorflow / compiler / jit / register_xla_cpu_jit . cc <nl> ppp / dev / null <nl> <nl> - / * Copyright 2018 The TensorFlow Authors . All Rights Reserved . <nl> - <nl> - Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - you may not use this file except in compliance with the License . <nl> - You may obtain a copy of the License at <nl> - <nl> - http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - <nl> - Unless required by applicable law or agreed to in writing , software <nl> - distributed under the License is distributed on an " AS IS " BASIS , <nl> - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . 
<nl> - See the License for the specific language governing permissions and <nl> - limitations under the License . <nl> - = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = * / <nl> - <nl> - # include " tensorflow / core / common_runtime / tf_xla_stub . h " <nl> - <nl> - namespace tensorflow { <nl> - namespace { <nl> - XlaCpuJitIsLinkedIn register_xla_cpu_jit ; <nl> - } <nl> - } / / namespace tensorflow <nl> deleted file mode 100644 <nl> index 7399a41d25fea . . 0000000000000 <nl> mmm a / tensorflow / compiler / jit / register_xla_gpu_jit . cc <nl> ppp / dev / null <nl> <nl> - / * Copyright 2018 The TensorFlow Authors . All Rights Reserved . <nl> - <nl> - Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - you may not use this file except in compliance with the License . <nl> - You may obtain a copy of the License at <nl> - <nl> - http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - <nl> - Unless required by applicable law or agreed to in writing , software <nl> - distributed under the License is distributed on an " AS IS " BASIS , <nl> - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - See the License for the specific language governing permissions and <nl> - limitations under the License . <nl> - = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = * / <nl> - <nl> - # include " tensorflow / core / common_runtime / tf_xla_stub . 
h " <nl> - <nl> - namespace tensorflow { <nl> - namespace { <nl> - XlaGpuJitIsLinkedIn register_xla_gpu_jit ; <nl> - } <nl> - } / / namespace tensorflow <nl> mmm a / tensorflow / core / BUILD <nl> ppp b / tensorflow / core / BUILD <nl> tf_cuda_library ( <nl> ] + if_static ( [ " : core_cpu_impl " ] ) + tf_protos_all ( ) + tf_protos_grappler ( ) , <nl> ) <nl> <nl> - tf_cuda_library ( <nl> - name = " tf_xla_stub " , <nl> - srcs = [ " common_runtime / tf_xla_stub . cc " ] , <nl> - hdrs = [ " common_runtime / tf_xla_stub . h " ] , <nl> - copts = tf_copts ( ) , <nl> - visibility = [ " / / visibility : public " ] , <nl> - deps = [ <nl> - " : lib " , <nl> - " : proto_text " , <nl> - " : session_options " , <nl> - ] , <nl> - ) <nl> - <nl> tf_cuda_library ( <nl> name = " core_cpu_internal " , <nl> srcs = [ <nl> tf_cuda_library ( <nl> " : framework " , <nl> " : graph " , <nl> " : lib " , <nl> - " : tf_xla_stub " , <nl> " : proto_text " , <nl> " : protos_all_cc " , <nl> " / / tensorflow / core / grappler : grappler_item " , <nl> mmm a / tensorflow / core / common_runtime / graph_execution_state . cc <nl> ppp b / tensorflow / core / common_runtime / graph_execution_state . cc <nl> limitations under the License . <nl> # include " tensorflow / core / common_runtime / device . h " <nl> # include " tensorflow / core / common_runtime / optimization_registry . h " <nl> # include " tensorflow / core / common_runtime / placer . h " <nl> - # include " tensorflow / core / common_runtime / tf_xla_stub . h " <nl> # include " tensorflow / core / framework / graph . pb_text . h " <nl> # include " tensorflow / core / framework / graph_def_util . h " <nl> # include " tensorflow / core / framework / node_def . pb . h " <nl> Status GraphExecutionState : : BuildGraph ( const BuildGraphOptions & options , <nl> CHECK_EQ ( options . callable_options . fetch_size ( ) , <nl> rewrite_metadata . fetch_types . 
size ( ) ) ; <nl> <nl> - TF_RETURN_IF_ERROR ( CheckXlaJitOptimizerOptions ( session_options_ ) ) ; <nl> - <nl> / / TODO ( andydavis ) : Clarify optimization pass requirements around CostModel . <nl> GraphOptimizationPassOptions optimization_options ; <nl> optimization_options . session_options = session_options_ ; <nl> deleted file mode 100644 <nl> index d463693669f4a . . 0000000000000 <nl> mmm a / tensorflow / core / common_runtime / tf_xla_stub . cc <nl> ppp / dev / null <nl> <nl> - / * Copyright 2018 The TensorFlow Authors . All Rights Reserved . <nl> - <nl> - Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - you may not use this file except in compliance with the License . <nl> - You may obtain a copy of the License at <nl> - <nl> - http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - <nl> - Unless required by applicable law or agreed to in writing , software <nl> - distributed under the License is distributed on an " AS IS " BASIS , <nl> - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - See the License for the specific language governing permissions and <nl> - limitations under the License . <nl> - = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = * / <nl> - <nl> - # include < cstdlib > <nl> - <nl> - # include " tensorflow / core / common_runtime / tf_xla_stub . h " <nl> - # include " tensorflow / core / lib / core / errors . h " <nl> - <nl> - namespace tensorflow { <nl> - namespace { <nl> - <nl> - bool is_xla_gpu_jit_registered = false ; <nl> - bool is_xla_cpu_jit_registered = false ; <nl> - <nl> - struct XlaEnvVars { <nl> - bool xla_flags_env_var_present ; <nl> - bool tf_xla_flags_env_var_present ; <nl> - } ; <nl> - <nl> - XlaEnvVars ComputeEnvVarHasXlaFlags ( ) { <nl> - XlaEnvVars env_vars ; <nl> - env_vars . xla_flags_env_var_present = getenv ( " XLA_FLAGS " ) ! 
= nullptr ; <nl> - env_vars . tf_xla_flags_env_var_present = getenv ( " TF_XLA_FLAGS " ) ! = nullptr ; <nl> - return env_vars ; <nl> - } <nl> - <nl> - } / / namespace <nl> - <nl> - XlaGpuJitIsLinkedIn : : XlaGpuJitIsLinkedIn ( ) { is_xla_gpu_jit_registered = true ; } <nl> - XlaCpuJitIsLinkedIn : : XlaCpuJitIsLinkedIn ( ) { is_xla_cpu_jit_registered = true ; } <nl> - <nl> - Status CheckXlaJitOptimizerOptions ( const SessionOptions * session_options ) { <nl> - static XlaEnvVars env_vars = ComputeEnvVarHasXlaFlags ( ) ; <nl> - <nl> - if ( is_xla_cpu_jit_registered | | is_xla_gpu_jit_registered ) { <nl> - return Status : : OK ( ) ; <nl> - } <nl> - <nl> - if ( env_vars . xla_flags_env_var_present ) { <nl> - return errors : : InvalidArgument ( <nl> - " The XLA JIT is not linked in but the \ " XLA_FLAGS \ " environment " <nl> - " variable is set . Please either link in XLA or remove \ " XLA_FLAGS \ " " <nl> - " from the environment . " ) ; <nl> - } <nl> - <nl> - if ( env_vars . tf_xla_flags_env_var_present ) { <nl> - return errors : : InvalidArgument ( <nl> - " The XLA JIT is not linked in but the \ " TF_XLA_FLAGS \ " environment " <nl> - " variable is set . Please either link in XLA or remove " <nl> - " \ " TF_XLA_FLAGS \ " from the environment . " ) ; <nl> - } <nl> - <nl> - if ( session_options ) { <nl> - OptimizerOptions : : GlobalJitLevel jit_level = <nl> - session_options - > config . graph_options ( ) <nl> - . optimizer_options ( ) <nl> - . global_jit_level ( ) ; <nl> - <nl> - if ( jit_level = = OptimizerOptions : : ON_1 | | <nl> - jit_level = = OptimizerOptions : : ON_2 ) { <nl> - return errors : : InvalidArgument ( <nl> - " The XLA JIT is enabled in the session options but XLA is not linked " <nl> - " in . Plesae either link in XLA or disable the JIT in the session " <nl> - " options . " ) ; <nl> - } <nl> - } <nl> - <nl> - return Status : : OK ( ) ; <nl> - } <nl> - } / / namespace tensorflow <nl> deleted file mode 100644 <nl> index 723b2b5cd2e27 . . 
0000000000000 <nl> mmm a / tensorflow / core / common_runtime / tf_xla_stub . h <nl> ppp / dev / null <nl> <nl> - / * Copyright 2018 The TensorFlow Authors . All Rights Reserved . <nl> - <nl> - Licensed under the Apache License , Version 2 . 0 ( the " License " ) ; <nl> - you may not use this file except in compliance with the License . <nl> - You may obtain a copy of the License at <nl> - <nl> - http : / / www . apache . org / licenses / LICENSE - 2 . 0 <nl> - <nl> - Unless required by applicable law or agreed to in writing , software <nl> - distributed under the License is distributed on an " AS IS " BASIS , <nl> - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND , either express or implied . <nl> - See the License for the specific language governing permissions and <nl> - limitations under the License . <nl> - = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = * / <nl> - # ifndef TENSORFLOW_CORE_COMMON_RUNTIME_TF_XLA_STUB_H_ <nl> - # define TENSORFLOW_CORE_COMMON_RUNTIME_TF_XLA_STUB_H_ <nl> - <nl> - # include " tensorflow / core / lib / core / status . h " <nl> - # include " tensorflow / core / public / session_options . h " <nl> - <nl> - namespace tensorflow { <nl> - / / Returns an error if the XLA JIT is enabled via ` session_options ` or if the <nl> - / / TF_XLA_FLAGS or XLA_FLAGS environment variables are set , but neither the <nl> - / / XLA CPU JIT nor the XLA GPU JIT are linked in . <nl> - / / <nl> - / / If ` session_options ` is null then only the environment variables are checked . <nl> - Status CheckXlaJitOptimizerOptions ( const SessionOptions * session_options ) ; <nl> - <nl> - / / The XLA CPU JIT creates a static instance of this class to notify <nl> - / / ` CheckXlaJitOptimizerOptions ` that the XLA CPU JIT is linked in . <nl> - / / <nl> - / / NB ! 
The constructor of this class ( if run at all ) needs to be ordered ( via <nl> - / / happens before ) before any call to ` CheckXlaJitOptimizerOptions ` . <nl> - class XlaCpuJitIsLinkedIn { <nl> - public : <nl> - XlaCpuJitIsLinkedIn ( ) ; <nl> - } ; <nl> - <nl> - / / The XLA GPU JIT creates a static instance of this class to notify <nl> - / / ` CheckXlaJitOptimizerOptions ` that the XLA GPU JIT is linked in . <nl> - / / <nl> - / / NB ! The constructor of this class ( if run at all ) needs to be ordered ( via <nl> - / / happens before ) before any call to ` CheckXlaJitOptimizerOptions ` . <nl> - class XlaGpuJitIsLinkedIn { <nl> - public : <nl> - XlaGpuJitIsLinkedIn ( ) ; <nl> - } ; <nl> - } / / namespace tensorflow <nl> - <nl> - # endif / / TENSORFLOW_CORE_COMMON_RUNTIME_TF_XLA_STUB_H_ <nl>
|
Automated rollback of commit 6b5bef9216ae89067e2a600772ac17a0ca4a5010
|
tensorflow/tensorflow
|
0c610a2be55f6d5e929f554d736b6bfa4ae35e81
|
2018-11-18T06:58:31Z
|
mmm a / addons / skin . estuary / 1080i / Variables . xml <nl> ppp b / addons / skin . estuary / 1080i / Variables . xml <nl> <nl> < value condition = " ListItem . IsRecording " > windows / pvr / record . png < / value > <nl> < value condition = " ListItem . IsPlaying " > overlays / watched / OverlayPlaying - List . png < / value > <nl> < value condition = " ListItem . IsResumable " > overlays / watched / resume . png < / value > <nl> + < value condition = " ListItem . IsParentFolder " > DefaultFolderBack . png < / value > <nl> < value condition = " ListItem . IsFolder " > overlays / folder . png < / value > <nl> < value condition = " ! String . IsEmpty ( ListItem . Overlay ) " > $ INFO [ ListItem . Overlay ] < / value > <nl> - < value condition = " ! ListItem . IsParentFolder " > OverlayUnwatched . png < / value > <nl> + < value > OverlayUnwatched . png < / value > <nl> < / variable > <nl> < variable name = " SettingsSectionIcon " > <nl> < value condition = " Window . IsActive ( playersettings ) " > icons / settings / video . png < / value > <nl>
|
[ estuary ] fix recordings parent folder overlay
|
xbmc/xbmc
|
3e660816d95dbbf6ff6e6189413e1a25855c6636
|
2016-11-20T11:33:32Z
|
mmm a / tensorflow / compiler / tf2tensorrt / convert / convert_nodes_test . cc <nl> ppp b / tensorflow / compiler / tf2tensorrt / convert / convert_nodes_test . cc <nl> class OpConverterTest : public : : testing : : Test { <nl> } <nl> } <nl> <nl> + void TestMatMulHelper ( <nl> + const std : : function < NodeDef ( DataType , bool , bool ) > & get_matmul , <nl> + const std : : string & op_name ) ; <nl> + <nl> / / Expose quantization_ranges_ for tests <nl> std : : unordered_map < nvinfer1 : : ITensor * , float > & quantization_ranges ( ) { <nl> return converter_ - > quantization_ranges_ ; <nl> TEST_F ( OpConverterTest , ConvertReshape ) { <nl> } <nl> } <nl> <nl> - TEST_F ( OpConverterTest , ConvertMatMul ) { <nl> - { <nl> - / / Input list is empty , should fail . <nl> - NodeDef node_def = MakeNodeDef ( " my_matmul " , " MatMul " , { } ) ; <nl> - RunValidationAndConversion ( <nl> - node_def , error : : INVALID_ARGUMENT , <nl> - " MatMul got 0 inputs but expected 2 , at my_matmul " ) ; <nl> - } <nl> - <nl> - / / Get the NodeDef for MatMul . <nl> - auto get_matmul_nodedef = [ ] ( DataType dtype , bool transpose_a , <nl> - bool transpose_b ) - > NodeDef { <nl> - Scope s = Scope : : NewRootScope ( ) ; <nl> - auto input = ops : : Placeholder ( s . WithOpName ( " input " ) , dtype ) ; <nl> - auto weights = ops : : Placeholder ( s . WithOpName ( " weights " ) , dtype ) ; <nl> - const auto matmul_attrs = <nl> - ops : : MatMul : : TransposeA ( transpose_a ) . TransposeB ( transpose_b ) ; <nl> - auto matmul = <nl> - ops : : MatMul ( s . WithOpName ( " my_matmul " ) , input , weights , matmul_attrs ) ; <nl> - return matmul . operation . node ( ) - > def ( ) ; <nl> - } ; <nl> - <nl> + / / Helper function for testing MatMul and BatchMatMul <nl> + / / get_matmul corresponds to the function used to generate the node . It should <nl> + / / accept ( DataType , transpose_a , transpose_b ) as parameters . 
<nl> + void OpConverterTest : : TestMatMulHelper ( <nl> + const std : : function < NodeDef ( DataType , bool , bool ) > & get_matmul , <nl> + const std : : string & op_name ) { <nl> + / / HACK : This needs to be done in a better way . <nl> + const bool is_batch_matmul = op_name = = " BatchMatMul " ; <nl> { <nl> / / Unsupported data type . <nl> Reset ( ) ; <nl> - NodeDef node_def = get_matmul_nodedef ( DT_INT32 , false , false ) ; <nl> + NodeDef node_def = get_matmul ( DT_INT32 , false , false ) ; <nl> AddTestTensor ( " input " , { 2 } , / * batch_size = * / 1 , nvinfer1 : : DataType : : kINT32 ) ; <nl> AddTestWeights < int32 > ( " weights " , { 2 , 1 } , { 3 , 5 } ) ; <nl> - RunValidationAndConversion ( node_def , error : : UNIMPLEMENTED , <nl> - " Data type int32 is not supported for MatMul , " <nl> - " must be one of [ float , half ] , at my_matmul " ) ; <nl> + RunValidationAndConversion ( <nl> + node_def , error : : UNIMPLEMENTED , <nl> + ( " Data type int32 is not supported for " + op_name + <nl> + " , " <nl> + " must be one of [ float , half ] , at my_matmul " ) <nl> + . c_str ( ) ) ; <nl> } <nl> / / OK . <nl> for ( bool transpose_a : { false , true } ) { <nl> for ( bool transpose_b : { false , true } ) { <nl> Reset ( ) ; <nl> - NodeDef node_def = get_matmul_nodedef ( DT_FLOAT , transpose_a , transpose_b ) ; <nl> + NodeDef node_def = get_matmul ( DT_FLOAT , transpose_a , transpose_b ) ; <nl> AddTestTensor ( " input " , { 2 } , / * batch_size = * / 1 ) ; <nl> AddTestWeights < float > ( " weights " , { 2 , 2 } , { 0 , 1 , 2 , 3 } ) ; <nl> - if ( ! 
transpose_a ) { <nl> - RunValidationAndConversion ( node_def ) ; <nl> - } else { <nl> - RunValidationAndConversion ( node_def , error : : INVALID_ARGUMENT , " Cannot transpose first input if it is a tensor with fewer than 2 non - batch dimensions " ) ; <nl> + if ( is_batch_matmul ) { <nl> + RunValidationAndConversion ( <nl> + node_def , error : : INVALID_ARGUMENT , <nl> + " Input weight attempts to broadcast across batch dimension " ) ; <nl> + continue ; <nl> + } else if ( transpose_a ) { <nl> + RunValidationAndConversion ( <nl> + node_def , error : : INVALID_ARGUMENT , <nl> + " Cannot transpose first input if it is a tensor with fewer than 2 " <nl> + " non - batch dimensions " ) ; <nl> continue ; <nl> } <nl> + RunValidationAndConversion ( node_def ) ; <nl> TRT_TensorOrWeights output ; <nl> TF_EXPECT_OK ( GetTensorOrWeights ( " my_matmul " , & output ) ) ; <nl> ASSERT_TRUE ( output . is_tensor ( ) ) ; <nl> TEST_F ( OpConverterTest , ConvertMatMul ) { <nl> / / OK , 3D inputs <nl> for ( bool transpose_b : { false , true } ) { <nl> Reset ( ) ; <nl> - NodeDef node_def = <nl> - get_matmul_nodedef ( DT_FLOAT , / * transpose_a = * / false , transpose_b ) ; <nl> + NodeDef node_def = get_matmul ( DT_FLOAT , / * transpose_a = * / false , transpose_b ) ; <nl> AddTestTensor ( " input " , { 1 , 1 , 2 } , / * batch_size = * / 1 ) ; <nl> AddTestWeights < float > ( " weights " , { 2 , 2 } , { 0 , 1 , 2 , 3 } ) ; <nl> + if ( is_batch_matmul ) { <nl> + RunValidationAndConversion ( <nl> + node_def , error : : INVALID_ARGUMENT , <nl> + " Input weight attempts to broadcast across batch dimension " ) ; <nl> + continue ; <nl> + } <nl> RunValidationAndConversion ( node_def ) ; <nl> TRT_TensorOrWeights output ; <nl> TF_EXPECT_OK ( GetTensorOrWeights ( " my_matmul " , & output ) ) ; <nl> TEST_F ( OpConverterTest , ConvertMatMul ) { <nl> } <nl> } <nl> <nl> + TEST_F ( OpConverterTest , ConvertMatMul ) { <nl> + { <nl> + / / Input list is empty , should fail . 
<nl> + NodeDef node_def = MakeNodeDef ( " my_matmul " , " MatMul " , { } ) ; <nl> + RunValidationAndConversion ( <nl> + node_def , error : : INVALID_ARGUMENT , <nl> + " MatMul got 0 inputs but expected 2 , at my_matmul " ) ; <nl> + } <nl> + <nl> + / / Get the NodeDef for MatMul . <nl> + auto get_matmul_nodedef = [ ] ( DataType dtype , bool transpose_a , <nl> + bool transpose_b ) - > NodeDef { <nl> + Scope s = Scope : : NewRootScope ( ) ; <nl> + auto input = ops : : Placeholder ( s . WithOpName ( " input " ) , dtype ) ; <nl> + auto weights = ops : : Placeholder ( s . WithOpName ( " weights " ) , dtype ) ; <nl> + const auto matmul_attrs = <nl> + ops : : MatMul : : TransposeA ( transpose_a ) . TransposeB ( transpose_b ) ; <nl> + auto matmul = <nl> + ops : : MatMul ( s . WithOpName ( " my_matmul " ) , input , weights , matmul_attrs ) ; <nl> + return matmul . operation . node ( ) - > def ( ) ; <nl> + } ; <nl> + <nl> + TestMatMulHelper ( get_matmul_nodedef , " MatMul " ) ; <nl> + } <nl> + <nl> + TEST_F ( OpConverterTest , ConvertBatchMatMul ) { <nl> + { <nl> + / / Input list is empty , should fail . <nl> + NodeDef node_def = MakeNodeDef ( " my_matmul " , " BatchMatMul " , { } ) ; <nl> + RunValidationAndConversion ( <nl> + node_def , error : : INVALID_ARGUMENT , <nl> + " BatchMatMul got 0 inputs but expected 2 , at my_matmul " ) ; <nl> + } <nl> + <nl> + / / Get the NodeDef for BatchMatMul . <nl> + auto get_matmul_nodedef = [ ] ( DataType dtype , bool transpose_a , <nl> + bool transpose_b ) - > NodeDef { <nl> + Scope s = Scope : : NewRootScope ( ) ; <nl> + auto input = ops : : Placeholder ( s . WithOpName ( " input " ) , dtype ) ; <nl> + auto weights = ops : : Placeholder ( s . WithOpName ( " weights " ) , dtype ) ; <nl> + const auto matmul_attrs = <nl> + ops : : BatchMatMul : : AdjX ( transpose_a ) . AdjY ( transpose_b ) ; <nl> + auto matmul = ops : : BatchMatMul ( s . WithOpName ( " my_matmul " ) , input , weights , <nl> + matmul_attrs ) ; <nl> + return matmul . 
operation . node ( ) - > def ( ) ; <nl> + } ; <nl> + <nl> + TestMatMulHelper ( get_matmul_nodedef , " BatchMatMul " ) ; <nl> + } <nl> + <nl> template < DataType dtype > <nl> void TestConvertBiasAdd ( OpConverterTest * test ) { <nl> / / Get the NodeDef for BiasAdd . <nl>
|
Adds additional test cases for BatchMatMul
|
tensorflow/tensorflow
|
c155a42b1de731fad396d2b5109ceb2d5d49bfde
|
2019-05-09T21:36:00Z
|
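The TensorFlow commit above folds the MatMul and BatchMatMul tests into one helper that takes a NodeDef factory callback plus the op name. The sketch below is illustrative only (not TensorFlow code; all names are made up) and shows the same parameterization pattern, including the commit's `op_name == "BatchMatMul"` flag and its rejection of `transpose_a` on low-rank inputs.

```cpp
#include <cassert>
#include <functional>
#include <string>

// Stand-in for a NodeDef: just the two attributes the helper varies.
struct FakeNodeDef {
  bool transpose_a;
  bool transpose_b;
};

// Counts the transpose variants the shared helper would let through,
// mirroring the commit's is_batch_matmul flag (weights broadcasting
// across the batch dimension is rejected) and its transpose_a check
// (cannot transpose a tensor with fewer than 2 non-batch dimensions).
static int CountConvertibleVariants(
    const std::function<FakeNodeDef(bool, bool)>& make_node,
    const std::string& op_name) {
  const bool is_batch_matmul = (op_name == "BatchMatMul");
  int ok = 0;
  for (bool ta : {false, true}) {
    for (bool tb : {false, true}) {
      FakeNodeDef node = make_node(ta, tb);
      if (is_batch_matmul) continue;   // weight broadcast case: rejected
      if (node.transpose_a) continue;  // low-rank transpose_a: rejected
      ++ok;
    }
  }
  return ok;
}
```

The payoff of the pattern is that each op's test only supplies its own factory lambda; the validation matrix lives in one place.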
mmm a / src / api . cc <nl> ppp b / src / api . cc <nl> void v8 : : V8 : : ShutdownPlatform ( ) { <nl> <nl> bool v8 : : V8 : : Initialize ( ) { <nl> i : : V8 : : Initialize ( ) ; <nl> + # ifdef V8_USE_EXTERNAL_STARTUP_DATA <nl> + i : : ReadNatives ( ) ; <nl> + # endif <nl> return true ; <nl> } <nl> <nl> mmm a / src / natives - external . cc <nl> ppp b / src / natives - external . cc <nl> class NativesHolder { <nl> DCHECK ( store ) ; <nl> holder_ = store ; <nl> } <nl> + static bool empty ( ) { return holder_ = = NULL ; } <nl> static void Dispose ( ) { <nl> - DCHECK ( holder_ ) ; <nl> delete holder_ ; <nl> + holder_ = NULL ; <nl> } <nl> <nl> private : <nl> template < NativeType type > <nl> NativesStore * NativesHolder < type > : : holder_ = NULL ; <nl> <nl> <nl> + / / The natives blob . Memory is owned by caller . <nl> + static StartupData * natives_blob_ = NULL ; <nl> + <nl> + <nl> + / * * <nl> + * Read the Natives blob , as previously set by SetNativesFromFile . <nl> + * / <nl> + void ReadNatives ( ) { <nl> + if ( natives_blob_ & & NativesHolder < CORE > : : empty ( ) ) { <nl> + SnapshotByteSource bytes ( natives_blob_ - > data , natives_blob_ - > raw_size ) ; <nl> + NativesHolder < CORE > : : set ( NativesStore : : MakeFromScriptsSource ( & bytes ) ) ; <nl> + NativesHolder < EXPERIMENTAL > : : set ( <nl> + NativesStore : : MakeFromScriptsSource ( & bytes ) ) ; <nl> + DCHECK ( ! bytes . HasMore ( ) ) ; <nl> + } <nl> + } <nl> + <nl> + <nl> / * * <nl> - * Read the Natives ( library sources ) blob , as generated by js2c + the build <nl> + * Set the Natives ( library sources ) blob , as generated by js2c + the build <nl> * system . <nl> * / <nl> void SetNativesFromFile ( StartupData * natives_blob ) { <nl> + DCHECK ( ! 
natives_blob_ ) ; <nl> DCHECK ( natives_blob ) ; <nl> DCHECK ( natives_blob - > data ) ; <nl> DCHECK ( natives_blob - > raw_size > 0 ) ; <nl> <nl> - SnapshotByteSource bytes ( natives_blob - > data , natives_blob - > raw_size ) ; <nl> - NativesHolder < CORE > : : set ( NativesStore : : MakeFromScriptsSource ( & bytes ) ) ; <nl> - NativesHolder < EXPERIMENTAL > : : set ( NativesStore : : MakeFromScriptsSource ( & bytes ) ) ; <nl> - DCHECK ( ! bytes . HasMore ( ) ) ; <nl> + natives_blob_ = natives_blob ; <nl> + ReadNatives ( ) ; <nl> } <nl> <nl> <nl> mmm a / src / natives . h <nl> ppp b / src / natives . h <nl> typedef NativesCollection < EXPERIMENTAL > ExperimentalNatives ; <nl> # ifdef V8_USE_EXTERNAL_STARTUP_DATA <nl> / / Used for reading the natives at runtime . Implementation in natives - empty . cc <nl> void SetNativesFromFile ( StartupData * natives_blob ) ; <nl> + void ReadNatives ( ) ; <nl> void DisposeNatives ( ) ; <nl> # endif <nl> <nl> mmm a / src / snapshot - empty . cc <nl> ppp b / src / snapshot - empty . cc <nl> namespace internal { <nl> / / below . This happens when compiling the mksnapshot utility . <nl> void SetNativesFromFile ( StartupData * data ) { CHECK ( false ) ; } <nl> void SetSnapshotFromFile ( StartupData * data ) { CHECK ( false ) ; } <nl> + void ReadNatives ( ) { } <nl> void DisposeNatives ( ) { } <nl> # endif / / V8_USE_EXTERNAL_STARTUP_DATA <nl> <nl>
|
Fix Initialize & Dispose for external snapshot . Make sure v8 : : V8 : : ( Initialize | Dispose ) can be called in any order .
|
v8/v8
|
2525e8f4026307cf7116e64bc0e9a1bb512cbfff
|
2015-03-04T10:38:00Z
|
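The V8 commit above makes `Initialize`/`Dispose` order-independent by keeping the caller-owned blob pointer, making the read step a no-op unless a blob is set and nothing has been read yet, and having `Dispose` null the holder so a later read can run again. A minimal sketch of that pattern (names are illustrative, not V8's API):

```cpp
#include <cassert>
#include <string>

static const char* g_blob = nullptr;     // blob memory owned by the caller
static std::string* g_holder = nullptr;  // parsed-natives stand-in

// No-op unless a blob has been registered and nothing is loaded yet,
// so it is safe to call from both SetBlob() and Initialize().
void ReadBlob() {
  if (g_blob != nullptr && g_holder == nullptr)
    g_holder = new std::string(g_blob);
}

void SetBlob(const char* blob) {
  g_blob = blob;
  ReadBlob();
}

bool Initialized() { return g_holder != nullptr; }

// Resets the holder (instead of asserting it exists) so Dispose() can be
// called before or after Initialize(), and Initialize() can run again.
void Dispose() {
  delete g_holder;
  g_holder = nullptr;
}
```

Because `ReadBlob()` is idempotent and `Dispose()` restores the empty state, any interleaving of the two entry points leaves the globals consistent.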
mmm a / README . markdown <nl> ppp b / README . markdown <nl> if it is not present it will default to : <nl> <nl> For each file in the result set , the query command will generate a JSON object <nl> value populated with the requested fields . For example , the default set of <nl> - fields will return a response something like this ( ` new ` is only present if <nl> - you are using the ` since ` generator and the item is new wrt . the since value <nl> - you specified in your query ) : <nl> + fields will return a response something like this : <nl> <nl> ` ` ` json <nl> { <nl> - " version " : " 2 . 7 " , <nl> + " version " : " 2 . 9 " , <nl> " clock " : " c : 80616 : 59 " , <nl> " is_fresh_instance " : false , <nl> " files " : [ <nl> { <nl> " exists " : true , <nl> " mode " : 33188 , <nl> + " new " : false , <nl> " name " : " argv . c " , <nl> " size " : 1340 , <nl> } <nl> you specified in your query ) : <nl> The ` is_fresh_instance ` member is true if the particular clock value indicates <nl> that it was returned by a different instance of watchman , or a named cursor <nl> hasn ' t been seen before . In that case , only files that currently exist will be <nl> - returned . Advanced users may set the input parameter ` empty_on_fresh_instance ` <nl> - to true , in which case no files will be returned for fresh instances . <nl> + returned , and all files will have ` new ` set to ` true ` . Advanced users may set <nl> + the input parameter ` empty_on_fresh_instance ` to true , in which case no files <nl> + will be returned for fresh instances . <nl> <nl> If the ` fields ` member consists of a single entry , the files result will be a <nl> simple array of values ; ` ` ` " fields " : [ " name " ] ` ` ` produces : <nl> mmm a / listener . c <nl> ppp b / listener . c <nl> json_t * w_match_results_to_json ( <nl> set_prop ( record , " ino " , json_integer ( file - > st . st_ino ) ) ; <nl> set_prop ( record , " dev " , json_integer ( file - > st . 
st_dev ) ) ; <nl> set_prop ( record , " nlink " , json_integer ( file - > st . st_nlink ) ) ; <nl> - <nl> - if ( matches [ i ] . is_new ) { <nl> - set_prop ( record , " new " , json_true ( ) ) ; <nl> - } <nl> + set_prop ( record , " new " , json_boolean ( matches [ i ] . is_new ) ) ; <nl> <nl> if ( clock_id_string ( file - > ctime . ticks , buf , sizeof ( buf ) ) ) { <nl> set_prop ( record , " cclock " , json_string_nocheck ( buf ) ) ; <nl> mmm a / query / eval . c <nl> ppp b / query / eval . c <nl> bool w_query_process_file ( <nl> m - > file = file ; <nl> m - > is_new = false ; <nl> if ( query - > since_spec ) { <nl> - if ( ! ctx - > since . is_timestamp ) { <nl> - m - > is_new = file - > ctime . ticks > ctx - > since . clock . ticks ; <nl> - } else { <nl> + if ( ctx - > since . is_timestamp ) { <nl> m - > is_new = w_timeval_compare ( ctx - > since . timestamp , file - > ctime . tv ) > 0 ; <nl> + } else if ( ctx - > since . clock . is_fresh_instance ) { <nl> + m - > is_new = true ; <nl> + } else { <nl> + m - > is_new = file - > ctime . ticks > ctx - > since . clock . ticks ; <nl> } <nl> } <nl> <nl> mmm a / query / fieldlist . c <nl> ppp b / query / fieldlist . c <nl> static json_t * make_exists ( struct watchman_rule_match * match ) <nl> <nl> static json_t * make_new ( struct watchman_rule_match * match ) <nl> { <nl> - if ( match - > is_new ) { <nl> - return json_true ( ) ; <nl> - } <nl> - return NULL ; <nl> + return json_boolean ( match - > is_new ) ; <nl> } <nl> <nl> # define MAKE_CLOCK_FIELD ( name , member ) \ <nl> mmm a / rules . c <nl> ppp b / rules . c <nl> uint32_t w_rules_match ( w_root_t * root , <nl> <nl> m - > relname = relname ; <nl> m - > file = file ; <nl> - if ( since & & ! since - > is_timestamp ) { <nl> - m - > is_new = file - > ctime . ticks > since - > clock . ticks ; <nl> - } else if ( since ) { <nl> - m - > is_new = w_timeval_compare ( since - > timestamp , file - > ctime . 
tv ) > 0 ; <nl> - } else { <nl> - m - > is_new = false ; <nl> + m - > is_new = false ; <nl> + if ( since ) { <nl> + if ( since - > is_timestamp ) { <nl> + m - > is_new = w_timeval_compare ( since - > timestamp , file - > ctime . tv ) > 0 ; <nl> + } else if ( since - > clock . is_fresh_instance ) { <nl> + m - > is_new = true ; <nl> + } else { <nl> + m - > is_new = file - > ctime . ticks > since - > clock . ticks ; <nl> + } <nl> } <nl> } else { <nl> w_string_delref ( relname ) ; <nl> mmm a / tests / integration / cursor . php <nl> ppp b / tests / integration / cursor . php <nl> function testCursor ( ) { <nl> touch ( $ root . ' / one ' , time ( ) + 1 ) ; <nl> $ since = $ this - > assertFileListUsingSince ( <nl> $ root , ' n : testCursor ' , array ( ' one ' ) ) ; <nl> - $ this - > assertEqual ( false , isset ( $ since [ ' files ' ] [ 0 ] [ ' new ' ] ) ) ; <nl> + $ this - > assertEqual ( false , $ since [ ' files ' ] [ 0 ] [ ' new ' ] ) ; <nl> <nl> / / Deleted files shouldn ' t show up in fresh cursors <nl> touch ( " $ root / two " ) ; <nl>
|
always set new to true for fresh instances
|
facebook/watchman
|
62036e62f33732e0c46d2db879305412436903b2
|
2013-08-05T22:54:41Z
|
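The watchman commit above settles on a three-way `is_new` decision: compare against a timestamp, treat everything as new for a fresh instance, otherwise compare clock ticks. A hedged sketch of that decision (the struct is a simplified stand-in for watchman's real since spec, not its actual types):

```cpp
#include <cassert>

// Simplified since spec: either a timestamp or a (fresh_instance, ticks) clock.
struct since_spec {
  bool is_timestamp;
  long timestamp;            // seconds; stands in for w_timeval
  bool is_fresh_instance;
  unsigned ticks;
};

// A file is "new" relative to `since` when: (a) comparing against a
// timestamp and the file's ctime is newer, (b) the clock comes from a
// fresh watchman instance (every existing file counts as new), or
// (c) the file's ctime ticks exceed the since clock's ticks.
static bool compute_is_new(const since_spec* since, long ctime_tv,
                           unsigned ctime_ticks) {
  if (!since) return false;
  if (since->is_timestamp) return ctime_tv > since->timestamp;
  if (since->is_fresh_instance) return true;
  return ctime_ticks > since->ticks;
}
```

The fresh-instance branch is the behavioral change the commit message describes: before it, files returned by a brand-new instance carried no `new` flag at all.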
mmm a / src / http / http . cc <nl> ppp b / src / http / http . cc <nl> bool maybe_gzip_response ( const http_req_t & req , http_res_t * res ) { <nl> double val = 1 . 0 ; <nl> char * endptr ; <nl> if ( it - > second . length ( ) ! = 0 ) { <nl> + set_errno ( 0 ) ; / / Clear errno because strtod doesn ' t <nl> val = strtod ( it - > second . c_str ( ) , & endptr ) ; <nl> if ( endptr = = it - > second . c_str ( ) | | <nl> ( get_errno ( ) = = ERANGE & & <nl>
|
more defensive errno handling for strtod
|
rethinkdb/rethinkdb
|
5cb611792628201fa7867f9b5e61fc64699c66dc
|
2014-02-07T00:27:21Z
|
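The rethinkdb commit above adds a `set_errno(0)` before `strtod` because `strtod()` sets `errno` to `ERANGE` on overflow or underflow but never clears it, so a stale value from an earlier call would be misread as a range error. A minimal sketch of the pattern (`parse_qvalue` is an illustrative name, not rethinkdb's API):

```cpp
#include <cassert>
#include <cerrno>
#include <cstdlib>

// Parse a double defensively: clear errno first, then treat ERANGE as a
// range error only if strtod actually consumed characters (endptr moved).
static double parse_qvalue(const char* s, bool* range_error) {
  char* endptr;
  errno = 0;  // clear stale errno; strtod() does not
  double val = std::strtod(s, &endptr);
  *range_error = (endptr != s && errno == ERANGE);
  return val;
}
```

The `endptr != s` check distinguishes "no conversion happened" from "conversion happened but overflowed", matching the condition in the patched HTTP code.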
mmm a / imgui . cpp <nl> ppp b / imgui . cpp <nl> void ImDrawList : : AddRectFilled ( const ImVec2 & a , const ImVec2 & b , ImU32 col , floa <nl> r = ImMin ( r , fabsf ( b . x - a . x ) * ( ( ( rounding_corners & ( 1 | 2 ) ) = = ( 1 | 2 ) ) | | ( ( rounding_corners & ( 4 | 8 ) ) = = ( 4 | 8 ) ) ? 0 . 5f : 1 . 0f ) ) ; <nl> r = ImMin ( r , fabsf ( b . y - a . y ) * ( ( ( rounding_corners & ( 1 | 8 ) ) = = ( 1 | 8 ) ) | | ( ( rounding_corners & ( 2 | 4 ) ) = = ( 2 | 4 ) ) ? 0 . 5f : 1 . 0f ) ) ; <nl> <nl> - const ImVec2 uv = GImGui - > FontTexUvWhitePixel ; <nl> if ( r = = 0 . 0f | | rounding_corners = = 0 ) <nl> { <nl> / / Use triangle so we can merge more draw calls together ( at the cost of extra vertices ) <nl>
|
Warning fix .
|
ocornut/imgui
|
dd893ac4f558505a528f64164ce6dd177e56002b
|
2015-04-09T20:43:42Z
|
mmm a / ports / otl / CONTROL <nl> ppp b / ports / otl / CONTROL <nl> <nl> Source : otl <nl> - Version : 4 . 0 . 443 - 2 <nl> + Version : 4 . 0 . 447 <nl> Description : Oracle , Odbc and DB2 - CLI Template Library <nl> Homepage : http : / / otl . sourceforge . net / <nl> mmm a / ports / otl / portfile . cmake <nl> ppp b / ports / otl / portfile . cmake <nl> include ( vcpkg_common_functions ) <nl> <nl> vcpkg_download_distfile ( ARCHIVE <nl> URLS " http : / / otl . sourceforge . net / otlv4_h2 . zip " <nl> - FILENAME " otlv4_h2 - 4 . 0 . 443 . zip " <nl> - SHA512 90a90d909586aae2088c87b5899244c01eef58aed7183d73b9be647f9c7a7bdcdc2f50f5c3cb564330b0061fb63e85f1b5522b4cf9390bc9baa5e2cb97ea3f3e <nl> + FILENAME " otlv4_h2 - 4 . 0 . 447 . zip " <nl> + SHA512 07663442272a327a7d9154f6e817c0166ed254cfe3b9d043762179e96180a70d3ba4b3762e5ef1ebdb18492e3425467c9ddad3a2c68aa93bb7d51d54e9712008 <nl> ) <nl> <nl> vcpkg_extract_source_archive_ex ( <nl>
|
version 447
|
microsoft/vcpkg
|
ce77a3d8fff098a2fade64829164ea4162cec430
|
2019-10-13T16:00:47Z
|
mmm a / Basics / StringUtils . h <nl> ppp b / Basics / StringUtils . h <nl> namespace triagens { <nl> / / / @ brief escape unicode <nl> / / / <nl> / / / This method escapes a unicode character string by replacing the unicode <nl> - / / / characters by a \ uXXXX sequence . <nl> + / / / characters by a \ \ uXXXX sequence . <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> string escapeUnicode ( string const & name , bool escapeSlash = true ) ; <nl> mmm a / Basics / Thread . cpp <nl> ppp b / Basics / Thread . cpp <nl> Thread : : ~ Thread ( ) { <nl> TRI_StopThread ( & _thread ) ; <nl> } <nl> <nl> - TRI_DeatchThread ( & _thread ) ; <nl> + TRI_DetachThread ( & _thread ) ; <nl> } <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / BasicsC / strings . h <nl> ppp b / BasicsC / strings . h <nl> char * TRI_EscapeCString ( char const * in , size_t inLength , size_t * outLength ) ; <nl> / / / @ brief escapes special characters using unicode escapes <nl> / / / <nl> / / / This method escapes an UTF - 8 character string by replacing the unprintable <nl> - / / / characters by a \ uXXXX sequence . Set escapeSlash to true in order to also <nl> + / / / characters by a \ \ uXXXX sequence . Set escapeSlash to true in order to also <nl> / / / escape the character ' / ' . 
<nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> char * TRI_EscapeUtf8String ( char const * in , size_t inLength , bool escapeSlash , s <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief unescapes unicode escape sequences <nl> / / / <nl> - / / / This method decodes a UTF - 8 character string by replacing the \ uXXXX <nl> + / / / This method decodes a UTF - 8 character string by replacing the \ \ uXXXX <nl> / / / sequence by unicode characters and representing them as UTF - 8 sequences . <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> mmm a / BasicsC / threads - posix . c <nl> ppp b / BasicsC / threads - posix . c <nl> bool TRI_StartThread ( TRI_thread_t * thread , void ( * starter ) ( void * ) , void * data ) <nl> int rc ; <nl> <nl> d = TRI_Allocate ( sizeof ( thread_data_t ) ) ; <nl> + if ( ! d ) { <nl> + TRI_set_errno ( TRI_ERROR_SYS_ERROR ) ; <nl> + LOG_ERROR ( " could not start thread : % s " , strerror ( errno ) ) ; <nl> + return false ; <nl> + } <nl> + <nl> d - > starter = starter ; <nl> d - > _data = data ; <nl> <nl> rc = pthread_create ( thread , 0 , & ThreadStarter , d ) ; <nl> <nl> if ( rc ! 
= 0 ) { <nl> + TRI_Free ( d ) ; <nl> TRI_set_errno ( TRI_ERROR_SYS_ERROR ) ; <nl> LOG_ERROR ( " could not start thread : % s " , strerror ( errno ) ) ; <nl> return false ; <nl> void TRI_StopThread ( TRI_thread_t * thread ) { <nl> / / / @ brief detachs a thread <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - void TRI_DeatchThread ( TRI_thread_t * thread ) { <nl> + void TRI_DetachThread ( TRI_thread_t * thread ) { <nl> pthread_detach ( * thread ) ; <nl> } <nl> <nl> mmm a / BasicsC / threads . h <nl> ppp b / BasicsC / threads . h <nl> void TRI_StopThread ( TRI_thread_t * thread ) ; <nl> / / / @ brief detachs a thread <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - void TRI_DeatchThread ( TRI_thread_t * thread ) ; <nl> + void TRI_DetachThread ( TRI_thread_t * thread ) ; <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief waits for a thread to finish <nl> new file mode 100644 <nl> index 00000000000 . . 790c42d2d70 <nl> mmm / dev / null <nl> ppp b / Doxygen / Examples . Durham / admin1 <nl> <nl> + avocado > db . examples . parameter ( ) ; <nl> + { <nl> + " waitForSync " : false , <nl> + " journalSize " : 134217728 <nl> + } <nl> new file mode 100644 <nl> index 00000000000 . . 4c1ddf22ba0 <nl> mmm / dev / null <nl> ppp b / Doxygen / Examples . Durham / admin2 <nl> <nl> + avocado > db . examples . parameter ( { waitForSync : true } ) ; <nl> + { <nl> + " waitForSync " : true , <nl> + " journalSize " : 134217728 <nl> + } <nl> new file mode 100644 <nl> index 00000000000 . . 8c021263d06 <nl> mmm / dev / null <nl> ppp b / Doxygen / Examples . Durham / admin3 <nl> <nl> + avocado > db . geo . 
ensureGeoIndex ( " loc " ) ; <nl> + 162534834 <nl> + <nl> + avocado > for ( i = - 90 ; i < = 90 ; i + = 10 ) { <nl> + . . . . . . . > for ( j = - 180 ; j < = 180 ; j + = 10 ) { <nl> + . . . . . . . > db . geo . save ( { name : " Name / " + i + " / " + j , <nl> + . . . . . . . > loc : [ i , j ] } ) ; <nl> + . . . . . . . > } <nl> + . . . . . . . > } <nl> + <nl> + avocado > db . geo . count ( ) ; <nl> + 703 <nl> + <nl> + avocado > db . geo . near ( 0 , 0 ) . limit ( 3 ) . toArray ( ) ; <nl> + [ { " _id " : " 154092 : 24861164 " , " _rev " : 24861164 , " name " : " Name / 0 / 0 " , " loc " : [ 0 , 0 ] } , <nl> + { " _id " : " 154092 : 24926700 " , " _rev " : 24926700 , " name " : " Name / 0 / 10 " , " loc " : [ 0 , 10 ] } , <nl> + { " _id " : " 154092 : 22436332 " , " _rev " : 22436332 , " name " : " Name / - 10 / 0 " , " loc " : [ - 10 , 0 ] } ] <nl> + <nl> + avocado > db . geo . near ( 0 , 0 ) . count ( ) ; <nl> + 100 <nl> new file mode 100644 <nl> index 00000000000 . . 5c083b213b0 <nl> mmm / dev / null <nl> ppp b / Doxygen / Examples . Durham / admin4 <nl> <nl> + avocado > db . geo2 . ensureGeoIndex ( " location . latitude " , " location . longitude " ) ; <nl> + 23735273 <nl> + <nl> + avocado > for ( i = - 90 ; i < = 90 ; i + = 10 ) { <nl> + . . . . . . . > for ( j = - 180 ; j < = 180 ; j + = 10 ) { <nl> + . . . . . . . > db . geo2 . save ( { name : " Name / " + i + " / " + j , <nl> + . . . . . . . > location : { latitude : i , <nl> + . . . . . . . > longitude : j } } ) ; <nl> + . . . . . . . > } <nl> + . . . . . . . > } <nl> + <nl> + avocado > db . geo2 . near ( 0 , 0 ) . limit ( 3 ) . 
toArray ( ) ; <nl> + [ { <nl> + " _id " : " 48126444 : 72964588 " , <nl> + " _rev " : 72964588 , <nl> + " location " : { " latitude " : 0 , " longitude " : 0 } , <nl> + " name " : " Name / 0 / 0 " <nl> + } , <nl> + { <nl> + " _id " : " 48126444 : 73030124 " , <nl> + " _rev " : 73030124 , <nl> + " location " : { " latitude " : 0 , " longitude " : 10 } , <nl> + " name " : " Name / 0 / 10 " <nl> + } , <nl> + { <nl> + " _id " : " 48126444 : 70539756 " , <nl> + " _rev " : 70539756 , <nl> + " location " : { " latitude " : - 10 , " longitude " : 0 } , <nl> + " name " : " Name / - 10 / 0 " <nl> + } ] <nl> new file mode 100644 <nl> index 00000000000 . . d4d30bf87c7 <nl> mmm / dev / null <nl> ppp b / Doxygen / Examples . Durham / admin5 <nl> <nl> + avocado > db . five . ensureHashIndex ( " a " ) <nl> + 2170279 <nl> new file mode 100644 <nl> index 00000000000 . . 5680b88adbe <nl> mmm / dev / null <nl> ppp b / Doxygen / Examples . Durham / simple18 <nl> <nl> + avocado > db . users . all ( ) . toArray ( ) ; <nl> + [ { " _id " : " 553063885 : 554702285 " , " _rev " : 554702285 , " id " : 323 , " name " : " Peter " } , <nl> + { " _id " : " 553063885 : 554636749 " , " _rev " : 554636749 , " id " : 535 , " name " : " Peter " } , <nl> + { " _id " : " 553063885 : 554833357 " , " _rev " : 554833357 , " id " : 25 , " name " : " Vladimir " } ] <nl> + <nl> + avocado > db . users . select ( { " id " : 323 } ) . toArray ( ) ; <nl> + [ { " id " : 323 , " name " : " Peter " , " _id " : " 553063885 : 554702285 " } ] <nl> + <nl> + avocado > db . users . select ( { " name " : " Peter " } ) . toArray ( ) ; <nl> + [ { " id " : 323 , " name " : " Peter " , " _id " : " 553063885 : 554702285 " } , <nl> + { " id " : 535 , " name " : " Peter " , " _id " : " 553063885 : 554636749 " } ] <nl> + <nl> + avocado > db . users . select ( { " name " : " Peter " , " id " : 535 } ) . 
toArray ( ) ; <nl> + [ { " id " : 535 , " name " : " Peter " , " _id " : " 553063885 : 554636749 " } ] <nl> new file mode 100644 <nl> index 00000000000 . . 7f7e8437c60 <nl> mmm / dev / null <nl> ppp b / Doxygen / Examples . Durham / simple19 <nl> <nl> + avocado > var a = db . users . select ( { " name " : " Peter " } ) ; <nl> + avocado > while ( a . hasNext ( ) ) print ( a . next ( ) ) ; <nl> + { " id " : 323 , " name " : " Peter " , " _id " : " 553063885 : 554702285 " } <nl> + { " id " : 535 , " name " : " Peter " , " _id " : " 553063885 : 554636749 " } <nl> new file mode 100755 <nl> index 00000000000 . . e4c8469d294 <nl> mmm / dev / null <nl> ppp b / Doxygen / Scripts / Markdown . pl <nl> <nl> + # ! / usr / bin / perl <nl> + <nl> + # <nl> + # Markdown - - A text - to - HTML conversion tool for web writers <nl> + # <nl> + # Copyright ( c ) 2004 John Gruber <nl> + # < http : / / daringfireball . net / projects / markdown / > <nl> + # <nl> + <nl> + <nl> + package Markdown ; <nl> + require 5 . 006_000 ; <nl> + use strict ; <nl> + use warnings ; <nl> + <nl> + use Digest : : MD5 qw ( md5_hex ) ; <nl> + use vars qw ( $ VERSION ) ; <nl> + $ VERSION = ' 1 . 0 . 1 ' ; <nl> + # Tue 14 Dec 2004 <nl> + <nl> + # # Disabled ; causes problems under Perl 5 . 6 . 1 : <nl> + # use utf8 ; <nl> + # binmode ( STDOUT , " : utf8 " ) ; # c . f . : http : / / acis . openlib . org / dev / perl - unicode - struggle . html <nl> + <nl> + <nl> + # <nl> + # Global default settings : <nl> + # <nl> + my $ g_empty_element_suffix = " / > " ; # Change to " > " for HTML output <nl> + my $ g_tab_width = 4 ; <nl> + <nl> + <nl> + # <nl> + # Globals : <nl> + # <nl> + <nl> + # Regex to match balanced [ brackets ] . See Friedl ' s <nl> + # " Mastering Regular Expressions " , 2nd Ed . , pp . 328 - 331 . <nl> + my $ g_nested_brackets ; <nl> + $ g_nested_brackets = qr { <nl> + ( ? > # Atomic matching <nl> + [ ^ \ [ \ ] ] + # Anything other than brackets <nl> + | <nl> + \ [ <nl> + ( ? ? 
{ $ g_nested_brackets } ) # Recursive set of nested brackets <nl> + \ ] <nl> + ) * <nl> + } x ; <nl> + <nl> + <nl> + # Table of hash values for escaped characters : <nl> + my % g_escape_table ; <nl> + foreach my $ char ( split / / , ' \ \ ` * _ { } [ ] ( ) > # + - . ! ' ) { <nl> + $ g_escape_table { $ char } = md5_hex ( $ char ) ; <nl> + } <nl> + <nl> + <nl> + # Global hashes , used by various utility routines <nl> + my % g_urls ; <nl> + my % g_titles ; <nl> + my % g_html_blocks ; <nl> + <nl> + # Used to track when we ' re inside an ordered or unordered list <nl> + # ( see _ProcessListItems ( ) for details ) : <nl> + my $ g_list_level = 0 ; <nl> + <nl> + <nl> + # # # # Blosxom plug - in interface # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> + <nl> + # Set $ g_blosxom_use_meta to 1 to use Blosxom ' s meta plug - in to determine <nl> + # which posts Markdown should process , using a " meta - markup : markdown " <nl> + # header . If it ' s set to 0 ( the default ) , Markdown will process all <nl> + # entries . <nl> + my $ g_blosxom_use_meta = 0 ; <nl> + <nl> + sub start { 1 ; } <nl> + sub story { <nl> + my ( $ pkg , $ path , $ filename , $ story_ref , $ title_ref , $ body_ref ) = @ _ ; <nl> + <nl> + if ( ( ! $ g_blosxom_use_meta ) or <nl> + ( defined ( $ meta : : markup ) and ( $ meta : : markup = ~ / ^ \ s * markdown \ s * $ / i ) ) <nl> + ) { <nl> + $ $ body_ref = Markdown ( $ $ body_ref ) ; <nl> + } <nl> + 1 ; <nl> + } <nl> + <nl> + <nl> + # # # # Movable Type plug - in interface # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> + eval { require MT } ; # Test to see if we ' re running in MT . <nl> + unless ( $ @ ) { <nl> + require MT ; <nl> + import MT ; <nl> + require MT : : Template : : Context ; <nl> + import MT : : Template : : Context ; <nl> + <nl> + eval { require MT : : Plugin } ; # Test to see if we ' re running > = MT 3 . 0 . 
<nl> + unless ( $ @ ) { <nl> + require MT : : Plugin ; <nl> + import MT : : Plugin ; <nl> + my $ plugin = new MT : : Plugin ( { <nl> + name = > " Markdown " , <nl> + description = > " A plain - text - to - HTML formatting plugin . ( Version : $ VERSION ) " , <nl> + doc_link = > ' http : / / daringfireball . net / projects / markdown / ' <nl> + } ) ; <nl> + MT - > add_plugin ( $ plugin ) ; <nl> + } <nl> + <nl> + MT : : Template : : Context - > add_container_tag ( MarkdownOptions = > sub { <nl> + my $ ctx = shift ; <nl> + my $ args = shift ; <nl> + my $ builder = $ ctx - > stash ( ' builder ' ) ; <nl> + my $ tokens = $ ctx - > stash ( ' tokens ' ) ; <nl> + <nl> + if ( defined ( $ args - > { ' output ' } ) ) { <nl> + $ ctx - > stash ( ' markdown_output ' , lc $ args - > { ' output ' } ) ; <nl> + } <nl> + <nl> + defined ( my $ str = $ builder - > build ( $ ctx , $ tokens ) ) <nl> + or return $ ctx - > error ( $ builder - > errstr ) ; <nl> + $ str ; # return value <nl> + } ) ; <nl> + <nl> + MT - > add_text_filter ( ' markdown ' = > { <nl> + label = > ' Markdown ' , <nl> + docs = > ' http : / / daringfireball . net / projects / markdown / ' , <nl> + on_format = > sub { <nl> + my $ text = shift ; <nl> + my $ ctx = shift ; <nl> + my $ raw = 0 ; <nl> + if ( defined $ ctx ) { <nl> + my $ output = $ ctx - > stash ( ' markdown_output ' ) ; <nl> + if ( defined $ output & & $ output = ~ m / ^ html / i ) { <nl> + $ g_empty_element_suffix = " > " ; <nl> + $ ctx - > stash ( ' markdown_output ' , ' ' ) ; <nl> + } <nl> + elsif ( defined $ output & & $ output eq ' raw ' ) { <nl> + $ raw = 1 ; <nl> + $ ctx - > stash ( ' markdown_output ' , ' ' ) ; <nl> + } <nl> + else { <nl> + $ raw = 0 ; <nl> + $ g_empty_element_suffix = " / > " ; <nl> + } <nl> + } <nl> + $ text = $ raw ? 
$ text : Markdown ( $ text ) ; <nl> + $ text ; <nl> + } , <nl> + } ) ; <nl> + <nl> + # If SmartyPants is loaded , add a combo Markdown / SmartyPants text filter : <nl> + my $ smartypants ; <nl> + <nl> + { <nl> + no warnings " once " ; <nl> + $ smartypants = $ MT : : Template : : Context : : Global_filters { ' smarty_pants ' } ; <nl> + } <nl> + <nl> + if ( $ smartypants ) { <nl> + MT - > add_text_filter ( ' markdown_with_smartypants ' = > { <nl> + label = > ' Markdown With SmartyPants ' , <nl> + docs = > ' http : / / daringfireball . net / projects / markdown / ' , <nl> + on_format = > sub { <nl> + my $ text = shift ; <nl> + my $ ctx = shift ; <nl> + if ( defined $ ctx ) { <nl> + my $ output = $ ctx - > stash ( ' markdown_output ' ) ; <nl> + if ( defined $ output & & $ output eq ' html ' ) { <nl> + $ g_empty_element_suffix = " > " ; <nl> + } <nl> + else { <nl> + $ g_empty_element_suffix = " / > " ; <nl> + } <nl> + } <nl> + $ text = Markdown ( $ text ) ; <nl> + $ text = $ smartypants - > ( $ text , ' 1 ' ) ; <nl> + } , <nl> + } ) ; <nl> + } <nl> + } <nl> + else { <nl> + # # # # BBEdit / command - line text filter interface # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> + # Needs to be hidden from MT ( and Blosxom when running in static mode ) . <nl> + <nl> + # We ' re only using $ blosxom : : version once ; tell Perl not to warn us : <nl> + no warnings ' once ' ; <nl> + unless ( defined ( $ blosxom : : version ) ) { <nl> + use warnings ; <nl> + <nl> + # # # # Check for command - line switches : # # # # # # # # # # # # # # # # # <nl> + my % cli_opts ; <nl> + use Getopt : : Long ; <nl> + Getopt : : Long : : Configure ( ' pass_through ' ) ; <nl> + GetOptions ( \ % cli_opts , <nl> + ' version ' , <nl> + ' shortversion ' , <nl> + ' html4tags ' , <nl> + ) ; <nl> + if ( $ cli_opts { ' version ' } ) { # Version info <nl> + print " \ nThis is Markdown , version $ VERSION . 
\ n " ; <nl> + print " Copyright 2004 John Gruber \ n " ; <nl> + print " http : / / daringfireball . net / projects / markdown / \ n \ n " ; <nl> + exit 0 ; <nl> + } <nl> + if ( $ cli_opts { ' shortversion ' } ) { # Just the version number string . <nl> + print $ VERSION ; <nl> + exit 0 ; <nl> + } <nl> + if ( $ cli_opts { ' html4tags ' } ) { # Use HTML tag style instead of XHTML <nl> + $ g_empty_element_suffix = " > " ; <nl> + } <nl> + <nl> + <nl> + # # # # Process incoming text : # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> + my $ text ; <nl> + { <nl> + local $ / ; # Slurp the whole file <nl> + $ text = < > ; <nl> + } <nl> + print Markdown ( $ text ) ; <nl> + } <nl> + } <nl> + <nl> + <nl> + <nl> + sub Markdown { <nl> + # <nl> + # Main function . The order in which other subs are called here is <nl> + # essential . Link and image substitutions need to happen before <nl> + # _EscapeSpecialChars ( ) , so that any * ' s or _ ' s in the < a > <nl> + # and < img > tags get encoded . <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + # Clear the global hashes . If we don ' t clear these , you get conflicts <nl> + # from other articles when generating a page which contains more than <nl> + # one article ( e . g . an index page that shows the N most recent <nl> + # articles ) : <nl> + % g_urls = ( ) ; <nl> + % g_titles = ( ) ; <nl> + % g_html_blocks = ( ) ; <nl> + <nl> + <nl> + # Standardize line endings : <nl> + $ text = ~ s { \ r \ n } { \ n } g ; # DOS to Unix <nl> + $ text = ~ s { \ r } { \ n } g ; # Mac to Unix <nl> + <nl> + # Make sure $ text ends with a couple of newlines : <nl> + $ text . = " \ n \ n " ; <nl> + <nl> + # Convert all tabs to spaces . <nl> + $ text = _Detab ( $ text ) ; <nl> + <nl> + # Strip any lines consisting only of spaces and tabs . <nl> + # This makes subsequent regexen easier to write , because we can <nl> + # match consecutive blank lines with / \ n + / instead of something <nl> + # contorted like / [ \ t ] * \ n + / . 
<nl> + $ text = ~ s / ^ [ \ t ] + $ / / mg ; <nl> + <nl> + # Turn block - level HTML blocks into hash entries <nl> + $ text = _HashHTMLBlocks ( $ text ) ; <nl> + <nl> + # Strip link definitions , store in hashes . <nl> + $ text = _StripLinkDefinitions ( $ text ) ; <nl> + <nl> + $ text = _RunBlockGamut ( $ text ) ; <nl> + <nl> + $ text = _UnescapeSpecialChars ( $ text ) ; <nl> + <nl> + return $ text . " \ n " ; <nl> + } <nl> + <nl> + <nl> + sub _StripLinkDefinitions { <nl> + # <nl> + # Strips link definitions from text , stores the URLs and titles in <nl> + # hash references . <nl> + # <nl> + my $ text = shift ; <nl> + my $ less_than_tab = $ g_tab_width - 1 ; <nl> + <nl> + # Link defs are in the form : ^ [ id ] : url " optional title " <nl> + while ( $ text = ~ s { <nl> + ^ [ ] { 0 , $ less_than_tab } \ [ ( . + ) \ ] : # id = $ 1 <nl> + [ \ t ] * <nl> + \ n ? # maybe * one * newline <nl> + [ \ t ] * <nl> + < ? ( \ S + ? ) > ? # url = $ 2 <nl> + [ \ t ] * <nl> + \ n ? # maybe one newline <nl> + [ \ t ] * <nl> + ( ? : <nl> + ( ? < = \ s ) # lookbehind for whitespace <nl> + [ " ( ] <nl> + ( . + ? ) # title = $ 3 <nl> + [ " ) ] <nl> + [ \ t ] * <nl> + ) ? # title is optional <nl> + ( ? : \ n + | \ Z ) <nl> + } <nl> + { } mx ) { <nl> + $ g_urls { lc $ 1 } = _EncodeAmpsAndAngles ( $ 2 ) ; # Link IDs are case - insensitive <nl> + if ( $ 3 ) { <nl> + $ g_titles { lc $ 1 } = $ 3 ; <nl> + $ g_titles { lc $ 1 } = ~ s / " / & quot ; / g ; <nl> + } <nl> + } <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _HashHTMLBlocks { <nl> + my $ text = shift ; <nl> + my $ less_than_tab = $ g_tab_width - 1 ; <nl> + <nl> + # Hashify HTML blocks : <nl> + # We only want to do this for block - level HTML tags , such as headers , <nl> + # lists , and tables . That ' s because we still want to wrap < p > s around <nl> + # " paragraphs " that are wrapped in non - block - level tags , such as anchors , <nl> + # phrase emphasis , and spans . 
The list of tags we ' re looking for is <nl> + # hard - coded : <nl> + my $ block_tags_a = qr / p | div | h [ 1 - 6 ] | blockquote | pre | table | dl | ol | ul | script | noscript | form | fieldset | iframe | math | ins | del / ; <nl> + my $ block_tags_b = qr / p | div | h [ 1 - 6 ] | blockquote | pre | table | dl | ol | ul | script | noscript | form | fieldset | iframe | math / ; <nl> + <nl> + # First , look for nested blocks , e . g . : <nl> + # < div > <nl> + # < div > <nl> + # tags for inner block must be indented . <nl> + # < / div > <nl> + # < / div > <nl> + # <nl> + # The outermost tags must start at the left margin for this to match , and <nl> + # the inner nested divs must be indented . <nl> + # We need to do this before the next , more liberal match , because the next <nl> + # match will start at the first ` < div > ` and stop at the first ` < / div > ` . <nl> + $ text = ~ s { <nl> + ( # save in $ 1 <nl> + ^ # start of line ( with / m ) <nl> + < ( $ block_tags_a ) # start tag = $ 2 <nl> + \ b # word break <nl> + ( . * \ n ) * ? # any number of lines , minimally matching <nl> + < / \ 2 > # the matching end tag <nl> + [ \ t ] * # trailing spaces / tabs <nl> + ( ? = \ n + | \ Z ) # followed by a newline or end of document <nl> + ) <nl> + } { <nl> + my $ key = md5_hex ( $ 1 ) ; <nl> + $ g_html_blocks { $ key } = $ 1 ; <nl> + " \ n \ n " . $ key . " \ n \ n " ; <nl> + } egmx ; <nl> + <nl> + <nl> + # <nl> + # Now match more liberally , simply from ` \ n < tag > ` to ` < / tag > \ n ` <nl> + # <nl> + $ text = ~ s { <nl> + ( # save in $ 1 <nl> + ^ # start of line ( with / m ) <nl> + < ( $ block_tags_b ) # start tag = $ 2 <nl> + \ b # word break <nl> + ( . * \ n ) * ? # any number of lines , minimally matching <nl> + . * < / \ 2 > # the matching end tag <nl> + [ \ t ] * # trailing spaces / tabs <nl> + ( ? 
= \ n + | \ Z ) # followed by a newline or end of document <nl> + ) <nl> + } { <nl> + my $ key = md5_hex ( $ 1 ) ; <nl> + $ g_html_blocks { $ key } = $ 1 ; <nl> + " \ n \ n " . $ key . " \ n \ n " ; <nl> + } egmx ; <nl> + # Special case just for < hr / > . It was easier to make a special case than <nl> + # to make the other regex more complicated . <nl> + $ text = ~ s { <nl> + ( ? : <nl> + ( ? < = \ n \ n ) # Starting after a blank line <nl> + | # or <nl> + \ A \ n ? # the beginning of the doc <nl> + ) <nl> + ( # save in $ 1 <nl> + [ ] { 0 , $ less_than_tab } <nl> + < ( hr ) # start tag = $ 2 <nl> + \ b # word break <nl> + ( [ ^ < > ] ) * ? # <nl> + / ? > # the matching end tag <nl> + [ \ t ] * <nl> + ( ? = \ n { 2 , } | \ Z ) # followed by a blank line or end of document <nl> + ) <nl> + } { <nl> + my $ key = md5_hex ( $ 1 ) ; <nl> + $ g_html_blocks { $ key } = $ 1 ; <nl> + " \ n \ n " . $ key . " \ n \ n " ; <nl> + } egx ; <nl> + <nl> + # Special case for standalone HTML comments : <nl> + $ text = ~ s { <nl> + ( ? : <nl> + ( ? < = \ n \ n ) # Starting after a blank line <nl> + | # or <nl> + \ A \ n ? # the beginning of the doc <nl> + ) <nl> + ( # save in $ 1 <nl> + [ ] { 0 , $ less_than_tab } <nl> + ( ? s : <nl> + < ! <nl> + ( - - . * ? - - \ s * ) + <nl> + > <nl> + ) <nl> + [ \ t ] * <nl> + ( ? = \ n { 2 , } | \ Z ) # followed by a blank line or end of document <nl> + ) <nl> + } { <nl> + my $ key = md5_hex ( $ 1 ) ; <nl> + $ g_html_blocks { $ key } = $ 1 ; <nl> + " \ n \ n " . $ key . " \ n \ n " ; <nl> + } egx ; <nl> + <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _RunBlockGamut { <nl> + # <nl> + # These are all the transformations that form block - level <nl> + # tags like paragraphs , headers , and list items . <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + $ text = _DoHeaders ( $ text ) ; <nl> + <nl> + # Do Horizontal Rules : <nl> + $ text = ~ s { ^ [ ] { 0 , 2 } ( [ ] ? \ * [ ] ? 
) { 3 , } [ \ t ] * $ } { \ n < hr $ g_empty_element_suffix \ n } gmx ; <nl> + $ text = ~ s { ^ [ ] { 0 , 2 } ( [ ] ? - [ ] ? ) { 3 , } [ \ t ] * $ } { \ n < hr $ g_empty_element_suffix \ n } gmx ; <nl> + $ text = ~ s { ^ [ ] { 0 , 2 } ( [ ] ? _ [ ] ? ) { 3 , } [ \ t ] * $ } { \ n < hr $ g_empty_element_suffix \ n } gmx ; <nl> + <nl> + $ text = _DoLists ( $ text ) ; <nl> + <nl> + $ text = _DoCodeBlocks ( $ text ) ; <nl> + <nl> + $ text = _DoBlockQuotes ( $ text ) ; <nl> + <nl> + # We already ran _HashHTMLBlocks ( ) before , in Markdown ( ) , but that <nl> + # was to escape raw HTML in the original Markdown source . This time , <nl> + # we ' re escaping the markup we ' ve just created , so that we don ' t wrap <nl> + # < p > tags around block - level tags . <nl> + $ text = _HashHTMLBlocks ( $ text ) ; <nl> + <nl> + $ text = _FormParagraphs ( $ text ) ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _RunSpanGamut { <nl> + # <nl> + # These are all the transformations that occur * within * block - level <nl> + # tags like paragraphs , headers , and list items . <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + $ text = _DoCodeSpans ( $ text ) ; <nl> + <nl> + $ text = _EscapeSpecialChars ( $ text ) ; <nl> + <nl> + # Process anchor and image tags . Images must come first , <nl> + # because ! [ foo ] [ f ] looks like an anchor . <nl> + $ text = _DoImages ( $ text ) ; <nl> + $ text = _DoAnchors ( $ text ) ; <nl> + <nl> + # Make links out of things like ` < http : / / example . com / > ` <nl> + # Must come after _DoAnchors ( ) , because you can use < and > <nl> + # delimiters in inline links like [ this ] ( < url > ) . 
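_DoAutoLinks, invoked at this point in the span gamut, is essentially one substitution over angle-bracketed URLs; running it after _DoAnchors is what lets `[this](<url>)` use angle brackets safely. A Python sketch of just the URL half (function name is ours; the Perl additionally auto-links and obfuscates email addresses, omitted here):

```python
import re

# same character class idea as the Perl: scheme, colon, then anything
# that is not a quote, '>' or whitespace
AUTOLINK_RE = re.compile(r"<((?:https?|ftp):[^'\">\s]+)>", re.I)

def do_auto_links(text: str) -> str:
    """Turn <http://...> style URLs into anchor tags."""
    return AUTOLINK_RE.sub(r'<a href="\1">\1</a>', text)
```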
<nl> + $ text = _DoAutoLinks ( $ text ) ; <nl> + <nl> + $ text = _EncodeAmpsAndAngles ( $ text ) ; <nl> + <nl> + $ text = _DoItalicsAndBold ( $ text ) ; <nl> + <nl> + # Do hard breaks : <nl> + $ text = ~ s / { 2 , } \ n / < br $ g_empty_element_suffix \ n / g ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _EscapeSpecialChars { <nl> + my $ text = shift ; <nl> + my $ tokens | | = _TokenizeHTML ( $ text ) ; <nl> + <nl> + $ text = ' ' ; # rebuild $ text from the tokens <nl> + # my $ in_pre = 0 ; # Keep track of when we ' re inside < pre > or < code > tags . <nl> + # my $ tags_to_skip = qr ! < ( / ? ) ( ? : pre | code | kbd | script | math ) [ \ s > ] ! ; <nl> + <nl> + foreach my $ cur_token ( @ $ tokens ) { <nl> + if ( $ cur_token - > [ 0 ] eq " tag " ) { <nl> + # Within tags , encode * and _ so they don ' t conflict <nl> + # with their use in Markdown for italics and strong . <nl> + # We ' re replacing each such character with its <nl> + # corresponding MD5 checksum value ; this is likely <nl> + # overkill , but it should prevent us from colliding <nl> + # with the escape values by accident . <nl> + $ cur_token - > [ 1 ] = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; <nl> + $ cur_token - > [ 1 ] = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; <nl> + $ text . = $ cur_token - > [ 1 ] ; <nl> + } else { <nl> + my $ t = $ cur_token - > [ 1 ] ; <nl> + $ t = _EncodeBackslashEscapes ( $ t ) ; <nl> + $ text . = $ t ; <nl> + } <nl> + } <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _DoAnchors { <nl> + # <nl> + # Turn Markdown link shortcuts into XHTML < a > tags . <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + # <nl> + # First , handle reference - style links : [ link text ] [ id ] <nl> + # <nl> + $ text = ~ s { <nl> + ( # wrap whole match in $ 1 <nl> + \ [ <nl> + ( $ g_nested_brackets ) # link text = $ 2 <nl> + \ ] <nl> + <nl> + [ ] ? # one optional space <nl> + ( ? : \ n [ ] * ) ? 
# one optional newline followed by spaces <nl> + <nl> + \ [ <nl> + ( . * ? ) # id = $ 3 <nl> + \ ] <nl> + ) <nl> + } { <nl> + my $ result ; <nl> + my $ whole_match = $ 1 ; <nl> + my $ link_text = $ 2 ; <nl> + my $ link_id = lc $ 3 ; <nl> + <nl> + if ( $ link_id eq " " ) { <nl> + $ link_id = lc $ link_text ; # for shortcut links like [ this ] [ ] . <nl> + } <nl> + <nl> + if ( defined $ g_urls { $ link_id } ) { <nl> + my $ url = $ g_urls { $ link_id } ; <nl> + $ url = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; # We ' ve got to encode these to avoid <nl> + $ url = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; # conflicting with italics / bold . <nl> + $ result = " < a href = \ " $ url \ " " ; <nl> + if ( defined $ g_titles { $ link_id } ) { <nl> + my $ title = $ g_titles { $ link_id } ; <nl> + $ title = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; <nl> + $ title = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; <nl> + $ result . = " title = \ " $ title \ " " ; <nl> + } <nl> + $ result . = " > $ link_text < / a > " ; <nl> + } <nl> + else { <nl> + $ result = $ whole_match ; <nl> + } <nl> + $ result ; <nl> + } xsge ; <nl> + <nl> + # <nl> + # Next , inline - style links : [ link text ] ( url " optional title " ) <nl> + # <nl> + $ text = ~ s { <nl> + ( # wrap whole match in $ 1 <nl> + \ [ <nl> + ( $ g_nested_brackets ) # link text = $ 2 <nl> + \ ] <nl> + \ ( # literal paren <nl> + [ \ t ] * <nl> + < ? ( . * ? ) > ? # href = $ 3 <nl> + [ \ t ] * <nl> + ( # $ 4 <nl> + ( [ ' " ] ) # quote char = $ 5 <nl> + ( . * ? ) # Title = $ 6 <nl> + \ 5 # matching quote <nl> + ) ? # title is optional <nl> + \ ) <nl> + ) <nl> + } { <nl> + my $ result ; <nl> + my $ whole_match = $ 1 ; <nl> + my $ link_text = $ 2 ; <nl> + my $ url = $ 3 ; <nl> + my $ title = $ 6 ; <nl> + <nl> + $ url = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; # We ' ve got to encode these to avoid <nl> + $ url = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; # conflicting with italics / bold . 
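Once the regex has captured the pieces, the reference-style branch of _DoAnchors above boils down to a case-insensitive dictionary lookup. A minimal Python sketch (the original is Perl; `resolve_ref_link` and its arguments are our names, and the MD5-table escaping of `*` and `_` is omitted):

```python
def resolve_ref_link(link_text, link_id, urls, titles):
    """Resolve one reference-style link the way _DoAnchors does:
    an empty id falls back to the link text (shortcut form [this][]),
    link IDs are case-insensitive, and an unknown id leaves the
    original source intact rather than emitting a broken link."""
    key = (link_id or link_text).lower()
    if key not in urls:
        return "[%s][%s]" % (link_text, link_id)
    result = '<a href="%s"' % urls[key]
    if key in titles:
        result += ' title="%s"' % titles[key].replace('"', "&quot;")
    return result + ">%s</a>" % link_text
```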
<nl> + $ result = " < a href = \ " $ url \ " " ; <nl> + <nl> + if ( defined $ title ) { <nl> + $ title = ~ s / " / & quot ; / g ; <nl> + $ title = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; <nl> + $ title = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; <nl> + $ result . = " title = \ " $ title \ " " ; <nl> + } <nl> + <nl> + $ result . = " > $ link_text < / a > " ; <nl> + <nl> + $ result ; <nl> + } xsge ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _DoImages { <nl> + # <nl> + # Turn Markdown image shortcuts into < img > tags . <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + # <nl> + # First , handle reference - style labeled images : ! [ alt text ] [ id ] <nl> + # <nl> + $ text = ~ s { <nl> + ( # wrap whole match in $ 1 <nl> + ! \ [ <nl> + ( . * ? ) # alt text = $ 2 <nl> + \ ] <nl> + <nl> + [ ] ? # one optional space <nl> + ( ? : \ n [ ] * ) ? # one optional newline followed by spaces <nl> + <nl> + \ [ <nl> + ( . * ? ) # id = $ 3 <nl> + \ ] <nl> + <nl> + ) <nl> + } { <nl> + my $ result ; <nl> + my $ whole_match = $ 1 ; <nl> + my $ alt_text = $ 2 ; <nl> + my $ link_id = lc $ 3 ; <nl> + <nl> + if ( $ link_id eq " " ) { <nl> + $ link_id = lc $ alt_text ; # for shortcut links like ! [ this ] [ ] . <nl> + } <nl> + <nl> + $ alt_text = ~ s / " / & quot ; / g ; <nl> + if ( defined $ g_urls { $ link_id } ) { <nl> + my $ url = $ g_urls { $ link_id } ; <nl> + $ url = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; # We ' ve got to encode these to avoid <nl> + $ url = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; # conflicting with italics / bold . <nl> + $ result = " < img src = \ " $ url \ " alt = \ " $ alt_text \ " " ; <nl> + if ( defined $ g_titles { $ link_id } ) { <nl> + my $ title = $ g_titles { $ link_id } ; <nl> + $ title = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; <nl> + $ title = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; <nl> + $ result . = " title = \ " $ title \ " " ; <nl> + } <nl> + $ result . 
= $ g_empty_element_suffix ; <nl> + } <nl> + else { <nl> + # If there ' s no such link ID , leave intact : <nl> + $ result = $ whole_match ; <nl> + } <nl> + <nl> + $ result ; <nl> + } xsge ; <nl> + <nl> + # <nl> + # Next , handle inline images : ! [ alt text ] ( url " optional title " ) <nl> + # Don ' t forget : encode * and _ <nl> + <nl> + $ text = ~ s { <nl> + ( # wrap whole match in $ 1 <nl> + ! \ [ <nl> + ( . * ? ) # alt text = $ 2 <nl> + \ ] <nl> + \ ( # literal paren <nl> + [ \ t ] * <nl> + < ? ( \ S + ? ) > ? # src url = $ 3 <nl> + [ \ t ] * <nl> + ( # $ 4 <nl> + ( [ ' " ] ) # quote char = $ 5 <nl> + ( . * ? ) # title = $ 6 <nl> + \ 5 # matching quote <nl> + [ \ t ] * <nl> + ) ? # title is optional <nl> + \ ) <nl> + ) <nl> + } { <nl> + my $ result ; <nl> + my $ whole_match = $ 1 ; <nl> + my $ alt_text = $ 2 ; <nl> + my $ url = $ 3 ; <nl> + my $ title = ' ' ; <nl> + if ( defined ( $ 6 ) ) { <nl> + $ title = $ 6 ; <nl> + } <nl> + <nl> + $ alt_text = ~ s / " / & quot ; / g ; <nl> + $ title = ~ s / " / & quot ; / g ; <nl> + $ url = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; # We ' ve got to encode these to avoid <nl> + $ url = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; # conflicting with italics / bold . <nl> + $ result = " < img src = \ " $ url \ " alt = \ " $ alt_text \ " " ; <nl> + if ( defined $ title ) { <nl> + $ title = ~ s ! \ * ! $ g_escape_table { ' * ' } ! gx ; <nl> + $ title = ~ s ! _ ! $ g_escape_table { ' _ ' } ! gx ; <nl> + $ result . = " title = \ " $ title \ " " ; <nl> + } <nl> + $ result . = $ g_empty_element_suffix ; <nl> + <nl> + $ result ; <nl> + } xsge ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _DoHeaders { <nl> + my $ text = shift ; <nl> + <nl> + # Setext - style headers : <nl> + # Header 1 <nl> + # = = = = = = = = <nl> + # <nl> + # Header 2 <nl> + # mmmmmm - - <nl> + # <nl> + $ text = ~ s { ^ ( . + ) [ \ t ] * \ n = + [ \ t ] * \ n + } { <nl> + " < h1 > " . _RunSpanGamut ( $ 1 ) . 
" < / h1 > \ n \ n " ; <nl> + } egmx ; <nl> + <nl> + $ text = ~ s { ^ ( . + ) [ \ t ] * \ n - + [ \ t ] * \ n + } { <nl> + " < h2 > " . _RunSpanGamut ( $ 1 ) . " < / h2 > \ n \ n " ; <nl> + } egmx ; <nl> + <nl> + <nl> + # atx - style headers : <nl> + # # Header 1 <nl> + # # # Header 2 <nl> + # # # Header 2 with closing hashes # # <nl> + # . . . <nl> + # # # # # # # Header 6 <nl> + # <nl> + $ text = ~ s { <nl> + ^ ( \ # { 1 , 6 } ) # $ 1 = string of # ' s <nl> + [ \ t ] * <nl> + ( . + ? ) # $ 2 = Header text <nl> + [ \ t ] * <nl> + \ # * # optional closing # ' s ( not counted ) <nl> + \ n + <nl> + } { <nl> + my $ h_level = length ( $ 1 ) ; <nl> + " < h $ h_level > " . _RunSpanGamut ( $ 2 ) . " < / h $ h_level > \ n \ n " ; <nl> + } egmx ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _DoLists { <nl> + # <nl> + # Form HTML ordered ( numbered ) and unordered ( bulleted ) lists . <nl> + # <nl> + my $ text = shift ; <nl> + my $ less_than_tab = $ g_tab_width - 1 ; <nl> + <nl> + # Re - usable patterns to match list item bullets and number markers : <nl> + my $ marker_ul = qr / [ * + - ] / ; <nl> + my $ marker_ol = qr / \ d + [ . ] / ; <nl> + my $ marker_any = qr / ( ? : $ marker_ul | $ marker_ol ) / ; <nl> + <nl> + # Re - usable pattern to match any entirel ul or ol list : <nl> + my $ whole_list = qr { <nl> + ( # $ 1 = whole list <nl> + ( # $ 2 <nl> + [ ] { 0 , $ less_than_tab } <nl> + ( $ { marker_any } ) # $ 3 = first list item marker <nl> + [ \ t ] + <nl> + ) <nl> + ( ? s : . + ? ) <nl> + ( # $ 4 <nl> + \ z <nl> + | <nl> + \ n { 2 , } <nl> + ( ? = \ S ) <nl> + ( ? ! # Negative lookahead for another list item marker <nl> + [ \ t ] * <nl> + $ { marker_any } [ \ t ] + <nl> + ) <nl> + ) <nl> + ) <nl> + } mx ; <nl> + <nl> + # We use a different prefix before nested lists than top - level lists . <nl> + # See extended comment in _ProcessListItems ( ) . <nl> + # <nl> + # Note : There ' s a bit of duplication here . 
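The atx-header rule in _DoHeaders above can be sketched in Python: one to six leading hashes set the level, and any trailing hashes are decorative and dropped. A hedged sketch (our function name; the Perl also runs the span gamut over the header text, skipped here):

```python
import re

def do_atx_headers(text: str) -> str:
    """Convert atx-style headers (# H1 ... ###### H6) to <hN> tags,
    following the same pattern as _DoHeaders: the hash count gives
    the level, optional closing hashes are not counted."""
    def repl(m):
        level = len(m.group(1))
        return "<h%d>%s</h%d>\n\n" % (level, m.group(2), level)
    return re.sub(r"^(\#{1,6})[ \t]*(.+?)[ \t]*\#*\n+", repl, text, flags=re.M)
```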
My original implementation <nl> + # created a scalar regex pattern as the conditional result of the test on <nl> + # $ g_list_level , and then only ran the $ text = ~ s { . . . } { . . . } egmx <nl> + # substitution once , using the scalar as the pattern . This worked , <nl> + # everywhere except when running under MT on my hosting account at Pair <nl> + # Networks . There , this caused all rebuilds to be killed by the reaper ( or <nl> + # perhaps they crashed , but that seems incredibly unlikely given that the <nl> + # same script on the same server ran fine * except * under MT . I ' ve spent <nl> + # more time trying to figure out why this is happening than I ' d like to <nl> + # admit . My only guess , backed up by the fact that this workaround works , <nl> + # is that Perl optimizes the substition when it can figure out that the <nl> + # pattern will never change , and when this optimization isn ' t on , we run <nl> + # afoul of the reaper . Thus , the slightly redundant code to that uses two <nl> + # static s / / / patterns rather than one conditional pattern . <nl> + <nl> + if ( $ g_list_level ) { <nl> + $ text = ~ s { <nl> + ^ <nl> + $ whole_list <nl> + } { <nl> + my $ list = $ 1 ; <nl> + my $ list_type = ( $ 3 = ~ m / $ marker_ul / ) ? " ul " : " ol " ; <nl> + # Turn double returns into triple returns , so that we can make a <nl> + # paragraph for the last item in a list , if necessary : <nl> + $ list = ~ s / \ n { 2 , } / \ n \ n \ n / g ; <nl> + my $ result = _ProcessListItems ( $ list , $ marker_any ) ; <nl> + $ result = " < $ list_type > \ n " . $ result . " < / $ list_type > \ n " ; <nl> + $ result ; <nl> + } egmx ; <nl> + } <nl> + else { <nl> + $ text = ~ s { <nl> + ( ? : ( ? < = \ n \ n ) | \ A \ n ? ) <nl> + $ whole_list <nl> + } { <nl> + my $ list = $ 1 ; <nl> + my $ list_type = ( $ 3 = ~ m / $ marker_ul / ) ? 
" ul " : " ol " ; <nl> + # Turn double returns into triple returns , so that we can make a <nl> + # paragraph for the last item in a list , if necessary : <nl> + $ list = ~ s / \ n { 2 , } / \ n \ n \ n / g ; <nl> + my $ result = _ProcessListItems ( $ list , $ marker_any ) ; <nl> + $ result = " < $ list_type > \ n " . $ result . " < / $ list_type > \ n " ; <nl> + $ result ; <nl> + } egmx ; <nl> + } <nl> + <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _ProcessListItems { <nl> + # <nl> + # Process the contents of a single ordered or unordered list , splitting it <nl> + # into individual list items . <nl> + # <nl> + <nl> + my $ list_str = shift ; <nl> + my $ marker_any = shift ; <nl> + <nl> + <nl> + # The $ g_list_level global keeps track of when we ' re inside a list . <nl> + # Each time we enter a list , we increment it ; when we leave a list , <nl> + # we decrement . If it ' s zero , we ' re not in a list anymore . <nl> + # <nl> + # We do this because when we ' re not inside a list , we want to treat <nl> + # something like this : <nl> + # <nl> + # I recommend upgrading to version <nl> + # 8 . Oops , now this line is treated <nl> + # as a sub - list . <nl> + # <nl> + # As a single paragraph , despite the fact that the second line starts <nl> + # with a digit - period - space sequence . <nl> + # <nl> + # Whereas when we ' re inside a list ( or sub - list ) , that line will be <nl> + # treated as the start of a sub - list . What a kludge , huh ? This is <nl> + # an aspect of Markdown ' s syntax that ' s hard to parse perfectly <nl> + # without resorting to mind - reading . Perhaps the solution is to <nl> + # change the syntax rules such that sub - lists must start with a <nl> + # starting cardinal number ; e . g . " 1 . " or " a . " . <nl> + <nl> + $ g_list_level + + ; <nl> + <nl> + # trim trailing blank lines : <nl> + $ list_str = ~ s / \ n { 2 , } \ z / \ n / ; <nl> + <nl> + <nl> + $ list_str = ~ s { <nl> + ( \ n ) ? 
# leading line = $ 1 <nl> + ( ^ [ \ t ] * ) # leading whitespace = $ 2 <nl> + ( $ marker_any ) [ \ t ] + # list marker = $ 3 <nl> + ( ( ? s : . + ? ) # list item text = $ 4 <nl> + ( \ n { 1 , 2 } ) ) <nl> + ( ? = \ n * ( \ z | \ 2 ( $ marker_any ) [ \ t ] + ) ) <nl> + } { <nl> + my $ item = $ 4 ; <nl> + my $ leading_line = $ 1 ; <nl> + my $ leading_space = $ 2 ; <nl> + <nl> + if ( $ leading_line or ( $ item = ~ m / \ n { 2 , } / ) ) { <nl> + $ item = _RunBlockGamut ( _Outdent ( $ item ) ) ; <nl> + } <nl> + else { <nl> + # Recursion for sub - lists : <nl> + $ item = _DoLists ( _Outdent ( $ item ) ) ; <nl> + chomp $ item ; <nl> + $ item = _RunSpanGamut ( $ item ) ; <nl> + } <nl> + <nl> + " < li > " . $ item . " < / li > \ n " ; <nl> + } egmx ; <nl> + <nl> + $ g_list_level - - ; <nl> + return $ list_str ; <nl> + } <nl> + <nl> + <nl> + <nl> + sub _DoCodeBlocks { <nl> + # <nl> + # Process Markdown ` < pre > < code > ` blocks . <nl> + # <nl> + <nl> + my $ text = shift ; <nl> + <nl> + $ text = ~ s { <nl> + ( ? : \ n \ n | \ A ) <nl> + ( # $ 1 = the code block - - one or more lines , starting with a space / tab <nl> + ( ? : <nl> + ( ? : [ ] { $ g_tab_width } | \ t ) # Lines must start with a tab or a tab - width of spaces <nl> + . * \ n + <nl> + ) + <nl> + ) <nl> + ( ( ? = ^ [ ] { 0 , $ g_tab_width } \ S ) | \ Z ) # Lookahead for non - space at line - start , or end of doc <nl> + } { <nl> + my $ codeblock = $ 1 ; <nl> + my $ result ; # return value <nl> + <nl> + $ codeblock = _EncodeCode ( _Outdent ( $ codeblock ) ) ; <nl> + $ codeblock = _Detab ( $ codeblock ) ; <nl> + $ codeblock = ~ s / \ A \ n + / / ; # trim leading newlines <nl> + $ codeblock = ~ s / \ s + \ z / / ; # trim trailing whitespace <nl> + <nl> + $ result = " \ n \ n < pre > < code > " . $ codeblock . 
" \ n < / code > < / pre > \ n \ n " ; <nl> + <nl> + $ result ; <nl> + } egmx ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _DoCodeSpans { <nl> + # <nl> + # * Backtick quotes are used for < code > < / code > spans . <nl> + # <nl> + # * You can use multiple backticks as the delimiters if you want to <nl> + # include literal backticks in the code span . So , this input : <nl> + # <nl> + # Just type ` ` foo ` bar ` baz ` ` at the prompt . <nl> + # <nl> + # Will translate to : <nl> + # <nl> + # < p > Just type < code > foo ` bar ` baz < / code > at the prompt . < / p > <nl> + # <nl> + # There ' s no arbitrary limit to the number of backticks you <nl> + # can use as delimters . If you need three consecutive backticks <nl> + # in your code , use four for delimiters , etc . <nl> + # <nl> + # * You can use spaces to get literal backticks at the edges : <nl> + # <nl> + # . . . type ` ` ` bar ` ` ` . . . <nl> + # <nl> + # Turns to : <nl> + # <nl> + # . . . type < code > ` bar ` < / code > . . . <nl> + # <nl> + <nl> + my $ text = shift ; <nl> + <nl> + $ text = ~ s @ <nl> + ( ` + ) # $ 1 = Opening run of ` <nl> + ( . + ? ) # $ 2 = The code block <nl> + ( ? < ! ` ) <nl> + \ 1 # Matching closer <nl> + ( ? ! ` ) <nl> + @ <nl> + my $ c = " $ 2 " ; <nl> + $ c = ~ s / ^ [ \ t ] * / / g ; # leading whitespace <nl> + $ c = ~ s / [ \ t ] * $ / / g ; # trailing whitespace <nl> + $ c = _EncodeCode ( $ c ) ; <nl> + " < code > $ c < / code > " ; <nl> + @ egsx ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _EncodeCode { <nl> + # <nl> + # Encode / escape certain characters inside Markdown code runs . <nl> + # The point is that in code , these characters are literals , <nl> + # and lose their special Markdown meanings . <nl> + # <nl> + local $ _ = shift ; <nl> + <nl> + # Encode all ampersands ; HTML entities are not <nl> + # entities within a Markdown code span . 
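The backtick code-span rule described above can be sketched in Python. The key is the same lookaround trick as the Perl: the closing delimiter must be a backtick run of exactly the same length as the opener, which is how ``` ``foo `bar` baz`` ``` keeps its inner backticks. This sketch uses plain &/</> escaping; the real _EncodeCode additionally hides Markdown-magic characters behind its MD5 escape table:

```python
import re

# (`+) opening run, lazy body, then the identical run (\1) that is
# neither preceded nor followed by another backtick
CODESPAN_RE = re.compile(r"(`+)(.+?)(?<!`)\1(?!`)", re.S)

def do_code_spans(text: str) -> str:
    """Backtick code spans, per _DoCodeSpans: trim edge whitespace
    (so `` ` `` yields a literal backtick) and escape HTML specials."""
    def repl(m):
        code = m.group(2).strip(" \t")
        code = code.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
        return "<code>%s</code>" % code
    return CODESPAN_RE.sub(repl, text)
```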
<nl> + s / & / & amp ; / g ; <nl> + <nl> + # Encode $ ' s , but only if we ' re running under Blosxom . <nl> + # ( Blosxom interpolates Perl variables in article bodies . ) <nl> + { <nl> + no warnings ' once ' ; <nl> + if ( defined ( $ blosxom : : version ) ) { <nl> + s / \ $ / & # 036 ; / g ; <nl> + } <nl> + } <nl> + <nl> + <nl> + # Do the angle bracket song and dance : <nl> + s ! < ! & lt ; ! gx ; <nl> + s ! > ! & gt ; ! gx ; <nl> + <nl> + # Now , escape characters that are magic in Markdown : <nl> + s ! \ * ! $ g_escape_table { ' * ' } ! gx ; <nl> + s ! _ ! $ g_escape_table { ' _ ' } ! gx ; <nl> + s ! { ! $ g_escape_table { ' { ' } ! gx ; <nl> + s ! } ! $ g_escape_table { ' } ' } ! gx ; <nl> + s ! \ [ ! $ g_escape_table { ' [ ' } ! gx ; <nl> + s ! \ ] ! $ g_escape_table { ' ] ' } ! gx ; <nl> + s ! \ \ ! $ g_escape_table { ' \ \ ' } ! gx ; <nl> + <nl> + return $ _ ; <nl> + } <nl> + <nl> + <nl> + sub _DoItalicsAndBold { <nl> + my $ text = shift ; <nl> + <nl> + # < strong > must go first : <nl> + $ text = ~ s { ( \ * \ * | __ ) ( ? = \ S ) ( . + ? [ * _ ] * ) ( ? < = \ S ) \ 1 } <nl> + { < strong > $ 2 < / strong > } gsx ; <nl> + <nl> + $ text = ~ s { ( \ * | _ ) ( ? = \ S ) ( . + ? ) ( ? < = \ S ) \ 1 } <nl> + { < em > $ 2 < / em > } gsx ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _DoBlockQuotes { <nl> + my $ text = shift ; <nl> + <nl> + $ text = ~ s { <nl> + ( # Wrap whole match in $ 1 <nl> + ( <nl> + ^ [ \ t ] * > [ \ t ] ? # ' > ' at the start of a line <nl> + . + \ n # rest of the first line <nl> + ( . + \ n ) * # subsequent consecutive lines <nl> + \ n * # blanks <nl> + ) + <nl> + ) <nl> + } { <nl> + my $ bq = $ 1 ; <nl> + $ bq = ~ s / ^ [ \ t ] * > [ \ t ] ? 
/ / gm ; # trim one level of quoting <nl> + $ bq = ~ s / ^ [ \ t ] + $ / / mg ; # trim whitespace - only lines <nl> + $ bq = _RunBlockGamut ( $ bq ) ; # recurse <nl> + <nl> + $ bq = ~ s / ^ / / g ; <nl> + # These leading spaces screw with < pre > content , so we need to fix that : <nl> + $ bq = ~ s { <nl> + ( \ s * < pre > . + ? < / pre > ) <nl> + } { <nl> + my $ pre = $ 1 ; <nl> + $ pre = ~ s / ^ / / mg ; <nl> + $ pre ; <nl> + } egsx ; <nl> + <nl> + " < blockquote > \ n $ bq \ n < / blockquote > \ n \ n " ; <nl> + } egmx ; <nl> + <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _FormParagraphs { <nl> + # <nl> + # Params : <nl> + # $ text - string to process with html < p > tags <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + # Strip leading and trailing lines : <nl> + $ text = ~ s / \ A \ n + / / ; <nl> + $ text = ~ s / \ n + \ z / / ; <nl> + <nl> + my @ grafs = split ( / \ n { 2 , } / , $ text ) ; <nl> + <nl> + # <nl> + # Wrap < p > tags . <nl> + # <nl> + foreach ( @ grafs ) { <nl> + unless ( defined ( $ g_html_blocks { $ _ } ) ) { <nl> + $ _ = _RunSpanGamut ( $ _ ) ; <nl> + s / ^ ( [ \ t ] * ) / < p > / ; <nl> + $ _ . = " < / p > " ; <nl> + } <nl> + } <nl> + <nl> + # <nl> + # Unhashify HTML blocks <nl> + # <nl> + foreach ( @ grafs ) { <nl> + if ( defined ( $ g_html_blocks { $ _ } ) ) { <nl> + $ _ = $ g_html_blocks { $ _ } ; <nl> + } <nl> + } <nl> + <nl> + return join " \ n \ n " , @ grafs ; <nl> + } <nl> + <nl> + <nl> + sub _EncodeAmpsAndAngles { <nl> + # Smart processing for ampersands and angle brackets that need to be encoded . <nl> + <nl> + my $ text = shift ; <nl> + <nl> + # Ampersand - encoding based entirely on Nat Irons ' s Amputator MT plugin : <nl> + # http : / / bumppo . net / projects / amputator / <nl> + $ text = ~ s / & ( ? ! # ? [ xX ] ? ( ? : [ 0 - 9a - fA - F ] + | \ w + ) ; ) / & amp ; / g ; <nl> + <nl> + # Encode naked < ' s <nl> + $ text = ~ s { < ( ? ! [ a - z / ? \ $ ! 
] ) } { & lt ; } gi ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _EncodeBackslashEscapes { <nl> + # <nl> + # Parameter : String . <nl> + # Returns : The string , with after processing the following backslash <nl> + # escape sequences . <nl> + # <nl> + local $ _ = shift ; <nl> + <nl> + s ! \ \ \ \ ! $ g_escape_table { ' \ \ ' } ! gx ; # Must process escaped backslashes first . <nl> + s ! \ \ ` ! $ g_escape_table { ' ` ' } ! gx ; <nl> + s ! \ \ \ * ! $ g_escape_table { ' * ' } ! gx ; <nl> + s ! \ \ _ ! $ g_escape_table { ' _ ' } ! gx ; <nl> + s ! \ \ \ { ! $ g_escape_table { ' { ' } ! gx ; <nl> + s ! \ \ \ } ! $ g_escape_table { ' } ' } ! gx ; <nl> + s ! \ \ \ [ ! $ g_escape_table { ' [ ' } ! gx ; <nl> + s ! \ \ \ ] ! $ g_escape_table { ' ] ' } ! gx ; <nl> + s ! \ \ \ ( ! $ g_escape_table { ' ( ' } ! gx ; <nl> + s ! \ \ \ ) ! $ g_escape_table { ' ) ' } ! gx ; <nl> + s ! \ \ > ! $ g_escape_table { ' > ' } ! gx ; <nl> + s ! \ \ \ # ! $ g_escape_table { ' # ' } ! gx ; <nl> + s ! \ \ \ + ! $ g_escape_table { ' + ' } ! gx ; <nl> + s ! \ \ \ - ! $ g_escape_table { ' - ' } ! gx ; <nl> + s ! \ \ \ . ! $ g_escape_table { ' . ' } ! gx ; <nl> + s { \ \ ! } { $ g_escape_table { ' ! ' } } gx ; <nl> + <nl> + return $ _ ; <nl> + } <nl> + <nl> + <nl> + sub _DoAutoLinks { <nl> + my $ text = shift ; <nl> + <nl> + $ text = ~ s { < ( ( https ? | ftp ) : [ ^ ' " > \ s ] + ) > } { < a href = " $ 1 " > $ 1 < / a > } gi ; <nl> + <nl> + # Email addresses : < address @ domain . foo > <nl> + $ text = ~ s { <nl> + < <nl> + ( ? : mailto : ) ? <nl> + ( <nl> + [ - . \ w ] + <nl> + \ @ <nl> + [ - a - z0 - 9 ] + ( \ . [ - a - z0 - 9 ] + ) * \ . [ a - z ] + <nl> + ) <nl> + > <nl> + } { <nl> + _EncodeEmailAddress ( _UnescapeSpecialChars ( $ 1 ) ) ; <nl> + } egix ; <nl> + <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _EncodeEmailAddress { <nl> + # <nl> + # Input : an email address , e . g . " foo @ example . 
com " <nl> + # <nl> + # Output : the email address as a mailto link , with each character <nl> + # of the address encoded as either a decimal or hex entity , in <nl> + # the hopes of foiling most address harvesting spam bots . E . g . : <nl> + # <nl> + # < a href = " & # x6D ; & # 97 ; & # 105 ; & # 108 ; & # x74 ; & # 111 ; : & # 102 ; & # 111 ; & # 111 ; & # 64 ; & # 101 ; <nl> + # x & # x61 ; & # 109 ; & # x70 ; & # 108 ; & # x65 ; & # x2E ; & # 99 ; & # 111 ; & # 109 ; " > & # 102 ; & # 111 ; & # 111 ; <nl> + # & # 64 ; & # 101 ; x & # x61 ; & # 109 ; & # x70 ; & # 108 ; & # x65 ; & # x2E ; & # 99 ; & # 111 ; & # 109 ; < / a > <nl> + # <nl> + # Based on a filter by Matthew Wickline , posted to the BBEdit - Talk <nl> + # mailing list : < http : / / tinyurl . com / yu7ue > <nl> + # <nl> + <nl> + my $ addr = shift ; <nl> + <nl> + srand ; <nl> + my @ encode = ( <nl> + sub { ' & # ' . ord ( shift ) . ' ; ' } , <nl> + sub { ' & # x ' . sprintf ( " % X " , ord ( shift ) ) . ' ; ' } , <nl> + sub { shift } , <nl> + ) ; <nl> + <nl> + $ addr = " mailto : " . $ addr ; <nl> + <nl> + $ addr = ~ s { ( . ) } { <nl> + my $ char = $ 1 ; <nl> + if ( $ char eq ' @ ' ) { <nl> + # this * must * be encoded . I insist . <nl> + $ char = $ encode [ int rand 1 ] - > ( $ char ) ; <nl> + } elsif ( $ char ne ' : ' ) { <nl> + # leave ' : ' alone ( to spot mailto : later ) <nl> + my $ r = rand ; <nl> + # roughly 10 % raw , 45 % hex , 45 % dec <nl> + $ char = ( <nl> + $ r > . 9 ? $ encode [ 2 ] - > ( $ char ) : <nl> + $ r < . 45 ? $ encode [ 1 ] - > ( $ char ) : <nl> + $ encode [ 0 ] - > ( $ char ) <nl> + ) ; <nl> + } <nl> + $ char ; <nl> + } gex ; <nl> + <nl> + $ addr = qq { < a href = " $ addr " > $ addr < / a > } ; <nl> + $ addr = ~ s { " > . + ? : } { " > } ; # strip the mailto : from the visible part <nl> + <nl> + return $ addr ; <nl> + } <nl> + <nl> + <nl> + sub _UnescapeSpecialChars { <nl> + # <nl> + # Swap back in all the special characters we ' ve hidden . 
<nl> + # <nl> + my $ text = shift ; <nl> + <nl> + while ( my ( $ char , $ hash ) = each ( % g_escape_table ) ) { <nl> + $ text = ~ s / $ hash / $ char / g ; <nl> + } <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _TokenizeHTML { <nl> + # <nl> + # Parameter : String containing HTML markup . <nl> + # Returns : Reference to an array of the tokens comprising the input <nl> + # string . Each token is either a tag ( possibly with nested , <nl> + # tags contained therein , such as < a href = " < MTFoo > " > , or a <nl> + # run of text between tags . Each element of the array is a <nl> + # two - element array ; the first is either ' tag ' or ' text ' ; <nl> + # the second is the actual value . <nl> + # <nl> + # <nl> + # Derived from the _tokenize ( ) subroutine from Brad Choate ' s MTRegex plugin . <nl> + # < http : / / www . bradchoate . com / past / mtregex . php > <nl> + # <nl> + <nl> + my $ str = shift ; <nl> + my $ pos = 0 ; <nl> + my $ len = length $ str ; <nl> + my @ tokens ; <nl> + <nl> + my $ depth = 6 ; <nl> + my $ nested_tags = join ( ' | ' , ( ' ( ? : < [ a - z / ! $ ] ( ? : [ ^ < > ] ' ) x $ depth ) . ( ' ) * > ) ' x $ depth ) ; <nl> + my $ match = qr / ( ? s : < ! ( - - . * ? - - \ s * ) + > ) | # comment <nl> + ( ? s : < \ ? . * ? \ ? 
> ) | # processing instruction <nl> + $ nested_tags / ix ; # nested tags <nl> + <nl> + while ( $ str = ~ m / ( $ match ) / g ) { <nl> + my $ whole_tag = $ 1 ; <nl> + my $ sec_start = pos $ str ; <nl> + my $ tag_start = $ sec_start - length $ whole_tag ; <nl> + if ( $ pos < $ tag_start ) { <nl> + push @ tokens , [ ' text ' , substr ( $ str , $ pos , $ tag_start - $ pos ) ] ; <nl> + } <nl> + push @ tokens , [ ' tag ' , $ whole_tag ] ; <nl> + $ pos = pos $ str ; <nl> + } <nl> + push @ tokens , [ ' text ' , substr ( $ str , $ pos , $ len - $ pos ) ] if $ pos < $ len ; <nl> + \ @ tokens ; <nl> + } <nl> + <nl> + <nl> + sub _Outdent { <nl> + # <nl> + # Remove one level of line - leading tabs or spaces <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + $ text = ~ s / ^ ( \ t | [ ] { 1 , $ g_tab_width } ) / / gm ; <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + sub _Detab { <nl> + # <nl> + # Cribbed from a post by Bart Lateur : <nl> + # < http : / / www . nntp . perl . org / group / perl . macperl . anyperl / 154 > <nl> + # <nl> + my $ text = shift ; <nl> + <nl> + $ text = ~ s { ( . * ? ) \ t } { $ 1 . ( ' ' x ( $ g_tab_width - length ( $ 1 ) % $ g_tab_width ) ) } ge ; <nl> + return $ text ; <nl> + } <nl> + <nl> + <nl> + 1 ; <nl> + <nl> + __END__ <nl> + <nl> + <nl> + = pod <nl> + <nl> + = head1 NAME <nl> + <nl> + B < Markdown > <nl> + <nl> + <nl> + = head1 SYNOPSIS <nl> + <nl> + B < Markdown . pl > [ B < - - html4tags > ] [ B < - - version > ] [ B < - shortversion > ] <nl> + [ I < file > . . . ] <nl> + <nl> + <nl> + = head1 DESCRIPTION <nl> + <nl> + Markdown is a text - to - HTML filter ; it translates an easy - to - read / <nl> + easy - to - write structured text format into HTML . Markdown ' s text format <nl> + is most similar to that of plain text email , and supports features such <nl> + as headers , * emphasis * , code blocks , blockquotes , and links . 
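The _Detab routine above (the Bart Lateur trick cited in the source) pads each tab out to the next tab stop rather than replacing it with a fixed number of spaces. A Python sketch of the same behavior (our function name; the Perl does it in a single /ge substitution, here an explicit per-line loop):

```python
import re

def detab(text: str, tab_width: int = 4) -> str:
    """Expand each tab to enough spaces to reach the next multiple
    of tab_width, measured from the start of the line, as _Detab does."""
    def repl(m):
        pre = m.group(1)
        return pre + " " * (tab_width - len(pre) % tab_width)
    # replace the first tab on every line, repeating until none remain;
    # earlier expansions keep the column arithmetic correct
    while "\t" in text:
        text = re.sub(r"^([^\t\n]*)\t", repl, text, flags=re.M)
    return text
```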
<nl> + <nl> + Markdown ' s syntax is designed not as a generic markup language , but <nl> + specifically to serve as a front - end to ( X ) HTML . You can use span - level <nl> + HTML tags anywhere in a Markdown document , and you can use block level <nl> + HTML tags ( like < div > and < table > as well ) . <nl> + <nl> + For more information about Markdown ' s syntax , see : <nl> + <nl> + http : / / daringfireball . net / projects / markdown / <nl> + <nl> + <nl> + = head1 OPTIONS <nl> + <nl> + Use " - - " to end switch parsing . For example , to open a file named " - z " , use : <nl> + <nl> + Markdown . pl - - - z <nl> + <nl> + = over 4 <nl> + <nl> + <nl> + = item B < - - html4tags > <nl> + <nl> + Use HTML 4 style for empty element tags , e . g . : <nl> + <nl> + < br > <nl> + <nl> + instead of Markdown ' s default XHTML style tags , e . g . : <nl> + <nl> + < br / > <nl> + <nl> + <nl> + = item B < - v > , B < - - version > <nl> + <nl> + Display Markdown ' s version number and copyright information . <nl> + <nl> + <nl> + = item B < - s > , B < - - shortversion > <nl> + <nl> + Display the short - form version number . <nl> + <nl> + <nl> + = back <nl> + <nl> + <nl> + <nl> + = head1 BUGS <nl> + <nl> + To file bug reports or feature requests ( other than topics listed in the <nl> + Caveats section above ) please send email to : <nl> + <nl> + support @ daringfireball . net <nl> + <nl> + Please include with your report : ( 1 ) the example input ; ( 2 ) the output <nl> + you expected ; ( 3 ) the output Markdown actually produced . <nl> + <nl> + <nl> + = head1 VERSION HISTORY <nl> + <nl> + See the readme file for detailed release notes for this version . <nl> + <nl> + 1 . 0 . 1 - 14 Dec 2004 <nl> + <nl> + 1 . 0 - 28 Aug 2004 <nl> + <nl> + <nl> + = head1 AUTHOR <nl> + <nl> + John Gruber <nl> + http : / / daringfireball . net <nl> + <nl> + PHP port and other contributions by Michel Fortin <nl> + http : / / michelf . 
com <nl> + <nl> + <nl> + = head1 COPYRIGHT AND LICENSE <nl> + <nl> + Copyright ( c ) 2003 - 2004 John Gruber <nl> + < http : / / daringfireball . net / > <nl> + All rights reserved . <nl> + <nl> + Redistribution and use in source and binary forms , with or without <nl> + modification , are permitted provided that the following conditions are <nl> + met : <nl> + <nl> + * Redistributions of source code must retain the above copyright notice , <nl> + this list of conditions and the following disclaimer . <nl> + <nl> + * Redistributions in binary form must reproduce the above copyright <nl> + notice , this list of conditions and the following disclaimer in the <nl> + documentation and / or other materials provided with the distribution . <nl> + <nl> + * Neither the name " Markdown " nor the names of its contributors may <nl> + be used to endorse or promote products derived from this software <nl> + without specific prior written permission . <nl> + <nl> + This software is provided by the copyright holders and contributors " as <nl> + is " and any express or implied warranties , including , but not limited <nl> + to , the implied warranties of merchantability and fitness for a <nl> + particular purpose are disclaimed . In no event shall the copyright owner <nl> + or contributors be liable for any direct , indirect , incidental , special , <nl> + exemplary , or consequential damages ( including , but not limited to , <nl> + procurement of substitute goods or services ; loss of use , data , or <nl> + profits ; or business interruption ) however caused and on any theory of <nl> + liability , whether in contract , strict liability , or tort ( including <nl> + negligence or otherwise ) arising in any way out of the use of this <nl> + software , even if advised of the possibility of such damage . <nl> + <nl> + = cut <nl> new file mode 100755 <nl> index 00000000000 . . 592a56857c8 <nl> mmm / dev / null <nl> ppp b / Doxygen / Scripts / md2html . sh <nl> <nl> + # ! 
/ bin / sh <nl> + <nl> + MARKDOWN = " ` dirname $ 0 ` / Markdown . pl " <nl> + INPUT = " $ 1 " <nl> + <nl> + if test - z " $ INPUT " ; then <nl> + echo " usage : $ 0 < file . md > " <nl> + exit 1 <nl> + fi <nl> + <nl> + OUTPUT = " ` dirname $ INPUT ` / ` basename $ INPUT . md ` . html " <nl> + <nl> + perl " $ MARKDOWN " " $ INPUT " \ <nl> + | sed - r - e " s / href = \ " ( [ ^ \ " # ] + ) ( [ \ " # ] ) / href = \ " \ 1 \ . html \ 2 / g " \ <nl> + | sed - e " s / href = \ " wiki \ / / href = \ " / g " \ <nl> + | sed - e " s / # wiki - / # / g " > $ OUTPUT <nl> mmm a / Makefile . doxygen <nl> ppp b / Makefile . doxygen <nl> Doxygen / . setup - directories : <nl> @ touch $ @ <nl> <nl> Doxygen / js / % . c : @ srcdir @ / js / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> <nl> Doxygen / js / system / % . c : @ srcdir @ / js / system / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> <nl> Doxygen / js / modules / % . c : @ srcdir @ / js / system / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> <nl> Doxygen / js / bootstrap / % . c : @ srcdir @ / js / bootstrap / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> <nl> Doxygen / xml / % . md : Doxygen / xml / % . xml <nl> - python @ top_srcdir @ / Doxygen / Scripts / xml2md . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / xml2md . 
py $ < > $ @ <nl> <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # # Doxygen <nl> doxygen : Doxygen / avocado . doxy $ ( DOXYGEN ) <nl> <nl> wiki : $ ( WIKI ) <nl> @ test - d Doxygen / wiki | | mkdir Doxygen / wiki <nl> - for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / pandoc . sh $ $ w ; done <nl> - for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / md2html . sh ` echo $ $ w | sed - e ' s : / xml / : / wiki / : g ' ` ; done <nl> + @ for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / pandoc . sh $ $ w ; done <nl> + @ for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / md2html . sh ` echo $ $ w | sed - e ' s : / xml / : / wiki / : g ' ` ; done <nl> <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # # CLEANUP <nl> mmm a / Makefile . files <nl> ppp b / Makefile . files <nl> DOXYGEN = \ <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> <nl> WIKI = \ <nl> + Doxygen / xml / AQL . md \ <nl> Doxygen / xml / Basics . md \ <nl> + Doxygen / xml / DBAdmin . md \ <nl> + Doxygen / xml / SimpleQueries . md \ <nl> Doxygen / xml / Actions . md \ <nl> Doxygen / xml / AvocadoScript . md \ <nl> Doxygen / xml / CommandLine . md \ <nl> WIKI = \ <nl> Doxygen / xml / CommandLineScheduler . md \ <nl> Doxygen / xml / Compiling . md \ <nl> Doxygen / xml / DefineAction . md \ <nl> - Doxygen / xml / GeoCoordinates . md \ <nl> Doxygen / xml / Graphs . md \ <nl> Doxygen / xml / HttpInterface . md \ <nl> + Doxygen / xml / IndexUsage . md \ <nl> Doxygen / xml / InstallManual . md \ <nl> Doxygen / xml / JSModuleActions . md \ <nl> Doxygen / xml / JSModuleConsole . 
md \ <nl> WIKI = \ <nl> Doxygen / xml / RefManual . md \ <nl> Doxygen / xml / RestDocument . md \ <nl> Doxygen / xml / RestSystem . md \ <nl> - Doxygen / xml / SimpleQueries . md \ <nl> Doxygen / xml / StartStop . md \ <nl> Doxygen / xml / UserManual . md \ <nl> Doxygen / xml / jsUnity . md <nl> mmm a / Makefile . in <nl> ppp b / Makefile . in <nl> DOXYGEN = \ <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> WIKI = \ <nl> + Doxygen / xml / AQL . md \ <nl> Doxygen / xml / Basics . md \ <nl> + Doxygen / xml / DBAdmin . md \ <nl> + Doxygen / xml / SimpleQueries . md \ <nl> Doxygen / xml / Actions . md \ <nl> Doxygen / xml / AvocadoScript . md \ <nl> Doxygen / xml / CommandLine . md \ <nl> WIKI = \ <nl> Doxygen / xml / CommandLineScheduler . md \ <nl> Doxygen / xml / Compiling . md \ <nl> Doxygen / xml / DefineAction . md \ <nl> - Doxygen / xml / GeoCoordinates . md \ <nl> Doxygen / xml / Graphs . md \ <nl> Doxygen / xml / HttpInterface . md \ <nl> + Doxygen / xml / IndexUsage . md \ <nl> Doxygen / xml / InstallManual . md \ <nl> Doxygen / xml / JSModuleActions . md \ <nl> Doxygen / xml / JSModuleConsole . md \ <nl> WIKI = \ <nl> Doxygen / xml / RefManual . md \ <nl> Doxygen / xml / RestDocument . md \ <nl> Doxygen / xml / RestSystem . md \ <nl> - Doxygen / xml / SimpleQueries . md \ <nl> Doxygen / xml / StartStop . md \ <nl> Doxygen / xml / UserManual . md \ <nl> Doxygen / xml / jsUnity . md <nl> Doxygen / . setup - directories : <nl> @ touch $ @ <nl> <nl> Doxygen / js / % . c : @ srcdir @ / js / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . 
py $ < > $ @ <nl> <nl> Doxygen / js / system / % . c : @ srcdir @ / js / system / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> <nl> Doxygen / js / modules / % . c : @ srcdir @ / js / system / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> <nl> Doxygen / js / bootstrap / % . c : @ srcdir @ / js / bootstrap / % . js Doxygen / . setup - directories <nl> - python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / js2doxy . py $ < > $ @ <nl> <nl> Doxygen / xml / % . md : Doxygen / xml / % . xml <nl> - python @ top_srcdir @ / Doxygen / Scripts / xml2md . py $ < > $ @ <nl> + @ python @ top_srcdir @ / Doxygen / Scripts / xml2md . py $ < > $ @ <nl> <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # <nl> doxygen : Doxygen / avocado . doxy $ ( DOXYGEN ) <nl> <nl> wiki : $ ( WIKI ) <nl> @ test - d Doxygen / wiki | | mkdir Doxygen / wiki <nl> - for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / pandoc . sh $ $ w ; done <nl> - for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / md2html . sh ` echo $ $ w | sed - e ' s : / xml / : / wiki / : g ' ` ; done <nl> + @ for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / pandoc . sh $ $ w ; done <nl> + @ for w in $ ( WIKI ) ; do @ top_srcdir @ / Doxygen / Scripts / md2html . sh ` echo $ $ w | sed - e ' s : / xml / : / wiki / : g ' ` ; done <nl> <nl> . 
setup - directories : <nl> @ test - d js | | mkdir js <nl> mmm a / RestServer / ActionDispatcherThread . cpp <nl> ppp b / RestServer / ActionDispatcherThread . cpp <nl> void ActionDispatcherThread : : run ( ) { <nl> _context - > Enter ( ) ; <nl> <nl> DispatcherThread : : run ( ) ; <nl> + <nl> + / / free memory <nl> + TRI_FreeActionsVocBase ( ) ; <nl> <nl> _context - > Exit ( ) ; <nl> _context . Dispose ( ) ; <nl> - <nl> + <nl> _isolate - > Exit ( ) ; <nl> _isolate - > Dispose ( ) ; <nl> } <nl> void ActionDispatcherThread : : initialise ( ) { <nl> LOGGER_FATAL < < " cannot load actions from directory ' " < < loader - > getDirectory ( ) < < " ' " ; <nl> } <nl> } <nl> - <nl> + <nl> / / and return from the context <nl> _context - > Exit ( ) ; <nl> _isolate - > Exit ( ) ; <nl> mmm a / RestServer / ActionDispatcherThread . h <nl> ppp b / RestServer / ActionDispatcherThread . h <nl> <nl> # include < v8 . h > <nl> <nl> # include " V8 / JSLoader . h " <nl> + # include " V8 / v8 - globals . h " <nl> # include " VocBase / vocbase . h " <nl> <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> mmm a / RestServer / avocado . cpp <nl> ppp b / RestServer / avocado . 
cpp <nl> using namespace triagens : : avocado ; <nl> / / / < ol > <nl> / / / < li > @ ref GeoCoordinates <nl> / / / < / li > <nl> - / / / < li > @ ref Pagination <nl> - / / / < / li > <nl> / / / < / ol > <nl> / / / < / li > <nl> / / / < li > Vertices , Edges , and Graphs <nl> using namespace triagens : : avocado ; <nl> / / / < / li > <nl> / / / < / ol > <nl> / / / < / li > <nl> + / / / < li > @ ref DBAdmin <nl> + / / / < ol > <nl> + / / / < li > @ ref DBAdminDurability <nl> + / / / < / li > <nl> + / / / < li > @ ref DBAdminIndex <nl> + / / / < ol > <nl> + / / / < li > @ ref DBAdminIndexGeo <nl> + / / / < / ol > <nl> + / / / < / li > <nl> + / / / < / ol > <nl> + / / / < / li > <nl> / / / < li > Advanced Topics <nl> / / / < ol > <nl> / / / < li > Actions <nl> using namespace triagens : : avocado ; <nl> / / / Opens a debug shell instead of starting the HTTP server . <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ page DBAdminTOC <nl> + / / / <nl> + / / / < ol > <nl> + / / / < li > @ ref DBAdminDurability <nl> + / / / < / li > <nl> + / / / < li > @ ref DBAdminIndex <nl> + / / / < ol > <nl> + / / / < li > @ ref DBAdminIndexGeo <nl> + / / / < / ol > <nl> + / / / < / li > <nl> + / / / < / ol > <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ page DBAdmin Database Administration <nl> + / / / <nl> + / / / < hr > <nl> + / / / @ copydetails DBAdminTOC <nl> + / / / < hr > <nl> 
+ / / / <nl> + / / / @ section DBAdminDurability Durability <nl> + / / / <nl> + / / / @ subsection DBAdminDurability1 Mostly Memory / Durability <nl> + / / / <nl> + / / / Documents are stored in main memory ; memory - mapped files are used to <nl> + / / / persist them . This gives the operating system the option of swapping <nl> + / / / sparsely used areas out of main memory . By default , these memory - mapped <nl> + / / / files are synced frequently , so all documents are stored durably . <nl> + / / / <nl> + / / / @ subsection DBAdminDurability2 AppendOnly / MVCC <nl> + / / / <nl> + / / / Instead of overwriting an existing document , a completely new version of <nl> + / / / the document is generated . This has two benefits : <nl> + / / / <nl> + / / / - Documents can be stored coherently and compactly in main memory . <nl> + / / / - Old versions are preserved , so isolated write and read transactions can <nl> + / / / access them in parallel . <nl> + / / / <nl> + / / / Versions that are no longer needed are collected as garbage . Garbage <nl> + / / / collection runs asynchronously , in parallel with other operations .
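The append-only / MVCC behaviour described above can be sketched as a toy model. This is only an illustration of the idea, not ArangoDB's actual storage code — the class and method names here are invented:

```python
class AppendOnlyStore:
    """Toy append-only document store: an update appends a new revision
    instead of overwriting, so a reader pinned to an older revision still
    sees a consistent snapshot (MVCC)."""

    def __init__(self):
        self._versions = {}  # key -> list of (revision, value), oldest first
        self._revision = 0

    def save(self, key, value):
        # Never overwrite: append a completely new version of the document.
        self._revision += 1
        self._versions.setdefault(key, []).append((self._revision, value))
        return self._revision

    def read(self, key, at_revision=None):
        # Return the newest version visible at the given revision.
        for revision, value in reversed(self._versions.get(key, [])):
            if at_revision is None or revision <= at_revision:
                return value
        return None

    def collect_garbage(self, oldest_reader_revision):
        # Obsolete versions no reader can still see are dropped; the newest
        # version of each document is always kept.
        for key, versions in self._versions.items():
            kept = [v for v in versions[:-1] if v[0] >= oldest_reader_revision]
            self._versions[key] = kept + [versions[-1]]
```

An older read transaction would call `read(key, at_revision=r)` with the revision it started at; an asynchronous collector would pass the smallest such revision still in use to `collect_garbage`.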
<nl> + / / / <nl> + / / / @ subsection DBAdminDurability3 Configuration <nl> + / / / <nl> + / / / @ copydetails JS_ParameterVocbaseCol <nl> + / / / <nl> + / / / @ section DBAdminIndex Index Management <nl> + / / / <nl> + / / / @ subsection DBAdminIndexHash Hash Indexes <nl> + / / / <nl> + / / / @ copydetails JS_EnsureHashIndexVocbaseCol <nl> + / / / <nl> + / / / @ subsection DBAdminIndexGeo Geo Indexes <nl> + / / / <nl> + / / / @ copydetails JS_EnsureGeoIndexVocbaseCol <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> / / - - SECTION - - INSTALLATION MANUAL <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> int main ( int argc , char * argv [ ] ) { <nl> <nl> / / Local Variables : <nl> / / mode : outline - minor <nl> - / / outline - regexp : " ^ \ \ ( / / / @ brief \ \ | / / / { @ inheritDoc } \ \ | / / / @ addtogroup \ \ | / / - - SECTION - - \ \ | / / / @ \ \ } \ \ ) " <nl> + / / outline - regexp : " ^ \ \ ( / / / @ brief \ \ | / / / { @ inheritDoc } \ \ | / / / @ addtogroup \ \ | / / - - SECTION - - \ \ | / / / @ page \ \ | / / / @ \ \ } \ \ ) " <nl> / / End : <nl> mmm a / Scheduler / ListenTask . cpp <nl> ppp b / Scheduler / ListenTask . cpp <nl> ListenTask : : ListenTask ( struct addrinfo * aip , bool reuseAddress ) <nl> <nl> <nl> ListenTask : : ~ ListenTask ( ) { <nl> - close ( listenSocket ) ; <nl> + if ( listenSocket ! = - 1 ) { <nl> + close ( listenSocket ) ; <nl> + } <nl> } <nl> <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> mmm a / V8 / v8 - actions . cpp <nl> ppp b / V8 / v8 - actions . cpp <nl> void TRI_CreateActionVocBase ( string const & name , <nl> LOG_DEBUG ( " created action ' % s ' for queue % s " , url . c_str ( ) , queue . 
c_str ( ) ) ; <nl> } <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief free all existing actions <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + void TRI_FreeActionsVocBase ( void ) { <nl> + TRI_v8_global_t * v8g ; <nl> + <nl> + v8g = ( TRI_v8_global_t * ) v8 : : Isolate : : GetCurrent ( ) - > GetData ( ) ; <nl> + <nl> + WRITE_LOCKER ( ActionsLock ) ; <nl> + WRITE_LOCKER ( v8g - > ActionsLock ) ; <nl> + <nl> + map < string , TRI_action_t * > : : iterator it ; <nl> + <nl> + for ( it = Actions . begin ( ) ; it ! = Actions . end ( ) ; it + + ) { <nl> + delete ( * it ) . second ; <nl> + } <nl> + Actions . clear ( ) ; <nl> + } <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief looks up an action <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / V8 / v8 - actions . h <nl> ppp b / V8 / v8 - actions . 
h <nl> void TRI_CreateActionVocBase ( std : : string const & name , <nl> TRI_action_options_t ao , <nl> v8 : : Handle < v8 : : Function > callback ) ; <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief free all existing actions <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + void TRI_FreeActionsVocBase ( void ) ; <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief looks up an action <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / V8 / v8 - vocbase . cpp <nl> ppp b / V8 / v8 - vocbase . cpp <nl> static v8 : : Handle < v8 : : Value > JS_DropIndexVocbaseCol ( v8 : : Arguments const & argv ) <nl> / / / In case that the index was successfully created , the index identifier <nl> / / / is returned . <nl> / / / <nl> - / / / @ verbinclude fluent10 <nl> - / / / <nl> / / / @ FUN { ensureGeoIndex ( @ FA { location } , @ LIT { true } ) } <nl> / / / <nl> / / / As above , with the exception that the order within the list is longitude <nl> static v8 : : Handle < v8 : : Value > JS_DropIndexVocbaseCol ( v8 : : Arguments const & argv ) <nl> / / / In case that the index was successfully created , the index identifier <nl> / / / is returned .
<nl> / / / <nl> - / / / @ verbinclude fluent14 <nl> + / / / @ EXAMPLES <nl> + / / / <nl> + / / / Create a geo index for a list attribute : <nl> + / / / <nl> + / / / @ verbinclude admin3 <nl> + / / / <nl> + / / / Create a geo index for a hash array attribute : <nl> + / / / <nl> + / / / @ verbinclude admin4 <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> static v8 : : Handle < v8 : : Value > JS_EnsureGeoIndexVocbaseCol ( v8 : : Arguments const & argv ) { <nl> static v8 : : Handle < v8 : : Value > JS_EnsureGeoIndexVocbaseCol ( v8 : : Arguments const & a <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief ensures that a hash index exists <nl> / / / <nl> - / / / @ FUN { ensureHashIndex ( @ FA { field1 } , @ FA { field2 } , . . . , @ FA { fieldn } ) } <nl> + / / / @ FUN { ensureUniqueConstraint ( @ FA { field1 } , @ FA { field2 } , . . . , @ FA { fieldn } ) } <nl> / / / <nl> / / / Creates a hash index on all documents using attributes as paths to the <nl> / / / fields . At least one attribute must be given . The value of this attribute <nl> - / / / must be a list . All documents , which do not have the attribute path or with <nl> - / / / ore or more values that are not suitable , are ignored . <nl> + / / / must be a list . All documents which do not have the attribute path , or where <nl> + / / / one or more values are not suitable , are ignored . <nl> / / / <nl> / / / In case that the index was successfully created , the index identifier <nl> / / / is returned .
<nl> / / / <nl> - / / / @ verbinclude fluent14 <nl> + / / / @ EXAMPLES <nl> + / / / <nl> + / / / @ verbinclude admin5 <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - static v8 : : Handle < v8 : : Value > JS_EnsureHashIndexVocbaseCol ( v8 : : Arguments const & argv ) { <nl> + static v8 : : Handle < v8 : : Value > JS_EnsureUniqueConstraintVocbaseCol ( v8 : : Arguments const & argv ) { <nl> v8 : : HandleScope scope ; <nl> v8 : : Handle < v8 : : String > err ; <nl> <nl> static v8 : : Handle < v8 : : Value > JS_EnsureHashIndexVocbaseCol ( v8 : : Arguments const & <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> <nl> if ( argv . Length ( ) = = 0 ) { <nl> - return scope . Close ( v8 : : ThrowException ( v8 : : String : : New ( " one or more string parameters required for the ensureHashIndex ( . . . ) command " ) ) ) ; <nl> + return scope . Close ( v8 : : ThrowException ( v8 : : String : : New ( " one or more string parameters required for the ensureUniqueConstraint ( . . . ) command " ) ) ) ; <nl> } <nl> <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> static v8 : : Handle < v8 : : Value > JS_EnsureHashIndexVocbaseCol ( v8 : : Arguments const & <nl> v8 : : Handle < v8 : : Value > argument = argv [ j ] ; <nl> <nl> if ( ! argument - > IsString ( ) ) { <nl> - errorString = " invalid parameter passed to ensureHashIndex ( . . . ) command " ; <nl> + errorString = " invalid parameter passed to ensureUniqueConstraint ( . . . 
) command " ; <nl> ok = false ; <nl> break ; <nl> } <nl> static v8 : : Handle < v8 : : Value > JS_EnsureHashIndexVocbaseCol ( v8 : : Arguments const & <nl> char * cArgument = * argumentString = = 0 ? 0 : TRI_DuplicateString ( * argumentString ) ; <nl> <nl> if ( cArgument = = NULL ) { <nl> - errorString = " insuffient memory to complete ensureHashIndex ( . . . ) command " ; <nl> + errorString = " insufficient memory to complete ensureUniqueConstraint ( . . . ) command " ; <nl> ok = false ; <nl> break ; <nl> } <nl> <nl> - TRI_PushBackVector ( & attributes , & cArgument ) ; <nl> + TRI_PushBackVector ( & attributes , & cArgument ) ; <nl> } <nl> <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> static v8 : : Handle < v8 : : Value > JS_EnsureHashIndexVocbaseCol ( v8 : : Arguments const & <nl> char * right = * ( ( char * * ) ( TRI_AtVector ( & attributes , k ) ) ) ; <nl> <nl> if ( TRI_EqualString ( left , right ) ) { <nl> - errorString = " duplicate parameters sent to ensureHashIndex ( . . . ) command " ; <nl> + errorString = " duplicate parameters sent to ensureUniqueConstraint ( . . . ) command " ; <nl> ok = false ; <nl> break ; <nl> } <nl> static v8 : : Handle < v8 : : Value > JS_LoadVocbaseCol ( v8 : : Arguments const & argv ) { <nl> / / / <nl> / / / @ FUN { parameter ( ) } <nl> / / / <nl> - / / / Returns the collection parameter . <nl> + / / / Returns an object containing all collection parameters . <nl> / / / <nl> / / / - @ LIT { waitForSync } : If @ LIT { true } creating a document will only return <nl> / / / after the data was synced to disk . <nl> static v8 : : Handle < v8 : : Value > JS_LoadVocbaseCol ( v8 : : Arguments const & argv ) { <nl> / / / <nl> / / / @ FUN { parameter ( @ FA { parameter - array } ) } <nl> / / / <nl> - / / / Changes the collection parameter . <nl> + / / / Changes the collection parameters .
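The argument validation above — at least one parameter, strings only, and no duplicate attribute paths (checked pairwise with TRI_EqualString) — can be sketched in a few lines. The function name below is illustrative, not part of the server's API:

```python
def validate_constraint_fields(fields):
    """Mirror the checks in the unique-constraint code path: require at
    least one field, reject non-string values, reject duplicate paths."""
    if not fields:
        raise ValueError("one or more string parameters required")
    for field in fields:
        if not isinstance(field, str):
            raise ValueError("invalid parameter")
    # Pairwise O(n^2) comparison, like the TRI_EqualString loop above.
    for j in range(len(fields)):
        for k in range(j + 1, len(fields)):
            if fields[j] == fields[k]:
                raise ValueError("duplicate parameters")
    return list(fields)
```

For the handful of attributes an index declares, the quadratic scan is simpler than building a set and just as fast in practice.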
<nl> / / / <nl> / / / @ EXAMPLES <nl> / / / <nl> - / / / Read the parameter <nl> + / / / Read all parameters <nl> / / / <nl> / / / @ verbinclude admin1 <nl> / / / <nl> - / / / Write the parameter <nl> + / / / Change a parameter <nl> / / / <nl> / / / @ verbinclude admin2 <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> void TRI_InitV8VocBridge ( v8 : : Handle < v8 : : Context > context , TRI_vocbase_t * vocbas <nl> v8 : : Handle < v8 : : String > DropIndexFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " dropIndex " ) ) ; <nl> v8 : : Handle < v8 : : String > EdgesFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " edges " ) ) ; <nl> v8 : : Handle < v8 : : String > EnsureGeoIndexFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " ensureGeoIndex " ) ) ; <nl> - v8 : : Handle < v8 : : String > EnsureHashIndexFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " ensureHashIndex " ) ) ; <nl> v8 : : Handle < v8 : : String > EnsureMultiHashIndexFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " ensureMultiHashIndex " ) ) ; <nl> - v8 : : Handle < v8 : : String > EnsureSkiplistIndexFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " ensureSLIndex " ) ) ; <nl> v8 : : Handle < v8 : : String > EnsureMultiSkiplistIndexFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " ensureMultiSLIndex " ) ) ; <nl> + v8 : : Handle < v8 : : String > EnsureSkiplistIndexFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " ensureSLIndex " ) ) ; <nl> + v8 : : Handle < v8 : : String > EnsureUniqueConstraintFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " ensureUniqueConstraint " ) ) ; <nl> v8 : : Handle < 
v8 : : String > ExecuteFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " execute " ) ) ; <nl> v8 : : Handle < v8 : : String > FiguresFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " figures " ) ) ; <nl> v8 : : Handle < v8 : : String > GetIndexesFuncName = v8 : : Persistent < v8 : : String > : : New ( v8 : : String : : New ( " getIndexes " ) ) ; <nl> void TRI_InitV8VocBridge ( v8 : : Handle < v8 : : Context > context , TRI_vocbase_t * vocbas <nl> rt - > Set ( DocumentFuncName , v8 : : FunctionTemplate : : New ( JS_DocumentQuery ) ) ; <nl> rt - > Set ( DropIndexFuncName , v8 : : FunctionTemplate : : New ( JS_DropIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureGeoIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureGeoIndexVocbaseCol ) ) ; <nl> - rt - > Set ( EnsureHashIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureHashIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureMultiHashIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureMultiHashIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureMultiSkiplistIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureMultiSkiplistIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureSkiplistIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureSkiplistIndexVocbaseCol ) ) ; <nl> + rt - > Set ( EnsureUniqueConstraintFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureUniqueConstraintVocbaseCol ) ) ; <nl> rt - > Set ( FiguresFuncName , v8 : : FunctionTemplate : : New ( JS_FiguresVocbaseCol ) ) ; <nl> rt - > Set ( GetIndexesFuncName , v8 : : FunctionTemplate : : New ( JS_GetIndexesVocbaseCol ) ) ; <nl> rt - > Set ( LoadFuncName , v8 : : FunctionTemplate : : New ( JS_LoadVocbaseCol ) ) ; <nl> void TRI_InitV8VocBridge ( v8 : : Handle < v8 : : Context > context , TRI_vocbase_t * vocbas <nl> rt - > Set ( DocumentFuncName , v8 : : FunctionTemplate : : New ( JS_DocumentQuery ) ) ; <nl> rt - > Set ( DropIndexFuncName , v8 : : FunctionTemplate : : New ( 
JS_DropIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureGeoIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureGeoIndexVocbaseCol ) ) ; <nl> - rt - > Set ( EnsureHashIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureHashIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureMultiHashIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureMultiHashIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureMultiSkiplistIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureMultiSkiplistIndexVocbaseCol ) ) ; <nl> rt - > Set ( EnsureSkiplistIndexFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureSkiplistIndexVocbaseCol ) ) ; <nl> + rt - > Set ( EnsureUniqueConstraintFuncName , v8 : : FunctionTemplate : : New ( JS_EnsureUniqueConstraintVocbaseCol ) ) ; <nl> rt - > Set ( FiguresFuncName , v8 : : FunctionTemplate : : New ( JS_FiguresVocbaseCol ) ) ; <nl> rt - > Set ( GetIndexesFuncName , v8 : : FunctionTemplate : : New ( JS_GetIndexesVocbaseCol ) ) ; <nl> rt - > Set ( LoadFuncName , v8 : : FunctionTemplate : : New ( JS_LoadVocbaseCol ) ) ; <nl> mmm a / V8 / v8 - vocbase . h <nl> ppp b / V8 / v8 - vocbase . h <nl> <nl> / / / - look at all the @ ref JavaScriptFunc <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> - / / / @ page GeoCoordinates Geo Coordinates <nl> - / / / <nl> - / / / @ section EnsureGeoIndex Create a Geo - Spatial Index <nl> - / / / <nl> - / / / First create an index . 
-///
-/// @copydetails JS_EnsureGeoIndexVocbaseCol
-////////////////////////////////////////////////////////////////////////////////
-
 ////////////////////////////////////////////////////////////////////////////////
 /// @page JavaScriptFuncIndex JavaScript Function Index
 ///
--- a/VocBase/collection.c
+++ b/VocBase/collection.c
bool TRI_IterateCollection (TRI_collection_t* collection,
 ////////////////////////////////////////////////////////////////////////////////

 void TRI_IterateIndexCollection (TRI_collection_t* collection,
-                                 void (*iterator)(char const* filename, void*),
+                                 bool (*iterator)(char const* filename, void*),
                                  void* data) {
   size_t n;
   size_t i;
void TRI_IterateIndexCollection (TRI_collection_t* collection,

   for (i = 0;  i < n;  ++i) {
     char const* filename;
+    bool ok;

     filename = collection->_indexFiles._buffer[i];
-    iterator(filename, data);
+    ok = iterator(filename, data);
+
+    if (! ok) {
+      LOG_ERROR("cannot load index '%s' for collection '%s'",
+                filename,
+                collection->_name);
+    }
   }
 }

--- a/VocBase/collection.h
+++ b/VocBase/collection.h
bool TRI_IterateCollection (TRI_collection_t*,
 ////////////////////////////////////////////////////////////////////////////////

 void TRI_IterateIndexCollection (TRI_collection_t* collection,
-                                 void (*iterator)(char const* filename, void*),
+                                 bool (*iterator)(char const* filename, void*),
                                  void* data);

 ////////////////////////////////////////////////////////////////////////////////
--- a/VocBase/index.c
+++ b/VocBase/index.c
static TRI_vector_string_t* GetFieldsIndex (const TRI_idx_type_e indexType,
   }
   else {
     // read number of fields
-    strVal = TRI_LookupArrayJson(json, "field_count");
+    strVal = TRI_LookupArrayJson(json, "fieldCount");
     if (! strVal || strVal->_type != TRI_JSON_NUMBER) {
       return fields;
     }
GeoCoordinates* TRI_NearestGeoIndex (TRI_index_t* idx,
 /// @{
 ////////////////////////////////////////////////////////////////////////////////

-////////////////////////////////////////////////////////////////////////////////
-/// @brief attempts to locate an entry in the hash index
-////////////////////////////////////////////////////////////////////////////////

-// ...........................................................................
-// Warning: who ever calls this function is responsible for destroying
-// HashIndexElements* results
-// ...........................................................................
-HashIndexElements* TRI_LookupHashIndex (TRI_index_t* idx, TRI_json_t* parameterList) {
-  TRI_hash_index_t* hashIndex;
-  HashIndexElements* result;
-  HashIndexElement element;
-  TRI_shaper_t* shaper;
-  size_t j;
-
-  element.numFields = parameterList->_value._objects._length;
-  element.fields = TRI_Allocate(sizeof(TRI_json_t) * element.numFields);
-  if (element.fields == NULL) {
-    LOG_WARNING("out-of-memory in LookupHashIndex");
-    return NULL;
-  }
-
-  hashIndex = (TRI_hash_index_t*) idx;
-  shaper = hashIndex->base._collection->_shaper;
-
-  for (j = 0; j < element.numFields; ++j) {
-    TRI_json_t* jsonObject = (TRI_json_t*) (TRI_AtVector(&(parameterList->_value._objects), j));
-    TRI_shaped_json_t* shapedObject = TRI_ShapedJsonJson(shaper, jsonObject);
-    element.fields[j] = *shapedObject;
-    TRI_Free(shapedObject);
-  }
-
-  if (hashIndex->_unique) {
-    result = HashIndex_find(hashIndex->_hashIndex, &element);
-  }
-  else {
-    result = MultiHashIndex_find(hashIndex->_hashIndex, &element);
-  }
-
-  for (j = 0; j < element.numFields; ++j) {
-    TRI_DestroyShapedJson(element.fields + j);
-  }
-  TRI_Free(element.fields);
-
-  return result;
-}
-
-
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief helper for hashing
 ////////////////////////////////////////////////////////////////////////////////

-static bool HashIndexHelper(const TRI_hash_index_t* hashIndex,
-                            HashIndexElement* hashElement,
-                            const TRI_doc_mptr_t* document,
-                            const TRI_shaped_json_t* shapedDoc) {
+static bool HashIndexHelper (const TRI_hash_index_t* hashIndex,
+                             HashIndexElement* hashElement,
+                             const TRI_doc_mptr_t* document,
+                             const TRI_shaped_json_t* shapedDoc) {
   union { void* p; void const* c; } cnv;
   TRI_shaped_json_t shapedObject;
   TRI_shape_access_t* acc;
static bool HashIndexHelper (const TRI_hash_index_t* hashIndex,
   // ..........................................................................

   hashElement->data = NULL;
-
-
+
   for (j = 0; j < hashIndex->_shapeList->_length; ++j) {

     TRI_shape_pid_t shape = *((TRI_shape_pid_t*)(TRI_AtVector(hashIndex->_shapeList, j)));
static bool HashIndexHelper (const TRI_hash_index_t* hashIndex,
     // ..........................................................................
     // Determine if document has that particular shape
     // ..........................................................................
+
     acc = TRI_ShapeAccessor(hashIndex->base._collection->_shaper, shapedDoc->_sid, shape);
+
     if (acc == NULL || acc->_shape == NULL) {
       if (acc != NULL) {
         TRI_FreeShapeAccessor(acc);
static bool HashIndexHelper (const TRI_hash_index_t* hashIndex,
       TRI_Free(hashElement->fields);
       return false;
     }
-
-
+
     // ..........................................................................
     // Extract the field
     // ..........................................................................
+
     if (! TRI_ExecuteShapeAccessor(acc, shapedDoc, &shapedObject)) {
       TRI_FreeShapeAccessor(acc);
       TRI_Free(hashElement->fields);
+
       return false;
     }

-
     // ..........................................................................
     // Store the json shaped Object -- this is what will be hashed
     // ..........................................................................
     hashElement->fields[j] = shapedObject;
     TRI_FreeShapeAccessor(acc);
   } // end of for loop
-
 }

-
-
   else if (document != NULL) {

     // ..........................................................................
     // Assign the document to the HashIndexElement structure - so that it can later
     // be retreived.
     // ..........................................................................
+
     cnv.c = document;
     hashElement->data = cnv.p;

-
     for (j = 0; j < hashIndex->_shapeList->_length; ++j) {
-
       TRI_shape_pid_t shape = *((TRI_shape_pid_t*)(TRI_AtVector(hashIndex->_shapeList, j)));

       // ..........................................................................
       // Determine if document has that particular shape
       // ..........................................................................
+
       acc = TRI_ShapeAccessor(hashIndex->base._collection->_shaper, document->_document._sid, shape);
+
       if (acc == NULL || acc->_shape == NULL) {
         if (acc != NULL) {
           TRI_FreeShapeAccessor(acc);
         }
+
         TRI_Free(hashElement->fields);
+
         return false;
       }

-
       // ..........................................................................
       // Extract the field
       // ..........................................................................
+
       if (! TRI_ExecuteShapeAccessor(acc, &(document->_document), &shapedObject)) {
         TRI_FreeShapeAccessor(acc);
         TRI_Free(hashElement->fields);
+
         return false;
       }

-
-      /* start oreste:
+#ifdef DEBUG_ORESTE
       TRI_json_t* object;
       TRI_string_buffer_t buffer;
       TRI_InitStringBuffer(&buffer);
static bool HashIndexHelper (const TRI_hash_index_t* hashIndex,
       TRI_DestroyStringBuffer(&buffer);
       TRI_Free(object);
       object = NULL;
-      end oreste */
+#endif

       // ..........................................................................
       // Store the field
       // ..........................................................................
+
       hashElement->fields[j] = shapedObject;
+
       TRI_FreeShapeAccessor(acc);
     } // end of for loop
   }
static bool HashIndexHelper (const TRI_hash_index_t* hashIndex,
   }

   return true;
-
-} // end of static function HashIndexHelper
-
-
-
+}

 ////////////////////////////////////////////////////////////////////////////////
 /// @brief hash indexes a document
 ////////////////////////////////////////////////////////////////////////////////

 static bool InsertHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {
-
   HashIndexElement hashElement;
   TRI_hash_index_t* hashIndex;
   int res;
static bool InsertHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {
   // ..........................................................................
   // Obtain the hash index structure
   // ..........................................................................
+
   hashIndex = (TRI_hash_index_t*) idx;
+
   if (idx == NULL) {
     LOG_WARNING("internal error in InsertHashIndex");
     return false;
   }
-

   // ..........................................................................
   // Allocate storage to shaped json objects stored as a simple list.
static bool InsertHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {

   hashElement.numFields = hashIndex->_shapeList->_length;
   hashElement.fields = TRI_Allocate(sizeof(TRI_shaped_json_t) * hashElement.numFields);
+
   if (hashElement.fields == NULL) {
     LOG_WARNING("out-of-memory in InsertHashIndex");
     return false;
   }
+
   ok = HashIndexHelper(hashIndex, &hashElement, doc, NULL);

   if (! ok) {
     return false;
   }

-
   // ..........................................................................
   // Fill the json field list from the document for unique hash index
   // ..........................................................................
static bool InsertHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {
     res = MultiHashIndex_insert(hashIndex->_hashIndex, &hashElement);
   }

-
   if (res == -1) {
     LOG_WARNING("found duplicate entry in hash-index, should not happen");
   }
static bool InsertHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {
   return res == 0;
 }

-
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief describes a hash index as a json object
 ////////////////////////////////////////////////////////////////////////////////

 static TRI_json_t* JsonHashIndex (TRI_index_t* idx, TRI_doc_collection_t* collection) {
-
   TRI_json_t* json;
   const TRI_shape_path_t* path;
   TRI_hash_index_t* hashIndex;
static TRI_json_t* JsonHashIndex (TRI_index_t* idx, TRI_doc_collection_t* collec
   // ..........................................................................
   // Recast as a hash index
   // ..........................................................................
+
   hashIndex = (TRI_hash_index_t*) idx;
+
   if (hashIndex == NULL) {
     return NULL;
   }
-

   // ..........................................................................
   // Allocate sufficent memory for the field list
   // ..........................................................................
+
   fieldList = TRI_Allocate((sizeof(char*) * hashIndex->_shapeList->_length));
+
   if (fieldList == NULL) {
     return NULL;
   }

-
   // ..........................................................................
   // Convert the attributes (field list of the hash index) into strings
   // ..........................................................................
+
   for (j = 0; j < hashIndex->_shapeList->_length; ++j) {
     TRI_shape_pid_t shape = *((TRI_shape_pid_t*)(TRI_AtVector(hashIndex->_shapeList, j)));
     path = collection->_shaper->lookupAttributePathByPid(collection->_shaper, shape);
+
     if (path == NULL) {
       TRI_Free(fieldList);
       return NULL;
     }
+
     fieldList[j] = ((const char*) path) + sizeof(TRI_shape_path_t) + path->_aidLength * sizeof(TRI_shape_aid_t);
   }
-

   // ..........................................................................
   // create json object and fill it
   // ..........................................................................
+
   json = TRI_CreateArrayJson();
+
   if (! json) {
     TRI_Free(fieldList);
     return NULL;
   }

   fieldCounter = TRI_Allocate(30);
+
   if (! fieldCounter) {
     TRI_Free(fieldList);
     TRI_FreeJson(json);
static TRI_json_t* JsonHashIndex (TRI_index_t* idx, TRI_doc_collection_t* collec
   TRI_Insert2ArrayJson(json, "iid", TRI_CreateNumberJson(idx->_iid));
   TRI_Insert2ArrayJson(json, "unique", TRI_CreateBooleanJson(hashIndex->_unique));
   TRI_Insert2ArrayJson(json, "type", TRI_CreateStringCopyJson("hash"));
-  TRI_Insert2ArrayJson(json, "field_count", TRI_CreateNumberJson(hashIndex->_shapeList->_length));
+  TRI_Insert2ArrayJson(json, "fieldCount", TRI_CreateNumberJson(hashIndex->_shapeList->_length));
+
   for (j = 0; j < hashIndex->_shapeList->_length; ++j) {
     sprintf(fieldCounter, "field_%lu", j);
     TRI_Insert2ArrayJson(json, fieldCounter, TRI_CreateStringCopyJson(fieldList[j]));
static TRI_json_t* JsonHashIndex (TRI_index_t* idx, TRI_doc_collection_t* collec
   return json;
 }

-
 ////////////////////////////////////////////////////////////////////////////////
 /// @brief removes a document from a hash index
 ////////////////////////////////////////////////////////////////////////////////

 static bool RemoveHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {
-
   HashIndexElement hashElement;
   TRI_hash_index_t* hashIndex;
   bool result;

-
   // ..........................................................................
   // Obtain the hash index structure
   // ..........................................................................
+
   hashIndex = (TRI_hash_index_t*) idx;
+
   if (idx == NULL) {
     LOG_WARNING("internal error in RemoveHashIndex");
     return false;
   }

-
   // ..........................................................................
   // Allocate some memory for the HashIndexElement structure
   // ..........................................................................

   hashElement.numFields = hashIndex->_shapeList->_length;
   hashElement.fields = TRI_Allocate(sizeof(TRI_shaped_json_t) * hashElement.numFields);
+
   if (hashElement.fields == NULL) {
     LOG_WARNING("out-of-memory in InsertHashIndex");
     return false;
   }
static bool RemoveHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {
   // ..........................................................................
   // Fill the json field list from the document
   // ..........................................................................
-  if (! HashIndexHelper(hashIndex, &hashElement, doc, NULL)) {
+
+  if (! HashIndexHelper(hashIndex, &hashElement, doc, NULL)) {
     return false;
   }
-

   // ..........................................................................
   // Attempt the removal for unique hash indexes
static bool RemoveHashIndex (TRI_index_t* idx, TRI_doc_mptr_t const* doc) {
     result = MultiHashIndex_remove(hashIndex->_hashIndex, &hashElement);
   }

-
   return result;
 }

+////////////////////////////////////////////////////////////////////////////////
+/// @brief updates a document from a hash index
+////////////////////////////////////////////////////////////////////////////////

-static bool UpdateHashIndex (TRI_index_t* idx, const TRI_doc_mptr_t* newDoc,
+static bool UpdateHashIndex (TRI_index_t* idx,
+                             const TRI_doc_mptr_t* newDoc,
                              const TRI_shaped_json_t* oldDoc) {

   // ..........................................................................
static bool UpdateHashIndex (TRI_index_t* idx, const TRI_doc_mptr_t* newDoc,
   HashIndexElement hashElement;
   TRI_hash_index_t* hashIndex;
   int res;
-

   // ..........................................................................
   // Obtain the hash index structure
   // ..........................................................................

   hashIndex = (TRI_hash_index_t*) idx;
+
   if (idx == NULL) {
     LOG_WARNING("internal error in UpdateHashIndex");
     return false;
   }

-
   // ..........................................................................
   // Allocate some memory for the HashIndexElement structure
   // ..........................................................................

   hashElement.numFields = hashIndex->_shapeList->_length;
   hashElement.fields = TRI_Allocate(sizeof(TRI_shaped_json_t) * hashElement.numFields);
+
   if (hashElement.fields == NULL) {
     LOG_WARNING("out-of-memory in UpdateHashIndex");
     return false;
   }
-

   // ..........................................................................
   // Update for unique hash index
   // ..........................................................................

-
   // ..........................................................................
   // Fill in the fields with the values from oldDoc
   // ..........................................................................

   if (hashIndex->_unique) {
-
-
     if (HashIndexHelper(hashIndex, &hashElement, NULL, oldDoc)) {

       // ..........................................................................
       // We must fill the hashElement with the value of the document shape -- this
       // is necessary when we attempt to remove non-unique hash indexes.
       // ..........................................................................
+
       cnv.c = newDoc; // we are assuming here that the doc ptr does not change
       hashElement.data = cnv.p;

-
       // ..........................................................................
       // Remove the hash index entry and return.
       // ..........................................................................
+
       if (! HashIndex_remove(hashIndex->_hashIndex, &hashElement)) {
         LOG_WARNING("could not remove old document from hash index in UpdateHashIndex");
       }
-
     }

-
     // ..........................................................................
     // Fill the json simple list from the document
     // ..........................................................................
-    if (! HashIndexHelper(hashIndex, &hashElement, newDoc, NULL)) {
+
+    if (! HashIndexHelper(hashIndex, &hashElement, newDoc, NULL)) {
+
       // ..........................................................................
       // probably fields do not match
       // ..........................................................................
       return false;
     }

-
     // ..........................................................................
     // Attempt to add the hash entry from the new doc
     // ..........................................................................
     res = HashIndex_insert(hashIndex->_hashIndex, &hashElement);
   }

-
   // ..........................................................................
   // Update for non-unique hash index
   // ..........................................................................
static bool UpdateHashIndex (TRI_index_t* idx, const TRI_doc_mptr_t* newDoc,
     // ..........................................................................

     if (HashIndexHelper(hashIndex, &hashElement, NULL, oldDoc)) {
+
       // ..........................................................................
       // We must fill the hashElement with the value of the document shape -- this
       // is necessary when we attempt to remove non-unique hash indexes.
       // ..........................................................................
+
       cnv.c = newDoc;
       hashElement.data = cnv.p;

       // ..........................................................................
       // Remove the hash index entry and return.
       // ..........................................................................
-      if (! MultiHashIndex_remove(hashIndex->_hashIndex, &hashElement)) {
+
+      if (! MultiHashIndex_remove(hashIndex->_hashIndex, &hashElement)) {
         LOG_WARNING("could not remove old document from hash index in UpdateHashIndex");
       }
-
     }

-
     // ..........................................................................
     // Fill the shaped json simple list from the document
     // ..........................................................................
+
     if (! HashIndexHelper(hashIndex, &hashElement, newDoc, NULL)) {
+
       // ..........................................................................
       // probably fields do not match
       // ..........................................................................
+
       return false;
     }

-
     // ..........................................................................
     // Attempt to add the hash entry from the new doc
     // ..........................................................................
+
     res = MultiHashIndex_insert(hashIndex->_hashIndex, &hashElement);
-
   }

   if (res == -1) {
static bool UpdateHashIndex (TRI_index_t* idx, const TRI_doc_mptr_t* newDoc,

   return res == 0;
 }
-
+
+////////////////////////////////////////////////////////////////////////////////
+/// @}
+////////////////////////////////////////////////////////////////////////////////
+
+// -----------------------------------------------------------------------------
+// --SECTION--                                      constructors and destructors
+// -----------------------------------------------------------------------------
+
+////////////////////////////////////////////////////////////////////////////////
+/// @addtogroup VocBase
+/// @{
+////////////////////////////////////////////////////////////////////////////////

 ////////////////////////////////////////////////////////////////////////////////
 /// @brief creates a hash index
 ////////////////////////////////////////////////////////////////////////////////
TRI_index_t* TRI_CreateHashIndex (struct TRI_doc_collection_s* collection,
   // ..........................................................................
   // Copy the contents of the shape list vector into a new vector and store this
   // ..........................................................................
+
   hashIndex->_shapeList = TRI_Allocate(sizeof(TRI_vector_t));
+
   if (! hashIndex->_shapeList) {
     TRI_Free(hashIndex);
     return NULL;
   }

   TRI_InitVector(hashIndex->_shapeList, sizeof(TRI_shape_pid_t));
+
   for (j = 0; j < shapeList->_length; ++j) {
     TRI_shape_pid_t shape = *((TRI_shape_pid_t*)(TRI_AtVector(shapeList, j)));
+
     TRI_PushBackVector(hashIndex->_shapeList, &shape);
   }

TRI_index_t* TRI_CreateHashIndex (struct TRI_doc_collection_s* collection,
   return &hashIndex->base;
 }

+////////////////////////////////////////////////////////////////////////////////
+/// @brief frees the memory allocated, but does not free the pointer
+////////////////////////////////////////////////////////////////////////////////
+
+void TRI_DestroyHashIndex (TRI_index_t* idx) {
+  LOG_ERROR("TRI_DestroyHashIndex not implemented");
+}
+
+////////////////////////////////////////////////////////////////////////////////
+/// @brief frees the memory allocated and frees the pointer
+////////////////////////////////////////////////////////////////////////////////
+
+void TRI_FreeHashIndex (TRI_index_t* idx) {
+  TRI_DestroyHashIndex(idx);
+  TRI_Free(idx);
+}
+
 ////////////////////////////////////////////////////////////////////////////////
 /// @}
//////////////////////////////////
/ / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> + / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + / / - - SECTION - - public functions <nl> + / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ addtogroup VocBase <nl> + / / / @ { <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief attempts to locate an entry in the hash index <nl> + / / / <nl> + / / / @ warning whoever calls this function is responsible for destroying <nl> + / / / HashIndexElements * results <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> + HashIndexElements * TRI_LookupHashIndex ( TRI_index_t * idx , TRI_json_t * parameterList ) { <nl> + TRI_hash_index_t * hashIndex ; <nl> + HashIndexElements * result ; <nl> + HashIndexElement element ; <nl> + TRI_shaper_t * shaper ; <nl> + size_t j ; <nl> + <nl> + element . numFields = parameterList - > _value . _objects . _length ; <nl> + element . fields = TRI_Allocate ( sizeof ( TRI_shaped_json_t ) * element . numFields ) ; <nl> + <nl> + if ( element . fields = = NULL ) { <nl> + LOG_WARNING ( " out - of - memory in LookupHashIndex " ) ; <nl> + return NULL ; <nl> + } <nl> + <nl> + hashIndex = ( TRI_hash_index_t * ) idx ; <nl> + shaper = hashIndex - > base . _collection - > _shaper ; <nl> + <nl> + for ( j = 0 ; j < element . 
numFields ; + + j ) { <nl> + TRI_json_t * jsonObject = ( TRI_json_t * ) ( TRI_AtVector ( & ( parameterList - > _value . _objects ) , j ) ) ; <nl> + TRI_shaped_json_t * shapedObject = TRI_ShapedJsonJson ( shaper , jsonObject ) ; <nl> + <nl> + element . fields [ j ] = * shapedObject ; <nl> + TRI_Free ( shapedObject ) ; <nl> + } <nl> + <nl> + if ( hashIndex - > _unique ) { <nl> + result = HashIndex_find ( hashIndex - > _hashIndex , & element ) ; <nl> + } <nl> + else { <nl> + result = MultiHashIndex_find ( hashIndex - > _hashIndex , & element ) ; <nl> + } <nl> + <nl> + for ( j = 0 ; j < element . numFields ; + + j ) { <nl> + TRI_DestroyShapedJson ( element . fields + j ) ; <nl> + } <nl> + <nl> + TRI_Free ( element . fields ) ; <nl> + <nl> + return result ; <nl> + } <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ } <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> / / - - SECTION - - SKIPLIST INDEX <nl> static TRI_json_t * JsonSkiplistIndex ( TRI_index_t * idx , TRI_doc_collection_t * co <nl> TRI_Insert2ArrayJson ( json , " iid " , TRI_CreateNumberJson ( idx - > _iid ) ) ; <nl> TRI_Insert2ArrayJson ( json , " unique " , TRI_CreateBooleanJson ( skiplistIndex - > _unique ) ) ; <nl> TRI_Insert2ArrayJson ( json , " type " , TRI_CreateStringCopyJson ( " skiplist " ) ) ; <nl> - TRI_Insert2ArrayJson ( json , " field_count " , TRI_CreateNumberJson ( skiplistIndex - > _shapeList - > _length ) ) ; <nl> + TRI_Insert2ArrayJson ( json , " fieldCount " , TRI_CreateNumberJson ( skiplistIndex - > _shapeList - > _length ) ) ; <nl> for ( j = 0 ; j < skiplistIndex - > _shapeList - > _length ; + + j ) { <nl> sprintf ( fieldCounter , " 
field_ % lu " , j ) ; <nl> TRI_Insert2ArrayJson ( json , fieldCounter , TRI_CreateStringCopyJson ( fieldList [ j ] ) ) ; <nl> mmm a / VocBase / index . h <nl> ppp b / VocBase / index . h <nl> GeoCoordinates * TRI_NearestGeoIndex ( TRI_index_t * , <nl> / / / @ { <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - <nl> - HashIndexElements * TRI_LookupHashIndex ( TRI_index_t * , TRI_json_t * ) ; <nl> - <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief creates a hash - index <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> TRI_index_t * TRI_CreateHashIndex ( struct TRI_doc_collection_s * , <nl> TRI_vector_t * shapeList , <nl> bool unique ) ; <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief frees the memory allocated , but does not free the pointer <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + void TRI_DestroyHashIndex ( TRI_index_t * idx ) ; <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief frees the memory allocated and frees the pointer <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + void TRI_FreeHashIndex ( TRI_index_t * idx ) ; <nl> <nl> - <nl> + 
/ / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ } <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + / / - - SECTION - - public functions <nl> + / / mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ addtogroup VocBase <nl> + / / / @ { <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief attempts to locate an entry in the hash index <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + HashIndexElements * TRI_LookupHashIndex ( TRI_index_t * , TRI_json_t * ) ; <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ } <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / VocBase / simple - collection . c <nl> ppp b / VocBase / simple - collection . 
c <nl> static bool OpenIterator ( TRI_df_marker_t const * marker , void * data , TRI_datafil <nl> / / / @ brief iterator for index open <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - static void OpenIndex ( char const * filename , void * data ) { <nl> + static bool OpenIndex ( char const * filename , void * data ) { <nl> TRI_idx_iid_t iid ; <nl> + TRI_index_t * idx ; <nl> + TRI_json_t * fieldCount ; <nl> + TRI_json_t * fieldStr ; <nl> TRI_json_t * gjs ; <nl> TRI_json_t * iis ; <nl> TRI_json_t * json ; <nl> - TRI_json_t * lat ; <nl> - TRI_json_t * loc ; <nl> - TRI_json_t * lon ; <nl> TRI_json_t * type ; <nl> - TRI_json_t * fieldCount ; <nl> - TRI_json_t * fieldStr ; <nl> - TRI_vector_t attributes ; <nl> TRI_sim_collection_t * doc ; <nl> - bool geoJson ; <nl> + TRI_vector_t attributes ; <nl> + bool uniqueIndex ; <nl> char const * typeStr ; <nl> + char fieldChar [ 30 ] ; <nl> char * error ; <nl> - char * fieldChar ; <nl> int intCount ; <nl> - bool ok ; <nl> - bool uniqueIndex ; <nl> + <nl> + / / load json description of the index <nl> json = TRI_JsonFile ( filename , & error ) ; <nl> <nl> + / / simple collection of the index <nl> + doc = ( TRI_sim_collection_t * ) data ; <nl> + <nl> + / / json must be an index description <nl> if ( json = = NULL ) { <nl> LOG_ERROR ( " cannot read index definition from ' % s ' : % s " , filename , error ) ; <nl> - return ; <nl> + return false ; <nl> } <nl> <nl> if ( json - > _type ! = TRI_JSON_ARRAY ) { <nl> LOG_ERROR ( " cannot read index definition from ' % s ' : expecting an array " , filename ) ; <nl> + <nl> TRI_FreeJson ( json ) ; <nl> - return ; <nl> + return false ; <nl> } <nl> <nl> + / / extract the type <nl> type = TRI_LookupArrayJson ( json , " type " ) ; <nl> <nl> if ( type - > _type ! 
= TRI_JSON_STRING ) { <nl> LOG_ERROR ( " cannot read index definition from ' % s ' : expecting a string for type " , filename ) ; <nl> + <nl> TRI_FreeJson ( json ) ; <nl> - return ; <nl> + return false ; <nl> } <nl> <nl> typeStr = type - > _value . _string . data ; <nl> - doc = ( TRI_sim_collection_t * ) data ; <nl> + <nl> + / / extract the index identifier <nl> + iis = TRI_LookupArrayJson ( json , " iid " ) ; <nl> + <nl> + if ( iis ! = NULL & & iis - > _type = = TRI_JSON_NUMBER ) { <nl> + iid = iis - > _value . _number ; <nl> + } <nl> + else { <nl> + LOG_ERROR ( " ignore index , index identifier could not be located " ) ; <nl> + <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> + } <nl> <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / geo index <nl> + / / GEO INDEX <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> if ( TRI_EqualString ( typeStr , " geo " ) ) { <nl> + TRI_json_t * lat ; <nl> + TRI_json_t * loc ; <nl> + TRI_json_t * lon ; <nl> + bool geoJson ; <nl> + <nl> loc = TRI_LookupArrayJson ( json , " location " ) ; <nl> lat = TRI_LookupArrayJson ( json , " latitude " ) ; <nl> lon = TRI_LookupArrayJson ( json , " longitude " ) ; <nl> - iis = TRI_LookupArrayJson ( json , " iid " ) ; <nl> gjs = TRI_LookupArrayJson ( json , " geoJson " ) ; <nl> - iid = 0 ; <nl> geoJson = false ; <nl> <nl> - if ( iis ! = NULL & & iis - > _type = = TRI_JSON_NUMBER ) { <nl> - iid = iis - > _value . _number ; <nl> - } <nl> - <nl> if ( gjs ! = NULL & & gjs - > _type = = TRI_JSON_BOOLEAN ) { <nl> geoJson = gjs - > _value . _boolean ; <nl> } <nl> static void OpenIndex ( char const * filename , void * data ) { <nl> CreateGeoIndexSimCollection ( doc , NULL , lat - > _value . _string . data , lon - > _value . _string . 
data , false , iid ) ; <nl> } <nl> else { <nl> - LOG_WARNING ( " ignore geo - index , need either ' location ' or ' latitude ' and ' longitude ' " ) ; <nl> + LOG_ERROR ( " ignore geo - index % lu , need either ' location ' or ' latitude ' and ' longitude ' " , <nl> + ( unsigned long ) iid ) ; <nl> + <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> } <nl> - } <nl> <nl> + TRI_FreeJson ( json ) ; <nl> + return true ; <nl> + } <nl> <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Hash Index <nl> + / / HASH INDEX <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> else if ( TRI_EqualString ( typeStr , " hash " ) ) { <nl> <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Initialise the ok value <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - ok = true ; <nl> - <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Initialise the vector in which we store the fields on which the hashing <nl> - / / will be based . <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - TRI_InitVector ( & attributes , sizeof ( char * ) ) ; <nl> - <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Determine the id of the hash index <nl> - / / . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - if ( ok ) { <nl> - iis = TRI_LookupArrayJson ( json , " iid " ) ; <nl> - iid = 0 ; <nl> - if ( iis ! = NULL & & iis - > _type = = TRI_JSON_NUMBER ) { <nl> - iid = iis - > _value . _number ; <nl> - } <nl> - else { <nl> - LOG_WARNING ( " ignore hash - index , id could not be located " ) ; <nl> - ok = false ; <nl> - } <nl> - } <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Determine if the hash index is unique or non - unique <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - if ( ok ) { <nl> - gjs = TRI_LookupArrayJson ( json , " unique " ) ; <nl> - uniqueIndex = false ; <nl> - if ( gjs ! = NULL & & gjs - > _type = = TRI_JSON_BOOLEAN ) { <nl> - uniqueIndex = gjs - > _value . _boolean ; <nl> - } <nl> - else { <nl> - LOG_WARNING ( " ignore hash - index , could not determine if unique or non - unique " ) ; <nl> - ok = false ; <nl> - } <nl> + gjs = TRI_LookupArrayJson ( json , " unique " ) ; <nl> + uniqueIndex = false ; <nl> + <nl> + if ( gjs ! = NULL & & gjs - > _type = = TRI_JSON_BOOLEAN ) { <nl> + uniqueIndex = gjs - > _value . _boolean ; <nl> } <nl> + else { <nl> + LOG_ERROR ( " ignore hash - index % lu , could not determine if unique or non - unique " , <nl> + ( unsigned long ) iid ) ; <nl> + <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> + } <nl> <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Extract the list of fields <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
<nl> - if ( ok ) { <nl> - fieldCount = 0 ; <nl> - fieldCount = TRI_LookupArrayJson ( json , " field_count " ) ; <nl> - intCount = 0 ; <nl> - if ( ( fieldCount ! = NULL ) & & ( fieldCount - > _type = = TRI_JSON_NUMBER ) ) { <nl> - intCount = fieldCount - > _value . _number ; <nl> - } <nl> - if ( intCount < 1 ) { <nl> - LOG_WARNING ( " ignore hash - index , field count missing " ) ; <nl> - ok = false ; <nl> - } <nl> + fieldCount = 0 ; <nl> + fieldCount = TRI_LookupArrayJson ( json , " fieldCount " ) ; <nl> + intCount = 0 ; <nl> + <nl> + if ( ( fieldCount ! = NULL ) & & ( fieldCount - > _type = = TRI_JSON_NUMBER ) ) { <nl> + intCount = fieldCount - > _value . _number ; <nl> } <nl> - <nl> - if ( ok ) { <nl> - fieldChar = TRI_Allocate ( 30 ) ; <nl> - if ( fieldChar = = NULL ) { <nl> - LOG_WARNING ( " ignore hash - index , field count missing " ) ; <nl> - ok = false ; <nl> - } <nl> + <nl> + if ( intCount < 1 ) { <nl> + LOG_ERROR ( " ignore hash - index % lu , field count missing " , ( unsigned long ) iid ) ; <nl> + <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> } <nl> + <nl> + / / Initialise the vector in which we store the fields on which the hashing <nl> + / / will be based . <nl> + TRI_InitVector ( & attributes , sizeof ( char * ) ) ; <nl> <nl> - if ( ok ) { <nl> - for ( int j = 0 ; j < intCount ; + + j ) { <nl> - sprintf ( fieldChar , " field_ % i " , j ) ; <nl> - fieldStr = TRI_LookupArrayJson ( json , fieldChar ) ; <nl> - if ( fieldStr - > _type ! = TRI_JSON_STRING ) { <nl> - LOG_WARNING ( " ignore hash - index , invalid field name for hash index " ) ; <nl> - ok = false ; <nl> - break ; <nl> - } <nl> - TRI_PushBackVector ( & attributes , & ( fieldStr - > _value . _string . data ) ) ; <nl> + / / find fields <nl> + for ( int j = 0 ; j < intCount ; + + j ) { <nl> + sprintf ( fieldChar , " field_ % i " , j ) ; <nl> + <nl> + fieldStr = TRI_LookupArrayJson ( json , fieldChar ) ; <nl> + <nl> + if ( fieldStr - > _type ! 
= TRI_JSON_STRING ) { <nl> + LOG_ERROR ( " ignore hash - index % lu , invalid field name for hash index " , <nl> + ( unsigned long ) iid ) ; <nl> + <nl> + TRI_DestroyVector ( & attributes ) ; <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> } <nl> - TRI_Free ( fieldChar ) ; <nl> - } <nl> - <nl> - <nl> - if ( ok ) { <nl> - CreateHashIndexSimCollection ( doc , & attributes , iid , uniqueIndex ) ; <nl> - } <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Free the vector <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> + TRI_PushBackVector ( & attributes , & ( fieldStr - > _value . _string . data ) ) ; <nl> + } <nl> + <nl> + / / create the index <nl> + idx = CreateHashIndexSimCollection ( doc , & attributes , iid , uniqueIndex ) ; <nl> + <nl> TRI_DestroyVector ( & attributes ) ; <nl> - } <nl> + TRI_FreeJson ( json ) ; <nl> <nl> + if ( idx = = NULL ) { <nl> + LOG_ERROR ( " cannot create hash index % lu " , ( unsigned long ) iid ) ; <nl> + return false ; <nl> + } <nl> <nl> + return true ; <nl> + } <nl> <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Skiplist Index <nl> + / / SKIPLIST INDEX <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - else if ( TRI_EqualString ( typeStr , " skiplist " ) ) { <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Initialise the ok value <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - ok = true ; <nl> - <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Initialise the vector in which we store the fields on which the hashing <nl> - / / will be based . <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - TRI_InitVector ( & attributes , sizeof ( char * ) ) ; <nl> <nl> + else if ( TRI_EqualString ( typeStr , " skiplist " ) ) { <nl> <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / Determine the id of the hash index <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - if ( ok ) { <nl> - iis = TRI_LookupArrayJson ( json , " iid " ) ; <nl> - iid = 0 ; <nl> - if ( iis ! = NULL & & iis - > _type = = TRI_JSON_NUMBER ) { <nl> - iid = iis - > _value . _number ; <nl> - } <nl> - else { <nl> - LOG_WARNING ( " ignore skiplist - index , id could not be located " ) ; <nl> - ok = false ; <nl> - } <nl> - } <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Determine if the skiplist index is unique or non - unique <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - if ( ok ) { <nl> - gjs = TRI_LookupArrayJson ( json , " unique " ) ; <nl> - uniqueIndex = false ; <nl> - if ( gjs ! = NULL & & gjs - > _type = = TRI_JSON_BOOLEAN ) { <nl> - uniqueIndex = gjs - > _value . 
_boolean ; <nl> - } <nl> - else { <nl> - LOG_WARNING ( " ignore skiplist - index , could not determine if unique or non - unique " ) ; <nl> - ok = false ; <nl> - } <nl> + gjs = TRI_LookupArrayJson ( json , " unique " ) ; <nl> + uniqueIndex = false ; <nl> + <nl> + if ( gjs ! = NULL & & gjs - > _type = = TRI_JSON_BOOLEAN ) { <nl> + uniqueIndex = gjs - > _value . _boolean ; <nl> } <nl> + else { <nl> + LOG_ERROR ( " ignore skiplist - index % lu , could not determine if unique or non - unique " , <nl> + ( unsigned long ) iid ) ; <nl> + <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> + } <nl> <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Extract the list of fields <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - if ( ok ) { <nl> - fieldCount = 0 ; <nl> - fieldCount = TRI_LookupArrayJson ( json , " field_count " ) ; <nl> - intCount = 0 ; <nl> - if ( ( fieldCount ! = NULL ) & & ( fieldCount - > _type = = TRI_JSON_NUMBER ) ) { <nl> - intCount = fieldCount - > _value . _number ; <nl> - } <nl> - if ( intCount < 1 ) { <nl> - LOG_WARNING ( " ignore skiplist - index , field count missing " ) ; <nl> - ok = false ; <nl> - } <nl> + fieldCount = 0 ; <nl> + fieldCount = TRI_LookupArrayJson ( json , " fieldCount " ) ; <nl> + intCount = 0 ; <nl> + <nl> + if ( ( fieldCount ! = NULL ) & & ( fieldCount - > _type = = TRI_JSON_NUMBER ) ) { <nl> + intCount = fieldCount - > _value . 
_number ; <nl> } <nl> - <nl> - if ( ok ) { <nl> - fieldChar = TRI_Allocate ( 30 ) ; <nl> - if ( fieldChar = = NULL ) { <nl> - LOG_WARNING ( " ignore skiplist - index , field count missing " ) ; <nl> - ok = false ; <nl> - } <nl> + <nl> + if ( intCount < 1 ) { <nl> + LOG_ERROR ( " ignore skiplist - index % lu , field count missing " , ( unsigned long ) iid ) ; <nl> + <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> } <nl> <nl> - if ( ok ) { <nl> - for ( int j = 0 ; j < intCount ; + + j ) { <nl> - sprintf ( fieldChar , " field_ % i " , j ) ; <nl> - fieldStr = TRI_LookupArrayJson ( json , fieldChar ) ; <nl> - if ( fieldStr - > _type ! = TRI_JSON_STRING ) { <nl> - LOG_WARNING ( " ignore skiplist - index , invalid field name for skiplist index " ) ; <nl> - ok = false ; <nl> - break ; <nl> - } <nl> - TRI_PushBackVector ( & attributes , & ( fieldStr - > _value . _string . data ) ) ; <nl> + / / Initialise the vector in which we store the fields on which the skiplist <nl> + / / will be based . <nl> + TRI_InitVector ( & attributes , sizeof ( char * ) ) ; <nl> + <nl> + / / find fields <nl> + for ( int j = 0 ; j < intCount ; + + j ) { <nl> + sprintf ( fieldChar , " field_ % i " , j ) ; <nl> + <nl> + fieldStr = TRI_LookupArrayJson ( json , fieldChar ) ; <nl> + <nl> + if ( fieldStr - > _type ! = TRI_JSON_STRING ) { <nl> + LOG_ERROR ( " ignore skiplist - index % lu , invalid field name for skiplist index " , <nl> + ( unsigned long ) iid ) ; <nl> + <nl> + TRI_DestroyVector ( & attributes ) ; <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> } <nl> - TRI_Free ( fieldChar ) ; <nl> - } <nl> - <nl> - <nl> - if ( ok ) { <nl> - CreateSkiplistIndexSimCollection ( doc , & attributes , iid , uniqueIndex ) ; <nl> - } <nl> + <nl> + TRI_PushBackVector ( & attributes , & ( fieldStr - > _value . _string . data ) ) ; <nl> + } <nl> <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
<nl> - / / Free the vector <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + / / create the index <nl> + idx = CreateSkiplistIndexSimCollection ( doc , & attributes , iid , uniqueIndex ) ; <nl> + <nl> TRI_DestroyVector ( & attributes ) ; <nl> + TRI_FreeJson ( json ) ; <nl> + <nl> + if ( idx = = NULL ) { <nl> + LOG_ERROR ( " cannot create skiplist index % lu " , ( unsigned long ) iid ) ; <nl> + return false ; <nl> + } <nl> + <nl> + return true ; <nl> } <nl> <nl> - / / ups <nl> + / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + / / oops , unknown index type <nl> + / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> else { <nl> - LOG_WARNING ( " ignoring unknown index type ' % s ' " , typeStr ) ; <nl> - } <nl> + LOG_ERROR ( " ignoring unknown index type ' % s ' for index % lu " , <nl> + typeStr , <nl> + ( unsigned long ) iid ) ; <nl> <nl> - TRI_FreeJson ( json ) ; <nl> + TRI_FreeJson ( json ) ; <nl> + return false ; <nl> + } <nl> } <nl> <nl> <nl> static bool DeleteImmediateIndexes ( TRI_sim_collection_t * collection , <nl> / / / @ brief initialises an index with all existing documents <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - static void FillIndex ( TRI_sim_collection_t * collection , <nl> + static bool FillIndex ( TRI_sim_collection_t * collection , <nl> TRI_index_t * idx ) { <nl> size_t n ; <nl> size_t scanned ; <nl> static void FillIndex ( TRI_sim_collection_t * collection , <nl> if ( * ptr ) { <nl> + + scanned ; <nl> <nl> - if ( ! idx - > insert ( idx , * ptr ) ) { <nl> - / / TODO : handle errors <nl> + if ( ! 
idx - > insert ( idx , * ptr ) ) { <nl> + LOG_TRACE ( " failed to insert document ' % lu : % lu ' " , <nl> + ( unsigned long ) collection - > base . base . _cid , <nl> + ( unsigned long ) ( ( TRI_doc_mptr_t const * ) * ptr ) - > _did ) ; <nl> + <nl> + return false ; <nl> } <nl> <nl> if ( scanned % 10000 = = 0 ) { <nl> static void FillIndex ( TRI_sim_collection_t * collection , <nl> } <nl> } <nl> } <nl> + <nl> + return true ; <nl> } <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> static TRI_index_t * CreateHashIndexSimCollection ( TRI_sim_collection_t * collecti <nl> const TRI_vector_t * attributes , <nl> TRI_idx_iid_t iid , <nl> bool unique ) { <nl> - TRI_index_t * idx = NULL ; <nl> - TRI_shaper_t * shaper = collection - > base . _shaper ; <nl> + TRI_index_t * idx ; <nl> + TRI_shaper_t * shaper ; <nl> TRI_vector_t shapes ; <nl> + bool ok ; <nl> + <nl> + idx = NULL ; <nl> + shaper = collection - > base . _shaper ; <nl> <nl> TRI_InitVector ( & shapes , sizeof ( TRI_shape_pid_t ) ) ; <nl> <nl> - <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Determine the shape ids for the attributes <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
<nl> + <nl> for ( size_t j = 0 ; j < attributes - > _length ; + + j ) { <nl> - char * shapeString = * ( ( char * * ) ( TRI_AtVector ( attributes , j ) ) ) ; <nl> - TRI_shape_pid_t shape = shaper - > findAttributePathByName ( shaper , shapeString ) ; <nl> - TRI_PushBackVector ( & shapes , & shape ) ; <nl> + char * shapeString ; <nl> + TRI_shape_pid_t shape ; <nl> + <nl> + shapeString = * ( ( char * * ) ( TRI_AtVector ( attributes , j ) ) ) ; <nl> + shape = shaper - > findAttributePathByName ( shaper , shapeString ) ; <nl> + <nl> + TRI_PushBackVector ( & shapes , & shape ) ; <nl> } <nl> - <nl> - <nl> + <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Attempt to find an existing index which matches the attributes above . <nl> / / If a suitable index is found , return that one otherwise we need to create <nl> / / a new one . <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> idx = TRI_LookupHashIndexSimCollection ( collection , & shapes ) ; <nl> <nl> - <nl> if ( idx ! = NULL ) { <nl> TRI_DestroyVector ( & shapes ) ; <nl> LOG_TRACE ( " hash - index already created " ) ; <nl> + <nl> return idx ; <nl> } <nl> <nl> - <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Create the hash index <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - idx = TRI_CreateHashIndex ( & collection - > base , & shapes , unique ) ; <nl> - <nl> + <nl> + idx = TRI_CreateHashIndex ( & collection - > base , & shapes , unique ) ; <nl> <nl> + / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + / / release memory allocated to vector <nl> + / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> + TRI_DestroyVector ( & shapes ) ; <nl> + <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / If index id given , use it otherwise use the default . <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> if ( iid ) { <nl> idx - > _iid = iid ; <nl> } <nl> <nl> - <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / initialises the index with all existing documents <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - FillIndex ( collection , idx ) ; <nl> <nl> + ok = FillIndex ( collection , idx ) ; <nl> + <nl> + if ( ! ok ) { <nl> + TRI_FreeHashIndex ( idx ) ; <nl> + return NULL ; <nl> + } <nl> <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - / / store index <nl> + / / store index and return <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> + <nl> TRI_PushBackVectorPointer ( & collection - > _indexes , idx ) ; <nl> <nl> - <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
<nl> - / / release memory allocated to vector <nl> - / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - TRI_DestroyVector ( & shapes ) ; <nl> - <nl> return idx ; <nl> } <nl> <nl> - <nl> - <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief adds a skiplist index to the collection <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> TRI_idx_iid_t TRI_EnsureHashIndexSimCollection ( TRI_sim_collection_t * collection , <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / within write - lock the collection <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> - TRI_WriteLockReadWriteLock ( & collection - > _lock ) ; <nl> <nl> + TRI_WriteLockReadWriteLock ( & collection - > _lock ) ; <nl> <nl> / / . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . <nl> / / Given the list of attributes ( as strings ) <nl> TRI_idx_iid_t TRI_EnsureHashIndexSimCollection ( TRI_sim_collection_t * collection , <nl> return ok ? 
idx - > _iid : 0 ; <nl> } <nl> <nl> - <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief ensures that a skiplist index exists <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / js / modules / actions . js <nl> ppp b / js / modules / actions . js <nl> exports . cursorNotModified = 40304 ; <nl> / / / Note that the url for " user " actions is automatically prefixed <nl> / / / with @ LIT { _action } . This applies to all specified contexts . For example , if <nl> / / / the context contains " admin " and " user " and the url is @ LIT { hallo } , then the <nl> - / / / action is accessible under @ { / _action / hallo } - even for the admin context . <nl> + / / / action is accessible under @ LIT { / _action / hallo } - even for the admin context . <nl> / / / <nl> / / / @ FA { options . callback } ( @ FA { request } , @ FA { response } ) <nl> / / / <nl> function defineHttp ( options ) { <nl> / / / <nl> / / / The functions defines a response . @ FA { code } is the status code to <nl> / / / return . @ FA { result } is the result object , which will be returned as JSON <nl> - / / / object in the body . @ { headers } is an array of headers to returned . <nl> + / / / object in the body . @ LIT { headers } is an array of headers to returned . <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> function actionResult ( req , res , code , result , headers ) { <nl>
|
more doc , error handling when loading corrupted index
|
arangodb/arangodb
|
25aaeac7c8d8acd0e4eb2d8e07569f20dc3add00
|
2012-03-09T12:56:28Z
|
mmm a / ChangeLog <nl> ppp b / ChangeLog <nl> <nl> + 2010 - 09 - 08 Tatsuhiro Tsujikawa < t - tujikawa @ users . sourceforge . net > <nl> + <nl> + Allow ' @ ' in username and password embedded in URI . It should be <nl> + percent - encoded but many people use their mail address as an <nl> + username and forget about PE . <nl> + * src / Request . cc <nl> + * test / RequestTest . cc <nl> + <nl> 2010 - 09 - 06 Tatsuhiro Tsujikawa < t - tujikawa @ users . sourceforge . net > <nl> <nl> Parse original URI when removing same host . <nl> mmm a / src / Request . cc <nl> ppp b / src / Request . cc <nl> bool Request : : parseUri ( const std : : string & srcUri ) { <nl> return false ; <nl> } <nl> / / find userinfo ( username and password ) in authority if they exist <nl> - std : : string : : const_iterator userInfoLast = authorityFirst ; <nl> + std : : string : : const_iterator userInfoLast = authorityLast ; <nl> std : : string : : const_iterator hostPortFirst = authorityFirst ; <nl> - for ( ; userInfoLast ! = authorityLast ; + + userInfoLast ) { <nl> + for ( ; userInfoLast ! = authorityFirst - 1 ; - - userInfoLast ) { <nl> if ( * userInfoLast = = ' @ ' ) { <nl> hostPortFirst = userInfoLast ; <nl> + + hostPortFirst ; <nl> mmm a / test / RequestTest . cc <nl> ppp b / test / RequestTest . cc <nl> void RequestTest : : testSetUri_zeroUsername ( ) <nl> void RequestTest : : testSetUri_username ( ) <nl> { <nl> Request req ; <nl> - CPPUNIT_ASSERT ( req . setUri ( " ftp : / / aria2user @ localhost / download / aria2 - 1 . 0 . 0 . tar . bz2 " ) ) ; <nl> + CPPUNIT_ASSERT ( req . setUri ( " ftp : / / aria2 @ user @ localhost / download / aria2 - 1 . 0 . 0 . tar . bz2 " ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " ftp " ) , req . getProtocol ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( ( uint16_t ) 21 , req . getPort ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " localhost " ) , req . getHost ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " / download " ) , req . 
getDir ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2 - 1 . 0 . 0 . tar . bz2 " ) , req . getFile ( ) ) ; <nl> - CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2user " ) , req . getUsername ( ) ) ; <nl> + CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2 @ user " ) , req . getUsername ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " " ) , req . getPassword ( ) ) ; <nl> } <nl> <nl> void RequestTest : : testSetUri_usernamePassword ( ) <nl> { <nl> Request req ; <nl> - CPPUNIT_ASSERT ( req . setUri ( " ftp : / / aria2user % 40 : aria2pass % 40 @ localhost / download / aria2 - 1 . 0 . 0 . tar . bz2 " ) ) ; <nl> + CPPUNIT_ASSERT ( req . setUri ( " ftp : / / aria2 @ user % 40 : aria2 @ pass % 40 @ localhost / download / aria2 - 1 . 0 . 0 . tar . bz2 " ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " ftp " ) , req . getProtocol ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( ( uint16_t ) 21 , req . getPort ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " localhost " ) , req . getHost ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " / download " ) , req . getDir ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2 - 1 . 0 . 0 . tar . bz2 " ) , req . getFile ( ) ) ; <nl> - CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2user @ " ) , req . getUsername ( ) ) ; <nl> - CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2pass @ " ) , req . getPassword ( ) ) ; <nl> + CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2 @ user @ " ) , req . getUsername ( ) ) ; <nl> + CPPUNIT_ASSERT_EQUAL ( std : : string ( " aria2 @ pass @ " ) , req . getPassword ( ) ) ; <nl> <nl> / / make sure that after new uri is set , username and password are updated . <nl> CPPUNIT_ASSERT ( req . setUri ( " ftp : / / localhost / download / aria2 - 1 . 0 . 0 . tar . bz2 " ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " " ) , req . getUsername ( ) ) ; <nl> CPPUNIT_ASSERT_EQUAL ( std : : string ( " " ) , req . 
getPassword ( ) ) ; <nl> - <nl> } <nl> <nl> void RequestTest : : testSetUri_supportsPersistentConnection ( ) <nl>
|
2010 - 09 - 08 Tatsuhiro Tsujikawa < t - tujikawa @ users . sourceforge . net >
|
aria2/aria2
|
bf9fd473bb18531ad01059d9ff7341a23add9f1e
|
2010-09-08T14:35:30Z
|
new file mode 100644 <nl> index 00000000000 . . 6482706ec83 <nl> mmm / dev / null <nl> ppp b / hphp / hack / test / integration_ml / test_infer_type . ml <nl> <nl> + ( * * <nl> + * Copyright ( c ) 2016 , Facebook , Inc . <nl> + * All rights reserved . <nl> + * <nl> + * This source code is licensed under the BSD - style license found in the <nl> + * LICENSE file in the " hack " directory of this source tree . An additional grant <nl> + * of patent rights can be found in the PATENTS file in the same directory . <nl> + * <nl> + * ) <nl> + <nl> + open Core <nl> + <nl> + module Test = Integration_test_base <nl> + <nl> + let id = " < ? hh / / strict <nl> + function id ( int $ x ) : int { <nl> + return $ x ; <nl> + / / ^ 3 : 10 <nl> + } <nl> + " <nl> + <nl> + let id_cases = [ <nl> + ( " id . php " , 3 , 10 ) , " int " ; <nl> + ] <nl> + <nl> + let class_A = " < ? hh / / strict <nl> + class A { <nl> + public function __construct ( <nl> + private int $ id , <nl> + ) { } <nl> + public function getId ( ) : int { <nl> + return $ this - > id ; <nl> + / / ^ 7 : 12 ^ 7 : 19 <nl> + } <nl> + } <nl> + " <nl> + <nl> + let class_A_cases = [ <nl> + ( " A . php " , 7 , 12 ) , " < static > " ; <nl> + ( " A . php " , 7 , 19 ) , " int " ; <nl> + ] <nl> + <nl> + let pair = " < ? hh / / strict <nl> + class Pair < T > { <nl> + private T $ fst ; <nl> + private T $ snd ; <nl> + <nl> + public function __construct ( T $ fst , T $ snd ) { <nl> + $ this - > fst = $ fst ; <nl> + $ this - > snd = $ snd ; <nl> + } <nl> + public function getFst ( ) : T { <nl> + return $ this - > fst ; <nl> + } <nl> + public function setFst ( T $ fst ) : void { <nl> + $ this - > fst = $ fst ; <nl> + } <nl> + public function getSnd ( ) : T { <nl> + return $ this - > snd ; <nl> + } <nl> + public function setSnd ( T $ snd ) : void { <nl> + $ this - > snd = $ snd ; <nl> + } <nl> + } <nl> + " <nl> + <nl> + let test_pair = " < ? 
hh / / strict <nl> + class B extends A { } <nl> + class C extends A { } <nl> + <nl> + function test_pair ( Pair < A > $ v ) : Pair < A > { <nl> + $ c = $ v - > getSnd ( ) ; <nl> + / / ^ 6 : 8 ^ 6 : 15 <nl> + $ v = new Pair ( new B ( 1 ) , new C ( 2 ) ) ; <nl> + / / ^ 8 : 4 ^ 8 : 17 <nl> + $ v - > setFst ( $ c ) ; <nl> + / / ^ 10 : 14 <nl> + return test_pair ( $ v ) ; <nl> + / / ^ 12 : 10 ^ 12 : 20 <nl> + } <nl> + " <nl> + <nl> + let test_pair_cases = [ <nl> + ( " test_pair . php " , 6 , 8 ) , " Pair < A > " ; <nl> + ( " test_pair . php " , 6 , 15 ) , " A " ; <nl> + ( " test_pair . php " , 8 , 4 ) , " Pair " ; <nl> + ( " test_pair . php " , 8 , 17 ) , " B " ; <nl> + ( " test_pair . php " , 10 , 14 ) , " A " ; <nl> + ( " test_pair . php " , 12 , 10 ) , " Pair < A > " ; <nl> + ( " test_pair . php " , 12 , 20 ) , " Pair " ; <nl> + ] <nl> + <nl> + let files = [ <nl> + " id . php " , id ; <nl> + " A . php " , class_A ; <nl> + " Pair . php " , pair ; <nl> + " test_pair . php " , test_pair ; <nl> + ] <nl> + <nl> + let cases = <nl> + id_cases <nl> + @ class_A_cases <nl> + @ test_pair_cases <nl> + <nl> + let ( ) = <nl> + let env = Test . setup_server ( ) in <nl> + let env = Test . setup_disk env files in <nl> + <nl> + Test . assert_no_errors env ; <nl> + <nl> + List . iter cases ~ f : begin fun ( ( file , line , col ) , expected_type ) - > <nl> + let fn = ServerUtils . FileName ( " / " ^ file ) in <nl> + let ty = ServerInferType . go env ( fn , line , col ) in <nl> + let ty_name , _ty_json = <nl> + match ty with <nl> + | Some ty - > ty <nl> + | None - > <nl> + Test . fail ( Printf . sprintf " No type inferred at % s : % d : % d " file line col ) ; <nl> + failwith " unreachable " <nl> + in <nl> + Test . assertEqual expected_type ty_name <nl> + end <nl>
|
Add tests for ServerInferType
|
facebook/hhvm
|
25df18a64c882ea555a2371ea91c373a920e90d5
|
2017-09-27T22:01:32Z
|
mmm a / fmt / format . h <nl> ppp b / fmt / format . h <nl> const Char * pointer_from ( null_terminating_iterator < Char > it ) ; <nl> template < typename Char > <nl> class null_terminating_iterator { <nl> public : <nl> - typedef Char value_type ; <nl> - typedef std : : ptrdiff_t difference_type ; <nl> + using difference_type = std : : ptrdiff_t ; <nl> + using value_type = Char ; <nl> + using pointer = const Char * ; <nl> + using reference = const Char & ; <nl> + using iterator_category = std : : random_access_iterator_tag ; <nl> <nl> null_terminating_iterator ( ) : ptr_ ( 0 ) , end_ ( 0 ) { } <nl> <nl> class dynamic_specs_handler : <nl> <nl> template < typename Iterator , typename Handler > <nl> Iterator parse_arg_id ( Iterator it , Handler handler ) { <nl> - typedef typename Iterator : : value_type char_type ; <nl> + using char_type = typename std : : iterator_traits < Iterator > : : value_type ; <nl> char_type c = * it ; <nl> if ( c = = ' } ' | | c = = ' : ' ) { <nl> handler ( ) ; <nl>
|
Make null_terminating_iterator more iteratory
|
fmtlib/fmt
|
be5b4552d97e8d578cfe6fc0d83802bac857f6d6
|
2017-09-28T05:40:58Z
|
mmm a / modules / highgui / src / cap_ffmpeg_impl_v2 . hpp <nl> ppp b / modules / highgui / src / cap_ffmpeg_impl_v2 . hpp <nl> bool CvCapture_FFMPEG : : grabFrame ( ) <nl> break ; <nl> } <nl> <nl> - if ( packet . data ) <nl> + / * if ( packet . data ) <nl> { <nl> av_free_packet ( & packet ) ; <nl> packet . data = NULL ; <nl> - } <nl> + } * / <nl> } <nl> <nl> if ( valid & & first_frame_number < 0 ) <nl>
|
temporary reverted av_free_packet ( ) patch .
|
opencv/opencv
|
8bab09de079d760856de412f5fbab9814552fc9e
|
2012-04-28T16:18:39Z
|
new file mode 100644 <nl> index 00000000000 . . 826a4e0bccc <nl> Binary files / dev / null and b / docs / specs / images / Apollo6 . 0_perception_detail . png differ <nl> mmm a / modules / perception / README . md <nl> ppp b / modules / perception / README . md <nl> The general architecture of the perception module is shown : <nl> ! [ ] ( https : / / github . com / ApolloAuto / apollo / blob / master / docs / specs / images / Apollo3 . 5_perception_sensor_based . png ) <nl> <nl> The detailed perception modules are displayed below . <nl> - ! [ ] ( https : / / github . com / ApolloAuto / apollo / blob / master / docs / specs / images / Apollo3 . 5_perception_detail . png ) <nl> + ! [ ] ( https : / / github . com / ApolloAuto / apollo / blob / master / docs / specs / images / Apollo6 . 0_perception_detail . png ) <nl> <nl> # # Input <nl> <nl>
|
Perception : update a picture in Perception README ( )
|
ApolloAuto/apollo
|
912d24071cb605a69f869b691581958e6265fa32
|
2020-09-21T11:04:13Z
|
mmm a / xbmc / cores / VideoPlayer / DVDCodecs / Video / VAAPI . cpp <nl> ppp b / xbmc / cores / VideoPlayer / DVDCodecs / Video / VAAPI . cpp <nl> IHardwareDecoder * CDecoder : : Create ( CDVDStreamInfo & hint , CProcessInfo & processIn <nl> return nullptr ; <nl> } <nl> <nl> - void CDecoder : : Register ( EGLDisplay eglDisplay ) <nl> + void CDecoder : : Register ( bool hevc ) <nl> { <nl> CVaapiConfig config ; <nl> if ( ! CVAAPIContext : : EnsureContext ( & config . context , nullptr ) ) <nl> return ; <nl> <nl> - config . dpy = config . context - > GetDisplay ( ) ; <nl> - config . surfaceWidth = 1920 ; <nl> - config . surfaceHeight = 1080 ; <nl> - config . profile = VAProfileH264Main ; <nl> - config . attrib = config . context - > GetAttrib ( config . profile ) ; <nl> - if ( ( config . attrib . value & ( VA_RT_FORMAT_YUV420 | VA_RT_FORMAT_YUV420_10BPP ) ) = = 0 ) <nl> - { <nl> - config . context - > Release ( nullptr ) ; <nl> - return ; <nl> - } <nl> - <nl> - config . configId = config . context - > CreateConfig ( config . profile , config . attrib ) ; <nl> - if ( config . configId = = VA_INVALID_ID ) <nl> - { <nl> - config . context - > Release ( nullptr ) ; <nl> - return ; <nl> - } <nl> - <nl> - / / create surfaces <nl> - VASurfaceID surface ; <nl> - VAStatus status ; <nl> - VAImage image ; <nl> - VABufferInfo bufferInfo ; <nl> - <nl> - if ( vaCreateSurfaces ( config . dpy , VA_RT_FORMAT_YUV420 , <nl> - config . surfaceWidth , config . surfaceHeight , <nl> - & surface , 1 , NULL , 0 ) ! = VA_STATUS_SUCCESS ) <nl> - { <nl> - config . context - > Release ( nullptr ) ; <nl> - return ; <nl> - } <nl> - <nl> - / / check interop <nl> - PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR = ( PFNEGLCREATEIMAGEKHRPROC ) eglGetProcAddress ( " eglCreateImageKHR " ) ; <nl> - PFNEGLDESTROYIMAGEKHRPROC eglDestroyImageKHR = ( PFNEGLDESTROYIMAGEKHRPROC ) eglGetProcAddress ( " eglDestroyImageKHR " ) ; <nl> - if ( ! eglCreateImageKHR | | ! eglDestroyImageKHR ) <nl> - { <nl> - config . 
context - > Release ( nullptr ) ; <nl> - return ; <nl> - } <nl> - <nl> - status = vaDeriveImage ( config . dpy , surface , & image ) ; <nl> - if ( status = = VA_STATUS_SUCCESS ) <nl> - { <nl> - memset ( & bufferInfo , 0 , sizeof ( bufferInfo ) ) ; <nl> - bufferInfo . mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME ; <nl> - status = vaAcquireBufferHandle ( config . dpy , image . buf , & bufferInfo ) ; <nl> - if ( status = = VA_STATUS_SUCCESS ) <nl> - { <nl> - EGLImageKHR eglImage ; <nl> - GLint attribs [ 23 ] , * attrib ; <nl> - <nl> - attrib = attribs ; <nl> - * attrib + + = EGL_LINUX_DRM_FOURCC_EXT ; <nl> - * attrib + + = fourcc_code ( ' R ' , ' 8 ' , ' ' , ' ' ) ; <nl> - * attrib + + = EGL_WIDTH ; <nl> - * attrib + + = image . width ; <nl> - * attrib + + = EGL_HEIGHT ; <nl> - * attrib + + = image . height ; <nl> - * attrib + + = EGL_DMA_BUF_PLANE0_FD_EXT ; <nl> - * attrib + + = ( intptr_t ) bufferInfo . handle ; <nl> - * attrib + + = EGL_DMA_BUF_PLANE0_OFFSET_EXT ; <nl> - * attrib + + = image . offsets [ 0 ] ; <nl> - * attrib + + = EGL_DMA_BUF_PLANE0_PITCH_EXT ; <nl> - * attrib + + = image . pitches [ 0 ] ; <nl> - * attrib + + = EGL_NONE ; <nl> - eglImage = eglCreateImageKHR ( eglDisplay , <nl> - EGL_NO_CONTEXT , EGL_LINUX_DMA_BUF_EXT , ( EGLClientBuffer ) NULL , <nl> - attribs ) ; <nl> - if ( eglImage ) <nl> - { <nl> - eglDestroyImageKHR ( eglDisplay , eglImage ) ; <nl> - m_capGeneral = true ; <nl> - CDVDFactoryCodec : : RegisterHWAccel ( " vaapi " , CDecoder : : Create ) ; <nl> - } <nl> - <nl> - } <nl> - vaDestroyImage ( config . dpy , image . image_id ) ; <nl> - } <nl> - vaDestroySurfaces ( config . dpy , & surface , 1 ) ; <nl> - <nl> - / / check hevc <nl> - / / create surfaces <nl> - if ( vaCreateSurfaces ( config . dpy , VA_RT_FORMAT_YUV420_10BPP , <nl> - config . surfaceWidth , config . surfaceHeight , <nl> - & surface , 1 , NULL , 0 ) ! = VA_STATUS_SUCCESS ) <nl> - { <nl> - config . 
context - > Release ( nullptr ) ; <nl> - return ; <nl> - } <nl> - <nl> - / / check interop <nl> - status = vaDeriveImage ( config . dpy , surface , & image ) ; <nl> - if ( status = = VA_STATUS_SUCCESS ) <nl> - { <nl> - memset ( & bufferInfo , 0 , sizeof ( bufferInfo ) ) ; <nl> - bufferInfo . mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME ; <nl> - status = vaAcquireBufferHandle ( config . dpy , image . buf , & bufferInfo ) ; <nl> - if ( status = = VA_STATUS_SUCCESS ) <nl> - { <nl> - EGLImageKHR eglImage ; <nl> - GLint attribs [ 23 ] , * attrib ; <nl> - <nl> - attrib = attribs ; <nl> - * attrib + + = EGL_LINUX_DRM_FOURCC_EXT ; <nl> - * attrib + + = fourcc_code ( ' G ' , ' R ' , ' 3 ' , ' 2 ' ) ; <nl> - * attrib + + = EGL_WIDTH ; <nl> - * attrib + + = ( image . width + 1 ) > > 1 ; <nl> - * attrib + + = EGL_HEIGHT ; <nl> - * attrib + + = ( image . height + 1 ) > > 1 ; <nl> - * attrib + + = EGL_DMA_BUF_PLANE0_FD_EXT ; <nl> - * attrib + + = ( intptr_t ) bufferInfo . handle ; <nl> - * attrib + + = EGL_DMA_BUF_PLANE0_OFFSET_EXT ; <nl> - * attrib + + = image . offsets [ 1 ] ; <nl> - * attrib + + = EGL_DMA_BUF_PLANE0_PITCH_EXT ; <nl> - * attrib + + = image . pitches [ 1 ] ; <nl> - * attrib + + = EGL_NONE ; <nl> - eglImage = eglCreateImageKHR ( eglDisplay , <nl> - EGL_NO_CONTEXT , EGL_LINUX_DMA_BUF_EXT , ( EGLClientBuffer ) NULL , <nl> - attribs ) ; <nl> - if ( eglImage ) <nl> - { <nl> - eglDestroyImageKHR ( eglDisplay , eglImage ) ; <nl> - m_capHevc = true ; <nl> - } <nl> - <nl> - } <nl> - vaDestroyImage ( config . dpy , image . image_id ) ; <nl> - } <nl> - vaDestroySurfaces ( config . dpy , & surface , 1 ) ; <nl> - <nl> + m_capGeneral = true ; <nl> + m_capHevc = hevc ; <nl> + CDVDFactoryCodec : : RegisterHWAccel ( " vaapi " , CDecoder : : Create ) ; <nl> config . context - > Release ( nullptr ) ; <nl> } <nl> <nl> mmm a / xbmc / cores / VideoPlayer / DVDCodecs / Video / VAAPI . h <nl> ppp b / xbmc / cores / VideoPlayer / DVDCodecs / Video / VAAPI . 
h <nl> class CDecoder <nl> static int FFGetBuffer ( AVCodecContext * avctx , AVFrame * pic , int flags ) ; <nl> <nl> static IHardwareDecoder * Create ( CDVDStreamInfo & hint , CProcessInfo & processInfo , AVPixelFormat fmt ) ; <nl> - static void Register ( void * eglDisplay ) ; <nl> + static void Register ( bool hevc ) ; <nl> <nl> protected : <nl> void SetWidthHeight ( int width , int height ) ; <nl> mmm a / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGL . cpp <nl> ppp b / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGL . cpp <nl> <nl> * / <nl> <nl> # include " RendererVAAPIGL . h " <nl> - <nl> + # include " . . / RenderFactory . h " <nl> # include " cores / VideoPlayer / DVDCodecs / Video / VAAPI . h " <nl> # include " cores / VideoPlayer / DVDCodecs / DVDCodecUtils . h " <nl> # include " settings / Settings . h " <nl> <nl> <nl> using namespace VAAPI ; <nl> <nl> + CBaseRenderer * CRendererVAAPI : : Create ( CVideoBuffer * buffer ) <nl> + { <nl> + CVaapiRenderPicture * vb = dynamic_cast < CVaapiRenderPicture * > ( buffer ) ; <nl> + if ( vb ) <nl> + return new CRendererVAAPI ( ) ; <nl> + <nl> + return nullptr ; <nl> + } <nl> + <nl> + void CRendererVAAPI : : Register ( VADisplay vaDpy , EGLDisplay eglDisplay , bool & general , bool & hevc ) <nl> + { <nl> + general = CVaapiTexture : : TestInterop ( vaDpy , eglDisplay ) ; <nl> + hevc = CVaapiTexture : : TestInteropHevc ( vaDpy , eglDisplay ) ; <nl> + if ( general ) <nl> + VIDEOPLAYER : : CRendererFactory : : RegisterRenderer ( " vaapi " , CRendererVAAPI : : Create ) ; <nl> + } <nl> + <nl> CRendererVAAPI : : CRendererVAAPI ( ) = default ; <nl> <nl> CRendererVAAPI : : ~ CRendererVAAPI ( ) <nl> CRendererVAAPI : : ~ CRendererVAAPI ( ) <nl> } <nl> } <nl> <nl> - bool CRendererVAAPI : : HandlesVideoBuffer ( CVideoBuffer * buffer ) <nl> - { <nl> - CVaapiRenderPicture * pic = dynamic_cast < CVaapiRenderPicture * > ( buffer ) ; <nl> - if ( pic ) <nl> - return true ; <nl> - 
<nl> - return false ; <nl> - } <nl> - <nl> bool CRendererVAAPI : : Configure ( const VideoPicture & picture , float fps , unsigned flags , unsigned int orientation ) <nl> { <nl> CVaapiRenderPicture * pic = dynamic_cast < CVaapiRenderPicture * > ( picture . videoBuffer ) ; <nl> mmm a / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGL . h <nl> ppp b / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGL . h <nl> class CRendererVAAPI : public CLinuxRendererGL <nl> CRendererVAAPI ( ) ; <nl> virtual ~ CRendererVAAPI ( ) ; <nl> <nl> + static CBaseRenderer * Create ( CVideoBuffer * buffer ) ; <nl> + static void Register ( VADisplay vaDpy , EGLDisplay eglDisplay , bool & general , bool & hevc ) ; <nl> + <nl> virtual bool Configure ( const VideoPicture & picture , float fps , unsigned flags , unsigned int orientation ) override ; <nl> <nl> / / Player functions <nl> virtual bool ConfigChanged ( const VideoPicture & picture ) override ; <nl> - static bool HandlesVideoBuffer ( CVideoBuffer * buffer ) ; <nl> virtual void ReleaseBuffer ( int idx ) override ; <nl> bool NeedBuffer ( int idx ) override ; <nl> <nl> mmm a / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGLES . cpp <nl> ppp b / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGLES . cpp <nl> <nl> * / <nl> <nl> # include " RendererVAAPIGLES . h " <nl> - <nl> + # include " . . / RenderFactory . h " <nl> # include " cores / VideoPlayer / DVDCodecs / Video / VAAPI . h " <nl> # include " cores / VideoPlayer / DVDCodecs / DVDCodecUtils . h " <nl> # include " settings / Settings . 
h " <nl> CRendererVAAPI : : ~ CRendererVAAPI ( ) <nl> } <nl> } <nl> <nl> - bool CRendererVAAPI : : HandlesVideoBuffer ( CVideoBuffer * buffer ) <nl> - { <nl> - CVaapiRenderPicture * pic = dynamic_cast < CVaapiRenderPicture * > ( buffer ) ; <nl> - if ( pic ) <nl> - return true ; <nl> - <nl> - return false ; <nl> - } <nl> - <nl> bool CRendererVAAPI : : Configure ( const VideoPicture & picture , float fps , unsigned flags , unsigned int orientation ) <nl> { <nl> CVaapiRenderPicture * pic = dynamic_cast < CVaapiRenderPicture * > ( picture . videoBuffer ) ; <nl> mmm a / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGLES . h <nl> ppp b / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / RendererVAAPIGLES . h <nl> class CRendererVAAPI : public CLinuxRendererGLES <nl> CRendererVAAPI ( ) ; <nl> virtual ~ CRendererVAAPI ( ) ; <nl> <nl> + static CBaseRenderer * Create ( CVideoBuffer * buffer ) ; <nl> + static void Register ( VADisplay vaDpy , EGLDisplay eglDisplay , bool & general , bool & hevc ) ; <nl> + <nl> virtual bool Configure ( const VideoPicture & picture , float fps , unsigned flags , unsigned int orientation ) override ; <nl> <nl> / / Player functions <nl> virtual bool ConfigChanged ( const VideoPicture & picture ) override ; <nl> - static bool HandlesVideoBuffer ( CVideoBuffer * buffer ) ; <nl> virtual void ReleaseBuffer ( int idx ) override ; <nl> bool NeedBuffer ( int idx ) override ; <nl> <nl> mmm a / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / VaapiEGL . cpp <nl> ppp b / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / VaapiEGL . 
cpp <nl> void CVaapiTexture : : Unmap ( ) <nl> m_vaapiPic - > Release ( ) ; <nl> m_vaapiPic = nullptr ; <nl> } <nl> + <nl> + bool CVaapiTexture : : TestInterop ( VADisplay vaDpy , EGLDisplay eglDisplay ) <nl> + { <nl> + bool ret = false ; <nl> + <nl> + int major_version , minor_version ; <nl> + vaInitialize ( vaDpy , & major_version , & minor_version ) ; <nl> + <nl> + int width = 1920 ; <nl> + int height = 1080 ; <nl> + <nl> + / / create surfaces <nl> + VASurfaceID surface ; <nl> + VAStatus status ; <nl> + VAImage image ; <nl> + VABufferInfo bufferInfo ; <nl> + <nl> + if ( vaCreateSurfaces ( vaDpy , VA_RT_FORMAT_YUV420 , <nl> + width , height , <nl> + & surface , 1 , NULL , 0 ) ! = VA_STATUS_SUCCESS ) <nl> + { <nl> + vaTerminate ( vaDpy ) ; <nl> + return false ; <nl> + } <nl> + <nl> + / / check interop <nl> + PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR = ( PFNEGLCREATEIMAGEKHRPROC ) eglGetProcAddress ( " eglCreateImageKHR " ) ; <nl> + PFNEGLDESTROYIMAGEKHRPROC eglDestroyImageKHR = ( PFNEGLDESTROYIMAGEKHRPROC ) eglGetProcAddress ( " eglDestroyImageKHR " ) ; <nl> + if ( ! eglCreateImageKHR | | ! eglDestroyImageKHR ) <nl> + { <nl> + vaTerminate ( vaDpy ) ; <nl> + return false ; <nl> + } <nl> + <nl> + status = vaDeriveImage ( vaDpy , surface , & image ) ; <nl> + if ( status = = VA_STATUS_SUCCESS ) <nl> + { <nl> + memset ( & bufferInfo , 0 , sizeof ( bufferInfo ) ) ; <nl> + bufferInfo . mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME ; <nl> + status = vaAcquireBufferHandle ( vaDpy , image . buf , & bufferInfo ) ; <nl> + if ( status = = VA_STATUS_SUCCESS ) <nl> + { <nl> + EGLImageKHR eglImage ; <nl> + GLint attribs [ 23 ] , * attrib ; <nl> + <nl> + attrib = attribs ; <nl> + * attrib + + = EGL_LINUX_DRM_FOURCC_EXT ; <nl> + * attrib + + = fourcc_code ( ' R ' , ' 8 ' , ' ' , ' ' ) ; <nl> + * attrib + + = EGL_WIDTH ; <nl> + * attrib + + = image . width ; <nl> + * attrib + + = EGL_HEIGHT ; <nl> + * attrib + + = image . 
height ; <nl> + * attrib + + = EGL_DMA_BUF_PLANE0_FD_EXT ; <nl> + * attrib + + = ( intptr_t ) bufferInfo . handle ; <nl> + * attrib + + = EGL_DMA_BUF_PLANE0_OFFSET_EXT ; <nl> + * attrib + + = image . offsets [ 0 ] ; <nl> + * attrib + + = EGL_DMA_BUF_PLANE0_PITCH_EXT ; <nl> + * attrib + + = image . pitches [ 0 ] ; <nl> + * attrib + + = EGL_NONE ; <nl> + eglImage = eglCreateImageKHR ( eglDisplay , <nl> + EGL_NO_CONTEXT , EGL_LINUX_DMA_BUF_EXT , ( EGLClientBuffer ) NULL , <nl> + attribs ) ; <nl> + if ( eglImage ) <nl> + { <nl> + eglDestroyImageKHR ( eglDisplay , eglImage ) ; <nl> + ret = true ; <nl> + } <nl> + <nl> + } <nl> + vaDestroyImage ( vaDpy , image . image_id ) ; <nl> + } <nl> + vaDestroySurfaces ( vaDpy , & surface , 1 ) ; <nl> + vaTerminate ( vaDpy ) ; <nl> + <nl> + return ret ; <nl> + } <nl> + <nl> + bool CVaapiTexture : : TestInteropHevc ( VADisplay vaDpy , EGLDisplay eglDisplay ) <nl> + { <nl> + bool ret = false ; <nl> + <nl> + int major_version , minor_version ; <nl> + vaInitialize ( vaDpy , & major_version , & minor_version ) ; <nl> + <nl> + int width = 1920 ; <nl> + int height = 1080 ; <nl> + <nl> + / / create surfaces <nl> + VASurfaceID surface ; <nl> + VAStatus status ; <nl> + VAImage image ; <nl> + VABufferInfo bufferInfo ; <nl> + <nl> + if ( vaCreateSurfaces ( vaDpy , VA_RT_FORMAT_YUV420_10BPP , <nl> + width , height , <nl> + & surface , 1 , NULL , 0 ) ! = VA_STATUS_SUCCESS ) <nl> + { <nl> + vaTerminate ( vaDpy ) ; <nl> + return ret ; <nl> + } <nl> + <nl> + PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR = ( PFNEGLCREATEIMAGEKHRPROC ) eglGetProcAddress ( " eglCreateImageKHR " ) ; <nl> + PFNEGLDESTROYIMAGEKHRPROC eglDestroyImageKHR = ( PFNEGLDESTROYIMAGEKHRPROC ) eglGetProcAddress ( " eglDestroyImageKHR " ) ; <nl> + if ( ! eglCreateImageKHR | | ! 
eglDestroyImageKHR ) <nl> + { <nl> + vaTerminate ( vaDpy ) ; <nl> + return false ; <nl> + } <nl> + <nl> + / / check interop <nl> + status = vaDeriveImage ( vaDpy , surface , & image ) ; <nl> + if ( status = = VA_STATUS_SUCCESS ) <nl> + { <nl> + memset ( & bufferInfo , 0 , sizeof ( bufferInfo ) ) ; <nl> + bufferInfo . mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME ; <nl> + status = vaAcquireBufferHandle ( vaDpy , image . buf , & bufferInfo ) ; <nl> + if ( status = = VA_STATUS_SUCCESS ) <nl> + { <nl> + EGLImageKHR eglImage ; <nl> + GLint attribs [ 23 ] , * attrib ; <nl> + <nl> + attrib = attribs ; <nl> + * attrib + + = EGL_LINUX_DRM_FOURCC_EXT ; <nl> + * attrib + + = fourcc_code ( ' G ' , ' R ' , ' 3 ' , ' 2 ' ) ; <nl> + * attrib + + = EGL_WIDTH ; <nl> + * attrib + + = ( image . width + 1 ) > > 1 ; <nl> + * attrib + + = EGL_HEIGHT ; <nl> + * attrib + + = ( image . height + 1 ) > > 1 ; <nl> + * attrib + + = EGL_DMA_BUF_PLANE0_FD_EXT ; <nl> + * attrib + + = ( intptr_t ) bufferInfo . handle ; <nl> + * attrib + + = EGL_DMA_BUF_PLANE0_OFFSET_EXT ; <nl> + * attrib + + = image . offsets [ 1 ] ; <nl> + * attrib + + = EGL_DMA_BUF_PLANE0_PITCH_EXT ; <nl> + * attrib + + = image . pitches [ 1 ] ; <nl> + * attrib + + = EGL_NONE ; <nl> + eglImage = eglCreateImageKHR ( eglDisplay , <nl> + EGL_NO_CONTEXT , EGL_LINUX_DMA_BUF_EXT , ( EGLClientBuffer ) NULL , <nl> + attribs ) ; <nl> + if ( eglImage ) <nl> + { <nl> + eglDestroyImageKHR ( eglDisplay , eglImage ) ; <nl> + ret = true ; <nl> + } <nl> + <nl> + } <nl> + vaDestroyImage ( vaDpy , image . image_id ) ; <nl> + } <nl> + vaDestroySurfaces ( vaDpy , & surface , 1 ) ; <nl> + vaTerminate ( vaDpy ) ; <nl> + <nl> + return ret ; <nl> + } <nl> + <nl> mmm a / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / VaapiEGL . h <nl> ppp b / xbmc / cores / VideoPlayer / VideoRenderers / HwDecRender / VaapiEGL . 
h <nl> class CVaapiTexture <nl> bool Map ( CVaapiRenderPicture * pic ) ; <nl> void Unmap ( ) ; <nl> void Init ( InteropInfo & interop ) ; <nl> + static bool TestInterop ( VADisplay vaDpy , EGLDisplay eglDisplay ) ; <nl> + static bool TestInteropHevc ( VADisplay vaDpy , EGLDisplay eglDisplay ) ; <nl> <nl> GLuint m_texture = 0 ; <nl> GLuint m_textureY = 0 ; <nl>
|
VideoPlayer : vaapi - move egl interop check to renderer
|
xbmc/xbmc
|
6a01b4c526fff20af4d6d89fe56ef99560d0a6c3
|
2017-07-10T18:57:40Z
|
mmm a / boards . txt <nl> ppp b / boards . txt <nl> espduino . build . mcu = esp8266 <nl> espduino . build . f_cpu = 80000000L <nl> espduino . build . board = ESP8266_ESP13 <nl> espduino . build . core = esp8266 <nl> - espduino . build . variant = espduino <nl> + espduino . build . variant = ESPDuino <nl> espduino . build . flash_mode = dio <nl> espduino . build . flash_size = 4M <nl> espduino . build . flash_freq = 40 <nl>
|
Fix case - insensitive ESPDuino file name
|
esp8266/Arduino
|
d118b4447ad686ca09b22edc137d09c47fbb07eb
|
2016-03-05T15:02:18Z
|
mmm a / spec / api - web - contents - spec . js <nl> ppp b / spec / api - web - contents - spec . js <nl> describe ( ' webContents module ' , function ( ) { <nl> describe ( ' getAllWebContents ( ) API ' , function ( ) { <nl> it ( ' returns an array of web contents ' , function ( done ) { <nl> w . webContents . on ( ' devtools - opened ' , function ( ) { <nl> - assert . equal ( webContents . getAllWebContents ( ) . length , 4 ) <nl> + const all = webContents . getAllWebContents ( ) . sort ( function ( a , b ) { <nl> + return a . getId ( ) - b . getId ( ) <nl> + } ) <nl> <nl> - assert . equal ( webContents . getAllWebContents ( ) [ 0 ] . getType ( ) , ' remote ' ) <nl> - assert . equal ( webContents . getAllWebContents ( ) [ 1 ] . getType ( ) , ' webview ' ) <nl> - assert . equal ( webContents . getAllWebContents ( ) [ 2 ] . getType ( ) , ' window ' ) <nl> - assert . equal ( webContents . getAllWebContents ( ) [ 3 ] . getType ( ) , ' window ' ) <nl> + assert . equal ( all . length , 4 ) <nl> + assert . equal ( all [ 0 ] . getType ( ) , ' window ' ) <nl> + assert . equal ( all [ 1 ] . getType ( ) , ' window ' ) <nl> + assert . equal ( all [ 2 ] . getType ( ) , ' remote ' ) <nl> + assert . equal ( all [ 3 ] . getType ( ) , ' webview ' ) <nl> <nl> done ( ) <nl> } ) <nl>
|
Sort contents by id for consistent ordering
|
electron/electron
|
a4001fbc550239722240e17a67cd95e334912ad5
|
2016-07-14T16:41:10Z
|
mmm a / tensorflow / python / framework / ops . py <nl> ppp b / tensorflow / python / framework / ops . py <nl> def device ( self , device_name_or_function ) : <nl> * If it is None , all ` device ( ) ` invocations from the enclosing context <nl> will be ignored . <nl> <nl> + For information about the valid syntax of device name strings , see <nl> + the documentation in <nl> + [ ` DeviceNameUtils ` ] ( https : / / www . tensorflow . org / code / tensorflow / core / util / device_name_utils . h ) . <nl> + <nl> For example : <nl> <nl> ` ` ` python <nl>
|
Point documentation of Device name strings in tf . device to the
|
tensorflow/tensorflow
|
8c158c76ec2815d7470413e98af25ee87ead350d
|
2016-05-10T17:10:52Z
|
mmm a / tools / android / packaging / Makefile <nl> ppp b / tools / android / packaging / Makefile <nl> extras : libs <nl> find ` pwd ` / xbmc / assets / - depth - name " . git " - exec rm - rf { } \ ; <nl> find ` pwd ` / xbmc / assets / system / - name " * . so " - exec rm { } \ ; <nl> find ` pwd ` / xbmc / assets / addons / skin . * / media / * - depth - not - iname " Textures . xbt " - exec rm - rf { } \ ; <nl> + find ` pwd ` / xbmc / assets / system / keymaps / - depth - name " joystick * . xml " ! - name " joystick . xml " - exec rm { } \ ; <nl> @ echo " native_arch = $ ( ARCH ) " > xbmc / res / raw / xbmc . properties <nl> cd xbmc / assets / addons ; rm - rf $ ( EXCLUDED_ADDONS ) <nl> cp - rfp $ ( PREFIX ) / lib / python2 . 6 xbmc / assets / python2 . 6 / lib / <nl>
|
FIX : [ droid ] do not package non - standard joystick keymap
|
xbmc/xbmc
|
71493aaaac6d68362622862a70c65c5aa2751b8b
|
2013-11-17T09:40:36Z
|
mmm a / src / webpage . cpp <nl> ppp b / src / webpage . cpp <nl> void WebPage : : sendEvent ( const QString & type , const QVariant & arg1 , const QVarian <nl> / / this is the case for e . g . sendEvent ( " . . . " , ' A ' ) <nl> / / but also works with sendEvent ( " . . . " , " ABCD " ) <nl> foreach ( const QChar typeChar , arg1 . toString ( ) ) { <nl> - sendEvent ( " keydown " , typeChar , NULL , NULL , modifierArg ) ; <nl> - sendEvent ( " keyup " , typeChar , NULL , NULL , modifierArg ) ; <nl> + sendEvent ( " keydown " , typeChar , QVariant ( ) , QString ( ) , modifierArg ) ; <nl> + sendEvent ( " keyup " , typeChar , QVariant ( ) , QString ( ) , modifierArg ) ; <nl> } <nl> } else { <nl> / / otherwise we assume a raw integer char - code was given <nl> - sendEvent ( " keydown " , arg1 . toInt ( ) , NULL , NULL , modifierArg ) ; <nl> - sendEvent ( " keyup " , arg1 . toInt ( ) , NULL , NULL , modifierArg ) ; <nl> + sendEvent ( " keydown " , arg1 . toInt ( ) , QVariant ( ) , QString ( ) , modifierArg ) ; <nl> + sendEvent ( " keyup " , arg1 . toInt ( ) , QVariant ( ) , QString ( ) , modifierArg ) ; <nl> } <nl> return ; <nl> } <nl>
|
Fix compilation issue on Ubuntu .
|
ariya/phantomjs
|
f2628b32fedf457094ffe3ddbec47915ddfc03d1
|
2012-11-01T03:38:16Z
|
mmm a / Marlin / Marlin_main . cpp <nl> ppp b / Marlin / Marlin_main . cpp <nl> static uint8_t target_extruder ; <nl> ; <nl> # endif <nl> <nl> - # if ENABLED ( ULTIPANEL ) & & HAS_CASE_LIGHT <nl> + # if HAS_CASE_LIGHT <nl> bool case_light_on = <nl> # if ENABLED ( CASE_LIGHT_DEFAULT_ON ) <nl> true <nl>
|
Fix for the PR ( Case light menu ( 3rd attempt ) )
|
MarlinFirmware/Marlin
|
88157ba52902c4bebd0e259bc5a0b54b7b605c3d
|
2016-12-15T15:57:32Z
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.