'Define name for referencing matching tokens as a nested attribute of the returned parse results. NOTE: this returns a *copy* of the original C{ParserElement} object; this is so that the client can define a basic element, such as an integer, and reference it in multiple places with different names. You can also set results names using the abbreviated syntax, C{expr("name")} in place of C{expr.setResultsName("name")} - see L{I{__call__}<__call__>}.'
def setResultsName(self, name, listAllMatches=False):
    newself = self.copy()
    if name.endswith('*'):
        name = name[:-1]
        listAllMatches = True
    newself.resultsName = name
    newself.modalResults = not listAllMatches
    return newself
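# Usage sketch, not part of the original source (assumes pyparsing is
# installed): results names make matched tokens addressable as attributes
# of the returned ParseResults.
from pyparsing import Word, alphas, nums

name = Word(alphas).setResultsName('first')
value = Word(nums)('value')                    # abbreviated __call__ form
result = (name + value).parseString('abc 123')
print(result.first)    # -> 'abc'
print(result.value)    # -> '123'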
'Method to invoke the Python pdb debugger when this element is about to be parsed. Set C{breakFlag} to True to enable, False to disable.'
def setBreak(self, breakFlag=True):
    if breakFlag:
        _parseMethod = self._parse
        def breaker(instring, loc, doActions=True, callPreParse=True):
            import pdb
            pdb.set_trace()
            return _parseMethod(instring, loc, doActions, callPreParse)
        breaker._originalParseMethod = _parseMethod
        self._parse = breaker
    elif hasattr(self._parse, '_originalParseMethod'):
        self._parse = self._parse._originalParseMethod
    return self
'Define action to perform when successfully matching parse element definition. Parse action fn is a callable method with 0-3 arguments, called as C{fn(s,loc,toks)}, C{fn(loc,toks)}, C{fn(toks)}, or just C{fn()}, where: - s = the original string being parsed (see note below) - loc = the location of the matching substring - toks = a list of the matched tokens, packaged as a C{L{ParseResults}} object If the functions in fns modify the tokens, they can return them as the return value from fn, and the modified list of tokens will replace the original. Otherwise, fn does not need to return any value. Note: the default parsing behavior is to expand tabs in the input string before starting the parsing process. See L{I{parseString}<parseString>} for more information on parsing strings containing C{<TAB>}s, and suggested methods to maintain a consistent view of the parsed string, the parse location, and line and column positions within the parsed string.'
def setParseAction(self, *fns, **kwargs):
    self.parseAction = list(map(_trim_arity, list(fns)))
    self.callDuringTry = kwargs.get('callDuringTry', False)
    return self
'Add parse action to expression\'s list of parse actions. See L{I{setParseAction}<setParseAction>}.'
def addParseAction(self, *fns, **kwargs):
    self.parseAction += list(map(_trim_arity, list(fns)))
    self.callDuringTry = self.callDuringTry or kwargs.get('callDuringTry', False)
    return self
'Add a boolean predicate function to expression\'s list of parse actions. See L{I{setParseAction}<setParseAction>}. Optional keyword argument C{message} can be used to define a custom message to be used in the raised exception.'
def addCondition(self, *fns, **kwargs):
    msg = kwargs.get('message') or 'failed user-defined condition'
    for fn in fns:
        def pa(s, l, t):
            if not bool(_trim_arity(fn)(s, l, t)):
                raise ParseException(s, l, msg)
            return t
        self.parseAction.append(pa)
    self.callDuringTry = self.callDuringTry or kwargs.get('callDuringTry', False)
    return self
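# Usage sketch, not part of the original source (assumes pyparsing is
# installed): a condition rejects an otherwise-valid match with a custom
# failure message.
from pyparsing import Word, nums

year = Word(nums).setParseAction(lambda toks: int(toks[0]))
year.addCondition(lambda toks: toks[0] >= 2000, message='only support years 2000 and later')
print(year.parseString('2013'))   # -> [2013]
# year.parseString('1999') raises ParseException: only support years 2000 and later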
'Define action to perform if parsing fails at this expression. Fail action fn is a callable function that takes the arguments C{fn(s,loc,expr,err)} where: - s = string being parsed - loc = location where expression match was attempted and failed - expr = the parse expression that failed - err = the exception thrown The function returns no value. It may throw C{L{ParseFatalException}} if it is desired to stop parsing immediately.'
def setFailAction(self, fn):
    self.failAction = fn
    return self
'Enables "packrat" parsing, which adds memoizing to the parsing logic. Repeated parse attempts at the same string location (which happens often in many complex grammars) can immediately return a cached value, instead of re-executing parsing/validating code. Memoizing is done of both valid results and parsing exceptions. This speedup may break existing programs that use parse actions that have side-effects. For this reason, packrat parsing is disabled when you first import pyparsing. To activate the packrat feature, your program must call the class method C{ParserElement.enablePackrat()}. If your program uses C{psyco} to "compile as you go", you must call C{enablePackrat} before calling C{psyco.full()}. If you do not do this, Python will crash. For best results, call C{enablePackrat()} immediately after importing pyparsing.'
@staticmethod
def enablePackrat():
    if not ParserElement._packratEnabled:
        ParserElement._packratEnabled = True
        ParserElement._parse = ParserElement._parseCache
'Execute the parse expression with the given string. This is the main interface to the client code, once the complete expression has been built. If you want the grammar to require that the entire input string be successfully parsed, then set C{parseAll} to True (equivalent to ending the grammar with C{L{StringEnd()}}). Note: C{parseString} implicitly calls C{expandtabs()} on the input string, in order to report proper column numbers in parse actions. If the input string contains tabs and the grammar uses parse actions that use the C{loc} argument to index into the string being parsed, you can ensure you have a consistent view of the input string by: - calling C{parseWithTabs} on your grammar before calling C{parseString} (see L{I{parseWithTabs}<parseWithTabs>}) - define your parse action using the full C{(s,loc,toks)} signature, and reference the input string using the parse action\'s C{s} argument - explicitly expand the tabs in your input string before calling C{parseString}'
def parseString(self, instring, parseAll=False):
    ParserElement.resetCache()
    if not self.streamlined:
        self.streamline()
    for e in self.ignoreExprs:
        e.streamline()
    if not self.keepTabs:
        instring = instring.expandtabs()
    try:
        loc, tokens = self._parse(instring, 0)
        if parseAll:
            loc = self.preParse(instring, loc)
            se = Empty() + StringEnd()
            se._parse(instring, loc)
    except ParseBaseException as exc:
        if ParserElement.verbose_stacktrace:
            raise
        else:
            raise exc
    else:
        return tokens
'Scan the input string for expression matches. Each match will return the matching tokens, start location, and end location. May be called with optional C{maxMatches} argument, to clip scanning after \'n\' matches are found. If C{overlap} is specified, then overlapping matches will be reported. Note that the start and end locations are reported relative to the string being parsed. See L{I{parseString}<parseString>} for more information on parsing strings with embedded tabs.'
def scanString(self, instring, maxMatches=_MAX_INT, overlap=False):
    if not self.streamlined:
        self.streamline()
    for e in self.ignoreExprs:
        e.streamline()
    if not self.keepTabs:
        instring = _ustr(instring).expandtabs()
    instrlen = len(instring)
    loc = 0
    preparseFn = self.preParse
    parseFn = self._parse
    ParserElement.resetCache()
    matches = 0
    try:
        while loc <= instrlen and matches < maxMatches:
            try:
                preloc = preparseFn(instring, loc)
                nextLoc, tokens = parseFn(instring, preloc, callPreParse=False)
            except ParseException:
                loc = preloc + 1
            else:
                if nextLoc > loc:
                    matches += 1
                    yield tokens, preloc, nextLoc
                    if overlap:
                        nextloc = preparseFn(instring, loc)
                        if nextloc > loc:
                            loc = nextloc
                        else:
                            loc += 1
                    else:
                        loc = nextLoc
                else:
                    loc = preloc + 1
    except ParseBaseException as exc:
        if ParserElement.verbose_stacktrace:
            raise
        else:
            raise exc
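# Usage sketch, not part of the original source (assumes pyparsing is
# installed): each match is reported with its start and end offsets.
from pyparsing import Word, alphas

for tokens, start, end in Word(alphas).scanString('Hello, World!'):
    print(tokens, start, end)
# -> ['Hello'] 0 5
# -> ['World'] 7 12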
'Extension to C{L{scanString}}, to modify matching text with modified tokens that may be returned from a parse action. To use C{transformString}, define a grammar and attach a parse action to it that modifies the returned token list. Invoking C{transformString()} on a target string will then scan for matches, and replace the matched text patterns according to the logic in the parse action. C{transformString()} returns the resulting transformed string.'
def transformString(self, instring):
    out = []
    lastE = 0
    self.keepTabs = True
    try:
        for t, s, e in self.scanString(instring):
            out.append(instring[lastE:s])
            if t:
                if isinstance(t, ParseResults):
                    out += t.asList()
                elif isinstance(t, list):
                    out += t
                else:
                    out.append(t)
            lastE = e
        out.append(instring[lastE:])
        out = [o for o in out if o]
        return ''.join(map(_ustr, _flatten(out)))
    except ParseBaseException as exc:
        if ParserElement.verbose_stacktrace:
            raise
        else:
            raise exc
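# Usage sketch, not part of the original source (assumes pyparsing is
# installed): a parse action rewrites each match, and transformString
# splices the rewritten tokens back into the surrounding text.
from pyparsing import Word, alphas

wd = Word(alphas)
wd.setParseAction(lambda toks: toks[0].upper())
print(wd.transformString('now is the winter'))   # -> 'NOW IS THE WINTER'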
'Another extension to C{L{scanString}}, simplifying the access to the tokens found to match the given parse expression. May be called with optional C{maxMatches} argument, to clip searching after \'n\' matches are found.'
def searchString(self, instring, maxMatches=_MAX_INT):
    try:
        return ParseResults([t for t, s, e in self.scanString(instring, maxMatches)])
    except ParseBaseException as exc:
        if ParserElement.verbose_stacktrace:
            raise
        else:
            raise exc
'Implementation of + operator - returns C{L{And}}'
def __add__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return And([self, other])
'Implementation of + operator when left operand is not a C{L{ParserElement}}'
def __radd__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return other + self
'Implementation of - operator, returns C{L{And}} with error stop'
def __sub__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return And([self, And._ErrorStop(), other])
'Implementation of - operator when left operand is not a C{L{ParserElement}}'
def __rsub__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return other - self
'Implementation of * operator, allows use of C{expr * 3} in place of C{expr + expr + expr}. Expressions may also be multiplied by a 2-integer tuple, similar to C{{min,max}} multipliers in regular expressions. Tuples may also include C{None} as in: - C{expr*(n,None)} or C{expr*(n,)} is equivalent to C{expr*n + L{ZeroOrMore}(expr)} (read as "at least n instances of C{expr}") - C{expr*(None,n)} is equivalent to C{expr*(0,n)} (read as "0 to n instances of C{expr}") - C{expr*(None,None)} is equivalent to C{L{ZeroOrMore}(expr)} - C{expr*(1,None)} is equivalent to C{L{OneOrMore}(expr)} Note that C{expr*(None,n)} does not raise an exception if more than n exprs exist in the input stream; that is, C{expr*(None,n)} does not enforce a maximum number of expr occurrences. If this behavior is desired, then write C{expr*(None,n) + ~expr}'
def __mul__(self, other):
    if isinstance(other, int):
        minElements, optElements = other, 0
    elif isinstance(other, tuple):
        other = (other + (None, None))[:2]
        if other[0] is None:
            other = (0, other[1])
        if isinstance(other[0], int) and other[1] is None:
            if other[0] == 0:
                return ZeroOrMore(self)
            if other[0] == 1:
                return OneOrMore(self)
            else:
                return self * other[0] + ZeroOrMore(self)
        elif isinstance(other[0], int) and isinstance(other[1], int):
            minElements, optElements = other
            optElements -= minElements
        else:
            raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects",
                            type(other[0]), type(other[1]))
    else:
        raise TypeError("cannot multiply 'ParserElement' and '%s' objects", type(other))

    if minElements < 0:
        raise ValueError('cannot multiply ParserElement by negative value')
    if optElements < 0:
        raise ValueError('second tuple value must be greater or equal to first tuple value')
    if minElements == optElements == 0:
        raise ValueError('cannot multiply ParserElement by 0 or (0,0)')

    if optElements:
        def makeOptionalList(n):
            if n > 1:
                return Optional(self + makeOptionalList(n - 1))
            else:
                return Optional(self)
        if minElements:
            if minElements == 1:
                ret = self + makeOptionalList(optElements)
            else:
                ret = And([self] * minElements) + makeOptionalList(optElements)
        else:
            ret = makeOptionalList(optElements)
    elif minElements == 1:
        ret = self
    else:
        ret = And([self] * minElements)
    return ret
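# Usage sketch, not part of the original source (assumes pyparsing is
# installed): the integer and tuple forms map onto the And/Optional/
# ZeroOrMore combinations built above.
from pyparsing import Word, nums

digit = Word(nums, exact=1)
print((digit * 3).parseString('123'))         # exactly three -> ['1', '2', '3']
print((digit * (2, 4)).parseString('12345'))  # two to four   -> ['1', '2', '3', '4']
print((digit * (2, None)).parseString('12'))  # at least two  -> ['1', '2']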
'Implementation of | operator - returns C{L{MatchFirst}}'
def __or__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return MatchFirst([self, other])
'Implementation of | operator when left operand is not a C{L{ParserElement}}'
def __ror__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return other | self
'Implementation of ^ operator - returns C{L{Or}}'
def __xor__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return Or([self, other])
'Implementation of ^ operator when left operand is not a C{L{ParserElement}}'
def __rxor__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return other ^ self
'Implementation of & operator - returns C{L{Each}}'
def __and__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return Each([self, other])
'Implementation of & operator when left operand is not a C{L{ParserElement}}'
def __rand__(self, other):
    if isinstance(other, basestring):
        other = ParserElement.literalStringClass(other)
    if not isinstance(other, ParserElement):
        warnings.warn('Cannot combine element of type %s with ParserElement' % type(other),
                      SyntaxWarning, stacklevel=2)
        return None
    return other & self
'Implementation of ~ operator - returns C{L{NotAny}}'
def __invert__(self):
return NotAny(self)
'Shortcut for C{L{setResultsName}}, with C{listAllMatches=default}:: userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno") could be written as:: userdata = Word(alphas)("name") + Word(nums+"-")("socsecno") If C{name} is given with a trailing C{\'*\'} character, then C{listAllMatches} will be passed as C{True}. If C{name} is omitted, same as calling C{L{copy}}.'
def __call__(self, name=None):
    if name is not None:
        return self.setResultsName(name)
    else:
        return self.copy()
'Suppresses the output of this C{ParserElement}; useful to keep punctuation from cluttering up returned output.'
def suppress(self):
return Suppress(self)
'Disables the skipping of whitespace before matching the characters in the C{ParserElement}\'s defined pattern. This is normally only used internally by the pyparsing module, but may be needed in some whitespace-sensitive grammars.'
def leaveWhitespace(self):
    self.skipWhitespace = False
    return self
'Overrides the default whitespace chars'
def setWhitespaceChars(self, chars):
    self.skipWhitespace = True
    self.whiteChars = chars
    self.copyDefaultWhiteChars = False
    return self
'Overrides default behavior to expand C{<TAB>}s to spaces before parsing the input string. Must be called before C{parseString} when the input grammar contains elements that match C{<TAB>} characters.'
def parseWithTabs(self):
    self.keepTabs = True
    return self
'Define expression to be ignored (e.g., comments) while doing pattern matching; may be called repeatedly, to define multiple comment or other ignorable patterns.'
def ignore(self, other):
    if isinstance(other, basestring):
        other = Suppress(other)
    if isinstance(other, Suppress):
        if other not in self.ignoreExprs:
            self.ignoreExprs.append(other)
    else:
        self.ignoreExprs.append(Suppress(other.copy()))
    return self
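# Usage sketch, not part of the original source (assumes pyparsing is
# installed): ignored expressions such as comments are skipped wherever
# they appear in the input.
from pyparsing import OneOrMore, Word, alphas, cppStyleComment

idents = OneOrMore(Word(alphas))
idents.ignore(cppStyleComment)
print(idents.parseString('alpha /* skip me */ beta'))   # -> ['alpha', 'beta']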
'Enable display of debugging messages while doing pattern matching.'
def setDebugActions(self, startAction, successAction, exceptionAction):
    self.debugActions = (startAction or _defaultStartDebugAction,
                         successAction or _defaultSuccessDebugAction,
                         exceptionAction or _defaultExceptionDebugAction)
    self.debug = True
    return self
'Enable display of debugging messages while doing pattern matching. Set C{flag} to True to enable, False to disable.'
def setDebug(self, flag=True):
    if flag:
        self.setDebugActions(_defaultStartDebugAction, _defaultSuccessDebugAction,
                             _defaultExceptionDebugAction)
    else:
        self.debug = False
    return self
'Check defined expressions for valid structure, check for infinite recursive definitions.'
def validate(self, validateTrace=[]):
self.checkRecursion([])
'Execute the parse expression on the given file or filename. If a filename is specified (instead of a file object), the entire file is opened, read, and closed before parsing.'
def parseFile(self, file_or_filename, parseAll=False):
    try:
        file_contents = file_or_filename.read()
    except AttributeError:
        f = open(file_or_filename, 'r')
        file_contents = f.read()
        f.close()
    try:
        return self.parseString(file_contents, parseAll)
    except ParseBaseException as exc:
        if ParserElement.verbose_stacktrace:
            raise
        else:
            raise exc
'Execute the parse expression on a series of test strings, showing each test, the parsed results or where the parse failed. Quick and easy way to run a parse expression against a list of sample strings. Parameters: - tests - a list of separate test strings, or a multiline string of test strings - parseAll - (default=False) - flag to pass to C{L{parseString}} when running tests'
def runTests(self, tests, parseAll=False):
    if isinstance(tests, basestring):
        tests = map(str.strip, tests.splitlines())
    for t in tests:
        out = [t]
        try:
            out.append(self.parseString(t, parseAll=parseAll).dump())
        except ParseException as pe:
            if '\n' in t:
                out.append(line(pe.loc, t))
                out.append(' ' * (col(pe.loc, t) - 1) + '^')
            else:
                out.append(' ' * pe.loc + '^')
            out.append(str(pe))
        out.append('')
        print('\n'.join(out))
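# Usage sketch, not part of the original source (assumes pyparsing is
# installed): each test line is echoed, followed by its dump() output on
# success or a caret marking the failure position.
from pyparsing import Word, nums

integer = Word(nums)
integer.runTests('100\nabc')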
'Overrides the default Keyword chars'
@staticmethod
def setDefaultKeywordChars(chars):
Keyword.DEFAULT_KEYWORD_CHARS = chars
'The parameters C{pattern} and C{flags} are passed to the C{re.compile()} function as-is. See the Python C{re} module for an explanation of the acceptable patterns and flags.'
def __init__(self, pattern, flags=0):
    super(Regex, self).__init__()
    if isinstance(pattern, basestring):
        if not pattern:
            warnings.warn('null string passed to Regex; use Empty() instead',
                          SyntaxWarning, stacklevel=2)
        self.pattern = pattern
        self.flags = flags
        try:
            self.re = re.compile(self.pattern, self.flags)
            self.reString = self.pattern
        except sre_constants.error:
            warnings.warn('invalid pattern (%s) passed to Regex' % pattern,
                          SyntaxWarning, stacklevel=2)
            raise
    elif isinstance(pattern, Regex.compiledREtype):
        self.re = pattern
        self.pattern = self.reString = str(pattern)
        self.flags = flags
    else:
        raise ValueError('Regex may only be constructed with a string or a compiled RE object')
    self.name = _ustr(self)
    self.errmsg = 'Expected ' + self.name
    self.mayIndexError = False
    self.mayReturnEmpty = True
'Defined with the following parameters: - quoteChar - string of one or more characters defining the quote delimiting string - escChar - character to escape quotes, typically backslash (default=None) - escQuote - special quote sequence to escape an embedded quote string (such as SQL\'s "" to escape an embedded ") (default=None) - multiline - boolean indicating whether quotes can span multiple lines (default=C{False}) - unquoteResults - boolean indicating whether the matched text should be unquoted (default=C{True}) - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default=C{None} => same as quoteChar)'
def __init__(self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None):
    super(QuotedString, self).__init__()
    quoteChar = quoteChar.strip()
    if not quoteChar:
        warnings.warn('quoteChar cannot be the empty string', SyntaxWarning, stacklevel=2)
        raise SyntaxError()
    if endQuoteChar is None:
        endQuoteChar = quoteChar
    else:
        endQuoteChar = endQuoteChar.strip()
        if not endQuoteChar:
            warnings.warn('endQuoteChar cannot be the empty string', SyntaxWarning, stacklevel=2)
            raise SyntaxError()
    self.quoteChar = quoteChar
    self.quoteCharLen = len(quoteChar)
    self.firstQuoteChar = quoteChar[0]
    self.endQuoteChar = endQuoteChar
    self.endQuoteCharLen = len(endQuoteChar)
    self.escChar = escChar
    self.escQuote = escQuote
    self.unquoteResults = unquoteResults
    if multiline:
        self.flags = re.MULTILINE | re.DOTALL
        self.pattern = '%s(?:[^%s%s]' % (
            re.escape(self.quoteChar),
            _escapeRegexRangeChars(self.endQuoteChar[0]),
            (escChar is not None and _escapeRegexRangeChars(escChar)) or '')
    else:
        self.flags = 0
        self.pattern = '%s(?:[^%s\\n\\r%s]' % (
            re.escape(self.quoteChar),
            _escapeRegexRangeChars(self.endQuoteChar[0]),
            (escChar is not None and _escapeRegexRangeChars(escChar)) or '')
    if len(self.endQuoteChar) > 1:
        self.pattern += (
            '|(?:' + ')|(?:'.join(
                '%s[^%s]' % (re.escape(self.endQuoteChar[:i]),
                             _escapeRegexRangeChars(self.endQuoteChar[i]))
                for i in range(len(self.endQuoteChar) - 1, 0, -1)) + ')')
    if escQuote:
        self.pattern += '|(?:%s)' % re.escape(escQuote)
    if escChar:
        self.pattern += '|(?:%s.)' % re.escape(escChar)
        self.escCharReplacePattern = re.escape(self.escChar) + '(.)'
    self.pattern += ')*%s' % re.escape(self.endQuoteChar)
    try:
        self.re = re.compile(self.pattern, self.flags)
        self.reString = self.pattern
    except sre_constants.error:
        warnings.warn('invalid pattern (%s) passed to Regex' % self.pattern,
                      SyntaxWarning, stacklevel=2)
        raise
    self.name = _ustr(self)
    self.errmsg = 'Expected ' + self.name
    self.mayIndexError = False
    self.mayReturnEmpty = True
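# Usage sketch, not part of the original source (assumes pyparsing is
# installed): SQL-style doubled-quote escapes via escQuote, backslash
# escapes via escChar.
from pyparsing import QuotedString

sql_string = QuotedString("'", escQuote="''")
print(sql_string.parseString("'It''s here'"))    # -> ["It's here"]

c_string = QuotedString('"', escChar='\\')
print(c_string.parseString('"say \\"hi\\""'))    # -> ['say "hi"']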
'Extends C{leaveWhitespace} defined in base class, and also invokes C{leaveWhitespace} on all contained expressions.'
def leaveWhitespace(self):
    self.skipWhitespace = False
    self.exprs = [e.copy() for e in self.exprs]
    for e in self.exprs:
        e.leaveWhitespace()
    return self
'Resolve strings to objects using standard import and attribute syntax.'
def resolve(self, s):
    name = s.split('.')
    used = name.pop(0)
    try:
        found = self.importer(used)
        for frag in name:
            used += '.' + frag
            try:
                found = getattr(found, frag)
            except AttributeError:
                self.importer(used)
                found = getattr(found, frag)
        return found
    except ImportError:
        e, tb = sys.exc_info()[1:]
        v = ValueError('Cannot resolve %r: %s' % (s, e))
        v.__cause__, v.__traceback__ = e, tb
        raise v
'Default converter for the ext:// protocol.'
def ext_convert(self, value):
return self.resolve(value)
'Default converter for the cfg:// protocol.'
def cfg_convert(self, value):
    rest = value
    m = self.WORD_PATTERN.match(rest)
    if m is None:
        raise ValueError('Unable to convert %r' % value)
    else:
        rest = rest[m.end():]
        d = self.config[m.groups()[0]]
        while rest:
            m = self.DOT_PATTERN.match(rest)
            if m:
                d = d[m.groups()[0]]
            else:
                m = self.INDEX_PATTERN.match(rest)
                if m:
                    idx = m.groups()[0]
                    if not self.DIGIT_PATTERN.match(idx):
                        d = d[idx]
                    else:
                        try:
                            n = int(idx)
                            d = d[n]
                        except TypeError:
                            d = d[idx]
            if m:
                rest = rest[m.end():]
            else:
                raise ValueError('Unable to convert %r at %r' % (value, rest))
    return d
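# Usage sketch, not part of the original source: the cfg:// protocol
# indexes back into the configuration dictionary itself. BaseConfigurator
# is the stdlib home of this method; the config dict below is hypothetical.
from logging.config import BaseConfigurator

cfg = BaseConfigurator({
    'handlers': {
        'email': {
            'class': 'logging.handlers.SMTPHandler',
            'toaddrs': ['support@example.com'],
        },
    },
})
# convert() strips the 'cfg://' prefix before calling cfg_convert()
print(cfg.cfg_convert('handlers.email.class'))       # -> logging.handlers.SMTPHandler
print(cfg.cfg_convert('handlers.email.toaddrs[0]'))  # -> support@example.com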
'Convert values to an appropriate type. dicts, lists and tuples are replaced by their converting alternatives. Strings are checked to see if they have a conversion format and are converted if they do.'
def convert(self, value):
    if not isinstance(value, ConvertingDict) and isinstance(value, dict):
        value = ConvertingDict(value)
        value.configurator = self
    elif not isinstance(value, ConvertingList) and isinstance(value, list):
        value = ConvertingList(value)
        value.configurator = self
    elif not isinstance(value, ConvertingTuple) and isinstance(value, tuple):
        value = ConvertingTuple(value)
        value.configurator = self
    elif isinstance(value, six.string_types):
        m = self.CONVERT_PATTERN.match(value)
        if m:
            d = m.groupdict()
            prefix = d['prefix']
            converter = self.value_converters.get(prefix, None)
            if converter:
                suffix = d['suffix']
                converter = getattr(self, converter)
                value = converter(suffix)
    return value
'Configure an object with a user-supplied factory.'
def configure_custom(self, config):
    c = config.pop('()')
    if not hasattr(c, '__call__') and hasattr(types, 'ClassType') and type(c) != types.ClassType:
        c = self.resolve(c)
    props = config.pop('.', None)
    kwargs = dict((k, config[k]) for k in config if valid_ident(k))
    result = c(**kwargs)
    if props:
        for name, value in props.items():
            setattr(result, name, value)
    return result
'Utility function which converts lists to tuples.'
def as_tuple(self, value):
    if isinstance(value, list):
        value = tuple(value)
    return value
'Do the configuration.'
def configure(self):
    config = self.config
    if 'version' not in config:
        raise ValueError("dictionary doesn't specify a version")
    if config['version'] != 1:
        raise ValueError('Unsupported version: %s' % config['version'])
    incremental = config.pop('incremental', False)
    EMPTY_DICT = {}
    logging._acquireLock()
    try:
        if incremental:
            handlers = config.get('handlers', EMPTY_DICT)
            if sys.version_info[:2] == (2, 7):
                for name in handlers:
                    if name not in logging._handlers:
                        raise ValueError('No handler found with name %r' % name)
                    else:
                        try:
                            handler = logging._handlers[name]
                            handler_config = handlers[name]
                            level = handler_config.get('level', None)
                            if level:
                                handler.setLevel(_checkLevel(level))
                        except StandardError as e:
                            raise ValueError('Unable to configure handler %r: %s' % (name, e))
            loggers = config.get('loggers', EMPTY_DICT)
            for name in loggers:
                try:
                    self.configure_logger(name, loggers[name], True)
                except StandardError as e:
                    raise ValueError('Unable to configure logger %r: %s' % (name, e))
            root = config.get('root', None)
            if root:
                try:
                    self.configure_root(root, True)
                except StandardError as e:
                    raise ValueError('Unable to configure root logger: %s' % e)
        else:
            disable_existing = config.pop('disable_existing_loggers', True)
            logging._handlers.clear()
            del logging._handlerList[:]
            formatters = config.get('formatters', EMPTY_DICT)
            for name in formatters:
                try:
                    formatters[name] = self.configure_formatter(formatters[name])
                except StandardError as e:
                    raise ValueError('Unable to configure formatter %r: %s' % (name, e))
            filters = config.get('filters', EMPTY_DICT)
            for name in filters:
                try:
                    filters[name] = self.configure_filter(filters[name])
                except StandardError as e:
                    raise ValueError('Unable to configure filter %r: %s' % (name, e))
            handlers = config.get('handlers', EMPTY_DICT)
            for name in sorted(handlers):
                try:
                    handler = self.configure_handler(handlers[name])
                    handler.name = name
                    handlers[name] = handler
                except StandardError as e:
                    raise ValueError('Unable to configure handler %r: %s' % (name, e))
            root = logging.root
            existing = list(root.manager.loggerDict)
            existing.sort()
            child_loggers = []
            loggers = config.get('loggers', EMPTY_DICT)
            for name in loggers:
                if name in existing:
                    i = existing.index(name)
                    prefixed = name + '.'
                    pflen = len(prefixed)
                    num_existing = len(existing)
                    i = i + 1
                    while i < num_existing and existing[i][:pflen] == prefixed:
                        child_loggers.append(existing[i])
                        i = i + 1
                    existing.remove(name)
                try:
                    self.configure_logger(name, loggers[name])
                except StandardError as e:
                    raise ValueError('Unable to configure logger %r: %s' % (name, e))
            for log in existing:
                logger = root.manager.loggerDict[log]
                if log in child_loggers:
                    logger.level = logging.NOTSET
                    logger.handlers = []
                    logger.propagate = True
                elif disable_existing:
                    logger.disabled = True
            root = config.get('root', None)
            if root:
                try:
                    self.configure_root(root)
                except StandardError as e:
                    raise ValueError('Unable to configure root logger: %s' % e)
    finally:
        logging._releaseLock()
'Configure a formatter from a dictionary.'
def configure_formatter(self, config):
    if '()' in config:
        factory = config['()']
        try:
            result = self.configure_custom(config)
        except TypeError as te:
            if "'format'" not in str(te):
                raise
            config['fmt'] = config.pop('format')
            config['()'] = factory
            result = self.configure_custom(config)
    else:
        fmt = config.get('format', None)
        dfmt = config.get('datefmt', None)
        result = logging.Formatter(fmt, dfmt)
    return result
'Configure a filter from a dictionary.'
def configure_filter(self, config):
    if '()' in config:
        result = self.configure_custom(config)
    else:
        name = config.get('name', '')
        result = logging.Filter(name)
    return result
'Add filters to a filterer from a list of names.'
def add_filters(self, filterer, filters):
    for f in filters:
        try:
            filterer.addFilter(self.config['filters'][f])
        except StandardError as e:
            raise ValueError('Unable to add filter %r: %s' % (f, e))
'Configure a handler from a dictionary.'
def configure_handler(self, config):
    formatter = config.pop('formatter', None)
    if formatter:
        try:
            formatter = self.config['formatters'][formatter]
        except StandardError as e:
            raise ValueError('Unable to set formatter %r: %s' % (formatter, e))
    level = config.pop('level', None)
    filters = config.pop('filters', None)
    if '()' in config:
        c = config.pop('()')
        if not hasattr(c, '__call__') and hasattr(types, 'ClassType') and type(c) != types.ClassType:
            c = self.resolve(c)
        factory = c
    else:
        klass = self.resolve(config.pop('class'))
        if issubclass(klass, logging.handlers.MemoryHandler) and 'target' in config:
            try:
                config['target'] = self.config['handlers'][config['target']]
            except StandardError as e:
                raise ValueError('Unable to set target handler %r: %s' % (config['target'], e))
        elif issubclass(klass, logging.handlers.SMTPHandler) and 'mailhost' in config:
            config['mailhost'] = self.as_tuple(config['mailhost'])
        elif issubclass(klass, logging.handlers.SysLogHandler) and 'address' in config:
            config['address'] = self.as_tuple(config['address'])
        factory = klass
    kwargs = dict((k, config[k]) for k in config if valid_ident(k))
    try:
        result = factory(**kwargs)
    except TypeError as te:
        if "'stream'" not in str(te):
            raise
        kwargs['strm'] = kwargs.pop('stream')
        result = factory(**kwargs)
    if formatter:
        result.setFormatter(formatter)
    if level is not None:
        result.setLevel(_checkLevel(level))
    if filters:
        self.add_filters(result, filters)
    return result
'Add handlers to a logger from a list of names.'
def add_handlers(self, logger, handlers):
    for h in handlers:
        try:
            logger.addHandler(self.config['handlers'][h])
        except StandardError as e:
            raise ValueError('Unable to add handler %r: %s' % (h, e))
'Perform configuration which is common to root and non-root loggers.'
def common_logger_config(self, logger, config, incremental=False):
    level = config.get('level', None)
    if level is not None:
        logger.setLevel(_checkLevel(level))
    if not incremental:
        for h in logger.handlers[:]:
            logger.removeHandler(h)
        handlers = config.get('handlers', None)
        if handlers:
            self.add_handlers(logger, handlers)
        filters = config.get('filters', None)
        if filters:
            self.add_filters(logger, filters)
'Configure a non-root logger from a dictionary.'
def configure_logger(self, name, config, incremental=False):
    logger = logging.getLogger(name)
    self.common_logger_config(logger, config, incremental)
    propagate = config.get('propagate', None)
    if propagate is not None:
        logger.propagate = propagate
'Configure a root logger from a dictionary.'
def configure_root(self, config, incremental=False):
    root = logging.getLogger()
    self.common_logger_config(root, config, incremental)
'Create a wheel cache. :param cache_dir: The root of the cache. :param format_control: A pip.index.FormatControl object to limit binaries being read from the cache.'
def __init__(self, cache_dir, format_control):
    self._cache_dir = expanduser(cache_dir) if cache_dir else None
    self._format_control = format_control
':raises InvalidWheelFilename: when the filename is invalid for a wheel'
def __init__(self, filename):
    wheel_info = self.wheel_file_re.match(filename)
    if not wheel_info:
        raise InvalidWheelFilename('%s is not a valid wheel filename.' % filename)
    self.filename = filename
    self.name = wheel_info.group('name').replace('_', '-')
    self.version = wheel_info.group('ver').replace('_', '-')
    self.pyversions = wheel_info.group('pyver').split('.')
    self.abis = wheel_info.group('abi').split('.')
    self.plats = wheel_info.group('plat').split('.')
    self.file_tags = set((x, y, z) for x in self.pyversions
                         for y in self.abis for z in self.plats)
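# Usage sketch, not part of the original source (assumes the pip 8-era
# module layout; the filename is hypothetical): the wheel filename is
# decomposed into name, version, and PEP 425 tag combinations.
from pip.wheel import Wheel

w = Wheel('pip-8.0.0-py2.py3-none-any.whl')
print(w.name)               # -> 'pip'
print(w.version)            # -> '8.0.0'
print(sorted(w.file_tags))  # -> [('py2', 'none', 'any'), ('py3', 'none', 'any')]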
'Return the lowest index that one of the wheel\'s file_tag combinations achieves in the supported_tags list, e.g. if there are 8 supported tags, and one of the file tags is first in the list, then return 0. Returns None if the wheel is not supported.'
def support_index_min(self, tags=None):
    if tags is None:
        tags = pep425tags.supported_tags
    indexes = [tags.index(c) for c in self.file_tags if c in tags]
    return min(indexes) if indexes else None
'Is this wheel supported on this system?'
def supported(self, tags=None):
    if tags is None:
        tags = pep425tags.supported_tags
    return bool(set(tags).intersection(self.file_tags))
'Build one wheel. :return: The filename of the built wheel, or None if the build failed.'
def _build_one(self, req, output_dir, python_tag=None):
    tempd = tempfile.mkdtemp('pip-wheel-')
    try:
        if self.__build_one(req, tempd, python_tag=python_tag):
            try:
                wheel_name = os.listdir(tempd)[0]
                wheel_path = os.path.join(output_dir, wheel_name)
                shutil.move(os.path.join(tempd, wheel_name), wheel_path)
                logger.info('Stored in directory: %s', output_dir)
                return wheel_path
            except:
                pass
        self._clean_one(req)
        return None
    finally:
        rmtree(tempd)
'Build wheels. :param unpack: If True, replace the sdist we built from with the newly built wheel, in preparation for installation. :return: True if all the wheels built correctly.'
def build(self, autobuilding=False):
    assert self._wheel_dir or (autobuilding and self._cache_root)
    self.requirement_set.prepare_files(self.finder)
    reqset = self.requirement_set.requirements.values()
    buildset = []
    for req in reqset:
        if req.constraint:
            continue
        if req.is_wheel:
            if not autobuilding:
                logger.info('Skipping %s, due to already being wheel.', req.name)
        elif req.editable:
            if not autobuilding:
                logger.info('Skipping bdist_wheel for %s, due to being editable', req.name)
        elif autobuilding and req.link and not req.link.is_artifact:
            pass
        elif autobuilding and not req.source_dir:
            pass
        else:
            if autobuilding:
                link = req.link
                base, ext = link.splitext()
                if pip.index.egg_info_matches(base, None, link) is None:
                    continue
                if 'binary' not in pip.index.fmt_ctl_formats(
                        self.finder.format_control, canonicalize_name(req.name)):
                    logger.info('Skipping bdist_wheel for %s, due to binaries being disabled for it.', req.name)
                    continue
            buildset.append(req)

    if not buildset:
        return True

    logger.info('Building wheels for collected packages: %s',
                ', '.join([req.name for req in buildset]))
    with indent_log():
        build_success, build_failure = [], []
        for req in buildset:
            python_tag = None
            if autobuilding:
                python_tag = pep425tags.implementation_tag
                output_dir = _cache_for_link(self._cache_root, req.link)
                try:
                    ensure_dir(output_dir)
                except OSError as e:
                    logger.warn('Building wheel for %s failed: %s', req.name, e)
                    build_failure.append(req)
                    continue
            else:
                output_dir = self._wheel_dir
            wheel_file = self._build_one(req, output_dir, python_tag=python_tag)
            if wheel_file:
                build_success.append(req)
                if autobuilding:
                    if (req.source_dir and not os.path.exists(
                            os.path.join(req.source_dir, PIP_DELETE_MARKER_FILENAME))):
                        raise AssertionError('bad source dir - missing marker')
                    req.remove_temporary_source()
                    req.source_dir = req.build_location(self.requirement_set.build_dir)
                    req.link = pip.index.Link(path_to_url(wheel_file))
                    assert req.link.is_wheel
                    unpack_url(req.link, req.source_dir, None, False,
                               session=self.requirement_set.session)
            else:
                build_failure.append(req)

    if build_success:
        logger.info('Successfully built %s', ' '.join([req.name for req in build_success]))
    if build_failure:
        logger.info('Failed to build %s', ' '.join([req.name for req in build_failure]))
    return len(build_failure) == 0
'Return True if the given path is one we are permitted to remove/modify, False otherwise.'
def _permitted(self, path):
return is_local(path)
'Compact a path set to contain the minimal number of paths necessary to contain all paths in the set. If /a/path/ and /a/path/to/a/file.txt are both in the set, leave only the shorter path.'
def compact(self, paths):
    short_paths = set()
    for path in sorted(paths, key=len):
        if not any([path.startswith(shortpath) and
                    path[len(shortpath.rstrip(os.path.sep))] == os.path.sep
                    for shortpath in short_paths]):
            short_paths.add(path)
    return short_paths
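# Behavior sketch, not part of the original source: a standalone
# restatement of compact(), showing that nested paths collapse into their
# covering parent directory.
import os

def compact(paths):
    short_paths = set()
    for path in sorted(paths, key=len):
        if not any(path.startswith(sp) and
                   path[len(sp.rstrip(os.path.sep))] == os.path.sep
                   for sp in short_paths):
            short_paths.add(path)
    return short_paths

print(sorted(compact({'/a/path/', '/a/path/to/a/file.txt', '/b/file.txt'})))
# -> ['/a/path/', '/b/file.txt']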
'Remove paths in ``self.paths`` with confirmation (unless ``auto_confirm`` is True).'
def remove(self, auto_confirm=False):
    if not self.paths:
        logger.info("Can't uninstall '%s'. No files were found to uninstall.",
                    self.dist.project_name)
        return
    logger.info('Uninstalling %s-%s:', self.dist.project_name, self.dist.version)
    with indent_log():
        paths = sorted(self.compact(self.paths))
        if auto_confirm:
            response = 'y'
        else:
            for path in paths:
                logger.info(path)
            response = ask('Proceed (y/n)? ', ('y', 'n'))
        if self._refuse:
            logger.info('Not removing or modifying (outside of prefix):')
            for path in self.compact(self._refuse):
                logger.info(path)
        if response == 'y':
            self.save_dir = tempfile.mkdtemp(suffix='-uninstall', prefix='pip-')
            for path in paths:
                new_path = self._stash(path)
                logger.debug('Removing file or directory %s', path)
                self._moved_paths.append(path)
                renames(path, new_path)
            for pth in self.pth.values():
                pth.remove()
            logger.info('Successfully uninstalled %s-%s',
                        self.dist.project_name, self.dist.version)
'Rollback the changes previously made by remove().'
def rollback(self):
    if self.save_dir is None:
        logger.error("Can't roll back %s; was not uninstalled", self.dist.project_name)
        return False
    logger.info('Rolling back uninstall of %s', self.dist.project_name)
    for path in self._moved_paths:
        tmp_path = self._stash(path)
        logger.debug('Replacing %s', path)
        renames(tmp_path, path)
    for pth in self.pth.values():
        pth.rollback()
'Remove temporary save dir: rollback will no longer be possible.'
def commit(self):
    if self.save_dir is not None:
        rmtree(self.save_dir)
        self.save_dir = None
        self._moved_paths = []
'Return a setuptools Dist object.'
def dist(self, finder):
raise NotImplementedError(self.dist)
'Ensure that we can get a Dist for this requirement.'
def prep_for_dist(self):
    raise NotImplementedError(self.prep_for_dist)
'Create a RequirementSet. :param wheel_download_dir: Where still-packed .whl files should be written to. If None they are written to the download_dir parameter. Separate to download_dir to permit only keeping wheel archives for pip wheel. :param download_dir: Where still packed archives should be written to. If None they are not saved, and are deleted immediately after unpacking. :param wheel_cache: The pip wheel cache, for passing to InstallRequirement.'
def __init__(self, build_dir, src_dir, download_dir, upgrade=False, ignore_installed=False, as_egg=False, target_dir=None, ignore_dependencies=False, force_reinstall=False, use_user_site=False, session=None, pycompile=True, isolated=False, wheel_download_dir=None, wheel_cache=None, require_hashes=False):
    if session is None:
        raise TypeError("RequirementSet() missing 1 required keyword argument: 'session'")
    self.build_dir = build_dir
    self.src_dir = src_dir
    self.download_dir = download_dir
    self.upgrade = upgrade
    self.ignore_installed = ignore_installed
    self.force_reinstall = force_reinstall
    self.requirements = Requirements()
    self.requirement_aliases = {}
    self.unnamed_requirements = []
    self.ignore_dependencies = ignore_dependencies
    self.successfully_downloaded = []
    self.successfully_installed = []
    self.reqs_to_cleanup = []
    self.as_egg = as_egg
    self.use_user_site = use_user_site
    self.target_dir = target_dir
    self.session = session
    self.pycompile = pycompile
    self.isolated = isolated
    if wheel_download_dir:
        wheel_download_dir = normalize_path(wheel_download_dir)
    self.wheel_download_dir = wheel_download_dir
    self._wheel_cache = wheel_cache
    self.require_hashes = require_hashes
    self._dependencies = defaultdict(list)
'Add install_req as a requirement to install. :param parent_req_name: The name of the requirement that needed this added. The name is used because when multiple unnamed requirements resolve to the same name, we could otherwise end up with dependency links that point outside the Requirements set. parent_req must already be added. Note that None implies that this is a user supplied requirement, vs an inferred one. :return: Additional requirements to scan. That is either [] if the requirement is not applicable, or [install_req] if the requirement is applicable and has just been added.'
def add_requirement(self, install_req, parent_req_name=None):
    name = install_req.name
    if not install_req.match_markers():
        logger.warning("Ignoring %s: markers %r don't match your environment",
                       install_req.name, install_req.markers)
        return []
    install_req.as_egg = self.as_egg
    install_req.use_user_site = self.use_user_site
    install_req.target_dir = self.target_dir
    install_req.pycompile = self.pycompile
    if not name:
        self.unnamed_requirements.append(install_req)
        return [install_req]
    else:
        try:
            existing_req = self.get_requirement(name)
        except KeyError:
            existing_req = None
        if (parent_req_name is None and existing_req and
                not existing_req.constraint and
                existing_req.extras == install_req.extras and
                not existing_req.req.specs == install_req.req.specs):
            raise InstallationError('Double requirement given: %s (already in %s, name=%r)'
                                    % (install_req, existing_req, name))
        if not existing_req:
            self.requirements[name] = install_req
            if name.lower() != name:
                self.requirement_aliases[name.lower()] = name
            result = [install_req]
        else:
            result = []
            if not install_req.constraint and existing_req.constraint:
                if (install_req.link and not (existing_req.link and
                        install_req.link.path == existing_req.link.path)):
                    self.reqs_to_cleanup.append(install_req)
                    raise InstallationError(
                        "Could not satisfy constraints for '%s': installation from path or url cannot be constrained to a version" % name)
                existing_req.constraint = False
                existing_req.extras = tuple(sorted(
                    set(existing_req.extras).union(set(install_req.extras))))
                logger.debug('Setting %s extras to: %s', existing_req, existing_req.extras)
                result = [existing_req]
            install_req = existing_req
        if parent_req_name:
            parent_req = self.get_requirement(parent_req_name)
            self._dependencies[parent_req].append(install_req)
        return result
'Prepare process. Create temp directories, download and/or unpack files.'
def prepare_files(self, finder):
    if self.wheel_download_dir:
        ensure_dir(self.wheel_download_dir)
    root_reqs = self.unnamed_requirements + self.requirements.values()
    require_hashes = (self.require_hashes or
                      any(req.has_hash_options for req in root_reqs))
    if require_hashes and self.as_egg:
        raise InstallationError('--egg is not allowed with --require-hashes mode, since it delegates dependency resolution to setuptools and could thus result in installation of unhashed packages.')
    discovered_reqs = []
    hash_errors = HashErrors()
    for req in chain(root_reqs, discovered_reqs):
        try:
            discovered_reqs.extend(self._prepare_file(
                finder, req, require_hashes=require_hashes,
                ignore_dependencies=self.ignore_dependencies))
        except HashError as exc:
            exc.req = req
            hash_errors.append(exc)
    if hash_errors:
        raise hash_errors
'Check if req_to_install should be skipped. This will check if the req is installed, and whether we should upgrade or reinstall it, taking into account all the relevant user options. After calling this, req_to_install will only have satisfied_by set to None if the req_to_install is to be upgraded/reinstalled etc. Any other value will be a dist recording the current thing installed that satisfies the requirement. Note that for vcs urls and the like we can\'t assess skipping in this routine - we simply identify that we need to pull the thing down, then later on it is pulled down and introspected to assess upgrade/reinstalls etc. :return: A text reason for why it was skipped, or None.'
def _check_skip_installed(self, req_to_install, finder):
    req_to_install.check_if_exists()
    if req_to_install.satisfied_by:
        skip_reason = 'satisfied (use --upgrade to upgrade)'
        if self.upgrade:
            best_installed = False
            if not (self.force_reinstall or req_to_install.link):
                try:
                    finder.find_requirement(req_to_install, self.upgrade)
                except BestVersionAlreadyInstalled:
                    skip_reason = 'up-to-date'
                    best_installed = True
                except DistributionNotFound:
                    pass
            if not best_installed:
                if not (self.use_user_site and
                        not dist_in_usersite(req_to_install.satisfied_by)):
                    req_to_install.conflicts_with = req_to_install.satisfied_by
                req_to_install.satisfied_by = None
        return skip_reason
    else:
        return None
'Prepare a single requirements file. :return: A list of additional InstallRequirements to also install.'
def _prepare_file(self, finder, req_to_install, require_hashes=False, ignore_dependencies=False):
    if req_to_install.constraint or req_to_install.prepared:
        return []
    req_to_install.prepared = True
    if req_to_install.editable:
        logger.info('Obtaining %s', req_to_install)
    else:
        assert req_to_install.satisfied_by is None
        if not self.ignore_installed:
            skip_reason = self._check_skip_installed(req_to_install, finder)
        if req_to_install.satisfied_by:
            assert skip_reason is not None, (
                '_check_skip_installed returned None but req_to_install.satisfied_by is set to %r'
                % (req_to_install.satisfied_by,))
            logger.info('Requirement already %s: %s', skip_reason, req_to_install)
        elif req_to_install.link and req_to_install.link.scheme == 'file':
            path = url_to_path(req_to_install.link.url)
            logger.info('Processing %s', display_path(path))
        else:
            logger.info('Collecting %s', req_to_install)
    with indent_log():
        if req_to_install.editable:
            if require_hashes:
                raise InstallationError(
                    'The editable requirement %s cannot be installed when requiring hashes, because there is no single file to hash.'
                    % req_to_install)
            req_to_install.ensure_has_source_dir(self.src_dir)
            req_to_install.update_editable(not self.is_download)
            abstract_dist = make_abstract_dist(req_to_install)
            abstract_dist.prep_for_dist()
            if self.is_download:
                req_to_install.archive(self.download_dir)
        elif req_to_install.satisfied_by:
            if require_hashes:
                logger.debug('Since it is already installed, we are trusting this package without checking its hash. To ensure a completely repeatable environment, install into an empty virtualenv.')
            abstract_dist = Installed(req_to_install)
        else:
            req_to_install.ensure_has_source_dir(self.build_dir)
            if os.path.exists(os.path.join(req_to_install.source_dir, 'setup.py')):
                raise PreviousBuildDirError(
                    "pip can't proceed with requirements '%s' due to a pre-existing build directory (%s). This is likely due to a previous installation that failed. pip is being responsible and not assuming it can delete this. Please delete it and try again."
                    % (req_to_install, req_to_install.source_dir))
            req_to_install.populate_link(finder, self.upgrade, require_hashes)
            assert req_to_install.link
            link = req_to_install.link
            if require_hashes:
                if is_vcs_url(link):
                    raise VcsHashUnsupported()
                elif is_file_url(link) and is_dir_url(link):
                    raise DirectoryUrlHashUnsupported()
                if not req_to_install.original_link and not req_to_install.is_pinned:
                    raise HashUnpinned()
            hashes = req_to_install.hashes(trust_internet=not require_hashes)
            if require_hashes and not hashes:
                hashes = MissingHashes()
            try:
                download_dir = self.download_dir
                autodelete_unpacked = True
                if req_to_install.link.is_wheel and self.wheel_download_dir:
                    download_dir = self.wheel_download_dir
                if req_to_install.link.is_wheel:
                    if download_dir:
                        autodelete_unpacked = True
                    else:
                        autodelete_unpacked = False
                unpack_url(req_to_install.link, req_to_install.source_dir,
                           download_dir, autodelete_unpacked,
                           session=self.session, hashes=hashes)
            except requests.HTTPError as exc:
                logger.critical('Could not install requirement %s because of error %s',
                                req_to_install, exc)
                raise InstallationError(
                    'Could not install requirement %s because of HTTP error %s for URL %s'
                    % (req_to_install, exc, req_to_install.link))
            abstract_dist = make_abstract_dist(req_to_install)
            abstract_dist.prep_for_dist()
            if self.is_download:
                if req_to_install.link.scheme in vcs.all_schemes:
                    req_to_install.archive(self.download_dir)
            if not self.ignore_installed:
                req_to_install.check_if_exists()
            if req_to_install.satisfied_by:
                if self.upgrade or self.ignore_installed:
                    if not (self.use_user_site and
                            not dist_in_usersite(req_to_install.satisfied_by)):
                        req_to_install.conflicts_with = req_to_install.satisfied_by
                    req_to_install.satisfied_by = None
                else:
                    logger.info('Requirement already satisfied (use --upgrade to upgrade): %s',
                                req_to_install)
        dist = abstract_dist.dist(finder)
        more_reqs = []

        def add_req(subreq):
            sub_install_req = InstallRequirement(str(subreq), req_to_install,
                                                 isolated=self.isolated,
                                                 wheel_cache=self._wheel_cache)
            more_reqs.extend(self.add_requirement(sub_install_req, req_to_install.name))

        if not self.has_requirement(req_to_install.name):
            self.add_requirement(req_to_install, None)
        if not ignore_dependencies:
            if req_to_install.extras:
                logger.debug('Installing extra requirements: %r',
                             ','.join(req_to_install.extras))
            missing_requested = sorted(set(req_to_install.extras) - set(dist.extras))
            for missing in missing_requested:
                logger.warning("%s does not provide the extra '%s'", dist, missing)
            available_requested = sorted(set(dist.extras) & set(req_to_install.extras))
            for subreq in dist.requires(available_requested):
                add_req(subreq)
        self.reqs_to_cleanup.append(req_to_install)
        if not req_to_install.editable and not req_to_install.satisfied_by:
            self.successfully_downloaded.append(req_to_install)
    return more_reqs
'Clean up files, remove builds.'
def cleanup_files(self):
    logger.debug('Cleaning up...')
    with indent_log():
        for req in self.reqs_to_cleanup:
            req.remove_temporary_source()
'Create the installation order. The installation order is topological - requirements are installed before the requiring thing. We break cycles at an arbitrary point, and make no other guarantees.'
def _to_install(self):
    order = []
    ordered_reqs = set()

    def schedule(req):
        if req.satisfied_by or req in ordered_reqs:
            return
        if req.constraint:
            return
        ordered_reqs.add(req)
        for dep in self._dependencies[req]:
            schedule(dep)
        order.append(req)

    for install_req in self.requirements.values():
        schedule(install_req)
    return order
'Install everything in this set (after having downloaded and unpacked the packages)'
def install(self, install_options, global_options=(), *args, **kwargs):
    to_install = self._to_install()
    if to_install:
        logger.info('Installing collected packages: %s',
                    ', '.join([req.name for req in to_install]))
    with indent_log():
        for requirement in to_install:
            if requirement.conflicts_with:
                logger.info('Found existing installation: %s', requirement.conflicts_with)
                with indent_log():
                    requirement.uninstall(auto_confirm=True)
            try:
                requirement.install(install_options, global_options, *args, **kwargs)
            except:
                if requirement.conflicts_with and not requirement.install_succeeded:
                    requirement.rollback_uninstall()
                raise
            else:
                if requirement.conflicts_with and requirement.install_succeeded:
                    requirement.commit_uninstall()
            requirement.remove_temporary_source()
    self.successfully_installed = to_install
'Creates an InstallRequirement from a name, which might be a requirement, directory containing \'setup.py\', filename, or URL.'
@classmethod
def from_line(cls, name, comes_from=None, isolated=False, options=None, wheel_cache=None, constraint=False):
    from pip.index import Link

    if is_url(name):
        marker_sep = '; '
    else:
        marker_sep = ';'
    if marker_sep in name:
        name, markers = name.split(marker_sep, 1)
        markers = markers.strip()
        if not markers:
            markers = None
    else:
        markers = None
    name = name.strip()
    req = None
    path = os.path.normpath(os.path.abspath(name))
    link = None
    extras = None

    if is_url(name):
        link = Link(name)
    else:
        p, extras = _strip_extras(path)
        if os.path.isdir(p) and (os.path.sep in name or name.startswith('.')):
            if not is_installable_dir(p):
                raise InstallationError(
                    "Directory %r is not installable. File 'setup.py' not found." % name)
            link = Link(path_to_url(p))
        elif is_archive_file(p):
            if not os.path.isfile(p):
                logger.warning('Requirement %r looks like a filename, but the file does not exist', name)
            link = Link(path_to_url(p))

    if link:
        if link.scheme == 'file' and re.search(r'\.\./', link.url):
            link = Link(path_to_url(os.path.normpath(os.path.abspath(link.path))))
        if link.is_wheel:
            wheel = Wheel(link.filename)
            if not wheel.supported():
                raise UnsupportedWheel('%s is not a supported wheel on this platform.'
                                       % wheel.filename)
            req = '%s==%s' % (wheel.name, wheel.version)
        else:
            req = link.egg_fragment
    else:
        req = name

    options = options if options else {}
    res = cls(req, comes_from, link=link, markers=markers, isolated=isolated,
              options=options, wheel_cache=wheel_cache, constraint=constraint)
    if extras:
        res.extras = pkg_resources.Requirement.parse('__placeholder__' + extras).extras
    return res
'Ensure that if a link can be found for this, that it is found. Note that self.link may still be None - if Upgrade is False and the requirement is already installed. If require_hashes is True, don\'t use the wheel cache, because cached wheels, always built locally, have different hashes than the files downloaded from the index server and thus throw false hash mismatches. Furthermore, cached wheels at present have nondeterministic contents due to file modification times.'
def populate_link(self, finder, upgrade, require_hashes):
    if self.link is None:
        self.link = finder.find_requirement(self, upgrade)
    if self._wheel_cache is not None and not require_hashes:
        old_link = self.link
        self.link = self._wheel_cache.cached_wheel(self.link, self.name)
        if old_link != self.link:
            logger.debug('Using cached wheel link: %s', self.link)
'Return whether I am pinned to an exact version. For example, some-package==1.2 is pinned; some-package>1.2 is not.'
@property
def is_pinned(self):
    specifiers = self.specifier
    return (len(specifiers) == 1 and
            next(iter(specifiers)).operator in ('==', '==='))
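# Behavior sketch, not part of the original source (assumes the `packaging`
# library): a standalone restatement of the property over a bare SpecifierSet.
from packaging.specifiers import SpecifierSet

def is_pinned(specifiers):
    return (len(specifiers) == 1 and
            next(iter(specifiers)).operator in ('==', '==='))

print(is_pinned(SpecifierSet('==1.2')))   # -> True
print(is_pinned(SpecifierSet('>=1.2')))   # -> False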
'Move self._temp_build_dir to self._ideal_build_dir/self.req.name For some requirements (e.g. a path to a directory), the name of the package is not available until we run egg_info, so the build_location will return a temporary directory and store the _ideal_build_dir. This is only called by self.egg_info_path to fix the temporary build directory.'
def _correct_build_location(self):
    if self.source_dir is not None:
        return
    assert self.req is not None
    assert self._temp_build_dir
    assert self._ideal_build_dir
    old_location = self._temp_build_dir
    self._temp_build_dir = None
    new_location = self.build_location(self._ideal_build_dir)
    if os.path.exists(new_location):
        raise InstallationError('A package already exists in %s; please remove it to continue'
                                % display_path(new_location))
    logger.debug('Moving package %s from %s to new location %s',
                 self, display_path(old_location), display_path(new_location))
    shutil.move(old_location, new_location)
    self._temp_build_dir = new_location
    self._ideal_build_dir = None
    self.source_dir = new_location
    self._egg_info_path = None
'Uninstall the distribution currently satisfying this requirement. Prompts before removing or modifying files unless ``auto_confirm`` is True. Refuses to delete or modify files outside of ``sys.prefix`` - thus uninstallation within a virtual environment can only modify that virtual environment, even if the virtualenv is linked to global site-packages.'
def uninstall(self, auto_confirm=False):
if not self.check_if_exists():
    raise UninstallationError(
        'Cannot uninstall requirement %s, not installed' % (self.name,))
dist = self.satisfied_by or self.conflicts_with
dist_path = normalize_path(dist.location)
if not dist_is_local(dist):
    logger.info('Not uninstalling %s at %s, outside environment %s',
                dist.key, dist_path, sys.prefix)
    self.nothing_to_uninstall = True
    return
if dist_path in get_stdlib():
    logger.info('Not uninstalling %s at %s, as it is in the standard library.',
                dist.key, dist_path)
    self.nothing_to_uninstall = True
    return
paths_to_remove = UninstallPathSet(dist)
develop_egg_link = egg_link_path(dist)
develop_egg_link_egg_info = '{0}.egg-info'.format(
    pkg_resources.to_filename(dist.project_name))
egg_info_exists = dist.egg_info and os.path.exists(dist.egg_info)
# distutils-installed projects expose their .egg-info via the provider path
distutils_egg_info = getattr(dist._provider, 'path', None)
if (egg_info_exists and dist.egg_info.endswith('.egg-info') and
        not dist.egg_info.endswith(develop_egg_link_egg_info)):
    # package installed by pip from an sdist
    paths_to_remove.add(dist.egg_info)
    if dist.has_metadata('installed-files.txt'):
        for installed_file in dist.get_metadata(
                'installed-files.txt').splitlines():
            path = os.path.normpath(
                os.path.join(dist.egg_info, installed_file))
            paths_to_remove.add(path)
    elif dist.has_metadata('top_level.txt'):
        if dist.has_metadata('namespace_packages.txt'):
            namespaces = dist.get_metadata('namespace_packages.txt')
        else:
            namespaces = []
        for top_level_pkg in [
                p for p in dist.get_metadata('top_level.txt').splitlines()
                if p and p not in namespaces]:
            path = os.path.join(dist.location, top_level_pkg)
            paths_to_remove.add(path)
            paths_to_remove.add(path + '.py')
            paths_to_remove.add(path + '.pyc')
            paths_to_remove.add(path + '.pyo')
elif distutils_egg_info:
    warnings.warn(
        'Uninstalling a distutils installed project ({0}) has been '
        'deprecated and will be removed in a future version. This is '
        'due to the fact that uninstalling a distutils project will '
        'only partially uninstall the project.'.format(self.name),
        RemovedInPip10Warning)
    paths_to_remove.add(distutils_egg_info)
elif dist.location.endswith('.egg'):
    # package installed by easy_install
    paths_to_remove.add(dist.location)
    easy_install_egg = os.path.split(dist.location)[1]
    easy_install_pth = os.path.join(
        os.path.dirname(dist.location), 'easy-install.pth')
    paths_to_remove.add_pth(easy_install_pth, './' + easy_install_egg)
elif develop_egg_link:
    # develop-installed egg-link file
    with open(develop_egg_link, 'r') as fh:
        link_pointer = os.path.normcase(fh.readline().strip())
    assert link_pointer == dist.location, (
        'Egg-link %s does not match installed location of %s (at %s)'
        % (link_pointer, self.name, dist.location))
    paths_to_remove.add(develop_egg_link)
    easy_install_pth = os.path.join(
        os.path.dirname(develop_egg_link), 'easy-install.pth')
    paths_to_remove.add_pth(easy_install_pth, dist.location)
elif egg_info_exists and dist.egg_info.endswith('.dist-info'):
    # wheel-installed .dist-info metadata
    for path in pip.wheel.uninstallation_paths(dist):
        paths_to_remove.add(path)
else:
    logger.debug('Not sure how to uninstall: %s - Check: %s',
                 dist, dist.location)
# scripts installed via distutils' scripts= argument
if dist.has_metadata('scripts') and dist.metadata_isdir('scripts'):
    for script in dist.metadata_listdir('scripts'):
        if dist_in_usersite(dist):
            bin_dir = bin_user
        else:
            bin_dir = bin_py
        paths_to_remove.add(os.path.join(bin_dir, script))
        if WINDOWS:
            paths_to_remove.add(os.path.join(bin_dir, script) + '.bat')
# console_scripts entry-point wrappers
if dist.has_metadata('entry_points.txt'):
    if six.PY2:
        options = {}
    else:
        options = {'delimiters': ('=',)}
    config = configparser.SafeConfigParser(**options)
    config.readfp(FakeFile(dist.get_metadata_lines('entry_points.txt')))
    if config.has_section('console_scripts'):
        for name, value in config.items('console_scripts'):
            if dist_in_usersite(dist):
                bin_dir = bin_user
            else:
                bin_dir = bin_py
            paths_to_remove.add(os.path.join(bin_dir, name))
            if WINDOWS:
                paths_to_remove.add(os.path.join(bin_dir, name) + '.exe')
                paths_to_remove.add(
                    os.path.join(bin_dir, name) + '.exe.manifest')
                paths_to_remove.add(
                    os.path.join(bin_dir, name) + '-script.py')
paths_to_remove.remove(auto_confirm)
self.uninstalled = paths_to_remove
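A hedged sketch of the two-phase flow this implements; `commit_uninstall` is assumed from pip 8.x, where it finalizes the stashed removal:

req = InstallRequirement.from_line('some-package')
req.uninstall(auto_confirm=True)  # collects paths and removes them (stashed for rollback)
req.commit_uninstall()            # assumption: permanently discards the stashed files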
'Ensure that a source_dir is set. This will create a temporary build dir if the name of the requirement isn\'t known yet. :param parent_dir: The ideal pip parent_dir for the source_dir. Generally src_dir for editables and build_dir for sdists. :return: self.source_dir'
def ensure_has_source_dir(self, parent_dir):
if self.source_dir is None:
    self.source_dir = self.build_location(parent_dir)
return self.source_dir
'Remove the source files from this requirement, if they are marked for deletion'
def remove_temporary_source(self):
if (self.source_dir and
        os.path.exists(os.path.join(self.source_dir,
                                    PIP_DELETE_MARKER_FILENAME))):
    logger.debug('Removing source in %s', self.source_dir)
    rmtree(self.source_dir)
self.source_dir = None
if self._temp_build_dir and os.path.exists(self._temp_build_dir):
    rmtree(self._temp_build_dir)
self._temp_build_dir = None
'Find an installed distribution that satisfies or conflicts with this requirement, and set self.satisfied_by or self.conflicts_with appropriately.'
def check_if_exists(self):
if self.req is None:
    return False
try:
    self.satisfied_by = pkg_resources.get_distribution(self.req)
except pkg_resources.DistributionNotFound:
    return False
except pkg_resources.VersionConflict:
    existing_dist = pkg_resources.get_distribution(self.req.project_name)
    if self.use_user_site:
        if dist_in_usersite(existing_dist):
            self.conflicts_with = existing_dist
        elif (running_under_virtualenv() and
                dist_in_site_packages(existing_dist)):
            raise InstallationError(
                'Will not install to the user site because it will lack '
                'sys.path precedence to %s in %s'
                % (existing_dist.project_name, existing_dist.location))
    else:
        self.conflicts_with = existing_dist
return True
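Sketch of the two outcomes (pip 8.x API assumed): after a successful call, exactly one of `satisfied_by` or `conflicts_with` is set.

req = InstallRequirement.from_line('pip>=8.0')
if req.check_if_exists():
    dist = req.satisfied_by or req.conflicts_with
    print(dist.project_name, dist.version)
else:
    print('not installed')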
'Return a pkg_resources.Distribution built from self.egg_info_path'
def get_dist(self):
egg_info = self.egg_info_path('').rstrip('/')
base_dir = os.path.dirname(egg_info)
metadata = pkg_resources.PathMetadata(base_dir, egg_info)
dist_name = os.path.splitext(os.path.basename(egg_info))[0]
return pkg_resources.Distribution(
    base_dir, project_name=dist_name, metadata=metadata)
'Return whether any known-good hashes are specified as options. These activate --require-hashes mode; hashes specified as part of a URL do not.'
@property
def has_hash_options(self):
return bool(self.options.get('hashes', {}))
'Return a hash-comparer that considers my option- and URL-based hashes to be known-good. Hashes in URLs--ones embedded in the requirements file, not ones downloaded from an index server--are almost peers with ones from flags. They satisfy --require-hashes (whether it was implicitly or explicitly activated) but do not activate it. md5 and sha224 are not allowed in flags, which should nudge people toward good algos. We always OR all hashes together, even ones from URLs. :param trust_internet: Whether to trust URL-based (#md5=...) hashes downloaded from the internet, as by populate_link()'
def hashes(self, trust_internet=True):
good_hashes = self.options.get('hashes', {}).copy()
link = self.link if trust_internet else self.original_link
if link and link.hash:
    good_hashes.setdefault(link.hash_name, []).append(link.hash)
return Hashes(good_hashes)
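A sketch of how the two hash sources combine; `req` is assumed to be a populated InstallRequirement, and the digest is the real sha256 of empty input:

empty = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
req.options['hashes'] = {'sha256': [empty]}
comparer = req.hashes(trust_internet=True)   # also ORs in a #sha256=... fragment from req.link
strict = req.hashes(trust_internet=False)    # option-based hashes only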
'Return a comma-separated list of option strings and metavars. :param option: tuple of (short opt, long opt), e.g. (\'-f\', \'--format\') :param mvarfmt: metavar format string - evaluated as mvarfmt % metavar :param optsep: separator'
def _format_option_strings(self, option, mvarfmt=' <%s>', optsep=', '):
opts = []
if option._short_opts:
    opts.append(option._short_opts[0])
if option._long_opts:
    opts.append(option._long_opts[0])
if len(opts) > 1:
    opts.insert(1, optsep)
if option.takes_value():
    metavar = option.metavar or option.dest.lower()
    opts.append(mvarfmt % metavar.lower())
return ''.join(opts)
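Worked examples, assuming `fmt` is an instance of the help formatter this method belongs to:

import optparse

opt = optparse.Option('-f', '--format', dest='format')
fmt._format_option_strings(opt)    # -> '-f, --format <format>'

flag = optparse.Option('-q', '--quiet', action='count', dest='quiet')
fmt._format_option_strings(flag)   # -> '-q, --quiet' (takes no value, so no metavar)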
'Ensure there is only one newline between usage and the first heading if there is no description.'
def format_usage(self, usage):
msg = '\nUsage: %s\n' % self.indent_lines(textwrap.dedent(usage), ' ')
return msg
'Insert an OptionGroup at a given position.'
def insert_option_group(self, idx, *args, **kwargs):
group = self.add_option_group(*args, **kwargs)
self.option_groups.pop()
self.option_groups.insert(idx, group)
return group
'Get a list of all options, including those in option groups.'
@property
def option_list_all(self):
res = self.option_list[:]
for i in self.option_groups:
    res.extend(i.option_list)
return res
'Updates the given defaults with values from the config files and the environ. Does a little special handling for certain types of options (lists).'
def _update_defaults(self, defaults):
config = {}
for section in ('global', self.name):
    config.update(self.normalize_keys(self.get_config_section(section)))
if not self.isolated:
    config.update(self.normalize_keys(self.get_environ_vars()))
# hold the parser's current defaults so callback options can read them
self.values = optparse.Values(self.defaults)
late_eval = set()
for key, val in config.items():
    # ignore empty values
    if not val:
        continue
    option = self.get_option(key)
    # ignore options not present in this parser, e.g. non-globals put
    # in [global] by users that want them to apply to all commands
    if option is None:
        continue
    if option.action in ('store_true', 'store_false', 'count'):
        val = strtobool(val)
    elif option.action == 'append':
        val = val.split()
        val = [self.check_default(option, key, v) for v in val]
    elif option.action == 'callback':
        late_eval.add(option.dest)
        opt_str = option.get_opt_string()
        val = option.convert_value(opt_str, val)
        args = option.callback_args or ()
        kwargs = option.callback_kwargs or {}
        option.callback(option, opt_str, val, self, *args, **kwargs)
    else:
        val = self.check_default(option, key, val)
    defaults[option.dest] = val
for key in late_eval:
    defaults[key] = getattr(self.values, key)
self.values = None
return defaults
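The merge order implemented above, sketched for a hypothetical timeout option:

# 1. [global] section of the config file:     timeout = 10
# 2. [<command>] section of the config file:  may override per command
# 3. PIP_* environment variables (skipped when isolated): PIP_TIMEOUT=60 wins last
# All keys are normalized to their long-option form ('--timeout') before
# being matched against the parser's options; 'append' values are split on
# whitespace, so PIP_FIND_LINKS='url1 url2' becomes ['url1', 'url2'].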
'Return a config dictionary with normalized keys regardless of whether the keys were specified in environment variables or in config files'
def normalize_keys(self, items):
normalized = {}
for key, val in items:
    key = key.replace('_', '-')
    if not key.startswith('--'):
        key = '--%s' % key  # only prefer long opts
    normalized[key] = val
return normalized
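For example (hypothetical input items):

items = [('index_url', 'https://example.org/simple'), ('--timeout', '60')]
# normalize_keys(items) -> {'--index-url': 'https://example.org/simple',
#                           '--timeout': '60'}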
'Get a section of a configuration'
def get_config_section(self, name):
if self.config.has_section(name):
    return self.config.items(name)
return []
'Returns a generator of all environment variables with the prefix PIP_'
def get_environ_vars(self):
for key, val in os.environ.items():
    if _environ_prefix_re.search(key):
        yield (_environ_prefix_re.sub('', key).lower(), val)
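For example, with a single matching variable set (value hypothetical):

import os
os.environ['PIP_TIMEOUT'] = '60'
# the generator yields ('timeout', '60'); normalize_keys() later turns
# that into {'--timeout': '60'} so it can match the parser's option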
'Overriding to make updating the defaults after instantiation of the option parser possible; _update_defaults() does the dirty work.'
def get_default_values(self):
if not self.process_default_values:
    # old, pre-Optik 1.5 behaviour
    return optparse.Values(self.defaults)
defaults = self._update_defaults(self.defaults.copy())
for option in self._get_all_options():
    default = defaults.get(option.dest)
    if isinstance(default, string_types):
        opt_str = option.get_opt_string()
        defaults[option.dest] = option.check_value(opt_str, default)
return optparse.Values(defaults)
'Calls the standard formatter, but will indent all of the log messages by our current indentation level.'
def format(self, record):
formatted = logging.Formatter.format(self, record)
formatted = ''.join([
    (' ' * get_indentation()) + line
    for line in formatted.splitlines(True)])
return formatted
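A usage sketch, assuming this method lives on a class named IndentingFormatter (the name is an assumption) and that get_indentation() currently returns 2:

import logging

handler = logging.StreamHandler()
handler.setFormatter(IndentingFormatter('%(message)s'))  # class defined above
log = logging.getLogger('demo')
log.addHandler(handler)
log.warning('first line\nsecond line')
# Every physical line of the record gets the same two-space prefix:
#   first line
#   second line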
'Save the original SIGINT handler for later.'
def __init__(self, *args, **kwargs):
super(InterruptibleMixin, self).__init__(*args, **kwargs)
self.original_handler = signal(SIGINT, self.handle_sigint)
# signal() returns None if the previous handler was not installed from
# Python; fall back to Python's default SIGINT handler so we can still
# restore something sensible later.
if self.original_handler is None:
    self.original_handler = default_int_handler
'Restore the original SIGINT handler after finishing. This should happen regardless of whether the progress display finishes normally, or gets interrupted.'
def finish(self):
super(InterruptibleMixin, self).finish()
signal(SIGINT, self.original_handler)
'Call self.finish() before delegating to the original SIGINT handler. This handler should only be in place while the progress display is active.'
def handle_sigint(self, signum, frame):
self.finish()
self.original_handler(signum, frame)
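A sketch of how the mixin is meant to be composed; the import path is an assumption based on pip 8.x's vendored 'progress' package:

from pip._vendor.progress.bar import Bar

class InterruptibleBar(InterruptibleMixin, Bar):
    # Listing the mixin first in the MRO lets its __init__ install the
    # SIGINT handler around the bar, so Ctrl-C calls finish() (restoring
    # the terminal state) before the original handler runs.
    pass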
':param hashes: A dict of algorithm names pointing to lists of allowed hex digests'
def __init__(self, hashes=None):
self._allowed = {} if hashes is None else hashes
'Check good hashes against ones built from iterable of chunks of data. Raise HashMismatch if none match.'
def check_against_chunks(self, chunks):
gots = {}
for hash_name in iterkeys(self._allowed):
    try:
        gots[hash_name] = hashlib.new(hash_name)
    except (ValueError, TypeError):
        raise InstallationError('Unknown hash name: %s' % hash_name)
for chunk in chunks:
    for hash in itervalues(gots):
        hash.update(chunk)
for hash_name, got in iteritems(gots):
    if got.hexdigest() in self._allowed[hash_name]:
        return
self._raise(gots)
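A runnable check; the digest is computed on the fly so it is guaranteed to match, and the import path for Hashes is assumed from pip 8.x:

import hashlib
from pip.utils.hashes import Hashes  # assumption: pip 8.x module path

good = Hashes({'sha256': [hashlib.sha256(b'hello world').hexdigest()]})
good.check_against_chunks(iter([b'hello ', b'world']))  # returns None: match found
good.check_against_chunks(iter([b'tampered']))          # raises HashMismatch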