problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_13454
|
rasdani/github-patches
|
git_diff
|
psychopy__psychopy-773
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Overlapping recordings problem
I am having a problem with mic.record and mic.stop - I am currently on psychopy 1.81.00, but I have had the same problem in earlier versions. I have written some code which records until the participant hits a key, or until a time-limit is reached. I am getting occasional truncated recordings or zero-length recordings - these occur when I test the code myself, so it's not just the participants being trigger-happy. I think the problem occurs when the timer on some past recording runs out, it stops the current recording. So say you set a recording running with a limit of 10 seconds, send a mic.stop() after 5 seconds, then start a new recording, that new recording will be stopped after 5 seconds, when the timer on the original recording runs out - it doesn't seem to be quite as neat as that in practice, which is confusing, but you can see this in action with something like the following little program. How often to occurs depends on how unlucky you are, but if you run through the for loop 10-15 times you will get some truncated recordings.
from psychopy import microphone,core,event, visual
def recording(window,trialNum,mic):
print('recording ' + str(trialNum))
mic.reset()
instructionText = visual.TextStim(window, text='Count to five, then press space',color="black",pos=(0,0.0),wrapWidth=2)
instructionText.draw()
window.flip()
mic.record(7,block=False,filename=str(trialNum)+'.wav') #start recording
event.waitKeys(maxWait='inf', keyList=['space']) #wait for a space from participant
core.wait(0.1) #so you can hear the click of the spacebar
window.flip()
mic.stop() #stop the mic
core.wait(0.1) #to get a flicker between screens
# set up mic and window
microphone.switchOn(sampleRate=44100)
mic = microphone.AudioCapture()
myWin = visual.Window((800,600), allowGUI=True,color='white')
for t in range(100): #shouldn't need to do as many as 100 to get some truncated recordings!
recording(myWin,t,mic)
microphone.switchOff()
core.quit()
</issue>
<code>
[start of psychopy/app/builder/components/microphone.py]
1 # Part of the PsychoPy library
2 # Copyright (C) 2014 Jonathan Peirce
3 # Distributed under the terms of the GNU General Public License (GPL).
4
5 # Author: Jeremy R. Gray, 2012
6
7 from _base import *
8 from os import path
9 from psychopy.app.builder import components #for getInitVals()
10
11 thisFolder = path.abspath(path.dirname(__file__))#the absolute path to the folder containing this path
12 iconFile = path.join(thisFolder,'microphone.png')
13 tooltip = _translate('Microphone: basic sound capture (fixed onset & duration), okay for spoken words')
14
15 _localized = {'stereo': _translate('Stereo')}
16
17 class MicrophoneComponent(BaseComponent):
18 """An event class for capturing short sound stimuli"""
19 categories = ['Responses']
20 def __init__(self, exp, parentName, name='mic_1',
21 startType='time (s)', startVal=0.0,
22 stopType='duration (s)', stopVal=2.0, startEstim='', durationEstim='',
23 stereo=False
24 ):
25 super(MicrophoneComponent, self).__init__(exp, parentName, name=name,
26 startType=startType, startVal=startVal,
27 stopType=stopType, stopVal=stopVal,
28 startEstim=startEstim, durationEstim=durationEstim)
29 self.type='Microphone'
30 self.url="http://www.psychopy.org/builder/components/microphone.html"
31 self.exp.requirePsychopyLibs(['microphone'])
32 #params
33 self.params['stereo']=Param(stereo, valType='bool',
34 hint=_translate("Record two channels (stereo) or one (mono, smaller file)"),
35 label=_localized['stereo'])
36 self.params['stopType'].allowedVals = ['duration (s)']
37 self.params['stopType'].hint = _translate('The duration of the recording in seconds; blank = 0 sec')
38 def writeStartCode(self,buff):
39 # filename should have date_time, so filename_wav should be unique
40 buff.writeIndented("wavDirName = filename + '_wav'\n")
41 buff.writeIndented("if not os.path.isdir(wavDirName):\n" +
42 " os.makedirs(wavDirName) # to hold .wav files\n")
43 def writeRoutineStartCode(self,buff):
44 inits = components.getInitVals(self.params)
45 buff.writeIndented("%s = microphone.AdvAudioCapture(name='%s', saveDir=wavDirName, stereo=%s)\n" %(
46 inits['name'], inits['name'], inits['stereo']))
47 def writeFrameCode(self,buff):
48 """Write the code that will be called every frame"""
49 duration = "%s" % self.params['stopVal'] # type is code
50 if not len(duration):
51 duration = "0"
52 # starting condition:
53 buff.writeIndented("\n")
54 buff.writeIndented("# *%s* updates\n" %(self.params['name']))
55 self.writeStartTestCode(buff) # writes an if statement
56 buff.writeIndented("%(name)s.status = STARTED\n" %(self.params))
57 buff.writeIndented("%s.record(sec=%s, block=False) # start the recording thread\n" %
58 (self.params['name'], duration))
59 buff.setIndentLevel(-1, relative=True) # ends the if statement
60 buff.writeIndented("\n")
61 # these lines handle both normal end of rec thread, and user .stop():
62 buff.writeIndented("if %(name)s.status == STARTED and not %(name)s.recorder.running:\n" % self.params)
63 buff.writeIndented(" %s.status = FINISHED\n" % self.params['name'])
64 def writeRoutineEndCode(self,buff):
65 #some shortcuts
66 name = self.params['name']
67 if len(self.exp.flow._loopList):
68 currLoop = self.exp.flow._loopList[-1] #last (outer-most) loop
69 else:
70 currLoop = self.exp._expHandler
71
72 #write the actual code
73 buff.writeIndented("# check responses\n" %self.params)
74 buff.writeIndented("if not %(name)s.savedFile:\n"%self.params)
75 buff.writeIndented(" %(name)s.savedFile = None\n" %(self.params))
76 buff.writeIndented("# store data for %s (%s)\n" %(currLoop.params['name'], currLoop.type))
77
78 #always add saved file name
79 buff.writeIndented("%s.addData('%s.filename', %s.savedFile)\n" % (currLoop.params['name'],name,name))
80 if currLoop.params['name'].val == self.exp._expHandler.name:
81 buff.writeIndented("%s.nextEntry()\n" % self.exp._expHandler.name)
82 # best not to do loudness / rms or other processing here
83
[end of psychopy/app/builder/components/microphone.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/psychopy/app/builder/components/microphone.py b/psychopy/app/builder/components/microphone.py
--- a/psychopy/app/builder/components/microphone.py
+++ b/psychopy/app/builder/components/microphone.py
@@ -70,7 +70,8 @@
currLoop = self.exp._expHandler
#write the actual code
- buff.writeIndented("# check responses\n" %self.params)
+ buff.writeIndented("# %(name)s stop & responses\n" %self.params)
+ buff.writeIndented("%s.stop() # sometimes helpful\n" % self.params['name'])
buff.writeIndented("if not %(name)s.savedFile:\n"%self.params)
buff.writeIndented(" %(name)s.savedFile = None\n" %(self.params))
buff.writeIndented("# store data for %s (%s)\n" %(currLoop.params['name'], currLoop.type))
|
{"golden_diff": "diff --git a/psychopy/app/builder/components/microphone.py b/psychopy/app/builder/components/microphone.py\n--- a/psychopy/app/builder/components/microphone.py\n+++ b/psychopy/app/builder/components/microphone.py\n@@ -70,7 +70,8 @@\n currLoop = self.exp._expHandler\n \n #write the actual code\n- buff.writeIndented(\"# check responses\\n\" %self.params)\n+ buff.writeIndented(\"# %(name)s stop & responses\\n\" %self.params)\n+ buff.writeIndented(\"%s.stop() # sometimes helpful\\n\" % self.params['name'])\n buff.writeIndented(\"if not %(name)s.savedFile:\\n\"%self.params)\n buff.writeIndented(\" %(name)s.savedFile = None\\n\" %(self.params))\n buff.writeIndented(\"# store data for %s (%s)\\n\" %(currLoop.params['name'], currLoop.type))\n", "issue": "Overlapping recordings problem\nI am having a problem with mic.record and mic.stop - I am currently on psychopy 1.81.00, but I have had the same problem in earlier versions. I have written some code which records until the participant hits a key, or until a time-limit is reached. I am getting occasional truncated recordings or zero-length recordings - these occur when I test the code myself, so it's not just the participants being trigger-happy. I think the problem occurs when the timer on some past recording runs out, it stops the current recording. So say you set a recording running with a limit of 10 seconds, send a mic.stop() after 5 seconds, then start a new recording, that new recording will be stopped after 5 seconds, when the timer on the original recording runs out - it doesn't seem to be quite as neat as that in practice, which is confusing, but you can see this in action with something like the following little program. How often to occurs depends on how unlucky you are, but if you run through the for loop 10-15 times you will get some truncated recordings. \n\nfrom psychopy import microphone,core,event, visual\n\ndef recording(window,trialNum,mic):\n print('recording ' + str(trialNum))\n mic.reset()\n instructionText = visual.TextStim(window, text='Count to five, then press space',color=\"black\",pos=(0,0.0),wrapWidth=2)\n instructionText.draw()\n window.flip()\n mic.record(7,block=False,filename=str(trialNum)+'.wav') #start recording\n event.waitKeys(maxWait='inf', keyList=['space']) #wait for a space from participant\n core.wait(0.1) #so you can hear the click of the spacebar\n window.flip()\n mic.stop() #stop the mic\n core.wait(0.1) #to get a flicker between screens\n# set up mic and window\n\nmicrophone.switchOn(sampleRate=44100)\nmic = microphone.AudioCapture()\nmyWin = visual.Window((800,600), allowGUI=True,color='white')\nfor t in range(100): #shouldn't need to do as many as 100 to get some truncated recordings!\n recording(myWin,t,mic)\nmicrophone.switchOff()\ncore.quit()\n\n", "before_files": [{"content": "# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n# Author: Jeremy R. 
Gray, 2012\n\nfrom _base import *\nfrom os import path\nfrom psychopy.app.builder import components #for getInitVals()\n\nthisFolder = path.abspath(path.dirname(__file__))#the absolute path to the folder containing this path\niconFile = path.join(thisFolder,'microphone.png')\ntooltip = _translate('Microphone: basic sound capture (fixed onset & duration), okay for spoken words')\n\n_localized = {'stereo': _translate('Stereo')}\n\nclass MicrophoneComponent(BaseComponent):\n \"\"\"An event class for capturing short sound stimuli\"\"\"\n categories = ['Responses']\n def __init__(self, exp, parentName, name='mic_1',\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=2.0, startEstim='', durationEstim='',\n stereo=False\n ):\n super(MicrophoneComponent, self).__init__(exp, parentName, name=name,\n startType=startType, startVal=startVal,\n stopType=stopType, stopVal=stopVal,\n startEstim=startEstim, durationEstim=durationEstim)\n self.type='Microphone'\n self.url=\"http://www.psychopy.org/builder/components/microphone.html\"\n self.exp.requirePsychopyLibs(['microphone'])\n #params\n self.params['stereo']=Param(stereo, valType='bool',\n hint=_translate(\"Record two channels (stereo) or one (mono, smaller file)\"),\n label=_localized['stereo'])\n self.params['stopType'].allowedVals = ['duration (s)']\n self.params['stopType'].hint = _translate('The duration of the recording in seconds; blank = 0 sec')\n def writeStartCode(self,buff):\n # filename should have date_time, so filename_wav should be unique\n buff.writeIndented(\"wavDirName = filename + '_wav'\\n\")\n buff.writeIndented(\"if not os.path.isdir(wavDirName):\\n\" +\n \" os.makedirs(wavDirName) # to hold .wav files\\n\")\n def writeRoutineStartCode(self,buff):\n inits = components.getInitVals(self.params)\n buff.writeIndented(\"%s = microphone.AdvAudioCapture(name='%s', saveDir=wavDirName, stereo=%s)\\n\" %(\n inits['name'], inits['name'], inits['stereo']))\n def writeFrameCode(self,buff):\n \"\"\"Write the code that will be called every frame\"\"\"\n duration = \"%s\" % self.params['stopVal'] # type is code\n if not len(duration):\n duration = \"0\"\n # starting condition:\n buff.writeIndented(\"\\n\")\n buff.writeIndented(\"# *%s* updates\\n\" %(self.params['name']))\n self.writeStartTestCode(buff) # writes an if statement\n buff.writeIndented(\"%(name)s.status = STARTED\\n\" %(self.params))\n buff.writeIndented(\"%s.record(sec=%s, block=False) # start the recording thread\\n\" %\n (self.params['name'], duration))\n buff.setIndentLevel(-1, relative=True) # ends the if statement\n buff.writeIndented(\"\\n\")\n # these lines handle both normal end of rec thread, and user .stop():\n buff.writeIndented(\"if %(name)s.status == STARTED and not %(name)s.recorder.running:\\n\" % self.params)\n buff.writeIndented(\" %s.status = FINISHED\\n\" % self.params['name'])\n def writeRoutineEndCode(self,buff):\n #some shortcuts\n name = self.params['name']\n if len(self.exp.flow._loopList):\n currLoop = self.exp.flow._loopList[-1] #last (outer-most) loop\n else:\n currLoop = self.exp._expHandler\n\n #write the actual code\n buff.writeIndented(\"# check responses\\n\" %self.params)\n buff.writeIndented(\"if not %(name)s.savedFile:\\n\"%self.params)\n buff.writeIndented(\" %(name)s.savedFile = None\\n\" %(self.params))\n buff.writeIndented(\"# store data for %s (%s)\\n\" %(currLoop.params['name'], currLoop.type))\n\n #always add saved file name\n buff.writeIndented(\"%s.addData('%s.filename', %s.savedFile)\\n\" % 
(currLoop.params['name'],name,name))\n if currLoop.params['name'].val == self.exp._expHandler.name:\n buff.writeIndented(\"%s.nextEntry()\\n\" % self.exp._expHandler.name)\n # best not to do loudness / rms or other processing here\n", "path": "psychopy/app/builder/components/microphone.py"}]}
| 2,252 | 201 |
gh_patches_debug_37142
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1872
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tests/test_passport.py::TestEnUS::testDates failure: ValueError: day is out of range for month
* Faker version: 18.10.0
* OS: Gentoo Linux amd64
It's possible that I've been incredibly lucky:
```
> expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
E ValueError: day is out of range for month
[…]
expiry_years = 10
issue_date = datetime.datetime(2020, 2, 29, 6, 57, 56)
```
Full traceback below.
### Steps to reproduce
1. `python -m pytest` ;-)
### Expected behavior
Tests passing ;-).
### Actual behavior
```pytb
_________________________________________________________ TestEnUS.testDates __________________________________________________________
self = <tests.test_passport.TestEnUS object at 0x7f6db2c3a920>, faker = <faker.proxy.Faker object at 0x7f6db21159f0>, num_samples = 20
def testDates(self, faker, num_samples=20):
age4 = date.today() - timedelta(days=4 * 365)
age12 = date.today() - timedelta(days=12 * 365)
age17 = date.today() - timedelta(days=17 * 365)
age23 = date.today() - timedelta(days=23 * 365)
age30 = date.today() - timedelta(days=30 * 365)
birthdays = [(age4, 4), (age12, 12), (age17, 17), (age23, 23), (age30, 30)]
for _ in range(num_samples):
for birthday in birthdays:
> birth_date_f, issue_date_f, expiry_date_f = faker.passport_dates(birthday[0])
_ = 4
age12 = datetime.date(2011, 6, 5)
age17 = datetime.date(2006, 6, 6)
age23 = datetime.date(2000, 6, 7)
age30 = datetime.date(1993, 6, 9)
age4 = datetime.date(2019, 6, 3)
birth_date = datetime.date(2006, 6, 6)
birth_date_f = '06 Jun 2006'
birthday = (datetime.date(2000, 6, 7), 23)
birthdays = [(datetime.date(2019, 6, 3), 4),
(datetime.date(2011, 6, 5), 12),
(datetime.date(2006, 6, 6), 17),
(datetime.date(2000, 6, 7), 23),
(datetime.date(1993, 6, 9), 30)]
expiry_date = datetime.date(2025, 4, 8)
expiry_date_f = '08 Apr 2025'
faker = <faker.proxy.Faker object at 0x7f6db21159f0>
issue_date = datetime.date(2020, 4, 8)
issue_date_f = '08 Apr 2020'
num_samples = 20
self = <tests.test_passport.TestEnUS object at 0x7f6db2c3a920>
tests/test_passport.py:55:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <faker.providers.passport.en_US.Provider object at 0x7f6db20938b0>, birthday = datetime.date(2000, 6, 7)
def passport_dates(self, birthday: date = date.today()) -> Tuple[str, str, str]:
"""Generates a formatted date of birth, issue, and expiration dates.
issue and expiration dates are conditioned to fall within U.S. standards of 5 and 10 year expirations
The ``birthday`` argument is a datetime.date object representing a date of birth.
Sources:
-https://travel.state.gov/content/travel/en/passports/passport-help/faqs.html
"""
birth_date = birthday.strftime("%d ") + birthday.strftime("%b ") + birthday.strftime("%Y")
today = date.today()
age = (today - birthday).days // 365
if age < 16:
expiry_years = 5
issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
# Checks if age is less than 5 so issue date is not before birthdate
if age < 5:
issue_date = self.generator.date_time_between(birthday, today)
expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
return birth_date, issue_date_fromat, expiry_date_format
elif age >= 26:
expiry_years = 10
issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
return birth_date, issue_date_fromat, expiry_date_format
else:
# In cases between age 16 and 26, the issue date is 5 years ago, but expiry may be in 10 or 5 years
expiry_years = 5
issue_date = self.generator.date_time_between(
today - timedelta(days=expiry_years * 365 - 1), birthday + timedelta(days=16 * 365 - 1)
)
# all people over 21 must have been over 16 when they recieved passport or it will be expired otherwise
if age >= 21:
issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
expiry_years = 10
> expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
E ValueError: day is out of range for month
age = 23
birth_date = '07 Jun 2000'
birthday = datetime.date(2000, 6, 7)
expiry_years = 10
issue_date = datetime.datetime(2020, 2, 29, 6, 57, 56)
self = <faker.providers.passport.en_US.Provider object at 0x7f6db20938b0>
today = datetime.date(2023, 6, 2)
faker/providers/passport/en_US/__init__.py:69: ValueError
```
</issue>
<code>
[start of faker/providers/passport/en_US/__init__.py]
1 import random
2
3 from datetime import date, timedelta
4 from typing import Tuple
5
6 from .. import Provider as PassportProvider
7
8
9 class Provider(PassportProvider):
10 """Implement passport provider for ``en_US`` locale.
11
12 Sources:
13
14 - https://travel.state.gov/content/travel/en/passports/passport-help/next-generation-passport.html
15 - https://www.vitalrecordsonline.com/glossary/passport-book-number
16 """
17
18 passport_number_formats = (
19 # NGP
20 "?########",
21 # Pre-NGP
22 "#########",
23 )
24
25 def passport_dates(self, birthday: date = date.today()) -> Tuple[str, str, str]:
26 """Generates a formatted date of birth, issue, and expiration dates.
27 issue and expiration dates are conditioned to fall within U.S. standards of 5 and 10 year expirations
28
29
30 The ``birthday`` argument is a datetime.date object representing a date of birth.
31
32 Sources:
33
34 -https://travel.state.gov/content/travel/en/passports/passport-help/faqs.html
35 """
36 birth_date = birthday.strftime("%d ") + birthday.strftime("%b ") + birthday.strftime("%Y")
37 today = date.today()
38 age = (today - birthday).days // 365
39 if age < 16:
40 expiry_years = 5
41 issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
42 # Checks if age is less than 5 so issue date is not before birthdate
43 if age < 5:
44 issue_date = self.generator.date_time_between(birthday, today)
45 expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
46
47 issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
48 expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
49 return birth_date, issue_date_fromat, expiry_date_format
50 elif age >= 26:
51 expiry_years = 10
52 issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
53 expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
54 issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
55 expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
56 return birth_date, issue_date_fromat, expiry_date_format
57
58 else:
59 # In cases between age 16 and 26, the issue date is 5 years ago, but expiry may be in 10 or 5 years
60 expiry_years = 5
61 issue_date = self.generator.date_time_between(
62 today - timedelta(days=expiry_years * 365 - 1), birthday + timedelta(days=16 * 365 - 1)
63 )
64 # all people over 21 must have been over 16 when they recieved passport or it will be expired otherwise
65 if age >= 21:
66 issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
67 expiry_years = 10
68
69 expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
70
71 issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
72 expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
73 return birth_date, issue_date_fromat, expiry_date_format
74
75 def passport_gender(self, seed: int = 0) -> str:
76 """Generates a string representing the gender displayed on a passport
77
78 Sources:
79
80 - https://williamsinstitute.law.ucla.edu/publications/x-gender-markers-passports/
81 """
82 if seed != 0:
83 random.seed(seed)
84
85 genders = ["M", "F", "X"]
86 gender = random.choices(genders, weights=[0.493, 0.493, 0.014], k=1)[0]
87 return gender
88
89 def passport_full(self) -> str:
90 """Generates a formatted sting with US Passport information"""
91 dob = self.passport_dob()
92 birth_date, issue_date, expiry_date = self.passport_dates(dob)
93 gender_g = self.passport_gender()
94 given_name, surname = self.passport_owner(gender=gender_g)
95 number = self.passport_number()
96
97 full_rep = """{first_name}\n{second_name}\n{gender}\n{dob}\n{issue}\n{expire}\n{num}\n"""
98 full_rep = full_rep.format(
99 first_name=given_name,
100 second_name=surname,
101 gender=gender_g,
102 dob=birth_date,
103 issue=issue_date,
104 expire=expiry_date,
105 num=number,
106 )
107 return full_rep
108
[end of faker/providers/passport/en_US/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/providers/passport/en_US/__init__.py b/faker/providers/passport/en_US/__init__.py
--- a/faker/providers/passport/en_US/__init__.py
+++ b/faker/providers/passport/en_US/__init__.py
@@ -42,19 +42,9 @@
# Checks if age is less than 5 so issue date is not before birthdate
if age < 5:
issue_date = self.generator.date_time_between(birthday, today)
- expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
-
- issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
- expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
- return birth_date, issue_date_fromat, expiry_date_format
elif age >= 26:
expiry_years = 10
issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
- expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
- issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
- expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
- return birth_date, issue_date_fromat, expiry_date_format
-
else:
# In cases between age 16 and 26, the issue date is 5 years ago, but expiry may be in 10 or 5 years
expiry_years = 5
@@ -66,11 +56,13 @@
issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)
expiry_years = 10
- expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
+ if issue_date.day == 29 and issue_date.month == 2:
+ issue_date -= timedelta(days=1)
+ expiry_date = issue_date.replace(year=issue_date.year + expiry_years)
- issue_date_fromat = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
- expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
- return birth_date, issue_date_fromat, expiry_date_format
+ issue_date_format = issue_date.strftime("%d ") + issue_date.strftime("%b ") + issue_date.strftime("%Y")
+ expiry_date_format = expiry_date.strftime("%d ") + expiry_date.strftime("%b ") + expiry_date.strftime("%Y")
+ return birth_date, issue_date_format, expiry_date_format
def passport_gender(self, seed: int = 0) -> str:
"""Generates a string representing the gender displayed on a passport
|
{"golden_diff": "diff --git a/faker/providers/passport/en_US/__init__.py b/faker/providers/passport/en_US/__init__.py\n--- a/faker/providers/passport/en_US/__init__.py\n+++ b/faker/providers/passport/en_US/__init__.py\n@@ -42,19 +42,9 @@\n # Checks if age is less than 5 so issue date is not before birthdate\n if age < 5:\n issue_date = self.generator.date_time_between(birthday, today)\n- expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\n-\n- issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\n- expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\n- return birth_date, issue_date_fromat, expiry_date_format\n elif age >= 26:\n expiry_years = 10\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\n- expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\n- issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\n- expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\n- return birth_date, issue_date_fromat, expiry_date_format\n-\n else:\n # In cases between age 16 and 26, the issue date is 5 years ago, but expiry may be in 10 or 5 years\n expiry_years = 5\n@@ -66,11 +56,13 @@\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\n expiry_years = 10\n \n- expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\n+ if issue_date.day == 29 and issue_date.month == 2:\n+ issue_date -= timedelta(days=1)\n+ expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\n \n- issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\n- expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\n- return birth_date, issue_date_fromat, expiry_date_format\n+ issue_date_format = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\n+ expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\n+ return birth_date, issue_date_format, expiry_date_format\n \n def passport_gender(self, seed: int = 0) -> str:\n \"\"\"Generates a string representing the gender displayed on a passport\n", "issue": "tests/test_passport.py::TestEnUS::testDates failure: ValueError: day is out of range for month\n* Faker version: 18.10.0\r\n* OS: Gentoo Linux amd64\r\n\r\nIt's possible that I've been incredibly lucky:\r\n\r\n```\r\n> expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\r\nE ValueError: day is out of range for month\r\n[\u2026]\r\nexpiry_years = 10\r\nissue_date = datetime.datetime(2020, 2, 29, 6, 57, 56)\r\n```\r\n\r\nFull traceback below.\r\n\r\n### Steps to reproduce\r\n\r\n1. 
`python -m pytest` ;-)\r\n\r\n### Expected behavior\r\n\r\nTests passing ;-).\r\n\r\n### Actual behavior\r\n\r\n```pytb\r\n_________________________________________________________ TestEnUS.testDates __________________________________________________________\r\n\r\nself = <tests.test_passport.TestEnUS object at 0x7f6db2c3a920>, faker = <faker.proxy.Faker object at 0x7f6db21159f0>, num_samples = 20\r\n\r\n def testDates(self, faker, num_samples=20):\r\n age4 = date.today() - timedelta(days=4 * 365)\r\n age12 = date.today() - timedelta(days=12 * 365)\r\n age17 = date.today() - timedelta(days=17 * 365)\r\n age23 = date.today() - timedelta(days=23 * 365)\r\n age30 = date.today() - timedelta(days=30 * 365)\r\n \r\n birthdays = [(age4, 4), (age12, 12), (age17, 17), (age23, 23), (age30, 30)]\r\n for _ in range(num_samples):\r\n for birthday in birthdays:\r\n> birth_date_f, issue_date_f, expiry_date_f = faker.passport_dates(birthday[0])\r\n\r\n_ = 4\r\nage12 = datetime.date(2011, 6, 5)\r\nage17 = datetime.date(2006, 6, 6)\r\nage23 = datetime.date(2000, 6, 7)\r\nage30 = datetime.date(1993, 6, 9)\r\nage4 = datetime.date(2019, 6, 3)\r\nbirth_date = datetime.date(2006, 6, 6)\r\nbirth_date_f = '06 Jun 2006'\r\nbirthday = (datetime.date(2000, 6, 7), 23)\r\nbirthdays = [(datetime.date(2019, 6, 3), 4),\r\n (datetime.date(2011, 6, 5), 12),\r\n (datetime.date(2006, 6, 6), 17),\r\n (datetime.date(2000, 6, 7), 23),\r\n (datetime.date(1993, 6, 9), 30)]\r\nexpiry_date = datetime.date(2025, 4, 8)\r\nexpiry_date_f = '08 Apr 2025'\r\nfaker = <faker.proxy.Faker object at 0x7f6db21159f0>\r\nissue_date = datetime.date(2020, 4, 8)\r\nissue_date_f = '08 Apr 2020'\r\nnum_samples = 20\r\nself = <tests.test_passport.TestEnUS object at 0x7f6db2c3a920>\r\n\r\ntests/test_passport.py:55: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <faker.providers.passport.en_US.Provider object at 0x7f6db20938b0>, birthday = datetime.date(2000, 6, 7)\r\n\r\n def passport_dates(self, birthday: date = date.today()) -> Tuple[str, str, str]:\r\n \"\"\"Generates a formatted date of birth, issue, and expiration dates.\r\n issue and expiration dates are conditioned to fall within U.S. 
standards of 5 and 10 year expirations\r\n \r\n \r\n The ``birthday`` argument is a datetime.date object representing a date of birth.\r\n \r\n Sources:\r\n \r\n -https://travel.state.gov/content/travel/en/passports/passport-help/faqs.html\r\n \"\"\"\r\n birth_date = birthday.strftime(\"%d \") + birthday.strftime(\"%b \") + birthday.strftime(\"%Y\")\r\n today = date.today()\r\n age = (today - birthday).days // 365\r\n if age < 16:\r\n expiry_years = 5\r\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\r\n # Checks if age is less than 5 so issue date is not before birthdate\r\n if age < 5:\r\n issue_date = self.generator.date_time_between(birthday, today)\r\n expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\r\n \r\n issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\r\n expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\r\n return birth_date, issue_date_fromat, expiry_date_format\r\n elif age >= 26:\r\n expiry_years = 10\r\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\r\n expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\r\n issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\r\n expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\r\n return birth_date, issue_date_fromat, expiry_date_format\r\n \r\n else:\r\n # In cases between age 16 and 26, the issue date is 5 years ago, but expiry may be in 10 or 5 years\r\n expiry_years = 5\r\n issue_date = self.generator.date_time_between(\r\n today - timedelta(days=expiry_years * 365 - 1), birthday + timedelta(days=16 * 365 - 1)\r\n )\r\n # all people over 21 must have been over 16 when they recieved passport or it will be expired otherwise\r\n if age >= 21:\r\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\r\n expiry_years = 10\r\n \r\n> expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\r\nE ValueError: day is out of range for month\r\n\r\nage = 23\r\nbirth_date = '07 Jun 2000'\r\nbirthday = datetime.date(2000, 6, 7)\r\nexpiry_years = 10\r\nissue_date = datetime.datetime(2020, 2, 29, 6, 57, 56)\r\nself = <faker.providers.passport.en_US.Provider object at 0x7f6db20938b0>\r\ntoday = datetime.date(2023, 6, 2)\r\n\r\nfaker/providers/passport/en_US/__init__.py:69: ValueError\r\n```\r\n\n", "before_files": [{"content": "import random\n\nfrom datetime import date, timedelta\nfrom typing import Tuple\n\nfrom .. import Provider as PassportProvider\n\n\nclass Provider(PassportProvider):\n \"\"\"Implement passport provider for ``en_US`` locale.\n\n Sources:\n\n - https://travel.state.gov/content/travel/en/passports/passport-help/next-generation-passport.html\n - https://www.vitalrecordsonline.com/glossary/passport-book-number\n \"\"\"\n\n passport_number_formats = (\n # NGP\n \"?########\",\n # Pre-NGP\n \"#########\",\n )\n\n def passport_dates(self, birthday: date = date.today()) -> Tuple[str, str, str]:\n \"\"\"Generates a formatted date of birth, issue, and expiration dates.\n issue and expiration dates are conditioned to fall within U.S. 
standards of 5 and 10 year expirations\n\n\n The ``birthday`` argument is a datetime.date object representing a date of birth.\n\n Sources:\n\n -https://travel.state.gov/content/travel/en/passports/passport-help/faqs.html\n \"\"\"\n birth_date = birthday.strftime(\"%d \") + birthday.strftime(\"%b \") + birthday.strftime(\"%Y\")\n today = date.today()\n age = (today - birthday).days // 365\n if age < 16:\n expiry_years = 5\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\n # Checks if age is less than 5 so issue date is not before birthdate\n if age < 5:\n issue_date = self.generator.date_time_between(birthday, today)\n expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\n\n issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\n expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\n return birth_date, issue_date_fromat, expiry_date_format\n elif age >= 26:\n expiry_years = 10\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\n expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\n issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\n expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\n return birth_date, issue_date_fromat, expiry_date_format\n\n else:\n # In cases between age 16 and 26, the issue date is 5 years ago, but expiry may be in 10 or 5 years\n expiry_years = 5\n issue_date = self.generator.date_time_between(\n today - timedelta(days=expiry_years * 365 - 1), birthday + timedelta(days=16 * 365 - 1)\n )\n # all people over 21 must have been over 16 when they recieved passport or it will be expired otherwise\n if age >= 21:\n issue_date = self.generator.date_time_between(today - timedelta(days=expiry_years * 365 - 1), today)\n expiry_years = 10\n\n expiry_date = issue_date.replace(year=issue_date.year + expiry_years)\n\n issue_date_fromat = issue_date.strftime(\"%d \") + issue_date.strftime(\"%b \") + issue_date.strftime(\"%Y\")\n expiry_date_format = expiry_date.strftime(\"%d \") + expiry_date.strftime(\"%b \") + expiry_date.strftime(\"%Y\")\n return birth_date, issue_date_fromat, expiry_date_format\n\n def passport_gender(self, seed: int = 0) -> str:\n \"\"\"Generates a string representing the gender displayed on a passport\n\n Sources:\n\n - https://williamsinstitute.law.ucla.edu/publications/x-gender-markers-passports/\n \"\"\"\n if seed != 0:\n random.seed(seed)\n\n genders = [\"M\", \"F\", \"X\"]\n gender = random.choices(genders, weights=[0.493, 0.493, 0.014], k=1)[0]\n return gender\n\n def passport_full(self) -> str:\n \"\"\"Generates a formatted sting with US Passport information\"\"\"\n dob = self.passport_dob()\n birth_date, issue_date, expiry_date = self.passport_dates(dob)\n gender_g = self.passport_gender()\n given_name, surname = self.passport_owner(gender=gender_g)\n number = self.passport_number()\n\n full_rep = \"\"\"{first_name}\\n{second_name}\\n{gender}\\n{dob}\\n{issue}\\n{expire}\\n{num}\\n\"\"\"\n full_rep = full_rep.format(\n first_name=given_name,\n second_name=surname,\n gender=gender_g,\n dob=birth_date,\n issue=issue_date,\n expire=expiry_date,\n num=number,\n )\n return full_rep\n", "path": "faker/providers/passport/en_US/__init__.py"}]}
| 3,561 | 646 |
gh_patches_debug_15852
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-4125
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cmake_find_package generator not forwarding all dependency properties
Some info first:
- conan v1.10.0
- cmake v3.13.1
- Linux GCC 8.1 x86_64 libstdc++11
If using CMake version >= 3.0 the `cmake_find_package` generator
generates CMake imported targets to easily import and track dependencies
of the required packages.
When using those imported targets in CMake, users get not only automatic
tracking of link dependencies **but also include directories and compile
options**. If we have three packages A, B, C, with C depending on B and
B depending on A (A <-- B <-- C) **C may require A include directories if A headers are
included in the public API of B**. In this scenario, if we are building
C with a cmake_find_package generator:
``` cmake
find_package(B REQUIRED)
add_library(C)
target_link_libraries(C PRIVATE B::B)
```
cmake correctly adds A's includes as part of C private include dirs
since the generated FindB.cmake generates not only B::B target but also
a target for A with all its properties, target that is linked against
B::B.
But if for some reason the A target (CONAN_PKG::A_A) had been defined previously,
the generator skips the generation of this target **and no longer links
A against B::B**, preventing A include dirs to be propagated to B and C.
I've found this issue when mixing bincrafters Qt package (which doesn't support the `cmake_find_package` generator) with a couple of more packages that I'm importing using the find package generator. I have to work this way since our guidelines encourage the usage of `find_package()` to require dependencies:
``` cmake
conan_basic_setup(TARGETS)
find_package(QtCore CONFIG REQUIRED)
find_package(foofromconan REQUIRED)
```
I know this use case is not common and shouldn't be encouraged, it's just an artifact of how the Qt package works right now (Our goal is to go full find_package() for transparent conan integration). But I think the "bug" in the generator could happen in different scenarios and the fix may be useful for others.
I've fixed this behavior by always adding the dependency targets to the list of targets to link (See #4125).
- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
</issue>
<code>
[start of conans/client/generators/cmake_find_package.py]
1 from conans.client.generators.cmake import DepsCppCmake
2 from conans.model import Generator
3
4
5 generic_find_package_template = """
6 message(STATUS "Conan: Using autogenerated Find{name}.cmake")
7 # Global approach
8 SET({name}_FOUND 1)
9 SET({name}_INCLUDE_DIRS {deps.include_paths})
10 SET({name}_INCLUDES {deps.include_paths})
11 SET({name}_DEFINITIONS {deps.defines})
12 SET({name}_LIBRARIES "") # Will be filled later
13 SET({name}_LIBRARIES_TARGETS "") # Will be filled later, if CMake 3
14 SET({name}_LIBS "") # Same as {name}_LIBRARIES
15
16 mark_as_advanced({name}_FOUND {name}_INCLUDE_DIRS {name}_INCLUDES
17 {name}_DEFINITIONS {name}_LIBRARIES {name}_LIBS)
18
19
20 # Find the real .lib/.a and add them to {name}_LIBS and {name}_LIBRARY_LIST
21 SET({name}_LIBRARY_LIST {deps.libs})
22 SET({name}_LIB_DIRS {deps.lib_paths})
23 foreach(_LIBRARY_NAME ${{{name}_LIBRARY_LIST}})
24 unset(CONAN_FOUND_LIBRARY CACHE)
25 find_library(CONAN_FOUND_LIBRARY NAME ${{_LIBRARY_NAME}} PATHS ${{{name}_LIB_DIRS}}
26 NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH)
27 if(CONAN_FOUND_LIBRARY)
28 list(APPEND {name}_LIBRARIES ${{CONAN_FOUND_LIBRARY}})
29 if(NOT ${{CMAKE_VERSION}} VERSION_LESS "3.0")
30 # Create a micro-target for each lib/a found
31 set(_LIB_NAME CONAN_LIB::{name}_${{_LIBRARY_NAME}})
32 if(NOT TARGET ${{_LIB_NAME}})
33 # Create a micro-target for each lib/a found
34 add_library(${{_LIB_NAME}} UNKNOWN IMPORTED)
35 set_target_properties(${{_LIB_NAME}} PROPERTIES IMPORTED_LOCATION ${{CONAN_FOUND_LIBRARY}})
36 list(APPEND {name}_LIBRARIES_TARGETS ${{_LIB_NAME}})
37 else()
38 message(STATUS "Skipping already existing target: ${{_LIB_NAME}}")
39 endif()
40 endif()
41 message(STATUS "Found: ${{CONAN_FOUND_LIBRARY}}")
42 else()
43 message(STATUS "Library ${{_LIBRARY_NAME}} not found in package, might be system one")
44 list(APPEND {name}_LIBRARIES_TARGETS ${{_LIBRARY_NAME}})
45 list(APPEND {name}_LIBRARIES ${{_LIBRARY_NAME}})
46 endif()
47 endforeach()
48 set({name}_LIBS ${{{name}_LIBRARIES}})
49
50 if(NOT ${{CMAKE_VERSION}} VERSION_LESS "3.0")
51 # Target approach
52 if(NOT TARGET {name}::{name})
53 add_library({name}::{name} INTERFACE IMPORTED)
54 if({name}_INCLUDE_DIRS)
55 set_target_properties({name}::{name} PROPERTIES
56 INTERFACE_INCLUDE_DIRECTORIES "${{{name}_INCLUDE_DIRS}}")
57 endif()
58 set_property(TARGET {name}::{name} PROPERTY INTERFACE_LINK_LIBRARIES ${{{name}_LIBRARIES_TARGETS}} "{deps.sharedlinkflags_list}" "{deps.exelinkflags_list}")
59 set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_DEFINITIONS {deps.compile_definitions})
60 set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_OPTIONS "{deps.cppflags_list}" "{deps.cflags_list}")
61 endif()
62 {find_dependencies}
63 endif()
64 """
65
66
67 class CMakeFindPackageGenerator(Generator):
68
69 @property
70 def filename(self):
71 pass
72
73 @property
74 def content(self):
75 ret = {}
76 for depname, cpp_info in self.deps_build_info.dependencies:
77 ret["Find%s.cmake" % depname] = self._single_find_package(depname, cpp_info)
78 return ret
79
80 @staticmethod
81 def _single_find_package(name, cpp_info):
82 deps = DepsCppCmake(cpp_info)
83 lines = []
84 if cpp_info.public_deps:
85 lines = CMakeFindPackageGenerator._transitive_lines(name, cpp_info)
86 tmp = generic_find_package_template.format(name=name, deps=deps,
87 find_dependencies="\n".join(lines))
88 return tmp
89
90 @staticmethod
91 def _transitive_lines(name, cpp_info):
92 lines = ["# Library dependencies", "include(CMakeFindDependencyMacro)"]
93 for dep in cpp_info.public_deps:
94 def property_lines(prop):
95 lib_t = "%s::%s" % (name, name)
96 dep_t = "%s::%s" % (dep, dep)
97 return ["get_target_property(tmp %s %s)" % (dep_t, prop),
98 "if(tmp)",
99 " set_property(TARGET %s APPEND PROPERTY %s ${tmp})" % (lib_t, prop),
100 'endif()']
101
102 lines.append("find_dependency(%s REQUIRED)" % dep)
103 lines.extend(property_lines("INTERFACE_LINK_LIBRARIES"))
104 lines.extend(property_lines("INTERFACE_COMPILE_DEFINITIONS"))
105 lines.extend(property_lines("INTERFACE_INCLUDE_DIRECTORIES"))
106 return lines
107
[end of conans/client/generators/cmake_find_package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/client/generators/cmake_find_package.py b/conans/client/generators/cmake_find_package.py
--- a/conans/client/generators/cmake_find_package.py
+++ b/conans/client/generators/cmake_find_package.py
@@ -33,10 +33,10 @@
# Create a micro-target for each lib/a found
add_library(${{_LIB_NAME}} UNKNOWN IMPORTED)
set_target_properties(${{_LIB_NAME}} PROPERTIES IMPORTED_LOCATION ${{CONAN_FOUND_LIBRARY}})
- list(APPEND {name}_LIBRARIES_TARGETS ${{_LIB_NAME}})
else()
message(STATUS "Skipping already existing target: ${{_LIB_NAME}}")
endif()
+ list(APPEND {name}_LIBRARIES_TARGETS ${{_LIB_NAME}})
endif()
message(STATUS "Found: ${{CONAN_FOUND_LIBRARY}}")
else()
|
{"golden_diff": "diff --git a/conans/client/generators/cmake_find_package.py b/conans/client/generators/cmake_find_package.py\n--- a/conans/client/generators/cmake_find_package.py\n+++ b/conans/client/generators/cmake_find_package.py\n@@ -33,10 +33,10 @@\n # Create a micro-target for each lib/a found\n add_library(${{_LIB_NAME}} UNKNOWN IMPORTED)\n set_target_properties(${{_LIB_NAME}} PROPERTIES IMPORTED_LOCATION ${{CONAN_FOUND_LIBRARY}})\n- list(APPEND {name}_LIBRARIES_TARGETS ${{_LIB_NAME}})\n else()\n message(STATUS \"Skipping already existing target: ${{_LIB_NAME}}\")\n endif()\n+ list(APPEND {name}_LIBRARIES_TARGETS ${{_LIB_NAME}})\n endif()\n message(STATUS \"Found: ${{CONAN_FOUND_LIBRARY}}\")\n else()\n", "issue": "cmake_find_package generator not forwarding all dependency properties\n Some info first:\r\n - conan v1.10.0\r\n - cmake v3.13.1\r\n - Linux GCC 8.1 x86_64 libstdc++11\r\n\r\nIf using CMake version >= 3.0 the `cmake_find_package` generator\r\ngenerates CMake imported targets to easily import and track dependencies\r\nof the required packages.\r\n\r\nWhen using those imported targets in CMake, users get not only automatic\r\ntracking of link dependencies **but also include directories and compile\r\noptions**. If we have three packages A, B, C, with C depending on B and\r\nB depending on A (A <-- B <-- C) **C may require A include directories if A headers are\r\nincluded in the public API of B**. In this scenario, if we are building\r\nC with a cmake_find_package generator:\r\n\r\n``` cmake\r\nfind_package(B REQUIRED)\r\nadd_library(C)\r\ntarget_link_libraries(C PRIVATE B::B)\r\n```\r\n\r\ncmake correctly adds A's includes as part of C private include dirs\r\nsince the generated FindB.cmake generates not only B::B target but also\r\na target for A with all its properties, target that is linked against\r\nB::B.\r\nBut if for some reason the A target (CONAN_PKG::A_A) had been defined previously,\r\nthe generator skips the generation of this target **and no longer links\r\nA against B::B**, preventing A include dirs to be propagated to B and C.\r\n\r\nI've found this issue when mixing bincrafters Qt package (which doesn't support the `cmake_find_package` generator) with a couple of more packages that I'm importing using the find package generator. I have to work this way since our guidelines encourage the usage of `find_package()` to require dependencies:\r\n\r\n``` cmake\r\nconan_basic_setup(TARGETS)\r\nfind_package(QtCore CONFIG REQUIRED)\r\nfind_package(foofromconan REQUIRED)\r\n```\r\nI know this use case is not common and shouldn't be encouraged, it's just an artifact of how the Qt package works right now (Our goal is to go full find_package() for transparent conan integration). 
But I think the \"bug\" in the generator could happen in different scenarios and the fix may be useful for others.\r\n\r\nI've fixed this behavior by always adding the dependency targets to the list of targets to link (See #4125).\r\n\r\n- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).\r\n- [x] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\n\n", "before_files": [{"content": "from conans.client.generators.cmake import DepsCppCmake\nfrom conans.model import Generator\n\n\ngeneric_find_package_template = \"\"\"\nmessage(STATUS \"Conan: Using autogenerated Find{name}.cmake\")\n# Global approach\nSET({name}_FOUND 1)\nSET({name}_INCLUDE_DIRS {deps.include_paths})\nSET({name}_INCLUDES {deps.include_paths})\nSET({name}_DEFINITIONS {deps.defines})\nSET({name}_LIBRARIES \"\") # Will be filled later\nSET({name}_LIBRARIES_TARGETS \"\") # Will be filled later, if CMake 3\nSET({name}_LIBS \"\") # Same as {name}_LIBRARIES\n\nmark_as_advanced({name}_FOUND {name}_INCLUDE_DIRS {name}_INCLUDES\n {name}_DEFINITIONS {name}_LIBRARIES {name}_LIBS)\n\n\n# Find the real .lib/.a and add them to {name}_LIBS and {name}_LIBRARY_LIST\nSET({name}_LIBRARY_LIST {deps.libs})\nSET({name}_LIB_DIRS {deps.lib_paths})\nforeach(_LIBRARY_NAME ${{{name}_LIBRARY_LIST}})\n unset(CONAN_FOUND_LIBRARY CACHE)\n find_library(CONAN_FOUND_LIBRARY NAME ${{_LIBRARY_NAME}} PATHS ${{{name}_LIB_DIRS}}\n NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH)\n if(CONAN_FOUND_LIBRARY)\n list(APPEND {name}_LIBRARIES ${{CONAN_FOUND_LIBRARY}})\n if(NOT ${{CMAKE_VERSION}} VERSION_LESS \"3.0\")\n # Create a micro-target for each lib/a found\n set(_LIB_NAME CONAN_LIB::{name}_${{_LIBRARY_NAME}})\n if(NOT TARGET ${{_LIB_NAME}})\n # Create a micro-target for each lib/a found\n add_library(${{_LIB_NAME}} UNKNOWN IMPORTED)\n set_target_properties(${{_LIB_NAME}} PROPERTIES IMPORTED_LOCATION ${{CONAN_FOUND_LIBRARY}})\n list(APPEND {name}_LIBRARIES_TARGETS ${{_LIB_NAME}})\n else()\n message(STATUS \"Skipping already existing target: ${{_LIB_NAME}}\")\n endif()\n endif()\n message(STATUS \"Found: ${{CONAN_FOUND_LIBRARY}}\")\n else()\n message(STATUS \"Library ${{_LIBRARY_NAME}} not found in package, might be system one\")\n list(APPEND {name}_LIBRARIES_TARGETS ${{_LIBRARY_NAME}})\n list(APPEND {name}_LIBRARIES ${{_LIBRARY_NAME}})\n endif()\nendforeach()\nset({name}_LIBS ${{{name}_LIBRARIES}})\n\nif(NOT ${{CMAKE_VERSION}} VERSION_LESS \"3.0\")\n # Target approach\n if(NOT TARGET {name}::{name})\n add_library({name}::{name} INTERFACE IMPORTED)\n if({name}_INCLUDE_DIRS)\n set_target_properties({name}::{name} PROPERTIES\n INTERFACE_INCLUDE_DIRECTORIES \"${{{name}_INCLUDE_DIRS}}\")\n endif()\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_LINK_LIBRARIES ${{{name}_LIBRARIES_TARGETS}} \"{deps.sharedlinkflags_list}\" \"{deps.exelinkflags_list}\")\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_DEFINITIONS {deps.compile_definitions})\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_OPTIONS \"{deps.cppflags_list}\" \"{deps.cflags_list}\")\n endif()\n {find_dependencies}\nendif()\n\"\"\"\n\n\nclass CMakeFindPackageGenerator(Generator):\n\n @property\n def filename(self):\n pass\n\n @property\n def content(self):\n ret = {}\n for depname, cpp_info in self.deps_build_info.dependencies:\n ret[\"Find%s.cmake\" % 
depname] = self._single_find_package(depname, cpp_info)\n return ret\n\n @staticmethod\n def _single_find_package(name, cpp_info):\n deps = DepsCppCmake(cpp_info)\n lines = []\n if cpp_info.public_deps:\n lines = CMakeFindPackageGenerator._transitive_lines(name, cpp_info)\n tmp = generic_find_package_template.format(name=name, deps=deps,\n find_dependencies=\"\\n\".join(lines))\n return tmp\n\n @staticmethod\n def _transitive_lines(name, cpp_info):\n lines = [\"# Library dependencies\", \"include(CMakeFindDependencyMacro)\"]\n for dep in cpp_info.public_deps:\n def property_lines(prop):\n lib_t = \"%s::%s\" % (name, name)\n dep_t = \"%s::%s\" % (dep, dep)\n return [\"get_target_property(tmp %s %s)\" % (dep_t, prop),\n \"if(tmp)\",\n \" set_property(TARGET %s APPEND PROPERTY %s ${tmp})\" % (lib_t, prop),\n 'endif()']\n\n lines.append(\"find_dependency(%s REQUIRED)\" % dep)\n lines.extend(property_lines(\"INTERFACE_LINK_LIBRARIES\"))\n lines.extend(property_lines(\"INTERFACE_COMPILE_DEFINITIONS\"))\n lines.extend(property_lines(\"INTERFACE_INCLUDE_DIRECTORIES\"))\n return lines\n", "path": "conans/client/generators/cmake_find_package.py"}]}
| 2,438 | 199 |
gh_patches_debug_22295
|
rasdani/github-patches
|
git_diff
|
cisagov__manage.get.gov-1439
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Subdomain request checking and error presentation
### Story
As a domain applicant, if I'm typing a subdomain on the domain page
I want to be presented with an error
so that I know that I can't submit a subdomain.
### Acceptance Criteria
- [ ] Leverage the existing in page error handling to present the user with content derived from #324
- [ ] Add a unit test to check validation
- [ ] Submissions should be denied if two dots are present. (Check for the edge case when two "." are not separated by characters: that state being "..")
- [ ] Submissions should be denied if two dots are present. (Check for the edge case when a "." is at the beginning of the domain without characters in front: that state being ".something.") A validation sketch follows this list.
- [ ] Confirm other edge cases
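A minimal sketch of the dot check described above, assuming the existing `extra_dots` message in `src/api/views.py` is reused for the in-page error; the helper name and the candidate list are illustrative only:

```python
def contains_period(candidate: str) -> bool:
    # True for "sub.city" as well as the edge cases ".." and ".something."
    return "." in candidate

# Example run against the edge cases listed above
for candidate in ["city", "sub.city", "..", ".something."]:
    print(candidate, "denied" if contains_period(candidate) else "allowed")
```

Anything containing a period could then surface the `extra_dots` message before any availability lookup is attempted.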
### Additional Context
_No response_
### Issue Links
Blocked by: #324
Related to: #720
</issue>
<code>
[start of src/api/views.py]
1 """Internal API views"""
2 from django.apps import apps
3 from django.views.decorators.http import require_http_methods
4 from django.http import HttpResponse, JsonResponse
5 from django.utils.safestring import mark_safe
6
7 from registrar.templatetags.url_helpers import public_site_url
8 from registrar.utility.errors import GenericError, GenericErrorCodes
9
10 import requests
11
12 from login_required import login_not_required
13
14 from cachetools.func import ttl_cache
15
16 from registrar.utility.s3_bucket import S3ClientError, S3ClientHelper
17
18
19 DOMAIN_FILE_URL = "https://raw.githubusercontent.com/cisagov/dotgov-data/main/current-full.csv"
20
21
22 DOMAIN_API_MESSAGES = {
23 "required": "Enter the .gov domain you want. Don’t include “www” or “.gov.”"
24 " For example, if you want www.city.gov, you would enter “city”"
25 " (without the quotes).",
26 "extra_dots": "Enter the .gov domain you want without any periods.",
27 # message below is considered safe; no user input can be inserted into the message
28 # body; public_site_url() function reads from local app settings and therefore safe
29 "unavailable": mark_safe( # nosec
30 "That domain isn’t available. "
31 "<a class='usa-link' href='{}' target='_blank'>"
32 "Read more about choosing your .gov domain.</a>".format(public_site_url("domains/choosing"))
33 ),
34 "invalid": "Enter a domain using only letters, numbers, or hyphens (though we don't recommend using hyphens).",
35 "success": "That domain is available!",
36 "error": GenericError.get_error_message(GenericErrorCodes.CANNOT_CONTACT_REGISTRY),
37 }
38
39
40 # this file doesn't change that often, nor is it that big, so cache the result
41 # in memory for ten minutes
42 @ttl_cache(ttl=600)
43 def _domains():
44 """Return a list of the current .gov domains.
45
46 Fetch a file from DOMAIN_FILE_URL, parse the CSV for the domain,
47 lowercase everything and return the list.
48 """
49 DraftDomain = apps.get_model("registrar.DraftDomain")
50 # 5 second timeout
51 file_contents = requests.get(DOMAIN_FILE_URL, timeout=5).text
52 domains = set()
53 # skip the first line
54 for line in file_contents.splitlines()[1:]:
55 # get the domain before the first comma
56 domain = line.split(",", 1)[0]
57 # sanity-check the string we got from the file here
58 if DraftDomain.string_could_be_domain(domain):
59 # lowercase everything when we put it in domains
60 domains.add(domain.lower())
61 return domains
62
63
64 def check_domain_available(domain):
65 """Return true if the given domain is available.
66
67 The given domain is lowercased to match against the domains list. If the
68 given domain doesn't end with .gov, ".gov" is added when looking for
69 a match. If check fails, throws a RegistryError.
70 """
71 Domain = apps.get_model("registrar.Domain")
72 if domain.endswith(".gov"):
73 return Domain.available(domain)
74 else:
75 # domain search string doesn't end with .gov, add it on here
76 return Domain.available(domain + ".gov")
77
78
79 @require_http_methods(["GET"])
80 @login_not_required
81 def available(request, domain=""):
82 """Is a given domain available or not.
83
84 Response is a JSON dictionary with the key "available" and value true or
85 false.
86 """
87 DraftDomain = apps.get_model("registrar.DraftDomain")
88 # validate that the given domain could be a domain name and fail early if
89 # not.
90 if not (DraftDomain.string_could_be_domain(domain) or DraftDomain.string_could_be_domain(domain + ".gov")):
91 return JsonResponse({"available": False, "message": DOMAIN_API_MESSAGES["invalid"]})
92 # a domain is available if it is NOT in the list of current domains
93 try:
94 if check_domain_available(domain):
95 return JsonResponse({"available": True, "message": DOMAIN_API_MESSAGES["success"]})
96 else:
97 return JsonResponse({"available": False, "message": DOMAIN_API_MESSAGES["unavailable"]})
98 except Exception:
99 return JsonResponse({"available": False, "message": DOMAIN_API_MESSAGES["error"]})
100
101
102 @require_http_methods(["GET"])
103 @login_not_required
104 def get_current_full(request, file_name="current-full.csv"):
105 """This will return the file content of current-full.csv which is the command
106 output of generate_current_full_report.py. This command iterates through each Domain
107 and returns a CSV representation."""
108 return serve_file(file_name)
109
110
111 @require_http_methods(["GET"])
112 @login_not_required
113 def get_current_federal(request, file_name="current-federal.csv"):
114 """This will return the file content of current-federal.csv which is the command
115 output of generate_current_federal_report.py. This command iterates through each Domain
116 and returns a CSV representation."""
117 return serve_file(file_name)
118
119
120 def serve_file(file_name):
121 """Downloads a file based on a given filepath. Returns a 500 if not found."""
122 s3_client = S3ClientHelper()
123 # Serve the CSV file. If not found, an exception will be thrown.
124 # This will then be caught by flat, causing it to not read it - which is what we want.
125 try:
126 file = s3_client.get_file(file_name, decode_to_utf=True)
127 except S3ClientError as err:
128 # TODO - #1317: Notify operations when auto report generation fails
129 raise err
130
131 response = HttpResponse(file)
132 return response
133
[end of src/api/views.py]
[start of src/registrar/config/urls.py]
1 """URL Configuration
2
3 For more information see:
4 https://docs.djangoproject.com/en/4.0/topics/http/urls/
5 """
6
7 from django.contrib import admin
8 from django.urls import include, path
9 from django.views.generic import RedirectView
10
11 from registrar import views
12 from registrar.views.application import Step
13 from registrar.views.utility import always_404
14 from api.views import available, get_current_federal, get_current_full
15
16
17 APPLICATION_NAMESPACE = views.ApplicationWizard.URL_NAMESPACE
18 application_urls = [
19 path("", views.ApplicationWizard.as_view(), name=""),
20 path("finished/", views.Finished.as_view(), name="finished"),
21 ]
22
23 # dynamically generate the other application_urls
24 for step, view in [
25 # add/remove steps here
26 (Step.ORGANIZATION_TYPE, views.OrganizationType),
27 (Step.TRIBAL_GOVERNMENT, views.TribalGovernment),
28 (Step.ORGANIZATION_FEDERAL, views.OrganizationFederal),
29 (Step.ORGANIZATION_ELECTION, views.OrganizationElection),
30 (Step.ORGANIZATION_CONTACT, views.OrganizationContact),
31 (Step.ABOUT_YOUR_ORGANIZATION, views.AboutYourOrganization),
32 (Step.AUTHORIZING_OFFICIAL, views.AuthorizingOfficial),
33 (Step.CURRENT_SITES, views.CurrentSites),
34 (Step.DOTGOV_DOMAIN, views.DotgovDomain),
35 (Step.PURPOSE, views.Purpose),
36 (Step.YOUR_CONTACT, views.YourContact),
37 (Step.OTHER_CONTACTS, views.OtherContacts),
38 (Step.NO_OTHER_CONTACTS, views.NoOtherContacts),
39 (Step.ANYTHING_ELSE, views.AnythingElse),
40 (Step.REQUIREMENTS, views.Requirements),
41 (Step.REVIEW, views.Review),
42 ]:
43 application_urls.append(path(f"{step}/", view.as_view(), name=step))
44
45
46 urlpatterns = [
47 path("", views.index, name="home"),
48 path(
49 "admin/logout/",
50 RedirectView.as_view(pattern_name="logout", permanent=False),
51 ),
52 path("admin/", admin.site.urls),
53 path(
54 "application/<id>/edit/",
55 views.ApplicationWizard.as_view(),
56 name=views.ApplicationWizard.EDIT_URL_NAME,
57 ),
58 path(
59 "application/<int:pk>",
60 views.ApplicationStatus.as_view(),
61 name="application-status",
62 ),
63 path(
64 "application/<int:pk>/withdraw",
65 views.ApplicationWithdrawConfirmation.as_view(),
66 name="application-withdraw-confirmation",
67 ),
68 path(
69 "application/<int:pk>/withdrawconfirmed",
70 views.ApplicationWithdrawn.as_view(),
71 name="application-withdrawn",
72 ),
73 path("health/", views.health),
74 path("openid/", include("djangooidc.urls")),
75 path("register/", include((application_urls, APPLICATION_NAMESPACE))),
76 path("api/v1/available/<domain>", available, name="available"),
77 path("api/v1/get-report/current-federal", get_current_federal, name="get-current-federal"),
78 path("api/v1/get-report/current-full", get_current_full, name="get-current-full"),
79 path(
80 "todo",
81 lambda r: always_404(r, "We forgot to include this link, sorry."),
82 name="todo",
83 ),
84 path("domain/<int:pk>", views.DomainView.as_view(), name="domain"),
85 path("domain/<int:pk>/users", views.DomainUsersView.as_view(), name="domain-users"),
86 path(
87 "domain/<int:pk>/dns",
88 views.DomainDNSView.as_view(),
89 name="domain-dns",
90 ),
91 path(
92 "domain/<int:pk>/dns/nameservers",
93 views.DomainNameserversView.as_view(),
94 name="domain-dns-nameservers",
95 ),
96 path(
97 "domain/<int:pk>/dns/dnssec",
98 views.DomainDNSSECView.as_view(),
99 name="domain-dns-dnssec",
100 ),
101 path(
102 "domain/<int:pk>/dns/dnssec/dsdata",
103 views.DomainDsDataView.as_view(),
104 name="domain-dns-dnssec-dsdata",
105 ),
106 path(
107 "domain/<int:pk>/your-contact-information",
108 views.DomainYourContactInformationView.as_view(),
109 name="domain-your-contact-information",
110 ),
111 path(
112 "domain/<int:pk>/org-name-address",
113 views.DomainOrgNameAddressView.as_view(),
114 name="domain-org-name-address",
115 ),
116 path(
117 "domain/<int:pk>/authorizing-official",
118 views.DomainAuthorizingOfficialView.as_view(),
119 name="domain-authorizing-official",
120 ),
121 path(
122 "domain/<int:pk>/security-email",
123 views.DomainSecurityEmailView.as_view(),
124 name="domain-security-email",
125 ),
126 path(
127 "domain/<int:pk>/users/add",
128 views.DomainAddUserView.as_view(),
129 name="domain-users-add",
130 ),
131 path(
132 "invitation/<int:pk>/delete",
133 views.DomainInvitationDeleteView.as_view(http_method_names=["post"]),
134 name="invitation-delete",
135 ),
136 ]
137
138 # we normally would guard these with `if settings.DEBUG` but tests run with
139 # DEBUG = False even when these apps have been loaded because settings.DEBUG
140 # was actually True. Instead, let's add these URLs any time we are able to
141 # import the debug toolbar package.
142 try:
143 import debug_toolbar # type: ignore
144
145 urlpatterns += [path("__debug__/", include(debug_toolbar.urls))]
146 except ImportError:
147 pass
148
[end of src/registrar/config/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/api/views.py b/src/api/views.py
--- a/src/api/views.py
+++ b/src/api/views.py
@@ -84,6 +84,7 @@
Response is a JSON dictionary with the key "available" and value true or
false.
"""
+ domain = request.GET.get("domain", "")
DraftDomain = apps.get_model("registrar.DraftDomain")
# validate that the given domain could be a domain name and fail early if
# not.
diff --git a/src/registrar/config/urls.py b/src/registrar/config/urls.py
--- a/src/registrar/config/urls.py
+++ b/src/registrar/config/urls.py
@@ -73,7 +73,7 @@
path("health/", views.health),
path("openid/", include("djangooidc.urls")),
path("register/", include((application_urls, APPLICATION_NAMESPACE))),
- path("api/v1/available/<domain>", available, name="available"),
+ path("api/v1/available/", available, name="available"),
path("api/v1/get-report/current-federal", get_current_federal, name="get-current-federal"),
path("api/v1/get-report/current-full", get_current_full, name="get-current-full"),
path(
|
{"golden_diff": "diff --git a/src/api/views.py b/src/api/views.py\n--- a/src/api/views.py\n+++ b/src/api/views.py\n@@ -84,6 +84,7 @@\n Response is a JSON dictionary with the key \"available\" and value true or\n false.\n \"\"\"\n+ domain = request.GET.get(\"domain\", \"\")\n DraftDomain = apps.get_model(\"registrar.DraftDomain\")\n # validate that the given domain could be a domain name and fail early if\n # not.\ndiff --git a/src/registrar/config/urls.py b/src/registrar/config/urls.py\n--- a/src/registrar/config/urls.py\n+++ b/src/registrar/config/urls.py\n@@ -73,7 +73,7 @@\n path(\"health/\", views.health),\n path(\"openid/\", include(\"djangooidc.urls\")),\n path(\"register/\", include((application_urls, APPLICATION_NAMESPACE))),\n- path(\"api/v1/available/<domain>\", available, name=\"available\"),\n+ path(\"api/v1/available/\", available, name=\"available\"),\n path(\"api/v1/get-report/current-federal\", get_current_federal, name=\"get-current-federal\"),\n path(\"api/v1/get-report/current-full\", get_current_full, name=\"get-current-full\"),\n path(\n", "issue": "Subdomain request checking and error presentation\n### Story\n\nAs a domain applicant, if I'm typing a subdomain on the domain page\nI want to be presented with an error\nso that I know that I can't submit a subdomain.\n\n\n### Acceptance Criteria\n\n- [ ] Leverage the existing in page error handling to present the user with content derived from #324 \n- [ ] Add a unit test to check validation\n- [ ] Submissions should be denied if two dots are present. (Check for the edge case when two \".\" are not separated by characters: that state being \"..\")\n- [ ] Submissions should be denied if two dots are present. (Check for the edge case when a \".\" is at the beginning if the domain without characters in front: that state being \".something.\"\n- [ ] Confirm other edge cases\n\n### Additional Context\n\n_No response_\n\n### Issue Links\n\nBlocked by: #324\nRelated to: #720 \n", "before_files": [{"content": "\"\"\"Internal API views\"\"\"\nfrom django.apps import apps\nfrom django.views.decorators.http import require_http_methods\nfrom django.http import HttpResponse, JsonResponse\nfrom django.utils.safestring import mark_safe\n\nfrom registrar.templatetags.url_helpers import public_site_url\nfrom registrar.utility.errors import GenericError, GenericErrorCodes\n\nimport requests\n\nfrom login_required import login_not_required\n\nfrom cachetools.func import ttl_cache\n\nfrom registrar.utility.s3_bucket import S3ClientError, S3ClientHelper\n\n\nDOMAIN_FILE_URL = \"https://raw.githubusercontent.com/cisagov/dotgov-data/main/current-full.csv\"\n\n\nDOMAIN_API_MESSAGES = {\n \"required\": \"Enter the .gov domain you want. Don\u2019t include \u201cwww\u201d or \u201c.gov.\u201d\"\n \" For example, if you want www.city.gov, you would enter \u201ccity\u201d\"\n \" (without the quotes).\",\n \"extra_dots\": \"Enter the .gov domain you want without any periods.\",\n # message below is considered safe; no user input can be inserted into the message\n # body; public_site_url() function reads from local app settings and therefore safe\n \"unavailable\": mark_safe( # nosec\n \"That domain isn\u2019t available. 
\"\n \"<a class='usa-link' href='{}' target='_blank'>\"\n \"Read more about choosing your .gov domain.</a>\".format(public_site_url(\"domains/choosing\"))\n ),\n \"invalid\": \"Enter a domain using only letters, numbers, or hyphens (though we don't recommend using hyphens).\",\n \"success\": \"That domain is available!\",\n \"error\": GenericError.get_error_message(GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n}\n\n\n# this file doesn't change that often, nor is it that big, so cache the result\n# in memory for ten minutes\n@ttl_cache(ttl=600)\ndef _domains():\n \"\"\"Return a list of the current .gov domains.\n\n Fetch a file from DOMAIN_FILE_URL, parse the CSV for the domain,\n lowercase everything and return the list.\n \"\"\"\n DraftDomain = apps.get_model(\"registrar.DraftDomain\")\n # 5 second timeout\n file_contents = requests.get(DOMAIN_FILE_URL, timeout=5).text\n domains = set()\n # skip the first line\n for line in file_contents.splitlines()[1:]:\n # get the domain before the first comma\n domain = line.split(\",\", 1)[0]\n # sanity-check the string we got from the file here\n if DraftDomain.string_could_be_domain(domain):\n # lowercase everything when we put it in domains\n domains.add(domain.lower())\n return domains\n\n\ndef check_domain_available(domain):\n \"\"\"Return true if the given domain is available.\n\n The given domain is lowercased to match against the domains list. If the\n given domain doesn't end with .gov, \".gov\" is added when looking for\n a match. If check fails, throws a RegistryError.\n \"\"\"\n Domain = apps.get_model(\"registrar.Domain\")\n if domain.endswith(\".gov\"):\n return Domain.available(domain)\n else:\n # domain search string doesn't end with .gov, add it on here\n return Domain.available(domain + \".gov\")\n\n\n@require_http_methods([\"GET\"])\n@login_not_required\ndef available(request, domain=\"\"):\n \"\"\"Is a given domain available or not.\n\n Response is a JSON dictionary with the key \"available\" and value true or\n false.\n \"\"\"\n DraftDomain = apps.get_model(\"registrar.DraftDomain\")\n # validate that the given domain could be a domain name and fail early if\n # not.\n if not (DraftDomain.string_could_be_domain(domain) or DraftDomain.string_could_be_domain(domain + \".gov\")):\n return JsonResponse({\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"invalid\"]})\n # a domain is available if it is NOT in the list of current domains\n try:\n if check_domain_available(domain):\n return JsonResponse({\"available\": True, \"message\": DOMAIN_API_MESSAGES[\"success\"]})\n else:\n return JsonResponse({\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"unavailable\"]})\n except Exception:\n return JsonResponse({\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"error\"]})\n\n\n@require_http_methods([\"GET\"])\n@login_not_required\ndef get_current_full(request, file_name=\"current-full.csv\"):\n \"\"\"This will return the file content of current-full.csv which is the command\n output of generate_current_full_report.py. This command iterates through each Domain\n and returns a CSV representation.\"\"\"\n return serve_file(file_name)\n\n\n@require_http_methods([\"GET\"])\n@login_not_required\ndef get_current_federal(request, file_name=\"current-federal.csv\"):\n \"\"\"This will return the file content of current-federal.csv which is the command\n output of generate_current_federal_report.py. 
This command iterates through each Domain\n and returns a CSV representation.\"\"\"\n return serve_file(file_name)\n\n\ndef serve_file(file_name):\n \"\"\"Downloads a file based on a given filepath. Returns a 500 if not found.\"\"\"\n s3_client = S3ClientHelper()\n # Serve the CSV file. If not found, an exception will be thrown.\n # This will then be caught by flat, causing it to not read it - which is what we want.\n try:\n file = s3_client.get_file(file_name, decode_to_utf=True)\n except S3ClientError as err:\n # TODO - #1317: Notify operations when auto report generation fails\n raise err\n\n response = HttpResponse(file)\n return response\n", "path": "src/api/views.py"}, {"content": "\"\"\"URL Configuration\n\nFor more information see:\n https://docs.djangoproject.com/en/4.0/topics/http/urls/\n\"\"\"\n\nfrom django.contrib import admin\nfrom django.urls import include, path\nfrom django.views.generic import RedirectView\n\nfrom registrar import views\nfrom registrar.views.application import Step\nfrom registrar.views.utility import always_404\nfrom api.views import available, get_current_federal, get_current_full\n\n\nAPPLICATION_NAMESPACE = views.ApplicationWizard.URL_NAMESPACE\napplication_urls = [\n path(\"\", views.ApplicationWizard.as_view(), name=\"\"),\n path(\"finished/\", views.Finished.as_view(), name=\"finished\"),\n]\n\n# dynamically generate the other application_urls\nfor step, view in [\n # add/remove steps here\n (Step.ORGANIZATION_TYPE, views.OrganizationType),\n (Step.TRIBAL_GOVERNMENT, views.TribalGovernment),\n (Step.ORGANIZATION_FEDERAL, views.OrganizationFederal),\n (Step.ORGANIZATION_ELECTION, views.OrganizationElection),\n (Step.ORGANIZATION_CONTACT, views.OrganizationContact),\n (Step.ABOUT_YOUR_ORGANIZATION, views.AboutYourOrganization),\n (Step.AUTHORIZING_OFFICIAL, views.AuthorizingOfficial),\n (Step.CURRENT_SITES, views.CurrentSites),\n (Step.DOTGOV_DOMAIN, views.DotgovDomain),\n (Step.PURPOSE, views.Purpose),\n (Step.YOUR_CONTACT, views.YourContact),\n (Step.OTHER_CONTACTS, views.OtherContacts),\n (Step.NO_OTHER_CONTACTS, views.NoOtherContacts),\n (Step.ANYTHING_ELSE, views.AnythingElse),\n (Step.REQUIREMENTS, views.Requirements),\n (Step.REVIEW, views.Review),\n]:\n application_urls.append(path(f\"{step}/\", view.as_view(), name=step))\n\n\nurlpatterns = [\n path(\"\", views.index, name=\"home\"),\n path(\n \"admin/logout/\",\n RedirectView.as_view(pattern_name=\"logout\", permanent=False),\n ),\n path(\"admin/\", admin.site.urls),\n path(\n \"application/<id>/edit/\",\n views.ApplicationWizard.as_view(),\n name=views.ApplicationWizard.EDIT_URL_NAME,\n ),\n path(\n \"application/<int:pk>\",\n views.ApplicationStatus.as_view(),\n name=\"application-status\",\n ),\n path(\n \"application/<int:pk>/withdraw\",\n views.ApplicationWithdrawConfirmation.as_view(),\n name=\"application-withdraw-confirmation\",\n ),\n path(\n \"application/<int:pk>/withdrawconfirmed\",\n views.ApplicationWithdrawn.as_view(),\n name=\"application-withdrawn\",\n ),\n path(\"health/\", views.health),\n path(\"openid/\", include(\"djangooidc.urls\")),\n path(\"register/\", include((application_urls, APPLICATION_NAMESPACE))),\n path(\"api/v1/available/<domain>\", available, name=\"available\"),\n path(\"api/v1/get-report/current-federal\", get_current_federal, name=\"get-current-federal\"),\n path(\"api/v1/get-report/current-full\", get_current_full, name=\"get-current-full\"),\n path(\n \"todo\",\n lambda r: always_404(r, \"We forgot to include this link, sorry.\"),\n name=\"todo\",\n 
),\n path(\"domain/<int:pk>\", views.DomainView.as_view(), name=\"domain\"),\n path(\"domain/<int:pk>/users\", views.DomainUsersView.as_view(), name=\"domain-users\"),\n path(\n \"domain/<int:pk>/dns\",\n views.DomainDNSView.as_view(),\n name=\"domain-dns\",\n ),\n path(\n \"domain/<int:pk>/dns/nameservers\",\n views.DomainNameserversView.as_view(),\n name=\"domain-dns-nameservers\",\n ),\n path(\n \"domain/<int:pk>/dns/dnssec\",\n views.DomainDNSSECView.as_view(),\n name=\"domain-dns-dnssec\",\n ),\n path(\n \"domain/<int:pk>/dns/dnssec/dsdata\",\n views.DomainDsDataView.as_view(),\n name=\"domain-dns-dnssec-dsdata\",\n ),\n path(\n \"domain/<int:pk>/your-contact-information\",\n views.DomainYourContactInformationView.as_view(),\n name=\"domain-your-contact-information\",\n ),\n path(\n \"domain/<int:pk>/org-name-address\",\n views.DomainOrgNameAddressView.as_view(),\n name=\"domain-org-name-address\",\n ),\n path(\n \"domain/<int:pk>/authorizing-official\",\n views.DomainAuthorizingOfficialView.as_view(),\n name=\"domain-authorizing-official\",\n ),\n path(\n \"domain/<int:pk>/security-email\",\n views.DomainSecurityEmailView.as_view(),\n name=\"domain-security-email\",\n ),\n path(\n \"domain/<int:pk>/users/add\",\n views.DomainAddUserView.as_view(),\n name=\"domain-users-add\",\n ),\n path(\n \"invitation/<int:pk>/delete\",\n views.DomainInvitationDeleteView.as_view(http_method_names=[\"post\"]),\n name=\"invitation-delete\",\n ),\n]\n\n# we normally would guard these with `if settings.DEBUG` but tests run with\n# DEBUG = False even when these apps have been loaded because settings.DEBUG\n# was actually True. Instead, let's add these URLs any time we are able to\n# import the debug toolbar package.\ntry:\n import debug_toolbar # type: ignore\n\n urlpatterns += [path(\"__debug__/\", include(debug_toolbar.urls))]\nexcept ImportError:\n pass\n", "path": "src/registrar/config/urls.py"}]}
| 3,760 | 275 |
gh_patches_debug_37009
|
rasdani/github-patches
|
git_diff
|
mozilla__pontoon-2846
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refine pretranslation logic for strings with rejected suggestions
Currently, a string with suggestions is ignored by pretranslation.
We should:
* Pretranslate if all suggestions are submitted by users and rejected.
* Not pretranslate if a pretranslation was already rejected.
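A minimal sketch of one way to express both rules when collecting the locale/entity pairs to skip; the function name and signature are invented for illustration, but the combined filter mirrors the fix that was accepted (see the diff further down this record):

```python
from django.db.models import Q

from pontoon.base.models import Translation, User


def locale_entity_pairs_to_skip(locales, entities, service_emails):
    """Pairs that should not be pretranslated (again).

    A pair is skipped if it has any non-rejected translation, or any
    translation authored by a pretranslation service (TM/GT), even a
    rejected one; rejected user suggestions alone no longer block.
    """
    pt_authors = [User.objects.get(email=email) for email in service_emails]
    return (
        Translation.objects.filter(locale__in=locales, entity__in=entities)
        .filter(Q(rejected=False) | Q(user__in=pt_authors))
        .values_list("locale_id", "entity_id")
        .distinct()
    )
```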
</issue>
<code>
[start of pontoon/pretranslation/pretranslate.py]
1 import logging
2 import operator
3 import re
4
5 from django.db.models import CharField, Value as V
6 from django.db.models.functions import Concat
7
8 from fluent.syntax import FluentParser, FluentSerializer
9 from functools import reduce
10
11 from pontoon.base.models import User, TranslatedResource
12 from pontoon.base.fluent import FlatTransformer, create_locale_plural_variants
13 from pontoon.machinery.utils import (
14 get_google_translate_data,
15 get_translation_memory_data,
16 )
17
18
19 log = logging.getLogger(__name__)
20
21 parser = FluentParser()
22 serializer = FluentSerializer()
23
24
25 class PretranslationTransformer(FlatTransformer):
26 def __init__(self, locale):
27 self.services = []
28 self.locale = locale
29
30 def visit_SelectExpression(self, node):
31 create_locale_plural_variants(node, self.locale)
32 return self.generic_visit(node)
33
34 def visit_TextElement(self, node):
35 # Machine translation treats each line as separate sentence,
36 # hence we replace newline characters with spaces.
37 source = node.value.replace("\n", " ")
38
39 pretranslation, service = get_pretranslated_data(source, self.locale)
40
41 if pretranslation is None:
42 raise ValueError(
43 f"Pretranslation for `{source}` to {self.locale.code} not available."
44 )
45
46 node.value = pretranslation
47 self.services.append(service)
48 return node
49
50
51 def get_pretranslations(entity, locale):
52 """
53 Get pretranslations for the entity-locale pair using internal translation memory and
54 Google's machine translation.
55
56 For Fluent strings, uplift SelectExpressions, serialize Placeables as TextElements
57 and then only pretranslate TextElements. Set the most frequent TextElement
58 pretranslation author as the author of the entire pretranslation.
59
60 :arg Entity entity: the Entity object
61 :arg Locale locale: the Locale object
62
63 :returns: a list of tuples, consisting of:
64 - a pretranslation of the entity
65 - a plural form
66 - a user (representing TM or GT service)
67 """
68 source = entity.string
69 services = {
70 "tm": User.objects.get(email="[email protected]"),
71 "gt": User.objects.get(email="[email protected]"),
72 }
73
74 if entity.resource.format == "ftl":
75 source_ast = parser.parse_entry(source)
76 pt_transformer = PretranslationTransformer(locale)
77
78 try:
79 pretranslated_ast = pt_transformer.visit(source_ast)
80 except ValueError as e:
81 log.info(f"Fluent pretranslation error: {e}")
82 return []
83
84 pretranslation = serializer.serialize_entry(pretranslated_ast)
85
86 authors = [services[service] for service in pt_transformer.services]
87 author = max(set(authors), key=authors.count) if authors else services["tm"]
88
89 return [(pretranslation, None, author)]
90
91 else:
92 pretranslation, service = get_pretranslated_data(source, locale)
93
94 if pretranslation is None:
95 return []
96
97 author = services[service]
98 if entity.string_plural == "":
99 return [(pretranslation, None, author)]
100 else:
101 plural_forms = range(0, locale.nplurals or 1)
102 return [
103 (pretranslation, plural_form, author) for plural_form in plural_forms
104 ]
105
106
107 def get_pretranslated_data(source, locale):
108 # Empty strings do not need translation
109 if re.search("^\\s*$", source):
110 return source, "tm"
111
112 # Try to get matches from Translation Memory
113 tm_response = get_translation_memory_data(text=source, locale=locale)
114 tm_perfect = [t for t in tm_response if int(t["quality"]) == 100]
115 if tm_perfect:
116 return tm_perfect[0]["target"], "tm"
117
118 # Fetch from Google Translate
119 elif locale.google_translate_code:
120 gt_response = get_google_translate_data(text=source, locale=locale)
121 if gt_response["status"]:
122 return gt_response["translation"], "gt"
123
124 return None, None
125
126
127 def update_changed_instances(tr_filter, tr_dict, translations):
128 """
129 Update the latest activity and stats for changed Locales, ProjectLocales
130 & TranslatedResources
131 """
132 tr_filter = tuple(tr_filter)
133 # Combine all generated filters with an OK operator.
134 # `operator.ior` is the '|' Python operator, which turns into a logical OR
135 # when used between django ORM query objects.
136 tr_query = reduce(operator.ior, tr_filter)
137
138 translatedresources = TranslatedResource.objects.filter(tr_query).annotate(
139 locale_resource=Concat(
140 "locale_id", V("-"), "resource_id", output_field=CharField()
141 )
142 )
143
144 translatedresources.update_stats()
145
146 for tr in translatedresources:
147 index = tr_dict[tr.locale_resource]
148 translation = translations[index]
149 translation.update_latest_translation()
150
[end of pontoon/pretranslation/pretranslate.py]
[start of pontoon/pretranslation/__init__.py]
[end of pontoon/pretranslation/__init__.py]
[start of pontoon/pretranslation/tasks.py]
1 import logging
2
3 from django.db.models import Q, CharField, Value as V
4 from django.db.models.functions import Concat
5 from django.conf import settings
6 from pontoon.base.models import (
7 Project,
8 Entity,
9 TranslatedResource,
10 Translation,
11 )
12 from pontoon.actionlog.models import ActionLog
13 from pontoon.pretranslation.pretranslate import (
14 get_pretranslations,
15 update_changed_instances,
16 )
17 from pontoon.base.tasks import PontoonTask
18 from pontoon.sync.core import serial_task
19 from pontoon.checks.utils import bulk_run_checks
20
21
22 log = logging.getLogger(__name__)
23
24
25 @serial_task(settings.SYNC_TASK_TIMEOUT, base=PontoonTask, lock_key="project={0}")
26 def pretranslate(self, project_pk, locales=None, entities=None):
27 """
28 Identifies strings without any translations and any suggestions.
29 Engages TheAlgorithm (bug 1552796) to gather pretranslations.
30 Stores pretranslations as suggestions (approved=False) to DB.
31
32 :arg project_pk: the pk of the project to be pretranslated
33 :arg Queryset locales: the locales for the project to be pretranslated
34 :arg Queryset entites: the entities for the project to be pretranslated
35
36 :returns: None
37 """
38 project = Project.objects.get(pk=project_pk)
39
40 if not project.pretranslation_enabled:
41 log.info(f"Pretranslation not enabled for project {project.name}")
42 return
43
44 if locales:
45 locales = project.locales.filter(pk__in=locales)
46 else:
47 locales = project.locales
48
49 locales = locales.filter(
50 project_locale__project=project,
51 project_locale__pretranslation_enabled=True,
52 project_locale__readonly=False,
53 )
54
55 if not locales:
56 log.info(
57 f"Pretranslation not enabled for any locale within project {project.name}"
58 )
59 return
60
61 log.info(f"Fetching pretranslations for project {project.name} started")
62
63 if not entities:
64 entities = Entity.objects.filter(
65 resource__project=project,
66 obsolete=False,
67 )
68
69 entities = entities.prefetch_related("resource")
70
71 # get available TranslatedResource pairs
72 tr_pairs = (
73 TranslatedResource.objects.filter(
74 resource__project=project,
75 locale__in=locales,
76 )
77 .annotate(
78 locale_resource=Concat(
79 "locale_id", V("-"), "resource_id", output_field=CharField()
80 )
81 )
82 .values_list("locale_resource", flat=True)
83 .distinct()
84 )
85
86 # Fetch all distinct locale-entity pairs for which translation exists
87 translated_entities = (
88 Translation.objects.filter(
89 locale__in=locales,
90 entity__in=entities,
91 )
92 .annotate(
93 locale_entity=Concat(
94 "locale_id", V("-"), "entity_id", output_field=CharField()
95 )
96 )
97 .values_list("locale_entity", flat=True)
98 .distinct()
99 )
100
101 translated_entities = list(translated_entities)
102
103 translations = []
104
105 # To keep track of changed TranslatedResources and their latest_translation
106 tr_dict = {}
107
108 tr_filter = []
109 index = -1
110
111 for locale in locales:
112 log.info(f"Fetching pretranslations for locale {locale.code} started")
113 for entity in entities:
114 locale_entity = f"{locale.id}-{entity.id}"
115 locale_resource = f"{locale.id}-{entity.resource.id}"
116 if locale_entity in translated_entities or locale_resource not in tr_pairs:
117 continue
118
119 pretranslations = get_pretranslations(entity, locale)
120
121 if not pretranslations:
122 continue
123
124 for string, plural_form, user in pretranslations:
125 t = Translation(
126 entity=entity,
127 locale=locale,
128 string=string,
129 user=user,
130 approved=False,
131 pretranslated=True,
132 active=True,
133 plural_form=plural_form,
134 )
135
136 index += 1
137 translations.append(t)
138
139 if locale_resource not in tr_dict:
140 tr_dict[locale_resource] = index
141
142 # Add query for fetching respective TranslatedResource.
143 tr_filter.append(
144 Q(locale__id=locale.id) & Q(resource__id=entity.resource.id)
145 )
146
147 # Update the latest translation index
148 tr_dict[locale_resource] = index
149
150 log.info(f"Fetching pretranslations for locale {locale.code} done")
151
152 if len(translations) == 0:
153 return
154
155 translations = Translation.objects.bulk_create(translations)
156
157 # Log creating actions
158 actions_to_log = [
159 ActionLog(
160 action_type=ActionLog.ActionType.TRANSLATION_CREATED,
161 performed_by=t.user,
162 translation=t,
163 )
164 for t in translations
165 ]
166
167 ActionLog.objects.bulk_create(actions_to_log)
168
169 # Run checks on all translations
170 translation_pks = {translation.pk for translation in translations}
171 bulk_run_checks(Translation.objects.for_checks().filter(pk__in=translation_pks))
172
173 # Mark translations as changed
174 changed_translations = Translation.objects.filter(
175 pk__in=translation_pks,
176 # Do not sync translations with errors and warnings
177 errors__isnull=True,
178 warnings__isnull=True,
179 )
180 changed_translations.bulk_mark_changed()
181
182 # Update latest activity and stats for changed instances.
183 update_changed_instances(tr_filter, tr_dict, translations)
184
185 log.info(f"Fetching pretranslations for project {project.name} done")
186
[end of pontoon/pretranslation/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pontoon/pretranslation/__init__.py b/pontoon/pretranslation/__init__.py
--- a/pontoon/pretranslation/__init__.py
+++ b/pontoon/pretranslation/__init__.py
@@ -0,0 +1,4 @@
+AUTHORS = {
+ "tm": "[email protected]",
+ "gt": "[email protected]",
+}
diff --git a/pontoon/pretranslation/pretranslate.py b/pontoon/pretranslation/pretranslate.py
--- a/pontoon/pretranslation/pretranslate.py
+++ b/pontoon/pretranslation/pretranslate.py
@@ -14,6 +14,7 @@
get_google_translate_data,
get_translation_memory_data,
)
+from pontoon.pretranslation import AUTHORS
log = logging.getLogger(__name__)
@@ -66,10 +67,7 @@
- a user (representing TM or GT service)
"""
source = entity.string
- services = {
- "tm": User.objects.get(email="[email protected]"),
- "gt": User.objects.get(email="[email protected]"),
- }
+ services = {k: User.objects.get(email=email) for k, email in AUTHORS.items()}
if entity.resource.format == "ftl":
source_ast = parser.parse_entry(source)
diff --git a/pontoon/pretranslation/tasks.py b/pontoon/pretranslation/tasks.py
--- a/pontoon/pretranslation/tasks.py
+++ b/pontoon/pretranslation/tasks.py
@@ -8,8 +8,10 @@
Entity,
TranslatedResource,
Translation,
+ User,
)
from pontoon.actionlog.models import ActionLog
+from pontoon.pretranslation import AUTHORS
from pontoon.pretranslation.pretranslate import (
get_pretranslations,
update_changed_instances,
@@ -68,7 +70,7 @@
entities = entities.prefetch_related("resource")
- # get available TranslatedResource pairs
+ # Fetch all available locale-resource pairs (TranslatedResource objects)
tr_pairs = (
TranslatedResource.objects.filter(
resource__project=project,
@@ -83,12 +85,14 @@
.distinct()
)
- # Fetch all distinct locale-entity pairs for which translation exists
+ # Fetch all locale-entity pairs with non-rejected or pretranslated translations
+ pt_authors = [User.objects.get(email=email) for email in AUTHORS.values()]
translated_entities = (
Translation.objects.filter(
locale__in=locales,
entity__in=entities,
)
+ .filter(Q(rejected=False) | Q(user__in=pt_authors))
.annotate(
locale_entity=Concat(
"locale_id", V("-"), "entity_id", output_field=CharField()
|
{"golden_diff": "diff --git a/pontoon/pretranslation/__init__.py b/pontoon/pretranslation/__init__.py\n--- a/pontoon/pretranslation/__init__.py\n+++ b/pontoon/pretranslation/__init__.py\n@@ -0,0 +1,4 @@\n+AUTHORS = {\n+ \"tm\": \"[email protected]\",\n+ \"gt\": \"[email protected]\",\n+}\ndiff --git a/pontoon/pretranslation/pretranslate.py b/pontoon/pretranslation/pretranslate.py\n--- a/pontoon/pretranslation/pretranslate.py\n+++ b/pontoon/pretranslation/pretranslate.py\n@@ -14,6 +14,7 @@\n get_google_translate_data,\n get_translation_memory_data,\n )\n+from pontoon.pretranslation import AUTHORS\n \n \n log = logging.getLogger(__name__)\n@@ -66,10 +67,7 @@\n - a user (representing TM or GT service)\n \"\"\"\n source = entity.string\n- services = {\n- \"tm\": User.objects.get(email=\"[email protected]\"),\n- \"gt\": User.objects.get(email=\"[email protected]\"),\n- }\n+ services = {k: User.objects.get(email=email) for k, email in AUTHORS.items()}\n \n if entity.resource.format == \"ftl\":\n source_ast = parser.parse_entry(source)\ndiff --git a/pontoon/pretranslation/tasks.py b/pontoon/pretranslation/tasks.py\n--- a/pontoon/pretranslation/tasks.py\n+++ b/pontoon/pretranslation/tasks.py\n@@ -8,8 +8,10 @@\n Entity,\n TranslatedResource,\n Translation,\n+ User,\n )\n from pontoon.actionlog.models import ActionLog\n+from pontoon.pretranslation import AUTHORS\n from pontoon.pretranslation.pretranslate import (\n get_pretranslations,\n update_changed_instances,\n@@ -68,7 +70,7 @@\n \n entities = entities.prefetch_related(\"resource\")\n \n- # get available TranslatedResource pairs\n+ # Fetch all available locale-resource pairs (TranslatedResource objects)\n tr_pairs = (\n TranslatedResource.objects.filter(\n resource__project=project,\n@@ -83,12 +85,14 @@\n .distinct()\n )\n \n- # Fetch all distinct locale-entity pairs for which translation exists\n+ # Fetch all locale-entity pairs with non-rejected or pretranslated translations\n+ pt_authors = [User.objects.get(email=email) for email in AUTHORS.values()]\n translated_entities = (\n Translation.objects.filter(\n locale__in=locales,\n entity__in=entities,\n )\n+ .filter(Q(rejected=False) | Q(user__in=pt_authors))\n .annotate(\n locale_entity=Concat(\n \"locale_id\", V(\"-\"), \"entity_id\", output_field=CharField()\n", "issue": "Refine pretranslation logic for strings with rejected suggestions\nCurrently, a string with suggestions is ignored by pretranslation.\r\n\r\nWe should:\r\n* Pretranslate if all suggestions are submitted by users and rejected. 
\r\n* Not pretranslate if a pretranslation was already rejected.\n", "before_files": [{"content": "import logging\nimport operator\nimport re\n\nfrom django.db.models import CharField, Value as V\nfrom django.db.models.functions import Concat\n\nfrom fluent.syntax import FluentParser, FluentSerializer\nfrom functools import reduce\n\nfrom pontoon.base.models import User, TranslatedResource\nfrom pontoon.base.fluent import FlatTransformer, create_locale_plural_variants\nfrom pontoon.machinery.utils import (\n get_google_translate_data,\n get_translation_memory_data,\n)\n\n\nlog = logging.getLogger(__name__)\n\nparser = FluentParser()\nserializer = FluentSerializer()\n\n\nclass PretranslationTransformer(FlatTransformer):\n def __init__(self, locale):\n self.services = []\n self.locale = locale\n\n def visit_SelectExpression(self, node):\n create_locale_plural_variants(node, self.locale)\n return self.generic_visit(node)\n\n def visit_TextElement(self, node):\n # Machine translation treats each line as separate sentence,\n # hence we replace newline characters with spaces.\n source = node.value.replace(\"\\n\", \" \")\n\n pretranslation, service = get_pretranslated_data(source, self.locale)\n\n if pretranslation is None:\n raise ValueError(\n f\"Pretranslation for `{source}` to {self.locale.code} not available.\"\n )\n\n node.value = pretranslation\n self.services.append(service)\n return node\n\n\ndef get_pretranslations(entity, locale):\n \"\"\"\n Get pretranslations for the entity-locale pair using internal translation memory and\n Google's machine translation.\n\n For Fluent strings, uplift SelectExpressions, serialize Placeables as TextElements\n and then only pretranslate TextElements. Set the most frequent TextElement\n pretranslation author as the author of the entire pretranslation.\n\n :arg Entity entity: the Entity object\n :arg Locale locale: the Locale object\n\n :returns: a list of tuples, consisting of:\n - a pretranslation of the entity\n - a plural form\n - a user (representing TM or GT service)\n \"\"\"\n source = entity.string\n services = {\n \"tm\": User.objects.get(email=\"[email protected]\"),\n \"gt\": User.objects.get(email=\"[email protected]\"),\n }\n\n if entity.resource.format == \"ftl\":\n source_ast = parser.parse_entry(source)\n pt_transformer = PretranslationTransformer(locale)\n\n try:\n pretranslated_ast = pt_transformer.visit(source_ast)\n except ValueError as e:\n log.info(f\"Fluent pretranslation error: {e}\")\n return []\n\n pretranslation = serializer.serialize_entry(pretranslated_ast)\n\n authors = [services[service] for service in pt_transformer.services]\n author = max(set(authors), key=authors.count) if authors else services[\"tm\"]\n\n return [(pretranslation, None, author)]\n\n else:\n pretranslation, service = get_pretranslated_data(source, locale)\n\n if pretranslation is None:\n return []\n\n author = services[service]\n if entity.string_plural == \"\":\n return [(pretranslation, None, author)]\n else:\n plural_forms = range(0, locale.nplurals or 1)\n return [\n (pretranslation, plural_form, author) for plural_form in plural_forms\n ]\n\n\ndef get_pretranslated_data(source, locale):\n # Empty strings do not need translation\n if re.search(\"^\\\\s*$\", source):\n return source, \"tm\"\n\n # Try to get matches from Translation Memory\n tm_response = get_translation_memory_data(text=source, locale=locale)\n tm_perfect = [t for t in tm_response if int(t[\"quality\"]) == 100]\n if tm_perfect:\n return tm_perfect[0][\"target\"], \"tm\"\n\n # Fetch 
from Google Translate\n elif locale.google_translate_code:\n gt_response = get_google_translate_data(text=source, locale=locale)\n if gt_response[\"status\"]:\n return gt_response[\"translation\"], \"gt\"\n\n return None, None\n\n\ndef update_changed_instances(tr_filter, tr_dict, translations):\n \"\"\"\n Update the latest activity and stats for changed Locales, ProjectLocales\n & TranslatedResources\n \"\"\"\n tr_filter = tuple(tr_filter)\n # Combine all generated filters with an OK operator.\n # `operator.ior` is the '|' Python operator, which turns into a logical OR\n # when used between django ORM query objects.\n tr_query = reduce(operator.ior, tr_filter)\n\n translatedresources = TranslatedResource.objects.filter(tr_query).annotate(\n locale_resource=Concat(\n \"locale_id\", V(\"-\"), \"resource_id\", output_field=CharField()\n )\n )\n\n translatedresources.update_stats()\n\n for tr in translatedresources:\n index = tr_dict[tr.locale_resource]\n translation = translations[index]\n translation.update_latest_translation()\n", "path": "pontoon/pretranslation/pretranslate.py"}, {"content": "", "path": "pontoon/pretranslation/__init__.py"}, {"content": "import logging\n\nfrom django.db.models import Q, CharField, Value as V\nfrom django.db.models.functions import Concat\nfrom django.conf import settings\nfrom pontoon.base.models import (\n Project,\n Entity,\n TranslatedResource,\n Translation,\n)\nfrom pontoon.actionlog.models import ActionLog\nfrom pontoon.pretranslation.pretranslate import (\n get_pretranslations,\n update_changed_instances,\n)\nfrom pontoon.base.tasks import PontoonTask\nfrom pontoon.sync.core import serial_task\nfrom pontoon.checks.utils import bulk_run_checks\n\n\nlog = logging.getLogger(__name__)\n\n\n@serial_task(settings.SYNC_TASK_TIMEOUT, base=PontoonTask, lock_key=\"project={0}\")\ndef pretranslate(self, project_pk, locales=None, entities=None):\n \"\"\"\n Identifies strings without any translations and any suggestions.\n Engages TheAlgorithm (bug 1552796) to gather pretranslations.\n Stores pretranslations as suggestions (approved=False) to DB.\n\n :arg project_pk: the pk of the project to be pretranslated\n :arg Queryset locales: the locales for the project to be pretranslated\n :arg Queryset entites: the entities for the project to be pretranslated\n\n :returns: None\n \"\"\"\n project = Project.objects.get(pk=project_pk)\n\n if not project.pretranslation_enabled:\n log.info(f\"Pretranslation not enabled for project {project.name}\")\n return\n\n if locales:\n locales = project.locales.filter(pk__in=locales)\n else:\n locales = project.locales\n\n locales = locales.filter(\n project_locale__project=project,\n project_locale__pretranslation_enabled=True,\n project_locale__readonly=False,\n )\n\n if not locales:\n log.info(\n f\"Pretranslation not enabled for any locale within project {project.name}\"\n )\n return\n\n log.info(f\"Fetching pretranslations for project {project.name} started\")\n\n if not entities:\n entities = Entity.objects.filter(\n resource__project=project,\n obsolete=False,\n )\n\n entities = entities.prefetch_related(\"resource\")\n\n # get available TranslatedResource pairs\n tr_pairs = (\n TranslatedResource.objects.filter(\n resource__project=project,\n locale__in=locales,\n )\n .annotate(\n locale_resource=Concat(\n \"locale_id\", V(\"-\"), \"resource_id\", output_field=CharField()\n )\n )\n .values_list(\"locale_resource\", flat=True)\n .distinct()\n )\n\n # Fetch all distinct locale-entity pairs for which translation exists\n 
translated_entities = (\n Translation.objects.filter(\n locale__in=locales,\n entity__in=entities,\n )\n .annotate(\n locale_entity=Concat(\n \"locale_id\", V(\"-\"), \"entity_id\", output_field=CharField()\n )\n )\n .values_list(\"locale_entity\", flat=True)\n .distinct()\n )\n\n translated_entities = list(translated_entities)\n\n translations = []\n\n # To keep track of changed TranslatedResources and their latest_translation\n tr_dict = {}\n\n tr_filter = []\n index = -1\n\n for locale in locales:\n log.info(f\"Fetching pretranslations for locale {locale.code} started\")\n for entity in entities:\n locale_entity = f\"{locale.id}-{entity.id}\"\n locale_resource = f\"{locale.id}-{entity.resource.id}\"\n if locale_entity in translated_entities or locale_resource not in tr_pairs:\n continue\n\n pretranslations = get_pretranslations(entity, locale)\n\n if not pretranslations:\n continue\n\n for string, plural_form, user in pretranslations:\n t = Translation(\n entity=entity,\n locale=locale,\n string=string,\n user=user,\n approved=False,\n pretranslated=True,\n active=True,\n plural_form=plural_form,\n )\n\n index += 1\n translations.append(t)\n\n if locale_resource not in tr_dict:\n tr_dict[locale_resource] = index\n\n # Add query for fetching respective TranslatedResource.\n tr_filter.append(\n Q(locale__id=locale.id) & Q(resource__id=entity.resource.id)\n )\n\n # Update the latest translation index\n tr_dict[locale_resource] = index\n\n log.info(f\"Fetching pretranslations for locale {locale.code} done\")\n\n if len(translations) == 0:\n return\n\n translations = Translation.objects.bulk_create(translations)\n\n # Log creating actions\n actions_to_log = [\n ActionLog(\n action_type=ActionLog.ActionType.TRANSLATION_CREATED,\n performed_by=t.user,\n translation=t,\n )\n for t in translations\n ]\n\n ActionLog.objects.bulk_create(actions_to_log)\n\n # Run checks on all translations\n translation_pks = {translation.pk for translation in translations}\n bulk_run_checks(Translation.objects.for_checks().filter(pk__in=translation_pks))\n\n # Mark translations as changed\n changed_translations = Translation.objects.filter(\n pk__in=translation_pks,\n # Do not sync translations with errors and warnings\n errors__isnull=True,\n warnings__isnull=True,\n )\n changed_translations.bulk_mark_changed()\n\n # Update latest activity and stats for changed instances.\n update_changed_instances(tr_filter, tr_dict, translations)\n\n log.info(f\"Fetching pretranslations for project {project.name} done\")\n", "path": "pontoon/pretranslation/tasks.py"}]}
| 3,628 | 611 |
gh_patches_debug_15719
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-20731
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gfycat.com url changes
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2019.04.17**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/ytdl-org/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/ytdl-org/youtube-dl#faq) and [BUGS](https://github.com/ytdl-org/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
gfycat.com has added dashes to some URLs, e.g. [https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball](https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball), causing an HTTP error. 
This could be fixed by excluding dashes from the ID group in the extractor's `_VALID_URL` pattern, as sketched below.
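A small sketch of the pattern change using the URL from the report; the adjusted character class matches the fix that was eventually applied (see the diff further down this record), and it matters because the extractor passes the captured ID straight to `api.gfycat.com`:

```python
import re

# Pattern currently in GfycatIE: the ID group stops only at /, ? or #,
# so it captures the whole slug including the "-baseball" suffix.
OLD = r'https?://(?:www\.)?gfycat\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'
# Adjusted pattern: the ID group also stops at the first dash.
NEW = r'https?://(?:www\.)?gfycat\.com/(?:ifr/|gifs/detail/)?(?P<id>[^-/?#]+)'

url = 'https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball'
print(re.match(OLD, url).group('id'))  # acceptablehappygoluckyharborporpoise-baseball
print(re.match(NEW, url).group('id'))  # acceptablehappygoluckyharborporpoise
```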
</issue>
<code>
[start of youtube_dl/extractor/gfycat.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 from .common import InfoExtractor
5 from ..utils import (
6 int_or_none,
7 float_or_none,
8 qualities,
9 ExtractorError,
10 )
11
12
13 class GfycatIE(InfoExtractor):
14 _VALID_URL = r'https?://(?:www\.)?gfycat\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'
15 _TESTS = [{
16 'url': 'http://gfycat.com/DeadlyDecisiveGermanpinscher',
17 'info_dict': {
18 'id': 'DeadlyDecisiveGermanpinscher',
19 'ext': 'mp4',
20 'title': 'Ghost in the Shell',
21 'timestamp': 1410656006,
22 'upload_date': '20140914',
23 'uploader': 'anonymous',
24 'duration': 10.4,
25 'view_count': int,
26 'like_count': int,
27 'dislike_count': int,
28 'categories': list,
29 'age_limit': 0,
30 }
31 }, {
32 'url': 'http://gfycat.com/ifr/JauntyTimelyAmazontreeboa',
33 'info_dict': {
34 'id': 'JauntyTimelyAmazontreeboa',
35 'ext': 'mp4',
36 'title': 'JauntyTimelyAmazontreeboa',
37 'timestamp': 1411720126,
38 'upload_date': '20140926',
39 'uploader': 'anonymous',
40 'duration': 3.52,
41 'view_count': int,
42 'like_count': int,
43 'dislike_count': int,
44 'categories': list,
45 'age_limit': 0,
46 }
47 }, {
48 'url': 'https://gfycat.com/gifs/detail/UnconsciousLankyIvorygull',
49 'only_matching': True
50 }]
51
52 def _real_extract(self, url):
53 video_id = self._match_id(url)
54
55 gfy = self._download_json(
56 'https://api.gfycat.com/v1/gfycats/%s' % video_id,
57 video_id, 'Downloading video info')
58 if 'error' in gfy:
59 raise ExtractorError('Gfycat said: ' + gfy['error'], expected=True)
60 gfy = gfy['gfyItem']
61
62 title = gfy.get('title') or gfy['gfyName']
63 description = gfy.get('description')
64 timestamp = int_or_none(gfy.get('createDate'))
65 uploader = gfy.get('userName')
66 view_count = int_or_none(gfy.get('views'))
67 like_count = int_or_none(gfy.get('likes'))
68 dislike_count = int_or_none(gfy.get('dislikes'))
69 age_limit = 18 if gfy.get('nsfw') == '1' else 0
70
71 width = int_or_none(gfy.get('width'))
72 height = int_or_none(gfy.get('height'))
73 fps = int_or_none(gfy.get('frameRate'))
74 num_frames = int_or_none(gfy.get('numFrames'))
75
76 duration = float_or_none(num_frames, fps) if num_frames and fps else None
77
78 categories = gfy.get('tags') or gfy.get('extraLemmas') or []
79
80 FORMATS = ('gif', 'webm', 'mp4')
81 quality = qualities(FORMATS)
82
83 formats = []
84 for format_id in FORMATS:
85 video_url = gfy.get('%sUrl' % format_id)
86 if not video_url:
87 continue
88 filesize = int_or_none(gfy.get('%sSize' % format_id))
89 formats.append({
90 'url': video_url,
91 'format_id': format_id,
92 'width': width,
93 'height': height,
94 'fps': fps,
95 'filesize': filesize,
96 'quality': quality(format_id),
97 })
98 self._sort_formats(formats)
99
100 return {
101 'id': video_id,
102 'title': title,
103 'description': description,
104 'timestamp': timestamp,
105 'uploader': uploader,
106 'duration': duration,
107 'view_count': view_count,
108 'like_count': like_count,
109 'dislike_count': dislike_count,
110 'categories': categories,
111 'age_limit': age_limit,
112 'formats': formats,
113 }
114
[end of youtube_dl/extractor/gfycat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/youtube_dl/extractor/gfycat.py b/youtube_dl/extractor/gfycat.py
--- a/youtube_dl/extractor/gfycat.py
+++ b/youtube_dl/extractor/gfycat.py
@@ -11,7 +11,7 @@
class GfycatIE(InfoExtractor):
- _VALID_URL = r'https?://(?:www\.)?gfycat\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'
+ _VALID_URL = r'https?://(?:www\.)?gfycat\.com/(?:ifr/|gifs/detail/)?(?P<id>[^-/?#]+)'
_TESTS = [{
'url': 'http://gfycat.com/DeadlyDecisiveGermanpinscher',
'info_dict': {
@@ -47,6 +47,9 @@
}, {
'url': 'https://gfycat.com/gifs/detail/UnconsciousLankyIvorygull',
'only_matching': True
+ }, {
+ 'url': 'https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball',
+ 'only_matching': True
}]
def _real_extract(self, url):
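As a quick, illustrative check of the tightened pattern (the dashed URL comes from the added `only_matching` test; the expected id is an assumption based on how the API lookup uses it):

```python
import re

_VALID_URL = r'https?://(?:www\.)?gfycat\.com/(?:ifr/|gifs/detail/)?(?P<id>[^-/?#]+)'
url = 'https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball'
# The id group now stops at the first dash, so only the gfy name reaches the API.
print(re.match(_VALID_URL, url).group('id'))
# -> 'acceptablehappygoluckyharborporpoise'
```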
|
{"golden_diff": "diff --git a/youtube_dl/extractor/gfycat.py b/youtube_dl/extractor/gfycat.py\n--- a/youtube_dl/extractor/gfycat.py\n+++ b/youtube_dl/extractor/gfycat.py\n@@ -11,7 +11,7 @@\n \n \n class GfycatIE(InfoExtractor):\n- _VALID_URL = r'https?://(?:www\\.)?gfycat\\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'\n+ _VALID_URL = r'https?://(?:www\\.)?gfycat\\.com/(?:ifr/|gifs/detail/)?(?P<id>[^-/?#]+)'\n _TESTS = [{\n 'url': 'http://gfycat.com/DeadlyDecisiveGermanpinscher',\n 'info_dict': {\n@@ -47,6 +47,9 @@\n }, {\n 'url': 'https://gfycat.com/gifs/detail/UnconsciousLankyIvorygull',\n 'only_matching': True\n+ }, {\n+ 'url': 'https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball',\n+ 'only_matching': True\n }]\n \n def _real_extract(self, url):\n", "issue": "gfycat.com url changes\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2019.04.17**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/ytdl-org/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/ytdl-org/youtube-dl#faq) and [BUGS](https://github.com/ytdl-org/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- [x] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\ngfycat.com has added dashes to some urls [https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball](https://gfycat.com/acceptablehappygoluckyharborporpoise-baseball) causing a HTTP Error. 
\r\nThis could be fixed by excluding dashes in the url InfoExtractor.\r\n\r\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n float_or_none,\n qualities,\n ExtractorError,\n)\n\n\nclass GfycatIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?gfycat\\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'\n _TESTS = [{\n 'url': 'http://gfycat.com/DeadlyDecisiveGermanpinscher',\n 'info_dict': {\n 'id': 'DeadlyDecisiveGermanpinscher',\n 'ext': 'mp4',\n 'title': 'Ghost in the Shell',\n 'timestamp': 1410656006,\n 'upload_date': '20140914',\n 'uploader': 'anonymous',\n 'duration': 10.4,\n 'view_count': int,\n 'like_count': int,\n 'dislike_count': int,\n 'categories': list,\n 'age_limit': 0,\n }\n }, {\n 'url': 'http://gfycat.com/ifr/JauntyTimelyAmazontreeboa',\n 'info_dict': {\n 'id': 'JauntyTimelyAmazontreeboa',\n 'ext': 'mp4',\n 'title': 'JauntyTimelyAmazontreeboa',\n 'timestamp': 1411720126,\n 'upload_date': '20140926',\n 'uploader': 'anonymous',\n 'duration': 3.52,\n 'view_count': int,\n 'like_count': int,\n 'dislike_count': int,\n 'categories': list,\n 'age_limit': 0,\n }\n }, {\n 'url': 'https://gfycat.com/gifs/detail/UnconsciousLankyIvorygull',\n 'only_matching': True\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n gfy = self._download_json(\n 'https://api.gfycat.com/v1/gfycats/%s' % video_id,\n video_id, 'Downloading video info')\n if 'error' in gfy:\n raise ExtractorError('Gfycat said: ' + gfy['error'], expected=True)\n gfy = gfy['gfyItem']\n\n title = gfy.get('title') or gfy['gfyName']\n description = gfy.get('description')\n timestamp = int_or_none(gfy.get('createDate'))\n uploader = gfy.get('userName')\n view_count = int_or_none(gfy.get('views'))\n like_count = int_or_none(gfy.get('likes'))\n dislike_count = int_or_none(gfy.get('dislikes'))\n age_limit = 18 if gfy.get('nsfw') == '1' else 0\n\n width = int_or_none(gfy.get('width'))\n height = int_or_none(gfy.get('height'))\n fps = int_or_none(gfy.get('frameRate'))\n num_frames = int_or_none(gfy.get('numFrames'))\n\n duration = float_or_none(num_frames, fps) if num_frames and fps else None\n\n categories = gfy.get('tags') or gfy.get('extraLemmas') or []\n\n FORMATS = ('gif', 'webm', 'mp4')\n quality = qualities(FORMATS)\n\n formats = []\n for format_id in FORMATS:\n video_url = gfy.get('%sUrl' % format_id)\n if not video_url:\n continue\n filesize = int_or_none(gfy.get('%sSize' % format_id))\n formats.append({\n 'url': video_url,\n 'format_id': format_id,\n 'width': width,\n 'height': height,\n 'fps': fps,\n 'filesize': filesize,\n 'quality': quality(format_id),\n })\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': title,\n 'description': description,\n 'timestamp': timestamp,\n 'uploader': uploader,\n 'duration': duration,\n 'view_count': view_count,\n 'like_count': like_count,\n 'dislike_count': dislike_count,\n 'categories': categories,\n 'age_limit': age_limit,\n 'formats': formats,\n }\n", "path": "youtube_dl/extractor/gfycat.py"}]}
| 2,088 | 289 |
gh_patches_debug_8775
|
rasdani/github-patches
|
git_diff
|
praw-dev__praw-637
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PRAW does not work with custom SSL cert bundle
Because of the proxy I'm using, I need to use a custom SSL cert bundle. Normally using the requests library, this is achievable in one of 2 ways:
Explicitly setting `verify` to the path of the bundle:
```
requests.get('https://google.com', verify='/path/to/cacert.pem')
```
Or setting an environment variable so that all requests use it:
```
export REQUESTS_CA_BUNDLE='/path/to/cacert.pem'
requests.get('https://google.com')
```
The environment variable is preferred because this allows the requests library to work when called from other packages that I did not write.
However, this does not work with PRAW. The problem I see is severalfold:
Using `Session.request` from requests library gets the environment variable properly through the `merge_environment_settings` method:
https://github.com/kennethreitz/requests/blob/fb014560611f6ebb97e7deb03ad8336c3c8f2db1/requests/sessions.py#L461
https://github.com/kennethreitz/requests/blob/fb014560611f6ebb97e7deb03ad8336c3c8f2db1/requests/sessions.py#L617-L629
But this code is never reached since PRAW builds its own request and uses Session.send which does not pull the environment variable:
https://github.com/praw-dev/praw/blob/3902dc24b0f42e487e26481aae46352806e3e6a8/praw/handlers.py#L101-L102
PRAW does support a setting for `validate_certs` which gets passed along as the `verify` parameter to requests library. The issue here is that PRAW only allows a boolean. Setting this variable to the path of a `cacert.pem` file evaluates to False and turns off SSL verification:
https://github.com/praw-dev/praw/blob/3902dc24b0f42e487e26481aae46352806e3e6a8/praw/__init__.py#L222-L223
There are a couple ways to solve this that I can think of. I would be glad to help out with a fix if that is something that is desirable.
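One way to address the first point — a minimal sketch, assuming the handler keeps its prepared-request flow; the wrapper name is illustrative — is to merge the environment settings the same way `Session.request` does before calling `Session.send`:

```python
def send_with_env_settings(session, prepared, proxies, timeout, verify):
    # Resolve REQUESTS_CA_BUNDLE / proxy environment variables exactly as
    # Session.request would, then hand the merged values to Session.send.
    settings = session.merge_environment_settings(
        prepared.url, proxies, False, verify, None)
    return session.send(prepared, timeout=timeout,
                        allow_redirects=False, **settings)
```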
</issue>
<code>
[start of praw/handlers.py]
1 """Provides classes that handle request dispatching."""
2
3 from __future__ import print_function, unicode_literals
4
5 import socket
6 import sys
7 import time
8 from functools import wraps
9 from praw.errors import ClientException
10 from praw.helpers import normalize_url
11 from requests import Session
12 from six import text_type
13 from six.moves import cPickle # pylint: disable=F0401
14 from threading import Lock
15 from timeit import default_timer as timer
16
17
18 class RateLimitHandler(object):
19 """The base handler that provides thread-safe rate limiting enforcement.
20
21 While this handler is threadsafe, PRAW is not thread safe when the same
22 `Reddit` instance is being utilized from multiple threads.
23
24 """
25
26 last_call = {} # Stores a two-item list: [lock, previous_call_time]
27 rl_lock = Lock() # lock used for adding items to last_call
28
29 @staticmethod
30 def rate_limit(function):
31 """Return a decorator that enforces API request limit guidelines.
32
33 We are allowed to make a API request every api_request_delay seconds as
34 specified in praw.ini. This value may differ from reddit to reddit. For
35 reddit.com it is 2. Any function decorated with this will be forced to
36 delay _rate_delay seconds from the calling of the last function
37 decorated with this before executing.
38
39 This decorator must be applied to a RateLimitHandler class method or
40 instance method as it assumes `rl_lock` and `last_call` are available.
41
42 """
43 @wraps(function)
44 def wrapped(cls, _rate_domain, _rate_delay, **kwargs):
45 cls.rl_lock.acquire()
46 lock_last = cls.last_call.setdefault(_rate_domain, [Lock(), 0])
47 with lock_last[0]: # Obtain the domain specific lock
48 cls.rl_lock.release()
49 # Sleep if necessary, then perform the request
50 now = timer()
51 delay = lock_last[1] + _rate_delay - now
52 if delay > 0:
53 now += delay
54 time.sleep(delay)
55 lock_last[1] = now
56 return function(cls, **kwargs)
57 return wrapped
58
59 @classmethod
60 def evict(cls, urls): # pylint: disable=W0613
61 """Method utilized to evict entries for the given urls.
62
63 :param urls: An iterable containing normalized urls.
64 :returns: The number of items removed from the cache.
65
66 By default this method returns False as a cache need not be present.
67
68 """
69 return 0
70
71 def __del__(self):
72 """Cleanup the HTTP session."""
73 if self.http:
74 try:
75 self.http.close()
76 except: # Never fail pylint: disable=W0702
77 pass
78
79 def __init__(self):
80 """Establish the HTTP session."""
81 self.http = Session() # Each instance should have its own session
82
83 def request(self, request, proxies, timeout, verify, **_):
84 """Responsible for dispatching the request and returning the result.
85
86 Network level exceptions should be raised and only
87 ``requests.Response`` should be returned.
88
89 :param request: A ``requests.PreparedRequest`` object containing all
90 the data necessary to perform the request.
91 :param proxies: A dictionary of proxy settings to be utilized for the
92 request.
93 :param timeout: Specifies the maximum time that the actual HTTP request
94 can take.
95 :param verify: Specifies if SSL certificates should be validated.
96
97 ``**_`` should be added to the method call to ignore the extra
98 arguments intended for the cache handler.
99
100 """
101 return self.http.send(request, proxies=proxies, timeout=timeout,
102 allow_redirects=False, verify=verify)
103 RateLimitHandler.request = RateLimitHandler.rate_limit(
104 RateLimitHandler.request)
105
106
107 class DefaultHandler(RateLimitHandler):
108 """Extends the RateLimitHandler to add thread-safe caching support."""
109
110 ca_lock = Lock()
111 cache = {}
112 cache_hit_callback = None
113 timeouts = {}
114
115 @staticmethod
116 def with_cache(function):
117 """Return a decorator that interacts with a handler's cache.
118
119 This decorator must be applied to a DefaultHandler class method or
120 instance method as it assumes `cache`, `ca_lock` and `timeouts` are
121 available.
122
123 """
124 @wraps(function)
125 def wrapped(cls, _cache_key, _cache_ignore, _cache_timeout, **kwargs):
126 def clear_timeouts():
127 """Clear the cache of timed out results."""
128 for key in list(cls.timeouts):
129 if timer() - cls.timeouts[key] > _cache_timeout:
130 del cls.timeouts[key]
131 del cls.cache[key]
132
133 if _cache_ignore:
134 return function(cls, **kwargs)
135 with cls.ca_lock:
136 clear_timeouts()
137 if _cache_key in cls.cache:
138 if cls.cache_hit_callback:
139 cls.cache_hit_callback(_cache_key)
140 return cls.cache[_cache_key]
141 # Releasing the lock before actually making the request allows for
142 # the possibility of more than one thread making the same request
143 # to get through. Without having domain-specific caching (under the
144 # assumption only one request to a domain can be made at a
145 # time), there isn't a better way to handle this.
146 result = function(cls, **kwargs)
147 # The handlers don't call `raise_for_status` so we need to ignore
148 # status codes that will result in an exception that should not be
149 # cached.
150 if result.status_code not in (200, 302):
151 return result
152 with cls.ca_lock:
153 cls.timeouts[_cache_key] = timer()
154 cls.cache[_cache_key] = result
155 return result
156 return wrapped
157
158 @classmethod
159 def clear_cache(cls):
160 """Remove all items from the cache."""
161 with cls.ca_lock:
162 cls.cache = {}
163 cls.timeouts = {}
164
165 @classmethod
166 def evict(cls, urls):
167 """Remove items from cache matching URLs.
168
169 Return the number of items removed.
170
171 """
172 if isinstance(urls, text_type):
173 urls = [urls]
174 urls = set(normalize_url(url) for url in urls)
175 retval = 0
176 with cls.ca_lock:
177 for key in list(cls.cache):
178 if key[0] in urls:
179 retval += 1
180 del cls.cache[key]
181 del cls.timeouts[key]
182 return retval
183 DefaultHandler.request = DefaultHandler.with_cache(RateLimitHandler.request)
184
185
186 class MultiprocessHandler(object):
187 """A PRAW handler to interact with the PRAW multi-process server."""
188
189 def __init__(self, host='localhost', port=10101):
190 """Construct an instance of the MultiprocessHandler."""
191 self.host = host
192 self.port = port
193
194 def _relay(self, **kwargs):
195 """Send the request through the server and return the HTTP response."""
196 retval = None
197 delay_time = 2 # For connection retries
198 read_attempts = 0 # For reading from socket
199 while retval is None: # Evict can return False
200 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
201 sock_fp = sock.makefile('rwb') # Used for pickle
202 try:
203 sock.connect((self.host, self.port))
204 cPickle.dump(kwargs, sock_fp, cPickle.HIGHEST_PROTOCOL)
205 sock_fp.flush()
206 retval = cPickle.load(sock_fp)
207 except: # pylint: disable=W0702
208 exc_type, exc, _ = sys.exc_info()
209 socket_error = exc_type is socket.error
210 if socket_error and exc.errno == 111: # Connection refused
211 sys.stderr.write('Cannot connect to multiprocess server. I'
212 's it running? Retrying in {0} seconds.\n'
213 .format(delay_time))
214 time.sleep(delay_time)
215 delay_time = min(64, delay_time * 2)
216 elif exc_type is EOFError or socket_error and exc.errno == 104:
217 # Failure during socket READ
218 if read_attempts >= 3:
219 raise ClientException('Successive failures reading '
220 'from the multiprocess server.')
221 sys.stderr.write('Lost connection with multiprocess server'
222 ' during read. Trying again.\n')
223 read_attempts += 1
224 else:
225 raise
226 finally:
227 sock_fp.close()
228 sock.close()
229 if isinstance(retval, Exception):
230 raise retval # pylint: disable=E0702
231 return retval
232
233 def evict(self, urls):
234 """Forward the eviction to the server and return its response."""
235 return self._relay(method='evict', urls=urls)
236
237 def request(self, **kwargs):
238 """Forward the request to the server and return its HTTP response."""
239 return self._relay(method='request', **kwargs)
240
[end of praw/handlers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/praw/handlers.py b/praw/handlers.py
--- a/praw/handlers.py
+++ b/praw/handlers.py
@@ -98,8 +98,12 @@
arguments intended for the cache handler.
"""
- return self.http.send(request, proxies=proxies, timeout=timeout,
- allow_redirects=False, verify=verify)
+ settings = self.http.merge_environment_settings(
+ request.url, proxies, False, verify, None
+ )
+ return self.http.send(request, timeout=timeout, allow_redirects=False,
+ **settings)
+
RateLimitHandler.request = RateLimitHandler.rate_limit(
RateLimitHandler.request)
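With the settings merged, the documented environment variable should take effect — a small usage sketch (the bundle path is a placeholder):

```python
import os

os.environ['REQUESTS_CA_BUNDLE'] = '/path/to/cacert.pem'  # placeholder path
# With trust_env enabled (the default), merge_environment_settings() resolves
# verify=True to this bundle, so the handler's send() call validates against
# the custom CA.
```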
|
{"golden_diff": "diff --git a/praw/handlers.py b/praw/handlers.py\n--- a/praw/handlers.py\n+++ b/praw/handlers.py\n@@ -98,8 +98,12 @@\n arguments intended for the cache handler.\n \n \"\"\"\n- return self.http.send(request, proxies=proxies, timeout=timeout,\n- allow_redirects=False, verify=verify)\n+ settings = self.http.merge_environment_settings(\n+ request.url, proxies, False, verify, None\n+ )\n+ return self.http.send(request, timeout=timeout, allow_redirects=False,\n+ **settings)\n+\n RateLimitHandler.request = RateLimitHandler.rate_limit(\n RateLimitHandler.request)\n", "issue": "PRAW does not work with custom SSL cert bundle\nBecause of the proxy I'm using, I need to use a custom SSL cert bundle. Normally using the requests library, this is achievable in one of 2 ways:\n\nExplicitly setting `verify` to the path of the bundle:\n\n```\nrequests.get('https://google.com', verify='/path/to/cacert.pem')\n```\n\nOr setting an environment variable so that all requests use it:\n\n```\nexport REQUESTS_CA_BUNDLE='/path/to/cacert.pem'\n\nrequests.get('https://google.com')\n```\n\nThe environment variable is preferred because this allows the requests library to work when called from other packages that I did not write.\n\nHowever, this does not work with PRAW. The problem I see is severalfold:\n\nUsing `Session.request` from requests library gets the environment variable properly through the `merge_environment_settings` method:\n\nhttps://github.com/kennethreitz/requests/blob/fb014560611f6ebb97e7deb03ad8336c3c8f2db1/requests/sessions.py#L461\nhttps://github.com/kennethreitz/requests/blob/fb014560611f6ebb97e7deb03ad8336c3c8f2db1/requests/sessions.py#L617-L629\n\nBut this code is never reached since PRAW builds its own request and uses Session.send which does not pull the environment variable:\n\nhttps://github.com/praw-dev/praw/blob/3902dc24b0f42e487e26481aae46352806e3e6a8/praw/handlers.py#L101-L102\n\nPRAW does support a setting for `validate_certs` which gets passed along as the `verify` parameter to requests library. The issue here is that PRAW only allows a boolean. Setting this variable to the path of a `cacert.pem` file evaluates to False and turns off SSL verification:\n\nhttps://github.com/praw-dev/praw/blob/3902dc24b0f42e487e26481aae46352806e3e6a8/praw/__init__.py#L222-L223\n\nThere are a couple ways to solve this that I can think of. 
I would be glad to help out with a fix if that is something that is desirable.\n\n", "before_files": [{"content": "\"\"\"Provides classes that handle request dispatching.\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\nimport socket\nimport sys\nimport time\nfrom functools import wraps\nfrom praw.errors import ClientException\nfrom praw.helpers import normalize_url\nfrom requests import Session\nfrom six import text_type\nfrom six.moves import cPickle # pylint: disable=F0401\nfrom threading import Lock\nfrom timeit import default_timer as timer\n\n\nclass RateLimitHandler(object):\n \"\"\"The base handler that provides thread-safe rate limiting enforcement.\n\n While this handler is threadsafe, PRAW is not thread safe when the same\n `Reddit` instance is being utilized from multiple threads.\n\n \"\"\"\n\n last_call = {} # Stores a two-item list: [lock, previous_call_time]\n rl_lock = Lock() # lock used for adding items to last_call\n\n @staticmethod\n def rate_limit(function):\n \"\"\"Return a decorator that enforces API request limit guidelines.\n\n We are allowed to make a API request every api_request_delay seconds as\n specified in praw.ini. This value may differ from reddit to reddit. For\n reddit.com it is 2. Any function decorated with this will be forced to\n delay _rate_delay seconds from the calling of the last function\n decorated with this before executing.\n\n This decorator must be applied to a RateLimitHandler class method or\n instance method as it assumes `rl_lock` and `last_call` are available.\n\n \"\"\"\n @wraps(function)\n def wrapped(cls, _rate_domain, _rate_delay, **kwargs):\n cls.rl_lock.acquire()\n lock_last = cls.last_call.setdefault(_rate_domain, [Lock(), 0])\n with lock_last[0]: # Obtain the domain specific lock\n cls.rl_lock.release()\n # Sleep if necessary, then perform the request\n now = timer()\n delay = lock_last[1] + _rate_delay - now\n if delay > 0:\n now += delay\n time.sleep(delay)\n lock_last[1] = now\n return function(cls, **kwargs)\n return wrapped\n\n @classmethod\n def evict(cls, urls): # pylint: disable=W0613\n \"\"\"Method utilized to evict entries for the given urls.\n\n :param urls: An iterable containing normalized urls.\n :returns: The number of items removed from the cache.\n\n By default this method returns False as a cache need not be present.\n\n \"\"\"\n return 0\n\n def __del__(self):\n \"\"\"Cleanup the HTTP session.\"\"\"\n if self.http:\n try:\n self.http.close()\n except: # Never fail pylint: disable=W0702\n pass\n\n def __init__(self):\n \"\"\"Establish the HTTP session.\"\"\"\n self.http = Session() # Each instance should have its own session\n\n def request(self, request, proxies, timeout, verify, **_):\n \"\"\"Responsible for dispatching the request and returning the result.\n\n Network level exceptions should be raised and only\n ``requests.Response`` should be returned.\n\n :param request: A ``requests.PreparedRequest`` object containing all\n the data necessary to perform the request.\n :param proxies: A dictionary of proxy settings to be utilized for the\n request.\n :param timeout: Specifies the maximum time that the actual HTTP request\n can take.\n :param verify: Specifies if SSL certificates should be validated.\n\n ``**_`` should be added to the method call to ignore the extra\n arguments intended for the cache handler.\n\n \"\"\"\n return self.http.send(request, proxies=proxies, timeout=timeout,\n allow_redirects=False, verify=verify)\nRateLimitHandler.request = RateLimitHandler.rate_limit(\n 
RateLimitHandler.request)\n\n\nclass DefaultHandler(RateLimitHandler):\n \"\"\"Extends the RateLimitHandler to add thread-safe caching support.\"\"\"\n\n ca_lock = Lock()\n cache = {}\n cache_hit_callback = None\n timeouts = {}\n\n @staticmethod\n def with_cache(function):\n \"\"\"Return a decorator that interacts with a handler's cache.\n\n This decorator must be applied to a DefaultHandler class method or\n instance method as it assumes `cache`, `ca_lock` and `timeouts` are\n available.\n\n \"\"\"\n @wraps(function)\n def wrapped(cls, _cache_key, _cache_ignore, _cache_timeout, **kwargs):\n def clear_timeouts():\n \"\"\"Clear the cache of timed out results.\"\"\"\n for key in list(cls.timeouts):\n if timer() - cls.timeouts[key] > _cache_timeout:\n del cls.timeouts[key]\n del cls.cache[key]\n\n if _cache_ignore:\n return function(cls, **kwargs)\n with cls.ca_lock:\n clear_timeouts()\n if _cache_key in cls.cache:\n if cls.cache_hit_callback:\n cls.cache_hit_callback(_cache_key)\n return cls.cache[_cache_key]\n # Releasing the lock before actually making the request allows for\n # the possibility of more than one thread making the same request\n # to get through. Without having domain-specific caching (under the\n # assumption only one request to a domain can be made at a\n # time), there isn't a better way to handle this.\n result = function(cls, **kwargs)\n # The handlers don't call `raise_for_status` so we need to ignore\n # status codes that will result in an exception that should not be\n # cached.\n if result.status_code not in (200, 302):\n return result\n with cls.ca_lock:\n cls.timeouts[_cache_key] = timer()\n cls.cache[_cache_key] = result\n return result\n return wrapped\n\n @classmethod\n def clear_cache(cls):\n \"\"\"Remove all items from the cache.\"\"\"\n with cls.ca_lock:\n cls.cache = {}\n cls.timeouts = {}\n\n @classmethod\n def evict(cls, urls):\n \"\"\"Remove items from cache matching URLs.\n\n Return the number of items removed.\n\n \"\"\"\n if isinstance(urls, text_type):\n urls = [urls]\n urls = set(normalize_url(url) for url in urls)\n retval = 0\n with cls.ca_lock:\n for key in list(cls.cache):\n if key[0] in urls:\n retval += 1\n del cls.cache[key]\n del cls.timeouts[key]\n return retval\nDefaultHandler.request = DefaultHandler.with_cache(RateLimitHandler.request)\n\n\nclass MultiprocessHandler(object):\n \"\"\"A PRAW handler to interact with the PRAW multi-process server.\"\"\"\n\n def __init__(self, host='localhost', port=10101):\n \"\"\"Construct an instance of the MultiprocessHandler.\"\"\"\n self.host = host\n self.port = port\n\n def _relay(self, **kwargs):\n \"\"\"Send the request through the server and return the HTTP response.\"\"\"\n retval = None\n delay_time = 2 # For connection retries\n read_attempts = 0 # For reading from socket\n while retval is None: # Evict can return False\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock_fp = sock.makefile('rwb') # Used for pickle\n try:\n sock.connect((self.host, self.port))\n cPickle.dump(kwargs, sock_fp, cPickle.HIGHEST_PROTOCOL)\n sock_fp.flush()\n retval = cPickle.load(sock_fp)\n except: # pylint: disable=W0702\n exc_type, exc, _ = sys.exc_info()\n socket_error = exc_type is socket.error\n if socket_error and exc.errno == 111: # Connection refused\n sys.stderr.write('Cannot connect to multiprocess server. I'\n 's it running? 
Retrying in {0} seconds.\\n'\n .format(delay_time))\n time.sleep(delay_time)\n delay_time = min(64, delay_time * 2)\n elif exc_type is EOFError or socket_error and exc.errno == 104:\n # Failure during socket READ\n if read_attempts >= 3:\n raise ClientException('Successive failures reading '\n 'from the multiprocess server.')\n sys.stderr.write('Lost connection with multiprocess server'\n ' during read. Trying again.\\n')\n read_attempts += 1\n else:\n raise\n finally:\n sock_fp.close()\n sock.close()\n if isinstance(retval, Exception):\n raise retval # pylint: disable=E0702\n return retval\n\n def evict(self, urls):\n \"\"\"Forward the eviction to the server and return its response.\"\"\"\n return self._relay(method='evict', urls=urls)\n\n def request(self, **kwargs):\n \"\"\"Forward the request to the server and return its HTTP response.\"\"\"\n return self._relay(method='request', **kwargs)\n", "path": "praw/handlers.py"}]}
| 3,624 | 154 |
gh_patches_debug_21287
|
rasdani/github-patches
|
git_diff
|
kymatio__kymatio-384
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unexpected crash with J=1
Hi,
I faced the following problem: I get an unexpected and unexplained crash when J=1, regardless of the other parameter values. Please find the code and the error message below.
Here's the code:
```
import numpy as np
import torch
from kymatio import Scattering2D

data = torch.Tensor(np.random.rand(128,3,32,32))
print("shape before scattering: {}".format(data.shape))
scatModel = Scattering2D(J=1,L=8, shape = (32,32), max_order=2)
a = scatModel(data)
print("shape after scattering: {}".format(a.shape))
```
And the error message:
> shape before scattering: torch.Size([128, 3, 32, 32])
> Traceback (most recent call last):
> File "/user/HS221/dm00314/PycharmProjects/ScatterNetsTestFramework/venv/mixingMallatTest.py", line 24, in <module>
> a = scatModel(data)
> File "/user/HS221/dm00314/PycharmProjects/ScatterNetsTestFramework/venv/lib/python3.5/site-packages/kymatio-0.2.0.dev0-py3.5.egg/kymatio/scattering2d/scattering2d.py", line 235, in __call__
> File "/user/HS221/dm00314/PycharmProjects/ScatterNetsTestFramework/venv/lib/python3.5/site-packages/kymatio-0.2.0.dev0-py3.5.egg/kymatio/scattering2d/scattering2d.py", line 202, in forward
> KeyError: 0
Is it me or something is wrong?
Thank you,
Dmitry
</issue>
<code>
[start of kymatio/scattering2d/filter_bank.py]
1 """
2 Authors: Eugene Belilovsky, Edouard Oyallon and Sergey Zagoruyko
3 All rights reserved, 2017.
4 """
5
6 __all__ = ['filter_bank']
7
8 import numpy as np
9 from .utils import fft2
10
11
12 def filter_bank(M, N, J, L=8):
13 """
14 Builds in Fourier the Morlet filters used for the scattering transform.
15 Each single filter is provided as a dictionary with the following keys:
16 * 'j' : scale
17 * 'theta' : angle used
18 Parameters
19 ----------
20 M, N : int
21 spatial support of the input
22 J : int
23 logscale of the scattering
24 L : int, optional
25 number of angles used for the wavelet transform
26 Returns
27 -------
28 filters : list
29 A two list of dictionary containing respectively the low-pass and
30 wavelet filters.
31 Notes
32 -----
33 The design of the filters is optimized for the value L = 8.
34 """
35 filters = {}
36 filters['psi'] = []
37
38 for j in range(J):
39 for theta in range(L):
40 psi = {}
41 psi['j'] = j
42 psi['theta'] = theta
43 psi_signal = morlet_2d(M, N, 0.8 * 2**j,
44 (int(L-L/2-1)-theta) * np.pi / L,
45 3.0 / 4.0 * np.pi /2**j, 4.0/L)
46 psi_signal_fourier = fft2(psi_signal)
47 # drop the imaginary part, it is zero anyway
48 psi_signal_fourier = np.real(psi_signal_fourier)
49 for res in range(min(j + 1, J - 1)):
50 psi_signal_fourier_res = periodize_filter_fft(
51 psi_signal_fourier, res)
52 # add a trailing singleton dimension to mark it as non-complex
53 psi_signal_fourier_res = psi_signal_fourier_res[..., np.newaxis]
54 psi[res] = psi_signal_fourier_res
55 # Normalization to avoid doing it with the FFT.
56 psi[res] /= M*N// 2**(2*j)
57 filters['psi'].append(psi)
58
59 filters['phi'] = {}
60 phi_signal = gabor_2d(M, N, 0.8 * 2**(J-1), 0, 0)
61 phi_signal_fourier = fft2(phi_signal)
62 # drop the imaginary part, it is zero anyway
63 phi_signal_fourier = np.real(phi_signal_fourier)
64 filters['phi']['j'] = J
65 for res in range(J):
66 phi_signal_fourier_res = periodize_filter_fft(phi_signal_fourier, res)
67 # add a trailing singleton dimension to mark it as non-complex
68 phi_signal_fourier_res = phi_signal_fourier_res[..., np.newaxis]
69 filters['phi'][res] = phi_signal_fourier_res
70 # Normalization to avoid doing it with the FFT.
71 filters['phi'][res] /= M*N // 2 ** (2 * J)
72
73 return filters
74
75
76 def periodize_filter_fft(x, res):
77 """
78 Parameters
79 ----------
80 x : numpy array
81 signal to periodize in Fourier
82 res :
83 resolution to which the signal is cropped.
84
85 Returns
86 -------
87 crop : numpy array
88 It returns a crop version of the filter, assuming that
89 the convolutions will be done via compactly supported signals.
90 """
91 M = x.shape[0]
92 N = x.shape[1]
93
94 crop = np.zeros((M // 2 ** res, N // 2 ** res), x.dtype)
95
96 mask = np.ones(x.shape, np.float32)
97 len_x = int(M * (1 - 2 ** (-res)))
98 start_x = int(M * 2 ** (-res - 1))
99 len_y = int(N * (1 - 2 ** (-res)))
100 start_y = int(N * 2 ** (-res - 1))
101 mask[start_x:start_x + len_x,:] = 0
102 mask[:, start_y:start_y + len_y] = 0
103 x = np.multiply(x,mask)
104
105 for k in range(int(M / 2 ** res)):
106 for l in range(int(N / 2 ** res)):
107 for i in range(int(2 ** res)):
108 for j in range(int(2 ** res)):
109 crop[k, l] += x[k + i * int(M / 2 ** res), l + j * int(N / 2 ** res)]
110
111 return crop
112
113
114 def morlet_2d(M, N, sigma, theta, xi, slant=0.5, offset=0):
115 """
116 Computes a 2D Morlet filter.
117 A Morlet filter is the sum of a Gabor filter and a low-pass filter
118 to ensure that the sum has exactly zero mean in the temporal domain.
119 It is defined by the following formula in space:
120 psi(u) = g_{sigma}(u) (e^(i xi^T u) - beta)
121 where g_{sigma} is a Gaussian envelope, xi is a frequency and beta is
122 the cancelling parameter.
123
124 Parameters
125 ----------
126 M, N : int
127 spatial sizes
128 sigma : float
129 bandwidth parameter
130 xi : float
131 central frequency (in [0, 1])
132 theta : float
133 angle in [0, pi]
134 slant : float, optional
135 parameter which guides the elipsoidal shape of the morlet
136 offset : int, optional
137 offset by which the signal starts
138
139 Returns
140 -------
141 morlet_fft : ndarray
142 numpy array of size (M, N)
143 """
144 wv = gabor_2d(M, N, sigma, theta, xi, slant, offset)
145 wv_modulus = gabor_2d(M, N, sigma, theta, 0, slant, offset)
146 K = np.sum(wv) / np.sum(wv_modulus)
147
148 mor = wv - K * wv_modulus
149 return mor
150
151
152 def gabor_2d(M, N, sigma, theta, xi, slant=1.0, offset=0):
153 """
154 Computes a 2D Gabor filter.
155 A Gabor filter is defined by the following formula in space:
156 psi(u) = g_{sigma}(u) e^(i xi^T u)
157 where g_{sigma} is a Gaussian envelope and xi is a frequency.
158
159 Parameters
160 ----------
161 M, N : int
162 spatial sizes
163 sigma : float
164 bandwidth parameter
165 xi : float
166 central frequency (in [0, 1])
167 theta : float
168 angle in [0, pi]
169 slant : float, optional
170 parameter which guides the elipsoidal shape of the morlet
171 offset : int, optional
172 offset by which the signal starts
173
174 Returns
175 -------
176 morlet_fft : ndarray
177 numpy array of size (M, N)
178 """
179 gab = np.zeros((M, N), np.complex64)
180 R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]], np.float32)
181 R_inv = np.array([[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]], np.float32)
182 D = np.array([[1, 0], [0, slant * slant]])
183 curv = np.dot(R, np.dot(D, R_inv)) / ( 2 * sigma * sigma)
184
185 for ex in [-2, -1, 0, 1, 2]:
186 for ey in [-2, -1, 0, 1, 2]:
187 [xx, yy] = np.mgrid[offset + ex * M:offset + M + ex * M, offset + ey * N:offset + N + ey * N]
188 arg = -(curv[0, 0] * np.multiply(xx, xx) + (curv[0, 1] + curv[1, 0]) * np.multiply(xx, yy) + curv[
189 1, 1] * np.multiply(yy, yy)) + 1.j * (xx * xi * np.cos(theta) + yy * xi * np.sin(theta))
190 gab += np.exp(arg)
191
192 norm_factor = (2 * 3.1415 * sigma * sigma / slant)
193 gab /= norm_factor
194
195 return gab
196
[end of kymatio/scattering2d/filter_bank.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kymatio/scattering2d/filter_bank.py b/kymatio/scattering2d/filter_bank.py
--- a/kymatio/scattering2d/filter_bank.py
+++ b/kymatio/scattering2d/filter_bank.py
@@ -46,7 +46,7 @@
psi_signal_fourier = fft2(psi_signal)
# drop the imaginary part, it is zero anyway
psi_signal_fourier = np.real(psi_signal_fourier)
- for res in range(min(j + 1, J - 1)):
+ for res in range(min(j + 1, max(J - 1, 1))):
psi_signal_fourier_res = periodize_filter_fft(
psi_signal_fourier, res)
# add a trailing singleton dimension to mark it as non-complex
@@ -54,6 +54,7 @@
psi[res] = psi_signal_fourier_res
# Normalization to avoid doing it with the FFT.
psi[res] /= M*N// 2**(2*j)
+
filters['psi'].append(psi)
filters['phi'] = {}
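The off-by-one is easy to see in isolation — a quick check of the resolution-loop bound before and after the change:

```python
J, j = 1, 0
print(list(range(min(j + 1, J - 1))))           # [] -> psi[0] is never stored, hence KeyError: 0
print(list(range(min(j + 1, max(J - 1, 1)))))   # [0] -> the base resolution is always kept
```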
|
{"golden_diff": "diff --git a/kymatio/scattering2d/filter_bank.py b/kymatio/scattering2d/filter_bank.py\n--- a/kymatio/scattering2d/filter_bank.py\n+++ b/kymatio/scattering2d/filter_bank.py\n@@ -46,7 +46,7 @@\n psi_signal_fourier = fft2(psi_signal)\n # drop the imaginary part, it is zero anyway\n psi_signal_fourier = np.real(psi_signal_fourier)\n- for res in range(min(j + 1, J - 1)):\n+ for res in range(min(j + 1, max(J - 1, 1))):\n psi_signal_fourier_res = periodize_filter_fft(\n psi_signal_fourier, res)\n # add a trailing singleton dimension to mark it as non-complex\n@@ -54,6 +54,7 @@\n psi[res] = psi_signal_fourier_res\n # Normalization to avoid doing it with the FFT.\n psi[res] /= M*N// 2**(2*j)\n+\n filters['psi'].append(psi)\n \n filters['phi'] = {}\n", "issue": "Unexpected crush with J=1 \nHi, \r\n\r\nI faced the following problem: I get unexpected and unexplained crush when J=1 regardless of other parameters values. Please, find the code and the error message below. \r\n\r\nHere's the code:\r\n\r\n```\r\ndata = torch.Tensor(np.random.rand(128,3,32,32))\r\nprint(\"shape before scattering: {}\".format(data.shape))\r\n\r\nscatModel = Scattering2D(J=1,L=8, shape = (32,32), max_order=2)\r\na = scatModel(data)\r\n\r\nprint(\"shape after scattering: {}\".format(a.shape))\r\n```\r\n\r\nAnd the error message: \r\n\r\n> shape before scattering: torch.Size([128, 3, 32, 32])\r\n> Traceback (most recent call last):\r\n> File \"/user/HS221/dm00314/PycharmProjects/ScatterNetsTestFramework/venv/mixingMallatTest.py\", line 24, in <module>\r\n> a = scatModel(data)\r\n> File \"/user/HS221/dm00314/PycharmProjects/ScatterNetsTestFramework/venv/lib/python3.5/site-packages/kymatio-0.2.0.dev0-py3.5.egg/kymatio/scattering2d/scattering2d.py\", line 235, in __call__\r\n> File \"/user/HS221/dm00314/PycharmProjects/ScatterNetsTestFramework/venv/lib/python3.5/site-packages/kymatio-0.2.0.dev0-py3.5.egg/kymatio/scattering2d/scattering2d.py\", line 202, in forward\r\n> KeyError: 0\r\n\r\nIs it me or something is wrong?\r\n\r\nThank you, \r\nDmitry\n", "before_files": [{"content": "\"\"\"\nAuthors: Eugene Belilovsky, Edouard Oyallon and Sergey Zagoruyko\nAll rights reserved, 2017.\n\"\"\"\n\n__all__ = ['filter_bank']\n\nimport numpy as np\nfrom .utils import fft2\n\n\ndef filter_bank(M, N, J, L=8):\n \"\"\"\n Builds in Fourier the Morlet filters used for the scattering transform.\n Each single filter is provided as a dictionary with the following keys:\n * 'j' : scale\n * 'theta' : angle used\n Parameters\n ----------\n M, N : int\n spatial support of the input\n J : int\n logscale of the scattering\n L : int, optional\n number of angles used for the wavelet transform\n Returns\n -------\n filters : list\n A two list of dictionary containing respectively the low-pass and\n wavelet filters.\n Notes\n -----\n The design of the filters is optimized for the value L = 8.\n \"\"\"\n filters = {}\n filters['psi'] = []\n\n for j in range(J):\n for theta in range(L):\n psi = {}\n psi['j'] = j\n psi['theta'] = theta\n psi_signal = morlet_2d(M, N, 0.8 * 2**j,\n (int(L-L/2-1)-theta) * np.pi / L,\n 3.0 / 4.0 * np.pi /2**j, 4.0/L)\n psi_signal_fourier = fft2(psi_signal)\n # drop the imaginary part, it is zero anyway\n psi_signal_fourier = np.real(psi_signal_fourier)\n for res in range(min(j + 1, J - 1)):\n psi_signal_fourier_res = periodize_filter_fft(\n psi_signal_fourier, res)\n # add a trailing singleton dimension to mark it as non-complex\n psi_signal_fourier_res = psi_signal_fourier_res[..., np.newaxis]\n psi[res] = 
psi_signal_fourier_res\n # Normalization to avoid doing it with the FFT.\n psi[res] /= M*N// 2**(2*j)\n filters['psi'].append(psi)\n\n filters['phi'] = {}\n phi_signal = gabor_2d(M, N, 0.8 * 2**(J-1), 0, 0)\n phi_signal_fourier = fft2(phi_signal)\n # drop the imaginary part, it is zero anyway\n phi_signal_fourier = np.real(phi_signal_fourier)\n filters['phi']['j'] = J\n for res in range(J):\n phi_signal_fourier_res = periodize_filter_fft(phi_signal_fourier, res)\n # add a trailing singleton dimension to mark it as non-complex\n phi_signal_fourier_res = phi_signal_fourier_res[..., np.newaxis]\n filters['phi'][res] = phi_signal_fourier_res\n # Normalization to avoid doing it with the FFT.\n filters['phi'][res] /= M*N // 2 ** (2 * J)\n\n return filters\n\n\ndef periodize_filter_fft(x, res):\n \"\"\"\n Parameters\n ----------\n x : numpy array\n signal to periodize in Fourier\n res :\n resolution to which the signal is cropped.\n\n Returns\n -------\n crop : numpy array\n It returns a crop version of the filter, assuming that\n the convolutions will be done via compactly supported signals.\n \"\"\"\n M = x.shape[0]\n N = x.shape[1]\n\n crop = np.zeros((M // 2 ** res, N // 2 ** res), x.dtype)\n\n mask = np.ones(x.shape, np.float32)\n len_x = int(M * (1 - 2 ** (-res)))\n start_x = int(M * 2 ** (-res - 1))\n len_y = int(N * (1 - 2 ** (-res)))\n start_y = int(N * 2 ** (-res - 1))\n mask[start_x:start_x + len_x,:] = 0\n mask[:, start_y:start_y + len_y] = 0\n x = np.multiply(x,mask)\n\n for k in range(int(M / 2 ** res)):\n for l in range(int(N / 2 ** res)):\n for i in range(int(2 ** res)):\n for j in range(int(2 ** res)):\n crop[k, l] += x[k + i * int(M / 2 ** res), l + j * int(N / 2 ** res)]\n\n return crop\n\n\ndef morlet_2d(M, N, sigma, theta, xi, slant=0.5, offset=0):\n \"\"\"\n Computes a 2D Morlet filter.\n A Morlet filter is the sum of a Gabor filter and a low-pass filter\n to ensure that the sum has exactly zero mean in the temporal domain.\n It is defined by the following formula in space:\n psi(u) = g_{sigma}(u) (e^(i xi^T u) - beta)\n where g_{sigma} is a Gaussian envelope, xi is a frequency and beta is\n the cancelling parameter.\n\n Parameters\n ----------\n M, N : int\n spatial sizes\n sigma : float\n bandwidth parameter\n xi : float\n central frequency (in [0, 1])\n theta : float\n angle in [0, pi]\n slant : float, optional\n parameter which guides the elipsoidal shape of the morlet\n offset : int, optional\n offset by which the signal starts\n\n Returns\n -------\n morlet_fft : ndarray\n numpy array of size (M, N)\n \"\"\"\n wv = gabor_2d(M, N, sigma, theta, xi, slant, offset)\n wv_modulus = gabor_2d(M, N, sigma, theta, 0, slant, offset)\n K = np.sum(wv) / np.sum(wv_modulus)\n\n mor = wv - K * wv_modulus\n return mor\n\n\ndef gabor_2d(M, N, sigma, theta, xi, slant=1.0, offset=0):\n \"\"\"\n Computes a 2D Gabor filter.\n A Gabor filter is defined by the following formula in space:\n psi(u) = g_{sigma}(u) e^(i xi^T u)\n where g_{sigma} is a Gaussian envelope and xi is a frequency.\n\n Parameters\n ----------\n M, N : int\n spatial sizes\n sigma : float\n bandwidth parameter\n xi : float\n central frequency (in [0, 1])\n theta : float\n angle in [0, pi]\n slant : float, optional\n parameter which guides the elipsoidal shape of the morlet\n offset : int, optional\n offset by which the signal starts\n\n Returns\n -------\n morlet_fft : ndarray\n numpy array of size (M, N)\n \"\"\"\n gab = np.zeros((M, N), np.complex64)\n R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), 
np.cos(theta)]], np.float32)\n R_inv = np.array([[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]], np.float32)\n D = np.array([[1, 0], [0, slant * slant]])\n curv = np.dot(R, np.dot(D, R_inv)) / ( 2 * sigma * sigma)\n\n for ex in [-2, -1, 0, 1, 2]:\n for ey in [-2, -1, 0, 1, 2]:\n [xx, yy] = np.mgrid[offset + ex * M:offset + M + ex * M, offset + ey * N:offset + N + ey * N]\n arg = -(curv[0, 0] * np.multiply(xx, xx) + (curv[0, 1] + curv[1, 0]) * np.multiply(xx, yy) + curv[\n 1, 1] * np.multiply(yy, yy)) + 1.j * (xx * xi * np.cos(theta) + yy * xi * np.sin(theta))\n gab += np.exp(arg)\n\n norm_factor = (2 * 3.1415 * sigma * sigma / slant)\n gab /= norm_factor\n\n return gab\n", "path": "kymatio/scattering2d/filter_bank.py"}]}
| 3,294 | 242 |
gh_patches_debug_22409
|
rasdani/github-patches
|
git_diff
|
xonsh__xonsh-1551
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`xonsh.completers.pip` explodes if `pip` is not on PATH
On my Windows installation, Python is not on PATH (because of multiple-Python madness), and therefore neither is pip. However, the pip completer [expects pip to be on the PATH](https://github.com/xonsh/xonsh/blob/master/xonsh/completers/pip.py#L14).
This causes the completer to blow up with a `FileNotFoundError` when it tries to complete.
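For reference, this is the call that raises when no `pip` executable can be found — a minimal reproduction, assuming a machine with `pip` off PATH:

```python
import subprocess

# Raises FileNotFoundError if there is no `pip` on PATH; both the lazily
# built ALL_COMMANDS object and the `pip list` lookup hit this.
subprocess.check_output(['pip', '--help'], stderr=subprocess.DEVNULL)
```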
</issue>
<code>
[start of xonsh/completers/pip.py]
1 import re
2 import subprocess
3
4 import xonsh.lazyasd as xl
5
6 PIP_RE = xl.LazyObject(lambda: re.compile("pip(?:\d|\.)*"),
7 globals(), 'PIP_RE')
8 PIP_LIST_RE = xl.LazyObject(lambda: re.compile("pip(?:\d|\.)* (?:uninstall|show)"),
9 globals(), 'PIP_LIST_RE')
10
11
12 @xl.lazyobject
13 def ALL_COMMANDS():
14 help_text = str(subprocess.check_output(['pip', '--help'],
15 stderr=subprocess.DEVNULL))
16 commands = re.findall(" (\w+) ", help_text)
17 return [c for c in commands if c not in ['completion', 'help']]
18
19
20 def complete_pip(prefix, line, begidx, endidx, ctx):
21 """Completes python's package manager pip"""
22 line_len = len(line.split())
23 if (line_len > 3) or (line_len > 2 and line.endswith(' ')) or \
24 (not PIP_RE.search(line)):
25 return
26 if PIP_LIST_RE.search(line):
27 items = subprocess.check_output(['pip', 'list'], stderr=subprocess.DEVNULL)
28 items = items.decode('utf-8').splitlines()
29 return set(i.split()[0] for i in items)
30
31 if (line_len > 1 and line.endswith(' ')) or line_len > 2:
32 # "pip show " -> no complete (note space)
33 return
34 if prefix not in ALL_COMMANDS:
35 suggestions = [c for c in ALL_COMMANDS if c.startswith(prefix)]
36 if suggestions:
37 return suggestions, len(prefix)
38 return ALL_COMMANDS, len(prefix)
39
[end of xonsh/completers/pip.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/xonsh/completers/pip.py b/xonsh/completers/pip.py
--- a/xonsh/completers/pip.py
+++ b/xonsh/completers/pip.py
@@ -11,8 +11,11 @@
@xl.lazyobject
def ALL_COMMANDS():
- help_text = str(subprocess.check_output(['pip', '--help'],
- stderr=subprocess.DEVNULL))
+ try:
+ help_text = str(subprocess.check_output(['pip', '--help'],
+ stderr=subprocess.DEVNULL))
+ except FileNotFoundError:
+ return []
commands = re.findall(" (\w+) ", help_text)
return [c for c in commands if c not in ['completion', 'help']]
@@ -24,7 +27,11 @@
(not PIP_RE.search(line)):
return
if PIP_LIST_RE.search(line):
- items = subprocess.check_output(['pip', 'list'], stderr=subprocess.DEVNULL)
+ try:
+ items = subprocess.check_output(['pip', 'list'],
+ stderr=subprocess.DEVNULL)
+ except FileNotFoundError:
+ return set()
items = items.decode('utf-8').splitlines()
return set(i.split()[0] for i in items)
|
{"golden_diff": "diff --git a/xonsh/completers/pip.py b/xonsh/completers/pip.py\n--- a/xonsh/completers/pip.py\n+++ b/xonsh/completers/pip.py\n@@ -11,8 +11,11 @@\n \n @xl.lazyobject\n def ALL_COMMANDS():\n- help_text = str(subprocess.check_output(['pip', '--help'],\n- stderr=subprocess.DEVNULL))\n+ try:\n+ help_text = str(subprocess.check_output(['pip', '--help'],\n+ stderr=subprocess.DEVNULL))\n+ except FileNotFoundError:\n+ return []\n commands = re.findall(\" (\\w+) \", help_text)\n return [c for c in commands if c not in ['completion', 'help']]\n \n@@ -24,7 +27,11 @@\n (not PIP_RE.search(line)):\n return\n if PIP_LIST_RE.search(line):\n- items = subprocess.check_output(['pip', 'list'], stderr=subprocess.DEVNULL)\n+ try:\n+ items = subprocess.check_output(['pip', 'list'],\n+ stderr=subprocess.DEVNULL)\n+ except FileNotFoundError:\n+ return set()\n items = items.decode('utf-8').splitlines()\n return set(i.split()[0] for i in items)\n", "issue": "`xonsh.completers.pip` explodes if `pip` is not on PATH\nOn my Windows installation, Python is not on PATH (because multiple Python madness), and therefore neither is pip. However, the pip completer [expects pip to be on the path](https://github.com/xonsh/xonsh/blob/master/xonsh/completers/pip.py#L14).\n\nThis causes the completer to blow up with a `FileNotFoundError` when it tries to complete.\n\n", "before_files": [{"content": "import re\nimport subprocess\n\nimport xonsh.lazyasd as xl\n\nPIP_RE = xl.LazyObject(lambda: re.compile(\"pip(?:\\d|\\.)*\"),\n globals(), 'PIP_RE')\nPIP_LIST_RE = xl.LazyObject(lambda: re.compile(\"pip(?:\\d|\\.)* (?:uninstall|show)\"),\n globals(), 'PIP_LIST_RE')\n\n\[email protected]\ndef ALL_COMMANDS():\n help_text = str(subprocess.check_output(['pip', '--help'],\n stderr=subprocess.DEVNULL))\n commands = re.findall(\" (\\w+) \", help_text)\n return [c for c in commands if c not in ['completion', 'help']]\n\n\ndef complete_pip(prefix, line, begidx, endidx, ctx):\n \"\"\"Completes python's package manager pip\"\"\"\n line_len = len(line.split())\n if (line_len > 3) or (line_len > 2 and line.endswith(' ')) or \\\n (not PIP_RE.search(line)):\n return\n if PIP_LIST_RE.search(line):\n items = subprocess.check_output(['pip', 'list'], stderr=subprocess.DEVNULL)\n items = items.decode('utf-8').splitlines()\n return set(i.split()[0] for i in items)\n\n if (line_len > 1 and line.endswith(' ')) or line_len > 2:\n # \"pip show \" -> no complete (note space)\n return\n if prefix not in ALL_COMMANDS:\n suggestions = [c for c in ALL_COMMANDS if c.startswith(prefix)]\n if suggestions:\n return suggestions, len(prefix)\n return ALL_COMMANDS, len(prefix)\n", "path": "xonsh/completers/pip.py"}]}
| 1,071 | 281 |
gh_patches_debug_11511
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-307
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Minor differences in what's displayed after user response in the 45.state-management bot
## Version
v4.50b4
## Describe the bug
There's a minor difference in what's displayed after the user responds to the bot. The javascript_nodejs bot exhibits the same behavior (see [issue 1718](https://github.com/microsoft/BotBuilder-Samples/issues/1718) for more information).
## To Reproduce
Run bot per README.md instructions
1. go to bot's folder
2. run `pip install -r requirements.txt`, then run `python app.py`
3. open in Emulator
The csharp_dotnet and javascript_nodejs bots were also run via CLI.
## Expected behavior
Bot should look and function just like bots in other languages (specifically csharp_dotnet bot since there are currently issues with javascript_nodejs sample).
## Screenshots
**csharp_dotnetcore bot**: Bot responds with, "Thanks <string_user_responded_with>. To see conversation data, type anything." after the user's second response. Also welcomes users. This is IMHO the best version/gold standard for the sample currently.

**Python bot**: Bot responds with, "Thanks <string_user_responded_with>." after the user's second response. Also welcomes the user.

**javascript_nodejs bot**: Bot responds with, "Thanks <string_user_responded_with>." after the user's second response. Does not welcome the user (addressed in [issue 1718](https://github.com/microsoft/BotBuilder-Samples/issues/1718)).

## Additional context
To fix: Add **"To see conversation data, type anything."** to the string in **line 62** in 45.state-management/bots/state_management_bot.py
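A sketch of the single-line change being asked for, with the wording borrowed from the C# sample (the helper name here is illustrative):

```python
async def _acknowledge_name(turn_context, user_profile):
    # Acknowledge that we got their name and point the user at the next step.
    await turn_context.send_activity(
        f"Thanks { user_profile.name }. To see conversation data, type anything."
    )
```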
</issue>
<code>
[start of samples/45.state-management/bots/state_management_bot.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import time
5 import pytz
6 from datetime import datetime
7
8 from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState
9 from botbuilder.schema import ChannelAccount
10
11 from data_models import ConversationData, UserProfile
12
13
14 class StateManagementBot(ActivityHandler):
15 def __init__(self, conversation_state: ConversationState, user_state: UserState):
16 if conversation_state is None:
17 raise TypeError(
18 "[StateManagementBot]: Missing parameter. conversation_state is required but None was given"
19 )
20 if user_state is None:
21 raise TypeError(
22 "[StateManagementBot]: Missing parameter. user_state is required but None was given"
23 )
24
25 self.conversation_state = conversation_state
26 self.user_state = user_state
27
28 self.conversation_data = self.conversation_state.create_property(
29 "ConversationData"
30 )
31 self.user_profile = self.conversation_state.create_property("UserProfile")
32
33 async def on_turn(self, turn_context: TurnContext):
34 await super().on_turn(turn_context)
35
36 await self.conversation_state.save_changes(turn_context)
37 await self.user_state.save_changes(turn_context)
38
39 async def on_members_added_activity(
40 self, members_added: [ChannelAccount], turn_context: TurnContext
41 ):
42 for member in members_added:
43 if member.id != turn_context.activity.recipient.id:
44 await turn_context.send_activity(
45 "Welcome to State Bot Sample. Type anything to get started."
46 )
47
48 async def on_message_activity(self, turn_context: TurnContext):
49 # Get the state properties from the turn context.
50 user_profile = await self.user_profile.get(turn_context, UserProfile)
51 conversation_data = await self.conversation_data.get(
52 turn_context, ConversationData
53 )
54
55 if user_profile.name is None:
56 # First time around this is undefined, so we will prompt user for name.
57 if conversation_data.prompted_for_user_name:
58 # Set the name to what the user provided.
59 user_profile.name = turn_context.activity.text
60
61 # Acknowledge that we got their name.
62 await turn_context.send_activity(f"Thanks { user_profile.name }.")
63
64 # Reset the flag to allow the bot to go though the cycle again.
65 conversation_data.prompted_for_user_name = False
66 else:
67 # Prompt the user for their name.
68 await turn_context.send_activity("What is your name?")
69
70 # Set the flag to true, so we don't prompt in the next turn.
71 conversation_data.prompted_for_user_name = True
72 else:
73 # Add message details to the conversation data.
74 conversation_data.timestamp = self.__datetime_from_utc_to_local(
75 turn_context.activity.timestamp
76 )
77 conversation_data.channel_id = turn_context.activity.channel_id
78
79 # Display state data.
80 await turn_context.send_activity(
81 f"{ user_profile.name } sent: { turn_context.activity.text }"
82 )
83 await turn_context.send_activity(
84 f"Message received at: { conversation_data.timestamp }"
85 )
86 await turn_context.send_activity(
87 f"Message received from: { conversation_data.channel_id }"
88 )
89
90 def __datetime_from_utc_to_local(self, utc_datetime):
91 now_timestamp = time.time()
92 offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(
93 now_timestamp
94 )
95 result = utc_datetime + offset
96 return result.strftime("%I:%M:%S %p, %A, %B %d of %Y")
97
[end of samples/45.state-management/bots/state_management_bot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py
--- a/samples/45.state-management/bots/state_management_bot.py
+++ b/samples/45.state-management/bots/state_management_bot.py
@@ -59,7 +59,9 @@
user_profile.name = turn_context.activity.text
# Acknowledge that we got their name.
- await turn_context.send_activity(f"Thanks { user_profile.name }.")
+ await turn_context.send_activity(
+ f"Thanks { user_profile.name }. To see conversation data, type anything."
+ )
# Reset the flag to allow the bot to go though the cycle again.
conversation_data.prompted_for_user_name = False
|
{"golden_diff": "diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py\n--- a/samples/45.state-management/bots/state_management_bot.py\n+++ b/samples/45.state-management/bots/state_management_bot.py\n@@ -59,7 +59,9 @@\n user_profile.name = turn_context.activity.text\n \n # Acknowledge that we got their name.\n- await turn_context.send_activity(f\"Thanks { user_profile.name }.\")\n+ await turn_context.send_activity(\n+ f\"Thanks { user_profile.name }. To see conversation data, type anything.\"\n+ )\n \n # Reset the flag to allow the bot to go though the cycle again.\n conversation_data.prompted_for_user_name = False\n", "issue": "Minor differences in what's displayed after user response 45.state-management bot\n## Version\r\nv4.50b4\r\n\r\n## Describe the bug\r\nThere's a minor difference in what's displayed after the user responds to the bot. The javascript_nodejs bot exhibits the same behavior (see [issue 1718](https://github.com/microsoft/BotBuilder-Samples/issues/1718) for more information).\r\n\r\n## To Reproduce\r\nRun bot per README.md instructions\r\n1. go to bot's folder\r\n2. run `python install -r requirement.txt`, then run `python app.py`\r\n3. open in Emulator\r\n\r\nThe csharp_dotnet and javascript_nodejs bots were also run via CLI. \r\n\r\n## Expected behavior\r\nBot should look and function just like bots in other languages (specifically csharp_dotnet bot since there are currently issues with javascript_nodejs sample). \r\n\r\n## Screenshots\r\n**charp_dotnetcore bot**: Bot responds with, \"Thanks <string_user_responded_with. To see conversation data, type anything.\" after user's second response. Also welcomes users. This is IMHO the best version/gold standard for the sample currently. \r\n\r\n\r\n**Python bot**: Bot responds with, \"Thanks <string_user_responded_with.\" after user's second response. Also welcomes user.\r\n\r\n\r\n**javascript_nodejs bot**: Bot responds with, \"Thanks <string_user_responded_with.\" after user's second response. Does not welcome user (addressed in [issue 1718](https://github.com/microsoft/BotBuilder-Samples/issues/1718)).\r\n\r\n\r\n\r\n## Additional context\r\nTo fix: Add **\"To see conversation data, type anything.\"** to the string in **line 62** in 45.state-management/bots/state_management_bot.py\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport time\nimport pytz\nfrom datetime import datetime\n\nfrom botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\nfrom botbuilder.schema import ChannelAccount\n\nfrom data_models import ConversationData, UserProfile\n\n\nclass StateManagementBot(ActivityHandler):\n def __init__(self, conversation_state: ConversationState, user_state: UserState):\n if conversation_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. conversation_state is required but None was given\"\n )\n if user_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. 
user_state is required but None was given\"\n )\n\n self.conversation_state = conversation_state\n self.user_state = user_state\n\n self.conversation_data = self.conversation_state.create_property(\n \"ConversationData\"\n )\n self.user_profile = self.conversation_state.create_property(\"UserProfile\")\n\n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n\n await self.conversation_state.save_changes(turn_context)\n await self.user_state.save_changes(turn_context)\n\n async def on_members_added_activity(\n self, members_added: [ChannelAccount], turn_context: TurnContext\n ):\n for member in members_added:\n if member.id != turn_context.activity.recipient.id:\n await turn_context.send_activity(\n \"Welcome to State Bot Sample. Type anything to get started.\"\n )\n\n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n user_profile = await self.user_profile.get(turn_context, UserProfile)\n conversation_data = await self.conversation_data.get(\n turn_context, ConversationData\n )\n\n if user_profile.name is None:\n # First time around this is undefined, so we will prompt user for name.\n if conversation_data.prompted_for_user_name:\n # Set the name to what the user provided.\n user_profile.name = turn_context.activity.text\n\n # Acknowledge that we got their name.\n await turn_context.send_activity(f\"Thanks { user_profile.name }.\")\n\n # Reset the flag to allow the bot to go though the cycle again.\n conversation_data.prompted_for_user_name = False\n else:\n # Prompt the user for their name.\n await turn_context.send_activity(\"What is your name?\")\n\n # Set the flag to true, so we don't prompt in the next turn.\n conversation_data.prompted_for_user_name = True\n else:\n # Add message details to the conversation data.\n conversation_data.timestamp = self.__datetime_from_utc_to_local(\n turn_context.activity.timestamp\n )\n conversation_data.channel_id = turn_context.activity.channel_id\n\n # Display state data.\n await turn_context.send_activity(\n f\"{ user_profile.name } sent: { turn_context.activity.text }\"\n )\n await turn_context.send_activity(\n f\"Message received at: { conversation_data.timestamp }\"\n )\n await turn_context.send_activity(\n f\"Message received from: { conversation_data.channel_id }\"\n )\n\n def __datetime_from_utc_to_local(self, utc_datetime):\n now_timestamp = time.time()\n offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(\n now_timestamp\n )\n result = utc_datetime + offset\n return result.strftime(\"%I:%M:%S %p, %A, %B %d of %Y\")\n", "path": "samples/45.state-management/bots/state_management_bot.py"}]}
| 2,056 | 169 |
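
The golden diff for this entry only lengthens the bot's acknowledgement string so the user is told how to surface the stored conversation data. A minimal sketch of that change, written as a plain function rather than the Bot Framework sample itself (the bot classes and `send_activity` call are not reproduced, and the user name below is a made-up example), makes the resulting text easy to check:

```python
# Minimal sketch, independent of botbuilder: only the acknowledgement text
# changes in the patch above.
def acknowledgement(user_name: str) -> str:
    # Text produced by the patched send_activity call in on_message_activity.
    return f"Thanks { user_name }. To see conversation data, type anything."

if __name__ == "__main__":
    print(acknowledgement("Ada"))
```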
gh_patches_debug_3102
|
rasdani/github-patches
|
git_diff
|
pytorch__pytorch-4684
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update documentation for Embedding layer
The documentation corresponding to [torch.nn.Embedding](http://pytorch.org/docs/master/nn.html) mentions that ```Keep in mind that only a limited number of optimizers support sparse gradients: currently it’s optim.SGD (cuda and cpu), and optim.Adagrad (cpu)```. This is outdated and now `SparseAdam` is also supported.
</issue>
<code>
[start of torch/nn/modules/sparse.py]
1 import torch
2 from torch.autograd import Variable
3 from torch.nn.parameter import Parameter
4
5 from .module import Module
6 from .. import functional as F
7
8
9 class Embedding(Module):
10 r"""A simple lookup table that stores embeddings of a fixed dictionary and size.
11
12 This module is often used to store word embeddings and retrieve them using indices.
13 The input to the module is a list of indices, and the output is the corresponding
14 word embeddings.
15
16 Args:
17 num_embeddings (int): size of the dictionary of embeddings
18 embedding_dim (int): the size of each embedding vector
19 padding_idx (int, optional): If given, pads the output with zeros whenever it encounters the index.
20 max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this
21 norm_type (float, optional): The p of the p-norm to compute for the max_norm option
22 scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the frequency of
23 the words in the mini-batch.
24 sparse (boolean, optional): if ``True``, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for
25 more details regarding sparse gradients.
26
27 Attributes:
28 weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
29
30 Shape:
31 - Input: LongTensor `(N, W)`, N = mini-batch, W = number of indices to extract per mini-batch
32 - Output: `(N, W, embedding_dim)`
33
34 Notes:
35 Keep in mind that only a limited number of optimizers support
36 sparse gradients: currently it's `optim.SGD` (`cuda` and `cpu`),
37 and `optim.Adagrad` (`cpu`)
38
39 Examples::
40
41 >>> # an Embedding module containing 10 tensors of size 3
42 >>> embedding = nn.Embedding(10, 3)
43 >>> # a batch of 2 samples of 4 indices each
44 >>> input = Variable(torch.LongTensor([[1,2,4,5],[4,3,2,9]]))
45 >>> embedding(input)
46
47 Variable containing:
48 (0 ,.,.) =
49 -1.0822 1.2522 0.2434
50 0.8393 -0.6062 -0.3348
51 0.6597 0.0350 0.0837
52 0.5521 0.9447 0.0498
53
54 (1 ,.,.) =
55 0.6597 0.0350 0.0837
56 -0.1527 0.0877 0.4260
57 0.8393 -0.6062 -0.3348
58 -0.8738 -0.9054 0.4281
59 [torch.FloatTensor of size 2x4x3]
60
61 >>> # example with padding_idx
62 >>> embedding = nn.Embedding(10, 3, padding_idx=0)
63 >>> input = Variable(torch.LongTensor([[0,2,0,5]]))
64 >>> embedding(input)
65
66 Variable containing:
67 (0 ,.,.) =
68 0.0000 0.0000 0.0000
69 0.3452 0.4937 -0.9361
70 0.0000 0.0000 0.0000
71 0.0706 -2.1962 -0.6276
72 [torch.FloatTensor of size 1x4x3]
73
74 """
75
76 def __init__(self, num_embeddings, embedding_dim, padding_idx=None,
77 max_norm=None, norm_type=2, scale_grad_by_freq=False,
78 sparse=False):
79 super(Embedding, self).__init__()
80 self.num_embeddings = num_embeddings
81 self.embedding_dim = embedding_dim
82 if padding_idx is not None:
83 if padding_idx > 0:
84 assert padding_idx < self.num_embeddings, 'Padding_idx must be within num_embeddings'
85 elif padding_idx < 0:
86 assert padding_idx >= -self.num_embeddings, 'Padding_idx must be within num_embeddings'
87 padding_idx = self.num_embeddings + padding_idx
88 self.padding_idx = padding_idx
89 self.max_norm = max_norm
90 self.norm_type = norm_type
91 self.scale_grad_by_freq = scale_grad_by_freq
92 self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
93 self.sparse = sparse
94
95 self.reset_parameters()
96
97 def reset_parameters(self):
98 self.weight.data.normal_(0, 1)
99 if self.padding_idx is not None:
100 self.weight.data[self.padding_idx].fill_(0)
101
102 def forward(self, input):
103 return F.embedding(
104 input, self.weight, self.padding_idx, self.max_norm,
105 self.norm_type, self.scale_grad_by_freq, self.sparse)
106
107 def __repr__(self):
108 s = '{name}({num_embeddings}, {embedding_dim}'
109 if self.padding_idx is not None:
110 s += ', padding_idx={padding_idx}'
111 if self.max_norm is not None:
112 s += ', max_norm={max_norm}'
113 if self.norm_type != 2:
114 s += ', norm_type={norm_type}'
115 if self.scale_grad_by_freq is not False:
116 s += ', scale_grad_by_freq={scale_grad_by_freq}'
117 if self.sparse is not False:
118 s += ', sparse=True'
119 s += ')'
120 return s.format(name=self.__class__.__name__, **self.__dict__)
121
122
123 class EmbeddingBag(Module):
124 r"""Computes sums or means of 'bags' of embeddings, without instantiating the
125 intermediate embeddings.
126
127 For bags of constant length,
128 * nn.EmbeddingBag with `mode=sum` is equivalent to nn.Embedding followed by `torch.sum(dim=1)`
129 * with `mode=mean` is equivalent to nn.Embedding followed by `torch.mean(dim=1)`
130
131 However, nn.EmbeddingBag is much more time and memory efficient than using a chain of these
132 operations.
133
134 Args:
135 num_embeddings (int): size of the dictionary of embeddings
136 embedding_dim (int): the size of each embedding vector
137 max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this
138 norm_type (float, optional): The p of the p-norm to compute for the max_norm option
139 scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the frequency of
140 the words in the dictionary.
141 mode (string, optional): 'sum' | 'mean'. Specifies the way to reduce the bag. Default: 'mean'
142
143 Attributes:
144 weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
145
146 Inputs: input, offsets
147 - **input** (N or BxN): LongTensor containing the indices of the embeddings
148 to extract. When `input` is 1D Tensor of shape `N`,
149 an `offsets` Tensor is given, that contains the
150 starting position of each new sequence in the
151 mini-batch.
152 - **offsets** (B or None): LongTensor containing the starting positions of
153 each sample in a mini-batch of variable length
154 sequences. If `input` is 2D (BxN), then offsets
155 does not need to be given, as the `input` is
156 treated as a mini-batch of fixed length sequences
157 of length `N` each.
158
159
160 Shape:
161 - Input: LongTensor `N`, N = number of embeddings to extract
162 (or) LongTensor `BxN`, B = number of sequences in mini-batch,
163 N = number of embeddings per sequence
164 - Offsets: LongTensor `B`, B = number of bags. The values are the
165 offsets in `input` for each bag, i.e. the cumsum of lengths.
166 Offsets is not given if Input is 2D `BxN` Tensor,
167 the input is considered to be of fixed-length sequences
168 - Output: `(B, embedding_dim)`
169
170 Examples::
171
172 >>> # an Embedding module containing 10 tensors of size 3
173 >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
174 >>> # a batch of 2 samples of 4 indices each
175 >>> input = Variable(torch.LongTensor([1,2,4,5,4,3,2,9]))
176 >>> offsets = Variable(torch.LongTensor([0,4]))
177 >>> embedding_sum(input, offsets)
178
179 Variable containing:
180 -0.7296 -4.6926 0.3295
181 -0.5186 -0.5631 -0.2792
182 [torch.FloatTensor of size 2x3]
183
184 """
185
186 def __init__(self, num_embeddings, embedding_dim,
187 max_norm=None, norm_type=2, scale_grad_by_freq=False,
188 mode='mean'):
189 super(EmbeddingBag, self).__init__()
190 self.num_embeddings = num_embeddings
191 self.embedding_dim = embedding_dim
192 self.max_norm = max_norm
193 self.norm_type = norm_type
194 self.scale_grad_by_freq = scale_grad_by_freq
195 self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
196 self.mode = mode
197
198 self.reset_parameters()
199
200 def reset_parameters(self):
201 self.weight.data.normal_(0, 1)
202
203 def forward(self, input, offsets=None):
204 return F.embedding_bag(self.weight, input, offsets,
205 self.max_norm, self.norm_type,
206 self.scale_grad_by_freq, self.mode)
207
208 def __repr__(self):
209 s = '{name}({num_embeddings}, {embedding_dim}'
210 if self.max_norm is not None:
211 s += ', max_norm={max_norm}'
212 if self.norm_type != 2:
213 s += ', norm_type={norm_type}'
214 if self.scale_grad_by_freq is not False:
215 s += ', scale_grad_by_freq={scale_grad_by_freq}'
216 s += ', mode={mode}'
217 s += ')'
218 return s.format(name=self.__class__.__name__, **self.__dict__)
219
220 # TODO: SparseLinear
221
[end of torch/nn/modules/sparse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torch/nn/modules/sparse.py b/torch/nn/modules/sparse.py
--- a/torch/nn/modules/sparse.py
+++ b/torch/nn/modules/sparse.py
@@ -34,7 +34,7 @@
Notes:
Keep in mind that only a limited number of optimizers support
sparse gradients: currently it's `optim.SGD` (`cuda` and `cpu`),
- and `optim.Adagrad` (`cpu`)
+ `optim.SparseAdam` (`cuda` and `cpu`) and `optim.Adagrad` (`cpu`)
Examples::
|
{"golden_diff": "diff --git a/torch/nn/modules/sparse.py b/torch/nn/modules/sparse.py\n--- a/torch/nn/modules/sparse.py\n+++ b/torch/nn/modules/sparse.py\n@@ -34,7 +34,7 @@\n Notes:\n Keep in mind that only a limited number of optimizers support\n sparse gradients: currently it's `optim.SGD` (`cuda` and `cpu`),\n- and `optim.Adagrad` (`cpu`)\n+ `optim.SparseAdam` (`cuda` and `cpu`) and `optim.Adagrad` (`cpu`)\n \n Examples::\n", "issue": "Update documentation for Embedding layer\nThe documentation corresponding [torch.nn.Embedding](http://pytorch.org/docs/master/nn.html) mentions that ```Keep in mind that only a limited number of optimizers support sparse gradients: currently it\u2019s optim.SGD (cuda and cpu), and optim.Adagrad (cpu)```. This is outdated and now `SparseAdam` is also supported.\nUpdate documentation for Embedding layer\nThe documentation corresponding [torch.nn.Embedding](http://pytorch.org/docs/master/nn.html) mentions that ```Keep in mind that only a limited number of optimizers support sparse gradients: currently it\u2019s optim.SGD (cuda and cpu), and optim.Adagrad (cpu)```. This is outdated and now `SparseAdam` is also supported.\n", "before_files": [{"content": "import torch\nfrom torch.autograd import Variable\nfrom torch.nn.parameter import Parameter\n\nfrom .module import Module\nfrom .. import functional as F\n\n\nclass Embedding(Module):\n r\"\"\"A simple lookup table that stores embeddings of a fixed dictionary and size.\n\n This module is often used to store word embeddings and retrieve them using indices.\n The input to the module is a list of indices, and the output is the corresponding\n word embeddings.\n\n Args:\n num_embeddings (int): size of the dictionary of embeddings\n embedding_dim (int): the size of each embedding vector\n padding_idx (int, optional): If given, pads the output with zeros whenever it encounters the index.\n max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this\n norm_type (float, optional): The p of the p-norm to compute for the max_norm option\n scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the frequency of\n the words in the mini-batch.\n sparse (boolean, optional): if ``True``, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for\n more details regarding sparse gradients.\n\n Attributes:\n weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)\n\n Shape:\n - Input: LongTensor `(N, W)`, N = mini-batch, W = number of indices to extract per mini-batch\n - Output: `(N, W, embedding_dim)`\n\n Notes:\n Keep in mind that only a limited number of optimizers support\n sparse gradients: currently it's `optim.SGD` (`cuda` and `cpu`),\n and `optim.Adagrad` (`cpu`)\n\n Examples::\n\n >>> # an Embedding module containing 10 tensors of size 3\n >>> embedding = nn.Embedding(10, 3)\n >>> # a batch of 2 samples of 4 indices each\n >>> input = Variable(torch.LongTensor([[1,2,4,5],[4,3,2,9]]))\n >>> embedding(input)\n\n Variable containing:\n (0 ,.,.) =\n -1.0822 1.2522 0.2434\n 0.8393 -0.6062 -0.3348\n 0.6597 0.0350 0.0837\n 0.5521 0.9447 0.0498\n\n (1 ,.,.) =\n 0.6597 0.0350 0.0837\n -0.1527 0.0877 0.4260\n 0.8393 -0.6062 -0.3348\n -0.8738 -0.9054 0.4281\n [torch.FloatTensor of size 2x4x3]\n\n >>> # example with padding_idx\n >>> embedding = nn.Embedding(10, 3, padding_idx=0)\n >>> input = Variable(torch.LongTensor([[0,2,0,5]]))\n >>> embedding(input)\n\n Variable containing:\n (0 ,.,.) 
=\n 0.0000 0.0000 0.0000\n 0.3452 0.4937 -0.9361\n 0.0000 0.0000 0.0000\n 0.0706 -2.1962 -0.6276\n [torch.FloatTensor of size 1x4x3]\n\n \"\"\"\n\n def __init__(self, num_embeddings, embedding_dim, padding_idx=None,\n max_norm=None, norm_type=2, scale_grad_by_freq=False,\n sparse=False):\n super(Embedding, self).__init__()\n self.num_embeddings = num_embeddings\n self.embedding_dim = embedding_dim\n if padding_idx is not None:\n if padding_idx > 0:\n assert padding_idx < self.num_embeddings, 'Padding_idx must be within num_embeddings'\n elif padding_idx < 0:\n assert padding_idx >= -self.num_embeddings, 'Padding_idx must be within num_embeddings'\n padding_idx = self.num_embeddings + padding_idx\n self.padding_idx = padding_idx\n self.max_norm = max_norm\n self.norm_type = norm_type\n self.scale_grad_by_freq = scale_grad_by_freq\n self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))\n self.sparse = sparse\n\n self.reset_parameters()\n\n def reset_parameters(self):\n self.weight.data.normal_(0, 1)\n if self.padding_idx is not None:\n self.weight.data[self.padding_idx].fill_(0)\n\n def forward(self, input):\n return F.embedding(\n input, self.weight, self.padding_idx, self.max_norm,\n self.norm_type, self.scale_grad_by_freq, self.sparse)\n\n def __repr__(self):\n s = '{name}({num_embeddings}, {embedding_dim}'\n if self.padding_idx is not None:\n s += ', padding_idx={padding_idx}'\n if self.max_norm is not None:\n s += ', max_norm={max_norm}'\n if self.norm_type != 2:\n s += ', norm_type={norm_type}'\n if self.scale_grad_by_freq is not False:\n s += ', scale_grad_by_freq={scale_grad_by_freq}'\n if self.sparse is not False:\n s += ', sparse=True'\n s += ')'\n return s.format(name=self.__class__.__name__, **self.__dict__)\n\n\nclass EmbeddingBag(Module):\n r\"\"\"Computes sums or means of 'bags' of embeddings, without instantiating the\n intermediate embeddings.\n\n For bags of constant length,\n * nn.EmbeddingBag with `mode=sum` is equivalent to nn.Embedding followed by `torch.sum(dim=1)`\n * with `mode=mean` is equivalent to nn.Embedding followed by `torch.mean(dim=1)`\n\n However, nn.EmbeddingBag is much more time and memory efficient than using a chain of these\n operations.\n\n Args:\n num_embeddings (int): size of the dictionary of embeddings\n embedding_dim (int): the size of each embedding vector\n max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this\n norm_type (float, optional): The p of the p-norm to compute for the max_norm option\n scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the frequency of\n the words in the dictionary.\n mode (string, optional): 'sum' | 'mean'. Specifies the way to reduce the bag. Default: 'mean'\n\n Attributes:\n weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)\n\n Inputs: input, offsets\n - **input** (N or BxN): LongTensor containing the indices of the embeddings\n to extract. When `input` is 1D Tensor of shape `N`,\n an `offsets` Tensor is given, that contains the\n starting position of each new sequence in the\n mini-batch.\n - **offsets** (B or None): LongTensor containing the starting positions of\n each sample in a mini-batch of variable length\n sequences. 
If `input` is 2D (BxN), then offsets\n does not need to be given, as the `input` is\n treated as a mini-batch of fixed length sequences\n of length `N` each.\n\n\n Shape:\n - Input: LongTensor `N`, N = number of embeddings to extract\n (or) LongTensor `BxN`, B = number of sequences in mini-batch,\n N = number of embeddings per sequence\n - Offsets: LongTensor `B`, B = number of bags. The values are the\n offsets in `input` for each bag, i.e. the cumsum of lengths.\n Offsets is not given if Input is 2D `BxN` Tensor,\n the input is considered to be of fixed-length sequences\n - Output: `(B, embedding_dim)`\n\n Examples::\n\n >>> # an Embedding module containing 10 tensors of size 3\n >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')\n >>> # a batch of 2 samples of 4 indices each\n >>> input = Variable(torch.LongTensor([1,2,4,5,4,3,2,9]))\n >>> offsets = Variable(torch.LongTensor([0,4]))\n >>> embedding_sum(input, offsets)\n\n Variable containing:\n -0.7296 -4.6926 0.3295\n -0.5186 -0.5631 -0.2792\n [torch.FloatTensor of size 2x3]\n\n \"\"\"\n\n def __init__(self, num_embeddings, embedding_dim,\n max_norm=None, norm_type=2, scale_grad_by_freq=False,\n mode='mean'):\n super(EmbeddingBag, self).__init__()\n self.num_embeddings = num_embeddings\n self.embedding_dim = embedding_dim\n self.max_norm = max_norm\n self.norm_type = norm_type\n self.scale_grad_by_freq = scale_grad_by_freq\n self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))\n self.mode = mode\n\n self.reset_parameters()\n\n def reset_parameters(self):\n self.weight.data.normal_(0, 1)\n\n def forward(self, input, offsets=None):\n return F.embedding_bag(self.weight, input, offsets,\n self.max_norm, self.norm_type,\n self.scale_grad_by_freq, self.mode)\n\n def __repr__(self):\n s = '{name}({num_embeddings}, {embedding_dim}'\n if self.max_norm is not None:\n s += ', max_norm={max_norm}'\n if self.norm_type != 2:\n s += ', norm_type={norm_type}'\n if self.scale_grad_by_freq is not False:\n s += ', scale_grad_by_freq={scale_grad_by_freq}'\n s += ', mode={mode}'\n s += ')'\n return s.format(name=self.__class__.__name__, **self.__dict__)\n\n# TODO: SparseLinear\n", "path": "torch/nn/modules/sparse.py"}]}
| 3,569 | 133 |
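
The patch above is documentation-only: the docstring's list of optimizers that accept sparse gradients gains `optim.SparseAdam`. A short sketch of why that matters, written against a newer PyTorch API than the 0.3-era snapshot shown in this entry (plain tensors instead of `Variable`; `torch.optim.SparseAdam` assumed available in the installed build), pairs a `sparse=True` embedding with that optimizer:

```python
# Sketch assuming a modern PyTorch install: Embedding(sparse=True) yields a
# sparse gradient for its weight, and SparseAdam is one of the optimizers
# that can consume it -- the combination the updated docstring documents.
import torch
import torch.nn as nn

embedding = nn.Embedding(10, 3, sparse=True)
optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=0.1)

indices = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])
loss = embedding(indices).sum()          # toy objective, just to get gradients
loss.backward()
print(embedding.weight.grad.is_sparse)   # True: the gradient is a sparse tensor
optimizer.step()                         # SparseAdam applies the sparse update
optimizer.zero_grad()
```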
gh_patches_debug_7489
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-1020
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Search results seem to need some relevancy tweaking
Searches seem to have some relevancy issues. For example:

Or: https://warehouse.python.org/search/?q=django&page=1 - Django itself doesn't seem to appear in the first half-dozen or so pages (I gave up paging before I found it).
Jacob
</issue>
<code>
[start of warehouse/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from pyramid.httpexceptions import (
14 HTTPException, HTTPSeeOther, HTTPMovedPermanently,
15 )
16 from pyramid.view import (
17 notfound_view_config, forbidden_view_config, view_config,
18 )
19 from sqlalchemy import func
20 from sqlalchemy.orm import aliased, joinedload
21
22 from warehouse.accounts import REDIRECT_FIELD_NAME
23 from warehouse.accounts.models import User
24 from warehouse.cache.origin import origin_cache
25 from warehouse.cache.http import cache_control
26 from warehouse.csrf import csrf_exempt
27 from warehouse.packaging.models import Project, Release, File
28 from warehouse.sessions import uses_session
29 from warehouse.utils.row_counter import RowCount
30 from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory
31
32
33 @view_config(context=HTTPException, decorator=[csrf_exempt])
34 @notfound_view_config(
35 append_slash=HTTPMovedPermanently,
36 decorator=[csrf_exempt],
37 )
38 def httpexception_view(exc, request):
39 return exc
40
41
42 @forbidden_view_config()
43 def forbidden(exc, request):
44 # If the forbidden error is because the user isn't logged in, then we'll
45 # redirect them to the log in page.
46 if request.authenticated_userid is None:
47 url = request.route_url(
48 "accounts.login",
49 _query={REDIRECT_FIELD_NAME: request.path_qs},
50 )
51 return HTTPSeeOther(url)
52
53 # If we've reached here, then the user is logged in and they are genuinely
54 # not allowed to access this page.
55 # TODO: Style the forbidden page.
56 return exc
57
58
59 @view_config(
60 route_name="robots.txt",
61 renderer="robots.txt",
62 decorator=[
63 cache_control(1 * 24 * 60 * 60), # 1 day
64 origin_cache(
65 1 * 24 * 60 * 60, # 1 day
66 stale_while_revalidate=6 * 60 * 60, # 6 hours
67 stale_if_error=1 * 24 * 60 * 60, # 1 day
68 ),
69 ],
70 )
71 def robotstxt(request):
72 request.response.content_type = "text/plain"
73 return {}
74
75
76 @view_config(
77 route_name="index",
78 renderer="index.html",
79 decorator=[
80 origin_cache(
81 1 * 60 * 60, # 1 hour
82 stale_while_revalidate=10 * 60, # 10 minutes
83 stale_if_error=1 * 24 * 60 * 60, # 1 day
84 keys=["all-projects"],
85 ),
86 ]
87 )
88 def index(request):
89 project_names = [
90 r[0] for r in (
91 request.db.query(File.name)
92 .group_by(File.name)
93 .order_by(func.sum(File.downloads).desc())
94 .limit(5)
95 .all())
96 ]
97 release_a = aliased(
98 Release,
99 request.db.query(Release)
100 .distinct(Release.name)
101 .filter(Release.name.in_(project_names))
102 .order_by(Release.name, Release._pypi_ordering.desc())
103 .subquery(),
104 )
105 top_projects = (
106 request.db.query(release_a)
107 .options(joinedload(release_a.project),
108 joinedload(release_a.uploader))
109 .order_by(func.array_idx(project_names, release_a.name))
110 .all()
111 )
112
113 latest_releases = (
114 request.db.query(Release)
115 .options(joinedload(Release.project),
116 joinedload(Release.uploader))
117 .order_by(Release.created.desc())
118 .limit(5)
119 .all()
120 )
121
122 counts = dict(
123 request.db.query(RowCount.table_name, RowCount.count)
124 .filter(
125 RowCount.table_name.in_([
126 Project.__tablename__,
127 Release.__tablename__,
128 File.__tablename__,
129 User.__tablename__,
130 ]))
131 .all()
132 )
133
134 return {
135 "latest_releases": latest_releases,
136 "top_projects": top_projects,
137 "num_projects": counts.get(Project.__tablename__, 0),
138 "num_releases": counts.get(Release.__tablename__, 0),
139 "num_files": counts.get(File.__tablename__, 0),
140 "num_users": counts.get(User.__tablename__, 0),
141 }
142
143
144 @view_config(
145 route_name="search",
146 renderer="search/results.html",
147 decorator=[
148 origin_cache(
149 1 * 60 * 60, # 1 hour
150 stale_while_revalidate=10 * 60, # 10 minutes
151 stale_if_error=1 * 24 * 60 * 60, # 1 day
152 keys=["all-projects"],
153 )
154 ],
155 )
156 def search(request):
157 if request.params.get("q"):
158 query = request.es.query(
159 "multi_match",
160 query=request.params["q"],
161 fields=[
162 "name", "version", "author", "author_email", "maintainer",
163 "maintainer_email", "home_page", "license", "summary",
164 "description", "keywords", "platform", "download_url",
165 ],
166 ).suggest(
167 name="name_suggestion",
168 text=request.params["q"],
169 term={"field": "name"}
170 )
171 else:
172 query = request.es.query()
173
174 if request.params.get("o"):
175 query = query.sort(request.params["o"])
176
177 page = ElasticsearchPage(
178 query,
179 page=int(request.params.get("page", 1)),
180 url_maker=paginate_url_factory(request),
181 )
182
183 return {
184 "page": page,
185 "term": request.params.get("q"),
186 "order": request.params.get("o"),
187 }
188
189
190 @view_config(
191 route_name="includes.current-user-indicator",
192 renderer="includes/current-user-indicator.html",
193 decorator=[uses_session],
194 )
195 def current_user_indicator(request):
196 return {}
197
[end of warehouse/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/views.py b/warehouse/views.py
--- a/warehouse/views.py
+++ b/warehouse/views.py
@@ -159,7 +159,7 @@
"multi_match",
query=request.params["q"],
fields=[
- "name", "version", "author", "author_email", "maintainer",
+ "name^2", "version", "author", "author_email", "maintainer",
"maintainer_email", "home_page", "license", "summary",
"description", "keywords", "platform", "download_url",
],
|
{"golden_diff": "diff --git a/warehouse/views.py b/warehouse/views.py\n--- a/warehouse/views.py\n+++ b/warehouse/views.py\n@@ -159,7 +159,7 @@\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n- \"name\", \"version\", \"author\", \"author_email\", \"maintainer\",\n+ \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n \"description\", \"keywords\", \"platform\", \"download_url\",\n ],\n", "issue": "Search results seem to need some relevancy tweaking\nSearches seem to have some relevancy issues. For example:\n\n\n\nOr: https://warehouse.python.org/search/?q=django&page=1 - Django itself doesn't seem to appear in the first half-dozen or so pages (I gave up paging before I found it).\n\nJacob\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.csrf import csrf_exempt\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.sessions import uses_session\nfrom warehouse.utils.row_counter import RowCount\nfrom warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\n@view_config(context=HTTPException, decorator=[csrf_exempt])\n@notfound_view_config(\n append_slash=HTTPMovedPermanently,\n decorator=[csrf_exempt],\n)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n 
.group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project),\n joinedload(release_a.uploader))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project),\n joinedload(Release.uploader))\n .order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n if request.params.get(\"q\"):\n query = request.es.query(\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n \"name\", \"version\", \"author\", \"author_email\", \"maintainer\",\n \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n \"description\", \"keywords\", \"platform\", \"download_url\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n text=request.params[\"q\"],\n term={\"field\": \"name\"}\n )\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n page = ElasticsearchPage(\n query,\n page=int(request.params.get(\"page\", 1)),\n url_maker=paginate_url_factory(request),\n )\n\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\"),\n \"order\": request.params.get(\"o\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n decorator=[uses_session],\n)\ndef current_user_indicator(request):\n return {}\n", "path": "warehouse/views.py"}]}
| 2,547 | 129 |
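
The relevancy fix in this entry is a field boost: `name` becomes `name^2` inside the `multi_match` query, so a hit on the project name outweighs a hit buried in a long description. A hedged sketch of the same idea with a bare `elasticsearch_dsl.Search` object (standing in for Warehouse's `request.es`, which is not reproduced here) shows where the boost lives; `.to_dict()` prints the request body without needing a live cluster:

```python
# Sketch assuming the elasticsearch-dsl package: "field^N" multiplies that
# field's relevance score, so name matches rank ahead of description matches.
from elasticsearch_dsl import Search

query = Search().query(
    "multi_match",
    query="django",          # the search term from the report
    fields=[
        "name^2",            # boosted, as in the patch above
        "summary",
        "description",       # still searched, just weighted lower
    ],
)

print(query.to_dict())       # raw Elasticsearch request body
```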
gh_patches_debug_5513
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-2345
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pre-push hook failed with "ValueError: too many values to unpack (expected 4)"
### describe your issue
I ran
```
git push --dry-run origin HEAD^"{/^[a-zA-Z]+: }":refs/for/main%wip
```
and expected the hook to run properly, but it failed with a somewhat subtle error:
```
An unexpected error has occurred: ValueError: too many values to unpack (expected 4)
Check the log at $HOME/.cache/pre-commit/pre-commit.log
```
It was more clear from the `pre-commit.log` file, though (see below). I reproduced the issue using HEAD (f9473e756decd141a9834994840c1cb124564c2a) as well.
### pre-commit --version
2.12.1
### .pre-commit-config.yaml
```yaml
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: check-added-large-files
- id: check-json
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/wwade/pre-commit-golang
rev: 503834f5c0933fbdf9a55e92329c1957e48f6d0a
hooks:
- id: go-fmt
- id: go-imports
- id: go-cyclo
args: [-over=15]
- id: validate-toml
- id: golangci-lint
- id: go-unit-tests
- id: go-mod-tidy
```
### ~/.cache/pre-commit/pre-commit.log (if present)
### version information
```
pre-commit version: 2.12.1
sys.version:
3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0]
sys.executable: /usr/bin/python3
os.name: posix
sys.platform: linux
```
### error information
```
An unexpected error has occurred: ValueError: too many values to unpack (expected 4)
```
```
Traceback (most recent call last):
File "$HOME/.local/lib/python3.8/site-packages/pre_commit/error_handler.py", line 65, in error_handler
yield
File "$HOME/.local/lib/python3.8/site-packages/pre_commit/main.py", line 357, in main
return hook_impl(
File "$HOME/.local/lib/python3.8/site-packages/pre_commit/commands/hook_impl.py", line 223, in hook_impl
ns = _run_ns(hook_type, color, args, stdin)
File "$HOME/.local/lib/python3.8/site-packages/pre_commit/commands/hook_impl.py", line 195, in _run_ns
return _pre_push_ns(color, args, stdin)
File "$HOME/.local/lib/python3.8/site-packages/pre_commit/commands/hook_impl.py", line 113, in _pre_push_ns
_, local_sha, remote_branch, remote_sha = line.split()
ValueError: too many values to unpack (expected 4)
```
</issue>
<code>
[start of pre_commit/commands/hook_impl.py]
1 from __future__ import annotations
2
3 import argparse
4 import os.path
5 import subprocess
6 import sys
7 from typing import Sequence
8
9 from pre_commit.commands.run import run
10 from pre_commit.envcontext import envcontext
11 from pre_commit.parse_shebang import normalize_cmd
12 from pre_commit.store import Store
13
14 Z40 = '0' * 40
15
16
17 def _run_legacy(
18 hook_type: str,
19 hook_dir: str,
20 args: Sequence[str],
21 ) -> tuple[int, bytes]:
22 if os.environ.get('PRE_COMMIT_RUNNING_LEGACY'):
23 raise SystemExit(
24 f"bug: pre-commit's script is installed in migration mode\n"
25 f'run `pre-commit install -f --hook-type {hook_type}` to fix '
26 f'this\n\n'
27 f'Please report this bug at '
28 f'https://github.com/pre-commit/pre-commit/issues',
29 )
30
31 if hook_type == 'pre-push':
32 stdin = sys.stdin.buffer.read()
33 else:
34 stdin = b''
35
36 # not running in legacy mode
37 legacy_hook = os.path.join(hook_dir, f'{hook_type}.legacy')
38 if not os.access(legacy_hook, os.X_OK):
39 return 0, stdin
40
41 with envcontext((('PRE_COMMIT_RUNNING_LEGACY', '1'),)):
42 cmd = normalize_cmd((legacy_hook, *args))
43 return subprocess.run(cmd, input=stdin).returncode, stdin
44
45
46 def _validate_config(
47 retv: int,
48 config: str,
49 skip_on_missing_config: bool,
50 ) -> None:
51 if not os.path.isfile(config):
52 if skip_on_missing_config or os.getenv('PRE_COMMIT_ALLOW_NO_CONFIG'):
53 print(f'`{config}` config file not found. Skipping `pre-commit`.')
54 raise SystemExit(retv)
55 else:
56 print(
57 f'No {config} file was found\n'
58 f'- To temporarily silence this, run '
59 f'`PRE_COMMIT_ALLOW_NO_CONFIG=1 git ...`\n'
60 f'- To permanently silence this, install pre-commit with the '
61 f'--allow-missing-config option\n'
62 f'- To uninstall pre-commit run `pre-commit uninstall`',
63 )
64 raise SystemExit(1)
65
66
67 def _ns(
68 hook_type: str,
69 color: bool,
70 *,
71 all_files: bool = False,
72 remote_branch: str | None = None,
73 local_branch: str | None = None,
74 from_ref: str | None = None,
75 to_ref: str | None = None,
76 remote_name: str | None = None,
77 remote_url: str | None = None,
78 commit_msg_filename: str | None = None,
79 checkout_type: str | None = None,
80 is_squash_merge: str | None = None,
81 rewrite_command: str | None = None,
82 ) -> argparse.Namespace:
83 return argparse.Namespace(
84 color=color,
85 hook_stage=hook_type.replace('pre-', ''),
86 remote_branch=remote_branch,
87 local_branch=local_branch,
88 from_ref=from_ref,
89 to_ref=to_ref,
90 remote_name=remote_name,
91 remote_url=remote_url,
92 commit_msg_filename=commit_msg_filename,
93 all_files=all_files,
94 checkout_type=checkout_type,
95 is_squash_merge=is_squash_merge,
96 rewrite_command=rewrite_command,
97 files=(),
98 hook=None,
99 verbose=False,
100 show_diff_on_failure=False,
101 )
102
103
104 def _rev_exists(rev: str) -> bool:
105 return not subprocess.call(('git', 'rev-list', '--quiet', rev))
106
107
108 def _pre_push_ns(
109 color: bool,
110 args: Sequence[str],
111 stdin: bytes,
112 ) -> argparse.Namespace | None:
113 remote_name = args[0]
114 remote_url = args[1]
115
116 for line in stdin.decode().splitlines():
117 local_branch, local_sha, remote_branch, remote_sha = line.split()
118 if local_sha == Z40:
119 continue
120 elif remote_sha != Z40 and _rev_exists(remote_sha):
121 return _ns(
122 'pre-push', color,
123 from_ref=remote_sha, to_ref=local_sha,
124 remote_branch=remote_branch,
125 local_branch=local_branch,
126 remote_name=remote_name, remote_url=remote_url,
127 )
128 else:
129 # ancestors not found in remote
130 ancestors = subprocess.check_output((
131 'git', 'rev-list', local_sha, '--topo-order', '--reverse',
132 '--not', f'--remotes={remote_name}',
133 )).decode().strip()
134 if not ancestors:
135 continue
136 else:
137 first_ancestor = ancestors.splitlines()[0]
138 cmd = ('git', 'rev-list', '--max-parents=0', local_sha)
139 roots = set(subprocess.check_output(cmd).decode().splitlines())
140 if first_ancestor in roots:
141 # pushing the whole tree including root commit
142 return _ns(
143 'pre-push', color,
144 all_files=True,
145 remote_name=remote_name, remote_url=remote_url,
146 remote_branch=remote_branch,
147 local_branch=local_branch,
148 )
149 else:
150 rev_cmd = ('git', 'rev-parse', f'{first_ancestor}^')
151 source = subprocess.check_output(rev_cmd).decode().strip()
152 return _ns(
153 'pre-push', color,
154 from_ref=source, to_ref=local_sha,
155 remote_name=remote_name, remote_url=remote_url,
156 remote_branch=remote_branch,
157 local_branch=local_branch,
158 )
159
160 # nothing to push
161 return None
162
163
164 _EXPECTED_ARG_LENGTH_BY_HOOK = {
165 'commit-msg': 1,
166 'post-checkout': 3,
167 'post-commit': 0,
168 'pre-commit': 0,
169 'pre-merge-commit': 0,
170 'post-merge': 1,
171 'post-rewrite': 1,
172 'pre-push': 2,
173 }
174
175
176 def _check_args_length(hook_type: str, args: Sequence[str]) -> None:
177 if hook_type == 'prepare-commit-msg':
178 if len(args) < 1 or len(args) > 3:
179 raise SystemExit(
180 f'hook-impl for {hook_type} expected 1, 2, or 3 arguments '
181 f'but got {len(args)}: {args}',
182 )
183 elif hook_type in _EXPECTED_ARG_LENGTH_BY_HOOK:
184 expected = _EXPECTED_ARG_LENGTH_BY_HOOK[hook_type]
185 if len(args) != expected:
186 arguments_s = 'argument' if expected == 1 else 'arguments'
187 raise SystemExit(
188 f'hook-impl for {hook_type} expected {expected} {arguments_s} '
189 f'but got {len(args)}: {args}',
190 )
191 else:
192 raise AssertionError(f'unexpected hook type: {hook_type}')
193
194
195 def _run_ns(
196 hook_type: str,
197 color: bool,
198 args: Sequence[str],
199 stdin: bytes,
200 ) -> argparse.Namespace | None:
201 _check_args_length(hook_type, args)
202 if hook_type == 'pre-push':
203 return _pre_push_ns(color, args, stdin)
204 elif hook_type in {'commit-msg', 'prepare-commit-msg'}:
205 return _ns(hook_type, color, commit_msg_filename=args[0])
206 elif hook_type in {'post-commit', 'pre-merge-commit', 'pre-commit'}:
207 return _ns(hook_type, color)
208 elif hook_type == 'post-checkout':
209 return _ns(
210 hook_type, color,
211 from_ref=args[0], to_ref=args[1], checkout_type=args[2],
212 )
213 elif hook_type == 'post-merge':
214 return _ns(hook_type, color, is_squash_merge=args[0])
215 elif hook_type == 'post-rewrite':
216 return _ns(hook_type, color, rewrite_command=args[0])
217 else:
218 raise AssertionError(f'unexpected hook type: {hook_type}')
219
220
221 def hook_impl(
222 store: Store,
223 *,
224 config: str,
225 color: bool,
226 hook_type: str,
227 hook_dir: str,
228 skip_on_missing_config: bool,
229 args: Sequence[str],
230 ) -> int:
231 retv, stdin = _run_legacy(hook_type, hook_dir, args)
232 _validate_config(retv, config, skip_on_missing_config)
233 ns = _run_ns(hook_type, color, args, stdin)
234 if ns is None:
235 return retv
236 else:
237 return retv | run(config, store, ns)
238
[end of pre_commit/commands/hook_impl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/commands/hook_impl.py b/pre_commit/commands/hook_impl.py
--- a/pre_commit/commands/hook_impl.py
+++ b/pre_commit/commands/hook_impl.py
@@ -114,7 +114,8 @@
remote_url = args[1]
for line in stdin.decode().splitlines():
- local_branch, local_sha, remote_branch, remote_sha = line.split()
+ parts = line.rsplit(maxsplit=3)
+ local_branch, local_sha, remote_branch, remote_sha = parts
if local_sha == Z40:
continue
elif remote_sha != Z40 and _rev_exists(remote_sha):
|
{"golden_diff": "diff --git a/pre_commit/commands/hook_impl.py b/pre_commit/commands/hook_impl.py\n--- a/pre_commit/commands/hook_impl.py\n+++ b/pre_commit/commands/hook_impl.py\n@@ -114,7 +114,8 @@\n remote_url = args[1]\n \n for line in stdin.decode().splitlines():\n- local_branch, local_sha, remote_branch, remote_sha = line.split()\n+ parts = line.rsplit(maxsplit=3)\n+ local_branch, local_sha, remote_branch, remote_sha = parts\n if local_sha == Z40:\n continue\n elif remote_sha != Z40 and _rev_exists(remote_sha):\n", "issue": "pre-push hook failed with \"ValueError: too many values to unpack (expected 4)\"\n### describe your issue\n\nI ran\r\n\r\n```\r\ngit push --dry-run origin HEAD^\"{/^[a-zA-Z]+: }\":refs/for/main%wip\r\n```\r\n\r\nand expected the hook to run properly, but it failed with a somewhat subtle error:\r\n\r\n```\r\nAn unexpected error has occurred: ValueError: too many values to unpack (expected 4)\r\nCheck the log at $HOME/.cache/pre-commit/pre-commit.log\r\n```\r\n\r\nIt was more clear from the `pre-commit.log` file, though (see below). I reproduced the issue using HEAD (f9473e756decd141a9834994840c1cb124564c2a) as well.\n\n### pre-commit --version\n\n2.12.1\n\n### .pre-commit-config.yaml\n\n```yaml\nrepos:\r\n - repo: https://github.com/pre-commit/pre-commit-hooks\r\n rev: v3.2.0\r\n hooks:\r\n - id: check-added-large-files\r\n - id: check-json\r\n - id: check-yaml\r\n - id: end-of-file-fixer\r\n - id: trailing-whitespace\r\n\r\n - repo: https://github.com/wwade/pre-commit-golang\r\n rev: 503834f5c0933fbdf9a55e92329c1957e48f6d0a\r\n hooks:\r\n - id: go-fmt\r\n - id: go-imports\r\n - id: go-cyclo\r\n args: [-over=15]\r\n - id: validate-toml\r\n - id: golangci-lint\r\n - id: go-unit-tests\r\n - id: go-mod-tidy\n```\n\n\n### ~/.cache/pre-commit/pre-commit.log (if present)\n\n### version information\r\n\r\n```\r\npre-commit version: 2.12.1\r\nsys.version:\r\n 3.8.10 (default, Mar 15 2022, 12:22:08) \r\n [GCC 9.4.0]\r\nsys.executable: /usr/bin/python3\r\nos.name: posix\r\nsys.platform: linux\r\n```\r\n\r\n### error information\r\n\r\n```\r\nAn unexpected error has occurred: ValueError: too many values to unpack (expected 4)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"$HOME/.local/lib/python3.8/site-packages/pre_commit/error_handler.py\", line 65, in error_handler\r\n yield\r\n File \"$HOME/.local/lib/python3.8/site-packages/pre_commit/main.py\", line 357, in main\r\n return hook_impl(\r\n File \"$HOME/.local/lib/python3.8/site-packages/pre_commit/commands/hook_impl.py\", line 223, in hook_impl\r\n ns = _run_ns(hook_type, color, args, stdin)\r\n File \"$HOME/.local/lib/python3.8/site-packages/pre_commit/commands/hook_impl.py\", line 195, in _run_ns\r\n return _pre_push_ns(color, args, stdin)\r\n File \"$HOME/.local/lib/python3.8/site-packages/pre_commit/commands/hook_impl.py\", line 113, in _pre_push_ns\r\n _, local_sha, remote_branch, remote_sha = line.split()\r\nValueError: too many values to unpack (expected 4)\r\n```\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport argparse\nimport os.path\nimport subprocess\nimport sys\nfrom typing import Sequence\n\nfrom pre_commit.commands.run import run\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.parse_shebang import normalize_cmd\nfrom pre_commit.store import Store\n\nZ40 = '0' * 40\n\n\ndef _run_legacy(\n hook_type: str,\n hook_dir: str,\n args: Sequence[str],\n) -> tuple[int, bytes]:\n if os.environ.get('PRE_COMMIT_RUNNING_LEGACY'):\n raise SystemExit(\n 
f\"bug: pre-commit's script is installed in migration mode\\n\"\n f'run `pre-commit install -f --hook-type {hook_type}` to fix '\n f'this\\n\\n'\n f'Please report this bug at '\n f'https://github.com/pre-commit/pre-commit/issues',\n )\n\n if hook_type == 'pre-push':\n stdin = sys.stdin.buffer.read()\n else:\n stdin = b''\n\n # not running in legacy mode\n legacy_hook = os.path.join(hook_dir, f'{hook_type}.legacy')\n if not os.access(legacy_hook, os.X_OK):\n return 0, stdin\n\n with envcontext((('PRE_COMMIT_RUNNING_LEGACY', '1'),)):\n cmd = normalize_cmd((legacy_hook, *args))\n return subprocess.run(cmd, input=stdin).returncode, stdin\n\n\ndef _validate_config(\n retv: int,\n config: str,\n skip_on_missing_config: bool,\n) -> None:\n if not os.path.isfile(config):\n if skip_on_missing_config or os.getenv('PRE_COMMIT_ALLOW_NO_CONFIG'):\n print(f'`{config}` config file not found. Skipping `pre-commit`.')\n raise SystemExit(retv)\n else:\n print(\n f'No {config} file was found\\n'\n f'- To temporarily silence this, run '\n f'`PRE_COMMIT_ALLOW_NO_CONFIG=1 git ...`\\n'\n f'- To permanently silence this, install pre-commit with the '\n f'--allow-missing-config option\\n'\n f'- To uninstall pre-commit run `pre-commit uninstall`',\n )\n raise SystemExit(1)\n\n\ndef _ns(\n hook_type: str,\n color: bool,\n *,\n all_files: bool = False,\n remote_branch: str | None = None,\n local_branch: str | None = None,\n from_ref: str | None = None,\n to_ref: str | None = None,\n remote_name: str | None = None,\n remote_url: str | None = None,\n commit_msg_filename: str | None = None,\n checkout_type: str | None = None,\n is_squash_merge: str | None = None,\n rewrite_command: str | None = None,\n) -> argparse.Namespace:\n return argparse.Namespace(\n color=color,\n hook_stage=hook_type.replace('pre-', ''),\n remote_branch=remote_branch,\n local_branch=local_branch,\n from_ref=from_ref,\n to_ref=to_ref,\n remote_name=remote_name,\n remote_url=remote_url,\n commit_msg_filename=commit_msg_filename,\n all_files=all_files,\n checkout_type=checkout_type,\n is_squash_merge=is_squash_merge,\n rewrite_command=rewrite_command,\n files=(),\n hook=None,\n verbose=False,\n show_diff_on_failure=False,\n )\n\n\ndef _rev_exists(rev: str) -> bool:\n return not subprocess.call(('git', 'rev-list', '--quiet', rev))\n\n\ndef _pre_push_ns(\n color: bool,\n args: Sequence[str],\n stdin: bytes,\n) -> argparse.Namespace | None:\n remote_name = args[0]\n remote_url = args[1]\n\n for line in stdin.decode().splitlines():\n local_branch, local_sha, remote_branch, remote_sha = line.split()\n if local_sha == Z40:\n continue\n elif remote_sha != Z40 and _rev_exists(remote_sha):\n return _ns(\n 'pre-push', color,\n from_ref=remote_sha, to_ref=local_sha,\n remote_branch=remote_branch,\n local_branch=local_branch,\n remote_name=remote_name, remote_url=remote_url,\n )\n else:\n # ancestors not found in remote\n ancestors = subprocess.check_output((\n 'git', 'rev-list', local_sha, '--topo-order', '--reverse',\n '--not', f'--remotes={remote_name}',\n )).decode().strip()\n if not ancestors:\n continue\n else:\n first_ancestor = ancestors.splitlines()[0]\n cmd = ('git', 'rev-list', '--max-parents=0', local_sha)\n roots = set(subprocess.check_output(cmd).decode().splitlines())\n if first_ancestor in roots:\n # pushing the whole tree including root commit\n return _ns(\n 'pre-push', color,\n all_files=True,\n remote_name=remote_name, remote_url=remote_url,\n remote_branch=remote_branch,\n local_branch=local_branch,\n )\n else:\n rev_cmd = ('git', 
'rev-parse', f'{first_ancestor}^')\n source = subprocess.check_output(rev_cmd).decode().strip()\n return _ns(\n 'pre-push', color,\n from_ref=source, to_ref=local_sha,\n remote_name=remote_name, remote_url=remote_url,\n remote_branch=remote_branch,\n local_branch=local_branch,\n )\n\n # nothing to push\n return None\n\n\n_EXPECTED_ARG_LENGTH_BY_HOOK = {\n 'commit-msg': 1,\n 'post-checkout': 3,\n 'post-commit': 0,\n 'pre-commit': 0,\n 'pre-merge-commit': 0,\n 'post-merge': 1,\n 'post-rewrite': 1,\n 'pre-push': 2,\n}\n\n\ndef _check_args_length(hook_type: str, args: Sequence[str]) -> None:\n if hook_type == 'prepare-commit-msg':\n if len(args) < 1 or len(args) > 3:\n raise SystemExit(\n f'hook-impl for {hook_type} expected 1, 2, or 3 arguments '\n f'but got {len(args)}: {args}',\n )\n elif hook_type in _EXPECTED_ARG_LENGTH_BY_HOOK:\n expected = _EXPECTED_ARG_LENGTH_BY_HOOK[hook_type]\n if len(args) != expected:\n arguments_s = 'argument' if expected == 1 else 'arguments'\n raise SystemExit(\n f'hook-impl for {hook_type} expected {expected} {arguments_s} '\n f'but got {len(args)}: {args}',\n )\n else:\n raise AssertionError(f'unexpected hook type: {hook_type}')\n\n\ndef _run_ns(\n hook_type: str,\n color: bool,\n args: Sequence[str],\n stdin: bytes,\n) -> argparse.Namespace | None:\n _check_args_length(hook_type, args)\n if hook_type == 'pre-push':\n return _pre_push_ns(color, args, stdin)\n elif hook_type in {'commit-msg', 'prepare-commit-msg'}:\n return _ns(hook_type, color, commit_msg_filename=args[0])\n elif hook_type in {'post-commit', 'pre-merge-commit', 'pre-commit'}:\n return _ns(hook_type, color)\n elif hook_type == 'post-checkout':\n return _ns(\n hook_type, color,\n from_ref=args[0], to_ref=args[1], checkout_type=args[2],\n )\n elif hook_type == 'post-merge':\n return _ns(hook_type, color, is_squash_merge=args[0])\n elif hook_type == 'post-rewrite':\n return _ns(hook_type, color, rewrite_command=args[0])\n else:\n raise AssertionError(f'unexpected hook type: {hook_type}')\n\n\ndef hook_impl(\n store: Store,\n *,\n config: str,\n color: bool,\n hook_type: str,\n hook_dir: str,\n skip_on_missing_config: bool,\n args: Sequence[str],\n) -> int:\n retv, stdin = _run_legacy(hook_type, hook_dir, args)\n _validate_config(retv, config, skip_on_missing_config)\n ns = _run_ns(hook_type, color, args, stdin)\n if ns is None:\n return retv\n else:\n return retv | run(config, store, ns)\n", "path": "pre_commit/commands/hook_impl.py"}]}
| 3,752 | 149 |
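A short illustration of the parsing failure captured in the record above: the push in the issue uses a refspec whose source (`HEAD^"{/^[a-zA-Z]+: }"`) contains a space, so the pre-push stdin line no longer splits into exactly four fields, and the patch switches to `rsplit(maxsplit=3)`. The sketch below is a minimal, self-contained reproduction with a synthetic stdin line — the SHA values are placeholders, not taken from a real repository.

```python
# Synthetic pre-push stdin line: "<local ref> <local sha> <remote ref> <remote sha>".
# The local ref mirrors the refspec source from the issue and contains a space.
local_ref = 'HEAD^{/^[a-zA-Z]+: }'
line = f'{local_ref} {"a" * 40} refs/for/main%wip {"0" * 40}'

print(len(line.split()))          # 5 fields -> ValueError: too many values to unpack
parts = line.rsplit(maxsplit=3)   # split only at the three right-most spaces
local_branch, local_sha, remote_branch, remote_sha = parts
assert local_branch == local_ref  # ref name with the space survives intact
```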
gh_patches_debug_31529
|
rasdani/github-patches
|
git_diff
|
sagemath__sage-36565
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sage-download-file: Limit number of mirrors contacted

In particular when `--enable-download-from-upstream-url` is in use.
CC: @jhpalmieri @dimpase @vbraun @williamstein
Component: **build**
_Issue created by migration from https://trac.sagemath.org/ticket/34411_
</issue>
<code>
[start of build/sage_bootstrap/download/mirror_list.py]
1 # -*- coding: utf-8 -*-
2 """
3 Access the List of Sage Download Mirrors
4 """
5
6 #*****************************************************************************
7 # Copyright (C) 2015 Volker Braun <[email protected]>
8 #
9 # This program is free software: you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation, either version 2 of the License, or
12 # (at your option) any later version.
13 # http://www.gnu.org/licenses/
14 #*****************************************************************************
15
16 import os
17 import contextlib
18 import logging
19 log = logging.getLogger()
20
21 from sage_bootstrap.compat import urllib, urlparse
22 from sage_bootstrap.env import SAGE_DISTFILES, SAGE_ROOT
23
24 from fcntl import flock, LOCK_SH, LOCK_EX
25 from errno import ENOLCK
26
27
28 def try_lock(fd, operation):
29 """
30 Try flock() but ignore ``ENOLCK`` errors, which could happen if the
31 file system does not support locking.
32 """
33 try:
34 flock(fd, operation)
35 except IOError as e:
36 if e.errno != ENOLCK:
37 raise
38
39
40 class MirrorListException(RuntimeError):
41 pass
42
43
44 class MirrorList(object):
45
46 def __init__(self):
47 self.sources = []
48 upstream_d = os.path.join(SAGE_ROOT, '.upstream.d')
49 for fname in sorted(os.listdir(upstream_d)):
50 if '~' in fname or '#' in fname:
51 # Ignore auto-save and backup files
52 continue
53 try:
54 with open(os.path.join(upstream_d, fname), 'r') as f:
55 for line in f:
56 line = line.strip()
57 if line.startswith('#'):
58 continue
59 if not line:
60 continue
61 line = line.replace('${SAGE_ROOT}', SAGE_ROOT)
62 line = line.replace('${SAGE_DISTFILES}', SAGE_DISTFILES)
63 if '${SAGE_SERVER}' in line:
64 SAGE_SERVER = os.environ.get("SAGE_SERVER", "")
65 if not SAGE_SERVER:
66 continue
67 line = line.replace('${SAGE_SERVER}', SAGE_SERVER)
68 if line.endswith('mirror_list'):
69 cache_filename = os.path.join(SAGE_DISTFILES, line.rpartition('/')[2])
70 self.sources.append(MirrorList_from_url(line, cache_filename))
71 else:
72 self.sources.append([line])
73 except IOError:
74 # Silently ignore files that do not exist
75 pass
76
77 def __iter__(self):
78 """
79 Iterate through the list of mirrors.
80
81 This is the main entry point into the mirror list. Every
82 script should just use this function to try mirrors in order
83 of preference. This will not just yield the official mirrors,
84 but also urls for packages that are currently being tested.
85 """
86 for source in self.sources:
87 for mirror in source:
88 yield mirror
89
90
91 class MirrorList_from_url(object):
92
93 MAXAGE = 24*60*60 # seconds
94
95 def __init__(self, url, filename):
96 self.url = url
97 self.filename = filename
98 self._mirrors = None
99
100 @property
101 def mirrors(self):
102 if self._mirrors is not None:
103 return self._mirrors
104
105 try:
106 self.mirrorfile = open(self.filename, 'r+t')
107 except IOError:
108 self.mirrorfile = open(self.filename, 'w+t')
109
110 with self.mirrorfile:
111 self.mirrorfd = self.mirrorfile.fileno()
112 try_lock(self.mirrorfd, LOCK_SH) # shared (read) lock
113 if self._must_refresh():
114 try_lock(self.mirrorfd, LOCK_EX) # exclusive (write) lock
115 # Maybe the mirror list file was updated by a different
116 # process while we waited for the lock? Check again.
117 if self._must_refresh():
118 self._refresh()
119 if self._mirrors is None:
120 self._mirrors = self._load()
121
122 return self._mirrors
123
124 def _load(self, mirror_list=None):
125 """
126 Load and return `mirror_list` (defaults to the one on disk) as
127 a list of strings
128 """
129 if mirror_list is None:
130 try:
131 self.mirrorfile.seek(0)
132 mirror_list = self.mirrorfile.read()
133 except IOError:
134 log.critical('Failed to load the cached mirror list')
135 return []
136 if mirror_list == '':
137 return []
138 import ast
139 try:
140 return ast.literal_eval(mirror_list)
141 except SyntaxError:
142 log.critical('Downloaded mirror list has syntax error: {0}'.format(mirror_list))
143 return []
144
145 def _save(self):
146 """
147 Save the mirror list for (short-term) future use.
148 """
149 self.mirrorfile.seek(0)
150 self.mirrorfile.write(repr(self.mirrors))
151 self.mirrorfile.truncate()
152 self.mirrorfile.flush()
153
154 def _port_of_mirror(self, mirror):
155 if mirror.startswith('http://'):
156 return 80
157 if mirror.startswith('https://'):
158 return 443
159 if mirror.startswith('ftp://'):
160 return 21
161 # Sensible default (invalid mirror?)
162 return 80
163
164 def _rank_mirrors(self):
165 """
166 Sort the mirrors by speed, fastest being first
167
168 This method is used by the YUM fastestmirror plugin
169 """
170 timed_mirrors = []
171 import time, socket
172 log.info('Searching fastest mirror')
173 timeout = socket.getdefaulttimeout()
174 if timeout is None:
175 timeout = 1
176 for mirror in self.mirrors:
177 if not mirror.startswith('http'):
178 log.debug('we currently can only handle http, got %s', mirror)
179 continue
180 port = self._port_of_mirror(mirror)
181 mirror_hostname = urlparse.urlsplit(mirror).netloc
182 time_before = time.time()
183 try:
184 sock = socket.create_connection((mirror_hostname, port), timeout)
185 sock.close()
186 except (IOError, socket.error, socket.timeout) as err:
187 log.warning(str(err).strip() + ': ' + mirror)
188 continue
189 result = time.time() - time_before
190 result_ms = int(1000 * result)
191 log.info(str(result_ms).rjust(5) + 'ms: ' + mirror)
192 timed_mirrors.append((result, mirror))
193 if len(timed_mirrors) == 0:
194 # We cannot reach any mirror directly, most likely firewall issue
195 if 'http_proxy' not in os.environ:
196 log.error('Could not reach any mirror directly and no proxy set')
197 raise MirrorListException('Failed to connect to any mirror, probably no internet connection')
198 log.info('Cannot time mirrors via proxy, using default order')
199 else:
200 timed_mirrors.sort()
201 self._mirrors = [m[1] for m in timed_mirrors]
202 log.info('Fastest mirror: ' + self.fastest)
203
204 def _age(self):
205 """
206 Return the age of the cached mirror list in seconds
207 """
208 import time
209 mtime = os.fstat(self.mirrorfd).st_mtime
210 now = time.mktime(time.localtime())
211 return now - mtime
212
213 def _must_refresh(self):
214 """
215 Return whether we must download the mirror list.
216
217 If and only if this method returns ``False`` is it admissible
218 to use the cached mirror list.
219 """
220 if os.fstat(self.mirrorfd).st_size == 0:
221 return True
222 return self._age() > self.MAXAGE
223
224 def _refresh(self):
225 """
226 Download and rank the mirror list.
227 """
228 log.info('Downloading the Sage mirror list')
229 try:
230 with contextlib.closing(urllib.urlopen(self.url)) as f:
231 mirror_list = f.read().decode("ascii")
232 except IOError:
233 log.critical('Downloading the mirror list failed, using cached version')
234 else:
235 self._mirrors = self._load(mirror_list)
236 self._rank_mirrors()
237 self._save()
238
239 def __iter__(self):
240 """
241 Iterate through the list of mirrors.
242
243 This is the main entry point into the mirror list. Every
244 script should just use this function to try mirrors in order
245 of preference. This will not just yield the official mirrors,
246 but also urls for packages that are currently being tested.
247 """
248 try:
249 yield os.environ['SAGE_SERVER']
250 except KeyError:
251 pass
252 for mirror in self.mirrors:
253 if not mirror.endswith('/'):
254 mirror += '/'
255 yield mirror + '/'.join(['spkg', 'upstream', '${SPKG}'])
256
257 @property
258 def fastest(self):
259 return next(iter(self))
260
[end of build/sage_bootstrap/download/mirror_list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/build/sage_bootstrap/download/mirror_list.py b/build/sage_bootstrap/download/mirror_list.py
--- a/build/sage_bootstrap/download/mirror_list.py
+++ b/build/sage_bootstrap/download/mirror_list.py
@@ -170,9 +170,7 @@
timed_mirrors = []
import time, socket
log.info('Searching fastest mirror')
- timeout = socket.getdefaulttimeout()
- if timeout is None:
- timeout = 1
+ timeout = 1
for mirror in self.mirrors:
if not mirror.startswith('http'):
log.debug('we currently can only handle http, got %s', mirror)
@@ -190,6 +188,11 @@
result_ms = int(1000 * result)
log.info(str(result_ms).rjust(5) + 'ms: ' + mirror)
timed_mirrors.append((result, mirror))
+ timed_mirrors.sort()
+ if len(timed_mirrors) >= 5 and timed_mirrors[4][0] < 0.3:
+ # We don't need more than 5 decent mirrors
+ break
+
if len(timed_mirrors) == 0:
# We cannot reach any mirror directly, most likely firewall issue
if 'http_proxy' not in os.environ:
@@ -197,7 +200,6 @@
raise MirrorListException('Failed to connect to any mirror, probably no internet connection')
log.info('Cannot time mirrors via proxy, using default order')
else:
- timed_mirrors.sort()
self._mirrors = [m[1] for m in timed_mirrors]
log.info('Fastest mirror: ' + self.fastest)
|
{"golden_diff": "diff --git a/build/sage_bootstrap/download/mirror_list.py b/build/sage_bootstrap/download/mirror_list.py\n--- a/build/sage_bootstrap/download/mirror_list.py\n+++ b/build/sage_bootstrap/download/mirror_list.py\n@@ -170,9 +170,7 @@\n timed_mirrors = []\n import time, socket\n log.info('Searching fastest mirror')\n- timeout = socket.getdefaulttimeout()\n- if timeout is None:\n- timeout = 1\n+ timeout = 1\n for mirror in self.mirrors:\n if not mirror.startswith('http'):\n log.debug('we currently can only handle http, got %s', mirror)\n@@ -190,6 +188,11 @@\n result_ms = int(1000 * result)\n log.info(str(result_ms).rjust(5) + 'ms: ' + mirror)\n timed_mirrors.append((result, mirror))\n+ timed_mirrors.sort()\n+ if len(timed_mirrors) >= 5 and timed_mirrors[4][0] < 0.3:\n+ # We don't need more than 5 decent mirrors\n+ break\n+\n if len(timed_mirrors) == 0:\n # We cannot reach any mirror directly, most likely firewall issue\n if 'http_proxy' not in os.environ:\n@@ -197,7 +200,6 @@\n raise MirrorListException('Failed to connect to any mirror, probably no internet connection')\n log.info('Cannot time mirrors via proxy, using default order')\n else:\n- timed_mirrors.sort()\n self._mirrors = [m[1] for m in timed_mirrors]\n log.info('Fastest mirror: ' + self.fastest)\n", "issue": "sage-download-file: Limit number of mirrors contacted\n<div id=\"comment:0\"></div>\n\nIn particular when `--enable-download-from-upstream-url` is in use.\n\nCC: @jhpalmieri @dimpase @vbraun @williamstein\n\nComponent: **build**\n\n_Issue created by migration from https://trac.sagemath.org/ticket/34411_\n\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nAccess the List of Sage Download Mirrors\n\"\"\"\n\n#*****************************************************************************\n# Copyright (C) 2015 Volker Braun <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 2 of the License, or\n# (at your option) any later version.\n# http://www.gnu.org/licenses/\n#*****************************************************************************\n\nimport os\nimport contextlib\nimport logging\nlog = logging.getLogger()\n\nfrom sage_bootstrap.compat import urllib, urlparse\nfrom sage_bootstrap.env import SAGE_DISTFILES, SAGE_ROOT\n\nfrom fcntl import flock, LOCK_SH, LOCK_EX\nfrom errno import ENOLCK\n\n\ndef try_lock(fd, operation):\n \"\"\"\n Try flock() but ignore ``ENOLCK`` errors, which could happen if the\n file system does not support locking.\n \"\"\"\n try:\n flock(fd, operation)\n except IOError as e:\n if e.errno != ENOLCK:\n raise\n\n \nclass MirrorListException(RuntimeError):\n pass\n \n\nclass MirrorList(object):\n\n def __init__(self):\n self.sources = []\n upstream_d = os.path.join(SAGE_ROOT, '.upstream.d')\n for fname in sorted(os.listdir(upstream_d)):\n if '~' in fname or '#' in fname:\n # Ignore auto-save and backup files\n continue\n try:\n with open(os.path.join(upstream_d, fname), 'r') as f:\n for line in f:\n line = line.strip()\n if line.startswith('#'):\n continue\n if not line:\n continue\n line = line.replace('${SAGE_ROOT}', SAGE_ROOT)\n line = line.replace('${SAGE_DISTFILES}', SAGE_DISTFILES)\n if '${SAGE_SERVER}' in line:\n SAGE_SERVER = os.environ.get(\"SAGE_SERVER\", \"\")\n if not SAGE_SERVER:\n continue\n line = line.replace('${SAGE_SERVER}', SAGE_SERVER)\n if line.endswith('mirror_list'):\n cache_filename = 
os.path.join(SAGE_DISTFILES, line.rpartition('/')[2])\n self.sources.append(MirrorList_from_url(line, cache_filename))\n else:\n self.sources.append([line])\n except IOError:\n # Silently ignore files that do not exist\n pass\n\n def __iter__(self):\n \"\"\"\n Iterate through the list of mirrors.\n\n This is the main entry point into the mirror list. Every\n script should just use this function to try mirrors in order\n of preference. This will not just yield the official mirrors,\n but also urls for packages that are currently being tested.\n \"\"\"\n for source in self.sources:\n for mirror in source:\n yield mirror\n\n\nclass MirrorList_from_url(object):\n \n MAXAGE = 24*60*60 # seconds\n\n def __init__(self, url, filename):\n self.url = url\n self.filename = filename\n self._mirrors = None\n\n @property\n def mirrors(self):\n if self._mirrors is not None:\n return self._mirrors\n\n try:\n self.mirrorfile = open(self.filename, 'r+t')\n except IOError:\n self.mirrorfile = open(self.filename, 'w+t')\n\n with self.mirrorfile:\n self.mirrorfd = self.mirrorfile.fileno()\n try_lock(self.mirrorfd, LOCK_SH) # shared (read) lock\n if self._must_refresh():\n try_lock(self.mirrorfd, LOCK_EX) # exclusive (write) lock\n # Maybe the mirror list file was updated by a different\n # process while we waited for the lock? Check again.\n if self._must_refresh():\n self._refresh()\n if self._mirrors is None:\n self._mirrors = self._load()\n\n return self._mirrors\n\n def _load(self, mirror_list=None):\n \"\"\"\n Load and return `mirror_list` (defaults to the one on disk) as\n a list of strings\n \"\"\"\n if mirror_list is None:\n try:\n self.mirrorfile.seek(0)\n mirror_list = self.mirrorfile.read()\n except IOError:\n log.critical('Failed to load the cached mirror list')\n return []\n if mirror_list == '':\n return []\n import ast\n try:\n return ast.literal_eval(mirror_list)\n except SyntaxError:\n log.critical('Downloaded mirror list has syntax error: {0}'.format(mirror_list))\n return []\n\n def _save(self):\n \"\"\"\n Save the mirror list for (short-term) future use.\n \"\"\"\n self.mirrorfile.seek(0)\n self.mirrorfile.write(repr(self.mirrors))\n self.mirrorfile.truncate()\n self.mirrorfile.flush()\n\n def _port_of_mirror(self, mirror):\n if mirror.startswith('http://'):\n return 80\n if mirror.startswith('https://'):\n return 443\n if mirror.startswith('ftp://'):\n return 21\n # Sensible default (invalid mirror?)\n return 80\n\n def _rank_mirrors(self):\n \"\"\"\n Sort the mirrors by speed, fastest being first\n\n This method is used by the YUM fastestmirror plugin\n \"\"\"\n timed_mirrors = []\n import time, socket\n log.info('Searching fastest mirror')\n timeout = socket.getdefaulttimeout()\n if timeout is None:\n timeout = 1\n for mirror in self.mirrors:\n if not mirror.startswith('http'):\n log.debug('we currently can only handle http, got %s', mirror)\n continue\n port = self._port_of_mirror(mirror)\n mirror_hostname = urlparse.urlsplit(mirror).netloc\n time_before = time.time()\n try:\n sock = socket.create_connection((mirror_hostname, port), timeout)\n sock.close()\n except (IOError, socket.error, socket.timeout) as err:\n log.warning(str(err).strip() + ': ' + mirror)\n continue\n result = time.time() - time_before\n result_ms = int(1000 * result)\n log.info(str(result_ms).rjust(5) + 'ms: ' + mirror)\n timed_mirrors.append((result, mirror))\n if len(timed_mirrors) == 0:\n # We cannot reach any mirror directly, most likely firewall issue\n if 'http_proxy' not in os.environ:\n log.error('Could not 
reach any mirror directly and no proxy set')\n raise MirrorListException('Failed to connect to any mirror, probably no internet connection')\n log.info('Cannot time mirrors via proxy, using default order')\n else:\n timed_mirrors.sort()\n self._mirrors = [m[1] for m in timed_mirrors]\n log.info('Fastest mirror: ' + self.fastest)\n\n def _age(self):\n \"\"\"\n Return the age of the cached mirror list in seconds\n \"\"\"\n import time\n mtime = os.fstat(self.mirrorfd).st_mtime\n now = time.mktime(time.localtime())\n return now - mtime\n\n def _must_refresh(self):\n \"\"\"\n Return whether we must download the mirror list.\n\n If and only if this method returns ``False`` is it admissible\n to use the cached mirror list.\n \"\"\"\n if os.fstat(self.mirrorfd).st_size == 0:\n return True\n return self._age() > self.MAXAGE\n\n def _refresh(self):\n \"\"\"\n Download and rank the mirror list.\n \"\"\"\n log.info('Downloading the Sage mirror list')\n try:\n with contextlib.closing(urllib.urlopen(self.url)) as f:\n mirror_list = f.read().decode(\"ascii\")\n except IOError:\n log.critical('Downloading the mirror list failed, using cached version')\n else:\n self._mirrors = self._load(mirror_list)\n self._rank_mirrors()\n self._save()\n\n def __iter__(self):\n \"\"\"\n Iterate through the list of mirrors.\n\n This is the main entry point into the mirror list. Every\n script should just use this function to try mirrors in order\n of preference. This will not just yield the official mirrors,\n but also urls for packages that are currently being tested.\n \"\"\"\n try:\n yield os.environ['SAGE_SERVER']\n except KeyError:\n pass\n for mirror in self.mirrors:\n if not mirror.endswith('/'):\n mirror += '/'\n yield mirror + '/'.join(['spkg', 'upstream', '${SPKG}'])\n\n @property\n def fastest(self):\n return next(iter(self))\n", "path": "build/sage_bootstrap/download/mirror_list.py"}]}
| 3,211 | 390 |
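The patch in the record above caps how many mirrors get contacted: it times mirrors one at a time, keeps the timed list sorted, and stops probing as soon as five mirrors under 0.3 s are known. The following standalone sketch shows that early-exit loop with the socket timing replaced by a stub — `probe` is an assumption for illustration, not part of `sage_bootstrap` — so the control flow can be run without network access.

```python
import random

def probe(mirror):
    """Stand-in for timing a TCP connect to the mirror (stubbed, no network)."""
    return random.uniform(0.05, 1.5)

def rank_mirrors(mirrors, enough=5, fast_enough=0.3):
    timed = []
    for mirror in mirrors:
        timed.append((probe(mirror), mirror))
        timed.sort()
        # Same early exit as the patch: once `enough` mirrors are known and the
        # slowest of them is still under `fast_enough` seconds, stop probing.
        if len(timed) >= enough and timed[enough - 1][0] < fast_enough:
            break
    return [mirror for _, mirror in timed]

print(rank_mirrors(["https://mirror%d.example.org/sage/" % i for i in range(20)]))
```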
gh_patches_debug_21997
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-2564
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MetricReaders must be registered in only one MeterProvider instance
From the [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#metricreader):
> The SDK MUST NOT allow a MetricReader instance to be registered on more than one MeterProvider instance.
</issue>
<code>
[start of opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from atexit import register, unregister
16 from logging import getLogger
17 from threading import Lock
18 from typing import Optional, Sequence
19
20 from opentelemetry._metrics import Meter as APIMeter
21 from opentelemetry._metrics import MeterProvider as APIMeterProvider
22 from opentelemetry._metrics import NoOpMeter
23 from opentelemetry._metrics.instrument import Counter as APICounter
24 from opentelemetry._metrics.instrument import Histogram as APIHistogram
25 from opentelemetry._metrics.instrument import (
26 ObservableCounter as APIObservableCounter,
27 )
28 from opentelemetry._metrics.instrument import (
29 ObservableGauge as APIObservableGauge,
30 )
31 from opentelemetry._metrics.instrument import (
32 ObservableUpDownCounter as APIObservableUpDownCounter,
33 )
34 from opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter
35 from opentelemetry.sdk._metrics.instrument import (
36 Counter,
37 Histogram,
38 ObservableCounter,
39 ObservableGauge,
40 ObservableUpDownCounter,
41 UpDownCounter,
42 )
43 from opentelemetry.sdk._metrics.measurement_consumer import (
44 MeasurementConsumer,
45 SynchronousMeasurementConsumer,
46 )
47 from opentelemetry.sdk._metrics.metric_reader import MetricReader
48 from opentelemetry.sdk._metrics.sdk_configuration import SdkConfiguration
49 from opentelemetry.sdk._metrics.view import View
50 from opentelemetry.sdk.resources import Resource
51 from opentelemetry.sdk.util.instrumentation import InstrumentationInfo
52 from opentelemetry.util._once import Once
53
54 _logger = getLogger(__name__)
55
56
57 class Meter(APIMeter):
58 def __init__(
59 self,
60 instrumentation_info: InstrumentationInfo,
61 measurement_consumer: MeasurementConsumer,
62 ):
63 super().__init__(instrumentation_info)
64 self._instrumentation_info = instrumentation_info
65 self._measurement_consumer = measurement_consumer
66
67 def create_counter(self, name, unit=None, description=None) -> APICounter:
68 return Counter(
69 name,
70 self._instrumentation_info,
71 self._measurement_consumer,
72 unit,
73 description,
74 )
75
76 def create_up_down_counter(
77 self, name, unit=None, description=None
78 ) -> APIUpDownCounter:
79 return UpDownCounter(
80 name,
81 self._instrumentation_info,
82 self._measurement_consumer,
83 unit,
84 description,
85 )
86
87 def create_observable_counter(
88 self, name, callback, unit=None, description=None
89 ) -> APIObservableCounter:
90
91 instrument = ObservableCounter(
92 name,
93 self._instrumentation_info,
94 self._measurement_consumer,
95 callback,
96 unit,
97 description,
98 )
99
100 self._measurement_consumer.register_asynchronous_instrument(instrument)
101
102 return instrument
103
104 def create_histogram(
105 self, name, unit=None, description=None
106 ) -> APIHistogram:
107 return Histogram(
108 name,
109 self._instrumentation_info,
110 self._measurement_consumer,
111 unit,
112 description,
113 )
114
115 def create_observable_gauge(
116 self, name, callback, unit=None, description=None
117 ) -> APIObservableGauge:
118
119 instrument = ObservableGauge(
120 name,
121 self._instrumentation_info,
122 self._measurement_consumer,
123 callback,
124 unit,
125 description,
126 )
127
128 self._measurement_consumer.register_asynchronous_instrument(instrument)
129
130 return instrument
131
132 def create_observable_up_down_counter(
133 self, name, callback, unit=None, description=None
134 ) -> APIObservableUpDownCounter:
135
136 instrument = ObservableUpDownCounter(
137 name,
138 self._instrumentation_info,
139 self._measurement_consumer,
140 callback,
141 unit,
142 description,
143 )
144
145 self._measurement_consumer.register_asynchronous_instrument(instrument)
146
147 return instrument
148
149
150 class MeterProvider(APIMeterProvider):
151 r"""See `opentelemetry._metrics.MeterProvider`.
152
153 Args:
154 metric_readers: Register metric readers to collect metrics from the SDK on demand. Each
155 `MetricReader` is completely independent and will collect separate streams of
156 metrics. TODO: reference ``PeriodicExportingMetricReader`` usage with push
157 exporters here.
158 resource: The resource representing what the metrics emitted from the SDK pertain to.
159 shutdown_on_exit: If true, registers an `atexit` handler to call
160 `MeterProvider.shutdown`
161 views: The views to configure the metric output the SDK
162
163 By default, instruments which do not match any `View` (or if no `View`\ s are provided)
164 will report metrics with the default aggregation for the instrument's kind. To disable
165 instruments by default, configure a match-all `View` with `DropAggregation` and then create
166 `View`\ s to re-enable individual instruments:
167
168 .. code-block:: python
169 :caption: Disable default views
170
171 MeterProvider(
172 views=[
173 View(instrument_name="*", aggregation=DropAggregation()),
174 View(instrument_name="mycounter"),
175 ],
176 # ...
177 )
178 """
179
180 def __init__(
181 self,
182 metric_readers: Sequence[MetricReader] = (),
183 resource: Resource = Resource.create({}),
184 shutdown_on_exit: bool = True,
185 views: Sequence[View] = (),
186 ):
187 self._lock = Lock()
188 self._meter_lock = Lock()
189 self._atexit_handler = None
190 self._sdk_config = SdkConfiguration(
191 resource=resource,
192 metric_readers=metric_readers,
193 views=views,
194 )
195 self._measurement_consumer = SynchronousMeasurementConsumer(
196 sdk_config=self._sdk_config
197 )
198
199 if shutdown_on_exit:
200 self._atexit_handler = register(self.shutdown)
201
202 self._meters = {}
203 self._metric_readers = metric_readers
204
205 for metric_reader in self._sdk_config.metric_readers:
206 metric_reader._set_collect_callback(
207 self._measurement_consumer.collect
208 )
209
210 self._shutdown_once = Once()
211 self._shutdown = False
212
213 def force_flush(self) -> bool:
214
215 # FIXME implement a timeout
216
217 for metric_reader in self._sdk_config.metric_readers:
218 metric_reader.collect()
219 return True
220
221 def shutdown(self):
222 # FIXME implement a timeout
223
224 def _shutdown():
225 self._shutdown = True
226
227 did_shutdown = self._shutdown_once.do_once(_shutdown)
228
229 if not did_shutdown:
230 _logger.warning("shutdown can only be called once")
231 return False
232
233 overall_result = True
234
235 for metric_reader in self._sdk_config.metric_readers:
236 metric_reader_result = metric_reader.shutdown()
237
238 if not metric_reader_result:
239 _logger.warning(
240 "MetricReader %s failed to shutdown", metric_reader
241 )
242
243 overall_result = overall_result and metric_reader_result
244
245 if self._atexit_handler is not None:
246 unregister(self._atexit_handler)
247 self._atexit_handler = None
248
249 return overall_result
250
251 def get_meter(
252 self,
253 name: str,
254 version: Optional[str] = None,
255 schema_url: Optional[str] = None,
256 ) -> Meter:
257
258 if self._shutdown:
259 _logger.warning(
260 "A shutdown `MeterProvider` can not provide a `Meter`"
261 )
262 return NoOpMeter(name, version=version, schema_url=schema_url)
263
264 if not name:
265 _logger.warning("Meter name cannot be None or empty.")
266 return NoOpMeter(name, version=version, schema_url=schema_url)
267
268 info = InstrumentationInfo(name, version, schema_url)
269 with self._meter_lock:
270 if not self._meters.get(info):
271 self._meters[info] = Meter(
272 info,
273 self._measurement_consumer,
274 )
275 return self._meters[info]
276
[end of opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py
@@ -177,6 +177,9 @@
)
"""
+ _all_metric_readers_lock = Lock()
+ _all_metric_readers = set()
+
def __init__(
self,
metric_readers: Sequence[MetricReader] = (),
@@ -200,9 +203,18 @@
self._atexit_handler = register(self.shutdown)
self._meters = {}
- self._metric_readers = metric_readers
for metric_reader in self._sdk_config.metric_readers:
+
+ with self._all_metric_readers_lock:
+ if metric_reader in self._all_metric_readers:
+ raise Exception(
+ f"MetricReader {metric_reader} has been registered "
+ "already in other MeterProvider instance"
+ )
+
+ self._all_metric_readers.add(metric_reader)
+
metric_reader._set_collect_callback(
self._measurement_consumer.collect
)
|
{"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n@@ -177,6 +177,9 @@\n )\n \"\"\"\n \n+ _all_metric_readers_lock = Lock()\n+ _all_metric_readers = set()\n+\n def __init__(\n self,\n metric_readers: Sequence[MetricReader] = (),\n@@ -200,9 +203,18 @@\n self._atexit_handler = register(self.shutdown)\n \n self._meters = {}\n- self._metric_readers = metric_readers\n \n for metric_reader in self._sdk_config.metric_readers:\n+\n+ with self._all_metric_readers_lock:\n+ if metric_reader in self._all_metric_readers:\n+ raise Exception(\n+ f\"MetricReader {metric_reader} has been registered \"\n+ \"already in other MeterProvider instance\"\n+ )\n+\n+ self._all_metric_readers.add(metric_reader)\n+\n metric_reader._set_collect_callback(\n self._measurement_consumer.collect\n )\n", "issue": "MetricReaders must be registered in only one MeterProvider instance\nFrom the [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#metricreader):\r\n\r\n> The SDK MUST NOT allow a MetricReader instance to be registered on more than one MeterProvider instance.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom atexit import register, unregister\nfrom logging import getLogger\nfrom threading import Lock\nfrom typing import Optional, Sequence\n\nfrom opentelemetry._metrics import Meter as APIMeter\nfrom opentelemetry._metrics import MeterProvider as APIMeterProvider\nfrom opentelemetry._metrics import NoOpMeter\nfrom opentelemetry._metrics.instrument import Counter as APICounter\nfrom opentelemetry._metrics.instrument import Histogram as APIHistogram\nfrom opentelemetry._metrics.instrument import (\n ObservableCounter as APIObservableCounter,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableGauge as APIObservableGauge,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableUpDownCounter as APIObservableUpDownCounter,\n)\nfrom opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter\nfrom opentelemetry.sdk._metrics.instrument import (\n Counter,\n Histogram,\n ObservableCounter,\n ObservableGauge,\n ObservableUpDownCounter,\n UpDownCounter,\n)\nfrom opentelemetry.sdk._metrics.measurement_consumer import (\n MeasurementConsumer,\n SynchronousMeasurementConsumer,\n)\nfrom opentelemetry.sdk._metrics.metric_reader import MetricReader\nfrom opentelemetry.sdk._metrics.sdk_configuration import SdkConfiguration\nfrom opentelemetry.sdk._metrics.view import View\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationInfo\nfrom opentelemetry.util._once import Once\n\n_logger = getLogger(__name__)\n\n\nclass Meter(APIMeter):\n def __init__(\n self,\n 
instrumentation_info: InstrumentationInfo,\n measurement_consumer: MeasurementConsumer,\n ):\n super().__init__(instrumentation_info)\n self._instrumentation_info = instrumentation_info\n self._measurement_consumer = measurement_consumer\n\n def create_counter(self, name, unit=None, description=None) -> APICounter:\n return Counter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_up_down_counter(\n self, name, unit=None, description=None\n ) -> APIUpDownCounter:\n return UpDownCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_observable_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableCounter:\n\n instrument = ObservableCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n def create_histogram(\n self, name, unit=None, description=None\n ) -> APIHistogram:\n return Histogram(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_observable_gauge(\n self, name, callback, unit=None, description=None\n ) -> APIObservableGauge:\n\n instrument = ObservableGauge(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n def create_observable_up_down_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableUpDownCounter:\n\n instrument = ObservableUpDownCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n\nclass MeterProvider(APIMeterProvider):\n r\"\"\"See `opentelemetry._metrics.MeterProvider`.\n\n Args:\n metric_readers: Register metric readers to collect metrics from the SDK on demand. Each\n `MetricReader` is completely independent and will collect separate streams of\n metrics. TODO: reference ``PeriodicExportingMetricReader`` usage with push\n exporters here.\n resource: The resource representing what the metrics emitted from the SDK pertain to.\n shutdown_on_exit: If true, registers an `atexit` handler to call\n `MeterProvider.shutdown`\n views: The views to configure the metric output the SDK\n\n By default, instruments which do not match any `View` (or if no `View`\\ s are provided)\n will report metrics with the default aggregation for the instrument's kind. To disable\n instruments by default, configure a match-all `View` with `DropAggregation` and then create\n `View`\\ s to re-enable individual instruments:\n\n .. 
code-block:: python\n :caption: Disable default views\n\n MeterProvider(\n views=[\n View(instrument_name=\"*\", aggregation=DropAggregation()),\n View(instrument_name=\"mycounter\"),\n ],\n # ...\n )\n \"\"\"\n\n def __init__(\n self,\n metric_readers: Sequence[MetricReader] = (),\n resource: Resource = Resource.create({}),\n shutdown_on_exit: bool = True,\n views: Sequence[View] = (),\n ):\n self._lock = Lock()\n self._meter_lock = Lock()\n self._atexit_handler = None\n self._sdk_config = SdkConfiguration(\n resource=resource,\n metric_readers=metric_readers,\n views=views,\n )\n self._measurement_consumer = SynchronousMeasurementConsumer(\n sdk_config=self._sdk_config\n )\n\n if shutdown_on_exit:\n self._atexit_handler = register(self.shutdown)\n\n self._meters = {}\n self._metric_readers = metric_readers\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader._set_collect_callback(\n self._measurement_consumer.collect\n )\n\n self._shutdown_once = Once()\n self._shutdown = False\n\n def force_flush(self) -> bool:\n\n # FIXME implement a timeout\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader.collect()\n return True\n\n def shutdown(self):\n # FIXME implement a timeout\n\n def _shutdown():\n self._shutdown = True\n\n did_shutdown = self._shutdown_once.do_once(_shutdown)\n\n if not did_shutdown:\n _logger.warning(\"shutdown can only be called once\")\n return False\n\n overall_result = True\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader_result = metric_reader.shutdown()\n\n if not metric_reader_result:\n _logger.warning(\n \"MetricReader %s failed to shutdown\", metric_reader\n )\n\n overall_result = overall_result and metric_reader_result\n\n if self._atexit_handler is not None:\n unregister(self._atexit_handler)\n self._atexit_handler = None\n\n return overall_result\n\n def get_meter(\n self,\n name: str,\n version: Optional[str] = None,\n schema_url: Optional[str] = None,\n ) -> Meter:\n\n if self._shutdown:\n _logger.warning(\n \"A shutdown `MeterProvider` can not provide a `Meter`\"\n )\n return NoOpMeter(name, version=version, schema_url=schema_url)\n\n if not name:\n _logger.warning(\"Meter name cannot be None or empty.\")\n return NoOpMeter(name, version=version, schema_url=schema_url)\n\n info = InstrumentationInfo(name, version, schema_url)\n with self._meter_lock:\n if not self._meters.get(info):\n self._meters[info] = Meter(\n info,\n self._measurement_consumer,\n )\n return self._meters[info]\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py"}]}
| 3,106 | 280 |
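The fix recorded above enforces the spec rule ("a MetricReader instance must not be registered on more than one MeterProvider") with a class-level registry guarded by a lock. Below is a stripped-down sketch of that register-once pattern using stand-in `Provider`/`Reader` classes rather than the real SDK types; only the bookkeeping is shown, not metric collection.

```python
from threading import Lock


class Reader:
    """Stand-in for a MetricReader; only object identity matters here."""


class Provider:
    # Shared across *all* Provider instances, mirroring the patch.
    _all_readers_lock = Lock()
    _all_readers = set()

    def __init__(self, readers=()):
        for reader in readers:
            with Provider._all_readers_lock:
                if reader in Provider._all_readers:
                    raise Exception(
                        f"Reader {reader} is already registered with "
                        "another Provider instance"
                    )
                Provider._all_readers.add(reader)
        self._readers = tuple(readers)


reader = Reader()
Provider(readers=[reader])        # first registration succeeds
try:
    Provider(readers=[reader])    # a second provider must reject the same reader
except Exception as exc:
    print(exc)
```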
gh_patches_debug_28331
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-4113
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
testing 5024: missing location label
**URL:** https://meinberlin-dev.liqd.net/projekte/burgerhaushalt-spandau/?mode=list
**user:** any
**expected behaviour:**
**behaviour:** location label (Bezeichnung des markierten Ortes) is missing
**important screensize:**
**device & browser:**
**Comment/Question:** maybe we need a smaller char restriction here? it's at 255 now, I wonder if something like 50 should be enough for something displayed as a tag? or continue with ... for longer words?
old list
<img width="446" alt="Bildschirmfoto 2021-12-21 um 16 35 27" src="https://user-images.githubusercontent.com/35491681/146956690-789f6d02-372c-4877-a4c9-c539b5fc90c3.png">
new list
<img width="446" alt="Bildschirmfoto 2021-12-21 um 16 34 09" src="https://user-images.githubusercontent.com/35491681/146956491-2472f9f2-e90d-4975-88a8-fbe1a7012657.png">
old list with long label
<img width="656" alt="Bildschirmfoto 2021-12-21 um 16 36 09" src="https://user-images.githubusercontent.com/35491681/146956804-ced5b4b8-0da8-42fc-a17c-901fc86efe9b.png">
</issue>
<code>
[start of meinberlin/apps/budgeting/serializers.py]
1 from django.contrib.contenttypes.models import ContentType
2 from rest_framework import serializers
3
4 from adhocracy4.categories.models import Category
5 from meinberlin.apps.votes.models import TokenVote
6
7 from .models import Proposal
8
9
10 class CategoryField(serializers.Field):
11
12 def to_internal_value(self, category):
13 if category:
14 return Category.objects.get(pk=category)
15 else:
16 return None
17
18 def to_representation(self, category):
19 return {'id': category.pk, 'name': category.name}
20
21
22 class ProposalSerializer(serializers.ModelSerializer):
23
24 creator = serializers.SerializerMethodField()
25 comment_count = serializers.SerializerMethodField()
26 positive_rating_count = serializers.SerializerMethodField()
27 negative_rating_count = serializers.SerializerMethodField()
28 category = CategoryField()
29 url = serializers.SerializerMethodField()
30 moderator_feedback = serializers.SerializerMethodField()
31 session_token_voted = serializers.SerializerMethodField()
32
33 class Meta:
34 model = Proposal
35 fields = ('budget', 'category', 'comment_count', 'created', 'modified',
36 'creator', 'is_archived', 'name', 'negative_rating_count',
37 'positive_rating_count', 'url', 'pk', 'moderator_feedback',
38 'session_token_voted')
39 read_only_fields = ('budget', 'category', 'comment_count', 'created',
40 'modified', 'creator', 'is_archived', 'name',
41 'negative_rating_count', 'positive_rating_count',
42 'url', 'pk', 'moderator_feedback',
43 'session_token_voted')
44
45 def get_creator(self, proposal):
46 return proposal.creator.username
47
48 def get_comment_count(self, proposal):
49 if hasattr(proposal, 'comment_count'):
50 return proposal.comment_count
51 else:
52 return 0
53
54 def get_positive_rating_count(self, proposal):
55 if hasattr(proposal, 'positive_rating_count'):
56 return proposal.positive_rating_count
57 else:
58 return 0
59
60 def get_negative_rating_count(self, proposal):
61 if hasattr(proposal, 'negative_rating_count'):
62 return proposal.negative_rating_count
63 else:
64 return 0
65
66 def get_url(self, proposal):
67 return proposal.get_absolute_url()
68
69 def get_moderator_feedback(self, proposal):
70 if hasattr(proposal, 'moderator_feedback'):
71 return (proposal.moderator_feedback,
72 proposal.get_moderator_feedback_display())
73 else:
74 return None
75
76 def get_session_token_voted(self, proposal):
77 """Serialize if proposal has been voted.
78
79 Returns bool that indicates whether the proposal has
80 been voted with the token in the current session
81 """
82 if 'request' in self.context:
83 if 'voting_token' in self.context['request'].session:
84 vote = TokenVote.objects.filter(
85 token__pk=self.context['request'].session['voting_token'],
86 content_type=ContentType.objects.get_for_model(
87 proposal.__class__),
88 object_pk=proposal.pk
89 )
90 if vote.exists():
91 return True
92
93 return False
94
[end of meinberlin/apps/budgeting/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/budgeting/serializers.py b/meinberlin/apps/budgeting/serializers.py
--- a/meinberlin/apps/budgeting/serializers.py
+++ b/meinberlin/apps/budgeting/serializers.py
@@ -35,12 +35,12 @@
fields = ('budget', 'category', 'comment_count', 'created', 'modified',
'creator', 'is_archived', 'name', 'negative_rating_count',
'positive_rating_count', 'url', 'pk', 'moderator_feedback',
- 'session_token_voted')
+ 'point_label', 'session_token_voted')
read_only_fields = ('budget', 'category', 'comment_count', 'created',
'modified', 'creator', 'is_archived', 'name',
'negative_rating_count', 'positive_rating_count',
'url', 'pk', 'moderator_feedback',
- 'session_token_voted')
+ 'point_label', 'session_token_voted')
def get_creator(self, proposal):
return proposal.creator.username
@@ -73,6 +73,12 @@
else:
return None
+ def get_point_label(self, proposal):
+ if hasattr(proposal, 'point_label'):
+ return (proposal.point_label)
+ else:
+ return None
+
def get_session_token_voted(self, proposal):
"""Serialize if proposal has been voted.
|
{"golden_diff": "diff --git a/meinberlin/apps/budgeting/serializers.py b/meinberlin/apps/budgeting/serializers.py\n--- a/meinberlin/apps/budgeting/serializers.py\n+++ b/meinberlin/apps/budgeting/serializers.py\n@@ -35,12 +35,12 @@\n fields = ('budget', 'category', 'comment_count', 'created', 'modified',\n 'creator', 'is_archived', 'name', 'negative_rating_count',\n 'positive_rating_count', 'url', 'pk', 'moderator_feedback',\n- 'session_token_voted')\n+ 'point_label', 'session_token_voted')\n read_only_fields = ('budget', 'category', 'comment_count', 'created',\n 'modified', 'creator', 'is_archived', 'name',\n 'negative_rating_count', 'positive_rating_count',\n 'url', 'pk', 'moderator_feedback',\n- 'session_token_voted')\n+ 'point_label', 'session_token_voted')\n \n def get_creator(self, proposal):\n return proposal.creator.username\n@@ -73,6 +73,12 @@\n else:\n return None\n \n+ def get_point_label(self, proposal):\n+ if hasattr(proposal, 'point_label'):\n+ return (proposal.point_label)\n+ else:\n+ return None\n+\n def get_session_token_voted(self, proposal):\n \"\"\"Serialize if proposal has been voted.\n", "issue": "testing 5024: missing location label\n**URL:** https://meinberlin-dev.liqd.net/projekte/burgerhaushalt-spandau/?mode=list\r\n**user:** any\r\n**expected behaviour:** \r\n**behaviour:** location label (Bezeichnung des markierten Ortes) is missing\r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** maybe we need a smaller char restriction here? it's at 255 now, I wonder if something like 50 should be enough for something displayed as a tag? or continue with ... for longer words?\r\n\r\nold list\r\n<img width=\"446\" alt=\"Bildschirmfoto 2021-12-21 um 16 35 27\" src=\"https://user-images.githubusercontent.com/35491681/146956690-789f6d02-372c-4877-a4c9-c539b5fc90c3.png\">\r\n\r\n\r\nnew list\r\n<img width=\"446\" alt=\"Bildschirmfoto 2021-12-21 um 16 34 09\" src=\"https://user-images.githubusercontent.com/35491681/146956491-2472f9f2-e90d-4975-88a8-fbe1a7012657.png\">\r\n\r\nold list with long label\r\n<img width=\"656\" alt=\"Bildschirmfoto 2021-12-21 um 16 36 09\" src=\"https://user-images.githubusercontent.com/35491681/146956804-ced5b4b8-0da8-42fc-a17c-901fc86efe9b.png\">\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.contrib.contenttypes.models import ContentType\nfrom rest_framework import serializers\n\nfrom adhocracy4.categories.models import Category\nfrom meinberlin.apps.votes.models import TokenVote\n\nfrom .models import Proposal\n\n\nclass CategoryField(serializers.Field):\n\n def to_internal_value(self, category):\n if category:\n return Category.objects.get(pk=category)\n else:\n return None\n\n def to_representation(self, category):\n return {'id': category.pk, 'name': category.name}\n\n\nclass ProposalSerializer(serializers.ModelSerializer):\n\n creator = serializers.SerializerMethodField()\n comment_count = serializers.SerializerMethodField()\n positive_rating_count = serializers.SerializerMethodField()\n negative_rating_count = serializers.SerializerMethodField()\n category = CategoryField()\n url = serializers.SerializerMethodField()\n moderator_feedback = serializers.SerializerMethodField()\n session_token_voted = serializers.SerializerMethodField()\n\n class Meta:\n model = Proposal\n fields = ('budget', 'category', 'comment_count', 'created', 'modified',\n 'creator', 'is_archived', 'name', 'negative_rating_count',\n 'positive_rating_count', 'url', 'pk', 'moderator_feedback',\n 'session_token_voted')\n 
read_only_fields = ('budget', 'category', 'comment_count', 'created',\n 'modified', 'creator', 'is_archived', 'name',\n 'negative_rating_count', 'positive_rating_count',\n 'url', 'pk', 'moderator_feedback',\n 'session_token_voted')\n\n def get_creator(self, proposal):\n return proposal.creator.username\n\n def get_comment_count(self, proposal):\n if hasattr(proposal, 'comment_count'):\n return proposal.comment_count\n else:\n return 0\n\n def get_positive_rating_count(self, proposal):\n if hasattr(proposal, 'positive_rating_count'):\n return proposal.positive_rating_count\n else:\n return 0\n\n def get_negative_rating_count(self, proposal):\n if hasattr(proposal, 'negative_rating_count'):\n return proposal.negative_rating_count\n else:\n return 0\n\n def get_url(self, proposal):\n return proposal.get_absolute_url()\n\n def get_moderator_feedback(self, proposal):\n if hasattr(proposal, 'moderator_feedback'):\n return (proposal.moderator_feedback,\n proposal.get_moderator_feedback_display())\n else:\n return None\n\n def get_session_token_voted(self, proposal):\n \"\"\"Serialize if proposal has been voted.\n\n Returns bool that indicates whether the proposal has\n been voted with the token in the current session\n \"\"\"\n if 'request' in self.context:\n if 'voting_token' in self.context['request'].session:\n vote = TokenVote.objects.filter(\n token__pk=self.context['request'].session['voting_token'],\n content_type=ContentType.objects.get_for_model(\n proposal.__class__),\n object_pk=proposal.pk\n )\n if vote.exists():\n return True\n\n return False\n", "path": "meinberlin/apps/budgeting/serializers.py"}]}
| 1,803 | 317 |
gh_patches_debug_56812
|
rasdani/github-patches
|
git_diff
|
microsoft__knossos-ksc-1027
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Segmentation fault in sqrl_pytorch-PyTorch CUDA
Just saw this while working on something else. I haven't done a lot to debug it, but note that it's in copydown, on a fairly innocuous operation (aten::sum(Tensor 2) -> Float), so might be something to do with KS_ALLOCATOR not being defined?
Or could just be out of memory not caught?

</issue>
<code>
[start of examples/dl-capsule/sqrl.py]
1 import torch
2 import ksc.torch_frontend as knossos
3
4 # run-bench: Knossos source, and "nice" PyTorch implementation
5 # BEGINDOC
6 @knossos.register
7 def sqrl(x: torch.Tensor):
8 """
9 sqrl: Squared Leaky Relu
10 Like a capsule from /Stuck in a Rut/
11 Typically x is a 4x4 tensor, possibly
12 packed in a 4n x 4m array
13 """
14 y = torch.sum(x)
15 if y < 0.0:
16 t = -0.125 * x
17 else:
18 t = 1 / 2 * x ** 2
19 return torch.mean(torch.sin(t) * t)
20
21
22 # ENDDOC
23
24 # run-bench: PyTorch "fast" implementation
25 def sqrl_pytorch(x: torch.Tensor):
26 return sqrl(x)
27
28
29 # run-bench: PyTorch "nice" implementation
30 def sqrl_pytorch_nice(x: torch.Tensor):
31 return sqrl(x)
32
33
34 # run-bench: Define a range of values at which to call the methods
35 def sqrl_bench_configs():
36 yield torch.randn((4, 4))
37 yield torch.randn((16, 16))
38
39
40 #################################
41 #
42 # vsqrl - vectorized sqrl
43 #
44
45 vsqrl = knossos.vmap(sqrl)
46
47
48 # run-bench: Define a range of values at which to call the methods
49 def vsqrl_bench_configs():
50 yield torch.randn((10, 4, 4))
51 yield torch.randn((1000, 4, 4))
52 yield torch.randn((1000, 16, 16))
53
[end of examples/dl-capsule/sqrl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/dl-capsule/sqrl.py b/examples/dl-capsule/sqrl.py
--- a/examples/dl-capsule/sqrl.py
+++ b/examples/dl-capsule/sqrl.py
@@ -23,12 +23,12 @@
# run-bench: PyTorch "fast" implementation
def sqrl_pytorch(x: torch.Tensor):
- return sqrl(x)
+ return sqrl.raw_f(x)
# run-bench: PyTorch "nice" implementation
def sqrl_pytorch_nice(x: torch.Tensor):
- return sqrl(x)
+ return sqrl.raw_f(x)
# run-bench: Define a range of values at which to call the methods
|
{"golden_diff": "diff --git a/examples/dl-capsule/sqrl.py b/examples/dl-capsule/sqrl.py\n--- a/examples/dl-capsule/sqrl.py\n+++ b/examples/dl-capsule/sqrl.py\n@@ -23,12 +23,12 @@\n \n # run-bench: PyTorch \"fast\" implementation\n def sqrl_pytorch(x: torch.Tensor):\n- return sqrl(x)\n+ return sqrl.raw_f(x)\n \n \n # run-bench: PyTorch \"nice\" implementation\n def sqrl_pytorch_nice(x: torch.Tensor):\n- return sqrl(x)\n+ return sqrl.raw_f(x)\n \n \n # run-bench: Define a range of values at which to call the methods\n", "issue": "Bug: Segmentation fault in sqrl_pytorch-PyTorch CUDA\nJust saw this while working on something else. I haven't done a lot to debug it, but note that it's in copydown, on a fairly innocuous operation (aten::sum(Tensor 2) -> Float), so might be something to do with KS_ALLOCATOR not being defined?\r\nOr could just be out of memory not caught?\r\n\r\n\n", "before_files": [{"content": "import torch\nimport ksc.torch_frontend as knossos\n\n# run-bench: Knossos source, and \"nice\" PyTorch implementation\n# BEGINDOC\[email protected]\ndef sqrl(x: torch.Tensor):\n \"\"\"\n sqrl: Squared Leaky Relu\n Like a capsule from /Stuck in a Rut/\n Typically x is a 4x4 tensor, possibly\n packed in a 4n x 4m array\n \"\"\"\n y = torch.sum(x)\n if y < 0.0:\n t = -0.125 * x\n else:\n t = 1 / 2 * x ** 2\n return torch.mean(torch.sin(t) * t)\n\n\n# ENDDOC\n\n# run-bench: PyTorch \"fast\" implementation\ndef sqrl_pytorch(x: torch.Tensor):\n return sqrl(x)\n\n\n# run-bench: PyTorch \"nice\" implementation\ndef sqrl_pytorch_nice(x: torch.Tensor):\n return sqrl(x)\n\n\n# run-bench: Define a range of values at which to call the methods\ndef sqrl_bench_configs():\n yield torch.randn((4, 4))\n yield torch.randn((16, 16))\n\n\n#################################\n#\n# vsqrl - vectorized sqrl\n#\n\nvsqrl = knossos.vmap(sqrl)\n\n\n# run-bench: Define a range of values at which to call the methods\ndef vsqrl_bench_configs():\n yield torch.randn((10, 4, 4))\n yield torch.randn((1000, 4, 4))\n yield torch.randn((1000, 16, 16))\n", "path": "examples/dl-capsule/sqrl.py"}]}
| 1,177 | 166 |
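The fix in the row above swaps `sqrl(x)` for `sqrl.raw_f(x)`, so the benchmark baselines call the plain PyTorch function instead of the Knossos-compiled path. The sketch below illustrates that general pattern with a stand-in decorator; it is not the real `ksc.torch_frontend.register`, whose internals are not shown in the record.

```
import torch

def register(fn):
    # Stand-in for a compile/registration decorator: calls would normally be
    # dispatched to a compiled kernel, while .raw_f keeps the original
    # Python/PyTorch implementation around for baseline comparisons.
    class Registered:
        def __init__(self, f):
            self.raw_f = f
        def __call__(self, *args, **kwargs):
            return self.raw_f(*args, **kwargs)  # a real frontend would call the compiled path here
    return Registered(fn)

@register
def sqrl(x: torch.Tensor):
    y = torch.sum(x)
    t = -0.125 * x if y < 0.0 else 0.5 * x ** 2
    return torch.mean(torch.sin(t) * t)

baseline = sqrl.raw_f(torch.randn(4, 4))  # bypasses the (stand-in) compiled path
```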
gh_patches_debug_8708
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-6851
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add WantAuthnRequestsSigned Parameter to SAML Metadata Endpoint for Improved Integration with Spring Boot SAML Library
**Is your feature request related to a problem? Please describe.**
Yes, I consistently encounter an issue when integrating with the Spring Boot SAML Library. When the **WantAuthnRequestsSigned** parameter is missing in the SAML metadata endpoint within the XML file, the authentication page displays the message: "Verification Certificate configured, but request is not signed."
**Describe the solution you'd like**
I would like the **WantAuthnRequestsSigned="true"** parameter to be added to the XML file in the SAML metadata endpoint for the element **<md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">**.
**Describe alternatives you've considered**
A potential alternative would be to modify the Spring Boot SAML Library to function even without the **WantAuthnRequestsSigned** parameter. However, this could lead to security concerns and is not the recommended approach.
**Additional context**
Adding this parameter is crucial for the correct functioning of authentication and the security of the system. It would be highly beneficial if this parameter were included in the XML file by default to prevent future integration issues.
</issue>
<code>
[start of authentik/providers/saml/processors/metadata.py]
1 """SAML Identity Provider Metadata Processor"""
2 from hashlib import sha256
3 from typing import Iterator, Optional
4
5 import xmlsec # nosec
6 from django.http import HttpRequest
7 from django.urls import reverse
8 from lxml.etree import Element, SubElement, tostring # nosec
9
10 from authentik.providers.saml.models import SAMLProvider
11 from authentik.providers.saml.utils.encoding import strip_pem_header
12 from authentik.sources.saml.processors.constants import (
13 DIGEST_ALGORITHM_TRANSLATION_MAP,
14 NS_MAP,
15 NS_SAML_METADATA,
16 NS_SAML_PROTOCOL,
17 NS_SIGNATURE,
18 SAML_BINDING_POST,
19 SAML_BINDING_REDIRECT,
20 SAML_NAME_ID_FORMAT_EMAIL,
21 SAML_NAME_ID_FORMAT_PERSISTENT,
22 SAML_NAME_ID_FORMAT_TRANSIENT,
23 SAML_NAME_ID_FORMAT_X509,
24 SIGN_ALGORITHM_TRANSFORM_MAP,
25 )
26
27
28 class MetadataProcessor:
29 """SAML Identity Provider Metadata Processor"""
30
31 provider: SAMLProvider
32 http_request: HttpRequest
33 force_binding: Optional[str]
34
35 def __init__(self, provider: SAMLProvider, request: HttpRequest):
36 self.provider = provider
37 self.http_request = request
38 self.force_binding = None
39 self.xml_id = "_" + sha256(f"{provider.name}-{provider.pk}".encode("ascii")).hexdigest()
40
41 def get_signing_key_descriptor(self) -> Optional[Element]:
42 """Get Signing KeyDescriptor, if enabled for the provider"""
43 if not self.provider.signing_kp:
44 return None
45 key_descriptor = Element(f"{{{NS_SAML_METADATA}}}KeyDescriptor")
46 key_descriptor.attrib["use"] = "signing"
47 key_info = SubElement(key_descriptor, f"{{{NS_SIGNATURE}}}KeyInfo")
48 x509_data = SubElement(key_info, f"{{{NS_SIGNATURE}}}X509Data")
49 x509_certificate = SubElement(x509_data, f"{{{NS_SIGNATURE}}}X509Certificate")
50 x509_certificate.text = strip_pem_header(
51 self.provider.signing_kp.certificate_data.replace("\r", "")
52 )
53 return key_descriptor
54
55 def get_name_id_formats(self) -> Iterator[Element]:
56 """Get compatible NameID Formats"""
57 formats = [
58 SAML_NAME_ID_FORMAT_EMAIL,
59 SAML_NAME_ID_FORMAT_PERSISTENT,
60 SAML_NAME_ID_FORMAT_X509,
61 SAML_NAME_ID_FORMAT_TRANSIENT,
62 ]
63 for name_id_format in formats:
64 element = Element(f"{{{NS_SAML_METADATA}}}NameIDFormat")
65 element.text = name_id_format
66 yield element
67
68 def get_sso_bindings(self) -> Iterator[Element]:
69 """Get all Bindings supported"""
70 binding_url_map = {
71 (SAML_BINDING_REDIRECT, "SingleSignOnService"): self.http_request.build_absolute_uri(
72 reverse(
73 "authentik_providers_saml:sso-redirect",
74 kwargs={"application_slug": self.provider.application.slug},
75 )
76 ),
77 (SAML_BINDING_POST, "SingleSignOnService"): self.http_request.build_absolute_uri(
78 reverse(
79 "authentik_providers_saml:sso-post",
80 kwargs={"application_slug": self.provider.application.slug},
81 )
82 ),
83 }
84 for binding_svc, url in binding_url_map.items():
85 binding, svc = binding_svc
86 if self.force_binding and self.force_binding != binding:
87 continue
88 element = Element(f"{{{NS_SAML_METADATA}}}{svc}")
89 element.attrib["Binding"] = binding
90 element.attrib["Location"] = url
91 yield element
92
93 def get_slo_bindings(self) -> Iterator[Element]:
94 """Get all Bindings supported"""
95 binding_url_map = {
96 (SAML_BINDING_REDIRECT, "SingleLogoutService"): self.http_request.build_absolute_uri(
97 reverse(
98 "authentik_providers_saml:slo-redirect",
99 kwargs={"application_slug": self.provider.application.slug},
100 )
101 ),
102 (SAML_BINDING_POST, "SingleLogoutService"): self.http_request.build_absolute_uri(
103 reverse(
104 "authentik_providers_saml:slo-post",
105 kwargs={"application_slug": self.provider.application.slug},
106 )
107 ),
108 }
109 for binding_svc, url in binding_url_map.items():
110 binding, svc = binding_svc
111 if self.force_binding and self.force_binding != binding:
112 continue
113 element = Element(f"{{{NS_SAML_METADATA}}}{svc}")
114 element.attrib["Binding"] = binding
115 element.attrib["Location"] = url
116 yield element
117
118 def _prepare_signature(self, entity_descriptor: Element):
119 sign_algorithm_transform = SIGN_ALGORITHM_TRANSFORM_MAP.get(
120 self.provider.signature_algorithm, xmlsec.constants.TransformRsaSha1
121 )
122 signature = xmlsec.template.create(
123 entity_descriptor,
124 xmlsec.constants.TransformExclC14N,
125 sign_algorithm_transform,
126 ns="ds", # type: ignore
127 )
128 entity_descriptor.append(signature)
129
130 def _sign(self, entity_descriptor: Element):
131 digest_algorithm_transform = DIGEST_ALGORITHM_TRANSLATION_MAP.get(
132 self.provider.digest_algorithm, xmlsec.constants.TransformSha1
133 )
134 assertion = entity_descriptor.xpath("//md:EntityDescriptor", namespaces=NS_MAP)[0]
135 xmlsec.tree.add_ids(assertion, ["ID"])
136 signature_node = xmlsec.tree.find_node(assertion, xmlsec.constants.NodeSignature)
137 ref = xmlsec.template.add_reference(
138 signature_node,
139 digest_algorithm_transform,
140 uri="#" + self.xml_id,
141 )
142 xmlsec.template.add_transform(ref, xmlsec.constants.TransformEnveloped)
143 xmlsec.template.add_transform(ref, xmlsec.constants.TransformExclC14N)
144 key_info = xmlsec.template.ensure_key_info(signature_node)
145 xmlsec.template.add_x509_data(key_info)
146
147 ctx = xmlsec.SignatureContext()
148
149 key = xmlsec.Key.from_memory(
150 self.provider.signing_kp.key_data,
151 xmlsec.constants.KeyDataFormatPem,
152 None,
153 )
154 key.load_cert_from_memory(
155 self.provider.signing_kp.certificate_data,
156 xmlsec.constants.KeyDataFormatCertPem,
157 )
158 ctx.key = key
159 ctx.sign(signature_node)
160
161 def build_entity_descriptor(self) -> str:
162 """Build full EntityDescriptor"""
163 entity_descriptor = Element(f"{{{NS_SAML_METADATA}}}EntityDescriptor", nsmap=NS_MAP)
164 entity_descriptor.attrib["ID"] = self.xml_id
165 entity_descriptor.attrib["entityID"] = self.provider.issuer
166
167 if self.provider.signing_kp:
168 self._prepare_signature(entity_descriptor)
169
170 idp_sso_descriptor = SubElement(
171 entity_descriptor, f"{{{NS_SAML_METADATA}}}IDPSSODescriptor"
172 )
173 idp_sso_descriptor.attrib["protocolSupportEnumeration"] = NS_SAML_PROTOCOL
174
175 signing_descriptor = self.get_signing_key_descriptor()
176 if signing_descriptor is not None:
177 idp_sso_descriptor.append(signing_descriptor)
178
179 for binding in self.get_slo_bindings():
180 idp_sso_descriptor.append(binding)
181
182 for name_id_format in self.get_name_id_formats():
183 idp_sso_descriptor.append(name_id_format)
184
185 for binding in self.get_sso_bindings():
186 idp_sso_descriptor.append(binding)
187
188 if self.provider.signing_kp:
189 self._sign(entity_descriptor)
190
191 return tostring(entity_descriptor).decode()
192
[end of authentik/providers/saml/processors/metadata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/providers/saml/processors/metadata.py b/authentik/providers/saml/processors/metadata.py
--- a/authentik/providers/saml/processors/metadata.py
+++ b/authentik/providers/saml/processors/metadata.py
@@ -171,6 +171,8 @@
entity_descriptor, f"{{{NS_SAML_METADATA}}}IDPSSODescriptor"
)
idp_sso_descriptor.attrib["protocolSupportEnumeration"] = NS_SAML_PROTOCOL
+ if self.provider.verification_kp:
+ idp_sso_descriptor.attrib["WantAuthnRequestsSigned"] = "true"
signing_descriptor = self.get_signing_key_descriptor()
if signing_descriptor is not None:
|
{"golden_diff": "diff --git a/authentik/providers/saml/processors/metadata.py b/authentik/providers/saml/processors/metadata.py\n--- a/authentik/providers/saml/processors/metadata.py\n+++ b/authentik/providers/saml/processors/metadata.py\n@@ -171,6 +171,8 @@\n entity_descriptor, f\"{{{NS_SAML_METADATA}}}IDPSSODescriptor\"\n )\n idp_sso_descriptor.attrib[\"protocolSupportEnumeration\"] = NS_SAML_PROTOCOL\n+ if self.provider.verification_kp:\n+ idp_sso_descriptor.attrib[\"WantAuthnRequestsSigned\"] = \"true\"\n \n signing_descriptor = self.get_signing_key_descriptor()\n if signing_descriptor is not None:\n", "issue": "Add WantAuthnRequestsSigned Parameter to SAML Metadata Endpoint for Improved Integration with Spring Boot SAML Library\n**Is your feature request related to a problem? Please describe.**\r\nYes, I consistently encounter an issue when integrating with the Spring Boot SAML Library. When the **WantAuthnRequestsSigned** parameter is missing in the SAML metadata endpoint within the XML file, the authentication page displays the message: \"Verification Certificate configured, but request is not signed.\"\r\n\r\n**Describe the solution you'd like**\r\nI would like the **WantAuthnRequestsSigned=\"true\"** parameter to be added to the XML file in the SAML metadata endpoint for the element **<md:IDPSSODescriptor protocolSupportEnumeration=\"urn:oasis:names:tc:SAML:2.0:protocol\">**.\r\n\r\n**Describe alternatives you've considered**\r\nA potential alternative would be to modify the Spring Boot SAML Library to function even without the **WantAuthnRequestsSigned** parameter. However, this could lead to security concerns and is not the recommended approach.\r\n\r\n**Additional context**\r\nAdding this parameter is crucial for the correct functioning of authentication and the security of the system. 
It would be highly beneficial if this parameter were included in the XML file by default to prevent future integration issues.\n", "before_files": [{"content": "\"\"\"SAML Identity Provider Metadata Processor\"\"\"\nfrom hashlib import sha256\nfrom typing import Iterator, Optional\n\nimport xmlsec # nosec\nfrom django.http import HttpRequest\nfrom django.urls import reverse\nfrom lxml.etree import Element, SubElement, tostring # nosec\n\nfrom authentik.providers.saml.models import SAMLProvider\nfrom authentik.providers.saml.utils.encoding import strip_pem_header\nfrom authentik.sources.saml.processors.constants import (\n DIGEST_ALGORITHM_TRANSLATION_MAP,\n NS_MAP,\n NS_SAML_METADATA,\n NS_SAML_PROTOCOL,\n NS_SIGNATURE,\n SAML_BINDING_POST,\n SAML_BINDING_REDIRECT,\n SAML_NAME_ID_FORMAT_EMAIL,\n SAML_NAME_ID_FORMAT_PERSISTENT,\n SAML_NAME_ID_FORMAT_TRANSIENT,\n SAML_NAME_ID_FORMAT_X509,\n SIGN_ALGORITHM_TRANSFORM_MAP,\n)\n\n\nclass MetadataProcessor:\n \"\"\"SAML Identity Provider Metadata Processor\"\"\"\n\n provider: SAMLProvider\n http_request: HttpRequest\n force_binding: Optional[str]\n\n def __init__(self, provider: SAMLProvider, request: HttpRequest):\n self.provider = provider\n self.http_request = request\n self.force_binding = None\n self.xml_id = \"_\" + sha256(f\"{provider.name}-{provider.pk}\".encode(\"ascii\")).hexdigest()\n\n def get_signing_key_descriptor(self) -> Optional[Element]:\n \"\"\"Get Signing KeyDescriptor, if enabled for the provider\"\"\"\n if not self.provider.signing_kp:\n return None\n key_descriptor = Element(f\"{{{NS_SAML_METADATA}}}KeyDescriptor\")\n key_descriptor.attrib[\"use\"] = \"signing\"\n key_info = SubElement(key_descriptor, f\"{{{NS_SIGNATURE}}}KeyInfo\")\n x509_data = SubElement(key_info, f\"{{{NS_SIGNATURE}}}X509Data\")\n x509_certificate = SubElement(x509_data, f\"{{{NS_SIGNATURE}}}X509Certificate\")\n x509_certificate.text = strip_pem_header(\n self.provider.signing_kp.certificate_data.replace(\"\\r\", \"\")\n )\n return key_descriptor\n\n def get_name_id_formats(self) -> Iterator[Element]:\n \"\"\"Get compatible NameID Formats\"\"\"\n formats = [\n SAML_NAME_ID_FORMAT_EMAIL,\n SAML_NAME_ID_FORMAT_PERSISTENT,\n SAML_NAME_ID_FORMAT_X509,\n SAML_NAME_ID_FORMAT_TRANSIENT,\n ]\n for name_id_format in formats:\n element = Element(f\"{{{NS_SAML_METADATA}}}NameIDFormat\")\n element.text = name_id_format\n yield element\n\n def get_sso_bindings(self) -> Iterator[Element]:\n \"\"\"Get all Bindings supported\"\"\"\n binding_url_map = {\n (SAML_BINDING_REDIRECT, \"SingleSignOnService\"): self.http_request.build_absolute_uri(\n reverse(\n \"authentik_providers_saml:sso-redirect\",\n kwargs={\"application_slug\": self.provider.application.slug},\n )\n ),\n (SAML_BINDING_POST, \"SingleSignOnService\"): self.http_request.build_absolute_uri(\n reverse(\n \"authentik_providers_saml:sso-post\",\n kwargs={\"application_slug\": self.provider.application.slug},\n )\n ),\n }\n for binding_svc, url in binding_url_map.items():\n binding, svc = binding_svc\n if self.force_binding and self.force_binding != binding:\n continue\n element = Element(f\"{{{NS_SAML_METADATA}}}{svc}\")\n element.attrib[\"Binding\"] = binding\n element.attrib[\"Location\"] = url\n yield element\n\n def get_slo_bindings(self) -> Iterator[Element]:\n \"\"\"Get all Bindings supported\"\"\"\n binding_url_map = {\n (SAML_BINDING_REDIRECT, \"SingleLogoutService\"): self.http_request.build_absolute_uri(\n reverse(\n \"authentik_providers_saml:slo-redirect\",\n kwargs={\"application_slug\": 
self.provider.application.slug},\n )\n ),\n (SAML_BINDING_POST, \"SingleLogoutService\"): self.http_request.build_absolute_uri(\n reverse(\n \"authentik_providers_saml:slo-post\",\n kwargs={\"application_slug\": self.provider.application.slug},\n )\n ),\n }\n for binding_svc, url in binding_url_map.items():\n binding, svc = binding_svc\n if self.force_binding and self.force_binding != binding:\n continue\n element = Element(f\"{{{NS_SAML_METADATA}}}{svc}\")\n element.attrib[\"Binding\"] = binding\n element.attrib[\"Location\"] = url\n yield element\n\n def _prepare_signature(self, entity_descriptor: Element):\n sign_algorithm_transform = SIGN_ALGORITHM_TRANSFORM_MAP.get(\n self.provider.signature_algorithm, xmlsec.constants.TransformRsaSha1\n )\n signature = xmlsec.template.create(\n entity_descriptor,\n xmlsec.constants.TransformExclC14N,\n sign_algorithm_transform,\n ns=\"ds\", # type: ignore\n )\n entity_descriptor.append(signature)\n\n def _sign(self, entity_descriptor: Element):\n digest_algorithm_transform = DIGEST_ALGORITHM_TRANSLATION_MAP.get(\n self.provider.digest_algorithm, xmlsec.constants.TransformSha1\n )\n assertion = entity_descriptor.xpath(\"//md:EntityDescriptor\", namespaces=NS_MAP)[0]\n xmlsec.tree.add_ids(assertion, [\"ID\"])\n signature_node = xmlsec.tree.find_node(assertion, xmlsec.constants.NodeSignature)\n ref = xmlsec.template.add_reference(\n signature_node,\n digest_algorithm_transform,\n uri=\"#\" + self.xml_id,\n )\n xmlsec.template.add_transform(ref, xmlsec.constants.TransformEnveloped)\n xmlsec.template.add_transform(ref, xmlsec.constants.TransformExclC14N)\n key_info = xmlsec.template.ensure_key_info(signature_node)\n xmlsec.template.add_x509_data(key_info)\n\n ctx = xmlsec.SignatureContext()\n\n key = xmlsec.Key.from_memory(\n self.provider.signing_kp.key_data,\n xmlsec.constants.KeyDataFormatPem,\n None,\n )\n key.load_cert_from_memory(\n self.provider.signing_kp.certificate_data,\n xmlsec.constants.KeyDataFormatCertPem,\n )\n ctx.key = key\n ctx.sign(signature_node)\n\n def build_entity_descriptor(self) -> str:\n \"\"\"Build full EntityDescriptor\"\"\"\n entity_descriptor = Element(f\"{{{NS_SAML_METADATA}}}EntityDescriptor\", nsmap=NS_MAP)\n entity_descriptor.attrib[\"ID\"] = self.xml_id\n entity_descriptor.attrib[\"entityID\"] = self.provider.issuer\n\n if self.provider.signing_kp:\n self._prepare_signature(entity_descriptor)\n\n idp_sso_descriptor = SubElement(\n entity_descriptor, f\"{{{NS_SAML_METADATA}}}IDPSSODescriptor\"\n )\n idp_sso_descriptor.attrib[\"protocolSupportEnumeration\"] = NS_SAML_PROTOCOL\n\n signing_descriptor = self.get_signing_key_descriptor()\n if signing_descriptor is not None:\n idp_sso_descriptor.append(signing_descriptor)\n\n for binding in self.get_slo_bindings():\n idp_sso_descriptor.append(binding)\n\n for name_id_format in self.get_name_id_formats():\n idp_sso_descriptor.append(name_id_format)\n\n for binding in self.get_sso_bindings():\n idp_sso_descriptor.append(binding)\n\n if self.provider.signing_kp:\n self._sign(entity_descriptor)\n\n return tostring(entity_descriptor).decode()\n", "path": "authentik/providers/saml/processors/metadata.py"}]}
| 2,857 | 158 |
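For readers unfamiliar with the attribute added in the row above: `WantAuthnRequestsSigned` belongs on the `IDPSSODescriptor` element of SAML IdP metadata. The standalone lxml sketch below shows where it sits in the XML; it is not authentik's actual metadata processor, and the namespace constants are spelled out inline instead of coming from authentik's `NS_MAP`.

```
from lxml.etree import Element, SubElement, tostring

NS_MD = "urn:oasis:names:tc:SAML:2.0:metadata"
NS_PROTOCOL = "urn:oasis:names:tc:SAML:2.0:protocol"

def build_idp_descriptor(want_signed_requests: bool) -> str:
    entity = Element(f"{{{NS_MD}}}EntityDescriptor", nsmap={"md": NS_MD})
    idp = SubElement(entity, f"{{{NS_MD}}}IDPSSODescriptor")
    idp.attrib["protocolSupportEnumeration"] = NS_PROTOCOL
    if want_signed_requests:
        # Tells service providers (e.g. Spring Security SAML) that
        # AuthnRequests sent to this IdP must be signed.
        idp.attrib["WantAuthnRequestsSigned"] = "true"
    return tostring(entity, pretty_print=True).decode()

print(build_idp_descriptor(True))
```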
gh_patches_debug_31908
|
rasdani/github-patches
|
git_diff
|
rucio__rucio-5322
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add add-exception command in the CLI
Motivation
----------
A CLI command to add a new exception is missing and needs to be added
</issue>
<code>
[start of lib/rucio/client/lifetimeclient.py]
1 # Copyright 2017-2018 CERN for the benefit of the ATLAS collaboration.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # Authors:
16 # - Cedric Serfon <[email protected]>, 2017
17 # - Vincent Garonne <[email protected]>, 2018
18 # - Martin Barisits <[email protected]>, 2018
19 # - Andrew Lister <[email protected]>, 2019
20
21 from __future__ import print_function
22
23 from json import loads
24 from requests.status_codes import codes
25
26 from rucio.client.baseclient import BaseClient
27 from rucio.client.baseclient import choice
28 from rucio.common.utils import build_url, render_json
29
30
31 class LifetimeClient(BaseClient):
32
33 """Lifetime client class for working with Lifetime Model exceptions"""
34
35 LIFETIME_BASEURL = 'lifetime_exceptions'
36
37 def list_exceptions(self, exception_id=None, states=None):
38 """
39 List exceptions to Lifetime Model.
40
41 :param id: The id of the exception
42 :param states: The states to filter
43 """
44
45 path = self.LIFETIME_BASEURL + '/'
46 params = {}
47 if exception_id:
48 params['exception_id'] = exception_id
49 if states:
50 params['states'] = exception_id
51 url = build_url(choice(self.list_hosts), path=path, params=params)
52
53 result = self._send_request(url)
54 if result.status_code == codes.ok:
55 lifetime_exceptions = self._load_json_data(result)
56 return lifetime_exceptions
57 else:
58 exc_cls, exc_msg = self._get_exception(headers=result.headers, status_code=result.status_code)
59 raise exc_cls(exc_msg)
60
61 def add_exception(self, dids, account, pattern, comments, expires_at):
62 """
63 Add exceptions to Lifetime Model.
64
65 :param dids: The list of dids
66 :param account: The account of the requester.
67 :param pattern: The account.
68 :param comments: The comments associated to the exception.
69 :param expires_at: The expiration date of the exception.
70
71 returns: The id of the exception.
72 """
73 path = self.LIFETIME_BASEURL + '/'
74 url = build_url(choice(self.list_hosts), path=path)
75 data = {'dids': dids, 'account': account, 'pattern': pattern, 'comments': comments, 'expires_at': expires_at}
76 print(render_json(**data))
77 result = self._send_request(url, type_='POST', data=render_json(**data))
78 print(result.text)
79 if result.status_code == codes.created:
80 return loads(result.text)
81 exc_cls, exc_msg = self._get_exception(headers=result.headers, status_code=result.status_code, data=result.content)
82 raise exc_cls(exc_msg)
83
[end of lib/rucio/client/lifetimeclient.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/rucio/client/lifetimeclient.py b/lib/rucio/client/lifetimeclient.py
--- a/lib/rucio/client/lifetimeclient.py
+++ b/lib/rucio/client/lifetimeclient.py
@@ -1,4 +1,5 @@
-# Copyright 2017-2018 CERN for the benefit of the ATLAS collaboration.
+# -*- coding: utf-8 -*-
+# Copyright 2017-2022 CERN
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,10 +14,13 @@
# limitations under the License.
#
# Authors:
-# - Cedric Serfon <[email protected]>, 2017
-# - Vincent Garonne <[email protected]>, 2018
+# - Cedric Serfon <[email protected]>, 2017-2022
+# - Vincent Garonne <[email protected]>, 2018
+# - Joaquín Bogado <[email protected]>, 2018
# - Martin Barisits <[email protected]>, 2018
# - Andrew Lister <[email protected]>, 2019
+# - David Población Criado <[email protected]>, 2021
+# - Igor Mandrichenko <[email protected]>, 2021
from __future__ import print_function
@@ -73,9 +77,7 @@
path = self.LIFETIME_BASEURL + '/'
url = build_url(choice(self.list_hosts), path=path)
data = {'dids': dids, 'account': account, 'pattern': pattern, 'comments': comments, 'expires_at': expires_at}
- print(render_json(**data))
result = self._send_request(url, type_='POST', data=render_json(**data))
- print(result.text)
if result.status_code == codes.created:
return loads(result.text)
exc_cls, exc_msg = self._get_exception(headers=result.headers, status_code=result.status_code, data=result.content)
|
{"golden_diff": "diff --git a/lib/rucio/client/lifetimeclient.py b/lib/rucio/client/lifetimeclient.py\n--- a/lib/rucio/client/lifetimeclient.py\n+++ b/lib/rucio/client/lifetimeclient.py\n@@ -1,4 +1,5 @@\n-# Copyright 2017-2018 CERN for the benefit of the ATLAS collaboration.\n+# -*- coding: utf-8 -*-\n+# Copyright 2017-2022 CERN\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -13,10 +14,13 @@\n # limitations under the License.\n #\n # Authors:\n-# - Cedric Serfon <[email protected]>, 2017\n-# - Vincent Garonne <[email protected]>, 2018\n+# - Cedric Serfon <[email protected]>, 2017-2022\n+# - Vincent Garonne <[email protected]>, 2018\n+# - Joaqu\u00edn Bogado <[email protected]>, 2018\n # - Martin Barisits <[email protected]>, 2018\n # - Andrew Lister <[email protected]>, 2019\n+# - David Poblaci\u00f3n Criado <[email protected]>, 2021\n+# - Igor Mandrichenko <[email protected]>, 2021\n \n from __future__ import print_function\n \n@@ -73,9 +77,7 @@\n path = self.LIFETIME_BASEURL + '/'\n url = build_url(choice(self.list_hosts), path=path)\n data = {'dids': dids, 'account': account, 'pattern': pattern, 'comments': comments, 'expires_at': expires_at}\n- print(render_json(**data))\n result = self._send_request(url, type_='POST', data=render_json(**data))\n- print(result.text)\n if result.status_code == codes.created:\n return loads(result.text)\n exc_cls, exc_msg = self._get_exception(headers=result.headers, status_code=result.status_code, data=result.content)\n", "issue": "Add add-exception command in the CLI\nMotivation\r\n----------\r\nA CLI command to add a new exception is missing and need to be added\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2017-2018 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Cedric Serfon <[email protected]>, 2017\n# - Vincent Garonne <[email protected]>, 2018\n# - Martin Barisits <[email protected]>, 2018\n# - Andrew Lister <[email protected]>, 2019\n\nfrom __future__ import print_function\n\nfrom json import loads\nfrom requests.status_codes import codes\n\nfrom rucio.client.baseclient import BaseClient\nfrom rucio.client.baseclient import choice\nfrom rucio.common.utils import build_url, render_json\n\n\nclass LifetimeClient(BaseClient):\n\n \"\"\"Lifetime client class for working with Lifetime Model exceptions\"\"\"\n\n LIFETIME_BASEURL = 'lifetime_exceptions'\n\n def list_exceptions(self, exception_id=None, states=None):\n \"\"\"\n List exceptions to Lifetime Model.\n\n :param id: The id of the exception\n :param states: The states to filter\n \"\"\"\n\n path = self.LIFETIME_BASEURL + '/'\n params = {}\n if exception_id:\n params['exception_id'] = exception_id\n if states:\n params['states'] = exception_id\n url = build_url(choice(self.list_hosts), path=path, params=params)\n\n result = self._send_request(url)\n if result.status_code == codes.ok:\n lifetime_exceptions = 
self._load_json_data(result)\n return lifetime_exceptions\n else:\n exc_cls, exc_msg = self._get_exception(headers=result.headers, status_code=result.status_code)\n raise exc_cls(exc_msg)\n\n def add_exception(self, dids, account, pattern, comments, expires_at):\n \"\"\"\n Add exceptions to Lifetime Model.\n\n :param dids: The list of dids\n :param account: The account of the requester.\n :param pattern: The account.\n :param comments: The comments associated to the exception.\n :param expires_at: The expiration date of the exception.\n\n returns: The id of the exception.\n \"\"\"\n path = self.LIFETIME_BASEURL + '/'\n url = build_url(choice(self.list_hosts), path=path)\n data = {'dids': dids, 'account': account, 'pattern': pattern, 'comments': comments, 'expires_at': expires_at}\n print(render_json(**data))\n result = self._send_request(url, type_='POST', data=render_json(**data))\n print(result.text)\n if result.status_code == codes.created:\n return loads(result.text)\n exc_cls, exc_msg = self._get_exception(headers=result.headers, status_code=result.status_code, data=result.content)\n raise exc_cls(exc_msg)\n", "path": "lib/rucio/client/lifetimeclient.py"}]}
| 1,464 | 513 |
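The golden diff in the row above only tidies `LifetimeClient.add_exception`; the CLI command the issue asks for is not shown. Purely as an illustration of how such a command could forward its arguments to that client method, here is a hypothetical argparse sketch. The option names and the `scope:name` DID convention are assumptions, not rucio's actual `bin/rucio` code.

```
import argparse
from datetime import datetime

def add_exception_command(client, args):
    # Hypothetical glue between a CLI and
    # LifetimeClient.add_exception(dids, account, pattern, comments, expires_at).
    dids = [{'scope': d.split(':', 1)[0], 'name': d.split(':', 1)[1]} for d in args.dids]
    expires_at = datetime.strptime(args.expiration, '%Y-%m-%d')
    return client.add_exception(dids=dids, account=args.account, pattern=None,
                                comments=args.reason, expires_at=expires_at)

parser = argparse.ArgumentParser(prog='add-exception (sketch)')
parser.add_argument('--dids', nargs='+', required=True, help='DIDs given as scope:name')
parser.add_argument('--account', required=True, help='account requesting the exception')
parser.add_argument('--reason', required=True, help='comment stored with the exception')
parser.add_argument('--expiration', required=True, help='expiration date, YYYY-MM-DD')
```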
gh_patches_debug_856
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-1451
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dependecy conflict between botframework 4.11.0 and azure-identity 1.5.0
## Version
4.11 (also happening with 4.10)
## Describe the bug
`botframework-connector == 4.11.0` (current) requires `msal == 1.2.0`
`azure-identity == 1.5.0` (current) requires `msal >=1.6.0,<2.0.0`
This created a dependency conflict where bot libraries can't coexist in the same program. This used to work a couple of months ago (I bumped into this issue after revisiting some code I had worked on before).
## To Reproduce
This is my `requirements.txt` file, just add it and run `pipenv install -r requirements.txt` (versions pinned to :
```
botbuilder-core == 4.11
azure-keyvault-secrets
azure-identity == 1.5
botbuilder-ai == 4.11
```
## Expected behavior
Packages should install without conflict
## Screenshots
Extract from the error message `pipenv install` shows:
```
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.
Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: ERROR: Could not find a version that matches msal<2.0.0,==1.2.0,>=0.4.1,>=1.6.0
Tried: 0.1.0, 0.1.0, 0.2.0, 0.2.0, 0.3.0, 0.3.0, 0.3.1, 0.3.1, 0.4.0, 0.4.0, 0.4.1, 0.4.1, 0.5.0, 0.5.0, 0.5.1, 0.5.1, 0.6.0, 0.6.0, 0.6.1, 0.6.1, 0.7.0, 0.7.0, 0.8.0, 0.8.0, 0.8.0, 0.9.0, 0.9.0, 1.0.0, 1.0.0, 1.1.0, 1.1.0, 1.2.0, 1.2.0, 1.3.0, 1.3.0, 1.4.0, 1.4.0, 1.4.1, 1.4.1, 1.4.2, 1.4.2, 1.4.3, 1.4.3, 1.5.0, 1.5.0, 1.5.1, 1.5.1, 1.6.0, 1.6.0, 1.7.0, 1.7.0, 1.8.0, 1.8.0
There are incompatible versions in the resolved dependencies.
```
Relevant extract from the output of `pipenv graph` as per the suggestion above:
```
azure-identity==1.5.0
- msal [required: >=1.6.0,<2.0.0, installed: 1.2.0]
- msal-extensions [required: ~=0.3.0, installed: 0.3.0]
- msal [required: >=0.4.1,<2.0.0, installed: 1.2.0]
azure-keyvault-secrets==4.2.0
botbuilder-ai==4.11.0
- botbuilder-core [required: ==4.11.0, installed: 4.11.0]
- botframework-connector [required: ==4.11.0, installed: 4.11.0]
- msal [required: ==1.2.0, installed: 1.2.0]
```
## Additional context
This issue was also reported in [botbuilder-samples repo's issue 2978](https://github.com/microsoft/BotBuilder-Samples/issues/2978)
</issue>
<code>
[start of libraries/botframework-connector/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 import os
4 from setuptools import setup
5
6 NAME = "botframework-connector"
7 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.12.0"
8 REQUIRES = [
9 "msrest==0.6.10",
10 "requests==2.23.0",
11 "cryptography==3.2",
12 "PyJWT==1.5.3",
13 "botbuilder-schema==4.12.0",
14 "adal==1.2.1",
15 "msal==1.2.0",
16 ]
17
18 root = os.path.abspath(os.path.dirname(__file__))
19
20 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
21 long_description = f.read()
22
23 setup(
24 name=NAME,
25 version=VERSION,
26 description="Microsoft Bot Framework Bot Builder SDK for Python.",
27 author="Microsoft",
28 url="https://www.github.com/Microsoft/botbuilder-python",
29 keywords=["BotFrameworkConnector", "bots", "ai", "botframework", "botbuilder"],
30 install_requires=REQUIRES,
31 packages=[
32 "botframework.connector",
33 "botframework.connector.auth",
34 "botframework.connector.async_mixin",
35 "botframework.connector.operations",
36 "botframework.connector.models",
37 "botframework.connector.aio",
38 "botframework.connector.aio.operations_async",
39 "botframework.connector.teams",
40 "botframework.connector.teams.operations",
41 "botframework.connector.token_api",
42 "botframework.connector.token_api.aio",
43 "botframework.connector.token_api.models",
44 "botframework.connector.token_api.operations",
45 ],
46 include_package_data=True,
47 long_description=long_description,
48 long_description_content_type="text/x-rst",
49 license="MIT",
50 classifiers=[
51 "Programming Language :: Python :: 3.7",
52 "Intended Audience :: Developers",
53 "License :: OSI Approved :: MIT License",
54 "Operating System :: OS Independent",
55 "Development Status :: 5 - Production/Stable",
56 "Topic :: Scientific/Engineering :: Artificial Intelligence",
57 ],
58 )
59
[end of libraries/botframework-connector/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py
--- a/libraries/botframework-connector/setup.py
+++ b/libraries/botframework-connector/setup.py
@@ -12,7 +12,7 @@
"PyJWT==1.5.3",
"botbuilder-schema==4.12.0",
"adal==1.2.1",
- "msal==1.2.0",
+ "msal==1.6.0",
]
root = os.path.abspath(os.path.dirname(__file__))
|
{"golden_diff": "diff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py\n--- a/libraries/botframework-connector/setup.py\n+++ b/libraries/botframework-connector/setup.py\n@@ -12,7 +12,7 @@\n \"PyJWT==1.5.3\",\n \"botbuilder-schema==4.12.0\",\n \"adal==1.2.1\",\n- \"msal==1.2.0\",\n+ \"msal==1.6.0\",\n ]\n \n root = os.path.abspath(os.path.dirname(__file__))\n", "issue": "dependecy conflict between botframework 4.11.0 and azure-identity 1.5.0\n## Version\r\n4.11 (also happening with 4.10)\r\n\r\n## Describe the bug\r\n`botframework-connector == 4.11.0` (current) requires `msal == 1.2.0`\r\n`azure-identity == 1.5.0` (current) requires `msal >=1.6.0,<2.0.0`\r\n\r\nThis created a dependency conflict where bot libraries can't coexist in the same program. This used to work a couple of months ago (I bumped into this issue after revisiting some code I had worked on before).\r\n\r\n## To Reproduce\r\nThis is my `requirements.txt` file, just add it and run `pipenv install -r requirements.txt` (versions pinned to :\r\n```\r\nbotbuilder-core == 4.11\r\nazure-keyvault-secrets\r\nazure-identity == 1.5\r\nbotbuilder-ai == 4.11\r\n```\r\n\r\n## Expected behavior\r\nPackages should install without conflict\r\n\r\n## Screenshots\r\nExtract from the error message `pipenv install` shows:\r\n```\r\n[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.\r\n First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.\r\n Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.\r\n Hint: try $ pipenv lock --pre if it is a pre-release dependency.\r\nERROR: ERROR: Could not find a version that matches msal<2.0.0,==1.2.0,>=0.4.1,>=1.6.0\r\nTried: 0.1.0, 0.1.0, 0.2.0, 0.2.0, 0.3.0, 0.3.0, 0.3.1, 0.3.1, 0.4.0, 0.4.0, 0.4.1, 0.4.1, 0.5.0, 0.5.0, 0.5.1, 0.5.1, 0.6.0, 0.6.0, 0.6.1, 0.6.1, 0.7.0, 0.7.0, 0.8.0, 0.8.0, 0.8.0, 0.9.0, 0.9.0, 1.0.0, 1.0.0, 1.1.0, 1.1.0, 1.2.0, 1.2.0, 1.3.0, 1.3.0, 1.4.0, 1.4.0, 1.4.1, 1.4.1, 1.4.2, 1.4.2, 1.4.3, 1.4.3, 1.5.0, 1.5.0, 1.5.1, 1.5.1, 1.6.0, 1.6.0, 1.7.0, 1.7.0, 1.8.0, 1.8.0\r\nThere are incompatible versions in the resolved dependencies.\r\n```\r\nRelevant extract from the output of `pipenv graph` as per the suggestion above:\r\n```\r\nazure-identity==1.5.0\r\n - msal [required: >=1.6.0,<2.0.0, installed: 1.2.0]\r\n - msal-extensions [required: ~=0.3.0, installed: 0.3.0]\r\n - msal [required: >=0.4.1,<2.0.0, installed: 1.2.0]\r\nazure-keyvault-secrets==4.2.0\r\nbotbuilder-ai==4.11.0\r\n - botbuilder-core [required: ==4.11.0, installed: 4.11.0]\r\n - botframework-connector [required: ==4.11.0, installed: 4.11.0]\r\n - msal [required: ==1.2.0, installed: 1.2.0]\r\n```\r\n\r\n## Additional context\r\nThis issue was also reported in [botbuilder-samples repo's issue 2978](https://github.com/microsoft/BotBuilder-Samples/issues/2978)\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.12.0\"\nREQUIRES = [\n \"msrest==0.6.10\",\n \"requests==2.23.0\",\n \"cryptography==3.2\",\n \"PyJWT==1.5.3\",\n \"botbuilder-schema==4.12.0\",\n \"adal==1.2.1\",\n \"msal==1.2.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botframework-connector/setup.py"}]}
| 2,150 | 131 |
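The row above resolves the conflict by bumping botframework-connector's `msal` pin to 1.6.0. When hunting this kind of clash it can help to print what each installed package declares for the shared dependency; the small diagnostic below uses only the standard library (`importlib.metadata`, Python 3.8+) and is a debugging aid, not part of the fix.

```
from importlib import metadata

def declared_for(package: str, dep: str) -> list:
    # Requirement strings the installed `package` declares for `dep`,
    # e.g. 'msal==1.2.0' or 'msal (>=1.6.0,<2.0.0)'.
    return [r for r in (metadata.requires(package) or []) if r.lower().startswith(dep)]

for pkg in ("botframework-connector", "azure-identity"):
    try:
        print(pkg, "->", declared_for(pkg, "msal") or "no msal requirement")
    except metadata.PackageNotFoundError:
        print(pkg, "is not installed")

print("installed msal:", metadata.version("msal"))
```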
gh_patches_debug_4120
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-2709
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/cli/launcher/__init__.py]
1 import click
2 from .run import launch_multi_processes
3 from colossalai.context import Config
4
5
6 @click.command(help="Launch distributed training on a single node or multiple nodes",
7 context_settings=dict(ignore_unknown_options=True))
8 @click.option("-H",
9 "-host",
10 "--host",
11 type=str,
12 default=None,
13 help="the list of hostnames to launch in the format <host1>,<host2>")
14 @click.option(
15 "--hostfile",
16 type=str,
17 default=None,
18 help="Hostfile path that defines the device pool available to the job, each line in the file is a hostname")
19 @click.option("--include",
20 type=str,
21 default=None,
22 help="Specify computing devices to use during execution. String format is <host1>,<host2>,"
23 " only effective when used with --hostfile.")
24 @click.option(
25 "--exclude",
26 type=str,
27 default=None,
28 help=
29 "Specify computing devices to NOT use during execution. Mutually exclusive with --include. Formatting is the same as --includ,"
30 " only effective when used with --hostfile.")
31 @click.option("--num_nodes",
32 type=int,
33 default=-1,
34 help="Total number of worker nodes to use, only effective when used with --hostfile.")
35 @click.option("--nproc_per_node", type=int, default=None, help="Number of GPUs to use on each node.")
36 @click.option("--master_port",
37 type=int,
38 default=29500,
39 help="(optional) Port used by PyTorch distributed for communication during distributed training.")
40 @click.option("--master_addr",
41 type=str,
42 default="127.0.0.1",
43 help="(optional) IP address of node 0, will be inferred via 'hostname -I' if not specified.")
44 @click.option(
45 "--extra_launch_args",
46 type=str,
47 default=None,
48 help=
49 "Set additional torch distributed launcher arguments such as --standalone. The format is --extra_launch_args arg1=1,arg2=2. "
50 "This will be converted to --arg1=1 --arg2=2 during execution")
51 @click.option("--ssh-port", type=int, default=None, help="(optional) the port used for ssh connection")
52 @click.argument("user_script", type=str)
53 @click.argument('user_args', nargs=-1)
54 def run(host: str, hostfile: str, num_nodes: int, nproc_per_node: int, include: str, exclude: str, master_addr: str,
55 master_port: int, extra_launch_args: str, ssh_port: int, user_script: str, user_args: str) -> None:
56 """
57 To launch multiple processes on a single node or multiple nodes via command line.
58
59 Usage::
60 # run with 4 GPUs on the current node use default port 29500
61 colossalai run --nprocs_per_node 4 train.py
62
63 # run with 2 GPUs on the current node at port 29550
64 colossalai run --nprocs_per_node 4 --master_port 29550 train.py
65
66 # run on two nodes
67 colossalai run --host <host1>,<host2> --master_addr host1 --nprocs_per_node 4 train.py
68
69 # run with hostfile
70 colossalai run --hostfile <file_path> --master_addr <host> --nprocs_per_node 4 train.py
71
72 # run with hostfile with only included hosts
73 colossalai run --hostfile <file_path> --master_addr host1 --include host1,host2 --nprocs_per_node 4 train.py
74
75 # run with hostfile excluding the hosts selected
76 colossalai run --hostfile <file_path> --master_addr host1 --exclude host2 --nprocs_per_node 4 train.py
77 """
78 if not user_script.endswith('.py'):
79 click.echo(f'Error: invalid Python file {user_script}. Did you use a wrong option? Try colossalai run --help')
80 exit()
81
82 args_dict = locals()
83 args = Config(args_dict)
84 args.user_args = list(args.user_args)
85 launch_multi_processes(args)
86
[end of colossalai/cli/launcher/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/colossalai/cli/launcher/__init__.py b/colossalai/cli/launcher/__init__.py
--- a/colossalai/cli/launcher/__init__.py
+++ b/colossalai/cli/launcher/__init__.py
@@ -1,7 +1,9 @@
import click
-from .run import launch_multi_processes
+
from colossalai.context import Config
+from .run import launch_multi_processes
+
@click.command(help="Launch distributed training on a single node or multiple nodes",
context_settings=dict(ignore_unknown_options=True))
|
{"golden_diff": "diff --git a/colossalai/cli/launcher/__init__.py b/colossalai/cli/launcher/__init__.py\n--- a/colossalai/cli/launcher/__init__.py\n+++ b/colossalai/cli/launcher/__init__.py\n@@ -1,7 +1,9 @@\n import click\n-from .run import launch_multi_processes\n+\n from colossalai.context import Config\n \n+from .run import launch_multi_processes\n+\n \n @click.command(help=\"Launch distributed training on a single node or multiple nodes\",\n context_settings=dict(ignore_unknown_options=True))\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import click\nfrom .run import launch_multi_processes\nfrom colossalai.context import Config\n\n\[email protected](help=\"Launch distributed training on a single node or multiple nodes\",\n context_settings=dict(ignore_unknown_options=True))\[email protected](\"-H\",\n \"-host\",\n \"--host\",\n type=str,\n default=None,\n help=\"the list of hostnames to launch in the format <host1>,<host2>\")\[email protected](\n \"--hostfile\",\n type=str,\n default=None,\n help=\"Hostfile path that defines the device pool available to the job, each line in the file is a hostname\")\[email protected](\"--include\",\n type=str,\n default=None,\n help=\"Specify computing devices to use during execution. String format is <host1>,<host2>,\"\n \" only effective when used with --hostfile.\")\[email protected](\n \"--exclude\",\n type=str,\n default=None,\n help=\n \"Specify computing devices to NOT use during execution. Mutually exclusive with --include. Formatting is the same as --includ,\"\n \" only effective when used with --hostfile.\")\[email protected](\"--num_nodes\",\n type=int,\n default=-1,\n help=\"Total number of worker nodes to use, only effective when used with --hostfile.\")\[email protected](\"--nproc_per_node\", type=int, default=None, help=\"Number of GPUs to use on each node.\")\[email protected](\"--master_port\",\n type=int,\n default=29500,\n help=\"(optional) Port used by PyTorch distributed for communication during distributed training.\")\[email protected](\"--master_addr\",\n type=str,\n default=\"127.0.0.1\",\n help=\"(optional) IP address of node 0, will be inferred via 'hostname -I' if not specified.\")\[email protected](\n \"--extra_launch_args\",\n type=str,\n default=None,\n help=\n \"Set additional torch distributed launcher arguments such as --standalone. The format is --extra_launch_args arg1=1,arg2=2. 
\"\n \"This will be converted to --arg1=1 --arg2=2 during execution\")\[email protected](\"--ssh-port\", type=int, default=None, help=\"(optional) the port used for ssh connection\")\[email protected](\"user_script\", type=str)\[email protected]('user_args', nargs=-1)\ndef run(host: str, hostfile: str, num_nodes: int, nproc_per_node: int, include: str, exclude: str, master_addr: str,\n master_port: int, extra_launch_args: str, ssh_port: int, user_script: str, user_args: str) -> None:\n \"\"\"\n To launch multiple processes on a single node or multiple nodes via command line.\n\n Usage::\n # run with 4 GPUs on the current node use default port 29500\n colossalai run --nprocs_per_node 4 train.py\n\n # run with 2 GPUs on the current node at port 29550\n colossalai run --nprocs_per_node 4 --master_port 29550 train.py\n\n # run on two nodes\n colossalai run --host <host1>,<host2> --master_addr host1 --nprocs_per_node 4 train.py\n\n # run with hostfile\n colossalai run --hostfile <file_path> --master_addr <host> --nprocs_per_node 4 train.py\n\n # run with hostfile with only included hosts\n colossalai run --hostfile <file_path> --master_addr host1 --include host1,host2 --nprocs_per_node 4 train.py\n\n # run with hostfile excluding the hosts selected\n colossalai run --hostfile <file_path> --master_addr host1 --exclude host2 --nprocs_per_node 4 train.py\n \"\"\"\n if not user_script.endswith('.py'):\n click.echo(f'Error: invalid Python file {user_script}. Did you use a wrong option? Try colossalai run --help')\n exit()\n\n args_dict = locals()\n args = Config(args_dict)\n args.user_args = list(args.user_args)\n launch_multi_processes(args)\n", "path": "colossalai/cli/launcher/__init__.py"}]}
| 1,633 | 121 |
gh_patches_debug_12616
|
rasdani/github-patches
|
git_diff
|
mne-tools__mne-python-9042
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
use bibtex in plot_evoked_whitening.py
convert references in `examples/visualization/plot_evoked_whitening.py` to use footcite / footbibliography
</issue>
<code>
[start of examples/visualization/plot_evoked_whitening.py]
1 """
2 =============================================
3 Whitening evoked data with a noise covariance
4 =============================================
5
6 Evoked data are loaded and then whitened using a given noise covariance
7 matrix. It's an excellent quality check to see if baseline signals match
8 the assumption of Gaussian white noise during the baseline period.
9
10 Covariance estimation and diagnostic plots are based on [1]_.
11
12 References
13 ----------
14 .. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
15 covariance estimation and spatial whitening of MEG and EEG signals, vol.
16 108, 328-342, NeuroImage.
17
18 """
19 # Authors: Alexandre Gramfort <[email protected]>
20 # Denis A. Engemann <[email protected]>
21 #
22 # License: BSD (3-clause)
23
24 import mne
25
26 from mne import io
27 from mne.datasets import sample
28 from mne.cov import compute_covariance
29
30 print(__doc__)
31
32 ###############################################################################
33 # Set parameters
34
35 data_path = sample.data_path()
36 raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
37 event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
38
39 raw = io.read_raw_fif(raw_fname, preload=True)
40 raw.filter(1, 40, n_jobs=1, fir_design='firwin')
41 raw.info['bads'] += ['MEG 2443'] # bads + 1 more
42 events = mne.read_events(event_fname)
43
44 # let's look at rare events, button presses
45 event_id, tmin, tmax = 2, -0.2, 0.5
46 reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
47
48 epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=('meg', 'eeg'),
49 baseline=None, reject=reject, preload=True)
50
51 # Uncomment next line to use fewer samples and study regularization effects
52 # epochs = epochs[:20] # For your data, use as many samples as you can!
53
54 ###############################################################################
55 # Compute covariance using automated regularization
56 method_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01))
57 noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
58 return_estimators=True, verbose=True, n_jobs=1,
59 projs=None, rank=None,
60 method_params=method_params)
61
62 # With "return_estimator=True" all estimated covariances sorted
63 # by log-likelihood are returned.
64
65 print('Covariance estimates sorted from best to worst')
66 for c in noise_covs:
67 print("%s : %s" % (c['method'], c['loglik']))
68
69 ###############################################################################
70 # Show the evoked data:
71
72 evoked = epochs.average()
73
74 evoked.plot(time_unit='s') # plot evoked response
75
76 ###############################################################################
77 # We can then show whitening for our various noise covariance estimates.
78 #
79 # Here we should look to see if baseline signals match the
80 # assumption of Gaussian white noise. we expect values centered at
81 # 0 within 2 standard deviations for 95% of the time points.
82 #
83 # For the Global field power we expect a value of 1.
84
85 evoked.plot_white(noise_covs, time_unit='s')
86
[end of examples/visualization/plot_evoked_whitening.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/visualization/plot_evoked_whitening.py b/examples/visualization/plot_evoked_whitening.py
--- a/examples/visualization/plot_evoked_whitening.py
+++ b/examples/visualization/plot_evoked_whitening.py
@@ -7,13 +7,12 @@
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
-Covariance estimation and diagnostic plots are based on [1]_.
+Covariance estimation and diagnostic plots are based on
+:footcite:`EngemannGramfort2015`.
References
----------
-.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
- covariance estimation and spatial whitening of MEG and EEG signals, vol.
- 108, 328-342, NeuroImage.
+.. footbibliography::
"""
# Authors: Alexandre Gramfort <[email protected]>
|
{"golden_diff": "diff --git a/examples/visualization/plot_evoked_whitening.py b/examples/visualization/plot_evoked_whitening.py\n--- a/examples/visualization/plot_evoked_whitening.py\n+++ b/examples/visualization/plot_evoked_whitening.py\n@@ -7,13 +7,12 @@\n matrix. It's an excellent quality check to see if baseline signals match\n the assumption of Gaussian white noise during the baseline period.\n \n-Covariance estimation and diagnostic plots are based on [1]_.\n+Covariance estimation and diagnostic plots are based on\n+:footcite:`EngemannGramfort2015`.\n \n References\n ----------\n-.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in\n- covariance estimation and spatial whitening of MEG and EEG signals, vol.\n- 108, 328-342, NeuroImage.\n+.. footbibliography::\n \n \"\"\"\n # Authors: Alexandre Gramfort <[email protected]>\n", "issue": "use bibtex in plot_evoked_whitening.py\nconvert references in `examples/visualization/plot_evoked_whitening.py` to use footcite / footbibliography\r\n\n", "before_files": [{"content": "\"\"\"\n=============================================\nWhitening evoked data with a noise covariance\n=============================================\n\nEvoked data are loaded and then whitened using a given noise covariance\nmatrix. It's an excellent quality check to see if baseline signals match\nthe assumption of Gaussian white noise during the baseline period.\n\nCovariance estimation and diagnostic plots are based on [1]_.\n\nReferences\n----------\n.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in\n covariance estimation and spatial whitening of MEG and EEG signals, vol.\n 108, 328-342, NeuroImage.\n\n\"\"\"\n# Authors: Alexandre Gramfort <[email protected]>\n# Denis A. Engemann <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport mne\n\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.cov import compute_covariance\n\nprint(__doc__)\n\n###############################################################################\n# Set parameters\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 40, n_jobs=1, fir_design='firwin')\nraw.info['bads'] += ['MEG 2443'] # bads + 1 more\nevents = mne.read_events(event_fname)\n\n# let's look at rare events, button presses\nevent_id, tmin, tmax = 2, -0.2, 0.5\nreject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=('meg', 'eeg'),\n baseline=None, reject=reject, preload=True)\n\n# Uncomment next line to use fewer samples and study regularization effects\n# epochs = epochs[:20] # For your data, use as many samples as you can!\n\n###############################################################################\n# Compute covariance using automated regularization\nmethod_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01))\nnoise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',\n return_estimators=True, verbose=True, n_jobs=1,\n projs=None, rank=None,\n method_params=method_params)\n\n# With \"return_estimator=True\" all estimated covariances sorted\n# by log-likelihood are returned.\n\nprint('Covariance estimates sorted from best to worst')\nfor c in noise_covs:\n print(\"%s : %s\" % (c['method'], c['loglik']))\n\n###############################################################################\n# Show the 
evoked data:\n\nevoked = epochs.average()\n\nevoked.plot(time_unit='s') # plot evoked response\n\n###############################################################################\n# We can then show whitening for our various noise covariance estimates.\n#\n# Here we should look to see if baseline signals match the\n# assumption of Gaussian white noise. we expect values centered at\n# 0 within 2 standard deviations for 95% of the time points.\n#\n# For the Global field power we expect a value of 1.\n\nevoked.plot_white(noise_covs, time_unit='s')\n", "path": "examples/visualization/plot_evoked_whitening.py"}]}
| 1,517 | 225 |
gh_patches_debug_25660
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-2713
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pending deprecation warning on fernet creation since 1.2.1
``` python
In [1]: from cryptography.fernet import Fernet
In [2]: key = Fernet.generate_key()
In [3]: fernet = Fernet(key)
/home/simon/.virtualenvs/project/local/lib/python2.7/site-packages/cryptography/x509/__init__.py:32: PendingDeprecationWarning: CRLExtensionOID has been renamed to CRLEntryExtensionOID
from cryptography.x509.oid import (
```
</issue>
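A note on why the warning appears at all: `cryptography/x509/__init__.py` itself imports the deprecated `CRLExtensionOID` name from `cryptography.x509.oid`, so the deprecation machinery fires whenever the x509 package is imported — which, per the report, happens indirectly during `Fernet` construction. The sketch below shows the general "lazy deprecated alias" pattern the fix relies on: warn only when the old name is actually touched, so plain imports stay silent. This is an illustrative stand-in, not cryptography's real `utils.deprecated` implementation.

```python
# Illustrative only -- not cryptography's actual code.
import warnings


class _DeprecatedAlias:
    """Proxy that warns only when the old name is actually used."""

    def __init__(self, target, message):
        self._target = target
        self._message = message

    def __getattr__(self, attr):
        warnings.warn(self._message, PendingDeprecationWarning, stacklevel=2)
        return getattr(self._target, attr)


class CRLEntryExtensionOID:  # stand-in for the real OID container
    CRL_REASON = "2.5.29.21"


# Old name kept as a lazy alias: importing this module emits no warning.
CRLExtensionOID = _DeprecatedAlias(
    CRLEntryExtensionOID,
    "CRLExtensionOID has been renamed to CRLEntryExtensionOID",
)

if __name__ == "__main__":
    warnings.simplefilter("always")
    print(CRLEntryExtensionOID.CRL_REASON)  # silent, new name
    print(CRLExtensionOID.CRL_REASON)       # emits the PendingDeprecationWarning
```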
<code>
[start of src/cryptography/x509/__init__.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 from cryptography.x509.base import (
8 Certificate, CertificateBuilder, CertificateRevocationList,
9 CertificateRevocationListBuilder,
10 CertificateSigningRequest, CertificateSigningRequestBuilder,
11 InvalidVersion, RevokedCertificate, RevokedCertificateBuilder,
12 Version, load_der_x509_certificate, load_der_x509_crl, load_der_x509_csr,
13 load_pem_x509_certificate, load_pem_x509_crl, load_pem_x509_csr,
14 )
15 from cryptography.x509.extensions import (
16 AccessDescription, AuthorityInformationAccess,
17 AuthorityKeyIdentifier, BasicConstraints, CRLDistributionPoints,
18 CRLNumber, CRLReason, CertificateIssuer, CertificatePolicies,
19 DistributionPoint, DuplicateExtension, ExtendedKeyUsage, Extension,
20 ExtensionNotFound, ExtensionType, Extensions, GeneralNames,
21 InhibitAnyPolicy, InvalidityDate, IssuerAlternativeName, KeyUsage,
22 NameConstraints, NoticeReference, OCSPNoCheck, PolicyInformation,
23 ReasonFlags, SubjectAlternativeName, SubjectKeyIdentifier,
24 UnrecognizedExtension, UnsupportedExtension, UserNotice
25 )
26 from cryptography.x509.general_name import (
27 DNSName, DirectoryName, GeneralName, IPAddress, OtherName, RFC822Name,
28 RegisteredID, UniformResourceIdentifier, UnsupportedGeneralNameType,
29 _GENERAL_NAMES
30 )
31 from cryptography.x509.name import Name, NameAttribute
32 from cryptography.x509.oid import (
33 AuthorityInformationAccessOID, CRLEntryExtensionOID, CRLExtensionOID,
34 CertificatePoliciesOID, ExtendedKeyUsageOID, ExtensionOID, NameOID,
35 ObjectIdentifier, SignatureAlgorithmOID, _SIG_OIDS_TO_HASH
36 )
37
38
39 OID_AUTHORITY_INFORMATION_ACCESS = ExtensionOID.AUTHORITY_INFORMATION_ACCESS
40 OID_AUTHORITY_KEY_IDENTIFIER = ExtensionOID.AUTHORITY_KEY_IDENTIFIER
41 OID_BASIC_CONSTRAINTS = ExtensionOID.BASIC_CONSTRAINTS
42 OID_CERTIFICATE_POLICIES = ExtensionOID.CERTIFICATE_POLICIES
43 OID_CRL_DISTRIBUTION_POINTS = ExtensionOID.CRL_DISTRIBUTION_POINTS
44 OID_EXTENDED_KEY_USAGE = ExtensionOID.EXTENDED_KEY_USAGE
45 OID_FRESHEST_CRL = ExtensionOID.FRESHEST_CRL
46 OID_INHIBIT_ANY_POLICY = ExtensionOID.INHIBIT_ANY_POLICY
47 OID_ISSUER_ALTERNATIVE_NAME = ExtensionOID.ISSUER_ALTERNATIVE_NAME
48 OID_KEY_USAGE = ExtensionOID.KEY_USAGE
49 OID_NAME_CONSTRAINTS = ExtensionOID.NAME_CONSTRAINTS
50 OID_OCSP_NO_CHECK = ExtensionOID.OCSP_NO_CHECK
51 OID_POLICY_CONSTRAINTS = ExtensionOID.POLICY_CONSTRAINTS
52 OID_POLICY_MAPPINGS = ExtensionOID.POLICY_MAPPINGS
53 OID_SUBJECT_ALTERNATIVE_NAME = ExtensionOID.SUBJECT_ALTERNATIVE_NAME
54 OID_SUBJECT_DIRECTORY_ATTRIBUTES = ExtensionOID.SUBJECT_DIRECTORY_ATTRIBUTES
55 OID_SUBJECT_INFORMATION_ACCESS = ExtensionOID.SUBJECT_INFORMATION_ACCESS
56 OID_SUBJECT_KEY_IDENTIFIER = ExtensionOID.SUBJECT_KEY_IDENTIFIER
57
58 OID_DSA_WITH_SHA1 = SignatureAlgorithmOID.DSA_WITH_SHA1
59 OID_DSA_WITH_SHA224 = SignatureAlgorithmOID.DSA_WITH_SHA224
60 OID_DSA_WITH_SHA256 = SignatureAlgorithmOID.DSA_WITH_SHA256
61 OID_ECDSA_WITH_SHA1 = SignatureAlgorithmOID.ECDSA_WITH_SHA1
62 OID_ECDSA_WITH_SHA224 = SignatureAlgorithmOID.ECDSA_WITH_SHA224
63 OID_ECDSA_WITH_SHA256 = SignatureAlgorithmOID.ECDSA_WITH_SHA256
64 OID_ECDSA_WITH_SHA384 = SignatureAlgorithmOID.ECDSA_WITH_SHA384
65 OID_ECDSA_WITH_SHA512 = SignatureAlgorithmOID.ECDSA_WITH_SHA512
66 OID_RSA_WITH_MD5 = SignatureAlgorithmOID.RSA_WITH_MD5
67 OID_RSA_WITH_SHA1 = SignatureAlgorithmOID.RSA_WITH_SHA1
68 OID_RSA_WITH_SHA224 = SignatureAlgorithmOID.RSA_WITH_SHA224
69 OID_RSA_WITH_SHA256 = SignatureAlgorithmOID.RSA_WITH_SHA256
70 OID_RSA_WITH_SHA384 = SignatureAlgorithmOID.RSA_WITH_SHA384
71 OID_RSA_WITH_SHA512 = SignatureAlgorithmOID.RSA_WITH_SHA512
72
73 OID_COMMON_NAME = NameOID.COMMON_NAME
74 OID_COUNTRY_NAME = NameOID.COUNTRY_NAME
75 OID_DOMAIN_COMPONENT = NameOID.DOMAIN_COMPONENT
76 OID_DN_QUALIFIER = NameOID.DN_QUALIFIER
77 OID_EMAIL_ADDRESS = NameOID.EMAIL_ADDRESS
78 OID_GENERATION_QUALIFIER = NameOID.GENERATION_QUALIFIER
79 OID_GIVEN_NAME = NameOID.GIVEN_NAME
80 OID_LOCALITY_NAME = NameOID.LOCALITY_NAME
81 OID_ORGANIZATIONAL_UNIT_NAME = NameOID.ORGANIZATIONAL_UNIT_NAME
82 OID_ORGANIZATION_NAME = NameOID.ORGANIZATION_NAME
83 OID_PSEUDONYM = NameOID.PSEUDONYM
84 OID_SERIAL_NUMBER = NameOID.SERIAL_NUMBER
85 OID_STATE_OR_PROVINCE_NAME = NameOID.STATE_OR_PROVINCE_NAME
86 OID_SURNAME = NameOID.SURNAME
87 OID_TITLE = NameOID.TITLE
88
89 OID_CLIENT_AUTH = ExtendedKeyUsageOID.CLIENT_AUTH
90 OID_CODE_SIGNING = ExtendedKeyUsageOID.CODE_SIGNING
91 OID_EMAIL_PROTECTION = ExtendedKeyUsageOID.EMAIL_PROTECTION
92 OID_OCSP_SIGNING = ExtendedKeyUsageOID.OCSP_SIGNING
93 OID_SERVER_AUTH = ExtendedKeyUsageOID.SERVER_AUTH
94 OID_TIME_STAMPING = ExtendedKeyUsageOID.TIME_STAMPING
95
96 OID_ANY_POLICY = CertificatePoliciesOID.ANY_POLICY
97 OID_CPS_QUALIFIER = CertificatePoliciesOID.CPS_QUALIFIER
98 OID_CPS_USER_NOTICE = CertificatePoliciesOID.CPS_USER_NOTICE
99
100 OID_CERTIFICATE_ISSUER = CRLEntryExtensionOID.CERTIFICATE_ISSUER
101 OID_CRL_REASON = CRLEntryExtensionOID.CRL_REASON
102 OID_INVALIDITY_DATE = CRLEntryExtensionOID.INVALIDITY_DATE
103
104 OID_CA_ISSUERS = AuthorityInformationAccessOID.CA_ISSUERS
105 OID_OCSP = AuthorityInformationAccessOID.OCSP
106
107
108 __all__ = [
109 "load_pem_x509_certificate",
110 "load_der_x509_certificate",
111 "load_pem_x509_csr",
112 "load_der_x509_csr",
113 "load_pem_x509_crl",
114 "load_der_x509_crl",
115 "InvalidVersion",
116 "DuplicateExtension",
117 "UnsupportedExtension",
118 "ExtensionNotFound",
119 "UnsupportedGeneralNameType",
120 "NameAttribute",
121 "Name",
122 "ObjectIdentifier",
123 "ExtensionType",
124 "Extensions",
125 "Extension",
126 "ExtendedKeyUsage",
127 "OCSPNoCheck",
128 "BasicConstraints",
129 "CRLNumber",
130 "KeyUsage",
131 "AuthorityInformationAccess",
132 "AccessDescription",
133 "CertificatePolicies",
134 "PolicyInformation",
135 "UserNotice",
136 "NoticeReference",
137 "SubjectKeyIdentifier",
138 "NameConstraints",
139 "CRLDistributionPoints",
140 "DistributionPoint",
141 "ReasonFlags",
142 "InhibitAnyPolicy",
143 "SubjectAlternativeName",
144 "IssuerAlternativeName",
145 "AuthorityKeyIdentifier",
146 "GeneralNames",
147 "GeneralName",
148 "RFC822Name",
149 "DNSName",
150 "UniformResourceIdentifier",
151 "RegisteredID",
152 "DirectoryName",
153 "IPAddress",
154 "OtherName",
155 "Certificate",
156 "CertificateRevocationList",
157 "CertificateRevocationListBuilder",
158 "CertificateSigningRequest",
159 "RevokedCertificate",
160 "RevokedCertificateBuilder",
161 "CertificateSigningRequestBuilder",
162 "CertificateBuilder",
163 "Version",
164 "_SIG_OIDS_TO_HASH",
165 "OID_CA_ISSUERS",
166 "OID_OCSP",
167 "_GENERAL_NAMES",
168 "CRLExtensionOID",
169 "CertificateIssuer",
170 "CRLReason",
171 "InvalidityDate",
172 "UnrecognizedExtension",
173 ]
174
[end of src/cryptography/x509/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cryptography/x509/__init__.py b/src/cryptography/x509/__init__.py
--- a/src/cryptography/x509/__init__.py
+++ b/src/cryptography/x509/__init__.py
@@ -4,6 +4,7 @@
from __future__ import absolute_import, division, print_function
+from cryptography import utils
from cryptography.x509.base import (
Certificate, CertificateBuilder, CertificateRevocationList,
CertificateRevocationListBuilder,
@@ -30,12 +31,19 @@
)
from cryptography.x509.name import Name, NameAttribute
from cryptography.x509.oid import (
- AuthorityInformationAccessOID, CRLEntryExtensionOID, CRLExtensionOID,
+ AuthorityInformationAccessOID, CRLEntryExtensionOID,
CertificatePoliciesOID, ExtendedKeyUsageOID, ExtensionOID, NameOID,
ObjectIdentifier, SignatureAlgorithmOID, _SIG_OIDS_TO_HASH
)
+CRLExtensionOID = utils.deprecated(
+ CRLEntryExtensionOID,
+ __name__,
+ "CRLExtensionOID has been renamed to CRLEntryExtensionOID",
+ utils.DeprecatedIn12
+)
+
OID_AUTHORITY_INFORMATION_ACCESS = ExtensionOID.AUTHORITY_INFORMATION_ACCESS
OID_AUTHORITY_KEY_IDENTIFIER = ExtensionOID.AUTHORITY_KEY_IDENTIFIER
OID_BASIC_CONSTRAINTS = ExtensionOID.BASIC_CONSTRAINTS
|
{"golden_diff": "diff --git a/src/cryptography/x509/__init__.py b/src/cryptography/x509/__init__.py\n--- a/src/cryptography/x509/__init__.py\n+++ b/src/cryptography/x509/__init__.py\n@@ -4,6 +4,7 @@\n \n from __future__ import absolute_import, division, print_function\n \n+from cryptography import utils\n from cryptography.x509.base import (\n Certificate, CertificateBuilder, CertificateRevocationList,\n CertificateRevocationListBuilder,\n@@ -30,12 +31,19 @@\n )\n from cryptography.x509.name import Name, NameAttribute\n from cryptography.x509.oid import (\n- AuthorityInformationAccessOID, CRLEntryExtensionOID, CRLExtensionOID,\n+ AuthorityInformationAccessOID, CRLEntryExtensionOID,\n CertificatePoliciesOID, ExtendedKeyUsageOID, ExtensionOID, NameOID,\n ObjectIdentifier, SignatureAlgorithmOID, _SIG_OIDS_TO_HASH\n )\n \n \n+CRLExtensionOID = utils.deprecated(\n+ CRLEntryExtensionOID,\n+ __name__,\n+ \"CRLExtensionOID has been renamed to CRLEntryExtensionOID\",\n+ utils.DeprecatedIn12\n+)\n+\n OID_AUTHORITY_INFORMATION_ACCESS = ExtensionOID.AUTHORITY_INFORMATION_ACCESS\n OID_AUTHORITY_KEY_IDENTIFIER = ExtensionOID.AUTHORITY_KEY_IDENTIFIER\n OID_BASIC_CONSTRAINTS = ExtensionOID.BASIC_CONSTRAINTS\n", "issue": "Pending deprecation warning on fernet creation since 1.2.1\n``` python\nIn [1]: from cryptography.fernet import Fernet\n\nIn [2]: key = Fernet.generate_key()\n\nIn [3]: fernet = Fernet(key)\n/home/simon/.virtualenvs/project/local/lib/python2.7/site-packages/cryptography/x509/__init__.py:32: PendingDeprecationWarning: CRLExtensionOID has been renamed to CRLEntryExtensionOID\n from cryptography.x509.oid import (\n```\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography.x509.base import (\n Certificate, CertificateBuilder, CertificateRevocationList,\n CertificateRevocationListBuilder,\n CertificateSigningRequest, CertificateSigningRequestBuilder,\n InvalidVersion, RevokedCertificate, RevokedCertificateBuilder,\n Version, load_der_x509_certificate, load_der_x509_crl, load_der_x509_csr,\n load_pem_x509_certificate, load_pem_x509_crl, load_pem_x509_csr,\n)\nfrom cryptography.x509.extensions import (\n AccessDescription, AuthorityInformationAccess,\n AuthorityKeyIdentifier, BasicConstraints, CRLDistributionPoints,\n CRLNumber, CRLReason, CertificateIssuer, CertificatePolicies,\n DistributionPoint, DuplicateExtension, ExtendedKeyUsage, Extension,\n ExtensionNotFound, ExtensionType, Extensions, GeneralNames,\n InhibitAnyPolicy, InvalidityDate, IssuerAlternativeName, KeyUsage,\n NameConstraints, NoticeReference, OCSPNoCheck, PolicyInformation,\n ReasonFlags, SubjectAlternativeName, SubjectKeyIdentifier,\n UnrecognizedExtension, UnsupportedExtension, UserNotice\n)\nfrom cryptography.x509.general_name import (\n DNSName, DirectoryName, GeneralName, IPAddress, OtherName, RFC822Name,\n RegisteredID, UniformResourceIdentifier, UnsupportedGeneralNameType,\n _GENERAL_NAMES\n)\nfrom cryptography.x509.name import Name, NameAttribute\nfrom cryptography.x509.oid import (\n AuthorityInformationAccessOID, CRLEntryExtensionOID, CRLExtensionOID,\n CertificatePoliciesOID, ExtendedKeyUsageOID, ExtensionOID, NameOID,\n ObjectIdentifier, SignatureAlgorithmOID, _SIG_OIDS_TO_HASH\n)\n\n\nOID_AUTHORITY_INFORMATION_ACCESS = ExtensionOID.AUTHORITY_INFORMATION_ACCESS\nOID_AUTHORITY_KEY_IDENTIFIER = ExtensionOID.AUTHORITY_KEY_IDENTIFIER\nOID_BASIC_CONSTRAINTS = ExtensionOID.BASIC_CONSTRAINTS\nOID_CERTIFICATE_POLICIES = ExtensionOID.CERTIFICATE_POLICIES\nOID_CRL_DISTRIBUTION_POINTS = ExtensionOID.CRL_DISTRIBUTION_POINTS\nOID_EXTENDED_KEY_USAGE = ExtensionOID.EXTENDED_KEY_USAGE\nOID_FRESHEST_CRL = ExtensionOID.FRESHEST_CRL\nOID_INHIBIT_ANY_POLICY = ExtensionOID.INHIBIT_ANY_POLICY\nOID_ISSUER_ALTERNATIVE_NAME = ExtensionOID.ISSUER_ALTERNATIVE_NAME\nOID_KEY_USAGE = ExtensionOID.KEY_USAGE\nOID_NAME_CONSTRAINTS = ExtensionOID.NAME_CONSTRAINTS\nOID_OCSP_NO_CHECK = ExtensionOID.OCSP_NO_CHECK\nOID_POLICY_CONSTRAINTS = ExtensionOID.POLICY_CONSTRAINTS\nOID_POLICY_MAPPINGS = ExtensionOID.POLICY_MAPPINGS\nOID_SUBJECT_ALTERNATIVE_NAME = ExtensionOID.SUBJECT_ALTERNATIVE_NAME\nOID_SUBJECT_DIRECTORY_ATTRIBUTES = ExtensionOID.SUBJECT_DIRECTORY_ATTRIBUTES\nOID_SUBJECT_INFORMATION_ACCESS = ExtensionOID.SUBJECT_INFORMATION_ACCESS\nOID_SUBJECT_KEY_IDENTIFIER = ExtensionOID.SUBJECT_KEY_IDENTIFIER\n\nOID_DSA_WITH_SHA1 = SignatureAlgorithmOID.DSA_WITH_SHA1\nOID_DSA_WITH_SHA224 = SignatureAlgorithmOID.DSA_WITH_SHA224\nOID_DSA_WITH_SHA256 = SignatureAlgorithmOID.DSA_WITH_SHA256\nOID_ECDSA_WITH_SHA1 = SignatureAlgorithmOID.ECDSA_WITH_SHA1\nOID_ECDSA_WITH_SHA224 = SignatureAlgorithmOID.ECDSA_WITH_SHA224\nOID_ECDSA_WITH_SHA256 = SignatureAlgorithmOID.ECDSA_WITH_SHA256\nOID_ECDSA_WITH_SHA384 = SignatureAlgorithmOID.ECDSA_WITH_SHA384\nOID_ECDSA_WITH_SHA512 = SignatureAlgorithmOID.ECDSA_WITH_SHA512\nOID_RSA_WITH_MD5 = SignatureAlgorithmOID.RSA_WITH_MD5\nOID_RSA_WITH_SHA1 = SignatureAlgorithmOID.RSA_WITH_SHA1\nOID_RSA_WITH_SHA224 = SignatureAlgorithmOID.RSA_WITH_SHA224\nOID_RSA_WITH_SHA256 = SignatureAlgorithmOID.RSA_WITH_SHA256\nOID_RSA_WITH_SHA384 = 
SignatureAlgorithmOID.RSA_WITH_SHA384\nOID_RSA_WITH_SHA512 = SignatureAlgorithmOID.RSA_WITH_SHA512\n\nOID_COMMON_NAME = NameOID.COMMON_NAME\nOID_COUNTRY_NAME = NameOID.COUNTRY_NAME\nOID_DOMAIN_COMPONENT = NameOID.DOMAIN_COMPONENT\nOID_DN_QUALIFIER = NameOID.DN_QUALIFIER\nOID_EMAIL_ADDRESS = NameOID.EMAIL_ADDRESS\nOID_GENERATION_QUALIFIER = NameOID.GENERATION_QUALIFIER\nOID_GIVEN_NAME = NameOID.GIVEN_NAME\nOID_LOCALITY_NAME = NameOID.LOCALITY_NAME\nOID_ORGANIZATIONAL_UNIT_NAME = NameOID.ORGANIZATIONAL_UNIT_NAME\nOID_ORGANIZATION_NAME = NameOID.ORGANIZATION_NAME\nOID_PSEUDONYM = NameOID.PSEUDONYM\nOID_SERIAL_NUMBER = NameOID.SERIAL_NUMBER\nOID_STATE_OR_PROVINCE_NAME = NameOID.STATE_OR_PROVINCE_NAME\nOID_SURNAME = NameOID.SURNAME\nOID_TITLE = NameOID.TITLE\n\nOID_CLIENT_AUTH = ExtendedKeyUsageOID.CLIENT_AUTH\nOID_CODE_SIGNING = ExtendedKeyUsageOID.CODE_SIGNING\nOID_EMAIL_PROTECTION = ExtendedKeyUsageOID.EMAIL_PROTECTION\nOID_OCSP_SIGNING = ExtendedKeyUsageOID.OCSP_SIGNING\nOID_SERVER_AUTH = ExtendedKeyUsageOID.SERVER_AUTH\nOID_TIME_STAMPING = ExtendedKeyUsageOID.TIME_STAMPING\n\nOID_ANY_POLICY = CertificatePoliciesOID.ANY_POLICY\nOID_CPS_QUALIFIER = CertificatePoliciesOID.CPS_QUALIFIER\nOID_CPS_USER_NOTICE = CertificatePoliciesOID.CPS_USER_NOTICE\n\nOID_CERTIFICATE_ISSUER = CRLEntryExtensionOID.CERTIFICATE_ISSUER\nOID_CRL_REASON = CRLEntryExtensionOID.CRL_REASON\nOID_INVALIDITY_DATE = CRLEntryExtensionOID.INVALIDITY_DATE\n\nOID_CA_ISSUERS = AuthorityInformationAccessOID.CA_ISSUERS\nOID_OCSP = AuthorityInformationAccessOID.OCSP\n\n\n__all__ = [\n \"load_pem_x509_certificate\",\n \"load_der_x509_certificate\",\n \"load_pem_x509_csr\",\n \"load_der_x509_csr\",\n \"load_pem_x509_crl\",\n \"load_der_x509_crl\",\n \"InvalidVersion\",\n \"DuplicateExtension\",\n \"UnsupportedExtension\",\n \"ExtensionNotFound\",\n \"UnsupportedGeneralNameType\",\n \"NameAttribute\",\n \"Name\",\n \"ObjectIdentifier\",\n \"ExtensionType\",\n \"Extensions\",\n \"Extension\",\n \"ExtendedKeyUsage\",\n \"OCSPNoCheck\",\n \"BasicConstraints\",\n \"CRLNumber\",\n \"KeyUsage\",\n \"AuthorityInformationAccess\",\n \"AccessDescription\",\n \"CertificatePolicies\",\n \"PolicyInformation\",\n \"UserNotice\",\n \"NoticeReference\",\n \"SubjectKeyIdentifier\",\n \"NameConstraints\",\n \"CRLDistributionPoints\",\n \"DistributionPoint\",\n \"ReasonFlags\",\n \"InhibitAnyPolicy\",\n \"SubjectAlternativeName\",\n \"IssuerAlternativeName\",\n \"AuthorityKeyIdentifier\",\n \"GeneralNames\",\n \"GeneralName\",\n \"RFC822Name\",\n \"DNSName\",\n \"UniformResourceIdentifier\",\n \"RegisteredID\",\n \"DirectoryName\",\n \"IPAddress\",\n \"OtherName\",\n \"Certificate\",\n \"CertificateRevocationList\",\n \"CertificateRevocationListBuilder\",\n \"CertificateSigningRequest\",\n \"RevokedCertificate\",\n \"RevokedCertificateBuilder\",\n \"CertificateSigningRequestBuilder\",\n \"CertificateBuilder\",\n \"Version\",\n \"_SIG_OIDS_TO_HASH\",\n \"OID_CA_ISSUERS\",\n \"OID_OCSP\",\n \"_GENERAL_NAMES\",\n \"CRLExtensionOID\",\n \"CertificateIssuer\",\n \"CRLReason\",\n \"InvalidityDate\",\n \"UnrecognizedExtension\",\n]\n", "path": "src/cryptography/x509/__init__.py"}]}
| 2,808 | 311 |
gh_patches_debug_7944
|
rasdani/github-patches
|
git_diff
|
hylang__hy-1710
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"\n" isn't mangled appropriately
=> (mangle "\n")
'hyx_XUnX'
=> (unmangle (mangle "\n"))
Traceback (most recent call last):
…
ValueError: invalid literal for int() with base 16: 'n'
</issue>
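A quick reproduction of the failure and of the shape of the fix that the golden diff below applies: for ASCII characters, `unicode-escape` produces short escapes such as `\n` with no `\u`/`\U` prefix, so the `lstrip` chain leaves a non-hex string behind and `unmangle`'s `int(..., base=16)` blows up.

```python
# Why mangling "\n" breaks the round trip (assumes CPython's unicode-escape codec).
def buggy_char_to_hex(uchr):
    return (uchr.encode("unicode-escape").decode("utf-8")
            .lstrip("\\U").lstrip("\\u").lstrip("0"))


def fixed_char_to_hex(uchr):
    # ASCII never gets a \uXXXX escape, so just format the code point directly.
    if len(uchr) == 1 and ord(uchr) < 128:
        return format(ord(uchr), "x")
    return (uchr.encode("unicode-escape").decode("utf-8")
            .lstrip("\\U").lstrip("\\u").lstrip("\\x").lstrip("0"))


print(buggy_char_to_hex("\n"))      # 'n'    -> int('n', 16) fails in unmangle
print(fixed_char_to_hex("\n"))      # 'a'    -> chr(int('a', 16)) == '\n'
print(fixed_char_to_hex("\u2665"))  # '2665' -> non-ASCII path is unchanged
```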
<code>
[start of hy/lex/__init__.py]
1 # Copyright 2018 the authors.
2 # This file is part of Hy, which is free software licensed under the Expat
3 # license. See the LICENSE.
4
5 from __future__ import unicode_literals
6
7 import re
8 import sys
9 import unicodedata
10
11 from hy._compat import str_type, isidentifier, UCS4
12 from hy.lex.exceptions import PrematureEndOfInput, LexException # NOQA
13 from hy.models import HyExpression, HySymbol
14
15 try:
16 from io import StringIO
17 except ImportError:
18 from StringIO import StringIO
19
20
21 def hy_parse(source):
22 """Parse a Hy source string.
23
24 Parameters
25 ----------
26 source: string
27 Source code to parse.
28
29 Returns
30 -------
31 out : instance of `types.CodeType`
32 """
33 source = re.sub(r'\A#!.*', '', source)
34 return HyExpression([HySymbol("do")] + tokenize(source + "\n"))
35
36
37 def tokenize(buf):
38 """
39 Tokenize a Lisp file or string buffer into internal Hy objects.
40 """
41 from hy.lex.lexer import lexer
42 from hy.lex.parser import parser
43 from rply.errors import LexingError
44 try:
45 return parser.parse(lexer.lex(buf))
46 except LexingError as e:
47 pos = e.getsourcepos()
48 raise LexException("Could not identify the next token.",
49 pos.lineno, pos.colno, buf)
50 except LexException as e:
51 if e.source is None:
52 e.source = buf
53 raise
54
55
56 mangle_delim = 'X'
57
58
59 def mangle(s):
60 """Stringify the argument and convert it to a valid Python identifier
61 according to Hy's mangling rules."""
62 def unicode_char_to_hex(uchr):
63 # Covert a unicode char to hex string, without prefix
64 return uchr.encode('unicode-escape').decode('utf-8').lstrip('\\U').lstrip('\\u').lstrip('0')
65
66 assert s
67
68 s = str_type(s)
69 s = s.replace("-", "_")
70 s2 = s.lstrip('_')
71 leading_underscores = '_' * (len(s) - len(s2))
72 s = s2
73
74 if s.endswith("?"):
75 s = 'is_' + s[:-1]
76 if not isidentifier(leading_underscores + s):
77 # Replace illegal characters with their Unicode character
78 # names, or hexadecimal if they don't have one.
79 s = 'hyx_' + ''.join(
80 c
81 if c != mangle_delim and isidentifier('S' + c)
82 # We prepend the "S" because some characters aren't
83 # allowed at the start of an identifier.
84 else '{0}{1}{0}'.format(mangle_delim,
85 unicodedata.name(c, '').lower().replace('-', 'H').replace(' ', '_')
86 or 'U{}'.format(unicode_char_to_hex(c)))
87 for c in unicode_to_ucs4iter(s))
88
89 s = leading_underscores + s
90 assert isidentifier(s)
91 return s
92
93
94 def unmangle(s):
95 """Stringify the argument and try to convert it to a pretty unmangled
96 form. This may not round-trip, because different Hy symbol names can
97 mangle to the same Python identifier."""
98
99 s = str_type(s)
100
101 s2 = s.lstrip('_')
102 leading_underscores = len(s) - len(s2)
103 s = s2
104
105 if s.startswith('hyx_'):
106 s = re.sub('{0}(U)?([_a-z0-9H]+?){0}'.format(mangle_delim),
107 lambda mo:
108 chr(int(mo.group(2), base=16))
109 if mo.group(1)
110 else unicodedata.lookup(
111 mo.group(2).replace('_', ' ').replace('H', '-').upper()),
112 s[len('hyx_'):])
113 if s.startswith('is_'):
114 s = s[len("is_"):] + "?"
115 s = s.replace('_', '-')
116
117 return '-' * leading_underscores + s
118
119
120 def unicode_to_ucs4iter(ustr):
121 # Covert a unicode string to an iterable object,
122 # elements in the object are single USC-4 unicode characters
123 if UCS4:
124 return ustr
125 ucs4_list = list(ustr)
126 for i, u in enumerate(ucs4_list):
127 if 0xD7FF < ord(u) < 0xDC00:
128 ucs4_list[i] += ucs4_list[i + 1]
129 del ucs4_list[i + 1]
130 return ucs4_list
131
132
133 def read(from_file=sys.stdin, eof=""):
134 """Read from input and returns a tokenized string.
135
136 Can take a given input buffer to read from, and a single byte as EOF
137 (defaults to an empty string).
138 """
139 buff = ""
140 while True:
141 inn = str(from_file.readline())
142 if inn == eof:
143 raise EOFError("Reached end of file")
144 buff += inn
145 try:
146 parsed = next(iter(tokenize(buff)), None)
147 except (PrematureEndOfInput, IndexError):
148 pass
149 else:
150 break
151 return parsed
152
153
154 def read_str(input):
155 return read(StringIO(str_type(input)))
156
[end of hy/lex/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/hy/lex/__init__.py b/hy/lex/__init__.py
--- a/hy/lex/__init__.py
+++ b/hy/lex/__init__.py
@@ -61,7 +61,10 @@
according to Hy's mangling rules."""
def unicode_char_to_hex(uchr):
# Covert a unicode char to hex string, without prefix
- return uchr.encode('unicode-escape').decode('utf-8').lstrip('\\U').lstrip('\\u').lstrip('0')
+ if len(uchr) == 1 and ord(uchr) < 128:
+ return format(ord(uchr), 'x')
+ return (uchr.encode('unicode-escape').decode('utf-8')
+ .lstrip('\\U').lstrip('\\u').lstrip('\\x').lstrip('0'))
assert s
|
{"golden_diff": "diff --git a/hy/lex/__init__.py b/hy/lex/__init__.py\n--- a/hy/lex/__init__.py\n+++ b/hy/lex/__init__.py\n@@ -61,7 +61,10 @@\n according to Hy's mangling rules.\"\"\"\n def unicode_char_to_hex(uchr):\n # Covert a unicode char to hex string, without prefix\n- return uchr.encode('unicode-escape').decode('utf-8').lstrip('\\\\U').lstrip('\\\\u').lstrip('0')\n+ if len(uchr) == 1 and ord(uchr) < 128:\n+ return format(ord(uchr), 'x')\n+ return (uchr.encode('unicode-escape').decode('utf-8')\n+ .lstrip('\\\\U').lstrip('\\\\u').lstrip('\\\\x').lstrip('0'))\n \n assert s\n", "issue": "\"\\n\" isn't mangled appropriately\n => (mangle \"\\n\")\r\n 'hyx_XUnX'\r\n => (unmangle (mangle \"\\n\"))\r\n Traceback (most recent call last):\r\n \u2026\r\n ValueError: invalid literal for int() with base 16: 'n'\r\n\n", "before_files": [{"content": "# Copyright 2018 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nfrom __future__ import unicode_literals\n\nimport re\nimport sys\nimport unicodedata\n\nfrom hy._compat import str_type, isidentifier, UCS4\nfrom hy.lex.exceptions import PrematureEndOfInput, LexException # NOQA\nfrom hy.models import HyExpression, HySymbol\n\ntry:\n from io import StringIO\nexcept ImportError:\n from StringIO import StringIO\n\n\ndef hy_parse(source):\n \"\"\"Parse a Hy source string.\n\n Parameters\n ----------\n source: string\n Source code to parse.\n\n Returns\n -------\n out : instance of `types.CodeType`\n \"\"\"\n source = re.sub(r'\\A#!.*', '', source)\n return HyExpression([HySymbol(\"do\")] + tokenize(source + \"\\n\"))\n\n\ndef tokenize(buf):\n \"\"\"\n Tokenize a Lisp file or string buffer into internal Hy objects.\n \"\"\"\n from hy.lex.lexer import lexer\n from hy.lex.parser import parser\n from rply.errors import LexingError\n try:\n return parser.parse(lexer.lex(buf))\n except LexingError as e:\n pos = e.getsourcepos()\n raise LexException(\"Could not identify the next token.\",\n pos.lineno, pos.colno, buf)\n except LexException as e:\n if e.source is None:\n e.source = buf\n raise\n\n\nmangle_delim = 'X'\n\n\ndef mangle(s):\n \"\"\"Stringify the argument and convert it to a valid Python identifier\n according to Hy's mangling rules.\"\"\"\n def unicode_char_to_hex(uchr):\n # Covert a unicode char to hex string, without prefix\n return uchr.encode('unicode-escape').decode('utf-8').lstrip('\\\\U').lstrip('\\\\u').lstrip('0')\n\n assert s\n\n s = str_type(s)\n s = s.replace(\"-\", \"_\")\n s2 = s.lstrip('_')\n leading_underscores = '_' * (len(s) - len(s2))\n s = s2\n\n if s.endswith(\"?\"):\n s = 'is_' + s[:-1]\n if not isidentifier(leading_underscores + s):\n # Replace illegal characters with their Unicode character\n # names, or hexadecimal if they don't have one.\n s = 'hyx_' + ''.join(\n c\n if c != mangle_delim and isidentifier('S' + c)\n # We prepend the \"S\" because some characters aren't\n # allowed at the start of an identifier.\n else '{0}{1}{0}'.format(mangle_delim,\n unicodedata.name(c, '').lower().replace('-', 'H').replace(' ', '_')\n or 'U{}'.format(unicode_char_to_hex(c)))\n for c in unicode_to_ucs4iter(s))\n\n s = leading_underscores + s\n assert isidentifier(s)\n return s\n\n\ndef unmangle(s):\n \"\"\"Stringify the argument and try to convert it to a pretty unmangled\n form. 
This may not round-trip, because different Hy symbol names can\n mangle to the same Python identifier.\"\"\"\n\n s = str_type(s)\n\n s2 = s.lstrip('_')\n leading_underscores = len(s) - len(s2)\n s = s2\n\n if s.startswith('hyx_'):\n s = re.sub('{0}(U)?([_a-z0-9H]+?){0}'.format(mangle_delim),\n lambda mo:\n chr(int(mo.group(2), base=16))\n if mo.group(1)\n else unicodedata.lookup(\n mo.group(2).replace('_', ' ').replace('H', '-').upper()),\n s[len('hyx_'):])\n if s.startswith('is_'):\n s = s[len(\"is_\"):] + \"?\"\n s = s.replace('_', '-')\n\n return '-' * leading_underscores + s\n\n\ndef unicode_to_ucs4iter(ustr):\n # Covert a unicode string to an iterable object,\n # elements in the object are single USC-4 unicode characters\n if UCS4:\n return ustr\n ucs4_list = list(ustr)\n for i, u in enumerate(ucs4_list):\n if 0xD7FF < ord(u) < 0xDC00:\n ucs4_list[i] += ucs4_list[i + 1]\n del ucs4_list[i + 1]\n return ucs4_list\n\n\ndef read(from_file=sys.stdin, eof=\"\"):\n \"\"\"Read from input and returns a tokenized string.\n\n Can take a given input buffer to read from, and a single byte as EOF\n (defaults to an empty string).\n \"\"\"\n buff = \"\"\n while True:\n inn = str(from_file.readline())\n if inn == eof:\n raise EOFError(\"Reached end of file\")\n buff += inn\n try:\n parsed = next(iter(tokenize(buff)), None)\n except (PrematureEndOfInput, IndexError):\n pass\n else:\n break\n return parsed\n\n\ndef read_str(input):\n return read(StringIO(str_type(input)))\n", "path": "hy/lex/__init__.py"}]}
| 2,122 | 203 |
gh_patches_debug_18576
|
rasdani/github-patches
|
git_diff
|
openvinotoolkit__datumaro-800
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`loglevel` does not affect CLI output
</issue>
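The one-line report hides a standard logging pitfall that the patch below addresses: `logging.basicConfig` is a no-op once any handler is attached to the root logger, so a `--loglevel` passed on the CLI has no effect if something configured logging earlier (e.g. during imports or a programmatic call to `main`). A minimal sketch of the symptom and of the force-override approach; illustrative, not Datumaro's exact code.

```python
import logging as log

log_format = "%(asctime)s %(levelname)s: %(message)s"

# Something imported earlier already attached a handler to the root logger...
log.getLogger().addHandler(log.StreamHandler())

# ...so this basicConfig call silently does nothing:
log.basicConfig(format=log_format, level=log.DEBUG)
print(log.root.level)  # still 30 (WARNING), not 10 (DEBUG)

# Force-overwrite the level and formatter, as the fix does:
log.root.setLevel(log.DEBUG)
for h in log.root.handlers:
    h.setFormatter(log.Formatter(log_format))
print(log.root.level)  # now 10 (DEBUG)
```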
<code>
[start of datumaro/cli/__main__.py]
1 # Copyright (C) 2019-2022 Intel Corporation
2 #
3 # SPDX-License-Identifier: MIT
4
5 import argparse
6 import logging as log
7 import os.path as osp
8 import sys
9 import warnings
10
11 from ..util.telemetry_utils import (
12 close_telemetry_session,
13 init_telemetry_session,
14 send_command_exception_info,
15 send_command_failure_info,
16 send_command_success_info,
17 )
18 from ..version import VERSION
19 from . import commands, contexts
20 from .util import add_subparser
21 from .util.errors import CliException
22
23 _log_levels = {
24 "debug": log.DEBUG,
25 "info": log.INFO,
26 "warning": log.WARNING,
27 "error": log.ERROR,
28 "critical": log.CRITICAL,
29 }
30
31
32 def loglevel(name):
33 return _log_levels[name]
34
35
36 class _LogManager:
37 @classmethod
38 def init_logger(cls, args=None):
39 # Define minimalistic parser only to obtain loglevel
40 parser = argparse.ArgumentParser(add_help=False)
41 cls._define_loglevel_option(parser)
42 args, _ = parser.parse_known_args(args)
43
44 log.basicConfig(format="%(asctime)s %(levelname)s: %(message)s", level=args.loglevel)
45
46 # Suppress own deprecation warnings
47 warnings.filterwarnings("ignore", category=DeprecationWarning, module=r"datumaro\..*")
48
49 @staticmethod
50 def _define_loglevel_option(parser):
51 parser.add_argument(
52 "--loglevel",
53 type=loglevel,
54 default="info",
55 help="Logging level (options: %s; default: %s)"
56 % (", ".join(_log_levels.keys()), "%(default)s"),
57 )
58 return parser
59
60
61 def _make_subcommands_help(commands, help_line_start=0):
62 desc = ""
63 for command_name, _, command_help in commands:
64 desc += (" %-" + str(max(0, help_line_start - 2 - 1)) + "s%s\n") % (
65 command_name,
66 command_help,
67 )
68 return desc
69
70
71 def _get_known_contexts():
72 return [
73 ("model", contexts.model, "Actions with models"),
74 ("project", contexts.project, "Actions with projects"),
75 ("source", contexts.source, "Actions with data sources"),
76 ("util", contexts.util, "Auxillary tools and utilities"),
77 ]
78
79
80 def _get_known_commands():
81 return [
82 ("Project modification:", None, ""),
83 ("add", commands.add, "Add dataset"),
84 ("create", commands.create, "Create empty project"),
85 ("import", commands.import_, "Import dataset"),
86 ("remove", commands.remove, "Remove dataset"),
87 ("", None, ""),
88 ("Project versioning:", None, ""),
89 ("checkout", commands.checkout, "Switch to another branch or revision"),
90 ("commit", commands.commit, "Commit changes in tracked files"),
91 ("log", commands.log, "List history"),
92 ("status", commands.status, "Display current status"),
93 ("", None, ""),
94 ("Dataset operations:", None, ""),
95 ("convert", commands.convert, "Convert dataset between formats"),
96 (
97 "describe-downloads",
98 commands.describe_downloads,
99 "Print information about downloadable datasets",
100 ),
101 ("detect-format", commands.detect_format, "Detect the format of a dataset"),
102 ("diff", commands.diff, "Compare datasets"),
103 ("download", commands.download, "Download a publicly available dataset"),
104 ("explain", commands.explain, "Run Explainable AI algorithm for model"),
105 ("export", commands.export, "Export dataset in some format"),
106 ("filter", commands.filter, "Filter dataset items"),
107 ("generate", commands.generate, "Generate synthetic dataset"),
108 ("info", commands.info, "Print dataset info"),
109 ("merge", commands.merge, "Merge datasets"),
110 ("patch", commands.patch, "Update dataset from another one"),
111 ("stats", commands.stats, "Compute dataset statistics"),
112 ("transform", commands.transform, "Modify dataset items"),
113 ("validate", commands.validate, "Validate dataset"),
114 ]
115
116
117 def _get_sensitive_args():
118 known_contexts = _get_known_contexts()
119 known_commands = _get_known_commands()
120
121 res = {}
122 for _, command, _ in known_contexts + known_commands:
123 if command is not None:
124 res.update(command.get_sensitive_args())
125
126 return res
127
128
129 def make_parser():
130 parser = argparse.ArgumentParser(
131 description="Dataset Framework", formatter_class=argparse.RawDescriptionHelpFormatter
132 )
133 if parser.prog == osp.basename(__file__): # python -m datumaro ...
134 parser.prog = "datumaro"
135
136 parser.add_argument("--version", action="version", version=VERSION)
137 _LogManager._define_loglevel_option(parser)
138
139 known_contexts = _get_known_contexts()
140 known_commands = _get_known_commands()
141
142 # Argparse doesn't support subparser groups:
143 # https://stackoverflow.com/questions/32017020/grouping-argparse-subparser-arguments
144 help_line_start = max((len(e[0]) for e in known_contexts + known_commands), default=0)
145 help_line_start = max((2 + help_line_start) // 4 + 1, 6) * 4 # align to tabs
146 subcommands_desc = ""
147 if known_contexts:
148 subcommands_desc += "Contexts:\n"
149 subcommands_desc += _make_subcommands_help(known_contexts, help_line_start)
150 if known_commands:
151 if subcommands_desc:
152 subcommands_desc += "\n"
153 subcommands_desc += "Commands:\n"
154 subcommands_desc += _make_subcommands_help(known_commands, help_line_start)
155 if subcommands_desc:
156 subcommands_desc += (
157 "\nRun '%s COMMAND --help' for more information on a command." % parser.prog
158 )
159
160 subcommands = parser.add_subparsers(
161 title=subcommands_desc, description="", help=argparse.SUPPRESS
162 )
163 for command_name, command, _ in known_contexts + known_commands:
164 if command is not None:
165 add_subparser(subcommands, command_name, command.build_parser)
166
167 return parser
168
169
170 def main(args=None):
171 _LogManager.init_logger(args)
172
173 parser = make_parser()
174 args = parser.parse_args(args)
175
176 if "command" not in args:
177 parser.print_help()
178 return 1
179
180 sensitive_args = _get_sensitive_args()
181 telemetry = init_telemetry_session(app_name="Datumaro", app_version=VERSION)
182
183 try:
184 retcode = args.command(args)
185 if retcode is None:
186 retcode = 0
187 except CliException as e:
188 log.error(e)
189 send_command_exception_info(telemetry, args, sensitive_args=sensitive_args[args.command])
190 return 1
191 except Exception as e:
192 log.error(e)
193 send_command_exception_info(telemetry, args, sensitive_args=sensitive_args[args.command])
194 raise
195 else:
196 if retcode:
197 send_command_failure_info(telemetry, args, sensitive_args=sensitive_args[args.command])
198 else:
199 send_command_success_info(telemetry, args, sensitive_args=sensitive_args[args.command])
200 return retcode
201 finally:
202 close_telemetry_session(telemetry)
203
204
205 if __name__ == "__main__":
206 sys.exit(main())
207
[end of datumaro/cli/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/datumaro/cli/__main__.py b/datumaro/cli/__main__.py
--- a/datumaro/cli/__main__.py
+++ b/datumaro/cli/__main__.py
@@ -40,8 +40,17 @@
parser = argparse.ArgumentParser(add_help=False)
cls._define_loglevel_option(parser)
args, _ = parser.parse_known_args(args)
-
- log.basicConfig(format="%(asctime)s %(levelname)s: %(message)s", level=args.loglevel)
+ log_format = "%(asctime)s %(levelname)s: %(message)s"
+
+ # Try setting up logging with basicConfig.
+ # This does nothing, if other parts of the software
+ # already configured handlers, i.e. during imports and when
+ # main is called programmatically.
+ log.basicConfig(format=log_format, level=args.loglevel)
+ # Force-overwrite the log level and formatter
+ log.root.setLevel(args.loglevel)
+ for h in log.root.handlers:
+ h.setFormatter(log.Formatter(log_format))
# Suppress own deprecation warnings
warnings.filterwarnings("ignore", category=DeprecationWarning, module=r"datumaro\..*")
|
{"golden_diff": "diff --git a/datumaro/cli/__main__.py b/datumaro/cli/__main__.py\n--- a/datumaro/cli/__main__.py\n+++ b/datumaro/cli/__main__.py\n@@ -40,8 +40,17 @@\n parser = argparse.ArgumentParser(add_help=False)\n cls._define_loglevel_option(parser)\n args, _ = parser.parse_known_args(args)\n-\n- log.basicConfig(format=\"%(asctime)s %(levelname)s: %(message)s\", level=args.loglevel)\n+ log_format = \"%(asctime)s %(levelname)s: %(message)s\"\n+\n+ # Try setting up logging with basicConfig.\n+ # This does nothing, if other parts of the software\n+ # already configured handlers, i.e. during imports and when\n+ # main is called programmatically.\n+ log.basicConfig(format=log_format, level=args.loglevel)\n+ # Force-overwrite the log level and formatter\n+ log.root.setLevel(args.loglevel)\n+ for h in log.root.handlers:\n+ h.setFormatter(log.Formatter(log_format))\n \n # Suppress own deprecation warnings\n warnings.filterwarnings(\"ignore\", category=DeprecationWarning, module=r\"datumaro\\..*\")\n", "issue": "`loglevel` does not affect CLI output\n\n", "before_files": [{"content": "# Copyright (C) 2019-2022 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport argparse\nimport logging as log\nimport os.path as osp\nimport sys\nimport warnings\n\nfrom ..util.telemetry_utils import (\n close_telemetry_session,\n init_telemetry_session,\n send_command_exception_info,\n send_command_failure_info,\n send_command_success_info,\n)\nfrom ..version import VERSION\nfrom . import commands, contexts\nfrom .util import add_subparser\nfrom .util.errors import CliException\n\n_log_levels = {\n \"debug\": log.DEBUG,\n \"info\": log.INFO,\n \"warning\": log.WARNING,\n \"error\": log.ERROR,\n \"critical\": log.CRITICAL,\n}\n\n\ndef loglevel(name):\n return _log_levels[name]\n\n\nclass _LogManager:\n @classmethod\n def init_logger(cls, args=None):\n # Define minimalistic parser only to obtain loglevel\n parser = argparse.ArgumentParser(add_help=False)\n cls._define_loglevel_option(parser)\n args, _ = parser.parse_known_args(args)\n\n log.basicConfig(format=\"%(asctime)s %(levelname)s: %(message)s\", level=args.loglevel)\n\n # Suppress own deprecation warnings\n warnings.filterwarnings(\"ignore\", category=DeprecationWarning, module=r\"datumaro\\..*\")\n\n @staticmethod\n def _define_loglevel_option(parser):\n parser.add_argument(\n \"--loglevel\",\n type=loglevel,\n default=\"info\",\n help=\"Logging level (options: %s; default: %s)\"\n % (\", \".join(_log_levels.keys()), \"%(default)s\"),\n )\n return parser\n\n\ndef _make_subcommands_help(commands, help_line_start=0):\n desc = \"\"\n for command_name, _, command_help in commands:\n desc += (\" %-\" + str(max(0, help_line_start - 2 - 1)) + \"s%s\\n\") % (\n command_name,\n command_help,\n )\n return desc\n\n\ndef _get_known_contexts():\n return [\n (\"model\", contexts.model, \"Actions with models\"),\n (\"project\", contexts.project, \"Actions with projects\"),\n (\"source\", contexts.source, \"Actions with data sources\"),\n (\"util\", contexts.util, \"Auxillary tools and utilities\"),\n ]\n\n\ndef _get_known_commands():\n return [\n (\"Project modification:\", None, \"\"),\n (\"add\", commands.add, \"Add dataset\"),\n (\"create\", commands.create, \"Create empty project\"),\n (\"import\", commands.import_, \"Import dataset\"),\n (\"remove\", commands.remove, \"Remove dataset\"),\n (\"\", None, \"\"),\n (\"Project versioning:\", None, \"\"),\n (\"checkout\", commands.checkout, \"Switch to another branch or revision\"),\n (\"commit\", 
commands.commit, \"Commit changes in tracked files\"),\n (\"log\", commands.log, \"List history\"),\n (\"status\", commands.status, \"Display current status\"),\n (\"\", None, \"\"),\n (\"Dataset operations:\", None, \"\"),\n (\"convert\", commands.convert, \"Convert dataset between formats\"),\n (\n \"describe-downloads\",\n commands.describe_downloads,\n \"Print information about downloadable datasets\",\n ),\n (\"detect-format\", commands.detect_format, \"Detect the format of a dataset\"),\n (\"diff\", commands.diff, \"Compare datasets\"),\n (\"download\", commands.download, \"Download a publicly available dataset\"),\n (\"explain\", commands.explain, \"Run Explainable AI algorithm for model\"),\n (\"export\", commands.export, \"Export dataset in some format\"),\n (\"filter\", commands.filter, \"Filter dataset items\"),\n (\"generate\", commands.generate, \"Generate synthetic dataset\"),\n (\"info\", commands.info, \"Print dataset info\"),\n (\"merge\", commands.merge, \"Merge datasets\"),\n (\"patch\", commands.patch, \"Update dataset from another one\"),\n (\"stats\", commands.stats, \"Compute dataset statistics\"),\n (\"transform\", commands.transform, \"Modify dataset items\"),\n (\"validate\", commands.validate, \"Validate dataset\"),\n ]\n\n\ndef _get_sensitive_args():\n known_contexts = _get_known_contexts()\n known_commands = _get_known_commands()\n\n res = {}\n for _, command, _ in known_contexts + known_commands:\n if command is not None:\n res.update(command.get_sensitive_args())\n\n return res\n\n\ndef make_parser():\n parser = argparse.ArgumentParser(\n description=\"Dataset Framework\", formatter_class=argparse.RawDescriptionHelpFormatter\n )\n if parser.prog == osp.basename(__file__): # python -m datumaro ...\n parser.prog = \"datumaro\"\n\n parser.add_argument(\"--version\", action=\"version\", version=VERSION)\n _LogManager._define_loglevel_option(parser)\n\n known_contexts = _get_known_contexts()\n known_commands = _get_known_commands()\n\n # Argparse doesn't support subparser groups:\n # https://stackoverflow.com/questions/32017020/grouping-argparse-subparser-arguments\n help_line_start = max((len(e[0]) for e in known_contexts + known_commands), default=0)\n help_line_start = max((2 + help_line_start) // 4 + 1, 6) * 4 # align to tabs\n subcommands_desc = \"\"\n if known_contexts:\n subcommands_desc += \"Contexts:\\n\"\n subcommands_desc += _make_subcommands_help(known_contexts, help_line_start)\n if known_commands:\n if subcommands_desc:\n subcommands_desc += \"\\n\"\n subcommands_desc += \"Commands:\\n\"\n subcommands_desc += _make_subcommands_help(known_commands, help_line_start)\n if subcommands_desc:\n subcommands_desc += (\n \"\\nRun '%s COMMAND --help' for more information on a command.\" % parser.prog\n )\n\n subcommands = parser.add_subparsers(\n title=subcommands_desc, description=\"\", help=argparse.SUPPRESS\n )\n for command_name, command, _ in known_contexts + known_commands:\n if command is not None:\n add_subparser(subcommands, command_name, command.build_parser)\n\n return parser\n\n\ndef main(args=None):\n _LogManager.init_logger(args)\n\n parser = make_parser()\n args = parser.parse_args(args)\n\n if \"command\" not in args:\n parser.print_help()\n return 1\n\n sensitive_args = _get_sensitive_args()\n telemetry = init_telemetry_session(app_name=\"Datumaro\", app_version=VERSION)\n\n try:\n retcode = args.command(args)\n if retcode is None:\n retcode = 0\n except CliException as e:\n log.error(e)\n send_command_exception_info(telemetry, args, 
sensitive_args=sensitive_args[args.command])\n return 1\n except Exception as e:\n log.error(e)\n send_command_exception_info(telemetry, args, sensitive_args=sensitive_args[args.command])\n raise\n else:\n if retcode:\n send_command_failure_info(telemetry, args, sensitive_args=sensitive_args[args.command])\n else:\n send_command_success_info(telemetry, args, sensitive_args=sensitive_args[args.command])\n return retcode\n finally:\n close_telemetry_session(telemetry)\n\n\nif __name__ == \"__main__\":\n sys.exit(main())\n", "path": "datumaro/cli/__main__.py"}]}
| 2,635 | 261 |
gh_patches_debug_15369
|
rasdani/github-patches
|
git_diff
|
ibis-project__ibis-3044
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: isolated dask backend tests fail due to removed imports
For some reason lines 6 and 8 here: https://github.com/ibis-project/ibis/commit/a1262410310bb4d638a73e1cdfbe93c2b4089905#diff-96d84d9b6e9e84a2be7a046dc9853df1ca5fc6e894307339b02cd61e666c0149L6-L8
were removed.
This causes dask tests to fail when they are run in isolation from other tests that (transitively) import from the pandas backend.
This is both a CI bug and a bug in the code, since we're not testing backends independently. Perhaps unsurprisingly, I discovered the bug in #2937, which fixes the CI part of this problem.
</issue>
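The underlying mechanism is registration by import side effect: the pandas execution module registers `execute_node` implementations when it is imported, and the dask backend then overrides some of them, so dropping the import only hurts when no other test has already imported the pandas backend. Below is a dictionary-based stand-in for the dispatcher (the real backends use a proper multiple-dispatch mechanism):

```python
# Registration-by-import sketch: implementations sign up when their module loads.
_EXECUTE_NODE = {}


def register(op_type):
    def decorator(fn):
        _EXECUTE_NODE[op_type] = fn  # side effect of importing the defining module
        return fn
    return decorator


def execute_node(op, *args):
    return _EXECUTE_NODE[type(op)](op, *args)


# Pretend this lives in "pandas_execution.py"; importing it fills the registry.
class Sum:
    pass


@register(Sum)
def _execute_sum(op, values):
    return sum(values)


# A backend that relies on these registrations must import the module itself
# (e.g. `import ibis.backends.pandas.execution  # noqa: F401`); otherwise an
# isolated test run hits a KeyError because the registry is empty.
print(execute_node(Sum(), [1, 2, 3]))  # 6
```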
<code>
[start of ibis/backends/dask/__init__.py]
1 from typing import Mapping
2
3 import dask
4 import dask.dataframe as dd
5 import pandas as pd
6 import toolz
7 from dask.base import DaskMethodsMixin
8
9 import ibis.common.exceptions as com
10 import ibis.config
11 import ibis.expr.schema as sch
12 import ibis.expr.types as ir
13 from ibis.backends.pandas import BasePandasBackend
14
15 from .client import DaskDatabase, DaskTable, ibis_schema_to_dask
16 from .core import execute_and_reset
17
18 # Make sure that the pandas backend is loaded, dispatching has been
19 # executed, and options have been loaded
20 ibis.pandas
21
22
23 class Backend(BasePandasBackend):
24 name = 'dask'
25 database_class = DaskDatabase
26 table_class = DaskTable
27
28 def connect(self, dictionary):
29 # register dispatchers
30 from . import udf # noqa: F401
31
32 return super().connect(dictionary)
33
34 @property
35 def version(self):
36 return dask.__version__
37
38 def execute(
39 self,
40 query: ir.Expr,
41 params: Mapping[ir.Expr, object] = None,
42 limit: str = 'default',
43 **kwargs,
44 ):
45 if limit != 'default':
46 raise ValueError(
47 'limit parameter to execute is not yet implemented in the '
48 'dask backend'
49 )
50
51 if not isinstance(query, ir.Expr):
52 raise TypeError(
53 "`query` has type {!r}, expected ibis.expr.types.Expr".format(
54 type(query).__name__
55 )
56 )
57
58 result = self.compile(query, params, **kwargs)
59 if isinstance(result, DaskMethodsMixin):
60 return result.compute()
61 else:
62 return result
63
64 def compile(
65 self, query: ir.Expr, params: Mapping[ir.Expr, object] = None, **kwargs
66 ):
67 """Compile `expr`.
68
69 Notes
70 -----
71 For the dask backend returns a dask graph that you can run ``.compute``
72 on to get a pandas object.
73
74 """
75 return execute_and_reset(query, params=params, **kwargs)
76
77 def create_table(
78 self,
79 table_name: str,
80 obj: dd.DataFrame = None,
81 schema: sch.Schema = None,
82 ):
83 """Create a table."""
84 if obj is not None:
85 df = obj
86 elif schema is not None:
87 dtypes = ibis_schema_to_dask(schema)
88 df = schema.apply_to(
89 dd.from_pandas(
90 pd.DataFrame(columns=list(map(toolz.first, dtypes))),
91 npartitions=1,
92 )
93 )
94 else:
95 raise com.IbisError('Must pass expr or schema')
96
97 self.dictionary[table_name] = df
98
[end of ibis/backends/dask/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ibis/backends/dask/__init__.py b/ibis/backends/dask/__init__.py
--- a/ibis/backends/dask/__init__.py
+++ b/ibis/backends/dask/__init__.py
@@ -6,6 +6,9 @@
import toolz
from dask.base import DaskMethodsMixin
+# import the pandas execution module to register dispatched implementations of
+# execute_node that the dask backend will later override
+import ibis.backends.pandas.execution # noqa: F401
import ibis.common.exceptions as com
import ibis.config
import ibis.expr.schema as sch
@@ -15,8 +18,7 @@
from .client import DaskDatabase, DaskTable, ibis_schema_to_dask
from .core import execute_and_reset
-# Make sure that the pandas backend is loaded, dispatching has been
-# executed, and options have been loaded
+# Make sure that the pandas backend options have been loaded
ibis.pandas
|
{"golden_diff": "diff --git a/ibis/backends/dask/__init__.py b/ibis/backends/dask/__init__.py\n--- a/ibis/backends/dask/__init__.py\n+++ b/ibis/backends/dask/__init__.py\n@@ -6,6 +6,9 @@\n import toolz\n from dask.base import DaskMethodsMixin\n \n+# import the pandas execution module to register dispatched implementations of\n+# execute_node that the dask backend will later override\n+import ibis.backends.pandas.execution # noqa: F401\n import ibis.common.exceptions as com\n import ibis.config\n import ibis.expr.schema as sch\n@@ -15,8 +18,7 @@\n from .client import DaskDatabase, DaskTable, ibis_schema_to_dask\n from .core import execute_and_reset\n \n-# Make sure that the pandas backend is loaded, dispatching has been\n-# executed, and options have been loaded\n+# Make sure that the pandas backend options have been loaded\n ibis.pandas\n", "issue": "bug: isolated dask backend tests fail due to removed imports\nFor some reason lines 6 and 8 here: https://github.com/ibis-project/ibis/commit/a1262410310bb4d638a73e1cdfbe93c2b4089905#diff-96d84d9b6e9e84a2be7a046dc9853df1ca5fc6e894307339b02cd61e666c0149L6-L8\r\n\r\nwere removed.\r\n\r\nThis causes dasks tests to fail when they are run in isolation from other tests that (transitively) import from the pandas backend.\r\n\r\nThis is both a ci bug and a bug in the code, since we're not testing backends independently. Perhaps unsurprisingly I discovered the bug in #2937, which fixes the CI part of this problem.\n", "before_files": [{"content": "from typing import Mapping\n\nimport dask\nimport dask.dataframe as dd\nimport pandas as pd\nimport toolz\nfrom dask.base import DaskMethodsMixin\n\nimport ibis.common.exceptions as com\nimport ibis.config\nimport ibis.expr.schema as sch\nimport ibis.expr.types as ir\nfrom ibis.backends.pandas import BasePandasBackend\n\nfrom .client import DaskDatabase, DaskTable, ibis_schema_to_dask\nfrom .core import execute_and_reset\n\n# Make sure that the pandas backend is loaded, dispatching has been\n# executed, and options have been loaded\nibis.pandas\n\n\nclass Backend(BasePandasBackend):\n name = 'dask'\n database_class = DaskDatabase\n table_class = DaskTable\n\n def connect(self, dictionary):\n # register dispatchers\n from . 
import udf # noqa: F401\n\n return super().connect(dictionary)\n\n @property\n def version(self):\n return dask.__version__\n\n def execute(\n self,\n query: ir.Expr,\n params: Mapping[ir.Expr, object] = None,\n limit: str = 'default',\n **kwargs,\n ):\n if limit != 'default':\n raise ValueError(\n 'limit parameter to execute is not yet implemented in the '\n 'dask backend'\n )\n\n if not isinstance(query, ir.Expr):\n raise TypeError(\n \"`query` has type {!r}, expected ibis.expr.types.Expr\".format(\n type(query).__name__\n )\n )\n\n result = self.compile(query, params, **kwargs)\n if isinstance(result, DaskMethodsMixin):\n return result.compute()\n else:\n return result\n\n def compile(\n self, query: ir.Expr, params: Mapping[ir.Expr, object] = None, **kwargs\n ):\n \"\"\"Compile `expr`.\n\n Notes\n -----\n For the dask backend returns a dask graph that you can run ``.compute``\n on to get a pandas object.\n\n \"\"\"\n return execute_and_reset(query, params=params, **kwargs)\n\n def create_table(\n self,\n table_name: str,\n obj: dd.DataFrame = None,\n schema: sch.Schema = None,\n ):\n \"\"\"Create a table.\"\"\"\n if obj is not None:\n df = obj\n elif schema is not None:\n dtypes = ibis_schema_to_dask(schema)\n df = schema.apply_to(\n dd.from_pandas(\n pd.DataFrame(columns=list(map(toolz.first, dtypes))),\n npartitions=1,\n )\n )\n else:\n raise com.IbisError('Must pass expr or schema')\n\n self.dictionary[table_name] = df\n", "path": "ibis/backends/dask/__init__.py"}]}
| 1,538 | 224 |
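As an aside on the ibis/dask fix above: the patch works because importing `ibis.backends.pandas.execution` registers dispatched `execute_node` implementations as a side effect. A minimal, self-contained illustration of that import-for-side-effect registration pattern (the names below are invented for the sketch and are not ibis's real internals):

```python
# Sketch of a side-effect registration pattern, assuming a shared registry that
# "backend" modules populate when they are imported.
_EXECUTORS = {}

def register(op_name):
    """Decorator that records an implementation in the shared registry."""
    def decorator(func):
        _EXECUTORS[op_name] = func
        return func
    return decorator

# In a real project this would live in e.g. pandas_execution.py and be pulled in
# with `import pandas_execution  # noqa: F401` purely for its side effects.
@register("sum")
def execute_sum(values):
    return sum(values)

def execute(op_name, values):
    # Raises KeyError if the registering module was never imported, which is
    # the isolation failure described in the issue.
    return _EXECUTORS[op_name](values)

print(execute("sum", [1, 2, 3]))  # 6
```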
gh_patches_debug_5574
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-2384
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Viewing of "file"-kind of CIV in archive items leads to 403
I created an archive item and added a file type CIV but when trying to view the file it leads to a permission denied error. It seems that the permission check when serving a CIV file is missing a check for archive item viewing. It only checks for algorithm jobs and evaluations:
https://github.com/comic/grand-challenge.org/blob/9322d09c0859998a77accb5c13d6db675504a9c1/app/grandchallenge/serving/views.py#L94-L117
Permissions for archives are only done on archive level (vs. archive item level) so we need to add a check here to see if the CIV belongs to an archive item and if the user has the `archives.view_archive` permission for that archive.
</issue>
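For context, a minimal sketch of the missing check the issue describes, written as a standalone helper (it assumes Django plus django-guardian's `get_objects_for_user`, and the `Archive -> items -> values` relation used elsewhere in this codebase):

```python
# Hypothetical helper expressing the missing permission check: the CIV may be
# served if it belongs to an archive item of an archive the user can view.
from guardian.shortcuts import get_objects_for_user


def user_may_view_civ_via_archive(user, civ_pk):
    """True if the CIV sits in an archive the user holds archives.view_archive on."""
    return (
        get_objects_for_user(
            user=user,
            perms="archives.view_archive",
            accept_global_perms=False,
        )
        .filter(items__values__pk=civ_pk)
        .exists()
    )
```

In `serve_component_interface_value` this would become one more `elif` branch that returns `protected_storage_redirect(name=civ.file.name)` when the helper is true.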
<code>
[start of app/grandchallenge/serving/views.py]
1 import posixpath
2
3 from django.conf import settings
4 from django.core.exceptions import MultipleObjectsReturned, PermissionDenied
5 from django.db.models import F, Q
6 from django.http import Http404, HttpResponseRedirect
7 from django.utils._os import safe_join
8 from guardian.shortcuts import get_objects_for_user
9 from knox.auth import TokenAuthentication
10 from rest_framework.exceptions import AuthenticationFailed
11
12 from grandchallenge.cases.models import Image
13 from grandchallenge.challenges.models import ChallengeRequest
14 from grandchallenge.components.models import ComponentInterfaceValue
15 from grandchallenge.core.storage import internal_protected_s3_storage
16 from grandchallenge.evaluation.models import Submission
17 from grandchallenge.serving.models import Download
18
19
20 def protected_storage_redirect(*, name):
21 # Get the storage with the internal redirect and auth. This will prepend
22 # settings.AWS_S3_ENDPOINT_URL to the url
23 if not internal_protected_s3_storage.exists(name=name):
24 raise Http404("File not found.")
25
26 if settings.PROTECTED_S3_STORAGE_USE_CLOUDFRONT:
27 response = HttpResponseRedirect(
28 internal_protected_s3_storage.cloudfront_signed_url(name=name)
29 )
30 else:
31 url = internal_protected_s3_storage.url(name=name)
32 response = HttpResponseRedirect(url)
33
34 return response
35
36
37 def serve_images(request, *, pk, path, pa="", pb=""):
38 document_root = safe_join(
39 f"/{settings.IMAGE_FILES_SUBDIRECTORY}", pa, pb, str(pk)
40 )
41 path = posixpath.normpath(path).lstrip("/")
42 name = safe_join(document_root, path)
43
44 try:
45 image = Image.objects.get(pk=pk)
46 except Image.DoesNotExist:
47 raise Http404("Image not found.")
48
49 try:
50 user, _ = TokenAuthentication().authenticate(request)
51 except (AuthenticationFailed, TypeError):
52 user = request.user
53
54 if user.has_perm("view_image", image):
55 _create_download(creator_id=user.pk, image_id=image.pk)
56 return protected_storage_redirect(name=name)
57
58 raise PermissionDenied
59
60
61 def serve_submissions(request, *, submission_pk, **_):
62 try:
63 submission = Submission.objects.get(pk=submission_pk)
64 except Submission.DoesNotExist:
65 raise Http404("Submission not found.")
66
67 if request.user.has_perm("view_submission", submission):
68 _create_download(
69 creator_id=request.user.pk, submission_id=submission.pk
70 )
71 return protected_storage_redirect(
72 name=submission.predictions_file.name
73 )
74
75 raise PermissionDenied
76
77
78 def serve_component_interface_value(
79 request, *, component_interface_value_pk, **_
80 ):
81 try:
82 user, _ = TokenAuthentication().authenticate(request)
83 except (AuthenticationFailed, TypeError):
84 user = request.user
85
86 try:
87 # output should only be connected to a single job; throw error if not?
88 civ = ComponentInterfaceValue.objects.get(
89 pk=component_interface_value_pk
90 )
91 except (MultipleObjectsReturned, ComponentInterfaceValue.DoesNotExist):
92 raise Http404("No ComponentInterfaceValue found.")
93
94 if (
95 get_objects_for_user(
96 user=user, perms="algorithms.view_job", accept_global_perms=False
97 )
98 .filter(
99 Q(outputs__pk=component_interface_value_pk)
100 | Q(inputs__pk=component_interface_value_pk)
101 )
102 .exists()
103 ):
104 return protected_storage_redirect(name=civ.file.name)
105 elif (
106 get_objects_for_user(
107 user=user,
108 perms="evaluation.view_evaluation",
109 accept_global_perms=False,
110 )
111 .filter(
112 Q(outputs__pk=component_interface_value_pk)
113 | Q(inputs__pk=component_interface_value_pk)
114 )
115 .exists()
116 ):
117 return protected_storage_redirect(name=civ.file.name)
118
119 raise PermissionDenied
120
121
122 def _create_download(*, creator_id, image_id=None, submission_id=None):
123 kwargs = {"creator_id": creator_id}
124
125 if image_id is not None:
126 kwargs["image_id"] = image_id
127
128 if submission_id is not None:
129 kwargs["submission_id"] = submission_id
130
131 n_updated = Download.objects.filter(**kwargs).update(count=F("count") + 1)
132
133 if n_updated == 0:
134 Download.objects.create(**kwargs)
135
136
137 def serve_structured_challenge_submission_form(
138 request, *, challenge_request_pk, **_
139 ):
140 try:
141 challenge_request = ChallengeRequest.objects.get(
142 pk=challenge_request_pk
143 )
144 except ChallengeRequest.DoesNotExist:
145 raise Http404("Challenge request not found.")
146
147 if request.user.has_perm("challenges.view_challengerequest"):
148 return protected_storage_redirect(
149 name=challenge_request.structured_challenge_submission_form.name
150 )
151 else:
152 raise PermissionDenied
153
[end of app/grandchallenge/serving/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/serving/views.py b/app/grandchallenge/serving/views.py
--- a/app/grandchallenge/serving/views.py
+++ b/app/grandchallenge/serving/views.py
@@ -115,6 +115,16 @@
.exists()
):
return protected_storage_redirect(name=civ.file.name)
+ elif (
+ get_objects_for_user(
+ user=user,
+ perms="archives.view_archive",
+ accept_global_perms=False,
+ )
+ .filter(items__values__pk=component_interface_value_pk)
+ .exists()
+ ):
+ return protected_storage_redirect(name=civ.file.name)
raise PermissionDenied
|
{"golden_diff": "diff --git a/app/grandchallenge/serving/views.py b/app/grandchallenge/serving/views.py\n--- a/app/grandchallenge/serving/views.py\n+++ b/app/grandchallenge/serving/views.py\n@@ -115,6 +115,16 @@\n .exists()\n ):\n return protected_storage_redirect(name=civ.file.name)\n+ elif (\n+ get_objects_for_user(\n+ user=user,\n+ perms=\"archives.view_archive\",\n+ accept_global_perms=False,\n+ )\n+ .filter(items__values__pk=component_interface_value_pk)\n+ .exists()\n+ ):\n+ return protected_storage_redirect(name=civ.file.name)\n \n raise PermissionDenied\n", "issue": "Viewing of \"file\"-kind of CIV in archive items leads to 403\nI created an archive item and added a file type CIV but when trying to view the file it leads to a permission denied error. It seems that the permission check when serving a CIV file is missing a check for archive item viewing. It only checks for algorithm jobs and evaluations:\r\n\r\nhttps://github.com/comic/grand-challenge.org/blob/9322d09c0859998a77accb5c13d6db675504a9c1/app/grandchallenge/serving/views.py#L94-L117\r\n\r\nPermissions for archives are only done on archive level (vs. archive item level) so we need to add a check here to see if the CIV belongs to an archive item and if the user has the `archives.view_archive` permission for that archive.\n", "before_files": [{"content": "import posixpath\n\nfrom django.conf import settings\nfrom django.core.exceptions import MultipleObjectsReturned, PermissionDenied\nfrom django.db.models import F, Q\nfrom django.http import Http404, HttpResponseRedirect\nfrom django.utils._os import safe_join\nfrom guardian.shortcuts import get_objects_for_user\nfrom knox.auth import TokenAuthentication\nfrom rest_framework.exceptions import AuthenticationFailed\n\nfrom grandchallenge.cases.models import Image\nfrom grandchallenge.challenges.models import ChallengeRequest\nfrom grandchallenge.components.models import ComponentInterfaceValue\nfrom grandchallenge.core.storage import internal_protected_s3_storage\nfrom grandchallenge.evaluation.models import Submission\nfrom grandchallenge.serving.models import Download\n\n\ndef protected_storage_redirect(*, name):\n # Get the storage with the internal redirect and auth. 
This will prepend\n # settings.AWS_S3_ENDPOINT_URL to the url\n if not internal_protected_s3_storage.exists(name=name):\n raise Http404(\"File not found.\")\n\n if settings.PROTECTED_S3_STORAGE_USE_CLOUDFRONT:\n response = HttpResponseRedirect(\n internal_protected_s3_storage.cloudfront_signed_url(name=name)\n )\n else:\n url = internal_protected_s3_storage.url(name=name)\n response = HttpResponseRedirect(url)\n\n return response\n\n\ndef serve_images(request, *, pk, path, pa=\"\", pb=\"\"):\n document_root = safe_join(\n f\"/{settings.IMAGE_FILES_SUBDIRECTORY}\", pa, pb, str(pk)\n )\n path = posixpath.normpath(path).lstrip(\"/\")\n name = safe_join(document_root, path)\n\n try:\n image = Image.objects.get(pk=pk)\n except Image.DoesNotExist:\n raise Http404(\"Image not found.\")\n\n try:\n user, _ = TokenAuthentication().authenticate(request)\n except (AuthenticationFailed, TypeError):\n user = request.user\n\n if user.has_perm(\"view_image\", image):\n _create_download(creator_id=user.pk, image_id=image.pk)\n return protected_storage_redirect(name=name)\n\n raise PermissionDenied\n\n\ndef serve_submissions(request, *, submission_pk, **_):\n try:\n submission = Submission.objects.get(pk=submission_pk)\n except Submission.DoesNotExist:\n raise Http404(\"Submission not found.\")\n\n if request.user.has_perm(\"view_submission\", submission):\n _create_download(\n creator_id=request.user.pk, submission_id=submission.pk\n )\n return protected_storage_redirect(\n name=submission.predictions_file.name\n )\n\n raise PermissionDenied\n\n\ndef serve_component_interface_value(\n request, *, component_interface_value_pk, **_\n):\n try:\n user, _ = TokenAuthentication().authenticate(request)\n except (AuthenticationFailed, TypeError):\n user = request.user\n\n try:\n # output should only be connected to a single job; throw error if not?\n civ = ComponentInterfaceValue.objects.get(\n pk=component_interface_value_pk\n )\n except (MultipleObjectsReturned, ComponentInterfaceValue.DoesNotExist):\n raise Http404(\"No ComponentInterfaceValue found.\")\n\n if (\n get_objects_for_user(\n user=user, perms=\"algorithms.view_job\", accept_global_perms=False\n )\n .filter(\n Q(outputs__pk=component_interface_value_pk)\n | Q(inputs__pk=component_interface_value_pk)\n )\n .exists()\n ):\n return protected_storage_redirect(name=civ.file.name)\n elif (\n get_objects_for_user(\n user=user,\n perms=\"evaluation.view_evaluation\",\n accept_global_perms=False,\n )\n .filter(\n Q(outputs__pk=component_interface_value_pk)\n | Q(inputs__pk=component_interface_value_pk)\n )\n .exists()\n ):\n return protected_storage_redirect(name=civ.file.name)\n\n raise PermissionDenied\n\n\ndef _create_download(*, creator_id, image_id=None, submission_id=None):\n kwargs = {\"creator_id\": creator_id}\n\n if image_id is not None:\n kwargs[\"image_id\"] = image_id\n\n if submission_id is not None:\n kwargs[\"submission_id\"] = submission_id\n\n n_updated = Download.objects.filter(**kwargs).update(count=F(\"count\") + 1)\n\n if n_updated == 0:\n Download.objects.create(**kwargs)\n\n\ndef serve_structured_challenge_submission_form(\n request, *, challenge_request_pk, **_\n):\n try:\n challenge_request = ChallengeRequest.objects.get(\n pk=challenge_request_pk\n )\n except ChallengeRequest.DoesNotExist:\n raise Http404(\"Challenge request not found.\")\n\n if request.user.has_perm(\"challenges.view_challengerequest\"):\n return protected_storage_redirect(\n name=challenge_request.structured_challenge_submission_form.name\n )\n else:\n raise 
PermissionDenied\n", "path": "app/grandchallenge/serving/views.py"}]}
| 2,090 | 151 |
gh_patches_debug_17925
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-588
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix import error
```
>>> from parsl.dataflow.states import FINAL_FAILED_STATES
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/annawoodard/parsl/parsl/__init__.py", line 37, in <module>
from parsl.dataflow.dflow import DataFlowKernel, DataFlowKernelLoader
File "/home/annawoodard/parsl/parsl/dataflow/dflow.py", line 31, in <module>
from parsl.dataflow.usage_tracking.usage import UsageTracker
File "/home/annawoodard/parsl/parsl/dataflow/usage_tracking/usage.py", line 13, in <module>
from parsl.dataflow.states import FINAL_FAILED_STATES
ImportError: cannot import name 'FINAL_FAILED_STATES'
```
</issue>
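For context on the fix: the states module exports `FINAL_FAILURE_STATES`, not `FINAL_FAILED_STATES`, so the import in `usage.py` has to follow suit. A small sketch of the corrected usage (the helper function is invented for illustration; only the import and the membership test come from the patch further down):

```python
from parsl.dataflow.states import FINAL_FAILURE_STATES  # corrected constant name


def count_failed_apps(dfk):
    """Count tasks whose recorded status is one of the final failure states."""
    return len([t for t in dfk.tasks
                if dfk.tasks[t]['status'] in FINAL_FAILURE_STATES])
```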
<code>
[start of parsl/dataflow/usage_tracking/usage.py]
1 import uuid
2 import time
3 import hashlib
4 import os
5 import getpass
6 import json
7 import logging
8 import socket
9 import sys
10 import platform
11 import multiprocessing as mp
12
13 from parsl.dataflow.states import FINAL_FAILED_STATES
14 from parsl.version import VERSION as PARSL_VERSION
15
16 logger = logging.getLogger(__name__)
17
18
19 def async_process(fn):
20 """ Decorator function to launch a function as a separate process """
21
22 def run(*args, **kwargs):
23 proc = mp.Process(target=fn, args=args, kwargs=kwargs)
24 proc.start()
25 return proc
26
27 return run
28
29
30 @async_process
31 def udp_messenger(domain_name, UDP_IP, UDP_PORT, sock_timeout, message):
32 """Send UDP messages to usage tracker asynchronously
33
34 This multiprocessing based messenger was written to overcome the limitations
35 of signalling/terminating a thread that is blocked on a system call. This
36 messenger is created as a separate process, and initialized with 2 queues,
37 to_send to receive messages to be sent to the internet.
38
39 Args:
40 - domain_name (str) : Domain name string
41 - UDP_IP (str) : IP address YYY.YYY.YYY.YYY
42 - UDP_PORT (int) : UDP port to send out on
43 - sock_timeout (int) : Socket timeout
44 - to_send (multiprocessing.Queue) : Queue of outgoing messages to internet
45 """
46 try:
47 if message is None:
48 raise ValueError("message was none")
49
50 encoded_message = bytes(message, "utf-8")
51
52 if encoded_message is None:
53 raise ValueError("utf-8 encoding of message failed")
54
55 if domain_name:
56 try:
57 UDP_IP = socket.gethostbyname(domain_name)
58 except Exception:
59 # (False, "Domain lookup failed, defaulting to {0}".format(UDP_IP))
60 pass
61
62 if UDP_IP is None:
63 raise Exception("UDP_IP is None")
64
65 if UDP_PORT is None:
66 raise Exception("UDP_PORT is None")
67
68 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP
69 sock.settimeout(sock_timeout)
70 sock.sendto(bytes(message, "utf-8"), (UDP_IP, UDP_PORT))
71 sock.close()
72
73 except socket.timeout:
74 logger.debug("Failed to send usage tracking data: socket timeout")
75 except OSError as e:
76 logger.debug("Failed to send usage tracking data: OSError: {}".format(e))
77 except Exception as e:
78 logger.debug("Failed to send usage tracking data: Exception: {}".format(e))
79
80
81 class UsageTracker (object):
82 """Anonymized Usage Tracking for Parsl.
83
84 Client for this is here : https://github.com/Parsl/parsl_tracking
85 This issue captures the discussion that went into functionality
86 implemented here : https://github.com/Parsl/parsl/issues/34
87
88 """
89
90 def __init__(self, dfk, ip='52.3.111.203', port=50077,
91 domain_name='tracking.parsl-project.org'):
92 """Initialize usage tracking unless the user has opted-out.
93
94 We will try to resolve the hostname specified in kwarg:domain_name
95 and if that fails attempt to use the kwarg:ip. Determining the
96 IP and sending message is threaded to avoid slowing down DFK
97 initialization.
98
99 Tracks usage stats by inspecting the internal state of the dfk.
100
101 Args:
102 - dfk (DFK object) : Data Flow Kernel object
103
104 KWargs:
105 - ip (string) : IP address
106 - port (int) : Port number, Default:50077
107 - domain_name (string) : Domain name, will override IP
108 Default: tracking.parsl-project.org
109 """
110
111 self.domain_name = domain_name
112 self.ip = ip
113 # The sock timeout will only apply to UDP send and not domain resolution
114 self.sock_timeout = 5
115 self.UDP_PORT = port
116 self.UDP_IP = None
117 self.procs = []
118 self.dfk = dfk
119 self.config = self.dfk.config
120 self.uuid = str(uuid.uuid4())
121 self.parsl_version = PARSL_VERSION
122 self.python_version = "{}.{}.{}".format(sys.version_info.major,
123 sys.version_info.minor,
124 sys.version_info.micro)
125 self.test_mode, self.tracking_enabled = self.check_tracking_enabled()
126 logger.debug("Tracking status: {}".format(self.tracking_enabled))
127 logger.debug("Testing mode : {}".format(self.test_mode))
128 self.initialized = False # Once first message is sent this will be True
129
130 def check_tracking_enabled(self):
131 """By default tracking is enabled.
132
133 If Test mode is set via env variable PARSL_TESTING, a test flag is set
134
135 Tracking is disabled if :
136 1. config["globals"]["usageTracking"] is set to False (Bool)
137 2. Environment variable PARSL_TRACKING is set to false (case insensitive)
138
139 """
140 track = True # By default we track usage
141 test = False # By default we are not in testing mode
142
143 testvar = str(os.environ.get("PARSL_TESTING", 'None')).lower()
144 if testvar == 'true':
145 test = True
146
147 if not self.config.usage_tracking:
148 track = False
149
150 envvar = str(os.environ.get("PARSL_TRACKING", True)).lower()
151 if envvar == "false":
152 track = False
153
154 return test, track
155
156 def construct_start_message(self):
157 """Collect preliminary run info at the start of the DFK.
158
159 Returns :
160 - Message dict dumped as json string, ready for UDP
161 """
162 uname = getpass.getuser().encode('latin1')
163 hashed_username = hashlib.sha256(uname).hexdigest()[0:10]
164 hname = socket.gethostname().encode('latin1')
165 hashed_hostname = hashlib.sha256(hname).hexdigest()[0:10]
166 message = {'uuid': self.uuid,
167 'uname': hashed_username,
168 'hname': hashed_hostname,
169 'test': self.test_mode,
170 'parsl_v': self.parsl_version,
171 'python_v': self.python_version,
172 'os': platform.system(),
173 'os_v': platform.release(),
174 'start': time.time()}
175
176 return json.dumps(message)
177
178 def construct_end_message(self):
179 """Collect the final run information at the time of DFK cleanup.
180
181 Returns:
182 - Message dict dumped as json string, ready for UDP
183 """
184 app_count = self.dfk.task_count
185
186 site_count = len([x for x in self.dfk.config.executors if x.managed])
187
188 failed_states = FINAL_FAILED_STATES
189 app_fails = len([t for t in self.dfk.tasks if
190 self.dfk.tasks[t]['status'] in failed_states])
191
192 message = {'uuid': self.uuid,
193 'end': time.time(),
194 't_apps': app_count,
195 'sites': site_count,
196 'c_time': None,
197 'failed': app_fails,
198 'test': self.test_mode,
199 }
200
201 return json.dumps(message)
202
203 def send_UDP_message(self, message):
204 """Send UDP message."""
205 x = 0
206 if self.tracking_enabled:
207 try:
208 proc = udp_messenger(self.domain_name, self.UDP_IP, self.UDP_PORT, self.sock_timeout, message)
209 self.procs.append(proc)
210 except Exception as e:
211 logger.debug("Usage tracking failed: {}".format(e))
212 else:
213 x = -1
214
215 return x
216
217 def send_message(self):
218 """Send message over UDP.
219
220 If tracking is disables, the bytes_sent will always be set to -1
221
222 Returns:
223 (bytes_sent, time_taken)
224 """
225 start = time.time()
226 message = None
227 if not self.initialized:
228 message = self.construct_start_message()
229 self.initialized = True
230 else:
231 message = self.construct_end_message()
232
233 self.send_UDP_message(message)
234 end = time.time()
235
236 return end - start
237
238 def __del__(self):
239 return self.close()
240
241 def close(self):
242 """We terminate (SIGTERM) the processes added to the self.procs list """
243 for proc in self.procs:
244 proc.terminate()
245
246
247 if __name__ == '__main__':
248
249 from parsl import *
250
251 set_stream_logger()
252 workers = ThreadPoolExecutor(max_workers=4)
253 dfk = DataFlowKernel(executors=[workers])
254
255 dfk.cleanup()
256
[end of parsl/dataflow/usage_tracking/usage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/dataflow/usage_tracking/usage.py b/parsl/dataflow/usage_tracking/usage.py
--- a/parsl/dataflow/usage_tracking/usage.py
+++ b/parsl/dataflow/usage_tracking/usage.py
@@ -10,7 +10,7 @@
import platform
import multiprocessing as mp
-from parsl.dataflow.states import FINAL_FAILED_STATES
+from parsl.dataflow.states import FINAL_FAILURE_STATES
from parsl.version import VERSION as PARSL_VERSION
logger = logging.getLogger(__name__)
@@ -185,9 +185,8 @@
site_count = len([x for x in self.dfk.config.executors if x.managed])
- failed_states = FINAL_FAILED_STATES
app_fails = len([t for t in self.dfk.tasks if
- self.dfk.tasks[t]['status'] in failed_states])
+ self.dfk.tasks[t]['status'] in FINAL_FAILURE_STATES])
message = {'uuid': self.uuid,
'end': time.time(),
|
{"golden_diff": "diff --git a/parsl/dataflow/usage_tracking/usage.py b/parsl/dataflow/usage_tracking/usage.py\n--- a/parsl/dataflow/usage_tracking/usage.py\n+++ b/parsl/dataflow/usage_tracking/usage.py\n@@ -10,7 +10,7 @@\n import platform\n import multiprocessing as mp\n \n-from parsl.dataflow.states import FINAL_FAILED_STATES\n+from parsl.dataflow.states import FINAL_FAILURE_STATES\n from parsl.version import VERSION as PARSL_VERSION\n \n logger = logging.getLogger(__name__)\n@@ -185,9 +185,8 @@\n \n site_count = len([x for x in self.dfk.config.executors if x.managed])\n \n- failed_states = FINAL_FAILED_STATES\n app_fails = len([t for t in self.dfk.tasks if\n- self.dfk.tasks[t]['status'] in failed_states])\n+ self.dfk.tasks[t]['status'] in FINAL_FAILURE_STATES])\n \n message = {'uuid': self.uuid,\n 'end': time.time(),\n", "issue": "Fix import error\n```\r\n>>> from parsl.dataflow.states import FINAL_FAILED_STATES\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/annawoodard/parsl/parsl/__init__.py\", line 37, in <module>\r\n from parsl.dataflow.dflow import DataFlowKernel, DataFlowKernelLoader\r\n File \"/home/annawoodard/parsl/parsl/dataflow/dflow.py\", line 31, in <module>\r\n from parsl.dataflow.usage_tracking.usage import UsageTracker\r\n File \"/home/annawoodard/parsl/parsl/dataflow/usage_tracking/usage.py\", line 13, in <module>\r\n from parsl.dataflow.states import FINAL_FAILED_STATES\r\nImportError: cannot import name 'FINAL_FAILED_STATES'\r\n```\n", "before_files": [{"content": "import uuid\nimport time\nimport hashlib\nimport os\nimport getpass\nimport json\nimport logging\nimport socket\nimport sys\nimport platform\nimport multiprocessing as mp\n\nfrom parsl.dataflow.states import FINAL_FAILED_STATES\nfrom parsl.version import VERSION as PARSL_VERSION\n\nlogger = logging.getLogger(__name__)\n\n\ndef async_process(fn):\n \"\"\" Decorator function to launch a function as a separate process \"\"\"\n\n def run(*args, **kwargs):\n proc = mp.Process(target=fn, args=args, kwargs=kwargs)\n proc.start()\n return proc\n\n return run\n\n\n@async_process\ndef udp_messenger(domain_name, UDP_IP, UDP_PORT, sock_timeout, message):\n \"\"\"Send UDP messages to usage tracker asynchronously\n\n This multiprocessing based messenger was written to overcome the limitations\n of signalling/terminating a thread that is blocked on a system call. 
This\n messenger is created as a separate process, and initialized with 2 queues,\n to_send to receive messages to be sent to the internet.\n\n Args:\n - domain_name (str) : Domain name string\n - UDP_IP (str) : IP address YYY.YYY.YYY.YYY\n - UDP_PORT (int) : UDP port to send out on\n - sock_timeout (int) : Socket timeout\n - to_send (multiprocessing.Queue) : Queue of outgoing messages to internet\n \"\"\"\n try:\n if message is None:\n raise ValueError(\"message was none\")\n\n encoded_message = bytes(message, \"utf-8\")\n\n if encoded_message is None:\n raise ValueError(\"utf-8 encoding of message failed\")\n\n if domain_name:\n try:\n UDP_IP = socket.gethostbyname(domain_name)\n except Exception:\n # (False, \"Domain lookup failed, defaulting to {0}\".format(UDP_IP))\n pass\n\n if UDP_IP is None:\n raise Exception(\"UDP_IP is None\")\n\n if UDP_PORT is None:\n raise Exception(\"UDP_PORT is None\")\n\n sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP\n sock.settimeout(sock_timeout)\n sock.sendto(bytes(message, \"utf-8\"), (UDP_IP, UDP_PORT))\n sock.close()\n\n except socket.timeout:\n logger.debug(\"Failed to send usage tracking data: socket timeout\")\n except OSError as e:\n logger.debug(\"Failed to send usage tracking data: OSError: {}\".format(e))\n except Exception as e:\n logger.debug(\"Failed to send usage tracking data: Exception: {}\".format(e))\n\n\nclass UsageTracker (object):\n \"\"\"Anonymized Usage Tracking for Parsl.\n\n Client for this is here : https://github.com/Parsl/parsl_tracking\n This issue captures the discussion that went into functionality\n implemented here : https://github.com/Parsl/parsl/issues/34\n\n \"\"\"\n\n def __init__(self, dfk, ip='52.3.111.203', port=50077,\n domain_name='tracking.parsl-project.org'):\n \"\"\"Initialize usage tracking unless the user has opted-out.\n\n We will try to resolve the hostname specified in kwarg:domain_name\n and if that fails attempt to use the kwarg:ip. Determining the\n IP and sending message is threaded to avoid slowing down DFK\n initialization.\n\n Tracks usage stats by inspecting the internal state of the dfk.\n\n Args:\n - dfk (DFK object) : Data Flow Kernel object\n\n KWargs:\n - ip (string) : IP address\n - port (int) : Port number, Default:50077\n - domain_name (string) : Domain name, will override IP\n Default: tracking.parsl-project.org\n \"\"\"\n\n self.domain_name = domain_name\n self.ip = ip\n # The sock timeout will only apply to UDP send and not domain resolution\n self.sock_timeout = 5\n self.UDP_PORT = port\n self.UDP_IP = None\n self.procs = []\n self.dfk = dfk\n self.config = self.dfk.config\n self.uuid = str(uuid.uuid4())\n self.parsl_version = PARSL_VERSION\n self.python_version = \"{}.{}.{}\".format(sys.version_info.major,\n sys.version_info.minor,\n sys.version_info.micro)\n self.test_mode, self.tracking_enabled = self.check_tracking_enabled()\n logger.debug(\"Tracking status: {}\".format(self.tracking_enabled))\n logger.debug(\"Testing mode : {}\".format(self.test_mode))\n self.initialized = False # Once first message is sent this will be True\n\n def check_tracking_enabled(self):\n \"\"\"By default tracking is enabled.\n\n If Test mode is set via env variable PARSL_TESTING, a test flag is set\n\n Tracking is disabled if :\n 1. config[\"globals\"][\"usageTracking\"] is set to False (Bool)\n 2. 
Environment variable PARSL_TRACKING is set to false (case insensitive)\n\n \"\"\"\n track = True # By default we track usage\n test = False # By default we are not in testing mode\n\n testvar = str(os.environ.get(\"PARSL_TESTING\", 'None')).lower()\n if testvar == 'true':\n test = True\n\n if not self.config.usage_tracking:\n track = False\n\n envvar = str(os.environ.get(\"PARSL_TRACKING\", True)).lower()\n if envvar == \"false\":\n track = False\n\n return test, track\n\n def construct_start_message(self):\n \"\"\"Collect preliminary run info at the start of the DFK.\n\n Returns :\n - Message dict dumped as json string, ready for UDP\n \"\"\"\n uname = getpass.getuser().encode('latin1')\n hashed_username = hashlib.sha256(uname).hexdigest()[0:10]\n hname = socket.gethostname().encode('latin1')\n hashed_hostname = hashlib.sha256(hname).hexdigest()[0:10]\n message = {'uuid': self.uuid,\n 'uname': hashed_username,\n 'hname': hashed_hostname,\n 'test': self.test_mode,\n 'parsl_v': self.parsl_version,\n 'python_v': self.python_version,\n 'os': platform.system(),\n 'os_v': platform.release(),\n 'start': time.time()}\n\n return json.dumps(message)\n\n def construct_end_message(self):\n \"\"\"Collect the final run information at the time of DFK cleanup.\n\n Returns:\n - Message dict dumped as json string, ready for UDP\n \"\"\"\n app_count = self.dfk.task_count\n\n site_count = len([x for x in self.dfk.config.executors if x.managed])\n\n failed_states = FINAL_FAILED_STATES\n app_fails = len([t for t in self.dfk.tasks if\n self.dfk.tasks[t]['status'] in failed_states])\n\n message = {'uuid': self.uuid,\n 'end': time.time(),\n 't_apps': app_count,\n 'sites': site_count,\n 'c_time': None,\n 'failed': app_fails,\n 'test': self.test_mode,\n }\n\n return json.dumps(message)\n\n def send_UDP_message(self, message):\n \"\"\"Send UDP message.\"\"\"\n x = 0\n if self.tracking_enabled:\n try:\n proc = udp_messenger(self.domain_name, self.UDP_IP, self.UDP_PORT, self.sock_timeout, message)\n self.procs.append(proc)\n except Exception as e:\n logger.debug(\"Usage tracking failed: {}\".format(e))\n else:\n x = -1\n\n return x\n\n def send_message(self):\n \"\"\"Send message over UDP.\n\n If tracking is disables, the bytes_sent will always be set to -1\n\n Returns:\n (bytes_sent, time_taken)\n \"\"\"\n start = time.time()\n message = None\n if not self.initialized:\n message = self.construct_start_message()\n self.initialized = True\n else:\n message = self.construct_end_message()\n\n self.send_UDP_message(message)\n end = time.time()\n\n return end - start\n\n def __del__(self):\n return self.close()\n\n def close(self):\n \"\"\"We terminate (SIGTERM) the processes added to the self.procs list \"\"\"\n for proc in self.procs:\n proc.terminate()\n\n\nif __name__ == '__main__':\n\n from parsl import *\n\n set_stream_logger()\n workers = ThreadPoolExecutor(max_workers=4)\n dfk = DataFlowKernel(executors=[workers])\n\n dfk.cleanup()\n", "path": "parsl/dataflow/usage_tracking/usage.py"}]}
| 3,269 | 225 |
gh_patches_debug_20633
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-460
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AWS::Serverless::Api Cors config transform error
#### cfn-lint version:
``` console
$ cfn-lint --version
cfn-lint 0.8.3
```
#### Description of issue:
When I have a `AWS::Serverless::Api` object with a [Cors Configuration](https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#cors-configuration) object in it I routinely get the following error when linting `template.yaml`
`E0001 Error transforming template: 'dict_node' object has no attribute 'start_mark'`
I also got the following stack trace when running
```
Traceback (most recent call last):
File "/usr/local/bin/cfn-lint", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/site-packages/cfnlint/__main__.py", line 36, in main
args.regions, args.override_spec))
File "/usr/local/lib/python2.7/site-packages/cfnlint/core.py", line 46, in run_cli
return run_checks(filename, template, rules, regions)
File "/usr/local/lib/python2.7/site-packages/cfnlint/core.py", line 316, in run_checks
matches.extend(runner.transform())
File "/usr/local/lib/python2.7/site-packages/cfnlint/__init__.py", line 894, in transform
matches = transform.transform_template()
File "/usr/local/lib/python2.7/site-packages/cfnlint/transform.py", line 115, in transform_template
sam_translator.translate(sam_template=self._template, parameter_values={}))
File "/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/translator/translator.py", line 71, in translate
translated = macro.to_cloudformation(**kwargs)
File "/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/sam_resources.py", line 501, in to_cloudformation
rest_api, deployment, stage = api_generator.to_cloudformation()
File "/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/api/api_generator.py", line 160, in to_cloudformation
rest_api = self._construct_rest_api()
File "/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/api/api_generator.py", line 69, in _construct_rest_api
self._add_cors()
File "/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/api/api_generator.py", line 212, in _add_cors
self.definition_body = editor.swagger
File "/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/swagger/swagger.py", line 298, in swagger
return copy.deepcopy(self._doc)
File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 174, in deepcopy
y = copier(memo)
File "/usr/local/lib/python2.7/site-packages/cfnlint/decode/node.py", line 75, in __deepcopy__
result = cls.__new__(cls, self.start_mark, self.end_mark)
AttributeError: 'dict_node' object has no attribute 'start_mark'
```
Here is simple example I made to reproduce
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
Function:
Type: AWS::Serverless::Function
Properties:
CodeUri: hello_world/
Handler: app.lambda_handler
Runtime: nodejs8.10
Api:
Type: AWS::Serverless::Api
Properties:
Name: test
StageName: test
Cors:
AllowHeaders: "'my_custom_header,content-type'"
AllowOrigin: "'*'"
MaxAge: ""
DefinitionBody:
swagger: "2.0"
info:
version: "1.0"
title: "test"
basePath: "/"
schemes:
- "https"
paths:
/:
get:
x-amazon-apigateway-integration:
uri:
Fn::Sub: "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${Function.Arn}/invocations"
passthroughBehavior: "when_no_match"
httpMethod: "POST"
type: "aws_proxy"
```
If you comment out the following lines then issue goes away:
```yaml
Cors:
AllowHeaders: "'my_custom_header,content-type'"
AllowOrigin: "'*'"
MaxAge: ""
```
</issue>
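The failing line in the traceback is `cls.__new__(cls, self.start_mark, self.end_mark)`: `__new__` never runs `__init__`, so nodes produced that way carry no `start_mark`, and a later `copy.deepcopy` over them raises exactly this `AttributeError`. A self-contained sketch of the failure mode and of the constructor-based copy that avoids it, using a simplified stand-in class rather than cfn-lint's real `dict_node`:

```python
from copy import deepcopy


class MarkedDict(dict):
    """Simplified stand-in for cfn-lint's dict_node."""

    def __init__(self, data, start_mark, end_mark):
        super().__init__(data)
        self.start_mark = start_mark
        self.end_mark = end_mark

    def __deepcopy__(self, memo):
        # Constructor-based copy: __init__ runs, so the marks survive. The
        # cls.__new__(cls, ...) approach skips __init__ and yields a node
        # without start_mark, so the next deepcopy of that node fails.
        result = MarkedDict(self, self.start_mark, self.end_mark)
        memo[id(self)] = result
        for k, v in self.items():
            result[deepcopy(k, memo)] = deepcopy(v, memo)
        return result


node = MarkedDict({"swagger": "2.0"}, start_mark=(1, 0), end_mark=(1, 12))
copied = deepcopy(deepcopy(node))   # a copy of a copy, as the SAM translator does
assert copied.start_mark == (1, 0)  # survives because __init__ ran on each copy
```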
<code>
[start of src/cfnlint/decode/node.py]
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import sys
18 import logging
19 from copy import deepcopy
20 import six
21
22 LOGGER = logging.getLogger(__name__)
23
24
25 class TemplateAttributeError(AttributeError):
26 """ Custom error to capture Attribute Errors in the Template """
27
28
29 def create_str_node_class(cls):
30 """
31 Create string node class
32 """
33 class node_class(cls):
34 """Node class created based on the input class"""
35 def __init__(self, x, start_mark, end_mark):
36 try:
37 cls.__init__(self, x)
38 except TypeError:
39 cls.__init__(self)
40 self.start_mark = start_mark
41 self.end_mark = end_mark
42
43 # pylint: disable=bad-classmethod-argument, unused-argument
44 def __new__(self, x, start_mark, end_mark):
45 if sys.version_info >= (3, 0):
46 return cls.__new__(self, x)
47
48 if isinstance(x, six.string_types):
49 return cls.__new__(self, x.encode('ascii', 'ignore'))
50
51 return cls.__new__(self, x)
52
53 def __getattr__(self, name):
54 raise TemplateAttributeError('%s.%s is invalid' % (self.__class__.__name__, name))
55
56 def __deepcopy__(self, memo):
57 result = str_node(self, self.start_mark, self.end_mark)
58 memo[id(self)] = result
59 return result
60
61 def __copy__(self):
62 return self
63
64 node_class.__name__ = '%s_node' % cls.__name__
65 return node_class
66
67
68 def create_dict_node_class(cls):
69 """
70 Create dynamic node class
71 """
72 class node_class(cls):
73 """Node class created based on the input class"""
74 def __init__(self, x, start_mark, end_mark):
75 try:
76 cls.__init__(self, x)
77 except TypeError:
78 cls.__init__(self)
79 self.start_mark = start_mark
80 self.end_mark = end_mark
81 self.condition_functions = ['Fn::If']
82
83 def __deepcopy__(self, memo):
84 cls = self.__class__
85 result = cls.__new__(cls, self.start_mark, self.end_mark)
86 memo[id(self)] = result
87 for k, v in self.items():
88 result[deepcopy(k)] = deepcopy(v, memo)
89
90 return result
91
92 def __copy__(self):
93 return self
94
95 def get_safe(self, key, default=None, path=None, type_t=()):
96 """
97 Get values in format
98 """
99 path = path or []
100 value = self.get(key, default)
101 if not isinstance(value, (dict)):
102 if isinstance(value, type_t) or not type_t:
103 return [(value, (path[:] + [key]))]
104
105 results = []
106 for sub_v, sub_path in value.items_safe(path):
107 if isinstance(sub_v, type_t) or not type_t:
108 results.append((sub_v, sub_path))
109
110 return results
111
112 def items_safe(self, path=None, type_t=()):
113 """Get items while handling IFs"""
114 path = path or []
115 if len(self) == 1:
116 for k, v in self.items():
117 if k == 'Fn::If':
118 if isinstance(v, list):
119 if len(v) == 3:
120 for i, if_v in enumerate(v[1:]):
121 if isinstance(if_v, dict):
122 # yield from if_v.items_safe(path[:] + [k, i - 1])
123 # Python 2.7 support
124 for items, p in if_v.items_safe(path[:] + [k, i + 1]):
125 if isinstance(items, type_t) or not type_t:
126 yield items, p
127 elif isinstance(if_v, list):
128 if isinstance(if_v, type_t) or not type_t:
129 yield if_v, path[:] + [k, i + 1]
130 else:
131 if isinstance(if_v, type_t) or not type_t:
132 yield if_v, path[:] + [k, i + 1]
133 elif k != 'Ref' and v != 'AWS::NoValue':
134 if isinstance(self, type_t) or not type_t:
135 yield self, path[:]
136 else:
137 if isinstance(self, type_t) or not type_t:
138 yield self, path[:]
139
140 def __getattr__(self, name):
141 raise TemplateAttributeError('%s.%s is invalid' % (self.__class__.__name__, name))
142
143 node_class.__name__ = '%s_node' % cls.__name__
144 return node_class
145
146
147 def create_dict_list_class(cls):
148 """
149 Create dynamic list class
150 """
151 class node_class(cls):
152 """Node class created based on the input class"""
153 def __init__(self, x, start_mark, end_mark):
154 try:
155 cls.__init__(self, x)
156 except TypeError:
157 cls.__init__(self)
158 self.start_mark = start_mark
159 self.end_mark = end_mark
160 self.condition_functions = ['Fn::If']
161
162 def __deepcopy__(self, memo):
163 cls = self.__class__
164 result = cls.__new__(cls, self.start_mark, self.end_mark)
165 memo[id(self)] = result
166 for _, v in enumerate(self):
167 result.append(deepcopy(v, memo))
168
169 return result
170
171 def __copy__(self):
172 return self
173
174 def items_safe(self, path=None, type_t=()):
175 """Get items while handling IFs"""
176 path = path or []
177 for i, v in enumerate(self):
178 if isinstance(v, dict):
179 for items, p in v.items_safe(path[:] + [i]):
180 if isinstance(items, type_t) or not type_t:
181 yield items, p
182 else:
183 if isinstance(v, type_t) or not type_t:
184 yield v, path[:] + [i]
185
186 def __getattr__(self, name):
187 raise TemplateAttributeError('%s.%s is invalid' % (self.__class__.__name__, name))
188
189 node_class.__name__ = '%s_node' % cls.__name__
190 return node_class
191
192
193 str_node = create_str_node_class(str)
194 dict_node = create_dict_node_class(dict)
195 list_node = create_dict_list_class(list)
196
[end of src/cfnlint/decode/node.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/decode/node.py b/src/cfnlint/decode/node.py
--- a/src/cfnlint/decode/node.py
+++ b/src/cfnlint/decode/node.py
@@ -81,8 +81,7 @@
self.condition_functions = ['Fn::If']
def __deepcopy__(self, memo):
- cls = self.__class__
- result = cls.__new__(cls, self.start_mark, self.end_mark)
+ result = dict_node(self, self.start_mark, self.end_mark)
memo[id(self)] = result
for k, v in self.items():
result[deepcopy(k)] = deepcopy(v, memo)
@@ -160,8 +159,7 @@
self.condition_functions = ['Fn::If']
def __deepcopy__(self, memo):
- cls = self.__class__
- result = cls.__new__(cls, self.start_mark, self.end_mark)
+ result = list_node([], self.start_mark, self.end_mark)
memo[id(self)] = result
for _, v in enumerate(self):
result.append(deepcopy(v, memo))
|
{"golden_diff": "diff --git a/src/cfnlint/decode/node.py b/src/cfnlint/decode/node.py\n--- a/src/cfnlint/decode/node.py\n+++ b/src/cfnlint/decode/node.py\n@@ -81,8 +81,7 @@\n self.condition_functions = ['Fn::If']\n \n def __deepcopy__(self, memo):\n- cls = self.__class__\n- result = cls.__new__(cls, self.start_mark, self.end_mark)\n+ result = dict_node(self, self.start_mark, self.end_mark)\n memo[id(self)] = result\n for k, v in self.items():\n result[deepcopy(k)] = deepcopy(v, memo)\n@@ -160,8 +159,7 @@\n self.condition_functions = ['Fn::If']\n \n def __deepcopy__(self, memo):\n- cls = self.__class__\n- result = cls.__new__(cls, self.start_mark, self.end_mark)\n+ result = list_node([], self.start_mark, self.end_mark)\n memo[id(self)] = result\n for _, v in enumerate(self):\n result.append(deepcopy(v, memo))\n", "issue": "AWS::Serverless::Api Cors config transform error\n#### cfn-lint version:\r\n``` console\r\n$ cfn-lint --version\r\ncfn-lint 0.8.3\r\n```\r\n\r\n#### Description of issue:\r\nWhen I have a `AWS::Serverless::Api` object with a [Cors Configuration](https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#cors-configuration) object in it I routinely get the following error when linting `template.yaml` \r\n\r\n`E0001 Error transforming template: 'dict_node' object has no attribute 'start_mark'`\r\n\r\nI also got the following stack trace when running \r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/cfn-lint\", line 11, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python2.7/site-packages/cfnlint/__main__.py\", line 36, in main\r\n args.regions, args.override_spec))\r\n File \"/usr/local/lib/python2.7/site-packages/cfnlint/core.py\", line 46, in run_cli\r\n return run_checks(filename, template, rules, regions)\r\n File \"/usr/local/lib/python2.7/site-packages/cfnlint/core.py\", line 316, in run_checks\r\n matches.extend(runner.transform())\r\n File \"/usr/local/lib/python2.7/site-packages/cfnlint/__init__.py\", line 894, in transform\r\n matches = transform.transform_template()\r\n File \"/usr/local/lib/python2.7/site-packages/cfnlint/transform.py\", line 115, in transform_template\r\n sam_translator.translate(sam_template=self._template, parameter_values={}))\r\n File \"/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/translator/translator.py\", line 71, in translate\r\n translated = macro.to_cloudformation(**kwargs)\r\n File \"/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/sam_resources.py\", line 501, in to_cloudformation\r\n rest_api, deployment, stage = api_generator.to_cloudformation()\r\n File \"/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/api/api_generator.py\", line 160, in to_cloudformation\r\n rest_api = self._construct_rest_api()\r\n File \"/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/api/api_generator.py\", line 69, in _construct_rest_api\r\n self._add_cors()\r\n File \"/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/model/api/api_generator.py\", line 212, in _add_cors\r\n self.definition_body = editor.swagger\r\n File \"/Users/eaddingtonwhite/Library/Python/2.7/lib/python/site-packages/samtranslator/swagger/swagger.py\", line 298, in swagger\r\n return copy.deepcopy(self._doc)\r\n File \"/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py\", line 174, in 
deepcopy\r\n y = copier(memo)\r\n File \"/usr/local/lib/python2.7/site-packages/cfnlint/decode/node.py\", line 75, in __deepcopy__\r\n result = cls.__new__(cls, self.start_mark, self.end_mark)\r\nAttributeError: 'dict_node' object has no attribute 'start_mark'\r\n```\r\nHere is simple example I made to reproduce\r\n```yaml\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nTransform: AWS::Serverless-2016-10-31\r\n\r\nResources:\r\n Function:\r\n Type: AWS::Serverless::Function\r\n Properties:\r\n CodeUri: hello_world/\r\n Handler: app.lambda_handler\r\n Runtime: nodejs8.10\r\n\r\n Api:\r\n Type: AWS::Serverless::Api\r\n Properties:\r\n Name: test\r\n StageName: test\r\n Cors:\r\n AllowHeaders: \"'my_custom_header,content-type'\"\r\n AllowOrigin: \"'*'\"\r\n MaxAge: \"\"\r\n DefinitionBody:\r\n swagger: \"2.0\"\r\n info:\r\n version: \"1.0\"\r\n title: \"test\"\r\n basePath: \"/\"\r\n schemes:\r\n - \"https\"\r\n paths:\r\n /:\r\n get:\r\n x-amazon-apigateway-integration:\r\n uri:\r\n Fn::Sub: \"arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${Function.Arn}/invocations\"\r\n passthroughBehavior: \"when_no_match\"\r\n httpMethod: \"POST\"\r\n type: \"aws_proxy\"\r\n```\r\n\r\nIf you comment out the following lines then issue goes away:\r\n```yaml\r\n Cors:\r\n AllowHeaders: \"'my_custom_header,content-type'\"\r\n AllowOrigin: \"'*'\"\r\n MaxAge: \"\"\r\n```\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport sys\nimport logging\nfrom copy import deepcopy\nimport six\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass TemplateAttributeError(AttributeError):\n \"\"\" Custom error to capture Attribute Errors in the Template \"\"\"\n\n\ndef create_str_node_class(cls):\n \"\"\"\n Create string node class\n \"\"\"\n class node_class(cls):\n \"\"\"Node class created based on the input class\"\"\"\n def __init__(self, x, start_mark, end_mark):\n try:\n cls.__init__(self, x)\n except TypeError:\n cls.__init__(self)\n self.start_mark = start_mark\n self.end_mark = end_mark\n\n # pylint: disable=bad-classmethod-argument, unused-argument\n def __new__(self, x, start_mark, end_mark):\n if sys.version_info >= (3, 0):\n return cls.__new__(self, x)\n\n if isinstance(x, six.string_types):\n return cls.__new__(self, x.encode('ascii', 'ignore'))\n\n return cls.__new__(self, x)\n\n def __getattr__(self, name):\n raise TemplateAttributeError('%s.%s is invalid' % (self.__class__.__name__, name))\n\n def __deepcopy__(self, memo):\n result = str_node(self, self.start_mark, self.end_mark)\n memo[id(self)] = result\n return result\n\n def __copy__(self):\n return self\n\n node_class.__name__ = '%s_node' % cls.__name__\n return node_class\n\n\ndef create_dict_node_class(cls):\n \"\"\"\n Create dynamic node class\n \"\"\"\n class node_class(cls):\n \"\"\"Node class created based on the input class\"\"\"\n def __init__(self, x, start_mark, end_mark):\n try:\n cls.__init__(self, x)\n except TypeError:\n cls.__init__(self)\n self.start_mark = start_mark\n self.end_mark = end_mark\n self.condition_functions = ['Fn::If']\n\n def __deepcopy__(self, memo):\n cls = self.__class__\n result = cls.__new__(cls, self.start_mark, self.end_mark)\n memo[id(self)] = result\n for k, v in self.items():\n result[deepcopy(k)] = deepcopy(v, memo)\n\n return result\n\n def __copy__(self):\n return self\n\n def get_safe(self, key, default=None, path=None, type_t=()):\n \"\"\"\n Get values in format\n \"\"\"\n path = path or []\n value = self.get(key, default)\n if not isinstance(value, (dict)):\n if isinstance(value, type_t) or not type_t:\n return [(value, (path[:] + [key]))]\n\n results = []\n for sub_v, sub_path in value.items_safe(path):\n if isinstance(sub_v, type_t) or not type_t:\n results.append((sub_v, sub_path))\n\n return results\n\n def items_safe(self, path=None, type_t=()):\n \"\"\"Get items while handling IFs\"\"\"\n path = path or []\n if len(self) == 1:\n for k, v in self.items():\n if k == 'Fn::If':\n if isinstance(v, list):\n if len(v) == 3:\n for i, if_v in enumerate(v[1:]):\n if isinstance(if_v, dict):\n # yield from if_v.items_safe(path[:] + [k, i - 1])\n # Python 2.7 support\n for items, p in if_v.items_safe(path[:] + [k, i + 1]):\n if isinstance(items, type_t) or not type_t:\n yield items, p\n elif isinstance(if_v, list):\n if isinstance(if_v, type_t) or not type_t:\n yield if_v, path[:] + [k, i + 1]\n else:\n if isinstance(if_v, type_t) or not type_t:\n yield if_v, path[:] + [k, i + 1]\n elif k != 'Ref' and v != 'AWS::NoValue':\n if isinstance(self, type_t) or not type_t:\n yield self, path[:]\n else:\n if isinstance(self, type_t) or not type_t:\n yield self, path[:]\n\n def __getattr__(self, name):\n raise TemplateAttributeError('%s.%s is 
invalid' % (self.__class__.__name__, name))\n\n node_class.__name__ = '%s_node' % cls.__name__\n return node_class\n\n\ndef create_dict_list_class(cls):\n \"\"\"\n Create dynamic list class\n \"\"\"\n class node_class(cls):\n \"\"\"Node class created based on the input class\"\"\"\n def __init__(self, x, start_mark, end_mark):\n try:\n cls.__init__(self, x)\n except TypeError:\n cls.__init__(self)\n self.start_mark = start_mark\n self.end_mark = end_mark\n self.condition_functions = ['Fn::If']\n\n def __deepcopy__(self, memo):\n cls = self.__class__\n result = cls.__new__(cls, self.start_mark, self.end_mark)\n memo[id(self)] = result\n for _, v in enumerate(self):\n result.append(deepcopy(v, memo))\n\n return result\n\n def __copy__(self):\n return self\n\n def items_safe(self, path=None, type_t=()):\n \"\"\"Get items while handling IFs\"\"\"\n path = path or []\n for i, v in enumerate(self):\n if isinstance(v, dict):\n for items, p in v.items_safe(path[:] + [i]):\n if isinstance(items, type_t) or not type_t:\n yield items, p\n else:\n if isinstance(v, type_t) or not type_t:\n yield v, path[:] + [i]\n\n def __getattr__(self, name):\n raise TemplateAttributeError('%s.%s is invalid' % (self.__class__.__name__, name))\n\n node_class.__name__ = '%s_node' % cls.__name__\n return node_class\n\n\nstr_node = create_str_node_class(str)\ndict_node = create_dict_node_class(dict)\nlist_node = create_dict_list_class(list)\n", "path": "src/cfnlint/decode/node.py"}]}
| 3,674 | 251 |
gh_patches_debug_507
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-1794
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: client doesn't retry "Job exceeded rate limits" for DDL query jobs that exceed quota for table update operations
In https://github.com/googleapis/python-bigquery-sqlalchemy/pull/1009#discussion_r1457644849 it seems that the queries in https://btx-internal.corp.google.com/invocations/ffafb866-6bc0-423f-a86b-df69fb270d57/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery-sqlalchemy%2Fpresubmit%2Fprerelease-deps;config=default/log that fail with "rate limits exceeded" errors are not retried.
#### Environment details
- OS type and version:
- Python version: `python --version`
- pip version: `pip --version`
- `google-cloud-bigquery` version: `pip show google-cloud-bigquery`
#### Steps to reproduce
Run a DDL query more than 5 times in 10 seconds, violating the limit of five table metadata update operations per 10 seconds per table (https://cloud.google.com/bigquery/quotas#standard_tables).
#### Code example
```python
import google.cloud.bigquery
bqclient = google.cloud.bigquery.Client()
sql = "ALTER TABLE `swast-scratch.my_dataset.my_table` ADD COLUMN IF NOT EXISTS my_string_col STRING"
for _ in range(100):
bqclient.query_and_wait(sql)
```
#### Stack trace
```
BadRequest Traceback (most recent call last)
Input In [4], in <cell line: 1>()
1 for _ in range(100):
----> 2 bqclient.query_and_wait(sql)
File ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/client.py:3503, in Client.query_and_wait(self, query, job_config, location, project, api_timeout, wait_timeout, retry, job_retry, page_size, max_results)
3497 _verify_job_config_type(job_config, QueryJobConfig)
3499 job_config = _job_helpers.job_config_with_defaults(
3500 job_config, self._default_query_job_config
3501 )
-> 3503 return _job_helpers.query_and_wait(
3504 self,
3505 query,
3506 job_config=job_config,
3507 location=location,
3508 project=project,
3509 api_timeout=api_timeout,
3510 wait_timeout=wait_timeout,
3511 retry=retry,
3512 job_retry=job_retry,
3513 page_size=page_size,
3514 max_results=max_results,
3515 )
File ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/_job_helpers.py:498, in query_and_wait(client, query, job_config, location, project, api_timeout, wait_timeout, retry, job_retry, page_size, max_results)
481 return table.RowIterator(
482 client=client,
483 api_request=functools.partial(client._call_api, retry, timeout=api_timeout),
(...)
494 num_dml_affected_rows=query_results.num_dml_affected_rows,
495 )
497 if job_retry is not None:
--> 498 return job_retry(do_query)()
499 else:
500 return do_query()
File /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:349, in Retry.__call__.<locals>.retry_wrapped_func(*args, **kwargs)
345 target = functools.partial(func, *args, **kwargs)
346 sleep_generator = exponential_sleep_generator(
347 self._initial, self._maximum, multiplier=self._multiplier
348 )
--> 349 return retry_target(
350 target,
351 self._predicate,
352 sleep_generator,
353 self._timeout,
354 on_error=on_error,
355 )
File /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:191, in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs)
189 for sleep in sleep_generator:
190 try:
--> 191 return target()
193 # pylint: disable=broad-except
194 # This function explicitly must deal with broad exceptions.
195 except Exception as exc:
File ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/_job_helpers.py:439, in query_and_wait.<locals>.do_query()
437 # For easier testing, handle the retries ourselves.
438 if retry is not None:
--> 439 response = retry(client._call_api)(
440 retry=None, # We're calling the retry decorator ourselves.
441 span_name="BigQuery.query",
442 span_attributes=span_attributes,
443 method="POST",
444 path=path,
445 data=request_body,
446 timeout=api_timeout,
447 )
448 else:
449 response = client._call_api(
450 retry=None,
451 span_name="BigQuery.query",
(...)
456 timeout=api_timeout,
457 )
File /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:349, in Retry.__call__.<locals>.retry_wrapped_func(*args, **kwargs)
345 target = functools.partial(func, *args, **kwargs)
346 sleep_generator = exponential_sleep_generator(
347 self._initial, self._maximum, multiplier=self._multiplier
348 )
--> 349 return retry_target(
350 target,
351 self._predicate,
352 sleep_generator,
353 self._timeout,
354 on_error=on_error,
355 )
File /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:191, in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs)
189 for sleep in sleep_generator:
190 try:
--> 191 return target()
193 # pylint: disable=broad-except
194 # This function explicitly must deal with broad exceptions.
195 except Exception as exc:
File ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/client.py:827, in Client._call_api(self, retry, span_name, span_attributes, job_ref, headers, **kwargs)
823 if span_name is not None:
824 with create_span(
825 name=span_name, attributes=span_attributes, client=self, job_ref=job_ref
826 ):
--> 827 return call()
829 return call()
File /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/cloud/_http/__init__.py:494, in JSONConnection.api_request(self, method, path, query_params, data, content_type, headers, api_base_url, api_version, expect_json, _target_object, timeout, extra_api_info)
482 response = self._make_request(
483 method=method,
484 url=url,
(...)
490 extra_api_info=extra_api_info,
491 )
493 if not 200 <= response.status_code < 300:
--> 494 raise exceptions.from_http_response(response)
496 if expect_json and response.content:
497 return response.json()
BadRequest: 400 POST https://bigquery.googleapis.com/bigquery/v2/projects/swast-scratch/queries?prettyPrint=false: Job exceeded rate limits: Your table exceeded quota for table update operations. For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas
In [5]: import sys
In [6]: exc = sys.last_value
In [7]: exc
Out[7]: google.api_core.exceptions.BadRequest('POST https://bigquery.googleapis.com/bigquery/v2/projects/swast-scratch/queries?prettyPrint=false: Job exceeded rate limits: Your table exceeded quota for table update operations. For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas')
In [8]: exc.reason
In [9]: exc.errors
Out[9]:
[{'message': 'Job exceeded rate limits: Your table exceeded quota for table update operations. For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas',
'domain': 'global',
'reason': 'jobRateLimitExceeded'}]
In [10]: exc.errors[0]["reason"]
Out[10]: 'jobRateLimitExceeded'
```
</issue>
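A caller-side workaround is possible while the default predicate does not cover this reason: pass a custom `job_retry` whose predicate also accepts `jobRateLimitExceeded`. The sketch below is illustrative only — the table name is a placeholder, the 600-second deadline is arbitrary, and `_should_retry_job` is a hypothetical helper modeled on the `_job_should_retry` predicate in the `retry.py` listing that follows.

```python
from google.api_core import retry as api_retry
from google.cloud import bigquery

# Reasons the default job retry already accepts, plus the
# "jobRateLimitExceeded" reason reported in the traceback above.
_RETRYABLE_JOB_REASONS = ("rateLimitExceeded", "backendError", "jobRateLimitExceeded")


def _should_retry_job(exc):
    # Same shape as _job_should_retry in google/cloud/bigquery/retry.py.
    errors = getattr(exc, "errors", None) or []
    return bool(errors) and errors[0].get("reason") in _RETRYABLE_JOB_REASONS


custom_job_retry = api_retry.Retry(predicate=_should_retry_job, deadline=600)

bqclient = bigquery.Client()
sql = (
    "ALTER TABLE `my_project.my_dataset.my_table` "
    "ADD COLUMN IF NOT EXISTS my_string_col STRING"
)
bqclient.query_and_wait(sql, job_retry=custom_job_retry)
```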
<code>
[start of google/cloud/bigquery/retry.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from google.api_core import exceptions
16 from google.api_core import retry
17 from google.auth import exceptions as auth_exceptions # type: ignore
18 import requests.exceptions
19
20
21 _RETRYABLE_REASONS = frozenset(
22 ["rateLimitExceeded", "backendError", "internalError", "badGateway"]
23 )
24
25 _UNSTRUCTURED_RETRYABLE_TYPES = (
26 ConnectionError,
27 exceptions.TooManyRequests,
28 exceptions.InternalServerError,
29 exceptions.BadGateway,
30 exceptions.ServiceUnavailable,
31 requests.exceptions.ChunkedEncodingError,
32 requests.exceptions.ConnectionError,
33 requests.exceptions.Timeout,
34 auth_exceptions.TransportError,
35 )
36
37 _DEFAULT_RETRY_DEADLINE = 10.0 * 60.0 # 10 minutes
38
 39 # Allow for a few retries after the API request times out. This is relevant for
40 # rateLimitExceeded errors, which can be raised either by the Google load
41 # balancer or the BigQuery job server.
42 _DEFAULT_JOB_DEADLINE = 3.0 * _DEFAULT_RETRY_DEADLINE
43
44
45 def _should_retry(exc):
46 """Predicate for determining when to retry.
47
48 We retry if and only if the 'reason' is 'backendError'
49 or 'rateLimitExceeded'.
50 """
51 if not hasattr(exc, "errors") or len(exc.errors) == 0:
52 # Check for unstructured error returns, e.g. from GFE
53 return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)
54
55 reason = exc.errors[0]["reason"]
56 return reason in _RETRYABLE_REASONS
57
58
59 DEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=_DEFAULT_RETRY_DEADLINE)
60 """The default retry object.
61
62 Any method with a ``retry`` parameter will be retried automatically,
63 with reasonable defaults. To disable retry, pass ``retry=None``.
64 To modify the default retry behavior, call a ``with_XXX`` method
65 on ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,
66 pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.
67 """
68
69 DEFAULT_TIMEOUT = None
70 """The default API timeout.
71
72 This is the time to wait per request. To adjust the total wait time, set a
73 deadline on the retry object.
74 """
75
76 job_retry_reasons = "rateLimitExceeded", "backendError"
77
78
79 def _job_should_retry(exc):
80 if not hasattr(exc, "errors") or len(exc.errors) == 0:
81 return False
82
83 reason = exc.errors[0]["reason"]
84 return reason in job_retry_reasons
85
86
87 DEFAULT_JOB_RETRY = retry.Retry(
88 predicate=_job_should_retry, deadline=_DEFAULT_JOB_DEADLINE
89 )
90 """
91 The default job retry object.
92 """
93
[end of google/cloud/bigquery/retry.py]
</code>
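To see why the predicate above rejects this error, one can rebuild the exception from the traceback and feed it to `_job_should_retry`; `jobRateLimitExceeded` is not in `job_retry_reasons`, so the call returns `False` and the job is never retried. The snippet is a sketch and assumes `google-cloud-bigquery` and `google-api-core` are importable.

```python
from google.api_core import exceptions
from google.cloud.bigquery.retry import _job_should_retry

# Rebuild the structured error reported in the traceback above.
exc = exceptions.BadRequest(
    "Job exceeded rate limits: Your table exceeded quota for table update operations.",
    errors=[
        {
            "message": "Job exceeded rate limits: ...",
            "domain": "global",
            "reason": "jobRateLimitExceeded",
        }
    ],
)

# "jobRateLimitExceeded" is not in job_retry_reasons, so this prints False.
print(_job_should_retry(exc))
```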
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py
--- a/google/cloud/bigquery/retry.py
+++ b/google/cloud/bigquery/retry.py
@@ -73,7 +73,7 @@
deadline on the retry object.
"""
-job_retry_reasons = "rateLimitExceeded", "backendError"
+job_retry_reasons = "rateLimitExceeded", "backendError", "jobRateLimitExceeded"
def _job_should_retry(exc):
|
{"golden_diff": "diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py\n--- a/google/cloud/bigquery/retry.py\n+++ b/google/cloud/bigquery/retry.py\n@@ -73,7 +73,7 @@\n deadline on the retry object.\n \"\"\"\n \n-job_retry_reasons = \"rateLimitExceeded\", \"backendError\"\n+job_retry_reasons = \"rateLimitExceeded\", \"backendError\", \"jobRateLimitExceeded\"\n \n \n def _job_should_retry(exc):\n", "issue": "bug: client doesn't retry \"Job exceeded rate limits\" for DDL query jobs that exceed quota for table update operations \nIn https://github.com/googleapis/python-bigquery-sqlalchemy/pull/1009#discussion_r1457644849 it seems that the query in https://btx-internal.corp.google.com/invocations/ffafb866-6bc0-423f-a86b-df69fb270d57/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery-sqlalchemy%2Fpresubmit%2Fprerelease-deps;config=default/log with rate limits exceeded errors are not retried.\r\n\r\n#### Environment details\r\n\r\n - OS type and version:\r\n - Python version: `python --version`\r\n - pip version: `pip --version`\r\n - `google-cloud-bigquery` version: `pip show google-cloud-bigquery`\r\n\r\n#### Steps to reproduce\r\n\r\nRun a DDL query more than 5 times in 10 seconds, violating the five table metadata update operations per 10 seconds per table limit (https://cloud.google.com/bigquery/quotas#standard_tables).\r\n\r\n#### Code example\r\n\r\n```python\r\nimport google.cloud.bigquery\r\nbqclient = google.cloud.bigquery.Client()\r\nsql = \"ALTER TABLE `swast-scratch.my_dataset.my_table` ADD COLUMN IF NOT EXISTS my_string_col STRING\"\r\nfor _ in range(100):\r\n bqclient.query_and_wait(sql)\r\n```\r\n\r\n#### Stack trace\r\n\r\n```\r\nBadRequest Traceback (most recent call last)\r\nInput In [4], in <cell line: 1>()\r\n 1 for _ in range(100):\r\n----> 2 bqclient.query_and_wait(sql)\r\n\r\nFile ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/client.py:3503, in Client.query_and_wait(self, query, job_config, location, project, api_timeout, wait_timeout, retry, job_retry, page_size, max_results)\r\n 3497 _verify_job_config_type(job_config, QueryJobConfig)\r\n 3499 job_config = _job_helpers.job_config_with_defaults(\r\n 3500 job_config, self._default_query_job_config\r\n 3501 )\r\n-> 3503 return _job_helpers.query_and_wait(\r\n 3504 self,\r\n 3505 query,\r\n 3506 job_config=job_config,\r\n 3507 location=location,\r\n 3508 project=project,\r\n 3509 api_timeout=api_timeout,\r\n 3510 wait_timeout=wait_timeout,\r\n 3511 retry=retry,\r\n 3512 job_retry=job_retry,\r\n 3513 page_size=page_size,\r\n 3514 max_results=max_results,\r\n 3515 )\r\n\r\nFile ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/_job_helpers.py:498, in query_and_wait(client, query, job_config, location, project, api_timeout, wait_timeout, retry, job_retry, page_size, max_results)\r\n 481 return table.RowIterator(\r\n 482 client=client,\r\n 483 api_request=functools.partial(client._call_api, retry, timeout=api_timeout),\r\n (...)\r\n 494 num_dml_affected_rows=query_results.num_dml_affected_rows,\r\n 495 )\r\n 497 if job_retry is not None:\r\n--> 498 return job_retry(do_query)()\r\n 499 else:\r\n 500 return do_query()\r\n\r\nFile /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:349, in Retry.__call__.<locals>.retry_wrapped_func(*args, **kwargs)\r\n 345 target = functools.partial(func, *args, **kwargs)\r\n 346 sleep_generator = exponential_sleep_generator(\r\n 347 self._initial, self._maximum, 
multiplier=self._multiplier\r\n 348 )\r\n--> 349 return retry_target(\r\n 350 target,\r\n 351 self._predicate,\r\n 352 sleep_generator,\r\n 353 self._timeout,\r\n 354 on_error=on_error,\r\n 355 )\r\n\r\nFile /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:191, in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs)\r\n 189 for sleep in sleep_generator:\r\n 190 try:\r\n--> 191 return target()\r\n 193 # pylint: disable=broad-except\r\n 194 # This function explicitly must deal with broad exceptions.\r\n 195 except Exception as exc:\r\n\r\nFile ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/_job_helpers.py:439, in query_and_wait.<locals>.do_query()\r\n 437 # For easier testing, handle the retries ourselves.\r\n 438 if retry is not None:\r\n--> 439 response = retry(client._call_api)(\r\n 440 retry=None, # We're calling the retry decorator ourselves.\r\n 441 span_name=\"BigQuery.query\",\r\n 442 span_attributes=span_attributes,\r\n 443 method=\"POST\",\r\n 444 path=path,\r\n 445 data=request_body,\r\n 446 timeout=api_timeout,\r\n 447 )\r\n 448 else:\r\n 449 response = client._call_api(\r\n 450 retry=None,\r\n 451 span_name=\"BigQuery.query\",\r\n (...)\r\n 456 timeout=api_timeout,\r\n 457 )\r\n\r\nFile /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:349, in Retry.__call__.<locals>.retry_wrapped_func(*args, **kwargs)\r\n 345 target = functools.partial(func, *args, **kwargs)\r\n 346 sleep_generator = exponential_sleep_generator(\r\n 347 self._initial, self._maximum, multiplier=self._multiplier\r\n 348 )\r\n--> 349 return retry_target(\r\n 350 target,\r\n 351 self._predicate,\r\n 352 sleep_generator,\r\n 353 self._timeout,\r\n 354 on_error=on_error,\r\n 355 )\r\n\r\nFile /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/api_core/retry.py:191, in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs)\r\n 189 for sleep in sleep_generator:\r\n 190 try:\r\n--> 191 return target()\r\n 193 # pylint: disable=broad-except\r\n 194 # This function explicitly must deal with broad exceptions.\r\n 195 except Exception as exc:\r\n\r\nFile ~/src/github.com/googleapis/python-bigquery/google/cloud/bigquery/client.py:827, in Client._call_api(self, retry, span_name, span_attributes, job_ref, headers, **kwargs)\r\n 823 if span_name is not None:\r\n 824 with create_span(\r\n 825 name=span_name, attributes=span_attributes, client=self, job_ref=job_ref\r\n 826 ):\r\n--> 827 return call()\r\n 829 return call()\r\n\r\nFile /opt/miniconda3/envs/dev-3.10/lib/python3.10/site-packages/google/cloud/_http/__init__.py:494, in JSONConnection.api_request(self, method, path, query_params, data, content_type, headers, api_base_url, api_version, expect_json, _target_object, timeout, extra_api_info)\r\n 482 response = self._make_request(\r\n 483 method=method,\r\n 484 url=url,\r\n (...)\r\n 490 extra_api_info=extra_api_info,\r\n 491 )\r\n 493 if not 200 <= response.status_code < 300:\r\n--> 494 raise exceptions.from_http_response(response)\r\n 496 if expect_json and response.content:\r\n 497 return response.json()\r\n\r\nBadRequest: 400 POST https://bigquery.googleapis.com/bigquery/v2/projects/swast-scratch/queries?prettyPrint=false: Job exceeded rate limits: Your table exceeded quota for table update operations. 
For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas\r\n\r\nIn [5]: import sys\r\n\r\nIn [6]: exc = sys.last_value\r\n\r\nIn [7]: exc\r\nOut[7]: google.api_core.exceptions.BadRequest('POST https://bigquery.googleapis.com/bigquery/v2/projects/swast-scratch/queries?prettyPrint=false: Job exceeded rate limits: Your table exceeded quota for table update operations. For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas')\r\n\r\nIn [8]: exc.reason\r\n\r\nIn [9]: exc.errors\r\nOut[9]: \r\n[{'message': 'Job exceeded rate limits: Your table exceeded quota for table update operations. For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas',\r\n 'domain': 'global',\r\n 'reason': 'jobRateLimitExceeded'}]\r\n\r\nIn [10]: exc.errors[0][\"reason\"]\r\nOut[10]: 'jobRateLimitExceeded'\r\n```\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nfrom google.auth import exceptions as auth_exceptions # type: ignore\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n exceptions.ServiceUnavailable,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.ConnectionError,\n requests.exceptions.Timeout,\n auth_exceptions.TransportError,\n)\n\n_DEFAULT_RETRY_DEADLINE = 10.0 * 60.0 # 10 minutes\n\n# Allow for a few retries after the API request times out. This relevant for\n# rateLimitExceeded errors, which can be raised either by the Google load\n# balancer or the BigQuery job server.\n_DEFAULT_JOB_DEADLINE = 3.0 * _DEFAULT_RETRY_DEADLINE\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=_DEFAULT_RETRY_DEADLINE)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n\nDEFAULT_TIMEOUT = None\n\"\"\"The default API timeout.\n\nThis is the time to wait per request. 
To adjust the total wait time, set a\ndeadline on the retry object.\n\"\"\"\n\njob_retry_reasons = \"rateLimitExceeded\", \"backendError\"\n\n\ndef _job_should_retry(exc):\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n return False\n\n reason = exc.errors[0][\"reason\"]\n return reason in job_retry_reasons\n\n\nDEFAULT_JOB_RETRY = retry.Retry(\n predicate=_job_should_retry, deadline=_DEFAULT_JOB_DEADLINE\n)\n\"\"\"\nThe default job retry object.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py"}]}
| 3,622 | 106 |
| gh_patches_debug_23392 | rasdani/github-patches | git_diff | getnikola__nikola-2082 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
listing directive explodes badly if used wrong
Example:
```
.. listing:: hello.py
```
Which is a way too common first attempt to use it, crashes like this:
```
TaskError - taskid:render_posts:cache/posts/foo.html
PythonAction Error
Traceback (most recent call last):
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/doit/action.py", line 383, in execute
returned_value = self.py_callable(*self.args, **kwargs)
File "/home/ralsina/Desktop/proyectos/nikola/master/nikola/post.py", line 485, in compile
self.is_two_file),
File "/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/__init__.py", line 100, in compile_html
output, error_level, deps = self.compile_html_string(data, source, is_two_file)
File "/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/__init__.py", line 86, in compile_html_string
}, logger=self.logger, source_path=source_path, l_add_ln=add_ln, transforms=self.site.rst_transforms)
File "/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/__init__.py", line 276, in rst2html
pub.publish(enable_exit_status=enable_exit_status)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/core.py", line 217, in publish
self.settings)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/readers/__init__.py", line 72, in read
self.parse()
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/readers/__init__.py", line 78, in parse
self.parser.parse(self.input, document)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/__init__.py", line 172, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py", line 170, in run
input_source=document['source'])
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/statemachine.py", line 239, in run
context, state, transitions)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/statemachine.py", line 460, in check_line
return method(match, context, next_state)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py", line 2299, in explicit_markup
nodelist, blank_finish = self.explicit_construct(match)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py", line 2311, in explicit_construct
return method(self, expmatch)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py", line 2054, in directive
directive_class, match, type_name, option_presets)
File "/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py", line 2103, in run_directive
result = directive_instance.run()
File "/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/listing.py", line 174, in run
lang = self.arguments.pop(0)
IndexError: pop from empty list
```
</issue>
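The crash happens because `Listing.run` unconditionally pops a second argument (the language) that the one-argument form `.. listing:: hello.py` never supplies. A defensive version of that block — a sketch in the spirit of the patch further below, not the project's exact code — would catch the missing argument and fall back to a literal include:

```python
def run(self):
    """Run listing directive."""
    _fname = self.arguments.pop(0)
    fname = _fname.replace('/', os.sep)
    try:
        lang = self.arguments.pop(0)
        self.options['code'] = lang
    except IndexError:
        # No language given (e.g. ``.. listing:: hello.py``):
        # include the file literally instead of raising IndexError.
        self.options['literal'] = True
    # ... remainder of the original run() unchanged ...
```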
<code>
[start of nikola/plugins/compile/rest/listing.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27
28 """Define and register a listing directive using the existing CodeBlock."""
29
30
31 from __future__ import unicode_literals
32 import io
33 import os
34 import uuid
35 try:
36 from urlparse import urlunsplit
37 except ImportError:
38 from urllib.parse import urlunsplit # NOQA
39
40 import docutils.parsers.rst.directives.body
41 import docutils.parsers.rst.directives.misc
42 from docutils import core
43 from docutils import nodes
44 from docutils.parsers.rst import Directive, directives
45 from docutils.parsers.rst.roles import set_classes
46 from docutils.parsers.rst.directives.misc import Include
47
48 from pygments.lexers import get_lexer_by_name
49 import pygments
50 import pygments.util
51
52 from nikola import utils
53 from nikola.plugin_categories import RestExtension
54
55
56 # A sanitized version of docutils.parsers.rst.directives.body.CodeBlock.
57 class CodeBlock(Directive):
58
59 """Parse and mark up content of a code block."""
60
61 optional_arguments = 1
62 option_spec = {'class': directives.class_option,
63 'name': directives.unchanged,
64 'number-lines': directives.unchanged, # integer or None
65 'linenos': directives.unchanged,
66 'tab-width': directives.nonnegative_int}
67 has_content = True
68
69 def run(self):
70 """Run code block directive."""
71 self.assert_has_content()
72
73 if 'linenos' in self.options:
74 self.options['number-lines'] = self.options['linenos']
75 if 'tab-width' in self.options:
76 self.content = [x.replace('\t', ' ' * self.options['tab-width']) for x in self.content]
77
78 if self.arguments:
79 language = self.arguments[0]
80 else:
81 language = 'text'
82 set_classes(self.options)
83 classes = ['code']
84 if language:
85 classes.append(language)
86 if 'classes' in self.options:
87 classes.extend(self.options['classes'])
88
89 code = '\n'.join(self.content)
90
91 try:
92 lexer = get_lexer_by_name(language)
93 except pygments.util.ClassNotFound:
94 raise self.error('Cannot find pygments lexer for language "{0}"'.format(language))
95
96 if 'number-lines' in self.options:
97 linenos = 'table'
98 # optional argument `startline`, defaults to 1
99 try:
100 linenostart = int(self.options['number-lines'] or 1)
101 except ValueError:
102 raise self.error(':number-lines: with non-integer start value')
103 else:
104 linenos = False
105 linenostart = 1 # actually unused
106
107 if self.site.invariant: # for testing purposes
108 anchor_ref = 'rest_code_' + 'fixedvaluethatisnotauuid'
109 else:
110 anchor_ref = 'rest_code_' + uuid.uuid4().hex
111
112 formatter = utils.NikolaPygmentsHTML(anchor_ref=anchor_ref, classes=classes, linenos=linenos, linenostart=linenostart)
113 out = pygments.highlight(code, lexer, formatter)
114 node = nodes.raw('', out, format='html')
115
116 self.add_name(node)
117 # if called from "include", set the source
118 if 'source' in self.options:
119 node.attributes['source'] = self.options['source']
120
121 return [node]
122
123 # Monkey-patch: replace insane docutils CodeBlock with our implementation.
124 docutils.parsers.rst.directives.body.CodeBlock = CodeBlock
125 docutils.parsers.rst.directives.misc.CodeBlock = CodeBlock
126
127
128 class Plugin(RestExtension):
129
130 """Plugin for listing directive."""
131
132 name = "rest_listing"
133
134 def set_site(self, site):
135 """Set Nikola site."""
136 self.site = site
137 # Even though listings don't use CodeBlock anymore, I am
138 # leaving these to make the code directive work with
139 # docutils < 0.9
140 CodeBlock.site = site
141 directives.register_directive('code', CodeBlock)
142 directives.register_directive('code-block', CodeBlock)
143 directives.register_directive('sourcecode', CodeBlock)
144 directives.register_directive('listing', Listing)
145 Listing.folders = site.config['LISTINGS_FOLDERS']
146 return super(Plugin, self).set_site(site)
147
148
149 # Add sphinx compatibility option
150 listing_spec = Include.option_spec
151 listing_spec['linenos'] = directives.unchanged
152
153
154 class Listing(Include):
155
156 """Create a highlighted block of code from a file in listings/.
157
158 Usage:
159
160 .. listing:: nikola.py python
161 :number-lines:
162
163 """
164
165 has_content = False
166 required_arguments = 1
167 optional_arguments = 1
168 option_spec = listing_spec
169
170 def run(self):
171 """Run listing directive."""
172 _fname = self.arguments.pop(0)
173 fname = _fname.replace('/', os.sep)
174 lang = self.arguments.pop(0)
175 if len(self.folders) == 1:
176 listings_folder = next(iter(self.folders.keys()))
177 if fname.startswith(listings_folder):
178 fpath = os.path.join(fname) # new syntax: specify folder name
179 else:
180 fpath = os.path.join(listings_folder, fname) # old syntax: don't specify folder name
181 else:
182 fpath = os.path.join(fname) # must be new syntax: specify folder name
183 self.arguments.insert(0, fpath)
184 self.options['code'] = lang
185 if 'linenos' in self.options:
186 self.options['number-lines'] = self.options['linenos']
187 with io.open(fpath, 'r+', encoding='utf8') as fileobject:
188 self.content = fileobject.read().splitlines()
189 self.state.document.settings.record_dependencies.add(fpath)
190 target = urlunsplit(("link", 'listing', fpath.replace('\\', '/'), '', ''))
191 generated_nodes = (
192 [core.publish_doctree('`{0} <{1}>`_'.format(_fname, target))[0]])
193 generated_nodes += self.get_code_from_file(fileobject)
194 return generated_nodes
195
196 def get_code_from_file(self, data):
197 """Create CodeBlock nodes from file object content."""
198 return super(Listing, self).run()
199
200 def assert_has_content(self):
201 """Listing has no content, override check from superclass."""
202 pass
203
[end of nikola/plugins/compile/rest/listing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nikola/plugins/compile/rest/listing.py b/nikola/plugins/compile/rest/listing.py
--- a/nikola/plugins/compile/rest/listing.py
+++ b/nikola/plugins/compile/rest/listing.py
@@ -171,7 +171,12 @@
"""Run listing directive."""
_fname = self.arguments.pop(0)
fname = _fname.replace('/', os.sep)
- lang = self.arguments.pop(0)
+ try:
+ lang = self.arguments.pop(0)
+ self.options['code'] = lang
+ except IndexError:
+ self.options['literal'] = True
+
if len(self.folders) == 1:
listings_folder = next(iter(self.folders.keys()))
if fname.startswith(listings_folder):
@@ -181,7 +186,6 @@
else:
fpath = os.path.join(fname) # must be new syntax: specify folder name
self.arguments.insert(0, fpath)
- self.options['code'] = lang
if 'linenos' in self.options:
self.options['number-lines'] = self.options['linenos']
with io.open(fpath, 'r+', encoding='utf8') as fileobject:
|
{"golden_diff": "diff --git a/nikola/plugins/compile/rest/listing.py b/nikola/plugins/compile/rest/listing.py\n--- a/nikola/plugins/compile/rest/listing.py\n+++ b/nikola/plugins/compile/rest/listing.py\n@@ -171,7 +171,12 @@\n \"\"\"Run listing directive.\"\"\"\n _fname = self.arguments.pop(0)\n fname = _fname.replace('/', os.sep)\n- lang = self.arguments.pop(0)\n+ try:\n+ lang = self.arguments.pop(0)\n+ self.options['code'] = lang\n+ except IndexError:\n+ self.options['literal'] = True\n+\n if len(self.folders) == 1:\n listings_folder = next(iter(self.folders.keys()))\n if fname.startswith(listings_folder):\n@@ -181,7 +186,6 @@\n else:\n fpath = os.path.join(fname) # must be new syntax: specify folder name\n self.arguments.insert(0, fpath)\n- self.options['code'] = lang\n if 'linenos' in self.options:\n self.options['number-lines'] = self.options['linenos']\n with io.open(fpath, 'r+', encoding='utf8') as fileobject:\n", "issue": "listing directive explodes badly if used wrong\nExample: \n\n```\n.. listing:: hello.py\n```\n\nWhich is a way too common first attempt to use it, crashes like this:\n\n```\nTaskError - taskid:render_posts:cache/posts/foo.html\nPythonAction Error\nTraceback (most recent call last):\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/doit/action.py\", line 383, in execute\n returned_value = self.py_callable(*self.args, **kwargs)\n File \"/home/ralsina/Desktop/proyectos/nikola/master/nikola/post.py\", line 485, in compile\n self.is_two_file),\n File \"/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/__init__.py\", line 100, in compile_html\n output, error_level, deps = self.compile_html_string(data, source, is_two_file)\n File \"/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/__init__.py\", line 86, in compile_html_string\n }, logger=self.logger, source_path=source_path, l_add_ln=add_ln, transforms=self.site.rst_transforms)\n File \"/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/__init__.py\", line 276, in rst2html\n pub.publish(enable_exit_status=enable_exit_status)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/core.py\", line 217, in publish\n self.settings)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/readers/__init__.py\", line 72, in read\n self.parse()\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/readers/__init__.py\", line 78, in parse\n self.parser.parse(self.input, document)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/__init__.py\", line 172, in parse\n self.statemachine.run(inputlines, document, inliner=self.inliner)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py\", line 170, in run\n input_source=document['source'])\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/statemachine.py\", line 239, in run\n context, state, transitions)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/statemachine.py\", line 460, in check_line\n return method(match, context, next_state)\n File 
\"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py\", line 2299, in explicit_markup\n nodelist, blank_finish = self.explicit_construct(match)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py\", line 2311, in explicit_construct\n return method(self, expmatch)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py\", line 2054, in directive\n directive_class, match, type_name, option_presets)\n File \"/home/ralsina/.virtualenvs/nikola/local/lib/python2.7/site-packages/docutils-0.12-py2.7.egg/docutils/parsers/rst/states.py\", line 2103, in run_directive\n result = directive_instance.run()\n File \"/home/ralsina/Desktop/proyectos/nikola/master/nikola/plugins/compile/rest/listing.py\", line 174, in run\n lang = self.arguments.pop(0)\nIndexError: pop from empty list\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\n\"\"\"Define and register a listing directive using the existing CodeBlock.\"\"\"\n\n\nfrom __future__ import unicode_literals\nimport io\nimport os\nimport uuid\ntry:\n from urlparse import urlunsplit\nexcept ImportError:\n from urllib.parse import urlunsplit # NOQA\n\nimport docutils.parsers.rst.directives.body\nimport docutils.parsers.rst.directives.misc\nfrom docutils import core\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive, directives\nfrom docutils.parsers.rst.roles import set_classes\nfrom docutils.parsers.rst.directives.misc import Include\n\nfrom pygments.lexers import get_lexer_by_name\nimport pygments\nimport pygments.util\n\nfrom nikola import utils\nfrom nikola.plugin_categories import RestExtension\n\n\n# A sanitized version of docutils.parsers.rst.directives.body.CodeBlock.\nclass CodeBlock(Directive):\n\n \"\"\"Parse and mark up content of a code block.\"\"\"\n\n optional_arguments = 1\n option_spec = {'class': directives.class_option,\n 'name': directives.unchanged,\n 'number-lines': directives.unchanged, # integer or None\n 'linenos': directives.unchanged,\n 'tab-width': directives.nonnegative_int}\n has_content = True\n\n def run(self):\n \"\"\"Run code block directive.\"\"\"\n self.assert_has_content()\n\n if 'linenos' in self.options:\n self.options['number-lines'] = self.options['linenos']\n if 'tab-width' in self.options:\n self.content = [x.replace('\\t', ' ' * self.options['tab-width']) for x in self.content]\n\n if self.arguments:\n language = self.arguments[0]\n else:\n language = 'text'\n set_classes(self.options)\n classes = ['code']\n if language:\n classes.append(language)\n if 'classes' in self.options:\n classes.extend(self.options['classes'])\n\n code = '\\n'.join(self.content)\n\n try:\n lexer = get_lexer_by_name(language)\n except pygments.util.ClassNotFound:\n raise self.error('Cannot find pygments lexer for language \"{0}\"'.format(language))\n\n if 'number-lines' in self.options:\n linenos = 'table'\n # optional argument `startline`, defaults to 1\n try:\n linenostart = int(self.options['number-lines'] or 1)\n except ValueError:\n raise self.error(':number-lines: with non-integer start value')\n else:\n linenos = False\n linenostart = 1 # actually unused\n\n if self.site.invariant: # for testing purposes\n anchor_ref = 'rest_code_' + 'fixedvaluethatisnotauuid'\n else:\n anchor_ref = 'rest_code_' + uuid.uuid4().hex\n\n formatter = utils.NikolaPygmentsHTML(anchor_ref=anchor_ref, classes=classes, linenos=linenos, linenostart=linenostart)\n out = pygments.highlight(code, lexer, formatter)\n node = nodes.raw('', out, format='html')\n\n self.add_name(node)\n # if called from \"include\", set the source\n if 'source' in self.options:\n node.attributes['source'] = self.options['source']\n\n return [node]\n\n# Monkey-patch: replace insane docutils CodeBlock with our implementation.\ndocutils.parsers.rst.directives.body.CodeBlock = CodeBlock\ndocutils.parsers.rst.directives.misc.CodeBlock = CodeBlock\n\n\nclass Plugin(RestExtension):\n\n \"\"\"Plugin for listing directive.\"\"\"\n\n name = \"rest_listing\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.site = site\n # Even though listings don't use CodeBlock anymore, I am\n # leaving these to 
make the code directive work with\n # docutils < 0.9\n CodeBlock.site = site\n directives.register_directive('code', CodeBlock)\n directives.register_directive('code-block', CodeBlock)\n directives.register_directive('sourcecode', CodeBlock)\n directives.register_directive('listing', Listing)\n Listing.folders = site.config['LISTINGS_FOLDERS']\n return super(Plugin, self).set_site(site)\n\n\n# Add sphinx compatibility option\nlisting_spec = Include.option_spec\nlisting_spec['linenos'] = directives.unchanged\n\n\nclass Listing(Include):\n\n \"\"\"Create a highlighted block of code from a file in listings/.\n\n Usage:\n\n .. listing:: nikola.py python\n :number-lines:\n\n \"\"\"\n\n has_content = False\n required_arguments = 1\n optional_arguments = 1\n option_spec = listing_spec\n\n def run(self):\n \"\"\"Run listing directive.\"\"\"\n _fname = self.arguments.pop(0)\n fname = _fname.replace('/', os.sep)\n lang = self.arguments.pop(0)\n if len(self.folders) == 1:\n listings_folder = next(iter(self.folders.keys()))\n if fname.startswith(listings_folder):\n fpath = os.path.join(fname) # new syntax: specify folder name\n else:\n fpath = os.path.join(listings_folder, fname) # old syntax: don't specify folder name\n else:\n fpath = os.path.join(fname) # must be new syntax: specify folder name\n self.arguments.insert(0, fpath)\n self.options['code'] = lang\n if 'linenos' in self.options:\n self.options['number-lines'] = self.options['linenos']\n with io.open(fpath, 'r+', encoding='utf8') as fileobject:\n self.content = fileobject.read().splitlines()\n self.state.document.settings.record_dependencies.add(fpath)\n target = urlunsplit((\"link\", 'listing', fpath.replace('\\\\', '/'), '', ''))\n generated_nodes = (\n [core.publish_doctree('`{0} <{1}>`_'.format(_fname, target))[0]])\n generated_nodes += self.get_code_from_file(fileobject)\n return generated_nodes\n\n def get_code_from_file(self, data):\n \"\"\"Create CodeBlock nodes from file object content.\"\"\"\n return super(Listing, self).run()\n\n def assert_has_content(self):\n \"\"\"Listing has no content, override check from superclass.\"\"\"\n pass\n", "path": "nikola/plugins/compile/rest/listing.py"}]}
| 3,731 | 273 |
| gh_patches_debug_30324 | rasdani/github-patches | git_diff | ros__ros_comm-269 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
roslaunch does not accept a package name if a file with the same name exists in the current directory
Reproduce with:
```
roscreate-pkg roslaunch_test
cd roslaunch_test
mkdir bin
mkdir launch
touch bin/roslaunch_test
echo "<launch/>" > launch/example.launch
cd bin
roslaunch roslaunch_test example.launch
```
Error output:
```
Usage: roslaunch [options] [package] <filename> [arg_name:=value...]
roslaunch: error: The following input files do not exist: example.launch
```
Without the file in `bin/`, or with another working directory, roslaunch executes without errors (and exits immediately, as there are no nodes).
I am using roslaunch 1.9.47 installed from the binary package repository in ROS hydro.
</issue>
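The behaviour comes from `resolve_launch_arguments` in `roslaunch/rlutil.py` (listed below): it checks `os.path.isfile(args[0])` before trying to interpret the first argument as a package name, so a local file that happens to share the package's name wins and `example.launch` is then treated as a non-existent file. A sketch of the reversed resolution order — package lookup first, plain file as the fallback — is shown here; it is illustrative only and omits the error reporting of the real function:

```python
import os

import roslib.packages
import rospkg


def resolve_launch_arguments(args):
    resolved_args = None
    # Prefer "<package> <launchfile>" resolution when two arguments are given.
    if len(args) >= 2:
        try:
            resolved = roslib.packages.find_resource(args[0], args[1])
            if len(resolved) == 1:
                resolved_args = [resolved[0]] + args[2:]
        except rospkg.ResourceNotFound:
            pass
    # Only then fall back to treating the first argument as a file path.
    if resolved_args is None and os.path.isfile(args[0]):
        resolved_args = [args[0]] + args[1:]
    return resolved_args
```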
<code>
[start of tools/roslaunch/src/roslaunch/rlutil.py]
1 # Software License Agreement (BSD License)
2 #
3 # Copyright (c) 2009, Willow Garage, Inc.
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions
8 # are met:
9 #
10 # * Redistributions of source code must retain the above copyright
11 # notice, this list of conditions and the following disclaimer.
12 # * Redistributions in binary form must reproduce the above
13 # copyright notice, this list of conditions and the following
14 # disclaimer in the documentation and/or other materials provided
15 # with the distribution.
16 # * Neither the name of Willow Garage, Inc. nor the names of its
17 # contributors may be used to endorse or promote products derived
18 # from this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
21 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
22 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
23 # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
24 # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
25 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
26 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
27 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
29 # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
30 # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
31 # POSSIBILITY OF SUCH DAMAGE.
32
33 """
34 Uncategorized utility routines for roslaunch.
35
36 This API should not be considered stable.
37 """
38
39 from __future__ import print_function
40
41 import os
42 import sys
43 import time
44
45 import roslib.packages
46
47 import rosclean
48 import rospkg
49 import rosgraph
50
51 import roslaunch.core
52 import roslaunch.config
53 import roslaunch.depends
54 from rosmaster import DEFAULT_MASTER_PORT
55
56 def check_log_disk_usage():
57 """
58 Check size of log directory. If high, print warning to user
59 """
60 try:
61 d = rospkg.get_log_dir()
62 roslaunch.core.printlog("Checking log directory for disk usage. This may take awhile.\nPress Ctrl-C to interrupt")
63 disk_usage = rosclean.get_disk_usage(d)
64 # warn if over a gig
65 if disk_usage > 1073741824:
66 roslaunch.core.printerrlog("WARNING: disk usage in log directory [%s] is over 1GB.\nIt's recommended that you use the 'rosclean' command."%d)
67 else:
68 roslaunch.core.printlog("Done checking log file disk usage. Usage is <1GB.")
69 except:
70 pass
71
72 def resolve_launch_arguments(args):
73 """
74 Resolve command-line args to roslaunch filenames.
75
76 :returns: resolved filenames, ``[str]``
77 """
78
79 # strip remapping args for processing
80 args = rosgraph.myargv(args)
81
82 # user can either specify:
83 # - filename + launch args
84 # - package + relative-filename + launch args
85 if not args:
86 return args
87 resolved_args = None
88 top = args[0]
89 if os.path.isfile(top):
90 resolved_args = [top] + args[1:]
91 elif len(args) == 1:
92 raise roslaunch.core.RLException("[%s] does not exist. please specify a package and launch file"%(top))
93 else:
94 try:
95 resolved = roslib.packages.find_resource(top, args[1])
96 if len(resolved) == 1:
97 resolved = resolved[0]
98 elif len(resolved) > 1:
99 raise roslaunch.core.RLException("multiple files named [%s] in package [%s]:%s\nPlease specify full path instead" % (args[1], top, ''.join(['\n- %s' % r for r in resolved])))
100 except rospkg.ResourceNotFound as e:
101 raise roslaunch.core.RLException("[%s] is not a package or launch file name"%top)
102 if not resolved:
103 raise roslaunch.core.RLException("cannot locate [%s] in package [%s]"%(args[1], top))
104 else:
105 resolved_args = [resolved] + args[2:]
106 return resolved_args
107
108 def _wait_for_master():
109 """
110 Block until ROS Master is online
111
112 :raise: :exc:`RuntimeError` If unexpected error occurs
113 """
114 m = roslaunch.core.Master() # get a handle to the default master
115 is_running = m.is_running()
116 if not is_running:
117 roslaunch.core.printlog("roscore/master is not yet running, will wait for it to start")
118 while not is_running:
119 time.sleep(0.1)
120 is_running = m.is_running()
121 if is_running:
122 roslaunch.core.printlog("master has started, initiating launch")
123 else:
124 raise RuntimeError("unknown error waiting for master to start")
125
126 _terminal_name = None
127
128 def _set_terminal(s):
129 import platform
130 if platform.system() in ['FreeBSD', 'Linux', 'Darwin', 'Unix']:
131 try:
132 print('\033]2;%s\007'%(s))
133 except:
134 pass
135
136 def update_terminal_name(ros_master_uri):
137 """
138 append master URI to the terminal name
139 """
140 if _terminal_name:
141 _set_terminal(_terminal_name + ' ' + ros_master_uri)
142
143 def change_terminal_name(args, is_core):
144 """
145 use echo (where available) to change the name of the terminal window
146 """
147 global _terminal_name
148 _terminal_name = 'roscore' if is_core else ','.join(args)
149 _set_terminal(_terminal_name)
150
151 def get_or_generate_uuid(options_runid, options_wait_for_master):
152 """
153 :param options_runid: run_id value from command-line or ``None``, ``str``
154 :param options_wait_for_master: the wait_for_master command
155 option. If this is True, it means that we must retrieve the
156 value from the parameter server and need to avoid any race
157 conditions with the roscore being initialized. ``bool``
158 """
159
160 # Three possible sources of the run_id:
161 #
162 # - if we're a child process, we get it from options_runid
163 # - if there's already a roscore running, read from the param server
164 # - generate one if we're running the roscore
165 if options_runid:
166 return options_runid
167
168 # #773: Generate a run_id to use if we launch a master
169 # process. If a master is already running, we'll get the
170 # run_id from it instead
171 param_server = rosgraph.Master('/roslaunch')
172 val = None
173 while val is None:
174 try:
175 val = param_server.getParam('/run_id')
176 except:
177 if not options_wait_for_master:
178 val = roslaunch.core.generate_run_id()
179 return val
180
181 def check_roslaunch(f):
182 """
183 Check roslaunch file for errors, returning error message if check fails. This routine
184 is mainly to support rostest's roslaunch_check.
185
186 :param f: roslaunch file name, ``str``
187 :returns: error message or ``None``
188 """
189 try:
190 rl_config = roslaunch.config.load_config_default([f], DEFAULT_MASTER_PORT, verbose=False)
191 except roslaunch.core.RLException as e:
192 return str(e)
193
194 errors = []
195 # check for missing deps
196 base_pkg, file_deps, missing = roslaunch.depends.roslaunch_deps([f])
197 for pkg, miss in missing.iteritems():
198 if miss:
199 errors.append("Missing manifest dependencies: %s/manifest.xml: %s"%(pkg, ', '.join(miss)))
200
201 # load all node defs
202 nodes = []
203 for filename, rldeps in file_deps.iteritems():
204 nodes.extend(rldeps.nodes)
205
206 # check for missing packages
207 rospack = rospkg.RosPack()
208 for pkg, node_type in nodes:
209 try:
210 rospack.get_path(pkg)
211 except:
212 errors.append("cannot find package [%s] for node [%s]"%(pkg, node_type))
213
214 # check for missing nodes
215 for pkg, node_type in nodes:
216 try:
217 if not roslib.packages.find_node(pkg, node_type):
218 errors.append("cannot find node [%s] in package [%s]"%(node_type, pkg))
219 except Exception as e:
220 errors.append("unable to find node [%s/%s]: %s"%(pkg, node_type, str(e)))
221
222 # Check for configuration errors, #2889
223 for err in rl_config.config_errors:
224 errors.append('ROSLaunch config error: %s' % err)
225
226 if errors:
227 return '\n'.join(errors)
228
229 def print_file_list(roslaunch_files):
230 """
231 :param roslaunch_files: list of launch files to load, ``str``
232
233 :returns: list of files involved in processing roslaunch_files, including the files themselves.
234 """
235 from roslaunch.config import load_config_default, get_roscore_filename
236 import roslaunch.xmlloader
237 try:
238 loader = roslaunch.xmlloader.XmlLoader(resolve_anon=True)
239 config = load_config_default(roslaunch_files, None, loader=loader, verbose=False, assign_machines=False)
240 files = [os.path.abspath(x) for x in set(config.roslaunch_files) - set([get_roscore_filename()])]
241 print('\n'.join(files))
242 except roslaunch.core.RLException as e:
243 print(str(e), file=sys.stderr)
244 sys.exit(1)
245
246
[end of tools/roslaunch/src/roslaunch/rlutil.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/roslaunch/src/roslaunch/rlutil.py b/tools/roslaunch/src/roslaunch/rlutil.py
--- a/tools/roslaunch/src/roslaunch/rlutil.py
+++ b/tools/roslaunch/src/roslaunch/rlutil.py
@@ -85,24 +85,26 @@
if not args:
return args
resolved_args = None
- top = args[0]
- if os.path.isfile(top):
- resolved_args = [top] + args[1:]
- elif len(args) == 1:
- raise roslaunch.core.RLException("[%s] does not exist. please specify a package and launch file"%(top))
- else:
+
+ # try to resolve launch file in package first
+ if len(args) >= 2:
try:
- resolved = roslib.packages.find_resource(top, args[1])
+ resolved = roslib.packages.find_resource(args[0], args[1])
+ if len(resolved) > 1:
+ raise roslaunch.core.RLException("multiple files named [%s] in package [%s]:%s\nPlease specify full path instead" % (args[1], args[0], ''.join(['\n- %s' % r for r in resolved])))
if len(resolved) == 1:
- resolved = resolved[0]
- elif len(resolved) > 1:
- raise roslaunch.core.RLException("multiple files named [%s] in package [%s]:%s\nPlease specify full path instead" % (args[1], top, ''.join(['\n- %s' % r for r in resolved])))
- except rospkg.ResourceNotFound as e:
- raise roslaunch.core.RLException("[%s] is not a package or launch file name"%top)
- if not resolved:
- raise roslaunch.core.RLException("cannot locate [%s] in package [%s]"%(args[1], top))
+ resolved_args = [resolved[0]] + args[2:]
+ except rospkg.ResourceNotFound:
+ pass
+ # try to resolve launch file
+ if resolved_args is None and os.path.isfile(args[0]):
+ resolved_args = [args[0]] + args[1:]
+ # raise if unable to resolve
+ if resolved_args is None:
+ if len(args) >= 2:
+ raise roslaunch.core.RLException("[%s] is neither a launch file in package [%s] nor is [%s] a launch file name" % (args[1], args[0], args[0]))
else:
- resolved_args = [resolved] + args[2:]
+ raise roslaunch.core.RLException("[%s] is not a launch file name" % args[0])
return resolved_args
def _wait_for_master():
|
{"golden_diff": "diff --git a/tools/roslaunch/src/roslaunch/rlutil.py b/tools/roslaunch/src/roslaunch/rlutil.py\n--- a/tools/roslaunch/src/roslaunch/rlutil.py\n+++ b/tools/roslaunch/src/roslaunch/rlutil.py\n@@ -85,24 +85,26 @@\n if not args:\n return args\n resolved_args = None\n- top = args[0]\n- if os.path.isfile(top):\n- resolved_args = [top] + args[1:]\n- elif len(args) == 1:\n- raise roslaunch.core.RLException(\"[%s] does not exist. please specify a package and launch file\"%(top))\n- else:\n+\n+ # try to resolve launch file in package first\n+ if len(args) >= 2:\n try:\n- resolved = roslib.packages.find_resource(top, args[1])\n+ resolved = roslib.packages.find_resource(args[0], args[1])\n+ if len(resolved) > 1:\n+ raise roslaunch.core.RLException(\"multiple files named [%s] in package [%s]:%s\\nPlease specify full path instead\" % (args[1], args[0], ''.join(['\\n- %s' % r for r in resolved])))\n if len(resolved) == 1:\n- resolved = resolved[0]\n- elif len(resolved) > 1:\n- raise roslaunch.core.RLException(\"multiple files named [%s] in package [%s]:%s\\nPlease specify full path instead\" % (args[1], top, ''.join(['\\n- %s' % r for r in resolved])))\n- except rospkg.ResourceNotFound as e:\n- raise roslaunch.core.RLException(\"[%s] is not a package or launch file name\"%top)\n- if not resolved:\n- raise roslaunch.core.RLException(\"cannot locate [%s] in package [%s]\"%(args[1], top))\n+ resolved_args = [resolved[0]] + args[2:]\n+ except rospkg.ResourceNotFound:\n+ pass\n+ # try to resolve launch file\n+ if resolved_args is None and os.path.isfile(args[0]):\n+ resolved_args = [args[0]] + args[1:]\n+ # raise if unable to resolve\n+ if resolved_args is None:\n+ if len(args) >= 2:\n+ raise roslaunch.core.RLException(\"[%s] is neither a launch file in package [%s] nor is [%s] a launch file name\" % (args[1], args[0], args[0]))\n else:\n- resolved_args = [resolved] + args[2:]\n+ raise roslaunch.core.RLException(\"[%s] is not a launch file name\" % args[0])\n return resolved_args\n \n def _wait_for_master():\n", "issue": "roslaunch does not accept a package name if a file with the same name exists in the current directory\nReproduce with:\n\n```\nroscreate-pkg roslaunch_test\ncd roslaunch_test\nmkdir bin\nmkdir launch\ntouch bin/roslaunch_test\necho \"<launch/>\" > launch/example.launch\ncd bin\nroslaunch roslaunch_test example.launch\n```\n\nError output:\n\n```\nUsage: roslaunch [options] [package] <filename> [arg_name:=value...]\n\nroslaunch: error: The following input files do not exist: example.launch\n```\n\nWithout the file in `bin/` or with another working directory roslaunch executes without errors (and exits immediately as there are no nodes).\n\nI am using roslaunch 1.9.47 installed from the binary package repository in ROS hydro.\n\n", "before_files": [{"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2009, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. 
nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nUncategorized utility routines for roslaunch.\n\nThis API should not be considered stable.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport time\n\nimport roslib.packages\n\nimport rosclean\nimport rospkg\nimport rosgraph\n\nimport roslaunch.core\nimport roslaunch.config\nimport roslaunch.depends\nfrom rosmaster import DEFAULT_MASTER_PORT\n\ndef check_log_disk_usage():\n \"\"\"\n Check size of log directory. If high, print warning to user\n \"\"\"\n try:\n d = rospkg.get_log_dir()\n roslaunch.core.printlog(\"Checking log directory for disk usage. This may take awhile.\\nPress Ctrl-C to interrupt\") \n disk_usage = rosclean.get_disk_usage(d)\n # warn if over a gig\n if disk_usage > 1073741824:\n roslaunch.core.printerrlog(\"WARNING: disk usage in log directory [%s] is over 1GB.\\nIt's recommended that you use the 'rosclean' command.\"%d)\n else:\n roslaunch.core.printlog(\"Done checking log file disk usage. Usage is <1GB.\") \n except:\n pass\n\ndef resolve_launch_arguments(args):\n \"\"\"\n Resolve command-line args to roslaunch filenames.\n\n :returns: resolved filenames, ``[str]``\n \"\"\"\n\n # strip remapping args for processing\n args = rosgraph.myargv(args)\n \n # user can either specify:\n # - filename + launch args\n # - package + relative-filename + launch args\n if not args:\n return args\n resolved_args = None\n top = args[0]\n if os.path.isfile(top):\n resolved_args = [top] + args[1:]\n elif len(args) == 1:\n raise roslaunch.core.RLException(\"[%s] does not exist. 
please specify a package and launch file\"%(top))\n else:\n try:\n resolved = roslib.packages.find_resource(top, args[1])\n if len(resolved) == 1:\n resolved = resolved[0]\n elif len(resolved) > 1:\n raise roslaunch.core.RLException(\"multiple files named [%s] in package [%s]:%s\\nPlease specify full path instead\" % (args[1], top, ''.join(['\\n- %s' % r for r in resolved])))\n except rospkg.ResourceNotFound as e:\n raise roslaunch.core.RLException(\"[%s] is not a package or launch file name\"%top)\n if not resolved:\n raise roslaunch.core.RLException(\"cannot locate [%s] in package [%s]\"%(args[1], top))\n else:\n resolved_args = [resolved] + args[2:]\n return resolved_args\n\ndef _wait_for_master():\n \"\"\"\n Block until ROS Master is online\n \n :raise: :exc:`RuntimeError` If unexpected error occurs\n \"\"\"\n m = roslaunch.core.Master() # get a handle to the default master\n is_running = m.is_running()\n if not is_running:\n roslaunch.core.printlog(\"roscore/master is not yet running, will wait for it to start\")\n while not is_running:\n time.sleep(0.1)\n is_running = m.is_running()\n if is_running:\n roslaunch.core.printlog(\"master has started, initiating launch\")\n else:\n raise RuntimeError(\"unknown error waiting for master to start\")\n\n_terminal_name = None\n\ndef _set_terminal(s):\n import platform\n if platform.system() in ['FreeBSD', 'Linux', 'Darwin', 'Unix']:\n try:\n print('\\033]2;%s\\007'%(s))\n except:\n pass\n \ndef update_terminal_name(ros_master_uri):\n \"\"\"\n append master URI to the terminal name\n \"\"\"\n if _terminal_name:\n _set_terminal(_terminal_name + ' ' + ros_master_uri)\n\ndef change_terminal_name(args, is_core):\n \"\"\"\n use echo (where available) to change the name of the terminal window\n \"\"\"\n global _terminal_name\n _terminal_name = 'roscore' if is_core else ','.join(args)\n _set_terminal(_terminal_name)\n\ndef get_or_generate_uuid(options_runid, options_wait_for_master):\n \"\"\"\n :param options_runid: run_id value from command-line or ``None``, ``str``\n :param options_wait_for_master: the wait_for_master command\n option. If this is True, it means that we must retrieve the\n value from the parameter server and need to avoid any race\n conditions with the roscore being initialized. ``bool``\n \"\"\"\n\n # Three possible sources of the run_id:\n #\n # - if we're a child process, we get it from options_runid\n # - if there's already a roscore running, read from the param server\n # - generate one if we're running the roscore\n if options_runid:\n return options_runid\n\n # #773: Generate a run_id to use if we launch a master\n # process. If a master is already running, we'll get the\n # run_id from it instead\n param_server = rosgraph.Master('/roslaunch')\n val = None\n while val is None:\n try:\n val = param_server.getParam('/run_id')\n except:\n if not options_wait_for_master:\n val = roslaunch.core.generate_run_id()\n return val\n \ndef check_roslaunch(f):\n \"\"\"\n Check roslaunch file for errors, returning error message if check fails. 
This routine\n is mainly to support rostest's roslaunch_check.\n\n :param f: roslaunch file name, ``str``\n :returns: error message or ``None``\n \"\"\"\n try:\n rl_config = roslaunch.config.load_config_default([f], DEFAULT_MASTER_PORT, verbose=False)\n except roslaunch.core.RLException as e:\n return str(e)\n \n errors = []\n # check for missing deps\n base_pkg, file_deps, missing = roslaunch.depends.roslaunch_deps([f])\n for pkg, miss in missing.iteritems():\n if miss:\n errors.append(\"Missing manifest dependencies: %s/manifest.xml: %s\"%(pkg, ', '.join(miss)))\n \n # load all node defs\n nodes = []\n for filename, rldeps in file_deps.iteritems():\n nodes.extend(rldeps.nodes)\n\n # check for missing packages\n rospack = rospkg.RosPack()\n for pkg, node_type in nodes:\n try:\n rospack.get_path(pkg)\n except:\n errors.append(\"cannot find package [%s] for node [%s]\"%(pkg, node_type))\n\n # check for missing nodes\n for pkg, node_type in nodes:\n try:\n if not roslib.packages.find_node(pkg, node_type):\n errors.append(\"cannot find node [%s] in package [%s]\"%(node_type, pkg))\n except Exception as e:\n errors.append(\"unable to find node [%s/%s]: %s\"%(pkg, node_type, str(e)))\n \n # Check for configuration errors, #2889\n for err in rl_config.config_errors:\n errors.append('ROSLaunch config error: %s' % err)\n\n if errors:\n return '\\n'.join(errors)\n \ndef print_file_list(roslaunch_files):\n \"\"\"\n :param roslaunch_files: list of launch files to load, ``str``\n\n :returns: list of files involved in processing roslaunch_files, including the files themselves.\n \"\"\"\n from roslaunch.config import load_config_default, get_roscore_filename\n import roslaunch.xmlloader\n try:\n loader = roslaunch.xmlloader.XmlLoader(resolve_anon=True)\n config = load_config_default(roslaunch_files, None, loader=loader, verbose=False, assign_machines=False)\n files = [os.path.abspath(x) for x in set(config.roslaunch_files) - set([get_roscore_filename()])]\n print('\\n'.join(files))\n except roslaunch.core.RLException as e:\n print(str(e), file=sys.stderr)\n sys.exit(1)\n\n", "path": "tools/roslaunch/src/roslaunch/rlutil.py"}]}
| 3,503 | 641 |
gh_patches_debug_41028
|
rasdani/github-patches
|
git_diff
|
plotly__dash-1932
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove `webdriver-manager` from `dash[testing]` requirements
This was added in #1801 but wasn't really clear why it was needed, and it reaches out to the internet during its installation (even if installed from a local PyPI mirror) which causes problems for some users.
[BUG] Dash DataTable style_header text_alignment breaks between Dash v2.0.0 and v2.1.0
Production environment is under a customer container hosting environment that hosts docker container in a large corporate environment. Docker container is setup with rhel 7 (Redhat) with an Apache 2.4 web server. Python 3.6 is the version of python using pip as the installation for packages.
- replace the result of `pip list | grep dash` below
```
dash 2.1.0
dash-bootstrap-components 1.0.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-renderer 1.9.1
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: MacOS Monterey version 12.2
- Browser Chrome
- Version 98.0.4758.80
**Describe the bug**
When using 'text-align': 'center' in the style_header directive within the dash_table.DataTable, under Dash 2.1.0, the headings for the table are not centered, rather they are right aligned, regardless of the setting of the 'text-align' value. In fact, directly editing the text-align within Chrome's developer tool will not change the alignment. Changing other attributes (font, color, etc... will work).
**Expected behavior**
I would expect that when 'text-align': 'center' is specified for the style_header directive within dash_table.DataTable that the headings for the columns specified would be centered above the column.
**Screenshots**

**Additional Information**
I've been able to workaround this issue by reverting to Dash 2.0.0 in my production build, but I have not been able to build a similar working environment, neither in pip nor conda. My production version is working. I only reverted to Dash 2.0.0 since it was a very recent release, and then the behavior of the dash_table.DataTable worked correctly. It was the only change I made to the Docker build. Now, regardless of whether I use Dash 2.0.0 or Dash 2.1.0, I'm seeing the persistent right alignment of headers in my DataTables.
I do not know if it will help, but here is an example of code that I saw work under Dash 2.0.0, and then fail once I upgraded to Dash 2.1.0:
```
import pandas as pd
from dash import dcc
# import dash_core_components as dcc
from dash import html
# import dash_html_components as html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output
import plotly.express as px
import time
import random
app = dash.Dash(__name__)
def get_rand_data():
mylist=[random.randint(1,6) for _ in range(6)]
return mylist
def build_data(mylist):
df = pd.DataFrame({
"Fruit": ["Apples", "Oranges", "Bananas", "Apples", "Oranges", "Bananas"],
"Amount": mylist,
"City": ["SF", "SF", "SF", "Montreal", "Montreal", "Montreal"]
})
dcc.Store(id='new_data', data=df.to_json())
return df
def draw_graph(df):
fig = px.bar(df, x="Fruit", y="Amount", color="City", barmode="group")
return fig
def get_table():
data_table = dash_table.DataTable(
columns=[
{"name": ["", "Year"], "id": "year"},
{"name": ["City", "Montreal"], "id": "montreal"},
{"name": ["City", "Toronto"], "id": "toronto"},
{"name": ["City", "Ottawa"], "id": "ottawa"},
{"name": ["City", "Vancouver"], "id": "vancouver"},
{"name": ["Climate", "Temperature"], "id": "temp"},
{"name": ["Climate", "Humidity"], "id": "humidity"},
],
data=[
{
"year": i,
"montreal": i * 10,
"toronto": i * 100,
"ottawa": i * -1,
"vancouver": i * -10,
"temp": i * -100,
"humidity": i * 5,
}
for i in range(10)
],
style_header={
'text-align': 'center',
},
merge_duplicate_headers=True,
)
return data_table
mylist=get_rand_data()
df = build_data(mylist)
fig = draw_graph(df)
data_table = get_table()
refresh_button = dbc.Button('Refresh Data', color="info", className="me-1", id='refresh_button_lmd')
app.layout = html.Div(children=[
html.H1(children='Hello Dash'),
refresh_button,
html.Div(children=[
data_table
]),
dcc.Store(id='new_data'),
dcc.Loading(
id='loading-data',
children=[
html.Div(children=[
dcc.Graph(
id='example-graph',
figure=fig
)
]
)
],
type='circle',
),
])
@app.callback(Output("example-graph", "figure"),
Input("new_data", "data"))
def on_data(data):
df = pd.read_json(data)
time.sleep(5)
fig = draw_graph(df)
return fig
@app.callback(Output('new_data', 'data'),
Input('refresh_button_lmd', 'n_clicks'))
def new_data(n_clicks):
if n_clicks is None:
print("Override Startup")
mylist = get_rand_data()
df = build_data(mylist)
data = df.to_json()
else:
print(f'Button was clicked, this is {n_clicks} times.')
mylist = get_rand_data()
df = build_data(mylist)
data=df.to_json()
return data
if __name__ == '__main__':
app.run_server(debug=True)
```
I suspect that perhaps an upgrade supporting the move to Dash 2.1.0 might be the issue, and that now that I've moved my base install, I do not know what library is causing this. Any help would be appreciated. I would like to remove the constraint of staying at Dash 2.0.0 as I saw faster response times with Dash 2.1.0. Thanks!
</issue>
<code>
[start of dash/development/update_components.py]
1 import sys
2 import subprocess
3 import shlex
4 import os
5 import argparse
6 import shutil
7 import logging
8 import coloredlogs
9
10
11 class _CombinedFormatter(
12 argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter
13 ):
14 pass
15
16
17 logger = logging.getLogger(__name__)
18 coloredlogs.install(
19 fmt="%(asctime)s,%(msecs)03d %(levelname)s - %(message)s", datefmt="%H:%M:%S"
20 )
21
22
23 def booststrap_components(components_source):
24
25 is_windows = sys.platform == "win32"
26
27 source_glob = (
28 components_source
29 if components_source != "all"
30 else "dash-core-components|dash-html-components|dash-table"
31 )
32
33 cmd = shlex.split(
34 "npx lerna exec --scope *@({})* -- npm i".format(source_glob),
35 posix=not is_windows,
36 )
37
38 with subprocess.Popen(
39 cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=is_windows
40 ) as proc:
41 out, err = proc.communicate()
42 status = proc.poll()
43
44 if err:
45 print(err.decode(), file=sys.stderr)
46
47 if status == 0:
48 print(
49 "🟢 Finished installing npm dependencies for the following component packages: {} (status={}) 🟢".format(
50 source_glob, status
51 ),
52 file=sys.stderr,
53 )
54 if not out:
55 print(
56 "Failed installing npm dependencies for the following component packages {} (status={})".format(
57 source_glob, status
58 ),
59 file=sys.stderr,
60 )
61
62
63 def build_components(components_source):
64
65 is_windows = sys.platform == "win32"
66
67 source_glob = (
68 components_source
69 if components_source != "all"
70 else "dash-core-components|dash-html-components|dash-table"
71 )
72
73 cmd = shlex.split(
74 "npx lerna exec --scope *@({})* -- npm run build".format(source_glob),
75 posix=not is_windows,
76 )
77
78 with subprocess.Popen(
79 cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=is_windows
80 ) as proc:
81 out, err = proc.communicate()
82 status = proc.poll()
83
84 if err:
85 print(err.decode(), file=sys.stderr)
86
87 if not out:
88 print(
89 "🟢 Finished updating the following component packages {} (status={}) 🟢".format(
90 source_glob, status
91 ),
92 file=sys.stderr,
93 )
94 sys.exit(1)
95
96 for package in source_glob.split("|"):
97 build_directory = os.path.join(
98 "components", package, package.replace("-", "_").rstrip("/\\")
99 )
100
101 dest_dir = (
102 "dcc"
103 if package == "dash-core-components"
104 else "html"
105 if package == "dash-html-components"
106 else "dash_table"
107 )
108
109 dest_path = os.path.join("dash", dest_dir)
110
111 if not os.path.exists(dest_path):
112 try:
113 os.makedirs(dest_path)
114 except OSError:
115 logger.exception("🚨 Having issues manipulating %s", dest_path)
116 sys.exit(1)
117
118 if not os.path.exists(build_directory):
119 print(
120 "Could not locate build artifacts. Check that the npm build process completed successfully for the given package: {}".format(
121 package
122 )
123 )
124 else:
125 print("🚚 Moving build artifacts from " + build_directory + " to Dash 🚚")
126 shutil.rmtree(dest_path)
127 shutil.copytree(build_directory, dest_path)
128 with open(os.path.join(dest_path, ".gitkeep"), "w"):
129 pass
130 print(
131 "🟢 Finished moving build artifacts from "
132 + build_directory
133 + " to Dash 🟢"
134 )
135
136
137 def cli():
138 parser = argparse.ArgumentParser(
139 prog="dash-update-components",
140 formatter_class=_CombinedFormatter,
141 description="Update the specified subcomponent libraries within Dash"
142 " by copying over build artifacts, dependencies, and dependency metadata.",
143 )
144 parser.add_argument(
145 "components_source",
146 help="A glob string that matches the Dash component libraries to be updated (eg.'dash-table' // 'dash-core-components|dash-html-components' // 'all'). The default argument is 'all'.",
147 default="all",
148 )
149
150 args = parser.parse_args()
151
152 booststrap_components(args.components_source)
153 build_components(args.components_source)
154
155
156 cli()
157
[end of dash/development/update_components.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dash/development/update_components.py b/dash/development/update_components.py
--- a/dash/development/update_components.py
+++ b/dash/development/update_components.py
@@ -42,20 +42,17 @@
status = proc.poll()
if err:
- print(err.decode(), file=sys.stderr)
+ print(("🛑 " if status else "") + err.decode(), file=sys.stderr)
- if status == 0:
+ if status or not out:
print(
- "🟢 Finished installing npm dependencies for the following component packages: {} (status={}) 🟢".format(
- source_glob, status
- ),
+ "🚨 Failed installing npm dependencies for component packages: {source_glob} (status={status}) 🚨",
file=sys.stderr,
)
- if not out:
+ sys.exit(1)
+ else:
print(
- "Failed installing npm dependencies for the following component packages {} (status={})".format(
- source_glob, status
- ),
+ f"🟢 Finished installing npm dependencies for component packages: {source_glob} 🟢",
file=sys.stderr,
)
@@ -82,13 +79,11 @@
status = proc.poll()
if err:
- print(err.decode(), file=sys.stderr)
+ print(("🛑 " if status else "") + err.decode(), file=sys.stderr)
- if not out:
+ if status or not out:
print(
- "🟢 Finished updating the following component packages {} (status={}) 🟢".format(
- source_glob, status
- ),
+ f"🚨 Finished updating component packages: {source_glob} (status={status}) 🚨",
file=sys.stderr,
)
sys.exit(1)
@@ -117,21 +112,18 @@
if not os.path.exists(build_directory):
print(
- "Could not locate build artifacts. Check that the npm build process completed successfully for the given package: {}".format(
- package
- )
+ "🚨 Could not locate build artifacts."
+ + " Check that the npm build process completed"
+ + f" successfully for package: {package} 🚨"
)
+ sys.exit(1)
else:
- print("🚚 Moving build artifacts from " + build_directory + " to Dash 🚚")
+ print(f"🚚 Moving build artifacts from {build_directory} to Dash 🚚")
shutil.rmtree(dest_path)
shutil.copytree(build_directory, dest_path)
with open(os.path.join(dest_path, ".gitkeep"), "w"):
pass
- print(
- "🟢 Finished moving build artifacts from "
- + build_directory
- + " to Dash 🟢"
- )
+ print(f"🟢 Finished moving build artifacts from {build_directory} to Dash 🟢")
def cli():
@@ -143,7 +135,9 @@
)
parser.add_argument(
"components_source",
- help="A glob string that matches the Dash component libraries to be updated (eg.'dash-table' // 'dash-core-components|dash-html-components' // 'all'). The default argument is 'all'.",
+ help="A glob string that matches the Dash component libraries to be updated"
+ " (eg.'dash-table' // 'dash-core-components|dash-html-components' // 'all')."
+ " The default argument is 'all'.",
default="all",
)
@@ -153,4 +147,5 @@
build_components(args.components_source)
-cli()
+if __name__ == "__main__":
+ cli()
|
{"golden_diff": "diff --git a/dash/development/update_components.py b/dash/development/update_components.py\n--- a/dash/development/update_components.py\n+++ b/dash/development/update_components.py\n@@ -42,20 +42,17 @@\n status = proc.poll()\n \n if err:\n- print(err.decode(), file=sys.stderr)\n+ print((\"\ud83d\uded1 \" if status else \"\") + err.decode(), file=sys.stderr)\n \n- if status == 0:\n+ if status or not out:\n print(\n- \"\ud83d\udfe2 Finished installing npm dependencies for the following component packages: {} (status={}) \ud83d\udfe2\".format(\n- source_glob, status\n- ),\n+ \"\ud83d\udea8 Failed installing npm dependencies for component packages: {source_glob} (status={status}) \ud83d\udea8\",\n file=sys.stderr,\n )\n- if not out:\n+ sys.exit(1)\n+ else:\n print(\n- \"Failed installing npm dependencies for the following component packages {} (status={})\".format(\n- source_glob, status\n- ),\n+ f\"\ud83d\udfe2 Finished installing npm dependencies for component packages: {source_glob} \ud83d\udfe2\",\n file=sys.stderr,\n )\n \n@@ -82,13 +79,11 @@\n status = proc.poll()\n \n if err:\n- print(err.decode(), file=sys.stderr)\n+ print((\"\ud83d\uded1 \" if status else \"\") + err.decode(), file=sys.stderr)\n \n- if not out:\n+ if status or not out:\n print(\n- \"\ud83d\udfe2 Finished updating the following component packages {} (status={}) \ud83d\udfe2\".format(\n- source_glob, status\n- ),\n+ f\"\ud83d\udea8 Finished updating component packages: {source_glob} (status={status}) \ud83d\udea8\",\n file=sys.stderr,\n )\n sys.exit(1)\n@@ -117,21 +112,18 @@\n \n if not os.path.exists(build_directory):\n print(\n- \"Could not locate build artifacts. Check that the npm build process completed successfully for the given package: {}\".format(\n- package\n- )\n+ \"\ud83d\udea8 Could not locate build artifacts.\"\n+ + \" Check that the npm build process completed\"\n+ + f\" successfully for package: {package} \ud83d\udea8\"\n )\n+ sys.exit(1)\n else:\n- print(\"\ud83d\ude9a Moving build artifacts from \" + build_directory + \" to Dash \ud83d\ude9a\")\n+ print(f\"\ud83d\ude9a Moving build artifacts from {build_directory} to Dash \ud83d\ude9a\")\n shutil.rmtree(dest_path)\n shutil.copytree(build_directory, dest_path)\n with open(os.path.join(dest_path, \".gitkeep\"), \"w\"):\n pass\n- print(\n- \"\ud83d\udfe2 Finished moving build artifacts from \"\n- + build_directory\n- + \" to Dash \ud83d\udfe2\"\n- )\n+ print(f\"\ud83d\udfe2 Finished moving build artifacts from {build_directory} to Dash \ud83d\udfe2\")\n \n \n def cli():\n@@ -143,7 +135,9 @@\n )\n parser.add_argument(\n \"components_source\",\n- help=\"A glob string that matches the Dash component libraries to be updated (eg.'dash-table' // 'dash-core-components|dash-html-components' // 'all'). 
The default argument is 'all'.\",\n+ help=\"A glob string that matches the Dash component libraries to be updated\"\n+ \" (eg.'dash-table' // 'dash-core-components|dash-html-components' // 'all').\"\n+ \" The default argument is 'all'.\",\n default=\"all\",\n )\n \n@@ -153,4 +147,5 @@\n build_components(args.components_source)\n \n \n-cli()\n+if __name__ == \"__main__\":\n+ cli()\n", "issue": "Remove `webdriver-manager` from `dash[testing]` requirements\nThis was added in #1801 but wasn't really clear why it was needed, and it reaches out to the internet during its installation (even if installed from a local PyPI mirror) which causes problems for some users.\n[BUG] Dash DataTable style_header text_alignment breaks between Dash v2.0.0 and v2.1.0\nProduction environment is under a customer container hosting environment that hosts docker container in a large corporate environment. Docker container is setup with rhel 7 (Redhat) with an Apache 2.4 web server. Python 3.6 is the version of python using pip as the installation for packages.\r\n\r\n- replace the result of `pip list | grep dash` below\r\n```\r\ndash 2.1.0 \r\ndash-bootstrap-components 1.0.2 \r\ndash-core-components 2.0.0 \r\ndash-html-components 2.0.0 \r\ndash-renderer 1.9.1 \r\ndash-table 5.0.0\r\n```\r\n\r\n- if frontend related, tell us your Browser, Version and OS\r\n\r\n - OS: MacOS Monterey version 12.2\r\n - Browser Chrome\r\n - Version 98.0.4758.80\r\n\r\n**Describe the bug**\r\n\r\nWhen using 'text-align': 'center' in the style_header directive within the dash_table.DataTable, under Dash 2.1.0, the headings for the table are not centered, rather they are right aligned, regardless of the setting of the 'text-align' value. In fact, directly editing the text-align within Chrome's developer tool will not change the alignment. Changing other attributes (font, color, etc... will work). \r\n\r\n**Expected behavior**\r\n\r\nI would expect that when 'text-align': 'center' is specified for the style_header directive within dash_table.DataTable that the headings for the columns specified would be centered above the column.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n**Additional Information**\r\nI've been able to workaround this issue by reverting to Dash 2.0.0 in my production build, but I have not been able to build a similar working environment, neither in pip nor conda. My production version is working. I only reverted to Dash 2.0.0 since it was a very recent release, and then the behavior of the dash_table.DataTable worked correctly. It was the only change I made to the Docker build. Now, regardless of whether I use Dash 2.0.0 or Dash 2.1.0, I'm seeing the persistent right alignment of headers in my DataTables. 
\r\n\r\nI do not know if it will help, but here is an example of code that I saw work under Dash 2.0.0, and then fail once I upgraded to Dash 2.1.0:\r\n```\r\nimport pandas as pd\r\nfrom dash import dcc\r\n# import dash_core_components as dcc\r\nfrom dash import html\r\n# import dash_html_components as html\r\nimport dash_bootstrap_components as dbc\r\nfrom dash.dependencies import Input, Output\r\nimport plotly.express as px\r\nimport time\r\nimport random\r\n\r\napp = dash.Dash(__name__)\r\n\r\ndef get_rand_data():\r\n mylist=[random.randint(1,6) for _ in range(6)]\r\n return mylist\r\n\r\ndef build_data(mylist):\r\n df = pd.DataFrame({\r\n \"Fruit\": [\"Apples\", \"Oranges\", \"Bananas\", \"Apples\", \"Oranges\", \"Bananas\"],\r\n \"Amount\": mylist,\r\n \"City\": [\"SF\", \"SF\", \"SF\", \"Montreal\", \"Montreal\", \"Montreal\"]\r\n })\r\n dcc.Store(id='new_data', data=df.to_json())\r\n return df\r\n\r\ndef draw_graph(df):\r\n fig = px.bar(df, x=\"Fruit\", y=\"Amount\", color=\"City\", barmode=\"group\")\r\n return fig\r\n\r\ndef get_table():\r\n data_table = dash_table.DataTable(\r\n columns=[\r\n {\"name\": [\"\", \"Year\"], \"id\": \"year\"},\r\n {\"name\": [\"City\", \"Montreal\"], \"id\": \"montreal\"},\r\n {\"name\": [\"City\", \"Toronto\"], \"id\": \"toronto\"},\r\n {\"name\": [\"City\", \"Ottawa\"], \"id\": \"ottawa\"},\r\n {\"name\": [\"City\", \"Vancouver\"], \"id\": \"vancouver\"},\r\n {\"name\": [\"Climate\", \"Temperature\"], \"id\": \"temp\"},\r\n {\"name\": [\"Climate\", \"Humidity\"], \"id\": \"humidity\"},\r\n ],\r\n data=[\r\n {\r\n \"year\": i,\r\n \"montreal\": i * 10,\r\n \"toronto\": i * 100,\r\n \"ottawa\": i * -1,\r\n \"vancouver\": i * -10,\r\n \"temp\": i * -100,\r\n \"humidity\": i * 5,\r\n }\r\n for i in range(10)\r\n ],\r\n style_header={\r\n 'text-align': 'center',\r\n },\r\n merge_duplicate_headers=True,\r\n )\r\n return data_table\r\n\r\n\r\nmylist=get_rand_data()\r\ndf = build_data(mylist)\r\nfig = draw_graph(df)\r\ndata_table = get_table()\r\n\r\nrefresh_button = dbc.Button('Refresh Data', color=\"info\", className=\"me-1\", id='refresh_button_lmd')\r\n\r\napp.layout = html.Div(children=[\r\n html.H1(children='Hello Dash'),\r\n refresh_button,\r\n html.Div(children=[\r\n data_table\r\n ]),\r\n dcc.Store(id='new_data'),\r\n dcc.Loading(\r\n id='loading-data',\r\n children=[\r\n html.Div(children=[\r\n dcc.Graph(\r\n id='example-graph',\r\n figure=fig\r\n )\r\n ]\r\n )\r\n ],\r\n type='circle',\r\n ),\r\n])\r\n\r\n\r\[email protected](Output(\"example-graph\", \"figure\"),\r\n Input(\"new_data\", \"data\"))\r\ndef on_data(data):\r\n df = pd.read_json(data)\r\n time.sleep(5)\r\n fig = draw_graph(df)\r\n return fig\r\n\r\n\r\[email protected](Output('new_data', 'data'),\r\n Input('refresh_button_lmd', 'n_clicks'))\r\ndef new_data(n_clicks):\r\n if n_clicks is None:\r\n print(\"Override Startup\")\r\n mylist = get_rand_data()\r\n df = build_data(mylist)\r\n data = df.to_json()\r\n else:\r\n print(f'Button was clicked, this is {n_clicks} times.')\r\n mylist = get_rand_data()\r\n df = build_data(mylist)\r\n data=df.to_json()\r\n return data\r\n\r\nif __name__ == '__main__':\r\n app.run_server(debug=True)\r\n```\r\n\r\nI suspect that perhaps an upgrade supporting the move to Dash 2.1.0 might be the issue, and that now that I've moved my base install, I do not know what library is causing this. Any help would be appreciated. I would like to remove the constraint of staying at Dash 2.0.0 as I saw faster response times with Dash 2.1.0. 
Thanks!\n", "before_files": [{"content": "import sys\nimport subprocess\nimport shlex\nimport os\nimport argparse\nimport shutil\nimport logging\nimport coloredlogs\n\n\nclass _CombinedFormatter(\n argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter\n):\n pass\n\n\nlogger = logging.getLogger(__name__)\ncoloredlogs.install(\n fmt=\"%(asctime)s,%(msecs)03d %(levelname)s - %(message)s\", datefmt=\"%H:%M:%S\"\n)\n\n\ndef booststrap_components(components_source):\n\n is_windows = sys.platform == \"win32\"\n\n source_glob = (\n components_source\n if components_source != \"all\"\n else \"dash-core-components|dash-html-components|dash-table\"\n )\n\n cmd = shlex.split(\n \"npx lerna exec --scope *@({})* -- npm i\".format(source_glob),\n posix=not is_windows,\n )\n\n with subprocess.Popen(\n cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=is_windows\n ) as proc:\n out, err = proc.communicate()\n status = proc.poll()\n\n if err:\n print(err.decode(), file=sys.stderr)\n\n if status == 0:\n print(\n \"\ud83d\udfe2 Finished installing npm dependencies for the following component packages: {} (status={}) \ud83d\udfe2\".format(\n source_glob, status\n ),\n file=sys.stderr,\n )\n if not out:\n print(\n \"Failed installing npm dependencies for the following component packages {} (status={})\".format(\n source_glob, status\n ),\n file=sys.stderr,\n )\n\n\ndef build_components(components_source):\n\n is_windows = sys.platform == \"win32\"\n\n source_glob = (\n components_source\n if components_source != \"all\"\n else \"dash-core-components|dash-html-components|dash-table\"\n )\n\n cmd = shlex.split(\n \"npx lerna exec --scope *@({})* -- npm run build\".format(source_glob),\n posix=not is_windows,\n )\n\n with subprocess.Popen(\n cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=is_windows\n ) as proc:\n out, err = proc.communicate()\n status = proc.poll()\n\n if err:\n print(err.decode(), file=sys.stderr)\n\n if not out:\n print(\n \"\ud83d\udfe2 Finished updating the following component packages {} (status={}) \ud83d\udfe2\".format(\n source_glob, status\n ),\n file=sys.stderr,\n )\n sys.exit(1)\n\n for package in source_glob.split(\"|\"):\n build_directory = os.path.join(\n \"components\", package, package.replace(\"-\", \"_\").rstrip(\"/\\\\\")\n )\n\n dest_dir = (\n \"dcc\"\n if package == \"dash-core-components\"\n else \"html\"\n if package == \"dash-html-components\"\n else \"dash_table\"\n )\n\n dest_path = os.path.join(\"dash\", dest_dir)\n\n if not os.path.exists(dest_path):\n try:\n os.makedirs(dest_path)\n except OSError:\n logger.exception(\"\ud83d\udea8 Having issues manipulating %s\", dest_path)\n sys.exit(1)\n\n if not os.path.exists(build_directory):\n print(\n \"Could not locate build artifacts. 
Check that the npm build process completed successfully for the given package: {}\".format(\n package\n )\n )\n else:\n print(\"\ud83d\ude9a Moving build artifacts from \" + build_directory + \" to Dash \ud83d\ude9a\")\n shutil.rmtree(dest_path)\n shutil.copytree(build_directory, dest_path)\n with open(os.path.join(dest_path, \".gitkeep\"), \"w\"):\n pass\n print(\n \"\ud83d\udfe2 Finished moving build artifacts from \"\n + build_directory\n + \" to Dash \ud83d\udfe2\"\n )\n\n\ndef cli():\n parser = argparse.ArgumentParser(\n prog=\"dash-update-components\",\n formatter_class=_CombinedFormatter,\n description=\"Update the specified subcomponent libraries within Dash\"\n \" by copying over build artifacts, dependencies, and dependency metadata.\",\n )\n parser.add_argument(\n \"components_source\",\n help=\"A glob string that matches the Dash component libraries to be updated (eg.'dash-table' // 'dash-core-components|dash-html-components' // 'all'). The default argument is 'all'.\",\n default=\"all\",\n )\n\n args = parser.parse_args()\n\n booststrap_components(args.components_source)\n build_components(args.components_source)\n\n\ncli()\n", "path": "dash/development/update_components.py"}]}
| 3,401 | 809 |
gh_patches_debug_31015
|
rasdani/github-patches
|
git_diff
|
opensearch-project__opensearch-build-1979
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Enhancement] OS/OSD runner is not separated in manifest_workflow and reduce runtime
We want OS and OSD runner to be separated when workflow create the new manifest file.
As of now both os and osd runner are pointing to the same one.
There are different configurations between the two runners.
The workflow also runs for way too long.
gradle properties does not require a gradle mavelocal to run beforehand.
Plus, gradle 7.x cannot build OS 1.0 anyway and will fail the workflow in the middle.
Thanks.
</issue>
<code>
[start of src/manifests_workflow/component_opensearch_min.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 from typing import Any, List
8
9 from git.git_repository import GitRepository
10 from manifests_workflow.component import Component
11 from manifests_workflow.component_opensearch import ComponentOpenSearch
12 from system.properties_file import PropertiesFile
13
14
15 class ComponentOpenSearchMin(Component):
16 def __init__(self, repo: GitRepository, snapshot: bool = False) -> None:
17 super().__init__(
18 "OpenSearch",
19 repo,
20 snapshot,
21 ["gradle:publish", "gradle:properties:version"],
22 )
23
24 @classmethod
25 def branches(self, url: str = "https://github.com/opensearch-project/OpenSearch.git") -> List[str]:
26 return Component.branches(url)
27
28 @classmethod
29 def checkout(self, path: str, branch: str = "main", snapshot: bool = False) -> 'ComponentOpenSearchMin':
30 return ComponentOpenSearchMin(
31 GitRepository("https://github.com/opensearch-project/OpenSearch.git", branch, path),
32 snapshot,
33 )
34
35 def publish_to_maven_local(self) -> None:
36 cmd = ComponentOpenSearch.gradle_cmd("publishToMavenLocal", {"build.snapshot": str(self.snapshot).lower()})
37 self.git_repo.execute_silent(cmd)
38
39 @property
40 def properties(self) -> PropertiesFile:
41 cmd = ComponentOpenSearch.gradle_cmd("properties", {"build.snapshot": str(self.snapshot).lower()})
42 return PropertiesFile(self.git_repo.output(cmd))
43
44 @property
45 def version(self) -> Any:
46 self.publish_to_maven_local()
47 return self.properties.get_value("version")
48
[end of src/manifests_workflow/component_opensearch_min.py]
[start of src/manifests_workflow/input_manifests.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import glob
8 import logging
9 import os
10 import re
11 from abc import abstractmethod
12 from typing import Dict, List, Type, Union
13
14 from manifests.input_manifest import InputManifest
15 from manifests.manifests import Manifests
16 from manifests_workflow.component_opensearch import ComponentOpenSearch
17 from manifests_workflow.component_opensearch_dashboards_min import ComponentOpenSearchDashboardsMin
18 from manifests_workflow.component_opensearch_min import ComponentOpenSearchMin
19 from system.temporary_directory import TemporaryDirectory
20
21
22 class InputManifests(Manifests):
23 def __init__(self, name: str) -> None:
24 self.name = name
25 self.prefix = name.lower().replace(" ", "-")
26 super().__init__(InputManifest, InputManifests.files(self.prefix))
27
28 @classmethod
29 def manifests_path(self) -> str:
30 return os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", "manifests"))
31
32 @classmethod
33 def jenkins_path(self) -> str:
34 return os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", "jenkins"))
35
36 @classmethod
37 def cron_jenkinsfile(self) -> str:
38 return os.path.join(self.jenkins_path(), "check-for-build.jenkinsfile")
39
40 @classmethod
41 def files(self, name: str) -> List:
42 results = []
43 for filename in glob.glob(os.path.join(self.manifests_path(), f"**/{name}-*.yml")):
44 # avoids the -maven manifest
45 match = re.search(rf"^{name}-([0-9.]*).yml$", os.path.basename(filename))
46 if match:
47 results.append(filename)
48 return results
49
50 @abstractmethod
51 def update(self, min_klass: Union[Type[ComponentOpenSearchMin], Type[ComponentOpenSearchDashboardsMin]], component_klass: Type[ComponentOpenSearch], keep: bool = False) -> None:
52 known_versions = self.versions
53 logging.info(f"Known versions: {known_versions}")
54 main_versions: Dict = {}
55 with TemporaryDirectory(keep=keep, chdir=True) as work_dir:
56 logging.info(f"Checking out components into {work_dir.name}")
57
58 # check out and build #main, 1.x, etc.
59 branches = min_klass.branches()
60
61 logging.info(f"Checking {self.name} {branches} branches")
62 for branch in branches:
63 c = min_klass.checkout(
64 path=os.path.join(work_dir.name, self.name.replace(" ", ""), branch),
65 branch=branch,
66 )
67
68 version = c.version
69 logging.info(f"{self.name}#{branch} is version {version}")
70 if version not in main_versions.keys():
71 main_versions[version] = [c]
72
73 if component_klass is not None:
74 # components can increment their own version first without incrementing min
75 manifest = self.latest
76 logging.info(f"Examining components in the latest manifest of {manifest.build.name} ({manifest.build.version})")
77 for component in manifest.components.values():
78 if component.name == self.name:
79 continue
80
81 logging.info(f"Checking out {component.name}#main")
82 component = component_klass.checkout(
83 name=component.name,
84 path=os.path.join(work_dir.name, component.name),
85 opensearch_version=manifest.build.version,
86 branch="main",
87 )
88
89 component_version = component.version
90 if component_version:
91 release_version = ".".join(component_version.split(".")[:3])
92 if release_version not in main_versions.keys():
93 main_versions[release_version] = []
94 main_versions[release_version].append(component)
95 logging.info(f"{component.name}#main is version {release_version} (from {component_version})")
96
97 # summarize
98 logging.info("Found versions on main:")
99 for main_version in main_versions.keys():
100 for component in main_versions[main_version]:
101 logging.info(f" {component.name}={main_version}")
102
103 # generate new manifests
104 for release_version in sorted(main_versions.keys() - known_versions):
105 self.write_manifest(release_version, main_versions[release_version])
106 self.add_to_cron(release_version)
107
108 def create_manifest(self, version: str, components: List = []) -> InputManifest:
109 data: Dict = {
110 "schema-version": "1.0",
111 "build": {
112 "name": self.name,
113 "version": version
114 },
115 "ci": {
116 "image": {
117 "name": "opensearchstaging/ci-runner:ci-runner-centos7-v1"
118 }
119 },
120 "components": [],
121 }
122
123 for component in components:
124 logging.info(f" Adding {component.name}")
125 data["components"].append(component.to_dict())
126
127 return InputManifest(data)
128
129 def write_manifest(self, version: str, components: List = []) -> None:
130 logging.info(f"Creating new version: {version}")
131 manifest = self.create_manifest(version, components)
132 manifest_dir = os.path.join(self.manifests_path(), version)
133 os.makedirs(manifest_dir, exist_ok=True)
134 manifest_path = os.path.join(manifest_dir, f"{self.prefix}-{version}.yml")
135 manifest.to_file(manifest_path)
136 logging.info(f"Wrote {manifest_path}")
137
138 def add_to_cron(self, version: str) -> None:
139 logging.info(f"Adding new version to cron: {version}")
140 jenkinsfile = self.cron_jenkinsfile()
141 with open(jenkinsfile, "r") as f:
142 data = f.read()
143
144 cron_entry = f"H/10 * * * * %INPUT_MANIFEST={version}/{self.prefix}-{version}.yml;TARGET_JOB_NAME=distribution-build-{self.prefix}\n"
145
146 if cron_entry in data:
147 raise ValueError(f"{jenkinsfile} already contains an entry for {self.prefix} {version}")
148
149 data = data.replace(
150 "parameterizedCron '''\n",
151 f"parameterizedCron '''\n{' ' * 12}{cron_entry}"
152 )
153
154 with open(jenkinsfile, "w") as f:
155 f.write(data)
156
157 logging.info(f"Wrote {jenkinsfile}")
158
[end of src/manifests_workflow/input_manifests.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/manifests_workflow/component_opensearch_min.py b/src/manifests_workflow/component_opensearch_min.py
--- a/src/manifests_workflow/component_opensearch_min.py
+++ b/src/manifests_workflow/component_opensearch_min.py
@@ -32,10 +32,6 @@
snapshot,
)
- def publish_to_maven_local(self) -> None:
- cmd = ComponentOpenSearch.gradle_cmd("publishToMavenLocal", {"build.snapshot": str(self.snapshot).lower()})
- self.git_repo.execute_silent(cmd)
-
@property
def properties(self) -> PropertiesFile:
cmd = ComponentOpenSearch.gradle_cmd("properties", {"build.snapshot": str(self.snapshot).lower()})
@@ -43,5 +39,4 @@
@property
def version(self) -> Any:
- self.publish_to_maven_local()
return self.properties.get_value("version")
diff --git a/src/manifests_workflow/input_manifests.py b/src/manifests_workflow/input_manifests.py
--- a/src/manifests_workflow/input_manifests.py
+++ b/src/manifests_workflow/input_manifests.py
@@ -106,6 +106,11 @@
self.add_to_cron(release_version)
def create_manifest(self, version: str, components: List = []) -> InputManifest:
+ image_map = {
+ "opensearch": "opensearchstaging/ci-runner:ci-runner-centos7-opensearch-build-v1",
+ "opensearch-dashboards": "opensearchstaging/ci-runner:ci-runner-centos7-opensearch-dashboards-build-v1"
+ }
+
data: Dict = {
"schema-version": "1.0",
"build": {
@@ -114,7 +119,7 @@
},
"ci": {
"image": {
- "name": "opensearchstaging/ci-runner:ci-runner-centos7-v1"
+ "name": image_map[self.prefix]
}
},
"components": [],
|
{"golden_diff": "diff --git a/src/manifests_workflow/component_opensearch_min.py b/src/manifests_workflow/component_opensearch_min.py\n--- a/src/manifests_workflow/component_opensearch_min.py\n+++ b/src/manifests_workflow/component_opensearch_min.py\n@@ -32,10 +32,6 @@\n snapshot,\n )\n \n- def publish_to_maven_local(self) -> None:\n- cmd = ComponentOpenSearch.gradle_cmd(\"publishToMavenLocal\", {\"build.snapshot\": str(self.snapshot).lower()})\n- self.git_repo.execute_silent(cmd)\n-\n @property\n def properties(self) -> PropertiesFile:\n cmd = ComponentOpenSearch.gradle_cmd(\"properties\", {\"build.snapshot\": str(self.snapshot).lower()})\n@@ -43,5 +39,4 @@\n \n @property\n def version(self) -> Any:\n- self.publish_to_maven_local()\n return self.properties.get_value(\"version\")\ndiff --git a/src/manifests_workflow/input_manifests.py b/src/manifests_workflow/input_manifests.py\n--- a/src/manifests_workflow/input_manifests.py\n+++ b/src/manifests_workflow/input_manifests.py\n@@ -106,6 +106,11 @@\n self.add_to_cron(release_version)\n \n def create_manifest(self, version: str, components: List = []) -> InputManifest:\n+ image_map = {\n+ \"opensearch\": \"opensearchstaging/ci-runner:ci-runner-centos7-opensearch-build-v1\",\n+ \"opensearch-dashboards\": \"opensearchstaging/ci-runner:ci-runner-centos7-opensearch-dashboards-build-v1\"\n+ }\n+\n data: Dict = {\n \"schema-version\": \"1.0\",\n \"build\": {\n@@ -114,7 +119,7 @@\n },\n \"ci\": {\n \"image\": {\n- \"name\": \"opensearchstaging/ci-runner:ci-runner-centos7-v1\"\n+ \"name\": image_map[self.prefix]\n }\n },\n \"components\": [],\n", "issue": "[Enhancement] OS/OSD runner is not separated in manifest_workflow and reduce runtime\nWe want OS and OSD runner to be separated when workflow create the new manifest file.\r\nAs of now both os and osd runner are pointing to the same one.\r\nThere are different configurations between the two runners.\r\n\r\nThe workflow also runs for way too long.\r\ngradle properties does not require a gradle mavelocal to run beforehand.\r\n\r\nPlus, gradle 7.x cannot build OS 1.0 anyway and will fail the workflow in the middle.\r\n\r\nThanks.\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nfrom typing import Any, List\n\nfrom git.git_repository import GitRepository\nfrom manifests_workflow.component import Component\nfrom manifests_workflow.component_opensearch import ComponentOpenSearch\nfrom system.properties_file import PropertiesFile\n\n\nclass ComponentOpenSearchMin(Component):\n def __init__(self, repo: GitRepository, snapshot: bool = False) -> None:\n super().__init__(\n \"OpenSearch\",\n repo,\n snapshot,\n [\"gradle:publish\", \"gradle:properties:version\"],\n )\n\n @classmethod\n def branches(self, url: str = \"https://github.com/opensearch-project/OpenSearch.git\") -> List[str]:\n return Component.branches(url)\n\n @classmethod\n def checkout(self, path: str, branch: str = \"main\", snapshot: bool = False) -> 'ComponentOpenSearchMin':\n return ComponentOpenSearchMin(\n GitRepository(\"https://github.com/opensearch-project/OpenSearch.git\", branch, path),\n snapshot,\n )\n\n def publish_to_maven_local(self) -> None:\n cmd = ComponentOpenSearch.gradle_cmd(\"publishToMavenLocal\", {\"build.snapshot\": str(self.snapshot).lower()})\n self.git_repo.execute_silent(cmd)\n\n @property\n def properties(self) -> PropertiesFile:\n 
cmd = ComponentOpenSearch.gradle_cmd(\"properties\", {\"build.snapshot\": str(self.snapshot).lower()})\n return PropertiesFile(self.git_repo.output(cmd))\n\n @property\n def version(self) -> Any:\n self.publish_to_maven_local()\n return self.properties.get_value(\"version\")\n", "path": "src/manifests_workflow/component_opensearch_min.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport glob\nimport logging\nimport os\nimport re\nfrom abc import abstractmethod\nfrom typing import Dict, List, Type, Union\n\nfrom manifests.input_manifest import InputManifest\nfrom manifests.manifests import Manifests\nfrom manifests_workflow.component_opensearch import ComponentOpenSearch\nfrom manifests_workflow.component_opensearch_dashboards_min import ComponentOpenSearchDashboardsMin\nfrom manifests_workflow.component_opensearch_min import ComponentOpenSearchMin\nfrom system.temporary_directory import TemporaryDirectory\n\n\nclass InputManifests(Manifests):\n def __init__(self, name: str) -> None:\n self.name = name\n self.prefix = name.lower().replace(\" \", \"-\")\n super().__init__(InputManifest, InputManifests.files(self.prefix))\n\n @classmethod\n def manifests_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"manifests\"))\n\n @classmethod\n def jenkins_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"jenkins\"))\n\n @classmethod\n def cron_jenkinsfile(self) -> str:\n return os.path.join(self.jenkins_path(), \"check-for-build.jenkinsfile\")\n\n @classmethod\n def files(self, name: str) -> List:\n results = []\n for filename in glob.glob(os.path.join(self.manifests_path(), f\"**/{name}-*.yml\")):\n # avoids the -maven manifest\n match = re.search(rf\"^{name}-([0-9.]*).yml$\", os.path.basename(filename))\n if match:\n results.append(filename)\n return results\n\n @abstractmethod\n def update(self, min_klass: Union[Type[ComponentOpenSearchMin], Type[ComponentOpenSearchDashboardsMin]], component_klass: Type[ComponentOpenSearch], keep: bool = False) -> None:\n known_versions = self.versions\n logging.info(f\"Known versions: {known_versions}\")\n main_versions: Dict = {}\n with TemporaryDirectory(keep=keep, chdir=True) as work_dir:\n logging.info(f\"Checking out components into {work_dir.name}\")\n\n # check out and build #main, 1.x, etc.\n branches = min_klass.branches()\n\n logging.info(f\"Checking {self.name} {branches} branches\")\n for branch in branches:\n c = min_klass.checkout(\n path=os.path.join(work_dir.name, self.name.replace(\" \", \"\"), branch),\n branch=branch,\n )\n\n version = c.version\n logging.info(f\"{self.name}#{branch} is version {version}\")\n if version not in main_versions.keys():\n main_versions[version] = [c]\n\n if component_klass is not None:\n # components can increment their own version first without incrementing min\n manifest = self.latest\n logging.info(f\"Examining components in the latest manifest of {manifest.build.name} ({manifest.build.version})\")\n for component in manifest.components.values():\n if component.name == self.name:\n continue\n\n logging.info(f\"Checking out {component.name}#main\")\n component = component_klass.checkout(\n name=component.name,\n path=os.path.join(work_dir.name, component.name),\n opensearch_version=manifest.build.version,\n branch=\"main\",\n )\n\n 
component_version = component.version\n if component_version:\n release_version = \".\".join(component_version.split(\".\")[:3])\n if release_version not in main_versions.keys():\n main_versions[release_version] = []\n main_versions[release_version].append(component)\n logging.info(f\"{component.name}#main is version {release_version} (from {component_version})\")\n\n # summarize\n logging.info(\"Found versions on main:\")\n for main_version in main_versions.keys():\n for component in main_versions[main_version]:\n logging.info(f\" {component.name}={main_version}\")\n\n # generate new manifests\n for release_version in sorted(main_versions.keys() - known_versions):\n self.write_manifest(release_version, main_versions[release_version])\n self.add_to_cron(release_version)\n\n def create_manifest(self, version: str, components: List = []) -> InputManifest:\n data: Dict = {\n \"schema-version\": \"1.0\",\n \"build\": {\n \"name\": self.name,\n \"version\": version\n },\n \"ci\": {\n \"image\": {\n \"name\": \"opensearchstaging/ci-runner:ci-runner-centos7-v1\"\n }\n },\n \"components\": [],\n }\n\n for component in components:\n logging.info(f\" Adding {component.name}\")\n data[\"components\"].append(component.to_dict())\n\n return InputManifest(data)\n\n def write_manifest(self, version: str, components: List = []) -> None:\n logging.info(f\"Creating new version: {version}\")\n manifest = self.create_manifest(version, components)\n manifest_dir = os.path.join(self.manifests_path(), version)\n os.makedirs(manifest_dir, exist_ok=True)\n manifest_path = os.path.join(manifest_dir, f\"{self.prefix}-{version}.yml\")\n manifest.to_file(manifest_path)\n logging.info(f\"Wrote {manifest_path}\")\n\n def add_to_cron(self, version: str) -> None:\n logging.info(f\"Adding new version to cron: {version}\")\n jenkinsfile = self.cron_jenkinsfile()\n with open(jenkinsfile, \"r\") as f:\n data = f.read()\n\n cron_entry = f\"H/10 * * * * %INPUT_MANIFEST={version}/{self.prefix}-{version}.yml;TARGET_JOB_NAME=distribution-build-{self.prefix}\\n\"\n\n if cron_entry in data:\n raise ValueError(f\"{jenkinsfile} already contains an entry for {self.prefix} {version}\")\n\n data = data.replace(\n \"parameterizedCron '''\\n\",\n f\"parameterizedCron '''\\n{' ' * 12}{cron_entry}\"\n )\n\n with open(jenkinsfile, \"w\") as f:\n f.write(data)\n\n logging.info(f\"Wrote {jenkinsfile}\")\n", "path": "src/manifests_workflow/input_manifests.py"}]}
| 2,893 | 458 |
gh_patches_debug_15729
|
rasdani/github-patches
|
git_diff
|
Azure__azure-cli-extensions-601
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
min_pro is used in the example of help in express-route-cross-connection
- If the issue is to do with Azure CLI 2.0 in particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)
### Extension name (the extension in question)
express-route-cross-connection
### Description of issue (in as much detail as possible)
https://github.com/Azure/azure-cli-extensions/blob/bbefbe73a620c6407b522484d6b2ba848cb4f9f5/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py#L131
We shouldn't use min_profile in the help example. It needs to be updated to supportedprofile instead.
-----
</issue>
<code>
[start of src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py]
1 # --------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for license information.
4 # --------------------------------------------------------------------------------------------
5
6 from knack.help_files import helps
7
8
9 helps['network cross-connection'] = """
10 type: group
11 short-summary: Manage customers' ExpressRoute circuits.
12 long-summary: >
13 To learn more about ExpressRoute circuits visit
14 https://docs.microsoft.com/en-us/azure/expressroute/howto-circuit-cli
15 """
16
17 helps['network cross-connection list'] = """
18 type: command
19 short-summary: List all ExpressRoute circuits for the current subscription.
20 examples:
21 - name: List all ExpressRoute circuits for the current subscription.
22 text: >
23 az network cross-connection list -g MyResourceGroup
24 """
25
26 helps['network cross-connection list-arp-tables'] = """
27 type: command
28 short-summary: Show the current Address Resolution Protocol (ARP) table of an ExpressRoute circuit peering.
29 examples:
30 - name: Show the current Address Resolution Protocol (ARP) table of an ExpressRoute circuit.
31 text: |
32 az network cross-connection list-arp-tables -g MyResourceGroup -n MyCircuit \\
33 --path primary --peering-name AzurePrivatePeering
34 """
35
36 helps['network cross-connection list-route-tables'] = """
37 type: command
38 short-summary: Show the current routing table of an ExpressRoute circuit peering.
39 examples:
40 - name: Show the current routing table of an ExpressRoute circuit peering.
41 text: |
42 az network cross-connection list-route-tables -g MyResourceGroup -n MyCircuit \\
43 --path primary --peering-name AzurePrivatePeering
44 """
45
46 helps['network cross-connection show'] = """
47 type: command
48 short-summary: Get the details of an ExpressRoute circuit.
49 examples:
50 - name: Get the details of an ExpressRoute circuit.
51 text: >
52 az network cross-connection show -n MyCircuit -g MyResourceGroup
53 """
54
55 helps['network cross-connection update'] = """
56 type: command
57 short-summary: Update settings of an ExpressRoute circuit.
58 examples:
59 - name: Change the SKU of an ExpressRoute circuit from Standard to Premium.
60 text: >
61 az network cross-connection update -n MyCircuit -g MyResourceGroup --sku-tier Premium
62 """
63
64 helps['network cross-connection wait'] = """
65 type: command
66 short-summary: Place the CLI in a waiting state until a condition of the ExpressRoute is met.
67 examples:
68 - name: Pause executing next line of CLI script until the ExpressRoute circuit is successfully provisioned.
69 text: az network cross-connection wait -n MyCircuit --g MyResourceGroup --created
70 """
71
72 helps['network cross-connection peering'] = """
73 type: group
74 short-summary: Manage ExpressRoute peering of an ExpressRoute circuit.
75 """
76
77 helps['network cross-connection peering create'] = """
78 type: command
79 short-summary: Create peering settings for an ExpressRoute circuit.
80 examples:
81 - name: Create Microsoft Peering settings with IPv4 configuration.
82 text: |
83 az network cross-connection peering create -g MyResourceGroup --circuit-name MyCircuit \\
84 --peering-type MicrosoftPeering --peer-asn 10002 --vlan-id 103 \\
85 --primary-peer-subnet 101.0.0.0/30 --secondary-peer-subnet 102.0.0.0/30 \\
86 --advertised-public-prefixes 101.0.0.0/30
87 - name: Add IPv6 settings to existing IPv4 config for Microsoft peering.
88 text: |
89 az network cross-connection peering update -g MyResourceGroup --circuit-name MyCircuit \\
90 --peering-type MicrosoftPeering --ip-version ipv6 --primary-peer-subnet 2002:db00::/126 \\
91 --secondary-peer-subnet 2003:db00::/126 --advertised-public-prefixes 2002:db00::/126
92 """
93
94 helps['network cross-connection peering delete'] = """
95 type: command
96 short-summary: Delete peering settings.
97 examples:
98 - name: Delete private peering.
99 text: >
100 az network cross-connection peering delete -g MyResourceGroup --circuit-name MyCircuit -n AzurePrivatePeering
101 """
102
103 helps['network cross-connection peering list'] = """
104 type: command
105 short-summary: List peering settings of an ExpressRoute circuit.
106 examples:
107 - name: List peering settings of an ExpressRoute circuit.
108 text: >
109 az network cross-connection peering list -g MyResourceGroup --circuit-name MyCircuit
110 """
111
112 helps['network cross-connection peering show'] = """
113 type: command
114 short-summary: Get the details of an express route peering.
115 examples:
116 - name: Get private peering details of an ExpressRoute circuit.
117 text: >
118 az network cross-connection peering show -g MyResourceGroup --circuit-name MyCircuit -n AzurePrivatePeering
119 """
120
121 helps['network cross-connection peering update'] = """
122 type: command
123 short-summary: Update peering settings of an ExpressRoute circuit.
124 examples:
125 - name: Add IPv6 Microsoft Peering settings to existing IPv4 config.
126 text: |
127 az network cross-connection peering update -g MyResourceGroup \\
128 --circuit-name MyCircuit --peering-type MicrosoftPeering --ip-version ipv6 \\
129 --primary-peer-subnet 2002:db00::/126 --secondary-peer-subnet 2003:db00::/126 \\
130 --advertised-public-prefixes 2002:db00::/126
131 min_profile: latest
132 """
133
[end of src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py]
[start of src/express-route-cross-connection/setup.py]
1 #!/usr/bin/env python
2
3 # --------------------------------------------------------------------------------------------
4 # Copyright (c) Microsoft Corporation. All rights reserved.
5 # Licensed under the MIT License. See License.txt in the project root for license information.
6 # --------------------------------------------------------------------------------------------
7
8 from codecs import open
9 from setuptools import setup, find_packages
10
11 VERSION = "0.1.0"
12
13 CLASSIFIERS = [
14 'Development Status :: 4 - Beta',
15 'Intended Audience :: Developers',
16 'Intended Audience :: System Administrators',
17 'Programming Language :: Python',
18 'Programming Language :: Python :: 2',
19 'Programming Language :: Python :: 2.7',
20 'Programming Language :: Python :: 3',
21 'Programming Language :: Python :: 3.4',
22 'Programming Language :: Python :: 3.5',
23 'Programming Language :: Python :: 3.6',
24 'License :: OSI Approved :: MIT License',
25 ]
26
27 DEPENDENCIES = []
28
29 setup(
30 name='express-route-cross-connection',
31 version=VERSION,
32 description='Manage customer ExpressRoute circuits using an ExpressRoute cross-connection.',
33 long_description='These commands give ISPs limited ability to manage the ExpressRoute circuits of ' \
34 'their customers through an ExpressRoute cross-connection resource.',
35 license='MIT',
36 author='Microsoft Corporation',
37 author_email='[email protected]',
38 url='https://github.com/Azure/azure-cli-extensions/tree/master/src/express-route-cross-connection',
39 classifiers=CLASSIFIERS,
40 package_data={'azext_expressroutecrossconnection': ['azext_metadata.json']},
41 packages=find_packages(),
42 install_requires=DEPENDENCIES
43 )
44
[end of src/express-route-cross-connection/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py b/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py
--- a/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py
+++ b/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py
@@ -128,5 +128,4 @@
--circuit-name MyCircuit --peering-type MicrosoftPeering --ip-version ipv6 \\
--primary-peer-subnet 2002:db00::/126 --secondary-peer-subnet 2003:db00::/126 \\
--advertised-public-prefixes 2002:db00::/126
- min_profile: latest
"""
diff --git a/src/express-route-cross-connection/setup.py b/src/express-route-cross-connection/setup.py
--- a/src/express-route-cross-connection/setup.py
+++ b/src/express-route-cross-connection/setup.py
@@ -8,7 +8,7 @@
from codecs import open
from setuptools import setup, find_packages
-VERSION = "0.1.0"
+VERSION = "0.1.1"
CLASSIFIERS = [
'Development Status :: 4 - Beta',
|
{"golden_diff": "diff --git a/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py b/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py\n--- a/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py\n+++ b/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py\n@@ -128,5 +128,4 @@\n --circuit-name MyCircuit --peering-type MicrosoftPeering --ip-version ipv6 \\\\\n --primary-peer-subnet 2002:db00::/126 --secondary-peer-subnet 2003:db00::/126 \\\\\n --advertised-public-prefixes 2002:db00::/126\n- min_profile: latest\n \"\"\"\ndiff --git a/src/express-route-cross-connection/setup.py b/src/express-route-cross-connection/setup.py\n--- a/src/express-route-cross-connection/setup.py\n+++ b/src/express-route-cross-connection/setup.py\n@@ -8,7 +8,7 @@\n from codecs import open\n from setuptools import setup, find_packages\n \n-VERSION = \"0.1.0\"\n+VERSION = \"0.1.1\"\n \n CLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n", "issue": "min_pro is used in the example of help in express-route-cross-connection\n- If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)\r\n\r\n### Extension name (the extension in question)\r\nexpress-route-cross-connection\r\n\r\n### Description of issue (in as much detail as possible)\r\nhttps://github.com/Azure/azure-cli-extensions/blob/bbefbe73a620c6407b522484d6b2ba848cb4f9f5/src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py#L131\r\n\r\nWe shouldn't use min_profile in the help example. It needs to be updated to supportedprofile instead.\r\n-----\r\n\r\n\n", "before_files": [{"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom knack.help_files import helps\n\n\nhelps['network cross-connection'] = \"\"\"\n type: group\n short-summary: Manage customers' ExpressRoute circuits.\n long-summary: >\n To learn more about ExpressRoute circuits visit\n https://docs.microsoft.com/en-us/azure/expressroute/howto-circuit-cli\n\"\"\"\n\nhelps['network cross-connection list'] = \"\"\"\n type: command\n short-summary: List all ExpressRoute circuits for the current subscription.\n examples:\n - name: List all ExpressRoute circuits for the current subscription.\n text: >\n az network cross-connection list -g MyResourceGroup\n\"\"\"\n\nhelps['network cross-connection list-arp-tables'] = \"\"\"\n type: command\n short-summary: Show the current Address Resolution Protocol (ARP) table of an ExpressRoute circuit peering.\n examples:\n - name: Show the current Address Resolution Protocol (ARP) table of an ExpressRoute circuit.\n text: |\n az network cross-connection list-arp-tables -g MyResourceGroup -n MyCircuit \\\\\n --path primary --peering-name AzurePrivatePeering\n\"\"\"\n\nhelps['network cross-connection list-route-tables'] = \"\"\"\n type: command\n short-summary: Show the current routing table of an ExpressRoute circuit peering.\n examples:\n - name: Show the current routing table of an ExpressRoute circuit peering.\n text: |\n az network cross-connection list-route-tables -g MyResourceGroup -n MyCircuit \\\\\n --path primary --peering-name AzurePrivatePeering\n\"\"\"\n\nhelps['network cross-connection show'] = \"\"\"\n type: command\n short-summary: Get the details of an ExpressRoute circuit.\n examples:\n - name: Get the details of an ExpressRoute circuit.\n text: >\n az network cross-connection show -n MyCircuit -g MyResourceGroup\n\"\"\"\n\nhelps['network cross-connection update'] = \"\"\"\n type: command\n short-summary: Update settings of an ExpressRoute circuit.\n examples:\n - name: Change the SKU of an ExpressRoute circuit from Standard to Premium.\n text: >\n az network cross-connection update -n MyCircuit -g MyResourceGroup --sku-tier Premium\n\"\"\"\n\nhelps['network cross-connection wait'] = \"\"\"\n type: command\n short-summary: Place the CLI in a waiting state until a condition of the ExpressRoute is met.\n examples:\n - name: Pause executing next line of CLI script until the ExpressRoute circuit is successfully provisioned.\n text: az network cross-connection wait -n MyCircuit --g MyResourceGroup --created\n\"\"\"\n\nhelps['network cross-connection peering'] = \"\"\"\n type: group\n short-summary: Manage ExpressRoute peering of an ExpressRoute circuit.\n\"\"\"\n\nhelps['network cross-connection peering create'] = \"\"\"\n type: command\n short-summary: Create peering settings for an ExpressRoute circuit.\n examples:\n - name: Create Microsoft Peering settings with IPv4 configuration.\n text: |\n az network cross-connection peering create -g MyResourceGroup --circuit-name MyCircuit \\\\\n --peering-type MicrosoftPeering --peer-asn 10002 --vlan-id 103 \\\\\n --primary-peer-subnet 101.0.0.0/30 --secondary-peer-subnet 102.0.0.0/30 \\\\\n --advertised-public-prefixes 101.0.0.0/30\n - name: Add IPv6 settings to existing IPv4 config for Microsoft peering.\n text: |\n az network cross-connection peering update -g MyResourceGroup --circuit-name MyCircuit \\\\\n --peering-type MicrosoftPeering --ip-version ipv6 --primary-peer-subnet 2002:db00::/126 \\\\\n 
--secondary-peer-subnet 2003:db00::/126 --advertised-public-prefixes 2002:db00::/126\n\"\"\"\n\nhelps['network cross-connection peering delete'] = \"\"\"\n type: command\n short-summary: Delete peering settings.\n examples:\n - name: Delete private peering.\n text: >\n az network cross-connection peering delete -g MyResourceGroup --circuit-name MyCircuit -n AzurePrivatePeering\n\"\"\"\n\nhelps['network cross-connection peering list'] = \"\"\"\n type: command\n short-summary: List peering settings of an ExpressRoute circuit.\n examples:\n - name: List peering settings of an ExpressRoute circuit.\n text: >\n az network cross-connection peering list -g MyResourceGroup --circuit-name MyCircuit\n\"\"\"\n\nhelps['network cross-connection peering show'] = \"\"\"\n type: command\n short-summary: Get the details of an express route peering.\n examples:\n - name: Get private peering details of an ExpressRoute circuit.\n text: >\n az network cross-connection peering show -g MyResourceGroup --circuit-name MyCircuit -n AzurePrivatePeering\n\"\"\"\n\nhelps['network cross-connection peering update'] = \"\"\"\n type: command\n short-summary: Update peering settings of an ExpressRoute circuit.\n examples:\n - name: Add IPv6 Microsoft Peering settings to existing IPv4 config.\n text: |\n az network cross-connection peering update -g MyResourceGroup \\\\\n --circuit-name MyCircuit --peering-type MicrosoftPeering --ip-version ipv6 \\\\\n --primary-peer-subnet 2002:db00::/126 --secondary-peer-subnet 2003:db00::/126 \\\\\n --advertised-public-prefixes 2002:db00::/126\n min_profile: latest\n\"\"\"\n", "path": "src/express-route-cross-connection/azext_expressroutecrossconnection/_help.py"}, {"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"0.1.0\"\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\nDEPENDENCIES = []\n\nsetup(\n name='express-route-cross-connection',\n version=VERSION,\n description='Manage customer ExpressRoute circuits using an ExpressRoute cross-connection.',\n long_description='These commands give ISPs limited ability to manage the ExpressRoute circuits of ' \\\n 'their customers through an ExpressRoute cross-connection resource.',\n license='MIT',\n author='Microsoft Corporation',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/express-route-cross-connection',\n classifiers=CLASSIFIERS,\n package_data={'azext_expressroutecrossconnection': ['azext_metadata.json']},\n packages=find_packages(),\n install_requires=DEPENDENCIES\n)\n", "path": "src/express-route-cross-connection/setup.py"}]}
| 2,792 | 304 |
gh_patches_debug_37896
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-1337
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make StreamField lazily fetch related objects
I have a site that includes a list of pages in the footer (privacy policy, terms and conditions, etc). This is implemented using a `wagtailsettings` model with a join table/InlinePanel to `wagtailcore.Page`. Every page load thus fetches these pages to print their URL and title in the footer.
It appears that when a page is fetched, any related models referenced in that page's StreamField contents are also fetched. This means that loading the links for the footer of one page means fetching all of the images contained in the content of all of the footer pages. These extra database calls are clearly wasteful.
StreamField instances should lazily fetch related objects when the StreamField is first accessed, instead of as soon as it is loaded.
</issue>
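Editorial note (not part of the original issue or repository): the lazy behaviour being requested can be illustrated with a small, self-contained sketch — keep the raw per-child data as stored, and only convert a child to its native value the first time it is indexed. The names `LazyStream` and `expand` below are hypothetical and exist purely for illustration; the accepted patch later in this record applies the same idea inside `StreamValue.__getitem__`, deferring `to_python()` until a child is first accessed.

```python
import collections.abc


class LazyStream(collections.abc.Sequence):
    """Sketch of a sequence that defers expensive per-child conversion until access."""

    def __init__(self, raw_items, expand):
        self._raw_items = raw_items   # raw JSON-ish data, e.g. [{'type': ..., 'value': ...}, ...]
        self._expand = expand         # callable that does the expensive conversion
        self._cache = {}              # index -> already-converted value

    def __getitem__(self, i):
        if i not in self._cache:      # convert lazily, once per index
            self._cache[i] = self._expand(self._raw_items[i])
        return self._cache[i]

    def __len__(self):
        return len(self._raw_items)


# Only children that are actually accessed get converted; untouched ones never are.
stream = LazyStream(
    [{'type': 'heading', 'value': 'Hello'}, {'type': 'image', 'value': 42}],
    expand=lambda raw: (raw['type'], raw['value']),
)
print(stream[0])  # expands just the first child; the second item is never converted
```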
<code>
[start of wagtail/wagtailcore/blocks/stream_block.py]
1 from __future__ import absolute_import, unicode_literals
2
3 import collections
4
5 from django import forms
6 from django.core.exceptions import ValidationError
7 from django.forms.utils import ErrorList
8 from django.template.loader import render_to_string
9 from django.utils.encoding import python_2_unicode_compatible, force_text
10 from django.utils.html import format_html_join
11 from django.utils.safestring import mark_safe
12
13 import six
14
15 from wagtail.wagtailcore.utils import escape_script
16
17 from .base import Block, DeclarativeSubBlocksMetaclass, BoundBlock
18 from .utils import indent, js_dict
19
20
21 __all__ = ['BaseStreamBlock', 'StreamBlock', 'StreamValue']
22
23
24 class BaseStreamBlock(Block):
25 # TODO: decide what it means to pass a 'default' arg to StreamBlock's constructor. Logically we want it to be
26 # of type StreamValue, but we can't construct one of those because it needs a reference back to the StreamBlock
27 # that we haven't constructed yet...
28 class Meta:
29 @property
30 def default(self):
31 return StreamValue(self, [])
32
33 def __init__(self, local_blocks=None, **kwargs):
34 self._constructor_kwargs = kwargs
35
36 super(BaseStreamBlock, self).__init__(**kwargs)
37
38 self.child_blocks = self.base_blocks.copy() # create a local (shallow) copy of base_blocks so that it can be supplemented by local_blocks
39 if local_blocks:
40 for name, block in local_blocks:
41 block.set_name(name)
42 self.child_blocks[name] = block
43
44 self.dependencies = self.child_blocks.values()
45
46 def render_list_member(self, block_type_name, value, prefix, index, errors=None):
47 """
48 Render the HTML for a single list item. This consists of an <li> wrapper, hidden fields
49 to manage ID/deleted state/type, delete/reorder buttons, and the child block's own HTML.
50 """
51 child_block = self.child_blocks[block_type_name]
52 child = child_block.bind(value, prefix="%s-value" % prefix, errors=errors)
53 return render_to_string('wagtailadmin/block_forms/stream_member.html', {
54 'child_blocks': self.child_blocks.values(),
55 'block_type_name': block_type_name,
56 'prefix': prefix,
57 'child': child,
58 'index': index,
59 })
60
61 def html_declarations(self):
62 return format_html_join(
63 '\n', '<script type="text/template" id="{0}-newmember-{1}">{2}</script>',
64 [
65 (
66 self.definition_prefix,
67 name,
68 mark_safe(escape_script(self.render_list_member(name, child_block.meta.default, '__PREFIX__', '')))
69 )
70 for name, child_block in self.child_blocks.items()
71 ]
72 )
73
74 @property
75 def media(self):
76 return forms.Media(js=['wagtailadmin/js/blocks/sequence.js', 'wagtailadmin/js/blocks/stream.js'])
77
78 def js_initializer(self):
79 # compile a list of info dictionaries, one for each available block type
80 child_blocks = []
81 for name, child_block in self.child_blocks.items():
82 # each info dictionary specifies at least a block name
83 child_block_info = {'name': "'%s'" % name}
84
85 # if the child defines a JS initializer function, include that in the info dict
86 # along with the param that needs to be passed to it for initializing an empty/default block
87 # of that type
88 child_js_initializer = child_block.js_initializer()
89 if child_js_initializer:
90 child_block_info['initializer'] = child_js_initializer
91
92 child_blocks.append(indent(js_dict(child_block_info)))
93
94 opts = {
95 'definitionPrefix': "'%s'" % self.definition_prefix,
96 'childBlocks': '[\n%s\n]' % ',\n'.join(child_blocks),
97 }
98
99 return "StreamBlock(%s)" % js_dict(opts)
100
101 def render_form(self, value, prefix='', errors=None):
102 if errors:
103 if len(errors) > 1:
104 # We rely on ListBlock.clean throwing a single ValidationError with a specially crafted
105 # 'params' attribute that we can pull apart and distribute to the child blocks
106 raise TypeError('ListBlock.render_form unexpectedly received multiple errors')
107 error_list = errors.as_data()[0].params
108 else:
109 error_list = None
110
111 # drop any child values that are an unrecognised block type
112 valid_children = [child for child in value if child.block_type in self.child_blocks]
113
114 list_members_html = [
115 self.render_list_member(child.block_type, child.value, "%s-%d" % (prefix, i), i,
116 errors=error_list[i] if error_list else None)
117 for (i, child) in enumerate(valid_children)
118 ]
119
120 return render_to_string('wagtailadmin/block_forms/stream.html', {
121 'label': self.label,
122 'prefix': prefix,
123 'list_members_html': list_members_html,
124 'child_blocks': self.child_blocks.values(),
125 'header_menu_prefix': '%s-before' % prefix,
126 })
127
128 def value_from_datadict(self, data, files, prefix):
129 count = int(data['%s-count' % prefix])
130 values_with_indexes = []
131 for i in range(0, count):
132 if data['%s-%d-deleted' % (prefix, i)]:
133 continue
134 block_type_name = data['%s-%d-type' % (prefix, i)]
135 try:
136 child_block = self.child_blocks[block_type_name]
137 except KeyError:
138 continue
139
140 values_with_indexes.append(
141 (
142 int(data['%s-%d-order' % (prefix, i)]),
143 block_type_name,
144 child_block.value_from_datadict(data, files, '%s-%d-value' % (prefix, i)),
145 )
146 )
147
148 values_with_indexes.sort()
149 return StreamValue(self, [
150 (child_block_type_name, value)
151 for (index, child_block_type_name, value) in values_with_indexes
152 ])
153
154 def clean(self, value):
155 cleaned_data = []
156 errors = []
157 for child in value: # child is a BoundBlock instance
158 try:
159 cleaned_data.append(
160 (child.block.name, child.block.clean(child.value))
161 )
162 except ValidationError as e:
163 errors.append(ErrorList([e]))
164 else:
165 errors.append(None)
166
167 if any(errors):
168 # The message here is arbitrary - outputting error messages is delegated to the child blocks,
169 # which only involves the 'params' list
170 raise ValidationError('Validation error in StreamBlock', params=errors)
171
172 return StreamValue(self, cleaned_data)
173
174 def to_python(self, value):
175 # the incoming JSONish representation is a list of dicts, each with a 'type' and 'value' field.
176 # Convert this to a StreamValue backed by a list of (type, value) tuples
177 return StreamValue(self, [
178 (child_data['type'], self.child_blocks[child_data['type']].to_python(child_data['value']))
179 for child_data in value
180 if child_data['type'] in self.child_blocks
181 ])
182
183 def get_prep_value(self, value):
184 if value is None:
185 # treat None as identical to an empty stream
186 return []
187
188 return [
189 {'type': child.block.name, 'value': child.block.get_prep_value(child.value)}
190 for child in value # child is a BoundBlock instance
191 ]
192
193 def render_basic(self, value):
194 return format_html_join('\n', '<div class="block-{1}">{0}</div>',
195 [(force_text(child), child.block_type) for child in value]
196 )
197
198 def get_searchable_content(self, value):
199 content = []
200
201 for child in value:
202 content.extend(child.block.get_searchable_content(child.value))
203
204 return content
205
206 def deconstruct(self):
207 """
208 Always deconstruct StreamBlock instances as if they were plain StreamBlocks with all of the
209 field definitions passed to the constructor - even if in reality this is a subclass of StreamBlock
210 with the fields defined declaratively, or some combination of the two.
211
212 This ensures that the field definitions get frozen into migrations, rather than leaving a reference
213 to a custom subclass in the user's models.py that may or may not stick around.
214 """
215 path = 'wagtail.wagtailcore.blocks.StreamBlock'
216 args = [self.child_blocks.items()]
217 kwargs = self._constructor_kwargs
218 return (path, args, kwargs)
219
220
221 class StreamBlock(six.with_metaclass(DeclarativeSubBlocksMetaclass, BaseStreamBlock)):
222 pass
223
224
225 @python_2_unicode_compatible # provide equivalent __unicode__ and __str__ methods on Py2
226 class StreamValue(collections.Sequence):
227 """
228 Custom type used to represent the value of a StreamBlock; behaves as a sequence of BoundBlocks
229 (which keep track of block types in a way that the values alone wouldn't).
230 """
231
232 @python_2_unicode_compatible
233 class StreamChild(BoundBlock):
234 """Provides some extensions to BoundBlock to make it more natural to work with on front-end templates"""
235 def __str__(self):
236 """Render the value according to the block's native rendering"""
237 return self.block.render(self.value)
238
239 @property
240 def block_type(self):
241 """
242 Syntactic sugar so that we can say child.block_type instead of child.block.name.
243 (This doesn't belong on BoundBlock itself because the idea of block.name denoting
244 the child's "type" ('heading', 'paragraph' etc) is unique to StreamBlock, and in the
245 wider context people are liable to confuse it with the block class (CharBlock etc).
246 """
247 return self.block.name
248
249 def __init__(self, stream_block, stream_data):
250 self.stream_block = stream_block # the StreamBlock object that handles this value
251 self.stream_data = stream_data # a list of (type_name, value) tuples
252 self._bound_blocks = {} # populated lazily from stream_data as we access items through __getitem__
253
254 def __getitem__(self, i):
255 if i not in self._bound_blocks:
256 type_name, value = self.stream_data[i]
257 child_block = self.stream_block.child_blocks[type_name]
258 self._bound_blocks[i] = StreamValue.StreamChild(child_block, value)
259
260 return self._bound_blocks[i]
261
262 def __len__(self):
263 return len(self.stream_data)
264
265 def __repr__(self):
266 return repr(list(self))
267
268 def __str__(self):
269 return self.stream_block.render(self)
270
[end of wagtail/wagtailcore/blocks/stream_block.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/wagtailcore/blocks/stream_block.py b/wagtail/wagtailcore/blocks/stream_block.py
--- a/wagtail/wagtailcore/blocks/stream_block.py
+++ b/wagtail/wagtailcore/blocks/stream_block.py
@@ -173,12 +173,12 @@
def to_python(self, value):
# the incoming JSONish representation is a list of dicts, each with a 'type' and 'value' field.
- # Convert this to a StreamValue backed by a list of (type, value) tuples
+ # This is passed to StreamValue to be expanded lazily - but first we reject any unrecognised
+ # block types from the list
return StreamValue(self, [
- (child_data['type'], self.child_blocks[child_data['type']].to_python(child_data['value']))
- for child_data in value
+ child_data for child_data in value
if child_data['type'] in self.child_blocks
- ])
+ ], is_lazy=True)
def get_prep_value(self, value):
if value is None:
@@ -246,15 +246,36 @@
"""
return self.block.name
- def __init__(self, stream_block, stream_data):
+ def __init__(self, stream_block, stream_data, is_lazy=False):
+ """
+ Construct a StreamValue linked to the given StreamBlock,
+ with child values given in stream_data.
+
+ Passing is_lazy=True means that stream_data is raw JSONish data as stored
+ in the database, and needs to be converted to native values
+ (using block.to_python()) when accessed. In this mode, stream_data is a
+ list of dicts, each containing 'type' and 'value' keys.
+
+ Passing is_lazy=False means that stream_data consists of immediately usable
+ native values. In this mode, stream_data is a list of (type_name, value)
+ tuples.
+ """
+ self.is_lazy = is_lazy
self.stream_block = stream_block # the StreamBlock object that handles this value
self.stream_data = stream_data # a list of (type_name, value) tuples
self._bound_blocks = {} # populated lazily from stream_data as we access items through __getitem__
def __getitem__(self, i):
if i not in self._bound_blocks:
- type_name, value = self.stream_data[i]
- child_block = self.stream_block.child_blocks[type_name]
+ if self.is_lazy:
+ raw_value = self.stream_data[i]
+ type_name = raw_value['type']
+ child_block = self.stream_block.child_blocks[type_name]
+ value = child_block.to_python(raw_value['value'])
+ else:
+ type_name, value = self.stream_data[i]
+ child_block = self.stream_block.child_blocks[type_name]
+
self._bound_blocks[i] = StreamValue.StreamChild(child_block, value)
return self._bound_blocks[i]
|
{"golden_diff": "diff --git a/wagtail/wagtailcore/blocks/stream_block.py b/wagtail/wagtailcore/blocks/stream_block.py\n--- a/wagtail/wagtailcore/blocks/stream_block.py\n+++ b/wagtail/wagtailcore/blocks/stream_block.py\n@@ -173,12 +173,12 @@\n \n def to_python(self, value):\n # the incoming JSONish representation is a list of dicts, each with a 'type' and 'value' field.\n- # Convert this to a StreamValue backed by a list of (type, value) tuples\n+ # This is passed to StreamValue to be expanded lazily - but first we reject any unrecognised\n+ # block types from the list\n return StreamValue(self, [\n- (child_data['type'], self.child_blocks[child_data['type']].to_python(child_data['value']))\n- for child_data in value\n+ child_data for child_data in value\n if child_data['type'] in self.child_blocks\n- ])\n+ ], is_lazy=True)\n \n def get_prep_value(self, value):\n if value is None:\n@@ -246,15 +246,36 @@\n \"\"\"\n return self.block.name\n \n- def __init__(self, stream_block, stream_data):\n+ def __init__(self, stream_block, stream_data, is_lazy=False):\n+ \"\"\"\n+ Construct a StreamValue linked to the given StreamBlock,\n+ with child values given in stream_data.\n+\n+ Passing is_lazy=True means that stream_data is raw JSONish data as stored\n+ in the database, and needs to be converted to native values\n+ (using block.to_python()) when accessed. In this mode, stream_data is a\n+ list of dicts, each containing 'type' and 'value' keys.\n+\n+ Passing is_lazy=False means that stream_data consists of immediately usable\n+ native values. In this mode, stream_data is a list of (type_name, value)\n+ tuples.\n+ \"\"\"\n+ self.is_lazy = is_lazy\n self.stream_block = stream_block # the StreamBlock object that handles this value\n self.stream_data = stream_data # a list of (type_name, value) tuples\n self._bound_blocks = {} # populated lazily from stream_data as we access items through __getitem__\n \n def __getitem__(self, i):\n if i not in self._bound_blocks:\n- type_name, value = self.stream_data[i]\n- child_block = self.stream_block.child_blocks[type_name]\n+ if self.is_lazy:\n+ raw_value = self.stream_data[i]\n+ type_name = raw_value['type']\n+ child_block = self.stream_block.child_blocks[type_name]\n+ value = child_block.to_python(raw_value['value'])\n+ else:\n+ type_name, value = self.stream_data[i]\n+ child_block = self.stream_block.child_blocks[type_name]\n+\n self._bound_blocks[i] = StreamValue.StreamChild(child_block, value)\n \n return self._bound_blocks[i]\n", "issue": "Make StreamField lazily fetch related objects\nI have a site that includes a list of pages in the footer (privacy policy, terms and conditions, etc). This is implemented using a `wagtailsettings` model with a join table/InlinePanel to `wagtailcore.Page`. Every page load thus fetches these pages to print their URL and title in the footer.\n\nIt appears that when a page is fetched, any related models referenced in that pages StreamField contents are also fetched. This means that loading the links for the footer of one page means fetching all of the images contained in the content of all of the footer pages. 
These extra database calls are clearly wasteful.\n\nStreamField instances should lazily fetch related objects when the StreamField is first accessed, instead of as soon as it is loaded.\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport collections\n\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.forms.utils import ErrorList\nfrom django.template.loader import render_to_string\nfrom django.utils.encoding import python_2_unicode_compatible, force_text\nfrom django.utils.html import format_html_join\nfrom django.utils.safestring import mark_safe\n\nimport six\n\nfrom wagtail.wagtailcore.utils import escape_script\n\nfrom .base import Block, DeclarativeSubBlocksMetaclass, BoundBlock\nfrom .utils import indent, js_dict\n\n\n__all__ = ['BaseStreamBlock', 'StreamBlock', 'StreamValue']\n\n\nclass BaseStreamBlock(Block):\n # TODO: decide what it means to pass a 'default' arg to StreamBlock's constructor. Logically we want it to be\n # of type StreamValue, but we can't construct one of those because it needs a reference back to the StreamBlock\n # that we haven't constructed yet...\n class Meta:\n @property\n def default(self):\n return StreamValue(self, [])\n\n def __init__(self, local_blocks=None, **kwargs):\n self._constructor_kwargs = kwargs\n\n super(BaseStreamBlock, self).__init__(**kwargs)\n\n self.child_blocks = self.base_blocks.copy() # create a local (shallow) copy of base_blocks so that it can be supplemented by local_blocks\n if local_blocks:\n for name, block in local_blocks:\n block.set_name(name)\n self.child_blocks[name] = block\n\n self.dependencies = self.child_blocks.values()\n\n def render_list_member(self, block_type_name, value, prefix, index, errors=None):\n \"\"\"\n Render the HTML for a single list item. 
This consists of an <li> wrapper, hidden fields\n to manage ID/deleted state/type, delete/reorder buttons, and the child block's own HTML.\n \"\"\"\n child_block = self.child_blocks[block_type_name]\n child = child_block.bind(value, prefix=\"%s-value\" % prefix, errors=errors)\n return render_to_string('wagtailadmin/block_forms/stream_member.html', {\n 'child_blocks': self.child_blocks.values(),\n 'block_type_name': block_type_name,\n 'prefix': prefix,\n 'child': child,\n 'index': index,\n })\n\n def html_declarations(self):\n return format_html_join(\n '\\n', '<script type=\"text/template\" id=\"{0}-newmember-{1}\">{2}</script>',\n [\n (\n self.definition_prefix,\n name,\n mark_safe(escape_script(self.render_list_member(name, child_block.meta.default, '__PREFIX__', '')))\n )\n for name, child_block in self.child_blocks.items()\n ]\n )\n\n @property\n def media(self):\n return forms.Media(js=['wagtailadmin/js/blocks/sequence.js', 'wagtailadmin/js/blocks/stream.js'])\n\n def js_initializer(self):\n # compile a list of info dictionaries, one for each available block type\n child_blocks = []\n for name, child_block in self.child_blocks.items():\n # each info dictionary specifies at least a block name\n child_block_info = {'name': \"'%s'\" % name}\n\n # if the child defines a JS initializer function, include that in the info dict\n # along with the param that needs to be passed to it for initializing an empty/default block\n # of that type\n child_js_initializer = child_block.js_initializer()\n if child_js_initializer:\n child_block_info['initializer'] = child_js_initializer\n\n child_blocks.append(indent(js_dict(child_block_info)))\n\n opts = {\n 'definitionPrefix': \"'%s'\" % self.definition_prefix,\n 'childBlocks': '[\\n%s\\n]' % ',\\n'.join(child_blocks),\n }\n\n return \"StreamBlock(%s)\" % js_dict(opts)\n\n def render_form(self, value, prefix='', errors=None):\n if errors:\n if len(errors) > 1:\n # We rely on ListBlock.clean throwing a single ValidationError with a specially crafted\n # 'params' attribute that we can pull apart and distribute to the child blocks\n raise TypeError('ListBlock.render_form unexpectedly received multiple errors')\n error_list = errors.as_data()[0].params\n else:\n error_list = None\n\n # drop any child values that are an unrecognised block type\n valid_children = [child for child in value if child.block_type in self.child_blocks]\n\n list_members_html = [\n self.render_list_member(child.block_type, child.value, \"%s-%d\" % (prefix, i), i,\n errors=error_list[i] if error_list else None)\n for (i, child) in enumerate(valid_children)\n ]\n\n return render_to_string('wagtailadmin/block_forms/stream.html', {\n 'label': self.label,\n 'prefix': prefix,\n 'list_members_html': list_members_html,\n 'child_blocks': self.child_blocks.values(),\n 'header_menu_prefix': '%s-before' % prefix,\n })\n\n def value_from_datadict(self, data, files, prefix):\n count = int(data['%s-count' % prefix])\n values_with_indexes = []\n for i in range(0, count):\n if data['%s-%d-deleted' % (prefix, i)]:\n continue\n block_type_name = data['%s-%d-type' % (prefix, i)]\n try:\n child_block = self.child_blocks[block_type_name]\n except KeyError:\n continue\n\n values_with_indexes.append(\n (\n int(data['%s-%d-order' % (prefix, i)]),\n block_type_name,\n child_block.value_from_datadict(data, files, '%s-%d-value' % (prefix, i)),\n )\n )\n\n values_with_indexes.sort()\n return StreamValue(self, [\n (child_block_type_name, value)\n for (index, child_block_type_name, value) in values_with_indexes\n 
])\n\n def clean(self, value):\n cleaned_data = []\n errors = []\n for child in value: # child is a BoundBlock instance\n try:\n cleaned_data.append(\n (child.block.name, child.block.clean(child.value))\n )\n except ValidationError as e:\n errors.append(ErrorList([e]))\n else:\n errors.append(None)\n\n if any(errors):\n # The message here is arbitrary - outputting error messages is delegated to the child blocks,\n # which only involves the 'params' list\n raise ValidationError('Validation error in StreamBlock', params=errors)\n\n return StreamValue(self, cleaned_data)\n\n def to_python(self, value):\n # the incoming JSONish representation is a list of dicts, each with a 'type' and 'value' field.\n # Convert this to a StreamValue backed by a list of (type, value) tuples\n return StreamValue(self, [\n (child_data['type'], self.child_blocks[child_data['type']].to_python(child_data['value']))\n for child_data in value\n if child_data['type'] in self.child_blocks\n ])\n\n def get_prep_value(self, value):\n if value is None:\n # treat None as identical to an empty stream\n return []\n\n return [\n {'type': child.block.name, 'value': child.block.get_prep_value(child.value)}\n for child in value # child is a BoundBlock instance\n ]\n\n def render_basic(self, value):\n return format_html_join('\\n', '<div class=\"block-{1}\">{0}</div>',\n [(force_text(child), child.block_type) for child in value]\n )\n\n def get_searchable_content(self, value):\n content = []\n\n for child in value:\n content.extend(child.block.get_searchable_content(child.value))\n\n return content\n\n def deconstruct(self):\n \"\"\"\n Always deconstruct StreamBlock instances as if they were plain StreamBlocks with all of the\n field definitions passed to the constructor - even if in reality this is a subclass of StreamBlock\n with the fields defined declaratively, or some combination of the two.\n\n This ensures that the field definitions get frozen into migrations, rather than leaving a reference\n to a custom subclass in the user's models.py that may or may not stick around.\n \"\"\"\n path = 'wagtail.wagtailcore.blocks.StreamBlock'\n args = [self.child_blocks.items()]\n kwargs = self._constructor_kwargs\n return (path, args, kwargs)\n\n\nclass StreamBlock(six.with_metaclass(DeclarativeSubBlocksMetaclass, BaseStreamBlock)):\n pass\n\n\n@python_2_unicode_compatible # provide equivalent __unicode__ and __str__ methods on Py2\nclass StreamValue(collections.Sequence):\n \"\"\"\n Custom type used to represent the value of a StreamBlock; behaves as a sequence of BoundBlocks\n (which keep track of block types in a way that the values alone wouldn't).\n \"\"\"\n\n @python_2_unicode_compatible\n class StreamChild(BoundBlock):\n \"\"\"Provides some extensions to BoundBlock to make it more natural to work with on front-end templates\"\"\"\n def __str__(self):\n \"\"\"Render the value according to the block's native rendering\"\"\"\n return self.block.render(self.value)\n\n @property\n def block_type(self):\n \"\"\"\n Syntactic sugar so that we can say child.block_type instead of child.block.name.\n (This doesn't belong on BoundBlock itself because the idea of block.name denoting\n the child's \"type\" ('heading', 'paragraph' etc) is unique to StreamBlock, and in the\n wider context people are liable to confuse it with the block class (CharBlock etc).\n \"\"\"\n return self.block.name\n\n def __init__(self, stream_block, stream_data):\n self.stream_block = stream_block # the StreamBlock object that handles this value\n self.stream_data = 
stream_data # a list of (type_name, value) tuples\n self._bound_blocks = {} # populated lazily from stream_data as we access items through __getitem__\n\n def __getitem__(self, i):\n if i not in self._bound_blocks:\n type_name, value = self.stream_data[i]\n child_block = self.stream_block.child_blocks[type_name]\n self._bound_blocks[i] = StreamValue.StreamChild(child_block, value)\n\n return self._bound_blocks[i]\n\n def __len__(self):\n return len(self.stream_data)\n\n def __repr__(self):\n return repr(list(self))\n\n def __str__(self):\n return self.stream_block.render(self)\n", "path": "wagtail/wagtailcore/blocks/stream_block.py"}]}
| 3,710 | 671 |
gh_patches_debug_21386
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-9759
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include the `task-run` as a related resource when emitting events.
### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar request and didn't find it.
- [X] I searched the Prefect documentation for this feature.
### Prefect Version
2.x
### Describe the current behavior
Currently task-runs aren't included as related resources when emitting events.
### Describe the proposed behavior
The current task run in the TaskRunContext should be included in any event fired while a task is running.
### Example Use
_No response_
### Additional context
_No response_
</issue>
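Editorial note (not part of the original issue): the request boils down to also describing the active task run as a related resource whenever a `TaskRunContext` is present. The helper below is a hypothetical sketch that mirrors the `prefect.resource.*` key style visible in `related.py` later in this record; it is not the project's actual implementation, and the attribute names on the stand-in object are assumptions.

```python
from collections import namedtuple


def task_run_related_resource(task_run):
    """Hypothetical helper: describe a task run as a related-resource dict."""
    return {
        "prefect.resource.id": f"prefect.task-run.{task_run.id}",
        "prefect.resource.role": "task-run",
        "prefect.resource.name": task_run.name,
    }


# Tiny stand-in object, just to show the shape of the output.
FakeTaskRun = namedtuple("FakeTaskRun", ["id", "name"])
print(task_run_related_resource(FakeTaskRun(id="1234", name="my-task")))
# -> {'prefect.resource.id': 'prefect.task-run.1234',
#     'prefect.resource.role': 'task-run',
#     'prefect.resource.name': 'my-task'}
```

One plausible integration point would be inside `related_resources_from_run_context()`, adding a task-run entry whenever `TaskRunContext.get()` returns a context — which is essentially what the accepted patch later in this record does.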
<code>
[start of src/prefect/events/related.py]
1 import asyncio
2 import pendulum
3 from typing import (
4 TYPE_CHECKING,
5 Any,
6 Awaitable,
7 Callable,
8 Dict,
9 Iterable,
10 List,
11 Optional,
12 Set,
13 Tuple,
14 Union,
15 )
16 from uuid import UUID
17 from pendulum.datetime import DateTime
18
19 from .schemas import RelatedResource
20
21 if TYPE_CHECKING:
22 from prefect._internal.schemas.bases import ObjectBaseModel
23
24 ResourceCacheEntry = Dict[str, Union[str, "ObjectBaseModel", None]]
25 RelatedResourceCache = Dict[str, Tuple[ResourceCacheEntry, DateTime]]
26
27 MAX_CACHE_SIZE = 100
28 RESOURCE_CACHE: RelatedResourceCache = {}
29
30
31 def tags_as_related_resources(tags: Iterable[str]) -> List[RelatedResource]:
32 return [
33 RelatedResource(
34 __root__={
35 "prefect.resource.id": f"prefect.tag.{tag}",
36 "prefect.resource.role": "tag",
37 }
38 )
39 for tag in sorted(tags)
40 ]
41
42
43 def object_as_related_resource(kind: str, role: str, object: Any) -> RelatedResource:
44 resource_id = f"prefect.{kind}.{object.id}"
45
46 return RelatedResource(
47 __root__={
48 "prefect.resource.id": resource_id,
49 "prefect.resource.role": role,
50 "prefect.resource.name": object.name,
51 }
52 )
53
54
55 async def related_resources_from_run_context(
56 exclude: Optional[Set[str]] = None,
57 ) -> List[RelatedResource]:
58 from prefect.client.orchestration import get_client
59 from prefect.context import FlowRunContext, TaskRunContext
60
61 if exclude is None:
62 exclude = set()
63
64 flow_run_context = FlowRunContext.get()
65 task_run_context = TaskRunContext.get()
66
67 if not flow_run_context and not task_run_context:
68 return []
69
70 flow_run_id: UUID = (
71 flow_run_context.flow_run.id
72 if flow_run_context
73 else task_run_context.task_run.flow_run_id
74 )
75
76 related_objects: list[ResourceCacheEntry] = []
77
78 async with get_client() as client:
79
80 async def dummy_read():
81 return {}
82
83 related_objects = [
84 await _get_and_cache_related_object(
85 kind="flow-run",
86 role="flow-run",
87 client_method=client.read_flow_run,
88 obj_id=flow_run_id,
89 cache=RESOURCE_CACHE,
90 )
91 ]
92
93 flow_run = related_objects[0]["object"]
94
95 if flow_run:
96 related_objects += list(
97 await asyncio.gather(
98 _get_and_cache_related_object(
99 kind="flow",
100 role="flow",
101 client_method=client.read_flow,
102 obj_id=flow_run.flow_id,
103 cache=RESOURCE_CACHE,
104 ),
105 (
106 _get_and_cache_related_object(
107 kind="deployment",
108 role="deployment",
109 client_method=client.read_deployment,
110 obj_id=flow_run.deployment_id,
111 cache=RESOURCE_CACHE,
112 )
113 if flow_run.deployment_id
114 else dummy_read()
115 ),
116 (
117 _get_and_cache_related_object(
118 kind="work-queue",
119 role="work-queue",
120 client_method=client.read_work_queue,
121 obj_id=flow_run.work_queue_id,
122 cache=RESOURCE_CACHE,
123 )
124 if flow_run.work_queue_id
125 else dummy_read()
126 ),
127 (
128 _get_and_cache_related_object(
129 kind="work-pool",
130 role="work-pool",
131 client_method=client.read_work_pool,
132 obj_id=flow_run.work_pool_name,
133 cache=RESOURCE_CACHE,
134 )
135 if flow_run.work_pool_name
136 else dummy_read()
137 ),
138 )
139 )
140
141 related = []
142 tags = set()
143
144 for entry in related_objects:
145 obj_ = entry.get("object")
146 if obj_ is None:
147 continue
148
149 assert isinstance(entry["kind"], str) and isinstance(entry["role"], str)
150
151 resource = object_as_related_resource(
152 kind=entry["kind"], role=entry["kind"], object=obj_
153 )
154
155 if resource.id in exclude:
156 continue
157
158 related.append(resource)
159 if hasattr(obj_, "tags"):
160 tags |= set(obj_.tags)
161
162 related += [
163 resource
164 for resource in tags_as_related_resources(tags)
165 if resource.id not in exclude
166 ]
167
168 return related
169
170
171 async def _get_and_cache_related_object(
172 kind: str,
173 role: str,
174 client_method: Callable[[Union[UUID, str]], Awaitable[Optional["ObjectBaseModel"]]],
175 obj_id: Union[UUID, str],
176 cache: RelatedResourceCache,
177 ) -> ResourceCacheEntry:
178 cache_key = f"{kind}.{obj_id}"
179 entry = None
180
181 if cache_key in cache:
182 entry, _ = cache[cache_key]
183 else:
184 obj_ = await client_method(obj_id)
185 entry = {
186 "kind": kind,
187 "object": obj_,
188 }
189
190 cache[cache_key] = (entry, pendulum.now("UTC"))
191
192 # In the case of a worker or agent this cache could be long-lived. To keep
193 # from running out of memory only keep `MAX_CACHE_SIZE` entries in the
194 # cache.
195 if len(cache) > MAX_CACHE_SIZE:
196 oldest_key = sorted(
197 [(key, timestamp) for key, (_, timestamp) in cache.items()],
198 key=lambda k: k[1],
199 )[0][0]
200
201 if oldest_key:
202 del cache[oldest_key]
203
204 # Because the role is event specific and can change depending on the
205 # type of event being emitted, this adds the role from the args to the
206 # entry before returning it rather than storing it in the cache.
207 entry["role"] = role
208 return entry
209
[end of src/prefect/events/related.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/prefect/events/related.py b/src/prefect/events/related.py
--- a/src/prefect/events/related.py
+++ b/src/prefect/events/related.py
@@ -80,15 +80,33 @@
async def dummy_read():
return {}
- related_objects = [
- await _get_and_cache_related_object(
- kind="flow-run",
- role="flow-run",
- client_method=client.read_flow_run,
- obj_id=flow_run_id,
- cache=RESOURCE_CACHE,
+ if flow_run_context:
+ related_objects.append(
+ {
+ "kind": "flow-run",
+ "role": "flow-run",
+ "object": flow_run_context.flow_run,
+ },
+ )
+ else:
+ related_objects.append(
+ await _get_and_cache_related_object(
+ kind="flow-run",
+ role="flow-run",
+ client_method=client.read_flow_run,
+ obj_id=flow_run_id,
+ cache=RESOURCE_CACHE,
+ )
+ )
+
+ if task_run_context:
+ related_objects.append(
+ {
+ "kind": "task-run",
+ "role": "task-run",
+ "object": task_run_context.task_run,
+ },
)
- ]
flow_run = related_objects[0]["object"]
|
{"golden_diff": "diff --git a/src/prefect/events/related.py b/src/prefect/events/related.py\n--- a/src/prefect/events/related.py\n+++ b/src/prefect/events/related.py\n@@ -80,15 +80,33 @@\n async def dummy_read():\n return {}\n \n- related_objects = [\n- await _get_and_cache_related_object(\n- kind=\"flow-run\",\n- role=\"flow-run\",\n- client_method=client.read_flow_run,\n- obj_id=flow_run_id,\n- cache=RESOURCE_CACHE,\n+ if flow_run_context:\n+ related_objects.append(\n+ {\n+ \"kind\": \"flow-run\",\n+ \"role\": \"flow-run\",\n+ \"object\": flow_run_context.flow_run,\n+ },\n+ )\n+ else:\n+ related_objects.append(\n+ await _get_and_cache_related_object(\n+ kind=\"flow-run\",\n+ role=\"flow-run\",\n+ client_method=client.read_flow_run,\n+ obj_id=flow_run_id,\n+ cache=RESOURCE_CACHE,\n+ )\n+ )\n+\n+ if task_run_context:\n+ related_objects.append(\n+ {\n+ \"kind\": \"task-run\",\n+ \"role\": \"task-run\",\n+ \"object\": task_run_context.task_run,\n+ },\n )\n- ]\n \n flow_run = related_objects[0][\"object\"]\n", "issue": "Include the `task-run` as a related resources when emitting events.\n### First check\n\n- [X] I added a descriptive title to this issue.\n- [X] I used the GitHub search to find a similar request and didn't find it.\n- [X] I searched the Prefect documentation for this feature.\n\n### Prefect Version\n\n2.x\n\n### Describe the current behavior\n\nCurrently task-runs aren't included as related resources when emitting events.\n\n### Describe the proposed behavior\n\nThe current task run in the TaskRunContext should be included in any event fired while a task is running.\n\n### Example Use\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "import asyncio\nimport pendulum\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Awaitable,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Set,\n Tuple,\n Union,\n)\nfrom uuid import UUID\nfrom pendulum.datetime import DateTime\n\nfrom .schemas import RelatedResource\n\nif TYPE_CHECKING:\n from prefect._internal.schemas.bases import ObjectBaseModel\n\nResourceCacheEntry = Dict[str, Union[str, \"ObjectBaseModel\", None]]\nRelatedResourceCache = Dict[str, Tuple[ResourceCacheEntry, DateTime]]\n\nMAX_CACHE_SIZE = 100\nRESOURCE_CACHE: RelatedResourceCache = {}\n\n\ndef tags_as_related_resources(tags: Iterable[str]) -> List[RelatedResource]:\n return [\n RelatedResource(\n __root__={\n \"prefect.resource.id\": f\"prefect.tag.{tag}\",\n \"prefect.resource.role\": \"tag\",\n }\n )\n for tag in sorted(tags)\n ]\n\n\ndef object_as_related_resource(kind: str, role: str, object: Any) -> RelatedResource:\n resource_id = f\"prefect.{kind}.{object.id}\"\n\n return RelatedResource(\n __root__={\n \"prefect.resource.id\": resource_id,\n \"prefect.resource.role\": role,\n \"prefect.resource.name\": object.name,\n }\n )\n\n\nasync def related_resources_from_run_context(\n exclude: Optional[Set[str]] = None,\n) -> List[RelatedResource]:\n from prefect.client.orchestration import get_client\n from prefect.context import FlowRunContext, TaskRunContext\n\n if exclude is None:\n exclude = set()\n\n flow_run_context = FlowRunContext.get()\n task_run_context = TaskRunContext.get()\n\n if not flow_run_context and not task_run_context:\n return []\n\n flow_run_id: UUID = (\n flow_run_context.flow_run.id\n if flow_run_context\n else task_run_context.task_run.flow_run_id\n )\n\n related_objects: list[ResourceCacheEntry] = []\n\n async with get_client() as client:\n\n async def dummy_read():\n return {}\n\n related_objects = [\n 
await _get_and_cache_related_object(\n kind=\"flow-run\",\n role=\"flow-run\",\n client_method=client.read_flow_run,\n obj_id=flow_run_id,\n cache=RESOURCE_CACHE,\n )\n ]\n\n flow_run = related_objects[0][\"object\"]\n\n if flow_run:\n related_objects += list(\n await asyncio.gather(\n _get_and_cache_related_object(\n kind=\"flow\",\n role=\"flow\",\n client_method=client.read_flow,\n obj_id=flow_run.flow_id,\n cache=RESOURCE_CACHE,\n ),\n (\n _get_and_cache_related_object(\n kind=\"deployment\",\n role=\"deployment\",\n client_method=client.read_deployment,\n obj_id=flow_run.deployment_id,\n cache=RESOURCE_CACHE,\n )\n if flow_run.deployment_id\n else dummy_read()\n ),\n (\n _get_and_cache_related_object(\n kind=\"work-queue\",\n role=\"work-queue\",\n client_method=client.read_work_queue,\n obj_id=flow_run.work_queue_id,\n cache=RESOURCE_CACHE,\n )\n if flow_run.work_queue_id\n else dummy_read()\n ),\n (\n _get_and_cache_related_object(\n kind=\"work-pool\",\n role=\"work-pool\",\n client_method=client.read_work_pool,\n obj_id=flow_run.work_pool_name,\n cache=RESOURCE_CACHE,\n )\n if flow_run.work_pool_name\n else dummy_read()\n ),\n )\n )\n\n related = []\n tags = set()\n\n for entry in related_objects:\n obj_ = entry.get(\"object\")\n if obj_ is None:\n continue\n\n assert isinstance(entry[\"kind\"], str) and isinstance(entry[\"role\"], str)\n\n resource = object_as_related_resource(\n kind=entry[\"kind\"], role=entry[\"kind\"], object=obj_\n )\n\n if resource.id in exclude:\n continue\n\n related.append(resource)\n if hasattr(obj_, \"tags\"):\n tags |= set(obj_.tags)\n\n related += [\n resource\n for resource in tags_as_related_resources(tags)\n if resource.id not in exclude\n ]\n\n return related\n\n\nasync def _get_and_cache_related_object(\n kind: str,\n role: str,\n client_method: Callable[[Union[UUID, str]], Awaitable[Optional[\"ObjectBaseModel\"]]],\n obj_id: Union[UUID, str],\n cache: RelatedResourceCache,\n) -> ResourceCacheEntry:\n cache_key = f\"{kind}.{obj_id}\"\n entry = None\n\n if cache_key in cache:\n entry, _ = cache[cache_key]\n else:\n obj_ = await client_method(obj_id)\n entry = {\n \"kind\": kind,\n \"object\": obj_,\n }\n\n cache[cache_key] = (entry, pendulum.now(\"UTC\"))\n\n # In the case of a worker or agent this cache could be long-lived. To keep\n # from running out of memory only keep `MAX_CACHE_SIZE` entries in the\n # cache.\n if len(cache) > MAX_CACHE_SIZE:\n oldest_key = sorted(\n [(key, timestamp) for key, (_, timestamp) in cache.items()],\n key=lambda k: k[1],\n )[0][0]\n\n if oldest_key:\n del cache[oldest_key]\n\n # Because the role is event specific and can change depending on the\n # type of event being emitted, this adds the role from the args to the\n # entry before returning it rather than storing it in the cache.\n entry[\"role\"] = role\n return entry\n", "path": "src/prefect/events/related.py"}]}
| 2,439 | 307 |
gh_patches_debug_35609
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-3200
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DataStore Map and Explorer not displaying map tiles since 11 July 2016
Direct tile access to MapQuest maps has been discontinued as of 11 July 2016 and the DataStore Map and Explorer previews no longer display map tiles.
The issue actually lies with recline.js; it has been logged at https://github.com/okfn/recline/issues/500, and there is a referenced patch that replaces MapQuest with OpenStreetMap: frodrigo/recline@3df0c2a2bb8897124bdbbca715b2be1fd99cb08f
Thought it would be useful to have a record here and to request that the packaged recline.js be updated.
</issue>
<code>
[start of ckanext/reclineview/plugin.py]
1 # encoding: utf-8
2
3 from logging import getLogger
4
5 from ckan.common import json
6 import ckan.plugins as p
7 import ckan.plugins.toolkit as toolkit
8
9 log = getLogger(__name__)
10 ignore_empty = p.toolkit.get_validator('ignore_empty')
11 natural_number_validator = p.toolkit.get_validator('natural_number_validator')
12 Invalid = p.toolkit.Invalid
13
14
15 def in_list(list_possible_values):
16 '''
17 Validator that checks that the input value is one of the given
18 possible values.
19
20 :param list_possible_values: function that returns list of possible values
21 for validated field
22 :type possible_values: function
23 '''
24 def validate(key, data, errors, context):
25 if not data[key] in list_possible_values():
26 raise Invalid('"{0}" is not a valid parameter'.format(data[key]))
27 return validate
28
29
30 def datastore_fields(resource, valid_field_types):
31 '''
32 Return a list of all datastore fields for a given resource, as long as
33 the datastore field type is in valid_field_types.
34
35 :param resource: resource dict
36 :type resource: dict
37 :param valid_field_types: field types to include in returned list
38 :type valid_field_types: list of strings
39 '''
40 data = {'resource_id': resource['id'], 'limit': 0}
41 fields = toolkit.get_action('datastore_search')({}, data)['fields']
42 return [{'value': f['id'], 'text': f['id']} for f in fields
43 if f['type'] in valid_field_types]
44
45
46 class ReclineViewBase(p.SingletonPlugin):
47 '''
48 This base class for the Recline view extensions.
49 '''
50 p.implements(p.IConfigurer, inherit=True)
51 p.implements(p.IResourceView, inherit=True)
52
53 def update_config(self, config):
54 '''
55 Set up the resource library, public directory and
56 template directory for the view
57 '''
58 toolkit.add_public_directory(config, 'theme/public')
59 toolkit.add_template_directory(config, 'theme/templates')
60 toolkit.add_resource('theme/public', 'ckanext-reclineview')
61
62 def can_view(self, data_dict):
63 resource = data_dict['resource']
64 return (resource.get('datastore_active') or
65 '_datastore_only_resource' in resource.get('url', ''))
66
67 def setup_template_variables(self, context, data_dict):
68 return {'resource_json': json.dumps(data_dict['resource']),
69 'resource_view_json': json.dumps(data_dict['resource_view'])}
70
71 def view_template(self, context, data_dict):
72 return 'recline_view.html'
73
74
75 class ReclineView(ReclineViewBase):
76 '''
77 This extension views resources using a Recline MultiView.
78 '''
79
80 def info(self):
81 return {'name': 'recline_view',
82 'title': 'Data Explorer',
83 'filterable': True,
84 'icon': 'table',
85 'requires_datastore': False,
86 'default_title': p.toolkit._('Data Explorer'),
87 }
88
89 def can_view(self, data_dict):
90 resource = data_dict['resource']
91
92 if (resource.get('datastore_active') or
93 '_datastore_only_resource' in resource.get('url', '')):
94 return True
95 resource_format = resource.get('format', None)
96 if resource_format:
97 return resource_format.lower() in ['csv', 'xls', 'xlsx', 'tsv']
98 else:
99 return False
100
101
102 class ReclineGridView(ReclineViewBase):
103 '''
104 This extension views resources using a Recline grid.
105 '''
106
107 def info(self):
108 return {'name': 'recline_grid_view',
109 'title': 'Grid',
110 'filterable': True,
111 'icon': 'table',
112 'requires_datastore': True,
113 'default_title': p.toolkit._('Table'),
114 }
115
116
117 class ReclineGraphView(ReclineViewBase):
118 '''
119 This extension views resources using a Recline graph.
120 '''
121
122 graph_types = [{'value': 'lines-and-points',
123 'text': 'Lines and points'},
124 {'value': 'lines', 'text': 'Lines'},
125 {'value': 'points', 'text': 'Points'},
126 {'value': 'bars', 'text': 'Bars'},
127 {'value': 'columns', 'text': 'Columns'}]
128
129 datastore_fields = []
130
131 datastore_field_types = ['numeric', 'int4', 'timestamp']
132
133 def list_graph_types(self):
134 return [t['value'] for t in self.graph_types]
135
136 def list_datastore_fields(self):
137 return [t['value'] for t in self.datastore_fields]
138
139 def info(self):
140 # in_list validator here is passed functions because this
141 # method does not know what the possible values of the
142 # datastore fields are (requires a datastore search)
143 schema = {
144 'offset': [ignore_empty, natural_number_validator],
145 'limit': [ignore_empty, natural_number_validator],
146 'graph_type': [ignore_empty, in_list(self.list_graph_types)],
147 'group': [ignore_empty, in_list(self.list_datastore_fields)],
148 'series': [ignore_empty, in_list(self.list_datastore_fields)]
149 }
150 return {'name': 'recline_graph_view',
151 'title': 'Graph',
152 'filterable': True,
153 'icon': 'bar-chart',
154 'requires_datastore': True,
155 'schema': schema,
156 'default_title': p.toolkit._('Graph'),
157 }
158
159 def setup_template_variables(self, context, data_dict):
160 self.datastore_fields = datastore_fields(data_dict['resource'],
161 self.datastore_field_types)
162 vars = ReclineViewBase.setup_template_variables(self, context,
163 data_dict)
164 vars.update({'graph_types': self.graph_types,
165 'graph_fields': self.datastore_fields})
166 return vars
167
168 def form_template(self, context, data_dict):
169 return 'recline_graph_form.html'
170
171
172 class ReclineMapView(ReclineViewBase):
173 '''
174 This extension views resources using a Recline map.
175 '''
176
177 map_field_types = [{'value': 'lat_long',
178 'text': 'Latitude / Longitude fields'},
179 {'value': 'geojson', 'text': 'GeoJSON'}]
180
181 datastore_fields = []
182
183 datastore_field_latlon_types = ['numeric']
184
185 datastore_field_geojson_types = ['text']
186
187 def list_map_field_types(self):
188 return [t['value'] for t in self.map_field_types]
189
190 def list_datastore_fields(self):
191 return [t['value'] for t in self.datastore_fields]
192
193 def info(self):
194 # in_list validator here is passed functions because this
195 # method does not know what the possible values of the
196 # datastore fields are (requires a datastore search)
197 schema = {
198 'offset': [ignore_empty, natural_number_validator],
199 'limit': [ignore_empty, natural_number_validator],
200 'map_field_type': [ignore_empty,
201 in_list(self.list_map_field_types)],
202 'latitude_field': [ignore_empty,
203 in_list(self.list_datastore_fields)],
204 'longitude_field': [ignore_empty,
205 in_list(self.list_datastore_fields)],
206 'geojson_field': [ignore_empty,
207 in_list(self.list_datastore_fields)],
208 'auto_zoom': [ignore_empty],
209 'cluster_markers': [ignore_empty]
210 }
211 return {'name': 'recline_map_view',
212 'title': 'Map',
213 'schema': schema,
214 'filterable': True,
215 'icon': 'map-marker',
216 'default_title': p.toolkit._('Map'),
217 }
218
219 def setup_template_variables(self, context, data_dict):
220 map_latlon_fields = datastore_fields(
221 data_dict['resource'], self.datastore_field_latlon_types)
222 map_geojson_fields = datastore_fields(
223 data_dict['resource'], self.datastore_field_geojson_types)
224
225 self.datastore_fields = map_latlon_fields + map_geojson_fields
226
227 vars = ReclineViewBase.setup_template_variables(self, context,
228 data_dict)
229 vars.update({'map_field_types': self.map_field_types,
230 'map_latlon_fields': map_latlon_fields,
231 'map_geojson_fields': map_geojson_fields
232 })
233 return vars
234
235 def form_template(self, context, data_dict):
236 return 'recline_map_form.html'
237
[end of ckanext/reclineview/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ckanext/reclineview/plugin.py b/ckanext/reclineview/plugin.py
--- a/ckanext/reclineview/plugin.py
+++ b/ckanext/reclineview/plugin.py
@@ -5,6 +5,7 @@
from ckan.common import json
import ckan.plugins as p
import ckan.plugins.toolkit as toolkit
+from pylons import config
log = getLogger(__name__)
ignore_empty = p.toolkit.get_validator('ignore_empty')
@@ -12,6 +13,15 @@
Invalid = p.toolkit.Invalid
+def get_mapview_config():
+ '''
+ Extracts and returns map view configuration of the reclineview extension.
+ '''
+ namespace = 'ckanext.spatial.common_map.'
+ return dict([(k.replace(namespace, ''), v) for k, v in config.iteritems()
+ if k.startswith(namespace)])
+
+
def in_list(list_possible_values):
'''
Validator that checks that the input value is one of the given
@@ -77,6 +87,8 @@
This extension views resources using a Recline MultiView.
'''
+ p.implements(p.ITemplateHelpers, inherit=True)
+
def info(self):
return {'name': 'recline_view',
'title': 'Data Explorer',
@@ -98,6 +110,11 @@
else:
return False
+ def get_helpers(self):
+ return {
+ 'get_map_config': get_mapview_config
+ }
+
class ReclineGridView(ReclineViewBase):
'''
@@ -174,6 +191,8 @@
This extension views resources using a Recline map.
'''
+ p.implements(p.ITemplateHelpers, inherit=True)
+
map_field_types = [{'value': 'lat_long',
'text': 'Latitude / Longitude fields'},
{'value': 'geojson', 'text': 'GeoJSON'}]
@@ -234,3 +253,8 @@
def form_template(self, context, data_dict):
return 'recline_map_form.html'
+
+ def get_helpers(self):
+ return {
+ 'get_mapview_config': get_mapview_config
+ }
|
{"golden_diff": "diff --git a/ckanext/reclineview/plugin.py b/ckanext/reclineview/plugin.py\n--- a/ckanext/reclineview/plugin.py\n+++ b/ckanext/reclineview/plugin.py\n@@ -5,6 +5,7 @@\n from ckan.common import json\n import ckan.plugins as p\n import ckan.plugins.toolkit as toolkit\n+from pylons import config\n \n log = getLogger(__name__)\n ignore_empty = p.toolkit.get_validator('ignore_empty')\n@@ -12,6 +13,15 @@\n Invalid = p.toolkit.Invalid\n \n \n+def get_mapview_config():\n+ '''\n+ Extracts and returns map view configuration of the reclineview extension.\n+ '''\n+ namespace = 'ckanext.spatial.common_map.'\n+ return dict([(k.replace(namespace, ''), v) for k, v in config.iteritems()\n+ if k.startswith(namespace)])\n+\n+\n def in_list(list_possible_values):\n '''\n Validator that checks that the input value is one of the given\n@@ -77,6 +87,8 @@\n This extension views resources using a Recline MultiView.\n '''\n \n+ p.implements(p.ITemplateHelpers, inherit=True)\n+\n def info(self):\n return {'name': 'recline_view',\n 'title': 'Data Explorer',\n@@ -98,6 +110,11 @@\n else:\n return False\n \n+ def get_helpers(self):\n+ return {\n+ 'get_map_config': get_mapview_config\n+ }\n+\n \n class ReclineGridView(ReclineViewBase):\n '''\n@@ -174,6 +191,8 @@\n This extension views resources using a Recline map.\n '''\n \n+ p.implements(p.ITemplateHelpers, inherit=True)\n+\n map_field_types = [{'value': 'lat_long',\n 'text': 'Latitude / Longitude fields'},\n {'value': 'geojson', 'text': 'GeoJSON'}]\n@@ -234,3 +253,8 @@\n \n def form_template(self, context, data_dict):\n return 'recline_map_form.html'\n+\n+ def get_helpers(self):\n+ return {\n+ 'get_mapview_config': get_mapview_config\n+ }\n", "issue": "DataStore Map and Explorer not displaying map tiles since 11 July 2016\nDirect tile access to MapQuest maps has been discontinued as of 11 July 2016 and the DataStore Map and Explorer previews no longer display map tiles. 
\n\nThe issue actually lies with recline.js and it has been logged https://github.com/okfn/recline/issues/500 and there is a referenced patch to replace MapQuest with Open Street Map frodrigo/recline@3df0c2a2bb8897124bdbbca715b2be1fd99cb08f\n\nThought it would be useful to have a record here and request the packaged recline.js should be updated.\n\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom logging import getLogger\n\nfrom ckan.common import json\nimport ckan.plugins as p\nimport ckan.plugins.toolkit as toolkit\n\nlog = getLogger(__name__)\nignore_empty = p.toolkit.get_validator('ignore_empty')\nnatural_number_validator = p.toolkit.get_validator('natural_number_validator')\nInvalid = p.toolkit.Invalid\n\n\ndef in_list(list_possible_values):\n '''\n Validator that checks that the input value is one of the given\n possible values.\n\n :param list_possible_values: function that returns list of possible values\n for validated field\n :type possible_values: function\n '''\n def validate(key, data, errors, context):\n if not data[key] in list_possible_values():\n raise Invalid('\"{0}\" is not a valid parameter'.format(data[key]))\n return validate\n\n\ndef datastore_fields(resource, valid_field_types):\n '''\n Return a list of all datastore fields for a given resource, as long as\n the datastore field type is in valid_field_types.\n\n :param resource: resource dict\n :type resource: dict\n :param valid_field_types: field types to include in returned list\n :type valid_field_types: list of strings\n '''\n data = {'resource_id': resource['id'], 'limit': 0}\n fields = toolkit.get_action('datastore_search')({}, data)['fields']\n return [{'value': f['id'], 'text': f['id']} for f in fields\n if f['type'] in valid_field_types]\n\n\nclass ReclineViewBase(p.SingletonPlugin):\n '''\n This base class for the Recline view extensions.\n '''\n p.implements(p.IConfigurer, inherit=True)\n p.implements(p.IResourceView, inherit=True)\n\n def update_config(self, config):\n '''\n Set up the resource library, public directory and\n template directory for the view\n '''\n toolkit.add_public_directory(config, 'theme/public')\n toolkit.add_template_directory(config, 'theme/templates')\n toolkit.add_resource('theme/public', 'ckanext-reclineview')\n\n def can_view(self, data_dict):\n resource = data_dict['resource']\n return (resource.get('datastore_active') or\n '_datastore_only_resource' in resource.get('url', ''))\n\n def setup_template_variables(self, context, data_dict):\n return {'resource_json': json.dumps(data_dict['resource']),\n 'resource_view_json': json.dumps(data_dict['resource_view'])}\n\n def view_template(self, context, data_dict):\n return 'recline_view.html'\n\n\nclass ReclineView(ReclineViewBase):\n '''\n This extension views resources using a Recline MultiView.\n '''\n\n def info(self):\n return {'name': 'recline_view',\n 'title': 'Data Explorer',\n 'filterable': True,\n 'icon': 'table',\n 'requires_datastore': False,\n 'default_title': p.toolkit._('Data Explorer'),\n }\n\n def can_view(self, data_dict):\n resource = data_dict['resource']\n\n if (resource.get('datastore_active') or\n '_datastore_only_resource' in resource.get('url', '')):\n return True\n resource_format = resource.get('format', None)\n if resource_format:\n return resource_format.lower() in ['csv', 'xls', 'xlsx', 'tsv']\n else:\n return False\n\n\nclass ReclineGridView(ReclineViewBase):\n '''\n This extension views resources using a Recline grid.\n '''\n\n def info(self):\n return {'name': 'recline_grid_view',\n 
'title': 'Grid',\n 'filterable': True,\n 'icon': 'table',\n 'requires_datastore': True,\n 'default_title': p.toolkit._('Table'),\n }\n\n\nclass ReclineGraphView(ReclineViewBase):\n '''\n This extension views resources using a Recline graph.\n '''\n\n graph_types = [{'value': 'lines-and-points',\n 'text': 'Lines and points'},\n {'value': 'lines', 'text': 'Lines'},\n {'value': 'points', 'text': 'Points'},\n {'value': 'bars', 'text': 'Bars'},\n {'value': 'columns', 'text': 'Columns'}]\n\n datastore_fields = []\n\n datastore_field_types = ['numeric', 'int4', 'timestamp']\n\n def list_graph_types(self):\n return [t['value'] for t in self.graph_types]\n\n def list_datastore_fields(self):\n return [t['value'] for t in self.datastore_fields]\n\n def info(self):\n # in_list validator here is passed functions because this\n # method does not know what the possible values of the\n # datastore fields are (requires a datastore search)\n schema = {\n 'offset': [ignore_empty, natural_number_validator],\n 'limit': [ignore_empty, natural_number_validator],\n 'graph_type': [ignore_empty, in_list(self.list_graph_types)],\n 'group': [ignore_empty, in_list(self.list_datastore_fields)],\n 'series': [ignore_empty, in_list(self.list_datastore_fields)]\n }\n return {'name': 'recline_graph_view',\n 'title': 'Graph',\n 'filterable': True,\n 'icon': 'bar-chart',\n 'requires_datastore': True,\n 'schema': schema,\n 'default_title': p.toolkit._('Graph'),\n }\n\n def setup_template_variables(self, context, data_dict):\n self.datastore_fields = datastore_fields(data_dict['resource'],\n self.datastore_field_types)\n vars = ReclineViewBase.setup_template_variables(self, context,\n data_dict)\n vars.update({'graph_types': self.graph_types,\n 'graph_fields': self.datastore_fields})\n return vars\n\n def form_template(self, context, data_dict):\n return 'recline_graph_form.html'\n\n\nclass ReclineMapView(ReclineViewBase):\n '''\n This extension views resources using a Recline map.\n '''\n\n map_field_types = [{'value': 'lat_long',\n 'text': 'Latitude / Longitude fields'},\n {'value': 'geojson', 'text': 'GeoJSON'}]\n\n datastore_fields = []\n\n datastore_field_latlon_types = ['numeric']\n\n datastore_field_geojson_types = ['text']\n\n def list_map_field_types(self):\n return [t['value'] for t in self.map_field_types]\n\n def list_datastore_fields(self):\n return [t['value'] for t in self.datastore_fields]\n\n def info(self):\n # in_list validator here is passed functions because this\n # method does not know what the possible values of the\n # datastore fields are (requires a datastore search)\n schema = {\n 'offset': [ignore_empty, natural_number_validator],\n 'limit': [ignore_empty, natural_number_validator],\n 'map_field_type': [ignore_empty,\n in_list(self.list_map_field_types)],\n 'latitude_field': [ignore_empty,\n in_list(self.list_datastore_fields)],\n 'longitude_field': [ignore_empty,\n in_list(self.list_datastore_fields)],\n 'geojson_field': [ignore_empty,\n in_list(self.list_datastore_fields)],\n 'auto_zoom': [ignore_empty],\n 'cluster_markers': [ignore_empty]\n }\n return {'name': 'recline_map_view',\n 'title': 'Map',\n 'schema': schema,\n 'filterable': True,\n 'icon': 'map-marker',\n 'default_title': p.toolkit._('Map'),\n }\n\n def setup_template_variables(self, context, data_dict):\n map_latlon_fields = datastore_fields(\n data_dict['resource'], self.datastore_field_latlon_types)\n map_geojson_fields = datastore_fields(\n data_dict['resource'], self.datastore_field_geojson_types)\n\n self.datastore_fields = 
map_latlon_fields + map_geojson_fields\n\n vars = ReclineViewBase.setup_template_variables(self, context,\n data_dict)\n vars.update({'map_field_types': self.map_field_types,\n 'map_latlon_fields': map_latlon_fields,\n 'map_geojson_fields': map_geojson_fields\n })\n return vars\n\n def form_template(self, context, data_dict):\n return 'recline_map_form.html'\n", "path": "ckanext/reclineview/plugin.py"}]}
| 3,092 | 488 |
gh_patches_debug_37532
|
rasdani/github-patches
|
git_diff
|
HypothesisWorks__hypothesis-1633
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a CLI option for verbosity to the pytest plugin
The `HYPOTHESIS_VERBOSITY_LEVEL` environment variable is now deprecated (from #1211). An easy way to control verbosity is still useful though, so we would like to support this as a command-line flag.
This would be implemented in [`hypothesis.extra.pytestplugin`](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/extra/pytestplugin.py), similarly to [the deprecated version here](https://github.com/HypothesisWorks/hypothesis/blob/3c5f3906a7339af8bf2448281377abe903575245/hypothesis-python/src/hypothesis/_settings.py#L626-L629). The new ``--hypothesis-verbosity`` option should be applied *after* loading the profile specified by ``--hypothesis-profile`` (if given).
Finally, the new option should be listed in `docs/details.rst`, noting that the verbosity option is applied after loading a profile.
*If you would like to work on this issue, feel free to comment and I will help you get started!*
</issue>
<code>
[start of hypothesis-python/src/hypothesis/extra/pytestplugin.py]
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import pytest
21
22 from hypothesis import core, settings
23 from hypothesis.reporting import default as default_reporter
24 from hypothesis.reporting import with_reporter
25 from hypothesis.statistics import collector
26 from hypothesis.internal.compat import OrderedDict, text_type
27 from hypothesis.internal.detection import is_hypothesis_test
28
29 LOAD_PROFILE_OPTION = '--hypothesis-profile'
30 PRINT_STATISTICS_OPTION = '--hypothesis-show-statistics'
31 SEED_OPTION = '--hypothesis-seed'
32
33
34 class StoringReporter(object):
35
36 def __init__(self, config):
37 self.config = config
38 self.results = []
39
40 def __call__(self, msg):
41 if self.config.getoption('capture', 'fd') == 'no':
42 default_reporter(msg)
43 if not isinstance(msg, text_type):
44 msg = repr(msg)
45 self.results.append(msg)
46
47
48 def pytest_addoption(parser):
49 group = parser.getgroup('hypothesis', 'Hypothesis')
50 group.addoption(
51 LOAD_PROFILE_OPTION,
52 action='store',
53 help='Load in a registered hypothesis.settings profile'
54 )
55 group.addoption(
56 PRINT_STATISTICS_OPTION,
57 action='store_true',
58 help='Configure when statistics are printed',
59 default=False
60 )
61 group.addoption(
62 SEED_OPTION,
63 action='store',
64 help='Set a seed to use for all Hypothesis tests'
65 )
66
67
68 def pytest_report_header(config):
69 profile = config.getoption(LOAD_PROFILE_OPTION)
70 if not profile:
71 profile = 'default'
72 settings_str = settings.get_profile(profile).show_changed()
73 if settings_str != '':
74 settings_str = ' -> %s' % (settings_str)
75 return 'hypothesis profile %r%s' % (profile, settings_str)
76
77
78 def pytest_configure(config):
79 core.running_under_pytest = True
80 profile = config.getoption(LOAD_PROFILE_OPTION)
81 if profile:
82 settings.load_profile(profile)
83 seed = config.getoption(SEED_OPTION)
84 if seed is not None:
85 try:
86 seed = int(seed)
87 except ValueError:
88 pass
89 core.global_force_seed = seed
90 config.addinivalue_line(
91 'markers',
92 'hypothesis: Tests which use hypothesis.')
93
94
95 gathered_statistics = OrderedDict() # type: dict
96
97
98 @pytest.mark.hookwrapper
99 def pytest_runtest_call(item):
100 if not (hasattr(item, 'obj') and is_hypothesis_test(item.obj)):
101 yield
102 else:
103 store = StoringReporter(item.config)
104
105 def note_statistics(stats):
106 gathered_statistics[item.nodeid] = stats
107
108 with collector.with_value(note_statistics):
109 with with_reporter(store):
110 yield
111 if store.results:
112 item.hypothesis_report_information = list(store.results)
113
114
115 @pytest.mark.hookwrapper
116 def pytest_runtest_makereport(item, call):
117 report = (yield).get_result()
118 if hasattr(item, 'hypothesis_report_information'):
119 report.sections.append((
120 'Hypothesis',
121 '\n'.join(item.hypothesis_report_information)
122 ))
123
124
125 def pytest_terminal_summary(terminalreporter):
126 if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION):
127 return
128 terminalreporter.section('Hypothesis Statistics')
129 for name, statistics in gathered_statistics.items():
130 terminalreporter.write_line(name + ':')
131 terminalreporter.write_line('')
132
133 if not statistics.has_runs:
134 terminalreporter.write_line(' - Test was never run')
135 continue
136
137 terminalreporter.write_line((
138 ' - %d passing examples, %d failing examples,'
139 ' %d invalid examples') % (
140 statistics.passing_examples, statistics.failing_examples,
141 statistics.invalid_examples,
142 ))
143 terminalreporter.write_line(
144 ' - Typical runtimes: %s' % (statistics.runtimes,)
145 )
146 terminalreporter.write_line(
147 ' - Fraction of time spent in data generation: %s' % (
148 statistics.draw_time_percentage,))
149 terminalreporter.write_line(
150 ' - Stopped because %s' % (statistics.exit_reason,)
151 )
152 if statistics.events:
153 terminalreporter.write_line(' - Events:')
154 for event in statistics.events:
155 terminalreporter.write_line(
156 ' * %s' % (event,)
157 )
158 terminalreporter.write_line('')
159
160
161 def pytest_collection_modifyitems(items):
162 for item in items:
163 if not isinstance(item, pytest.Function):
164 continue
165 if getattr(item.function, 'is_hypothesis_test', False):
166 item.add_marker('hypothesis')
167
168
169 def load():
170 pass
171
[end of hypothesis-python/src/hypothesis/extra/pytestplugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/hypothesis-python/src/hypothesis/extra/pytestplugin.py b/hypothesis-python/src/hypothesis/extra/pytestplugin.py
--- a/hypothesis-python/src/hypothesis/extra/pytestplugin.py
+++ b/hypothesis-python/src/hypothesis/extra/pytestplugin.py
@@ -19,7 +19,7 @@
import pytest
-from hypothesis import core, settings
+from hypothesis import Verbosity, core, settings
from hypothesis.reporting import default as default_reporter
from hypothesis.reporting import with_reporter
from hypothesis.statistics import collector
@@ -27,6 +27,7 @@
from hypothesis.internal.detection import is_hypothesis_test
LOAD_PROFILE_OPTION = '--hypothesis-profile'
+VERBOSITY_OPTION = '--hypothesis-verbosity'
PRINT_STATISTICS_OPTION = '--hypothesis-show-statistics'
SEED_OPTION = '--hypothesis-seed'
@@ -52,6 +53,12 @@
action='store',
help='Load in a registered hypothesis.settings profile'
)
+ group.addoption(
+ VERBOSITY_OPTION,
+ action='store',
+ choices=[opt.name for opt in Verbosity],
+ help='Override profile with verbosity setting specified'
+ )
group.addoption(
PRINT_STATISTICS_OPTION,
action='store_true',
@@ -68,7 +75,7 @@
def pytest_report_header(config):
profile = config.getoption(LOAD_PROFILE_OPTION)
if not profile:
- profile = 'default'
+ profile = settings._current_profile
settings_str = settings.get_profile(profile).show_changed()
if settings_str != '':
settings_str = ' -> %s' % (settings_str)
@@ -80,6 +87,16 @@
profile = config.getoption(LOAD_PROFILE_OPTION)
if profile:
settings.load_profile(profile)
+ verbosity_name = config.getoption(VERBOSITY_OPTION)
+ if verbosity_name:
+ verbosity_value = Verbosity[verbosity_name]
+ profile_name = '%s-with-%s-verbosity' % (
+ settings._current_profile, verbosity_name
+ )
+ # register_profile creates a new profile, exactly like the current one,
+ # with the extra values given (in this case 'verbosity')
+ settings.register_profile(profile_name, verbosity=verbosity_value)
+ settings.load_profile(profile_name)
seed = config.getoption(SEED_OPTION)
if seed is not None:
try:
|
{"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/extra/pytestplugin.py b/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n--- a/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n+++ b/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n@@ -19,7 +19,7 @@\n \n import pytest\n \n-from hypothesis import core, settings\n+from hypothesis import Verbosity, core, settings\n from hypothesis.reporting import default as default_reporter\n from hypothesis.reporting import with_reporter\n from hypothesis.statistics import collector\n@@ -27,6 +27,7 @@\n from hypothesis.internal.detection import is_hypothesis_test\n \n LOAD_PROFILE_OPTION = '--hypothesis-profile'\n+VERBOSITY_OPTION = '--hypothesis-verbosity'\n PRINT_STATISTICS_OPTION = '--hypothesis-show-statistics'\n SEED_OPTION = '--hypothesis-seed'\n \n@@ -52,6 +53,12 @@\n action='store',\n help='Load in a registered hypothesis.settings profile'\n )\n+ group.addoption(\n+ VERBOSITY_OPTION,\n+ action='store',\n+ choices=[opt.name for opt in Verbosity],\n+ help='Override profile with verbosity setting specified'\n+ )\n group.addoption(\n PRINT_STATISTICS_OPTION,\n action='store_true',\n@@ -68,7 +75,7 @@\n def pytest_report_header(config):\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if not profile:\n- profile = 'default'\n+ profile = settings._current_profile\n settings_str = settings.get_profile(profile).show_changed()\n if settings_str != '':\n settings_str = ' -> %s' % (settings_str)\n@@ -80,6 +87,16 @@\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if profile:\n settings.load_profile(profile)\n+ verbosity_name = config.getoption(VERBOSITY_OPTION)\n+ if verbosity_name:\n+ verbosity_value = Verbosity[verbosity_name]\n+ profile_name = '%s-with-%s-verbosity' % (\n+ settings._current_profile, verbosity_name\n+ )\n+ # register_profile creates a new profile, exactly like the current one,\n+ # with the extra values given (in this case 'verbosity')\n+ settings.register_profile(profile_name, verbosity=verbosity_value)\n+ settings.load_profile(profile_name)\n seed = config.getoption(SEED_OPTION)\n if seed is not None:\n try:\n", "issue": "Add a CLI option for verbosity to the pytest plugin\nThe `HYPOTHESIS_VERBOSITY_LEVEL` environment variable is now deprecated (from #1211). An easy way to control verbosity is still useful though, so we would like to support this as a command-line flag.\r\n\r\nThis would be implemented in [`hypothesis.extra.pytestplugin`](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/extra/pytestplugin.py), similarly to [the deprecated version here](https://github.com/HypothesisWorks/hypothesis/blob/3c5f3906a7339af8bf2448281377abe903575245/hypothesis-python/src/hypothesis/_settings.py#L626-L629). The new ``--hypothesis-verbosity`` option should be applied *after* loading the profile specified by ``--hypothesis-profile`` (if given).\r\n\r\nFinally, the new option should be listed in `docs/details.rst`, including that the verbosity option is applied after loading a profile.\r\n\r\n*If you would like to work on this issue, feel free to comment and I will help you get started!*\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. 
See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport pytest\n\nfrom hypothesis import core, settings\nfrom hypothesis.reporting import default as default_reporter\nfrom hypothesis.reporting import with_reporter\nfrom hypothesis.statistics import collector\nfrom hypothesis.internal.compat import OrderedDict, text_type\nfrom hypothesis.internal.detection import is_hypothesis_test\n\nLOAD_PROFILE_OPTION = '--hypothesis-profile'\nPRINT_STATISTICS_OPTION = '--hypothesis-show-statistics'\nSEED_OPTION = '--hypothesis-seed'\n\n\nclass StoringReporter(object):\n\n def __init__(self, config):\n self.config = config\n self.results = []\n\n def __call__(self, msg):\n if self.config.getoption('capture', 'fd') == 'no':\n default_reporter(msg)\n if not isinstance(msg, text_type):\n msg = repr(msg)\n self.results.append(msg)\n\n\ndef pytest_addoption(parser):\n group = parser.getgroup('hypothesis', 'Hypothesis')\n group.addoption(\n LOAD_PROFILE_OPTION,\n action='store',\n help='Load in a registered hypothesis.settings profile'\n )\n group.addoption(\n PRINT_STATISTICS_OPTION,\n action='store_true',\n help='Configure when statistics are printed',\n default=False\n )\n group.addoption(\n SEED_OPTION,\n action='store',\n help='Set a seed to use for all Hypothesis tests'\n )\n\n\ndef pytest_report_header(config):\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if not profile:\n profile = 'default'\n settings_str = settings.get_profile(profile).show_changed()\n if settings_str != '':\n settings_str = ' -> %s' % (settings_str)\n return 'hypothesis profile %r%s' % (profile, settings_str)\n\n\ndef pytest_configure(config):\n core.running_under_pytest = True\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if profile:\n settings.load_profile(profile)\n seed = config.getoption(SEED_OPTION)\n if seed is not None:\n try:\n seed = int(seed)\n except ValueError:\n pass\n core.global_force_seed = seed\n config.addinivalue_line(\n 'markers',\n 'hypothesis: Tests which use hypothesis.')\n\n\ngathered_statistics = OrderedDict() # type: dict\n\n\[email protected]\ndef pytest_runtest_call(item):\n if not (hasattr(item, 'obj') and is_hypothesis_test(item.obj)):\n yield\n else:\n store = StoringReporter(item.config)\n\n def note_statistics(stats):\n gathered_statistics[item.nodeid] = stats\n\n with collector.with_value(note_statistics):\n with with_reporter(store):\n yield\n if store.results:\n item.hypothesis_report_information = list(store.results)\n\n\[email protected]\ndef pytest_runtest_makereport(item, call):\n report = (yield).get_result()\n if hasattr(item, 'hypothesis_report_information'):\n report.sections.append((\n 'Hypothesis',\n '\\n'.join(item.hypothesis_report_information)\n ))\n\n\ndef pytest_terminal_summary(terminalreporter):\n if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION):\n return\n terminalreporter.section('Hypothesis Statistics')\n for name, statistics in gathered_statistics.items():\n terminalreporter.write_line(name + ':')\n terminalreporter.write_line('')\n\n if not statistics.has_runs:\n terminalreporter.write_line(' - Test was never run')\n continue\n\n 
terminalreporter.write_line((\n ' - %d passing examples, %d failing examples,'\n ' %d invalid examples') % (\n statistics.passing_examples, statistics.failing_examples,\n statistics.invalid_examples,\n ))\n terminalreporter.write_line(\n ' - Typical runtimes: %s' % (statistics.runtimes,)\n )\n terminalreporter.write_line(\n ' - Fraction of time spent in data generation: %s' % (\n statistics.draw_time_percentage,))\n terminalreporter.write_line(\n ' - Stopped because %s' % (statistics.exit_reason,)\n )\n if statistics.events:\n terminalreporter.write_line(' - Events:')\n for event in statistics.events:\n terminalreporter.write_line(\n ' * %s' % (event,)\n )\n terminalreporter.write_line('')\n\n\ndef pytest_collection_modifyitems(items):\n for item in items:\n if not isinstance(item, pytest.Function):\n continue\n if getattr(item.function, 'is_hypothesis_test', False):\n item.add_marker('hypothesis')\n\n\ndef load():\n pass\n", "path": "hypothesis-python/src/hypothesis/extra/pytestplugin.py"}]}
| 2,404 | 541 |
gh_patches_debug_34216
|
rasdani/github-patches
|
git_diff
|
zigpy__zha-device-handlers-3023
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Device Support Request] TS0601 by _TZE200_3ejwxpmu (QOXEZY Zigbee CO2 NDIR Sensor)
**Is your feature request related to a problem? Please describe.**
This is an NDIR CO2 sensor which measures carbon dioxide (and additionally humidity and temperature). While I can connect it to ZHA, it generates no entities. Is it possible to integrate it?
**Describe the solution you'd like**
I would like to be able to see all available entities.
<details>
<!-- Device signature can be acquired by clicking on the "Zigbee Device Signature" button in the device settings view -->
<summary>Device signature</summary>
{
"node_descriptor": "NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)",
"endpoints": {
"1": {
"profile_id": 260,
"device_type": "0x0051",
"in_clusters": [
"0x0000",
"0x0004",
"0x0005",
"0xef00"
],
"out_clusters": [
"0x000a",
"0x0019"
]
},
"242": {
"profile_id": 41440,
"device_type": "0x0061",
"in_clusters": [],
"out_clusters": [
"0x0021"
]
}
},
"manufacturer": "_TZE200_3ejwxpmu",
"model": "TS0601",
"class": "zigpy.device.Device"
}
</details>
**Additional context**
Add any other context or screenshots about the feature request here.

</issue>
<code>
[start of zhaquirks/tuya/air/ts0601_air_quality.py]
1 """Tuya Air Quality sensor."""
2
3 from zigpy.profiles import zgp, zha
4 from zigpy.quirks import CustomDevice
5 from zigpy.zcl.clusters.general import Basic, GreenPowerProxy, Groups, Ota, Scenes, Time
6
7 from zhaquirks.const import (
8 DEVICE_TYPE,
9 ENDPOINTS,
10 INPUT_CLUSTERS,
11 MODELS_INFO,
12 OUTPUT_CLUSTERS,
13 PROFILE_ID,
14 )
15 from zhaquirks.tuya.air import (
16 TuyaAirQualityCO2,
17 TuyaAirQualityFormaldehyde,
18 TuyaAirQualityHumidity,
19 TuyaAirQualityTemperature,
20 TuyaAirQualityVOC,
21 TuyaCO2ManufCluster,
22 )
23
24
25 class TuyaCO2Sensor(CustomDevice):
26 """Tuya Air quality device."""
27
28 signature = {
29 # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4098, maximum_buffer_size=82, maximum_incoming_transfer_size=82, server_mask=11264, maximum_outgoing_transfer_size=82, descriptor_capability_field=<DescriptorCapability.0: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)]
30 # device_version=1
31 # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,
32 # input_clusters=[0, 4, 5, 61184],
33 # output_clusters=[25, 10])
34 MODELS_INFO: [
35 ("_TZE200_8ygsuhe1", "TS0601"),
36 ("_TZE200_ryfmq5rl", "TS0601"),
37 ("_TZE200_yvx5lh6k", "TS0601"),
38 ("_TZE200_c2fmom5z", "TS0601"),
39 ],
40 ENDPOINTS: {
41 1: {
42 PROFILE_ID: zha.PROFILE_ID,
43 DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
44 INPUT_CLUSTERS: [
45 Basic.cluster_id,
46 Groups.cluster_id,
47 Scenes.cluster_id,
48 TuyaCO2ManufCluster.cluster_id,
49 ],
50 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
51 }
52 },
53 }
54
55 replacement = {
56 ENDPOINTS: {
57 1: {
58 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
59 INPUT_CLUSTERS: [
60 Basic.cluster_id,
61 Groups.cluster_id,
62 Scenes.cluster_id,
63 TuyaCO2ManufCluster,
64 TuyaAirQualityCO2,
65 TuyaAirQualityFormaldehyde,
66 TuyaAirQualityHumidity,
67 TuyaAirQualityTemperature,
68 TuyaAirQualityVOC,
69 ],
70 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
71 }
72 }
73 }
74
75
76 class TuyaCO2SensorGPP(CustomDevice):
77 """Tuya Air quality device with GPP."""
78
79 signature = {
80 # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4098, maximum_buffer_size=82, maximum_incoming_transfer_size=82, server_mask=11264, maximum_outgoing_transfer_size=82, descriptor_capability_field=<DescriptorCapability.0: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)]
81 # device_version=1
82 # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,
83 # input_clusters=[0, 4, 5, 61184],
84 # output_clusters=[25, 10])
85 MODELS_INFO: [
86 ("_TZE200_8ygsuhe1", "TS0601"),
87 ("_TZE200_ryfmq5rl", "TS0601"),
88 ("_TZE200_yvx5lh6k", "TS0601"),
89 ("_TZE200_c2fmom5z", "TS0601"),
90 ],
91 ENDPOINTS: {
92 1: {
93 PROFILE_ID: zha.PROFILE_ID,
94 DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
95 INPUT_CLUSTERS: [
96 Basic.cluster_id,
97 Groups.cluster_id,
98 Scenes.cluster_id,
99 TuyaCO2ManufCluster.cluster_id,
100 ],
101 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
102 },
103 242: {
104 # <SimpleDescriptor endpoint=242 profile=41440 device_type=97
105 # input_clusters=[]
106 # output_clusters=[33]
107 PROFILE_ID: zgp.PROFILE_ID,
108 DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,
109 INPUT_CLUSTERS: [],
110 OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
111 },
112 },
113 }
114
115 replacement = {
116 ENDPOINTS: {
117 1: {
118 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
119 INPUT_CLUSTERS: [
120 Basic.cluster_id,
121 Groups.cluster_id,
122 Scenes.cluster_id,
123 TuyaCO2ManufCluster,
124 TuyaAirQualityCO2,
125 TuyaAirQualityFormaldehyde,
126 TuyaAirQualityHumidity,
127 TuyaAirQualityTemperature,
128 TuyaAirQualityVOC,
129 ],
130 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
131 },
132 242: {
133 PROFILE_ID: zgp.PROFILE_ID,
134 DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,
135 INPUT_CLUSTERS: [],
136 OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
137 },
138 }
139 }
140
141
142 class TuyaNDIRCO2SensorGPP(CustomDevice):
143 """Tuya NIDR CO2 sensor with GPP."""
144
145 signature = {
146 # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)
147 # device_version=1
148 # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,
149 # input_clusters=[0, 4, 5, 61184],
150 # output_clusters=[25, 10])
151 MODELS_INFO: [
152 ("_TZE200_ogkdpgy2", "TS0601"),
153 ],
154 ENDPOINTS: {
155 1: {
156 PROFILE_ID: zha.PROFILE_ID,
157 DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
158 INPUT_CLUSTERS: [
159 Basic.cluster_id,
160 Groups.cluster_id,
161 Scenes.cluster_id,
162 TuyaCO2ManufCluster.cluster_id,
163 ],
164 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
165 },
166 242: {
167 # <SimpleDescriptor endpoint=242 profile=41440 device_type=97
168 # input_clusters=[]
169 # output_clusters=[33]
170 PROFILE_ID: zgp.PROFILE_ID,
171 DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,
172 INPUT_CLUSTERS: [],
173 OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
174 },
175 },
176 }
177
178 replacement = {
179 ENDPOINTS: {
180 1: {
181 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
182 INPUT_CLUSTERS: [
183 Basic.cluster_id,
184 Groups.cluster_id,
185 Scenes.cluster_id,
186 TuyaCO2ManufCluster,
187 TuyaAirQualityCO2,
188 ],
189 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
190 },
191 242: {
192 PROFILE_ID: zgp.PROFILE_ID,
193 DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,
194 INPUT_CLUSTERS: [],
195 OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
196 },
197 }
198 }
199
[end of zhaquirks/tuya/air/ts0601_air_quality.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zhaquirks/tuya/air/ts0601_air_quality.py b/zhaquirks/tuya/air/ts0601_air_quality.py
--- a/zhaquirks/tuya/air/ts0601_air_quality.py
+++ b/zhaquirks/tuya/air/ts0601_air_quality.py
@@ -196,3 +196,64 @@
},
}
}
+
+
+class TuyaNDIRCO2SensorGPP2(CustomDevice):
+ """Tuya NIDR CO2 sensor."""
+
+ signature = {
+ # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.FullFunctionDevice|MainsPowered|RxOnWhenIdle|AllocateAddress: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)
+ # device_version=1
+ # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,
+ # input_clusters=[0, 4, 5, 61184],
+ # output_clusters=[10, 25])
+ MODELS_INFO: [
+ ("_TZE200_3ejwxpmu", "TS0601"),
+ ],
+ ENDPOINTS: {
+ 1: {
+ PROFILE_ID: zha.PROFILE_ID,
+ DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
+ INPUT_CLUSTERS: [
+ Basic.cluster_id,
+ Groups.cluster_id,
+ Scenes.cluster_id,
+ TuyaCO2ManufCluster.cluster_id,
+ ],
+ OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
+ },
+ 242: {
+ # <SimpleDescriptor endpoint=242 profile=41440 device_type=97
+ # input_clusters=[]
+ # output_clusters=[33]
+ PROFILE_ID: zgp.PROFILE_ID,
+ DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,
+ INPUT_CLUSTERS: [],
+ OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
+ },
+ },
+ }
+
+ replacement = {
+ ENDPOINTS: {
+ 1: {
+ DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
+ INPUT_CLUSTERS: [
+ Basic.cluster_id,
+ Groups.cluster_id,
+ Scenes.cluster_id,
+ TuyaCO2ManufCluster,
+ TuyaAirQualityCO2,
+ TuyaAirQualityHumidity,
+ TuyaAirQualityTemperature,
+ ],
+ OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
+ },
+ 242: {
+ PROFILE_ID: zgp.PROFILE_ID,
+ DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,
+ INPUT_CLUSTERS: [],
+ OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
+ },
+ }
+ }
|
{"golden_diff": "diff --git a/zhaquirks/tuya/air/ts0601_air_quality.py b/zhaquirks/tuya/air/ts0601_air_quality.py\n--- a/zhaquirks/tuya/air/ts0601_air_quality.py\n+++ b/zhaquirks/tuya/air/ts0601_air_quality.py\n@@ -196,3 +196,64 @@\n },\n }\n }\n+\n+\n+class TuyaNDIRCO2SensorGPP2(CustomDevice):\n+ \"\"\"Tuya NIDR CO2 sensor.\"\"\"\n+\n+ signature = {\n+ # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.FullFunctionDevice|MainsPowered|RxOnWhenIdle|AllocateAddress: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)\n+ # device_version=1\n+ # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,\n+ # input_clusters=[0, 4, 5, 61184],\n+ # output_clusters=[10, 25])\n+ MODELS_INFO: [\n+ (\"_TZE200_3ejwxpmu\", \"TS0601\"),\n+ ],\n+ ENDPOINTS: {\n+ 1: {\n+ PROFILE_ID: zha.PROFILE_ID,\n+ DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n+ INPUT_CLUSTERS: [\n+ Basic.cluster_id,\n+ Groups.cluster_id,\n+ Scenes.cluster_id,\n+ TuyaCO2ManufCluster.cluster_id,\n+ ],\n+ OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n+ },\n+ 242: {\n+ # <SimpleDescriptor endpoint=242 profile=41440 device_type=97\n+ # input_clusters=[]\n+ # output_clusters=[33]\n+ PROFILE_ID: zgp.PROFILE_ID,\n+ DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,\n+ INPUT_CLUSTERS: [],\n+ OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n+ },\n+ },\n+ }\n+\n+ replacement = {\n+ ENDPOINTS: {\n+ 1: {\n+ DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n+ INPUT_CLUSTERS: [\n+ Basic.cluster_id,\n+ Groups.cluster_id,\n+ Scenes.cluster_id,\n+ TuyaCO2ManufCluster,\n+ TuyaAirQualityCO2,\n+ TuyaAirQualityHumidity,\n+ TuyaAirQualityTemperature,\n+ ],\n+ OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n+ },\n+ 242: {\n+ PROFILE_ID: zgp.PROFILE_ID,\n+ DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,\n+ INPUT_CLUSTERS: [],\n+ OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n+ },\n+ }\n+ }\n", "issue": "[Device Support Request] TS0601 by _TZE200_3ejwxpmu (QOXEZY Zigbee CO2 NDIR Sensor)\n**Is your feature request related to a problem? Please describe.**\r\nThis is a NDIR CO2 sensor which measures carbon dioxide (additionally humidity and temperature). While I can connect it to ZHA, it generates no entities. 
Is it possible to integrate?\r\n\r\n**Describe the solution you'd like**\r\nI would like to be able to see all available entities.\r\n\r\n<details>\r\n<!-- Device signature can be acquired by clicking on the \"Zigbee Device Signature\" button in the device settings view -->\r\n<summary>Device signature</summary>\r\n{\r\n \"node_descriptor\": \"NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)\",\r\n \"endpoints\": {\r\n \"1\": {\r\n \"profile_id\": 260,\r\n \"device_type\": \"0x0051\",\r\n \"in_clusters\": [\r\n \"0x0000\",\r\n \"0x0004\",\r\n \"0x0005\",\r\n \"0xef00\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x000a\",\r\n \"0x0019\"\r\n ]\r\n },\r\n \"242\": {\r\n \"profile_id\": 41440,\r\n \"device_type\": \"0x0061\",\r\n \"in_clusters\": [],\r\n \"out_clusters\": [\r\n \"0x0021\"\r\n ]\r\n }\r\n },\r\n \"manufacturer\": \"_TZE200_3ejwxpmu\",\r\n \"model\": \"TS0601\",\r\n \"class\": \"zigpy.device.Device\"\r\n}\r\n\r\n</details>\r\n\r\n**Additional context**\r\nAdd any other context or screenshots about the feature request here.\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Tuya Air Quality sensor.\"\"\"\n\nfrom zigpy.profiles import zgp, zha\nfrom zigpy.quirks import CustomDevice\nfrom zigpy.zcl.clusters.general import Basic, GreenPowerProxy, Groups, Ota, Scenes, Time\n\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\nfrom zhaquirks.tuya.air import (\n TuyaAirQualityCO2,\n TuyaAirQualityFormaldehyde,\n TuyaAirQualityHumidity,\n TuyaAirQualityTemperature,\n TuyaAirQualityVOC,\n TuyaCO2ManufCluster,\n)\n\n\nclass TuyaCO2Sensor(CustomDevice):\n \"\"\"Tuya Air quality device.\"\"\"\n\n signature = {\n # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4098, maximum_buffer_size=82, maximum_incoming_transfer_size=82, server_mask=11264, maximum_outgoing_transfer_size=82, descriptor_capability_field=<DescriptorCapability.0: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)]\n # device_version=1\n # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,\n # input_clusters=[0, 4, 5, 61184],\n # output_clusters=[25, 10])\n MODELS_INFO: [\n (\"_TZE200_8ygsuhe1\", \"TS0601\"),\n (\"_TZE200_ryfmq5rl\", \"TS0601\"),\n (\"_TZE200_yvx5lh6k\", \"TS0601\"),\n (\"_TZE200_c2fmom5z\", \"TS0601\"),\n ],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: 
zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaCO2ManufCluster.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaCO2ManufCluster,\n TuyaAirQualityCO2,\n TuyaAirQualityFormaldehyde,\n TuyaAirQualityHumidity,\n TuyaAirQualityTemperature,\n TuyaAirQualityVOC,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n }\n }\n }\n\n\nclass TuyaCO2SensorGPP(CustomDevice):\n \"\"\"Tuya Air quality device with GPP.\"\"\"\n\n signature = {\n # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4098, maximum_buffer_size=82, maximum_incoming_transfer_size=82, server_mask=11264, maximum_outgoing_transfer_size=82, descriptor_capability_field=<DescriptorCapability.0: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)]\n # device_version=1\n # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,\n # input_clusters=[0, 4, 5, 61184],\n # output_clusters=[25, 10])\n MODELS_INFO: [\n (\"_TZE200_8ygsuhe1\", \"TS0601\"),\n (\"_TZE200_ryfmq5rl\", \"TS0601\"),\n (\"_TZE200_yvx5lh6k\", \"TS0601\"),\n (\"_TZE200_c2fmom5z\", \"TS0601\"),\n ],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaCO2ManufCluster.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n 242: {\n # <SimpleDescriptor endpoint=242 profile=41440 device_type=97\n # input_clusters=[]\n # output_clusters=[33]\n PROFILE_ID: zgp.PROFILE_ID,\n DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,\n INPUT_CLUSTERS: [],\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaCO2ManufCluster,\n TuyaAirQualityCO2,\n TuyaAirQualityFormaldehyde,\n TuyaAirQualityHumidity,\n TuyaAirQualityTemperature,\n TuyaAirQualityVOC,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n 242: {\n PROFILE_ID: zgp.PROFILE_ID,\n DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,\n INPUT_CLUSTERS: [],\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n }\n }\n\n\nclass TuyaNDIRCO2SensorGPP(CustomDevice):\n \"\"\"Tuya NIDR CO2 sensor with GPP.\"\"\"\n\n signature = {\n # NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, 
*is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)\n # device_version=1\n # SizePrefixedSimpleDescriptor(endpoint=1, profile=260, device_type=81, device_version=1,\n # input_clusters=[0, 4, 5, 61184],\n # output_clusters=[25, 10])\n MODELS_INFO: [\n (\"_TZE200_ogkdpgy2\", \"TS0601\"),\n ],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaCO2ManufCluster.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n 242: {\n # <SimpleDescriptor endpoint=242 profile=41440 device_type=97\n # input_clusters=[]\n # output_clusters=[33]\n PROFILE_ID: zgp.PROFILE_ID,\n DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,\n INPUT_CLUSTERS: [],\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n TuyaCO2ManufCluster,\n TuyaAirQualityCO2,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n 242: {\n PROFILE_ID: zgp.PROFILE_ID,\n DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,\n INPUT_CLUSTERS: [],\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n }\n }\n", "path": "zhaquirks/tuya/air/ts0601_air_quality.py"}]}
| 3,809 | 811 |
gh_patches_debug_6667
|
rasdani/github-patches
|
git_diff
|
pyjanitor-devs__pyjanitor-633
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version on GitHub different from version on PyPI
# Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the version in the repo is "0.19.0", whereas it's "0.20.0" on PyPI.
I would like to propose a change, such that the version is updated here.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to PyPI](https://pypi.org/project/pyjanitor/)
</issue>
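The mismatch can only happen because the version string is hard-coded in two places (`setup.py` and `janitor/__init__.py`, both shown below) and the two copies drifted apart. A minimal single-sourcing sketch (a hypothetical helper, not part of pyjanitor) would have `setup.py` parse the version out of the package instead:
```
# Hypothetical single-sourcing helper: setup.py reads __version__ from the
# package instead of repeating the string, so the two copies cannot drift.
import re
from pathlib import Path


def read_version(init_file: str = "janitor/__init__.py") -> str:
    """Extract the __version__ = "x.y.z" assignment from a module file."""
    text = Path(init_file).read_text(encoding="utf-8")
    match = re.search(r'^__version__\s*=\s*"([^"]+)"', text, re.MULTILINE)
    if match is None:
        raise RuntimeError("No __version__ found in %s" % init_file)
    return match.group(1)


# setup(..., version=read_version(), ...) then always matches the package.
```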
<code>
[start of setup.py]
1 import re
2 from pathlib import Path
3
4 from setuptools import setup
5
6
7 def requirements():
8 with open("requirements.txt", "r+") as f:
9 return f.read()
10
11
12 def generate_long_description() -> str:
13 """
14 Extra chunks from README for PyPI description.
15
16 Target chunks must be contained within `.. pypi-doc` pair comments,
17 so there must be an even number of comments in README.
18
19 :returns: Extracted description from README
20
21 """
22 # Read the contents of README file
23 this_directory = Path(__file__).parent
24 with open(this_directory / "README.rst", encoding="utf-8") as f:
25 readme = f.read()
26
27 # Find pypi-doc comments in README
28 indices = [m.start() for m in re.finditer(".. pypi-doc", readme)]
29 if len(indices) % 2 != 0:
30 raise Exception("Odd number of `.. pypi-doc` comments in README")
31
32 # Loop through pairs of comments and save text between pairs
33 long_description = ""
34 for i in range(0, len(indices), 2):
35 start_index = indices[i] + 11
36 end_index = indices[i + 1]
37 long_description += readme[start_index:end_index]
38 return long_description
39
40
41 extra_spark = ["pyspark"]
42 extra_biology = ["biopython"]
43 extra_chemistry = ["rdkit"]
44 extra_engineering = ["unyt"]
45 extra_all = extra_biology + extra_engineering + extra_spark
46
47 setup(
48 name="pyjanitor",
49 version="0.19.0",
50 description="Tools for cleaning pandas DataFrames",
51 author="Eric J. Ma",
52 author_email="[email protected]",
53 url="https://github.com/ericmjl/pyjanitor",
54 packages=["janitor"],
55 install_requires=requirements(),
56 extras_require={
57 "all": extra_all,
58 "biology": extra_biology,
59 # "chemistry": extra_chemistry, should be inserted once rdkit
60 # fixes https://github.com/rdkit/rdkit/issues/1812
61 "engineering": extra_engineering,
62 "spark": extra_spark,
63 },
64 python_requires=">=3.6",
65 long_description=generate_long_description(),
66 long_description_content_type="text/x-rst",
67 )
68
[end of setup.py]
[start of janitor/__init__.py]
1 try:
2 import janitor.xarray
3 except ImportError:
4 pass
5
6 from .functions import * # noqa: F403, F401
7 from .math import *
8 from .ml import get_features_targets as _get_features_targets
9 from .utils import refactored_function
10
11 # from .dataframe import JanitorDataFrame as DataFrame # noqa: F401
12 # from .dataframe import JanitorSeries as Series # noqa: F401
13
14
15 @refactored_function(
16 "get_features_targets() has moved. Please use ml.get_features_targets()."
17 )
18 def get_features_targets(*args, **kwargs):
19 return _get_features_targets(*args, **kwargs)
20
21
22 __version__ = "0.19.0"
23
[end of janitor/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/janitor/__init__.py b/janitor/__init__.py
--- a/janitor/__init__.py
+++ b/janitor/__init__.py
@@ -19,4 +19,4 @@
return _get_features_targets(*args, **kwargs)
-__version__ = "0.19.0"
+__version__ = "0.20.0"
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,7 +46,7 @@
setup(
name="pyjanitor",
- version="0.19.0",
+ version="0.20.0",
description="Tools for cleaning pandas DataFrames",
author="Eric J. Ma",
author_email="[email protected]",
|
{"golden_diff": "diff --git a/janitor/__init__.py b/janitor/__init__.py\n--- a/janitor/__init__.py\n+++ b/janitor/__init__.py\n@@ -19,4 +19,4 @@\n return _get_features_targets(*args, **kwargs)\n \n \n-__version__ = \"0.19.0\"\n+__version__ = \"0.20.0\"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,7 +46,7 @@\n \n setup(\n name=\"pyjanitor\",\n- version=\"0.19.0\",\n+ version=\"0.20.0\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n", "issue": "Version on GitHub different from version on PyPI\n# Brief Description of Fix\r\n\r\n<!-- Please describe the fix in terms of a \"before\" and \"after\". In other words, what's not so good about the current docs\r\npage, and what you would like to see it become.\r\n\r\nExample starter wording is provided. -->\r\n\r\nCurrently, the version in the repo is \"0.19.0\", whereas it's \"0.20.0\" on PyPI.\r\n\r\nI would like to propose a change, such that the version is updated here.\r\n\r\n# Relevant Context\r\n\r\n<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available\r\nto get you started. -->\r\n\r\n- [Link to PyPI](https://pypi.org/project/pyjanitor/)\r\n\n", "before_files": [{"content": "import re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\ndef generate_long_description() -> str:\n \"\"\"\n Extra chunks from README for PyPI description.\n\n Target chunks must be contained within `.. pypi-doc` pair comments,\n so there must be an even number of comments in README.\n\n :returns: Extracted description from README\n\n \"\"\"\n # Read the contents of README file\n this_directory = Path(__file__).parent\n with open(this_directory / \"README.rst\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n # Find pypi-doc comments in README\n indices = [m.start() for m in re.finditer(\".. pypi-doc\", readme)]\n if len(indices) % 2 != 0:\n raise Exception(\"Odd number of `.. pypi-doc` comments in README\")\n\n # Loop through pairs of comments and save text between pairs\n long_description = \"\"\n for i in range(0, len(indices), 2):\n start_index = indices[i] + 11\n end_index = indices[i + 1]\n long_description += readme[start_index:end_index]\n return long_description\n\n\nextra_spark = [\"pyspark\"]\nextra_biology = [\"biopython\"]\nextra_chemistry = [\"rdkit\"]\nextra_engineering = [\"unyt\"]\nextra_all = extra_biology + extra_engineering + extra_spark\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.19.0\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. 
Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n extras_require={\n \"all\": extra_all,\n \"biology\": extra_biology,\n # \"chemistry\": extra_chemistry, should be inserted once rdkit\n # fixes https://github.com/rdkit/rdkit/issues/1812\n \"engineering\": extra_engineering,\n \"spark\": extra_spark,\n },\n python_requires=\">=3.6\",\n long_description=generate_long_description(),\n long_description_content_type=\"text/x-rst\",\n)\n", "path": "setup.py"}, {"content": "try:\n import janitor.xarray\nexcept ImportError:\n pass\n\nfrom .functions import * # noqa: F403, F401\nfrom .math import *\nfrom .ml import get_features_targets as _get_features_targets\nfrom .utils import refactored_function\n\n# from .dataframe import JanitorDataFrame as DataFrame # noqa: F401\n# from .dataframe import JanitorSeries as Series # noqa: F401\n\n\n@refactored_function(\n \"get_features_targets() has moved. Please use ml.get_features_targets().\"\n)\ndef get_features_targets(*args, **kwargs):\n return _get_features_targets(*args, **kwargs)\n\n\n__version__ = \"0.19.0\"\n", "path": "janitor/__init__.py"}]}
| 1,550 | 184 |
gh_patches_debug_18328
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-5219
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: conan config install
To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
The situation is as follows:
Running Conan 1.15.1 on Linux (Ubuntu 18.04 LTS).
When loading our configuration into a new `CONAN_USER_HOME` and starting to import our configuration:
`conan config install ~/work/scripts/conan/config/`
I get the following error:
`ERROR: [Errno 2] No such file or directory: '/home/j/tempconan/.conan/remotes.json'`
When I simply run `conan remote list` in between, it works afterwards.
It also works with Conan 1.14.1, so the bug must have been introduced after that version.
Here is the recipe:
```
mkdir tempconan && cd tempconan
export CONAN_USER_HOME=/home/j/tempconan
conan config install ~/work/scripts/conan/config/
```
And the callstack:
```
j@ubuntu:~/tempconan$ conan config install ~/work/scripts/conan/config/Copying file version.txt to /home/j/tempconan/.conan/.
Copying file artifacts.properties to /home/j/tempconan/.conan/.
Processing conan.conf
Traceback (most recent call last):
File "/home/j/.local/lib/python3.6/site-packages/conans/client/command.py", line 1607, in run
method(args[0][1:])
File "/home/j/.local/lib/python3.6/site-packages/conans/client/command.py", line 478, in config
target_folder=args.target_folder)
File "/home/j/.local/lib/python3.6/site-packages/conans/client/conan_api.py", line 92, in wrapper
return f(*args, **kwargs)
File "/home/j/.local/lib/python3.6/site-packages/conans/client/conan_api.py", line 621, in config_install
source_folder=source_folder, target_folder=target_folder)
File "/home/j/.local/lib/python3.6/site-packages/conans/client/conf/config_installer.py", line 230, in configuration_install
_process_config(config, cache, output, requester)
File "/home/j/.local/lib/python3.6/site-packages/conans/client/conf/config_installer.py", line 182, in _process_config
_process_folder(config, config.uri, cache, output)
File "/home/j/.local/lib/python3.6/site-packages/conans/client/conf/config_installer.py", line 93, in _process_folder
os.remove(cache.registry_path)
FileNotFoundError: [Errno 2] No such file or directory: '/home/j/tempconan/.conan/remotes.json'
ERROR: [Errno 2] No such file or directory: '/home/j/tempconan/.conan/remotes.json'
```
</issue>
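The traceback bottoms out in `os.remove(cache.registry_path)`, which fails because a fresh `CONAN_USER_HOME` has no `remotes.json` yet. A minimal sketch of the guarded-removal pattern (a hypothetical standalone helper, not Conan's actual API) looks like this:
```
import os


def remove_if_exists(path):
    """Delete a file, tolerating the case where it was never created."""
    try:
        os.remove(path)
    except OSError:
        # A brand-new cache directory has no remotes.json yet; that is fine.
        pass


# FileNotFoundError is a subclass of OSError, so calling this on a fresh
# CONAN_USER_HOME no longer fails the way a bare os.remove() does.
```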
<code>
[start of conans/client/conf/config_installer.py]
1 import json
2 import os
3 import shutil
4
5 from contextlib import contextmanager
6 from six.moves.urllib.parse import urlparse
7
8 from conans import load
9 from conans.client import tools
10 from conans.client.cache.remote_registry import load_registry_txt,\
11 migrate_registry_file
12 from conans.client.tools import Git
13 from conans.client.tools.files import unzip
14 from conans.errors import ConanException
15 from conans.util.files import mkdir, rmdir, walk, save
16
17
18 def _hide_password(resource):
19 """
20 Hide password from url/file path
21
22 :param resource: string with url or file path
23 :return: resource with hidden password if present
24 """
25 password = urlparse(resource).password
26 return resource.replace(password, "<hidden>") if password else resource
27
28
29 def _handle_remotes(cache, remote_file):
30 # FIXME: Should we encourage to pass the remotes in json?
31 remotes, _ = load_registry_txt(load(remote_file))
32 cache.registry.define(remotes)
33
34
35 @contextmanager
36 def tmp_config_install_folder(cache):
37 tmp_folder = os.path.join(cache.cache_folder, "tmp_config_install")
38 # necessary for Mac OSX, where the temp folders in /var/ are symlinks to /private/var/
39 tmp_folder = os.path.realpath(tmp_folder)
40 mkdir(tmp_folder)
41 try:
42 yield tmp_folder
43 finally:
44 rmdir(tmp_folder)
45
46
47 def _process_git_repo(config, cache, output):
48 output.info("Trying to clone repo: %s" % config.uri)
49 with tmp_config_install_folder(cache) as tmp_folder:
50 with tools.chdir(tmp_folder):
51 try:
52 args = config.args or ""
53 git = Git(verify_ssl=config.verify_ssl, output=output)
54 git.clone(config.uri, args=args)
55 output.info("Repo cloned!")
56 except Exception as e:
57 raise ConanException("Can't clone repo: %s" % str(e))
58 _process_folder(config, tmp_folder, cache, output)
59
60
61 def _process_zip_file(config, zippath, cache, output, tmp_folder, remove=False):
62 unzip(zippath, tmp_folder, output=output)
63 if remove:
64 os.unlink(zippath)
65 _process_folder(config, tmp_folder, cache, output)
66
67
68 def _handle_conan_conf(current_conan_conf, new_conan_conf_path):
69 current_conan_conf.read(new_conan_conf_path)
70 with open(current_conan_conf.filename, "w") as f:
71 current_conan_conf.write(f)
72
73
74 def _process_folder(config, folder, cache, output):
75 if config.source_folder:
76 folder = os.path.join(folder, config.source_folder)
77 for root, dirs, files in walk(folder):
78 dirs[:] = [d for d in dirs if d != ".git"]
79 if ".git" in root:
80 continue
81 for f in files:
82 if f == "settings.yml":
83 output.info("Installing settings.yml")
84 settings_path = cache.settings_path
85 shutil.copy(os.path.join(root, f), settings_path)
86 elif f == "conan.conf":
87 output.info("Processing conan.conf")
88 _handle_conan_conf(cache.config, os.path.join(root, f))
89 elif f == "remotes.txt":
90 output.info("Defining remotes from remotes.txt")
91 _handle_remotes(cache, os.path.join(root, f))
92 elif f in ("registry.txt", "registry.json"):
93 os.remove(cache.registry_path)
94 shutil.copy(os.path.join(root, f), cache.cache_folder)
95 migrate_registry_file(cache, output)
96 elif f == "remotes.json":
97 # Fix for Conan 2.0
98 raise ConanException("remotes.json install is not supported yet. Use 'remotes.txt'")
99 else:
100 # This is ugly, should be removed in Conan 2.0
101 if root == folder and f in ("README.md", "LICENSE.txt"):
102 output.info("Skip %s" % f)
103 continue
104 relpath = os.path.relpath(root, folder)
105 if config.target_folder:
106 target_folder = os.path.join(cache.cache_folder, config.target_folder,
107 relpath)
108 else:
109 target_folder = os.path.join(cache.cache_folder, relpath)
110 mkdir(target_folder)
111 output.info("Copying file %s to %s" % (f, target_folder))
112 shutil.copy(os.path.join(root, f), target_folder)
113
114
115 def _process_download(config, cache, output, requester):
116 with tmp_config_install_folder(cache) as tmp_folder:
117 output.info("Trying to download %s" % _hide_password(config.uri))
118 zippath = os.path.join(tmp_folder, "config.zip")
119 try:
120 tools.download(config.uri, zippath, out=output, verify=config.verify_ssl,
121 requester=requester)
122 _process_zip_file(config, zippath, cache, output, tmp_folder, remove=True)
123 except Exception as e:
124 raise ConanException("Error while installing config from %s\n%s" % (config.uri, str(e)))
125
126
127 class _ConfigOrigin(object):
128 def __init__(self, data):
129 self.type = data.get("type")
130 self.uri = data.get("uri")
131 self.verify_ssl = data.get("verify_ssl")
132 self.args = data.get("args")
133 self.source_folder = data.get("source_folder")
134 self.target_folder = data.get("target_folder")
135
136 def __eq__(self, other):
137 return (self.type == other.type and self.uri == other.uri and
138 self.args == other.args and self.source_folder == other.source_folder
139 and self.target_folder == other.target_folder)
140
141 def __ne__(self, other):
142 return not self.__eq__(other)
143
144 def json(self):
145 return {"type": self.type,
146 "uri": self.uri,
147 "verify_ssl": self.verify_ssl,
148 "args": self.args,
149 "source_folder": self.source_folder,
150 "target_folder": self.target_folder}
151
152 @staticmethod
153 def from_item(uri, config_type, verify_ssl, args, source_folder, target_folder):
154 config = _ConfigOrigin({})
155 if config_type:
156 config.type = config_type
157 else:
158 if uri.endswith(".git"):
159 config.type = "git"
160 elif os.path.isdir(uri):
161 config.type = "dir"
162 elif os.path.isfile(uri):
163 config.type = "file"
164 elif uri.startswith("http"):
165 config.type = "url"
166 else:
167 raise ConanException("Unable to deduce type config install: %s" % uri)
168 config.source_folder = source_folder
169 config.target_folder = target_folder
170 config.args = args
171 config.verify_ssl = verify_ssl
172 if os.path.exists(uri):
173 uri = os.path.abspath(uri)
174 config.uri = uri
175 return config
176
177
178 def _process_config(config, cache, output, requester):
179 if config.type == "git":
180 _process_git_repo(config, cache, output)
181 elif config.type == "dir":
182 _process_folder(config, config.uri, cache, output)
183 elif config.type == "file":
184 with tmp_config_install_folder(cache) as tmp_folder:
185 _process_zip_file(config, config.uri, cache, output, tmp_folder)
186 elif config.type == "url":
187 _process_download(config, cache, output, requester=requester)
188 else:
189 raise ConanException("Unable to process config install: %s" % config.uri)
190
191
192 def _save_configs(configs_file, configs):
193 save(configs_file, json.dumps([config.json() for config in configs],
194 indent=True))
195
196
197 def configuration_install(uri, cache, output, verify_ssl, requester, config_type=None,
198 args=None, source_folder=None, target_folder=None):
199 configs = []
200 configs_file = cache.config_install_file
201 if os.path.isfile(configs_file):
202 try:
203 configs = json.loads(load(configs_file))
204 except Exception as e:
205 raise ConanException("Error loading configs-install file: %s\n%"
206 % (configs_file, str(e)))
207 configs = [_ConfigOrigin(config) for config in configs]
208 if uri is None:
209 if config_type or args or not verify_ssl: # Not the defaults
210 if not configs:
211 raise ConanException("Called config install without arguments")
212 # Modify the last one
213 config = configs[-1]
214 config.config_type = config_type or config.type
215 config.args = args or config.args
216 config.verify_ssl = verify_ssl or config.verify_ssl
217 _process_config(config, cache, output, requester)
218 _save_configs(configs_file, configs)
219 else:
220 if not configs:
221 raise ConanException("Called config install without arguments")
222 # Execute the previously stored ones
223 for config in configs:
224 output.info("Config install: %s" % _hide_password(config.uri))
225 _process_config(config, cache, output, requester)
226 else:
227 # Execute and store the new one
228 config = _ConfigOrigin.from_item(uri, config_type, verify_ssl, args,
229 source_folder, target_folder)
230 _process_config(config, cache, output, requester)
231 if config not in configs:
232 configs.append(config)
233 else:
234 configs = [(c if c != config else config) for c in configs]
235 _save_configs(configs_file, configs)
236
[end of conans/client/conf/config_installer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/client/conf/config_installer.py b/conans/client/conf/config_installer.py
--- a/conans/client/conf/config_installer.py
+++ b/conans/client/conf/config_installer.py
@@ -90,9 +90,13 @@
output.info("Defining remotes from remotes.txt")
_handle_remotes(cache, os.path.join(root, f))
elif f in ("registry.txt", "registry.json"):
- os.remove(cache.registry_path)
- shutil.copy(os.path.join(root, f), cache.cache_folder)
- migrate_registry_file(cache, output)
+ try:
+ os.remove(cache.registry_path)
+ except OSError:
+ pass
+ finally:
+ shutil.copy(os.path.join(root, f), cache.cache_folder)
+ migrate_registry_file(cache, output)
elif f == "remotes.json":
# Fix for Conan 2.0
raise ConanException("remotes.json install is not supported yet. Use 'remotes.txt'")
|
{"golden_diff": "diff --git a/conans/client/conf/config_installer.py b/conans/client/conf/config_installer.py\n--- a/conans/client/conf/config_installer.py\n+++ b/conans/client/conf/config_installer.py\n@@ -90,9 +90,13 @@\n output.info(\"Defining remotes from remotes.txt\")\n _handle_remotes(cache, os.path.join(root, f))\n elif f in (\"registry.txt\", \"registry.json\"):\n- os.remove(cache.registry_path)\n- shutil.copy(os.path.join(root, f), cache.cache_folder)\n- migrate_registry_file(cache, output)\n+ try:\n+ os.remove(cache.registry_path)\n+ except OSError:\n+ pass\n+ finally:\n+ shutil.copy(os.path.join(root, f), cache.cache_folder)\n+ migrate_registry_file(cache, output)\n elif f == \"remotes.json\":\n # Fix for Conan 2.0\n raise ConanException(\"remotes.json install is not supported yet. Use 'remotes.txt'\")\n", "issue": "Bug: conan config install\nTo help us debug your issue please explain:\r\n\r\n- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n- [x] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\nFollowing situation:\r\nRunning conan 1.15.1 on Linux (Ubuntu 18.4 LTS)\r\nWhen loading our configuration to a new `CONAN_USER_HOME` and start import our configuration:\r\n`conan config install ~/work/scripts/conan/config/`\r\nI get the following error:\r\n`ERROR: [Errno 2] No such file or directory: '/home/j/tempconan/.conan/remotes.json'`\r\n\r\nWhen I just open in between `conan remote list` it works afterwards.\r\n\r\nWhen running with Conan 1.14.1 it also works. So it must be a bug afterwards.\r\n\r\nHere the recipe:\r\n```\r\nmkdir tempconan && cd tempconan\r\nexport CONAN_USER_HOME=/home/j/tempconan\r\nconan config install ~/work/scripts/conan/config/\r\n```\r\n\r\nAnd the callstack:\r\n```\r\nj@ubuntu:~/tempconan$ conan config install ~/work/scripts/conan/config/Copying file version.txt to /home/j/tempconan/.conan/.\r\nCopying file artifacts.properties to /home/j/tempconan/.conan/.\r\nProcessing conan.conf\r\nTraceback (most recent call last):\r\n File \"/home/j/.local/lib/python3.6/site-packages/conans/client/command.py\", line 1607, in run\r\n method(args[0][1:])\r\n File \"/home/j/.local/lib/python3.6/site-packages/conans/client/command.py\", line 478, in config\r\n target_folder=args.target_folder)\r\n File \"/home/j/.local/lib/python3.6/site-packages/conans/client/conan_api.py\", line 92, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/home/j/.local/lib/python3.6/site-packages/conans/client/conan_api.py\", line 621, in config_install\r\n source_folder=source_folder, target_folder=target_folder)\r\n File \"/home/j/.local/lib/python3.6/site-packages/conans/client/conf/config_installer.py\", line 230, in configuration_install\r\n _process_config(config, cache, output, requester)\r\n File \"/home/j/.local/lib/python3.6/site-packages/conans/client/conf/config_installer.py\", line 182, in _process_config\r\n _process_folder(config, config.uri, cache, output)\r\n File \"/home/j/.local/lib/python3.6/site-packages/conans/client/conf/config_installer.py\", line 93, in _process_folder\r\n os.remove(cache.registry_path)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/j/tempconan/.conan/remotes.json'\r\n\r\nERROR: [Errno 2] No such file or directory: '/home/j/tempconan/.conan/remotes.json'\r\n\r\n```\r\n\r\n\r\n\n", "before_files": 
[{"content": "import json\nimport os\nimport shutil\n\nfrom contextlib import contextmanager\nfrom six.moves.urllib.parse import urlparse\n\nfrom conans import load\nfrom conans.client import tools\nfrom conans.client.cache.remote_registry import load_registry_txt,\\\n migrate_registry_file\nfrom conans.client.tools import Git\nfrom conans.client.tools.files import unzip\nfrom conans.errors import ConanException\nfrom conans.util.files import mkdir, rmdir, walk, save\n\n\ndef _hide_password(resource):\n \"\"\"\n Hide password from url/file path\n\n :param resource: string with url or file path\n :return: resource with hidden password if present\n \"\"\"\n password = urlparse(resource).password\n return resource.replace(password, \"<hidden>\") if password else resource\n\n\ndef _handle_remotes(cache, remote_file):\n # FIXME: Should we encourage to pass the remotes in json?\n remotes, _ = load_registry_txt(load(remote_file))\n cache.registry.define(remotes)\n\n\n@contextmanager\ndef tmp_config_install_folder(cache):\n tmp_folder = os.path.join(cache.cache_folder, \"tmp_config_install\")\n # necessary for Mac OSX, where the temp folders in /var/ are symlinks to /private/var/\n tmp_folder = os.path.realpath(tmp_folder)\n mkdir(tmp_folder)\n try:\n yield tmp_folder\n finally:\n rmdir(tmp_folder)\n\n\ndef _process_git_repo(config, cache, output):\n output.info(\"Trying to clone repo: %s\" % config.uri)\n with tmp_config_install_folder(cache) as tmp_folder:\n with tools.chdir(tmp_folder):\n try:\n args = config.args or \"\"\n git = Git(verify_ssl=config.verify_ssl, output=output)\n git.clone(config.uri, args=args)\n output.info(\"Repo cloned!\")\n except Exception as e:\n raise ConanException(\"Can't clone repo: %s\" % str(e))\n _process_folder(config, tmp_folder, cache, output)\n\n\ndef _process_zip_file(config, zippath, cache, output, tmp_folder, remove=False):\n unzip(zippath, tmp_folder, output=output)\n if remove:\n os.unlink(zippath)\n _process_folder(config, tmp_folder, cache, output)\n\n\ndef _handle_conan_conf(current_conan_conf, new_conan_conf_path):\n current_conan_conf.read(new_conan_conf_path)\n with open(current_conan_conf.filename, \"w\") as f:\n current_conan_conf.write(f)\n\n\ndef _process_folder(config, folder, cache, output):\n if config.source_folder:\n folder = os.path.join(folder, config.source_folder)\n for root, dirs, files in walk(folder):\n dirs[:] = [d for d in dirs if d != \".git\"]\n if \".git\" in root:\n continue\n for f in files:\n if f == \"settings.yml\":\n output.info(\"Installing settings.yml\")\n settings_path = cache.settings_path\n shutil.copy(os.path.join(root, f), settings_path)\n elif f == \"conan.conf\":\n output.info(\"Processing conan.conf\")\n _handle_conan_conf(cache.config, os.path.join(root, f))\n elif f == \"remotes.txt\":\n output.info(\"Defining remotes from remotes.txt\")\n _handle_remotes(cache, os.path.join(root, f))\n elif f in (\"registry.txt\", \"registry.json\"):\n os.remove(cache.registry_path)\n shutil.copy(os.path.join(root, f), cache.cache_folder)\n migrate_registry_file(cache, output)\n elif f == \"remotes.json\":\n # Fix for Conan 2.0\n raise ConanException(\"remotes.json install is not supported yet. 
Use 'remotes.txt'\")\n else:\n # This is ugly, should be removed in Conan 2.0\n if root == folder and f in (\"README.md\", \"LICENSE.txt\"):\n output.info(\"Skip %s\" % f)\n continue\n relpath = os.path.relpath(root, folder)\n if config.target_folder:\n target_folder = os.path.join(cache.cache_folder, config.target_folder,\n relpath)\n else:\n target_folder = os.path.join(cache.cache_folder, relpath)\n mkdir(target_folder)\n output.info(\"Copying file %s to %s\" % (f, target_folder))\n shutil.copy(os.path.join(root, f), target_folder)\n\n\ndef _process_download(config, cache, output, requester):\n with tmp_config_install_folder(cache) as tmp_folder:\n output.info(\"Trying to download %s\" % _hide_password(config.uri))\n zippath = os.path.join(tmp_folder, \"config.zip\")\n try:\n tools.download(config.uri, zippath, out=output, verify=config.verify_ssl,\n requester=requester)\n _process_zip_file(config, zippath, cache, output, tmp_folder, remove=True)\n except Exception as e:\n raise ConanException(\"Error while installing config from %s\\n%s\" % (config.uri, str(e)))\n\n\nclass _ConfigOrigin(object):\n def __init__(self, data):\n self.type = data.get(\"type\")\n self.uri = data.get(\"uri\")\n self.verify_ssl = data.get(\"verify_ssl\")\n self.args = data.get(\"args\")\n self.source_folder = data.get(\"source_folder\")\n self.target_folder = data.get(\"target_folder\")\n\n def __eq__(self, other):\n return (self.type == other.type and self.uri == other.uri and\n self.args == other.args and self.source_folder == other.source_folder\n and self.target_folder == other.target_folder)\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def json(self):\n return {\"type\": self.type,\n \"uri\": self.uri,\n \"verify_ssl\": self.verify_ssl,\n \"args\": self.args,\n \"source_folder\": self.source_folder,\n \"target_folder\": self.target_folder}\n\n @staticmethod\n def from_item(uri, config_type, verify_ssl, args, source_folder, target_folder):\n config = _ConfigOrigin({})\n if config_type:\n config.type = config_type\n else:\n if uri.endswith(\".git\"):\n config.type = \"git\"\n elif os.path.isdir(uri):\n config.type = \"dir\"\n elif os.path.isfile(uri):\n config.type = \"file\"\n elif uri.startswith(\"http\"):\n config.type = \"url\"\n else:\n raise ConanException(\"Unable to deduce type config install: %s\" % uri)\n config.source_folder = source_folder\n config.target_folder = target_folder\n config.args = args\n config.verify_ssl = verify_ssl\n if os.path.exists(uri):\n uri = os.path.abspath(uri)\n config.uri = uri\n return config\n\n\ndef _process_config(config, cache, output, requester):\n if config.type == \"git\":\n _process_git_repo(config, cache, output)\n elif config.type == \"dir\":\n _process_folder(config, config.uri, cache, output)\n elif config.type == \"file\":\n with tmp_config_install_folder(cache) as tmp_folder:\n _process_zip_file(config, config.uri, cache, output, tmp_folder)\n elif config.type == \"url\":\n _process_download(config, cache, output, requester=requester)\n else:\n raise ConanException(\"Unable to process config install: %s\" % config.uri)\n\n\ndef _save_configs(configs_file, configs):\n save(configs_file, json.dumps([config.json() for config in configs],\n indent=True))\n\n\ndef configuration_install(uri, cache, output, verify_ssl, requester, config_type=None,\n args=None, source_folder=None, target_folder=None):\n configs = []\n configs_file = cache.config_install_file\n if os.path.isfile(configs_file):\n try:\n configs = json.loads(load(configs_file))\n 
except Exception as e:\n raise ConanException(\"Error loading configs-install file: %s\\n%\"\n % (configs_file, str(e)))\n configs = [_ConfigOrigin(config) for config in configs]\n if uri is None:\n if config_type or args or not verify_ssl: # Not the defaults\n if not configs:\n raise ConanException(\"Called config install without arguments\")\n # Modify the last one\n config = configs[-1]\n config.config_type = config_type or config.type\n config.args = args or config.args\n config.verify_ssl = verify_ssl or config.verify_ssl\n _process_config(config, cache, output, requester)\n _save_configs(configs_file, configs)\n else:\n if not configs:\n raise ConanException(\"Called config install without arguments\")\n # Execute the previously stored ones\n for config in configs:\n output.info(\"Config install: %s\" % _hide_password(config.uri))\n _process_config(config, cache, output, requester)\n else:\n # Execute and store the new one\n config = _ConfigOrigin.from_item(uri, config_type, verify_ssl, args,\n source_folder, target_folder)\n _process_config(config, cache, output, requester)\n if config not in configs:\n configs.append(config)\n else:\n configs = [(c if c != config else config) for c in configs]\n _save_configs(configs_file, configs)\n", "path": "conans/client/conf/config_installer.py"}]}
| 3,818 | 216 |
gh_patches_debug_33758
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-2587
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update default suggestions in `settings.py` to ones that work
## Description
Update docs and default suggestions in `settings.py`, because currently some of those suggestions don't actually work.
Currently, the `BaseSessionStore` is the default session store. The other possible stores a user can use are the `ShelveStore` and the `SQLiteStore` (currently part of viz).
The `ShelveStore` is the default suggestion for overriding the default in `settings.py`, but this store type will not work when users rely on any form of multiprocessing. See: https://github.com/kedro-org/kedro/issues/1442
Also look at the other default suggestions and verify that they make sense.
(Later consideration, but not part of this work)
If we move the `SQLiteStore` from viz to kedro core, we could add that as the default suggestion instead.
</issue>
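For comparison, a suggestion that works out of the box has to point at classes that ship with core Kedro. A minimal sketch of such a `settings.py` override, assuming a Kedro version that provides `OmegaConfigLoader` and `BaseSessionStore`:
```
# settings.py sketch: overriding defaults only with classes that ship in
# core Kedro (assumes a version that provides OmegaConfigLoader).
from kedro.config import OmegaConfigLoader
from kedro.framework.session.store import BaseSessionStore

SESSION_STORE_CLASS = BaseSessionStore
SESSION_STORE_ARGS = {"path": "./sessions"}

CONFIG_LOADER_CLASS = OmegaConfigLoader
```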
<code>
[start of kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py]
1 """Project settings. There is no need to edit this file unless you want to change values
2 from the Kedro defaults. For further information, including these default values, see
3 https://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html."""
4
5 # Instantiated project hooks.
6 # from {{cookiecutter.python_package}}.hooks import ProjectHooks
7 # HOOKS = (ProjectHooks(),)
8
9 # Installed plugins for which to disable hook auto-registration.
10 # DISABLE_HOOKS_FOR_PLUGINS = ("kedro-viz",)
11
12 # Class that manages storing KedroSession data.
13 # from kedro.framework.session.shelvestore import ShelveStore
14 # SESSION_STORE_CLASS = ShelveStore
15 # Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.
16 # SESSION_STORE_ARGS = {
17 # "path": "./sessions"
18 # }
19
20 # Class that manages Kedro's library components.
21 # from kedro.framework.context import KedroContext
22 # CONTEXT_CLASS = KedroContext
23
24 # Directory that holds configuration.
25 # CONF_SOURCE = "conf"
26
27 # Class that manages how configuration is loaded.
28 # CONFIG_LOADER_CLASS = ConfigLoader
29 # Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.
30 # CONFIG_LOADER_ARGS = {
31 # "config_patterns": {
32 # "spark" : ["spark*/"],
33 # "parameters": ["parameters*", "parameters*/**", "**/parameters*"],
34 # }
35 # }
36
37 # Class that manages the Data Catalog.
38 # from kedro.io import DataCatalog
39 # DATA_CATALOG_CLASS = DataCatalog
40
[end of kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
--- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
+++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
@@ -3,6 +3,7 @@
https://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html."""
# Instantiated project hooks.
+# For example, after creating a hooks.py and defining a ProjectHooks class there, do
# from {{cookiecutter.python_package}}.hooks import ProjectHooks
# HOOKS = (ProjectHooks(),)
@@ -10,22 +11,19 @@
# DISABLE_HOOKS_FOR_PLUGINS = ("kedro-viz",)
# Class that manages storing KedroSession data.
-# from kedro.framework.session.shelvestore import ShelveStore
-# SESSION_STORE_CLASS = ShelveStore
+# from kedro.framework.session.store import BaseSessionStore
+# SESSION_STORE_CLASS = BaseSessionStore
# Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.
# SESSION_STORE_ARGS = {
# "path": "./sessions"
# }
-# Class that manages Kedro's library components.
-# from kedro.framework.context import KedroContext
-# CONTEXT_CLASS = KedroContext
-
# Directory that holds configuration.
# CONF_SOURCE = "conf"
# Class that manages how configuration is loaded.
-# CONFIG_LOADER_CLASS = ConfigLoader
+# from kedro.config import OmegaConfigLoader
+# CONFIG_LOADER_CLASS = OmegaConfigLoader
# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.
# CONFIG_LOADER_ARGS = {
# "config_patterns": {
@@ -34,6 +32,10 @@
# }
# }
+# Class that manages Kedro's library components.
+# from kedro.framework.context import KedroContext
+# CONTEXT_CLASS = KedroContext
+
# Class that manages the Data Catalog.
# from kedro.io import DataCatalog
# DATA_CATALOG_CLASS = DataCatalog
|
{"golden_diff": "diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\n--- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\t\n+++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\t\n@@ -3,6 +3,7 @@\n https://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html.\"\"\"\n \n # Instantiated project hooks.\n+# For example, after creating a hooks.py and defining a ProjectHooks class there, do\n # from {{cookiecutter.python_package}}.hooks import ProjectHooks\n # HOOKS = (ProjectHooks(),)\n \n@@ -10,22 +11,19 @@\n # DISABLE_HOOKS_FOR_PLUGINS = (\"kedro-viz\",)\n \n # Class that manages storing KedroSession data.\n-# from kedro.framework.session.shelvestore import ShelveStore\n-# SESSION_STORE_CLASS = ShelveStore\n+# from kedro.framework.session.store import BaseSessionStore\n+# SESSION_STORE_CLASS = BaseSessionStore\n # Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.\n # SESSION_STORE_ARGS = {\n # \"path\": \"./sessions\"\n # }\n \n-# Class that manages Kedro's library components.\n-# from kedro.framework.context import KedroContext\n-# CONTEXT_CLASS = KedroContext\n-\n # Directory that holds configuration.\n # CONF_SOURCE = \"conf\"\n \n # Class that manages how configuration is loaded.\n-# CONFIG_LOADER_CLASS = ConfigLoader\n+# from kedro.config import OmegaConfigLoader\n+# CONFIG_LOADER_CLASS = OmegaConfigLoader\n # Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.\n # CONFIG_LOADER_ARGS = {\n # \"config_patterns\": {\n@@ -34,6 +32,10 @@\n # }\n # }\n \n+# Class that manages Kedro's library components.\n+# from kedro.framework.context import KedroContext\n+# CONTEXT_CLASS = KedroContext\n+\n # Class that manages the Data Catalog.\n # from kedro.io import DataCatalog\n # DATA_CATALOG_CLASS = DataCatalog\n", "issue": "Update default suggestions in `settings.py` to ones that work\n## Description\r\nUpdate docs and default suggestions in `settings.py`, because currently some of those suggestions don't actually work. \r\n\r\nCurrently, the `BaseSessionStore` is the default session store. The other possible stores a user can use are the `ShelveStore` and the `SQLiteStore` (currently part of viz).\r\n\r\nThe `ShelveStore` is the default suggestion to override the default in `settings.py`, but when users are using some sort of multiprocessing this store type will not work. See: https://github.com/kedro-org/kedro/issues/1442\r\n\r\nAlso look at the other default suggestions and verify that they make sense. \r\n\r\n(Later consideration, but not part of this work)\r\nIf we move the `SQLiteStore` from viz to kedro core, we could add that as the default suggestion instead. \r\n\n", "before_files": [{"content": "\"\"\"Project settings. There is no need to edit this file unless you want to change values\nfrom the Kedro defaults. 
For further information, including these default values, see\nhttps://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html.\"\"\"\n\n# Instantiated project hooks.\n# from {{cookiecutter.python_package}}.hooks import ProjectHooks\n# HOOKS = (ProjectHooks(),)\n\n# Installed plugins for which to disable hook auto-registration.\n# DISABLE_HOOKS_FOR_PLUGINS = (\"kedro-viz\",)\n\n# Class that manages storing KedroSession data.\n# from kedro.framework.session.shelvestore import ShelveStore\n# SESSION_STORE_CLASS = ShelveStore\n# Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.\n# SESSION_STORE_ARGS = {\n# \"path\": \"./sessions\"\n# }\n\n# Class that manages Kedro's library components.\n# from kedro.framework.context import KedroContext\n# CONTEXT_CLASS = KedroContext\n\n# Directory that holds configuration.\n# CONF_SOURCE = \"conf\"\n\n# Class that manages how configuration is loaded.\n# CONFIG_LOADER_CLASS = ConfigLoader\n# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.\n# CONFIG_LOADER_ARGS = {\n# \"config_patterns\": {\n# \"spark\" : [\"spark*/\"],\n# \"parameters\": [\"parameters*\", \"parameters*/**\", \"**/parameters*\"],\n# }\n# }\n\n# Class that manages the Data Catalog.\n# from kedro.io import DataCatalog\n# DATA_CATALOG_CLASS = DataCatalog\n", "path": "kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py"}]}
| 1,163 | 496 |
gh_patches_debug_51452
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-389
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create desktop/application menu shortcut writes a bad .desktop file
File contents:
```
[Desktop Entry]
Type=Application
Name=%s
Icon=%s
Exec=lutris lutris:%s
Categories=Game
```
**How to reproduce**
Right click a game and select Create desktop shortcut.
Navigate to ~/Desktop
You will see a file named `gameslug-id.desktop`, but its contents are the unformatted template shown above. A file manager displays the launcher's Name field instead of the filename, so the entry shows up as `%s` there.
**Lutris debug output**
```
[system]:Executing which xdg-user-dir
```
Operating system: Arch Linux
</issue>
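The literal `%s` placeholders survive because the launcher template mixes percent-style placeholders with a call to `str.format()`, which only fills `{}` fields (see `create_launcher` in `shortcuts.py` below). A small self-contained sketch of the difference, using made-up example values:
```
# str.format() only fills {} fields; percent-style placeholders pass through.
percent_template = "Name=%s\nIcon=%s\nExec=lutris lutris:%s"
print(percent_template.format("Quake", "lutris_quake", 42))
# Prints the %s placeholders verbatim, i.e. the broken .desktop contents.

brace_template = "Name={}\nIcon={}\nExec=lutris lutris:{}"
print(brace_template.format("Quake", "lutris_quake", 42))
# Prints Name=Quake, Icon=lutris_quake, Exec=lutris lutris:42 as intended.
```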
<code>
[start of lutris/shortcuts.py]
1 """Desktop file creator."""
2 import os
3 import stat
4 import shutil
5 import subprocess
6
7 from textwrap import dedent
8 from xdg import BaseDirectory
9 from gi.repository import GLib
10
11 from lutris.util import system
12 from lutris.util.log import logger
13 from lutris.settings import CACHE_DIR
14
15
16 def get_xdg_basename(game_slug, game_id, legacy=False):
17 if legacy:
18 filename = "{}.desktop".format(game_slug)
19 else:
20 filename = "{}-{}.desktop".format(game_slug, game_id)
21 return filename
22
23
24 def create_launcher(game_slug, game_id, game_name, desktop=False, menu=False):
25 """Create a .desktop file."""
26 desktop_dir = (
27 GLib.get_user_special_dir(GLib.UserDirectory.DIRECTORY_DESKTOP)
28 )
29 launcher_content = dedent(
30 """
31 [Desktop Entry]
32 Type=Application
33 Name=%s
34 Icon=%s
35 Exec=lutris lutris:%s
36 Categories=Game
37 """.format(game_name, 'lutris_{}'.format(game_slug), game_id)
38 )
39
40 launcher_filename = get_xdg_basename(game_slug, game_id, legacy=False)
41 tmp_launcher_path = os.path.join(CACHE_DIR, launcher_filename)
42 tmp_launcher = open(tmp_launcher_path, "w")
43 tmp_launcher.write(launcher_content)
44 tmp_launcher.close()
45 os.chmod(tmp_launcher_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC |
46 stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP)
47
48 if desktop:
49 shutil.copy(tmp_launcher_path,
50 os.path.join(desktop_dir, launcher_filename))
51 if menu:
52 menu_path = os.path.join(GLib.get_user_data_dir(), 'applications')
53 shutil.copy(tmp_launcher_path,
54 os.path.join(menu_path, launcher_filename))
55 os.remove(tmp_launcher_path)
56
57
58 def get_launcher_path(game_slug, game_id):
59 """Return the path of a XDG game launcher.
60 When legacy is set, it will return the old path with only the slug,
61 otherwise it will return the path with slug + id
62 """
63 xdg_executable = 'xdg-user-dir'
64 if not system.find_executable(xdg_executable):
65 logger.error("%s not found", xdg_executable)
66 return
67 desktop_dir = subprocess.Popen([xdg_executable, 'DESKTOP'],
68 stdout=subprocess.PIPE).communicate()[0]
69 desktop_dir = str(desktop_dir).strip()
70
71 legacy_launcher_path = os.path.join(
72 desktop_dir, get_xdg_basename(game_slug, game_id, legacy=True)
73 )
74 # First check if legacy path exists, for backward compatibility
75 if system.path_exists(legacy_launcher_path):
76 return legacy_launcher_path
77 # Otherwise return new path, whether it exists or not
78 return os.path.join(
79 desktop_dir, get_xdg_basename(game_slug, game_id, legacy=False)
80 )
81
82
83 def get_menu_launcher_path(game_slug, game_id):
84 """Return the path to a XDG menu launcher, prioritizing legacy paths if
85 they exist
86 """
87 menu_dir = os.path.join(BaseDirectory.xdg_data_home, 'applications')
88 menu_path = os.path.join(
89 menu_dir, get_xdg_basename(game_slug, game_id, legacy=True)
90 )
91 if system.path_exists(menu_path):
92 return menu_path
93 return os.path.join(
94 menu_dir, get_xdg_basename(game_slug, game_id, legacy=False)
95 )
96
97
98 def desktop_launcher_exists(game_slug, game_id):
99 return system.path_exists(get_launcher_path(game_slug, game_id))
100
101
102 def menu_launcher_exists(game_slug, game_id):
103 return system.path_exists(get_menu_launcher_path(game_slug, game_id))
104
105
106 def remove_launcher(game_slug, game_id, desktop=False, menu=False):
107 """Remove existing .desktop file."""
108 if desktop:
109 launcher_path = get_launcher_path(game_slug, game_id)
110 if system.path_exists(launcher_path):
111 os.remove(launcher_path)
112
113 if menu:
114 menu_path = get_menu_launcher_path(game_slug, game_id)
115 if system.path_exists(menu_path):
116 os.remove(menu_path)
117
[end of lutris/shortcuts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/shortcuts.py b/lutris/shortcuts.py
--- a/lutris/shortcuts.py
+++ b/lutris/shortcuts.py
@@ -30,9 +30,9 @@
"""
[Desktop Entry]
Type=Application
- Name=%s
- Icon=%s
- Exec=lutris lutris:%s
+ Name={}
+ Icon={}
+ Exec=lutris lutris:{}
Categories=Game
""".format(game_name, 'lutris_{}'.format(game_slug), game_id)
)
|
{"golden_diff": "diff --git a/lutris/shortcuts.py b/lutris/shortcuts.py\n--- a/lutris/shortcuts.py\n+++ b/lutris/shortcuts.py\n@@ -30,9 +30,9 @@\n \"\"\"\n [Desktop Entry]\n Type=Application\n- Name=%s\n- Icon=%s\n- Exec=lutris lutris:%s\n+ Name={}\n+ Icon={}\n+ Exec=lutris lutris:{}\n Categories=Game\n \"\"\".format(game_name, 'lutris_{}'.format(game_slug), game_id)\n )\n", "issue": "Create desktop/application menu shortcut writes a bad .desktop file\nFile contents:\n\n```\n[Desktop Entry]\nType=Application\nName=%s\nIcon=%s\nExec=lutris lutris:%s\nCategories=Game\n```\n\n**How to reproduce**\nRight click a game and select Create desktop shortcut.\nNavigate to ~/Desktop\nYou see a file with name `gameslug-id.desktop` but it contains what's above. If you're in a file manager you see the game title instead of the filename, so it appears as `%s` there.\n\n**Lutris debug output**\n\n```\n[system]:Executing which xdg-user-dir\n```\n\nOperating system: Arch Linux\n\n", "before_files": [{"content": "\"\"\"Desktop file creator.\"\"\"\nimport os\nimport stat\nimport shutil\nimport subprocess\n\nfrom textwrap import dedent\nfrom xdg import BaseDirectory\nfrom gi.repository import GLib\n\nfrom lutris.util import system\nfrom lutris.util.log import logger\nfrom lutris.settings import CACHE_DIR\n\n\ndef get_xdg_basename(game_slug, game_id, legacy=False):\n if legacy:\n filename = \"{}.desktop\".format(game_slug)\n else:\n filename = \"{}-{}.desktop\".format(game_slug, game_id)\n return filename\n\n\ndef create_launcher(game_slug, game_id, game_name, desktop=False, menu=False):\n \"\"\"Create a .desktop file.\"\"\"\n desktop_dir = (\n GLib.get_user_special_dir(GLib.UserDirectory.DIRECTORY_DESKTOP)\n )\n launcher_content = dedent(\n \"\"\"\n [Desktop Entry]\n Type=Application\n Name=%s\n Icon=%s\n Exec=lutris lutris:%s\n Categories=Game\n \"\"\".format(game_name, 'lutris_{}'.format(game_slug), game_id)\n )\n\n launcher_filename = get_xdg_basename(game_slug, game_id, legacy=False)\n tmp_launcher_path = os.path.join(CACHE_DIR, launcher_filename)\n tmp_launcher = open(tmp_launcher_path, \"w\")\n tmp_launcher.write(launcher_content)\n tmp_launcher.close()\n os.chmod(tmp_launcher_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC |\n stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP)\n\n if desktop:\n shutil.copy(tmp_launcher_path,\n os.path.join(desktop_dir, launcher_filename))\n if menu:\n menu_path = os.path.join(GLib.get_user_data_dir(), 'applications')\n shutil.copy(tmp_launcher_path,\n os.path.join(menu_path, launcher_filename))\n os.remove(tmp_launcher_path)\n\n\ndef get_launcher_path(game_slug, game_id):\n \"\"\"Return the path of a XDG game launcher.\n When legacy is set, it will return the old path with only the slug,\n otherwise it will return the path with slug + id\n \"\"\"\n xdg_executable = 'xdg-user-dir'\n if not system.find_executable(xdg_executable):\n logger.error(\"%s not found\", xdg_executable)\n return\n desktop_dir = subprocess.Popen([xdg_executable, 'DESKTOP'],\n stdout=subprocess.PIPE).communicate()[0]\n desktop_dir = str(desktop_dir).strip()\n\n legacy_launcher_path = os.path.join(\n desktop_dir, get_xdg_basename(game_slug, game_id, legacy=True)\n )\n # First check if legacy path exists, for backward compatibility\n if system.path_exists(legacy_launcher_path):\n return legacy_launcher_path\n # Otherwise return new path, whether it exists or not\n return os.path.join(\n desktop_dir, get_xdg_basename(game_slug, game_id, legacy=False)\n )\n\n\ndef get_menu_launcher_path(game_slug, game_id):\n 
\"\"\"Return the path to a XDG menu launcher, prioritizing legacy paths if\n they exist\n \"\"\"\n menu_dir = os.path.join(BaseDirectory.xdg_data_home, 'applications')\n menu_path = os.path.join(\n menu_dir, get_xdg_basename(game_slug, game_id, legacy=True)\n )\n if system.path_exists(menu_path):\n return menu_path\n return os.path.join(\n menu_dir, get_xdg_basename(game_slug, game_id, legacy=False)\n )\n\n\ndef desktop_launcher_exists(game_slug, game_id):\n return system.path_exists(get_launcher_path(game_slug, game_id))\n\n\ndef menu_launcher_exists(game_slug, game_id):\n return system.path_exists(get_menu_launcher_path(game_slug, game_id))\n\n\ndef remove_launcher(game_slug, game_id, desktop=False, menu=False):\n \"\"\"Remove existing .desktop file.\"\"\"\n if desktop:\n launcher_path = get_launcher_path(game_slug, game_id)\n if system.path_exists(launcher_path):\n os.remove(launcher_path)\n\n if menu:\n menu_path = get_menu_launcher_path(game_slug, game_id)\n if system.path_exists(menu_path):\n os.remove(menu_path)\n", "path": "lutris/shortcuts.py"}]}
| 1,814 | 130 |
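The lutris row above comes down to a placeholder mismatch: the .desktop template used printf-style %s markers but was filled in with str.format(), which only substitutes {} fields. A minimal sketch of that behaviour, using hypothetical game values, is:

```python
from textwrap import dedent

game_name, game_slug, game_id = "Example Game", "example-game", 42  # hypothetical values

# str.format() leaves printf-style markers untouched, which is why the
# generated launcher file literally contained "Name=%s":
broken = "Name=%s".format(game_name)   # -> 'Name=%s'

# With {} placeholders, as in the golden diff, the values are substituted:
fixed = dedent(
    """
    [Desktop Entry]
    Type=Application
    Name={}
    Icon={}
    Exec=lutris lutris:{}
    Categories=Game
    """.format(game_name, "lutris_{}".format(game_slug), game_id)
)
print(fixed)
```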
gh_patches_debug_41709 | rasdani/github-patches | git_diff | lisa-lab__pylearn2-1276 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove core files from whitelist
- [x] models/mlp.py
- [x] `costs/mlp/__init__.py`
- [x] costs/mlp/dropout.py
- [x] monitor.py
- [x] optimization/batch_gradient_descent.py
- [x] blocks.py
- [ ] expr/nnet.py
- [x] costs/cost.py
- [x] datasets/dense_design_matrix.py
- [x] datasets/dataset.py
</issue>
<code>
[start of pylearn2/expr/nnet.py]
1 """
2 Useful expressions common to many neural network applications.
3 """
4 __authors__ = "Ian Goodfellow"
5 __copyright__ = "Copyright 2010-2013, Universite de Montreal"
6 __credits__ = ["Ian Goodfellow"]
7 __license__ = "3-clause BSD"
8 __maintainer__ = "LISA Lab"
9 __email__ = "pylearn-dev@googlegroups"
10
11 import numpy as np
12 import theano
13 from theano.printing import Print
14 from theano import tensor as T
15 from theano.gof.op import get_debug_values
16
17 def softmax_numpy(x):
18 """
19 .. todo::
20
21 WRITEME properly
22
23 Parameters
24 ----------
25 x : matrix
26
27 Returns
28 -------
29 rval : vector
30 rval[i] is the softmax of row i of x
31 """
32 stable_x = (x.T - x.max(axis=1)).T
33 numer = np.exp(stable_x)
34 return (numer.T / numer.sum(axis=1)).T
35
36 def pseudoinverse_softmax_numpy(x):
37 """
38 .. todo::
39
40 WRITEME properly
41
42 Parameters
43 ----------
44 x : vector
45
46 Returns
47 -------
48 y : vector
49 softmax(y) = x
50
51 Notes
52 -----
53 This problem is underdetermined, so we also impose y.mean() = 0
54 """
55 rval = np.log(x)
56 rval -= rval.mean()
57 return rval
58
59 def sigmoid_numpy(x):
60 """
61 .. todo::
62
63 WRITEME
64 """
65 assert not isinstance(x, theano.gof.Variable)
66 return 1. / (1. + np.exp(-x))
67
68 def inverse_sigmoid_numpy(x):
69 """
70 .. todo::
71
72 WRITEME
73 """
74 return np.log(x / (1. - x))
75
76 def arg_of_softmax(Y_hat):
77 """
78 Given the output of a call to theano.tensor.nnet.softmax,
79 returns the argument to the softmax (by tracing the Theano
80 graph).
81
82 Parameters
83 ----------
84 Y_hat : Variable
85 softmax(Z)
86
87 Returns
88 -------
89 Z : Variable
90 The variable that was passed to the Softmax op to create `Y_hat`.
91 Raises an error if `Y_hat` is not actually the output of a
92 Softmax.
93 """
94 assert hasattr(Y_hat, 'owner')
95 owner = Y_hat.owner
96 assert owner is not None
97 op = owner.op
98 if isinstance(op, Print):
99 assert len(owner.inputs) == 1
100 Y_hat, = owner.inputs
101 owner = Y_hat.owner
102 op = owner.op
103 if not isinstance(op, T.nnet.Softmax):
104 raise ValueError("Expected Y_hat to be the output of a softmax, "
105 "but it appears to be the output of " + str(op) + " of type "
106 + str(type(op)))
107 z ,= owner.inputs
108 assert z.ndim == 2
109 return z
110
111 def kl(Y, Y_hat, batch_axis):
112 """
113 Warning: This function expects a sigmoid nonlinearity in the
114 output layer. Returns a batch (vector) of mean across units of
115 KL divergence for each example,
116 KL(P || Q) where P is defined by Y and Q is defined by Y_hat:
117
118 p log p - p log q + (1-p) log (1-p) - (1-p) log (1-q)
119 For binary p, some terms drop out:
120 - p log q - (1-p) log (1-q)
121 - p log sigmoid(z) - (1-p) log sigmoid(-z)
122 p softplus(-z) + (1-p) softplus(z)
123
124 Parameters
125 ----------
126 Y : Variable
127 targets for the sigmoid outputs. Currently Y must be purely binary.
128 If it's not, you'll still get the right gradient, but the
129 value in the monitoring channel will be wrong.
130 Y_hat : Variable
131 predictions made by the sigmoid layer. Y_hat must be generated by
132 fprop, i.e., it must be a symbolic sigmoid.
133 batch_axis : list
134 list of axes to compute average kl divergence across.
135
136 Returns
137 -------
138 ave : Variable
139 average kl divergence between Y and Y_hat.
140 """
141
142 assert hasattr(Y_hat, 'owner')
143 assert batch_axis is not None
144
145 owner = Y_hat.owner
146 assert owner is not None
147 op = owner.op
148
149 if not hasattr(op, 'scalar_op'):
150 raise ValueError("Expected Y_hat to be generated by an Elemwise "
151 "op, got "+str(op)+" of type "+str(type(op)))
152 assert isinstance(op.scalar_op, T.nnet.sigm.ScalarSigmoid)
153
154 for Yv in get_debug_values(Y):
155 if not (Yv.min() >= 0.0 and Yv.max() <= 1.0):
156 raise ValueError("Expected Y to be between 0 and 1. Either Y"
157 + "< 0 or Y > 1 was found in the input.")
158
159 z, = owner.inputs
160
161 term_1 = Y * T.nnet.softplus(-z)
162 term_2 = (1 - Y) * T.nnet.softplus(z)
163
164 total = term_1 + term_2
165 naxes = total.ndim
166 axes_to_reduce = list(range(naxes))
167 del axes_to_reduce[batch_axis]
168 ave = total.mean(axis=axes_to_reduce)
169
170 return ave
171
172 def elemwise_kl(Y, Y_hat):
173 """
174 Warning: This function expects a sigmoid nonlinearity in the
175 output layer. Returns a batch (vector) of mean across units of
176 KL divergence for each example,
177 KL(P || Q) where P is defined by Y and Q is defined by Y_hat:
178
179 p log p - p log q + (1-p) log (1-p) - (1-p) log (1-q)
180 For binary p, some terms drop out:
181 - p log q - (1-p) log (1-q)
182 - p log sigmoid(z) - (1-p) log sigmoid(-z)
183 p softplus(-z) + (1-p) softplus(z)
184
185 Parameters
186 ----------
187 Y : Variable
188 targets for the sigmoid outputs. Currently Y must be purely binary.
189 If it's not, you'll still get the right gradient, but the
190 value in the monitoring channel will be wrong.
191 Y_hat : Variable
192 predictions made by the sigmoid layer. Y_hat must be generated by
193 fprop, i.e., it must be a symbolic sigmoid.
194
195 Returns
196 -------
197 ave : Variable
198 kl divergence between Y and Y_hat.
199 """
200 assert hasattr(Y_hat, 'owner')
201
202 owner = Y_hat.owner
203 assert owner is not None
204 op = owner.op
205
206 if not hasattr(op, 'scalar_op'):
207 raise ValueError("Expected Y_hat to be generated by an Elemwise "
208 "op, got "+str(op)+" of type "+str(type(op)))
209 assert isinstance(op.scalar_op, T.nnet.sigm.ScalarSigmoid)
210
211 for Yv in get_debug_values(Y):
212 if not (Yv.min() >= 0.0 and Yv.max() <= 1.0):
213 raise ValueError("Expected Y to be between 0 and 1. Either Y"
214 + "< 0 or Y > 1 was found in the input.")
215
216 z, = owner.inputs
217
218 term_1 = Y * T.nnet.softplus(-z)
219 term_2 = (1 - Y) * T.nnet.softplus(z)
220
221 total = term_1 + term_2
222
223 return total
224
225
226 def softmax_ratio(numer, denom):
227 """
228 .. todo::
229
230 WRITEME properly
231
232 Parameters
233 ----------
234 numer : Variable
235 Output of a softmax.
236 denom : Variable
237 Output of a softmax.
238
239 Returns
240 -------
241 ratio : Variable
242 numer / denom, computed in a numerically stable way
243 """
244
245 numer_Z = arg_of_softmax(numer)
246 denom_Z = arg_of_softmax(denom)
247 numer_Z -= numer_Z.max(axis=1).dimshuffle(0, 'x')
248 denom_Z -= denom_Z.min(axis=1).dimshuffle(0, 'x')
249
250 new_num = T.exp(numer_Z - denom_Z) * (T.exp(denom_Z).sum(
251 axis=1).dimshuffle(0, 'x'))
252 new_den = (T.exp(numer_Z).sum(axis=1).dimshuffle(0, 'x'))
253
254 return new_num / new_den
255
256 def compute_precision(tp, fp):
257 """
258 Computes the precision for the binary decisions.
259 Computed as tp/(tp + fp).
260
261 Parameters
262 ----------
263 tp : Variable
264 True positives.
265 fp : Variable
266 False positives.
267
268 Returns
269 -------
270 precision : Variable
271 Precision of the binary classifications.
272 """
273 precision = tp / T.maximum(1., tp + fp)
274 return precision
275
276 def compute_recall(y, tp):
277 """
278 Computes the recall for the binary classification.
279
280 Parameters
281 ----------
282 y : Variable
283 Targets for the binary classifications.
284 tp : Variable
285 True positives.
286
287 Returns
288 -------
289 recall : Variable
290 Recall for the binary classification.
291 """
292 recall = tp / T.maximum(1., y.sum())
293 return recall
294
295 def compute_f1(precision, recall):
296 """
297 Computes the f1 score for the binary classification.
298 Computed as,
299
300 f1 = 2 * precision * recall / (precision + recall)
301
302 Parameters
303 ----------
304 precision : Variable
305 Precision score of the binary decisions.
306 recall : Variable
307 Recall score of the binary decisions.
308
309 Returns
310 -------
311 f1 : Variable
312 f1 score for the binary decisions.
313 """
314 f1 = (2. * precision * recall /
315 T.maximum(1, precision + recall))
316 return f1
317
318
[end of pylearn2/expr/nnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pylearn2/expr/nnet.py b/pylearn2/expr/nnet.py
--- a/pylearn2/expr/nnet.py
+++ b/pylearn2/expr/nnet.py
@@ -14,6 +14,7 @@
from theano import tensor as T
from theano.gof.op import get_debug_values
+
def softmax_numpy(x):
"""
.. todo::
@@ -33,6 +34,7 @@
numer = np.exp(stable_x)
return (numer.T / numer.sum(axis=1)).T
+
def pseudoinverse_softmax_numpy(x):
"""
.. todo::
@@ -56,6 +58,7 @@
rval -= rval.mean()
return rval
+
def sigmoid_numpy(x):
"""
.. todo::
@@ -65,6 +68,7 @@
assert not isinstance(x, theano.gof.Variable)
return 1. / (1. + np.exp(-x))
+
def inverse_sigmoid_numpy(x):
"""
.. todo::
@@ -73,6 +77,7 @@
"""
return np.log(x / (1. - x))
+
def arg_of_softmax(Y_hat):
"""
Given the output of a call to theano.tensor.nnet.softmax,
@@ -102,12 +107,13 @@
op = owner.op
if not isinstance(op, T.nnet.Softmax):
raise ValueError("Expected Y_hat to be the output of a softmax, "
- "but it appears to be the output of " + str(op) + " of type "
- + str(type(op)))
- z ,= owner.inputs
+ "but it appears to be the output of " + str(op) +
+ " of type " + str(type(op)))
+ z, = owner.inputs
assert z.ndim == 2
return z
+
def kl(Y, Y_hat, batch_axis):
"""
Warning: This function expects a sigmoid nonlinearity in the
@@ -169,6 +175,7 @@
return ave
+
def elemwise_kl(Y, Y_hat):
"""
Warning: This function expects a sigmoid nonlinearity in the
@@ -253,6 +260,7 @@
return new_num / new_den
+
def compute_precision(tp, fp):
"""
Computes the precision for the binary decisions.
@@ -273,6 +281,7 @@
precision = tp / T.maximum(1., tp + fp)
return precision
+
def compute_recall(y, tp):
"""
Computes the recall for the binary classification.
@@ -292,6 +301,7 @@
recall = tp / T.maximum(1., y.sum())
return recall
+
def compute_f1(precision, recall):
"""
Computes the f1 score for the binary classification.
@@ -312,6 +322,5 @@
f1 score for the binary decisions.
"""
f1 = (2. * precision * recall /
- T.maximum(1, precision + recall))
- return f1
-
+ T.maximum(1, precision + recall))
+ return f1
\ No newline at end of file
|
{"golden_diff": "diff --git a/pylearn2/expr/nnet.py b/pylearn2/expr/nnet.py\n--- a/pylearn2/expr/nnet.py\n+++ b/pylearn2/expr/nnet.py\n@@ -14,6 +14,7 @@\n from theano import tensor as T\n from theano.gof.op import get_debug_values\n \n+\n def softmax_numpy(x):\n \"\"\"\n .. todo::\n@@ -33,6 +34,7 @@\n numer = np.exp(stable_x)\n return (numer.T / numer.sum(axis=1)).T\n \n+\n def pseudoinverse_softmax_numpy(x):\n \"\"\"\n .. todo::\n@@ -56,6 +58,7 @@\n rval -= rval.mean()\n return rval\n \n+\n def sigmoid_numpy(x):\n \"\"\"\n .. todo::\n@@ -65,6 +68,7 @@\n assert not isinstance(x, theano.gof.Variable)\n return 1. / (1. + np.exp(-x))\n \n+\n def inverse_sigmoid_numpy(x):\n \"\"\"\n .. todo::\n@@ -73,6 +77,7 @@\n \"\"\"\n return np.log(x / (1. - x))\n \n+\n def arg_of_softmax(Y_hat):\n \"\"\"\n Given the output of a call to theano.tensor.nnet.softmax,\n@@ -102,12 +107,13 @@\n op = owner.op\n if not isinstance(op, T.nnet.Softmax):\n raise ValueError(\"Expected Y_hat to be the output of a softmax, \"\n- \"but it appears to be the output of \" + str(op) + \" of type \"\n- + str(type(op)))\n- z ,= owner.inputs\n+ \"but it appears to be the output of \" + str(op) +\n+ \" of type \" + str(type(op)))\n+ z, = owner.inputs\n assert z.ndim == 2\n return z\n \n+\n def kl(Y, Y_hat, batch_axis):\n \"\"\"\n Warning: This function expects a sigmoid nonlinearity in the\n@@ -169,6 +175,7 @@\n \n return ave\n \n+\n def elemwise_kl(Y, Y_hat):\n \"\"\"\n Warning: This function expects a sigmoid nonlinearity in the\n@@ -253,6 +260,7 @@\n \n return new_num / new_den\n \n+\n def compute_precision(tp, fp):\n \"\"\"\n Computes the precision for the binary decisions.\n@@ -273,6 +281,7 @@\n precision = tp / T.maximum(1., tp + fp)\n return precision\n \n+\n def compute_recall(y, tp):\n \"\"\"\n Computes the recall for the binary classification.\n@@ -292,6 +301,7 @@\n recall = tp / T.maximum(1., y.sum())\n return recall\n \n+\n def compute_f1(precision, recall):\n \"\"\"\n Computes the f1 score for the binary classification.\n@@ -312,6 +322,5 @@\n f1 score for the binary decisions.\n \"\"\"\n f1 = (2. * precision * recall /\n- T.maximum(1, precision + recall))\n- return f1\n-\n+ T.maximum(1, precision + recall))\n+ return f1\n\\ No newline at end of file\n", "issue": "Remove core files from whitelist\n- [x] models/mlp.py\n- [x] `costs/mlp/__init__.py`\n- [x] costs/mlp/dropout.py\n- [x] monitor.py\n- [x] optimization/batch_gradient_descent.py\n- [x] blocks.py\n- [ ] expr/nnet.py\n- [x] costs/cost.py\n- [x] datasets/dense_design_matrix.py\n- [x] datasets/dataset.py\n\n", "before_files": [{"content": "\"\"\"\nUseful expressions common to many neural network applications.\n\"\"\"\n__authors__ = \"Ian Goodfellow\"\n__copyright__ = \"Copyright 2010-2013, Universite de Montreal\"\n__credits__ = [\"Ian Goodfellow\"]\n__license__ = \"3-clause BSD\"\n__maintainer__ = \"LISA Lab\"\n__email__ = \"pylearn-dev@googlegroups\"\n\nimport numpy as np\nimport theano\nfrom theano.printing import Print\nfrom theano import tensor as T\nfrom theano.gof.op import get_debug_values\n\ndef softmax_numpy(x):\n \"\"\"\n .. todo::\n\n WRITEME properly\n\n Parameters\n ----------\n x : matrix\n\n Returns\n -------\n rval : vector\n rval[i] is the softmax of row i of x\n \"\"\"\n stable_x = (x.T - x.max(axis=1)).T\n numer = np.exp(stable_x)\n return (numer.T / numer.sum(axis=1)).T\n\ndef pseudoinverse_softmax_numpy(x):\n \"\"\"\n .. 
todo::\n\n WRITEME properly\n\n Parameters\n ----------\n x : vector\n\n Returns\n -------\n y : vector\n softmax(y) = x\n\n Notes\n -----\n This problem is underdetermined, so we also impose y.mean() = 0\n \"\"\"\n rval = np.log(x)\n rval -= rval.mean()\n return rval\n\ndef sigmoid_numpy(x):\n \"\"\"\n .. todo::\n\n WRITEME\n \"\"\"\n assert not isinstance(x, theano.gof.Variable)\n return 1. / (1. + np.exp(-x))\n\ndef inverse_sigmoid_numpy(x):\n \"\"\"\n .. todo::\n\n WRITEME\n \"\"\"\n return np.log(x / (1. - x))\n\ndef arg_of_softmax(Y_hat):\n \"\"\"\n Given the output of a call to theano.tensor.nnet.softmax,\n returns the argument to the softmax (by tracing the Theano\n graph).\n\n Parameters\n ----------\n Y_hat : Variable\n softmax(Z)\n\n Returns\n -------\n Z : Variable\n The variable that was passed to the Softmax op to create `Y_hat`.\n Raises an error if `Y_hat` is not actually the output of a\n Softmax.\n \"\"\"\n assert hasattr(Y_hat, 'owner')\n owner = Y_hat.owner\n assert owner is not None\n op = owner.op\n if isinstance(op, Print):\n assert len(owner.inputs) == 1\n Y_hat, = owner.inputs\n owner = Y_hat.owner\n op = owner.op\n if not isinstance(op, T.nnet.Softmax):\n raise ValueError(\"Expected Y_hat to be the output of a softmax, \"\n \"but it appears to be the output of \" + str(op) + \" of type \"\n + str(type(op)))\n z ,= owner.inputs\n assert z.ndim == 2\n return z\n\ndef kl(Y, Y_hat, batch_axis):\n \"\"\"\n Warning: This function expects a sigmoid nonlinearity in the\n output layer. Returns a batch (vector) of mean across units of\n KL divergence for each example,\n KL(P || Q) where P is defined by Y and Q is defined by Y_hat:\n\n p log p - p log q + (1-p) log (1-p) - (1-p) log (1-q)\n For binary p, some terms drop out:\n - p log q - (1-p) log (1-q)\n - p log sigmoid(z) - (1-p) log sigmoid(-z)\n p softplus(-z) + (1-p) softplus(z)\n\n Parameters\n ----------\n Y : Variable\n targets for the sigmoid outputs. Currently Y must be purely binary.\n If it's not, you'll still get the right gradient, but the\n value in the monitoring channel will be wrong.\n Y_hat : Variable\n predictions made by the sigmoid layer. Y_hat must be generated by\n fprop, i.e., it must be a symbolic sigmoid.\n batch_axis : list\n list of axes to compute average kl divergence across.\n\n Returns\n -------\n ave : Variable\n average kl divergence between Y and Y_hat.\n \"\"\"\n\n assert hasattr(Y_hat, 'owner')\n assert batch_axis is not None\n\n owner = Y_hat.owner\n assert owner is not None\n op = owner.op\n\n if not hasattr(op, 'scalar_op'):\n raise ValueError(\"Expected Y_hat to be generated by an Elemwise \"\n \"op, got \"+str(op)+\" of type \"+str(type(op)))\n assert isinstance(op.scalar_op, T.nnet.sigm.ScalarSigmoid)\n\n for Yv in get_debug_values(Y):\n if not (Yv.min() >= 0.0 and Yv.max() <= 1.0):\n raise ValueError(\"Expected Y to be between 0 and 1. Either Y\"\n + \"< 0 or Y > 1 was found in the input.\")\n\n z, = owner.inputs\n\n term_1 = Y * T.nnet.softplus(-z)\n term_2 = (1 - Y) * T.nnet.softplus(z)\n\n total = term_1 + term_2\n naxes = total.ndim\n axes_to_reduce = list(range(naxes))\n del axes_to_reduce[batch_axis]\n ave = total.mean(axis=axes_to_reduce)\n\n return ave\n\ndef elemwise_kl(Y, Y_hat):\n \"\"\"\n Warning: This function expects a sigmoid nonlinearity in the\n output layer. 
Returns a batch (vector) of mean across units of\n KL divergence for each example,\n KL(P || Q) where P is defined by Y and Q is defined by Y_hat:\n\n p log p - p log q + (1-p) log (1-p) - (1-p) log (1-q)\n For binary p, some terms drop out:\n - p log q - (1-p) log (1-q)\n - p log sigmoid(z) - (1-p) log sigmoid(-z)\n p softplus(-z) + (1-p) softplus(z)\n\n Parameters\n ----------\n Y : Variable\n targets for the sigmoid outputs. Currently Y must be purely binary.\n If it's not, you'll still get the right gradient, but the\n value in the monitoring channel will be wrong.\n Y_hat : Variable\n predictions made by the sigmoid layer. Y_hat must be generated by\n fprop, i.e., it must be a symbolic sigmoid.\n\n Returns\n -------\n ave : Variable\n kl divergence between Y and Y_hat.\n \"\"\"\n assert hasattr(Y_hat, 'owner')\n\n owner = Y_hat.owner\n assert owner is not None\n op = owner.op\n\n if not hasattr(op, 'scalar_op'):\n raise ValueError(\"Expected Y_hat to be generated by an Elemwise \"\n \"op, got \"+str(op)+\" of type \"+str(type(op)))\n assert isinstance(op.scalar_op, T.nnet.sigm.ScalarSigmoid)\n\n for Yv in get_debug_values(Y):\n if not (Yv.min() >= 0.0 and Yv.max() <= 1.0):\n raise ValueError(\"Expected Y to be between 0 and 1. Either Y\"\n + \"< 0 or Y > 1 was found in the input.\")\n\n z, = owner.inputs\n\n term_1 = Y * T.nnet.softplus(-z)\n term_2 = (1 - Y) * T.nnet.softplus(z)\n\n total = term_1 + term_2\n\n return total\n\n\ndef softmax_ratio(numer, denom):\n \"\"\"\n .. todo::\n\n WRITEME properly\n\n Parameters\n ----------\n numer : Variable\n Output of a softmax.\n denom : Variable\n Output of a softmax.\n\n Returns\n -------\n ratio : Variable\n numer / denom, computed in a numerically stable way\n \"\"\"\n\n numer_Z = arg_of_softmax(numer)\n denom_Z = arg_of_softmax(denom)\n numer_Z -= numer_Z.max(axis=1).dimshuffle(0, 'x')\n denom_Z -= denom_Z.min(axis=1).dimshuffle(0, 'x')\n\n new_num = T.exp(numer_Z - denom_Z) * (T.exp(denom_Z).sum(\n axis=1).dimshuffle(0, 'x'))\n new_den = (T.exp(numer_Z).sum(axis=1).dimshuffle(0, 'x'))\n\n return new_num / new_den\n\ndef compute_precision(tp, fp):\n \"\"\"\n Computes the precision for the binary decisions.\n Computed as tp/(tp + fp).\n\n Parameters\n ----------\n tp : Variable\n True positives.\n fp : Variable\n False positives.\n\n Returns\n -------\n precision : Variable\n Precision of the binary classifications.\n \"\"\"\n precision = tp / T.maximum(1., tp + fp)\n return precision\n\ndef compute_recall(y, tp):\n \"\"\"\n Computes the recall for the binary classification.\n\n Parameters\n ----------\n y : Variable\n Targets for the binary classifications.\n tp : Variable\n True positives.\n\n Returns\n -------\n recall : Variable\n Recall for the binary classification.\n \"\"\"\n recall = tp / T.maximum(1., y.sum())\n return recall\n\ndef compute_f1(precision, recall):\n \"\"\"\n Computes the f1 score for the binary classification.\n Computed as,\n\n f1 = 2 * precision * recall / (precision + recall)\n\n Parameters\n ----------\n precision : Variable\n Precision score of the binary decisions.\n recall : Variable\n Recall score of the binary decisions.\n\n Returns\n -------\n f1 : Variable\n f1 score for the binary decisions.\n \"\"\"\n f1 = (2. * precision * recall /\n T.maximum(1, precision + recall))\n return f1\n\n", "path": "pylearn2/expr/nnet.py"}]}
| 3,672 | 712 |
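The pylearn2 row above is a pure style cleanup: its golden diff only inserts the blank lines and line breaks that PEP8 expects, without changing behaviour. A small sketch of the convention it applies, with placeholder function bodies:

```python
# Two blank lines separate top-level definitions; long messages are split
# across adjacent string literals instead of one over-long line.
def softmax_like(x):
    """Placeholder showing the spacing style only."""
    return x


def raise_informative_error(op):
    """Placeholder showing the wrapped error message style."""
    raise ValueError("Expected Y_hat to be the output of a softmax, "
                     "but it appears to be the output of " + str(op) +
                     " of type " + str(type(op)))
```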
gh_patches_debug_4426 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-tf-569 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error while running the exported model
Hi,
I was trying to run the example given [https://github.com/OpenNMT/OpenNMT-tf/tree/master/examples/serving/python](url).
I am getting the following error.
> Source: I am going.
Traceback (most recent call last):
File "ende_client.py", line 66, in <module>
main()
File "ende_client.py", line 60, in main
output = translator.translate([text])
File "ende_client.py", line 22, in translate
return self._postprocess(outputs)
File "ende_client.py", line 47, in _postprocess
texts.append(self._tokenizer.detokenize(tokens))
TypeError: detokenize(): incompatible function arguments. The following argument types are supported:
1. (self: pyonmttok.Tokenizer, tokens: list, features: object = None) -> str
> Invoked with: <pyonmttok.Tokenizer object at 0x147d10d0d538>, array([b'\xe2\x96\x81Ich', b'\xe2\x96\x81gehe', b'.'], dtype=object)
> WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.features_inputter.ids_to_tokens._initializer
> WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.labels_inputter.ids_to_tokens._initializer
> WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics for details.
>
I have the updated version of pyonmttok.
Thanks,
Sriram
</issue>
<code>
[start of examples/serving/python/ende_client.py]
1 import argparse
2 import os
3
4 import tensorflow as tf
5 import tensorflow_addons as tfa # Register TensorFlow Addons kernels.
6
7 import pyonmttok
8
9
10 class EnDeTranslator(object):
11
12 def __init__(self, export_dir):
13 imported = tf.saved_model.load(export_dir)
14 self._translate_fn = imported.signatures["serving_default"]
15 sp_model_path = os.path.join(export_dir, "assets.extra", "wmtende.model")
16 self._tokenizer = pyonmttok.Tokenizer("none", sp_model_path=sp_model_path)
17
18 def translate(self, texts):
19 """Translates a batch of texts."""
20 inputs = self._preprocess(texts)
21 outputs = self._translate_fn(**inputs)
22 return self._postprocess(outputs)
23
24 def _preprocess(self, texts):
25 all_tokens = []
26 lengths = []
27 max_length = 0
28 for text in texts:
29 tokens, _ = self._tokenizer.tokenize(text)
30 length = len(tokens)
31 all_tokens.append(tokens)
32 lengths.append(length)
33 max_length = max(max_length, length)
34 for tokens, length in zip(all_tokens, lengths):
35 if length < max_length:
36 tokens += [""] * (max_length - length)
37
38 inputs = {
39 "tokens": tf.constant(all_tokens, dtype=tf.string),
40 "length": tf.constant(lengths, dtype=tf.int32)}
41 return inputs
42
43 def _postprocess(self, outputs):
44 texts = []
45 for tokens, length in zip(outputs["tokens"].numpy(), outputs["length"].numpy()):
46 tokens = tokens[0][:length[0]]
47 texts.append(self._tokenizer.detokenize(tokens))
48 return texts
49
50
51 def main():
52 parser = argparse.ArgumentParser(description="Translation client example")
53 parser.add_argument("export_dir", help="Saved model directory")
54 args = parser.parse_args()
55
56 translator = EnDeTranslator(args.export_dir)
57
58 while True:
59 text = input("Source: ")
60 output = translator.translate([text])
61 print("Target: %s" % output[0])
62 print("")
63
64
65 if __name__ == "__main__":
66 main()
67
[end of examples/serving/python/ende_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/serving/python/ende_client.py b/examples/serving/python/ende_client.py
--- a/examples/serving/python/ende_client.py
+++ b/examples/serving/python/ende_client.py
@@ -43,7 +43,7 @@
def _postprocess(self, outputs):
texts = []
for tokens, length in zip(outputs["tokens"].numpy(), outputs["length"].numpy()):
- tokens = tokens[0][:length[0]]
+ tokens = tokens[0][:length[0]].tolist()
texts.append(self._tokenizer.detokenize(tokens))
return texts
|
{"golden_diff": "diff --git a/examples/serving/python/ende_client.py b/examples/serving/python/ende_client.py\n--- a/examples/serving/python/ende_client.py\n+++ b/examples/serving/python/ende_client.py\n@@ -43,7 +43,7 @@\n def _postprocess(self, outputs):\n texts = []\n for tokens, length in zip(outputs[\"tokens\"].numpy(), outputs[\"length\"].numpy()):\n- tokens = tokens[0][:length[0]]\n+ tokens = tokens[0][:length[0]].tolist()\n texts.append(self._tokenizer.detokenize(tokens))\n return texts\n", "issue": "Error while running the exported model \nHi,\r\n\r\nI was trying to run the example given [https://github.com/OpenNMT/OpenNMT-tf/tree/master/examples/serving/python](url).\r\n\r\nI am getting the following error.\r\n\r\n> Source: I am going.\r\nTraceback (most recent call last):\r\n File \"ende_client.py\", line 66, in <module>\r\n main()\r\n File \"ende_client.py\", line 60, in main\r\n output = translator.translate([text])\r\n File \"ende_client.py\", line 22, in translate\r\n return self._postprocess(outputs)\r\n File \"ende_client.py\", line 47, in _postprocess\r\n texts.append(self._tokenizer.detokenize(tokens))\r\nTypeError: detokenize(): incompatible function arguments. The following argument types are supported:\r\n 1. (self: pyonmttok.Tokenizer, tokens: list, features: object = None) -> str\r\n\r\n> Invoked with: <pyonmttok.Tokenizer object at 0x147d10d0d538>, array([b'\\xe2\\x96\\x81Ich', b'\\xe2\\x96\\x81gehe', b'.'], dtype=object)\r\n> WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.features_inputter.ids_to_tokens._initializer\r\n> WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.labels_inputter.ids_to_tokens._initializer\r\n> WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. 
See https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics for details.\r\n> \r\n\r\nI have the updated version of pyonmttok.\r\n\r\nThanks,\r\nSriram\n", "before_files": [{"content": "import argparse\nimport os\n\nimport tensorflow as tf\nimport tensorflow_addons as tfa # Register TensorFlow Addons kernels.\n\nimport pyonmttok\n\n\nclass EnDeTranslator(object):\n\n def __init__(self, export_dir):\n imported = tf.saved_model.load(export_dir)\n self._translate_fn = imported.signatures[\"serving_default\"]\n sp_model_path = os.path.join(export_dir, \"assets.extra\", \"wmtende.model\")\n self._tokenizer = pyonmttok.Tokenizer(\"none\", sp_model_path=sp_model_path)\n\n def translate(self, texts):\n \"\"\"Translates a batch of texts.\"\"\"\n inputs = self._preprocess(texts)\n outputs = self._translate_fn(**inputs)\n return self._postprocess(outputs)\n\n def _preprocess(self, texts):\n all_tokens = []\n lengths = []\n max_length = 0\n for text in texts:\n tokens, _ = self._tokenizer.tokenize(text)\n length = len(tokens)\n all_tokens.append(tokens)\n lengths.append(length)\n max_length = max(max_length, length)\n for tokens, length in zip(all_tokens, lengths):\n if length < max_length:\n tokens += [\"\"] * (max_length - length)\n\n inputs = {\n \"tokens\": tf.constant(all_tokens, dtype=tf.string),\n \"length\": tf.constant(lengths, dtype=tf.int32)}\n return inputs\n\n def _postprocess(self, outputs):\n texts = []\n for tokens, length in zip(outputs[\"tokens\"].numpy(), outputs[\"length\"].numpy()):\n tokens = tokens[0][:length[0]]\n texts.append(self._tokenizer.detokenize(tokens))\n return texts\n\n\ndef main():\n parser = argparse.ArgumentParser(description=\"Translation client example\")\n parser.add_argument(\"export_dir\", help=\"Saved model directory\")\n args = parser.parse_args()\n\n translator = EnDeTranslator(args.export_dir)\n\n while True:\n text = input(\"Source: \")\n output = translator.translate([text])\n print(\"Target: %s\" % output[0])\n print(\"\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/serving/python/ende_client.py"}]}
| 1,563 | 129 |
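The OpenNMT row above is a type mismatch: the exported model returns numpy arrays of byte strings, while pyonmttok's detokenize() only accepts a Python list. The golden diff converts with .tolist(); a minimal sketch of that step, assuming a pyonmttok.Tokenizer instance named `tokenizer`:

```python
import numpy as np

# Output tokens as they come back from the saved model (byte strings in an object array).
tokens = np.array([b"\xe2\x96\x81Ich", b"\xe2\x96\x81gehe", b"."], dtype=object)

# Passing the array directly raises the TypeError shown in the issue;
# converting to a plain list, as the patch does, gives an accepted argument type:
token_list = tokens.tolist()
# text = tokenizer.detokenize(token_list)  # `tokenizer` is an assumed pyonmttok.Tokenizer
```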
gh_patches_debug_22338 | rasdani/github-patches | git_diff | Kinto__kinto-554 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
id and last_modified should be stripped before validating the JSON schema
Otherwise it obliges everyone to add `id` and `last_modified` to their JSON schema or use `additionalProperties : true`.
- http://spacetelescope.github.io/understanding-json-schema/reference/object.html#properties
- See #256
- See #548
``` diff
try:
- jsonschema.validate(new, schema)
+ stripped = copy.deepcopy(new)
+ stripped.pop(self.model.id_field, None)
+ stripped.pop(self.model.modified_field, None)
+ jsonschema.validate(stripped, schema)
```
id and last_modified should be stripped before validating the JSON schema
Otherwise it obliges everyone to add `id` and `last_modified` to their JSON schema or use `additionalProperties : true`.
- http://spacetelescope.github.io/understanding-json-schema/reference/object.html#properties
- See #256
- See #548
``` diff
try:
- jsonschema.validate(new, schema)
+ stripped = copy.deepcopy(new)
+ stripped.pop(self.model.id_field, None)
+ stripped.pop(self.model.modified_field, None)
+ jsonschema.validate(stripped, schema)
```
</issue>
<code>
[start of kinto/views/records.py]
1 import jsonschema
2 from cliquet import resource
3 from cliquet.errors import raise_invalid
4 from jsonschema import exceptions as jsonschema_exceptions
5 from pyramid.security import Authenticated
6 from pyramid.settings import asbool
7
8 from kinto.views import object_exists_or_404
9
10
11 class RecordSchema(resource.ResourceSchema):
12 class Options:
13 preserve_unknown = True
14
15
16 _parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'
17
18
19 @resource.register(name='record',
20 collection_path=_parent_path + '/records',
21 record_path=_parent_path + '/records/{{id}}')
22 class Record(resource.ShareableResource):
23
24 mapping = RecordSchema()
25 schema_field = 'schema'
26
27 def __init__(self, *args, **kwargs):
28 super(Record, self).__init__(*args, **kwargs)
29
30 # Check if already fetched before (in batch).
31 collections = self.request.bound_data.setdefault('collections', {})
32 collection_uri = self.get_parent_id(self.request)
33 if collection_uri not in collections:
34 # Unknown yet, fetch from storage.
35 collection_parent_id = '/buckets/%s' % self.bucket_id
36 collection = object_exists_or_404(self.request,
37 collection_id='collection',
38 parent_id=collection_parent_id,
39 object_id=self.collection_id)
40 collections[collection_uri] = collection
41
42 self._collection = collections[collection_uri]
43
44 def get_parent_id(self, request):
45 self.bucket_id = request.matchdict['bucket_id']
46 self.collection_id = request.matchdict['collection_id']
47 return '/buckets/%s/collections/%s' % (self.bucket_id,
48 self.collection_id)
49
50 def is_known_field(self, field_name):
51 """Without schema, any field is considered as known."""
52 return True
53
54 def process_record(self, new, old=None):
55 """Validate records against collection schema, if any."""
56 new = super(Record, self).process_record(new, old)
57
58 schema = self._collection.get('schema')
59 settings = self.request.registry.settings
60 schema_validation = 'experimental_collection_schema_validation'
61 if not schema or not asbool(settings.get(schema_validation)):
62 return new
63
64 collection_timestamp = self._collection[self.model.modified_field]
65
66 try:
67 jsonschema.validate(new, schema)
68 new[self.schema_field] = collection_timestamp
69 except jsonschema_exceptions.ValidationError as e:
70 field = e.path.pop() if e.path else e.validator_value.pop()
71 raise_invalid(self.request, name=field, description=e.message)
72
73 return new
74
75 def collection_get(self):
76 result = super(Record, self).collection_get()
77 self._handle_cache_expires(self.request.response)
78 return result
79
80 def get(self):
81 result = super(Record, self).get()
82 self._handle_cache_expires(self.request.response)
83 return result
84
85 def _handle_cache_expires(self, response):
86 """If the parent collection defines a ``cache_expires`` attribute,
87 then cache-control response headers are sent.
88
89 .. note::
90
91 Those headers are also sent if the
92 ``kinto.record_cache_expires_seconds`` setting is defined.
93 """
94 is_anonymous = Authenticated not in self.request.effective_principals
95 if not is_anonymous:
96 return
97
98 cache_expires = self._collection.get('cache_expires')
99 if cache_expires is None:
100 by_bucket = 'kinto.%s_record_cache_expires_seconds' % (
101 self.bucket_id)
102 by_collection = '%s_%s_record_cache_expires_seconds' % (
103 self.bucket_id, self.collection_id)
104 settings = self.request.registry.settings
105 cache_expires = settings.get(by_collection,
106 settings.get(by_bucket))
107
108 if cache_expires is not None:
109 response.cache_expires(seconds=cache_expires)
110
[end of kinto/views/records.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/views/records.py b/kinto/views/records.py
--- a/kinto/views/records.py
+++ b/kinto/views/records.py
@@ -1,3 +1,5 @@
+import copy
+
import jsonschema
from cliquet import resource
from cliquet.errors import raise_invalid
@@ -64,12 +66,17 @@
collection_timestamp = self._collection[self.model.modified_field]
try:
- jsonschema.validate(new, schema)
- new[self.schema_field] = collection_timestamp
+ stripped = copy.deepcopy(new)
+ stripped.pop(self.model.id_field, None)
+ stripped.pop(self.model.modified_field, None)
+ stripped.pop(self.model.permissions_field, None)
+ stripped.pop(self.schema_field, None)
+ jsonschema.validate(stripped, schema)
except jsonschema_exceptions.ValidationError as e:
field = e.path.pop() if e.path else e.validator_value.pop()
raise_invalid(self.request, name=field, description=e.message)
+ new[self.schema_field] = collection_timestamp
return new
def collection_get(self):
|
{"golden_diff": "diff --git a/kinto/views/records.py b/kinto/views/records.py\n--- a/kinto/views/records.py\n+++ b/kinto/views/records.py\n@@ -1,3 +1,5 @@\n+import copy\n+\n import jsonschema\n from cliquet import resource\n from cliquet.errors import raise_invalid\n@@ -64,12 +66,17 @@\n collection_timestamp = self._collection[self.model.modified_field]\n \n try:\n- jsonschema.validate(new, schema)\n- new[self.schema_field] = collection_timestamp\n+ stripped = copy.deepcopy(new)\n+ stripped.pop(self.model.id_field, None)\n+ stripped.pop(self.model.modified_field, None)\n+ stripped.pop(self.model.permissions_field, None)\n+ stripped.pop(self.schema_field, None)\n+ jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n field = e.path.pop() if e.path else e.validator_value.pop()\n raise_invalid(self.request, name=field, description=e.message)\n \n+ new[self.schema_field] = collection_timestamp\n return new\n \n def collection_get(self):\n", "issue": "id and last_modified should be stripped before validating the JSON schema\nOtherwise it obliges everyone to add `id` and `last_modified` to their JSON schema or use `additionalProperties : true`.\n- http://spacetelescope.github.io/understanding-json-schema/reference/object.html#properties\n- See #256 \n- See #548 \n\n``` diff\n try:\n- jsonschema.validate(new, schema)\n+ stripped = copy.deepcopy(new)\n+ stripped.pop(self.model.id_field, None)\n+ stripped.pop(self.model.modified_field, None)\n+ jsonschema.validate(stripped, schema)\n```\n\nid and last_modified should be stripped before validating the JSON schema\nOtherwise it obliges everyone to add `id` and `last_modified` to their JSON schema or use `additionalProperties : true`.\n- http://spacetelescope.github.io/understanding-json-schema/reference/object.html#properties\n- See #256 \n- See #548 \n\n``` diff\n try:\n- jsonschema.validate(new, schema)\n+ stripped = copy.deepcopy(new)\n+ stripped.pop(self.model.id_field, None)\n+ stripped.pop(self.model.modified_field, None)\n+ jsonschema.validate(stripped, schema)\n```\n\n", "before_files": [{"content": "import jsonschema\nfrom cliquet import resource\nfrom cliquet.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\nfrom pyramid.security import Authenticated\nfrom pyramid.settings import asbool\n\nfrom kinto.views import object_exists_or_404\n\n\nclass RecordSchema(resource.ResourceSchema):\n class Options:\n preserve_unknown = True\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ShareableResource):\n\n mapping = RecordSchema()\n schema_field = 'schema'\n\n def __init__(self, *args, **kwargs):\n super(Record, self).__init__(*args, **kwargs)\n\n # Check if already fetched before (in batch).\n collections = self.request.bound_data.setdefault('collections', {})\n collection_uri = self.get_parent_id(self.request)\n if collection_uri not in collections:\n # Unknown yet, fetch from storage.\n collection_parent_id = '/buckets/%s' % self.bucket_id\n collection = object_exists_or_404(self.request,\n collection_id='collection',\n parent_id=collection_parent_id,\n object_id=self.collection_id)\n collections[collection_uri] = collection\n\n self._collection = collections[collection_uri]\n\n def get_parent_id(self, request):\n self.bucket_id = request.matchdict['bucket_id']\n self.collection_id = 
request.matchdict['collection_id']\n return '/buckets/%s/collections/%s' % (self.bucket_id,\n self.collection_id)\n\n def is_known_field(self, field_name):\n \"\"\"Without schema, any field is considered as known.\"\"\"\n return True\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n new = super(Record, self).process_record(new, old)\n\n schema = self._collection.get('schema')\n settings = self.request.registry.settings\n schema_validation = 'experimental_collection_schema_validation'\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n\n collection_timestamp = self._collection[self.model.modified_field]\n\n try:\n jsonschema.validate(new, schema)\n new[self.schema_field] = collection_timestamp\n except jsonschema_exceptions.ValidationError as e:\n field = e.path.pop() if e.path else e.validator_value.pop()\n raise_invalid(self.request, name=field, description=e.message)\n\n return new\n\n def collection_get(self):\n result = super(Record, self).collection_get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def get(self):\n result = super(Record, self).get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def _handle_cache_expires(self, response):\n \"\"\"If the parent collection defines a ``cache_expires`` attribute,\n then cache-control response headers are sent.\n\n .. note::\n\n Those headers are also sent if the\n ``kinto.record_cache_expires_seconds`` setting is defined.\n \"\"\"\n is_anonymous = Authenticated not in self.request.effective_principals\n if not is_anonymous:\n return\n\n cache_expires = self._collection.get('cache_expires')\n if cache_expires is None:\n by_bucket = 'kinto.%s_record_cache_expires_seconds' % (\n self.bucket_id)\n by_collection = '%s_%s_record_cache_expires_seconds' % (\n self.bucket_id, self.collection_id)\n settings = self.request.registry.settings\n cache_expires = settings.get(by_collection,\n settings.get(by_bucket))\n\n if cache_expires is not None:\n response.cache_expires(seconds=cache_expires)\n", "path": "kinto/views/records.py"}]}
| 1,842 | 242 |
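The Kinto row above validates records against a user-supplied JSON schema, and the fix is to deep-copy the record and pop the server-managed fields (id and last_modified, plus permissions and the schema timestamp in the golden diff) before calling jsonschema.validate. A minimal sketch with a hypothetical collection schema:

```python
import copy
import jsonschema

schema = {                                  # hypothetical collection schema
    "type": "object",
    "properties": {"title": {"type": "string"}},
    "additionalProperties": False,
}
record = {"id": "abc123", "last_modified": 1460000000, "title": "hello"}

stripped = copy.deepcopy(record)
stripped.pop("id", None)
stripped.pop("last_modified", None)
jsonschema.validate(stripped, schema)       # passes; validating `record` directly would fail
```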
gh_patches_debug_24054 | rasdani/github-patches | git_diff | svthalia__concrexit-1890 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Catch ValueError when updating orders if order is paid
This error should be caught.
Sentry Issue: [CONCREXIT-6V](https://sentry.io/organizations/thalia/issues/2472850301/?referrer=github_integration)
```
ValueError: This order has already been paid for.
(12 additional frame(s) were not displayed)
...
File "rest_framework/serializers.py", line 200, in save
self.instance = self.update(self.instance, validated_data)
File "sales/api/v2/admin/serializers/order.py", line 151, in update
OrderItemSerializer(item, context={"order": instance}).update(
File "sales/api/v2/admin/serializers/order.py", line 66, in update
super().update(instance, validated_data)
File "rest_framework/serializers.py", line 983, in update
instance.save()
File "sales/models/order.py", line 215, in save
raise ValueError("This order has already been paid for.")
```
</issue>
<code>
[start of website/sales/api/v2/admin/serializers/order.py]
1 from django.core.exceptions import ObjectDoesNotExist
2 from django.utils.encoding import smart_str
3 from rest_framework import serializers
4
5 from members.api.v2.serializers.member import MemberSerializer
6 from payments.api.v2.serializers import PaymentSerializer
7 from sales.models.order import Order, OrderItem
8 from sales.models.product import ProductListItem
9
10
11 class ProductNameRelatedField(serializers.SlugRelatedField):
12 def get_queryset(self):
13 shift = self.root.context.get("shift", None)
14 if shift is None:
15 shift = self.root.instance.shift
16 return ProductListItem.objects.filter(product_list=shift.product_list)
17
18 def to_internal_value(self, data):
19 if type(data) is ProductListItem:
20 return data
21
22 queryset = self.get_queryset()
23 try:
24 return queryset.get(product__name=data)
25 except ObjectDoesNotExist:
26 self.fail(
27 "does_not_exist", slug_name=self.slug_field, value=smart_str(data)
28 )
29 except (TypeError, ValueError):
30 self.fail("invalid")
31
32 def to_representation(self, obj):
33 return obj.product.name
34
35
36 class OrderItemSerializer(serializers.ModelSerializer):
37 """Serializer for order items."""
38
39 class Meta:
40 model = OrderItem
41 fields = ("product", "amount", "total")
42 read_only_fields = ("total",)
43
44 product = ProductNameRelatedField("product")
45
46 total = serializers.DecimalField(
47 max_digits=6, decimal_places=2, min_value=0, read_only=True
48 )
49
50 def get_fields(self):
51 fields = super().get_fields()
52 request = self.context.get("request", None)
53 if request and request.user and request.user.has_perm("sales.custom_prices"):
54 fields["total"].read_only = False
55 return fields
56
57 def create(self, validated_data, **kwargs):
58 order = self.context["order"]
59 item = OrderItem.objects.create(order=order, **validated_data)
60 return item
61
62 def update(self, instance, validated_data, **kwargs):
63 order = self.context["order"]
64 instance.order = order
65 instance.total = None # Always recalculate the total amount if updating using API (note the difference from the model that only recalculates if the total is None, to deal with historic data and allow for special discounts)
66 super().update(instance, validated_data)
67 return instance
68
69
70 class OrderSerializer(serializers.ModelSerializer):
71 """Serializer for orders."""
72
73 class Meta:
74 model = Order
75 fields = (
76 "pk",
77 "shift",
78 "created_at",
79 "order_items",
80 "order_description",
81 "age_restricted",
82 "subtotal",
83 "discount",
84 "total_amount",
85 "num_items",
86 "payment",
87 "payer",
88 "payment_url",
89 )
90 read_only_fields = (
91 "pk",
92 "created_at",
93 "payment",
94 "num_items",
95 "order_description",
96 )
97
98 shift = serializers.PrimaryKeyRelatedField(read_only=True)
99
100 age_restricted = serializers.BooleanField(read_only=True)
101
102 order_items = OrderItemSerializer(many=True, required=False)
103
104 subtotal = serializers.DecimalField(
105 max_digits=6, decimal_places=2, min_value=0, read_only=True
106 )
107
108 discount = serializers.DecimalField(
109 max_digits=6, decimal_places=2, min_value=0, read_only=True
110 )
111
112 total_amount = serializers.DecimalField(
113 max_digits=6, decimal_places=2, min_value=0, read_only=True
114 )
115
116 payment = PaymentSerializer(read_only=True)
117
118 payer = MemberSerializer(read_only=True)
119
120 payment_url = serializers.URLField(read_only=True)
121
122 def get_fields(self):
123 fields = super().get_fields()
124 request = self.context.get("request", None)
125 if request and request.user and request.user.has_perm("sales.custom_prices"):
126 try:
127 fields["discount"].read_only = False
128 except KeyError:
129 pass
130 return fields
131
132 def create(self, validated_data):
133 shift = self.context["shift"]
134 order = Order.objects.create(shift=shift)
135 if "order_items" in validated_data:
136 items_data = validated_data.pop("order_items")
137 for item_data in items_data:
138 OrderItem.objects.create(order=order, **item_data)
139 return order
140
141 def update(self, instance, validated_data):
142 # Update the order items for an order
143 if "order_items" in validated_data:
144 items_data = validated_data.pop("order_items")
145 current_items = list(instance.order_items.all())
146
147 # Overwrite all existing order items by the newly provided ones
148 for item_data in items_data:
149 if len(current_items) > 0:
150 item = current_items.pop(0)
151 OrderItemSerializer(item, context={"order": instance}).update(
152 item, item_data
153 )
154 else:
155 # Create new order items if required
156 OrderItemSerializer(context={"order": instance}).create(
157 validated_data=item_data
158 )
159
160 # Delete all order items that we have not updated
161 for i in current_items:
162 i.delete()
163
164 # Update other fields of the order as default
165 instance = super().update(instance, validated_data)
166 instance = Order.objects.get(
167 pk=instance.pk
168 ) # refresh from database to update queryable properties
169 return instance
170
171
172 class OrderListSerializer(OrderSerializer):
173 class Meta:
174 model = Order
175 fields = (
176 "pk",
177 "created_at",
178 "total_amount",
179 "num_items",
180 )
181 read_only_fields = (
182 "pk",
183 "created_at",
184 "total_amount",
185 "num_items",
186 )
187
[end of website/sales/api/v2/admin/serializers/order.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/sales/api/v2/admin/serializers/order.py b/website/sales/api/v2/admin/serializers/order.py
--- a/website/sales/api/v2/admin/serializers/order.py
+++ b/website/sales/api/v2/admin/serializers/order.py
@@ -1,6 +1,7 @@
-from django.core.exceptions import ObjectDoesNotExist
+from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.utils.encoding import smart_str
from rest_framework import serializers
+from rest_framework.settings import api_settings
from members.api.v2.serializers.member import MemberSerializer
from payments.api.v2.serializers import PaymentSerializer
@@ -63,7 +64,10 @@
order = self.context["order"]
instance.order = order
instance.total = None # Always recalculate the total amount if updating using API (note the difference from the model that only recalculates if the total is None, to deal with historic data and allow for special discounts)
- super().update(instance, validated_data)
+ try:
+ super().update(instance, validated_data)
+ except ValueError as e:
+ raise ValidationError({api_settings.NON_FIELD_ERRORS_KEY: [e]})
return instance
|
{"golden_diff": "diff --git a/website/sales/api/v2/admin/serializers/order.py b/website/sales/api/v2/admin/serializers/order.py\n--- a/website/sales/api/v2/admin/serializers/order.py\n+++ b/website/sales/api/v2/admin/serializers/order.py\n@@ -1,6 +1,7 @@\n-from django.core.exceptions import ObjectDoesNotExist\n+from django.core.exceptions import ObjectDoesNotExist, ValidationError\n from django.utils.encoding import smart_str\n from rest_framework import serializers\n+from rest_framework.settings import api_settings\n \n from members.api.v2.serializers.member import MemberSerializer\n from payments.api.v2.serializers import PaymentSerializer\n@@ -63,7 +64,10 @@\n order = self.context[\"order\"]\n instance.order = order\n instance.total = None # Always recalculate the total amount if updating using API (note the difference from the model that only recalculates if the total is None, to deal with historic data and allow for special discounts)\n- super().update(instance, validated_data)\n+ try:\n+ super().update(instance, validated_data)\n+ except ValueError as e:\n+ raise ValidationError({api_settings.NON_FIELD_ERRORS_KEY: [e]})\n return instance\n", "issue": "Catch ValueError when updating orders if order is paid\nThis error should be caught.\n\nSentry Issue: [CONCREXIT-6V](https://sentry.io/organizations/thalia/issues/2472850301/?referrer=github_integration)\n\n```\nValueError: This order has already been paid for.\n(12 additional frame(s) were not displayed)\n...\n File \"rest_framework/serializers.py\", line 200, in save\n self.instance = self.update(self.instance, validated_data)\n File \"sales/api/v2/admin/serializers/order.py\", line 151, in update\n OrderItemSerializer(item, context={\"order\": instance}).update(\n File \"sales/api/v2/admin/serializers/order.py\", line 66, in update\n super().update(instance, validated_data)\n File \"rest_framework/serializers.py\", line 983, in update\n instance.save()\n File \"sales/models/order.py\", line 215, in save\n raise ValueError(\"This order has already been paid for.\")\n```\n", "before_files": [{"content": "from django.core.exceptions import ObjectDoesNotExist\nfrom django.utils.encoding import smart_str\nfrom rest_framework import serializers\n\nfrom members.api.v2.serializers.member import MemberSerializer\nfrom payments.api.v2.serializers import PaymentSerializer\nfrom sales.models.order import Order, OrderItem\nfrom sales.models.product import ProductListItem\n\n\nclass ProductNameRelatedField(serializers.SlugRelatedField):\n def get_queryset(self):\n shift = self.root.context.get(\"shift\", None)\n if shift is None:\n shift = self.root.instance.shift\n return ProductListItem.objects.filter(product_list=shift.product_list)\n\n def to_internal_value(self, data):\n if type(data) is ProductListItem:\n return data\n\n queryset = self.get_queryset()\n try:\n return queryset.get(product__name=data)\n except ObjectDoesNotExist:\n self.fail(\n \"does_not_exist\", slug_name=self.slug_field, value=smart_str(data)\n )\n except (TypeError, ValueError):\n self.fail(\"invalid\")\n\n def to_representation(self, obj):\n return obj.product.name\n\n\nclass OrderItemSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for order items.\"\"\"\n\n class Meta:\n model = OrderItem\n fields = (\"product\", \"amount\", \"total\")\n read_only_fields = (\"total\",)\n\n product = ProductNameRelatedField(\"product\")\n\n total = serializers.DecimalField(\n max_digits=6, decimal_places=2, min_value=0, read_only=True\n )\n\n def get_fields(self):\n fields = 
super().get_fields()\n request = self.context.get(\"request\", None)\n if request and request.user and request.user.has_perm(\"sales.custom_prices\"):\n fields[\"total\"].read_only = False\n return fields\n\n def create(self, validated_data, **kwargs):\n order = self.context[\"order\"]\n item = OrderItem.objects.create(order=order, **validated_data)\n return item\n\n def update(self, instance, validated_data, **kwargs):\n order = self.context[\"order\"]\n instance.order = order\n instance.total = None # Always recalculate the total amount if updating using API (note the difference from the model that only recalculates if the total is None, to deal with historic data and allow for special discounts)\n super().update(instance, validated_data)\n return instance\n\n\nclass OrderSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for orders.\"\"\"\n\n class Meta:\n model = Order\n fields = (\n \"pk\",\n \"shift\",\n \"created_at\",\n \"order_items\",\n \"order_description\",\n \"age_restricted\",\n \"subtotal\",\n \"discount\",\n \"total_amount\",\n \"num_items\",\n \"payment\",\n \"payer\",\n \"payment_url\",\n )\n read_only_fields = (\n \"pk\",\n \"created_at\",\n \"payment\",\n \"num_items\",\n \"order_description\",\n )\n\n shift = serializers.PrimaryKeyRelatedField(read_only=True)\n\n age_restricted = serializers.BooleanField(read_only=True)\n\n order_items = OrderItemSerializer(many=True, required=False)\n\n subtotal = serializers.DecimalField(\n max_digits=6, decimal_places=2, min_value=0, read_only=True\n )\n\n discount = serializers.DecimalField(\n max_digits=6, decimal_places=2, min_value=0, read_only=True\n )\n\n total_amount = serializers.DecimalField(\n max_digits=6, decimal_places=2, min_value=0, read_only=True\n )\n\n payment = PaymentSerializer(read_only=True)\n\n payer = MemberSerializer(read_only=True)\n\n payment_url = serializers.URLField(read_only=True)\n\n def get_fields(self):\n fields = super().get_fields()\n request = self.context.get(\"request\", None)\n if request and request.user and request.user.has_perm(\"sales.custom_prices\"):\n try:\n fields[\"discount\"].read_only = False\n except KeyError:\n pass\n return fields\n\n def create(self, validated_data):\n shift = self.context[\"shift\"]\n order = Order.objects.create(shift=shift)\n if \"order_items\" in validated_data:\n items_data = validated_data.pop(\"order_items\")\n for item_data in items_data:\n OrderItem.objects.create(order=order, **item_data)\n return order\n\n def update(self, instance, validated_data):\n # Update the order items for an order\n if \"order_items\" in validated_data:\n items_data = validated_data.pop(\"order_items\")\n current_items = list(instance.order_items.all())\n\n # Overwrite all existing order items by the newly provided ones\n for item_data in items_data:\n if len(current_items) > 0:\n item = current_items.pop(0)\n OrderItemSerializer(item, context={\"order\": instance}).update(\n item, item_data\n )\n else:\n # Create new order items if required\n OrderItemSerializer(context={\"order\": instance}).create(\n validated_data=item_data\n )\n\n # Delete all order items that we have not updated\n for i in current_items:\n i.delete()\n\n # Update other fields of the order as default\n instance = super().update(instance, validated_data)\n instance = Order.objects.get(\n pk=instance.pk\n ) # refresh from database to update queryable properties\n return instance\n\n\nclass OrderListSerializer(OrderSerializer):\n class Meta:\n model = Order\n fields = (\n \"pk\",\n \"created_at\",\n 
\"total_amount\",\n \"num_items\",\n )\n read_only_fields = (\n \"pk\",\n \"created_at\",\n \"total_amount\",\n \"num_items\",\n )\n", "path": "website/sales/api/v2/admin/serializers/order.py"}]}
| 2,444 | 265 |
gh_patches_debug_40362
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-1338
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update modifierclass.rst template to accept functions that aren't methods of a class
# Description
First noticed in PR #750, while trying to get the docs to show the full signatures and docstrings:
```rst
.. autosummary::
:toctree: _generated/
:nosignatures:
:template: modifierclass.rst
```
I'm getting the following warnings (which we treat as errors) when I try to build the docs:
```
WARNING: error while formatting arguments for pyhf.infer.calculators.generate_asimov_data: 'function' object has no attribute '__mro__'
WARNING: error while formatting arguments for pyhf.infer.hypotest: 'function' object has no attribute '__mro__'
WARNING: error while formatting arguments for pyhf.infer.mle.fit: 'function' object has no attribute '__mro__'
WARNING: error while formatting arguments for pyhf.infer.mle.fixed_poi_fit: 'function' object has no attribute '__mro__'
WARNING: error while formatting arguments for pyhf.infer.mle.twice_nll: 'function' object has no attribute '__mro__'
WARNING: error while formatting arguments for pyhf.infer.test_statistics.qmu: 'function' object has no attribute '__mro__'
```
which I believe is happening as `__mro__` only exists on the class, and these functions exist in the source code outside of a class definition.
This means that the [`modifierclass.rst` template](https://github.com/scikit-hep/pyhf/blob/1ee6e38d42d9551220f20de483e0049b28c848b0/docs/_templates/modifierclass.rst) will need to get updated to deal with functions that aren't methods of a class.
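For context, the distinction is easy to verify in plain Python (nothing Sphinx- or pyhf-specific assumed): `__mro__` exists only on classes, so any template logic that reaches for it breaks on module-level functions.

```python
# Minimal sketch: classes carry a method resolution order, plain functions do not.
class Widget:
    pass

def make_widget():
    return Widget()

print(Widget.__mro__)                   # (<class '__main__.Widget'>, <class 'object'>)
print(hasattr(make_widget, "__mro__"))  # False -> class-only template logic fails here
```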
Fix up docstring for _minimize() functions in the optimizers
# Description
From https://github.com/scikit-hep/pyhf/pull/1338#pullrequestreview-596818258
> I'm not sure if it can be fixed here, but the "Minimizer Options" aren't being displayed correctly for the optimizer _minimize methods.

> we've never been able to see the `_minimize` methods before, so it isn't surprising they might not look perfect.
</issue>
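The rendering problem in the second report comes down to how the "Minimizer Options" block is written inside the docstring. A minimal sketch (assumption: the docs build renders these docstrings with Sphinx/napoleon, as the other sections suggest) of the indented bullet-list style that displays as a proper list:

```python
def _minimize(self, minimizer, func, x0, do_grad=False, bounds=None, fixed_vals=None, options={}):
    """
    Same signature as :func:`scipy.optimize.minimize`.

    Minimizer Options:
      * maxiter (:obj:`int`): Maximum number of iterations. Default is ``100000``.
      * tolerance (:obj:`float`): Tolerance for termination. Default is ``0.1``.

    Returns:
        fitresult (scipy.optimize.OptimizeResult): the fit result
    """
```

The indented ``*`` bullets are exactly what the patch further down introduces so that the options render as a list.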
<code>
[start of src/pyhf/optimize/opt_minuit.py]
1 """Minuit Optimizer Class."""
2 from .. import exceptions
3 from .mixins import OptimizerMixin
4 import scipy
5 import iminuit
6
7
8 class minuit_optimizer(OptimizerMixin):
9 """
10 Optimizer that minimizes via :meth:`iminuit.Minuit.migrad`.
11 """
12
13 __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']
14
15 def __init__(self, *args, **kwargs):
16 """
17 Create :class:`iminuit.Minuit` optimizer.
18
19 .. note::
20
21 ``errordef`` should be 1.0 for a least-squares cost function and 0.5
22 for negative log-likelihood function. See page 37 of
23 http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes
24 called ``UP`` in the ``MINUIT`` docs.
25
26
27 Args:
28 errordef (:obj:`float`): See minuit docs. Default is 1.0.
29 steps (:obj:`int`): Number of steps for the bounds. Default is 1000.
30 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.
31 tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.
32 """
33 self.name = 'minuit'
34 self.errordef = kwargs.pop('errordef', 1)
35 self.steps = kwargs.pop('steps', 1000)
36 self.strategy = kwargs.pop('strategy', None)
37 self.tolerance = kwargs.pop('tolerance', 0.1)
38 super().__init__(*args, **kwargs)
39
40 def _get_minimizer(
41 self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False
42 ):
43
44 step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]
45 fixed_vals = fixed_vals or []
46 # Minuit wants True/False for each parameter
47 fixed_bools = [False] * len(init_pars)
48 for index, val in fixed_vals:
49 fixed_bools[index] = True
50 init_pars[index] = val
51 step_sizes[index] = 0.0
52
53 # Minuit requires jac=callable
54 if do_grad:
55 wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731
56 jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731
57 else:
58 wrapped_objective = objective_and_grad
59 jac = None
60
61 minuit = iminuit.Minuit(wrapped_objective, init_pars, grad=jac)
62 minuit.errors = step_sizes
63 minuit.limits = init_bounds
64 minuit.fixed = fixed_bools
65 minuit.print_level = self.verbose
66 minuit.errordef = self.errordef
67 return minuit
68
69 def _minimize(
70 self,
71 minimizer,
72 func,
73 x0,
74 do_grad=False,
75 bounds=None,
76 fixed_vals=None,
77 options={},
78 ):
79
80 """
81 Same signature as :func:`scipy.optimize.minimize`.
82
83 Note: an additional `minuit` is injected into the fitresult to get the
84 underlying minimizer.
85
86 Minimizer Options:
87 maxiter (:obj:`int`): maximum number of iterations. Default is 100000.
88 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.
89 tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.
90
91 Returns:
92 fitresult (scipy.optimize.OptimizeResult): the fit result
93 """
94 maxiter = options.pop('maxiter', self.maxiter)
95 # 0: Fast, user-provided gradient
96 # 1: Default, no user-provided gradient
97 strategy = options.pop(
98 'strategy', self.strategy if self.strategy else not do_grad
99 )
100 tolerance = options.pop('tolerance', self.tolerance)
101 if options:
102 raise exceptions.Unsupported(
103 f"Unsupported options were passed in: {list(options.keys())}."
104 )
105
106 minimizer.strategy = strategy
107 minimizer.tol = tolerance
108 minimizer.migrad(ncall=maxiter)
109 # Following lines below come from:
110 # https://github.com/scikit-hep/iminuit/blob/23bad7697e39d363f259ca8349684df939b1b2e6/src/iminuit/_minimize.py#L111-L130
111 message = "Optimization terminated successfully."
112 if not minimizer.valid:
113 message = "Optimization failed."
114 fmin = minimizer.fmin
115 if fmin.has_reached_call_limit:
116 message += " Call limit was reached."
117 if fmin.is_above_max_edm:
118 message += " Estimated distance to minimum too large."
119
120 hess_inv = None
121 corr = None
122 unc = None
123 if minimizer.valid:
124 # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.
125 minimizer.hesse()
126 hess_inv = minimizer.covariance
127 corr = hess_inv.correlation()
128 unc = minimizer.errors
129
130 return scipy.optimize.OptimizeResult(
131 x=minimizer.values,
132 unc=unc,
133 corr=corr,
134 success=minimizer.valid,
135 fun=minimizer.fval,
136 hess_inv=hess_inv,
137 message=message,
138 nfev=minimizer.nfcn,
139 njev=minimizer.ngrad,
140 minuit=minimizer,
141 )
142
[end of src/pyhf/optimize/opt_minuit.py]
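As a usage note (a sketch only, assuming a working pyhf install; the model and data setup is elided), this optimizer is normally selected through the backend API rather than instantiated directly, and its keyword options end up in the ``_minimize`` method shown above:

```python
import pyhf

# Route all fits through MINUIT with a custom tolerance.
pyhf.set_backend("numpy", pyhf.optimize.minuit_optimizer(tolerance=0.1))

# ... build `model` and `data` as usual, then:
# bestfit_pars = pyhf.infer.mle.fit(data, model)
```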
[start of src/pyhf/optimize/opt_scipy.py]
1 """SciPy Optimizer Class."""
2 from .. import exceptions
3 from .mixins import OptimizerMixin
4 import scipy
5
6
7 class scipy_optimizer(OptimizerMixin):
8 """
9 Optimizer that uses :func:`scipy.optimize.minimize`.
10 """
11
12 __slots__ = ['name', 'tolerance']
13
14 def __init__(self, *args, **kwargs):
15 """
16 Initialize the scipy_optimizer.
17
18 See :class:`pyhf.optimize.mixins.OptimizerMixin` for other configuration options.
19
20 Args:
21 tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.
22 """
23 self.name = 'scipy'
24 self.tolerance = kwargs.pop('tolerance', None)
25 super().__init__(*args, **kwargs)
26
27 def _get_minimizer(
28 self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False
29 ):
30 return scipy.optimize.minimize
31
32 def _minimize(
33 self,
34 minimizer,
35 func,
36 x0,
37 do_grad=False,
38 bounds=None,
39 fixed_vals=None,
40 options={},
41 ):
42 """
43 Same signature as :func:`scipy.optimize.minimize`.
44
45 Minimizer Options:
46 maxiter (:obj:`int`): maximum number of iterations. Default is 100000.
47 verbose (:obj:`bool`): print verbose output during minimization. Default is off.
48 method (:obj:`str`): minimization routine. Default is 'SLSQP'.
49 tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.
50
51 Returns:
52 fitresult (scipy.optimize.OptimizeResult): the fit result
53 """
54 maxiter = options.pop('maxiter', self.maxiter)
55 verbose = options.pop('verbose', self.verbose)
56 method = options.pop('method', 'SLSQP')
57 tolerance = options.pop('tolerance', self.tolerance)
58 if options:
59 raise exceptions.Unsupported(
60 f"Unsupported options were passed in: {list(options.keys())}."
61 )
62
63 fixed_vals = fixed_vals or []
64 indices = [i for i, _ in fixed_vals]
65 values = [v for _, v in fixed_vals]
66 if fixed_vals:
67 constraints = [{'type': 'eq', 'fun': lambda v: v[indices] - values}]
68 # update the initial values to the fixed value for any fixed parameter
69 for idx, fixed_val in fixed_vals:
70 x0[idx] = fixed_val
71 else:
72 constraints = []
73
74 return minimizer(
75 func,
76 x0,
77 method=method,
78 jac=do_grad,
79 bounds=bounds,
80 constraints=constraints,
81 tol=tolerance,
82 options=dict(maxiter=maxiter, disp=bool(verbose)),
83 )
84
[end of src/pyhf/optimize/opt_scipy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py
--- a/src/pyhf/optimize/opt_minuit.py
+++ b/src/pyhf/optimize/opt_minuit.py
@@ -25,10 +25,12 @@
Args:
- errordef (:obj:`float`): See minuit docs. Default is 1.0.
- steps (:obj:`int`): Number of steps for the bounds. Default is 1000.
- strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.
- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.
+ errordef (:obj:`float`): See minuit docs. Default is ``1.0``.
+ steps (:obj:`int`): Number of steps for the bounds. Default is ``1000``.
+ strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is ``None``.
+ tolerance (:obj:`float`): Tolerance for termination.
+ See specific optimizer for detailed meaning.
+ Default is ``0.1``.
"""
self.name = 'minuit'
self.errordef = kwargs.pop('errordef', 1)
@@ -84,9 +86,12 @@
underlying minimizer.
Minimizer Options:
- maxiter (:obj:`int`): maximum number of iterations. Default is 100000.
- strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.
- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.
+ * maxiter (:obj:`int`): Maximum number of iterations. Default is ``100000``.
+ * strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`.
+ Default is to configure in response to ``do_grad``.
+ * tolerance (:obj:`float`): Tolerance for termination.
+ See specific optimizer for detailed meaning.
+ Default is ``0.1``.
Returns:
fitresult (scipy.optimize.OptimizeResult): the fit result
diff --git a/src/pyhf/optimize/opt_scipy.py b/src/pyhf/optimize/opt_scipy.py
--- a/src/pyhf/optimize/opt_scipy.py
+++ b/src/pyhf/optimize/opt_scipy.py
@@ -18,7 +18,9 @@
See :class:`pyhf.optimize.mixins.OptimizerMixin` for other configuration options.
Args:
- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.
+ tolerance (:obj:`float`): Tolerance for termination.
+ See specific optimizer for detailed meaning.
+ Default is ``None``.
"""
self.name = 'scipy'
self.tolerance = kwargs.pop('tolerance', None)
@@ -43,10 +45,13 @@
Same signature as :func:`scipy.optimize.minimize`.
Minimizer Options:
- maxiter (:obj:`int`): maximum number of iterations. Default is 100000.
- verbose (:obj:`bool`): print verbose output during minimization. Default is off.
- method (:obj:`str`): minimization routine. Default is 'SLSQP'.
- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.
+ * maxiter (:obj:`int`): Maximum number of iterations. Default is ``100000``.
+ * verbose (:obj:`bool`): Print verbose output during minimization.
+ Default is ``False``.
+ * method (:obj:`str`): Minimization routine. Default is ``'SLSQP'``.
+ * tolerance (:obj:`float`): Tolerance for termination. See specific optimizer
+ for detailed meaning.
+ Default is ``None``.
Returns:
fitresult (scipy.optimize.OptimizeResult): the fit result
|
{"golden_diff": "diff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py\n--- a/src/pyhf/optimize/opt_minuit.py\n+++ b/src/pyhf/optimize/opt_minuit.py\n@@ -25,10 +25,12 @@\n \n \n Args:\n- errordef (:obj:`float`): See minuit docs. Default is 1.0.\n- steps (:obj:`int`): Number of steps for the bounds. Default is 1000.\n- strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.\n- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.\n+ errordef (:obj:`float`): See minuit docs. Default is ``1.0``.\n+ steps (:obj:`int`): Number of steps for the bounds. Default is ``1000``.\n+ strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is ``None``.\n+ tolerance (:obj:`float`): Tolerance for termination.\n+ See specific optimizer for detailed meaning.\n+ Default is ``0.1``.\n \"\"\"\n self.name = 'minuit'\n self.errordef = kwargs.pop('errordef', 1)\n@@ -84,9 +86,12 @@\n underlying minimizer.\n \n Minimizer Options:\n- maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n- strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.\n- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.\n+ * maxiter (:obj:`int`): Maximum number of iterations. Default is ``100000``.\n+ * strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`.\n+ Default is to configure in response to ``do_grad``.\n+ * tolerance (:obj:`float`): Tolerance for termination.\n+ See specific optimizer for detailed meaning.\n+ Default is ``0.1``.\n \n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\ndiff --git a/src/pyhf/optimize/opt_scipy.py b/src/pyhf/optimize/opt_scipy.py\n--- a/src/pyhf/optimize/opt_scipy.py\n+++ b/src/pyhf/optimize/opt_scipy.py\n@@ -18,7 +18,9 @@\n See :class:`pyhf.optimize.mixins.OptimizerMixin` for other configuration options.\n \n Args:\n- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.\n+ tolerance (:obj:`float`): Tolerance for termination.\n+ See specific optimizer for detailed meaning.\n+ Default is ``None``.\n \"\"\"\n self.name = 'scipy'\n self.tolerance = kwargs.pop('tolerance', None)\n@@ -43,10 +45,13 @@\n Same signature as :func:`scipy.optimize.minimize`.\n \n Minimizer Options:\n- maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n- verbose (:obj:`bool`): print verbose output during minimization. Default is off.\n- method (:obj:`str`): minimization routine. Default is 'SLSQP'.\n- tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.\n+ * maxiter (:obj:`int`): Maximum number of iterations. Default is ``100000``.\n+ * verbose (:obj:`bool`): Print verbose output during minimization.\n+ Default is ``False``.\n+ * method (:obj:`str`): Minimization routine. Default is ``'SLSQP'``.\n+ * tolerance (:obj:`float`): Tolerance for termination. See specific optimizer\n+ for detailed meaning.\n+ Default is ``None``.\n \n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n", "issue": "Update modifierclass.rst template to accept functions that aren't methods of a class\n# Description\r\n\r\nFirst noticed in PR #750, while trying to get the docs to show the full signatures and docstrings:\r\n\r\n```rst\r\n.. 
autosummary::\r\n :toctree: _generated/\r\n :nosignatures:\r\n :template: modifierclass.rst\r\n```\r\n\r\nI'm getting the following warnings (which we treat as errors) when I try to build the docs:\r\n```\r\nWARNING: error while formatting arguments for pyhf.infer.calculators.generate_asimov_data: 'function' object has no attribute '__mro__'\r\nWARNING: error while formatting arguments for pyhf.infer.hypotest: 'function' object has no attribute '__mro__'\r\nWARNING: error while formatting arguments for pyhf.infer.mle.fit: 'function' object has no attribute '__mro__'\r\nWARNING: error while formatting arguments for pyhf.infer.mle.fixed_poi_fit: 'function' object has no attribute '__mro__'\r\nWARNING: error while formatting arguments for pyhf.infer.mle.twice_nll: 'function' object has no attribute '__mro__'\r\nWARNING: error while formatting arguments for pyhf.infer.test_statistics.qmu: 'function' object has no attribute '__mro__'\r\n```\r\n\r\nwhich I believe is happening as `__mro__` only exists on the class, and these functions exist in the source code outside of a class definition.\r\n\r\nThis means that the [`modifierclass.rst` template](https://github.com/scikit-hep/pyhf/blob/1ee6e38d42d9551220f20de483e0049b28c848b0/docs/_templates/modifierclass.rst) will need to get updated to deal with functions that aren't methods of a class.\nFix up docstring for _minimize() functions in the optimizers\n# Description\r\n\r\nFrom https://github.com/scikit-hep/pyhf/pull/1338#pullrequestreview-596818258\r\n\r\n> I'm not sure if it can be fixed here, but the \"Minimizer Options\" aren't being displayed correctly for the optimizer _minimize methods.\r\n\r\n\r\n\r\n> we've never been able to see the `_minimize` methods before, so it isn't surprising they might not look perfect.\n", "before_files": [{"content": "\"\"\"Minuit Optimizer Class.\"\"\"\nfrom .. import exceptions\nfrom .mixins import OptimizerMixin\nimport scipy\nimport iminuit\n\n\nclass minuit_optimizer(OptimizerMixin):\n \"\"\"\n Optimizer that minimizes via :meth:`iminuit.Minuit.migrad`.\n \"\"\"\n\n __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Create :class:`iminuit.Minuit` optimizer.\n\n .. note::\n\n ``errordef`` should be 1.0 for a least-squares cost function and 0.5\n for negative log-likelihood function. See page 37 of\n http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes\n called ``UP`` in the ``MINUIT`` docs.\n\n\n Args:\n errordef (:obj:`float`): See minuit docs. Default is 1.0.\n steps (:obj:`int`): Number of steps for the bounds. Default is 1000.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. 
Default is 0.1.\n \"\"\"\n self.name = 'minuit'\n self.errordef = kwargs.pop('errordef', 1)\n self.steps = kwargs.pop('steps', 1000)\n self.strategy = kwargs.pop('strategy', None)\n self.tolerance = kwargs.pop('tolerance', 0.1)\n super().__init__(*args, **kwargs)\n\n def _get_minimizer(\n self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False\n ):\n\n step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]\n fixed_vals = fixed_vals or []\n # Minuit wants True/False for each parameter\n fixed_bools = [False] * len(init_pars)\n for index, val in fixed_vals:\n fixed_bools[index] = True\n init_pars[index] = val\n step_sizes[index] = 0.0\n\n # Minuit requires jac=callable\n if do_grad:\n wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731\n jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731\n else:\n wrapped_objective = objective_and_grad\n jac = None\n\n minuit = iminuit.Minuit(wrapped_objective, init_pars, grad=jac)\n minuit.errors = step_sizes\n minuit.limits = init_bounds\n minuit.fixed = fixed_bools\n minuit.print_level = self.verbose\n minuit.errordef = self.errordef\n return minuit\n\n def _minimize(\n self,\n minimizer,\n func,\n x0,\n do_grad=False,\n bounds=None,\n fixed_vals=None,\n options={},\n ):\n\n \"\"\"\n Same signature as :func:`scipy.optimize.minimize`.\n\n Note: an additional `minuit` is injected into the fitresult to get the\n underlying minimizer.\n\n Minimizer Options:\n maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.\n\n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n \"\"\"\n maxiter = options.pop('maxiter', self.maxiter)\n # 0: Fast, user-provided gradient\n # 1: Default, no user-provided gradient\n strategy = options.pop(\n 'strategy', self.strategy if self.strategy else not do_grad\n )\n tolerance = options.pop('tolerance', self.tolerance)\n if options:\n raise exceptions.Unsupported(\n f\"Unsupported options were passed in: {list(options.keys())}.\"\n )\n\n minimizer.strategy = strategy\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n # https://github.com/scikit-hep/iminuit/blob/23bad7697e39d363f259ca8349684df939b1b2e6/src/iminuit/_minimize.py#L111-L130\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n fmin = minimizer.fmin\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n\n hess_inv = None\n corr = None\n unc = None\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n hess_inv = minimizer.covariance\n corr = hess_inv.correlation()\n unc = minimizer.errors\n\n return scipy.optimize.OptimizeResult(\n x=minimizer.values,\n unc=unc,\n corr=corr,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n nfev=minimizer.nfcn,\n njev=minimizer.ngrad,\n minuit=minimizer,\n )\n", "path": "src/pyhf/optimize/opt_minuit.py"}, {"content": "\"\"\"SciPy Optimizer Class.\"\"\"\nfrom .. 
import exceptions\nfrom .mixins import OptimizerMixin\nimport scipy\n\n\nclass scipy_optimizer(OptimizerMixin):\n \"\"\"\n Optimizer that uses :func:`scipy.optimize.minimize`.\n \"\"\"\n\n __slots__ = ['name', 'tolerance']\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Initialize the scipy_optimizer.\n\n See :class:`pyhf.optimize.mixins.OptimizerMixin` for other configuration options.\n\n Args:\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.\n \"\"\"\n self.name = 'scipy'\n self.tolerance = kwargs.pop('tolerance', None)\n super().__init__(*args, **kwargs)\n\n def _get_minimizer(\n self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False\n ):\n return scipy.optimize.minimize\n\n def _minimize(\n self,\n minimizer,\n func,\n x0,\n do_grad=False,\n bounds=None,\n fixed_vals=None,\n options={},\n ):\n \"\"\"\n Same signature as :func:`scipy.optimize.minimize`.\n\n Minimizer Options:\n maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n verbose (:obj:`bool`): print verbose output during minimization. Default is off.\n method (:obj:`str`): minimization routine. Default is 'SLSQP'.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is None.\n\n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n \"\"\"\n maxiter = options.pop('maxiter', self.maxiter)\n verbose = options.pop('verbose', self.verbose)\n method = options.pop('method', 'SLSQP')\n tolerance = options.pop('tolerance', self.tolerance)\n if options:\n raise exceptions.Unsupported(\n f\"Unsupported options were passed in: {list(options.keys())}.\"\n )\n\n fixed_vals = fixed_vals or []\n indices = [i for i, _ in fixed_vals]\n values = [v for _, v in fixed_vals]\n if fixed_vals:\n constraints = [{'type': 'eq', 'fun': lambda v: v[indices] - values}]\n # update the initial values to the fixed value for any fixed parameter\n for idx, fixed_val in fixed_vals:\n x0[idx] = fixed_val\n else:\n constraints = []\n\n return minimizer(\n func,\n x0,\n method=method,\n jac=do_grad,\n bounds=bounds,\n constraints=constraints,\n tol=tolerance,\n options=dict(maxiter=maxiter, disp=bool(verbose)),\n )\n", "path": "src/pyhf/optimize/opt_scipy.py"}]}
| 3,513 | 927 |
gh_patches_debug_31727
|
rasdani/github-patches
|
git_diff
|
onnx__onnx-5555
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use pillow to replace opencv in reference evaluator
Caveat: https://github.com/python-pillow/Pillow/issues/6047#issuecomment-1038150443
cc @jcwchen
</issue>
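The Pillow-based replacement sketched in the patch below boils down to decoding from an in-memory buffer and reordering channels with NumPy. A minimal sketch (the input file name is hypothetical; Pillow and NumPy are assumed to be installed):

```python
import io

import numpy as np
import PIL.Image

encoded = np.fromfile("example.jpg", dtype=np.uint8)  # 1-D uint8 buffer, as the op receives it

img = PIL.Image.open(io.BytesIO(encoded.tobytes()))
rgb = np.array(img)                            # (H, W, 3), RGB channel order
bgr = rgb[:, :, ::-1]                          # reverse channel order for BGR output
gray = np.array(img.convert("L"))[:, :, None]  # (H, W, 1) grayscale
```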
<code>
[start of onnx/reference/ops/op_image_decoder.py]
1 # Copyright (c) ONNX Project Contributors
2
3 # SPDX-License-Identifier: Apache-2.0
4 # pylint: disable=C0123,C3001,R0912,R0913,R0914,R1730,W0221,W0613
5
6 import numpy as np
7
8 from onnx.reference.op_run import OpRun
9
10
11 class ImageDecoder(OpRun):
12 def _run( # type: ignore
13 self,
14 encoded,
15 pixel_format="RGB",
16 ):
17 try:
18 # pylint: disable=import-outside-toplevel`
19 import cv2
20 except ImportError as e:
21 raise ImportError(
22 "opencv-python must be installed to use the reference implementation of the ImageDecoder operator"
23 ) from e
24 decoded = None
25 if pixel_format == "BGR":
26 decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)
27 elif pixel_format == "RGB":
28 decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)
29 decoded = cv2.cvtColor(decoded, cv2.COLOR_BGR2RGB)
30 elif pixel_format == "Grayscale":
31 decoded = cv2.imdecode(encoded, cv2.IMREAD_GRAYSCALE)
32 decoded = np.expand_dims(decoded, axis=2) # (H, W) to (H, W, 1)
33 else:
34 raise RuntimeError(f"pixel_format={pixel_format!r} is not supported.")
35 return (decoded,)
36
[end of onnx/reference/ops/op_image_decoder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/onnx/reference/ops/op_image_decoder.py b/onnx/reference/ops/op_image_decoder.py
--- a/onnx/reference/ops/op_image_decoder.py
+++ b/onnx/reference/ops/op_image_decoder.py
@@ -1,7 +1,10 @@
# Copyright (c) ONNX Project Contributors
# SPDX-License-Identifier: Apache-2.0
-# pylint: disable=C0123,C3001,R0912,R0913,R0914,R1730,W0221,W0613
+
+from __future__ import annotations
+
+import io
import numpy as np
@@ -9,27 +12,22 @@
class ImageDecoder(OpRun):
- def _run( # type: ignore
- self,
- encoded,
- pixel_format="RGB",
- ):
+ def _run(self, encoded: np.ndarray, pixel_format="RGB") -> tuple[np.ndarray]: # type: ignore
try:
- # pylint: disable=import-outside-toplevel`
- import cv2
+ import PIL.Image # pylint: disable=import-outside-toplevel
except ImportError as e:
raise ImportError(
- "opencv-python must be installed to use the reference implementation of the ImageDecoder operator"
+ "Pillow must be installed to use the reference implementation of the ImageDecoder operator"
) from e
- decoded = None
+ img = PIL.Image.open(io.BytesIO(encoded.tobytes()))
if pixel_format == "BGR":
- decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)
+ decoded = np.array(img)[:, :, ::-1]
elif pixel_format == "RGB":
- decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)
- decoded = cv2.cvtColor(decoded, cv2.COLOR_BGR2RGB)
+ decoded = np.array(img)
elif pixel_format == "Grayscale":
- decoded = cv2.imdecode(encoded, cv2.IMREAD_GRAYSCALE)
+ img = img.convert("L")
+ decoded = np.array(img)
decoded = np.expand_dims(decoded, axis=2) # (H, W) to (H, W, 1)
else:
- raise RuntimeError(f"pixel_format={pixel_format!r} is not supported.")
+ raise ValueError(f"pixel_format={pixel_format!r} is not supported.")
return (decoded,)
|
{"golden_diff": "diff --git a/onnx/reference/ops/op_image_decoder.py b/onnx/reference/ops/op_image_decoder.py\n--- a/onnx/reference/ops/op_image_decoder.py\n+++ b/onnx/reference/ops/op_image_decoder.py\n@@ -1,7 +1,10 @@\n # Copyright (c) ONNX Project Contributors\n \n # SPDX-License-Identifier: Apache-2.0\n-# pylint: disable=C0123,C3001,R0912,R0913,R0914,R1730,W0221,W0613\n+\n+from __future__ import annotations\n+\n+import io\n \n import numpy as np\n \n@@ -9,27 +12,22 @@\n \n \n class ImageDecoder(OpRun):\n- def _run( # type: ignore\n- self,\n- encoded,\n- pixel_format=\"RGB\",\n- ):\n+ def _run(self, encoded: np.ndarray, pixel_format=\"RGB\") -> tuple[np.ndarray]: # type: ignore\n try:\n- # pylint: disable=import-outside-toplevel`\n- import cv2\n+ import PIL.Image # pylint: disable=import-outside-toplevel\n except ImportError as e:\n raise ImportError(\n- \"opencv-python must be installed to use the reference implementation of the ImageDecoder operator\"\n+ \"Pillow must be installed to use the reference implementation of the ImageDecoder operator\"\n ) from e\n- decoded = None\n+ img = PIL.Image.open(io.BytesIO(encoded.tobytes()))\n if pixel_format == \"BGR\":\n- decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)\n+ decoded = np.array(img)[:, :, ::-1]\n elif pixel_format == \"RGB\":\n- decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)\n- decoded = cv2.cvtColor(decoded, cv2.COLOR_BGR2RGB)\n+ decoded = np.array(img)\n elif pixel_format == \"Grayscale\":\n- decoded = cv2.imdecode(encoded, cv2.IMREAD_GRAYSCALE)\n+ img = img.convert(\"L\")\n+ decoded = np.array(img)\n decoded = np.expand_dims(decoded, axis=2) # (H, W) to (H, W, 1)\n else:\n- raise RuntimeError(f\"pixel_format={pixel_format!r} is not supported.\")\n+ raise ValueError(f\"pixel_format={pixel_format!r} is not supported.\")\n return (decoded,)\n", "issue": "Use pillow to replace opencv in reference evaluator\nCaveat: https://github.com/python-pillow/Pillow/issues/6047#issuecomment-1038150443\r\n\r\ncc @jcwchen \n", "before_files": [{"content": "# Copyright (c) ONNX Project Contributors\n\n# SPDX-License-Identifier: Apache-2.0\n# pylint: disable=C0123,C3001,R0912,R0913,R0914,R1730,W0221,W0613\n\nimport numpy as np\n\nfrom onnx.reference.op_run import OpRun\n\n\nclass ImageDecoder(OpRun):\n def _run( # type: ignore\n self,\n encoded,\n pixel_format=\"RGB\",\n ):\n try:\n # pylint: disable=import-outside-toplevel`\n import cv2\n except ImportError as e:\n raise ImportError(\n \"opencv-python must be installed to use the reference implementation of the ImageDecoder operator\"\n ) from e\n decoded = None\n if pixel_format == \"BGR\":\n decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)\n elif pixel_format == \"RGB\":\n decoded = cv2.imdecode(encoded, cv2.IMREAD_COLOR)\n decoded = cv2.cvtColor(decoded, cv2.COLOR_BGR2RGB)\n elif pixel_format == \"Grayscale\":\n decoded = cv2.imdecode(encoded, cv2.IMREAD_GRAYSCALE)\n decoded = np.expand_dims(decoded, axis=2) # (H, W) to (H, W, 1)\n else:\n raise RuntimeError(f\"pixel_format={pixel_format!r} is not supported.\")\n return (decoded,)\n", "path": "onnx/reference/ops/op_image_decoder.py"}]}
| 971 | 538 |
gh_patches_debug_7783
|
rasdani/github-patches
|
git_diff
|
conda__conda-build-249
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to build source.git_url based build recipes
See this build for an example: https://binstar.org/conda/conda-env/builds/2/3
</issue>
<code>
[start of conda_build/source.py]
1 from __future__ import absolute_import, division, print_function
2
3 import os
4 import sys
5 from os.path import join, isdir, isfile, abspath, expanduser
6 from shutil import copytree, ignore_patterns, copy2
7 from subprocess import check_call, Popen, PIPE
8
9 from conda.fetch import download
10 from conda.utils import hashsum_file
11
12 from conda_build import external
13 from conda_build.config import config
14 from conda_build.utils import rm_rf, tar_xf, unzip
15
16
17 SRC_CACHE = join(config.croot, 'src_cache')
18 GIT_CACHE = join(config.croot, 'git_cache')
19 HG_CACHE = join(config.croot, 'hg_cache')
20 SVN_CACHE = join(config.croot, 'svn_cache')
21 WORK_DIR = join(config.croot, 'work')
22
23
24 def get_dir():
25 if not isdir(WORK_DIR):
26 os.makedirs(WORK_DIR)
27 lst = [fn for fn in os.listdir(WORK_DIR) if not fn.startswith('.')]
28 if len(lst) == 1:
29 dir_path = join(WORK_DIR, lst[0])
30 if isdir(dir_path):
31 return dir_path
32 return WORK_DIR
33
34
35 def download_to_cache(meta):
36 ''' Download a source to the local cache. '''
37 print('Source cache directory is: %s' % SRC_CACHE)
38 if not isdir(SRC_CACHE):
39 os.makedirs(SRC_CACHE)
40
41 fn = meta['fn']
42 path = join(SRC_CACHE, fn)
43
44 if isfile(path):
45 print('Found source in cache: %s' % fn)
46 else:
47 print('Downloading source to cache: %s' % fn)
48 download(meta['url'], path)
49
50 for tp in 'md5', 'sha1', 'sha256':
51 if meta.get(tp) and hashsum_file(path, tp) != meta[tp]:
52 raise RuntimeError("%s mismatch: '%s' != '%s'" %
53 (tp.upper(), hashsum_file(path, tp), meta[tp]))
54
55 return path
56
57
58 def unpack(meta):
59 ''' Uncompress a downloaded source. '''
60 src_path = download_to_cache(meta)
61
62 os.makedirs(WORK_DIR)
63 print("Extracting download")
64 if src_path.lower().endswith(('.tar.gz', '.tar.bz2', '.tgz', '.tar.xz', '.tar')):
65 tar_xf(src_path, WORK_DIR)
66 elif src_path.lower().endswith('.zip'):
67 unzip(src_path, WORK_DIR)
68 else:
69 # In this case, the build script will need to deal with unpacking the source
70 print("Warning: Unrecognized source format. Source file will be copied to the SRC_DIR")
71 copy2(src_path, WORK_DIR)
72
73
74 def git_source(meta, recipe_dir):
75 ''' Download a source from Git repo. '''
76 if not isdir(GIT_CACHE):
77 os.makedirs(GIT_CACHE)
78
79 git = external.find_executable('git')
80 if not git:
81 sys.exit("Error: git is not installed")
82 git_url = meta['git_url']
83 if git_url.startswith('.'):
84 # It's a relative path from the conda recipe
85 os.chdir(recipe_dir)
86 git_dn = abspath(expanduser(git_url)).replace('/', '_')
87 else:
88 git_dn = git_url.split(':')[-1].replace('/', '_')
89 cache_repo = cache_repo_arg = join(GIT_CACHE, git_dn)
90 if sys.platform == 'win32':
91 cache_repo_arg = cache_repo_arg.replace('\\', '/')
92 if os.getenv('USERNAME') == 'builder':
93 cache_repo_arg = '/cygdrive/c/' + cache_repo_arg[3:]
94
95 # update (or create) the cache repo
96 if isdir(cache_repo):
97 check_call([git, 'fetch'], cwd=cache_repo)
98 else:
99 check_call([git, 'clone', '--mirror', git_url, cache_repo_arg], cwd=recipe_dir)
100 assert isdir(cache_repo)
101
102 # now clone into the work directory
103 checkout = meta.get('git_rev')
104 # if rev is not specified, and the git_url is local,
105 # assume the user wants the current HEAD
106 if not checkout and git_url.startswith('.'):
107 process = Popen(["git", "rev-parse", "HEAD"],
108 stdout=PIPE, stderr=PIPE,
109 cwd=git_url)
110 output = process.communicate()[0].strip()
111 checkout = output.decode('utf-8')
112 if checkout:
113 print('checkout: %r' % checkout)
114
115 check_call([git, 'clone', cache_repo_arg, WORK_DIR])
116 if checkout:
117 check_call([git, 'checkout', checkout], cwd=WORK_DIR)
118
119 git_info()
120 return WORK_DIR
121
122
123 def git_info(fo=None):
124 ''' Print info about a Git repo. '''
125 assert isdir(WORK_DIR)
126
127 # Ensure to explicitly set GIT_DIR as some Linux machines will not
128 # properly execute without it.
129 env = os.environ.copy()
130 env['GIT_DIR'] = join(WORK_DIR, '.git')
131 env = {str(key): str(value) for key, value in env.items()}
132 for cmd, check_error in [
133 ('git log -n1', True),
134 ('git describe --tags --dirty', False),
135 ('git status', True)]:
136 p = Popen(cmd.split(), stdout=PIPE, stderr=PIPE, cwd=WORK_DIR, env=env)
137 stdout, stderr = p.communicate()
138 stdout = stdout.decode('utf-8')
139 stderr = stderr.decode('utf-8')
140 if check_error and stderr and stderr.strip():
141 raise Exception("git error: %s" % stderr)
142 if fo:
143 fo.write(u'==> %s <==\n' % cmd)
144 fo.write(stdout + u'\n')
145 else:
146 print(u'==> %s <==\n' % cmd)
147 print(stdout + u'\n')
148
149
150 def hg_source(meta):
151 ''' Download a source from Mercurial repo. '''
152 hg = external.find_executable('hg')
153 if not hg:
154 sys.exit('Error: hg not installed')
155 hg_url = meta['hg_url']
156 if not isdir(HG_CACHE):
157 os.makedirs(HG_CACHE)
158 hg_dn = hg_url.split(':')[-1].replace('/', '_')
159 cache_repo = join(HG_CACHE, hg_dn)
160 if isdir(cache_repo):
161 check_call([hg, 'pull'], cwd=cache_repo)
162 else:
163 check_call([hg, 'clone', hg_url, cache_repo])
164 assert isdir(cache_repo)
165
166 # now clone in to work directory
167 update = meta.get('hg_tag') or 'tip'
168 print('checkout: %r' % update)
169
170 check_call([hg, 'clone', cache_repo, WORK_DIR])
171 check_call([hg, 'update', '-C', update], cwd=WORK_DIR)
172 return WORK_DIR
173
174
175
176 def svn_source(meta):
177 ''' Download a source from SVN repo. '''
178 def parse_bool(s):
179 return str(s).lower().strip() in ('yes', 'true', '1', 'on')
180
181 svn = external.find_executable('svn')
182 if not svn:
183 sys.exit("Error: svn is not installed")
184 svn_url = meta['svn_url']
185 svn_revision = meta.get('svn_rev') or 'head'
186 svn_ignore_externals = parse_bool(meta.get('svn_ignore_externals') or 'no')
187 if not isdir(SVN_CACHE):
188 os.makedirs(SVN_CACHE)
189 svn_dn = svn_url.split(':', 1)[-1].replace('/', '_').replace(':', '_')
190 cache_repo = join(SVN_CACHE, svn_dn)
191 if svn_ignore_externals:
192 extra_args = ['--ignore-externals']
193 else:
194 extra_args = []
195 if isdir(cache_repo):
196 check_call([svn, 'up', '-r', svn_revision] + extra_args, cwd=cache_repo)
197 else:
198 check_call([svn, 'co', '-r', svn_revision] + extra_args + [svn_url,
199 cache_repo])
200 assert isdir(cache_repo)
201
202 # now copy into work directory
203 copytree(cache_repo, WORK_DIR)
204 return WORK_DIR
205
206
207 def apply_patch(src_dir, path):
208 print('Applying patch: %r' % path)
209 if not isfile(path):
210 sys.exit('Error: no such patch: %s' % path)
211
212 patch = external.find_executable('patch')
213 if patch is None:
214 sys.exit("""\
215 Error:
216 Did not find 'patch' in: %s
217 You can install 'patch' using apt-get, yum (Linux), Xcode (MacOSX),
218 or conda, cygwin (Windows),
219 """ % (os.pathsep.join(external.dir_paths)))
220 check_call([patch, '-p0', '-i', path], cwd=src_dir)
221
222
223 def provide(recipe_dir, meta, patch=True):
224 """
225 given a recipe_dir:
226 - download (if necessary)
227 - unpack
228 - apply patches (if any)
229 """
230 print("Removing old work directory")
231 rm_rf(WORK_DIR)
232 if 'fn' in meta:
233 unpack(meta)
234 elif 'git_url' in meta:
235 git_source(meta, recipe_dir)
236 elif 'hg_url' in meta:
237 hg_source(meta)
238 elif 'svn_url' in meta:
239 svn_source(meta)
240 elif 'path' in meta:
241 print("Copying %s to %s" % (abspath(join(recipe_dir, meta.get('path'))), WORK_DIR))
242 copytree(abspath(join(recipe_dir, meta.get('path'))), WORK_DIR)
243 else: # no source
244 os.makedirs(WORK_DIR)
245
246 if patch:
247 src_dir = get_dir()
248 for patch in meta.get('patches', []):
249 apply_patch(src_dir, join(recipe_dir, patch))
250
251
252 if __name__ == '__main__':
253 print(provide('.',
254 {'url': 'http://pypi.python.org/packages/source/b/bitarray/bitarray-0.8.0.tar.gz',
255 'git_url': '[email protected]:ilanschnell/bitarray.git',
256 'git_tag': '0.5.2'}))
257
[end of conda_build/source.py]
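The cache-name derivation in ``git_source`` above is where the build breaks for local ``git_url`` values. A small illustration of what the original expression and the corrected one (taken from the patch that follows) produce for a relative URL — the paths here are hypothetical, so the printed values depend on the working directory:

```python
import os
from os.path import abspath, expanduser

git_url = "../conda-env"  # hypothetical relative git_url from a recipe

resolved = abspath(expanduser(git_url))
git_dn_old = resolved.replace("/", "_")                 # e.g. '_home_builder_conda-env'
git_dn_new = "_".join(resolved.split(os.path.sep)[1:])  # e.g. 'home_builder_conda-env'

print(git_dn_old, git_dn_new)
```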
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda_build/source.py b/conda_build/source.py
--- a/conda_build/source.py
+++ b/conda_build/source.py
@@ -83,7 +83,8 @@
if git_url.startswith('.'):
# It's a relative path from the conda recipe
os.chdir(recipe_dir)
- git_dn = abspath(expanduser(git_url)).replace('/', '_')
+ git_dn = abspath(expanduser(git_url))
+ git_dn = "_".join(git_dn.split(os.path.sep)[1:])
else:
git_dn = git_url.split(':')[-1].replace('/', '_')
cache_repo = cache_repo_arg = join(GIT_CACHE, git_dn)
|
{"golden_diff": "diff --git a/conda_build/source.py b/conda_build/source.py\n--- a/conda_build/source.py\n+++ b/conda_build/source.py\n@@ -83,7 +83,8 @@\n if git_url.startswith('.'):\n # It's a relative path from the conda recipe\n os.chdir(recipe_dir)\n- git_dn = abspath(expanduser(git_url)).replace('/', '_')\n+ git_dn = abspath(expanduser(git_url))\n+ git_dn = \"_\".join(git_dn.split(os.path.sep)[1:])\n else:\n git_dn = git_url.split(':')[-1].replace('/', '_')\n cache_repo = cache_repo_arg = join(GIT_CACHE, git_dn)\n", "issue": "Unable to build source.git_url based build recipes\nSee for an example: https://binstar.org/conda/conda-env/builds/2/3\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\nfrom os.path import join, isdir, isfile, abspath, expanduser\nfrom shutil import copytree, ignore_patterns, copy2\nfrom subprocess import check_call, Popen, PIPE\n\nfrom conda.fetch import download\nfrom conda.utils import hashsum_file\n\nfrom conda_build import external\nfrom conda_build.config import config\nfrom conda_build.utils import rm_rf, tar_xf, unzip\n\n\nSRC_CACHE = join(config.croot, 'src_cache')\nGIT_CACHE = join(config.croot, 'git_cache')\nHG_CACHE = join(config.croot, 'hg_cache')\nSVN_CACHE = join(config.croot, 'svn_cache')\nWORK_DIR = join(config.croot, 'work')\n\n\ndef get_dir():\n if not isdir(WORK_DIR):\n os.makedirs(WORK_DIR)\n lst = [fn for fn in os.listdir(WORK_DIR) if not fn.startswith('.')]\n if len(lst) == 1:\n dir_path = join(WORK_DIR, lst[0])\n if isdir(dir_path):\n return dir_path\n return WORK_DIR\n\n\ndef download_to_cache(meta):\n ''' Download a source to the local cache. '''\n print('Source cache directory is: %s' % SRC_CACHE)\n if not isdir(SRC_CACHE):\n os.makedirs(SRC_CACHE)\n\n fn = meta['fn']\n path = join(SRC_CACHE, fn)\n\n if isfile(path):\n print('Found source in cache: %s' % fn)\n else:\n print('Downloading source to cache: %s' % fn)\n download(meta['url'], path)\n\n for tp in 'md5', 'sha1', 'sha256':\n if meta.get(tp) and hashsum_file(path, tp) != meta[tp]:\n raise RuntimeError(\"%s mismatch: '%s' != '%s'\" %\n (tp.upper(), hashsum_file(path, tp), meta[tp]))\n\n return path\n\n\ndef unpack(meta):\n ''' Uncompress a downloaded source. '''\n src_path = download_to_cache(meta)\n\n os.makedirs(WORK_DIR)\n print(\"Extracting download\")\n if src_path.lower().endswith(('.tar.gz', '.tar.bz2', '.tgz', '.tar.xz', '.tar')):\n tar_xf(src_path, WORK_DIR)\n elif src_path.lower().endswith('.zip'):\n unzip(src_path, WORK_DIR)\n else:\n # In this case, the build script will need to deal with unpacking the source\n print(\"Warning: Unrecognized source format. Source file will be copied to the SRC_DIR\")\n copy2(src_path, WORK_DIR)\n\n\ndef git_source(meta, recipe_dir):\n ''' Download a source from Git repo. 
'''\n if not isdir(GIT_CACHE):\n os.makedirs(GIT_CACHE)\n\n git = external.find_executable('git')\n if not git:\n sys.exit(\"Error: git is not installed\")\n git_url = meta['git_url']\n if git_url.startswith('.'):\n # It's a relative path from the conda recipe\n os.chdir(recipe_dir)\n git_dn = abspath(expanduser(git_url)).replace('/', '_')\n else:\n git_dn = git_url.split(':')[-1].replace('/', '_')\n cache_repo = cache_repo_arg = join(GIT_CACHE, git_dn)\n if sys.platform == 'win32':\n cache_repo_arg = cache_repo_arg.replace('\\\\', '/')\n if os.getenv('USERNAME') == 'builder':\n cache_repo_arg = '/cygdrive/c/' + cache_repo_arg[3:]\n\n # update (or create) the cache repo\n if isdir(cache_repo):\n check_call([git, 'fetch'], cwd=cache_repo)\n else:\n check_call([git, 'clone', '--mirror', git_url, cache_repo_arg], cwd=recipe_dir)\n assert isdir(cache_repo)\n\n # now clone into the work directory\n checkout = meta.get('git_rev')\n # if rev is not specified, and the git_url is local,\n # assume the user wants the current HEAD\n if not checkout and git_url.startswith('.'):\n process = Popen([\"git\", \"rev-parse\", \"HEAD\"],\n stdout=PIPE, stderr=PIPE,\n cwd=git_url)\n output = process.communicate()[0].strip()\n checkout = output.decode('utf-8')\n if checkout:\n print('checkout: %r' % checkout)\n\n check_call([git, 'clone', cache_repo_arg, WORK_DIR])\n if checkout:\n check_call([git, 'checkout', checkout], cwd=WORK_DIR)\n\n git_info()\n return WORK_DIR\n\n\ndef git_info(fo=None):\n ''' Print info about a Git repo. '''\n assert isdir(WORK_DIR)\n\n # Ensure to explicitly set GIT_DIR as some Linux machines will not\n # properly execute without it.\n env = os.environ.copy()\n env['GIT_DIR'] = join(WORK_DIR, '.git')\n env = {str(key): str(value) for key, value in env.items()}\n for cmd, check_error in [\n ('git log -n1', True),\n ('git describe --tags --dirty', False),\n ('git status', True)]:\n p = Popen(cmd.split(), stdout=PIPE, stderr=PIPE, cwd=WORK_DIR, env=env)\n stdout, stderr = p.communicate()\n stdout = stdout.decode('utf-8')\n stderr = stderr.decode('utf-8')\n if check_error and stderr and stderr.strip():\n raise Exception(\"git error: %s\" % stderr)\n if fo:\n fo.write(u'==> %s <==\\n' % cmd)\n fo.write(stdout + u'\\n')\n else:\n print(u'==> %s <==\\n' % cmd)\n print(stdout + u'\\n')\n\n\ndef hg_source(meta):\n ''' Download a source from Mercurial repo. '''\n hg = external.find_executable('hg')\n if not hg:\n sys.exit('Error: hg not installed')\n hg_url = meta['hg_url']\n if not isdir(HG_CACHE):\n os.makedirs(HG_CACHE)\n hg_dn = hg_url.split(':')[-1].replace('/', '_')\n cache_repo = join(HG_CACHE, hg_dn)\n if isdir(cache_repo):\n check_call([hg, 'pull'], cwd=cache_repo)\n else:\n check_call([hg, 'clone', hg_url, cache_repo])\n assert isdir(cache_repo)\n\n # now clone in to work directory\n update = meta.get('hg_tag') or 'tip'\n print('checkout: %r' % update)\n\n check_call([hg, 'clone', cache_repo, WORK_DIR])\n check_call([hg, 'update', '-C', update], cwd=WORK_DIR)\n return WORK_DIR\n\n\n\ndef svn_source(meta):\n ''' Download a source from SVN repo. 
'''\n def parse_bool(s):\n return str(s).lower().strip() in ('yes', 'true', '1', 'on')\n\n svn = external.find_executable('svn')\n if not svn:\n sys.exit(\"Error: svn is not installed\")\n svn_url = meta['svn_url']\n svn_revision = meta.get('svn_rev') or 'head'\n svn_ignore_externals = parse_bool(meta.get('svn_ignore_externals') or 'no')\n if not isdir(SVN_CACHE):\n os.makedirs(SVN_CACHE)\n svn_dn = svn_url.split(':', 1)[-1].replace('/', '_').replace(':', '_')\n cache_repo = join(SVN_CACHE, svn_dn)\n if svn_ignore_externals:\n extra_args = ['--ignore-externals']\n else:\n extra_args = []\n if isdir(cache_repo):\n check_call([svn, 'up', '-r', svn_revision] + extra_args, cwd=cache_repo)\n else:\n check_call([svn, 'co', '-r', svn_revision] + extra_args + [svn_url,\n cache_repo])\n assert isdir(cache_repo)\n\n # now copy into work directory\n copytree(cache_repo, WORK_DIR)\n return WORK_DIR\n\n\ndef apply_patch(src_dir, path):\n print('Applying patch: %r' % path)\n if not isfile(path):\n sys.exit('Error: no such patch: %s' % path)\n\n patch = external.find_executable('patch')\n if patch is None:\n sys.exit(\"\"\"\\\nError:\n Did not find 'patch' in: %s\n You can install 'patch' using apt-get, yum (Linux), Xcode (MacOSX),\n or conda, cygwin (Windows),\n\"\"\" % (os.pathsep.join(external.dir_paths)))\n check_call([patch, '-p0', '-i', path], cwd=src_dir)\n\n\ndef provide(recipe_dir, meta, patch=True):\n \"\"\"\n given a recipe_dir:\n - download (if necessary)\n - unpack\n - apply patches (if any)\n \"\"\"\n print(\"Removing old work directory\")\n rm_rf(WORK_DIR)\n if 'fn' in meta:\n unpack(meta)\n elif 'git_url' in meta:\n git_source(meta, recipe_dir)\n elif 'hg_url' in meta:\n hg_source(meta)\n elif 'svn_url' in meta:\n svn_source(meta)\n elif 'path' in meta:\n print(\"Copying %s to %s\" % (abspath(join(recipe_dir, meta.get('path'))), WORK_DIR))\n copytree(abspath(join(recipe_dir, meta.get('path'))), WORK_DIR)\n else: # no source\n os.makedirs(WORK_DIR)\n\n if patch:\n src_dir = get_dir()\n for patch in meta.get('patches', []):\n apply_patch(src_dir, join(recipe_dir, patch))\n\n\nif __name__ == '__main__':\n print(provide('.',\n {'url': 'http://pypi.python.org/packages/source/b/bitarray/bitarray-0.8.0.tar.gz',\n 'git_url': '[email protected]:ilanschnell/bitarray.git',\n 'git_tag': '0.5.2'}))\n", "path": "conda_build/source.py"}]}
| 3,452 | 155 |
gh_patches_debug_3465
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-5208
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] ColumnSelect missing tag `"skip-inverse-transform"`
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
The ColumnSelect is missing the tag `"skip-inverse-transform"`. Thus, a TransformedTargetForecaster subsetting the `y` input will fail when calling predict.
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.subset import ColumnSelect
from sktime.datasets import load_longley
y = load_longley()[1][["GNP", "UNEMP"]]
fc = ColumnSelect(["GNP"]) * NaiveForecaster()
fc.fit(y)
fc.predict(fh=[1])
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
I would expect that the forecast is performed only on the selected time series without raising an error.
**Additional context**
<!--
Add any other context about the problem here.
-->
This would be fixed by just adding the tag `skip-inverse-transform` to ColumnSelect.
</issue>
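The one-line change the report asks for is easiest to see as a sketch (the tag name comes from the issue text; the other ``_tags`` entries of ``ColumnSelect`` are elided here and not reproduced):

```python
from sktime.transformations.base import BaseTransformer

class ColumnSelect(BaseTransformer):

    _tags = {
        # ... existing tags kept as-is ...
        "skip-inverse-transform": True,  # pipelines then skip inverse_transform for this step
    }
```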
<code>
[start of sktime/transformations/series/subset.py]
1 """Transformers for index and column subsetting."""
2 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file).
3
4 __author__ = ["fkiraly"]
5
6 import pandas as pd
7
8 from sktime.transformations.base import BaseTransformer
9
10
11 class IndexSubset(BaseTransformer):
12 r"""Index subsetting transformer.
13
14 In transform, subsets `X` to the indices in `y.index`.
15 If `y` is None, returns `X` without subsetting.
16 numpy-based `X` are interpreted as having a RangeIndex starting at n,
17 where n is the number of numpy rows seen so far through `fit` and `update`.
18 Non-pandas types are interpreted as having index as after conversion to pandas,
19 via `datatypes.convert_to`, to the `"pd.DataFrame"` sktime type.
20
21 Parameters
22 ----------
23 index_treatment : str, optional, one of "keep" (default) or "remove"
24 determines which indices are kept in `Xt = transform(X, y)`
25 "keep" = all indices in y also appear in Xt. If not present in X, NA is filled.
26 "remove" = only indices that appear in both X and y are present in Xt.
27
28 Examples
29 --------
30 >>> from sktime.transformations.series.subset import IndexSubset
31 >>> from sktime.datasets import load_airline
32 >>> X = load_airline()[0:32]
33 >>> y = load_airline()[24:42]
34 >>> transformer = IndexSubset()
35 >>> X_subset = transformer.fit_transform(X=X, y=y)
36 """
37
38 _tags = {
39 "scitype:transform-input": "Series",
40 # what is the scitype of X: Series, or Panel
41 "scitype:transform-output": "Series",
42 # what scitype is returned: Primitives, Series, Panel
43 "scitype:instancewise": True, # is this an instance-wise transform?
44 "X_inner_mtype": ["pd.DataFrame", "pd.Series"],
45 "y_inner_mtype": ["pd.DataFrame", "pd.Series"],
46 "transform-returns-same-time-index": False,
47 "fit_is_empty": False,
48 "univariate-only": False,
49 "capability:inverse_transform": False,
50 "remember_data": True, # remember all data seen as _X
51 }
52
53 def __init__(self, index_treatment="keep"):
54 self.index_treatment = index_treatment
55 super().__init__()
56
57 def _transform(self, X, y=None):
58 """Transform X and return a transformed version.
59
60 private _transform containing the core logic, called from transform
61
62 Parameters
63 ----------
64 X : pd.DataFrame or pd.Series
65 Data to be transformed
66 y : pd.DataFrame or pd.Series
67 Additional data, e.g., labels for transformation
68
69 Returns
70 -------
71 Xt : pd.DataFrame or pd.Series, same type as X
72 transformed version of X
73 """
74 if y is None:
75 return X
76
77 X = self._X
78
79 index_treatment = self.index_treatment
80 ind_X_and_y = X.index.intersection(y.index)
81
82 if index_treatment == "remove":
83 Xt = X.loc[ind_X_and_y]
84 elif index_treatment == "keep":
85 Xt = X.loc[ind_X_and_y]
86 y_idx_frame = type(X)(index=y.index, dtype="float64")
87 Xt = Xt.combine_first(y_idx_frame)
88 else:
89 raise ValueError(
90 f'index_treatment must be one of "remove", "keep", but found'
91 f' "{index_treatment}"'
92 )
93 return Xt
94
95 @classmethod
96 def get_test_params(cls, parameter_set="default"):
97 """Return testing parameter settings for the estimator.
98
99 Parameters
100 ----------
101 parameter_set : str, default="default"
102 Name of the set of test parameters to return, for use in tests. If no
103 special parameters are defined for a value, will return `"default"` set.
104 There are currently no reserved values for transformers.
105
106 Returns
107 -------
108 params : dict or list of dict, default = {}
109 Parameters to create testing instances of the class
110 Each dict are parameters to construct an "interesting" test instance, i.e.,
111 `MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.
112 `create_test_instance` uses the first (or only) dictionary in `params`
113 """
114 params1 = {"index_treatment": "remove"}
115 params2 = {"index_treatment": "keep"}
116
117 return [params1, params2]
118
119
120 class ColumnSelect(BaseTransformer):
121 r"""Column selection transformer.
122
123 In transform, subsets `X` to `columns` provided as hyper-parameters.
124
125 Sequence of columns in `Xt=transform(X)` is as in `columns` hyper-parameter.
126 Caveat: this means that `transform` may change sequence of columns,
127 even if no columns are removed from `X` in `transform(X)`.
128
129 Parameters
130 ----------
131 columns : pandas compatible index or index coercible, optional, default = None
132 columns to which X in transform is to be subset
133 integer_treatment : str, optional, one of "col" (default) and "coerce"
134 determines how integer index columns are treated
135 "col" = subsets by column iloc index, even if columns is not in X.columns
136 "coerce" = coerces to integer pandas.Index and attempts to subset
137 index_treatment : str, optional, one of "remove" (default) or "keep"
138 determines which column are kept in `Xt = transform(X, y)`
139 "remove" = only indices that appear in both X and columns are present in Xt.
140 "keep" = all indices in columns appear in Xt. If not present in X, NA is filled.
141
142 Examples
143 --------
144 >>> from sktime.transformations.series.subset import ColumnSelect
145 >>> from sktime.datasets import load_longley
146 >>> X = load_longley()[1]
147 >>> transformer = ColumnSelect(columns=["GNPDEFL", "POP", "FOO"])
148 >>> X_subset = transformer.fit_transform(X=X)
149 """
150
151 _tags = {
152 "scitype:transform-input": "Series",
153 # what is the scitype of X: Series, or Panel
154 "scitype:transform-output": "Series",
155 # what scitype is returned: Primitives, Series, Panel
156 "scitype:instancewise": True, # is this an instance-wise transform?
157 "X_inner_mtype": ["pd.DataFrame", "pd-multiindex", "pd_multiindex_hier"],
158 "y_inner_mtype": "None",
159 "transform-returns-same-time-index": True,
160 "fit_is_empty": True,
161 "univariate-only": False,
162 "capability:inverse_transform": False,
163 }
164
165 def __init__(self, columns=None, integer_treatment="col", index_treatment="remove"):
166 self.columns = columns
167 self.integer_treatment = integer_treatment
168 self.index_treatment = index_treatment
169 super().__init__()
170
171 def _transform(self, X, y=None):
172 """Transform X and return a transformed version.
173
174 private _transform containing the core logic, called from transform
175
176 Parameters
177 ----------
178 X : pd.DataFrame
179 Data to be transformed
180 y : Ignored argument for interface compatibility
181
182 Returns
183 -------
184 Xt : pd.DataFrame
185 transformed version of X
186 """
187 columns = self.columns
188 integer_treatment = self.integer_treatment
189 index_treatment = self.index_treatment
190
191 if columns is None:
192 return X
193 if pd.api.types.is_scalar(columns):
194 columns = [columns]
195
196 columns = pd.Index(columns)
197
198 if integer_treatment == "col" and pd.api.types.is_integer_dtype(columns):
199 columns = [x for x in columns if x < len(X.columns)]
200 col_idx = X.columns[columns]
201 return X[col_idx]
202
203 in_cols = columns.isin(X.columns)
204 col_X_and_cols = columns[in_cols]
205
206 if index_treatment == "remove":
207 Xt = X[col_X_and_cols]
208 elif index_treatment == "keep":
209 Xt = X.reindex(columns=columns)
210 else:
211 raise ValueError(
212 f'index_treatment must be one of "remove", "keep", but found'
213 f' "{index_treatment}"'
214 )
215 return Xt
216
217 @classmethod
218 def get_test_params(cls, parameter_set="default"):
219 """Return testing parameter settings for the estimator.
220
221 Parameters
222 ----------
223 parameter_set : str, default="default"
224 Name of the set of test parameters to return, for use in tests. If no
225 special parameters are defined for a value, will return `"default"` set.
226 There are currently no reserved values for transformers.
227
228 Returns
229 -------
230 params : dict or list of dict, default = {}
231 Parameters to create testing instances of the class
232 Each dict are parameters to construct an "interesting" test instance, i.e.,
233 `MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.
234 `create_test_instance` uses the first (or only) dictionary in `params`
235 """
236 params1 = {"columns": None}
237 params2 = {"columns": [0, 2, 3]}
238 params3 = {"columns": ["a", "foo", "bar"], "index_treatment": "keep"}
239 params4 = {"columns": "a", "index_treatment": "keep"}
240
241 return [params1, params2, params3, params4]
242
[end of sktime/transformations/series/subset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sktime/transformations/series/subset.py b/sktime/transformations/series/subset.py
--- a/sktime/transformations/series/subset.py
+++ b/sktime/transformations/series/subset.py
@@ -160,6 +160,7 @@
"fit_is_empty": True,
"univariate-only": False,
"capability:inverse_transform": False,
+ "skip-inverse-transform": True,
}
def __init__(self, columns=None, integer_treatment="col", index_treatment="remove"):
|
{"golden_diff": "diff --git a/sktime/transformations/series/subset.py b/sktime/transformations/series/subset.py\n--- a/sktime/transformations/series/subset.py\n+++ b/sktime/transformations/series/subset.py\n@@ -160,6 +160,7 @@\n \"fit_is_empty\": True,\n \"univariate-only\": False,\n \"capability:inverse_transform\": False,\n+ \"skip-inverse-transform\": True,\n }\n \n def __init__(self, columns=None, integer_treatment=\"col\", index_treatment=\"remove\"):\n", "issue": "[BUG] ColumnSelect missing tag`\"skip-inverse-transform\"`\n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\nThe ColumnSelect is missing the tag `\"skip-inverse-transform\"`. Thus, a TransformedTargetForecaster subsetting the `y` input will fail when calling predict.\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n-->\r\n\r\n\r\n\r\n```python\r\nfrom sktime.forecasting.naive import NaiveForecaster\r\nfrom sktime.transformations.series.subset import ColumnSelect\r\nfrom sktime.datasets import load_longley\r\ny = load_longley()[1][[\"GNP\", \"UNEMP\"]]\r\nfc = ColumnSelect([\"GNP\"]) * NaiveForecaster()\r\nfc.fit(y)\r\nfc.predict(fh=[1])\r\n```\r\n\r\n**Expected behavior**\r\n<!--\r\nA clear and concise description of what you expected to happen.\r\n-->\r\nI would expect that the forecast is performed only on the selected time series without raising an error.\r\n\r\n**Additional context**\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\nThis would be fixed by just adding the tag `skip-inverse-transform` to ColumnSelect.\r\n\n", "before_files": [{"content": "\"\"\"Transformers for index and column subsetting.\"\"\"\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file).\n\n__author__ = [\"fkiraly\"]\n\nimport pandas as pd\n\nfrom sktime.transformations.base import BaseTransformer\n\n\nclass IndexSubset(BaseTransformer):\n r\"\"\"Index subsetting transformer.\n\n In transform, subsets `X` to the indices in `y.index`.\n If `y` is None, returns `X` without subsetting.\n numpy-based `X` are interpreted as having a RangeIndex starting at n,\n where n is the number of numpy rows seen so far through `fit` and `update`.\n Non-pandas types are interpreted as having index as after conversion to pandas,\n via `datatypes.convert_to`, to the `\"pd.DataFrame\"` sktime type.\n\n Parameters\n ----------\n index_treatment : str, optional, one of \"keep\" (default) or \"remove\"\n determines which indices are kept in `Xt = transform(X, y)`\n \"keep\" = all indices in y also appear in Xt. 
If not present in X, NA is filled.\n \"remove\" = only indices that appear in both X and y are present in Xt.\n\n Examples\n --------\n >>> from sktime.transformations.series.subset import IndexSubset\n >>> from sktime.datasets import load_airline\n >>> X = load_airline()[0:32]\n >>> y = load_airline()[24:42]\n >>> transformer = IndexSubset()\n >>> X_subset = transformer.fit_transform(X=X, y=y)\n \"\"\"\n\n _tags = {\n \"scitype:transform-input\": \"Series\",\n # what is the scitype of X: Series, or Panel\n \"scitype:transform-output\": \"Series\",\n # what scitype is returned: Primitives, Series, Panel\n \"scitype:instancewise\": True, # is this an instance-wise transform?\n \"X_inner_mtype\": [\"pd.DataFrame\", \"pd.Series\"],\n \"y_inner_mtype\": [\"pd.DataFrame\", \"pd.Series\"],\n \"transform-returns-same-time-index\": False,\n \"fit_is_empty\": False,\n \"univariate-only\": False,\n \"capability:inverse_transform\": False,\n \"remember_data\": True, # remember all data seen as _X\n }\n\n def __init__(self, index_treatment=\"keep\"):\n self.index_treatment = index_treatment\n super().__init__()\n\n def _transform(self, X, y=None):\n \"\"\"Transform X and return a transformed version.\n\n private _transform containing the core logic, called from transform\n\n Parameters\n ----------\n X : pd.DataFrame or pd.Series\n Data to be transformed\n y : pd.DataFrame or pd.Series\n Additional data, e.g., labels for transformation\n\n Returns\n -------\n Xt : pd.DataFrame or pd.Series, same type as X\n transformed version of X\n \"\"\"\n if y is None:\n return X\n\n X = self._X\n\n index_treatment = self.index_treatment\n ind_X_and_y = X.index.intersection(y.index)\n\n if index_treatment == \"remove\":\n Xt = X.loc[ind_X_and_y]\n elif index_treatment == \"keep\":\n Xt = X.loc[ind_X_and_y]\n y_idx_frame = type(X)(index=y.index, dtype=\"float64\")\n Xt = Xt.combine_first(y_idx_frame)\n else:\n raise ValueError(\n f'index_treatment must be one of \"remove\", \"keep\", but found'\n f' \"{index_treatment}\"'\n )\n return Xt\n\n @classmethod\n def get_test_params(cls, parameter_set=\"default\"):\n \"\"\"Return testing parameter settings for the estimator.\n\n Parameters\n ----------\n parameter_set : str, default=\"default\"\n Name of the set of test parameters to return, for use in tests. 
If no\n special parameters are defined for a value, will return `\"default\"` set.\n There are currently no reserved values for transformers.\n\n Returns\n -------\n params : dict or list of dict, default = {}\n Parameters to create testing instances of the class\n Each dict are parameters to construct an \"interesting\" test instance, i.e.,\n `MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.\n `create_test_instance` uses the first (or only) dictionary in `params`\n \"\"\"\n params1 = {\"index_treatment\": \"remove\"}\n params2 = {\"index_treatment\": \"keep\"}\n\n return [params1, params2]\n\n\nclass ColumnSelect(BaseTransformer):\n r\"\"\"Column selection transformer.\n\n In transform, subsets `X` to `columns` provided as hyper-parameters.\n\n Sequence of columns in `Xt=transform(X)` is as in `columns` hyper-parameter.\n Caveat: this means that `transform` may change sequence of columns,\n even if no columns are removed from `X` in `transform(X)`.\n\n Parameters\n ----------\n columns : pandas compatible index or index coercible, optional, default = None\n columns to which X in transform is to be subset\n integer_treatment : str, optional, one of \"col\" (default) and \"coerce\"\n determines how integer index columns are treated\n \"col\" = subsets by column iloc index, even if columns is not in X.columns\n \"coerce\" = coerces to integer pandas.Index and attempts to subset\n index_treatment : str, optional, one of \"remove\" (default) or \"keep\"\n determines which column are kept in `Xt = transform(X, y)`\n \"remove\" = only indices that appear in both X and columns are present in Xt.\n \"keep\" = all indices in columns appear in Xt. If not present in X, NA is filled.\n\n Examples\n --------\n >>> from sktime.transformations.series.subset import ColumnSelect\n >>> from sktime.datasets import load_longley\n >>> X = load_longley()[1]\n >>> transformer = ColumnSelect(columns=[\"GNPDEFL\", \"POP\", \"FOO\"])\n >>> X_subset = transformer.fit_transform(X=X)\n \"\"\"\n\n _tags = {\n \"scitype:transform-input\": \"Series\",\n # what is the scitype of X: Series, or Panel\n \"scitype:transform-output\": \"Series\",\n # what scitype is returned: Primitives, Series, Panel\n \"scitype:instancewise\": True, # is this an instance-wise transform?\n \"X_inner_mtype\": [\"pd.DataFrame\", \"pd-multiindex\", \"pd_multiindex_hier\"],\n \"y_inner_mtype\": \"None\",\n \"transform-returns-same-time-index\": True,\n \"fit_is_empty\": True,\n \"univariate-only\": False,\n \"capability:inverse_transform\": False,\n }\n\n def __init__(self, columns=None, integer_treatment=\"col\", index_treatment=\"remove\"):\n self.columns = columns\n self.integer_treatment = integer_treatment\n self.index_treatment = index_treatment\n super().__init__()\n\n def _transform(self, X, y=None):\n \"\"\"Transform X and return a transformed version.\n\n private _transform containing the core logic, called from transform\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to be transformed\n y : Ignored argument for interface compatibility\n\n Returns\n -------\n Xt : pd.DataFrame\n transformed version of X\n \"\"\"\n columns = self.columns\n integer_treatment = self.integer_treatment\n index_treatment = self.index_treatment\n\n if columns is None:\n return X\n if pd.api.types.is_scalar(columns):\n columns = [columns]\n\n columns = pd.Index(columns)\n\n if integer_treatment == \"col\" and pd.api.types.is_integer_dtype(columns):\n columns = [x for x in columns if x < len(X.columns)]\n col_idx = 
X.columns[columns]\n return X[col_idx]\n\n in_cols = columns.isin(X.columns)\n col_X_and_cols = columns[in_cols]\n\n if index_treatment == \"remove\":\n Xt = X[col_X_and_cols]\n elif index_treatment == \"keep\":\n Xt = X.reindex(columns=columns)\n else:\n raise ValueError(\n f'index_treatment must be one of \"remove\", \"keep\", but found'\n f' \"{index_treatment}\"'\n )\n return Xt\n\n @classmethod\n def get_test_params(cls, parameter_set=\"default\"):\n \"\"\"Return testing parameter settings for the estimator.\n\n Parameters\n ----------\n parameter_set : str, default=\"default\"\n Name of the set of test parameters to return, for use in tests. If no\n special parameters are defined for a value, will return `\"default\"` set.\n There are currently no reserved values for transformers.\n\n Returns\n -------\n params : dict or list of dict, default = {}\n Parameters to create testing instances of the class\n Each dict are parameters to construct an \"interesting\" test instance, i.e.,\n `MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.\n `create_test_instance` uses the first (or only) dictionary in `params`\n \"\"\"\n params1 = {\"columns\": None}\n params2 = {\"columns\": [0, 2, 3]}\n params3 = {\"columns\": [\"a\", \"foo\", \"bar\"], \"index_treatment\": \"keep\"}\n params4 = {\"columns\": \"a\", \"index_treatment\": \"keep\"}\n\n return [params1, params2, params3, params4]\n", "path": "sktime/transformations/series/subset.py"}]}
| 3,560 | 129 |
gh_patches_debug_30815
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-238
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement `map` for `LocalExecutor`
For some reason we avoided doing this, but it's actually entirely possible to do! Would be great for local debugging.
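
A rough sketch of what a serial `map` could look like for `LocalExecutor` — `dict_to_list` is assumed to be a helper in `prefect.utilities.executors` that expands a dict of mapped upstream states into one state dict per element:

```python
from prefect.engine.executors.base import Executor
from prefect.utilities.executors import dict_to_list


class LocalExecutor(Executor):
    def map(self, fn, *args, upstream_states=None, **kwargs):
        # Run one submit per mapped element, immediately and in order.
        results = []
        for elem in dict_to_list(upstream_states):
            results.append(self.submit(fn, *args, upstream_states=elem, **kwargs))
        return results

    def submit(self, fn, *args, **kwargs):
        # Unchanged behaviour: execute synchronously and return the result.
        return fn(*args, **kwargs)
```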
</issue>
<code>
[start of src/prefect/engine/executors/local.py]
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
2
3 from prefect.engine.executors.base import Executor
4
5
6 class LocalExecutor(Executor):
7 """
8 An executor that runs all functions synchronously and immediately in
9 the local thread. To be used mainly for debugging purposes.
10 """
11
12 def submit(self, fn, *args, **kwargs):
13 """
14 Submit a function to the executor for execution. Returns the result of the computation.
15
16 Args:
17 - fn (Callable): function which is being submitted for execution
18 - *args (Any): arguments to be passed to `fn`
19 - **kwargs (Any): keyword arguments to be passed to `fn`
20
21 Returns:
22 - Any: the result of `fn(*args, **kwargs)`
23 """
24 return fn(*args, **kwargs)
25
26 def wait(self, futures, timeout=None):
27 """
28 Returns:
29 - Any: whatever `futures` were provided
30 """
31 return futures
32
[end of src/prefect/engine/executors/local.py]
[start of src/prefect/engine/executors/__init__.py]
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
2
3 """
4 Prefect Executors implement the logic for how Tasks are run. The standard interface
5 for an Executor consists of the following methods:
6
7 - `submit(fn, *args, **kwargs)`: submit `fn(*args, **kwargs)` for execution;
8 note that this function is (in general) non-blocking, meaning that `executor.submit(...)`
9 will _immediately_ return a future-like object regardless of whether `fn(*args, **kwargs)`
10 has completed running
11 - `submit_with_context(fn, *args, context, **kwargs)`: submit `fn(*args,
12 **kwargs)` for execution with the provided `prefect.context`
13 - `wait(object)`: resolves any objects returned by `executor.submit` to
14 their values; this function _will_ block until execution of `object` is complete
15 - `map(fn, *args, upstream_states, **kwargs)`: submit function to be mapped
16 over based on the edge information contained in `upstream_states`. Any "mapped" Edge
17 will be converted into multiple function submissions, one for each value of the upstream mapped tasks.
18
19 Currently, the available executor options are:
20
21 - `LocalExecutor`: the no frills, straightforward executor - great for simple
22 debugging; tasks are executed immediately upon being called by `executor.submit()`.
23 Note that the `map` feature is currently _not_ supported with this executor.
24 - `SynchronousExecutor`: an executor that runs on `dask` primitives with the
25 synchronous dask scheduler; currently the default executor
26 - `DaskExecutor`: the most feature-rich of the executors, this executor runs
27 on `dask.distributed` and has support for multiprocessing, multithreading, and distributed execution.
28
29 Which executor you choose depends on whether you intend to use things like parallelism
30 of task execution.
31 """
32 import sys
33
34 from warnings import warn as _warn
35 from importlib import import_module as _import_module
36
37 import prefect as _prefect
38 from prefect.engine.executors.base import Executor
39 from prefect.engine.executors.local import LocalExecutor
40 from prefect.engine.executors.sync import SynchronousExecutor
41
42 if sys.version_info >= (3, 5):
43 from prefect.engine.executors.dask import DaskExecutor
44
45 try:
46 cfg_exec = _prefect.config.engine.executor
47 *module, cls_name = cfg_exec.split(".")
48 module = _import_module(".".join(module))
49 DEFAULT_EXECUTOR = getattr(module, cls_name)()
50 except:
51 _warn(
52 "Could not import {}, using prefect.engine.executors.LocalExecutor instead.".format(
53 _prefect.config.engine.executor
54 )
55 )
56 DEFAULT_EXECUTOR = LocalExecutor()
57
[end of src/prefect/engine/executors/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/prefect/engine/executors/__init__.py b/src/prefect/engine/executors/__init__.py
--- a/src/prefect/engine/executors/__init__.py
+++ b/src/prefect/engine/executors/__init__.py
@@ -20,7 +20,6 @@
- `LocalExecutor`: the no frills, straightforward executor - great for simple
debugging; tasks are executed immediately upon being called by `executor.submit()`.
- Note that the `map` feature is currently _not_ supported with this executor.
- `SynchronousExecutor`: an executor that runs on `dask` primitives with the
synchronous dask scheduler; currently the default executor
- `DaskExecutor`: the most feature-rich of the executors, this executor runs
diff --git a/src/prefect/engine/executors/local.py b/src/prefect/engine/executors/local.py
--- a/src/prefect/engine/executors/local.py
+++ b/src/prefect/engine/executors/local.py
@@ -1,6 +1,9 @@
# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
+from typing import Any, Callable, Iterable
+
from prefect.engine.executors.base import Executor
+from prefect.utilities.executors import dict_to_list
class LocalExecutor(Executor):
@@ -9,6 +12,17 @@
the local thread. To be used mainly for debugging purposes.
"""
+ def map(
+ self, fn: Callable, *args: Any, upstream_states=None, **kwargs: Any
+ ) -> Iterable[Any]:
+
+ states = dict_to_list(upstream_states)
+ results = []
+ for elem in states:
+ results.append(self.submit(fn, *args, upstream_states=elem, **kwargs))
+
+ return results
+
def submit(self, fn, *args, **kwargs):
"""
Submit a function to the executor for execution. Returns the result of the computation.
|
{"golden_diff": "diff --git a/src/prefect/engine/executors/__init__.py b/src/prefect/engine/executors/__init__.py\n--- a/src/prefect/engine/executors/__init__.py\n+++ b/src/prefect/engine/executors/__init__.py\n@@ -20,7 +20,6 @@\n \n - `LocalExecutor`: the no frills, straightforward executor - great for simple\n debugging; tasks are executed immediately upon being called by `executor.submit()`.\n- Note that the `map` feature is currently _not_ supported with this executor.\n - `SynchronousExecutor`: an executor that runs on `dask` primitives with the\n synchronous dask scheduler; currently the default executor\n - `DaskExecutor`: the most feature-rich of the executors, this executor runs\ndiff --git a/src/prefect/engine/executors/local.py b/src/prefect/engine/executors/local.py\n--- a/src/prefect/engine/executors/local.py\n+++ b/src/prefect/engine/executors/local.py\n@@ -1,6 +1,9 @@\n # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\n \n+from typing import Any, Callable, Iterable\n+\n from prefect.engine.executors.base import Executor\n+from prefect.utilities.executors import dict_to_list\n \n \n class LocalExecutor(Executor):\n@@ -9,6 +12,17 @@\n the local thread. To be used mainly for debugging purposes.\n \"\"\"\n \n+ def map(\n+ self, fn: Callable, *args: Any, upstream_states=None, **kwargs: Any\n+ ) -> Iterable[Any]:\n+\n+ states = dict_to_list(upstream_states)\n+ results = []\n+ for elem in states:\n+ results.append(self.submit(fn, *args, upstream_states=elem, **kwargs))\n+\n+ return results\n+\n def submit(self, fn, *args, **kwargs):\n \"\"\"\n Submit a function to the executor for execution. Returns the result of the computation.\n", "issue": "Implement `map` for `LocalExecutor`\nFor some reason we avoided doing this, but it's actually entirely possible to do! Would be great for local debugging.\n", "before_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\n\nfrom prefect.engine.executors.base import Executor\n\n\nclass LocalExecutor(Executor):\n \"\"\"\n An executor that runs all functions synchronously and immediately in\n the local thread. To be used mainly for debugging purposes.\n \"\"\"\n\n def submit(self, fn, *args, **kwargs):\n \"\"\"\n Submit a function to the executor for execution. Returns the result of the computation.\n\n Args:\n - fn (Callable): function which is being submitted for execution\n - *args (Any): arguments to be passed to `fn`\n - **kwargs (Any): keyword arguments to be passed to `fn`\n\n Returns:\n - Any: the result of `fn(*args, **kwargs)`\n \"\"\"\n return fn(*args, **kwargs)\n\n def wait(self, futures, timeout=None):\n \"\"\"\n Returns:\n - Any: whatever `futures` were provided\n \"\"\"\n return futures\n", "path": "src/prefect/engine/executors/local.py"}, {"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\n\n\"\"\"\nPrefect Executors implement the logic for how Tasks are run. 
The standard interface\nfor an Executor consists of the following methods:\n\n- `submit(fn, *args, **kwargs)`: submit `fn(*args, **kwargs)` for execution;\n note that this function is (in general) non-blocking, meaning that `executor.submit(...)`\n will _immediately_ return a future-like object regardless of whether `fn(*args, **kwargs)`\n has completed running\n- `submit_with_context(fn, *args, context, **kwargs)`: submit `fn(*args,\n **kwargs)` for execution with the provided `prefect.context`\n- `wait(object)`: resolves any objects returned by `executor.submit` to\n their values; this function _will_ block until execution of `object` is complete\n- `map(fn, *args, upstream_states, **kwargs)`: submit function to be mapped\n over based on the edge information contained in `upstream_states`. Any \"mapped\" Edge\n will be converted into multiple function submissions, one for each value of the upstream mapped tasks.\n\nCurrently, the available executor options are:\n\n- `LocalExecutor`: the no frills, straightforward executor - great for simple\n debugging; tasks are executed immediately upon being called by `executor.submit()`.\n Note that the `map` feature is currently _not_ supported with this executor.\n- `SynchronousExecutor`: an executor that runs on `dask` primitives with the\n synchronous dask scheduler; currently the default executor\n- `DaskExecutor`: the most feature-rich of the executors, this executor runs\n on `dask.distributed` and has support for multiprocessing, multithreading, and distributed execution.\n\nWhich executor you choose depends on whether you intend to use things like parallelism\nof task execution.\n\"\"\"\nimport sys\n\nfrom warnings import warn as _warn\nfrom importlib import import_module as _import_module\n\nimport prefect as _prefect\nfrom prefect.engine.executors.base import Executor\nfrom prefect.engine.executors.local import LocalExecutor\nfrom prefect.engine.executors.sync import SynchronousExecutor\n\nif sys.version_info >= (3, 5):\n from prefect.engine.executors.dask import DaskExecutor\n\ntry:\n cfg_exec = _prefect.config.engine.executor\n *module, cls_name = cfg_exec.split(\".\")\n module = _import_module(\".\".join(module))\n DEFAULT_EXECUTOR = getattr(module, cls_name)()\nexcept:\n _warn(\n \"Could not import {}, using prefect.engine.executors.LocalExecutor instead.\".format(\n _prefect.config.engine.executor\n )\n )\n DEFAULT_EXECUTOR = LocalExecutor()\n", "path": "src/prefect/engine/executors/__init__.py"}]}
| 1,567 | 432 |
gh_patches_debug_24122
|
rasdani/github-patches
|
git_diff
|
microsoft__DeepSpeed-2567
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[REQUEST] Remove deprecated PyTorch code
**Is your feature request related to a problem? Please describe.**
DeepSpeed uses code that's deprecated. This causes problems when testing libraries built on DeepSpeed.
**Describe the solution you'd like**
```
/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py:171: in get_global_rank
return torch.distributed.distributed_c10d._get_global_rank(group, group_rank)
```
should instead use `get_global_rank` as recommended by the warning
```
E UserWarning: torch.distributed.distributed_c10d._get_global_rank is deprecated please use torch.distributed.distributed_c10d.get_global_rank instead
```
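
One way to silence the warning without breaking older torch releases is to prefer the public helper when it exists and fall back to the private one otherwise; a sketch (the wrapper name is illustrative):

```python
import torch


def get_global_rank_compat(group, group_rank):
    """Use the public API on newer torch, fall back to the private one."""
    c10d = torch.distributed.distributed_c10d
    if hasattr(c10d, "get_global_rank"):
        return c10d.get_global_rank(group, group_rank)
    return c10d._get_global_rank(group, group_rank)
```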
</issue>
<code>
[start of deepspeed/comm/__init__.py]
1 import torch
2 from .utils import *
3 from deepspeed import utils
4
5 supported_torch_version = False
6
7 # See more details at: https://github.com/pytorch/pytorch/pull/48767
8 # The PG API in torch versions lesser than 1.8 are different so it is
9 # non-trivial to support both in the same API. We will just use the
10 # DS comm. backend in deepspeed/comm/comm.py if torch version if 1.8+.
11
12 if older_torch():
13 # Add custom deepspeed torch comm functions here since we can't import deepspeed.comm
14 # NOTE: We can't call torch.distributed directly here. Current hack is to import functions before calling them.
15 supported_torch_version = False
16 from torch.distributed import *
17
18 def get_world_group():
19 return group.WORLD
20
21 def get_global_rank(group, group_rank):
22 from torch.distributed.distributed_c10d import _get_global_rank
23 return _get_global_rank(group, group_rank)
24
25 def allgather_fn(output_tensor, input_tensor, group=None, async_op=False):
26 from torch.distributed import all_gather, get_world_size
27 from torch import chunk
28 output_tensors = list(chunk(output_tensor, get_world_size(group)))
29 return all_gather(output_tensors, input_tensor, group=group, async_op=async_op)
30
31 def reduce_scatter_fn(output_tensor, input_tensor, group=None, async_op=False):
32 from torch.distributed import reduce_scatter, get_world_size
33 from torch import chunk
34 input_tensor_lst = list(chunk(input_tensor, get_world_size(group)))
35 return reduce_scatter(output_tensor, input_tensor_lst, group=group)
36
37 def configure(deepspeed_config=None,
38 enabled=None,
39 prof_all=None,
40 prof_ops=None,
41 verbose=None):
42 utils.logger.warn(
43 "Communication logging is not supported in torch versions older than 1.8")
44
45 else:
46 supported_torch_version = True
47 from .comm import *
48
[end of deepspeed/comm/__init__.py]
[start of deepspeed/comm/torch.py]
1 '''
2 Copyright 2021 The Microsoft DeepSpeed Team
3 '''
4 from deepspeed import utils
5
6 from .utils import *
7 from .backend import *
8 from .comm import *
9
10
11 class TorchBackend(Backend):
12 """
13 A light-weight wrapper class for torch.distributed API.
14 Only a subset of functions are wrapped. Once the init_process_group
15 is initialized, standard torch.distributed.* can be used directly
16 so no need to wrap all the functions. We can keep adding wrappers as
17 needed.
18 """
19 def __init__(self, backend, timeout, init_method, name='torch'):
20 super(TorchBackend, self).__init__()
21 self.torch_version_before_18 = older_torch()
22 self.has_allgather_base = has_allgather_base()
23 self.has_reduce_scatter_base = has_reduce_scatter_base()
24 self.initialized = True
25 self.name = name
26 # Future functionality to support ds.initialize() on a single GPU
27 # The idea is to fake that dist backend is initialized even when
28 # it is not so we can run on a single GPU without doing any init_process_group
29 self.single_gpu_mode = True
30 self.init_process_group(backend, timeout, init_method)
31
32 def init_process_group(self, backend, timeout, init_method):
33 if not torch.distributed.is_initialized():
34 torch.distributed.init_process_group(backend,
35 timeout=timeout,
36 init_method=init_method)
37 self.using_mpi = torch.distributed.get_backend() == 'mpi'
38
39 def all_reduce(self,
40 tensor,
41 op=torch.distributed.ReduceOp.SUM,
42 group=None,
43 async_op=False):
44 op = self._reduce_op(op)
45 return torch.distributed.all_reduce(tensor=tensor,
46 op=op,
47 group=group,
48 async_op=async_op)
49
50 def reduce(self, tensor, dst, op=ReduceOp.SUM, group=None, async_op=False):
51 return torch.distributed.reduce(tensor=tensor,
52 dst=dst,
53 op=self._reduce_op(op),
54 group=group,
55 async_op=async_op)
56
57 def reduce_scatter(self,
58 output,
59 input_list,
60 op=ReduceOp.SUM,
61 group=None,
62 async_op=False):
63 return torch.distributed.reduce_scatter(output=output,
64 input_list=input_list,
65 op=self._reduce_op(op),
66 group=group,
67 async_op=async_op)
68
69 def broadcast(self, tensor, src, group=None, async_op=False):
70 return torch.distributed.broadcast(tensor=tensor,
71 src=src,
72 group=group,
73 async_op=async_op)
74
75 def all_gather(self, tensor_list, tensor, group=None, async_op=False):
76 return torch.distributed.all_gather(tensor_list=tensor_list,
77 tensor=tensor,
78 group=group,
79 async_op=async_op)
80
81 def all_gather_base(self, output_tensor, input_tensor, group=None, async_op=False):
82 if self.has_allgather_base:
83 return torch.distributed.distributed_c10d._all_gather_base(
84 output_tensor=output_tensor,
85 input_tensor=input_tensor,
86 group=group,
87 async_op=async_op)
88 else:
89 utils.logger.warning(
90 "unable to find torch.distributed._all_gather_base. will fall back to "
91 "torch.distributed.reduce_scatter which will result in suboptimal performance. "
92 "please consider upgrading your pytorch installation.")
93 pass
94
95 def reduce_scatter_base(self,
96 output_tensor,
97 input_tensor,
98 group=None,
99 async_op=False):
100 if self.has_reduce_scatter_base:
101 return torch.distributed._reduce_scatter_base(output_tensor,
102 input_tensor,
103 group=group,
104 async_op=async_op)
105 else:
106 utils.logger.warning(
107 "unable to find torch.distributed._reduce_scatter_base. will fall back to "
108 "torch.distributed.reduce_scatter which will result in suboptimal performance. "
109 "please consider upgrading your pytorch installation.")
110 pass
111
112 def all_to_all_single(self,
113 output,
114 input,
115 output_split_sizes=None,
116 input_split_sizes=None,
117 group=None,
118 async_op=False):
119 return torch.distributed.all_to_all_single(output=output,
120 input=input,
121 output_split_sizes=output_split_sizes,
122 input_split_sizes=input_split_sizes,
123 group=group,
124 async_op=async_op)
125
126 def send(self, tensor, dst, group=None, tag=0):
127 return torch.distributed.send(tensor=tensor, dst=dst, group=group, tag=tag)
128
129 def recv(self, tensor, src=None, group=None, tag=0):
130 return torch.distributed.recv(tensor=tensor, src=src, group=group, tag=tag)
131
132 def isend(self, tensor, dst, group=None, tag=0):
133 return torch.distributed.isend(tensor=tensor, dst=dst, group=group, tag=tag)
134
135 def irecv(self, tensor, src=None, group=None, tag=0):
136 return torch.distributed.irecv(tensor=tensor, src=src, group=group, tag=tag)
137
138 def gather(self, tensor, gather_list=None, dst=0, group=None, async_op=False):
139 return torch.distributed.gather(tensor=tensor,
140 gather_list=gather_list,
141 dst=dst,
142 group=group,
143 async_op=async_op)
144
145 def scatter(self, tensor, scatter_list=None, src=0, group=None, async_op=False):
146 return torch.distributed.scatter(tensor=tensor,
147 scatter_list=scatter_list,
148 src=src,
149 group=group,
150 async_op=async_op)
151
152 def barrier(self):
153 return torch.distributed.barrier()
154
155 def get_rank(self, group=None):
156 return torch.distributed.get_rank(group=group)
157
158 def get_world_size(self, group=None):
159 return torch.distributed.get_world_size(group=group)
160
161 def is_initialized(self):
162 return torch.distributed.is_initialized()
163
164 def get_backend(self, group=None):
165 return torch.distributed.get_backend(group=group)
166
167 def new_group(self, ranks):
168 return torch.distributed.new_group(ranks)
169
170 def get_global_rank(self, group, group_rank):
171 return torch.distributed.distributed_c10d._get_global_rank(group, group_rank)
172
173 def get_world_group(self):
174 return torch.distributed.group.WORLD
175
176 def destroy_process_group(self, group=None):
177 return torch.distributed.destroy_process_group(group=group)
178
179 def _reduce_op(self, op):
180 '''
181 Helper function. If the op provided is not a torch.dist.ReduceOp, convert it and return
182 '''
183 if not isinstance(op, torch.distributed.ReduceOp):
184 if op == ReduceOp.SUM:
185 op = torch.distributed.ReduceOp.SUM
186 elif op == ReduceOp.PRODUCT:
187 op = torch.distributed.ReduceOp.PRODUCT
188 elif op == ReduceOp.AVG:
189 op = torch.distributed.ReduceOp.AVG
190 elif op == ReduceOp.MIN:
191 op = torch.distributed.ReduceOp.MIN
192 elif op == ReduceOp.MAX:
193 op = torch.distributed.ReduceOp.MAX
194 elif op == ReduceOp.BAND:
195 op = torch.distributed.ReduceOp.BAND
196 elif op == ReduceOp.BOR:
197 op = torch.distributed.ReduceOp.BOR
198 elif op == ReduceOp.BXOR:
199 op = torch.distributed.ReduceOp.BXOR
200 return op
201
202
203 # This will become a light-weight wrapper around torch.distributed functions
204 # TODO: create some example to show how this wrapper can help profile communication
205 # TODO: make sure there is no performance regression with this approach
206 # TODO: explore monkey-patching if this does not work
207
[end of deepspeed/comm/torch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/deepspeed/comm/__init__.py b/deepspeed/comm/__init__.py
--- a/deepspeed/comm/__init__.py
+++ b/deepspeed/comm/__init__.py
@@ -19,7 +19,10 @@
return group.WORLD
def get_global_rank(group, group_rank):
- from torch.distributed.distributed_c10d import _get_global_rank
+ if hasattr(torch.distributed.distributed_c10d, "get_global_rank"):
+ from torch.distributed.distributed_c10d import get_global_rank as _get_global_rank
+ else:
+ from torch.distributed.distributed_c10d import _get_global_rank
return _get_global_rank(group, group_rank)
def allgather_fn(output_tensor, input_tensor, group=None, async_op=False):
diff --git a/deepspeed/comm/torch.py b/deepspeed/comm/torch.py
--- a/deepspeed/comm/torch.py
+++ b/deepspeed/comm/torch.py
@@ -168,7 +168,11 @@
return torch.distributed.new_group(ranks)
def get_global_rank(self, group, group_rank):
- return torch.distributed.distributed_c10d._get_global_rank(group, group_rank)
+ if hasattr(torch.distributed.distributed_c10d, "get_global_rank"):
+ from torch.distributed.distributed_c10d import get_global_rank as _get_global_rank
+ else:
+ from torch.distributed.distributed_c10d import _get_global_rank
+ return _get_global_rank(group, group_rank)
def get_world_group(self):
return torch.distributed.group.WORLD
|
{"golden_diff": "diff --git a/deepspeed/comm/__init__.py b/deepspeed/comm/__init__.py\n--- a/deepspeed/comm/__init__.py\n+++ b/deepspeed/comm/__init__.py\n@@ -19,7 +19,10 @@\n return group.WORLD\n \n def get_global_rank(group, group_rank):\n- from torch.distributed.distributed_c10d import _get_global_rank\n+ if hasattr(torch.distributed.distributed_c10d, \"get_global_rank\"):\n+ from torch.distributed.distributed_c10d import get_global_rank as _get_global_rank\n+ else:\n+ from torch.distributed.distributed_c10d import _get_global_rank\n return _get_global_rank(group, group_rank)\n \n def allgather_fn(output_tensor, input_tensor, group=None, async_op=False):\ndiff --git a/deepspeed/comm/torch.py b/deepspeed/comm/torch.py\n--- a/deepspeed/comm/torch.py\n+++ b/deepspeed/comm/torch.py\n@@ -168,7 +168,11 @@\n return torch.distributed.new_group(ranks)\n \n def get_global_rank(self, group, group_rank):\n- return torch.distributed.distributed_c10d._get_global_rank(group, group_rank)\n+ if hasattr(torch.distributed.distributed_c10d, \"get_global_rank\"):\n+ from torch.distributed.distributed_c10d import get_global_rank as _get_global_rank\n+ else:\n+ from torch.distributed.distributed_c10d import _get_global_rank\n+ return _get_global_rank(group, group_rank)\n \n def get_world_group(self):\n return torch.distributed.group.WORLD\n", "issue": "[REQUEST] Remove deprecated PyTorch code\n**Is your feature request related to a problem? Please describe.**\r\nDeepSpeed uses code that's deprecated. This causes problems for libraries built on deepspeed when testing.\r\n\r\n**Describe the solution you'd like**\r\n```\r\n/usr/local/lib/python3.10/dist-packages/deepspeed/comm/torch.py:171: in get_global_rank\r\n return torch.distributed.distributed_c10d._get_global_rank(group, group_rank)\r\n```\r\nshould instead use `get_global_rank` as recommended by the warning\r\n```\r\nE UserWarning: torch.distributed.distributed_c10d._get_global_rank is deprecated please use torch.distributed.distributed_c10d.get_global_rank instead\r\n```\r\n\n", "before_files": [{"content": "import torch\nfrom .utils import *\nfrom deepspeed import utils\n\nsupported_torch_version = False\n\n# See more details at: https://github.com/pytorch/pytorch/pull/48767\n# The PG API in torch versions lesser than 1.8 are different so it is\n# non-trivial to support both in the same API. We will just use the\n# DS comm. backend in deepspeed/comm/comm.py if torch version if 1.8+.\n\nif older_torch():\n # Add custom deepspeed torch comm functions here since we can't import deepspeed.comm\n # NOTE: We can't call torch.distributed directly here. 
Current hack is to import functions before calling them.\n supported_torch_version = False\n from torch.distributed import *\n\n def get_world_group():\n return group.WORLD\n\n def get_global_rank(group, group_rank):\n from torch.distributed.distributed_c10d import _get_global_rank\n return _get_global_rank(group, group_rank)\n\n def allgather_fn(output_tensor, input_tensor, group=None, async_op=False):\n from torch.distributed import all_gather, get_world_size\n from torch import chunk\n output_tensors = list(chunk(output_tensor, get_world_size(group)))\n return all_gather(output_tensors, input_tensor, group=group, async_op=async_op)\n\n def reduce_scatter_fn(output_tensor, input_tensor, group=None, async_op=False):\n from torch.distributed import reduce_scatter, get_world_size\n from torch import chunk\n input_tensor_lst = list(chunk(input_tensor, get_world_size(group)))\n return reduce_scatter(output_tensor, input_tensor_lst, group=group)\n\n def configure(deepspeed_config=None,\n enabled=None,\n prof_all=None,\n prof_ops=None,\n verbose=None):\n utils.logger.warn(\n \"Communication logging is not supported in torch versions older than 1.8\")\n\nelse:\n supported_torch_version = True\n from .comm import *\n", "path": "deepspeed/comm/__init__.py"}, {"content": "'''\nCopyright 2021 The Microsoft DeepSpeed Team\n'''\nfrom deepspeed import utils\n\nfrom .utils import *\nfrom .backend import *\nfrom .comm import *\n\n\nclass TorchBackend(Backend):\n \"\"\"\n A light-weight wrapper class for torch.distributed API.\n Only a subset of functions are wrapped. Once the init_process_group\n is initialized, standard torch.distributed.* can be used directly\n so no need to wrap all the functions. We can keep adding wrappers as\n needed.\n \"\"\"\n def __init__(self, backend, timeout, init_method, name='torch'):\n super(TorchBackend, self).__init__()\n self.torch_version_before_18 = older_torch()\n self.has_allgather_base = has_allgather_base()\n self.has_reduce_scatter_base = has_reduce_scatter_base()\n self.initialized = True\n self.name = name\n # Future functionality to support ds.initialize() on a single GPU\n # The idea is to fake that dist backend is initialized even when\n # it is not so we can run on a single GPU without doing any init_process_group\n self.single_gpu_mode = True\n self.init_process_group(backend, timeout, init_method)\n\n def init_process_group(self, backend, timeout, init_method):\n if not torch.distributed.is_initialized():\n torch.distributed.init_process_group(backend,\n timeout=timeout,\n init_method=init_method)\n self.using_mpi = torch.distributed.get_backend() == 'mpi'\n\n def all_reduce(self,\n tensor,\n op=torch.distributed.ReduceOp.SUM,\n group=None,\n async_op=False):\n op = self._reduce_op(op)\n return torch.distributed.all_reduce(tensor=tensor,\n op=op,\n group=group,\n async_op=async_op)\n\n def reduce(self, tensor, dst, op=ReduceOp.SUM, group=None, async_op=False):\n return torch.distributed.reduce(tensor=tensor,\n dst=dst,\n op=self._reduce_op(op),\n group=group,\n async_op=async_op)\n\n def reduce_scatter(self,\n output,\n input_list,\n op=ReduceOp.SUM,\n group=None,\n async_op=False):\n return torch.distributed.reduce_scatter(output=output,\n input_list=input_list,\n op=self._reduce_op(op),\n group=group,\n async_op=async_op)\n\n def broadcast(self, tensor, src, group=None, async_op=False):\n return torch.distributed.broadcast(tensor=tensor,\n src=src,\n group=group,\n async_op=async_op)\n\n def all_gather(self, tensor_list, tensor, group=None, 
async_op=False):\n return torch.distributed.all_gather(tensor_list=tensor_list,\n tensor=tensor,\n group=group,\n async_op=async_op)\n\n def all_gather_base(self, output_tensor, input_tensor, group=None, async_op=False):\n if self.has_allgather_base:\n return torch.distributed.distributed_c10d._all_gather_base(\n output_tensor=output_tensor,\n input_tensor=input_tensor,\n group=group,\n async_op=async_op)\n else:\n utils.logger.warning(\n \"unable to find torch.distributed._all_gather_base. will fall back to \"\n \"torch.distributed.reduce_scatter which will result in suboptimal performance. \"\n \"please consider upgrading your pytorch installation.\")\n pass\n\n def reduce_scatter_base(self,\n output_tensor,\n input_tensor,\n group=None,\n async_op=False):\n if self.has_reduce_scatter_base:\n return torch.distributed._reduce_scatter_base(output_tensor,\n input_tensor,\n group=group,\n async_op=async_op)\n else:\n utils.logger.warning(\n \"unable to find torch.distributed._reduce_scatter_base. will fall back to \"\n \"torch.distributed.reduce_scatter which will result in suboptimal performance. \"\n \"please consider upgrading your pytorch installation.\")\n pass\n\n def all_to_all_single(self,\n output,\n input,\n output_split_sizes=None,\n input_split_sizes=None,\n group=None,\n async_op=False):\n return torch.distributed.all_to_all_single(output=output,\n input=input,\n output_split_sizes=output_split_sizes,\n input_split_sizes=input_split_sizes,\n group=group,\n async_op=async_op)\n\n def send(self, tensor, dst, group=None, tag=0):\n return torch.distributed.send(tensor=tensor, dst=dst, group=group, tag=tag)\n\n def recv(self, tensor, src=None, group=None, tag=0):\n return torch.distributed.recv(tensor=tensor, src=src, group=group, tag=tag)\n\n def isend(self, tensor, dst, group=None, tag=0):\n return torch.distributed.isend(tensor=tensor, dst=dst, group=group, tag=tag)\n\n def irecv(self, tensor, src=None, group=None, tag=0):\n return torch.distributed.irecv(tensor=tensor, src=src, group=group, tag=tag)\n\n def gather(self, tensor, gather_list=None, dst=0, group=None, async_op=False):\n return torch.distributed.gather(tensor=tensor,\n gather_list=gather_list,\n dst=dst,\n group=group,\n async_op=async_op)\n\n def scatter(self, tensor, scatter_list=None, src=0, group=None, async_op=False):\n return torch.distributed.scatter(tensor=tensor,\n scatter_list=scatter_list,\n src=src,\n group=group,\n async_op=async_op)\n\n def barrier(self):\n return torch.distributed.barrier()\n\n def get_rank(self, group=None):\n return torch.distributed.get_rank(group=group)\n\n def get_world_size(self, group=None):\n return torch.distributed.get_world_size(group=group)\n\n def is_initialized(self):\n return torch.distributed.is_initialized()\n\n def get_backend(self, group=None):\n return torch.distributed.get_backend(group=group)\n\n def new_group(self, ranks):\n return torch.distributed.new_group(ranks)\n\n def get_global_rank(self, group, group_rank):\n return torch.distributed.distributed_c10d._get_global_rank(group, group_rank)\n\n def get_world_group(self):\n return torch.distributed.group.WORLD\n\n def destroy_process_group(self, group=None):\n return torch.distributed.destroy_process_group(group=group)\n\n def _reduce_op(self, op):\n '''\n Helper function. 
If the op provided is not a torch.dist.ReduceOp, convert it and return\n '''\n if not isinstance(op, torch.distributed.ReduceOp):\n if op == ReduceOp.SUM:\n op = torch.distributed.ReduceOp.SUM\n elif op == ReduceOp.PRODUCT:\n op = torch.distributed.ReduceOp.PRODUCT\n elif op == ReduceOp.AVG:\n op = torch.distributed.ReduceOp.AVG\n elif op == ReduceOp.MIN:\n op = torch.distributed.ReduceOp.MIN\n elif op == ReduceOp.MAX:\n op = torch.distributed.ReduceOp.MAX\n elif op == ReduceOp.BAND:\n op = torch.distributed.ReduceOp.BAND\n elif op == ReduceOp.BOR:\n op = torch.distributed.ReduceOp.BOR\n elif op == ReduceOp.BXOR:\n op = torch.distributed.ReduceOp.BXOR\n return op\n\n\n# This will become a light-weight wrapper around torch.distributed functions\n# TODO: create some example to show how this wrapper can help profile communication\n# TODO: make sure there is no performance regression with this approach\n# TODO: explore monkey-patching if this does not work\n", "path": "deepspeed/comm/torch.py"}]}
| 3,440 | 383 |
gh_patches_debug_38507
|
rasdani/github-patches
|
git_diff
|
pypa__pip-4293
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip freeze reports duplicates in requirements files as "not installed"
* Pip version: 9.0.1
* Python version: all
* Operating system: all
### Description:
If a package is listed twice in a single requirements file, or once in two or more requirements files, then 'pip freeze -r requirements.txt' erroneously reports that the package isn't installed.
### What I've run:
```
pip install simplejson
cat <<EOF > requirements.txt
simplejson
simplejson
EOF
pip freeze -r requirements.txt
simplejson==3.10.0
Requirement file [requirements.txt] contains simplejson, but that package is not installed
## The following requirements were added by pip freeze:
```
Similarly, pip freeze now supports multiple '-r' options, and if the same package appears in more than one of the requirements files, then the same message about the package not being installed is displayed.
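
The code path behind this is visible in the `freeze` source shown below: each requirement is deleted from `installations` right after being emitted (`del installations[line_req.name]`), so a second occurrence no longer finds it and falls into the "not installed" branch. A toy, self-contained illustration of duplicate-aware emission (hypothetical helper, not pip's actual code):

```python
# Hypothetical sketch only: once a name has been emitted, treat later
# occurrences as duplicates instead of reporting them as "not installed".
def emit_requirements(requested, installed):
    emitted = set()
    for name in requested:
        if name in emitted:
            print("# note: %s listed more than once, keeping first entry" % name)
        elif name not in installed:
            print("# %s is not installed" % name)
        else:
            print(installed.pop(name))
            emitted.add(name)

emit_requirements(["simplejson", "simplejson"],
                  {"simplejson": "simplejson==3.10.0"})
```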
</issue>
<code>
[start of src/pip/_internal/operations/freeze.py]
1 from __future__ import absolute_import
2
3 import logging
4 import os
5 import re
6 import warnings
7
8 from pip._vendor import pkg_resources
9 from pip._vendor.packaging.utils import canonicalize_name
10 from pip._vendor.pkg_resources import RequirementParseError
11
12 from pip._internal.exceptions import InstallationError
13 from pip._internal.req import InstallRequirement
14 from pip._internal.req.req_file import COMMENT_RE
15 from pip._internal.utils.deprecation import RemovedInPip11Warning
16 from pip._internal.utils.misc import (
17 dist_is_editable, get_installed_distributions
18 )
19
20 logger = logging.getLogger(__name__)
21
22
23 def freeze(
24 requirement=None,
25 find_links=None, local_only=None, user_only=None, skip_regex=None,
26 isolated=False,
27 wheel_cache=None,
28 exclude_editable=False,
29 skip=()):
30 find_links = find_links or []
31 skip_match = None
32
33 if skip_regex:
34 skip_match = re.compile(skip_regex).search
35
36 dependency_links = []
37
38 for dist in pkg_resources.working_set:
39 if dist.has_metadata('dependency_links.txt'):
40 dependency_links.extend(
41 dist.get_metadata_lines('dependency_links.txt')
42 )
43 for link in find_links:
44 if '#egg=' in link:
45 dependency_links.append(link)
46 for link in find_links:
47 yield '-f %s' % link
48 installations = {}
49 for dist in get_installed_distributions(local_only=local_only,
50 skip=(),
51 user_only=user_only):
52 try:
53 req = FrozenRequirement.from_dist(
54 dist,
55 dependency_links
56 )
57 except RequirementParseError:
58 logger.warning(
59 "Could not parse requirement: %s",
60 dist.project_name
61 )
62 continue
63 if exclude_editable and req.editable:
64 continue
65 installations[req.name] = req
66
67 if requirement:
68 # the options that don't get turned into an InstallRequirement
69 # should only be emitted once, even if the same option is in multiple
70 # requirements files, so we need to keep track of what has been emitted
71 # so that we don't emit it again if it's seen again
72 emitted_options = set()
73 for req_file_path in requirement:
74 with open(req_file_path) as req_file:
75 for line in req_file:
76 if (not line.strip() or
77 line.strip().startswith('#') or
78 (skip_match and skip_match(line)) or
79 line.startswith((
80 '-r', '--requirement',
81 '-Z', '--always-unzip',
82 '-f', '--find-links',
83 '-i', '--index-url',
84 '--pre',
85 '--trusted-host',
86 '--process-dependency-links',
87 '--extra-index-url'))):
88 line = line.rstrip()
89 if line not in emitted_options:
90 emitted_options.add(line)
91 yield line
92 continue
93
94 if line.startswith('-e') or line.startswith('--editable'):
95 if line.startswith('-e'):
96 line = line[2:].strip()
97 else:
98 line = line[len('--editable'):].strip().lstrip('=')
99 line_req = InstallRequirement.from_editable(
100 line,
101 isolated=isolated,
102 wheel_cache=wheel_cache,
103 )
104 else:
105 line_req = InstallRequirement.from_line(
106 COMMENT_RE.sub('', line).strip(),
107 isolated=isolated,
108 wheel_cache=wheel_cache,
109 )
110
111 if not line_req.name:
112 logger.info(
113 "Skipping line in requirement file [%s] because "
114 "it's not clear what it would install: %s",
115 req_file_path, line.strip(),
116 )
117 logger.info(
118 " (add #egg=PackageName to the URL to avoid"
119 " this warning)"
120 )
121 elif line_req.name not in installations:
122 logger.warning(
123 "Requirement file [%s] contains %s, but that "
124 "package is not installed",
125 req_file_path, COMMENT_RE.sub('', line).strip(),
126 )
127 else:
128 yield str(installations[line_req.name]).rstrip()
129 del installations[line_req.name]
130
131 yield(
132 '## The following requirements were added by '
133 'pip freeze:'
134 )
135 for installation in sorted(
136 installations.values(), key=lambda x: x.name.lower()):
137 if canonicalize_name(installation.name) not in skip:
138 yield str(installation).rstrip()
139
140
141 class FrozenRequirement(object):
142 def __init__(self, name, req, editable, comments=()):
143 self.name = name
144 self.req = req
145 self.editable = editable
146 self.comments = comments
147
148 _rev_re = re.compile(r'-r(\d+)$')
149 _date_re = re.compile(r'-(20\d\d\d\d\d\d)$')
150
151 @classmethod
152 def from_dist(cls, dist, dependency_links):
153 location = os.path.normcase(os.path.abspath(dist.location))
154 comments = []
155 from pip._internal.vcs import vcs, get_src_requirement
156 if dist_is_editable(dist) and vcs.get_backend_name(location):
157 editable = True
158 try:
159 req = get_src_requirement(dist, location)
160 except InstallationError as exc:
161 logger.warning(
162 "Error when trying to get requirement for VCS system %s, "
163 "falling back to uneditable format", exc
164 )
165 req = None
166 if req is None:
167 logger.warning(
168 'Could not determine repository location of %s', location
169 )
170 comments.append(
171 '## !! Could not determine repository location'
172 )
173 req = dist.as_requirement()
174 editable = False
175 else:
176 editable = False
177 req = dist.as_requirement()
178 specs = req.specs
179 assert len(specs) == 1 and specs[0][0] in ["==", "==="], \
180 'Expected 1 spec with == or ===; specs = %r; dist = %r' % \
181 (specs, dist)
182 version = specs[0][1]
183 ver_match = cls._rev_re.search(version)
184 date_match = cls._date_re.search(version)
185 if ver_match or date_match:
186 svn_backend = vcs.get_backend('svn')
187 if svn_backend:
188 svn_location = svn_backend().get_location(
189 dist,
190 dependency_links,
191 )
192 if not svn_location:
193 logger.warning(
194 'Warning: cannot find svn location for %s', req)
195 comments.append(
196 '## FIXME: could not find svn URL in dependency_links '
197 'for this package:'
198 )
199 else:
200 warnings.warn(
201 "SVN editable detection based on dependency links "
202 "will be dropped in the future.",
203 RemovedInPip11Warning,
204 )
205 comments.append(
206 '# Installing as editable to satisfy requirement %s:' %
207 req
208 )
209 if ver_match:
210 rev = ver_match.group(1)
211 else:
212 rev = '{%s}' % date_match.group(1)
213 editable = True
214 req = '%s@%s#egg=%s' % (
215 svn_location,
216 rev,
217 cls.egg_name(dist)
218 )
219 return cls(dist.project_name, req, editable, comments)
220
221 @staticmethod
222 def egg_name(dist):
223 name = dist.egg_name()
224 match = re.search(r'-py\d\.\d$', name)
225 if match:
226 name = name[:match.start()]
227 return name
228
229 def __str__(self):
230 req = self.req
231 if self.editable:
232 req = '-e %s' % req
233 return '\n'.join(list(self.comments) + [str(req)]) + '\n'
234
[end of src/pip/_internal/operations/freeze.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_internal/operations/freeze.py b/src/pip/_internal/operations/freeze.py
--- a/src/pip/_internal/operations/freeze.py
+++ b/src/pip/_internal/operations/freeze.py
@@ -1,11 +1,12 @@
from __future__ import absolute_import
+import collections
import logging
import os
import re
import warnings
-from pip._vendor import pkg_resources
+from pip._vendor import pkg_resources, six
from pip._vendor.packaging.utils import canonicalize_name
from pip._vendor.pkg_resources import RequirementParseError
@@ -70,6 +71,9 @@
# requirements files, so we need to keep track of what has been emitted
# so that we don't emit it again if it's seen again
emitted_options = set()
+ # keep track of which files a requirement is in so that we can
+ # give an accurate warning if a requirement appears multiple times.
+ req_files = collections.defaultdict(list)
for req_file_path in requirement:
with open(req_file_path) as req_file:
for line in req_file:
@@ -119,14 +123,28 @@
" this warning)"
)
elif line_req.name not in installations:
- logger.warning(
- "Requirement file [%s] contains %s, but that "
- "package is not installed",
- req_file_path, COMMENT_RE.sub('', line).strip(),
- )
+ # either it's not installed, or it is installed
+ # but has been processed already
+ if not req_files[line_req.name]:
+ logger.warning(
+ "Requirement file [%s] contains %s, but that "
+ "package is not installed",
+ req_file_path,
+ COMMENT_RE.sub('', line).strip(),
+ )
+ else:
+ req_files[line_req.name].append(req_file_path)
else:
yield str(installations[line_req.name]).rstrip()
del installations[line_req.name]
+ req_files[line_req.name].append(req_file_path)
+
+ # Warn about requirements that were included multiple times (in a
+ # single requirements file or in different requirements files).
+ for name, files in six.iteritems(req_files):
+ if len(files) > 1:
+ logger.warning("Requirement %s included multiple times [%s]",
+ name, ', '.join(sorted(set(files))))
yield(
'## The following requirements were added by '
|
{"golden_diff": "diff --git a/src/pip/_internal/operations/freeze.py b/src/pip/_internal/operations/freeze.py\n--- a/src/pip/_internal/operations/freeze.py\n+++ b/src/pip/_internal/operations/freeze.py\n@@ -1,11 +1,12 @@\n from __future__ import absolute_import\n \n+import collections\n import logging\n import os\n import re\n import warnings\n \n-from pip._vendor import pkg_resources\n+from pip._vendor import pkg_resources, six\n from pip._vendor.packaging.utils import canonicalize_name\n from pip._vendor.pkg_resources import RequirementParseError\n \n@@ -70,6 +71,9 @@\n # requirements files, so we need to keep track of what has been emitted\n # so that we don't emit it again if it's seen again\n emitted_options = set()\n+ # keep track of which files a requirement is in so that we can\n+ # give an accurate warning if a requirement appears multiple times.\n+ req_files = collections.defaultdict(list)\n for req_file_path in requirement:\n with open(req_file_path) as req_file:\n for line in req_file:\n@@ -119,14 +123,28 @@\n \" this warning)\"\n )\n elif line_req.name not in installations:\n- logger.warning(\n- \"Requirement file [%s] contains %s, but that \"\n- \"package is not installed\",\n- req_file_path, COMMENT_RE.sub('', line).strip(),\n- )\n+ # either it's not installed, or it is installed\n+ # but has been processed already\n+ if not req_files[line_req.name]:\n+ logger.warning(\n+ \"Requirement file [%s] contains %s, but that \"\n+ \"package is not installed\",\n+ req_file_path,\n+ COMMENT_RE.sub('', line).strip(),\n+ )\n+ else:\n+ req_files[line_req.name].append(req_file_path)\n else:\n yield str(installations[line_req.name]).rstrip()\n del installations[line_req.name]\n+ req_files[line_req.name].append(req_file_path)\n+\n+ # Warn about requirements that were included multiple times (in a\n+ # single requirements file or in different requirements files).\n+ for name, files in six.iteritems(req_files):\n+ if len(files) > 1:\n+ logger.warning(\"Requirement %s included multiple times [%s]\",\n+ name, ', '.join(sorted(set(files))))\n \n yield(\n '## The following requirements were added by '\n", "issue": "pip freeze reports doubles in requirements files as \"not installed\"\n* Pip version: 9.0.1\r\n* Python version: all\r\n* Operating system: all\r\n\r\n### Description:\r\nIf a package is listed twice in a single requirements file, or once in two or more requirements files, then 'pip freeze -r requirements.txt' erroneously reports that the package isn't installed.\r\n\r\n\r\n### What I've run:\r\n\r\n```\r\npip install simplejson\r\n\r\ncat <<EOF > requirements.txt\r\nsimplejson\r\nsimplejson\r\nEOF\r\n\r\npip freeze -r requirements.txt\r\nsimplejson==3.10.0\r\nRequirement file [requirements.txt] contains simplejson, but that package is not installed\r\n## The following requirements were added by pip freeze:\r\n```\r\n\r\nSimilarly, pip freeze now supports multiple '-r' options, and if the same package appears in more than one of the requirements files, then the same message about the package not being installed is displayed.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport os\nimport re\nimport warnings\n\nfrom pip._vendor import pkg_resources\nfrom pip._vendor.packaging.utils import canonicalize_name\nfrom pip._vendor.pkg_resources import RequirementParseError\n\nfrom pip._internal.exceptions import InstallationError\nfrom pip._internal.req import InstallRequirement\nfrom pip._internal.req.req_file import COMMENT_RE\nfrom 
pip._internal.utils.deprecation import RemovedInPip11Warning\nfrom pip._internal.utils.misc import (\n dist_is_editable, get_installed_distributions\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef freeze(\n requirement=None,\n find_links=None, local_only=None, user_only=None, skip_regex=None,\n isolated=False,\n wheel_cache=None,\n exclude_editable=False,\n skip=()):\n find_links = find_links or []\n skip_match = None\n\n if skip_regex:\n skip_match = re.compile(skip_regex).search\n\n dependency_links = []\n\n for dist in pkg_resources.working_set:\n if dist.has_metadata('dependency_links.txt'):\n dependency_links.extend(\n dist.get_metadata_lines('dependency_links.txt')\n )\n for link in find_links:\n if '#egg=' in link:\n dependency_links.append(link)\n for link in find_links:\n yield '-f %s' % link\n installations = {}\n for dist in get_installed_distributions(local_only=local_only,\n skip=(),\n user_only=user_only):\n try:\n req = FrozenRequirement.from_dist(\n dist,\n dependency_links\n )\n except RequirementParseError:\n logger.warning(\n \"Could not parse requirement: %s\",\n dist.project_name\n )\n continue\n if exclude_editable and req.editable:\n continue\n installations[req.name] = req\n\n if requirement:\n # the options that don't get turned into an InstallRequirement\n # should only be emitted once, even if the same option is in multiple\n # requirements files, so we need to keep track of what has been emitted\n # so that we don't emit it again if it's seen again\n emitted_options = set()\n for req_file_path in requirement:\n with open(req_file_path) as req_file:\n for line in req_file:\n if (not line.strip() or\n line.strip().startswith('#') or\n (skip_match and skip_match(line)) or\n line.startswith((\n '-r', '--requirement',\n '-Z', '--always-unzip',\n '-f', '--find-links',\n '-i', '--index-url',\n '--pre',\n '--trusted-host',\n '--process-dependency-links',\n '--extra-index-url'))):\n line = line.rstrip()\n if line not in emitted_options:\n emitted_options.add(line)\n yield line\n continue\n\n if line.startswith('-e') or line.startswith('--editable'):\n if line.startswith('-e'):\n line = line[2:].strip()\n else:\n line = line[len('--editable'):].strip().lstrip('=')\n line_req = InstallRequirement.from_editable(\n line,\n isolated=isolated,\n wheel_cache=wheel_cache,\n )\n else:\n line_req = InstallRequirement.from_line(\n COMMENT_RE.sub('', line).strip(),\n isolated=isolated,\n wheel_cache=wheel_cache,\n )\n\n if not line_req.name:\n logger.info(\n \"Skipping line in requirement file [%s] because \"\n \"it's not clear what it would install: %s\",\n req_file_path, line.strip(),\n )\n logger.info(\n \" (add #egg=PackageName to the URL to avoid\"\n \" this warning)\"\n )\n elif line_req.name not in installations:\n logger.warning(\n \"Requirement file [%s] contains %s, but that \"\n \"package is not installed\",\n req_file_path, COMMENT_RE.sub('', line).strip(),\n )\n else:\n yield str(installations[line_req.name]).rstrip()\n del installations[line_req.name]\n\n yield(\n '## The following requirements were added by '\n 'pip freeze:'\n )\n for installation in sorted(\n installations.values(), key=lambda x: x.name.lower()):\n if canonicalize_name(installation.name) not in skip:\n yield str(installation).rstrip()\n\n\nclass FrozenRequirement(object):\n def __init__(self, name, req, editable, comments=()):\n self.name = name\n self.req = req\n self.editable = editable\n self.comments = comments\n\n _rev_re = re.compile(r'-r(\\d+)$')\n _date_re = 
re.compile(r'-(20\\d\\d\\d\\d\\d\\d)$')\n\n @classmethod\n def from_dist(cls, dist, dependency_links):\n location = os.path.normcase(os.path.abspath(dist.location))\n comments = []\n from pip._internal.vcs import vcs, get_src_requirement\n if dist_is_editable(dist) and vcs.get_backend_name(location):\n editable = True\n try:\n req = get_src_requirement(dist, location)\n except InstallationError as exc:\n logger.warning(\n \"Error when trying to get requirement for VCS system %s, \"\n \"falling back to uneditable format\", exc\n )\n req = None\n if req is None:\n logger.warning(\n 'Could not determine repository location of %s', location\n )\n comments.append(\n '## !! Could not determine repository location'\n )\n req = dist.as_requirement()\n editable = False\n else:\n editable = False\n req = dist.as_requirement()\n specs = req.specs\n assert len(specs) == 1 and specs[0][0] in [\"==\", \"===\"], \\\n 'Expected 1 spec with == or ===; specs = %r; dist = %r' % \\\n (specs, dist)\n version = specs[0][1]\n ver_match = cls._rev_re.search(version)\n date_match = cls._date_re.search(version)\n if ver_match or date_match:\n svn_backend = vcs.get_backend('svn')\n if svn_backend:\n svn_location = svn_backend().get_location(\n dist,\n dependency_links,\n )\n if not svn_location:\n logger.warning(\n 'Warning: cannot find svn location for %s', req)\n comments.append(\n '## FIXME: could not find svn URL in dependency_links '\n 'for this package:'\n )\n else:\n warnings.warn(\n \"SVN editable detection based on dependency links \"\n \"will be dropped in the future.\",\n RemovedInPip11Warning,\n )\n comments.append(\n '# Installing as editable to satisfy requirement %s:' %\n req\n )\n if ver_match:\n rev = ver_match.group(1)\n else:\n rev = '{%s}' % date_match.group(1)\n editable = True\n req = '%s@%s#egg=%s' % (\n svn_location,\n rev,\n cls.egg_name(dist)\n )\n return cls(dist.project_name, req, editable, comments)\n\n @staticmethod\n def egg_name(dist):\n name = dist.egg_name()\n match = re.search(r'-py\\d\\.\\d$', name)\n if match:\n name = name[:match.start()]\n return name\n\n def __str__(self):\n req = self.req\n if self.editable:\n req = '-e %s' % req\n return '\\n'.join(list(self.comments) + [str(req)]) + '\\n'\n", "path": "src/pip/_internal/operations/freeze.py"}]}
| 2,960 | 546 |
gh_patches_debug_9615
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-3264
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
How to implement a "watcher" thread as suggested by the documentation?
The documentation of the [`timeout`](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#module-urllib3.util.timeout) module says:
> If your goal is to cut off any request after a set amount of wall clock time, consider having a second “watcher” thread to cut off a slow request.
How would that work?
It seems like it is [strongly discouraged or even impossible](https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread) to kill a thread in Python, so what would that watcher thread do?
If it is not possible to write a watcher thread in Python, the documentation shouldn't suggest to do it.
</issue>
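For illustration, one way to read the (since removed) suggestion is to wait on the request from the calling thread and give up after a wall-clock deadline. The sketch below uses only the standard library plus the documented `PoolManager.request` call; it is an assumption-laden workaround rather than an official recipe, and it deliberately exposes the limitation raised above: the worker thread is abandoned, not killed.

```python
# Hedged sketch: a wall-clock cut-off enforced by the caller, not by urllib3.
import concurrent.futures
import urllib3

def fetch(url):
    return urllib3.PoolManager().request("GET", url)

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = executor.submit(fetch, "https://example.com/")
try:
    response = future.result(timeout=5.0)   # stop waiting after 5 s of wall-clock time
    print(response.status)
except concurrent.futures.TimeoutError:
    # The request thread is NOT killed here; the caller merely stops waiting for it,
    # which is exactly why the docs no longer recommend this as a general fix.
    print("gave up after 5 seconds")
finally:
    executor.shutdown(wait=False)
```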
<code>
[start of src/urllib3/util/timeout.py]
1 from __future__ import annotations
2
3 import time
4 import typing
5 from enum import Enum
6 from socket import getdefaulttimeout
7
8 from ..exceptions import TimeoutStateError
9
10 if typing.TYPE_CHECKING:
11 from typing import Final
12
13
14 class _TYPE_DEFAULT(Enum):
15 # This value should never be passed to socket.settimeout() so for safety we use a -1.
16 # socket.settimout() raises a ValueError for negative values.
17 token = -1
18
19
20 _DEFAULT_TIMEOUT: Final[_TYPE_DEFAULT] = _TYPE_DEFAULT.token
21
22 _TYPE_TIMEOUT = typing.Optional[typing.Union[float, _TYPE_DEFAULT]]
23
24
25 class Timeout:
26 """Timeout configuration.
27
28 Timeouts can be defined as a default for a pool:
29
30 .. code-block:: python
31
32 import urllib3
33
34 timeout = urllib3.util.Timeout(connect=2.0, read=7.0)
35
36 http = urllib3.PoolManager(timeout=timeout)
37
38 resp = http.request("GET", "https://example.com/")
39
40 print(resp.status)
41
42 Or per-request (which overrides the default for the pool):
43
44 .. code-block:: python
45
46 response = http.request("GET", "https://example.com/", timeout=Timeout(10))
47
48 Timeouts can be disabled by setting all the parameters to ``None``:
49
50 .. code-block:: python
51
52 no_timeout = Timeout(connect=None, read=None)
53 response = http.request("GET", "https://example.com/", timeout=no_timeout)
54
55
56 :param total:
57 This combines the connect and read timeouts into one; the read timeout
58 will be set to the time leftover from the connect attempt. In the
59 event that both a connect timeout and a total are specified, or a read
60 timeout and a total are specified, the shorter timeout will be applied.
61
62 Defaults to None.
63
64 :type total: int, float, or None
65
66 :param connect:
67 The maximum amount of time (in seconds) to wait for a connection
68 attempt to a server to succeed. Omitting the parameter will default the
69 connect timeout to the system default, probably `the global default
70 timeout in socket.py
71 <http://hg.python.org/cpython/file/603b4d593758/Lib/socket.py#l535>`_.
72 None will set an infinite timeout for connection attempts.
73
74 :type connect: int, float, or None
75
76 :param read:
77 The maximum amount of time (in seconds) to wait between consecutive
78 read operations for a response from the server. Omitting the parameter
79 will default the read timeout to the system default, probably `the
80 global default timeout in socket.py
81 <http://hg.python.org/cpython/file/603b4d593758/Lib/socket.py#l535>`_.
82 None will set an infinite timeout.
83
84 :type read: int, float, or None
85
86 .. note::
87
88 Many factors can affect the total amount of time for urllib3 to return
89 an HTTP response.
90
91 For example, Python's DNS resolver does not obey the timeout specified
92 on the socket. Other factors that can affect total request time include
93 high CPU load, high swap, the program running at a low priority level,
94 or other behaviors.
95
96 In addition, the read and total timeouts only measure the time between
97 read operations on the socket connecting the client and the server,
98 not the total amount of time for the request to return a complete
99 response. For most requests, the timeout is raised because the server
100 has not sent the first byte in the specified time. This is not always
101 the case; if a server streams one byte every fifteen seconds, a timeout
102 of 20 seconds will not trigger, even though the request will take
103 several minutes to complete.
104
105 If your goal is to cut off any request after a set amount of wall clock
106 time, consider having a second "watcher" thread to cut off a slow
107 request.
108 """
109
110 #: A sentinel object representing the default timeout value
111 DEFAULT_TIMEOUT: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT
112
113 def __init__(
114 self,
115 total: _TYPE_TIMEOUT = None,
116 connect: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
117 read: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
118 ) -> None:
119 self._connect = self._validate_timeout(connect, "connect")
120 self._read = self._validate_timeout(read, "read")
121 self.total = self._validate_timeout(total, "total")
122 self._start_connect: float | None = None
123
124 def __repr__(self) -> str:
125 return f"{type(self).__name__}(connect={self._connect!r}, read={self._read!r}, total={self.total!r})"
126
127 # __str__ provided for backwards compatibility
128 __str__ = __repr__
129
130 @staticmethod
131 def resolve_default_timeout(timeout: _TYPE_TIMEOUT) -> float | None:
132 return getdefaulttimeout() if timeout is _DEFAULT_TIMEOUT else timeout
133
134 @classmethod
135 def _validate_timeout(cls, value: _TYPE_TIMEOUT, name: str) -> _TYPE_TIMEOUT:
136 """Check that a timeout attribute is valid.
137
138 :param value: The timeout value to validate
139 :param name: The name of the timeout attribute to validate. This is
140 used to specify in error messages.
141 :return: The validated and casted version of the given value.
142 :raises ValueError: If it is a numeric value less than or equal to
143 zero, or the type is not an integer, float, or None.
144 """
145 if value is None or value is _DEFAULT_TIMEOUT:
146 return value
147
148 if isinstance(value, bool):
149 raise ValueError(
150 "Timeout cannot be a boolean value. It must "
151 "be an int, float or None."
152 )
153 try:
154 float(value)
155 except (TypeError, ValueError):
156 raise ValueError(
157 "Timeout value %s was %s, but it must be an "
158 "int, float or None." % (name, value)
159 ) from None
160
161 try:
162 if value <= 0:
163 raise ValueError(
164 "Attempted to set %s timeout to %s, but the "
165 "timeout cannot be set to a value less "
166 "than or equal to 0." % (name, value)
167 )
168 except TypeError:
169 raise ValueError(
170 "Timeout value %s was %s, but it must be an "
171 "int, float or None." % (name, value)
172 ) from None
173
174 return value
175
176 @classmethod
177 def from_float(cls, timeout: _TYPE_TIMEOUT) -> Timeout:
178 """Create a new Timeout from a legacy timeout value.
179
180 The timeout value used by httplib.py sets the same timeout on the
181 connect(), and recv() socket requests. This creates a :class:`Timeout`
182 object that sets the individual timeouts to the ``timeout`` value
183 passed to this function.
184
185 :param timeout: The legacy timeout value.
186 :type timeout: integer, float, :attr:`urllib3.util.Timeout.DEFAULT_TIMEOUT`, or None
187 :return: Timeout object
188 :rtype: :class:`Timeout`
189 """
190 return Timeout(read=timeout, connect=timeout)
191
192 def clone(self) -> Timeout:
193 """Create a copy of the timeout object
194
195 Timeout properties are stored per-pool but each request needs a fresh
196 Timeout object to ensure each one has its own start/stop configured.
197
198 :return: a copy of the timeout object
199 :rtype: :class:`Timeout`
200 """
201 # We can't use copy.deepcopy because that will also create a new object
202 # for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a sentinel to
203 # detect the user default.
204 return Timeout(connect=self._connect, read=self._read, total=self.total)
205
206 def start_connect(self) -> float:
207 """Start the timeout clock, used during a connect() attempt
208
209 :raises urllib3.exceptions.TimeoutStateError: if you attempt
210 to start a timer that has been started already.
211 """
212 if self._start_connect is not None:
213 raise TimeoutStateError("Timeout timer has already been started.")
214 self._start_connect = time.monotonic()
215 return self._start_connect
216
217 def get_connect_duration(self) -> float:
218 """Gets the time elapsed since the call to :meth:`start_connect`.
219
220 :return: Elapsed time in seconds.
221 :rtype: float
222 :raises urllib3.exceptions.TimeoutStateError: if you attempt
223 to get duration for a timer that hasn't been started.
224 """
225 if self._start_connect is None:
226 raise TimeoutStateError(
227 "Can't get connect duration for timer that has not started."
228 )
229 return time.monotonic() - self._start_connect
230
231 @property
232 def connect_timeout(self) -> _TYPE_TIMEOUT:
233 """Get the value to use when setting a connection timeout.
234
235 This will be a positive float or integer, the value None
236 (never timeout), or the default system timeout.
237
238 :return: Connect timeout.
239 :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None
240 """
241 if self.total is None:
242 return self._connect
243
244 if self._connect is None or self._connect is _DEFAULT_TIMEOUT:
245 return self.total
246
247 return min(self._connect, self.total) # type: ignore[type-var]
248
249 @property
250 def read_timeout(self) -> float | None:
251 """Get the value for the read timeout.
252
253 This assumes some time has elapsed in the connection timeout and
254 computes the read timeout appropriately.
255
256 If self.total is set, the read timeout is dependent on the amount of
257 time taken by the connect timeout. If the connection time has not been
258 established, a :exc:`~urllib3.exceptions.TimeoutStateError` will be
259 raised.
260
261 :return: Value to use for the read timeout.
262 :rtype: int, float or None
263 :raises urllib3.exceptions.TimeoutStateError: If :meth:`start_connect`
264 has not yet been called on this object.
265 """
266 if (
267 self.total is not None
268 and self.total is not _DEFAULT_TIMEOUT
269 and self._read is not None
270 and self._read is not _DEFAULT_TIMEOUT
271 ):
272 # In case the connect timeout has not yet been established.
273 if self._start_connect is None:
274 return self._read
275 return max(0, min(self.total - self.get_connect_duration(), self._read))
276 elif self.total is not None and self.total is not _DEFAULT_TIMEOUT:
277 return max(0, self.total - self.get_connect_duration())
278 else:
279 return self.resolve_default_timeout(self._read)
280
[end of src/urllib3/util/timeout.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/urllib3/util/timeout.py b/src/urllib3/util/timeout.py
--- a/src/urllib3/util/timeout.py
+++ b/src/urllib3/util/timeout.py
@@ -101,10 +101,6 @@
the case; if a server streams one byte every fifteen seconds, a timeout
of 20 seconds will not trigger, even though the request will take
several minutes to complete.
-
- If your goal is to cut off any request after a set amount of wall clock
- time, consider having a second "watcher" thread to cut off a slow
- request.
"""
#: A sentinel object representing the default timeout value
|
{"golden_diff": "diff --git a/src/urllib3/util/timeout.py b/src/urllib3/util/timeout.py\n--- a/src/urllib3/util/timeout.py\n+++ b/src/urllib3/util/timeout.py\n@@ -101,10 +101,6 @@\n the case; if a server streams one byte every fifteen seconds, a timeout\n of 20 seconds will not trigger, even though the request will take\n several minutes to complete.\n-\n- If your goal is to cut off any request after a set amount of wall clock\n- time, consider having a second \"watcher\" thread to cut off a slow\n- request.\n \"\"\"\n \n #: A sentinel object representing the default timeout value\n", "issue": "How to implement a \"watcher\" thread as suggested by the documentation?\nThe documentation of the [`timeout`](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#module-urllib3.util.timeout) module says:\r\n\r\n> If your goal is to cut off any request after a set amount of wall clock time, consider having a second \u201cwatcher\u201d thread to cut off a slow request.\r\n\r\nHow would that work?\r\n\r\nIt seems like it is [strongly discouraged or even impossible](https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread) to kill a thread in Python, so what would that watcher thread do?\r\n\r\nIf it is not possible to write a watcher thread in Python, the documentation shouldn't suggest to do it.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport time\nimport typing\nfrom enum import Enum\nfrom socket import getdefaulttimeout\n\nfrom ..exceptions import TimeoutStateError\n\nif typing.TYPE_CHECKING:\n from typing import Final\n\n\nclass _TYPE_DEFAULT(Enum):\n # This value should never be passed to socket.settimeout() so for safety we use a -1.\n # socket.settimout() raises a ValueError for negative values.\n token = -1\n\n\n_DEFAULT_TIMEOUT: Final[_TYPE_DEFAULT] = _TYPE_DEFAULT.token\n\n_TYPE_TIMEOUT = typing.Optional[typing.Union[float, _TYPE_DEFAULT]]\n\n\nclass Timeout:\n \"\"\"Timeout configuration.\n\n Timeouts can be defined as a default for a pool:\n\n .. code-block:: python\n\n import urllib3\n\n timeout = urllib3.util.Timeout(connect=2.0, read=7.0)\n\n http = urllib3.PoolManager(timeout=timeout)\n\n resp = http.request(\"GET\", \"https://example.com/\")\n\n print(resp.status)\n\n Or per-request (which overrides the default for the pool):\n\n .. code-block:: python\n\n response = http.request(\"GET\", \"https://example.com/\", timeout=Timeout(10))\n\n Timeouts can be disabled by setting all the parameters to ``None``:\n\n .. code-block:: python\n\n no_timeout = Timeout(connect=None, read=None)\n response = http.request(\"GET\", \"https://example.com/\", timeout=no_timeout)\n\n\n :param total:\n This combines the connect and read timeouts into one; the read timeout\n will be set to the time leftover from the connect attempt. In the\n event that both a connect timeout and a total are specified, or a read\n timeout and a total are specified, the shorter timeout will be applied.\n\n Defaults to None.\n\n :type total: int, float, or None\n\n :param connect:\n The maximum amount of time (in seconds) to wait for a connection\n attempt to a server to succeed. 
Omitting the parameter will default the\n connect timeout to the system default, probably `the global default\n timeout in socket.py\n <http://hg.python.org/cpython/file/603b4d593758/Lib/socket.py#l535>`_.\n None will set an infinite timeout for connection attempts.\n\n :type connect: int, float, or None\n\n :param read:\n The maximum amount of time (in seconds) to wait between consecutive\n read operations for a response from the server. Omitting the parameter\n will default the read timeout to the system default, probably `the\n global default timeout in socket.py\n <http://hg.python.org/cpython/file/603b4d593758/Lib/socket.py#l535>`_.\n None will set an infinite timeout.\n\n :type read: int, float, or None\n\n .. note::\n\n Many factors can affect the total amount of time for urllib3 to return\n an HTTP response.\n\n For example, Python's DNS resolver does not obey the timeout specified\n on the socket. Other factors that can affect total request time include\n high CPU load, high swap, the program running at a low priority level,\n or other behaviors.\n\n In addition, the read and total timeouts only measure the time between\n read operations on the socket connecting the client and the server,\n not the total amount of time for the request to return a complete\n response. For most requests, the timeout is raised because the server\n has not sent the first byte in the specified time. This is not always\n the case; if a server streams one byte every fifteen seconds, a timeout\n of 20 seconds will not trigger, even though the request will take\n several minutes to complete.\n\n If your goal is to cut off any request after a set amount of wall clock\n time, consider having a second \"watcher\" thread to cut off a slow\n request.\n \"\"\"\n\n #: A sentinel object representing the default timeout value\n DEFAULT_TIMEOUT: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT\n\n def __init__(\n self,\n total: _TYPE_TIMEOUT = None,\n connect: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,\n read: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,\n ) -> None:\n self._connect = self._validate_timeout(connect, \"connect\")\n self._read = self._validate_timeout(read, \"read\")\n self.total = self._validate_timeout(total, \"total\")\n self._start_connect: float | None = None\n\n def __repr__(self) -> str:\n return f\"{type(self).__name__}(connect={self._connect!r}, read={self._read!r}, total={self.total!r})\"\n\n # __str__ provided for backwards compatibility\n __str__ = __repr__\n\n @staticmethod\n def resolve_default_timeout(timeout: _TYPE_TIMEOUT) -> float | None:\n return getdefaulttimeout() if timeout is _DEFAULT_TIMEOUT else timeout\n\n @classmethod\n def _validate_timeout(cls, value: _TYPE_TIMEOUT, name: str) -> _TYPE_TIMEOUT:\n \"\"\"Check that a timeout attribute is valid.\n\n :param value: The timeout value to validate\n :param name: The name of the timeout attribute to validate. This is\n used to specify in error messages.\n :return: The validated and casted version of the given value.\n :raises ValueError: If it is a numeric value less than or equal to\n zero, or the type is not an integer, float, or None.\n \"\"\"\n if value is None or value is _DEFAULT_TIMEOUT:\n return value\n\n if isinstance(value, bool):\n raise ValueError(\n \"Timeout cannot be a boolean value. 
It must \"\n \"be an int, float or None.\"\n )\n try:\n float(value)\n except (TypeError, ValueError):\n raise ValueError(\n \"Timeout value %s was %s, but it must be an \"\n \"int, float or None.\" % (name, value)\n ) from None\n\n try:\n if value <= 0:\n raise ValueError(\n \"Attempted to set %s timeout to %s, but the \"\n \"timeout cannot be set to a value less \"\n \"than or equal to 0.\" % (name, value)\n )\n except TypeError:\n raise ValueError(\n \"Timeout value %s was %s, but it must be an \"\n \"int, float or None.\" % (name, value)\n ) from None\n\n return value\n\n @classmethod\n def from_float(cls, timeout: _TYPE_TIMEOUT) -> Timeout:\n \"\"\"Create a new Timeout from a legacy timeout value.\n\n The timeout value used by httplib.py sets the same timeout on the\n connect(), and recv() socket requests. This creates a :class:`Timeout`\n object that sets the individual timeouts to the ``timeout`` value\n passed to this function.\n\n :param timeout: The legacy timeout value.\n :type timeout: integer, float, :attr:`urllib3.util.Timeout.DEFAULT_TIMEOUT`, or None\n :return: Timeout object\n :rtype: :class:`Timeout`\n \"\"\"\n return Timeout(read=timeout, connect=timeout)\n\n def clone(self) -> Timeout:\n \"\"\"Create a copy of the timeout object\n\n Timeout properties are stored per-pool but each request needs a fresh\n Timeout object to ensure each one has its own start/stop configured.\n\n :return: a copy of the timeout object\n :rtype: :class:`Timeout`\n \"\"\"\n # We can't use copy.deepcopy because that will also create a new object\n # for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a sentinel to\n # detect the user default.\n return Timeout(connect=self._connect, read=self._read, total=self.total)\n\n def start_connect(self) -> float:\n \"\"\"Start the timeout clock, used during a connect() attempt\n\n :raises urllib3.exceptions.TimeoutStateError: if you attempt\n to start a timer that has been started already.\n \"\"\"\n if self._start_connect is not None:\n raise TimeoutStateError(\"Timeout timer has already been started.\")\n self._start_connect = time.monotonic()\n return self._start_connect\n\n def get_connect_duration(self) -> float:\n \"\"\"Gets the time elapsed since the call to :meth:`start_connect`.\n\n :return: Elapsed time in seconds.\n :rtype: float\n :raises urllib3.exceptions.TimeoutStateError: if you attempt\n to get duration for a timer that hasn't been started.\n \"\"\"\n if self._start_connect is None:\n raise TimeoutStateError(\n \"Can't get connect duration for timer that has not started.\"\n )\n return time.monotonic() - self._start_connect\n\n @property\n def connect_timeout(self) -> _TYPE_TIMEOUT:\n \"\"\"Get the value to use when setting a connection timeout.\n\n This will be a positive float or integer, the value None\n (never timeout), or the default system timeout.\n\n :return: Connect timeout.\n :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None\n \"\"\"\n if self.total is None:\n return self._connect\n\n if self._connect is None or self._connect is _DEFAULT_TIMEOUT:\n return self.total\n\n return min(self._connect, self.total) # type: ignore[type-var]\n\n @property\n def read_timeout(self) -> float | None:\n \"\"\"Get the value for the read timeout.\n\n This assumes some time has elapsed in the connection timeout and\n computes the read timeout appropriately.\n\n If self.total is set, the read timeout is dependent on the amount of\n time taken by the connect timeout. 
If the connection time has not been\n established, a :exc:`~urllib3.exceptions.TimeoutStateError` will be\n raised.\n\n :return: Value to use for the read timeout.\n :rtype: int, float or None\n :raises urllib3.exceptions.TimeoutStateError: If :meth:`start_connect`\n has not yet been called on this object.\n \"\"\"\n if (\n self.total is not None\n and self.total is not _DEFAULT_TIMEOUT\n and self._read is not None\n and self._read is not _DEFAULT_TIMEOUT\n ):\n # In case the connect timeout has not yet been established.\n if self._start_connect is None:\n return self._read\n return max(0, min(self.total - self.get_connect_duration(), self._read))\n elif self.total is not None and self.total is not _DEFAULT_TIMEOUT:\n return max(0, self.total - self.get_connect_duration())\n else:\n return self.resolve_default_timeout(self._read)\n", "path": "src/urllib3/util/timeout.py"}]}
| 3,795 | 160 |
gh_patches_debug_55169
|
rasdani/github-patches
|
git_diff
|
spack__spack-19617
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jupyter: No module named ipykernel_launcher
### Steps to reproduce the issue
```console
$ spack env create my-jupyter
$ spack env activate my-jupyter
$ spack add py-jupyter
$ spack add py-ipython
$ spack add py-ipykernel
$ spack add py-notebook
$ spack install
```
### Error Message
If I try to start `jupyter notebook` now and open a Python3 Notebook, I get no working Python3 kernel
```
Kernel started: af71e14f-24f7-40a4-92a8-48e79f5d621c, name: python3
/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher
[I 00:55:29.178 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports
/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher
# ...
```
### Information on your system
```bash
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
```
`spack debug report`:
* **Spack:** 0.15.4-1470-99ef3d11c1
* **Python:** 3.8.6
* **Platform:** linux-ubuntu18.04-skylake
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have searched the issues of this repo and believe this is not a duplicate
- [ ] I have run the failing commands in debug mode and reported the output
</issue>
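As background, the usual manual workaround (independent of the Spack recipe change discussed below) is to register a kernel spec from inside the activated environment, so that the spec points at an interpreter that can actually import `ipykernel`. The commands below are the standard ipykernel and Jupyter CLIs; the environment and kernel names come from the reproduction and are otherwise illustrative.

```console
$ spack env activate my-jupyter
$ python -m ipykernel install --user --name my-jupyter
$ jupyter kernelspec list   # the new spec should now resolve ipykernel_launcher
```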
<code>
[start of var/spack/repos/builtin/packages/py-ipykernel/package.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6
7 class PyIpykernel(PythonPackage):
8 """IPython Kernel for Jupyter"""
9
10 homepage = "https://pypi.python.org/pypi/ipykernel"
11 url = "https://pypi.io/packages/source/i/ipykernel/ipykernel-5.3.4.tar.gz"
12
13 version('5.3.4', sha256='9b2652af1607986a1b231c62302d070bc0534f564c393a5d9d130db9abbbe89d')
14 version('5.1.1', sha256='f0e962052718068ad3b1d8bcc703794660858f58803c3798628817f492a8769c')
15 version('5.1.0', sha256='0fc0bf97920d454102168ec2008620066878848fcfca06c22b669696212e292f')
16 version('4.10.0', sha256='699103c8e64886e3ec7053f2a6aa83bb90426063526f63a818732ff385202bad')
17 version('4.5.0', sha256='245a798edb8fd751b95750d8645d736dd739a020e7fc7d5627dac4d1c35d8295')
18 version('4.4.1', sha256='6d48398b3112efb733b254edede4b7f3262c28bd19f665b64ef1acf6ec5cd74f')
19 version('4.4.0', sha256='d516427c3bd689205e6693c9616302ef34017b91ada3c9ea3fca6e90702b7ffe')
20 version('4.3.1', sha256='8219d3eaa3e4d4efc5f349114e41a40f0986c91a960846bb81d5da817fb7cc3f')
21 version('4.3.0', sha256='f214c661328c836e02b6f185f98f3eccd7ce396791937493ffa1babf5e3267ab')
22 version('4.2.2', sha256='a876da43e01acec2c305abdd8e6aa55f052bab1196171ccf1cb9a6aa230298b0')
23 version('4.2.1', sha256='081a5d4db33db58697be2d682b92f79b2c239493445f13dd457c15bc3e52c874')
24 version('4.2.0', sha256='723b3d4baac20f0c9cd91fc75c3e813636ecb6c6e303fb34d628c3df078985a7')
25 version('4.1.1', sha256='d8c5555386d0f18f1336dea9800f9f0fe96dcecc9757c0f980e11fdfadb661ff')
26 version('4.1.0', sha256='e0e150ad55e487e49054efc9a4b0e2e17f27e1de77444b26760789077b146d86')
27
28 depends_on('[email protected]:2.8,3.3:', type=('build', 'run'))
29 depends_on('[email protected]:', when='@5.0:', type=('build', 'run'))
30 depends_on('[email protected]:', when='@5.2:', type=('build', 'run'))
31 depends_on('py-setuptools', type='build', when='@5:')
32 depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))
33 depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))
34 depends_on('[email protected]:', type=('build', 'run'))
35 depends_on('py-jupyter-client', type=('build', 'run'))
36 depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))
37 depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))
38 depends_on('py-appnope', when='platform=darwin', type=('build', 'run'))
39 depends_on('py-pytest@:5.3.3,5.3.5:', type='test')
40 depends_on('py-pytest-cov', type='test')
41 # depends_on('py-flaky', type='test')
42 depends_on('py-nose', type='test')
43
[end of var/spack/repos/builtin/packages/py-ipykernel/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/py-ipykernel/package.py b/var/spack/repos/builtin/packages/py-ipykernel/package.py
--- a/var/spack/repos/builtin/packages/py-ipykernel/package.py
+++ b/var/spack/repos/builtin/packages/py-ipykernel/package.py
@@ -40,3 +40,9 @@
depends_on('py-pytest-cov', type='test')
# depends_on('py-flaky', type='test')
depends_on('py-nose', type='test')
+
+ phases = ['build', 'install', 'install_data']
+
+ def install_data(self):
+ """ install the Jupyter kernel spec """
+ self.spec['python'].command('-m ipykernel', ['install'])
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/py-ipykernel/package.py b/var/spack/repos/builtin/packages/py-ipykernel/package.py\n--- a/var/spack/repos/builtin/packages/py-ipykernel/package.py\n+++ b/var/spack/repos/builtin/packages/py-ipykernel/package.py\n@@ -40,3 +40,9 @@\n depends_on('py-pytest-cov', type='test')\n # depends_on('py-flaky', type='test')\n depends_on('py-nose', type='test')\n+\n+ phases = ['build', 'install', 'install_data']\n+\n+ def install_data(self):\n+ \"\"\" install the Jupyter kernel spec \"\"\"\n+ self.spec['python'].command('-m ipykernel', ['install'])\n", "issue": "Jupyter: No module named ipykernel_launcher\n### Steps to reproduce the issue\r\n\r\n```console\r\n$ spack env create my-jupyter\r\n$ spack env activate my-jupyter\r\n$ spack add py-jupyter\r\n$ spack add py-ipython\r\n$ spack add py-ipykernel\r\n$ spack add py-notebook\r\n$ spack install\r\n```\r\n\r\n### Error Message\r\n\r\nIf I try to start `jupyter notebook` now and open a Python3 Notebook I get no working Python3 kernel\r\n```\r\nKernel started: af71e14f-24f7-40a4-92a8-48e79f5d621c, name: python3\r\n/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher\r\n\r\n[I 00:55:29.178 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports\r\n/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher\r\n\r\n# ...\r\n```\r\n\r\n### Information on your system\r\n\r\n```bash\r\n$ lsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tUbuntu\r\nDescription:\tUbuntu 18.04.5 LTS\r\nRelease:\t18.04\r\nCodename:\tbionic\r\n```\r\n\r\n`spack debug report`:\r\n* **Spack:** 0.15.4-1470-99ef3d11c1\r\n* **Python:** 3.8.6\r\n* **Platform:** linux-ubuntu18.04-skylake\r\n\r\n### Additional information\r\n\r\n<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->\r\n- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform\r\n- [x] I have searched the issues of this repo and believe this is not a duplicate\r\n- [ ] I have run the failing commands in debug mode and reported the output\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\n\nclass PyIpykernel(PythonPackage):\n \"\"\"IPython Kernel for Jupyter\"\"\"\n\n homepage = \"https://pypi.python.org/pypi/ipykernel\"\n url = \"https://pypi.io/packages/source/i/ipykernel/ipykernel-5.3.4.tar.gz\"\n\n version('5.3.4', sha256='9b2652af1607986a1b231c62302d070bc0534f564c393a5d9d130db9abbbe89d')\n version('5.1.1', sha256='f0e962052718068ad3b1d8bcc703794660858f58803c3798628817f492a8769c')\n version('5.1.0', sha256='0fc0bf97920d454102168ec2008620066878848fcfca06c22b669696212e292f')\n version('4.10.0', sha256='699103c8e64886e3ec7053f2a6aa83bb90426063526f63a818732ff385202bad')\n version('4.5.0', sha256='245a798edb8fd751b95750d8645d736dd739a020e7fc7d5627dac4d1c35d8295')\n version('4.4.1', sha256='6d48398b3112efb733b254edede4b7f3262c28bd19f665b64ef1acf6ec5cd74f')\n version('4.4.0', sha256='d516427c3bd689205e6693c9616302ef34017b91ada3c9ea3fca6e90702b7ffe')\n version('4.3.1', sha256='8219d3eaa3e4d4efc5f349114e41a40f0986c91a960846bb81d5da817fb7cc3f')\n version('4.3.0', sha256='f214c661328c836e02b6f185f98f3eccd7ce396791937493ffa1babf5e3267ab')\n version('4.2.2', sha256='a876da43e01acec2c305abdd8e6aa55f052bab1196171ccf1cb9a6aa230298b0')\n version('4.2.1', sha256='081a5d4db33db58697be2d682b92f79b2c239493445f13dd457c15bc3e52c874')\n version('4.2.0', sha256='723b3d4baac20f0c9cd91fc75c3e813636ecb6c6e303fb34d628c3df078985a7')\n version('4.1.1', sha256='d8c5555386d0f18f1336dea9800f9f0fe96dcecc9757c0f980e11fdfadb661ff')\n version('4.1.0', sha256='e0e150ad55e487e49054efc9a4b0e2e17f27e1de77444b26760789077b146d86')\n\n depends_on('[email protected]:2.8,3.3:', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0:', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.2:', type=('build', 'run'))\n depends_on('py-setuptools', type='build', when='@5:')\n depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))\n depends_on('[email protected]:', type=('build', 'run'))\n depends_on('py-jupyter-client', type=('build', 'run'))\n depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))\n depends_on('py-appnope', when='platform=darwin', type=('build', 'run'))\n depends_on('py-pytest@:5.3.3,5.3.5:', type='test')\n depends_on('py-pytest-cov', type='test')\n # depends_on('py-flaky', type='test')\n depends_on('py-nose', type='test')\n", "path": "var/spack/repos/builtin/packages/py-ipykernel/package.py"}]}
| 2,648 | 166 |
gh_patches_debug_17691
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-867
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No documentation for network api
The following have missing documentation ([readthedocs](http://docker-py.readthedocs.org/)).
- [x] `Client.networks`
- [x] `Client.create_network`
- [x] `Client.remove_network`
- [x] `Client.inspect_network`
- [x] `Client.connect_container_to_network`
- [x] `Client.disconnect_container_from_network`
</issue>
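To make the documentation gap concrete, here is a hedged usage sketch for the methods listed above, following the signatures in the `docker/api/network.py` source shown below (docker-py 1.x era `docker.Client`). The image and network names are illustrative, and a running local Docker daemon is assumed.

```python
# Illustrative only -- method signatures taken from docker/api/network.py below.
import docker

cli = docker.Client()                                  # assumes a local daemon
net = cli.create_network('isolated_nw', driver='bridge')
net_id = net['Id']                                     # API returns {'Id': ..., 'Warning': ...}

print(cli.networks(names=['isolated_nw']))             # list networks, filtered by name
print(cli.inspect_network(net_id))

container = cli.create_container('busybox', 'top')
cli.start(container)
cli.connect_container_to_network(container, net_id)
cli.disconnect_container_from_network(container, net_id)

cli.stop(container)
cli.remove_network(net_id)
```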
<code>
[start of docker/api/volume.py]
1 from .. import utils
2
3
4 class VolumeApiMixin(object):
5 @utils.minimum_version('1.21')
6 def volumes(self, filters=None):
7 params = {
8 'filters': utils.convert_filters(filters) if filters else None
9 }
10 url = self._url('/volumes')
11 return self._result(self._get(url, params=params), True)
12
13 @utils.minimum_version('1.21')
14 def create_volume(self, name, driver=None, driver_opts=None):
15 url = self._url('/volumes/create')
16 if driver_opts is not None and not isinstance(driver_opts, dict):
17 raise TypeError('driver_opts must be a dictionary')
18
19 data = {
20 'Name': name,
21 'Driver': driver,
22 'DriverOpts': driver_opts,
23 }
24 return self._result(self._post_json(url, data=data), True)
25
26 @utils.minimum_version('1.21')
27 def inspect_volume(self, name):
28 url = self._url('/volumes/{0}', name)
29 return self._result(self._get(url), True)
30
31 @utils.minimum_version('1.21')
32 def remove_volume(self, name):
33 url = self._url('/volumes/{0}', name)
34 resp = self._delete(url)
35 self._raise_for_status(resp)
36 return True
37
[end of docker/api/volume.py]
[start of docker/api/network.py]
1 import json
2
3 from ..utils import check_resource, minimum_version
4
5
6 class NetworkApiMixin(object):
7 @minimum_version('1.21')
8 def networks(self, names=None, ids=None):
9 filters = {}
10 if names:
11 filters['name'] = names
12 if ids:
13 filters['id'] = ids
14
15 params = {'filters': json.dumps(filters)}
16
17 url = self._url("/networks")
18 res = self._get(url, params=params)
19 return self._result(res, json=True)
20
21 @minimum_version('1.21')
22 def create_network(self, name, driver=None):
23 data = {
24 'name': name,
25 'driver': driver,
26 }
27 url = self._url("/networks/create")
28 res = self._post_json(url, data=data)
29 return self._result(res, json=True)
30
31 @minimum_version('1.21')
32 def remove_network(self, net_id):
33 url = self._url("/networks/{0}", net_id)
34 res = self._delete(url)
35 self._raise_for_status(res)
36
37 @minimum_version('1.21')
38 def inspect_network(self, net_id):
39 url = self._url("/networks/{0}", net_id)
40 res = self._get(url)
41 return self._result(res, json=True)
42
43 @check_resource
44 @minimum_version('1.21')
45 def connect_container_to_network(self, container, net_id):
46 data = {"container": container}
47 url = self._url("/networks/{0}/connect", net_id)
48 self._post_json(url, data=data)
49
50 @check_resource
51 @minimum_version('1.21')
52 def disconnect_container_from_network(self, container, net_id):
53 data = {"container": container}
54 url = self._url("/networks/{0}/disconnect", net_id)
55 self._post_json(url, data=data)
56
[end of docker/api/network.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docker/api/network.py b/docker/api/network.py
--- a/docker/api/network.py
+++ b/docker/api/network.py
@@ -19,10 +19,14 @@
return self._result(res, json=True)
@minimum_version('1.21')
- def create_network(self, name, driver=None):
+ def create_network(self, name, driver=None, options=None):
+ if options is not None and not isinstance(options, dict):
+ raise TypeError('options must be a dictionary')
+
data = {
'name': name,
'driver': driver,
+ 'options': options
}
url = self._url("/networks/create")
res = self._post_json(url, data=data)
diff --git a/docker/api/volume.py b/docker/api/volume.py
--- a/docker/api/volume.py
+++ b/docker/api/volume.py
@@ -33,4 +33,3 @@
url = self._url('/volumes/{0}', name)
resp = self._delete(url)
self._raise_for_status(resp)
- return True
|
{"golden_diff": "diff --git a/docker/api/network.py b/docker/api/network.py\n--- a/docker/api/network.py\n+++ b/docker/api/network.py\n@@ -19,10 +19,14 @@\n return self._result(res, json=True)\n \n @minimum_version('1.21')\n- def create_network(self, name, driver=None):\n+ def create_network(self, name, driver=None, options=None):\n+ if options is not None and not isinstance(options, dict):\n+ raise TypeError('options must be a dictionary')\n+\n data = {\n 'name': name,\n 'driver': driver,\n+ 'options': options\n }\n url = self._url(\"/networks/create\")\n res = self._post_json(url, data=data)\ndiff --git a/docker/api/volume.py b/docker/api/volume.py\n--- a/docker/api/volume.py\n+++ b/docker/api/volume.py\n@@ -33,4 +33,3 @@\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n- return True\n", "issue": "No documentation for network api\nThe following have missing documentation ([readthedocs](http://docker-py.readthedocs.org/)).\n- [x] `Client.networks`\n- [x] `Client.create_network`\n- [x] `Client.remove_network`\n- [x] `Client.inspect_network`\n- [x] `Client.connect_container_to_network`\n- [x] `Client.disconnect_container_from_network`\n\n", "before_files": [{"content": "from .. import utils\n\n\nclass VolumeApiMixin(object):\n @utils.minimum_version('1.21')\n def volumes(self, filters=None):\n params = {\n 'filters': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/volumes')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n url = self._url('/volumes/create')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'DriverOpts': driver_opts,\n }\n return self._result(self._post_json(url, data=data), True)\n\n @utils.minimum_version('1.21')\n def inspect_volume(self, name):\n url = self._url('/volumes/{0}', name)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.21')\n def remove_volume(self, name):\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/volume.py"}, {"content": "import json\n\nfrom ..utils import check_resource, minimum_version\n\n\nclass NetworkApiMixin(object):\n @minimum_version('1.21')\n def networks(self, names=None, ids=None):\n filters = {}\n if names:\n filters['name'] = names\n if ids:\n filters['id'] = ids\n\n params = {'filters': json.dumps(filters)}\n\n url = self._url(\"/networks\")\n res = self._get(url, params=params)\n return self._result(res, json=True)\n\n @minimum_version('1.21')\n def create_network(self, name, driver=None):\n data = {\n 'name': name,\n 'driver': driver,\n }\n url = self._url(\"/networks/create\")\n res = self._post_json(url, data=data)\n return self._result(res, json=True)\n\n @minimum_version('1.21')\n def remove_network(self, net_id):\n url = self._url(\"/networks/{0}\", net_id)\n res = self._delete(url)\n self._raise_for_status(res)\n\n @minimum_version('1.21')\n def inspect_network(self, net_id):\n url = self._url(\"/networks/{0}\", net_id)\n res = self._get(url)\n return self._result(res, json=True)\n\n @check_resource\n @minimum_version('1.21')\n def connect_container_to_network(self, container, net_id):\n data = {\"container\": container}\n url = self._url(\"/networks/{0}/connect\", net_id)\n self._post_json(url, data=data)\n\n 
@check_resource\n @minimum_version('1.21')\n def disconnect_container_from_network(self, container, net_id):\n data = {\"container\": container}\n url = self._url(\"/networks/{0}/disconnect\", net_id)\n self._post_json(url, data=data)\n", "path": "docker/api/network.py"}]}
| 1,507 | 243 |
gh_patches_debug_5886
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-215
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not stashing changes before installing
Hi,
I'm regularly running into this situation: I have pending changes, I run `git commit -a`, and pre-commit tries to install its hooks:
```
[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.
An unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: Your local changes to the following files would be overwritten by checkout:
.pre-commit-config.yaml
.travis.yml
CHANGELOG
README.md
hooks.yaml
pre_commit_hooks/autopep8_wrapper.py
pre_commit_hooks/check_json.py
pre_commit_hooks/check_yaml.py
pre_commit_hooks/debug_statement_hook.py
pre_commit_hooks/end_of_file_fixer.py
pre_commit_hooks/tests_should_end_in_test.py
pre_commit_hooks/trailing_whitespace_fixer.py
pre_commit_hooks/util.py
pylintrc
requirements-dev.txt
setup.py
testing/util.py
tests/autopep8_wrapper_test.py
tests/debug_statement_hook_test.py
tests/end_of_file_fixer_test.py
tests/tests_should_end_in_test_test.py
tests/trailing_whitespace_fixer_test.py
tests/util_test.py
tox.ini
Please, commit your changes or stash them before you can switch branches.
Aborting
Check the log at ~/.pre-commit/pre-commit.log
```
The log contents are
```
An unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: Your local changes to the following files would be overwritten by checkout:
.pre-commit-config.yaml
.travis.yml
CHANGELOG
README.md
hooks.yaml
pre_commit_hooks/autopep8_wrapper.py
pre_commit_hooks/check_json.py
pre_commit_hooks/check_yaml.py
pre_commit_hooks/debug_statement_hook.py
pre_commit_hooks/end_of_file_fixer.py
pre_commit_hooks/tests_should_end_in_test.py
pre_commit_hooks/trailing_whitespace_fixer.py
pre_commit_hooks/util.py
pylintrc
requirements-dev.txt
setup.py
testing/util.py
tests/autopep8_wrapper_test.py
tests/debug_statement_hook_test.py
tests/end_of_file_fixer_test.py
tests/tests_should_end_in_test_test.py
tests/trailing_whitespace_fixer_test.py
tests/util_test.py
tox.ini
Please, commit your changes or stash them before you can switch branches.
Aborting
Traceback (most recent call last):
File "/home/qdm/workspace/web/pre-commit/pre_commit/error_handler.py", line 34, in error_handler
yield
File "/home/qdm/workspace/web/pre-commit/pre_commit/main.py", line 129, in main
return run(runner, args)
File "/home/qdm/workspace/web/pre-commit/pre_commit/commands/run.py", line 165, in run
return _run_hooks(runner, args, write=write, environ=environ)
File "/home/qdm/workspace/web/pre-commit/pre_commit/commands/run.py", line 115, in _run_hooks
for repo in runner.repositories:
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/runner.py", line 43, in repositories
repository.require_installed()
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 64, in require_installed
self.install()
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 78, in install
for language_name, _ in self.languages
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 41, in languages
for _, hook in self.hooks
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 49, in hooks
for hook in self.repo_config['hooks']
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 49, in <genexpr>
for hook in self.repo_config['hooks']
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/manifest.py", line 24, in hooks
return dict((hook['id'], hook) for hook in self.manifest_contents)
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/manifest.py", line 18, in manifest_contents
self.repo_path_getter.repo_path, C.MANIFEST_FILE,
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/store.py", line 46, in repo_path
return self._store.clone(self._repo, self._sha)
File "/home/qdm/workspace/web/pre-commit/pre_commit/store.py", line 119, in clone
cmd_output('git', 'checkout', sha)
File "/home/qdm/workspace/web/pre-commit/pre_commit/util.py", line 160, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
pre_commit.util.CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: Your local changes to the following files would be overwritten by checkout:
.pre-commit-config.yaml
.travis.yml
CHANGELOG
README.md
hooks.yaml
pre_commit_hooks/autopep8_wrapper.py
pre_commit_hooks/check_json.py
pre_commit_hooks/check_yaml.py
pre_commit_hooks/debug_statement_hook.py
pre_commit_hooks/end_of_file_fixer.py
pre_commit_hooks/tests_should_end_in_test.py
pre_commit_hooks/trailing_whitespace_fixer.py
pre_commit_hooks/util.py
pylintrc
requirements-dev.txt
setup.py
testing/util.py
tests/autopep8_wrapper_test.py
tests/debug_statement_hook_test.py
tests/end_of_file_fixer_test.py
tests/tests_should_end_in_test_test.py
tests/trailing_whitespace_fixer_test.py
tests/util_test.py
tox.ini
Please, commit your changes or stash them before you can switch branches.
Aborting
```
I think this is a regression from a previous version, it was more seamless then.
</issue>
<code>
[start of pre_commit/store.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import logging
6 import os
7 import os.path
8 import sqlite3
9 import tempfile
10
11 from cached_property import cached_property
12
13 from pre_commit.prefixed_command_runner import PrefixedCommandRunner
14 from pre_commit.util import clean_path_on_failure
15 from pre_commit.util import cmd_output
16 from pre_commit.util import cwd
17
18
19 logger = logging.getLogger('pre_commit')
20
21
22 def _get_default_directory():
23 """Returns the default directory for the Store. This is intentionally
24 underscored to indicate that `Store.get_default_directory` is the intended
25 way to get this information. This is also done so
26 `Store.get_default_directory` can be mocked in tests and
27 `_get_default_directory` can be tested.
28 """
29 return os.environ.get(
30 'PRE_COMMIT_HOME',
31 os.path.join(os.path.expanduser('~'), '.pre-commit'),
32 )
33
34
35 class Store(object):
36 get_default_directory = staticmethod(_get_default_directory)
37
38 class RepoPathGetter(object):
39 def __init__(self, repo, sha, store):
40 self._repo = repo
41 self._sha = sha
42 self._store = store
43
44 @cached_property
45 def repo_path(self):
46 return self._store.clone(self._repo, self._sha)
47
48 def __init__(self, directory=None):
49 if directory is None:
50 directory = self.get_default_directory()
51
52 self.directory = directory
53 self.__created = False
54
55 def _write_readme(self):
56 with io.open(os.path.join(self.directory, 'README'), 'w') as readme:
57 readme.write(
58 'This directory is maintained by the pre-commit project.\n'
59 'Learn more: https://github.com/pre-commit/pre-commit\n'
60 )
61
62 def _write_sqlite_db(self):
63 # To avoid a race where someone ^Cs between db creation and execution
64 # of the CREATE TABLE statement
65 fd, tmpfile = tempfile.mkstemp(dir=self.directory)
66 # We'll be managing this file ourselves
67 os.close(fd)
68 # sqlite doesn't close its fd with its contextmanager >.<
69 # contextlib.closing fixes this.
70 # See: http://stackoverflow.com/a/28032829/812183
71 with contextlib.closing(sqlite3.connect(tmpfile)) as db:
72 db.executescript(
73 'CREATE TABLE repos ('
74 ' repo CHAR(255) NOT NULL,'
75 ' ref CHAR(255) NOT NULL,'
76 ' path CHAR(255) NOT NULL,'
77 ' PRIMARY KEY (repo, ref)'
78 ');'
79 )
80
81 # Atomic file move
82 os.rename(tmpfile, self.db_path)
83
84 def _create(self):
85 if os.path.exists(self.db_path):
86 return
87 if not os.path.exists(self.directory):
88 os.makedirs(self.directory)
89 self._write_readme()
90 self._write_sqlite_db()
91
92 def require_created(self):
93 """Require the pre-commit file store to be created."""
94 if self.__created:
95 return
96
97 self._create()
98 self.__created = True
99
100 def clone(self, url, sha):
101 """Clone the given url and checkout the specific sha."""
102 self.require_created()
103
104 # Check if we already exist
105 with sqlite3.connect(self.db_path) as db:
106 result = db.execute(
107 'SELECT path FROM repos WHERE repo = ? AND ref = ?',
108 [url, sha],
109 ).fetchone()
110 if result:
111 return result[0]
112
113 logger.info('Initializing environment for {0}.'.format(url))
114
115 dir = tempfile.mkdtemp(prefix='repo', dir=self.directory)
116 with clean_path_on_failure(dir):
117 cmd_output('git', 'clone', '--no-checkout', url, dir)
118 with cwd(dir):
119 cmd_output('git', 'checkout', sha)
120
121 # Update our db with the created repo
122 with sqlite3.connect(self.db_path) as db:
123 db.execute(
124 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',
125 [url, sha, dir],
126 )
127 return dir
128
129 def get_repo_path_getter(self, repo, sha):
130 return self.RepoPathGetter(repo, sha, self)
131
132 @cached_property
133 def cmd_runner(self):
134 return PrefixedCommandRunner(self.directory)
135
136 @cached_property
137 def db_path(self):
138 return os.path.join(self.directory, 'db.db')
139
[end of pre_commit/store.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -116,7 +116,7 @@
with clean_path_on_failure(dir):
cmd_output('git', 'clone', '--no-checkout', url, dir)
with cwd(dir):
- cmd_output('git', 'checkout', sha)
+ cmd_output('git', 'reset', sha, '--hard')
# Update our db with the created repo
with sqlite3.connect(self.db_path) as db:
|
{"golden_diff": "diff --git a/pre_commit/store.py b/pre_commit/store.py\n--- a/pre_commit/store.py\n+++ b/pre_commit/store.py\n@@ -116,7 +116,7 @@\n with clean_path_on_failure(dir):\n cmd_output('git', 'clone', '--no-checkout', url, dir)\n with cwd(dir):\n- cmd_output('git', 'checkout', sha)\n+ cmd_output('git', 'reset', sha, '--hard')\n \n # Update our db with the created repo\n with sqlite3.connect(self.db_path) as db:\n", "issue": "Not stashing changes before installing\nHi,\n\nI'm regularly running into this situation: I have pending changes, I run `git commit -a`, and pre-commit tries to install its hooks:\n\n```\n[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.\nAn unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']\nReturn code: 1\nExpected return code: 0\nOutput: (none)\nErrors: \n error: Your local changes to the following files would be overwritten by checkout:\n .pre-commit-config.yaml\n .travis.yml\n CHANGELOG\n README.md\n hooks.yaml\n pre_commit_hooks/autopep8_wrapper.py\n pre_commit_hooks/check_json.py\n pre_commit_hooks/check_yaml.py\n pre_commit_hooks/debug_statement_hook.py\n pre_commit_hooks/end_of_file_fixer.py\n pre_commit_hooks/tests_should_end_in_test.py\n pre_commit_hooks/trailing_whitespace_fixer.py\n pre_commit_hooks/util.py\n pylintrc\n requirements-dev.txt\n setup.py\n testing/util.py\n tests/autopep8_wrapper_test.py\n tests/debug_statement_hook_test.py\n tests/end_of_file_fixer_test.py\n tests/tests_should_end_in_test_test.py\n tests/trailing_whitespace_fixer_test.py\n tests/util_test.py\n tox.ini\n Please, commit your changes or stash them before you can switch branches.\n Aborting\n\n\nCheck the log at ~/.pre-commit/pre-commit.log\n```\n\nThe log contents are \n\n```\nAn unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']\nReturn code: 1\nExpected return code: 0\nOutput: (none)\nErrors: \n error: Your local changes to the following files would be overwritten by checkout:\n .pre-commit-config.yaml\n .travis.yml\n CHANGELOG\n README.md\n hooks.yaml\n pre_commit_hooks/autopep8_wrapper.py\n pre_commit_hooks/check_json.py\n pre_commit_hooks/check_yaml.py\n pre_commit_hooks/debug_statement_hook.py\n pre_commit_hooks/end_of_file_fixer.py\n pre_commit_hooks/tests_should_end_in_test.py\n pre_commit_hooks/trailing_whitespace_fixer.py\n pre_commit_hooks/util.py\n pylintrc\n requirements-dev.txt\n setup.py\n testing/util.py\n tests/autopep8_wrapper_test.py\n tests/debug_statement_hook_test.py\n tests/end_of_file_fixer_test.py\n tests/tests_should_end_in_test_test.py\n tests/trailing_whitespace_fixer_test.py\n tests/util_test.py\n tox.ini\n Please, commit your changes or stash them before you can switch branches.\n Aborting\n\n\nTraceback (most recent call last):\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/error_handler.py\", line 34, in error_handler\n yield\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/main.py\", line 129, in main\n return run(runner, args)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/commands/run.py\", line 165, in run\n return _run_hooks(runner, args, write=write, environ=environ)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/commands/run.py\", line 115, in _run_hooks\n for repo in runner.repositories:\n File \"/usr/lib/python3.4/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = 
self.func(obj)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/runner.py\", line 43, in repositories\n repository.require_installed()\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/repository.py\", line 64, in require_installed\n self.install()\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/repository.py\", line 78, in install\n for language_name, _ in self.languages\n File \"/usr/lib/python3.4/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = self.func(obj)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/repository.py\", line 41, in languages\n for _, hook in self.hooks\n File \"/usr/lib/python3.4/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = self.func(obj)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/repository.py\", line 49, in hooks\n for hook in self.repo_config['hooks']\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/repository.py\", line 49, in <genexpr>\n for hook in self.repo_config['hooks']\n File \"/usr/lib/python3.4/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = self.func(obj)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/manifest.py\", line 24, in hooks\n return dict((hook['id'], hook) for hook in self.manifest_contents)\n File \"/usr/lib/python3.4/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = self.func(obj)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/manifest.py\", line 18, in manifest_contents\n self.repo_path_getter.repo_path, C.MANIFEST_FILE,\n File \"/usr/lib/python3.4/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = self.func(obj)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/store.py\", line 46, in repo_path\n return self._store.clone(self._repo, self._sha)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/store.py\", line 119, in clone\n cmd_output('git', 'checkout', sha)\n File \"/home/qdm/workspace/web/pre-commit/pre_commit/util.py\", line 160, in cmd_output\n returncode, cmd, retcode, output=(stdout, stderr),\npre_commit.util.CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']\nReturn code: 1\nExpected return code: 0\nOutput: (none)\nErrors: \n error: Your local changes to the following files would be overwritten by checkout:\n .pre-commit-config.yaml\n .travis.yml\n CHANGELOG\n README.md\n hooks.yaml\n pre_commit_hooks/autopep8_wrapper.py\n pre_commit_hooks/check_json.py\n pre_commit_hooks/check_yaml.py\n pre_commit_hooks/debug_statement_hook.py\n pre_commit_hooks/end_of_file_fixer.py\n pre_commit_hooks/tests_should_end_in_test.py\n pre_commit_hooks/trailing_whitespace_fixer.py\n pre_commit_hooks/util.py\n pylintrc\n requirements-dev.txt\n setup.py\n testing/util.py\n tests/autopep8_wrapper_test.py\n tests/debug_statement_hook_test.py\n tests/end_of_file_fixer_test.py\n tests/tests_should_end_in_test_test.py\n tests/trailing_whitespace_fixer_test.py\n tests/util_test.py\n tox.ini\n Please, commit your changes or stash them before you can switch branches.\n Aborting\n```\n\nI think this is a regression from a previous version, it was more seamless then.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport os\nimport os.path\nimport sqlite3\nimport tempfile\n\nfrom cached_property import 
cached_property\n\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cwd\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory():\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n return os.environ.get(\n 'PRE_COMMIT_HOME',\n os.path.join(os.path.expanduser('~'), '.pre-commit'),\n )\n\n\nclass Store(object):\n get_default_directory = staticmethod(_get_default_directory)\n\n class RepoPathGetter(object):\n def __init__(self, repo, sha, store):\n self._repo = repo\n self._sha = sha\n self._store = store\n\n @cached_property\n def repo_path(self):\n return self._store.clone(self._repo, self._sha)\n\n def __init__(self, directory=None):\n if directory is None:\n directory = self.get_default_directory()\n\n self.directory = directory\n self.__created = False\n\n def _write_readme(self):\n with io.open(os.path.join(self.directory, 'README'), 'w') as readme:\n readme.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n'\n )\n\n def _write_sqlite_db(self):\n # To avoid a race where someone ^Cs between db creation and execution\n # of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n # sqlite doesn't close its fd with its contextmanager >.<\n # contextlib.closing fixes this.\n # See: http://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(tmpfile)) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo CHAR(255) NOT NULL,'\n ' ref CHAR(255) NOT NULL,'\n ' path CHAR(255) NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');'\n )\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n def _create(self):\n if os.path.exists(self.db_path):\n return\n if not os.path.exists(self.directory):\n os.makedirs(self.directory)\n self._write_readme()\n self._write_sqlite_db()\n\n def require_created(self):\n \"\"\"Require the pre-commit file store to be created.\"\"\"\n if self.__created:\n return\n\n self._create()\n self.__created = True\n\n def clone(self, url, sha):\n \"\"\"Clone the given url and checkout the specific sha.\"\"\"\n self.require_created()\n\n # Check if we already exist\n with sqlite3.connect(self.db_path) as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? 
AND ref = ?',\n [url, sha],\n ).fetchone()\n if result:\n return result[0]\n\n logger.info('Initializing environment for {0}.'.format(url))\n\n dir = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(dir):\n cmd_output('git', 'clone', '--no-checkout', url, dir)\n with cwd(dir):\n cmd_output('git', 'checkout', sha)\n\n # Update our db with the created repo\n with sqlite3.connect(self.db_path) as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [url, sha, dir],\n )\n return dir\n\n def get_repo_path_getter(self, repo, sha):\n return self.RepoPathGetter(repo, sha, self)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.directory)\n\n @cached_property\n def db_path(self):\n return os.path.join(self.directory, 'db.db')\n", "path": "pre_commit/store.py"}]}
| 3,583 | 123 |
gh_patches_debug_35739
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-12771
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] wayland/all: incompatible with latest (1.52.0) Conan version
### Package and Environment Details
* Package Name/Version: **wayland/1.20.0**
* Conan version: **conan 1.52.0**
### Conan profile
_No response_
### Steps to reproduce
conan export recipes/wayland/all wayland/1.20.0@
### Logs
<details><summary>Click to expand log</summary>
```
File "recipes/wayland/all/conanfile.py", line 5, in <module>
from conan.tools.gnu.pkgconfigdeps.pc_files_creator import get_pc_files_and_content
ModuleNotFoundError: No module named 'conan.tools.gnu.pkgconfigdeps.pc_files_creator'; 'conan.tools.gnu.pkgconfigdeps' is not a package
```
</details>
</issue>
<code>
[start of recipes/wayland/all/conanfile.py]
1 from conan import ConanFile
2 from conan.errors import ConanInvalidConfiguration
3 from conan.tools.build import cross_building
4 from conan.tools.files import copy, get, mkdir, replace_in_file, rmdir, save
5 from conan.tools.gnu.pkgconfigdeps.pc_files_creator import get_pc_files_and_content
6 from conan.tools.layout import basic_layout
7 from conan.tools.meson import Meson, MesonToolchain
8 from conan.tools.scm import Version
9 import os
10
11 required_conan_version = ">=1.50.0"
12
13
14 class WaylandConan(ConanFile):
15 name = "wayland"
16 description = (
17 "Wayland is a project to define a protocol for a compositor to talk to "
18 "its clients as well as a library implementation of the protocol"
19 )
20 topics = ("protocol", "compositor", "display")
21 url = "https://github.com/conan-io/conan-center-index"
22 homepage = "https://wayland.freedesktop.org"
23 license = "MIT"
24
25 settings = "os", "arch", "compiler", "build_type"
26 options = {
27 "shared": [True, False],
28 "fPIC": [True, False],
29 "enable_libraries": [True, False],
30 "enable_dtd_validation": [True, False],
31 }
32 default_options = {
33 "shared": False,
34 "fPIC": True,
35 "enable_libraries": True,
36 "enable_dtd_validation": True,
37 }
38
39 generators = "PkgConfigDeps", "VirtualBuildEnv", "VirtualRunEnv"
40
41 def configure(self):
42 if self.options.shared:
43 del self.options.fPIC
44 try:
45 del self.settings.compiler.libcxx
46 except Exception:
47 pass
48 try:
49 del self.settings.compiler.cppstd
50 except Exception:
51 pass
52
53 def requirements(self):
54 if self.options.enable_libraries:
55 self.requires("libffi/3.4.2")
56 if self.options.enable_dtd_validation:
57 self.requires("libxml2/2.9.14")
58 self.requires("expat/2.4.8")
59
60 def validate(self):
61 if self.info.settings.os != "Linux":
62 raise ConanInvalidConfiguration("Wayland can be built on Linux only")
63
64 def build_requirements(self):
65 self.tool_requires("meson/0.63.1")
66 self.tool_requires("pkgconf/1.7.4")
67 if cross_building(self):
68 self.tool_requires(self.ref)
69
70 def layout(self):
71 basic_layout(self, src_folder="src")
72
73 def source(self):
74 get(self, **self.conan_data["sources"][self.version], strip_root=True)
75
76 def generate(self):
77 tc = MesonToolchain(self)
78 tc.project_options["libdir"] = "lib"
79 tc.project_options["datadir"] = "res"
80 tc.project_options["libraries"] = self.options.enable_libraries
81 tc.project_options["dtd_validation"] = self.options.enable_dtd_validation
82 tc.project_options["documentation"] = False
83 if Version(self.version) >= "1.18.91":
84 tc.project_options["scanner"] = True
85
86 # Generate PC files for the tool_requires wayland package to ensure wayland-scanner is found for build machine.
87 if cross_building(self):
88 native_generators_folder = os.path.join(self.generators_folder, "native")
89 mkdir(self, native_generators_folder)
90 for target in ["wayland", "expat", "libxml2", "libiconv"]:
91 for pc_name, pc_content in get_pc_files_and_content(self, self.dependencies.build[target]).items():
92 save(self, os.path.join(native_generators_folder, pc_name), pc_content)
93 tc.project_options["build.pkg_config_path"] = native_generators_folder
94 tc.generate()
95
96 def _patch_sources(self):
97 replace_in_file(self, os.path.join(self.source_folder, "meson.build"),
98 "subdir('tests')", "#subdir('tests')")
99
100 def build(self):
101 self._patch_sources()
102 meson = Meson(self)
103 meson.configure()
104 meson.build()
105
106 def package(self):
107 copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
108 meson = Meson(self)
109 meson.install()
110 pkg_config_dir = os.path.join(self.package_folder, "lib", "pkgconfig")
111 rmdir(self, pkg_config_dir)
112
113 def package_info(self):
114 self.cpp_info.components["wayland-scanner"].set_property("pkg_config_name", "wayland-scanner")
115 self.cpp_info.components["wayland-scanner"].names["pkg_config"] = "wayland-scanner"
116 self.cpp_info.components["wayland-scanner"].resdirs = ["res"]
117
118 self.cpp_info.components["wayland-scanner"].includedirs = []
119 self.cpp_info.components["wayland-scanner"].libdirs = []
120 self.cpp_info.components["wayland-scanner"].set_property("component_version", self.version)
121
122 self.cpp_info.components["wayland-scanner"].requires = ["expat::expat"]
123 if self.options.enable_dtd_validation:
124 self.cpp_info.components["wayland-scanner"].requires.append("libxml2::libxml2")
125 pkgconfig_variables = {
126 'datarootdir': '${prefix}/res',
127 'pkgdatadir': '${datarootdir}/wayland',
128 'bindir': '${prefix}/bin',
129 'wayland_scanner': '${bindir}/wayland-scanner',
130 }
131 self.cpp_info.components["wayland-scanner"].set_property(
132 "pkg_config_custom_content",
133 "\n".join(f"{key}={value}" for key,value in pkgconfig_variables.items()))
134
135 bindir = os.path.join(self.package_folder, "bin")
136 self.buildenv_info.prepend_path("PATH", bindir)
137 self.runenv_info.prepend_path("PATH", bindir)
138 # TODO: Remove in Conan 2.0 where Environment class will be required.
139 self.output.info("Appending PATH environment variable: {}".format(bindir))
140 self.env_info.PATH.append(bindir)
141
142 if self.options.enable_libraries:
143 self.cpp_info.components["wayland-server"].libs = ["wayland-server"]
144 self.cpp_info.components["wayland-server"].set_property("pkg_config_name", "wayland-server")
145 self.cpp_info.components["wayland-server"].names["pkg_config"] = "wayland-server"
146 self.cpp_info.components["wayland-server"].requires = ["libffi::libffi"]
147 self.cpp_info.components["wayland-server"].system_libs = ["pthread", "m"]
148 self.cpp_info.components["wayland-server"].resdirs = ["res"]
149 if self.version >= Version("1.21.0") and self.settings.os == "Linux":
150 self.cpp_info.components["wayland-server"].system_libs += ["rt"]
151 self.cpp_info.components["wayland-server"].set_property("component_version", self.version)
152
153 pkgconfig_variables = {
154 'datarootdir': '${prefix}/res',
155 'pkgdatadir': '${datarootdir}/wayland',
156 }
157 self.cpp_info.components["wayland-server"].set_property(
158 "pkg_config_custom_content",
159 "\n".join(f"{key}={value}" for key, value in pkgconfig_variables.items()))
160
161 self.cpp_info.components["wayland-client"].libs = ["wayland-client"]
162 self.cpp_info.components["wayland-client"].set_property("pkg_config_name", "wayland-client")
163 self.cpp_info.components["wayland-client"].names["pkg_config"] = "wayland-client"
164 self.cpp_info.components["wayland-client"].requires = ["libffi::libffi"]
165 self.cpp_info.components["wayland-client"].system_libs = ["pthread", "m"]
166 self.cpp_info.components["wayland-client"].resdirs = ["res"]
167 if self.version >= Version("1.21.0") and self.settings.os == "Linux":
168 self.cpp_info.components["wayland-client"].system_libs += ["rt"]
169 self.cpp_info.components["wayland-client"].set_property("component_version", self.version)
170
171 pkgconfig_variables = {
172 'datarootdir': '${prefix}/res',
173 'pkgdatadir': '${datarootdir}/wayland',
174 }
175 self.cpp_info.components["wayland-client"].set_property(
176 "pkg_config_custom_content",
177 "\n".join(f"{key}={value}" for key, value in pkgconfig_variables.items()))
178
179 self.cpp_info.components["wayland-cursor"].libs = ["wayland-cursor"]
180 self.cpp_info.components["wayland-cursor"].set_property("pkg_config_name", "wayland-cursor")
181 self.cpp_info.components["wayland-cursor"].names["pkg_config"] = "wayland-cursor"
182 self.cpp_info.components["wayland-cursor"].requires = ["wayland-client"]
183 self.cpp_info.components["wayland-cursor"].set_property("component_version", self.version)
184
185 self.cpp_info.components["wayland-egl"].libs = ["wayland-egl"]
186 self.cpp_info.components["wayland-egl"].set_property("pkg_config_name", "wayland-egl")
187 self.cpp_info.components["wayland-egl"].names["pkg_config"] = "wayland-egl"
188 self.cpp_info.components["wayland-egl"].requires = ["wayland-client"]
189 self.cpp_info.components["wayland-egl"].set_property("component_version", "18.1.0")
190
191 self.cpp_info.components["wayland-egl-backend"].names["pkg_config"] = "wayland-egl-backend"
192 self.cpp_info.components["wayland-egl-backend"].set_property("pkg_config_name", "wayland-egl-backend")
193 self.cpp_info.components["wayland-egl-backend"].set_property("component_version", "3")
194
195 bindir = os.path.join(self.package_folder, "bin")
196 self.output.info("Appending PATH environment variable: {}".format(bindir))
197 self.env_info.PATH.append(bindir)
198
[end of recipes/wayland/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/recipes/wayland/all/conanfile.py b/recipes/wayland/all/conanfile.py
--- a/recipes/wayland/all/conanfile.py
+++ b/recipes/wayland/all/conanfile.py
@@ -1,8 +1,7 @@
from conan import ConanFile
from conan.errors import ConanInvalidConfiguration
from conan.tools.build import cross_building
-from conan.tools.files import copy, get, mkdir, replace_in_file, rmdir, save
-from conan.tools.gnu.pkgconfigdeps.pc_files_creator import get_pc_files_and_content
+from conan.tools.files import copy, get, replace_in_file, rmdir
from conan.tools.layout import basic_layout
from conan.tools.meson import Meson, MesonToolchain
from conan.tools.scm import Version
@@ -82,21 +81,20 @@
tc.project_options["documentation"] = False
if Version(self.version) >= "1.18.91":
tc.project_options["scanner"] = True
-
- # Generate PC files for the tool_requires wayland package to ensure wayland-scanner is found for build machine.
- if cross_building(self):
- native_generators_folder = os.path.join(self.generators_folder, "native")
- mkdir(self, native_generators_folder)
- for target in ["wayland", "expat", "libxml2", "libiconv"]:
- for pc_name, pc_content in get_pc_files_and_content(self, self.dependencies.build[target]).items():
- save(self, os.path.join(native_generators_folder, pc_name), pc_content)
- tc.project_options["build.pkg_config_path"] = native_generators_folder
tc.generate()
def _patch_sources(self):
replace_in_file(self, os.path.join(self.source_folder, "meson.build"),
"subdir('tests')", "#subdir('tests')")
+ if cross_building(self):
+ replace_in_file(self, f"{self.source_folder}/src/meson.build",
+ "scanner_dep = dependency('wayland-scanner', native: true, version: meson.project_version())",
+ "# scanner_dep = dependency('wayland-scanner', native: true, version: meson.project_version())")
+ replace_in_file(self, f"{self.source_folder}/src/meson.build",
+ "wayland_scanner_for_build = find_program(scanner_dep.get_variable(pkgconfig: 'wayland_scanner'))",
+ "wayland_scanner_for_build = find_program('wayland-scanner')")
+
def build(self):
self._patch_sources()
meson = Meson(self)
|
{"golden_diff": "diff --git a/recipes/wayland/all/conanfile.py b/recipes/wayland/all/conanfile.py\n--- a/recipes/wayland/all/conanfile.py\n+++ b/recipes/wayland/all/conanfile.py\n@@ -1,8 +1,7 @@\n from conan import ConanFile\n from conan.errors import ConanInvalidConfiguration\n from conan.tools.build import cross_building\n-from conan.tools.files import copy, get, mkdir, replace_in_file, rmdir, save\n-from conan.tools.gnu.pkgconfigdeps.pc_files_creator import get_pc_files_and_content\n+from conan.tools.files import copy, get, replace_in_file, rmdir\n from conan.tools.layout import basic_layout\n from conan.tools.meson import Meson, MesonToolchain\n from conan.tools.scm import Version\n@@ -82,21 +81,20 @@\n tc.project_options[\"documentation\"] = False\n if Version(self.version) >= \"1.18.91\":\n tc.project_options[\"scanner\"] = True\n-\n- # Generate PC files for the tool_requires wayland package to ensure wayland-scanner is found for build machine.\n- if cross_building(self):\n- native_generators_folder = os.path.join(self.generators_folder, \"native\")\n- mkdir(self, native_generators_folder)\n- for target in [\"wayland\", \"expat\", \"libxml2\", \"libiconv\"]:\n- for pc_name, pc_content in get_pc_files_and_content(self, self.dependencies.build[target]).items():\n- save(self, os.path.join(native_generators_folder, pc_name), pc_content)\n- tc.project_options[\"build.pkg_config_path\"] = native_generators_folder\n tc.generate()\n \n def _patch_sources(self):\n replace_in_file(self, os.path.join(self.source_folder, \"meson.build\"),\n \"subdir('tests')\", \"#subdir('tests')\")\n \n+ if cross_building(self):\n+ replace_in_file(self, f\"{self.source_folder}/src/meson.build\",\n+ \"scanner_dep = dependency('wayland-scanner', native: true, version: meson.project_version())\",\n+ \"# scanner_dep = dependency('wayland-scanner', native: true, version: meson.project_version())\")\n+ replace_in_file(self, f\"{self.source_folder}/src/meson.build\",\n+ \"wayland_scanner_for_build = find_program(scanner_dep.get_variable(pkgconfig: 'wayland_scanner'))\",\n+ \"wayland_scanner_for_build = find_program('wayland-scanner')\")\n+\n def build(self):\n self._patch_sources()\n meson = Meson(self)\n", "issue": "[package] wayland/all: incompatible with latest (1.52.0) Conan version\n### Package and Environment Details\n\n* Package Name/Version: **wayland/1.20.0**\r\n* Conan version: **conan 1.52.0**\n\n### Conan profile\n\n_No response_\n\n### Steps to reproduce\n\nconan export recipes/wayland/all wayland/1.20.0@\n\n### Logs\n\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nFile \"recipes/wayland/all/conanfile.py\", line 5, in <module>\r\n from conan.tools.gnu.pkgconfigdeps.pc_files_creator import get_pc_files_and_content\r\nModuleNotFoundError: No module named 'conan.tools.gnu.pkgconfigdeps.pc_files_creator'; 'conan.tools.gnu.pkgconfigdeps' is not a package\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.build import cross_building\nfrom conan.tools.files import copy, get, mkdir, replace_in_file, rmdir, save\nfrom conan.tools.gnu.pkgconfigdeps.pc_files_creator import get_pc_files_and_content\nfrom conan.tools.layout import basic_layout\nfrom conan.tools.meson import Meson, MesonToolchain\nfrom conan.tools.scm import Version\nimport os\n\nrequired_conan_version = \">=1.50.0\"\n\n\nclass WaylandConan(ConanFile):\n name = \"wayland\"\n description = (\n \"Wayland is a project to define a 
protocol for a compositor to talk to \"\n \"its clients as well as a library implementation of the protocol\"\n )\n topics = (\"protocol\", \"compositor\", \"display\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://wayland.freedesktop.org\"\n license = \"MIT\"\n\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"enable_libraries\": [True, False],\n \"enable_dtd_validation\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"enable_libraries\": True,\n \"enable_dtd_validation\": True,\n }\n\n generators = \"PkgConfigDeps\", \"VirtualBuildEnv\", \"VirtualRunEnv\"\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n try:\n del self.settings.compiler.libcxx\n except Exception:\n pass\n try:\n del self.settings.compiler.cppstd\n except Exception:\n pass\n\n def requirements(self):\n if self.options.enable_libraries:\n self.requires(\"libffi/3.4.2\")\n if self.options.enable_dtd_validation:\n self.requires(\"libxml2/2.9.14\")\n self.requires(\"expat/2.4.8\")\n\n def validate(self):\n if self.info.settings.os != \"Linux\":\n raise ConanInvalidConfiguration(\"Wayland can be built on Linux only\")\n\n def build_requirements(self):\n self.tool_requires(\"meson/0.63.1\")\n self.tool_requires(\"pkgconf/1.7.4\")\n if cross_building(self):\n self.tool_requires(self.ref)\n\n def layout(self):\n basic_layout(self, src_folder=\"src\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n\n def generate(self):\n tc = MesonToolchain(self)\n tc.project_options[\"libdir\"] = \"lib\"\n tc.project_options[\"datadir\"] = \"res\"\n tc.project_options[\"libraries\"] = self.options.enable_libraries\n tc.project_options[\"dtd_validation\"] = self.options.enable_dtd_validation\n tc.project_options[\"documentation\"] = False\n if Version(self.version) >= \"1.18.91\":\n tc.project_options[\"scanner\"] = True\n\n # Generate PC files for the tool_requires wayland package to ensure wayland-scanner is found for build machine.\n if cross_building(self):\n native_generators_folder = os.path.join(self.generators_folder, \"native\")\n mkdir(self, native_generators_folder)\n for target in [\"wayland\", \"expat\", \"libxml2\", \"libiconv\"]:\n for pc_name, pc_content in get_pc_files_and_content(self, self.dependencies.build[target]).items():\n save(self, os.path.join(native_generators_folder, pc_name), pc_content)\n tc.project_options[\"build.pkg_config_path\"] = native_generators_folder\n tc.generate()\n\n def _patch_sources(self):\n replace_in_file(self, os.path.join(self.source_folder, \"meson.build\"),\n \"subdir('tests')\", \"#subdir('tests')\")\n\n def build(self):\n self._patch_sources()\n meson = Meson(self)\n meson.configure()\n meson.build()\n\n def package(self):\n copy(self, \"COPYING\", src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n meson = Meson(self)\n meson.install()\n pkg_config_dir = os.path.join(self.package_folder, \"lib\", \"pkgconfig\")\n rmdir(self, pkg_config_dir)\n\n def package_info(self):\n self.cpp_info.components[\"wayland-scanner\"].set_property(\"pkg_config_name\", \"wayland-scanner\")\n self.cpp_info.components[\"wayland-scanner\"].names[\"pkg_config\"] = \"wayland-scanner\"\n self.cpp_info.components[\"wayland-scanner\"].resdirs = [\"res\"]\n\n self.cpp_info.components[\"wayland-scanner\"].includedirs = []\n 
self.cpp_info.components[\"wayland-scanner\"].libdirs = []\n self.cpp_info.components[\"wayland-scanner\"].set_property(\"component_version\", self.version)\n\n self.cpp_info.components[\"wayland-scanner\"].requires = [\"expat::expat\"]\n if self.options.enable_dtd_validation:\n self.cpp_info.components[\"wayland-scanner\"].requires.append(\"libxml2::libxml2\")\n pkgconfig_variables = {\n 'datarootdir': '${prefix}/res',\n 'pkgdatadir': '${datarootdir}/wayland',\n 'bindir': '${prefix}/bin',\n 'wayland_scanner': '${bindir}/wayland-scanner',\n }\n self.cpp_info.components[\"wayland-scanner\"].set_property(\n \"pkg_config_custom_content\",\n \"\\n\".join(f\"{key}={value}\" for key,value in pkgconfig_variables.items()))\n\n bindir = os.path.join(self.package_folder, \"bin\")\n self.buildenv_info.prepend_path(\"PATH\", bindir)\n self.runenv_info.prepend_path(\"PATH\", bindir)\n # TODO: Remove in Conan 2.0 where Environment class will be required.\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n\n if self.options.enable_libraries:\n self.cpp_info.components[\"wayland-server\"].libs = [\"wayland-server\"]\n self.cpp_info.components[\"wayland-server\"].set_property(\"pkg_config_name\", \"wayland-server\")\n self.cpp_info.components[\"wayland-server\"].names[\"pkg_config\"] = \"wayland-server\"\n self.cpp_info.components[\"wayland-server\"].requires = [\"libffi::libffi\"]\n self.cpp_info.components[\"wayland-server\"].system_libs = [\"pthread\", \"m\"]\n self.cpp_info.components[\"wayland-server\"].resdirs = [\"res\"]\n if self.version >= Version(\"1.21.0\") and self.settings.os == \"Linux\":\n self.cpp_info.components[\"wayland-server\"].system_libs += [\"rt\"]\n self.cpp_info.components[\"wayland-server\"].set_property(\"component_version\", self.version)\n\n pkgconfig_variables = {\n 'datarootdir': '${prefix}/res',\n 'pkgdatadir': '${datarootdir}/wayland',\n }\n self.cpp_info.components[\"wayland-server\"].set_property(\n \"pkg_config_custom_content\",\n \"\\n\".join(f\"{key}={value}\" for key, value in pkgconfig_variables.items()))\n\n self.cpp_info.components[\"wayland-client\"].libs = [\"wayland-client\"]\n self.cpp_info.components[\"wayland-client\"].set_property(\"pkg_config_name\", \"wayland-client\")\n self.cpp_info.components[\"wayland-client\"].names[\"pkg_config\"] = \"wayland-client\"\n self.cpp_info.components[\"wayland-client\"].requires = [\"libffi::libffi\"]\n self.cpp_info.components[\"wayland-client\"].system_libs = [\"pthread\", \"m\"]\n self.cpp_info.components[\"wayland-client\"].resdirs = [\"res\"]\n if self.version >= Version(\"1.21.0\") and self.settings.os == \"Linux\":\n self.cpp_info.components[\"wayland-client\"].system_libs += [\"rt\"]\n self.cpp_info.components[\"wayland-client\"].set_property(\"component_version\", self.version)\n\n pkgconfig_variables = {\n 'datarootdir': '${prefix}/res',\n 'pkgdatadir': '${datarootdir}/wayland',\n }\n self.cpp_info.components[\"wayland-client\"].set_property(\n \"pkg_config_custom_content\",\n \"\\n\".join(f\"{key}={value}\" for key, value in pkgconfig_variables.items()))\n\n self.cpp_info.components[\"wayland-cursor\"].libs = [\"wayland-cursor\"]\n self.cpp_info.components[\"wayland-cursor\"].set_property(\"pkg_config_name\", \"wayland-cursor\")\n self.cpp_info.components[\"wayland-cursor\"].names[\"pkg_config\"] = \"wayland-cursor\"\n self.cpp_info.components[\"wayland-cursor\"].requires = [\"wayland-client\"]\n 
self.cpp_info.components[\"wayland-cursor\"].set_property(\"component_version\", self.version)\n\n self.cpp_info.components[\"wayland-egl\"].libs = [\"wayland-egl\"]\n self.cpp_info.components[\"wayland-egl\"].set_property(\"pkg_config_name\", \"wayland-egl\")\n self.cpp_info.components[\"wayland-egl\"].names[\"pkg_config\"] = \"wayland-egl\"\n self.cpp_info.components[\"wayland-egl\"].requires = [\"wayland-client\"]\n self.cpp_info.components[\"wayland-egl\"].set_property(\"component_version\", \"18.1.0\")\n\n self.cpp_info.components[\"wayland-egl-backend\"].names[\"pkg_config\"] = \"wayland-egl-backend\"\n self.cpp_info.components[\"wayland-egl-backend\"].set_property(\"pkg_config_name\", \"wayland-egl-backend\")\n self.cpp_info.components[\"wayland-egl-backend\"].set_property(\"component_version\", \"3\")\n\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n", "path": "recipes/wayland/all/conanfile.py"}]}
| 3,385 | 573 |
gh_patches_debug_4081
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-8054
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pin click
resolves #8048
### Description
Pin main to `click>=8.1.1,<8.1.4`
### Checklist
- [ ] I have read [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md) and understand what's expected of me
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR
- [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-a-changelog-entry)
</issue>
<code>
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 7, 2):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.7.2 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.3.4"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": ["dbt = dbt.main:main"],
47 },
48 install_requires=[
49 "Jinja2==3.1.2",
50 "agate>=1.6,<1.6.4",
51 "click>=7.0,<9",
52 "colorama>=0.3.9,<0.4.6",
53 "hologram>=0.0.14,<=0.0.15",
54 "isodate>=0.6,<0.7",
55 "logbook>=1.5,<1.6",
56 "mashumaro[msgpack]==3.0.4",
57 "minimal-snowplow-tracker==0.0.2",
58 "networkx>=2.3,<2.8.1;python_version<'3.8'",
59 "networkx>=2.3,<3;python_version>='3.8'",
60 "packaging>=20.9,<22.0",
61 "sqlparse>=0.2.3,<0.4.4",
62 "dbt-extractor~=0.4.1",
63 "typing-extensions>=3.7.4",
64 "werkzeug>=1,<3",
65 "pathspec~=0.9.0",
66 "pytz>=2015.7",
67 # the following are all to match snowflake-connector-python
68 "requests<3.0.0",
69 "idna>=2.5,<4",
70 "cffi>=1.9,<2.0.0",
71 "pyyaml>=6.0",
72 ],
73 zip_safe=False,
74 classifiers=[
75 "Development Status :: 5 - Production/Stable",
76 "License :: OSI Approved :: Apache Software License",
77 "Operating System :: Microsoft :: Windows",
78 "Operating System :: MacOS :: MacOS X",
79 "Operating System :: POSIX :: Linux",
80 "Programming Language :: Python :: 3.7",
81 "Programming Language :: Python :: 3.8",
82 "Programming Language :: Python :: 3.9",
83 "Programming Language :: Python :: 3.10",
84 ],
85 python_requires=">=3.7.2",
86 )
87
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -48,7 +48,8 @@
install_requires=[
"Jinja2==3.1.2",
"agate>=1.6,<1.6.4",
- "click>=7.0,<9",
+ # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558
+ "click>=7.0,<8.1.4",
"colorama>=0.3.9,<0.4.6",
"hologram>=0.0.14,<=0.0.15",
"isodate>=0.6,<0.7",
|
{"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -48,7 +48,8 @@\n install_requires=[\n \"Jinja2==3.1.2\",\n \"agate>=1.6,<1.6.4\",\n- \"click>=7.0,<9\",\n+ # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558\n+ \"click>=7.0,<8.1.4\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n", "issue": "pin click\nresolves #8048 \r\n\r\n### Description\r\n\r\nPin main to `click>=8.1.1,<8.1.4`\r\n\r\n### Checklist\r\n\r\n- [ ] I have read [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md) and understand what's expected of me\r\n- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)\r\n- [ ] I have run this code in development and it appears to resolve the stated issue\r\n- [ ] This PR includes tests, or tests are not required/relevant for this PR\r\n- [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR\r\n- [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-a-changelog-entry)\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.3.4\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.main:main\"],\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro[msgpack]==3.0.4\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.4.4\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n \"pathspec~=0.9.0\",\n \"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n \"pyyaml>=6.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating 
System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}]}
| 1,712 | 171 |
| gh_patches_debug_42184 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-1016 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IHG hotels scraper missing IHG Army Hotels
Missing this additional source of hotel listings:
https://www.ihg.com/armyhotels/
</issue>
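The patch further down adds an `armyhotels` start URL and a dedicated `parse_army_hotel` callback. As a minimal, self-contained sketch of the dispatch idea only (the standalone functions below are invented for illustration and are not part of the spider):

```python
import re

# Illustrative stand-ins: the real spider maps brand segments to bound methods
# on the Spider class, not to module-level functions.
def parse_hotel(url):
    return "generic IHG parser handled " + url

def parse_army_hotel(url):
    return "Army Hotels parser handled " + url

HOTEL_PARSERS = {
    'holidayinn': parse_hotel,
    'armyhotels': parse_army_hotel,  # the missing IHG Army Hotels entry
}

def dispatch(url):
    brand = re.search(r'ihg\.com/(.*?)/', url, re.IGNORECASE).group(1)
    return HOTEL_PARSERS[brand](url)

print(dispatch('https://www.ihg.com/armyhotels/hotels/us/en/installations'))
```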
<code>
[start of locations/spiders/ihg_hotels.py]
1 import json
2 import re
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6
7
8 class IHGHotels(scrapy.Spider):
9
10 name = "ihg_hotels"
11 # allowed_domains = ["ihg.com"] # the Kimpton hotels each have their own domains
12 download_delay = 0.5
13
14 start_urls = (
15 'https://www.ihg.com/holidayinn/destinations/us/en/explore',
16 )
17
18 def parse_hotel(self, response):
19 if 'hoteldetail' not in response.url:
20 # got redirected back to search page
21 return
22
23 street_address = " ".join(response.xpath('//span[@itemprop="streetAddress"]/p/text()').extract())
24 if not street_address:
25 street_address = response.xpath('//span[@itemprop="streetAddress"]/text()').extract_first()
26
27 city = response.xpath('//span[@itemprop="addressLocality"]/text()').extract_first()
28 state = response.xpath('//span[@itemprop="addressRegion"]/text()').extract_first()
29
30 properties = {
31 'ref': "_".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),
32 'name': response.xpath('//meta[@property="og:title"]/@content').extract_first(),
33 'addr_full': street_address.replace(u'\u00a0', ' ').strip(', ') if street_address else None,
34 'city': city.replace(u'\u00a0', ' ').strip(', ') if city else None,
35 'state': state.replace(u'\u00a0', ' ') if state else None,
36 'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
37 'country': response.xpath('//span[@itemprop="addressCountry"]/text()').extract_first(),
38 'phone': (response.xpath('//span[@itemprop="telephone"]/text()').extract_first() or '').strip('| '),
39 'lat': float(response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first()),
40 'lon': float(response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first()),
41 'website': response.url,
42 }
43
44 yield GeojsonPointItem(**properties)
45
46 def parse_kimpton(self, response):
47 url = response.xpath('//a[contains(text(), "VISIT HOTEL WEBSITE")]/@href').extract_first()
48 properties = {
49 'ref': "_".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),
50 'lat': float(response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first()),
51 'lon': float(response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first()),
52 }
53 if not url: # "opening soon" hotels just have teaser pages
54 return
55 url = url.split('?')[0] # remove querystring
56 yield scrapy.Request(url, callback=self.parse_kimpton_data, meta={"properties": properties})
57
58 def parse_kimpton_data(self, response):
59 properties = response.meta["properties"]
60 script = response.xpath('//script[@type="application/ld+json"]/text()').extract_first()
61 if script:
62 data = json.loads(script)
63 else:
64 data = {}
65 if 'name' in data:
66 properties.update({
67 'name': data["name"],
68 'addr_full': data["address"]["streetAddress"],
69 'city': data["address"]["addressLocality"],
70 'state': data["address"].get("addressRegion"),
71 'postcode': data["address"]["postalCode"],
72 'country': data["address"].get("addressCountry"),
73 'phone': data.get("telephone"),
74 'website': data["url"]
75 })
76
77 else:
78 street_address = " ".join(response.xpath('//span[@itemprop="streetAddress"]/p/text()').extract())
79 if not street_address:
80 street_address = response.xpath('//span[@itemprop="streetAddress"]/text()').extract_first()
81
82 city = response.xpath('//span[@itemprop="addressLocality"]/text()').extract_first()
83 state = response.xpath('//span[@itemprop="addressRegion"]/text()').extract_first()
84
85 properties.update({
86 'name': response.xpath('//meta[@property="og:title"]/@content').extract_first(),
87 'addr_full': street_address.replace(u'\u00a0', ' ').strip(', ') if street_address else None,
88 'city': city.replace(u'\u00a0', ' ').strip(', ') if city else None,
89 'state': state.replace(u'\u00a0', ' ') if state else None,
90 'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
91 'country': response.xpath('//span[@itemprop="addressCountry"]/text()').extract_first(),
92 'phone': (response.xpath('//span[@itemprop="telephone"]/text()').extract_first() or '').strip('| '),
93 'website': response.url,
94 })
95
96 yield GeojsonPointItem(**properties)
97
98 def parse_regent(self, response):
99 data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
100
101 properties = {
102 'ref': "_".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),
103 'name': data["name"],
104 'addr_full': data["address"]["streetAddress"],
105 'city': data["address"]["addressLocality"],
106 'state': data["address"].get("addressRegion"),
107 'postcode': data["address"]["postalCode"],
108 'country': data["address"]["addressCountry"],
109 'phone': data["telephone"],
110 'lat': float(data["geo"]["latitude"]),
111 'lon': float(data["geo"]["longitude"]),
112 'website': response.url,
113 }
114
115 yield GeojsonPointItem(**properties)
116
117 def parse_crowne_plaza(self, response):
118 address = response.xpath('//a[@class="hotel-home"]/text()').extract_first().strip()
119
120 address_parts = address.split('|')
121
122 if len(address_parts) == 4: # international addresses
123 addr_city, postcode, country, _ = address_parts
124 state = ''
125 else: # us addresses
126 addr_city, state, postcode, country, _ = address_parts
127
128 street_address = ",".join(addr_city.split(',')[0:-1])
129 city = addr_city.split(',')[-1]
130
131 properties = {
132 'ref': "_".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),
133 'name': response.xpath('//meta[@property="og:title"]/@content').extract_first(),
134 'addr_full': street_address.strip(),
135 'city': city.strip(),
136 'state': state.strip(),
137 'postcode': postcode.strip(),
138 'country': country.strip(),
139 'phone': response.xpath('//div[@class="new-hinfo-address"]/p/a[2]/text()').extract_first(),
140 'lat': float(response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first()),
141 'lon': float(response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first()),
142 'website': response.url,
143 }
144
145 yield GeojsonPointItem(**properties)
146
147 def parse_candlewood_staybridge(self, response):
148 if 'hoteldetail' not in response.url:
149 # got redirected back to search page
150 return
151
152 street_address = " ".join(response.xpath('//span[@itemprop="streetAddress"]/p/text()').extract())
153 if not street_address:
154 street_address = response.xpath('//span[@itemprop="streetAddress"]/text()').extract_first()
155
156 region = response.xpath('//span[@itemprop="addressRegion"]/text()').extract_first().replace(u'\u00a0',' ')
157
158 match = re.search(r'([a-z]+)\s(\d+)\s(.*)', region, re.IGNORECASE)
159 if match:
160 state, postcode, country = match.groups()
161 else:
162 state, postcode, country = None, None, region.strip()
163
164 properties = {
165 'ref': "_".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),
166 'name': response.xpath('//meta[@property="og:title"]/@content').extract_first(),
167 'addr_full': street_address.replace(u'\u00a0', ' ').strip(', '),
168 'city': response.xpath('//span[@itemprop="addressLocality"]/text()').extract_first().replace(u'\u00a0', ' ').strip(', '),
169 'state': state,
170 'postcode': postcode,
171 'country': country,
172 'phone': response.xpath('//div[@class="booking"]/a/text()').extract_first(),
173 'lat': float(response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first()),
174 'lon': float(response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first()),
175 'website': response.url,
176 }
177
178 yield GeojsonPointItem(**properties)
179
180 def parse(self, response):
181
182 hotel_parsers = {
183 'holidayinn': self.parse_hotel,
184 'crowneplaza': self.parse_crowne_plaza,
185 'holidayinnexpress': self.parse_hotel,
186 'hotelindigo': self.parse_hotel,
187 'candlewood': self.parse_candlewood_staybridge,
188 'staybridge': self.parse_candlewood_staybridge,
189 'holidayinnresorts': self.parse_hotel,
190 'intercontinental': self.parse_hotel,
191 'regent': self.parse_regent,
192 'hotels': self.parse_hotel, # vocos
193 'kimptonhotels': self.parse_kimpton,
194 'holidayinnclubvacations': self.parse_hotel,
195 'evenhotels': self.parse_hotel,
196 'avidhotels': self.parse_hotel,
197 'hualuxe': self.parse_hotel
198 }
199
200 hotel_urls = response.xpath('//div[@class="hotelList"]//div[contains(@class, "hotelItem")]//a[contains(@class, "hotel-name")]/@href').extract()
201 if hotel_urls:
202 for url in hotel_urls:
203 hotel_type = re.search(r'ihg.com/(.*?)/', response.urljoin(url), re.IGNORECASE).group(1)
204
205 yield scrapy.Request(response.urljoin(url),
206 callback=hotel_parsers[hotel_type])
207
208 else:
209 urls = response.xpath('//li[@class="listingItem"]/a/@href').extract()
210 for url in urls:
211 yield scrapy.Request(response.urljoin(url))
212
[end of locations/spiders/ihg_hotels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/ihg_hotels.py b/locations/spiders/ihg_hotels.py
--- a/locations/spiders/ihg_hotels.py
+++ b/locations/spiders/ihg_hotels.py
@@ -13,6 +13,7 @@
start_urls = (
'https://www.ihg.com/holidayinn/destinations/us/en/explore',
+ 'https://www.ihg.com/armyhotels/hotels/us/en/installations',
)
def parse_hotel(self, response):
@@ -32,7 +33,7 @@
'name': response.xpath('//meta[@property="og:title"]/@content').extract_first(),
'addr_full': street_address.replace(u'\u00a0', ' ').strip(', ') if street_address else None,
'city': city.replace(u'\u00a0', ' ').strip(', ') if city else None,
- 'state': state.replace(u'\u00a0', ' ') if state else None,
+ 'state': state.replace(u'\u00a0', ' ').strip(', ') if state else None,
'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
'country': response.xpath('//span[@itemprop="addressCountry"]/text()').extract_first(),
'phone': (response.xpath('//span[@itemprop="telephone"]/text()').extract_first() or '').strip('| '),
@@ -177,6 +178,23 @@
yield GeojsonPointItem(**properties)
+ def parse_army_hotel(self, response):
+ properties = {
+ 'ref': "_".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),
+ 'name': response.xpath('//meta[@property="og:title"]/@content').extract_first(),
+ 'addr_full': response.xpath('//meta[@property="business:contact_data:street_address"]/@content').extract_first(),
+ 'city': response.xpath('//meta[@property="business:contact_data:locality"]/@content').extract_first(),
+ 'state': response.xpath('//meta[@property="business:contact_data:region"]/@content').extract_first(),
+ 'postcode': response.xpath('//meta[@property="business:contact_data:postal_code"]/@content').extract_first(),
+ 'country': response.xpath('//meta[@property="business:contact_data:country_name"]/@content').extract_first(),
+ 'phone': (response.xpath('//span[@title="Hotel Front Desk:"]/span/text()').extract_first() or "").strip(),
+ 'lat': float(response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first()),
+ 'lon': float(response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first()),
+ 'website': response.url,
+ }
+
+ yield GeojsonPointItem(**properties)
+
def parse(self, response):
hotel_parsers = {
@@ -194,10 +212,15 @@
'holidayinnclubvacations': self.parse_hotel,
'evenhotels': self.parse_hotel,
'avidhotels': self.parse_hotel,
- 'hualuxe': self.parse_hotel
+ 'hualuxe': self.parse_hotel,
+ 'armyhotels': self.parse_army_hotel
}
hotel_urls = response.xpath('//div[@class="hotelList"]//div[contains(@class, "hotelItem")]//a[contains(@class, "hotel-name")]/@href').extract()
+
+ if 'armyhotels' in response.url:
+ hotel_urls = response.xpath('//div[@id="hotelListWrap"]//a/@href').extract()
+
if hotel_urls:
for url in hotel_urls:
hotel_type = re.search(r'ihg.com/(.*?)/', response.urljoin(url), re.IGNORECASE).group(1)
|
{"golden_diff": "diff --git a/locations/spiders/ihg_hotels.py b/locations/spiders/ihg_hotels.py\n--- a/locations/spiders/ihg_hotels.py\n+++ b/locations/spiders/ihg_hotels.py\n@@ -13,6 +13,7 @@\n \n start_urls = (\n 'https://www.ihg.com/holidayinn/destinations/us/en/explore',\n+ 'https://www.ihg.com/armyhotels/hotels/us/en/installations',\n )\n \n def parse_hotel(self, response):\n@@ -32,7 +33,7 @@\n 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n 'addr_full': street_address.replace(u'\\u00a0', ' ').strip(', ') if street_address else None,\n 'city': city.replace(u'\\u00a0', ' ').strip(', ') if city else None,\n- 'state': state.replace(u'\\u00a0', ' ') if state else None,\n+ 'state': state.replace(u'\\u00a0', ' ').strip(', ') if state else None,\n 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'country': response.xpath('//span[@itemprop=\"addressCountry\"]/text()').extract_first(),\n 'phone': (response.xpath('//span[@itemprop=\"telephone\"]/text()').extract_first() or '').strip('| '),\n@@ -177,6 +178,23 @@\n \n yield GeojsonPointItem(**properties)\n \n+ def parse_army_hotel(self, response):\n+ properties = {\n+ 'ref': \"_\".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),\n+ 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n+ 'addr_full': response.xpath('//meta[@property=\"business:contact_data:street_address\"]/@content').extract_first(),\n+ 'city': response.xpath('//meta[@property=\"business:contact_data:locality\"]/@content').extract_first(),\n+ 'state': response.xpath('//meta[@property=\"business:contact_data:region\"]/@content').extract_first(),\n+ 'postcode': response.xpath('//meta[@property=\"business:contact_data:postal_code\"]/@content').extract_first(),\n+ 'country': response.xpath('//meta[@property=\"business:contact_data:country_name\"]/@content').extract_first(),\n+ 'phone': (response.xpath('//span[@title=\"Hotel Front Desk:\"]/span/text()').extract_first() or \"\").strip(),\n+ 'lat': float(response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first()),\n+ 'lon': float(response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first()),\n+ 'website': response.url,\n+ }\n+\n+ yield GeojsonPointItem(**properties)\n+\n def parse(self, response):\n \n hotel_parsers = {\n@@ -194,10 +212,15 @@\n 'holidayinnclubvacations': self.parse_hotel,\n 'evenhotels': self.parse_hotel,\n 'avidhotels': self.parse_hotel,\n- 'hualuxe': self.parse_hotel\n+ 'hualuxe': self.parse_hotel,\n+ 'armyhotels': self.parse_army_hotel\n }\n \n hotel_urls = response.xpath('//div[@class=\"hotelList\"]//div[contains(@class, \"hotelItem\")]//a[contains(@class, \"hotel-name\")]/@href').extract()\n+\n+ if 'armyhotels' in response.url:\n+ hotel_urls = response.xpath('//div[@id=\"hotelListWrap\"]//a/@href').extract()\n+\n if hotel_urls:\n for url in hotel_urls:\n hotel_type = re.search(r'ihg.com/(.*?)/', response.urljoin(url), re.IGNORECASE).group(1)\n", "issue": "IHG hotels scraper missing IHG Army Hotels\nMissing this additional source of hotel listings:\r\nhttps://www.ihg.com/armyhotels/\n", "before_files": [{"content": "import json\nimport re\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass IHGHotels(scrapy.Spider):\n\n name = \"ihg_hotels\"\n # allowed_domains = [\"ihg.com\"] # the Kimpton hotels each have their own domains\n download_delay = 0.5\n\n start_urls = (\n 
'https://www.ihg.com/holidayinn/destinations/us/en/explore',\n )\n\n def parse_hotel(self, response):\n if 'hoteldetail' not in response.url:\n # got redirected back to search page\n return\n\n street_address = \" \".join(response.xpath('//span[@itemprop=\"streetAddress\"]/p/text()').extract())\n if not street_address:\n street_address = response.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first()\n\n city = response.xpath('//span[@itemprop=\"addressLocality\"]/text()').extract_first()\n state = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first()\n\n properties = {\n 'ref': \"_\".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),\n 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n 'addr_full': street_address.replace(u'\\u00a0', ' ').strip(', ') if street_address else None,\n 'city': city.replace(u'\\u00a0', ' ').strip(', ') if city else None,\n 'state': state.replace(u'\\u00a0', ' ') if state else None,\n 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'country': response.xpath('//span[@itemprop=\"addressCountry\"]/text()').extract_first(),\n 'phone': (response.xpath('//span[@itemprop=\"telephone\"]/text()').extract_first() or '').strip('| '),\n 'lat': float(response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first()),\n 'lon': float(response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first()),\n 'website': response.url,\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_kimpton(self, response):\n url = response.xpath('//a[contains(text(), \"VISIT HOTEL WEBSITE\")]/@href').extract_first()\n properties = {\n 'ref': \"_\".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),\n 'lat': float(response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first()),\n 'lon': float(response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first()),\n }\n if not url: # \"opening soon\" hotels just have teaser pages\n return\n url = url.split('?')[0] # remove querystring\n yield scrapy.Request(url, callback=self.parse_kimpton_data, meta={\"properties\": properties})\n\n def parse_kimpton_data(self, response):\n properties = response.meta[\"properties\"]\n script = response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first()\n if script:\n data = json.loads(script)\n else:\n data = {}\n if 'name' in data:\n properties.update({\n 'name': data[\"name\"],\n 'addr_full': data[\"address\"][\"streetAddress\"],\n 'city': data[\"address\"][\"addressLocality\"],\n 'state': data[\"address\"].get(\"addressRegion\"),\n 'postcode': data[\"address\"][\"postalCode\"],\n 'country': data[\"address\"].get(\"addressCountry\"),\n 'phone': data.get(\"telephone\"),\n 'website': data[\"url\"]\n })\n\n else:\n street_address = \" \".join(response.xpath('//span[@itemprop=\"streetAddress\"]/p/text()').extract())\n if not street_address:\n street_address = response.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first()\n\n city = response.xpath('//span[@itemprop=\"addressLocality\"]/text()').extract_first()\n state = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first()\n\n properties.update({\n 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n 'addr_full': street_address.replace(u'\\u00a0', ' ').strip(', ') if street_address else None,\n 'city': city.replace(u'\\u00a0', ' 
').strip(', ') if city else None,\n 'state': state.replace(u'\\u00a0', ' ') if state else None,\n 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'country': response.xpath('//span[@itemprop=\"addressCountry\"]/text()').extract_first(),\n 'phone': (response.xpath('//span[@itemprop=\"telephone\"]/text()').extract_first() or '').strip('| '),\n 'website': response.url,\n })\n\n yield GeojsonPointItem(**properties)\n\n def parse_regent(self, response):\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n\n properties = {\n 'ref': \"_\".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),\n 'name': data[\"name\"],\n 'addr_full': data[\"address\"][\"streetAddress\"],\n 'city': data[\"address\"][\"addressLocality\"],\n 'state': data[\"address\"].get(\"addressRegion\"),\n 'postcode': data[\"address\"][\"postalCode\"],\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data[\"telephone\"],\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n 'website': response.url,\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_crowne_plaza(self, response):\n address = response.xpath('//a[@class=\"hotel-home\"]/text()').extract_first().strip()\n\n address_parts = address.split('|')\n\n if len(address_parts) == 4: # international addresses\n addr_city, postcode, country, _ = address_parts\n state = ''\n else: # us addresses\n addr_city, state, postcode, country, _ = address_parts\n\n street_address = \",\".join(addr_city.split(',')[0:-1])\n city = addr_city.split(',')[-1]\n\n properties = {\n 'ref': \"_\".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),\n 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n 'addr_full': street_address.strip(),\n 'city': city.strip(),\n 'state': state.strip(),\n 'postcode': postcode.strip(),\n 'country': country.strip(),\n 'phone': response.xpath('//div[@class=\"new-hinfo-address\"]/p/a[2]/text()').extract_first(),\n 'lat': float(response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first()),\n 'lon': float(response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first()),\n 'website': response.url,\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_candlewood_staybridge(self, response):\n if 'hoteldetail' not in response.url:\n # got redirected back to search page\n return\n\n street_address = \" \".join(response.xpath('//span[@itemprop=\"streetAddress\"]/p/text()').extract())\n if not street_address:\n street_address = response.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first()\n\n region = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first().replace(u'\\u00a0',' ')\n\n match = re.search(r'([a-z]+)\\s(\\d+)\\s(.*)', region, re.IGNORECASE)\n if match:\n state, postcode, country = match.groups()\n else:\n state, postcode, country = None, None, region.strip()\n\n properties = {\n 'ref': \"_\".join(re.search(r'en/(.*)/(.*)/hoteldetail', response.url).groups()),\n 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n 'addr_full': street_address.replace(u'\\u00a0', ' ').strip(', '),\n 'city': response.xpath('//span[@itemprop=\"addressLocality\"]/text()').extract_first().replace(u'\\u00a0', ' ').strip(', '),\n 'state': state,\n 'postcode': postcode,\n 'country': country,\n 'phone': 
response.xpath('//div[@class=\"booking\"]/a/text()').extract_first(),\n 'lat': float(response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first()),\n 'lon': float(response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first()),\n 'website': response.url,\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n\n hotel_parsers = {\n 'holidayinn': self.parse_hotel,\n 'crowneplaza': self.parse_crowne_plaza,\n 'holidayinnexpress': self.parse_hotel,\n 'hotelindigo': self.parse_hotel,\n 'candlewood': self.parse_candlewood_staybridge,\n 'staybridge': self.parse_candlewood_staybridge,\n 'holidayinnresorts': self.parse_hotel,\n 'intercontinental': self.parse_hotel,\n 'regent': self.parse_regent,\n 'hotels': self.parse_hotel, # vocos\n 'kimptonhotels': self.parse_kimpton,\n 'holidayinnclubvacations': self.parse_hotel,\n 'evenhotels': self.parse_hotel,\n 'avidhotels': self.parse_hotel,\n 'hualuxe': self.parse_hotel\n }\n\n hotel_urls = response.xpath('//div[@class=\"hotelList\"]//div[contains(@class, \"hotelItem\")]//a[contains(@class, \"hotel-name\")]/@href').extract()\n if hotel_urls:\n for url in hotel_urls:\n hotel_type = re.search(r'ihg.com/(.*?)/', response.urljoin(url), re.IGNORECASE).group(1)\n\n yield scrapy.Request(response.urljoin(url),\n callback=hotel_parsers[hotel_type])\n\n else:\n urls = response.xpath('//li[@class=\"listingItem\"]/a/@href').extract()\n for url in urls:\n yield scrapy.Request(response.urljoin(url))\n", "path": "locations/spiders/ihg_hotels.py"}]}
| 3,417 | 871 |
| gh_patches_debug_1005 | rasdani/github-patches | git_diff | Pycord-Development__pycord-1218 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mypy can't type check pycord when namespace_packages are enabled
### Summary
Mypy errors when using pycord with namespace_packages flag enabled
### Reproduction Steps
Run mypy against a simple pycord setup.
An example set up is as follows:
```
my-repo/
├─ my_bot/
│ ├─ bot.py
.mypy.ini
```
Run mypy via: `mypy my_bot/`
Mypy config:
```ini
[mypy]
namespace_packages = True
ignore_missing_imports = True
```
### Minimal Reproducible Code
```python
`from discord import ApplicationCommand` in bot.py
```
### Expected Results
Type checking works as expected with `namespace_packages` enabled
### Actual Results
Type checking errors with:
```sh
virtual-env-path/lib/python3.9/site-packages/discord/commands/__init__.py: error: Source file found twice under different module names: "discord.commands.__init__" and "discord.commands"
Found 1 error in 1 file (errors prevented further checking)
```
### Intents
N/A
### System Information
```yaml
- Python v3.9.5-final
- py-cord v2.0.0-beta
- py-cord pkg_resources: v2.0.0b3
- aiohttp v3.8.1
- system info: Darwin 20.6.0 Darwin Kernel Version 20.6.0: Tue Oct 12 18:33:42 PDT 2021; root:xnu-7195.141.8~1/RELEASE_X86_64
```
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
### Additional Context
Mypy won't error if `namespace_packages` is `False`, but then it cannot infer the types properly and will result in errors such as:
```sh
app/bot.py:1: error: Module "discord" has no attribute "ApplicationCommand"; maybe "ApplicationCommandMixin"?
```
This issue also occurs in nextcord; however, nextcord is available under both `discord` and `nextcord`, so in `nextcord` this issue is fixed by changing the import to `from nextcord import ApplicationCommand`. Pycord doesn't expose the package as `pycord`. Any reason for this?
</issue>
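The golden diff for this record simply swaps `from .commands.__init__ import *` for `from .commands import *` in `discord/__init__.py`. If it helps to see the failure in isolation, the throwaway layout below (the `toy_pkg` name and paths are invented for the demonstration) sets up the same import style, so running mypy with `namespace_packages = True` on it should reproduce the duplicate-module error described above:

```python
import pathlib
import tempfile

# Build a disposable package that imports a subpackage via its __init__ module,
# mirroring the problematic line in discord/__init__.py.
root = pathlib.Path(tempfile.mkdtemp())
(root / "toy_pkg" / "commands").mkdir(parents=True)
(root / "toy_pkg" / "commands" / "__init__.py").write_text("x = 1\n")
(root / "toy_pkg" / "__init__.py").write_text("from .commands.__init__ import *\n")
(root / "mypy.ini").write_text("[mypy]\nnamespace_packages = True\n")

print(f"run: mypy --config-file {root / 'mypy.ini'} {root / 'toy_pkg'}")
# Expected (per the report): Source file found twice under different module
# names: "toy_pkg.commands.__init__" and "toy_pkg.commands".
# Rewriting the import as "from .commands import *" avoids the clash.
```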
<code>
[start of discord/__init__.py]
1 """
2 Discord API Wrapper
3 ~~~~~~~~~~~~~~~~~~~
4
5 A basic wrapper for the Discord API.
6
7 :copyright: (c) 2015-2021 Rapptz & (c) 2021-present Pycord Development
8 :license: MIT, see LICENSE for more details.
9
10 """
11
12 __title__ = "pycord"
13 __author__ = "Pycord Development"
14 __license__ = "MIT"
15 __copyright__ = "Copyright 2015-2021 Rapptz & Copyright 2021-present Pycord Development"
16 __version__ = "2.0.0b5"
17
18 __path__ = __import__("pkgutil").extend_path(__path__, __name__)
19
20 import logging
21 from typing import Literal, NamedTuple
22
23 from . import abc, opus, sinks, ui, utils
24 from .activity import *
25 from .appinfo import *
26 from .asset import *
27 from .audit_logs import *
28 from .bot import *
29 from .channel import *
30 from .client import *
31 from .cog import Cog
32 from .colour import *
33 from .commands.__init__ import *
34 from .components import *
35 from .embeds import *
36 from .emoji import *
37 from .enums import *
38 from .errors import *
39 from .file import *
40 from .flags import *
41 from .guild import *
42 from .http import *
43 from .integrations import *
44 from .interactions import *
45 from .invite import *
46 from .member import *
47 from .mentions import *
48 from .message import *
49 from .object import *
50 from .partial_emoji import *
51 from .permissions import *
52 from .player import *
53 from .raw_models import *
54 from .reaction import *
55 from .role import *
56 from .scheduled_events import *
57 from .shard import *
58 from .stage_instance import *
59 from .sticker import *
60 from .team import *
61 from .template import *
62 from .threads import *
63 from .user import *
64 from .voice_client import *
65 from .webhook import *
66 from .welcome_screen import *
67 from .widget import *
68
69
70 class VersionInfo(NamedTuple):
71 major: int
72 minor: int
73 micro: int
74 releaselevel: Literal["alpha", "beta", "candidate", "final"]
75 serial: int
76
77
78 version_info: VersionInfo = VersionInfo(major=2, minor=0, micro=0, releaselevel="beta", serial=5)
79
80 logging.getLogger(__name__).addHandler(logging.NullHandler())
81
[end of discord/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/discord/__init__.py b/discord/__init__.py
--- a/discord/__init__.py
+++ b/discord/__init__.py
@@ -30,7 +30,7 @@
from .client import *
from .cog import Cog
from .colour import *
-from .commands.__init__ import *
+from .commands import *
from .components import *
from .embeds import *
from .emoji import *
|
{"golden_diff": "diff --git a/discord/__init__.py b/discord/__init__.py\n--- a/discord/__init__.py\n+++ b/discord/__init__.py\n@@ -30,7 +30,7 @@\n from .client import *\n from .cog import Cog\n from .colour import *\n-from .commands.__init__ import *\n+from .commands import *\n from .components import *\n from .embeds import *\n from .emoji import *\n", "issue": "Mypy can't type check pycord when namespace_packages are enabled\n### Summary\r\n\r\nMypy errors when using pycord with namespace_packages flag enabled\r\n\r\n### Reproduction Steps\r\n\r\nRun mypy against a simple pycord setup.\r\n\r\nAn example set up is as follows:\r\n\r\n```\r\nmy-repo/\r\n\u251c\u2500 my_bot/\r\n\u2502 \u251c\u2500 bot.py\r\n.mypy.ini\r\n```\r\n\r\nRun mypy via: `mypy my_bot/`\r\n\r\nMypy config:\r\n```ini\r\n[mypy]\r\nnamespace_packages = True\r\nignore_missing_imports = True\r\n```\r\n\r\n\r\n### Minimal Reproducible Code\r\n\r\n```python\r\n`from discord import ApplicationCommand` in bot.py\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nType checking works as expected with `namespace_packages` enabled\r\n\r\n### Actual Results\r\n\r\nType checking errors with:\r\n```sh\r\nvirtual-env-path/lib/python3.9/site-packages/discord/commands/__init__.py: error: Source file found twice under different module names: \"discord.commands.__init__\" and \"discord.commands\"\r\nFound 1 error in 1 file (errors prevented further checking)\r\n```\r\n\r\n### Intents\r\n\r\nN/A\r\n\r\n### System Information\r\n\r\n```yaml\r\n- Python v3.9.5-final\r\n- py-cord v2.0.0-beta\r\n - py-cord pkg_resources: v2.0.0b3\r\n- aiohttp v3.8.1\r\n- system info: Darwin 20.6.0 Darwin Kernel Version 20.6.0: Tue Oct 12 18:33:42 PDT 2021; root:xnu-7195.141.8~1/RELEASE_X86_64\r\n```\r\n\r\n### Checklist\r\n\r\n- [X] I have searched the open issues for duplicates.\r\n- [X] I have shown the entire traceback, if possible.\r\n- [X] I have removed my token from display, if visible.\r\n\r\n### Additional Context\r\n\r\nMypy won't error is `namespace_packages` is `False` but then it cannot infer the types properly and will result in errors such as:\r\n```sh\r\napp/bot.py:1: error: Module \"discord\" has no attribute \"ApplicationCommand\"; maybe \"ApplicationCommandMixin\"?\r\n```\r\n\r\nThis issue is also persistent in nextcord however, nextcord is available under `discord` and `nextcord` so in `nextcord` this issue is fixed by changing the import to `from nextcord import ApplicationCommand`. Pycord doesn't expose the package as `pycord`. 
Any reason for this?.\nMypy can't type check pycord when namespace_packages are enabled\n### Summary\r\n\r\nMypy errors when using pycord with namespace_packages flag enabled\r\n\r\n### Reproduction Steps\r\n\r\nRun mypy against a simple pycord setup.\r\n\r\nAn example set up is as follows:\r\n\r\n```\r\nmy-repo/\r\n\u251c\u2500 my_bot/\r\n\u2502 \u251c\u2500 bot.py\r\n.mypy.ini\r\n```\r\n\r\nRun mypy via: `mypy my_bot/`\r\n\r\nMypy config:\r\n```ini\r\n[mypy]\r\nnamespace_packages = True\r\nignore_missing_imports = True\r\n```\r\n\r\n\r\n### Minimal Reproducible Code\r\n\r\n```python\r\n`from discord import ApplicationCommand` in bot.py\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nType checking works as expected with `namespace_packages` enabled\r\n\r\n### Actual Results\r\n\r\nType checking errors with:\r\n```sh\r\nvirtual-env-path/lib/python3.9/site-packages/discord/commands/__init__.py: error: Source file found twice under different module names: \"discord.commands.__init__\" and \"discord.commands\"\r\nFound 1 error in 1 file (errors prevented further checking)\r\n```\r\n\r\n### Intents\r\n\r\nN/A\r\n\r\n### System Information\r\n\r\n```yaml\r\n- Python v3.9.5-final\r\n- py-cord v2.0.0-beta\r\n - py-cord pkg_resources: v2.0.0b3\r\n- aiohttp v3.8.1\r\n- system info: Darwin 20.6.0 Darwin Kernel Version 20.6.0: Tue Oct 12 18:33:42 PDT 2021; root:xnu-7195.141.8~1/RELEASE_X86_64\r\n```\r\n\r\n### Checklist\r\n\r\n- [X] I have searched the open issues for duplicates.\r\n- [X] I have shown the entire traceback, if possible.\r\n- [X] I have removed my token from display, if visible.\r\n\r\n### Additional Context\r\n\r\nMypy won't error is `namespace_packages` is `False` but then it cannot infer the types properly and will result in errors such as:\r\n```sh\r\napp/bot.py:1: error: Module \"discord\" has no attribute \"ApplicationCommand\"; maybe \"ApplicationCommandMixin\"?\r\n```\r\n\r\nThis issue is also persistent in nextcord however, nextcord is available under `discord` and `nextcord` so in `nextcord` this issue is fixed by changing the import to `from nextcord import ApplicationCommand`. Pycord doesn't expose the package as `pycord`. Any reason for this?.\n", "before_files": [{"content": "\"\"\"\nDiscord API Wrapper\n~~~~~~~~~~~~~~~~~~~\n\nA basic wrapper for the Discord API.\n\n:copyright: (c) 2015-2021 Rapptz & (c) 2021-present Pycord Development\n:license: MIT, see LICENSE for more details.\n\n\"\"\"\n\n__title__ = \"pycord\"\n__author__ = \"Pycord Development\"\n__license__ = \"MIT\"\n__copyright__ = \"Copyright 2015-2021 Rapptz & Copyright 2021-present Pycord Development\"\n__version__ = \"2.0.0b5\"\n\n__path__ = __import__(\"pkgutil\").extend_path(__path__, __name__)\n\nimport logging\nfrom typing import Literal, NamedTuple\n\nfrom . 
import abc, opus, sinks, ui, utils\nfrom .activity import *\nfrom .appinfo import *\nfrom .asset import *\nfrom .audit_logs import *\nfrom .bot import *\nfrom .channel import *\nfrom .client import *\nfrom .cog import Cog\nfrom .colour import *\nfrom .commands.__init__ import *\nfrom .components import *\nfrom .embeds import *\nfrom .emoji import *\nfrom .enums import *\nfrom .errors import *\nfrom .file import *\nfrom .flags import *\nfrom .guild import *\nfrom .http import *\nfrom .integrations import *\nfrom .interactions import *\nfrom .invite import *\nfrom .member import *\nfrom .mentions import *\nfrom .message import *\nfrom .object import *\nfrom .partial_emoji import *\nfrom .permissions import *\nfrom .player import *\nfrom .raw_models import *\nfrom .reaction import *\nfrom .role import *\nfrom .scheduled_events import *\nfrom .shard import *\nfrom .stage_instance import *\nfrom .sticker import *\nfrom .team import *\nfrom .template import *\nfrom .threads import *\nfrom .user import *\nfrom .voice_client import *\nfrom .webhook import *\nfrom .welcome_screen import *\nfrom .widget import *\n\n\nclass VersionInfo(NamedTuple):\n major: int\n minor: int\n micro: int\n releaselevel: Literal[\"alpha\", \"beta\", \"candidate\", \"final\"]\n serial: int\n\n\nversion_info: VersionInfo = VersionInfo(major=2, minor=0, micro=0, releaselevel=\"beta\", serial=5)\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "discord/__init__.py"}]}
| 2,276 | 96 |
| gh_patches_debug_14814 | rasdani/github-patches | git_diff | bridgecrewio__checkov-599 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update urllib3: HTTP Header Injection vuln
**Describe the bug**
urllib3 needs to be updated to at least 1.25.9 to fix a high-severity HTTP Header Injection vulnerability. See the Snyk info page [here](https://snyk.io/vuln/SNYK-PYTHON-URLLIB3-1014645).
</issue>
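The accompanying diff bumps the pin in `setup.py` from `urllib3==1.25.7` to `urllib3==1.25.10`, which is past the 1.25.9 release that carries the fix. A small, illustrative check (not part of checkov; it assumes `urllib3` and `packaging` are importable, as they are for a checkov install) that an environment already has a patched urllib3:

```python
import urllib3
from packaging.version import Version

# 1.25.9 is the first release with the HTTP header injection fix, per the
# Snyk advisory linked above.
if Version(urllib3.__version__) < Version("1.25.9"):
    raise SystemExit(f"urllib3 {urllib3.__version__} predates the 1.25.9 security fix")
print(f"urllib3 {urllib3.__version__} includes the header injection fix")
```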
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import logging
3 import os
4 from importlib import util
5 from os import path
6
7 import setuptools
8 from setuptools import setup
9
10 # read the contents of your README file
11 this_directory = path.abspath(path.dirname(__file__))
12 with open(path.join(this_directory, "README.md"), encoding="utf-8") as f:
13 long_description = f.read()
14
15 logger = logging.getLogger(__name__)
16 spec = util.spec_from_file_location(
17 "checkov.version", os.path.join("checkov", "version.py")
18 )
19 # noinspection PyUnresolvedReferences
20 mod = util.module_from_spec(spec)
21 spec.loader.exec_module(mod) # type: ignore
22 version = mod.version # type: ignore
23
24 setup(
25 extras_require={
26 "dev": [
27 "alabaster==0.7.12",
28 "attrs==19.3.0",
29 "babel==2.7.0",
30 "certifi==2019.11.28",
31 "chardet==3.0.4",
32 "coverage==4.5.4",
33 "coverage-badge==1.0.1",
34 "docopt==0.6.2",
35 "docutils==0.15.2",
36 "idna==2.8",
37 "imagesize==1.1.0",
38 "importlib-metadata==1.1.0; python_version < '3.8'",
39 "jinja2==2.10.3",
40 "lark-parser==0.7.8",
41 "markupsafe==1.1.1",
42 "more-itertools==8.0.0",
43 "packaging==19.2",
44 "pluggy==0.13.1",
45 "py==1.8.0",
46 "pygments==2.5.2",
47 "pyparsing==2.4.5",
48 "pytest==5.3.1",
49 "bc-python-hcl2>=0.3.10",
50 "pytz==2019.3",
51 "pyyaml==5.3.1",
52 "requests==2.22.0",
53 "six==1.15.0",
54 "snowballstemmer==2.0.0",
55 "sphinx==2.2.1",
56 "sphinxcontrib-applehelp==1.0.1",
57 "sphinxcontrib-devhelp==1.0.1",
58 "sphinxcontrib-htmlhelp==1.0.2",
59 "sphinxcontrib-jsmath==1.0.1",
60 "sphinxcontrib-qthelp==1.0.2",
61 "sphinxcontrib-serializinghtml==1.1.3",
62 "urllib3==1.25.7",
63 "wcwidth==0.1.7",
64 "zipp==0.6.0",
65 "GitPython==3.1.7",
66 "gitdb==4.0.5"
67 ]
68 },
69 install_requires=[
70 "boto3==1.12.43",
71 "chardet==3.0.4",
72 "colorama==0.4.3",
73 "docopt==0.6.2",
74 "idna==2.8",
75 "jmespath==0.10.0",
76 "junit-xml==1.8",
77 "lark-parser==0.7.8",
78 "bc-python-hcl2>=0.3.11",
79 "pyyaml==5.3.1",
80 "requests==2.22.0",
81 "six==1.15.0",
82 "tabulate==0.8.6",
83 "termcolor==1.1.0",
84 "urllib3==1.25.7",
85 "dpath==1.5.0",
86 "GitPython==3.1.7",
87 "gitdb==4.0.5"
88 ],
89 license="Apache License 2.0",
90 name="checkov",
91 version=version,
92 description="Infrastructure as code static analysis",
93 author="bridgecrew",
94 author_email="[email protected]",
95 url="https://github.com/bridgecrewio/checkov",
96 packages=setuptools.find_packages(exclude=["tests*","integration_tests*"]),
97 scripts=["bin/checkov","bin/checkov.cmd"],
98 long_description=long_description,
99 long_description_content_type="text/markdown",
100 classifiers=[
101 'Environment :: Console',
102 'Intended Audience :: Developers',
103 'Intended Audience :: System Administrators',
104 'Programming Language :: Python :: 3.7',
105 'Topic :: Security',
106 'Topic :: Software Development :: Build Tools'
107 ]
108 )
109
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,7 +59,7 @@
"sphinxcontrib-jsmath==1.0.1",
"sphinxcontrib-qthelp==1.0.2",
"sphinxcontrib-serializinghtml==1.1.3",
- "urllib3==1.25.7",
+ "urllib3==1.25.10",
"wcwidth==0.1.7",
"zipp==0.6.0",
"GitPython==3.1.7",
@@ -81,7 +81,7 @@
"six==1.15.0",
"tabulate==0.8.6",
"termcolor==1.1.0",
- "urllib3==1.25.7",
+ "urllib3==1.25.10",
"dpath==1.5.0",
"GitPython==3.1.7",
"gitdb==4.0.5"
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,7 +59,7 @@\n \"sphinxcontrib-jsmath==1.0.1\",\n \"sphinxcontrib-qthelp==1.0.2\",\n \"sphinxcontrib-serializinghtml==1.1.3\",\n- \"urllib3==1.25.7\",\n+ \"urllib3==1.25.10\",\n \"wcwidth==0.1.7\",\n \"zipp==0.6.0\",\n \"GitPython==3.1.7\",\n@@ -81,7 +81,7 @@\n \"six==1.15.0\",\n \"tabulate==0.8.6\",\n \"termcolor==1.1.0\",\n- \"urllib3==1.25.7\",\n+ \"urllib3==1.25.10\",\n \"dpath==1.5.0\",\n \"GitPython==3.1.7\",\n \"gitdb==4.0.5\"\n", "issue": "Update urllib3: HTTP Header Injection vuln\n**Describe the bug**\r\nurllib3 needs to be updated to at least 1.25.9 to fix a high severity HTTP Header Injection vulnerability. Snyk info page [here](https://snyk.io/vuln/SNYK-PYTHON-URLLIB3-1014645).\n", "before_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"alabaster==0.7.12\",\n \"attrs==19.3.0\",\n \"babel==2.7.0\",\n \"certifi==2019.11.28\",\n \"chardet==3.0.4\",\n \"coverage==4.5.4\",\n \"coverage-badge==1.0.1\",\n \"docopt==0.6.2\",\n \"docutils==0.15.2\",\n \"idna==2.8\",\n \"imagesize==1.1.0\",\n \"importlib-metadata==1.1.0; python_version < '3.8'\",\n \"jinja2==2.10.3\",\n \"lark-parser==0.7.8\",\n \"markupsafe==1.1.1\",\n \"more-itertools==8.0.0\",\n \"packaging==19.2\",\n \"pluggy==0.13.1\",\n \"py==1.8.0\",\n \"pygments==2.5.2\",\n \"pyparsing==2.4.5\",\n \"pytest==5.3.1\",\n \"bc-python-hcl2>=0.3.10\",\n \"pytz==2019.3\",\n \"pyyaml==5.3.1\",\n \"requests==2.22.0\",\n \"six==1.15.0\",\n \"snowballstemmer==2.0.0\",\n \"sphinx==2.2.1\",\n \"sphinxcontrib-applehelp==1.0.1\",\n \"sphinxcontrib-devhelp==1.0.1\",\n \"sphinxcontrib-htmlhelp==1.0.2\",\n \"sphinxcontrib-jsmath==1.0.1\",\n \"sphinxcontrib-qthelp==1.0.2\",\n \"sphinxcontrib-serializinghtml==1.1.3\",\n \"urllib3==1.25.7\",\n \"wcwidth==0.1.7\",\n \"zipp==0.6.0\",\n \"GitPython==3.1.7\",\n \"gitdb==4.0.5\"\n ]\n },\n install_requires=[\n \"boto3==1.12.43\",\n \"chardet==3.0.4\",\n \"colorama==0.4.3\",\n \"docopt==0.6.2\",\n \"idna==2.8\",\n \"jmespath==0.10.0\",\n \"junit-xml==1.8\",\n \"lark-parser==0.7.8\",\n \"bc-python-hcl2>=0.3.11\",\n \"pyyaml==5.3.1\",\n \"requests==2.22.0\",\n \"six==1.15.0\",\n \"tabulate==0.8.6\",\n \"termcolor==1.1.0\",\n \"urllib3==1.25.7\",\n \"dpath==1.5.0\",\n \"GitPython==3.1.7\",\n \"gitdb==4.0.5\"\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\",\"integration_tests*\"]),\n scripts=[\"bin/checkov\",\"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 
'Intended Audience :: System Administrators',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Security',\n 'Topic :: Software Development :: Build Tools'\n ]\n)\n", "path": "setup.py"}]}
| 1,861 | 246 |
| gh_patches_debug_28270 | rasdani/github-patches | git_diff | celery__celery-4744 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MongoDB backend does not support mongodb+srv:// URL's
## Checklist
https://github.com/celery/celery/blob/master/celery/backends/mongodb.py#L143-L146
## Steps to reproduce
Set the MongoDB URL to one like:
```mongodb+srv://mongo.private.corp.example.com/celery?ssl=false```
## Expected behavior
This works.
## Actual behavior
This fails because the URL parsing does not match on `mongodb+srv://`, since it only tries to match the `mongodb://` prefix.
</issue>
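In the backend code below, the bare URL is special-cased as `'mongodb://'` and plain hostnames are later prefixed with `mongodb://`, so an SRV-style URL never matches either check. A minimal sketch of a scheme test that tolerates both forms (illustrative only; the project's actual patch may differ):

```python
# Not celery's actual fix: just the prefix check generalised to both URL
# schemes that pymongo accepts.
MONGODB_SCHEMES = ('mongodb://', 'mongodb+srv://')

def normalize_host(host, port=27017):
    if isinstance(host, str) and not host.startswith(MONGODB_SCHEMES):
        return 'mongodb://{0}:{1}'.format(host, port)
    return host

print(normalize_host('localhost'))
print(normalize_host('mongodb+srv://mongo.private.corp.example.com/celery?ssl=false'))
```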
<code>
[start of celery/backends/mongodb.py]
1 # -*- coding: utf-8 -*-
2 """MongoDB result store backend."""
3 from __future__ import absolute_import, unicode_literals
4
5 from datetime import datetime, timedelta
6
7 from kombu.exceptions import EncodeError
8 from kombu.utils.objects import cached_property
9 from kombu.utils.url import maybe_sanitize_url
10
11 from celery import states
12 from celery.exceptions import ImproperlyConfigured
13 from celery.five import items, string_t
14
15 from .base import BaseBackend
16
17 try:
18 import pymongo
19 except ImportError: # pragma: no cover
20 pymongo = None # noqa
21
22 if pymongo:
23 try:
24 from bson.binary import Binary
25 except ImportError: # pragma: no cover
26 from pymongo.binary import Binary # noqa
27 from pymongo.errors import InvalidDocument # noqa
28 else: # pragma: no cover
29 Binary = None # noqa
30
31 class InvalidDocument(Exception): # noqa
32 pass
33
34 __all__ = ('MongoBackend',)
35
36 BINARY_CODECS = frozenset(['pickle', 'msgpack'])
37
38
39 class MongoBackend(BaseBackend):
40 """MongoDB result backend.
41
42 Raises:
43 celery.exceptions.ImproperlyConfigured:
44 if module :pypi:`pymongo` is not available.
45 """
46
47 mongo_host = None
48 host = 'localhost'
49 port = 27017
50 user = None
51 password = None
52 database_name = 'celery'
53 taskmeta_collection = 'celery_taskmeta'
54 groupmeta_collection = 'celery_groupmeta'
55 max_pool_size = 10
56 options = None
57
58 supports_autoexpire = False
59
60 _connection = None
61
62 def __init__(self, app=None, **kwargs):
63 self.options = {}
64
65 super(MongoBackend, self).__init__(app, **kwargs)
66
67 if not pymongo:
68 raise ImproperlyConfigured(
69 'You need to install the pymongo library to use the '
70 'MongoDB backend.')
71
72 # Set option defaults
73 for key, value in items(self._prepare_client_options()):
74 self.options.setdefault(key, value)
75
76 # update conf with mongo uri data, only if uri was given
77 if self.url:
78 if self.url == 'mongodb://':
79 self.url += 'localhost'
80
81 uri_data = pymongo.uri_parser.parse_uri(self.url)
82 # build the hosts list to create a mongo connection
83 hostslist = [
84 '{0}:{1}'.format(x[0], x[1]) for x in uri_data['nodelist']
85 ]
86 self.user = uri_data['username']
87 self.password = uri_data['password']
88 self.mongo_host = hostslist
89 if uri_data['database']:
90 # if no database is provided in the uri, use default
91 self.database_name = uri_data['database']
92
93 self.options.update(uri_data['options'])
94
95 # update conf with specific settings
96 config = self.app.conf.get('mongodb_backend_settings')
97 if config is not None:
98 if not isinstance(config, dict):
99 raise ImproperlyConfigured(
100 'MongoDB backend settings should be grouped in a dict')
101 config = dict(config) # don't modify original
102
103 if 'host' in config or 'port' in config:
104 # these should take over uri conf
105 self.mongo_host = None
106
107 self.host = config.pop('host', self.host)
108 self.port = config.pop('port', self.port)
109 self.mongo_host = config.pop('mongo_host', self.mongo_host)
110 self.user = config.pop('user', self.user)
111 self.password = config.pop('password', self.password)
112 self.database_name = config.pop('database', self.database_name)
113 self.taskmeta_collection = config.pop(
114 'taskmeta_collection', self.taskmeta_collection,
115 )
116 self.groupmeta_collection = config.pop(
117 'groupmeta_collection', self.groupmeta_collection,
118 )
119
120 self.options.update(config.pop('options', {}))
121 self.options.update(config)
122
123 def _prepare_client_options(self):
124 if pymongo.version_tuple >= (3,):
125 return {'maxPoolSize': self.max_pool_size}
126 else: # pragma: no cover
127 return {'max_pool_size': self.max_pool_size,
128 'auto_start_request': False}
129
130 def _get_connection(self):
131 """Connect to the MongoDB server."""
132 if self._connection is None:
133 from pymongo import MongoClient
134
135 host = self.mongo_host
136 if not host:
137 # The first pymongo.Connection() argument (host) can be
138 # a list of ['host:port'] elements or a mongodb connection
139 # URI. If this is the case, don't use self.port
140 # but let pymongo get the port(s) from the URI instead.
141 # This enables the use of replica sets and sharding.
142 # See pymongo.Connection() for more info.
143 host = self.host
144 if isinstance(host, string_t) \
145 and not host.startswith('mongodb://'):
146 host = 'mongodb://{0}:{1}'.format(host, self.port)
147 # don't change self.options
148 conf = dict(self.options)
149 conf['host'] = host
150
151 self._connection = MongoClient(**conf)
152
153 return self._connection
154
155 def encode(self, data):
156 if self.serializer == 'bson':
157 # mongodb handles serialization
158 return data
159 payload = super(MongoBackend, self).encode(data)
160
161 # serializer which are in a unsupported format (pickle/binary)
162 if self.serializer in BINARY_CODECS:
163 payload = Binary(payload)
164 return payload
165
166 def decode(self, data):
167 if self.serializer == 'bson':
168 return data
169 return super(MongoBackend, self).decode(data)
170
171 def _store_result(self, task_id, result, state,
172 traceback=None, request=None, **kwargs):
173 """Store return value and state of an executed task."""
174 meta = {
175 '_id': task_id,
176 'status': state,
177 'result': self.encode(result),
178 'date_done': datetime.utcnow(),
179 'traceback': self.encode(traceback),
180 'children': self.encode(
181 self.current_task_children(request),
182 ),
183 }
184 if request and getattr(request, 'parent_id', None):
185 meta['parent_id'] = request.parent_id
186
187 try:
188 self.collection.save(meta)
189 except InvalidDocument as exc:
190 raise EncodeError(exc)
191
192 return result
193
194 def _get_task_meta_for(self, task_id):
195 """Get task meta-data for a task by id."""
196 obj = self.collection.find_one({'_id': task_id})
197 if obj:
198 return self.meta_from_decoded({
199 'task_id': obj['_id'],
200 'status': obj['status'],
201 'result': self.decode(obj['result']),
202 'date_done': obj['date_done'],
203 'traceback': self.decode(obj['traceback']),
204 'children': self.decode(obj['children']),
205 })
206 return {'status': states.PENDING, 'result': None}
207
208 def _save_group(self, group_id, result):
209 """Save the group result."""
210 self.group_collection.save({
211 '_id': group_id,
212 'result': self.encode([i.id for i in result]),
213 'date_done': datetime.utcnow(),
214 })
215 return result
216
217 def _restore_group(self, group_id):
218 """Get the result for a group by id."""
219 obj = self.group_collection.find_one({'_id': group_id})
220 if obj:
221 return {
222 'task_id': obj['_id'],
223 'date_done': obj['date_done'],
224 'result': [
225 self.app.AsyncResult(task)
226 for task in self.decode(obj['result'])
227 ],
228 }
229
230 def _delete_group(self, group_id):
231 """Delete a group by id."""
232 self.group_collection.remove({'_id': group_id})
233
234 def _forget(self, task_id):
235 """Remove result from MongoDB.
236
237 Raises:
238 pymongo.exceptions.OperationsError:
239 if the task_id could not be removed.
240 """
241 # By using safe=True, this will wait until it receives a response from
242 # the server. Likewise, it will raise an OperationsError if the
243 # response was unable to be completed.
244 self.collection.remove({'_id': task_id})
245
246 def cleanup(self):
247 """Delete expired meta-data."""
248 self.collection.remove(
249 {'date_done': {'$lt': self.app.now() - self.expires_delta}},
250 )
251 self.group_collection.remove(
252 {'date_done': {'$lt': self.app.now() - self.expires_delta}},
253 )
254
255 def __reduce__(self, args=(), kwargs={}):
256 return super(MongoBackend, self).__reduce__(
257 args, dict(kwargs, expires=self.expires, url=self.url))
258
259 def _get_database(self):
260 conn = self._get_connection()
261 db = conn[self.database_name]
262 if self.user and self.password:
263 if not db.authenticate(self.user, self.password):
264 raise ImproperlyConfigured(
265 'Invalid MongoDB username or password.')
266 return db
267
268 @cached_property
269 def database(self):
270 """Get database from MongoDB connection.
271
272 performs authentication if necessary.
273 """
274 return self._get_database()
275
276 @cached_property
277 def collection(self):
278 """Get the meta-data task collection."""
279 collection = self.database[self.taskmeta_collection]
280
281 # Ensure an index on date_done is there, if not process the index
282 # in the background. Once completed cleanup will be much faster
283 collection.ensure_index('date_done', background='true')
284 return collection
285
286 @cached_property
287 def group_collection(self):
288 """Get the meta-data task collection."""
289 collection = self.database[self.groupmeta_collection]
290
291 # Ensure an index on date_done is there, if not process the index
292 # in the background. Once completed cleanup will be much faster
293 collection.ensure_index('date_done', background='true')
294 return collection
295
296 @cached_property
297 def expires_delta(self):
298 return timedelta(seconds=self.expires)
299
300 def as_uri(self, include_password=False):
301 """Return the backend as an URI.
302
303 Arguments:
304 include_password (bool): Password censored if disabled.
305 """
306 if not self.url:
307 return 'mongodb://'
308 if include_password:
309 return self.url
310
311 if ',' not in self.url:
312 return maybe_sanitize_url(self.url)
313
314 uri1, remainder = self.url.split(',', 1)
315 return ','.join([maybe_sanitize_url(uri1), remainder])
316
[end of celery/backends/mongodb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py
--- a/celery/backends/mongodb.py
+++ b/celery/backends/mongodb.py
@@ -6,7 +6,7 @@
from kombu.exceptions import EncodeError
from kombu.utils.objects import cached_property
-from kombu.utils.url import maybe_sanitize_url
+from kombu.utils.url import maybe_sanitize_url, urlparse
from celery import states
from celery.exceptions import ImproperlyConfigured
@@ -75,8 +75,7 @@
# update conf with mongo uri data, only if uri was given
if self.url:
- if self.url == 'mongodb://':
- self.url += 'localhost'
+ self.url = self._ensure_mongodb_uri_compliance(self.url)
uri_data = pymongo.uri_parser.parse_uri(self.url)
# build the hosts list to create a mongo connection
@@ -120,6 +119,17 @@
self.options.update(config.pop('options', {}))
self.options.update(config)
+ @staticmethod
+ def _ensure_mongodb_uri_compliance(url):
+ parsed_url = urlparse(url)
+ if not parsed_url.scheme.startswith('mongodb'):
+ url = 'mongodb+{}'.format(url)
+
+ if url == 'mongodb://':
+ url += 'localhost'
+
+ return url
+
def _prepare_client_options(self):
if pymongo.version_tuple >= (3,):
return {'maxPoolSize': self.max_pool_size}
|
{"golden_diff": "diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py\n--- a/celery/backends/mongodb.py\n+++ b/celery/backends/mongodb.py\n@@ -6,7 +6,7 @@\n \n from kombu.exceptions import EncodeError\n from kombu.utils.objects import cached_property\n-from kombu.utils.url import maybe_sanitize_url\n+from kombu.utils.url import maybe_sanitize_url, urlparse\n \n from celery import states\n from celery.exceptions import ImproperlyConfigured\n@@ -75,8 +75,7 @@\n \n # update conf with mongo uri data, only if uri was given\n if self.url:\n- if self.url == 'mongodb://':\n- self.url += 'localhost'\n+ self.url = self._ensure_mongodb_uri_compliance(self.url)\n \n uri_data = pymongo.uri_parser.parse_uri(self.url)\n # build the hosts list to create a mongo connection\n@@ -120,6 +119,17 @@\n self.options.update(config.pop('options', {}))\n self.options.update(config)\n \n+ @staticmethod\n+ def _ensure_mongodb_uri_compliance(url):\n+ parsed_url = urlparse(url)\n+ if not parsed_url.scheme.startswith('mongodb'):\n+ url = 'mongodb+{}'.format(url)\n+\n+ if url == 'mongodb://':\n+ url += 'localhost'\n+\n+ return url\n+\n def _prepare_client_options(self):\n if pymongo.version_tuple >= (3,):\n return {'maxPoolSize': self.max_pool_size}\n", "issue": "MongoDB backend does not support mongodb+srv:// URL's\n## Checklist\r\n\r\nhttps://github.com/celery/celery/blob/master/celery/backends/mongodb.py#L143-L146\r\n\r\n## Steps to reproduce\r\n\r\nSet the MongoDB URL to one like:\r\n\r\n```mongodb+srv://mongo.private.corp.example.com/celery?ssl=false```\r\n\r\n## Expected behavior\r\n\r\nThis works.\r\n\r\n## Actual behavior\r\n\r\nThis fails because the URL parsing does not match on `mongodb+srv://` instead trying to match `mongodb://` only.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"MongoDB result store backend.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nfrom datetime import datetime, timedelta\n\nfrom kombu.exceptions import EncodeError\nfrom kombu.utils.objects import cached_property\nfrom kombu.utils.url import maybe_sanitize_url\n\nfrom celery import states\nfrom celery.exceptions import ImproperlyConfigured\nfrom celery.five import items, string_t\n\nfrom .base import BaseBackend\n\ntry:\n import pymongo\nexcept ImportError: # pragma: no cover\n pymongo = None # noqa\n\nif pymongo:\n try:\n from bson.binary import Binary\n except ImportError: # pragma: no cover\n from pymongo.binary import Binary # noqa\n from pymongo.errors import InvalidDocument # noqa\nelse: # pragma: no cover\n Binary = None # noqa\n\n class InvalidDocument(Exception): # noqa\n pass\n\n__all__ = ('MongoBackend',)\n\nBINARY_CODECS = frozenset(['pickle', 'msgpack'])\n\n\nclass MongoBackend(BaseBackend):\n \"\"\"MongoDB result backend.\n\n Raises:\n celery.exceptions.ImproperlyConfigured:\n if module :pypi:`pymongo` is not available.\n \"\"\"\n\n mongo_host = None\n host = 'localhost'\n port = 27017\n user = None\n password = None\n database_name = 'celery'\n taskmeta_collection = 'celery_taskmeta'\n groupmeta_collection = 'celery_groupmeta'\n max_pool_size = 10\n options = None\n\n supports_autoexpire = False\n\n _connection = None\n\n def __init__(self, app=None, **kwargs):\n self.options = {}\n\n super(MongoBackend, self).__init__(app, **kwargs)\n\n if not pymongo:\n raise ImproperlyConfigured(\n 'You need to install the pymongo library to use the '\n 'MongoDB backend.')\n\n # Set option defaults\n for key, value in items(self._prepare_client_options()):\n 
self.options.setdefault(key, value)\n\n # update conf with mongo uri data, only if uri was given\n if self.url:\n if self.url == 'mongodb://':\n self.url += 'localhost'\n\n uri_data = pymongo.uri_parser.parse_uri(self.url)\n # build the hosts list to create a mongo connection\n hostslist = [\n '{0}:{1}'.format(x[0], x[1]) for x in uri_data['nodelist']\n ]\n self.user = uri_data['username']\n self.password = uri_data['password']\n self.mongo_host = hostslist\n if uri_data['database']:\n # if no database is provided in the uri, use default\n self.database_name = uri_data['database']\n\n self.options.update(uri_data['options'])\n\n # update conf with specific settings\n config = self.app.conf.get('mongodb_backend_settings')\n if config is not None:\n if not isinstance(config, dict):\n raise ImproperlyConfigured(\n 'MongoDB backend settings should be grouped in a dict')\n config = dict(config) # don't modify original\n\n if 'host' in config or 'port' in config:\n # these should take over uri conf\n self.mongo_host = None\n\n self.host = config.pop('host', self.host)\n self.port = config.pop('port', self.port)\n self.mongo_host = config.pop('mongo_host', self.mongo_host)\n self.user = config.pop('user', self.user)\n self.password = config.pop('password', self.password)\n self.database_name = config.pop('database', self.database_name)\n self.taskmeta_collection = config.pop(\n 'taskmeta_collection', self.taskmeta_collection,\n )\n self.groupmeta_collection = config.pop(\n 'groupmeta_collection', self.groupmeta_collection,\n )\n\n self.options.update(config.pop('options', {}))\n self.options.update(config)\n\n def _prepare_client_options(self):\n if pymongo.version_tuple >= (3,):\n return {'maxPoolSize': self.max_pool_size}\n else: # pragma: no cover\n return {'max_pool_size': self.max_pool_size,\n 'auto_start_request': False}\n\n def _get_connection(self):\n \"\"\"Connect to the MongoDB server.\"\"\"\n if self._connection is None:\n from pymongo import MongoClient\n\n host = self.mongo_host\n if not host:\n # The first pymongo.Connection() argument (host) can be\n # a list of ['host:port'] elements or a mongodb connection\n # URI. 
If this is the case, don't use self.port\n # but let pymongo get the port(s) from the URI instead.\n # This enables the use of replica sets and sharding.\n # See pymongo.Connection() for more info.\n host = self.host\n if isinstance(host, string_t) \\\n and not host.startswith('mongodb://'):\n host = 'mongodb://{0}:{1}'.format(host, self.port)\n # don't change self.options\n conf = dict(self.options)\n conf['host'] = host\n\n self._connection = MongoClient(**conf)\n\n return self._connection\n\n def encode(self, data):\n if self.serializer == 'bson':\n # mongodb handles serialization\n return data\n payload = super(MongoBackend, self).encode(data)\n\n # serializer which are in a unsupported format (pickle/binary)\n if self.serializer in BINARY_CODECS:\n payload = Binary(payload)\n return payload\n\n def decode(self, data):\n if self.serializer == 'bson':\n return data\n return super(MongoBackend, self).decode(data)\n\n def _store_result(self, task_id, result, state,\n traceback=None, request=None, **kwargs):\n \"\"\"Store return value and state of an executed task.\"\"\"\n meta = {\n '_id': task_id,\n 'status': state,\n 'result': self.encode(result),\n 'date_done': datetime.utcnow(),\n 'traceback': self.encode(traceback),\n 'children': self.encode(\n self.current_task_children(request),\n ),\n }\n if request and getattr(request, 'parent_id', None):\n meta['parent_id'] = request.parent_id\n\n try:\n self.collection.save(meta)\n except InvalidDocument as exc:\n raise EncodeError(exc)\n\n return result\n\n def _get_task_meta_for(self, task_id):\n \"\"\"Get task meta-data for a task by id.\"\"\"\n obj = self.collection.find_one({'_id': task_id})\n if obj:\n return self.meta_from_decoded({\n 'task_id': obj['_id'],\n 'status': obj['status'],\n 'result': self.decode(obj['result']),\n 'date_done': obj['date_done'],\n 'traceback': self.decode(obj['traceback']),\n 'children': self.decode(obj['children']),\n })\n return {'status': states.PENDING, 'result': None}\n\n def _save_group(self, group_id, result):\n \"\"\"Save the group result.\"\"\"\n self.group_collection.save({\n '_id': group_id,\n 'result': self.encode([i.id for i in result]),\n 'date_done': datetime.utcnow(),\n })\n return result\n\n def _restore_group(self, group_id):\n \"\"\"Get the result for a group by id.\"\"\"\n obj = self.group_collection.find_one({'_id': group_id})\n if obj:\n return {\n 'task_id': obj['_id'],\n 'date_done': obj['date_done'],\n 'result': [\n self.app.AsyncResult(task)\n for task in self.decode(obj['result'])\n ],\n }\n\n def _delete_group(self, group_id):\n \"\"\"Delete a group by id.\"\"\"\n self.group_collection.remove({'_id': group_id})\n\n def _forget(self, task_id):\n \"\"\"Remove result from MongoDB.\n\n Raises:\n pymongo.exceptions.OperationsError:\n if the task_id could not be removed.\n \"\"\"\n # By using safe=True, this will wait until it receives a response from\n # the server. 
Likewise, it will raise an OperationsError if the\n # response was unable to be completed.\n self.collection.remove({'_id': task_id})\n\n def cleanup(self):\n \"\"\"Delete expired meta-data.\"\"\"\n self.collection.remove(\n {'date_done': {'$lt': self.app.now() - self.expires_delta}},\n )\n self.group_collection.remove(\n {'date_done': {'$lt': self.app.now() - self.expires_delta}},\n )\n\n def __reduce__(self, args=(), kwargs={}):\n return super(MongoBackend, self).__reduce__(\n args, dict(kwargs, expires=self.expires, url=self.url))\n\n def _get_database(self):\n conn = self._get_connection()\n db = conn[self.database_name]\n if self.user and self.password:\n if not db.authenticate(self.user, self.password):\n raise ImproperlyConfigured(\n 'Invalid MongoDB username or password.')\n return db\n\n @cached_property\n def database(self):\n \"\"\"Get database from MongoDB connection.\n\n performs authentication if necessary.\n \"\"\"\n return self._get_database()\n\n @cached_property\n def collection(self):\n \"\"\"Get the meta-data task collection.\"\"\"\n collection = self.database[self.taskmeta_collection]\n\n # Ensure an index on date_done is there, if not process the index\n # in the background. Once completed cleanup will be much faster\n collection.ensure_index('date_done', background='true')\n return collection\n\n @cached_property\n def group_collection(self):\n \"\"\"Get the meta-data task collection.\"\"\"\n collection = self.database[self.groupmeta_collection]\n\n # Ensure an index on date_done is there, if not process the index\n # in the background. Once completed cleanup will be much faster\n collection.ensure_index('date_done', background='true')\n return collection\n\n @cached_property\n def expires_delta(self):\n return timedelta(seconds=self.expires)\n\n def as_uri(self, include_password=False):\n \"\"\"Return the backend as an URI.\n\n Arguments:\n include_password (bool): Password censored if disabled.\n \"\"\"\n if not self.url:\n return 'mongodb://'\n if include_password:\n return self.url\n\n if ',' not in self.url:\n return maybe_sanitize_url(self.url)\n\n uri1, remainder = self.url.split(',', 1)\n return ','.join([maybe_sanitize_url(uri1), remainder])\n", "path": "celery/backends/mongodb.py"}]}
| 3,761 | 338 |
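For illustration, the `_ensure_mongodb_uri_compliance` helper introduced above only normalizes the URL string before it is handed to `pymongo.uri_parser.parse_uri`. A minimal standalone sketch of the same logic, using `urllib.parse.urlparse` in place of the patch's `kombu.utils.url.urlparse` and made-up example URLs, behaves like this:

```python
# Standalone sketch of the helper added by the patch above; the URLs are examples only
# and urllib.parse stands in for kombu.utils.url.urlparse.
from urllib.parse import urlparse

def ensure_mongodb_uri_compliance(url):
    if not urlparse(url).scheme.startswith('mongodb'):
        url = 'mongodb+{}'.format(url)
    if url == 'mongodb://':
        url += 'localhost'
    return url

print(ensure_mongodb_uri_compliance('mongodb://'))
# mongodb://localhost
print(ensure_mongodb_uri_compliance('mongodb+srv://mongo.example.com/celery?ssl=false'))
# mongodb+srv://mongo.example.com/celery?ssl=false  (scheme already starts with "mongodb")
print(ensure_mongodb_uri_compliance('srv://mongo.example.com/celery'))
# mongodb+srv://mongo.example.com/celery
```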
gh_patches_debug_1412
|
rasdani/github-patches
|
git_diff
|
mne-tools__mne-python-3718
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ENH?: News / updates
It seems like we should have a little news/updates section of one-liners on the website, including things like:
1. Release notifications
2. Upcoming MNE-Python workshops
3. Upcoming coding sprints
If people agree I can put some old ones (last couple of release dates), and we can add to it as announcement-worthy things come up.
</issue>
<code>
[start of doc/sphinxext/cited_mne.py]
1 #!/usr/bin/env python
2 """Parse google scholar -> rst for MNE citations.
3
4 Example usage::
5
6 $ cited_mne --backend selenium --clear
7
8 """
9
10 # Author: Mainak Jas <[email protected]>
11 # License : BSD 3-clause
12
13 # Parts of this code were copied from google_scholar_parser
14 # (https://github.com/carlosp420/google_scholar_parser)
15
16 import os
17 import os.path as op
18 import re
19 import time
20 import random
21 import requests
22
23 import numpy as np
24 from joblib import Memory
25 from BeautifulSoup import BeautifulSoup
26
27 from mne.externals.tempita import Template
28 from mne.commands.utils import get_optparser
29
30 # cache to avoid making too many calls to Google Scholar
31 cachedir = 'cachedir'
32 if not os.path.exists(cachedir):
33 os.mkdir(cachedir)
34 mem = Memory(cachedir=cachedir, verbose=2)
35
36 UA = ('Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) '
37 'Gecko/20100913 Firefox/3.6.9')
38
39 # ##### Templates for citations #####
40 html = (u""".. _cited
41
42 Publications from MNE users
43 ===========================
44
45 Papers citing MNE as extracted from Google Scholar (on %s).
46
47 """)
48
49 cite_template = Template(u"""
50 {{for ii, publication in enumerate(publications)}}
51 {{ii + 1}}. {{publication}}.
52 {{endfor}}
53
54 """)
55
56
57 def parse_soup_page(soup):
58 """Parse the page using BeautifulSoup.
59
60 Parameters
61 ----------
62 soup : instance of BeautifulSoup
63 The page to be parsed.
64
65 Returns
66 -------
67 titles : list
68 The article titles.
69 authors : list
70 The name of the authors.
71 links : list
72 Hyperlinks to the articles.
73 """
74 titles, authors, links = list(), list(), list()
75 for div in soup.findAll('div'):
76 if div.name == "div" and div.get('class') == "gs_ri":
77 links.append(div.a['href'])
78 div_pub = div.findAll('div')
79 for d in div_pub:
80 if d.name == 'div' and d.get('class') == 'gs_a':
81 authors.append(d.text)
82 titles.append(div.a.text)
83 return titles, authors, links
84
85
86 def get_total_citations(soup):
87 """Get total citations."""
88 results = soup.find('div', attrs={'id': 'gs_ab_md'}).contents[0]
89 matches = re.search("About\s(\d+)\s", results)
90 if matches:
91 hits = matches.groups()[0]
92 return hits
93
94
95 def _get_soup(url, backend='selenium'):
96 """Get BeautifulSoup object from url.
97
98 Parameters
99 ----------
100 url : str
101 The url to fetch.
102 backend : 'selenium' | 'requests'
103 Use selenium by default because google can ask for captcha. For
104 'selenium' backend Firefox must be installed.
105
106 Returns
107 -------
108 soup : instance of BeautifulSoup
109 The soup page from the url.
110 """
111 if backend == 'requests':
112 req = requests.get(url, headers={'User-Agent': UA})
113 html_doc = req.text
114 soup = BeautifulSoup(html_doc)
115 if soup.find('div', attrs={'id': 'gs_ab_md'}) is None:
116 print('Falling back on to selenium backend due to captcha.')
117 backend = 'selenium'
118
119 if backend == 'selenium':
120 from selenium import webdriver
121 import selenium.webdriver.support.ui as ui
122
123 driver = webdriver.Firefox()
124 # give enough time to solve captcha.
125 wait = ui.WebDriverWait(driver, 200)
126
127 driver.get(url)
128 wait.until(lambda driver: driver.find_elements_by_id('gs_ab_md'))
129
130 html_doc = driver.page_source
131 soup = BeautifulSoup(html_doc)
132 driver.close()
133
134 return soup
135
136
137 @mem.cache
138 def get_citing_articles(cites_url, backend):
139 """Get the citing articles.
140
141 Parameters
142 ----------
143 cites_url: str
144 A citation url from Google Scholar.
145 backend : 'selenium' | 'requests'
146 Use selenium by default because google can ask for captcha. For
147 'selenium' backend Firefox must be installed.
148
149
150 Returns
151 -------
152 titles : list
153 The article titles.
154 authors : list
155 The name of the authors.
156 links : list
157 Hyperlinks to the articles.
158 """
159 n = random.random() * 5
160 time.sleep(n)
161 print("\nSleeping: {0} seconds".format(n))
162
163 # GS seems to allow only 20 hits per page!
164 cites_url += "&num=20"
165 soup = _get_soup(cites_url, backend=backend)
166 hits = get_total_citations(soup)
167 print("Got a total of {0} citations".format(hits))
168
169 hits = int(hits)
170 index = 0
171 titles, authors, links = list(), list(), list()
172 while hits > 1:
173 n = random.random() * 2
174 time.sleep(n)
175 if index > 0:
176 url = cites_url + "&start=" + str(index)
177 else:
178 url = cites_url
179 index += 20
180 hits -= 20
181 print("{0} more citations to process".format(hits))
182 soup = soup = _get_soup(url, backend=backend)
183 title, author, link = parse_soup_page(soup)
184 for this_title, this_author, this_link in zip(title, author, link):
185 titles.append(this_title)
186 authors.append(this_author)
187 links.append(this_link)
188
189 return titles, authors, links
190
191 if __name__ == '__main__':
192 parser = get_optparser(__file__)
193 parser.add_option("-c", "--clear", dest="clear", action='store_true',
194 help="if True, clear the cache.", default=False)
195 parser.add_option("-b", "--backend", dest="backend",
196 help="backend for parsing (selenium | requests)",
197 default='requests')
198 options, args = parser.parse_args()
199 backend, clear = options.backend, options.clear
200
201 if clear:
202 mem.clear()
203
204 random.seed()
205 gen_date = time.strftime("%B %d, %Y")
206 html = html % gen_date
207
208 url_tails = ['1521584321377182930', '12188330066413208874']
209 papers = ['MEG and EEG data analysis with MNE-Python',
210 'MNE software for processing MEG and EEG data']
211
212 publications = list()
213 for url_tail, paper in zip(url_tails, papers):
214 titles, authors, links = get_citing_articles(
215 'https://scholar.google.co.in/scholar?cites=%s'
216 % url_tail, backend=backend)
217
218 this_publication = list()
219 for ii in range(len(titles)):
220 pub = '`%s. <%s>`_. %s' % (titles[ii], links[ii], authors[ii])
221 this_publication.append(pub)
222
223 this_publication = [p.encode('utf8') for p in this_publication]
224 publications.append(this_publication)
225
226 # get a union of the citations for the two papers, sorted in
227 # alphabetic order
228 publications = np.union1d(publications[1], publications[0]).tolist()
229
230 # sort by year of publication
231 years = list()
232 for pub in publications:
233 m = re.search('\d{4} -', pub)
234 if m is None:
235 years.append(-1)
236 else:
237 years.append(int(m.group(0)[:-2]))
238 order = np.argsort(years)[::-1]
239 publications = [publications[idx] for idx in order]
240
241 # filter out publications not containing (http://, https://, ftp://)
242 publications = [p for p in publications if
243 any(sub in p for sub in ('http://', 'https://', 'ftp://'))]
244
245 # create rst & cleanup
246 this_html = cite_template.substitute(publications=publications)
247 this_html = this_html.replace('…', '...')
248 html += this_html
249
250 # output an rst file
251 with open(op.join('..', 'cited.rst'), 'w') as f:
252 f.write(html.encode('utf8'))
253
[end of doc/sphinxext/cited_mne.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/doc/sphinxext/cited_mne.py b/doc/sphinxext/cited_mne.py
--- a/doc/sphinxext/cited_mne.py
+++ b/doc/sphinxext/cited_mne.py
@@ -39,8 +39,8 @@
# ##### Templates for citations #####
html = (u""".. _cited
-Publications from MNE users
-===========================
+Publications by users
+=====================
Papers citing MNE as extracted from Google Scholar (on %s).
|
{"golden_diff": "diff --git a/doc/sphinxext/cited_mne.py b/doc/sphinxext/cited_mne.py\n--- a/doc/sphinxext/cited_mne.py\n+++ b/doc/sphinxext/cited_mne.py\n@@ -39,8 +39,8 @@\n # ##### Templates for citations #####\n html = (u\"\"\".. _cited\n \n-Publications from MNE users\n-===========================\n+Publications by users\n+=====================\n \n Papers citing MNE as extracted from Google Scholar (on %s).\n", "issue": "ENH?: News / updates\nIt seems like we should have a little news/updates section of one-liners on the website, including things like:\n1. Release notifications\n2. Upcoming MNE-Python workshops\n3. Upcoming coding sprints\n\nIf people agree I can put some old ones (last couple of release dates), and we can add to it as announcement-worthy things come up.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Parse google scholar -> rst for MNE citations.\n\nExample usage::\n\n $ cited_mne --backend selenium --clear\n\n\"\"\"\n\n# Author: Mainak Jas <[email protected]>\n# License : BSD 3-clause\n\n# Parts of this code were copied from google_scholar_parser\n# (https://github.com/carlosp420/google_scholar_parser)\n\nimport os\nimport os.path as op\nimport re\nimport time\nimport random\nimport requests\n\nimport numpy as np\nfrom joblib import Memory\nfrom BeautifulSoup import BeautifulSoup\n\nfrom mne.externals.tempita import Template\nfrom mne.commands.utils import get_optparser\n\n# cache to avoid making too many calls to Google Scholar\ncachedir = 'cachedir'\nif not os.path.exists(cachedir):\n os.mkdir(cachedir)\nmem = Memory(cachedir=cachedir, verbose=2)\n\nUA = ('Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.2.9) '\n 'Gecko/20100913 Firefox/3.6.9')\n\n# ##### Templates for citations #####\nhtml = (u\"\"\".. _cited\n\nPublications from MNE users\n===========================\n\nPapers citing MNE as extracted from Google Scholar (on %s).\n\n\"\"\")\n\ncite_template = Template(u\"\"\"\n{{for ii, publication in enumerate(publications)}}\n{{ii + 1}}. {{publication}}.\n{{endfor}}\n\n\"\"\")\n\n\ndef parse_soup_page(soup):\n \"\"\"Parse the page using BeautifulSoup.\n\n Parameters\n ----------\n soup : instance of BeautifulSoup\n The page to be parsed.\n\n Returns\n -------\n titles : list\n The article titles.\n authors : list\n The name of the authors.\n links : list\n Hyperlinks to the articles.\n \"\"\"\n titles, authors, links = list(), list(), list()\n for div in soup.findAll('div'):\n if div.name == \"div\" and div.get('class') == \"gs_ri\":\n links.append(div.a['href'])\n div_pub = div.findAll('div')\n for d in div_pub:\n if d.name == 'div' and d.get('class') == 'gs_a':\n authors.append(d.text)\n titles.append(div.a.text)\n return titles, authors, links\n\n\ndef get_total_citations(soup):\n \"\"\"Get total citations.\"\"\"\n results = soup.find('div', attrs={'id': 'gs_ab_md'}).contents[0]\n matches = re.search(\"About\\s(\\d+)\\s\", results)\n if matches:\n hits = matches.groups()[0]\n return hits\n\n\ndef _get_soup(url, backend='selenium'):\n \"\"\"Get BeautifulSoup object from url.\n\n Parameters\n ----------\n url : str\n The url to fetch.\n backend : 'selenium' | 'requests'\n Use selenium by default because google can ask for captcha. 
For\n 'selenium' backend Firefox must be installed.\n\n Returns\n -------\n soup : instance of BeautifulSoup\n The soup page from the url.\n \"\"\"\n if backend == 'requests':\n req = requests.get(url, headers={'User-Agent': UA})\n html_doc = req.text\n soup = BeautifulSoup(html_doc)\n if soup.find('div', attrs={'id': 'gs_ab_md'}) is None:\n print('Falling back on to selenium backend due to captcha.')\n backend = 'selenium'\n\n if backend == 'selenium':\n from selenium import webdriver\n import selenium.webdriver.support.ui as ui\n\n driver = webdriver.Firefox()\n # give enough time to solve captcha.\n wait = ui.WebDriverWait(driver, 200)\n\n driver.get(url)\n wait.until(lambda driver: driver.find_elements_by_id('gs_ab_md'))\n\n html_doc = driver.page_source\n soup = BeautifulSoup(html_doc)\n driver.close()\n\n return soup\n\n\[email protected]\ndef get_citing_articles(cites_url, backend):\n \"\"\"Get the citing articles.\n\n Parameters\n ----------\n cites_url: str\n A citation url from Google Scholar.\n backend : 'selenium' | 'requests'\n Use selenium by default because google can ask for captcha. For\n 'selenium' backend Firefox must be installed.\n\n\n Returns\n -------\n titles : list\n The article titles.\n authors : list\n The name of the authors.\n links : list\n Hyperlinks to the articles.\n \"\"\"\n n = random.random() * 5\n time.sleep(n)\n print(\"\\nSleeping: {0} seconds\".format(n))\n\n # GS seems to allow only 20 hits per page!\n cites_url += \"&num=20\"\n soup = _get_soup(cites_url, backend=backend)\n hits = get_total_citations(soup)\n print(\"Got a total of {0} citations\".format(hits))\n\n hits = int(hits)\n index = 0\n titles, authors, links = list(), list(), list()\n while hits > 1:\n n = random.random() * 2\n time.sleep(n)\n if index > 0:\n url = cites_url + \"&start=\" + str(index)\n else:\n url = cites_url\n index += 20\n hits -= 20\n print(\"{0} more citations to process\".format(hits))\n soup = soup = _get_soup(url, backend=backend)\n title, author, link = parse_soup_page(soup)\n for this_title, this_author, this_link in zip(title, author, link):\n titles.append(this_title)\n authors.append(this_author)\n links.append(this_link)\n\n return titles, authors, links\n\nif __name__ == '__main__':\n parser = get_optparser(__file__)\n parser.add_option(\"-c\", \"--clear\", dest=\"clear\", action='store_true',\n help=\"if True, clear the cache.\", default=False)\n parser.add_option(\"-b\", \"--backend\", dest=\"backend\",\n help=\"backend for parsing (selenium | requests)\",\n default='requests')\n options, args = parser.parse_args()\n backend, clear = options.backend, options.clear\n\n if clear:\n mem.clear()\n\n random.seed()\n gen_date = time.strftime(\"%B %d, %Y\")\n html = html % gen_date\n\n url_tails = ['1521584321377182930', '12188330066413208874']\n papers = ['MEG and EEG data analysis with MNE-Python',\n 'MNE software for processing MEG and EEG data']\n\n publications = list()\n for url_tail, paper in zip(url_tails, papers):\n titles, authors, links = get_citing_articles(\n 'https://scholar.google.co.in/scholar?cites=%s'\n % url_tail, backend=backend)\n\n this_publication = list()\n for ii in range(len(titles)):\n pub = '`%s. <%s>`_. 
%s' % (titles[ii], links[ii], authors[ii])\n this_publication.append(pub)\n\n this_publication = [p.encode('utf8') for p in this_publication]\n publications.append(this_publication)\n\n # get a union of the citations for the two papers, sorted in\n # alphabetic order\n publications = np.union1d(publications[1], publications[0]).tolist()\n\n # sort by year of publication\n years = list()\n for pub in publications:\n m = re.search('\\d{4} -', pub)\n if m is None:\n years.append(-1)\n else:\n years.append(int(m.group(0)[:-2]))\n order = np.argsort(years)[::-1]\n publications = [publications[idx] for idx in order]\n\n # filter out publications not containing (http://, https://, ftp://)\n publications = [p for p in publications if\n any(sub in p for sub in ('http://', 'https://', 'ftp://'))]\n\n # create rst & cleanup\n this_html = cite_template.substitute(publications=publications)\n this_html = this_html.replace('…', '...')\n html += this_html\n\n # output an rst file\n with open(op.join('..', 'cited.rst'), 'w') as f:\n f.write(html.encode('utf8'))\n", "path": "doc/sphinxext/cited_mne.py"}]}
| 3,149 | 112 |
gh_patches_debug_16857
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-4882
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
c7n_mailer, AWS not installing Lambda, no logs, no errors
I have tried to set up/install the c7n_mailer Lambda on our AWS account according to the docs. I have tried it from my Mac and from Docker images (in a Jenkins pipeline), to no avail. The kicker is that I am not getting any error or output. Is there anything I can look at to see whether I have an issue on my end or something on the AWS account? This is the command I am running:
```
c7n-mailer --config mailer.yml --update-lambda
```
</issue>
<code>
[start of tools/c7n_mailer/c7n_mailer/deploy.py]
1 # Copyright 2016-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import copy
17 import json
18 import os
19
20 from c7n.mu import (
21 CloudWatchEventSource,
22 LambdaFunction,
23 LambdaManager,
24 PythonPackageArchive)
25
26
27 entry_source = """\
28 import logging
29
30 from c7n_mailer import handle
31
32 logger = logging.getLogger('custodian.mailer')
33 log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
34 logging.basicConfig(level=logging.INFO, format=log_format)
35 logging.getLogger('botocore').setLevel(logging.WARNING)
36
37 def dispatch(event, context):
38 return handle.start_c7n_mailer(logger)
39 """
40
41
42 def get_archive(config):
43 archive = PythonPackageArchive(modules=[
44 'c7n_mailer',
45 # core deps
46 'jinja2', 'markupsafe', 'ruamel', 'ldap3', 'pyasn1', 'redis',
47 # for other dependencies
48 'pkg_resources',
49 # transport datadog - recursive deps
50 'datadog', 'simplejson', 'decorator',
51 # requests (recursive deps), needed by datadog, slackclient, splunk
52 'requests', 'urllib3', 'idna', 'chardet', 'certifi',
53 # used by splunk; also dependencies of c7n itself
54 'jsonpointer', 'jsonpatch'])
55
56 for d in set(config['templates_folders']):
57 if not os.path.exists(d):
58 continue
59 for t in [f for f in os.listdir(d) if os.path.splitext(f)[1] == '.j2']:
60 with open(os.path.join(d, t)) as fh:
61 archive.add_contents('msg-templates/%s' % t, fh.read())
62
63 function_config = copy.deepcopy(config)
64 function_config['templates_folders'] = ['msg-templates/']
65 archive.add_contents('config.json', json.dumps(function_config))
66 archive.add_contents('periodic.py', entry_source)
67
68 archive.close()
69 return archive
70
71
72 def provision(config, session_factory):
73 func_config = dict(
74 name=config.get('lambda_name', 'cloud-custodian-mailer'),
75 description=config.get('lambda_description', 'Cloud Custodian Mailer'),
76 tags=config.get('lambda_tags', {}),
77 handler='periodic.dispatch',
78 runtime=config['runtime'],
79 memory_size=config['memory'],
80 timeout=config['timeout'],
81 role=config['role'],
82 subnets=config['subnets'],
83 security_groups=config['security_groups'],
84 dead_letter_config=config.get('dead_letter_config', {}),
85 events=[
86 CloudWatchEventSource(
87 {'type': 'periodic',
88 'schedule': config.get('lambda_schedule', 'rate(5 minutes)')},
89 session_factory)
90 ])
91
92 archive = get_archive(config)
93 func = LambdaFunction(func_config, archive)
94 manager = LambdaManager(session_factory)
95 manager.publish(func)
96
[end of tools/c7n_mailer/c7n_mailer/deploy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/c7n_mailer/c7n_mailer/deploy.py b/tools/c7n_mailer/c7n_mailer/deploy.py
--- a/tools/c7n_mailer/c7n_mailer/deploy.py
+++ b/tools/c7n_mailer/c7n_mailer/deploy.py
@@ -14,6 +14,7 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import copy
+import logging
import json
import os
@@ -24,6 +25,8 @@
PythonPackageArchive)
+log = logging.getLogger('custodian-mailer')
+
entry_source = """\
import logging
@@ -91,5 +94,6 @@
archive = get_archive(config)
func = LambdaFunction(func_config, archive)
+ log.info("Provisioning mailer lambda %s" % (session_factory().region_name))
manager = LambdaManager(session_factory)
manager.publish(func)
|
{"golden_diff": "diff --git a/tools/c7n_mailer/c7n_mailer/deploy.py b/tools/c7n_mailer/c7n_mailer/deploy.py\n--- a/tools/c7n_mailer/c7n_mailer/deploy.py\n+++ b/tools/c7n_mailer/c7n_mailer/deploy.py\n@@ -14,6 +14,7 @@\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n import copy\n+import logging\n import json\n import os\n \n@@ -24,6 +25,8 @@\n PythonPackageArchive)\n \n \n+log = logging.getLogger('custodian-mailer')\n+\n entry_source = \"\"\"\\\n import logging\n \n@@ -91,5 +94,6 @@\n \n archive = get_archive(config)\n func = LambdaFunction(func_config, archive)\n+ log.info(\"Provisioning mailer lambda %s\" % (session_factory().region_name))\n manager = LambdaManager(session_factory)\n manager.publish(func)\n", "issue": "c7n_mailer, AWS not installing Lambda, no logs, no errors\nI have tried to setup/install the c7n_mailer lambda on our AWS account according to the docs. I have tried it from my Mac and from Docker Images (in a Jenkins pipeline) to no avail. The kicker is I am not getting any error, or output. Is there anything I can look at to see if I have an issue from my end our something on the AWS account. This is the command I am running:\r\n```\r\nc7n-mailer --config mailer.yml --update-lambda\r\n```\n", "before_files": [{"content": "# Copyright 2016-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport copy\nimport json\nimport os\n\nfrom c7n.mu import (\n CloudWatchEventSource,\n LambdaFunction,\n LambdaManager,\n PythonPackageArchive)\n\n\nentry_source = \"\"\"\\\nimport logging\n\nfrom c7n_mailer import handle\n\nlogger = logging.getLogger('custodian.mailer')\nlog_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'\nlogging.basicConfig(level=logging.INFO, format=log_format)\nlogging.getLogger('botocore').setLevel(logging.WARNING)\n\ndef dispatch(event, context):\n return handle.start_c7n_mailer(logger)\n\"\"\"\n\n\ndef get_archive(config):\n archive = PythonPackageArchive(modules=[\n 'c7n_mailer',\n # core deps\n 'jinja2', 'markupsafe', 'ruamel', 'ldap3', 'pyasn1', 'redis',\n # for other dependencies\n 'pkg_resources',\n # transport datadog - recursive deps\n 'datadog', 'simplejson', 'decorator',\n # requests (recursive deps), needed by datadog, slackclient, splunk\n 'requests', 'urllib3', 'idna', 'chardet', 'certifi',\n # used by splunk; also dependencies of c7n itself\n 'jsonpointer', 'jsonpatch'])\n\n for d in set(config['templates_folders']):\n if not os.path.exists(d):\n continue\n for t in [f for f in os.listdir(d) if os.path.splitext(f)[1] == '.j2']:\n with open(os.path.join(d, t)) as fh:\n archive.add_contents('msg-templates/%s' % t, fh.read())\n\n function_config = copy.deepcopy(config)\n function_config['templates_folders'] = ['msg-templates/']\n archive.add_contents('config.json', json.dumps(function_config))\n archive.add_contents('periodic.py', entry_source)\n\n archive.close()\n return 
archive\n\n\ndef provision(config, session_factory):\n func_config = dict(\n name=config.get('lambda_name', 'cloud-custodian-mailer'),\n description=config.get('lambda_description', 'Cloud Custodian Mailer'),\n tags=config.get('lambda_tags', {}),\n handler='periodic.dispatch',\n runtime=config['runtime'],\n memory_size=config['memory'],\n timeout=config['timeout'],\n role=config['role'],\n subnets=config['subnets'],\n security_groups=config['security_groups'],\n dead_letter_config=config.get('dead_letter_config', {}),\n events=[\n CloudWatchEventSource(\n {'type': 'periodic',\n 'schedule': config.get('lambda_schedule', 'rate(5 minutes)')},\n session_factory)\n ])\n\n archive = get_archive(config)\n func = LambdaFunction(func_config, archive)\n manager = LambdaManager(session_factory)\n manager.publish(func)\n", "path": "tools/c7n_mailer/c7n_mailer/deploy.py"}]}
| 1,622 | 212 |
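For illustration, the log line added above is emitted at INFO level on the 'custodian-mailer' logger. When no logging is configured, Python's last-resort handler silently drops INFO records, which is one reason a run can finish with no output at all. A minimal sketch, assuming you drive the deploy step from Python and build `config` and `session_factory` yourself:

```python
# Sketch only: make the "Provisioning mailer lambda ..." message visible. How `config`
# and `session_factory` are built depends on your setup and is assumed, not shown here.
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logging.getLogger('botocore').setLevel(logging.WARNING)  # keep AWS client noise down

# from c7n_mailer import deploy
# deploy.provision(config, session_factory)   # signature as listed in deploy.py above
```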
gh_patches_debug_5811
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-4878
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-371] [Bug] CLI vars in packages.yml and selectors.yml don't work.
When we did the work to separate out rendering of secrets, a bug was introduced: instead of using cli_vars to construct the rendering contexts for packages and selectors, we now use the entire yaml-rendering context (which is also a dict). Because of this we get errors like "Object of type method is not JSON serializable", and vars are not found when rendering.
</issue>
<code>
[start of core/dbt/config/renderer.py]
1 from typing import Dict, Any, Tuple, Optional, Union, Callable
2
3 from dbt.clients.jinja import get_rendered, catch_jinja
4 from dbt.context.target import TargetContext
5 from dbt.context.secret import SecretContext
6 from dbt.context.base import BaseContext
7 from dbt.contracts.connection import HasCredentials
8 from dbt.exceptions import DbtProjectError, CompilationException, RecursionException
9 from dbt.utils import deep_map_render
10
11
12 Keypath = Tuple[Union[str, int], ...]
13
14
15 class BaseRenderer:
16 def __init__(self, context: Dict[str, Any]) -> None:
17 self.context = context
18
19 @property
20 def name(self):
21 return "Rendering"
22
23 def should_render_keypath(self, keypath: Keypath) -> bool:
24 return True
25
26 def render_entry(self, value: Any, keypath: Keypath) -> Any:
27 if not self.should_render_keypath(keypath):
28 return value
29
30 return self.render_value(value, keypath)
31
32 def render_value(self, value: Any, keypath: Optional[Keypath] = None) -> Any:
33 # keypath is ignored.
34 # if it wasn't read as a string, ignore it
35 if not isinstance(value, str):
36 return value
37 try:
38 with catch_jinja():
39 return get_rendered(value, self.context, native=True)
40 except CompilationException as exc:
41 msg = f"Could not render {value}: {exc.msg}"
42 raise CompilationException(msg) from exc
43
44 def render_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
45 try:
46 return deep_map_render(self.render_entry, data)
47 except RecursionException:
48 raise DbtProjectError(
49 f"Cycle detected: {self.name} input has a reference to itself", project=data
50 )
51
52
53 def _list_if_none(value):
54 if value is None:
55 value = []
56 return value
57
58
59 def _dict_if_none(value):
60 if value is None:
61 value = {}
62 return value
63
64
65 def _list_if_none_or_string(value):
66 value = _list_if_none(value)
67 if isinstance(value, str):
68 return [value]
69 return value
70
71
72 class ProjectPostprocessor(Dict[Keypath, Callable[[Any], Any]]):
73 def __init__(self):
74 super().__init__()
75
76 self[("on-run-start",)] = _list_if_none_or_string
77 self[("on-run-end",)] = _list_if_none_or_string
78
79 for k in ("models", "seeds", "snapshots"):
80 self[(k,)] = _dict_if_none
81 self[(k, "vars")] = _dict_if_none
82 self[(k, "pre-hook")] = _list_if_none_or_string
83 self[(k, "post-hook")] = _list_if_none_or_string
84 self[("seeds", "column_types")] = _dict_if_none
85
86 def postprocess(self, value: Any, key: Keypath) -> Any:
87 if key in self:
88 handler = self[key]
89 return handler(value)
90
91 return value
92
93
94 class DbtProjectYamlRenderer(BaseRenderer):
95 _KEYPATH_HANDLERS = ProjectPostprocessor()
96
97 def __init__(
98 self, profile: Optional[HasCredentials] = None, cli_vars: Optional[Dict[str, Any]] = None
99 ) -> None:
100 # Generate contexts here because we want to save the context
101 # object in order to retrieve the env_vars. This is almost always
102 # a TargetContext, but in the debug task we want a project
103 # even when we don't have a profile.
104 if cli_vars is None:
105 cli_vars = {}
106 if profile:
107 self.ctx_obj = TargetContext(profile, cli_vars)
108 else:
109 self.ctx_obj = BaseContext(cli_vars) # type:ignore
110 context = self.ctx_obj.to_dict()
111 super().__init__(context)
112
113 @property
114 def name(self):
115 "Project config"
116
117 def get_package_renderer(self) -> BaseRenderer:
118 return PackageRenderer(self.context)
119
120 def get_selector_renderer(self) -> BaseRenderer:
121 return SelectorRenderer(self.context)
122
123 def render_project(
124 self,
125 project: Dict[str, Any],
126 project_root: str,
127 ) -> Dict[str, Any]:
128 """Render the project and insert the project root after rendering."""
129 rendered_project = self.render_data(project)
130 rendered_project["project-root"] = project_root
131 return rendered_project
132
133 def render_packages(self, packages: Dict[str, Any]):
134 """Render the given packages dict"""
135 package_renderer = self.get_package_renderer()
136 return package_renderer.render_data(packages)
137
138 def render_selectors(self, selectors: Dict[str, Any]):
139 selector_renderer = self.get_selector_renderer()
140 return selector_renderer.render_data(selectors)
141
142 def render_entry(self, value: Any, keypath: Keypath) -> Any:
143 result = super().render_entry(value, keypath)
144 return self._KEYPATH_HANDLERS.postprocess(result, keypath)
145
146 def should_render_keypath(self, keypath: Keypath) -> bool:
147 if not keypath:
148 return True
149
150 first = keypath[0]
151 # run hooks are not rendered
152 if first in {"on-run-start", "on-run-end", "query-comment"}:
153 return False
154
155 # don't render vars blocks until runtime
156 if first == "vars":
157 return False
158
159 if first in {"seeds", "models", "snapshots", "tests"}:
160 keypath_parts = {(k.lstrip("+ ") if isinstance(k, str) else k) for k in keypath}
161 # model-level hooks
162 if "pre-hook" in keypath_parts or "post-hook" in keypath_parts:
163 return False
164
165 return True
166
167
168 class SelectorRenderer(BaseRenderer):
169 @property
170 def name(self):
171 return "Selector config"
172
173
174 class SecretRenderer(BaseRenderer):
175 def __init__(self, cli_vars: Optional[Dict[str, Any]] = None) -> None:
176 # Generate contexts here because we want to save the context
177 # object in order to retrieve the env_vars.
178 if cli_vars is None:
179 cli_vars = {}
180 self.ctx_obj = SecretContext(cli_vars)
181 context = self.ctx_obj.to_dict()
182 super().__init__(context)
183
184 @property
185 def name(self):
186 return "Secret"
187
188
189 class ProfileRenderer(SecretRenderer):
190 @property
191 def name(self):
192 return "Profile"
193
194
195 class PackageRenderer(SecretRenderer):
196 @property
197 def name(self):
198 return "Packages config"
199
[end of core/dbt/config/renderer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/dbt/config/renderer.py b/core/dbt/config/renderer.py
--- a/core/dbt/config/renderer.py
+++ b/core/dbt/config/renderer.py
@@ -115,10 +115,10 @@
"Project config"
def get_package_renderer(self) -> BaseRenderer:
- return PackageRenderer(self.context)
+ return PackageRenderer(self.ctx_obj.cli_vars)
def get_selector_renderer(self) -> BaseRenderer:
- return SelectorRenderer(self.context)
+ return SelectorRenderer(self.ctx_obj.cli_vars)
def render_project(
self,
|
{"golden_diff": "diff --git a/core/dbt/config/renderer.py b/core/dbt/config/renderer.py\n--- a/core/dbt/config/renderer.py\n+++ b/core/dbt/config/renderer.py\n@@ -115,10 +115,10 @@\n \"Project config\"\n \n def get_package_renderer(self) -> BaseRenderer:\n- return PackageRenderer(self.context)\n+ return PackageRenderer(self.ctx_obj.cli_vars)\n \n def get_selector_renderer(self) -> BaseRenderer:\n- return SelectorRenderer(self.context)\n+ return SelectorRenderer(self.ctx_obj.cli_vars)\n \n def render_project(\n self,\n", "issue": "[CT-371] [Bug] CLI vars in packages.yml and selectors.yml don't work.\nWhen we did the work to separate out rendering of secrets, a bug was introduced where instead of using cli_vars to construct the contexts for packages and selectors, we use the entire yaml context (which is also a dict). Because of this we get errors like: \"Object of type method is not JSON serializable\" and also vars are not found when rendering.\n", "before_files": [{"content": "from typing import Dict, Any, Tuple, Optional, Union, Callable\n\nfrom dbt.clients.jinja import get_rendered, catch_jinja\nfrom dbt.context.target import TargetContext\nfrom dbt.context.secret import SecretContext\nfrom dbt.context.base import BaseContext\nfrom dbt.contracts.connection import HasCredentials\nfrom dbt.exceptions import DbtProjectError, CompilationException, RecursionException\nfrom dbt.utils import deep_map_render\n\n\nKeypath = Tuple[Union[str, int], ...]\n\n\nclass BaseRenderer:\n def __init__(self, context: Dict[str, Any]) -> None:\n self.context = context\n\n @property\n def name(self):\n return \"Rendering\"\n\n def should_render_keypath(self, keypath: Keypath) -> bool:\n return True\n\n def render_entry(self, value: Any, keypath: Keypath) -> Any:\n if not self.should_render_keypath(keypath):\n return value\n\n return self.render_value(value, keypath)\n\n def render_value(self, value: Any, keypath: Optional[Keypath] = None) -> Any:\n # keypath is ignored.\n # if it wasn't read as a string, ignore it\n if not isinstance(value, str):\n return value\n try:\n with catch_jinja():\n return get_rendered(value, self.context, native=True)\n except CompilationException as exc:\n msg = f\"Could not render {value}: {exc.msg}\"\n raise CompilationException(msg) from exc\n\n def render_data(self, data: Dict[str, Any]) -> Dict[str, Any]:\n try:\n return deep_map_render(self.render_entry, data)\n except RecursionException:\n raise DbtProjectError(\n f\"Cycle detected: {self.name} input has a reference to itself\", project=data\n )\n\n\ndef _list_if_none(value):\n if value is None:\n value = []\n return value\n\n\ndef _dict_if_none(value):\n if value is None:\n value = {}\n return value\n\n\ndef _list_if_none_or_string(value):\n value = _list_if_none(value)\n if isinstance(value, str):\n return [value]\n return value\n\n\nclass ProjectPostprocessor(Dict[Keypath, Callable[[Any], Any]]):\n def __init__(self):\n super().__init__()\n\n self[(\"on-run-start\",)] = _list_if_none_or_string\n self[(\"on-run-end\",)] = _list_if_none_or_string\n\n for k in (\"models\", \"seeds\", \"snapshots\"):\n self[(k,)] = _dict_if_none\n self[(k, \"vars\")] = _dict_if_none\n self[(k, \"pre-hook\")] = _list_if_none_or_string\n self[(k, \"post-hook\")] = _list_if_none_or_string\n self[(\"seeds\", \"column_types\")] = _dict_if_none\n\n def postprocess(self, value: Any, key: Keypath) -> Any:\n if key in self:\n handler = self[key]\n return handler(value)\n\n return value\n\n\nclass DbtProjectYamlRenderer(BaseRenderer):\n 
_KEYPATH_HANDLERS = ProjectPostprocessor()\n\n def __init__(\n self, profile: Optional[HasCredentials] = None, cli_vars: Optional[Dict[str, Any]] = None\n ) -> None:\n # Generate contexts here because we want to save the context\n # object in order to retrieve the env_vars. This is almost always\n # a TargetContext, but in the debug task we want a project\n # even when we don't have a profile.\n if cli_vars is None:\n cli_vars = {}\n if profile:\n self.ctx_obj = TargetContext(profile, cli_vars)\n else:\n self.ctx_obj = BaseContext(cli_vars) # type:ignore\n context = self.ctx_obj.to_dict()\n super().__init__(context)\n\n @property\n def name(self):\n \"Project config\"\n\n def get_package_renderer(self) -> BaseRenderer:\n return PackageRenderer(self.context)\n\n def get_selector_renderer(self) -> BaseRenderer:\n return SelectorRenderer(self.context)\n\n def render_project(\n self,\n project: Dict[str, Any],\n project_root: str,\n ) -> Dict[str, Any]:\n \"\"\"Render the project and insert the project root after rendering.\"\"\"\n rendered_project = self.render_data(project)\n rendered_project[\"project-root\"] = project_root\n return rendered_project\n\n def render_packages(self, packages: Dict[str, Any]):\n \"\"\"Render the given packages dict\"\"\"\n package_renderer = self.get_package_renderer()\n return package_renderer.render_data(packages)\n\n def render_selectors(self, selectors: Dict[str, Any]):\n selector_renderer = self.get_selector_renderer()\n return selector_renderer.render_data(selectors)\n\n def render_entry(self, value: Any, keypath: Keypath) -> Any:\n result = super().render_entry(value, keypath)\n return self._KEYPATH_HANDLERS.postprocess(result, keypath)\n\n def should_render_keypath(self, keypath: Keypath) -> bool:\n if not keypath:\n return True\n\n first = keypath[0]\n # run hooks are not rendered\n if first in {\"on-run-start\", \"on-run-end\", \"query-comment\"}:\n return False\n\n # don't render vars blocks until runtime\n if first == \"vars\":\n return False\n\n if first in {\"seeds\", \"models\", \"snapshots\", \"tests\"}:\n keypath_parts = {(k.lstrip(\"+ \") if isinstance(k, str) else k) for k in keypath}\n # model-level hooks\n if \"pre-hook\" in keypath_parts or \"post-hook\" in keypath_parts:\n return False\n\n return True\n\n\nclass SelectorRenderer(BaseRenderer):\n @property\n def name(self):\n return \"Selector config\"\n\n\nclass SecretRenderer(BaseRenderer):\n def __init__(self, cli_vars: Optional[Dict[str, Any]] = None) -> None:\n # Generate contexts here because we want to save the context\n # object in order to retrieve the env_vars.\n if cli_vars is None:\n cli_vars = {}\n self.ctx_obj = SecretContext(cli_vars)\n context = self.ctx_obj.to_dict()\n super().__init__(context)\n\n @property\n def name(self):\n return \"Secret\"\n\n\nclass ProfileRenderer(SecretRenderer):\n @property\n def name(self):\n return \"Profile\"\n\n\nclass PackageRenderer(SecretRenderer):\n @property\n def name(self):\n return \"Packages config\"\n", "path": "core/dbt/config/renderer.py"}]}
| 2,582 | 128 |
gh_patches_debug_27777 | rasdani/github-patches | git_diff | open-mmlab__mmpose-926 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
A dataset bug causing topdown training to be very slow, wasting 3 min every epoch
I found a dataset bug. I tested it on several servers (including 8 A100s with 96-core CPUs), and it happened on all of them. This bug wastes about 3 min at every epoch. I can locate the bug, but I don't know why it happens. It seems to only happen when launching distributed training.
Bug location: when you launch a topdown method, e.g. topdown_heatmap/coco/res50_coco_256x192.py, go to /mmcv/runner/epoch_based_runner.py, around line 48, where there is code like this:
self.call_hook('before_train_epoch')
time.sleep(2) # Prevent possible deadlock during epoch transition
for i, data_batch in enumerate(self.data_loader):
self._inner_iter = i
At the beginning of every epoch, the `for i, data_batch in enumerate(self.data_loader):` line takes about 3 min, which makes training very slow.
You can modify the original code into the code below to reproduce this issue; it only happens at the very beginning of each epoch.
self.call_hook('before_train_epoch')
time.sleep(2) # Prevent possible deadlock during epoch transition
print('before_train_epoch, time:{}'.format(time.time()-start_time))
start_time = time.time()
for i, data_batch in enumerate(self.data_loader):
self._inner_iter = i
print('before_train_iter_load_data, time:{}'.format(time.time()-start_time))
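For reference, the per-epoch delay is consistent with the DataLoader tearing down and re-spawning its worker processes at every epoch (they are only kept alive when `persistent_workers=True`, available in torch>=1.7). Below is a rough, self-contained timing sketch in plain PyTorch — not the mmcv/mmpose API, and the dataset/batch sizes are arbitrary — that makes the effect visible:

```python
import time

import torch
from torch.utils.data import DataLoader, TensorDataset

# Arbitrary in-memory dataset; the sizes are only illustrative.
dataset = TensorDataset(torch.randn(20_000, 3, 64, 64))

for persistent in (False, True):
    loader = DataLoader(dataset, batch_size=64, num_workers=8,
                        persistent_workers=persistent)
    for epoch in range(3):
        start = time.time()
        for _ in loader:  # worker start-up cost is paid here when workers are not persistent
            pass
        print(f"persistent_workers={persistent} epoch={epoch} "
              f"time={time.time() - start:.2f}s")
```

This is presumably why exposing `persistent_workers` through the dataloader config (as the patch for this issue later does) avoids paying that cost at every epoch.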
Here is my system information:
Python: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0]
CUDA available: True GPU 0,1,2,3,4,5,6,7: A100-SXM4-40GB
CUDA_HOME: /usr/local/cuda-11.1
NVCC: Build cuda_11.1.TC455_06.29190527_0 GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.8.1+cu111
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-
gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.0.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated
-fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -We
xtra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable
-Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-ps
abi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -f
no-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=
ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.9.1+cu111
OpenCV: 4.5.3
MMCV: 1.3.8
MMCV Compiler: GCC 7.5
MMCV CUDA Compiler: 11.1
MMPose: 0.15.0+51b4b45
</issue>
<code>
[start of mmpose/apis/train.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import warnings
3
4 import torch
5 from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
6 from mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook
7
8 from mmpose.core import DistEvalHook, EvalHook, build_optimizers
9 from mmpose.core.distributed_wrapper import DistributedDataParallelWrapper
10 from mmpose.datasets import build_dataloader, build_dataset
11 from mmpose.utils import get_root_logger
12
13 try:
14 from mmcv.runner import Fp16OptimizerHook
15 except ImportError:
16 warnings.warn(
17 'Fp16OptimizerHook from mmpose will be deprecated from '
18 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)
19 from mmpose.core import Fp16OptimizerHook
20
21
22 def train_model(model,
23 dataset,
24 cfg,
25 distributed=False,
26 validate=False,
27 timestamp=None,
28 meta=None):
29 """Train model entry function.
30
31 Args:
32 model (nn.Module): The model to be trained.
33 dataset (Dataset): Train dataset.
34 cfg (dict): The config dict for training.
35 distributed (bool): Whether to use distributed training.
36 Default: False.
37 validate (bool): Whether to do evaluation. Default: False.
38 timestamp (str | None): Local time for runner. Default: None.
39 meta (dict | None): Meta dict to record some important information.
40 Default: None
41 """
42 logger = get_root_logger(cfg.log_level)
43
44 # prepare data loaders
45 dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
46 dataloader_setting = dict(
47 samples_per_gpu=cfg.data.get('samples_per_gpu', {}),
48 workers_per_gpu=cfg.data.get('workers_per_gpu', {}),
49 # cfg.gpus will be ignored if distributed
50 num_gpus=len(cfg.gpu_ids),
51 dist=distributed,
52 seed=cfg.seed)
53 dataloader_setting = dict(dataloader_setting,
54 **cfg.data.get('train_dataloader', {}))
55
56 data_loaders = [
57 build_dataloader(ds, **dataloader_setting) for ds in dataset
58 ]
59
60 # determine wether use adversarial training precess or not
61 use_adverserial_train = cfg.get('use_adversarial_train', False)
62
63 # put model on gpus
64 if distributed:
65 find_unused_parameters = cfg.get('find_unused_parameters', True)
66 # Sets the `find_unused_parameters` parameter in
67 # torch.nn.parallel.DistributedDataParallel
68
69 if use_adverserial_train:
70 # Use DistributedDataParallelWrapper for adversarial training
71 model = DistributedDataParallelWrapper(
72 model,
73 device_ids=[torch.cuda.current_device()],
74 broadcast_buffers=False,
75 find_unused_parameters=find_unused_parameters)
76 else:
77 model = MMDistributedDataParallel(
78 model.cuda(),
79 device_ids=[torch.cuda.current_device()],
80 broadcast_buffers=False,
81 find_unused_parameters=find_unused_parameters)
82 else:
83 model = MMDataParallel(
84 model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
85
86 # build runner
87 optimizer = build_optimizers(model, cfg.optimizer)
88
89 runner = EpochBasedRunner(
90 model,
91 optimizer=optimizer,
92 work_dir=cfg.work_dir,
93 logger=logger,
94 meta=meta)
95 # an ugly workaround to make .log and .log.json filenames the same
96 runner.timestamp = timestamp
97
98 if use_adverserial_train:
99 # The optimizer step process is included in the train_step function
100 # of the model, so the runner should NOT include optimizer hook.
101 optimizer_config = None
102 else:
103 # fp16 setting
104 fp16_cfg = cfg.get('fp16', None)
105 if fp16_cfg is not None:
106 optimizer_config = Fp16OptimizerHook(
107 **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
108 elif distributed and 'type' not in cfg.optimizer_config:
109 optimizer_config = OptimizerHook(**cfg.optimizer_config)
110 else:
111 optimizer_config = cfg.optimizer_config
112
113 # register hooks
114 runner.register_training_hooks(cfg.lr_config, optimizer_config,
115 cfg.checkpoint_config, cfg.log_config,
116 cfg.get('momentum_config', None))
117 if distributed:
118 runner.register_hook(DistSamplerSeedHook())
119
120 # register eval hooks
121 if validate:
122 eval_cfg = cfg.get('evaluation', {})
123 val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
124 dataloader_setting = dict(
125 samples_per_gpu=1,
126 workers_per_gpu=cfg.data.get('workers_per_gpu', {}),
127 # cfg.gpus will be ignored if distributed
128 num_gpus=len(cfg.gpu_ids),
129 dist=distributed,
130 drop_last=False,
131 shuffle=False)
132 dataloader_setting = dict(dataloader_setting,
133 **cfg.data.get('val_dataloader', {}))
134 val_dataloader = build_dataloader(val_dataset, **dataloader_setting)
135 eval_hook = DistEvalHook if distributed else EvalHook
136 runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
137
138 if cfg.resume_from:
139 runner.resume(cfg.resume_from)
140 elif cfg.load_from:
141 runner.load_checkpoint(cfg.load_from)
142 runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
143
[end of mmpose/apis/train.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmpose/apis/train.py b/mmpose/apis/train.py
--- a/mmpose/apis/train.py
+++ b/mmpose/apis/train.py
@@ -43,19 +43,33 @@
# prepare data loaders
dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
- dataloader_setting = dict(
- samples_per_gpu=cfg.data.get('samples_per_gpu', {}),
- workers_per_gpu=cfg.data.get('workers_per_gpu', {}),
- # cfg.gpus will be ignored if distributed
- num_gpus=len(cfg.gpu_ids),
- dist=distributed,
- seed=cfg.seed)
- dataloader_setting = dict(dataloader_setting,
- **cfg.data.get('train_dataloader', {}))
-
- data_loaders = [
- build_dataloader(ds, **dataloader_setting) for ds in dataset
- ]
+ # step 1: give default values and override (if exist) from cfg.data
+ loader_cfg = {
+ **dict(
+ seed=cfg.get('seed'),
+ drop_last=False,
+ dist=distributed,
+ num_gpus=len(cfg.gpu_ids)),
+ **({} if torch.__version__ != 'parrots' else dict(
+ prefetch_num=2,
+ pin_memory=False,
+ )),
+ **dict((k, cfg.data[k]) for k in [
+ 'samples_per_gpu',
+ 'workers_per_gpu',
+ 'shuffle',
+ 'seed',
+ 'drop_last',
+ 'prefetch_num',
+ 'pin_memory',
+ 'persistent_workers',
+ ] if k in cfg.data)
+ }
+
+ # step 2: cfg.data.train_dataloader has highest priority
+ train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))
+
+ data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
# determine wether use adversarial training precess or not
use_adverserial_train = cfg.get('use_adversarial_train', False)
|
{"golden_diff": "diff --git a/mmpose/apis/train.py b/mmpose/apis/train.py\n--- a/mmpose/apis/train.py\n+++ b/mmpose/apis/train.py\n@@ -43,19 +43,33 @@\n \n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n- dataloader_setting = dict(\n- samples_per_gpu=cfg.data.get('samples_per_gpu', {}),\n- workers_per_gpu=cfg.data.get('workers_per_gpu', {}),\n- # cfg.gpus will be ignored if distributed\n- num_gpus=len(cfg.gpu_ids),\n- dist=distributed,\n- seed=cfg.seed)\n- dataloader_setting = dict(dataloader_setting,\n- **cfg.data.get('train_dataloader', {}))\n-\n- data_loaders = [\n- build_dataloader(ds, **dataloader_setting) for ds in dataset\n- ]\n+ # step 1: give default values and override (if exist) from cfg.data\n+ loader_cfg = {\n+ **dict(\n+ seed=cfg.get('seed'),\n+ drop_last=False,\n+ dist=distributed,\n+ num_gpus=len(cfg.gpu_ids)),\n+ **({} if torch.__version__ != 'parrots' else dict(\n+ prefetch_num=2,\n+ pin_memory=False,\n+ )),\n+ **dict((k, cfg.data[k]) for k in [\n+ 'samples_per_gpu',\n+ 'workers_per_gpu',\n+ 'shuffle',\n+ 'seed',\n+ 'drop_last',\n+ 'prefetch_num',\n+ 'pin_memory',\n+ 'persistent_workers',\n+ ] if k in cfg.data)\n+ }\n+\n+ # step 2: cfg.data.train_dataloader has highest priority\n+ train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))\n+\n+ data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]\n \n # determine wether use adversarial training precess or not\n use_adverserial_train = cfg.get('use_adversarial_train', False)\n", "issue": "a dataset bug causing topdown training very slow, wasting 3 min every epoch\ni found a dataset bug, i test it on several server(including 8 a100 with 96 core cpu), it all happened. for every epoch, this bug cause about 3min time wasting. i jsut can locat the bug, but i don't known why it happen. it seems only happen when distribution launching.\r\n\r\nbug loaction: when you lauch a topdown method, eg, topdown_heatmap/coco/res50_coco_256x192.py, go to /mmcv/runner/epoch_based_runner.py, about line 48. 
there is such func\r\n\r\n self.call_hook('before_train_epoch')\r\n time.sleep(2) # Prevent possible deadlock during epoch transition\r\n for i, data_batch in enumerate(self.data_loader):\r\n self._inner_iter = i\r\n\r\nat the every epoch begining, the ( for i, data_batch in enumerate(self.data_loader): ) takes about 3min, it make the training very slow.\r\n\r\nyou can modify the ori code to the code below to reproduce this issue, this only happen at very epoch begining.\r\n\r\n self.call_hook('before_train_epoch')\r\n time.sleep(2) # Prevent possible deadlock during epoch transition\r\n print('before_train_epoch, time:{}'.format(time.time()-start_time))\r\n start_time = time.time()\r\n for i, data_batch in enumerate(self.data_loader):\r\n self._inner_iter = i\r\n print('before_train_iter_load_data, time:{}'.format(time.time()-start_time))\r\n\r\nhere is my sys information\r\nPython: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0] \r\nCUDA available: True GPU 0,1,2,3,4,5,6,7: A100-SXM4-40GB \r\nCUDA_HOME: /usr/local/cuda-11.1 \r\nNVCC: Build cuda_11.1.TC455_06.29190527_0 GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 \r\nPyTorch: 1.8.1+cu111 \r\nPyTorch compiling details: PyTorch built with: \r\n - GCC 7.3 \r\n - C++ Version: 201402 \r\n - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)\r\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\r\n - NNPACK is enabled \r\n - CPU capability usage: AVX2 \r\n - CUDA Runtime 11.1\r\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-\r\ngencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\r\n - CuDNN 8.0.5 \r\n - Magma 2.5.2 \r\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated\r\n-fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -We\r\nxtra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable\r\n-Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-ps\r\nabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -f\r\nno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=\r\nON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,\r\n \r\nTorchVision: 0.9.1+cu111 \r\nOpenCV: 4.5.3 \r\nMMCV: 1.3.8 \r\nMMCV Compiler: GCC 7.5\r\nMMCV CUDA Compiler: 11.1\r\nMMPose: 0.15.0+51b4b45\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport warnings\n\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook\n\nfrom mmpose.core import DistEvalHook, EvalHook, build_optimizers\nfrom mmpose.core.distributed_wrapper import DistributedDataParallelWrapper\nfrom mmpose.datasets import build_dataloader, build_dataset\nfrom mmpose.utils import get_root_logger\n\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn(\n 'Fp16OptimizerHook from mmpose will be deprecated from '\n 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)\n from mmpose.core import Fp16OptimizerHook\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n meta=None):\n \"\"\"Train model entry function.\n\n Args:\n model (nn.Module): The model to be trained.\n dataset (Dataset): Train dataset.\n cfg (dict): The config dict for training.\n distributed (bool): Whether to use distributed training.\n Default: False.\n validate (bool): Whether to do evaluation. Default: False.\n timestamp (str | None): Local time for runner. Default: None.\n meta (dict | None): Meta dict to record some important information.\n Default: None\n \"\"\"\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n dataloader_setting = dict(\n samples_per_gpu=cfg.data.get('samples_per_gpu', {}),\n workers_per_gpu=cfg.data.get('workers_per_gpu', {}),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n seed=cfg.seed)\n dataloader_setting = dict(dataloader_setting,\n **cfg.data.get('train_dataloader', {}))\n\n data_loaders = [\n build_dataloader(ds, **dataloader_setting) for ds in dataset\n ]\n\n # determine wether use adversarial training precess or not\n use_adverserial_train = cfg.get('use_adversarial_train', False)\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', True)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n\n if use_adverserial_train:\n # Use DistributedDataParallelWrapper for adversarial training\n model = DistributedDataParallelWrapper(\n model,\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n\n # build runner\n optimizer = build_optimizers(model, cfg.optimizer)\n\n runner = EpochBasedRunner(\n model,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta)\n # an ugly workaround to make .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n if use_adverserial_train:\n # The optimizer step process is included in the train_step function\n # of the model, so the runner should NOT include optimizer hook.\n optimizer_config = None\n else:\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = OptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = 
cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n eval_cfg = cfg.get('evaluation', {})\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n dataloader_setting = dict(\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.get('workers_per_gpu', {}),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n drop_last=False,\n shuffle=False)\n dataloader_setting = dict(dataloader_setting,\n **cfg.data.get('val_dataloader', {}))\n val_dataloader = build_dataloader(val_dataset, **dataloader_setting)\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\n", "path": "mmpose/apis/train.py"}]}
| 3,380 | 467 |
gh_patches_debug_17873 | rasdani/github-patches | git_diff | PyGithub__PyGithub-1327 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
InputGitTreeElement should allow passing "null" for sha
Github's [Tree creation api](https://developer.github.com/v3/git/trees/#create-a-tree) allows us to pass `sha = null` to indicate that the specified blob needs to be deleted.
However, I don't have a way to pass this info to my `InputGitTreeElement`. I can either give it a str or a `github.GithubObject.NotSet`. This means I have no way of deleting files from a tree using PyGithub (I'd like to delete multiple files in a single commit so tree creation is the ideal choice for me).
The current design is to only pass the `sha` if it is actually set:
https://github.com/PyGithub/PyGithub/blob/540a085001/github/InputGitTreeElement.py#L81
I can understand that passing a `None` goes against the design. I think something like `github.GithubObject.Null` could be introduced to explicitly say that this field is `null`. It can be used everywhere the GH API accepts a null value.
Example
```python
new_tree = repo.create_git_tree(
[
InputGitTreeElement(
path="my/dir/my_file.txt", mode="100644", type="blob", sha=github.GithubObject.Null
),
],
base_tree=head_commit.tree
)
```
This will delete `my/dir/my_file.txt`
---
My current workaround is to directly hit the api to create tree (using requests, setting `sha=None`), get the tree sha & use it with pygithub for my remaining workflow (committing, etc).
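For completeness, a rough sketch of that requests-based workaround — the token, owner/repo and base tree sha are placeholders, and the payload follows the Git Trees endpoint linked above; treat it as an illustration rather than tested code:

```python
import requests

TOKEN = "<personal-access-token>"   # placeholder
OWNER, REPO = "<owner>", "<repo>"   # placeholders

payload = {
    "base_tree": "<sha-of-head-commit-tree>",  # placeholder
    "tree": [
        # JSON null for "sha" tells GitHub to delete this path from the tree
        {"path": "my/dir/my_file.txt", "mode": "100644", "type": "blob", "sha": None},
    ],
}
resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/git/trees",
    headers={"Authorization": f"token {TOKEN}",
             "Accept": "application/vnd.github.v3+json"},
    json=payload,
)
resp.raise_for_status()
new_tree_sha = resp.json()["sha"]  # hand this back to PyGithub for the commit step
```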
Please let me know in case I misunderstood some aspect or if anything needs to be elaborated upon.
</issue>
<code>
[start of github/InputGitTreeElement.py]
1 # -*- coding: utf-8 -*-
2
3 ############################ Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 Vincent Jacques <[email protected]> #
8 # Copyright 2014 Vincent Jacques <[email protected]> #
9 # Copyright 2016 Peter Buckley <[email protected]> #
10 # Copyright 2018 Wan Liuyang <[email protected]> #
11 # Copyright 2018 sfdye <[email protected]> #
12 # #
13 # This file is part of PyGithub. #
14 # http://pygithub.readthedocs.io/ #
15 # #
16 # PyGithub is free software: you can redistribute it and/or modify it under #
17 # the terms of the GNU Lesser General Public License as published by the Free #
18 # Software Foundation, either version 3 of the License, or (at your option) #
19 # any later version. #
20 # #
21 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
22 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
23 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
24 # details. #
25 # #
26 # You should have received a copy of the GNU Lesser General Public License #
27 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
28 # #
29 ################################################################################
30
31 from __future__ import absolute_import
32
33 import six
34
35 import github.GithubObject
36
37
38 class InputGitTreeElement(object):
39 """
40 This class represents InputGitTreeElements
41 """
42
43 def __init__(
44 self,
45 path,
46 mode,
47 type,
48 content=github.GithubObject.NotSet,
49 sha=github.GithubObject.NotSet,
50 ):
51 """
52 :param path: string
53 :param mode: string
54 :param type: string
55 :param content: string
56 :param sha: string
57 """
58
59 assert isinstance(path, (str, six.text_type)), path
60 assert isinstance(mode, (str, six.text_type)), mode
61 assert isinstance(type, (str, six.text_type)), type
62 assert content is github.GithubObject.NotSet or isinstance(
63 content, (str, six.text_type)
64 ), content
65 assert sha is github.GithubObject.NotSet or isinstance(
66 sha, (str, six.text_type)
67 ), sha
68 self.__path = path
69 self.__mode = mode
70 self.__type = type
71 self.__content = content
72 self.__sha = sha
73
74 @property
75 def _identity(self):
76 identity = {
77 "path": self.__path,
78 "mode": self.__mode,
79 "type": self.__type,
80 }
81 if self.__sha is not github.GithubObject.NotSet:
82 identity["sha"] = self.__sha
83 if self.__content is not github.GithubObject.NotSet:
84 identity["content"] = self.__content
85 return identity
86
[end of github/InputGitTreeElement.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/github/InputGitTreeElement.py b/github/InputGitTreeElement.py
--- a/github/InputGitTreeElement.py
+++ b/github/InputGitTreeElement.py
@@ -53,7 +53,7 @@
:param mode: string
:param type: string
:param content: string
- :param sha: string
+ :param sha: string or None
"""
assert isinstance(path, (str, six.text_type)), path
@@ -62,8 +62,10 @@
assert content is github.GithubObject.NotSet or isinstance(
content, (str, six.text_type)
), content
- assert sha is github.GithubObject.NotSet or isinstance(
- sha, (str, six.text_type)
+ assert (
+ sha is github.GithubObject.NotSet
+ or sha is None
+ or isinstance(sha, (str, six.text_type))
), sha
self.__path = path
self.__mode = mode
|
{"golden_diff": "diff --git a/github/InputGitTreeElement.py b/github/InputGitTreeElement.py\n--- a/github/InputGitTreeElement.py\n+++ b/github/InputGitTreeElement.py\n@@ -53,7 +53,7 @@\n :param mode: string\n :param type: string\n :param content: string\n- :param sha: string\n+ :param sha: string or None\n \"\"\"\n \n assert isinstance(path, (str, six.text_type)), path\n@@ -62,8 +62,10 @@\n assert content is github.GithubObject.NotSet or isinstance(\n content, (str, six.text_type)\n ), content\n- assert sha is github.GithubObject.NotSet or isinstance(\n- sha, (str, six.text_type)\n+ assert (\n+ sha is github.GithubObject.NotSet\n+ or sha is None\n+ or isinstance(sha, (str, six.text_type))\n ), sha\n self.__path = path\n self.__mode = mode\n", "issue": "InputGitTreeElement should allow passing \"null\" for sha\nGithub's [Tree creation api](https://developer.github.com/v3/git/trees/#create-a-tree) allows us to pass `sha = null` to indicate that the specified blob needs to be deleted.\r\n\r\nHowever, I don't have a way to pass this info to my `InputGitTreeElement`. I can either give it a str or a `github.GithubObject.NotSet`. This means I have no way of deleting files from a tree using PyGithub (I'd like to delete multiple files in a single commit so tree creation is the ideal choice for me).\r\n\r\nThe current design is to only pass the `sha` if it is actually set:\r\nhttps://github.com/PyGithub/PyGithub/blob/540a085001/github/InputGitTreeElement.py#L81\r\n\r\nI can understand that passing a `None` goes against the design. I think something like `github.GithubObject.Null` could be introduced to explicitly say that this field is `null`. It can be used everywhere the GH API accepts a null value.\r\n\r\nExample\r\n```python\r\nnew_tree = repo.create_git_tree(\r\n [\r\n InputGitTreeElement(\r\n path=\"my/dir/my_file.txt\", mode=\"100644\", type=\"blob\", sha=github.GithubObject.Null\r\n ),\r\n ],\r\n base_tree=head_commit.tree\r\n)\r\n```\r\nThis will delete `my/dir/my_file.txt`\r\n\r\n---\r\n\r\nMy current workaround is to directly hit the api to create tree (using requests, setting `sha=None`), get the tree sha & use it with pygithub for my remaining workflow (committing, etc).\r\n\r\nPlease let me know in case I misunderstood some aspect or if anything needs to be elaborated upon.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. 
If not, see <http://www.gnu.org/licenses/>. #\n# #\n################################################################################\n\nfrom __future__ import absolute_import\n\nimport six\n\nimport github.GithubObject\n\n\nclass InputGitTreeElement(object):\n \"\"\"\n This class represents InputGitTreeElements\n \"\"\"\n\n def __init__(\n self,\n path,\n mode,\n type,\n content=github.GithubObject.NotSet,\n sha=github.GithubObject.NotSet,\n ):\n \"\"\"\n :param path: string\n :param mode: string\n :param type: string\n :param content: string\n :param sha: string\n \"\"\"\n\n assert isinstance(path, (str, six.text_type)), path\n assert isinstance(mode, (str, six.text_type)), mode\n assert isinstance(type, (str, six.text_type)), type\n assert content is github.GithubObject.NotSet or isinstance(\n content, (str, six.text_type)\n ), content\n assert sha is github.GithubObject.NotSet or isinstance(\n sha, (str, six.text_type)\n ), sha\n self.__path = path\n self.__mode = mode\n self.__type = type\n self.__content = content\n self.__sha = sha\n\n @property\n def _identity(self):\n identity = {\n \"path\": self.__path,\n \"mode\": self.__mode,\n \"type\": self.__type,\n }\n if self.__sha is not github.GithubObject.NotSet:\n identity[\"sha\"] = self.__sha\n if self.__content is not github.GithubObject.NotSet:\n identity[\"content\"] = self.__content\n return identity\n", "path": "github/InputGitTreeElement.py"}]}
| 1,782 | 223 |
gh_patches_debug_24648 | rasdani/github-patches | git_diff | pypa__pip-8079 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New resolver cannot install distributions that only have pre-releases
**Environment**
* pip version: master, today
* Python version: 3
* OS: linux
**Description**
I want to install a distribution that only has pre-releases. The legacy resolver does support this. The new one does not.
Note: using `--pre` does not seem to influence the result. The legacy resolver could install such distributions without using `--pre`.
**Expected behavior**
Installation should succeed.
**How to Reproduce**
```console
$ pip install --no-deps odoo13-addon-date-range --unstable-feature=resolver
ERROR: Exception:
Traceback (most recent call last):
File "/home/me/pip/src/pip/_internal/cli/base_command.py", line 199, in _main
status = self.run(options, args)
File "/home/me/pip/src/pip/_internal/cli/req_command.py", line 185, in wrapper
return func(self, options, args)
File "/home/me/pip/src/pip/_internal/commands/install.py", line 333, in run
reqs, check_supported_wheels=not options.target_dir
File "/home/me/pip/src/pip/_internal/resolution/resolvelib/resolver.py", line 80, in resolve
self._result = resolver.resolve(requirements)
File "/home/me/pip/src/pip/_vendor/resolvelib/resolvers.py", line 413, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/home/me/pip/src/pip/_vendor/resolvelib/resolvers.py", line 310, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/home/me/pip/src/pip/_vendor/resolvelib/resolvers.py", line 240, in _attempt_to_pin_criterion
raise InconsistentCandidate(candidate, criterion)
pip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate LinkCandidate('https://files.pythonhosted.org/packages/1f/0b/945335a37082b6b013cc1331f49e3f5b6a18cdd0b693475e6ca9e9a7df6e/odoo13_addon_date_range-13.0.1.0.1.dev8-py3-none-any.whl#sha256=3883bbe87db8d5db4364e8a42e86546e19e8e4f123d98c4e9454587dfa9401df (from https://pypi.org/simple/odoo13-addon-date-range/) (requires-python:>=3.5)') does not satisfy SpecifierRequirement('odoo13-addon-date-range')
```
Note I used `--no-deps` because a dependency is not on pypi, but that has no influence on the result.
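For what it's worth, the failing check can be reproduced with the `packaging` library on its own (pip vendors the same code); the version below is the one from the traceback, and the empty specifier stands in for the bare `odoo13-addon-date-range` requirement:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

candidate = Version("13.0.1.0.1.dev8")   # the only published version is a dev release
spec = SpecifierSet("")                  # bare requirement, no version constraint

print(candidate in spec)                           # False: `in` filters out pre-releases
print(spec.contains(candidate, prereleases=True))  # True: explicit opt-in accepts them
```

The same `in` check appears in `is_satisfied_by` in the `requirements.py` shown below.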
</issue>
<code>
[start of src/pip/_internal/resolution/resolvelib/requirements.py]
1 from pip._vendor.packaging.utils import canonicalize_name
2
3 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
4
5 from .base import Requirement, format_name
6
7 if MYPY_CHECK_RUNNING:
8 from typing import Sequence
9
10 from pip._vendor.packaging.specifiers import SpecifierSet
11
12 from pip._internal.req.req_install import InstallRequirement
13
14 from .base import Candidate
15 from .factory import Factory
16
17
18 class ExplicitRequirement(Requirement):
19 def __init__(self, candidate):
20 # type: (Candidate) -> None
21 self.candidate = candidate
22
23 def __repr__(self):
24 # type: () -> str
25 return "{class_name}({candidate!r})".format(
26 class_name=self.__class__.__name__,
27 candidate=self.candidate,
28 )
29
30 @property
31 def name(self):
32 # type: () -> str
33 # No need to canonicalise - the candidate did this
34 return self.candidate.name
35
36 def find_matches(self):
37 # type: () -> Sequence[Candidate]
38 return [self.candidate]
39
40 def is_satisfied_by(self, candidate):
41 # type: (Candidate) -> bool
42 return candidate == self.candidate
43
44
45 class SpecifierRequirement(Requirement):
46 def __init__(self, ireq, factory):
47 # type: (InstallRequirement, Factory) -> None
48 assert ireq.link is None, "This is a link, not a specifier"
49 self._ireq = ireq
50 self._factory = factory
51 self.extras = ireq.req.extras
52
53 def __str__(self):
54 # type: () -> str
55 return str(self._ireq.req)
56
57 def __repr__(self):
58 # type: () -> str
59 return "{class_name}({requirement!r})".format(
60 class_name=self.__class__.__name__,
61 requirement=str(self._ireq.req),
62 )
63
64 @property
65 def name(self):
66 # type: () -> str
67 canonical_name = canonicalize_name(self._ireq.req.name)
68 return format_name(canonical_name, self.extras)
69
70 def find_matches(self):
71 # type: () -> Sequence[Candidate]
72 it = self._factory.iter_found_candidates(self._ireq, self.extras)
73 return list(it)
74
75 def is_satisfied_by(self, candidate):
76 # type: (Candidate) -> bool
77 assert candidate.name == self.name, \
78 "Internal issue: Candidate is not for this requirement " \
79 " {} vs {}".format(candidate.name, self.name)
80 return candidate.version in self._ireq.req.specifier
81
82
83 class RequiresPythonRequirement(Requirement):
84 """A requirement representing Requires-Python metadata.
85 """
86 def __init__(self, specifier, match):
87 # type: (SpecifierSet, Candidate) -> None
88 self.specifier = specifier
89 self._candidate = match
90
91 def __repr__(self):
92 # type: () -> str
93 return "{class_name}({specifier!r})".format(
94 class_name=self.__class__.__name__,
95 specifier=str(self.specifier),
96 )
97
98 @property
99 def name(self):
100 # type: () -> str
101 return self._candidate.name
102
103 def find_matches(self):
104 # type: () -> Sequence[Candidate]
105 if self._candidate.version in self.specifier:
106 return [self._candidate]
107 return []
108
109 def is_satisfied_by(self, candidate):
110 # type: (Candidate) -> bool
111 assert candidate.name == self._candidate.name, "Not Python candidate"
112 return candidate.version in self.specifier
113
[end of src/pip/_internal/resolution/resolvelib/requirements.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_internal/resolution/resolvelib/requirements.py b/src/pip/_internal/resolution/resolvelib/requirements.py
--- a/src/pip/_internal/resolution/resolvelib/requirements.py
+++ b/src/pip/_internal/resolution/resolvelib/requirements.py
@@ -77,7 +77,11 @@
assert candidate.name == self.name, \
"Internal issue: Candidate is not for this requirement " \
" {} vs {}".format(candidate.name, self.name)
- return candidate.version in self._ireq.req.specifier
+ # We can safely always allow prereleases here since PackageFinder
+ # already implements the prerelease logic, and would have filtered out
+ # prerelease candidates if the user does not expect them.
+ spec = self._ireq.req.specifier
+ return spec.contains(candidate.version, prereleases=True)
class RequiresPythonRequirement(Requirement):
@@ -109,4 +113,7 @@
def is_satisfied_by(self, candidate):
# type: (Candidate) -> bool
assert candidate.name == self._candidate.name, "Not Python candidate"
- return candidate.version in self.specifier
+ # We can safely always allow prereleases here since PackageFinder
+ # already implements the prerelease logic, and would have filtered out
+ # prerelease candidates if the user does not expect them.
+ return self.specifier.contains(candidate.version, prereleases=True)
|
{"golden_diff": "diff --git a/src/pip/_internal/resolution/resolvelib/requirements.py b/src/pip/_internal/resolution/resolvelib/requirements.py\n--- a/src/pip/_internal/resolution/resolvelib/requirements.py\n+++ b/src/pip/_internal/resolution/resolvelib/requirements.py\n@@ -77,7 +77,11 @@\n assert candidate.name == self.name, \\\n \"Internal issue: Candidate is not for this requirement \" \\\n \" {} vs {}\".format(candidate.name, self.name)\n- return candidate.version in self._ireq.req.specifier\n+ # We can safely always allow prereleases here since PackageFinder\n+ # already implements the prerelease logic, and would have filtered out\n+ # prerelease candidates if the user does not expect them.\n+ spec = self._ireq.req.specifier\n+ return spec.contains(candidate.version, prereleases=True)\n \n \n class RequiresPythonRequirement(Requirement):\n@@ -109,4 +113,7 @@\n def is_satisfied_by(self, candidate):\n # type: (Candidate) -> bool\n assert candidate.name == self._candidate.name, \"Not Python candidate\"\n- return candidate.version in self.specifier\n+ # We can safely always allow prereleases here since PackageFinder\n+ # already implements the prerelease logic, and would have filtered out\n+ # prerelease candidates if the user does not expect them.\n+ return self.specifier.contains(candidate.version, prereleases=True)\n", "issue": "New resolver cannot installs distributions that only have pre releases\n**Environment**\r\n\r\n* pip version: master, today\r\n* Python version: 3\r\n* OS: linux\r\n\r\n**Description**\r\n\r\nI want to install a distribution that only has pre-releases. The legacy resolver does support this. The new one does not. \r\n\r\nNote: using `--pre` does not seem to influence the result. The legacy resolver could install such distributions without using `--pre`.\r\n\r\n**Expected behavior**\r\n\r\nInstallation should succeed.\r\n\r\n**How to Reproduce**\r\n\r\n```console\r\n$ pip install --no-deps odoo13-addon-date-range --unstable-feature=resolver\r\nERROR: Exception:\r\nTraceback (most recent call last):\r\n File \"/home/me/pip/src/pip/_internal/cli/base_command.py\", line 199, in _main\r\n status = self.run(options, args)\r\n File \"/home/me/pip/src/pip/_internal/cli/req_command.py\", line 185, in wrapper\r\n return func(self, options, args)\r\n File \"/home/me/pip/src/pip/_internal/commands/install.py\", line 333, in run\r\n reqs, check_supported_wheels=not options.target_dir\r\n File \"/home/me/pip/src/pip/_internal/resolution/resolvelib/resolver.py\", line 80, in resolve\r\n self._result = resolver.resolve(requirements)\r\n File \"/home/me/pip/src/pip/_vendor/resolvelib/resolvers.py\", line 413, in resolve\r\n state = resolution.resolve(requirements, max_rounds=max_rounds)\r\n File \"/home/me/pip/src/pip/_vendor/resolvelib/resolvers.py\", line 310, in resolve\r\n failure_causes = self._attempt_to_pin_criterion(name, criterion)\r\n File \"/home/me/pip/src/pip/_vendor/resolvelib/resolvers.py\", line 240, in _attempt_to_pin_criterion\r\n raise InconsistentCandidate(candidate, criterion)\r\npip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate LinkCandidate('https://files.pythonhosted.org/packages/1f/0b/945335a37082b6b013cc1331f49e3f5b6a18cdd0b693475e6ca9e9a7df6e/odoo13_addon_date_range-13.0.1.0.1.dev8-py3-none-any.whl#sha256=3883bbe87db8d5db4364e8a42e86546e19e8e4f123d98c4e9454587dfa9401df (from https://pypi.org/simple/odoo13-addon-date-range/) (requires-python:>=3.5)') does not satisfy 
SpecifierRequirement('odoo13-addon-date-range')\r\n```\r\n\r\nNote I used `--no-deps` because a dependency is not on pypi, but that has no influence on the result.\n", "before_files": [{"content": "from pip._vendor.packaging.utils import canonicalize_name\n\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nfrom .base import Requirement, format_name\n\nif MYPY_CHECK_RUNNING:\n from typing import Sequence\n\n from pip._vendor.packaging.specifiers import SpecifierSet\n\n from pip._internal.req.req_install import InstallRequirement\n\n from .base import Candidate\n from .factory import Factory\n\n\nclass ExplicitRequirement(Requirement):\n def __init__(self, candidate):\n # type: (Candidate) -> None\n self.candidate = candidate\n\n def __repr__(self):\n # type: () -> str\n return \"{class_name}({candidate!r})\".format(\n class_name=self.__class__.__name__,\n candidate=self.candidate,\n )\n\n @property\n def name(self):\n # type: () -> str\n # No need to canonicalise - the candidate did this\n return self.candidate.name\n\n def find_matches(self):\n # type: () -> Sequence[Candidate]\n return [self.candidate]\n\n def is_satisfied_by(self, candidate):\n # type: (Candidate) -> bool\n return candidate == self.candidate\n\n\nclass SpecifierRequirement(Requirement):\n def __init__(self, ireq, factory):\n # type: (InstallRequirement, Factory) -> None\n assert ireq.link is None, \"This is a link, not a specifier\"\n self._ireq = ireq\n self._factory = factory\n self.extras = ireq.req.extras\n\n def __str__(self):\n # type: () -> str\n return str(self._ireq.req)\n\n def __repr__(self):\n # type: () -> str\n return \"{class_name}({requirement!r})\".format(\n class_name=self.__class__.__name__,\n requirement=str(self._ireq.req),\n )\n\n @property\n def name(self):\n # type: () -> str\n canonical_name = canonicalize_name(self._ireq.req.name)\n return format_name(canonical_name, self.extras)\n\n def find_matches(self):\n # type: () -> Sequence[Candidate]\n it = self._factory.iter_found_candidates(self._ireq, self.extras)\n return list(it)\n\n def is_satisfied_by(self, candidate):\n # type: (Candidate) -> bool\n assert candidate.name == self.name, \\\n \"Internal issue: Candidate is not for this requirement \" \\\n \" {} vs {}\".format(candidate.name, self.name)\n return candidate.version in self._ireq.req.specifier\n\n\nclass RequiresPythonRequirement(Requirement):\n \"\"\"A requirement representing Requires-Python metadata.\n \"\"\"\n def __init__(self, specifier, match):\n # type: (SpecifierSet, Candidate) -> None\n self.specifier = specifier\n self._candidate = match\n\n def __repr__(self):\n # type: () -> str\n return \"{class_name}({specifier!r})\".format(\n class_name=self.__class__.__name__,\n specifier=str(self.specifier),\n )\n\n @property\n def name(self):\n # type: () -> str\n return self._candidate.name\n\n def find_matches(self):\n # type: () -> Sequence[Candidate]\n if self._candidate.version in self.specifier:\n return [self._candidate]\n return []\n\n def is_satisfied_by(self, candidate):\n # type: (Candidate) -> bool\n assert candidate.name == self._candidate.name, \"Not Python candidate\"\n return candidate.version in self.specifier\n", "path": "src/pip/_internal/resolution/resolvelib/requirements.py"}]}
| 2,255 | 324 |
gh_patches_debug_5545 | rasdani/github-patches | git_diff | tensorflow__tfx-3864 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update tensorflow-hub requirement to allow 0.12.0?
If the feature is related to a specific library below, please raise an issue in
the respective repo directly:
[TensorFlow Data Validation Repo](https://github.com/tensorflow/data-validation/issues)
[TensorFlow Model Analysis Repo](https://github.com/tensorflow/model-analysis/issues)
[TensorFlow Transform Repo](https://github.com/tensorflow/transform/issues)
[TensorFlow Serving Repo](https://github.com/tensorflow/serving/issues)
**System information**
- TFX Version (you are using): 1.0.0-rc0
- Environment in which you plan to use the feature (e.g., Local
(Linux/MacOS/Windows), Interactive Notebook, Google Cloud, etc..): MacOS, AWS
- Are you willing to contribute it (Yes/No): Yes
**Describe the feature and the current behavior/state.**
tfx (1.0.0-rc0) currently depends on tensorflow-hub (>=0.9.0,<0.10)
I was wondering if we could update the tensorflow-hub dependency for tfx to allow tf-hub 0.12.0, so something like (>=0.9.0,<=0.12.0)?
I am not sure if that would break anything in tfx, but I am happy to investigate and contribute to this change
**Will this change the current API? How?**
No
**Who will benefit with this feature?**
tensorflow-hub has added some new features in 0.10.0 and beyond (specifically the one I'm interested in "`compute_output_shape` in `hub.KerasLayer`" which they added in 0.12.0). It would be cool to be able to take advantage of those while still being able to use tfx
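As a rough illustration of that 0.12.0 feature: the module handle below is just an arbitrary public image feature vector, and this assumes the layer can infer its output shape for the given input (otherwise an explicit `output_shape` argument or a built layer may be needed).

```python
import tensorflow_hub as hub

layer = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
    trainable=False)
# compute_output_shape on hub.KerasLayer is the 0.12.0 addition referred to above
print(layer.compute_output_shape((None, 224, 224, 3)))  # e.g. (None, 1280)
```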
**Do you have a workaround or are completely blocked by this?** :
Blocked
**Name of your Organization (Optional)**
**Any Other info.**
</issue>
<code>
[start of tfx/dependencies.py]
1 # Copyright 2019 Google LLC. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Package dependencies for TFX.
15
16 tfx and family libraries (such as tensorflow-model-analysis) adopts environment
17 variable (TFX_DEPENDENCY_SELECTOR) based dependency version selection. This
18 dependency will be baked in to the wheel, in other words you cannot change
19 dependency string once wheel is built.
20
21 - UNCONSTRAINED uses dependency without any version constraint string, which is
22 useful when you manually build wheels of parent library (e.g. tfx-bsl) of
23 arbitrary version, and install it without dependency constraints conflict.
24 - NIGHTLY uses x.(y+1).0.dev version as a lower version constraint. tfx nightly
25 will transitively depend on nightly versions of other TFX family libraries,
26 and this version constraint is required.
27 - GIT_MASTER uses github master branch URL of the dependency, which is useful
28 during development, or when depending on the github master HEAD version of
29 tfx. This is because tfx github master HEAD version is actually using github
30 master HEAD version of parent libraries.
31 Caveat: URL dependency is not upgraded with --upgrade flag, and you have to
32 specify --force-reinstall flag to fetch the latest change from each master
33 branch HEAD.
34 - For the release, we use a range of version, which is also used as a default.
35 """
36 import os
37
38
39 def select_constraint(default, nightly=None, git_master=None):
40 """Select dependency constraint based on TFX_DEPENDENCY_SELECTOR env var."""
41 selector = os.environ.get('TFX_DEPENDENCY_SELECTOR')
42 if selector == 'UNCONSTRAINED':
43 return ''
44 elif selector == 'NIGHTLY' and nightly is not None:
45 return nightly
46 elif selector == 'GIT_MASTER' and git_master is not None:
47 return git_master
48 else:
49 return default
50
51
52 def make_pipeline_sdk_required_install_packages():
53 return [
54 'absl-py>=0.9,<0.13',
55 'ml-metadata' + select_constraint(
56 # LINT.IfChange
57 default='>=1.0.0,<1.1.0',
58 # LINT.ThenChange(tfx/workspace.bzl)
59 nightly='>=1.1.0.dev',
60 git_master='@git+https://github.com/google/ml-metadata@master'),
61 'packaging>=20,<21',
62 'portpicker>=1.3.1,<2',
63 'protobuf>=3.12.2,<4',
64 'docker>=4.1,<5',
65 # TODO(b/176812386): Deprecate usage of jinja2 for placeholders.
66 'jinja2>=2.7.3,<3',
67 ]
68
69
70 def make_required_install_packages():
71 # Make sure to sync the versions of common dependencies (absl-py, numpy,
72 # and protobuf) with TF.
73 return make_pipeline_sdk_required_install_packages() + [
74 'apache-beam[gcp]>=2.29,<3',
75 'attrs>=19.3.0,<21',
76 'click>=7,<8',
77 'google-api-python-client>=1.7.8,<2',
78 'google-cloud-aiplatform>=0.5.0,<0.8',
79 'google-cloud-bigquery>=1.28.0,<3',
80 'grpcio>=1.28.1,<2',
81 # TODO(b/173976603): remove pinned keras-tuner upperbound when its
82 # dependency expecatation with TensorFlow is sorted out.
83 'keras-tuner>=1,<1.0.2',
84 'kubernetes>=10.0.1,<12',
85 # TODO(b/179195488): remove numpy dependency after 1.20 migration.
86 # This dependency was added only to limit numpy 1.20 installation.
87 'numpy>=1.16,<1.20',
88 'pyarrow>=1,<3',
89 'pyyaml>=3.12,<6',
90 'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',
91 'tensorflow-hub>=0.9.0,<=0.12.0',
92 'tensorflow-data-validation' + select_constraint(
93 default='>=1.0.0,<1.1.0',
94 nightly='>=1.1.0.dev',
95 git_master='@git+https://github.com/tensorflow/data-validation@master'
96 ),
97 'tensorflow-model-analysis' + select_constraint(
98 default='>=0.31,<0.32',
99 nightly='>=0.32.0.dev',
100 git_master='@git+https://github.com/tensorflow/model-analysis@master'),
101 'tensorflow-serving-api>=1.15,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',
102 'tensorflow-transform' + select_constraint(
103 default='>=1.0.0,<1.1.0',
104 nightly='>=1.1.0.dev',
105 git_master='@git+https://github.com/tensorflow/transform@master'),
106 'tfx-bsl' + select_constraint(
107 default='>=1.0.0,<1.1.0',
108 nightly='>=1.1.0.dev',
109 git_master='@git+https://github.com/tensorflow/tfx-bsl@master'),
110 ]
111
112
113 def make_extra_packages_airflow():
114 """Prepare extra packages needed for Apache Airflow orchestrator."""
115 return [
116 # TODO(b/188940096): update supported version.
117 'apache-airflow[mysql]>=1.10.14,<3',
118 # TODO(b/182848576): Delete pinned sqlalchemy after apache-airflow 2.0.2
119 # or later.(github.com/apache/airflow/issues/14811)
120 'sqlalchemy>=1.3,<1.4',
121 ]
122
123
124 def make_extra_packages_kfp():
125 """Prepare extra packages needed for Kubeflow Pipelines orchestrator."""
126 return [
127 'kfp>=1.6.1,<2',
128 'kfp-pipeline-spec>=0.1.7,<0.2',
129 ]
130
131
132 def make_extra_packages_test():
133 """Prepare extra packages needed for running unit tests."""
134 # Note: It is okay to pin packages to exact versions in this list to minimize
135 # conflicts.
136 return make_extra_packages_airflow() + make_extra_packages_kfp() + [
137 'pytest>=5,<6',
138 ]
139
140
141 def make_extra_packages_docker_image():
142 # Packages needed for tfx docker image.
143 return [
144 'kfp-pipeline-spec>=0.1.7,<0.2',
145 'mmh>=2.2,<3',
146 'python-snappy>=0.5,<0.6',
147 ]
148
149
150 def make_extra_packages_tfjs():
151 # Packages needed for tfjs.
152 return [
153 'tensorflowjs>=3.6.0,<4',
154 ]
155
156
157 def make_extra_packages_tf_ranking():
158 # Packages needed for tf-ranking which is used in tfx/examples/ranking.
159 return [
160 'tensorflow-ranking>=0.3.3,<0.4',
161 'struct2tensor' + select_constraint(
162 default='>=0.31,<0.32',
163 nightly='>=0.32.0.dev',
164 git_master='@git+https://github.com/google/struct2tensor@master'),
165 ]
166
167
168 def make_extra_packages_examples():
169 # Extra dependencies required for tfx/examples.
170 return [
171 # Required for presto ExampleGen custom component in
172 # tfx/examples/custom_components/presto_example_gen
173 'presto-python-client>=0.7,<0.8',
174 # Required for slack custom component in
175 # tfx/examples/custom_components/slack
176 'slackclient>=2.8.2,<3',
177 'websocket-client>=0.57,<1',
178 # Required for bert examples in tfx/examples/bert
179 'tensorflow-text>=1.15.1,<3',
180 # Required for tfx/examples/cifar10
181 'flatbuffers>=1.12,<2',
182 'tflite-support>=0.1.0a1,<0.1.1',
183 # Required for tfx/examples/penguin/experimental
184 # LINT.IfChange
185 'scikit-learn>=0.23,<0.24',
186 # LINT.ThenChange(
187 # examples/penguin/experimental/penguin_pipeline_sklearn_gcp.py)
188 # Required for the experimental tfx/examples using Flax, e.g.,
189 # tfx/examples/penguin.
190 'jax>=0.2.13,<0.3',
191 'jaxlib>=0.1.64,<0.2',
192 'flax>=0.3.3,<0.4',
193 # Required for tfx/examples/penguin/penguin_utils_cloud_tuner.py
194 'tensorflow-cloud>=0.1,<0.2',
195 ]
196
197
198 def make_extra_packages_all():
199 # All extra dependencies.
200 return [
201 *make_extra_packages_test(),
202 *make_extra_packages_tfjs(),
203 *make_extra_packages_tf_ranking(),
204 *make_extra_packages_examples(),
205 ]
206
[end of tfx/dependencies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tfx/dependencies.py b/tfx/dependencies.py
--- a/tfx/dependencies.py
+++ b/tfx/dependencies.py
@@ -88,7 +88,7 @@
'pyarrow>=1,<3',
'pyyaml>=3.12,<6',
'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',
- 'tensorflow-hub>=0.9.0,<=0.12.0',
+ 'tensorflow-hub>=0.9.0,<0.13',
'tensorflow-data-validation' + select_constraint(
default='>=1.0.0,<1.1.0',
nightly='>=1.1.0.dev',
|
{"golden_diff": "diff --git a/tfx/dependencies.py b/tfx/dependencies.py\n--- a/tfx/dependencies.py\n+++ b/tfx/dependencies.py\n@@ -88,7 +88,7 @@\n 'pyarrow>=1,<3',\n 'pyyaml>=3.12,<6',\n 'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',\n- 'tensorflow-hub>=0.9.0,<=0.12.0',\n+ 'tensorflow-hub>=0.9.0,<0.13',\n 'tensorflow-data-validation' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n", "issue": "Update tensorflow-hub requirement to allow 0.12.0?\nIf the feature is related to a specific library below, please raise an issue in\r\nthe respective repo directly:\r\n\r\n[TensorFlow Data Validation Repo](https://github.com/tensorflow/data-validation/issues)\r\n\r\n[TensorFlow Model Analysis Repo](https://github.com/tensorflow/model-analysis/issues)\r\n\r\n[TensorFlow Transform Repo](https://github.com/tensorflow/transform/issues)\r\n\r\n[TensorFlow Serving Repo](https://github.com/tensorflow/serving/issues)\r\n\r\n**System information**\r\n\r\n- TFX Version (you are using): 1.0.0-rc0\r\n- Environment in which you plan to use the feature (e.g., Local\r\n (Linux/MacOS/Windows), Interactive Notebook, Google Cloud, etc..): MacOS, AWS\r\n- Are you willing to contribute it (Yes/No): Yes\r\n\r\n**Describe the feature and the current behavior/state.**\r\ntfx (1.0.0-rc0) currently depends on tensorflow-hub (>=0.9.0,<0.10)\r\n\r\nI was wondering if we could update tensorflow-hub dependancy for tfx to allow tf-hub 0.12.0, so something like (>=0.9.0,<=0.12.0)?\r\n\r\nI am not sure if that would break anything in tfx, but I am happy to investigate and contribute to this change\r\n\r\n**Will this change the current API? How?**\r\nNo\r\n\r\n**Who will benefit with this feature?**\r\ntensorflow-hub has added some new features in 0.10.0 and beyond (specifically the one I'm interested in \"`compute_output_shape` in `hub.KerasLayer`\" which they added in 0.12.0). It would be cool to be able to take advantage of those while still being able to use tfx\r\n\r\n**Do you have a workaround or are completely blocked by this?** :\r\nBlocked\r\n\r\n**Name of your Organization (Optional)**\r\n\r\n\r\n**Any Other info.**\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Package dependencies for TFX.\n\ntfx and family libraries (such as tensorflow-model-analysis) adopts environment\nvariable (TFX_DEPENDENCY_SELECTOR) based dependency version selection. This\ndependency will be baked in to the wheel, in other words you cannot change\ndependency string once wheel is built.\n\n- UNCONSTRAINED uses dependency without any version constraint string, which is\n useful when you manually build wheels of parent library (e.g. tfx-bsl) of\n arbitrary version, and install it without dependency constraints conflict.\n- NIGHTLY uses x.(y+1).0.dev version as a lower version constraint. 
tfx nightly\n will transitively depend on nightly versions of other TFX family libraries,\n and this version constraint is required.\n- GIT_MASTER uses github master branch URL of the dependency, which is useful\n during development, or when depending on the github master HEAD version of\n tfx. This is because tfx github master HEAD version is actually using github\n master HEAD version of parent libraries.\n Caveat: URL dependency is not upgraded with --upgrade flag, and you have to\n specify --force-reinstall flag to fetch the latest change from each master\n branch HEAD.\n- For the release, we use a range of version, which is also used as a default.\n\"\"\"\nimport os\n\n\ndef select_constraint(default, nightly=None, git_master=None):\n \"\"\"Select dependency constraint based on TFX_DEPENDENCY_SELECTOR env var.\"\"\"\n selector = os.environ.get('TFX_DEPENDENCY_SELECTOR')\n if selector == 'UNCONSTRAINED':\n return ''\n elif selector == 'NIGHTLY' and nightly is not None:\n return nightly\n elif selector == 'GIT_MASTER' and git_master is not None:\n return git_master\n else:\n return default\n\n\ndef make_pipeline_sdk_required_install_packages():\n return [\n 'absl-py>=0.9,<0.13',\n 'ml-metadata' + select_constraint(\n # LINT.IfChange\n default='>=1.0.0,<1.1.0',\n # LINT.ThenChange(tfx/workspace.bzl)\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/google/ml-metadata@master'),\n 'packaging>=20,<21',\n 'portpicker>=1.3.1,<2',\n 'protobuf>=3.12.2,<4',\n 'docker>=4.1,<5',\n # TODO(b/176812386): Deprecate usage of jinja2 for placeholders.\n 'jinja2>=2.7.3,<3',\n ]\n\n\ndef make_required_install_packages():\n # Make sure to sync the versions of common dependencies (absl-py, numpy,\n # and protobuf) with TF.\n return make_pipeline_sdk_required_install_packages() + [\n 'apache-beam[gcp]>=2.29,<3',\n 'attrs>=19.3.0,<21',\n 'click>=7,<8',\n 'google-api-python-client>=1.7.8,<2',\n 'google-cloud-aiplatform>=0.5.0,<0.8',\n 'google-cloud-bigquery>=1.28.0,<3',\n 'grpcio>=1.28.1,<2',\n # TODO(b/173976603): remove pinned keras-tuner upperbound when its\n # dependency expecatation with TensorFlow is sorted out.\n 'keras-tuner>=1,<1.0.2',\n 'kubernetes>=10.0.1,<12',\n # TODO(b/179195488): remove numpy dependency after 1.20 migration.\n # This dependency was added only to limit numpy 1.20 installation.\n 'numpy>=1.16,<1.20',\n 'pyarrow>=1,<3',\n 'pyyaml>=3.12,<6',\n 'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',\n 'tensorflow-hub>=0.9.0,<=0.12.0',\n 'tensorflow-data-validation' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/tensorflow/data-validation@master'\n ),\n 'tensorflow-model-analysis' + select_constraint(\n default='>=0.31,<0.32',\n nightly='>=0.32.0.dev',\n git_master='@git+https://github.com/tensorflow/model-analysis@master'),\n 'tensorflow-serving-api>=1.15,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',\n 'tensorflow-transform' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/tensorflow/transform@master'),\n 'tfx-bsl' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/tensorflow/tfx-bsl@master'),\n ]\n\n\ndef make_extra_packages_airflow():\n \"\"\"Prepare extra packages needed for Apache Airflow orchestrator.\"\"\"\n return [\n # TODO(b/188940096): update supported version.\n 'apache-airflow[mysql]>=1.10.14,<3',\n # TODO(b/182848576): Delete pinned sqlalchemy after apache-airflow 
2.0.2\n # or later.(github.com/apache/airflow/issues/14811)\n 'sqlalchemy>=1.3,<1.4',\n ]\n\n\ndef make_extra_packages_kfp():\n \"\"\"Prepare extra packages needed for Kubeflow Pipelines orchestrator.\"\"\"\n return [\n 'kfp>=1.6.1,<2',\n 'kfp-pipeline-spec>=0.1.7,<0.2',\n ]\n\n\ndef make_extra_packages_test():\n \"\"\"Prepare extra packages needed for running unit tests.\"\"\"\n # Note: It is okay to pin packages to exact versions in this list to minimize\n # conflicts.\n return make_extra_packages_airflow() + make_extra_packages_kfp() + [\n 'pytest>=5,<6',\n ]\n\n\ndef make_extra_packages_docker_image():\n # Packages needed for tfx docker image.\n return [\n 'kfp-pipeline-spec>=0.1.7,<0.2',\n 'mmh>=2.2,<3',\n 'python-snappy>=0.5,<0.6',\n ]\n\n\ndef make_extra_packages_tfjs():\n # Packages needed for tfjs.\n return [\n 'tensorflowjs>=3.6.0,<4',\n ]\n\n\ndef make_extra_packages_tf_ranking():\n # Packages needed for tf-ranking which is used in tfx/examples/ranking.\n return [\n 'tensorflow-ranking>=0.3.3,<0.4',\n 'struct2tensor' + select_constraint(\n default='>=0.31,<0.32',\n nightly='>=0.32.0.dev',\n git_master='@git+https://github.com/google/struct2tensor@master'),\n ]\n\n\ndef make_extra_packages_examples():\n # Extra dependencies required for tfx/examples.\n return [\n # Required for presto ExampleGen custom component in\n # tfx/examples/custom_components/presto_example_gen\n 'presto-python-client>=0.7,<0.8',\n # Required for slack custom component in\n # tfx/examples/custom_components/slack\n 'slackclient>=2.8.2,<3',\n 'websocket-client>=0.57,<1',\n # Required for bert examples in tfx/examples/bert\n 'tensorflow-text>=1.15.1,<3',\n # Required for tfx/examples/cifar10\n 'flatbuffers>=1.12,<2',\n 'tflite-support>=0.1.0a1,<0.1.1',\n # Required for tfx/examples/penguin/experimental\n # LINT.IfChange\n 'scikit-learn>=0.23,<0.24',\n # LINT.ThenChange(\n # examples/penguin/experimental/penguin_pipeline_sklearn_gcp.py)\n # Required for the experimental tfx/examples using Flax, e.g.,\n # tfx/examples/penguin.\n 'jax>=0.2.13,<0.3',\n 'jaxlib>=0.1.64,<0.2',\n 'flax>=0.3.3,<0.4',\n # Required for tfx/examples/penguin/penguin_utils_cloud_tuner.py\n 'tensorflow-cloud>=0.1,<0.2',\n ]\n\n\ndef make_extra_packages_all():\n # All extra dependencies.\n return [\n *make_extra_packages_test(),\n *make_extra_packages_tfjs(),\n *make_extra_packages_tf_ranking(),\n *make_extra_packages_examples(),\n ]\n", "path": "tfx/dependencies.py"}]}
| 3,641 | 180 |
gh_patches_debug_7777
|
rasdani/github-patches
|
git_diff
|
cowrie__cowrie-1685
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HPFeeds3 UnicodeDecodeError in ttylog.read().encode().hex()
**Describe the bug**
Stack Trace from the cowrie version v2.3.0, as already described in #1307
```
cowrie | 2022-01-23T14:52:17+0000 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.hpfeeds3.Output object at 0x7f4019656490>>) due to exception: [Failure instance: Traceback: <class 'UnicodeDecodeError'>: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte
cowrie | /home/cowrie/cowrie/src/cowrie/insults/insults.py:226:connectionLost
cowrie | /usr/lib/python3.9/site-packages/twisted/python/threadable.py:51:sync
cowrie | /usr/lib/python3.9/site-packages/twisted/python/log.py:281:msg
cowrie | /usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:147:publishToNewObserver
cowrie | --- <exception caught here> ---
cowrie | /usr/lib/python3.9/site-packages/twisted/logger/_observer.py:82:__call__
cowrie | /usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:90:__call__
cowrie | /home/cowrie/cowrie/src/cowrie/core/output.py:240:emit
cowrie | /home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py:110:write
cowrie | /usr/lib/python3.9/codecs.py:322:decode
cowrie | ]
cowrie | Traceback (most recent call last):
cowrie | File "/home/cowrie/cowrie/src/cowrie/insults/insults.py", line 226, in connectionLost
cowrie | log.msg(
cowrie | File "/usr/lib/python3.9/site-packages/twisted/python/threadable.py", line 51, in sync
cowrie | return function(self, *args, **kwargs)
cowrie | File "/usr/lib/python3.9/site-packages/twisted/python/log.py", line 281, in msg
cowrie | _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)
cowrie | File "/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py", line 147, in publishToNewObserver
cowrie | observer(eventDict)
cowrie | --- <exception caught here> ---
cowrie | File "/usr/lib/python3.9/site-packages/twisted/logger/_observer.py", line 82, in __call__
cowrie | observer(event)
cowrie | File "/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py", line 90, in __call__
cowrie | self.legacyObserver(event)
cowrie | File "/home/cowrie/cowrie/src/cowrie/core/output.py", line 240, in emit
cowrie | self.write(ev)
cowrie | File "/home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py", line 110, in write
cowrie | self.meta[session]["ttylog"] = ttylog.read().encode().hex()
cowrie | File "/usr/lib/python3.9/codecs.py", line 322, in decode
cowrie | (result, consumed) = self._buffer_decode(data, self.errors, final)
cowrie | builtins.UnicodeDecodeError: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte
```
**Server (please complete the following information):**
- OS: Alpine Linux in Docker
- Python: Python 3.9
**Additional context**
The ttylog seems to be a binary file with only parts of it being text.
At the moment the file is opened as a text file, then encoded to utf-8 bytes and then to a hex representation. Opening it as a binary file and directly transforming it to a hex representation should fix it.
HPFeeds3 UnicodeDecodeError in ttylog.read().encode().hex()
**Describe the bug**
Stack Trace from the cowrie version v2.3.0, as already described in #1307
```
cowrie | 2022-01-23T14:52:17+0000 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.hpfeeds3.Output object at 0x7f4019656490>>) due to exception: [Failure instance: Traceback: <class 'UnicodeDecodeError'>: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte
cowrie | /home/cowrie/cowrie/src/cowrie/insults/insults.py:226:connectionLost
cowrie | /usr/lib/python3.9/site-packages/twisted/python/threadable.py:51:sync
cowrie | /usr/lib/python3.9/site-packages/twisted/python/log.py:281:msg
cowrie | /usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:147:publishToNewObserver
cowrie | --- <exception caught here> ---
cowrie | /usr/lib/python3.9/site-packages/twisted/logger/_observer.py:82:__call__
cowrie | /usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:90:__call__
cowrie | /home/cowrie/cowrie/src/cowrie/core/output.py:240:emit
cowrie | /home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py:110:write
cowrie | /usr/lib/python3.9/codecs.py:322:decode
cowrie | ]
cowrie | Traceback (most recent call last):
cowrie | File "/home/cowrie/cowrie/src/cowrie/insults/insults.py", line 226, in connectionLost
cowrie | log.msg(
cowrie | File "/usr/lib/python3.9/site-packages/twisted/python/threadable.py", line 51, in sync
cowrie | return function(self, *args, **kwargs)
cowrie | File "/usr/lib/python3.9/site-packages/twisted/python/log.py", line 281, in msg
cowrie | _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)
cowrie | File "/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py", line 147, in publishToNewObserver
cowrie | observer(eventDict)
cowrie | --- <exception caught here> ---
cowrie | File "/usr/lib/python3.9/site-packages/twisted/logger/_observer.py", line 82, in __call__
cowrie | observer(event)
cowrie | File "/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py", line 90, in __call__
cowrie | self.legacyObserver(event)
cowrie | File "/home/cowrie/cowrie/src/cowrie/core/output.py", line 240, in emit
cowrie | self.write(ev)
cowrie | File "/home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py", line 110, in write
cowrie | self.meta[session]["ttylog"] = ttylog.read().encode().hex()
cowrie | File "/usr/lib/python3.9/codecs.py", line 322, in decode
cowrie | (result, consumed) = self._buffer_decode(data, self.errors, final)
cowrie | builtins.UnicodeDecodeError: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte
```
**Server (please complete the following information):**
- OS: Alpine Linux in Docker
- Python: Python 3.9
**Additional context**
The ttylog seems to be a binary file with only parts of it being text.
At the moment the file is opened as a text file, then encoded to utf-8 bytes and then to a hex representation. Opening it as a binary file and directly transforming it to a hex representation should fix it.
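A minimal sketch of the change this paragraph suggests — read the ttylog as bytes and hex-encode it directly, so no UTF-8 decoding is attempted. The file name and contents here are made up purely for illustration:

```python
from pathlib import Path

sample = Path("example.ttylog")                       # hypothetical file, created only for this demo
sample.write_bytes(b"\x1b[2Jecho hello\x88\x00\xff")  # mixed printable text and raw bytes, like a real ttylog

# text-mode read + encode + hex: raises UnicodeDecodeError (with a UTF-8 locale) on bytes such as 0x88
# hex_dump = sample.read_text().encode().hex()

# binary read + hex: no decode step, so arbitrary bytes are handled fine
hex_dump = sample.read_bytes().hex()
print(hex_dump)
```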
</issue>
<code>
[start of src/cowrie/output/hpfeeds3.py]
1 """
2 Output plugin for HPFeeds
3 """
4
5 from __future__ import annotations
6
7 import json
8 import logging
9
10 from hpfeeds.twisted import ClientSessionService
11
12 from twisted.internet import endpoints, ssl
13 from twisted.internet import reactor # type: ignore
14 from twisted.python import log
15
16 import cowrie.core.output
17 from cowrie.core.config import CowrieConfig
18
19
20 class Output(cowrie.core.output.Output):
21 """
22 Output plugin for HPFeeds
23 """
24
25 channel = "cowrie.sessions"
26
27 def start(self):
28 if CowrieConfig.has_option("output_hpfeeds3", "channel"):
29 self.channel = CowrieConfig.get("output_hpfeeds3", "channel")
30
31 if CowrieConfig.has_option("output_hpfeeds3", "endpoint"):
32 endpoint = CowrieConfig.get("output_hpfeeds3", "endpoint")
33 else:
34 server = CowrieConfig.get("output_hpfeeds3", "server")
35 port = CowrieConfig.getint("output_hpfeeds3", "port")
36
37 if CowrieConfig.has_option("output_hpfeeds3", "tlscert"):
38 with open(CowrieConfig.get("output_hpfeeds3", "tlscert")) as fp:
39 authority = ssl.Certificate.loadPEM(fp.read())
40 options = ssl.optionsForClientTLS(server, authority)
41 endpoint = endpoints.SSL4ClientEndpoint(reactor, server, port, options)
42 else:
43 endpoint = endpoints.HostnameEndpoint(reactor, server, port)
44
45 ident = CowrieConfig.get("output_hpfeeds3", "identifier")
46 secret = CowrieConfig.get("output_hpfeeds3", "secret")
47
48 self.meta = {}
49
50 self.client = ClientSessionService(endpoint, ident, secret)
51 self.client.startService()
52
53 def stop(self):
54 self.client.stopService()
55
56 def write(self, entry):
57 session = entry["session"]
58 if entry["eventid"] == "cowrie.session.connect":
59 self.meta[session] = {
60 "session": session,
61 "startTime": entry["timestamp"],
62 "endTime": "",
63 "peerIP": entry["src_ip"],
64 "peerPort": entry["src_port"],
65 "hostIP": entry["dst_ip"],
66 "hostPort": entry["dst_port"],
67 "loggedin": None,
68 "credentials": [],
69 "commands": [],
70 "unknownCommands": [],
71 "urls": [],
72 "version": None,
73 "ttylog": None,
74 "hashes": set(),
75 "protocol": entry["protocol"],
76 }
77
78 elif entry["eventid"] == "cowrie.login.success":
79 u, p = entry["username"], entry["password"]
80 self.meta[session]["loggedin"] = (u, p)
81
82 elif entry["eventid"] == "cowrie.login.failed":
83 u, p = entry["username"], entry["password"]
84 self.meta[session]["credentials"].append((u, p))
85
86 elif entry["eventid"] == "cowrie.command.input":
87 c = entry["input"]
88 self.meta[session]["commands"].append(c)
89
90 elif entry["eventid"] == "cowrie.command.failed":
91 uc = entry["input"]
92 self.meta[session]["unknownCommands"].append(uc)
93
94 elif entry["eventid"] == "cowrie.session.file_download":
95 if "url" in entry:
96 url = entry["url"]
97 self.meta[session]["urls"].append(url)
98 self.meta[session]["hashes"].add(entry["shasum"])
99
100 elif entry["eventid"] == "cowrie.session.file_upload":
101 self.meta[session]["hashes"].add(entry["shasum"])
102
103 elif entry["eventid"] == "cowrie.client.version":
104 v = entry["version"]
105 self.meta[session]["version"] = v
106
107 elif entry["eventid"] == "cowrie.log.closed":
108 # entry["ttylog"]
109 with open(entry["ttylog"]) as ttylog:
110 self.meta[session]["ttylog"] = ttylog.read().encode().hex()
111
112 elif entry["eventid"] == "cowrie.session.closed":
113 meta = self.meta.pop(session, None)
114 if meta:
115 log.msg("publishing metadata to hpfeeds", logLevel=logging.DEBUG)
116 meta["endTime"] = entry["timestamp"]
117 meta["hashes"] = list(meta["hashes"])
118 self.client.publish(self.channel, json.dumps(meta).encode("utf-8"))
119
[end of src/cowrie/output/hpfeeds3.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cowrie/output/hpfeeds3.py b/src/cowrie/output/hpfeeds3.py
--- a/src/cowrie/output/hpfeeds3.py
+++ b/src/cowrie/output/hpfeeds3.py
@@ -106,8 +106,8 @@
elif entry["eventid"] == "cowrie.log.closed":
# entry["ttylog"]
- with open(entry["ttylog"]) as ttylog:
- self.meta[session]["ttylog"] = ttylog.read().encode().hex()
+ with open(entry["ttylog"], 'rb') as ttylog:
+ self.meta[session]["ttylog"] = ttylog.read().hex()
elif entry["eventid"] == "cowrie.session.closed":
meta = self.meta.pop(session, None)
|
{"golden_diff": "diff --git a/src/cowrie/output/hpfeeds3.py b/src/cowrie/output/hpfeeds3.py\n--- a/src/cowrie/output/hpfeeds3.py\n+++ b/src/cowrie/output/hpfeeds3.py\n@@ -106,8 +106,8 @@\n \n elif entry[\"eventid\"] == \"cowrie.log.closed\":\n # entry[\"ttylog\"]\n- with open(entry[\"ttylog\"]) as ttylog:\n- self.meta[session][\"ttylog\"] = ttylog.read().encode().hex()\n+ with open(entry[\"ttylog\"], 'rb') as ttylog:\n+ self.meta[session][\"ttylog\"] = ttylog.read().hex()\n \n elif entry[\"eventid\"] == \"cowrie.session.closed\":\n meta = self.meta.pop(session, None)\n", "issue": "HPFeeds3 UnicodeDecodeError in ttylog.read().encode().hex()\n**Describe the bug**\r\nStack Trace from the cowrie version v2.3.0, as already described in #1307 \r\n\r\n```\r\ncowrie | 2022-01-23T14:52:17+0000 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.hpfeeds3.Output object at 0x7f4019656490>>) due to exception: [Failure instance: Traceback: <class 'UnicodeDecodeError'>: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte\r\ncowrie | \t/home/cowrie/cowrie/src/cowrie/insults/insults.py:226:connectionLost\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/python/threadable.py:51:sync\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/python/log.py:281:msg\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:147:publishToNewObserver\r\ncowrie | \t--- <exception caught here> ---\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/logger/_observer.py:82:__call__\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:90:__call__\r\ncowrie | \t/home/cowrie/cowrie/src/cowrie/core/output.py:240:emit\r\ncowrie | \t/home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py:110:write\r\ncowrie | \t/usr/lib/python3.9/codecs.py:322:decode\r\ncowrie | \t]\r\ncowrie | \tTraceback (most recent call last):\r\ncowrie | \t File \"/home/cowrie/cowrie/src/cowrie/insults/insults.py\", line 226, in connectionLost\r\ncowrie | \t log.msg(\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/python/threadable.py\", line 51, in sync\r\ncowrie | \t return function(self, *args, **kwargs)\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/python/log.py\", line 281, in msg\r\ncowrie | \t _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py\", line 147, in publishToNewObserver\r\ncowrie | \t observer(eventDict)\r\ncowrie | \t--- <exception caught here> ---\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/logger/_observer.py\", line 82, in __call__\r\ncowrie | \t observer(event)\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py\", line 90, in __call__\r\ncowrie | \t self.legacyObserver(event)\r\ncowrie | \t File \"/home/cowrie/cowrie/src/cowrie/core/output.py\", line 240, in emit\r\ncowrie | \t self.write(ev)\r\ncowrie | \t File \"/home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py\", line 110, in write\r\ncowrie | \t self.meta[session][\"ttylog\"] = ttylog.read().encode().hex()\r\ncowrie | \t File \"/usr/lib/python3.9/codecs.py\", line 322, in decode\r\ncowrie | \t (result, consumed) = self._buffer_decode(data, self.errors, final)\r\ncowrie | \tbuiltins.UnicodeDecodeError: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte\r\n```\r\n\r\n**Server (please complete the following information):**\r\n - OS: Alpine Linux 
in Docker\r\n - Python: Python 3.9\r\n\r\n**Additional context**\r\nThe ttylog seems to be a binary file with only parts of it being text. \r\n\r\nAt the moment the file is opened as a text file, then encoded to utf-8 bytes and then to a hex representation. Opening it as a binary file and directly transforming it to a hex reprenstation should fix it.\nHPFeeds3 UnicodeDecodeError in ttylog.read().encode().hex()\n**Describe the bug**\r\nStack Trace from the cowrie version v2.3.0, as already described in #1307 \r\n\r\n```\r\ncowrie | 2022-01-23T14:52:17+0000 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.hpfeeds3.Output object at 0x7f4019656490>>) due to exception: [Failure instance: Traceback: <class 'UnicodeDecodeError'>: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte\r\ncowrie | \t/home/cowrie/cowrie/src/cowrie/insults/insults.py:226:connectionLost\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/python/threadable.py:51:sync\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/python/log.py:281:msg\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:147:publishToNewObserver\r\ncowrie | \t--- <exception caught here> ---\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/logger/_observer.py:82:__call__\r\ncowrie | \t/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py:90:__call__\r\ncowrie | \t/home/cowrie/cowrie/src/cowrie/core/output.py:240:emit\r\ncowrie | \t/home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py:110:write\r\ncowrie | \t/usr/lib/python3.9/codecs.py:322:decode\r\ncowrie | \t]\r\ncowrie | \tTraceback (most recent call last):\r\ncowrie | \t File \"/home/cowrie/cowrie/src/cowrie/insults/insults.py\", line 226, in connectionLost\r\ncowrie | \t log.msg(\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/python/threadable.py\", line 51, in sync\r\ncowrie | \t return function(self, *args, **kwargs)\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/python/log.py\", line 281, in msg\r\ncowrie | \t _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py\", line 147, in publishToNewObserver\r\ncowrie | \t observer(eventDict)\r\ncowrie | \t--- <exception caught here> ---\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/logger/_observer.py\", line 82, in __call__\r\ncowrie | \t observer(event)\r\ncowrie | \t File \"/usr/lib/python3.9/site-packages/twisted/logger/_legacy.py\", line 90, in __call__\r\ncowrie | \t self.legacyObserver(event)\r\ncowrie | \t File \"/home/cowrie/cowrie/src/cowrie/core/output.py\", line 240, in emit\r\ncowrie | \t self.write(ev)\r\ncowrie | \t File \"/home/cowrie/cowrie/src/cowrie/output/hpfeeds3.py\", line 110, in write\r\ncowrie | \t self.meta[session][\"ttylog\"] = ttylog.read().encode().hex()\r\ncowrie | \t File \"/usr/lib/python3.9/codecs.py\", line 322, in decode\r\ncowrie | \t (result, consumed) = self._buffer_decode(data, self.errors, final)\r\ncowrie | \tbuiltins.UnicodeDecodeError: 'utf-8' codec can't decode byte 0x88 in position 16: invalid start byte\r\n```\r\n\r\n**Server (please complete the following information):**\r\n - OS: Alpine Linux in Docker\r\n - Python: Python 3.9\r\n\r\n**Additional context**\r\nThe ttylog seems to be a binary file with only parts of it being text. 
\r\n\r\nAt the moment the file is opened as a text file, then encoded to utf-8 bytes and then to a hex representation. Opening it as a binary file and directly transforming it to a hex reprenstation should fix it.\n", "before_files": [{"content": "\"\"\"\nOutput plugin for HPFeeds\n\"\"\"\n\nfrom __future__ import annotations\n\nimport json\nimport logging\n\nfrom hpfeeds.twisted import ClientSessionService\n\nfrom twisted.internet import endpoints, ssl\nfrom twisted.internet import reactor # type: ignore\nfrom twisted.python import log\n\nimport cowrie.core.output\nfrom cowrie.core.config import CowrieConfig\n\n\nclass Output(cowrie.core.output.Output):\n \"\"\"\n Output plugin for HPFeeds\n \"\"\"\n\n channel = \"cowrie.sessions\"\n\n def start(self):\n if CowrieConfig.has_option(\"output_hpfeeds3\", \"channel\"):\n self.channel = CowrieConfig.get(\"output_hpfeeds3\", \"channel\")\n\n if CowrieConfig.has_option(\"output_hpfeeds3\", \"endpoint\"):\n endpoint = CowrieConfig.get(\"output_hpfeeds3\", \"endpoint\")\n else:\n server = CowrieConfig.get(\"output_hpfeeds3\", \"server\")\n port = CowrieConfig.getint(\"output_hpfeeds3\", \"port\")\n\n if CowrieConfig.has_option(\"output_hpfeeds3\", \"tlscert\"):\n with open(CowrieConfig.get(\"output_hpfeeds3\", \"tlscert\")) as fp:\n authority = ssl.Certificate.loadPEM(fp.read())\n options = ssl.optionsForClientTLS(server, authority)\n endpoint = endpoints.SSL4ClientEndpoint(reactor, server, port, options)\n else:\n endpoint = endpoints.HostnameEndpoint(reactor, server, port)\n\n ident = CowrieConfig.get(\"output_hpfeeds3\", \"identifier\")\n secret = CowrieConfig.get(\"output_hpfeeds3\", \"secret\")\n\n self.meta = {}\n\n self.client = ClientSessionService(endpoint, ident, secret)\n self.client.startService()\n\n def stop(self):\n self.client.stopService()\n\n def write(self, entry):\n session = entry[\"session\"]\n if entry[\"eventid\"] == \"cowrie.session.connect\":\n self.meta[session] = {\n \"session\": session,\n \"startTime\": entry[\"timestamp\"],\n \"endTime\": \"\",\n \"peerIP\": entry[\"src_ip\"],\n \"peerPort\": entry[\"src_port\"],\n \"hostIP\": entry[\"dst_ip\"],\n \"hostPort\": entry[\"dst_port\"],\n \"loggedin\": None,\n \"credentials\": [],\n \"commands\": [],\n \"unknownCommands\": [],\n \"urls\": [],\n \"version\": None,\n \"ttylog\": None,\n \"hashes\": set(),\n \"protocol\": entry[\"protocol\"],\n }\n\n elif entry[\"eventid\"] == \"cowrie.login.success\":\n u, p = entry[\"username\"], entry[\"password\"]\n self.meta[session][\"loggedin\"] = (u, p)\n\n elif entry[\"eventid\"] == \"cowrie.login.failed\":\n u, p = entry[\"username\"], entry[\"password\"]\n self.meta[session][\"credentials\"].append((u, p))\n\n elif entry[\"eventid\"] == \"cowrie.command.input\":\n c = entry[\"input\"]\n self.meta[session][\"commands\"].append(c)\n\n elif entry[\"eventid\"] == \"cowrie.command.failed\":\n uc = entry[\"input\"]\n self.meta[session][\"unknownCommands\"].append(uc)\n\n elif entry[\"eventid\"] == \"cowrie.session.file_download\":\n if \"url\" in entry:\n url = entry[\"url\"]\n self.meta[session][\"urls\"].append(url)\n self.meta[session][\"hashes\"].add(entry[\"shasum\"])\n\n elif entry[\"eventid\"] == \"cowrie.session.file_upload\":\n self.meta[session][\"hashes\"].add(entry[\"shasum\"])\n\n elif entry[\"eventid\"] == \"cowrie.client.version\":\n v = entry[\"version\"]\n self.meta[session][\"version\"] = v\n\n elif entry[\"eventid\"] == \"cowrie.log.closed\":\n # entry[\"ttylog\"]\n with open(entry[\"ttylog\"]) as ttylog:\n 
self.meta[session][\"ttylog\"] = ttylog.read().encode().hex()\n\n elif entry[\"eventid\"] == \"cowrie.session.closed\":\n meta = self.meta.pop(session, None)\n if meta:\n log.msg(\"publishing metadata to hpfeeds\", logLevel=logging.DEBUG)\n meta[\"endTime\"] = entry[\"timestamp\"]\n meta[\"hashes\"] = list(meta[\"hashes\"])\n self.client.publish(self.channel, json.dumps(meta).encode(\"utf-8\"))\n", "path": "src/cowrie/output/hpfeeds3.py"}]}
| 3,794 | 178 |
gh_patches_debug_18915
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-2123
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Links to company websites don't work without http://
On a company profile page the link to the company's website will only redirect the user if `http://` is specified when the link is added in the dashboard. For example, the link to AppearTV is written as `www.appeartv.com`, and redirects to `https://online.ntnu.no/company/60/www.appeartv.com`.
There is no hint for the user adding the link in the dashboard that http:// is required either, so I can imagine this being a growing problem.
Links to company websites don't work without http://
On a company profile page the link to the company's website will only redirect the user if `http://` is specified when the link is added in the dashboard. For example, the link to AppearTV is written as `www.appeartv.com`, and redirects to `https://online.ntnu.no/company/60/www.appeartv.com`.
There is no hint for the user adding the link in the dashboard that http:// is required either, so I can imagine this being a growing problem.
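One way to handle this at the form layer — and essentially what the golden diff further down does — is to declare the field as a `forms.URLField`, whose cleaning step prepends a scheme when none was typed on most Django versions. A standalone sketch, assuming Django is installed (the form below is illustrative, not the project's actual `CompanyForm`):

```python
import django
from django.conf import settings

if not settings.configured:
    settings.configure()          # minimal standalone setup, just enough to run a form
django.setup()

from django import forms


class CompanySiteForm(forms.Form):
    site = forms.URLField(max_length=100)   # illustrative stand-in for CompanyForm's field


form = CompanySiteForm(data={"site": "www.appeartv.com"})
print(form.is_valid())                  # True
print(form.cleaned_data.get("site"))    # usually "http://www.appeartv.com"
```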
</issue>
<code>
[start of apps/companyprofile/dashboard/forms.py]
1 # -*- coding: utf-8 -*-
2 from django.forms import ModelForm
3
4 from apps.companyprofile.models import Company
5 from apps.dashboard.widgets import widget_generator
6 from apps.gallery.widgets import SingleImageInput
7
8
9 class CompanyForm(ModelForm):
10
11 class Meta(object):
12 model = Company
13 fields = ('name', 'short_description', 'long_description', 'image', 'site', 'email_address', 'phone_number',)
14 exclude = ['old_image']
15
16 # Widget generator accepts a form widget, and a list of tuples between field name and an attribute dict
17 widgets = widget_generator(SingleImageInput, [('image', {'id': 'responsive-image-id'})])
18
[end of apps/companyprofile/dashboard/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/companyprofile/dashboard/forms.py b/apps/companyprofile/dashboard/forms.py
--- a/apps/companyprofile/dashboard/forms.py
+++ b/apps/companyprofile/dashboard/forms.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
from django.forms import ModelForm
+from django.forms.fields import URLField
from apps.companyprofile.models import Company
from apps.dashboard.widgets import widget_generator
@@ -7,10 +8,12 @@
class CompanyForm(ModelForm):
+ site = URLField(max_length=100)
class Meta(object):
model = Company
fields = ('name', 'short_description', 'long_description', 'image', 'site', 'email_address', 'phone_number',)
+
exclude = ['old_image']
# Widget generator accepts a form widget, and a list of tuples between field name and an attribute dict
|
{"golden_diff": "diff --git a/apps/companyprofile/dashboard/forms.py b/apps/companyprofile/dashboard/forms.py\n--- a/apps/companyprofile/dashboard/forms.py\n+++ b/apps/companyprofile/dashboard/forms.py\n@@ -1,5 +1,6 @@\n # -*- coding: utf-8 -*-\n from django.forms import ModelForm\n+from django.forms.fields import URLField\n \n from apps.companyprofile.models import Company\n from apps.dashboard.widgets import widget_generator\n@@ -7,10 +8,12 @@\n \n \n class CompanyForm(ModelForm):\n+ site = URLField(max_length=100)\n \n class Meta(object):\n model = Company\n fields = ('name', 'short_description', 'long_description', 'image', 'site', 'email_address', 'phone_number',)\n+\n exclude = ['old_image']\n \n # Widget generator accepts a form widget, and a list of tuples between field name and an attribute dict\n", "issue": "Links to company websites doesn't work without http:// \nOn a company profile page the link to the company's website will only redirect the user if `http://` is specified when the link is added in the dashboard. For example, the link to AppearTV is written as `www.appeartv.com`, and redirects to `https://online.ntnu.no/company/60/www.appeartv.com`.\nThere is no information to the user creating an event to add http either, so I can imagine this being a growing problem. \n\nLinks to company websites doesn't work without http:// \nOn a company profile page the link to the company's website will only redirect the user if `http://` is specified when the link is added in the dashboard. For example, the link to AppearTV is written as `www.appeartv.com`, and redirects to `https://online.ntnu.no/company/60/www.appeartv.com`.\nThere is no information to the user creating an event to add http either, so I can imagine this being a growing problem. \n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom django.forms import ModelForm\n\nfrom apps.companyprofile.models import Company\nfrom apps.dashboard.widgets import widget_generator\nfrom apps.gallery.widgets import SingleImageInput\n\n\nclass CompanyForm(ModelForm):\n\n class Meta(object):\n model = Company\n fields = ('name', 'short_description', 'long_description', 'image', 'site', 'email_address', 'phone_number',)\n exclude = ['old_image']\n\n # Widget generator accepts a form widget, and a list of tuples between field name and an attribute dict\n widgets = widget_generator(SingleImageInput, [('image', {'id': 'responsive-image-id'})])\n", "path": "apps/companyprofile/dashboard/forms.py"}]}
| 926 | 188 |
gh_patches_debug_32850
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-3316
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dvc push -q is not completely quiet, shows file transfer tqdm progress bars
Hey guys, love what you've done with DVC.
Had a quick bug that's causing me a little issue. When I use 'dvc push -q' I'm still seeing tqdm progress bars. Wouldn't be a huge issue, but I'm probably pushing 100K 250kb files. This is a local remote, so the transfer speeds are quick. I know in some of my other scripts where I use tqdm, if the iteration time is very small, the tqdm overhead of writing to std::out actually starts to hurt performance.
dvc version: 0.83.0
os: Windows 10

dvc push -q is not completely quiet, shows file transfer tqdm progress bars
Hey guys, love what you've done with DVC.
Had a quick bug that's causing me a little issue. When I use 'dvc push -q' I'm still seeing tqdm progress bars. Wouldn't be a huge issue, but I'm probably pushing 100K 250kb files. This is a local remote, so the transfer speeds are quick. I know in some of my other scripts where I use tqdm, if the iteration time is very small, the tqdm overhead of writing to std::out actually starts to hurt performance.
dvc version: 0.83.0
os: Windows 10

</issue>
<code>
[start of dvc/repo/add.py]
1 import logging
2 import os
3
4 import colorama
5
6 from . import locked
7 from ..exceptions import (
8 RecursiveAddingWhileUsingFilename,
9 OverlappingOutputPathsError,
10 )
11 from ..output.base import OutputDoesNotExistError
12 from ..progress import Tqdm
13 from ..repo.scm_context import scm_context
14 from ..stage import Stage
15 from ..utils import LARGE_DIR_SIZE
16
17 logger = logging.getLogger(__name__)
18
19
20 @locked
21 @scm_context
22 def add(repo, targets, recursive=False, no_commit=False, fname=None):
23 if recursive and fname:
24 raise RecursiveAddingWhileUsingFilename()
25
26 if isinstance(targets, str):
27 targets = [targets]
28
29 stages_list = []
30 num_targets = len(targets)
31 with Tqdm(total=num_targets, desc="Add", unit="file", leave=True) as pbar:
32 if num_targets == 1:
33 # clear unneeded top-level progress bar for single target
34 pbar.bar_format = "Adding..."
35 pbar.refresh()
36 for target in targets:
37 sub_targets = _find_all_targets(repo, target, recursive)
38 pbar.total += len(sub_targets) - 1
39
40 if os.path.isdir(target) and len(sub_targets) > LARGE_DIR_SIZE:
41 logger.warning(
42 "You are adding a large directory '{target}' recursively,"
43 " consider tracking it as a whole instead.\n"
44 "{purple}HINT:{nc} Remove the generated DVC-file and then"
45 " run `{cyan}dvc add {target}{nc}`".format(
46 purple=colorama.Fore.MAGENTA,
47 cyan=colorama.Fore.CYAN,
48 nc=colorama.Style.RESET_ALL,
49 target=target,
50 )
51 )
52
53 stages = _create_stages(repo, sub_targets, fname, pbar=pbar)
54
55 try:
56 repo.check_modified_graph(stages)
57 except OverlappingOutputPathsError as exc:
58 msg = (
59 "Cannot add '{out}', because it is overlapping with other "
60 "DVC tracked output: '{parent}'.\n"
61 "To include '{out}' in '{parent}', run "
62 "'dvc commit {parent_stage}'"
63 ).format(
64 out=exc.overlapping_out.path_info,
65 parent=exc.parent.path_info,
66 parent_stage=exc.parent.stage.relpath,
67 )
68 raise OverlappingOutputPathsError(
69 exc.parent, exc.overlapping_out, msg
70 )
71
72 with Tqdm(
73 total=len(stages),
74 desc="Processing",
75 unit="file",
76 disable=True if len(stages) == 1 else None,
77 ) as pbar_stages:
78 for stage in stages:
79 try:
80 stage.save()
81 except OutputDoesNotExistError:
82 pbar.n -= 1
83 raise
84
85 if not no_commit:
86 stage.commit()
87
88 stage.dump()
89 pbar_stages.update()
90
91 stages_list += stages
92
93 if num_targets == 1: # restore bar format for stats
94 pbar.bar_format = pbar.BAR_FMT_DEFAULT
95
96 return stages_list
97
98
99 def _find_all_targets(repo, target, recursive):
100 if os.path.isdir(target) and recursive:
101 return [
102 fname
103 for fname in Tqdm(
104 repo.tree.walk_files(target),
105 desc="Searching " + target,
106 bar_format=Tqdm.BAR_FMT_NOTOTAL,
107 unit="file",
108 )
109 if not repo.is_dvc_internal(fname)
110 if not Stage.is_stage_file(fname)
111 if not repo.scm.belongs_to_scm(fname)
112 if not repo.scm.is_tracked(fname)
113 ]
114 return [target]
115
116
117 def _create_stages(repo, targets, fname, pbar=None):
118 stages = []
119
120 for out in Tqdm(
121 targets,
122 desc="Creating DVC-files",
123 disable=True if len(targets) < LARGE_DIR_SIZE else None,
124 unit="file",
125 ):
126 stage = Stage.create(
127 repo, outs=[out], accompany_outs=True, fname=fname
128 )
129
130 if not stage:
131 if pbar is not None:
132 pbar.total -= 1
133 continue
134
135 stages.append(stage)
136 if pbar is not None:
137 pbar.update_desc(out)
138
139 return stages
140
[end of dvc/repo/add.py]
[start of dvc/progress.py]
1 """Manages progress bars for DVC repo."""
2
3 import logging
4 import sys
5 from threading import RLock
6
7 from tqdm import tqdm
8
9 from dvc.utils import env2bool
10
11 logger = logging.getLogger(__name__)
12 tqdm.set_lock(RLock())
13
14
15 class Tqdm(tqdm):
16 """
17 maximum-compatibility tqdm-based progressbars
18 """
19
20 BAR_FMT_DEFAULT = (
21 "{percentage:3.0f}% {desc}|{bar}|"
22 "{n_fmt}/{total_fmt}"
23 " [{elapsed}<{remaining}, {rate_fmt:>11}{postfix}]"
24 )
25 # nested bars should have fixed bar widths to align nicely
26 BAR_FMT_DEFAULT_NESTED = (
27 "{percentage:3.0f}%|{bar:10}|{desc:{ncols_desc}.{ncols_desc}}"
28 "{n_fmt}/{total_fmt}"
29 " [{elapsed}<{remaining}, {rate_fmt:>11}{postfix}]"
30 )
31 BAR_FMT_NOTOTAL = (
32 "{desc:{ncols_desc}.{ncols_desc}}{n_fmt}"
33 " [{elapsed}, {rate_fmt:>11}{postfix}]"
34 )
35 BYTES_DEFAULTS = dict(
36 unit="B", unit_scale=True, unit_divisor=1024, miniters=1
37 )
38
39 def __init__(
40 self,
41 iterable=None,
42 disable=None,
43 level=logging.ERROR,
44 desc=None,
45 leave=False,
46 bar_format=None,
47 bytes=False, # pylint: disable=W0622
48 file=None,
49 total=None,
50 **kwargs
51 ):
52 """
53 bytes : shortcut for
54 `unit='B', unit_scale=True, unit_divisor=1024, miniters=1`
55 desc : persists after `close()`
56 level : effective logging level for determining `disable`;
57 used only if `disable` is unspecified
58 disable : If (default: None), will be determined by logging level.
59 May be overridden to `True` due to non-TTY status.
60 Skip override by specifying env var `DVC_IGNORE_ISATTY`.
61 kwargs : anything accepted by `tqdm.tqdm()`
62 """
63 kwargs = kwargs.copy()
64 if bytes:
65 kwargs = {**self.BYTES_DEFAULTS, **kwargs}
66 else:
67 kwargs.setdefault("unit_scale", total > 999 if total else True)
68 if file is None:
69 file = sys.stderr
70 self.desc_persist = desc
71 # auto-disable based on `logger.level`
72 if disable is None:
73 disable = logger.getEffectiveLevel() > level
74 # auto-disable based on TTY
75 if (
76 not disable
77 and not env2bool("DVC_IGNORE_ISATTY")
78 and hasattr(file, "isatty")
79 ):
80 disable = not file.isatty()
81 super().__init__(
82 iterable=iterable,
83 disable=disable,
84 leave=leave,
85 desc=desc,
86 bar_format="!",
87 lock_args=(False,),
88 total=total,
89 **kwargs
90 )
91 if bar_format is None:
92 if self.__len__():
93 self.bar_format = (
94 self.BAR_FMT_DEFAULT_NESTED
95 if self.pos
96 else self.BAR_FMT_DEFAULT
97 )
98 else:
99 self.bar_format = self.BAR_FMT_NOTOTAL
100 else:
101 self.bar_format = bar_format
102 self.refresh()
103
104 def update_desc(self, desc, n=1):
105 """
106 Calls `set_description_str(desc)` and `update(n)`
107 """
108 self.set_description_str(desc, refresh=False)
109 self.update(n)
110
111 def update_to(self, current, total=None):
112 if total:
113 self.total = total # pylint: disable=W0613,W0201
114 self.update(current - self.n)
115
116 def close(self):
117 if self.desc_persist is not None:
118 self.set_description_str(self.desc_persist, refresh=False)
119 # unknown/zero ETA
120 self.bar_format = self.bar_format.replace("<{remaining}", "")
121 # remove completed bar
122 self.bar_format = self.bar_format.replace("|{bar:10}|", " ")
123 super().close()
124
125 @property
126 def format_dict(self):
127 """inject `ncols_desc` to fill the display width (`ncols`)"""
128 d = super().format_dict
129 ncols = d["ncols"] or 80
130 ncols_desc = ncols - len(self.format_meter(ncols_desc=1, **d)) + 1
131 ncols_desc = max(ncols_desc, 0)
132 if ncols_desc:
133 d["ncols_desc"] = ncols_desc
134 else:
135 # work-around for zero-width description
136 d["ncols_desc"] = 1
137 d["prefix"] = ""
138 return d
139
[end of dvc/progress.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/progress.py b/dvc/progress.py
--- a/dvc/progress.py
+++ b/dvc/progress.py
@@ -55,7 +55,8 @@
desc : persists after `close()`
level : effective logging level for determining `disable`;
used only if `disable` is unspecified
- disable : If (default: None), will be determined by logging level.
+ disable : If (default: None) or False,
+ will be determined by logging level.
May be overridden to `True` due to non-TTY status.
Skip override by specifying env var `DVC_IGNORE_ISATTY`.
kwargs : anything accepted by `tqdm.tqdm()`
@@ -69,7 +70,7 @@
file = sys.stderr
self.desc_persist = desc
# auto-disable based on `logger.level`
- if disable is None:
+ if not disable:
disable = logger.getEffectiveLevel() > level
# auto-disable based on TTY
if (
diff --git a/dvc/repo/add.py b/dvc/repo/add.py
--- a/dvc/repo/add.py
+++ b/dvc/repo/add.py
@@ -55,7 +55,7 @@
total=len(stages),
desc="Processing",
unit="file",
- disable=True if len(stages) == 1 else None,
+ disable=len(stages) == 1,
) as pbar_stages:
for stage in stages:
try:
@@ -102,7 +102,7 @@
for out in Tqdm(
targets,
desc="Creating DVC-files",
- disable=True if len(targets) < LARGE_DIR_SIZE else None,
+ disable=len(targets) < LARGE_DIR_SIZE,
unit="file",
):
stage = Stage.create(repo, outs=[out], add=True, fname=fname)
|
{"golden_diff": "diff --git a/dvc/progress.py b/dvc/progress.py\n--- a/dvc/progress.py\n+++ b/dvc/progress.py\n@@ -55,7 +55,8 @@\n desc : persists after `close()`\n level : effective logging level for determining `disable`;\n used only if `disable` is unspecified\n- disable : If (default: None), will be determined by logging level.\n+ disable : If (default: None) or False,\n+ will be determined by logging level.\n May be overridden to `True` due to non-TTY status.\n Skip override by specifying env var `DVC_IGNORE_ISATTY`.\n kwargs : anything accepted by `tqdm.tqdm()`\n@@ -69,7 +70,7 @@\n file = sys.stderr\n self.desc_persist = desc\n # auto-disable based on `logger.level`\n- if disable is None:\n+ if not disable:\n disable = logger.getEffectiveLevel() > level\n # auto-disable based on TTY\n if (\ndiff --git a/dvc/repo/add.py b/dvc/repo/add.py\n--- a/dvc/repo/add.py\n+++ b/dvc/repo/add.py\n@@ -55,7 +55,7 @@\n total=len(stages),\n desc=\"Processing\",\n unit=\"file\",\n- disable=True if len(stages) == 1 else None,\n+ disable=len(stages) == 1,\n ) as pbar_stages:\n for stage in stages:\n try:\n@@ -102,7 +102,7 @@\n for out in Tqdm(\n targets,\n desc=\"Creating DVC-files\",\n- disable=True if len(targets) < LARGE_DIR_SIZE else None,\n+ disable=len(targets) < LARGE_DIR_SIZE,\n unit=\"file\",\n ):\n stage = Stage.create(repo, outs=[out], add=True, fname=fname)\n", "issue": "dvc push -q is not completely quiet, shows file transfer tqdm progress bars\nHey guys, love what you've done with DVC.\r\n\r\nHad a quick bug that's causing me a little issue. When I use 'dvc push -q' I'm still seeing tqdm progress bars. Wouldn't be a huge issue, but I'm probably pushing 100K 250kb files. This is a local remote, so the transfer speeds are quick. I know in some of my other scripts where I use tqdm, if the iteration time is very small, the tqdm overhead of writing to std::out actually starts to contribute to performance.\r\n\r\ndvc version: 0.83.0\r\nos: Windows 10\r\n\r\n\r\n\r\n\r\n\ndvc push -q is not completely quiet, shows file transfer tqdm progress bars\nHey guys, love what you've done with DVC.\r\n\r\nHad a quick bug that's causing me a little issue. When I use 'dvc push -q' I'm still seeing tqdm progress bars. Wouldn't be a huge issue, but I'm probably pushing 100K 250kb files. This is a local remote, so the transfer speeds are quick. I know in some of my other scripts where I use tqdm, if the iteration time is very small, the tqdm overhead of writing to std::out actually starts to contribute to performance.\r\n\r\ndvc version: 0.83.0\r\nos: Windows 10\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import logging\nimport os\n\nimport colorama\n\nfrom . 
import locked\nfrom ..exceptions import (\n RecursiveAddingWhileUsingFilename,\n OverlappingOutputPathsError,\n)\nfrom ..output.base import OutputDoesNotExistError\nfrom ..progress import Tqdm\nfrom ..repo.scm_context import scm_context\nfrom ..stage import Stage\nfrom ..utils import LARGE_DIR_SIZE\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\n@scm_context\ndef add(repo, targets, recursive=False, no_commit=False, fname=None):\n if recursive and fname:\n raise RecursiveAddingWhileUsingFilename()\n\n if isinstance(targets, str):\n targets = [targets]\n\n stages_list = []\n num_targets = len(targets)\n with Tqdm(total=num_targets, desc=\"Add\", unit=\"file\", leave=True) as pbar:\n if num_targets == 1:\n # clear unneeded top-level progress bar for single target\n pbar.bar_format = \"Adding...\"\n pbar.refresh()\n for target in targets:\n sub_targets = _find_all_targets(repo, target, recursive)\n pbar.total += len(sub_targets) - 1\n\n if os.path.isdir(target) and len(sub_targets) > LARGE_DIR_SIZE:\n logger.warning(\n \"You are adding a large directory '{target}' recursively,\"\n \" consider tracking it as a whole instead.\\n\"\n \"{purple}HINT:{nc} Remove the generated DVC-file and then\"\n \" run `{cyan}dvc add {target}{nc}`\".format(\n purple=colorama.Fore.MAGENTA,\n cyan=colorama.Fore.CYAN,\n nc=colorama.Style.RESET_ALL,\n target=target,\n )\n )\n\n stages = _create_stages(repo, sub_targets, fname, pbar=pbar)\n\n try:\n repo.check_modified_graph(stages)\n except OverlappingOutputPathsError as exc:\n msg = (\n \"Cannot add '{out}', because it is overlapping with other \"\n \"DVC tracked output: '{parent}'.\\n\"\n \"To include '{out}' in '{parent}', run \"\n \"'dvc commit {parent_stage}'\"\n ).format(\n out=exc.overlapping_out.path_info,\n parent=exc.parent.path_info,\n parent_stage=exc.parent.stage.relpath,\n )\n raise OverlappingOutputPathsError(\n exc.parent, exc.overlapping_out, msg\n )\n\n with Tqdm(\n total=len(stages),\n desc=\"Processing\",\n unit=\"file\",\n disable=True if len(stages) == 1 else None,\n ) as pbar_stages:\n for stage in stages:\n try:\n stage.save()\n except OutputDoesNotExistError:\n pbar.n -= 1\n raise\n\n if not no_commit:\n stage.commit()\n\n stage.dump()\n pbar_stages.update()\n\n stages_list += stages\n\n if num_targets == 1: # restore bar format for stats\n pbar.bar_format = pbar.BAR_FMT_DEFAULT\n\n return stages_list\n\n\ndef _find_all_targets(repo, target, recursive):\n if os.path.isdir(target) and recursive:\n return [\n fname\n for fname in Tqdm(\n repo.tree.walk_files(target),\n desc=\"Searching \" + target,\n bar_format=Tqdm.BAR_FMT_NOTOTAL,\n unit=\"file\",\n )\n if not repo.is_dvc_internal(fname)\n if not Stage.is_stage_file(fname)\n if not repo.scm.belongs_to_scm(fname)\n if not repo.scm.is_tracked(fname)\n ]\n return [target]\n\n\ndef _create_stages(repo, targets, fname, pbar=None):\n stages = []\n\n for out in Tqdm(\n targets,\n desc=\"Creating DVC-files\",\n disable=True if len(targets) < LARGE_DIR_SIZE else None,\n unit=\"file\",\n ):\n stage = Stage.create(\n repo, outs=[out], accompany_outs=True, fname=fname\n )\n\n if not stage:\n if pbar is not None:\n pbar.total -= 1\n continue\n\n stages.append(stage)\n if pbar is not None:\n pbar.update_desc(out)\n\n return stages\n", "path": "dvc/repo/add.py"}, {"content": "\"\"\"Manages progress bars for DVC repo.\"\"\"\n\nimport logging\nimport sys\nfrom threading import RLock\n\nfrom tqdm import tqdm\n\nfrom dvc.utils import env2bool\n\nlogger = 
logging.getLogger(__name__)\ntqdm.set_lock(RLock())\n\n\nclass Tqdm(tqdm):\n \"\"\"\n maximum-compatibility tqdm-based progressbars\n \"\"\"\n\n BAR_FMT_DEFAULT = (\n \"{percentage:3.0f}% {desc}|{bar}|\"\n \"{n_fmt}/{total_fmt}\"\n \" [{elapsed}<{remaining}, {rate_fmt:>11}{postfix}]\"\n )\n # nested bars should have fixed bar widths to align nicely\n BAR_FMT_DEFAULT_NESTED = (\n \"{percentage:3.0f}%|{bar:10}|{desc:{ncols_desc}.{ncols_desc}}\"\n \"{n_fmt}/{total_fmt}\"\n \" [{elapsed}<{remaining}, {rate_fmt:>11}{postfix}]\"\n )\n BAR_FMT_NOTOTAL = (\n \"{desc:{ncols_desc}.{ncols_desc}}{n_fmt}\"\n \" [{elapsed}, {rate_fmt:>11}{postfix}]\"\n )\n BYTES_DEFAULTS = dict(\n unit=\"B\", unit_scale=True, unit_divisor=1024, miniters=1\n )\n\n def __init__(\n self,\n iterable=None,\n disable=None,\n level=logging.ERROR,\n desc=None,\n leave=False,\n bar_format=None,\n bytes=False, # pylint: disable=W0622\n file=None,\n total=None,\n **kwargs\n ):\n \"\"\"\n bytes : shortcut for\n `unit='B', unit_scale=True, unit_divisor=1024, miniters=1`\n desc : persists after `close()`\n level : effective logging level for determining `disable`;\n used only if `disable` is unspecified\n disable : If (default: None), will be determined by logging level.\n May be overridden to `True` due to non-TTY status.\n Skip override by specifying env var `DVC_IGNORE_ISATTY`.\n kwargs : anything accepted by `tqdm.tqdm()`\n \"\"\"\n kwargs = kwargs.copy()\n if bytes:\n kwargs = {**self.BYTES_DEFAULTS, **kwargs}\n else:\n kwargs.setdefault(\"unit_scale\", total > 999 if total else True)\n if file is None:\n file = sys.stderr\n self.desc_persist = desc\n # auto-disable based on `logger.level`\n if disable is None:\n disable = logger.getEffectiveLevel() > level\n # auto-disable based on TTY\n if (\n not disable\n and not env2bool(\"DVC_IGNORE_ISATTY\")\n and hasattr(file, \"isatty\")\n ):\n disable = not file.isatty()\n super().__init__(\n iterable=iterable,\n disable=disable,\n leave=leave,\n desc=desc,\n bar_format=\"!\",\n lock_args=(False,),\n total=total,\n **kwargs\n )\n if bar_format is None:\n if self.__len__():\n self.bar_format = (\n self.BAR_FMT_DEFAULT_NESTED\n if self.pos\n else self.BAR_FMT_DEFAULT\n )\n else:\n self.bar_format = self.BAR_FMT_NOTOTAL\n else:\n self.bar_format = bar_format\n self.refresh()\n\n def update_desc(self, desc, n=1):\n \"\"\"\n Calls `set_description_str(desc)` and `update(n)`\n \"\"\"\n self.set_description_str(desc, refresh=False)\n self.update(n)\n\n def update_to(self, current, total=None):\n if total:\n self.total = total # pylint: disable=W0613,W0201\n self.update(current - self.n)\n\n def close(self):\n if self.desc_persist is not None:\n self.set_description_str(self.desc_persist, refresh=False)\n # unknown/zero ETA\n self.bar_format = self.bar_format.replace(\"<{remaining}\", \"\")\n # remove completed bar\n self.bar_format = self.bar_format.replace(\"|{bar:10}|\", \" \")\n super().close()\n\n @property\n def format_dict(self):\n \"\"\"inject `ncols_desc` to fill the display width (`ncols`)\"\"\"\n d = super().format_dict\n ncols = d[\"ncols\"] or 80\n ncols_desc = ncols - len(self.format_meter(ncols_desc=1, **d)) + 1\n ncols_desc = max(ncols_desc, 0)\n if ncols_desc:\n d[\"ncols_desc\"] = ncols_desc\n else:\n # work-around for zero-width description\n d[\"ncols_desc\"] = 1\n d[\"prefix\"] = \"\"\n return d\n", "path": "dvc/progress.py"}]}
| 3,559 | 426 |
gh_patches_debug_11671
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-14461
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove the `clearcache` management command
### Proposed Changes
Remove the `clearcache` management command (from the `core` app), and omit it from the upgrade script.
### Justification
~This command was introduced back when we were experimenting with query caching, and is no longer needed.~ I was mistaken; it was actually introduced under #9122 to provide a mechanism for clearing the cached API spec. However, this is also no longer used since we moved to `drf-spectacular` (see #9608).
The Django cache is currently used only for discrete caching operations, including:
* Config revision tracking
* Recording the most recent release
* Caching RSS feed content (the RSSFeedWidget)
There has already been at least one bug related to this function (see #14182). Additionally, plugins may utilize the cache for other purposes, and we cannot make the assumption that it is safe to clear other cached data.
### Impact
Any mechanisms within NetBox or a plugin which employ caching will be responsible for their own cleanup, where applicable.
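For illustration, such per-feature cleanup could use Django's low-level cache API along these lines (a minimal sketch; `config_version` is the key used by the command shown below, while `latest_release` is a hypothetical key name):

```python
from django.core.cache import cache

# Delete only the keys this feature owns, rather than wiping the whole cache
cache.delete_many([
    'config_version',   # config revision tracking (see cache.get('config_version') below)
    'latest_release',   # hypothetical key for the most recent release check
])
```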
</issue>
<code>
[start of netbox/core/management/commands/clearcache.py]
1 from django.core.cache import cache
2 from django.core.management.base import BaseCommand
3
4 from core.models import ConfigRevision
5
6
7 class Command(BaseCommand):
8 """Command to clear the entire cache."""
9 help = 'Clears the cache.'
10
11 def handle(self, *args, **kwargs):
12 # Fetch the current config revision from the cache
13 config_version = cache.get('config_version')
14 # Clear the cache
15 cache.clear()
16 self.stdout.write('Cache has been cleared.', ending="\n")
17 if config_version:
18 # Activate the current config revision
19 ConfigRevision.objects.get(id=config_version).activate()
20 self.stdout.write(f'Config revision ({config_version}) has been restored.', ending="\n")
21
[end of netbox/core/management/commands/clearcache.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/netbox/core/management/commands/clearcache.py b/netbox/core/management/commands/clearcache.py
deleted file mode 100644
--- a/netbox/core/management/commands/clearcache.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from django.core.cache import cache
-from django.core.management.base import BaseCommand
-
-from core.models import ConfigRevision
-
-
-class Command(BaseCommand):
- """Command to clear the entire cache."""
- help = 'Clears the cache.'
-
- def handle(self, *args, **kwargs):
- # Fetch the current config revision from the cache
- config_version = cache.get('config_version')
- # Clear the cache
- cache.clear()
- self.stdout.write('Cache has been cleared.', ending="\n")
- if config_version:
- # Activate the current config revision
- ConfigRevision.objects.get(id=config_version).activate()
- self.stdout.write(f'Config revision ({config_version}) has been restored.', ending="\n")
|
{"golden_diff": "diff --git a/netbox/core/management/commands/clearcache.py b/netbox/core/management/commands/clearcache.py\ndeleted file mode 100644\n--- a/netbox/core/management/commands/clearcache.py\n+++ /dev/null\n@@ -1,20 +0,0 @@\n-from django.core.cache import cache\n-from django.core.management.base import BaseCommand\n-\n-from core.models import ConfigRevision\n-\n-\n-class Command(BaseCommand):\n- \"\"\"Command to clear the entire cache.\"\"\"\n- help = 'Clears the cache.'\n-\n- def handle(self, *args, **kwargs):\n- # Fetch the current config revision from the cache\n- config_version = cache.get('config_version')\n- # Clear the cache\n- cache.clear()\n- self.stdout.write('Cache has been cleared.', ending=\"\\n\")\n- if config_version:\n- # Activate the current config revision\n- ConfigRevision.objects.get(id=config_version).activate()\n- self.stdout.write(f'Config revision ({config_version}) has been restored.', ending=\"\\n\")\n", "issue": "Remove the `clearcache` management command\n### Proposed Changes\r\n\r\nRemove the `clearcache` management command (from the `core` app), and omit it from the upgrade script.\r\n\r\n### Justification\r\n\r\n~This command was introduced back when we were experimenting with query caching, and is no longer needed.~ I was mistaken; it was actually introduced under #9122 to provide a mechanism for clearing the cached API spec. However, this is also no longer used since we moved to `drf-spectacular` (see #9608).\r\n\r\nThe Django cache is currently used only for discrete caching operations, including:\r\n\r\n* Config revision tracking\r\n* Recording the most recent release\r\n* Caching RSS feed content (the RSSFeedWidget)\r\n\r\nThere has already been at least one bug related to this function (see #14182). Additionally, plugins may utilize the cache for other purposes, and we cannot make the assumption that it is safe to clear other cached data.\r\n\r\n### Impact\r\n\r\nAny mechanisms within NetBox or a plugin which employ caching will be responsible for their own cleanup, where applicable.\n", "before_files": [{"content": "from django.core.cache import cache\nfrom django.core.management.base import BaseCommand\n\nfrom core.models import ConfigRevision\n\n\nclass Command(BaseCommand):\n \"\"\"Command to clear the entire cache.\"\"\"\n help = 'Clears the cache.'\n\n def handle(self, *args, **kwargs):\n # Fetch the current config revision from the cache\n config_version = cache.get('config_version')\n # Clear the cache\n cache.clear()\n self.stdout.write('Cache has been cleared.', ending=\"\\n\")\n if config_version:\n # Activate the current config revision\n ConfigRevision.objects.get(id=config_version).activate()\n self.stdout.write(f'Config revision ({config_version}) has been restored.', ending=\"\\n\")\n", "path": "netbox/core/management/commands/clearcache.py"}]}
| 956 | 231 |
gh_patches_debug_54036
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-3190
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 0.1.3
## 2023-08-16
```[tasklist]
### Tasks
- [x] Cut 0.1.3 release branch, freeze code
- [x] Update version number in all places in the new branch
- [x] Make an image from the branch with tag `0.1.3`, push to Dockerhub
- [x] Test installation with the new image
- [x] Test upgrade
- [x] Smoke testing application
- [x] Stability of the newly released items
```
</issue>
<code>
[start of mathesar/__init__.py]
1 default_app_config = 'mathesar.apps.MathesarConfig'
2
3 __version__ = "0.1.2"
4
[end of mathesar/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mathesar/__init__.py b/mathesar/__init__.py
--- a/mathesar/__init__.py
+++ b/mathesar/__init__.py
@@ -1,3 +1,3 @@
default_app_config = 'mathesar.apps.MathesarConfig'
-__version__ = "0.1.2"
+__version__ = "0.1.3"
|
{"golden_diff": "diff --git a/mathesar/__init__.py b/mathesar/__init__.py\n--- a/mathesar/__init__.py\n+++ b/mathesar/__init__.py\n@@ -1,3 +1,3 @@\n default_app_config = 'mathesar.apps.MathesarConfig'\n \n-__version__ = \"0.1.2\"\n+__version__ = \"0.1.3\"\n", "issue": "Release 0.1.3\n## 2023-08-16\r\n```[tasklist]\r\n### Tasks\r\n- [x] Cut 0.1.3 release branch, freeze code\r\n- [x] Update version number in all places in the new branch\r\n- [x] Make an image from the branch with tag `0.1.3`, push to Dockerhub\r\n- [x] Test installation with the new image\r\n- [x] Test upgrade\r\n- [x] Smoke testing application\r\n- [x] Stability of the newly released items\r\n```\r\n\n", "before_files": [{"content": "default_app_config = 'mathesar.apps.MathesarConfig'\n\n__version__ = \"0.1.2\"\n", "path": "mathesar/__init__.py"}]}
| 690 | 83 |
gh_patches_debug_10502
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-158
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add custom (meta) data on buckets and collections
For some use-cases, it might become useful to be able to store some custom attributes in buckets or collections (e.g. metadata like application version, contact email or whatever).
Currently both Collection and Bucket resources do not define extra fields in their schema, and Cliquet drops unknown fields if not explicitly allowed.
We can either:
- Allow unknown fields in collection and buckets schemas
- Add a specific root-level field (alongside `data` and `permissions`)
- Add a specific field (called `meta` for example) in the schema that could receive anything.
The advantage of the latter is that custom fields do not interfere with anything in the protocol, and are trivial to implement. The inconvenience is having to put `{data: {metadata: {email: "[email protected]"}}}` in the payload.
Thoughts ?
</issue>
<code>
[start of kinto/views/collections.py]
1 import colander
2 import jsonschema
3 from cliquet import resource
4 from jsonschema import exceptions as jsonschema_exceptions
5
6 from kinto.views import NameGenerator, object_exists_or_404
7
8
9 class JSONSchemaMapping(colander.SchemaNode):
10 def schema_type(self, **kw):
11 return colander.Mapping(unknown='preserve')
12
13 def deserialize(self, cstruct=colander.null):
14 # Start by deserializing a simple mapping.
15 validated = super(JSONSchemaMapping, self).deserialize(cstruct)
16
17 # In case it is optional in parent schema.
18 if not validated or validated in (colander.null, colander.drop):
19 return validated
20
21 try:
22 jsonschema.Draft4Validator.check_schema(validated)
23 except jsonschema_exceptions.SchemaError as e:
24 self.raise_invalid(e.path.pop() + e.message)
25 return validated
26
27
28 class CollectionSchema(resource.ResourceSchema):
29 schema = JSONSchemaMapping(missing=colander.drop)
30
31
32 @resource.register(name='collection',
33 collection_methods=('GET',),
34 collection_path='/buckets/{{bucket_id}}/collections',
35 record_path='/buckets/{{bucket_id}}/collections/{{id}}')
36 class Collection(resource.ProtectedResource):
37 mapping = CollectionSchema()
38 permissions = ('read', 'write', 'record:create')
39
40 def __init__(self, *args, **kwargs):
41 super(Collection, self).__init__(*args, **kwargs)
42
43 bucket_id = self.request.matchdict['bucket_id']
44 object_exists_or_404(self.request,
45 collection_id='bucket',
46 object_id=bucket_id)
47
48 self.collection.id_generator = NameGenerator()
49
50 def get_parent_id(self, request):
51 bucket_id = request.matchdict['bucket_id']
52 parent_id = '/buckets/%s' % bucket_id
53 return parent_id
54
55 def delete(self):
56 result = super(Collection, self).delete()
57
58 # Delete records.
59 storage = self.collection.storage
60 parent_id = '%s/collections/%s' % (self.collection.parent_id,
61 self.record_id)
62 storage.delete_all(collection_id='record',
63 parent_id=parent_id,
64 with_deleted=False)
65 storage.purge_deleted(collection_id='record', parent_id=parent_id)
66
67 return result
68
[end of kinto/views/collections.py]
[start of kinto/views/records.py]
1 import jsonschema
2 from cliquet import resource, schema
3 from cliquet.errors import raise_invalid
4 from jsonschema import exceptions as jsonschema_exceptions
5
6 from kinto.views import object_exists_or_404
7
8
9 class RecordSchema(schema.ResourceSchema):
10 class Options():
11 preserve_unknown = True
12
13
14 _parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'
15
16
17 @resource.register(name='record',
18 collection_path=_parent_path + '/records',
19 record_path=_parent_path + '/records/{{id}}')
20 class Record(resource.ProtectedResource):
21
22 mapping = RecordSchema()
23 schema_field = 'schema'
24
25 def __init__(self, *args, **kwargs):
26 super(Record, self).__init__(*args, **kwargs)
27
28 bucket_id = self.request.matchdict['bucket_id']
29 object_exists_or_404(self.request,
30 collection_id='bucket',
31 object_id=bucket_id)
32
33 collection_id = self.request.matchdict['collection_id']
34 parent_id = '/buckets/%s' % bucket_id
35 self._collection = object_exists_or_404(self.request,
36 collection_id='collection',
37 parent_id=parent_id,
38 object_id=collection_id)
39
40 def get_parent_id(self, request):
41 bucket_id = request.matchdict['bucket_id']
42 collection_id = request.matchdict['collection_id']
43 return '/buckets/%s/collections/%s' % (bucket_id, collection_id)
44
45 def is_known_field(self, field_name):
46 """Without schema, any field is considered as known."""
47 return True
48
49 def process_record(self, new, old=None):
50 """Validate records against collection schema, if any."""
51 schema = self._collection.get('schema')
52 if not schema:
53 return new
54
55 collection_timestamp = self._collection[self.collection.modified_field]
56
57 try:
58 jsonschema.validate(new, schema)
59 new[self.schema_field] = collection_timestamp
60 except jsonschema_exceptions.ValidationError as e:
61 field = e.path.pop() if e.path else e.validator_value.pop()
62 raise_invalid(self.request, name=field, description=e.message)
63
64 return new
65
[end of kinto/views/records.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/views/collections.py b/kinto/views/collections.py
--- a/kinto/views/collections.py
+++ b/kinto/views/collections.py
@@ -28,6 +28,9 @@
class CollectionSchema(resource.ResourceSchema):
schema = JSONSchemaMapping(missing=colander.drop)
+ class Options:
+ preserve_unknown = True
+
@resource.register(name='collection',
collection_methods=('GET',),
diff --git a/kinto/views/records.py b/kinto/views/records.py
--- a/kinto/views/records.py
+++ b/kinto/views/records.py
@@ -7,7 +7,7 @@
class RecordSchema(schema.ResourceSchema):
- class Options():
+ class Options:
preserve_unknown = True
|
{"golden_diff": "diff --git a/kinto/views/collections.py b/kinto/views/collections.py\n--- a/kinto/views/collections.py\n+++ b/kinto/views/collections.py\n@@ -28,6 +28,9 @@\n class CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n \n+ class Options:\n+ preserve_unknown = True\n+\n \n @resource.register(name='collection',\n collection_methods=('GET',),\ndiff --git a/kinto/views/records.py b/kinto/views/records.py\n--- a/kinto/views/records.py\n+++ b/kinto/views/records.py\n@@ -7,7 +7,7 @@\n \n \n class RecordSchema(schema.ResourceSchema):\n- class Options():\n+ class Options:\n preserve_unknown = True\n", "issue": "Add custom (meta) data on buckets and collections \nFor some use-cases, it might become useful to be able to store some custom attributes in buckets or collections (e.g. metadata like application version, contact email or whatever).\n\nCurrently both Collection and Bucket resources do not define extra fields in their schema, and Cliquet drops unknown fields if not explicitly allowed.\n\nWe can either:\n- Allow unknown fields in collection and buckets schemas\n- Add a specific root level field (along `data` and `permissions`)\n- Add a specific field (called `meta` for example) in the schema that could receive anything.\n\nThe advantage of the latter is that custom fields do not interfere with anything in the protocol, and are trivial to implement. The inconvenient is having to put `{data: {metadata: {email: \"[email protected]\"}}` in the payload.\n\nThoughts ?\n\n", "before_files": [{"content": "import colander\nimport jsonschema\nfrom cliquet import resource\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import NameGenerator, object_exists_or_404\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown='preserve')\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super(JSONSchemaMapping, self).deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n\n try:\n jsonschema.Draft4Validator.check_schema(validated)\n except jsonschema_exceptions.SchemaError as e:\n self.raise_invalid(e.path.pop() + e.message)\n return validated\n\n\nclass CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n\n\[email protected](name='collection',\n collection_methods=('GET',),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\nclass Collection(resource.ProtectedResource):\n mapping = CollectionSchema()\n permissions = ('read', 'write', 'record:create')\n\n def __init__(self, *args, **kwargs):\n super(Collection, self).__init__(*args, **kwargs)\n\n bucket_id = self.request.matchdict['bucket_id']\n object_exists_or_404(self.request,\n collection_id='bucket',\n object_id=bucket_id)\n\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = '/buckets/%s' % bucket_id\n return parent_id\n\n def delete(self):\n result = super(Collection, self).delete()\n\n # Delete records.\n storage = self.collection.storage\n parent_id = '%s/collections/%s' % (self.collection.parent_id,\n self.record_id)\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', 
parent_id=parent_id)\n\n return result\n", "path": "kinto/views/collections.py"}, {"content": "import jsonschema\nfrom cliquet import resource, schema\nfrom cliquet.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import object_exists_or_404\n\n\nclass RecordSchema(schema.ResourceSchema):\n class Options():\n preserve_unknown = True\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ProtectedResource):\n\n mapping = RecordSchema()\n schema_field = 'schema'\n\n def __init__(self, *args, **kwargs):\n super(Record, self).__init__(*args, **kwargs)\n\n bucket_id = self.request.matchdict['bucket_id']\n object_exists_or_404(self.request,\n collection_id='bucket',\n object_id=bucket_id)\n\n collection_id = self.request.matchdict['collection_id']\n parent_id = '/buckets/%s' % bucket_id\n self._collection = object_exists_or_404(self.request,\n collection_id='collection',\n parent_id=parent_id,\n object_id=collection_id)\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n collection_id = request.matchdict['collection_id']\n return '/buckets/%s/collections/%s' % (bucket_id, collection_id)\n\n def is_known_field(self, field_name):\n \"\"\"Without schema, any field is considered as known.\"\"\"\n return True\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n schema = self._collection.get('schema')\n if not schema:\n return new\n\n collection_timestamp = self._collection[self.collection.modified_field]\n\n try:\n jsonschema.validate(new, schema)\n new[self.schema_field] = collection_timestamp\n except jsonschema_exceptions.ValidationError as e:\n field = e.path.pop() if e.path else e.validator_value.pop()\n raise_invalid(self.request, name=field, description=e.message)\n\n return new\n", "path": "kinto/views/records.py"}]}
| 1,924 | 167 |
gh_patches_debug_5423
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-18290
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
linear
#15051
</issue>
<code>
[start of ivy/functional/frontends/paddle/nn/functional/common.py]
1 # local
2 import ivy
3 from ivy.func_wrapper import with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
5
6
7 @to_ivy_arrays_and_back
8 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
9 def cosine_similarity(x1, x2, *, axis=1, eps=1e-08):
10 if len(x1.shape) == len(x2.shape) and len(x2.shape) >= 2:
11 numerator = ivy.sum(x1 * x2, axis=axis)
12 x1_squared_norm = ivy.sum(ivy.square(x1), axis=axis)
13 x2_squared_norm = ivy.sum(ivy.square(x2), axis=axis)
14 else:
15 numerator = ivy.sum(x1 * x2)
16 x1_squared_norm = ivy.sum(ivy.square(x1))
17 x2_squared_norm = ivy.sum(ivy.square(x2))
18
19 x1_norm = ivy.sqrt(x1_squared_norm)
20 x2_norm = ivy.sqrt(x2_squared_norm)
21 norm_mm = x1_norm * x2_norm
22 denominator = ivy.maximum(norm_mm, eps)
23
24 cosine = numerator / denominator
25 return cosine
26
27
28 @to_ivy_arrays_and_back
29 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
30 def dropout2d(x, *, p=0.5, training=True, data_format="NCHW", name=None):
31 return ivy.dropout2d(x, p=p, training=training, data_format=data_format)
32
33
34 def get_mask(shape, device, prob, seed=None):
35 mask = ivy.where(
36 ivy.random_uniform(shape=shape, device=device, seed=seed) < prob,
37 0.0,
38 1.0,
39 )
40 return mask
41
42
43 @to_ivy_arrays_and_back
44 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
45 def dropout(x, p=0.5, axis=None, training=True, mode="upscale_in_train", name=None):
46 if axis > 1:
47 raise ValueError("Axis value can only be 0 or 1 or None.")
48 elif axis is None or (isinstance(axis, list) and len(axis) == 2):
49 mask = get_mask(shape=x.shape, device=ivy.dev(x), prob=p, seed=None)
50 elif axis == 0:
51 mask = get_mask(shape=(x.shape[0], 1), device=ivy.dev(x), prob=p)
52 mask = ivy.broadcast_to(mask, x.shape)
53 elif axis == 1:
54 mask = get_mask(shape=(1, x.shape[1]), device=ivy.dev(x), prob=p)
55 mask = ivy.broadcast_to(mask, x.shape)
56 if mode == "upscale_in_train":
57 if training:
58 out = ivy.multiply(x, mask)
59 ret = ivy.multiply(out, 1.0 / (1.0 - p))
60 else:
61 ret = x
62 else:
63 if training:
64 ret = ivy.multiply(x, mask)
65 else:
66 ret = ivy.multiply(x, (1.0 - p))
67 return ret
68
69
70 @to_ivy_arrays_and_back
71 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
72 def zeropad2d(x, padding, data_format="NCHW", name=None):
73 if ivy.is_array(padding):
74 padding = padding.to_list()
75 if isinstance(padding, int):
76 padding = [padding, padding, padding, padding]
77 if len(padding) != 4:
78 raise ValueError("Padding length should be 4.")
79 if x.ndim != 4:
80 raise ValueError("Input x must be 4-dimensional.")
81 if data_format == "NCHW":
82 padding = ((0, 0), (0, 0), (padding[2], padding[3]), (padding[0], padding[1]))
83 elif data_format == "NHWC":
84 padding = ((0, 0), (padding[2], padding[3]), (padding[0], padding[1]), (0, 0))
85 else:
86 raise ValueError("Unknown data_format: {}".format(data_format))
87 return ivy.pad(x, padding, mode="constant", constant_values=0.0)
88
[end of ivy/functional/frontends/paddle/nn/functional/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ivy/functional/frontends/paddle/nn/functional/common.py b/ivy/functional/frontends/paddle/nn/functional/common.py
--- a/ivy/functional/frontends/paddle/nn/functional/common.py
+++ b/ivy/functional/frontends/paddle/nn/functional/common.py
@@ -85,3 +85,10 @@
else:
raise ValueError("Unknown data_format: {}".format(data_format))
return ivy.pad(x, padding, mode="constant", constant_values=0.0)
+
+
+@to_ivy_arrays_and_back
+@with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
+def linear(x, weight, bias=None, name=None):
+ weight = ivy.swapaxes(weight, -1, -2)
+ return ivy.linear(x, weight, bias=bias)
|
{"golden_diff": "diff --git a/ivy/functional/frontends/paddle/nn/functional/common.py b/ivy/functional/frontends/paddle/nn/functional/common.py\n--- a/ivy/functional/frontends/paddle/nn/functional/common.py\n+++ b/ivy/functional/frontends/paddle/nn/functional/common.py\n@@ -85,3 +85,10 @@\n else:\n raise ValueError(\"Unknown data_format: {}\".format(data_format))\n return ivy.pad(x, padding, mode=\"constant\", constant_values=0.0)\n+\n+\n+@to_ivy_arrays_and_back\n+@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n+def linear(x, weight, bias=None, name=None):\n+ weight = ivy.swapaxes(weight, -1, -2)\n+ return ivy.linear(x, weight, bias=bias)\n", "issue": "linear\n#15051 \n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef cosine_similarity(x1, x2, *, axis=1, eps=1e-08):\n if len(x1.shape) == len(x2.shape) and len(x2.shape) >= 2:\n numerator = ivy.sum(x1 * x2, axis=axis)\n x1_squared_norm = ivy.sum(ivy.square(x1), axis=axis)\n x2_squared_norm = ivy.sum(ivy.square(x2), axis=axis)\n else:\n numerator = ivy.sum(x1 * x2)\n x1_squared_norm = ivy.sum(ivy.square(x1))\n x2_squared_norm = ivy.sum(ivy.square(x2))\n\n x1_norm = ivy.sqrt(x1_squared_norm)\n x2_norm = ivy.sqrt(x2_squared_norm)\n norm_mm = x1_norm * x2_norm\n denominator = ivy.maximum(norm_mm, eps)\n\n cosine = numerator / denominator\n return cosine\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef dropout2d(x, *, p=0.5, training=True, data_format=\"NCHW\", name=None):\n return ivy.dropout2d(x, p=p, training=training, data_format=data_format)\n\n\ndef get_mask(shape, device, prob, seed=None):\n mask = ivy.where(\n ivy.random_uniform(shape=shape, device=device, seed=seed) < prob,\n 0.0,\n 1.0,\n )\n return mask\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef dropout(x, p=0.5, axis=None, training=True, mode=\"upscale_in_train\", name=None):\n if axis > 1:\n raise ValueError(\"Axis value can only be 0 or 1 or None.\")\n elif axis is None or (isinstance(axis, list) and len(axis) == 2):\n mask = get_mask(shape=x.shape, device=ivy.dev(x), prob=p, seed=None)\n elif axis == 0:\n mask = get_mask(shape=(x.shape[0], 1), device=ivy.dev(x), prob=p)\n mask = ivy.broadcast_to(mask, x.shape)\n elif axis == 1:\n mask = get_mask(shape=(1, x.shape[1]), device=ivy.dev(x), prob=p)\n mask = ivy.broadcast_to(mask, x.shape)\n if mode == \"upscale_in_train\":\n if training:\n out = ivy.multiply(x, mask)\n ret = ivy.multiply(out, 1.0 / (1.0 - p))\n else:\n ret = x\n else:\n if training:\n ret = ivy.multiply(x, mask)\n else:\n ret = ivy.multiply(x, (1.0 - p))\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef zeropad2d(x, padding, data_format=\"NCHW\", name=None):\n if ivy.is_array(padding):\n padding = padding.to_list()\n if isinstance(padding, int):\n padding = [padding, padding, padding, padding]\n if len(padding) != 4:\n raise ValueError(\"Padding length should be 4.\")\n if x.ndim != 4:\n raise ValueError(\"Input x must be 4-dimensional.\")\n if data_format == \"NCHW\":\n padding = ((0, 0), (0, 0), (padding[2], padding[3]), (padding[0], 
padding[1]))\n elif data_format == \"NHWC\":\n padding = ((0, 0), (padding[2], padding[3]), (padding[0], padding[1]), (0, 0))\n else:\n raise ValueError(\"Unknown data_format: {}\".format(data_format))\n return ivy.pad(x, padding, mode=\"constant\", constant_values=0.0)\n", "path": "ivy/functional/frontends/paddle/nn/functional/common.py"}]}
| 1,715 | 201 |
gh_patches_debug_2077
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2615
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[requires.io] dependency update on master branch
</issue>
<code>
[start of setup.py]
1 import os
2 import runpy
3 from codecs import open
4
5 from setuptools import setup, find_packages
6
7 # Based on https://github.com/pypa/sampleproject/blob/master/setup.py
8 # and https://python-packaging-user-guide.readthedocs.org/
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12 with open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:
13 long_description = f.read()
14
15 VERSION = runpy.run_path(os.path.join(here, "mitmproxy", "version.py"))["VERSION"]
16
17 setup(
18 name="mitmproxy",
19 version=VERSION,
20 description="An interactive, SSL-capable, man-in-the-middle HTTP proxy for penetration testers and software developers.",
21 long_description=long_description,
22 url="http://mitmproxy.org",
23 author="Aldo Cortesi",
24 author_email="[email protected]",
25 license="MIT",
26 classifiers=[
27 "License :: OSI Approved :: MIT License",
28 "Development Status :: 5 - Production/Stable",
29 "Environment :: Console",
30 "Environment :: Console :: Curses",
31 "Operating System :: MacOS :: MacOS X",
32 "Operating System :: POSIX",
33 "Operating System :: Microsoft :: Windows",
34 "Programming Language :: Python",
35 "Programming Language :: Python :: 3",
36 "Programming Language :: Python :: 3 :: Only",
37 "Programming Language :: Python :: 3.5",
38 "Programming Language :: Python :: 3.6",
39 "Programming Language :: Python :: Implementation :: CPython",
40 "Topic :: Security",
41 "Topic :: Internet",
42 "Topic :: Internet :: WWW/HTTP",
43 "Topic :: Internet :: Proxy Servers",
44 "Topic :: Software Development :: Testing"
45 ],
46 packages=find_packages(include=[
47 "mitmproxy", "mitmproxy.*",
48 "pathod", "pathod.*",
49 ]),
50 include_package_data=True,
51 entry_points={
52 'console_scripts': [
53 "mitmproxy = mitmproxy.tools.main:mitmproxy",
54 "mitmdump = mitmproxy.tools.main:mitmdump",
55 "mitmweb = mitmproxy.tools.main:mitmweb",
56 "pathod = pathod.pathod_cmdline:go_pathod",
57 "pathoc = pathod.pathoc_cmdline:go_pathoc"
58 ]
59 },
60 # https://packaging.python.org/en/latest/requirements/#install-requires
61 # It is not considered best practice to use install_requires to pin dependencies to specific versions.
62 install_requires=[
63 "blinker>=1.4, <1.5",
64 "brotlipy>=0.5.1, <0.8",
65 "certifi>=2015.11.20.1", # no semver here - this should always be on the last release!
66 "click>=6.2, <7",
67 "cryptography>=2.0,<2.2",
68 "h2>=3.0, <4",
69 "hyperframe>=5.0, <6",
70 "kaitaistruct>=0.7, <0.8",
71 "ldap3>=2.2.0, <2.4",
72 "passlib>=1.6.5, <1.8",
73 "pyasn1>=0.3.1, <0.4",
74 "pyOpenSSL>=17.2,<17.4",
75 "pyparsing>=2.1.3, <2.3",
76 "pyperclip>=1.5.22, <1.6",
77 "requests>=2.9.1, <3",
78 "ruamel.yaml>=0.13.2, <0.16",
79 "sortedcontainers>=1.5.4, <1.6",
80 "tornado>=4.3, <4.6",
81 "urwid>=1.3.1, <1.4",
82 ],
83 extras_require={
84 ':sys_platform == "win32"': [
85 "pydivert>=2.0.3,<2.2",
86 ],
87 'dev': [
88 "flake8>=3.2.1, <3.5",
89 "Flask>=0.10.1, <0.13",
90 "mypy>=0.530,<0.541",
91 "pytest-cov>=2.2.1, <3",
92 "pytest-faulthandler>=1.3.0, <2",
93 "pytest-timeout>=1.0.0, <2",
94 "pytest-xdist>=1.14, <2",
95 "pytest>=3.1, <4",
96 "rstcheck>=2.2, <4.0",
97 "sphinx_rtd_theme>=0.1.9, <0.3",
98 "sphinx-autobuild>=0.5.2, <0.8",
99 "sphinx>=1.3.5, <1.7",
100 "sphinxcontrib-documentedlist>=0.5.0, <0.7",
101 "tox>=2.3, <3",
102 ],
103 'examples': [
104 "beautifulsoup4>=4.4.1, <4.7",
105 "Pillow>=4.3,<4.4",
106 ]
107 }
108 )
109
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -85,7 +85,7 @@
"pydivert>=2.0.3,<2.2",
],
'dev': [
- "flake8>=3.2.1, <3.5",
+ "flake8>=3.5, <3.6",
"Flask>=0.10.1, <0.13",
"mypy>=0.530,<0.541",
"pytest-cov>=2.2.1, <3",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -85,7 +85,7 @@\n \"pydivert>=2.0.3,<2.2\",\n ],\n 'dev': [\n- \"flake8>=3.2.1, <3.5\",\n+ \"flake8>=3.5, <3.6\",\n \"Flask>=0.10.1, <0.13\",\n \"mypy>=0.530,<0.541\",\n \"pytest-cov>=2.2.1, <3\",\n", "issue": "[requires.io] dependency update on master branch\n\n", "before_files": [{"content": "import os\nimport runpy\nfrom codecs import open\n\nfrom setuptools import setup, find_packages\n\n# Based on https://github.com/pypa/sampleproject/blob/master/setup.py\n# and https://python-packaging-user-guide.readthedocs.org/\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:\n long_description = f.read()\n\nVERSION = runpy.run_path(os.path.join(here, \"mitmproxy\", \"version.py\"))[\"VERSION\"]\n\nsetup(\n name=\"mitmproxy\",\n version=VERSION,\n description=\"An interactive, SSL-capable, man-in-the-middle HTTP proxy for penetration testers and software developers.\",\n long_description=long_description,\n url=\"http://mitmproxy.org\",\n author=\"Aldo Cortesi\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Environment :: Console :: Curses\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Security\",\n \"Topic :: Internet\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: Proxy Servers\",\n \"Topic :: Software Development :: Testing\"\n ],\n packages=find_packages(include=[\n \"mitmproxy\", \"mitmproxy.*\",\n \"pathod\", \"pathod.*\",\n ]),\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n \"mitmproxy = mitmproxy.tools.main:mitmproxy\",\n \"mitmdump = mitmproxy.tools.main:mitmdump\",\n \"mitmweb = mitmproxy.tools.main:mitmweb\",\n \"pathod = pathod.pathod_cmdline:go_pathod\",\n \"pathoc = pathod.pathoc_cmdline:go_pathoc\"\n ]\n },\n # https://packaging.python.org/en/latest/requirements/#install-requires\n # It is not considered best practice to use install_requires to pin dependencies to specific versions.\n install_requires=[\n \"blinker>=1.4, <1.5\",\n \"brotlipy>=0.5.1, <0.8\",\n \"certifi>=2015.11.20.1\", # no semver here - this should always be on the last release!\n \"click>=6.2, <7\",\n \"cryptography>=2.0,<2.2\",\n \"h2>=3.0, <4\",\n \"hyperframe>=5.0, <6\",\n \"kaitaistruct>=0.7, <0.8\",\n \"ldap3>=2.2.0, <2.4\",\n \"passlib>=1.6.5, <1.8\",\n \"pyasn1>=0.3.1, <0.4\",\n \"pyOpenSSL>=17.2,<17.4\",\n \"pyparsing>=2.1.3, <2.3\",\n \"pyperclip>=1.5.22, <1.6\",\n \"requests>=2.9.1, <3\",\n \"ruamel.yaml>=0.13.2, <0.16\",\n \"sortedcontainers>=1.5.4, <1.6\",\n \"tornado>=4.3, <4.6\",\n \"urwid>=1.3.1, <1.4\",\n ],\n extras_require={\n ':sys_platform == \"win32\"': [\n \"pydivert>=2.0.3,<2.2\",\n ],\n 'dev': [\n \"flake8>=3.2.1, <3.5\",\n \"Flask>=0.10.1, <0.13\",\n \"mypy>=0.530,<0.541\",\n \"pytest-cov>=2.2.1, <3\",\n \"pytest-faulthandler>=1.3.0, <2\",\n \"pytest-timeout>=1.0.0, <2\",\n \"pytest-xdist>=1.14, <2\",\n \"pytest>=3.1, <4\",\n \"rstcheck>=2.2, <4.0\",\n 
\"sphinx_rtd_theme>=0.1.9, <0.3\",\n \"sphinx-autobuild>=0.5.2, <0.8\",\n \"sphinx>=1.3.5, <1.7\",\n \"sphinxcontrib-documentedlist>=0.5.0, <0.7\",\n \"tox>=2.3, <3\",\n ],\n 'examples': [\n \"beautifulsoup4>=4.4.1, <4.7\",\n \"Pillow>=4.3,<4.4\",\n ]\n }\n)\n", "path": "setup.py"}]}
| 1,914 | 137 |
gh_patches_debug_28334
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-3104
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
only capturing the first line https://github.com/bridgecrewio/checkov/blob/2.0.1131/checkov/dockerfile/checks/WorkdirIsAbsolute.py
def scan_entity_conf(self, conf):
for mydir in conf:
mypath = mydir["value"]
if re.match(PATH, mypath):
return CheckResult.FAILED, mydir
return CheckResult.PASSED, None
</issue>
<code>
[start of checkov/dockerfile/checks/WorkdirIsAbsolute.py]
1 import re
2
3 from checkov.common.models.enums import CheckCategories, CheckResult
4 from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
5
6 ISABSOLUTE = re.compile("(^/[A-z0-9-_+]*)|(^[A-z0-9-_+]:\\\\.*)|(^\\$[{}A-z0-9-_+].*)")
7
8
9 class WorkdirIsAbsolute(BaseDockerfileCheck):
10 def __init__(self):
11 """
12 For clarity and reliability, you should always use absolute paths for your WORKDIR.
13 """
14 name = "Ensure that WORKDIR values are absolute paths"
15 id = "CKV_DOCKER_10"
16 supported_instructions = ["WORKDIR"]
17 categories = [CheckCategories.CONVENTION]
18 super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
19
20 def scan_entity_conf(self, conf):
21 for mydir in conf:
22 mypath = mydir["value"]
23 if not re.match(ISABSOLUTE, mypath):
24 return CheckResult.FAILED, mydir
25 return CheckResult.PASSED, None
26
27
28 check = WorkdirIsAbsolute()
29
[end of checkov/dockerfile/checks/WorkdirIsAbsolute.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/dockerfile/checks/WorkdirIsAbsolute.py b/checkov/dockerfile/checks/WorkdirIsAbsolute.py
--- a/checkov/dockerfile/checks/WorkdirIsAbsolute.py
+++ b/checkov/dockerfile/checks/WorkdirIsAbsolute.py
@@ -1,3 +1,5 @@
+from __future__ import annotations
+
import re
from checkov.common.models.enums import CheckCategories, CheckResult
@@ -7,21 +9,26 @@
class WorkdirIsAbsolute(BaseDockerfileCheck):
- def __init__(self):
+ def __init__(self) -> None:
"""
For clarity and reliability, you should always use absolute paths for your WORKDIR.
"""
name = "Ensure that WORKDIR values are absolute paths"
id = "CKV_DOCKER_10"
- supported_instructions = ["WORKDIR"]
- categories = [CheckCategories.CONVENTION]
+ supported_instructions = ("WORKDIR",)
+ categories = (CheckCategories.CONVENTION,)
super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
- def scan_entity_conf(self, conf):
- for mydir in conf:
- mypath = mydir["value"]
- if not re.match(ISABSOLUTE, mypath):
- return CheckResult.FAILED, mydir
+ def scan_entity_conf(self, conf: list[dict[str, int | str]]) -> tuple[CheckResult, list[dict[str, int | str]] | None]:
+ workdirs = []
+ for workdir in conf:
+ path = workdir["value"]
+ if not re.match(ISABSOLUTE, path):
+ workdirs.append(workdir)
+
+ if workdirs:
+ return CheckResult.FAILED, workdirs
+
return CheckResult.PASSED, None
|
{"golden_diff": "diff --git a/checkov/dockerfile/checks/WorkdirIsAbsolute.py b/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n--- a/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n+++ b/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n@@ -1,3 +1,5 @@\n+from __future__ import annotations\n+\n import re\n \n from checkov.common.models.enums import CheckCategories, CheckResult\n@@ -7,21 +9,26 @@\n \n \n class WorkdirIsAbsolute(BaseDockerfileCheck):\n- def __init__(self):\n+ def __init__(self) -> None:\n \"\"\"\n For clarity and reliability, you should always use absolute paths for your WORKDIR.\n \"\"\"\n name = \"Ensure that WORKDIR values are absolute paths\"\n id = \"CKV_DOCKER_10\"\n- supported_instructions = [\"WORKDIR\"]\n- categories = [CheckCategories.CONVENTION]\n+ supported_instructions = (\"WORKDIR\",)\n+ categories = (CheckCategories.CONVENTION,)\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n \n- def scan_entity_conf(self, conf):\n- for mydir in conf:\n- mypath = mydir[\"value\"]\n- if not re.match(ISABSOLUTE, mypath):\n- return CheckResult.FAILED, mydir\n+ def scan_entity_conf(self, conf: list[dict[str, int | str]]) -> tuple[CheckResult, list[dict[str, int | str]] | None]:\n+ workdirs = []\n+ for workdir in conf:\n+ path = workdir[\"value\"]\n+ if not re.match(ISABSOLUTE, path):\n+ workdirs.append(workdir)\n+\n+ if workdirs:\n+ return CheckResult.FAILED, workdirs\n+\n return CheckResult.PASSED, None\n", "issue": "only caputring the first line https://github.com/bridgecrewio/checkov/blob/2.0.1131/checkov/dockerfile/checks/WorkdirIsAbsolute.py\ndef scan_entity_conf(self, conf):\r\n for mydir in conf:\r\n mypath = mydir[\"value\"]\r\n if re.match(PATH, mypath):\r\n return CheckResult.FAILED, mydir\r\n return CheckResult.PASSED, None\n", "before_files": [{"content": "import re\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\nISABSOLUTE = re.compile(\"(^/[A-z0-9-_+]*)|(^[A-z0-9-_+]:\\\\\\\\.*)|(^\\\\$[{}A-z0-9-_+].*)\")\n\n\nclass WorkdirIsAbsolute(BaseDockerfileCheck):\n def __init__(self):\n \"\"\"\n For clarity and reliability, you should always use absolute paths for your WORKDIR.\n \"\"\"\n name = \"Ensure that WORKDIR values are absolute paths\"\n id = \"CKV_DOCKER_10\"\n supported_instructions = [\"WORKDIR\"]\n categories = [CheckCategories.CONVENTION]\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_entity_conf(self, conf):\n for mydir in conf:\n mypath = mydir[\"value\"]\n if not re.match(ISABSOLUTE, mypath):\n return CheckResult.FAILED, mydir\n return CheckResult.PASSED, None\n\n\ncheck = WorkdirIsAbsolute()\n", "path": "checkov/dockerfile/checks/WorkdirIsAbsolute.py"}]}
| 946 | 411 |
gh_patches_debug_29107
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-410
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nested stack reference to InstanceProfile triggers E2502 Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile
*cfn-lint version: `0.8.1`*
# Description of issue
When using nested stacks and passing IamInstanceProfile ARNs between stacks, E2502 is triggered though it shouldn't be.
# Steps to reproduce
Create a parent template like this
```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
IAMInstanceProfile:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: https://s3-us-west-2.amazonaws.com/example-bucket/example-instance-profile.yml
Instance:
Type: AWS::CloudFormation::Stack
Properties:
Parameters:
IamInstanceProfile: !GetAtt IAMInstanceProfile.Outputs.InstanceProfileArn
TemplateURL: https://s3-us-west-2.amazonaws.com/example-bucket/example-instance.yml
```
and a child template like this
```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
InstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Roles:
- ExampleRole
Outputs:
InstanceProfileArn:
Value: !GetAtt InstanceProfile.Arn
```
# Expected results
The `IamInstanceProfile` parameter in the parent template's `Instance` sub-stack resource definition does indeed contain a valid IAM Instance Profile ARN (passed in from the `IAMInstanceProfile` sub-stack resource) and, as a result, there should be no error.
Ideally cfn-lint would recognize that `GetAtt` is referencing an output from another stack which very well could be an InstanceProfile ARN and as a result, optimistically not report this error.
Alternatively, if cfn-lint could introspect the sub-stack and determine the object type of the output, it would know whether or not it was the correct object type.
# Actual results
cfn-lint reports the error
> E2502 Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for Resources/Instance/Properties/Parameters/IamInstanceProfile/Fn::GetAtt
> example-parent.yml:11:9
</issue>
<code>
[start of src/cfnlint/rules/resources/iam/InstanceProfile.py]
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 from cfnlint import CloudFormationLintRule
18 from cfnlint import RuleMatch
19
20
21 class InstanceProfile(CloudFormationLintRule):
22 """Check if IamInstanceProfile are used"""
23 id = 'E2502'
24 shortdesc = 'Check if IamInstanceProfile are using the name and not ARN'
25 description = 'See if there are any properties IamInstanceProfile' + \
26 'are using name and not ARN'
27 source_url = 'https://github.com/awslabs/cfn-python-lint'
28 tags = ['properties']
29
30 def match(self, cfn):
31 """Check CloudFormation IamInstanceProfile Parameters"""
32
33 matches = []
34
35 # Build the list of keys
36 trees = cfn.search_deep_keys('Fn::GetAtt')
37 # Filter only resources
38 # Disable pylint for Pylint 2
39 # pylint: disable=W0110
40 trees = filter(lambda x: x[0] == 'Resources', trees)
41 for tree in trees:
42 if any(e == 'IamInstanceProfile' for e in tree):
43 obj = tree[-1]
44 objtype = cfn.template.get('Resources', {}).get(obj[0], {}).get('Type')
45 if objtype:
46 if objtype != 'AWS::IAM::InstanceProfile':
47 message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (
48 '/'.join(map(str, tree[:-1])))
49 matches.append(RuleMatch(tree[:-1], message))
50 else:
51 if cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:
52 if obj[1] != 'Arn':
53 message = 'Property IamInstanceProfile should be an ARN for %s' % (
54 '/'.join(map(str, tree[:-1])))
55 matches.append(RuleMatch(tree[:-1], message))
56 else:
57 if obj[1] == 'Arn':
58 message = 'Property IamInstanceProfile shouldn\'t be an ARN for %s' % (
59 '/'.join(map(str, tree[:-1])))
60 matches.append(RuleMatch(tree[:-1], message))
61
62 # Search Refs
63 trees = cfn.search_deep_keys('Ref')
64 # Filter only resoureces
65 trees = filter(lambda x: x[0] == 'Resources', trees)
66 for tree in trees:
67 if any(e == 'IamInstanceProfile' for e in tree):
68 obj = tree[-1]
69 objtype = cfn.template.get('Resources', {}).get(obj, {}).get('Type')
70 if objtype:
71 if objtype != 'AWS::IAM::InstanceProfile':
72 message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (
73 '/'.join(map(str, tree[:-1])))
74 matches.append(RuleMatch(tree[:-1], message))
75
76 return matches
77
[end of src/cfnlint/rules/resources/iam/InstanceProfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/rules/resources/iam/InstanceProfile.py b/src/cfnlint/rules/resources/iam/InstanceProfile.py
--- a/src/cfnlint/rules/resources/iam/InstanceProfile.py
+++ b/src/cfnlint/rules/resources/iam/InstanceProfile.py
@@ -43,12 +43,17 @@
obj = tree[-1]
objtype = cfn.template.get('Resources', {}).get(obj[0], {}).get('Type')
if objtype:
- if objtype != 'AWS::IAM::InstanceProfile':
+ if objtype not in ['AWS::IAM::InstanceProfile', 'AWS::CloudFormation::Stack', 'AWS::CloudFormation::CustomResource']:
message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (
'/'.join(map(str, tree[:-1])))
matches.append(RuleMatch(tree[:-1], message))
else:
- if cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:
+ if objtype in ['AWS::CloudFormation::Stack']:
+ if obj[1] != 'Outputs':
+ message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (
+ '/'.join(map(str, tree[:-1])))
+ matches.append(RuleMatch(tree[:-1], message))
+ elif cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:
if obj[1] != 'Arn':
message = 'Property IamInstanceProfile should be an ARN for %s' % (
'/'.join(map(str, tree[:-1])))
|
{"golden_diff": "diff --git a/src/cfnlint/rules/resources/iam/InstanceProfile.py b/src/cfnlint/rules/resources/iam/InstanceProfile.py\n--- a/src/cfnlint/rules/resources/iam/InstanceProfile.py\n+++ b/src/cfnlint/rules/resources/iam/InstanceProfile.py\n@@ -43,12 +43,17 @@\n obj = tree[-1]\n objtype = cfn.template.get('Resources', {}).get(obj[0], {}).get('Type')\n if objtype:\n- if objtype != 'AWS::IAM::InstanceProfile':\n+ if objtype not in ['AWS::IAM::InstanceProfile', 'AWS::CloudFormation::Stack', 'AWS::CloudFormation::CustomResource']:\n message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n else:\n- if cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:\n+ if objtype in ['AWS::CloudFormation::Stack']:\n+ if obj[1] != 'Outputs':\n+ message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (\n+ '/'.join(map(str, tree[:-1])))\n+ matches.append(RuleMatch(tree[:-1], message))\n+ elif cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:\n if obj[1] != 'Arn':\n message = 'Property IamInstanceProfile should be an ARN for %s' % (\n '/'.join(map(str, tree[:-1])))\n", "issue": "Nested stack reference to InstanceProfile triggers E2502 Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile\n*cfn-lint version: `0.8.1`*\r\n\r\n# Description of issue\r\n\r\nWhen using nested stacks and passing IamInstanceProfile ARNs between stacks, E2502 is triggered though it shouldn't be.\r\n\r\n# Steps to reproduce\r\n\r\nCreate a parent template like this\r\n```yaml\r\nAWSTemplateFormatVersion: 2010-09-09\r\nResources:\r\n IAMInstanceProfile:\r\n Type: AWS::CloudFormation::Stack\r\n Properties:\r\n TemplateURL: https://s3-us-west-2.amazonaws.com/example-bucket/example-instance-profile.yml\r\n Instance:\r\n Type: AWS::CloudFormation::Stack\r\n Properties:\r\n Parameters:\r\n IamInstanceProfile: !GetAtt IAMInstanceProfile.Outputs.InstanceProfileArn\r\n TemplateURL: https://s3-us-west-2.amazonaws.com/example-bucket/example-instance.yml\r\n```\r\nand a child template like this\r\n\r\n```yaml\r\nAWSTemplateFormatVersion: 2010-09-09\r\nResources:\r\n InstanceProfile:\r\n Type: AWS::IAM::InstanceProfile\r\n Properties:\r\n Roles:\r\n - ExampleRole\r\nOutputs:\r\n InstanceProfileArn:\r\n Value: !GetAtt InstanceProfile.Arn\r\n```\r\n\r\n# Expected results\r\n\r\nThe `IamInstanceProfile` parameter in the parent template's `Instance` sub-stack resource definition does indeed contain a valid IAM Instance Profile ARN (passed in from the `IAMInstanceProfile` sub-stack resource and as a result, there should be no error.\r\n\r\nIdeally cfn-lint would recognize that `GetAtt` is referencing an output from another stack which very well could be an InstanceProfile ARN and as a result, optimistically not report this error.\r\n\r\nAlternatively, if cfn-lint could introspect the sub-stack and determine the object type of the output, it would know whether or not it was the correct object type.\r\n\r\n# Actual results\r\n\r\ncfn-lint reports the error\r\n\r\n> E2502 Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for Resources/Instance/Properties/Parameters/IamInstanceProfile/Fn::GetAtt\r\n> example-parent.yml:11:9\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass InstanceProfile(CloudFormationLintRule):\n \"\"\"Check if IamInstanceProfile are used\"\"\"\n id = 'E2502'\n shortdesc = 'Check if IamInstanceProfile are using the name and not ARN'\n description = 'See if there are any properties IamInstanceProfile' + \\\n 'are using name and not ARN'\n source_url = 'https://github.com/awslabs/cfn-python-lint'\n tags = ['properties']\n\n def match(self, cfn):\n \"\"\"Check CloudFormation IamInstanceProfile Parameters\"\"\"\n\n matches = []\n\n # Build the list of keys\n trees = cfn.search_deep_keys('Fn::GetAtt')\n # Filter only resources\n # Disable pylint for Pylint 2\n # pylint: disable=W0110\n trees = filter(lambda x: x[0] == 'Resources', trees)\n for tree in trees:\n if any(e == 'IamInstanceProfile' for e in tree):\n obj = tree[-1]\n objtype = cfn.template.get('Resources', {}).get(obj[0], {}).get('Type')\n if objtype:\n if objtype != 'AWS::IAM::InstanceProfile':\n message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n else:\n if cfn.template.get('Resources', {}).get(tree[1], {}).get('Type') in ['AWS::EC2::SpotFleet']:\n if obj[1] != 'Arn':\n message = 'Property IamInstanceProfile should be an ARN for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n else:\n if obj[1] == 'Arn':\n message = 'Property IamInstanceProfile shouldn\\'t be an ARN for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n\n # Search Refs\n trees = cfn.search_deep_keys('Ref')\n # Filter only resoureces\n trees = filter(lambda x: x[0] == 'Resources', trees)\n for tree in trees:\n if any(e == 'IamInstanceProfile' for e in tree):\n obj = tree[-1]\n objtype = cfn.template.get('Resources', {}).get(obj, {}).get('Type')\n if objtype:\n if objtype != 'AWS::IAM::InstanceProfile':\n message = 'Property IamInstanceProfile should relate to AWS::IAM::InstanceProfile for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n\n return matches\n", "path": "src/cfnlint/rules/resources/iam/InstanceProfile.py"}]}
| 2,019 | 384 |
gh_patches_debug_34951 | rasdani/github-patches | git_diff | biopython__biopython-3706 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
lcc.lcc_mult returns extra zero value
### Setup
I am reporting a problem with Biopython version, Python version, and operating
system as follows:
```python
import sys; print(sys.version)
import platform; print(platform.python_implementation()); print(platform.platform())
import Bio; print(Bio.__version__)
```
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CPython
Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-centos-7.6.1810-Core
1.78
### Expected behaviour
[1.9056390622295662]
### Actual behaviour
[0, 1.9056390622295662]
### Steps to reproduce
lcc.lcc_mult('ACGATAGC', 8)
In addition, according to the [article](https://www.researchgate.net/publication/229964618_Sequence_Complexity_and_Composition), the calculation of LCC uses log base 4 but the implementation uses log base 2. That is why, for example sequence 5 in Table 1, the value shown is half the function value.
</issue>
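As a quick check on the numbers in this report, the entropy of the example sequence can be computed directly with the standard library. The sketch below assumes only the sequence given above; it evaluates the compositional entropy once with base-2 logarithms (which reproduces the 1.9056... value expected from `lcc_mult('ACGATAGC', 8)`) and once with base-4 logarithms (the convention in the cited article), which comes out to exactly half of that.

```python
import math

seq = "ACGATAGC"                       # example sequence from the report
n = len(seq)
probs = [seq.count(base) / n for base in "ACGT" if base in seq]

h_base2 = -sum(p * math.log(p, 2) for p in probs)
h_base4 = -sum(p * math.log(p, 4) for p in probs)

print(h_base2)   # ~1.9056390622295662, the single value expected from lcc_mult(seq, 8)
print(h_base4)   # ~0.9528195311147831, half the base-2 value, matching the article's base-4 convention
```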
<code>
[start of Bio/SeqUtils/lcc.py]
1 # Copyright 2003, 2007 by Sebastian Bassi. [email protected]
2 # All rights reserved.
3 #
4 # This file is part of the Biopython distribution and governed by your
5 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
6 # Please see the LICENSE file that should have been included as part of this
7 # package.
8 """Local Composition Complexity."""
9
10 import math
11
12
13 def lcc_mult(seq, wsize):
14 """Calculate Local Composition Complexity (LCC) values over sliding window.
15
16 Returns a list of floats, the LCC values for a sliding window over
17 the sequence.
18
19 seq - an unambiguous DNA sequence (a string or Seq object)
20 wsize - window size, integer
21
22 The result is the same as applying lcc_simp multiple times, but this
23 version is optimized for speed. The optimization works by using the
24 value of previous window as a base to compute the next one.
25 """
26 l2 = math.log(2)
27 tamseq = len(seq)
28 upper = str(seq).upper()
29 compone = [0]
30 lccsal = [0]
31 for i in range(wsize):
32 compone.append(
33 ((i + 1) / float(wsize)) * ((math.log((i + 1) / float(wsize))) / l2)
34 )
35 window = seq[0:wsize]
36 cant_a = window.count("A")
37 cant_c = window.count("C")
38 cant_t = window.count("T")
39 cant_g = window.count("G")
40 term_a = compone[cant_a]
41 term_c = compone[cant_c]
42 term_t = compone[cant_t]
43 term_g = compone[cant_g]
44 lccsal.append(-(term_a + term_c + term_t + term_g))
45 tail = seq[0]
46 for x in range(tamseq - wsize):
47 window = upper[x + 1 : wsize + x + 1]
48 if tail == window[-1]:
49 lccsal.append(lccsal[-1])
50 elif tail == "A":
51 cant_a -= 1
52 if window.endswith("C"):
53 cant_c += 1
54 term_a = compone[cant_a]
55 term_c = compone[cant_c]
56 lccsal.append(-(term_a + term_c + term_t + term_g))
57 elif window.endswith("T"):
58 cant_t += 1
59 term_a = compone[cant_a]
60 term_t = compone[cant_t]
61 lccsal.append(-(term_a + term_c + term_t + term_g))
62 elif window.endswith("G"):
63 cant_g += 1
64 term_a = compone[cant_a]
65 term_g = compone[cant_g]
66 lccsal.append(-(term_a + term_c + term_t + term_g))
67 elif tail == "C":
68 cant_c -= 1
69 if window.endswith("A"):
70 cant_a += 1
71 term_a = compone[cant_a]
72 term_c = compone[cant_c]
73 lccsal.append(-(term_a + term_c + term_t + term_g))
74 elif window.endswith("T"):
75 cant_t += 1
76 term_c = compone[cant_c]
77 term_t = compone[cant_t]
78 lccsal.append(-(term_a + term_c + term_t + term_g))
79 elif window.endswith("G"):
80 cant_g += 1
81 term_c = compone[cant_c]
82 term_g = compone[cant_g]
83 lccsal.append(-(term_a + term_c + term_t + term_g))
84 elif tail == "T":
85 cant_t -= 1
86 if window.endswith("A"):
87 cant_a += 1
88 term_a = compone[cant_a]
89 term_t = compone[cant_t]
90 lccsal.append(-(term_a + term_c + term_t + term_g))
91 elif window.endswith("C"):
92 cant_c += 1
93 term_c = compone[cant_c]
94 term_t = compone[cant_t]
95 lccsal.append(-(term_a + term_c + term_t + term_g))
96 elif window.endswith("G"):
97 cant_g += 1
98 term_t = compone[cant_t]
99 term_g = compone[cant_g]
100 lccsal.append(-(term_a + term_c + term_t + term_g))
101 elif tail == "G":
102 cant_g -= 1
103 if window.endswith("A"):
104 cant_a += 1
105 term_a = compone[cant_a]
106 term_g = compone[cant_g]
107 lccsal.append(-(term_a + term_c + term_t + term_g))
108 elif window.endswith("C"):
109 cant_c += 1
110 term_c = compone[cant_c]
111 term_g = compone[cant_g]
112 lccsal.append(-(term_a + term_c + term_t + term_g))
113 elif window.endswith("T"):
114 cant_t += 1
115 term_t = compone[cant_t]
116 term_g = compone[cant_g]
117 lccsal.append(-(term_a + term_c + term_t + term_g))
118 tail = window[0]
119 return lccsal
120
121
122 def lcc_simp(seq):
123 """Calculate Local Composition Complexity (LCC) for a sequence.
124
125 seq - an unambiguous DNA sequence (a string or Seq object)
126
127 Returns the Local Composition Complexity (LCC) value for the entire
128 sequence (as a float).
129
130 Reference:
131 Andrzej K Konopka (2005) Sequence Complexity and Composition
132 https://doi.org/10.1038/npg.els.0005260
133 """
134 wsize = len(seq)
135 upper = str(seq).upper()
136 l2 = math.log(2)
137 if "A" not in seq:
138 term_a = 0
139 # Check to avoid calculating the log of 0.
140 else:
141 term_a = ((upper.count("A")) / float(wsize)) * (
142 (math.log((upper.count("A")) / float(wsize))) / l2
143 )
144 if "C" not in seq:
145 term_c = 0
146 else:
147 term_c = ((upper.count("C")) / float(wsize)) * (
148 (math.log((upper.count("C")) / float(wsize))) / l2
149 )
150 if "T" not in seq:
151 term_t = 0
152 else:
153 term_t = ((upper.count("T")) / float(wsize)) * (
154 (math.log((upper.count("T")) / float(wsize))) / l2
155 )
156 if "G" not in seq:
157 term_g = 0
158 else:
159 term_g = ((upper.count("G")) / float(wsize)) * (
160 (math.log((upper.count("G")) / float(wsize))) / l2
161 )
162 return -(term_a + term_c + term_t + term_g)
163
[end of Bio/SeqUtils/lcc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Bio/SeqUtils/lcc.py b/Bio/SeqUtils/lcc.py
--- a/Bio/SeqUtils/lcc.py
+++ b/Bio/SeqUtils/lcc.py
@@ -23,14 +23,14 @@
version is optimized for speed. The optimization works by using the
value of previous window as a base to compute the next one.
"""
- l2 = math.log(2)
+ l4 = math.log(4)
tamseq = len(seq)
upper = str(seq).upper()
compone = [0]
- lccsal = [0]
+ lccsal = []
for i in range(wsize):
compone.append(
- ((i + 1) / float(wsize)) * ((math.log((i + 1) / float(wsize))) / l2)
+ ((i + 1) / float(wsize)) * ((math.log((i + 1) / float(wsize))) / l4)
)
window = seq[0:wsize]
cant_a = window.count("A")
@@ -133,30 +133,30 @@
"""
wsize = len(seq)
upper = str(seq).upper()
- l2 = math.log(2)
+ l4 = math.log(4)
+ # Check to avoid calculating the log of 0.
if "A" not in seq:
term_a = 0
- # Check to avoid calculating the log of 0.
else:
term_a = ((upper.count("A")) / float(wsize)) * (
- (math.log((upper.count("A")) / float(wsize))) / l2
+ (math.log((upper.count("A")) / float(wsize))) / l4
)
if "C" not in seq:
term_c = 0
else:
term_c = ((upper.count("C")) / float(wsize)) * (
- (math.log((upper.count("C")) / float(wsize))) / l2
+ (math.log((upper.count("C")) / float(wsize))) / l4
)
if "T" not in seq:
term_t = 0
else:
term_t = ((upper.count("T")) / float(wsize)) * (
- (math.log((upper.count("T")) / float(wsize))) / l2
+ (math.log((upper.count("T")) / float(wsize))) / l4
)
if "G" not in seq:
term_g = 0
else:
term_g = ((upper.count("G")) / float(wsize)) * (
- (math.log((upper.count("G")) / float(wsize))) / l2
+ (math.log((upper.count("G")) / float(wsize))) / l4
)
return -(term_a + term_c + term_t + term_g)
|
{"golden_diff": "diff --git a/Bio/SeqUtils/lcc.py b/Bio/SeqUtils/lcc.py\n--- a/Bio/SeqUtils/lcc.py\n+++ b/Bio/SeqUtils/lcc.py\n@@ -23,14 +23,14 @@\n version is optimized for speed. The optimization works by using the\n value of previous window as a base to compute the next one.\n \"\"\"\n- l2 = math.log(2)\n+ l4 = math.log(4)\n tamseq = len(seq)\n upper = str(seq).upper()\n compone = [0]\n- lccsal = [0]\n+ lccsal = []\n for i in range(wsize):\n compone.append(\n- ((i + 1) / float(wsize)) * ((math.log((i + 1) / float(wsize))) / l2)\n+ ((i + 1) / float(wsize)) * ((math.log((i + 1) / float(wsize))) / l4)\n )\n window = seq[0:wsize]\n cant_a = window.count(\"A\")\n@@ -133,30 +133,30 @@\n \"\"\"\n wsize = len(seq)\n upper = str(seq).upper()\n- l2 = math.log(2)\n+ l4 = math.log(4)\n+ # Check to avoid calculating the log of 0.\n if \"A\" not in seq:\n term_a = 0\n- # Check to avoid calculating the log of 0.\n else:\n term_a = ((upper.count(\"A\")) / float(wsize)) * (\n- (math.log((upper.count(\"A\")) / float(wsize))) / l2\n+ (math.log((upper.count(\"A\")) / float(wsize))) / l4\n )\n if \"C\" not in seq:\n term_c = 0\n else:\n term_c = ((upper.count(\"C\")) / float(wsize)) * (\n- (math.log((upper.count(\"C\")) / float(wsize))) / l2\n+ (math.log((upper.count(\"C\")) / float(wsize))) / l4\n )\n if \"T\" not in seq:\n term_t = 0\n else:\n term_t = ((upper.count(\"T\")) / float(wsize)) * (\n- (math.log((upper.count(\"T\")) / float(wsize))) / l2\n+ (math.log((upper.count(\"T\")) / float(wsize))) / l4\n )\n if \"G\" not in seq:\n term_g = 0\n else:\n term_g = ((upper.count(\"G\")) / float(wsize)) * (\n- (math.log((upper.count(\"G\")) / float(wsize))) / l2\n+ (math.log((upper.count(\"G\")) / float(wsize))) / l4\n )\n return -(term_a + term_c + term_t + term_g)\n", "issue": "lcc.lcc_mult returns extra zero value\n### Setup\r\n\r\nI am reporting a problem with Biopython version, Python version, and operating\r\nsystem as follows:\r\n\r\n```python\r\nimport sys; print(sys.version)\r\nimport platform; print(platform.python_implementation()); print(platform.platform())\r\nimport Bio; print(Bio.__version__)\r\n```\r\n\r\n[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]\r\nCPython\r\nLinux-3.10.0-957.1.3.el7.x86_64-x86_64-with-centos-7.6.1810-Core\r\n1.78\r\n\r\n### Expected behaviour\r\n\r\n[1.9056390622295662]\r\n\r\n### Actual behaviour\r\n\r\n[0, 1.9056390622295662]\r\n\r\n### Steps to reproduce\r\n\r\nlcc.lcc_mult('ACGATAGC', 8)\r\n\r\nIn addition according the [article](https://www.researchgate.net/publication/229964618_Sequence_Complexity_and_Composition), the calculation of LCC uses log base 4 but the implementation uses log base 2. That is why for the example sequence 5 in Table 1 the value shown is half the function value.\r\n\n", "before_files": [{"content": "# Copyright 2003, 2007 by Sebastian Bassi. 
[email protected]\n# All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Local Composition Complexity.\"\"\"\n\nimport math\n\n\ndef lcc_mult(seq, wsize):\n \"\"\"Calculate Local Composition Complexity (LCC) values over sliding window.\n\n Returns a list of floats, the LCC values for a sliding window over\n the sequence.\n\n seq - an unambiguous DNA sequence (a string or Seq object)\n wsize - window size, integer\n\n The result is the same as applying lcc_simp multiple times, but this\n version is optimized for speed. The optimization works by using the\n value of previous window as a base to compute the next one.\n \"\"\"\n l2 = math.log(2)\n tamseq = len(seq)\n upper = str(seq).upper()\n compone = [0]\n lccsal = [0]\n for i in range(wsize):\n compone.append(\n ((i + 1) / float(wsize)) * ((math.log((i + 1) / float(wsize))) / l2)\n )\n window = seq[0:wsize]\n cant_a = window.count(\"A\")\n cant_c = window.count(\"C\")\n cant_t = window.count(\"T\")\n cant_g = window.count(\"G\")\n term_a = compone[cant_a]\n term_c = compone[cant_c]\n term_t = compone[cant_t]\n term_g = compone[cant_g]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n tail = seq[0]\n for x in range(tamseq - wsize):\n window = upper[x + 1 : wsize + x + 1]\n if tail == window[-1]:\n lccsal.append(lccsal[-1])\n elif tail == \"A\":\n cant_a -= 1\n if window.endswith(\"C\"):\n cant_c += 1\n term_a = compone[cant_a]\n term_c = compone[cant_c]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"T\"):\n cant_t += 1\n term_a = compone[cant_a]\n term_t = compone[cant_t]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"G\"):\n cant_g += 1\n term_a = compone[cant_a]\n term_g = compone[cant_g]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif tail == \"C\":\n cant_c -= 1\n if window.endswith(\"A\"):\n cant_a += 1\n term_a = compone[cant_a]\n term_c = compone[cant_c]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"T\"):\n cant_t += 1\n term_c = compone[cant_c]\n term_t = compone[cant_t]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"G\"):\n cant_g += 1\n term_c = compone[cant_c]\n term_g = compone[cant_g]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif tail == \"T\":\n cant_t -= 1\n if window.endswith(\"A\"):\n cant_a += 1\n term_a = compone[cant_a]\n term_t = compone[cant_t]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"C\"):\n cant_c += 1\n term_c = compone[cant_c]\n term_t = compone[cant_t]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"G\"):\n cant_g += 1\n term_t = compone[cant_t]\n term_g = compone[cant_g]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif tail == \"G\":\n cant_g -= 1\n if window.endswith(\"A\"):\n cant_a += 1\n term_a = compone[cant_a]\n term_g = compone[cant_g]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"C\"):\n cant_c += 1\n term_c = compone[cant_c]\n term_g = compone[cant_g]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n elif window.endswith(\"T\"):\n cant_t += 1\n term_t = compone[cant_t]\n term_g = compone[cant_g]\n lccsal.append(-(term_a + term_c + term_t + term_g))\n tail = window[0]\n return lccsal\n\n\ndef lcc_simp(seq):\n 
\"\"\"Calculate Local Composition Complexity (LCC) for a sequence.\n\n seq - an unambiguous DNA sequence (a string or Seq object)\n\n Returns the Local Composition Complexity (LCC) value for the entire\n sequence (as a float).\n\n Reference:\n Andrzej K Konopka (2005) Sequence Complexity and Composition\n https://doi.org/10.1038/npg.els.0005260\n \"\"\"\n wsize = len(seq)\n upper = str(seq).upper()\n l2 = math.log(2)\n if \"A\" not in seq:\n term_a = 0\n # Check to avoid calculating the log of 0.\n else:\n term_a = ((upper.count(\"A\")) / float(wsize)) * (\n (math.log((upper.count(\"A\")) / float(wsize))) / l2\n )\n if \"C\" not in seq:\n term_c = 0\n else:\n term_c = ((upper.count(\"C\")) / float(wsize)) * (\n (math.log((upper.count(\"C\")) / float(wsize))) / l2\n )\n if \"T\" not in seq:\n term_t = 0\n else:\n term_t = ((upper.count(\"T\")) / float(wsize)) * (\n (math.log((upper.count(\"T\")) / float(wsize))) / l2\n )\n if \"G\" not in seq:\n term_g = 0\n else:\n term_g = ((upper.count(\"G\")) / float(wsize)) * (\n (math.log((upper.count(\"G\")) / float(wsize))) / l2\n )\n return -(term_a + term_c + term_t + term_g)\n", "path": "Bio/SeqUtils/lcc.py"}]}
| 2,766 | 649 |
gh_patches_debug_37593 | rasdani/github-patches | git_diff | getmoto__moto-1565 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Authorize Security Group Ingress Only Works with Multi-Rule?
Near as I can tell, I should be able to use a simplified form of authorizing ingress when I only need a single rule; but this doesn't seem to work with Moto. The multi-rule syntax does work, on the other hand.
See these tests:
```
import boto3
import pytest
from moto import mock_ec2
@mock_ec2
def test_security_group_ingress_succeeds():
ec2 = boto3.resource('ec2', 'ca-central-1')
sg = ec2.create_security_group(Description='Test SG',GroupName='test-sg')
assert len(sg.ip_permissions) == 0
sg.authorize_ingress(IpPermissions=[
{
'FromPort': 22,
'ToPort': 22,
'IpProtocol': 'tcp',
'IpRanges': [
{
'CidrIp': '192.168.0.1/32'
}
]
}
])
assert len(sg.ip_permissions) == 1
sg_after = ec2.SecurityGroup(sg.id)
assert len(sg_after.ip_permissions) == 1
@mock_ec2
def test_security_group_ingress_fails_without_multirule():
ec2 = boto3.resource('ec2', 'ca-central-1')
sg = ec2.create_security_group(Description='Test SG', GroupName='test-sg')
assert len(sg.ip_permissions) == 0
sg.authorize_ingress(CidrIp='192.168.0.1/32', FromPort=22, ToPort=22, IpProtocol='tcp')
# Fails
assert len(sg.ip_permissions) == 1
@mock_ec2
def test_security_group_ingress_fails_without_multirule_after_reload():
ec2 = boto3.resource('ec2', 'ca-central-1')
sg = ec2.create_security_group(Description='Test SG', GroupName='test-sg')
assert len(sg.ip_permissions) == 0
sg.authorize_ingress(CidrIp='192.168.0.1/32', FromPort=22, ToPort=22, IpProtocol='tcp')
# Also Fails
sg_after = ec2.SecurityGroup(sg.id)
assert len(sg_after.ip_permissions) == 1
```
The first test, using the multi-rule syntax with the `IpPermission` array, works fine.
The second two tests fail. AFAIK, this syntax is valid, but doesn't work with moto.
I expected all three tests to pass, but they don't. Am I doing something wrong, or is this a Moto defect?
Using moto 1.2.0, installed with pipenv, using python mocks. Both version 1.6.6, installed the same way.
</issue>
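For context on why the two call styles diverge: boto3 serializes them into different EC2 Query API parameter shapes, and only the list form produces an `IpPermissions.N.*` tree for the server to walk. The dictionaries below illustrate those two shapes; the parameter names follow the EC2 Query API convention, but the exact serialization details are an assumption rather than something taken from this report.

```python
# Roughly how the two authorize_ingress() calls above go over the wire
# (illustrative only; the group id is hypothetical).

multi_rule_params = {
    "Action": "AuthorizeSecurityGroupIngress",
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions.1.IpProtocol": "tcp",
    "IpPermissions.1.FromPort": "22",
    "IpPermissions.1.ToPort": "22",
    "IpPermissions.1.IpRanges.1.CidrIp": "192.168.0.1/32",
}

single_rule_params = {
    "Action": "AuthorizeSecurityGroupIngress",
    "GroupId": "sg-0123456789abcdef0",
    "IpProtocol": "tcp",
    "FromPort": "22",
    "ToPort": "22",
    "CidrIp": "192.168.0.1/32",
}

# A parser that only walks the IpPermissions.* subtree finds no rules at all
# in the second dictionary, which is consistent with the failing tests above.
```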
<code>
[start of moto/ec2/responses/security_groups.py]
1 from __future__ import unicode_literals
2
3 from moto.core.responses import BaseResponse
4 from moto.ec2.utils import filters_from_querystring
5
6
7 def try_parse_int(value, default=None):
8 try:
9 return int(value)
10 except (TypeError, ValueError):
11 return default
12
13
14 class SecurityGroups(BaseResponse):
15
16 def _process_rules_from_querystring(self):
17 group_name_or_id = (self._get_param('GroupName') or
18 self._get_param('GroupId'))
19
20 querytree = {}
21 for key, value in self.querystring.items():
22 key_splitted = key.split('.')
23 key_splitted = [try_parse_int(e, e) for e in key_splitted]
24
25 d = querytree
26 for subkey in key_splitted[:-1]:
27 if subkey not in d:
28 d[subkey] = {}
29 d = d[subkey]
30 d[key_splitted[-1]] = value
31
32 ip_permissions = querytree.get('IpPermissions') or {}
33 for ip_permission_idx in sorted(ip_permissions.keys()):
34 ip_permission = ip_permissions[ip_permission_idx]
35
36 ip_protocol = ip_permission.get('IpProtocol', [None])[0]
37 from_port = ip_permission.get('FromPort', [None])[0]
38 to_port = ip_permission.get('ToPort', [None])[0]
39
40 ip_ranges = []
41 ip_ranges_tree = ip_permission.get('IpRanges') or {}
42 for ip_range_idx in sorted(ip_ranges_tree.keys()):
43 ip_ranges.append(ip_ranges_tree[ip_range_idx]['CidrIp'][0])
44
45 source_groups = []
46 source_group_ids = []
47 groups_tree = ip_permission.get('Groups') or {}
48 for group_idx in sorted(groups_tree.keys()):
49 group_dict = groups_tree[group_idx]
50 if 'GroupId' in group_dict:
51 source_group_ids.append(group_dict['GroupId'][0])
52 elif 'GroupName' in group_dict:
53 source_groups.append(group_dict['GroupName'][0])
54
55 yield (group_name_or_id, ip_protocol, from_port, to_port, ip_ranges,
56 source_groups, source_group_ids)
57
58 def authorize_security_group_egress(self):
59 if self.is_not_dryrun('GrantSecurityGroupEgress'):
60 for args in self._process_rules_from_querystring():
61 self.ec2_backend.authorize_security_group_egress(*args)
62 return AUTHORIZE_SECURITY_GROUP_EGRESS_RESPONSE
63
64 def authorize_security_group_ingress(self):
65 if self.is_not_dryrun('GrantSecurityGroupIngress'):
66 for args in self._process_rules_from_querystring():
67 self.ec2_backend.authorize_security_group_ingress(*args)
68 return AUTHORIZE_SECURITY_GROUP_INGRESS_REPONSE
69
70 def create_security_group(self):
71 name = self._get_param('GroupName')
72 description = self._get_param('GroupDescription')
73 vpc_id = self._get_param('VpcId')
74
75 if self.is_not_dryrun('CreateSecurityGroup'):
76 group = self.ec2_backend.create_security_group(
77 name, description, vpc_id=vpc_id)
78 template = self.response_template(CREATE_SECURITY_GROUP_RESPONSE)
79 return template.render(group=group)
80
81 def delete_security_group(self):
82 # TODO this should raise an error if there are instances in the group.
83 # See
84 # http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DeleteSecurityGroup.html
85
86 name = self._get_param('GroupName')
87 sg_id = self._get_param('GroupId')
88
89 if self.is_not_dryrun('DeleteSecurityGroup'):
90 if name:
91 self.ec2_backend.delete_security_group(name)
92 elif sg_id:
93 self.ec2_backend.delete_security_group(group_id=sg_id)
94
95 return DELETE_GROUP_RESPONSE
96
97 def describe_security_groups(self):
98 groupnames = self._get_multi_param("GroupName")
99 group_ids = self._get_multi_param("GroupId")
100 filters = filters_from_querystring(self.querystring)
101
102 groups = self.ec2_backend.describe_security_groups(
103 group_ids=group_ids,
104 groupnames=groupnames,
105 filters=filters
106 )
107
108 template = self.response_template(DESCRIBE_SECURITY_GROUPS_RESPONSE)
109 return template.render(groups=groups)
110
111 def revoke_security_group_egress(self):
112 if self.is_not_dryrun('RevokeSecurityGroupEgress'):
113 for args in self._process_rules_from_querystring():
114 success = self.ec2_backend.revoke_security_group_egress(*args)
115 if not success:
116 return "Could not find a matching egress rule", dict(status=404)
117 return REVOKE_SECURITY_GROUP_EGRESS_RESPONSE
118
119 def revoke_security_group_ingress(self):
120 if self.is_not_dryrun('RevokeSecurityGroupIngress'):
121 for args in self._process_rules_from_querystring():
122 self.ec2_backend.revoke_security_group_ingress(*args)
123 return REVOKE_SECURITY_GROUP_INGRESS_REPONSE
124
125
126 CREATE_SECURITY_GROUP_RESPONSE = """<CreateSecurityGroupResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
127 <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
128 <return>true</return>
129 <groupId>{{ group.id }}</groupId>
130 </CreateSecurityGroupResponse>"""
131
132 DELETE_GROUP_RESPONSE = """<DeleteSecurityGroupResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
133 <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
134 <return>true</return>
135 </DeleteSecurityGroupResponse>"""
136
137 DESCRIBE_SECURITY_GROUPS_RESPONSE = """<DescribeSecurityGroupsResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
138 <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
139 <securityGroupInfo>
140 {% for group in groups %}
141 <item>
142 <ownerId>123456789012</ownerId>
143 <groupId>{{ group.id }}</groupId>
144 <groupName>{{ group.name }}</groupName>
145 <groupDescription>{{ group.description }}</groupDescription>
146 {% if group.vpc_id %}
147 <vpcId>{{ group.vpc_id }}</vpcId>
148 {% endif %}
149 <ipPermissions>
150 {% for rule in group.ingress_rules %}
151 <item>
152 <ipProtocol>{{ rule.ip_protocol }}</ipProtocol>
153 {% if rule.from_port %}
154 <fromPort>{{ rule.from_port }}</fromPort>
155 {% endif %}
156 {% if rule.to_port %}
157 <toPort>{{ rule.to_port }}</toPort>
158 {% endif %}
159 <groups>
160 {% for source_group in rule.source_groups %}
161 <item>
162 <userId>123456789012</userId>
163 <groupId>{{ source_group.id }}</groupId>
164 <groupName>{{ source_group.name }}</groupName>
165 </item>
166 {% endfor %}
167 </groups>
168 <ipRanges>
169 {% for ip_range in rule.ip_ranges %}
170 <item>
171 <cidrIp>{{ ip_range }}</cidrIp>
172 </item>
173 {% endfor %}
174 </ipRanges>
175 </item>
176 {% endfor %}
177 </ipPermissions>
178 <ipPermissionsEgress>
179 {% for rule in group.egress_rules %}
180 <item>
181 <ipProtocol>{{ rule.ip_protocol }}</ipProtocol>
182 <fromPort>{{ rule.from_port }}</fromPort>
183 <toPort>{{ rule.to_port }}</toPort>
184 <groups>
185 {% for source_group in rule.source_groups %}
186 <item>
187 <userId>123456789012</userId>
188 <groupId>{{ source_group.id }}</groupId>
189 <groupName>{{ source_group.name }}</groupName>
190 </item>
191 {% endfor %}
192 </groups>
193 <ipRanges>
194 {% for ip_range in rule.ip_ranges %}
195 <item>
196 <cidrIp>{{ ip_range }}</cidrIp>
197 </item>
198 {% endfor %}
199 </ipRanges>
200 </item>
201 {% endfor %}
202 </ipPermissionsEgress>
203 <tagSet>
204 {% for tag in group.get_tags() %}
205 <item>
206 <resourceId>{{ tag.resource_id }}</resourceId>
207 <resourceType>{{ tag.resource_type }}</resourceType>
208 <key>{{ tag.key }}</key>
209 <value>{{ tag.value }}</value>
210 </item>
211 {% endfor %}
212 </tagSet>
213 </item>
214 {% endfor %}
215 </securityGroupInfo>
216 </DescribeSecurityGroupsResponse>"""
217
218 AUTHORIZE_SECURITY_GROUP_INGRESS_REPONSE = """<AuthorizeSecurityGroupIngressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
219 <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
220 <return>true</return>
221 </AuthorizeSecurityGroupIngressResponse>"""
222
223 REVOKE_SECURITY_GROUP_INGRESS_REPONSE = """<RevokeSecurityGroupIngressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
224 <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
225 <return>true</return>
226 </RevokeSecurityGroupIngressResponse>"""
227
228 AUTHORIZE_SECURITY_GROUP_EGRESS_RESPONSE = """
229 <AuthorizeSecurityGroupEgressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
230 <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
231 <return>true</return>
232 </AuthorizeSecurityGroupEgressResponse>"""
233
234 REVOKE_SECURITY_GROUP_EGRESS_RESPONSE = """<RevokeSecurityGroupEgressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
235 <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
236 <return>true</return>
237 </RevokeSecurityGroupEgressResponse>"""
238
[end of moto/ec2/responses/security_groups.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/moto/ec2/responses/security_groups.py b/moto/ec2/responses/security_groups.py
--- a/moto/ec2/responses/security_groups.py
+++ b/moto/ec2/responses/security_groups.py
@@ -11,6 +11,29 @@
return default
+def parse_sg_attributes_from_dict(sg_attributes):
+ ip_protocol = sg_attributes.get('IpProtocol', [None])[0]
+ from_port = sg_attributes.get('FromPort', [None])[0]
+ to_port = sg_attributes.get('ToPort', [None])[0]
+
+ ip_ranges = []
+ ip_ranges_tree = sg_attributes.get('IpRanges') or {}
+ for ip_range_idx in sorted(ip_ranges_tree.keys()):
+ ip_ranges.append(ip_ranges_tree[ip_range_idx]['CidrIp'][0])
+
+ source_groups = []
+ source_group_ids = []
+ groups_tree = sg_attributes.get('Groups') or {}
+ for group_idx in sorted(groups_tree.keys()):
+ group_dict = groups_tree[group_idx]
+ if 'GroupId' in group_dict:
+ source_group_ids.append(group_dict['GroupId'][0])
+ elif 'GroupName' in group_dict:
+ source_groups.append(group_dict['GroupName'][0])
+
+ return ip_protocol, from_port, to_port, ip_ranges, source_groups, source_group_ids
+
+
class SecurityGroups(BaseResponse):
def _process_rules_from_querystring(self):
@@ -29,28 +52,17 @@
d = d[subkey]
d[key_splitted[-1]] = value
+ if 'IpPermissions' not in querytree:
+ # Handle single rule syntax
+ ip_protocol, from_port, to_port, ip_ranges, source_groups, source_group_ids = parse_sg_attributes_from_dict(querytree)
+ yield (group_name_or_id, ip_protocol, from_port, to_port, ip_ranges,
+ source_groups, source_group_ids)
+
ip_permissions = querytree.get('IpPermissions') or {}
for ip_permission_idx in sorted(ip_permissions.keys()):
ip_permission = ip_permissions[ip_permission_idx]
- ip_protocol = ip_permission.get('IpProtocol', [None])[0]
- from_port = ip_permission.get('FromPort', [None])[0]
- to_port = ip_permission.get('ToPort', [None])[0]
-
- ip_ranges = []
- ip_ranges_tree = ip_permission.get('IpRanges') or {}
- for ip_range_idx in sorted(ip_ranges_tree.keys()):
- ip_ranges.append(ip_ranges_tree[ip_range_idx]['CidrIp'][0])
-
- source_groups = []
- source_group_ids = []
- groups_tree = ip_permission.get('Groups') or {}
- for group_idx in sorted(groups_tree.keys()):
- group_dict = groups_tree[group_idx]
- if 'GroupId' in group_dict:
- source_group_ids.append(group_dict['GroupId'][0])
- elif 'GroupName' in group_dict:
- source_groups.append(group_dict['GroupName'][0])
+ ip_protocol, from_port, to_port, ip_ranges, source_groups, source_group_ids = parse_sg_attributes_from_dict(ip_permission)
yield (group_name_or_id, ip_protocol, from_port, to_port, ip_ranges,
source_groups, source_group_ids)
|
{"golden_diff": "diff --git a/moto/ec2/responses/security_groups.py b/moto/ec2/responses/security_groups.py\n--- a/moto/ec2/responses/security_groups.py\n+++ b/moto/ec2/responses/security_groups.py\n@@ -11,6 +11,29 @@\n return default\n \n \n+def parse_sg_attributes_from_dict(sg_attributes):\n+ ip_protocol = sg_attributes.get('IpProtocol', [None])[0]\n+ from_port = sg_attributes.get('FromPort', [None])[0]\n+ to_port = sg_attributes.get('ToPort', [None])[0]\n+\n+ ip_ranges = []\n+ ip_ranges_tree = sg_attributes.get('IpRanges') or {}\n+ for ip_range_idx in sorted(ip_ranges_tree.keys()):\n+ ip_ranges.append(ip_ranges_tree[ip_range_idx]['CidrIp'][0])\n+\n+ source_groups = []\n+ source_group_ids = []\n+ groups_tree = sg_attributes.get('Groups') or {}\n+ for group_idx in sorted(groups_tree.keys()):\n+ group_dict = groups_tree[group_idx]\n+ if 'GroupId' in group_dict:\n+ source_group_ids.append(group_dict['GroupId'][0])\n+ elif 'GroupName' in group_dict:\n+ source_groups.append(group_dict['GroupName'][0])\n+\n+ return ip_protocol, from_port, to_port, ip_ranges, source_groups, source_group_ids\n+\n+\n class SecurityGroups(BaseResponse):\n \n def _process_rules_from_querystring(self):\n@@ -29,28 +52,17 @@\n d = d[subkey]\n d[key_splitted[-1]] = value\n \n+ if 'IpPermissions' not in querytree:\n+ # Handle single rule syntax\n+ ip_protocol, from_port, to_port, ip_ranges, source_groups, source_group_ids = parse_sg_attributes_from_dict(querytree)\n+ yield (group_name_or_id, ip_protocol, from_port, to_port, ip_ranges,\n+ source_groups, source_group_ids)\n+\n ip_permissions = querytree.get('IpPermissions') or {}\n for ip_permission_idx in sorted(ip_permissions.keys()):\n ip_permission = ip_permissions[ip_permission_idx]\n \n- ip_protocol = ip_permission.get('IpProtocol', [None])[0]\n- from_port = ip_permission.get('FromPort', [None])[0]\n- to_port = ip_permission.get('ToPort', [None])[0]\n-\n- ip_ranges = []\n- ip_ranges_tree = ip_permission.get('IpRanges') or {}\n- for ip_range_idx in sorted(ip_ranges_tree.keys()):\n- ip_ranges.append(ip_ranges_tree[ip_range_idx]['CidrIp'][0])\n-\n- source_groups = []\n- source_group_ids = []\n- groups_tree = ip_permission.get('Groups') or {}\n- for group_idx in sorted(groups_tree.keys()):\n- group_dict = groups_tree[group_idx]\n- if 'GroupId' in group_dict:\n- source_group_ids.append(group_dict['GroupId'][0])\n- elif 'GroupName' in group_dict:\n- source_groups.append(group_dict['GroupName'][0])\n+ ip_protocol, from_port, to_port, ip_ranges, source_groups, source_group_ids = parse_sg_attributes_from_dict(ip_permission)\n \n yield (group_name_or_id, ip_protocol, from_port, to_port, ip_ranges,\n source_groups, source_group_ids)\n", "issue": "Authorize Security Group Ingress Only Works with Multi-Rule?\nNear as I can tell, I should be able to use a simplified form of authorizing ingress when I only need a single rule; but this doesn't seem to work with Moto. 
The multi-rule syntax does work, on the other hand.\r\n\r\nSee these tests:\r\n\r\n```\r\nimport boto3\r\nimport pytest\r\n\r\nfrom moto import mock_ec2\r\n\r\n@mock_ec2\r\ndef test_security_group_ingress_succeeds():\r\n ec2 = boto3.resource('ec2', 'ca-central-1')\r\n sg = ec2.create_security_group(Description='Test SG',GroupName='test-sg')\r\n\r\n assert len(sg.ip_permissions) == 0\r\n sg.authorize_ingress(IpPermissions=[\r\n {\r\n 'FromPort': 22,\r\n 'ToPort': 22,\r\n 'IpProtocol': 'tcp',\r\n 'IpRanges': [\r\n {\r\n 'CidrIp': '192.168.0.1/32'\r\n }\r\n ]\r\n }\r\n ])\r\n\r\n assert len(sg.ip_permissions) == 1\r\n\r\n sg_after = ec2.SecurityGroup(sg.id)\r\n assert len(sg_after.ip_permissions) == 1\r\n\r\n\r\n@mock_ec2\r\ndef test_security_group_ingress_fails_without_multirule():\r\n ec2 = boto3.resource('ec2', 'ca-central-1')\r\n sg = ec2.create_security_group(Description='Test SG', GroupName='test-sg')\r\n\r\n assert len(sg.ip_permissions) == 0\r\n sg.authorize_ingress(CidrIp='192.168.0.1/32', FromPort=22, ToPort=22, IpProtocol='tcp')\r\n\r\n # Fails\r\n assert len(sg.ip_permissions) == 1\r\n\r\n\r\n@mock_ec2\r\ndef test_security_group_ingress_fails_without_multirule_after_reload():\r\n ec2 = boto3.resource('ec2', 'ca-central-1')\r\n sg = ec2.create_security_group(Description='Test SG', GroupName='test-sg')\r\n\r\n assert len(sg.ip_permissions) == 0\r\n sg.authorize_ingress(CidrIp='192.168.0.1/32', FromPort=22, ToPort=22, IpProtocol='tcp')\r\n\r\n # Also Fails\r\n sg_after = ec2.SecurityGroup(sg.id)\r\n assert len(sg_after.ip_permissions) == 1\r\n```\r\n\r\nThe first test, using the multi-rule syntax with the `IpPermission` array, works fine.\r\n\r\nThe second two tests fail. AFAIK, this syntax is valid, but doesn't work with moto.\r\n\r\nI expected all three tests to pass, but they don't. Am I doing something wrong, or is this a Moto defect?\r\n\r\nUsing moto 1.2.0, installed with pipenv, using python mocks. 
Both version 1.6.6, installed the same way.\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom moto.core.responses import BaseResponse\nfrom moto.ec2.utils import filters_from_querystring\n\n\ndef try_parse_int(value, default=None):\n try:\n return int(value)\n except (TypeError, ValueError):\n return default\n\n\nclass SecurityGroups(BaseResponse):\n\n def _process_rules_from_querystring(self):\n group_name_or_id = (self._get_param('GroupName') or\n self._get_param('GroupId'))\n\n querytree = {}\n for key, value in self.querystring.items():\n key_splitted = key.split('.')\n key_splitted = [try_parse_int(e, e) for e in key_splitted]\n\n d = querytree\n for subkey in key_splitted[:-1]:\n if subkey not in d:\n d[subkey] = {}\n d = d[subkey]\n d[key_splitted[-1]] = value\n\n ip_permissions = querytree.get('IpPermissions') or {}\n for ip_permission_idx in sorted(ip_permissions.keys()):\n ip_permission = ip_permissions[ip_permission_idx]\n\n ip_protocol = ip_permission.get('IpProtocol', [None])[0]\n from_port = ip_permission.get('FromPort', [None])[0]\n to_port = ip_permission.get('ToPort', [None])[0]\n\n ip_ranges = []\n ip_ranges_tree = ip_permission.get('IpRanges') or {}\n for ip_range_idx in sorted(ip_ranges_tree.keys()):\n ip_ranges.append(ip_ranges_tree[ip_range_idx]['CidrIp'][0])\n\n source_groups = []\n source_group_ids = []\n groups_tree = ip_permission.get('Groups') or {}\n for group_idx in sorted(groups_tree.keys()):\n group_dict = groups_tree[group_idx]\n if 'GroupId' in group_dict:\n source_group_ids.append(group_dict['GroupId'][0])\n elif 'GroupName' in group_dict:\n source_groups.append(group_dict['GroupName'][0])\n\n yield (group_name_or_id, ip_protocol, from_port, to_port, ip_ranges,\n source_groups, source_group_ids)\n\n def authorize_security_group_egress(self):\n if self.is_not_dryrun('GrantSecurityGroupEgress'):\n for args in self._process_rules_from_querystring():\n self.ec2_backend.authorize_security_group_egress(*args)\n return AUTHORIZE_SECURITY_GROUP_EGRESS_RESPONSE\n\n def authorize_security_group_ingress(self):\n if self.is_not_dryrun('GrantSecurityGroupIngress'):\n for args in self._process_rules_from_querystring():\n self.ec2_backend.authorize_security_group_ingress(*args)\n return AUTHORIZE_SECURITY_GROUP_INGRESS_REPONSE\n\n def create_security_group(self):\n name = self._get_param('GroupName')\n description = self._get_param('GroupDescription')\n vpc_id = self._get_param('VpcId')\n\n if self.is_not_dryrun('CreateSecurityGroup'):\n group = self.ec2_backend.create_security_group(\n name, description, vpc_id=vpc_id)\n template = self.response_template(CREATE_SECURITY_GROUP_RESPONSE)\n return template.render(group=group)\n\n def delete_security_group(self):\n # TODO this should raise an error if there are instances in the group.\n # See\n # http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DeleteSecurityGroup.html\n\n name = self._get_param('GroupName')\n sg_id = self._get_param('GroupId')\n\n if self.is_not_dryrun('DeleteSecurityGroup'):\n if name:\n self.ec2_backend.delete_security_group(name)\n elif sg_id:\n self.ec2_backend.delete_security_group(group_id=sg_id)\n\n return DELETE_GROUP_RESPONSE\n\n def describe_security_groups(self):\n groupnames = self._get_multi_param(\"GroupName\")\n group_ids = self._get_multi_param(\"GroupId\")\n filters = filters_from_querystring(self.querystring)\n\n groups = self.ec2_backend.describe_security_groups(\n group_ids=group_ids,\n groupnames=groupnames,\n 
filters=filters\n )\n\n template = self.response_template(DESCRIBE_SECURITY_GROUPS_RESPONSE)\n return template.render(groups=groups)\n\n def revoke_security_group_egress(self):\n if self.is_not_dryrun('RevokeSecurityGroupEgress'):\n for args in self._process_rules_from_querystring():\n success = self.ec2_backend.revoke_security_group_egress(*args)\n if not success:\n return \"Could not find a matching egress rule\", dict(status=404)\n return REVOKE_SECURITY_GROUP_EGRESS_RESPONSE\n\n def revoke_security_group_ingress(self):\n if self.is_not_dryrun('RevokeSecurityGroupIngress'):\n for args in self._process_rules_from_querystring():\n self.ec2_backend.revoke_security_group_ingress(*args)\n return REVOKE_SECURITY_GROUP_INGRESS_REPONSE\n\n\nCREATE_SECURITY_GROUP_RESPONSE = \"\"\"<CreateSecurityGroupResponse xmlns=\"http://ec2.amazonaws.com/doc/2013-10-15/\">\n <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>\n <return>true</return>\n <groupId>{{ group.id }}</groupId>\n</CreateSecurityGroupResponse>\"\"\"\n\nDELETE_GROUP_RESPONSE = \"\"\"<DeleteSecurityGroupResponse xmlns=\"http://ec2.amazonaws.com/doc/2013-10-15/\">\n <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>\n <return>true</return>\n</DeleteSecurityGroupResponse>\"\"\"\n\nDESCRIBE_SECURITY_GROUPS_RESPONSE = \"\"\"<DescribeSecurityGroupsResponse xmlns=\"http://ec2.amazonaws.com/doc/2013-10-15/\">\n <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>\n <securityGroupInfo>\n {% for group in groups %}\n <item>\n <ownerId>123456789012</ownerId>\n <groupId>{{ group.id }}</groupId>\n <groupName>{{ group.name }}</groupName>\n <groupDescription>{{ group.description }}</groupDescription>\n {% if group.vpc_id %}\n <vpcId>{{ group.vpc_id }}</vpcId>\n {% endif %}\n <ipPermissions>\n {% for rule in group.ingress_rules %}\n <item>\n <ipProtocol>{{ rule.ip_protocol }}</ipProtocol>\n {% if rule.from_port %}\n <fromPort>{{ rule.from_port }}</fromPort>\n {% endif %}\n {% if rule.to_port %}\n <toPort>{{ rule.to_port }}</toPort>\n {% endif %}\n <groups>\n {% for source_group in rule.source_groups %}\n <item>\n <userId>123456789012</userId>\n <groupId>{{ source_group.id }}</groupId>\n <groupName>{{ source_group.name }}</groupName>\n </item>\n {% endfor %}\n </groups>\n <ipRanges>\n {% for ip_range in rule.ip_ranges %}\n <item>\n <cidrIp>{{ ip_range }}</cidrIp>\n </item>\n {% endfor %}\n </ipRanges>\n </item>\n {% endfor %}\n </ipPermissions>\n <ipPermissionsEgress>\n {% for rule in group.egress_rules %}\n <item>\n <ipProtocol>{{ rule.ip_protocol }}</ipProtocol>\n <fromPort>{{ rule.from_port }}</fromPort>\n <toPort>{{ rule.to_port }}</toPort>\n <groups>\n {% for source_group in rule.source_groups %}\n <item>\n <userId>123456789012</userId>\n <groupId>{{ source_group.id }}</groupId>\n <groupName>{{ source_group.name }}</groupName>\n </item>\n {% endfor %}\n </groups>\n <ipRanges>\n {% for ip_range in rule.ip_ranges %}\n <item>\n <cidrIp>{{ ip_range }}</cidrIp>\n </item>\n {% endfor %}\n </ipRanges>\n </item>\n {% endfor %}\n </ipPermissionsEgress>\n <tagSet>\n {% for tag in group.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </item>\n {% endfor %}\n </securityGroupInfo>\n</DescribeSecurityGroupsResponse>\"\"\"\n\nAUTHORIZE_SECURITY_GROUP_INGRESS_REPONSE = \"\"\"<AuthorizeSecurityGroupIngressResponse xmlns=\"http://ec2.amazonaws.com/doc/2013-10-15/\">\n 
<requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>\n <return>true</return>\n</AuthorizeSecurityGroupIngressResponse>\"\"\"\n\nREVOKE_SECURITY_GROUP_INGRESS_REPONSE = \"\"\"<RevokeSecurityGroupIngressResponse xmlns=\"http://ec2.amazonaws.com/doc/2013-10-15/\">\n <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>\n <return>true</return>\n</RevokeSecurityGroupIngressResponse>\"\"\"\n\nAUTHORIZE_SECURITY_GROUP_EGRESS_RESPONSE = \"\"\"\n<AuthorizeSecurityGroupEgressResponse xmlns=\"http://ec2.amazonaws.com/doc/2013-10-15/\">\n <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>\n <return>true</return>\n</AuthorizeSecurityGroupEgressResponse>\"\"\"\n\nREVOKE_SECURITY_GROUP_EGRESS_RESPONSE = \"\"\"<RevokeSecurityGroupEgressResponse xmlns=\"http://ec2.amazonaws.com/doc/2013-10-15/\">\n <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>\n <return>true</return>\n</RevokeSecurityGroupEgressResponse>\"\"\"\n", "path": "moto/ec2/responses/security_groups.py"}]}
| 4,020 | 721 |
gh_patches_debug_18313 | rasdani/github-patches | git_diff | docker__docker-py-575 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docker.utils.decorators.check_resource problem
When using `docker_client.start(**kwargs)` to start the container, the following exception is raised:
```
c.start(**s_kwargs)
File "/home/simplecloud/shiyanlou/env/local/lib/python2.7/site-packages/docker/utils/decorators.py", line 12, in wrapped
return f(self, resource_id, *args, **kwargs)
TypeError: start() got multiple values for keyword argument 'container'
```
</issue>
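The clash can be reproduced without Docker at all, since it lives entirely in how the decorator forwards arguments. The stand-in client class and exception below are hypothetical; the decorator body mirrors the one in the code listing that follows.

```python
class NullResource(Exception):          # stand-in for docker.errors.NullResource
    pass


def check_resource(f):                  # same decorator body as in the listing below
    def wrapped(self, resource_id=None, *args, **kwargs):
        if resource_id is None and (
            kwargs.get('container') is None and kwargs.get('image') is None
        ):
            raise NullResource('image or container param is None')
        return f(self, resource_id, *args, **kwargs)
    return wrapped


class FakeClient:                       # hypothetical minimal client
    @check_resource
    def start(self, container, **host_config):
        return container


FakeClient().start(container='abc123')
# TypeError: start() got multiple values for argument 'container'
# resource_id is forwarded positionally as None while 'container' is still
# sitting in kwargs, so start() receives that argument twice.
```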
<code>
[start of docker/utils/decorators.py]
1 from .. import errors
2
3
4 def check_resource(f):
5 def wrapped(self, resource_id=None, *args, **kwargs):
6 if resource_id is None and (
7 kwargs.get('container') is None and kwargs.get('image') is None
8 ):
9 raise errors.NullResource(
10 'image or container param is None'
11 )
12 return f(self, resource_id, *args, **kwargs)
13 return wrapped
14
[end of docker/utils/decorators.py]
[start of docker/version.py]
1 version = "1.2.1-dev"
2 version_info = tuple([int(d) for d in version.replace("-dev", "").split(".")])
3
[end of docker/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docker/utils/decorators.py b/docker/utils/decorators.py
--- a/docker/utils/decorators.py
+++ b/docker/utils/decorators.py
@@ -3,11 +3,14 @@
def check_resource(f):
def wrapped(self, resource_id=None, *args, **kwargs):
- if resource_id is None and (
- kwargs.get('container') is None and kwargs.get('image') is None
- ):
- raise errors.NullResource(
- 'image or container param is None'
- )
+ if resource_id is None:
+ if kwargs.get('container'):
+ resource_id = kwargs.pop('container')
+ elif kwargs.get('image'):
+ resource_id = kwargs.pop('image')
+ else:
+ raise errors.NullResource(
+ 'image or container param is undefined'
+ )
return f(self, resource_id, *args, **kwargs)
return wrapped
diff --git a/docker/version.py b/docker/version.py
--- a/docker/version.py
+++ b/docker/version.py
@@ -1,2 +1,2 @@
-version = "1.2.1-dev"
+version = "1.2.1"
version_info = tuple([int(d) for d in version.replace("-dev", "").split(".")])
|
{"golden_diff": "diff --git a/docker/utils/decorators.py b/docker/utils/decorators.py\n--- a/docker/utils/decorators.py\n+++ b/docker/utils/decorators.py\n@@ -3,11 +3,14 @@\n \n def check_resource(f):\n def wrapped(self, resource_id=None, *args, **kwargs):\n- if resource_id is None and (\n- kwargs.get('container') is None and kwargs.get('image') is None\n- ):\n- raise errors.NullResource(\n- 'image or container param is None'\n- )\n+ if resource_id is None:\n+ if kwargs.get('container'):\n+ resource_id = kwargs.pop('container')\n+ elif kwargs.get('image'):\n+ resource_id = kwargs.pop('image')\n+ else:\n+ raise errors.NullResource(\n+ 'image or container param is undefined'\n+ )\n return f(self, resource_id, *args, **kwargs)\n return wrapped\ndiff --git a/docker/version.py b/docker/version.py\n--- a/docker/version.py\n+++ b/docker/version.py\n@@ -1,2 +1,2 @@\n-version = \"1.2.1-dev\"\n+version = \"1.2.1\"\n version_info = tuple([int(d) for d in version.replace(\"-dev\", \"\").split(\".\")])\n", "issue": "docker.utils.decorators.check_resource problem\nWhen use `docker_client.start(**kwargs)` to start the container, will be raise follow exception:\n\n```\nc.start(**s_kwargs)\n File \"/home/simplecloud/shiyanlou/env/local/lib/python2.7/site-packages/docker/utils/decorators.py\", line 12, in wrapped\n return f(self, resource_id, *args, **kwargs)\nTypeError: start() got multiple values for keyword argument 'container'\n```\n\n", "before_files": [{"content": "from .. import errors\n\n\ndef check_resource(f):\n def wrapped(self, resource_id=None, *args, **kwargs):\n if resource_id is None and (\n kwargs.get('container') is None and kwargs.get('image') is None\n ):\n raise errors.NullResource(\n 'image or container param is None'\n )\n return f(self, resource_id, *args, **kwargs)\n return wrapped\n", "path": "docker/utils/decorators.py"}, {"content": "version = \"1.2.1-dev\"\nversion_info = tuple([int(d) for d in version.replace(\"-dev\", \"\").split(\".\")])\n", "path": "docker/version.py"}]}
| 790 | 279 |
gh_patches_debug_25492
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-643
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create owner references among master and worker pods
Currently we have the following two ways to delete master + worker pods:
* Delete each pod one by one
* Delete all pods related to this elasticdl run via `elasticdl_job_name` label `kubectl delete pod -l elasticdl_job_name=test-job-1559292773-93`
It would be much easier if users could just delete the master pod and then the worker pods would be deleted automatically. This would be possible if there were owner references among master and worker pods.
</issue>
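The issue above proposes Kubernetes owner references as the cascade mechanism. As a rough illustration with the Kubernetes Python client — the namespace, label value, and pod names below are assumptions, not values taken from the repository — a worker pod's metadata could point at the master pod like this:

```python
# Illustrative sketch only: give a worker pod an owner reference to the master
# pod, so deleting the master cascades to its workers.
from kubernetes import client, config

config.load_incluster_config()
v1 = client.CoreV1Api()

# Resolve the master pod for this job; a real implementation would match it
# precisely (e.g. by name). Here we just take the first labelled pod.
master = v1.list_namespaced_pod(
    namespace="default",
    label_selector="elasticdl_job_name=test-job-1559292773-93",
).items[0]

worker_metadata = client.V1ObjectMeta(
    name="elasticdl-worker-test-job-1559292773-93-0",
    labels={"app": "elasticdl", "elasticdl_job_name": "test-job-1559292773-93"},
    owner_references=[
        client.V1OwnerReference(
            api_version="v1",
            kind="Pod",
            name=master.metadata.name,
            uid=master.metadata.uid,
            block_owner_deletion=True,
        )
    ],
)
# worker_metadata would then be used when constructing the worker's V1Pod.
```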
<code>
[start of elasticdl/python/elasticdl/master/k8s_client.py]
1 import logging
2 import os
3 import threading
4 import traceback
5
6 from kubernetes import client, config, watch
7 from kubernetes.client import (
8 V1PersistentVolumeClaimVolumeSource as pvcVolumeSource,
9 )
10
11 WORKER_POD_NAME_PREFIX = "elasticdl-worker-"
12
13
14 class Client(object):
15 def __init__(self, *, worker_image, namespace, job_name, event_callback):
16 """
17 ElasticDL k8s client.
18
19 Args:
20 worker_image: Docker image path for ElasticDL workers.
21 namespace: k8s namespace for ElasticDL pods.
22 job_name: ElasticDL job name, should be unique in the namespace.
23 Used as worker pod name prefix and value for "elasticdl" label.
24 event_callback: If not None, an event watcher will be created and
25 events passed to the callback.
26 """
27 if os.getenv("KUBERNETES_SERVICE_HOST"):
28 # We are running inside k8s
29 config.load_incluster_config()
30 else:
31 # Use user's kube config
32 config.load_kube_config()
33
34 self._v1 = client.CoreV1Api()
35 self._logger = logging.getLogger(__name__)
36 self._image = worker_image
37 self._ns = namespace
38 self._job_name = job_name
39 self._event_cb = event_callback
40 if self._event_cb:
41 threading.Thread(
42 target=self._watch, name="event_watcher", daemon=True
43 ).start()
44
45 def _watch(self):
46 stream = watch.Watch().stream(
47 self._v1.list_namespaced_pod,
48 self._ns,
49 label_selector="elasticdl_job_name=" + self._job_name,
50 )
51 for event in stream:
52 try:
53 self._event_cb(event)
54 except Exception:
55 traceback.print_exc()
56
57 def get_worker_pod_name(self, worker_id):
58 return WORKER_POD_NAME_PREFIX + self._job_name + "-" + str(worker_id)
59
60 def _create_worker_pod(
61 self,
62 worker_id,
63 resource_requests,
64 resource_limits,
65 priority,
66 mount_path,
67 volume_name,
68 image_pull_policy,
69 command,
70 args,
71 restart_policy,
72 ):
73 # Worker container config
74 container = client.V1Container(
75 name=self.get_worker_pod_name(worker_id),
76 image=self._image,
77 command=command,
78 resources=client.V1ResourceRequirements(
79 requests=resource_requests, limits=resource_limits
80 ),
81 image_pull_policy=image_pull_policy,
82 args=args,
83 )
84
85 # Pod
86 spec = client.V1PodSpec(
87 containers=[container], restart_policy=restart_policy
88 )
89
90 # Mount data path
91 if mount_path is not None and volume_name is not None:
92 volume = client.V1Volume(
93 name="data-volume",
94 persistent_volume_claim=pvcVolumeSource(
95 claim_name="fileserver-claim", read_only=False
96 ),
97 )
98 spec.volumes = [volume]
99 container.volume_mounts = [
100 client.V1VolumeMount(name=volume_name, mount_path=mount_path)
101 ]
102
103 if priority is not None:
104 spec.priority_class_name = priority
105
106 pod = client.V1Pod(
107 spec=spec,
108 metadata=client.V1ObjectMeta(
109 name=self.get_worker_pod_name(worker_id),
110 labels={
111 "app": "elasticdl",
112 "elasticdl_job_name": self._job_name,
113 },
114 ),
115 )
116 return pod
117
118 def create_worker(
119 self,
120 worker_id,
121 resource_requests,
122 resource_limits,
123 priority=None,
124 mount_path=None,
125 volume_name=None,
126 image_pull_policy=None,
127 command=None,
128 args=None,
129 restart_policy="OnFailure",
130 ):
131 self._logger.info("Creating worker: " + str(worker_id))
132 pod = self._create_worker_pod(
133 worker_id,
134 resource_requests,
135 resource_limits,
136 priority,
137 mount_path,
138 volume_name,
139 image_pull_policy,
140 command=command,
141 args=args,
142 restart_policy=restart_policy,
143 )
144 return self._v1.create_namespaced_pod(self._ns, pod)
145
146 def delete_worker(self, worker_id):
147 self._logger.info("Deleting worker: " + str(worker_id))
148 self._v1.delete_namespaced_pod(
149 self.get_worker_pod_name(worker_id),
150 self._ns,
151 body=client.V1DeleteOptions(grace_period_seconds=0),
152 )
153
[end of elasticdl/python/elasticdl/master/k8s_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticdl/python/elasticdl/master/k8s_client.py b/elasticdl/python/elasticdl/master/k8s_client.py
--- a/elasticdl/python/elasticdl/master/k8s_client.py
+++ b/elasticdl/python/elasticdl/master/k8s_client.py
@@ -103,6 +103,16 @@
if priority is not None:
spec.priority_class_name = priority
+ # Find that master pod that will be used as the owner reference
+ # for this worker pod.
+ pods = self._v1.list_namespaced_pod(
+ namespace=self._ns,
+ label_selector="elasticdl_job_name=" + self._job_name
+ ).items
+ master_pod = [pod for pod in pods if (
+ pod.metadata.name == "elasticdl-master-" + self._job_name
+ )][0]
+
pod = client.V1Pod(
spec=spec,
metadata=client.V1ObjectMeta(
@@ -111,6 +121,17 @@
"app": "elasticdl",
"elasticdl_job_name": self._job_name,
},
+ # TODO: Add tests for this once we've done refactoring on
+ # k8s client code and the constant strings
+ owner_references=[
+ client.V1OwnerReference(
+ api_version="v1",
+ block_owner_deletion=True,
+ kind="Pod",
+ name=master_pod.metadata.name,
+ uid=master_pod.metadata.uid,
+ ),
+ ],
),
)
return pod
|
{"golden_diff": "diff --git a/elasticdl/python/elasticdl/master/k8s_client.py b/elasticdl/python/elasticdl/master/k8s_client.py\n--- a/elasticdl/python/elasticdl/master/k8s_client.py\n+++ b/elasticdl/python/elasticdl/master/k8s_client.py\n@@ -103,6 +103,16 @@\n if priority is not None:\n spec.priority_class_name = priority\n \n+ # Find that master pod that will be used as the owner reference\n+ # for this worker pod.\n+ pods = self._v1.list_namespaced_pod(\n+ namespace=self._ns,\n+ label_selector=\"elasticdl_job_name=\" + self._job_name\n+ ).items\n+ master_pod = [pod for pod in pods if (\n+ pod.metadata.name == \"elasticdl-master-\" + self._job_name\n+ )][0]\n+\n pod = client.V1Pod(\n spec=spec,\n metadata=client.V1ObjectMeta(\n@@ -111,6 +121,17 @@\n \"app\": \"elasticdl\",\n \"elasticdl_job_name\": self._job_name,\n },\n+ # TODO: Add tests for this once we've done refactoring on\n+ # k8s client code and the constant strings\n+ owner_references=[\n+ client.V1OwnerReference(\n+ api_version=\"v1\",\n+ block_owner_deletion=True,\n+ kind=\"Pod\",\n+ name=master_pod.metadata.name,\n+ uid=master_pod.metadata.uid,\n+ ),\n+ ],\n ),\n )\n return pod\n", "issue": "Create owner references among master and worker pods\nCurrently we have the following two ways to delete master + worker pods:\r\n* Delete each pod one by one\r\n* Delete all pods related to this elasticdl run via `elasticdl_job_name` label `kubectl delete pod -l elasticdl_job_name=test-job-1559292773-93`\r\n\r\nIt would be much easier if users could just delete master pod and then worker pods can be delete automatically. This would be possible if there are owner references among master and worker pods.\n", "before_files": [{"content": "import logging\nimport os\nimport threading\nimport traceback\n\nfrom kubernetes import client, config, watch\nfrom kubernetes.client import (\n V1PersistentVolumeClaimVolumeSource as pvcVolumeSource,\n)\n\nWORKER_POD_NAME_PREFIX = \"elasticdl-worker-\"\n\n\nclass Client(object):\n def __init__(self, *, worker_image, namespace, job_name, event_callback):\n \"\"\"\n ElasticDL k8s client.\n\n Args:\n worker_image: Docker image path for ElasticDL workers.\n namespace: k8s namespace for ElasticDL pods.\n job_name: ElasticDL job name, should be unique in the namespace.\n Used as worker pod name prefix and value for \"elasticdl\" label.\n event_callback: If not None, an event watcher will be created and\n events passed to the callback.\n \"\"\"\n if os.getenv(\"KUBERNETES_SERVICE_HOST\"):\n # We are running inside k8s\n config.load_incluster_config()\n else:\n # Use user's kube config\n config.load_kube_config()\n\n self._v1 = client.CoreV1Api()\n self._logger = logging.getLogger(__name__)\n self._image = worker_image\n self._ns = namespace\n self._job_name = job_name\n self._event_cb = event_callback\n if self._event_cb:\n threading.Thread(\n target=self._watch, name=\"event_watcher\", daemon=True\n ).start()\n\n def _watch(self):\n stream = watch.Watch().stream(\n self._v1.list_namespaced_pod,\n self._ns,\n label_selector=\"elasticdl_job_name=\" + self._job_name,\n )\n for event in stream:\n try:\n self._event_cb(event)\n except Exception:\n traceback.print_exc()\n\n def get_worker_pod_name(self, worker_id):\n return WORKER_POD_NAME_PREFIX + self._job_name + \"-\" + str(worker_id)\n\n def _create_worker_pod(\n self,\n worker_id,\n resource_requests,\n resource_limits,\n priority,\n mount_path,\n volume_name,\n image_pull_policy,\n command,\n args,\n restart_policy,\n ):\n # Worker container config\n container 
= client.V1Container(\n name=self.get_worker_pod_name(worker_id),\n image=self._image,\n command=command,\n resources=client.V1ResourceRequirements(\n requests=resource_requests, limits=resource_limits\n ),\n image_pull_policy=image_pull_policy,\n args=args,\n )\n\n # Pod\n spec = client.V1PodSpec(\n containers=[container], restart_policy=restart_policy\n )\n\n # Mount data path\n if mount_path is not None and volume_name is not None:\n volume = client.V1Volume(\n name=\"data-volume\",\n persistent_volume_claim=pvcVolumeSource(\n claim_name=\"fileserver-claim\", read_only=False\n ),\n )\n spec.volumes = [volume]\n container.volume_mounts = [\n client.V1VolumeMount(name=volume_name, mount_path=mount_path)\n ]\n\n if priority is not None:\n spec.priority_class_name = priority\n\n pod = client.V1Pod(\n spec=spec,\n metadata=client.V1ObjectMeta(\n name=self.get_worker_pod_name(worker_id),\n labels={\n \"app\": \"elasticdl\",\n \"elasticdl_job_name\": self._job_name,\n },\n ),\n )\n return pod\n\n def create_worker(\n self,\n worker_id,\n resource_requests,\n resource_limits,\n priority=None,\n mount_path=None,\n volume_name=None,\n image_pull_policy=None,\n command=None,\n args=None,\n restart_policy=\"OnFailure\",\n ):\n self._logger.info(\"Creating worker: \" + str(worker_id))\n pod = self._create_worker_pod(\n worker_id,\n resource_requests,\n resource_limits,\n priority,\n mount_path,\n volume_name,\n image_pull_policy,\n command=command,\n args=args,\n restart_policy=restart_policy,\n )\n return self._v1.create_namespaced_pod(self._ns, pod)\n\n def delete_worker(self, worker_id):\n self._logger.info(\"Deleting worker: \" + str(worker_id))\n self._v1.delete_namespaced_pod(\n self.get_worker_pod_name(worker_id),\n self._ns,\n body=client.V1DeleteOptions(grace_period_seconds=0),\n )\n", "path": "elasticdl/python/elasticdl/master/k8s_client.py"}]}
| 1,966 | 352 |
gh_patches_debug_11323
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-19201
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in keras.distribution.initialize()
Hi,
Calling `keras.distribution.initialize` fails because of a typo in the JAX backend: the function passes the `corrdinator_address` argument instead of `coordinator_address` to `jax.distributed.initialize`.
```log
---> 13 keras.distribution.initialize()
File /usr/local/lib/python3.10/site-packages/keras/src/distribution/distribution_lib.py:131, in initialize(job_addresses, num_processes, proceed_id)
129 if proceed_id is None and "KERAS_DISTRIBUTION_PROCESS_ID" in os.environ:
130 proceed_id = int(os.environ["KERAS_DISTRIBUTION_PROCESS_ID"])
--> 131 distribution_lib.initialize(job_addresses, num_processes, proceed_id)
File /usr/local/lib/python3.10/site-packages/keras/src/backend/jax/distribution_lib.py:207, in initialize(job_addresses, num_processes, process_id)
204 else:
205 corrdinator_address = job_addresses
--> 207 jax.distributed.initialize(
208 corrdinator_address=corrdinator_address,
209 num_processes=num_processes,
210 process_id=process_id,
211 )
TypeError: initialize() got an unexpected keyword argument 'corrdinator_address'
```
</issue>
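The report boils down to a misspelled keyword argument. A corrected forwarding call would look roughly like the sketch below; the address and process values are placeholders, not configuration from the repository:

```python
import jax

# Sketch of the corrected call: JAX expects `coordinator_address`,
# not the misspelled `corrdinator_address`. Values are placeholders.
jax.distributed.initialize(
    coordinator_address="10.0.0.1:1234",
    num_processes=2,
    process_id=0,
)
```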
<code>
[start of keras/backend/jax/distribution_lib.py]
1 """!!!DO NOT USE!!!
2
3 Distribution related class for JAX backend.
4
5 This is just a prototype and we might want to unify it
6 with other backends in the future.
7 """
8
9 import jax
10 import numpy as np
11
12 from keras.utils import jax_utils
13
14
15 def list_devices(device_type=None):
16 """Return all the available devices based on the device type.
17
18 Note that this should return the global devices in a distributed setting.
19
20 Args:
21 device_type: string of `"cpu"`, `"gpu"` or `"tpu"`. Defaults to `"gpu"`
22 or `"tpu"` if available when device_type is not provided. Otherwise
23 will return the `"cpu"` devices.
24
25 Return:
26 List of devices that are available for distribute computation.
27 """
28 device_type = device_type.lower() if device_type else None
29 jax_devices = jax.devices(backend=device_type)
30 return [f"{device.platform}:{device.id}" for device in jax_devices]
31
32
33 def distribute_variable(value, layout):
34 """Create a distributed variable for JAX.
35
36 Since JAX doesn't have a variable class, this will just return a `jax.Array`
37 with the corresponding layout/sharding specified.
38
39 Note that this function should be used in eager context, not in jitted
40 function.
41
42 Args:
43 value: the initial value of the variable.
44 layout: `TensorLayout` for the created variable, or a
45 `jax.sharding.Sharding` instance.
46
47 Returns:
48 jax.Array which is the distributed variable.
49 """
50 if not isinstance(layout, jax.sharding.Sharding):
51 layout = _to_jax_layout(layout)
52 if isinstance(
53 value, (jax.Array, jax.numpy.ndarray)
54 ) and value.sharding.is_equivalent_to(layout, ndim=len(value.shape)):
55 # Skip the relayout if the value is already having the proper sharding
56 return value
57
58 if layout.is_fully_addressable:
59 return jax.device_put(value, layout)
60 else:
61 # Need to only distribute the value to local addressible devices, and
62 # repack them back into global format.
63 mapping = layout.addressable_devices_indices_map(value.shape)
64 local_values = jax.device_put(
65 [value[i] for i in mapping.values()], list(mapping.keys())
66 )
67 global_value = jax.make_array_from_single_device_arrays(
68 value.shape, layout, local_values
69 )
70 return global_value
71
72
73 def distribute_tensor(tensor, layout):
74 """Distribute the tensor based on the layout.
75
76 Note that this function can be used both in eager context, or within a
77 jitted function.
78
79 Args:
80 tensor: `jax.Array` that need to be distributed.
81 layout: `TensorLayout` for the distribution information, or a
82 `jax.sharding.Sharding` instance.
83
84 Returns:
85 Distributed value.
86 """
87 if not isinstance(layout, jax.sharding.Sharding):
88 layout = _to_jax_layout(layout)
89 # TODO(scottzhu): This might not be a cheap check, we should consider
90 # have some proper JAX API for doing this check.
91 if jax_utils.is_in_jax_tracing_scope():
92 return jax.lax.with_sharding_constraint(tensor, layout)
93
94 if layout.is_fully_addressable:
95 return jax.device_put(tensor, layout)
96 else:
97 # Need to only distribute the value to local addressible devices, and
98 # repack them back into global format.
99 mapping = layout.addressable_devices_indices_map(tensor.shape)
100 local_values = jax.device_put(
101 [tensor[i] for i in mapping.values()], list(mapping.keys())
102 )
103 global_value = jax.make_array_from_single_device_arrays(
104 tensor.shape, layout, local_values
105 )
106 return global_value
107
108
109 def distribute_data_input(inputs, layout):
110 """Distribute the input data with the corresponding layout.
111
112 Note that the inputs here is a local worker batch. Within the local worker,
113 the data need to be further partitioned to map to the each of the devices.
114
115 Args:
116 inputs: `jax.Array` that is already sharded to a local process size.
117 layout: `TensorLayout` for the distribution information, or a
118 `jax.sharding.Sharding` instance.
119
120 Returns:
121 Distributed inputs thats been properly put to local devices.
122 """
123 if not isinstance(layout, jax.sharding.Sharding):
124 layout = _to_jax_layout(layout)
125 if layout.is_fully_addressable:
126 return jax.device_put(inputs, layout)
127
128 # We need the jax mesh information to determine how to place the data
129 # on to each of the worker.
130 jax_mesh = layout.mesh
131 mesh_rank = len(jax_mesh.shape)
132 per_process_batch_size = inputs.shape[0]
133 if mesh_rank == 1:
134 # This is data parallel mesh only. We will split the full data
135 # across the batch dim.
136 num_split = jax.local_device_count()
137 per_replica_batch_size = per_process_batch_size // num_split
138 if per_process_batch_size % per_replica_batch_size != 0:
139 raise ValueError(
140 f"The local batch size {per_process_batch_size} is not"
141 "divisible by the number of local replicas "
142 f"{num_split}"
143 )
144 global_batch_size = per_process_batch_size * jax.process_count()
145 per_replica_batches = jax.numpy.split(inputs, num_split, axis=0)
146 elif mesh_rank == 2:
147 # Data+Model parallel
148 # In this case, we need to check if the mesh batch dim shape is large
149 # than number of local devices, so that we can decide whether a split
150 # is needed for the data, or a repeat/copy of the data is needed for
151 # each of the device.
152 # TODO(scottzhu): The mesh batch dim name is not available here, since
153 # we only have jax Mesh. We assume the first dim is for batch, and
154 # second dim is for model for now.
155 mesh_batch_dim_size = list(jax_mesh.shape.values())[0]
156 local_device_count = jax.local_device_count()
157 if mesh_batch_dim_size < local_device_count:
158 # No split needed, we only need to repeat here.
159 global_batch_size = per_process_batch_size
160 per_replica_batches = [inputs for _ in range(local_device_count)]
161 else:
162 # Note that global batch size is not simply per_process_batch_size *
163 # num_process. It actually depends on the model dim size.
164 global_batch_size = per_process_batch_size * (
165 mesh_batch_dim_size // local_device_count
166 )
167 per_replica_batches = jax.numpy.split(
168 inputs, local_device_count, axis=0
169 )
170 else:
171 raise ValueError(
172 "Only 1D or 2D mesh is supported at the moment. "
173 f"Received mesh shape = {jax_mesh.shape}"
174 )
175
176 global_shape = (global_batch_size,) + inputs.shape[1:]
177 global_batch_array = jax.make_array_from_single_device_arrays(
178 global_shape,
179 layout,
180 arrays=[
181 jax.device_put(batch, device)
182 for batch, device in zip(
183 per_replica_batches, layout.addressable_devices
184 )
185 ],
186 )
187 return global_batch_array
188
189
190 def initialize(job_addresses, num_processes, process_id):
191 if job_addresses and "," in job_addresses:
192 # When user provide all the job addresses, we will split and get the
193 # first one, which is the coordinator.
194 job_addresses = job_addresses.split(",")
195 # Do a sanity check to make sure the number of addresses also match
196 # the num_processes.
197 if num_processes is not None and num_processes != len(job_addresses):
198 raise ValueError(
199 f"The provided job_addresses {job_addresses} has "
200 f"{len(job_addresses)} jobs, but num_processes is "
201 f"{num_processes}"
202 )
203 corrdinator_address = job_addresses[0]
204 else:
205 corrdinator_address = job_addresses
206
207 jax.distributed.initialize(
208 corrdinator_address=corrdinator_address,
209 num_processes=num_processes,
210 process_id=process_id,
211 )
212
213
214 def num_processes():
215 """Return the number of processes for the current distribution setting."""
216 return jax.process_count()
217
218
219 def process_id():
220 """Return the current process ID for the distribution setting."""
221 return jax.process_index()
222
223
224 def _to_jax_device(device_id):
225 if isinstance(device_id, jax.Device):
226 return device_id
227 device_type, index = device_id.split(":")
228 index = int(index)
229 devices = jax.devices(backend=device_type)
230 if index >= len(devices):
231 raise ValueError(f"Unknown device: {device_id}")
232 return devices[index]
233
234
235 def _to_jax_mesh(device_mesh):
236 """Convert the DeviceMesh to JAX backend specific Mesh.
237
238 Args:
239 device_mesh: DeviceMesh instance to convert.
240
241 Returns:
242 A `jax.sharding.Mesh` instance.
243 """
244 shape = device_mesh.devices.shape
245 devices = [_to_jax_device(d) for d in device_mesh.devices.flatten()]
246 devices = np.array(devices).reshape(shape)
247 return jax.sharding.Mesh(devices, device_mesh.axis_names)
248
249
250 def _to_jax_layout(tensor_layout):
251 """Convert the TensorLayout to JAX backend specific Sharding.
252
253 Args:
254 tensor_layout: TensorLayout instance to convert.
255
256 Returns:
257 A `jax.sharding.NamedSharding` instance.
258 """
259 if tensor_layout.device_mesh is None:
260 raise ValueError(
261 "Cannot create sharding when device mesh is not set "
262 "for TensorLayout."
263 )
264 partition_spec = jax.sharding.PartitionSpec(*tensor_layout.axes)
265 jax_mesh = _to_jax_mesh(tensor_layout.device_mesh)
266 return jax.sharding.NamedSharding(jax_mesh, partition_spec)
267
[end of keras/backend/jax/distribution_lib.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras/backend/jax/distribution_lib.py b/keras/backend/jax/distribution_lib.py
--- a/keras/backend/jax/distribution_lib.py
+++ b/keras/backend/jax/distribution_lib.py
@@ -200,12 +200,12 @@
f"{len(job_addresses)} jobs, but num_processes is "
f"{num_processes}"
)
- corrdinator_address = job_addresses[0]
+ coordinator_address = job_addresses[0]
else:
- corrdinator_address = job_addresses
+ coordinator_address = job_addresses
jax.distributed.initialize(
- corrdinator_address=corrdinator_address,
+ coordinator_address=coordinator_address,
num_processes=num_processes,
process_id=process_id,
)
|
{"golden_diff": "diff --git a/keras/backend/jax/distribution_lib.py b/keras/backend/jax/distribution_lib.py\n--- a/keras/backend/jax/distribution_lib.py\n+++ b/keras/backend/jax/distribution_lib.py\n@@ -200,12 +200,12 @@\n f\"{len(job_addresses)} jobs, but num_processes is \"\n f\"{num_processes}\"\n )\n- corrdinator_address = job_addresses[0]\n+ coordinator_address = job_addresses[0]\n else:\n- corrdinator_address = job_addresses\n+ coordinator_address = job_addresses\n \n jax.distributed.initialize(\n- corrdinator_address=corrdinator_address,\n+ coordinator_address=coordinator_address,\n num_processes=num_processes,\n process_id=process_id,\n )\n", "issue": "Typo in keras.distribution.initialize()\nHi,\r\n\r\nThere is a typo when calling `keras.distribution.initialize` due to a typo in the jax backend. The function pass the `corrdinator_address` argument instead of `coordinator_address` to `jax.distributed.initialize`\r\n\r\n```log\r\n---> 13 keras.distribution.initialize()\r\n\r\nFile /usr/local/lib/python3.10/site-packages/keras/src/distribution/distribution_lib.py:131, in initialize(job_addresses, num_processes, proceed_id)\r\n 129 if proceed_id is None and \"KERAS_DISTRIBUTION_PROCESS_ID\" in os.environ:\r\n 130 proceed_id = int(os.environ[\"KERAS_DISTRIBUTION_PROCESS_ID\"])\r\n--> 131 distribution_lib.initialize(job_addresses, num_processes, proceed_id)\r\n\r\nFile /usr/local/lib/python3.10/site-packages/keras/src/backend/jax/distribution_lib.py:207, in initialize(job_addresses, num_processes, process_id)\r\n 204 else:\r\n 205 corrdinator_address = job_addresses\r\n--> 207 jax.distributed.initialize(\r\n 208 corrdinator_address=corrdinator_address,\r\n 209 num_processes=num_processes,\r\n 210 process_id=process_id,\r\n 211 )\r\n\r\nTypeError: initialize() got an unexpected keyword argument 'corrdinator_address'\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"!!!DO NOT USE!!!\n\nDistribution related class for JAX backend.\n\nThis is just a prototype and we might want to unify it\nwith other backends in the future.\n\"\"\"\n\nimport jax\nimport numpy as np\n\nfrom keras.utils import jax_utils\n\n\ndef list_devices(device_type=None):\n \"\"\"Return all the available devices based on the device type.\n\n Note that this should return the global devices in a distributed setting.\n\n Args:\n device_type: string of `\"cpu\"`, `\"gpu\"` or `\"tpu\"`. Defaults to `\"gpu\"`\n or `\"tpu\"` if available when device_type is not provided. 
Otherwise\n will return the `\"cpu\"` devices.\n\n Return:\n List of devices that are available for distribute computation.\n \"\"\"\n device_type = device_type.lower() if device_type else None\n jax_devices = jax.devices(backend=device_type)\n return [f\"{device.platform}:{device.id}\" for device in jax_devices]\n\n\ndef distribute_variable(value, layout):\n \"\"\"Create a distributed variable for JAX.\n\n Since JAX doesn't have a variable class, this will just return a `jax.Array`\n with the corresponding layout/sharding specified.\n\n Note that this function should be used in eager context, not in jitted\n function.\n\n Args:\n value: the initial value of the variable.\n layout: `TensorLayout` for the created variable, or a\n `jax.sharding.Sharding` instance.\n\n Returns:\n jax.Array which is the distributed variable.\n \"\"\"\n if not isinstance(layout, jax.sharding.Sharding):\n layout = _to_jax_layout(layout)\n if isinstance(\n value, (jax.Array, jax.numpy.ndarray)\n ) and value.sharding.is_equivalent_to(layout, ndim=len(value.shape)):\n # Skip the relayout if the value is already having the proper sharding\n return value\n\n if layout.is_fully_addressable:\n return jax.device_put(value, layout)\n else:\n # Need to only distribute the value to local addressible devices, and\n # repack them back into global format.\n mapping = layout.addressable_devices_indices_map(value.shape)\n local_values = jax.device_put(\n [value[i] for i in mapping.values()], list(mapping.keys())\n )\n global_value = jax.make_array_from_single_device_arrays(\n value.shape, layout, local_values\n )\n return global_value\n\n\ndef distribute_tensor(tensor, layout):\n \"\"\"Distribute the tensor based on the layout.\n\n Note that this function can be used both in eager context, or within a\n jitted function.\n\n Args:\n tensor: `jax.Array` that need to be distributed.\n layout: `TensorLayout` for the distribution information, or a\n `jax.sharding.Sharding` instance.\n\n Returns:\n Distributed value.\n \"\"\"\n if not isinstance(layout, jax.sharding.Sharding):\n layout = _to_jax_layout(layout)\n # TODO(scottzhu): This might not be a cheap check, we should consider\n # have some proper JAX API for doing this check.\n if jax_utils.is_in_jax_tracing_scope():\n return jax.lax.with_sharding_constraint(tensor, layout)\n\n if layout.is_fully_addressable:\n return jax.device_put(tensor, layout)\n else:\n # Need to only distribute the value to local addressible devices, and\n # repack them back into global format.\n mapping = layout.addressable_devices_indices_map(tensor.shape)\n local_values = jax.device_put(\n [tensor[i] for i in mapping.values()], list(mapping.keys())\n )\n global_value = jax.make_array_from_single_device_arrays(\n tensor.shape, layout, local_values\n )\n return global_value\n\n\ndef distribute_data_input(inputs, layout):\n \"\"\"Distribute the input data with the corresponding layout.\n\n Note that the inputs here is a local worker batch. 
Within the local worker,\n the data need to be further partitioned to map to the each of the devices.\n\n Args:\n inputs: `jax.Array` that is already sharded to a local process size.\n layout: `TensorLayout` for the distribution information, or a\n `jax.sharding.Sharding` instance.\n\n Returns:\n Distributed inputs thats been properly put to local devices.\n \"\"\"\n if not isinstance(layout, jax.sharding.Sharding):\n layout = _to_jax_layout(layout)\n if layout.is_fully_addressable:\n return jax.device_put(inputs, layout)\n\n # We need the jax mesh information to determine how to place the data\n # on to each of the worker.\n jax_mesh = layout.mesh\n mesh_rank = len(jax_mesh.shape)\n per_process_batch_size = inputs.shape[0]\n if mesh_rank == 1:\n # This is data parallel mesh only. We will split the full data\n # across the batch dim.\n num_split = jax.local_device_count()\n per_replica_batch_size = per_process_batch_size // num_split\n if per_process_batch_size % per_replica_batch_size != 0:\n raise ValueError(\n f\"The local batch size {per_process_batch_size} is not\"\n \"divisible by the number of local replicas \"\n f\"{num_split}\"\n )\n global_batch_size = per_process_batch_size * jax.process_count()\n per_replica_batches = jax.numpy.split(inputs, num_split, axis=0)\n elif mesh_rank == 2:\n # Data+Model parallel\n # In this case, we need to check if the mesh batch dim shape is large\n # than number of local devices, so that we can decide whether a split\n # is needed for the data, or a repeat/copy of the data is needed for\n # each of the device.\n # TODO(scottzhu): The mesh batch dim name is not available here, since\n # we only have jax Mesh. We assume the first dim is for batch, and\n # second dim is for model for now.\n mesh_batch_dim_size = list(jax_mesh.shape.values())[0]\n local_device_count = jax.local_device_count()\n if mesh_batch_dim_size < local_device_count:\n # No split needed, we only need to repeat here.\n global_batch_size = per_process_batch_size\n per_replica_batches = [inputs for _ in range(local_device_count)]\n else:\n # Note that global batch size is not simply per_process_batch_size *\n # num_process. It actually depends on the model dim size.\n global_batch_size = per_process_batch_size * (\n mesh_batch_dim_size // local_device_count\n )\n per_replica_batches = jax.numpy.split(\n inputs, local_device_count, axis=0\n )\n else:\n raise ValueError(\n \"Only 1D or 2D mesh is supported at the moment. 
\"\n f\"Received mesh shape = {jax_mesh.shape}\"\n )\n\n global_shape = (global_batch_size,) + inputs.shape[1:]\n global_batch_array = jax.make_array_from_single_device_arrays(\n global_shape,\n layout,\n arrays=[\n jax.device_put(batch, device)\n for batch, device in zip(\n per_replica_batches, layout.addressable_devices\n )\n ],\n )\n return global_batch_array\n\n\ndef initialize(job_addresses, num_processes, process_id):\n if job_addresses and \",\" in job_addresses:\n # When user provide all the job addresses, we will split and get the\n # first one, which is the coordinator.\n job_addresses = job_addresses.split(\",\")\n # Do a sanity check to make sure the number of addresses also match\n # the num_processes.\n if num_processes is not None and num_processes != len(job_addresses):\n raise ValueError(\n f\"The provided job_addresses {job_addresses} has \"\n f\"{len(job_addresses)} jobs, but num_processes is \"\n f\"{num_processes}\"\n )\n corrdinator_address = job_addresses[0]\n else:\n corrdinator_address = job_addresses\n\n jax.distributed.initialize(\n corrdinator_address=corrdinator_address,\n num_processes=num_processes,\n process_id=process_id,\n )\n\n\ndef num_processes():\n \"\"\"Return the number of processes for the current distribution setting.\"\"\"\n return jax.process_count()\n\n\ndef process_id():\n \"\"\"Return the current process ID for the distribution setting.\"\"\"\n return jax.process_index()\n\n\ndef _to_jax_device(device_id):\n if isinstance(device_id, jax.Device):\n return device_id\n device_type, index = device_id.split(\":\")\n index = int(index)\n devices = jax.devices(backend=device_type)\n if index >= len(devices):\n raise ValueError(f\"Unknown device: {device_id}\")\n return devices[index]\n\n\ndef _to_jax_mesh(device_mesh):\n \"\"\"Convert the DeviceMesh to JAX backend specific Mesh.\n\n Args:\n device_mesh: DeviceMesh instance to convert.\n\n Returns:\n A `jax.sharding.Mesh` instance.\n \"\"\"\n shape = device_mesh.devices.shape\n devices = [_to_jax_device(d) for d in device_mesh.devices.flatten()]\n devices = np.array(devices).reshape(shape)\n return jax.sharding.Mesh(devices, device_mesh.axis_names)\n\n\ndef _to_jax_layout(tensor_layout):\n \"\"\"Convert the TensorLayout to JAX backend specific Sharding.\n\n Args:\n tensor_layout: TensorLayout instance to convert.\n\n Returns:\n A `jax.sharding.NamedSharding` instance.\n \"\"\"\n if tensor_layout.device_mesh is None:\n raise ValueError(\n \"Cannot create sharding when device mesh is not set \"\n \"for TensorLayout.\"\n )\n partition_spec = jax.sharding.PartitionSpec(*tensor_layout.axes)\n jax_mesh = _to_jax_mesh(tensor_layout.device_mesh)\n return jax.sharding.NamedSharding(jax_mesh, partition_spec)\n", "path": "keras/backend/jax/distribution_lib.py"}]}
| 3,694 | 176 |
gh_patches_debug_56983
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-172
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove unused import
As per comment https://github.com/open-telemetry/opentelemetry-python-contrib/pull/107#discussion_r516262746, there appears to be an unused import in the jinja2 instrumentation
</issue>
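For orientation, the flagged import is the line below; neither `Status` nor `StatusCode` is referenced anywhere else in the module listing that follows, so the fix is simply to drop it:

```python
from opentelemetry.trace.status import Status, StatusCode  # imported but never used
```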
<code>
[start of instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16
17 Usage
18 -----
19
20 The OpenTelemetry ``jinja2`` integration traces templates loading, compilation
21 and rendering.
22
23 Usage
24 -----
25
26 .. code-block:: python
27
28 from jinja2 import Environment, FileSystemLoader
29 from opentelemetry.instrumentation.jinja2 import Jinja2Instrumentor
30 from opentelemetry import trace
31 from opentelemetry.trace import TracerProvider
32
33 trace.set_tracer_provider(TracerProvider())
34
35 Jinja2Instrumentor().instrument()
36
37 env = Environment(loader=FileSystemLoader("templates"))
38 template = env.get_template("mytemplate.html")
39
40 API
41 ---
42 """
43 # pylint: disable=no-value-for-parameter
44
45 import logging
46
47 import jinja2
48 from wrapt import ObjectProxy
49 from wrapt import wrap_function_wrapper as _wrap
50
51 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
52 from opentelemetry.instrumentation.jinja2.version import __version__
53 from opentelemetry.instrumentation.utils import unwrap
54 from opentelemetry.trace import SpanKind, get_tracer
55 from opentelemetry.trace.status import Status, StatusCode
56
57 logger = logging.getLogger(__name__)
58
59 ATTRIBUTE_JINJA2_TEMPLATE_NAME = "jinja2.template_name"
60 ATTRIBUTE_JINJA2_TEMPLATE_PATH = "jinja2.template_path"
61 DEFAULT_TEMPLATE_NAME = "<memory>"
62
63
64 def _with_tracer_wrapper(func):
65 """Helper for providing tracer for wrapper functions.
66 """
67
68 def _with_tracer(tracer):
69 def wrapper(wrapped, instance, args, kwargs):
70 return func(tracer, wrapped, instance, args, kwargs)
71
72 return wrapper
73
74 return _with_tracer
75
76
77 @_with_tracer_wrapper
78 def _wrap_render(tracer, wrapped, instance, args, kwargs):
79 """Wrap `Template.render()` or `Template.generate()`
80 """
81 with tracer.start_as_current_span(
82 "jinja2.render", kind=SpanKind.INTERNAL,
83 ) as span:
84 if span.is_recording():
85 template_name = instance.name or DEFAULT_TEMPLATE_NAME
86 span.set_attribute(ATTRIBUTE_JINJA2_TEMPLATE_NAME, template_name)
87 return wrapped(*args, **kwargs)
88
89
90 @_with_tracer_wrapper
91 def _wrap_compile(tracer, wrapped, _, args, kwargs):
92 with tracer.start_as_current_span(
93 "jinja2.compile", kind=SpanKind.INTERNAL,
94 ) as span:
95 if span.is_recording():
96 template_name = (
97 args[1]
98 if len(args) > 1
99 else kwargs.get("name", DEFAULT_TEMPLATE_NAME)
100 )
101 span.set_attribute(ATTRIBUTE_JINJA2_TEMPLATE_NAME, template_name)
102 return wrapped(*args, **kwargs)
103
104
105 @_with_tracer_wrapper
106 def _wrap_load_template(tracer, wrapped, _, args, kwargs):
107 with tracer.start_as_current_span(
108 "jinja2.load", kind=SpanKind.INTERNAL,
109 ) as span:
110 if span.is_recording():
111 template_name = kwargs.get("name", args[0])
112 span.set_attribute(ATTRIBUTE_JINJA2_TEMPLATE_NAME, template_name)
113 template = None
114 try:
115 template = wrapped(*args, **kwargs)
116 return template
117 finally:
118 if template and span.is_recording():
119 span.set_attribute(
120 ATTRIBUTE_JINJA2_TEMPLATE_PATH, template.filename
121 )
122
123
124 class Jinja2Instrumentor(BaseInstrumentor):
125 """An instrumentor for jinja2
126
127 See `BaseInstrumentor`
128 """
129
130 def _instrument(self, **kwargs):
131 tracer_provider = kwargs.get("tracer_provider")
132 tracer = get_tracer(__name__, __version__, tracer_provider)
133
134 _wrap(jinja2, "environment.Template.render", _wrap_render(tracer))
135 _wrap(jinja2, "environment.Template.generate", _wrap_render(tracer))
136 _wrap(jinja2, "environment.Environment.compile", _wrap_compile(tracer))
137 _wrap(
138 jinja2,
139 "environment.Environment._load_template",
140 _wrap_load_template(tracer),
141 )
142
143 def _uninstrument(self, **kwargs):
144 unwrap(jinja2.Template, "render")
145 unwrap(jinja2.Template, "generate")
146 unwrap(jinja2.Environment, "compile")
147 unwrap(jinja2.Environment, "_load_template")
148
[end of instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py b/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py
@@ -52,7 +52,6 @@
from opentelemetry.instrumentation.jinja2.version import __version__
from opentelemetry.instrumentation.utils import unwrap
from opentelemetry.trace import SpanKind, get_tracer
-from opentelemetry.trace.status import Status, StatusCode
logger = logging.getLogger(__name__)
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py b/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py\n@@ -52,7 +52,6 @@\n from opentelemetry.instrumentation.jinja2.version import __version__\n from opentelemetry.instrumentation.utils import unwrap\n from opentelemetry.trace import SpanKind, get_tracer\n-from opentelemetry.trace.status import Status, StatusCode\n \n logger = logging.getLogger(__name__)\n", "issue": "Remove unused import\nAs per comment https://github.com/open-telemetry/opentelemetry-python-contrib/pull/107#discussion_r516262746, there appears to be an unused import in the jinja2 instrumentation\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\n\nUsage\n-----\n\nThe OpenTelemetry ``jinja2`` integration traces templates loading, compilation\nand rendering.\n\nUsage\n-----\n\n.. code-block:: python\n\n from jinja2 import Environment, FileSystemLoader\n from opentelemetry.instrumentation.jinja2 import Jinja2Instrumentor\n from opentelemetry import trace\n from opentelemetry.trace import TracerProvider\n\n trace.set_tracer_provider(TracerProvider())\n\n Jinja2Instrumentor().instrument()\n\n env = Environment(loader=FileSystemLoader(\"templates\"))\n template = env.get_template(\"mytemplate.html\")\n\nAPI\n---\n\"\"\"\n# pylint: disable=no-value-for-parameter\n\nimport logging\n\nimport jinja2\nfrom wrapt import ObjectProxy\nfrom wrapt import wrap_function_wrapper as _wrap\n\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.jinja2.version import __version__\nfrom opentelemetry.instrumentation.utils import unwrap\nfrom opentelemetry.trace import SpanKind, get_tracer\nfrom opentelemetry.trace.status import Status, StatusCode\n\nlogger = logging.getLogger(__name__)\n\nATTRIBUTE_JINJA2_TEMPLATE_NAME = \"jinja2.template_name\"\nATTRIBUTE_JINJA2_TEMPLATE_PATH = \"jinja2.template_path\"\nDEFAULT_TEMPLATE_NAME = \"<memory>\"\n\n\ndef _with_tracer_wrapper(func):\n \"\"\"Helper for providing tracer for wrapper functions.\n \"\"\"\n\n def _with_tracer(tracer):\n def wrapper(wrapped, instance, args, kwargs):\n return func(tracer, wrapped, instance, args, kwargs)\n\n return wrapper\n\n return _with_tracer\n\n\n@_with_tracer_wrapper\ndef _wrap_render(tracer, wrapped, instance, args, kwargs):\n \"\"\"Wrap `Template.render()` or `Template.generate()`\n \"\"\"\n with tracer.start_as_current_span(\n \"jinja2.render\", kind=SpanKind.INTERNAL,\n ) as span:\n if span.is_recording():\n template_name = instance.name or DEFAULT_TEMPLATE_NAME\n span.set_attribute(ATTRIBUTE_JINJA2_TEMPLATE_NAME, 
template_name)\n return wrapped(*args, **kwargs)\n\n\n@_with_tracer_wrapper\ndef _wrap_compile(tracer, wrapped, _, args, kwargs):\n with tracer.start_as_current_span(\n \"jinja2.compile\", kind=SpanKind.INTERNAL,\n ) as span:\n if span.is_recording():\n template_name = (\n args[1]\n if len(args) > 1\n else kwargs.get(\"name\", DEFAULT_TEMPLATE_NAME)\n )\n span.set_attribute(ATTRIBUTE_JINJA2_TEMPLATE_NAME, template_name)\n return wrapped(*args, **kwargs)\n\n\n@_with_tracer_wrapper\ndef _wrap_load_template(tracer, wrapped, _, args, kwargs):\n with tracer.start_as_current_span(\n \"jinja2.load\", kind=SpanKind.INTERNAL,\n ) as span:\n if span.is_recording():\n template_name = kwargs.get(\"name\", args[0])\n span.set_attribute(ATTRIBUTE_JINJA2_TEMPLATE_NAME, template_name)\n template = None\n try:\n template = wrapped(*args, **kwargs)\n return template\n finally:\n if template and span.is_recording():\n span.set_attribute(\n ATTRIBUTE_JINJA2_TEMPLATE_PATH, template.filename\n )\n\n\nclass Jinja2Instrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for jinja2\n\n See `BaseInstrumentor`\n \"\"\"\n\n def _instrument(self, **kwargs):\n tracer_provider = kwargs.get(\"tracer_provider\")\n tracer = get_tracer(__name__, __version__, tracer_provider)\n\n _wrap(jinja2, \"environment.Template.render\", _wrap_render(tracer))\n _wrap(jinja2, \"environment.Template.generate\", _wrap_render(tracer))\n _wrap(jinja2, \"environment.Environment.compile\", _wrap_compile(tracer))\n _wrap(\n jinja2,\n \"environment.Environment._load_template\",\n _wrap_load_template(tracer),\n )\n\n def _uninstrument(self, **kwargs):\n unwrap(jinja2.Template, \"render\")\n unwrap(jinja2.Template, \"generate\")\n unwrap(jinja2.Environment, \"compile\")\n unwrap(jinja2.Environment, \"_load_template\")\n", "path": "instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/__init__.py"}]}
| 1,998 | 183 |
gh_patches_debug_37732
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-2150
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move the metadata into setup.cfg
https://github.com/gvalkov/setuptools-py2cfg can be helpful.
</issue>
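As context for what "moving the metadata" means in practice, a hedged sketch of the equivalent declarative `setup.cfg` is shown below. The field values mirror the `setup.py` in this record, but the section layout is an assumption about the intended structure, and the handling of the requirements files and extras is deliberately left out of the sketch:

```ini
# Sketch only: metadata lifted from setup.py into declarative setup.cfg form.
[metadata]
name = pymars
description = MARS: a tensor-based unified framework for large-scale data computation.
long_description = file: README.rst
long_description_content_type = text/x-rst
author = Qin Xuye
url = http://github.com/mars-project/mars
license = Apache License 2.0

[options]
packages = find:
include_package_data = True
python_requires = >=3.6

[options.packages.find]
exclude =
    *.tests.*
    *.tests

[options.entry_points]
console_scripts =
    mars-scheduler = mars.scheduler.__main__:main
    mars-worker = mars.worker.__main__:main
    mars-web = mars.web.__main__:main
```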
<code>
[start of setup.py]
1 # Copyright 1999-2020 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import platform
17 import re
18 import sys
19 from setuptools import setup, find_packages, Extension
20 from distutils.sysconfig import get_config_var
21 from distutils.version import LooseVersion
22
23 import numpy as np
24 from Cython.Build import cythonize
25
26 try:
27 import distutils.ccompiler
28 if sys.platform != 'win32':
29 from numpy.distutils.ccompiler import CCompiler_compile
30 distutils.ccompiler.CCompiler.compile = CCompiler_compile
31 except ImportError:
32 pass
33
34 # From https://github.com/pandas-dev/pandas/pull/24274:
35 # For mac, ensure extensions are built for macos 10.9 when compiling on a
36 # 10.9 system or above, overriding distuitls behaviour which is to target
37 # the version that python was built for. This may be overridden by setting
38 # MACOSX_DEPLOYMENT_TARGET before calling setup.py
39 if sys.platform == 'darwin':
40 if 'MACOSX_DEPLOYMENT_TARGET' not in os.environ:
41 current_system = LooseVersion(platform.mac_ver()[0])
42 python_target = LooseVersion(
43 get_config_var('MACOSX_DEPLOYMENT_TARGET'))
44 if python_target < '10.9' and current_system >= '10.9':
45 os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.9'
46
47
48 repo_root = os.path.dirname(os.path.abspath(__file__))
49
50
51 def execfile(fname, globs, locs=None):
52 locs = locs or globs
53 exec(compile(open(fname).read(), fname, "exec"), globs, locs)
54
55
56 version_file_path = os.path.join(repo_root, 'mars', '_version.py')
57 version_ns = {'__file__': version_file_path}
58 execfile(version_file_path, version_ns)
59 version = version_ns['__version__']
60 # check version vs tag
61 if os.environ.get('GIT_TAG') and re.search(r'v\d', os.environ['GIT_TAG']) \
62 and os.environ['GIT_TAG'] != 'v' + version:
63 raise ValueError('Tag %r does not match source version %r'
64 % (os.environ['GIT_TAG'], version))
65
66 requirements = []
67 with open(os.path.join(repo_root, 'requirements.txt'), 'r') as f:
68 requirements.extend(f.read().splitlines())
69
70 extra_requirements = []
71 with open(os.path.join(repo_root, 'requirements-extra.txt'), 'r') as f:
72 extra_requirements.extend(f.read().splitlines())
73
74 dev_requirements = []
75 with open(os.path.join(repo_root, 'requirements-dev.txt'), 'r') as f:
76 dev_requirements.extend(f.read().splitlines())
77
78 vineyard_requirements = []
79 with open(os.path.join(repo_root, 'requirements-vineyard.txt'), 'r') as f:
80 vineyard_requirements.extend(f.read().splitlines())
81
82 long_description = None
83 if os.path.exists(os.path.join(repo_root, 'README.rst')):
84 with open(os.path.join(repo_root, 'README.rst'), encoding='utf-8') as f:
85 long_description = f.read()
86
87
88 if os.path.exists(os.path.join(repo_root, '.git')):
89 git_info = version_ns['get_git_info']()
90 if git_info:
91 with open(os.path.join(repo_root, 'mars', '.git-branch'), 'w') as git_file:
92 git_file.write(' '.join(git_info))
93
94 cythonize_kw = dict(language_level=sys.version_info[0])
95 cy_extension_kw = dict()
96 if os.environ.get('CYTHON_TRACE'):
97 cy_extension_kw['define_macros'] = [('CYTHON_TRACE_NOGIL', '1'), ('CYTHON_TRACE', '1')]
98 cythonize_kw['compiler_directives'] = {'linetrace': True}
99
100 if 'MSC' in sys.version:
101 extra_compile_args = ['/Ot', '/I' + os.path.join(repo_root, 'misc')]
102 cy_extension_kw['extra_compile_args'] = extra_compile_args
103 else:
104 extra_compile_args = ['-O3']
105 cy_extension_kw['extra_compile_args'] = extra_compile_args
106
107
108 def _discover_pyx():
109 exts = dict()
110 for root, _, files in os.walk(os.path.join(repo_root, 'mars')):
111 for fn in files:
112 if not fn.endswith('.pyx'):
113 continue
114 full_fn = os.path.relpath(os.path.join(root, fn), repo_root)
115 mod_name = full_fn.replace('.pyx', '').replace(os.path.sep, '.')
116 exts[mod_name] = Extension(mod_name, [full_fn], **cy_extension_kw)
117 return exts
118
119
120 cy_extension_kw['include_dirs'] = [np.get_include()]
121 extensions_dict = _discover_pyx()
122 cy_extensions = list(extensions_dict.values())
123
124 extensions = cythonize(cy_extensions, **cythonize_kw) + \
125 [Extension('mars.lib.mmh3', ['mars/lib/mmh3_src/mmh3module.cpp', 'mars/lib/mmh3_src/MurmurHash3.cpp'])]
126
127
128 setup_options = dict(
129 name='pymars',
130 version=version,
131 description='MARS: a tensor-based unified framework for large-scale data computation.',
132 long_description=long_description,
133 long_description_content_type='text/x-rst',
134 author='Qin Xuye',
135 author_email='[email protected]',
136 maintainer='Qin Xuye',
137 maintainer_email='[email protected]',
138 url='http://github.com/mars-project/mars',
139 license='Apache License 2.0',
140 classifiers=[
141 'Operating System :: OS Independent',
142 'Programming Language :: Python',
143 'Programming Language :: Python :: 3',
144 'Programming Language :: Python :: 3.6',
145 'Programming Language :: Python :: 3.7',
146 'Programming Language :: Python :: 3.8',
147 'Programming Language :: Python :: Implementation :: CPython',
148 'Topic :: Software Development :: Libraries',
149 ],
150 packages=find_packages(exclude=('*.tests.*', '*.tests')),
151 include_package_data=True,
152 entry_points={'console_scripts': [
153 'mars-scheduler = mars.scheduler.__main__:main',
154 'mars-worker = mars.worker.__main__:main',
155 'mars-web = mars.web.__main__:main',
156 ]},
157 python_requires='>=3.6',
158 install_requires=requirements,
159 ext_modules=extensions,
160 extras_require={
161 'distributed': extra_requirements,
162 'dev': extra_requirements + dev_requirements,
163 'vineyard': vineyard_requirements,
164 }
165 )
166 setup(**setup_options)
167
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,7 @@
import platform
import re
import sys
-from setuptools import setup, find_packages, Extension
+from setuptools import setup, Extension
from distutils.sysconfig import get_config_var
from distutils.version import LooseVersion
@@ -63,27 +63,6 @@
raise ValueError('Tag %r does not match source version %r'
% (os.environ['GIT_TAG'], version))
-requirements = []
-with open(os.path.join(repo_root, 'requirements.txt'), 'r') as f:
- requirements.extend(f.read().splitlines())
-
-extra_requirements = []
-with open(os.path.join(repo_root, 'requirements-extra.txt'), 'r') as f:
- extra_requirements.extend(f.read().splitlines())
-
-dev_requirements = []
-with open(os.path.join(repo_root, 'requirements-dev.txt'), 'r') as f:
- dev_requirements.extend(f.read().splitlines())
-
-vineyard_requirements = []
-with open(os.path.join(repo_root, 'requirements-vineyard.txt'), 'r') as f:
- vineyard_requirements.extend(f.read().splitlines())
-
-long_description = None
-if os.path.exists(os.path.join(repo_root, 'README.rst')):
- with open(os.path.join(repo_root, 'README.rst'), encoding='utf-8') as f:
- long_description = f.read()
-
if os.path.exists(os.path.join(repo_root, '.git')):
git_info = version_ns['get_git_info']()
@@ -126,41 +105,7 @@
setup_options = dict(
- name='pymars',
version=version,
- description='MARS: a tensor-based unified framework for large-scale data computation.',
- long_description=long_description,
- long_description_content_type='text/x-rst',
- author='Qin Xuye',
- author_email='[email protected]',
- maintainer='Qin Xuye',
- maintainer_email='[email protected]',
- url='http://github.com/mars-project/mars',
- license='Apache License 2.0',
- classifiers=[
- 'Operating System :: OS Independent',
- 'Programming Language :: Python',
- 'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.6',
- 'Programming Language :: Python :: 3.7',
- 'Programming Language :: Python :: 3.8',
- 'Programming Language :: Python :: Implementation :: CPython',
- 'Topic :: Software Development :: Libraries',
- ],
- packages=find_packages(exclude=('*.tests.*', '*.tests')),
- include_package_data=True,
- entry_points={'console_scripts': [
- 'mars-scheduler = mars.scheduler.__main__:main',
- 'mars-worker = mars.worker.__main__:main',
- 'mars-web = mars.web.__main__:main',
- ]},
- python_requires='>=3.6',
- install_requires=requirements,
ext_modules=extensions,
- extras_require={
- 'distributed': extra_requirements,
- 'dev': extra_requirements + dev_requirements,
- 'vineyard': vineyard_requirements,
- }
)
setup(**setup_options)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,7 @@\n import platform\n import re\n import sys\n-from setuptools import setup, find_packages, Extension\n+from setuptools import setup, Extension\n from distutils.sysconfig import get_config_var\n from distutils.version import LooseVersion\n \n@@ -63,27 +63,6 @@\n raise ValueError('Tag %r does not match source version %r'\n % (os.environ['GIT_TAG'], version))\n \n-requirements = []\n-with open(os.path.join(repo_root, 'requirements.txt'), 'r') as f:\n- requirements.extend(f.read().splitlines())\n-\n-extra_requirements = []\n-with open(os.path.join(repo_root, 'requirements-extra.txt'), 'r') as f:\n- extra_requirements.extend(f.read().splitlines())\n-\n-dev_requirements = []\n-with open(os.path.join(repo_root, 'requirements-dev.txt'), 'r') as f:\n- dev_requirements.extend(f.read().splitlines())\n-\n-vineyard_requirements = []\n-with open(os.path.join(repo_root, 'requirements-vineyard.txt'), 'r') as f:\n- vineyard_requirements.extend(f.read().splitlines())\n-\n-long_description = None\n-if os.path.exists(os.path.join(repo_root, 'README.rst')):\n- with open(os.path.join(repo_root, 'README.rst'), encoding='utf-8') as f:\n- long_description = f.read()\n-\n \n if os.path.exists(os.path.join(repo_root, '.git')):\n git_info = version_ns['get_git_info']()\n@@ -126,41 +105,7 @@\n \n \n setup_options = dict(\n- name='pymars',\n version=version,\n- description='MARS: a tensor-based unified framework for large-scale data computation.',\n- long_description=long_description,\n- long_description_content_type='text/x-rst',\n- author='Qin Xuye',\n- author_email='[email protected]',\n- maintainer='Qin Xuye',\n- maintainer_email='[email protected]',\n- url='http://github.com/mars-project/mars',\n- license='Apache License 2.0',\n- classifiers=[\n- 'Operating System :: OS Independent',\n- 'Programming Language :: Python',\n- 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.6',\n- 'Programming Language :: Python :: 3.7',\n- 'Programming Language :: Python :: 3.8',\n- 'Programming Language :: Python :: Implementation :: CPython',\n- 'Topic :: Software Development :: Libraries',\n- ],\n- packages=find_packages(exclude=('*.tests.*', '*.tests')),\n- include_package_data=True,\n- entry_points={'console_scripts': [\n- 'mars-scheduler = mars.scheduler.__main__:main',\n- 'mars-worker = mars.worker.__main__:main',\n- 'mars-web = mars.web.__main__:main',\n- ]},\n- python_requires='>=3.6',\n- install_requires=requirements,\n ext_modules=extensions,\n- extras_require={\n- 'distributed': extra_requirements,\n- 'dev': extra_requirements + dev_requirements,\n- 'vineyard': vineyard_requirements,\n- }\n )\n setup(**setup_options)\n", "issue": "Move the metadata into setup.cfg\nhttps://github.com/gvalkov/setuptools-py2cfg can be helpful.\n", "before_files": [{"content": "# Copyright 1999-2020 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport platform\nimport 
re\nimport sys\nfrom setuptools import setup, find_packages, Extension\nfrom distutils.sysconfig import get_config_var\nfrom distutils.version import LooseVersion\n\nimport numpy as np\nfrom Cython.Build import cythonize\n\ntry:\n import distutils.ccompiler\n if sys.platform != 'win32':\n from numpy.distutils.ccompiler import CCompiler_compile\n distutils.ccompiler.CCompiler.compile = CCompiler_compile\nexcept ImportError:\n pass\n\n# From https://github.com/pandas-dev/pandas/pull/24274:\n# For mac, ensure extensions are built for macos 10.9 when compiling on a\n# 10.9 system or above, overriding distuitls behaviour which is to target\n# the version that python was built for. This may be overridden by setting\n# MACOSX_DEPLOYMENT_TARGET before calling setup.py\nif sys.platform == 'darwin':\n if 'MACOSX_DEPLOYMENT_TARGET' not in os.environ:\n current_system = LooseVersion(platform.mac_ver()[0])\n python_target = LooseVersion(\n get_config_var('MACOSX_DEPLOYMENT_TARGET'))\n if python_target < '10.9' and current_system >= '10.9':\n os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.9'\n\n\nrepo_root = os.path.dirname(os.path.abspath(__file__))\n\n\ndef execfile(fname, globs, locs=None):\n locs = locs or globs\n exec(compile(open(fname).read(), fname, \"exec\"), globs, locs)\n\n\nversion_file_path = os.path.join(repo_root, 'mars', '_version.py')\nversion_ns = {'__file__': version_file_path}\nexecfile(version_file_path, version_ns)\nversion = version_ns['__version__']\n# check version vs tag\nif os.environ.get('GIT_TAG') and re.search(r'v\\d', os.environ['GIT_TAG']) \\\n and os.environ['GIT_TAG'] != 'v' + version:\n raise ValueError('Tag %r does not match source version %r'\n % (os.environ['GIT_TAG'], version))\n\nrequirements = []\nwith open(os.path.join(repo_root, 'requirements.txt'), 'r') as f:\n requirements.extend(f.read().splitlines())\n\nextra_requirements = []\nwith open(os.path.join(repo_root, 'requirements-extra.txt'), 'r') as f:\n extra_requirements.extend(f.read().splitlines())\n\ndev_requirements = []\nwith open(os.path.join(repo_root, 'requirements-dev.txt'), 'r') as f:\n dev_requirements.extend(f.read().splitlines())\n\nvineyard_requirements = []\nwith open(os.path.join(repo_root, 'requirements-vineyard.txt'), 'r') as f:\n vineyard_requirements.extend(f.read().splitlines())\n\nlong_description = None\nif os.path.exists(os.path.join(repo_root, 'README.rst')):\n with open(os.path.join(repo_root, 'README.rst'), encoding='utf-8') as f:\n long_description = f.read()\n\n\nif os.path.exists(os.path.join(repo_root, '.git')):\n git_info = version_ns['get_git_info']()\n if git_info:\n with open(os.path.join(repo_root, 'mars', '.git-branch'), 'w') as git_file:\n git_file.write(' '.join(git_info))\n\ncythonize_kw = dict(language_level=sys.version_info[0])\ncy_extension_kw = dict()\nif os.environ.get('CYTHON_TRACE'):\n cy_extension_kw['define_macros'] = [('CYTHON_TRACE_NOGIL', '1'), ('CYTHON_TRACE', '1')]\n cythonize_kw['compiler_directives'] = {'linetrace': True}\n\nif 'MSC' in sys.version:\n extra_compile_args = ['/Ot', '/I' + os.path.join(repo_root, 'misc')]\n cy_extension_kw['extra_compile_args'] = extra_compile_args\nelse:\n extra_compile_args = ['-O3']\n cy_extension_kw['extra_compile_args'] = extra_compile_args\n\n\ndef _discover_pyx():\n exts = dict()\n for root, _, files in os.walk(os.path.join(repo_root, 'mars')):\n for fn in files:\n if not fn.endswith('.pyx'):\n continue\n full_fn = os.path.relpath(os.path.join(root, fn), repo_root)\n mod_name = full_fn.replace('.pyx', 
'').replace(os.path.sep, '.')\n exts[mod_name] = Extension(mod_name, [full_fn], **cy_extension_kw)\n return exts\n\n\ncy_extension_kw['include_dirs'] = [np.get_include()]\nextensions_dict = _discover_pyx()\ncy_extensions = list(extensions_dict.values())\n\nextensions = cythonize(cy_extensions, **cythonize_kw) + \\\n [Extension('mars.lib.mmh3', ['mars/lib/mmh3_src/mmh3module.cpp', 'mars/lib/mmh3_src/MurmurHash3.cpp'])]\n\n\nsetup_options = dict(\n name='pymars',\n version=version,\n description='MARS: a tensor-based unified framework for large-scale data computation.',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n author='Qin Xuye',\n author_email='[email protected]',\n maintainer='Qin Xuye',\n maintainer_email='[email protected]',\n url='http://github.com/mars-project/mars',\n license='Apache License 2.0',\n classifiers=[\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Software Development :: Libraries',\n ],\n packages=find_packages(exclude=('*.tests.*', '*.tests')),\n include_package_data=True,\n entry_points={'console_scripts': [\n 'mars-scheduler = mars.scheduler.__main__:main',\n 'mars-worker = mars.worker.__main__:main',\n 'mars-web = mars.web.__main__:main',\n ]},\n python_requires='>=3.6',\n install_requires=requirements,\n ext_modules=extensions,\n extras_require={\n 'distributed': extra_requirements,\n 'dev': extra_requirements + dev_requirements,\n 'vineyard': vineyard_requirements,\n }\n)\nsetup(**setup_options)\n", "path": "setup.py"}]}
| 2,484 | 720 |
gh_patches_debug_6582
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-6539
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Handle protocol-relative URLs
Trying to download http://www.funnyordie.com/videos/ea20db28f8/kristen-stewart-jesse-eisenberg-interview-each-other (warning: autostarting video) fails with:
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.funnyordie.com/videos/ea20db28f8/kristen-stewart-jesse-eisenberg-interview-each-other']
[debug] Encodings: locale utf-8, fs utf-8, out utf-8, pref utf-8
[debug] youtube-dl version 2015.07.28
[debug] Python version 2.7.10 - Darwin-13.4.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 2.7.2, ffprobe 2.7.2, rtmpdump 2.4
[debug] Proxy map: {}
[FunnyOrDie] ea20db28f8: Downloading webpage
[debug] Invoking downloader on u'//vo.fod4.com/v/ea20db28f8/v2500.mp4'
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/opt/local/bin/youtube-dl/__main__.py", line 19, in <module>
File "/opt/local/bin/youtube-dl/youtube_dl/__init__.py", line 410, in main
File "/opt/local/bin/youtube-dl/youtube_dl/__init__.py", line 400, in _real_main
File "/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1504, in download
File "/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 667, in extract_info
File "/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 713, in process_ie_result
File "/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1174, in process_video_result
File "/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1436, in process_info
File "/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1378, in dl
File "/opt/local/bin/youtube-dl/youtube_dl/downloader/common.py", line 342, in download
File "/opt/local/bin/youtube-dl/youtube_dl/downloader/http.py", line 59, in real_download
File "/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1732, in urlopen
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 423, in open
protocol = req.get_type()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 285, in get_type
raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: //vo.fod4.com/v/ea20db28f8/v2500.mp4
```
Plugging the URL into wget (with a scheme) correctly downloads the video, so youtube-dl finds it just fine; it just needs to resolve the protocol-relative URL by re-using the protocol of the source page.
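
As a minimal, illustrative sketch (not youtube-dl's actual code; the patch below reuses the existing extractor helper `_proto_relative_url`), the resolution only needs the scheme of the page the URL was scraped from:

```
def resolve_proto_relative(url, page_url, default_scheme='http'):
    # Hypothetical helper for illustration: prepend the source page's scheme
    # when the media URL is protocol-relative (starts with '//').
    if not url.startswith('//'):
        return url
    scheme = page_url.split(':', 1)[0] if ':' in page_url else default_scheme
    return scheme + ':' + url

print(resolve_proto_relative(
    '//vo.fod4.com/v/ea20db28f8/v2500.mp4',
    'http://www.funnyordie.com/videos/ea20db28f8/kristen-stewart-jesse-eisenberg-interview-each-other'))
# -> http://vo.fod4.com/v/ea20db28f8/v2500.mp4
```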
</issue>
<code>
[start of youtube_dl/extractor/funnyordie.py]
1 from __future__ import unicode_literals
2
3 import json
4 import re
5
6 from .common import InfoExtractor
7 from ..utils import ExtractorError
8
9
10 class FunnyOrDieIE(InfoExtractor):
11 _VALID_URL = r'https?://(?:www\.)?funnyordie\.com/(?P<type>embed|articles|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'
12 _TESTS = [{
13 'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',
14 'md5': 'bcd81e0c4f26189ee09be362ad6e6ba9',
15 'info_dict': {
16 'id': '0732f586d7',
17 'ext': 'mp4',
18 'title': 'Heart-Shaped Box: Literal Video Version',
19 'description': 'md5:ea09a01bc9a1c46d9ab696c01747c338',
20 'thumbnail': 're:^http:.*\.jpg$',
21 },
22 }, {
23 'url': 'http://www.funnyordie.com/embed/e402820827',
24 'info_dict': {
25 'id': 'e402820827',
26 'ext': 'mp4',
27 'title': 'Please Use This Song (Jon Lajoie)',
28 'description': 'Please use this to sell something. www.jonlajoie.com',
29 'thumbnail': 're:^http:.*\.jpg$',
30 },
31 }, {
32 'url': 'http://www.funnyordie.com/articles/ebf5e34fc8/10-hours-of-walking-in-nyc-as-a-man',
33 'only_matching': True,
34 }]
35
36 def _real_extract(self, url):
37 mobj = re.match(self._VALID_URL, url)
38
39 video_id = mobj.group('id')
40 webpage = self._download_webpage(url, video_id)
41
42 links = re.findall(r'<source src="([^"]+/v)[^"]+\.([^"]+)" type=\'video', webpage)
43 if not links:
44 raise ExtractorError('No media links available for %s' % video_id)
45
46 links.sort(key=lambda link: 1 if link[1] == 'mp4' else 0)
47
48 bitrates = self._html_search_regex(r'<source src="[^"]+/v,((?:\d+,)+)\.mp4\.csmil', webpage, 'video bitrates')
49 bitrates = [int(b) for b in bitrates.rstrip(',').split(',')]
50 bitrates.sort()
51
52 formats = []
53 for bitrate in bitrates:
54 for link in links:
55 formats.append({
56 'url': '%s%d.%s' % (link[0], bitrate, link[1]),
57 'format_id': '%s-%d' % (link[1], bitrate),
58 'vbr': bitrate,
59 })
60
61 subtitles = {}
62 for src, src_lang in re.findall(r'<track kind="captions" src="([^"]+)" srclang="([^"]+)"', webpage):
63 subtitles[src_lang] = [{
64 'ext': src.split('/')[-1],
65 'url': 'http://www.funnyordie.com%s' % src,
66 }]
67
68 post_json = self._search_regex(
69 r'fb_post\s*=\s*(\{.*?\});', webpage, 'post details')
70 post = json.loads(post_json)
71
72 return {
73 'id': video_id,
74 'title': post['name'],
75 'description': post.get('description'),
76 'thumbnail': post.get('picture'),
77 'formats': formats,
78 'subtitles': subtitles,
79 }
80
[end of youtube_dl/extractor/funnyordie.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/youtube_dl/extractor/funnyordie.py b/youtube_dl/extractor/funnyordie.py
--- a/youtube_dl/extractor/funnyordie.py
+++ b/youtube_dl/extractor/funnyordie.py
@@ -53,7 +53,7 @@
for bitrate in bitrates:
for link in links:
formats.append({
- 'url': '%s%d.%s' % (link[0], bitrate, link[1]),
+ 'url': self._proto_relative_url('%s%d.%s' % (link[0], bitrate, link[1])),
'format_id': '%s-%d' % (link[1], bitrate),
'vbr': bitrate,
})
|
{"golden_diff": "diff --git a/youtube_dl/extractor/funnyordie.py b/youtube_dl/extractor/funnyordie.py\n--- a/youtube_dl/extractor/funnyordie.py\n+++ b/youtube_dl/extractor/funnyordie.py\n@@ -53,7 +53,7 @@\n for bitrate in bitrates:\n for link in links:\n formats.append({\n- 'url': '%s%d.%s' % (link[0], bitrate, link[1]),\n+ 'url': self._proto_relative_url('%s%d.%s' % (link[0], bitrate, link[1])),\n 'format_id': '%s-%d' % (link[1], bitrate),\n 'vbr': bitrate,\n })\n", "issue": "Handle protocol-relative URLs\nTrying to download http://www.funnyordie.com/videos/ea20db28f8/kristen-stewart-jesse-eisenberg-interview-each-other (warning: autostarting video) fails with:\n\n```\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'-v', u'http://www.funnyordie.com/videos/ea20db28f8/kristen-stewart-jesse-eisenberg-interview-each-other']\n[debug] Encodings: locale utf-8, fs utf-8, out utf-8, pref utf-8\n[debug] youtube-dl version 2015.07.28\n[debug] Python version 2.7.10 - Darwin-13.4.0-x86_64-i386-64bit\n[debug] exe versions: ffmpeg 2.7.2, ffprobe 2.7.2, rtmpdump 2.4\n[debug] Proxy map: {}\n[FunnyOrDie] ea20db28f8: Downloading webpage\n[debug] Invoking downloader on u'//vo.fod4.com/v/ea20db28f8/v2500.mp4'\nTraceback (most recent call last):\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 162, in _run_module_as_main\n \"__main__\", fname, loader, pkg_name)\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/opt/local/bin/youtube-dl/__main__.py\", line 19, in <module>\n File \"/opt/local/bin/youtube-dl/youtube_dl/__init__.py\", line 410, in main\n File \"/opt/local/bin/youtube-dl/youtube_dl/__init__.py\", line 400, in _real_main\n File \"/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1504, in download\n File \"/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 667, in extract_info\n File \"/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 713, in process_ie_result\n File \"/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1174, in process_video_result\n File \"/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1436, in process_info\n File \"/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1378, in dl\n File \"/opt/local/bin/youtube-dl/youtube_dl/downloader/common.py\", line 342, in download\n File \"/opt/local/bin/youtube-dl/youtube_dl/downloader/http.py\", line 59, in real_download\n File \"/opt/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1732, in urlopen\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py\", line 423, in open\n protocol = req.get_type()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py\", line 285, in get_type\n raise ValueError, \"unknown url type: %s\" % self.__original\nValueError: unknown url type: //vo.fod4.com/v/ea20db28f8/v2500.mp4\n```\n\nPlugging the url into wget (with a scheme) correctly downloads the video so youtube-dl finds it just fine, it just needs to resolve the protocol-relative URL by re-using the protocol of the source page.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport json\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import ExtractorError\n\n\nclass FunnyOrDieIE(InfoExtractor):\n _VALID_URL = 
r'https?://(?:www\\.)?funnyordie\\.com/(?P<type>embed|articles|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'\n _TESTS = [{\n 'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',\n 'md5': 'bcd81e0c4f26189ee09be362ad6e6ba9',\n 'info_dict': {\n 'id': '0732f586d7',\n 'ext': 'mp4',\n 'title': 'Heart-Shaped Box: Literal Video Version',\n 'description': 'md5:ea09a01bc9a1c46d9ab696c01747c338',\n 'thumbnail': 're:^http:.*\\.jpg$',\n },\n }, {\n 'url': 'http://www.funnyordie.com/embed/e402820827',\n 'info_dict': {\n 'id': 'e402820827',\n 'ext': 'mp4',\n 'title': 'Please Use This Song (Jon Lajoie)',\n 'description': 'Please use this to sell something. www.jonlajoie.com',\n 'thumbnail': 're:^http:.*\\.jpg$',\n },\n }, {\n 'url': 'http://www.funnyordie.com/articles/ebf5e34fc8/10-hours-of-walking-in-nyc-as-a-man',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n\n video_id = mobj.group('id')\n webpage = self._download_webpage(url, video_id)\n\n links = re.findall(r'<source src=\"([^\"]+/v)[^\"]+\\.([^\"]+)\" type=\\'video', webpage)\n if not links:\n raise ExtractorError('No media links available for %s' % video_id)\n\n links.sort(key=lambda link: 1 if link[1] == 'mp4' else 0)\n\n bitrates = self._html_search_regex(r'<source src=\"[^\"]+/v,((?:\\d+,)+)\\.mp4\\.csmil', webpage, 'video bitrates')\n bitrates = [int(b) for b in bitrates.rstrip(',').split(',')]\n bitrates.sort()\n\n formats = []\n for bitrate in bitrates:\n for link in links:\n formats.append({\n 'url': '%s%d.%s' % (link[0], bitrate, link[1]),\n 'format_id': '%s-%d' % (link[1], bitrate),\n 'vbr': bitrate,\n })\n\n subtitles = {}\n for src, src_lang in re.findall(r'<track kind=\"captions\" src=\"([^\"]+)\" srclang=\"([^\"]+)\"', webpage):\n subtitles[src_lang] = [{\n 'ext': src.split('/')[-1],\n 'url': 'http://www.funnyordie.com%s' % src,\n }]\n\n post_json = self._search_regex(\n r'fb_post\\s*=\\s*(\\{.*?\\});', webpage, 'post details')\n post = json.loads(post_json)\n\n return {\n 'id': video_id,\n 'title': post['name'],\n 'description': post.get('description'),\n 'thumbnail': post.get('picture'),\n 'formats': formats,\n 'subtitles': subtitles,\n }\n", "path": "youtube_dl/extractor/funnyordie.py"}]}
| 2,443 | 163 |
gh_patches_debug_27210
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-282
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[master]Use different RPC for reporting gradient and task result.
</issue>
<code>
[start of elasticdl/master/servicer.py]
1 import threading
2 import numpy as np
3
4 import tensorflow as tf
5 assert tf.executing_eagerly()
6
7 from proto import master_pb2
8 from proto import master_pb2_grpc
9 from util.ndarray import ndarray_to_tensor, tensor_to_ndarray
10
11
12 class MasterServicer(master_pb2_grpc.MasterServicer):
13 """Master service implementation"""
14
15 def __init__(self, logger, grads_to_wait, optimizer):
16 self.logger = logger
17 self._opt = optimizer
18 self._lock = threading.Lock()
19 # TODO: random initialization
20 # A <string, tf.ResourceVariable> map. We use tf.ResourceVariable
21 # instead ndarray to avoid copying and conversion when calling
22 # optimizer's apply_gradients() function.
23 self._model = {}
24 self._version = 0
25 self._gradient_sum = {}
26 self._grad_to_wait = grads_to_wait
27 self._grad_n = 0
28
29 def _set_model_var(self, name, value):
30 """Add or set model variable. Value should be a float32 ndarray"""
31 if value.dtype != np.float32:
32 raise ValueError("Value should be a float32 numpy array")
33 self._model[name] = tf.Variable(value, name=name)
34
35 def GetTask(self, request, context):
36 # TODO: implent task queues. Return an empty task for now.
37 res = master_pb2.Task()
38 res.shard_file_name = ""
39 res.model_version = self._version
40 return res
41
42 def GetModel(self, request, context):
43 if request.min_version > self._version:
44 err_msg = (
45 "Requested version %d not available yet, current version: %d"
46 % (request.min_version, self._version)
47 )
48 self.logger.warning(err_msg)
49 raise ValueError(err_msg)
50
51 res = master_pb2.Model()
52 with self._lock:
53 res.version = self._version
54 for k, v in self._model.items():
55 res.param[k].CopyFrom(ndarray_to_tensor(v.numpy()))
56 return res
57
58 def _update_model(self):
59 assert self._lock.locked()
60 grad_var = []
61 for k in self._gradient_sum:
62 self._gradient_sum[k] = self._gradient_sum[k] / self._grad_to_wait
63 grad_var.append((self._gradient_sum[k], self._model[k]))
64 self._opt.apply_gradients(grad_var)
65 self._version += 1
66 self._gradient_sum.clear()
67 self._grad_n = 0
68
69 def ReportTaskResult(self, request, context):
70 if request.model_version > self._version:
71 err_msg = "Model version %d out of range, current version: %d" % (
72 request.model_version,
73 self._version,
74 )
75 self.logger.warning(err_msg)
76 raise ValueError(err_msg)
77
78 res = master_pb2.ReportTaskResultReply()
79 if request.model_version < self._version:
80 self.logger.warning(
81 "Task result for outdated version %d dropped",
82 request.model_version,
83 )
84 res.accepted = False
85 res.model_version = self._version
86 return res
87
88 if request.err_message:
89 self.logger.warning("Worker error: %s" % request.err_message)
90 res.accepted = False
91 res.model_version = self._version
92 return res
93
94 # TODO: Update task queue with task_id
95 with self._lock:
96 tmp = {}
97 # Do sanity check before accumulating gradients.
98 for k, v in request.gradient.items():
99 if k not in self._model:
100 raise ValueError(
101 "Gradient key: %s is not part of model", k
102 )
103 arr = tensor_to_ndarray(v)
104 if arr.shape != self._model[k].numpy().shape:
105 raise ValueError(
106 "Gradient key: %s has incompatible dimension", k
107 )
108 tmp[k] = arr
109
110 for k, v in tmp.items():
111 if k in self._gradient_sum:
112 self._gradient_sum[k] = self._gradient_sum[k] + v
113 else:
114 self._gradient_sum[k] = v
115
116 self._grad_n += 1
117 if self._grad_n >= self._grad_to_wait:
118 self._update_model()
119 res.accepted = True
120 res.model_version = self._version
121 return res
122
[end of elasticdl/master/servicer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticdl/master/servicer.py b/elasticdl/master/servicer.py
--- a/elasticdl/master/servicer.py
+++ b/elasticdl/master/servicer.py
@@ -66,7 +66,7 @@
self._gradient_sum.clear()
self._grad_n = 0
- def ReportTaskResult(self, request, context):
+ def ReportGradient(self, request, context):
if request.model_version > self._version:
err_msg = "Model version %d out of range, current version: %d" % (
request.model_version,
@@ -75,7 +75,7 @@
self.logger.warning(err_msg)
raise ValueError(err_msg)
- res = master_pb2.ReportTaskResultReply()
+ res = master_pb2.ReportGradientReply()
if request.model_version < self._version:
self.logger.warning(
"Task result for outdated version %d dropped",
@@ -85,12 +85,6 @@
res.model_version = self._version
return res
- if request.err_message:
- self.logger.warning("Worker error: %s" % request.err_message)
- res.accepted = False
- res.model_version = self._version
- return res
-
# TODO: Update task queue with task_id
with self._lock:
tmp = {}
|
{"golden_diff": "diff --git a/elasticdl/master/servicer.py b/elasticdl/master/servicer.py\n--- a/elasticdl/master/servicer.py\n+++ b/elasticdl/master/servicer.py\n@@ -66,7 +66,7 @@\n self._gradient_sum.clear()\n self._grad_n = 0\n \n- def ReportTaskResult(self, request, context):\n+ def ReportGradient(self, request, context):\n if request.model_version > self._version:\n err_msg = \"Model version %d out of range, current version: %d\" % (\n request.model_version,\n@@ -75,7 +75,7 @@\n self.logger.warning(err_msg)\n raise ValueError(err_msg)\n \n- res = master_pb2.ReportTaskResultReply()\n+ res = master_pb2.ReportGradientReply()\n if request.model_version < self._version:\n self.logger.warning(\n \"Task result for outdated version %d dropped\",\n@@ -85,12 +85,6 @@\n res.model_version = self._version\n return res\n \n- if request.err_message:\n- self.logger.warning(\"Worker error: %s\" % request.err_message)\n- res.accepted = False\n- res.model_version = self._version\n- return res\n-\n # TODO: Update task queue with task_id\n with self._lock:\n tmp = {}\n", "issue": "[master]Use different RPC for reporting gradient and task result.\n\n", "before_files": [{"content": "import threading\nimport numpy as np\n\nimport tensorflow as tf\nassert tf.executing_eagerly()\n\nfrom proto import master_pb2\nfrom proto import master_pb2_grpc\nfrom util.ndarray import ndarray_to_tensor, tensor_to_ndarray\n\n\nclass MasterServicer(master_pb2_grpc.MasterServicer):\n \"\"\"Master service implementation\"\"\"\n\n def __init__(self, logger, grads_to_wait, optimizer):\n self.logger = logger\n self._opt = optimizer\n self._lock = threading.Lock()\n # TODO: random initialization\n # A <string, tf.ResourceVariable> map. We use tf.ResourceVariable\n # instead ndarray to avoid copying and conversion when calling\n # optimizer's apply_gradients() function.\n self._model = {}\n self._version = 0\n self._gradient_sum = {}\n self._grad_to_wait = grads_to_wait\n self._grad_n = 0\n\n def _set_model_var(self, name, value):\n \"\"\"Add or set model variable. Value should be a float32 ndarray\"\"\"\n if value.dtype != np.float32:\n raise ValueError(\"Value should be a float32 numpy array\")\n self._model[name] = tf.Variable(value, name=name)\n\n def GetTask(self, request, context):\n # TODO: implent task queues. 
Return an empty task for now.\n res = master_pb2.Task()\n res.shard_file_name = \"\"\n res.model_version = self._version\n return res\n\n def GetModel(self, request, context):\n if request.min_version > self._version:\n err_msg = (\n \"Requested version %d not available yet, current version: %d\"\n % (request.min_version, self._version)\n )\n self.logger.warning(err_msg)\n raise ValueError(err_msg)\n\n res = master_pb2.Model()\n with self._lock:\n res.version = self._version\n for k, v in self._model.items():\n res.param[k].CopyFrom(ndarray_to_tensor(v.numpy()))\n return res\n\n def _update_model(self):\n assert self._lock.locked()\n grad_var = []\n for k in self._gradient_sum:\n self._gradient_sum[k] = self._gradient_sum[k] / self._grad_to_wait\n grad_var.append((self._gradient_sum[k], self._model[k]))\n self._opt.apply_gradients(grad_var)\n self._version += 1\n self._gradient_sum.clear()\n self._grad_n = 0\n\n def ReportTaskResult(self, request, context):\n if request.model_version > self._version:\n err_msg = \"Model version %d out of range, current version: %d\" % (\n request.model_version,\n self._version,\n )\n self.logger.warning(err_msg)\n raise ValueError(err_msg)\n\n res = master_pb2.ReportTaskResultReply()\n if request.model_version < self._version:\n self.logger.warning(\n \"Task result for outdated version %d dropped\",\n request.model_version,\n )\n res.accepted = False\n res.model_version = self._version\n return res\n\n if request.err_message:\n self.logger.warning(\"Worker error: %s\" % request.err_message)\n res.accepted = False\n res.model_version = self._version\n return res\n\n # TODO: Update task queue with task_id\n with self._lock:\n tmp = {}\n # Do sanity check before accumulating gradients.\n for k, v in request.gradient.items():\n if k not in self._model:\n raise ValueError(\n \"Gradient key: %s is not part of model\", k\n )\n arr = tensor_to_ndarray(v)\n if arr.shape != self._model[k].numpy().shape:\n raise ValueError(\n \"Gradient key: %s has incompatible dimension\", k\n )\n tmp[k] = arr\n\n for k, v in tmp.items():\n if k in self._gradient_sum:\n self._gradient_sum[k] = self._gradient_sum[k] + v\n else:\n self._gradient_sum[k] = v\n\n self._grad_n += 1\n if self._grad_n >= self._grad_to_wait:\n self._update_model()\n res.accepted = True\n res.model_version = self._version\n return res\n", "path": "elasticdl/master/servicer.py"}]}
| 1,745 | 301 |
gh_patches_debug_23313
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-2082
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Output --no-binary and --only-binary options to preserve pip behavior
<!-- Describe the issue briefly here. -->
#### Environment Versions
1. OS Type: Linux
1. Python version: 3.12.3
1. pip version: 24.0
1. pip-tools version: 7.4.1
#### Steps to replicate
1. Compile using `--pip-args='--only-binary=:all: --no-binary=library'`
2. See that compiled requirements list the `--no-binary=library` option first, `--only-binary=:all:` option second.
3. When attempting to install from these requirements, the `--no-binary` is wiped out by the `--only-binary=:all:`.
#### Expected result
The resulting requirements contain `--no-binary` and `--only-binary` options that have the same behavior as input options.
#### Actual result
Requirements don't have the same behavior as input options.
This improvement matters because using --no-binary/--only-binary is the best way for users to control the amount of potential code execution that happens during the installation process. I have a local fix and plan to open a PR.
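
For illustration, here is a standalone sketch of an emission order that preserves this behavior when pip re-parses the compiled file first-to-last. It is a plain function mirroring the `write_format_controls` writer method shown in the code below; the set arguments are illustrative:

```
def write_format_controls(no_binary, only_binary):
    # Emit ":all:" before package names so that package-specific options,
    # parsed later, still take precedence over the ":all:" default.
    if ':all:' in no_binary:
        yield '--no-binary :all:'
    if ':all:' in only_binary:
        yield '--only-binary :all:'
    for name in sorted(no_binary - {':all:'}):
        yield f'--no-binary {name}'
    for name in sorted(only_binary - {':all:'}):
        yield f'--only-binary {name}'

print('\n'.join(write_format_controls({'library'}, {':all:'})))
# --only-binary :all:
# --no-binary library
```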
</issue>
<code>
[start of piptools/writer.py]
1 from __future__ import annotations
2
3 import io
4 import os
5 import re
6 import sys
7 from itertools import chain
8 from typing import BinaryIO, Iterable, Iterator, cast
9
10 from click import unstyle
11 from click.core import Context
12 from pip._internal.models.format_control import FormatControl
13 from pip._internal.req.req_install import InstallRequirement
14 from pip._vendor.packaging.markers import Marker
15 from pip._vendor.packaging.utils import canonicalize_name
16
17 from .logging import log
18 from .utils import (
19 comment,
20 dedup,
21 format_requirement,
22 get_compile_command,
23 key_from_ireq,
24 strip_extras,
25 )
26
27 MESSAGE_UNHASHED_PACKAGE = comment(
28 "# WARNING: pip install will require the following package to be hashed."
29 "\n# Consider using a hashable URL like "
30 "https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip"
31 )
32
33 MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
34 "# WARNING: The following packages were not pinned, but pip requires them to be"
35 "\n# pinned when the requirements file includes hashes and the requirement is not"
36 "\n# satisfied by a package already installed. "
37 "Consider using the --allow-unsafe flag."
38 )
39
40 MESSAGE_UNSAFE_PACKAGES = comment(
41 "# The following packages are considered to be unsafe in a requirements file:"
42 )
43
44 MESSAGE_UNINSTALLABLE = (
45 "The generated requirements file may be rejected by pip install. "
46 "See # WARNING lines for details."
47 )
48
49
50 strip_comes_from_line_re = re.compile(r" \(line \d+\)$")
51
52
53 def _comes_from_as_string(comes_from: str | InstallRequirement) -> str:
54 if isinstance(comes_from, str):
55 return strip_comes_from_line_re.sub("", comes_from)
56 return cast(str, canonicalize_name(key_from_ireq(comes_from)))
57
58
59 def annotation_style_split(required_by: set[str]) -> str:
60 sorted_required_by = sorted(required_by)
61 if len(sorted_required_by) == 1:
62 source = sorted_required_by[0]
63 annotation = "# via " + source
64 else:
65 annotation_lines = ["# via"]
66 for source in sorted_required_by:
67 annotation_lines.append(" # " + source)
68 annotation = "\n".join(annotation_lines)
69 return annotation
70
71
72 def annotation_style_line(required_by: set[str]) -> str:
73 return f"# via {', '.join(sorted(required_by))}"
74
75
76 class OutputWriter:
77 def __init__(
78 self,
79 dst_file: BinaryIO,
80 click_ctx: Context,
81 dry_run: bool,
82 emit_header: bool,
83 emit_index_url: bool,
84 emit_trusted_host: bool,
85 annotate: bool,
86 annotation_style: str,
87 strip_extras: bool,
88 generate_hashes: bool,
89 default_index_url: str,
90 index_urls: Iterable[str],
91 trusted_hosts: Iterable[str],
92 format_control: FormatControl,
93 linesep: str,
94 allow_unsafe: bool,
95 find_links: list[str],
96 emit_find_links: bool,
97 emit_options: bool,
98 ) -> None:
99 self.dst_file = dst_file
100 self.click_ctx = click_ctx
101 self.dry_run = dry_run
102 self.emit_header = emit_header
103 self.emit_index_url = emit_index_url
104 self.emit_trusted_host = emit_trusted_host
105 self.annotate = annotate
106 self.annotation_style = annotation_style
107 self.strip_extras = strip_extras
108 self.generate_hashes = generate_hashes
109 self.default_index_url = default_index_url
110 self.index_urls = index_urls
111 self.trusted_hosts = trusted_hosts
112 self.format_control = format_control
113 self.linesep = linesep
114 self.allow_unsafe = allow_unsafe
115 self.find_links = find_links
116 self.emit_find_links = emit_find_links
117 self.emit_options = emit_options
118
119 def _sort_key(self, ireq: InstallRequirement) -> tuple[bool, str]:
120 return (not ireq.editable, key_from_ireq(ireq))
121
122 def write_header(self) -> Iterator[str]:
123 if self.emit_header:
124 yield comment("#")
125 yield comment(
126 "# This file is autogenerated by pip-compile with Python "
127 f"{sys.version_info.major}.{sys.version_info.minor}"
128 )
129 yield comment("# by the following command:")
130 yield comment("#")
131 compile_command = os.environ.get(
132 "CUSTOM_COMPILE_COMMAND"
133 ) or get_compile_command(self.click_ctx)
134 yield comment(f"# {compile_command}")
135 yield comment("#")
136
137 def write_index_options(self) -> Iterator[str]:
138 if self.emit_index_url:
139 for index, index_url in enumerate(dedup(self.index_urls)):
140 if index == 0 and index_url.rstrip("/") == self.default_index_url:
141 continue
142 flag = "--index-url" if index == 0 else "--extra-index-url"
143 yield f"{flag} {index_url}"
144
145 def write_trusted_hosts(self) -> Iterator[str]:
146 if self.emit_trusted_host:
147 for trusted_host in dedup(self.trusted_hosts):
148 yield f"--trusted-host {trusted_host}"
149
150 def write_format_controls(self) -> Iterator[str]:
151 for nb in dedup(sorted(self.format_control.no_binary)):
152 yield f"--no-binary {nb}"
153 for ob in dedup(sorted(self.format_control.only_binary)):
154 yield f"--only-binary {ob}"
155
156 def write_find_links(self) -> Iterator[str]:
157 if self.emit_find_links:
158 for find_link in dedup(self.find_links):
159 yield f"--find-links {find_link}"
160
161 def write_flags(self) -> Iterator[str]:
162 if not self.emit_options:
163 return
164 emitted = False
165 for line in chain(
166 self.write_index_options(),
167 self.write_find_links(),
168 self.write_trusted_hosts(),
169 self.write_format_controls(),
170 ):
171 emitted = True
172 yield line
173 if emitted:
174 yield ""
175
176 def _iter_lines(
177 self,
178 results: set[InstallRequirement],
179 unsafe_requirements: set[InstallRequirement],
180 unsafe_packages: set[str],
181 markers: dict[str, Marker],
182 hashes: dict[InstallRequirement, set[str]] | None = None,
183 ) -> Iterator[str]:
184 # default values
185 unsafe_packages = unsafe_packages if self.allow_unsafe else set()
186 hashes = hashes or {}
187
188 # Check for unhashed or unpinned packages if at least one package does have
189 # hashes, which will trigger pip install's --require-hashes mode.
190 warn_uninstallable = False
191 has_hashes = hashes and any(hash for hash in hashes.values())
192
193 yielded = False
194
195 for line in self.write_header():
196 yield line
197 yielded = True
198 for line in self.write_flags():
199 yield line
200 yielded = True
201
202 unsafe_requirements = unsafe_requirements or {
203 r for r in results if r.name in unsafe_packages
204 }
205 packages = {r for r in results if r.name not in unsafe_packages}
206
207 if packages:
208 for ireq in sorted(packages, key=self._sort_key):
209 if has_hashes and not hashes.get(ireq):
210 yield MESSAGE_UNHASHED_PACKAGE
211 warn_uninstallable = True
212 line = self._format_requirement(
213 ireq, markers.get(key_from_ireq(ireq)), hashes=hashes
214 )
215 yield line
216 yielded = True
217
218 if unsafe_requirements:
219 yield ""
220 yielded = True
221 if has_hashes and not self.allow_unsafe:
222 yield MESSAGE_UNSAFE_PACKAGES_UNPINNED
223 warn_uninstallable = True
224 else:
225 yield MESSAGE_UNSAFE_PACKAGES
226
227 for ireq in sorted(unsafe_requirements, key=self._sort_key):
228 ireq_key = key_from_ireq(ireq)
229 if not self.allow_unsafe:
230 yield comment(f"# {ireq_key}")
231 else:
232 line = self._format_requirement(
233 ireq, marker=markers.get(ireq_key), hashes=hashes
234 )
235 yield line
236
237 # Yield even when there's no real content, so that blank files are written
238 if not yielded:
239 yield ""
240
241 if warn_uninstallable:
242 log.warning(MESSAGE_UNINSTALLABLE)
243
244 def write(
245 self,
246 results: set[InstallRequirement],
247 unsafe_requirements: set[InstallRequirement],
248 unsafe_packages: set[str],
249 markers: dict[str, Marker],
250 hashes: dict[InstallRequirement, set[str]] | None,
251 ) -> None:
252 if not self.dry_run:
253 dst_file = io.TextIOWrapper(
254 self.dst_file,
255 encoding="utf8",
256 newline=self.linesep,
257 line_buffering=True,
258 )
259 try:
260 for line in self._iter_lines(
261 results, unsafe_requirements, unsafe_packages, markers, hashes
262 ):
263 if self.dry_run:
264 # Bypass the log level to always print this during a dry run
265 log.log(line)
266 else:
267 log.info(line)
268 dst_file.write(unstyle(line))
269 dst_file.write("\n")
270 finally:
271 if not self.dry_run:
272 dst_file.detach()
273
274 def _format_requirement(
275 self,
276 ireq: InstallRequirement,
277 marker: Marker | None = None,
278 hashes: dict[InstallRequirement, set[str]] | None = None,
279 ) -> str:
280 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
281
282 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
283 if self.strip_extras:
284 line = strip_extras(line)
285
286 if not self.annotate:
287 return line
288
289 # Annotate what packages or reqs-ins this package is required by
290 required_by = set()
291 if hasattr(ireq, "_source_ireqs"):
292 required_by |= {
293 _comes_from_as_string(src_ireq.comes_from)
294 for src_ireq in ireq._source_ireqs
295 if src_ireq.comes_from
296 }
297
298 # Filter out the origin install requirements for extras.
299 # See https://github.com/jazzband/pip-tools/issues/2003
300 if ireq.comes_from and (
301 isinstance(ireq.comes_from, str) or ireq.comes_from.name != ireq.name
302 ):
303 required_by.add(_comes_from_as_string(ireq.comes_from))
304
305 required_by |= set(getattr(ireq, "_required_by", set()))
306
307 if required_by:
308 if self.annotation_style == "split":
309 annotation = annotation_style_split(required_by)
310 sep = "\n "
311 elif self.annotation_style == "line":
312 annotation = annotation_style_line(required_by)
313 sep = "\n " if ireq_hashes else " "
314 else: # pragma: no cover
315 raise ValueError("Invalid value for annotation style")
316 if self.strip_extras:
317 annotation = strip_extras(annotation)
318 # 24 is one reasonable column size to use here, that we've used in the past
319 lines = f"{line:24}{sep}{comment(annotation)}".splitlines()
320 line = "\n".join(ln.rstrip() for ln in lines)
321
322 return line
323
[end of piptools/writer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -148,9 +148,27 @@
yield f"--trusted-host {trusted_host}"
def write_format_controls(self) -> Iterator[str]:
- for nb in dedup(sorted(self.format_control.no_binary)):
+ # The ordering of output needs to preserve the behavior of pip's
+ # FormatControl.get_allowed_formats(). The behavior is the following:
+ #
+ # * Parsing of CLI options happens first to last.
+ # * --only-binary takes precedence over --no-binary
+ # * Package names take precedence over :all:
+ # * We'll never see :all: in both due to mutual exclusion.
+ #
+ # So in summary, we want to emit :all: first and then package names later.
+ no_binary = self.format_control.no_binary.copy()
+ only_binary = self.format_control.only_binary.copy()
+
+ if ":all:" in no_binary:
+ yield "--no-binary :all:"
+ no_binary.remove(":all:")
+ if ":all:" in only_binary:
+ yield "--only-binary :all:"
+ only_binary.remove(":all:")
+ for nb in dedup(sorted(no_binary)):
yield f"--no-binary {nb}"
- for ob in dedup(sorted(self.format_control.only_binary)):
+ for ob in dedup(sorted(only_binary)):
yield f"--only-binary {ob}"
def write_find_links(self) -> Iterator[str]:
|
{"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -148,9 +148,27 @@\n yield f\"--trusted-host {trusted_host}\"\n \n def write_format_controls(self) -> Iterator[str]:\n- for nb in dedup(sorted(self.format_control.no_binary)):\n+ # The ordering of output needs to preserve the behavior of pip's\n+ # FormatControl.get_allowed_formats(). The behavior is the following:\n+ #\n+ # * Parsing of CLI options happens first to last.\n+ # * --only-binary takes precedence over --no-binary\n+ # * Package names take precedence over :all:\n+ # * We'll never see :all: in both due to mutual exclusion.\n+ #\n+ # So in summary, we want to emit :all: first and then package names later.\n+ no_binary = self.format_control.no_binary.copy()\n+ only_binary = self.format_control.only_binary.copy()\n+\n+ if \":all:\" in no_binary:\n+ yield \"--no-binary :all:\"\n+ no_binary.remove(\":all:\")\n+ if \":all:\" in only_binary:\n+ yield \"--only-binary :all:\"\n+ only_binary.remove(\":all:\")\n+ for nb in dedup(sorted(no_binary)):\n yield f\"--no-binary {nb}\"\n- for ob in dedup(sorted(self.format_control.only_binary)):\n+ for ob in dedup(sorted(only_binary)):\n yield f\"--only-binary {ob}\"\n \n def write_find_links(self) -> Iterator[str]:\n", "issue": "Output --no-binary and --only-binary options to preserve pip behavior\n<!-- Describe the issue briefly here. -->\r\n\r\n#### Environment Versions\r\n\r\n1. OS Type: Linux\r\n1. Python version: 3.12.3\r\n1. pip version: 24.0\r\n1. pip-tools version: 7.4.1\r\n\r\n#### Steps to replicate\r\n\r\n1. Compile using `--pip-args='--only-binary=:all: --no-binary=library'\r\n2. See that compiled requirements list the `--no-binary=library` option first, `--only-binary=:all:` option second.\r\n3. When attempting to install from these requirements the `--no-binary` is wiped out by the `--only-binary=:all:`\r\n\r\n#### Expected result\r\n\r\nThe resulting requirements contain `--no-binary` and `--only-binary` options that have the same behavior as input options.\r\n\r\n#### Actual result\r\n\r\nRequirements don't have the same behavior as input options.\r\n\r\nThis improvement matters because using --no-binary/--only-binary is the best way for users to control the amount of potential code execution that happens during the installation process. 
I have a local fix that I plan on creating a PR.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport io\nimport os\nimport re\nimport sys\nfrom itertools import chain\nfrom typing import BinaryIO, Iterable, Iterator, cast\n\nfrom click import unstyle\nfrom click.core import Context\nfrom pip._internal.models.format_control import FormatControl\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._vendor.packaging.markers import Marker\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom .logging import log\nfrom .utils import (\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n strip_extras,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes and the requirement is not\"\n \"\\n# satisfied by a package already installed. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. \"\n \"See # WARNING lines for details.\"\n)\n\n\nstrip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n\n\ndef _comes_from_as_string(comes_from: str | InstallRequirement) -> str:\n if isinstance(comes_from, str):\n return strip_comes_from_line_re.sub(\"\", comes_from)\n return cast(str, canonicalize_name(key_from_ireq(comes_from)))\n\n\ndef annotation_style_split(required_by: set[str]) -> str:\n sorted_required_by = sorted(required_by)\n if len(sorted_required_by) == 1:\n source = sorted_required_by[0]\n annotation = \"# via \" + source\n else:\n annotation_lines = [\"# via\"]\n for source in sorted_required_by:\n annotation_lines.append(\" # \" + source)\n annotation = \"\\n\".join(annotation_lines)\n return annotation\n\n\ndef annotation_style_line(required_by: set[str]) -> str:\n return f\"# via {', '.join(sorted(required_by))}\"\n\n\nclass OutputWriter:\n def __init__(\n self,\n dst_file: BinaryIO,\n click_ctx: Context,\n dry_run: bool,\n emit_header: bool,\n emit_index_url: bool,\n emit_trusted_host: bool,\n annotate: bool,\n annotation_style: str,\n strip_extras: bool,\n generate_hashes: bool,\n default_index_url: str,\n index_urls: Iterable[str],\n trusted_hosts: Iterable[str],\n format_control: FormatControl,\n linesep: str,\n allow_unsafe: bool,\n find_links: list[str],\n emit_find_links: bool,\n emit_options: bool,\n ) -> None:\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index_url = emit_index_url\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.annotation_style = annotation_style\n self.strip_extras = strip_extras\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.linesep = linesep\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n self.emit_options = emit_options\n\n def _sort_key(self, ireq: 
InstallRequirement) -> tuple[bool, str]:\n return (not ireq.editable, key_from_ireq(ireq))\n\n def write_header(self) -> Iterator[str]:\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\n \"# This file is autogenerated by pip-compile with Python \"\n f\"{sys.version_info.major}.{sys.version_info.minor}\"\n )\n yield comment(\"# by the following command:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(f\"# {compile_command}\")\n yield comment(\"#\")\n\n def write_index_options(self) -> Iterator[str]:\n if self.emit_index_url:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index == 0 and index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield f\"{flag} {index_url}\"\n\n def write_trusted_hosts(self) -> Iterator[str]:\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield f\"--trusted-host {trusted_host}\"\n\n def write_format_controls(self) -> Iterator[str]:\n for nb in dedup(sorted(self.format_control.no_binary)):\n yield f\"--no-binary {nb}\"\n for ob in dedup(sorted(self.format_control.only_binary)):\n yield f\"--only-binary {ob}\"\n\n def write_find_links(self) -> Iterator[str]:\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield f\"--find-links {find_link}\"\n\n def write_flags(self) -> Iterator[str]:\n if not self.emit_options:\n return\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results: set[InstallRequirement],\n unsafe_requirements: set[InstallRequirement],\n unsafe_packages: set[str],\n markers: dict[str, Marker],\n hashes: dict[InstallRequirement, set[str]] | None = None,\n ) -> Iterator[str]:\n # default values\n unsafe_packages = unsafe_packages if self.allow_unsafe else set()\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = unsafe_requirements or {\n r for r in results if r.name in unsafe_packages\n }\n packages = {r for r in results if r.name not in unsafe_packages}\n\n if packages:\n for ireq in sorted(packages, key=self._sort_key):\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in sorted(unsafe_requirements, key=self._sort_key):\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(f\"# {ireq_key}\")\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not 
yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results: set[InstallRequirement],\n unsafe_requirements: set[InstallRequirement],\n unsafe_packages: set[str],\n markers: dict[str, Marker],\n hashes: dict[InstallRequirement, set[str]] | None,\n ) -> None:\n if not self.dry_run:\n dst_file = io.TextIOWrapper(\n self.dst_file,\n encoding=\"utf8\",\n newline=self.linesep,\n line_buffering=True,\n )\n try:\n for line in self._iter_lines(\n results, unsafe_requirements, unsafe_packages, markers, hashes\n ):\n if self.dry_run:\n # Bypass the log level to always print this during a dry run\n log.log(line)\n else:\n log.info(line)\n dst_file.write(unstyle(line))\n dst_file.write(\"\\n\")\n finally:\n if not self.dry_run:\n dst_file.detach()\n\n def _format_requirement(\n self,\n ireq: InstallRequirement,\n marker: Marker | None = None,\n hashes: dict[InstallRequirement, set[str]] | None = None,\n ) -> str:\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n if self.strip_extras:\n line = strip_extras(line)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq.comes_from)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n\n # Filter out the origin install requirements for extras.\n # See https://github.com/jazzband/pip-tools/issues/2003\n if ireq.comes_from and (\n isinstance(ireq.comes_from, str) or ireq.comes_from.name != ireq.name\n ):\n required_by.add(_comes_from_as_string(ireq.comes_from))\n\n required_by |= set(getattr(ireq, \"_required_by\", set()))\n\n if required_by:\n if self.annotation_style == \"split\":\n annotation = annotation_style_split(required_by)\n sep = \"\\n \"\n elif self.annotation_style == \"line\":\n annotation = annotation_style_line(required_by)\n sep = \"\\n \" if ireq_hashes else \" \"\n else: # pragma: no cover\n raise ValueError(\"Invalid value for annotation style\")\n if self.strip_extras:\n annotation = strip_extras(annotation)\n # 24 is one reasonable column size to use here, that we've used in the past\n lines = f\"{line:24}{sep}{comment(annotation)}\".splitlines()\n line = \"\\n\".join(ln.rstrip() for ln in lines)\n\n return line\n", "path": "piptools/writer.py"}]}
| 4,078 | 352 |
gh_patches_debug_61519 | rasdani/github-patches | git_diff | open-mmlab__mmpose-1906 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
little config error in 1.x
mmpose/tree/1.x/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py
MobileNetV2's out_channels is 1280; however, the head's "in_channels" is 2048 in this config file.
</issue>
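A minimal sketch of the constraint being reported (values taken from the config listing below; MobileNetV2 at `widen_factor=1.` outputs 1280 channels at its final stage):

```python
# Sketch only: the relevant fragment of the model config, not a runnable training config.
backbone = dict(
    type='MobileNetV2',
    widen_factor=1.,
    out_indices=(7, ),  # MobileNetV2's final stage emits 1280 channels
)
head = dict(
    type='HeatmapHead',
    in_channels=1280,   # must match the backbone output width; 2048 is a ResNet-50-style value
    out_channels=21,
)
```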
<code>
[start of configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py]
1 _base_ = ['../../../_base_/default_runtime.py']
2
3 # runtime
4 train_cfg = dict(max_epochs=210, val_interval=10)
5
6 # optimizer
7 optim_wrapper = dict(optimizer=dict(
8 type='Adam',
9 lr=5e-4,
10 ))
11
12 # learning policy
13 param_scheduler = [
14 dict(
15 type='LinearLR', begin=0, end=500, start_factor=0.001,
16 by_epoch=False), # warm-up
17 dict(
18 type='MultiStepLR',
19 begin=0,
20 end=210,
21 milestones=[170, 200],
22 gamma=0.1,
23 by_epoch=True)
24 ]
25
26 # automatically scaling LR based on the actual training batch size
27 auto_scale_lr = dict(base_batch_size=256)
28
29 # hooks
30 default_hooks = dict(checkpoint=dict(save_best='AUC', rule='greater'))
31 # codec settings
32 codec = dict(
33 type='MSRAHeatmap', input_size=(256, 256), heatmap_size=(64, 64), sigma=2)
34
35 # model settings
36 model = dict(
37 type='TopdownPoseEstimator',
38 data_preprocessor=dict(
39 type='PoseDataPreprocessor',
40 mean=[123.675, 116.28, 103.53],
41 std=[58.395, 57.12, 57.375],
42 bgr_to_rgb=True),
43 backbone=dict(
44 type='MobileNetV2',
45 widen_factor=1.,
46 out_indices=(7, ),
47 init_cfg=dict(type='Pretrained', checkpoint='mmcls://mobilenet_v2')),
48 head=dict(
49 type='HeatmapHead',
50 in_channels=2048,
51 out_channels=21,
52 loss=dict(type='KeypointMSELoss', use_target_weight=True),
53 decoder=codec),
54 test_cfg=dict(
55 flip_test=True,
56 flip_mode='heatmap',
57 shift_heatmap=True,
58 ))
59
60 # base dataset settings
61 dataset_type = 'CocoWholeBodyHandDataset'
62 data_mode = 'topdown'
63 data_root = 'data/coco/'
64
65 # pipelines
66 train_pipeline = [
67 dict(type='LoadImage', file_client_args={{_base_.file_client_args}}),
68 dict(type='GetBBoxCenterScale'),
69 dict(
70 type='RandomBBoxTransform', rotate_factor=180,
71 scale_factor=(0.7, 1.3)),
72 dict(type='RandomFlip', direction='horizontal'),
73 dict(type='TopdownAffine', input_size=codec['input_size']),
74 dict(type='GenerateTarget', encoder=codec),
75 dict(type='PackPoseInputs')
76 ]
77 val_pipeline = [
78 dict(type='LoadImage', file_client_args={{_base_.file_client_args}}),
79 dict(type='GetBBoxCenterScale'),
80 dict(type='TopdownAffine', input_size=codec['input_size']),
81 dict(type='PackPoseInputs')
82 ]
83
84 # data loaders
85 train_dataloader = dict(
86 batch_size=32,
87 num_workers=2,
88 persistent_workers=True,
89 sampler=dict(type='DefaultSampler', shuffle=True),
90 dataset=dict(
91 type=dataset_type,
92 data_root=data_root,
93 data_mode=data_mode,
94 ann_file='annotations/coco_wholebody_train_v1.0.json',
95 data_prefix=dict(img='train2017/'),
96 pipeline=train_pipeline,
97 ))
98 val_dataloader = dict(
99 batch_size=32,
100 num_workers=2,
101 persistent_workers=True,
102 drop_last=False,
103 sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
104 dataset=dict(
105 type=dataset_type,
106 data_root=data_root,
107 data_mode=data_mode,
108 ann_file='annotations/coco_wholebody_val_v1.0.json',
109 data_prefix=dict(img='val2017/'),
110 test_mode=True,
111 pipeline=val_pipeline,
112 ))
113 test_dataloader = val_dataloader
114
115 val_evaluator = [
116 dict(type='PCKAccuracy', thr=0.2),
117 dict(type='AUC'),
118 dict(type='EPE')
119 ]
120 test_evaluator = val_evaluator
121
[end of configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py b/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py
--- a/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py
+++ b/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py
@@ -47,7 +47,7 @@
init_cfg=dict(type='Pretrained', checkpoint='mmcls://mobilenet_v2')),
head=dict(
type='HeatmapHead',
- in_channels=2048,
+ in_channels=1280,
out_channels=21,
loss=dict(type='KeypointMSELoss', use_target_weight=True),
decoder=codec),
|
{"golden_diff": "diff --git a/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py b/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py\n--- a/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py\n+++ b/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py\n@@ -47,7 +47,7 @@\n init_cfg=dict(type='Pretrained', checkpoint='mmcls://mobilenet_v2')),\n head=dict(\n type='HeatmapHead',\n- in_channels=2048,\n+ in_channels=1280,\n out_channels=21,\n loss=dict(type='KeypointMSELoss', use_target_weight=True),\n decoder=codec),\n", "issue": "little config error in 1.x\n\r\nmmpose/tree/1.x/configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand)/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py\r\n\r\nmobilenetv2 out_channels is 1280, however \"in_channles\" of the head is 2048 in this config file. \r\n\n", "before_files": [{"content": "_base_ = ['../../../_base_/default_runtime.py']\n\n# runtime\ntrain_cfg = dict(max_epochs=210, val_interval=10)\n\n# optimizer\noptim_wrapper = dict(optimizer=dict(\n type='Adam',\n lr=5e-4,\n))\n\n# learning policy\nparam_scheduler = [\n dict(\n type='LinearLR', begin=0, end=500, start_factor=0.001,\n by_epoch=False), # warm-up\n dict(\n type='MultiStepLR',\n begin=0,\n end=210,\n milestones=[170, 200],\n gamma=0.1,\n by_epoch=True)\n]\n\n# automatically scaling LR based on the actual training batch size\nauto_scale_lr = dict(base_batch_size=256)\n\n# hooks\ndefault_hooks = dict(checkpoint=dict(save_best='AUC', rule='greater'))\n# codec settings\ncodec = dict(\n type='MSRAHeatmap', input_size=(256, 256), heatmap_size=(64, 64), sigma=2)\n\n# model settings\nmodel = dict(\n type='TopdownPoseEstimator',\n data_preprocessor=dict(\n type='PoseDataPreprocessor',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n bgr_to_rgb=True),\n backbone=dict(\n type='MobileNetV2',\n widen_factor=1.,\n out_indices=(7, ),\n init_cfg=dict(type='Pretrained', checkpoint='mmcls://mobilenet_v2')),\n head=dict(\n type='HeatmapHead',\n in_channels=2048,\n out_channels=21,\n loss=dict(type='KeypointMSELoss', use_target_weight=True),\n decoder=codec),\n test_cfg=dict(\n flip_test=True,\n flip_mode='heatmap',\n shift_heatmap=True,\n ))\n\n# base dataset settings\ndataset_type = 'CocoWholeBodyHandDataset'\ndata_mode = 'topdown'\ndata_root = 'data/coco/'\n\n# pipelines\ntrain_pipeline = [\n dict(type='LoadImage', file_client_args={{_base_.file_client_args}}),\n dict(type='GetBBoxCenterScale'),\n dict(\n type='RandomBBoxTransform', rotate_factor=180,\n scale_factor=(0.7, 1.3)),\n dict(type='RandomFlip', direction='horizontal'),\n dict(type='TopdownAffine', input_size=codec['input_size']),\n dict(type='GenerateTarget', encoder=codec),\n dict(type='PackPoseInputs')\n]\nval_pipeline = [\n dict(type='LoadImage', file_client_args={{_base_.file_client_args}}),\n dict(type='GetBBoxCenterScale'),\n dict(type='TopdownAffine', input_size=codec['input_size']),\n dict(type='PackPoseInputs')\n]\n\n# data loaders\ntrain_dataloader = dict(\n batch_size=32,\n num_workers=2,\n persistent_workers=True,\n sampler=dict(type='DefaultSampler', shuffle=True),\n dataset=dict(\n type=dataset_type,\n data_root=data_root,\n data_mode=data_mode,\n ann_file='annotations/coco_wholebody_train_v1.0.json',\n 
data_prefix=dict(img='train2017/'),\n pipeline=train_pipeline,\n ))\nval_dataloader = dict(\n batch_size=32,\n num_workers=2,\n persistent_workers=True,\n drop_last=False,\n sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),\n dataset=dict(\n type=dataset_type,\n data_root=data_root,\n data_mode=data_mode,\n ann_file='annotations/coco_wholebody_val_v1.0.json',\n data_prefix=dict(img='val2017/'),\n test_mode=True,\n pipeline=val_pipeline,\n ))\ntest_dataloader = val_dataloader\n\nval_evaluator = [\n dict(type='PCKAccuracy', thr=0.2),\n dict(type='AUC'),\n dict(type='EPE')\n]\ntest_evaluator = val_evaluator\n", "path": "configs/hand_2d_keypoint/topdown_heatmap/coco_wholebody_hand/td-hm_mobilenetv2_8xb32-210e_coco-wholebody-hand-256x256.py"}]}
| 1,908 | 316 |
gh_patches_debug_28870 | rasdani/github-patches | git_diff | arviz-devs__arviz-1435 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plot_compare issue with pWAIC
The plot_compare method seems to add the pWAIC values to the in-sample deviance to get WAIC values regardless of scale (deviance or log). Shouldn't the pWAIC be subtracted in the log scale, where a higher score is better? Otherwise, for example, take two models m1 and m2 with the same in-sample deviance of 20: if m1 has a pWAIC of 10 and m2 has a pWAIC of 5, then m1's WAIC is 30 and m2's WAIC is 25, so m1 is preferred. However, with the same in-sample deviance the model with the lower pWAIC should be preferred, i.e. m2.
Example from my work:

I might be getting confused about this and my example isn't well explained, sorry.
</issue>
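The scale handling at issue can be summarised with a small stand-alone sketch (not ArviZ's API; the scale names follow ArviZ's `deviance`/`log`/`negative_log` convention). On the deviance scale the penalty enters as +2*pIC, so the in-sample point sits at IC - 2*pIC; on the log (elpd) scale the penalty is subtracted, so the in-sample point sits at IC + pIC.

```python
# Stand-alone sketch, not ArviZ code: recover the in-sample (penalty-free)
# score from a reported information-criterion value, for a given scale.
def insample_value(ic_value: float, p_ic: float, scale: str) -> float:
    if scale == "log":            # elpd scale, higher is better
        return ic_value + p_ic
    if scale == "negative_log":   # -elpd, lower is better
        return ic_value - p_ic
    if scale == "deviance":       # -2 * elpd, lower is better
        return ic_value - 2 * p_ic
    raise ValueError(f"unknown scale: {scale!r}")


print(insample_value(-15.0, 5.0, "log"))  # -10.0: the in-sample log score (lppd)
```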
<code>
[start of arviz/plots/backends/bokeh/compareplot.py]
1 """Bokeh Compareplot."""
2 from bokeh.models import Span
3
4 from ...plot_utils import _scale_fig_size
5 from .. import show_layout
6 from . import backend_kwarg_defaults, create_axes_grid
7
8
9 def plot_compare(
10 ax,
11 comp_df,
12 figsize,
13 plot_ic_diff,
14 plot_standard_error,
15 insample_dev,
16 yticks_pos,
17 yticks_labels,
18 plot_kwargs,
19 textsize,
20 information_criterion,
21 step,
22 backend_kwargs,
23 show,
24 ):
25 """Bokeh compareplot."""
26 if backend_kwargs is None:
27 backend_kwargs = {}
28
29 backend_kwargs = {
30 **backend_kwarg_defaults(),
31 **backend_kwargs,
32 }
33
34 figsize, _, _, _, line_width, _ = _scale_fig_size(figsize, textsize, 1, 1)
35
36 if ax is None:
37 ax = create_axes_grid(
38 1,
39 figsize=figsize,
40 squeeze=True,
41 backend_kwargs=backend_kwargs,
42 )
43
44 yticks_pos = list(yticks_pos)
45
46 if plot_ic_diff:
47 yticks_labels[0] = comp_df.index[0]
48 yticks_labels[2::2] = comp_df.index[1:]
49
50 ax.yaxis.ticker = yticks_pos
51 ax.yaxis.major_label_overrides = {
52 dtype(key): value
53 for key, value in zip(yticks_pos, yticks_labels)
54 for dtype in (int, float)
55 if (dtype(key) - key == 0)
56 }
57
58 # create the coordinates for the errorbars
59 err_xs = []
60 err_ys = []
61
62 for x, y, xerr in zip(
63 comp_df[information_criterion].iloc[1:], yticks_pos[1::2], comp_df.dse[1:]
64 ):
65 err_xs.append((x - xerr, x + xerr))
66 err_ys.append((y, y))
67
68 # plot them
69 ax.triangle(
70 comp_df[information_criterion].iloc[1:],
71 yticks_pos[1::2],
72 line_color=plot_kwargs.get("color_dse", "grey"),
73 fill_color=plot_kwargs.get("color_dse", "grey"),
74 line_width=2,
75 size=6,
76 )
77 ax.multi_line(err_xs, err_ys, line_color=plot_kwargs.get("color_dse", "grey"))
78
79 else:
80 yticks_labels = comp_df.index
81 ax.yaxis.ticker = yticks_pos[::2]
82 ax.yaxis.major_label_overrides = {
83 key: value for key, value in zip(yticks_pos[::2], yticks_labels)
84 }
85
86 ax.circle(
87 comp_df[information_criterion],
88 yticks_pos[::2],
89 line_color=plot_kwargs.get("color_ic", "black"),
90 fill_color=None,
91 line_width=2,
92 size=6,
93 )
94
95 if plot_standard_error:
96 # create the coordinates for the errorbars
97 err_xs = []
98 err_ys = []
99
100 for x, y, xerr in zip(comp_df[information_criterion], yticks_pos[::2], comp_df.se):
101 err_xs.append((x - xerr, x + xerr))
102 err_ys.append((y, y))
103
104 # plot them
105 ax.multi_line(err_xs, err_ys, line_color=plot_kwargs.get("color_ic", "black"))
106
107 if insample_dev:
108 ax.circle(
109 comp_df[information_criterion] - (2 * comp_df["p_" + information_criterion]),
110 yticks_pos[::2],
111 line_color=plot_kwargs.get("color_insample_dev", "black"),
112 fill_color=plot_kwargs.get("color_insample_dev", "black"),
113 line_width=2,
114 size=6,
115 )
116
117 vline = Span(
118 location=comp_df[information_criterion].iloc[0],
119 dimension="height",
120 line_color=plot_kwargs.get("color_ls_min_ic", "grey"),
121 line_width=line_width,
122 line_dash=plot_kwargs.get("ls_min_ic", "dashed"),
123 )
124
125 ax.renderers.append(vline)
126
127 scale_col = information_criterion + "_scale"
128 if scale_col in comp_df:
129 scale = comp_df[scale_col].iloc[0].capitalize()
130 else:
131 scale = "Deviance"
132 ax.xaxis.axis_label = scale
133 ax.y_range._property_values["start"] = -1 + step # pylint: disable=protected-access
134 ax.y_range._property_values["end"] = 0 - step # pylint: disable=protected-access
135
136 show_layout(ax, show)
137
138 return ax
139
[end of arviz/plots/backends/bokeh/compareplot.py]
[start of arviz/plots/backends/matplotlib/compareplot.py]
1 """Matplotlib Compareplot."""
2 import matplotlib.pyplot as plt
3
4 from ...plot_utils import _scale_fig_size
5 from . import backend_kwarg_defaults, backend_show, create_axes_grid
6
7
8 def plot_compare(
9 ax,
10 comp_df,
11 figsize,
12 plot_ic_diff,
13 plot_standard_error,
14 insample_dev,
15 yticks_pos,
16 yticks_labels,
17 plot_kwargs,
18 information_criterion,
19 textsize,
20 step,
21 backend_kwargs,
22 show,
23 ):
24 """Matplotlib compare plot."""
25 if backend_kwargs is None:
26 backend_kwargs = {}
27
28 backend_kwargs = {
29 **backend_kwarg_defaults(),
30 **backend_kwargs,
31 }
32
33 if figsize is None:
34 figsize = (6, len(comp_df))
35
36 figsize, ax_labelsize, _, xt_labelsize, linewidth, _ = _scale_fig_size(figsize, textsize, 1, 1)
37
38 backend_kwargs.setdefault("figsize", figsize)
39 backend_kwargs["squeeze"] = True
40
41 if ax is None:
42 _, ax = create_axes_grid(1, backend_kwargs=backend_kwargs)
43
44 if plot_ic_diff:
45 yticks_labels[0] = comp_df.index[0]
46 yticks_labels[2::2] = comp_df.index[1:]
47 ax.set_yticks(yticks_pos)
48 ax.errorbar(
49 x=comp_df[information_criterion].iloc[1:],
50 y=yticks_pos[1::2],
51 xerr=comp_df.dse[1:],
52 color=plot_kwargs.get("color_dse", "grey"),
53 fmt=plot_kwargs.get("marker_dse", "^"),
54 mew=linewidth,
55 elinewidth=linewidth,
56 )
57
58 else:
59 yticks_labels = comp_df.index
60 ax.set_yticks(yticks_pos[::2])
61
62 if plot_standard_error:
63 ax.errorbar(
64 x=comp_df[information_criterion],
65 y=yticks_pos[::2],
66 xerr=comp_df.se,
67 color=plot_kwargs.get("color_ic", "k"),
68 fmt=plot_kwargs.get("marker_ic", "o"),
69 mfc="None",
70 mew=linewidth,
71 lw=linewidth,
72 )
73 else:
74 ax.plot(
75 comp_df[information_criterion],
76 yticks_pos[::2],
77 color=plot_kwargs.get("color_ic", "k"),
78 marker=plot_kwargs.get("marker_ic", "o"),
79 mfc="None",
80 mew=linewidth,
81 lw=0,
82 )
83
84 if insample_dev:
85 ax.plot(
86 comp_df[information_criterion] - (2 * comp_df["p_" + information_criterion]),
87 yticks_pos[::2],
88 color=plot_kwargs.get("color_insample_dev", "k"),
89 marker=plot_kwargs.get("marker_insample_dev", "o"),
90 mew=linewidth,
91 lw=0,
92 )
93
94 ax.axvline(
95 comp_df[information_criterion].iloc[0],
96 ls=plot_kwargs.get("ls_min_ic", "--"),
97 color=plot_kwargs.get("color_ls_min_ic", "grey"),
98 lw=linewidth,
99 )
100
101 scale_col = information_criterion + "_scale"
102 if scale_col in comp_df:
103 scale = comp_df[scale_col].iloc[0].capitalize()
104 else:
105 scale = "Deviance"
106 ax.set_xlabel(scale, fontsize=ax_labelsize)
107 ax.set_yticklabels(yticks_labels)
108 ax.set_ylim(-1 + step, 0 - step)
109 ax.tick_params(labelsize=xt_labelsize)
110
111 if backend_show(show):
112 plt.show()
113
114 return ax
115
[end of arviz/plots/backends/matplotlib/compareplot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/arviz/plots/backends/bokeh/compareplot.py b/arviz/plots/backends/bokeh/compareplot.py
--- a/arviz/plots/backends/bokeh/compareplot.py
+++ b/arviz/plots/backends/bokeh/compareplot.py
@@ -105,8 +105,16 @@
ax.multi_line(err_xs, err_ys, line_color=plot_kwargs.get("color_ic", "black"))
if insample_dev:
+ scale = comp_df[f"{information_criterion}_scale"][0]
+ p_ic = comp_df[f"p_{information_criterion}"]
+ if scale == "log":
+ correction = p_ic
+ elif scale == "negative_log":
+ correction = -p_ic
+ elif scale == "deviance":
+ correction = -(2 * p_ic)
ax.circle(
- comp_df[information_criterion] - (2 * comp_df["p_" + information_criterion]),
+ comp_df[information_criterion] + correction,
yticks_pos[::2],
line_color=plot_kwargs.get("color_insample_dev", "black"),
fill_color=plot_kwargs.get("color_insample_dev", "black"),
diff --git a/arviz/plots/backends/matplotlib/compareplot.py b/arviz/plots/backends/matplotlib/compareplot.py
--- a/arviz/plots/backends/matplotlib/compareplot.py
+++ b/arviz/plots/backends/matplotlib/compareplot.py
@@ -82,8 +82,16 @@
)
if insample_dev:
+ scale = comp_df[f"{information_criterion}_scale"][0]
+ p_ic = comp_df[f"p_{information_criterion}"]
+ if scale == "log":
+ correction = p_ic
+ elif scale == "negative_log":
+ correction = -p_ic
+ elif scale == "deviance":
+ correction = -(2 * p_ic)
ax.plot(
- comp_df[information_criterion] - (2 * comp_df["p_" + information_criterion]),
+ comp_df[information_criterion] + correction,
yticks_pos[::2],
color=plot_kwargs.get("color_insample_dev", "k"),
marker=plot_kwargs.get("marker_insample_dev", "o"),
|
{"golden_diff": "diff --git a/arviz/plots/backends/bokeh/compareplot.py b/arviz/plots/backends/bokeh/compareplot.py\n--- a/arviz/plots/backends/bokeh/compareplot.py\n+++ b/arviz/plots/backends/bokeh/compareplot.py\n@@ -105,8 +105,16 @@\n ax.multi_line(err_xs, err_ys, line_color=plot_kwargs.get(\"color_ic\", \"black\"))\n \n if insample_dev:\n+ scale = comp_df[f\"{information_criterion}_scale\"][0]\n+ p_ic = comp_df[f\"p_{information_criterion}\"]\n+ if scale == \"log\":\n+ correction = p_ic\n+ elif scale == \"negative_log\":\n+ correction = -p_ic\n+ elif scale == \"deviance\":\n+ correction = -(2 * p_ic)\n ax.circle(\n- comp_df[information_criterion] - (2 * comp_df[\"p_\" + information_criterion]),\n+ comp_df[information_criterion] + correction,\n yticks_pos[::2],\n line_color=plot_kwargs.get(\"color_insample_dev\", \"black\"),\n fill_color=plot_kwargs.get(\"color_insample_dev\", \"black\"),\ndiff --git a/arviz/plots/backends/matplotlib/compareplot.py b/arviz/plots/backends/matplotlib/compareplot.py\n--- a/arviz/plots/backends/matplotlib/compareplot.py\n+++ b/arviz/plots/backends/matplotlib/compareplot.py\n@@ -82,8 +82,16 @@\n )\n \n if insample_dev:\n+ scale = comp_df[f\"{information_criterion}_scale\"][0]\n+ p_ic = comp_df[f\"p_{information_criterion}\"]\n+ if scale == \"log\":\n+ correction = p_ic\n+ elif scale == \"negative_log\":\n+ correction = -p_ic\n+ elif scale == \"deviance\":\n+ correction = -(2 * p_ic)\n ax.plot(\n- comp_df[information_criterion] - (2 * comp_df[\"p_\" + information_criterion]),\n+ comp_df[information_criterion] + correction,\n yticks_pos[::2],\n color=plot_kwargs.get(\"color_insample_dev\", \"k\"),\n marker=plot_kwargs.get(\"marker_insample_dev\", \"o\"),\n", "issue": "plot_compare issue with pWAIC\nThe plot_compare method seems to add the pWAIC values to the in-sample deviance to get WAIC values regardless of scale (deviance or log). Shouldn't the pWAIC be subtracted in the log scale, where a higher score is better? Otherwise, for example with two models m1 and m2 with the same in-sample deviance of 20: if m1 has pWAIC of 10, m2 has pWAIC of 5 then m1 WAIC is 30 and m2 WAIC is 25 so m1 is preferred. However, with the same in-sample deviance the model with the lower pWAIC should be preferred i.e. m2.\r\n\r\nExample from my work:\r\n\r\n\r\n\r\nI might be getting confused about this and my example isn't well explained, sorry.\n", "before_files": [{"content": "\"\"\"Bokeh Compareplot.\"\"\"\nfrom bokeh.models import Span\n\nfrom ...plot_utils import _scale_fig_size\nfrom .. import show_layout\nfrom . 
import backend_kwarg_defaults, create_axes_grid\n\n\ndef plot_compare(\n ax,\n comp_df,\n figsize,\n plot_ic_diff,\n plot_standard_error,\n insample_dev,\n yticks_pos,\n yticks_labels,\n plot_kwargs,\n textsize,\n information_criterion,\n step,\n backend_kwargs,\n show,\n):\n \"\"\"Bokeh compareplot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {\n **backend_kwarg_defaults(),\n **backend_kwargs,\n }\n\n figsize, _, _, _, line_width, _ = _scale_fig_size(figsize, textsize, 1, 1)\n\n if ax is None:\n ax = create_axes_grid(\n 1,\n figsize=figsize,\n squeeze=True,\n backend_kwargs=backend_kwargs,\n )\n\n yticks_pos = list(yticks_pos)\n\n if plot_ic_diff:\n yticks_labels[0] = comp_df.index[0]\n yticks_labels[2::2] = comp_df.index[1:]\n\n ax.yaxis.ticker = yticks_pos\n ax.yaxis.major_label_overrides = {\n dtype(key): value\n for key, value in zip(yticks_pos, yticks_labels)\n for dtype in (int, float)\n if (dtype(key) - key == 0)\n }\n\n # create the coordinates for the errorbars\n err_xs = []\n err_ys = []\n\n for x, y, xerr in zip(\n comp_df[information_criterion].iloc[1:], yticks_pos[1::2], comp_df.dse[1:]\n ):\n err_xs.append((x - xerr, x + xerr))\n err_ys.append((y, y))\n\n # plot them\n ax.triangle(\n comp_df[information_criterion].iloc[1:],\n yticks_pos[1::2],\n line_color=plot_kwargs.get(\"color_dse\", \"grey\"),\n fill_color=plot_kwargs.get(\"color_dse\", \"grey\"),\n line_width=2,\n size=6,\n )\n ax.multi_line(err_xs, err_ys, line_color=plot_kwargs.get(\"color_dse\", \"grey\"))\n\n else:\n yticks_labels = comp_df.index\n ax.yaxis.ticker = yticks_pos[::2]\n ax.yaxis.major_label_overrides = {\n key: value for key, value in zip(yticks_pos[::2], yticks_labels)\n }\n\n ax.circle(\n comp_df[information_criterion],\n yticks_pos[::2],\n line_color=plot_kwargs.get(\"color_ic\", \"black\"),\n fill_color=None,\n line_width=2,\n size=6,\n )\n\n if plot_standard_error:\n # create the coordinates for the errorbars\n err_xs = []\n err_ys = []\n\n for x, y, xerr in zip(comp_df[information_criterion], yticks_pos[::2], comp_df.se):\n err_xs.append((x - xerr, x + xerr))\n err_ys.append((y, y))\n\n # plot them\n ax.multi_line(err_xs, err_ys, line_color=plot_kwargs.get(\"color_ic\", \"black\"))\n\n if insample_dev:\n ax.circle(\n comp_df[information_criterion] - (2 * comp_df[\"p_\" + information_criterion]),\n yticks_pos[::2],\n line_color=plot_kwargs.get(\"color_insample_dev\", \"black\"),\n fill_color=plot_kwargs.get(\"color_insample_dev\", \"black\"),\n line_width=2,\n size=6,\n )\n\n vline = Span(\n location=comp_df[information_criterion].iloc[0],\n dimension=\"height\",\n line_color=plot_kwargs.get(\"color_ls_min_ic\", \"grey\"),\n line_width=line_width,\n line_dash=plot_kwargs.get(\"ls_min_ic\", \"dashed\"),\n )\n\n ax.renderers.append(vline)\n\n scale_col = information_criterion + \"_scale\"\n if scale_col in comp_df:\n scale = comp_df[scale_col].iloc[0].capitalize()\n else:\n scale = \"Deviance\"\n ax.xaxis.axis_label = scale\n ax.y_range._property_values[\"start\"] = -1 + step # pylint: disable=protected-access\n ax.y_range._property_values[\"end\"] = 0 - step # pylint: disable=protected-access\n\n show_layout(ax, show)\n\n return ax\n", "path": "arviz/plots/backends/bokeh/compareplot.py"}, {"content": "\"\"\"Matplotlib Compareplot.\"\"\"\nimport matplotlib.pyplot as plt\n\nfrom ...plot_utils import _scale_fig_size\nfrom . 
import backend_kwarg_defaults, backend_show, create_axes_grid\n\n\ndef plot_compare(\n ax,\n comp_df,\n figsize,\n plot_ic_diff,\n plot_standard_error,\n insample_dev,\n yticks_pos,\n yticks_labels,\n plot_kwargs,\n information_criterion,\n textsize,\n step,\n backend_kwargs,\n show,\n):\n \"\"\"Matplotlib compare plot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {\n **backend_kwarg_defaults(),\n **backend_kwargs,\n }\n\n if figsize is None:\n figsize = (6, len(comp_df))\n\n figsize, ax_labelsize, _, xt_labelsize, linewidth, _ = _scale_fig_size(figsize, textsize, 1, 1)\n\n backend_kwargs.setdefault(\"figsize\", figsize)\n backend_kwargs[\"squeeze\"] = True\n\n if ax is None:\n _, ax = create_axes_grid(1, backend_kwargs=backend_kwargs)\n\n if plot_ic_diff:\n yticks_labels[0] = comp_df.index[0]\n yticks_labels[2::2] = comp_df.index[1:]\n ax.set_yticks(yticks_pos)\n ax.errorbar(\n x=comp_df[information_criterion].iloc[1:],\n y=yticks_pos[1::2],\n xerr=comp_df.dse[1:],\n color=plot_kwargs.get(\"color_dse\", \"grey\"),\n fmt=plot_kwargs.get(\"marker_dse\", \"^\"),\n mew=linewidth,\n elinewidth=linewidth,\n )\n\n else:\n yticks_labels = comp_df.index\n ax.set_yticks(yticks_pos[::2])\n\n if plot_standard_error:\n ax.errorbar(\n x=comp_df[information_criterion],\n y=yticks_pos[::2],\n xerr=comp_df.se,\n color=plot_kwargs.get(\"color_ic\", \"k\"),\n fmt=plot_kwargs.get(\"marker_ic\", \"o\"),\n mfc=\"None\",\n mew=linewidth,\n lw=linewidth,\n )\n else:\n ax.plot(\n comp_df[information_criterion],\n yticks_pos[::2],\n color=plot_kwargs.get(\"color_ic\", \"k\"),\n marker=plot_kwargs.get(\"marker_ic\", \"o\"),\n mfc=\"None\",\n mew=linewidth,\n lw=0,\n )\n\n if insample_dev:\n ax.plot(\n comp_df[information_criterion] - (2 * comp_df[\"p_\" + information_criterion]),\n yticks_pos[::2],\n color=plot_kwargs.get(\"color_insample_dev\", \"k\"),\n marker=plot_kwargs.get(\"marker_insample_dev\", \"o\"),\n mew=linewidth,\n lw=0,\n )\n\n ax.axvline(\n comp_df[information_criterion].iloc[0],\n ls=plot_kwargs.get(\"ls_min_ic\", \"--\"),\n color=plot_kwargs.get(\"color_ls_min_ic\", \"grey\"),\n lw=linewidth,\n )\n\n scale_col = information_criterion + \"_scale\"\n if scale_col in comp_df:\n scale = comp_df[scale_col].iloc[0].capitalize()\n else:\n scale = \"Deviance\"\n ax.set_xlabel(scale, fontsize=ax_labelsize)\n ax.set_yticklabels(yticks_labels)\n ax.set_ylim(-1 + step, 0 - step)\n ax.tick_params(labelsize=xt_labelsize)\n\n if backend_show(show):\n plt.show()\n\n return ax\n", "path": "arviz/plots/backends/matplotlib/compareplot.py"}]}
| 3,170 | 514 |
gh_patches_debug_40016 | rasdani/github-patches | git_diff | pantsbuild__pants-15907 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adjust plugin thread-local APIs to account for `WorkunitStore` state
When `eager_fetch=False`, it's possible that a workunit's "artifacts" contain `Digest`s which haven't actually been fetched. When that's the case for a `Digest`, and a `StreamingWorkunit` plugin is using any of the [context methods which fetch files](https://github.com/pantsbuild/pants/blob/1d8205538a2932badcc1738fb1288600908b01e1/src/python/pants/engine/streaming_workunit_handler.py#L55-L69) from a background thread, they will encounter a:
> A WorkunitStore has not been set for this thread.
...error. That's because our existing `native_engine.stdio_thread_set_destination` statics only set the thread-local `stdio` state, but not our workunit state.
----
To fix this, we should adjust the existing method to additionally set the workunit store. But we should also deprecate the existing method and add a new one with a more accurate name (replacing #12295).
</issue>
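The propagation pattern the issue asks for looks roughly like the sketch below. It is illustrative only: it mirrors how the handler already forwards the stdio destination, and it assumes a single thread-locals handle carrying both the stdio and workunit-store state. The state is captured on the thread that constructs the worker and restored as the first action of `run()`.

```python
# Illustrative sketch, not the real pants API: propagate engine thread-local
# state (stdio destination and WorkunitStore) into a plugin's background thread.
import threading


class PluginBackgroundThread(threading.Thread):
    def __init__(self, context, thread_locals):
        super().__init__(daemon=True)
        self._context = context
        # Captured on the parent thread, before start(), while the store is set.
        self._thread_locals = thread_locals

    def run(self) -> None:
        # Restore the parent's state first, so context methods that fetch
        # digests can find the WorkunitStore for this thread.
        self._thread_locals.set_for_current_thread()
        ...  # now safe to call self._context.single_file_digests_to_bytes(...)
```

Capturing in the constructor rather than in `run()` matters because the constructor still executes on the parent thread, where the state is available.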
<code>
[start of src/python/pants/engine/streaming_workunit_handler.py]
1 # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import logging
7 import threading
8 from abc import ABC, abstractmethod
9 from dataclasses import dataclass
10 from typing import Any, Callable, Iterable, Sequence, Tuple
11
12 from pants.base.specs import Specs
13 from pants.engine.addresses import Addresses
14 from pants.engine.fs import Digest, DigestContents, FileDigest, Snapshot
15 from pants.engine.internals import native_engine
16 from pants.engine.internals.scheduler import SchedulerSession, Workunit
17 from pants.engine.internals.selectors import Params
18 from pants.engine.rules import Get, MultiGet, QueryRule, collect_rules, rule
19 from pants.engine.target import Targets
20 from pants.engine.unions import UnionMembership, union
21 from pants.goal.run_tracker import RunTracker
22 from pants.option.options_bootstrapper import OptionsBootstrapper
23 from pants.util.logging import LogLevel
24
25 logger = logging.getLogger(__name__)
26
27
28 # -----------------------------------------------------------------------------------------------
29 # Streaming workunits plugin API
30 # -----------------------------------------------------------------------------------------------
31
32
33 @dataclass(frozen=True)
34 class TargetInfo:
35 filename: str
36
37
38 @dataclass(frozen=True)
39 class ExpandedSpecs:
40 targets: dict[str, list[TargetInfo]]
41
42
43 @dataclass(frozen=True)
44 class StreamingWorkunitContext:
45 _scheduler: SchedulerSession
46 _run_tracker: RunTracker
47 _specs: Specs
48 _options_bootstrapper: OptionsBootstrapper
49
50 @property
51 def run_tracker(self) -> RunTracker:
52 """Returns the RunTracker for the current run of Pants."""
53 return self._run_tracker
54
55 def single_file_digests_to_bytes(self, digests: Sequence[FileDigest]) -> list[bytes]:
56 """Return `bytes` for each `FileDigest`."""
57 return self._scheduler.single_file_digests_to_bytes(digests)
58
59 def snapshots_to_file_contents(
60 self, snapshots: Sequence[Snapshot]
61 ) -> tuple[DigestContents, ...]:
62 """Given a sequence of Snapshot objects, return a tuple of DigestContents representing the
63 files contained in those `Snapshot`s in sequence."""
64 return self._scheduler.snapshots_to_file_contents(snapshots)
65
66 def ensure_remote_has_recursive(self, digests: Sequence[Digest | FileDigest]) -> None:
67 """Invoke the internal ensure_remote_has_recursive function, which ensures that a remote
68 ByteStore, if it exists, has a copy of the files fingerprinted by each Digest."""
69 return self._scheduler.ensure_remote_has_recursive(digests)
70
71 def get_metrics(self) -> dict[str, int]:
72 """Invoke the internal get_metrics function, which returns metrics for the Session."""
73 return self._scheduler.get_metrics()
74
75 def get_observation_histograms(self) -> dict[str, Any]:
76 """Invoke the internal get_observation_histograms function, which serializes histograms
77 generated from Pants-internal observation metrics observed during the current run of Pants.
78
79 These metrics are useful for debugging Pants internals.
80 """
81 return self._scheduler.get_observation_histograms()
82
83 def get_expanded_specs(self) -> ExpandedSpecs:
84 """Return a dict containing the canonicalized addresses of the specs for this run, and what
85 files they expand to."""
86
87 (unexpanded_addresses,) = self._scheduler.product_request(
88 Addresses, [Params(self._specs, self._options_bootstrapper)]
89 )
90
91 expanded_targets = self._scheduler.product_request(
92 Targets, [Params(Addresses([addr])) for addr in unexpanded_addresses]
93 )
94 targets_dict: dict[str, list[TargetInfo]] = {}
95 for addr, targets in zip(unexpanded_addresses, expanded_targets):
96 targets_dict[addr.spec] = [
97 TargetInfo(
98 filename=(
99 tgt.address.filename if tgt.address.is_file_target else str(tgt.address)
100 )
101 )
102 for tgt in targets
103 ]
104 return ExpandedSpecs(targets=targets_dict)
105
106
107 class WorkunitsCallback(ABC):
108 @abstractmethod
109 def __call__(
110 self,
111 *,
112 started_workunits: tuple[Workunit, ...],
113 completed_workunits: tuple[Workunit, ...],
114 finished: bool,
115 context: StreamingWorkunitContext,
116 ) -> None:
117 """
118 :started_workunits: Workunits that have started but not completed.
119 :completed_workunits: Workunits that have completed.
120 :finished: True when the last chunk of workunit data is reported to the callback.
121 :context: A context providing access to functionality relevant to the run.
122 """
123
124 @property
125 @abstractmethod
126 def can_finish_async(self) -> bool:
127 """Can this callback finish its work in the background after the Pants run has already
128 completed?
129
130 The main reason to `return False` is if your callback logs in its final call, when
131 `finished=True`, as it may end up logging to `.pantsd.d/pants.log` instead of the console,
132 which is harder for users to find. Otherwise, most callbacks should return `True` to avoid
133 slowing down Pants from finishing the run.
134 """
135
136
137 @dataclass(frozen=True)
138 class WorkunitsCallbackFactory:
139 """A wrapper around a callable that constructs WorkunitsCallbacks.
140
141 NB: This extra wrapping is because subtyping is not supported in the return position of a
142 rule. See #11354 for discussion of that limitation.
143 """
144
145 callback_factory: Callable[[], WorkunitsCallback | None]
146
147
148 class WorkunitsCallbackFactories(Tuple[WorkunitsCallbackFactory, ...]):
149 """A list of registered factories for WorkunitsCallback instances."""
150
151
152 @union
153 class WorkunitsCallbackFactoryRequest:
154 """A request for a particular WorkunitsCallbackFactory."""
155
156
157 @rule
158 async def construct_workunits_callback_factories(
159 union_membership: UnionMembership,
160 ) -> WorkunitsCallbackFactories:
161 request_types = union_membership.get(WorkunitsCallbackFactoryRequest)
162 workunit_callback_factories = await MultiGet(
163 Get(WorkunitsCallbackFactory, WorkunitsCallbackFactoryRequest, request_type())
164 for request_type in request_types
165 )
166 return WorkunitsCallbackFactories(workunit_callback_factories)
167
168
169 # -----------------------------------------------------------------------------------------------
170 # Streaming workunits handler
171 # -----------------------------------------------------------------------------------------------
172
173
174 class StreamingWorkunitHandler:
175 """Periodically calls each registered WorkunitsCallback in a dedicated thread.
176
177 This class should be used as a context manager.
178 """
179
180 def __init__(
181 self,
182 scheduler: SchedulerSession,
183 run_tracker: RunTracker,
184 callbacks: Iterable[WorkunitsCallback],
185 options_bootstrapper: OptionsBootstrapper,
186 specs: Specs,
187 report_interval_seconds: float,
188 allow_async_completion: bool,
189 max_workunit_verbosity: LogLevel,
190 ) -> None:
191 scheduler = scheduler.isolated_shallow_clone("streaming_workunit_handler_session")
192 self.callbacks = callbacks
193 self.context = StreamingWorkunitContext(
194 _scheduler=scheduler,
195 _run_tracker=run_tracker,
196 _specs=specs,
197 _options_bootstrapper=options_bootstrapper,
198 )
199 self.thread_runner = (
200 _InnerHandler(
201 scheduler=scheduler,
202 context=self.context,
203 callbacks=self.callbacks,
204 report_interval=report_interval_seconds,
205 # TODO(10092) The max verbosity should be a per-client setting, rather than a global
206 # setting.
207 max_workunit_verbosity=max_workunit_verbosity,
208 allow_async_completion=allow_async_completion,
209 )
210 if callbacks
211 else None
212 )
213
214 def __enter__(self) -> None:
215 if not self.thread_runner:
216 return
217 self.thread_runner.start()
218
219 def __exit__(self, exc_type, exc_value, traceback) -> None:
220 if not self.thread_runner:
221 return
222 self.thread_runner.end()
223 if exc_type is not None:
224 self.thread_runner.join()
225
226
227 class _InnerHandler(threading.Thread):
228 def __init__(
229 self,
230 scheduler: Any,
231 context: StreamingWorkunitContext,
232 callbacks: Iterable[WorkunitsCallback],
233 report_interval: float,
234 max_workunit_verbosity: LogLevel,
235 allow_async_completion: bool,
236 ) -> None:
237 super().__init__(daemon=True)
238 self.scheduler = scheduler
239 self.context = context
240 self.stop_request = threading.Event()
241 self.report_interval = report_interval
242 self.callbacks = callbacks
243 self.max_workunit_verbosity = max_workunit_verbosity
244 # TODO: Have a thread per callback so that some callbacks can always finish async even
245 # if others must be finished synchronously.
246 self.block_until_complete = not allow_async_completion or any(
247 callback.can_finish_async is False for callback in self.callbacks
248 )
249 # Get the parent thread's logging destination. Note that this thread has not yet started
250 # as we are only in the constructor.
251 self.logging_destination = native_engine.stdio_thread_get_destination()
252
253 def poll_workunits(self, *, finished: bool) -> None:
254 workunits = self.scheduler.poll_workunits(self.max_workunit_verbosity)
255 for callback in self.callbacks:
256 callback(
257 started_workunits=workunits["started"],
258 completed_workunits=workunits["completed"],
259 finished=finished,
260 context=self.context,
261 )
262
263 def run(self) -> None:
264 # First, set the thread's logging destination to the parent thread's, meaning the console.
265 native_engine.stdio_thread_set_destination(self.logging_destination)
266 while not self.stop_request.isSet(): # type: ignore[attr-defined]
267 self.poll_workunits(finished=False)
268 self.stop_request.wait(timeout=self.report_interval)
269 else:
270 # Make one final call. Note that this may run after the Pants run has already
271 # completed, depending on whether the thread was joined or not.
272 self.poll_workunits(finished=True)
273
274 def end(self) -> None:
275 self.stop_request.set()
276 if self.block_until_complete:
277 logger.debug(
278 "Async completion is disabled: waiting for workunit callbacks to complete..."
279 )
280 super().join()
281 else:
282 logger.debug(
283 "Async completion is enabled: workunit callbacks will complete in the background."
284 )
285
286
287 def rules():
288 return [
289 QueryRule(WorkunitsCallbackFactories, (UnionMembership,)),
290 QueryRule(Targets, (Addresses,)),
291 QueryRule(Addresses, (Specs, OptionsBootstrapper)),
292 *collect_rules(),
293 ]
294
[end of src/python/pants/engine/streaming_workunit_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/python/pants/engine/streaming_workunit_handler.py b/src/python/pants/engine/streaming_workunit_handler.py
--- a/src/python/pants/engine/streaming_workunit_handler.py
+++ b/src/python/pants/engine/streaming_workunit_handler.py
@@ -12,7 +12,7 @@
from pants.base.specs import Specs
from pants.engine.addresses import Addresses
from pants.engine.fs import Digest, DigestContents, FileDigest, Snapshot
-from pants.engine.internals import native_engine
+from pants.engine.internals.native_engine import PyThreadLocals
from pants.engine.internals.scheduler import SchedulerSession, Workunit
from pants.engine.internals.selectors import Params
from pants.engine.rules import Get, MultiGet, QueryRule, collect_rules, rule
@@ -30,6 +30,24 @@
# -----------------------------------------------------------------------------------------------
+def thread_locals_get_for_current_thread() -> PyThreadLocals:
+ """Gets the engine's thread local state for the current thread.
+
+ In order to safely use StreamingWorkunitContext methods from additional threads,
+ StreamingWorkunit plugins should propagate thread local state from the threads that they are
+ initialized on to any additional threads that they spawn.
+ """
+ return PyThreadLocals.get_for_current_thread()
+
+
+def thread_locals_set_for_current_thread(thread_locals: PyThreadLocals) -> None:
+ """Sets the engine's thread local state for the current thread.
+
+ See `thread_locals_get`.
+ """
+ thread_locals.set_for_current_thread()
+
+
@dataclass(frozen=True)
class TargetInfo:
filename: str
@@ -246,9 +264,9 @@
self.block_until_complete = not allow_async_completion or any(
callback.can_finish_async is False for callback in self.callbacks
)
- # Get the parent thread's logging destination. Note that this thread has not yet started
+ # Get the parent thread's thread locals. Note that this thread has not yet started
# as we are only in the constructor.
- self.logging_destination = native_engine.stdio_thread_get_destination()
+ self.thread_locals = PyThreadLocals.get_for_current_thread()
def poll_workunits(self, *, finished: bool) -> None:
workunits = self.scheduler.poll_workunits(self.max_workunit_verbosity)
@@ -261,8 +279,9 @@
)
def run(self) -> None:
- # First, set the thread's logging destination to the parent thread's, meaning the console.
- native_engine.stdio_thread_set_destination(self.logging_destination)
+ # First, set the thread's thread locals to the parent thread's in order to propagate the
+ # console, workunit stores, etc.
+ self.thread_locals.set_for_current_thread()
while not self.stop_request.isSet(): # type: ignore[attr-defined]
self.poll_workunits(finished=False)
self.stop_request.wait(timeout=self.report_interval)
|
{"golden_diff": "diff --git a/src/python/pants/engine/streaming_workunit_handler.py b/src/python/pants/engine/streaming_workunit_handler.py\n--- a/src/python/pants/engine/streaming_workunit_handler.py\n+++ b/src/python/pants/engine/streaming_workunit_handler.py\n@@ -12,7 +12,7 @@\n from pants.base.specs import Specs\n from pants.engine.addresses import Addresses\n from pants.engine.fs import Digest, DigestContents, FileDigest, Snapshot\n-from pants.engine.internals import native_engine\n+from pants.engine.internals.native_engine import PyThreadLocals\n from pants.engine.internals.scheduler import SchedulerSession, Workunit\n from pants.engine.internals.selectors import Params\n from pants.engine.rules import Get, MultiGet, QueryRule, collect_rules, rule\n@@ -30,6 +30,24 @@\n # -----------------------------------------------------------------------------------------------\n \n \n+def thread_locals_get_for_current_thread() -> PyThreadLocals:\n+ \"\"\"Gets the engine's thread local state for the current thread.\n+\n+ In order to safely use StreamingWorkunitContext methods from additional threads,\n+ StreamingWorkunit plugins should propagate thread local state from the threads that they are\n+ initialized on to any additional threads that they spawn.\n+ \"\"\"\n+ return PyThreadLocals.get_for_current_thread()\n+\n+\n+def thread_locals_set_for_current_thread(thread_locals: PyThreadLocals) -> None:\n+ \"\"\"Sets the engine's thread local state for the current thread.\n+\n+ See `thread_locals_get`.\n+ \"\"\"\n+ thread_locals.set_for_current_thread()\n+\n+\n @dataclass(frozen=True)\n class TargetInfo:\n filename: str\n@@ -246,9 +264,9 @@\n self.block_until_complete = not allow_async_completion or any(\n callback.can_finish_async is False for callback in self.callbacks\n )\n- # Get the parent thread's logging destination. Note that this thread has not yet started\n+ # Get the parent thread's thread locals. Note that this thread has not yet started\n # as we are only in the constructor.\n- self.logging_destination = native_engine.stdio_thread_get_destination()\n+ self.thread_locals = PyThreadLocals.get_for_current_thread()\n \n def poll_workunits(self, *, finished: bool) -> None:\n workunits = self.scheduler.poll_workunits(self.max_workunit_verbosity)\n@@ -261,8 +279,9 @@\n )\n \n def run(self) -> None:\n- # First, set the thread's logging destination to the parent thread's, meaning the console.\n- native_engine.stdio_thread_set_destination(self.logging_destination)\n+ # First, set the thread's thread locals to the parent thread's in order to propagate the\n+ # console, workunit stores, etc.\n+ self.thread_locals.set_for_current_thread()\n while not self.stop_request.isSet(): # type: ignore[attr-defined]\n self.poll_workunits(finished=False)\n self.stop_request.wait(timeout=self.report_interval)\n", "issue": "Adjust plugin thread-local APIs to account for `WorkunitStore` state\nWhen `eager_fetch=False`, it's possible that a workunit's \"artifacts\" contain `Digest`s which haven't actually been fetched. When that's the case for a `Digest`, and a `StreamingWorkunit` plugin is using any of the [context methods which fetch files](https://github.com/pantsbuild/pants/blob/1d8205538a2932badcc1738fb1288600908b01e1/src/python/pants/engine/streaming_workunit_handler.py#L55-L69) from a background thread, they will encounter a:\r\n> A WorkunitStore has not been set for this thread.\r\n\r\n...error. 
That's because our existing `native_engine.stdio_thread_set_destination` statics only set the thread local `stdio` state, and not also our workunit state.\r\n\r\n----\r\n\r\nTo fix this, we should adjust the existing method to additionally set the workunit store. But we should also deprecate the existing method and add a new one with a more accurate name (replacing #12295).\n", "before_files": [{"content": "# Copyright 2019 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport logging\nimport threading\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Iterable, Sequence, Tuple\n\nfrom pants.base.specs import Specs\nfrom pants.engine.addresses import Addresses\nfrom pants.engine.fs import Digest, DigestContents, FileDigest, Snapshot\nfrom pants.engine.internals import native_engine\nfrom pants.engine.internals.scheduler import SchedulerSession, Workunit\nfrom pants.engine.internals.selectors import Params\nfrom pants.engine.rules import Get, MultiGet, QueryRule, collect_rules, rule\nfrom pants.engine.target import Targets\nfrom pants.engine.unions import UnionMembership, union\nfrom pants.goal.run_tracker import RunTracker\nfrom pants.option.options_bootstrapper import OptionsBootstrapper\nfrom pants.util.logging import LogLevel\n\nlogger = logging.getLogger(__name__)\n\n\n# -----------------------------------------------------------------------------------------------\n# Streaming workunits plugin API\n# -----------------------------------------------------------------------------------------------\n\n\n@dataclass(frozen=True)\nclass TargetInfo:\n filename: str\n\n\n@dataclass(frozen=True)\nclass ExpandedSpecs:\n targets: dict[str, list[TargetInfo]]\n\n\n@dataclass(frozen=True)\nclass StreamingWorkunitContext:\n _scheduler: SchedulerSession\n _run_tracker: RunTracker\n _specs: Specs\n _options_bootstrapper: OptionsBootstrapper\n\n @property\n def run_tracker(self) -> RunTracker:\n \"\"\"Returns the RunTracker for the current run of Pants.\"\"\"\n return self._run_tracker\n\n def single_file_digests_to_bytes(self, digests: Sequence[FileDigest]) -> list[bytes]:\n \"\"\"Return `bytes` for each `FileDigest`.\"\"\"\n return self._scheduler.single_file_digests_to_bytes(digests)\n\n def snapshots_to_file_contents(\n self, snapshots: Sequence[Snapshot]\n ) -> tuple[DigestContents, ...]:\n \"\"\"Given a sequence of Snapshot objects, return a tuple of DigestContents representing the\n files contained in those `Snapshot`s in sequence.\"\"\"\n return self._scheduler.snapshots_to_file_contents(snapshots)\n\n def ensure_remote_has_recursive(self, digests: Sequence[Digest | FileDigest]) -> None:\n \"\"\"Invoke the internal ensure_remote_has_recursive function, which ensures that a remote\n ByteStore, if it exists, has a copy of the files fingerprinted by each Digest.\"\"\"\n return self._scheduler.ensure_remote_has_recursive(digests)\n\n def get_metrics(self) -> dict[str, int]:\n \"\"\"Invoke the internal get_metrics function, which returns metrics for the Session.\"\"\"\n return self._scheduler.get_metrics()\n\n def get_observation_histograms(self) -> dict[str, Any]:\n \"\"\"Invoke the internal get_observation_histograms function, which serializes histograms\n generated from Pants-internal observation metrics observed during the current run of Pants.\n\n These metrics are useful for debugging Pants internals.\n \"\"\"\n return 
self._scheduler.get_observation_histograms()\n\n def get_expanded_specs(self) -> ExpandedSpecs:\n \"\"\"Return a dict containing the canonicalized addresses of the specs for this run, and what\n files they expand to.\"\"\"\n\n (unexpanded_addresses,) = self._scheduler.product_request(\n Addresses, [Params(self._specs, self._options_bootstrapper)]\n )\n\n expanded_targets = self._scheduler.product_request(\n Targets, [Params(Addresses([addr])) for addr in unexpanded_addresses]\n )\n targets_dict: dict[str, list[TargetInfo]] = {}\n for addr, targets in zip(unexpanded_addresses, expanded_targets):\n targets_dict[addr.spec] = [\n TargetInfo(\n filename=(\n tgt.address.filename if tgt.address.is_file_target else str(tgt.address)\n )\n )\n for tgt in targets\n ]\n return ExpandedSpecs(targets=targets_dict)\n\n\nclass WorkunitsCallback(ABC):\n @abstractmethod\n def __call__(\n self,\n *,\n started_workunits: tuple[Workunit, ...],\n completed_workunits: tuple[Workunit, ...],\n finished: bool,\n context: StreamingWorkunitContext,\n ) -> None:\n \"\"\"\n :started_workunits: Workunits that have started but not completed.\n :completed_workunits: Workunits that have completed.\n :finished: True when the last chunk of workunit data is reported to the callback.\n :context: A context providing access to functionality relevant to the run.\n \"\"\"\n\n @property\n @abstractmethod\n def can_finish_async(self) -> bool:\n \"\"\"Can this callback finish its work in the background after the Pants run has already\n completed?\n\n The main reason to `return False` is if your callback logs in its final call, when\n `finished=True`, as it may end up logging to `.pantsd.d/pants.log` instead of the console,\n which is harder for users to find. Otherwise, most callbacks should return `True` to avoid\n slowing down Pants from finishing the run.\n \"\"\"\n\n\n@dataclass(frozen=True)\nclass WorkunitsCallbackFactory:\n \"\"\"A wrapper around a callable that constructs WorkunitsCallbacks.\n\n NB: This extra wrapping is because subtyping is not supported in the return position of a\n rule. 
See #11354 for discussion of that limitation.\n \"\"\"\n\n callback_factory: Callable[[], WorkunitsCallback | None]\n\n\nclass WorkunitsCallbackFactories(Tuple[WorkunitsCallbackFactory, ...]):\n \"\"\"A list of registered factories for WorkunitsCallback instances.\"\"\"\n\n\n@union\nclass WorkunitsCallbackFactoryRequest:\n \"\"\"A request for a particular WorkunitsCallbackFactory.\"\"\"\n\n\n@rule\nasync def construct_workunits_callback_factories(\n union_membership: UnionMembership,\n) -> WorkunitsCallbackFactories:\n request_types = union_membership.get(WorkunitsCallbackFactoryRequest)\n workunit_callback_factories = await MultiGet(\n Get(WorkunitsCallbackFactory, WorkunitsCallbackFactoryRequest, request_type())\n for request_type in request_types\n )\n return WorkunitsCallbackFactories(workunit_callback_factories)\n\n\n# -----------------------------------------------------------------------------------------------\n# Streaming workunits handler\n# -----------------------------------------------------------------------------------------------\n\n\nclass StreamingWorkunitHandler:\n \"\"\"Periodically calls each registered WorkunitsCallback in a dedicated thread.\n\n This class should be used as a context manager.\n \"\"\"\n\n def __init__(\n self,\n scheduler: SchedulerSession,\n run_tracker: RunTracker,\n callbacks: Iterable[WorkunitsCallback],\n options_bootstrapper: OptionsBootstrapper,\n specs: Specs,\n report_interval_seconds: float,\n allow_async_completion: bool,\n max_workunit_verbosity: LogLevel,\n ) -> None:\n scheduler = scheduler.isolated_shallow_clone(\"streaming_workunit_handler_session\")\n self.callbacks = callbacks\n self.context = StreamingWorkunitContext(\n _scheduler=scheduler,\n _run_tracker=run_tracker,\n _specs=specs,\n _options_bootstrapper=options_bootstrapper,\n )\n self.thread_runner = (\n _InnerHandler(\n scheduler=scheduler,\n context=self.context,\n callbacks=self.callbacks,\n report_interval=report_interval_seconds,\n # TODO(10092) The max verbosity should be a per-client setting, rather than a global\n # setting.\n max_workunit_verbosity=max_workunit_verbosity,\n allow_async_completion=allow_async_completion,\n )\n if callbacks\n else None\n )\n\n def __enter__(self) -> None:\n if not self.thread_runner:\n return\n self.thread_runner.start()\n\n def __exit__(self, exc_type, exc_value, traceback) -> None:\n if not self.thread_runner:\n return\n self.thread_runner.end()\n if exc_type is not None:\n self.thread_runner.join()\n\n\nclass _InnerHandler(threading.Thread):\n def __init__(\n self,\n scheduler: Any,\n context: StreamingWorkunitContext,\n callbacks: Iterable[WorkunitsCallback],\n report_interval: float,\n max_workunit_verbosity: LogLevel,\n allow_async_completion: bool,\n ) -> None:\n super().__init__(daemon=True)\n self.scheduler = scheduler\n self.context = context\n self.stop_request = threading.Event()\n self.report_interval = report_interval\n self.callbacks = callbacks\n self.max_workunit_verbosity = max_workunit_verbosity\n # TODO: Have a thread per callback so that some callbacks can always finish async even\n # if others must be finished synchronously.\n self.block_until_complete = not allow_async_completion or any(\n callback.can_finish_async is False for callback in self.callbacks\n )\n # Get the parent thread's logging destination. 
Note that this thread has not yet started\n # as we are only in the constructor.\n self.logging_destination = native_engine.stdio_thread_get_destination()\n\n def poll_workunits(self, *, finished: bool) -> None:\n workunits = self.scheduler.poll_workunits(self.max_workunit_verbosity)\n for callback in self.callbacks:\n callback(\n started_workunits=workunits[\"started\"],\n completed_workunits=workunits[\"completed\"],\n finished=finished,\n context=self.context,\n )\n\n def run(self) -> None:\n # First, set the thread's logging destination to the parent thread's, meaning the console.\n native_engine.stdio_thread_set_destination(self.logging_destination)\n while not self.stop_request.isSet(): # type: ignore[attr-defined]\n self.poll_workunits(finished=False)\n self.stop_request.wait(timeout=self.report_interval)\n else:\n # Make one final call. Note that this may run after the Pants run has already\n # completed, depending on whether the thread was joined or not.\n self.poll_workunits(finished=True)\n\n def end(self) -> None:\n self.stop_request.set()\n if self.block_until_complete:\n logger.debug(\n \"Async completion is disabled: waiting for workunit callbacks to complete...\"\n )\n super().join()\n else:\n logger.debug(\n \"Async completion is enabled: workunit callbacks will complete in the background.\"\n )\n\n\ndef rules():\n return [\n QueryRule(WorkunitsCallbackFactories, (UnionMembership,)),\n QueryRule(Targets, (Addresses,)),\n QueryRule(Addresses, (Specs, OptionsBootstrapper)),\n *collect_rules(),\n ]\n", "path": "src/python/pants/engine/streaming_workunit_handler.py"}]}
| 3,806 | 639 |
gh_patches_debug_13649
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-4196
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remove: remove dvc.yaml and dvc.lock if they are empty
https://github.com/iterative/dvc/pull/4074#issuecomment-648097445
```
$ cat dvc.lock
{}
$ cat dvc.yaml
stages: {}
```
</issue>
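The patch at the end of this record resolves this by checking, after a stage entry is deleted, whether the parsed mapping still has entries and removing the file when it does not. A minimal standalone sketch of that idea (`dump_or_unlink` is a hypothetical helper, not code from the repository; `dump_yaml` is passed in rather than imported):

```
import os

def dump_or_unlink(path, data, dump_yaml):
    # Write the mapping back only while it still has entries; once the last
    # entry is gone, delete the now-empty dvc.yaml / dvc.lock instead of
    # leaving `{}` or `stages: {}` behind.
    if data:
        dump_yaml(path, data)
    elif os.path.exists(path):
        os.unlink(path)
```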
<code>
[start of dvc/dvcfile.py]
1 import collections
2 import contextlib
3 import logging
4 import os
5
6 from voluptuous import MultipleInvalid
7
8 from dvc.exceptions import DvcException
9 from dvc.stage import serialize
10 from dvc.stage.exceptions import (
11 StageFileBadNameError,
12 StageFileDoesNotExistError,
13 StageFileFormatError,
14 StageFileIsNotDvcFileError,
15 )
16 from dvc.stage.loader import SingleStageLoader, StageLoader
17 from dvc.utils import relpath
18 from dvc.utils.collections import apply_diff
19 from dvc.utils.yaml import dump_yaml, parse_yaml, parse_yaml_for_update
20
21 logger = logging.getLogger(__name__)
22
23 DVC_FILE = "Dvcfile"
24 DVC_FILE_SUFFIX = ".dvc"
25 PIPELINE_FILE = "dvc.yaml"
26 PIPELINE_LOCK = "dvc.lock"
27
28
29 class LockfileCorruptedError(DvcException):
30 pass
31
32
33 def is_valid_filename(path):
34 return path.endswith(DVC_FILE_SUFFIX) or os.path.basename(path) in [
35 DVC_FILE,
36 PIPELINE_FILE,
37 ]
38
39
40 def is_dvc_file(path):
41 return os.path.isfile(path) and (
42 is_valid_filename(path) or os.path.basename(path) == PIPELINE_LOCK
43 )
44
45
46 def check_dvc_filename(path):
47 if not is_valid_filename(path):
48 raise StageFileBadNameError(
49 "bad DVC-file name '{}'. DVC-files should be named "
50 "'Dvcfile' or have a '.dvc' suffix (e.g. '{}.dvc').".format(
51 relpath(path), os.path.basename(path)
52 )
53 )
54
55
56 class FileMixin:
57 SCHEMA = None
58
59 def __init__(self, repo, path, **kwargs):
60 self.repo = repo
61 self.path = path
62
63 def __repr__(self):
64 return "{}: {}".format(
65 self.__class__.__name__, relpath(self.path, self.repo.root_dir)
66 )
67
68 def __hash__(self):
69 return hash(self.path)
70
71 def __eq__(self, other):
72 return self.repo == other.repo and os.path.abspath(
73 self.path
74 ) == os.path.abspath(other.path)
75
76 def __str__(self):
77 return f"{self.__class__.__name__}: {self.relpath}"
78
79 @property
80 def relpath(self):
81 return relpath(self.path)
82
83 def exists(self):
84 return self.repo.tree.exists(self.path)
85
86 def _load(self):
87 # it raises the proper exceptions by priority:
88 # 1. when the file doesn't exists
89 # 2. filename is not a DVC-file
90 # 3. path doesn't represent a regular file
91 if not self.exists():
92 raise StageFileDoesNotExistError(self.path)
93 check_dvc_filename(self.path)
94 if not self.repo.tree.isfile(self.path):
95 raise StageFileIsNotDvcFileError(self.path)
96
97 with self.repo.tree.open(self.path) as fd:
98 stage_text = fd.read()
99 d = parse_yaml(stage_text, self.path)
100 self.validate(d, self.relpath)
101 return d, stage_text
102
103 @classmethod
104 def validate(cls, d, fname=None):
105 assert isinstance(cls.SCHEMA, collections.abc.Callable)
106 try:
107 cls.SCHEMA(d) # pylint: disable=not-callable
108 except MultipleInvalid as exc:
109 raise StageFileFormatError(f"'{fname}' format error: {exc}")
110
111 def remove(self, force=False): # pylint: disable=unused-argument
112 with contextlib.suppress(FileNotFoundError):
113 os.unlink(self.path)
114
115 def dump(self, stage, **kwargs):
116 raise NotImplementedError
117
118
119 class SingleStageFile(FileMixin):
120 from dvc.schema import COMPILED_SINGLE_STAGE_SCHEMA as SCHEMA
121
122 @property
123 def stage(self):
124 data, raw = self._load()
125 return SingleStageLoader.load_stage(self, data, raw)
126
127 @property
128 def stages(self):
129 data, raw = self._load()
130 return SingleStageLoader(self, data, raw)
131
132 def dump(self, stage, **kwargs):
133 """Dumps given stage appropriately in the dvcfile."""
134 from dvc.stage import PipelineStage
135
136 assert not isinstance(stage, PipelineStage)
137 check_dvc_filename(self.path)
138 logger.debug(
139 "Saving information to '{file}'.".format(file=relpath(self.path))
140 )
141 dump_yaml(self.path, serialize.to_single_stage_file(stage))
142 self.repo.scm.track_file(self.relpath)
143
144 def remove_stage(self, stage): # pylint: disable=unused-argument
145 self.remove()
146
147
148 class PipelineFile(FileMixin):
149 """Abstraction for pipelines file, .yaml + .lock combined."""
150
151 from dvc.schema import COMPILED_MULTI_STAGE_SCHEMA as SCHEMA
152
153 @property
154 def _lockfile(self):
155 return Lockfile(self.repo, os.path.splitext(self.path)[0] + ".lock")
156
157 def dump(
158 self, stage, update_pipeline=False, no_lock=False, **kwargs
159 ): # pylint: disable=arguments-differ
160 """Dumps given stage appropriately in the dvcfile."""
161 from dvc.stage import PipelineStage
162
163 assert isinstance(stage, PipelineStage)
164 check_dvc_filename(self.path)
165
166 if update_pipeline and not stage.is_data_source:
167 self._dump_pipeline_file(stage)
168
169 if not no_lock:
170 self._dump_lockfile(stage)
171
172 def _dump_lockfile(self, stage):
173 self._lockfile.dump(stage)
174
175 def _dump_pipeline_file(self, stage):
176 data = {}
177 if self.exists():
178 with open(self.path) as fd:
179 data = parse_yaml_for_update(fd.read(), self.path)
180 else:
181 logger.info("Creating '%s'", self.relpath)
182 open(self.path, "w+").close()
183
184 data["stages"] = data.get("stages", {})
185 stage_data = serialize.to_pipeline_file(stage)
186 existing_entry = stage.name in data["stages"]
187
188 action = "Modifying" if existing_entry else "Adding"
189 logger.info("%s stage '%s' in '%s'", action, stage.name, self.relpath)
190
191 if existing_entry:
192 orig_stage_data = data["stages"][stage.name]
193 if "meta" in orig_stage_data:
194 stage_data[stage.name]["meta"] = orig_stage_data["meta"]
195 apply_diff(stage_data[stage.name], orig_stage_data)
196 else:
197 data["stages"].update(stage_data)
198
199 dump_yaml(self.path, data)
200 self.repo.scm.track_file(self.relpath)
201
202 @property
203 def stage(self):
204 raise DvcException(
205 "PipelineFile has multiple stages. Please specify it's name."
206 )
207
208 @property
209 def stages(self):
210 data, _ = self._load()
211 lockfile_data = self._lockfile.load()
212 return StageLoader(self, data.get("stages", {}), lockfile_data)
213
214 def remove(self, force=False):
215 if not force:
216 logger.warning("Cannot remove pipeline file.")
217 return
218
219 super().remove()
220 self._lockfile.remove()
221
222 def remove_stage(self, stage):
223 self._lockfile.remove_stage(stage)
224 if not self.exists():
225 return
226
227 with open(self.path, "r") as f:
228 d = parse_yaml_for_update(f.read(), self.path)
229
230 self.validate(d, self.path)
231 if stage.name not in d.get("stages", {}):
232 return
233
234 logger.debug("Removing '%s' from '%s'", stage.name, self.path)
235 del d["stages"][stage.name]
236 dump_yaml(self.path, d)
237
238
239 class Lockfile(FileMixin):
240 from dvc.schema import COMPILED_LOCKFILE_SCHEMA as SCHEMA
241
242 def load(self):
243 if not self.exists():
244 return {}
245 with self.repo.tree.open(self.path) as fd:
246 data = parse_yaml(fd.read(), self.path)
247 try:
248 self.validate(data, fname=self.relpath)
249 except StageFileFormatError:
250 raise LockfileCorruptedError(
251 f"Lockfile '{self.relpath}' is corrupted."
252 )
253 return data
254
255 def dump(self, stage, **kwargs):
256 stage_data = serialize.to_lockfile(stage)
257 if not self.exists():
258 modified = True
259 logger.info("Generating lock file '%s'", self.relpath)
260 data = stage_data
261 open(self.path, "w+").close()
262 else:
263 with self.repo.tree.open(self.path, "r") as fd:
264 data = parse_yaml_for_update(fd.read(), self.path)
265 modified = data.get(stage.name, {}) != stage_data.get(
266 stage.name, {}
267 )
268 if modified:
269 logger.info("Updating lock file '%s'", self.relpath)
270 data.update(stage_data)
271 dump_yaml(self.path, data)
272 if modified:
273 self.repo.scm.track_file(self.relpath)
274
275 def remove_stage(self, stage):
276 if not self.exists():
277 return
278
279 with open(self.path) as f:
280 d = parse_yaml_for_update(f.read(), self.path)
281 self.validate(d, self.path)
282
283 if stage.name not in d:
284 return
285
286 logger.debug("Removing '%s' from '%s'", stage.name, self.path)
287 del d[stage.name]
288
289 dump_yaml(self.path, d)
290
291
292 class Dvcfile:
293 def __new__(cls, repo, path, **kwargs):
294 assert path
295 assert repo
296
297 _, ext = os.path.splitext(path)
298 if ext in [".yaml", ".yml"]:
299 return PipelineFile(repo, path, **kwargs)
300 # fallback to single stage file for better error messages
301 return SingleStageFile(repo, path, **kwargs)
302
[end of dvc/dvcfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/dvcfile.py b/dvc/dvcfile.py
--- a/dvc/dvcfile.py
+++ b/dvc/dvcfile.py
@@ -233,7 +233,11 @@
logger.debug("Removing '%s' from '%s'", stage.name, self.path)
del d["stages"][stage.name]
- dump_yaml(self.path, d)
+
+ if d["stages"]:
+ dump_yaml(self.path, d)
+ else:
+ super().remove()
class Lockfile(FileMixin):
@@ -286,7 +290,10 @@
logger.debug("Removing '%s' from '%s'", stage.name, self.path)
del d[stage.name]
- dump_yaml(self.path, d)
+ if d:
+ dump_yaml(self.path, d)
+ else:
+ self.remove()
class Dvcfile:
|
{"golden_diff": "diff --git a/dvc/dvcfile.py b/dvc/dvcfile.py\n--- a/dvc/dvcfile.py\n+++ b/dvc/dvcfile.py\n@@ -233,7 +233,11 @@\n \n logger.debug(\"Removing '%s' from '%s'\", stage.name, self.path)\n del d[\"stages\"][stage.name]\n- dump_yaml(self.path, d)\n+\n+ if d[\"stages\"]:\n+ dump_yaml(self.path, d)\n+ else:\n+ super().remove()\n \n \n class Lockfile(FileMixin):\n@@ -286,7 +290,10 @@\n logger.debug(\"Removing '%s' from '%s'\", stage.name, self.path)\n del d[stage.name]\n \n- dump_yaml(self.path, d)\n+ if d:\n+ dump_yaml(self.path, d)\n+ else:\n+ self.remove()\n \n \n class Dvcfile:\n", "issue": "remove: remove dvc.yaml and dvc.lock if they are empty\nhttps://github.com/iterative/dvc/pull/4074#issuecomment-648097445\r\n\r\n```\r\n$ cat dvc.lock\r\n{} \r\n$ cat dvc.yaml\r\nstages: {} \r\n```\n", "before_files": [{"content": "import collections\nimport contextlib\nimport logging\nimport os\n\nfrom voluptuous import MultipleInvalid\n\nfrom dvc.exceptions import DvcException\nfrom dvc.stage import serialize\nfrom dvc.stage.exceptions import (\n StageFileBadNameError,\n StageFileDoesNotExistError,\n StageFileFormatError,\n StageFileIsNotDvcFileError,\n)\nfrom dvc.stage.loader import SingleStageLoader, StageLoader\nfrom dvc.utils import relpath\nfrom dvc.utils.collections import apply_diff\nfrom dvc.utils.yaml import dump_yaml, parse_yaml, parse_yaml_for_update\n\nlogger = logging.getLogger(__name__)\n\nDVC_FILE = \"Dvcfile\"\nDVC_FILE_SUFFIX = \".dvc\"\nPIPELINE_FILE = \"dvc.yaml\"\nPIPELINE_LOCK = \"dvc.lock\"\n\n\nclass LockfileCorruptedError(DvcException):\n pass\n\n\ndef is_valid_filename(path):\n return path.endswith(DVC_FILE_SUFFIX) or os.path.basename(path) in [\n DVC_FILE,\n PIPELINE_FILE,\n ]\n\n\ndef is_dvc_file(path):\n return os.path.isfile(path) and (\n is_valid_filename(path) or os.path.basename(path) == PIPELINE_LOCK\n )\n\n\ndef check_dvc_filename(path):\n if not is_valid_filename(path):\n raise StageFileBadNameError(\n \"bad DVC-file name '{}'. DVC-files should be named \"\n \"'Dvcfile' or have a '.dvc' suffix (e.g. '{}.dvc').\".format(\n relpath(path), os.path.basename(path)\n )\n )\n\n\nclass FileMixin:\n SCHEMA = None\n\n def __init__(self, repo, path, **kwargs):\n self.repo = repo\n self.path = path\n\n def __repr__(self):\n return \"{}: {}\".format(\n self.__class__.__name__, relpath(self.path, self.repo.root_dir)\n )\n\n def __hash__(self):\n return hash(self.path)\n\n def __eq__(self, other):\n return self.repo == other.repo and os.path.abspath(\n self.path\n ) == os.path.abspath(other.path)\n\n def __str__(self):\n return f\"{self.__class__.__name__}: {self.relpath}\"\n\n @property\n def relpath(self):\n return relpath(self.path)\n\n def exists(self):\n return self.repo.tree.exists(self.path)\n\n def _load(self):\n # it raises the proper exceptions by priority:\n # 1. when the file doesn't exists\n # 2. filename is not a DVC-file\n # 3. 
path doesn't represent a regular file\n if not self.exists():\n raise StageFileDoesNotExistError(self.path)\n check_dvc_filename(self.path)\n if not self.repo.tree.isfile(self.path):\n raise StageFileIsNotDvcFileError(self.path)\n\n with self.repo.tree.open(self.path) as fd:\n stage_text = fd.read()\n d = parse_yaml(stage_text, self.path)\n self.validate(d, self.relpath)\n return d, stage_text\n\n @classmethod\n def validate(cls, d, fname=None):\n assert isinstance(cls.SCHEMA, collections.abc.Callable)\n try:\n cls.SCHEMA(d) # pylint: disable=not-callable\n except MultipleInvalid as exc:\n raise StageFileFormatError(f\"'{fname}' format error: {exc}\")\n\n def remove(self, force=False): # pylint: disable=unused-argument\n with contextlib.suppress(FileNotFoundError):\n os.unlink(self.path)\n\n def dump(self, stage, **kwargs):\n raise NotImplementedError\n\n\nclass SingleStageFile(FileMixin):\n from dvc.schema import COMPILED_SINGLE_STAGE_SCHEMA as SCHEMA\n\n @property\n def stage(self):\n data, raw = self._load()\n return SingleStageLoader.load_stage(self, data, raw)\n\n @property\n def stages(self):\n data, raw = self._load()\n return SingleStageLoader(self, data, raw)\n\n def dump(self, stage, **kwargs):\n \"\"\"Dumps given stage appropriately in the dvcfile.\"\"\"\n from dvc.stage import PipelineStage\n\n assert not isinstance(stage, PipelineStage)\n check_dvc_filename(self.path)\n logger.debug(\n \"Saving information to '{file}'.\".format(file=relpath(self.path))\n )\n dump_yaml(self.path, serialize.to_single_stage_file(stage))\n self.repo.scm.track_file(self.relpath)\n\n def remove_stage(self, stage): # pylint: disable=unused-argument\n self.remove()\n\n\nclass PipelineFile(FileMixin):\n \"\"\"Abstraction for pipelines file, .yaml + .lock combined.\"\"\"\n\n from dvc.schema import COMPILED_MULTI_STAGE_SCHEMA as SCHEMA\n\n @property\n def _lockfile(self):\n return Lockfile(self.repo, os.path.splitext(self.path)[0] + \".lock\")\n\n def dump(\n self, stage, update_pipeline=False, no_lock=False, **kwargs\n ): # pylint: disable=arguments-differ\n \"\"\"Dumps given stage appropriately in the dvcfile.\"\"\"\n from dvc.stage import PipelineStage\n\n assert isinstance(stage, PipelineStage)\n check_dvc_filename(self.path)\n\n if update_pipeline and not stage.is_data_source:\n self._dump_pipeline_file(stage)\n\n if not no_lock:\n self._dump_lockfile(stage)\n\n def _dump_lockfile(self, stage):\n self._lockfile.dump(stage)\n\n def _dump_pipeline_file(self, stage):\n data = {}\n if self.exists():\n with open(self.path) as fd:\n data = parse_yaml_for_update(fd.read(), self.path)\n else:\n logger.info(\"Creating '%s'\", self.relpath)\n open(self.path, \"w+\").close()\n\n data[\"stages\"] = data.get(\"stages\", {})\n stage_data = serialize.to_pipeline_file(stage)\n existing_entry = stage.name in data[\"stages\"]\n\n action = \"Modifying\" if existing_entry else \"Adding\"\n logger.info(\"%s stage '%s' in '%s'\", action, stage.name, self.relpath)\n\n if existing_entry:\n orig_stage_data = data[\"stages\"][stage.name]\n if \"meta\" in orig_stage_data:\n stage_data[stage.name][\"meta\"] = orig_stage_data[\"meta\"]\n apply_diff(stage_data[stage.name], orig_stage_data)\n else:\n data[\"stages\"].update(stage_data)\n\n dump_yaml(self.path, data)\n self.repo.scm.track_file(self.relpath)\n\n @property\n def stage(self):\n raise DvcException(\n \"PipelineFile has multiple stages. 
Please specify it's name.\"\n )\n\n @property\n def stages(self):\n data, _ = self._load()\n lockfile_data = self._lockfile.load()\n return StageLoader(self, data.get(\"stages\", {}), lockfile_data)\n\n def remove(self, force=False):\n if not force:\n logger.warning(\"Cannot remove pipeline file.\")\n return\n\n super().remove()\n self._lockfile.remove()\n\n def remove_stage(self, stage):\n self._lockfile.remove_stage(stage)\n if not self.exists():\n return\n\n with open(self.path, \"r\") as f:\n d = parse_yaml_for_update(f.read(), self.path)\n\n self.validate(d, self.path)\n if stage.name not in d.get(\"stages\", {}):\n return\n\n logger.debug(\"Removing '%s' from '%s'\", stage.name, self.path)\n del d[\"stages\"][stage.name]\n dump_yaml(self.path, d)\n\n\nclass Lockfile(FileMixin):\n from dvc.schema import COMPILED_LOCKFILE_SCHEMA as SCHEMA\n\n def load(self):\n if not self.exists():\n return {}\n with self.repo.tree.open(self.path) as fd:\n data = parse_yaml(fd.read(), self.path)\n try:\n self.validate(data, fname=self.relpath)\n except StageFileFormatError:\n raise LockfileCorruptedError(\n f\"Lockfile '{self.relpath}' is corrupted.\"\n )\n return data\n\n def dump(self, stage, **kwargs):\n stage_data = serialize.to_lockfile(stage)\n if not self.exists():\n modified = True\n logger.info(\"Generating lock file '%s'\", self.relpath)\n data = stage_data\n open(self.path, \"w+\").close()\n else:\n with self.repo.tree.open(self.path, \"r\") as fd:\n data = parse_yaml_for_update(fd.read(), self.path)\n modified = data.get(stage.name, {}) != stage_data.get(\n stage.name, {}\n )\n if modified:\n logger.info(\"Updating lock file '%s'\", self.relpath)\n data.update(stage_data)\n dump_yaml(self.path, data)\n if modified:\n self.repo.scm.track_file(self.relpath)\n\n def remove_stage(self, stage):\n if not self.exists():\n return\n\n with open(self.path) as f:\n d = parse_yaml_for_update(f.read(), self.path)\n self.validate(d, self.path)\n\n if stage.name not in d:\n return\n\n logger.debug(\"Removing '%s' from '%s'\", stage.name, self.path)\n del d[stage.name]\n\n dump_yaml(self.path, d)\n\n\nclass Dvcfile:\n def __new__(cls, repo, path, **kwargs):\n assert path\n assert repo\n\n _, ext = os.path.splitext(path)\n if ext in [\".yaml\", \".yml\"]:\n return PipelineFile(repo, path, **kwargs)\n # fallback to single stage file for better error messages\n return SingleStageFile(repo, path, **kwargs)\n", "path": "dvc/dvcfile.py"}]}
| 3,517 | 203 |
gh_patches_debug_20002
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-2510
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Disk queues don't preserve Request class
When a Request subclass (e.g. FormRequest) is sent to a disk queue, a bare Request is what you get back.
This is inconvenient for scrapy-splash: Splash requests all have the Splash URL as request.url, but for logging it is nice to display the requested URL, not only the Splash URL. In scrapy-splash this is implemented by changing `__repr__` in a Request subclass, but that works only while the request is kept in memory.
</issue>
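The patch at the end of this record handles this by recording the request's class path when serializing and resolving it again with `scrapy.utils.misc.load_object` on load. A small standalone sketch of that round-trip (`class_path` and `class_from_path` are hypothetical helpers, not Scrapy APIs):

```
from scrapy.http import Request
from scrapy.utils.misc import load_object  # resolves a dotted path to an object

def class_path(request):
    # Only record the class when it differs from the plain Request.
    if type(request) is not Request:
        return request.__module__ + '.' + request.__class__.__name__
    return None

def class_from_path(path):
    # Fall back to Request when no subclass was recorded (e.g. old queue entries).
    return load_object(path) if path else Request
```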
<code>
[start of scrapy/utils/reqser.py]
1 """
2 Helper functions for serializing (and deserializing) requests.
3 """
4 import six
5
6 from scrapy.http import Request
7 from scrapy.utils.python import to_unicode, to_native_str
8
9
10 def request_to_dict(request, spider=None):
11 """Convert Request object to a dict.
12
13 If a spider is given, it will try to find out the name of the spider method
14 used in the callback and store that as the callback.
15 """
16 cb = request.callback
17 if callable(cb):
18 cb = _find_method(spider, cb)
19 eb = request.errback
20 if callable(eb):
21 eb = _find_method(spider, eb)
22 d = {
23 'url': to_unicode(request.url), # urls should be safe (safe_string_url)
24 'callback': cb,
25 'errback': eb,
26 'method': request.method,
27 'headers': dict(request.headers),
28 'body': request.body,
29 'cookies': request.cookies,
30 'meta': request.meta,
31 '_encoding': request._encoding,
32 'priority': request.priority,
33 'dont_filter': request.dont_filter,
34 }
35 return d
36
37
38 def request_from_dict(d, spider=None):
39 """Create Request object from a dict.
40
41 If a spider is given, it will try to resolve the callbacks looking at the
42 spider for methods with the same name.
43 """
44 cb = d['callback']
45 if cb and spider:
46 cb = _get_method(spider, cb)
47 eb = d['errback']
48 if eb and spider:
49 eb = _get_method(spider, eb)
50 return Request(
51 url=to_native_str(d['url']),
52 callback=cb,
53 errback=eb,
54 method=d['method'],
55 headers=d['headers'],
56 body=d['body'],
57 cookies=d['cookies'],
58 meta=d['meta'],
59 encoding=d['_encoding'],
60 priority=d['priority'],
61 dont_filter=d['dont_filter'])
62
63
64 def _find_method(obj, func):
65 if obj:
66 try:
67 func_self = six.get_method_self(func)
68 except AttributeError: # func has no __self__
69 pass
70 else:
71 if func_self is obj:
72 return six.get_method_function(func).__name__
73 raise ValueError("Function %s is not a method of: %s" % (func, obj))
74
75
76 def _get_method(obj, name):
77 name = str(name)
78 try:
79 return getattr(obj, name)
80 except AttributeError:
81 raise ValueError("Method %r not found in: %s" % (name, obj))
82
[end of scrapy/utils/reqser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/utils/reqser.py b/scrapy/utils/reqser.py
--- a/scrapy/utils/reqser.py
+++ b/scrapy/utils/reqser.py
@@ -5,6 +5,7 @@
from scrapy.http import Request
from scrapy.utils.python import to_unicode, to_native_str
+from scrapy.utils.misc import load_object
def request_to_dict(request, spider=None):
@@ -32,6 +33,8 @@
'priority': request.priority,
'dont_filter': request.dont_filter,
}
+ if type(request) is not Request:
+ d['_class'] = request.__module__ + '.' + request.__class__.__name__
return d
@@ -47,7 +50,8 @@
eb = d['errback']
if eb and spider:
eb = _get_method(spider, eb)
- return Request(
+ request_cls = load_object(d['_class']) if '_class' in d else Request
+ return request_cls(
url=to_native_str(d['url']),
callback=cb,
errback=eb,
|
{"golden_diff": "diff --git a/scrapy/utils/reqser.py b/scrapy/utils/reqser.py\n--- a/scrapy/utils/reqser.py\n+++ b/scrapy/utils/reqser.py\n@@ -5,6 +5,7 @@\n \n from scrapy.http import Request\n from scrapy.utils.python import to_unicode, to_native_str\n+from scrapy.utils.misc import load_object\n \n \n def request_to_dict(request, spider=None):\n@@ -32,6 +33,8 @@\n 'priority': request.priority,\n 'dont_filter': request.dont_filter,\n }\n+ if type(request) is not Request:\n+ d['_class'] = request.__module__ + '.' + request.__class__.__name__\n return d\n \n \n@@ -47,7 +50,8 @@\n eb = d['errback']\n if eb and spider:\n eb = _get_method(spider, eb)\n- return Request(\n+ request_cls = load_object(d['_class']) if '_class' in d else Request\n+ return request_cls(\n url=to_native_str(d['url']),\n callback=cb,\n errback=eb,\n", "issue": "Disk queues don't preserve Request class\nWhen a Request subclass (e.g. FormRequest) is sent to a disk queue a bare Request is what you get back. \n\nThis is inconvenient for scrapy-splash: Splash requests all have Splash URL as request.url, but for logging it is nice to display the requested URL, not only Splash URL. In scrapy-splash this is implemented by changing `__repr__` in a Request subclass, but it works only when request is kept in memory.\n\n", "before_files": [{"content": "\"\"\"\nHelper functions for serializing (and deserializing) requests.\n\"\"\"\nimport six\n\nfrom scrapy.http import Request\nfrom scrapy.utils.python import to_unicode, to_native_str\n\n\ndef request_to_dict(request, spider=None):\n \"\"\"Convert Request object to a dict.\n\n If a spider is given, it will try to find out the name of the spider method\n used in the callback and store that as the callback.\n \"\"\"\n cb = request.callback\n if callable(cb):\n cb = _find_method(spider, cb)\n eb = request.errback\n if callable(eb):\n eb = _find_method(spider, eb)\n d = {\n 'url': to_unicode(request.url), # urls should be safe (safe_string_url)\n 'callback': cb,\n 'errback': eb,\n 'method': request.method,\n 'headers': dict(request.headers),\n 'body': request.body,\n 'cookies': request.cookies,\n 'meta': request.meta,\n '_encoding': request._encoding,\n 'priority': request.priority,\n 'dont_filter': request.dont_filter,\n }\n return d\n\n\ndef request_from_dict(d, spider=None):\n \"\"\"Create Request object from a dict.\n\n If a spider is given, it will try to resolve the callbacks looking at the\n spider for methods with the same name.\n \"\"\"\n cb = d['callback']\n if cb and spider:\n cb = _get_method(spider, cb)\n eb = d['errback']\n if eb and spider:\n eb = _get_method(spider, eb)\n return Request(\n url=to_native_str(d['url']),\n callback=cb,\n errback=eb,\n method=d['method'],\n headers=d['headers'],\n body=d['body'],\n cookies=d['cookies'],\n meta=d['meta'],\n encoding=d['_encoding'],\n priority=d['priority'],\n dont_filter=d['dont_filter'])\n\n\ndef _find_method(obj, func):\n if obj:\n try:\n func_self = six.get_method_self(func)\n except AttributeError: # func has no __self__\n pass\n else:\n if func_self is obj:\n return six.get_method_function(func).__name__\n raise ValueError(\"Function %s is not a method of: %s\" % (func, obj))\n\n\ndef _get_method(obj, name):\n name = str(name)\n try:\n return getattr(obj, name)\n except AttributeError:\n raise ValueError(\"Method %r not found in: %s\" % (name, obj))\n", "path": "scrapy/utils/reqser.py"}]}
| 1,342 | 243 |
gh_patches_debug_16259
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-270
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update sparsemax to use tf.where V2
As described in #250 and temporarily patched in #251, sparsemax has one instance of tf.where that needs the broadcasting dimensions changed to match NumPy and TF2 style.
</issue>
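Concretely, TF2-style `tf.where` broadcasts like NumPy, so the boolean mask needs an explicit trailing axis before it can be paired with the `[obs, dims]` operands, which is what the patch in this record adds via `tf.expand_dims`. A standalone illustration with made-up shapes:

```
import tensorflow as tf

obs, dims = 3, 4
p = tf.zeros([obs, dims])
nan_fill = tf.fill([obs, dims], float("nan"))
invalid = tf.constant([True, False, True])  # shape [obs]

# tf.compat.v1.where accepted a rank-1 condition for rank-2 operands, but
# TF2 tf.where broadcasts like NumPy, so give the mask a trailing axis:
p_safe = tf.where(tf.expand_dims(invalid, axis=-1), nan_fill, p)
print(p_safe.shape)  # (3, 4): rows 0 and 2 become NaN, row 1 keeps its zeros
```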
<code>
[start of tensorflow_addons/activations/sparsemax.py]
1 # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 import tensorflow as tf
21
22 from tensorflow_addons.utils import keras_utils
23
24
25 @tf.function
26 @keras_utils.register_keras_custom_object
27 def sparsemax(logits, axis=-1, name=None):
28 """Sparsemax activation function [1].
29
30 For each batch `i` and class `j` we have
31 $$sparsemax[i, j] = max(logits[i, j] - tau(logits[i, :]), 0)$$
32
33 [1]: https://arxiv.org/abs/1602.02068
34
35 Args:
36 logits: Input tensor.
37 axis: Integer, axis along which the sparsemax operation is applied.
38 name: A name for the operation (optional).
39 Returns:
40 Tensor, output of sparsemax transformation. Has the same type and
41 shape as `logits`.
42 Raises:
43 ValueError: In case `dim(logits) == 1`.
44 """
45 logits = tf.convert_to_tensor(logits, name="logits")
46
47 # We need its original shape for shape inference.
48 shape = logits.get_shape()
49 rank = shape.rank
50 is_last_axis = (axis == -1) or (axis == rank - 1)
51
52 if is_last_axis:
53 output = _compute_2d_sparsemax(logits, name=name)
54 output.set_shape(shape)
55 return output
56
57 # If dim is not the last dimension, we have to do a transpose so that we can
58 # still perform softmax on its last dimension.
59
60 # Swap logits' dimension of dim and its last dimension.
61 rank_op = tf.rank(logits)
62 axis_norm = axis % rank
63 logits = _swap_axis(logits, axis_norm, tf.math.subtract(rank_op, 1))
64
65 # Do the actual softmax on its last dimension.
66 output = _compute_2d_sparsemax(logits)
67 output = _swap_axis(
68 output, axis_norm, tf.math.subtract(rank_op, 1), name=name)
69
70 # Make shape inference work since transpose may erase its static shape.
71 output.set_shape(shape)
72 return output
73
74
75 def _swap_axis(logits, dim_index, last_index, **kwargs):
76 return tf.transpose(
77 logits,
78 tf.concat([
79 tf.range(dim_index), [last_index],
80 tf.range(dim_index + 1, last_index), [dim_index]
81 ], 0), **kwargs)
82
83
84 @tf.function
85 def _compute_2d_sparsemax(logits, name=None):
86 """Performs the sparsemax operation when axis=-1."""
87 shape_op = tf.shape(logits)
88 obs = tf.math.reduce_prod(shape_op[:-1])
89 dims = shape_op[-1]
90
91 # In the paper, they call the logits z.
92 # The mean(logits) can be substracted from logits to make the algorithm
93 # more numerically stable. the instability in this algorithm comes mostly
94 # from the z_cumsum. Substacting the mean will cause z_cumsum to be close
95 # to zero. However, in practise the numerical instability issues are very
96 # minor and substacting the mean causes extra issues with inf and nan
97 # input.
98 # Reshape to [obs, dims] as it is almost free and means the remanining
99 # code doesn't need to worry about the rank.
100 z = tf.reshape(logits, [obs, dims])
101
102 # sort z
103 z_sorted, _ = tf.nn.top_k(z, k=dims)
104
105 # calculate k(z)
106 z_cumsum = tf.math.cumsum(z_sorted, axis=-1)
107 k = tf.range(1, tf.cast(dims, logits.dtype) + 1, dtype=logits.dtype)
108 z_check = 1 + k * z_sorted > z_cumsum
109 # because the z_check vector is always [1,1,...1,0,0,...0] finding the
110 # (index + 1) of the last `1` is the same as just summing the number of 1.
111 k_z = tf.math.reduce_sum(tf.cast(z_check, tf.int32), axis=-1)
112
113 # calculate tau(z)
114 # If there are inf values or all values are -inf, the k_z will be zero,
115 # this is mathematically invalid and will also cause the gather_nd to fail.
116 # Prevent this issue for now by setting k_z = 1 if k_z = 0, this is then
117 # fixed later (see p_safe) by returning p = nan. This results in the same
118 # behavior as softmax.
119 k_z_safe = tf.math.maximum(k_z, 1)
120 indices = tf.stack(
121 [tf.range(0, obs), tf.reshape(k_z_safe, [-1]) - 1], axis=1)
122 tau_sum = tf.gather_nd(z_cumsum, indices)
123 tau_z = (tau_sum - 1) / tf.cast(k_z, logits.dtype)
124
125 # calculate p
126 p = tf.math.maximum(
127 tf.cast(0, logits.dtype), z - tf.expand_dims(tau_z, -1))
128 # If k_z = 0 or if z = nan, then the input is invalid
129 # TODO: Adjust dimension order for TF2 broadcasting
130 p_safe = tf.compat.v1.where(
131 tf.math.logical_or(
132 tf.math.equal(k_z, 0), tf.math.is_nan(z_cumsum[:, -1])),
133 tf.fill([obs, dims], tf.cast(float("nan"), logits.dtype)), p)
134
135 # Reshape back to original size
136 p_safe = tf.reshape(p_safe, shape_op, name=name)
137 return p_safe
138
[end of tensorflow_addons/activations/sparsemax.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tensorflow_addons/activations/sparsemax.py b/tensorflow_addons/activations/sparsemax.py
--- a/tensorflow_addons/activations/sparsemax.py
+++ b/tensorflow_addons/activations/sparsemax.py
@@ -126,11 +126,12 @@
p = tf.math.maximum(
tf.cast(0, logits.dtype), z - tf.expand_dims(tau_z, -1))
# If k_z = 0 or if z = nan, then the input is invalid
- # TODO: Adjust dimension order for TF2 broadcasting
- p_safe = tf.compat.v1.where(
- tf.math.logical_or(
- tf.math.equal(k_z, 0), tf.math.is_nan(z_cumsum[:, -1])),
- tf.fill([obs, dims], tf.cast(float("nan"), logits.dtype)), p)
+ p_safe = tf.where(
+ tf.expand_dims(
+ tf.math.logical_or(
+ tf.math.equal(k_z, 0), tf.math.is_nan(z_cumsum[:, -1])),
+ axis=-1), tf.fill([obs, dims], tf.cast(float("nan"),
+ logits.dtype)), p)
# Reshape back to original size
p_safe = tf.reshape(p_safe, shape_op, name=name)
|
{"golden_diff": "diff --git a/tensorflow_addons/activations/sparsemax.py b/tensorflow_addons/activations/sparsemax.py\n--- a/tensorflow_addons/activations/sparsemax.py\n+++ b/tensorflow_addons/activations/sparsemax.py\n@@ -126,11 +126,12 @@\n p = tf.math.maximum(\n tf.cast(0, logits.dtype), z - tf.expand_dims(tau_z, -1))\n # If k_z = 0 or if z = nan, then the input is invalid\n- # TODO: Adjust dimension order for TF2 broadcasting\n- p_safe = tf.compat.v1.where(\n- tf.math.logical_or(\n- tf.math.equal(k_z, 0), tf.math.is_nan(z_cumsum[:, -1])),\n- tf.fill([obs, dims], tf.cast(float(\"nan\"), logits.dtype)), p)\n+ p_safe = tf.where(\n+ tf.expand_dims(\n+ tf.math.logical_or(\n+ tf.math.equal(k_z, 0), tf.math.is_nan(z_cumsum[:, -1])),\n+ axis=-1), tf.fill([obs, dims], tf.cast(float(\"nan\"),\n+ logits.dtype)), p)\n \n # Reshape back to original size\n p_safe = tf.reshape(p_safe, shape_op, name=name)\n", "issue": "Update sparsemax to use tf.where V2\nAs described in #250 and temporarily patched in #251 sparsemax has one instance of tf.where that needs the broadcasting dimensions changed to match numpy and TF2 style.\n", "before_files": [{"content": "# Copyright 2016 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom tensorflow_addons.utils import keras_utils\n\n\[email protected]\n@keras_utils.register_keras_custom_object\ndef sparsemax(logits, axis=-1, name=None):\n \"\"\"Sparsemax activation function [1].\n\n For each batch `i` and class `j` we have\n $$sparsemax[i, j] = max(logits[i, j] - tau(logits[i, :]), 0)$$\n\n [1]: https://arxiv.org/abs/1602.02068\n\n Args:\n logits: Input tensor.\n axis: Integer, axis along which the sparsemax operation is applied.\n name: A name for the operation (optional).\n Returns:\n Tensor, output of sparsemax transformation. 
Has the same type and\n shape as `logits`.\n Raises:\n ValueError: In case `dim(logits) == 1`.\n \"\"\"\n logits = tf.convert_to_tensor(logits, name=\"logits\")\n\n # We need its original shape for shape inference.\n shape = logits.get_shape()\n rank = shape.rank\n is_last_axis = (axis == -1) or (axis == rank - 1)\n\n if is_last_axis:\n output = _compute_2d_sparsemax(logits, name=name)\n output.set_shape(shape)\n return output\n\n # If dim is not the last dimension, we have to do a transpose so that we can\n # still perform softmax on its last dimension.\n\n # Swap logits' dimension of dim and its last dimension.\n rank_op = tf.rank(logits)\n axis_norm = axis % rank\n logits = _swap_axis(logits, axis_norm, tf.math.subtract(rank_op, 1))\n\n # Do the actual softmax on its last dimension.\n output = _compute_2d_sparsemax(logits)\n output = _swap_axis(\n output, axis_norm, tf.math.subtract(rank_op, 1), name=name)\n\n # Make shape inference work since transpose may erase its static shape.\n output.set_shape(shape)\n return output\n\n\ndef _swap_axis(logits, dim_index, last_index, **kwargs):\n return tf.transpose(\n logits,\n tf.concat([\n tf.range(dim_index), [last_index],\n tf.range(dim_index + 1, last_index), [dim_index]\n ], 0), **kwargs)\n\n\[email protected]\ndef _compute_2d_sparsemax(logits, name=None):\n \"\"\"Performs the sparsemax operation when axis=-1.\"\"\"\n shape_op = tf.shape(logits)\n obs = tf.math.reduce_prod(shape_op[:-1])\n dims = shape_op[-1]\n\n # In the paper, they call the logits z.\n # The mean(logits) can be substracted from logits to make the algorithm\n # more numerically stable. the instability in this algorithm comes mostly\n # from the z_cumsum. Substacting the mean will cause z_cumsum to be close\n # to zero. However, in practise the numerical instability issues are very\n # minor and substacting the mean causes extra issues with inf and nan\n # input.\n # Reshape to [obs, dims] as it is almost free and means the remanining\n # code doesn't need to worry about the rank.\n z = tf.reshape(logits, [obs, dims])\n\n # sort z\n z_sorted, _ = tf.nn.top_k(z, k=dims)\n\n # calculate k(z)\n z_cumsum = tf.math.cumsum(z_sorted, axis=-1)\n k = tf.range(1, tf.cast(dims, logits.dtype) + 1, dtype=logits.dtype)\n z_check = 1 + k * z_sorted > z_cumsum\n # because the z_check vector is always [1,1,...1,0,0,...0] finding the\n # (index + 1) of the last `1` is the same as just summing the number of 1.\n k_z = tf.math.reduce_sum(tf.cast(z_check, tf.int32), axis=-1)\n\n # calculate tau(z)\n # If there are inf values or all values are -inf, the k_z will be zero,\n # this is mathematically invalid and will also cause the gather_nd to fail.\n # Prevent this issue for now by setting k_z = 1 if k_z = 0, this is then\n # fixed later (see p_safe) by returning p = nan. 
This results in the same\n # behavior as softmax.\n k_z_safe = tf.math.maximum(k_z, 1)\n indices = tf.stack(\n [tf.range(0, obs), tf.reshape(k_z_safe, [-1]) - 1], axis=1)\n tau_sum = tf.gather_nd(z_cumsum, indices)\n tau_z = (tau_sum - 1) / tf.cast(k_z, logits.dtype)\n\n # calculate p\n p = tf.math.maximum(\n tf.cast(0, logits.dtype), z - tf.expand_dims(tau_z, -1))\n # If k_z = 0 or if z = nan, then the input is invalid\n # TODO: Adjust dimension order for TF2 broadcasting\n p_safe = tf.compat.v1.where(\n tf.math.logical_or(\n tf.math.equal(k_z, 0), tf.math.is_nan(z_cumsum[:, -1])),\n tf.fill([obs, dims], tf.cast(float(\"nan\"), logits.dtype)), p)\n\n # Reshape back to original size\n p_safe = tf.reshape(p_safe, shape_op, name=name)\n return p_safe\n", "path": "tensorflow_addons/activations/sparsemax.py"}]}
| 2,269 | 291 |
gh_patches_debug_5264
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-308
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Asgi request headers are not retrieved properly
Environment:
Python: python3.8
fastapi==0.63.0
opentelemetry-api==0.16b1
opentelemetry-sdk==0.16b1
opentelemetry-instrumentation-fastapi==0.16b1
opentelemetry-exporter-google-cloud==0.16b1
opentelemetry-tools-google-cloud==0.16b1
When using `CloudTraceFormatPropagator` for [GCP](https://github.com/GoogleCloudPlatform/opentelemetry-operations-python), `X-Cloud-Trace-Context` header cannot be retrieved.
**Steps to reproduce**
Describe exactly how to reproduce the error. Include a code sample if applicable.
```
# server.py
import uvicorn
from fastapi import FastAPI, Request
from opentelemetry import trace
from opentelemetry.propagators import set_global_textmap
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.tools import cloud_trace_propagator
set_global_textmap(cloud_trace_propagator.CloudTraceFormatPropagator())
app = FastAPI()
tracer = trace.get_tracer("test")
FastAPIInstrumentor.instrument_app(app)
@app.get("/trace")
async def test(r: Request):
with tracer.start_as_current_span("test") as span:
trace_id = span.get_span_context().trace_id
print(f"{trace_id:32x}") # should print trace ID from `X-Cloud-Trace-Context` header value
uvicorn.run(app)
```
```
# client.py
import requests
r = requests.Session()
r.headers.setdefault("X-Cloud-Trace-Context",
"f3ef5c2ede256aa77491057e600eca11/15104302039794794507;o=1")
r.get("http://localhost:8000/trace")
```
**What is the expected behavior?**
The printed value should be `f3ef5c2ede256aa77491057e600eca11`, based on the header sent
**What is the actual behavior?**
A newly generated value every time `/trace` is called
**Additional context**
`X-Cloud-Trace-Context` header value is not retrieved properly in `CloudTraceFormatPropagator`
</issue>
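The underlying cause is that ASGI scopes store header names as lower-cased byte strings, so a getter that compares them against the propagator's mixed-case key (`X-Cloud-Trace-Context` here) never matches; the patch in this record lower-cases the key before comparing and also guards against a missing `headers` entry. A standalone illustration (`get_header` is a hypothetical helper, not the instrumentation's API):

```
scope = {
    "headers": [
        (b"x-cloud-trace-context",
         b"f3ef5c2ede256aa77491057e600eca11/15104302039794794507;o=1"),
    ]
}

def get_header(scope, key):
    headers = scope.get("headers")
    if not headers:
        return None
    key = key.lower()  # ASGI header names are always lower case
    values = [v.decode("utf8") for k, v in headers if k.decode("utf8") == key]
    return values or None

print(get_header(scope, "X-Cloud-Trace-Context"))  # now matches the lower-cased key
```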
<code>
[start of instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 The opentelemetry-instrumentation-asgi package provides an ASGI middleware that can be used
17 on any ASGI framework (such as Django-channels / Quart) to track requests
18 timing through OpenTelemetry.
19 """
20
21 import typing
22 import urllib
23 from functools import wraps
24 from typing import Tuple
25
26 from asgiref.compatibility import guarantee_single_callable
27
28 from opentelemetry import context, propagators, trace
29 from opentelemetry.instrumentation.asgi.version import __version__ # noqa
30 from opentelemetry.instrumentation.utils import http_status_to_status_code
31 from opentelemetry.trace.propagation.textmap import DictGetter
32 from opentelemetry.trace.status import Status, StatusCode
33
34
35 class CarrierGetter(DictGetter):
36 def get(
37 self, carrier: dict, key: str
38 ) -> typing.Optional[typing.List[str]]:
39 """Getter implementation to retrieve a HTTP header value from the ASGI
40 scope.
41
42 Args:
43 carrier: ASGI scope object
44 key: header name in scope
45 Returns:
46 A list with a single string with the header value if it exists,
47 else None.
48 """
49 headers = carrier.get("headers")
50 decoded = [
51 _value.decode("utf8")
52 for (_key, _value) in headers
53 if _key.decode("utf8") == key
54 ]
55 if not decoded:
56 return None
57 return decoded
58
59
60 carrier_getter = CarrierGetter()
61
62
63 def collect_request_attributes(scope):
64 """Collects HTTP request attributes from the ASGI scope and returns a
65 dictionary to be used as span creation attributes."""
66 server_host, port, http_url = get_host_port_url_tuple(scope)
67 query_string = scope.get("query_string")
68 if query_string and http_url:
69 if isinstance(query_string, bytes):
70 query_string = query_string.decode("utf8")
71 http_url = http_url + ("?" + urllib.parse.unquote(query_string))
72
73 result = {
74 "http.scheme": scope.get("scheme"),
75 "http.host": server_host,
76 "net.host.port": port,
77 "http.flavor": scope.get("http_version"),
78 "http.target": scope.get("path"),
79 "http.url": http_url,
80 }
81 http_method = scope.get("method")
82 if http_method:
83 result["http.method"] = http_method
84
85 http_host_value_list = carrier_getter.get(scope, "host")
86 if http_host_value_list:
87 result["http.server_name"] = ",".join(http_host_value_list)
88 http_user_agent = carrier_getter.get(scope, "user-agent")
89 if http_user_agent:
90 result["http.user_agent"] = http_user_agent[0]
91
92 if "client" in scope and scope["client"] is not None:
93 result["net.peer.ip"] = scope.get("client")[0]
94 result["net.peer.port"] = scope.get("client")[1]
95
96 # remove None values
97 result = {k: v for k, v in result.items() if v is not None}
98
99 return result
100
101
102 def get_host_port_url_tuple(scope):
103 """Returns (host, port, full_url) tuple.
104 """
105 server = scope.get("server") or ["0.0.0.0", 80]
106 port = server[1]
107 server_host = server[0] + (":" + str(port) if port != 80 else "")
108 full_path = scope.get("root_path", "") + scope.get("path", "")
109 http_url = scope.get("scheme", "http") + "://" + server_host + full_path
110 return server_host, port, http_url
111
112
113 def set_status_code(span, status_code):
114 """Adds HTTP response attributes to span using the status_code argument."""
115 if not span.is_recording():
116 return
117 try:
118 status_code = int(status_code)
119 except ValueError:
120 span.set_status(
121 Status(
122 StatusCode.ERROR,
123 "Non-integer HTTP status: " + repr(status_code),
124 )
125 )
126 else:
127 span.set_attribute("http.status_code", status_code)
128 span.set_status(Status(http_status_to_status_code(status_code)))
129
130
131 def get_default_span_details(scope: dict) -> Tuple[str, dict]:
132 """Default implementation for span_details_callback
133
134 Args:
135 scope: the asgi scope dictionary
136
137 Returns:
138 a tuple of the span, and any attributes to attach to the
139 span.
140 """
141 method_or_path = scope.get("method") or scope.get("path")
142
143 return method_or_path, {}
144
145
146 class OpenTelemetryMiddleware:
147 """The ASGI application middleware.
148
149 This class is an ASGI middleware that starts and annotates spans for any
150 requests it is invoked with.
151
152 Args:
153 app: The ASGI application callable to forward requests to.
154 span_details_callback: Callback which should return a string
155 and a tuple, representing the desired span name and a
156 dictionary with any additional span attributes to set.
157 Optional: Defaults to get_default_span_details.
158 """
159
160 def __init__(self, app, excluded_urls=None, span_details_callback=None):
161 self.app = guarantee_single_callable(app)
162 self.tracer = trace.get_tracer(__name__, __version__)
163 self.span_details_callback = (
164 span_details_callback or get_default_span_details
165 )
166 self.excluded_urls = excluded_urls
167
168 async def __call__(self, scope, receive, send):
169 """The ASGI application
170
171 Args:
172 scope: A ASGI environment.
173 receive: An awaitable callable yielding dictionaries
174 send: An awaitable callable taking a single dictionary as argument.
175 """
176 if scope["type"] not in ("http", "websocket"):
177 return await self.app(scope, receive, send)
178
179 _, _, url = get_host_port_url_tuple(scope)
180 if self.excluded_urls and self.excluded_urls.url_disabled(url):
181 return await self.app(scope, receive, send)
182
183 token = context.attach(propagators.extract(carrier_getter, scope))
184 span_name, additional_attributes = self.span_details_callback(scope)
185
186 try:
187 with self.tracer.start_as_current_span(
188 span_name + " asgi", kind=trace.SpanKind.SERVER,
189 ) as span:
190 if span.is_recording():
191 attributes = collect_request_attributes(scope)
192 attributes.update(additional_attributes)
193 for key, value in attributes.items():
194 span.set_attribute(key, value)
195
196 @wraps(receive)
197 async def wrapped_receive():
198 with self.tracer.start_as_current_span(
199 span_name + " asgi." + scope["type"] + ".receive"
200 ) as receive_span:
201 message = await receive()
202 if receive_span.is_recording():
203 if message["type"] == "websocket.receive":
204 set_status_code(receive_span, 200)
205 receive_span.set_attribute("type", message["type"])
206 return message
207
208 @wraps(send)
209 async def wrapped_send(message):
210 with self.tracer.start_as_current_span(
211 span_name + " asgi." + scope["type"] + ".send"
212 ) as send_span:
213 if send_span.is_recording():
214 if message["type"] == "http.response.start":
215 status_code = message["status"]
216 set_status_code(send_span, status_code)
217 elif message["type"] == "websocket.send":
218 set_status_code(send_span, 200)
219 send_span.set_attribute("type", message["type"])
220 await send(message)
221
222 await self.app(scope, wrapped_receive, wrapped_send)
223 finally:
224 context.detach(token)
225
[end of instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py b/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py
@@ -47,6 +47,11 @@
else None.
"""
headers = carrier.get("headers")
+ if not headers:
+ return None
+
+ # asgi header keys are in lower case
+ key = key.lower()
decoded = [
_value.decode("utf8")
for (_key, _value) in headers
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py b/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py\n@@ -47,6 +47,11 @@\n else None.\n \"\"\"\n headers = carrier.get(\"headers\")\n+ if not headers:\n+ return None\n+\n+ # asgi header keys are in lower case\n+ key = key.lower()\n decoded = [\n _value.decode(\"utf8\")\n for (_key, _value) in headers\n", "issue": "Asgi request headers are not retrieved properly\nEnvironment: \r\nPython: python3.8\r\n\r\nfastapi==0.63.0\r\nopentelemetry-api==0.16b1\r\nopentelemetry-sdk==0.16b1\r\nopentelemetry-instrumentation-fastapi==0.16b1\r\nopentelemetry-exporter-google-cloud==0.16b1\r\nopentelemetry-tools-google-cloud==0.16b1\r\n\r\nWhen using `CloudTraceFormatPropagator` for [GCP](https://github.com/GoogleCloudPlatform/opentelemetry-operations-python), `X-Cloud-Trace-Context` header cannot be retrieved.\r\n\r\n**Steps to reproduce**\r\nDescribe exactly how to reproduce the error. Include a code sample if applicable.\r\n\r\n```\r\n# server.py\r\nimport uvicorn\r\nfrom fastapi import FastAPI, Request\r\nfrom opentelemetry import trace\r\nfrom opentelemetry.propagators import set_global_textmap\r\nfrom opentelemetry.instrumentation.fastapi import FastAPIInstrumentor\r\nfrom opentelemetry.sdk.trace import TracerProvider\r\nfrom opentelemetry.tools import cloud_trace_propagator\r\n\r\nset_global_textmap(cloud_trace_propagator.CloudTraceFormatPropagator())\r\n\r\napp = FastAPI()\r\n\r\ntracer = trace.get_tracer(\"test\")\r\nFastAPIInstrumentor.instrument_app(app)\r\n\r\[email protected](\"/trace\")\r\nasync def test(r: Request):\r\n with tracer.start_as_current_span(\"test\") as span:\r\n trace_id = span.get_span_context().trace_id\r\n print(f\"{trace_id:32x}\") # should print trace ID from `X-Cloud-Trace-Context` header value\r\n\r\nuvicorn.run(app)\r\n```\r\n\r\n```\r\n# client.py\r\nimport requests\r\n\r\nr = requests.Session()\r\nr.headers.setdefault(\"X-Cloud-Trace-Context\",\r\n \"f3ef5c2ede256aa77491057e600eca11/15104302039794794507;o=1\")\r\nr.get(\"http://localhost:8000/trace\")\r\n```\r\n\r\n**What is the expected behavior?**\r\nPrinted value should be `f3ef5c2ede256aa77491057e600eca11` based from the header sent\r\n\r\n**What is the actual behavior?**\r\nA newly generated value everything `/trace` is called\r\n\r\n**Additional context**\r\n`X-Cloud-Trace-Context` header value is not retrieved properly in `CloudTraceFormatPropagator`\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThe opentelemetry-instrumentation-asgi package provides an ASGI middleware that can be used\non any ASGI framework (such as Django-channels 
/ Quart) to track requests\ntiming through OpenTelemetry.\n\"\"\"\n\nimport typing\nimport urllib\nfrom functools import wraps\nfrom typing import Tuple\n\nfrom asgiref.compatibility import guarantee_single_callable\n\nfrom opentelemetry import context, propagators, trace\nfrom opentelemetry.instrumentation.asgi.version import __version__ # noqa\nfrom opentelemetry.instrumentation.utils import http_status_to_status_code\nfrom opentelemetry.trace.propagation.textmap import DictGetter\nfrom opentelemetry.trace.status import Status, StatusCode\n\n\nclass CarrierGetter(DictGetter):\n def get(\n self, carrier: dict, key: str\n ) -> typing.Optional[typing.List[str]]:\n \"\"\"Getter implementation to retrieve a HTTP header value from the ASGI\n scope.\n\n Args:\n carrier: ASGI scope object\n key: header name in scope\n Returns:\n A list with a single string with the header value if it exists,\n else None.\n \"\"\"\n headers = carrier.get(\"headers\")\n decoded = [\n _value.decode(\"utf8\")\n for (_key, _value) in headers\n if _key.decode(\"utf8\") == key\n ]\n if not decoded:\n return None\n return decoded\n\n\ncarrier_getter = CarrierGetter()\n\n\ndef collect_request_attributes(scope):\n \"\"\"Collects HTTP request attributes from the ASGI scope and returns a\n dictionary to be used as span creation attributes.\"\"\"\n server_host, port, http_url = get_host_port_url_tuple(scope)\n query_string = scope.get(\"query_string\")\n if query_string and http_url:\n if isinstance(query_string, bytes):\n query_string = query_string.decode(\"utf8\")\n http_url = http_url + (\"?\" + urllib.parse.unquote(query_string))\n\n result = {\n \"http.scheme\": scope.get(\"scheme\"),\n \"http.host\": server_host,\n \"net.host.port\": port,\n \"http.flavor\": scope.get(\"http_version\"),\n \"http.target\": scope.get(\"path\"),\n \"http.url\": http_url,\n }\n http_method = scope.get(\"method\")\n if http_method:\n result[\"http.method\"] = http_method\n\n http_host_value_list = carrier_getter.get(scope, \"host\")\n if http_host_value_list:\n result[\"http.server_name\"] = \",\".join(http_host_value_list)\n http_user_agent = carrier_getter.get(scope, \"user-agent\")\n if http_user_agent:\n result[\"http.user_agent\"] = http_user_agent[0]\n\n if \"client\" in scope and scope[\"client\"] is not None:\n result[\"net.peer.ip\"] = scope.get(\"client\")[0]\n result[\"net.peer.port\"] = scope.get(\"client\")[1]\n\n # remove None values\n result = {k: v for k, v in result.items() if v is not None}\n\n return result\n\n\ndef get_host_port_url_tuple(scope):\n \"\"\"Returns (host, port, full_url) tuple.\n \"\"\"\n server = scope.get(\"server\") or [\"0.0.0.0\", 80]\n port = server[1]\n server_host = server[0] + (\":\" + str(port) if port != 80 else \"\")\n full_path = scope.get(\"root_path\", \"\") + scope.get(\"path\", \"\")\n http_url = scope.get(\"scheme\", \"http\") + \"://\" + server_host + full_path\n return server_host, port, http_url\n\n\ndef set_status_code(span, status_code):\n \"\"\"Adds HTTP response attributes to span using the status_code argument.\"\"\"\n if not span.is_recording():\n return\n try:\n status_code = int(status_code)\n except ValueError:\n span.set_status(\n Status(\n StatusCode.ERROR,\n \"Non-integer HTTP status: \" + repr(status_code),\n )\n )\n else:\n span.set_attribute(\"http.status_code\", status_code)\n span.set_status(Status(http_status_to_status_code(status_code)))\n\n\ndef get_default_span_details(scope: dict) -> Tuple[str, dict]:\n \"\"\"Default implementation for span_details_callback\n\n 
Args:\n scope: the asgi scope dictionary\n\n Returns:\n a tuple of the span, and any attributes to attach to the\n span.\n \"\"\"\n method_or_path = scope.get(\"method\") or scope.get(\"path\")\n\n return method_or_path, {}\n\n\nclass OpenTelemetryMiddleware:\n \"\"\"The ASGI application middleware.\n\n This class is an ASGI middleware that starts and annotates spans for any\n requests it is invoked with.\n\n Args:\n app: The ASGI application callable to forward requests to.\n span_details_callback: Callback which should return a string\n and a tuple, representing the desired span name and a\n dictionary with any additional span attributes to set.\n Optional: Defaults to get_default_span_details.\n \"\"\"\n\n def __init__(self, app, excluded_urls=None, span_details_callback=None):\n self.app = guarantee_single_callable(app)\n self.tracer = trace.get_tracer(__name__, __version__)\n self.span_details_callback = (\n span_details_callback or get_default_span_details\n )\n self.excluded_urls = excluded_urls\n\n async def __call__(self, scope, receive, send):\n \"\"\"The ASGI application\n\n Args:\n scope: A ASGI environment.\n receive: An awaitable callable yielding dictionaries\n send: An awaitable callable taking a single dictionary as argument.\n \"\"\"\n if scope[\"type\"] not in (\"http\", \"websocket\"):\n return await self.app(scope, receive, send)\n\n _, _, url = get_host_port_url_tuple(scope)\n if self.excluded_urls and self.excluded_urls.url_disabled(url):\n return await self.app(scope, receive, send)\n\n token = context.attach(propagators.extract(carrier_getter, scope))\n span_name, additional_attributes = self.span_details_callback(scope)\n\n try:\n with self.tracer.start_as_current_span(\n span_name + \" asgi\", kind=trace.SpanKind.SERVER,\n ) as span:\n if span.is_recording():\n attributes = collect_request_attributes(scope)\n attributes.update(additional_attributes)\n for key, value in attributes.items():\n span.set_attribute(key, value)\n\n @wraps(receive)\n async def wrapped_receive():\n with self.tracer.start_as_current_span(\n span_name + \" asgi.\" + scope[\"type\"] + \".receive\"\n ) as receive_span:\n message = await receive()\n if receive_span.is_recording():\n if message[\"type\"] == \"websocket.receive\":\n set_status_code(receive_span, 200)\n receive_span.set_attribute(\"type\", message[\"type\"])\n return message\n\n @wraps(send)\n async def wrapped_send(message):\n with self.tracer.start_as_current_span(\n span_name + \" asgi.\" + scope[\"type\"] + \".send\"\n ) as send_span:\n if send_span.is_recording():\n if message[\"type\"] == \"http.response.start\":\n status_code = message[\"status\"]\n set_status_code(send_span, status_code)\n elif message[\"type\"] == \"websocket.send\":\n set_status_code(send_span, 200)\n send_span.set_attribute(\"type\", message[\"type\"])\n await send(message)\n\n await self.app(scope, wrapped_receive, wrapped_send)\n finally:\n context.detach(token)\n", "path": "instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py"}]}
| 3,432 | 189 |
gh_patches_debug_7433
|
rasdani/github-patches
|
git_diff
|
SciTools__cartopy-439
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: Geostationary example bug
```
python cartopy/docs/source/examples/geostationary.py
Traceback (most recent call last):
File "/net/home/h05/cpelley/git/cartopy/docs/source/examples/geostationary.py", line 60, in <module>
main()
File "/net/home/h05/cpelley/git/cartopy/docs/source/examples/geostationary.py", line 54, in main
img, crs, extent, origin = geos_image()
File "/net/home/h05/cpelley/git/cartopy/docs/source/examples/geostationary.py", line 43, in geos_image
img_handle = BytesIO(urllib2.urlopen(url).read())
NameError: global name 'urllib2' is not defined
```
</issue>
<code>
[start of lib/cartopy/examples/geostationary.py]
1 """
2 Reprojecting images from a Geostationary projection
3 ---------------------------------------------------
4
5 This example demonstrates Cartopy's ability to project images into the desired
6 projection on-the-fly. The image itself is retrieved from a URL and is loaded
7 directly into memory without storing it intermediately into a file. It
8 represents pre-processed data from Moderate-Resolution Imaging
9 Spectroradiometer (MODIS) which has been put into an image in the data's
10 native Geostationary coordinate system - it is then projected by cartopy
11 into a global Miller map.
12
13 """
14 __tags__ = ["Scalar data"]
15 try:
16 from urllib2 import urlopen
17 except ImportError:
18 from urllib.request import urlopen
19 from io import BytesIO
20
21 import cartopy.crs as ccrs
22 import matplotlib.pyplot as plt
23
24
25 def geos_image():
26 """
27 Return a specific MODIS image by retrieving it from a github gist URL.
28
29 Returns
30 -------
31 img : numpy array
32 The pixels of the image in a numpy array.
33 img_proj : cartopy CRS
34 The rectangular coordinate system of the image.
35 img_extent : tuple of floats
36 The extent of the image ``(x0, y0, x1, y1)`` referenced in
37 the ``img_proj`` coordinate system.
38 origin : str
39 The origin of the image to be passed through to matplotlib's imshow.
40
41 """
42 url = ('https://gist.github.com/pelson/5871263/raw/'
43 'EIDA50_201211061300_clip2.png')
44 img_handle = BytesIO(urllib2.urlopen(url).read())
45 img = plt.imread(img_handle)
46 img_proj = ccrs.Geostationary(satellite_height=35786000)
47 img_extent = (-5500000, 5500000, -5500000, 5500000)
48 return img, img_proj, img_extent, 'upper'
49
50
51 def main():
52 ax = plt.axes(projection=ccrs.Miller())
53 ax.coastlines()
54 ax.set_global()
55 img, crs, extent, origin = geos_image()
56 plt.imshow(img, transform=crs, extent=extent, origin=origin, cmap='gray')
57 plt.show()
58
59
60 if __name__ == '__main__':
61 main()
62
[end of lib/cartopy/examples/geostationary.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/cartopy/examples/geostationary.py b/lib/cartopy/examples/geostationary.py
--- a/lib/cartopy/examples/geostationary.py
+++ b/lib/cartopy/examples/geostationary.py
@@ -41,7 +41,7 @@
"""
url = ('https://gist.github.com/pelson/5871263/raw/'
'EIDA50_201211061300_clip2.png')
- img_handle = BytesIO(urllib2.urlopen(url).read())
+ img_handle = BytesIO(urlopen(url).read())
img = plt.imread(img_handle)
img_proj = ccrs.Geostationary(satellite_height=35786000)
img_extent = (-5500000, 5500000, -5500000, 5500000)
|
{"golden_diff": "diff --git a/lib/cartopy/examples/geostationary.py b/lib/cartopy/examples/geostationary.py\n--- a/lib/cartopy/examples/geostationary.py\n+++ b/lib/cartopy/examples/geostationary.py\n@@ -41,7 +41,7 @@\n \"\"\"\n url = ('https://gist.github.com/pelson/5871263/raw/'\n 'EIDA50_201211061300_clip2.png')\n- img_handle = BytesIO(urllib2.urlopen(url).read())\n+ img_handle = BytesIO(urlopen(url).read())\n img = plt.imread(img_handle)\n img_proj = ccrs.Geostationary(satellite_height=35786000)\n img_extent = (-5500000, 5500000, -5500000, 5500000)\n", "issue": "BUG: Geostationary example bug\n```\npython cartopy/docs/source/examples/geostationary.py\nTraceback (most recent call last):\n File \"/net/home/h05/cpelley/git/cartopy/docs/source/examples/geostationary.py\", line 60, in <module>\n main()\n File \"/net/home/h05/cpelley/git/cartopy/docs/source/examples/geostationary.py\", line 54, in main\n img, crs, extent, origin = geos_image()\n File \"/net/home/h05/cpelley/git/cartopy/docs/source/examples/geostationary.py\", line 43, in geos_image\n img_handle = BytesIO(urllib2.urlopen(url).read())\nNameError: global name 'urllib2' is not defined\n```\n\n", "before_files": [{"content": "\"\"\"\nReprojecting images from a Geostationary projection\n---------------------------------------------------\n\nThis example demonstrates Cartopy's ability to project images into the desired\nprojection on-the-fly. The image itself is retrieved from a URL and is loaded\ndirectly into memory without storing it intermediately into a file. It\nrepresents pre-processed data from Moderate-Resolution Imaging\nSpectroradiometer (MODIS) which has been put into an image in the data's\nnative Geostationary coordinate system - it is then projected by cartopy\ninto a global Miller map.\n\n\"\"\"\n__tags__ = [\"Scalar data\"]\ntry:\n from urllib2 import urlopen\nexcept ImportError:\n from urllib.request import urlopen\nfrom io import BytesIO\n\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\n\n\ndef geos_image():\n \"\"\"\n Return a specific MODIS image by retrieving it from a github gist URL.\n\n Returns\n -------\n img : numpy array\n The pixels of the image in a numpy array.\n img_proj : cartopy CRS\n The rectangular coordinate system of the image.\n img_extent : tuple of floats\n The extent of the image ``(x0, y0, x1, y1)`` referenced in\n the ``img_proj`` coordinate system.\n origin : str\n The origin of the image to be passed through to matplotlib's imshow.\n\n \"\"\"\n url = ('https://gist.github.com/pelson/5871263/raw/'\n 'EIDA50_201211061300_clip2.png')\n img_handle = BytesIO(urllib2.urlopen(url).read())\n img = plt.imread(img_handle)\n img_proj = ccrs.Geostationary(satellite_height=35786000)\n img_extent = (-5500000, 5500000, -5500000, 5500000)\n return img, img_proj, img_extent, 'upper'\n\n\ndef main():\n ax = plt.axes(projection=ccrs.Miller())\n ax.coastlines()\n ax.set_global()\n img, crs, extent, origin = geos_image()\n plt.imshow(img, transform=crs, extent=extent, origin=origin, cmap='gray')\n plt.show()\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/cartopy/examples/geostationary.py"}]}
| 1,355 | 207 |